Artificial insemination
Artificial insemination is the deliberate introduction of sperm into a female's cervix or uterine cavity for the purpose of achieving a pregnancy through in vivo fertilization by means other than sexual intercourse. It is a fertility treatment for humans, and is a common practice in animal breeding, including dairy cattle (see frozen bovine semen) and pigs.
Artificial insemination may employ assisted reproductive technology, sperm donation and animal husbandry techniques. Artificial insemination techniques available include intracervical insemination (ICI) and intrauterine insemination (IUI). Where gametes from a third party are used, the procedure may be known as 'assisted insemination'.
Humans
History
The first recorded case of artificial insemination dates to 1790, when John Hunter helped impregnate a linen draper's wife. The first reported case of artificial insemination by donor occurred in 1884: William H. Pancoast, a professor in Philadelphia, took sperm from his "best looking" student to inseminate an anesthetized woman without her knowledge. The case was reported 25 years later in a medical journal. The sperm bank was developed in Iowa starting in the 1950s in research conducted by University of Iowa medical school researchers Jerome K. Sherman and Raymond Bunge.
In the United Kingdom, the British obstetrician Mary Barton founded one of the first fertility clinics to offer donor insemination in the 1930s, with her husband Bertold Wiesner fathering hundreds of offspring.
In the 1980s, direct intraperitoneal insemination (DIPI) was occasionally used, where doctors injected sperm into the lower abdomen through a surgical hole or incision, with the intention of letting them find the oocyte at the ovary or after entering the genital tract through the ostium of the fallopian tube.
Patients and gamete donors
There are multiple methods used to obtain the semen necessary for artificial insemination, and the sperm used in artificial insemination may be provided by the recipient patient's partner or by a sperm donor whose identity is known or unknown.
Artificial insemination techniques were originally used mainly to assist heterosexual couples to conceive where they were having difficulties, but with the advancement of techniques in this field, notably ICSI, the use of artificial insemination for such couples has largely been rendered unnecessary. However, there are still reasons why a couple would seek to use artificial insemination using the male partner's sperm. In the case of such couples, before artificial insemination is turned to as the solution, doctors will require an examination of both the male and female involved in order to rule out any physical hindrance that is preventing them from naturally achieving a pregnancy, including any factors which prevent the couple from having satisfactory sexual intercourse. The couple is also given a fertility test to determine the motility, number, and viability of the male's sperm and the success of the female's ovulation. From these tests, the doctor may or may not recommend a form of artificial insemination. The results of these investigations may, for example, show that the woman's immune system is rejecting her partner's sperm as invading molecules. Women who have issues with the cervix – such as cervical scarring, cervical blockage from endometriosis, or thick cervical mucus – may also benefit from artificial insemination, since the sperm must pass through the cervix to result in fertilization.
Nowadays artificial insemination in humans is mainly used as a substitute for sexual intercourse for women without a male partner who wish to have their own children—such as women in lesbian relationships and single women—and thus where sperm from a sperm donor is used.
Barriers for patients and donors
Some countries have laws which restrict and regulate who can donate sperm and who is able to receive artificial insemination. Some women who live in a jurisdiction which does not permit artificial insemination in the circumstance in which she finds herself may travel to another jurisdiction which permits it. Compared with natural insemination, artificial insemination can be more expensive and more invasive, and may require professional assistance.
Preparations
Timing is critical, as the window and opportunity for fertilization is little more than twelve hours from the release of the ovum. To increase the chance of success, the woman's menstrual cycle is closely observed, often using ovulation kits, ultrasounds or blood tests, as well as charting basal body temperature over time, noting the color and texture of the vaginal mucus, and the softness of her cervix. To improve the success rate of artificial insemination, drugs to create a stimulated cycle may be used, but the use of such drugs also results in an increased chance of a multiple birth.
Sperm can be provided fresh or washed. Washed sperm is required in certain situations. The motile sperm count is measured before and after concentration. Sperm from a sperm bank will be frozen and quarantined for a period, and the donor will be tested before and after production of the sample to ensure that he does not carry a transmissible disease. Sperm from a sperm bank will also be suspended in a semen extender which assists with freezing, storing and shipping.
If sperm is provided by a private donor, either directly or through a sperm agency, it is usually supplied fresh, not frozen, and it will not be quarantined. Donor sperm provided in this way may be given directly to the recipient woman or her partner, or it may be transported in specially insulated containers. Some donors have their own freezing apparatus to freeze and store their sperm.
Techniques
Semen used is either fresh, raw, or frozen. Where donor sperm is supplied by a sperm bank, it will always be quarantined and frozen, and will need to be thawed before use. The sperm is ideally donated after two or three days of abstinence, without lubrication as the lubricant can inhibit the sperm motility. When an ovum is released, semen is introduced into the woman's vagina, uterus or cervix, depending on the method being used.
Sperm is occasionally inserted twice within a 'treatment cycle'.
Intracervical
Intracervical insemination (ICI) is the method of artificial insemination which most closely mimics the natural ejaculation of semen by the penis into the vagina during sexual intercourse. It is painless and is the simplest, easiest and most common method of artificial insemination involving the introduction of unwashed or raw semen into the vagina at the entrance to the cervix, usually by means of a needleless syringe. The vagina acts as a filter to separate out the sperm from other chemicals in the ejaculate, as with intercourse, so that only sperm pass through the cervix on their way to the uterus.
ICI is commonly used in the home, by self-insemination and practitioner insemination. Sperm used in ICI inseminations does not have to be 'washed' to remove seminal fluid so that raw semen from a private donor may be used. Semen supplied by a sperm bank prepared for ICI or IUI use is suitable for ICI. ICI is a popular method of insemination amongst single and lesbian women purchasing donor sperm on-line.
Although ICI is the simplest method of artificial insemination, a meta-analysis has shown no difference in live birth rates compared with IUI. It may also be performed privately by the woman, or, if she has a partner, in the presence of her partner, or by her partner. ICI was previously used in many fertility centers as a method of insemination, but its popularity in this context has waned as other, more reliable methods of insemination have become available.
During ICI, air is expelled from a needleless syringe which is then filled with semen which has been allowed to liquify. A specially-designed syringe, wider and with a more rounded end, may be used for this purpose. Any further enclosed air is removed by gently pressing the plunger forward. The woman lies on her back and the syringe is inserted into the vagina. Care is taken when inserting the syringe so that the tip is placed as close to the entrance of the cervix as possible. A vaginal speculum may be used for this purpose and a catheter may be attached to the tip of the syringe to ensure delivery of the semen as close to the entrance to the cervix as possible. The plunger is then slowly pushed forward and the semen in the syringe is gently emptied deep into the vagina. It is important that the syringe is emptied slowly for safety and for the best results, bearing in mind that the purpose of the procedure is to replicate as closely as possible a natural deposit of the semen in the vagina. The syringe (and catheter if used) may be left in place for several minutes before removal. The woman can bring herself to orgasm so that the cervix 'dips down' into the pool of semen, again replicating closely vaginal intercourse, and this may improve the success rate.
Following insemination, fertile sperm will swim through the cervix into the uterus and from there to the fallopian tubes in a natural way as if the sperm had been deposited in the vagina through intercourse. The woman is therefore advised to lie still for about half-an-hour to assist conception.
One insemination during a cycle is usually sufficient. Additional inseminations during the same cycle may not improve the chances of a pregnancy.
Ordinary sexual lubricants should not be used in the process, but special fertility or 'sperm-friendly' lubricants can be used for increased ease and comfort.
When performed at home without the presence of a professional, aiming the sperm in the vagina at the neck of the cervix may be more difficult to achieve and the effect may be to 'flood' the vagina with semen, rather than to target it specifically at the entrance to the cervix. This procedure is sometimes referred to as 'intravaginal insemination' (IVI). Sperm supplied by a sperm bank will be frozen and must be allowed to thaw before insemination. The sealed end of the straw itself must be cut off and the open end of the straw is usually fixed straight on to the tip of the syringe, allowing the contents to be drawn into the syringe. Sperm from more than one straw can generally be used in the same syringe. Where fresh semen is used, this must be allowed to liquefy before inserting it into the syringe, or alternatively, the syringe may be back-loaded.
A conception cap, which is a form of conception device, may be inserted into the vagina following insemination and may be left in place for several hours. Using this method, a woman may go about her usual activities while the cervical cap holds the semen in the vagina close to the entrance to the cervix. Advocates of this method claim that it increases the chances of conception. One advantage with the conception device is that fresh, non-liquefied semen may be used. The man may ejaculate straight into the cap so that his fresh semen can be inserted immediately into the vagina without waiting for it to liquefy, although a collection cup may also be used. Other methods may be used to insert semen into the vagina notably involving different uses of a conception cap. These include a specially designed conception cap with a tube attached which may be inserted empty into the vagina after which liquefied semen is poured into the tube. These methods are designed to ensure that semen is inseminated as close as possible to the cervix and that it is kept in place there to increase the chances of conception.
Intrauterine
Intrauterine insemination (IUI) involves injection of 'washed' sperm directly into the uterus with a catheter. Washing involves the removal of chemicals other than sperm which are in the natural ejaculate. In forms of vaginal insemination, including artificial vaginal insemination and ICI, these chemicals will be filtered out by the vagina. Insemination in this way also means that the sperm do not have to swim through the cervix which is coated with a mucus layer. This layer of mucus can slow down the passage of sperm and can result in many sperm perishing before they can enter the uterus. Donor sperm is sometimes tested for mucus penetration if it is to be used for ICI inseminations but partner sperm may or may not be able to pass through the cervix. In these cases, the use of IUI can provide a more efficient delivery of the sperm. In general terms, IUI is usually regarded as more efficient than ICI or IVI. It is therefore the method of choice for single and lesbian women wishing to conceive using donor sperm since this group of recipients usually require artificial insemination because they do not have a male partner, not because they have medical problems. Owing to the high number of these recipients using donor sperm services, IUI is therefore the most popular method of insemination today at a fertility clinic. The term 'artificial insemination' has, in many cases, come to mean IUI insemination.
It is important that washed sperm is used because unwashed sperm may elicit uterine cramping, expelling the semen and causing pain, due to its content of prostaglandins. (Prostaglandins are also the compounds responsible for causing the myometrium to contract and expel the menses from the uterus, during menstruation.) Having the woman rest on the table for fifteen minutes after an IUI increases the pregnancy rate.
Using this technique, as with ICI, fertilization takes place naturally in the external part of the fallopian tubes in the same way that occurs following intercourse.
For heterosexual couples, the indications for intrauterine insemination are usually a moderate male factor, an inability to ejaculate in the vagina, or idiopathic infertility. A short period of ejaculatory abstinence before intrauterine insemination is associated with higher pregnancy rates. For the man, a TMS of more than 5 million per ml is optimal. In practice, donor sperm will satisfy these criteria, and since IUI is a more efficient method of artificial insemination than ICI, with a generally higher success rate, it is usually the insemination procedure of choice for single women and lesbians using donor semen in a fertility centre. Lesbians and single women are less likely to have fertility issues of their own, and enabling donor sperm to be inserted directly into the womb will often produce a better chance of conceiving. A 2019 study showed that pregnancy rates were similar between lesbian women and heterosexual women undergoing IUI. However, it was found that there is a significantly higher multiple gestation rate among lesbian women undergoing ovulation induction (OI) when compared to lesbian women undergoing natural cycles.
Unlike ICI, intrauterine insemination normally requires a medical practitioner to perform the procedure. One of the requirements is to have at least one permeable fallopian tube, proved by hysterosalpingography. The duration of infertility is also important. A female under 30 years of age has optimal chances with IUI; a promising cycle is one that offers two follicles measuring more than 16 mm, and estrogen of more than 500 pg/mL on the day of hCG administration. However, GnRH agonist administration at the time of implantation does not improve pregnancy outcome in intrauterine insemination cycles according to a randomized controlled trial. One prominent private clinic in Europe has published data in which a multiple logistic regression model showed that sperm origin, maternal age, follicle count on the day of hCG administration, follicle rupture, and the number of uterine contractions observed after the second insemination were associated with the live-birth rate.
The steps to follow in order to perform an intrauterine insemination are:
Mild controlled ovarian stimulation (COS): when stimulating ovulation there is no control over how many oocytes mature at the same time. For that reason, it is necessary to monitor ovulation via ultrasound (counting the follicles developing simultaneously) and to administer the appropriate dose of hormones.
Ovulation induction: using substances known as ovulation inductors.
Semen capacitation: wash and centrifugation, swim-up, or gradient. The insemination should not be performed later than an hour after capacitation. 'Washed sperm' may be purchased directly from a sperm bank if donor semen is used, or 'unwashed semen' may be thawed and capacitated before performing IUI insemination, provided that the capacitation leaves a minimum of, usually, five million motile sperm.
Luteal phase support: a lack of progesterone in the endometrium could end a pregnancy. To avoid this, 200 mg/day of micronized progesterone is administered vaginally. If pregnancy occurs, this hormone continues to be administered until the tenth week of pregnancy.
The cost breakdown for Intrauterine Insemination (IUI) involves several components. The procedure itself typically ranges from $300 to $1,000 per cycle without insurance. The cost of the sperm may vary widely, with prices per vial ranging from $500 to $1,000 or more from a sperm bank. Additional expenses might include consultation fees, ovulation-inducing medication, ultrasounds, and blood tests.
The extent of insurance coverage for fertility treatments, including Intrauterine Insemination (IUI), varies considerably. Some insurance plans may cover some of the costs, while others may not provide any financial support for fertility treatments. Coverage depends on various factors, such as the insurance plan, state policies and regulations, and the underlying cause of infertility. Several states have mandated insurers to provide coverage for infertility services.
IUI can be used in conjunction with controlled ovarian hyperstimulation (COH). Clomiphene citrate is the first-line agent and letrozole the second-line agent for stimulating the ovaries before moving on to IVF. Still, advanced maternal age causes decreased success rates; women aged 38–39 years appear to have reasonable success during the first two cycles of ovarian hyperstimulation and IUI. However, for women aged over 40 years, there appears to be no benefit after a single cycle of COH/IUI. Medical experts therefore recommend considering in vitro fertilization after one failed COH/IUI cycle for women aged over 40 years.
A double intrauterine insemination theoretically increases pregnancy rates by decreasing the risk of missing the fertile window during ovulation. However, a randomized trial of insemination after ovarian hyperstimulation found no difference in live birth rate between single and double intrauterine insemination. A Cochrane review found uncertain evidence about the effect of IUI compared with timed intercourse or expectant management on live birth rates, but IUI with controlled ovarian hyperstimulation is probably better than expectant management.
Due to the lack of reliable evidence from controlled clinical trials, it is not certain which semen preparation techniques are more effective (wash and centrifugation; swim-up; or gradient) in terms of pregnancy and live birth rates.
Intrauterine insemination success factors
Intrauterine insemination (IUI) procedures have been shown to be more successful and effective when certain factors are taken into account. One major factor is the health of the sperm that is used. Sperm motility (which is improved by the sperm washing procedure), sperm density, and the sperm concentration index – all of which are assessed by washing and examining the specimen – are major indicators of a positive pregnancy test following IUI.
The age of both the male and female (egg and sperm donors) involved in the process is extremely important. Although age has typically been attributed to the woman as the determining factor, research shows that male and female age have about equal impact on the success of the procedure. Along with age, the duration of infertility is also a factor in IUI success: the longer one faces infertility, the lower the chance of a positive pregnancy test. Age is generally discussed as a risk factor because the DNA in eggs and sperm has an increased probability of mutation with age.
Finally, biological factors of the female's body can have some impact on the success of the IUI procedure. The endometrial thickness at the time of insemination is moderately important, though less of a concern than some of the other factors. The number of follicles developed, grown, and retrieved from the ovaries during ovarian stimulation is particularly important and a major success factor in fertility treatments. The estradiol concentration in the body on the day of hCG administration is a further factor for the female partner.
Who IUI can be used for
Because IUI is less expensive and less invasive than other fertility options (for example, in vitro fertilisation, or IVF), it is typically the first option for those looking for fertility treatments. Individuals or couples who struggle to become pregnant but have not yet explored any fertility treatments are good candidates for IUI. IUI provides a more affordable and accessible route to fertility treatment; however, it may not be the most successful option if female-factor infertility is determined to be the cause. IUI is also a very good option for single individuals who are using donor sperm, as donor sperm undergoes regulations and checks which may not be the case for a partner's sperm donation. IUI can additionally be a good fertility route for lesbian or queer couples, as they most often do not face infertility and would most likely be using regulated and checked donor sperm. Furthermore, surrogates can be artificially inseminated through IUI to help other individuals or couples have a child using their sperm.
Intrauterine tuboperitoneal
Intrauterine tuboperitoneal insemination (IUTPI) involves injection of washed sperm into both the uterus and fallopian tubes. The cervix is then clamped to prevent leakage to the vagina, best achieved with a specially designed double nut bivalve (DNB) speculum. The sperm is mixed to create a volume of 10 ml, sufficient to fill the uterine cavity, pass through the interstitial part of the tubes and the ampulla, finally reaching the peritoneal cavity and the Pouch of Douglas where it would be mixed with the peritoneal and follicular fluid. IUTPI can be useful in unexplained infertility, mild or moderate male infertility, and mild or moderate endometriosis. In non-tubal subfertility, fallopian tube sperm perfusion may be the preferred technique over intrauterine insemination.
Intratubal
Intratubal insemination (ITI) involves injection of washed sperm into the fallopian tube, although this procedure is no longer generally regarded as having any beneficial effect compared with IUI. ITI however, should not be confused with gamete intrafallopian transfer, where both eggs and sperm are mixed outside the woman's body and then immediately inserted into the fallopian tube where fertilization takes place.
LGBTQ+ concerns
Although many fertilization procedures, such as IUI, are typically carried out in a medical setting, society is increasingly recognizing the important role that this plays in the lives of individuals who might otherwise not conceive through heterosexual penetrative sexual intercourse. Artificial insemination using a sperm donor for LGBTQ+ individuals and couples is one of the more cost-effective avenues to parenting. While clinic based IUI may be open to many, it typically still includes hetero-reproductive narratives which date from the early days of fertilization procedures, when these were often exclusively for married couples and when there was resistance in many societies to extending these services to the LGBTQ+ community. Indeed, in the early days, there were very few fertility clinics which would provide services to single women and lesbian couples. In the UK, notable pioneers in this respect were the British Pregnancy Advisory Service (BPAS) and the Pregnancy Advisory Service (PAS), both of which operated before statutory control of fertility services in 1992, and the London Women's Clinic (LWC) which provided artificial insemination to single women and lesbians from 1998. Most donor insemination procedures undertaken in many countries today are for lesbian couples or single women (mainly lesbians), yet much of the rhetoric and advertising of these services is directed at heterosexual couples. Indeed, many sperm banks seem reluctant to inform donors that most of their donations will be used for lesbians and single women.
To improve the way society talks about and carries out donor insemination inclusive language may be used. One way to do this is to bring LGBTQ narratives into this process, with a particular emphasis on this being a family-centered process. Even in a medical setting, it is important to bring intimacy and family-centeredness into this process, as this promotes connectedness and inclusiveness in what can be seen as a hostile and discriminatory environment. LGBTQ couples or individuals typically have to navigate more complexities and barriers than heterosexual couples when undergoing fertility treatment, such as stigma and carrier decisions, so allowing room for intimacy and connectedness in the process can improve the experience for individuals, reduce stress, and minimize barriers that target marginalized individuals.
Lesbian couples may either select a friend or family member as their sperm donor or choose an anonymous donor. After a sperm donor is selected, a couple can proceed with donor sperm IUI. IUI is an economical option for same-sex couples and can be done without the use of medication. According to a 2021 study, lesbian women undergoing IUI had an average clinical pregnancy rate of 13.2% per cycle and an overall success rate of 42.2%, with an average of 3.6 cycles.
Pregnancy rate
The rates of successful pregnancy for artificial insemination are 10–15% per menstrual cycle using ICI, and 15–20% per cycle for IUI. In IUI, about 60 to 70% have achieved pregnancy after 6 cycles.
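As a rough illustration of how these per-cycle figures relate to the cumulative figure quoted for six cycles, the sketch below assumes a constant per-cycle success rate and independence between cycles, which real patients only approximate.

```python
# Illustrative only: cumulative chance of pregnancy over repeated IUI cycles,
# assuming a constant per-cycle success rate and independent cycles.
def cumulative_rate(per_cycle_rate: float, cycles: int) -> float:
    """Probability of at least one success in `cycles` independent attempts."""
    return 1.0 - (1.0 - per_cycle_rate) ** cycles

for rate in (0.15, 0.20):  # the 15-20% per-cycle IUI range cited above
    print(f"per-cycle {rate:.0%} -> after 6 cycles: {cumulative_rate(rate, 6):.0%}")
# Prints roughly 62% and 74%, broadly consistent with the 60-70% figure quoted for 6 cycles.
```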
However, these pregnancy rates may be very misleading, since many factors have to be included to give a meaningful answer, e.g. definition of success and calculation of the total population. These rates can be influenced by age, overall reproductive health, and whether the patient had an orgasm during the insemination. The literature is conflicting on whether immobilization after insemination increases the chances of pregnancy. Previous data suggest a statistically significant benefit when the patient remains immobile for 15 minutes after insemination, while another review article claims that there is none. A point of consideration is what it costs the patient or healthcare system to have the patient remain immobile for 15 minutes, if doing so does increase the chances. For couples with unexplained infertility, unstimulated IUI is no more effective than natural means of conception.
The pregnancy rate also depends on the total sperm count, or, more specifically, the total motile sperm count (TMSC), used in a cycle. The success rate increases with increasing TMSC, but only up to a certain count, when other factors become limiting to success. The summed pregnancy rate of two cycles using a TMSC of 5 million in each cycle (corresponding to a total sperm count of roughly 10 million) is substantially higher than that of one single cycle using a TMSC of 10 million. However, although more cost-efficient, using a lower TMSC also increases the average time taken to achieve pregnancy. Women whose age is becoming a major factor in fertility may not want to spend that extra time.
Samples per child
The number of samples (ejaculates) required to give rise to a child varies substantially from person to person, as well as from clinic to clinic. However, the following equations generalize the main factors involved:
For intracervical insemination:

N = Vs × c × rs / nr

where
N is how many children a single sample can give rise to.
Vs is the volume of a sample (ejaculate), usually between 1.0 mL and 6.5 mL
c is the concentration of motile sperm in a sample after freezing and thawing, approximately 5–20 million per ml but varies substantially
rs is the pregnancy rate per cycle, between 10% and 35%
nr is the total motile sperm count recommended for vaginal insemination (VI) or intra-cervical insemination (ICI), approximately 20 million.
The pregnancy rate increases with increasing number of motile sperm used, but only up to a certain degree, when other factors become limiting instead.
With these numbers, one sample would on average help give rise to 0.1–0.6 children; that is, it actually takes on average 2–5 samples to make a child.
For intrauterine insemination, a centrifugation fraction (fc) may be added to the equation:

N = Vs × fc × c × rs / nr

where
fc is the fraction of the volume that remains after centrifugation of the sample, which may be about half (0.5) to a third (0.33).
On the other hand, only 5 million motile sperm may be needed per cycle with IUI (nr=5 million)
Thus, only 1–3 samples may be needed for a child if used for IUI.
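A minimal numerical sketch of the relationships above, using the variable names defined in this section; the specific values are simply the illustrative ranges quoted here, not clinical guidance.

```python
# Expected number of children per donated sample, per the formulas above.
def children_per_sample(Vs_ml, c_per_ml, rs, nr, fc=1.0):
    """Vs_ml: sample volume (ml); c_per_ml: motile sperm concentration after thaw;
    rs: pregnancy rate per cycle; nr: motile sperm needed per insemination;
    fc: fraction remaining after centrifugation (1.0 for ICI, ~0.33-0.5 for IUI)."""
    return (Vs_ml * c_per_ml * fc / nr) * rs

# ICI: mid-range figures from the text
n_ici = children_per_sample(Vs_ml=3.5, c_per_ml=10e6, rs=0.2, nr=20e6)
# IUI: only ~5 million motile sperm needed, but roughly half is lost in centrifugation
n_iui = children_per_sample(Vs_ml=3.5, c_per_ml=10e6, rs=0.2, nr=5e6, fc=0.5)
print(f"ICI: ~{n_ici:.2f} children per sample (about {1/n_ici:.0f} samples per child)")
print(f"IUI: ~{n_iui:.2f} children per sample (about {1/n_iui:.0f} samples per child)")
```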
Social implications
One of the key issues arising from the rise of dependency on assisted reproductive technology (ARTs) is the pressure placed on couples to conceive, "where children are highly desired, parenthood is culturally mandatory, and childlessness socially unacceptable".
The medicalization of infertility creates a framework in which individuals are encouraged to think of infertility quite negatively. In many cultures donor insemination is religiously and culturally prohibited, often meaning that less accessible "high tech" and expensive ARTs, like IVF, are the only solution.
An over-reliance on reproductive technologies in dealing with infertility prevents many – especially, for example, in the "infertility belt" of central and southern Africa – from dealing with many of the key causes of infertility treatable by artificial insemination techniques; namely preventable infections, dietary and lifestyle influences.
If good records are not kept, the offspring when grown up risk accidental incest.
Risk factors
The risk factors of artificial insemination are comparatively low to other forms of fertility treatment. The most prominent risk factor would be infection after the procedure, with other risk factors including a higher risk of having twins or triplets, and minor vaginal bleeding during the procedure.
Although these risk factors are minor and generally manageable, there is a significant knowledge gap between identity groups around risk factors for fertility treatments in general. For instance, it was found that LGBTQ+ individuals "had significant knowledge gaps of risk factors associated with reproductive outcomes when compared to heterosexual female peers." Therefore, it is imperative that providers take extra care in educating their LGBTQ+ patients on potential risk factors of artificial insemination. The implications of this knowledge gap between LGTBQ+ individuals and their heterosexual counterparts are serious and worth noting. Lack of access to proper information and risk factors around procedures like these may dissuade someone from pursuing these procedures altogether. As a result, there will be less normalization of LGBTQ+ family making and reproduction, which only perpetuates this cycle of lack of information among LGBTQ+ folks.
Legal restrictions
Some countries restrict artificial insemination in a variety of ways. For example, some countries do not permit AI for single women, and other countries do not permit the use of donor sperm.
As of May 2013, the following European countries permit medically assisted AI for single women:
Belarus
Belgium
Britain
Bulgaria
Denmark
Estonia
Finland
Germany
Greece
Hungary
Iceland
Ireland
Latvia
Moldova
Montenegro
Netherlands
North Macedonia
Romania
Russia
Spain
Ukraine
Armenia
Cyprus
Law in the United States
History of Law Around Artificial Insemination
Artificial insemination used to be seen as adultery and was illegal until the 1960s, when states started recognizing the child born from artificial insemination as legitimate. Once the children began to be recognized as legitimate, legal questions around who the parents of the child are, how to handle surrogacy, paternity rights, and eventually artificial insemination and LGBT+ parents began to arise. Prior to the use of artificial insemination, the legal parents of a child were the two people who conceived the child or the person who birthed the child and their legal spouse, but artificial insemination complicates the legal process of becoming a parent as well as who is the parent of the child. Deciding who the parents of the child are is the largest legal predicament around artificial insemination. However, questions around surrogacy and donor's rights also appear as a side question to determining the parent(s). Some major cases that deal with artificial insemination and parental rights are K.M. v E.G., Johnson v Calvert, Matter of Baby M, and In re K.M.H.
Legal Parental Relations and Artificial Insemination
When children are conceived the traditional way, there is little discrepancy around who the legal parents of the child are. However, because children conceived using artificial insemination may not be genetically related to one or more of their parents, who the legal parents of the child are can come into question. Prior to the passage of the Uniform Parentage Act in 1973, children conceived via artificial insemination were deemed as “illegitimate” children. The Uniform Parentage Act then recognized the children born from artificial insemination as legal and laid precedent for how the legal parents of the child were decided. However, this act only applied to the children of those married couples. It established that the person who birthed the child was the mother and the father would be the husband of the woman. In 2002, the Uniform Parentage Act, which is adopted individually on a state by state basis, was revised to address non married couples and states that an unmarried couple has the same rights to the child that a married couple would. This extended who has the right to be a parent to a man who would supposedly fill in the social role as a “father.” There were now numerous ways to establish parental rights for both the mother and the father depending on if the child was born using a sperm donor or a surrogate. Currently, a revised version of the Uniform Parentage Act is starting to be passed in a few states that expands how parental relations can be determined. This bill includes expanding “father” to mean any person who would fill the role of a father, regardless of their gender and “mother” is expanded to anyone who gives birth to the child regardless of gender. In addition, this act would also change any language of “husband” or “wife” to “spouse.”
Paternity rights
There is no federal law that applies to all fifty states when it comes to artificial insemination and paternity rights, but the Uniform Parentage Act is a model which many states have adopted. Under the 1973 UPA, married heterosexual couples making use of artificial insemination through a licensed physician could list the husband as the natural father of the child, rather than the sperm donor. Since then a revised version of the Act has been introduced, though to less widespread adoption.
Generally paternity is not an issue when artificial insemination is between a married woman and an anonymous donor. Most states provide that anonymous donors' paternity claims are not recognized, and most sperm donation centers make use of contracts that require donors to sign away their paternity rights before they can participate. When the mother knows the donor, however, or engages in artificial insemination while unmarried, complications may arise. In cases of private sperm donation, paternity rights and responsibilities are often conferred onto sperm donors when: the donor and recipient did not comply with state laws regarding artificial insemination, the sperm donor and recipient know one another, or the donor had the intent of being a father to the child. When one or a number of these things is true, courts have at times found written agreements relinquishing parental rights to be unenforceable.
Opposition and criticism
Religious opposition
Some theologically buttressed arguments, such as those of Pope John XXIII, reject the moral validity of this practice. However, according to a document of the USCCB, the intrauterine insemination (IUI) of "licitly obtained" (normal intercourse with a silastic sheath i.e. a perforated condom) but technologically prepared semen sample (washed, etc.) has been neither approved nor disapproved by Church authority and its moral validity remains under discussion. Some religious groups, such as the Catholic Church, and individuals have also criticized artificial insemination because acquiring sperm for the procedure is seen as "a form of adultery promoting the vice of masturbation."
Other morality-based opposition
There are critics of artificial insemination who voice concerns regarding the potential for AI to encourage eugenicist practices through selection of particular traits. The line of reasoning follows the history of artificial insemination in breeding livestock and other domesticated animals wherein preferred traits are encouraged through human-controlled selection.
Other animals
Artificial insemination is used for pets, livestock, endangered species, and animals in zoos or marine parks difficult to transport.
Reasons and techniques
It may be used for many reasons, including to allow a male to inseminate a much larger number of females, to allow the use of genetic material from males separated by distance or time, to overcome physical breeding difficulties, to control the paternity of offspring, to synchronize births, to avoid injury incurred during natural mating, and to avoid the need to keep a male at all (such as for small numbers of females or in species whose fertile males may be difficult to manage).
Artificial insemination is much more common than natural mating, as it allows several female animals to be impregnated from a single male. For instance, up to 30-40 female pigs can be impregnated from a single boar. Workers collect the semen by masturbating the boars, then insert it into the sows via a raised catheter known as a pork stork. Boars are still physically used to excite the females prior to insemination, but are prevented from actually mating.
Semen is collected, extended, then cooled or frozen. It can be used on-site or shipped to the female's location. If frozen, the small plastic tube holding the semen is referred to as a straw. To allow the sperm to remain viable during the time before and after it is frozen, the semen is mixed with a solution containing glycerol or other cryoprotectants. An extender is a solution that allows the semen from a donor to impregnate more females by making insemination possible with fewer sperm. Antibiotics, such as streptomycin, are sometimes added to the sperm to control some bacterial venereal diseases. Before the actual insemination, estrus may be induced through the use of progestogen and another hormone (usually PMSG or Prostaglandin F2α).
History
The first viviparous animal to be artificially fertilized was a dog. The experiment was conducted with success by the Italian Lazzaro Spallanzani in 1780. Another pioneer was the Russian Ilya Ivanov in 1899. In 1935, diluted semen from Suffolk sheep was flown from Cambridge in Britain to Kraków, Poland, as part of an international research project. The participants included Prawochenki (Poland), Milovanoff (USSR), Hammond and Walton (UK), and Thomasset (Uruguay).
Modern artificial insemination was pioneered by John O. Almquist of Pennsylvania State University. He improved breeding efficiency by the use of antibiotics (first proven with penicillin in 1946) to control bacterial growth, decrease embryonic mortality, and increase fertility. This, along with various new techniques for processing, freezing, and thawing of semen, significantly enhanced the practical utilization of artificial insemination in the livestock industry and earned him the 1981 Wolf Foundation Prize in Agriculture. Many techniques developed by him have since been applied to other species, including humans.
Species
Artificial insemination is used in many non-human animals, including sheep, horses, cattle, pigs, dogs, pedigree animals generally, zoo animals, turkeys and creatures as tiny as honeybees and as massive as orcas (killer whales).
Artificial insemination of farm animals is common in the developed world, especially for breeding dairy cattle (75% of all inseminations). Swine are also bred using this method (up to 85% of all inseminations). It is an economical means for a livestock breeder to improve their herds utilizing males having desirable traits.
Although common with cattle and swine, artificial insemination is not as widely practiced in the breeding of horses. A small number of equine associations in North America accept only horses that have been conceived by "natural cover" or "natural service" – the actual physical mating of a mare to a stallion – the Jockey Club being the most notable of these, as no artificial insemination is allowed in Thoroughbred breeding. Other registries such as the AQHA and warmblood registries allow registration of foals created through artificial insemination, and the process is widely used allowing the breeding of mares to stallions not resident at the same facility – or even in the same country – through the use of transported frozen or cooled semen.
In modern species conservation, semen collection and artificial insemination are used also in birds. In 2013, scientists at the Justus-Liebig-University of Giessen, Germany, from the working group of Michael Lierz, Clinic for birds, reptiles, amphibians, and fish, developed a novel technique for semen collection and artificial insemination in parrots, producing the world's first macaw by assisted reproduction.
Scientists working with captive orcas were able to pioneer the technique in the early 2000s, resulting in "the first successful conceptions, resulting in live offspring, using artificial insemination in any cetacean species". John Hargrove, a SeaWorld trainer, describes Kasatka as being the first orca to receive artificial insemination.
Violation of rights
Artificial insemination on animals has been criticised as a violation of animal rights, with animal rights advocates equating it with rape and arguing it constitutes institutionalized bestiality. Artificial insemination of farm animals is condemned by animal rights campaigners such as People for the Ethical Treatment of Animals (PETA) and Joey Carbstrong, who identify the practice as a form of rape due to its sexual, involuntary and perceived painful nature. Animal rights organizations such as PETA and Mercy for Animals frequently write against the practice in their articles. Much of the meat production in the United States depends on artificial insemination, resulting in an explosive growth of the procedure over the past three decades. The state of Kansas makes no exceptions for artificial insemination under its bestiality law, thus making the procedure illegal.
Criteria for benefiting from artificial insemination according to the 2021 Bioethics Law
According to the 2021 Bioethics Law, the criteria that must be met to benefit from artificial insemination are as follows:
Artificial insemination can be performed using sperm from the husband or frozen sperm from an anonymous donor.
Both spouses or the unmarried woman must consent in advance to artificial insemination or embryo transfer.
The parenting project must be validated through a series of interviews with professionals (doctors, psychologists, etc.).
Individuals benefiting from artificial insemination must be of reproductive age.
The 2021 Bioethics Law has expanded the scope of Medically Assisted Procreation (MAP).
See also
Accidental incest
Conception device
Donor conceived people
Embryo transfer
Ex-situ conservation
Frozen bovine semen
Frozen zoo
Intracytoplasmic sperm injection
Semen extender
Sperm bank
Sperm donation
Sperm sorting
Surrogacy
References
Further reading
Hammond, John, et al., The Artificial Insemination of Cattle (Cambridge, Heffer, 1947, 61pp)
External links
Detailed description of the different fertility treatment options available
A history of artificial insemination
What are the Ethical Considerations for Sperm Donation?
United States state court rules sperm donor is not liable for children
UK Sperm Donors Lose Anonymity
AI technique in the equine
IntraUterine TuboPeritoneal Insemination (IUTPI)
The Hastings Center's Bioethics Briefing Book entry on assisted reproduction
Annales de Gembloux L´Organisation Scientifique de l Índustrie Animale en URSS, Artificial Insemination in the URSS, by Luis Thomasset, 1936
Fertility medicine
Reproduction in mammals
Livestock
Pets
Cryobiology
Semen
Assisted reproductive technology
Theriogenology
Ethically disputed business practices towards animals
Negative feedback
Negative feedback (or balancing feedback) occurs when some function of the output of a system, process, or mechanism is fed back in a manner that tends to reduce the fluctuations in the output, whether caused by changes in the input or by other disturbances. A classic example of negative feedback is a heating system thermostat — when the temperature gets high enough, the heater is turned OFF. When the temperature gets too cold, the heat is turned back ON. In each case the "feedback" generated by the thermostat "negates" the trend.
The opposite tendency — called positive feedback — is when a trend is positively reinforced, creating amplification, such as the squealing "feedback" loop that can occur when a mic is brought too close to a speaker which is amplifying the very sounds the mic is picking up, or the runaway heating and ultimate meltdown of a nuclear reactor which has a positive temperature coefficient of reactivity.
Whereas positive feedback tends to lead to instability via exponential growth, oscillation or chaotic behavior, negative feedback generally promotes stability. Negative feedback tends to promote a settling to equilibrium, and reduces the effects of perturbations. Negative feedback loops in which just the right amount of correction is applied with optimum timing, can be very stable, accurate, and responsive.
Negative feedback is widely used in mechanical and electronic engineering, and also within living organisms, and can be seen in many other fields from chemistry and economics to physical systems such as the climate. General negative feedback systems are studied in control systems engineering.
Negative feedback loops also play an integral role in maintaining the atmospheric balance in various systems on Earth. One such feedback system is the interaction between solar radiation, cloud cover, and planet temperature.
General description
In many physical and biological systems, qualitatively different influences can oppose each other. For example, in biochemistry, one set of chemicals drives the system in a given direction, whereas another set of chemicals drives it in an opposing direction. If one or both of these opposing influences are non-linear, equilibrium point(s) result.
In biology, this process (in general, biochemical) is often referred to as homeostasis; whereas in mechanics, the more common term is equilibrium.
In engineering, mathematics and the physical, and biological sciences, common terms for the points around which the system gravitates include: attractors, stable states, eigenstates/eigenfunctions, equilibrium points, and setpoints.
In control theory, negative refers to the sign of the multiplier in mathematical models for feedback. In delta notation, −Δoutput is added to or mixed into the input. In multivariate systems, vectors help to illustrate how several influences can both partially complement and partially oppose each other.
Some authors, in particular with respect to modelling business systems, use negative to refer to the reduction in difference between the desired and actual behavior of a system. In a psychology context, on the other hand, negative refers to the valence of the feedback – attractive versus aversive, or praise versus criticism.
In contrast, positive feedback is feedback in which the system responds so as to increase the magnitude of any particular perturbation, resulting in amplification of the original signal instead of stabilization. Any system in which there is positive feedback together with a gain greater than one will result in a runaway situation. Both positive and negative feedback require a feedback loop to operate.
However, negative feedback systems can still be subject to oscillations. This is caused by a phase shift around any loop. Due to these phase shifts the feedback signal of some frequencies can ultimately become in phase with the input signal and thus turn into positive feedback, creating a runaway condition. Even before the point where the phase shift becomes 180 degrees, stability of the negative feedback loop will become compromised, leading to increasing under- and overshoot following a disturbance. This problem is often dealt with by attenuating or changing the phase of the problematic frequencies in a design step called compensation. Unless the system naturally has sufficient damping, many negative feedback systems have low pass filters or dampers fitted.
Examples
Mercury thermostats (circa 1600) using expansion and contraction of columns of mercury in response to temperature changes were used in negative feedback systems to control vents in furnaces, maintaining a steady internal temperature.
In the invisible hand of the market metaphor of economic theory (1776), reactions to price movements provide a feedback mechanism to match supply and demand.
In centrifugal governors (1788), negative feedback is used to maintain a near-constant speed of an engine, irrespective of the load or fuel-supply conditions.
In a steering engine (1866), power assistance is applied to the rudder with a feedback loop, to maintain the direction set by the steersman.
In servomechanisms, the speed or position of an output, as determined by a sensor, is compared to a set value, and any error is reduced by negative feedback to the input.
In audio amplifiers, negative feedback flattens frequency response, reduces distortion, minimises the effect of manufacturing variations in component parameters, and compensates for changes in characteristics due to temperature change.
In analog computing, feedback around operational amplifiers is used to generate mathematical functions such as addition, subtraction, integration, differentiation, logarithm, and antilog functions.
In delta-sigma analog-to-digital and digital-to-analog converters (particularly for high quality audio), a negative feedback loop is used to repeatedly correct accumulated quantization error during conversion.
In a phase locked loop (1932), feedback is used to maintain a generated alternating waveform in a constant phase to a reference signal. In many implementations the generated waveform is the output, but when used as a demodulator in an FM radio receiver, the error feedback voltage serves as the demodulated output signal. If there is a frequency divider between the generated waveform and the phase comparator, the device acts as a frequency multiplier.
In organisms, feedback enables various measures (e.g. body temperature, or blood sugar level) to be maintained within a desired range by homeostatic processes.
Detailed implementations
Error-controlled regulation
One use of feedback is to make a system (say T) self-regulating to minimize the effect of a disturbance (say D). Using a negative feedback loop, a measurement of some variable (for example, a process variable, say E) is subtracted from a required value (the 'set point') to estimate an operational error in system status, which is then used by a regulator (say R) to reduce the gap between the measurement and the required value. The regulator modifies the input to the system T according to its interpretation of the error in the status of the system. This error may be introduced by a variety of possible disturbances or 'upsets', some slow and some rapid. The regulation in such systems can range from a simple 'on-off' control to a more complex processing of the error signal.
In this framework, the physical form of a signal may undergo multiple transformations. For example, a change in weather may cause a disturbance to the heat input to a house (as an example of the system T) that is monitored by a thermometer as a change in temperature (as an example of an 'essential variable' E). This quantity, then, is converted by the thermostat (a 'comparator') into an electrical error in status compared to the 'set point' S, and subsequently used by the regulator (containing a 'controller' that commands gas control valves and an ignitor) ultimately to change the heat provided by a furnace (an 'effector') to counter the initial weather-related disturbance in heat input to the house.
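To make the loop concrete, the following small simulation (all constants are invented for illustration, not a physical model of a real house) shows an on-off thermostat counteracting a cold-weather disturbance in the manner described above.

```python
# Illustrative on-off (bang-bang) negative feedback: a thermostat holding a set point.
# All numbers are made up for demonstration purposes.
def simulate_thermostat(set_point=20.0, hysteresis=0.5, outside=5.0, steps=500):
    temp = 15.0        # initial indoor temperature, degrees C
    heater_on = False
    for _ in range(steps):
        # Comparator: measured temperature versus set point, with a small dead band
        if temp < set_point - hysteresis:
            heater_on = True       # too cold -> heat ON
        elif temp > set_point + hysteresis:
            heater_on = False      # warm enough -> heat OFF
        # Effector and disturbance: heater input plus heat loss to the cold outside
        temp += 0.02 * (outside - temp) + (0.5 if heater_on else 0.0)
    return temp

print(f"indoor temperature after simulation: {simulate_thermostat():.1f} C")
# The temperature ends up oscillating close to the 20 C set point despite the disturbance.
```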
Error controlled regulation is typically carried out using a Proportional-Integral-Derivative Controller (PID controller). The regulator signal is derived from a weighted sum of the error signal, integral of the error signal, and derivative of the error signal. The weights of the respective components depend on the application.
Mathematically, the regulator signal is given by:

u(t) = Kp [ e(t) + (1/Ti) ∫ e(τ) dτ + Td de(t)/dt ]

where
e(t) is the error signal and Kp is the proportional gain
Ti is the integral time
Td is the derivative time
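A discretized version of this control law can be sketched as follows; the gains, time step, and toy plant are arbitrary illustrations rather than a tuned controller.

```python
# Minimal discrete-time PID regulator: weighted sum of the error, its integral and its derivative.
class PID:
    def __init__(self, kp: float, ti: float, td: float, dt: float):
        self.kp, self.ti, self.td, self.dt = kp, ti, td, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, set_point: float, measurement: float) -> float:
        error = set_point - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # u(t) = Kp * ( e(t) + (1/Ti) * integral(e) + Td * de/dt )
        return self.kp * (error + self.integral / self.ti + self.td * derivative)

# Example: drive a simple first-order process toward a set point of 1.0
pid, y = PID(kp=2.0, ti=5.0, td=0.1, dt=0.1), 0.0
for _ in range(300):
    u = pid.update(set_point=1.0, measurement=y)
    y += 0.1 * (u - y)   # toy plant: the output relaxes toward the control signal
print(f"process output after 300 steps: {y:.3f}")  # approaches 1.0 via negative feedback
```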
Negative feedback amplifier
The negative feedback amplifier was invented by Harold Stephen Black at Bell Laboratories in 1927, and granted a patent in 1937 (US Patent 2,102,671, "a continuation of application Serial No. 298,155, filed August 8, 1928 ...").
"The patent is 52 pages long plus 35 pages of figures. The first 43 pages amount to a small treatise on feedback amplifiers!"
There are many advantages to feedback in amplifiers. In design, the type of feedback and amount of feedback are carefully selected to weigh and optimize these various benefits.
Advantages of negative voltage feedback in amplifiers
It reduces non-linear distortion, that is, it has higher fidelity.
It increases circuit stability: that is, the gain remains stable though there are variations in ambient temperature, frequency and signal amplitude.
It increases bandwidth slightly.
It modifies the input and output impedances.
Harmonic, phase, amplitude, and frequency distortions are all reduced considerably.
Noise is reduced considerably.
Though negative feedback has many advantages, amplifiers with feedback can oscillate. See the article on step response. They may even exhibit instability. Harry Nyquist of Bell Laboratories proposed the Nyquist stability criterion and the Nyquist plot that identify stable feedback systems, including amplifiers and control systems.
The figure shows a simplified block diagram of a negative feedback amplifier.
The feedback sets the overall (closed-loop) amplifier gain at a value:

O / I = A / (1 + βA) ≈ 1 / β
where the approximate value assumes βA >> 1. This expression shows that a gain greater than one requires β < 1. Because the approximate gain 1/β is independent of the open-loop gain A, the feedback is said to 'desensitize' the closed-loop gain to variations in A (for example, due to manufacturing variations between units, or temperature effects upon components), provided only that the gain A is sufficiently large. In this context, the factor (1+βA) is often called the 'desensitivity factor', and in the broader context of feedback effects that include other matters like electrical impedance and bandwidth, the 'improvement factor'.
If the disturbance D is included, the amplifier output becomes:

O = A·I / (1 + βA) + D / (1 + βA)
which shows that the feedback reduces the effect of the disturbance by the 'improvement factor' (1+β A). The disturbance D might arise from fluctuations in the amplifier output due to noise and nonlinearity (distortion) within this amplifier, or from other noise sources such as power supplies.
The difference signal I–βO at the amplifier input is sometimes called the "error signal". According to the diagram, the error signal is:

I − βO = (I − βD) / (1 + βA)
From this expression, it can be seen that a large 'improvement factor' (or a large loop gain βA) tends to keep this error signal small.
Although the diagram illustrates the principles of the negative feedback amplifier, modeling a real amplifier as a unilateral forward amplification block and a unilateral feedback block has significant limitations. For methods of analysis that do not make these idealizations, see the article Negative feedback amplifier.
Operational amplifier circuits
The operational amplifier was originally developed as a building block for the construction of analog computers, but is now used almost universally in all kinds of applications including audio equipment and control systems.
Operational amplifier circuits typically employ negative feedback to get a predictable transfer function. Since the open-loop gain of an op-amp is extremely large, a small differential input signal would drive the output of the amplifier to one rail or the other in the absence of negative feedback. A simple example of the use of feedback is the op-amp voltage amplifier shown in the figure.
The idealized model of an operational amplifier assumes that the gain is infinite, the input impedance is infinite, output resistance is zero, and input offset currents and voltages are zero. Such an ideal amplifier draws no current from the resistor divider.
Ignoring dynamics (transient effects and propagation delay), the infinite gain of the ideal op-amp means this feedback circuit drives the voltage difference between the two op-amp inputs to zero. Consequently, the voltage gain of the circuit in the diagram, assuming an ideal op amp, is the reciprocal of feedback voltage division ratio β:
V_out / V_in = 1 / β
A real op-amp has a high but finite gain A at low frequencies, decreasing gradually at higher frequencies. In addition, it exhibits a finite input impedance and a non-zero output impedance. Although practical op-amps are not ideal, the model of an ideal op-amp often suffices to understand circuit operation at low enough frequencies.
As discussed in the previous section, the feedback circuit stabilizes the closed-loop gain and desensitizes the output to fluctuations generated inside the amplifier itself.
Areas of application
Mechanical engineering
An example of the use of negative feedback control is the ballcock control of water level (see diagram), or a pressure regulator. In modern engineering, negative feedback loops are found in engine governors, fuel injection systems and carburettors. Similar control mechanisms are used in heating and cooling systems, such as those involving air conditioners, refrigerators, or freezers.
Biology
Some biological systems exhibit negative feedback such as the baroreflex in blood pressure regulation and erythropoiesis. Many biological processes (e.g., in the human anatomy) use negative feedback. Examples of this are numerous, from the regulating of body temperature, to the regulating of blood glucose levels. The disruption of feedback loops can lead to undesirable results: in the case of blood glucose levels, if negative feedback fails, the glucose levels in the blood may begin to rise dramatically, thus resulting in diabetes.
For hormone secretion regulated by the negative feedback loop: when gland X releases hormone X, this stimulates target cells to release hormone Y. When there is an excess of hormone Y, gland X "senses" this and inhibits its release of hormone X. As shown in the figure, most endocrine hormones are controlled by a physiologic negative feedback inhibition loop, such as the glucocorticoids secreted by the adrenal cortex. The hypothalamus secretes corticotropin-releasing hormone (CRH), which directs the anterior pituitary gland to secrete adrenocorticotropic hormone (ACTH). In turn, ACTH directs the adrenal cortex to secrete glucocorticoids, such as cortisol. Glucocorticoids not only perform their respective functions throughout the body but also negatively affect the release of further stimulating secretions of both the hypothalamus and the pituitary gland, effectively reducing the output of glucocorticoids once a sufficient amount has been released.
Chemistry
Closed systems containing substances undergoing a reversible chemical reaction can also exhibit negative feedback in accordance with Le Chatelier's principle, which shifts the chemical equilibrium to the opposite side of the reaction in order to reduce a stress. For example, in the reaction
N2 + 3 H2 ⇌ 2 NH3 + 92 kJ/mol
If a mixture of the reactants and products exists at equilibrium in a sealed container and nitrogen gas is added to this system, then the equilibrium will shift toward the product side in response. If the temperature is raised, then the equilibrium will shift toward the reactant side which, since the reverse reaction is endothermic, will partially reduce the temperature.
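This shift can be checked with the reaction quotient Q = [NH3]^2 / ([N2][H2]^3): adding N2 lowers Q below the equilibrium constant, so the net reaction runs toward the product side. The concentrations in the sketch below are assumed purely for illustration:

```python
# Reaction quotient for N2 + 3 H2 <=> 2 NH3 (concentrations are assumed examples).
def reaction_quotient(n2, h2, nh3):
    return nh3**2 / (n2 * h2**3)

K = reaction_quotient(1.0, 3.0, 2.0)          # treat this mixture as the equilibrium state
print(reaction_quotient(1.5, 3.0, 2.0) < K)   # True: added N2 lowers Q below K,
                                              # so the equilibrium shifts toward NH3
```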
Self-organization
Self-organization is the capability of certain systems "of organizing their own behavior or structure". There are many possible factors contributing to this capacity, and most often positive feedback is identified as a possible contributor. However, negative feedback also can play a role.
Economics
In economics, automatic stabilisers are government programs that are intended to work as negative feedback to dampen fluctuations in real GDP.
Mainstream economics asserts that the market pricing mechanism operates to match supply and demand, because mismatches between them feed back into the decision-making of suppliers and demanders of goods, altering prices and thereby reducing any discrepancy. However Norbert Wiener wrote in 1948:
"There is a belief current in many countries and elevated to the rank of an official article of faith in the United States that free competition is itself a homeostatic process... Unfortunately the evidence, such as it is, is against this simple-minded theory."
The notion of economic equilibrium being maintained in this fashion by market forces has also been questioned by numerous heterodox economists such as financier George Soros and leading ecological economist and steady-state theorist Herman Daly, who was with the World Bank in 1988–1994.
Environmental science
A basic and common example of a negative feedback system in the environment is the interaction among cloud cover, plant growth, solar radiation, and planet temperature. As incoming solar radiation increases, planet temperature increases. As the temperature increases, the amount of plant life that can grow increases. This plant life can then make products such as sulfur which produce more cloud cover. An increase in cloud cover leads to higher albedo, or surface reflectivity, of the Earth. As albedo increases, however, the amount of solar radiation decreases. This, in turn, affects the rest of the cycle.
Cloud cover, and in turn planet albedo and temperature, is also influenced by the hydrological cycle. As planet temperature increases, more water vapor is produced, creating more clouds. The clouds then block incoming solar radiation, lowering the temperature of the planet. This interaction produces less water vapor and therefore less cloud cover. The cycle then repeats in a negative feedback loop. In this way, negative feedback loops in the environment have a stabilizing effect.
History
Negative feedback as a control technique may be seen in the refinements of the water clock introduced by Ktesibios of Alexandria in the 3rd century BCE. Self-regulating mechanisms have existed since antiquity, and were used to maintain a constant level in the reservoirs of water clocks as early as 200 BCE.
Negative feedback was implemented in the 17th century. Cornelius Drebbel had built thermostatically controlled incubators and ovens in the early 1600s, and centrifugal governors were used to regulate the distance and pressure between millstones in windmills. James Watt patented a form of governor in 1788 to control the speed of his steam engine, and James Clerk Maxwell in 1868 described "component motions" associated with these governors that lead to a decrease in a disturbance or the amplitude of an oscillation.
The term "feedback" was well established by the 1920s, in reference to a means of boosting the gain of an electronic amplifier. Friis and Jensen described this action as "positive feedback" and made passing mention of a contrasting "negative feed-back action" in 1924. Harold Stephen Black came up with the idea of using negative feedback in electronic amplifiers in 1927, submitted a patent application in 1928, and detailed its use in his paper of 1934, where he defined negative feedback as a type of coupling that reduced the gain of the amplifier, in the process greatly increasing its stability and bandwidth.
Karl Küpfmüller published papers on a negative-feedback-based automatic gain control system and a feedback system stability criterion in 1928.
Nyquist and Bode built on Black's work to develop a theory of amplifier stability.
Early researchers in the area of cybernetics subsequently generalized the idea of negative feedback to cover any goal-seeking or purposeful behavior.
Cybernetics pioneer Norbert Wiener helped to formalize the concepts of feedback control, defining feedback in general as "the chain of the transmission and return of information", and negative feedback as the case in which the returned information acts to oppose the original change.
While the view of feedback as any "circularity of action" helped to keep the theory simple and consistent, Ashby pointed out that, although it may clash with definitions that require a "materially evident" connection, "the exact definition of feedback is nowhere important", and he also noted the limitations of the concept of "feedback".
To reduce confusion, later authors have suggested alternative terms such as degenerative, self-correcting, balancing, or discrepancy-reducing in place of "negative".
See also
References
External links
Control theory
Cybernetics
Signal processing
Analog circuits
Feedback | Negative feedback | [
"Mathematics",
"Technology",
"Engineering"
] | 4,155 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing",
"Applied mathematics",
"Control theory",
"Analog circuits",
"Electronic engineering",
"Dynamical systems"
] |
7,745,101 | https://en.wikipedia.org/wiki/Desmosine | Desmosine is an amino acid found uniquely in elastin, a protein found in connective tissue such as skin, lungs, and elastic arteries.
Desmosine is a component of elastin and cross links with its isomer, isodesmosine, giving elasticity to the tissue. Detection of desmosine in urine, plasma or sputum samples can be a marker for elastin breakdown due to high elastase activity related to certain diseases.
Structure
Desmosine and its isomer isodesmosine are both composed of four lysine residues, allowing for bonding to multiple peptide chains. The four lysine groups combine to form a pyridinium nucleus, which can be reduced to neutralize the associated positive charge and increase hydrophobicity. The four lysines form side chains around the pyridinium nucleus with exposed carboxyl groups. The difference between desmosine and isodesmosine is an exchange of a lysine side chain on carbon 1 with a proton on carbon 5. Desmosine is associated with alanine, bonding with it on the N-terminal side. It is this alanine association that allows it to bond well with pairs of tropoelastin, to form elastin and elastin networks.
Desmosine and isodesmosine are unable to be differentiated thus far because of the lack of technology. The differentiation would be helpful in order to understand desmosine and its properties better. Currently, mass spectrometry is used and aids in the release of characteristic fragments which would help with differentiation, especially in larger peptides.
Synthesis
Desmosine has pathways to form multiple conformations of itself, both through biosynthesis and through man-made systems.
Biosynthesis
The formation of desmosines occurs within the formation of the precursor tropoelastin. The tropoelastin initially lacks any of these complex binding molecules and has a similar make-up to that of the final-stage elastin; however, it contains a greater number of lysine side chains, which directly corresponds with the desmosines later found. These precursor molecules are processed through dehydrogenation, along with dihydroD, and ultimately form elastin bound with desmosine. Through the lysyl oxidase enzyme, lysyl ε-amino groups are oxidized, forming allysine. This spontaneously condenses with other allysine molecules to form a bifunctional cross-link, allysine aldol, or with an ε-amino group of lysine, forming dehydrolysinonorleucine. These compounds are then further condensed to form the tetrafunctional pyridinium cross-links of desmosines and isodesmosines. These reactions occur with lysines in areas of high alanine content, because alanine's small side chain does not block the enzyme from binding to the lysine groups.
Lab synthesis
Desmosines can be synthesized in a lab through a few methods, such as palladium-catalyzed cross-coupling reactions. The various treatments can create slightly different conformations.
Bonding
Some models of bonding for desmosines, created through the study of bovine ligament elastin, suggest a combination of desmosine and secondary cross-linking to bind together peptide chains. This model has desmosine bonding near an alanine on the peptide chain, then to 3 other amino acids on the 2 peptide chains, despite being able to bond to up to 4 chains. It has been suggested that the secondary cross-linking occurs with either desmosine or lysinonorleucine, which maintains an alpha helix conformation in alanine rich sections on peptides.
Both isodesmosine and desmosine can have similar bonding sites in elastin, though they are rarely shown this way in nature. They more often appear in close proximity to each other on the peptide chain.
Bonding in elastin/collagen
Desmosine is found to have a hydrogen bond donor count of eight and a hydrogen bond acceptor count of twelve.
Function
Elastin, a protein in the extracellular matrix, provides elasticity and its soluble precursor is tropoelastin. When elastin cross links it produces desmosine and isodesmosine. When desmosine is mentioned, it is usually grouped with isodesmosine, the other tetrafunctional amino acid that is specific to elastin.
Desmosine can be found not only in elastin but also in urine, plasma, and sputum, and there are different ways to identify and measure these quantities. This means that it is used as a biomarker for elastin degradation, which can aid detection of chronic obstructive pulmonary disease (COPD). Desmosine is also a potential biomarker for matrix degradation.
Material properties
Desmosine, a rare amino acid found in elastin, has a molecular weight of 526.611 g/mol. The desmosine pyridinium ring has three allysyl side chains and one unaltered lysyl side chain. Testing has shown that the pyridinium core of desmosine remains intact even at very high collision energies.
Current usage in medicine
Desmosine is currently used as a biomarker in the medical field. It is measured in order to monitor elastin breakdown. Since it is connected to the degradation of elastin, it can be used to identify COPD. Desmosine is one of the oldest biomarkers, developed in the 1960s, but it was first correlated to lung elastin content in the 1980s through urinary excretion. Biomarkers are judged in six ways:
Biomarkers should be central to the pathophysiological process
They should be a ‘‘true’’ surrogate end-point
Biomarkers should be stable and vary with disease progression only
The severity of the condition should relate to the concentration of the Biomarker
Progression should be predicted
Effective treatment should show change
Even though desmosine satisfies the first three criteria, it does not yet satisfy the rest, which is why research is being done to further validate desmosine as a biomarker for certain diseases such as COPD.
Application of desmosine
Because desmosine is most prevalent in mature elastin, it can be consistently located and measured in urine samples after elastin breakdown in the human body. Desmosine does not exist elsewhere within the body, nor can it be sourced from elsewhere outside the body, which isolates it as a key marker for elastin breakdown. Indeed, desmosine "has been studied as a marker of elastin breakdown in several chronic pulmonary conditions, including chronic obstructive pulmonary disease (COPD), cystic fibrosis, and chronic tobacco use." In one study, hyperoxic mice that formed alveoli as a result of lung maturation also showed drastic changes in collagen and elastin within the lungs, as well as a change in cross-linking. In another study, deceased patients with acute respiratory distress syndrome (ARDS) were reported to have higher concentrations of desmosine in their urine than those patients who survived ARDS, and higher concentrations of desmosine revealed that "more severe damage to the extracellular matrix occurred in the most critically ill [acute lung injury] patients."
However, it has been argued in the same study that desmosine does "not correlate well with markers of disease severity," correlating only weakly with age. Instead, it is suggested "that desmosine may be more useful in understanding the pathogenesis of ALI and less useful as a marker of disease severity.” The current standard for measuring lung disease progression, for example, is measured through the forced expiratory volume in one second (FEV1) compared to the maximum lung capacity; in other words, the volume of air a person can exhale from full lungs in one second compared to their maximum lung capacity. This method, while simple and physiologically thorough, has biological limitations, and so a superior biological marker is being sought after. Desmosine has been studied as one such biological marker, with studies in the 1980s to link urinary desmosine concentration with elastin breakdown in the lungs. Though large amounts of data have been collected with regards to desmosine's potential as a replacement biological marker in determining disease progression, some believe there is still insufficient evidence for desmosine to meet and fill this need.
In orthopedics, one study examined equine tendons and how their increasing stiffness and fatigue with age was due to fragmentation of the elastin in the tendons. The superficial digital flexor tendon (SDFT) and the common digital extensor tendon (CDET) were analyzed for elastin composition, comparing older tendons to younger ones. While both the CDET and the SDFT are positional tendons, enabling muscles to move the skeleton, the SDFT also stores energy and is far more elastic than the CDET due to "specialization of the [interfascicular matrix] to enable repeated interfascicular sliding and recoil." Desmosine concentrations were reported to be far greater in new tendons than in tendons that had partially degraded, suggesting that not only is there fragmentation of tendon elastin with age, but also a smaller total composition of elastin within the SDFT, though this was not true in the case of the CDET examined.
Research has also been performed to determine the cross-linking structure of elastin, in an effort to better understand the relationship between elastin and pertinent diseases, such as cystic fibrosis, chronic obstructive pulmonary disease (COPD), and aortic aneurysms. A study was conducted to find this structure through synthesis of a cyclic peptide containing desmosine, to partially mimic elastin in the hopes of running mass spectrometry on the peptide to reveal the cross-linking structure. The elastin mimic was eventually synthesized successfully, and though work has not yet been done to clarify the cross-linking structure of elastin, preliminary mass spectrometry demonstrated the presence of the expected ion formed from the chemical reactions used.
References
Protein structure
Quaternary ammonium compounds
Pyridinium compounds
Amino acids
Polyamines | Desmosine | [
"Chemistry"
] | 2,234 | [
"Amino acids",
"Biomolecules by chemical classification",
"Protein structure",
"Structural biology"
] |
7,747,976 | https://en.wikipedia.org/wiki/Rocket%20engine%20test%20facility | A rocket engine test facility is a location where rocket engines may be tested on the ground, under controlled conditions. A ground test program is generally required before the engine is certified for flight. Ground testing is very inexpensive in comparison to the cost of risking an entire mission or the lives of a flight crew.
The test conditions available are usually described as sea level ambient or altitude. Sea level testing is useful for evaluations of start characteristics for rockets launched from the ground. However, sea level testing does not provide a true simulation of the majority of the operating environment of the rocket. Better simulations are provided by altitude test facilities.
Sea level tests
The facility must restrain the rocket and direct the rocket exhaust safely toward the open atmosphere. Structural integrity, system operations, and sea level thrust can be measured and verified. However, rockets are primarily intended for operations in very thin or no atmosphere. Systems that work well on the ground may behave very differently in space.
A typical sea level test stand may be designed to restrain the rocket engine in either a horizontal or vertical position. Liquid rocket engines are usually fired in a vertical position because the propellant pump intakes are designed to draw fuel from the bottoms of the fuel tanks. The effect of the propellant weight on the thrust measurement system (TMS) must be accounted for as the engine is firing. The rocket exhaust is directed into a flame bucket or trench. The flame trench is designed to redirect the hot exhaust to a safe direction and is protected by a water deluge system that both cools the exhaust and also reduces the sound pressure level (loudness). The sound pressure level of large rocket engines has been measured at greater than 200 decibels — one of the loudest man-made sounds.
Solid rocket engines may be fired in either a vertical or horizontal orientation. The thrust measurement system does not need to account for the changing weight of the rocket in a horizontal position. The associated flame trench need not be so sturdy as with a vertical test stand, however a water system may be less effective at reducing the sound pressure level.
All test stands require safety provisions to protect against the destructive potential of an unplanned engine detonation. The safety provisions generally include building the stand some minimum distance from inhabited areas or other critical facilities, placing the stand behind a thick concrete blast wall or earthen berm, and using some form of inerting system (either gaseous nitrogen or helium) to eliminate the buildup of explosive mixtures.
Altitude tests
The advantage of altitude testing is to obtain a better simulation of the rocket's operating environment. Air pressure decreases with increasing altitude. Effects of the lower air pressure include higher rocket thrust and lower heat transfer.
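One way to see why lower ambient pressure raises thrust is the standard rocket thrust equation, F = ṁ·ve + (pe − pa)·Ae, where pa is the ambient pressure. The sketch below evaluates it for an assumed example engine; the mass flow, exhaust velocity, exit pressure, and exit area are illustrative numbers, not data from any facility:

```python
# Thrust F = mdot*ve + (pe - pa)*Ae for an assumed example engine (SI units).
def thrust(mdot, ve, pe, pa, ae):
    """Momentum thrust plus pressure thrust."""
    return mdot * ve + (pe - pa) * ae

mdot, ve = 250.0, 3000.0       # kg/s, m/s (assumed)
pe, ae = 40_000.0, 1.5         # exit pressure in Pa, exit area in m^2 (assumed)

sea_level = 101_325.0          # Pa
near_vacuum = 1_100.0          # Pa, roughly 0.16 psia as in an altitude chamber
for pa in (sea_level, near_vacuum):
    print(pa, thrust(mdot, ve, pe, pa, ae))
# Thrust is noticeably higher at the lower ambient pressure, as noted above.
```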
An altitude facility is much more complex than a sea level facility. The rocket is installed inside an enclosed chamber which is evacuated to a minimum pressure level before rocket firing. A typical chamber operating pressure of 0.16 psia (equivalent to an altitude of 100,000 feet) is established inside the chamber by some form of mechanical pumping. Mechanical pumping is typically provided by steam ejector/diffusers. If the products of combustion from the rocket firing include flammable or explosive materials, the chamber must be inerted, typically with gaseous nitrogen (GN2). The inerting process prevents build-up of potentially explosive materials inside the chamber or exhaust ducting.
Rocket ground test facilities
Test facilities in the United States
California
Active
United States Air Force's Air Force Research Laboratory Propulsion Directorate at Edwards Air Force Base, California
United States Navy's Skytop Rocket Test Propulsion Facility at Naval Air Weapons Station China Lake, California
United States Navy's Naval Base Ventura County's facility on San Nicolas Island, California
United States Space Force's Space Systems Command's facilities at Vandenberg Space Force Base
Astra (American spaceflight company)'s testing facility in Alameda, California
Astra (American spaceflight company)'s testing facility at former Castle Air Force Base in Atwater, California
Lockheed Martin's Santa Cruz Facility in Santa Cruz, California
Northrop Grumman's Capistrano Test Site near San Clemente, California
Mojave Air and Space Port's rocket testing facilities
National Technical Systems' NTS Rocket and Fluids Test Laboratory at San Bernardino International Airport in San Bernardino, California
Dormant
Douglas Aircraft Company's and Rocketdyne's SACTO facility in Rancho Cordova, California
Lockheed Propulsion Company's and Grand Central Rocket Company's Potrero Canyon and Laborde Canyon test sites
Lockheed Propulsion Company's Redlands Proving Grounds in Redlands, California and Loma Linda, California
Marquardt Corporation's Remote Rocket Engine Test Facility in the Angeles National Forest
NASA's Jet Propulsion Laboratory sites at Edwards Air Force Base and Goldstone, California
Rocketdyne's Santa Susana Field Laboratory
United Technologies's Coyote Ridge testing sites south of San Jose, California
Northrop Grumman Innovation Systems, Promontory, Utah (formerly Morton-Thiokol, Thiokol, ATK, Orbital ATK)
Northrop Grumman, Elkton, Maryland (formerly Thiokol, Elkton Controls)
Marshall Space Flight Center
Plum Brook Station
White Sands Test Facility (WSTF) at Las Cruces, New Mexico
Stennis Space Center at Hancock County, Mississippi
United States Air Force Arnold Engineering Development Center
New Mexico Tech's Energetic Materials Research and Testing Center
SpaceX Rocket Development and Test Facility at McGregor, Texas
SpaceX high-altitude test facility at Las Cruces, New Mexico
Blue Origin's Corn Ranch at Van Horn, Texas
XCOR Aerospace Engine Test Facility at Mojave, California
Rocket ground test facilities outside the United States
Sea-level and high-altitude test facilities - Satish Dhawan Space Centre SHAR, Sriharikota, Andhra Pradesh, India
DLR Lampoldshausen - Baden-Württemberg, Germany, European Union
ISRO Propulsion Complex - Mahendragiri, Tamil Nadu, India
NII-229 (NIIKhIMMash) - Zagorsk, Moscow Oblast, Russia
RAF Spadeadam - (No longer in use) United Kingdom.
Woomera Test Range - South Australia
High Down Rocket Test Site - (No longer in use) United Kingdom.
Refshaleøen, Copenhagen, Denmark, European Union, Used by Copenhagen Suborbitals, a private non-profit organisation operating on open source principles
References
Bibliography
Sutton, G.P., (1976) Rocket Propulsion Elements
Lawrie, A., (2005) Saturn
Bilstein, R.E., (2003) Stages To Saturn
External links
National Rocket Propulsion Test Alliance
NASA Rocket Propulsion Test Program Office
Aerospace system testing
Rocket propulsion
Rocket engines
Product testing | Rocket engine test facility | [
"Technology",
"Engineering"
] | 1,365 | [
"Rocket engines",
"Aerospace system testing",
"Engines",
"Aerospace engineering"
] |
7,750,025 | https://en.wikipedia.org/wiki/Screening%20router | A screening router performs packet-filtering and is used as a firewall. In some cases a screening router may be used as perimeter protection for the internal network or as the entire firewall solution.
References
See also
Access Control List
DMZ
Data security
Networking hardware
Computer network security | Screening router | [
"Technology",
"Engineering"
] | 59 | [
"Cybersecurity engineering",
"Computer network stubs",
"Computer networks engineering",
"Computer network security",
"Networking hardware",
"Computing stubs",
"Data security"
] |
12,403,587 | https://en.wikipedia.org/wiki/Surface%20of%20constant%20width | In geometry, a surface of constant width is a convex form whose width, measured by the distance between two opposite parallel planes touching its boundary, is the same regardless of the direction of those two parallel planes. One defines the width of the surface in a given direction to be the perpendicular distance between the parallels perpendicular to that direction. Thus, a surface of constant width is the three-dimensional analogue of a curve of constant width, a two-dimensional shape with a constant distance between pairs of parallel tangent lines.
Definition
More generally, any compact convex body D has one pair of parallel supporting planes in a given direction. A supporting plane is a plane that intersects the boundary of D but not the interior of D. One defines the width of the body as before. If the width of D is the same in all directions, then one says that the body is of constant width and calls its boundary a surface of constant width, and the body itself is referred to as a spheroform.
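Concretely, the width of a convex body in a direction u can be computed from boundary points as the spread of their projections onto u, that is, max⟨p,u⟩ − min⟨p,u⟩, which equals the distance between the two supporting planes perpendicular to u. The sketch below checks this numerically for a sampled unit sphere (an assumed example):

```python
# Width of a convex body, sampled as boundary points, in a given direction.
import numpy as np

def width(points, u):
    u = np.asarray(u, dtype=float)
    u = u / np.linalg.norm(u)
    projections = points @ u            # support values along u and -u
    return projections.max() - projections.min()

# Assumed example: points on a unit sphere, whose width is 2 in every direction.
rng = np.random.default_rng(0)
pts = rng.normal(size=(5000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)

dirs = rng.normal(size=(10, 3))
print([round(width(pts, d), 2) for d in dirs])   # every value is close to 2.0
```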
Examples
A sphere, a surface of constant radius and thus diameter, is a surface of constant width.
Contrary to common belief the Reuleaux tetrahedron is not a surface of constant width. However, there are two different ways of smoothing subsets of the edges of the Reuleaux tetrahedron to form Meissner tetrahedra, surfaces of constant width. These shapes have been conjectured to have the minimum volume among all shapes with the same constant width, but this conjecture remains unsolved.
Among all surfaces of revolution with the same constant width, the one with minimum volume is the shape swept out by a Reuleaux triangle rotating about one of its axes of symmetry, while the one with maximum volume is the sphere.
Properties
Every parallel projection of a surface of constant width is a curve of constant width. By Barbier's theorem, the perimeter of this projection is π times the width, regardless of the direction of projection. It follows that every surface of constant width is also a surface of constant girth, where the girth of a shape is the perimeter of one of its parallel projections. Conversely, Hermann Minkowski proved that every surface of constant girth is also a surface of constant width.
The shapes whose parallel projections have constant area (rather than constant perimeter) are called bodies of constant brightness.
References
External links
Spheroforms
T. Lachand-Robert & É. Oudet, "Bodies of constant width in arbitrary dimension"
How Round is Your Circle? Solids of constant width
Euclidean solid geometry
Geometric shapes
Constant width | Surface of constant width | [
"Physics",
"Mathematics"
] | 522 | [
"Geometric shapes",
"Euclidean solid geometry",
"Mathematical objects",
"Space",
"Geometric objects",
"Spacetime"
] |
12,404,937 | https://en.wikipedia.org/wiki/Runoff%20model%20%28reservoir%29 | A runoff models or rainfall-runoff model describes how rainfall is converted into runoff in a drainage basin (catchment area or watershed). More precisely, it produces a surface runoff hydrograph in response to a rainfall event, represented by and input as a hyetograph.
Rainfall-runoff models need to be calibrated before they can be used.
A well known runoff model is the linear reservoir, but in practice it has limited applicability.
The runoff model with a non-linear reservoir is more universally applicable, but still it holds only for catchments whose surface area is limited by the condition that the rainfall can be considered more or less uniformly distributed over the area. The maximum size of the watershed then depends on the rainfall characteristics of the region. When the study area is too large, it can be divided into sub-catchments and the various runoff hydrographs may be combined using flood routing techniques.
Linear reservoir
The hydrology of a linear reservoir (figure 1) is governed by two equations.
flow equation: Q = A·S, with units [L/T], where L is length (e.g. mm) and T is time (e.g. h, day)
continuity or water balance equation: R − Q = dS/dT, with units [L/T]
where:
Q is the runoff or discharge
R is the effective rainfall or rainfall excess or recharge
A is the constant reaction factor or response factor with unit [1/T]
S is the water storage with unit [L]
dS is a differential or small increment of S
dT is a differential or small increment of T
Runoff equation
A combination of the two previous equations results in a differential equation, whose solution is:
Q2 = Q1 exp { − A (T2 − T1) } + R [ 1 − exp { − A (T2 − T1) } ]
This is the runoff equation or discharge equation, where Q1 and Q2 are the values of Q at time T1 and T2 respectively while T2−T1 is a small time step during which the recharge can be assumed constant.
Computing the total hydrograph
Provided the value of A is known, the total hydrograph can be obtained using a successive number of time steps and computing, with the runoff equation, the runoff at the end of each time step from the runoff at the end of the previous time step.
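A minimal sketch of this stepping procedure, assuming the runoff equation given above, an arbitrary reaction factor A, and a made-up daily recharge series (all values in mm/day with unit time steps):

```python
# Step the linear-reservoir runoff equation through a recharge series.
import math

def hydrograph(recharge, a, q0=0.0, dt=1.0):
    """Return the runoff at the end of each time step (same units as recharge)."""
    q, out = q0, []
    decay = math.exp(-a * dt)
    for r in recharge:                      # r assumed constant within each step
        q = q * decay + r * (1.0 - decay)   # runoff equation
        out.append(q)
    return out

rain_excess = [0, 5, 12, 3, 0, 0, 0, 0]     # assumed recharge, mm/day
print([round(q, 2) for q in hydrograph(rain_excess, a=0.3)])
# Runoff rises during recharge and then recedes exponentially.
```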
Unit hydrograph
The discharge may also be expressed as: Q = − dS/dT . Substituting herein the expression of Q in equation (1) gives the differential equation dS/dT = − A·S, of which the solution is: S = exp(− A·t) . Replacing herein S by Q/A according to equation (1), it is obtained that: Q = A exp(− A·t) . This is called the instantaneous unit hydrograph (IUH) because the Q herein equals Q2 of the foregoing runoff equation using R = 0, and taking S as unity which makes Q1 equal to A according to equation (1).
The availability of the foregoing runoff equation eliminates the necessity of calculating the total hydrograph by the summation of partial hydrographs using the IUH as is done with the more complicated convolution method.
Determining the response factor A
When the response factor A can be determined from the characteristics of the watershed (catchment area), the reservoir can be used as a deterministic model or analytical model, see hydrological modelling.
Otherwise, the factor A can be determined from a data record of rainfall and runoff using the method explained below under non-linear reservoir. With this method the reservoir can be used as a black box model.
Conversions
1 mm/day corresponds to 10 m3/day per ha of the watershed
1 L/s per ha corresponds to 8.64 mm/day or 86.4 m3/day per ha
Non-linear reservoir
Unlike the linear reservoir, the non-linear reservoir has a reaction factor A that is not a constant, but a function of S or Q (figures 2, 3).
Normally A increases with Q and S because the higher the water level is the higher the discharge capacity becomes. The factor is therefore called Aq instead of A.
The non-linear reservoir has no usable unit hydrograph.
During periods without rainfall or recharge, i.e. when R = 0, the runoff equation reduces to
Q2 = Q1 exp { − Aq (T2 − T1) }
or, using a unit time step (T2 − T1 = 1) and solving for Aq:
Aq = − ln (Q2/Q1)
Hence, the reaction or response factor Aq can be determined from runoff or discharge measurements using unit time steps during dry spells, employing a numerical method.
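A sketch of that numerical procedure, applied to a made-up dry-spell discharge record with daily (unit) time steps:

```python
# Estimate the reaction factor Aq from discharge pairs during a dry spell.
import math

def reaction_factors(discharges):
    """Aq = -ln(Q2/Q1) for each pair of successive daily values."""
    return [-math.log(q2 / q1) for q1, q2 in zip(discharges, discharges[1:])]

recession = [8.0, 6.1, 4.8, 3.9, 3.2]        # assumed dry-spell discharges, mm/day
print([round(a, 3) for a in reaction_factors(recession)])
# For a non-linear reservoir the values vary with Q; relating Aq to Q
# (as in figure 3) characterises the catchment's response.
```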
Figure 3 shows the relation between Aq (Alpha) and Q for a small valley (Rogbom) in Sierra Leone.
Figure 4 shows observed and simulated or reconstructed discharge hydrograph of the watercourse at the downstream end of the same valley.
Recharge
The recharge, also called effective rainfall or rainfall excess, can be modeled by a pre-reservoir (figure 6) giving the recharge as overflow. The pre-reservoir has the following elements:
a maximum storage (Sm) with unit length [L]
an actual storage (Sa) with unit [L]
a relative storage: Sr = Sa/Sm
a maximum escape rate (Em) with units length/time [L/T]. It corresponds to the maximum rate of evaporation plus percolation and groundwater recharge, which will not take part in the runoff process (figure 5, 6)
an actual escape rate: Ea = Sr·Em
a storage deficiency: Sd = Sm + Ea − Sa
The recharge during a unit time step (T2−T1=1) can be found from R = Rain − Sd
The actual storage at the end of a unit time step is found as Sa2 = Sa1 + Rain − R − Ea, where Sa1 is the actual storage at the start of the time step.
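A sketch of one unit time step of this pre-reservoir bookkeeping, using assumed values for Sm and Em; recharge is clamped at zero here when rainfall does not exceed the storage deficiency:

```python
# One unit time step of the pre-reservoir: rainfall -> recharge (overflow).
def pre_reservoir_step(rain, sa, sm=100.0, em=4.0):
    """Return (recharge R, updated actual storage Sa); sm and em are assumed values."""
    sr = sa / sm                 # relative storage
    ea = sr * em                 # actual escape rate (evaporation, percolation, ...)
    sd = sm + ea - sa            # storage deficiency
    r = max(rain - sd, 0.0)      # recharge is the overflow beyond the deficiency
    sa_new = sa + rain - r - ea  # water balance of the pre-reservoir
    return r, sa_new

sa = 60.0                        # assumed initial actual storage, mm
for rain in [0.0, 20.0, 55.0]:   # assumed daily rainfall, mm
    r, sa = pre_reservoir_step(rain, sa)
    print(rain, round(r, 2), round(sa, 2))
```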
The Curve Number method (CN method) gives another way to calculate the recharge. The initial abstraction herein compares with Sm − Si, where Si is the initial value of Sa.
Nash model
The Nash model uses a series (cascade) of linear reservoirs in which each reservoir empties into the next until the runoff is obtained. For calibration, the model requires considerable research.
Software
Figures 3 and 4 were made with the RainOff program, designed to analyse rainfall and runoff using the non-linear reservoir model with a pre-reservoir. The program also contains an example of the hydrograph of an agricultural subsurface drainage system for which the value of A can be obtained from the system's characteristics.
Raven is a robust and flexible hydrological modelling framework, designed for application to challenging hydrological problems in academia and practice. This fully object-oriented code provides complete flexibility in spatial discretization, interpolation, process representation, and forcing function generation. Models built with Raven can be as simple as a single watershed lumped model with only a handful of state variables to a full semi-distributed system model with physically-based infiltration, snowmelt, and routing. This flexibility encourages stepwise modelling while enabling investigation into critical research issues regarding discretization, numerical implementation, and ensemble simulation of surface water hydrological models. Raven is open source, covered under the Artistic License 2.0.
The SMART hydrological model includes agricultural subsurface drainage flow, in addition to soil and groundwater reservoirs, to simulate the flow path contributions to streamflow.
Vflo is another software program for modeling runoff. Vflo uses radar rainfall and GIS data to generate physics-based, distributed runoff simulation.
The WEAP (Water Evaluation And Planning) software platform models runoff and percolation from climate and land use data, using a choice of linear and non-linear reservoir models.
The RS MINERVE software platform simulates the formation of free surface run-off flow and its propagation in rivers or channels. The software is based on object-oriented programming and allows hydrologic and hydraulic modeling according to a semi-distributed conceptual scheme with different rainfall-runoff model such as HBV, GR4J, SAC-SMA or SOCONT.
IHACRES is a catchment-scale rainfall-streamflow modelling methodology. Its purpose is to assist the hydrologist or water resources engineer to characterise the dynamic relationship between basin rainfall and streamflow.
References
Drainage
Hydrology
Hydrology models
Water management
Land management
Scientific models
Simulation software
Scientific simulation software | Runoff model (reservoir) | [
"Chemistry",
"Engineering",
"Biology",
"Environmental_science"
] | 1,737 | [
"Hydrology",
"Biological models",
"Hydrology models",
"Environmental engineering",
"Environmental modelling"
] |
12,409,041 | https://en.wikipedia.org/wiki/Neurexin | Neurexins (NRXN) are a family of presynaptic cell adhesion proteins that have roles in connecting neurons at the synapse. They are located mostly on the presynaptic membrane and contain a single transmembrane domain. The extracellular domain interacts with proteins in the synaptic cleft, most notably neuroligin, while the intracellular cytoplasmic portion interacts with proteins associated with exocytosis. Neurexin and neuroligin "shake hands," resulting in the connection between the two neurons and the production of a synapse. Neurexins mediate signaling across the synapse, and influence the properties of neural networks by synapse specificity. Neurexins were discovered as receptors for α-latrotoxin, a vertebrate-specific toxin in black widow spider venom that binds to presynaptic receptors and induces massive neurotransmitter release. In humans, alterations in genes encoding neurexins are implicated in autism and other cognitive diseases, such as Tourette syndrome and schizophrenia.
Structure
In mammals, neurexin is encoded by three different genes (NRXN1, NRXN2, and NRXN3) each controlled by two different promoters, an upstream alpha (α) and a downstream beta (β), resulting in alpha-neurexins 1-3 (α-neurexins 1–3) and beta-neurexins 1-3 (β-neurexins 1–3). In addition, there are alternative splicing at 5 sites in α-neurexin and 2 in β-neurexin; more than 2000 splice variants are possible, suggesting its role in determining synapse specificity.
The encoded proteins are structurally similar to laminin, slit, and agrin, other proteins involved in axon guidance and synaptogenesis. α-Neurexins and β-neurexins have identical intracellular domains but different extracellular domains. The extracellular domain of α-neurexin is composed of three neurexin repeats which each contain LNS (laminin, neurexin, sex-hormone binding globulin) – EGF (epidermal growth factor) – LNS domains. N1α binds to a variety of ligands including neuroligins and GABA receptors, though neurons of every receptor type express neurexins. β-Neurexins are shorter versions of α-neurexins, containing only one LNS domain. β-Neurexins (located presynaptically) act as receptors for neuroligin (located postsynaptically). Additionally, β-Neurexin has also been found to play a role in angiogenesis.
The C terminus of the short intracellular section of both types of neurexins binds to synaptotagmin and to the PDZ (postsynaptic density (PSD)-95/discs large/zona-occludens-1) domains of CASK and Mint. These interactions form connections between intracellular synaptic vesicles and fusion proteins. Thus neurexins play an important role in assembling presynaptic and postsynaptic machinery.
Trans-synapse, the extracellular LNS domains have a functional region, the hyper-variable surface, formed by loops carrying 3 splice inserts. This region surrounds a coordinated Ca2+ ion and is the site of neuroligin binding, resulting in a neurexin-neuroligin Ca2+-dependent complex at the junction of chemical synapses.
Expression and function
Neurexins are diffusely distributed in neurons and become concentrated at presynaptic terminals as neurons mature. They have also been found at pancreatic beta islet cells even though the function at this location has yet to be elucidated. There exists a trans-synaptic dialog between neurexin and neuroligin. This bi-directional trigger aids in the formation of synapses and is a key component to modifying the neuronal network. Over-expression of either of these proteins causes an increase in synapse forming sites, thus providing evidence that neurexin plays a functional role in synaptogenesis. Conversely, the blocking of β-neurexin interactions reduces the number of excitatory and inhibitory synapses. It is not clear how exactly neurexin promotes the formation of synapses. One possibility is that actin is polymerized on the tail end of β-neurexin, which traps and stabilizes accumulating synaptic vesicles. This forms a forward feeding cycle, where small clusters of β-neurexins recruit more β-neurexins and scaffolding proteins to form a large synaptic adhesive contact.
Neurexin binding partners
Neurexin-neuroligin binding
The different combinations of neurexin to neuroligin, and alternative splicing of neuroligin and neurexin genes, control binding between neuroligins and neurexins, adding to synapse specificity. Neurexins alone are capable of recruiting neuroligins in postsynaptic cells to a dendritic surface, resulting in clustered neurotransmitter receptors and other postsynaptic proteins and machinery. Their neuroligin partners can induce presynaptic terminals by recruiting neurexins. Synapse formation can therefore be triggered in either direction by these proteins. Neuroligins and neurexins can also regulate formation of glutamatergic (excitatory) synapses, and GABAergic (inhibitory) contacts using a neuroligin link. Regulating these contacts suggests neurexin-neuroligin binding could balance synaptic input, or maintain an optimal ratio of excitatory to inhibitory contacts.
Additional interacting partners
Dystroglycans
Neurexins do not bind only to neuroligin; an additional binding partner is dystroglycan. Dystroglycan is Ca2+-dependent and binds preferentially to α-neurexins on LNS domains that lack splice inserts. In mice, a deletion of dystroglycan causes long-term potentiation impairment and developmental abnormalities similar to muscular dystrophy; however, baseline synaptic transmission is normal.
Neurexophilins
Neurexophilins are also known to bind to neurexins and are present at the synaptic cleft but are not membrane bound. Neurexophilins are Ca2+-independent and bind exclusively to α-neurexins on the second LNS domain. The increased startle responses and impaired motor coordination of neurexophilin knockout mice indicate that neurexophilins have a functional role in certain circuits.
Latrophilins
Latrophilins are adhesion G protein-coupled receptors that reside on the postsynaptic membrane. In mice lacking latrophilins, a loss of excitatory synapses was observed in pyramidal neurons. Latrophilins, in association with neurexin, have been shown to act as postsynaptic recognition molecules for incoming axons.
Cerebellins
Cerebellins are small proteins that are secreted into the synaptic cleft, where they associate with other cerebellins to form a hexamer that binds two neurexins. Cerebellins bind to GluD1 and GluD2 on the postsynaptic side while bound to neurexin presynaptically. GluD1 and GluD2 are homologous to ionotropic glutamate receptors, but function as adhesion molecules instead of glutamate receptors. Despite being present throughout the brain, their function is only known within the cerebellum, the structure they are named after. When cerebellins are removed from the cerebellum, a decrease in parallel fiber synapses is observed, with about half of these synapses lost. Outside of the cerebellum the function of cerebellin is still not clear.
LRRTMs
LRRTM is a postsynaptic protein that binds to neurexin at the same Ca2+ domain that neuroligin does despite having a distinct structure. It has also been found that LRRTM binds AMPA receptors. This is believed to be what causes the loss of excitatory signaling when LRRTM is not present. Much is still not known about LRRTM even though it is the binding partner that binds to neurexin with the highest affinity.
C1q1s
C1q1's structure is similar to that of cerebellin: it is a small secreted protein that associates with multiple copies of itself. In the synaptic cleft, C1q1 binds neurexin on the presynaptic side and BAI3, another adhesion G protein-coupled receptor, on the postsynaptic side. The deletion of C1q1 causes the loss of climbing fibers and of excitatory signaling in general. C1q1s are found broadly throughout the brain, including the prefrontal cortex, amygdala, cerebellum, and potentially other regions.
Species distribution
Members of the neurexin family are found across all animals, including basal metazoans such as porifera (sponges), cnidaria (jellyfish) and ctenophora (comb jellies). Porifera lack synapses so its role in these organisms is unclear.
Homologues of α-neurexin have also been found in several invertebrate species including Drosophila, Caenorhabditis elegans, honeybees and Aplysia. In Drosophila melanogaster, NRXN genes (only one α-neurexin) are critical in the assembly of glutamatergic neuromuscular junctions but are much simpler. Their functional roles in insects are likely similar to those in vertebrates.
Role in synaptic maturation
Neurexin and neuroligin have been found to be active in synapse maturation and adaptation of synaptic strength. Studies in knockout mice show that the trans-synaptic binding team does not increase the number of synaptic sites, but rather increases the strength of the existing synapses. Deletion of the neurexin genes in the mice significantly impaired synaptic function, but did not alter synaptic structure. This is attributed to the impairment of specific voltage gated ion channels. While neuroligin and neurexin are not required for synaptic formation, they are essential components for proper function.
Clinical importance and applications
Recent studies link mutations in genes encoding neurexin and neuroligin to a spectrum of cognitive disorders, such as autism spectrum disorders (ASDs), schizophrenia, and mental retardation. Cognitive diseases remain difficult to understand, as they are characterized by subtle changes in a subgroup of synapses in a circuit rather than impairment of all systems in all circuits. Depending on the circuit, these subtle synapse changes may produce different neurological symptoms, leading to classification of different diseases. Counterarguments to the relationship between cognitive disorders and these mutations exist, prompting further investigation into the underlying mechanisms producing these cognitive disorders.
Autism
Autism is a neurodevelopmental disorder characterized by qualitative deficits in social behavior and communication, often including restricted, repetitive patterns of behavior. It includes a subset of three disorders: childhood disintegrative disorder (CDD), Asperger syndrome (AS), and pervasive developmental disorder – not otherwise specified (PDD-NOS). A small percentage of ASD patients present with single mutations in genes encoding neuroligin-neurexin cell adhesion molecules. Neurexin is crucial to synaptic function and connectivity, as highlighted by the wide spectrum of neurodevelopmental phenotypes in individuals with neurexin deletions. This provides strong evidence that neurexin deletions result in increased risk of ASDs, and indicates synapse dysfunction as the possible site of autism origin. Knockout experiments on α-neurexin II (Nrxn2α) mice by Steven Clapcote and colleagues demonstrate a causal role for the loss of Nrxn2α in the genesis of autism-related behaviors in mice.
Schizophrenia
Schizophrenia is a debilitating neuropsychiatric illness with multiple genes and environmental exposures involved in its genesis. Further research indicates that deletion of the NRXN1 gene increases the risk of schizophrenia. Genomic duplications and deletions on a micro-level – known as copy number variants (CNVs) – often underlie neurodevelopmental syndromes. Genomic-wide scans suggest that individuals with schizophrenia have rare structural variants that deleted or duplicated one or more genes. As these studies only indicate an increased risk, further research is necessary to elucidate the underlying mechanisms of the genesis of cognitive diseases.
Intellectual disability and Tourette syndrome
Similar to schizophrenia, studies have shown that intellectual disability and Tourette syndrome are also associated with NRXN1 deletions. A recent study shows that NRXN genes 1-3 are essential for survival and play a pivotal and overlapping role with each other in neurodevelopment. These genes have been directly disrupted in Tourette syndrome by independent genomic rearrangements. Another study suggests that NLGN4 mutations can be associated with a wide spectrum of neuropsychiatric conditions and that carriers may be affected with milder symptoms.
See also
Cell adhesion molecule
Synaptogenesis
References
External links
Scientists finger neurexin 1 defects in autism
Human proteins
Molecular neuroscience | Neurexin | [
"Chemistry"
] | 2,890 | [
"Molecular neuroscience",
"Molecular biology"
] |
3,294,173 | https://en.wikipedia.org/wiki/Vactrain | A vactrain (or vacuum tube train) is a proposed design for very-high-speed rail transportation. It is a maglev (magnetic levitation) line using partly evacuated tubes or tunnels. Reduced air resistance could permit vactrains to travel at very high (hypersonic) speeds with relatively little power—up to . This is 5–6 times the speed of sound in Earth's atmosphere at sea level.
18th century
In 1799, George Medhurst of London conceived of and patented an atmospheric railway that could convey people or cargo through pressurized or evacuated tubes. The early atmospheric railways and pneumatic tube transport systems (such as the Dalkey Atmospheric Railway) relied on steam power for propulsion.
19th century
In 1888, Michel Verne, son of Jules Verne, imagined a submarine pneumatic tube transport system that could propel a passenger capsule at great speed under the Atlantic Ocean (a transatlantic tunnel) in a short story called "An Express of the Future".
20th century
The vactrain proper was invented by Robert H. Goddard as a freshman at Worcester Polytechnic Institute in the United States in 1904. Goddard subsequently refined the idea in a 1906 short story called "The High-Speed Bet" which was summarized and published in a Scientific American editorial in 1909 called "The Limit of Rapid Transit". Esther, his wife, was granted a US patent for the vactrain in 1950, five years after his death.
In 1909, a Russian professor built the world's first model of his proposed version of the vactrain at Tomsk Polytechnic University. He later published a vactrain concept in 1914 in the book Motion without friction (airless electric way).
In 1955, Polish science-fiction writer Stanisław Lem, in the novel The Magellan Nebula, wrote about an intercontinental vactrain called "organowiec", which moved in a transparent tube at extremely high speed. Later, in April 1962, the vactrain appears in the story "Mercenary" by Mack Reynolds, where he mentions Vacuum Tube Transport in passing.
During the 1970s, a leading vactrain advocate, Robert M. Salter of RAND Corporation, published a series of elaborate engineering articles.
An interview with Robert Salter appeared in the Los Angeles Times (June 11, 1972). He discussed, in detail, the relative ease with which the U.S. government could build a tube shuttle system using technologies available at that time. Maglev being poorly developed at the time, he proposed steel wheels. The chamber's door to the tube would be opened, and enough air admitted behind to accelerate the train into the tube. Gravity would further accelerate the departing train down to cruise level. Rising from cruise level, the arriving train would decelerate by compressing the rarefied air ahead of it, which would be vented. Pumps at the stations would make up for losses due to friction or air escaping around the edges of the train, the train itself requiring no motor. This combination of modified (shallow) gravity train and atmospheric railway propulsion would consume little energy but limit the system to subsonic speeds, hence initial routes of tens or hundreds of miles or kilometers rather than transcontinental distances were proposed.
Trains were to require no couplers, each car being directly welded, bolted, or otherwise firmly connected to the next, the route calling for no more bending than the flexibility of steel could easily handle. At the end of the line, the train would be moved sideways into the end chamber of the return tube. The railway would have both an inner evacuated tube and an outer tunnel. At cruise depth, the space between would have enough water to float the vacuum tube, softening the ride.
A route through the Northeast Megalopolis was laid out, with nine stations, one each in Washington DC, Maryland, Delaware, Pennsylvania, New York, Rhode Island, Massachusetts, and two in Connecticut. Commuter rail systems were mapped for the San Francisco and New York areas, the commuter version having longer, heavier trains, to be propelled less by air and more by gravity than the intercity version. The New York system was to have three lines, terminating in Babylon, Paterson, Huntington, Elizabeth, White Plains, and St. George.
Salter pointed out how such a system would help reduce the environmental damage being done to the atmosphere by aviation and surface transportation. He called underground Very High Speed Transportation (tube shuttles) his nation's "logical next step". The plans were never taken to the next stage.
At the time these reports were published, national prestige was an issue as Japan had been operating its showcase shinkansen for several years and maglev train research was hot technology. The American Planetran would establish a transcontinental subway service in the United States and provide a commute from Los Angeles to New York City in one hour. The tunnel would be buried to a depth of several hundred feet in solid rock formations. Construction would make use of lasers to ensure alignment and use tungsten probes to melt through igneous rock formations. The tunnel would maintain a partial vacuum to minimize drag. A trip would subject passengers to accelerations of up to 1.4 times that of gravity, requiring the use of gimballed compartments. Enormous construction costs (estimated as high as US$1 trillion) were the primary reason why Salter's proposal was never built.
Starting in the late 1970s and early 1980s, the Swissmetro was proposed to leverage the invention of the experimental German Transrapid maglev train, and operate in large tunnels reduced to the pressure altitude of at which the Concorde SST was certified to fly.
In the 1980s, Frank P. Davidson, a founder and chairman of the Channel Tunnel project, and Japanese engineer tackled the transoceanic problems with a proposal to float a tube above the ocean floor, anchored with cables (a submerged floating tunnel). The transit tube would remain at least below the ocean surface to avoid water turbulence.
On November 18, 1991, Gerard K. O'Neill filed a patent application for a vactrain system. He called the company he wanted to form VSE International, for velocity, silence, and efficiency. However, the concept itself he called Magnetic Flight. The vehicles, instead of running on a pair of tracks, would be elevated using electromagnetic force by a single track within a tube (permanent magnets in the track, with variable magnets on the vehicle), and propelled by electromagnetic forces through tunnels. He estimated the trains could reach speeds of up to – about five times faster than a jet airliner – if the air was evacuated from the tunnels. To obtain such speeds, the vehicle would accelerate for the first half of the trip, and then decelerate for the second half of the trip. The acceleration was planned to be a maximum of about one-half of the force of gravity. O'Neill planned to build a network of stations connected by these tunnels, but he died two years before his first patent on it was granted.
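The accelerate-for-half, decelerate-for-half profile is straightforward to quantify: with constant acceleration a over the first half of a route of length d, the peak (midpoint) speed is √(a·d) and the total trip time is 2·√(d/a). The sketch below uses an assumed 1,000 km route together with the roughly half-g acceleration mentioned above; the route length is an illustrative input, not a value from O'Neill's patent.

```python
import math

G = 9.81  # m/s^2

def trip_profile(distance_m: float, accel_m_s2: float):
    """Peak speed and trip time for an accelerate-half / decelerate-half run."""
    peak_speed = math.sqrt(accel_m_s2 * distance_m)        # v^2 = 2*a*(d/2)
    trip_time = 2.0 * math.sqrt(distance_m / accel_m_s2)   # two symmetric halves
    return peak_speed, trip_time

# Illustrative inputs only: a 1,000 km route at roughly half a g.
v, t = trip_profile(1_000_000, 0.5 * G)
print(f"peak speed ~{v:,.0f} m/s, trip time ~{t / 60:.1f} min")
```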
21st century
In 2001, James R. Powell, a co-inventor of superconducting maglev in the 1960s, began leading an investigation into the prospect of using a maglev vactrain for space launch, dubbed StarTram. Theoretically, this would incur two orders of magnitude less cost than current rockets. The StarTram proposal would have vehicles reach up to within a lengthy acceleration tunnel.
ET3 claims to have carried out work that resulted in a patent on "evacuated tube transport technology", granted in 2009. The company presented its idea on a public stage in 2013.
In August 2013 Elon Musk, CEO of Tesla and SpaceX, published the Hyperloop Alpha paper, proposing and examining a route running from the Los Angeles region to the San Francisco Bay Area, roughly following the Interstate 5 corridor. The Hyperloop concept has been explicitly "open-sourced" by Musk and SpaceX, and others have been encouraged to take the ideas and further develop them.
To that end, a few companies have been formed, and several interdisciplinary student-led teams are working to advance the technology. SpaceX built an approximately subscale track for its pod design competition at its headquarters in Hawthorne, California.
See also
Beach Pneumatic Transit
References
External links
Worcester Polytechnic Institute page discussing Goddard's achievements
Vac Trains at Orion's Arm
Rail Journal
Popular science
Hydrogen Tube Vehicle
China Plans 1,000 km/h Super Train
Cargocap
High-speed rail
Proposed infrastructure
Vacuum systems | Vactrain | [
"Physics",
"Engineering"
] | 1,715 | [
"Vacuum systems",
"Vacuum",
"Matter"
] |
3,294,483 | https://en.wikipedia.org/wiki/Shell%20in%20situ%20conversion%20process | The Shell in situ conversion process (Shell ICP) is an in situ shale oil extraction technology to convert kerogen in oil shale to shale oil. It is developed by the Shell Oil Company.
History
Shell's in situ conversion process has been under development since the early 1980s. In 1997, the first small-scale test was conducted on the Mahogany property test site, located west of Denver on Colorado's Western Slope in the Piceance Creek Basin. Since 2000, additional research and development activities have continued as part of the Mahogany Research Project. The oil shale heating at Mahogany started in early 2004. From this test site, Shell has recovered of shale oil.
Process
The process heats sections of the vast oil shale field in situ, releasing the shale oil and oil shale gas from the rock so that it can be pumped to the surface and made into fuel. In this process, a freeze wall is first constructed to isolate the processing area from surrounding groundwater. To maximize the functionality of the freeze walls, adjacent working zones will be developed in succession. Wells, eight feet apart, are drilled and filled with a circulating super-chilled liquid to cool the ground to . Water is then removed from the working zone. Heating and recovery wells are drilled at intervals within the working zone. Electrical heating elements are lowered into the heating wells and used to heat the oil shale to between and over a period of approximately four years. Kerogen in the oil shale is slowly converted into shale oil and gases, which then flow to the surface through recovery wells.
Energy consumption
A RAND study in 2005 estimated that production of of oil (5.4 million tons/year) would theoretically require a dedicated power generating capacity of 1.2 gigawatts (10 billion kWh/year), assuming deposit richness of per ton, with 100% pyrolysis efficiency and 100% extraction of pyrolysis products. If this amount of electricity were to be generated by a coal-fired power plant, it would consume five million tons of coal annually (about 2.2 million toe).
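The two figures in parentheses are related by a simple conversion: a generating capacity run for a full year yields capacity × 8,760 hours of energy. A minimal check, assuming continuous (100% capacity factor) operation:

```python
HOURS_PER_YEAR = 8_760

def annual_kwh(capacity_gw: float, capacity_factor: float = 1.0) -> float:
    """Annual electrical energy (kWh) from a given generating capacity (GW)."""
    return capacity_gw * 1e6 * HOURS_PER_YEAR * capacity_factor  # 1 GW = 1e6 kW

print(f"{annual_kwh(1.2):.2e} kWh/year")  # ~1.05e10, i.e. roughly 10 billion kWh
```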
In 2006, Shell estimated that over the project life cycle, for every unit of energy consumed, three to four units would be produced. Such an "energy returned on energy invested" would be significantly better than that achieved in the Mahogany trials. For the 1996 trial, Shell applied 440,000 kWh (which would require about 96 toe of energy input in a coal-fired plant) to generate of oil (37 toe output).
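The "about 96 toe" figure follows from converting the trial's electricity use back into primary energy. A minimal sketch, assuming 1 toe ≈ 11,630 kWh of thermal energy and an illustrative 39% coal-plant efficiency (the efficiency is an assumption made here, not a number from the Shell or RAND sources):

```python
KWH_PER_TOE = 11_630          # thermal energy in one tonne of oil equivalent
PLANT_EFFICIENCY = 0.39       # assumed coal-plant electrical efficiency

def electricity_to_coal_toe(kwh_electric: float) -> float:
    """Thermal energy (in toe) a coal plant must burn to deliver kwh_electric."""
    return kwh_electric / PLANT_EFFICIENCY / KWH_PER_TOE

input_toe = electricity_to_coal_toe(440_000)   # electricity used in the trial
output_toe = 37                                 # oil produced, from the text
print(f"input ~{input_toe:.0f} toe, output {output_toe} toe, "
      f"ratio {output_toe / input_toe:.2f}")
```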
Environmental impacts
Shell's underground conversion process requires significant development on the surface. The separation between drilled wells is less than five meters and wells must be connected by electrical wiring and by piping to storage and processing facilities. Shell estimates that the footprint of extraction operations would be similar to that for conventional oil and gas drilling. However, the dimensions of Shell's 2005 trial indicate that a much larger footprint is required. Production of 50,000 bbl/day would require that land be developed at a rate on the order of per year.
Extensive water use and the risk of groundwater pollution are the technology's greatest challenges.
Current implementations
In 2006, Shell received a Bureau of Land Management lease to pursue a large demonstration with a capacity of ; Shell has since dropped those plans and is planning a test based on ICP that would produce a minimum total of , together with nahcolite, over a seven-year period.
In Israel, IEI, a subsidiary of IDT Corp. is planning a shale pilot based on ICP technology. The project would produce a total of 1,500 barrels. However, IEI has also announced that any subsequent projects would not use ICP technology, but would instead utilize horizontal wells and hot gas heating methods.
In Jordan, Shell subsidiary JOSCO plans to use ICP technology to achieve commercial production by the "late 2020s." In October, 2011, it was reported that JOSCO had drilled more than 100 test holes over the prior two years, apparently for the sake of testing shale samples.
The Mahogany Oil Shale Project was abandoned by Shell in 2013 due to unfavorable project economics.
See also
Chevron CRUSH
ExxonMobil Electrofrac
References
External links
Mahogany Research Project
Oil shale technology
Shell plc
1997 introductions | Shell in situ conversion process | [
"Chemistry"
] | 857 | [
"Petroleum technology",
"Oil shale technology",
"Synthetic fuel technologies"
] |
3,295,074 | https://en.wikipedia.org/wiki/Platinum%20silicide | Platinum silicide, also known as platinum monosilicide, is the inorganic compound with the formula PtSi. It is a semiconductor that turns into a superconductor when cooled to 0.8 K.
Structure and bonding
The crystal structure of PtSi is orthorhombic, with each silicon atom having six neighboring platinum atoms. The distances between the silicon and the platinum neighbors are as follows: one at a distance of 2.41 angstroms, two at a distance of 2.43 angstroms, one at a distance of 2.52 angstroms, and the final two at a distance of 2.64 angstroms. Each platinum atom has six silicon neighbors at the same distances, as well as two platinum neighbors, at distances of 2.87 and 2.90 angstroms. All of the distances over 2.50 angstroms are considered too long to be genuinely involved in the bonding interactions of the compound. As a result, it has been shown that two sets of covalent bonds make up the bonding in the compound. One set is the three-center Pt–Si–Pt bonds, and the other set the two-center Pt–Si bonds. Each silicon atom in the compound participates in one three-center bond and in two-center bonds. The thinnest film of PtSi would consist of two alternating planes of atoms, a single sheet of orthorhombic structures. Thicker layers are formed by stacking pairs of the alternating sheets. The mechanism of bonding in PtSi is more similar to that of pure silicon than to that of pure platinum or , though experimentation has revealed metallic bonding character in PtSi that pure silicon lacks.
Synthesis
Methods
PtSi can be synthesized in several ways. The standard method involves depositing a thin film of pure platinum onto silicon wafers and heating it in a conventional furnace at 450–600 °C for half an hour in inert ambients. The process cannot be carried out in an oxygenated environment, as this results in the formation of an oxide layer on the silicon, preventing PtSi from forming.
A secondary technique for synthesis requires a sputtered platinum film deposited on a silicon substrate. Due to the ease with which PtSi can become contaminated by oxygen, several variations of the methods have been reported. Rapid thermal processing has been shown to increase the purity of the PtSi layers formed. Lower temperatures (200–450 °C) were also found to be successful, and higher temperatures produce thicker PtSi layers, though temperatures in excess of 950 °C formed PtSi with increased resistivity due to clusters of large PtSi grains.
Kinetics
Regardless of the synthesis method employed, PtSi forms in the same way. When pure platinum is first heated with silicon, the platinum-rich silicide Pt2Si is formed. Once all the available Pt and Si are used and the only available surfaces are Pt2Si, the silicide will begin the slower reaction of converting into PtSi. The activation energy for the Pt2Si reaction is around 1.38 eV, while it is 1.67 eV for PtSi.
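The two activation energies can be compared through the Arrhenius factor exp(−Ea/kT). The sketch below evaluates only the exponential terms at an anneal temperature chosen from the range quoted above; the pre-exponential factors, which also differ between the two reactions, are deliberately ignored, so the ratio is indicative only.

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_factor(ea_ev: float, temp_k: float) -> float:
    """exp(-Ea / kT); pre-exponential factors are deliberately omitted."""
    return math.exp(-ea_ev / (K_B * temp_k))

T = 723.0  # ~450 degC anneal, an illustrative choice within the quoted range
f_initial = arrhenius_factor(1.38, T)  # initial platinum-rich silicide (value from the text)
f_ptsi = arrhenius_factor(1.67, T)     # PtSi formation
print(f"exp-factor ratio (initial silicide / PtSi) ~ {f_initial / f_ptsi:.0f}")
```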
Oxygen is extremely detrimental to the reaction, as it will bind preferentially to Pt, limiting the sites available for Pt–Si bonding and preventing silicide formation. An oxygen partial pressure as low as 10⁻⁷ has been found to be sufficient to slow the formation of the silicide. To avoid this issue, inert ambients are used, as well as small annealing chambers, to minimize the amount of potential contamination. The cleanliness of the metal film is also extremely important, and unclean conditions result in poor PtSi synthesis.
In certain cases an oxide layer can be beneficial. When PtSi is used as a Schottky barrier, an oxide layer prevents wear of the PtSi.
Applications
PtSi is a semiconductor and a Schottky barrier with high stability and good sensitivity, and can be used in infrared detection, thermal imaging, or ohmic and Schottky contacts. Platinum silicide was most widely studied and used in the 1980s and 90s, but has become less commonly used due to its low quantum efficiency. PtSi is now most commonly used in infrared detectors, owing to the long wavelengths it can detect. It has also been used in detectors for infrared astronomy. It can operate with good stability up to 0.05 °C. Platinum silicide offers high uniformity of the arrays imaged. The low cost and stability make it suited for preventative maintenance and scientific infrared imaging.
See also
HgCdTe
Indium antimonide
References
Platinum(IV) compounds
Semiconductor materials
Infrared sensor materials
Transition metal silicides | Platinum silicide | [
"Chemistry"
] | 916 | [
"Semiconductor materials"
] |
3,295,601 | https://en.wikipedia.org/wiki/Enzyme%20replacement%20therapy | Enzyme replacement therapy (ERT) is a medical treatment which replaces an enzyme that is deficient or absent in the body. Usually, this is done by giving the patient an intravenous (IV) infusion of a solution containing the enzyme.
ERT is available for some lysosomal storage diseases: Gaucher disease, Fabry disease, MPS I, MPS II (Hunter syndrome), MPS VI and Pompe disease. ERT does not correct the underlying genetic defect, but it increases the concentration of the enzyme that the patient is lacking. ERT has also been used to treat patients with severe combined immunodeficiency (SCID) resulting from an adenosine deaminase deficiency (ADA-SCID).
Other treatment options for patients with enzyme or protein deficiencies include substrate reduction therapy, gene therapy, and bone-marrow derived stem cell transplantation.
History
ERT was developed in 1964 by Christian de Duve and Roscoe Brady. Leading work was done on this subject at the Department of Physiology at the University of Alberta by Mark J. Poznansky and Damyanti Bhardwaj, where a model for enzyme therapy was developed using rats. ERT was not used in clinical practice until 1991, after the FDA gave orphan drug approval for the treatment of Gaucher disease with Alglucerase. ERTs were initially manufactured by isolating the therapeutic enzyme from human placenta. The FDA has approved ERTs that are derived from other human cells, animal cells (i.e. Chinese hamster ovary cells, or CHO cells), and plant cells.
Medical uses
Lysosomal storage diseases are a group of diseases and a main application of ERT. Lysosomes are cellular organelles that are responsible for the metabolism of many different macromolecules and proteins. They use enzymes to break down macromolecules, which are recycled or disposed. As of 2012, there are 50 lysosomal storage diseases, and more are still being discovered. These disorders arise because of genetic mutations that prevent the production of certain enzymes used in the lysosomes. The missing enzyme often leads to a build-up of the substrate within the body. This can result in a variety of symptoms, many of which are severe and can affect the skeleton, brain, skin, heart, and the central nervous system. Increasing the concentration of the missing enzyme within the body has been shown to improve the body's normal cellular metabolic processes and reduce substrate concentration in the body.
ERT has also been successful in treating severe combined immunodeficiency caused by an adenosine deaminase deficiency (ADA-SCID). This is a fatal childhood disease that requires early medical intervention. When the enzyme adenosine deaminase is deficient in the body, the result is a toxic build-up of metabolites that impair lymphocyte development and function. Many ADA deficient children with SCID have been treated with the polyethylene glycol-conjugated adenosine deaminase (PEG-ADA) enzyme. This is a form of ERT that has resulted in healthier, longer lives for patients with ADA-SCID.
Administration
ERT is administered by IV infusion. Typically, infusions occur every week or every two weeks. For some types of ERT, these infusions can occur as infrequently as every four weeks.
Complications
ERT is not a cure for lysosomal storage diseases, and it requires lifelong IV infusions of the therapeutic enzyme. This procedure is expensive; in the United States, it may cost over $200,000 annually. The distribution of the therapeutic enzyme in the body (biodistribution) after these IV infusions is not uniform. The enzyme is less available to certain areas of the body, such as the bones, lungs, and brain. For this reason, many symptoms of lysosomal storage diseases remain untreated by ERT, especially neurological symptoms. Additionally, the efficacy of ERT is often reduced due to an unwanted immune response against the enzyme, which prevents metabolic function.
Other treatments for enzyme deficiencies
Substrate reduction therapy is another method for treating lysosomal storage diseases. In this treatment, the accumulated compounds are inhibited from forming in the body of a patient with a lysosomal storage disease. The accumulated compounds are responsible for the symptoms of these disorders, and they form via a multi-step biological pathway. Substrate reduction therapy uses a small molecule to interrupt this multi-step pathway and inhibit the biosynthesis of these compounds. This type of treatment is taken orally. It does not induce an unwanted immune response, and a single type of small molecule could be used to treat many lysosomal storage diseases. Substrate reduction therapy is FDA approved and there is at least one treatment available on the market.
Gene therapy aims to replace a missing protein in the body through the use of vectors, usually viral vectors. In gene therapy, a gene encoding for a certain protein is inserted into a vector. The vector containing the therapeutic gene is then injected into the patient. Once inside the body the vector introduces the therapeutic gene into host cells, and the protein encoded by the newly inserted gene is then produced by the body's own cells. This type of therapy can correct for the missing protein/enzyme in patients with lysosomal storage diseases.
Hematopoietic stem cell (HSC) transplantation is another treatment for lysosomal storage diseases. HSCs are derived from bone-marrow. These cells have the ability to mature into the many cell types that comprise blood, including red blood cells, platelets, and white blood cells. Patients with enzyme deficiencies often undergo HSC transplantations in which HSCs from a healthy donor are injected. This treatment introduces HSCs that regularly produce the deficient enzyme since they have normal metabolic function. This treatment is often used to treat the central nervous system of patients with some lysosomal storage diseases.
See also
Protein replacement therapy
References
Further reading
Medical treatments
Life sciences industry | Enzyme replacement therapy | [
"Biology"
] | 1,234 | [
"Life sciences industry"
] |
3,296,107 | https://en.wikipedia.org/wiki/Jablonski%20diagram | In molecular spectroscopy, a Jablonski diagram is a diagram that illustrates the electronic states and often the vibrational levels of a molecule, and also the transitions between them. The states are arranged vertically by energy and grouped horizontally by spin multiplicity. Nonradiative transitions are indicated by squiggly arrows and radiative transitions by straight arrows. The vibrational ground states of each electronic state are indicated with thick lines, the higher vibrational states with thinner lines.
The diagram is named after the Polish physicist Aleksander Jabłoński who first proposed it in 1933.
Transitions
When a molecule absorbs a photon, the photon energy is converted and increases the molecule's internal energy level. Likewise, when an excited molecule releases energy, it can do so in the form of a photon. Depending on the energy of the photon, this could correspond to a change in vibrational, electronic, or rotational energy levels. The changes between these levels are called "transitions" and are plotted on the Jablonski diagram.
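The energy scale of a transition can be estimated from the photon's wavelength via E = hc/λ (about 1239.84 eV·nm divided by λ in nm). A small sketch with illustrative wavelengths; the assignment of wavelength ranges to electronic, vibrational and rotational transitions below is a rough rule of thumb, not a statement from the article.

```python
HC_EV_NM = 1239.84  # h*c expressed in eV*nm

def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy in eV for a given vacuum wavelength in nm."""
    return HC_EV_NM / wavelength_nm

examples = {
    "ultraviolet/visible (electronic)": 400,      # nm
    "mid-infrared (vibrational)": 5_000,
    "microwave (rotational)": 1_000_000,
}
for label, wl in examples.items():
    print(f"{label:35s} lambda = {wl:>9,} nm -> E ~ {photon_energy_ev(wl):.4f} eV")
```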
Radiative transitions involve either the absorption or emission of a photon. As mentioned above, these transitions are denoted with solid arrows with their tails at the initial energy level and their tips at the final energy level.
Nonradiative transitions arise through several different mechanisms, all differently labeled in the diagram. Relaxation of the excited state to its lowest vibrational level is called vibrational relaxation. This process involves the dissipation of energy from the molecule to its surroundings, and thus it cannot occur for isolated molecules.
A second type of nonradiative transition is internal conversion (IC), which occurs when a vibrational state of an electronically excited state can couple to a vibrational state of a lower electronic state. The molecule could then subsequently relax further through vibrational relaxation.
A third type is intersystem crossing (ISC); this is a transition to a state with a different spin multiplicity. In molecules with large spin-orbit coupling, intersystem crossing is much more important than in molecules that exhibit only small spin-orbit coupling. ISC can be followed by phosphorescence.
See also
Franck–Condon principle
Grotrian diagram (for atoms)
References
External links
Florida State University: Jablonski diagram primer
Consequences of Light Absorption – The Jablonski Diagram
Diagrams
Molecular physics
Photochemistry
Spectroscopy | Jablonski diagram | [
"Physics",
"Chemistry",
"Astronomy"
] | 484 | [
"Spectroscopy stubs",
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Astronomy stubs",
" molecular",
"nan",
"Atomic",
"Molecular physics stubs",
"Spectroscopy",
"Physical chemistry stubs",
" and optical physics"
] |
3,296,134 | https://en.wikipedia.org/wiki/Morita%20equivalence | In abstract algebra, Morita equivalence is a relationship defined between rings that preserves many ring-theoretic properties. More precisely, two rings like R, S are Morita equivalent (denoted by ) if their categories of modules are additively equivalent (denoted by ). It is named after Japanese mathematician Kiiti Morita who defined equivalence and a similar notion of duality in 1958.
Motivation
Rings are commonly studied in terms of their modules, as modules can be viewed as representations of rings. Every ring R has a natural structure on itself where the module action is defined as the multiplication in the ring, so the approach via modules is more general and gives useful information. Because of this, one often studies a ring by studying the category of modules over that ring. Morita equivalence takes this viewpoint to a natural conclusion by defining rings to be Morita equivalent if their module categories are equivalent. This notion is of interest only when dealing with noncommutative rings, since it can be shown that two commutative rings are Morita equivalent if and only if they are isomorphic.
Definition
Two rings R and S (associative, with 1) are said to be (Morita) equivalent if there is an equivalence of the category of (left) modules over R, R-Mod, and the category of (left) modules over S, S-Mod. It can be shown that the left module categories R-Mod and S-Mod are equivalent if and only if the right module categories Mod-R and Mod-S are equivalent. Further it can be shown that any functor from R-Mod to S-Mod that yields an equivalence is automatically additive.
Examples
Any two isomorphic rings are Morita equivalent.
The ring of n-by-n matrices with elements in R, denoted Mn R, is Morita-equivalent to R for any integer n > 0. Notice that this generalizes the classification of simple artinian rings given by Artin–Wedderburn theory. To see the equivalence, notice that if X is a left R-module then Xn is an Mn R-module where the module structure is given by matrix multiplication on the left of column vectors from X. This allows the definition of a functor from the category of left R-modules to the category of left Mn R-modules. The inverse functor is defined by realizing that for any left Mn R-module there is a left R-module X such that the Mn R-module is obtained from X as described above.
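A minimal sketch of the two functors in this example, written in standard notation (the labels F, G and the matrix unit e11 are chosen here for illustration, not taken from the text):

```latex
% Sketch: mutually inverse equivalences between R-Mod and M_n(R)-Mod
F \colon R\text{-Mod} \longrightarrow M_n(R)\text{-Mod}, \qquad F(X) = X^{n},
\quad \text{with } M_n(R) \text{ acting by matrix multiplication on column vectors;}

G \colon M_n(R)\text{-Mod} \longrightarrow R\text{-Mod}, \qquad G(Y) = e_{11}Y,
\quad \text{where } e_{11} \text{ is the matrix unit with a single } 1 \text{ in entry } (1,1).

\text{There are natural isomorphisms } G(F(X)) \cong X \text{ and } F(G(Y)) \cong Y,
\text{ so } R \text{ and } M_n(R) \text{ are Morita equivalent.}
```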
Criteria for equivalence
Equivalences can be characterized as follows: if F : R-Mod → S-Mod and G : S-Mod → R-Mod are additive (covariant) functors, then F and G are an equivalence if and only if there is a balanced (S,R)-bimodule P such that SP and PR are finitely generated projective generators and there are natural isomorphisms of the functors F(−) ≅ P ⊗R − and of the functors G(−) ≅ HomS(P, −). Finitely generated projective generators are also sometimes called progenerators for their module category.
For every right-exact functor F from the category of left R-modules to the category of left S-modules that commutes with direct sums, a theorem of homological algebra shows that there is an (S,R)-bimodule E such that the functor F is naturally isomorphic to the functor E ⊗R −. Since equivalences are by necessity exact and commute with direct sums, this implies that R and S are Morita equivalent if and only if there are bimodules RMS and SNR such that M ⊗S N ≅ R as (R,R)-bimodules and N ⊗R M ≅ S as (S,S)-bimodules. Moreover, N and M are related via an (S,R)-bimodule isomorphism: N ≅ HomR(M, R).
More concretely, two rings R and S are Morita equivalent if and only if S ≅ End(PR) for a progenerator module PR, which is the case if and only if
S ≅ e(Mn R)e (isomorphism of rings) for some positive integer n and full idempotent e in the matrix ring Mn R.
It is known that if R is Morita equivalent to S, then the ring Z(R) is isomorphic to the ring Z(S), where Z(-) denotes the center of the ring, and furthermore R/J(R) is Morita equivalent to S/J(S), where J(-) denotes the Jacobson radical.
While isomorphic rings are Morita equivalent, Morita equivalent rings can be nonisomorphic. An easy example is that a division ring D is Morita equivalent to all of its matrix rings Mn D, but cannot be isomorphic when n > 1. In the special case of commutative rings, Morita equivalent rings are actually isomorphic. This follows immediately from the comment above, for if R is Morita equivalent to S then R = Z(R) ≅ Z(S) = S.
Properties preserved by equivalence
Many properties are preserved by the equivalence functor for the objects in the module category. Generally speaking, any property of modules defined purely in terms of modules and their homomorphisms (and not to their underlying elements or ring) is a categorical property which will be preserved by the equivalence functor. For example, if F(-) is the equivalence functor from R-Mod to S-Mod, then the R module M has any of the following properties if and only if the S module F(M) does: injective, projective, flat, faithful, simple, semisimple, finitely generated, finitely presented, Artinian, and Noetherian. Examples of properties not necessarily preserved include being free, and being cyclic.
Many ring-theoretic properties are stated in terms of their modules, and so these properties are preserved between Morita equivalent rings. Properties shared between equivalent rings are called Morita invariant properties. For example, a ring R is semisimple if and only if all of its modules are semisimple, and since semisimple modules are preserved under Morita equivalence, an equivalent ring S must also have all of its modules semisimple, and therefore be a semisimple ring itself.
Sometimes it is not immediately obvious why a property should be preserved. For example, using one standard definition of von Neumann regular ring (for all a in R, there exists x in R such that a = axa) it is not clear that an equivalent ring should also be von Neumann regular. However another formulation is: a ring is von Neumann regular if and only if all of its modules are flat. Since flatness is preserved across Morita equivalence, it is now clear that von Neumann regularity is Morita invariant.
The following properties are Morita invariant:
simple, semisimple
von Neumann regular
right (or left) Noetherian, right (or left) Artinian
right (or left) self-injective
quasi-Frobenius
prime, right (or left) primitive, semiprime, semiprimitive
right (or left) (semi-)hereditary
right (or left) nonsingular
right (or left) coherent
semiprimary, right (or left) perfect, semiperfect
semilocal
Examples of properties which are not Morita invariant include commutative, local, reduced, domain, right (or left) Goldie, Frobenius, invariant basis number, and Dedekind finite.
There are at least two other tests for determining whether or not a ring property is Morita invariant. An element e in a ring R is a full idempotent when e² = e and ReR = R.
A property P is Morita invariant if and only if whenever a ring R satisfies P, then so does eRe for every full idempotent e and so does every matrix ring Mn R for every positive integer n;
or
A property P is Morita invariant if and only if: for any ring R and full idempotent e in R, R satisfies P if and only if the ring eRe satisfies P.
Further directions
Dual to the theory of equivalences is the theory of dualities between the module categories, where the functors used are contravariant rather than covariant. This theory, though similar in form, has significant differences because there is no duality between the categories of modules for any rings, although dualities may exist for subcategories. In other words, because infinite-dimensional modules are not generally reflexive, the theory of dualities applies more easily to finitely generated algebras over noetherian rings. Perhaps not surprisingly, the criterion above has an analogue for dualities, where the natural isomorphism is given in terms of the hom functor rather than the tensor functor.
Morita equivalence can also be defined in more structured situations, such as for symplectic groupoids and C*-algebras. In the case of C*-algebras, a stronger type of equivalence, called strong Morita equivalence, is needed to obtain results useful in applications, because of the additional structure of C*-algebras (coming from the involutive *-operation) and also because C*-algebras do not necessarily have an identity element.
Significance in K-theory
If two rings are Morita equivalent, there is an induced equivalence of the respective categories of projective modules since the Morita equivalences will preserve exact sequences (and hence projective modules). Since the algebraic K-theory of a ring is defined (in Quillen's approach) in terms of the homotopy groups of (roughly) the classifying space of the nerve of the (small) category of finitely generated projective modules over the ring, Morita equivalent rings must have isomorphic K-groups.
Notes
Citations
References
Further reading
Module
Ring theory
Adjoint functors
Duality theories
Equivalence (mathematics) | Morita equivalence | [
"Mathematics"
] | 1,973 | [
"Mathematical structures",
"Ring theory",
"Fields of abstract algebra",
"Category theory",
"Duality theories",
"Geometry",
"Module theory"
] |
3,296,181 | https://en.wikipedia.org/wiki/Indium%20arsenide | Indium arsenide, InAs, or indium monoarsenide, is a narrow-bandgap semiconductor composed of indium and arsenic. It has the appearance of grey cubic crystals with a melting point of 942 °C.
Indium arsenide is similar in properties to gallium arsenide and is a direct bandgap material, with a bandgap of 0.35 eV at room temperature.
Indium arsenide is used for the construction of infrared detectors, for the wavelength range of 1.0–3.8 μm. The detectors are usually photovoltaic photodiodes. Cryogenically cooled detectors have lower noise, but InAs detectors can be used in higher-power applications at room temperature as well. Indium arsenide is also used for making diode lasers.
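The upper end of that detection range is tied to the bandgap through the cutoff wavelength λc = hc/Eg. A quick check using the 0.35 eV room-temperature gap quoted above (1239.84 eV·nm is simply hc in those units):

```python
HC_EV_NM = 1239.84  # h*c expressed in eV*nm

def cutoff_wavelength_um(bandgap_ev: float) -> float:
    """Longest wavelength a direct-gap absorber with this bandgap can detect."""
    return HC_EV_NM / bandgap_ev / 1000.0  # nm -> um

print(f"InAs cutoff ~ {cutoff_wavelength_um(0.35):.1f} um")  # ~3.5 um
```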
InAs is well known for its high electron mobility and narrow energy bandgap. It is widely used as a terahertz radiation source as it is a strong photo-Dember emitter.
Quantum dots can be formed in a monolayer of indium arsenide on indium phosphide or gallium arsenide. The mismatch of lattice constants of the materials creates strain in the surface layer, which in turn leads to the formation of the quantum dots. Quantum dots can also be formed in indium gallium arsenide, as indium arsenide dots sitting in the gallium arsenide matrix.
References
Cited sources
External links
Ioffe institute data archive entry
National Compound Semiconductor Roadmap entry for InAs at ONR web site
Arsenides
Indium compounds
III-V semiconductors
III-V compounds
Zincblende crystal structure | Indium arsenide | [
"Chemistry"
] | 349 | [
"Semiconductor materials",
"III-V compounds",
"Inorganic compounds",
"III-V semiconductors"
] |
3,298,657 | https://en.wikipedia.org/wiki/P-compact%20group | In mathematics, in particular algebraic topology, a p-compact group is a homotopical version of a compact Lie group, but with all the local structure concentrated at a single prime p. This concept was introduced in , making precise earlier notions of a mod p finite loop space. A p-compact group has many Lie-like properties like maximal tori and Weyl groups, which are defined purely homotopically in terms of the classifying space, but with the important difference that the Weyl group, rather than being a finite reflection group over the integers, is now a finite p-adic reflection group. They admit a classification in terms of root data, which mirrors the classification of compact Lie groups, but with the integers replaced by the p-adic integers.
Definition
A p-compact group is a pointed space BG, which is local with respect to mod p homology, and such that the pointed loop space G = ΩBG has finite mod p homology. One sometimes also refers to the p-compact group by G, but then one needs to keep in mind that the loop space structure is part of the data (which then allows one to recover BG).
A p-compact group is said to be connected if G is a connected space (in general the group of components of G will be a finite p-group). The rank of a p-compact group is the rank of its maximal torus.
Examples
The p-completion, in the sense of homotopy theory, of (the classifying space of) a compact connected Lie group defines a connected p-compact group. (The Weyl group is just its ordinary Weyl group, now viewed as a p-adic reflection group by tensoring the coweight lattice with the p-adic integers.)
More generally the p-completion of a connected finite loop space defines a p-compact group. (Here the Weyl group will be a p-adic reflection group that need not come from an integral reflection group.)
A rank 1 connected 2-compact group is either the 2-completion of SU(2) or SO(3). A rank 1 connected p-compact group, for p odd, is a "Sullivan sphere", i.e., the p-completion of a (2n − 1)-sphere S2n−1, where n divides p − 1. These spheres turn out to have a unique loop space structure. They were first constructed by Dennis Sullivan in his 1970 MIT notes. (The Weyl group is a cyclic group of order n, acting on the p-adic integers via an nth root of unity.)
Generalizing the rank 1 case, any finite complex reflection group can be realized as the Weyl group of a p-compact group for infinitely many primes, with the primes being determined by whether W can be conjugated into or not, with some embedding of in . The construction of a p-compact group with this Weyl group is then relatively straightforward for large primes where p does not divide the order of W (carried out already in using the Chevalley–Shephard–Todd theorem), but requires more sophisticated methods for the "modular primes" p that divide the order of W.
Classification
The classification of p-compact groups from states that there is a 1-1 correspondence between connected p-compact groups, up to homotopy equivalence, and root data over the p-adic integers, up to isomorphism. This is analogous to the classical classification of connected compact Lie groups, with the p-adic integers replacing the rational integers.
It follows from the classification that any p-compact group can be written as BG = BH × BK, where BH is the p-completion of a compact connected Lie group and BK is a finite direct product of simple exotic p-compact groups, i.e., simple p-compact groups whose Weyl group does not come from an integral reflection group. Simple exotic p-compact groups are again in 1-1 correspondence with irreducible complex reflection groups whose character field can be embedded in the p-adic numbers but is not the field of rational numbers.
For instance, when p = 2 this implies that every connected 2-compact group can be written BG = BH × BDI(4)s, where BH is the 2-completion of the classifying space of a connected compact Lie group, and BDI(4)s denotes s copies of the "Dwyer–Wilkerson 2-compact group" BDI(4) of rank 3, constructed in with Weyl group corresponding to group number 24 in the Shephard–Todd enumeration of complex reflection groups. For p = 3 a similar statement holds, but the new exotic 3-compact group is now group number 12 on the Shephard–Todd list, of rank 2. For primes greater than 3, family 2 on the Shephard–Todd list will contain infinitely many exotic p-compact groups.
Some consequences of the classification
A finite loop space is a pointed space BG such that the loop space ΩBG is homotopy equivalent to a finite CW-complex. The classification of connected p-compact groups implies a classification of connected finite loop spaces: given a connected p-compact group for each prime, all with the same rational type, there is an explicit double coset space of possible connected finite loop spaces whose p-completions are the given p-compact groups. As connected p-compact groups are classified combinatorially, this implies a classification of connected finite loop spaces as well.
Using the classification, one can identify the compact Lie groups inside finite loop spaces, giving a homotopical characterisation of compact connected Lie groups: They are exactly those finite loop spaces that admit an integral maximal torus; this was the so-called maximal torus conjecture. (See and .)
The classification also implies a classification of which graded polynomial rings can occur as the cohomology ring of a space, the so-called Steenrod problem. (See .)
References
Homotopy Lie Groups: A Survey (PDF)
Homotopy Lie Groups and Their Classification (PDF)
Algebraic topology
Topology of Lie groups
Homotopy theory
Lie groups
Manifolds
Group theory
Symmetry | P-compact group | [
"Physics",
"Mathematics"
] | 1,251 | [
"Lie groups",
"Mathematical structures",
"Algebraic topology",
"Space (mathematics)",
"Group theory",
"Topological spaces",
"Fields of abstract algebra",
"Topology",
"Algebraic structures",
"Geometry",
"Manifolds",
"Symmetry"
] |
3,299,423 | https://en.wikipedia.org/wiki/Transversality%20%28mathematics%29 | In mathematics, transversality is a notion that describes how spaces can intersect; transversality can be seen as the "opposite" of tangency, and plays a role in general position. It formalizes the idea of a generic intersection in differential topology. It is defined by considering the linearizations of the intersecting spaces at the points of intersection.
Definition
Two submanifolds of a given finite-dimensional smooth manifold are said to intersect transversally if at every point of intersection, their separate tangent spaces at that point together generate the tangent space of the ambient manifold at that point. Manifolds that do not intersect are vacuously transverse. If the manifolds are of complementary dimension (i.e., their dimensions add up to the dimension of the ambient space), the condition means that the tangent space to the ambient manifold is the direct sum of the two smaller tangent spaces. If an intersection is transverse, then the intersection will be a submanifold whose codimension is equal to the sum of the codimensions of the two manifolds. In the absence of the transversality condition the intersection may fail to be a submanifold, having some sort of singular point.
In particular, this means that transverse submanifolds of complementary dimension intersect in isolated points (i.e., a 0-manifold). If both submanifolds and the ambient manifold are oriented, their intersection is oriented. When the intersection is zero-dimensional, the orientation is simply a plus or minus for each point.
One notation for the transverse intersection of two submanifolds L1 and L2 of a given manifold M is L1 ⋔ L2. This notation can be read in two ways: either as "L1 and L2 intersect transversally" or as an alternative notation for the set-theoretic intersection of L1 and L2 when that intersection is transverse. In this notation, the definition of transversality reads: L1 ⋔ L2 if and only if, for every point p in L1 ∩ L2, TpL1 + TpL2 = TpM.
Transversality of maps
The notion of transversality of a pair of submanifolds is easily extended to transversality of a submanifold and a map to the ambient manifold, or to a pair of maps to the ambient manifold, by asking whether the pushforwards of the tangent spaces along the preimage of points of intersection of the images generate the entire tangent space of the ambient manifold. If the maps are embeddings, this is equivalent to transversality of submanifolds.
Meaning of transversality for different dimensions
Suppose we have transverse maps f1 : N1 → M and f2 : N2 → M, where N1, N2 and M are manifolds with dimensions n1, n2 and m respectively.
The meaning of transversality differs a lot depending on the relative dimensions of N1, N2 and M. The relationship between transversality and tangency is clearest when n1 + n2 = m.
We can consider three separate cases:
When n1 + n2 < m, it is impossible for the images of N1's and N2's tangent spaces to span M's tangent space at any point. Thus any intersection between the images of f1 and f2 cannot be transverse. However, non-intersecting manifolds vacuously satisfy the condition, so they can be said to intersect transversely.
When n1 + n2 = m, the images of N1's and N2's tangent spaces must sum directly to M's tangent space at any point of intersection. Their intersection thus consists of isolated signed points, i.e. a zero-dimensional manifold.
When n1 + n2 > m, this sum needn't be direct. In fact it cannot be direct if f1 and f2 are immersions at their point of intersection, as happens in the case of embedded submanifolds. If the maps are immersions, the intersection of their images will be a manifold of dimension n1 + n2 − m.
Intersection product
Given any two smooth submanifolds, it is possible to perturb either of them by an arbitrarily small amount such that the resulting submanifold intersects transversally with the fixed submanifold. Such perturbations do not affect the homology class of the manifolds or of their intersections. For example, if manifolds of complementary dimension intersect transversally, the signed sum of the number of their intersection points does not change even if we isotope the manifolds to another transverse intersection. (The intersection points can be counted modulo 2, ignoring the signs, to obtain a coarser invariant.) This descends to a bilinear intersection product on homology classes of any dimension, which is Poincaré dual to the cup product on cohomology. Like the cup product, the intersection product is graded-commutative.
Examples of transverse intersections
The simplest non-trivial example of transversality is of arcs in a surface. An intersection point between two arcs is transverse if and only if it is not a tangency, i.e., their tangent lines inside the tangent plane to the surface are distinct.
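A concrete instance of this criterion, written out for two curves through the origin of the plane (the particular curves are chosen here purely for illustration):

```latex
% Sketch: transverse versus tangential intersection in M = \mathbb{R}^2
\text{Let } L_1 = \{\, y = 0 \,\}. \text{ For } L_2 = \{\, y = x \,\}:\quad
T_0 L_1 = \operatorname{span}\{(1,0)\},\; T_0 L_2 = \operatorname{span}\{(1,1)\},\;
T_0 L_1 + T_0 L_2 = \mathbb{R}^2, \text{ so } L_1 \pitchfork L_2 \text{ at } 0.

\text{For } L_2' = \{\, y = x^2 \,\}:\quad
T_0 L_2' = \operatorname{span}\{(1,0)\} = T_0 L_1,\;
T_0 L_1 + T_0 L_2' \neq \mathbb{R}^2, \text{ so the intersection at } 0 \text{ is a tangency, not transverse.}
```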
In a three-dimensional space, two curves can be transverse only when they have empty intersection, since their tangent spaces could generate at most a two-dimensional space. Curves transverse to surfaces intersect in points, and surfaces transverse to each other intersect in curves. Curves that are tangent to a surface at a point (for instance, curves lying on a surface) do not intersect the surface transversally.
Here is a more specialised example: suppose that G is a simple Lie group and 𝔤 is its Lie algebra. By the Jacobson–Morozov theorem every nilpotent element e in 𝔤 can be included into an sl2-triple (e, h, f). The representation theory of sl2 tells us that 𝔤 = [e, 𝔤] ⊕ z(f), where z(f) denotes the centralizer of f in 𝔤. The space [e, 𝔤] is the tangent space at e to the adjoint orbit of e, and so the affine space e + z(f) intersects the orbit of e transversally. The space e + z(f) is known as the "Slodowy slice" after Peter Slodowy.
Applications
Optimal control
In fields utilizing the calculus of variations or the related Pontryagin maximum principle, the transversality condition is frequently used to control the types of solutions found in optimization problems. For example, it is a necessary condition for solution curves to problems of the form:
Minimize ∫ f(t, x(t), x′(t)) dt, where one or both of the endpoints of the curve are not fixed.
In many of these problems, the solution satisfies the condition that the solution curve should cross transversally the nullcline or some other curve describing terminal conditions.
Smoothness of solution spaces
Using Sard's theorem, whose hypothesis is a special case of the transversality of maps, it can be shown that transverse intersections between submanifolds of a space of complementary dimensions or between submanifolds and maps to a space are themselves smooth submanifolds. For instance, if a smooth section of an oriented manifold's tangent bundle—i.e. a vector field—is viewed as a map from the base to the total space, and intersects the zero-section (viewed either as a map or as a submanifold) transversely, then the zero set of the section—i.e. the singularities of the vector field—forms a smooth 0-dimensional submanifold of the base, i.e. a set of signed points. The signs agree with the indices of the vector field, and thus the sum of the signs—i.e. the fundamental class of the zero set—is equal to the Euler characteristic of the manifold. More generally, for a vector bundle over an oriented smooth closed finite-dimensional manifold, the zero set of a section transverse to the zero section will be a submanifold of the base of codimension equal to the rank of the vector bundle, and its homology class will be Poincaré dual to the Euler class of the bundle.
An extremely special case of this is the following: if a differentiable function from the reals to the reals has nonzero derivative at a zero of the function, then the zero is simple, i.e. the graph is transverse to the x-axis at that zero; a zero derivative would mean a horizontal tangent to the curve, which would agree with the tangent space to the x-axis.
For an infinite-dimensional example, the d-bar operator is a section of a certain Banach space bundle over the space of maps from a Riemann surface into an almost-complex manifold. The zero set of this section consists of holomorphic maps. If the d-bar operator can be shown to be transverse to the zero-section, this moduli space will be a smooth manifold. These considerations play a fundamental role in the theory of pseudoholomorphic curves and Gromov–Witten theory. (Note that for this example, the definition of transversality has to be refined in order to deal with Banach spaces!)
Grammar
"Transversal" is a noun; the adjective is "transverse."
quote from J.H.C. Whitehead, 1959
See also
Transversality theorem
Notes
References
Differential topology
Calculus of variations
Geometry | Transversality (mathematics) | [
"Mathematics"
] | 1,758 | [
"Topology",
"Differential topology",
"Geometry"
] |
5,900,871 | https://en.wikipedia.org/wiki/Onium%20ion | In chemistry, an onium ion is a cation formally obtained by the protonation of mononuclear parent hydride of a pnictogen (group 15 of the periodic table), chalcogen (group 16), or halogen (group 17). The oldest-known onium ion, and the namesake for the class, is ammonium, , the protonated derivative of ammonia, .
The name onium is also used for cations that would result from the substitution of hydrogen atoms in those ions by other groups, such as organic groups or halogens; an example is tetraphenylphosphonium, . The substituent groups may be divalent or trivalent, yielding ions such as iminium and nitrilium.
A simple onium ion has a charge of +1. A larger ion that has two onium ion subgroups is called a double onium ion, and has a charge of +2. A triple onium ion has a charge of +3, and so on.
Compounds of an onium cation and some other anion are known as onium compounds or onium salts.
Onium ions and onium compounds are inversely analogous to ate ions and ate complexes:
Lewis bases form onium ions when the central atom gains one more bond and becomes a positive cation.
Lewis acids form ate ions when the central atom gains one more bond and becomes a negative anion.
Simple onium cations (hydrides with no substitutions)
Group 13 (boron group) onium cations
boronium cation, (protonated borane)
further boronium cations, (protonated boranes)
Group 14 (carbon group) onium cations
carbonium ions (protonated hydrocarbons) have a pentacoordinated carbon atom with a +1 charge.
alkanium cations, (protonated alkanes)
methanium, (protonated methane) (Sometimes called carbonium, because it is the simplest member of that class, but that use is deprecated because of multiple definitions. Sometimes called methonium, but methonium also has multiple definitions. Abundant in outer space.)
ethanium, (protonated ethane)
propanium, (propane protonated on an unspecified carbon)
propylium, or propan-1-ylium (propane protonated on an end carbon)
propan-2-ylium (propane protonated on the middle carbon)
butanium, (butane protonated on an unspecified carbon)
n-butanium (n-butane protonated on an unspecified carbon)
n-butylium, or n-butan-1-ylium (n-butane protonated on an end carbon)
n-butan-2-ylium (n-butane protonated on a middle carbon)
isobutanium (isobutane protonated on an unspecified carbon)
isobutylium, or isobutan-1-ylium (isobutane protonated on an end carbon)
isobutan-2-ylium (isobutane protonated on the middle carbon)
octonium or octanium, (protonated octane)
silanium (sometimes silonium), (protonated silane) (should not be called siliconium)
disilanium, (protonated disilane)
further silanium cations, (protonated silanes)
germonium, (protonated germane)
stannonium, (protonated ) (not protonated stannane )
plumbonium, (protonated ) (not protonated plumbane )
flerovonium, (protonated ) (not protonated flerovane )
Group 15 (pnictogen) onium cations
ammonium (IUPAC name azanium), (protonated ammonia (IUPAC name azane))
phosphonium, (protonated phosphine)
arsonium, (protonated arsine)
stibonium, (protonated stibine)
bismuthonium, (protonated bismuthine)
moscovonium, (protonated moscovine)
Group 16 (chalcogen) onium cations
oxonium, (protonated water (IUPAC name oxidane). Oxonium is better known as hydronium, though hydronium implies a solvated or hydrated proton. It may also be called hydroxonium.)
sulfonium, (protonated hydrogen sulfide)
selenonium, (protonated hydrogen selenide)
telluronium, (protonated hydrogen telluride)
polononium, (protonated hydrogen polonide)
livermoronium, (protonated hydrogen livermoride)
Hydrogen onium cation
hydrogenonium, better known as trihydrogen cation, (protonated molecular or diatomic hydrogen), found in ionized hydrogen and interstellar space
Group 17 (halogen) onium cations, halonium ions, (protonated hydrogen halides)
fluoronium, (protonated hydrogen fluoride)
chloronium, (protonated hydrogen chloride)
bromonium, (protonated hydrogen bromide)
iodonium, (protonated hydrogen iodide)
astatonium, (protonated hydrogen astatide)
tennessonium, (protonated hydrogen tennesside)
Pseudohalogen onium cations
aminodiazonium, (protonated hydrogen azide)
methylidyneammonium and hydrocyanonium, , isomers (protonated hydrogen cyanide)
Group 18 (noble gas) onium cations
hydrohelium or helonium, better known as helium hydride ion, (protonated helium)
neonium, (protonated neon)
argonium, (protonated argon)
kryptonium, (protonated krypton)
xenonium, (protonated xenon)
radonium, (protonated radon)
oganessonium (protonated oganesson)
Onium cations with monovalent substitutions
primary ammonium cations, or (protonated primary amines)
hydroxylammonium, (protonated hydroxylamine)
methylammonium, (protonated methylamine)
ethylammonium, (protonated ethylamine)
hydrazinium, or diazanium, (protonated hydrazine, a.k.a. diazane)
anilinium (a.k.a. phenylammonium), (protonated aniline, a.k.a. phenylamine, aminobenzene)
secondary ammonium cations, (protonated secondary amines)
dimethylammonium (sometimes dimethylaminium), (protonated dimethylamine)
diethylammonium (sometimes diethylaminium), (protonated diethylamine)
ethylmethylammonium, (protonated ethylmethylamine)
diethanolammonium (sometimes diethanolaminium), (protonated diethanolamine)
tertiary ammonium cations, (protonated tertiary amines)
trimethylammonium (protonated trimethylamine)
triethylammonium (protonated triethylamine)
quaternary ammonium cations, or
tetrafluoroammonium,
tetramethylammonium,
tetraethylammonium,
tetrapropylammonium,
tetrabutylammonium, or abbreviated
trimethyl ammonium compounds,
didecyldimethylammonium,
pentamethylhydrazinium,
quaternary phosphonium cations, or
tetraphenylphosphonium,
quaternary arsonium cations, or
tetraphenylarsonium,
quaternary stibonium cations, or
tetraphenylstibonium,
primary oxonium cations, (protonated alcohols )
alkyloxonium cations (protonated alcohols)
methyloxonium, (protonated methanol)
ethyloxonium, (protonated ethanol)
dioxidanonium (hydroxylhydronium), (protonated hydrogen peroxide)
secondary oxonium cations, (protonated ethers )
dialkyloxonium cations (protonated ethers)
dimethyloxonium, (protonated dimethyl ether)
tertiary oxonium cations,
trifluorooxonium, (hypothetical)
trimethyloxonium,
triethyloxonium,
oxatriquinacene, (cyclic oxonium ion)
oxatriquinane, (cyclic oxonium ion)
primary sulfonium cations, (protonated thiols )
secondary sulfonium cations, (protonated thioethers )
dimethylsulfonium, (protonated dimethyl sulfide)
tertiary sulfonium cations,
trimethylsulfonium,
tertiary selenonium cations,
triphenylselenonium,
tertiary telluronium cations,
triphenyltelluronium,
primary fluoronium cations, (protonated fluorides RF)
secondary fluoronium cations,
dichlorofluoronium,
secondary iodonium cations,
diphenyliodonium,
Onium cations with polyvalent substitutions
secondary ammonium cations having one double-bonded substitution,
diazenium, (protonated diazene)
guanidinium, (protonated guanidine) (has a resonance structure and a planar molecular geometry)
tertiary ammonium cations having one triple-bonded substitution, R≡NH+
nitrilium, (protonated nitrile)
diazonium or diazynium, (protonated nitrogen, in other words, protonated diazyne)
cyclic tertiary ammonium cations where nitrogen is a member of a ring, (the ring may be aromatic)
pyridinium, (protonated pyridine)
quaternary ammonium cations having one double-bonded substitution and two single-bonded substitutions,
iminium, (substituted protonated imine)
diazenium, (substituted protonated diazene)
thiazolium, (substituted protonated thiazole)
quaternary ammonium cations having two double-bonded substitutions,
nitronium,
bis(triphenylphosphine)iminium,
quaternary ammonium cations having one triple-bonded substitution and one single-bonded substitution,
diazonium, (substituted protonated nitrogen, in other words, substituted protonated diazyne)
nitrilium, (substituted protonated nitrile)
tertiary oxonium cations having one triple-bonded substitution,
acylium ions,
nitrosonium,
cyclic tertiary oxonium cations where oxygen is a member of a ring, (the ring may be aromatic)
pyrylium,
tertiary sulfonium cations having one triple-bonded substitution,
thionitrosyl,
dihydroxyoxoammonium, (protonated nitric acid)
trihydroxyoxosulfonium, (protonated sulfuric acid)
Double onium dications
hydrazinediium or hydrazinium(2+) dication, (doubly protonated hydrazine, in other words, doubly protonated diazane)
diazenediium cation, (doubly protonated diazene)
diazynediium cation, (doubly protonated dinitrogen, in other words, doubly protonated diazyne)
Enium cations
The extra bond is added to a less-common parent hydride, a carbene analog, typically named -ene or -ylene, which is neutral with 2 fewer bonds than the more-common hydride, typically named -ane or -ine.
borenium cations, (protonated borylenes a.k.a. boranylidenes)
carbenium cations, (protonated carbenes) have a tricoordinated carbon atom with a +1 charge.
alkenium cations, (n ≥ 2) (protonated alkenes)
methenium cation, (protonated methylene)
ethenium, (protonated ethene)
benzenium, (protonated benzene)
tropylium, (protonated tropylidene)
silylium cations, (protonated silylenes)
nitrenium cations, (protonated nitrenes)
phosphinidenium cations, (protonated phosphinidene)
mercurinium cations, (protonated organomercury compounds; formed as intermediates in oxymercuration reactions)
Substituted eniums
diphenylcarbenium, (di-substituted methenium)
triphenylcarbenium, (tri-substituted methenium)
Ynium cations
carbynium ions (protonated carbynes) have a carbon atom with a +1 charge.
alkynium cations, (n ≥ 2) (protonated alkynes)
methynium cation, (protonated methylidyne radical)
ethynium, (protonated ethyne)
See also
Carbonium ion
Lyonium ion, a protonated solvent molecule
Lyate ion, a deprotonated solvent molecule
References
External links
Ions and Radicals, Queen Mary University of London
Cations
Chemical nomenclature | Onium ion | [
"Physics",
"Chemistry"
] | 2,807 | [
"Cations",
"nan",
"Ions",
"Matter"
] |
5,902,060 | https://en.wikipedia.org/wiki/Directionality%20%28molecular%20biology%29 | Directionality, in molecular biology and biochemistry, is the end-to-end chemical orientation of a single strand of nucleic acid. In a single strand of DNA or RNA, the chemical convention of naming carbon atoms in the nucleotide pentose-sugar-ring means that there will be a 5′ end (usually pronounced "five-prime end"), which frequently contains a phosphate group attached to the 5′ carbon of the ribose ring, and a 3′ end (usually pronounced "three-prime end"), which typically is unmodified from the ribose -OH substituent. In a DNA double helix, the strands run in opposite directions to permit base pairing between them, which is essential for replication or transcription of the encoded information.
Nucleic acids can only be synthesized in vivo in the 5′-to-3′ direction, as the polymerases that assemble various types of new strands generally rely on the energy produced by breaking nucleoside triphosphate bonds to attach new nucleoside monophosphates to the 3′-hydroxyl (−OH) group, via a phosphodiester bond. The relative positions of structures along strands of nucleic acid, including genes and various protein binding sites, are usually noted as being either upstream (towards the 5′-end) or downstream (towards the 3′-end). (See also upstream and downstream.)
Directionality is related to, but different from, sense. Transcription of single-stranded RNA from a double-stranded DNA template requires the selection of one strand of the DNA template as the template strand that directly interacts with the nascent RNA due to complementary sequence. The other strand is not copied directly, but necessarily its sequence will be similar to that of the RNA. Transcription initiation sites generally occur on both strands of an organism's DNA, and specify the location, direction, and circumstances under which transcription will occur. If the transcript encodes one or (rarely) more proteins, translation of each protein by the ribosome will proceed in a 5′-to-3′ direction, and will extend the protein from its N-terminus toward its C-terminus. For example, in a typical gene a start codon (5′-ATG-3′) is a DNA sequence within the sense strand. Transcription begins at an upstream site (relative to the sense strand), and as it proceeds through the region it copies the 3′-TAC-5′ from the template strand to produce 5′-AUG-3′ within a messenger RNA (mRNA). The mRNA is scanned by the ribosome from the 5′ end, where the start codon directs the incorporation of a methionine (bacteria, mitochondria, and plastids use N-formylmethionine instead) at the N terminus of the protein. By convention, single strands of DNA and RNA sequences are written in a 5′-to-3′ direction except as needed to illustrate the pattern of base pairing.
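As a toy illustration of the conventions described above, the following short Python sketch (the sequence strings are invented examples, not data from the article) takes a template DNA strand written 3′→5′ and produces the corresponding mRNA written 5′→3′ by complementing each base:

```python
# Map each DNA template base to the RNA base that pairs with it
# (A-U, T-A, C-G, G-C).
DNA_TO_RNA = {"A": "U", "T": "A", "C": "G", "G": "C"}

def transcribe(template_3to5: str) -> str:
    """Given a template strand written 3'->5', return the mRNA written 5'->3'.

    Because the template is already written 3'->5' and RNA is synthesized
    5'->3', the bases pair off in the order given; no reversal is needed.
    """
    return "".join(DNA_TO_RNA[base] for base in template_3to5)

if __name__ == "__main__":
    # 3'-TAC-5' on the template strand yields 5'-AUG-3' (the start codon).
    print(transcribe("TAC"))  # -> AUG
```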
5′-end
The 5′-end (pronounced "five prime end") designates the end of the DNA or RNA strand that has the fifth carbon in the sugar-ring of the deoxyribose or ribose at its terminus. A phosphate group attached to the 5′-end permits ligation of two nucleotides, i.e., the covalent binding of a 5′-phosphate to the 3′-hydroxyl group of another nucleotide, to form a phosphodiester bond. Removal of the 5′-phosphate prevents ligation. To prevent unwanted nucleic acid ligation (e.g. self-ligation of a plasmid vector in DNA cloning), molecular biologists commonly remove the 5′-phosphate with a phosphatase.
The 5′-end of nascent messenger RNA is the site at which post-transcriptional capping occurs, a process which is vital to producing mature messenger RNA. Capping increases the stability of the messenger RNA while it undergoes translation, providing resistance to the degradative effects of exonucleases. It consists of a methylated nucleotide (methylguanosine) attached to the messenger RNA in a rare 5′- to 5′-triphosphate linkage.
The 5′-flanking region of a gene often denotes a region of DNA which is not transcribed into RNA. The 5′-flanking region contains the gene promoter, and may also contain enhancers or other protein binding sites.
The 5′-untranslated region (5′-UTR) is a region of a gene which is transcribed into mRNA, and is located at the 5′-end of the mRNA. This region of an mRNA may or may not be translated, but is usually involved in the regulation of translation. The 5′-untranslated region is the portion of the DNA starting from the cap site and extending to the base just before the AUG translation initiation codon of the main coding sequence. This region may have sequences, such as the ribosome binding site and Kozak sequence, which determine the translation efficiency of the mRNA, or which may affect the stability of the mRNA.
3′-end
The 3′-end (three prime end) of a strand is so named due to it terminating at the hydroxyl group of the third carbon in the sugar-ring, and is known as the tail end. The 3′-hydroxyl is necessary in the synthesis of new nucleic acid molecules as it is ligated (joined) to the 5′-phosphate of a separate nucleotide, allowing the formation of strands of linked nucleotides.
Molecular biologists can use nucleotides that lack a 3′-hydroxyl (dideoxyribonucleotides) to interrupt the replication of DNA. This technique is known as the dideoxy chain-termination method or the Sanger method, and is used to determine the order of nucleotides in DNA.
The 3′-end of nascent messenger RNA is the site of post-transcriptional polyadenylation, which attaches a chain of 50 to 250 adenosine residues to produce mature messenger RNA. This chain helps in determining how long the messenger RNA lasts in the cell, influencing how much protein is produced from it.
The 3′-flanking region is a region of DNA that is not copied into the mature mRNA, but which is present adjacent to 3′-end of the gene. It was originally thought that the 3′-flanking DNA was not transcribed at all, but it was discovered to be transcribed into RNA and quickly removed during processing of the primary transcript to form the mature mRNA. The 3′-flanking region often contains sequences that affect the formation of the 3′-end of the message. It may also contain enhancers or other sites to which proteins may bind.
The 3′-untranslated region (3′-UTR) is a region of the DNA which is transcribed into mRNA and becomes the 3′-end of the message, but which does not contain protein coding sequence. Everything between the stop codon and the polyA tail is considered to be 3′-untranslated. The 3′-untranslated region may affect the translation efficiency of the mRNA or the stability of the mRNA. It also has sequences which are required for the addition of the poly(A) tail to the message, including the hexanucleotide AAUAAA.
See also
Sense (molecular biology)
Further reading
External links
A Molecular Biology Glossary
DNA
Molecular genetics
RNA | Directionality (molecular biology) | [
"Chemistry",
"Biology"
] | 1,574 | [
"Molecular genetics",
"Molecular biology"
] |
5,903,372 | https://en.wikipedia.org/wiki/Electrostatic-sensitive%20device | An electrostatic-sensitive device (often abbreviated ESD) is any component (primarily electrical) which can be damaged by common static charges which build up on people, tools, and other non-conductors or semiconductors. ESD commonly also stands for electrostatic discharge.
Overview
As electronic parts such as computer central processing units (CPUs) are packed ever more densely with transistors, the transistors shrink and become increasingly vulnerable to ESD.
Common electrostatic-sensitive devices include:
MOSFET transistors, used to make integrated circuits (ICs)
CMOS ICs (chips), integrated circuits built with MOSFETs. Examples are computer CPUs, graphics ICs.
Computer cards
TTL chips
Laser diodes
Blue light-emitting diodes (LEDs)
High precision resistors
The notion of a symbol for an ESD protection device came about in response to the increased usage, and failures, of static-sensitive components at the computer systems manufacturer Sperry Univac. Field repairs to, and handling of, ESD-sensitive printed circuit boards (PCBs) were resulting in extremely high failure rates. Studies of PCB failures indicated that static damage to chips and PCBs was being caused by field service engineers who were often unaware of the need to employ precautionary procedures when handling ESD-sensitive parts. In response to this problem, Robert F. Gabriel, a systems engineer at Sperry Univac, devised a large number of possible symbols that could be affixed to parts, packaging, and PCBs to alert the user that the part is ESD-sensitive. Gabriel developed a proposal for an ESD warning symbol and circulated it to numerous electronics standards groups. C. Everett Coon at the EIA (Electronic Industries Association) enthusiastically responded to the concept and coordinated a worldwide effort among various standards bodies and interest groups to devise an appropriate symbol that would be free of any wording and would be quickly recognizable as indicating that handling precautions were necessary for the ESD item. After three years of worldwide debate over the graphics and the color scheme to be used, the symbol was adopted in the late 1970s. Variations on the design have been adopted since, but the most recognizable symbol remains the one originally adopted.
ESD-safe working
Often an ESD-safe foam or ESD-safe bag are required for transporting such components. When working with them, a technician will often use a grounding mat or other grounding tool to keep from damaging the equipment. A technician may also wear antistatic garments or an antistatic wrist strap.
There are several kinds of ESD protective materials:
Conductive: Materials with an electrical resistance between 1kΩ and 1MΩ
Dissipative: Materials with an electrical resistance between 1MΩ and 1TΩ
Shielding: Materials that attenuate current and electrical fields
Low-charging or Anti-static: Materials that limit the buildup of charge by prevention of triboelectric effects through physical separation or by selecting materials that do not build up charge easily.
See also
Antistatic agent
Antistatic device
Antistatic garments
Electrostatic discharge materials
References
External links
ESD Association
Avoid Static Damage to Your PC, from PC World
ESD advice from Intel
Tips for Enhancing ESD Protection, for board designers
Electromagnetic compatibility | Electrostatic-sensitive device | [
"Engineering"
] | 693 | [
"Radio electronics",
"Electrical engineering",
"Electromagnetic compatibility"
] |
5,903,656 | https://en.wikipedia.org/wiki/Tidal%20heating | Tidal heating (also known as tidal working or tidal flexing) occurs through the tidal friction processes: orbital and rotational energy is dissipated as heat in either (or both) the surface ocean or interior of a planet or satellite. When an object is in an elliptical orbit, the tidal forces acting on it are stronger near periapsis than near apoapsis. Thus the deformation of the body due to tidal forces (i.e. the tidal bulge) varies over the course of its orbit, generating internal friction which heats its interior. This energy gained by the object comes from its orbital energy and/or rotational energy, so over time in a two-body system, the initial elliptical orbit decays into a circular orbit (tidal circularization) and the rotational periods of the two bodies adjust towards matching the orbital period (tidal locking). Sustained tidal heating occurs when the elliptical orbit is prevented from circularizing due to additional gravitational forces from other bodies that keep tugging the object back into an elliptical orbit. In this more complex system, orbital and rotational energy still is being converted to thermal energy; however, now the orbit's semimajor axis would shrink rather than its eccentricity.
Moons of giant planets
Tidal heating is responsible for the geologic activity of the most volcanically active body in the Solar System: Io, a moon of Jupiter. Io's eccentricity persists as the result of its orbital resonances with the Galilean moons Europa and Ganymede. The same mechanism has provided the energy to melt the lower layers of the ice surrounding the rocky mantle of Jupiter's next-closest large moon, Europa. However, the heating of the latter is weaker, because of reduced flexing—Europa has half Io's orbital frequency and a 14% smaller radius; also, while Europa's orbit is about twice as eccentric as Io's, tidal force falls off with the cube of distance and is only a quarter as strong at Europa. Jupiter maintains the moons' orbits via tides they raise on it and thus its rotational energy ultimately powers the system. Saturn's moon Enceladus is similarly thought to have a liquid water ocean beneath its icy crust, due to tidal heating related to its resonance with Dione. The water vapor geysers which eject material from Enceladus are thought to be powered by friction generated within its interior.
Earth
Munk & Wunsch (1998) estimated that Earth experiences 3.7 TW (0.0073 W/m²) of tidal heating, of which 95% (3.5 TW or 0.0069 W/m²) is associated with ocean tides and 5% (0.2 TW or 0.0004 W/m²) is associated with Earth tides, with 3.2 TW being due to tidal interactions with the Moon and 0.5 TW being due to tidal interactions with the Sun. Egbert & Ray (2001) confirmed that overall estimate, writing "The total amount of tidal energy dissipated in the Earth-Moon-Sun system is now well-determined. The methods of space geodesy—altimetry, satellite laser ranging, lunar laser ranging—have converged to 3.7 TW..."
Heller et al. (2021) estimated that shortly after the Moon was formed, when the Moon orbited 10-15 times closer to Earth than it does now, tidal heating might have contributed ~10 W/m² of heating over perhaps 100 million years, and that this could have accounted for a temperature increase of up to 5°C on the early Earth.
Moon
Harada et al. (2014) proposed that tidal heating may have created a molten layer at the core-mantle boundary within Earth's Moon.
Formula
The tidal heating rate, Ė_Tidal, in a satellite that is spin-synchronous, coplanar (i = 0), and has an eccentric orbit is given by:
Ė_Tidal = −Im(k₂) (21/2) G M² R⁵ n e² / a⁶
where R, n, a, and e are respectively the satellite's mean radius, mean orbital motion, orbital distance, and eccentricity. M is the host (or central) body's mass and Im(k₂) represents the imaginary portion of the second-order Love number which measures the efficiency at which the satellite dissipates tidal energy into frictional heat. This imaginary portion is defined by interplay of the body's rheology and self-gravitation. It, therefore, is a function of the body's radius, density, and rheological parameters (the shear modulus, viscosity, and others – dependent upon the rheological model). The rheological parameters' values, in turn, depend upon the temperature and the concentration of partial melt in the body's interior.
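For orientation, here is a minimal Python sketch of that expression, assuming the Segatz-style form reconstructed above; the numerical inputs are order-of-magnitude placeholders loosely inspired by Io and Jupiter, not values taken from the article:

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def tidal_heating_rate(im_k2, radius, mean_motion, distance, ecc, host_mass):
    """Tidal heating rate (W) for a spin-synchronous, coplanar satellite
    on an eccentric orbit, using the formula reconstructed above.

    im_k2       : imaginary part of the second-order Love number (negative
                  for a dissipative body in this sign convention)
    radius      : satellite mean radius R, m
    mean_motion : mean orbital motion n, rad/s
    distance    : orbital distance a, m
    ecc         : orbital eccentricity e
    host_mass   : mass of the host body M, kg
    """
    return (-im_k2 * 21.0 / 2.0
            * G * host_mass**2 * radius**5 * mean_motion * ecc**2
            / distance**6)

# Illustrative inputs; the result comes out near 1e14 W, the right order
# of magnitude for Io's observed tidal heating.
print(tidal_heating_rate(im_k2=-0.015, radius=1.82e6, mean_motion=4.1e-5,
                         distance=4.22e8, ecc=0.0041, host_mass=1.9e27))
```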
The tidally dissipated power in a nonsynchronised rotator is given by a more complex expression.
See also
Cryovolcano
Tidal acceleration
Tidal locking
Io Volcano Observer
Planetary differentiation
References
Concepts in astrophysics
Planetary science
Tidal forces | Tidal heating | [
"Physics",
"Astronomy"
] | 1,003 | [
"Planetary science",
"Astronomical sub-disciplines",
"Concepts in astrophysics",
"Astrophysics"
] |
690,512 | https://en.wikipedia.org/wiki/Akaike%20information%20criterion | The Akaike information criterion (AIC) is an estimator of prediction error and thereby relative quality of statistical models for a given set of data. Given a collection of models for the data, AIC estimates the quality of each model, relative to each of the other models. Thus, AIC provides a means for model selection.
AIC is founded on information theory. When a statistical model is used to represent the process that generated the data, the representation will almost never be exact; so some information will be lost by using the model to represent the process. AIC estimates the relative amount of information lost by a given model: the less information a model loses, the higher the quality of that model.
In estimating the amount of information lost by a model, AIC deals with the trade-off between the goodness of fit of the model and the simplicity of the model. In other words, AIC deals with both the risk of overfitting and the risk of underfitting.
The Akaike information criterion is named after the Japanese statistician Hirotugu Akaike, who formulated it. It now forms the basis of a paradigm for the foundations of statistics and is also widely used for statistical inference.
Definition
Suppose that we have a statistical model of some data. Let k be the number of estimated parameters in the model. Let L̂ be the maximized value of the likelihood function for the model. Then the AIC value of the model is the following:
AIC = 2k − 2 ln(L̂)
Given a set of candidate models for the data, the preferred model is the one with the minimum AIC value. Thus, AIC rewards goodness of fit (as assessed by the likelihood function), but it also includes a penalty that is an increasing function of the number of estimated parameters. The penalty discourages overfitting, which is desired because increasing the number of parameters in the model almost always improves the goodness of the fit.
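A minimal Python sketch of the definition; the Gaussian toy data and the single candidate model are invented for illustration, not taken from the article:

```python
import math
import random

def aic(k, max_log_likelihood):
    """AIC = 2k - 2 ln(L-hat), as defined above."""
    return 2 * k - 2 * max_log_likelihood

# Toy data: draws from a normal distribution.
random.seed(0)
data = [random.gauss(5.0, 2.0) for _ in range(200)]
n = len(data)

# Candidate model: normal with mean and variance fitted by maximum likelihood.
mu = sum(data) / n
var = sum((x - mu) ** 2 for x in data) / n
log_lik = sum(-0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)
              for x in data)

print(aic(k=2, max_log_likelihood=log_lik))
```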
AIC is founded in information theory. Suppose that the data is generated by some unknown process f. We consider two candidate models to represent f: g1 and g2. If we knew f, then we could find the information lost from using g1 to represent f by calculating the Kullback–Leibler divergence, DKL(f ‖ g1); similarly, the information lost from using g2 to represent f could be found by calculating DKL(f ‖ g2). We would then, generally, choose the candidate model that minimized the information loss.
We cannot choose with certainty, because we do not know f. Akaike showed, however, that we can estimate, via AIC, how much more (or less) information is lost by g1 than by g2. The estimate, though, is only valid asymptotically; if the number of data points is small, then some correction is often necessary (see AICc, below).
Note that AIC tells nothing about the absolute quality of a model, only the quality relative to other models. Thus, if all the candidate models fit poorly, AIC will not give any warning of that. Hence, after selecting a model via AIC, it is usually good practice to validate the absolute quality of the model. Such validation commonly includes checks of the model's residuals (to determine whether the residuals seem random) and tests of the model's predictions. For more on this topic, see statistical model validation.
How to use AIC in practice
To apply AIC in practice, we start with a set of candidate models, and then find the models' corresponding AIC values. There will almost always be information lost due to using a candidate model to represent the "true model," i.e. the process that generated the data. We wish to select, from among the candidate models, the model that minimizes the information loss. We cannot choose with certainty, but we can minimize the estimated information loss.
Suppose that there are R candidate models. Denote the AIC values of those models by AIC1, AIC2, AIC3, ..., AICR. Let AICmin be the minimum of those values. Then the quantity exp((AICmin − AICi)/2) can be interpreted as being proportional to the probability that the ith model minimizes the (estimated) information loss.
As an example, suppose that there are three candidate models, whose AIC values are 100, 102, and 110. Then the second model is exp((100 − 102)/2) = 0.368 times as probable as the first model to minimize the information loss. Similarly, the third model is exp((100 − 110)/2) = 0.007 times as probable as the first model to minimize the information loss.
In this example, we would omit the third model from further consideration. We then have three options: (1) gather more data, in the hope that this will allow clearly distinguishing between the first two models; (2) simply conclude that the data is insufficient to support selecting one model from among the first two; (3) take a weighted average of the first two models, with weights proportional to 1 and 0.368, respectively, and then do statistical inference based on the weighted multimodel.
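The arithmetic in this example can be checked with a few lines of Python (the three AIC values are the ones quoted above):

```python
import math

aic_values = [100.0, 102.0, 110.0]
aic_min = min(aic_values)

# Relative likelihood of each model: exp((AIC_min - AIC_i) / 2).
rel_likelihoods = [math.exp((aic_min - a) / 2) for a in aic_values]
print(rel_likelihoods)  # [1.0, 0.3678..., 0.0067...]
```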
The quantity exp((AICmin − AICi)/2) is known as the relative likelihood of model i. It is closely related to the likelihood ratio used in the likelihood-ratio test. Indeed, if all the models in the candidate set have the same number of parameters, then using AIC might at first appear to be very similar to using the likelihood-ratio test. There are, however, important distinctions. In particular, the likelihood-ratio test is valid only for nested models, whereas AIC (and AICc) has no such restriction.
Hypothesis testing
Every statistical hypothesis test can be formulated as a comparison of statistical models. Hence, every statistical hypothesis test can be replicated via AIC. Two examples are briefly described in the subsections below. Details for those examples, and many more examples, are given in the literature on AIC-based model selection.
Replicating Student's t-test
As an example of a hypothesis test, consider the t-test to compare the means of two normally-distributed populations. The input to the t-test comprises a random sample from each of the two populations.
To formulate the test as a comparison of models, we construct two different models. The first model models the two populations as having potentially different means and standard deviations. The likelihood function for the first model is thus the product of the likelihoods for two distinct normal distributions; so it has four parameters: μ1, σ1, μ2, σ2. To be explicit, the likelihood function is as follows (denoting the sample sizes by n1 and n2, and the observations by xi and yj):
L(μ1, σ1, μ2, σ2) = ∏i=1..n1 (1/(√(2π) σ1)) exp(−(xi − μ1)²/(2σ1²)) · ∏j=1..n2 (1/(√(2π) σ2)) exp(−(yj − μ2)²/(2σ2²))
The second model models the two populations as having the same means but potentially different standard deviations. The likelihood function for the second model thus sets μ1 = μ2 in the above equation; so it has three parameters.
We then maximize the likelihood functions for the two models (in practice, we maximize the log-likelihood functions); after that, it is easy to calculate the AIC values of the models. We next calculate the relative likelihood. For instance, if the second model was only 0.01 times as likely as the first model, then we would omit the second model from further consideration: so we would conclude that the two populations have different means.
The t-test assumes that the two populations have identical standard deviations; the test tends to be unreliable if the assumption is false and the sizes of the two samples are very different (Welch's t-test would be better). Comparing the means of the populations via AIC, as in the example above, has an advantage by not making such assumptions.
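The comparison described above can be sketched in a few lines of Python; the two samples below are simulated stand-ins (not data from the article), and the common-mean model is maximized by a simple coordinate-ascent scheme, which is one way (not the only way) to obtain its maximum-likelihood fit:

```python
import math
import random

def normal_loglik(sample, mu, var):
    return sum(-0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)
               for x in sample)

random.seed(1)
a = [random.gauss(10.0, 3.0) for _ in range(40)]   # sample from population 1
b = [random.gauss(11.5, 5.0) for _ in range(60)]   # sample from population 2

# Model 1: different means, different standard deviations (4 parameters).
mu_a, mu_b = sum(a) / len(a), sum(b) / len(b)
var_a = sum((x - mu_a) ** 2 for x in a) / len(a)
var_b = sum((x - mu_b) ** 2 for x in b) / len(b)
aic1 = 2 * 4 - 2 * (normal_loglik(a, mu_a, var_a) + normal_loglik(b, mu_b, var_b))

# Model 2: common mean, different standard deviations (3 parameters).
# Alternate between updating the variances (given the mean) and the mean
# (precision-weighted average, given the variances) until convergence.
mu = sum(a + b) / (len(a) + len(b))
for _ in range(200):
    var_a2 = sum((x - mu) ** 2 for x in a) / len(a)
    var_b2 = sum((x - mu) ** 2 for x in b) / len(b)
    w_a, w_b = len(a) / var_a2, len(b) / var_b2
    mu = (w_a * mu_a + w_b * mu_b) / (w_a + w_b)
aic2 = 2 * 3 - 2 * (normal_loglik(a, mu, var_a2) + normal_loglik(b, mu, var_b2))

print("AIC1 =", round(aic1, 2), "AIC2 =", round(aic2, 2))
print("relative likelihood of model 2:", math.exp((min(aic1, aic2) - aic2) / 2))
```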
Comparing categorical data sets
For another example of a hypothesis test, suppose that we have two populations, and each member of each population is in one of two categories—category #1 or category #2. Each population is binomially distributed. We want to know whether the distributions of the two populations are the same. We are given a random sample from each of the two populations.
Let n1 be the size of the sample from the first population. Let m1 be the number of observations (in the sample) in category #1; so the number of observations in category #2 is n1 − m1. Similarly, let n2 be the size of the sample from the second population. Let m2 be the number of observations (in the sample) in category #1.
Let p be the probability that a randomly-chosen member of the first population is in category #1. Hence, the probability that a randomly-chosen member of the first population is in category #2 is 1 − p. Note that the distribution of the first population has one parameter. Let q be the probability that a randomly-chosen member of the second population is in category #1. Note that the distribution of the second population also has one parameter.
To compare the distributions of the two populations, we construct two different models. The first model models the two populations as having potentially different distributions. The likelihood function for the first model is thus the product of the likelihoods for two distinct binomial distributions; so it has two parameters: p, q. To be explicit, the likelihood function is as follows:
L(p, q) = p^m1 (1 − p)^(n1 − m1) · q^m2 (1 − q)^(n2 − m2)
The second model models the two populations as having the same distribution. The likelihood function for the second model thus sets p = q in the above equation; so the second model has one parameter.
We then maximize the likelihood functions for the two models (in practice, we maximize the log-likelihood functions); after that, it is easy to calculate the AIC values of the models. We next calculate the relative likelihood. For instance, if the second model was only 0.01 times as likely as the first model, then we would omit the second model from further consideration: so we would conclude that the two populations have different distributions.
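A compact Python version of this binomial comparison (the sample counts below are invented for illustration):

```python
import math

def binom_loglik(m, n, p):
    """Log-likelihood of m category-#1 outcomes out of n, with probability p."""
    return m * math.log(p) + (n - m) * math.log(1 - p)

n1, m1 = 120, 54   # sample size and category-#1 count, population 1
n2, m2 = 90, 62    # sample size and category-#1 count, population 2

# Model 1: separate probabilities p and q (2 parameters).
p_hat, q_hat = m1 / n1, m2 / n2
aic1 = 2 * 2 - 2 * (binom_loglik(m1, n1, p_hat) + binom_loglik(m2, n2, q_hat))

# Model 2: common probability p = q (1 parameter), fitted by pooling.
common = (m1 + m2) / (n1 + n2)
aic2 = 2 * 1 - 2 * (binom_loglik(m1, n1, common) + binom_loglik(m2, n2, common))

# Relative likelihood of the model with the larger AIC value.
print(aic1, aic2, math.exp((min(aic1, aic2) - max(aic1, aic2)) / 2))
```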
Foundations of statistics
Statistical inference is generally regarded as comprising hypothesis testing and estimation. Hypothesis testing can be done via AIC, as discussed above. Regarding estimation, there are two types: point estimation and interval estimation. Point estimation can be done within the AIC paradigm: it is provided by maximum likelihood estimation. Interval estimation can also be done within the AIC paradigm: it is provided by likelihood intervals. Hence, statistical inference generally can be done within the AIC paradigm.
The most commonly used paradigms for statistical inference are frequentist inference and Bayesian inference. AIC, though, can be used to do statistical inference without relying on either the frequentist paradigm or the Bayesian paradigm: because AIC can be interpreted without the aid of significance levels or Bayesian priors. In other words, AIC can be used to form a foundation of statistics that is distinct from both frequentism and Bayesianism.
Modification for small sample size
When the sample size is small, there is a substantial probability that AIC will select models that have too many parameters, i.e. that AIC will overfit. To address such potential overfitting, AICc was developed: AICc is AIC with a correction for small sample sizes.
The formula for AICc depends upon the statistical model. Assuming that the model is univariate, is linear in its parameters, and has normally-distributed residuals (conditional upon regressors), then the formula for AICc is as follows:
AICc = AIC + (2k² + 2k)/(n − k − 1)
—where n denotes the sample size and k denotes the number of parameters. Thus, AICc is essentially AIC with an extra penalty term for the number of parameters. Note that as n → ∞, the extra penalty term converges to 0, and thus AICc converges to AIC.
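A small Python helper for the corrected criterion as just stated (the guard against a non-positive denominator is an implementation choice, not something specified in the article):

```python
def aicc(aic, n, k):
    """AICc = AIC + 2k(k + 1) / (n - k - 1); valid when n > k + 1."""
    if n - k - 1 <= 0:
        raise ValueError("AICc requires the sample size n to exceed k + 1")
    return aic + 2 * k * (k + 1) / (n - k - 1)

print(aicc(aic=100.0, n=30, k=5))    # small-sample correction is noticeable
print(aicc(aic=100.0, n=3000, k=5))  # correction is negligible for large n
```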
If the assumption that the model is univariate and linear with normal residuals does not hold, then the formula for AICc will generally be different from the formula above. For some models, the formula can be difficult to determine. For every model that has AICc available, though, the formula for AICc is given by AIC plus terms that include both k and k². In comparison, the formula for AIC includes k but not k². In other words, AIC is a first-order estimate (of the information loss), whereas AICc is a second-order estimate.
Further discussion of the formula, with examples of other assumptions, is given in the literature on AICc. In particular, with other assumptions, bootstrap estimation of the formula is often feasible.
To summarize, AICc has the advantage of tending to be more accurate than AIC (especially for small samples), but AICc also has the disadvantage of sometimes being much more difficult to compute than AIC. Note that if all the candidate models have the same k and the same formula for AICc, then AICc and AIC will give identical (relative) valuations; hence, there will be no disadvantage in using AIC, instead of AICc. Furthermore, if n is many times larger than k², then the extra penalty term will be negligible; hence, the disadvantage in using AIC, instead of AICc, will be negligible.
History
The Akaike information criterion was formulated by the statistician Hirotugu Akaike. It was originally named "an information criterion". It was first announced in English by Akaike at a 1971 symposium; the proceedings of the symposium were published in 1973. The 1973 publication, though, was only an informal presentation of the concepts. The first formal publication was a 1974 paper by Akaike.
The initial derivation of AIC relied upon some strong assumptions. Takeuchi (1976) showed that the assumptions could be made much weaker. Takeuchi's work, however, was in Japanese and was not widely known outside Japan for many years. (An English translation has since been published.)
AICc was originally proposed for linear regression (only) by Sugiura (1978). That instigated the work of Hurvich & Tsai (1989), and several further papers by the same authors, which extended the situations in which AICc could be applied.
The first general exposition of the information-theoretic approach was the volume by Burnham & Anderson (2002). It includes an English presentation of the work of Takeuchi. The volume led to far greater use of AIC, and it now has more than 64,000 citations on Google Scholar.
Akaike called his approach an "entropy maximization principle", because the approach is founded on the concept of entropy in information theory. Indeed, minimizing AIC in a statistical model is effectively equivalent to maximizing entropy in a thermodynamic system; in other words, the information-theoretic approach in statistics is essentially applying the second law of thermodynamics. As such, AIC has roots in the work of Ludwig Boltzmann on entropy. For more on these issues, see the further reading below.
Usage tips
Counting parameters
A statistical model must account for random errors. A straight line model might be formally described as yi = b0 + b1xi + εi. Here, the εi are the residuals from the straight line fit. If the εi are assumed to be i.i.d. Gaussian (with zero mean), then the model has three parameters:
b0, b1, and the variance of the Gaussian distributions.
Thus, when calculating the AIC value of this model, we should use k=3. More generally, for any least squares model with i.i.d. Gaussian residuals, the variance of the residuals' distributions should be counted as one of the parameters.
As another example, consider a first-order autoregressive model, defined by
xi = c + φxi−1 + εi, with the εi being i.i.d. Gaussian (with zero mean). For this model, there are three parameters: c, φ, and the variance of the εi. More generally, a pth-order autoregressive model has p + 2 parameters. (If, however, c is not estimated from the data, but instead given in advance, then there are only p + 1 parameters.)
Transforming data
The AIC values of the candidate models must all be computed with the same data set. Sometimes, though, we might want to compare a model of the response variable, y, with a model of the logarithm of the response variable, log(y). More generally, we might want to compare a model of the data with a model of transformed data. Following is an illustration of how to deal with data transforms (following the advice that "Investigators should be sure that all hypotheses are modeled using the same response variable").
Suppose that we want to compare two models: one with a normal distribution of y and one with a normal distribution of log(y). We should not directly compare the AIC values of the two models. Instead, we should transform the normal cumulative distribution function to first take the logarithm of y. To do that, we need to perform the relevant integration by substitution: thus, we need to multiply by the derivative of the (natural) logarithm function, which is 1/y. Hence, the transformed distribution has the following probability density function (with μ and σ the mean and standard deviation of log(y)):
f(y; μ, σ) = (1/(y σ √(2π))) exp(−(ln y − μ)²/(2σ²))
—which is the probability density function for the log-normal distribution. We then compare the AIC value of the normal model against the AIC value of the log-normal model.
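A sketch, in Python, of the comparison just described: the data are simulated for illustration, and the log-normal model's likelihood is evaluated on the original scale by including the 1/y Jacobian factor, so both AIC values refer to the same response variable.

```python
import math
import random

def normal_loglik(xs, mu, var):
    return sum(-0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)
               for x in xs)

random.seed(2)
y = [math.exp(random.gauss(1.0, 0.5)) for _ in range(300)]  # positive data
n = len(y)

# Model A: y is normally distributed (2 parameters).
mu_a = sum(y) / n
var_a = sum((v - mu_a) ** 2 for v in y) / n
aic_a = 2 * 2 - 2 * normal_loglik(y, mu_a, var_a)

# Model B: log(y) is normally distributed (2 parameters).  Evaluate the
# density on the y scale: f(y) = Normal(log y; mu, var) * (1 / y).
logs = [math.log(v) for v in y]
mu_b = sum(logs) / n
var_b = sum((v - mu_b) ** 2 for v in logs) / n
loglik_b = normal_loglik(logs, mu_b, var_b) - sum(math.log(v) for v in y)
aic_b = 2 * 2 - 2 * loglik_b

print(aic_a, aic_b)  # the lower AIC identifies the better-fitting model
```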
For misspecified models, Takeuchi's Information Criterion (TIC) might be more appropriate. However, TIC often suffers from instability caused by estimation errors.
Comparisons with other model selection methods
The critical difference between AIC and BIC (and their variants) is the asymptotic property under well-specified and misspecified model classes. Their fundamental differences have been well-studied in regression variable selection and autoregression order selection problems. In general, if the goal is prediction, AIC and leave-one-out cross-validations are preferred. If the goal is selection, inference, or interpretation, BIC or leave-many-out cross-validations are preferred. A comprehensive overview of AIC and other popular model selection methods is given by Ding et al. (2018)
Comparison with BIC
The formula for the Bayesian information criterion (BIC) is similar to the formula for AIC, but with a different penalty for the number of parameters. With AIC the penalty is 2k, whereas with BIC the penalty is ln(n) k.
A comparison of AIC/AICc and BIC is given by Burnham & Anderson, with follow-up remarks in the later literature. The authors show that AIC/AICc can be derived in the same Bayesian framework as BIC, just by using different prior probabilities. In the Bayesian derivation of BIC, though, each candidate model has a prior probability of 1/R (where R is the number of candidate models). Additionally, the authors present a few simulation studies that suggest AICc tends to have practical/performance advantages over BIC.
A point made by several researchers is that AIC and BIC are appropriate for different tasks. In particular, BIC is argued to be appropriate for selecting the "true model" (i.e. the process that generated the data) from the set of candidate models, whereas AIC is not appropriate. To be specific, if the "true model" is in the set of candidates, then BIC will select the "true model" with probability 1, as n → ∞; in contrast, when selection is done via AIC, the probability can be less than 1. Proponents of AIC argue that this issue is negligible, because the "true model" is virtually never in the candidate set. Indeed, it is a common aphorism in statistics that "all models are wrong"; hence the "true model" (i.e. reality) cannot be in the candidate set.
Another comparison of AIC and BIC is given by Vrieze. Vrieze presents a simulation study—which allows the "true model" to be in the candidate set (unlike with virtually all real data). The simulation study demonstrates, in particular, that AIC sometimes selects a much better model than BIC even when the "true model" is in the candidate set. The reason is that, for finite n, BIC can have a substantial risk of selecting a very bad model from the candidate set. This reason can arise even when n is much larger than k². With AIC, the risk of selecting a very bad model is minimized.
If the "true model" is not in the candidate set, then the most that we can hope to do is select the model that best approximates the "true model". AIC is appropriate for finding the best approximating model, under certain assumptions. (Those assumptions include, in particular, that the approximating is done with regard to information loss.)
Comparison of AIC and BIC in the context of regression is given by Yang. In regression, AIC is asymptotically optimal for selecting the model with the least mean squared error, under the assumption that the "true model" is not in the candidate set. BIC is not asymptotically optimal under the assumption. Yang additionally shows that the rate at which AIC converges to the optimum is, in a certain sense, the best possible.
Comparison with least squares
Sometimes, each candidate model assumes that the residuals are distributed according to independent identical normal distributions (with zero mean). That gives rise to least squares model fitting.
With least squares fitting, the maximum likelihood estimate for the variance of a model's residuals distributions is
σ̂² = RSS/n,
where the residual sum of squares is
RSS = Σi=1..n εi² (the sum of the squared residuals).
Then, the maximum value of a model's log-likelihood function is (see Normal distribution#Log-likelihood):
ln(L̂) = −(n/2) ln(σ̂²) + C
where C is a constant independent of the model, and dependent only on the particular data points, i.e. it does not change if the data does not change.
That gives:
AIC = 2k − 2 ln(L̂) = 2k + n ln(RSS/n) − 2C
Because only differences in AIC are meaningful, the constant C can be ignored, which allows us to conveniently take the following for model comparisons:
AIC = 2k + n ln(RSS/n)
Note that if all the models have the same k, then selecting the model with minimum AIC is equivalent to selecting the model with minimum RSS—which is the usual objective of model selection based on least squares.
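A short Python illustration of that shortcut, comparing an intercept-only fit with a straight-line fit on simulated data (all numbers are invented for illustration); each model's parameter count k includes the residual variance, as discussed under "Counting parameters":

```python
import math
import random

random.seed(3)
xs = [i / 10 for i in range(100)]
ys = [1.0 + 0.5 * x + random.gauss(0, 1.0) for x in xs]
n = len(xs)

def aic_from_rss(rss, n, k):
    """AIC up to a model-independent constant: 2k + n ln(RSS/n)."""
    return 2 * k + n * math.log(rss / n)

# Model 1: y = b0 (intercept only); k = 2 (intercept + residual variance).
mean_y = sum(ys) / n
rss1 = sum((y - mean_y) ** 2 for y in ys)

# Model 2: y = b0 + b1 x; k = 3 (two coefficients + residual variance).
mean_x = sum(xs) / n
b1 = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
      / sum((x - mean_x) ** 2 for x in xs))
b0 = mean_y - b1 * mean_x
rss2 = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys))

print(aic_from_rss(rss1, n, 2), aic_from_rss(rss2, n, 3))
```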
Comparison with cross-validation
Leave-one-out cross-validation is asymptotically equivalent to AIC, for ordinary linear regression models. Asymptotic equivalence to AIC also holds for mixed-effects models.
Comparison with Mallows's Cp
Mallows's Cp is equivalent to AIC in the case of (Gaussian) linear regression.
See also
Deviance information criterion
Focused information criterion
Hannan–Quinn information criterion
Maximum likelihood estimation
Principle of maximum entropy
Wilks' theorem
Notes
References
Claeskens, G.; Hjort, N. L. (2008). Model Selection and Model Averaging. Cambridge University Press. [Note: the AIC defined by Claeskens & Hjort is the negative of the standard definition—as originally given by Akaike and followed by other authors.]
Further reading
[Hirotogu Akaike comments on how he arrived at AIC]
Entropy and information
Model selection
Regression variable selection
Mathematical modeling
Japanese inventions | Akaike information criterion | [
"Physics",
"Mathematics"
] | 4,641 | [
"Mathematical modeling",
"Physical quantities",
"Applied mathematics",
"Entropy and information",
"Entropy",
"Dynamical systems"
] |
691,277 | https://en.wikipedia.org/wiki/Quantity | Quantity or amount is a property that can exist as a multitude or magnitude, which illustrate discontinuity and continuity. Quantities can be compared in terms of "more", "less", or "equal", or by assigning a numerical value multiple of a unit of measurement. Mass, time, distance, heat, and angle are among the familiar examples of quantitative properties.
Quantity is among the basic classes of things along with quality, substance, change, and relation. Some quantities are such by their inner nature (as number), while others function as states (properties, dimensions, attributes) of things such as heavy and light, long and short, broad and narrow, small and great, or much and little.
Under the name of multitude comes what is discontinuous and discrete and divisible ultimately into indivisibles, such as: army, fleet, flock, government, company, party, people, mess (military), chorus, crowd, and number; all which are cases of collective nouns. Under the name of magnitude comes what is continuous and unified and divisible only into smaller divisibles, such as: matter, mass, energy, liquid, material—all cases of non-collective nouns.
Along with analyzing its nature and classification, the issues of quantity involve such closely related topics as dimensionality, equality, proportion, the measurements of quantities, the units of measurements, number and numbering systems, the types of numbers and their relations to each other as numerical ratios.
Background
In mathematics, the concept of quantity is an ancient one extending back to the time of Aristotle and earlier. Aristotle regarded quantity as a fundamental ontological and scientific category. In Aristotle's ontology, quantity or quantum was classified into two different types, which he characterized as follows:
In his Elements, Euclid developed the theory of ratios of magnitudes without studying the nature of magnitudes, as Archimedes, but giving the following significant definitions:
For Aristotle and Euclid, relations were conceived as whole numbers (Michell, 1993). John Wallis later conceived of ratios of magnitudes as real numbers:
That is, the ratio of magnitudes of any quantity, whether volume, mass, heat and so on, is a number. Following this, Newton then defined number, and the relationship between quantity and number, in the following terms:
Structure
Continuous quantities possess a particular structure that was first explicitly characterized by Hölder (1901) as a set of axioms that define such features as identities and relations between magnitudes. In science, quantitative structure is the subject of empirical investigation and cannot be assumed to exist a priori for any given property. The linear continuum represents the prototype of continuous quantitative structure as characterized by Hölder (1901) (translated in Michell & Ernst, 1996). A fundamental feature of any type of quantity is that the relationships of equality or inequality can in principle be stated in comparisons between particular magnitudes, unlike quality, which is marked by likeness, similarity and difference, diversity. Another fundamental feature is additivity. Additivity may involve concatenation, such as adding two lengths A and B to obtain a third A + B. Additivity is not, however, restricted to extensive quantities but may also entail relations between magnitudes that can be established through experiments that permit tests of hypothesized observable manifestations of the additive relations of magnitudes. Another feature is continuity, on which Michell (1999, p. 51) says of length, as a type of quantitative attribute, "what continuity means is that if any arbitrary length, a, is selected as a unit, then for every positive real number, r, there is a length b such that b = ra". A further generalization is given by the theory of conjoint measurement, independently developed by French economist Gérard Debreu (1960) and by the American mathematical psychologist R. Duncan Luce and statistician John Tukey (1964).
In mathematics
Magnitude (how much) and multitude (how many), the two principal types of quantities, are further divided as mathematical and physical. In formal terms, quantities—their ratios, proportions, order and formal relationships of equality and inequality—are studied by mathematics. The essential part of mathematical quantities consists of having a collection of variables, each assuming a set of values. These can be a set of a single quantity, referred to as a scalar when represented by real numbers, or have multiple quantities as do vectors and tensors, two kinds of geometric objects.
The mathematical usage of a quantity can then be varied and so is situationally dependent. Quantities can be used as being infinitesimal, arguments of a function, variables in an expression (independent or dependent), or probabilistic as in random and stochastic quantities. In mathematics, magnitudes and multitudes are also not only two distinct kinds of quantity but furthermore relatable to each other.
Number theory covers the topics of the discrete quantities as numbers: number systems with their kinds and relations. Geometry studies the issues of spatial magnitudes: straight lines, curved lines, surfaces and solids, all with their respective measurements and relationships.
A traditional Aristotelian realist philosophy of mathematics, stemming from Aristotle and remaining popular until the eighteenth century, held that mathematics is the "science of quantity". Quantity was considered to be divided into the discrete (studied by arithmetic) and the continuous (studied by geometry and later calculus). The theory fits reasonably well elementary or school mathematics but less well the abstract topological and algebraic structures of modern mathematics.
In science
Establishing quantitative structure and relationships between different quantities is the cornerstone of modern science, especially but not restricted to physical sciences. Physics is fundamentally a quantitative science; chemistry, biology and others are increasingly so. Their progress is chiefly achieved due to rendering the abstract qualities of material entities into physical quantities, by postulating that all material bodies marked by quantitative properties or physical dimensions are subject to some measurements and observations. Setting the units of measurement, physics covers such fundamental quantities as space (length, breadth, and depth) and time, mass and force, temperature, energy, and quanta.
A distinction has also been made between intensive quantity and extensive quantity as two types of quantitative property, state or relation. The magnitude of an intensive quantity does not depend on the size, or extent, of the object or system of which the quantity is a property, whereas magnitudes of an extensive quantity are additive for parts of an entity or subsystems. Thus, magnitude does depend on the extent of the entity or system in the case of extensive quantity. Examples of intensive quantities are density and pressure, while examples of extensive quantities are energy, volume, and mass.
In natural language
In human languages, including English, number is a syntactic category, along with person and gender. The quantity is expressed by identifiers, definite and indefinite, and quantifiers, definite and indefinite, as well as by three types of nouns: 1. count unit nouns or countables; 2. mass nouns, uncountables, referring to the indefinite, unidentified amounts; 3. nouns of multitude (collective nouns). The word ‘number’ belongs to a noun of multitude standing either for a single entity or for the individuals making the whole. An amount in general is expressed by a special class of words called identifiers, indefinite and definite and quantifiers, definite and indefinite. The amount may be expressed by: singular form and plural form, ordinal numbers before a count noun singular (first, second, third...), the demonstratives; definite and indefinite numbers and measurements (hundred/hundreds, million/millions), or cardinal numbers before count nouns. The set of language quantifiers covers "a few, a great number, many, several (for count names); a bit of, a little, less, a great deal (amount) of, much (for mass names); all, plenty of, a lot of, enough, more, most, some, any, both, each, either, neither, every, no". For the complex case of unidentified amounts, the parts and examples of a mass are indicated with respect to the following: a measure of a mass (two kilos of rice and twenty bottles of milk or ten pieces of paper); a piece or part of a mass (part, element, atom, item, article, drop); or a shape of a container (a basket, box, case, cup, bottle, vessel, jar).
Further examples
Some further examples of quantities are:
1.76 litres (liters) of milk, a continuous quantity
2πr metres, where r is the length of a radius of a circle expressed in metres (or meters), also a continuous quantity
one apple, two apples, three apples, where the number is an integer representing the count of a denumerable collection of objects (apples)
500 people (also a type of count data)
a couple conventionally refers to two objects.
a few usually refers to an indefinite, but usually small number, greater than one.
quite a few also refers to an indefinite, but surprisingly (in relation to the context) large number.
several refers to an indefinite, but usually small, number – usually indefinitely greater than "a few".
Dimensionless quantity
See also
Physical quantity
Quantification (science)
Observable quantity
Numerical value equation
References
Sources
Aristotle, Logic (Organon): Categories, in Great Books of the Western World, V.1. ed. by Adler, M.J., Encyclopædia Britannica, Inc., Chicago (1990)
Aristotle, Physical Treatises: Physics, in Great Books of the Western World, V.1, ed. by Adler, M.J., Encyclopædia Britannica, Inc., Chicago (1990)
Aristotle, Metaphysics, in Great Books of the Western World, V.1, ed. by Adler, M.J., Encyclopædia Britannica, Inc., Chicago (1990)
Franklin, J. (2014). Quantity and number, in Neo-Aristotelian Perspectives in Metaphysics, ed. D.D. Novotny and L. Novak, New York: Routledge, 221–44.
Hölder, O. (1901). Die Axiome der Quantität und die Lehre vom Mass. Berichte über die Verhandlungen der Königlich Sachsischen Gesellschaft der Wissenschaften zu Leipzig, Mathematische-Physicke Klasse, 53, 1–64.
Klein, J. (1968). Greek Mathematical Thought and the Origin of Algebra. Cambridge. Mass: MIT Press.
Laycock, H. (2006). Words without Objects: Oxford, Clarendon Press. Oxfordscholarship.com
Michell, J. (1993). The origins of the representational theory of measurement: Helmholtz, Hölder, and Russell. Studies in History and Philosophy of Science, 24, 185–206.
Michell, J. (1999). Measurement in Psychology. Cambridge: Cambridge University Press.
Michell, J. & Ernst, C. (1996). The axioms of quantity and the theory of measurement: translated from Part I of Otto Hölder's German text "Die Axiome der Quantität und die Lehre vom Mass". Journal of Mathematical Psychology, 40, 235–252.
Newton, I. (1728/1967). Universal Arithmetic: Or, a Treatise of Arithmetical Composition and Resolution. In D.T. Whiteside (Ed.), The mathematical Works of Isaac Newton, Vol. 2 (pp. 3–134). New York: Johnson Reprint Corp.
Wallis, J. Mathesis universalis (as quoted in Klein, 1968).
External links
Metaphysical properties
Measurement
Ontology | Quantity | [
"Physics",
"Mathematics"
] | 2,441 | [
"Quantity",
"Physical quantities",
"Measurement",
"Size"
] |
691,838 | https://en.wikipedia.org/wiki/Unruh%20effect | The Unruh effect (also known as the Fulling–Davies–Unruh effect) is a theoretical prediction in quantum field theory that an observer who is uniformly accelerating through empty space will perceive a thermal bath. This means that even in the absence of any external heat sources, an accelerating observer will detect particles and experience a temperature. In contrast, an inertial observer in the same region of spacetime would observe no temperature.
In other words, the background appears to be warm from an accelerating reference frame. In layman's terms, an accelerating thermometer in empty space (like one being waved around), without any other contribution to its temperature, will record a non-zero temperature, just from its acceleration. Heuristically, for a uniformly accelerating observer, the ground state of an inertial observer is seen as a mixed state in thermodynamic equilibrium with a non-zero temperature bath.
The Unruh effect was first described by Stephen Fulling in 1973, Paul Davies in 1975 and W. G. Unruh in 1976. It is currently not clear whether the Unruh effect has actually been observed, since the claimed observations are disputed. There is also some doubt about whether the Unruh effect implies the existence of Unruh radiation.
Temperature equation
The Unruh temperature, sometimes called the Davies–Unruh temperature, was derived separately by Paul Davies and William Unruh and is the effective temperature experienced by a uniformly accelerating detector in a vacuum field. It is given by
T = ħa/(2πckB)
where ħ is the reduced Planck constant, a is the proper uniform acceleration, c is the speed of light, and kB is the Boltzmann constant. Thus, for example, a proper acceleration of 2.47×10²⁰ m/s² corresponds approximately to a temperature of 1 K. Conversely, an acceleration of 1 m/s² corresponds to a temperature of about 4.05×10⁻²¹ K.
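A quick numerical check of this relation in Python (the sample accelerations are simply the ones used for illustration above):

```python
# Unruh temperature T = hbar * a / (2 * pi * c * k_B), as reconstructed above.
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J s
C = 2.99792458e8         # speed of light, m / s
K_B = 1.380649e-23       # Boltzmann constant, J / K

def unruh_temperature(acceleration):
    """Unruh temperature (K) for a given proper acceleration (m/s^2)."""
    return HBAR * acceleration / (2 * math.pi * C * K_B)

print(unruh_temperature(2.47e20))  # roughly 1 K
print(unruh_temperature(1.0))      # roughly 4e-21 K
print(unruh_temperature(9.81))     # temperature at Earth-surface gravity
```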
The Unruh temperature has the same form as the Hawking temperature TH = ħκ/(2πckB), with κ denoting the surface gravity of a black hole, which was derived by Stephen Hawking in 1974. In the light of the equivalence principle, it is, therefore, sometimes called the Hawking–Unruh temperature.
Solving the Unruh temperature for the uniform acceleration, it can be expressed as
a = 2πckBT/ħ = 2π (T/TP) aP,
where aP is the Planck acceleration and TP is the Planck temperature.
Explanation
Unruh demonstrated theoretically that the notion of vacuum depends on the path of the observer through spacetime. From the viewpoint of the accelerating observer, the vacuum of the inertial observer will look like a state containing many particles in thermal equilibrium—a warm gas.
The Unruh effect would only appear to an accelerating observer. And although the Unruh effect would initially be perceived as counter-intuitive, it makes sense if the word vacuum is interpreted in the following specific way. In quantum field theory, the concept of "vacuum" is not the same as "empty space": Space is filled with the quantized fields that make up the universe. Vacuum is simply the lowest possible energy state of these fields.
The energy states of any quantized field are defined by the Hamiltonian, based on local conditions, including the time coordinate. According to special relativity, two observers moving relative to each other must use different time coordinates. If those observers are accelerating, there may be no shared coordinate system. Hence, the observers will see different quantum states and thus different vacua.
In some cases, the vacuum of one observer is not even in the space of quantum states of the other. In technical terms, this comes about because the two vacua lead to unitarily inequivalent representations of the quantum field canonical commutation relations. This is because two mutually accelerating observers may not be able to find a globally defined coordinate transformation relating their coordinate choices.
An accelerating observer will perceive an apparent event horizon forming (see Rindler spacetime). The existence of Unruh radiation could be linked to this apparent event horizon, putting it in the same conceptual framework as Hawking radiation. On the other hand, the theory of the Unruh effect explains that the definition of what constitutes a "particle" depends on the state of motion of the observer.
The free field needs to be decomposed into positive and negative frequency components before defining the creation and annihilation operators. This can only be done in spacetimes with a timelike Killing vector field. This decomposition happens to be different in Cartesian and Rindler coordinates (although the two are related by a Bogoliubov transformation). This explains why the "particle numbers", which are defined in terms of the creation and annihilation operators, are different in both coordinates.
The Rindler spacetime has a horizon, and locally any non-extremal black hole horizon is Rindler. So the Rindler spacetime gives the local properties of black holes and cosmological horizons. It is possible to rearrange the metric restricted to these regions to obtain the Rindler metric. The Unruh effect would then be the near-horizon form of Hawking radiation.
The Unruh effect is also expected to be present in de Sitter space.
It is worth stressing that the Unruh effect only says that, according to uniformly-accelerated observers, the vacuum state is a thermal state specified by its temperature, and one should resist reading too much into the thermal state or bath. Different thermal states or baths at the same temperature need not be equal, for they depend on the Hamiltonian describing the system. In particular, the thermal bath seen by accelerated observers in the vacuum state of a quantum field is not the same as a thermal state of the same field at the same temperature according to inertial observers. Furthermore, uniformly accelerated observers, static with respect to each other, can have different proper accelerations (depending on their separation), which is a direct consequence of relativistic red-shift effects. This makes the Unruh temperature spatially inhomogeneous across the uniformly accelerated frame.
Calculations
In special relativity, an observer moving with uniform proper acceleration through Minkowski spacetime is conveniently described with Rindler coordinates, which are related to the standard (Cartesian) Minkowski coordinates by
x = ρ cosh(σ), t = ρ sinh(σ).
The line element in Rindler coordinates, i.e. Rindler space, is
ds² = −ρ² dσ² + dρ²,
where ρ ≥ 0, and where σ is related to the observer's proper time τ by σ = aτ (here c = 1).
An observer moving with fixed ρ traces out a hyperbola in Minkowski space, therefore this type of motion is called hyperbolic motion. The coordinate ρ is related to the Schwarzschild spherical coordinate by the relation
An observer moving along a path of constant ρ is uniformly accelerating, and is coupled to field modes which have a definite steady frequency as a function of σ. These modes are constantly Doppler shifted relative to ordinary Minkowski time t as the detector accelerates, and they change in frequency by enormous factors, even after only a short proper time.
Translation in σ is a symmetry of Minkowski space: it can be shown that it corresponds to a boost in the x, t coordinates around the origin. Any time translation in quantum mechanics is generated by the Hamiltonian operator. For a detector coupled to modes with a definite frequency in σ, we can treat σ as "time" and the boost operator is then the corresponding Hamiltonian. In Euclidean field theory, where the minus sign in front of the time in the Rindler metric is changed to a plus sign by multiplying the Rindler time by i, i.e. a Wick rotation or imaginary time, the Rindler metric is turned into a polar-coordinate-like metric. Therefore any rotations must close themselves after 2π in a Euclidean metric to avoid being singular. So the Euclidean Rindler time is periodic with period 2π.
A path integral with real time coordinate is dual to a thermal partition function, related by a Wick rotation. The periodicity of imaginary time corresponds to a temperature of 1/β in thermal quantum field theory. Note that the path integral for this Hamiltonian is closed with period 2π. This means that the modes are thermally occupied with temperature 1/(2π). This is not an actual temperature, because 1/(2π) is dimensionless. It is conjugate to the timelike polar angle σ, which is also dimensionless. To restore the length dimension, note that a mode of fixed frequency f in σ at position ρ has a frequency which is determined by the square root of the (absolute value of the) metric at ρ, the redshift factor. This can be seen by transforming the time coordinate of a Rindler observer at fixed ρ to an inertial, co-moving observer observing a proper time. From the Rindler line element given above, this is just ρ. The actual inverse temperature at this point is therefore β = 2πρ.
It can be shown that the acceleration of a trajectory at constant $\rho$ in Rindler coordinates is equal to $1/\rho$, so the actual inverse temperature observed is
$$\beta = \frac{2\pi}{a}.$$
Restoring units yields
$$k_\text{B}T = \frac{\hbar a}{2\pi c}.$$
The temperature of the vacuum, seen by an isolated observer accelerating at the Earth's gravitational acceleration of g = 9.81 m/s², is only about 4×10⁻²⁰ K. For an experimental test of the Unruh effect it is planned to use accelerations up to 10²⁶ m/s², which would give a temperature of about 400,000 K.
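A short numerical sketch of the formula above; the constant values are standard approximations and the two accelerations are the ones quoted in the text.

```python
# Numerical check of the Unruh temperature T = hbar * a / (2 * pi * c * k_B)
# for two representative proper accelerations.
import math

hbar = 1.054_571_817e-34   # reduced Planck constant, J*s
c = 2.997_924_58e8         # speed of light, m/s
k_B = 1.380_649e-23        # Boltzmann constant, J/K

def unruh_temperature(a):
    """Unruh temperature in kelvin for a proper acceleration a in m/s^2."""
    return hbar * a / (2 * math.pi * c * k_B)

print(unruh_temperature(9.81))   # ~4e-20 K for Earth-surface acceleration
print(unruh_temperature(1e26))   # ~4e5 K for the proposed experimental range
```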
The Rindler derivation of the Unruh effect is unsatisfactory to some, since the detector's path is super-deterministic. Unruh later developed the Unruh–DeWitt particle detector model to circumvent this objection.
Other implications
The Unruh effect would also cause the decay rate of accelerating particles to differ from inertial particles. Stable particles like the electron could have nonzero transition rates to higher mass states when accelerating at a high enough rate.
Unruh radiation
Although Unruh's prediction that an accelerating detector would see a thermal bath is not controversial, the interpretation of the transitions in the detector in the non-accelerating frame is. It is widely, although not universally, believed that each transition in the detector is accompanied by the emission of a particle, and that this particle will propagate to infinity and be seen as Unruh radiation.
The existence of Unruh radiation is not universally accepted. Smolyaninov claims that it has already been observed, while O'Connell and Ford claim that it is not emitted at all. While these skeptics accept that an accelerating object thermalizes at the Unruh temperature, they do not believe that this leads to the emission of photons, arguing that the emission and absorption rates of the accelerating particle are balanced.
Experimental observation
Researchers claim experiments that successfully detected the Sokolov–Ternov effect may also detect the Unruh effect under certain conditions.
Theoretical work in 2011 suggests that accelerating detectors could be used for the direct detection of the Unruh effect with current technology.
The Unruh effect may have been observed for the first time in 2019 in the high energy channeling radiation explored by the NA63 experiment at CERN.
See also
Dynamical Casimir effect
Cosmic Background Radiation
Hawking radiation
Black hole thermodynamics
Pair production
Quantum information
Superradiance
Virtual particle
References
Further reading
External links
Thermodynamics
Quantum field theory
Theory of relativity
Acceleration
Physical phenomena
Hypothetical processes | Unruh effect | [
"Physics",
"Chemistry",
"Mathematics"
] | 2,235 | [
"Quantum field theory",
"Physical phenomena",
"Hypotheses in physics",
"Physical quantities",
"Acceleration",
"Quantity",
"Theoretical physics",
"Quantum mechanics",
"Thermodynamics",
"Theory of relativity",
"Wikipedia categories named after physical quantities",
"Dynamical systems"
] |
691,872 | https://en.wikipedia.org/wiki/Royal%20Aeronautical%20Society | The Royal Aeronautical Society, also known as the RAeS, is a British multi-disciplinary professional institution dedicated to the global aerospace community. Founded in 1866, it is the oldest aeronautical society in the world. Members, Fellows, and Companions of the society can use the post-nominal letters MRAeS, FRAeS, or CRAeS, respectively.
Function
The objectives of The Royal Aeronautical Society include: to support and maintain high professional standards in aerospace disciplines; to provide a unique source of specialist information and a local forum for the exchange of ideas; and to exert influence in the interests of aerospace in the public and industrial arenas, including universities.
The Royal Aeronautical Society is a worldwide society with an international network of 67 branches. Many practitioners of aerospace disciplines use the Society's designatory post-nominals such as FRAeS, CRAeS, MRAeS, AMRAeS, and ARAeS (incorporating the former graduate grade, GradRAeS).
The RAeS headquarters is at 4 Hamilton Place, London, W1J 7BQ. In addition to offices for its staff the building is used for Society events and parts of the building are available for private hire.
Publications
The Journal of the Royal Aeronautical Society: (1923–1967)
The Aeronautical Quarterly: (1949–1983)
Aerospace: (1969–1997)
Aerospace International: (1997–2013)
The Aerospace Professional: (1998–2013)
The Aeronautical Journal: (1897 to date)
The Journal of Aeronautical History: (2011 to date)
AEROSPACE: (2013 to date)
Branches and divisions
Branches deliver membership benefits and disseminate aerospace information. As of September 2013, branches located in the United Kingdom include: Belfast, Birmingham, Boscombe Down, Bristol, Brough, Cambridge, Cardiff, Chester, Christchurch, Coventry, Cranfield, Cranwell, Derby, FAA Yeovilton, Farnborough, Gatwick, Gloucester & Cheltenham, Hatfield, Heathrow, Highland, Isle of Wight, Isle of Man, Loughborough, Manchester, Marham, Medway, Oxford, Preston, Prestwick, Sheffield, Solent, Southend, Stevenage, Swindon, Weybridge, and Yeovil.
The RAeS international branch network includes: Adelaide, Auckland, Blenheim, Brisbane, Brussels, Canberra, Canterbury, Cyprus, Dublin, Hamburg, Hamilton, Hong Kong, Malaysia, Melbourne, Montreal, Munich, Palmerston North, Paris, Perth, Seattle, Singapore, Sydney, Toulouse, and the UAE.
Divisions of the Society have been formed in countries and regions that can sustain a number of Branches. Divisions operate with a large degree of autonomy, being responsible for their own branch network, membership recruitment, subscription levels, conference and lecture programmes.
Specialist Groups covering various facets of the aerospace industry exist under the overall umbrella of the Society, with the aim of serving the interests of both enthusiasts and industry professionals. Their remit is to consider significant developments in their field through conferences and lectures, with the intention of stimulating debate and facilitating action on key industry issues. The Groups also act as focal points for all enquiries to the Society concerning their specialist subject matter.
As of September 2013, the Specialist Group committees are: Aerodynamics, Aerospace Medicine, Air Power, Air Law, Air Transport, Airworthiness & Maintenance, Avionics & Systems, Environment, Flight Operations, Flight Simulation, Flight Test, General Aviation, Greener by Design, Historical, Human Factors, Human Powered Flight, Propulsion, Rotorcraft, Space, Structures & Materials, UAS, Weapons Systems & Technologies, and Women in Aviation & Aerospace.
In 2009, the Royal Aeronautical Society formed a group of experts to document how to better simulate aircraft upset conditions, and thus improve training programmes.
History
The Society was founded in January 1866 with the name "The Aeronautical Society of Great Britain" and is the oldest aeronautical society in the world. Early or founding members included James Glaisher, Francis Wenham, the Duke of Argyll, and Frederick Brearey. In the first year, there were 65 members, at the end of the second year, 91 members, and in the third year, 106 members. Annual reports were produced in the first decades. In 1868 the Society held a major exhibition at London's Crystal Palace with 78 entries. John Stringfellow's steam engine was shown there. The Society sponsored the first wind tunnel in 1870–71, designed by Wenham and Browning.
In 1918, the organisation's name was changed to the Royal Aeronautical Society.
In 1923 its principal journal was renamed from The Aeronautical Journal to The Journal of the Royal Aeronautical Society and in 1927 the Institution of Aeronautical Engineers Journal was merged into it.
In 1940, the RAeS responded to the wartime need to expand the aircraft industry. The Society established a Technical Department to bring together the best available knowledge and present it in an authoritative and accessible form – a working tool for engineers who might come from other industries and lack the specialised knowledge required for aircraft design. This technical department became known as the Engineering Sciences Data Unit (ESDU) and eventually became a separate entity in the 1980s.
In 1987 the 'Society of Licensed Aircraft Engineers and Technologists', previously called the 'Society of Licensed Aircraft Engineers' was incorporated into the Royal Aeronautical Society.
Presidents
The following have served as President of the Royal Aeronautical Society:
1866–95 George Campbell, 8th Duke of Argyll
1895–00 (None)
1900–07 B. Baden-Powell
1907–08 (None)
1908–11 E. P. Frost
1911–19 (None)
1919–26 William Weir, 1st Viscount Weir
1926–27 Air Vice Marshal Sir William Sefton Brancker
1927–30 W Forbes-Sempill
1930–34 C. R. Fairey
1934–36 J. Moore-Brabazon
1936–38 H. E. Wimperis
1938–40 R. Fedden
1940–42 G. Brewer
1942–44 A. Gouge
1944–45 Sir Roy Fedden
1945–47 F. H. Page
1947–49 H. R. Cox
1949–50 J. Buchanan
1950–51 G. P. Bulman
1951–52 F. B. Halford
1952–53 G. Dowty
1953–54 Sir William Farren
1954–55 S. Camm
1955–56 N. E. Rowe
1956–57 E. T. Jones
1957–58 G. R. Edwards
1958–59 A. A. Hall
1959–60 P. G. Masefield
1960–61 E. S. Moult
1961–62 R. O. Jones
1962–63 B. S. Shenstone
1963–64 A. R. Collar
1964–65 H.H Gardner
1965–66 Sir George Gardner
1966- Honorary President Prince Philip, Duke of Edinburgh
1966–67 A.D Baxter
1967–68 M.B Morgan
1968–69 David Keith-Lucas
1969–70 F. R. Banks
1970–71 Air Commodore J.R Morgan
1971–72 S.D Davies
1972–73 K.G Wilkinson
1973–74 Dr G.S Hislop
1974–75 B.P Laight
1975–76 Air Marshal Sir Charles Pringle
1976–77 C. Abell
1977–78 Handel Davies
1978–79 Professor L.F Crabtree
1979–80 R.P Probert
1980–81 P.A Hearne
1981–82 J.T Stamper
1982–83 Captain E.M Brown
1983–84 Professor M.G Farley
1984–85 Geoffrey Pardoe
1985–86 Thomas Kerr
1986–87 John Fozard
1987–88 John Stollery
1988–89 Dr P.H Calder
1989–90 Dr H. Metcalfe
1990–91 G.C Howell
1991–92 G.M McCoombe
1992–93 Air Marshal Sir Frank Holroyd
1993–94 Dr G.G Pope
1994–95 Sir C.B.G Masefield
1995–96 Sir Donald Spiers
1996–97 Professor John Green
1997–98 Stewart M John
1998–99 Captain W.D Lowe
1999–00 Lanthony Edwards
2000–01 Trevor Trueman
2001–02 Professor Ian Poll
2002–03 Lee Balthazor
2003–04 Air Marshal Sir Peter Norriss
2004–05 Roland Fairfield
2005–06 Air Marshal Sir Colin Terry
2006–07 Gordon F. Page
2007–08 David Marshall
2008–09 Captain David Rowland
2009–10 Dr. Mike Steeden
2010–11 Air Vice-Marshal David Couzens
2011–12 Lee Balthazor
2012–13 Phil Boyle
2013–14 Jenny Body
2014–15 Air Commodore Bill Tyack
2015–16 Martin Broadhurst
2016 Honorary President Prince Charles, Prince of Wales
2016–17 Professor Chris Atkin
2017–18 Sir Stephen Dalton FRAeS
2018–19 Rear Adm Simon Henley CEng FRAeS
2019–21 Professor Jonathan Cooper
2021–22 Howard Nye MInstP FRAeS
2022–23 Air Cdre Peter Round FRAeS
2023–24 Kerissa Khan MRAeS
2024–25 David Chinn FRAeS
Chief Executives
Keith Mans was chief executive from 1998–2009
Simon Luxmoore was chief executive from 2009–2018
Sir Brian Burridge CBE FRAeS, from 1 October 2018
David Edwards FRAeS, from 1 October 2021
Medals and awards
In addition to the award of Fellowship of the Royal Aeronautical Society (FRAeS), the Society awards several other medals and prizes. These include its Gold, Silver, and Bronze medals. The very first gold medal was awarded in 1909 to the Wright Brothers. Although it is unusual for more than one medal (in each of the three grades) to be awarded annually, since 2004 the Society has also periodically awarded team medals (Gold, Silver, and Bronze) for exceptional or groundbreaking teamwork in aeronautical research and development. Others awarded have included the R. P. Alston Memorial Prize for developments in flight-testing, the Edward Busk prize for applied aerodynamics, the Wakefield Medal for advances in aviation safety, and an Orville Wright Prize. Honorary Fellowships and Honorary Companionships are awarded as well.
The Sir Robert Hardingham Sword
The Sir Robert Hardingham Sword is awarded in recognition of outstanding service to the RAeS by a member of the Society. Nominally an annual award, in practice the award is only made about one year in two.
Notable Medal recipients
Notable Gold Medal recipients include:
1909 – Wilbur and Orville Wright
1910 – Octave Chanute
1945 – Air Cdre Frank Whittle
1950 – Sir Geoffrey de Havilland
1955 – Ernest Hives, 1st Baron Hives
1958 – Sydney Camm
1959 – Marcel Dassault
1960 – Sir Frederick Handley Page
1977 – George Lee
1983 – Geoffrey Lilley
1993 – Reimar Horten
2012 – Elon Musk
2018 – Peter Beck
Honorary Fellows
1950 Sir Thomas Sopwith
1953 The Duke of Edinburgh
1954 Air Commodore Sir Frank Whittle
1957 The Prince of The Netherlands
1959 Professor J. Ackeret
1960 Sir George Edwards
1962 N. E. Rowe
1963 Sir Alfred Pugsley
1964 Sir Denning Pearson
1965 Sir Arnold Hall
1969 Dr R. R. Gilruth
1969 Lord Kings Norton
1969 Sir Archibald Russell
1970 Sir Robert Cockburn
1971 Professor Sydney Goldstein
1974 S. D. Davies
1975 C. Abell
1975 H. A. L. Ziegler
1976 Sir Keith Granville
1977 Sir William Hawthorne
1978 The Prince of Wales
1978 Dr O. Nagano
1978 Dr W. Tye
1979 Professor D. Keith-Lucas
1980 E. H. Heinemann
1980 Sir Frederick Page
1980 Sir Peter Masefield
1981 Sir Robert Hunt
1982 H. Davies
1983 Dr G. S. Hislop
1983 Professor Dipl-Ing G. Madelung
1983 R. H. Beteille
1984 J. T. Stamper
1984 Professor A. D. Young
1984 Sir Philip Foreman
1985 J. F. Sutter
1985 King Hussein of Jordan
1985 Sir Roy Sisson
1986 Professor J. H. Argyris
1986 Dr K. G. Wilkinson
1987 F. Cereti
1988 Professor H. Ashley
1988 G. P. Dollimore
1989 Admiral Sir Raymond Lygo
1989 Air Marshal Sir Charles Pringle
1989 F. d' Allest
1990 P. A. Hearne
1990 Sir James Lighthill
1991 Sir Ralph Robins
1992 Professor Em Dr-Ing K. H. Doetsch
1992 Sir John Charnley
1992 G. H. Lee
1993 The Duke of Kent
1993 Professor Dr.-Ing. B. J. Habibie
1993 R. W. Howard
1994 Baroness Platt of Writtle
1994 Lord Tombs of Brailes
1994 S. Gillibrand
1995 C. H. Kaman
1995 Professor J. L. Stollery
1995 R. W. R. McNulty
1996 P. M. Condit
1996 Sir Richard H. Evans
1997 J. Pierson
1997 N. Augustine
1997 J. Cunningham
1998 M. Flanagan
1998 R. Belyakov
1998 R. Yates
1998 S. Ajaz Ali
1999 A. Caporaletti
1999 D. Burrell
2000 N. Barber
2000 Professor Ing E. Vallerani
2000 R. Collette
2000 Sir Donald Spiers
2001 A. Welch OBE
2001 Dr B. Halse
2001 J. Bechat
2001 Sir Arthur Marshall OBE
2001 Richard Manson
2002 A Mulally
2003 P Ruffles
2003 Prof. Sir John Horlock
2003 J. Thomas
2004 Captain Eric Brown
2004 Alain Garcia
2005 Sir Michael Cobham
2006 General Charles E. Yeager
2006 Air Vice-Marshal Professor R.A. Mason
2008 Edward George Nicholas Paul Patrick Kent
2008 Professor Beric Skews
2008 Norman Barber
2008 Philip Murray Condit
2008 Ralph Robins
2008 Rene Collette
2008 Richard Harry Evans
2008 Roy McNulty
2009 William Kenneth Maciver CBE
2009 Gordon Page CBE
2012 Ing S Pancotti
2012 Professor M Gaster
2013 Professor K Ridgway CBE
2013 Professor R J Stalker
2014 C P Smith CBE
2014 Professor B Cheng
2014 J-P Herteman
2014 Colin Smith
2014 Jean-Paul Herteman
2014 John Balfour
2014 Keith Ridgway
2014 Michael Gaster
2014 Santino Pancotti
2015 Professor Sir Martin Sweeting OBE
2015 J-J Dordain
2015 Professor R K Agarwal
2016 P Fabre
2016 Sir Michael Marshall CBE
2016 Major T N Peake CMG
2016 Dr D W Richardson
2016 M J Ryan CBE
2016 Jean-Jacques Dordain
2016 Martin Sweeting
2016 Michael Ryan
2016 Pierre Fabre
2016 Ramesh Agarwal
2016 Timothy Peake
2017 Professor R Bor
2018 Major General Desmond Barker
2018 M Bryson CBE
2018 F R Donaldson
2018 Francis Donaldson
2018 Joseph Kittinger
2018 Marcus Bryson
2018 Colonel J W Kittinger Jr
2019 Dr G. Satheesh Reddy
2019 Asad Madni
2019 G Satheesh Reddy
2019 Alexander Smits
2019 Ashwani Gupta
2019 Fabio Nannoni
2020 Meyer Benzakein
2020 Tom Williams
2020 Trevor Birch
2021 Gwynne Shotwell
2021 Jim Bridenstine
2021 Johann-Dietrich Wörner
2021 John Tracy
2021 Paul Kaminski
2021 Paul Nielsen
2021 Robert Winn
2022 Colin Paynter
2022 Jonathan Cooper
2022 Tewolde Tewolde
Honorary Companions
1961 Sir John Toothill
1963 Lord Wilberforce
1965 L. A. Wingfield
1966 J. Davison
1973 Lord Elworthy
1975 H. Kremer
1975 Sir R. Verdon-Smith
1978 J. R. Stainton
1979 Lord Keith of Castleacre
1980 Sir Arthur Marshall
1982 Sir Douglas Lowe
1983 L. C. Hunting
1985 Lord King of Wartnaby
1985 F. A. A. Wootton
1986 G. Pattie
1987 Sir Norman Payne
1988 Sir Colin Marshall
1989 Air Chief Marshal Sir Peter Harding
1989 M. D. Bishop
1990 T. Mayer
1991 R. F. Baxter
1991 Sir Adrian Swire
1992 Dr T. A. Ryan
1993 Sir Richard Branson
1994 Professor C. J. Pennycuick
1995 Air Marshal M. Nur Khan
1996 Sir Neil Cossons
1997 A. J. Goldman
1997 R. D. Lapthorne
1998 P. Martin
1999 Sheikh Hamdan bin Mubarak Al Nahyan
2000 Sheikh Ahmed Bin Saeed Al Maktoum
2002 J Travolta
2002 R Turnill
2003 Dr C C Kong
2008 Ahmed Bin Saeed Al Maktoum
2008 Michael David Bishop
2008 Neil Cossons
2008 Richard Charles Nicholas Branson
2010 Giovanni Bisignani
2014 Philip Jarrett
2015 David Bent
2016 Charles Clarke
2016 David Bent
2016 Elizabeth Hughes
2016 Roger Bone
2020 Idris Ben-Tahir
2020 Jeffrey Shane
2022 Michael Turner
Named Lectures
Henson & Stringfellow Lecture and Dinner
The annual Henson & Stringfellow Lecture and Dinner is hosted by the Yeovil Branch of the Royal Aeronautical Society, held at the Westland Leisure Complex, and is a key social and networking event of the Yeovil lecture season. It is a black tie event attracting over 200 guests drawn from all sectors of the aerospace community.
John Stringfellow, working alongside William Samuel Henson, created the first powered aircraft, developed in Chard, Somerset; it flew unmanned in 1848, 55 years before Wilbur and Orville Wright's manned flight.
Wilbur & Orville Wright Named Lecture
The Wilbur & Orville Wright Named Lecture was established in 1911 to honour the Wright brothers, the mechanical engineers who completed the first successful controlled powered flight on 17 December 1903. The Wilbur & Orville Wright Lecture is the principal event in the Society's year, given by distinguished members of the US and UK aerospace communities.
The 99th Lecture was given by Piers Sellers, astronaut, on 9 December 2010 at the Society's Headquarters in London.
The 100th Lecture was given by Suzanna Darcy-Henneman, Chief Pilot & Director of Training, Boeing Commercial Airplanes, on 8 December 2011.
The 101st Lecture was given by Tony Parasida, corporate vice president, The Boeing Company, on 20 December 2012.
The 102nd Lecture was given by Thomas Enders, CEO of EADS, on 12 December 2013.
The 103rd Lecture was given by Patrick M Dewar, executive vice president, Lockheed Martin International in December 2014.
The 104th Lecture was given by Nigel Whitehead, Group Managing Director – Programmes and Support, BAE Systems plc in December 2015.
The 105th Lecture was given by ACM Sir Stephen Hillier, Chief of the Air Staff, Royal Air Force on 6 December 2016.
The 106th Lecture was given by Martin Rolfe, chief executive officer, NATS on 5 December 2017.
The 107th Lecture was given by Leanne Caret, Vice President, The Boeing Company and President & CEO, Boeing Defense, Space & Security on 4 December 2018.
The 108th Lecture was given by David Mackay FRAeS, Chief Pilot, Virgin Galactic on 10 December 2019.
Amy Johnson Named Lecture
The Amy Johnson Named Lecture was inaugurated in 2011 by the Royal Aeronautical Society's Women in Aviation and Aerospace Committee to celebrate a century of women in flight and to honour Britain's most famous woman aviator. The Lecture is held on or close to 6 July every year to mark the date in 1929 when Amy Johnson was awarded her pilot’s licence. The Lecture is intended to tackle serious issues of interest to a wide audience, not just women. High-profile women from industry are asked to lecture on a topic that speaks of future challenges of interest to everyone.
Carolyn McCall, chief executive of EasyJet, delivered the Inaugural Lecture on 6 July 2011 at the Society's Headquarters in London.
The second Amy Johnson Named Lecture was delivered by Marion C. Blakey, president and chief executive of Aerospace Industries Association (AIA), on 5 July 2012.
The third Lecture was delivered by Gretchen Haskins, former Group Director of the Safety Regulation Group of the UK Civil Aviation Authority (CAA), on 8 July 2013.
In 2017, Katherine Bennett OBE FRAeS, Senior Vice President Public Affairs, Airbus gave the Amy Johnson Lecture and in 2018 Air Vice-Marshal Sue Gray, CB, OBE from the Royal Air Force gave the Amy Johnson Lecture in honour of the 100th anniversary of the RAF.
Sopwith Named Lecture
The Sopwith Lecture was established in 1990 to honour Sir Thomas Sopwith CBE, Hon FRAeS. In the years prior to World War I, Sopwith became England’s premier aviator and established the first authoritative test pilot school in the world. He also founded England’s first major flight school. Between 1912 and 1920 Sopwith’s Company produced over 16,000 aircraft of 60 types.
In 2017 the lecture was delivered by Tony Wood, chief operating officer of Meggitt PLC.
In 2018 the lecture was delivered by Group Captain Ian Townsend ADC MA RAF, Station Commander, RAF Marham.
In 2019 the lecture was delivered by Billie Flynn, F-35 Lightning II Test Pilot, Lockheed Martin.
In 2020 the lecture was delivered online by Dirk Hoke, CEO, Airbus Defence & Space.
In popular culture
The 18 July 1975 edition of the society's Journal included the first use in a sentence of the misattributed phrase "Beam Me Up, Scotty", viz: "...in a sort of, 'Beam me up, Scotty', routine".
References
External links
Official RAeS site
List of awards of Medals
RAeS Flight Simulation Group site
New Zealand Division site
Australian Division site
Montreal Branch site
Chard Museum The Birth of Powered Flight.
Aero Society Podcast The Official RAeS online media channel
Video clips
Aero Society YouTube channel
RAeS Careers
1866 establishments in the United Kingdom
Aerospace engineering organizations
Aeronautics organizations
Aviation organisations based in the United Kingdom
ECUK Licensed Members
Learned societies of the United Kingdom
Organisations based in London with royal patronage
Organisations based in the City of Westminster
Organizations established in 1866
Aeronautical
Science and technology in the United Kingdom | Royal Aeronautical Society | [
"Engineering"
] | 4,391 | [
"Aerospace engineering",
"Aerospace engineering organizations",
"Royal Aeronautical Society",
"Aeronautics organizations"
] |
691,927 | https://en.wikipedia.org/wiki/Invariant%20subspace%20problem | In the field of mathematics known as functional analysis, the invariant subspace problem is a partially unresolved problem asking whether every bounded operator on a complex Banach space sends some non-trivial closed subspace to itself. Many variants of the problem have been solved, by restricting the class of bounded operators considered or by specifying a particular class of Banach spaces. The problem is still open for separable Hilbert spaces (in other words, each example, found so far, of an operator with no non-trivial invariant subspaces is an operator that acts on a Banach space that is not isomorphic to a separable Hilbert space).
History
The problem seems to have been stated in the mid-20th century after work by Beurling and von Neumann, who found (but never published) a positive solution for the case of compact operators. It was then posed by Paul Halmos for the case of operators $T$ such that $T^2$ is compact. This was resolved affirmatively, for the more general class of polynomially compact operators (operators $T$ such that $p(T)$ is a compact operator for a suitably chosen non-zero polynomial $p$), by Allen R. Bernstein and Abraham Robinson in 1966 (see below for a summary of the proof).
For Banach spaces, the first example of an operator without an invariant subspace was constructed by Per Enflo. He proposed a counterexample to the invariant subspace problem in 1975, publishing an outline in 1976. Enflo submitted the full article in 1981, and the article's complexity and length delayed its publication to 1987. Enflo's long "manuscript had a world-wide circulation among mathematicians" and some of its ideas were described in publications besides Enflo (1976). Enflo's works inspired similar constructions of operators without an invariant subspace, for example by Bernard Beauzamy, who acknowledged Enflo's ideas.
In the 1990s, Enflo developed a "constructive" approach to the invariant subspace problem on Hilbert spaces.
In May 2023, a preprint of Enflo appeared on arXiv, which, if correct, solves the problem for Hilbert spaces and completes the picture.
In July 2023, a second and independent preprint of Neville appeared on arXiv, claiming the solution of the problem for separable Hilbert spaces.
In September 2024, a peer-reviewed article published in Axioms by a team of four Jordanian academic researchers announced that they had solved the invariant subspace problem. However, basic mistakes in the proof were pointed out.
Precise statement
Formally, the invariant subspace problem for a complex Banach space $H$ of dimension > 1 is the question whether every bounded linear operator $T : H \to H$ has a non-trivial closed $T$-invariant subspace: a closed linear subspace $W$ of $H$, which is different from $\{0\}$ and from $H$, such that $T(W) \subseteq W$.
A negative answer to the problem is closely related to properties of the orbits $T^n x$. If $x$ is an element of the Banach space $H$, the orbit of $x$ under the action of $T$, denoted by $[x]$, is the subspace generated by the sequence $\{T^n(x) : n \ge 0\}$. This is also called the $T$-cyclic subspace generated by $x$. From the definition it follows that $[x]$ is a $T$-invariant subspace. Moreover, it is the minimal $T$-invariant subspace containing $x$: if $W$ is another invariant subspace containing $x$, then necessarily $T^n(x) \in W$ for all $n \ge 0$ (since $W$ is $T$-invariant), and so $[x] \subseteq W$. If $x$ is non-zero, then $[x]$ is not equal to $\{0\}$, so its closure is either the whole space $H$ (in which case $x$ is said to be a cyclic vector for $T$) or it is a non-trivial $T$-invariant subspace. Therefore, a counterexample to the invariant subspace problem would be a Banach space $H$ and a bounded operator $T : H \to H$ for which every non-zero vector $x \in H$ is a cyclic vector for $T$. (Where a "cyclic vector" $x$ for an operator $T$ on a Banach space $H$ means one for which the orbit $[x]$ of $x$ is dense in $H$.)
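The following toy sketch illustrates the orbit and cyclic-vector notions in a finite-dimensional space (where the problem itself is trivial); the matrix and vectors are arbitrary choices for illustration.

```python
# Finite-dimensional illustration of the orbit [x] = span{x, Tx, T^2 x, ...}.
# Only a toy: in finite dimensions the invariant subspace problem is trivial,
# but the notions of orbit and cyclic vector are the same.
import numpy as np

T = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])   # nilpotent shift on C^3: e3 -> e2 -> e1 -> 0

def orbit_dimension(T, x, powers=10):
    vectors = [np.linalg.matrix_power(T, k) @ x for k in range(powers)]
    return np.linalg.matrix_rank(np.column_stack(vectors))

e1 = np.array([1.0, 0.0, 0.0])
e3 = np.array([0.0, 0.0, 1.0])

print(orbit_dimension(T, e1))  # 1: span{e1} is a proper T-invariant subspace
print(orbit_dimension(T, e3))  # 3: e3 is a cyclic vector for this operator
```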
Known special cases
While the case of the invariant subspace problem for separable Hilbert spaces is still open, several other cases have been settled for topological vector spaces (over the field of complex numbers):
For finite-dimensional complex vector spaces, every operator admits an eigenvector, so it has a 1-dimensional invariant subspace.
The conjecture is true if the Hilbert space $H$ is not separable (i.e. if it has an uncountable orthonormal basis). In fact, if $x$ is a non-zero vector in $H$, the norm closure of the linear orbit $[x]$ is separable (by construction) and hence a proper subspace and also invariant.
von Neumann showed that any compact operator on a Hilbert space of dimension at least 2 has a non-trivial invariant subspace.
The spectral theorem shows that all normal operators admit invariant subspaces.
Aronszajn & Smith (1954) proved that every compact operator on any Banach space of dimension at least 2 has an invariant subspace.
Bernstein & Robinson (1966) proved using non-standard analysis that if the operator $T$ on a Hilbert space is polynomially compact (in other words $p(T)$ is compact for some non-zero polynomial $p$) then $T$ has an invariant subspace. Their proof uses the original idea of embedding the infinite-dimensional Hilbert space in a hyperfinite-dimensional Hilbert space (see Non-standard analysis#Invariant subspace problem).
Halmos, after having seen Robinson's preprint, eliminated the non-standard analysis from it and provided a shorter proof in the same issue of the same journal.
Lomonosov (1973) gave a very short proof using the Schauder fixed point theorem that if the operator $T$ on a Banach space commutes with a non-zero compact operator then $T$ has a non-trivial invariant subspace. This includes the case of polynomially compact operators because an operator commutes with any polynomial in itself. More generally, he showed that if $T$ commutes with a non-scalar operator that commutes with a non-zero compact operator, then $T$ has an invariant subspace.
The first example of an operator on a Banach space with no non-trivial invariant subspaces was found by Enflo (1976, 1987), and his example was simplified by Beauzamy (1985).
The first counterexample on a "classical" Banach space was found by Charles Read (1984, 1985), who described an operator on the classical Banach space $\ell_1$ with no invariant subspaces.
Later, Read (1988) constructed an operator on $\ell_1$ without even a non-trivial closed invariant subset, that is, for every non-zero vector $x$ the set $\{T^n(x) : n \ge 0\}$ is dense, in which case the vector is called hypercyclic (the difference with the case of cyclic vectors is that we are not taking the subspace generated by the points in this case).
Atzmon (1983) gave an example of an operator without invariant subspaces on a nuclear Fréchet space.
Śliwa (2008) proved that any infinite dimensional Banach space of countable type over a non-Archimedean field admits a bounded linear operator without a non-trivial closed invariant subspace. This completely solves the non-Archimedean version of this problem, posed by van Rooij and Schikhof in 1992.
Argyros & Haydon (2009) gave the construction of an infinite-dimensional Banach space such that every continuous operator is the sum of a compact operator and a scalar operator, so in particular every operator has an invariant subspace.
Notes
References
Invariant subspaces
Operator theory
Functional analysis
Unsolved problems in mathematics
Mathematical problems | Invariant subspace problem | [
"Mathematics"
] | 1,463 | [
"Unsolved problems in mathematics",
"Functions and mappings",
"Functional analysis",
"Mathematical objects",
"Mathematical relations",
"Mathematical problems"
] |
692,003 | https://en.wikipedia.org/wiki/Kibble%20balance | A Kibble balance (also formerly known as a watt balance) is an electromechanical measuring instrument that measures the weight of a test object very precisely by the electric current and voltage needed to produce a compensating force. It is a metrological instrument that can realize the definition of the kilogram unit of mass based on fundamental constants.
It was originally known as a watt balance because the weight of the test mass is proportional to the product of current and voltage, which is measured in watts. In June 2016, two months after the death of its inventor, Bryan Kibble, metrologists of the Consultative Committee for Units of the International Committee for Weights and Measures agreed to rename the device in his honor.
Prior to 2019, the definition of the kilogram was based on a physical object known as the International Prototype of the Kilogram (IPK).
After considering alternatives, in 2013 the General Conference on Weights and Measures (CGPM) agreed on accuracy criteria for replacing this definition with one based on the use of a Kibble balance. After these criteria had been achieved, the CGPM voted unanimously on November 16, 2018, to change the definition of the kilogram and several other units, effective May 20, 2019, to coincide with World Metrology Day. There is also a method called the joule balance. All methods that use the fixed numerical value of the Planck constant are sometimes called the Planck balance.
Design
The Kibble balance is a more accurate version of the ampere balance, an early current measuring instrument in which the force between two current-carrying coils of wire is measured and then used to calculate the magnitude of the current. The Kibble balance operates in the opposite sense: the current in the coils is set very precisely by the Planck constant, and the force between the coils is used to measure the weight of a test kilogram mass. Then the mass is calculated from the weight by accurately measuring the local Earth's gravity (the net acceleration combining gravitational and centrifugal effects) with a gravimeter. Thus the mass of the object is defined in terms of a current and a voltage, allowing the device to "measure mass without recourse to the IPK (International Prototype Kilogram) or any physical object".
Origin
The principle that is used in the Kibble balance was proposed by Bryan Kibble (1938-2016) of the UK National Physical Laboratory (NPL) in 1975 for measurement of the gyromagnetic ratio. In 1978 the Mark I watt balance was built at the NPL with Ian Robinson and Ray Smith. It operated until 1988.
The main weakness of the ampere balance method is that the result depends on the accuracy with which the dimensions of the coils are measured. The Kibble balance uses an extra calibration step to cancel the effect of the geometry of the coils, removing the main source of uncertainty. This extra step involves moving the force coil through a known magnetic flux at a known speed. It was made possible by the setting of the conventional values of the von Klitzing constant and Josephson constant, which are used throughout the world for voltage and resistance calibration. Using these principles, in 1990 Bryan Kibble and Ian Robinson invented the Kibble Mark II balance, which uses a circular coil and operates in vacuum conditions. Bryan Kibble worked with Ian Robinson and Janet Belliss to build this Mark Two version of the balance. This design allowed for measurements accurate enough for use in the redefinition of the SI unit of mass: the kilogram.
The Kibble balance originating from the National Physical Laboratory was transferred to the National Research Council of Canada (NRC) in 2009, where scientists from the two labs continued to refine the instrument.
In 2014, NRC researchers published the most accurate measurement of the Planck constant at that time, with a relative uncertainty of 1.8×10⁻⁸. A final paper by NRC researchers was published in May 2017, presenting a measurement of the Planck constant with an uncertainty of only 9.1 parts per billion, the measurement with the least uncertainty to that date. Other Kibble balance experiments are conducted in the US National Institute of Standards and Technology (NIST), the Swiss Federal Office of Metrology (METAS) in Berne, the International Bureau of Weights and Measures (BIPM) near Paris and Laboratoire national de métrologie et d’essais (LNE) in Trappes, France.
Principle
A conducting wire of length $L$ that carries an electric current $I$ perpendicular to a magnetic field of strength $B$ experiences a Lorentz force equal to the product of these variables. In the Kibble balance, the current is varied so that this force counteracts the weight $w$ of a mass $m$ to be measured. This principle is derived from the ampere balance. $w$ is given by the mass $m$ multiplied by the local gravitational acceleration $g$. Thus,
$$w = mg = BIL.$$
The Kibble balance avoids the problems of measuring $B$ and $L$ in a second calibration step. The same wire (in practice, a coil) is moved through the same magnetic field at a known speed $v$. By Faraday's law of induction, a potential difference $U$ is generated across the ends of the wire, which equals $BLv$. Thus
$$U = BLv.$$
The unknown product $BL$ can be eliminated from the equations to give
$$UI = mgv.$$
With $U$, $I$, $g$, and $v$ accurately measured, this gives an accurate value for $m$:
$$m = \frac{UI}{gv}.$$
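A minimal numerical sketch of this relation; all measurement values below are hypothetical placeholders, not data from any actual instrument.

```python
# Toy illustration of the Kibble balance relation U * I = m * g * v,
# i.e. m = U * I / (g * v).  All values are hypothetical.
U = 1.0      # induced voltage in the moving phase, volts (hypothetical)
I = 19.6e-3  # coil current in the weighing phase, amperes (hypothetical)
v = 2.0e-3   # coil velocity in the moving phase, m/s (hypothetical)
g = 9.80     # local gravitational acceleration, m/s^2 (hypothetical)

m = (U * I) / (g * v)   # mass in kilograms
print(m)                # -> 1.0 with these made-up numbers
```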
Both sides of the equation have the dimensions of power, measured in watts in the International System of Units; hence the original name "watt balance". The product $BL$, also called the geometric factor, is not trivially equal in both calibration steps. The geometric factor is only constant under certain stability conditions on the coil.
Implementation
The Kibble balance is constructed so that the mass to be measured and the wire coil are suspended from one side of a balance scale, with a counterbalance mass on the other side. The system operates by alternating between two modes: "weighing" and "moving". The entire mechanical subsystem operates in a vacuum chamber to remove the effects of air buoyancy.
While "weighing", the system measures . The system controls the current in the coil to keep the electromagnetic force on the coil balanced with the force of gravity. Coil position and velocity measurement circuitry uses an interferometer together with a precision clock input to determine the velocity and control the current needed to maintain it. The required current is measured, using an ammeter comprising a Josephson junction voltage standard and an integrating voltmeter.
While "moving", the system measures and . The system ceases to provide current to the coil. This allows the counterbalance to pull the coil (and mass) upward through the magnetic field, which causes a voltage difference across the coil. The velocity measurement circuitry measures the speed of movement of the coil. This voltage is measured, using the same voltage standard and integrating voltmeter.
A typical Kibble balance measures , , and , but does not measure the local gravitational acceleration , because does not vary rapidly with time. Instead, is measured in the same laboratory using a highly accurate and precise gravimeter. In addition, the balance depends on a highly accurate and precise frequency reference such as an atomic clock to compute voltage and current. Thus, the precision and accuracy of the mass measurement depends on the Kibble balance, the gravimeter, and the clock.
Like the early atomic clocks, the early Kibble balances were one-of-a-kind experimental devices and were large, expensive, and delicate. As of 2019, work is underway to produce standardized devices at prices that permit use in any metrology laboratory that requires high-precision measurement of mass.
As well as large Kibble balances, microfabricated or MEMS watt balances (now called Kibble balances) have been demonstrated since around 2003. These are fabricated on single silicon dies similar to those used in microelectronics and accelerometers, and are capable of measuring small forces in the nanonewton to micronewton range traceably to the SI-defined physical constants via electrical and optical measurements. Due to their small scale, MEMS Kibble balances typically use electrostatic rather than the inductive forces used in larger instruments. Lateral and torsional variants have also been demonstrated, with the main application (as of 2019) being in the calibration of the atomic force microscope. Accurate measurements by several teams will enable their results to be averaged and so reduce the experimental error.
Measurements
Accurate measurements of electric current and potential difference are made in conventional electrical units (rather than SI units), which are based on fixed "conventional values" of the Josephson constant and the von Klitzing constant, $K_{J\text{-}90}$ and $R_{K\text{-}90}$ respectively. The current Kibble balance experiments are equivalent to measuring the value of the conventional watt in SI units. From the definition of the conventional watt, this is equivalent to measuring the value of the product $K_J^2 R_K$ in SI units instead of its fixed value in conventional electrical units:
$$\frac{W_{90}}{W} = \frac{K_J^2 R_K}{K_{J\text{-}90}^2 R_{K\text{-}90}}.$$
The importance of such measurements is that they are also a direct measurement of the Planck constant $h$:
$$h = \frac{4}{K_J^2 R_K}.$$
The principle of the electronic kilogram relies on the value of the Planck constant $h$, which is as of 2019 an exact value. This is similar to the metre being defined by the speed of light. With the constant defined exactly, the Kibble balance is not an instrument to measure the Planck constant, but is instead an instrument to measure mass:
$$m = \frac{UI}{gv}.$$
Effect of gravity
Gravity and the nature of the Kibble balance, which oscillates test masses up and down against the local gravitational acceleration g, are exploited so that mechanical power is compared against electrical power, which is the square of voltage divided by electrical resistance. However, g varies significantly—by nearly 1%—depending on where on the Earth's surface the measurement is made (see Earth's gravity). There are also slight seasonal variations in g at a location due to changes in underground water tables, and larger semimonthly and diurnal changes due to tidal distortions in the Earth's shape caused by the Moon and the Sun. Although g is not a term in the definition of the kilogram, it is crucial in the process of measurement of the kilogram when relating energy to power in a kibble balance. Accordingly, g must be measured with at least as much precision and accuracy as are the other terms, so measurements of g must also be traceable to fundamental constants of nature. For the most precise work in mass metrology, g is measured using dropping-mass absolute gravimeters that contain an iodine-stabilised helium–neon laser interferometer. The fringe-signal, frequency-sweep output from the interferometer is measured with a rubidium atomic clock. Since this type of dropping-mass gravimeter derives its accuracy and stability from the constancy of the speed of light as well as the innate properties of helium, neon, and rubidium atoms, the 'gravity' term in the delineation of an all-electronic kilogram is also measured in terms of invariants of nature—and with very high precision. For instance, in the basement of the NIST's Gaithersburg facility in 2009, when measuring the gravity acting upon Pt10Ir test masses (which are denser, smaller, and have a slightly lower center of gravity inside the Kibble balance than stainless steel masses), the measured value was typically within 8 ppb of .
See also
Gouy balance
References
External links
Bureau International des Poids et Mesures
Swiss Federal Office of Metrology
Measuring instruments
Weighing instruments | Kibble balance | [
"Physics",
"Technology",
"Engineering"
] | 2,334 | [
"Weighing instruments",
"Mass",
"Matter",
"Measuring instruments"
] |
692,024 | https://en.wikipedia.org/wiki/Ampere%20balance | The ampere balance (also current balance or Kelvin balance) is an electromechanical apparatus used for the precise measurement of the SI unit of electric current, the ampere. It was invented by William Thomson, 1st Baron Kelvin.
The current to be measured is passed in series through two coils of wire, one of which is attached to one arm of a sensitive balance. The magnetic force between the two coils is measured by the amount of weight needed on the other arm of the balance to keep it in equilibrium. This is used to calculate the numerical value of the current.
The main weakness of the ampere balance is that the calculation of the current involves the dimensions of the coils. So the accuracy of the current measurement is limited by the accuracy with which the coils can be measured, and their mechanical rigidity.
A more complicated version of an ampere balance, that removes this source of inaccuracy by a calibration step, is the Kibble balance, invented by Bryan Kibble in 1975. This experimental device was developed at government metrology laboratories worldwide with the goal of providing a more accurate definition of the kilogram, the world's standard of mass. In this application, the Kibble balance functions in the reverse sense to the Ampere balance: it was used to weigh the International Prototype of the Kilogram, defining the kilogram in terms of an electric current and a voltage. In 2019, the kilogram, ampere, kelvin, and mole were redefined in terms of fundamental constants, removing the dependence on physical objects.
Usage
Approximate readings may be obtained by reading the position of the weight on the scale, or a more accurate reading may be obtained as follows: The upper edge of the shelf on which the weights slide is graduated into equal divisions, and the weight is provided with a sharp tongue of metal in order that its position on the shelf may be accurately determined. Since the current passing through the balance when equilibrium is obtained with a given weight is proportional to the square root of the couple due to this weight, it follows that the current strength when equilibrium is obtained is proportional to the product of the square root of the weight used and the square root of the displacement of this weight from its zero position. Each instrument is accompanied by a pair of weights and by a square root table, so that the product of the square root of the number corresponding to the position of the sliding weight and the ascertained constant for each weight gives at once the value of the current in amperes. Each of these balances is made to cover a certain range of reading. Thus, the centiampere balance ranges from 1 to 100 centiamperes, the deciampere balance from 1 to 100 deciamperes, the ampere balance from 1 to 100 amperes, etc.
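A small sketch of the reading rule just described, assuming a hypothetical per-weight constant and scale division; real instruments supplied printed square-root tables instead.

```python
# Sketch of the reading procedure described above: the balance current is
# k_w * sqrt(d), where d is the scale division at which the sliding weight
# sits and k_w is the constant supplied with that weight.  All numbers are
# hypothetical, for illustration only.
import math

def balance_current(scale_division, weight_constant):
    """Current (in the balance's range unit) from a sliding-weight reading."""
    return weight_constant * math.sqrt(scale_division)

# e.g. a weight whose constant is 0.5 A per sqrt(division), read at division 64
print(balance_current(64, 0.5))   # -> 4.0
```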
References
Electricity
Measuring instruments | Ampere balance | [
"Technology",
"Engineering"
] | 573 | [
"Measuring instruments"
] |
692,112 | https://en.wikipedia.org/wiki/Vacuum%20ejector | A vacuum ejector, or simply ejector, or aspirator, is a type of vacuum pump, which produces vacuum by means of the Venturi effect.
In an ejector, a working fluid (liquid or gaseous) flows through a jet nozzle into a tube that first narrows and then expands in cross-sectional area. The fluid leaving the jet is flowing at a high velocity which due to Bernoulli's principle results in it having low pressure, thus generating a vacuum. The outer tube then narrows into a mixing section where the high velocity working fluid mixes with the fluid that is drawn in by the vacuum, imparting enough velocity for it to be ejected, the tube then typically expands in order to decrease the velocity of the ejected stream, allowing the pressure to smoothly increase to the external pressure.
The strength of the vacuum produced depends on the velocity and shape of the fluid jet and the shape of the constriction and mixing sections, but if a liquid is used as the working fluid, the strength of the vacuum produced is limited by the vapor pressure of the liquid (for water, about 3.2 kPa or 32 mbar at 25 °C). If a gas is used, however, this restriction does not exist.
If not considering the source of the working fluid, vacuum ejectors can be significantly more compact than a self-powered vacuum pump of the same capacity.
Common types
Water aspirator
The cheap and simple water aspirator is commonly used in chemistry and biology laboratories and consists of a tee fitting attached to a tap and has a hose barb at one side. The flow of water passes through the straight portion of the tee, which has a restriction at the intersection, where the hose barb is attached. The vacuum hose should be connected to this barb. In the past, water aspirators were common for low-strength vacuums in chemistry benchwork. However, they are water-intensive, and depending on what the vacuum is being used for (e.g. solvent removal), they can violate environmental protection laws such as the RCRA by mixing potentially hazardous chemicals into the water stream, then flushing them down a drain that often leads directly to the municipal sewer. Their use has decreased somewhat as small electric vacuum pumps are far more effective, environmentally safe, and have become more affordable, but the unmatched simplicity and reliability of this device have caused it to remain popular for small labs or as a backup.
Another, much larger version of this device is used in maritime operations as a device to dewater (drain) areas in a ship that have been flooded in emergency situations. Typically referred to as an eductor in these applications, this is preferred over electrical pumps due to their simplicity, compact size, and greatly mitigated risk of explosion in the event that flammable liquids and/or vapors are present. Additionally, unlike many mechanical pumps, they can also pass debris as the eductor has no moving parts that can be fouled. This makes an eductor especially useful in situations where fitting a debris strainer to the suction port will present more issues than it resolves. The size of the debris that can be passed depends on the physical size of the eductor. Sizes, flow ratings, and applications vary, including eductors that are permanently installed (typically used in very large spaces, such as a ship's main engine room), or portable models that can be lowered into spaces by a rope and supplied and drained through firefighting hoses. Most are supplied through a ship's firefighting main, and portable models can also be supplied by an emergency pump, provided it can supply sufficient flow to operate the eductor.
Steam ejector
The industrial steam ejector (also called the "steam jet ejector", "steam aspirator", or "evactor") uses steam as a working fluid, and multistage systems can produce very high vacuums. Due to the lack of delicate moving parts and the flow of steam providing somewhat of a cleaning action, steam ejectors can handle gas flows containing liquids, dust, or even solid particles that would damage or clog many other vacuum pumps. Ejectors made entirely from specialised materials such as PTFE or graphite have allowed usage of extremely corrosive gases; since steam ejectors have no moving parts, they can be constructed in their entirety from almost any material that has sufficient durability.
In order to avoid using too much steam or impractical operating pressures, a single steam-ejector stage is generally not used to generate vacuum below approximately 10 kPa (75 mmHg). To generate higher vacuum, multiple stages are used; in a two-stage steam ejector, for example, the second stage provides vacuum for the waste steam output by the first stage. Condensers are typically used between stages to significantly reduce the load on the later stages. Steam ejectors with two, three, four, five and six stages may be used to produce vacuums down to 2.5 kPa, 300 Pa, 40 Pa, 4 Pa, and 0.4 Pa, respectively.
Steam ejectors are also suitable for pumping many liquids since if the steam can be easily condensed into the liquid then there is no need to separate the working fluid or manage a mist of liquid droplets. This is the manner in which a steam injector operates.
An additional use for the injector technology is in vacuum ejectors in continuous train braking systems, which were made compulsory in the UK by the Regulation of Railways Act 1889. A vacuum ejector uses steam pressure to draw air out of the vacuum pipe and reservoirs of continuous train brake. Steam locomotives, with a ready source of steam, found ejector technology ideal with its rugged simplicity and lack of moving parts. A steam locomotive usually has two ejectors: a large ejector for releasing the brakes when stationary and a small ejector for maintaining the vacuum against leaks. The exhaust from the ejectors is invariably directed to the smokebox, by which means it assists the blower in draughting the fire. The small ejector is sometimes replaced by a reciprocating pump driven from the crosshead because this is more economical of steam and is only required to operate when the train is moving.
Air ejector
Commonly called an air ejector, Venturi pump, or vacuum ejector. This ejector is similar in operation to the steam ejector but uses high-pressure air as the working fluid. Multistage air ejectors can be used, but since air cannot easily be condensed at room temperature, an air ejector is usually limited to two stages as each subsequent stage would have to be significantly larger than the last. These are commonly used in pneumatic handling equipment when a small vacuum is required to pick up objects since compressed air is often already present to power other parts of the equipment. Air ejectors used to suction liquids directly will produce a fine mist of droplets, this is how airbrushes and many other spraying systems operate, but when a spray is not required it is typically an undesirable effect that limits the applications to gas suction.
See also
Deaerator
Diffusion pump
Injector
Vacuum pump
External links
eductors.net, water eductor
References
Laboratory equipment
Vacuum pumps | Vacuum ejector | [
"Physics",
"Engineering"
] | 1,504 | [
"Vacuum pumps",
"Vacuum systems",
"Vacuum",
"Matter"
] |
692,369 | https://en.wikipedia.org/wiki/Fractional%20coloring | Fractional coloring is a topic in a young branch of graph theory known as fractional graph theory. It is a generalization of ordinary graph coloring. In a traditional graph coloring, each vertex in a graph is assigned some color, and adjacent vertices — those connected by edges — must be assigned different colors. In a fractional coloring however, a set of colors is assigned to each vertex of a graph. The requirement about adjacent vertices still holds, so if two vertices are joined by an edge, they must have no colors in common.
Fractional graph coloring can be viewed as the linear programming relaxation of traditional graph coloring. Indeed, fractional coloring problems are much more amenable to a linear programming approach than traditional coloring problems.
Definitions
A b-fold coloring of a graph G is an assignment of sets of size b to vertices of a graph such that adjacent vertices receive disjoint sets. An a:b-coloring is a b-fold coloring out of a available colors. Equivalently, it can be defined as a homomorphism to the Kneser graph $KG_{a,b}$. The b-fold chromatic number $\chi_b(G)$ is the least a such that an a:b-coloring exists.
The fractional chromatic number $\chi_f(G)$ is defined to be:
$$\chi_f(G) = \lim_{b\to\infty} \frac{\chi_b(G)}{b} = \inf_b \frac{\chi_b(G)}{b}.$$
Note that the limit exists because $\chi_b(G)$ is subadditive, meaning:
$$\chi_{a+b}(G) \le \chi_a(G) + \chi_b(G).$$
The fractional chromatic number can equivalently be defined in probabilistic terms. $\chi_f(G)$ is the smallest k for which there exists a probability distribution over the independent sets of G such that for each vertex v, given an independent set S drawn from the distribution:
$$\Pr(v \in S) \ge \frac{1}{k}.$$
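A brute-force sketch illustrating the a:b-coloring definition on the 5-cycle; the graph, colour count, and fold are chosen only as a small worked example.

```python
# Brute-force check that the 5-cycle C5 admits a 5:2-coloring (a 2-fold
# coloring from 5 colors), so chi_2(C5) <= 5 and hence chi_f(C5) <= 5/2.
from itertools import combinations, product

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
colors = range(5)
pairs = list(combinations(colors, 2))   # candidate 2-sets of colors

def valid(assignment):
    # adjacent vertices must receive disjoint color sets
    return all(set(assignment[u]).isdisjoint(assignment[v]) for u, v in edges)

found = next(a for a in product(pairs, repeat=5) if valid(a))
print(found)   # one valid 5:2-coloring, e.g. {0,1},{2,3},{0,4},{1,2},{3,4}
```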
Properties
We have:
$$\frac{n(G)}{\alpha(G)} \le \chi_f(G),$$
with equality for vertex-transitive graphs,
where n(G) is the order of G, α(G) is the independence number.
Moreover:
$$\omega(G) \le \chi_f(G) \le \chi(G),$$
where ω(G) is the clique number, and $\chi(G)$ is the chromatic number.
Furthermore, the fractional chromatic number approximates the chromatic number within a logarithmic factor, in fact:
$$\chi(G) \le (1 + \ln \alpha(G))\,\chi_f(G).$$
Kneser graphs give examples where the ratio $\chi(G)/\chi_f(G)$ is arbitrarily large, since $\chi(KG_{n,k}) = n - 2k + 2$ while $\chi_f(KG_{n,k}) = \tfrac{n}{k}$.
Linear programming (LP) formulation
The fractional chromatic number $\chi_f(G)$ of a graph G can be obtained as a solution to a linear program. Let $\mathcal{I}(G)$ be the set of all independent sets of G, and let $\mathcal{I}(G,x)$ be the set of all those independent sets which include vertex x. For each independent set I, define a nonnegative real variable xI. Then $\chi_f(G)$ is the minimum value of:
$$\sum_{I\in\mathcal{I}(G)} x_I,$$
subject to:
$$\sum_{I\in\mathcal{I}(G,x)} x_I \ge 1$$
for each vertex $x$.
The dual of this linear program computes the "fractional clique number", a relaxation to the rationals of the integer concept of clique number. That is, a weighting of the vertices of G such that the total weight assigned to any independent set is at most 1. The strong duality theorem of linear programming guarantees that the optimal solutions to both linear programs have the same value. Note however that each linear program may have size that is exponential in the number of vertices of G, and that computing the fractional chromatic number of a graph is NP-hard. This stands in contrast to the problem of fractionally coloring the edges of a graph, which can be solved in polynomial time. This is a straightforward consequence of Edmonds' matching polytope theorem.
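A minimal sketch of this linear program for the 5-cycle using scipy.optimize.linprog; enumerating all independent sets explicitly is only feasible for tiny graphs, and the graph choice is illustrative.

```python
# LP sketch of the fractional chromatic number of the 5-cycle C5
# (exact answer: 5/2), enumerating all independent sets explicitly.
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

n = 5
edges = {(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)}

def independent(S):
    return all((u, v) not in edges and (v, u) not in edges
               for u, v in combinations(S, 2))

ind_sets = [S for r in range(1, n + 1)
            for S in combinations(range(n), r) if independent(S)]

c = np.ones(len(ind_sets))            # minimise the total weight of the x_I
A = np.zeros((n, len(ind_sets)))      # coverage: each vertex gets weight >= 1
for j, S in enumerate(ind_sets):
    for v in S:
        A[v, j] = 1.0

res = linprog(c, A_ub=-A, b_ub=-np.ones(n), bounds=(0, None))
print(res.fun)   # -> 2.5
```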
Applications
Applications of fractional graph coloring include activity scheduling. In this case, the graph G is a conflict graph: an edge in G between the nodes u and v denotes that u and v cannot be active simultaneously. Put otherwise, the set of nodes that are active simultaneously must be an independent set in graph G.
An optimal fractional graph coloring in G then provides a shortest possible schedule, such that each node is active for (at least) 1 time unit in total, and at any point in time the set of active nodes is an independent set. If we have a solution x to the above linear program, we simply traverse all independent sets I in an arbitrary order. For each I, we let the nodes in I be active for $x_I$ time units; meanwhile, each node not in I is inactive.
In more concrete terms, each node of G might represent a radio transmission in a wireless communication network; the edges of G represent interference between radio transmissions. Each radio transmission needs to be active for 1 time unit in total; an optimal fractional graph coloring provides a minimum-length schedule (or, equivalently, a maximum-bandwidth schedule) that is conflict-free.
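Continuing the 5-cycle sketch above, the LP weights can be read off as a schedule; this assumes the variables res and ind_sets from the previous snippet are still in scope.

```python
# Turn the LP solution into a schedule: each independent set I is active
# for x_I time units, in some arbitrary order.
t = 0.0
for weight, S in zip(res.x, ind_sets):
    if weight > 1e-9:
        print(f"from t={t:.2f} to t={t + weight:.2f}: nodes {S} are active")
        t += weight
print(f"total schedule length: {t:.2f}")   # -> 2.50 for C5
```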
Comparison with traditional graph coloring
If one further required that each node must be active continuously for 1 time unit (without switching it off and on every now and then), then traditional graph vertex coloring would provide an optimal schedule: first the nodes of color 1 are active for 1 time unit, then the nodes of color 2 are active for 1 time unit, and so on. Again, at any point in time, the set of active nodes is an independent set.
In general, fractional graph coloring provides a shorter schedule than non-fractional graph coloring; there is an integrality gap. It may be possible to find a shorter schedule, at the cost of switching devices (such as radio transmitters) on and off more than once.
Notes
References
See also
Fractional matching
Graph coloring
Fractional graph theory | Fractional coloring | [
"Mathematics"
] | 1,048 | [
"Graph coloring",
"Mathematical relations",
"Graph theory",
"Fractional graph theory"
] |
692,463 | https://en.wikipedia.org/wiki/Rotation%20operator%20%28quantum%20mechanics%29 | This article concerns the rotation operator, as it appears in quantum mechanics.
Quantum mechanical rotations
With every physical rotation $R$, we postulate a quantum mechanical rotation operator $D(R)$ that is the rule that assigns to each vector $|\alpha\rangle$ in the space the vector

$$|\alpha\rangle_R = D(R)\,|\alpha\rangle$$

that is also in the space. We will show that, in terms of the generators of rotation,

$$D(\hat{\mathbf{n}},\,\phi) = \exp\left(-\frac{i\,\phi\,\hat{\mathbf{n}}\cdot\hat{\mathbf{J}}}{\hbar}\right),$$

where $\hat{\mathbf{n}}$ is the rotation axis, $\hat{\mathbf{J}}$ is the angular momentum operator, and $\hbar$ is the reduced Planck constant.
The translation operator
The rotation operator $D(\hat{\mathbf{n}},\,\phi)$, with the first argument indicating the rotation axis and the second the rotation angle, can be built from the translation operator for infinitesimal rotations, as explained below. For this reason, it is first shown how the translation operator acts on a particle at position x (the particle is then in the state $|x\rangle$ according to quantum mechanics).
Translation of the particle at position $x$ to position $x + a$:

$$T(a)\,|x\rangle = |x + a\rangle.$$

Because a translation of 0 does not change the position of the particle, we have (with 1 meaning the identity operator, which does nothing):

$$T(0) = 1.$$

Taylor development gives:

$$T(dx) = 1 - \frac{i\,p_x\,dx}{\hbar} + \cdots,$$

with

$$p_x = -i\hbar\,\frac{\partial}{\partial x}.$$

From that follows:

$$T(a + dx) = T(a)\,T(dx) = T(a)\left(1 - \frac{i\,p_x\,dx}{\hbar}\right) \;\Rightarrow\; \frac{T(a + dx) - T(a)}{dx} = \frac{dT}{da} = -\frac{i\,p_x}{\hbar}\,T(a).$$

This is a differential equation with the solution

$$T(a) = \exp\left(-\frac{i\,p_x\,a}{\hbar}\right).$$

Additionally, suppose a Hamiltonian $H$ is independent of the position. Because the translation operator can be written in terms of $p_x$, and $[p_x, H] = 0$, we know that $[H, T(a)] = 0$. This result means that linear momentum for the system is conserved.
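A small numerical illustration of the translation operator (a Python sketch assuming numpy; the choice ħ = 1 and the Gaussian wave packet are illustrative assumptions): applying exp(−i a p/ħ) in momentum space shifts the packet by a.

```python
# Sketch: the translation operator exp(-i a p / hbar) acting as a shift
# (assumes numpy; hbar = 1 and the Gaussian packet are illustrative choices).
import numpy as np

N, L, a = 512, 20.0, 2.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)      # momentum grid (hbar = 1)

psi = np.exp(-(x + 3.0) ** 2)                   # packet centred at x = -3
# Apply T(a) = exp(-i a p) in momentum space via FFT.
psi_shifted = np.fft.ifft(np.exp(-1j * a * k) * np.fft.fft(psi))

print(x[np.argmax(np.abs(psi))])                # ~ -3.0
print(x[np.argmax(np.abs(psi_shifted))])        # ~ -1.0  (shifted by a = 2)
```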
In relation to the orbital angular momentum
Classically we have for the angular momentum

$$\mathbf{L} = \mathbf{r} \times \mathbf{p}.$$

This is the same in quantum mechanics considering $\mathbf{r}$ and $\mathbf{p}$ as operators. Classically, an infinitesimal rotation $dt$ of the vector $\mathbf{r} = (x,\,y,\,z)$ about the $z$-axis to $\mathbf{r}' = (x',\,y',\,z)$ leaving $z$ unchanged can be expressed by the following infinitesimal translations (using Taylor approximation):

$$x' = x - y\,dt, \qquad y' = y + x\,dt.$$

From that follows for states:

$$D(\hat z,\,dt)\,|x, y, z\rangle = |x - y\,dt,\; y + x\,dt,\; z\rangle = T_x(-y\,dt)\,T_y(x\,dt)\,|x, y, z\rangle.$$

And consequently:

$$D(\hat z,\,dt) = T_x(-y\,dt)\,T_y(x\,dt).$$

Using

$$T(a) = \exp\left(-\frac{i\,p\,a}{\hbar}\right)$$

from above with the infinitesimal displacements $-y\,dt$ and $x\,dt$ and Taylor expansion we get:

$$D(\hat z,\,dt) = 1 - \frac{i}{\hbar}\,(x\,p_y - y\,p_x)\,dt = 1 - \frac{i}{\hbar}\,L_z\,dt,$$

with $L_z = x\,p_y - y\,p_x$ the $z$-component of the angular momentum according to the classical cross product.
To get a rotation for the angle $\phi$, we construct the following differential equation using the condition $D(\hat z,\,0) = 1$:

$$\frac{\partial D(\hat z,\,\phi)}{\partial \phi} = -\frac{i\,L_z}{\hbar}\,D(\hat z,\,\phi) \;\Rightarrow\; D(\hat z,\,\phi) = \exp\left(-\frac{i\,\phi\,L_z}{\hbar}\right).$$

Similar to the translation operator, if we are given a Hamiltonian $H$ which is rotationally symmetric about the $z$-axis, $[L_z, H] = 0$ implies $[D(\hat z,\,\phi), H] = 0$. This result means that angular momentum is conserved.

For the spin angular momentum about, for example, the $y$-axis we just replace $L_z$ with $S_y = \frac{\hbar}{2}\sigma_y$ (where $\sigma_y$ is the Pauli Y matrix) and we get the spin rotation operator

$$D(\hat y,\,\phi) = \exp\left(-i\,\frac{\phi\,S_y}{\hbar}\right) = \exp\left(-i\,\frac{\phi\,\sigma_y}{2}\right).$$
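A minimal numerical sketch of this spin rotation operator (assuming numpy and scipy; the rotation angle π/2 is an illustrative choice): rotating the spin-up state about the y-axis by π/2 yields the +x eigenstate, up to a phase.

```python
# Sketch: spin-1/2 rotation about the y-axis, D = exp(-i * phi * sigma_y / 2)
# (assumes numpy/scipy; the chosen angle is illustrative).
import numpy as np
from scipy.linalg import expm

sigma_y = np.array([[0, -1j], [1j, 0]])
phi = np.pi / 2
D = expm(-1j * phi * sigma_y / 2)

up = np.array([1, 0], dtype=complex)        # |+z>
rotated = D @ up
print(np.round(rotated, 3))                 # ~ [0.707, 0.707]: |+x> up to phase
```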
Effect on the spin operator and quantum states
Operators can be represented by matrices. From linear algebra one knows that a certain matrix $A$ can be represented in another basis through the transformation

$$A' = P\,A\,P^{-1},$$

where $P$ is the basis transformation matrix. If the vectors $\hat b$ and $\hat c$ are respectively the z-axis in one basis and in another, they are perpendicular to the y-axis with a certain angle $\phi$ between them. The spin operator $S_{\hat b}$ in the first basis can then be transformed into the spin operator $S_{\hat c}$ of the other basis through the following transformation:

$$S_{\hat c} = D(\hat y,\,\phi)\,S_{\hat b}\,D(\hat y,\,\phi)^{-1}.$$

From standard quantum mechanics we have the known results $S_{\hat b}\,|\hat b+\rangle = \tfrac{\hbar}{2}\,|\hat b+\rangle$ and $S_{\hat c}\,|\hat c+\rangle = \tfrac{\hbar}{2}\,|\hat c+\rangle$, where $|\hat b+\rangle$ and $|\hat c+\rangle$ are the top spins in their corresponding bases. So we have:

$$\tfrac{\hbar}{2}\,|\hat c+\rangle = S_{\hat c}\,|\hat c+\rangle = D(\hat y,\,\phi)\,S_{\hat b}\,D(\hat y,\,\phi)^{-1}\,|\hat c+\rangle \;\Rightarrow\; S_{\hat b}\,D(\hat y,\,\phi)^{-1}\,|\hat c+\rangle = \tfrac{\hbar}{2}\,D(\hat y,\,\phi)^{-1}\,|\hat c+\rangle.$$

Comparison with $S_{\hat b}\,|\hat b+\rangle = \tfrac{\hbar}{2}\,|\hat b+\rangle$ yields $|\hat c+\rangle = D(\hat y,\,\phi)\,|\hat b+\rangle$.
This means that if the state is rotated about the -axis by an angle , it becomes the state , a result that can be generalized to arbitrary axes.
See also
Symmetry in quantum mechanics
Spherical basis
Optical phase space
References
L.D. Landau and E.M. Lifshitz: Quantum Mechanics: Non-Relativistic Theory, Pergamon Press, 1985
P.A.M. Dirac: The Principles of Quantum Mechanics, Oxford University Press, 1958
R.P. Feynman, R.B. Leighton and M. Sands: The Feynman Lectures on Physics, Addison-Wesley, 1965
Rotational symmetry
Quantum mechanics
Unitary operators | Rotation operator (quantum mechanics) | [
"Physics"
] | 682 | [
"Quantum operators",
"Quantum mechanics",
"Symmetry",
"Rotational symmetry"
] |
692,601 | https://en.wikipedia.org/wiki/Fludrocortisone | Fludrocortisone, sold under the brand name Florinef, among others, is a corticosteroid used to treat adrenogenital syndrome, postural hypotension, and adrenal insufficiency. In adrenal insufficiency, it is generally taken together with hydrocortisone. Fludrocortisone is taken by mouth and is most commonly used in its acetate form.
Common side effects of fludrocortisone include high blood pressure, swelling, heart failure, and low blood potassium. Other serious side effects can include low immune-system function, cataracts, muscle weakness, and mood changes. Whether use of fludrocortisone during pregnancy is safe for the fetus is unknown. Fludrocortisone is mostly a mineralocorticoid, but it also has glucocorticoid effects.
Fludrocortisone was patented in 1953. It is on the World Health Organization's List of Essential Medicines.
Medical uses
Fludrocortisone has been used in the treatment of cerebral salt-wasting syndrome. It is used primarily to replace the missing hormone aldosterone in various forms of adrenal insufficiency such as Addison's disease and the classic salt-wasting (21-hydroxylase deficiency) form of congenital adrenal hyperplasia. Due to its effects on increasing Na+ levels, and therefore blood volume, fludrocortisone is the first line of treatment for orthostatic intolerance and postural orthostatic tachycardia syndrome (POTS). It can be used to treat low blood pressure.
Fludrocortisone is also a confirmation test for diagnosing Conn's syndrome (aldosterone-producing adrenal adenoma), the fludrocortisone suppression test. Loading the patient with fludrocortisone would suppress serum aldosterone level in a normal patient, whereas the level would remain elevated in a Conn's patient. The fludrocortisone suppression test is an alternative to the NaCl challenge (which would use normal saline or salt tablets).
Side effects
Use of fludrocortisone can lead to one or more of the following side effects:
Sodium and water retention
Swelling due to fluid retention (edema)
High blood pressure (hypertension)
Headache
Low blood potassium level (hypokalemia)
Muscle weakness
Fatigue
Increased susceptibility to infection
Impaired wound healing
Increased sweating
Increased hair growth (hirsutism)
Thinning of skin and stretch marks
Disturbances of the gut such as indigestion (dyspepsia), distention of the abdomen and ulceration (peptic ulcer)
Decreased bone density and increased risk of fractures of the bones
Difficulty in sleeping (insomnia)
Depression
Weight gain
Raised blood sugar level
Changes to the menstrual cycle
Partial loss of vision due to opacity in the lens of the eye (cataracts)
Raised pressure in the eye (glaucoma)
Increased pressure in the skull (intracranial pressure)
Pharmacology
Fludrocortisone is a corticosteroid and acts as a powerful mineralocorticoid, along with some additional but comparatively very weak glucocorticoid activity. Relative to cortisol, it is said to have 10 times the glucocorticoid potency but 250 to 800 times the mineralocorticoid potency. Fludrocortisone acetate is a prodrug of fludrocortisone, which is the active form of the drug.
Plasma renin, sodium, and potassium are checked through blood tests to verify that the correct dosage is reached.
Chemistry
Fludrocortisone, also known as 9α-fluorocortisol (9α-fluorohydrocortisone) or as 9α-fluoro-11β,17α,21-trihydroxypregn-4-ene-3,20-dione, is a synthetic pregnane steroid and a halogenated derivative of cortisol (11β,17α,21-trihydroxypregn-4-ene-3,20-dione). Specifically, it is a modification of cortisol with a fluorine atom substituted in place of one hydrogen atom at the C9α position. Fluorine is a good bioisostere for hydrogen because it is similar in size, with the major difference being in its electronegativity. The acetate form of fludrocortisone, fludrocortisone acetate, is the C21 acetate ester of fludrocortisone, and is hydrolyzed into fludrocortisone in the body.
History
Fludrocortisone was described in the literature in 1953 and was introduced for medical use (as the acetate ester) in 1954. It was the first synthetic corticosteroid to be marketed, and followed the introduction of cortisone in 1948 and hydrocortisone (cortisol) in 1951. Fludrocortisone was also the first fluorine-containing pharmaceutical drug to be marketed.
Society and culture
Generic name
Fludrocortisone is the generic name of the drug under the major international and national nonproprietary naming systems, whereas fludrocortisone acetate is the generic name of its acetate ester under those systems.
Brand names
Fludrocortisone is marketed mainly under the brand names Astonin and Astonin-H, whereas the more widely used fludrocortisone acetate is sold mainly as Florinef, but also under several other brand names including Cortineff, Florinefe, and Fludrocortison.
Availability
Fludrocortisone is marketed in Austria, Croatia, Denmark, Germany, Luxembourg, Romania, and Spain, whereas fludrocortisone acetate is more widely available throughout the world and is marketed in the United States, Canada, the United Kingdom, various other European countries, Australia, Japan, China, Brazil, and many other countries.
References
External links
Acetate esters
Antihypotensive agents
Corticosteroid esters
Corticosteroids
Diketones
Glucocorticoids
Halohydrins
Mineralocorticoids
Organofluorides
Pregnanes
Prodrugs
Triols
World Health Organization essential medicines
Wikipedia medicine articles ready to translate | Fludrocortisone | [
"Chemistry"
] | 1,339 | [
"Chemicals in medicine",
"Prodrugs"
] |
693,002 | https://en.wikipedia.org/wiki/L%C3%B6b%27s%20theorem | In mathematical logic, Löb's theorem states that in Peano arithmetic (PA) (or any formal system including PA), for any formula P, if it is provable in PA that "if P is provable in PA then P is true", then P is provable in PA. If Prov(P) means that the formula P is provable, we may express this more formally as
If

$$PA \vdash \mathrm{Prov}(P) \rightarrow P,$$

then

$$PA \vdash P.$$
An immediate corollary (the contrapositive) of Löb's theorem is that, if P is not provable in PA, then "if P is provable in PA, then P is true" is not provable in PA. For example, "If $1+1=3$ is provable in PA, then $1+1=3$" is not provable in PA.
Löb's theorem is named for Martin Hugo Löb, who formulated it in 1955. It is related to Curry's paradox.
Löb's theorem in provability logic
Provability logic abstracts away from the details of encodings used in Gödel's incompleteness theorems by expressing the provability of $\phi$ in the given system in the language of modal logic, by means of the modality $\Box$. That is, when $\phi$ is a logical formula, another formula $\Box\phi$ can be formed by placing a box in front of $\phi$, and $\Box\phi$ is intended to mean that $\phi$ is provable.
Then we can formalize Löb's theorem by the axiom

$$\Box(\Box P \rightarrow P) \rightarrow \Box P,$$

known as axiom GL, for Gödel–Löb. This is sometimes formalized by means of the inference rule: if

$$\vdash \Box P \rightarrow P,$$

then

$$\vdash P.$$

The provability logic GL that results from taking the modal logic K4 (or K, since the axiom schema 4, $\Box A \rightarrow \Box\Box A$, then becomes redundant) and adding the above axiom GL is the most intensely investigated system in provability logic.
Modal proof of Löb's theorem
Löb's theorem can be proved within normal modal logic using only some basic rules about the provability operator (the K4 system) plus the existence of modal fixed points.
Modal formulas
We will assume the following grammar for formulas:
If $p$ is a propositional variable, then $p$ is a formula.
If $K$ is a propositional constant, then $K$ is a formula.
If $A$ is a formula, then $\Box A$ is a formula.
If $A$ and $B$ are formulas, then so are $\neg A$, $A \rightarrow B$, $A \wedge B$, $A \vee B$, and $A \leftrightarrow B$.
A modal sentence is a formula in this syntax that contains no propositional variables. The notation $\vdash A$ is used to mean that $A$ is a theorem.
Modal fixed points
If $F(X)$ is a modal formula with only one propositional variable $X$, then a modal fixed point of $F(X)$ is a sentence $\Psi$ such that

$$\vdash \Psi \leftrightarrow F(\Box\Psi).$$
We will assume the existence of such fixed points for every modal formula with one free variable. This is of course not an obvious thing to assume, but if we interpret as provability in Peano Arithmetic, then the existence of modal fixed points follows from the diagonal lemma.
Modal rules of inference
In addition to the existence of modal fixed points, we assume the following rules of inference for the provability operator , known as Hilbert–Bernays provability conditions:
(necessitation) From $\vdash A$ conclude $\vdash \Box A$: informally, this says that if A is a theorem, then it is provable.
(internal necessitation) $\vdash \Box A \rightarrow \Box\Box A$: if A is provable, then it is provable that it is provable.
(box distributivity) $\vdash \Box(A \rightarrow B) \rightarrow (\Box A \rightarrow \Box B)$: this rule allows you to do modus ponens inside the provability operator. If it is provable that A implies B, and A is provable, then B is provable.
Proof of Löb's theorem
Much of the proof does not make use of the assumption $\vdash \Box P \rightarrow P$, so for ease of understanding, the proof below is subdivided to leave the parts depending on $\vdash \Box P \rightarrow P$ until the end.
Let be any modal sentence.
Apply the existence of modal fixed points to the formula $F(X) = X \rightarrow P$. It then follows that there exists a sentence $\Psi$ such that $\vdash \Psi \leftrightarrow (\Box\Psi \rightarrow P)$.
$\vdash \Psi \rightarrow (\Box\Psi \rightarrow P)$, from 1.
$\vdash \Box(\Psi \rightarrow (\Box\Psi \rightarrow P))$, from 2 by the necessitation rule.
$\vdash \Box\Psi \rightarrow \Box(\Box\Psi \rightarrow P)$, from 3 and the box distributivity rule.
$\vdash \Box(\Box\Psi \rightarrow P) \rightarrow (\Box\Box\Psi \rightarrow \Box P)$, box distributivity rule "$\vdash \Box(A \rightarrow B) \rightarrow (\Box A \rightarrow \Box B)$" with $A = \Box\Psi$ and $B = P$.
$\vdash \Box\Psi \rightarrow (\Box\Box\Psi \rightarrow \Box P)$, from 4 and 5.
$\vdash \Box\Psi \rightarrow \Box\Box\Psi$, internal necessitation rule.
$\vdash \Box\Psi \rightarrow \Box P$, from 6 and 7. Now comes the part of the proof where the hypothesis is used.
Assume that $\vdash \Box P \rightarrow P$. Roughly speaking, it is a theorem that if $P$ is provable, then it is, in fact, true. This is a claim of soundness.
$\vdash \Box\Psi \rightarrow P$, from 8 and 9.
$\vdash (\Box\Psi \rightarrow P) \rightarrow \Psi$, from 1.
$\vdash \Psi$, from 10 and 11.
$\vdash \Box\Psi$, from 12 by the necessitation rule.
$\vdash P$, from 13 and 10.
More informally, we can sketch out the proof as follows.
Since $PA \vdash \Box P \rightarrow P$ by assumption, we also have $PA + \neg P \vdash \Box P \rightarrow P$, which implies $PA + \neg P \vdash \neg\Box P$.
Now, the hybrid theory $PA + \neg P$ can reason as follows:
Suppose $PA + \neg P$ is inconsistent; then PA proves $\neg\neg P$, which is the same as $\Box P$.
However, $PA + \neg P$ already knows that $\neg\Box P$, a contradiction.
Therefore, $PA + \neg P$ is consistent.
By Gödel's second incompleteness theorem, this implies $PA + \neg P$ is inconsistent.
Thus, PA proves $\neg\neg P$, which is the same as $P$.
Examples
An immediate corollary of Löb's theorem is that, if P is not provable in PA, then "if P is provable in PA, then P is true" is not provable in PA. Given we know PA is consistent (but PA does not know PA is consistent), here are some simple examples:
"If is provable in PA, then " is not provable in PA, as is not provable in PA (as it is false).
"If is provable in PA, then " is provable in PA, as is any statement of the form "If X, then ".
"If the strengthened finite Ramsey theorem is provable in PA, then the strengthened finite Ramsey theorem is true" is not provable in PA, as "The strengthened finite Ramsey theorem is true" is not provable in PA (despite being true).
In Doxastic logic, Löb's theorem shows that any system classified as a reflexive "type 4" reasoner must also be "modest": such a reasoner can never believe "my belief in P would imply that P is true", without also believing that P is true.
Gödel's second incompleteness theorem follows from Löb's theorem by substituting the false statement $\bot$ for P.
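A short worked derivation of this corollary, written out in LaTeX using the provability-logic notation above (a sketch of the standard argument, not a full formal proof):

```latex
% Goedel's second incompleteness theorem from Loeb's theorem:
% substitute the false statement \bot for P.
\begin{align*}
  &\text{L\"ob's theorem with } P = \bot:
     \quad \vdash (\Box\bot \rightarrow \bot) \;\Longrightarrow\; \vdash \bot.\\
  &\text{Contrapositive: if } \nvdash \bot
     \text{ (i.e. the theory is consistent), then }
     \nvdash \Box\bot \rightarrow \bot,\\
  &\text{that is, } \nvdash \neg\Box\bot.
     \text{ Since } \neg\Box\bot \text{ expresses consistency,}\\
  &\text{a consistent theory cannot prove its own consistency.}
\end{align*}
```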
Converse: Löb's theorem implies the existence of modal fixed points
Not only does the existence of modal fixed points imply Löb's theorem, but the converse is valid, too. When Löb's theorem is given as an axiom (schema), the existence of a fixed point (up to provable equivalence) for any formula A(p) modalized in p can be derived. Thus in normal modal logic, Löb's axiom is equivalent to the conjunction of the axiom schema 4, $\Box A \rightarrow \Box\Box A$, and the existence of modal fixed points.
Notes
References
External links
Mathematical logic
Theorems in the foundations of mathematics
Metatheorems
Provability logic
Mathematical axioms
Modal logic | Löb's theorem | [
"Mathematics"
] | 1,448 | [
"Mathematical theorems",
"Proof theory",
"Foundations of mathematics",
"Mathematical logic",
"Provability logic",
"Mathematical axioms",
"Mathematical problems",
"Modal logic",
"Theorems in the foundations of mathematics"
] |
693,673 | https://en.wikipedia.org/wiki/Transition%20radiation | Transition radiation (TR) is a form of electromagnetic radiation emitted when a charged particle passes through inhomogeneous media, such as a boundary between two different media. This is in contrast to Cherenkov radiation, which occurs when a charged particle passes through a homogeneous dielectric medium at a speed greater than the phase velocity of electromagnetic waves in that medium.
History
Transition radiation was demonstrated theoretically by Ginzburg and Frank in 1945. They showed the existence of Transition radiation when a charged particle perpendicularly passed through a boundary between two different homogeneous media. The frequency of radiation emitted in the backwards direction relative to the particle was mainly in the range of visible light. The intensity of radiation was logarithmically proportional to the Lorentz factor of the particle. After the first observation of the transition radiation in the optical region, many early studies indicated that the application of the optical transition radiation for the detection and identification of individual particles seemed to be severely limited due to the inherent low intensity of the radiation.
Interest in transition radiation was renewed when Garibian showed that the radiation should also appear in the x-ray region for ultrarelativistic particles. His theory predicted some remarkable features for transition radiation in the x-ray region.
In 1959 Garibian showed theoretically that energy losses of an ultrarelativistic particle, when emitting TR while passing the boundary between media and vacuum, were directly proportional to the Lorentz factor of the particle. Theoretical discovery of x-ray transition radiation, which was directly proportional to the Lorentz factor, made possible further use of TR in high-energy physics.
Thus, from 1959 intensive theoretical and experimental research of TR, and x-ray TR in particular began.
Transition radiation in the x-ray region
Transition radiation in the x-ray region (TR) is produced by relativistic charged particles when they cross the interface of two media of different dielectric constants. The emitted radiation is the homogeneous difference between the two inhomogeneous solutions of Maxwell's equations of the electric and magnetic fields of the moving particle in each medium separately. In other words, since the electric field of the particle is different in each medium, the particle has to "shake off" the difference when it crosses the boundary. The total energy loss of a charged particle on the transition depends on its Lorentz factor $\gamma$ and is mostly directed forward, peaking at an angle of the order of $1/\gamma$ relative to the particle's path. The intensity of the emitted radiation is roughly proportional to the particle's energy $E$.
Optical transition radiation is emitted both in the forward direction and reflected by the interface surface. In case of a foil having an angle at 45 degrees with respect to a particle beam, the particle beam's shape can be visually seen at an angle of 90 degrees. More elaborate analysis of the emitted visual radiation may allow for the determination of $\gamma$ and emittance.
In the approximation of relativistic motion ($\gamma \gg 1$), small angles ($\theta \ll 1$) and high frequency ($\omega \gg \omega_p$), the energy spectrum emitted at a single interface with vacuum can be expressed as

$$\frac{d^2 W}{d\omega\,d\Omega} \approx \frac{\alpha z^2}{\pi^2}\left(\frac{\theta}{\gamma^{-2} + \theta^2 + \omega_p^2/\omega^2} - \frac{\theta}{\gamma^{-2} + \theta^2}\right)^{2},$$

where $z$ is the atomic charge in units of the charge of an electron $e$, $\gamma$ is the Lorentz factor, $\omega_p$ is the plasma frequency, and $\alpha$ is the fine-structure constant. This diverges at low frequencies, where the approximations fail. The total energy emitted is

$$W = \frac{\alpha z^2 \gamma\,\hbar\omega_p}{3}.$$
The characteristics of this electromagnetic radiation make it suitable for particle discrimination, particularly of electrons and hadrons in the momentum range between 1 GeV/c and 100 GeV/c.
The transition radiation photons produced by electrons have wavelengths in the x-ray range, with energies typically in the range from 5 to 15 keV. However, the number of produced photons per interface crossing is very small: for particles with $\gamma$ = 2×10³, about 0.8 x-ray photons are detected. Usually several layers of alternating materials or composites are used to collect enough transition radiation photons for an adequate measurement—for example, one layer of inert material followed by one layer of detector (e.g. microstrip gas chamber), and so on.
By placing interfaces (foils) of very precise thickness and foil separation, coherence effects will modify the transition radiation's spectral and angular characteristics. This allows a much higher number of photons to be obtained in a smaller angular "volume". Applications of this x-ray source are limited by the fact that the radiation is emitted in a cone, with a minimum intensity at the center. X-ray focusing devices (crystals/mirrors) are not easy to build for such radiation patterns.
A special type of transition radiation is diffusive radiation. It is emitted provided that a charged particle crosses a medium with randomly inhomogeneous dielectric permittivity^{9,10,11}.
See also
Transition radiation detector
References
9. S. R. Atayan and Zh. S. Gevorkian, "Pseudophoton diffusion and radiation of a charged particle in a randomly inhomogeneous medium", Sov. Phys. JETP, v. 71(5), 862 (1990).
10. Zh. S. Gevorkian, "Radiation of a relativistic charged particle in a system with one-dimensional randomness", Phys. Rev. E, v. 57, 2338 (1998).
11. Zh. S. Gevorkian, C. P. Chen and Chin-Kun Hu, "New mechanism of X-ray radiation from a relativistic charged particle in a dielectric random medium", Phys. Rev. Lett., v. 86, 3324 (2001).
Sources
Interference phenomenon in optical transition radiation and its application to particle beam diagnostics and multiple-scattering measurements, L. Wartski et al., Journal of Applied Physics -- August 1975 -- Volume 46, Issue 8, pp. 3644-3653.
External links
Article on transition radiation
Experimental particle physics
Particle physics | Transition radiation | [
"Physics"
] | 1,194 | [
"Experimental physics",
"Particle physics",
"Experimental particle physics"
] |
18,039,954 | https://en.wikipedia.org/wiki/Irish%20Centre%20for%20High-End%20Computing | The Irish Centre for High-End Computing (ICHEC) is the national high-performance computing centre in Ireland. It was established in 2005 and provides supercomputing resources, support, training and related services. ICHEC is involved in education and training, including providing courses for researchers.
Kay supercomputer
ICHEC's newest supercomputer, Kay, was commissioned in August 2018 and was named after Irish-American ENIAC programmer Kathleen Antonelli following a public poll, in which the other shortlist candidates were botanist Ellen Hutchins, scientist and inventor Nicholas Callan, geologist Richard Kirwan, chemist Eva Philbin, and hydrographer Francis Beaufort. Kay's system is composed of:
A cluster of 336 nodes, each node having 2x 20-core 2.4 GHz Intel Xeon Gold 6148 (Skylake) processors, 192 GiB of RAM, a 400 GiB local SSD for scratch space and a 100Gbit OmniPath network adaptor. This partition has a total of 13,440 cores and 63 TiB of distributed memory (see the arithmetic check after this list).
A GPU partition of 16 nodes with the same specification as above, plus 2x Nvidia Tesla V100 16GB PCIe (Volta architecture) GPUs on each node. Each GPU has 5,120 CUDA cores and 640 Tensor Cores.
A "Phi" partition of 16 nodes, each containing 1x self-hosted Intel Xeon Phi Processor 7210 (Knights Landing or KNL architecture) with 64 cores @ 1.3 GHz, 192 GiB RAM and a 400 GiB local SSD for scratch space.
A "high memory" set of 6 nodes each containing 1.5 TiB of RAM, 2x 20-core 2.4 GHz Intel Xeon Gold 6148 (Skylake) processors and 1 TiB of dedicated local SSD for scratch storage.
A set of service and administrative nodes to provide user login, batch scheduling, management, networking, etc. Storage is provided via Lustre filesystems on a high-performance DDN SFA14k system with 1 PiB of capacity.
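A quick arithmetic check (a minimal Python sketch; the figures are the ones quoted in the node list above) of the totals for the main cluster partition:

```python
# Quick check of the quoted totals for Kay's main cluster partition.
nodes = 336
cores_per_node = 2 * 20          # 2x 20-core Xeon Gold 6148
ram_per_node_gib = 192

print(nodes * cores_per_node)                 # 13440 cores
print(nodes * ram_per_node_gib / 1024)        # ~63 TiB of distributed memory
```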
Like all previous HPC systems, ICHEC is connected to the HEAnet and GÉANT networks.
Fionn supercomputer
Between 2014 and August 2018, ICHEC managed the Fionn supercomputer, a heterogeneous system composed of:
an SGI ICE X cluster with 320 nodes or 7,680 Intel Ivy Bridge processor cores with a combined 20 TB of memory (24 cores and 64 GB memory per node).
a hybrid partition with 32 nodes. Each node has 20 Intel Ivy Bridge processor cores, 64 GB of memory along with many-core hardware from Intel (2x Xeon Phi 5110P coprocessors on 16 nodes) and Nvidia (2x Tesla K20X GPGPU cards on 16 nodes).
a shared memory compute node (14 internal NUMA nodes) with 112 Intel Sandy Bridge processor cores, 2 Intel Xeon Phi 5110P coprocessors and 1.7 TB of memory.
a set of service and administrative nodes to provide user login, batch scheduling, management, tape backup, switches, etc. Storage is provided via a DDN SFA12k-20 platform with 560 TB of capacity to all components of the machine via a Lustre filesystem.
Fionn was connected to HEAnet's networking infrastructure. Irish researchers were able to apply for access to Fionn via several schemes. A helpdesk was available for user support. Fionn was replaced by Kay in August 2018.
Other ICHEC functions
ICHEC was designated an Nvidia CUDA Research Center in 2010. Its work in this area has included the porting to CUDA of the Quantum ESPRESSO and DL_POLY molecular dynamics packages as well as various industrial benchmarking studies.
ICHEC became an Intel Parallel Computing Center (IPCC) in 2014 to conduct research on many-core technology in high performance computing and big data analytics.
In collaboration with Met Éireann, ICHEC provides hardware and support to publish climate and weather forecast models. ICHEC computational scientists also take an active part in the ongoing development of the models and conduct related climate/environmental research.
ICHEC works with a number of Irish government departments and agencies (e.g. Enterprise Ireland, IDA Ireland) to provide consultancy services to Irish companies in various areas including data mining, visualisation, data management and software development/optimization.
References
External links
ICHEC web site
Supercomputer sites
Computational science
Research institutes in the Republic of Ireland | Irish Centre for High-End Computing | [
"Mathematics"
] | 939 | [
"Computational science",
"Applied mathematics"
] |
18,041,723 | https://en.wikipedia.org/wiki/TADIXS | The Tactical Data Information Exchange Subsystem (TADIXS) is a military communications system designed to allow the exchange of tactical information between commanders using the Global Command and Control System-Maritime (GCCS-M). Specifically, TADIXS allows for communication between land-based (shore) computer systems and those on U.S. Navy fleet ships deployed around the world.
TADIXS has entered its fourth phase of development and is likely to replace the Officer in Tactical Command Information Exchange System (OTCIXS). This information exchange improves the overall situational awareness of tactical commanders in the field and strategic commanders at command and control centers.
References
Military communications
Military technology | TADIXS | [
"Engineering"
] | 139 | [
"Military communications",
"Telecommunications engineering"
] |
18,048,054 | https://en.wikipedia.org/wiki/Hypograph%20%28mathematics%29 | In mathematics, the hypograph or subgraph of a function is the set of points lying on or below its graph.
A related definition is that of such a function's epigraph, which is the set of points on or above the function's graph.
The domain (rather than the codomain) of the function is not particularly important for this definition; it can be an arbitrary set instead of $\mathbb{R}^n$.
Definition
The definition of the hypograph was inspired by that of the graph of a function, where the graph of $f$ is defined to be the set

$$\operatorname{graph} f = \{(x, f(x)) : x \in X\}.$$

The hypograph or subgraph of a function $f : X \to [-\infty, \infty]$ valued in the extended real numbers $[-\infty, \infty] = \mathbb{R} \cup \{\pm\infty\}$ is the set

$$\operatorname{hyp} f = \{(x, r) \in X \times \mathbb{R} : r \le f(x)\}.$$

Similarly, the set of points on or above the function is its epigraph.
The strict hypograph is the hypograph with the graph removed:

$$\operatorname{hyp}_S f = \{(x, r) \in X \times \mathbb{R} : r < f(x)\} = \operatorname{hyp} f \setminus \operatorname{graph} f.$$

Despite the fact that $f$ might take one (or both) of $\pm\infty$ as a value (in which case its graph would be a subset of $X \times [-\infty, \infty]$), the hypograph of $f$ is nevertheless defined to be a subset of $X \times \mathbb{R}$ rather than of $X \times [-\infty, \infty]$.
Properties
The hypograph of a function $f$ is empty if and only if $f$ is identically equal to negative infinity.
A function is concave if and only if its hypograph is a convex set. The hypograph of a real affine function $g : \mathbb{R}^n \to \mathbb{R}$ is a halfspace in $\mathbb{R}^{n+1}$.
A function is upper semicontinuous if and only if its hypograph is closed.
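A numerical spot-check of the concavity property above (a minimal Python sketch; the choice f(x) = −x² and the random sampling are illustrative): midpoints of points in the hypograph of a concave function remain in the hypograph, as convexity requires.

```python
# Numerical spot-check: for a concave f, the hypograph is convex, i.e.
# midpoints of hypograph points stay in the hypograph.
# f(x) = -x**2 is an illustrative concave choice.
import random

def f(x):
    return -x * x

def in_hyp(x, r):                 # (x, r) lies in hyp f  iff  r <= f(x)
    return r <= f(x) + 1e-12

random.seed(0)
for _ in range(1000):
    x1, x2 = random.uniform(-5, 5), random.uniform(-5, 5)
    r1 = f(x1) - random.uniform(0, 5)      # points on or below the graph
    r2 = f(x2) - random.uniform(0, 5)
    xm, rm = (x1 + x2) / 2, (r1 + r2) / 2
    assert in_hyp(xm, rm)
print("midpoint check passed")
```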
See also
Citations
References
Mathematical analysis
Convex analysis | Hypograph (mathematics) | [
"Mathematics"
] | 291 | [
"Mathematical analysis",
"Mathematical analysis stubs"
] |
14,039,006 | https://en.wikipedia.org/wiki/Nucleic%20acid%20analogue | Nucleic acid analogues are compounds which are analogous (structurally similar) to naturally occurring RNA and DNA, used in medicine and in molecular biology research. Nucleic acids are chains of nucleotides, which are composed of three parts: a phosphate backbone, a pentose sugar, either ribose or deoxyribose, and one of four nucleobases. An analogue may have any of these altered. Typically the analogue nucleobases confer, among other things, different base pairing and base stacking properties. Examples include universal bases, which can pair with all four canonical bases, and phosphate-sugar backbone analogues such as PNA, which affect the properties of the chain (PNA can even form a triple helix).
Nucleic acid analogues are also called xeno nucleic acids and represent one of the main pillars of xenobiology, the design of new-to-nature forms of life based on alternative biochemistries.
Artificial nucleic acids include peptide nucleic acids (PNA), morpholino, and locked nucleic acids (LNA), as well as glycol nucleic acids (GNA), threose nucleic acids (TNA), and hexitol nucleic acids (HNA). Each of these is distinguished from naturally occurring DNA or RNA by changes to the backbone of the molecule. However, the polyelectrolyte theory of the gene proposes that a genetic molecule requires a charged backbone to function.
In May 2014, researchers announced that they had successfully introduced two new artificial nucleotides into bacterial DNA, and by including individual artificial nucleotides in the culture media, were able to passage the bacteria 24 times; they did not create mRNA or proteins able to use the artificial nucleotides. The artificial nucleotides featured 2 fused aromatic rings.
Medicine
Several nucleoside analogues are used as antiviral or anticancer agents. The viral polymerase incorporates these compounds with non-canonical bases. These compounds are activated in the cells by being converted into nucleotides, they are administered as nucleosides since charged nucleotides cannot easily cross cell membranes.
Molecular biology
Nucleic acid analogues are used in molecular biology for several purposes:
Investigation of possible scenarios of the origin of life: By testing different analogs, researchers try to answer the question of whether life's use of DNA and RNA was selected over time due to its advantages, or if they were chosen by arbitrary chance;
As a tool to detect particular sequences: XNA can be used to tag and identify a wide range of DNA and RNA components with high specificity and accuracy;
As an enzyme acting on DNA, RNA and XNA substrates - XNA has been shown to have the ability to cleave and ligate DNA, RNA and other XNA molecules similar to the actions of RNA ribozymes;
As a tool with resistance to RNA hydrolysis;
Investigation of the mechanisms used by enzyme; and
Investigation of the structural features of nucleic acids.
Backbone analogues
Hydrolysis resistant RNA-analogues
Ribose's 2' hydroxy group reacts with the phosphate linked 3' hydroxy group, making RNA too unstable to be used or synthesized reliably. To overcome this, a ribose analogue can be used. The most common RNA analogues are 2'-O-methyl-substituted RNA, locked nucleic acid (LNA) or bridged nucleic acid (BNA), morpholino, and peptide nucleic acid (PNA). Although these oligonucleotides have a different backbone sugar—or, in the case of PNA, an amino acid residue in place of the ribose phosphate—they still bind to RNA or DNA according to Watson and Crick pairing while being immune to nuclease activity. They cannot be synthesized enzymatically and can only be obtained synthetically using the phosphoramidite strategy or, for PNA, other methods of peptide synthesis.
Other notable analogues used as tools
Dideoxynucleotides are used in sequencing. These nucleoside triphosphates possess a non-canonical sugar, dideoxyribose, which lacks the 3' hydroxyl group normally present in DNA and therefore cannot bond with the next base. The lack of the 3' hydroxyl group terminates the chain reaction as the DNA polymerases mistake it for a regular deoxyribonucleotide. Another chain-terminating analogue that lacks a 3' hydroxyl and mimics adenosine is called cordycepin. Cordycepin is an anticancer drug that targets RNA replication. Another analogue used in sequencing is the nucleobase analogue 7-deaza-GTP, which is used to sequence CG-rich regions; 7-deaza-ATP, by contrast, is called tubercidin, an antibiotic.
Precursors to the RNA world
It has been suggested that the RNA world may have been preceded by an "RNA-like world" where other nucleic acids with a different backbone, such as GNA, PNA, and TNA, existed; however, evidence for this hypothesis has been called "tenuous".
Base analogues
Nucleobase structure and nomenclature
Naturally occurring bases can be divided into two classes according to their structure:
Pyrimidines are six-membered heterocycles with nitrogen atoms in positions 1 and 3.
Purines are bicyclic, consisting of a pyrimidine fused to an imidazole ring.
Artificial nucleotides (Unnatural Base Pairs (UBPs) named d5SICS UBP and dNaM UBP) have been inserted into bacterial DNA but these genes did not template mRNA or induce protein synthesis. The artificial nucleotides featured two fused aromatic rings which formed a (d5SICS–dNaM) complex mimicking the natural (dG–dC) base pair.
Mutagens
One of the most common base analogs is 5-bromouracil (5BU), the abnormal base found in the mutagenic nucleotide analog BrdU. When a nucleotide containing 5-bromouracil is incorporated into the DNA, it is most likely to pair with adenine; however, it can spontaneously shift into another isomer which pairs with a different nucleobase, guanine. If this happens during DNA replication, a guanine will be inserted as the opposite base analog, and in the next DNA replication, that guanine will pair with a cytosine. This results in a change in one base pair of DNA, specifically a transition mutation.
Additionally, nitrous acid (HNO2) is a potent mutagen that acts on replicating and non-replicating DNA. It can cause deamination of the amino groups of adenine, guanine and cytosine. Adenine is deaminated to hypoxanthine, which base pairs to cytosine instead of thymine. Cytosine is deaminated to uracil, which base pairs with adenine instead of guanine. Deamination of guanine is not mutagenic. Nitrous acid-induced mutations also are induced to mutate back to wild-type.
Fluorophores
Commonly, fluorophores (such as rhodamine or fluorescein) are linked to the ring attached to the sugar (in para) via a flexible arm, presumably extruding from the major groove of the helix. Due to the low processivity of Taq polymerases with nucleotides linked to bulky adducts such as fluorophores, the sequence is typically copied using a nucleotide with an arm and later coupled with a reactive fluorophore (indirect labelling):
Amine reactive: aminoallyl nucleotides contain a primary amine group on a linker that reacts with the amino-reactive dye such as cyanine or Alexa Fluor dyes, which contain a reactive leaving group like succinimidyl ester (NHS). Base-pairing amino groups are not affected.
Thiol reactive: thiol-containing nucleotides react with the fluorophore linked to a reactive leaving group like maleimide.
Biotin-linked nucleotides rely on the same indirect labelling principle (and fluorescent streptavidin) and are used in Affymetrix DNAchips.
Fluorophores find a variety of uses in medicine and biochemistry.
Fluorescent base analogues
The most commonly used and commercially available fluorescent base analogue, 2-aminopurine (2-AP), has a high-fluorescence quantum yield free in solution (0.68) that is considerably reduced (appr. 100 times but highly dependent on base sequence) when incorporated into nucleic acids. The emission sensitivity of 2-AP to immediate surroundings is shared by other promising and useful fluorescent base analogues like 3-MI, 6-MI, 6-MAP, pyrrolo-dC (also commercially available), modified and improved derivatives of pyrrolo-dC, furan-modified bases and many other ones (see recent reviews). This sensitivity to the microenvironment has been utilized in studies of e.g. structure and dynamics within both DNA and RNA, dynamics and kinetics of DNA-protein interaction and electron transfer within DNA.
A newly developed and very interesting group of fluorescent base analogues that has a fluorescence quantum yield that is nearly insensitive to their immediate surroundings is the tricyclic cytosine family. 1,3-Diaza-2-oxophenothiazine, tC, has a fluorescence quantum yield of approximately 0.2 both in single- and in double-strands irrespective of surrounding bases. Also the oxo-homologue of tC called tCO (both commercially available), 1,3-diaza-2-oxophenoxazine, has a quantum yield of 0.2 in double-stranded systems. However, it is somewhat sensitive to surrounding bases in single-strands (quantum yields of 0.14–0.41). The high and stable quantum yields of these base analogues make them very bright, and, in combination with their good base analogue properties (leaves DNA structure and stability next to unperturbed), they are especially useful in fluorescence anisotropy and FRET measurements, areas where other fluorescent base analogues are less accurate. Also, in the same family of cytosine analogues, a FRET-acceptor base analogue, tCnitro, has been developed. Together with tCO as a FRET-donor this constitutes the first nucleic acid base analogue FRET-pair ever developed. The tC-family has, for example, been used in studies related to polymerase DNA-binding and DNA-polymerization mechanisms.
Natural non-canonical bases
In a cell, there are several non-canonical bases present: CpG islands in DNA (often methylated), all eukaryotic mRNA (capped with a methyl-7-guanosine), and several bases of rRNAs (methylated). Often, tRNAs are heavily modified postranscriptionally in order to improve their conformation or base pairing, in particular in or near the anticodon: inosine can base pair with C, U, and even with A, whereas thiouridine (with A) is more specific than uracil (with a purine). Other common tRNA base modifications are pseudouridine (which gives its name to the TΨC loop), dihydrouridine (which does not stack as it is not aromatic), queuosine, wyosine, and so forth. Nevertheless, these are all modifications to normal bases and are not placed by a polymerase.
Base-pairing
Canonical bases may have either a carbonyl or an amine group on the carbons surrounding the nitrogen atom furthest away from the glycosidic bond, which allows them to base pair (Watson-Crick base pairing) via hydrogen bonds (amine with ketone, purine with pyrimidine). Adenine and 2-aminoadenine have one/two amine group(s), whereas thymine has two carbonyl groups, and cytosine and guanine are mixed amine and carbonyl (inverted in respect to each other).
The precise reason why there are only four nucleotides is debated, but there are several unused possibilities.
Furthermore, adenine is not the most stable choice for base pairing: in Cyanophage S-2L, diaminopurine (DAP) is used instead of adenine. Diaminopurine basepairs perfectly with thymine as it is identical to adenine but has an amine group at position 2 forming 3 intramolecular hydrogen bonds, eliminating the major difference between the two types of basepairs (weak A-T vs strong C-G). This improved stability affects protein-binding interactions that rely on those differences.
Other combinations include:
Isoguanine and isocytosine, which have their amine and ketone inverted compared to standard guanine and cytosine. They are probably not used because their tautomers are problematic for base pairing, but isoC and isoG can be amplified correctly with PCR even in the presence of the four canonical bases.
Diaminopyrimidine and xanthine, which bind like 2-aminoadenine and thymine but with inverted structures. This pair is not used as xanthine is a deamination product.
However, correct DNA structure can form even when the bases are not paired via hydrogen bonding; that is, the bases pair thanks to hydrophobicity, as studies have shown with DNA isosteres (analogues with the same number of atoms) such as the thymine analogue 2,4-difluorotoluene (F) or the adenine analogue 4-methylbenzimidazole (Z). An alternative hydrophobic pair could be isoquinoline and pyrrolo[2,3-b]pyridine.
Other noteworthy basepairs:
Several fluorescent bases have also been made, such as the 2-amino-6-(2-thienyl)purine and pyrrole-2-carbaldehyde base pair.
Metal-coordinated bases, such as pairing between a pyridine-2,6-dicarboxylate (tridentate ligand) and a pyridine (monodentate ligand) through square planar coordination to a central copper ion.
Universal bases may pair indiscriminately with any other base, but, in general, lower the melting temperature of the sequence considerably; examples include 2'-deoxyinosine (hypoxanthine deoxynucleotide) derivatives, nitroazole analogues, and hydrophobic aromatic non-hydrogen-bonding bases (strong stacking effects). These are used as proof of concept and, in general, are not utilized in degenerate primers (which are a mixture of primers).
The numbers of possible base pairs is doubled when xDNA is considered. xDNA contains expanded bases, in which a benzene ring has been added, which may pair with canonical bases, resulting in four additional possible base-pairs (xA-T, xT-A, xC-G, xG-C) with eight bases (or 16 bases if the unused arrangements are used). Another form of benzene added bases is yDNA, in which the base is widened by the benzene.
Metal base-pairs
In metal base-pairing, the Watson-Crick hydrogen bonds are replaced by the interaction between a metal ion with nucleosides acting as ligands. The possible geometries of the metal that would allow for duplex formation with two bidentate nucleosides around a central metal atom are tetrahedral, dodecahedral, and square planar. Metal-complexing with DNA can occur by the formation of non-canonical base pairs from natural nucleobases with participation by metal ions and also by the exchanging the hydrogen atoms that are part of the Watson-Crick base pairing by metal ions. Introduction of metal ions into a DNA duplex has shown to have potential magnetic or conducting properties, as well as increased stability.
Metal complexing has been shown to occur between natural nucleobases. A well-documented example is the formation of T-Hg-T, which involves two deprotonated thymine nucleobases that are brought together by Hg2+ and forms a connected metal-base pair. This motif does not accommodate stacked Hg2+ in a duplex due to an intrastrand hairpin formation process that is favored over duplex formation. Two thymines across from each other do not form a Watson-Crick base pair in a duplex; this is an example where a Watson-Crick basepair mismatch is stabilized by the formation of the metal-base pair. Another example of a metal complexing to natural nucleobases is the formation of A-Zn-T and G-Zn-C at high pH; Co2+ and Ni2+ also form these complexes. These are Watson-Crick base pairs where the divalent cation is coordinated to the nucleobases. The exact binding is debated.
A large variety of artificial nucleobases have been developed for use as metal base pairs. These modified nucleobases exhibit tunable electronic properties, sizes, and binding affinities that can be optimized for a specific metal. For example, a nucleoside modified with a pyridine-2,6-dicarboxylate has shown to bind tightly to Cu2+, whereas other divalent ions are only loosely bound. The tridentate character contributes to this selectivity. The fourth coordination site on the copper is saturated by an oppositely arranged pyridine nucleobase. The asymmetric metal base pairing system is orthogonal to the Watson-Crick base pairs. Another example of an artificial nucleobase is that with hydroxypyridone nucleobases, which are able to bind Cu2+ inside the DNA duplex. Five consecutive copper-hydroxypyridone base pairs were incorporated into a double strand, which were flanked by only one natural nucleobase on both ends. EPR data showed that the distance between copper centers was estimated to be 3.7 ± 0.1 Å, while a natural B-type DNA duplex is only slightly larger (3.4 Å). The appeal for stacking metal ions inside a DNA duplex is the hope to obtain nanoscopic self-assembling metal wires, though this has not been realized yet.
Unnatural base pair (UBP)
An unnatural base pair (UBP) is a designed subunit (or nucleobase) of DNA that is created in a laboratory and does not occur in nature. In 2012, a group of American scientists led by Floyd Romesberg, a chemical biologist at the Scripps Research Institute in San Diego, California, published that his team had designed two unnatural base pairs
named d5SICS and dNaM. More technically, these artificial nucleotides bearing hydrophobic nucleobases feature two fused aromatic rings that form a d5SICS–dNaM complex or base pair in DNA. In 2014, the same team reported that they had synthesized a plasmid containing natural T-A and C-G base pairs along with the best-performing UBP Romesberg's laboratory had designed and inserted it into cells of the common bacterium E. coli, which successfully replicated the unnatural base pairs through multiple generations. This is the first known example of a living organism passing along an expanded genetic code to subsequent generations. This was in part achieved by the addition of a supportive algal gene that expresses a nucleotide triphosphate transporter which efficiently imports the triphosphates of both d5SICSTP and dNaMTP into E. coli bacteria. Then, the natural bacterial replication pathways use them to accurately replicate the plasmid containing d5SICS–dNaM.
The successful incorporation of a third base pair is a significant breakthrough toward the goal of greatly expanding the number of amino acids which can be encoded by DNA, from the existing 20 amino acids to a theoretically possible 172, thereby expanding the potential for living organisms to produce novel proteins. Earlier, the artificial strings of DNA did not encode for anything, but scientists speculated they could be designed to manufacture new proteins which could have industrial or pharmaceutical uses. Transcription of DNA containing unnatural base pairs and translation of corresponding mRNA were actually achieved recently. In November 2017, the same team at the Scripps Research Institute that first introduced two extra nucleobases into bacterial DNA reported having constructed a semi-synthetic E. coli bacteria able to make proteins using such DNA. Its DNA contained six different nucleobases: four canonical and two artificially added, dNaM and dTPT3 (these two form a pair). The bacteria had two corresponding RNA bases included in two new codons, additional tRNAs recognizing these new codons (these tRNAs also contained two new RNA bases within their anticodons) and additional amino acids, enabling the bacteria to synthesize "unnatural" proteins.
Another demonstration of UBPs was achieved by Ichiro Hirao's group at RIKEN institute in Japan. In 2002, they developed an unnatural base pair between 2-amino-8-(2-thienyl)purine (s) and pyridine-2-one (y) that functions in vitro in transcription and translation, for the site-specific incorporation of non-standard amino acids into proteins. In 2006, they created 7-(2-thienyl)imidazo[4,5-b]pyridine (Ds) and pyrrole-2-carbaldehyde (Pa) as a third base pair for replication and transcription. Afterward, Ds and 4-[3-(6-aminohexanamido)-1-propynyl]-2-nitropyrrole (Px) were discovered as a high-fidelity pair in PCR amplification. In 2013, they applied the Ds-Px pair to DNA aptamer generation by in vitro selection (SELEX) and demonstrated that the genetic alphabet expansion significantly augments DNA aptamer affinities to target proteins.
Orthogonal system
The possibility has been proposed and studied, both theoretically and experimentally, of implementing an orthogonal system inside cells independent of the cellular genetic material in order to make a completely safe system, with the possible increase in encoding potentials.
Several groups have focused on different aspects:
Novel backbones and base pairs as discussed above;
XNA artificial replication and transcription polymerases starting generally from T7 RNA polymerase;
16S ribosomal sequences with altered anti-Shine-Dalgarno sequences, allowing the translation of only orthogonal mRNA with a matching altered Shine-Dalgarno sequence; and
Novel tRNA encoding non-natural aminoacids for an expanded genetic code.
See also
Biotin
Dark quencher
Deoxyribozyme
Expanded genetic code
Fluorophore
Genetics
Molecular biology
Nucleic acid
Nucleobase
Nucleoside
Nucleotide
Oligonucleotide synthesis
Ribozyme
Synthetic biology
Xenobiology
xDNA
Hachimoji DNA
Artificially Expanded Genetic Information System (AEGIS)
Xeno nucleic acid
References
Molecular genetics
Nucleic acids
RNA
RNA interference
Gene expression | Nucleic acid analogue | [
"Chemistry",
"Biology"
] | 4,931 | [
"Biomolecules by chemical classification",
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry",
"Nucleic acids"
] |
14,041,249 | https://en.wikipedia.org/wiki/Kab%20101 | Kab 101 is a Sea Pony-type minimum-facilities light-production oil platform operated by Mexican state-owned oil company PEMEX, and installed about off the coast of Tabasco, near the port of , in 1994. The platform was designed by British engineering firm SLP Engineering Limited. The platform also produces the wells Kab 103 and Kab 121. This platform was the site of the accident which eventually led to the death of 22 workers. Pemex would contract two independent studies and one by itself and in an exercise of transparency, posted the reports on its website. On October 31, 2008, PEMEX released the result of the independent studies of the accident.
Usumacinta accident timeline
October 21, 2007: The jackup rig Usumacinta is moved to the location of Kab 101 to prepare for work on the well Kab 103.
October 23, 2007, 0700: The arrival of Cold Front No. 4 with winds exceeding causes all personnel on the rig to cease operations.
08:00 – 11:00: The Usumacinta begins moving with the seas, because its ballast and anchor points have not been properly set.
11:30: The Usumacinta's auxiliary cover below its cantilever collides with the wellhead Kab 121, which begins leaking oil and gas.
11:40 – 13:55: The crew of the Usumacinta attempts to close the sub-surface storm valves for both Kab 101 and the leaking Kab 121, to prevent further danger to the people on board the platform. This is only temporarily successful at stopping the leak from Kab 121.
15:30: The storm valves on Kab 121 fail and hydrogen sulfide is detected, prompting the order to evacuate the platform.
15:45: All 73 people from the platform were accounted for in the two lifeboats, and were wearing life jackets.
Lifeboat #1
Around 16:13 the lifeboat began to fill with water. This eventually led to the lifeboat crew becoming panicked and attempting to leave the lifeboat, in an attempt to board the M/V Morrison Tide. The lifeboat eventually capsized around 17:28, and a collision between the two lifeboats left most of Lifeboat #1's occupants drifting in the water. Two crew members from the Morrison Tide were lost in rescue operations; one died from injuries and another was lost at sea. Some survivors were also rescued by the M/V Isla Del Torro.
The lifeboat eventually drifted ashore east of Nuevo Campechito with nobody on board.
Lifeboat #2
This lifeboat started out with poor visibility due to the oil from the leaking well Kab 121. The hatches were opened so the helmsman could see, and to allow for ventilation due to several people complaining of dizziness inside. At 17:42 the lifeboat was struck by a huge wave, and overturned. 33 to 35 people were trapped and had to fight to escape the boat.
The next day Lifeboat #2 reached the shore west of Nuevo Progreso upside down, with 12 survivors on top, and one survivor and four bodies inside.
Well control and aftermath
An early estimate of the oil leak from Kab 121 was per day of light crude. In December 2007 Pemex estimated that of leaked oil were recovered with another remaining in the environment.
On November 13, during attempts to control the leaks resulting from the accident, the well Kab 121 ignited and was brought under control the same day. On the 20th of November, Kab 121 ignited again; this time the fire destroyed the remains of the derrick and was brought under control on December 3.
Scrappers
On August 4, 2008, another fire was extinguished on the Usumacinta. This fire is suspected to have been caused by scrappers attempting to steal from the abandoned rig. This blaze was extinguished by the ships “Isla Guadalupe”, “Isla Cozumel”, “Pionero”, “Conquistador”, and "Deep Endeavour". The Mexican Navy also sent the interceptor "Auriga" to the area at the request of PEMEX.
Notes
External links
Complete Battelle Report of the Accident
Official Pemex photos
Usumacinta accident article
MEXICO: Oil Rig Accident Kills 18 Early ABC News article.
Pemex Probes Usumacinta Accident Rigzone article.
Oil platforms
Pemex
Oil platform disasters
Oil spills in Mexico
2007 industrial disasters
2007 disasters in Mexico
Gulf Coast of Mexico
1994 establishments in Mexico
October 2007 events in Mexico | Kab 101 | [
"Chemistry",
"Engineering"
] | 912 | [
"Oil platforms",
"Petroleum technology",
"Natural gas technology",
"Structural engineering"
] |
14,045,585 | https://en.wikipedia.org/wiki/Alkylglycerol%20monooxygenase | Alkylglycerol monooxygenase (AGMO) () is an enzyme that catalyzes the hydroxylation of alkylglycerols, a specific subclass of ether lipids. This enzyme was first described in 1964 as a pteridine-dependent ether lipid cleaving enzyme. In 2010 finally, the gene coding for alkylglycerol monooxygenase was discovered as transmembrane protein 195 (TMEM195) on chromosome 7.
In analogy to the enzymes phenylalanine hydroxylase, tyrosine hydroxylase, tryptophan hydroxylase and nitric oxide synthase, alkylglycerol monooxygenase critically depends on the cofactor tetrahydrobiopterin and iron.
The reaction catalyzed by alkylglycerol monooxygenase:
1-alkyl-sn-glycerol + tetrahydrobiopterin + O2 → 1-hydroxyalkyl-sn-glycerol + 6,7[8H]-dihydrobiopterin + H2O
The unstable intermediate product 1-hydroxyalkyl-sn-glycerol rearranges into the fatty aldehyde and the free glycerol derivative. The fatty aldehyde is then further oxidized to the corresponding acid by fatty aldehyde dehydrogenase.
Alkylglycerol monooxygenase is a membrane-bound mixed-function oxidase and harbours a fatty acid hydroxylase motif. The iron is believed to be coordinated by a diiron center composed of eight histidines, which can be found in all enzymes containing this motif.
Nomenclature
The systematic name for this enzyme is 1-alkyl-sn-glycerol,tetrahydrobiopterin:oxygen oxidoreductase. Other names in use are glyceryl-ether monooxygenase, glyceryl-ether cleaving enzyme, glyceryl ether oxygenase, glyceryl etherase, and O-alkylglycerol monooxygenase.
References
Further reading
EC 1.14.16
Phospholipids
Enzymes of unknown structure | Alkylglycerol monooxygenase | [
"Chemistry"
] | 481 | [
"Phospholipids",
"Signal transduction"
] |
14,048,614 | https://en.wikipedia.org/wiki/Brainbow | Brainbow is a process by which individual neurons in the brain can be distinguished from neighboring neurons using fluorescent proteins. By randomly expressing different ratios of red, green, and blue derivatives of green fluorescent protein in individual neurons, it is possible to flag each neuron with a distinctive color. This process has been a major contribution to the field of neural connectomics.
The technique was originally developed in 2007 by a team led by Jeff W. Lichtman and Joshua R. Sanes, both at Harvard University. The original technique has recently been adapted for use with other model research organisms including the fruit fly (Drosophila melanogaster), zebrafish (Danio rerio), and Arabidopsis thaliana.
While earlier labeling techniques allowed for the mapping of only a few neurons, this new method allows more than 100 differently mapped neurons to be simultaneously and differentially illuminated in this manner. This leads to its characteristic multicolored appearance on imaging, earning its name and winning awards in science photography competitions.
History and development
Brainbow was initially developed by Jeff W. Lichtman and Joshua R. Sanes at Washington University in St. Louis. The team constructed Brainbow using a two-step process: first, a specific genetic construct was generated that could be recombined in multiple arrangements to produce one of either three or four colors based on the particular fluorescent proteins (XFPs) being implemented. Next, multiple copies of the same transgenic construct were inserted into the genome of the target species, resulting in the random expression of different XFP ratios and subsequently causing different cells to exhibit a variety of colorful hues.
Brainbow was originally created as an improvement over more traditional neuroimaging techniques, such as Golgi staining and dye injection, both of which presented severe limitations to researchers in their ability to visualize the intricate architecture of neural circuitry in the brain. While older techniques were only able to stain cells with a constricted range of colors, often utilizing bi- and tri-color transgenic mice to unveil limited information in regards to neuronal structures, Brainbow is much more flexible in that it has the capacity to fluorescently label individual neurons with up to approximately 100 different hues so that scientists can identify and even differentiate between dendritic and axonal processes. By revealing such detailed information about neuronal connectivity and patterns, sometimes even in vivo, scientists are often able to infer information regarding neuronal interactions and their subsequent impact upon behavior and function. Thus, Brainbow filled the void left by previous neuroimaging methods.
With the recent advent of Brainbow in neuroscience, researchers are now able to construct specific maps of neural circuits and better investigate how these relate to various mental activities and their connected behaviors (i.e. Brainbow reveals information about the interconnections between neurons and their subsequent interactions that affect overall brain functionality). As a further extrapolation of this method, Brainbow can therefore also be used to study both neurological and psychological disorders by analyzing differences in neural maps.
Methods
Brainbow techniques rely on the Cre-Lox recombination, in which the protein Cre recombinase drives inversion or excision of DNA between loxP sites. The original Brainbow method includes both Brainbow-1 and Brainbow-2, which utilize different forms of cre/lox recombination. Brainbow-3, a modified version of Brainbow-1, was developed in 2013. For all Brainbow subtypes, the expression of a given XFP is a stochastic, or random, event.
Brainbow-1 uses DNA constructs with different fluorescent protein genes (XFPs) separated by mutant and canonical forms of loxP. This creates a set of mutually exclusive excision possibilities, since cre-mediated recombination occurs only between identical loxP sites. After recombination occurs, the fluorescent protein that is left directly after the promoter is uniquely expressed. Thus, a construct with four XFPs separated by three different loxP sites, three excision events, and the original construct can produce four different fluorescent proteins.
Brainbow-2 uses Cre excision and inversion to allow multiple expression possibilities in a given construct. In one DNA segment with two oppositely oriented XFPs, Cre will induce a random inversion event that leaves one fluorescent protein in the proper orientation for expression. If two of these invertible sequences are aligned, three different inversion events are possible. When excision events are also considered, one of four fluorescent proteins will be expressed for a given combination of Cre excisions and inversions.
Brainbow-3 retains the Brainbow-1 loxP format, but replaces the RFP, YFP, and CFP genes with mOrange2, EGFP, and mKate2. mO2, EGFP, and mK2 were chosen both because their fluorescent excitation and emission spectra overlap minimally, and because they share minimal sequence homology, allowing for the design of selective antibodies that can be used to detect them in immunohistochemical protocols. Brainbow-3 also addresses the issue of uneven filling of neurons with XFPs by using farnesylated derivatives of the XFPs, which are more evenly trafficked to neuronal membranes.
Brainbow is implemented in vivo by crossing two transgenic organism strains: one that expresses the Cre protein and another that has been transfected with several versions of a loxP/XFP construct. Using multiple copies of the transgene allows the XFPs to combine in a way that can give one of approximately 100 different colors. Thus, each neuron is labeled with a different hue based on its given combinatorial and stochastic expression of fluorescent proteins.
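The combinatorics behind the "approximately 100 colors" figure can be illustrated with a short simulation. The sketch below is a toy model only: the copy number, the three-XFP palette, and the uniform recombination outcome are assumptions made for illustration rather than details of the actual Brainbow constructs.

```python
import random

XFPS = ["RFP", "YFP", "CFP"]     # possible outcomes of one cassette (toy palette)
COPIES_PER_GENOME = 8            # assumed transgene copy number

def neuron_color(rng):
    """Return the (R, Y, C) expression ratio of one simulated neuron."""
    counts = {xfp: 0 for xfp in XFPS}
    for _ in range(COPIES_PER_GENOME):
        counts[rng.choice(XFPS)] += 1     # each copy recombines independently
    total = sum(counts.values())
    return tuple(round(counts[x] / total, 2) for x in XFPS)

rng = random.Random(0)
hues = {neuron_color(rng) for _ in range(1000)}
print(len(hues), "distinct color ratios among 1000 simulated neurons")
```

With eight copies and three outcomes per copy, the number of distinguishable ratios is on the order of several dozen, which is the combinatorial effect that gives Brainbow tissue its many hues.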
In order to elucidate differential XFP expression patterns into a visible form, brain slices are imaged with confocal microscopy. When exposed to a photon with its particular excitation wavelength, each fluorophore emits a signal that is collected into a red, green, or blue channel, and the resultant light combination is analyzed with data analysis software. Superimposition of differentially colored neurons allows visual disentanglement of complicated neural circuits.
Brainbow has predominantly been tested in mice to date; however, the basic technique described above has also been modified for use in more recent studies since the advent of the original method introduced in 2007.
Mice
The mouse brain has about 75,000,000 neurons and is more similar to the human brain than the brains of Drosophila and other organisms commonly used to model this technique, such as C. elegans. Mice were the first organisms in which the Brainbow method of neuroimaging was successfully employed. Livet et al. (2007) developed two versions of Brainbow mice using Brainbow-1 and Brainbow-2, which are described above. When using these methods to create a complete map and track the axons of a mouse muscle, it is necessary to collect tens of thousands of images and compile them into stacks to create a complete schematic. It is then possible to trace each motor axon and its synaptic contacts to construct a complete connectome of the muscle.
More examples of neurons examined using the Brainbow technique in transgenic mice are located in the motor nerve innervating ear muscles, axon tracts in the brainstem, and the hippocampal dentate gyrus.
Drosophila
The complexity of the Drosophila brain, consisting of about 100,000 neurons, makes it an excellent candidate for implementing neurophysiology and neuroscience techniques like Brainbow. In fact, Stefanie Hampel et al. (2011) combined Brainbow in conjunction with genetic targeting tools to identify individual neurons within the Drosophila brain and various neuronal lineages. One of the genetic targeting tools was a GAL4/UAS binary expression system that controls the expression of UAS-Brainbow and targets the expression to small groups of neurons. Utilizing ‘Flip Out’ methods increased the cellular resolution of the reporter construct. The expression of fluorescent proteins, as with the original Brainbow, depended on Cre recombination corresponding with matched lox sites. Hampel et al. (2011) also developed their own variation of Brainbow (dBrainbow), based on antibody labeling of epitopes rather than endogenous fluorescence. Two copies of their construct yield six bright, separable colors. This, along with simplifications in color assignment, enabled them to observe the trajectories of each neuron over long distances. Specifically, they traced motor neurons from the antennal lobe to neuromuscular junctions, allowing them to identify the specific muscle targets of individual neurons.
Ultimately, this technique provides the ability to efficaciously map the neuronal circuitry in Drosophila so that researchers are able to uncover more information about the brain structure of this invertebrate and how it relates to its ensuing behavior.
Limitations
As with any neuroimaging technique, Brainbow has a number of limitations that stem from the methods required to perform it. For example, the process of breeding at least two strains of transgenic animals from embryonic stem cells is both time-consuming and complex. Even if two transgenic species are successfully created, not all of their offspring will show the recombination. Thus, this requires extensive planning prior to performing an experiment.
In addition, due to the random nature in the expression of the fluorescent proteins, scientists are unable to precisely control the labeling of neural circuitry, which may result in the poor identification of specific neurons.
The use of brainbow in mammalian populations is also hampered by the incredible diversity of neurons of the central nervous system. The sheer density of neurons coupled with the presence of long tracts of axons make viewing larger regions of the CNS with high resolution difficult. Brainbow is most useful when examining single cell resolution against the background of a complex multicellular environment. However, due to the resolution limits of optical microscopy, conclusive identification of synaptic connections between neurons is not easily accomplished. This issue is somewhat avoided by the use of synaptic markers to supplement the use of optical microscopy in viewing synaptic connections.
See also
GFP
Fluorescence
Cre-Lox recombination
References
External links
Podcast on NPR's Science Friday
"Brainbow" A cool use of GFP
Cell imaging
Fluorescent proteins
Neuroimaging | Brainbow | [
"Chemistry",
"Biology"
] | 2,147 | [
"Biochemistry methods",
"Fluorescent proteins",
"Microscopy",
"Bioluminescence",
"Cell imaging"
] |
16,850,500 | https://en.wikipedia.org/wiki/KCNG1 | Potassium voltage-gated channel subfamily G member 1 is a protein that in humans is encoded by the KCNG1 gene.
Voltage-gated potassium (Kv) channels represent the most complex class of voltage-gated ion channels from both functional and structural standpoints. Their diverse functions include regulating neurotransmitter release, heart rate, insulin secretion, neuronal excitability, epithelial electrolyte transport, smooth muscle contraction, and cell volume. This gene encodes a member of the potassium channel, voltage-gated, subfamily G. This gene is abundantly expressed in skeletal muscle. Alternative splicing results in at least two transcript variants encoding distinct isoforms.
See also
Voltage-gated potassium channel
References
Further reading
External links
Ion channels | KCNG1 | [
"Chemistry"
] | 159 | [
"Neurochemistry",
"Ion channels"
] |
16,850,535 | https://en.wikipedia.org/wiki/KIF2A | Kinesin-like protein KIF2A is a protein that in humans is encoded by the KIF2A gene. In mice, KIF2A is essential for proper neurogenesis and deficiency of KIF2A in mature neurons results in the loss of those neurons.
Kinesins, such as KIF2, are microtubule-associated motor proteins. For background information on kinesins, see MIM 148760.
References
Further reading
External links
Human proteins
Motor proteins | KIF2A | [
"Chemistry"
] | 109 | [
"Molecular machines",
"Motor proteins"
] |
16,850,898 | https://en.wikipedia.org/wiki/PI4KAP2 | Putative phosphatidylinositol 4-kinase alpha-like protein P2 is an enzyme that in humans is encoded by the PI4KAP2 gene.
References
Further reading
Proteins | PI4KAP2 | [
"Chemistry"
] | 42 | [
"Biomolecules by chemical classification",
"Protein stubs",
"Biochemistry stubs",
"Molecular biology",
"Proteins"
] |
16,851,994 | https://en.wikipedia.org/wiki/PRRX2 | Paired mesoderm homeobox protein 2 is a protein that in humans is encoded by the PRRX2 gene.
Function
The DNA-associated protein encoded by this gene is a member of the paired family of homeobox proteins. Expression is localized to proliferating fetal fibroblasts and the developing dermal layer, with downregulated expression in adult skin. Increases in expression of this gene during fetal but not adult wound healing suggest a possible role in mechanisms that control mammalian dermal regeneration and prevent formation of scar response to wounding. The expression patterns provide evidence consistent with a role in fetal skin development and a possible role in cellular proliferation.
References
Further reading
Transcription factors | PRRX2 | [
"Chemistry",
"Biology"
] | 140 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
16,852,203 | https://en.wikipedia.org/wiki/PHF1 | PHD finger protein 1 is a protein that in humans is encoded by the PHF1 gene.
Function
This gene encodes a protein with significant sequence similarity to Drosophila Polycomblike. The encoded protein contains a zinc finger-like PHD (plant homeodomain) finger which is distinct from other classes of zinc finger motifs and which shows the typical Cys4-His-Cys3 arrangement. PHD finger genes are thought to belong to a diverse group of transcriptional regulators possibly affecting eukaryotic gene expression by influencing chromatin structure. Two transcript variants have been found for this gene.
References
Further reading
External links
Transcription factors | PHF1 | [
"Chemistry",
"Biology"
] | 131 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
16,852,328 | https://en.wikipedia.org/wiki/SIX4 | Homeobox protein SIX4 is a protein that in humans is encoded by the SIX4 gene.
References
Further reading
Transcription factors | SIX4 | [
"Chemistry",
"Biology"
] | 27 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
16,855,525 | https://en.wikipedia.org/wiki/CRTC3 | CREB-regulated transcription coactivator 3 is a protein that in humans is encoded by the CRTC3 gene.
This gene has been shown to be linked to weight gain.
References
Further reading
External links
Gene expression
Transcription coregulators | CRTC3 | [
"Chemistry",
"Biology"
] | 50 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
16,855,603 | https://en.wikipedia.org/wiki/Classical%20electromagnetism%20and%20special%20relativity | The theory of special relativity plays an important role in the modern theory of classical electromagnetism. It gives formulas for how electromagnetic objects, in particular the electric and magnetic fields, are altered under a Lorentz transformation from one inertial frame of reference to another. It sheds light on the relationship between electricity and magnetism, showing that frame of reference determines if an observation follows electric or magnetic laws. It motivates a compact and convenient notation for the laws of electromagnetism, namely the "manifestly covariant" tensor form.
Maxwell's equations, when they were first stated in their complete form in 1865, would turn out to be compatible with special relativity. Moreover, the apparent coincidences in which the same effect was observed due to different physical phenomena by two different observers would be shown to be not coincidental in the least by special relativity. In fact, half of Einstein's 1905 first paper on special relativity, "On the Electrodynamics of Moving Bodies", explains how to transform Maxwell's equations.
Transformation of the fields between inertial frames
E and B fields
This section considers two inertial frames. The primed frame is moving relative to the unprimed frame at velocity v. Fields defined in the primed frame are indicated by primes, and fields defined in the unprimed frame lack primes. The field components parallel to the velocity v are denoted by E∥ and B∥, while the field components perpendicular to v are denoted as E⟂ and B⟂. In these two frames moving at relative velocity v, the E-fields and B-fields are related by:

$$\mathbf{E}'_{\parallel} = \mathbf{E}_{\parallel}, \qquad \mathbf{B}'_{\parallel} = \mathbf{B}_{\parallel},$$
$$\mathbf{E}'_{\perp} = \gamma\left(\mathbf{E}_{\perp} + \mathbf{v}\times\mathbf{B}\right), \qquad \mathbf{B}'_{\perp} = \gamma\left(\mathbf{B}_{\perp} - \frac{\mathbf{v}\times\mathbf{E}}{c^{2}}\right),$$

where

$$\gamma = \frac{1}{\sqrt{1 - \dfrac{v^{2}}{c^{2}}}}$$

is called the Lorentz factor and c is the speed of light in free space. The Lorentz factor (γ) is the same in both frames. The inverse transformations are the same except for the substitution v → −v.
An equivalent, alternative expression is:

$$\mathbf{E}' = \gamma\left(\mathbf{E} + \mathbf{v}\times\mathbf{B}\right) - (\gamma - 1)\,(\mathbf{E}\cdot\hat{\mathbf{v}})\,\hat{\mathbf{v}},$$
$$\mathbf{B}' = \gamma\left(\mathbf{B} - \frac{\mathbf{v}\times\mathbf{E}}{c^{2}}\right) - (\gamma - 1)\,(\mathbf{B}\cdot\hat{\mathbf{v}})\,\hat{\mathbf{v}},$$

where v̂ = v/|v| is the velocity unit vector. With the previous notation, one actually has E∥ = (E·v̂)v̂ and E⟂ = E − (E·v̂)v̂, and likewise for B.
Component by component, for relative motion along the x-axis (v = v x̂), this works out to be the following:

$$E'_{x} = E_{x}, \qquad B'_{x} = B_{x},$$
$$E'_{y} = \gamma\left(E_{y} - v B_{z}\right), \qquad B'_{y} = \gamma\left(B_{y} + \frac{v}{c^{2}} E_{z}\right),$$
$$E'_{z} = \gamma\left(E_{z} + v B_{y}\right), \qquad B'_{z} = \gamma\left(B_{z} - \frac{v}{c^{2}} E_{y}\right).$$
If one of the fields is zero in one frame of reference, that doesn't necessarily mean it is zero in all other frames of reference. This can be seen by, for instance, making the unprimed electric field zero in the transformation to the primed electric field. In this case, depending on the orientation of the magnetic field, the primed system could see an electric field, even though there is none in the unprimed system.
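To make this mixing concrete, the following is a minimal numerical sketch (not part of the original text) that applies the component-wise transformation above for a boost along x; the particular field values and the 0.6c boost speed are arbitrary illustrative choices.

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def boost_fields_x(E, B, v):
    """Transform (E, B) measured in the unprimed frame into the frame
    moving with speed v along +x, using the component-wise formulas above."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    Ex, Ey, Ez = E
    Bx, By, Bz = B
    E_prime = (Ex,
               gamma * (Ey - v * Bz),
               gamma * (Ez + v * By))
    B_prime = (Bx,
               gamma * (By + v * Ez / C ** 2),
               gamma * (Bz - v * Ey / C ** 2))
    return E_prime, B_prime

# A purely electric field in the unprimed frame ...
E = (0.0, 1.0e3, 0.0)   # V/m
B = (0.0, 0.0, 0.0)     # T
E_p, B_p = boost_fields_x(E, B, v=0.6 * C)
print(E_p)              # E'_y is larger by the factor gamma
print(B_p)              # ... and a nonzero B'_z appears in the moving frame
```

Running this shows that a frame boosted relative to a purely electrostatic configuration measures a nonzero magnetic field, which is exactly the intermixing of the fields discussed below.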
This does not mean two completely different sets of events are seen in the two frames, but that the same sequence of events is described in two different ways (see below).
If a particle of charge q moves with velocity u with respect to frame S, then the Lorentz force in frame S is:

$$\mathbf{F} = q\left(\mathbf{E} + \mathbf{u}\times\mathbf{B}\right).$$

In frame S′, the Lorentz force is:

$$\mathbf{F}' = q\left(\mathbf{E}' + \mathbf{u}'\times\mathbf{B}'\right).$$
A derivation for the transformation of the Lorentz force for the particular case is given here. A more general one can be seen here.
The transformations in this form can be made more compact by introducing the electromagnetic tensor (defined below), which is a covariant tensor.
D and H fields
For the electric displacement D and magnetic field strength H, using the constitutive relations and the result for c2:
gives
Analogously for E and B, the D and H form the electromagnetic displacement tensor.
φ and A fields
An alternative simpler transformation of the EM field uses the electromagnetic potentials – the electric potential φ and magnetic potential A:

$$\varphi' = \gamma\left(\varphi - v A_{\parallel}\right), \qquad A'_{\parallel} = \gamma\left(A_{\parallel} - \frac{v\varphi}{c^{2}}\right), \qquad \mathbf{A}'_{\perp} = \mathbf{A}_{\perp},$$
where A∥ is the component of A that is parallel to the direction of relative velocity between frames v, and A⟂ is the perpendicular component. These transparently resemble the characteristic form of other Lorentz transformations (like time-position and energy-momentum), while the transformations of E and B above are slightly more complicated. The components can be collected together as:
ρ and J fields
Analogously for the charge density ρ and current density J,

$$\rho' = \gamma\left(\rho - \frac{v J_{\parallel}}{c^{2}}\right), \qquad J'_{\parallel} = \gamma\left(J_{\parallel} - v\rho\right), \qquad \mathbf{J}'_{\perp} = \mathbf{J}_{\perp}.$$
Collecting components together:
Non-relativistic approximations
For speeds v ≪ c, the relativistic factor γ ≈ 1, which yields:
so that there is no need to distinguish between the spatial and temporal coordinates in Maxwell's equations.
Relationship between electricity and magnetism
Deriving magnetism from electric laws
The chosen reference frame determines whether an electromagnetic phenomenon is viewed as an electric or magnetic effect or a combination of the two. Authors usually derive magnetism from electrostatics when special relativity and charge invariance are taken into account. The Feynman Lectures on Physics (vol. 2, ch. 13–6) uses this method to derive the magnetic force on charge in parallel motion next to a current-carrying wire. See also Haskell and Landau.
If the charge instead moves perpendicular to a current-carrying wire, electrostatics cannot be used to derive the magnetic force. In this case, it can instead be derived by considering the relativistic compression of the electric field due to the motion of the charges in the wire.
Fields intermix in different frames
The above transformation rules show that the electric field in one frame contributes to the magnetic field in another frame, and vice versa. This is often described by saying that the electric field and magnetic field are two interrelated aspects of a single object, called the electromagnetic field. Indeed, the entire electromagnetic field can be represented in a single rank-2 tensor called the electromagnetic tensor; see below.
Moving magnet and conductor problem
A famous example of the intermixing of electric and magnetic phenomena in different frames of reference is called the "moving magnet and conductor problem", cited by Einstein in his 1905 paper on special relativity.
If a conductor moves with a constant velocity through the field of a stationary magnet, eddy currents will be produced due to a magnetic force on the electrons in the conductor. In the rest frame of the conductor, on the other hand, the magnet will be moving and the conductor stationary. Classical electromagnetic theory predicts that precisely the same microscopic eddy currents will be produced, but they will be due to an electric force.
Covariant formulation in vacuum
The laws and mathematical objects in classical electromagnetism can be written in a form which is manifestly covariant. Here, this is only done so for vacuum (or for the microscopic Maxwell equations, not using macroscopic descriptions of materials such as electric permittivity), and uses SI units.
This section uses Einstein notation, including Einstein summation convention. See also Ricci calculus for a summary of tensor index notations, and raising and lowering indices for definition of superscript and subscript indices, and how to switch between them. The Minkowski metric tensor η here has metric signature .
Field tensor and 4-current
The above relativistic transformations suggest the electric and magnetic fields are coupled together, in a mathematical object with 6 components: an antisymmetric second-rank tensor, or a bivector. This is called the electromagnetic field tensor, usually written as Fμν. In matrix form:
where c is the speed of light; in natural units, c = 1.
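For concreteness, one common explicit form of the field tensor is shown below. This is an illustrative convention choice (contravariant components, metric signature (+, −, −, −), SI units); other texts, and possibly other parts of this article, may use a convention that differs by signs.

$$F^{\mu\nu} =
\begin{pmatrix}
0 & -E_{x}/c & -E_{y}/c & -E_{z}/c \\
E_{x}/c & 0 & -B_{z} & B_{y} \\
E_{y}/c & B_{z} & 0 & -B_{x} \\
E_{z}/c & -B_{y} & B_{x} & 0
\end{pmatrix}$$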
There is another way of merging the electric and magnetic fields into an antisymmetric tensor, by replacing and , to get its Hodge dual Gμν.
In the context of special relativity, both of these transform according to the Lorentz transformation as

$$F^{\alpha'\beta'} = \Lambda^{\alpha'}{}_{\mu}\,\Lambda^{\beta'}{}_{\nu}\,F^{\mu\nu},$$

where Λα′ν is the Lorentz transformation tensor for a change from one reference frame to another. The same tensor is used twice in the summation.
The charge and current density, the sources of the fields, also combine into the four-vector

$$J^{\alpha} = \left(c\rho,\; J_{x},\; J_{y},\; J_{z}\right),$$

called the four-current.
Maxwell's equations in tensor form
Using these tensors, Maxwell's equations reduce to:

$$\partial_{\alpha} F^{\alpha\beta} = \mu_{0} J^{\beta}, \qquad \partial_{\alpha} G^{\alpha\beta} = 0,$$

where the partial derivatives may be written in various ways, see 4-gradient. The first equation listed above corresponds to both Gauss's law (for β = 0) and the Ampère–Maxwell law (for β = 1, 2, 3). The second equation corresponds to the two remaining equations, Gauss's law for magnetism (for β = 0) and Faraday's law (for β = 1, 2, 3).
These tensor equations are manifestly covariant, meaning they can be seen to be covariant by the index positions. This short form of Maxwell's equations illustrates an idea shared amongst some physicists, namely that the laws of physics take on a simpler form when written using tensors.
By lowering the indices on Fαβ to obtain Fαβ:
the second equation can be written in terms of Fαβ as:
where εδαβγ is the contravariant Levi-Civita symbol. Notice the cyclic permutation of indices in this equation: from each term to the next.
Another covariant electromagnetic object is the electromagnetic stress-energy tensor, a covariant rank-2 tensor which includes the Poynting vector, Maxwell stress tensor, and electromagnetic energy density.
4-potential
The EM field tensor can also be written
where
is the four-potential and
is the four-position.
Using the 4-potential in the Lorenz gauge, an alternative manifestly-covariant formulation can be found in a single equation (a generalization of an equation due to Bernhard Riemann by Arnold Sommerfeld, known as the Riemann–Sommerfeld equation, or the covariant form of the Maxwell equations):
where is the d'Alembertian operator, or four-Laplacian.
See also
Mathematical descriptions of the electromagnetic field
Relativistic electromagnetism
References
Electromagnetism
Special relativity | Classical electromagnetism and special relativity | [
"Physics"
] | 1,932 | [
"Electromagnetism",
"Physical phenomena",
"Special relativity",
"Fundamental interactions",
"Theory of relativity"
] |
2,404,348 | https://en.wikipedia.org/wiki/Magnetization | In classical electromagnetism, magnetization is the vector field that expresses the density of permanent or induced magnetic dipole moments in a magnetic material. Accordingly, physicists and engineers usually define magnetization as the quantity of magnetic moment per unit volume.
It is represented by a pseudovector M. Magnetization can be compared to electric polarization, which is the measure of the corresponding response of a material to an electric field in electrostatics.
Magnetization also describes how a material responds to an applied magnetic field as well as the way the material changes the magnetic field, and can be used to calculate the forces that result from those interactions.
The origin of the magnetic moments responsible for magnetization can be either microscopic electric currents resulting from the motion of electrons in atoms, or the spin of the electrons or the nuclei. Net magnetization results from the response of a material to an external magnetic field.
Paramagnetic materials have a weak induced magnetization in a magnetic field, which disappears when the magnetic field is removed. Ferromagnetic and ferrimagnetic materials have strong magnetization in a magnetic field, and can be magnetized to have magnetization in the absence of an external field, becoming a permanent magnet. Magnetization is not necessarily uniform within a material, but may vary between different points.
Definition
The magnetization field or M-field can be defined according to the following equation:

$$\mathbf{M} = \frac{\mathrm{d}\mathbf{m}}{\mathrm{d}V},$$

where dm is the elementary magnetic moment and dV is the volume element; in other words, the M-field is the distribution of magnetic moments in the region or manifold concerned. This is better illustrated through the following relation:

$$\mathbf{m} = \iiint \mathbf{M}\,\mathrm{d}V,$$

where m is an ordinary magnetic moment and the triple integral denotes integration over a volume. This makes the M-field completely analogous to the electric polarisation field, or P-field, used to determine the electric dipole moment p generated by a similar region or manifold with such a polarization:

$$\mathbf{p} = \iiint \mathbf{P}\,\mathrm{d}V,$$

where dp = P dV is the elementary electric dipole moment.
Those definitions of P and M as a "moments per unit volume" are widely adopted, though in some cases they can lead to ambiguities and paradoxes.
The M-field is measured in amperes per meter (A/m) in SI units.
In Maxwell's equations
The behavior of magnetic fields (B, H), electric fields (E, D), charge density (ρ), and current density (J) is described by Maxwell's equations. The role of the magnetization is described below.
Relations between B, H, and M
The magnetization defines the auxiliary magnetic field H as

$$\mathbf{H} = \frac{\mathbf{B}}{\mu_{0}} - \mathbf{M} \qquad \text{(SI)}$$

$$\mathbf{H} = \mathbf{B} - 4\pi\mathbf{M} \qquad \text{(Gaussian system)}$$
which is convenient for various calculations. The vacuum permeability μ0 is, approximately, 4π × 10−7 H/m (about 1.257 × 10−6 T·m/A).
A relation between M and H exists in many materials. In diamagnets and paramagnets, the relation is usually linear:

$$\mathbf{M} = \chi\mathbf{H},$$

where χ is called the volume magnetic susceptibility, and μ = μ0(1 + χ) is called the magnetic permeability of the material. The magnetic potential energy per unit volume (i.e. magnetic energy density) of the paramagnet (or diamagnet) in the magnetic field is:
the negative gradient of which is the magnetic force on the paramagnet (or diamagnet) per unit volume (i.e. force density).
In diamagnets (χ < 0) and paramagnets (χ > 0), usually |χ| ≪ 1, and therefore B ≈ μ0H.
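As a rough numerical sketch of the linear relation above (the susceptibility used here is only an illustrative order of magnitude for a weak paramagnet, not a tabulated constant):

```python
import math

MU0 = 4e-7 * math.pi        # vacuum permeability, H/m

chi = 2.2e-5                # assumed volume susceptibility of a weak paramagnet
H = 1.0e4                   # applied H-field, A/m

M = chi * H                 # linear response M = chi * H
B = MU0 * (H + M)           # B = mu0 (H + M)

print(f"M = {M:.3f} A/m")
print(f"B = {B:.6e} T")     # essentially mu0 * H, since |chi| << 1
```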
In ferromagnets there is no one-to-one correspondence between M and H because of magnetic hysteresis.
Magnetic polarization
Alternatively to the magnetization, one can define the magnetic polarization, I (often the symbol J is used, not to be confused with current density):

$$\mathbf{I} = \mu_{0}\mathbf{M} \qquad \text{(SI)}.$$

This is by direct analogy to the electric polarization P. The magnetic polarization thus differs from the magnetization by a factor of μ0.
Whereas magnetization is given with the unit ampere/meter, the magnetic polarization is given with the unit tesla.
Magnetization current
The magnetization M makes a contribution to the current density J, known as the magnetization current (or bound volume current):

$$\mathbf{J}_{\mathrm{m}} = \nabla\times\mathbf{M},$$

and for the bound surface current:

$$\mathbf{K}_{\mathrm{m}} = \mathbf{M}\times\hat{\mathbf{n}},$$

so that the total current density that enters Maxwell's equations is given by

$$\mathbf{J} = \mathbf{J}_{\mathrm{f}} + \nabla\times\mathbf{M} + \frac{\partial\mathbf{P}}{\partial t},$$

where Jf is the electric current density of free charges (also called the free current), the second term is the contribution from the magnetization, and the last term is related to the electric polarization P.
Magnetostatics
In the absence of free electric currents and time-dependent effects, Maxwell's equations describing the magnetic quantities reduce to

$$\nabla\cdot\mathbf{H} = -\nabla\cdot\mathbf{M}, \qquad \nabla\times\mathbf{H} = 0.$$

These equations can be solved in analogy with electrostatic problems where

$$\nabla\cdot\mathbf{E} = \frac{\rho}{\varepsilon_{0}}, \qquad \nabla\times\mathbf{E} = 0.$$
In this sense −∇⋅M plays the role of a fictitious "magnetic charge density" analogous to the electric charge density ρ; (see also demagnetizing field).
Dynamics
The time-dependent behavior of magnetization becomes important when considering nanoscale and nanosecond timescale magnetization. Rather than simply aligning with an applied field, the individual magnetic moments in a material begin to precess around the applied field and come into alignment through relaxation as energy is transferred into the lattice.
Reversal
Magnetization reversal, also known as switching, refers to the process that leads to a 180° (arc) re-orientation of the magnetization vector with respect to its initial direction, from one stable orientation to the opposite one. Technologically, this is one of the most important processes in magnetism that is linked to the magnetic data storage process such as used in modern hard disk drives. As it is known today, there are only a few possible ways to reverse the magnetization of a metallic magnet:
an applied magnetic field
spin injection via a beam of particles with spin
magnetization reversal by circularly polarized light; i.e., incident electromagnetic radiation that is circularly polarized
Demagnetization
Demagnetization is the reduction or elimination of magnetization. One way to do this is to heat the object above its Curie temperature, where thermal fluctuations have enough energy to overcome exchange interactions, the source of ferromagnetic order, and destroy that order. Another way is to pull it out of an electric coil with alternating current running through it, giving rise to fields that oppose the magnetization.
One application of demagnetization is to eliminate unwanted magnetic fields. For example, magnetic fields can interfere with electronic devices such as cell phones or computers, and with machining by making cuttings cling to their parent.
See also
Magnetometer
Orbital magnetization
References
Electric and magnetic fields in matter | Magnetization | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,325 | [
"Condensed matter physics",
"Electric and magnetic fields in matter",
"Materials science"
] |
2,404,544 | https://en.wikipedia.org/wiki/Poincar%C3%A9%20recurrence%20theorem | In mathematics and physics, the Poincaré recurrence theorem states that certain dynamical systems will, after a sufficiently long but finite time, return to a state arbitrarily close to (for continuous state systems), or exactly the same as (for discrete state systems), their initial state.
The Poincaré recurrence time is the length of time elapsed until the recurrence. This time may vary greatly depending on the exact initial state and required degree of closeness. The result applies to isolated mechanical systems subject to some constraints, e.g., all particles must be bound to a finite volume. The theorem is commonly discussed in the context of ergodic theory, dynamical systems and statistical mechanics. Systems to which the Poincaré recurrence theorem applies are called conservative systems.
The theorem is named after Henri Poincaré, who discussed it in 1890. A proof was presented by Constantin Carathéodory using measure theory in 1919.
Precise formulation
Any dynamical system defined by an ordinary differential equation determines a flow map f t mapping phase space on itself. The system is said to be volume-preserving if the volume of a set in phase space is invariant under the flow. For instance, all Hamiltonian systems are volume-preserving because of Liouville's theorem. The theorem is then: If a flow preserves volume and has only bounded orbits, then, for each open set, any orbit that intersects this open set intersects it infinitely often.
Discussion of proof
The proof, speaking qualitatively, hinges on two premises:
A finite upper bound can be set on the total potentially accessible phase space volume. For a mechanical system, this bound can be provided by requiring that the system is contained in a bounded physical region of space (so that it cannot, for example, eject particles that never return) – combined with the conservation of energy, this locks the system into a finite region in phase space.
The phase volume of a finite element under dynamics is conserved (for a mechanical system, this is ensured by Liouville's theorem).
Imagine any finite starting volume of the phase space and follow its path under the dynamics of the system. The volume evolves through a "phase tube" in the phase space, keeping its size constant. Assuming a finite phase space, after some number of steps the phase tube must intersect itself. This means that at least a finite fraction of the starting volume is recurring.
Now, consider the size of the non-returning portion of the starting phase volume – that portion that never returns to the starting volume. Using the principle just discussed in the last paragraph, if the non-returning portion had a finite (nonzero) volume, the same argument would show that a finite part of it must return to it after some number of steps; but any piece that returns to this subset of the starting volume has, in particular, returned to the starting volume, contradicting the assumption that this portion never returns. Thus, the non-returning portion of the starting volume must be negligible (a null set), i.e. essentially all of the starting volume is recurring after some number of steps.
The theorem does not comment on certain aspects of recurrence which this proof cannot guarantee:
There may be some special phases that never return to the starting phase volume, or that only return to the starting volume a finite number of times then never return again. These however are extremely "rare", making up an infinitesimal part of any starting volume.
Not all parts of the phase volume need to return at the same time. Some will "miss" the starting volume on the first pass, only to make their return at a later time.
Nothing prevents the phase tube from returning completely to its starting volume before all the possible phase volume is exhausted. A trivial example of this is the harmonic oscillator. Systems that do cover all accessible phase volume are called ergodic (this of course depends on the definition of "accessible volume").
What can be said is that for "almost any" starting phase, a system will eventually return arbitrarily close to that starting phase. The recurrence time depends on the required degree of closeness (the size of the phase volume). To achieve greater accuracy of recurrence, we need to take smaller initial volume, which means longer recurrence time.
For a given phase in a volume, the recurrence is not necessarily a periodic recurrence. The second recurrence time does not need to be double the first recurrence time.
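The discrete-state version of the statement can be illustrated with a toy computation. The sketch below (an illustrative example, not part of the theorem's proof) iterates Arnold's cat map – an invertible, area-preserving map listed under "See also" – on a finite integer lattice, where every state must recur exactly after finitely many steps.

```python
def cat_map(point, n):
    """Arnold's cat map on an n-by-n integer lattice (invertible and
    area-preserving, so every orbit on the finite lattice is periodic)."""
    x, y = point
    return ((2 * x + y) % n, (x + y) % n)

def recurrence_time(point, n):
    """Number of iterations until the point first returns to itself."""
    current = cat_map(point, n)
    steps = 1
    while current != point:
        current = cat_map(current, n)
        steps += 1
    return steps

n = 101
for start in [(1, 0), (5, 7), (42, 99)]:
    print(start, "recurs after", recurrence_time(start, n), "steps")
```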
Formal statement
Let (X, Σ, μ) be a finite measure space and let f : X → X be a measure-preserving transformation. Below are two alternative statements of the theorem.
Theorem 1
For any E ∈ Σ, the set of those points x of E for which there exists N such that f^n(x) ∉ E for all n > N has zero measure.

In other words, almost every point of E returns to E. In fact, almost every point returns infinitely often; i.e.

$$\mu\left(\left\{\, x \in E : \exists N \text{ such that } f^{n}(x) \notin E \ \text{for all } n > N \,\right\}\right) = 0.$$
Theorem 2
The following is a topological version of this theorem:
If X is a second-countable Hausdorff space and Σ contains the Borel sigma-algebra, then the set of recurrent points of f has full measure. That is, almost every point is recurrent.
More generally, the theorem applies to conservative systems, and not just to measure-preserving dynamical systems. Roughly speaking, one can say that conservative systems are precisely those to which the recurrence theorem applies.
Quantum mechanical version
For time-independent quantum mechanical systems with discrete energy eigenstates, a similar theorem holds. For every ε > 0 and T0 > 0 there exists a time T larger than T0, such that ‖|ψ(T)⟩ − |ψ(0)⟩‖ < ε, where |ψ(t)⟩ denotes the state vector of the system at time t.
The essential elements of the proof are as follows. The system evolves in time according to:

$$|\psi(t)\rangle = \sum_{n} c_{n}\, e^{-i E_{n} t}\, |\phi_{n}\rangle,$$

where the En are the energy eigenvalues (we use natural units, so ħ = 1), and the |φn⟩ are the energy eigenstates. The squared norm of the difference of the state vector at time T and time zero can be written as:

$$\left\| |\psi(T)\rangle - |\psi(0)\rangle \right\|^{2} = 2\sum_{n} |c_{n}|^{2}\left(1 - \cos(E_{n} T)\right).$$
We can truncate the summation at some n = N independent of T, because
which can be made arbitrarily small by increasing N, as the summation Σn |cn|², being the squared norm of the initial state, converges to 1.
The finite sum
can be made arbitrarily small for specific choices of the time T, according to the following construction. Choose an arbitrary , and then choose T such that there are integers that satisfies
,
for all numbers . For this specific choice of T,
As such, we have:
.
The state vector thus returns arbitrarily close to the initial state .
See also
Arnold's cat map
Ergodic hypothesis
Quantum revival
Recurrence period density entropy
Recurrence plot
Wandering set
References
Further reading
External links
Recent version of the original site.
Ergodic theory
Recurrence theorem
Statistical mechanics
Theorems in dynamical systems | Poincaré recurrence theorem | [
"Physics",
"Mathematics"
] | 1,370 | [
"Theorems in dynamical systems",
"Ergodic theory",
"Mathematical problems",
"Statistical mechanics",
"Mathematical theorems",
"Dynamical systems"
] |
2,405,440 | https://en.wikipedia.org/wiki/Enthalpy%20of%20atomization | In chemistry, the enthalpy of atomization (also atomisation in British English) is the enthalpy change that accompanies the total separation of all atoms in a chemical substance, either an element or a compound. This is often represented by the symbol ΔatH or ΔHat. All bonds in the compound are broken in atomization and none are formed, so enthalpies of atomization are always positive. The associated standard enthalpy is known as the standard enthalpy of atomization, ΔatH⦵/(kJ mol−1), at 298.15 K (or 25 degrees Celsius) and 100 kPa.
Definition
Enthalpy of atomization is the amount of enthalpy change when bonds of the compound are broken and the component atoms are separated into single (monatomic) gaseous atoms.
Enthalpy of atomization is denoted by the symbol ΔHat. The enthalpy change of atomization of gaseous H2O is, for example, the sum of the HO–H and H–OH bond dissociation enthalpies.
The enthalpy of atomization of an elemental solid is exactly the same as the enthalpy of sublimation for any elemental solid that becomes a monatomic gas upon evaporation.
When a diatomic element is converted to gaseous atoms, only half a mole of molecules will be needed, as the standard enthalpy change is based purely on the production of one mole of gaseous atoms.
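As a back-of-the-envelope illustration of this point, take chlorine with a bond-dissociation enthalpy of roughly 242 kJ mol−1 (an approximate textbook value, used here only for the arithmetic):

$$\tfrac{1}{2}\,\mathrm{Cl_{2}(g)} \longrightarrow \mathrm{Cl(g)}, \qquad \Delta H_{\mathrm{at}}^{\ominus} \approx \tfrac{1}{2}\times 242\ \mathrm{kJ\,mol^{-1}} \approx 121\ \mathrm{kJ\,mol^{-1}}.$$

The factor of one half appears because the standard value is defined per mole of gaseous atoms produced, not per mole of Cl2 molecules dissociated.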
See also
Ionization energy
Electron gain enthalpy
References
Enthalpy | Enthalpy of atomization | [
"Physics",
"Chemistry",
"Mathematics"
] | 314 | [
"Thermodynamics stubs",
"Thermodynamic properties",
"Physical quantities",
"Quantity",
"Enthalpy",
"Thermodynamics",
"Physical chemistry stubs"
] |
2,406,183 | https://en.wikipedia.org/wiki/Data%20buffer | In computer science, a data buffer (or just buffer) is a region of memory used to store data temporarily while it is being moved from one place to another. Typically, the data is stored in a buffer as it is retrieved from an input device (such as a microphone) or just before it is sent to an output device (such as speakers); however, a buffer may be used when data is moved between processes within a computer, comparable to buffers in telecommunication. Buffers can be implemented in a fixed memory location in hardware or by using a virtual data buffer in software that points at a location in the physical memory.
In all cases, the data stored in a data buffer is stored on a physical storage medium. The majority of buffers are implemented in software, which typically use RAM to store temporary data because of its much faster access time when compared with hard disk drives. Buffers are typically used when there is a difference between the rate at which data is received and the rate at which it can be processed, or in the case that these rates are variable, for example in a printer spooler or in online video streaming. In a distributed computing environment, data buffers are often implemented in the form of burst buffers, which provides distributed buffering services.
A buffer often adjusts timing by implementing a queue (or FIFO) algorithm in memory, simultaneously writing data into the queue at one rate and reading it at another rate.
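A minimal sketch of such a FIFO buffer is shown below (illustrative only; the capacity, the overflow and underrun behaviour, and the class name are arbitrary choices rather than any standard API):

```python
from collections import deque

class FifoBuffer:
    """A bounded first-in, first-out buffer: a producer writes at one rate
    and a consumer reads at another, with data leaving in arrival order."""

    def __init__(self, capacity):
        self.items = deque()
        self.capacity = capacity

    def write(self, item):
        if len(self.items) >= self.capacity:
            raise OverflowError("buffer full")   # a real system might block or drop
        self.items.append(item)

    def read(self):
        if not self.items:
            raise BufferError("buffer empty")    # i.e. a buffer underrun
        return self.items.popleft()

buf = FifoBuffer(capacity=4)
for sample in (10, 20, 30):       # bursty producer
    buf.write(sample)
print(buf.read(), buf.read())     # slower consumer still sees 10, then 20
```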
Applications
Buffers are often used in conjunction with I/O to hardware, such as disk drives, sending or receiving data to or from a network, or playing sound on a speaker. A line for a rollercoaster in an amusement park shares many similarities with a buffer. People who want to ride the coaster arrive at an unknown and often variable pace, but the roller coaster loads people in bursts (as a coaster arrives and is loaded). The queue area acts as a buffer—a temporary space where those wishing to ride wait until the ride is available. Buffers are usually used in a FIFO (first in, first out) manner, outputting data in the order it arrived.
Buffers can increase application performance by allowing synchronous operations such as file reads or writes to complete quickly instead of blocking while waiting for hardware interrupts to access a physical disk subsystem; instead, an operating system can immediately return a successful result from an API call, allowing an application to continue processing while the kernel completes the disk operation in the background. Further benefits can be achieved if the application is reading or writing small blocks of data that do not correspond to the block size of the disk subsystem, which allows a buffer to be used to aggregate many smaller read or write operations into block sizes that are more efficient for the disk subsystem, or in the case of a read, sometimes to completely avoid having to physically access a disk.
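The aggregation idea can be sketched as follows; this is a simplified illustration, with the block size and class name chosen arbitrarily, not a depiction of any particular operating system's buffering layer:

```python
class BlockWriter:
    """Aggregate many small writes into fixed-size blocks before handing
    them to the underlying device, so the device sees fewer, larger writes."""

    def __init__(self, device, block_size=4096):
        self.device = device          # any object with a write(bytes) method
        self.block_size = block_size
        self.pending = bytearray()

    def write(self, data: bytes):
        self.pending.extend(data)
        # Flush only in whole block-sized chunks.
        while len(self.pending) >= self.block_size:
            self.device.write(bytes(self.pending[:self.block_size]))
            del self.pending[:self.block_size]

    def flush(self):
        # Push out whatever remains, e.g. when the stream is closed.
        if self.pending:
            self.device.write(bytes(self.pending))
            self.pending.clear()
```

For example, wrapping an open file object in this class turns thousands of few-byte writes into a handful of block-sized writes.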
Telecommunication buffer
A buffer routine or storage medium used in telecommunications compensates for a difference in rate of flow of data or time of occurrence of events when data is transferred from one device to another.
Buffers are used for many purposes, including:
Interconnecting two digital circuits operating at different rates.
Holding data for later use.
Allowing timing corrections to be made on a data stream.
Collecting binary data bits into groups that can then be operated on as a unit.
Delaying the transit time of a signal in order to allow other operations to occur.
Examples
The BUFFERS command/statement in CONFIG.SYS of DOS.
The buffer between a serial port (UART) and a modem. The COM port speed may be 38400 bit/s while the modem may have only a 14400 bit/s carrier.
The integrated disk buffer on a hard disk drive, solid state drive or BD/DVD/CD drive.
The integrated SRAM buffer on an Ethernet adapter.
The framebuffer on a video card.
History
An early mention of a print buffer is the "Outscriber" devised by image processing pioneer Russel A. Kirsch for the SEAC computer in 1952:
One of the most important problems in the design of automatic digital computers is that of getting the calculated results out of the machine rapidly enough to avoid delaying the further progress of the calculations. In many of the problems to which a general-purpose computer is applied the amount of output data is relatively big — so big that serious inefficiency would result from forcing the computer to wait for these data to be typed on existing printing devices. This difficulty has been solved in the SEAC by providing magnetic recording devices as output units. These devices are able to receive information from the machine at rates up to 100 times as fast as an electric typewriter can be operated. Thus, better efficiency is achieved in recording the output data; transcription can be made later from the magnetic recording device to a printing device without tying up the main computer.
See also
Buffer overflow
Buffer underrun
Circular buffer
Disk buffer
Streaming media
Frame buffer for use in graphical display
Double buffering and Triple buffering for techniques mainly in graphics
Depth buffer, Stencil buffer, for different parts of image information
Variable length buffer
Optical buffer
MissingNo., the result of buffer data not being cleared properly in Pokémon Red and Blue
UART buffer
ENOBUFS, POSIX error caused by lack of memory in buffers
Write buffer, a type of memory buffer
Zero-copy
512k day
References
Synchronization
Computer memory | Data buffer | [
"Engineering"
] | 1,108 | [
"Telecommunications engineering",
"Synchronization"
] |
2,407,249 | https://en.wikipedia.org/wiki/Geitonogamy | Geitonogamy (from Greek geiton (γείτων) = neighbor + gamein (γαμεῖν) = to marry) is a type of self-pollination. Geitonogamous pollination is sometimes distinguished from the fertilizations that can result from it, geitonogamy. If a plant is self-incompatible, geitonogamy can reduce seed production.
Geitonogamy is when pollen is exported using a vector (pollinator or wind) out of one flower but only to another flower on the same plant. It is a form of self-fertilization.
In flowering plants, pollen is transferred from a flower to another flower on the same plant, and in animal pollinated systems this is accomplished by a pollinator visiting multiple flowers on the same plant. Geitonogamy is also possible within species that are wind-pollinated, and may actually be a quite common source of self-fertilized seeds in self-compatible species. It also occurs in monoecious gymnosperms. Although geitonogamy is functionally cross-pollination involving a pollinating agent, genetically it is similar to autogamy since the pollen grains come from the same plant.
Monoecious plants like maize show geitonogamy. Geitonogamy is not possible for strictly dioecious plants, namely those with separate male and female flowers on different plants.
See also
Plant reproductive morphology
Self-fertilization
Autogamy Depression
References
Pollination
Plant reproduction | Geitonogamy | [
"Biology"
] | 322 | [
"Behavior",
"Plant reproduction",
"Plants",
"Reproduction"
] |
2,408,425 | https://en.wikipedia.org/wiki/Hypoxia-inducible%20factor | Hypoxia-inducible factors (HIFs) are transcription factors that respond to decreases in available oxygen in the cellular environment, or hypoxia. They also respond to instances of pseudohypoxia, such as thiamine deficiency. Both hypoxia and pseudohypoxia leads to impairment of adenosine triphosphate (ATP) production by the mitochondria.
Discovery
The HIF transcriptional complex was discovered in 1995 by Gregg L. Semenza and postdoctoral fellow Guang Wang. In 2016, William Kaelin Jr., Peter J. Ratcliffe and Gregg L. Semenza were presented the Lasker Award for their work in elucidating the role of HIF-1 in oxygen sensing and its role in surviving low oxygen conditions. In 2019, the same three individuals were jointly awarded the Nobel Prize in Physiology or Medicine for their work in elucidating how HIF senses and adapts cellular response to oxygen availability.
Structure
Oxygen-breathing species express the highly conserved transcriptional complex HIF-1, which is a heterodimer composed of an alpha and a beta subunit, the latter being a constitutively-expressed aryl hydrocarbon receptor nuclear translocator (ARNT). HIF-1 belongs to the PER-ARNT-SIM (PAS) subfamily of the basic helix-loop-helix (bHLH) family of transcription factors. The alpha and beta subunit are similar in structure and both contain the following domains:
N-terminus – a bHLH domain for DNA binding
central region – Per-ARNT-Sim (PAS) domain, which facilitates heterodimerization
C-terminus – recruits transcriptional coregulatory proteins
Members
The following are members of the human HIF family:
Function
HIF1α expression in haematopoietic stem cells helps explain the quiescent nature of these stem cells: they are maintained at a low metabolic rate so as to preserve their potency over long periods of an organism's life cycle.
The HIF signaling cascade mediates the effects of hypoxia, the state of low oxygen concentration, on the cell. Hypoxia often keeps cells from differentiating. However, hypoxia promotes the formation of blood vessels, and is important for the formation of a vascular system in embryos and tumors. The hypoxia in wounds also promotes the migration of keratinocytes and the restoration of the epithelium. It is therefore not surprising that HIF-1 modulation was identified as a promising treatment paradigm in wound healing.
In general, HIFs are vital to development. In mammals, deletion of the HIF-1 genes results in perinatal death. HIF-1 has been shown to be vital to chondrocyte survival, allowing the cells to adapt to low-oxygen conditions within the growth plates of bones. HIF plays a central role in the regulation of human metabolism.
Mechanism
The alpha subunits of HIF are hydroxylated at conserved proline residues by HIF prolyl-hydroxylases, allowing their recognition and ubiquitination by the VHL E3 ubiquitin ligase, which labels them for rapid degradation by the proteasome. This occurs only in normoxic conditions. In hypoxic conditions, HIF prolyl-hydroxylase is inhibited, since it utilizes oxygen as a cosubstrate.
Inhibition of electron transfer in the succinate dehydrogenase complex due to mutations in the SDHB or SDHD genes can cause a build-up of succinate that inhibits HIF prolyl-hydroxylase, stabilizing HIF-1α. This is termed pseudohypoxia.
HIF-1, when stabilized by hypoxic conditions, upregulates several genes to promote survival in low-oxygen conditions. These include glycolysis enzymes, which allow ATP synthesis in an oxygen-independent manner, and vascular endothelial growth factor (VEGF), which promotes angiogenesis. HIF-1 acts by binding to hypoxia-responsive elements (HREs) in promoters that contain the sequence 5'-RCGTG-3' (where R is a purine, either A or G). Studies demonstrate that hypoxia modulates histone methylation and reprograms chromatin. This paper was published back-to-back with that of William Kaelin Jr., a winner of the 2019 Nobel Prize in Physiology or Medicine. This work was highlighted in an independent editorial.
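Searching a promoter sequence for this core motif is straightforward; the snippet below uses a made-up promoter fragment purely to illustrate the 5'-RCGTG-3' pattern, not a real HIF target sequence.

```python
import re

# Hypothetical promoter fragment, for illustration only.
promoter = "TTGACGTGCCAAGCGTGTTTTACGTGAA"

# Core HRE consensus 5'-RCGTG-3', where R is a purine (A or G).
hre = re.compile(r"[AG]CGTG")

for match in hre.finditer(promoter):
    print(f"HRE candidate '{match.group()}' at position {match.start()}")
```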
It has been shown that muscle A kinase–anchoring protein (mAKAP) organized E3 ubiquitin ligases, affecting stability and positioning of HIF-1 inside its action site in the nucleus. Depletion of mAKAP or disruption of its targeting to the perinuclear (in cardiomyocytes) region altered the stability of HIF-1 and transcriptional activation of genes associated with hypoxia. Thus, "compartmentalization" of oxygen-sensitive signaling components may influence the hypoxic response.
The advanced knowledge of the molecular regulatory mechanisms of HIF1 activity under hypoxic conditions contrast sharply with the paucity of information on the mechanistic and functional aspects governing NF-κB-mediated HIF1 regulation under normoxic conditions. However, HIF-1α stabilization is also found in non-hypoxic conditions through an unknown mechanism. It was shown that NF-κB (nuclear factor κB) is a direct modulator of HIF-1α expression in the presence of normal oxygen pressure. siRNA (small interfering RNA) studies for individual NF-κB members revealed differential effects on HIF-1α mRNA levels, indicating that NF-κB can regulate basal HIF-1α expression. Finally, it was shown that, when endogenous NF-κB is induced by TNFα (tumour necrosis factor α) treatment, HIF-1α levels also change in an NF-κB-dependent manner. HIF-1 and HIF-2 have different physiological roles. HIF-2 regulates erythropoietin production in adult life.
Repair, regeneration and rejuvenation
In normal circumstances after injury HIF-1a is degraded by prolyl hydroxylases (PHDs). In June 2015, scientists found that the continued up-regulation of HIF-1a via PHD inhibitors regenerates lost or damaged tissue in mammals that have a repair response; and the continued down-regulation of Hif-1a results in healing with a scarring response in mammals with a previous regenerative response to the loss of tissue. The act of regulating HIF-1a can either turn off, or turn on the key process of mammalian regeneration. One such regenerative process in which HIF1A is involved is skin healing. Researchers at the Stanford University School of Medicine demonstrated that HIF1A activation was able to prevent and treat chronic wounds in diabetic and aged mice. Not only did the wounds in the mice heal more quickly, but the quality of the new skin was even better than the original. Additionally the regenerative effect of HIF-1A modulation on aged skin cells was described and a rejuvenating effect on aged facial skin was demonstrated in patients. HIF modulation has also been linked to a beneficial effect on hair loss. The biotech company Tomorrowlabs GmbH, founded in Vienna in 2016 by the physician Dominik Duscher and pharmacologist Dominik Thor, makes use of this mechanism. Based on the patent-pending HSF ("HIF strengthening factor") active ingredient, products have been developed that are supposed to promote skin and hair regeneration.
As a therapeutic target
Anemia
Several drugs that act as selective HIF prolyl-hydroxylase inhibitors have been developed. The most notable compounds are: Roxadustat (FG-4592); Vadadustat (AKB-6548), Daprodustat (GSK1278863), Desidustat (ZYAN-1), and Molidustat (Bay 85-3934), all of which are intended as orally acting drugs for the treatment of anemia. Other significant compounds from this family, which are used in research but have not been developed for medical use in humans, include MK-8617, YC-1, IOX-2, 2-methoxyestradiol, GN-44028, AKB-4924, Bay 87-2243, FG-2216 and FG-4497. By inhibiting prolyl-hydroxylase enzyme, the stability of HIF-2α in the kidney is increased, which results in an increase in endogenous production of erythropoietin. Both FibroGen compounds made it through to Phase II clinical trials, but these were suspended temporarily in May 2007 following the death of a trial participant taking FG-2216 from fulminant hepatitis (liver failure), however it is unclear whether this death was actually caused by FG-2216. The hold on further testing of FG-4592 was lifted in early 2008, after the FDA reviewed and approved a thorough response from FibroGen. Roxadustat, vadadustat, daprodustat and molidustat have now all progressed through to Phase III clinical trials for treatment of renal anemia.
Inflammation and cancer
In other scenarios and in contrast to the therapy outlined above, research suggests that HIF induction in normoxia is likely to have serious consequences in disease settings with a chronic inflammatory component. It has also been shown that chronic inflammation is self-perpetuating and that it distorts the microenvironment as a result of aberrantly active transcription factors. As a consequence, alterations in growth factor, chemokine, cytokine, and ROS balance occur within the cellular milieu that in turn provide the axis of growth and survival needed for de novo development of cancer and metastasis. These results have numerous implications for a number of pathologies where NF-κB and HIF-1 are deregulated, including rheumatoid arthritis and cancer. Therefore, it is thought that understanding the cross-talk between these two key transcription factors, NF-κB and HIF, will greatly enhance the process of drug development.
HIF activity is involved in angiogenesis required for cancer tumor growth, so HIF inhibitors such as phenethyl isothiocyanate and Acriflavine are (since 2006) under investigation for anti-cancer effects.
Neurology
Research conducted on mice suggests that stabilizing HIF using an HIF prolyl-hydroxylase inhibitor enhances hippocampal memory, likely by increasing erythropoietin expression. HIF pathway activators such as ML-228 may have neuroprotective effects and are of interest as potential treatments for stroke and spinal cord injury.
von Hippel–Lindau disease-associated renal cell carcinoma
Belzutifan is an hypoxia-inducible factor-2α inhibitor under investigation for the treatment of von Hippel–Lindau disease-associated renal cell carcinoma.
References
External links
PDBe-KB provides an overview of all the structure information available in the PDB for Human Hypoxia-inducible factor 1-alpha
PDBe-KB provides an overview of all the structure information available in the PDB for Human Aryl hydrocarbon receptor nuclear translocator
PDBe-KB provides an overview of all the structure information available in the PDB for Human Endothelial PAS domain-containing protein 1
PDBe-KB provides an overview of all the structure information available in the PDB for Human Hypoxia-inducible factor 3-alpha
short scientific animation visualises the crystal structure of the Heterodimeric HIF-1a:ARNT Complex with HRE DNA
Developmental genes and proteins
Transcription factors
World Anti-Doping Agency prohibited substances | Hypoxia-inducible factor | [
"Chemistry",
"Biology"
] | 2,532 | [
"Transcription factors",
"Gene expression",
"Signal transduction",
"Developmental genes and proteins",
"Induced stem cells"
] |
2,408,688 | https://en.wikipedia.org/wiki/Nucleophilic%20aromatic%20substitution | A nucleophilic aromatic substitution (SNAr) is a substitution reaction in organic chemistry in which the nucleophile displaces a good leaving group, such as a halide, on an aromatic ring. Aromatic rings are usually nucleophilic, but some aromatic compounds do undergo nucleophilic substitution. Just as normally nucleophilic alkenes can be made to undergo conjugate substitution if they carry electron-withdrawing substituents, so normally nucleophilic aromatic rings also become electrophilic if they have the right substituents. This reaction differs from a common SN2 reaction, because it happens at a trigonal carbon atom (sp2 hybridization). An SN2 mechanism cannot operate here because of the steric hindrance of the benzene ring: to attack the C atom, the nucleophile would have to approach in line with the C–LG (leaving group) bond from the back, which is where the benzene ring lies. This follows the general rule that SN2 reactions occur only at a tetrahedral carbon atom.
The SN1 mechanism is possible but very unfavourable unless the leaving group is an exceptionally good one. It would involve the unaided loss of the leaving group and the formation of an aryl cation. In the SN1 reactions all the cations employed as intermediates were planar with an empty p orbital. This cation is planar but the p orbital is full (it is part of the aromatic ring) and the empty orbital is an sp2 orbital outside the ring.
Nucleophilic aromatic substitution mechanisms
Aromatic rings undergo nucleophilic substitution by several pathways.
SNAr (addition-elimination) mechanism
aromatic SN1 mechanism encountered with diazonium salts
benzyne mechanism (E1cB-AdN)
free radical SRN1 mechanism
ANRORC mechanism
Vicarious nucleophilic substitution
The SNAr mechanism is the most important of these. Electron withdrawing groups activate the ring towards nucleophilic attack. For example if there are nitro functional groups positioned ortho or para to the halide leaving group, the SNAr mechanism is favored.
SNAr reaction mechanism
The following is the reaction mechanism of a nucleophilic aromatic substitution of 2,4-dinitrochlorobenzene (1) in a basic solution in water.
Since the nitro group is an activator toward nucleophilic substitution, and a meta director, it is able to stabilize the additional electron density (via resonance) when the aromatic compound is attacked by the hydroxide nucleophile. In the resulting intermediate, named the Meisenheimer complex (2a), the ipso carbon is temporarily bonded to the hydroxyl group. This Meisenheimer complex is further stabilized by the additional electron-withdrawing nitro group (2b).
In order to return to a lower energy state, either the hydroxyl group leaves, or the chloride leaves. In solution, both processes happen. A small percentage of the intermediate loses the chloride to become the product (2,4-dinitrophenol, 3), while the rest return to the reactant (1). Since 2,4-dinitrophenol is in a lower energy state, it will not return to form the reactant, so after some time has passed, the reaction reaches chemical equilibrium that favors the 2,4-dinitrophenol, which is then deprotonated by the basic solution (4).
The formation of the resonance-stabilized Meisenheimer complex is slow because the loss of aromaticity due to nucleophilic attack results in a higher-energy state. By the same token, the loss of the chloride or hydroxide is fast, because the ring regains aromaticity. Recent work indicates that the Meisenheimer complex is not always a true intermediate but may instead be the transition state of a 'frontside SN2' process, particularly if stabilization by electron-withdrawing groups is not very strong. A 2019 review argues that such 'concerted SNAr' reactions are more prevalent than previously assumed.
Aryl halides cannot undergo the classic 'backside' SN2 reaction. The carbon-halogen bond is in the plane of the ring because the carbon atom has a trigonal planar geometry. Backside attack is blocked and this reaction is therefore not possible. An SN1 reaction is possible but very unfavourable. It would involve the unaided loss of the leaving group and the formation of an aryl cation. The nitro group is the most commonly encountered activating group; other activating groups are the cyano and the acyl group. The leaving group can be a halogen or a sulfide. With increasing electronegativity of the leaving group, the reaction rate for nucleophilic attack increases. This is because the rate-determining step for an SNAr reaction is attack of the nucleophile and the subsequent breaking of the aromatic system; the faster process is the favourable reforming of the aromatic system after loss of the leaving group. As such, the following pattern is seen with regard to halogen leaving group ability for SNAr: F > Cl ≈ Br > I (i.e. an inverted order to that expected for an SN2 reaction). Viewed from the perspective of an SN2 reaction this would seem counterintuitive, since the C-F bond is amongst the strongest in organic chemistry; indeed, fluoride is the ideal leaving group for an SNAr reaction due to the extreme polarity of the C-F bond. Nucleophiles can be amines, alkoxides, sulfides and stabilized carbanions.
Nucleophilic aromatic substitution reactions
Some typical substitution reactions on arenes are listed below.
In the Bamberger rearrangement N-phenylhydroxylamines rearrange to 4-aminophenols. The nucleophile is water.
The Smiles rearrangement is the intramolecular version of this reaction type.
Nucleophilic aromatic substitution is not limited to arenes, however; the reaction takes place even more readily with heteroarenes. Pyridines are especially reactive when substituted in the aromatic ortho position or aromatic para position because then the negative charge is effectively delocalized at the nitrogen position. One classic reaction is the Chichibabin reaction (Aleksei Chichibabin, 1914) in which pyridine is reacted with an alkali-metal amide such as sodium amide to form 2-aminopyridine.
In the compound methyl 3-nitropyridine-4-carboxylate, the meta nitro group is actually displaced by fluorine with cesium fluoride in DMSO at 120 °C.
Although the Sandmeyer reaction of diazonium salts and halides is formally a nucleophilic substitution, the reaction mechanism is in fact radical.
Asymmetric nucleophilic aromatic substitution
With carbon nucleophiles such as 1,3-dicarbonyl compounds the reaction has been demonstrated as a method for the asymmetric synthesis of chiral molecules. First reported in 2005, the organocatalyst (in a dual role with that of a phase transfer catalyst) is derived from cinchonidine (benzylated at N and O).
See also
Electrophilic aromatic substitution
Nucleophile
Substitution reaction
SN1 reaction
SN2 reaction
SNi reaction
Nucleophilic aliphatic substitution
References
Nucleophilic substitution reactions
Reaction mechanisms | Nucleophilic aromatic substitution | [
"Chemistry"
] | 1,590 | [
"Reaction mechanisms",
"Chemical kinetics",
"Physical organic chemistry"
] |
2,408,809 | https://en.wikipedia.org/wiki/Basilis%20C.%20Xanthopoulos | Basilis C. Xanthopoulos (also Vasilis; ; 8 April 1951 – 27 November 1990) was a Greek theoretical physicist, well known in the field of general relativity for his contributions to the study of colliding plane waves.
Early years
Basilis Xanthopoulos was born in Drama. He excelled in high school, showing advanced analytic abilities in physics and mathematics. He was awarded the 1st prize in the national mathematics competition organised by the Greek mathematical society in 1969, and in the same year he was admitted, with the highest grade among all students in Greece, to the Department of Mathematics of the University of Thessaloniki. Four years later he also graduated first in his class and, after scoring in the top 1% in the GRE, he was admitted for graduate studies in Physics at the University of Chicago. He moved to Chicago in December 1974 and earned his Ph.D. on May 30, 1978, under the supervision of Prof. Robert Geroch. The title of his dissertation was "Exact vacuum solutions of Einstein's equation from linearized solutions". During this time, he commenced a close lifelong collaboration with Subrahmanyan Chandrasekhar, who was effectively his co-supervisor and became a close friend and life-long mentor. Chandrasekhar, having been awarded the Nobel prize in Physics in 1983, visited Crete several times in the mid-to-late 1980s to collaborate with Xanthopoulos, and mentions in 1991 in Current Science that "My association with Basilis is the most binding in all my sixty years of science".
Academic career
Upon completing his PhD, Xanthopoulos moved as a visiting assistant professor to the Department of Physics of Montana State University until June 1979 and continued as a postdoctoral researcher at Syracuse University. In December 1979 he returned, as a Chief Assistant, to the Department of Physics of the University of Thessaloniki. On November 29, 1982, he joined the faculty of the newly established Department of Physics of the University of Crete, where he advanced through the ranks, becoming full professor in 1987. He served as Chairman of the department from 1987 until his murder on the evening of November 27, 1990, when he was shot, together with his colleague Stephanos Pnevmatikos, while giving a seminar, by a disgruntled and mentally unstable 32-year-old named Giorgos Petrodaskalakis (who later committed suicide).
Scientific contributions
Xanthopoulos contributed in a number of areas of mathematical physics and general theory of relativity. In particular, he worked on:
Asymptotic structure of spacetime. This is a research field in which he was influenced by his supervisor R. Geroch, and his works in this area were in collaboration with A. Ashtekar, C. Hoenselaers and W. Kinnersley;
Creation of "anomalies" in space-time by colliding gravitational waves. This was one of the two pillars of his collaboration, mainly with Chandrasekhar, which models two gravitational plane waves which collide, interact nonlinearly, and create in the interaction zone a curved region of spacetime that is locally isometric to the Kerr vacuum. This is now called the Chandrasekhar–Xanthopoulos colliding plane wave model;
Perturbations of spacetime (mainly Reissner–Nordström). This is the second pillar of his collaboration with Chandrasekhar, and his work has characterized the field. These results were an important part of the book "The Mathematical Theory of Black Holes" by Chandrasekhar in 1983;
Cosmic strings. In the 1980s, during the period when he worked in Crete, Xanthopoulos also dealt with mathematical issues concerning cosmic strings;
Scalar fields, a very important topic in gravitational physics. Xanthopoulos contributed in this direction, in collaboration with V. Ferrari, on the possibility that black holes can support scalar fields.
The complete list of his publications is available from NASA/ADS here, while his Google Scholar profile is available here.
Recognition
The appreciation of the contributions of Basilis Xanthopoulos to science and education is reflected in a number of events in his memory:
"Basilis Xanthopoulos - Stefanos Pnevmatikos" Award for Excellence in Academic Teaching, by the Foundation for Research and Technology - Hellas (FORTH), established in 1991 and awarded annually by the President of the Hellenic Republic.
International "Xanthopoulos Award", also by the Foundation for Research and Technology - Hellas (FORTH), awarded once every three years by the Society on General Relativity and Gravitation to a scientist under the age of 40 who has made an "outstanding (preferably theoretical) contribution to the field of gravitational physics".
The "Basilis Xanthopoulos" Competition in Physics and Mathematics for high school students in the prefecture of Drama.
In his honor, the amphitheater of the 1st General Lyceum of Drama was named "Basilis Xanthopoulos Amphitheater".
In his honor, the lecture hall of the Observatory of the Aristotle University of Thessaloniki was named "Basilis Xanthopoulos Hall".
In his honor, the amphitheater A of the Department of Physics of the University of Crete was named "Basilis Xanthopoulos - Stefanos Pnevmatikos Amphitheater".
On April 8, 2021, on the occasion of the 70 years since his birth, the Department of Physics of the University of Crete and the Institute of Astrophysics of FORTH made available online his personal archive in a dedicated web page.
Notes
References
Page containing information about Xanthopoulos' life and death, and a short publication list of his works.
; this paper gives the Chandrasekhar/Xanthopoulos colliding plane wave solution.
External links
Basilis Xanthopoulos International Award
1951 births
1990 deaths
1990 murders in Europe
People from Drama, Greece
20th-century Greek physicists
Academic staff of the University of Crete
Greek murder victims
Deaths by firearm in Greece
Murder–suicides in Europe
People murdered in Greece
Theoretical physicists | Basilis C. Xanthopoulos | [
"Physics"
] | 1,226 | [
"Theoretical physics",
"Theoretical physicists"
] |
2,408,819 | https://en.wikipedia.org/wiki/Supramolecular%20electronics | Supramolecular electronics is the experimental field of supramolecular chemistry that bridges the gap between molecular electronics and bulk plastics in the construction of electronic circuitry at the nanoscale. In supramolecular electronics, assemblies of pi-conjugated systems on the 5 to 100 nanometer scale are prepared by molecular self-assembly with the aim to fit these structures between electrodes. With single molecules as researched in molecular electronics at the 5 nanometer scale this would be impractical. Nanofibers can be prepared from polymers such as polyaniline and polyacetylene. Chiral oligo(p-phenylenevinylene)s self-assemble in a controlled fashion into (helical) wires. An example of actively researched compounds in this field are certain coronenes.
References
Electronics
Molecular electronics
Nanoelectronics | Supramolecular electronics | [
"Chemistry",
"Materials_science"
] | 179 | [
"Molecular physics",
"Molecular electronics",
"Nanoelectronics",
"nan",
"Nanotechnology",
"Supramolecular chemistry"
] |
2,409,019 | https://en.wikipedia.org/wiki/Small%20t%20intron | Plasmid vectors are circular strands of DNA, found naturally in bacteria, that are used in genetic engineering to introduce new genes into a host cell.
The small t intron is an intron that is used in some plasmid vectors in order to induce gene expression in mammalian cells.
Function
The function of this intron in the vectors is unknown, but it is theorized that it might be involved in splicing or translation efficiency.
Vectors such as pME18s contain it.
References
Gene expression | Small t intron | [
"Chemistry",
"Biology"
] | 110 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
2,409,216 | https://en.wikipedia.org/wiki/Baeyer%E2%80%93Villiger%20oxidation | The Baeyer–Villiger oxidation is an organic reaction that forms an ester from a ketone or a lactone from a cyclic ketone, using peroxyacids or peroxides as the oxidant. The reaction is named after Adolf von Baeyer and Victor Villiger who first reported the reaction in 1899.
Reaction mechanism
In the first step of the reaction mechanism, the peroxyacid protonates the oxygen of the carbonyl group. This makes the carbonyl group more susceptible to be attacked by the peroxyacid. Next, the peroxyacid attacks the carbon of the carbonyl group forming what is known as the Criegee intermediate. Through a concerted mechanism, one of the substituents on the ketone group migrates to the oxygen of the peroxide group while a carboxylic acid leaves. This migration step is thought to be the rate determining step. Finally, deprotonation of the oxocarbenium ion produces the ester.
The products of the Baeyer–Villiger oxidation are believed to be controlled through both primary and secondary stereoelectronic effects. The primary stereoelectronic effect in the Baeyer–Villiger oxidation refers to the necessity of the oxygen-oxygen bond in the peroxide group to be antiperiplanar to the group that migrates. This orientation facilitates optimum overlap of the 𝛔 orbital of the migrating group to the 𝛔* orbital of the peroxide group. The secondary stereoelectronic effect refers to the necessity of the lone pair on the oxygen of the hydroxyl group to be antiperiplanar to the migrating group. This allows for optimum overlap of the oxygen nonbonding orbital with the 𝛔* orbital of the migrating group. This migration step is also (at least in silico) assisted by two or three peroxyacid units enabling the hydroxyl proton to shuttle to its new position.
The migratory ability is ranked tertiary > secondary > aryl > primary. Allylic groups are more apt to migrate than primary alkyl groups but less so than secondary alkyl groups. Electron-withdrawing groups on the substituent decrease the rate of migration. There are two explanations for this trend in migration ability. One explanation relies on the buildup of positive charge in the transition state for breakdown of the Criegee intermediate (illustrated by the carbocation resonance structure of the Criegee intermediate). Keeping this structure in mind, it makes sense that the substituent that can maintain positive charge the best would be most likely to migrate. The higher the degree of substitution, the more stable a carbocation generally is. Therefore, the tertiary > secondary > primary trend is observed.
Another explanation uses stereoelectronic effects and steric arguments. As mentioned, the substituent that is antiperiplanar to the peroxide group in the transition state will migrate. This transition state has a gauche interaction between the peroxyacid and the non-migrating substituent. If the bulkier group is placed antiperiplanar to the peroxide group, the gauche interaction between the substituent on the forming ester and the carbonyl group of the peroxyacid will be reduced. Thus, it is the bulkier group that will prefer to be antiperiplanar to the peroxide group, enhancing its aptitude for migration.
The migrating group in acyclic ketones is usually not a 1° alkyl group. However, such groups may be persuaded to migrate in preference to 2° or 3° groups by using CF3CO3H or BF3 + H2O2 as reagents.
Historical background
In 1899, Adolf Baeyer and Victor Villiger first published a demonstration of the reaction that we now know as the Baeyer–Villiger oxidation. They used peroxymonosulfuric acid to make the corresponding lactones from camphor, menthone, and tetrahydrocarvone.
There were three suggested reaction mechanisms of the Baeyer–Villiger oxidation that seemed to fit with observed reaction outcomes. These three reaction mechanisms can really be split into two pathways of peroxyacid attack – on either the oxygen or the carbon of the carbonyl group. Attack on oxygen could lead to two possible intermediates: Baeyer and Villiger suggested a dioxirane intermediate, while Georg Wittig and Gustav Pieper suggested a peroxide with no dioxirane formation. Carbon attack was suggested by Rudolf Criegee. In this pathway, the peracid attacks the carbonyl carbon, producing what is now known as the Criegee intermediate.
In 1953, William von Eggers Doering and Edwin Dorfman elucidated the correct pathway for the reaction mechanism of the Baeyer–Villiger oxidation by using oxygen-18-labelling of benzophenone. The three different mechanisms would each lead to a different distribution of labelled products. The Criegee intermediate would lead to a product only labelled on the carbonyl oxygen. The product of the Wittig and Pieper intermediate is only labeled on the alkoxy group of the ester. The Baeyer and Villiger intermediate leads to a 1:1 distribution of both of the above products. The outcome of the labelling experiment supported the Criegee intermediate, which is now the generally accepted pathway.
Stereochemistry
The migration does not change the stereochemistry of the group that transfers, i.e.: it is stereoretentive.
Reagents
Although many different peroxyacids are used for the Baeyer–Villiger oxidation, some of the more common oxidants include meta-chloroperbenzoic acid (mCPBA) and trifluoroperacetic acid (TFPAA). The general trend is that higher reactivity is correlated with lower pKa (i.e.: stronger acidity) of the corresponding carboxylic acid (or alcohol in the case of the peroxides). Therefore, the reactivity trend shows TFPAA > 4-nitroperbenzoic acid > mCPBA and performic acid > peracetic acid > hydrogen peroxide > tert-butyl hydroperoxide. The peroxides are much less reactive than the peroxyacids. The use of hydrogen peroxide even requires a catalyst. In addition, using organic peroxides and hydrogen peroxide tends to generate more side-reactivity due to their promiscuity.
Limitations
The use of peroxyacids and peroxides when performing the Baeyer–Villiger oxidation can cause the undesirable oxidation of other functional groups. Alkenes and amines are a few of the groups that can be oxidized. For instance, alkenes in the substrate, particularly when electron-rich, may be oxidized to epoxides. However, methods have been developed that will allow for the tolerance of these functional groups. In 1962, G. B. Payne reported that the use of hydrogen peroxide in the presence of a selenium catalyst will produce the epoxide from alkenyl ketones, while use of peroxyacetic acid will form the ester.
Modifications
Catalytic Baeyer-Villiger oxidation
The use of hydrogen peroxide as an oxidant would be advantageous, making the reaction more environmentally friendly as the sole byproduct is water. Benzeneseleninic acid derivatives as catalysts have been reported to give high selectivity with hydrogen peroxide as the oxidant. Another class of catalysts which show high selectivity with hydrogen peroxide as the oxidant are solid Lewis acid catalysts such as stannosilicates. Among stannosilicates, particularly the zeotype Sn-beta and the amorphous Sn-MCM-41 show promising activity and close to full selectivity towards the desired product.
Asymmetric Baeyer-Villiger oxidation
There have been attempts to use organometallic catalysts to perform enantioselective Baeyer–Villiger oxidations. The first reported instance of one such oxidation of a prochiral ketone used dioxygen as the oxidant with a copper catalyst. Other catalysts, including platinum and aluminum compounds, followed.
Baeyer-Villiger monooxygenases
In nature, enzymes called Baeyer-Villiger monooxygenases (BVMOs) perform the oxidation analogously to the chemical reaction. To facilitate this chemistry, BVMOs contain a flavin adenine dinucleotide (FAD) cofactor. In the catalytic cycle (see figure on the right), the cellular redox equivalent NADPH first reduces the cofactor, which allows it subsequently to react with molecular oxygen. The resulting peroxyflavin is the catalytic entity oxygenating the substrate, and theoretical studies suggest that the reaction proceeds through the same Criegee intermediate as observed in the chemical reaction. After the rearrangement step forming the ester product, a hydroxyflavin remains, which spontaneously eliminates water to form oxidized flavin, thereby closing the catalytic cycle.
BVMOs are closely related to the flavin-containing monooxygenases (FMOs), enzymes that also occur in the human body, functioning within the frontline metabolic detoxification system of the liver along the cytochrome P450 monooxygenases. Human FMO5 was in fact shown to be able to catalyse Baeyer-Villiger reactions, indicating that the reaction may occur in the human body as well.
BVMOs have been widely studied due to their potential as biocatalysts, that is, for an application in organic synthesis. Considering the environmental concerns for most of the chemical catalysts, the use of enzymes is considered a greener alternative. BVMOs in particular are interesting for application because they fulfil a range of criteria typically sought for in biocatalysis: besides their ability to catalyse a synthetically useful reaction, some natural homologs were found to have a very large substrate scope (i.e. their reactivity was not restricted to a single compound, as often assumed in enzyme catalysis), they can be easily produced on a large scale, and because the three-dimensional structure of many BVMOs has been determined, enzyme engineering could be applied to produce variants with improved thermostability and/or reactivity. Another advantage of using enzymes for the reaction is their frequently observed regio- and enantioselectivity, owed to the steric control of substrate orientation during catalysis within the enzyme's active site.
Applications
Zoapatanol
Zoapatanol is a biologically active molecule that occurs naturally in the zeopatle plant, which has been used in Mexico to make a tea that can induce menstruation and labor. In 1981, Vinayak Kane and Donald Doyle reported a synthesis of zoapatanol. They used the Baeyer–Villiger oxidation to make a lactone that served as a crucial building block that ultimately led to the synthesis of zoapatanol.
Steroids
In 2013, Alina Świzdor reported the transformation of the steroid dehydroepiandrosterone to anticancer agent testololactone by use of a Baeyer–Villiger oxidation induced by fungus that produces Baeyer-Villiger monooxygenases.
See also
Dakin reaction
Schmidt reaction - converts ketones to amides or lactams
References
External links
Animation of the Baeyer–Villiger oxidation
Organic oxidation reactions
Esterification reactions
Name reactions | Baeyer–Villiger oxidation | [
"Chemistry"
] | 2,446 | [
"Esterification reactions",
"Organic redox reactions",
"Organic reactions",
"Name reactions",
"Organic oxidation reactions"
] |
15,121,214 | https://en.wikipedia.org/wiki/OpenSim%20%28simulation%20toolkit%29 | OpenSim is an open source software system for biomechanical modeling, simulation and analysis. Its purpose is to provide free and widely accessible tools for conducting biomechanics research and motor control science. OpenSim enables a wide range of studies, including analysis of walking dynamics, studies of sports performance, simulations of surgical procedures, analysis of joint loads, design of medical devices, and animation of human and animal movement. The software performs inverse dynamics analysis and forward dynamics simulations. OpenSim is used in hundreds of biomechanics laboratories around the world to study movement and has a community of software developers contributing new features.
OpenSim is one of the flagship applications from Simbios, a NIH Center for Biomedical Computation at Stanford University. Founded in 2004, Simbios is charged with a mandate to provide leading software and computational tools for physics-based modeling and simulation of biological structures. OpenSim was designed to propel biomechanics research by providing a common framework for investigation and a vehicle for exchanging complex musculoskeletal models.
History
OpenSim 1.0 was released on August 20, 2007 and provided capabilities for viewing musculoskeletal models, importing models developed in SIMM (Musculographics Inc.), editing muscle paths, and generating muscle actuated simulations that track experimental data.
OpenSim 1.1 was released on December 11, 2007, which added new features such as user-specified camera positions for recording movies of simulations, and a perturbation (sensitivity) analysis for inquiry into the function of individual muscles.
OpenSim 2.2.1 was released on April 11, 2011. This software update enhanced the user interface and allowed the user to set bounds on activations of muscles and actuators relating to static optimization not dynamic optimization.
OpenSim 2.4 was released on October 10, 2011. This update includes faster and more robust tools for Inverse Dynamics and Inverse Kinematics, new visualization tools, enhanced access for API users, and many usability improvements.
OpenSim 3.2 was released on March 13, 2014. This update focused on improving the OpenSim scripting interface, accessible through the graphical user interface (GUI), Matlab, and now Python. It also added new visualization capabilities and usability improvements in the OpenSim application. Full list of features can be found here.
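For readers curious what the scripting interface looks like in practice, the following is a minimal, hedged sketch using the Python bindings. It assumes the opensim package is installed and that an example model file such as arm26.osim (shipped with OpenSim distributions) is available; exact method names may differ between OpenSim versions.

```python
import opensim as osim

# Load a musculoskeletal model; "arm26.osim" is one of the example models
# distributed with OpenSim (the path used here is an assumption).
model = osim.Model("arm26.osim")
state = model.initSystem()          # build and initialize the underlying multibody system

print("Model name:", model.getName())
print("Bodies:", model.getBodySet().getSize())
print("Muscles:", model.getMuscles().getSize())
```

The same calls can be issued from the OpenSim GUI scripting shell or, with analogous syntax, from Matlab.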
References
External links
http://opensim.stanford.edu/
https://github.com/opensim-org/opensim-core
http://simtk.org/home/opensim/
http://nmbl.stanford.edu/
http://simbios.stanford.edu/
http://simtk.org/
https://github.com/stanfordnmbl/osim-rl
http://osim-rl.stanford.edu/
Biomechanics | OpenSim (simulation toolkit) | [
"Physics"
] | 607 | [
"Biomechanics",
"Mechanics"
] |
15,121,242 | https://en.wikipedia.org/wiki/Hydrogen%20infrastructure | A hydrogen infrastructure is the infrastructure of hydrogen pipeline transport, points of hydrogen production and hydrogen stations for distribution as well as the sale of hydrogen fuel, and thus a crucial prerequisite before a successful commercialization of fuel cell technology.
The hydrogen infrastructure would consist mainly of industrial hydrogen pipeline transport and hydrogen-equipped filling stations. Hydrogen stations which were not situated near a hydrogen pipeline would get supply via hydrogen tanks, compressed hydrogen tube trailers, liquid hydrogen trailers, liquid hydrogen tank trucks or dedicated onsite production.
Pipelines are the cheapest way to move hydrogen over long distances compared to other options. Hydrogen gas piping is routine in large oil-refineries, because hydrogen is used to hydrocrack fuels from crude oil. The IEA recommends existing industrial ports be used for production and existing natural gas pipelines for transport: also international co-operation and shipping.
South Korea and Japan, which as of 2019 lack international electrical interconnectors, are investing in the hydrogen economy. In March 2020, the Fukushima Hydrogen Energy Research Field was opened in Japan, claiming to be the world's largest hydrogen production facility. Much of the site is occupied by a solar array; power from the grid is also used for electrolysis of water to produce hydrogen fuel.
Network
Hydrogen highways
A hydrogen highway is a chain of hydrogen-equipped filling stations and other infrastructure along a road or highway which allow hydrogen vehicles to travel.
Hydrogen stations
Hydrogen stations which are not situated near a hydrogen pipeline get supply via hydrogen tanks, compressed hydrogen tube trailers, liquid hydrogen trailers, liquid hydrogen tank trucks or dedicated onsite production. Some firms as ITM Power are also providing solutions to make your own hydrogen (for use in the car) at home. Government supported activities to expand an hydrogen fuel infrastructure are ongoing in the US state of California, in some member states of the European Union (most notably in Germany) and in particular in Japan.
Hydrogen pipeline transport
Hydrogen pipeline transport is a transportation of hydrogen through a pipe as part of the hydrogen infrastructure. Hydrogen pipeline transport is used to connect the point of hydrogen production or delivery of hydrogen with the point of demand, pipeline transport costs are similar to CNG, the technology is proven, however most hydrogen is produced on the place of demand with every an industrial production facility. , there are of low pressure hydrogen pipelines in the US and in Europe.
According to a 2024 research report, the United States has 1,600 miles (2,570 kilometers) of hydrogen pipelines; the global total stands at 2,800 miles (4,500 kilometers). The World Economic Forum, in December 2023, estimated that Europe had approximately 1,600 kilometers of hydrogen pipelines.
Hydrogen embrittlement (a reduction in the ductility of a metal due to absorbed hydrogen) is not a problem for hydrogen gas pipelines. Hydrogen embrittlement only happens with 'diffusible' hydrogen, i.e. atoms or ions. Hydrogen gas, however, is molecular (H2), and there is a very significant energy barrier to splitting it into atoms.
Buffer for renewable energy
The National Renewable Energy Laboratory believes that US counties have the potential to produce more renewable hydrogen for fuel cell vehicles than the gasoline they consumed in 2002.
As an energy buffer, hydrogen produced via water electrolysis and in combination with underground hydrogen storage or other large-scale storage technologies, could play an important role for the introduction of fluctuating renewable energy sources like wind or solar power.
Hydrogen production plants
98% of hydrogen production uses the steam reforming method. Methods such as electrolysis of water are also used. The world's largest facility for producing electrolytic hydrogen fuel is claimed to be the Fukushima Hydrogen Energy Research Field (FH2R), a 10MW-class hydrogen production unit, inaugurated on 7 March 2020, in Namie, Fukushima Prefecture. The site occupies 180,000 square meters of land, much of which is occupied by a solar array; but power from the grid is also used to conduct electrolysis of water to produce hydrogen fuel.
Hydrogen pipeline transport
Hydrogen pipeline transport is a transportation of hydrogen through a pipe as part of the hydrogen infrastructure.
History
1938 – Rhine-Ruhr: the first hydrogen pipes, constructed of regular pipe steel (compressed hydrogen pressure , diameter ). Still in operation.
1973 – pipeline in Isbergues, France.
1985 – Extension of the pipeline from Isbergues to Zeebrugge
1997 – Connection of the pipeline to Rotterdam
1997 – 2000: Development of two hydrogen networks, one near Corpus Christi, Texas, and one between Freeport and Texas City.
2009 – extension of the pipeline from Plaquemine to Chalmette.
Economics
Hydrogen pipeline transport is used to transport hydrogen from the point of production or delivery to the point of demand. Although hydrogen pipeline transport is technologically mature and the transport costs are similar to those of CNG, most hydrogen is produced at the place of demand, with an industrial production facility every
Piping
For process metal piping at pressures up to , high-purity stainless steel piping with a maximum hardness of 80 HRB is preferred. This is because higher hardnesses are associated with lower fracture toughness so stronger, higher hardness steel is less safe.
Composite pipes under assessment include:
carbon fiber structure with fiberglass overlay .
perfluoroalkoxy (PFA, MFA).
polytetrafluoroethylene (PTFE)
fluorinated ethylene propylene (FEP) .
carbon-fiber-reinforced polymers (FRP)
Fiber-Reinforced Polymer pipelines (or FRP pipeline) and reinforced thermoplastic pipes are researched.
Carrying hydrogen in steel pipelines (grades: API5L-X42 and X52; up to 1,000psi/7,000kPa, constant pressure/low pressure cycling) does not lead to hydrogen embrittlement. Hydrogen is typically stored in steel cylinders without problems.
Coal gas (also known as town gas) is 50% hydrogen and was carried in cast-iron pipes for half a century without any embrittlement issues.
Infrastructure
2024: USA – of low pressure hydrogen pipelines
2024: Europe – of low pressure hydrogen pipelines.
Hydrogen highway
A hydrogen highway is a chain of hydrogen-equipped public filling stations, along a road or highway, that allows hydrogen powered cars to travel. William Clay Ford Jr. has stated that infrastructure is one of three factors (also including costs and manufacturability in high volumes) that hold back the marketability of fuel cell cars.
Supply issues, cost and pollution
Hydrogen fueling stations generally receive deliveries of hydrogen by tanker truck from hydrogen suppliers. An interruption at a hydrogen supply facility can shut down multiple hydrogen fueling stations. A hydrogen fueling station costs between $1 million and $4 million to build.
As of 2019, 98% of hydrogen is produced by steam methane reforming, which emits carbon dioxide. The bulk of hydrogen is also transported in trucks, so pollution is emitted in its transportation.
Hydrogen station
A hydrogen station is a storage or filling station for hydrogen fuel. The hydrogen is dispensed by weight. There are two filling pressures in common use: H70 or 700 bar, and the older standard H35 or 350 bar. , around 550 filling stations were available worldwide. According to H2stations.org by Ludwig-Bölkow-Systemtechnik (LBST), as of the end of 2023, there were 921 hydrogen refueling stations globally, although this number clearly conflicts with those published by AFDC. The distribution of these stations is highly uneven, with a concentration in East Asia, particularly in China, Japan and South Korea; Central Europe and California in the United States. Other regions have very few, if any, hydrogen refuelling stations.
Delivery methods
Hydrogen fueling stations can be divided into off-site stations, where hydrogen is delivered by truck or pipeline, and on-site stations that produce and compress hydrogen for the vehicles.
Types of recharging stations
Home hydrogen fueling station
Home hydrogen fueling stations are available to consumers. A model that can produce 12 kilograms of hydrogen per day sells for $325,000.
Solar powered water electrolysing hydrogen home stations are composed of solar cells, power converter, water purifier, electrolyzer, piping, hydrogen purifier, oxygen purifier, compressor, pressure vessels and a hydrogen outlet.
Disadvantages
Volatility
Hydrogen fuel is hazardous because of its low ignition energy, high combustion energy, and because it easily leaks from tanks. Explosions at hydrogen filling stations have been reported.
Supply
Hydrogen fuelling stations generally receive deliveries by truck from hydrogen suppliers. An interruption at a hydrogen supply facility can shut down multiple hydrogen fuelling stations.
Costs
There are far fewer Hydrogen filling stations than gasoline fuel stations, which in the US alone numbered 168,000 in 2004. Replacing the US gasoline infrastructure with hydrogen fuel infrastructure is estimated to cost a half trillion U.S. dollars. A hydrogen fueling station costs between $1 million and $4 million to build. In comparison, battery electric vehicles can charge at home or at public chargers. As of 2023, there are more than 60,000 public charging stations in the United States, with more than 160,000 outlets. A public Level 2 charger, which comprise the majority of public chargers in the US, costs about $2,000, and DC fast chargers, of which there are more than 30,000 in the U.S., generally cost between $100,000 and $250,000, although Tesla superchargers are estimated to cost approximately $43,000.
Freezing of the nozzle
During refueling, the flow of cold hydrogen can cause frost to form on the dispenser nozzle, sometimes leading to the nozzle becoming frozen to the vehicle being refueled.
Locations
Consulting firm Ludwig-Bölkow-Systemtechnik tracks global hydrogen filling stations and publishes a map.
Asia
In 2019, there were 178 publicly available hydrogen fuel stations in operation.
, there are 167 publicly available hydrogen fuel stations in operation in Japan. In 2012 there were 17 hydrogen stations, and in 2021, there were 137 publicly available hydrogen fuel stations in Japan.
By the end of 2023, China had built 354 hydrogen refueling stations.
In 2019, there were 33 publicly available hydrogen fuel stations in operation in South Korea. In November 2023, however, due to hydrogen supply problems and broken stations, most fueling stations in South Korea offered no hydrogen. 41 out of the 159 hydrogen stations in the country were listed as open, and some of these were rationing supplies of hydrogen.
Europe
In 2019, there were 177 stations in Europe. According to H2stations.org by Ludwig-Bölkow-Systemtechnik (LBST), there were 265 hydrogen refuelling stations in Europe by the end of 2023.
there were 105 hydrogen fuel stations in Germany; 5 publicly available hydrogen fuel stations in France; 3 in Iceland; one in Italy; 4 in The Netherlands; 2 in Belgium; 4 in Sweden; 3 in Switzerland; and 6 in Denmark. Everfuel, the only operator of hydrogen stations in Denmark, announced in 2023 the closure of all of its public hydrogen stations in the country.
there were 2 publicly available hydrogen fuel stations in Norway, both in the Oslo area. Since the explosion at the hydrogen filling station in Sandvika in June 2019, the sale of hydrogen cars in Norway has halted. In 2023, Everfuel announced the closure of its two public hydrogen stations in Norway and cancelled the opening of a third. In 2024 Shell discontinued its hydrogen fuel projects in Norway.
there were 11 publicly available hydrogen fuel stations in the United Kingdom, but as of 2023 the number had decreased to 5. In 2022, Shell closed its three hydrogen stations in the UK.
North America
Canada
As of July 2023, there were 10 fueling stations in Canada, 9 of which were open to the public:
British Columbia: Five stations are in the Greater Vancouver Area and Vancouver Island, with one station in Kelowna. All six stations are operated by HTEC (co-branded with Shell and Esso).
Ontario: One station in Mississauga is operated by Hydrogenics Corporation. The station is only available to certain commercial customers.
Quebec: Three stations in the Greater Montreal area are operated by Shell, and one station in Quebec City is operated by Harnois Énergies (co-branded with Esso).
United States
, there were 54 publicly accessible hydrogen refueling stations in the US, 53 of which were located in California, with one in Hawaii.
California: there were 53 retail stations. Continued state funding for hydrogen refueling stations is uncertain. In September 2023, Shell announced that it had closed its hydrogen stations in the state and discontinued plans to build further stations. In 2024 it was reported that "a majority of the hydrogen stations in Southern California are offline or operating with reduced hours" due to hydrogen shortages and unreliable station performance.
Hawaii opened its first hydrogen station at Hickam in 2009. In 2012, the Aloha Motor Company opened a hydrogen station in Honolulu. However, only one publicly accessible station was in operation in Hawaii.
Michigan: In 2000, the Ford Motor Company and Air Products & Chemicals opened the first hydrogen station in North America in Dearborn, MI. No publicly accessible stations were in operation in Michigan.
Oceania
In 2021, the first Australian publicly available hydrogen fuel station opened in Canberra, operated by ActewAGL.
Hydrogen tank
A hydrogen tank (also called a cartridge or canister) is used for hydrogen storage. The first type IV hydrogen tanks for compressed hydrogen at were demonstrated in 2001; the first fuel cell vehicles on the road with type IV tanks were the Toyota FCHV, Mercedes-Benz F-Cell and the GM HydroGen4.
Low-pressure tanks
Various applications have allowed the development of different H2 storage scenarios.
Recently, the Hy-Can consortium has introduced a small one-liter format. Horizon Fuel Cells is now selling a refillable metal hydride form factor for consumer use called HydroStik.
Type I
Metal tank (steel/aluminum)
Approximate maximum pressures: aluminum , steel .
Type II
Aluminum tank with filament windings such as glass fiber/aramid or carbon fiber around the metal cylinder. See composite overwrapped pressure vessel.
Approximate maximum pressures: aluminum/glass , steel/carbon or aramide .
Type III
Tanks made from composite material, fiberglass/aramid or carbon fiber with a metal liner (aluminum or steel).
Approximate maximum pressures: aluminum/glass , aluminum/aramid , aluminium/carbon .
Type IV
Composite tanks such of carbon fiber with a polymer liner (thermoplastic). See rotational molding and fibre-reinforced plastic.
Approximate maximum pressure: .
Type V
All-composite, linerless tank. Composites Technology Development (Colorado, USA) built a prototype tank for a satellite application in 2010 although it had an operating pressure of only 200 psi and was used to store argon.
Approximate maximum pressure: .
Tank testing and safety considerations
In accordance with ISO/TS 15869 (revised):
Burst test: the pressure at which the tank bursts, typically more than 2× the working pressure.
Proof pressure: the pressure at which the test will be executed, typically above the working pressure.
Leak test or permeation test, in NmL/hr/L (Normal liter of H2/time in hr/volume of the tank.)
Fatigue test, typically several thousand cycles of charging/emptying.
Bonfire test where the tank is exposed to an open fire.
Bullet test where live ammunition is fired at the tank.
This specification was replaced by ISO 13985:2006 and only applies to liquid hydrogen tanks.
Actual Standard EC 79/2009
The U.S. Department of Energy maintains a hydrogen safety best practices site with extensive information about tanks and piping. They dryly observe that "Hydrogen is a very small molecule with low viscosity, and therefore prone to leakage."
Metal hydride storage tank
Magnesium hydride
Magnesium can be used for hydrogen storage; it is a safe but heavy reversible storage technology. Typically the pressure requirements are limited to .
The charging process generates heat, whereas the discharge process requires some heat to release the H2 contained in the storage material. To activate these types of hydrides, at the current state of development, a temperature of approximately must be reached.
Other hydrides
See also sodium aluminium hydride
Research
2008 - Japan, a clay-based film sandwiched between prepregs of CFRP.
See also
Hydrogen leak testing
Hydrogen sensor
Gas cylinder
Hydrogen compressor
Liquid hydrogen
References
Sources
External links
Hydrogen Embrittlement group
California Hydrogen Highway
Hydrogen Highway, Norway to Germany
Interactive map of hydrogen stations in Europe and worldwide
Interactive map of hydrogen stations in Europe and worldwide (includes non-public stations)
H2Map.com Map of hydrogen refueling stations in the UK
H2stations.org Map of hydrogen refueling stations worldwide (GIS)
California Fuel Cell Partnership Map Map of hydrogen fueling stations in California, with real-time status reports
EUhyfis
ISO-TC 197
Industrial gases
Hydrogen storage
Pressure vessels
Pipeline transport
Piping
Hydrogen technologies
Road infrastructure
Hydrogen economy
Filling stations
Sustainable transport
Gas technologies | Hydrogen infrastructure | [
"Physics",
"Chemistry",
"Engineering"
] | 3,572 | [
"Structural engineering",
"Chemical equipment",
"Building engineering",
"Chemical engineering",
"Physical systems",
"Transport",
"Sustainable transport",
"Hydraulics",
"Industrial gases",
"Piping",
"Mechanical engineering",
"Chemical process engineering",
"Pressure vessels"
] |
15,126,504 | https://en.wikipedia.org/wiki/Parachor | Parachor is a quantity related to surface tension that was proposed by S. Sugden in 1924. It is defined according to the formula:
$$P = \frac{\gamma^{1/4} M}{\rho_{\mathrm{L}} - \rho_{\mathrm{V}}},$$
where $\gamma$ is the surface tension, $M$ is the molar mass, $\rho_{\mathrm{L}}$ is the liquid density, and $\rho_{\mathrm{V}}$ is the vapor density in equilibrium with the liquid. Parachor has a volume multiplier and is therefore extensible from components to mixtures. Parachor "has been used in solving various structural problems."
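Since the defining relation involves only bulk measurables, a pure-component parachor can be computed directly from tabulated data. The short Python sketch below illustrates this; the function name and the benzene-like input values are illustrative assumptions, not measured data.

```python
def parachor(surface_tension_mN_m, molar_mass_g_mol, rho_liquid_g_cm3, rho_vapor_g_cm3):
    """Sugden parachor: P = gamma**(1/4) * M / (rho_L - rho_V).

    Conventional units: surface tension in mN/m (= dyn/cm), molar mass in g/mol,
    densities in g/cm^3.
    """
    return (surface_tension_mN_m ** 0.25) * molar_mass_g_mol / (rho_liquid_g_cm3 - rho_vapor_g_cm3)


# Illustrative values roughly representative of benzene near room temperature;
# far from the critical point the vapor density is negligible.
print(parachor(28.2, 78.11, 0.876, 0.0))  # roughly 205
```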
The etymology of parachor is from a combination of prefix para "para," meaning "aside," and Greek "chor," meaning "space." Sugden in other publications showed that each compound had a characteristic parachor value. Since the work of Sugden, parachor has been used to "correlate" the surface tension data of a variety of pure liquids and liquid mixtures.
Boudh-Hir and Mansoori (1990) presented a general molecular theory for parachor valid for all ranges of temperature.
Using the molecular theory of Boudh-Hir and Mansoori, Escobedo and Mansoori (1996) produced an analytical solution for parachor as a function of temperature, valid at all temperatures from the melting point to the critical point. They also used the resulting analytic equation to predict surface tensions of a variety of liquids over this whole temperature range. It is shown to represent the experimental surface tension data of 94 different organic compounds within 1.05 AAD%. This analytic equation represents an accurate and generalized expression to predict surface tensions of pure liquids of practical interest.
Escobedo and Mansoori (1998) extended applications of the same theory to the case of mixtures of organic liquids. Using the proposed equation, surface tensions of 55 binary mixtures are predicted within an overall 0.50 AAD%, which is better than all the available prediction and correlation methods. When the resulting equations are made compound-insensitive using a corresponding states principle, the surface tensions of all the same 55 binary mixtures are predicted within an overall 2.10 AAD%. It is shown that the proposed model is also applicable to multicomponent liquid mixtures.
Surface Tension of Binary Mixtures
The surface tension of binary carbon dioxide mixtures was predicted using a modified parachor approach that took temperature-dependent characteristics into account.
Individual solvent parachors rise almost linearly with decreasing temperature. The exponent of the parachor equation drops consistently as the temperature is decreased for all binary mixtures.
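The classical starting point that such modified approaches build on is the Macleod–Sugden relation extended by Weinaug and Katz, in which the quarter power of the mixture surface tension is a parachor-weighted sum over the liquid and vapor compositions. The following Python sketch shows only that baseline mixing rule; the parachor values, compositions, and molar densities are hypothetical placeholders.

```python
def weinaug_katz_surface_tension(parachors, x_liquid, y_vapor, rho_liq_molar, rho_vap_molar):
    """Classical parachor mixing rule (Macleod-Sugden / Weinaug-Katz):

        gamma**(1/4) = sum_i P_i * (x_i * rho_liq - y_i * rho_vap)

    with molar densities in mol/cm^3 and parachors in the matching units;
    the returned surface tension is in mN/m (dyn/cm).
    """
    s = sum(P * (x * rho_liq_molar - y * rho_vap_molar)
            for P, x, y in zip(parachors, x_liquid, y_vapor))
    return max(s, 0.0) ** 4


# Hypothetical two-component example (all numbers are placeholders, not data):
print(weinaug_katz_surface_tension(
    parachors=[78.0, 190.0],          # CO2-like and hydrocarbon-like parachors
    x_liquid=[0.3, 0.7], y_vapor=[0.9, 0.1],
    rho_liq_molar=0.012, rho_vap_molar=0.002))
```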
References
Fluid mechanics | Parachor | [
"Engineering"
] | 525 | [
"Civil engineering",
"Fluid mechanics"
] |
15,127,042 | https://en.wikipedia.org/wiki/Thermodynamic%20integration | Thermodynamic integration is a method used to compare the difference in free energy between two given states (e.g., A and B) whose potential energies $U_A$ and $U_B$ have different dependences on the spatial coordinates. Because the free energy of a system is not simply a function of the phase space coordinates of the system, but is instead a function of the Boltzmann-weighted integral over phase space (i.e. partition function), the free energy difference between two states cannot be calculated directly from the potential energy of just two coordinate sets (for state A and B respectively). In thermodynamic integration, the free energy difference is calculated by defining a thermodynamic path between the states and integrating over ensemble-averaged enthalpy changes along the path. Such paths can either be real chemical processes or alchemical processes. An example alchemical process is the Kirkwood's coupling parameter method.
Derivation
Consider two systems, A and B, with potential energies $U_A$ and $U_B$. The potential energy in either system can be calculated as an ensemble average over configurations sampled from a molecular dynamics or Monte Carlo simulation with proper Boltzmann weighting. Now consider a new potential energy function $U(\lambda)$ defined as:

$$U(\lambda) = U_A + \lambda (U_B - U_A)$$

Here, $\lambda$ is defined as a coupling parameter with a value between 0 and 1, and thus the potential energy as a function of $\lambda$ varies from the energy of system A for $\lambda = 0$ to that of system B for $\lambda = 1$. In the canonical ensemble, the partition function of the system can be written as:

$$Q(N, V, T, \lambda) = \sum_{s} \exp\left[-U_s(\lambda)/k_{B}T\right]$$

In this notation, $U_s(\lambda)$ is the potential energy of state $s$ in the ensemble with potential energy function $U(\lambda)$ as defined above. The free energy of this system is defined as:

$$F(N, V, T, \lambda) = -k_{B}T \ln Q(N, V, T, \lambda),$$

If we take the derivative of $F$ with respect to $\lambda$, we will get that it equals the ensemble average of the derivative of potential energy with respect to $\lambda$:

$$\Delta F(A \rightarrow B) = F(N, V, T, \lambda = 1) - F(N, V, T, \lambda = 0) = \int_{0}^{1} \frac{\partial F(N, V, T, \lambda)}{\partial \lambda}\, d\lambda = \int_{0}^{1} \left\langle \frac{\partial U(\lambda)}{\partial \lambda} \right\rangle_{\lambda}\, d\lambda$$

The change in free energy between states A and B can thus be computed from the integral of the ensemble-averaged derivatives of potential energy over the coupling parameter $\lambda$. In practice, this is performed by defining a potential energy function $U(\lambda)$, sampling the ensemble of equilibrium configurations at a series of $\lambda$ values, calculating the ensemble-averaged derivative of $U(\lambda)$ with respect to $\lambda$ at each $\lambda$ value, and finally computing the integral over the ensemble-averaged derivatives.
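As a concrete illustration of the final quadrature step of this recipe, the Python sketch below integrates pre-computed ensemble averages of ∂U/∂λ over λ. It assumes those averages have already been obtained from separate equilibrium simulations at a grid of λ values; the numerical values shown are hypothetical.

```python
import numpy as np

def free_energy_difference(lambdas, dU_dlambda_averages):
    """Trapezoidal quadrature of <dU/dlambda> over the coupling parameter.

    lambdas             : 1D array of coupling-parameter values spanning [0, 1]
    dU_dlambda_averages : ensemble averages <dU/dlambda> at each lambda, each
                          obtained from a separate equilibrated simulation
    Returns the estimated free energy difference F_B - F_A.
    """
    return np.trapz(dU_dlambda_averages, lambdas)


# Hypothetical averages from 11 independent simulations at evenly spaced lambdas.
lambdas = np.linspace(0.0, 1.0, 11)
dU_dlambda = np.array([-3.1, -2.4, -1.8, -1.1, -0.6, 0.0, 0.5, 1.2, 1.9, 2.5, 3.2])
print("Delta F ~", free_energy_difference(lambdas, dU_dlambda))
```

In practice the λ grid is often made denser where ⟨∂U/∂λ⟩ varies rapidly, since the quadrature error dominates the cost of the extra simulations there.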
Umbrella sampling is a related free energy method. It adds a bias to the potential energy. In the limit of an infinite strong bias it is equivalent to thermodynamic integration.
See also
Free energy perturbation
Bennett acceptance ratio
Parallel tempering
Alchemy
References
Computational chemistry
Statistical mechanics | Thermodynamic integration | [
"Physics",
"Chemistry"
] | 502 | [
"Theoretical chemistry",
"Statistical mechanics",
"Computational chemistry"
] |
15,127,988 | https://en.wikipedia.org/wiki/Excess%20chemical%20potential | In thermodynamics, the excess chemical potential is defined as the difference between the chemical potential of a given species and that of an ideal gas under the same conditions (in particular, at the same pressure, temperature, and composition).
The chemical potential of a particle species is therefore given by an ideal part and an excess part.
Chemical potential of a pure fluid can be estimated by the Widom insertion method.
Derivation and Measurement
For a system of $N$ particles of diameter $\sigma$ and volume $V$, at constant temperature $T$, the classical canonical partition function

$$Q(N, V, T) = \frac{V^{N}}{\Lambda^{3N} N!} \int \mathrm{d}\mathbf{s}^{N} \exp\left[-\beta U(\mathbf{s}^{N})\right]$$

with $\mathbf{s} = \mathbf{r}/L$ a scaled coordinate, the free energy is given by:

$$F(N, V, T) = -k_{B}T \ln Q = -k_{B}T \ln\left(\frac{V^{N}}{\Lambda^{3N} N!}\right) - k_{B}T \ln \int \mathrm{d}\mathbf{s}^{N} \exp\left[-\beta U(\mathbf{s}^{N})\right] = F_{\mathrm{id}} + F_{\mathrm{ex}}$$

Combining the above equation with the definition of chemical potential,

$$\mu = \left(\frac{\partial F}{\partial N}\right)_{V, T}$$

we get the chemical potential of a sufficiently large system from $\mu = F(N+1, V, T) - F(N, V, T)$ (and the fact that the smallest allowed change in the particle number is $\Delta N = 1$)

$$\mu = -k_{B}T \ln\frac{Q_{N+1}}{Q_{N}} = -k_{B}T \ln\left[\frac{V}{\Lambda^{3}(N+1)}\right] - k_{B}T \ln\frac{\int \mathrm{d}\mathbf{s}^{N+1} \exp\left[-\beta U(\mathbf{s}^{N+1})\right]}{\int \mathrm{d}\mathbf{s}^{N} \exp\left[-\beta U(\mathbf{s}^{N})\right]} = \mu_{\mathrm{id}} + \mu_{\mathrm{ex}}$$

wherein the chemical potential of an ideal gas, $\mu_{\mathrm{id}}$, can be evaluated analytically.

Now let's focus on $\mu_{\mathrm{ex}}$. Since the potential energy of an $(N+1)$-particle system can be separated into the potential energy of an $N$-particle system and the potential of the excess particle interacting with the system, that is,

$$\Delta U \equiv U(\mathbf{s}^{N+1}) - U(\mathbf{s}^{N}),$$

and

$$\mu_{\mathrm{ex}} = -k_{B}T \ln \int \mathrm{d}\mathbf{s}_{N+1} \left\langle \exp(-\beta \Delta U) \right\rangle_{N}$$
Thus far we converted the excess chemical potential into an ensemble average, and the integral in the above equation can be sampled by the brute force Monte Carlo method.
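A brute-force estimate of this ensemble average can be obtained by repeatedly inserting a test particle at random positions into configurations sampled from the N-particle ensemble and averaging the Boltzmann factor of its interaction energy. The Python sketch below illustrates the bookkeeping for a toy Lennard-Jones fluid; the configurations are assumed to come from a separate MD or Monte Carlo simulation, and all parameter values are placeholders.

```python
import numpy as np

def widom_excess_mu(configurations, box_length, beta, n_insertions=1000,
                    epsilon=1.0, sigma=1.0):
    """Estimate the excess chemical potential of a Lennard-Jones fluid by
    Widom test-particle insertion (minimum-image convention, cubic box)."""
    boltzmann_factors = []
    for positions in configurations:            # equilibrium N-particle snapshots, shape (N, 3)
        for _ in range(n_insertions):
            test = np.random.rand(3) * box_length          # random trial position
            d = positions - test
            d -= box_length * np.round(d / box_length)     # minimum-image displacement
            r2 = np.sum(d * d, axis=1)
            inv6 = (sigma * sigma / r2) ** 3               # (sigma/r)^6
            dU = np.sum(4.0 * epsilon * (inv6 * inv6 - inv6))
            boltzmann_factors.append(np.exp(-beta * dU))
    return -np.log(np.mean(boltzmann_factors)) / beta

# Hypothetical use: 'snapshots' would be a list of (N, 3) arrays from a prior MD/MC run.
# mu_ex = widom_excess_mu(snapshots, box_length=10.0, beta=1.0)
```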
The calculating of excess chemical potential is not limited to homogeneous systems, but has also been extended to inhomogeneous systems by the Widom insertion method, or other ensembles such as NPT and NVE.
See also
Apparent molar property
References
Note: the equations and presentation in this article are drawn from Excess Chemical Potential via the Widom Method
Potentials
Thermodynamics
Chemical thermodynamics
ro:Mărimi molare de exces | Excess chemical potential | [
"Physics",
"Chemistry",
"Mathematics"
] | 331 | [
"Chemical thermodynamics",
"Thermodynamics",
"Dynamical systems"
] |
15,128,823 | https://en.wikipedia.org/wiki/Phenom%20%28electron%20microscope%29 | Phenom is a small, table-top sized scanning electron microscope (SEM) originally developed by Philips and FEI and further developed by Phenom-World.
Features
The microscope features a combination of optical and electron-optical images; the optical image enables a "Neverlost" function so operators may navigate to any point on the sample. Sample loading takes place in four seconds (to obtain the CMOS overview image) and 30 seconds into the vacuum space via rapid transfer technology (no conventional load lock).
The Phenom system user interface is controlled with a touch screen. It achieves magnifications of up to 100,000 times with a resolution down to 15 nm. An optional fully integrated X-ray analysis (EDS) system shows the composition of the sample in a few seconds.
External links
US Patent #7906762 - Compact scanning electron microscope
Publication in: Systems Research Forum (SRF) Vol. 1 (2006) of the Stevens Institute of technology
Microscopes | Phenom (electron microscope) | [
"Chemistry",
"Technology",
"Engineering"
] | 208 | [
"Microscopes",
"Measuring instruments",
"Microscopy"
] |
15,131,179 | https://en.wikipedia.org/wiki/Alogliptin | Alogliptin, sold under the brand names Nesina and Vipidia, is an oral anti-diabetic drug in the DPP-4 inhibitor (gliptin) class. Like other members of the gliptin class, it causes little or no weight gain, exhibits relatively little risk of hypoglycemia, and has relatively modest glucose-lowering activity. Alogliptin and other gliptins are commonly used in combination with metformin in people whose diabetes cannot adequately be controlled with metformin alone.
In April 2016, the U.S. Food and Drug Administration (FDA) added a warning about increased risk of heart failure. It was developed by Syrrx, a company which was acquired by Takeda Pharmaceutical Company in 2005. In 2020, it was the 295th most commonly prescribed medication in the United States, with more than 1 million prescriptions.
Medical uses
Alogliptin is a dipeptidyl peptidase-4 (DPP-4) inhibitor that decreases blood sugar levels similarly to other DPP-4 inhibitors.
Side effects
Adverse events include hypoglycemia, pruritus (itching), nasopharyngitis, headache, and upper respiratory tract infection. It may also cause joint pain that can be severe and disabling. Like other DPP-4 inhibitors, alogliptin is weight-neutral.
A 2014 letter to the editor claimed alogliptin is not associated with increased risk of cardiovascular events. In April 2016, the U.S. Food and Drug Administration (FDA) added a warning about increased risk of heart failure.
Market access
In December 2007, Takeda submitted a New Drug Application (NDA) for alogliptin to the United States Food and Drug Administration (FDA), after positive results from Phase III clinical trials. In September 2008, the company also filed for approval in Japan, winning approval in April 2010. The company also filed a Marketing Authorization Application elsewhere outside the United States, which was withdrawn in June 2009 needing more data. The first NDA failed to gain approval and was followed by a pair of NDAs (one for alogliptin and a second for a combination of alogliptin and pioglitazone) in July 2011. In 2012, Takeda received a negative response from the FDA on both of these NDAs, citing a need for additional data.
In 2013, the FDA approved the drug in three formulations: as a stand-alone with the brand-name Nesina, combined with metformin using the name Kazano, and when combined with pioglitazone as Oseni.
References
External links
Dipeptidyl peptidase-4 inhibitors
Nitriles
Piperidines
Ureas
Imides
Pyrimidinediones
Enantiopure drugs
Drugs developed by Takeda Pharmaceutical Company
Sanofi | Alogliptin | [
"Chemistry"
] | 592 | [
"Stereochemistry",
"Functional groups",
"Enantiopure drugs",
"Organic compounds",
"Imides",
"Nitriles",
"Ureas"
] |
11,458,899 | https://en.wikipedia.org/wiki/Metka | METKA ATE is the business unit of the Greek company Mytilineos S.A., undertaking the construction of large-scale projects in the sectors of energy, infrastructure and defence.
Metka’s main business activity is in construction of large power generation plants, most notably highly efficient combined cycle power plants. The company also has significant industrial manufacturing facilities, which enables it to produce specialized mechanical equipment, fabrications and machinery used in industrial and defence applications.
Metka is also classified in the highest category of construction contractors for major public works projects in Greece.
History
1962–1980
Metka was founded in 1962 by the Hellenic Industrial Development Bank in the port city of Volos, Central Greece.
In 1964, Metka's manufacturing plant for metal constructions began operation, with activities relating mainly to the construction of large and sophisticated metal and mechanical projects.
In 1971 Metka was privatized and in 1973 its shares were listed on the Athens Stock Exchange, a move which was followed by acquisitions that enabled Metka to become a contractor for large-scale projects. Metka also carried out its first international projects in this period.
1980–1998
In 1980 the company absorbed the technical contracting firm Technom S.A. thus acquiring the capacity to build and assemble items at an industrialized level and obtaining the ability to undertake and implement large-scale public works.
In 1989, the company acquired the "Hellenic Steel Process Industry" (Servisteel), whose modern automated equipment allowed Metka to start industrializing metal works (blasting, cutting, drilling).
After 39 consecutive years of operation, Metka opened up to new areas of activity such as energy, defence, renewable energy sources, exports and refineries.
1998–2009
Between July 1998 and January 1999, Mytilineos gradually acquired a controlling interest in the company, and in early 1999 the acquisition was officially completed.
In December 1999, Metka acquired a 40% share of EKME, a company dealing mainly with the design and construction of units for petrochemical and power production plants.
In 2006, Metka acquired Elemka, a company specialized in civil engineering applications.
2009–2014
In 2009, Metka established Power Projects Limited, a subsidiary in Turkey.
In 2012, Metka opened a representation office in Algeria and developed a series of energy projects, particularly with mobile power generating units, on a fast-track basis.
2015–2016
In 2015, Metka established a new representation office in Ghana, following the company's strategic focus on African markets with booming energy needs.
Also in 2015, Metka established the new affiliated company Metka EGN as a result of a joint venture with Egnatia Group, aiming to further strengthen the company's portfolio of activities, as well as its positioning in the rapidly growing solar power market.
In 2016, Metka undertook its second major project in Ghana, in consortium with General Electric, for the engineering, procurement, construction and commissioning of a 192 MW combined cycle power plant in Takoradi. Metka EGN also signed new solar and hybrid power EPC contracts with a total capacity of 75 MW.
Also in 2016, Metka entered a strategic partnership in the off-grid power market with International Power Supply (IPS), the manufacturer of the award-winning all-in-one modular power conversion system EXERON.
2017
In 2017, the merger of Mytilineos Holdings S.A. with its principal subsidiaries Metka S.A., Protergia S.A. and Aluminium of Greece I.C.S.A. was announced.
Operations
Metka's main offices are in Athens, Greece, with operations in several countries throughout the Middle East and Africa. The company's industrial facilities are located in the port city of Volos.
Metka focuses on serving the needs of international customers and markets, mainly in Europe, the Middle East and Africa.
See also
List of Greek Companies
References
External links
Corporate Brochure
Energy Brochure
Official Website
Mytilineos Holdings S.A.
Metka EGN
The Top 225 Global Contractors
The only Greek stock at Forbes’ investment guide
Metka on Athens Exchange
L.S. Skartsis, "Greek Vehicle & Machine Manufacturers 1800 to present: A Pictorial History", Marathon (2012) (eBook)
EXERON
Companies listed on the Athens Exchange
Companies in the FTSE/Athex Large Cap
Energy engineering and contractor companies
Engineering companies of Greece
Construction and civil engineering companies of Greece
Construction and civil engineering companies established in 1962
Greek companies established in 1962
Mytilineos SA
Multinational companies headquartered in Greece
Greek brands | Metka | [
"Engineering"
] | 931 | [
"Energy engineering and contractor companies",
"Engineering companies"
] |
11,460,461 | https://en.wikipedia.org/wiki/Macrosporium%20cocos | Macrosporium cocos is a plant pathogen.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Pleosporaceae
Fungus species | Macrosporium cocos | [
"Biology"
] | 38 | [
"Fungi",
"Fungus species"
] |
11,461,759 | https://en.wikipedia.org/wiki/Heme%20arginate | Heme arginate (or haem arginate) is a compound of heme and arginine used in the treatment of acute porphyrias. This heme product is only available outside the United States and is equivalent to hematin.
Heme arginate is a heme compound, whereby L-arginine is added to prevent rapid degradation. It is given intravenously, and its mechanism of action is to reduce the overproduction of δ-aminolevulinic acid, which can cause the acute symptoms in an attack of the acute porphyrias.
See also
Acute intermittent porphyria
Aminolevulinic acid
Inborn error of metabolism
References
Porphyrins | Heme arginate | [
"Chemistry"
] | 145 | [
"Porphyrins",
"Biomolecules"
] |
11,461,804 | https://en.wikipedia.org/wiki/Hemin | Hemin (haemin; ferric chloride heme) is an iron-containing porphyrin with chlorine that can be formed from a heme group, such as heme B found in the hemoglobin of human blood.
Chemistry
Hemin is protoporphyrin IX containing a ferric iron (Fe3+) ion with a coordinating chloride ligand.
Chemically, hemin differs from the related heme-compound hematin chiefly in that the coordinating ion is a chloride ion in hemin, whereas the coordinating ion is a hydroxide ion in hematin. The iron ion in heme is ferrous (Fe2+), whereas it is ferric (Fe3+) in both hemin and hematin.
Hemin is endogenously produced in the human body, for example during the turnover of old red blood cells. It can form inappropriately as a result of hemolysis or vascular injury. Several proteins in human blood bind to hemin, such as hemopexin and serum albumin.
Pharmacological use
A lyophilised form of hemin is used as a pharmacological agent in certain cases for the treatment of porphyria attacks, particularly in acute intermittent porphyria. Administration of hemin can reduce heme deficits in such patients, thereby suppressing the activity of delta-amino-levulinic acid synthase (a key enzyme in the synthesis of the porphyrins) by biochemical feedback, which in turn reduces the production of porphyrins and of the toxic precursors of heme. In such pharmacological contexts, hemin is typically formulated with human albumin prior to administration by a medical professional, to reduce the risk of phlebitis and to stabilize the compound, which is potentially reactive if allowed to circulate in free-form. Such pharmacological forms of hemin are sold under a range of trade names including the trademarks Panhematin and Normosang.
History of isolation
Hemin was first crystallized out of blood in 1853, by Ludwik Karol Teichmann. Teichmann discovered that blood pigments can form microscopic crystals. Thus, crystals of hemin are occasionally referred to as 'Teichmann crystals'. Hans Fischer synthesized hemin, for which he was awarded the Nobel Prize in Chemistry in 1930. Fischer's procedure involves treating defibrinated blood with a solution of sodium chloride in acetic acid.
Forensics
Hemin can be produced from hemoglobin by the so-called Teichmann test, when hemoglobin is heated with glacial acetic acid (saturated with saline). This can be used to detect blood traces.
Other
Hemin is considered the "X factor" required for the growth of Haemophilus influenzae.
References
External links
Porphyrins
Orphan drugs | Hemin | [
"Chemistry"
] | 594 | [
"Porphyrins",
"Biomolecules"
] |
11,462,382 | https://en.wikipedia.org/wiki/Self-tuning | In control theory a self-tuning system is capable of optimizing its own internal running parameters in order to maximize or minimize the fulfilment of an objective function; typically the maximization of efficiency or error minimization.
Self-tuning and auto-tuning often refer to the same concept. Many software research groups consider auto-tuning the proper nomenclature.
Self-tuning systems typically exhibit non-linear adaptive control. Self-tuning systems have been a hallmark of the aerospace industry for decades, as this sort of feedback is necessary to generate optimal multi-variable control for non-linear processes. In the telecommunications industry, adaptive communications are often used to dynamically modify operational system parameters to maximize efficiency and robustness.
Examples
Examples of self-tuning systems in computing include:
TCP (Transmission Control Protocol)
Microsoft SQL Server (Newer implementations only)
FFTW (Fastest Fourier Transform in the West)
ATLAS (Automatically Tuned Linear Algebra Software)
libtune (Tunables library for Linux)
PhiPAC (Self Tuning Linear Algebra Software for RISC)
MILEPOST GCC (Machine learning based self-tuning compiler)
Performance benefits can be substantial. Professor Jack Dongarra, an American computer scientist, claims self-tuning boosts performance, often on the order of 300%.
Digital self-tuning controllers are an example of self-tuning systems at the hardware level.
Architecture
Self-tuning systems are typically composed of four components: expectations, measurement, analysis, and actions. The expectations describe how the system should behave given exogenous conditions.
Measurements gather data about the conditions and behaviour. Analysis helps determine whether the expectations are being met and which subsequent actions should be performed. Common actions are gathering more data and performing dynamic reconfiguration of the system.
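As a concrete illustration of this four-part structure, the following Python sketch wires an expectation, a measurement, an analysis step and an action into a single monitoring loop. It is only a minimal outline: the measure and reconfigure callables, the target value and the tolerance are placeholder assumptions made for the example, not parts of any particular self-tuning system.

def self_tuning_loop(measure, reconfigure, target, tolerance=0.05, max_rounds=100):
    """Minimal expectation / measurement / analysis / action cycle."""
    observed = measure()
    for _ in range(max_rounds):
        observed = measure()                      # measurement: gather data about behaviour
        error = abs(observed - target)            # analysis: compare against the expectation
        if error <= tolerance * abs(target):      # expectation met, no further action needed
            break
        reconfigure(error)                        # action: dynamically reconfigure the system
    return observed

In a real system the measurement might be, for example, a throughput counter, and the action a change to a buffer size or thread count.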
Self-tuning (self-adapting) systems of automatic control are systems whereby adaptation to randomly changing conditions is performed by means of automatically changing parameters or via automatically determining their optimum configuration. In any non-self-tuning automatic control system there are parameters which have an influence on system stability and control quality and which can be tuned. If these parameters remain constant whilst operating conditions (such as input signals or different characteristics of controlled objects) are substantially varying, control can degrade or even become unstable. Manual tuning is often cumbersome and sometimes impossible. In such cases, not only is using self-tuning systems technically and economically worthwhile, but it could be the only means of robust control. Self-tuning systems can be with or without parameter determination.
In systems with parameter determination the required level of control quality is achieved by automatically searching for an optimum (in some sense) set of parameter values. Control quality is described by a generalised characteristic which is usually a complex and not completely known or stable function of the primary parameters. This characteristic is either measured directly or computed based on the primary parameter values. The parameters are then tentatively varied. An analysis of the control quality characteristic oscillations caused by the varying of the parameters makes it possible to figure out if the parameters have optimum values, i.e. if those values deliver extreme (minimum or maximum) values of the control quality characteristic. If the characteristic values deviate from an extremum, the parameters need to be varied until optimum values are found. Self-tuning systems with parameter determination can reliably operate in environments characterised by wide variations of exogenous conditions.
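The search by tentative variation described above can be sketched as a simple hill-climbing routine. In this Python outline the quality characteristic is an arbitrary callable, and the initial value, step size and number of trials are assumptions chosen only for illustration.

def tune_by_perturbation(quality, value, step=0.1, trials=50):
    """Vary a single parameter tentatively; keep changes that improve quality(value)."""
    best = quality(value)
    for _ in range(trials):
        improved = False
        for delta in (step, -step):            # tentative variation in both directions
            candidate = value + delta
            q = quality(candidate)             # measure the control quality characteristic
            if q > best:                       # the variation moved towards the extremum
                value, best = candidate, q
                improved = True
                break
        if not improved:
            step *= 0.5                        # near an optimum: refine the variation
    return value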
In practice, systems with parameter determination require considerable time to find an optimum tuning, i.e. the time necessary for self-tuning in such systems is bounded from below. Self-tuning systems without parameter determination do not have this disadvantage. In such systems, some characteristic of control quality is used (e.g., the first time derivative of a controlled parameter). Automatic tuning makes sure that this characteristic is kept within given bounds. Different self-tuning systems without parameter determination exist that are based on controlling transitional processes, frequency characteristics, etc. All of those are examples of closed-circuit self-tuning systems, whereby parameters are automatically corrected every time the quality characteristic value falls outside the allowable bounds. In contrast, open-circuit self-tuning systems are systems with parametric compensation, whereby the input signal itself is controlled and system parameters are changed according to a specified procedure. This type of self-tuning can be close to instantaneous. However, in order to realise such self-tuning one needs to control the environment in which the system operates, and a good enough understanding of how the environment influences the controlled system is required.
In practice self-tuning is done through the use of specialised hardware or adaptive software algorithms. Giving software the ability to self-tune (adapt):
Facilitates controlling critical processes of systems;
Approaches optimum operation regimes;
Facilitates design unification of control systems;
Shortens the lead times of system testing and tuning;
Lowers the criticality of technological requirements on control systems by making the systems more robust;
Saves personnel time for system tuning.
Metaheuristics
Literature
External links
Using Probabilistic Reasoning to Automate Software Tuning
Frigo, M. and Johnson, S. G., "The design and implementation of FFTW3", Proceedings of the IEEE, 93(2), February 2005, 216 - 231. .
Optimizing Matrix Multiply using PHiPAC: a Portable, High-Performance, ANSI C Coding Methodology
Faster than a Speeding Algorithm
Rethinking Database System Architecture: Towards a Self-tuning RISC-style Database System
Self-Tuning Systems Software
Microsoft Research Adds Data Mining and Self-tuning Technology to SQL Server 2000
A Comparison of TCP Automatic Tuning Techniques for Distributed Computing
Tunables library for Linux
A Review of Relay Auto-tuning Methods for the Tuning of PID-type Controllers
Control engineering
Control theory
Electronic feedback | Self-tuning | [
"Mathematics",
"Engineering"
] | 1,167 | [
"Applied mathematics",
"Control theory",
"Control engineering",
"Dynamical systems"
] |
19,074,000 | https://en.wikipedia.org/wiki/Refractory%20%28planetary%20science%29 | In planetary science, any material that has a relatively high equilibrium condensation temperature is called refractory. The opposite of refractory is volatile.
The refractory group includes elements and compounds like metals and silicates (commonly termed rocks) which make up the bulk of the mass of the terrestrial planets and asteroids in the inner belt. A fraction of the mass of other asteroids, giant planets, their moons and trans-Neptunian objects is also made of refractory materials.
Classification
The elements can be divided into several categories:
The condensation temperatures are the temperatures at which 50% of the element will be in the form of a solid (rock) under a pressure of 10⁻⁴ bar. However, slightly different groupings and temperature ranges are sometimes used. Refractory materials are also often divided into refractory lithophile elements and refractory siderophile elements.
References
Planetary geology
Petrology
Volcanology
Astrobiology
Origins
Prebiotic chemistry | Refractory (planetary science) | [
"Chemistry",
"Astronomy",
"Biology"
] | 200 | [
"Origin of life",
"Speculative evolution",
"Prebiotic chemistry",
"Astrobiology",
"Biological hypotheses",
"Astronomical sub-disciplines"
] |
19,074,153 | https://en.wikipedia.org/wiki/PGF/TikZ | PGF/TikZ is a pair of languages for producing vector graphics (e.g., technical illustrations and drawings) from a geometric/algebraic description, with standard features including the drawing of points, lines, arrows, paths, circles, ellipses and polygons. PGF is a lower-level language, while TikZ is a set of higher-level macros that use PGF. The top-level PGF and TikZ commands are invoked as TeX macros, but in contrast with PSTricks, the PGF/TikZ graphics themselves are described in a language that resembles MetaPost. Till Tantau is the designer of the PGF and TikZ languages. He is also the main developer of the only known interpreter for PGF and TikZ, which is written in TeX. PGF is an acronym for "Portable Graphics Format". TikZ was introduced in version 0.95 of PGF, and it is a recursive acronym for "TikZ ist kein Zeichenprogramm" (German for "TikZ is not a drawing program").
Overview
The PGF/TikZ interpreter can be used from the popular LaTeX and ConTeXt macro packages, and also directly from the original TeX. Since TeX itself is not concerned with graphics, the interpreter supports multiple TeX output backends: dvips, dvipdfm/dvipdfmx/xdvipdfmx, TeX4ht, and pdftex's internal PDF output driver. Unlike PSTricks, PGF can thus directly produce either PostScript or PDF output, but it cannot use some of the more advanced PostScript programming features that PSTricks can use due to the "least common denominator" effect. PGF/TikZ comes with extensive documentation; version 3.1.4a of the manual has over 1300 pages.
The standard LaTeX picture environment can also be used as a front end for PGF by using the pgfpict2e package.
The project has been under constant development since 2005. Most of the development until 2018 was done by Till Tantau and since then Henri Menke has been the main contributor. Version 3.0.0 was released on 20 December 2013. One of the major new features of this version was graph drawing using the graphdrawing package, which however requires LuaTeX. This version also added a new data visualization method and support for direct SVG output via the new dvisvgm driver.
Export
Several graphical editors can produce output for PGF/TikZ, such as the KDE program Cirkuit and the math drawing program GeoGebra. Export to TikZ is also available as extensions for Inkscape, Blender, MATLAB, matplotlib, Gnuplot, Julia, and R. The circuit-macros package of m4 macros exports circuit diagrams to TikZ using the dpic -g command line option. The dot2tex program can convert files in the DOT graph description language to PGF/TikZ.
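As one illustration of such an export path, the Python snippet below draws a figure with matplotlib and writes it out as TikZ/PGFPlots source using the third-party tikzplotlib package. The package choice and the output file name are assumptions of this example (tikzplotlib is one commonly used matplotlib-to-TikZ exporter, not part of PGF/TikZ itself); the resulting .tex file can then be input into a LaTeX document.

import matplotlib
matplotlib.use("Agg")              # render off-screen; no display is needed
import matplotlib.pyplot as plt
import tikzplotlib                 # third-party exporter, assumed to be installed

# Draw a simple curve with matplotlib.
plt.plot([0, 1, 2, 3], [0, 1, 4, 9], marker="o")
plt.xlabel("x")
plt.ylabel("x squared")

# Save the figure as TikZ/PGFPlots code instead of a bitmap.
tikzplotlib.save("figure.tex")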
Libraries
TikZ features libraries for easy drawing of many kinds of diagrams, such as the following (alphabetized by library name):
3D drawing (3d)
Finite automata and Turing machines (automata)
Coordinate system calculations (calc)
Calendars (calendar)
Chains: nodes typically connected by edges and arranged in rows and columns (chain)
Logic circuit and electrical circuit diagrams (circuits.logic and circuits.ee)
Entity–relationship diagrams (er)
Polygon folding diagrams (folding)
Graph drawing with automatic layout options (graphdrawing)
L-system drawings (lindenmayersystems)
Sequences of basic math operations (math)
Matrices (matrix)
Mind maps (mindmap)
Three-point perspective drawings (perspective)
Petri nets (petri)
Quantum circuits (quantikz)
RDF semantic annotations, only in SVG output (rdf)
Special shapes and symbols (shapes.geometric and shapes.symbols)
Magnification of part of a graphic in an inset (spy)
Paths in SVG syntax (svg.path)
Trees (trees)
Turtle graphics (turtle)
Zooming and panning graphics (views)
Gallery
The following images were created with TikZ and show some examples of the range of graphic types that can be produced. The link in each caption points to the source code for the image.
See also
Asymptote (vector graphics language)
References
Further reading
Conference talk video (version archived by archive.org; the previous site is unavailable) based on an earlier version of that paper.
Comparison of several graphics systems in LaTeX.
According to a 2011 review of the book in TUGboat: "It contains a detailed introduction to the TikZ suite—probably one of the best existing descriptions of this highly useful package."
External links
PGF/TikZ on CTAN
PGF/TikZ manual on CTAN
PGF/TikZ gallery at TeXample.net
Cross-platform free software
Free TeX software
Graph description languages
Graph drawing software
Object-oriented programming languages
TeX SourceForge projects
Vector graphics markup languages | PGF/TikZ | [
"Mathematics"
] | 1,046 | [
"Graph description languages",
"Mathematical relations",
"Graph theory"
] |
19,074,264 | https://en.wikipedia.org/wiki/Amalgam%20%28chemistry%29 | An amalgam is an alloy of mercury with another metal. It may be a liquid, a soft paste or a solid, depending upon the proportion of mercury. These alloys are formed through metallic bonding, with the electrostatic attractive force of the conduction electrons working to bind all the positively charged metal ions together into a crystal lattice structure. Almost all metals can form amalgams with mercury, the notable exceptions being iron, platinum, tungsten, and tantalum. Silver-mercury amalgams are important in dentistry, and gold-mercury amalgam is used in the extraction of gold from ore. Dentistry has used alloys of mercury with metals such as silver, copper, indium, tin and zinc.
Important amalgams
Zinc amalgam
Zinc amalgam finds use in organic synthesis (e.g., for the Clemmensen reduction).
It is the reducing agent in the Jones reductor, used in analytical chemistry. Formerly the zinc plates of dry batteries were amalgamated with a small amount of mercury to prevent deterioration in storage. It is a binary solution (liquid-solid) of mercury and zinc.
Potassium amalgam
For the alkali metals, amalgamation is exothermic, and distinct chemical forms can be identified, such as KHg and KHg2. KHg is a gold-coloured compound with a melting point of 178 °C, and KHg2 a silver-coloured compound with a melting point of 278 °C. These amalgams are very sensitive to air and water, but can be worked with under dry nitrogen. The Hg-Hg distance is around 300 picometres, Hg-K around 358 pm.
Phases K5Hg7 and KHg11 are also known; rubidium, strontium and barium undecamercurides are known and isostructural. Sodium amalgam (NaHg2) has a different structure, with the mercury atoms forming hexagonal layers, and the sodium atoms a linear chain which fits into the holes in the hexagonal layers, but the potassium atom is too large for this structure to work in KHg2.
Sodium amalgam
Sodium amalgam is produced as a byproduct of the chloralkali process and used as an important reducing agent in organic and inorganic chemistry. With water, it decomposes into concentrated sodium hydroxide solution, hydrogen and mercury, which can then return to the chloralkali process anew. If absolutely water-free alcohol is used instead of water, an alkoxide of sodium is produced instead of the alkali solution.
Aluminium amalgam
Aluminium can form an amalgam through a reaction with mercury. Aluminium amalgam may be prepared by either grinding aluminium pellets or wire in mercury, or by allowing aluminium wire or foil to react with a solution of mercuric chloride. This amalgam is used as a reagent to reduce compounds, such as the reduction of imines to amines. The aluminium is the ultimate electron donor, and the mercury serves to mediate the electron transfer.
The reaction itself and the waste from it contain mercury, so special safety precautions and disposal methods are needed. As an environmentally friendlier alternative, hydrides or other reducing agents can often be used to accomplish the same synthetic result. Another environmentally friendly alternative is an alloy of aluminium and gallium which similarly renders the aluminium more reactive by preventing it from forming an oxide layer.
Tin amalgam
Tin amalgam was used in the middle of the 19th century as a reflective mirror coating.
Other amalgams
A variety of amalgams are known that are of interest mainly in the research context.
Ammonium amalgam is a grey, soft, spongy mass discovered in 1808 by Humphry Davy and Jöns Jakob Berzelius. It decomposes readily at room temperature or in contact with water or alcohol into ammonia, hydrogen and mercury.
Thallium amalgam has a freezing point of −58 °C, which is lower than that of pure mercury (−38.8 °C) so it has found a use in low temperature thermometers.
Gold amalgam: Refined gold, when finely ground and brought into contact with mercury where the surfaces of both metals are clean, amalgamates readily and quickly forms alloys ranging from AuHg2 to Au8Hg.
Lead forms an amalgam when filings are mixed with mercury and is also listed as a naturally occurring alloy called leadamalgam in the Nickel–Strunz classification.
Dental amalgam
Dentistry has used alloys of mercury with metals such as silver, copper, indium, tin and zinc. Amalgam is an "excellent and versatile restorative material" and is used in dentistry because it is inexpensive and relatively easy to use and manipulate during placement. It remains soft for a short time so it can be packed to fill any irregular volume, and then forms a hard compound. Amalgam possesses greater longevity when compared to other direct restorative materials, such as composite. However, this difference has decreased with continual development of composite resins.
Amalgam is typically compared to resin-based composites because many applications are similar and many physical properties and costs are comparable.
Dental amalgam has been studied and is generally considered to be safe for humans, though the validity of some studies and their conclusions have been questioned.
In July 2018 the EU, in consideration of the persistent pollution and environmental toxicity of amalgam's mercury, prohibited amalgam for dental treatment of children under 15 years and of pregnant or breastfeeding women.
Use in mining
Mercury has been used in gold and silver mining because of the convenience and the ease with which mercury and the precious metals will amalgamate. In gold placer mining, in which minute specks of gold are washed from sand or gravel deposits, mercury was often used to separate the gold from other heavy minerals.
After all of the practical metal had been extracted from the ore, the mercury was dispensed down a long copper trough, which formed a thin coating of mercury on the exterior. The waste ore was then transferred down the trough, and gold in the waste amalgamated with the mercury. This coating would then be scraped off and refined by evaporation to get rid of the mercury, leaving behind relatively high-purity gold.
Mercury amalgamation was first used on silver ores with the development of the patio process in Mexico in 1557. There were also additional amalgamation processes that were created for processing silver ores, including pan amalgamation and the Washoe process.
Gold amalgam
Gold extraction (mining)
Gold amalgam has proved effective where gold fines ("flour gold") would not be extractable from ore using hydro-mechanical methods. Large amounts of mercury were used in placer mining, where deposits composed largely of decomposed granite slurry were separated in long runs of "riffle boxes", with mercury dumped in at the head of the run. The amalgam formed is a heavy dull gray solid mass. The use of mercury in 19th century placer mining in California, now prohibited, has caused extensive pollution problems in riverine and estuarine environments, ongoing to this day. Sometimes substantial slugs of amalgam are found in downstream river and creek bottoms by amateur wet-suited miners seeking gold nuggets with the aid of an engine-powered water vacuum/dredge mounted on a float.
Gold extraction (ore processing)
Where stamp mills were used to crush gold-bearing ore to fines, a part of the extraction process involved the use of mercury-wetted copper plates, over which the crushed fines were washed. A periodic scraping and re-mercurizing of the plate resulted in amalgam for further processing.
Gold extraction (retorting)
Amalgam obtained by either process was then heated in a distillation retort, recovering the mercury for reuse and leaving behind the gold. As this released mercury vapors to the atmosphere, the process could induce adverse health effects and long term pollution.
Today, mercury amalgamation has been replaced by other methods to recover gold and silver from ore in developed nations. Hazards of mercurial toxic waste have played a major role in the phasing out of the mercury amalgamation processes. Mercury amalgamation is still regularly used by small-scale gold placer miners (often illegally), particularly in developing countries.
Amalgam probe
Mercury salts are, compared to mercury metal and amalgams, highly toxic due to their solubility in water. The presence of these salts in water can be detected with a probe that uses the readiness of mercury ions to form an amalgam with copper. A nitric acid solution of salts under investigation is applied to a piece of copper foil, and any mercury ions present will leave spots of silvery-coloured amalgam. Silver ions leave similar spots but are easily washed away, making this a means of distinguishing silver from mercury.
The redox reaction involved where mercury oxidizes the copper is:
Hg2+ + Cu → Hg + Cu2+.
See also
References
Further reading
Prandtl, W.: Humphry Davy, Jöns Jacob Berzelius, zwei führende Chemiker aus der ersten Hälfte des 19. Jahrhunderts. Wissenschaftliche Verlagsgesellschaft, Stuttgart, 1948
Hofmann, H., Jander, G.: Qualitative Analyse, 1972, Walter de Gruyter,
External links | Amalgam (chemistry) | [
"Chemistry"
] | 1,944 | [
"Amalgams",
"Alloys"
] |
19,074,735 | https://en.wikipedia.org/wiki/Wireless%20Home%20Digital%20Interface | Wireless Home Digital Interface (WHDI) is a consumer electronic specification for a wireless HDTV connectivity throughout the home.
WHDI enables delivery of uncompressed high-definition digital video over a wireless radio channel connecting any video source (computers, mobile phones, Blu-ray players etc.) to any compatible display device. WHDI is supported and driven by Amimon, Hitachi, LG Electronics, Motorola, Samsung, Sharp Corporation and Sony.
Versions
The WHDI 1.0 specification was finalized in December 2009. Sharp Corporation was to be one of the first companies to roll out wireless HDTVs. At CES 2010, LG Electronics announced a WHDI wireless HDTV product line.
In June 2010, WHDI announced an update to WHDI 1.0 allowing support for stereoscopic 3D, and stated that the WHDI 2.0 specification was to be completed in Q2 2011.
The WHDI 3D update, due in Q4 2010, will allow support for the 3D formats defined in the HDMI 1.4a specification.
WHDI 2.0 will increase available bandwidth even further, allowing additional 3D formats such as "dual 1080p60", and support for 4K × 2K resolutions.
Technology
WHDI 1.0 provides a high-quality, uncompressed wireless link which supports data rates of up to 3 Gbit/s (allowing 1920×1080 @ 60 Hz @ 24-bit) in a 40 MHz channel, and data rates of up to 1.5 Gbit/s (allowing 1280×720 @ 60 Hz @ 24-bit or 1920×1080 @ 30 Hz @ 24-bit) in a single 20 MHz channel of the 5 GHz unlicensed band, conforming to FCC and worldwide 5 GHz spectrum regulations. Range is beyond 30 m (100 ft), through walls, and latency is less than one millisecond.
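The 3 Gbit/s figure quoted above matches the raw bit rate of the uncompressed video format it carries. The short Python calculation below shows the arithmetic; it ignores audio, blanking intervals and any WHDI protocol overhead, so it is an order-of-magnitude check rather than a statement about the actual link budget.

# Uncompressed 1920x1080 video at 60 frames per second and 24 bits per pixel.
width, height, fps, bits_per_pixel = 1920, 1080, 60, 24
rate_bits_per_second = width * height * fps * bits_per_pixel
print(rate_bits_per_second / 1e9)   # ~2.99, i.e. roughly the 3 Gbit/s quoted for a 40 MHz channel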
History
2005
December
AMIMON releases news of a device capable of "uncompressed high definition video streaming wirelessly."
2007
January
AMIMON showcases its WHDI (wireless high definition interface) at CES. Sanyo demonstrates the "world's first wireless HD projector," using AMIMON's technology, which allows for the same quality as a DVI / HDMI cable.
August
AMIMON begins shipping its WHDI chips to manufacturers.
December
WHDI becomes High-Bandwidth Digital Content Protection (HDCP) Certified, garnering the necessary approval for any device to deliver HD video to another device, a requirement in Hollywood movie studios. It is considered an Approved Retransmission Technology (ART). The approval allows for WHDI to begin selling devices that will carry HD content to a broader market.
2008
April
Sharp partners with AMIMON to offer Sharp's X-Series LCD HDTVs offered with WHDI wireless link, the first CE product to use WHDI technology.
July
AMIMON collaborates with Motorola, Samsung, Sony and Sharp in order to form 'a special interest group to develop a comprehensive new industry standard for multi-room audio, video and control connectivity'.
August
Mitsubishi announces that it will offer television sets in Japan capable of communicating with WHDI-enabled equipment.
September
JVC plans to produce a wireless HDMI box to launch in 2009.
December
AMIMON ships its 100,000th wireless high-definition chipset.
ABI Research reports that wireless HDTV vendors are putting money into products, though few are available to consumers in North America.
Stryker Endoscopy's WiSe HDTV will use WHDI and be the first HD wireless display specifically for the operating room, the first use of WHDI technology in the professional market.
2009
April
AMIMON introduces its second-generation chipset operating in the 5 GHz unlicensed band with AMN 2120 transmitter and AMN 2220 receiver. The chipset is capable of full uncompressed 1080p/60 Hz HD and supports HDCP 2.0. The unit also becomes available to manufacturers.
May
Gefen begins shipping its WHDI towers, targeting the custom installation market. The towers use AMIMON's 5 GHz technology and can support a maximum of five remote receivers on the same video stream. They support 1080p with Dolby 5.1 surround audio.
September
Philips launches Wireless HDTV Link with an HDMI transmitter and receiver and 1080p/30 HD video transmission.
Sony announces it will release the ZX5 LCD television in November. It is capable of receiving 1080p wirelessly.
2010
January
LG announces a partnership with AMIMON and prepares shipment of a wireless HDTV product line with second-generation WHDI technology embedded.
July
WHDI becomes 3D video capable.
September
ASUS joins the WHDI Consortium and aligns with AMIMON to introduce the WiCast EW2000. The WiCast connects a PC via USB to an HDTV via HDMI.
October
Galaxy announces the GeForce GTX 460 WHDI Edition video card. The card is intended for PC gamers.
AMIMON announces the WHDI stick reference design, a noticeably smaller device than those previously released.
November
HP announces the WHDI certified HP Wireless TV Connect
2011
January
WHDI comes to TVs, PCs, tablets and a projector at the 2011 Consumer Electronics Show (CES).
KFA2 (Galaxy) releases the first wireless graphics card, GeForce GTX460 WHDI 1024MB PCIe 2.0. The card uses five aerials to stream 1080p video from a PC to a WHDI-capable television.
September
AMIMON showcases the HD camera link Falcon-HD, a transmitter and receiver accessory for professional HD cameras and monitors at the International Broadcasting Convention (IBC) in Amsterdam.
2012
January
AMIMON teams up with Lenovo to integrate WHDI technology in the IdeaPad S2 7, removing the need for an external transmitter.
April
AMIMON launches Falcon, a wireless transmitter/receiver system kit for the professional camera and monitor market, at the National Association of Broadcasters (NAB) Show in Las Vegas.
June
AMIMON announces the AMIMON Pro Line, using WHDI technology to expand uses from the CE market to the Professional market.
Elmo introduces MO-1w Visual Presenter, the first use of WHDI technology in the presentation industry.
Supporters
Promoters
AMIMON
Hitachi
LG Electronics
Motorola
Samsung
Sharp Corporation
Sony
Contributors
D-link
Haier
Maxim
Mitsubishi Electric
Rohde & Schwarz
Toshiba
Adopters
Askey
ASUS
ATEN International Co., Ltd.
Belkin
Dfine Technology
Domo Technologies
Elmo
Galaxy Microsystems Ltd.
Gemtek
Hefei Radio
Hosiden
HP
Hunan space satellite Communication co.ltd.
IOGear
Jupiter (MTI)
LiteOn Technology Corp.
Murata Manufacturing
Olympus Corporation
Quanta Microsystems - QMI
Seamon Science International
SRI Radio Systems
Syvio Image Limited
TCL Corporation
TDK
Telecommunication Metrology Center
Winstars
Zinwell
See also
Ultra-wideband
Wireless USB
Wireless HDMI:
Intel Wireless Display (WiDi) version 3.5 to 6.0 supports Miracast; discontinued
Miracast
WirelessHD
WiGig
Wi-Fi Direct
ip based:
Chromecast (proprietary media broadcast over ip: Google Cast for audio or audiovisual playback)
AirPlay (proprietary ip based)
Digital Living Network Alliance (DLNA) (ip based)
port / standard for mobile equipment:
Mobile High-Definition Link - MHL
SlimPort (Mobility DisplayPort), also known as MyDP
External links
WHDI.org, the official website of WHDI SIG
Developing Wireless High-Definition Video Modems for Consumer Electronics Devices by Guy Dorman, AMIMON
VE829, FHD 5x2 HDMI Wireless Extender
The Main Wireless HDMI Transmission Protocols and Their Typical Products, Comparison of main wireless HDMI transmission protocols
References
Networking standards
Wireless display technologies | Wireless Home Digital Interface | [
"Technology",
"Engineering"
] | 1,570 | [
"Wireless display technologies",
"Computer standards",
"Wireless networking",
"Computer networks engineering",
"Networking standards"
] |
19,075,439 | https://en.wikipedia.org/wiki/Slot-waveguide | A slot-waveguide is an optical waveguide that guides strongly confined light in a subwavelength-scale low refractive index region by total internal reflection.
A slot-waveguide consists of two strips or slabs of high-refractive-index (nH) materials separated by a subwavelength-scale low-refractive-index (nS) slot region and surrounded by low-refractive-index (nC) cladding materials.
Principle of operation
The principle of operation of a slot-waveguide is based on the discontinuity of the electric field (E-field) at high-refractive-index-contrast interfaces. Maxwell's equations state that, to satisfy the continuity of the normal component of the electric displacement field D at an interface, the corresponding E-field must undergo a discontinuity, with higher amplitude on the low-refractive-index side. That is, at an interface between two regions of dielectric constants ε_S and ε_H, respectively:
D_S^N = D_H^N
ε_S E_S^N = ε_H E_H^N
n_S² E_S^N = n_H² E_H^N
where the superscript N indicates the normal components of the D and E vector fields. Thus, if n_S << n_H, then E_S^N >> E_H^N.
Given that the slot critical dimension (distance between the high-index slabs or strips) is comparable to the exponential decay length of the fundamental eigenmode of the guided-wave structure, the resulting E-field normal to the high-index-contrast interfaces is enhanced in the slot and remains high across it. The power density in the slot is much higher than that in the high-index regions. Since wave propagation is due to total internal reflection, there is no interference effect involved and the slot-structure exhibits very low wavelength sensitivity.
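To put a number on this enhancement, the short calculation below evaluates the ratio E_S^N / E_H^N = (n_H / n_S)² implied by the boundary condition above, using index values roughly appropriate for a silicon slot-waveguide with a silica-filled slot near 1.55 μm. The exact index values are illustrative assumptions, not figures taken from the cited experiments.

# Normal E-field discontinuity at a high-index-contrast interface:
#   E_S^N / E_H^N = (n_H / n_S) ** 2
n_H = 3.48   # approximate refractive index of silicon near 1.55 um (assumed value)
n_S = 1.44   # approximate refractive index of silica in the slot (assumed value)
enhancement = (n_H / n_S) ** 2
print(round(enhancement, 2))   # ~5.84: the normal field in the slot is several times stronger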
Invention
The slot-waveguide was born in 2003 as an unexpected outcome of theoretical studies on metal-oxide-semiconductor (MOS) electro-optic modulation in high-confinement silicon photonic waveguides by Vilson Rosa de Almeida and Carlos Angulo Barrios, then a Ph.D. student and a postdoctoral associate, respectively, at Cornell University. Theoretical analysis and experimental demonstration of the first slot-waveguide implemented in the Si/SiO2 material system at 1.55 μm operation wavelength were reported by Cornell researchers in 2004.
Since these pioneering works, several guided-wave configurations based on the slot-waveguide concept have been proposed and demonstrated. Relevant examples are the following:
In 2005, researchers at the Massachusetts Institute of Technology proposed to use multiple slot regions in the same guided-wave structure (multi-slot waveguide) in order to increase the optical field in the low-refractive-index regions. The experimental demonstration of such multiple slot waveguide in a horizontal configuration was first published in 2007.
In 2006, the slot-waveguide approach was extended to the terahertz frequency band by researchers at RWTH Aachen University. Researchers at the California Institute of Technology also demonstrated that a slot waveguide, in combination with nonlinear electrooptic polymers, could be used to build ring modulators with exceptionally high tunability. Later this same principle enabled Baehr-Jones et al. to demonstrate a mach-zehnder modulator with an exceptionally low drive voltage of 0.25 V
In 2007, a non-planar implementation of the slot-waveguide principle of operation was demonstrated by researchers at the University of Bath. They showed concentration of optical energy within a subwavelength-scale air hole running down the length of a photonic-crystal fiber.
Recently, in 2016, it is shown that slots in a pair of waveguides if off-shifted away from each other can enhance the coupling coefficient even more than 100% if optimized properly, and thus the effective power coupling length between the waveguides can significantly be reduced. Hybrid slot (having vertical slot in one waveguide and horizontal slot in the other) assisted polarization beam splitter is also numerically demonstrated. Though, the losses are high for such slot structures, this scheme exploiting the asymmetric slots may have potential to design very compact optical directional couplers and polarization beam splitters for on-chip integrated optical devices.
The slot waveguide bend is another structure essential to the waveguide design of several Integrated micro- and nano-optics devices. One of the benefits of waveguide bends is the reduction of the footprint size of the device. There are two approaches based on the similarity of Si rails width to form the sharp bend in slot waveguide, which are the symmetric and asymmetric slot waveguides.
Fabrication
Planar slot-waveguides have been fabricated in different material systems such as Si/SiO2 and Si3N4/SiO2. Both vertical (slot plane is normal to the substrate plane) and horizontal (slot plane is parallel to the substrate plane) configurations have been implemented by using conventional micro- and nano-fabrication techniques. These processing tools include electron beam lithography, photolithography, chemical vapour deposition [usually low-pressure chemical vapour deposition (LPCVD) or plasma enhanced chemical vapour deposition (PECVD)], thermal oxidation, reactive-ion etching and focused ion beam.
In vertical slot-waveguides, the slot and strips widths are defined by electron- or photo-lithography and dry etching techniques whereas in horizontal slot-waveguides the slot and strips thicknesses are defined by a thin-film deposition technique or thermal oxidation. Thin film deposition or oxidation provides better control of the layers dimensions and smoother interfaces between the high-index-contrast materials than lithography and dry etching techniques. This makes horizontal slot-waveguides less sensitive to scattering optical losses due to interface roughness than vertical configurations.
Fabrication of a non-planar (fiber-based) slot-waveguide configuration has also been demonstrated by means of conventional microstructured optical fiber technology.
Applications
A slot-waveguide produces high E-field amplitude, optical power, and optical intensity in low-index materials at levels that cannot be achieved with conventional waveguides. This property allows highly efficient interaction between fields and active materials, which may lead to all-optical switching, optical amplification and optical detection on integrated photonics. Strong E-field confinement can be localized in a nanometer-scale low-index region. As firstly pointed out in, the slot waveguide can be used to greatly increase the sensitivity of compact optical sensing devices or to enhance the efficiency of near-field optics probes.
At Terahertz frequencies, slot waveguide based splitter has been designed which allows for low loss propagation of Terahertz waves. The device acts as a splitter through which maximum throughput can be achieved by adjusting the arm length ratio of the input to the output side.
References
Optical components
Photonics | Slot-waveguide | [
"Materials_science",
"Technology",
"Engineering"
] | 1,404 | [
"Glass engineering and science",
"Optical components",
"Components"
] |
19,078,871 | https://en.wikipedia.org/wiki/Leonard%E2%80%93Merritt%20mass%20estimator | In astronomy, the Leonard–Merritt mass estimator is a formula for estimating the mass of a spherical stellar system using the apparent (angular) positions and proper motions of its component stars. The distance to the stellar system must also be known.
Like the virial theorem, the Leonard–Merritt estimator yields correct results regardless of the degree of velocity anisotropy. Its statistical properties are superior to those of the virial theorem. However, it requires that two components of the velocity be known for every star, rather than just one for the virial theorem.
The estimator has the general form
The angle brackets denote averages over the ensemble of observed stars. M(<r) is the mass contained within a distance r from the center of the stellar system; R is the projected distance of a star from the apparent center; v_R and v_T are the components of a star's velocity parallel to, and perpendicular to, the apparent radius vector; and G is the gravitational constant.
Like all estimators based on moments of the Jeans equations, the Leonard–Merritt estimator requires an assumption about the relative distribution of mass and light. As a result, it is most useful when applied to stellar systems that have one of two properties:
All or almost all of the mass resides in a central object, or,
the mass is distributed in the same way as the observed stars.
Case (1) applies to the nucleus of a galaxy containing a supermassive black hole. Case (2) applies to a stellar system composed entirely of luminous stars (i.e. no dark matter or black holes).
In a cluster with constant mass-to-light ratio and total mass M, the Leonard–Merritt estimator becomes:
On the other hand, if all the mass is located in a central point of mass M, then:
In its second form, the Leonard–Merritt estimator has been successfully used to measure the mass of the supermassive black hole at the center of the Milky Way galaxy.
See also
Globular cluster
Proper motion
Virial theorem
References
Astrometry
Celestial mechanics
Stellar astronomy
Supermassive black holes
Equations of astronomy | Leonard–Merritt mass estimator | [
"Physics",
"Astronomy"
] | 425 | [
"Black holes",
"Concepts in astronomy",
"Unsolved problems in physics",
"Classical mechanics",
"Supermassive black holes",
"Astrophysics",
"Astrometry",
"Equations of astronomy",
"Celestial mechanics",
"Astronomical sub-disciplines",
"Stellar astronomy"
] |
19,079,107 | https://en.wikipedia.org/wiki/Tritium%20Systems%20Test%20Assembly | The Tritium Systems Test Assembly (TSTA) was a facility at Los Alamos National Laboratory dedicated to the development and demonstration of technologies required for fusion-relevant deuterium-tritium processing. Facility design was launched in 1977. It was commissioned in 1982, and the first tritium was processed in 1984. The maximum tritium inventory was 140 grams.
References
Los Alamos National Laboratory
Deuterium
Tritium | Tritium Systems Test Assembly | [
"Physics"
] | 85 | [
"Nuclear and atomic physics stubs",
"Nuclear physics"
] |
10,016,360 | https://en.wikipedia.org/wiki/Excellent%20ring | In commutative algebra, a quasi-excellent ring is a Noetherian commutative ring that behaves well with respect to the operation of completion, and is called an excellent ring if it is also universally catenary. Excellent rings are one answer to the problem of finding a natural class of "well-behaved" rings containing most of the rings that occur in number theory and algebraic geometry. At one time it seemed that the class of Noetherian rings might be an answer to this problem, but Masayoshi Nagata and others found several strange counterexamples showing that in general Noetherian rings need not be well-behaved: for example, a normal Noetherian local ring need not be analytically normal.
The class of excellent rings was defined by Alexander Grothendieck (1965) as a candidate for such a class of well-behaved rings. Quasi-excellent rings are conjectured to be the base rings for which the problem of resolution of singularities can be solved; this has been shown in characteristic 0, but the positive characteristic case is (as of 2024) still a major open problem. Essentially all Noetherian rings that occur naturally in algebraic geometry or number theory are excellent; in fact it is quite hard to construct examples of Noetherian rings that are not excellent.
Definitions
The definition of excellent rings is quite involved, so we recall the definitions of the technical conditions it satisfies. Although it seems like a long list of conditions, most rings in practice are excellent, such as fields, polynomial rings, complete Noetherian rings, Dedekind domains of characteristic 0 (such as the ring of integers), and quotient and localization rings of these rings.
Recalled definitions
A ring R containing a field k is called geometrically regular over k if for any finite extension k′ of k the ring R ⊗_k k′ is regular.
A homomorphism of rings R → S is called regular if it is flat and for every prime ideal p of R the fiber S ⊗_R κ(p) is geometrically regular over the residue field κ(p) of p.
A ring R is called a G-ring (or Grothendieck ring) if it is Noetherian and its formal fibers are geometrically regular; this means that for any prime ideal p, the map from the local ring R_p to its completion is regular in the sense above.
Finally, a ring R is J-2 if any finite type R-algebra S is J-1, meaning the regular locus of Spec S is open.
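In display form, the first two of the conditions recalled above read as follows; the symbols R, S, k, k′ and p are exactly the objects named in the preceding definitions, so nothing new is assumed here.

\[ R \text{ is geometrically regular over } k \iff R \otimes_k k' \text{ is regular for every finite extension } k'/k, \]
\[ R \to S \text{ is regular} \iff R \to S \text{ is flat and } S \otimes_R \kappa(\mathfrak{p}) \text{ is geometrically regular over } \kappa(\mathfrak{p}) \text{ for every prime } \mathfrak{p} \subset R. \]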
Definition of (quasi-)excellence
A ring is called quasi-excellent if it is a G-ring and a J-2 ring. It is called excellent if it is quasi-excellent and universally catenary. In practice almost all Noetherian rings are universally catenary, so there is little difference between excellent and quasi-excellent rings.
A scheme is called excellent or quasi-excellent if it has a cover by open affine subschemes with the same property, which implies that every open affine subscheme has this property.
Properties
Because an excellent ring is a G-ring, it is Noetherian by definition. Because it is universally catenary, every maximal chain of prime ideals has the same length. This is useful for studying the dimension theory of such rings because their dimension can be bounded by a fixed maximal chain. In practice, this means infinite-dimensional Noetherian rings which have an inductive definition of maximal chains of prime ideals, giving an infinite-dimensional ring, cannot be constructed.
Schemes
Given an excellent scheme X and a locally finite type morphism f : X′ → X, the scheme X′ is excellent.
Quasi-excellence
Any quasi-excellent ring is a Nagata ring.
Any quasi-excellent reduced local ring is analytically reduced.
Any quasi-excellent normal local ring is analytically normal.
Examples
Excellent rings
Most naturally occurring commutative rings in number theory or algebraic geometry are excellent. In particular:
All complete Noetherian local rings, for instance all fields and the ring of p-adic integers, are excellent.
All Dedekind domains of characteristic 0 are excellent. In particular the ring of integers is excellent. Dedekind domains over fields of characteristic greater than 0 need not be excellent.
The rings of convergent power series in a finite number of variables over the real or complex numbers are excellent.
Any localization of an excellent ring is excellent.
Any finitely generated algebra over an excellent ring is excellent. This includes all polynomial algebras over an excellent ring. This means most rings considered in algebraic geometry are excellent.
A J-2 ring that is not a G-ring
Here is an example of a discrete valuation ring of dimension 1 and characteristic p > 0 which is J-2 but not a G-ring, and so is not quasi-excellent. If k is a field of characteristic p with [k : k^p] infinite, and R is the ring of those power series Σ a_i x^i in k[[x]] whose coefficients generate a finite extension of k^p, then the formal fibers of R are not all geometrically regular, so R is not a G-ring. It is a J-2 ring, as all Noetherian local rings of dimension at most 1 are J-2 rings. It is also universally catenary as it is a Dedekind domain. Here k^p denotes the image of k under the Frobenius morphism a ↦ a^p.
A G-ring that is not a J-2 ring
Here is an example of a ring that is a G-ring but not a J-2 ring and so not quasi-excellent. If R is the subring of the polynomial ring in infinitely many generators x_1, x_2, … over a field generated by the squares and cubes of all generators, and S is obtained from R by adjoining inverses to all elements not in any of the ideals generated by some x_n, then S is a 1-dimensional Noetherian domain that is not a J-2 ring, as S has a cusp singularity at every closed point, so the set of singular points is not closed, though it is a G-ring.
This ring is also universally catenary, as its localization at every prime ideal is a quotient of a regular ring.
A quasi-excellent ring that is not excellent
Nagata's example of a 2-dimensional Noetherian local ring that is catenary but not universally catenary is a G-ring, and is also a J-2 ring as any local G-ring is a J-2 ring. So it is a quasi-excellent catenary local ring that is not excellent.
Resolution of singularities
Quasi-excellent rings are closely related to the problem of resolution of singularities, and this seems to have been Grothendieck's motivation for defining them. Grothendieck (1965) observed that if it is possible to resolve singularities of all complete integral local Noetherian rings, then it is possible to resolve the singularities of all reduced quasi-excellent rings. Hironaka (1964) proved this for all complete integral Noetherian local rings over a field of characteristic 0, which implies his theorem that all singularities of excellent schemes over a field of characteristic 0 can be resolved. Conversely if it is possible to resolve all singularities of the spectra of all integral finite algebras over a Noetherian ring R then the ring R is quasi-excellent.
See also
Resolution of singularities
References
Alexandre Grothendieck, Jean Dieudonné, Eléments de géométrie algébrique IV Publications Mathématiques de l'IHÉS 24 (1965), section 7
Algebraic geometry
Commutative algebra | Excellent ring | [
"Mathematics"
] | 1,454 | [
"Fields of abstract algebra",
"Commutative algebra",
"Algebraic geometry"
] |
10,018,162 | https://en.wikipedia.org/wiki/First%20fix%20and%20second%20fix | First fix and second fix are terms used in the UK and Irish housebuilding and commercial building construction industry.
First fix comprises all the work needed to take a building from foundation to putting plaster on the internal walls. This includes constructing walls, floors and ceilings, and inserting cables for electrical supply and pipes for water supply.
Some argue that First Fix starts after the shell of the building is complete, and ends when the walls are plastered. Here is a list, in no particular order, of the elements of First Fix.
Drain runs: must be downhill and straight
Spare conduits: draw strings
Soil pipes
Copper pipes
MVHR (mechanical ventilation with heat recovery) runs
Push-fit or other plastic piping
Electrical back boxes
Electricity cable runs
Telephone, data and audiovisual cables
Socket location
Security
Fire alarm
Normal pipes
Door bell
Door frames
Pocket doorframes
Stair well: floating / cantilevered?
Sound insulation
Plasterboarding
The list is not exhaustive.
Second fix comprises all the work after the plastering of a finished house. Electrical fixtures are connected to the cables, sinks and baths connected to the pipes, and doors fitted into doorframes. Second fix work requires a neater finish than first fix.
The division of work is a convenient description because electricians, plumbers and carpenters will probably have to make two separate visits to one property under construction, at separate times. Project managers can report "first fix complete" or "second fix 50% done" and others can understand.
Some construction companies specialise in first fix work or second fix work, but most do both.
In North America, terms such as roughing in and finishing or rough-in and finish work are often heard, referring to similar concepts. Another related set of terms is outside work and inside work (the building is closed to the weather when the latter occurs). Carpenters speak of rough work and trim work (or framing versus trimming), and other fields have analogues, such as machining (roughing versus finishing cuts) and communications (rough draft versus revised draft).
Electrical installations and "third fixes"
Electrical installations can be further divided into first, second and third fixes:
First Fix: Positioning and securing of accessory boxes
Second Fix: Preparation and positioning of cables
Third Fix: Termination of conductors to accessories and protective devices
As modern society's reliance on technology increases, the need to properly house sensitive electronic equipment becomes a greater concern. The installation of this equipment takes place in the "third fix" segment of a construction project. It is especially important that sensitive electronic equipment be installed only when a construction site is dust-controlled and prepared for what would be considered "dust free" conditions. For example, for the modern computer server room, equipment would be installed only when dust and atmospheric conditions are minimized and controlled. Similar to the atmospheric needs of medical and scientific research laboratories, the production of discrete semiconductor devices and integrated circuits is undertaken in a cleanroom atmosphere where low levels of environmental pollutants such as particulates and airborne microbes are strictly minimised and most preferably eliminated.
The UK national building specification, British Standard 5295:1989, specifically addresses "clean room" environments serving electronics manufacturers as well as the pharmaceutical industry (which has, for some time, worked to the subtly different ISO standard 14644). Standard 5295:1989 pertains specifically to constructed interior spaces where higher than normal environmental standards must be maintained in order to control particulate contamination, temperature and humidity. It is only at the third fix stage, when building site conditions are rendered virtually dust free so as to minimise the introduction, generation and retention of particles which may contaminate equipment serving the electronics and pharmaceutical manufacturing processes, that the build-out of "clean room" spaces can commence.
References
Building engineering | First fix and second fix | [
"Engineering"
] | 778 | [
"Building engineering",
"Civil engineering",
"Architecture"
] |
10,023,138 | https://en.wikipedia.org/wiki/Chandrasekhar%20number | The Chandrasekhar number is a dimensionless quantity used in magnetic convection to represent the ratio of the Lorentz force to the viscosity. It is named after the Indian astrophysicist Subrahmanyan Chandrasekhar.
The number's main function is as a measure of the magnetic field, being proportional to the square of a characteristic magnetic field in a system.
Definition
The Chandrasekhar number is usually denoted by the letter $Q$, and is motivated by a dimensionless form of the Navier–Stokes equation in the presence of a magnetic force in the equations of magnetohydrodynamics, in which it appears together with the Prandtl number and the magnetic Prandtl number.
The Chandrasekhar number is thus defined as
$Q = \frac{B_0^2 d^2}{\mu_0 \rho \nu \lambda}$,
where $\mu_0$ is the magnetic permeability, $\rho$ is the density of the fluid, $\nu$ is the kinematic viscosity, and $\lambda$ is the magnetic diffusivity. $B_0$ and $d$ are a characteristic magnetic field and a length scale of the system, respectively.
It is related to the Hartmann number, $\mathrm{Ha}$, by the relation $Q = \mathrm{Ha}^2$.
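As a rough numerical illustration of the definition above, the following sketch evaluates $Q$ and the corresponding Hartmann number. The liquid-metal parameter values are order-of-magnitude assumptions chosen only for the example.

import math

def chandrasekhar_number(B0, d, mu0, rho, nu, lam):
    # Q = B0^2 d^2 / (mu0 * rho * nu * lambda), per the definition above.
    return (B0 ** 2) * (d ** 2) / (mu0 * rho * nu * lam)

# Illustrative, order-of-magnitude values loosely resembling a liquid-metal experiment (assumed).
B0 = 0.1              # characteristic magnetic field, T
d = 0.01              # length scale, m
mu0 = 4e-7 * math.pi  # magnetic permeability of free space, H/m
rho = 6.4e3           # fluid density, kg/m^3
nu = 7e-7             # kinematic viscosity, m^2/s
lam = 0.8             # magnetic diffusivity, m^2/s

Q = chandrasekhar_number(B0, d, mu0, rho, nu, lam)
Ha = math.sqrt(Q)     # since Q = Ha^2
print(f"Q  = {Q:.0f}")    # roughly a few hundred for these values
print(f"Ha = {Ha:.1f}")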
See also
Rayleigh number
Taylor number
References
Magnetohydrodynamics
Dimensionless numbers of fluid mechanics
Fluid dynamics
Astrophysics | Chandrasekhar number | [
"Physics",
"Chemistry",
"Astronomy",
"Mathematics",
"Engineering"
] | 236 | [
"Chemical engineering",
"Mathematical objects",
"Astrophysics",
"Number stubs",
"Piping",
"Fluid dynamics",
"Numbers",
"Astronomical sub-disciplines",
"Magnetohydrodynamics"
] |
10,025,116 | https://en.wikipedia.org/wiki/Tippe%20top | A tippe top is a kind of top that, when spun, will spontaneously invert itself to spin on its narrow stem. It was invented by a German nurse, Helene Sperl, in 1898.
Description
A tippe top usually has a body shaped like a truncated sphere, with a short narrow stem attached perpendicular to the center of the flat circular surface of truncation. The stem may be used as a handle to pick up the top, and is also used to spin the top into motion.
When a tippe top is spun at a high angular velocity, its stem slowly tilts downwards more and more until it suddenly lifts the body of the spinning top off the ground, with the stem now pointing downward. As the top's spinning rate slows, it loses stability and eventually topples over, like an ordinary top.
At first glance the top's inversion may mistakenly seem to be a situation where the object spontaneously gains overall energy. This is because the inversion of the top raises the object's center of mass, which means the potential energy has in fact increased. What causes the inversion (and the increase in potential energy) is a torque due to surface friction, which also decreases the kinetic energy of the top, so the total energy does not actually increase.
Once the top is spinning on its stem, it does not spin in the opposite direction to which its spin was initiated. For example, if the top was spun clockwise, as soon as it is on its stem, it will be spinning clockwise viewed from above. This constant spin direction is due to conservation of angular momentum.
Theory
It is usually assumed that the speed of the tippe top at the point of contact with the plane is zero (i.e. there is no slippage). However, as indicated by P. Contensou, this assumption does not lead to a correct physical description of the top's motion. The unusual behavior of the top can be fully described by considering dry friction forces at the contact point.
See also
Euler's Disk – Another spinning physics toy that exhibits surprising behavior
Tennis racket theorem - a similar dynamical rotation effect
References
Further reading
External links
FYSIKbasen.dk
Eric Weisstein's World of Physics
"The Tippe Top" by F.A. Bilsen
Patent
Classical mechanics
Educational toys
Spinning tops
Novelty items | Tippe top | [
"Physics"
] | 478 | [
"Mechanics",
"Classical mechanics"
] |
10,026,140 | https://en.wikipedia.org/wiki/Tropical%20cyclone%20seasonal%20forecasting | Tropical cyclone seasonal forecasting is the process of predicting the number of tropical cyclones in one of the world's seven tropical cyclone basins during a particular tropical cyclone season. In the north Atlantic Ocean, one of the most widely publicized annual predictions comes from the Tropical Meteorology Project at Colorado State University. These reports are written by Philip J. Klotzbach and William M. Gray.
Colorado State University's Tropical Meteorology Project
Since 1984, Dr. William M. Gray and his associates at Colorado State University have issued a seasonal forecast that aims to predict, amongst other factors, the number of tropical storms and hurricanes that will develop within the Atlantic basin during the upcoming season. The forecasts were initially issued ahead of time for June and August.
After the active 2005 Atlantic hurricane season, Dr Gray decided to allow Philip J. Klotzbach to take the primary responsibility for the project's seasonal, monthly and landfall probability forecasts effective with the first forecast for the 2006 Atlantic hurricane season.
National Meteorological Services
Ahead of each season several national meteorological services issue forecasts of how many tropical cyclones will form during a season and/or how many tropical cyclones will affect a particular country. Examples include the United Kingdom's Met Office, which issues a seasonal forecast in May/June of the number of tropical storms for the upcoming Atlantic hurricane season, while the Philippine Atmospheric, Geophysical and Astronomical Services Administration tries to predict how many tropical cyclones will move into its area of responsibility.
United States National Oceanic and Atmospheric Administration
In August 1998, the United States Climate Prediction Center in conjunction with the National Hurricane Center and the Hurricane Research Division issued a tropical cyclone outlook, which accurately predicted that there would be an above-normal number of tropical storms and hurricanes in the Atlantic between August and October. The NOAA centres subsequently started to issue an outlook that gave a general guide to the expected overall activity within the Atlantic Ocean.
Ahead of the 2003 Pacific hurricane season, the NOAA forecasters decided to start issuing an experimental tropical cyclone outlook for the Eastern Pacific, which was designed not to be updated during the mid-season. As a result of both the 2003 and 2004 outlooks being successful, the predictions became an operational product during 2005.
NOAA is also one of the contributors to New Zealand's National Institute of Water & Atmospheric Research Tropical Cyclone Outlook, through its National Weather Service forecast offices in the region and the Climate Prediction Center.
Australia and the Pacific Islands
New Zealand's National Institute of Water & Atmospheric Research (NIWA) and collaborating agencies including the Meteorological Service of New Zealand and Pacific Island National Meteorological Services issue the "Island Climate Update Tropical Cyclone Outlook" for the Pacific. This forecast attempts to predict how many tropical cyclones and severe tropical cyclones will develop within the Southern Pacific between 135°E and 120°W, as well as how many will affect a particular island nation. The Fiji Meteorological Service, while collaborating with NIWA and partners, also publishes its own seasonal forecast, but for the South Pacific basin between 160°E and 120°W. Since the start of the 2009–10 season, the Australian Bureau of Meteorology's National Climate Center has publicly released a forecast for the Australian region which focuses on the broadscale aspects of the cyclone season, forecasting how likely it is that a subregion will see above-average activity as well as how many tropical cyclones may occur within the basin and each of its subregions. However, ahead of the 2011–12 season the NCC stopped publicly forecasting how many tropical cyclones might occur in a certain region and instead forecast only how likely it is that a subregion will see above-average activity.
See also
Tropical cyclone forecast model
Numerical weather prediction
Tropical cyclone observation
Tropical cyclone warnings and watches
Tropical Meteorology Project
References
External links
Seasonal predictions
The Tropical Meteorology Project
Seasonal outlook for Western Australia issued by the Bureau of Meteorology
Seasonal outlook for the Northern Territory issued by the Bureau of Meteorology
Seasonal outlook for Queensland issued by the Bureau of Meteorology
Tropical cyclones
Weather prediction
Weather warnings and advisories | Tropical cyclone seasonal forecasting | [
"Physics"
] | 806 | [
"Weather",
"Weather prediction",
"Physical phenomena"
] |
10,027,043 | https://en.wikipedia.org/wiki/Comparison%20of%20privilege%20authorization%20features | A number of computer operating systems employ security features to help prevent malicious software from gaining sufficient privileges to compromise the computer system. Operating systems lacking such features, such as DOS, Windows implementations prior to Windows NT (and its descendants), CP/M-80, and all Mac operating systems prior to Mac OS X, had only one category of user who was allowed to do anything. With separate execution contexts it is possible for multiple users to store private files, for multiple users to use a computer at the same time, to protect the system against malicious users, and to protect the system against malicious programs. The first multi-user secure system was Multics, which began development in the 1960s; it wasn't until UNIX, BSD, Linux, and NT in the late 80s and early 90s that multi-tasking security contexts were brought to x86 consumer machines.
Introduction to implementations
Microsoft Windows
macOS
Unix and Unix-like
Security considerations
Falsified/intercepted user input
A major security consideration is the ability of malicious applications to simulate keystrokes or mouse clicks, thus tricking or spoofing the security feature into granting malicious applications higher privileges.
Using a terminal based client (standalone or within a desktop/GUI): su and sudo run in the terminal, where they are vulnerable to spoofed input. Of course, if the user was not running a multitasking environment (i.e. a single user in the shell only), this would not be a problem. Terminal windows are usually rendered as ordinary windows to the user, therefore on an intelligent client or desktop system used as a client, the user must take responsibility for preventing other malware on their desktop from manipulating, simulating, or capturing input.
Using a GUI/desktop tightly integrated to the operating system: Commonly, the desktop system locks or secures all common means of input, before requesting passwords or other authentication, so that they cannot be intercepted, manipulated, or simulated:
PolicyKit (GNOME) - directs the X server to capture all keyboard and mouse input. Other desktop environments using PolicyKit may use their own mechanisms.
gksudo - by default "locks" the keyboard, mouse, and window focus, preventing anything but the actual user from inputting the password or otherwise interfering with the confirmation dialog.
UAC (Windows) - by default runs in the Secure Desktop, preventing malicious applications from simulating clicking the "Allow" button or otherwise interfering with the confirmation dialog. In this mode, the user's desktop appears dimmed and cannot be interacted with.
If either gksudo's "lock" feature or UAC's Secure Desktop were compromised or disabled, malicious applications could gain administrator privileges by using keystroke logging to record the administrator's password; or, in the case of UAC if running as an administrator, spoofing a mouse click on the "Allow" button. For this reason, voice recognition is also prohibited from interacting with the dialog. Note that since the gksu password prompt runs without special privileges, malicious applications can still perform keystroke logging using, e.g., the strace tool. (ptrace was restricted in later kernel versions.)
Fake authentication dialogs
Another security consideration is the ability of malicious software to spoof dialogs that look like legitimate security confirmation requests. If the user were to input credentials into a fake dialog, thinking the dialog was legitimate, the malicious software would then know the user's password. If the Secure Desktop or similar feature were disabled, the malicious software could use that password to gain higher privileges.
Though it is not the default behavior for usability reasons, UAC may be configured to require the user to press Ctrl+Alt+Del (known as the secure attention sequence) as part of the authentication process. Because only Windows can detect this key combination, requiring this additional security measure would prevent spoofed dialogs from behaving the same way as a legitimate dialog. For example, a spoofed dialog might not ask the user to press Ctrl+Alt+Del, and the user could realize that the dialog was fake. Or, when the user did press Ctrl+Alt+Del, the user would be brought to the screen Ctrl+Alt+Del normally brings them to instead of a UAC confirmation dialog. Thus the user could tell whether the dialog was an attempt to trick them into providing their password to a piece of malicious software.
In GNOME, PolicyKit uses different dialogs, depending on the configuration of the system. For example, the authentication dialog for a system equipped with a fingerprint reader might look different from an authentication dialog for a system without one. Applications do not have access to the configuration of PolicyKit, so they have no way of knowing which dialog will appear and thus how to spoof it.
Usability considerations
Another consideration that has gone into these implementations is usability.
Separate administrator account
su require the user to know the password to at least two accounts: the regular-use account, and an account with higher privileges such as root.
sudo, kdesu and gksudo use a simpler approach. With these programs, the user is pre-configured to be granted access to specific administrative tasks, but must explicitly authorize applications to run with those privileges. The user enters their own password instead of that of the superuser or some other account.
UAC and Authenticate combine these two ideas into one. With these programs, administrators explicitly authorize programs to run with higher privileges. Non-administrators are prompted for an administrator username and password.
PolicyKit can be configured to adopt any of these approaches. In practice, the distribution will choose one.
Simplicity of dialog
In order to grant an application administrative privileges, sudo, gksudo, and Authenticate prompt administrators to re-enter their password.
With UAC, when logged in as a standard user, the user must enter an administrator's name and password each time they need to grant an application elevated privileges; but when logged in as a member of the Administrators group, they (by default) simply confirm or deny, instead of re-entering their password each time (though that is an option). While the default approach is simpler, it is also less secure, since if the user physically walks away from the computer without locking it, another person could walk up and have administrator privileges over the system.
PolicyKit requires the user to re-enter his or her password or provide some other means of authentication (e.g. fingerprint).
Saving credentials
UAC prompts for authorization each time it is called to elevate a program.
sudo, gksudo, and kdesu do not ask the user to re-enter their password every time it is called to elevate a program. Rather, the user is asked for their password once at the start. If the user has not used their administrative privileges for a certain period of time (sudo's default is 5 minutes), the user is once again restricted to standard user privileges until they enter their password again.
sudo's approach is a trade-off between security and usability. On one hand, a user only has to enter their password once to perform a series of administrator tasks, rather than having to enter their password for each task. But at the same time, the surface area for attack is larger because all programs that run in that tty (for sudo) or all programs not running in a terminal (for gksudo and kdesu) prefixed by either of those commands before the timeout receive administrator privileges. Security-conscious users may remove the temporary administrator privileges upon completing the tasks requiring them by using the sudo -k command from each tty or pts in which sudo was used (in the case of pts's, closing the terminal emulator is not sufficient). The equivalent command for kdesu is kdesu -s. There is no gksudo option to do the same; however, running sudo -k not within a terminal instance (e.g. through the Alt + F2 "Run Application" dialogue box, unticking "Run in terminal") will have the desired effect.
Authenticate does not save passwords. If the user is a standard user, they must enter a username and a password. If the user is an administrator, the current user's name is already filled in, and only needs to enter their password. The name can still be modified to run as another user.
The application only requires authentication once, and is requested at the time the application needs the privilege. Once "elevated", the application does not need to authenticate again until the application has been Quit and relaunched.
However, there are varying levels of authentication, known as Rights. The right that is requested can be shown by expanding the triangle next to "details", underneath the password. Normally, applications use system.privilege.admin, but another may be used, such as a lower right for security, or a higher right if higher access is needed. If the right the application has is not suitable for a task, the application may need to authenticate again to increase the privilege level.
PolicyKit can be configured to adopt either of these approaches.
Identifying when administrative rights are needed
In order for an operating system to know when to prompt the user for authorization, an application or action needs to identify itself as requiring elevated privileges. While it is technically possible for the user to be prompted at the exact moment that an operation requiring such privileges is executed, it is often not ideal to ask for privileges partway through completing a task. If the user were unable to provide proper credentials, the work done before requiring administrator privileges would have to be undone because the task could not be seen through to the end.
In the case of user interfaces such as the Control Panel in Microsoft Windows, and the Preferences panels in Mac OS X, the exact privilege requirements are hard-coded into the system so that the user is presented with an authorization dialog at an appropriate time (for example, before displaying information that only administrators should see). Different operating systems offer distinct methods for applications to identify their security requirements:
sudo centralizes all privilege authorization information in a single configuration file, /etc/sudoers, which contains a list of users and the privileged applications and actions that those users are permitted to use. The grammar of the sudoers file is intended to be flexible enough to cover many different scenarios, such as placing restrictions on command-line parameters. For example, a user can be granted access to change anybody's password except for the root account, as follows:
pete ALL = /usr/bin/passwd [A-z]*, !/usr/bin/passwd root
User Account Control uses a combination of heuristic scanning and "application manifests" to determine if an application requires administrator privileges. Manifest (.manifest) files, first introduced with Windows XP, are XML files with the same name as the application and a suffix of ".manifest", e.g. Notepad.exe.manifest. When an application is started, the manifest is looked at for information about what security requirements the application has. For example, this XML fragment will indicate that the application will require administrator access, but will not require unfettered access to other parts of the user desktop outside the application:
<security>
<requestedPrivileges>
<requestedExecutionLevel level="requireAdministrator" uiAccess="false" />
</requestedPrivileges>
</security>
Manifest files can also be compiled into the application executable itself as an embedded resource. Heuristic scanning is also used, primarily for backwards compatibility. One example of this is looking at the executable's file name; if it contains the word "Setup", it is assumed that the executable is an installer, and a UAC prompt is displayed before the application starts.
UAC also makes a distinction between elevation requests from a signed executable and an unsigned executable; and if the former, whether or not the publisher is 'Windows Vista'. The color, icon, and wording of the prompts are different in each case: for example, attempting to convey a greater sense of warning if the executable is unsigned than if not.
Applications using PolicyKit ask for specific privileges when prompting for authentication, and PolicyKit performs those actions on behalf of the application. Before authenticating, users are able to see which application requested the action and which action was requested.
See also
Privilege escalation, a type of security exploit
Principle of least privilege, a security design pattern
Privileged Identity Management, the methodology of managing privileged accounts
Privileged password management, similar concept to privileged identity management:
i.e., periodically scramble privileged passwords; and
store password values in a secure, highly available vault; and
apply policy regarding when, how and to whom these passwords may be disclosed.
References
Operating system security
Privilege authorization features
Computer access control | Comparison of privilege authorization features | [
"Engineering"
] | 2,685 | [
"Cybersecurity engineering",
"Computer access control"
] |
10,027,284 | https://en.wikipedia.org/wiki/Hook%20%28hand%20tool%29 | A hook is a hand tool used for securing and moving loads. It consists of a round wooden handle with a strong metal hook projecting at a right angle from the center of the handle. The appliance is held in a closed fist with the hook projecting between two fingers.
This type of hook is used in many different industries, and has many different names. It may be called a box hook, cargo hook, loading hook, docker's hook when used by longshoremen, and a baling hook, bale hook, or hay hook in the agricultural industry. Other variants exist, such as in forestry, for moving logs, and a type with a long shaft, used by city workers to remove manhole covers.
Smaller hooks may also be used in food processing and transport.
Dockwork
The longshoreman's hook was historically used by longshoremen (stevedores). Before the age of containerization, freight was moved on and off ships with extensive manual labor, and the longshoreman's hook was the basic tool of the dockworker. The hook became an emblem of the longshoreman's profession in the same way that a hammer and anvil are associated with blacksmiths, or the pipe wrench with pipefitters, sprinklerfitters and plumbers. When longshoremen went on strike or retired, it was known as "hanging up the hook" or "slinging the hook", and the newsletter for retired members of the International Longshore and Warehouse Union's Seattle Local is called The Rusty Hook. A longshoreman's hook was often carried by hooking it through the belt.
Longshoremen carried various types of hooks depending on the cargo they would handle. Cargo could come in the form of bales, sacks, barrels, wood crates, or it could be stowed individually in the cargo hold of the ship. The primary function of the hook was to protect the hands of the longshoreman from being injured while handling the cargo. Hooks also improved the reach of the worker and allowed greater strength and handling of the cargo.
Some cargo items are liable to be damaged if pulled at with a longshoreman's hook: hence the "Use No Hooks" warning sign.
A longshoreman's hook looks somewhat intimidating, and as it was also associated with strong, tough dockworkers, it became a commonly used weapon in crime fiction, similar to the ice pick. For example, in an episode of Alfred Hitchcock Presents entitled Shopping for Death, a character is murdered (off-screen) using a longshoreman's hook. It was sometimes used as a weapon and means of intimidation in real life as well; the book Joey the Hit Man: The Autobiography of a Mafia Killer states "One guy who used to work on the docks was called Charlie the Hook. If he didn't like you he would pick you up with his hook." In the 1957 New York drama film Edge of the City, two longshoremen settle their dispute in a deadly baling hook fight. They are also the primary weapon of Spider Splicers in the BioShock series, so named due to their use of the hooks to crawl on ceilings and attack unexpectedly.
Haying
A hay hook is slightly different in design from a longshoreman's hook, in that the shaft is typically longer. It is used in hay bucking on farms to secure and move bales of hay, which are otherwise awkward to pick up manually.
Gardening
In gardening and agriculture, a variant with a long shaft is used to move large plants. A hook is placed in either side of the baled roots, allowing workers to carry or place the heavy load.
Forestry
Called a "Packhaken", "Hebehaken", or "Forsthaken" in German, this type is used in forestry mainly to lift or move firewood. In Sweden, this tool, though slightly different, is called a "timmerkrok", which translates as "timberhook". It is used mainly by two people to move logs by hooking them in each end.
See also
Cant hook
Fishing gaff
Pickaroon
Prosthetic hook
References
External links
Smithsonian Institution exhibit on the mechanization of the cargo shipping industry.
prohandymantools.com
Images of longshoreman's hooks:
Hand tools
Forestry tools
Food processing
Maritime culture | Hook (hand tool) | [
"Engineering"
] | 900 | [
"Human–machine interaction",
"Hand tools"
] |
1,133,088 | https://en.wikipedia.org/wiki/Elastic%20scattering | Elastic scattering is a form of particle scattering in scattering theory, nuclear physics and particle physics. In this process, the internal states of the particles involved stay the same. In the non-relativistic case, where the relative velocities of the particles are much less than the speed of light, elastic scattering simply means that the total kinetic energy of the system is conserved. At relativistic velocities, elastic scattering also requires the final state to have the same number of particles as the initial state and for them to be of the same kind.
Rutherford scattering
When the incident particle, such as an alpha particle or electron, is diffracted in the Coulomb potential of atoms and molecules, the elastic scattering process is called Rutherford scattering. In many electron diffraction techniques like reflection high energy electron diffraction (RHEED), transmission electron diffraction (TED), and gas electron diffraction (GED), where the incident electrons have sufficiently high energy (>10 keV), the elastic electron scattering becomes the main component of the scattering process and the scattering intensity is expressed as a function of the momentum transfer defined as the difference between the momentum vector of the incident electron and that of the scattered electron.
Optical elastic scattering
In Thomson scattering light interacts with electrons (this is the low-energy limit of Compton scattering).
In Rayleigh scattering a medium composed of particles whose sizes are much smaller than the wavelength scatters light sideways. In this scattering process, the energy (and therefore the wavelength) of the incident light is conserved and only its direction is changed. In this case, the scattering intensity is inversely proportional to the fourth power of the wavelength of the light.
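Because the intensity scales as the inverse fourth power of the wavelength, the relative strength of scattering at two wavelengths follows directly. The short sketch below uses illustrative wavelengths for blue and red light; the specific values are assumptions made for the example.

def rayleigh_intensity_ratio(lambda_a_nm, lambda_b_nm):
    # Ratio of scattered intensity at wavelength a to that at wavelength b,
    # using intensity proportional to 1 / wavelength^4.
    return (lambda_b_nm / lambda_a_nm) ** 4

# Illustrative wavelengths: blue ~450 nm, red ~650 nm (assumed values).
ratio = rayleigh_intensity_ratio(450.0, 650.0)
print(f"Blue light is scattered about {ratio:.1f} times more strongly than red.")  # about 4.3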
Nuclear particle physics
For particles with the mass of a proton or greater, elastic scattering is one of the main methods by which the particles interact with matter. At relativistic energies, protons, neutrons, helium ions, and HZE ions will undergo numerous elastic collisions before they are dissipated. This is a major concern with many types of ionizing radiation, including galactic cosmic rays, solar proton events, free neutrons in nuclear weapon design and nuclear reactor design, spaceship design, and the study of the Earth's magnetic field. In designing an effective biological shield, proper attention must be made to the linear energy transfer of the particles as they propagate through the shield. In nuclear reactors, the neutron's mean free path is critical as it undergoes elastic scattering on its way to becoming a slow-moving thermal neutron.
Besides elastic scattering, charged particles also undergo effects from their elementary charge, which repels them away from nuclei and causes their path to be curved inside an electric field. Particles can also undergo inelastic scattering and capture due to nuclear reactions. Protons and neutrons do this more often than heavier particles. Neutrons are also capable of causing fission in an incident nucleus. Light nuclei like deuterium and lithium can combine in nuclear fusion.
See also
Elastic collision
Inelastic scattering
Scattering theory
Thomson scattering
References
Particle physics
Scattering | Elastic scattering | [
"Physics",
"Chemistry",
"Materials_science"
] | 624 | [
"Condensed matter physics",
"Scattering",
"Particle physics",
"Nuclear physics"
] |
1,134,659 | https://en.wikipedia.org/wiki/Block%20code | In coding theory, block codes are a large and important family of error-correcting codes that encode data in blocks.
There is a vast number of examples for block codes, many of which have a wide range of practical applications. The abstract definition of block codes is conceptually useful because it allows coding theorists, mathematicians, and computer scientists to study the limitations of all block codes in a unified way.
Such limitations often take the form of bounds that relate different parameters of the block code to each other, such as its rate and its ability to detect and correct errors.
Examples of block codes are Reed–Solomon codes, Hamming codes, Hadamard codes, Expander codes, Golay codes, Reed–Muller codes and Polar codes. These examples also belong to the class of linear codes, and hence they are called linear block codes. More particularly, these codes are known as algebraic block codes, or cyclic block codes, because they can be generated using Boolean polynomials.
Algebraic block codes are typically hard-decoded using algebraic decoders.
The term block code may also refer to any error-correcting code that acts on a block of $k$ bits of input data to produce $n$ bits of output data $(n, k)$. Consequently, the block coder is a memoryless device. Under this definition codes such as turbo codes, terminated convolutional codes and other iteratively decodable codes (turbo-like codes) would also be considered block codes. A non-terminated convolutional encoder would be an example of a non-block (unframed) code, which has memory and is instead classified as a tree code.
This article deals with "algebraic block codes".
The block code and its parameters
Error-correcting codes are used to reliably transmit digital data over unreliable communication channels subject to channel noise.
When a sender wants to transmit a possibly very long data stream using a block code, the sender breaks the stream up into pieces of some fixed size. Each such piece is called message and the procedure given by the block code encodes each message individually into a codeword, also called a block in the context of block codes. The sender then transmits all blocks to the receiver, who can in turn use some decoding mechanism to (hopefully) recover the original messages from the possibly corrupted received blocks.
The performance and success of the overall transmission depends on the parameters of the channel and the block code.
Formally, a block code is an injective mapping
$C : \Sigma^k \to \Sigma^n$.
Here, $\Sigma$ is a finite and nonempty set and $k$ and $n$ are integers. The meaning and significance of these three parameters and other parameters related to the code are described below.
The alphabet Σ
The data stream to be encoded is modeled as a string over some alphabet $\Sigma$. The size $|\Sigma|$ of the alphabet is often written as $q$. If $q = 2$, then the block code is called a binary block code. In many applications it is useful to consider $q$ to be a prime power, and to identify $\Sigma$ with the finite field $\mathbb{F}_q$.
The message length k
Messages are elements $m$ of $\Sigma^k$, that is, strings of length $k$.
Hence the number $k$ is called the message length or dimension of a block code.
The block length n
The block length $n$ of a block code is the number of symbols in a block. Hence, the elements $c$ of $\Sigma^n$ are strings of length $n$ and correspond to blocks that may be received by the receiver. Hence they are also called received words.
If $c = C(m)$ for some message $m$, then $c$ is called the codeword of $m$.
The rate R
The rate $R$ of a block code is defined as the ratio between its message length and its block length:
$R = \frac{k}{n}$.
A large rate means that the amount of actual message per transmitted block is high. In this sense, the rate measures the transmission speed and the quantity $1 - R$ measures the overhead that occurs due to the encoding with the block code.
It is a simple information theoretical fact that the rate cannot exceed $1$ since data cannot in general be losslessly compressed. Formally, this follows from the fact that the code is an injective map.
The distance d
The distance or minimum distance $d$ of a block code is the minimum number of positions in which any two distinct codewords differ, and the relative distance $\delta$ is the fraction $\tfrac{d}{n}$.
Formally, for received words $c_1, c_2 \in \Sigma^n$, let $\Delta(c_1, c_2)$ denote the Hamming distance between $c_1$ and $c_2$, that is, the number of positions in which $c_1$ and $c_2$ differ.
Then the minimum distance $d$ of the code $C$ is defined as
$d := \min_{m_1, m_2 \in \Sigma^k,\, m_1 \neq m_2} \Delta[C(m_1), C(m_2)]$.
Since any code has to be injective, any two codewords will disagree in at least one position, so the distance of any code is at least $1$. Besides, the distance equals the minimum weight for linear block codes because:
$\min_{m_1 \neq m_2} \Delta[C(m_1), C(m_2)] = \min_{m_1 \neq m_2} \Delta[\mathbf{0}, C(m_1) - C(m_2)] = \min_{m \neq \mathbf{0}} \Delta[\mathbf{0}, C(m)] = \min_{m \neq \mathbf{0}} w(C(m)) = w_{\min}$.
A larger distance allows for more error correction and detection.
For example, if we only consider errors that may change symbols of the sent codeword but never erase or add them, then the number of errors is the number of positions in which the sent codeword and the received word differ.
A code with distance $d$ allows the receiver to detect up to $d - 1$ transmission errors since changing up to $d - 1$ positions of a codeword can never accidentally yield another codeword. Furthermore, if no more than $\left\lfloor \tfrac{d-1}{2} \right\rfloor$ transmission errors occur, the receiver can uniquely decode the received word to a codeword. This is because every received word has at most one codeword at distance $\left\lfloor \tfrac{d-1}{2} \right\rfloor$. If more than $\left\lfloor \tfrac{d-1}{2} \right\rfloor$ transmission errors occur, the receiver cannot uniquely decode the received word in general as there might be several possible codewords. One way for the receiver to cope with this situation is to use list decoding, in which the decoder outputs a list of all codewords in a certain radius.
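As a concrete check of these radii, the short sketch below computes the Hamming distance between words and the minimum distance of a small code, and prints the resulting detection and correction radii. The three-codeword binary code used here is a made-up toy example, not a code discussed above.

from itertools import combinations

def hamming_distance(x, y):
    # Number of positions in which two equal-length words differ.
    return sum(a != b for a, b in zip(x, y))

def minimum_distance(code):
    # Minimum Hamming distance over all pairs of distinct codewords.
    return min(hamming_distance(c1, c2) for c1, c2 in combinations(code, 2))

code = ["000000", "111000", "111111"]   # toy binary code of block length 6 (assumed example)
d = minimum_distance(code)
print("minimum distance d =", d)                 # 3 for this toy code
print("detects up to", d - 1, "errors")          # d - 1
print("corrects up to", (d - 1) // 2, "errors")  # floor((d - 1) / 2)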
Popular notation
The notation $(n, k, d)_q$ describes a block code over an alphabet $\Sigma$ of size $q$, with a block length $n$, message length $k$, and distance $d$.
If the block code is a linear block code, then the square brackets in the notation $[n, k, d]_q$ are used to represent that fact.
For binary codes with $q = 2$, the index is sometimes dropped.
For maximum distance separable codes, the distance is always $d = n - k + 1$, but sometimes the precise distance is not known, non-trivial to prove or state, or not needed. In such cases, the $d$-component may be missing.
Sometimes, especially for non-block codes, the notation $(n, M, d)_q$ is used for codes that contain $M$ codewords of length $n$. For block codes with messages of length $k$ over an alphabet of size $q$, this number would be $M = q^k$.
Examples
As mentioned above, there are a vast number of error-correcting codes that are actually block codes.
The first error-correcting code was the Hamming(7,4) code, developed by Richard W. Hamming in 1950. This code transforms a message consisting of 4 bits into a codeword of 7 bits by adding 3 parity bits. Hence this code is a block code. It turns out that it is also a linear code and that it has distance 3. In the shorthand notation above, this means that the Hamming(7,4) code is a $[7, 4, 3]_2$ code.
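To make the shorthand concrete, the sketch below enumerates all 16 codewords of a Hamming(7,4) code and verifies the parameters $[7, 4, 3]_2$. The particular generator matrix is one common systematic choice and is an assumption of the example, since several equivalent forms exist.

from itertools import product, combinations

# One common systematic generator matrix for Hamming(7,4):
# 4 data bits followed by 3 parity bits (assumed form).
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(msg):
    # Multiply the 4-bit message by G over GF(2) to get a 7-bit codeword.
    return tuple(sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G))

codewords = [encode(m) for m in product([0, 1], repeat=4)]
n = len(codewords[0])   # block length, 7
k = 4                   # message length
d = min(sum(a != b for a, b in zip(c1, c2)) for c1, c2 in combinations(codewords, 2))
print(f"[{n},{k},{d}]_2 code")   # expected: [7,4,3]_2
assert k + d <= n + 1            # the Singleton bound, discussed below, holds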
Reed–Solomon codes are a family of $[n, k, d]_q$ codes with $d = n - k + 1$ and $q$ being a prime power. Rank codes are a family of $[n, k, d]_q$ codes with $d \leq n - k + 1$. Hadamard codes are a family of $[n, k, d]_2$ codes with $n = 2^{k-1}$ and $d = 2^{k-2}$.
Error detection and correction properties
A codeword $c \in \Sigma^n$ could be considered as a point in the $n$-dimensional space $\Sigma^n$ and the code $\mathcal{C}$ is the subset of $\Sigma^n$. A code $\mathcal{C}$ has distance $d$ means that for every codeword $c \in \mathcal{C}$ there is no other codeword in the Hamming ball centered at $c$ with radius $d - 1$, which is defined as the collection of $n$-dimensional words whose Hamming distance to $c$ is no more than $d - 1$. Similarly, $\mathcal{C}$ with (minimum) distance $d$ has the following properties:
$\mathcal{C}$ can detect $d - 1$ errors: Because a codeword $c$ is the only codeword in the Hamming ball centered at itself with radius $d - 1$, no error pattern of $d - 1$ or fewer errors could change one codeword to another. When the receiver detects that the received vector is not a codeword of $\mathcal{C}$, the errors are detected (but there is no guarantee of correction).
$\mathcal{C}$ can correct $\left\lfloor \tfrac{d-1}{2} \right\rfloor$ errors. Because a codeword $c$ is the only codeword in the Hamming ball centered at itself with radius $\left\lfloor \tfrac{d-1}{2} \right\rfloor$, the two Hamming balls centered at two different codewords respectively, both with radius $\left\lfloor \tfrac{d-1}{2} \right\rfloor$, do not overlap with each other. Therefore, if we consider error correction as finding the codeword closest to the received word $y$, then as long as the number of errors is no more than $\left\lfloor \tfrac{d-1}{2} \right\rfloor$, there is only one codeword in the Hamming ball centered at $y$ with radius $\left\lfloor \tfrac{d-1}{2} \right\rfloor$, therefore all errors could be corrected.
In order to decode in the presence of more than $\left\lfloor \tfrac{d-1}{2} \right\rfloor$ errors, list-decoding or maximum likelihood decoding can be used.
$\mathcal{C}$ can correct $d - 1$ erasures. By erasure it means that the position of the erased symbol is known. Correcting could be achieved by $q$-pass decoding: in the $i$-th pass the erased positions are filled with the $i$-th symbol of the alphabet and error correcting is carried out. There must be one pass in which the number of errors is no more than $\left\lfloor \tfrac{d-1}{2} \right\rfloor$, and therefore the erasures could be corrected.
Lower and upper bounds of block codes
Family of codes
$C = \{C_i\}_{i \geq 1}$ is called a family of codes, where $C_i$ is an $(n_i, k_i, d_i)_q$ code with monotonically increasing $n_i$.
The rate of a family of codes is defined as $R(C) = \lim_{i \to \infty} \tfrac{k_i}{n_i}$.
The relative distance of a family of codes is defined as $\delta(C) = \lim_{i \to \infty} \tfrac{d_i}{n_i}$.
To explore the relationship between $R(C)$ and $\delta(C)$, a set of lower and upper bounds of block codes are known.
Hamming bound
Singleton bound
The Singleton bound is that the sum of the rate and the relative distance of a block code cannot be much larger than 1:
$R + \delta \leq 1 + \tfrac{1}{n}$.
In other words, every block code satisfies the inequality $k + d \leq n + 1$.
Reed–Solomon codes are non-trivial examples of codes that satisfy the Singleton bound with equality.
Plotkin bound
For $q = 2$, $R + 2\delta \leq 1$. In other words, every binary block code satisfies $k + 2d \leq n + 2$.
For the general case, the following Plotkin bounds hold for any code $C \subseteq \mathbb{F}_q^n$ with distance $d$:
If $d = \left(1 - \tfrac{1}{q}\right) n$, then $|C| \leq 2qn$.
If $d > \left(1 - \tfrac{1}{q}\right) n$, then $|C| \leq \tfrac{qd}{qd - (q-1)n}$.
For any $q$-ary code with relative distance $\delta$, $R \leq 1 - \left(\tfrac{q}{q-1}\right)\delta + o(1)$.
Gilbert–Varshamov bound
$R \geq 1 - H_q(\delta) - \epsilon$, where $0 \leq \delta \leq 1 - \tfrac{1}{q}$, $0 \leq \epsilon \leq 1 - H_q(\delta)$, and
$H_q(x) \equiv x \log_q(q-1) - x \log_q x - (1-x)\log_q(1-x)$ is the $q$-ary entropy function.
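A brief numerical sketch of the $q$-ary entropy function and the rate guaranteed by the Gilbert–Varshamov bound follows; the chosen alphabet size and relative distance are arbitrary values used only for illustration.

import math

def entropy_q(x, q):
    # q-ary entropy function H_q(x), with H_q(0) taken as 0.
    if x == 0:
        return 0.0
    return (x * math.log(q - 1, q)
            - x * math.log(x, q)
            - (1 - x) * math.log(1 - x, q))

# Gilbert-Varshamov: codes of relative distance delta exist with rate at least
# 1 - H_q(delta), up to an arbitrarily small epsilon.
q, delta = 2, 0.1   # assumed example values
print("H_2(0.1) =", round(entropy_q(delta, q), 3))              # about 0.469
print("guaranteed rate >=", round(1 - entropy_q(delta, q), 3))  # about 0.531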
Johnson bound
Define $J_q(\delta) \equiv \left(1 - \tfrac{1}{q}\right)\left(1 - \sqrt{1 - \tfrac{q\delta}{q-1}}\right)$.
Let $J_q(n, d, e)$ be the maximum number of codewords in a Hamming ball of radius $e$ for any code $C \subseteq \mathbb{F}_q^n$ of distance $d$.
Then we have the Johnson bound: $J_q(n, d, e) \leq qnd$, if $\tfrac{e}{n} \leq J_q\!\left(\tfrac{d}{n}\right)$.
Elias–Bassalygo bound
Sphere packings and lattices
Block codes are tied to the sphere packing problem which has received some attention over the years. In two dimensions, it is easy to visualize. Take a bunch of pennies flat on the table and push them together. The result is a hexagon pattern like a bee's nest. But block codes rely on more dimensions which cannot easily be visualized. The powerful Golay code used in deep space communications uses 24 dimensions. If used as a binary code (which it usually is), the dimensions refer to the length of the codeword as defined above.
The theory of coding uses the N-dimensional sphere model. For example, how many pennies can be packed into a circle on a tabletop or in 3 dimensions, how many marbles can be packed into a globe. Other considerations enter the choice of a code. For example, hexagon packing into the constraint of a rectangular box will leave empty space at the corners. As the dimensions get larger, the percentage of empty space grows smaller. But at certain dimensions, the packing uses all the space and these codes are the so-called perfect codes. There are very few of these codes.
Another property is the number of neighbors a single codeword may have.
Again, consider pennies as an example. First we pack the pennies in a rectangular grid. Each penny will have 4 near neighbors (and 4 at the corners which are farther away). In a hexagon, each penny will have 6 near neighbors. Respectively, in three and four dimensions, the maximum packing is given by the 12-face and 24-cell with 12 and 24 neighbors, respectively. When we increase the dimensions, the number of near neighbors increases very rapidly. In general, the value is given by the kissing numbers.
The result is that the number of ways for noise to make the receiver choose a neighbor (hence an error) grows as well. This is a fundamental limitation of block codes, and indeed all codes. It may be harder to cause an error to a single neighbor, but the number of neighbors can be large enough so the total error probability actually suffers.
See also
Channel capacity
Shannon–Hartley theorem
Noisy channel
List decoding
Sphere packing
References
External links
Charan Langton (2001) Coding Concepts and Block Coding
Coding theory | Block code | [
"Mathematics"
] | 2,441 | [
"Discrete mathematics",
"Coding theory"
] |
1,134,865 | https://en.wikipedia.org/wiki/Door%20furniture | Door furniture (British and Australian English) or door hardware (North American English) refers to any of the items that are attached to a door or a drawer to enhance its functionality or appearance.
Design of door furniture is an issue for disabled persons who might have difficulty opening or using some kinds of door, and for specialists in interior design as well as usability professionals, who often take their didactic examples from door furniture design and use.
Items of door furniture fall into several categories, described below.
Hinges
A hinge is a component that attaches one edge of a door to the frame, while allowing the other edge to swing from it. It usually consists of a pair of plates, each with a set of open cylindrical rings (the knuckles) attached to them. The knuckles of the two plates are offset from each other and mesh together. A hinge pin is then placed through the two sets of knuckles and usually fixed, to combine the plates and make the hinge a single unit. A door usually has about three hinges, but the number can vary.
Handles
Doors generally have at least one fixed handle, usually accompanied with a latch (see below). A typical "handle set" is composed of the exterior handle, escutcheon, an independent deadbolt, and the interior package (knob or lever). On some doors the latch is incorporated into a hinged handle that releases when pulled on.
See also:
Doorknob – A knob or lever on an axle that is rotated to release the bolt;
Crash bar or Panic bar;
Flush pull handle for sliding glass door.
Locks
A lock is a device that prevents access by those without a key or combination, generally by preventing one or more latches from being operated. Often accompanied by an escutcheon. Some doors, particularly older ones, will have a keyhole accompanying the lock.
Fasteners
Functionally, all but swinging doors use some form of fastener to hold them closed. Typical forms of fasteners include:
Latch – A device that allows one to fasten a door from one side (but, if designed to, open from either).
Bolt – A (nearly always) metal shaft attached by cleats or a specific form of bracket, that slides into the jamb to fasten a door.
Draw bolt - A form of crossbar latch where a bolt, held in place by metal bails, is slid to engage a bail fixed to a jamb or mating door. A small handle, typically a knob, is affixed to the bolt to aid grip. An Aldrop is similar, but incorporates a slot through the handle which can engage a hasp for securing the bolt with a lock.
Latch bolt – A bolt that has an angled surface that acts as a wedge to push the bolt in while the door is being closed. By the use of a latch bolt, a door can be closed without having to operate the handle.
Deadbolt – Deadbolts usually extend deeper into the frame and are not automatically retractable the way latch bolts are. They are typically manipulated with a lock on the outside and either a lock or a latch on the inside. Deadbolts are generally used for security purposes on external doors in case somebody tries to force the door in or use tools such as a crowbar, hammer, screwdriver, etc.
Fastener accessories
Strike plate – A plate with a hole in the middle made to receive a bolt. If the strike is for a latch bolt, it typically also includes a small ramped area to help the bolt move inward while the door is being closed. (Also known as just "strike".) An electric strike variant allows the door to be released even while the mechanical lock is locked.
Dust Socket - A metal or plastic socket that sits behind the Strike plate concealing the rough wood of the mortise.
Accessories
Numerous devices exist to serve specific purposes related to how a door should (or should not) be used. See:
Door chain - A device to secure door opening
Door closer – Mechanical or electromagnetic device to close an open door (in the event of a fire)
Door opener - Automatic door opening device activated by motion sensors or pressure pads
Door damper – A hydraulic device employed to slow the door's closure
Door knocker
Door stop – used to prevent the door from opening too far or striking another object
Espagnolette (for a window)
Fingerplate
Letter box or mail slot
Peephole
Kickplate
A number of items normally accompany doors but are not necessarily mounted on the door itself, such as doorbells.
See also
Architectural ironmongery
Drawer pull
References
External links
Doors
Architectural elements
Hardware (mechanical) | Door furniture | [
"Physics",
"Technology",
"Engineering"
] | 942 | [
"Machines",
"Building engineering",
"Architecture",
"Physical systems",
"Construction",
"Architectural elements",
"Hardware (mechanical)",
"Components"
] |
1,135,199 | https://en.wikipedia.org/wiki/Stroboscopic%20effect | The stroboscopic effect is a visual phenomenon caused by aliasing that occurs when continuous rotational or other cyclic motion is represented by a series of short or instantaneous samples (as opposed to a continuous view) at a sampling rate close to the period of the motion. It accounts for the "wagon-wheel effect", so-called because in video, spoked wheels (such as on horse-drawn wagons) sometimes appear to be turning backwards.
A strobe fountain, a stream of water droplets falling at regular intervals lit with a strobe light, is an example of the stroboscopic effect being applied to a cyclic motion that is not rotational. When viewed under normal light, this is a normal water fountain. When viewed under a strobe light with its frequency tuned to the rate at which the droplets fall, the droplets appear to be suspended in mid-air. Adjusting the strobe frequency can make the droplets seemingly move slowly up or down.
Depending upon the frequency of illumination, there are different names for the visual effect. Up to about 80 hertz, or the flicker fusion threshold, it is called visible flicker. From about 80 hertz to 2,000 hertz it is called the stroboscopic effect (the subject of this article). Overlapping in frequency, from about 80 hertz up to about 6,500 hertz, a third effect exists called the phantom array effect or ghosting effect, an optical phenomenon caused by rapid eye movements (saccades) of the observer.
Simon Stampfer, who coined the term in his 1833 patent application for his stroboscopische Scheiben (better known as the "phenakistiscope"), explained how the illusion of motion occurs when during unnoticed regular and very short interruptions of light, one figure gets replaced by a similar figure in a slightly different position. Any series of figures can thus be manipulated to show movements in any desired direction.
Explanation
Consider the stroboscope as used in mechanical analysis. This may be a "strobe light" that is fired at an adjustable rate. For example, an object is rotating at 60 revolutions per second: if it is viewed with a series of short flashes at 60 times per second, each flash illuminates the object at the same position in its rotational cycle, so it appears that the object is stationary. Furthermore, at a frequency of 60 flashes per second, persistence of vision smooths out the sequence of flashes so that the perceived image is continuous.
If the same rotating object is viewed at 61 flashes per second, each flash will illuminate it at a slightly earlier part of its rotational cycle. Sixty-one flashes will occur before the object is seen in the same position again, and the series of images will be perceived as if it is rotating backwards once per second.
The same effect occurs if the object is viewed at 59 flashes per second, except that each flash illuminates it a little later in its rotational cycle and so, the object will seem to be rotating forwards.
The same effect occurs at other flash frequencies, such as the 50 Hz characteristic of the electric distribution grids of most countries in the world.
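The apparent rotation rate under strobed illumination is the true rate folded back ("aliased") around the flash rate. The sketch below reproduces the stationary, forward and backward appearances for the 59, 60 and 61 flashes-per-second cases described above; the function name and the folding convention are choices made for this example.

def apparent_rotation_hz(true_rate_hz, flash_rate_hz):
    # The motion is sampled once per flash, so the perceived rate is the true
    # rate folded into the range (-flash_rate/2, +flash_rate/2]:
    # positive = forward, negative = backward, zero = apparently stationary.
    apparent = true_rate_hz % flash_rate_hz
    if apparent > flash_rate_hz / 2:
        apparent -= flash_rate_hz
    return apparent

for flashes in (59, 60, 61):
    print(flashes, "flashes/s ->", apparent_rotation_hz(60, flashes), "rev/s apparent")
# 59 -> +1 (appears to creep forward), 60 -> 0 (stationary), 61 -> -1 (appears to rotate backwards)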
In the case of motion pictures, action is captured as a rapid series of still images and the same stroboscopic effect can occur.
Audio conversion from light patterns
The stroboscopic effect also plays a role in audio playback. Compact discs rely on strobing reflections of the laser from the surface of the disc in order to be processed (it is also used for computer data). DVDs and Blu-ray Discs have similar functions.
The stroboscopic effect also plays a role for laser microphones.
Wagon-wheel effect
Motion-picture cameras conventionally film at 24 frames per second. Although the wheels of a vehicle are not likely to be turning at 24 revolutions per second (as that would be extremely fast), suppose each wheel has 12 spokes and rotates at only two revolutions per second. Filmed at 24 frames per second, the spokes in each frame will appear in exactly the same position. Hence, the wheel will be perceived to be stationary. In fact, each photographically captured spoke in any one position will be a different actual spoke in each successive frame, but since the spokes are close to identical in shape and color, no difference will be perceived. Thus, the wheel will appear stationary whenever the rotation rate multiplied by the number of spokes is a multiple of the frame rate (24 per second in this example).
If the wheel rotates a little more slowly than two revolutions per second, the position of the spokes is seen to fall a little further behind in each successive frame and therefore, the wheel will seem to be turning backwards.
Beneficial effects
Stroboscopic principles, and their ability to create an illusion of motion, underlie the theory behind animation, film, and other moving pictures.
In some special applications, stroboscopic pulsations have benefits. For instance, a stroboscope is a tool that produces short repetitive flashes of light that can be used for measurement of movement frequencies or for analysis or timing of moving objects. An automotive timing light is a specialized stroboscope used to manually set the ignition timing of an internal combustion engine.
Stroboscopic visual training (SVT) is a recent tool aimed at improving the visual and perceptual performance of athletes by having them execute activities under conditions of modulated lighting or intermittent vision.
Unwanted effects in common lighting
Stroboscopic effect is one of the particular temporal light artefacts. In common lighting applications, the stroboscopic effect is an unwanted effect which may become visible if a person is looking at a moving or rotating object which is illuminated by a time-modulated light source. The temporal light modulation may come from fluctuations of the light source itself or may be due to the application of certain dimming or light level regulation technologies. Another cause of light modulations may be lamps with unfiltered pulse-width modulation type external dimmers. Whether this is so may be tested with a rotating fidget spinner.
Effects
Various scientific committees have assessed the potential health, performance and safety-related aspects resulting from temporal light modulations (TLMs), including the stroboscopic effect. Adverse effects in common lighting application areas include annoyance, reduced task performance, visual fatigue and headache. The visibility aspects of the stroboscopic effect are given in a technical note of the CIE (CIE TN 006:2016) and in the thesis of Perz.
Stroboscopic effects may also lead to unsafe situations in workplaces with fast moving or rotating machinery. If the frequency of fast rotating machinery or moving parts coincides with the frequency, or multiples of the frequency, of the light modulation, the machinery can appear to be stationary, or to move with another speed, potentially leading to hazardous situations. Stroboscopic effects that become visible in rotating objects are also referred to as the wagon-wheel effect.
In general, undesired effects in the visual perception of a human observer induced by light intensity fluctuations are called Temporal Light Artefacts (TLAs). Further background and explanations on the different TLA phenomena including stroboscopic effect is given in a recorded webinar “Is it all just flicker?”.
Possible stroboscopically induced medical issues in some people include migraines and headaches, autistic repetitive behaviors, eye strain and fatigue, reduced visual task performance, anxiety and (more rarely) epileptic seizures.
Root causes
Light emitted from lighting equipment such as luminaires and lamps may vary in strength as a function of time, either intentionally or unintentionally. Intentional light variations are applied for warning and signalling (e.g. traffic-light signalling, flashing aviation light signals) and for entertainment (such as stage lighting), with the intention that the flicker be perceived. In addition, the light output of lighting equipment may have residual, unintentional light-level modulations due to the lighting technology in combination with the type of electrical mains connection. For example, lighting equipment connected to a single-phase mains supply will typically show residual TLMs at twice the mains frequency, i.e. at 100 Hz or 120 Hz depending on the country.
The magnitude, shape, periodicity and frequency of the TLMs will depend on many factors such as the type of light source, the electrical mains-supply frequency, the driver or ballast technology and the type of light regulation technology applied (e.g. pulse-width modulation). If the modulation frequency is below the flicker fusion threshold and the magnitude of the TLM exceeds a certain level, then such TLMs are perceived as flicker. Light modulations with modulation frequencies beyond the flicker fusion threshold are not directly perceived, but illusions in the form of the stroboscopic effect may become visible (for an example, see Figure 1).
LEDs do not intrinsically produce temporal modulations, but because they respond very quickly they reproduce the driving current waveform almost exactly, so any ripple in the current appears as a ripple in the light output. Compared with conventional lighting technologies (incandescent, fluorescent), LED lighting therefore shows a wider variety of TLA properties. Many types and topologies of LED driver circuits are used; simpler electronics with limited or no buffer capacitance often result in larger residual current ripple and thus larger temporal light modulation.
Dimming technologies of either externally applied dimmers (incompatible dimmers) or internal light-level regulators may have additional impact on the level of stroboscopic effect; the level of temporal light modulation generally increases at lower light levels.
NOTE – The root cause, temporal light modulation, is often referred to as flicker, and the stroboscopic effect itself is also often called flicker. Flicker, however, is a directly visible effect of light modulations at relatively low modulation frequencies, typically below 80 Hz, whereas the stroboscopic effect in common (residential) applications may become visible for light modulations at frequencies typically above 80 Hz.
Mitigation
Generally, undesirable stroboscopic effect can be avoided by reducing the level of TLMs.
Designing lighting equipment to reduce the TLMs of the light sources typically involves a trade-off with other product properties: it generally increases cost and size, shortens lifetime or lowers energy efficiency.
For instance, reducing the modulation in the current driving the LEDs, which also reduces the visibility of TLAs, requires a large storage capacitor such as an electrolytic capacitor. However, the use of such capacitors can significantly shorten the lifetime of the LED lamp, as electrolytic capacitors are found to have the highest failure rate among the driver components. Another way to lower the visibility of TLAs is to increase the frequency of the driving current; however, this decreases the efficiency of the system and increases its overall size.
Visibility
Stroboscopic effect becomes visible if the modulation frequency of the TLM is in the range of 80 Hz to 2000 Hz and if the magnitude of the TLM exceeds a certain level. Other important factors that determine the visibility of TLMs as stroboscopic effect are:
The shape of the temporally modulated light waveform (e.g. sinusoidal, or a rectangular pulse and its duty cycle);
The illumination level of the light source;
The speed of movement of the moving objects observed;
Physiological factors such as age and fatigue.
All observer-related influence quantities are stochastic parameters, because not all humans perceive the effect of the same light ripple in the same way. That is why perception of the stroboscopic effect is always expressed with a certain probability. For light levels encountered in common applications and for moderate speeds of movement of objects (corresponding to speeds achievable by human movement), an average sensitivity curve has been derived from perception studies. The average sensitivity curve for sinusoidally modulated light waveforms, also called the stroboscopic effect contrast threshold function, as a function of frequency f is as follows:
The contrast threshold function is depicted in Figure 2. The stroboscopic effect becomes visible if the modulation frequency of the TLM is in the region between approximately 10 Hz and 2000 Hz and if the magnitude of the TLM exceeds a certain level. The contrast threshold function shows that at modulation frequencies near 100 Hz, the stroboscopic effect will be visible at relatively low magnitudes of modulation. Although the stroboscopic effect is in theory also visible in the frequency range below 100 Hz, in practice the visibility of flicker will dominate over the stroboscopic effect in the frequency range up to 60 Hz. Moreover, large magnitudes of unintentional repetitive TLMs with frequencies below 100 Hz are unlikely to occur in practice, because residual TLMs generally occur at modulation frequencies that are twice the mains frequency (100 Hz or 120 Hz).
Detailed explanations on the visibility of stroboscopic effect and other temporal light artefacts are also given in CIE TN 006:2016 and in a recorded webinar “Is it all just flicker?”.
Objective assessment of stroboscopic effect
Stroboscopic effect visibility meter
For objective assessment of the stroboscopic effect, the stroboscopic effect visibility measure (SVM) has been developed. The specification of the stroboscopic effect visibility meter and the test method for objective assessment of lighting equipment are published in IEC technical report IEC TR 63158. SVM is calculated by summing the threshold-weighted Fourier components of the light waveform.
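One commonly quoted form of this summation is given below; the Minkowski-norm exponent of 3.7 is taken from secondary descriptions of the measure and should be verified against IEC TR 63158 itself:

$$\mathrm{SVM} \;=\; \left( \sum_{m=1}^{\infty} \left( \frac{C_m}{T_m} \right)^{3.7} \right)^{1/3.7}$$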
where Cm is the relative amplitude of the m-th Fourier component (trigonometric Fourier series representation) of the relative illuminance (relative to the DC-level);
Tm is the stroboscopic effect contrast threshold function for visibility of the stroboscopic effect of a sine wave at the frequency of the m-th Fourier component (see above). SVM can be used for the objective assessment of stroboscopic effects of temporal light modulation of lighting equipment, as visible to a human observer, in general indoor applications with typical indoor light levels (> 100 lx) and with moderate movements of an observer or a nearby handled object (< 4 m/s). For assessing unwanted stroboscopic effects in other applications, such as the misperception of rapidly rotating or moving machinery in a workshop, other metrics and methods may be required, or the assessment can be done by subjective testing (observation).
NOTE – Several alternative metrics, such as modulation depth, flicker percentage or flicker index, are being applied for specifying the stroboscopic effect performance of lighting equipment. None of these metrics is suitable for predicting actual human perception, because perception is affected by the modulation depth, modulation frequency, wave shape and, where applicable, the duty cycle of the TLM.
Matlab toolbox
A MATLAB stroboscopic effect visibility measure toolbox, including a function for calculating SVM and some application examples, is available on MATLAB Central via the MathWorks community.
Acceptance criterion
If the value of SVM equals one, the input modulation of the light waveform produces a stroboscopic effect that is just visible, i.e. at the visibility threshold. This means that an average observer will be able to detect the artefact with a probability of 50%. If the value of the visibility measure is above unity, the effect has a probability of detection of more than 50%. If the value of the visibility measure is smaller than unity, the probability of detection is less than 50%. These visibility thresholds show the average detection of an average human observer in a population. This does not, however, guarantee acceptability. For some less critical applications, the acceptability level of an artefact might be well above the visibility threshold. For other applications, the acceptable levels might be below the visibility threshold. NEMA 77-2017 amongst others gives guidance for acceptance criteria in different applications.
Test and measurement applications
A typical test setup for stroboscopic effect testing is shown in Figure 3. The stroboscopic effect visibility meter can be applied for different purposes (see IEC TR 63158):
Measurement of the intrinsic stroboscopic-effect performance of lighting equipment when supplied with a stable mains voltage;
Testing the effect of light regulation of lighting equipment or the effect of an external dimmer (dimmer compatibility).
Publication of standards development organisations
CIE TN 006:2016: introduces terms, definitions, methodologies and measures for quantification of TLAs including stroboscopic effect.
IEC TR 63158:2018: includes the stroboscopic effect visibility meter specification and verification method, and test procedures, among others for dimmer compatibility.
NEMA 77-2017: among other things, flicker test methods and guidance for acceptance criteria.
Dangers in workplaces
Stroboscopic effect may lead to unsafe situations in workplaces with fast moving or rotating machinery. If the frequency of fast rotating machinery or moving parts coincides with the frequency, or multiples of the frequency, of the light modulation, the machinery can appear to be stationary, or to move with another speed, potentially leading to hazardous situations.
Because of the illusion that the stroboscopic effect can give to moving machinery, it is advisable to avoid lighting a workspace from a single phase. For example, a factory lit from a single-phase supply with basic lighting will have light modulation at 100 or 120 Hz (twice the nominal mains frequency: 2 × 50 Hz in Europe, 2 × 60 Hz in the US), so machinery rotating at multiples of 50 or 60 Hz (3000 or 3600 rpm) may appear not to be turning, increasing the risk of injury to an operator. Solutions include spreading the lighting over a full three-phase supply, using high-frequency controllers that drive the lights at safer frequencies, or using direct-current lighting.
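As a rough illustration of the hazard described above, the sketch below (illustrative Python; it assumes a single dominant modulation frequency and a part with simple m-fold rotational symmetry) lists shaft speeds that can appear frozen under 100 Hz or 120 Hz light modulation.

```python
# Shaft speeds at which an m-fold rotationally symmetric part can appear frozen
# under light whose intensity is modulated at f_mod hertz. Simplified model:
# the part looks unchanged whenever it turns an exact multiple of 1/m of a
# revolution between successive intensity peaks of the lighting.
def stationary_rpms(f_mod_hz, symmetry_m, max_rpm=8000):
    step_rpm = 60.0 * f_mod_hz / symmetry_m   # spacing between "frozen" speeds
    speeds = []
    rpm = step_rpm
    while rpm <= max_rpm:
        speeds.append(round(rpm))
        rpm += step_rpm
    return speeds

for m in (1, 2, 4):
    print(f"100 Hz modulation, {m}-fold symmetry: {stationary_rpms(100, m)}")
print(f"120 Hz modulation, 2-fold symmetry: {stationary_rpms(120, 2)}")
# The 3000 rpm (50 Hz mains) and 3600 rpm (60 Hz mains) speeds mentioned above
# show up for parts with 2-fold symmetry.
```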
The 100/120 Hz stroboscopic effect in commercial lighting may lead to disruptive issues and lost productivity in workspaces such as hospitals and medical facilities, industrial facilities, offices, schools or video conferencing rooms.
See also
3D zoetrope
Temporal light artefacts
Temporal light effects
Flicker (light)
Flicker fusion threshold
References
External links
https://www.youtube.com/watch?v=3_vVB9u-07I A clear example of this effect.
Interactive Strobe Fountain – lets you adjust the strobe frequency to control the apparent movement of falling droplets.
Yutaka Nishiyama (2012), "Mathematics of Fans" (PDF), International Journal of Pure and Applied Mathematics, 78 (5): 669–678.
Film and video technology
Optical illusions
Articles containing video clips | Stroboscopic effect | [
"Physics"
] | 3,788 | [
"Optical phenomena",
"Physical phenomena",
"Optical illusions"
] |
1,135,311 | https://en.wikipedia.org/wiki/Matched%20filter | In signal processing, the output of the matched filter is given by correlating a known delayed signal, or template, with an unknown signal to detect the presence of the template in the unknown signal. This is equivalent to convolving the unknown signal with a conjugated time-reversed version of the template. The matched filter is the optimal linear filter for maximizing the signal-to-noise ratio (SNR) in the presence of additive stochastic noise.
Matched filters are commonly used in radar, in which a known signal is sent out, and the reflected signal is examined for common elements of the out-going signal. Pulse compression is an example of matched filtering. It is so called because the impulse response is matched to input pulse signals. Two-dimensional matched filters are commonly used in image processing, e.g., to improve the SNR of X-ray observations. Additional applications of note are in seismology and gravitational-wave astronomy.
Matched filtering is a demodulation technique that uses LTI (linear time-invariant) filters to maximize the SNR.
It was originally also known as a North filter.
Derivation
Derivation via matrix algebra
The following section derives the matched filter for a discrete-time system. The derivation for a continuous-time system is similar, with summations replaced with integrals.
The matched filter is the linear filter, , that maximizes the output signal-to-noise ratio.
where is the input as a function of the independent variable , and is the filtered output. Though we most often express filters as the impulse response of convolution systems, as above (see LTI system theory), it is easiest to think of the matched filter in the context of the inner product, which we will see shortly.
We can derive the linear filter that maximizes output signal-to-noise ratio by invoking a geometric argument. The intuition behind the matched filter relies on correlating the received signal (a vector) with a filter (another vector) that is parallel with the signal, maximizing the inner product. This enhances the signal. When we consider the additive stochastic noise, we have the additional challenge of minimizing the output due to noise by choosing a filter that is orthogonal to the noise.
Let us formally define the problem. We seek a filter, , such that we maximize the output signal-to-noise ratio, where the output is the inner product of the filter and the observed signal .
Our observed signal consists of the desirable signal and additive noise :
Let us define the auto-correlation matrix of the noise, reminding ourselves that this matrix has Hermitian symmetry, a property that will become useful in the derivation:
where denotes the conjugate transpose of , and denotes expectation (note that in case the noise has zero-mean, its auto-correlation matrix is equal to its covariance matrix).
Let us call our output, , the inner product of our filter and the observed signal such that
We now define the signal-to-noise ratio, which is our objective function, to be the ratio of the power of the output due to the desired signal to the power of the output due to the noise:
We rewrite the above:
We wish to maximize this quantity by choosing . Expanding the denominator of our objective function, we have
Now, our becomes
We will rewrite this expression with some matrix manipulation. The reason for this seemingly counterproductive measure will become evident shortly. Exploiting the Hermitian symmetry of the auto-correlation matrix , we can write
We would like to find an upper bound on this expression. To do so, we first recognize a form of the Cauchy–Schwarz inequality:
which is to say that the square of the inner product of two vectors can only be as large as the product of the individual inner products of the vectors. This concept returns to the intuition behind the matched filter: this upper bound is achieved when the two vectors and are parallel. We resume our derivation by expressing the upper bound on our in light of the geometric inequality above:
Our valiant matrix manipulation has now paid off. We see that the expression for our upper bound can be greatly simplified:
We can achieve this upper bound if we choose,
where is an arbitrary real number. To verify this, we plug into our expression for the output :
Thus, our optimal matched filter is
We often choose to normalize the expected value of the power of the filter output due to the noise to unity. That is, we constrain
This constraint implies a value of , for which we can solve:
yielding
giving us our normalized filter,
If we care to write the impulse response of the filter for the convolution system, it is simply the complex conjugate time reversal of the input .
Though we have derived the matched filter in discrete time, we can extend the concept to continuous-time systems if we replace with the continuous-time autocorrelation function of the noise, assuming a continuous signal , continuous noise , and a continuous filter .
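As a concrete illustration of the result above, the following sketch (illustrative Python/NumPy; the pulse shape, noise level and variable names are chosen for the example and do not come from this article) builds the normalised matched filter for the special case of white noise, where the noise autocorrelation matrix is proportional to the identity and the optimal filter is simply proportional to the template, and compares its output SNR with that of a mismatched filter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Known template (deterministic signal) to be detected in white Gaussian noise.
n = 200
s = np.exp(-0.5 * ((np.arange(n) - 100) / 8.0) ** 2)   # example pulse shape
sigma = 2.0                                            # noise standard deviation

# White noise: R = sigma^2 * I, so the optimal filter h = alpha * R^{-1} s is
# proportional to s; here it is normalised so that the output noise power is 1.
R_inv = np.eye(n) / sigma**2
h = R_inv @ s
h = h / np.sqrt(s @ R_inv @ s)          # enforce the constraint h^H R h = 1

snr_matched = np.abs(h @ s) ** 2        # output SNR, since output noise power is 1

# Compare with a deliberately mismatched filter, normalised in the same way.
h_bad = np.ones(n) / np.sqrt(n * sigma**2)
snr_bad = np.abs(h_bad @ s) ** 2

print(f"matched-filter output SNR   : {snr_matched:.2f}")
print(f"mismatched-filter output SNR: {snr_bad:.2f}")
# The matched filter attains the upper bound s^H R^{-1} s; any other filter gives less.
```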
Derivation via Lagrangian
Alternatively, we may solve for the matched filter by solving our maximization problem with a Lagrangian. Again, the matched filter endeavors to maximize the output signal-to-noise ratio () of a filtered deterministic signal in stochastic additive noise. The observed sequence, again, is
with the noise auto-correlation matrix,
The signal-to-noise ratio is
where and .
Evaluating the expression in the numerator, we have
and in the denominator,
The signal-to-noise ratio becomes
If we now constrain the denominator to be 1, the problem of maximizing is reduced to maximizing the numerator. We can then formulate the problem using a Lagrange multiplier:
which we recognize as a generalized eigenvalue problem
Since is of unit rank, it has only one nonzero eigenvalue. It can be shown that this eigenvalue equals
yielding the following optimal matched filter
This is the same result found in the previous subsection.
Interpretation as a least-squares estimator
Derivation
Matched filtering can also be interpreted as a least-squares estimator for the optimal location and scaling of a given model or template. Once again, let the observed sequence be defined as
where is uncorrelated zero mean noise. The signal is assumed to be a scaled and shifted version of a known model sequence :
We want to find optimal estimates and for the unknown shift and scaling by minimizing the least-squares residual between the observed sequence and a "probing sequence" :
The appropriate will later turn out to be the matched filter, but is as yet unspecified. Expanding and the square within the sum yields
The first term in brackets is a constant (since the observed signal is given) and has no influence on the optimal solution. The last term has constant expected value because the noise is uncorrelated and has zero mean. We can therefore drop both terms from the optimization. After reversing the sign, we obtain the equivalent optimization problem
Setting the derivative w.r.t. to zero gives an analytic solution for :
Inserting this into our objective function yields a reduced maximization problem for just :
The numerator can be upper-bounded by means of the Cauchy–Schwarz inequality:
The optimization problem assumes its maximum when equality holds in this expression. According to the properties of the Cauchy–Schwarz inequality, this is only possible when
for arbitrary non-zero constants or , and the optimal solution is obtained at as desired. Thus, our "probing sequence" must be proportional to the signal model , and the convenient choice yields the matched filter
Note that the filter is the mirrored signal model. This ensures that the operation to be applied in order to find the optimum is indeed the convolution between the observed sequence and the matched filter . The filtered sequence assumes its maximum at the position where the observed sequence best matches (in a least-squares sense) the signal model .
Implications
The matched filter may be derived in a variety of ways, but as a special case of a least-squares procedure it may also be interpreted as a maximum likelihood method in the context of a (coloured) Gaussian noise model and the associated Whittle likelihood.
If the transmitted signal possessed no unknown parameters (like time-of-arrival, amplitude,...), then the matched filter would, according to the Neyman–Pearson lemma, minimize the error probability. However, since the exact signal generally is determined by unknown parameters that effectively are estimated (or fitted) in the filtering process, the matched filter constitutes a generalized maximum likelihood (test-) statistic. The filtered time series may then be interpreted as (proportional to) the profile likelihood, the maximized conditional likelihood as a function of the ("arrival") time parameter.
This implies in particular that the error probability (in the sense of Neyman and Pearson, i.e., concerning maximization of the detection probability for a given false-alarm probability) is not necessarily optimal.
What is commonly referred to as the Signal-to-noise ratio (SNR), which is supposed to be maximized by a matched filter, in this context corresponds to , where is the (conditionally) maximized likelihood ratio.
The construction of the matched filter is based on a known noise spectrum. In practice, however, the noise spectrum is usually estimated from data and hence only known up to a limited precision. For the case of an uncertain spectrum, the matched filter may be generalized to a more robust iterative procedure with favourable properties also in non-Gaussian noise.
Frequency-domain interpretation
When viewed in the frequency domain, it is evident that the matched filter applies the greatest weighting to spectral components exhibiting the greatest signal-to-noise ratio (i.e., large weight where noise is relatively low, and vice versa). In general this requires a non-flat frequency response, but the associated "distortion" is no cause for concern in situations such as radar and digital communications, where the original waveform is known and the objective is the detection of this signal against the background noise. On the technical side, the matched filter is a weighted least-squares method based on the (heteroscedastic) frequency-domain data (where the "weights" are determined via the noise spectrum, see also previous section), or equivalently, a least-squares method applied to the whitened data.
Examples
Radar and sonar
Matched filters are often used in signal detection. As an example, suppose that we wish to judge the distance of an object by reflecting a signal off it. We may choose to transmit a pure-tone sinusoid at 1 Hz. We assume that our received signal is an attenuated and phase-shifted form of the transmitted signal with added noise.
To judge the distance of the object, we correlate the received signal with a matched filter, which, in the case of white (uncorrelated) noise, is another pure-tone 1-Hz sinusoid. When the output of the matched filter system exceeds a certain threshold, we conclude with high probability that the received signal has been reflected off the object. Using the speed of propagation and the time that we first observe the reflected signal, we can estimate the distance of the object. If we change the shape of the pulse in a specially designed way, the signal-to-noise ratio and the distance resolution can even be improved after matched filtering: this is a technique known as pulse compression.
Additionally, matched filters can be used in parameter estimation problems (see estimation theory). To return to our previous example, we may desire to estimate the speed of the object, in addition to its position. To exploit the Doppler effect, we would like to estimate the frequency of the received signal. To do so, we may correlate the received signal with several matched filters of sinusoids at varying frequencies. The matched filter with the highest output will reveal, with high probability, the frequency of the reflected signal and help us determine the radial velocity of the object, i.e. the relative speed either directly towards or away from the observer. This method is, in fact, a simple version of the discrete Fourier transform (DFT). The DFT takes an -valued complex input and correlates it with matched filters, corresponding to complex exponentials at different frequencies, to yield complex-valued numbers corresponding to the relative amplitudes and phases of the sinusoidal components (see Moving target indication).
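The frequency-estimation step described above can be sketched numerically. In the toy example below (illustrative Python/NumPy; the sample rate, tone frequency and noise level are invented for the illustration), a noisy received tone is correlated against a bank of complex-exponential matched filters, which is exactly the computation performed by the DFT.

```python
import numpy as np

rng = np.random.default_rng(1)

fs = 1000.0                        # sample rate (Hz), chosen for the example
n = 1024
t = np.arange(n) / fs
f_true = 212.0                     # unknown Doppler-shifted tone frequency to estimate
x = (np.exp(2j * np.pi * f_true * t)
     + 2.0 * (rng.standard_normal(n) + 1j * rng.standard_normal(n)))

# Bank of matched filters, one complex exponential per candidate frequency.
# Correlating x with each of them is the same computation as the DFT of x.
X = np.fft.fft(x)
freqs = np.fft.fftfreq(n, d=1 / fs)
f_hat = freqs[np.argmax(np.abs(X))]

print(f"true frequency      : {f_true:.1f} Hz")
print(f"estimated frequency : {f_hat:.1f} Hz")   # resolution is fs/n, just under 1 Hz here
```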
Digital communications
The matched filter is also used in communications. In the context of a communication system that sends binary messages from the transmitter to the receiver across a noisy channel, a matched filter can be used to detect the transmitted pulses in the noisy received signal.
Imagine we want to send the sequence "0101100100", coded in polar non-return-to-zero (NRZ), through a certain channel.
Mathematically, a sequence in NRZ code can be described as a sequence of unit pulses or shifted rect functions, each pulse being weighted by +1 if the bit is "1" and by -1 if the bit is "0". Formally, the scaling factor for the bit is,
We can represent our message, , as the sum of shifted unit pulses:
where is the time length of one bit and is the rectangular function.
Thus, the signal to be sent by the transmitter is
If we model our noisy channel as an AWGN channel, white Gaussian noise is added to the signal. At the receiver end, for a Signal-to-noise ratio of 3 dB, this may look like:
A first glance will not reveal the original transmitted sequence. There is a high power of noise relative to the power of the desired signal (i.e., there is a low signal-to-noise ratio). If the receiver were to sample this signal at the correct moments, the resulting binary message could be incorrect.
To increase our signal-to-noise ratio, we pass the received signal through a matched filter. In this case, the filter should be matched to an NRZ pulse (equivalent to a "1" coded in NRZ code). Precisely, the impulse response of the ideal matched filter, assuming white (uncorrelated) noise should be a time-reversed complex-conjugated scaled version of the signal that we are seeking. We choose
In this case, due to symmetry, the time-reversed complex conjugate of is in fact , allowing us to call the impulse response of our matched filter convolution system.
After convolving with the correct matched filter, the resulting signal, is,
where denotes convolution.
This signal can now be safely sampled by the receiver at the correct sampling instants and compared to an appropriate threshold, resulting in a correct interpretation of the binary message.
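A minimal simulation of this receiver chain is sketched below (illustrative Python/NumPy; the bit sequence matches the example in the text, while the samples-per-bit, noise level and threshold are invented for the illustration).

```python
import numpy as np

rng = np.random.default_rng(2)

bits = [0, 1, 0, 1, 1, 0, 0, 1, 0, 0]    # the example message "0101100100"
sps = 50                                  # samples per bit (bit period T = sps samples)

# Polar NRZ: "1" -> +1, "0" -> -1, held constant over one bit period.
symbols = np.repeat([1.0 if b else -1.0 for b in bits], sps)

# AWGN channel.
noise_std = 2.0
received = symbols + noise_std * rng.standard_normal(symbols.size)

# Matched filter for a rectangular NRZ pulse: a rectangular pulse again (its own
# time reversal), scaled here so the output is the average over one bit period.
h = np.ones(sps) / sps
filtered = np.convolve(received, h)

# Sample at the end of each bit period (where the filter output peaks)
# and compare with a zero threshold.
sample_points = sps - 1 + sps * np.arange(len(bits))
decisions = (filtered[sample_points] > 0).astype(int)

print("sent    :", bits)
print("decoded :", decisions.tolist())
# At this noise level the decoded bits almost always match the transmitted ones,
# even though the raw received waveform looks dominated by noise.
```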
Gravitational-wave astronomy
Matched filters play a central role in gravitational-wave astronomy. The first observation of gravitational waves was based on large-scale filtering of each detector's output for signals resembling the expected shape, followed by subsequent screening for coincident and coherent triggers between both instruments. False-alarm rates, and with that, the statistical significance of the detection were then assessed using resampling methods. Inference on the astrophysical source parameters was completed using Bayesian methods based on parameterized theoretical models for the signal waveform and (again) on the Whittle likelihood.
Seismology
Matched filters find use in seismology to detect similar earthquake or other seismic signals, often using multicomponent and/or multichannel empirically determined templates. Matched filtering applications in seismology include the generation of large event catalogues to study earthquake seismicity and volcanic activity, and in the global detection of nuclear explosions.
Biology
Animals living in relatively static environments would have relatively fixed features of the environment to perceive. This allows the evolution of filters that match the expected signal with the highest signal-to-noise ratio, the matched filter. Sensors that perceive the world "through such a 'matched filter' severely limits the amount of information the brain can pick up from the outside world, but it frees the brain from the need to perform more intricate computations to extract the information finally needed for fulfilling a particular task."
See also
Periodogram
Filtered backprojection (Radon transform)
Digital filter
Statistical signal processing
Whittle likelihood
Profile likelihood
Detection theory
Multiple comparisons problem
Channel capacity
Noisy-channel coding theorem
Spectral density estimation
Least mean squares (LMS) filter
Wiener filter
MUltiple SIgnal Classification (MUSIC), a popular parametric superresolution method
SAMV
Notes
References
Further reading
Statistical signal processing
Signal estimation
Telecommunication theory
Gravitational-wave astronomy | Matched filter | [
"Physics",
"Astronomy",
"Engineering"
] | 3,403 | [
"Statistical signal processing",
"Astrophysics",
"Engineering statistics",
"Gravitational-wave astronomy",
"Astronomical sub-disciplines"
] |
1,135,324 | https://en.wikipedia.org/wiki/Conjugate%20variables | Conjugate variables are pairs of variables mathematically defined in such a way that they become Fourier transform duals, or more generally are related through Pontryagin duality. The duality relations lead naturally to an uncertainty relation—in physics called the Heisenberg uncertainty principle—between them. In mathematical terms, conjugate variables are part of a symplectic basis, and the uncertainty relation corresponds to the symplectic form. Also, conjugate variables are related by Noether's theorem, which states that if the laws of physics are invariant with respect to a change in one of the conjugate variables, then the other conjugate variable will not change with time (i.e. it will be conserved).
Conjugate variables in thermodynamics are widely used.
Examples
There are many types of conjugate variables, depending on the type of work a certain system is doing (or is being subjected to). Examples of canonically conjugate variables include the following:
Time and frequency: the longer a musical note is sustained, the more precisely we know its frequency, but it spans a longer duration and is thus a more-distributed event or 'instant' in time. Conversely, a very short musical note becomes just a click, and so is more temporally-localized, but one can't determine its frequency very accurately.
Doppler and range: the more we know about how far away a radar target is, the less we can know about the exact velocity of approach or retreat, and vice versa. In this case, the two dimensional function of doppler and range is known as a radar ambiguity function or radar ambiguity diagram.
Surface energy: γ dA (γ = surface tension; A = surface area).
Elastic stretching: F dL (F = elastic force; L = length stretched).
Energy and time: their product has the units of action (joule-seconds); this pair underlies the energy-time uncertainty relation.
Derivatives of action
In classical physics, the derivatives of action are conjugate variables to the quantity with respect to which one is differentiating. In quantum mechanics, these same pairs of variables are related by the Heisenberg uncertainty principle.
The energy of a particle at a certain event is the negative of the derivative of the action along a trajectory of that particle ending at that event with respect to the time of the event.
The linear momentum of a particle is the derivative of its action with respect to its position.
The angular momentum of a particle is the derivative of its action with respect to its orientation (angular position).
The mass-moment () of a particle is the negative of the derivative of its action with respect to its rapidity.
The electric potential (φ, voltage) and electric charge in a quantum LC circuit.
The magnetic potential (A) at an event is the derivative of the action of the electromagnetic field with respect to the density of (free) electric current at that event.
The electric field (E) at an event is the derivative of the action of the electromagnetic field with respect to the electric polarization density at that event.
The magnetic induction (B) at an event is the derivative of the action of the electromagnetic field with respect to the magnetization at that event.
The Newtonian gravitational potential at an event is the negative of the derivative of the action of the Newtonian gravitation field with respect to the mass density at that event.
Quantum theory
In quantum mechanics, conjugate variables are realized as pairs of observables whose operators do not commute. In conventional terminology, they are said to be incompatible observables. Consider, as an example, the measurable quantities given by position and momentum . In the quantum-mechanical formalism, the two observables and correspond to operators and , which necessarily satisfy the canonical commutation relation:
For every non-zero commutator of two operators, there exists an "uncertainty principle", which in our present example may be expressed in the form:
In this ill-defined notation, and denote "uncertainty" in the simultaneous specification of and . A more precise, and statistically complete, statement involving the standard deviation reads:
More generally, for any two observables and corresponding to operators and , the generalized uncertainty principle is given by:
Now suppose we were to explicitly define two particular operators, assigning each a specific mathematical form, such that the pair satisfies the aforementioned commutation relation. It's important to remember that our particular "choice" of operators would merely reflect one of many equivalent, or isomorphic, representations of the general algebraic structure that fundamentally characterizes quantum mechanics. The generalization is provided formally by the Heisenberg Lie algebra , with a corresponding group called the Heisenberg group .
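A simple numerical check of the position-momentum uncertainty relation discussed above is sketched below (illustrative Python/NumPy with ħ set to 1; the Gaussian wave packet and grid parameters are arbitrary choices). A Gaussian packet saturates the bound, so the computed product of standard deviations should come out close to ħ/2.

```python
import numpy as np

hbar = 1.0
N, L = 4096, 160.0                        # grid points and box length (arbitrary units)
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

sigma = 1.7                               # chosen position-space width of the packet
psi = np.exp(-x**2 / (4 * sigma**2))      # Gaussian wave packet, zero mean momentum
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalise

# Position uncertainty from |psi(x)|^2.
prob_x = np.abs(psi)**2 * dx
mean_x = np.sum(x * prob_x)
sigma_x = np.sqrt(np.sum((x - mean_x)**2 * prob_x))

# Momentum-space amplitudes via FFT; p = hbar * k on the FFT frequency grid.
phi = np.fft.fft(psi)
p = hbar * 2 * np.pi * np.fft.fftfreq(N, d=dx)
prob_p = np.abs(phi)**2
prob_p /= prob_p.sum()                    # normalise as a discrete distribution over p bins
mean_p = np.sum(p * prob_p)
sigma_p = np.sqrt(np.sum((p - mean_p)**2 * prob_p))

print(f"sigma_x * sigma_p = {sigma_x * sigma_p:.4f}  (hbar/2 = {hbar / 2:.4f})")
# A Gaussian packet saturates the bound, so the product comes out very close to 0.5.
```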
Fluid mechanics
In Hamiltonian fluid mechanics and quantum hydrodynamics, the action itself (or velocity potential) is the conjugate variable of the density (or probability density).
See also
Canonical coordinates
Notes
Classical mechanics
Quantum mechanics | Conjugate variables | [
"Physics"
] | 1,003 | [
"Quantum mechanics",
"Theoretical physics",
"Mechanics",
"Classical mechanics"
] |
1,135,333 | https://en.wikipedia.org/wiki/Ambiguity%20function | In pulsed radar and sonar signal processing, an ambiguity function is a two-dimensional function of propagation delay and Doppler frequency , . It represents the distortion of a returned pulse due to the receiver matched filter (commonly, but not exclusively, used in pulse compression radar) of the return from a moving target. The ambiguity function is defined by the properties of the pulse and of the filter, and not any particular target scenario.
Many definitions of the ambiguity function exist; some are restricted to narrowband signals and others are suitable to describe the delay and Doppler relationship of wideband signals. Often the definition of the ambiguity function is given as the magnitude squared of other definitions (Weiss).
For a given complex baseband pulse , the narrowband ambiguity function is given by
where denotes the complex conjugate and is the imaginary unit. Note that for zero Doppler shift (), this reduces to the autocorrelation of . A more concise way of representing the ambiguity function consists of examining its one-dimensional zero-delay and zero-Doppler "cuts". The matched filter output as a function of time (the signal one would observe in a radar system) is a Doppler cut, with the constant frequency given by the target's Doppler shift.
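A discretised version of the narrowband ambiguity function can be computed directly from a definition of this form. The sketch below (illustrative Python/NumPy) uses one common convention, χ(τ, f) = ∫ s(t) s*(t − τ) e^{j2πft} dt; as noted above, sign and normalisation conventions differ between texts, and the pulse and grid parameters are invented for the example.

```python
import numpy as np

def ambiguity(s, fs, delays, dopplers):
    """|chi(tau, f)| on a grid, using chi(tau, f) = sum_t s(t) conj(s(t - tau)) e^{j 2 pi f t} / fs."""
    n = s.size
    t = np.arange(n) / fs
    chi = np.zeros((len(delays), len(dopplers)), dtype=complex)
    for i, tau in enumerate(delays):
        shift = int(round(tau * fs))            # delay expressed in whole samples
        s_shift = np.roll(s, shift)             # s(t - tau), circularly shifted
        if shift > 0:
            s_shift[:shift] = 0                 # zero-pad instead of wrapping around
        elif shift < 0:
            s_shift[shift:] = 0
        for j, f in enumerate(dopplers):
            chi[i, j] = np.sum(s * np.conj(s_shift) * np.exp(2j * np.pi * f * t)) / fs
    return np.abs(chi)

# Example: rectangular pulse of duration T; its zero-Doppler cut is a triangle
# and its zero-delay cut follows a |sinc|-like shape, as discussed later.
fs, T = 1000.0, 0.1
s = np.ones(int(T * fs), dtype=complex)
delays = np.linspace(-T, T, 81)
dopplers = np.linspace(-100.0, 100.0, 81)
A = ambiguity(s, fs, delays, dopplers)
print("peak value |chi(0,0)| =", A.max(), " (equals the pulse energy, here T =", T, ")")
```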
Background and motivation
Pulse-Doppler radar equipment sends out a series of radio frequency pulses. Each pulse has a certain shape (waveform)—how long the pulse is, what its frequency is, whether the frequency changes during the pulse, and so on. If the waves reflect off a single object, the detector will see a signal which, in the simplest case, is a copy of the original pulse but delayed by a certain time —related to the object's distance—and shifted by a certain frequency —related to the object's velocity (Doppler shift). If the original emitted pulse waveform is , then the detected signal (neglecting noise, attenuation, and distortion, and wideband corrections) will be:
The detected signal will never be exactly equal to any because of noise. Nevertheless, if the detected signal has a high correlation with , for a certain delay and Doppler shift , then that suggests that there is an object with . Unfortunately, this procedure may yield false positives, i.e. wrong values which are nevertheless highly correlated with the detected signal. In this sense, the detected signal may be ambiguous.
The ambiguity occurs specifically when there is a high correlation between and for . This motivates the ambiguity function . The defining property of is that the correlation between and is equal to .
Different pulse shapes (waveforms) have different ambiguity functions, and the ambiguity function is relevant when choosing what pulse to use.
The function is complex-valued; the degree of "ambiguity" is related to its magnitude .
Relationship to time–frequency distributions
The ambiguity function plays a key role in the field of time–frequency signal processing, as it is related to the Wigner–Ville distribution by a 2-dimensional Fourier transform. This relationship is fundamental to the formulation of other time–frequency distributions: the bilinear time–frequency distributions are obtained by a 2-dimensional filtering in the ambiguity domain (that is, the ambiguity function of the signal). This class of distribution may be better adapted to the signals considered.
Moreover, the ambiguity distribution can be seen as the short-time Fourier transform of a signal using the signal itself as the window function. This remark has been used to define an ambiguity distribution over the time-scale domain instead of the time-frequency domain.
Wideband ambiguity function
The wideband ambiguity function of is:
where is a time scale factor of the received signal relative to the transmitted signal given by:
for a target moving with constant radial velocity v. The reflection of the signal is represented with compression (or expansion) in time by the factor , which is equivalent to a compression by the factor in the frequency domain (with an amplitude scaling). When the wave speed in the medium is sufficiently faster than the target speed, as is common with radar, this compression in frequency is closely approximated by a shift in frequency Δf = fc*v/c (known as the Doppler shift). For a narrow band signal, this approximation results in the narrowband ambiguity function given above, which can be computed efficiently by making use of the FFT algorithm.
Ideal ambiguity function
An ambiguity function of interest is a 2-dimensional Dirac delta function or "thumbtack" function; that is, a function which is infinite at (0,0) and zero elsewhere.
An ambiguity function of this kind would be somewhat of a misnomer; it would have no ambiguities at all, and both the zero-delay and zero-Doppler cuts would be an impulse. This is not usually desirable (if a target has any Doppler shift from an unknown velocity it will disappear from the radar picture), but if Doppler processing is independently performed, knowledge of the precise Doppler frequency allows ranging without interference from any other targets which are not also moving at exactly the same velocity.
This type of ambiguity function is produced by ideal white noise (infinite in duration and infinite in bandwidth). However, this would require infinite power and is not physically realizable. There is no pulse that will produce from the definition of the ambiguity function. Approximations exist, however, and noise-like signals such as binary phase-shift keyed waveforms using maximal-length sequences are the best known performers in this regard.
Properties
(1) Maximum value
(2) Symmetry about the origin
(3) Volume invariance
(4) Modulation by a linear FM signal
(5) Frequency energy spectrum
(6) Upper bounds for and lower bounds for exist for the power integrals
.
These bounds are sharp and are achieved if and only if is a Gaussian function.
Square pulse
Consider a simple square pulse of duration and amplitude :
where is the Heaviside step function. The matched filter output is given by the autocorrelation of the pulse, which is a triangular pulse of height and duration (the zero-Doppler cut). However, if the measured pulse has a frequency offset due to Doppler shift, the matched filter output is distorted into a sinc function. The greater the Doppler shift, the smaller the peak of the resulting sinc, and the more difficult it is to detect the target.
In general, the square pulse is not a desirable waveform from a pulse compression standpoint, because the autocorrelation function is too short in amplitude, making it difficult to detect targets in noise, and too wide in time, making it difficult to discern multiple overlapping targets.
LFM pulse
A commonly used radar or sonar pulse is the linear frequency modulated (LFM) pulse (or "chirp"). It has the advantage of greater bandwidth while keeping the pulse duration short and the envelope constant. A constant-envelope LFM pulse has an ambiguity function similar to that of the square pulse, except that it is skewed in the delay-Doppler plane. Slight Doppler mismatches for the LFM pulse do not change the general shape of the pulse and reduce the amplitude very little, but they do appear to shift the pulse in time. Thus, an uncompensated Doppler shift changes the target's apparent range; this phenomenon is called range-Doppler coupling.
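The range-Doppler coupling described above can be demonstrated with a short simulation (illustrative Python/NumPy; the chirp rate, pulse length and Doppler shifts are invented for the example). An uncompensated Doppler shift moves the peak of the matched-filter output in time rather than destroying it.

```python
import numpy as np

fs = 100_000.0                      # sample rate (Hz)
T = 0.01                            # pulse duration (s)
B = 10_000.0                        # swept bandwidth (Hz)
t = np.arange(int(T * fs)) / fs
k = B / T                           # chirp rate (Hz/s)
s = np.exp(1j * np.pi * k * t**2)   # complex baseband LFM ("chirp") pulse

def peak_delay(doppler_hz):
    """Apparent delay (s) of the matched-filter peak for a Doppler-shifted echo."""
    echo = s * np.exp(2j * np.pi * doppler_hz * t)      # Doppler shift only, no true delay
    mf_out = np.correlate(echo, s, mode="full")         # correlate with the transmitted pulse
    lag = np.argmax(np.abs(mf_out)) - (len(s) - 1)      # lag of the peak, in samples
    return lag / fs

for fd in (0.0, 500.0, 1000.0):
    print(f"Doppler {fd:6.1f} Hz -> apparent extra delay {peak_delay(fd) * 1e3:7.3f} ms")
# With these conventions the peak shifts by approximately -f_d / k seconds,
# i.e. the target's apparent range changes with its Doppler shift.
```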
Multistatic ambiguity functions
The ambiguity function can be extended to multistatic radars, which comprise multiple non-colocated transmitters and/or receivers (and can include bistatic radar as a special case).
For these types of radar, the simple linear relationship between time and range that exists in the monostatic case no longer applies, and is instead dependent on the specific geometry – i.e. the relative location of transmitter(s), receiver(s) and target. Therefore, the multistatic ambiguity function is mostly usefully defined as a function of two- or three-dimensional position and velocity vectors for a given multistatic geometry and transmitted waveform.
Just as the monostatic ambiguity function is naturally derived from the matched filter, the multistatic ambiguity function is derived from the corresponding optimal multistatic detector – i.e. that which maximizes the probability of detection given a fixed probability of false alarm through joint processing of the signals at all receivers. The nature of this detection algorithm depends on whether or not the target fluctuations observed by each bistatic pair within the multistatic system are mutually correlated. If so, the optimal detector performs phase coherent summation of received signals which can result in very high target location accuracy. If not, the optimal detector performs incoherent summation of received signals which gives diversity gain. Such systems are sometimes described as MIMO radars due to the information theoretic similarities to MIMO communication systems.
Ambiguity function plane
An ambiguity function plane can be viewed as a combination of an infinite number of radial lines. Each radial line can be viewed as the fractional Fourier transform of a stationary random process.
Example
The ambiguity function (AF) is closely related to the Wigner distribution function (WDF), to which it is connected by a two-dimensional Fourier transform. For a signal consisting of a single term, the WDF and the AF each contain a single component. For a signal consisting of two terms, auto terms and cross terms appear in both distributions; for the ambiguity function, the auto terms are always located near the origin.
See also
Matched filter
Pulse compression
Pulse-Doppler radar
Digital signal processing
Philip Woodward
References
Further reading
Richards, Mark A. Fundamentals of Radar Signal Processing. McGraw–Hill Inc., 2005. .
Ipatov, Valery P. Spread Spectrum and CDMA. Wiley & Sons, 2005.
Chernyak V.S. Fundamentals of Multisite Radar Systems, CRC Press, 1998.
Solomon W. Golomb, and Guang Gong. Signal design for good correlation: for wireless communication, cryptography, and radar. Cambridge University Press, 2005.
M. Soltanalian. Signal Design for Active Sensing and Communications. Uppsala Dissertations from the Faculty of Science and Technology (printed by Elanders Sverige AB), 2014.
Nadav Levanon, and Eli Mozeson. Radar signals. Wiley. com, 2004.
Augusto Aubry, Antonio De Maio, Bo Jiang, and Shuzhong Zhang. "Ambiguity function shaping for cognitive radar via complex quartic optimization." IEEE Transactions on Signal Processing 61 (2013): 5603-5619.
Mojtaba Soltanalian, and Petre Stoica. "Computational design of sequences with good correlation properties." IEEE Transactions on Signal Processing, 60.5 (2012): 2180-2193.
G. Krötzsch, M. A. Gómez-Méndez, "Transformada Discreta de Ambigüedad", Revista Mexicana de Física, Vol. 63, pp. 505–515 (2017).
Jian-Jiun Ding, Time-Frequency Analysis and Wavelet Transform (course notes), Department of Electrical Engineering, National Taiwan University, 2021.
Time–frequency analysis
Signal processing | Ambiguity function | [
"Physics",
"Technology",
"Engineering"
] | 2,349 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing",
"Spectrum (physical sciences)",
"Time–frequency analysis",
"Frequency-domain analysis"
] |
1,136,348 | https://en.wikipedia.org/wiki/Hoeffding%27s%20inequality | In probability theory, Hoeffding's inequality provides an upper bound on the probability that the sum of bounded independent random variables deviates from its expected value by more than a certain amount. Hoeffding's inequality was proven by Wassily Hoeffding in 1963.
Hoeffding's inequality is a special case of the Azuma–Hoeffding inequality and McDiarmid's inequality. It is similar to the Chernoff bound, but tends to be less sharp, in particular when the variance of the random variables is small. It is similar to, but incomparable with, one of Bernstein's inequalities.
Statement
Let $X_1, \ldots, X_n$ be independent random variables such that $a_i \leq X_i \leq b_i$ almost surely. Consider the sum of these random variables,
$$S_n = X_1 + \cdots + X_n.$$
Then Hoeffding's theorem states that, for all $t > 0$,
$$\operatorname{P}\left(S_n - \operatorname{E}[S_n] \geq t\right) \leq \exp\left(-\frac{2t^2}{\sum_{i=1}^n (b_i - a_i)^2}\right),$$
$$\operatorname{P}\left(\left|S_n - \operatorname{E}[S_n]\right| \geq t\right) \leq 2\exp\left(-\frac{2t^2}{\sum_{i=1}^n (b_i - a_i)^2}\right).$$
Here $\operatorname{E}[S_n]$ is the expected value of $S_n$.
Note that the inequalities also hold when the have been obtained using sampling without replacement; in this case the random variables are not independent anymore. A proof of this statement can be found in Hoeffding's paper. For slightly better bounds in the case of sampling without replacement, see for instance the paper by .
Generalization
Let be independent observations such that and . Let . Then, for any ,
Special Case: Bernoulli RVs
Suppose $a_i = 0$ and $b_i = 1$ for all $i$. This can occur when the $X_i$ are independent Bernoulli random variables, though they need not be identically distributed. Then we get the inequality
$$\operatorname{P}\left(S_n - \operatorname{E}[S_n] \geq t\right) \leq \exp\left(-2t^2/n\right),$$
or equivalently,
$$\operatorname{P}\left(\tfrac{1}{n}S_n - \operatorname{E}\left[\tfrac{1}{n}S_n\right] \geq \varepsilon\right) \leq \exp\left(-2n\varepsilon^2\right)$$
for all $\varepsilon \geq 0$. This is a version of the additive Chernoff bound which is more general, since it allows for random variables that take values between zero and one, but also weaker, since the Chernoff bound gives a better tail bound when the random variables have small variance.
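A quick simulation illustrates how conservative this bound can be for Bernoulli variables. The sketch below (illustrative Python/NumPy; the values of p, n and ε are arbitrary choices) compares the empirical tail probability of the sample mean with the bound exp(−2nε²).

```python
import numpy as np

rng = np.random.default_rng(3)

n, p, eps = 200, 0.3, 0.1
trials = 100_000

# Empirical probability that the sample mean exceeds its expectation by eps or more.
means = rng.binomial(n, p, size=trials) / n
empirical = np.mean(means - p >= eps)

hoeffding_bound = np.exp(-2 * n * eps**2)
print(f"empirical tail probability : {empirical:.5f}")
print(f"Hoeffding bound            : {hoeffding_bound:.5f}")
# The bound holds but is typically far from tight here; a Chernoff-type bound
# that uses the variance p(1-p) would be sharper, as noted in the text.
```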
General case of bounded from above random variables
Hoeffding's inequality can be extended to the case of bounded from above random variables.
Let be independent random variables such that and almost surely.
Denote by
Hoeffding's inequality for bounded from above random variables states that for all ,
In particular, if for all ,
then for all ,
General case of sub-Gaussian random variables
The proof of Hoeffding's inequality can be generalized to any sub-Gaussian distribution. Recall that a random variable is called sub-Gaussian, if
for some . For any bounded variable , for for some sufficiently large . Then for all so taking yields
for . So every bounded variable is sub-Gaussian.
For a random variable , the following norm is finite if and only if is sub-Gaussian:
Then let be zero-mean independent sub-Gaussian random variables, the general version of the Hoeffding's inequality states that:
where c > 0 is an absolute constant.
Proof
The proof of Hoeffding's inequality follows similarly to concentration inequalities like Chernoff bounds. The main difference is the use of Hoeffding's Lemma:
Suppose is a real random variable such that almost surely. Then
Using this lemma, we can prove Hoeffding's inequality. As in the theorem statement, suppose are independent random variables such that almost surely for all i, and let .
Then for , Markov's inequality and the independence of implies:
This upper bound is tightest for the value of that minimizes the expression inside the exponential. This can be done easily by optimizing a quadratic, giving
Writing the above bound for this value of , we get the desired bound:
Usage
Confidence intervals
Hoeffding's inequality can be used to derive confidence intervals. We consider a coin that shows heads with probability and tails with probability . We toss the coin times, generating samples (which are i.i.d Bernoulli random variables). The expected number of times the coin comes up heads is . Furthermore, the probability that the coin comes up heads at least times can be exactly quantified by the following expression:
where is the number of heads in coin tosses.
When for some , Hoeffding's inequality bounds this probability by a term that is exponentially small in :
Since this bound holds on both sides of the mean, Hoeffding's inequality implies that the number of heads that we see is concentrated around its mean, with exponentially small tail.
Thinking of as the "observed" mean, this probability can be interpreted as the level of significance (probability of making an error) for a confidence interval around of size 2:
Finding for opposite inequality sign in the above, i.e. that violates inequality but not equality above, gives us:
Therefore, we require at least samples to acquire a -confidence interval .
Hence, the cost of acquiring the confidence interval is sublinear in terms of confidence level and quadratic in terms of precision. Note that there are more efficient methods of estimating a confidence interval.
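Rearranging the two-sided bound gives a simple sample-size rule: a two-sided confidence interval of half-width ε at significance level α is guaranteed once n ≥ ln(2/α) / (2ε²). The sketch below (illustrative Python; the chosen α and ε values are arbitrary) evaluates this requirement.

```python
import math

def hoeffding_sample_size(alpha, eps):
    """Smallest n with 2*exp(-2*n*eps**2) <= alpha, i.e. n >= ln(2/alpha) / (2*eps**2)."""
    return math.ceil(math.log(2 / alpha) / (2 * eps**2))

for alpha, eps in [(0.05, 0.05), (0.05, 0.01), (0.01, 0.01)]:
    print(f"alpha={alpha}, eps={eps}  ->  n >= {hoeffding_sample_size(alpha, eps)}")
# For example, a 95% interval of half-width 0.01 needs on the order of 18,445 samples.
```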
See also
Concentration inequality – a summary of tail-bounds on random variables.
Hoeffding's lemma
Bernstein inequalities (probability theory)
Notes
References
Probabilistic inequalities | Hoeffding's inequality | [
"Mathematics"
] | 1,048 | [
"Theorems in probability theory",
"Probabilistic inequalities",
"Inequalities (mathematics)"
] |
1,137,227 | https://en.wikipedia.org/wiki/DNA%20methylation | DNA methylation is a biological process by which methyl groups are added to the DNA molecule. Methylation can change the activity of a DNA segment without changing the sequence. When located in a gene promoter, DNA methylation typically acts to repress gene transcription. In mammals, DNA methylation is essential for normal development and is associated with a number of key processes including genomic imprinting, X-chromosome inactivation, repression of transposable elements, aging, and carcinogenesis.
As of 2016, two nucleobases have been found on which natural, enzymatic DNA methylation takes place: adenine and cytosine. The modified bases are N6-methyladenine, 5-methylcytosine and N4-methylcytosine.
Cytosine methylation is widespread in both eukaryotes and prokaryotes, even though the rate of cytosine DNA methylation can differ greatly between species: 14% of cytosines are methylated in Arabidopsis thaliana, 4% to 8% in Physarum, 7.6% in Mus musculus, 2.3% in Escherichia coli, 0.03% in Drosophila; methylation is essentially undetectable in Dictyostelium; and virtually absent (0.0002 to 0.0003%) from Caenorhabditis or fungi such as Saccharomyces cerevisiae and S. pombe (but not N. crassa). Adenine methylation has been observed in bacterial, plant, and recently in mammalian DNA, but has received considerably less attention.
Methylation of cytosine to form 5-methylcytosine occurs at the same 5 position on the pyrimidine ring where the DNA base thymine's methyl group is located; the same position distinguishes thymine from the analogous RNA base uracil, which has no methyl group. Spontaneous deamination of 5-methylcytosine converts it to thymine. This results in a T:G mismatch. Repair mechanisms then correct it back to the original C:G pair; alternatively, they may substitute A for G, turning the original C:G pair into a T:A pair, effectively changing a base and introducing a mutation. This misincorporated base will not be corrected during DNA replication as thymine is a DNA base. If the mismatch is not repaired and the cell enters the cell cycle, the strand carrying the T will be complemented by an A in one of the daughter cells, such that the mutation becomes permanent. The near-universal use of thymine exclusively in DNA and uracil exclusively in RNA may have evolved as an error-control mechanism, to facilitate the removal of uracils generated by the spontaneous deamination of cytosine. DNA methylation, as well as many of its contemporary DNA methyltransferases, is thought to have evolved from primitive RNA methylation activity in early life; this hypothesis is supported by several lines of evidence.
In plants and other organisms, DNA methylation is found in three different sequence contexts: CG (or CpG), CHG or CHH (where H corresponds to A, T or C). In mammals, however, DNA methylation is almost exclusively found in CpG dinucleotides, with the cytosines on both strands usually being methylated. Non-CpG methylation can however be observed in embryonic stem cells, and has also been indicated in neural development. Furthermore, non-CpG methylation has also been observed in hematopoietic progenitor cells, and it occurred mainly in a CpApC sequence context.
Conserved function of DNA methylation
The DNA methylation landscape of vertebrates is very particular compared to other organisms. In mammals, around 75% of CpG dinucleotides are methylated in somatic cells, and DNA methylation appears as a default state that has to be specifically excluded from defined locations. By contrast, the genome of most plants, invertebrates, fungi, or protists show "mosaic" methylation patterns, where only specific genomic elements are targeted, and they are characterized by the alternation of methylated and unmethylated domains.
High CpG methylation in mammalian genomes has an evolutionary cost because it increases the frequency of spontaneous mutations. Loss of amino-groups occurs with a high frequency for cytosines, with different consequences depending on their methylation. Methylated C residues spontaneously deaminate to form T residues over time; hence CpG dinucleotides steadily deaminate to TpG dinucleotides, which is evidenced by the under-representation of CpG dinucleotides in the human genome (they occur at only 21% of the expected frequency). (On the other hand, spontaneous deamination of unmethylated C residues gives rise to U residues, a change that is quickly recognized and repaired by the cell.)
CpG islands
In mammals, the only exception for this global CpG depletion resides in a specific category of GC- and CpG-rich sequences termed CpG islands that are generally unmethylated and therefore retained the expected CpG content. CpG islands are usually defined as regions with: 1) a length greater than 200bp, 2) a G+C content greater than 50%, 3) a ratio of observed to expected CpG greater than 0.6, although other definitions are sometimes used. Excluding repeated sequences, there are around 25,000 CpG islands in the human genome, 75% of which being less than 850bp long. They are major regulatory units and around 50% of CpG islands are located in gene promoter regions, while another 25% lie in gene bodies, often serving as alternative promoters. Reciprocally, around 60-70% of human genes have a CpG island in their promoter region. The majority of CpG islands are constitutively unmethylated and enriched for permissive chromatin modification such as H3K4 methylation. In somatic tissues, only 10% of CpG islands are methylated, the majority of them being located in intergenic and intragenic regions.
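The three criteria above can be evaluated mechanically for a candidate sequence. The short sketch below (illustrative Python; the helper names and the toy sequence are invented, and real CpG-island callers use sliding windows and additional filtering) computes the length, G+C content and the observed-to-expected CpG ratio, using the common expectation (#C × #G) / length.

```python
def cpg_island_stats(seq):
    """Return (length, GC fraction, observed/expected CpG ratio) for a DNA sequence."""
    seq = seq.upper()
    length = len(seq)
    c = seq.count("C")
    g = seq.count("G")
    gc_fraction = (c + g) / length
    observed_cpg = seq.count("CG")
    # Gardiner-Garden & Frommer style expectation: (#C * #G) / length
    expected_cpg = (c * g) / length if c and g else 0.0
    ratio = observed_cpg / expected_cpg if expected_cpg else 0.0
    return length, gc_fraction, ratio

def looks_like_cpg_island(seq):
    length, gc, ratio = cpg_island_stats(seq)
    return length > 200 and gc > 0.5 and ratio > 0.6

example = "CG" * 150 + "AT" * 20     # toy CpG-rich sequence, 340 bp
print(cpg_island_stats(example))
print("CpG island by these criteria:", looks_like_cpg_island(example))
```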
Repression of CpG-dense promoters
DNA methylation was probably present at some extent in very early eukaryote ancestors. In virtually every organism analyzed, methylation in promoter regions correlates negatively with gene expression. CpG-dense promoters of actively transcribed genes are never methylated, but, reciprocally, transcriptionally silent genes do not necessarily carry a methylated promoter. In mouse and human, around 60–70% of genes have a CpG island in their promoter region and most of these CpG islands remain unmethylated independently of the transcriptional activity of the gene, in both differentiated and undifferentiated cell types. Of note, whereas DNA methylation of CpG islands is unambiguously linked with transcriptional repression, the function of DNA methylation in CG-poor promoters remains unclear; albeit there is little evidence that it could be functionally relevant.
DNA methylation may affect the transcription of genes in two ways. First, the methylation of DNA itself may physically impede the binding of transcriptional proteins to the gene, and second, and likely more important, methylated DNA may be bound by proteins known as methyl-CpG-binding domain proteins (MBDs). MBD proteins then recruit additional proteins to the locus, such as histone deacetylases and other chromatin remodeling proteins that can modify histones, thereby forming compact, inactive chromatin, termed heterochromatin. This link between DNA methylation and chromatin structure is very important. In particular, loss of methyl-CpG-binding protein 2 (MeCP2) has been implicated in Rett syndrome; and methyl-CpG-binding domain protein 2 (MBD2) mediates the transcriptional silencing of hypermethylated genes in cancer.
Repression of transposable elements
DNA methylation is a powerful transcriptional repressor, at least in CpG dense contexts. Transcriptional repression of protein-coding genes appears essentially limited to very specific classes of genes that need to be silent permanently and in almost all tissues. While DNA methylation does not have the flexibility required for the fine-tuning of gene regulation, its stability is perfect to ensure the permanent silencing of transposable elements. Transposon control is one of the most ancient functions of DNA methylation that is shared by animals, plants and multiple protists. It is even suggested that DNA methylation evolved precisely for this purpose.
Genome expansion
DNA methylation of transposable elements has been known to be related to genome expansion. However, the evolutionary driver for genome expansion remains unknown. There is a clear correlation between the size of the genome and CpG, suggesting that the DNA methylation of transposable elements led to a noticeable increase in the mass of DNA.
Methylation of the gene body of highly transcribed genes
A function of DNA methylation that appears even more conserved than transposon silencing is its positive correlation with gene expression. In almost all species where DNA methylation is present, DNA methylation is especially enriched in the body of highly transcribed genes. The function of gene body methylation is not well understood. A body of evidence suggests that it could regulate splicing and suppress the activity of intragenic transcriptional units (cryptic promoters or transposable elements). Gene-body methylation appears closely tied to H3K36 methylation. In yeast and mammals, H3K36 methylation is highly enriched in the body of highly transcribed genes. In yeast at least, H3K36me3 recruits enzymes such as histone deacetylases to condense chromatin and prevent the activation of cryptic start sites. In mammals, the PWWP domains of DNMT3a and DNMT3b bind to H3K36me3 and the two enzymes are recruited to the body of actively transcribed genes.
In mammals
During embryonic development
DNA methylation patterns are largely erased and then re-established between generations in mammals. Almost all of the methylations from the parents are erased, first during gametogenesis, and again in early embryogenesis, with demethylation and remethylation occurring each time. Demethylation in early embryogenesis occurs in the preimplantation period in two stages – initially in the zygote, then during the first few embryonic replication cycles of morula and blastula. A wave of methylation then takes place during the implantation stage of the embryo, with CpG islands protected from methylation. This results in global repression and allows housekeeping genes to be expressed in all cells. In the post-implantation stage, methylation patterns are stage- and tissue-specific, with changes that would define each individual cell type lasting stably over a long period. Studies on rat limb buds during embryogenesis have further illustrated the dynamic nature of DNA methylation in development. In this context, variations in global DNA methylation were observed across different developmental stages and culture conditions, highlighting the intricate regulation of methylation during organogenesis and its potential implications for regenerative medicine strategies.
Whereas DNA methylation is not necessary per se for transcriptional silencing, it is thought nonetheless to represent a "locked" state that definitively inactivates transcription. In particular, DNA methylation appears critical for the maintenance of mono-allelic silencing in the context of genomic imprinting and X chromosome inactivation. In these cases, expressed and silent alleles differ by their methylation status, and loss of DNA methylation results in loss of imprinting and re-expression of Xist in somatic cells. During embryonic development, few genes change their methylation status, with the important exception of many genes specifically expressed in the germline. DNA methylation appears absolutely required in differentiated cells, as knockout of any of the three competent DNA methyltransferases results in embryonic or post-partum lethality. By contrast, DNA methylation is dispensable in undifferentiated cell types, such as the inner cell mass of the blastocyst, primordial germ cells or embryonic stem cells. Since DNA methylation appears to directly regulate only a limited number of genes, how precisely the absence of DNA methylation causes the death of differentiated cells remains an open question.
Due to the phenomenon of genomic imprinting, maternal and paternal genomes are differentially marked and must be properly reprogrammed every time they pass through the germline. Therefore, during gametogenesis, primordial germ cells must have their original biparental DNA methylation patterns erased and re-established based on the sex of the transmitting parent. After fertilization, the paternal and maternal genomes are once again demethylated and remethylated (except for differentially methylated regions associated with imprinted genes). This reprogramming is likely required for totipotency of the newly formed embryo and erasure of acquired epigenetic changes.
In cancer
In many disease processes, such as cancer, gene promoter CpG islands acquire abnormal hypermethylation, which results in transcriptional silencing that can be inherited by daughter cells following cell division. Alterations of DNA methylation have been recognized as an important component of cancer development. Hypomethylation, in general, arises earlier and is linked to chromosomal instability and loss of imprinting, whereas hypermethylation is associated with promoters and can arise secondary to gene (oncogene suppressor) silencing, but might be a target for epigenetic therapy. In developmental contexts, dynamic changes in DNA methylation patterns also have significant implications. For instance, in rat limb buds, shifts in methylation status were associated with different stages of chondrogenesis, suggesting a potential link between DNA methylation and the progression of certain developmental processes.
Global hypomethylation has also been implicated in the development and progression of cancer through different mechanisms. Typically, there is hypermethylation of tumor suppressor genes and hypomethylation of oncogenes.
Generally, in progression to cancer, hundreds of genes are silenced or activated. Although silencing of some genes in cancers occurs by mutation, a large proportion of carcinogenic gene silencing is a result of altered DNA methylation (see DNA methylation in cancer). DNA methylation causing silencing in cancer typically occurs at multiple CpG sites in the CpG islands that are present in the promoters of protein coding genes.
Altered expressions of microRNAs also silence or activate many genes in progression to cancer (see microRNAs in cancer). Altered microRNA expression occurs through hyper/hypo-methylation of CpG sites in CpG islands in promoters controlling transcription of the microRNAs.
Silencing of DNA repair genes through methylation of CpG islands in their promoters appears to be especially important in progression to cancer (see methylation of DNA repair genes in cancer).
In atherosclerosis
Epigenetic modifications such as DNA methylation have been implicated in cardiovascular disease, including atherosclerosis. In animal models of atherosclerosis, vascular tissue, as well as blood cells such as mononuclear blood cells, exhibit global hypomethylation with gene-specific areas of hypermethylation. DNA methylation polymorphisms may be used as an early biomarker of atherosclerosis since they are present before lesions are observed, which may provide an early tool for detection and risk prevention.
Two of the cell types targeted for DNA methylation polymorphisms are monocytes and lymphocytes, which experience an overall hypomethylation. One proposed mechanism behind this global hypomethylation is elevated homocysteine levels causing hyperhomocysteinemia, a known risk factor for cardiovascular disease. High plasma levels of homocysteine inhibit DNA methyltransferases, which causes hypomethylation. Hypomethylation of DNA affects genes that alter smooth muscle cell proliferation, cause endothelial cell dysfunction, and increase inflammatory mediators, all of which are critical in forming atherosclerotic lesions. High levels of homocysteine also result in hypermethylation of CpG islands in the promoter region of the estrogen receptor alpha (ERα) gene, causing its down regulation. ERα protects against atherosclerosis due to its action as a growth suppressor, causing the smooth muscle cells to remain in a quiescent state. Hypermethylation of the ERα promoter thus allows intimal smooth muscle cells to proliferate excessively and contribute to the development of the atherosclerotic lesion.
Another gene that experiences a change in methylation status in atherosclerosis is the monocarboxylate transporter (MCT3), which produces a protein responsible for the transport of lactate and other ketone bodies out of many cell types, including vascular smooth muscle cells. In atherosclerosis patients, there is an increase in methylation of the CpG islands in exon 2, which decreases MCT3 protein expression. The downregulation of MCT3 impairs lactate transport and significantly increases smooth muscle cell proliferation, which further contributes to the atherosclerotic lesion. An ex vivo experiment showed that the demethylating agent Decitabine (5-aza-2′-deoxycytidine) induces MCT3 expression in a dose-dependent manner, as all hypermethylated sites in the exon 2 CpG island became demethylated after treatment. This may serve as a novel therapeutic agent to treat atherosclerosis, although no human studies have been conducted thus far.
In heart failure
In addition to atherosclerosis described above, specific epigenetic changes have been identified in the failing human heart. This may vary by disease etiology. For example, in ischemic heart failure DNA methylation changes have been linked to changes in gene expression that may direct gene expression associated with the changes in heart metabolism known to occur. Additional forms of heart failure (e.g. diabetic cardiomyopathy) and co-morbidities (e.g. obesity) must be explored to see how common these mechanisms are. Most strikingly, in failing human heart these changes in DNA methylation are associated with racial and socioeconomic status which further impact how gene expression is altered, and may influence how the individual's heart failure should be treated.
In aging
In humans and other mammals, DNA methylation levels can be used to accurately estimate the age of tissues and cell types, forming an accurate epigenetic clock.
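Epigenetic clocks of this kind are, at their core, penalized linear models over CpG methylation fractions (beta values). The sketch below illustrates only that arithmetic; the probe names, weights, and intercept are hypothetical stand-ins, not the coefficients of any published clock.

```python
# Illustrative sketch of the arithmetic behind an epigenetic clock:
# predicted age = intercept + sum of (weight * methylation beta value).
# All identifiers and numbers below are made up for illustration.

clock_intercept = 35.0                      # hypothetical intercept
clock_weights = {                           # hypothetical CpG probes -> weight
    "cg_probe_A": 40.0,
    "cg_probe_B": -25.0,
    "cg_probe_C": 10.0,
}

def predict_age(beta_values: dict[str, float]) -> float:
    """Linear combination of methylation beta values (0..1) plus an intercept."""
    return clock_intercept + sum(
        w * beta_values.get(cpg, 0.0) for cpg, w in clock_weights.items()
    )

sample = {"cg_probe_A": 0.60, "cg_probe_B": 0.30, "cg_probe_C": 0.80}
print(predict_age(sample))  # 35 + 24 - 7.5 + 8 = 59.5
```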
A longitudinal study of twin children showed that, between the ages of 5 and 10, there was divergence of methylation patterns due to environmental rather than genetic influences. There is a global loss of DNA methylation during aging.
In a study that analyzed the complete DNA methylomes of CD4+ T cells from a newborn, a 26-year-old individual and a 103-year-old individual, it was observed that the loss of methylation is proportional to age. Hypomethylated CpGs observed in the centenarian DNA compared with the neonate covered all genomic compartments (promoters, intergenic, intronic and exonic regions). However, some genes become hypermethylated with age, including genes for the estrogen receptor, p16, insulin-like growth factor 2, ELOVL2 and FHL2.
In exercise
High-intensity exercise has been shown to result in reduced DNA methylation in skeletal muscle. Promoter methylation of PGC-1α and PDK4 was immediately reduced after high-intensity exercise, whereas PPAR-γ methylation was not reduced until three hours after exercise. At the same time, six months of exercise in previously sedentary middle-aged men resulted in increased methylation in adipose tissue. One study showed a possible increase in global genomic DNA methylation of white blood cells with more physical activity in non-Hispanics.
In B-cell differentiation
A study that investigated the methylome of B cells along their differentiation cycle, using whole-genome bisulfite sequencing (WGBS), showed that there is a hypomethylation from the earliest stages to the most differentiated stages. The largest methylation difference is between the stages of germinal center B cells and memory B cells. Furthermore, this study showed that there is a similarity between B cell tumors and long-lived B cells in their DNA methylation signatures.
In the brain
Two reviews summarize evidence that DNA methylation alterations in brain neurons are important in learning and memory. Contextual fear conditioning (a form of associative learning) in animals, such as mice and rats, is rapid and is extremely robust in creating memories. In mice and rats, contextual fear conditioning is associated, within 1–24 hours, with altered methylation of several thousand DNA cytosines in genes of hippocampus neurons. Twenty-four hours after contextual fear conditioning, 9.2% of the genes in rat hippocampus neurons are differentially methylated. In mice, when examined at four weeks after conditioning, the hippocampus methylations and demethylations had been reset to the original naive conditions. The hippocampus is needed to form memories, but memories are not stored there. For such mice, at four weeks after contextual fear conditioning, substantial differential CpG methylations and demethylations occurred in cortical neurons during memory maintenance, and there were 1,223 differentially methylated genes in their anterior cingulate cortex. Mechanisms guiding new DNA methylations and new DNA demethylations in the hippocampus during memory establishment were summarized in 2022. That review also indicated the mechanisms by which the new patterns of methylation gave rise to new patterns of messenger RNA expression. These new messenger RNAs were then transported by messenger RNP particles (neuronal granules) to synapses of the neurons, where they could be translated into proteins. Active changes in neuronal DNA methylation and demethylation appear to act as controllers of synaptic scaling and glutamate receptor trafficking in learning and memory formation.
DNA methyltransferases (in mammals)
In mammalian cells, DNA methylation occurs mainly at the C5 position of CpG dinucleotides and is carried out by two general classes of enzymatic activities – maintenance methylation and de novo methylation.
Maintenance methylation activity is necessary to preserve DNA methylation after every cellular DNA replication cycle. Without the DNA methyltransferase (DNMT), the replication machinery itself would produce daughter strands that are unmethylated and, over time, would lead to passive demethylation. DNMT1 is the proposed maintenance methyltransferase that is responsible for copying DNA methylation patterns to the daughter strands during DNA replication. Mouse models with both copies of DNMT1 deleted are embryonic lethal at approximately day 9, due to the requirement of DNMT1 activity for development in mammalian cells.
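The copying step can be pictured schematically: after replication each CpG is hemimethylated, and a DNMT1-like maintenance activity restores methylation on the nascent strand only where the parental strand was already methylated. The sketch below is purely illustrative bookkeeping, not a biochemical model.

```python
# Schematic sketch of maintenance methylation: only CpGs that were methylated
# on the parental strand are re-methylated on the daughter strand, which is
# how the pattern is propagated through replication.

def maintenance_methylation(parental_methylated_cpgs: set[int],
                            cpg_positions: list[int]) -> set[int]:
    """Return the CpG positions that end up methylated on the daughter strand."""
    return {pos for pos in cpg_positions if pos in parental_methylated_cpgs}

cpg_sites = [10, 42, 97, 130]      # hypothetical CpG positions in a locus
parental = {10, 97}                # methylated on the parental strand
print(maintenance_methylation(parental, cpg_sites))  # {10, 97}
```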
It is thought that DNMT3a and DNMT3b are the de novo methyltransferases that set up DNA methylation patterns early in development. DNMT3L is a protein that is homologous to the other DNMT3s but has no catalytic activity. Instead, DNMT3L assists the de novo methyltransferases by increasing their ability to bind to DNA and stimulating their activity. Mice and rats have a third functional de novo methyltransferase enzyme named DNMT3C, which evolved as a paralog of Dnmt3b by tandem duplication in the common ancestor of Muroidea rodents. DNMT3C catalyzes the methylation of promoters of transposable elements during early spermatogenesis, an activity shown to be essential for their epigenetic repression and male fertility. It is as yet unclear whether other mammals that do not have DNMT3C (like humans) rely on DNMT3B or DNMT3A for de novo methylation of transposable elements in the germline. Finally, DNMT2 (TRDMT1) has been identified as a DNA methyltransferase homolog, containing all 10 sequence motifs common to all DNA methyltransferases; however, DNMT2 (TRDMT1) does not methylate DNA but instead methylates cytosine-38 in the anticodon loop of aspartic acid transfer RNA.
Since many tumor suppressor genes are silenced by DNA methylation during carcinogenesis, there have been attempts to re-express these genes by inhibiting the DNMTs. 5-Aza-2'-deoxycytidine (decitabine) is a nucleoside analog that inhibits DNMTs by trapping them in a covalent complex on DNA by preventing the β-elimination step of catalysis, thus resulting in the enzymes' degradation. However, for decitabine to be active, it must be incorporated into the genome of the cell, which can cause mutations in the daughter cells if the cell does not die. In addition, decitabine is toxic to the bone marrow, which limits the size of its therapeutic window. These pitfalls have led to the development of antisense RNA therapies that target the DNMTs by degrading their mRNAs and preventing their translation. However, it is currently unclear whether targeting DNMT1 alone is sufficient to reactivate tumor suppressor genes silenced by DNA methylation.
In plants
Significant progress has been made in understanding DNA methylation in the model plant Arabidopsis thaliana. DNA methylation in plants differs from that of mammals: while DNA methylation in mammals mainly occurs on the cytosine nucleotide in a CpG site, in plants the cytosine can be methylated at CpG, CpHpG, and CpHpH sites, where H represents any nucleotide but not guanine. Overall, Arabidopsis DNA is highly methylated, mass spectrometry analysis estimated 14% of cytosines to be modified. Later, bisulfite sequencing data estimated that around 25% of Arabidopsis CG sites are methylated, but these levels vary based on the geographic location of Arabidopsis accessions (plants in the north are more highly methylated than southern accessions).
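The sequence contexts named above (CpG/CG, CpHpG/CHG and CpHpH/CHH, with H standing for A, C or T) can be assigned mechanically by looking at the two bases following each cytosine. The following is a minimal sketch for a plus-strand sequence; it ignores the reverse strand and the last two bases for simplicity.

```python
# Minimal sketch: assign each plus-strand cytosine to the CG, CHG, or CHH
# context described in the text, where H is any base other than G.

def cytosine_contexts(seq: str) -> dict[int, str]:
    seq = seq.upper()
    contexts = {}
    for i, base in enumerate(seq):
        if base != "C" or i + 2 >= len(seq):   # skip non-C and sequence ends
            continue
        nxt, nxt2 = seq[i + 1], seq[i + 2]
        if nxt == "G":
            contexts[i] = "CG"
        elif nxt2 == "G":
            contexts[i] = "CHG"
        else:
            contexts[i] = "CHH"
    return contexts

# Hypothetical short sequence with one cytosine of each context.
print(cytosine_contexts("CGTACAGCTT"))  # {0: 'CG', 4: 'CHG', 7: 'CHH'}
```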
The principal Arabidopsis DNA methyltransferase enzymes, which transfer and covalently attach methyl groups onto DNA, are DRM2, MET1, and CMT3. Both the DRM2 and MET1 proteins share significant homology to the mammalian methyltransferases DNMT3 and DNMT1, respectively, whereas the CMT3 protein is unique to the plant kingdom. There are currently two classes of DNA methyltransferases: 1) the de novo class or enzymes that create new methylation marks on the DNA; 2) a maintenance class that recognizes the methylation marks on the parental strand of DNA and transfers new methylation to the daughter strands after DNA replication. DRM2 is the only enzyme that has been implicated as a de novo DNA methyltransferase. DRM2 has also been shown, along with MET1 and CMT3 to be involved in maintaining methylation marks through DNA replication. Other DNA methyltransferases are expressed in plants but have no known function (see the Chromatin Database).
Genome-wide levels of DNA methylation vary widely between plant species, and Arabidopsis cytosines tend to be less densely methylated than those in other plants. For example, ~92.5% of CpG cytosines are methylated in Beta vulgaris. The patterns of methylation also differ between cytosine sequence contexts; universally, CpG methylation is higher than CHG and CHH methylation, and CpG methylation can be found in both active genes and transposable elements, while CHG and CHH are usually relegated to silenced transposable elements.
It is not clear how the cell determines the locations of de novo DNA methylation, but evidence suggests that for many (though not all) locations, RNA-directed DNA methylation (RdDM) is involved. In RdDM, specific RNA transcripts are produced from a genomic DNA template, and this RNA forms secondary structures called double-stranded RNA molecules. The double-stranded RNAs, through either the small interfering RNA (siRNA) or microRNA (miRNA) pathways direct de-novo DNA methylation of the original genomic location that produced the RNA. This sort of mechanism is thought to be important in cellular defense against RNA viruses and/or transposons, both of which often form a double-stranded RNA that can be mutagenic to the host genome. By methylating their genomic locations, through an as yet poorly understood mechanism, they are shut off and are no longer active in the cell, protecting the genome from their mutagenic effect. Recently, it was described that methylation of the DNA is the main determinant of embryogenic cultures formation from explants in woody plants and is regarded the main mechanism that explains the poor response of mature explants to somatic embryogenesis in the plants (Isah 2016).
In insects
Diverse orders of insects show varied patterns of DNA methylation, from almost undetectable levels in flies to low levels in butterflies and higher in true bugs and some cockroaches (up to 14% of all CG sites in Blattella asahinai).
Functional DNA methylation has been discovered in honey bees. DNA methylation marks are mainly on the gene body, and the current view is that DNA methylation functions in gene regulation via alternative splicing.
DNA methylation levels in Drosophila melanogaster are nearly undetectable. Sensitive methods applied to Drosophila DNA suggest levels in the range of 0.1–0.3% of total cytosine. A 2014 study found that the low level of methylation in fruit flies appeared "at specific short motifs and is independent of DNMT2 activity." Further, highly sensitive mass spectrometry approaches have now demonstrated the presence of low (0.07%) but significant levels of adenine methylation during the earliest stages of Drosophila embryogenesis.
In fungi
Many fungi have low levels (0.1 to 0.5%) of cytosine methylation, whereas other fungi have as much as 5% of the genome methylated. This value seems to vary both among species and among isolates of the same species. There is also evidence that DNA methylation may be involved in state-specific control of gene expression in fungi. However, using ultra-sensitive mass spectrometry with a detection limit of 250 attomoles, DNA methylation was not confirmed in unicellular yeast species such as Saccharomyces cerevisiae or Schizosaccharomyces pombe, indicating that these yeasts do not possess this DNA modification.
Although brewers' yeast (Saccharomyces), fission yeast (Schizosaccharomyces), and Aspergillus flavus have no detectable DNA methylation, the model filamentous fungus Neurospora crassa has a well-characterized methylation system. Several genes control methylation in Neurospora and mutation of the DNA methyl transferase, dim-2, eliminates all DNA methylation but does not affect growth or sexual reproduction. While the Neurospora genome has very little repeated DNA, half of the methylation occurs in repeated DNA including transposon relics and centromeric DNA. The ability to evaluate other important phenomena in a DNA methylase-deficient genetic background makes Neurospora an important system in which to study DNA methylation.
In other eukaryotes
DNA methylation is largely absent from Dictyostelium discoideum, where it appears to occur at about 0.006% of cytosines. In contrast, DNA methylation is widely distributed in Physarum polycephalum, where 5-methylcytosine makes up as much as 8% of total cytosine.
In bacteria
Adenine or cytosine methylation is mediated by restriction modification systems of many bacteria, in which specific DNA sequences are methylated periodically throughout the genome. A methylase is the enzyme that recognizes a specific sequence and methylates one of the bases in or near that sequence. Foreign DNAs (which are not methylated in this manner) that are introduced into the cell are cleaved by sequence-specific restriction enzymes and degraded. Bacterial genomic DNA is not recognized by these restriction enzymes. The methylation of native DNA acts as a sort of primitive immune system, allowing the bacteria to protect themselves from infection by bacteriophage.
E. coli DNA adenine methyltransferase (Dam) is an enzyme of ~32 kDa that does not belong to a restriction/modification system. The target recognition sequence for E. coli Dam is GATC, as the methylation occurs at the N6 position of the adenine in this sequence (G meATC). The three base pairs flanking each side of this site also influence DNA–Dam binding. Dam plays several key roles in bacterial processes, including mismatch repair, the timing of DNA replication, and gene expression. As a result of DNA replication, the status of GATC sites in the E. coli genome changes from fully methylated to hemimethylated. This is because adenine introduced into the new DNA strand is unmethylated. Re-methylation occurs within two to four seconds, during which time replication errors in the new strand are repaired. Methylation, or its absence, is the marker that allows the repair apparatus of the cell to differentiate between the template and nascent strands. It has been shown that altering Dam activity in bacteria results in an increased spontaneous mutation rate. Bacterial viability is compromised in dam mutants that also lack certain other DNA repair enzymes, providing further evidence for the role of Dam in DNA repair.
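Because GATC is its own reverse complement, every Dam site is a two-strand site, and immediately after replication each such site is transiently hemimethylated. The sketch below simply locates GATC motifs in a sequence string; the example sequence is made up.

```python
# Illustrative sketch: locate Dam target sites (GATC) in a DNA string.
# GATC equals its own reverse complement, so each hit marks a site that is
# methylated on both strands in vivo and hemimethylated just after replication.

def gatc_sites(seq: str) -> list[int]:
    """Return 0-based start positions of every GATC motif in seq."""
    seq = seq.upper()
    positions, start = [], 0
    while True:
        idx = seq.find("GATC", start)
        if idx == -1:
            return positions
        positions.append(idx)
        start = idx + 1   # step by one so overlapping motifs are not missed

print(gatc_sites("ATGATCGGATCATC"))  # [2, 7] for this made-up sequence
```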
One region of the DNA that keeps its hemimethylated status for longer is the origin of replication, which has an abundance of GATC sites. This is central to the bacterial mechanism for timing DNA replication. SeqA binds to the origin of replication, sequestering it and thus preventing methylation. Because hemimethylated origins of replication are inactive, this mechanism limits DNA replication to once per cell cycle.
Expression of certain genes, for example, those coding for pilus expression in E. coli, is regulated by the methylation of GATC sites in the promoter region of the gene operon. The cells' environmental conditions just after DNA replication determine whether Dam is blocked from methylating a region proximal to or distal from the promoter region. Once the pattern of methylation has been created, the pilus gene transcription is locked in the on or off position until the DNA is again replicated. In E. coli, these pili operons have important roles in virulence in urinary tract infections. It has been proposed that inhibitors of Dam may function as antibiotics.
On the other hand, DNA cytosine methylase targets CCAGG and CCTGG sites to methylate cytosine at the C5 position (C meC(A/T) GG). The other methylase enzyme, EcoKI, causes methylation of adenines in the sequences AAC(N6)GTGC and GCAC(N6)GTT.
In Clostridioides difficile, DNA methylation at the target motif CAAAAA was shown to impact sporulation, a key step in disease transmission, as well as cell length, biofilm formation and host colonization.
Molecular cloning
Most strains used by molecular biologists are derivatives of E. coli K-12, and possess both Dam and Dcm, but there are commercially available strains that are dam-/dcm- (lack of activity of either methylase). In fact, it is possible to unmethylate the DNA extracted from dam+/dcm+ strains by transforming it into dam-/dcm- strains. This would help digest sequences that are not being recognized by methylation-sensitive restriction enzymes.
The restriction enzyme DpnI can recognize 5'-GmeATC-3' sites and digest the methylated DNA. Being such a short motif, it occurs frequently in sequences by chance, and as such its primary use for researchers is to degrade template DNA following PCRs (PCR products lack methylation, as no methylases are present in the reaction). Similarly, some commercially available restriction enzymes are sensitive to methylation at their cognate restriction sites and must as mentioned previously be used on DNA passed through a dam-/dcm- strain to allow cutting.
Detection
DNA methylation can be detected by the following assays currently used in scientific research:
Mass spectrometry is a very sensitive and reliable analytical method to detect DNA methylation. MS, in general, is however not informative about the sequence context of the methylation, thus limited in studying the function of this DNA modification.
Methylation-Specific PCR (MSP), which is based on a chemical reaction of sodium bisulfite with DNA that converts unmethylated cytosines of CpG dinucleotides to uracil or UpG, followed by traditional PCR. However, methylated cytosines will not be converted in this process, and primers are designed to overlap the CpG site of interest, which allows one to determine methylation status as methylated or unmethylated.
Whole genome bisulfite sequencing, also known as BS-Seq, which is a high-throughput genome-wide analysis of DNA methylation. It is based on the aforementioned sodium bisulfite conversion of genomic DNA, which is then sequenced on a Next-generation sequencing platform. The sequences obtained are then re-aligned to the reference genome to determine the methylation status of CpG dinucleotides based on mismatches resulting from the conversion of unmethylated cytosines into uracil. (A simplified sketch of this calling logic is given after this list of assays.)
Enzymatic methyl-seq (EM-seq) works similarly to bisulfite sequencing, but uses enzymes, APOBEC and TET2, to deaminate unmethylated cytosine into uracil prior to sequencing. EM-seq libraries are less prone to DNA damage than bisulfite-treated libraries.
Reduced representation bisulfite sequencing, also known as RRBS, has several working protocols. The first protocol, simply called RRBS, aims for around 10% of the methylome and requires a reference genome. Later protocols were able to sequence a smaller portion of the genome with higher sample multiplexing. EpiGBS was the first protocol in which 96 samples could be multiplexed in one lane of Illumina sequencing and a reference genome was no longer needed: a de novo reference constructed from the Watson and Crick reads made simultaneous population screening of SNPs and SMPs possible.
The HELP assay, which is based on restriction enzymes' differential ability to recognize and cleave methylated and unmethylated CpG DNA sites.
GLAD-PCR assay, which is based on a new type of enzymes – site-specific methyl-directed DNA endonucleases, which hydrolyze only methylated DNA.
ChIP-on-chip assays, which is based on the ability of commercially prepared antibodies to bind to DNA methylation-associated proteins like MeCP2.
Restriction landmark genomic scanning, a complicated and now rarely used assay based upon restriction enzymes' differential recognition of methylated and unmethylated CpG sites; the assay is similar in concept to the HELP assay.
Methylated DNA immunoprecipitation (MeDIP), analogous to chromatin immunoprecipitation, immunoprecipitation is used to isolate methylated DNA fragments for input into DNA detection methods such as DNA microarrays (MeDIP-chip) or DNA sequencing (MeDIP-seq).
Pyrosequencing of bisulfite treated DNA. This is the sequencing of an amplicon made by a normal forward primer but a biotinylated reverse primer to PCR the gene of choice. The Pyrosequencer then analyses the sample by denaturing the DNA and adding one nucleotide at a time to the mix according to a sequence given by the user. If there is a mismatch, it is recorded and the percentage of DNA for which the mismatch is present is noted. This gives the user a percentage of methylation per CpG island.
Molecular break light assay for DNA adenine methyltransferase activity – an assay that relies on the specificity of the restriction enzyme DpnI for fully methylated (adenine methylation) GATC sites in an oligonucleotide labeled with a fluorophore and quencher. The adenine methyltransferase methylates the oligonucleotide making it a substrate for DpnI. Cutting of the oligonucleotide by DpnI gives rise to a fluorescence increase.
Methyl Sensitive Southern Blotting is similar to the HELP assay, although uses Southern blotting techniques to probe gene-specific differences in methylation using restriction digests. This technique is used to evaluate local methylation near the binding site for the probe.
MethylCpG Binding Proteins (MBPs) and fusion proteins containing just the Methyl Binding Domain (MBD) are used to separate native DNA into methylated and unmethylated fractions. The percentage methylation of individual CpG islands can be determined by quantifying the amount of the target in each fraction. Extremely sensitive detection can be achieved in FFPE tissues with abscription-based detection.
High Resolution Melt Analysis (HRM or HRMA), is a post-PCR analytical technique. The target DNA is treated with sodium bisulfite, which chemically converts unmethylated cytosines into uracils, while methylated cytosines are preserved. PCR amplification is then carried out with primers designed to amplify both methylated and unmethylated templates. After this amplification, highly methylated DNA sequences contain a higher number of CpG sites compared to unmethylated templates, which results in a different melting temperature that can be used in quantitative methylation detection.
Ancient DNA methylation reconstruction, a method to reconstruct high-resolution DNA methylation from ancient DNA samples. The method is based on the natural degradation processes that occur in ancient DNA: with time, methylated cytosines are degraded into thymines, whereas unmethylated cytosines are degraded into uracils. This asymmetry in degradation signals was used to reconstruct the full methylation maps of the Neanderthal and the Denisovan. In September 2019, researchers published a novel method to infer morphological traits from DNA methylation data. The authors were able to show that linking down-regulated genes to phenotypes of monogenic diseases, where one or two copies of a gene are perturbed, allows for ~85% accuracy in reconstructing anatomical traits directly from DNA methylation maps.
Methylation Sensitive Single Nucleotide Primer Extension Assay (msSNuPE), which uses internal primers annealing straight 5' of the nucleotide to be detected.
Illumina Methylation Assay measures locus-specific DNA methylation using array hybridization. Bisulfite-treated DNA is hybridized to probes on "BeadChips." Single-base extension with labeled probes is used to determine the methylation status of target sites. In 2016, the Infinium MethylationEPIC BeadChip was released, which interrogates over 850,000 methylation sites across the human genome.
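The bisulfite-based assays in the list above (MSP, whole-genome bisulfite sequencing, pyrosequencing) share one piece of logic: after conversion and amplification, an unmethylated cytosine reads as T while a methylated cytosine still reads as C, so comparing converted reads against the reference at CpG positions yields a per-site methylation fraction. The following is a deliberately simplified sketch of that calling step, with alignment, strand handling and quality filtering all omitted; the reference and reads are invented.

```python
# Simplified sketch of bisulfite methylation calling: at each CpG position in
# the reference, a read base of C means "stayed methylated" and T means
# "converted (unmethylated)"; the fraction of C calls is the methylation level.

def cpg_methylation_fractions(reference: str, reads: list[str]) -> dict[int, float]:
    reference = reference.upper()
    cpg_positions = [i for i in range(len(reference) - 1)
                     if reference[i:i + 2] == "CG"]
    fractions = {}
    for pos in cpg_positions:
        calls = [r[pos] for r in reads if len(r) > pos and r[pos] in "CT"]
        if calls:
            fractions[pos] = calls.count("C") / len(calls)
    return fractions

ref = "ACGTACGTT"                   # invented reference with CpGs at 1 and 5
reads = ["ACGTATGTT",               # first CpG methylated, second converted
         "ATGTACGTT"]               # first converted, second methylated
print(cpg_methylation_fractions(ref, reads))  # {1: 0.5, 5: 0.5}
```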
Differentially methylated regions (DMRs)
Differentially methylated regions, which are genomic regions with different methylation statuses among multiple samples (tissues, cells, individuals or others), are regarded as possible functional regions involved in gene transcriptional regulation. The identification of DMRs among multiple tissues (T-DMRs) provides a comprehensive survey of epigenetic differences among human tissues. For example, these methylated regions that are unique to a particular tissue allow individuals to differentiate between tissue type, such as semen and vaginal fluid. Current research conducted by Lee et al., showed DACT1 and USP49 positively identified semen by examining T-DMRs. The use of T-DMRs has proven useful in the identification of various body fluids found at crime scenes. Researchers in the forensic field are currently seeking novel T-DMRs in genes to use as markers in forensic DNA analysis. DMRs between cancer and normal samples (C-DMRs) demonstrate the aberrant methylation in cancers. It is well known that DNA methylation is associated with cell differentiation and proliferation. Many DMRs have been found in the development stages (D-DMRs) and in the reprogrammed progress (R-DMRs). In addition, there are intra-individual DMRs (Intra-DMRs) with longitudinal changes in global DNA methylation along with the increase of age in a given individual. There are also inter-individual DMRs (Inter-DMRs) with different methylation patterns among multiple individuals.
QDMR (Quantitative Differentially Methylated Regions) is a quantitative approach to quantify methylation difference and identify DMRs from genome-wide methylation profiles by adapting Shannon entropy. The platform-free and species-free nature of QDMR makes it potentially applicable to various methylation data. This approach provides an effective tool for the high-throughput identification of the functional regions involved in epigenetic regulation. QDMR can be used as an effective tool for the quantification of methylation difference and identification of DMRs across multiple samples.
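The entropy idea can be illustrated in a few lines: a region whose normalized methylation levels are similar across samples has high Shannon entropy, while a region methylated in only a few samples has low entropy and is a candidate DMR. This is a conceptual sketch of that calculation, not the published QDMR implementation.

```python
# Conceptual sketch of the entropy measure behind QDMR-style DMR detection:
# normalize per-sample methylation levels to a probability distribution and
# compute Shannon entropy; low entropy flags sample-specific methylation.

import math

def methylation_entropy(levels: list[float]) -> float:
    total = sum(levels)
    if total == 0:
        return 0.0
    probs = [x / total for x in levels]
    return -sum(p * math.log2(p) for p in probs if p > 0)

uniform_region  = [0.8, 0.8, 0.8, 0.8]      # similar in all four samples
specific_region = [0.8, 0.05, 0.05, 0.05]   # high in one sample only
print(methylation_entropy(uniform_region))   # 2.0, the maximum for 4 samples
print(methylation_entropy(specific_region))  # ~0.9, much lower -> candidate DMR
```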
Gene-set analysis (a.k.a. pathway analysis; usually performed with tools such as DAVID, GoSeq or GSEA) has been shown to be severely biased when applied to high-throughput methylation data (e.g. MeDIP-seq, MeDIP-ChIP, HELP-seq etc.), and a wide range of studies have thus mistakenly reported hyper-methylation of genes related to development and differentiation; it has been suggested that this can be corrected using sample label permutations or using a statistical model to control for differences in the numbers of CpG probes / CpG sites that target each gene.
DNA methylation marks
DNA methylation marks – genomic regions with specific methylation patterns in a specific biological state such as tissue, cell type, individual – are regarded as possible functional regions involved in gene transcriptional regulation. Although various human cell types may have the same genome, these cells have different methylomes. The systematic identification and characterization of methylation marks across cell types are crucial to understanding the complex regulatory network for cell fate determination. Hongbo Liu et al. proposed an entropy-based framework termed SMART to integrate the whole genome bisulfite sequencing methylomes across 42 human tissues/cells and identified 757,887 genome segments. Nearly 75% of the segments showed uniform methylation across all cell types. From the remaining 25% of the segments, they identified cell type-specific hypo/hypermethylation marks that were specifically hypo/hypermethylated in a minority of cell types using a statistical approach and presented an atlas of the human methylation marks. Further analysis revealed that the cell type-specific hypomethylation marks were enriched through H3K27ac and transcription factor binding sites in a cell type-specific manner. In particular, they observed that the cell type-specific hypomethylation marks are associated with the cell type-specific super-enhancers that drive the expression of cell identity genes. This framework provides a complementary, functional annotation of the human genome and helps to elucidate the critical features and functions of cell type-specific hypomethylation.
The entropy-based Specific Methylation Analysis and Report Tool, termed "SMART", which focuses on integrating a large number of DNA methylomes for the de novo identification of cell type-specific methylation marks. The latest version of SMART is focused on three main functions including de novo identification of differentially methylated regions (DMRs) by genome segmentation, identification of DMRs from predefined regions of interest, and identification of differentially methylated CpG sites.
In identification and detection of body fluids
DNA methylation allows several tissues to be analyzed in one assay as well as small amounts of body fluid to be identified with the use of extracted DNA. Usually, the two approaches to DNA methylation analysis are methylation-sensitive restriction enzymes or treatment with sodium bisulphite. Methylation-sensitive restriction enzymes work by cleaving specific CpG recognition sites (cytosine and guanine separated by only one phosphate group) when the CpG is methylated. In contrast, with sodium bisulphite treatment, unmethylated cytosines are transformed to uracil while methylated cytosines remain methylated. In particular, methylation profiles can provide insight on when or how body fluids were left at crime scenes, identify the kind of body fluid, and approximate age, gender, and phenotypic characteristics of perpetrators. Research indicates various markers that can be used for DNA methylation. Deciding which marker to use for an assay is one of the first steps of the identification of body fluids. In general, markers are selected by examining prior research. Identification markers that are chosen should give a positive result for one type of cell. One portion of the chromosome that is an area of focus when conducting DNA methylation analysis is the set of tissue-specific differentially methylated regions, T-DMRs. The degree of methylation for the T-DMRs varies depending on the body fluid. A research team developed a two-fold marker system: the first marker is methylated only in the target fluid while the second is methylated in the rest of the fluids. For instance, if venous blood marker A is un-methylated and venous blood marker B is methylated in a fluid, it indicates the presence of only venous blood. In contrast, if venous blood marker A is methylated and venous blood marker B is un-methylated in some fluid, then that indicates venous blood is in a mixture of fluids. Some examples of DNA methylation markers are Mens1 (menstrual blood), Spei1 (saliva), and Sperm2 (seminal fluid).
DNA methylation provides a relatively good means of sensitivity when identifying and detecting body fluids. In one study, only ten nanograms of a sample was necessary to ascertain successful results. DNA methylation provides a good discernment of mixed samples since it involves markers that give "on or off" signals. DNA methylation is not impervious to external conditions. Even under degraded conditions using the DNA methylation techniques, the markers are stable enough that there are still noticeable differences between degraded samples and control samples. Specifically, in one study, it was found that there were not any noticeable changes in methylation patterns over an extensive period of time.
The detection of DNA methylation in cell-free DNA and other body fluids has recently become one of the main approaches to Liquid biopsy. In particular, the identification of tissue-specific and disease-specific patterns allows for non-invasive detection and monitoring of diseases such as cancer. If compared to strictly genomic approaches to liquid biopsy, DNA methylation profiling offers a larger number of differentially methylated CpG sites and differentially methylated regions (DMRSs), potentially enhancing its sensitivity. Signal deconvolution algorithms based on DNA methylation have been successfully applied to cell-free DNA and can nominate the tissue of origin of cancers of unknown primary, allograft rejection, and resistance to hormone therapy.
Computational prediction
DNA methylation can also be detected by computational models through sophisticated algorithms and methods. Computational models can facilitate the global profiling of DNA methylation across chromosomes, and often such models are faster and cheaper to perform than biological assays. Such up-to-date computational models include Bhasin, et al., Bock, et al., and Zheng, et al. Together with biological assay, these methods greatly facilitate the DNA methylation analysis.
See also
5-Hydroxymethylcytosine
5-Methylcytosine
7-Methylguanosine
Decrease in DNA Methylation I (DDM1), a plant methylation gene
Demethylating agent
Differentially methylated regions
DNA demethylation
DNA methylation reprogramming
Epigenetics, of which DNA methylation is a significant contributor
Epigenetic clock, a method to calculate age based on DNA methylation
Epigenome
Genome
Genomic imprinting, an inherited repression of an allele, relying on DNA methylation
MethBase DNA Methylation database hosted on the UCSC Genome Browser
MethDB DNA Methylation database
N6-Methyladenosine
Protein methylation
Methylenetetrahydrofolate reductase deficiency
References
Further reading
External links
ENCODE threads explorer Non-coding RNA characterization. Nature (journal)
PCMdb Pancreatic Cancer Methylation Database.
SMART Specific Methylation Analysis and Report Tool
Human Methylation Mark Atlas
DiseaseMeth Human disease methylation database
EWAS Atlas A knowledgebase of epigenome-wide association studies
DNA
Epigenetics
Methylation
fr:Méthylation#Génétique | DNA methylation | [
"Chemistry"
] | 11,202 | [
"Methylation"
] |
4,472,066 | https://en.wikipedia.org/wiki/Atomic%20formula | In mathematical logic, an atomic formula (also known as an atom or a prime formula) is a formula with no deeper propositional structure, that is, a formula that contains no logical connectives or equivalently a formula that has no strict subformulas. Atoms are thus the simplest well-formed formulas of the logic. Compound formulas are formed by combining the atomic formulas using the logical connectives.
The precise form of atomic formulas depends on the logic under consideration; for propositional logic, for example, a propositional variable is often more briefly referred to as an "atomic formula", but, more precisely, a propositional variable is not an atomic formula but a formal expression that denotes an atomic formula. For predicate logic, the atoms are predicate symbols together with their arguments, each argument being a term. In model theory, atomic formulas are merely strings of symbols with a given signature, which may or may not be satisfiable with respect to a given model.
Atomic formula in first-order logic
The well-formed terms and propositions of ordinary first-order logic have the following syntax:
Terms:
t ::= c | x | f (t1 ,…, tn),
that is, a term is recursively defined to be a constant c (a named object from the domain of discourse), or a variable x (ranging over the objects in the domain of discourse), or an n-ary function f whose arguments are terms tk. Functions map tuples of objects to objects.
Propositions:
A, B ::= P (t1 ,…, tn) | A ∧ B | A ∨ B | ∀x. A | ∃x. A,
that is, a proposition is recursively defined to be an n-ary predicate P whose arguments are terms tk, or an expression composed of logical connectives (and, or) and quantifiers (for-all, there-exists) used with other propositions.
An atomic formula or atom is simply a predicate applied to a tuple of terms; that is, an atomic formula is a formula of the form P (t1 ,…, tn) for P a predicate, and the tn terms.
All other well-formed formulae are obtained by composing atoms with logical connectives and quantifiers.
For example, the formula ∀x. P (x) ∧ ∃y. Q (y, f (x)) ∨ ∃z. R (z) contains the atoms
P (x), Q (y, f (x)), and R (z).
As there are no quantifiers appearing in an atomic formula, all occurrences of variable symbols in an atomic formula are free.
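The distinction between atoms and compound formulas can be made concrete with a small abstract-syntax sketch. Python is used here only as a stand-in metalanguage; the type and function names are illustrative, not part of any standard library.

```python
# Tiny abstract syntax for first-order terms and formulas: an atomic formula
# is exactly a predicate applied to a tuple of terms, with no connectives.

from dataclasses import dataclass

@dataclass
class Var:            # variable x
    name: str

@dataclass
class Func:           # function application f(t1, ..., tn); constants are 0-ary
    name: str
    args: tuple

@dataclass
class Atom:           # predicate application P(t1, ..., tn)
    predicate: str
    args: tuple

@dataclass
class And:            # one example of a compound formula built with a connective
    left: object
    right: object

def is_atomic(formula) -> bool:
    return isinstance(formula, Atom)

p_x = Atom("P", (Var("x"),))
q_y_fx = Atom("Q", (Var("y"), Func("f", (Var("x"),))))
print(is_atomic(p_x), is_atomic(And(p_x, q_y_fx)))  # True False
```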
See also
In model theory, structures assign an interpretation to the atomic formulas.
In proof theory, polarity assignment for atomic formulas is an essential component of focusing.
Atomic sentence
References
Further reading
Predicate logic
Logical expressions
de:Aussage (Logik)#einfache Aussagen - zusammengesetzte Aussagen | Atomic formula | [
"Mathematics"
] | 559 | [
"Logical expressions",
"Basic concepts in set theory",
"Predicate logic",
"Mathematical logic"
] |
4,472,911 | https://en.wikipedia.org/wiki/Traverse%20%28surveying%29 | Traverse is a method in the field of surveying to establish control networks. It is also used in geodesy. Traverse networks involve placing survey stations along a line or path of travel, and then using the previously surveyed points as a base for observing the next point. Connected survey lines form the framework and the directions and lengths of the survey lines are measured with an angle measuring instrument and tape or chain. Traverse networks have many advantages, including:
Less reconnaissance and organization needed;
While in other systems, which may require the survey to be performed along a rigid polygon shape, the traverse can change to any shape and thus can accommodate a great deal of different terrains;
Only a few observations need to be taken at each station, whereas in other survey networks a great deal of angular and linear observations need to be made and considered;
Traverse networks are free of the strength of figure considerations that happen in triangular systems;
Scale error does not add up as the traverse is performed. Azimuth swing errors can also be reduced by increasing the distance between stations.
The traverse is more accurate than triangulateration (a combined function of the triangulation and trilateration practice).
Types
Frequently in surveying engineering and geodetic science, control points (CP) are established by setting or observing distance and direction (bearings, angles, azimuths, and elevation). The CPs throughout the control network may consist of monuments, benchmarks, vertical control, etc. The main types of traverse are:
Closed traverse: either originates from a station and returns to the same station completing a circuit, or runs between two known stations
Open traverse: neither returns to its starting station, nor closes on any other known station.
Compound traverse: an open traverse linked at its ends to an existing traverse to form a closed traverse. The closing line may be defined by coordinates at the end points which have been determined by a previous survey. The difficulty is that, where there is linear misclosure, it is not known whether the error lies in the new survey or the previous survey. (A coordinate-closure sketch is given after this list.)
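For a closed traverse, the misclosure check mentioned above reduces to accumulating coordinate differences leg by leg and measuring how far the computed end point falls from the known closing point. The sketch below assumes plane coordinates, whole-circle bearings in degrees and distances in metres; the figures are invented.

```python
# Minimal closed-traverse sketch: sum departures (E) and latitudes (N) leg by
# leg from whole-circle bearings and distances, then report linear misclosure.

import math

def traverse_misclosure(start, legs, closing_point):
    """legs: list of (bearing_degrees, distance_metres) pairs."""
    east, north = start
    for bearing, distance in legs:
        theta = math.radians(bearing)
        east += distance * math.sin(theta)   # departure
        north += distance * math.cos(theta)  # latitude
    d_e = closing_point[0] - east
    d_n = closing_point[1] - north
    return math.hypot(d_e, d_n)

# Hypothetical square traverse returning to its start; a perfect survey
# would give zero misclosure, small length errors give a small residual.
legs = [(0, 100.0), (90, 100.0), (180, 100.02), (270, 99.98)]
print(round(traverse_misclosure((1000.0, 2000.0), legs, (1000.0, 2000.0)), 3))
# ~0.028 m of linear misclosure for these made-up observations
```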
Components
Control point — The primary/base control used for preliminary measurements; it may consist of any known point capable of establishing accurate control of distance and direction (i.e. coordinates, elevation, bearings, etc.).
Starting – The initial starting control point of the traverse.
Observation – All known control points that are set or observed within the traverse.
Terminal – The final control point at which the traverse ends; its coordinates are unknown.
See also
Great Trigonometrical Survey
Polygonal chain
Side shot
Transcontinental Traverse
References
Civil engineering
Earth sciences
Geodesy
Surveying | Traverse (surveying) | [
"Mathematics",
"Engineering"
] | 532 | [
"Applied mathematics",
"Construction",
"Surveying",
"Civil engineering",
"Geodesy"
] |
4,472,938 | https://en.wikipedia.org/wiki/Hyperelastic%20material | A hyperelastic or Green elastic material is a type of constitutive model for ideally elastic material for which the stress–strain relationship derives from a strain energy density function. The hyperelastic material is a special case of a Cauchy elastic material.
For many materials, linear elastic models do not accurately describe the observed material behaviour. The most common example of this kind of material is rubber, whose stress-strain relationship can be defined as non-linearly elastic, isotropic and incompressible. Hyperelasticity provides a means of modeling the stress–strain behavior of such materials. The behavior of unfilled, vulcanized elastomers often conforms closely to the hyperelastic ideal. Filled elastomers and biological tissues are also often modeled via the hyperelastic idealization. In addition to being used to model physical materials, hyperelastic materials are also used as fictitious media, e.g. in the third medium contact method.
Ronald Rivlin and Melvin Mooney developed the first hyperelastic models, the Neo-Hookean and Mooney–Rivlin solids. Many other hyperelastic models have since been developed. Other widely used hyperelastic material models include the Ogden model and the Arruda–Boyce model.
Hyperelastic material models
Saint Venant–Kirchhoff model
The simplest hyperelastic material model is the Saint Venant–Kirchhoff model which is just an extension of the geometrically linear elastic material model to the geometrically nonlinear regime. This model has the general form and the isotropic form respectively
S = C : E    and    S = λ tr(E) I + 2μ E
where : is tensor contraction, S is the second Piola–Kirchhoff stress, C is a fourth-order stiffness tensor and E is the Lagrangian Green strain given by
E = ½ (FᵀF − I), where F is the deformation gradient.
λ and μ are the Lamé constants, and I is the second-order unit tensor.
The strain-energy density function for the Saint Venant–Kirchhoff model is
W(E) = (λ/2) [tr(E)]² + μ tr(E²)
and the second Piola–Kirchhoff stress can be derived from the relation
S = ∂W/∂E.
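As a small numerical illustration of the relations above, the following sketch (NumPy assumed) builds the Green strain E = ½(FᵀF − I) from a deformation gradient F and evaluates the Saint Venant–Kirchhoff second Piola–Kirchhoff stress S = λ tr(E) I + 2μ E; the Lamé constants and the deformation are illustrative values only.

```python
# Numerical sketch of the Saint Venant-Kirchhoff model: given a deformation
# gradient F, form E = (F^T F - I)/2 and evaluate S = lam*tr(E)*I + 2*mu*E.

import numpy as np

lam, mu = 1.0e5, 5.0e4          # illustrative Lame constants (e.g. in Pa)

def svk_second_pk_stress(F: np.ndarray) -> np.ndarray:
    I = np.eye(3)
    E = 0.5 * (F.T @ F - I)                 # Green-Lagrange strain
    return lam * np.trace(E) * I + 2.0 * mu * E

# Uniaxial stretch of 5% along x with no lateral contraction (illustrative).
F = np.diag([1.05, 1.0, 1.0])
print(svk_second_pk_stress(F))              # diagonal stress tensor
```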
Classification of hyperelastic material models
Hyperelastic material models can be classified as:
phenomenological descriptions of observed behavior
Fung
Mooney–Rivlin
Ogden
Polynomial
Saint Venant–Kirchhoff
Yeoh
Marlow
mechanistic models deriving from arguments about the underlying structure of the material
Arruda–Boyce model
Neo–Hookean model
Buche–Silberstein model
hybrids of phenomenological and mechanistic models
Gent
Van der Waals
Generally, a hyperelastic model should satisfy the Drucker stability criterion.
Some hyperelastic models satisfy the Valanis-Landel hypothesis which states that the strain energy function can be separated into the sum of separate functions of the principal stretches λ1, λ2, λ3:
W = f(λ1) + f(λ2) + f(λ3).
Stress–strain relations
Compressible hyperelastic materials
First Piola–Kirchhoff stress
If W(F) is the strain energy density function, the 1st Piola–Kirchhoff stress tensor can be calculated for a hyperelastic material as
P = ∂W/∂F
where F is the deformation gradient. In terms of the Lagrangian Green strain (E):
P = F · (∂W/∂E)
In terms of the right Cauchy–Green deformation tensor (C = FᵀF):
P = 2 F · (∂W/∂C)
Second Piola–Kirchhoff stress
If S is the second Piola–Kirchhoff stress tensor then
S = F⁻¹ · (∂W/∂F)
In terms of the Lagrangian Green strain:
S = ∂W/∂E
In terms of the right Cauchy–Green deformation tensor:
S = 2 ∂W/∂C
The above relation is also known as the Doyle-Ericksen formula in the material configuration.
Cauchy stress
Similarly, the Cauchy stress is given by
σ = (1/J) (∂W/∂F) · Fᵀ, where J = det F
In terms of the Lagrangian Green strain:
σ = (1/J) F · (∂W/∂E) · Fᵀ
In terms of the right Cauchy–Green deformation tensor:
σ = (2/J) F · (∂W/∂C) · Fᵀ
The above expressions are valid even for anisotropic media (in which case, the potential function is understood to depend implicitly on reference directional quantities such as initial fiber orientations). In the special case of isotropy, the Cauchy stress can be expressed in terms of the left Cauchy-Green deformation tensor as follows:
σ = (2/J) B · (∂W/∂B), where B = F·Fᵀ is the left Cauchy–Green deformation tensor.
Incompressible hyperelastic materials
For an incompressible material J = det F = 1. The incompressibility constraint is therefore J − 1 = 0. To ensure incompressibility of a hyperelastic material, the strain-energy function can be written in the form:
W = W(F) − p (J − 1)
where the hydrostatic pressure p functions as a Lagrangian multiplier to enforce the incompressibility constraint. The 1st Piola–Kirchhoff stress now becomes
P = −p F⁻ᵀ + ∂W/∂F
This stress tensor can subsequently be converted into any of the other conventional stress tensors, such as the Cauchy stress tensor which is given by
σ = P · Fᵀ = −p I + (∂W/∂F) · Fᵀ
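As a worked example of the incompressible case, the sketch below uses the neo-Hookean strain energy W = (μ/2)(I₁ − 3) purely for illustration: under an isochoric uniaxial stretch the Cauchy stress is σ = −p I + μ B, and the pressure p (the Lagrange multiplier above) is fixed by requiring the lateral stresses to vanish. The shear modulus and stretch are arbitrary illustrative values.

```python
# Worked sketch for an incompressible neo-Hookean material under uniaxial
# stretch: sigma = -p*I + mu*B, with p chosen so the lateral stresses vanish.

import numpy as np

mu = 5.0e4                       # illustrative shear modulus
stretch = 1.2                    # axial stretch ratio

# Isochoric uniaxial deformation, so det(F) = 1.
F = np.diag([stretch, stretch**-0.5, stretch**-0.5])
B = F @ F.T                      # left Cauchy-Green tensor

p = mu * B[1, 1]                 # enforces sigma_22 = sigma_33 = 0
sigma = -p * np.eye(3) + mu * B  # Cauchy stress

print(np.round(sigma))                            # only sigma_11 is non-zero
print(round(mu * (stretch**2 - 1.0 / stretch)))   # same value in closed form
```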
Expressions for the Cauchy stress
Compressible isotropic hyperelastic materials
For isotropic hyperelastic materials, the Cauchy stress can be expressed in terms of the invariants of the left Cauchy–Green deformation tensor (or right Cauchy–Green deformation tensor). If the strain energy density function is W(I₁, I₂, I₃), then
σ = (2/√I₃) [ (∂W/∂I₁ + I₁ ∂W/∂I₂) B − (∂W/∂I₂) B² ] + 2 √I₃ (∂W/∂I₃) I
(See the page on the left Cauchy–Green deformation tensor for the definitions of these symbols).
Incompressible isotropic hyperelastic materials
For incompressible isotropic hyperelastic materials, the strain energy density function is W(I₁, I₂). The Cauchy stress is then given by
σ = −p I + 2 [ (∂W/∂I₁ + I₁ ∂W/∂I₂) B − (∂W/∂I₂) B² ]
where p is an undetermined pressure. In terms of stress differences, with principal stretches λ₁, λ₂, λ₃:
σ₁₁ − σ₃₃ = 2 (λ₁² − λ₃²) (∂W/∂I₁ + λ₂² ∂W/∂I₂),   σ₂₂ − σ₃₃ = 2 (λ₂² − λ₃²) (∂W/∂I₁ + λ₁² ∂W/∂I₂)
If in addition , then
If , then
Consistency with linear elasticity
Consistency with linear elasticity is often used to determine some of the parameters of hyperelastic material models. These consistency conditions can be found by comparing Hooke's law with linearized hyperelasticity at small strains.
Consistency conditions for isotropic hyperelastic models
For isotropic hyperelastic materials to be consistent with isotropic linear elasticity, the stress–strain relation should have the following form in the infinitesimal strain limit:
σ = λ tr(ε) I + 2μ ε
where λ and μ are the Lamé constants and ε is the infinitesimal strain tensor. The strain energy density function that corresponds to the above relation is
W = (λ/2) [tr(ε)]² + μ tr(ε · ε)
For an incompressible material tr(ε) = 0 and we have
W = μ tr(ε · ε)
For any strain energy density function to reduce to the above forms for small strains the following conditions have to be met
If the material is incompressible, then the above conditions may be expressed in the following form.
These conditions can be used to find relations between the parameters of a given hyperelastic model and shear and bulk moduli.
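For instance, for the incompressible Mooney–Rivlin energy $W = C_1(I_1 - 3) + C_2(I_2 - 3)$ (chosen here only for illustration), the energy vanishes in the reference configuration and the incompressible consistency condition gives
$\frac{\partial W}{\partial I_1} + \frac{\partial W}{\partial I_2} = C_1 + C_2 = \frac{\mu}{2}, \qquad \text{so} \qquad \mu = 2\,(C_1 + C_2),$
which is the usual identification of the small-strain shear modulus for that model.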
Consistency conditions for incompressible $I_1$ based rubber materials
Many elastomers are modeled adequately by a strain energy density function that depends only on $I_1$. For such materials we have $\frac{\partial W}{\partial I_2} = 0$.
The consistency conditions for incompressible materials with $W = W(I_1)$ may then be expressed as
$W(I_1)\big|_{I_1 = 3} = 0 \qquad \text{and} \qquad \frac{\partial W}{\partial I_1}\bigg|_{I_1 = 3} = \frac{\mu}{2}$
The second consistency condition above can be derived by noting that
$\frac{\partial W}{\partial \lambda_i} = \frac{\partial W}{\partial I_1}\,\frac{\partial I_1}{\partial \lambda_i} = 2\lambda_i\,\frac{\partial W}{\partial I_1}$
These relations can then be substituted into the consistency condition for isotropic incompressible hyperelastic materials.
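As an example of checking these conditions for an $I_1$-only model, consider the Gent energy in its standard form, $W = -\tfrac{\mu J_m}{2}\,\ln\!\left(1 - \tfrac{I_1 - 3}{J_m}\right)$, where $J_m$ is a limiting-stretch parameter. At the reference state $I_1 = 3$ one finds $W = 0$ and
$\frac{\partial W}{\partial I_1}\bigg|_{I_1 = 3} = \frac{\mu/2}{\,1 - (I_1 - 3)/J_m\,}\bigg|_{I_1 = 3} = \frac{\mu}{2},$
so both consistency conditions are satisfied and the parameter $\mu$ is indeed the small-strain shear modulus.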
References
See also
Cauchy elastic material
Continuum mechanics
Deformation (mechanics)
Finite strain theory
Ogden–Roxburgh model
Rubber elasticity
Stress measures
Stress (mechanics)
Continuum mechanics
Elasticity (physics)
Rubber properties
Solid mechanics | Hyperelastic material | [
"Physics",
"Materials_science"
] | 1,392 | [
"Physical phenomena",
"Solid mechanics",
"Continuum mechanics",
"Elasticity (physics)",
"Deformation (mechanics)",
"Classical mechanics",
"Mechanics",
"Physical properties"
] |
4,473,265 | https://en.wikipedia.org/wiki/Calogero%20conjecture | The Calogero conjecture is a minority interpretation of quantum mechanics. It is an explanation of quantization, originally proposed in 1997 and republished in 2004 by Francesco Calogero, which suggests that the classical stochastic background field to which Edward Nelson attributes quantum mechanical behavior in his theory of stochastic quantization is a fluctuating space-time, and that there are further mathematical relations between the involved quantities.
The hypothesis itself suggests that the angular momentum associated with a spatially coherent stochastic tremor of a particle corresponds to an action of the same order of magnitude as the Planck constant. Calogero himself suggests that these findings, originally based on a simplified model of the universe, are essentially unaffected by the possible presence in the mass of the Universe of a large component made up of particles much lighter than nucleons.
Essentially, the relation explained by Calogero can be expressed with the formula:
$\hbar \approx \alpha \left[G\,m^{3}\,R(t)\right]^{1/2}$
Furthermore, with $G$ and $m$ held constant, the relation implies that $\hbar$ acquires a slow time dependence through the growth of $R(t)$.
Where:
$G$ represents the gravitational constant,
$m$ represents the mass of a hydrogen atom,
$R(t)$ represents the radius of the universe accessible by gravitational interactions in time $t$,
$\alpha$ is a constant of proportionality.
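As a rough numerical sanity check of the relation above (not Calogero's own estimate), the sketch below evaluates the combination $[G\,m^{3}\,R(t)]^{1/2}$ with an assumed present-day value of $R$; the combination has the dimensions of an action and comes out within a couple of orders of magnitude of the reduced Planck constant.

```python
import math

# Rough sanity check of hbar ~ alpha * sqrt(G * m**3 * R).
# The value of R below is an assumed, order-of-magnitude figure for the
# present gravitationally accessible radius of the universe.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
m = 1.67e-27       # mass of a hydrogen atom, kg
R = 1.3e26         # assumed radius, m (roughly c times the age of the universe)

action = math.sqrt(G * m**3 * R)   # units: J*s, i.e. an action
hbar = 1.055e-34
print(f"{action:.2e} J*s, ratio to hbar = {action / hbar:.0f}")
```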
Despite its common description, it has been noted that the conjecture is not entirely defined within the realm of Nelson's stochastic mechanics. It can also be thought of as a means of inquiring into the statistical effects of interaction with distant masses in the universe, effects which Calogero himself expected to be of the same order of magnitude as quantum mechanical effects.
Analysis
Compatibility with fundamental constants
After the publication of Calogero's original paper, "[The] [c]osmic origin of quantization", a response was published by Giuseppe Gaeta of the University of Rome. In it he discussed the compatibility of the conjecture with present bounds on the variation of fundamental constants, and also outlined his focus on the modification of the relation between redshift and distance and on the estimates obtained from observations of the time elapsed since the production of cosmic radiation, together with their implications, both of which are also related to the observed blackbody distribution of the cosmic background radiation.
References
Quantum mechanics
Conjectures | Calogero conjecture | [
"Physics",
"Mathematics"
] | 450 | [
"Unsolved problems in mathematics",
"Theoretical physics",
"Quantum mechanics",
"Conjectures",
"Mathematical problems"
] |
4,473,277 | https://en.wikipedia.org/wiki/Gabriel%E2%80%93Colman%20rearrangement | The Gabriel–Colman rearrangement is the chemical reaction of a saccharin or phthalimido ester with a strong base, such as an alkoxide, to form substituted isoquinolines. First described in 1900 by chemists Siegmund Gabriel and James Colman, this rearrangement, a ring expansion, is seen to be general if there is an enolizable hydrogen on the group attached to the nitrogen, since it is necessary for the nitrogen to abstract a hydrogen to form the carbanion that will close the ring. As shown in the case of the general example below, X is either CO or SO2.
Mechanism
The reaction mechanism starts with an attack on the carbonyl group by a strong base, such as methoxide ion. The ring is then opened, forming an imide anion. This is then followed by a rapid isomerization of the imide anion to the carbanion. This is facilitated by the electron withdrawing effect of the substituent, which allows for greater stabilization of the adjacent carbanion with respect to the imide anion. The reaction is then completed when the methoxide is displaced by the ring closing, which results in a ring expansion. The rate determining step of this reaction is the attack of the carbanion on the carbomethoxy group.
The displacement of the methoxide is analogous to the displacement seen in the Dieckmann condensation, as it is also a result of a ring closure.
Furthermore, tautomerization can occur on both of the carbonyl groups on the ring, with interconversion of the keto form to the enol form and the amide form to the imidic acid form.
Applications
The major application of the Gabriel–Colman rearrangement is in the formation of isoquinolines, due to the relatively high yield of the desired products. Therefore, in studies in which either the product or an intermediate is an isoquinoline, the Gabriel–Colman rearrangement can be utilized. This reaction has been utilized in the production of intermediates for the synthesis of potential anti-inflammatory agents. It has also been used in the study of phthalimide and saccharin derivatives as mechanism-based inhibitors for three enzymes: human leukocyte elastase, cathepsin G, and proteinase 3. Phthalimide derivatives were seen to be inactive, while saccharin derivatives were seen to be fair inhibitors of these enzymes.
In a study of the derivatives of 3-oxo-1,2-benzoisothiazoline-2-acetic acid 1,1-dioxide, the Gabriel–Colman rearrangement was employed in the conversion of isopropyl (1,1-dioxido-3-oxo-1,2-benzothiazol-2(3H)-yl)acetate to isopropyl 4-hydroxy-2H-1,2-benzothiazine-3-carboxylate 1,1-dioxide, as shown above. This reaction gave a yield of 85%.
In another study, N-phthalimidoglycine ethyl ester was used to synthesize 4-hydroxyisoquinoline through use of a Gabriel–Colman rearrangement, as shown above. This reaction gave a yield of 91%. The formation of this product was an important step in the study of the synthesis of 4,4′-functionalized 1,1′-biisoquinolines.
See also
Dieckmann reaction
Claisen condensation
Chan rearrangement
References
Carbon-carbon bond forming reactions
Heterocycle forming reactions
Rearrangement reactions
Name reactions | Gabriel–Colman rearrangement | [
"Chemistry"
] | 773 | [
"Carbon-carbon bond forming reactions",
"Heterocycle forming reactions",
"Name reactions",
"Organic reactions",
"Rearrangement reactions"
] |