https://en.wikipedia.org/wiki/In%20vitro%20fertilisation
In vitro fertilisation (IVF) is a process of fertilisation in which an egg is combined with sperm in vitro ("in glass"). The process involves monitoring and stimulating a woman's ovulatory process, then removing an ovum or ova (egg or eggs) from her ovaries and enabling a man's sperm to fertilise them in a culture medium in a laboratory. After a fertilised egg (zygote) undergoes embryo culture for 2–6 days, it is transferred by catheter into the uterus, with the intention of establishing a successful pregnancy. IVF is a type of assisted reproductive technology used to treat infertility, enable gestational surrogacy, and, in combination with pre-implantation genetic testing, avoid the transmission of abnormal genetic conditions. When a fertilised egg from egg and sperm donors implants in the uterus of a genetically unrelated surrogate, the resulting child is also genetically unrelated to the surrogate. Some countries have banned or otherwise regulated the availability of IVF treatment, giving rise to fertility tourism. Financial cost and age may also restrict the availability of IVF as a means of carrying a healthy pregnancy to term.

In July 1978, Louise Brown was the first child successfully born after her mother received IVF treatment. Brown was born as a result of natural-cycle IVF, in which no ovarian stimulation was used. The procedure took place at Dr Kershaw's Cottage Hospital (later Dr Kershaw's Hospice) in Royton, Oldham, England. Robert Edwards was awarded the Nobel Prize in Physiology or Medicine in 2010. (The physiologist co-developed the treatment together with Patrick Steptoe and embryologist Jean Purdy, but the latter two were not eligible for consideration as they had died: the Nobel Prize is not awarded posthumously.)

When assisted by egg donation and IVF, many women who have reached menopause, have infertile partners, or have idiopathic female-fertility issues can still become pregnant. After IVF treatment, some couples get pregnant without any fertility treatments. In 2023, it was estimated that twelve million children had been born worldwide using IVF and other assisted reproduction techniques. A 2019 study that evaluated the use of 10 adjuncts with IVF (screening hysteroscopy, DHEA, testosterone, GH, aspirin, heparin, antioxidants, seminal plasma and PRP) suggested that (with the exception of hysteroscopy) these adjuncts should be avoided until there is more evidence to show that they are safe and effective.

Terminology

The Latin term in vitro, meaning "in glass", is used because early biological experiments involving cultivation of tissues outside the living organism were carried out in glass containers, such as beakers, test tubes, or Petri dishes. The modern scientific term "in vitro" refers to any biological procedure that is performed outside the organism in which it would normally have occurred, to distinguish it from an in vivo procedure (such as in vivo fertilisation), where the tissue remains inside the living organism in which it is normally found. A colloquial term for babies conceived as the result of IVF, "test tube babies", refers to the tube-shaped containers of glass or plastic resin, called test tubes, that are commonly used in chemistry and biology labs. However, IVF is usually performed in Petri dishes, which are both wider and shallower and often used to cultivate cell cultures. IVF is a form of assisted reproductive technology.

History

The first successful birth of a child after IVF treatment, Louise Brown, occurred in 1978.
Louise Brown was born as a result of natural cycle IVF, in which no ovarian stimulation was used. The procedure took place at Dr Kershaw's Cottage Hospital (now Dr Kershaw's Hospice) in Royton, Oldham, England. Robert G. Edwards, the physiologist who co-developed the treatment, was awarded the Nobel Prize in Physiology or Medicine in 2010. His co-workers, Patrick Steptoe and Jean Purdy, were not eligible for consideration as the Nobel Prize is not awarded posthumously.

The second successful birth of a 'test tube baby' occurred in India on October 3, 1978, just 67 days after Louise Brown was born. The girl, named Durga, was conceived in vitro using a method developed independently by Subhash Mukhopadhyay, a physician and researcher from Hazaribag. Mukhopadhyay had been performing experiments on his own with primitive instruments and a household refrigerator. However, state authorities prevented him from presenting his work at scientific conferences, and it was many years before Mukhopadhyay's contribution was acknowledged in works dealing with the subject.

Adriana Iliescu held the record as the oldest woman to give birth using IVF and a donor egg, when she gave birth in 2004 at the age of 66, a record surpassed in 2006. After IVF treatment, some couples are able to get pregnant without any fertility treatments. In 2018 it was estimated that eight million children had been born worldwide using IVF and other assisted reproduction techniques.

Medical uses

Indications

IVF may be used to overcome female infertility when it is due to problems with the fallopian tubes, making in vivo fertilisation difficult. It can also assist in male infertility, in those cases where there is a defect in sperm quality; in such situations intracytoplasmic sperm injection (ICSI) may be used, where a sperm cell is injected directly into the egg cell. This is used when sperm has difficulty penetrating the egg. ICSI is also used when sperm numbers are very low. When indicated, the use of ICSI has been found to increase the success rates of IVF. According to the UK's National Institute for Health and Care Excellence (NICE) guidelines, IVF treatment is appropriate in cases of unexplained infertility for people who have not conceived after 2 years of regular unprotected sexual intercourse. In people with anovulation, it may be an alternative after 7–12 attempted cycles of ovulation induction, since the latter is expensive and easier to control.

Success rates

IVF success rates are the percentage of all IVF procedures that result in favourable outcomes. Depending on the type of calculation used, this outcome may represent the number of confirmed pregnancies, called the pregnancy rate, or the number of live births, called the live birth rate. Due to advances in reproductive technology, live birth rates by cycle five of IVF have increased from 76% in 2005 to 80% in 2010, despite a reduction in the number of embryos being transferred (which decreased the multiple birth rate from 25% to 8%). The success rate depends on variable factors such as the age of the woman, cause of infertility, embryo status, reproductive history, and lifestyle factors. Younger IVF candidates are more likely to get pregnant. People older than 41 are more likely to get pregnant with a donor egg. People who have been previously pregnant are in many cases more successful with IVF treatments than those who have never been pregnant.

Live birth rate

The live birth rate is the percentage of all IVF cycles that lead to a live birth.
This rate does not include miscarriage or stillbirth; multiple-order births, such as twins and triplets, are counted as one pregnancy. A 2021 summary compiled by the Society for Assisted Reproductive Technology (SART) reports the average IVF success rates in the United States per age group using non-donor eggs. In 2006, Canadian clinics reported a live birth rate of 27%. Birth rates in younger patients were slightly higher, with a success rate of 35.3% for those 21 and younger, the youngest group evaluated. Success rates for older patients were lower and decreased with age, with 37-year-olds at 27.4% and no live births for those older than 48, the oldest group evaluated. Some clinics exceeded these rates, but it is impossible to determine if that is due to superior technique or patient selection, since it is possible to artificially increase success rates by refusing to accept the most difficult patients or by steering them into oocyte donation cycles (which are compiled separately). Further, pregnancy rates can be increased by the placement of several embryos at the risk of increasing the chance for multiples.

Because not every IVF cycle that is started leads to oocyte retrieval or embryo transfer, reports of live birth rates need to specify the denominator, namely IVF cycles started, IVF retrievals, or embryo transfers. SART summarised 2008–9 success rates for US clinics for fresh embryo cycles that did not involve donor eggs and gave live birth rates by the age of the prospective mother, with a peak at 41.3% per cycle started and 47.3% per embryo transfer for patients under 35 years of age. IVF attempts in multiple cycles result in increased cumulative live birth rates. Depending on the demographic group, one study reported 45% to 53% for three attempts, and 51% to 71% to 80% for six attempts. The 2021 National Summary Report compiled by SART also reports the mean number of embryo transfers for patients achieving a live birth. Effective from 15 February 2021, the majority of Australian IVF clinics publish their individual success rate online via YourIVFSuccess.com.au. This site also contains a predictor tool.

Pregnancy rate

Pregnancy rate may be defined in various ways. In the United States, SART and the Centers for Disease Control and Prevention include statistics on the positive pregnancy test rate and the clinical pregnancy rate. The 2019 SART summary reports corresponding data for non-donor eggs (first embryo transfer) in the United States. In 2006, Canadian clinics reported an average pregnancy rate of 35%. A French study estimated that 66% of patients starting IVF treatment finally succeed in having a child (40% during the IVF treatment at the centre and 26% after IVF discontinuation). Achievement of having a child after IVF discontinuation was mainly due to adoption (46%) or spontaneous pregnancy (42%).

Miscarriage rate

According to a study done by the Mayo Clinic, miscarriage rates for IVF are somewhere between 15 and 25% for those under the age of 35. In naturally conceived pregnancies, the rate of miscarriage is between 10 and 20% for those under the age of 35. Risk of miscarriage, regardless of the method of conception, does increase with age.
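The way per-cycle live birth rates compound into the cumulative figures quoted above can be illustrated with a short calculation. The sketch below is illustrative only: it assumes a constant, independent per-cycle live birth rate, which real cohorts do not have (rates fall with age and patients drop out), and the per-cycle rates used are hypothetical round numbers rather than values from any registry.

```python
# Illustrative sketch: cumulative probability of at least one live birth over
# several IVF cycles, assuming (simplistically) a constant and independent
# per-cycle live birth rate. Not a clinical estimate.

def cumulative_live_birth_rate(per_cycle_rate: float, cycles: int) -> float:
    """Probability of at least one live birth within `cycles` attempts."""
    return 1 - (1 - per_cycle_rate) ** cycles

if __name__ == "__main__":
    for rate in (0.20, 0.30):          # hypothetical per-cycle rates
        for n in (1, 3, 6):
            p = cumulative_live_birth_rate(rate, n)
            print(f"per-cycle rate {rate:.0%}, {n} cycle(s): cumulative ≈ {p:.0%}")
```

Under these simplifying assumptions, a 20–30% per-cycle rate accumulates to roughly 49–66% over three cycles and 74–88% over six, which is broadly the shape (though not the exact values) of the multi-cycle figures reported in the studies above.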
Predictors of success

The main potential factors that influence pregnancy (and live birth) rates in IVF have been suggested to be maternal age, duration of infertility or subfertility, bFSH and number of oocytes, all reflecting ovarian function. Optimal age is 23–39 years at time of treatment.

Biomarkers that affect the pregnancy chances of IVF include:

Antral follicle count, with higher count giving higher success rates.
Anti-Müllerian hormone levels, with higher levels indicating higher chances of pregnancy, as well as of live birth after IVF, even after adjusting for age.
Level of DNA fragmentation, as measured e.g. by Comet assay, together with advanced maternal age and semen quality.
People with ovary-specific FMR1 genotypes including het-norm/low have significantly decreased pregnancy chances in IVF.
Progesterone elevation on the day of induction of final maturation is associated with lower pregnancy rates in IVF cycles in women undergoing ovarian stimulation using GnRH analogues and gonadotrophins. At this time, compared to a progesterone level below 0.8 ng/ml, a level between 0.8 and 1.1 ng/ml confers an odds ratio of pregnancy of approximately 0.8, and a level between 1.2 and 3.0 ng/ml confers an odds ratio of pregnancy of between 0.6 and 0.7. On the other hand, progesterone elevation does not seem to confer a decreased chance of pregnancy in frozen–thawed cycles and cycles with egg donation.
Characteristics of cells from the cumulus oophorus and the membrana granulosa, which are easily aspirated during oocyte retrieval. These cells are closely associated with the oocyte and share the same microenvironment, and the rate of expression of certain genes in such cells is associated with higher or lower pregnancy rates.
An endometrial thickness (EMT) of less than 7 mm decreases the pregnancy rate by an odds ratio of approximately 0.4 compared to an EMT of over 7 mm. However, such low thickness rarely occurs, and any routine use of this parameter is regarded as not justified.

Other determinants of outcome of IVF include:

As maternal age increases, the likelihood of conception decreases and the chance of miscarriage increases.
With increasing paternal age, especially 50 years and older, the rate of blastocyst formation decreases.
Tobacco smoking reduces the chances of IVF producing a live birth by 34% and increases the risk of an IVF pregnancy miscarrying by 30%.
A body mass index (BMI) over 27 causes a 33% decrease in the likelihood of a live birth after the first cycle of IVF, compared to those with a BMI between 20 and 27. Also, pregnant people who are obese have higher rates of miscarriage, gestational diabetes, hypertension, thromboembolism and problems during delivery, as well as an increased risk of fetal congenital abnormality. Ideal body mass index is 19–30, and many clinics use this BMI range as a criterion for initiation of the IVF process.
Salpingectomy or laparoscopic tubal occlusion before IVF treatment increases chances for people with hydrosalpinges.
Success with previous pregnancy and/or live birth increases chances.
Low alcohol/caffeine intake increases the success rate.
The number of embryos transferred in the treatment cycle.
Embryo quality.
Some studies suggest that autoimmune disease may also play a role in decreasing IVF success rates by interfering with the proper implantation of the embryo after transfer.
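Several of the predictors above are reported as odds ratios rather than as probabilities. As a rough guide to what such figures mean in practice, the sketch below converts a baseline pregnancy probability into the probability implied by a given odds ratio. The 30% baseline used here is a hypothetical illustrative figure, not a value taken from the text.

```python
# Minimal sketch: how an odds ratio (OR), such as those quoted for progesterone
# elevation or endometrial thickness, modifies a baseline pregnancy probability.
# The baseline probability is hypothetical and purely illustrative.

def apply_odds_ratio(baseline_prob: float, odds_ratio: float) -> float:
    """Return the probability obtained by scaling the baseline odds by `odds_ratio`."""
    baseline_odds = baseline_prob / (1 - baseline_prob)
    adjusted_odds = baseline_odds * odds_ratio
    return adjusted_odds / (1 + adjusted_odds)

if __name__ == "__main__":
    baseline = 0.30  # hypothetical per-transfer pregnancy rate
    examples = [
        ("progesterone 0.8-1.1 ng/ml (OR ~0.8)", 0.8),
        ("progesterone 1.2-3.0 ng/ml (OR ~0.6)", 0.6),
        ("endometrial thickness <7 mm (OR ~0.4)", 0.4),
    ]
    for label, odds_ratio in examples:
        print(f"{label}: {apply_odds_ratio(baseline, odds_ratio):.1%}")
```

With a 30% baseline, an odds ratio of 0.8 corresponds to roughly 26%, 0.6 to roughly 20%, and 0.4 to roughly 15%; the exact numbers depend entirely on the baseline chosen.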
Aspirin is sometimes prescribed to people for the purpose of increasing the chances of conception by IVF, but there is no evidence to show that it is safe and effective. A 2013 review and meta-analysis of randomised controlled trials of acupuncture as an adjuvant therapy in IVF found no overall benefit, and concluded that an apparent benefit detected in a subset of published trials where the control group (those not using acupuncture) experienced a lower than average rate of pregnancy requires further study, due to the possibility of publication bias and other factors.

A Cochrane review found that endometrial injury performed in the month prior to ovarian induction appeared to increase both the live birth rate and clinical pregnancy rate in IVF compared with no endometrial injury. There was no evidence of a difference between the groups in miscarriage, multiple pregnancy or bleeding rates. Evidence suggested that endometrial injury on the day of oocyte retrieval was associated with a lower live birth or ongoing pregnancy rate.

Intake of antioxidants (such as N-acetyl-cysteine, melatonin, vitamin A, vitamin C, vitamin E, folic acid, myo-inositol, zinc or selenium) has not been associated with a significantly increased live birth rate or clinical pregnancy rate in IVF according to Cochrane reviews. The review found that oral antioxidants given to the person providing sperm, in cases of male factor or unexplained subfertility, may improve live birth rates, but more evidence is needed. A 2015 Cochrane review found no evidence regarding the effect of preconception lifestyle advice on the chance of a live birth outcome.

Method

Theoretically, IVF could be performed by collecting the contents from the fallopian tubes or uterus after natural ovulation, mixing it with sperm, and reinserting the fertilised ova into the uterus. However, without additional techniques, the chances of pregnancy would be extremely small. The additional techniques that are routinely used in IVF include ovarian hyperstimulation to generate multiple eggs, ultrasound-guided transvaginal oocyte retrieval directly from the ovaries, co-incubation of eggs and sperm, as well as culture and selection of resultant embryos before embryo transfer into a uterus.

Ovarian hyperstimulation

Ovarian hyperstimulation is the stimulation to induce development of multiple follicles of the ovaries. It should start with response prediction based on factors such as age, antral follicle count and level of anti-Müllerian hormone. The resulting prediction (e.g. poor or hyper-response to ovarian hyperstimulation) determines the protocol and dosage for ovarian hyperstimulation. Ovarian hyperstimulation also includes suppression of spontaneous ovulation, for which two main methods are available: using a (usually longer) GnRH agonist protocol or a (usually shorter) GnRH antagonist protocol. In a standard long GnRH agonist protocol the day when hyperstimulation treatment is started and the expected day of later oocyte retrieval can be chosen to conform to personal choice, while in a GnRH antagonist protocol it must be adapted to the spontaneous onset of the previous menstruation. On the other hand, the GnRH antagonist protocol has a lower risk of ovarian hyperstimulation syndrome (OHSS), which is a life-threatening complication. For the ovarian hyperstimulation itself, injectable gonadotropins (usually FSH analogues) are generally used under close monitoring.
Such monitoring frequently checks the estradiol level and, by means of gynecologic ultrasonography, follicular growth. Typically, approximately 10 days of injections are necessary. When stimulating ovulation after suppressing endogenous secretion, it is necessary to supply exogenous gonadotropins. The most commonly used is human menopausal gonadotropin (hMG), which is obtained from the urine of menopausal women. Other pharmacological preparations include FSH+LH and corifollitropin alfa.

Natural IVF

There are several methods termed natural cycle IVF:

IVF using no drugs for ovarian hyperstimulation, while drugs for ovulation suppression may still be used.
IVF using ovarian hyperstimulation, including gonadotropins, but with a GnRH antagonist protocol so that the cycle initiates from natural mechanisms.
Frozen embryo transfer: IVF using ovarian hyperstimulation, followed by embryo cryopreservation, followed by embryo transfer in a later, natural, cycle.

IVF using no drugs for ovarian hyperstimulation was the method for the conception of Louise Brown. This method can be successfully used when people want to avoid taking ovarian stimulating drugs with their associated side-effects. The HFEA has estimated the live birth rate to be approximately 1.3% per IVF cycle using no hyperstimulation drugs for women aged between 40 and 42.

Mild IVF is a method in which a small dose of ovarian stimulating drugs is used for a short duration during a natural menstrual cycle, aimed at producing 2–7 eggs and creating healthy embryos. This method appears to be an advance in the field to reduce complications and side-effects for women, and it is aimed at the quality, not the quantity, of eggs and embryos. One study comparing a mild treatment (mild ovarian stimulation with GnRH antagonist co-treatment combined with single embryo transfer) to a standard treatment (stimulation with a GnRH agonist long protocol and transfer of two embryos) found that the proportions of cumulative pregnancies that resulted in term live birth after 1 year were 43.4% with mild treatment and 44.7% with standard treatment. Mild IVF can be cheaper than conventional IVF and carries a significantly reduced risk of multiple gestation and OHSS.

Final maturation induction

When the ovarian follicles have reached a certain degree of development, induction of final oocyte maturation is performed, generally by an injection of human chorionic gonadotropin (hCG). Commonly, this is known as the "trigger shot." hCG acts as an analogue of luteinising hormone, and ovulation would occur between 38 and 40 hours after a single hCG injection, but the egg retrieval is usually performed between 34 and 36 hours after hCG injection, that is, just prior to when the follicles would rupture. This allows the egg retrieval procedure to be scheduled at a time when the eggs are fully mature. hCG injection confers a risk of ovarian hyperstimulation syndrome. Using a GnRH agonist instead of hCG eliminates most of the risk of ovarian hyperstimulation syndrome, but with a reduced delivery rate if the embryos are transferred fresh. For this reason, many centres will freeze all oocytes or embryos following agonist trigger.

Egg retrieval

The eggs are retrieved from the patient using a transvaginal technique called transvaginal ultrasound aspiration, in which an ultrasound-guided needle is passed into the ovarian follicles.
Through this needle, the oocyte and follicular fluid are aspirated, and the follicular fluid is then passed to an embryologist to identify ova. It is common to remove between ten and thirty eggs. The retrieval process, which lasts approximately 20 to 40 minutes, is performed under conscious sedation or general anaesthesia to ensure patient comfort. Following optimal follicular development, the eggs are retrieved under transvaginal ultrasound guidance with the aid of a specialised ultrasound probe and a fine needle aspiration technique. The follicular fluid, containing the retrieved eggs, is transferred to the embryology laboratory for subsequent processing.

Egg and sperm preparation

In the laboratory, for ICSI treatments, the identified eggs are stripped of surrounding cells (also known as cumulus cells) and prepared for fertilisation. An oocyte selection may be performed prior to fertilisation to select eggs that can be fertilised, as they are required to be in metaphase II. If oocytes are in the metaphase I stage, they can be kept in culture so that sperm injection can be performed later. In the meantime, semen is prepared for fertilisation by removing inactive cells and seminal fluid in a process called sperm washing. If semen is being provided by a sperm donor, it will usually have been prepared for treatment before being frozen and quarantined, and it will be thawed ready for use.

Co-incubation

The sperm and the egg are incubated together at a ratio of about 75,000:1 in a culture medium in order for the actual fertilisation to take place. A 2013 review found that a co-incubation duration of about 1 to 4 hours results in significantly higher pregnancy rates than 16 to 24 hours. In most cases, the egg will be fertilised during co-incubation and will show two pronuclei. In certain situations, such as low sperm count or motility, a single sperm may be injected directly into the egg using intracytoplasmic sperm injection (ICSI). The fertilised egg is passed to a special growth medium and left for about 48 hours until the embryo consists of six to eight cells. In gamete intrafallopian transfer, eggs are removed from the woman and placed in one of the fallopian tubes, along with the man's sperm. This allows fertilisation to take place inside the woman's body. Therefore, this variation is actually an in vivo fertilisation, not in vitro.

Embryo culture

The main durations of embryo culture are until the cleavage stage (day two to four after co-incubation) or the blastocyst stage (day five or six after co-incubation). Embryo culture until the blastocyst stage confers a significant increase in live birth rate per embryo transfer, but also confers a decreased number of embryos available for transfer and embryo cryopreservation, so the cumulative clinical pregnancy rates are increased with cleavage stage transfer. Transfer on day two instead of day three after fertilisation makes no difference in live birth rate. There are significantly higher odds of preterm birth (odds ratio 1.3) and congenital anomalies (odds ratio 1.3) among births resulting from embryos cultured until the blastocyst stage compared with the cleavage stage.

Embryo selection

Laboratories have developed grading methods to judge oocyte and embryo quality. In order to optimise pregnancy rates, there is significant evidence that a morphological scoring system is the best strategy for the selection of embryos.
Since 2009, when the first time-lapse microscopy system for IVF was approved for clinical use, morphokinetic scoring systems have been shown to improve pregnancy rates further. However, when all different types of time-lapse embryo imaging devices, with or without morphokinetic scoring systems, are compared against conventional embryo assessment for IVF, there is insufficient evidence of a difference in live birth, pregnancy, stillbirth or miscarriage rates to choose between them. Active efforts to develop a more accurate embryo selection analysis based on artificial intelligence and deep learning are underway. One example is the Embryo Ranking Intelligent Classification Assistant (ERICA). This deep learning software replaces manual classification with a ranking system based on an individual embryo's predicted genetic status in a non-invasive fashion. Studies in this area are still pending, and current feasibility studies support its potential.

Embryo transfer

The number of embryos to be transferred depends on the number available, the age of the patient and other health and diagnostic factors. In countries such as Canada, the UK, Australia and New Zealand, a maximum of two embryos are transferred except in unusual circumstances. In the UK and according to HFEA regulations, a woman over 40 may have up to three embryos transferred, whereas in the US, there is no legal limit on the number of embryos which may be transferred, although medical associations have provided practice guidelines. Most clinics and country regulatory bodies seek to minimise the risk of multiple pregnancy, as it is not uncommon for multiple embryos to implant if multiple embryos are transferred. Embryos are transferred to the patient's uterus through a thin, plastic catheter, which goes through their vagina and cervix. Several embryos may be passed into the uterus to improve chances of implantation and pregnancy.

Luteal support

Luteal support is the administration of medication, generally progesterone, progestins, hCG, or GnRH agonists, often accompanied by estradiol, to increase the success rate of implantation and early embryogenesis, thereby complementing and/or supporting the function of the corpus luteum. A Cochrane review found that hCG or progesterone given during the luteal phase may be associated with higher rates of live birth or ongoing pregnancy, but that the evidence is not conclusive. Co-treatment with GnRH agonists appears to improve outcomes, with a live birth rate risk difference of +16% (95% confidence interval +10 to +22%). On the other hand, there is no evidence of overall benefit for growth hormone or aspirin as adjunctive medication in IVF.

Expansions

There are various expansions or additional techniques that can be applied in IVF, which are usually not necessary for the IVF procedure itself, but would be virtually impossible or technically difficult to perform without concomitantly performing methods of IVF.

Preimplantation genetic screening or diagnosis

Preimplantation genetic screening (PGS) or preimplantation genetic diagnosis (PGD) has been suggested as a way to select, in IVF, an embryo that appears to have the greatest chances of successful pregnancy. However, a systematic review and meta-analysis of existing randomised controlled trials found that there is no evidence of a beneficial effect of PGS with cleavage-stage biopsy as measured by live birth rate. On the contrary, for those of advanced maternal age, PGS with cleavage-stage biopsy significantly lowers the live birth rate.
Technical drawbacks, such as the invasiveness of the biopsy, and non-representative samples because of mosaicism are the major underlying factors for inefficacy of PGS. Still, as an expansion of IVF, patients who can benefit from PGS/PGD include:

Those who have a family history of inherited disease.
Those who want prenatal sex discernment. This can be used to diagnose monogenic disorders with sex linkage. It can potentially be used for sex selection, wherein a fetus is aborted if of an undesired sex.
Those who already have a child with an incurable disease and need compatible cells from a second healthy child to cure the first, resulting in a "saviour sibling" that matches the sick child in HLA type.

PGS screens for numerical chromosomal abnormalities while PGD diagnoses the specific molecular defect of the inherited disease. In both PGS and PGD, individual cells from a pre-embryo, or preferably trophectoderm cells biopsied from a blastocyst, are analysed during the IVF process. Before the transfer of a pre-embryo back to a person's uterus, one or two cells are removed from the pre-embryos (8-cell stage), or preferably from a blastocyst. These cells are then evaluated for normality. Typically, within one to two days following completion of the evaluation, only the normal pre-embryos are transferred back to the uterus. Alternatively, a blastocyst can be cryopreserved via vitrification and transferred at a later date to the uterus. In addition, PGS can significantly reduce the risk of multiple pregnancies because fewer embryos, ideally just one, are needed for implantation.

Cryopreservation

Cryopreservation can be performed as oocyte cryopreservation before fertilisation, or as embryo cryopreservation after fertilisation. The Rand Consulting Group estimated that there were 400,000 frozen embryos in the United States in 2006. The advantage is that patients who fail to conceive may become pregnant using such embryos without having to go through a full IVF cycle. Alternatively, if pregnancy occurred, they could return later for another pregnancy. Spare oocytes or embryos resulting from fertility treatments may be used for oocyte donation or embryo donation to another aspiring parent, and embryos may be created, frozen and stored specifically for transfer and donation by using donor eggs and sperm. Also, oocyte cryopreservation can be used for those who are likely to lose their ovarian reserve due to undergoing chemotherapy. By 2017, many centres had adopted embryo cryopreservation as their primary IVF therapy, and perform few or no fresh embryo transfers. The two main reasons for this have been better endometrial receptivity when embryos are transferred in cycles without exposure to ovarian stimulation, and the ability to store the embryos while awaiting the results of preimplantation genetic testing. The outcome from using cryopreserved embryos has uniformly been positive with no increase in birth defects or developmental abnormalities.

Other expansions

Intracytoplasmic sperm injection (ICSI) is where a single sperm is injected directly into an egg. Its main usage as an expansion of IVF is to overcome male infertility problems, although it may also be used where eggs cannot easily be penetrated by sperm, and occasionally in conjunction with sperm donation. It can be used in teratozoospermia, since once the egg is fertilised abnormal sperm morphology does not appear to influence blastocyst development or blastocyst morphology.

Additional methods of embryo profiling.
For example, methods are emerging for making comprehensive analyses of up to entire genomes, transcriptomes, proteomes and metabolomes, which may be used to score embryos by comparing the patterns with ones that have previously been found among embryos in successful versus unsuccessful pregnancies.

Assisted zona hatching (AZH) can be performed shortly before the embryo is transferred to the uterus. A small opening is made in the outer layer surrounding the egg in order to help the embryo hatch out and aid in the implantation process of the growing embryo.

In egg donation and embryo donation, the resultant embryo after fertilisation is transferred to a person other than the one providing the eggs. These are resources for those with no eggs due to surgery, chemotherapy, or genetic causes; or with poor egg quality, previously unsuccessful IVF cycles or advanced maternal age. In the egg donor process, eggs are retrieved from a donor's ovaries, fertilised in the laboratory with sperm, and the resulting healthy embryos are returned to the recipient's uterus.

In oocyte selection, the oocytes with optimal chances of live birth can be chosen. It can also be used as a means of preimplantation genetic screening.

Embryo splitting can be used for twinning to increase the number of available embryos.

Cytoplasmic transfer is where the cytoplasm from a donor egg is injected into an egg with compromised mitochondria. The resulting egg is then fertilised with sperm and introduced into a uterus, usually that of the person who provided the recipient egg and nuclear DNA. Cytoplasmic transfer was created to aid those who experience infertility due to deficient or damaged mitochondria, contained within an egg's cytoplasm.

Complications and health effects

Multiple births

The major complication of IVF is the risk of multiple births. This is directly related to the practice of transferring multiple embryos at embryo transfer. Multiple births are related to increased risk of pregnancy loss, obstetrical complications, prematurity, and neonatal morbidity with the potential for long-term damage. Strict limits on the number of embryos that may be transferred have been enacted in some countries (e.g. Britain, Belgium) to reduce the risk of high-order multiples (triplets or more), but are not universally followed or accepted. Spontaneous splitting of embryos in the uterus after transfer can occur, but this is rare and would lead to identical twins. A double-blind, randomised study followed IVF pregnancies that resulted in 73 infants, and reported that 8.7% of singleton infants and 54.2% of twins had a birth weight of less than 2,500 g, the usual threshold for low birth weight. There is some evidence that making a double embryo transfer during one cycle achieves a higher live birth rate than a single embryo transfer, but making two single embryo transfers in two cycles has the same live birth rate and would avoid multiple pregnancies.

Sex ratio distortions

Certain kinds of IVF have been shown to lead to distortions in the sex ratio at birth. Intracytoplasmic sperm injection (ICSI), which was first applied in 1991, leads to slightly more female births (51.3% female). Blastocyst transfer, which was first applied in 1984, leads to significantly more male births (56.1% male). Standard IVF done at the second or third day leads to a normal sex ratio.
Epigenetic modifications caused by extended culture leading to the death of more female embryos have been theorised as the reason why blastocyst transfer leads to a higher male sex ratio; however, adding retinoic acid to the culture can bring this ratio back to normal. A second theory is that the male-biased sex ratio may be due to a higher rate of selection of male embryos. Male embryos develop faster in vitro, and thus may appear more viable for transfer.

Spread of infectious disease

By sperm washing, the risk that a chronic disease in the individual providing the sperm would infect the birthing parent or offspring can be brought to negligible levels. If the sperm donor has hepatitis B, the Practice Committee of the American Society for Reproductive Medicine advises that sperm washing is not necessary in IVF to prevent transmission, unless the birthing partner has not been effectively vaccinated. In women with hepatitis B, the risk of vertical transmission during IVF is no different from the risk in spontaneous conception. However, there is not enough evidence to say that ICSI procedures are safe in women with hepatitis B in regard to vertical transmission to the offspring.

Regarding potential spread of HIV/AIDS, Japan's government prohibited the use of IVF procedures in which both partners are infected with HIV. Although ethics committees had previously allowed the Ogikubo Hospital in Tokyo to use IVF for couples with HIV, the Ministry of Health, Labour and Welfare of Japan decided to block the practice. Hideji Hanabusa, the vice president of the Ogikubo Hospital, states that, together with his colleagues, he managed to develop a method through which scientists are able to remove HIV from sperm.

In the United States, people seeking to be embryo recipients undergo infectious disease screening required by the Food and Drug Administration (FDA), and reproductive tests to determine the best placement location and cycle timing before the actual embryo transfer occurs. The amount of screening the embryo has already undergone is largely dependent on the genetic parents' own IVF clinic and process. The embryo recipient may elect to have their own embryologist conduct further testing.

Other risks to the egg provider/retriever

A risk of ovarian stimulation is the development of ovarian hyperstimulation syndrome, particularly if hCG is used for inducing final oocyte maturation. This results in swollen, painful ovaries. It occurs in 30% of patients. Mild cases can be treated with over-the-counter medications, and cases can resolve in the absence of pregnancy. In moderate cases, the ovaries swell and fluid accumulates in the abdominal cavity; symptoms may include heartburn, gas, nausea or loss of appetite. In severe cases, patients have sudden severe abdominal pain, nausea and vomiting, which may result in hospitalisation.

During egg retrieval, there exists a small chance of bleeding, infection, and damage to surrounding structures such as bowel and bladder (transvaginal ultrasound aspiration), as well as difficulty in breathing, chest infection, allergic reactions to medication, or nerve damage (laparoscopy). Ectopic pregnancy may also occur if a fertilised egg develops outside the uterus, usually in the fallopian tubes, and requires immediate termination of the pregnancy. IVF does not seem to be associated with an elevated risk of cervical cancer, nor with ovarian cancer or endometrial cancer when neutralising the confounder of infertility itself.
Nor does it seem to impart any increased risk for breast cancer.

Regardless of pregnancy result, IVF treatment is usually stressful for patients. Neuroticism and the use of escapist coping strategies are associated with a higher degree of distress, while the presence of social support has a relieving effect. A negative pregnancy test after IVF is associated with an increased risk for depression, but not with any increased risk of developing anxiety disorders. Pregnancy test results do not seem to be a risk factor for depression or anxiety among men in relationships between two cisgender, heterosexual people. Hormonal agents such as gonadotropin-releasing hormone agonists (GnRH agonists) are associated with depression.

Studies show that there is an increased risk of venous thrombosis or pulmonary embolism during the first trimester of IVF pregnancies. When looking at long-term studies comparing patients who received or did not receive IVF, there seems to be no correlation with an increased risk of cardiac events; further studies are ongoing. Spontaneous pregnancy has occurred after successful and unsuccessful IVF treatments. Within 2 years of delivering an infant conceived through IVF, subfertile patients had a conception rate of 18%.

Birth defects

A 2013 review found that infants resulting from IVF (with or without ICSI) have a relative risk of birth defects of 1.32 (95% confidence interval 1.24–1.42) compared to naturally conceived infants. In 2008, an analysis of the data of the National Birth Defects Study in the US found that certain birth defects were significantly more common in infants conceived through IVF, notably septal heart defects, cleft lip with or without cleft palate, esophageal atresia, and anorectal atresia; the mechanism of causality is unclear. However, in a population-wide cohort study of 308,974 births (with 6,163 using assisted reproductive technology and following children from birth to age five) researchers found: "The increased risk of birth defects associated with IVF was no longer significant after adjustment for parental factors." Parental factors included known independent risks for birth defects such as maternal age, smoking status, etc. Multivariate correction did not remove the significance of the association of birth defects and ICSI (corrected odds ratio 1.57), although the authors speculate that underlying male infertility factors (which would be associated with the use of ICSI) may contribute to this observation and were not able to correct for these confounders. The authors also found that a history of infertility elevated risk itself in the absence of any treatment (odds ratio 1.29), consistent with a Danish national registry study and "implicates patient factors in this increased risk." The authors of the Danish national registry study speculate: "our results suggest that the reported increased prevalence of congenital malformations seen in singletons born after assisted reproductive technology is partly due to the underlying infertility or its determinants."

Other risks to the offspring

If the underlying infertility is related to abnormalities in spermatogenesis, male offspring will have a higher risk for sperm abnormalities. In some cases genetic testing may be recommended to help assess the risk of transmission of defects to progeny and to consider whether treatment is desirable. IVF does not seem to confer any risks regarding cognitive development, school performance, social functioning, and behaviour.
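To put a relative risk such as the 1.32 figure quoted under Birth defects into perspective, the sketch below converts it into an absolute risk, assuming a hypothetical 3% background prevalence of major birth defects. The baseline figure is illustrative only and is not taken from the studies cited above.

```python
# Minimal sketch: converting the relative risk (RR) of birth defects quoted above
# (RR 1.32, 95% CI 1.24-1.42, for IVF/ICSI vs natural conception) into an absolute
# risk. The 3% baseline prevalence is a hypothetical, illustrative value.

def absolute_risk(baseline_risk: float, relative_risk: float) -> float:
    """Absolute risk implied by scaling a baseline risk by a relative risk."""
    return baseline_risk * relative_risk

if __name__ == "__main__":
    baseline = 0.03  # hypothetical background rate of major birth defects
    for label, rr in [("point estimate", 1.32), ("CI lower", 1.24), ("CI upper", 1.42)]:
        risk = absolute_risk(baseline, rr)
        print(f"{label}: {risk:.2%} (excess over baseline: {risk - baseline:.2%})")
```

Under this assumed baseline, the point estimate corresponds to roughly 4% rather than 3%, i.e. on the order of one additional affected birth per hundred; the true excess depends on the actual background rate in the population studied.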
IVF infants are also known to be as securely attached to their parents as those who were naturally conceived, and IVF adolescents are as well-adjusted as those who have been naturally conceived. Limited long-term follow-up data suggest that IVF may be associated with an increased incidence of hypertension, impaired fasting glucose, increase in total body fat composition, advancement of bone age, subclinical thyroid disorder, early adulthood clinical depression and binge drinking in the offspring. It is not known, however, whether these potential associations are caused by the IVF procedure in itself, by adverse obstetric outcomes associated with IVF, by the genetic origin of the children or by yet unknown IVF-associated causes. Increases in embryo manipulation during IVF result in more deviant fetal growth curves, but birth weight does not seem to be a reliable marker of fetal stress. IVF, including ICSI, is associated with an increased risk of imprinting disorders (including Prader–Willi syndrome and Angelman syndrome), with an odds ratio of 3.7 (95% confidence interval 1.4 to 9.7). The IVF-associated incidence of cerebral palsy and neurodevelopmental delay is believed to be related to the confounders of prematurity and low birthweight. Similarly, the IVF-associated incidence of autism and attention-deficit disorder is believed to be related to confounders of maternal and obstetric factors. Overall, IVF does not cause an increased risk of childhood cancer. Studies have shown a decrease in the risk of certain cancers and an increased risk of certain others, including retinoblastoma, hepatoblastoma and rhabdomyosarcoma.

Controversial cases

Mix-ups

In some cases, laboratory mix-ups (misidentified gametes, transfer of wrong embryos) have occurred, leading to legal action against the IVF provider and complex paternity suits. An example is the case of a woman in California who received the embryo of another couple and was notified of this mistake after the birth of her son. This has led to many authorities and individual clinics implementing procedures to minimise the risk of such mix-ups. The HFEA, for example, requires clinics to use a double witnessing system, in which the identity of specimens is checked by two people at each point at which specimens are transferred. Alternatively, technological solutions are gaining favour to reduce the manpower cost of manual double witnessing and to further reduce risks, using uniquely numbered RFID tags which can be identified by readers connected to a computer. The computer tracks specimens throughout the process and alerts the embryologist if non-matching specimens are identified. Although the use of RFID tracking has expanded in the US, it is still not widely adopted.

Preimplantation genetic diagnosis or screening

Pre-implantation genetic diagnosis (PGD) is criticised for giving select demographic groups disproportionate access to a means of creating a child possessing characteristics that they consider "ideal". Many fertile couples now demand equal access to embryonic screening so that their child can be just as healthy as one created through IVF. Mass use of PGD, especially as a means of population control or in the presence of legal measures related to population or demographic control, can lead to intentional or unintentional demographic effects such as the skewed live-birth sex ratios seen in China following implementation of its one-child policy.
While PGD was originally designed to screen for embryos carrying hereditary genetic diseases, the method has been applied to select features that are unrelated to diseases, thus raising ethical questions. Examples of such cases include the selection of embryos based on histocompatibility (HLA) for the donation of tissues to a sick family member, the diagnosis of genetic susceptibility to disease, and sex selection. These examples raise ethical issues because of the morality of eugenics: the practice is frowned upon because it allows unwanted traits to be eliminated and desired traits to be selected, giving individuals the opportunity to shape a human life through science rather than natural selection. For example, a deaf British couple, Tom and Paula Lichy, have petitioned to create a deaf baby using IVF. Some medical ethicists have been very critical of this approach. Jacob M. Appel wrote that "intentionally culling out blind or deaf embryos might prevent considerable future suffering, while a policy that allowed deaf or blind parents to select for such traits intentionally would be far more troublesome."

Industry corruption

Robert Winston, professor of fertility studies at Imperial College London, has called the industry "corrupt" and "greedy", stating that "one of the major problems facing us in healthcare is that IVF has become a massive commercial industry," and that "what has happened, of course, is that money is corrupting this whole technology", and accused authorities of failing to protect couples from exploitation: "The regulatory authority has done a consistently bad job. It's not prevented the exploitation of people, it's not put out very good information to couples, it's not limited the number of unscientific treatments people have access to". The IVF industry has been described as a market-driven construction of health, medicine and the human body. The industry has been accused of making unscientific claims, and distorting facts relating to infertility, in particular through widely exaggerated claims about how common infertility is in society, in an attempt to get as many couples as possible and as soon as possible to try treatments (rather than trying to conceive naturally for a longer time). This risks removing infertility from its social context and reducing the experience to a simple biological malfunction, which not only can be treated through bio-medical procedures, but should be treated by them.

Older patients

All pregnancies can be risky, but there are greater risks for mothers over the age of 40. As people get older, they are more likely to develop conditions such as gestational diabetes and pre-eclampsia. If the mother does conceive over the age of 40, their offspring may be of lower birth weight and more likely to require intensive care. Because of this, the increased risk is a sufficient cause for concern. The high incidence of caesarean section in older patients is commonly regarded as a risk. Those conceiving at 40 have a greater risk of gestational hypertension and premature birth. Offspring born to older mothers face both the risks associated with advanced maternal age and those associated with conception through IVF. Adriana Iliescu held the record for a while as the oldest woman to give birth using IVF and a donor egg, when she gave birth in 2004 at the age of 66. In September 2019, a 74-year-old woman became the oldest ever to give birth after she delivered twins at a hospital in Guntur, Andhra Pradesh.
Pregnancy after menopause

Although menopause is a natural barrier to further conception, IVF has allowed people to become pregnant in their fifties and sixties. People whose uteruses have been appropriately prepared receive embryos that originated from an egg donor. Therefore, although they do not have a genetic link with the child, they have a physical link through pregnancy and childbirth. Even after menopause, the uterus is fully capable of carrying out a pregnancy.

Same-sex couples, single and unmarried parents

A 2009 statement from the ASRM found no persuasive evidence that children are harmed or disadvantaged solely by being raised by single parents, unmarried parents, or homosexual parents. It did not support restricting access to assisted reproductive technologies on the basis of a prospective parent's marital status or sexual orientation. A 2018 study found that children's psychological well-being did not differ when raised by either same-sex parents or heterosexual parents, even finding that psychological well-being was better amongst children raised by same-sex parents. Ethical concerns include reproductive rights, the welfare of offspring, nondiscrimination against unmarried individuals and homosexual people, and professional autonomy.

A controversy in California focused on the question of whether physicians opposed to same-sex relationships should be required to perform IVF for a lesbian couple. Guadalupe T. Benitez, a lesbian medical assistant from San Diego, sued doctors Christine Brody and Douglas Fenton of the North Coast Woman's Care Medical Group after Brody told her that she had "religious-based objections to treating her and homosexuals in general to help them conceive children by artificial insemination," and Fenton refused to authorise a refill of her prescription for the fertility drug Clomid on the same grounds. The California Medical Association had initially sided with Brody and Fenton, but the case, North Coast Women's Care Medical Group v. Superior Court, was decided unanimously by the California State Supreme Court in favour of Benitez on 19 August 2008.

Nadya Suleman came to international attention after having twelve embryos implanted, eight of which survived, resulting in eight newborns being added to her existing six-child family. The Medical Board of California sought to have fertility doctor Michael Kamrava, who treated Suleman, stripped of his licence. State officials alleged that performing Suleman's procedure was evidence of unreasonable judgment, substandard care, and a lack of concern for the eight children she would conceive and the six she was already struggling to raise. On 1 June 2011 the Medical Board issued a ruling that Kamrava's medical licence be revoked effective 1 July 2011.

Transgender parents

The research on transgender reproduction and family planning is limited. A 2020 comparative study of children born to a transgender father and cisgender mother via donor sperm insemination in France showed no significant differences compared with IVF-conceived and naturally conceived children of cisgender parents. Transgender men can experience challenges in pregnancy and birthing from the cis-normative structure within the medical system, as well as psychological challenges such as renewed gender dysphoria. The effect of continued testosterone therapy during pregnancy and breastfeeding is undetermined. Ethical concerns include reproductive rights, reproductive justice, physician autonomy, and transphobia within the health care setting.
Anonymous donors

Alana Stewart, who was conceived using donor sperm, began an online forum for donor children called AnonymousUS in 2010. The forum welcomes the viewpoints of anyone involved in the IVF process. In May 2012, a court ruling made anonymous sperm and egg donation in British Columbia illegal. In the U.K., Sweden, Norway, Germany, Italy, New Zealand, and some Australian states, donors are not paid and cannot be anonymous. In 2000, a website called Donor Sibling Registry was created to help biological children with a common donor connect with each other.

Leftover embryos or eggs, unwanted embryos

There may be leftover embryos or eggs from IVF procedures if the person for whom they were originally created has successfully carried one or more pregnancies to term, and no longer wishes to use them. With the patient's permission, these may be donated to help others conceive by means of third party reproduction. In embryo donation, these extra embryos are given to others for transfer, with the goal of producing a successful pregnancy. Embryo recipients typically have genetic issues or poor-quality embryos or eggs of their own. The resulting child is considered the child of whoever birthed them, and not the child of the donor, the same as occurs with egg donation or sperm donation. According to the National Infertility Association, genetic parents typically donate the eggs or embryos to a fertility clinic where they are preserved by oocyte cryopreservation or embryo cryopreservation until a carrier is found for them. The process of matching the donation with the prospective parents is conducted by the agency itself, at which time the clinic transfers ownership of the embryos to the prospective parent(s).

Alternatives to donating unused embryos are destroying them (or having them transferred at a time when pregnancy is very unlikely), keeping them frozen indefinitely, or donating them for use in research (rendering them non-viable). Individual moral views on disposing of leftover embryos may depend on personal views on the beginning of human personhood and the definition and/or value of potential future persons, and on the value that is given to fundamental research questions. Some people believe donation of leftover embryos for research is a good alternative to discarding the embryos when patients receive proper, honest and clear information about the research project, the procedures and the scientific values.

During the embryo selection and transfer phases, many embryos may be discarded in favour of others. This selection may be based on criteria such as genetic disorders or sex. One of the earliest cases of special gene selection through IVF was the case of the Collins family in the 1990s, who selected the sex of their child. The ethical issues remain unresolved as no worldwide consensus exists in science, religion, and philosophy on when a human embryo should be recognised as a person. For those who believe that this is at the moment of conception, IVF becomes a moral question when multiple eggs are fertilised, begin development, and only a few are chosen for uterus transfer. If IVF were to involve the fertilisation of only a single egg, or at least only the number that will be transferred, then this would not be an issue. However, this has the chance of increasing costs dramatically as only a few eggs can be attempted at a time. As a result, the couple must decide what to do with these extra embryos.
Depending on their view of the embryo's humanity or the chance the couple will want to try to have another child, the couple has multiple options for dealing with these extra embryos. Couples can choose to keep them frozen, donate them to other infertile couples, thaw them, or donate them to medical research. Keeping them frozen costs money, donating them does not ensure they will survive, thawing them renders them immediately unviable, and medical research results in their termination. In the realm of medical research, the couple is not necessarily told what the embryos will be used for, and as a result, some can be used in stem cell research.

In February 2024, the Alabama Supreme Court ruled in LePage v. Center for Reproductive Medicine that cryopreserved embryos were "persons" or "extrauterine children". After Dobbs v. Jackson Women's Health Organization (2022), some antiabortionists had hoped to get a judgement that fetuses and embryos were "person[s]".

Religious response

The Catholic Church opposes all kinds of assisted reproductive technology and artificial contraception, on the grounds that they separate the procreative goal of marital sex from the goal of uniting married couples. The Catholic Church permits the use of a small number of reproductive technologies and contraceptive methods such as natural family planning, which involves charting ovulation times, and allows other forms of reproductive technologies that allow conception to take place from normative sexual intercourse, such as a fertility lubricant. Pope Benedict XVI publicly re-emphasised the Catholic Church's opposition to in vitro fertilisation, saying that it replaces love between a husband and wife. The Catechism of the Catholic Church, in accordance with the Catholic understanding of natural law, teaches that reproduction has an "inseparable connection" to the sexual union of married couples. In addition, the church opposes IVF because it might result in the disposal of embryos; in Catholicism, an embryo is viewed as an individual with a soul that must be treated as a person. The Catholic Church maintains that it is not objectively evil to be infertile, and advocates adoption as an option for such couples who still wish to have children.

Hindus welcome IVF as a gift for those who are unable to bear children and have declared doctors performing IVF to be conducting punya, as there are several characters claimed to have been born without intercourse, notably the Kauravas and the five Pandavas.

Regarding the response to IVF by Islam, a general consensus from contemporary Sunni scholars concludes that IVF methods are immoral and prohibited. However, Gad El-Hak Ali Gad El-Hak's ART fatwa includes that: IVF of an egg from the wife with the sperm of her husband and the transfer of the fertilised egg back to the uterus of the wife is allowed, provided that the procedure is indicated for a medical reason and is carried out by an expert physician. Since marriage is a contract between the wife and husband during the span of their marriage, no third party should intrude into the marital functions of sex and procreation. This means that a third party donor is not acceptable, whether he or she is providing sperm, eggs, embryos, or a uterus. The use of a third party is tantamount to zina, or adultery.

Within the Orthodox Jewish community the concept is debated as there is little precedent in traditional Jewish legal textual sources.
Regarding laws of sexuality, religious challenges include masturbation (which may be regarded as "seed wasting"), laws related to sexual activity and menstruation (niddah) and the specific laws regarding intercourse. An additional major issue is that of establishing paternity and lineage. For a baby conceived naturally, the father's identity is determined by a legal presumption (chazakah) of legitimacy: rov bi'ot achar ha'baal – a woman's sexual relations are assumed to be with her husband. Regarding an IVF child, this assumption does not exist and as such Rabbi Eliezer Waldenberg (among others) requires an outside supervisor to positively identify the father. Reform Judaism has generally approved IVF. Society and culture Many women of sub-Saharan Africa choose to foster their children to infertile women. IVF enables these infertile women to have their own children, which imposes new ideals on a culture in which fostering children is seen as both natural and culturally important. Many infertile women are able to earn more respect in their society by taking care of the children of other mothers, and this may be lost if they choose to use IVF instead. As IVF is seen as unnatural, it may even hinder their societal position as opposed to making them equal with fertile women. It is also economically advantageous for infertile women to raise foster children as it gives these children greater ability to access resources that are important for their development and also aids the development of their society at large. If IVF becomes more popular without the birth rate decreasing, there could be more large family homes with fewer options for sending their newborn children out to be fostered. This could result in an increase of orphaned children and/or a decrease in resources for the children of large families. This would ultimately stifle the children's and the community's growth. In the US, the pineapple has emerged as a symbol of IVF users, possibly because some people thought, without scientific evidence, that eating pineapple might slightly increase the success rate for the procedure. Emotional involvement with children Studies have indicated that IVF mothers show greater emotional involvement with their child, and they enjoy motherhood more than mothers by natural conception. Similarly, studies have indicated that IVF fathers express more warmth and emotional involvement than fathers by adoption and natural conception and enjoy fatherhood more. Some IVF parents become overly involved with their children. Men and IVF Research has shown that men largely view themselves as "passive contributors" since they have "less physical involvement" in IVF treatment. Despite this, many men feel distressed after seeing the toll of hormonal injections and ongoing physical intervention on their female partner. Fertility was found to be a significant factor in a man's perception of his masculinity, driving many to keep the treatment a secret. In cases where men did share that they and their partners were undergoing IVF, they reported having been teased, mainly by other men, although some viewed this as an affirmation of support and friendship. For others, this led to feeling socially isolated. In comparison with females, males showed less deterioration in mental health in the years following a failed treatment. However, many men did feel guilt, disappointment and inadequacy, stating that they were simply trying to provide an "emotional rock" for their partners. 
Ability to withdraw consent In certain countries, including Austria, Italy, Estonia, Hungary, Spain and Israel, the male does not have the full ability to withdraw consent to storage or use of embryos once they are fertilised. In the United States, the matter has been left to the courts on a more or less ad hoc basis. If embryos are implanted and a child is born contrary to the wishes of the male, he still has the legal and financial responsibilities of a father. Availability and utilisation Cost Costs of IVF can be broken down into direct and indirect costs. Direct costs include the medical treatments themselves, including doctor consultations, medications, ultrasound scanning, laboratory tests, the actual IVF procedure, and any associated hospital charges and administrative costs. Indirect costs include the cost of addressing any complications with treatments, compensation for the gestational surrogate, patients' travel costs, and lost hours of productivity. These costs can be inflated by the increasing age of the woman undergoing IVF treatment (particularly for those over the age of 40), and by the increased costs associated with multiple births. For instance, a pregnancy with twins can cost up to three times that of a singleton pregnancy. While some insurance plans cover one cycle of IVF, multiple cycles are often needed to achieve a successful outcome. A study completed in Northern California found that IVF treatment that results in a successful outcome costs $61,377, and this can be more costly with the use of a donor egg. The cost of IVF reflects the costliness of the underlying healthcare system more than the regulatory or funding environment, and ranges, on average for a standard IVF cycle and in 2006 United States dollars, from $12,500 in the United States to $4,000 in Japan. In Ireland, IVF costs around €4,000, with fertility drugs, if required, costing up to €3,000. The cost per live birth is highest in the United States ($41,000) and United Kingdom ($40,000) and lowest in Scandinavia and Japan (both around $24,500). The high cost of IVF is also a barrier to access for disabled individuals, who typically have lower incomes, face higher health care costs, and seek health care services more often than non-disabled individuals. Navigating insurance coverage for transgender expectant parents presents a unique challenge. Insurance plans are designed to cater towards a specific population, meaning that some plans can provide adequate coverage for gender-affirming care but fail to provide fertility services for transgender patients. Additionally, insurance coverage is constructed around a person's legally recognised sex and not their anatomy; thus, transgender people may not get coverage for the services they need, including transgender men for fertility services. Use by LGBT individuals Same-sex couples In larger urban centres, studies have noted that lesbian, gay, bisexual, transgender and queer (LGBTQ+) populations are among the fastest-growing users of fertility care. IVF is increasingly being used to allow lesbian and other LGBT couples to share in the reproductive process through a technique called reciprocal IVF. The eggs of one partner are used to create embryos which the other partner carries through pregnancy. For gay male couples, many elect to use IVF through gestational surrogacy, where one partner's sperm is used to fertilise a donor ovum, and the resulting embryo is transplanted into a surrogate carrier's womb. 
There are various IVF options available for same-sex couples including, but not limited to, IVF with donor sperm, IVF with a partner's oocytes, reciprocal IVF, IVF with donor eggs, and IVF with a gestational surrogate. IVF with donor sperm can be considered traditional IVF for lesbian couples, but reciprocal IVF or using a partner's oocytes are other options for lesbian couples trying to conceive to include both partners in the biological process. Using a partner's oocytes is an option for partners who are unsuccessful in conceiving with their own, and reciprocal IVF involves creating embryos with one partner's eggs and donor sperm, which are then transferred to the other partner, who gestates the pregnancy. Donor IVF involves conceiving with a third party's eggs. Typically, for gay male couples hoping to use IVF, the common techniques are using IVF with donor eggs and gestational surrogates. Transgender parents Many LGBT communities centre their support around cisgender gay, lesbian and bisexual people and neglect to include proper support for transgender people. A 2020 literature review analyses the social, emotional and physical experiences of pregnant transgender men. A common obstacle faced by pregnant transgender men is the possibility of gender dysphoria. Literature shows that transgender men report uncomfortable procedures and interactions during their pregnancies as well as feeling misgendered due to gendered terminology used by healthcare providers. Outside of the healthcare system, pregnant transgender men may experience gender dysphoria due to cultural assumptions that all pregnant people are cisgender women. These people use three common approaches to navigating their pregnancy: passing as a cisgender woman, hiding their pregnancy, or being out and visibly pregnant as a transgender man. Some transgender and gender diverse patients describe their experience in seeking gynaecological and reproductive health care as isolating and discriminatory, as the strictly binary healthcare system often leads to denial of healthcare coverage or unnecessary revelation of their transgender status to their employer. Many transgender people retain their original sex organs and choose to have children through biological reproduction. Advances in assisted reproductive technology and fertility preservation have broadened the options transgender people have to conceive a child using their own gametes or a donor's. Transgender men and women may opt for fertility preservation before any gender-affirming surgery, but it is not required for future biological reproduction. It is also recommended that fertility preservation be conducted before any hormone therapy. Additionally, while fertility specialists often suggest that transgender men discontinue testosterone therapy prior to pregnancy, research on this topic is still inconclusive. However, a 2019 study found that transgender male patients seeking oocyte retrieval via assisted reproductive technology (including IVF) were able to undergo treatment four months after stopping testosterone treatment, on average. All patients experienced menses and normal AMH, FSH and E2 levels and antral follicle counts after coming off testosterone, which allowed for successful oocyte retrieval. Despite assumptions that long-term androgen treatment negatively impacts fertility, oocyte retrieval, an integral part of the IVF process, does not appear to be affected. 
Biological reproductive options available to transgender women include, but are not limited to, IVF and IUI with the trans woman's sperm and a donor or a partner's eggs and uterus. Fertility treatment options for transgender men include, but are not limited to, IUI or IVF using their own eggs with donor sperm and/or donor eggs, their own uterus, or a different uterus, whether that is a partner's or a surrogate's. Use by disabled individuals People with disabilities who wish to have children are equally or more likely than the non-disabled population to experience infertility, yet disabled individuals are much less likely to have access to fertility treatment such as IVF. There are many extraneous factors that hinder disabled individuals' access to IVF, such as assumptions about decision-making capacity, sexual interests and abilities, heritability of a disability, and beliefs about parenting ability. These same misconceptions about people with disabilities that once led health care providers to sterilise thousands of women with disabilities now lead them to provide or deny reproductive care on the basis of stereotypes concerning people with disabilities and their sexuality. Not only do misconceptions about disabled individuals' parenting ability, sexuality, and health restrict and hinder access to fertility treatment such as IVF; structural barriers, such as providers uneducated in disability healthcare and inaccessible clinics, also severely hinder disabled individuals' access to receiving IVF. By country Australia In Australia, the average age of women undergoing ART treatment is 35.5 years among those using their own eggs (one in four being 40 or older) and 40.5 years among those using donated eggs. While IVF is available in Australia, Australians using IVF are unable to choose their baby's gender. Cameroon Ernestine Gwet Bell supervised the first Cameroonian child born by IVF in 1998. Canada In Canada, one cycle of IVF treatment can cost between $7,750 and $12,250 CAD, and medications alone can cost from $2,500 to over $7,000 CAD. The funding mechanisms that influence accessibility in Canada vary by province and territory, with some provinces providing full, partial or no coverage. New Brunswick provides partial funding through its Infertility Special Assistance Fund – a one-time grant of up to $5,000. Patients may only claim up to 50% of treatment costs or $5,000 (whichever is less) for expenses incurred after April 2014. Eligible patients must be full-time New Brunswick residents with a valid Medicare card and have an official medical infertility diagnosis by a physician. In December 2015, the Ontario provincial government enacted the Ontario Fertility Program for patients with medical and non-medical infertility, regardless of sexual orientation, gender or family composition. Eligible patients for IVF treatment must be Ontario residents under the age of 43 and have a valid Ontario Health Insurance Plan card and have not already undergone any IVF cycles. Coverage is extensive, but not universal. Coverage extends to certain blood and urine tests, physician/nurse counselling and consultations, certain ultrasounds, up to two cycle monitorings, embryo thawing, freezing and culture, fertilisation and embryology services, single transfers of all embryos, and one surgical sperm retrieval using certain techniques only if necessary. 
Drugs and medications are not covered under this Program, along with psychologist or social worker counselling, storage and shipping of eggs, sperm or embryos, and the purchase of donor sperm or eggs. China IVF is expensive in China and not generally accessible to unmarried women. In August 2022, China's National Health Authority announced that it will take steps to make assisted reproductive technology more accessible, including by guiding local governments to include such technology in its national medical system. Croatia No egg or sperm donation takes place in Croatia; however, using donated sperm or eggs in ART and IUI is allowed. With donated eggs, sperm or embryos, heterosexual couples and single women have legal access to IVF. Male and female same-sex couples do not have access to ART as a form of reproduction. The minimum age for males and females to access ART in Croatia is 18; there is no maximum age. Donor anonymity applies, but the born child can be given access to the donor's identity at a certain age. India The penetration of the IVF market in India is quite low, with only 2,800 cycles per million infertile people in the reproductive age group (20–44 years), as compared to China, which has 6,500 cycles. The key challenges are lack of awareness, affordability and accessibility. Since 2018, however, India has become a destination for fertility tourism, because of lower costs than in the Western world. In December 2021, the Lok Sabha passed the Assisted Reproductive Technology (Regulation) Bill 2020, to regulate ART services including IVF centres, sperm and egg banks. Israel Israel has the highest rate of IVF in the world, with 1,657 procedures performed per million people per year. Couples without children can receive funding for IVF for up to two children. The same funding is available for people without children who will raise up to two children in a single-parent home. IVF is available for people aged 18 to 45. The Israeli Health Ministry says it spends roughly $3,450 per procedure. Sweden One, two or three IVF treatments are government subsidised for people who are younger than 40 and have no children. The rules for how many treatments are subsidised, and the upper age limit for the people, vary between different county councils. Single people are treated, and embryo adoption is allowed. There are also private clinics that offer the treatment for a fee. United Kingdom Availability of IVF in England is determined by Clinical Commissioning Groups (CCGs). The National Institute for Health and Care Excellence (NICE) recommends up to 3 cycles of treatment for people under 40 years old who have had minimal success conceiving after 2 years of unprotected sex. Cycles will not be continued for people who are older than 40 years. CCGs in Essex, Bedfordshire and Somerset have reduced funding to one cycle, or none, and it is expected that reductions will become more widespread. Funding may be available in "exceptional circumstances" – for example if a male partner has a transmittable infection or one partner is affected by cancer treatment. According to the campaign group Fertility Fairness, "at the end of 2014 every CCG in England was funding at least one cycle of IVF". Prices paid by the NHS in England ranged from under £3,000 to more than £6,000 in 2014/15. In February 2013, the cost of implementing the NICE guidelines for IVF along with other treatments for infertility was projected to be £236,000 per year per 100,000 members of the population. IVF increasingly appears on NHS treatment blacklists. 
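As a rough, non-authoritative illustration of the scale of that 2013 projection, the £236,000-per-100,000-people figure can be restated per person and scaled up; the population number below is an assumed round figure used only for illustration, not a value taken from the text.

# Back-of-envelope restatement of the 2013 NICE implementation-cost projection.
# The 56,000,000 population figure is an assumed round number for illustration,
# not a figure taken from the text.
cost_per_100k_gbp = 236_000                      # projected cost per year per 100,000 people
cost_per_person = cost_per_100k_gbp / 100_000    # ~£2.36 per person per year
assumed_population = 56_000_000
projected_annual_total = cost_per_person * assumed_population

print(f"Projected cost per person per year: £{cost_per_person:.2f}")
print(f"Projected annual cost for a population of {assumed_population:,}: £{projected_annual_total:,.0f}")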
In August 2017 five of the 208 CCGs had stopped funding IVF completely and others were considering doing so. By October 2017 only 25 CCGs were delivering the three recommended NHS IVF cycles to eligible people under 40. Policies could fall foul of discrimination laws if they treat same-sex couples differently from heterosexual ones. In July 2019 Jackie Doyle-Price said that women were registering with surgeries further away from their own home in order to get around CCG rationing policies. The Human Fertilisation and Embryology Authority said in September 2018 that parents who are limited to one cycle of IVF, or have to fund it themselves, are more likely to choose to implant multiple embryos in the hope it increases the chances of pregnancy. This significantly increases the chance of multiple births and the associated poor outcomes, which would increase NHS costs. The president of the Royal College of Obstetricians and Gynaecologists said that funding 3 cycles was "the most important factor in maintaining low rates of multiple pregnancies and reduce(s) associated complications". United States In the United States, overall availability of IVF in 2005 was 2.5 IVF physicians per 100,000 population, and utilisation was 236 IVF cycles per 100,000. 126 procedures are performed per million people per year. Utilisation increases markedly with availability and IVF insurance coverage, and to a significant extent also with the percentage of single persons and with median income. In the US, an average cycle, from egg retrieval to embryo implantation, costs $12,400, and insurance companies that do cover treatment, even partially, usually cap the number of cycles they pay for. As of 2015, more than 1 million babies had been born utilising IVF technologies. In the US, as of September 2023, 21 states and the District of Columbia had passed laws for fertility insurance coverage. In 15 of those jurisdictions, some level of IVF coverage is included, and in 17, some fertility preservation services are included. Eleven jurisdictions require coverage for both fertility preservation and IVF: Colorado, Connecticut, Delaware, Maryland, Maine, New Hampshire, New Jersey, New York, Rhode Island, Utah, and Washington D.C. The states that have infertility coverage laws are Arkansas, California, Colorado, Connecticut, Delaware, Hawaii, Illinois, Louisiana, Maryland, Massachusetts, Montana, New Hampshire, New Jersey, New York, Ohio, Rhode Island, Texas, Utah, and West Virginia. As of July 2023, New York was reportedly the only state Medicaid program to cover IVF. These laws differ by state, but many require that an egg be fertilised with sperm from a spouse and that, in order to be covered, patients show they cannot become pregnant through penile-vaginal sex. These requirements are not possible for a same-sex couple to meet. Many fertility clinics in the United States limit the upper age at which people are eligible for IVF to 50 or 55 years. These cut-offs make it difficult for people older than fifty-five to utilise the procedure. Legal status In 2003, government agencies in China passed bans on the use of IVF by unmarried people or by couples with certain infectious diseases. In India, the use of IVF as a means of sex selection (preimplantation genetic diagnosis) is banned under the Pre-Conception and Pre-Natal Diagnostic Techniques Act, 1994. Sunni Muslim nations generally allow IVF between married couples when conducted with their own respective sperm and eggs, but not with donor eggs from other couples. 
But Iran, which is Shi'a Muslim, has a more complex scheme. Iran bans sperm donation but allows donation of both fertilised and unfertilised eggs. Fertilised eggs are donated from married couples to other married couples, while unfertilised eggs are donated in the context of mut'ah or temporary marriage to the father. By 2012 Costa Rica was the only country in the world with a complete ban on IVF technology, it having been ruled unconstitutional by the nation's Supreme Court because it "violated life." Costa Rica had been the only country in the western hemisphere that forbade IVF. A bill sent reluctantly by the government of President Laura Chinchilla was rejected by parliament. President Chinchilla had not publicly stated her position on the question of IVF, and given the massive influence of the Catholic Church in her government, any change in the status quo seemed very unlikely. In spite of Costa Rican government and strong religious opposition, the IVF ban was struck down by the Inter-American Court of Human Rights in a decision of 20 December 2012. The court said that a long-standing Costa Rican guarantee of protection for every human embryo violated the reproductive freedom of infertile couples because it prohibited them from using IVF, which often involves the disposal of embryos not implanted in a woman's uterus. On 10 September 2015, President Luis Guillermo Solís signed a decree legalising in-vitro fertilisation. The decree was added to the country's official gazette on 11 September. Opponents of the practice have since filed a lawsuit before the country's Constitutional Court. All major restrictions on single but infertile people using IVF were lifted in Australia in 2002 after a final appeal to the Australian High Court was rejected on procedural grounds in the Leesa Meldrum case. A Victorian federal court had ruled in 2000 that the existing ban on all single women and lesbians using IVF constituted sex discrimination. Victoria's government announced changes to its IVF law in 2007 eliminating remaining restrictions on fertile single women and lesbians, leaving South Australia as the only state maintaining them. United States Despite strong popular support (7 out of 10 adults consider IVF access a good thing and 67% believe that health insurance plans should cover IVF), IVF can involve complicated legal issues and has become a contentious issue in US politics. Federal regulations include screening requirements and restrictions on donations, but these generally do not affect heterosexually intimate partners. Doctors may be required to provide treatments to unmarried or LGBTQ couples under non-discrimination laws, as for example in California. The state of Tennessee proposed a bill in 2009 that would have defined donor IVF as adoption. During the same session, another bill proposed barring adoption from any unmarried and cohabiting couple, and activist groups stated that passing the first bill would effectively stop unmarried women from using IVF. Neither of these bills passed. In 2023, the Practice Committee of the American Society for Reproductive Medicine (ASRM) updated its guidelines for the definition of “infertility” to include those who need medical interventions “in order to achieve a successful pregnancy either as an individual or with a partner.” In many states, legal and financial decisions about provision of infertility treatments reference this “official” definition. 
On September 29, 2024, California Governor Gavin Newsom signed SB 729, legislation which aligns with the ASRM definition of “infertility”. In the United States, much of the opposition to the use of IVF is associated with the anti-abortion movement, evangelicals, and denominations such as the Southern Baptists. Current legal opposition to IVF and other fertility treatment access has stemmed from recent court rulings regarding women's reproductive healthcare. In the 2022 Dobbs v. Jackson Women's Health Organization decision, the U.S. Supreme Court overturned the 1973 Roe v. Wade decision which had federally protected the right to abortion. The 2024 Alabama Supreme Court decision regarding IVF has since threatened IVF access and legality in the U.S. Frozen embryos at an IVF clinic were accidentally destroyed, resulting in a lawsuit in which the attorneys for the plaintiffs sought damages under the Wrongful Death of a Minor Act. The court ruled in favor of the plaintiffs, setting a state-level precedent that embryos and fetuses are given the same rights as minors/children, regardless of whether they are in utero or not. This has created confusion over the status of unused embryos and questions surrounding when life begins. After the court's decision, numerous IVF clinics in Alabama halted IVF treatment services for fear of civil and criminal liability associated with the new rights granted to embryos. Since then, bills proposing embryonic personhood have been introduced in 13 other states, creating fear of further state restrictions. This ruling raised concerns from The National Infertility Association and the American Society for Reproductive Medicine that the decision would mean Alabama's bans on abortion prohibit IVF as well, while the University of Alabama at Birmingham health system paused IVF treatments. Eight days later the Alabama legislature voted to protect IVF providers and patients from criminal or civil liability. The Right to IVF Act, federal legislation that would have codified a right to fertility treatments and provided insurance coverage for in vitro fertilisation treatments, was twice brought to a vote in the Senate in 2024. Both times it was blocked by Senate Republicans, of whom only Lisa Murkowski and Susan Collins voted to move the bill forward. Few American courts have addressed the issue of the "property" status of a frozen embryo. This issue might arise in the context of a divorce case, in which a court would need to determine which spouse would be able to decide the disposition of the embryos. It could also arise in the context of a dispute between a sperm donor and an egg donor, even if they were unmarried. In 2015, an Illinois court held that such disputes could be decided by reference to any contract between the parents-to-be. In the absence of a contract, the court would weigh the relative interests of the parties. Alternatives Some alternatives to IVF are: Artificial insemination, including intracervical insemination and intrauterine insemination of semen. It requires that a woman ovulates, but is a relatively simple procedure, and can be used in the home for self-insemination without medical practitioner assistance. The beneficiaries of artificial insemination are people who desire to give birth to their own child and who may be single, in a lesbian relationship, or in a heterosexual relationship with a male partner who is infertile or has a physical impairment that prevents full intercourse from taking place. 
Ovulation induction (in the sense of medical treatment aiming for the development of one or two ovulatory follicles) is an alternative for people with anovulation or oligoovulation, since it is less expensive and easier to control. It generally involves antiestrogens such as clomifene citrate or letrozole, and is followed by natural or artificial insemination. Surrogacy, the process in which a surrogate agrees to bear a child for another person or persons, who will become the child's parent(s) after birth. People may seek a surrogacy arrangement when pregnancy is medically impossible, when pregnancy risks are too dangerous for the intended parent, or when a single man or a male couple wish to have a child. Adoption, whereby a person assumes the parenting of another, usually a child, from that person's biological or legal parent or parents. See also Semen cryopreservation Evans v United Kingdom, a key case at the European Court of Human Rights Sex selection Stem cell controversy Reciprocal IVF Test Tube Babies (film) References Further reading External links Fertility Female genital procedures Cryobiology Fertility medicine Obstetrics Human pregnancy Reproduction British inventions 1977 introductions Egg donation Sperm donation
In vitro fertilisation
[ "Physics", "Chemistry", "Biology" ]
18,173
[ "Physical phenomena", "Phase transitions", "Behavior", "Reproduction", "Biological interactions", "Cryobiology", "Biochemistry" ]
57,885
https://en.wikipedia.org/wiki/Soil%20retrogression%20and%20degradation
Soil retrogression and degradation are two regressive evolution processes associated with the loss of equilibrium of a stable soil. Retrogression is primarily due to soil erosion and corresponds to a phenomenon where succession reverts the land to its natural physical state. Degradation is an evolution, different from natural evolution, related to the local climate and vegetation. It is due to the replacement of primary plant communities (known as climax vegetation) by secondary communities. This replacement modifies the humus composition and amount, and affects the formation of the soil. It is directly related to human activity. Soil degradation may also be viewed as any change or ecological disturbance to the soil perceived to be deleterious or undesirable. According to the Center for Development Research at the University of Bonn and the International Food Policy Research Institute in Washington, the quality of 33% of pastureland, 25% of arable land and 23% of forests has deteriorated globally over the last 30 years. 3.2 billion people are dependent on this land. General At the beginning of soil formation, the bare rock outcrops are gradually colonized by pioneer species (lichens and mosses). They are succeeded by herbaceous vegetation, shrubs, and finally forest. In parallel, the first humus-bearing horizon is formed (the A horizon), followed by some mineral horizons (B horizons). Each successive stage is characterized by a certain association of soil/vegetation and environment, which defines an ecosystem. After a certain time of parallel evolution between the ground and the vegetation, a state of steady balance is reached. This stage of development is called climax by some ecologists and "natural potential" by others. Succession is the evolution towards climax. Regardless of its name, the equilibrium stage of primary succession is the highest natural form of development that the environmental factors are capable of producing. The cycles of evolution of soils have very variable durations, from tens, hundreds, or thousands of years for quickly evolving soils (A horizon only) to more than a million years for slowly developing soils. The same soil may achieve several successive steady state conditions during its existence, as exhibited by the Pygmy forest sequence in Mendocino County, California. Soils naturally reach a state of high productivity, from which they naturally degrade as mineral nutrients are removed from the soil system. Thus older soils are more vulnerable to the effects of induced retrogression and degradation. Ecological factors influencing soil formation There are two types of ecological factors influencing the evolution of a soil (through alteration and humification). These two factors are extremely significant in explaining the evolution of soils with short development times. The first type of factor is the average climate of an area and its associated vegetation (biome). The second type of factor is more local, and is related to the original rock and local drainage. This type of factor explains the appearance of specialized associations (e.g. peat bogs). Biorhexistasy theory The destruction of vegetation implies the destruction of evolved soils, or a regressive evolution. Cycles of succession-regression of soils follow one another within short intervals of time (human actions) or long intervals of time (climate variations). The role of climate in the deterioration of rocks and the formation of soils led to the formulation of the theory of biorhexistasy. 
In wet climates, conditions are favorable to the deterioration of rocks (mostly chemically), the development of vegetation and the formation of soils; this period favorable to life is called biostasy. In dry climates, exposed rocks are mostly subjected to mechanical disintegration, which produces coarse detrital materials: this is referred to as rhexistasy. Perturbations of the balance of a soil When the state of balance, characterized by the ecosystem climax, is reached, it tends to remain stable over time. The vegetation established on the ground provides the humus and ensures the upward circulation of matter. It protects the ground from erosion by acting as a barrier (for example, providing protection from water and wind). Plants can also reduce erosion by binding soil particles to their roots. A disturbance of climax will cause retrogression, but often, secondary succession will start to guide the evolution of the system after that disturbance. Secondary succession is much faster than primary succession because the soil is already formed, although deteriorated and needing restoration as well. However, when a significant destruction of the vegetation takes place (of natural origin such as an avalanche, or of human origin), the disturbance undergone by the ecosystem is too great. In this latter case, erosion is responsible for the destruction of the upper horizons of the soil, and is at the origin of a phenomenon of reversion to pioneer conditions. The phenomenon is called retrogression and can be partial or total (in this case, nothing remains beside bare rock). For example, the clearing of sloping ground subjected to violent rains can lead to the complete destruction of the soil. Humans can deeply modify the evolution of soils by direct and drastic action, such as land clearing, excessive logging, forest grazing, and litter raking. The climax vegetation is gradually replaced and the soil modified (for example, replacement of broadleaf forests by moorland or pine plantations). Retrogression is often related to very old human practices. Influence of human activity Soil erosion is the main factor in soil degradation and is due to several mechanisms: water erosion, wind erosion, chemical degradation and physical degradation. Erosion can be influenced by human activity. For example, roads which increase impermeable surfaces lead to surface runoff and soil loss. Improper agricultural practices can also accelerate soil erosion, including by way of: Overgrazing of animals Monoculture planting Row cropping Tilling or plowing Crop removal Land-use conversion Consequences of soil regression and degradation Here are a few of the consequences of soil regression and degradation: Yields impact: Recent increases in the human population have placed a great strain on the world's soil systems. More than 6 billion people are now using about 38% of the land area of the Earth to raise crops and livestock. Many soils suffer from various types of degradation that can ultimately reduce their ability to produce food resources. This reduces food security, which many countries facing soil degradation already lack. Slight degradation refers to land where yield potential has been reduced by 10%, moderate degradation refers to a yield decrease of 10–50%. Severely degraded soils have lost more than 50% of their potential. Most severely degraded soils are located in developing countries. In Africa, yield reduction is 2–40%, with an average loss of 8.2% for the continent. 
Natural disasters: natural disasters such as mudflows and floods are responsible for the death of many living beings each year. This causes a cycle, as floods can degrade soil, and soil degradation can cause floods. Deterioration of the water quality: the increase in water turbidity and the input of nitrogen and phosphorus can result in eutrophication. Soil particles in surface waters are also accompanied by agricultural inputs and by some pollutants of industrial, urban and road origin (such as heavy metals). Run-off carrying pesticides and fertilizers makes water quality dangerous. The ecological impact of agricultural inputs (such as weed killer) is known but difficult to evaluate because of the multiplicity of the products and their broad spectrum of action. Biological diversity: soil degradation may involve perturbation of microbial communities, disappearance of the climax vegetation and decrease in animal habitat, thus leading to a biodiversity loss and animal extinction. Economic loss: the estimated costs for land degradation are US$44 billion per year. Globally, the annual loss of 76 billion tons of soil costs the world about US$400 billion per year. In Canada, on-farm effects of land degradation were estimated to range from US$700 million to US$915 million in 1984. The economic impact of land degradation is extremely severe in densely populated South Asia and sub-Saharan Africa. Soil enhancement, rebuilding, and regeneration Problems of soil erosion can be fought, and certain practices can lead to soil enhancement and rebuilding. Even though simple, methods for reducing erosion are often not chosen because their costs outweigh the short-term benefits. Rebuilding is especially possible through the improvement of soil structure, addition of organic matter and limitation of runoff. However, these techniques will never totally succeed in restoring a soil (and the fauna and flora associated with it) that took more than 1,000 years to build up. Soil regeneration is the reformation of degraded soil through biological, chemical, and/or physical processes. When productivity declined in the low-clay soils of northern Thailand, farmers initially responded by adding organic matter from termite mounds, but this was unsustainable in the long term. Scientists experimented with adding bentonite, one of the smectite family of clays, to the soil. In field trials, conducted by scientists from the International Water Management Institute (IWMI) in cooperation with Khon Kaen University and local farmers, this had the effect of helping retain water and nutrients. Supplementing the farmer's usual practice with a single application of 200 kg bentonite per rai (6.26 rai = 1 hectare) resulted in an average yield increase of 73%. More work showed that applying bentonite to degraded sandy soils reduced the risk of crop failure during drought years. In 2008, three years after the initial trials, IWMI scientists conducted a survey among 250 farmers in northeast Thailand, half of whom had applied bentonite to their fields and half of whom had not. The average output for those using the clay addition was 18% higher than for non-clay users. Using the clay had enabled some farmers to switch to growing vegetables, which need more fertile soil. This helped to increase their income. The researchers estimated that 200 farmers in northeast Thailand and 400 in Cambodia had adopted the use of clays, and that a further 20,000 farmers were introduced to the new technique. 
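A small worked example of the bentonite figures reported above, using only the numbers given in the text (200 kg per rai, 6.26 rai per hectare, a 73% increase in the trials and an 18% increase in the 2008 survey); the baseline yield is a placeholder value, since the text reports only relative changes.

# Worked arithmetic for the bentonite application and yield figures above.
kg_per_rai = 200
rai_per_hectare = 6.26
kg_per_hectare = kg_per_rai * rai_per_hectare    # ~1,252 kg of bentonite per hectare

# Only relative yield changes are reported (73% in trials, 18% in the 2008 survey),
# so the baseline below is a placeholder value of 1.0.
baseline_yield = 1.0
trial_yield = baseline_yield * 1.73
survey_yield = baseline_yield * 1.18

print(f"Bentonite application rate: {kg_per_hectare:.0f} kg/ha")
print(f"Trial yield relative to baseline: {trial_yield:.2f}x")
print(f"Survey yield relative to baseline: {survey_yield:.2f}x")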
See also References Environmental issues with soil de:Bodendegradation
Soil retrogression and degradation
[ "Environmental_science" ]
2,049
[ "Environmental soil science", "Environmental issues with soil" ]
57,947
https://en.wikipedia.org/wiki/Theobroma%20cacao
Theobroma cacao (cacao tree or cocoa tree) is a small ( tall) evergreen tree in the family Malvaceae. Its seeds - cocoa beans - are used to make chocolate liquor, cocoa solids, cocoa butter and chocolate. Although the tree is native to the tropics of the Americas, the largest producer of cocoa beans in 2022 was Ivory Coast. The plant's leaves are alternate, entire, unlobed, long and broad. Description Flowers The flowers are produced in clusters directly on the trunk and older branches; this is known as cauliflory. The flowers are small, diameter, with pink calyx. The floral formula, used to represent the structure of a flower using numbers, is ✶ K5 C5 A(5°+5²) G(5). While many of the world's flowers are pollinated by bees (Hymenoptera) or butterflies/moths (Lepidoptera), cacao flowers are pollinated by tiny flies, Forcipomyia biting midges. Using the natural pollinator, Forcipomyia midges, produced more fruit than using artificial pollinators. Fruit The fruit, called a cacao pod, is ovoid, long and wide, ripening yellow to orange, and weighs about when ripe. The pod contains 20 to 60 seeds, usually called "beans", embedded in a white pulp. The seeds are the main ingredient of chocolate, while the pulp is used in some countries to prepare refreshing juice, smoothies, jelly, and cream. Usually discarded until practices changed in the 21st century, the fermented pulp may be distilled into an alcoholic beverage. Each seed contains a significant amount of fat (40–50%) as cocoa butter. The fruit's active constituent is the stimulant theobromine, a compound similar to caffeine. Nomenclature The generic name Theobroma is derived from the Greek for "food of the gods"; from θεός (theos), meaning 'god' or 'divine', and βρῶμα (broma), meaning 'food'. The specific name cacao is the Hispanization of the name given to the plant in indigenous Mesoamerican languages such as in Tzeltal, Kʼicheʼ and Classic Maya; in Sayula Popoluca; and in Nahuatl meaning "bean of the cocoa-tree". Taxonomy Cacao (Theobroma cacao) is one of 26 species belonging to the genus Theobroma classified under the subfamily Byttnerioideae of the mallow family Malvaceae. In 2008, researchers proposed a new classification based upon morphological, geographic, and genomic criteria: 10 groups have been named according to their geographic origin or the traditional cultivar name. These groups are: Amelonado, Criollo, Nacional, Contamana, Curaray, Cacao guiana, Iquitos, Marañon, Nanay, and Purús. Distribution and domestication T. cacao is widely distributed from southeastern Mexico to the Amazon basin. There were originally two hypotheses about its domestication; one said that there were two foci for domestication, one in the Lacandon Jungle area of Mexico and another in lowland South America. More recent studies of patterns of DNA diversity, however, suggest that this is not the case. One study sampled 1241 trees and classified them into 10 distinct genetic clusters. This study also identified areas, for example around Iquitos in modern Peru and Ecuador, where representatives of several genetic clusters originated more than 5000 years ago, leading to development of the Nacional cocoa bean variety. This result suggests that this is where T. cacao was originally domesticated, probably for the pulp that surrounds the beans, which is eaten as a snack and fermented into a mildly alcoholic beverage. 
Using the DNA sequences and comparing them with data derived from climate models and the known conditions suitable for cacao, one study refined the view of domestication, linking the area of greatest cacao genetic diversity to a bean-shaped area that encompasses Ecuador, the border between Brazil and Peru and the southern part of the Colombian–Brazilian border. Climate models indicate that at the peak of the last ice age 21,000 years ago, when habitat suitable for cacao was at its most reduced, this area was still suitable, and so provided a refugium for the species. Cacao trees grow well as understory plants in humid forest ecosystems. This is equally true of abandoned cultivated trees, making it difficult to distinguish truly wild trees from those whose parents may originally have been cultivated. Cultivation In 2016, cocoa beans were cultivated on roughly worldwide. Cocoa beans are grown by large agroindustrial plantations and small producers, the bulk of production coming from millions of farmers with small plots. A tree begins to bear when it is four or five years old. A mature tree may have 6,000 flowers in a year, yet only about 20 pods. About 1,200 seeds (40 pods) are required to produce of cocoa paste. Historically, chocolate makers have recognized three main cultivar groups of cacao beans used to make cocoa and chocolate: Forastero, Criollo and Trinitario. The most prized, rare, and expensive is the Criollo group, the cocoa bean used by the Maya. Only 10% of chocolate is made from Criollo, which is arguably less bitter and more aromatic than any other bean. In November 2000, the cacao beans coming from Chuao were awarded an appellation of origin under the title (from Spanish: 'cacao of Chuao'). About 80% of chocolate is made using beans of the Forastero group, the main and most ubiquitous variety being the Amelonado variety, while Arriba varieties (such as the Nacional variety) are less commonly found in Forastero produce. Forastero trees are significantly hardier and more disease-resistant than Criollo trees, resulting in cheaper cacao beans. Major cocoa bean processors include Hershey's, Nestlé and Mars. Chocolate can be made from T. cacao through a process of steps that involve harvesting, fermenting of the T. cacao pulp, drying, and then extraction. Roasting T. cacao by using superheated steam was found to be better than conventional oven-roasting because it resulted in the same quality of cocoa beans in a shorter time. Production In 2022, world production of cocoa beans was 5.9 million tonnes, led by Ivory Coast with 38% of the total. Other major producers were Ghana (19%) and Indonesia (11%). Conservation The pests and diseases to which cacao is subject, along with climate change, mean that new varieties will be needed to respond to these challenges. Breeders rely on the genetic diversity conserved in field genebanks to create new varieties, because cacao has recalcitrant seeds that cannot be stored in a conventional genebank. In an effort to improve the diversity available to breeders, and ensure the future of the field genebanks, experts have drawn up A Global Strategy for the Conservation and Use of Cacao Genetic Resources, as the Foundation for a Sustainable Cocoa Economy. 
The strategy has been adopted by the cacao producers and their clients, and seeks to improve the characterization of cacao diversity, the sustainability and diversity of the cacao collections, the usefulness of the collections, and to ease access to better information about the conserved material. Some natural areas of cacao diversity are protected by various forms of conservation, for example national parks. However, a recent study of genetic diversity and predicted climates suggests that many of those protected areas will no longer be suitable for cacao by 2050. It also identifies an area around Iquitos in Peru that will remain suitable for cacao and that is home to considerable genetic diversity, and recommends that this area be considered for protection. Other projects, such as the International Cocoa Quarantine Centre, aim to combat cacao diseases and preserve genetic diversity. Phytopathogens (parasitic organisms) cause much damage to Theobroma cacao plantations around the world. Many of those phytopathogens, which include many of the pests named below, were analyzed using mass spectrometry, and such analysis can guide the correct approaches for getting rid of the specific phytopathogens. This method was found to be quick, reproducible, and accurate, showing promise for preventing damage to Theobroma cacao by various phytopathogens. The bacterium Streptomyces camerooniansis was found to be beneficial for T. cacao, helping plant growth by accelerating seed germination of T. cacao, inhibiting growth of various types of microorganisms (such as different oomycetes, fungi, and bacteria), and preventing rotting by Phytophthora megakarya. Pests Various plant pests and diseases can cause serious problems for cacao production. Insects Cocoa mirids or capsids worldwide (but especially Sahlbergella singularis and Distantiella theobroma in West Africa and Helopeltis spp. in Southeast Asia) Bathycoelia thalassina - West Africa Conopomorpha cramerella (cocoa pod borer – in Southeast Asia) Carmenta theobromae - Central and South America Fungi Moniliophthora roreri (frosty pod rot) Moniliophthora perniciosa (witches' broom) Ceratocystis cacaofunesta (mal de machete, or Ceratocystis wilt) Verticillium dahliae Oncobasidium theobromae (vascular streak dieback) Oomycetes Phytophthora spp. (black pod) especially Phytophthora megakarya in West Africa Viruses Cacao swollen shoot virus Mistletoe Rats and other vertebrate pests (squirrels, woodpeckers, etc.) Genome The genome of T. cacao is diploid, its size is 430 Mbp, and it comprises 10 chromosome pairs (2n=2x=20). In September 2010, a team of scientists announced a draft sequence of the cacao genome (Matina1-6 genotype). In a second, unrelated project, the International Cocoa Genome Sequencing Consortium (ICGS), coordinated by CIRAD, first published the sequence of the cacao genome, of the Criollo cacao (a landrace from Belize, B97-61/B2), in December 2010 (online, with paper publication in January 2011). In their publication, they reported a detailed analysis of the genomic and genetic data. The sequence of the cacao genome identified 28,798 protein-coding genes, compared to the roughly 23,000 protein-coding genes of the human genome. About 20% of the cacao genome consists of transposable elements, a low proportion compared to other plant species. 
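The genome statistics quoted above lend themselves to a few derived figures; the short sketch below computes them only as rough illustrations (the "genome span per gene" is simply the total size divided by the gene count, not a measured gene length), using the numbers given in the text.

# Rough ratios derived from the cacao genome statistics quoted above.
genome_size_bp = 430_000_000        # 430 Mbp
protein_coding_genes = 28_798
transposable_fraction = 0.20        # "about 20%" of the genome
human_protein_coding_genes = 23_000 # rough human figure given in the text

transposable_bp = genome_size_bp * transposable_fraction   # ~86 Mbp of transposable elements
bp_per_gene = genome_size_bp / protein_coding_genes        # ~14.9 kb of genome per gene
gene_ratio = protein_coding_genes / human_protein_coding_genes

print(f"Transposable-element content: ~{transposable_bp / 1e6:.0f} Mbp")
print(f"Average genome span per protein-coding gene: ~{bp_per_gene / 1e3:.1f} kb")
print(f"Cacao/human protein-coding gene ratio: ~{gene_ratio:.2f}")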
Many genes were identified as coding for flavonoids, aromatic terpenes, theobromine and many other metabolites involved in cocoa flavor and quality traits, among which a relatively high proportion code for polyphenols, which constitute up to 8% of cacao pod dry weight. The cacao genome appears close to the hypothetical hexaploid ancestor of all dicotyledonous plants, and it has been proposed that, as an evolutionary mechanism, the 21 chromosomes of the dicots' hypothetical hexaploid ancestor underwent major fusions leading to cacao's 10 chromosome pairs. The genome sequence enables cacao molecular biology and breeding for elite varieties through marker-assisted selection, in particular for genetic resistance to fungal, oomycete and viral diseases responsible for huge yield losses each year. In 2017–18, due to concerns about survivability of cacao plants in an era of global warming in which climates become more extreme in the narrow band of latitudes where cacao is grown (20 degrees north and south of the equator), the commercial company Mars, Incorporated and the University of California, Berkeley began using CRISPR to adjust DNA for improved hardiness of cacao in hot climates. History of cultivation Domestication The cacao tree, native to the Amazon rainforest, was first domesticated at least 5,300 years ago, in equatorial South America at the Santa Ana-La Florida (SALF) site in what is present-day southeast Ecuador (Zamora-Chinchipe Province) by the Mayo-Chinchipe culture, before being introduced to Mesoamerica. In Mesoamerica, ceramic vessels with residues from the preparation of cacao beverages have been found from the Early Formative (1900–900 BC) period. For example, one such vessel found at an Olmec archaeological site on the Gulf Coast of Veracruz, Mexico dates cacao's preparation by pre-Olmec peoples as early as 1750 BC. On the Pacific coast of Chiapas, Mexico, a Mokaya archaeological site provides evidence of even earlier cacao beverages, to 1900 BC. The initial domestication was probably related to the making of a fermented alcoholic beverage. In 2018, researchers who analysed the genome of cultivated cacao trees concluded that the domesticated cacao trees all originated from a single domestication event that occurred about 3,600 years ago somewhere in Central America. Ancient uses Several mixtures of cacao are described in ancient texts, for ceremonial or medicinal, as well as culinary, purposes. Some mixtures included maize, chili, vanilla (Vanilla planifolia), and honey. Archaeological evidence for use of cacao, while relatively sparse, has come from the recovery of whole cacao beans at Uaxactun, Guatemala and from the preservation of wood fragments of the cacao tree at Belize sites including Cuello and Pulltrouser Swamp. In addition, analysis of residues from ceramic vessels has found traces of theobromine and caffeine in early formative vessels from Puerto Escondido, Honduras (1100–900 BC) and in middle formative vessels from Colha, Belize (600–400 BC) using similar techniques to those used to extract chocolate residues from four classic period (around 400 AD) vessels from a tomb at the Maya archaeological site of Rio Azul. As cacao is the only known commodity from Mesoamerica containing both of these alkaloid compounds, it seems likely these vessels were used as containers for cacao drinks. In addition, cacao is named in a hieroglyphic text on one of the Rio Azul vessels. Cacao is also believed to have been ground by the Aztecs and mixed with tobacco for smoking purposes. 
Cocoa was being domesticated by the Mayo Chinchipe of the upper Amazon around 3,000 BC. The Maya believed that cacao was discovered by the gods in a mountain that also contained other delectable foods to be used by them. According to Maya mythology, the Plumed Serpent gave cacao to the Maya after humans were created from maize by divine grandmother goddess Xmucane. The Maya celebrated an annual festival in April to honor their cacao god, Ek Chuah, an event that included the sacrifice of a dog with cacao-colored markings, additional animal sacrifices, offerings of cacao, feathers and incense, and an exchange of gifts. In a similar creation story, the Mexica (Aztec) god Quetzalcoatl discovered cacao (glossed 'bitter water'), in a mountain filled with other plant foods. Cacao was offered regularly to a pantheon of Mexica deities and the Madrid Codex depicts priests lancing their ear lobes (autosacrifice) and covering the cacao with blood as a suitable sacrifice to the gods. The cacao beverage was used ritually only by men, as it was believed to be an intoxicating food unsuitable for women and children. Cacao beans constituted both a ritual beverage and a major currency system in pre-Columbian Mesoamerican civilizations. At one point, the Aztec empire received a yearly tribute of 980 loads of cacao, in addition to other goods. Each load represented exactly 8,000 beans. The buying power of quality beans was such that 80–100 beans could buy a new cloth mantle. The use of cacao beans as currency is also known to have spawned counterfeiters during the Aztec empire. Modern history The first European knowledge about chocolate came in the form of a beverage which was first introduced to the Spanish at their meeting with Moctezuma in the Aztec capital of Tenochtitlan in 1519. Cortés and others noted the vast quantities of this beverage the Aztec emperor consumed, and how it was carefully whipped by his attendants beforehand. Examples of cacao beans, along with other agricultural products, were brought back to Spain at that time, but it seems the beverage made from cacao was introduced to the Spanish court in 1544 by Kekchi Maya nobles brought from the New World to Spain by Dominican friars to meet Prince Philip. Within a century, chocolate had spread to France, England and elsewhere in Western Europe. Demand for this beverage led the French to establish cacao plantations in the Caribbean, while Spain subsequently developed its cacao plantations in its Venezuelan and Philippine colonies (Bloom 1998, Coe 1996). A painting by Dutch Golden Age artist Albert Eckhout shows a wild cacao tree in mid-seventeenth century Dutch Brazil. The Nahuatl-derived Spanish word cacao entered scientific nomenclature in 1753 after the Swedish naturalist Linnaeus published his taxonomic binomial system and coined the genus and species Theobroma cacao. Traditional pre-Hispanic beverages made with cacao are still consumed in Mesoamerica. These include the Oaxacan beverage known as tejate. Gallery See also Ceratonia siliqua, the carob tree Kola nut References Sources Further reading External links International Cocoa Organization (ICCO) – includes cacao daily market prices and charts cacao Cacao Agriculture in Mesoamerica Agriculture in Ecuador Chocolate Cocoa production Components of chocolate Crops Crops originating from Ecuador Crops originating from Peru Plants described in 1753 Taxa named by Carl Linnaeus Crops originating from indigenous Americans
Theobroma cacao
[ "Technology" ]
3,734
[ "Components of chocolate", "Components" ]
57,977
https://en.wikipedia.org/wiki/Mad%20scientist
The mad scientist (also mad doctor or mad professor) is a stock character of a scientist who is perceived as "mad, bad and dangerous to know" or "insane" owing to a combination of unusual or unsettling personality traits and the unabashedly ambitious, taboo or hubristic nature of their experiments. As a motif in fiction, the mad scientist may be villainous (evil genius) or antagonistic, benign, or neutral; may be insane, eccentric, or clumsy; and often works with fictional technology or fails to recognise or value common human objections to attempting to play God. Some may have benevolent intentions, even if their actions are dangerous or questionable, which can make them accidental antagonists. History Prototypes The prototypical fictional mad scientist was Victor Frankenstein, creator of his eponymous monster, who made his first appearance in 1818, in the novel Frankenstein, or the Modern Prometheus by Mary Shelley. Though the novel's title character, Victor Frankenstein, is a sympathetic character, the critical element of conducting experiments that cross "boundaries that ought not to be crossed", heedless of the consequences, is present in Shelley's novel. Frankenstein was trained as both an alchemist and a modern scientist, which makes him the bridge between two eras of an evolving archetype. The book is said to be a precursor of a new genre, science fiction, although as an example of gothic horror it is connected with other antecedents as well. The year 1896 saw the publication of H. G. Wells's The Island of Doctor Moreau, in which the titular doctor—a controversial vivisectionist—has isolated himself entirely from civilisation in order to continue his experiments in surgically reshaping animals into humanoid forms, heedless of the suffering he causes. In 1925, the novelist Alexander Belyaev introduced mad scientists to the Russian people through the novel Professor Dowell's Head, in which the antagonist performs experimental head transplants on bodies stolen from the morgue, and reanimates the corpses. Cinema depictions Fritz Lang's movie Metropolis (1927) brought the archetypical mad scientist to the screen in the form of Rotwang, the evil genius whose machines had originally given life to the dystopian city of the title. Rotwang's laboratory influenced many subsequent movie sets with its electrical arcs, bubbling apparatus, and bizarrely complicated arrays of dials and controls. Portrayed by actor Rudolf Klein-Rogge, Rotwang himself is the prototypically conflicted mad scientist; though he is master of almost mystical scientific power, he remains a slave to his own desires for power and revenge. Rotwang's appearance was also influential—the character's shock of flyaway hair, wild-eyed demeanor, and his quasi-fascist laboratory garb have all been adopted as shorthand for the mad scientist "look." Even his mechanical right hand has become a mark of twisted scientific power, echoed notably in Stanley Kubrick's film Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb and in the novel The Three Stigmata of Palmer Eldritch (1965) by Philip K. Dick. A recent survey of 1,000 horror films distributed in the UK between the 1930s and 1980s reveals mad scientists or their creations have been the villains of 30 percent of the films; scientific research has produced 39 percent of the threats; and, by contrast, scientists have been the heroes of a mere 11 percent. Boris Karloff played mad scientists in several of his 1930s and 1940s films. 
Movie serials The mad scientist was a staple of the Republic/Universal/Columbia movie serials of the 1930s and 40s. Examples include: "Dr. Zorka" (The Phantom Creeps, 1939) "Dr. Fu Manchu" (Drums of Fu Manchu, Republic, 1940) "Dr. Satan" (Mysterious Doctor Satan, 1940) "Dr. Vulcan" (King of the Rocket Men, 1949) "Atom Man/Lex Luthor" (Atom Man vs. Superman, 1950) Post–World War II depictions Mad scientists were most conspicuous in popular culture after World War II. The sadistic human experimentation conducted under the auspices of the Nazis, especially those of Josef Mengele, and the invention of the atomic bomb, gave rise in this period to genuine fears that science and technology had gone out of control. That the scientific and technological build-up during the Cold War brought about increasing threats of unparalleled destruction of the human species did not lessen the impression. Mad scientists frequently figure in science fiction and motion pictures from the period. Animation Mad scientists in animation include Professor Frink, Professor Farnsworth, Rick Sanchez, Rintaro Okabe, and Dr. Heinz Doofenshmirtz. Walt Disney Pictures had Mickey Mouse trying to save his dog Pluto from The Mad Doctor (1933). Depictions of mad scientists in Warner Brothers' Merrie Melodies/Looney Tunes cartoons include: Hair-Raising Hare (1946, based on Peter Lorre) Birth of a Notion (1947, again based on Lorre) Water, Water Every Hare (1952, based on Boris Karloff) While both Tom and Jerry dabbled in mad science in some of the Hanna-Barbera cartoons, an actual mad scientist did not appear until Switchin' Kitten (1961). See also Absent-minded professor Boffin British scientists (meme) Creativity techniques Creativity and mental illness Edisonade, a similar trope about a brilliant inventor, but with a positive slant Egghead Faust Fringe science Girl Genius List of mad scientists Mad scientists of Stanislaw Lem References Further reading Allen, Glen Scott (2009). Master Mechanics and Wicked Wizards: Images of the American Scientist from Colonial Times to the Present. Amherst: University of Massachusetts Press. Garboden, Nick (2007). Mad Scientist or Angry Lab Tech: How to Spot Insanity. Portland: Doctored Papers. Haynes, Roslynn Doris (1994). From Faust to Strangelove: Representations of the Scientist in Western Literature. Baltimore: Johns Hopkins University Press. Junge, Torsten; Doerthe Ohlhoff (2004). Wahnsinnig genial: Der Mad Scientist Reader. Aschaffenburg: Alibri. Norton, Trevor (2010). Smoking Ears and Screaming Teeth. (A witty celebration of the great eccentrics...). Century. Schlesinger, Judith (2012). The Insanity Hoax: Exposing the Myth of the Mad Genius. Ardsley-on-Hudson, N.Y.: Shrinktunes Media. Schneider, Reto U. (2008). The Mad Science Book. 100 Amazing Experiments from the History of Science. London: Quercus. Tudor, Andrew (1989). Monsters and Mad Scientists: A Cultural History of the Horror Movie. Oxford: Blackwell. Weart, Spencer R. (1988). Nuclear Fear: A History of Images. Cambridge, Massachusetts: Harvard University Press. External links Gary Hoppenstand, "Dinosaur Doctors and Jurassic Geniuses: The Changing Image of the Scientist in the Lost World Adventure" Cultural depictions of scientists Ethics of science and technology Experimental medical treatments in fiction Stock characters Villains
Mad scientist
[ "Technology" ]
1,503
[ "Ethics of science and technology" ]
57,980
https://en.wikipedia.org/wiki/Shortwave%20radio
Shortwave radio is radio transmission using radio frequencies in the shortwave bands (SW). There is no official definition of the band range, but it always includes all of the high frequency band (HF), which extends from 3 to 30 MHz (approximately 100 to 10 metres in wavelength). It lies between the medium frequency band (MF) and the bottom of the VHF band. Radio waves in the shortwave band can be reflected or refracted from a layer of electrically charged atoms in the atmosphere called the Ionosphere. Therefore, short waves directed at an angle into the sky can be reflected back to Earth at great distances, beyond the horizon. This is called skywave or "skip" propagation. Thus shortwave radio can be used for communication over very long distances, in contrast to radio waves of higher frequency, which travel in straight lines (line-of-sight propagation) and are generally limited by the visual horizon, about 64 km (40 miles). Shortwave broadcasts of radio programs played an important role in international broadcasting for many decades, serving both to provide news and information and as a propaganda tool for an international audience. The heyday of international shortwave broadcasting was during the Cold War between 1960 and 1990. With the wide implementation of other technologies for the long-distance distribution of radio programs, such as satellite radio, cable broadcasting and IP-based transmissions, shortwave broadcasting lost importance. Initiatives for the digitization of broadcasting did not bear fruit either, and , relatively few broadcasters continue to broadcast programs on shortwave. However, shortwave remains important in war zones, such as in the Russo-Ukrainian war, and shortwave broadcasts can be transmitted over thousands of miles from a single transmitter, making it difficult for government authorities to censor them. Shortwave radio is also often used by aircraft. History Development The name "shortwave" originated during the beginning of radio in the early 20th century, when the radio spectrum was divided into long wave (LW), medium wave (MW), and short wave (SW) bands based on the length of the wave. Shortwave radio received its name because the wavelengths in this band are shorter than 200 m (1,500 kHz) which marked the original upper limit of the medium frequency band first used for radio communications. The broadcast medium wave band now extends above the 200 m / 1,500 kHz limit. Early long-distance radio telegraphy used long waves, below 300 kilohertz (kHz) / above 1000 m. The drawbacks to this system included a very limited spectrum available for long-distance communication, and the very expensive transmitters, receivers and gigantic antennas. Long waves are also difficult to beam directionally, resulting in a major loss of power over long distances. Prior to the 1920s, the shortwave frequencies above 1.5 MHz were regarded as useless for long-distance communication and were designated in many countries for amateur use. Guglielmo Marconi, pioneer of radio, commissioned his assistant Charles Samuel Franklin to carry out a large-scale study into the transmission characteristics of short-wavelength waves and to determine their suitability for long-distance transmissions. Franklin rigged up a large antenna at Poldhu Wireless Station, Cornwall, running on 25 kW of power. In June and July 1923, wireless transmissions were completed during nights on 97 meters (about 3 MHz) from Poldhu to Marconi's yacht Elettra in the Cape Verde Islands. 
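The figures quoted above follow from the relation λ = c/f between free-space wavelength and frequency. The short Python sketch below is a minimal illustration of that relation only (not drawn from any cited source); it reproduces the approximate values given for the HF band edges and for Marconi's 97-metre tests.

```python
# Illustrative sketch: convert between frequency and wavelength for the HF band.
# Values are approximate and simply restate the figures quoted in the text.

C = 299_792_458  # speed of light in metres per second


def wavelength_m(freq_hz: float) -> float:
    """Free-space wavelength in metres for a frequency in hertz."""
    return C / freq_hz


def frequency_mhz(wavelength: float) -> float:
    """Frequency in MHz for a free-space wavelength in metres."""
    return C / wavelength / 1e6


if __name__ == "__main__":
    # HF band edges: 3 MHz and 30 MHz correspond to roughly 100 m and 10 m.
    print(round(wavelength_m(3e6)), "m at 3 MHz")     # ~100 m
    print(round(wavelength_m(30e6)), "m at 30 MHz")   # ~10 m
    # Marconi's 1923 tests on 97 metres sat at about 3 MHz.
    print(round(frequency_mhz(97), 2), "MHz at 97 m")  # ~3.09 MHz
```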
In September 1924, Marconi arranged for transmissions to be made day and night on 32 meters (about 9.4 MHz) from Poldhu to his yacht in the harbour at Beirut, to which he had sailed, and was "astonished" to find he could receive signals "throughout the day". Franklin went on to refine the directional transmission by inventing the curtain array aerial system. In July 1924, Marconi entered into contracts with the British General Post Office (GPO) to install high-speed shortwave telegraphy circuits from London to Australia, India, South Africa and Canada as the main element of the Imperial Wireless Chain. The UK-to-Canada shortwave "Beam Wireless Service" went into commercial operation on 25 October 1926. Beam Wireless Services from the UK to Australia, South Africa and India went into service in 1927. Shortwave communications began to grow rapidly in the 1920s. By 1928, more than half of long-distance communications had moved from transoceanic cables and longwave wireless services to shortwave, and the overall volume of transoceanic shortwave communications had vastly increased. Shortwave stations had cost and efficiency advantages over massive longwave wireless installations. However, some commercial longwave communications stations remained in use until the 1960s. Long-distance radio circuits also reduced the need for new cables, although the cables maintained their advantages of high security and a much more reliable and better-quality signal than shortwave. The cable companies began to lose large sums of money in 1927. A serious financial crisis threatened viability of cable companies that were vital to strategic British interests. The British government convened the Imperial Wireless and Cable Conference in 1928 "to examine the situation that had arisen as a result of the competition of Beam Wireless with the Cable Services". It recommended and received government approval for all overseas cable and wireless resources of the Empire to be merged into one system controlled by a newly formed company in 1929, Imperial and International Communications Ltd. The name of the company was changed to Cable and Wireless Ltd. in 1934. A resurgence of long-distance cables began in 1956 with the laying of TAT-1 across the Atlantic Ocean, the first voice frequency cable on this route. This provided 36 high-quality telephone channels and was soon followed by even higher-capacity cables all around the world. Competition from these cables soon ended the economic viability of shortwave radio for commercial communication. Amateur use of shortwave propagation Amateur radio operators also discovered that long-distance communication was possible on shortwave bands. Early long-distance services used surface wave propagation at very low frequencies, which are attenuated along the path at wavelengths shorter than 1,000 meters. Longer distances and higher frequencies using this method meant more signal loss. This, and the difficulties of generating and detecting higher frequencies, made discovery of shortwave propagation difficult for commercial services. Radio amateurs may have conducted the first successful transatlantic tests in December 1921, operating in the 200 meter mediumwave band (near 1,500 kHz, inside the modern AM broadcast band), which at that time was the shortest wavelength / highest frequency available to amateur radio. In 1922 hundreds of North American amateurs were heard in Europe on 200 meters and at least 20 North American amateurs heard amateur signals from Europe. 
The first two-way communications between North American and Hawaiian amateurs began in 1922 at 200 meters. Although operation on wavelengths shorter than 200 meters was technically illegal (but tolerated at the time as the authorities mistakenly believed that such frequencies were useless for commercial or military use), amateurs began to experiment with those wavelengths using newly available vacuum tubes shortly after World War I. Extreme interference at the longer edge of the 150–200 meter band – the official wavelengths allocated to amateurs by the Second National Radio Conference in 1923 – forced amateurs to shift to shorter and shorter wavelengths; however, amateurs were limited by regulation to wavelengths longer than 150 meters (2 MHz). A few fortunate amateurs who obtained special permission for experimental communications at wavelengths shorter than 150 meters completed hundreds of long-distance two-way contacts on 100 meters (3 MHz) in 1923 including the first transatlantic two-way contacts. By 1924 many additional specially licensed amateurs were routinely making transoceanic contacts at distances of 6,000 miles (9,600 km) and more. On 21 September 1924 several amateurs in California completed two-way contacts with an amateur in New Zealand. On 19 October amateurs in New Zealand and England completed a 90 minute two-way contact nearly halfway around the world. On 10 October the Third National Radio Conference made three shortwave bands available to U.S. amateurs at 80 meters (3.75 MHz), 40 meters (7 MHz) and 20 meters (14 MHz). These were allocated worldwide, while the 10 meter band (28 MHz) was created by the Washington International Radiotelegraph Conference on 25 November 1927. The 15 meter band (21 MHz) was opened to amateurs in the United States on 1 May 1952. Propagation characteristics Shortwave radio frequency energy is capable of reaching any location on the Earth as it is influenced by ionospheric reflection back to Earth by the ionosphere, (a phenomenon known as "skywave propagation"). A typical phenomenon of shortwave propagation is the occurrence of a skip zone where reception fails. With a fixed working frequency, large changes in ionospheric conditions may create skip zones at night. As a result of the multi-layer structure of the ionosphere, propagation often simultaneously occurs on different paths, scattered by the ‘E’ or ‘F’ layer and with different numbers of hops, a phenomenon that may be disturbed for certain techniques. Particularly for lower frequencies of the shortwave band, absorption of radio frequency energy in the lowest ionospheric layer, the ‘D’ layer, may impose a serious limit. This is due to collisions of electrons with neutral molecules, absorbing some of a radio frequency's energy and converting it to heat. Predictions of skywave propagation depend on: The distance from the transmitter to the target receiver. Time of day. During the day, frequencies higher than approximately 12 MHz can travel longer distances than lower ones. At night, this property is reversed. With lower frequencies the dependence on the time of the day is mainly due to the lowest ionospheric layer, the ‘D’ Layer, forming only during the day when photons from the sun break up atoms into ions and free electrons. Season. During the winter months of the Northern or Southern hemispheres, the AM/MW broadcast band tends to be more favorable because of longer hours of darkness. 
Solar flares produce a large increase in D region ionization – so great, sometimes for periods of several minutes, that skywave propagation is nonexistent. Types of modulation Several different types of modulation are used to incorporate information in a short-wave signal. Audio modes AM Amplitude modulation is the simplest type and the most commonly used for shortwave broadcasting. The instantaneous amplitude of the carrier is controlled by the amplitude of the signal (speech or music, for example). At the receiver, a simple detector recovers the desired modulation signal from the carrier. SSB Single-sideband transmission is a form of amplitude modulation but in effect filters the result of modulation. An amplitude-modulated signal has frequency components both above and below the carrier frequency. If one set of these components is eliminated as well as the residual carrier, only the remaining set is transmitted. This reduces power in the transmission, as much of the energy sent by an AM signal is in the carrier, which is not needed to recover the information contained in the signal. It also reduces signal bandwidth, enabling less than one-half the AM signal bandwidth to be used. The drawback is that the receiver is more complicated, since it must re-create the carrier to recover the signal. Small errors in the detection process greatly affect the pitch of the received signal. As a result, single sideband is not used for music or general broadcast. Single sideband is used for long-range voice communications by ships and aircraft, citizen's band, and amateur radio operators. In amateur radio operation, lower sideband (LSB) is customarily used below 10 MHz and upper sideband (USB) above 10 MHz; non-amateur services use USB regardless of frequency. VSB Vestigial sideband transmits the carrier and one complete sideband, but filters out most of the other sideband. It is a compromise between AM and SSB, enabling simple receivers to be used, but requires almost as much transmitter power as AM. Its main advantage is that only half the bandwidth of an AM signal is used. It is used by the Canadian standard time signal station CHU. Vestigial sideband was used for analog television and by ATSC, the digital TV system used in North America. NFM Narrow-band frequency modulation (NBFM or NFM) is typically used above 20 MHz. Because of the larger bandwidth required, NBFM is commonly used for VHF communication. Regulations limit the bandwidth of a signal transmitted in the HF bands, and the advantages of frequency modulation are greatest if the FM signal has a wide bandwidth. NBFM is limited to short-range transmissions due to the multiphasic distortions created by the ionosphere. DRM Digital Radio Mondiale (DRM) is a digital modulation for use on bands below 30 MHz. It is a digital signal, like the data modes below, but is for transmitting audio, like the analog modes above. Data modes CW Continuous wave (CW) is on-and-off keying of a sine-wave carrier, used for Morse code communications and Hellschreiber facsimile-based teleprinter transmissions. It is a data mode, although often listed separately. It is typically received via lower or upper SSB modes. RTTY, FAX, SSTV Radioteletype, fax, digital, slow-scan television, and other systems use forms of frequency-shift keying or audio subcarriers on a shortwave carrier. These generally require special equipment to decode, such as software on a computer equipped with a sound card.
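The power saving that single sideband offers over full-carrier AM can be made concrete with a minimal single-tone sketch. The Python example below uses assumed, scaled-down frequencies and 100% modulation; it builds a tone-modulated AM signal and measures how the transmitted power divides between the carrier and the two sidebands, the point being that SSB transmits only one of those sidebands.

```python
# Minimal single-tone illustration of the AM versus SSB power budget.
# All frequencies and the modulation index are assumed example values,
# scaled well below real shortwave frequencies to keep the arrays small.
import numpy as np

fs = 200_000                       # sample rate, Hz
t = np.arange(0, 0.05, 1 / fs)     # 50 ms window (whole number of cycles)

fc = 10_000                        # example "carrier" frequency, Hz
fm = 1_000                         # modulating audio tone, Hz
m = 1.0                            # modulation index (100% modulation)

audio = np.cos(2 * np.pi * fm * t)
am = (1 + m * audio) * np.cos(2 * np.pi * fc * t)   # full-carrier AM signal

# One-sided power spectrum: each spectral line's value equals its mean power.
spectrum = np.fft.rfft(am) / len(am)
freqs = np.fft.rfftfreq(len(am), 1 / fs)
power = 2 * np.abs(spectrum) ** 2

carrier = power[np.isclose(freqs, fc)].sum()
sidebands = power[np.isclose(freqs, fc - fm) | np.isclose(freqs, fc + fm)].sum()

print(f"carrier fraction of total power:   {carrier / (carrier + sidebands):.2f}")
print(f"one-sideband fraction (as in SSB): {sidebands / 2 / (carrier + sidebands):.2f}")
```

In this idealised case the carrier holds two-thirds of the total power and each sideband one-sixth, which is why suppressing the carrier and one sideband lets SSB convey the same intelligence with a fraction of the transmitter power and about half the bandwidth.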
Note that on modern computer-driven systems, digital modes are typically sent by coupling a computer's sound output to the SSB input of a radio. Users Some established users of the shortwave radio bands may include: International broadcasting primarily by government-sponsored propaganda, or international news (for example, the BBC World Service), religious or cultural stations to foreign audiences: The most common use of all. Domestic broadcasting: to widely dispersed populations with few longwave, mediumwave and FM stations serving them; or for speciality political, religious and alternative media networks; or of individual commercial and non-commercial paid broadcasts. Oceanic air traffic control uses the HF/shortwave band for long-distance communication to aircraft over the oceans and poles, which are far beyond the range of traditional VHF frequencies. Modern systems also include satellite communications, such as ADS-C/CPDLC. Two-way radio communications by marine and maritime HF stations, aeronautical users, and ground based stations. For example, two way shortwave communication is still used in remote regions by the Royal Flying Doctor Service of Australia. "Utility" stations transmitting messages not intended for the general public, such as merchant shipping, marine weather, and ship-to-shore stations; for aviation weather and air-to-ground communications; for military communications; for long-distance governmental purposes, and for other non-broadcast communications. Amateur radio operators at the 80/75, 60, 40, 30, 20, 17, 15, 12, and 10–meter bands. Licenses are granted by authorized government agencies. Time signal and radio clock stations: In North America, WWV radio and WWVH radio transmit at these frequencies: 2.5 MHz, 5 MHz, 10 MHz, and 15 MHz; and WWV also transmits on 20 MHz. The CHU radio station in Canada transmits on the following frequencies: 3.33 MHz, 7.85 MHz, and 14.67 MHz. Other similar radio clock stations transmit on various shortwave and longwave frequencies around the world. The shortwave transmissions are primarily intended for human reception, while the longwave stations are generally used for automatic synchronization of watches and clocks. Sporadic or non-traditional users of the shortwave bands may include: Clandestine stations. These are stations that broadcast on behalf of various political movements such as rebel or insurrectionist forces. They may advocate civil war, insurrection, rebellion against the government-in-charge of the country to which they are directed. Clandestine broadcasts may emanate from transmitters located in rebel-controlled territory or from outside the country entirely, using another country's transmission facilities. Numbers stations. These stations regularly appear and disappear all over the shortwave radio band, but are unlicensed and untraceable. It is believed that numbers stations are operated by government agencies and are used to communicate with clandestine operatives working within foreign countries. However, no definitive proof of such use has emerged. Because the vast majority of these broadcasts contain nothing but the recitation of blocks of numbers, in various languages, with occasional bursts of music, they have become known colloquially as "number stations". Perhaps the most noted number station is called the "Lincolnshire Poacher", named after the 18th century English folk song, which is transmitted just before the sequences of numbers. 
Unlicensed two-way radio activity by individuals such as taxi drivers, bus drivers and fishermen in various countries can be heard on various shortwave frequencies. Such unlicensed transmissions by "pirate" or "bootleg" two-way radio operators can often cause signal interference to licensed stations. Unlicensed business radio land mobile systems (taxis, trucking companies, among numerous others) may be found in the 20–30 MHz region, while unlicensed marine mobile and other similar users may be found over the entire shortwave range. Pirate radio broadcasters, who feature programming such as music, talk and other entertainment, can be heard sporadically and in various modes on the shortwave bands. Pirate broadcasters take advantage of the better propagation characteristics to achieve more range compared to the AM or FM broadcast bands. Over-the-horizon radar: From 1976 to 1989, the Soviet Union's Russian Woodpecker over-the-horizon radar system blotted out numerous shortwave broadcasts daily. Ionospheric heaters used for scientific experimentation such as the High Frequency Active Auroral Research Program in Alaska, and the Sura ionospheric heating facility in Russia. Shortwave broadcasting See International broadcasting for details on the history and practice of broadcasting to foreign audiences. See List of shortwave radio broadcasters for a list of international and domestic shortwave radio broadcasters. See Shortwave relay station for the actual kinds of integrated technologies used to bring high power signals to listeners. Frequency allocations The World Radiocommunication Conference (WRC), organized under the auspices of the International Telecommunication Union, allocates bands for various services in conferences every few years. The last WRC took place in 2023. As of WRC-97 in 1997, these bands were allocated for international broadcasting. AM shortwave broadcasting channels are allocated with a 5 kHz separation for traditional analog audio broadcasting. Although countries generally follow the assigned bands, there may be small differences between countries or regions. For example, in the official bandplan of the Netherlands, the 49 m band starts at 5.95 MHz, the 41 m band ends at 7.45 MHz, the 11 m band starts at 25.67 MHz, and the 120 m, 90 m, and 60 m bands are absent altogether. International broadcasters sometimes operate outside the normal WRC-allocated bands or use off-channel frequencies. This is done for practical reasons, or to attract attention in crowded bands (60 m, 49 m, 40 m, 41 m, 31 m, 25 m). The new digital audio broadcasting format for shortwave, DRM, operates in 10 kHz or 20 kHz channels. There are some ongoing discussions with respect to specific band allocation for DRM, as it is mainly transmitted in a 10 kHz format. The power used by shortwave transmitters ranges from less than one watt for some experimental and amateur radio transmissions to 500 kilowatts and higher for intercontinental broadcasters and over-the-horizon radar. Shortwave transmitting centers often use specialized antenna designs (like the ALLISS antenna technology) to concentrate radio energy at the target area. Advantages Shortwave possesses a number of advantages over newer technologies: Difficulty of censoring programming by authorities in restrictive countries.
Unlike their relative ease in monitoring and censoring the Internet, over-the air television, cable television, satellite television, satellite radio, mobile phones, landline phones, and satellite phones, government authorities face technical difficulties monitoring which stations (sites) are being listened to (accessed). For example, during the attempted coup against Soviet President Mikhail Gorbachev, when his access to communications was limited (e.g. his phones, television and radio were cut off), Gorbachev was able to stay informed by means of the BBC World Service on shortwave. Low-cost shortwave radios are widely available in all but the most repressive countries in the world. Simple shortwave regenerative receivers can be easily built with a few parts. In many countries (particularly in most developing nations and in the Eastern bloc during the Cold War era) ownership of shortwave receivers has been and continues to be widespread (in many of these countries some domestic stations also used shortwave). Many newer shortwave receivers are portable and can be battery-operated, making them useful in difficult circumstances. Newer technology includes hand-cranked radios which provide power without batteries. Shortwave radios can be used in situations where over-the-air television, cable television, satellite television, landline phones, mobile phones, satellite phones, satellite communications, or the Internet is temporarily, long-term or permanently unavailable (or unaffordable). Shortwave radio travels much farther than broadcast FM (88–108 MHz). Shortwave broadcasts can be easily transmitted over a distance of several thousand miles, including from one continent to another. Particularly in tropical regions, SW is somewhat less prone to interference from thunderstorms than medium wave radio, and is able to cover a large geographic area with relatively low power (and hence cost). Therefore, in many of these countries it is widely used for domestic broadcasting. Very little infrastructure is required for long-distance two-way communications using shortwave radio. All one needs is a pair of transceivers, each with an antenna, and a source of energy (such as a battery, a portable generator, or the electrical grid). This makes shortwave radio one of the most robust means of communications, which can be disrupted only by interference or bad ionospheric conditions. Modern digital transmission modes such as MFSK and Olivia are even more robust, allowing successful reception of signals well below the noise floor of a conventional receiver. Disadvantages Shortwave radio's benefits are sometimes regarded as being outweighed by its drawbacks, including: In most Western countries, shortwave radio ownership is usually limited to enthusiasts, since most new standard radios do not receive the shortwave band. Therefore, Western audiences are limited. In the developed world, shortwave reception is very difficult in urban areas because of excessive noise from switched-mode power adapters, fluorescent or LED light sources, internet modems and routers, computers and many other sources of radio interference. Audio quality may be limited due to interference and the modes that are used. Shortwave listening The Asia-Pacific Telecommunity estimates that there are approximately 600 million shortwave broadcast-radio receivers in use in 2002. WWCR claims that there are 1.5 billion shortwave receivers worldwide. Many hobbyists listen to shortwave broadcasters. 
In some cases, the goal is to hear as many stations from as many countries as possible (DXing); others listen to specialized shortwave utility, or "ute", transmissions such as maritime, naval, aviation, or military signals. Others focus on intelligence signals from numbers stations, stations which transmit strange broadcast usually for intelligence operations, or the two way communications by amateur radio operators. Some short wave listeners behave analogously to "lurkers" on the Internet, in that they listen only, and never attempt to send out their own signals. Other listeners participate in clubs, or actively send and receive QSL cards, or become involved with amateur radio and start transmitting on their own. Many listeners tune the shortwave bands for the programmes of stations broadcasting to a general audience (such as Radio Taiwan International, China Radio International, Voice of America, Radio France Internationale, BBC World Service, Voice of Korea, Radio Free Sarawak etc.). Today, through the evolution of the Internet, the hobbyist can listen to shortwave signals via remotely controlled or web controlled shortwave receivers around the world, even without owning a shortwave radio. Many international broadcasters offer live streaming audio on their websites and a number have closed their shortwave service entirely, or severely curtailed it, in favour of internet transmission. Shortwave listeners, or SWLs, can obtain QSL cards from broadcasters, utility stations or amateur radio operators as trophies of the hobby. Some stations even give out special certificates, pennants, stickers and other tokens and promotional materials to shortwave listeners. Shortwave broadcasts and music Some musicians have been attracted to the unique aural characteristics of shortwave radio which – due to the nature of amplitude modulation, varying propagation conditions, and the presence of interference – generally has lower fidelity than local broadcasts (particularly via FM stations). Shortwave transmissions often have bursts of distortion, and "hollow" sounding loss of clarity at certain aural frequencies, altering the harmonics of natural sound and creating at times a strange "spacey" quality due to echoes and phase distortion. Evocations of shortwave reception distortions have been incorporated into rock and classical compositions, by means of delays or feedback loops, equalizers, or even playing shortwave radios as live instruments. Snippets of broadcasts have been mixed into electronic sound collages and live musical instruments, by means of analogue tape loops or digital samples. Sometimes the sounds of instruments and existing musical recordings are altered by remixing or equalizing, with various distortions added, to replicate the garbled effects of shortwave radio reception. The first attempts by serious composers to incorporate radio effects into music may be those of the Russian physicist and musician Léon Theremin, who perfected a form of radio oscillator as a musical instrument in 1928 (regenerative circuits in radios of the time were prone to breaking into oscillation, adding various tonal harmonics to music and speech); and in the same year, the development of a French instrument called the Ondes Martenot by its inventor Maurice Martenot, a French cellist and former wireless telegrapher. 
Karlheinz Stockhausen used shortwave radio and effects in works including Hymnen (1966–1967), Kurzwellen (1968) – adapted for the Beethoven Bicentennial in Opus 1970 with filtered and distorted snippets of Beethoven pieces – Spiral (1968), Pole, Expo (both 1969–1970), and Michaelion (1997). Cypriot composer Yannis Kyriakides incorporated shortwave numbers station transmissions in his 1999 ConSPIracy cantata. Holger Czukay, a student of Stockhausen, was one of the first to use shortwave in a rock music context. In 1975, German electronic music band Kraftwerk recorded a full-length concept album around simulated radiowave and shortwave sounds, entitled Radio-Activity. The The's Radio Cineola monthly broadcasts drew heavily on shortwave radio sound. Shortwave's future The development of direct broadcasts from satellites has reduced the demand for shortwave receiver hardware, but there are still a great number of shortwave broadcasters. A new digital radio technology, Digital Radio Mondiale (DRM), is expected to improve the quality of shortwave audio from very poor to adequate. The future of shortwave radio is threatened by the rise of power line communication (PLC), also known as Broadband over Power Lines (BPL), which uses a data stream transmitted over unshielded power lines. As the BPL frequencies used overlap with shortwave bands, severe distortions can make listening to analog shortwave radio signals near power lines difficult or impossible. Observers including Andy Sennitt, former editor of the World Radio TV Handbook, Thomas Witherspoon, editor of the shortwave news site SWLingPost.com, and, in 2018, Nigel Fry, head of Distribution for the BBC World Service Group, have commented on shortwave's future prospects. During the 2022 Russian invasion of Ukraine, the BBC World Service launched two new shortwave frequencies for listeners in Ukraine and Russia, broadcasting English-language news updates in an effort to avoid censorship by the Russian state. American commercial shortwave broadcasters WTWW and WRMI also redirected much of their programming to Ukraine. See also ALLISS – a very large rotatable antenna system used in international broadcasting List of American shortwave broadcasters List of European short wave transmitters List of shortwave radio broadcasters References External links View live and historical data and images of space weather and radio propagation. Article describing pros and cons of short wave radio since the Cold War. Describes experiments carried out for the French and British governments. International broadcasting Radio Guglielmo Marconi Radio spectrum Short wave radio
Shortwave radio
[ "Physics" ]
5,898
[ "Radio spectrum", "Spectrum (physical sciences)", "Electromagnetic spectrum" ]
57,987
https://en.wikipedia.org/wiki/Rudolf%20Carnap
Rudolf Carnap (; ; 18 May 1891 – 14 September 1970) was a German-language philosopher who was active in Europe before 1935 and in the United States thereafter. He was a major member of the Vienna Circle and an advocate of logical positivism. Biography Carnap's father rose from being a poor ribbon-weaver to be the owner of a ribbon-making factory. His mother came from an academic family; her father was an educational reformer and her oldest brother was the archaeologist Wilhelm Dörpfeld. As a ten-year-old, Carnap accompanied Wilhelm Dörpfeld on an expedition to Greece. Carnap was raised in a profoundly religious Protestant family, but later became an atheist. He began his formal education at the Barmen Gymnasium and the Gymnasium in Jena. From 1910 to 1914, he attended the University of Jena, intending to write a thesis in physics. He also intently studied Immanuel Kant's Critique of Pure Reason during a course taught by Bruno Bauch, and was one of very few students to attend Gottlob Frege's courses in mathematical logic. During his university years he became enthralled with the German Youth Movement. While Carnap held moral and political opposition to World War I, he felt obligated to serve in the German army. After three years of service, he was given permission to study physics at the University of Berlin, 1917–18, where Albert Einstein was a newly appointed professor. Carnap then attended the University of Jena, where he wrote a thesis defining an axiomatic theory of space and time. The physics department said it was too philosophical, and Bruno Bauch of the philosophy department said it was pure physics. Carnap then wrote another thesis in 1921, under Bauch's supervision, on the theory of space in a more orthodox Kantian style, and published as Der Raum (Space) in a supplemental issue of Kant-Studien (1922). Frege's course exposed him to Bertrand Russell's work on logic and philosophy, which put a sense of the aims to his studies. He accepted the effort to surpass traditional philosophy with logical innovations that inform the sciences. He wrote a letter to Russell, who responded by copying by hand long passages from his Principia Mathematica for Carnap's benefit, as neither Carnap nor his university could afford a copy of this epochal work. In 1924 and 1925, he attended seminars led by Edmund Husserl, the founder of phenomenology, and continued to write on physics from a logical positivist perspective. Carnap discovered a kindred spirit when he met Hans Reichenbach at a 1923 conference. Reichenbach introduced Carnap to Moritz Schlick, a professor at the University of Vienna who offered Carnap a position in his department, which Carnap accepted in 1926. Carnap thereupon joined an informal group of Viennese intellectuals that came to be known as the Vienna Circle, directed largely by Schlick and including Hans Hahn, Friedrich Waismann, Otto Neurath, and Herbert Feigl, with occasional visits by Hahn's student Kurt Gödel. When Wittgenstein visited Vienna, Carnap would meet with him. He (with Hahn and Neurath) wrote the 1929 manifesto of the Circle, and (with Hans Reichenbach) initiated the philosophy journal Erkenntnis. In February 1930 Alfred Tarski lectured in Vienna, and during November 1930 Carnap visited Warsaw. On these occasions he learned much about Tarski's model theoretic method of semantics. 
Rose Rand, another philosopher in the Vienna Circle, noted, "Carnap's conception of semantics starts from the basis given in Tarski's work but a distinction is made between logical and non-logical constants and between logical and factual truth... At the same time he worked with the concepts of intension and extension and took these two concepts as a basis of a new method of semantics." In 1931, Carnap was appointed Professor at the German University of Prague. In 1933, W. V. Quine met Carnap in Prague and discussed the latter's work at some length. Thus began the lifelong mutual respect these two men shared, one that survived Quine's eventual forceful disagreements with a number of Carnap's philosophical conclusions. Carnap, whose socialist and pacifist beliefs put him at risk in Nazi Germany, emigrated to the United States in 1935 and became a naturalized citizen in 1941. Meanwhile, back in Vienna, Schlick was murdered in 1936. From 1936 to 1952, Carnap was a professor of philosophy at the University of Chicago. During the late 1930s, Carnap offered an assistant position in philosophy to Carl Gustav Hempel, who accepted and became one of his most significant intellectual collaborators. Thanks partly to Quine's help, Carnap spent the years 1939–41 at Harvard University, where he was reunited with Tarski. Carnap (1963) later expressed some irritation about his time at Chicago, where he and Charles W. Morris were the only members of the department committed to the primacy of science and logic. (Their Chicago colleagues included Richard McKeon, Charles Hartshorne, and Manley Thompson.) Carnap's years at Chicago were nonetheless very productive ones. He wrote books on semantics (Carnap 1942, 1943, 1956), modal logic, and on the philosophical foundations of probability and inductive logic (Carnap 1950, 1952). After a stint at the Institute for Advanced Study in Princeton (1952–1954), he joined the UCLA Department of Philosophy in 1954, Hans Reichenbach having died the previous year. He had earlier refused an offer of a similar job at the University of California, Berkeley, because accepting that position required that he sign a loyalty oath, a practice to which he was opposed on principle. While at UCLA, he wrote on scientific knowledge, the analytic–synthetic distinction, and the verification principle. His writings on thermodynamics and on the foundations of probability and inductive logic, were published posthumously as Carnap (1971, 1977, 1980). Carnap taught himself Esperanto when he was 14 years of age. He later attended the World Congress of Esperanto in Dresden in 1908. He also attended the 1924 Congress in Vienna, where he met his fellow Esperantist Otto Neurath for the first time. In the USA Carnap was somewhat politically involved. Carnap was a signatory of an open appeal distributed by the National Committee to Secure Justice in the Rosenberg Case to appeal for clemency in the case. He was listed as a 'sponsor' for the "National Conference to Appeal the Walter-McCarran Law and Defend Its Victims" organised by the American Committee for the Protection of the Foreign Born, and also for the "Scientific and Cultural Conference for World Peace" organised by the National Council of Arts, Sciences and Professions. Carnap had four children by his first marriage to Elizabeth Schöndube, which ended in divorce in 1929. He married his second wife, Elizabeth Ina Stöger, in 1933. Ina committed suicide in 1964. 
Philosophical work Below is an examination of the main topics in the evolution of the philosophy of Rudolf Carnap. It is not exhaustive, but it outlines Carnap's main works and contributions to modern epistemology and philosophy of logic. Der Raum From 1919 to 1921, Carnap worked on a doctoral thesis called Der Raum: Ein Beitrag zur Wissenschaftslehre (Space: A Contribution to the Theory of Science, 1922). In this dissertation on the philosophical foundations of geometry, Carnap tried to provide a logical basis for a theory of space and time in physics. Considering that Carnap was interested in pure mathematics, natural sciences and philosophy, his dissertation can be seen as an attempt to build a bridge between the different disciplines that are geometry, physics and philosophy. For Carnap thought that in many instances those disciplines use the same concepts, but with totally different meanings. The main objective of Carnap's dissertation was to show that the inconsistencies between theories concerning space only existed because philosophers, as well as mathematicians and scientists, were talking about different things while using the same "space" word. Hence, Carnap characteristically argued that there had to be three separate notions of space. "Formal" space is space in the sense of mathematics: it is an abstract system of relations. "Intuitive" space is made of certain contents of intuition independent of single experiences. "Physical" space is made of actual spatial facts given in experience. The upshot is that those three kinds of "space" imply three different kinds of knowledge and thus three different kinds of investigations. It is interesting to note that it is in this dissertation that the main themes of Carnap's philosophy appear, most importantly the idea that many philosophical contradictions appear because of a misuse of language, and a stress on the importance of distinguishing formal and material modes of speech. Der Logische Aufbau der Welt From 1922 to 1925, Carnap worked on a book which became one of his major works, namely Der logische Aufbau der Welt (translated as The Logical Structure of the World, 1967), which was accepted in 1926 as his habilitation thesis at the University of Vienna and published as a book in 1928. That achievement has become a landmark in modern epistemology and can be read as a forceful statement of the philosophical thesis of logical positivism. Indeed, the Aufbau suggests that epistemology, based on modern symbolic logic, is concerned with the logical analysis of scientific propositions, while science itself, based on experience, is the only source of knowledge of the external world, i.e. the world outside the realm of human perception. According to Carnap, philosophical propositions are statements about the language of science; they aren't true or false, but merely consist of definitions and conventions about the use of certain concepts. In contrast, scientific propositions are factual statements about the external reality. They are meaningful because they are based on the perceptions of the senses. In other words, the truth or falsity of those propositions can be verified by testing their content with further observations. In the Aufbau, Carnap wants to display the logical and conceptual structure with which all scientific (factual) statements can be organized. Carnap gives the label "constitution theory" to this epistemic-logical project. 
It is a constructive undertaking that systematizes scientific knowledge according to the notions of symbolic logic. Accordingly, the purpose of this constitutional system is to identify and discern different classes of scientific concepts and to specify the logical relations that link them. In the Aufbau, concepts are taken to denote objects, relations, properties, classes and states. Carnap argues that all concepts must be ranked over a hierarchy. In that hierarchy, all concepts are organized according to a fundamental arrangement where concepts can be reduced and converted to other basic ones. Carnap explains that a concept can be reduced to another when all sentences containing the first concept can be transformed into sentences containing the other. In other words, every scientific sentence should be translatable into another sentence such that the original terms have the same reference as the translated terms. Most significantly, Carnap argues that the basis of this system is psychological. Its content is the "immediately given", which is made of basic elements, namely perceptual experiences. These basic elements consist of conscious psychological states of a single human subject. In the end, Carnap argues that his constitutional project demonstrates the possibility of defining and uniting all scientific concepts in a single conceptual system on the basis of a few fundamental concepts. Overcoming metaphysics From 1928 to 1934, Carnap published papers (Scheinprobleme in der Philosophie, 1928; translated as Pseudoproblems in Philosophy, 1967) in which he appears overtly skeptical of the aims and methods of metaphysics, i.e. the traditional philosophy that finds its roots in mythical and religious thought. Indeed, he discusses how, in many cases, metaphysics is made of meaningless discussions of pseudo-problems. For Carnap, a pseudo-problem is a philosophical question which, on the surface, handles concepts that refer to our world while, in fact, these concepts do not actually denote real and attested objects. In other words, these pseudo-problems concern statements that do not, in any way, have empirical implications. They do not refer to states of affairs and the things they denote cannot be perceived. Consequently, one of Carnap's main aim has been to redefine the purpose and method of philosophy. According to him, philosophy should not aim at producing any knowledge transcending the knowledge of science. In contrast, by analyzing the language and propositions of science, philosophers should define the logical foundations of scientific knowledge. Using symbolic logic, they should explicate the concepts, methods and justificatory processes that exist in science. Carnap believed that the difficulty with traditional philosophy lay in the use of concepts that are not useful for science. For Carnap, the scientific legitimacy of these concepts was doubtful, because the sentences containing them do not express facts. Indeed, a logical analysis of those sentences proves that they do not convey the meaning of states of affairs. In other words, these sentences are meaningless. Carnap explains that to be meaningful, a sentence should be factual. It can be so, for one thing, by being based on experience, i.e. by being formulated with words relating to direct observations. For another, a sentence is factual if one can clearly state what are the observations that could confirm or disconfirm that sentence. 
After all, Carnap presupposes a specific criterion of meaning, namely the Wittgensteinian principle of verifiability. Indeed, he requires, as a precondition of meaningfulness, that all sentences be verifiable, which implies that a sentence is meaningful only if there is a way to verify if it is true or false. To verify a sentence, one needs to expound the empirical conditions and circumstances that would establish the truth of the sentence. As a result, it is clear for Carnap that metaphysical sentences are meaningless. They include concepts like "god", "soul" and "the absolute" that transcend experience and cannot be traced back or connected to direct observations. Because those sentences cannot be verified in any way, Carnap suggests that science, as well as philosophy, should neither consider nor contain them. The logical analysis of language At that point in his career, Carnap attempted to develop a full theory of the logical structure of scientific language. This theory, exposed in Logische Syntax der Sprache (1934; translated as The Logical Syntax of Language, 1937) gives the foundations to his idea that scientific language has a specific formal structure and that its signs are governed by the rules of deductive logic. Moreover, the theory of logical syntax expounds a method with which one can talk about a language: it is a formal meta-theory about the pure forms of language. In the end, because Carnap argues that philosophy aims at the logical analysis of the language of science and thus is the logic of science, the theory of the logical syntax can be considered as a definite language and a conceptual framework for philosophy. The logical syntax of language is a formal theory. It is not concerned with the contextualized meaning or the truth-value of sentences. In contrast, it considers the general structure of a given language and explores the different structural relations that connect the elements of that language. Hence, by explaining the different operations that allow specific transformations within the language, the theory is a systematic exposition of the rules that operate within that language. In fact, the basic function of these rules is to provide the principles to safeguard coherence, to avoid contradictions and to deduce justified conclusions. Carnap sees language as a calculus. This calculus is a systematic arrangement of symbols and relations. The symbols of the language are organized according to the class that they belong to---and it is through their combination that we can form sentences. The relations are different conditions under which a sentence can be said to follow, or to be the consequence, of another sentence. The definitions included in the calculus state the conditions under which a sentence can be considered of a certain type and how those sentences can be transformed. We can see the logical syntax as a method of formal transformation, i.e. a method for calculating and reasoning with symbols. Finally, Carnap introduces his well known "principle of tolerance." This principle suggests that there is no moral in logic. When it comes to using a language, there is no good or bad, fundamentally true or false. In this perspective, the philosopher's task is not to bring authoritative interdicts prohibiting the use of certain concepts. In contrast, philosophers should seek general agreements over the relevance of certain logical devices. 
According to Carnap, those agreements are possible only through the detailed presentation of the meaning and use of the expressions of a language. In other words, Carnap believes that every logical language is correct only if this language is supported by exact definitions and not by philosophical presumptions. Carnap embraces a formal conventionalism. That implies that formal languages are constructed and that everyone is free to choose the language it finds more suited to his purpose. There should not be any controversy over which language is the correct language; what matters is agreeing over which language best suits a particular purpose. Carnap explains that the choice of a language should be guided according to the security it provides against logical inconsistency. Furthermore, practical elements like simplicity and fruitfulness in certain tasks influence the choice of a language. Clearly enough, the principle of tolerance was a sophisticated device introduced by Carnap to dismiss any form of dogmatism in philosophy. Inductive logic After having considered problems in semantics, i.e. the theory of the concepts of meaning and truth (Foundations of Logic and Mathematics, 1939; Introduction to Semantics, 1942; Formalization of Logic, 1943), Carnap turned his attention to the subject of probability and inductive logic. His views on that subject are for the most part exposed in Logical foundations of probability (1950) where Carnap aims to give a sound logical interpretation of probability. Carnap thought that according to certain conditions, the concept of probability had to be interpreted as a purely logical concept. In this view, probability is a basic concept anchored in all inductive inferences, whereby the conclusion of every inference that holds without deductive necessity is said be more or less likely to be the case. In fact, Carnap claims that the problem of induction is a matter of finding a precise explanation of the logical relation that holds between a hypothesis and the evidence that supports it. An inductive logic is thus based on the idea that probability is a logical relation between two types of statements: the hypothesis (conclusion) and the premises (evidence). Accordingly, a theory of induction should explain how, by pure logical analysis, we can ascertain that certain evidence establishes a degree of confirmation strong enough to confirm a given hypothesis. Carnap was convinced that there was a logical as well as an empirical dimension in science. He believed that one had to isolate the experiential elements from the logical elements of a given body of knowledge. Hence, the empirical concept of frequency used in statistics to describe the general features of certain phenomena can be distinguished from the analytical concepts of probability logic that merely describe logical relations between sentences. For Carnap, the statistical and the logical concepts must be investigated separately. Having insisted on this distinction, Carnap defines two concepts of probability. The first one is logical and deals with the degree to which a given hypothesis is confirmed by a piece of evidence. It is the degree of confirmation. The second is empirical, and relates to the long run rate of one observable feature of nature relative to another. It is the relative frequency. Statements belonging to the second concepts are about reality and describe states of affairs. They are empirical and, therefore, must be based on experimental procedures and the observation of relevant facts. 
By contrast, statements belonging to the first concept do not say anything about facts. Their meaning can be grasped solely through an analysis of the signs they contain. They are analytic sentences, i.e. true by virtue of their logical meaning. Even though these sentences could refer to states of affairs, their meaning is given by the symbols and relations they contain. In other words, the probability of a conclusion is given by the logical relation it has to the evidence. The evaluation of the degree of confirmation of a hypothesis is thus a problem of meaning analysis. Clearly, the probability of a statement about relative frequency can be unknown; because it depends on the observation of certain phenomena, one may not possess the information needed to establish the value of that probability. Consequently, the value of that statement can be confirmed only if it is corroborated by facts. In contrast, the probability of a statement about the degree of confirmation could be unknown, in the sense that one may lack the correct logical method to evaluate its exact value. But such a statement can always be assigned a definite logical value, since that value depends only on the meaning of its symbols. Primary source materials The Rudolf Carnap Papers contain thousands of letters, notes, drafts, and diaries, written over his entire life and career. The majority of his papers were purchased from his daughter, Hanna Carnap-Thost, in 1974 by the University of Pittsburgh, with subsequent further accessions. Documents that contain financial, medical, and personal information are restricted. Carnap used the mail regularly to discuss philosophical problems with hundreds of others; the most notable were Herbert Feigl, Carl Gustav Hempel, Felix Kaufmann, Otto Neurath, and Moritz Schlick. Photographs taken throughout his life are also part of the collection, as are family pictures and photographs of his peers and colleagues. Some of the correspondence is considered notable, including his student notes from his seminars with Frege (describing the Begriffsschrift and the logic in mathematics). Carnap's notes from Russell's seminar in Chicago, and notes he took from discussions with Tarski, Heisenberg, Quine, Hempel, Gödel, and Jeffrey, are also part of the University of Pittsburgh Library System's Archives and Special Collections. Digitized contents include: Notes (old), 1958–1966. More than 1,000 pages of lecture outlines are preserved, covering the courses that Carnap taught in the United States, Prague, and Vienna. Drafts of his published and unpublished works are part of the collection, and additional Carnap materials can be found throughout the Archives of Scientific Philosophy at the University of Pittsburgh, including manuscript drafts and typescripts both for his published works and for many unpublished papers and books; a partial listing includes his first formulations of the Aufbau. Much material is written in an older German shorthand, the Stolze-Schrey system, which he employed extensively beginning in his student days. Some of the content has been digitized and is available through the finding aid. The University of California also maintains a collection of Rudolf Carnap Papers, and microfilm copies of his papers are held by the Philosophical Archives at the University of Konstanz in Germany. Selected publications 1922. Der Raum: Ein Beitrag zur Wissenschaftslehre, Kant-Studien, Ergänzungshefte. 
56 (his 1921 doctoral thesis, published as a monograph supplement to the Kant-Studien journal). English translation: "Space: A Contribution to the Theory of Science" (2005 draft), published version in: The Collected Works of Rudolf Carnap, Volume 1: Early Writings (2019) pp. 21–208. 1926. Physikalische Begriffsbildung. Karlsruhe: Braun. English translation: "Physical Concept Formation" (1992 draft), published version in: Collected Works (2019) pp. 339–440. 1928. Scheinprobleme in der Philosophie (Pseudoproblems in Philosophy). Berlin: Weltkreis-Verlag. 1928. Der Logische Aufbau der Welt (his habilitation thesis). Leipzig: Felix Meiner Verlag. English translation: Rolf A. George, 1967. The Logical Structure of the World. Pseudoproblems in Philosophy. University of California Press. 1929. Abriss der Logistik, mit besonderer Berücksichtigung der Relationstheorie und ihrer Anwendungen. (Revised) English translation: Introduction to Symbolic Logic (1958). 1931. English translation: "The Elimination of Metaphysics Through Logical Analysis of Language", Pap, Arthur (trans.), in: Ayer, A. J. (ed.), Logical Positivism (1965) pp. 60–81. 1934. Logische Syntax der Sprache. English translation: 1937, The Logical Syntax of Language. Kegan Paul. 1935. Philosophy and Logical Syntax. Bristol UK: Thoemmes. Excerpt. 1939. Foundations of Logic and Mathematics in International Encyclopedia of Unified Science, Vol. I, No. 3. University of Chicago Press. 1942. Introduction to Semantics. Harvard Uni. Press. 1943. Formalization of Logic. Harvard Uni. Press. 1945. "On Inductive Logic" in Philosophy of Science, Vol. 12, pp. 72–97. 1945. "The Two Concepts of Probability" in Philosophy and Phenomenological Research, Vol. 5, No. 4, pp. 513–532. 1947. "On the Application of Inductive Logic" in Philosophy and Phenomenological Research, Vol. 8, pp. 133–148. 1947. Meaning and Necessity: A Study in Semantics and Modal Logic. University of Chicago Press. [enlarged edition published in 1956] 1950 (1962, 2nd ed.). Logical Foundations of Probability. University of Chicago Press. pp. 3–15. 1950. "Empiricism, Semantics, Ontology", Revue Internationale de Philosophie 4: 20–40. Reprinted in: Paul Benacerraf & Hilary Putnam (eds.), Philosophy of Mathematics: Selected Readings (1964). 1952. The Continuum of Inductive Methods. University of Chicago Press. 1958. Introduction to Symbolic Logic and its Applications. trans. W. H. Myer and J. Wilkinson, Dover Publications, New York. [revised and translated version of Abriss der Logistik (1929)] 1962. "The Aim of Inductive Logic" in Nagel, Suppes, and Tarski (eds.), Logic, Methodology and Philosophy of Science, Stanford, pp. 303–318 (revised and expanded in Carnap & Jeffrey 1971). 1963. "Intellectual Autobiography" in Schilpp, Paul A. (ed.), The Philosophy of Rudolf Carnap, Library of Living Philosophers Vol. XI, Open Court, pp. 3–83. 1964. "The Logicist Foundations of Mathematics" in Paul Benacerraf & Hilary Putnam (eds.), Philosophy of Mathematics: Selected Readings. Englewood Cliffs, NJ, USA: Cambridge University Press. pp. 41–52. 1966. An Introduction to the Philosophy of Science. Basic Books. 1966. Philosophical Foundations of Physics. Martin Gardner, ed. Basic Books. Online excerpt. 1971. Studies in Inductive Logic and Probability, Vol. 1. with Jeffrey, R. C., University of California Press. 
1973 "Notes on probability and induction" Synthese 25 (3-4):269 - 298, reprinted with slight revision in Hintikka (1975) 1975 “Observation Language and Theoretical Language”, in Jaakko Hintikka (ed.), Rudolf Carnap, logical empiricist: materials and perspectives. Boston: D. Reidel Pub. Co.. pp. 75--85 [translation of “Beobachtungssprache und Theoretische Sprache”, Dialectica, 12(3–4): 236–248 1958] 1977. Two Essays on Entropy. Shimony, Abner, ed. University of California Press. 1980. "A Basic System of Inductive Logic Part II" in: Jeffrey, R. C. (ed.) Studies in Inductive Logic and Probability, Vol. 2. . University of California Press. 2000. Untersuchungen zur Allgemeinen Axiomatik. Edited from unpublished manuscript by T. Bonk and J. Mosterín. Darmstadt: Wissenschaftliche Buchgesellschaft. 167  .. 2017, “Value Concepts (1958)”, Synthese, 194(1): 185–194. 2019. Rudolf Carnap: Early Writings, A.W. Carus, Michael Friedman, Wolfgang Kienzler, Alan Richardson, and Sven Schlotter (eds.), (The Collected Works of Rudolf Carnap, 1), New York: Oxford University Press. *For a more complete listing see Carnap’s Works in "Linked bibliography". Filmography Interview with Rudolf Carnap, German TV, 1964 See also Definitions of philosophy Second Conference on the Epistemology of the Exact Sciences Second Davos Hochschulkurs References Sources Ivor Grattan-Guinness, 2000. In Search of Mathematical Roots. Princeton Uni. Press. Thomas Mormann, 2000. Rudolf Carnap. C. H. Beck. Willard Quine 1951, "Two Dogmas of Empiricism." The Philosophical Review 60: 20–43. Reprinted in his 1953 From a Logical Point of View. Harvard University Press. 1985, The Time of My Life: An Autobiography. MIT Press. Richardson, Alan W., 1998. Carnap's construction of the world: the Aufbau and the emergence of logical empiricism. Cambridge Uni. Press. Schilpp, P. A., ed., 1963. The Philosophy of Rudolf Carnap. LaSalle IL: Open Court. Spohn, Wolfgang, ed., 1991. Erkenntnis Orientated: A Centennial Volume for Rudolf Carnap and Hans Reichenbach. Kluwer Academic Publishers. 1991. Logic, Language, and the Structure of Scientific Theories: Proceedings of the Carnap-Reichenbach Centennial, University of Konstanz, May 21–24, 1991. University of Pittsburgh Press. Wagner, Pierre, ed., 2009. Carnap's Logical Syntax of Language. Palgrave Macmillan. Wagner, Pierre, ed., 2012. Carnap's Ideal of Explication and Naturalism. Palgrave Macmillan. Further reading Sarkar, Sahotra "Rudolf Carnap (1891–1970)" in Martinich, A. P.; Sosa, David (eds.) A Companion to Analytic Philosophy, Blackwell, (2001) Holt, Jim, "Positive Thinking" (review of Karl Sigmund, Exact Thinking in Demented Times: The Vienna Circle and the Epic Quest for the Foundations of Science, Basic Books, 449 pp.), The New York Review of Books, vol. LXIV, no. 20 (21 December 2017), pp. 74–76. Psillos, Stathis, "Rudolf Carnap's 'Theoretical Concepts in Science'", Studies in History and Philosophy of Science Part A 31(1) (2000):151–172. External links Rudolf Carnap Webpage and Directory of Internet Resources Homepage of the Collected Works of Rudolf Carnap – Department of Philosophy, Carnegie Mellon University Precis of Carnap's philosophy The Life of Rudolf Carnap, Philosophy at RBJones.com R. Carnap: "Von der Erkenntnistheorie zur Wissenschaftslogik", Paris Congress in 1935, Paris, 1936. R. Carnap: "Über die Einheitssprache der Wissenschaft", Paris Congress in 1935, Paris, 1936. R. Carnap: "Wahrheit und Bewährung", Paris Congress in 1935, Paris, 1936. 
Rudolf Carnap Papers: (Rudolf Carnap Papers, 1905–1970, ASP.1974.01, Special Collections Department, University of Pittsburgh.) Das Fremdpsychische bei Rudolf Carnap (German) by Robert Bauer. FBI file on Rudolph Carnap RUDOLF CARNAP, PHILOSOPHER, DIES obituary in The New York Times, 15 September 1970 Homage to Rudolf Carnap (1970) by Feigl, Hempel, Jeffrey, Quine et al. reprinted in frontmatter of RUDOLF CARNAP, LOGICAL EMPIRICIST (1975) [Audio] Carnap lecturing on 'Theoretical Concepts in Science' at the meeting of the American Philosophical Association, Pacific Division, at Santa Barbara, California, on 29 December 1959. 1891 births 1970 deaths 20th-century American male writers 20th-century American philosophers 20th-century atheists 20th-century American essayists 20th-century German male writers 20th-century German non-fiction writers 20th-century German philosophers American atheists American Esperantists American logicians American male essayists American male non-fiction writers American people of German descent American socialists Analytic philosophers Atheist philosophers Corresponding fellows of the British Academy Empiricists Epistemologists German atheists Emigrants from Nazi Germany to the United States German Esperantists German essayists German expatriates in Austria German logicians German male non-fiction writers German socialists Harvard University people Institute for Advanced Study visiting scholars American lecturers German lecturers Ontologists People from the Rhine Province Philosophers of language Philosophers of logic Philosophers of mathematics Philosophers of probability Philosophers of science Philosophers of time Philosophy academics Philosophy writers UCLA Department of Philosophy faculty University of California, Los Angeles faculty University of Chicago faculty University of Jena alumni University of Vienna alumni Academic staff of the University of Vienna Vienna Circle Writers from Wuppertal
Rudolf Carnap
[ "Mathematics" ]
7,065
[ "Philosophers of mathematics" ]
57,989
https://en.wikipedia.org/wiki/PLUR
Peace Love Unity Respect, commonly shortened to PLUR, is a set of principles associated with rave culture that originated in the United States. It has been in common use since the early 1990s, when it became commonplace on nightclub and rave flyers and especially on club paraphernalia advertising underground outdoor trance music parties. It has since expanded to the larger rave dance music culture as well. PLUR and rave culture PLUR can be interpreted as the essential philosophy of life and ethical guideline for ravers and clubbers, at least insomuch as it relates to interpersonal relationships, with basic directions on how people are expected to behave at a rave gathering or in a dance club. This universalist philosophy, which underpins the tribal dance culture that began circling the globe with the rise of the internet, theoretically takes precedence over any chemical or musical aspects of the rave scene. Raves represent a modern ritualistic experience, promoting a strong communal sense, where PLUR is considered an ideology. The four terms, among others – "Peace, Love, Freedom, Tolerance, Unity, Harmony, Expression, Responsibility and Respect" – are also part of the anonymous "Raver's Manifesto" (claimed to be written by Maria Pike in 2001), which has been spread widely among the international rave subculture. The PLUR handshake is used in kandi bracelet trading. Elements Peace – The gentle resolution of negative emotions and conflict. Love – Performing acts and sharing feelings of goodwill towards others. The exchange of gestures such as hugging occurs frequently at a rave, and is considered a way of "spreading the love." Unity – Welcoming others into the community, and coming together regardless of personal differences. Respect – Showing sensitivity for the feelings of others, and accepting one another with tolerance and without judgement; treating each other as one would like to be treated. Origins PLUR is an aggregation of ideas that were part of the earlier hippie and peace movement ("peace", "love") and black and hip hop culture ("respect"). Specific use of the term dates to the early 1990s rave scene. One of the most influential uses of the term was made by DJ Frankie Bones in June 1993. In response to a fight in the audience of one of his Storm Raves in Brooklyn, Bones took the microphone and proclaimed: "If you don't start showing some peace, love, and unity, I'll break your faces." It is also reported that as early as "on July 4, 1990, [...] Frankie's brother and Storm Rave collaborator Adam X painted 'Peace Love Unity' on a train car". The fourth term, "Respect", was championed by Laura La Gassa (wife of Brian Behlendorf). Variations One variation is PLURR: Peace, Love, Unity, Respect, Responsibility. Several other variations on the same four words, but in a different order (e.g. LURP), have been proposed; however, none of these are commonly used. Another variation is PLUM, with "M" standing for "movement". The first three elements, "Peace, Love, Unity", are also used on their own; an example is the title of DJ Hype's 1996 track "Peace, Love & Unity". Later incarnations and variations of PLUR can be seen in the adoption of Pronoia and also Ubuntu, with PLUR and Pronoia often being interchangeable terms, depending upon one's company. References External links PLUR acronym definitions Peace in culture Love Rave Electronic dance music Etiquette Acronyms
PLUR
[ "Biology" ]
737
[ "Etiquette", "Behavior", "Human behavior" ]
57,992
https://en.wikipedia.org/wiki/Monocoque
Monocoque, also called structural skin, is a structural system in which loads are supported by an object's external skin, in a manner similar to an egg shell. The word monocoque is a French term for "single shell". First used for boats, a true monocoque carries both tensile and compressive forces within the skin and can be recognised by the absence of a load-carrying internal frame. Few metal aircraft other than those with milled skins can strictly be regarded as pure monocoques, as they use a metal shell or sheeting reinforced with frames riveted to the skin, but most wooden aircraft are described as monocoques, even though they also incorporate frames. By contrast, a semi-monocoque is a hybrid combining a tensile stressed skin and a compressive structure made up of longerons and ribs or frames. Other semi-monocoques, not to be confused with true monocoques, include vehicle unibodies, which tend to be composites, and inflatable shells or balloon tanks, both of which are pressure stabilised. Aircraft Early aircraft were constructed using frames, typically of wood or steel tubing, which could then be covered (or skinned) with fabric such as Irish linen or cotton. The fabric made a minor structural contribution in tension but none in compression and was there for aerodynamic reasons only. By considering the structure as a whole and not just the sum of its parts, monocoque construction integrated the skin and frame into a single load-bearing shell with significant improvements to strength and weight. To make the shell, thin strips of wood were laminated into a three-dimensional shape, a technique adopted from boat hull construction. One of the earliest examples was the Deperdussin Monocoque racer in 1912, which used a laminated fuselage made up of three layers of glued poplar veneer that provided both the external skin and the main load-bearing structure. This also produced a smoother surface and reduced drag so effectively that the aircraft was able to win most of the races it was entered into. This style of construction was further developed in Germany by LFG Roland using the patented Wickelrumpf (wrapped hull) form, later licensed to Pfalz Flugzeugwerke, who used it on several fighter aircraft. Each half of the fuselage shell was formed over a male mold using two layers of plywood strips with fabric wrapping between them. The early plywood used was prone to damage from moisture and delamination. While all-metal aircraft such as the Junkers J 1 had appeared as early as 1915, these were not monocoques but added a metal skin to an underlying framework. The first metal monocoques were built by Claudius Dornier, while working for Zeppelin-Lindau. He had to overcome a number of problems, not least of which was the quality of aluminium alloys strong enough to use as structural materials, which frequently formed layers instead of presenting a uniform material. After failed attempts with several large flying boats in which a few components were monocoques, he built the Zeppelin-Lindau V1 to test out a monocoque fuselage. Although it crashed, he learned a lot from its construction. The Dornier-Zeppelin D.I was built in 1918 and, although too late for operational service during the war, was the first all-metal monocoque aircraft to enter production. In parallel to Dornier, Zeppelin also employed Adolf Rohrbach, who built the Zeppelin-Staaken E-4/20, which when it flew in 1920 became the first multi-engined monocoque airliner, before being destroyed under orders of the Inter-Allied Commission. 
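To make the idea of a load-bearing skin slightly more concrete, the sketch below treats a fuselage section as a thin-walled circular tube whose skin alone carries a bending moment. It is only an illustrative back-of-the-envelope calculation: the radius, skin thickness and load are invented for the example and do not describe any real aircraft.

```python
import math

# Thin-walled circular shell in bending: second moment of area I ~ pi * r^3 * t,
# so the peak skin stress is sigma = M * r / I = M / (pi * r^2 * t).

def skin_stress_mpa(moment_nm, radius_m, thickness_m):
    """Peak bending stress (MPa) if the monocoque skin carries the whole moment."""
    sigma_pa = moment_nm / (math.pi * radius_m**2 * thickness_m)
    return sigma_pa / 1e6

# Hypothetical fuselage: 0.5 m radius, 2 mm skin, 50 kN*m bending moment.
print(skin_stress_mpa(50_000, 0.5, 0.002))  # ~32 MPa

# Halving the skin thickness doubles the stress and, in practice, makes the
# compression side far more prone to buckling - one reason very thin skins are
# usually backed by frames and longerons (the semi-monocoque compromise).
print(skin_stress_mpa(50_000, 0.5, 0.001))  # ~64 MPa
```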
At the end of WWI, the Inter-Allied Technical Commission published details of the last Zeppelin-Lindau flying boat showing its monocoque construction. In the UK, Oswald Short built a number of experimental aircraft with metal monocoque fuselages starting with the 1920 Short Silver Streak in an attempt to convince the air ministry of its superiority over wood. Despite advantages, aluminium alloy monocoques would not become common until the mid 1930s as a result of a number of factors, including design conservatism and production setup costs. Short would eventually prove the merits of the construction method with a series of flying boats, whose metal hulls didn't absorb water as the wooden hulls did, greatly improving performance. In the United States, Northrop was a major pioneer, introducing techniques used by his own company and Douglas with the Northrop Alpha. Vehicles Race cars In motor racing, the safety of the driver depends on the car body, which must meet stringent regulations, and only a few cars have been built with monocoque structures. An aluminum alloy monocoque chassis was first used in the 1962 Lotus 25 Formula 1 race car and McLaren was the first to use carbon-fiber-reinforced polymers to construct the monocoque of the 1981 McLaren MP4/1. In 1990 the Jaguar XJR-15 became the first production car with a carbon-fiber monocoque. Road cars The term monocoque is frequently misapplied to unibody cars. Commercial car bodies are almost never true monocoques but instead use the unibody system (also referred to as unitary construction, unitary body–chassis or body–frame integral construction), in which the body of the vehicle, its floor pan, and chassis form a single structure, while the skin adds relatively little strength or stiffness. Armoured vehicles Some armoured fighting vehicles use a monocoque structure with a body shell built up from armour plates, rather than attaching them to a frame. This reduces weight for a given amount of armour. Examples include the German TPz Fuchs and RG-33. Two-wheeled vehicles French industrialist and engineer Georges Roy attempted in the 1920s to improve on the bicycle-inspired motorcycle frames of the day, which lacked rigidity. This limited their handling and therefore performance. He applied for a patent in 1926, and at the 1929 Paris Automotive Show unveiled his new motorcycle, the Art-Deco styled 1930 Majestic. Its new type of monocoque body solved the problems he had addressed, and along with better rigidity it did double-duty, as frame and bodywork provided some protection from the elements. Strictly considered, it was more of a semi-monocoque, as it used a box-section, pressed-steel frame with twin side rails riveted together via crossmembers, along with floor pans and rear and front bulkheads. A Piatti light scooter was produced in the 1950s using a monocoque hollow shell of sheet-steel pressings welded together, into which the engine and transmission were installed from underneath. The machine could be tipped onto its side, resting on the bolt-on footboards for mechanical access. A monocoque framed scooter was produced by Yamaha from 1960–1962. Model MF-1 was powered by a 50 cc engine with a three-speed transmission and a fuel tank incorporated into the frame. A monocoque-framed motorcycle was developed by Spanish manufacturer Ossa for the 1967 Grand Prix motorcycle racing season. 
Although the single-cylinder Ossa had less power than its rivals, it was lighter and its monocoque frame was much stiffer than conventional motorcycle frames, giving it superior agility on the racetrack. Ossa won four Grand Prix races with the monocoque bike before their rider died after a crash during the 250 cc event at the 1970 Isle of Man TT, causing the Ossa factory to withdraw from Grand Prix competition. Notable designers such as Eric Offenstadt and Dan Hanebrink created unique monocoque designs for racing in the early 1970s. The F750 event at the 1973 Isle of Man TT races was won by Peter Williams on the monocoque-framed John Player Special that he helped to design, based on the Norton Commando. Honda also experimented with the NR500, a monocoque Grand Prix racing motorcycle, in 1979. The bike had other innovative features, including an engine with oval-shaped cylinders, and eventually succumbed to the problems associated with attempting to develop too many new technologies at once. In 1987 John Britten developed the Aero-D One, featuring a composite monocoque chassis that weighed only . An aluminium monocoque frame was used for the first time on a mass-produced motorcycle from 2000 on Kawasaki's ZX-12R, their flagship production sportbike aimed at being the fastest production motorcycle. It was described by Cycle World in 2000 as a "monocoque backbone ... a single large diameter beam" and "Fabricated from a combination of castings and sheet-metal stampings". Single-piece carbon fiber bicycle frames are sometimes described as monocoques; however, as most use components to form a frame structure (even if molded in a single piece), these are frames, not monocoques, and the pedal-cycle industry continues to refer to them as framesets. Railroads The P40DC, P42DC and P32ACDM all utilize a monocoque shell. Rockets Various rockets have used pressure-stabilized monocoque designs, such as Atlas and Falcon 1. The Atlas was very light since a major portion of its structural support was provided by its single-wall steel balloon fuel tanks, which hold their shape while under acceleration by internal pressure. Balloon tanks are not true monocoques but act in the same way as inflatable shells. A balloon tank skin only handles tensile forces while compression is resisted by internal liquid pressure in a way similar to semi-monocoques braced by a solid frame. This becomes obvious when internal pressure is lost and the structure collapses. Monocoque tanks can also be cheaper to manufacture than more traditional orthogrids. Blue Origin's upcoming New Glenn launch vehicle will use monocoque construction on its second stage despite the mass penalty in order to reduce the cost of production. This is especially important when the stage is expendable, as with the New Glenn second stage. See also Backbone chassis Body-on-frame Coachbuilder List of carbon fiber monocoque cars Space frame Thin-shell structure Vehicle frame References Citations Bibliography Automotive chassis types Motorcycle frames Airship technology Structural engineering Aircraft components
Monocoque
[ "Engineering" ]
2,086
[ "Structural engineering", "Civil engineering", "Construction" ]
57,994
https://en.wikipedia.org/wiki/Epiphyte
An epiphyte is a plant or plant-like organism that grows on the surface of another plant and derives its moisture and nutrients from the air, rain, water (in marine environments) or from debris accumulating around it. The plants on which epiphytes grow are called phorophytes. Epiphytes take part in nutrient cycles and add to both the diversity and biomass of the ecosystem in which they occur, like any other organism. In some cases, a rainforest tree's epiphytes may total "several tonnes" (several long tons). They are an important source of food for many species. Typically, the older parts of a plant will have more epiphytes growing on them. Epiphytes differ from parasites in that they grow on other plants for physical support and do not necessarily affect the host negatively. An organism that grows on another organism that is not a plant may be called an epibiont. Epiphytes are usually found in the temperate zone (e.g., many mosses, liverworts, lichens, and algae) or in the tropics (e.g., many ferns, cacti, orchids, and bromeliads). Epiphyte species make good houseplants due to their minimal water and soil requirements. Epiphytes provide a rich and diverse habitat for other organisms including animals, fungi, bacteria, and myxomycetes. Epiphyte is one of the subdivisions of the Raunkiær system. The term epiphytic derives from the Greek epi- ("upon") and phyton ("plant"). Epiphytic plants are sometimes called "air plants" because they do not root in soil. However, that term is inaccurate, as there are many aquatic species of algae that are epiphytes on other aquatic plants (seaweeds or aquatic angiosperms). Terrestrial epiphytes The best-known epiphytic plants include mosses, orchids, and bromeliads such as Spanish moss (of the genus Tillandsia), but epiphytes may be found in every major group of the plant kingdom. Eighty-nine percent (about 24,000) of terrestrial epiphyte species are flowering plants. The second-largest group is the leptosporangiate ferns, with about 2,800 species (10% of epiphytes). About one-third of all fern species are epiphytes. The third-largest group is clubmosses, with 190 species, followed by a handful of species in each of the spikemosses, other ferns, Gnetales, and cycads. The first important monograph on epiphytic plant ecology was written by A. F. W. Schimper (1888). Assemblages of large epiphytes occur most abundantly in moist tropical forests, but mosses and lichens occur as epiphytes in almost all biomes. In Europe there are no dedicated epiphytic plants using roots, but rich assemblages of mosses and lichens grow on trees in damp areas (mainly the western coastal fringe), and the common polypody fern grows epiphytically along branches. Rarely, grass, small bushes or small trees may grow in suspended soils up in trees (typically in a rot-hole). Holo-epiphyte or hemi-epiphyte Epiphytes, however, can generally be categorized as holo-epiphytes or hemi-epiphytes. A holo-epiphyte is a plant that spends its whole life cycle without contact with the ground, whereas a hemi-epiphyte is a plant that spends only part of its life cycle without contact with the ground, before its roots reach or make contact with the ground. Orchids are a common example of holo-epiphytes and strangler figs are an example of hemi-epiphytes. Plant nutrient relations Epiphytes are not connected to the soil, and consequently must get nutrients from other sources, such as fog, dew, rain and mist, from nutrients released from ground-rooted plants by decomposition or leaching, and from dinitrogen fixation. 
Epiphytic plants attached to their hosts high in the canopy have an advantage over herbs restricted to the ground where there is less light and herbivores may be more active. Epiphytic plants are also important to certain animals that may live in their water reservoirs, such as some types of frogs and arthropods. Epiphytes can have a significant effect on the microenvironment of their host, and of ecosystems where they are abundant, as they hold water in the canopy and decrease water input to the soil. Some non-vascular epiphytes such as lichens and mosses are well known for their ability to take up water rapidly. Epiphytes create a significantly cooler and more moist environment in the host plant canopy, potentially greatly reducing water loss by the host through transpiration. Plant metabolism CAM metabolism, a water-conserving metabolism present among various plant taxa, is particularly relevant to epiphytic communities. For example, it is estimated that among epiphytic orchids, as many as 50% are likely to use it. Other relevant epiphytic families which display such metabolism are Bromeliaceae (e.g. in the genera Aechmea and Tillandsia), Cactaceae (e.g. in Rhipsalis and Epiphyllum) and Apocynaceae (e.g. in Hoya and Dischidia). Marine epiphytes The ecology of epiphytes in marine environments differs from that in terrestrial ecosystems. Epiphytes in marine systems are species of algae, bacteria, fungi, sponges, bryozoans, ascidians, protozoa, crustaceans, molluscs and any other sessile organism that grows on the surface of a plant, typically seagrasses or algae. Settlement of epiphytic species is influenced by a number of factors including light, temperature, currents, nutrients, and trophic interactions. Algae are the most common group of epiphytes in marine systems. Photosynthetic epiphytes account for a large amount of the photosynthesis in systems in which they occur, typically between 20 and 60% of the total primary production of the ecosystem. They are a broad and highly diverse group of organisms, providing food for a great number of fauna. Snails and nudibranchs are two common grazers of epiphytes. Epiphyte species composition and the amount of epiphytes can be indicative of changes in the environment. Recent increases in epiphyte abundance have been linked to excessive nitrogen entering the environment from farm runoff and stormwater. A high abundance of epiphytes is considered detrimental to the plants on which they grow, often causing damage or death, particularly in seagrasses. This is because too many epiphytes can block access to sunlight or nutrients. Epiphytes in marine systems are known to grow quickly, with very fast generation times. See also Lithophyte – a plant that grows on rocks Tillandsia – a genus of the Bromeliaceae Epiphyllum – a genus of epiphytic cacti Parasitic plant Epilith, an organism that grows on a rock Foliicolous, lichens or bryophytes that grow on leaves of vascular plants Epiphytic bacteria Epiphytic fungus Canopy soils References External links Epiphytes on a Scot's Pine in Gorbie Glen, Scotland Ecology terminology Plant morphology Plant life-forms Plants by habit
Epiphyte
[ "Biology" ]
1,608
[ "Plant morphology", "Ecology terminology", "Plant life-forms", "Plants" ]
58,005
https://en.wikipedia.org/wiki/Airship
An airship, dirigible balloon or dirigible is a type of aerostat (lighter-than-air) aircraft that can navigate through the air flying under its own power. Aerostats use buoyancy from a lifting gas that is less dense than the surrounding air to achieve the lift needed to stay airborne. In early dirigibles, the lifting gas used was hydrogen, due to its high lifting capacity and ready availability, but the inherent flammability led to several fatal accidents that rendered hydrogen airships obsolete. The alternative lifting gas, helium, is not flammable, but is rare and relatively expensive. Significant amounts were first discovered in the United States, and for a while helium was only available for airship use in North America. Most airships built since the 1960s have used helium, though some have used hot air. The envelope of an airship may form the gasbag, or it may contain a number of gas-filled cells. An airship also has engines, crew and, optionally, payload accommodation, typically housed in one or more gondolas suspended below the envelope. The main types of airship are non-rigid, semi-rigid and rigid airships. Non-rigid airships, often called "blimps", rely solely on internal gas pressure to maintain the envelope shape. Semi-rigid airships maintain their shape by internal pressure, but have some form of supporting structure, such as a fixed keel, attached to the envelope. Rigid airships have an outer structural framework that maintains the shape and carries all structural loads, while the lifting gas is contained in one or more internal gasbags or cells. Rigid airships were first flown by Count Ferdinand von Zeppelin and the vast majority of rigid airships built were manufactured by the firm he founded, Luftschiffbau Zeppelin. As a result, rigid airships are often called zeppelins. Airships were the first aircraft capable of controlled powered flight, and were most commonly used before the 1940s; their use decreased as their capabilities were surpassed by those of aeroplanes. Their decline was accelerated by a series of high-profile accidents, including the 1930 crash and burning of the British R101 in France, the 1933 and 1935 storm-related crashes of the U.S. Navy's twin helium-filled airborne-aircraft-carrier rigids, the USS Akron and USS Macon respectively, and the 1937 burning of the German hydrogen-filled Hindenburg. From the 1960s, helium airships have been used where the ability to hover for a long time outweighs the need for speed and manoeuvrability, in roles such as advertising, tourism, camera platforms, geological surveys and aerial observation. Terminology Airship During the pioneer years of aeronautics, terms such as "airship", "air-ship", "air ship" and "ship of the air" meant any kind of navigable or dirigible flying machine. In 1919 Frederick Handley Page was reported as referring to "ships of the air", with smaller passenger types as "air yachts". In the 1930s, large intercontinental flying boats were also sometimes referred to as "ships of the air" or "flying-ships". Nowadays the term "airship" is used only for powered, dirigible balloons, with sub-types being classified as rigid, semi-rigid or non-rigid. Semi-rigid architecture is the more recent, following advances in deformable structures and the need to reduce the weight and volume of airships; such craft have a minimal structure that maintains the shape jointly with the overpressure of the gas envelope. 
Aerostat An aerostat is an aircraft that remains aloft using buoyancy or static lift, as opposed to the aerodyne, which obtains lift by moving through the air. Airships are a type of aerostat. The term aerostat has also been used to indicate a tethered or moored balloon as opposed to a free-floating balloon. Aerostats today are capable of lifting a payload of to an altitude of more than above sea level. They can also stay in the air for extended periods of time, particularly when powered by an on-board generator or if the tether contains electrical conductors. Due to this capability, aerostats can be used as platforms for telecommunication services. For instance, Platform Wireless International Corporation announced in 2001 that it would use a tethered airborne payload to deliver cellular phone service to a region in Brazil. The European Union's ABSOLUTE project was also reportedly exploring the use of tethered aerostat stations to provide telecommunications during disaster response. Blimp A blimp is a non-rigid aerostat. In British usage it refers to any non-rigid aerostat, including barrage balloons and other kite balloons, having a streamlined shape and stabilising tail fins. Some blimps may be powered dirigibles, as in early versions of the Goodyear Blimp. Later Goodyear dirigibles, though technically semi-rigid airships, have still been called "blimps" by the company. Zeppelin The term zeppelin originally referred to airships manufactured by the German Zeppelin Company, which built and operated the first rigid airships in the early years of the twentieth century. The initials LZ, for Luftschiff Zeppelin (German for "Zeppelin airship"), usually prefixed their craft's serial identifiers. Streamlined rigid (or semi-rigid) airships are often referred to as "Zeppelins", because of the fame that this company acquired due to the number of airships it produced, although its early rival was the Parseval semi-rigid design. Hybrid airship Hybrid airships fly with a positive aerostatic contribution, usually equal to the empty weight of the system, and the variable payload is sustained by propulsion or aerodynamic contribution. Classification Airships are classified according to their method of construction into rigid, semi-rigid and non-rigid types. Rigid A rigid airship has a rigid framework covered by an outer skin or envelope. The interior contains one or more gasbags, cells or balloons to provide lift. Rigid airships are typically unpressurised and can be made to virtually any size. Most, but not all, of the German Zeppelin airships have been of this type. Semi-rigid A semi-rigid airship has some kind of supporting structure but the main envelope is held in shape by the internal pressure of the lifting gas. Typically the airship has an extended, usually articulated keel running along the bottom of the envelope to stop it kinking in the middle by distributing suspension loads into the envelope, while also allowing lower envelope pressures. Non-rigid Non-rigid airships are often called "blimps". Most, but not all, of the American Goodyear airships have been blimps. A non-rigid airship relies entirely on internal gas pressure to retain its shape during flight. Unlike the rigid design, the non-rigid airship's gas envelope has no compartments. However, it still typically has smaller internal bags containing air (ballonets). As altitude is increased, the lifting gas expands and air from the ballonets is expelled through valves to maintain the hull's shape. 
To return to sea level, the process is reversed: air is forced back into the ballonets by scooping air from the engine exhaust and using auxiliary blowers. Construction Envelope The envelope is the structure which contains the buoyant gas. Envelopes in the early 19th century were made from goldbeater's skin, selected for its low weight, relatively high strength, and impermeability compared to paper or linen. By the 1920s, cotton treated with natural rubber had become the predominant envelope material; the natural rubber was succeeded by neoprene in the 1930s and by Nylon and PET in the 1950s. A few airships have been metal-clad. The most successful of these was the Detroit-built ZMC-2, which logged 2,265 hours of flight time from 1929 to 1941 before being scrapped, as it was considered too small for operational use on anti-submarine patrols. The exact determination of the pressure on an airship envelope remains a difficult problem, one that has fascinated major scientists such as Theodore von Kármán. The envelope may contain ballonets (see below), allowing the density of the buoyant gas to be adjusted by adding or subtracting envelope volume. Ballonet A ballonet is an air bag inside the outer envelope of an airship which, when inflated, reduces the volume available for the lifting gas, making it more dense. Because air is also denser than the lifting gas, inflating the ballonet reduces the overall lift, while deflating it increases lift. In this way, the ballonet can be used to adjust the lift as required by controlling the buoyancy. By inflating or deflating ballonets strategically, the pilot can control the airship's altitude and attitude. Ballonets may typically be used in non-rigid or semi-rigid airships, commonly with multiple ballonets located both fore and aft to maintain balance and to control the pitch of the airship. Lifting gas Lifting gas is generally hydrogen, helium or hot air. Hydrogen gives the highest lift and is inexpensive and easily obtained, but is highly flammable and can detonate if mixed with air. Helium is completely non-flammable, but gives lower performance, and is a rare element and much more expensive. Thermal airships use a heated lifting gas, usually air, in a fashion similar to hot air balloons. The first to do so was flown in 1973 by the British company Cameron Balloons. Gondola Propulsion and control Small airships carry their engine(s) in their gondola. Where there were multiple engines on larger airships, these were placed in separate nacelles, termed power cars or engine cars. To allow asymmetric thrust to be applied for maneuvering, these power cars were mounted towards the sides of the envelope, away from the centre line gondola. This also raised them above the ground, reducing the risk of a propeller strike when landing. Widely spaced power cars were also termed wing cars, from the use of "wing" to mean being on the side of something, as in a theater, rather than the aerodynamic device. These engine cars carried a crew during flight who maintained the engines as needed, but who also worked the engine controls, throttle etc., mounted directly on the engine. Instructions were relayed to them from the pilot's station by a telegraph system, as on a ship. If fuel is burnt for propulsion, then progressive reduction in the airship's overall weight occurs. In hydrogen airships, this is usually dealt with by simply venting cheap hydrogen lifting gas. In helium airships, water is often condensed from the exhaust and stored as ballast. 
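The buoyancy arithmetic behind the ballonet and lifting-gas passages above can be illustrated with a short calculation. This is only a rough sketch using round sea-level gas densities and an invented airship size, not data for any real design.

```python
# Rough buoyancy sketch for a small non-rigid airship (illustrative figures only).
# Net lift = (air density - lifting-gas density) * gas volume - structure weight.

AIR = 1.225       # kg/m^3, sea-level air (approximate)
HELIUM = 0.169    # kg/m^3, helium at the same conditions (approximate)
HYDROGEN = 0.084  # kg/m^3, hydrogen at the same conditions (approximate)

def net_lift_kg(envelope_m3, ballonet_air_m3, gas_density, structure_kg):
    """Useful lift in kg: only the volume filled with lifting gas contributes,
    since the air inside the ballonets weighs the same as the air it displaces."""
    gas_volume = envelope_m3 - ballonet_air_m3
    buoyancy = (AIR - gas_density) * gas_volume
    return buoyancy - structure_kg

# Hypothetical 5,000 m^3 blimp with a 500 m^3 ballonet and 3,000 kg of structure.
print(net_lift_kg(5000, 500, HELIUM, 3000))    # ~1,750 kg of useful lift on helium
print(net_lift_kg(5000, 500, HYDROGEN, 3000))  # ~2,130 kg on hydrogen (higher lift)
print(net_lift_kg(5000, 1000, HELIUM, 3000))   # admitting more ballonet air trims lift down
```

The third call shows the ballonet effect described above: forcing more air into the ballonet displaces lifting-gas volume and reduces the net lift, which is how buoyancy is trimmed without venting gas.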
Fins and rudders To control the airship's direction and stability, it is equipped with fins and rudders. Fins are typically located on the tail section and provide stability and resistance to rolling. Rudders are movable surfaces on the tail that allow the pilot to steer the airship left or right. Empennage The empennage refers to the tail section of the airship, which includes the fins, rudders, and other aerodynamic surfaces. It plays a crucial role in maintaining stability and controlling the airship's attitude. Fuel and power systems Airships require a source of power to operate their propulsion systems. This includes engines, generators, or batteries, depending on the type of airship and its design. Fuel tanks or batteries are typically located within the envelope or gondola. Navigation and communication equipment To navigate safely and communicate with ground control or other aircraft, airships are equipped with a range of instruments, including GPS systems, radios, radar, and navigation lights. Landing gear Some airships have landing gear that allows them to land on runways or other surfaces. This landing gear may include wheels, skids, or landing pads. Performance Efficiency The main advantage of airships with respect to any other vehicle is that they require less energy to remain in flight, compared to other air vehicles. The proposed Varialift airship, powered by a mixture of solar-powered engines and conventional jet engines, would use only an estimated 8 percent of the fuel required by jet aircraft. Furthermore, utilizing the jet stream could allow for a faster and more energy-efficient cargo transport alternative to maritime shipping. This is one of the reasons why China has embraced their use recently. History Early pioneers 17th–18th century In 1670, the Jesuit Father Francesco Lana de Terzi, sometimes referred to as the "Father of Aeronautics", published a description of an "Aerial Ship" supported by four copper spheres from which the air was evacuated. Although the basic principle is sound, such a craft was unrealizable then and remains so to the present day, since external air pressure would cause the spheres to collapse unless their thickness was such as to make them too heavy to be buoyant. A hypothetical craft constructed using this principle is known as a vacuum airship. In 1709, the Brazilian-Portuguese Jesuit priest Bartolomeu de Gusmão made a hot air balloon, the Passarola, ascend to the skies, before an astonished Portuguese court. It would have been on August 8, 1709, when Father Bartolomeu de Gusmão held, in the courtyard of the Casa da Índia, in the city of Lisbon, the first Passarola demonstration. The balloon caught fire without leaving the ground, but, in a second demonstration, it rose to 95 meters in height. It was a small balloon of thick brown paper, filled with hot air, produced by the "fire of material contained in a clay bowl embedded in the base of a waxed wooden tray". The event was witnessed by King John V of Portugal and the future Pope Innocent XIII. A more practical dirigible airship was described by Lieutenant Jean Baptiste Marie Meusnier in a paper entitled "" (Memorandum on the equilibrium of aerostatic machines) presented to the French Academy on 3 December 1783. The 16 water-color drawings published the following year depict a streamlined envelope with internal ballonets that could be used for regulating lift: this was attached to a long carriage that could be used as a boat if the vehicle was forced to land in water. 
The airship was designed to be driven by three propellers and steered with a sail-like aft rudder. In 1784, Jean-Pierre Blanchard fitted a hand-powered propeller to a balloon, the first recorded means of propulsion carried aloft. In 1785, he crossed the English Channel in a balloon equipped with flapping wings for propulsion and a birdlike tail for steering. 19th century The 19th century saw continued attempts to add methods of propulsion to balloons. Rufus Porter built and flew scale models of his "Aerial Locomotive", but never a successful full-size implementation. The Australian William Bland sent designs for his "Atmotic airship" to the Great Exhibition held in London in 1851, where a model was displayed. This was an elongated balloon with a steam engine driving twin propellers suspended underneath. The lift of the balloon was estimated as 5 tons and the car with the fuel as weighing 3.5 tons, giving a payload of 1.5 tons. Bland believed that the machine could be driven at and could fly from Sydney to London in less than a week. In 1852, Henri Giffard became the first person to make an engine-powered flight when he flew in a steam-powered airship. Airships would develop considerably over the next two decades. In 1863, Solomon Andrews flew his aereon design, an unpowered, controllable dirigible in Perth Amboy, New Jersey and offered the device to the U.S. Military during the Civil War. He flew a later design in 1866 around New York City and as far as Oyster Bay, New York. This concept used changes in lift to provide propulsive force, and did not need a powerplant. In 1872, the French naval architect Dupuy de Lome launched a large navigable balloon, which was driven by a large propeller turned by eight men. It was developed during the Franco-Prussian war and was intended as an improvement to the balloons used for communications between Paris and the countryside during the siege of Paris, but was completed only after the end of the war. In 1872, Paul Haenlein flew an airship with an internal combustion engine running on the coal gas used to inflate the envelope, the first use of such an engine to power an aircraft. Charles F. Ritchel made a public demonstration flight in 1878 of his hand-powered one-man rigid airship, and went on to build and sell five of his aircraft. In 1874, Micajah Clark Dyer filed U.S. Patent 154,654 "Apparatus for Navigating the Air". It is believed successful trial flights were made between 1872 and 1874, but detailed dates are not available. The apparatus used a combination of wings and paddle wheels for navigation and propulsion. More details can be found in the book about his life. In 1883, the first electric-powered flight was made by Gaston Tissandier, who fitted a Siemens electric motor to an airship. The first fully controllable free flight was made in 1884 by Charles Renard and Arthur Constantin Krebs in the French Army airship La France. La France made the first flight of an airship that landed where it took off; the long, airship covered in 23 minutes with the aid of an electric motor, and a battery. It made seven flights in 1884 and 1885. In 1888, the design of the Campbell Air Ship, designed by Professor Peter C. Campbell, was built by the Novelty Air Ship Company. It was lost at sea in 1889 while being flown by Professor Hogan during an exhibition flight. 
From 1888 to 1897, Friedrich Wölfert built three airships powered by Daimler Motoren Gesellschaft-built petrol engines, the last of which, Deutschland, caught fire in flight and killed both occupants in 1897. The 1888 version used a single cylinder Daimler engine and flew from Canstatt to Kornwestheim. In 1897, an airship with an aluminum envelope was built by the Hungarian-Croatian engineer David Schwarz. It made its first flight at Tempelhof field in Berlin after Schwarz had died. His widow, Melanie Schwarz, was paid 15,000 marks by Count Ferdinand von Zeppelin to release the industrialist Carl Berg from his exclusive contract to supply Schwartz with aluminium. From 1897 to 1899, Konstantin Danilewsky, medical doctor and inventor from Kharkiv (now Ukraine, then Russian Empire), built four muscle-powered airships, of gas volume . About 200 ascents were made within a framework of experimental flight program, at two locations, with no significant incidents. Early 20th century In July 1900, the Luftschiff Zeppelin LZ1 made its first flight. This led to the most successful airships of all time: the Zeppelins, named after Count Ferdinand von Zeppelin who began working on rigid airship designs in the 1890s, leading to the flawed LZ1 in 1900 and the more successful LZ2 in 1906. The Zeppelin airships had a framework composed of triangular lattice girders covered with fabric that contained separate gas cells. At first multiplane tail surfaces were used for control and stability: later designs had simpler cruciform tail surfaces. The engines and crew were accommodated in "gondolas" hung beneath the hull driving propellers attached to the sides of the frame by means of long drive shafts. Additionally, there was a passenger compartment (later a bomb bay) located halfway between the two engine compartments. Alberto Santos-Dumont was a wealthy young Brazilian who lived in France and had a passion for flying. He designed 18 balloons and dirigibles before turning his attention to fixed-winged aircraft. On 19 October 1901 he flew his airship Number 6, from the Parc Saint Cloud to and around the Eiffel Tower and back in under thirty minutes. This feat earned him the Deutsch de la Meurthe prize of 100,000 francs. Many inventors were inspired by Santos-Dumont's small airships. Many airship pioneers, such as the American Thomas Scott Baldwin, financed their activities through passenger flights and public demonstration flights. Stanley Spencer built the first British airship with funds from advertising baby food on the sides of the envelope. Others, such as Walter Wellman and Melvin Vaniman, set their sights on loftier goals, attempting two polar flights in 1907 and 1909, and two trans-Atlantic flights in 1910 and 1912. In 1902 the Spanish engineer Leonardo Torres Quevedo published details of an innovative airship design in Spain and France titled "" ("Improvements in dirigible aerostats"). With a non-rigid body and internal bracing wires, it overcame the flaws of these types of aircraft as regards both rigid structure (zeppelin type) and flexibility, providing the airships with more stability during flight, and the capability of using heavier engines and a greater passenger load. A system called "auto-rigid". In 1905, helped by Captain A. Kindelán, he built the airship "Torres Quevedo" at the Guadalajara military base. In 1909 he patented an improved design that he offered to the French Astra company, who started mass-producing it in 1911 as the Astra-Torres airship. 
This type of envelope was employed in the United Kingdom in the Coastal, C Star, and North Sea airships. The distinctive three-lobed design was widely used during the Great War by the Entente powers for diverse tasks, principally convoy protection and anti-submarine warfare. Its success during the war even drew the attention of the Imperial Japanese Navy, which acquired a model in 1922. Torres also drew up designs for a 'docking station' and made alterations to airship designs, seeking to resolve the many problems airship engineers faced in docking dirigibles. In 1910, he proposed the idea of attaching an airship's nose to a mooring mast and allowing the airship to weathervane with changes of wind direction. A metal column erected on the ground, to the top of which the bow or stem would be directly attached by a cable, would allow a dirigible to be moored at any time, in the open, regardless of wind speed. Additionally, Torres' design called for improved and more accessible temporary landing sites, where airships were to be moored so that passengers could disembark. The final patent was presented in February 1911 in Belgium, and later to France and the United Kingdom in 1912, under the title "Improvements in Mooring Arrangements for Airships". Other airship builders were also active before the war: from 1902 the French company Lebaudy Frères specialized in semi-rigid airships such as the Patrie and the République, designed by their engineer Henri Julliot, who later worked for the American company Goodrich; the German firm Schütte-Lanz built the wooden-framed SL series from 1911, introducing important technical innovations; another German firm, Luft-Fahrzeug-Gesellschaft, built the Parseval-Luftschiff (PL) series from 1909; and the Italian Enrico Forlanini's firm had built and flown the first two Forlanini airships. On May 12, 1902, the Brazilian inventor and aeronaut Augusto Severo de Albuquerque Maranhão and his French mechanic, Georges Saché, died while flying over Paris in the airship Pax. A marble plaque at number 81 Avenue du Maine in Paris commemorates the location of Augusto Severo's accident. The Catastrophe of the Balloon "Le Pax" is a 1902 short silent film recreation of the catastrophe, directed by Georges Méliès. In Britain, the Army built their first dirigible, the Nulli Secundus, in 1907. The Navy ordered the construction of an experimental rigid in 1908. Officially known as His Majesty's Airship No. 1 and nicknamed the Mayfly, it broke its back in 1911 before making a single flight. Work on a successor did not start until 1913. A German airship passenger service known as DELAG (Deutsche Luftschiffahrts-AG) was established in 1910. In 1910 Walter Wellman unsuccessfully attempted an aerial crossing of the Atlantic Ocean in the airship America. 
The decision to end operations in direct support of armies was made by all in 1917. Many in the German military believed they had found the ideal weapon with which to counteract British naval superiority and strike at Britain itself, while more realistic airship advocates believed the zeppelin's value was as a long range scout/attack craft for naval operations. Raids on England began in January 1915 and peaked in 1916: following losses to the British defenses only a few raids were made in 1917–18, the last in August 1918. Zeppelins proved to be terrifying but inaccurate weapons. Navigation, target selection and bomb-aiming proved to be difficult under the best of conditions, and the cloud cover that was frequently encountered by the airships reduced accuracy even further. The physical damage done by airships over the course of the war was insignificant, and the deaths that they caused amounted to a few hundred. Nevertheless, the raid caused a significant diversion of British resources to defense efforts. The airships were initially immune to attack by aircraft and anti-aircraft guns: as the pressure in their envelopes was only just higher than ambient air, holes had little effect. But following the introduction of a combination of incendiary and explosive ammunition in 1916, their flammable hydrogen lifting gas made them vulnerable to the defending aeroplanes. Several were shot down in flames by British defenders, and many others destroyed in accidents. New designs capable of reaching greater altitude were developed, but although this made them immune from attack it made their bombing accuracy even worse. Countermeasures by the British included sound detection equipment, searchlights and anti-aircraft artillery, followed by night fighters in 1915. One tactic used early in the war, when their limited range meant the airships had to fly from forward bases and the only zeppelin production facilities were in Friedrichshafen, was the bombing of airship sheds by the British Royal Naval Air Service. Later in the war, the development of the aircraft carrier led to the first successful carrier-based air strike in history: on the morning of 19 July 1918, seven Sopwith 2F.1 Camels were launched from and struck the airship base at Tønder, destroying zeppelins L 54 and L 60. The British Army had abandoned airship development in favour of aeroplanes before the start of the war, but the Royal Navy had recognized the need for small airships to counteract the submarine and mine threat in coastal waters. Beginning in February 1915, they began to develop the SS (Sea Scout) class of blimp. These had a small envelope of and at first used aircraft fuselages without the wing and tail surfaces as control cars. Later, more advanced blimps with purpose-built gondolas were used. The NS class (North Sea) were the largest and most effective non-rigid airships in British service, with a gas capacity of , a crew of 10 and an endurance of 24 hours. Six bombs were carried, as well as three to five machine guns. British blimps were used for scouting, mine clearance, and convoy patrol duties. During the war, the British operated over 200 non-rigid airships. Several were sold to Russia, France, the United States, and Italy. The large number of trained crews, low attrition rate and constant experimentation in handling techniques meant that at the war's end Britain was the world leader in non-rigid airship technology. The Royal Navy continued development of rigid airships until the end of the war. 
Eight rigid airships had been completed by the armistice, (No. 9r, four 23 Class, two R23X Class and one R31 Class), although several more were in an advanced state of completion by the war's end. Both France and Italy continued to use airships throughout the war. France preferred the non-rigid type, whereas Italy flew 49 semi-rigid airships in both the scouting and bombing roles. Aeroplanes had almost entirely replaced airships as bombers by the end of the war, and Germany's remaining zeppelins were destroyed by their crews, scrapped or handed over to the Allied powers as war reparations. The British rigid airship program, which had mainly been a reaction to the potential threat of the German airships, was wound down. The interwar period Britain, the United States and Germany built rigid airships between the two world wars. Italy and France made limited use of Zeppelins handed over as war reparations. Italy, the Soviet Union, the United States and Japan mainly operated semi-rigid airships. Under the terms of the Treaty of Versailles, Germany was not allowed to build airships of greater capacity than a million cubic feet. Two small passenger airships, LZ 120 Bodensee and its sister ship LZ 121 Nordstern, were built immediately after the war but were confiscated following the sabotage of the wartime Zeppelins that were to have been handed over as war reparations: Bodensee was given to Italy and Nordstern to France. On May 12, 1926, the Italian built semi-rigid airship Norge was the first aircraft to fly over the North Pole. The British R33 and R34 were near-identical copies of the German L 33, which had come down almost intact in Yorkshire on 24 September 1916. Despite being almost three years out of date by the time they were launched in 1919, they became two of the most successful airships in British service. The creation of the Royal Air Force (RAF) in early 1918 created a hybrid British airship program. The RAF was not interested in airships while the Admiralty was, so a deal was made where the Admiralty would design any future military airships and the RAF would handle manpower, facilities and operations. On 2 July 1919, R34 began the first double crossing of the Atlantic by an aircraft. It landed at Mineola, Long Island on 6 July after 108 hours in the air; the return crossing began on 8 July and took 75 hours. This feat failed to generate enthusiasm for continued airship development, and the British airship program was rapidly wound down. During World War I, the U.S. Navy acquired its first airship, the DH-1, but it was destroyed while being inflated shortly after delivery to the Navy. After the war, the U.S. Navy contracted to buy the R 38, which was being built in Britain, but before it was handed over it was destroyed because of a structural failure during a test flight. America then started constructing the , designed by the Bureau of Aeronautics and based on the Zeppelin L 49. Assembled in Hangar No. 1 and first flown on 4 September 1923 at Lakehurst, New Jersey, it was the first airship to be inflated with the noble gas helium, which was then so scarce that the Shenandoah contained most of the world's supply. A second airship, , was built by the Zeppelin company as compensation for the airships that should have been handed over as war reparations according to the terms of the Versailles Treaty but had been sabotaged by their crews. This construction order saved the Zeppelin works from the threat of closure. 
The success of the Los Angeles, which was flown successfully for eight years, encouraged the U.S. Navy to invest in its own, larger airships. When the Los Angeles was delivered, the two airships had to share the limited supply of helium, and thus alternated operating and overhauls. In 1922, Sir Dennistoun Burney suggested a plan for a subsidised air service throughout the British Empire using airships (the Burney Scheme). Following the coming to power of Ramsay MacDonald's Labour government in 1924, the scheme was transformed into the Imperial Airship Scheme, under which two airships were built, one by a private company and the other by the Royal Airship Works under Air Ministry control. The two designs were radically different. The "capitalist" ship, the R100, was more conventional, while the "socialist" ship, the R101, had many innovative design features. Construction of both took longer than expected, and the airships did not fly until 1929. Neither airship was capable of the service intended, though the R100 did complete a proving flight to Canada and back in 1930. On 5 October 1930, the R101, which had not been thoroughly tested after major modifications, crashed on its maiden voyage to India at Beauvais in France killing 48 of the 54 people aboard. Among the dead were the craft's chief designer and the Secretary of State for Air. The disaster ended British interest in airships. In 1925 the Zeppelin company started construction of the Graf Zeppelin (LZ 127), the largest airship that could be built in the company's existing shed, and intended to stimulate interest in passenger airships. The Graf Zeppelin burned blau gas, similar to propane, stored in large gas bags below the hydrogen cells, as fuel. Since its density was similar to that of air, it avoided the weight change as fuel was used, and thus the need to valve hydrogen. The Graf Zeppelin had an impressive safety record, flying over (including the first circumnavigation of the globe by airship) without a single passenger injury. The U.S. Navy experimented with the use of airships as airborne aircraft carriers, developing an idea pioneered by the British. The USS Los Angeles was used for initial experiments, and the and , the world's largest at the time, were used to test the principle in naval operations. Each carried four F9C Sparrowhawk fighters in its hangar, and could carry a fifth on the trapeze. The idea had mixed results. By the time the Navy started to develop a sound doctrine for using the ZRS-type airships, the last of the two built, USS Macon, had been wrecked. Meanwhile, the seaplane had become more capable, and was considered a better investment. Eventually, the U.S. Navy lost all three U.S.-built rigid airships to accidents. USS Shenandoah flew into a severe thunderstorm over Noble County, Ohio while on a poorly planned publicity flight on 3 September 1925. It broke into pieces, killing 14 of its crew. USS Akron was caught in a severe storm and flown into the surface of the sea off the shore of New Jersey on 3 April 1933. It carried no life boats and few life vests, so 73 of its crew of 76 died from drowning or hypothermia. USS Macon was lost after suffering a structural failure offshore near Point Sur Lighthouse on 12 February 1935. The failure caused a loss of gas, which was made much worse when the aircraft was driven over pressure height causing it to lose too much helium to maintain flight. 
Only two of its crew of 83 died in the crash thanks to the inclusion of life jackets and inflatable rafts after the Akron disaster. The Empire State Building was completed in 1931 with a dirigible mast, in anticipation of future passenger airship service, but no airship ever used the mast. Various entrepreneurs experimented with commuting and shipping freight via airship. In the 1930s, the German Zeppelins successfully competed with other means of transport. They could carry significantly more passengers than other contemporary aircraft while providing amenities similar to those on ocean liners, such as private cabins, observation decks, and dining rooms. Less importantly, the technology was potentially more energy-efficient than heavier-than-air designs. Zeppelins were also faster than ocean liners. On the other hand, operating airships was quite involved. Often the crew would outnumber the passengers, and on the ground large teams were needed to assist with mooring, while very large hangars were required at airports. By the mid-1930s, only Germany still pursued airship development. The Zeppelin company continued to operate the Graf Zeppelin on passenger service between Frankfurt and Recife in Brazil, taking 68 hours. Even with the small Graf Zeppelin, the operation was almost profitable. In the mid-1930s, work began on an airship designed specifically to operate a passenger service across the Atlantic. The Hindenburg (LZ 129) completed a successful 1936 season, carrying passengers between Lakehurst, New Jersey and Germany. The year 1937 brought the most spectacular and widely remembered airship accident. Approaching the Lakehurst mooring mast minutes before landing on 6 May 1937, the Hindenburg suddenly burst into flames and crashed to the ground. Of the 97 people aboard, 35 died (13 passengers and 22 aircrew); one American ground-crewman was also killed. The disaster happened before a large crowd, was filmed, and a radio news reporter was describing the arrival as it occurred. This was a disaster that theatergoers could see and hear in newsreels. The Hindenburg disaster shattered public confidence in airships and brought a definitive end to their "golden age". The day after the Hindenburg disaster, the Graf Zeppelin landed safely in Germany after its return flight from Brazil. This was the last international passenger airship flight. Hindenburg's identical sister ship, the Graf Zeppelin II (LZ 130), could not carry commercial passengers without helium, which the United States refused to sell to Germany. The Graf Zeppelin II made several test flights and conducted some electronic espionage until 1939, when it was grounded at the beginning of the war. The two Graf Zeppelins were scrapped in April 1940. Development of airships continued only in the United States and, to a lesser extent, the Soviet Union. The Soviet Union had several semi-rigid and non-rigid airships. The semi-rigid dirigible SSSR-V6 OSOAVIAKhIM was among the largest of these craft, and it set the endurance record of the time, staying aloft for over 130 hours. It crashed into a mountain in 1938, killing 13 of the 19 people on board. While this was a severe blow to the Soviet airship program, the Soviets continued to operate non-rigid airships until 1950. World War II While Germany determined that airships were obsolete for military purposes in the coming war and concentrated on the development of aeroplanes, the United States pursued a program of military airship construction even though it had not developed a clear military doctrine for airship use. 
When the Japanese attacked Pearl Harbor on 7 December 1941, bringing the United States into World War II, the U.S. Navy had 10 nonrigid airships: 4 K-class: K-2, K-3, K-4 and K-5 designed as patrol ships, all built in 1938. 3 L-class: L-1, L-2 and L-3 as small training ships, produced in 1938. 1 G-class, built in 1936 for training. 2 TC-class that were older patrol airships designed for land forces, built in 1933. The U.S. Navy acquired both from the United States Army in 1938. Only K- and TC-class airships were suitable for combat and they were quickly pressed into service against Japanese and German submarines, which were then sinking American shipping within visual range of the American coast. U.S. Navy command, remembering airship's anti-submarine success in World War I, immediately requested new modern antisubmarine airships and on 2 January 1942 formed the ZP-12 patrol unit based in Lakehurst from the four K airships. The ZP-32 patrol unit was formed from two TC and two L airships a month later, based at NAS Moffett Field in Sunnyvale, California. An airship training base was created there as well. The status of submarine-hunting Goodyear airships in the early days of World War II has created significant confusion. Although various accounts refer to airships Resolute and Volunteer as operating as "privateers" under a Letter of Marque, Congress never authorized a commission, nor did the President sign one. In the years 1942–44, approximately 1,400 airship pilots and 3,000 support crew members were trained in the military airship crew training program and the airship military personnel grew from 430 to 12,400. The U.S. airships were produced by the Goodyear factory in Akron, Ohio. From 1942 till 1945, 154 airships were built for the U.S. Navy (133 K-class, 10 L-class, seven G-class, four M-class) and five L-class for civilian customers (serial numbers L-4 to L-8). The primary airship tasks were patrol and convoy escort near the American coastline. They also served as an organization centre for the convoys to direct ship movements, and were used in naval search and rescue operations. Rarer duties of the airships included aerophoto reconnaissance, naval mine-laying and mine-sweeping, parachute unit transport and deployment, cargo and personnel transportation. They were deemed quite successful in their duties with the highest combat readiness factor in the entire U.S. air force (87%). During the war, some 532 ships without airship escort were sunk near the U.S. coast by enemy submarines. Only one ship, the tanker Persephone, of the 89,000 or so in convoys escorted by blimps was sunk by the enemy. Airships engaged submarines with depth charges and, less frequently, with other on-board weapons. They were excellent at driving submarines down, where their limited speed and range prevented them from attacking convoys. The weapons available to airships were so limited that until the advent of the homing torpedo they had little chance of sinking a submarine. Only one airship was ever destroyed by U-boat: on the night of 18/19 July 1943, the K-74 from ZP-21 division was patrolling the coastline near Florida. Using radar, the airship located a surfaced German submarine. The K-74 made her attack run but the U-boat opened fire first. K-74s depth charges did not release as she crossed the U-boat and the K-74 received serious damage, losing gas pressure and an engine but landing in the water without loss of life. 
The crew was rescued by patrol boats in the morning, but one crewman, Aviation Machinist's Mate Second Class Isadore Stessel, died from a shark attack. The U-boat, , was slightly damaged and the next day or so was attacked by aircraft, sustaining damage that forced it to return to base. It was finally sunk on 24 August 1943 by a British Vickers Wellington near Vigo, Spain. Fleet Airship Wing One operated from Lakehurst, New Jersey, Glynco, Georgia, Weeksville, North Carolina, South Weymouth NAS Massachusetts, Brunswick NAS and Bar Harbor Maine, Yarmouth, Nova Scotia, and Argentia, Newfoundland. Some Navy blimps saw action in the European war theater. In 1944–45, the U.S. Navy moved an entire squadron of eight Goodyear K class blimps (K-89, K-101, K-109, K-112, K-114, K-123, K-130, & K-134) with flight and maintenance crews from Weeksville Naval Air Station in North Carolina to Naval Air Station Port Lyautey, French Morocco. Their mission was to locate and destroy German U-boats in the relatively shallow waters around the Strait of Gibraltar where magnetic anomaly detection (MAD) was viable. PBY aircraft had been searching these waters but MAD required low altitude flying that was dangerous at night for these aircraft. The blimps were considered a perfect solution to establish a 24/7 MAD barrier (fence) at the Straits of Gibraltar with the PBYs flying the day shift and the blimps flying the night shift. The first two blimps (K-123 & K-130) left South Weymouth NAS on 28 May 1944 and flew to Argentia, Newfoundland, the Azores, and finally to Port Lyautey where they completed the first transatlantic crossing by nonrigid airships on 1 June 1944. The blimps of USN Blimp Squadron ZP-14 (Blimpron 14, aka The Africa Squadron) also conducted mine-spotting and mine-sweeping operations in key Mediterranean ports and various escorts including the convoy carrying United States President Franklin D. Roosevelt and British Prime Minister Winston Churchill to the Yalta Conference in 1945. Airships from the ZP-12 unit took part in the sinking of the last U-boat before German capitulation, sinking the U-881 on 6 May 1945 together with destroyers USS Atherton and USS Moberly. Other airships patrolled the Caribbean, Fleet Airship Wing Two, Headquartered at Naval Air Station Richmond, covered the Gulf of Mexico from Richmond and Key West, Florida, Houma, Louisiana, as well as Hitchcock and Brownsville, Texas. FAW 2 also patrolled the northern Caribbean from San Julian, the Isle of Pines (now called Isla de la Juventud) and Guantánamo Bay, Cuba as well as Vernam Field, Jamaica. Navy blimps of Fleet Airship Wing Five, (ZP-51) operated from bases in Trinidad, British Guiana and Paramaribo, Suriname. Fleet Airship Wing Four operated along the coast of Brazil. Two squadrons, VP-41 and VP-42 flew from bases at Amapá, Igarapé-Açu, São Luís Fortaleza, Fernando de Noronha, Recife, Maceió, Ipitanga (near Salvador, Bahia), Caravelas, Vitória and the hangar built for the Graf Zeppelin at Santa Cruz, Rio de Janeiro. Fleet Airship Wing Three operated squadrons, ZP-32 from Moffett Field, ZP-31 at NAS Santa Ana, and ZP-33 at NAS Tillamook, Oregon. Auxiliary fields were at Del Mar, Lompoc, Watsonville and Eureka, California, North Bend and Astoria, Oregon, as well as Shelton and Quillayute in Washington. From 2 January 1942 until the end of war airship operations in the Atlantic, the blimps of the Atlantic fleet made 37,554 flights and flew 378,237 hours. 
Of the over 70,000 ships in convoys protected by blimps, only one was sunk by a submarine while under blimp escort. The Soviet Union flew a single airship during the war. The W-12, built in 1939, entered service in 1942 for paratrooper training and equipment transport. It made 1432 flights with 300 metric tons of cargo until 1945. On 1 February 1945, the Soviets constructed a second airship, a Pobeda-class (Victory-class) unit (used for mine-sweeping and wreckage clearing in the Black Sea) that crashed on 21 January 1947. Another W-class – W-12bis Patriot – was commissioned in 1947 and was mostly used until the mid-1950s for crew training, parades and propaganda. Postwar period Although airships are no longer used for major cargo and passenger transport, they are still used for other purposes such as advertising, sightseeing, surveillance, research and advocacy. There were several studies and proposals for nuclear-powered airships, starting with a 1954 study by F.W. Locke Jr for US Navy. In 1957 Edwin J. Kirschner published the book The Zeppelin in the Atomic Age, which promoted the use of atomic airships. In 1959 Goodyear presented a plan for nuclear-powered airship for both military and commercial use. Several other proposals and papers were published during the next decades. In the 1980s, Per Lindstrand and his team introduced the GA-42 airship, the first airship to use fly-by-wire flight control, which considerably reduced the pilot's workload. An airship was prominently featured in the James Bond film A View to a Kill, released in 1985. The Skyship 500 had the livery of Zorin Industries. The world's largest thermal airship () was constructed by the Per Lindstrand company for French botanists in 1993. The AS-300 carried an underslung raft, which was positioned by the airship on top of tree canopies in the rain forest, allowing the botanists to carry out their treetop research without significant damage to the rainforest. When research was finished at a given location, the airship returned to pick up and relocate the raft. In June 1987, the U.S. Navy awarded a US$168.9 million contract to Westinghouse Electric and Airship Industries of the UK to find out whether an airship could be used as an airborne platform to detect the threat of sea-skimming missiles, such as the Exocet. At 2.5 million cubic feet, the Westinghouse/Airship Industries Sentinel 5000 (Redesignated YEZ-2A by the U.S. Navy) prototype design was to have been the largest blimp ever constructed. Additional funding for the Naval Airship Program was killed in 1995 and development was discontinued. The SVAM CA-80 airship, which was produced in 2000 by Shanghai Vantage Airship Manufacture Co., Ltd., had a successful trial flight in September 2001. This was designed for advertisement and propagation, air-photo, scientific test, tour and surveillance duties. It was certified as a grade-A Hi-Tech introduction program (No. 20000186) in Shanghai. The CAAC authority granted a type design approval and certificate of airworthiness for the airship. In the 1990s the Zeppelin company returned to the airship business. Their new model, designated the Zeppelin NT, made its maiden flight on 18 September 1997. there were four NT aircraft flying, a fifth was completed in March 2009 and an expanded NT-14 (14,000 cubic meters of helium, capable of carrying 19 passengers) was under construction. One was sold to a Japanese company, and was planned to be flown to Japan in the summer of 2004. 
Due to delays getting permission from the Russian government, the company decided to transport the airship to Japan by sea. One of the four NT craft is in South Africa carrying diamond detection equipment from De Beers, an application at which the very stable low vibration NT platform excels. The project included design adaptations for high temperature operation and desert climate, as well as a separate mooring mast and a very heavy mooring truck. NT-4 belonged to Airship Ventures of Moffett Field, Mountain View in the San Francisco Bay Area, and provided sight-seeing tours. Blimps are used for advertising and as TV camera platforms at major sporting events. The most iconic of these are the Goodyear Blimps. Goodyear operates three blimps in the United States, and The Lightship Group, now The AirSign Airship Group, operates up to 19 advertising blimps around the world. Airship Management Services owns and operates three Skyship 600 blimps. Two operate as advertising and security ships in North America and the Caribbean. Airship Ventures operated a Zeppelin NT for advertising, passenger service and special mission projects. They were the only airship operator in the U.S. authorized to fly commercial passengers, until closing their doors in 2012. Skycruise Switzerland AG owns and operates two Skyship 600 blimps. One operates regularly over Switzerland used on sightseeing tours. The Switzerland-based Skyship 600 has also played other roles over the years. For example, it was flown over Athens during the 2004 Summer Olympics as a security measure. In November 2006, it carried advertising calling it The Spirit of Dubai as it began a publicity tour from London to Dubai, UAE on behalf of The Palm Islands, the world's largest man-made islands created as a residential complex. Los Angeles-based Worldwide Aeros Corp. produces FAA Type Certified Aeros 40D Sky Dragon airships. In May 2006, the U.S. Navy began to fly airships again after a hiatus of nearly 44 years. The program uses a single American Blimp Company A-170 nonrigid airship, with designation MZ-3A. Operations focus on crew training and research, and the platform integrator is Northrop Grumman. The program is directed by the Naval Air Systems Command and is being carried out at NAES Lakehurst, the original centre of U.S. Navy lighter-than-air operations in previous decades. In November 2006 the U.S. Army bought an A380+ airship from American Blimp Corporation through a Systems level contract with Northrop Grumman and Booz Allen Hamilton. The airship started flight tests in late 2007, with a primary goal of carrying of payload to an altitude of under remote control and autonomous waypoint navigation. The program will also demonstrate carrying of payload to The platform could be used for intelligence collection. In 2008, the CA-150 airship was launched by Vantage Airship. This is an improved modification of model CA-120 and completed manufacturing in 2008. With larger volume and increased passenger capacity, it is the largest manned nonrigid airship in China at present. In late June 2014 the Electronic Frontier Foundation flew the GEFA-FLUG AS 105 GD/4 blimp AE Bates (owned by, and in conjunction with, Greenpeace) over the NSA's Bluffdale Utah Data Center in protest. Postwar projects Hybrid designs such as the Heli-Stat airship/helicopter, the Aereon aerostatic/aerodynamic craft, and the CycloCrane (a hybrid aerostatic/rotorcraft), struggled to take flight. 
The CycloCrane was also notable in that the airship's envelope rotated along its longitudinal axis. In 2005, a short-lived project of the U.S. Defense Advanced Research Projects Agency (DARPA) was Walrus HULA, which explored the potential for using airships as long-distance, heavy-lift craft. The primary goal of the research program was to determine the feasibility of building an airship capable of carrying of payload a distance of and landing on an unimproved location without the use of external ballast or ground equipment (such as masts). In 2005, two contractors, Lockheed Martin and US Aeros Airships, were each awarded approximately $3 million to do feasibility studies of designs for WALRUS. Congress removed funding for Walrus HULA in 2006. Modern Airships Military In 2010, the U.S. Army awarded a $517 million (£350.6 million) contract to Northrop Grumman and partner Hybrid Air Vehicles to develop a Long Endurance Multi-Intelligence Vehicle (LEMV) system, in the form of three HAV 304s. The project was cancelled in February 2012 because it was behind schedule and over budget, and because of the forthcoming U.S. withdrawal from Afghanistan, where it was intended to be deployed. Following this, the HAV 304 was repurchased by Hybrid Air Vehicles, then modified and reassembled in Bedford, UK, and renamed the Airlander 10. As of 2018, it was being tested in readiness for its UK flight test programme. A-NSE, a French company, manufactures and operates airships and aerostats. For two years, A-NSE has been testing its airships for the French Army. Airships and aerostats are operated to provide intelligence, surveillance, and reconnaissance (ISR) support. Their airships include many innovative features such as water-ballast take-off and landing systems, variable-geometry envelopes and thrust-vectoring systems. The U.S. government has funded two major projects in the high-altitude arena. The Composite Hull High Altitude Powered Platform (CHHAPP) is sponsored by the U.S. Army Space and Missile Defense Command. This aircraft is also sometimes called the HiSentinel High-Altitude Airship. The prototype ship made a five-hour test flight in September 2005. The second project, the high-altitude airship (HAA), is sponsored by DARPA. In 2005, DARPA awarded a contract for nearly $150 million to Lockheed Martin for prototype development. First flight of the HAA was planned for 2008 but suffered programmatic and funding delays. The HAA project evolved into the High Altitude Long Endurance-Demonstrator (HALE-D). The U.S. Army and Lockheed Martin launched the first-of-its-kind HALE-D on July 27, 2011. After it attained an altitude of , an anomaly led the company to abort the mission. The airship made a controlled descent in an unpopulated area of southwest Pennsylvania. On 31 January 2006 Lockheed Martin made the first flight of their secretly built hybrid airship, designated the P-791. The design is very similar to the SkyCat, unsuccessfully promoted for many years by the British company Advanced Technologies Group (ATG). Dirigibles have been used in the War in Afghanistan for reconnaissance purposes, as they allow constant monitoring of a specific area through cameras mounted on the airships. Passenger transport In the 1990s, the successor of the original Zeppelin company in Friedrichshafen, Zeppelin Luftschifftechnik GmbH, re-engaged in airship construction. The first experimental craft (later christened Friedrichshafen) of the type "Zeppelin NT" flew in September 1997. 
Though larger than common blimps, the Neue Technologie (New Technology) zeppelins are much smaller than their giant ancestors and not actually Zeppelin-types in the classical sense. They are sophisticated semirigids. Apart from the greater payload, their main advantages compared to blimps are higher speed and excellent maneuverability. Meanwhile, several Zeppelin NT have been produced and operated profitably in joyrides, research flights and similar applications. In June 2004, a Zeppelin NT was sold for the first time to a Japanese company, Nippon Airship Corporation, for tourism and advertising mainly around Tokyo. It was also given a role at the 2005 Expo in Aichi. The aircraft began a flight from Friedrichshafen to Japan, stopping at Geneva, Paris, Rotterdam, Munich, Berlin, Stockholm and other European cities to carry passengers on short legs of the flight. Russian authorities denied overflight permission, so the airship had to be dismantled and shipped to Japan rather than following the historic Graf Zeppelin flight from Germany to Japan. In 2008, Airship Ventures Inc. began operations from Moffett Federal Airfield near Mountain View, California and until November 2012 offered tours of the San Francisco Bay Area for up to 12 passengers. Exploration In November 2005, De Beers, a diamond mining company, launched an airship exploration program over the remote Kalahari Desert. A Zeppelin NT, equipped with a Bell Geospace gravity gradiometer, was used to find potential diamond mines by scanning the local geography for low-density rock formations, known as kimberlite pipes. On 21 September 2007, the airship was severely damaged by a whirlwind while in Botswana. One crew member, who was on watch aboard the moored craft, was slightly injured but released after overnight observation in hospital. Thermal Several companies, such as Cameron Balloons in Bristol, United Kingdom, build hot-air airships. These combine the structures of both hot-air balloons and small airships. The envelope is the normal cigar shape, complete with tail fins, but is inflated with hot air instead of helium to provide the lifting force. A small gondola, carrying the pilot and passengers, a small engine, and the burners to provide the hot air are suspended below the envelope, beneath an opening through which the burners protrude. Hot-air airships typically cost less to buy and maintain than modern helium-based blimps, and can be quickly deflated after flights. This makes them easy to carry in trailers or trucks and inexpensive to store. They are usually very slow moving, with a typical top speed of . They are mainly used for advertising, but at least one has been used in rainforests for wildlife observation, as they can be easily transported to remote areas. Unmanned remote Remote-controlled (RC) airships, a type of unmanned aerial system (UAS), are sometimes used for commercial purposes such as advertising and aerial video and photography as well as recreational purposes. They are particularly common as an advertising mechanism at indoor stadiums. While RC airships are sometimes flown outdoors, doing so for commercial purposes is illegal in the US. Commercial use of an unmanned airship must be certified under part 121. Adventures In 2008, French adventurer Stephane Rousson attempted to cross the English Channel with a muscular pedal powered airship. Stephane Rousson also flies the Aérosail, a sky sailing yacht. 
Current design projects Today, with large, fast, and more cost-efficient fixed-wing aircraft and helicopters, it is unknown whether huge airships can operate profitably in regular passenger transport though, as energy costs rise, attention is once again returning to these lighter-than-air vessels as a possible alternative. At the very least, the idea of comparatively slow, "majestic" cruising at relatively low altitudes and in comfortable atmosphere certainly has retained some appeal. There have been some niches for airships in and after World War II, such as long-duration observations, antisubmarine patrol, platforms for TV camera crews, and advertising; these generally require only small and flexible craft, and have thus generally been better fitted for cheaper (non-passenger) blimps. Heavy lifting It has periodically been suggested that airships could be employed for cargo transport, especially delivering extremely heavy loads to areas with poor infrastructure over great distances. This has also been called roadless trucking. Also, airships could be used for heavy lifting over short distances (e.g. on construction sites); this is described as heavy-lift, short-haul. In both cases, the airships are heavy haulers. One recent enterprise of this sort was the Cargolifter project, in which a hybrid (thus not entirely Zeppelin-type) airship even larger than Hindenburg was projected. Around 2000, CargoLifter AG built the world's largest self-supporting hall, measuring long, wide and high about south of Berlin. In May 2002, the project was stopped for financial reasons; the company had to file bankruptcy. The enormous CargoLifter hangar was later converted to house the Tropical Islands Resort. Although no rigid airships are currently used for heavy lifting, hybrid airships are being developed for such purposes. AEREON 26, tested in 1971, was described in John McPhee's The Deltoid Pumpkin Seed. An impediment to the large-scale development of airships as heavy haulers has been figuring out how they can be used in a cost-efficient way. In order to have a significant economic advantage over ocean transport, cargo airships must be able to deliver their payload faster than ocean carriers but more cheaply than airplanes. William Crowder, a fellow at the Logistics Management Institute, has calculated that cargo airships are only economical when they can transport 500 to 1,000 tons, approximately the same as a super-jumbo aircraft. The large initial investment required to build such a large airship has been a hindrance to production, especially given the risk inherent in a new technology. The chief commercial officer of the company hoping to sell the LMH-1, a cargo airship currently being developed by Lockheed Martin, believes that airships can be economical in hard-to-reach locations such as mining operations in northern Canada that currently require ice roads. Metal-clad airships A metal-clad airship has a very thin metal envelope, rather than the usual fabric. The shell may be either internally braced or monocoque as in the ZMC-2, which flew many times in the 1920s, the only example ever to do so. The shell may be gas-tight as in a non-rigid blimp, or the design may employ internal gas bags as in a rigid airship. Compared to a fabric envelope the metal cladding is expected to be more durable. Hybrid airships A hybrid airship is a general term for an aircraft that combines characteristics of heavier-than-air (aeroplane or helicopter) and lighter-than-air technology. 
Examples include helicopter/airship hybrids intended for heavy lift applications and dynamic lift airships intended for long-range cruising. Most airships, when fully loaded with cargo and fuel, are usually ballasted to be heavier than air, and thus must use their propulsion system and shape to create aerodynamic lift, necessary to stay aloft. All airships can be operated to be slightly heavier than air at periods during flight (descent). Accordingly, the term "hybrid airship" refers to craft that obtain a significant portion of their lift from aerodynamic lift or other kinetic means. For example, the Aeroscraft is a buoyancy assisted air vehicle that generates lift through a combination of aerodynamics, thrust vectoring and gas buoyancy generation and management, and for much of the time will fly heavier than air. Aeroscraft is Worldwide Aeros Corporation's continuation of DARPA's now cancelled Walrus HULA (Hybrid Ultra Large Aircraft) project. The Patroller P3 hybrid airship developed by Advanced Hybrid Aircraft Ltd, BC, Canada, is a relatively small () buoyant craft, manned by the crew of five and with the endurance of up to 72 hours. The flight-tests with the 40% RC scale model proved that such a craft can be launched and landed without a large team of strong ground-handlers. Design features a special "winglet" for aerodynamic lift control. Airships in space exploration Airships have been proposed as a potential cheap alternative to surface rocket launches for achieving Earth orbit. JP Aerospace have proposed the Airship to Orbit project, which intends to float a multi-stage airship up to mesospheric altitudes of 55 km (180,000 ft) and then use ion propulsion to accelerate to orbital speed. At these heights, air resistance would not be a significant problem for achieving such speeds. The company has not yet built any of the three stages. NASA has proposed the High Altitude Venus Operational Concept, which comprises a series of five missions including crewed missions to the atmosphere of Venus in airships. Pressures on the surface of the planet are too high for human habitation, but at a specific altitude the pressure is equal to that found on Earth and this makes Venus a potential target for human colonization. Hypothetically, there could be an airship lifted by a vacuum—that is, by material that can contain nothing at all inside but withstand the atmospheric pressure from the outside. It is, at this point, science fiction, although NASA has posited that some kind of vacuum airship could eventually be used to explore the surface of Mars. Cruiser feeder transport airship EU FP7 MAAT Project has studied an innovative cruiser/feeder airship system, for the stratosphere with a cruiser remaining airborne for a long time and feeders connecting it to the ground and flying as piloted balloons. Airships for humanitarian and cargo transport Google co-founder Sergey Brin founded LTA Research in 2015 to develop airships for humanitarian and cargo transport. The company's 124-meter-long airship Pathfinder 1 received from the FAA a special airworthiness certificate for the helium-filled airship in September 2023. The certificate allowed the largest airship since the ill-fated Hindenburg to begin flight tests at Moffett Field, a joint civil-military airport in Silicon Valley. Comparison with heavier-than-air aircraft The advantage of airships over aeroplanes is that static lift sufficient for flight is generated by the lifting gas and requires no engine power. 
This was an immense advantage before the middle of World War I and remained an advantage for long-distance or long-duration operations until World War II. Modern concepts for high-altitude airships include photovoltaic cells to reduce the need to land to refuel, so that they can remain in the air until consumables expire. This similarly reduces or eliminates the need to consider variable fuel weight in buoyancy calculations. The disadvantages are that an airship has a very large reference area and a comparatively large drag coefficient, and thus a larger drag force than aeroplanes and even helicopters. Given the large frontal area and wetted surface of an airship, a practical limit is reached around , only about one-third the typical airspeed of a modern commercial airplane. Thus, airships are used where speed is not critical. The lift capability of an airship is equal to the buoyant force minus the weight of the airship. This assumes standard air-temperature and pressure conditions. Corrections are usually made for water vapour and impurity of the lifting gas, as well as the percentage of inflation of the gas cells at liftoff. Based on specific lift (lifting force per unit volume of gas), the greatest static lift is provided by hydrogen (11.15 N/m3 or 71 lbf/1000 cu ft), with helium (10.37 N/m3 or 66 lbf/1000 cu ft) a close second. In addition to static lift, an airship can obtain a certain amount of dynamic lift from its engines. Dynamic lift in past airships has been about 10% of the static lift. Dynamic lift allows an airship to "take off heavy" from a runway in a manner similar to fixed-wing and rotary-wing aircraft. This requires additional weight in engines, fuel, and landing gear, negating some of the static lift capacity. The altitude at which an airship can fly largely depends on how much lifting gas it can lose due to expansion before stasis is reached. The ultimate altitude record for a rigid airship was set in 1917 by the L-55 under the command of Hans-Kurt Flemming, when he forced the airship to its record height while attempting to cross France after the "Silent Raid" on London. The L-55 lost lift during the descent to lower altitudes over Germany and crashed as a result. While such waste of gas was necessary for the survival of airships in the later years of World War I, it was impractical for commercial operations, or for operations of helium-filled military airships. The highest flight made by a hydrogen-filled passenger airship was on the Graf Zeppelin's around-the-world flight. The greatest disadvantage of the airship is size, which is essential to increasing performance. As size increases, the problems of ground handling increase geometrically. As the German Navy changed from the P class of 1915 with a volume of over to the larger Q class of 1916, the R class of 1917, and finally the W class of 1918, at almost ground handling problems reduced the number of days the Zeppelins were able to make patrol flights. Availability declined from 34% in 1915 to 24.3% in 1916, and finally to 17.5% in 1918. So long as the power-to-weight ratios of aircraft engines remained low and specific fuel consumption high, the airship had an edge for long-range or long-duration operations. As those figures changed, the balance shifted rapidly in the aeroplane's favour. By mid-1917, the airship could no longer survive in a combat situation where the threat was aeroplanes. 
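To make the specific-lift figures above concrete, the short sketch below works through an illustrative static-lift calculation. It is not taken from any airship reference; the envelope volume and airship mass are invented example values, and the calculation ignores the corrections for humidity, gas purity, and partial inflation mentioned above.

```python
# Illustrative static-lift estimate based on the specific-lift figures quoted in the text.
# The envelope volume and airship mass used below are hypothetical example values.

SPECIFIC_LIFT_N_PER_M3 = {
    "hydrogen": 11.15,  # newtons of lift per cubic metre of gas (about 71 lbf/1000 cu ft)
    "helium": 10.37,    # newtons of lift per cubic metre of gas (about 66 lbf/1000 cu ft)
}

G = 9.81  # standard gravity, m/s^2


def net_static_lift_kg(gas: str, envelope_volume_m3: float, airship_mass_kg: float) -> float:
    """Payload mass (kg) liftable statically: buoyant force minus the airship's own weight,
    assuming standard temperature and pressure and fully inflated, pure gas cells."""
    gross_lift_n = SPECIFIC_LIFT_N_PER_M3[gas] * envelope_volume_m3
    gross_lift_kg = gross_lift_n / G           # convert lifting force to an equivalent mass
    return gross_lift_kg - airship_mass_kg     # lift capability = buoyancy - airship weight


if __name__ == "__main__":
    # Hypothetical 200,000 m^3 envelope carrying 150 tonnes of structure, engines, fuel and crew.
    for gas in ("hydrogen", "helium"):
        payload = net_static_lift_kg(gas, envelope_volume_m3=200_000, airship_mass_kg=150_000)
        print(f"{gas}: net static lift of roughly {payload / 1000:.0f} tonnes")
```

On these assumed figures, the hydrogen-filled ship lifts roughly 16 tonnes more payload than the same ship filled with helium, which illustrates why hydrogen remained attractive despite its flammability.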
By the late 1930s, the airship barely had an advantage over the aeroplane on intercontinental over-water flights, and that advantage had vanished by the end of World War II. This comparison, however, applies to direct tactical confrontations. Currently, high-altitude airship projects are planned to survey areas hundreds of kilometres in radius, often much farther than the normal engagement range of a military aeroplane. For example, a radar mounted on a vessel platform high has a radio horizon at range, while a radar at altitude has a radio horizon at range. This is particularly important for detecting low-flying cruise missiles or fighter-bombers. Safety The most commonly used lifting gas, helium, is inert and therefore presents no fire risk. A series of vulnerability tests were done by the UK Defence Evaluation and Research Agency (DERA) on a Skyship 600. Since the internal gas pressure was maintained at only 1–2% above the surrounding air pressure, the vehicle proved highly tolerant of physical damage and of attack by small-arms fire or missiles. Several hundred high-velocity bullets were fired through the hull, and even two hours later the vehicle would have been able to return to base. Ordnance passed through the envelope without causing critical helium loss. The results, and a related mathematical model, have been presented for the hypothetical case of a Zeppelin NT-sized airship. In all instances of light armament fire evaluated under both test and live conditions, the airship was able to complete its mission and return to base. Licensing In the United Kingdom, the basic pilot licence for airships is the PPL(As), or private pilot licence, which requires a minimum of 35 hours of instruction on airships. To fly commercially, a Commercial Pilot Licence (Airships) is required. See also Airborne aircraft carrier Aircruise Airship hangar Barrage balloon Conrad Airship CA 80 (1975–1977) Evolutionary Air and Space Global Laser Engagement High-altitude platform station Hyperion, fictional airship type. List of airship accidents List of British airships List of current airships in the United States List of Zeppelins Mystery airship Stratellite SVAM CA-80 Worldwide Aeros Corp Zeppelin mail Notes References Citations Bibliography Althoff, William F., USS Los Angeles: The Navy's Venerable Airship and Aviation Technology, 2003. Ausrotas, R. A., "Basic Relationships for LTA Technical Analysis," Proceedings of the Interagency Workshop on Lighter-Than-Air Vehicles, Massachusetts Institute of Technology Flight Transportation Library, 1975. Archbold, Rick and Ken Marschall, Hindenburg, an Illustrated History, 1994. Bailey, D. B., and Rappoport, H. K., Maritime Patrol Airship Study, Naval Air Development Center, 1980. Botting, Douglas, Dr. Eckener's Dream Machine, New York: Henry Holt and Company, 2001. Burgess, Charles P., Airship Design, (1927) 2004. Cross, Wilbur, Disaster at the Pole, 2002. Dick, Harold G., with Robinson, Douglas H., Graf Zeppelin & Hindenburg, Washington, D.C.: Smithsonian Institution Press, 1985. Ege, L., Balloons and Airships, Blandford, 1973. Frederick, Arthur, et al., Airship Saga: The history of airships seen through the eyes of the men who designed, built, and flew them, 1982. Griehl, Manfred and Joachim Dressel, Zeppelin! The German Airship Story, 1990. Higham, Robin, The British Rigid Airship, 1908–1931: A Study in Weapons Policy, London: G. T. Foulis, 1961. Keirns, Aaron J., America's Forgotten Airship Disaster: The Crash of the USS Shenandoah, Howard: Little River Publishing, 1998. 
Khoury, Gabriel Alexander (ed.), Airship Technology (Cambridge Aerospace Series), 2004. McKee, Alexander, Ice Crash, 1980. Morgala, Andrzej, Sterowce w II Wojnie Światowej (Airships in the Second World War), Lotnictwo, 1992. Mowthorpe, Ces, Battlebags: British Airships of the First World War, 1995. Robinson, Douglas H., Giants in the Sky, University of Washington Press, 1973. Robinson, Douglas H., The Zeppelin in Combat: A History of the German Naval Airship Division, 1912–1918, Atglen, PA: Schiffer Publications, 1994. Smith, Richard K., The Airships Akron & Macon: Flying Aircraft Carriers of the United States Navy, Annapolis, MD: US Naval Institute Press, 1965. Shock, James R. and Smith, David R., The Goodyear Airships, Bloomington, Illinois: Airship International Press, 2002. Sprigg, C., The Airship: Its Design, History, Operation and Future, London: Sampson Low, Marston and Company, 1931. Toland, John, Ships in the Sky, New York: Henry Holt; London: Muller, 1957. Vaeth, J. Gordon, Blimps & U-Boats, Annapolis, Maryland: US Naval Institute Press, 1992. Ventry, Lord and Kolesnik, Eugene, Jane's Pocket Book 7: Airship Development, 1976. Ventry, Lord and Kolesnik, Eugene M., Airship Saga, Poole, Dorset: Blandford Press, 1982, p. 97. Winter, Lumen and Degner, Glenn, Minute Epics of Flight, New York: Grosset & Dunlap, 1933. US War Department, Airship Aerodynamics: Technical Manual, (1941) 2003. External links Should Airships Make a Comeback? – Veritasium YouTube channel
Airship
[ "Physics", "Chemistry" ]
15,765
[ "Statistical mechanics", "Gases", "Phases of matter", "Matter" ]
58,017
https://en.wikipedia.org/wiki/Microwave%20oven
A microwave oven or simply microwave is an electric oven that heats and cooks food by exposing it to electromagnetic radiation in the microwave frequency range. This induces polar molecules in the food to rotate and produce thermal energy in a process known as dielectric heating. Microwave ovens heat foods quickly and efficiently because excitation is fairly uniform in the outer of a homogeneous, high-water-content food item. The development of the cavity magnetron in the United Kingdom made possible the production of electromagnetic waves of a small enough wavelength (microwaves) to efficiently heat up water molecules. American electrical engineer Percy Spencer is generally credited with developing and patenting the world's first commercial microwave oven post World War II from British radar technology developed before and during the war. Named the "RadaRange", it was first sold in 1947. Raytheon later licensed its patents for a home-use microwave oven that was introduced by Tappan in 1955, but it was still too large and expensive for general home use. Sharp Corporation introduced the first microwave oven with a turntable between 1964 and 1966. The countertop microwave oven was introduced in 1967 by the Amana Corporation. After microwave ovens became affordable for residential use in the late 1970s, their use spread into commercial and residential kitchens around the world, and prices fell rapidly during the 1980s. In addition to cooking food, microwave ovens are used for heating in many industrial processes. Microwave ovens are a common kitchen appliance and are popular for reheating previously cooked foods and cooking a variety of foods. They rapidly heat foods which can easily burn or turn lumpy if cooked in conventional pans, such as hot butter, fats, chocolate, or porridge. Microwave ovens usually do not directly brown or caramelize food, since they rarely attain the necessary temperature to produce Maillard reactions. Exceptions occur in cases where the oven is used to heat frying-oil and other oily items (such as bacon), which attain far higher temperatures than that of boiling water. Microwave ovens have a limited role in professional cooking, because the boiling-range temperatures of a microwave oven do not produce the flavorful chemical reactions that frying, browning, or baking at a higher temperature produces. However, such high-heat sources can be added to microwave ovens in the form of a convection microwave oven. History Early developments The exploitation of high-frequency radio waves for heating substances was made possible by the development of vacuum tube radio transmitters around 1920. By 1930 the application of short waves to heat human tissue had developed into the medical therapy of diathermy. At the 1933 Chicago World's Fair, Westinghouse demonstrated the cooking of foods between two metal plates attached to a 10 kW, 60 MHz shortwave transmitter. The Westinghouse team, led by I. F. Mouromtseff, found that foods like steaks and potatoes could be cooked in minutes. The 1937 United States patent application by Bell Laboratories states: However, lower-frequency dielectric heating, as described in the aforementioned patent, is (like induction heating) an electromagnetic heating effect, the result of the so-called near-field effects that exist in an electromagnetic cavity that is small compared with the wavelength of the electromagnetic field. This patent proposed radio frequency heating, at 10 to 20 megahertz (wavelength 30 to 15 meters, respectively). 
Heating from microwaves that have a wavelength that is small relative to the cavity (as in a modern microwave oven) is due to "far-field" effects that are due to classical electromagnetic radiation that describes freely propagating light and microwaves suitably far from their source. Nevertheless, the primary heating effect of all types of electromagnetic fields at both radio and microwave frequencies occurs via the dielectric heating effect, as polarized molecules are affected by a rapidly alternating electric field. Cavity magnetron The invention of the cavity magnetron made possible the production of electromagnetic waves of a small enough wavelength (microwaves). The cavity magnetron was a crucial component in the development of short wavelength radar during World War II. In 1937–1940, a multi-cavity magnetron was built by British physicist Sir John Turton Randall, FRSE and coworkers, for the British and American military radar installations in World War II. A higher-powered microwave generator that worked at shorter wavelengths was needed, and in 1940, at the University of Birmingham in England, Randall and Harry Boot produced a working prototype. They invented a valve that could produce pulses of microwave radio energy at a wavelength of 10 cm, an unprecedented discovery. Sir Henry Tizard traveled to the US in late September 1940 to offer Britain's most valuable technical secrets including the cavity magnetron in exchange for US financial and industrial support (see Tizard Mission). An early 6 kW version, built in England by the General Electric Company Research Laboratories, Wembley, London, was given to the U.S. government in September 1940. The cavity magnetron was later described by American historian James Phinney Baxter III as "[t]he most valuable cargo ever brought to our shores". Contracts were awarded to Raytheon and other companies for the mass production of the cavity magnetron. Discovery In 1945, the heating effect of a high-power microwave beam was independently and accidentally discovered by Percy Spencer, an American self-taught engineer from Howland, Maine. Employed by Raytheon at the time, he noticed that microwaves from an active radar set he was working on started to melt a candy bar he had in his pocket. The first food deliberately cooked by Spencer was popcorn, and the second was an egg, which exploded in the face of one of the experimenters. To verify his finding, Spencer created a high-density electromagnetic field by feeding microwave power from a magnetron into a metal box from which it had no way to escape. When food was placed in the box with the microwave energy, the temperature of the food rose rapidly. On 8 October 1945, Raytheon filed a United States patent application for Spencer's microwave cooking process, and an oven that heated food using microwave energy from a magnetron was soon placed in a Boston restaurant for testing. Another independent discovery of microwave oven technology was by British scientists, including James Lovelock, who in the 1950s used it to reanimate cryogenically frozen hamsters. Commercial availability In 1947, Raytheon built the "Radarange", the first commercially available microwave oven. It was almost tall, weighed and cost about US$5,000 ($ in dollars) each. It consumed 3 kilowatts, about three times as much as today's microwave ovens, and was water-cooled. The name was the winning entry in an employee contest. 
An early Radarange was installed (and remains) in the galley of the nuclear-powered passenger/cargo ship NS Savannah. An early commercial model introduced in 1954 consumed 1.6 kilowatts and sold for US$2,000 to US$3,000. Raytheon licensed its technology to the Tappan Stove company of Mansfield, Ohio in 1952. Under contract to Whirlpool, Westinghouse, and other major appliance manufacturers looking to add matching microwave ovens to their conventional oven line, Tappan produced several variations of their built-in model from roughly 1955 to 1960. Due to maintenance requirements (some units were water-cooled), the need for built-in installation, and a price of US$1,295, sales were limited. Japan's Sharp Corporation began manufacturing microwave ovens in 1961. Between 1964 and 1966, Sharp introduced the first microwave oven with a turntable, an alternative means to promote more even heating of food. In 1965, Raytheon, looking to expand their Radarange technology into the home market, acquired Amana to provide more manufacturing capability. In 1967, they introduced the first popular home model, the countertop Radarange, at a price of US$495. Unlike the Sharp models, a motor-driven mode stirrer in the top of the oven cavity rotated, allowing the food to remain stationary. In the 1960s, Litton bought Studebaker's Franklin Manufacturing assets, which had been manufacturing magnetrons and building and selling microwave ovens similar to the Radarange. Litton developed a new configuration of the microwave oven: the short, wide shape that is now common. The magnetron feed was also unique. This resulted in an oven that could survive a no-load condition: an empty microwave oven where there is nothing to absorb the microwaves. The new oven was shown at a trade show in Chicago, and helped begin a rapid growth of the market for home microwave ovens. Sales volume of 40,000 units for the U.S. industry in 1970 grew to one million by 1975. Market penetration was even faster in Japan, due to a less expensive re-engineered magnetron. Several other companies joined in the market, and for a time most systems were built by defence contractors, who were most familiar with the magnetron. Litton was particularly well known in the restaurant business. Residential use While uncommon today, combination microwave-ranges were offered by major appliance manufacturers through much of the 1970s as a natural progression of the technology. Both Tappan and General Electric offered units that appeared to be conventional stove top/oven ranges, but included microwave capability in the conventional oven cavity. Such ranges were attractive to consumers since both microwave energy and conventional heating elements could be used simultaneously to speed cooking, and there was no loss of countertop space. The proposition was also attractive to manufacturers as the additional component cost could better be absorbed compared with countertop units, where pricing was increasingly market-sensitive. By 1972, Litton (Litton Atherton Division, Minneapolis) introduced two new microwave ovens, priced at $349 and $399, to tap into a market estimated at $750 million by 1976, according to Robert I. Bruder, president of the division. While prices remained high, new features continued to be added to home models. Amana introduced automatic defrost in 1974 on their RR-4D model, and was the first to offer a microprocessor-controlled digital control panel in 1975 with their RR-6 model. 
The late 1970s saw an explosion of low-cost countertop models from many major manufacturers. Formerly found only in large industrial applications, microwave ovens increasingly became a standard fixture of residential kitchens in developed countries. By 1986, roughly 25% of households in the U.S. owned a microwave oven, up from only about 1% in 1971; the U.S. Bureau of Labor Statistics reported that over 90% of American households owned a microwave oven in 1997. In Australia, a 2008 market research study found that 95% of kitchens contained a microwave oven and that 83% of them were used daily. In Canada, fewer than 5% of households had a microwave oven in 1979, but more than 88% of households owned one by 1998. In France, 40% of households owned a microwave oven in 1994, but that number had increased to 65% by 2004. Adoption has been slower in less-developed countries, as households with disposable income concentrate on more important household appliances like refrigerators and ovens. In India, for example, only about 5% of households owned a microwave oven in 2013, well behind refrigerators at 31% ownership. However, microwave ovens are gaining popularity. In Russia, for example, the number of households with a microwave oven grew from almost 24% in 2002 to almost 40% in 2008. Almost twice as many households in South Africa owned microwave ovens in 2008 (38.7%) as in 2002 (19.8%). Microwave oven ownership in Vietnam in 2008 was at 16% of households, versus 30% ownership of refrigerators; this rate was up significantly from 6.7% microwave oven ownership in 2002, with 14% ownership for refrigerators that year. Consumer household microwave ovens usually come with a cooking power of between 600 and 1200 watts. Microwave cooking power, also referred to as output wattage, is lower than the oven's input wattage, which is the manufacturer's listed power rating. Household microwave ovens vary in size, but typical models have broadly similar internal cavity volumes and compact external dimensions; countertop units generally weigh between 23 and 45 lb. Microwave ovens can be of turntable or flatbed design. Turntable ovens include a glass plate or tray. Flatbed ones do not include a plate, so they have a flat and wider cavity. By position and type, the US DOE classifies them as (1) countertop or (2) over-the-range and built-in (a wall oven for a cabinet or a drawer model). A traditional microwave oven has only two power output levels: fully on and fully off. Intermediate heat settings are achieved with duty-cycle modulation, switching between full power and off every few seconds, with more time on for higher settings. An inverter type, however, can sustain a genuinely reduced power output for a lengthy duration without having to switch itself off and on repeatedly. Apart from offering superior cooking ability, these microwave ovens are generally more energy-efficient. In recent years, the majority of countertop microwave ovens (regardless of brand) sold in the United States have been manufactured by the Midea Group. Categories Domestic microwave ovens are typically marked with the microwave-safe symbol, next to the device's approximate IEC 60705 output power rating, in watts (typically either 600 W, 700 W, 800 W, 900 W, or 1000 W), and a voluntary Heating Category (A-E). Principles A microwave oven heats food by passing microwave radiation through it. Microwaves are a form of non-ionizing electromagnetic radiation with a frequency in the so-called microwave region (300 MHz to 300 GHz). 
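The free-space wavelength corresponding to a given frequency follows directly from the relation wavelength = c / f. As a quick, hedged illustration (this sketch is not part of the original article, and actual heating also depends on the food's dielectric properties, not on wavelength alone), the short Python snippet below computes the wavelengths for the two oven frequencies discussed in the next paragraphs:

```python
# Free-space wavelength for the frequencies used by microwave ovens.
# Illustrative only; printed values are rounded.

C = 299_792_458  # speed of light in m/s

def wavelength_cm(frequency_hz: float) -> float:
    """Return the free-space wavelength in centimetres."""
    return C / frequency_hz * 100

print(round(wavelength_cm(2.45e9), 1))  # ~12.2 cm, typical consumer ovens
print(round(wavelength_cm(915e6), 1))   # ~32.8 cm, large industrial/commercial ovens
```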
Microwave ovens use frequencies in one of the ISM (industrial, scientific, medical) bands, which are otherwise used for communication among devices that do not need a licence to operate, so that ovens do not interfere with vital radio services. It is a common misconception that microwave ovens heat food by operating at a special resonance of water molecules in the food. Instead, microwave ovens heat food by causing polar molecules to rotate under the influence of a rapidly alternating electric field, and a higher oven wattage results in faster cooking times. Typically, consumer ovens work around a nominal 2.45 gigahertz (GHz), a wavelength of about 12.2 cm, in the 2.4 GHz to 2.5 GHz ISM band, while large industrial/commercial ovens often use 915 megahertz (MHz), a wavelength of about 32.8 cm. Among other differences, the longer wavelength of a commercial microwave oven allows the initial heating effects to begin deeper within the food or liquid, and therefore become evenly spread within its bulk sooner, as well as raising the temperature deep within the food more quickly. A microwave oven takes advantage of the electric dipole structure of water molecules, fats, and many other substances in the food, using a process known as dielectric heating. These molecules have a partial positive charge at one end and a partial negative charge at the other. In an alternating electric field, they will constantly spin around as they continually try to align themselves with the electric field. This can happen over a wide range of frequencies. The electric field's energy is absorbed by the dipole molecules as rotational energy. The rotating molecules then collide with non-dipole molecules, making them move faster as well. This energy is shared deeper into the substance as molecular rotation and translational movement occur, signifying an increase in the temperature of the food. Once the electrical field's energy is initially absorbed, heat will gradually spread through the object similarly to any other heat transfer by contact with a hotter body. Defrosting Microwave heating is more efficient on liquid water than on frozen water, where the movement of molecules is more restricted. Defrosting is done at a low power setting, allowing time for conduction to carry heat to still frozen parts of food. Dielectric heating of liquid water is also temperature-dependent: At 0 °C, dielectric loss is greatest at a field frequency of about 10 GHz, and for higher water temperatures at higher field frequencies. Fats and sugar Sugars and triglycerides (fats and oils) absorb microwaves due to the dipole moments of their hydroxyl groups or ester groups. Microwave heating is less efficient on fats and sugars than on water because they have a smaller molecular dipole moment. Although fats and sugar typically absorb energy less efficiently than water, paradoxically their temperatures rise faster and higher than water when cooking: Fats and oils require less energy delivered per gram of material to raise their temperature by 1 °C than does water (they have lower specific heat capacity) and they begin cooling off by "boiling" only after reaching a higher temperature than water (the temperature they require to vaporize is higher), so inside microwave ovens they normally reach higher temperatures – sometimes much higher. 
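The specific-heat argument above can be made concrete with a small calculation. The sketch below is a simplified illustration only: the value for water is standard, the value for cooking oil is an assumed round figure (real oils vary), and actual heating rates also depend on how strongly each substance absorbs microwaves.

```python
# Temperature rise per unit of absorbed energy, ignoring absorption efficiency.
# The specific heat of oil below is an assumed round value; real oils vary.

SPECIFIC_HEAT_J_PER_G_K = {
    "water": 4.18,
    "cooking oil (assumed)": 2.0,
}

def temperature_rise_k(energy_j: float, mass_g: float, specific_heat: float) -> float:
    return energy_j / (mass_g * specific_heat)

for name, c in SPECIFIC_HEAT_J_PER_G_K.items():
    # 1000 J absorbed by 100 g of material
    print(name, round(temperature_rise_k(1000, 100, c), 2), "K")
# water: ~2.39 K, oil: ~5.0 K. The same absorbed energy warms oil roughly twice
# as much, and oil can keep climbing well past 100 °C because it does not boil
# away at that temperature.
```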
This can induce temperatures in oil or fatty foods like bacon far above the boiling point of water, and high enough to induce some browning reactions, much in the manner of conventional broiling (UK: grilling), braising, or deep fat frying. Consumers most often notice the effect as unexpected damage to plastic containers, because microwaving foods high in sugar, starch, or fat generates higher temperatures. Foods high in water content and with little oil rarely exceed the boiling temperature of water and therefore do not damage plastic. Cookware Cookware used in a microwave oven must be transparent to microwaves. Conductive cookware, such as metal pots, reflects microwaves and prevents the microwaves from reaching the food. Cookware made of materials with high electrical permittivity will absorb microwaves, resulting in the cookware heating rather than the food. Cookware made of melamine resin is a common type of cookware that will heat in a microwave oven, reducing the effectiveness of the microwave oven and creating a hazard from burns or shattered cookware. Thermal runaway Microwave heating can cause localized thermal runaway in some materials with low thermal conductivity which also have dielectric constants that increase with temperature. An example is glass, which can exhibit thermal runaway in a microwave oven to the point of melting if preheated. Additionally, microwaves can melt certain types of rocks, producing small quantities of molten rock. Some ceramics can also be melted, and may even become clear upon cooling. Thermal runaway is more typical of electrically conductive liquids such as salty water. Penetration Another misconception is that microwave ovens cook food "from the inside out", meaning from the center of the entire mass of food outwards. This idea arises from heating behavior seen if an absorbent layer of water lies beneath a less absorbent drier layer at the surface of a food; in this case, the deposition of heat energy inside a food can exceed that on its surface. This can also occur if the inner layer has a lower heat capacity than the outer layer, causing it to reach a higher temperature, or even if the inner layer is more thermally conductive than the outer layer, making it feel hotter despite having a lower temperature. In most cases, however, with a uniformly structured or reasonably homogeneous food item, microwaves are absorbed in the outer layers of the item at a similar level to that of the inner layers. Depending on water content, the depth of initial heat deposition may be several centimetres or more with microwave ovens, in contrast with broiling/grilling (infrared) or convection heating methods, which thinly deposit heat at the food surface. Penetration depth of microwaves depends on food composition and the frequency, with lower microwave frequencies (longer wavelengths) penetrating deeper. Energy consumption In use, microwave ovens can be as low as 50% efficient at converting electricity into microwaves, but energy-efficient models can exceed 64% efficiency. Stovetop cooking is 40–90% efficient, depending on the type of appliance used. Because they are used fairly infrequently, the average residential microwave oven consumes only 72 kWh per year. Globally, microwave ovens used an estimated 77 TWh per year in 2018, or 0.3% of global electricity generation. A 2000 study by Lawrence Berkeley National Laboratory found that the average microwave drew almost 3 watts of standby power when not being used, which would total approximately 26 kWh per year. 
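The annual standby-energy figures quoted above follow from simple arithmetic: a constant draw in watts multiplied by the hours in a year. A minimal sketch:

```python
# Convert a constant standby draw in watts into kilowatt-hours per year.

HOURS_PER_YEAR = 24 * 365

def annual_standby_kwh(watts: float) -> float:
    return watts * HOURS_PER_YEAR / 1000

print(round(annual_standby_kwh(3.0), 1))  # ~26.3 kWh/year for the ~3 W average measured in 2000
print(round(annual_standby_kwh(1.0), 1))  # ~8.8 kWh/year for a 1 W draw
```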
New efficiency standards imposed in 2016 by the United States Department of Energy require less than 1 watt, or approximately 9 kWh per year, of standby power for most types of microwave ovens. Components A microwave oven generally consists of: a high-voltage DC power source, either a large high-voltage transformer with a voltage doubler (a high-voltage capacitor and a diode) or an electronic power converter usually based around an inverter; a cavity magnetron, which converts the high-voltage DC electric energy to microwave radiation; a magnetron control circuit (usually with a microcontroller); a short waveguide, to couple microwave power from the magnetron into the cooking chamber; a turntable and/or a metal waveguide stirring fan; and a control panel. In most ovens, the magnetron is driven by a linear transformer which can only feasibly be switched completely on or off. (One variant of the GE Spacemaker had two taps on the transformer primary, for high and low power modes.) Usually the choice of power level does not affect the intensity of the microwave radiation; instead, the magnetron is cycled on and off every few seconds, thus altering the large-scale duty cycle. Newer models use inverter power supplies that use pulse-width modulation to provide effectively continuous heating at reduced power settings, so that foods are heated more evenly at a given power level and can be heated more quickly without being damaged by uneven heating. The microwave frequencies used in microwave ovens are chosen based on regulatory and cost constraints. The first is that they should be in one of the industrial, scientific, and medical (ISM) frequency bands set aside for unlicensed purposes. For household purposes, 2.45 GHz has the advantage over 915 MHz in that 915 MHz is only an ISM band in some countries (ITU Region 2) while 2.45 GHz is available worldwide. Three additional ISM bands exist in the microwave frequencies, but are not used for microwave cooking. Two of them are centered on 5.8 GHz and 24.125 GHz, but are not used for microwave cooking because of the very high cost of power generation at these frequencies. The third, centered on 433.92 MHz, is a narrow band that would require expensive equipment to generate sufficient power without creating interference outside the band, and is only available in some countries. The cooking chamber is similar to a Faraday cage to prevent the waves from coming out of the oven. Even though there is no continuous metal-to-metal contact around the rim of the door, choke connections on the door edges act like metal-to-metal contact, at the frequency of the microwaves, to prevent leakage. The oven door usually has a window for easy viewing, with a layer of conductive mesh some distance from the outer panel to maintain the shielding. Because the size of the perforations in the mesh is much less than the microwaves' wavelength (12.2 cm for the usual 2.45 GHz), microwave radiation cannot pass through the door, while visible light (with its much shorter wavelength) can. Control panel Modern microwave ovens use either an analog dial-type timer or a digital control panel for operation. Control panels feature an LED, LCD or vacuum fluorescent display, buttons for entering the cook time and a power level selection feature. A defrost option is typically offered, as either a power level or a separate function. Some models include pre-programmed settings for different food types, typically taking weight as input. 
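As a rough illustration of the duty-cycle power control described above, the sketch below computes the average power delivered when the magnetron is simply cycled fully on and fully off; the on/off interval lengths are hypothetical example values, not taken from any particular oven.

```python
# Average output power under on/off duty-cycle control of the magnetron.
# Interval lengths below are made-up examples.

def average_power(full_power_w: float, seconds_on: float, seconds_off: float) -> float:
    duty_cycle = seconds_on / (seconds_on + seconds_off)
    return full_power_w * duty_cycle

print(average_power(1000, 10, 10))          # 500.0 W average: a "50%" setting, but always 1000 W while on
print(round(average_power(1000, 2, 5), 1))  # ~285.7 W average: a defrost-like 2 s on / 5 s off cycle

# An inverter supply instead holds a genuinely reduced continuous output,
# which is why inverter ovens heat more gently and evenly at low settings.
```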
In the 1990s, brands such as Panasonic and GE began offering models with a scrolling-text display showing cooking instructions. Power settings are commonly implemented not by actually varying the power output, but by switching the emission of microwave energy off and on at intervals. The highest setting thus represents continuous power. Defrost might represent power for two seconds followed by no power for five seconds. To indicate cooking has completed, an audible warning such as a bell or a beeper is usually present, and/or "End" usually appears on the display of a digital microwave. Microwave control panels are often considered awkward to use and are frequently cited as case studies in user interface design. Variants and accessories A variant of the conventional microwave oven is the convection microwave oven. A convection microwave oven is a combination of a standard microwave oven and a convection oven. It allows food to be cooked quickly, yet come out browned or crisped, as from a convection oven. Convection microwave ovens are more expensive than conventional microwave ovens. Some convection microwave ovens (those with exposed heating elements) can produce smoke and burning odors as food spatter from earlier microwave-only use is burned off the heating elements. Some ovens use high-speed air; these are known as impingement ovens and are designed to cook food quickly in restaurants, but cost more and consume more power. In 2000, some manufacturers began adding high-power quartz halogen bulbs to their convection microwave oven models, marketing them under names such as "Speedcook", "Advantium", "Lightwave" and "Optimawave" to emphasize their ability to cook food rapidly and with good browning. The bulbs heat the food's surface with infrared (IR) radiation, browning surfaces as in a conventional oven. The food browns while also being heated by the microwave radiation and heated through conduction through contact with heated air. The IR energy delivered to the outer surface of food by the lamps is sufficient to initiate browning (caramelization) in foods primarily made up of carbohydrates, and Maillard reactions in foods primarily made up of protein. These reactions in food produce a texture and taste similar to that typically expected of conventional oven cooking rather than the bland boiled and steamed taste that microwave-only cooking tends to create. In order to aid browning, sometimes an accessory browning tray is used, usually composed of glass or porcelain. It makes food crisp by oxidizing the top layer until it turns brown. Ordinary plastic cookware is unsuitable for this purpose because it could melt. Frozen dinners, pies, and microwave popcorn bags often contain a susceptor made from thin aluminium film in the packaging or included on a small paper tray. The metal film absorbs microwave energy efficiently and consequently becomes extremely hot and radiates in the infrared, concentrating the heating of oil for popcorn or even browning surfaces of frozen foods. Heating packages or trays containing susceptors are designed for a single use and are then discarded as waste. Heating characteristics Microwave ovens produce heat directly within the food, but despite the common misconception that microwaved food cooks from the inside out, 2.45 GHz microwaves can only penetrate a few centimetres into most foods. The inside portions of thicker foods are mainly heated by heat conducted inward from the outer layers. 
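A minimal sketch of the penetration behaviour just described: microwave power deposited in a lossy, moist food falls off roughly exponentially with depth. The penetration depth used below (the depth at which the remaining power has dropped to 1/e) is an assumed illustrative value, not a measured figure for any particular food.

```python
# Exponential fall-off of microwave power with depth in a lossy (moist) food.
# The penetration depth is an assumed illustrative value.

import math

PENETRATION_DEPTH_CM = 1.5  # assumed value for a high-water-content food

def remaining_power_fraction(depth_cm: float) -> float:
    return math.exp(-depth_cm / PENETRATION_DEPTH_CM)

for depth in (0.5, 1.0, 2.0, 4.0):
    print(f"{depth} cm: {remaining_power_fraction(depth):.2f}")
# Most of the energy is deposited in the first few centimetres; deeper regions
# are heated mainly by conduction from those outer layers, as stated above.
```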
Uneven heating in microwaved food can be partly due to the uneven distribution of microwave energy inside the oven, and partly due to the different rates of energy absorption in different parts of the food. The first problem is reduced by a stirrer, a type of fan that reflects microwave energy to different parts of the oven as it rotates, or by a turntable or carousel that turns the food; turntables, however, may still leave spots, such as the center of the oven, which receive uneven energy distribution. The location of dead spots and hot spots in a microwave oven can be mapped out by placing a damp piece of thermal paper in the oven: when the water-saturated paper is subjected to the microwave radiation it becomes hot enough to darken the dye, providing a visual representation of the field. If multiple layers of paper are stacked in the oven with sufficient distance between them, a three-dimensional map can be created. Many store receipts are printed on thermal paper, which allows this to be done easily at home. The second problem is due to food composition and geometry, and must be addressed by the cook, by arranging the food so that it absorbs energy evenly, and periodically testing and shielding any parts of the food that overheat. In some materials with low thermal conductivity, where the dielectric constant increases with temperature, microwave heating can cause localized thermal runaway. Under certain conditions, glass can exhibit thermal runaway in a microwave oven to the point of melting. Due to this phenomenon, microwave ovens set at too-high power levels may even start to cook the edges of frozen food while the inside of the food remains frozen. Another case of uneven heating can be observed in baked goods containing berries. In these items, the berries absorb more energy than the drier surrounding bread and cannot dissipate the heat due to the low thermal conductivity of the bread. Often this results in overheating the berries relative to the rest of the food. "Defrost" oven settings either use low power levels or repeatedly turn the power off and on, intended to allow time for heat to be conducted within frozen foods from areas that absorb heat more readily to those which heat more slowly. In turntable-equipped ovens, food heats more evenly when placed off-center on the turntable tray instead of exactly in the center. There are microwave ovens on the market that allow full-power defrosting. They do this by exploiting the properties of the LSM modes of the electromagnetic field. LSM full-power defrosting may actually achieve more even results than slow defrosting. Microwave heating can be deliberately uneven by design. Some microwavable packages (notably pies) may include materials that contain ceramic or aluminium flakes, which are designed to absorb microwaves and heat up, which aids in baking or crust preparation by depositing more energy shallowly in these areas. The technical term for such a microwave-absorbing patch is a susceptor. Such ceramic patches affixed to cardboard are positioned next to the food, and are typically smoky blue or gray in colour, usually making them easily identifiable; the cardboard sleeves included with Hot Pockets, which have a silver surface on the inside, are a good example of such packaging. Microwavable cardboard packaging may also contain overhead ceramic patches which function in the same way. 
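The thermal-paper mapping described above can be mimicked with an idealized model. The sketch below treats the field as a single one-dimensional standing wave, which is a gross simplification of a real three-dimensional multimode cavity, but it shows why hot spots repeat at roughly half-wavelength intervals.

```python
# Idealized 1-D standing-wave heating pattern; a real oven cavity is 3-D and multimode.

import math

WAVELENGTH_CM = 12.2   # free-space wavelength at 2.45 GHz
CAVITY_WIDTH_CM = 30   # assumed cavity dimension

def relative_heating(x_cm: float) -> float:
    """Relative power absorbed at position x for a single standing wave."""
    return math.sin(2 * math.pi * x_cm / WAVELENGTH_CM) ** 2

for x in range(0, CAVITY_WIDTH_CM + 1, 3):
    print(f"{x:2d} cm |{'#' * int(20 * relative_heating(x))}")
# Maxima repeat every half wavelength (about 6 cm here), which is why stirrers
# and turntables are used to move the pattern relative to the food.
```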
Effects on food and nutrients Any form of cooking diminishes overall nutrient content in food, particularly water-soluble vitamins common in vegetables, but the key variables are how much water is used in the cooking, how long the food is cooked, and at what temperature. Nutrients are primarily lost by leaching into cooking water, which tends to make microwave cooking effective at preserving them, given the shorter cooking times it requires and the fact that the water being heated is the water in the food itself. Like other heating methods, microwaving converts vitamin B12 from an active to inactive form; the amount of conversion depends on the temperature reached, as well as the cooking time. Boiled food reaches a maximum of 100 °C (the boiling point of water), whereas microwaved food can get internally hotter than this, leading to faster breakdown of the vitamin. The higher rate of loss is partially offset by the shorter cooking times required. Spinach retains nearly all its folate when cooked in a microwave oven; when boiled, it loses about 77%, leaching nutrients into the cooking water. Bacon cooked by microwave oven has significantly lower levels of nitrosamines than conventionally cooked bacon. Steamed vegetables tend to maintain more nutrients when microwaved than when cooked on a stovetop. Microwave blanching is 3–4 times more effective than boiled-water blanching for retaining the water-soluble vitamins folate, thiamin and riboflavin, with the exception of ascorbic acid (vitamin C), of which 29% is lost (compared with a 16% loss with boiled-water blanching). Safety benefits and features All microwave ovens use a timer to switch off the oven at the end of the cooking time. Microwave ovens heat food without getting hot themselves. Taking a pot off a stove, unless it is an induction cooktop, leaves a potentially dangerous heating element or trivet that remains hot for some time. Likewise, when taking a casserole out of a conventional oven, one's arms are exposed to the very hot walls of the oven. A microwave oven does not pose this problem. Food and cookware taken out of a microwave oven are rarely much hotter than 100 °C. Cookware used in a microwave oven is often much cooler than the food because the cookware is transparent to microwaves; the microwaves heat the food directly and the cookware is indirectly heated by the food. Food and cookware from a conventional oven, on the other hand, are the same temperature as the rest of the oven, and typical baking temperatures are far above the boiling point of water. That means that conventional stoves and ovens can cause more serious burns. The lower temperature of cooking (the boiling point of water) is a significant safety benefit compared with baking in the oven or frying, because it eliminates the formation of tars and char, which are carcinogenic. Microwave radiation also penetrates deeper than direct heat, so that the food is heated by its own internal water content. In contrast, direct heat can burn the surface while the inside is still cold. Pre-heating the food in a microwave oven before putting it into the grill or pan reduces the time needed to heat up the food and reduces the formation of carcinogenic char. Unlike frying and baking, microwaving does not produce acrylamide in potatoes; however, unlike deep-frying at high temperatures, it is of only limited effectiveness in reducing glycoalkaloid (i.e., solanine) levels. Acrylamide has been found in other microwaved products like popcorn. Use in cleaning kitchen sponges Studies have investigated the use of the microwave oven to clean non-metallic domestic sponges which have been thoroughly wetted. 
A 2006 study found that microwaving wet sponges for 2 minutes (at 1000-watt power) removed 99% of coliforms, E. coli, and MS2 phages. Bacillus cereus spores were killed at 4 minutes of microwaving. A 2017 study was less affirmative: about 60% of the germs were killed, but the remaining ones quickly re-colonized the sponge. Issues High temperatures Closed containers Closed containers, such as eggs, can explode when heated in a microwave oven due to the increased pressure from steam. Intact fresh egg yolks outside the shell also explode as a result of superheating. Insulating plastic foams of all types generally contain closed air pockets, and are generally not recommended for use in a microwave oven, as the air pockets explode and the foam (which can be toxic if consumed) may melt. Not all plastics are microwave-safe, and some plastics absorb microwaves to the point that they may become dangerously hot. Fires Products that are heated for too long can catch fire. Though this is inherent to any form of cooking, the rapid cooking and unattended nature of the use of microwave ovens results in additional hazard. Superheating In rare cases, water and other homogeneous liquids can superheat when heated in a microwave oven in a container with a smooth surface. That is, the liquid reaches a temperature slightly above its normal boiling point without bubbles of vapour forming inside the liquid. The boiling process can start explosively when the liquid is disturbed, such as when the user takes hold of the container to remove it from the oven or while adding solid ingredients such as powdered creamer or sugar. This can result in spontaneous boiling (nucleation) which may be violent enough to eject the boiling liquid from the container and cause severe scalding. Metal objects Contrary to popular assumptions, metal objects can be safely used in a microwave oven, but with some restrictions. Any metal or conductive object placed into the microwave oven acts as an antenna to some degree, resulting in an electric current. This causes the object to act as a heating element. This effect varies with the object's shape and composition, and is sometimes utilized for cooking. Any object containing pointed metal can create an electric arc (sparks) when microwaved. This includes cutlery, crumpled aluminium foil (though some foil used in microwave ovens is safe, see below), twist-ties containing metal wire, the metal wire carry-handles in oyster pails, or almost any metal formed into a poorly conductive foil or thin wire, or into a pointed shape. Forks are a good example: the tines of the fork respond to the electric field by producing high concentrations of electric charge at the tips. This has the effect of exceeding the dielectric breakdown strength of air, about 3 megavolts per meter (3×10⁶ V/m). The air forms a conductive plasma, which is visible as a spark. The plasma and the tines may then form a conductive loop, which may be a more effective antenna, resulting in a longer-lived spark. When dielectric breakdown occurs in air, some ozone and nitrogen oxides are formed, both of which are unhealthy in large quantities. Microwaving an individual smooth metal object without pointed ends, for example a spoon or shallow metal pan, usually does not produce sparking. Thick metal wire racks can be part of the interior design in microwave ovens. 
In a similar way, the perforated interior wall plates, which allow light and air into the oven and allow interior viewing through the oven door, are made of conductive metal formed in a safe shape. The effect of microwaving thin metal films can be seen clearly on a Compact Disc or DVD (particularly the factory-pressed type). The microwaves induce electric currents in the metal film, which heats up, melting the plastic in the disc and leaving a visible pattern of concentric and radial scars. Similarly, porcelain with thin metal films can also be destroyed or damaged by microwaving. Aluminium foil is thick enough to be used in microwave ovens as a shield against heating parts of food items, if the foil is not badly warped. When wrinkled, aluminium foil is generally unsafe in microwaves, as manipulation of the foil causes sharp bends and gaps that invite sparking. The USDA recommends that aluminium foil used as a partial food shield in microwave oven cooking cover no more than one quarter of a food object, and be carefully smoothed to eliminate sparking hazards. Another hazard is the resonance of the magnetron tube itself. If the microwave oven is run without an object to absorb the radiation, a standing wave forms. The energy is reflected back and forth between the tube and the cooking chamber. This may cause the tube to overload and burn out. High reflected power may also cause magnetron arcing, possibly resulting in primary power fuse failure, though such a causal relationship is not easily established. Thus, dehydrated food, or food wrapped in metal which does not arc, is problematic for overload reasons, without necessarily being a fire hazard. Certain foods such as grapes, if properly arranged, can produce an electric arc. Prolonged arcing from food carries similar risks to arcing from other sources as noted above. Other objects that may spark include plastic/holographic-print Thermos flasks and other heat-retaining containers (such as Starbucks novelty cups) or cups with a metal lining. If any of the metal is exposed, the entire outer shell can burst off the object or melt. The high electrical fields generated inside a microwave oven can often be illustrated by placing a radiometer or neon glow-bulb inside the cooking chamber, creating glowing plasma inside the low-pressure bulb of the device. Direct microwave exposure Direct microwave exposure is not generally possible, as microwaves emitted by the source in a microwave oven are confined in the oven by the material out of which the oven is constructed. Furthermore, ovens are equipped with redundant safety interlocks, which remove power from the magnetron if the door is opened. This safety mechanism is required by United States federal regulations. Tests have shown confinement of the microwaves in commercially available ovens to be so nearly universal as to make routine testing unnecessary. According to the United States Food and Drug Administration's Center for Devices and Radiological Health, a U.S. Federal Standard limits the amount of microwaves that can leak from an oven throughout its lifetime to 5 milliwatts of microwave radiation per square centimeter at approximately 5 cm (2 in) from the surface of the oven. This is far below the exposure level currently considered to be harmful to human health. The radiation produced by a microwave oven is non-ionizing. It therefore does not have the cancer risks associated with ionizing radiation such as X-rays and high-energy particles. 
Long-term rodent studies to assess cancer risk have so far failed to identify any carcinogenicity from microwave radiation, even with chronic exposure levels (i.e. a large fraction of life span) far larger than humans are likely to encounter from any leaking ovens. However, with the oven door open, the radiation may cause damage by heating. Microwave ovens are sold with a protective interlock so that they cannot be run when the door is open or improperly latched. Microwaves generated in microwave ovens cease to exist once the electrical power is turned off. They do not remain in the food when the power is turned off, any more than light from an electric lamp remains in the walls and furnishings of a room when the lamp is turned off. They do not make the food or the oven radioactive. Compared with conventional cooking, the nutritional content of some foods may be altered, but generally in a positive way, since more micronutrients are preserved (see above). There is no indication of detrimental health issues associated with microwaved food. There are, however, a few cases where people have been exposed to direct microwave radiation, either from appliance malfunction or deliberate action. This exposure generally results in physical burns to the body, as human tissue, particularly the outer fat and muscle layers, has a similar composition to some foods that are typically cooked in microwave ovens and so experiences similar dielectric heating effects when exposed to microwave electromagnetic radiation. Chemical exposure The use of unmarked plastics for microwave cooking raises the issue of plasticizers leaching into the food. The plasticizers which have received the most attention are bisphenol A (BPA) and phthalates, although it is unclear whether other plastic components present a toxicity risk. Other issues include melting and flammability. An alleged issue of release of dioxins into food has been dismissed as an intentional red herring distraction from actual safety issues. Some current plastic containers and food wraps are specifically designed to resist radiation from microwaves. Products may use the term "microwave safe", may carry a microwave symbol (three lines of waves, one above the other) or simply provide instructions for proper microwave oven use. Any of these is an indication that a product is suitable for microwaving when used in accordance with the directions provided. Plastic containers can release microplastics into food when heated in microwave ovens. Uneven heating Microwave ovens are frequently used for reheating leftover food, and bacterial contamination may not be repressed if the microwave oven is used improperly. If a safe temperature is not reached, this can result in foodborne illness, as with other reheating methods. While microwave ovens can destroy bacteria as well as conventional ovens can, they cook rapidly and may not cook as evenly, similar to frying or grilling, leading to a risk of some food regions failing to reach recommended temperatures. Therefore, a standing period after cooking to allow temperatures in the food to equalize is recommended, as well as the use of a food thermometer to verify internal temperatures. Interference Microwave ovens, although shielded for safety purposes, still emit low levels of microwave radiation. This is not harmful to humans, but can sometimes cause interference to Wi-Fi, Bluetooth, and other devices that communicate in the 2.4 GHz band, particularly at close range. 
Conventional transformer-based ovens do not emit continuously over the mains cycle, but can cause significant slowdowns for many metres around the oven, whereas inverter-based ovens can stop nearby networking entirely while operating. See also Countertop Electromagnetic reverberation chamber Induction cooker List of cooking appliances List of home appliances Microwave chemistry Peryton (astronomy) Robert V. Decareau Thelma Pressman Wall oven Notes References External links Percy Spencer's original patent Ask a Scientist Chemistry Archives, Argonne National Laboratory Further Reading On The History Of Microwaves and Microwave Ovens Microwave oven history from American Heritage magazine Superheating and Microwave Ovens, University of New South Wales (includes video) "The Microwave Oven": Short explanation of microwave oven in terms of microwave cavities and waveguides, intended for use in a class in electrical engineering How Things Work: Microwave Ovens, David Ruzic, University of Illinois Ovens American inventions Radiation effects Products introduced in 1945 20th-century inventions Home appliances
Microwave oven
[ "Physics", "Materials_science", "Technology", "Engineering" ]
9,277
[ "Machines", "Physical phenomena", "Materials science", "Physical systems", "Radiation", "Condensed matter physics", "Home appliances", "Radiation effects" ]
58,019
https://en.wikipedia.org/wiki/Harvard%20architecture
The Harvard architecture is a computer architecture with separate storage and signal pathways for instructions and data. It is often contrasted with the von Neumann architecture, where program instructions and data share the same memory and pathways. This architecture is often used in real-time processing or low-power applications. The term is often stated as having originated from the Harvard Mark I relay-based computer, which stored instructions on punched tape (24 bits wide) and data in electro-mechanical counters. These early machines had data storage entirely contained within the central processing unit, and provided no access to the instruction storage as data. Programs needed to be loaded by an operator; the processor could not initialize itself. However, in the only peer-reviewed published paper on the topic, "The Myth of the Harvard Architecture", published in the IEEE Annals of the History of Computing, the author demonstrates that: 'The term "Harvard architecture" was coined decades later, in the context of microcontroller design' and only 'retrospectively applied to the Harvard machines and subsequently applied to RISC microprocessors with separated caches'; 'The so-called "Harvard" and "von Neumann" architectures are often portrayed as a dichotomy, but the various devices labeled as the former have far more in common with the latter than they do with each other'; 'In short [the Harvard architecture] isn't an architecture and didn't derive from work at Harvard'. Modern processors appear to the user to be systems with von Neumann architectures, with the program code stored in the same main memory as the data. For performance reasons, internally and largely invisible to the user, most designs have separate processor caches for the instructions and data, with separate pathways into the processor for each. This is one form of what is known as the modified Harvard architecture. The Harvard architecture is historically, and traditionally, split into two address spaces, but designs with three address spaces (two extra, all accessed in each cycle) also exist, though they are rare. Memory details In a Harvard architecture, there is no need to make the two memories share characteristics. In particular, the word width, timing, implementation technology, and memory address structure can differ. In some systems, instructions for pre-programmed tasks can be stored in read-only memory while data memory generally requires read-write memory. In some systems, there is much more instruction memory than data memory so instruction addresses are wider than data addresses. Contrast with von Neumann architectures In a system with a pure von Neumann architecture, instructions and data are stored in the same memory, so instructions are fetched over the same data path used to fetch data. This means that a CPU cannot simultaneously read an instruction and read or write data from or to the memory. In a computer using the Harvard architecture, the CPU can both read an instruction and perform a data memory access at the same time, even without a cache. A Harvard architecture computer can thus be faster for a given circuit complexity because instruction fetches and data access do not contend for a single memory pathway. Also, a Harvard architecture machine has distinct code and data address spaces: instruction address zero is not the same as data address zero. Instruction address zero might identify a twenty-four-bit value, while data address zero might indicate an eight-bit byte that is not part of that twenty-four-bit value. 
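The contrast drawn above can be summarised with a deliberately crude cycle-count model. The sketch below is not a model of any real machine: it simply assumes one memory access per cycle per bus, so a shared-bus (von Neumann style) core must serialise instruction fetches and data accesses, while a split-bus (Harvard style) core can overlap them.

```python
# Toy cycle-count model of shared vs. separate instruction/data pathways.
# Assumes one access per cycle per bus; not a model of any real processor.

def cycles_shared_bus(instructions: int, data_accesses_per_instruction: float = 1.0) -> int:
    """von Neumann style: fetch and data access contend for one pathway."""
    return round(instructions * (1 + data_accesses_per_instruction))

def cycles_split_bus(instructions: int, data_accesses_per_instruction: float = 1.0) -> int:
    """Harvard style: each fetch overlaps with one data access."""
    return round(instructions * max(1.0, data_accesses_per_instruction))

print(cycles_shared_bus(1000))  # 2000 cycles
print(cycles_split_bus(1000))   # 1000 cycles for the same idealized workload
```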
Contrast with modified Harvard architecture A modified Harvard architecture machine is very much like a Harvard architecture machine, but it relaxes the strict separation between instruction and data while still letting the CPU concurrently access two (or more) memory buses. The most common modification includes separate instruction and data caches backed by a common address space. While the CPU executes from cache, it acts as a pure Harvard machine. When accessing backing memory, it acts like a von Neumann machine (where code can be moved around like data, which is a powerful technique). This modification is widespread in modern processors, such as the ARM architecture, Power ISA and x86 processors. It is sometimes loosely called a Harvard architecture, overlooking the fact that it is actually "modified". Another modification provides a pathway between the instruction memory (such as ROM or flash memory) and the CPU to allow words from the instruction memory to be treated as read-only data. This technique is used in some microcontrollers, including the Atmel AVR. This allows constant data, such as text strings or function tables, to be accessed without first having to be copied into data memory, preserving scarce (and power-hungry) data memory for read/write variables. Special machine language instructions are provided to read data from the instruction memory, or the instruction memory can be accessed using a peripheral interface. (This is distinct from instructions which themselves embed constant data, although for individual constants the two mechanisms can substitute for each other.) Speed In recent years, the speed of the CPU has grown many times in comparison to the access speed of the main memory. Care needs to be taken to reduce the number of times main memory is accessed in order to maintain performance. If, for instance, every instruction run in the CPU requires an access to memory, the computer gains nothing from increased CPU speed, a problem referred to as being memory bound. It is possible to make extremely fast memory, but this is only practical for small amounts of memory, for cost, power and signal routing reasons. The solution is to provide a small amount of very fast memory known as a CPU cache, which holds recently accessed data. As long as the data that the CPU needs is in the cache, the performance is much higher than it is when the CPU has to get the data from the main memory. A cache, however, helps only when access patterns are repetitive enough for recently used items to be reused, it is limited in size, and it brings design problems of its own. Internal vs. external design Modern high-performance CPU chip designs incorporate aspects of both Harvard and von Neumann architecture. In particular, the "split cache" version of the modified Harvard architecture is very common. CPU cache memory is divided into an instruction cache and a data cache. Harvard architecture is used as the CPU accesses the cache. In the case of a cache miss, however, the data is retrieved from the main memory, which is not formally divided into separate instruction and data sections, although it may well have separate memory controllers used for concurrent access to RAM, ROM and (NOR) flash memory. Thus, while a von Neumann architecture is visible in some contexts, such as when data and code come through the same memory controller, the hardware implementation gains the efficiencies of the Harvard architecture for cache accesses and at least some main memory accesses. 
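A small calculation makes the cache argument above concrete. The latencies here are assumed round numbers (1 ns cache, 100 ns main memory), chosen only to show how quickly the average access time approaches the cache speed as the hit rate rises.

```python
# Average memory access time as a function of cache hit rate (assumed latencies).

def average_access_time_ns(hit_rate: float, cache_ns: float = 1.0, memory_ns: float = 100.0) -> float:
    return hit_rate * cache_ns + (1.0 - hit_rate) * memory_ns

for hit_rate in (0.50, 0.90, 0.99):
    print(f"hit rate {hit_rate:.2f}: {average_access_time_ns(hit_rate):.2f} ns")
# 50.50 ns, 10.90 ns, 1.99 ns: once most accesses hit the split caches, the CPU
# rarely waits on main memory, which is why the modified Harvard arrangement
# recovers most of the benefit of fully separate memories.
```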
In addition, CPUs often have write buffers which let CPUs proceed after writes to non-cached regions. The von Neumann nature of memory is then visible when instructions are written as data by the CPU and software must ensure that the caches (data and instruction) and write buffer are synchronized before trying to execute those just-written instructions. Modern uses of the Harvard architecture The principal advantage of the pure Harvard architecture—simultaneous access to more than one memory system—has been reduced by modified Harvard processors using modern CPU cache systems. Relatively pure Harvard architecture machines are used mostly in applications where trade-offs, like the cost and power savings from omitting caches, outweigh the programming penalties from featuring distinct code and data address spaces. Digital signal processors (DSPs) generally execute small, highly optimized audio or video processing algorithms using a Harvard architecture. They avoid caches because their behavior must be extremely reproducible. The difficulties of coping with multiple address spaces are of secondary concern to speed of execution. Consequently, some DSPs feature multiple data memories in distinct address spaces to facilitate SIMD and VLIW processing. Texas Instruments TMS320 C55x processors, for one example, feature multiple parallel data buses (two write, three read) and one instruction bus. Microcontrollers are characterized by having small amounts of program (flash memory) and data (SRAM) memory, and take advantage of the Harvard architecture to speed processing by concurrent instruction and data. The separate storage means the program and data memories may feature different bit widths, for example using 16-bit-wide instructions and 8-bit-wide data. They also mean that instruction prefetch can be performed in parallel with other activities. Examples include the PIC by Microchip Technology, Inc. and the AVR by Atmel Corp (now part of Microchip Technology). Even in these cases, it is common to employ special instructions in order to access program memory as though it were data for read-only tables, or for reprogramming; those processors are modified Harvard architecture processors. Notes References External links Harvard Architecture Harvard vs von Neumann Architectures Difference Between Harvard Architecture And Von Neumann Architecture Computer architecture Classes of computers
Harvard architecture
[ "Technology", "Engineering" ]
1,806
[ "Computer engineering", "Computer architecture", "Computer systems", "Computers", "Classes of computers" ]
58,032
https://en.wikipedia.org/wiki/Control%20Data%20Corporation
Control Data Corporation (CDC) was a mainframe and supercomputer company that in the 1960s was one of the nine major U.S. computer companies, a group that included IBM, the Burroughs Corporation, the Digital Equipment Corporation (DEC), the NCR Corporation, General Electric, Honeywell, RCA, and UNIVAC. For most of the 1960s, the strength of CDC was the work of the electrical engineer Seymour Cray, who developed a series of fast computers, then considered the fastest computing machines in the world; in the 1970s, Cray left the Control Data Corporation and founded Cray Research (CRI) to design and make supercomputers. In 1988, after much financial loss, the Control Data Corporation began withdrawing from making computers and sold the affiliated companies of CDC; in 1992, CDC established Control Data Systems, Inc. The remaining affiliate companies of CDC currently do business as the software company Dayforce. Background: World War II – 1957 During World War II the U.S. Navy had built up a classified team of engineers to build codebreaking machinery for both Japanese and German electro-mechanical ciphers. A number of these machines were produced by a team dedicated to the task, working in the Washington, D.C., area. With the post-war wind-down of military spending, the Navy grew increasingly worried that this team would break up and scatter into various companies, and it started looking for ways to keep the code-breaking team together. Eventually they found their solution: John Parker, the owner of a Chase Aircraft affiliate named Northwestern Aeronautical Corporation located in St. Paul, Minnesota, was about to lose all his contracts due to the ending of the war. The Navy never told Parker exactly what the team did, since it would have taken too long to get top secret clearance. Instead they simply said the team was important, and they would be very happy if he hired them all. Parker was obviously wary, but after several meetings with increasingly high-ranking Naval officers it became apparent that whatever it was, they were serious, and he eventually agreed to give this team a home in his military glider factory. The result was Engineering Research Associates (ERA). Formed in 1946, this contract engineering company worked on a number of seemingly unrelated projects in the early 1950s. Among these was the ERA Atlas, an early military stored-program computer, the basis of the Univac 1101, which was followed by the 1102, and then the 36-bit ERA 1103 (UNIVAC 1103). The Atlas was built for the Navy, which intended to use it in their non-secret code-breaking centers. In the early 1950s a minor political debate broke out in Congress about the Navy essentially "owning" ERA, and the ensuing debates and legal wrangling left the company drained of both capital and spirit. In 1952, Parker sold ERA to Remington Rand. Although Rand kept the ERA team together and developing new products, it was most interested in ERA's magnetic drum memory systems. Rand soon merged with Sperry Corporation to become Sperry Rand. In the process of merging the companies, the ERA division was folded into Sperry's UNIVAC division. At first this did not cause too many changes at ERA, since the company was used primarily to provide engineering talent to support a variety of projects. However, one major project was moved from UNIVAC to ERA, the UNIVAC II project, which led to lengthy delays and upset nearly everyone involved. 
Since the Sperry "big company" mentality encroached on the decision-making powers of the ERA employees, a number left Sperry to form the Control Data Corp. in September 1957, setting up shop in an old warehouse across the river from Sperry's St. Paul laboratory, in Minneapolis at 501 Park Avenue. Of the members forming CDC, William Norris was the unanimous choice to become the chief executive officer of the new company. Seymour Cray soon became the chief designer, though at the time of CDC's formation he was still in the process of completing a prototype for the Naval Tactical Data System (NTDS), and he did not leave Sperry to join CDC until it was complete. The M-460 was Seymour's first transistor computer, though the power supply rectifiers were still tubes. Early designs and Cray's big plan CDC started business by selling subsystems, mostly drum memory systems, to other companies. Cray joined the next year, and he immediately built a small transistor-based 6-bit machine known as the "CDC Little Character" to test his ideas on large-system design and transistor-based machines. "Little Character" was a great success. In 1959, CDC released a 48-bit transistorized version of their re-design of the 1103 re-design under the name CDC 1604; the first machine was delivered to the U.S. Navy in 1960 at the Naval Postgraduate School in Monterey, California. Legend has it that the 1604 designation was chosen by adding CDC's first street address (501 Park Avenue) to Cray's former project, the ERA-Univac 1103. A 12-bit cut-down version was also released as the CDC 160A in 1960, often considered among the first minicomputers. The 160A was particularly notable as it was built as a standard office desk, which was unusual packaging for that era. New versions of the basic 1604 architecture were rebuilt into the CDC 3000 series, which sold through the early and mid-1960s. Cray immediately turned to the design of a machine that would be the fastest (or in the terminology of the day, largest) machine in the world, setting the goal at 50 times the speed of the 1604. This required radical changes in design, and as the project "dragged on" — it had gone on for about four years by then — the management got increasingly upset and it demanded greater oversight. Cray in turn demanded (in 1962) to have his own remote lab, saying that otherwise, he would quit. Norris agreed, and Cray and his team moved to Cray's home town, Chippewa Falls, Wisconsin. Not even Bill Norris, the founder and president of CDC, could visit Cray's laboratory without an invitation. Peripherals business In the early 1960s, the corporation moved to the Highland Park neighborhood of St. Paul where Norris lived. Through this period, Norris became increasingly worried that CDC had to develop a "critical mass" to compete with IBM. To do this, he started an aggressive program of buying up various companies to round out CDC's peripheral lineup. In general, they tried to offer a product to compete with any of IBM's, but running 10% faster and costing 10% less. This was not always easy to achieve. One of its first peripherals was a tape transport, which led to some internal wrangling as the Peripherals Equipment Division attempted to find a reasonable way to charge other divisions of the company for supplying the devices. If the division simply "gave" them away at cost as part of a system purchase, they would never have a real budget of their own. 
Instead, a plan was established in which it would share profits with the divisions selling its peripherals, a plan eventually used throughout the company. The tape transport was followed by the 405 Card Reader and the 415 Card Punch, followed by a series of tape drives and drum printers, all of which were designed in-house. The printer business was initially supported by Holley Carburetor in the Rochester, Michigan suburb outside of Detroit. They later formalized this by creating a jointly held company, Holley Computer Products. Holley later sold its stake back to CDC, the remainder becoming the Rochester Division. Train printers and band printers in Rochester were developed in a joint venture with NCR and ICL, with CDC holding controlling interest. This joint venture was known as Computer Peripherals, Inc. (CPI). In the early 80s, it was merged with dot matrix computer printer manufacturer Centronics. Norris was particularly interested in breaking out of the punched card–based workflow, where IBM held a stranglehold. He eventually decided to buy Rabinow Engineering, one of the pioneers of optical character recognition (OCR) systems. The idea was to bypass the entire punched card stage by having the operators simply type onto normal paper pages with an OCR-friendly typewriter font, and then submit those pages to the computer. Since a typewritten page contains much more information than a punched card (which has essentially one line of text from a page), this would offer savings all around. This seemingly simple task turned out to be much harder than anyone expected, and while CDC became a major player in the early days of OCR systems, OCR has remained a niche product to this day. Rabinow's plant in Rockville, MD was closed in 1976, and CDC left the business. With the continued delays on the OCR project, it became clear that punched cards were not going to go away any time soon, and CDC had to address this as quickly as possible. Although the 405 remained in production, it was an expensive machine to build. So another purchase was made, Bridge Engineering, which offered a line of lower-cost as well as higher-speed card punches. All card-handling products were moved to what became the Valley Forge Division after Bridge moved to a new factory, with the tape transports to follow. Later, the Valley Forge and Rochester divisions were spun off to form a new joint company with National Cash Register (later NCR Corporation), Computer Peripherals Inc (CPI), to share development and production costs across the two companies. ICL later joined the effort. Eventually the Rochester Division was sold to Centronics in 1982. Another side effect of Norris's attempts to diversify was the creation of a number of service bureaus that ran jobs on behalf of smaller companies that could not afford to buy computers. This was never very profitable, and in 1965, several managers suggested that the unprofitable centers be closed in a cost-cutting measure. Nevertheless, Norris was so convinced of the idea that he refused to accept this, and ordered an across-the-board "belt tightening" instead. Control Data Institute Control Data created an international technical/computer vocational school from the mid-1960s to the late 1980s. By the late 1970s there were sixty-nine learning centers worldwide, serving 18,000 students. 
CDC 6600: defining supercomputing Meanwhile, at the new Chippewa Falls lab, Seymour Cray, Jim Thornton, and Dean Roush put together a team of 34 engineers, which continued work on the new computer design. One of the ways they hoped to improve on the CDC 1604 was to use better transistors, and Cray turned to the new silicon transistors made with the planar process developed by Fairchild Semiconductor. These were much faster than the germanium transistors in the 1604, without the drawbacks of the older mesa silicon transistors. The speed-of-light restriction forced a more compact design, with refrigeration designed by Dean Roush. In 1964, the resulting computer was released onto the market as the CDC 6600, out-performing everything on the market by roughly ten times. It sold over 100 units at $8 million each and was considered a supercomputer. The 6600 had a transistor-based CPU (central processing unit) with a 100 ns clock cycle, multiple asynchronous functional units, and core memory, and it used 10 logical, external I/O processors to off-load many common tasks. That way, the CPU could devote all of its time and circuitry to processing actual data, while the other controllers dealt with mundane tasks like punching cards and running disk drives. Using late-model compilers, the machine attained a standard mathematical operations rate of 500 kiloFLOPS, but hand-crafted assembly code managed to deliver approximately 1 megaFLOPS. A simpler, albeit much slower and less expensive version, implemented using a more traditional serial processor design rather than the 6600's parallel functional units, was released as the CDC 6400, and a two-processor version of the 6400 was sold as the CDC 6500. A FORTRAN compiler, known as MNF (Minnesota FORTRAN), was developed by Lawrence A. Liddiard and E. James Mundstock at the University of Minnesota for the 6600. After the delivery of the 6600, IBM took notice of this new company. In 1965 IBM started an effort to build a machine that would be faster than the 6600, the ACS-1. Two hundred people were gathered on the U.S. West Coast to work on the project, away from corporate prodding, in an attempt to mirror Cray's off-site lab. The project produced interesting computer architecture and technology, but it was not compatible with IBM's hugely successful System/360 line of computers. The engineers were directed to make it 360-compatible, but that compromised its performance. The ACS was canceled in 1969, without ever being produced for customers. Many of the engineers left the company, leading to a brain-drain in IBM's high-performance departments. In the meantime, IBM announced a new System/360 model, the Model 92, which would be just as fast as CDC's 6600. Although this machine did not exist, sales of the 6600 dropped drastically while people waited for the release of the mythical Model 92. Norris did not take this tactic, dubbed fear, uncertainty and doubt (FUD), lying down, and launched an extensive antitrust lawsuit against IBM a year later, eventually winning a settlement valued at $80 million. As part of the settlement, he picked up IBM's subsidiary, Service Bureau Corporation (SBC), which ran computer processing for other corporations on its own computers. SBC fitted nicely into CDC's existing service bureau offerings. During the design of the 6600, CDC had set up Project SPIN to supply the system with a high-speed hard disk memory system.
At the time it was unclear whether disks would replace magnetic memory drums, or whether fixed or removable disks would become more prevalent. SPIN explored all of these approaches, and eventually delivered a 28" diameter fixed disk and a smaller multi-platter 14" removable disk-pack system. Over time, the hard disk business pioneered in SPIN became a major product line. CDC 7600 and 8600 In the same month it won its lawsuit against IBM, CDC announced its new computer, the CDC 7600 (previously referred to as the 6800 within CDC). This machine's hardware clock speed was almost four times that of the 6600 (36 MHz vs. 10 MHz), with a 27.5 ns clock cycle, and it offered considerably more than four times the total throughput, with much of the speed increase coming from extensive use of pipelining. The 7600 did not sell well because it was introduced during the 1969 downturn in the U.S. national economy, and its complexity had led to poor reliability. The machine was not totally compatible with the 6000-series and required a completely different operating system, which, like most new operating systems, was primitive. The 7600 project paid for itself, but damaged CDC's reputation. The 7600 had split primary and secondary memory, which required management by the user, but the machine was more than fast enough to remain the fastest uniprocessor from 1969 to 1976. A few dozen 7600s were the computers of choice at supercomputer centers around the world. Cray then turned to the design of the CDC 8600. This design included four 7600-like processors in a single, smaller case. The smaller size and shorter signal paths allowed the 8600 to run at much higher clock speeds which, together with faster memory, provided most of the performance gains. The 8600, however, belonged to the "old school" in terms of its physical construction, and it used individual components soldered to circuit boards. The design was so compact that cooling the CPU modules proved effectively impossible, and access for maintenance difficult. An abundance of hot-running solder joints ensured that the machines did not work reliably; Cray recognized that a re-design was needed. The STAR and the Cyber In addition to the redesign of the 8600, CDC had another project called the CDC STAR-100 under way, led by Cray's former collaborator on the 6600/7600, Jim Thornton. Unlike the 8600's "four computers in one box" solution to the speed problem, the STAR was a new design using a unit known today as the vector processor. By deeply pipelining mathematical operations through purpose-built instructions and hardware, mathematical throughput was dramatically improved in a machine that was otherwise slower than a 7600. Although the particular set of problems it would be best at solving was limited compared to the general-purpose 7600, it was for solving exactly these problems that customers would buy CDC machines. Since these two projects competed for limited funds during the late 1960s, Norris felt that the company could not support simultaneous development of the STAR and a complete redesign of the 8600. Therefore, Cray left CDC to form the Cray Research company in 1972. Norris remained, however, a staunch supporter of Cray, and invested money into Cray's new company. In 1974 CDC released the STAR; a reworked version of the design was later marketed as the Cyber 203. It turned out to have "real world" performance that was considerably worse than expected. STAR's chief designer, Jim Thornton, then left CDC to form the Network Systems Corporation.
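As a rough cross-check of the clock figures quoted above for the 6600 and 7600 (a back-of-the-envelope calculation, not taken from the original sources), the clock rate is simply the reciprocal of the cycle time:

```latex
% CDC 6600: 100 ns cycle time
f_{6600} = \frac{1}{100 \times 10^{-9}\ \text{s}} = 10\ \text{MHz}
% CDC 7600: 27.5 ns cycle time
f_{7600} = \frac{1}{27.5 \times 10^{-9}\ \text{s}} \approx 36.4\ \text{MHz}
% Ratio of clock rates
\frac{f_{7600}}{f_{6600}} \approx 3.6
```

The ratio of roughly 3.6 matches the "almost four times" figure, while the larger overall throughput gain came from pipelining rather than clock speed alone. Likewise, at the 6600's 10 MHz clock, the quoted compiled rate of 500 kiloFLOPS and hand-coded rate of about 1 megaFLOPS correspond to roughly one floating-point result every 20 and 10 clock cycles, respectively.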
In 1975, a STAR-100 was placed into service in a Control Data service center, in what was considered the first installation of a supercomputer in a data center. Founder William C. Norris presided at the podium for the press conference announcing the new service. Publicity was a key factor in the announcement's success: the event was coordinated with Guinness, and the STAR-100 was listed in the Guinness Book of World Records as "the most powerful and fastest computer". Duane Andrews of Public Relations coordinated the event and attracted many influential editors, including the research editor at Business Week, who described it as "the most exciting public event he attended in 20 years". Sharing the podium were William C. Norris; Boyd Jones, vice president; and S. Steve Adkins, data center manager. Norris, a very private individual, rarely took the podium. During a lunch at a local country club, Norris signed a large stack of certificates attesting to the record, which had been printed by the STAR-100 on paper produced at CDC's Lincoln, Nebraska plant. The paper included a half-tone photo of the STAR-100. The main customers of the STAR-100 data center were oil companies running oil reservoir simulations. Most notable was a simulation controlled from a terminal in Texas that modelled oil extraction for oil fields in Kuwait. A front-page Wall Street Journal article brought in a new user, Allis-Chalmers, which used the system to simulate a damaged hydroelectric turbine in a Norwegian mountain hydropower plant. A variety of systems based on the basic 6600/7600 architecture were repackaged in different price/performance categories of the CDC Cyber, which became CDC's main product line in the 1970s. An updated version of the STAR architecture, the Cyber 205, had considerably better performance than the original. By this time, however, Cray's own designs, like the Cray-1, were using the same basic design techniques as the STAR, but were computing much faster. The STAR-100 was able to process vectors of up to 64K (65,536) elements, versus 64 elements for the Cray-1, but the STAR-100 took much longer to start an operation, so the Cray-1 outperformed it on short vectors. Sales of the STAR were weak, but Control Data Corp. produced a successor system, the Cyber 200/205, that gave Cray Research some competition. CDC also embarked on a number of special projects for its clients, producing a small number of black-project computers. The CDC Advanced Flexible Processor (AFP), also known as CYBER PLUS, was one such machine. Another design direction was the "Cyber 80" project, which was aimed at release in 1980. This machine could run old 6600-style programs, and also had a completely new 64-bit architecture. The concept behind Cyber 80 was that current 6000-series users would migrate to these machines with relative ease. The design and debugging of these machines went on past 1980, and the machines were eventually released under other names. CDC was also attempting to diversify its revenue from hardware into services, and this included its promotion of the PLATO computer-aided learning system, which ran on Cyber hardware and incorporated many early computer interface innovations including bit-mapped touchscreen terminals.
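The short-vector weakness noted above for the STAR-100 against the Cray-1 can be illustrated with a simple timing model. The sketch below is purely illustrative: it uses the generic model T(n) = t_startup + n·t_element with invented startup and per-element times (not measured STAR-100 or Cray-1 figures) to show how a machine with a large startup cost loses on short vectors even when its per-element rate is better.

```python
# Illustrative vector-timing model: T(n) = t_startup + n * t_per_element.
# The numbers below are invented for illustration; they are NOT measured
# STAR-100 or Cray-1 timings.

def vector_time_ns(n, t_startup_ns, t_elem_ns):
    """Time to process an n-element vector under a simple linear model."""
    return t_startup_ns + n * t_elem_ns

# Hypothetical machine A: long pipeline, big startup cost, fast per element.
# Hypothetical machine B: small startup cost, slightly slower per element.
A = dict(t_startup_ns=1000.0, t_elem_ns=20.0)
B = dict(t_startup_ns=100.0, t_elem_ns=25.0)

for n in (8, 64, 1024, 65536):
    ta = vector_time_ns(n, **A)
    tb = vector_time_ns(n, **B)
    winner = "A" if ta < tb else "B"
    print(f"n={n:6d}  A={ta:10.0f} ns  B={tb:10.0f} ns  faster: {winner}")

# Crossover length where A's higher peak rate overcomes its startup penalty:
n_cross = (A["t_startup_ns"] - B["t_startup_ns"]) / (B["t_elem_ns"] - A["t_elem_ns"])
print(f"crossover at n = {n_cross:.0f} elements")
```

With these assumed numbers, machine B wins below roughly 180 elements and machine A wins above that, which is the qualitative pattern described for the Cray-1 and the STAR-100.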
Magnetic Peripherals Inc. Meanwhile, several very large Japanese manufacturing firms were entering the market. The supercomputer market was too small to support more than a handful of companies, so CDC started looking for other markets. One of these was the hard disk drive (HDD) market. Magnetic Peripherals Inc., later Imprimis Technology, was originally a joint venture with Honeywell formed in 1975 to manufacture HDDs for both companies. CII-Honeywell Bull later purchased a 3 percent interest in MPI from Honeywell. Sperry became a partner in 1983 with 17 percent, making the ownership split CDC (67%) and Honeywell (17%). MPI was a captive supplier to its parents. It sold on an OEM basis only to them, while CDC sold MPI product to third parties under its brand name. It became a major player in the HDD market. It was the worldwide leader in 14-inch disk drive technology in the OEM marketplace in the late 1970s and early 1980s, especially with its SMD (Storage Module Device) and CMD (Cartridge Module Drive), with its plant at Brynmawr in the South Wales valleys running 24/7 production. The Magnetic Peripherals division in Brynmawr had produced 1 million disks and 3 million magnetic tapes by October 1979. CDC was an early developer of eight-inch drive technology with products from its MPI Oklahoma City Operation. Its CDC Wren series drives were particularly popular with high-end users, although it was behind the capacity growth and performance curves of numerous startups such as Micropolis, Atasi, Maxtor, and Quantum. CDC also co-developed the now universal Advanced Technology Attachment (ATA) interface with Compaq and Western Digital, which was aimed at lowering the cost of adding low-performance drives. CDC founded a separate division called Rigidyne in Simi Valley, California, to develop 3.5-inch drives using technology from the Wren series. These were marketed by CDC as the "Swift" series, and were among the first high-performance 3.5-inch drives on the market at their introduction in 1987. In September 1988, CDC merged Rigidyne and MPI into the umbrella subsidiary of Imprimis Technology. The next year, Seagate Technology purchased Imprimis for $250 million in cash, 10.7 million shares of Seagate stock, and a $50 million promissory note. Investments Control Data held interests in other companies including the audience-measurement firm Arbitron, Commercial Credit Corporation, and Ticketron. Commercial Credit Corporation In 1968, Commercial Credit Corporation was the target of a hostile takeover by Loews Inc. Loews had acquired nearly 10% of CCC, which it intended to break up on acquisition. To avoid the takeover, CCC forged a deal with CDC, lending CDC the money to purchase control of CCC instead, and "That is how a computer company came to own a fleet of fishing boats in the Chesapeake Bay." By the 1980s, Control Data entered an unstable period, which resulted in the company liquidating many of its assets. In 1986, Sandy Weill convinced the Control Data management to spin off its Commercial Credit subsidiary to prevent the company's potential liquidation. Over a period of years, Weill used Commercial Credit to build an empire that became Citigroup. In 1999, Commercial Credit was renamed CitiFinancial, and in 2011, the full-service network of US CitiFinancial branches was renamed OneMain Financial. Ticketron In 1969, Control Data acquired 51% of Ticketron for $3.9 million from Cemp Investments. In 1970, Ticketron became the sole computerized ticketing provider in the United States. In 1973, Control Data increased the size of its investment.
Ticketron also provided ticketing terminals and back-end infrastructure for parimutuel betting, and provided similar services for a number of US lotteries, including those in New York, Illinois, Pennsylvania, Delaware, Washington and Maryland. By the mid-1980s, Ticketron was CDC's most profitable business with revenue of $120 million, and CDC, which was loss-making at the time, considered selling the business. In 1990 the majority of Ticketron's assets and business, with the exception of a small antitrust carve-out for Broadway's "Telecharge" business unit, were bought by The Carlyle Group, which sold them the following year to rival Ticketmaster. ETA Systems, wind-down and sale of assets CDC decided to fight for the high-performance niche, but Norris considered that the company had become moribund and unable to quickly design competitive machines. In 1983 he set up a spinoff company, ETA Systems, whose design goal was a machine processing data at 10 GFLOPS, about 40 times the speed of the Cray-1. The design never fully matured, and it was unable to reach its goals. Nevertheless, the product was one of the fastest computers on the market, and 7 liquid-nitrogen-cooled and 27 smaller air-cooled versions were sold during the next few years. They used the new CMOS chips, which produced much less heat. The effort ended after half-hearted attempts to sell ETA Systems. In 1989, most of the employees of ETA Systems were laid off, and the remaining ones were folded into CDC. Despite having valuable technology, CDC still suffered huge losses in 1985 ($567 million) and 1986 while attempting to reorganize. As a result, in 1987 it sold its PathLab Laboratory Information System to 3M. Although CDC was still making computers, management decided that hardware manufacturing was no longer as profitable as it had been, and so in 1988 the company began to leave the industry, bit by bit. The first division to go was Imprimis. After that, CDC sold other assets such as VTC (a chip maker that specialized in mass-storage circuitry and was closely linked with MPI), and non-computer-related assets like Ticketron. In 1992, the company separated into two independent companies – the computer businesses were spun out as Control Data Systems, Inc. (CDS), while the information service businesses became the Ceridian Corporation. CDS later became the owner of ICEM Technologies, maker of the ICEM DDN and ICEM Surf software, and sold the business to PTC for $40.6 million in 1998. In 1999, CDS was bought out by Syntegra, a subsidiary of the BT Group, and merged into BT's Global Services organization. Ceridian continues as a successful outsourced IT company focusing on human resources. CDC's Energy Management Division was one of its most successful business units, providing control-systems solutions that managed as much as 25% of all electricity on the planet; it went to Ceridian in the split. This division was renamed Empros and was sold to Siemens in 1993. In 1997, General Dynamics acquired the Computing Devices International Division of Ceridian, which was a defense electronics and systems integration business headquartered in Bloomington, Minnesota – originally Control Data's Government Systems Division. In March 2001, Ceridian separated into two independent companies: the old Ceridian Corporation renamed itself Arbitron Inc., and the rest of the company (consisting of the human resources services and Comdata businesses) took the Ceridian Corporation name.
Ceridian was itself split again in 2013, forming Ceridian HCM Holding Inc. (human resources services) and Comdata Inc. (payments business), marking the final break-up of the former CDC assets. Timeline of systems releases
CDC 1604 et al – 1604, 1604-A, 1604-B, 1604-C, 924 (a "cut down" 1604 sibling)
CDC 160 series – 160, 160A (160-A), 160G (160-G)
CDC 3000 series – 3100, 3200, 3300, 3400, 3500, 3600, 3800
CDC 6000 series – 6200, 6400, 6500, 6700
CDC 6600
CDC 7600
CDC CYBER – 17, 18, 71, 72, 73, 74, 76, 170, 171, 172, 173, 174, 175, 176, 203, 205, Omega/480, 700
CDC STAR-100
1957 – Founding
1959 – 1604
1960 – 1604-B
1961 – 160
1962 – 924 (a 24-bit 1604)
1963 – 160A (160-A), 1604-A, 3400, 6600
1964 – 160G (160-G), 3100, 3200, 3600, 6400
1965 – 1604-C, 1700, 3300, 3500, 8050, 8090
1966 – 3800, 6200, 6500, Station 6000
1968 – 7600
1969 – 6700
1970 – STAR-100
1971 – Cyber 71, Cyber 72, Cyber 73, Cyber 74, Cyber 76
1972 – 5600, 8600
1973 – Cyber 170, Cyber 172, Cyber 173, Cyber 174, Cyber 175, Cyber 17
1976 – Cyber 18
1977 – Cyber 171, Cyber 176, Omega/480
1979 – Cyber 203, Cyber 720, Cyber 730, Cyber 740, Cyber 750, Cyber 760
1980 – Cyber 205
1982 – Cyber 815, Cyber 825, Cyber 835, Cyber 845, Cyber 855, Cyber 865, Cyber 875
1983 – ETA10
1984 – Cyber 810, Cyber 830, Cyber 840, Cyber 850, Cyber 860, Cyber 990, CyberPlus
1987 – Cyber 910, Cyber 930, Cyber 995
1988 – Cyber 960
1989 – Cyber 920, Cyber 2000
Note: The 8xx & 9xx Cyber models, introduced beginning in 1982, formed the 64-bit Cyber 180 series, and their Peripheral Processors (PPs) were 16-bit. The 180 series had virtual memory capability, using CDC's NOS/VE operating system. The more complete nomenclature for these was 180/xxx, although at times the shorter form (e.g. Cyber 990) was used. Peripheral Systems Group Control Data Corporation's Peripheral Systems Group was both a hardware and a software development unit that functioned in the 1970s and 1980s. Its services included development and marketing of IBM-oriented (operating) systems software. One of the Peripheral Systems Group's software products was named CUPID, "Control Data's Program for Unlike Data Set Concatenation." It was aimed at customers of IBM's MVS operating system, and the intended audience was systems programmers. The product's General Information and Reference Manual included SysGen-like options and information about internal user-accessible control blocks. Film and science fiction references
Mars Needs Women (1967) – A CDC 3400 is used for radio communication and to direct the actions of the military as they intercept the Martian spaceships.
Colossus: The Forbin Project (1970) – The title sequences to this film include tape drives and other early CDC equipment.
The Mad Bomber (1973) – The police department has a CDC 3100 that they use to profile the bomber.
The Adolescence of P-1 (1977), by Thomas Ryan – Control Data computers were very enticing to young P-1.
The New Avengers – In episode 2-10 (#23) ("Complex", 1977) Purdey uses a CDC card reader.
Mi-Sex – "Computer Games", 1979 pop music video. The band enters the computer room in the Control Data North Sydney building and proceeds to play with CDC equipment.
Tron (1982) – In the wide screen version of the film, when Flynn and Lora sneak into Encom, a CDC 7600 is visible in the background, alongside a Cray-1. This scene was shot at the Lawrence Livermore National Laboratory.
Die Hard (1988) – The computer room shot up by one of the terrorists contained a number of working Cyber 180 computers and a mock-up of an ETA-10 supercomputer, along with a number of other peripheral devices, all provided by CDC Demonstration Services/Benchmark Lab. This equipment was requested on short notice after another computer manufacturer backed out at the last minute. Paul Derby, manager of the Benchmark Lab, arranged to send two van-loads of equipment to Hollywood for the shoot, accompanied by Jerry Sterns of the Benchmark Lab, who supervised the equipment while it was on the set. After the machines were returned to Minnesota, they were inspected and tested, and as each machine was sold, a notation was made in the corporate records that the machine had appeared in the film.
They Live (1988), by John Carpenter – As Roddy Piper's character is trying on his new "sunglasses" that allow him to see the world as it is, he looks at an advertisement for Control Data Corporation and sees the word OBEY. The film's credits include "special thanks" to CDC.
References
Further reading
Lundstrom, David. A Few Good Men from Univac. Cambridge, Massachusetts: MIT Press, 1987.
Misa, Thomas J., ed. Building the Control Data Legacy: The Career of Robert M. Price. Minneapolis: Charles Babbage Institute, 2012.
Murray, Charles J. The Supermen: The Story of Seymour Cray and the Technical Wizards behind the Supercomputer. New York: John Wiley, 1997.
Price, Robert M. The Eye for Innovation: Recognizing Possibilities and Managing the Creative Enterprise. New Haven: Yale University Press, 2005.
Thornton, J. E. Design of a Computer: The Control Data 6600. Glenview, Ill.: Scott, Foresman, 1970.
Worthy, James C. William C. Norris: Portrait of a Maverick. Ballinger Pub Co., May 1987.
External links
Control Data Corporation Records at the Charles Babbage Institute, University of Minnesota, Minneapolis; CDC records donated by Ceridian Corporation in 1991; finding guide contains historical timeline, product timeline, acquisitions list, and joint venture list.
Oral history interview with William Norris discusses ERA years, acquisition of ERA by Remington Rand, the Univac File computer, work as head of the Univac Division, and the formation of CDC. Charles Babbage Institute, University of Minnesota, Minneapolis.
Oral history interview with Willis K. Drake discusses Remington-Rand, the Eckert-Mauchly Computer Company, ERA, and formation of Control Data Corporation. Charles Babbage Institute, University of Minnesota, Minneapolis.
Organized discussion moderated by Neil R. Lincoln with eighteen Control Data Corporation (CDC) engineers on computer architecture and design. Charles Babbage Institute, University of Minnesota, Minneapolis. Engineers include Robert Moe, Wayne Specker, Dennis Grinna, Tom Rowan, Maurice Hutson, Curt Alexander, Don Pagelkopf, Maris Bergmanis, Dolan Toth, Chuck Hawley, Larry Krueger, Mike Pavlov, Dave Resnick, Howard Krohn, Bill Bhend, Kent Steiner, Raymon Kort, and Neil R. Lincoln. Discussion topics include CDC 1604, CDC 6600, CDC 7600, CDC 8600, CDC STAR-100 and Seymour Cray.
Information about the spin out of Commercial Credit from Control Data by Sandy Weill.
Information about the Control Data CDC 3800 Computer—on display at the National Air and Space Museum Steven F. Udvar-Hazy Center near Washington Dulles International Airport.
Private Collection of historical documents about CDC
Control Data User Manuals Library @ Computing History
Computing history describing the use of a range of CDC systems and equipment 1970–1985
A German collection of CDC, Cray and other large computer systems, some of them in operation
American companies established in 1957 American companies disestablished in 1992 Chippewa County, Wisconsin Computer companies established in 1957 Computer companies disestablished in 1992 Defunct companies based in Minneapolis Defunct companies based in Minnesota Defunct computer companies of the United States Defunct computer hardware companies Defunct computer systems companies Defunct software companies of the United States Manufacturing companies based in Minnesota Software companies based in Minnesota Supercomputers Technology companies established in 1957 Technology companies disestablished in 1992
Control Data Corporation
[ "Technology" ]
7,581
[ "Supercomputers", "Supercomputing" ]
58,072
https://en.wikipedia.org/wiki/John%20Mathieson%20%28computer%20scientist%29
John Mathieson is a British computer chip designer who initially worked for Sinclair Research on the cancelled Loki computer project before co-founding Flare with ex-Sinclair colleagues Martin Brennan and Ben Cheese. After working at Flare on the Flare 1 and its development into the Konix Multisystem, he worked for Atari Corporation developing the Atari Panther video game console. It was abandoned in favor of its successor, the Atari Jaguar. The Jaguar was commercially released in the United States on November 23, 1993. Mathieson has been called "the father of the Jaguar." After leaving Atari, Mathieson worked on the development of the ill-fated NUON media processor at VM Labs. He moved to work for Nvidia at the end of 2001. As Director of Mobile Systems Architecture at Nvidia Corp. he led the system architecture team for three generations of the Tegra applications processor. References External links http://www.vmlabs.de/team.htm - List of VM Labs team with picture of John https://www.linkedin.com/in/johnmathieson/ - Linked-In profile Sinclair Research Year of birth missing (living people) Living people British computer scientists
John Mathieson (computer scientist)
[ "Technology" ]
252
[ "Computing stubs", "Computer specialist stubs" ]
58,161
https://en.wikipedia.org/wiki/Dust%20devil
A dust devil (also known regionally as a dirt devil) is a strong, well-formed, and relatively short-lived whirlwind. Its size ranges from small (18 in/half a metre wide and a few yards/metres tall) to large (more than 30 ft/10 m wide and more than half a mile/1 km tall). The primary vertical motion is upward. Dust devils are usually harmless, but can on rare occasions grow large enough to pose a threat to both people and property. They are comparable to tornadoes in that both are a weather phenomenon involving a vertically oriented rotating column of wind. Most tornadoes are associated with a larger parent circulation, the mesocyclone on the back of a supercell thunderstorm. Dust devils form as a swirling updraft under sunny conditions during fair weather, rarely coming close to the intensity of a tornado. Formation Dust devils form when a pocket of hot air near the surface rises quickly through cooler air above it, forming an updraft. If conditions are just right, the updraft may begin to rotate. As the air rapidly rises, the column of hot air is stretched vertically, thereby moving mass closer to the axis of rotation, which causes intensification of the spinning effect by conservation of angular momentum. The secondary flow in the dust devil causes other hot air to speed horizontally inward to the bottom of the newly forming vortex. As more hot air rushes in toward the developing vortex to replace the air that is rising, the spinning effect becomes further intensified and self-sustaining. A dust devil, fully formed, is a funnel-like chimney through which hot air moves, both upwards and in a circle. As the hot air rises, it cools, loses its buoyancy and eventually ceases to rise. As it rises, it displaces air which descends outside the core of the vortex. This cool air returning acts as a balance against the spinning hot-air outer wall and keeps the system stable. The spinning effect, along with surface friction, usually will produce a forward momentum. The dust devil may be sustained if it moves over nearby sources of hot surface air. As available hot air near the surface is channelled up the dust devil, eventually surrounding cooler air will be sucked in. Once it occurs, the effect is dramatic, and the dust devil dissipates in seconds. Usually it occurs when the dust devil is moving slowly (depletion) or begins to enter a terrain where the surface temperatures are cooler. Certain conditions increase the likelihood of dust devil formation. Flat barren terrain, desert or tarmac: Flat conditions increase the likelihood of the hot-air "fuel" being a near constant. Dusty or sandy conditions will cause particles to become caught up in the vortex, making the dust devil easily visible, but are not necessary for the formation of the vortex. Clear skies or lightly cloudy conditions: The surface needs to absorb significant amounts of solar energy to heat the air near the surface and create ideal dust devil conditions. Light or no wind and cool atmospheric temperature: The underlying factor for sustainability of a dust devil is the extreme difference in temperature between the near-surface air and the atmosphere. Windy conditions will destabilize the spinning effect of a dust devil. Intensity and duration Many dust devils are usually small and weak, often less than 3 feet (0.9 m) in diameter with maximum winds averaging about 45 miles per hour (70 km/h), and they often dissipate less than a minute after forming. 
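The spin-up described in the formation section above follows from conservation of angular momentum. As a rough, idealised illustration (friction neglected, with numbers chosen only for the example, not taken from measurements):

```latex
% Angular momentum of an air parcel of mass m circulating at speed v
% at radius r from the axis of rotation:
L = m\,v\,r = \text{const.} \quad\Longrightarrow\quad v_2 = v_1\,\frac{r_1}{r_2}
% Example: a parcel drifting at v_1 = 1 m/s at r_1 = 10 m from the axis,
% drawn inward to r_2 = 1 m, would ideally spin up to
v_2 = 1\ \tfrac{\text{m}}{\text{s}} \times \frac{10\ \text{m}}{1\ \text{m}} = 10\ \tfrac{\text{m}}{\text{s}}
```

Surface friction and turbulence keep real dust devils well below this ideal limit, but the scaling shows why air drawn inward toward the axis acquires most of the rotation.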
On rare occasions, a dust devil can grow very large and intense, sometimes reaching a diameter of up to 300 feet (90 m) with winds in excess of 60 mph (100 km/h+), and lasting for upwards of 20 minutes before dissipating. Because of their small diameter, the Coriolis force is not significant in the dust devil itself, so dust devils with anticyclonic rotation do occur. Hazards Dust devils typically do not cause injuries, but rare, severe dust devils have caused damage and even deaths in the past. One such dust devil struck the Coconino County Fairgrounds in Flagstaff, Arizona, on September 14, 2000, causing extensive damage to several temporary tents, stands and booths, as well as some permanent fairgrounds structures. Several injuries were reported, but there were no fatalities. Based on the degree of damage left behind, it is estimated that the dust devil produced winds as high as 75 mph (120 km/h), which is equivalent to an EF0 tornado. On May 19, 2003, a dust devil lifted the roof off a two-story building in Lebanon, Maine, causing it to collapse and kill a man inside (NCDC: Event Details, National Climatic Data Center, retrieved 2008-06-05). On June 18, 2008, a woman near Casper, Wyoming was killed when a dust devil caused a small scorer's shed at a youth baseball field to flip on top of her. She had been trying to shelter from the dust devil by going behind the shed. In East El Paso, Texas, in 2010, three children in an inflatable jump house were picked up by a dust devil and lifted over 10 feet (3 m), travelling over a fence and landing in a backyard three houses away. This rare weather incident was the subject of a United States Air Force Weather Squadron study (Clarence Giles, "Air Force Weather Squadron forecasts, studies weather to keep servicemembers safe", Fort Bliss Bugle, Unit News p. 1A, January 12, 2011; archived at https://web.archive.org/web/20150518114436/http://fortblissbugle.com/air-force-weather-squadron-forecasts-studies-weather-to-keep-servicemembers-safe/). In Commerce City, Colorado in 2018, a powerful dust devil hurled two porta-potties into the air; no one was injured. In 2019, a large dust devil in Yucheng county, Henan province, China, killed two children and injured 18 children and two adults when an inflatable jump house was lifted into the air. Dust devils have been implicated in around 100 aircraft accidents. While many incidents have been simple taxiing problems, a few have had fatal consequences. Dust devils are also considered major hazards among skydivers and paragliding pilots as they can cause a parachute or a paraglider to collapse with little to no warning, at altitudes considered too low to cut away, and contribute to the serious injury or death of parachutists. Such was the case on June 1, 1996, when a dust devil caused a skydiver's parachute to collapse a short distance above the ground. He later died from the injuries he sustained. Dust devils can also contribute to wildfires. One case occurred in Engebæk, Billund Municipality, Denmark, in 1868, where a dust devil tossed a tuft into a heater, causing a wildfire that possibly extended from 10,000 to 50,000 hectares or more. Electrical activities Dust devils, even small ones, can produce radio noise and electrical fields greater than 10,000 volts per meter. A dust devil picks up small dirt and dust particles. As the particles whirl around, they become electrically charged through contact or frictional charging (triboelectrification).
The whirling charged particles also create a magnetic field that fluctuates at between 3 and 30 times per second. These electric fields may assist the vortices in lifting material off the ground and into the atmosphere. Field experiments indicate that a dust devil can lift 1 gram of dust per second from each square metre (10 lb/s from each acre) of ground over which it passes. A large dust devil measuring about 100 metres (330 ft) across at its base can lift about 15 metric tonnes (17 short tons) of dust into the air in 30 minutes. Giant dust storms that sweep across the world's deserts contribute 8% of the mineral dust in the atmosphere each year during the handful of storms that occur. In comparison, the significantly smaller dust devils that twist across the deserts during the summer lift about three times as much dust, thus having a greater combined impact on the dust content of the atmosphere. When this occurs, they are often called sand pillars. Martian dust devils Alternate names In Australia, a dust devil is more commonly known as a "willy willy". In Ireland, dust devils are known as "sí gaoithe" or "fairy wind". Related phenomena Ash devils Hot cinders underneath freshly deposited ash in recently burned areas may sometimes generate numerous dust devils. The lighter weight and the darker color of the ash may create dust devils that are visible hundreds of feet into the air. Ash devils form similarly to dust devils and are often seen on unstable days in burn scar areas of recent fires. Coal devils are common at the coal town of Tsagaan Khad in South Gobi Province, Mongolia. They occur when dust devils pick up large amounts of stockpiled coal. Their dark color makes them resemble some tornadoes. Fire whirls Fire whirls or swirls, sometimes called fire devils or fire tornadoes, can be seen during intense fires in combustible building structures or, more commonly, in forest or bush fires. A fire whirl is a vortex-shaped formation of burning gases being released from the combustible material. The genesis of the vortex is probably similar to a dust devil's. Unlike a dust devil, the fire-gas vortex probably does not reach much above the visible height of the vertical flames, because turbulence in the surrounding gases inhibits the creation of a stable boundary layer between the rotating, rising gases and their surroundings. Hay devils A "hay devil" is a gentle whirlwind that forms in the warm air above fields of freshly-cut hay. A vortex forms from a column of hot air rising from the ground on calm, sunny days, tossing and swirling stalks and clumps of hay harmlessly through the air, often to the delight of children and onlookers. Snow devils The same conditions can produce a snow whirlwind or snow devil, sometimes referred to as a "snownado", although differential heating is more difficult in snow-covered areas. Steam devils Steam devils are small vortex columns of saturated air, of varying height but small diameter, that form when cold air lies over a much warmer body of water or saturated surface. They are also often observed in the steam rising from power plants. References External links Australian Dust Devil Photos Dancing with the Devils Video Dust Devil Imaged by Spirit Rover on Mars Matador Dust Devil Project Page with many movies of Martian dust devils as seen by Spirit, with enhanced images as well as ratings. The Bibliography of Aeolian Research Dust storms Wind Vortices Mars Articles containing video clips Microscale meteorology
Dust devil
[ "Chemistry", "Mathematics" ]
2,237
[ "Dynamical systems", "Vortices", "Fluid dynamics" ]
58,171
https://en.wikipedia.org/wiki/AirPort
AirPort is a discontinued line of wireless routers and network cards developed by Apple Inc. using Wi-Fi protocols. In Japan, the line of products was marketed under the brand AirMac due to previous registration by I-O Data. Apple introduced the AirPort line in 1999. Wireless cards were discontinued in 2009 following the Mac transition to Intel processors, after all of Apple's Mac products had adopted built-in Wi-Fi. Apple's line of wireless routers consisted of the AirPort Base Station (later AirPort Extreme); the AirPort Time Capsule, a variant with a built-in hard disk for automated backups; and the AirPort Express, a compact router. In 2018, Apple discontinued the AirPort line. The remaining inventory was sold off, and Apple later retailed routers from Linksys, Netgear, and Eero in Apple retail stores. Overview AirPort debuted in 1999, as "one more thing" at Macworld New York, with Steve Jobs surfing the web on an iBook using wireless internet technology for the very first time in a public demo of an Apple laptop. The initial offering consisted of an optional expansion card for Apple's new line of iBook notebooks and an AirPort Base Station. The AirPort card (a repackaged Lucent ORiNOCO Gold Card PC Card adapter) was later added as an option for almost all of Apple's product line, including PowerBooks, eMacs, iMacs, and Power Macs. Only Xserves did not have it as a standard or optional feature. The original AirPort system allowed transfer rates up to 11 Mbit/s and was commonly used to share Internet access and files between multiple computers. In 2003, Apple introduced AirPort Extreme, based on the 802.11g specification, using Broadcom's BCM4306/BCM2050 two-chip solution. AirPort Extreme allows theoretical peak data transfer rates of up to 54 Mbit/s, and is fully backward-compatible with existing 802.11b wireless network cards and base stations. Several of Apple's desktop computers and portable computers, including the MacBook Pro, MacBook, Mac Mini, and iMac shipped with an AirPort Extreme (802.11g) card as standard. All other Macs of the time had an expansion slot for the card. AirPort and AirPort Extreme cards are not physically compatible: AirPort Extreme cards cannot be installed in older Macs, and AirPort cards cannot be installed in newer Macs. The original AirPort card was discontinued in June 2004. In 2004, Apple released the AirPort Express base station as a "Swiss Army knife" multifunction product. It can be used as a portable travel router, using the same AC connectors as on Apple's AC adapters; as an audio streaming device, with both line-level and optical audio outputs; and as a USB printer sharing device, through its USB host port. In 2007, Apple unveiled a new AirPort Extreme (802.11 Draft-N) Base Station, which introduced 802.11 Draft-N to the Apple AirPort product line. This implementation of 802.11 Draft-N can operate in both the 2.4 GHz and 5 GHz ISM bands, and has modes that make it compatible with 802.11b/g and 802.11a. The number of Ethernet ports was increased to four—one nominally for WAN, three for LAN, but all can be used in bridged mode. A USB port was included for printers and other USB devices. The Ethernet ports were later updated to Gigabit Ethernet on all ports. The styling is similar to that of the Mac Mini and Apple TV. In January 2008, Apple introduced Time Capsule, an AirPort Extreme (802.11 Draft-N) with an internal hard drive. 
The device includes software to allow any computer running a reasonably recent version of Mac OS or Windows to access the disk as a shared volume. Macs running Mac OS X 10.5 and later, which includes the Time Machine feature, can use the Time Capsule as a wireless backup device, allowing automatic, untethered backups of the client computer. As an access point, the unit is otherwise equivalent to an AirPort Extreme (802.11 Draft-N), with four Gigabit Ethernet ports and a USB port for printer and disk sharing. In March 2008, Apple released an updated AirPort Express Base Station with 802.11 Draft-N 2x2 radio. All other features (analog and digital optical audio out, single Ethernet port, USB port for printer sharing) remained the same. At the time, it was the least expensive ($99) device to handle both frequency bands (2.4 GHz and 5 GHz) in 2x2 802.11 Draft-N. In March 2009, Apple unveiled AirPort Extreme and Time Capsule products with simultaneous dual-band 802.11 Draft-N radios. This allows full 802.11 Draft-N 2x2 communication in both 802.11 Draft-N bands at the same time. In October 2009, Apple unveiled the updated AirPort Extreme and Time Capsule products with antenna improvements (the 5.8 GHz model). In 2011, Apple unveiled an updated AirPort Extreme base station, referred to as AirPort Extreme 802.11n (5th Generation). The latest AirPort base stations and cards work with third-party base stations and wireless cards that conformed to the 802.11a, 802.11b, 802.11g, 802.11 Draft-N, and 802.11 Final-N networking standards. It was not uncommon to see wireless networks composed of several types of AirPort base station serving old and new Macintosh, Microsoft Windows, and Linux systems. Apple's software drivers for AirPort Extreme also supported some Broadcom and Atheros-based PCI Wireless adapters when fitted to Power Mac computers. Due to the developing nature of Draft-N hardware, there was no assurance that the new model would work with all 802.11 Draft-N routers and access devices from other manufacturers. Discontinuation In approximately 2016, Apple disbanded its wireless router team. In 2018, Apple formally discontinued all of its AirPort products, exiting the router market. Bloomberg News noted that "Apple rarely discontinues product categories" and that its decision to leave the business was "a boon for other wireless router makers." AirPort routers An AirPort router is used to connect AirPort-enabled computers to the Internet, each other, a wired LAN, and/or other devices. AirPort Base Station The original AirPort Base Station (known as Graphite, model M5757, part number M7601LL/B) features a dial-up modem and an Ethernet port. It employs a Lucent WaveLAN Silver PC Card as the Radio, and uses an embedded AMD Élan SC410 processor. It connects to the machine via the Ethernet port. It was released July 21, 1999. The Graphite AirPort Base Station is functionally identical to the Lucent RG-1000 wireless base station and can run the same firmware. Due to the original firmware-locked limitations of the Silver card, the unit can only accept 40-bit WEP encryption. Later aftermarket tweaks can enable 128-bit WEP on the Silver card. Aftermarket Linux firmware has been developed for these units to extend their useful service life. A second-generation model (known as Dual Ethernet or Snow, model M8440, part number M8209LL/A) was introduced on November 13, 2001. 
It features a second Ethernet port compared to the Graphite design, allowing a shared Internet connection with both wired and wireless clients. Also new (but available for the original model via software update) was the ability to connect to and share America Online's dial-up service—a feature unique to Apple base stations. This model was based on Motorola's PowerPC 855 processor and contained a fully functional original AirPort Card, which can be removed and used in any compatible Macintosh computer. AirPort Extreme Base Station Three different configurations of model A1034 are all called the "AirPort Extreme Base Station":
1. M8799LL/A – 2 Ethernet ports, 1 USB port, external antenna connector, 1 56k (V.90) modem port
2. M8930LL/A – 2 Ethernet ports, 1 USB port, external antenna connector
3. M9397LL/A – 2 Ethernet ports, 1 USB port, external antenna connector, powered over Ethernet cable (PoE/UL 2043)
The AirPort Base Station was discontinued after the updated AirPort Extreme was announced on January 7, 2003. In addition to providing wireless connection speeds of up to 54 Mbit/s, it adds an external antenna port and a USB port. The antenna port allows the addition of a signal-boosting antenna, and the USB port allows the sharing of a USB printer. A connected printer is made available via Bonjour's "zero configuration" technology and IPP to all wired and wireless clients on the network. The CPU is an Alchemy AU1500-333MBC processor. A second model (M8930LL/A) lacking the modem and external antenna port was briefly made available, but then discontinued after the launch of AirPort Express (see below). On April 19, 2004, a third version, marketed as the AirPort Extreme Base Station (with Power over Ethernet and UL 2043), was introduced that supports Power over Ethernet and complies with the UL 2043 specifications for safe usage in air handling spaces, such as above suspended ceilings. All three models support the Wireless Distribution System (WDS) standard. The model introduced in January 2007 does not have a corresponding PoE, UL-compliant variant. An AirPort Extreme base station can serve a maximum of 50 wireless clients simultaneously. AirPort Extreme 802.11n The AirPort Extreme was updated on January 9, 2007, to support the 802.11n protocol. This revision also adds two LAN ports for a total of three. It now more closely resembles the square-shaped 1st generation Apple TV and Mac Mini, and is about the same size as the mini. The new AirPort Disk feature allows users to plug a USB hard drive into the AirPort Extreme for use as a network-attached storage (NAS) device for Mac OS X and Microsoft Windows clients. Users may also connect a USB hub and printer. The performance of USB hard drives attached to an AirPort Extreme is slower than if the drive were connected directly to a computer. This is due to the limited processor speed of the AirPort Extreme. Depending on the setup and types of reads and writes, performance ranges from 0.5 to 17.5 MB/s for writing and 1.9 to 25.6 MB/s for reading. Performance for the same disk connected directly to a computer would be 6.6 to 31.6 MB/s for writing and 7.1 to 37.2 MB/s for reading. The AirPort Extreme has no port for an external antenna. On August 7, 2007, the AirPort Extreme began shipping with Gigabit Ethernet, matching most other Apple products.
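To put the throughput figures quoted above into perspective, the sketch below converts them into rough copy times. It is only a back-of-the-envelope illustration; the 1 GB file size is an arbitrary assumption, not a figure from the article.

```python
# Rough copy-time comparison using the write-throughput ranges quoted in the
# text for a USB disk on an AirPort Extreme vs. the same disk attached
# directly to a computer. The 1 GB file size is an arbitrary example.

FILE_SIZE_MB = 1024  # 1 GB, chosen only for illustration

write_rates_mb_per_s = {
    "AirPort Extreme, worst case": 0.5,
    "AirPort Extreme, best case": 17.5,
    "Direct attach, worst case": 6.6,
    "Direct attach, best case": 31.6,
}

for label, rate in write_rates_mb_per_s.items():
    minutes = FILE_SIZE_MB / rate / 60
    print(f"{label:30s} {rate:5.1f} MB/s -> about {minutes:5.1f} minutes")
```

Even in the best case, writing over the network takes nearly twice as long as writing to the same disk attached directly, which is the gap the article attributes to the base station's processor.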
On March 19, 2008, Apple released a firmware update for both models of the AirPort Extreme to allow AirPort Disks to be used in conjunction with Time Machine, similar to the functionality provided by Time Capsule. On March 3, 2009, Apple unveiled a new AirPort Extreme with simultaneous dual-band 802.11 Draft-N radios. This allows full 802.11 Draft-N 2x2 communication in both 802.11 Draft-N bands at the same time. On October 20, 2009, Apple unveiled an updated AirPort Extreme base station with antenna improvements. On June 21, 2011, Apple unveiled an updated AirPort Extreme base station, referred to as AirPort Extreme 802.11n (5th Generation). AirPort Express The AirPort Express is a simplified and compact AirPort Extreme base station. It allows up to 50 networked users, and includes a feature called AirTunes (predecessor to AirPlay). The original version (M9470LL/A, model A1084) was introduced by Apple on June 7, 2004, and includes an analog–optical audio mini-jack output, a USB port for remote printing or charging the iPod (iPod Shuffle only), and a single Ethernet port. The USB port cannot be used to connect a hard disk or other storage device. The AirPort Express functions as a wireless access point when connected to an Ethernet network. It can be used as an Ethernet-to-wireless bridge under certain wireless configurations. It can be used to extend the range of a network, or as a printer and audio server. In 2012, the AirPort Express took on a new shape, similar to that of the second and third generation Apple TV. The new product also features two 10/100 Mbit/s Ethernet LAN ports. AirPort Time Capsule The AirPort Time Capsule is a version of AirPort Extreme with a built-in hard drive currently coming in either 2 TB or 3 TB sizes, with a previous version having 1 TB or 500 GB. It features a built-in design that, when used with Time Machine in Mac OS X Leopard, automatically makes incremental data backups. Acting as a wireless file server, AirPort Time Capsule can serve to back up multiple Macs. It also includes all AirPort Extreme (802.11 Draft-N) functionality. On March 3, 2009, the Time Capsule was updated with simultaneous dual-band 802.11 Draft-N capability, remote AirPort Disk accessibility through Back to My Mac, and the ability to broadcast a guest network at the same time as an existing network. On October 20, 2009, Apple unveiled the updated Time Capsule with antenna improvements resulting in wireless performance gains of both speed and range. Also stated is a resulting performance improvement/time reduction on Time Capsule backups of up to 60%. In June 2011, Apple unveiled the updated Time Capsule with a higher capacity 2 TB and 3 TB. They also changed the wireless card from a Marvell chip to a Broadcom BCM4331 chip. When used in conjunction with the latest 2011 MacBooks, MacBook Pros, and MacBook Airs (which also use a Broadcom BCM4331 wireless chip), the wireless signal is improved thanks to Broadcom's Frame Bursting technology. On June 10, 2013, Apple renamed the Time Capsule to the AirPort Time Capsule and added support for the 802.11ac standard. AirPort cards Apple produced numerous wireless card used to connect to wireless networks such as those provided by an AirPort Base Station. AirPort 802.11b card The original model, known as simply AirPort card, was a re-branded Lucent WaveLAN/Orinoco Gold PC card, in a modified housing that lacked the integrated antenna. It was designed to be capable of being user-installable. 
It was also modified in such a way that it could not be used in a regular PCMCIA slot (at the time it was significantly cheaper than the official WaveLAN/Orinoco Gold card). An AirPort card adapter is required to use this card in the slot-loading iMacs. AirPort Extreme 802.11g cards Coinciding with the release of the AirPort Extreme Base Station, the AirPort Extreme card became available as an option on the then-current models. It is based on a Broadcom 802.11g chipset and is housed in a custom form factor, but is electrically compatible with the Mini PCI standard. It was also capable of being user-installed. Variants of the user-installable AirPort Extreme card are marked A-1010 (early North American spec), A-1026 (current North American spec), A-1027 (Europe/Asia spec (additional channels)) and A-1095 (unknown). A different 802.11g card was included in the last iteration of the PowerPC-based PowerBooks and iBooks. A major distinction for this card was that it was the first "combo" card that included both 802.11g and Bluetooth. It was also the first card that was not user-installable. It was again a custom form factor, but was still electrically a Mini PCI interface for the Broadcom WLAN chip. A separate USB connection was used for the on-board Bluetooth chip. The AirPort Extreme (802.11g) card was discontinued in January 2009. Integrated AirPort Extreme 802.11a/b/g and /n cards As 802.11g became standard on all notebook models, Apple phased out the user-installable designs in its notebooks, iMacs and Mac Minis by mid-2005, moving to an integrated design. AirPort continued to be an option, either installed at purchase or later, on the Power Mac G5 and the Mac Pro. With the introduction of the Intel-based MacBook Pro in January 2006, Apple began to use a standard PCI Express mini card. The particular brand and model of card has changed over the years; in early models, it was Atheros brand, while since late 2008 they have been Broadcom cards. This distinction is mostly of concern to those who run other operating systems such as Linux on MacBooks, as different cards require different device drivers. The MacBook Air Mid 2012 13", MacBook Air Mid 2011 13" and MacBook Air Late 2010 (11", A1370 and 13", Model A1369) each use a Broadcom BCM 943224 PCIEBT2 Wi-Fi card (main chip BCM43224: 2 × 2 2.4 GHz and 5 GHz). The MacBook Pro Retina Mid 2012 uses Broadcom BCM94331CSAX (main chip BCM4331: 3 × 3 2.4 GHz and 5 GHz, up to 450 Mbit/s). In early 2007, Apple announced that most Intel Core 2 Duo-based Macs, which had been shipping since November 2006, already included AirPort Extreme cards compatible with the 802.11 Draft-N specification. Apple also offered an application to enable 802.11 Draft-N functionality on these Macs for a fee of $1.99, or free with the purchase of an AirPort Extreme base station. Starting with Leopard, the Draft-N functionality was quietly enabled on all Macs that had Draft-N cards. This card was also a PCI Express mini design, but used three antenna connectors in the notebooks and iMacs, in order to use a 2 × 3 MIMO antenna configuration. The cards in the Mac Pro and Apple TV have two antenna connectors and support a 2 × 2 configuration. The Network Utility application located in Applications → Utilities can be used to identify the model and supported protocols of an installed AirPort card. Integrated AirPort Extreme 802.11ac cards The MacBook Air Mid 2013 uses a Broadcom BCM94360CS2 (main chip BCM4360: 2 × 2 : 2).
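Besides the Network Utility route mentioned above, the installed card's details can also be read from the command line on reasonably recent versions of Mac OS X / macOS via the system_profiler tool. The snippet below is only a sketch: it assumes the SPAirPortDataType report is available on the machine, and the field names it filters for ("Card Type", "Firmware Version", "Supported PHY Modes") are typical but vary between OS releases.

```python
# Minimal sketch: query macOS for the installed AirPort/Wi-Fi card details.
# Assumes the `system_profiler` tool and its SPAirPortDataType report are
# present; exact field names differ between OS releases.
import subprocess

def airport_card_report() -> str:
    """Return the raw AirPort/Wi-Fi hardware report from system_profiler."""
    result = subprocess.run(
        ["system_profiler", "SPAirPortDataType"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    report = airport_card_report()
    # Print only the lines that usually identify the card and its protocols.
    for line in report.splitlines():
        if any(key in line for key in ("Card Type", "Firmware Version", "Supported PHY Modes")):
            print(line.strip())
```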
Security AirPort and AirPort Extreme support a variety of security technologies to prevent eavesdropping and unauthorized network access, including several forms of cryptography. The original graphite AirPort base station used 40-bit Wired Equivalent Privacy (WEP). The second-generation AirPort base station (known as Dual Ethernet or Snow), like most other Wi-Fi products, used 40-bit or 128-bit Wired Equivalent Privacy (WEP). AirPort Extreme and Express base stations retain this option, but also allow and encourage the use of Wi-Fi Protected Access (WPA) and, as of July 14, 2005, WPA2. AirPort Extreme cards, which use the Broadcom chipset, have the media access control layer in software. The driver is closed source. AirPort Disk The AirPort Disk feature shares a hard disk connected to an AirPort Extreme or Time Capsule (though not AirPort Express) as a small-scale NAS. AirPort Disk can be accessed from Windows and Linux as well as Mac OS X using the SMB/CIFS protocol for FAT volumes, and both SMB/CIFS and AFP for HFS+ partitions. NTFS- or exFAT-formatted volumes are not supported. Although Windows does not natively support HFS+, an HFS+ volume on an AirPort Disk can be easily accessed from Windows. This is because the SMB/CIFS protocol used to access the disk is filesystem-independent, and hence so is access from Windows. Therefore, HFS+ is a viable option for Windows as well as OS X users, and more flexible than FAT32, as the latter has a 4 GiB file size limit. Recent firmware versions cause the internal disk and any external USB drives to sleep after periods of time as short as 2 minutes. A caveat of the use of AirPort Disk is that the AFP port 548 is reserved for the service, which then does not allow for simultaneous use of port forwarding to provide AFP services to external users. This is also true of a Time Capsule set up for use as a network-based Time Machine backup location, its main purpose and default configuration. An AirPort administrator must choose between using AirPort Disk and providing remote access to AFP services. The AirPort Extreme or Time Capsule will recognize multiple disks connected via a USB hub. See also AirDrop AirPrint iTunes Sleep Proxy Service Timeline of Apple products Wireless LAN IEEE 802.11 Notes External links AirPort products (archived 2013-06-07) All AirPort products AirPort manuals AirPort software compatibility table Apple AirPort 802.11 N first look at ifixit Apple Inc. peripherals Macintosh internals Wi-Fi ITunes Computer-related introductions in 1999 Products and services discontinued in 2018
AirPort
[ "Technology" ]
4,425
[ "Wireless networking", "Wi-Fi" ]
58,180
https://en.wikipedia.org/wiki/Freckle
Freckles are clusters of concentrated melaninized cells which are most easily visible on people with a fair complexion. Freckles do not have an increased number of melanin-producing cells, or melanocytes, but instead have melanocytes that overproduce melanin granules (melanosomes), changing the coloration of the outer skin cells (keratinocytes). As such, freckles are different from lentigines and moles, which are caused by accumulation of melanocytes in a small area. Freckles can appear on all skin tones. Of the six Fitzpatrick skin types, they are most common on types 1 and 2, which usually belong to North Europeans. However, they can also be found on people all over the world. In England a historical term for freckles is summer-voys, sometimes spelt summervoise, which may be related to the German term Sommersprossen. Biology The formation of freckles is caused by exposure to sunlight. Exposure to UV-B radiation activates melanocytes to increase melanin production, which can cause freckles to become darker and more visible. This means that a person who has never developed freckles may develop them suddenly following extended exposure to sunlight. Freckles are predominantly found on the face, although they may appear on any skin exposed to the sun, such as the arms or shoulders. Heavy concentrations of melanin may cause freckles to multiply and cover an entire area of skin, such as the face. Freckles are rare on infants, and more commonly found on children before puberty. Upon exposure to the sun, freckles will reappear if they have been altered with creams or lasers and not protected from the sun, but they do fade with age in some cases. Freckles are not a skin disorder, but people with freckles generally have a lower concentration of photo-protective melanin and are therefore more susceptible to the harmful effects of UV radiation. It is suggested that people whose skin tends to freckle should avoid overexposure to sun and use sunscreen. Genetics The presence of freckles is related to rare alleles of the MC1R gene, though carrying one or even two copies of these alleles does not determine whether an individual will have freckles. Individuals with no copies of these MC1R variants also sometimes display freckles. Even so, individuals with a high number of freckling sites have one or more variants of the MC1R gene. Of the MC1R variants, Arg151Cys, Arg160Trp, and Asp294His are the most common in freckled subjects. The MC1R gene is also associated with red hair, more strongly than with freckles. Most red-haired individuals carry two variant copies of the MC1R gene, and almost all carry at least one. The variants that cause red hair are the same ones that cause freckling. Freckling can also be found in areas, such as Japan, where red hair is not seen. These individuals have the variant Val92Met, which is also found in Europeans, although it has minimal effects on their pigmentation. The R162Q allele has a disputed involvement in freckling. The variants of the MC1R gene that are linked with freckles started to emerge in the human genotype when humans started to leave Africa. The variant Val92Met arose somewhere between 250,000 and 100,000 years ago, long enough for this gene to be carried by humans into central Asia. Arg160Trp is estimated to have arisen around 80,000 years ago, while Arg151Cys and Asp294His are estimated to have arisen around 30,000 years ago.
The wide variation of the MC1R gene exists in people of European descent because of the lack of strong environmental pressures on the gene. The original allele of MC1R is coded for dark skin with a high melanin content in the cells. The high melanin content is protective in areas of high UV light exposure. The need was less as humans moved into higher latitudes where incoming sunlight has lower UV light content. The adaptation of lighter skin is needed so that individuals in higher latitudes can still absorb enough UV for the production of vitamin D. Freckled individuals tend to tan less and have very light skin, an adaptation to allow individuals that expressed these genes to synthesise sufficient vitamin D. Types Ephelides describes a freckle that is flat and light brown or red and fades with a reduction of sun exposure. Ephelides are more common in those with light complexions, although they are found on people with a variety of skin tones. The regular use of sunblock can inhibit their development. Liver spots (also known as sunspots and lentigines) look like large freckles, but they form after years of exposure to the sun. Liver spots are more common in older people. In culture Western societies traditionally perceived freckles as imperfections. For example, Pliny the Elder described freckles as spiritual and religious stains. This perception changed after the mid-20th century, when a tan, and the freckles associated with it, came to be desired as a status symbol indicating a life of leisure. Freckles became increasingly fashionable in the late 20th century as a result of the 1960s "youthquake" movement and through association with popular figures such as the model Twiggy and the musician Jane Birkin. In the early 2000s, freckled models were a trend in advertising and fashion. By the 2020s, the popularity of freckle tattoos further increased due to their popularity on social media such as TikTok. See also Beauty mark List of Mendelian traits in humans Melanocortin 1 receptor Mole References External links MedicineNet.com: Freckles Human skin color Skin conditions resulting from physical factors Melanocytic nevi and neoplasms
Freckle
[ "Biology" ]
1,251
[ "Human skin color", "Pigmentation" ]
58,198
https://en.wikipedia.org/wiki/Mikhail%20Bakunin
Mikhail Alexandrovich Bakunin ( ; – 1 July 1876) was a Russian revolutionary anarchist. He is among the most influential figures of anarchism and a major figure in the revolutionary socialist, social anarchist, and collectivist anarchist traditions. Bakunin's prestige as a revolutionary also made him one of the most famous ideologues in Europe, gaining substantial influence among radicals throughout Russia and Europe. Bakunin grew up in Pryamukhino, a family estate in Tver Governorate. From 1840, he studied in Moscow, then in Berlin hoping to enter academia. Later in Paris, he met Karl Marx and Pierre-Joseph Proudhon, who deeply influenced him. Bakunin's increasing radicalism ended hopes of a professorial career. He was expelled from France for opposing the Russian Empire's occupation of Poland. After participating in the 1848 Prague and 1849 Dresden uprisings, Bakunin was imprisoned, tried, sentenced to death, and extradited multiple times. Finally exiled to Siberia in 1857, he escaped via Japan to the United States and then to London, where he worked with Alexander Herzen on the journal Kolokol (The Bell). In 1863, Bakunin left to join the insurrection in Poland, but he failed to reach it and instead spent time in Switzerland and Italy. In 1868, Bakunin joined the International Workingmen's Association, leading the anarchist faction to rapidly grow in influence. The 1872 Hague Congress was dominated by a struggle between Bakunin and Marx, who was a key figure in the General Council of the International and argued for the use of the state to bring about socialism. In contrast, Bakunin and the anarchist faction argued for the replacement of the state by federations of self-governing workplaces and communes. Bakunin could not reach the Netherlands, and the anarchist faction lost the debate in his absence. Bakunin was expelled from the International for maintaining, in Marx's view, a secret organisation within the International, and founded the Anti-Authoritarian International in 1872. From 1870 until his death in 1876, Bakunin wrote his longer works such as Statism and Anarchy and God and the State, but he continued to directly participate in European worker and peasant movements. In 1870, he was involved in an insurrection in Lyon, France. Bakunin sought to take part in an anarchist insurrection in Bologna, Italy, but his declining health forced him to return to Switzerland in disguise. Bakunin is remembered as a major figure in the history of anarchism, an opponent of Marxism, especially of the dictatorship of the proletariat; and for his predictions that Marxist regimes would be one-party dictatorships ruling over the proletariat, not rule by the proletariat. His book God and the State has been widely translated and remains in print. Bakunin has had a significant influence on thinkers such as Peter Kropotkin, Errico Malatesta, Herbert Marcuse, E. P. Thompson, Neil Postman and A. S. Neill as well as syndicalist organizations such as the IWW, the anarchists in the Spanish Civil War and contemporary anarchists involved in the modern-day anti-globalization movement. Life Early life On , Mikhail Aleksandrovich Bakunin was born into Russian nobility. His family's Priamukhino estate, in the Tver region northwest of Moscow, had over 500 serfs. His father, Alexander Mikhailovich Bakunin, was a Russian diplomat who had served in Italy. 
Upon returning to Priamukhino and marrying the much younger Varvara Aleksandrovna Muravyeva, the elder Bakunin raised his ten children in the Rousseauan pedagogic model. Mikhail Bakunin, their third child and oldest son, read the languages, literature, and philosophy of the period and described his youth as idyllic and sheltered from the realities of Russian life. As an early teenager, he began training for a military career at the St. Petersburg Artillery School, which he rejected. Becoming an officer in 1833, he availed himself of the freedom to participate in the city's social life, but was unfulfilled. Derelict in his studies, he was sent to Belarus and Lithuania as punishment in early 1834, where he read academic theory and philosophy. He deserted the school in 1835 and only escaped arrest through his familial influence. He was discharged at the end of the year and, despite his father's protests, left for Moscow to pursue a career as a mathematics teacher. Bakunin lived a bohemian, intellectual life in Moscow, where German Romantic literature and idealist philosophy were influential in the 1830s. In the intellectual circle of Nikolai Stankevich, Bakunin read German philosophy, from Kant to Fichte to Hegel, and published Russian translations of their works. Bakunin produced the first Russian translation of Hegel and was the foremost Russian expert on Hegel by 1837. Bakunin befriended Russian intellectuals including the literary critic Vissarion Belinsky, the poet Nikolay Ogarev, the novelist Ivan Turgenev, and the writer Alexander Herzen as youth prior to their careers. Herzen funded Bakunin to study at the University of Berlin in 1840. Bakunin's plans to return to Moscow as a professor were soon abandoned. In Berlin, Bakunin gravitated towards the Young Hegelians, an intellectual group with radical interpretations of Hegel's philosophy, and who drew Bakunin to political topics. He left Berlin in early 1842 for Dresden and met the Hegelian Arnold Ruge, who published Bakunin's first original publication. ("The Reaction in Germany") proposes a continuation of the French Revolution to the rest of Europe and Russia. Though steeped in Hegelian jargon and published under a pseudonym, it marked Bakunin's transition from philosophy to revolutionary rhetoric. Revolutionary activity and imprisonment Throughout the 1840s, Bakunin grew into revolutionary agitation. When his cadre aroused interest from Russian secret agents, Bakunin left for Zürich in early 1843. He met the proto-communist Wilhelm Weitling whose arrest led Bern's Russian embassy to distrust Bakunin. Defying Russian orders to return, the Russian Senate stripped him of his rights as a nobleman and sentenced him in absentia to penal labor in Siberia. Without steady financial support, Bakunin became an itinerant, traveling Europe meeting the people who had influenced him. He visited Brussels and Paris, where he joined international emigrants and socialists, befriended the anarchist Pierre-Joseph Proudhon, and met the philosopher Karl Marx, with whom he would later tussle. Bakunin only became personally active in political agitation in 1847, as Polish emigrants in Paris invited him to commemorate the 1830 Polish uprising with a speech. His call for Poles to overthrow czarism in alliance with Russian democrats made Bakunin known throughout Europe and led the Russian ambassador to successfully request Bakunin's deportation. 
When the French King Louis Philippe I abdicated during the February 1848 Revolution, Bakunin returned to Paris and basked in the revolutionary milieu. With the French government's support, he headed for Prussian Poland to agitate for revolt against Russia but never arrived. He attended the 1848 Prague Slavic Congress to defend Slavic rights against German and Hungarian nationalism, and participated in its impromptu insurrection against the Austrian Habsburgs. Uncaptured, he wrote Aufruf an die Slaven ("Appeal to the Slavs") at the end of the year, advocating for a Slavic federation and revolt against the Austrian, Prussian, Turkish, and Russian governments. It was widely read and translated. After participating in both the Prague uprising and the 1849 Dresden uprising, Bakunin was imprisoned, tried, sentenced to death, extradited multiple times, and ultimately placed in solitary confinement in the Peter and Paul Fortress of St. Petersburg, Russia, in 1851. Three years later, he was transferred to Shlisselburg Fortress near St. Petersburg for another three years. Prison weathered but did not break Bakunin, who retained his revolutionary zeal through his release. He did, however, write an autobiographical, genuflecting Confession to the Russian emperor, which proved to be a controversial document upon its public discovery some 70 years later. The letter did not improve his prison conditions. In 1857, Bakunin was permitted to transfer to permanent exile in Siberia. He married Antonia Kwiatkowska there before escaping in 1861, travelling first to Japan, then to San Francisco, sailing to Panama and then to New York and Boston, and arriving in London by the end of the year. Bakunin set foot in America just as the Civil War was breaking out. Speaking with supporters of both sides, Bakunin stated that his sympathies were with the North, although he charged the North with hypocrisy for proclaiming slave liberation while also forcing the South to remain in the Union. Bakunin also viewed the Southern political system and agrarian character as freer in some respects for its white citizens than the industrial North. Though a fierce critic and enemy of slavery, Bakunin held a deep admiration for the United States as a whole, referring to the country as “the finest political organization that ever existed in history.” Back in Europe In London, Bakunin reunited with Herzen and Ogarev. Bakunin collaborated with them on their Russian-language newspaper, but his revolutionary fervor exceeded their moderate reform agenda. Bakunin's 1862 pamphlet The People's Cause: Romanov, Pugachev, or Pestel? criticized the Russian tsar for not using his position to facilitate a bloodless revolution and forgo another Pugachev's Rebellion. In early August 1862, he briefly travelled to Paris, where the photographer Nadar took three well-known photographs of him on August 7, 1862. After being photographed, he also signed Nadar's Livre d'Or (autograph album), writing on leaf 161: "Watch out that liberty doesn't come to you from the north." In 1863, Bakunin joined in an unsuccessful effort to supply armed men for the Polish January Uprising against Russia. Bakunin, reunited with his wife, moved to Italy the next year, where they stayed for three years. Bakunin, in his early 50s, developed his core anarchist thought in Italy, and he continued to refine these ideas over his remaining 12 years.
Chief among these ideas was the revolutionary toppling of the state, to be replaced by free federation between voluntarily associated economic producers. In this period he also founded the first of many conspiratorial revolutionary societies, though none of these participated in revolutionary action. He moved to Switzerland in 1867, a more permissive environment for revolutionary literature. Bakunin's anarchist writings were fragmentary and prolific. With France's collapse in the 1870 Franco-Prussian War, Bakunin traveled to Lyon and participated in the fruitless Lyon Commune, in which the citizens briefly occupied the city hall. Bakunin retreated to Switzerland. In Switzerland, the Russian revolutionary Sergey Nechayev sought out Bakunin for a collaboration. Not knowing Nechayev's past betrayals, Bakunin warmed to Nechayev's revolutionary zeal, and together they produced the 1869 Catechism of the Revolutionary, a tract that endorsed an ascetic life for revolutionaries without societal or moral bonds. Bakunin's connection with Nechayev hurt the former's reputation. More recent scholarship, however, challenges the catechism's authorship, crediting Nechayev as the primary or sole author. Bakunin ultimately disavowed their connection. First International While Bakunin had encountered Karl Marx in Paris (1844) and London (1864), he came to know him through the First International (International Working Men's Association), which Marx and Friedrich Engels formed in the 1860s. Bakunin's relationship with Marx became strained in the early 1870s over both interpersonal and ideological differences. Bakunin respected Marx's erudition and passion for socialism but found his personality authoritarian and arrogant. In turn, Marx was skeptical of Russian reactionism and Bakunin's unruliness. As Bakunin developed his anarchist ideas in this period, he came to see federative social organization, led by the peasantry and poorest workers, as the primary post-revolution goal, whereas Marx believed in a dictatorship of the proletariat, led by organized workers in industrially advanced countries, in which the workers use state infrastructure until the state withers away. Bakunists abhorred the political organization for which Marx advocated. Marx had Bakunin and the Bakunist anarchists ejected from the First International's 1872 Hague Congress. This breaking point split the Marxist socialist movement from the anarchist movement and led to the undoing of the International. Bakunin's ideas nevertheless continued to spread to the labor movement in Spain and the watchmakers of the Swiss Jura Federation. Bakunin wrote his last major work, Statism and Anarchy (1873), anonymously in Russian to stir underground revolution in Russia. It restates his anarchist position, establishes the German Empire as the foremost centralized state in opposition to European anarchism, likens Marx to German authoritarianism, and warns of Marx's dictatorship of the proletariat being led by autocrats for their own gain in the name of the proletariat. This premonition furthered the gulf between the Marxists and the Bakunist anarchists. In one final revolutionary act, Bakunin planned the unsuccessful 1874 Bologna insurrection with his Italian followers. Its failure was a major setback to the Italian anarchist movement. Bakunin retreated to Switzerland, where he retired, dying in Bern on 1 July 1876. Thought Much of Bakunin's writing on anarchism reflects antipathy for the state and "political organization itself as the source of oppression and exploitation".
His revolutionary solutions focus on undoing the state and hierarchical religious, social, and economic institutions, to be replaced by a system of freely federated communes organized "from below upward" with voluntary associations of economic producers, starting locally but ostensibly organizing internationally. These thoughts were first published in his unfinished 1871 The Knouto-Germanic Empire and the Social Revolution, expanded in a second part published in his 1908 Oeuvres, and elaborated again in a fragment found and published posthumously as God and the State (1882). The latter was his most famous work, widely translated and a touchstone of anarchist literature. It appeals for casting off both the state and religion to realize man's inborn freedom. Bakunin's core political thought addressed emancipatory communities in which members freely develop their abilities and faculties without overpowering each other. Participation within community was a personal concern of his, and his vision of a community's role in creating free and happy humans stemmed from his close sibling relationships. Bakunin unsuccessfully sought community in religion and philosophy through influences including Arnold Ruge (Left Hegelianism), Ludwig Feuerbach (philosophical humanism), Wilhelm Weitling (proto-communism), and Pierre-Joseph Proudhon (early anarchism). Bakunin turned from metaphysics and theory to the practice of creating communities of free, independent people. His first attempts at this, with the Polish emigrants and the Prague Slav Congress in the 1840s, focused on national liberation, but he turned to emancipatory community after the failed 1863 Polish naval expedition. For Bakunin, freedom required community (such that humanity could only be free if everyone was free) and equality (that all people have the same starting basis), including equality in rights and social functions for women. He envisioned an international revolution by the awakened masses that would bring about new forms of social organization (by committees of delegates and independent municipalities) in a large-scale federation undoing all state structure and social coercion. In this emancipated community, every adult would be entitled to freedom, governed by their own conscience and reason according to their own will, responsible foremost to themselves and then to their community. He did not believe a reformed bourgeois state or a revolutionary state could emancipate people in the way such a community could, so his vision of revolution meant not capturing power but ensuring that no new power took the place of the old. Bakunin was not a systematic thinker and did not design grand systems or theoretical constructs. His writing was prolific and fragmented. He was prone to long digressions and rarely completed what he set out to address. As a result, much of his writing on anarchism does not cohere and was published only posthumously. Bakunin did, however, develop his theoretical perspective through draft programs. Bakunin first called himself an anarchist in 1867. Authority Bakunin saw the institutions of church and state as standing against the aims of emancipatory community, namely that they impose wisdom and justice from above under the pretense that the masses could not fully self-govern. He wrote that "to exploit and to govern mean the same thing". Bakunin held the State to be a regulated system of domination and exploitation by a privileged, ruling class.
This applied to States both historical and contemporaneous, including modern monarchies and republics that each used military and bureaucratic centralization. He regarded representative democracies as a paradoxical abstraction from social reality: although a popular legislature is meant to represent the will of the people, he saw it rarely function as such in practice, with elected politicians instead representing abstractions. Bakunin believed powerful institutions to be inherently stronger than individual will and incapable of internal reform, owing to the overwhelming ambitions and temptations that corrupt those with power. To Bakunin, anarchists were rightly "enemies of all power, knowing that power corrupts those invested with it just as much as those compelled to submit to it". Bakunin clashed with Marx over worker governance and revolutionary change. Bakunin argued that even the best revolutionary placed on the Russian throne would become worse than Czar Alexander. Bakunin wrote that socialist workers in power would become ex-workers who govern by their own pretensions, not representing the people. Bakunin did not believe a transitional dictatorship would serve any purpose other than to perpetuate itself, saying that "liberty without socialism is privilege and injustice, and socialism without liberty is slavery and brutality". Bakunin disagreed with Marx that the state would wither away under worker ownership and that worker conquest and changes in production conditions would inherently kill the state. Bakunin promoted spontaneous worker actions over Marx's suggested organization of a working-class party. While Bakunin believed that science and specialists could be useful in enlightening communities, he did not believe in government by experts, in letting any privileged minority rule over a majority, or in any presumed intelligence ruling over a presumed stupidity. Bakunin wrote of deferring to the "authority of the bootmaker" on boots and to savants for their specialties, and of listening to them freely in respect for their expertise, but not of allowing the bootmaker or the savant to impose this authority, nor of letting them be beyond criticism or censure. Bakunin believed that authority should be a matter of continual voluntary exchange rather than constant subordination. He believed intelligence to carry its own intrinsic benefits, so as not to require additional privileges. Revolutionary societies Towards the end of his life, beginning in 1864 in Italy with the International Brotherhood, Bakunin attempted to unite his international network under secret revolutionary societies, a concept at odds with his professed caution against the autocratic tendencies of the revolutionary elite. Composed of Bakunin's circle, these informal groups existed mainly on paper and thus did not participate in revolutionary action or bridge revolutionary theory to practice as Bakunin intended. The groups operated with significant autonomy, having diverged from Bakunin on multiple controversial issues. Despite being cast at the Hague Congress as under Bakunin's stern authority, they were organized by personal relationships rather than the vertical hierarchies and membership ranks found in Bakunin's notes. His written programs played a larger role in his politics than these draft secret societies. The idea of the "invisible dictatorship" was central to Bakunin's politics.
In combination with Bakunin's opposition to parliamentary politics, historian Peter Marshall wrote that such a secret party—its existence unknown and its policies beholden to none—had the potential for greater tyranny than a Blanquist or Marxist party and was hard to envision as presaging an open, democratic society. Personal life Bakunin married Antonia Kwiatkowska, originally from Poland, during his exile in Siberia. Kwiatkowska was much younger than Bakunin (a difference of 26 years; she was 18) and had little interest in politics. Their differences and Bakunin's meagre attention to romance have left biographers speculating about possible psychosexual rationales for Bakunin's personal life and the extent of his dedication to revolutionary action. Though she remained married to Bakunin until his death in 1876, Kwiatkowska had three children with another man while Bakunin was still alive – an Italian disciple of his, who married her after Bakunin's death. Legacy Bakunin was the leading anarchist revolutionary of the 19th century, active from the 1840s through the 1870s. His foundational anarchist writings helped the movement stand in contrast to capitalism and Marxism and became more popular after his death, with some of his highest regarded works published posthumously and in new editions. His Statism and Anarchy influenced the growing Russian Narodnik movement of peasant socialism, and his anarchism influenced ideology in both the Russian Revolution and the Spanish Civil War. The 1960s New Left revived interest in his works and ideas of voluntary association and opposition to authoritarian socialism, with new editions and translations published. Bakunin's legacy reflects the paradox and ambivalence by which he lived. As historian Paul Avrich put it, Bakunin was "a nobleman who yearned for a peasant revolt, a libertarian with an urge to dominate others, an intellectual with a powerful anti-intellectual streak", who professed support for unfettered liberty while demanding unconditional obedience from his followers. Many of his actions put him closer to later authoritarian movements, even if his words were anti-authoritarian. In particular, the antisemitic passages in Bakunin's writing have been the subject of extended interest, such that Bakunin biographer Mark Leier has said the question is raised every time he speaks on Bakunin. Both Leier and scholar of antisemitism Eirik Eiglad have commented that antisemitism was not essential to Bakunin's thought, nor was his thought valued for his antisemitism. Sociologist Marcel Stoetzler argued the opposite, saying that the antisemitic trope of Jewish world domination was at the centre of Bakunin's political thought. Bakunin's anti-Jewish and anti-German resentment are most visible in the context of his attacks on Marx, but his antisemitism predated these passages. Scholar Marshall Shatz noted that there is a gap between Bakunin's egalitarian principles and his ethnic prejudices, even if this antisemitism and stereotyping was common among French radicals of the era and shared by Marx himself. Noam Chomsky called Bakunin's prediction that Marxist regimes would become dictatorships "one of the few predictions in the social sciences that actually came true". Bakunin archives are held in several places: the Pushkin House, the State Archive of the Russian Federation, the Russian State Library, the Russian State Archive of Literature and Art, the National Library of Russia, and the International Institute of Social History. 
Works Books God and the State, Pamphlets Stateless Socialism: Anarchism (1953) Marxism, Freedom and the State, (translated by Kenneth Kenafick in 1950) The Paris Commune and the Idea of the State (1871) The Immorality of the State (1953) Statism and Anarchy (1990), Cambridge University Press, Revolutionary Catechism (1866) The Commune, the Church, and the State (1947) Founding of the First International (1953) On Rousseau (1972) No Gods No Masters (1998) by Daniel Guérin, Edinburgh: AK Press, Articles "Power Corrupts the Best" (1867) "The Class War" (1870) "What is Authority?" (1870) "Recollections on Marx and Engels" (1869) "The Red Association" (1870) "Solidarity in Liberty" (1867) "The German Crisis" (1870) "God or Labor" (1947) "Where I Stand" (1947) "Appeal to my Russian Brothers" (1896) "The Social Upheaval" (1947) "Integral Education, Part I" (1869) "Integral Education, Part II" (1869) "The Organization of the International" (1869) "Polish Declaration" (1896) "Politics and the State" (1871) "Workers and the Sphinx" (1867) "The Policy of the Council" (1869) "The Two Camps" (1869) Collections Bakunin on Anarchism (1971). Edited, translated and with an introduction by Sam Dolgoff. Preface by Paul Avrich. New York: Knopf Originally published as Bakunin on Anarchy, it includes James Guillaume's Bakunin: A Biographical Sketch. . Michael Bakunin: Selected Writings (1974). A. Lehning (ed.). New York: Grove Press. . Anarchism: A Documentary History of Libertarian Ideas, Volume 1: From Anarchy to Anarchism (300 CE – 1939) (2005). Robert Graham (ed.). Montreal and New York: Black Rose Books. . The Political Philosophy of Bakunin (1953). G. P. Maximoff (ed.). It includes Mikhail Bakunin: A Biographical Sketch by Max Nettlau. The Basic Bakunin: Writings 1869–1871 (1992). Robert M. Cutler (ed.). New York: Prometheus Books, 1992. . See also Archives Bakounine List of Russian anarchists Notes References Footnotes Bibliography Further reading Angaut, Jean-Christophe."Revolution and the Slav question : 1848 and Mikhail Bakunin" in Douglas Moggach and Gareth Stedman Jones (eds.) The 1848 revolutions and European political thought. Cambridge University Press, 2018. David, Zdeněk V. "Frič, Herzen, and Bakunin: the Clash of Two Political Cultures." East European Politics and Societies 12.1 (1997): 1–30. Guérin, Daniel. Anarchism: From Theory to Practice. New York: Monthly Review Press, 1970 (paperback, ). Stoppard, Tom. The Coast of Utopia. New York: Grove Press, 2002 (paperback, ). 
External links Bakunin Archive at RevoltLib Bakunin archive at Anarchy Archives Archive of Michail Aleksandrovič Bakunin Papers at the International Institute of Social History Writings of Bakunin at Marxist Internet Archive 1814 births 1876 deaths 19th-century atheists 19th-century philosophers from the Russian Empire 19th-century writers from the Russian Empire Anarchist theorists Anarchist writers Antisemitism in the Russian Empire Aphorists Atheists from the Russian Empire Atheist philosophers Collectivist anarchists Critics of Freemasonry Critics of Judaism Critics of Marxism Critics of religions Critics of work and the work ethic Escapees from Russian detention Former Russian Orthodox Christians Libertarian socialists Materialists Members of the International Workingmen's Association Participants of the Slavic Congress in Prague 1848 People from Kuvshinovsky District People from Lugano People from Novotorzhsky Uyezd People of the Revolutions of 1848 Philosophers of culture Philosophers of economics Philosophers of history Philosophers of nihilism Philosophers of religion Philosophy writers Prisoners of Shlisselburg fortress Prisoners of the Peter and Paul Fortress Revolution theorists Russian anarchists Russian anti-capitalists Russian atheists Russian atheism activists Russian writers on atheism Russian duellists Russian escapees Russian nihilists Russian male non-fiction writers Russian political philosophers Russian political writers Russian revolutionaries Russian socialists Russian untitled nobility Socialist economists Social philosophers Theorists on Western civilization
Mikhail Bakunin
[ "Physics" ]
5,717
[ "Materialism", "Matter", "Materialists" ]
58,222
https://en.wikipedia.org/wiki/Fujitsu
Fujitsu Limited is a Japanese multinational information and communications technology equipment and services corporation, established in 1935 and headquartered in Kawasaki, Kanagawa. It is the world's sixth-largest IT services provider by annual revenue, and the largest in Japan, as of 2021. Fujitsu's hardware offerings mainly consist of personal and enterprise computing products, including x86, SPARC, and mainframe-compatible server products. The corporation and its subsidiaries also offer diverse products and services in data storage, telecommunications, advanced microelectronics, and air conditioning. It has approximately 124,000 employees supporting customers in over 50 countries and regions. Fujitsu is listed on the Tokyo Stock Exchange and Nagoya Stock Exchange; its Tokyo listing is a constituent of the Nikkei 225 and TOPIX 100 indices. History 1935 to 2000 Fujitsu was established on June 20, 1935, under the name Fuji Telecommunications Equipment Manufacturing, as a spin-off of the Fuji Electric Company, itself a 1923 joint venture between the Furukawa Electric Company and the German conglomerate Siemens; this makes it one of the oldest operating IT companies, founded after IBM and before Hewlett-Packard. Despite its connections to the Furukawa zaibatsu, Fujitsu escaped the Allied occupation of Japan after the Second World War mostly unscathed. In 1954, Fujitsu manufactured Japan's first computer, the FACOM 100 mainframe, and in 1961 launched its second-generation (transistorized) computer, the FACOM 222 mainframe. The 1968 FACOM230 "5" Series marked the beginning of its third-generation computers. Fujitsu offered mainframe computers from 1955 until at least 2002. Fujitsu's computer products have also included minicomputers, small business computers, servers and personal computers (FM-8, FM-7, FM-Towns, etc.). In 1955, Fujitsu founded Kawasaki Frontale as a company football club; Kawasaki Frontale has been a J. League football club since 1999. In 1967, the company's name was officially changed to the contraction Fujitsu. Since 1985, the company has also fielded a company American football team, the Fujitsu Frontiers, who play in the corporate X-League; they have appeared in 7 Japan X Bowls, winning two, and have won two Rice Bowls. In 1971, Fujitsu signed an OEM agreement with the Canadian company Consolidated Computers Limited (CCL) to distribute CCL's data entry product, Key-Edit. Fujitsu thereby joined both International Computers Limited (ICL), which had earlier begun marketing Key-Edit in the British Commonwealth countries as well as in western and eastern Europe, and CCL's direct marketing staff in Canada, the USA, London (UK) and Frankfurt. Mers Kutt, inventor of Key-Edit and founder of CCL, was the common thread that led to Fujitsu's later association with ICL and Gene Amdahl. In 1986, Fujitsu and The Queen's University of Belfast business incubation unit (QUBIS Ltd) established a joint venture called Kainos, a privately held software company based in Belfast, Northern Ireland. In 1990, Fujitsu acquired 80% of the UK-based computer company ICL for $1.29 billion. In September 1990, Fujitsu announced the launch of a new series of mainframe computers which were at that time the fastest in the world. In July 1991, Fujitsu acquired more than half of the Russian company KME-CS (Kazan Manufacturing Enterprise of Computer Systems). In 1992, Fujitsu announced plans to build a joint-venture plant in Punjab, India, to produce telephone switchboards.
Fujitsu owned 51 percent of the joint venture, with the remaining 49 percent owned by Punjab's state-run electronics company. Dr. Sushil Kumar Mangal, who was at the time the managing director of the Punjab State Electronics Corporation, was appointed Chairman of Fujitsu India Telecom Ltd. This INR 116-crore project was set up for the manufacture of electronic digital exchanges. Concurrently, Fujitsu established a new subsidiary, Fujitsu Networks Industry Inc., in Stamford, Connecticut, to develop communications services. In 1992, Fujitsu also introduced the world's first 21-inch full-color plasma display. It was a hybrid, based upon the plasma display created at the University of Illinois at Urbana-Champaign and NHK STRL, achieving superior brightness. In 1993, Fujitsu formed a flash memory manufacturing joint venture with AMD, Spansion. As part of the transaction, AMD contributed its flash memory group, Fab 25 in Texas, its R&D facilities and assembly plants in Thailand, Malaysia and China; Fujitsu provided its flash memory business division and the Malaysian Fujitsu Microelectronics final assembly and test operations. From February 1989 until mid-1997, Fujitsu built the FM Towns PC variant. It started as a proprietary PC variant intended for multimedia applications and computer games, but later became more compatible with regular PCs. In 1993, the FM Towns Marty was released, a gaming console compatible with FM Towns games. Fujitsu agreed to acquire the 58 percent of Amdahl Corporation (including the Canada-based DMR consulting group) that it did not already own for around $850 million in July 1997. In April 1997, the company acquired a 30 percent stake in GLOVIA International, Inc., an El Segundo, Calif., manufacturing ERP software provider whose software it had begun integrating into its electronics plants starting in 1994. In June 1999, Fujitsu's historical connection with Siemens was revived when the two companies agreed to merge their European computer operations into a new 50:50 joint venture called Fujitsu Siemens Computers, which became the world's fifth-largest computer manufacturing company. 2000 to 2024 In April 2000, Fujitsu acquired the remaining 70% of GLOVIA International. In April 2002 ICL re-branded itself as Fujitsu. On March 2, 2004, Fujitsu Computer Products of America lost a class action lawsuit over MPG series hard disk drives with defective chips and firmware. In October 2004, Fujitsu acquired the Australian subsidiary of Atos Origin, a systems implementation company with around 140 employees which specialized in SAP. In August 2007, Fujitsu signed a £500 million, 10-year deal with Reuters Group under which Reuters outsourced the majority of its internal IT department to Fujitsu. As part of the agreement, around 300 Reuters staff and 200 contractors transferred to Fujitsu. In October 2007, Fujitsu announced that it would be establishing an offshore development centre in Noida, India, with a capacity to house 1,200 employees, in an investment of US$10 million. In October 2007, Fujitsu's Australia and New Zealand subsidiary acquired Infinity Solutions Ltd, a New Zealand–based IT hardware, services and consultancy company, for an undisclosed amount. In January 2009, Fujitsu reached an agreement to sell its HDD business to Toshiba. Transfer of the business was completed on October 1, 2009.
In March 2009, Fujitsu announced that it had decided to convert FDK Corporation, at that time an equity-method affiliate, to a consolidated subsidiary from May 1, 2009 (tentative schedule) by subscribing to a private placement to increase FDK's capital. On April 1, 2009, Fujitsu agreed to acquire Siemens' stake in Fujitsu Siemens Computers for approximately EUR450m. Fujitsu Siemens Computers was subsequently renamed Fujitsu Technology Solutions. In April 2009, Fujitsu acquired Australian software company Supply Chain Consulting in a $48 million deal, just weeks after purchasing the Telstra subsidiary Kaz for $200 million. Faced with a forecast net loss of 95 billion yen for the year ending March 2013, Fujitsu announced in February 2013 that it would cut 5,000 of its 170,000 jobs, 3,000 of them in Japan and the rest overseas. Fujitsu also merged its large-scale integration (LSI) chip design business with that of Panasonic Corporation, resulting in the establishment of Socionext. In 2014, after severe losses, Fujitsu spun off its LSI chip manufacturing division as well, as Mie Fujitsu Semiconductor, which was bought in 2018 by United Semiconductor Japan Co., Ltd., wholly owned by United Microelectronics Corporation. In 2015, Fujitsu celebrated 80 years since its establishment as its IT business embarked upon the Fujitsu 2015 World Tour, which covered 15 major cities globally and was attended by over 10,000 IT professionals, with Fujitsu presenting its take on the future of Hyper Connectivity and Human Centric Computing. In April 2015, GLOVIA International was renamed FUJITSU GLOVIA, Inc. In November 2015, Fujitsu Limited and VMware announced new areas of collaboration to empower customers with flexible and secure cloud technologies. It also acquired UShareSoft, which provides enterprise-class application delivery software for automating the build, migration and governance of applications in multi-cloud environments. In January 2016, Fujitsu Network Communications Inc. announced a new suite of layered products to advance software-defined networking (SDN) for carriers, service providers and cloud builders. Virtuora NC, based on open standards, is described by Fujitsu as "a suite of standards-based, multi-layered, multi-vendor network automation and virtualization products" that "has been hands-on hardened by some of the largest global service providers." In 2019, Fujitsu started to deliver 5G telecommunications equipment to NTT Docomo, along with NEC. In March 2020, Fujitsu announced the creation of a subsidiary, later named Fujitsu Japan, that would enable the company to expand its business in the Japanese IT services market. In June 2020, Fugaku, co-developed with the RIKEN research institute, was declared the most powerful supercomputer in the world. The performance capability of Fugaku is 415.53 PFLOPS, with a theoretical peak of 513.86 PFLOPS, roughly three times faster than the previous champion. Fugaku also ranked first in categories that measure computational performance for industrial use, artificial intelligence applications, and big data analytics. The supercomputer is located in a facility in Kobe. In June 2020, Fujitsu developed an artificial intelligence monitor that can recognize complex hand movements, built on its crime surveillance technology. The AI is designed to check whether the subject completes the proper hand-washing procedure based on the guidelines issued by the WHO.
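The Fugaku figures quoted above can be sanity-checked with a few lines of arithmetic. In the sketch below, the Fugaku numbers come from the text, while the previous champion's score (roughly 148.6 PFLOPS, Summit's result on the June 2020 TOP500 list) is an assumption supplied only for illustration.

```python
# Quick check of the Fugaku claims: measured speed vs. the previous leader,
# and the fraction of theoretical peak actually achieved.
FUGAKU_PFLOPS = 415.53        # measured performance (from the text above)
FUGAKU_PEAK_PFLOPS = 513.86   # theoretical peak (from the text above)
PREVIOUS_CHAMPION_PFLOPS = 148.6  # assumed figure for the prior leader

speedup = FUGAKU_PFLOPS / PREVIOUS_CHAMPION_PFLOPS
peak_fraction = FUGAKU_PFLOPS / FUGAKU_PEAK_PFLOPS

print(f"Speedup over previous champion: {speedup:.2f}x")      # ~2.80x
print(f"Fraction of theoretical peak:   {peak_fraction:.1%}") # ~80.9%
```

Under that assumption, the measured speedup works out to just under three times, consistent with the rounded claim in the text.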
In September 2020, Fujitsu introduced software-defined storage technology that incorporates Qumulo hybrid cloud file storage software to enable enterprises to unify petabytes of unstructured data from disparate locations, across multiple data centers and the cloud. This is expected to support various types of storages, including NVMe SSDs, HDDs and flash-drives. In 2024, Fujitsu relocated its headquarters from Shiodome City Center in Minato, Tokyo, to Kawasaki, Kanagawa. The company relocated its administration department to its Kawasaki plant in Nakahara-ku, its sales department to an office building in Saiwaiku-ku, and its system development department to an office building in Ōta, Tokyo. Operations Fujitsu Laboratories Fujitsu Laboratories, Fujitsu's Research and Development division, has approximately 900 employees and a capital of JP¥5 billion. The current CEO is Hirotaka Hara. In 2012, Fujitsu announced that it had developed new technology for non-3D camera phones. The technology will allow the camera phones to take 3D photos. Fujitsu Electronics Europe GmbH Fujitsu Electronics Europe GmbH entered the market as a global distributor on January 1, 2016. Fujitsu Consulting Fujitsu Consulting is the consulting and services arm of the Fujitsu group, providing information technology consulting, implementation and management services. Fujitsu Consulting was founded in 1973 in Montreal, Quebec, Canada, under its original name "DMR" (an acronym of the three founder's names: Pierre Ducros, Serge Meilleur and Alain Roy) During the next decade, the company established a presence throughout Quebec and Canada, before extending its reach to international markets. For nearly thirty years, DMR Consulting grew to become an international consulting firm, changing its name to Fujitsu Consulting in 2002 after being acquired by Fujitsu Ltd. Fujitsu operates a division of the company in India, resulting from an acquisition of North America–based company, Rapidigm. It has offshore divisions at Noida, Pune, Hyderabad, Chennai and Bangalore with Pune being the head office. Fujitsu Consulting India launched its second $10 million development center at Noida in October 2007, a year after starting operation in the country. Following the expansion plan, Fujitsu Consulting India launched the fourth development center in Bengaluru in Nov 2011. Fujitsu General Fujitsu Ltd. has a 42% shareholding in , which manufactures and markets various air conditioning units and humidity control solutions under the General & Fujitsu brands. In India, The company has ended its long-standing joint venture agreement with the Dubai-based ETA group and henceforth will operate under a wholly owned subsidiary Fujitsu General (India) Pvt Ltd, which was earlier known as ETA General. PFU Limited PFU Limited, headquartered in Ishikawa, Japan is a wholly owned subsidiary of Fujitsu Limited. PFU Limited was established in 1960, has approximately 4,600 employees globally and in 2013 turned over 126.4 billion Yen (US$1.2 Billion). PFU manufactures interactive kiosks, keyboards, network security hardware, embedded computers and imaging products (document scanners) all under the PFU or Fujitsu brand. In addition to hardware PFU also produce desktop and enterprise document capture software and document management software products. 
PFU has overseas Sales & Marketing offices in Germany (PFU Imaging Solutions Europe Limited), Italy (PFU Imaging Solutions Europe Limited), United Kingdom (PFU Imaging Solutions Europe Limited) and United States of America (Fujitsu Computer Products of America Inc). PFU Limited are responsible for the design, development, manufacture, sales and support of document scanners which are sold under the Fujitsu brand. Fujitsu are market leaders in professional document scanners with their fi-series, Scansnap and ScanPartner product families as well as Paperstream IP, Paperstream Capture, ScanSnap Manager, ScanSnap Home, Cardminder, Magic Desktop and Rack2Filer software products. Fujitsu Glovia, Inc. Fujitsu Glovia, a wholly owned subsidiary of Fujitsu Ltd., is a discrete manufacturing enterprise resource planning software vendor based in El Segundo, California, with international operations in the Netherlands, Japan and the United Kingdom. The company offers on-premise and cloud-based ERP manufacturing software under the Glovia G2 brand, and software as a service (SaaS) under the brand Glovia OM. The company was established in 1970 as Xerox Computer Services, where it developed inventory, manufacturing and financial applications. Fujitsu acquired 30 percent of the renamed Glovia International in 1997 and the remaining 70 percent stake in 2000. Fujitsu Client Computing Limited (FCCL), headquartered in Kawasaki, Kanagawa, the city where the company was founded, is the division of Fujitsu responsible for research, development, design, manufacturing and sales of consumer PC products. Formerly a wholly owned subsidiary, in November 2017, FCCL was spun off into a joint venture with Lenovo and Development Bank of Japan (DBJ). The new company retains the same name, and Fujitsu is still responsible for sales and support of the products; however, Lenovo owns a majority stake at 51%, while Fujitsu retains 44%. The remaining 5% stake is held by DBJ. Fujitsu Network Communications, Inc. Fujitsu Network Communications, Inc., headquartered in Richardson, Texas, United States, is a wholly owned subsidiary of Fujitsu Limited. Established in 1996, Fujitsu Network Communications specializes in building, operating, and supporting optical and wireless broadband and telecommunications networks. The company's customers include telecommunications service providers, internet service providers, cable companies, utilities, and municipalities. Fujitsu Network Communications provides network management and design tools. The company also builds networks that comply with various next-generation technologies and initiatives, including the Telecom Infra Project. Products and services Computing products Fujitsu's computing product lines include: Relational Database: Fujitsu Enterprise Postgres Fujitsu has more than 35 years experience in database development and is a “major contributor” to open source Postgres. Fujitsu engineers have also developed an Enterprise Postgres version called Fujitsu Enterprise Postgres. Fujitsu Enterprise Postgres benefits include Enterprise Support; warranted code; High Availability enhancements; security enhancements (end to end transparent data encryption, data masking, auditing); Performance enhancements (In-Memory Columnar Index provides support for HTAP (Hybrid transactional/analytical processing) workloads); High-speed Backup and Recovery; High-speed data load; Global metacache (improved memory management); Oracle compatibility extensions (to assist migration from Oracle to Postgres). 
Fujitsu Enterprise Postgres can be deployed on X86 (Linux, Windows), IBM z/IBM LinuxONE; it is also packaged as a RedHat OpenShift (OCP) container. PRIMERGY In May 2011, Fujitsu decided to enter the mobile phone space again, with Microsoft announcing plans that Fujitsu would release Windows Phone devices. ETERNUS Fujitsu PRIMERGY and ETERNUS are distributed by TriTech Distribution Limited in Hong Kong. LIFEBOOK, AMILO: Fujitsu's range of notebook computers and tablet PCs. Cloud computing Fujitsu offers a public cloud service delivered from data centers in Japan, Australia, Singapore, the United States, the United Kingdom and Germany based on its Global Cloud Platform strategy announced in 2010. The platform delivers Infrastructure-as-a-Service (IaaS) – virtual information and communication technology (ICT) infrastructure, such as servers and storage functionality – from Fujitsu's data centers. In Japan, the service was offered as the On-Demand Virtual System Service (OViSS) and was then launched globally as Fujitsu Global Cloud Platform/S5 (FGCP/S5). Since July 2013 the service has been called IaaS Trusted Public S5. Globally, the service is operated from Fujitsu data centers located in Australia, Singapore, the United States, the United Kingdom, Germany and Japan. Fujitsu has also launched a Windows Azure powered Global Cloud Platform in a partnership with Microsoft. This offering, delivering Platform-as-a-Service (PaaS), was known as FGCP/A5 in Japan but has since been renamed FUJITSU Cloud PaaS A5 for Windows Azure. It is operated from a Fujitsu data center in Japan. It offers a set of application development frameworks, such as Microsoft .NET, Java and PHP, and data storage capabilities consistent with the Windows Azure platform provided by Microsoft. The basic service consists of compute, storage, Microsoft SQL Azure, and Windows Azure AppFabric technologies such as Service Bus and Access Control Service, with options for inter-operating services covering implementation and migration of applications, system building, systems operation, and support. Fujitsu acquired RunMyProcess in April 2013, a Cloud-based integration Platform-as-a-Service (PaaS) specialized in workflow automation and business application development. Fujitsu offers local cloud platforms, such as in Australia, that provide the ability to rely on its domestic data centers which keep sensitive financial data under local jurisdiction and compliance standards. Microprocessors Fujitsu produces the SPARC-compliant CPU (SPARClite), the "Venus" 128 GFLOP SPARC64 VIIIfx model is included in the K computer, the world's fastest supercomputer in June 2011 with a rating of over 8 petaflops, and in October 2011, K became the first computer to top 10 petaflops. This speed was achieved in testing on October 7–8, and the results were then presented at the International Conference for High Performance Computing, Networking, Storage and Analysis (SC11) in November 2011. The Fujitsu FR, FR-V and ARM architecture microprocessors are widely used, additionally in ASICs and Application-specific standard products (ASSP) like the Milbeaut with customer variants named Nikon Expeed. They were acquired by Spansion in 2013. Advertising The old slogan "The possibilities are infinite" can be found below the company's logo on major advertisements and ties in with the small logo above the letters J and I of the word Fujitsu. This smaller logo represents the symbol for infinity. 
As of April 2010, Fujitsu is in the process of rolling out a new slogan focused on entering into partnerships with its customers and retiring the "possibilities are infinite" tagline. The new slogan is "shaping tomorrow with you". Horizon scandal Fujitsu designed, developed and operated the Horizon IT system which was at the centre of the legal dispute between the UK Post Office and its sub-postmasters. The case, still unsettled, found that the IT system was unreliable and that faults in the system caused discrepancies in branch accounts which were not the responsibility of the postmasters themselves. Mr Justice Fraser, the judge hearing the case, noted that Fujitsu had given "wholly unsatisfactory evidence" and there had been a "lack of accuracy on the part of Fujitsu witnesses in their evidence". Following his concerns, Fraser sent a file to the Director of Public Prosecutions. Fujitsu was also found to have pressured the UK government to sign off the controversial and faulty IT system that, as a result of its faulty operation, resulted in numerous UK postmasters and sub-postmasters being accused falsely, and subsequently convicted of theft, false accounting and fraud, with consequences including imprisonment and monetary reparations. Of the hundreds of people affected, as of January 2024 only around 1 in 9 of those convicted have had their convictions overturned, whereas others have died due to the length of time that has elapsed, or have taken their own lives in the wake of their wrongful convictions. Although there has been some media coverage of organised legal appeals by groups of those affected, the matter has recently gained enhanced attention by mainstream media and the wider public as a whole, and is currently being acted upon by authorities in an attempt to accelerate and automate the clearing of the names of all those wrongly convicted, who until now were required to launch their own legal appeals in order to have their names cleared. Environmental record Fujitsu reports that all its notebook and tablet PCs released globally comply with the latest Energy Star standard. Greenpeace's Cool IT Leaderboard of April 2013 "examines how IT companies use their considerable influence to change government policies that will drive clean energy deployment" and ranks Fujitsu 4th out of 21 leading manufacturers, on the strength of "developed case study data of its solutions with fairly transparent methodology, and is the leading company in terms of establishing ambitious and detailed goals for future carbon savings from its IT solutions." See also List of computer system manufacturers List of semiconductor fabrication plants See the World by Train, a daily Japanese TV mini-programme sponsored by Fujitsu since 1987 References External links Wiki collection of bibliographic works on Fujitsu Cloud computing providers Companies listed on the Tokyo Stock Exchange Companies in the Nikkei 225 Computer hardware companies Computer systems companies Consumer electronics brands Defense companies of Japan Display technology companies Electronics companies established in 1935 Electronics companies of Japan Furukawa Group Heating, ventilation, and air conditioning companies Japanese brands Manufacturing companies based in Tokyo Mobile phone manufacturers Multinational companies headquartered in Japan Point of sale companies Software companies based in Tokyo Technology companies of Japan Telecommunications companies based in Tokyo Computer enclosure companies Japanese companies established in 1935
Fujitsu
[ "Technology" ]
4,888
[ "Computer hardware companies", "Computer systems companies", "Computers", "Computer systems" ]
58,246
https://en.wikipedia.org/wiki/Nitrocellulose
Nitrocellulose (also known as cellulose nitrate, flash paper, flash cotton, guncotton, pyroxylin and flash string, depending on form) is a highly flammable compound formed by nitrating cellulose through exposure to a mixture of nitric acid and sulfuric acid. One of its first major uses was as guncotton, a replacement for gunpowder as propellant in firearms. It was also used to replace gunpowder as a low-order explosive in mining and other applications. In the form of collodion it was also a critical component in an early photographic emulsion, the use of which revolutionized photography in the 1860s. In the 20th century it was adapted to automobile lacquer and adhesives. Production The process uses a mixture of nitric acid and sulfuric acid to convert cellulose into nitrocellulose. The quality of the cellulose is important. Hemicellulose, lignin, pentosans, and mineral salts give inferior nitrocelluloses. In precise chemical terms, nitrocellulose is not a nitro compound, but a nitrate ester. The glucose repeat unit (anhydroglucose) within the cellulose chain has three OH groups, each of which can form a nitrate ester. Thus, nitrocellulose can denote mononitrocellulose, dinitrocellulose, and trinitrocellulose, or a mixture thereof. With fewer OH groups than the parent cellulose, nitrocelluloses do not aggregate by hydrogen bonding. The overarching consequence is that the nitrocellulose is soluble in organic solvents such as acetone and esters; e.g., ethyl acetate, methyl acetate, ethyl carbonate. Most lacquers are prepared from the dinitrate, whereas explosives are mainly the trinitrate. The chemical equation for the formation of the trinitrate is 3 HNO3 + C6H7(OH)3O2 → C6H7(ONO2)3O2 + 3 H2O. The yields are about 85%, with losses attributed to complete oxidation of the cellulose to oxalic acid. Use The principal uses of cellulose nitrate are the production of lacquers and coatings, explosives, and celluloid. In terms of lacquers and coatings, nitrocellulose dissolves readily in organic solvents, which upon evaporation leave a colorless, transparent, flexible film. Nitrocellulose lacquers have been used as a finish on furniture and musical instruments. Guncotton, dissolved at about 25% in acetone, forms a lacquer used in preliminary stages of wood finishing to develop a hard finish with a deep lustre. It is normally the first coat applied, then it is sanded and followed by other coatings that bond to it. Nail polish contains nitrocellulose, as it is inexpensive, dries quickly to a hard film, and does not damage skin. The explosive applications are diverse and nitrate content is typically higher for propellant applications than for coatings. For space flight, nitrocellulose was used by Copenhagen Suborbitals on several missions as a means of jettisoning components of the rocket/space capsule and deploying recovery systems. However, after several missions and flights, it proved not to have the desired explosive properties in a near vacuum environment. In 2014, the Philae comet lander failed to deploy its harpoons because its 0.3 grams of nitrocellulose propulsion charges failed to fire during the landing. Other uses Collodion, a solution of nitrocellulose, is used today in topical skin applications, such as liquid skin and in the application of salicylic acid, the active ingredient in Compound W wart remover.
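The degree of nitration determines the nitrogen content, which in turn governs solubility and explosive character. As an illustration of the arithmetic (a sketch, not part of the original article), the following Python snippet computes the nitrogen mass fraction of the anhydroglucose repeat unit with zero to three nitrate ester groups; the roughly 11% figure for the dinitrate and 14% for the trinitrate bracket the 12% and 13% nitrogen thresholds mentioned later in the article.

```python
# Nitrogen content of nitrocellulose as a function of the degree of nitration.
# Illustrative sketch: each anhydroglucose unit (C6H10O5) has three OH groups,
# and each nitration step replaces one H with an NO2 group.

M = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def nitrogen_percent(substituted_oh_groups: int) -> float:
    """Mass % nitrogen for 0-3 nitrate ester groups per glucose unit."""
    assert 0 <= substituted_oh_groups <= 3
    n = substituted_oh_groups
    # Base unit C6H10O5, minus n hydrogens, plus n NO2 groups.
    mass = (6 * M["C"] + (10 - n) * M["H"] + 5 * M["O"]
            + n * (M["N"] + 2 * M["O"]))
    return 100.0 * n * M["N"] / mass

for n, name in enumerate(["cellulose", "mononitrate", "dinitrate", "trinitrate"]):
    print(f"{name:12s} {nitrogen_percent(n):5.2f} % N")
# mononitrate ~6.8 %, dinitrate ~11.1 %, trinitrate ~14.1 % nitrogen,
# consistent with lacquer grades (~12 % N) versus guncotton (>13 % N).
```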
Laboratory uses Membrane filters made of a mesh of nitrocellulose threads with various porosities are used in laboratory procedures for particle retention and cell capture in liquid or gaseous solutions and, reversely, obtaining particle-free filtrates. A nitrocellulose slide, nitrocellulose membrane, or nitrocellulose paper is a sticky membrane used for immobilizing nucleic acids in southern blots and northern blots. It is also used for immobilization of proteins in western blots and atomic force microscopy for its nonspecific affinity for amino acids. Nitrocellulose is widely used as support in diagnostic tests where antigen-antibody binding occurs; e.g., pregnancy tests, U-albumin tests, and CRP tests. Glycine and chloride ions make protein transfer more efficient. Radon tests for alpha track etches use nitrocellulose. Adolph Noé developed a method of peeling coal balls using nitrocellulose. It is used to coat playing cards and to bind staples together in office staplers. Hobbies In 1846, nitrated cellulose was found to be soluble in ether and alcohol. The solution was named collodion and was soon used as a dressing for wounds. In 1851, Frederick Scott Archer invented the wet collodion process as a replacement for albumen in early photographic emulsions, binding light-sensitive silver halides to a glass plate. Magicians' flash paper are sheets of paper consisting of pure nitrocellulose, which burn almost instantly with a bright flash, leaving no ash or smoke. As a medium for cryptographic one-time pads, they make the disposal of the pad complete, secure, and efficient. Nitrocellulose lacquer is spin-coated onto aluminium or glass discs, then a groove is cut with a lathe, to make one-off phonograph records, used as masters for pressing or for play in dance clubs. They are referred to as acetate discs. Depending on the manufacturing process, nitrocellulose is esterified to varying degrees. Table tennis balls, guitar picks, and some photographic films have fairly low esterification levels and burn comparatively slowly with some charred residue. Historical uses Early work on nitration of cellulose In 1832 Henri Braconnot discovered that nitric acid, when combined with starch or wood fibers, would produce a lightweight combustible explosive material, which he named xyloïdine. A few years later in 1838, another French chemist, Théophile-Jules Pelouze (teacher of Ascanio Sobrero and Alfred Nobel), treated paper and cardboard in the same way. Jean-Baptiste Dumas obtained a similar material, which he called nitramidine. Guncotton Around 1846 Christian Friedrich Schönbein, a German-Swiss chemist, discovered a more practical formulation. As he was working in the kitchen of his home in Basel, he spilled a mixture of nitric acid (HNO3) and sulfuric acid (H2SO4) on the kitchen table. He reached for the nearest cloth, a cotton apron, and wiped it up. He hung the apron on the stove door to dry, and as soon as it was dry, a flash occurred as the apron ignited. His preparation method was the first to be widely used. The method was to immerse one part of fine cotton in 15 parts of an equal blend of sulfuric acid and nitric acid. After two minutes, the cotton was removed and washed in cold water to set the esterification level and to remove all acid residue. The cotton was then slowly dried at a temperature below 40 °C (104 °F). Schönbein collaborated with the Frankfurt professor Rudolf Christian Böttger, who had discovered the process independently in the same year. 
By coincidence, a third chemist, the Brunswick professor F. J. Otto had also produced guncotton in 1846 and was the first to publish the process, much to the disappointment of Schönbein and Böttger. The patent rights for the manufacture of guncotton were obtained by John Hall & Son in 1846, and industrial manufacture of the explosive began at a purpose-built factory at Marsh Works in Faversham, Kent, a year later. The manufacturing process was not properly understood and few safety measures were put in place. A serious explosion in July that killed almost two dozen workers resulted in the immediate closure of the plant. Guncotton manufacture ceased for over 15 years until a safer procedure could be developed. The British chemist Frederick Augustus Abel developed the first safe process for guncotton manufacture, which he patented in 1865. The washing and drying times of the nitrocellulose were both extended to 48 hours and repeated eight times over. The acid mixture was changed to two parts sulfuric acid to one part nitric. Nitration can be controlled by adjusting acid concentrations and reaction temperature. Nitrocellulose is soluble in a mixture of ethanol and ether until nitrogen concentration exceeds 12%. Soluble nitrocellulose, or a solution thereof, is sometimes called collodion. Guncotton containing more than 13% nitrogen (sometimes called insoluble nitrocellulose) was prepared by prolonged exposure to hot, concentrated acids for limited use as a blasting explosive or for warheads of underwater weapons such as naval mines and torpedoes. Safe and sustained production of guncotton began at the Waltham Abbey Royal Gunpowder Mills in the 1860s, and the material rapidly became the dominant explosive, becoming the standard for military warheads, although it remained too potent to be used as a propellant. More-stable and slower-burning collodion mixtures were eventually prepared using less concentrated acids at lower temperatures for smokeless powder in firearms. The first practical smokeless powder made from nitrocellulose, for firearms and artillery ammunition, was invented by French chemist Paul Vieille in 1884. Jules Verne viewed the development of guncotton with optimism. He referred to the substance several times in his novels. His adventurers carried firearms employing this substance. In his From the Earth to the Moon, guncotton was used to launch a projectile into space. Because of their fluffy and nearly white appearance, nitrocellulose products are often referred to as cottons, e.g. lacquer cotton, celluloid cotton, and gun cotton. Guncotton was originally made from cotton (as the source of cellulose) but contemporary methods use highly processed cellulose from wood pulp. While guncotton is dangerous to store, the hazards it presents can be minimized by storing it dampened with various liquids, such as alcohol. For this reason, accounts of guncotton usage dating from the early 20th century refer to "wet guncotton." The power of guncotton made it suitable for blasting. As a projectile driver, it had around six times the gas generation of an equal volume of black powder and produced less smoke and less heating. Artillery shells filled with gun cotton were widely used during the American Civil War, and its use was one of the reasons the conflict was seen as the "first modern war." In combination with breech-loading artillery, such high explosive shells could cause greater damage than previous solid cannonballs. 
During the First World War, British authorities were slow to introduce grenades, with soldiers at the front improvising by filling ration tin cans with gun cotton, scrap and a basic fuse. Further research indicated the importance of washing the acidified cotton. Unwashed nitrocellulose (sometimes called pyrocellulose) may spontaneously ignite and explode at room temperature, as the evaporation of water results in the concentration of unreacted acid. Film In 1855, the first human-made plastic, nitrocellulose (branded Parkesine, patented in 1862), was created by Alexander Parkes from cellulose treated with nitric acid and a solvent. In 1868, American inventor John Wesley Hyatt developed a plastic material he named Celluloid, improving on Parkes' invention by plasticizing the nitrocellulose with camphor so that it could be processed into a photographic film. This was used commercially as "celluloid", a highly flammable plastic that until the mid-20th century formed the basis for lacquers and photographic film. On May 2, 1887, Hannibal Goodwin filed a patent for "a photographic pellicle and process of producing same ... especially in connection with roller cameras", but the patent was not granted until September 13, 1898. In the meantime, George Eastman had already started production of roll-film using his own process. Nitrocellulose was used as the first flexible film base, beginning with Eastman Kodak products in August 1889. Camphor is used as a plasticizer for nitrocellulose film, often called nitrate film. Goodwin's patent was sold to Ansco, which successfully sued Eastman Kodak for infringement of the patent; $5,000,000 was awarded to Goodwin Film in 1914. Nitrate film fires Disastrous fires related to celluloid or "nitrate film" became regular occurrences in the motion picture industry throughout the silent era and for many years after the arrival of sound film. Projector fires and spontaneous combustion of nitrate footage stored in studio vaults and in other structures were often blamed during the early to mid 20th century for destroying or heavily damaging cinemas, inflicting many serious injuries and deaths, and for reducing to ashes the master negatives and original prints of tens of thousands of screen titles, turning many of them into lost films. Even when nitrate stock did not start the blaze, flames from other sources spread to large nearby film collections, producing intense and highly destructive fires. In 1914, the same year that Goodwin Film was awarded $5,000,000 from Kodak for patent infringement, nitrate film fires incinerated a significant portion of the United States' early cinematic history. In that year alone, five very destructive fires occurred at four major studios and a film-processing plant. Millions of feet of film burned on March 19 at the Eclair Moving Picture Company in Fort Lee, New Jersey. Later that same month, many more reels and film cans of negatives and prints also burned at Edison Studios in New York City, in the Bronx. On May 13, a fire at Universal Pictures' Colonial Hall "film factory" in Manhattan consumed another extensive collection. Yet again, on June 13 in Philadelphia, a fire and a series of explosions ignited inside the 186-square-meter (2,000-square-foot) film vault of the Lubin Manufacturing Company and quickly wiped out virtually all of that studio's pre-1914 catalogue. Then a second fire hit the Edison Company at another location on December 9, at its film-processing complex in West Orange, New Jersey.
That catastrophic fire started inside a film-inspection building and caused over $7,000,000 in property damage. Even after film technology changed, archives of older films remained vulnerable; the 1965 MGM vault fire burned many films that were decades old. The use of volatile nitrocellulose film for motion pictures led many cinemas to fireproof their projection rooms with wall coverings made of asbestos. Those additions were intended to prevent or at least delay the migration of flames beyond the projection areas. A training film for projectionists included footage of a controlled ignition of a reel of nitrate film, which continued to burn even when fully submerged in water. Once burning, it is extremely difficult to extinguish. Unlike most other flammable materials, nitrocellulose does not need a source of air to continue burning, since it contains sufficient oxygen within its molecular structure to sustain a flame. For this reason, immersing burning film in water may not extinguish it, and could actually increase the amount of smoke produced. Owing to public safety precautions, the United Kingdom's Health and Safety Executive to this day forbids transportation of nitrate film by post or public transit, or disposal with household refuse. Cinema fires caused by the ignition of nitrocellulose film stock commonly occurred as well. In Ireland in 1926, it was blamed for the Dromcolliher cinema tragedy in County Limerick in which 48 people died. Then in 1929 at the Glen Cinema in Paisley, Scotland, a film-related fire killed 69 children. Today, nitrate film projection is rare and normally highly regulated and requires extensive precautions, including extra health-and-safety training for projectionists. A special projector certified to run nitrate films has many modifications, among them the chambering of the feed and takeup reels in thick metal covers with small slits to allow the film to run through them. The projector is additionally modified to accommodate several fire extinguishers with nozzles aimed at the film gate. The extinguishers automatically trigger if a piece of film near the gate starts to burn. While this triggering would likely damage or destroy a significant portion of the projector's components, it would contain a fire and prevent far greater damage. Projection rooms may also be required to have automatic metal covers for the projection windows, preventing the spread of fire to the auditorium. Today, the Dryden Theatre at the George Eastman Museum is one of a few theaters in the world that is capable of safely projecting nitrate films and regularly screens them to the public. The BFI Southbank in London is the only cinema in the United Kingdom licensed to show nitrate film. The use of nitrate film and its fiery potential were certainly not issues limited to the realm of motion pictures or to commercial still photography. The film was also used for many years in medicine, where its hazardous nature was most acute, especially in its application to X-ray photography. In 1929, several tons of stored X-ray film were ignited by steam from a broken heating pipe at the Cleveland Clinic in Ohio. That tragedy claimed 123 lives during the fire, with additional fatalities several days later, when hospitalized victims died due to inhaling excessive amounts of smoke from the burning film, which was laced with toxic gases such as sulfur dioxide and hydrogen cyanide.
Related fires in other medical facilities prompted the growing disuse of nitrocellulose stock for X-rays by 1933, nearly two decades before its use was discontinued for motion-picture films in favour of cellulose acetate film, more commonly known as "safety film". Nitrocellulose decomposition and new "safety" stocks Nitrocellulose was found to gradually decompose, releasing nitric acid and further catalyzing the decomposition (eventually into a flammable powder). Decades later, storage at low temperatures was discovered as a means of delaying these reactions indefinitely. Many films produced during the early 20th century were lost through this accelerating, self-catalyzed disintegration or through studio warehouse fires, and many others were deliberately destroyed specifically to avoid the fire risk. Salvaging old films is a major problem for film archivists (see film preservation). Nitrocellulose film base manufactured by Kodak can be identified by the presence of the word "nitrate" in dark letters along one edge; the word only in clear letters on a dark background indicates derivation from a nitrate base original negative or projection print, but the film in hand itself may be a later print or copy negative, made on safety film. Acetate film manufactured during the era when nitrate films were still in use was marked "Safety" or "Safety Film" along one edge in dark letters. 8, 9.5, and 16 mm film stocks, intended for amateur and other nontheatrical use, were never manufactured with a nitrate base in the west, but rumors exist of 16 mm nitrate film having been produced in the former Soviet Union and China. Nitrate dominated the market for professional-use 35 mm motion picture film from the industry's origins to the early 1950s. While cellulose acetate-based safety film, notably cellulose diacetate and cellulose acetate propionate, was produced in the gauge for small-scale use in niche applications (such as printing advertisements and other short films to enable them to be sent through the mails without the need for fire safety precautions), the early generations of safety film base had two major disadvantages relative to nitrate: it was much more expensive to manufacture, and considerably less durable in repeated projection. The cost of the safety precautions associated with the use of nitrate was significantly lower than the cost of using any of the safety bases available before 1948. These drawbacks were eventually overcome with the launch of cellulose triacetate base film by Eastman Kodak in 1948. Cellulose triacetate superseded nitrate as the film industry's mainstay base very quickly. While Kodak had discontinued some nitrate film stocks earlier, it stopped producing various nitrate roll films in 1950 and ceased production of nitrate 35 mm motion picture film in 1951. The crucial advantage cellulose triacetate had over nitrate was that it was no more of a fire risk than paper (the stock is often referred to as "non-flam": this is true—but it is combustible, just not in as volatile or as dangerous a way as nitrate), while it almost matched the cost and durability of nitrate. It remained in almost exclusive use in all film gauges until the 1980s, when polyester/PET film began to supersede it for intermediate and release printing. Polyester is much more resistant to polymer degradation than either nitrate or triacetate. 
Although triacetate does not decompose in as dangerous a way as nitrate does, it is still subject to a process known as deacetylation, often nicknamed "vinegar syndrome" (due to the acetic acid smell of decomposing film) by archivists, which causes the film to shrink, deform, become brittle and eventually unusable. PET, like cellulose mononitrate, is less prone to stretching than other available plastics. By the late 1990s, polyester had almost entirely superseded triacetate for the production of intermediate elements and release prints. Triacetate remains in use for most camera negative stocks because it can be "invisibly" spliced using solvents during negative assembly, while polyester film is usually spliced using adhesive tape patches, which leave visible marks in the frame area. However, ultrasonic splicing in the frame line area can be invisible. Also, polyester film is so strong, it will not break under tension and may cause serious damage to expensive camera or projector mechanisms in the event of a film jam, whereas triacetate film breaks easily, reducing the risk of damage. Many were opposed to the use of polyester for release prints for this reason, and because ultrasonic splicers are very expensive, beyond the budgets of many smaller theaters. In practice, though, this has not proved to be as much of a problem as was feared. Rather, with the increased use of automated long-play systems in cinemas, the greater strength of polyester has been a significant advantage in lessening the risk of a film performance being interrupted by a film break. Despite its self-oxidizing hazards, nitrate is still regarded highly as the stock is more transparent than replacement stocks, and older films used denser silver in the emulsion. The combination results in a notably more luminous image with a high contrast ratio. Fabric The solubility of nitrocellulose was the basis for the first "artificial silk" by Georges Audemars in 1855, which he called "Rayon".. However, Hilaire de Chardonnet was the first to patent a nitrocellulose fiber marketed as "artificial silk" at the Paris Exhibition of 1889. Commercial production started in 1891, but the result was flammable and more expensive than cellulose acetate or cuprammonium rayon. Because of this predicament, production ceased early in the 1900s. Nitrocellulose was briefly known as "mother-in-law silk". Frank Hastings Griffin invented the double-godet, a special stretch-spinning process that changed artificial silk to rayon, rendering it usable in many industrial products such as tire cords and clothing. Nathan Rosenstein invented the "spunize process" by which he turned rayon from a hard fiber to a fabric. This allowed rayon to become a popular raw material in textiles. Coatings Nitrocellulose lacquer manufactured by (among others) DuPont, was the primary material for painting automobiles for many years. Durability of finish, complexities of "multiple stage" modern finishes, and other factors including environmental regulation led manufacturers to choose newer technologies. It remained the favorite of hobbyists for both historical reasons and for the ease with which a professional finish can be obtained. Most automobile "touch up" paints are still made from lacquer because of its fast drying, easy application, and superior adhesion properties – regardless of the material used for the original finish. Guitars sometimes shared color codes with current automobiles. 
It fell out of favor for mass production use for a number of reasons including environmental regulation and the cost of application vs. "poly" finishes. However, Gibson still use nitrocellulose lacquers on all of their guitars, as well as Fender when reproducing historically accurate guitars. The nitrocellulose lacquer yellows and cracks over time, and custom shops will reproduce this aging to make instruments appear vintage. Guitars made by smaller shops (luthiers) also often use "nitro" as it has an almost mythical status among guitarists. Hazards Because of its explosive nature, not all applications of nitrocellulose were successful. In 1869, with elephants having been poached to near extinction, the billiards industry offered a US$10,000 prize to whoever came up with the best replacement for ivory billiard balls. John Wesley Hyatt created the winning replacement, which he created with a new material he invented, called camphored nitrocellulose—the first thermoplastic, better known as celluloid. The invention enjoyed a brief popularity, but the Hyatt balls were extremely flammable, and sometimes portions of the outer shell would explode upon impact. An owner of a billiard saloon in Colorado wrote to Hyatt about the explosive tendencies, saying that he did not mind very much personally but for the fact that every man in his saloon immediately pulled a gun at the sound. The process used by Hyatt to manufacture the billiard balls, patented in 1881, involved placing the mass of nitrocellulose in a rubber bag, which was then placed in a cylinder of liquid and heated. Pressure was applied to the liquid in the cylinder, which resulted in a uniform compression on the nitrocellulose mass, compressing it into a uniform sphere as the heat vaporized the solvents. The ball was then cooled and turned to make a uniform sphere. In light of the explosive results, this process was called the "Hyatt gun method". An overheated container of dry nitrocellulose is believed to be the initial cause of the 2015 Tianjin explosions. See also Pentaerythritol tetranitrate (PETN), a related explosive. Cordite Nitroglycerine Nitrostarch RE factor References External links Gun Cotton at The Periodic Table of Videos (University of Nottingham) Nitrocellulose Paper Video (aka:Flash paper) Cellulose, nitrate (Nitrocellulose)—ChemSub Online How To Make Nitro-Cellulose That Works 1846 introductions Nitrate esters Articles containing video clips Cellulose Cotton Explosive chemicals Film and video technology Firearm propellants Photographic chemicals Storage media Transparent materials Explosive polymers
Nitrocellulose
[ "Physics", "Chemistry" ]
5,752
[ "Physical phenomena", "Optical phenomena", "Materials", "Transparent materials", "Explosive chemicals", "Matter" ]
58,251
https://en.wikipedia.org/wiki/Nickel%E2%80%93metal%20hydride%20battery
A nickel–metal hydride battery (NiMH or Ni–MH) is a type of rechargeable battery. The chemical reaction at the positive electrode is similar to that of the nickel–cadmium cell (NiCd), with both using nickel oxide hydroxide (NiOOH). However, the negative electrodes use a hydrogen-absorbing alloy instead of cadmium. NiMH batteries can have two to three times the capacity of NiCd batteries of the same size, with significantly higher energy density, although only about half that of lithium-ion batteries. They are typically used as a substitute for similarly shaped non-rechargeable alkaline batteries, as they feature a slightly lower but generally compatible cell voltage and are less prone to leaking. History Work on NiMH batteries began at the Battelle-Geneva Research Center following the technology's invention in 1967. It was based on sintered Ti2Ni+TiNi+x alloys and NiOOH electrodes. Development was sponsored over nearly two decades by Daimler-Benz and by Volkswagen AG within Deutsche Automobilgesellschaft, now a subsidiary of Daimler AG. The batteries' specific energy reached 50 W·h/kg (180 kJ/kg), specific power up to 1000 W/kg and a life of 500 charge cycles (at 100% depth of discharge). Patent applications were filed in European countries (priority: Switzerland), the United States, and Japan. The patents were transferred to Daimler-Benz. Interest grew in the 1970s with the commercialisation of the nickel–hydrogen battery for satellite applications. Hydride technology promised an alternative, less bulky way to store the hydrogen. Research carried out by Philips Laboratories and France's CNRS developed new high-energy hybrid alloys incorporating rare-earth metals for the negative electrode. However, these suffered from alloy instability in alkaline electrolyte and consequently insufficient cycle life. In 1987, Willems and Buschow demonstrated a successful battery based on this approach (using a mixture of La0.8Nd0.2Ni2.5Co2.4Si0.1), which kept 84% of its charge capacity after 4000 charge-discharge cycles. More economically viable alloys using mischmetal instead of lanthanum were soon developed. Modern NiMH cells were based on this design. The first consumer-grade NiMH cells became commercially available in 1989. In 1998, Stanford Ovshinsky at Ovonic Battery Co., which had been working on MH-NiOOH batteries since the mid-1980s, improved the Ti–Ni alloy structure and composition and patented its innovations. In 2008, more than two million hybrid cars worldwide were manufactured with NiMH batteries. In the European Union, due to its Battery Directive, nickel–metal hydride batteries replaced Ni–Cd batteries for portable consumer use. About 22% of portable rechargeable batteries sold in Japan in 2010 were NiMH. In Switzerland in 2009, the equivalent statistic was approximately 60%. This percentage has fallen over time due to the increase in manufacture of lithium-ion batteries: in 2000, almost half of all portable rechargeable batteries sold in Japan were NiMH. In 2015 BASF produced a modified microstructure that helped make NiMH batteries more durable, in turn allowing changes to the cell design that saved considerable weight, allowing the specific energy to reach 140 watt-hours per kilogram. Electrochemistry The negative electrode reaction occurring in a NiMH cell is H2O + M + e− ⇌ OH− + MH. On the positive electrode, nickel oxyhydroxide, NiO(OH), is formed: Ni(OH)2 + OH− ⇌ NiO(OH) + H2O + e−. The reactions proceed left to right during charge and in the opposite direction during discharge.
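These one-electron reactions set an upper bound on how much charge a given mass of active material can store. The following Python sketch is an illustrative calculation, not part of the original article: it applies Faraday's law to the nickel hydroxide electrode, with the approximate molar mass and the one-electron-per-nickel assumption taken from the positive-electrode reaction above.

```python
# Theoretical specific capacity of the Ni(OH)2/NiOOH electrode, from Faraday's law.
# Illustrative back-of-the-envelope calculation; one electron is transferred per
# nickel centre, as in the charge/discharge reactions shown above.

F = 96485.0          # Faraday constant, C/mol
M_NiOH2 = 92.71      # molar mass of Ni(OH)2, g/mol (approximate)

coulombs_per_gram = F / M_NiOH2            # one electron per formula unit
mah_per_gram = coulombs_per_gram / 3.6     # 1 mAh = 3.6 C

print(f"Theoretical capacity of Ni(OH)2: {mah_per_gram:.0f} mAh/g")
# -> roughly 289 mAh/g; practical electrodes deliver less once binders,
#    current collectors and incomplete utilisation are accounted for.
```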
The metal M in the negative electrode of a NiMH cell is an intermetallic compound. Many different compounds have been developed for this application, but those in current use fall into two classes. The most common is AB5, where A is a rare-earth mixture of lanthanum, cerium, neodymium, praseodymium, and B is nickel, cobalt, manganese, or aluminium. Some cells use higher-capacity negative electrode materials based on AB2 compounds, where A is titanium or vanadium, and B is zirconium or nickel, modified with chromium, cobalt, iron, or manganese. NiMH cells have an alkaline electrolyte, usually potassium hydroxide. The positive electrode is nickel hydroxide, and the negative electrode is hydrogen in the form of an interstitial metal hydride. Hydrophilic polyolefin nonwovens are used for separation. Charge When fast-charging, it is advisable to charge the NiMH cells with a smart battery charger to avoid overcharging, which can damage cells. Trickle charging The simplest of the safe charging methods is with a fixed low current, with or without a timer. Most manufacturers claim that overcharging is safe at very low currents, below 0.1 C (C/10) (where C is the current equivalent to the capacity of the battery divided by one hour). The Panasonic NiMH charging manual warns that overcharging for long enough can damage a battery and suggests limiting the total charging time to 10–20 hours. Duracell further suggests that a trickle charge at C/300 can be used for batteries that must be kept in a fully charged state. Some chargers do this after the charge cycle, to offset natural self-discharge. A similar approach is suggested by Energizer, which indicates that self-catalysis can recombine gas formed at the electrodes for charge rates up to C/10. This leads to cell heating. The company recommends C/30 or C/40 for indefinite applications where long life is important. This is the approach taken in emergency lighting applications, where the design remains essentially the same as in older NiCd units, except for an increase in the trickle-charging resistor value. Panasonic's handbook recommends that NiMH batteries on standby be charged by a lower duty cycle approach, where a pulse of a higher current is used whenever the battery's voltage drops below 1.3 V. This can extend battery life and use less energy. ΔV charging method To prevent cell damage, fast chargers must terminate their charge cycle before overcharging occurs. One method is to monitor the change of voltage with time. When the battery is fully charged, the voltage across its terminals drops slightly. The charger can detect this and stop charging. This method is often used with nickel–cadmium cells, which display a large voltage drop at full charge. However, the voltage drop is much less pronounced for NiMH and can be non-existent at low charge rates, which can make the approach unreliable. Another option is to monitor the change of voltage with respect to time and stop when this becomes zero, but this risks premature cutoffs. With this method, a much higher charging rate can be used than with a trickle charge, up to 1 C. At this charge rate, Panasonic recommends to terminate charging when the voltage drops 5–10 mV per cell from the peak voltage. Since this method measures the voltage across the battery, a constant-current (rather than a constant-voltage) charging circuit is used. ΔT charging method The temperature-change method is similar in principle to the ΔV method. 
Because the charging voltage is nearly constant, constant-current charging delivers energy at a near-constant rate. When the cell is not fully charged, most of this energy is converted to chemical energy. However, when the cell reaches full charge, most of the charging energy is converted to heat. This increases the rate of change of battery temperature, which can be detected by a sensor such as a thermistor. Both Panasonic and Duracell suggest a maximal rate of temperature increase of 1 °C per minute. Using a temperature sensor allows an absolute temperature cutoff, which Duracell suggests at 60 °C. With both the ΔT and the ΔV charging methods, both manufacturers recommend a further period of trickle charging to follow the initial rapid charge. Safety A resettable fuse in series with the cell, particularly of the bimetallic strip type, increases safety. This fuse opens if either the current or the temperature gets too high. Modern NiMH cells contain catalysts to handle gases produced by over-charging: 2 H2 + O2 → 2 H2O (in the presence of a catalyst). However, this only works with overcharging currents of up to 0.1 C (that is, nominal capacity divided by ten hours). This reaction causes batteries to heat, ending the charging process. A method for very rapid charging called in-cell charge control involves an internal pressure switch in the cell, which disconnects the charging current in the event of overpressure. One inherent risk with NiMH chemistry is that overcharging causes hydrogen gas to form, potentially rupturing the cell. Therefore, cells have a vent to release the gas in the event of serious overcharging. NiMH batteries are made of environmentally friendly materials. The batteries contain only mildly toxic substances and are recyclable. Loss of capacity Voltage depression (often mistakenly attributed to the memory effect) from repeated partial discharge can occur, but is reversible with a few full discharge/charge cycles. Discharge A fully charged cell supplies an average 1.25 V/cell during discharge, declining to about 1.0–1.1 V/cell (further discharge may cause permanent damage in the case of multi-cell packs, due to polarity reversal of the weakest cell). Under a light load (0.5 amperes), the starting voltage of a freshly charged AA NiMH cell in good condition is about 1.4 volts. Over-discharge Complete discharge of multi-cell packs can cause reverse polarity in one or more cells, which can permanently damage them. This situation can occur in the common arrangement of four AA cells in series, where one cell completely discharges before the others due to small differences in capacity among the cells. When this happens, the good cells start to drive the discharged cell into reverse polarity (i.e. positive anode and negative cathode). Some cameras, GPS receivers and PDAs detect the safe end-of-discharge voltage of the series cells and perform an auto-shutdown, but devices such as flashlights and some toys do not. Irreversible damage from polarity reversal is a particular danger, even when a low voltage-threshold cutout is employed, when the cells vary in temperature. This is because capacity significantly declines as the cells are cooled. This results in a lower voltage under load for the colder cells. Self-discharge Historically, NiMH cells have had a somewhat higher self-discharge rate (equivalent to internal leakage) than NiCd cells. The self-discharge rate varies greatly with temperature, where lower storage temperature leads to slower discharge and longer battery life.
Self-discharge is highest on the first day and then stabilizes at a lower daily rate at room temperature; at elevated temperatures it is approximately three times as high. Low self-discharge The low–self-discharge nickel–metal hydride battery (LSD NiMH) has a significantly lower rate of self-discharge. The innovation was introduced in 2005 by Sanyo, branded Eneloop. By using improvements to the electrode separator, positive electrode, and other components, manufacturers claim the cells retain 70–85% of their capacity when stored for one year, compared to about half for normal NiMH batteries. They are otherwise similar to standard NiMH batteries, and can be charged in standard NiMH chargers. These cells are marketed as "hybrid", "ready-to-use" or "pre-charged" rechargeables. Retention of charge depends in large part on the battery's leakage resistance (the higher the better), and on its physical size and charge capacity. Separators keep the two electrodes apart to slow electrical discharge while allowing the transport of ionic charge carriers that close the circuit during the passage of current. High-quality separators are critical for battery performance. The self-discharge rate depends upon separator thickness; thicker separators reduce self-discharge but also reduce capacity as they leave less space for active components, and thin separators lead to higher self-discharge. Some batteries may have overcome this tradeoff by using more precisely manufactured thin separators, and a sulfonated polyolefin separator, an improvement over the hydrophilic polyolefin based on ethylene vinyl alcohol. Low-self-discharge cells have somewhat lower capacity than otherwise equivalent NiMH cells because of the larger volume of the separator. The highest-capacity low-self-discharge AA cells have 2500 mAh capacity, compared to 2700 mAh for high-capacity AA NiMH cells. Common methods to improve self-discharge include: use of a sulfonated separator (causing removal of N-containing compounds), use of an acrylic acid grafted PP separator (causing reduction in Al- and Mn-debris formation in the separator), removal of Co and Mn in the A2B7 MH alloy (causing reduction in debris formation in the separator), increase of the amount of electrolyte (causing reduction in hydrogen diffusion in the electrolyte), removal of Cu-containing components (causing reduction in micro-shorts), PTFE coating on the positive electrode (causing suppression of the reaction between NiOOH and H2), CMC solution dipping (causing suppression of oxygen evolution), micro-encapsulation of Cu on the MH alloy (causing a decrease in H2 released from the MH alloy), Ni–B alloy coating on the MH alloy (causing formation of a protective layer), alkaline treatment of the negative electrode (causing reduction of leach-out of Mn and Al), addition of LiOH and NaOH into the electrolyte (causing reduction in electrolyte corrosion capabilities), and addition of Al2(SO4)3 into the electrolyte (causing reduction in MH alloy corrosion). Compared to other battery types Alkaline batteries NiMH cells are often used in digital cameras and other high-drain devices, where over the duration of single-charge use they outperform primary (such as alkaline) batteries. NiMH cells are advantageous for high-current-drain applications compared to alkaline batteries, largely due to their lower internal resistance.
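The practical effect of internal resistance on high-drain performance can be illustrated with a very simple battery model. In the Python sketch below, the EMF and resistance values are assumed ballpark figures chosen purely for illustration, not specifications taken from the article.

```python
# Why internal resistance matters for high-drain use: terminal voltage under load.
# The resistance values below are illustrative assumptions, not measured data.

def terminal_voltage(emf_v: float, internal_resistance_ohm: float, load_a: float) -> float:
    """Simple battery model: V_terminal = EMF - I * R_internal."""
    return emf_v - load_a * internal_resistance_ohm

load = 1.0  # amperes, e.g. a digital camera firing its flash
cells = {
    "alkaline AA (assumed ~0.2 ohm)": (1.50, 0.20),
    "NiMH AA (assumed ~0.03 ohm)": (1.25, 0.03),
}
for name, (emf, r) in cells.items():
    v = terminal_voltage(emf, r, load)
    print(f"{name}: {v:.2f} V at {load:.1f} A")
# The NiMH cell sags only a few tens of millivolts, while the alkaline cell
# loses a substantial fraction of its open-circuit voltage under the same load.
```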
Typical alkaline AA-size batteries, which offer approximately 2.6 Ah capacity at low current demand (25 mA), provide only 1.3 Ah capacity with a 500 mA load. Digital cameras with LCDs and flashlights can draw over 1 A, quickly depleting them. NiMH cells can deliver these current levels without similar loss of capacity. Devices that were designed to operate using primary alkaline chemistry (or zinc-carbon/chloride) cells may not function with NiMH cells. However, most devices compensate for the voltage drop of an alkaline battery as it discharges down to about 1 volt. Low internal resistance allows NiMH cells to deliver a nearly constant voltage until they are almost completely discharged. Thus battery-level indicators designed to read alkaline cells overstate the remaining charge when used with NiMH cells, as the voltage of alkaline cells decreases steadily during most of the discharge cycle. Lithium-ion batteries Lithium-ion batteries can deliver extremely high power and have a higher specific energy than nickel–metal hydride batteries, but they were originally significantly more expensive. The cost of lithium batteries fell drastically during the 2010s and many small consumer devices now have non-consumer-replaceable lithium batteries as a result. Lithium batteries produce a higher voltage (3.2–3.7 V nominal), and are thus not a drop-in replacement for AA (alkaline or NiMH) batteries without circuitry to reduce voltage. Although a single lithium cell can typically replace three NiMH cells in series in terms of voltage, the form factor means that the device still needs modification. Lead batteries NiMH batteries can easily be made smaller and lighter than lead-acid batteries and have completely replaced them in small devices. However, lead-acid batteries can deliver huge current at low cost, making them more suitable for starter motors in combustion vehicles. Nickel–metal hydride batteries constituted three percent of the battery market. Applications Consumer electronics NiMH batteries have replaced NiCd for many roles, notably small rechargeable batteries. NiMH batteries are commonly available in the AA (penlight) size. These have nominal charge capacities (C) of 1.1–2.8 Ah at 1.2 V, measured at the rate that discharges the cell in 5 hours. Useful discharge capacity is a decreasing function of the discharge rate, but up to a rate of around 1×C (full discharge in 1 hour), it does not differ significantly from the nominal capacity. NiMH batteries nominally operate at 1.2 V per cell, somewhat lower than conventional 1.5 V cells, but can operate many devices designed for that voltage. Electric vehicles NiMH batteries were frequently used in prior-generation electric and hybrid-electric vehicles; as of 2020 they have been superseded almost entirely by lithium-ion batteries in all-electric and plug-in hybrid vehicles, but they remain in use in some hybrid vehicles (the 2020 Toyota Highlander, for example). Prior all-electric plug-in vehicles included the General Motors EV1, first-generation Toyota RAV4 EV, Honda EV Plus, Ford Ranger EV and Vectrix scooter. Every first-generation hybrid vehicle used NiMH batteries, most notably the Toyota Prius and Honda Insight; later models including the Ford Escape Hybrid, Chevrolet Malibu Hybrid and Honda Civic Hybrid also used them. Patent issues Stanford R. Ovshinsky invented and patented a popular improvement of the NiMH battery and founded Ovonic Battery Company in 1982. General Motors purchased Ovonics' patent in 1994.
By the late 1990s, NiMH batteries were being used successfully in many fully electric vehicles, such as the General Motors EV1 and Dodge Caravan EPIC minivan. This generation of electric cars, although successful, was abruptly pulled off the market. In October 2000, the patent was sold to Texaco, and a week later Texaco was acquired by Chevron. Chevron's Cobasys subsidiary provides these batteries only to large OEM orders. General Motors shut down production of the EV1, citing lack of battery availability as a chief obstacle. Cobasys control of NiMH batteries created a patent encumbrance for large automotive NiMH batteries. See also Automotive battery Battery recycling Comparison of commercial battery types Gas diffusion electrode Jelly roll Lead–acid battery List of battery sizes List of battery types Lithium-ion battery Lithium iron phosphate battery Nickel–zinc battery Nickel(II) hydroxide Nickel(III) oxide Patent encumbrance of large automotive NiMH batteries Power-to-weight ratio References External links "Bipolar Nickel Metal Hydride Battery" by Martin G. Klein, Michael Eskra, Robert Plivelich and Paula Ralston Energizer Nickel Metal Hydride (NiMH) Handbook and Application Manual NiMH battery charging and safety Metal hydrides Nickel Rechargeable batteries
Nickel–metal hydride battery
[ "Chemistry" ]
4,087
[ "Metal hydrides", "Inorganic compounds", "Reducing agents" ]
58,256
https://en.wikipedia.org/wiki/Wax
Waxes are a diverse class of organic compounds that are lipophilic, malleable solids near ambient temperatures. They include higher alkanes and lipids, typically with melting points above about 40 °C (104 °F), melting to give low-viscosity liquids. Waxes are insoluble in water but soluble in nonpolar organic solvents such as hexane, benzene and chloroform. Natural waxes of different types are produced by plants and animals and occur in petroleum. Chemistry Waxes are organic compounds that characteristically consist of long aliphatic alkyl chains, although aromatic compounds may also be present. Natural waxes may contain unsaturated bonds and include various functional groups such as fatty acids, primary and secondary alcohols, ketones, aldehydes and fatty acid esters. Synthetic waxes often consist of homologous series of long-chain aliphatic hydrocarbons (alkanes or paraffins) that lack functional groups. Plant and animal waxes Waxes are synthesized by many plants and animals. Those of animal origin typically consist of wax esters derived from a variety of fatty acids and long-chain (fatty) alcohols. In waxes of plant origin, characteristic mixtures of unesterified hydrocarbons may predominate over esters. The composition depends not only on species, but also on the geographic location of the organism. Animal waxes The best-known animal wax is beeswax, used in constructing the honeycombs of beehives, but other insects also secrete waxes. A major component of beeswax is myricyl palmitate, which is an ester of triacontanol and palmitic acid. Spermaceti occurs in large amounts in the head oil of the sperm whale. One of its main constituents is cetyl palmitate, another ester of a fatty acid and a fatty alcohol. Lanolin is a wax obtained from wool, consisting of esters of sterols. Plant waxes Plants secrete waxes into and on the surface of their cuticles as a way to control evaporation, wettability and hydration. The epicuticular waxes of plants are mixtures of substituted long-chain aliphatic hydrocarbons, containing alkanes, alkyl esters, fatty acids, primary and secondary alcohols, diols, ketones and aldehydes. From the commercial perspective, the most important plant wax is carnauba wax, a hard wax obtained from the Brazilian palm Copernicia prunifera. Containing the ester myricyl cerotate, it has many applications, such as confectionery and other food coatings, car and furniture polish, floss coating, and surfboard wax. Other more specialized vegetable waxes include jojoba oil, candelilla wax and ouricury wax. Modified plant and animal waxes Plant- and animal-based waxes or oils can undergo selective chemical modifications to produce waxes with more desirable properties than are available in the unmodified starting material. This approach has relied on green chemistry methods, including olefin metathesis and enzymatic reactions, and can be used to produce waxes from inexpensive starting materials like vegetable oils. Petroleum derived waxes Although many natural waxes contain esters, paraffin waxes are hydrocarbons, mixtures of alkanes usually in a homologous series of chain lengths. These materials represent a significant fraction of petroleum. They are refined by vacuum distillation. Paraffin waxes are mixtures of saturated n- and iso-alkanes, naphthenes, and alkyl- and naphthene-substituted aromatic compounds. A typical alkane paraffin wax chemical composition comprises hydrocarbons with the general formula CnH2n+2, such as hentriacontane, C31H64.
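As a quick illustration of the general formula CnH2n+2 (a side calculation, not part of the original article), the molar mass of an n-alkane follows directly from the carbon count:

```python
# Molar mass of an n-alkane CnH2n+2; hentriacontane (C31H64) as an example.
def alkane_molar_mass(n_carbons: int) -> float:
    return n_carbons * 12.011 + (2 * n_carbons + 2) * 1.008

print(f"C31H64: {alkane_molar_mass(31):.1f} g/mol")  # roughly 437 g/mol
```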
The degree of branching has an important influence on the properties. Microcrystalline wax is a lesser-produced, petroleum-based wax that contains a higher percentage of isoparaffinic (branched) hydrocarbons and naphthenic hydrocarbons. Millions of tons of paraffin waxes are produced annually. They are used in foods (such as chewing gum and cheese wrapping), in candles and cosmetics, as non-stick and waterproofing coatings and in polishes. Montan wax Montan wax is a fossilized wax extracted from coal and lignite. It is very hard, reflecting the high concentration of saturated fatty acids and alcohols. Although dark brown and odorous, it can be purified and bleached to give commercially useful products. Polyethylene and related derivatives About 200 million kilograms of polyethylene waxes are consumed annually. Polyethylene waxes are manufactured by one of three methods: the direct polymerization of ethylene, potentially including co-monomers; the thermal degradation of high-molecular-weight polyethylene resin; or the recovery of low-molecular-weight fractions from high-molecular-weight resin production. Each production technique generates products with slightly different properties. Key properties of low-molecular-weight polyethylene waxes are viscosity, density and melt point. Polyethylene waxes produced by means of degradation or recovery from polyethylene resin streams contain very low molecular weight materials that must be removed to prevent volatilization and potential fire hazards during use. Polyethylene waxes manufactured by this method are usually stripped of low molecular weight fractions to yield a flash point >500 °F (>260 °C). Many polyethylene resin plants produce a low-molecular-weight stream often referred to as low polymer wax (LPW). LPW is unrefined and contains volatile oligomers and corrosive catalyst, and may contain other foreign material and water. Refining of LPW to produce a polyethylene wax involves removal of the oligomers and hazardous catalyst. Proper refining of LPW to produce polyethylene wax is especially important in applications requiring FDA or other regulatory certification. Uses Waxes are mainly consumed industrially as components of complex formulations, often for coatings. The main use of polyethylene and polypropylene waxes is in the formulation of colourants for plastics. Waxes confer matting effects (i.e., non-glossy finishes) and wear resistance to paints. Polyethylene waxes are incorporated into inks in the form of dispersions to decrease friction. They are employed as release agents, find use as slip agents in furniture, and confer corrosion resistance. Candles Waxes such as paraffin wax or beeswax, and hard fats such as tallow, are used to make candles, used for lighting and decoration. Another wax used in candle manufacturing is soy wax, which is made by hydrogenation of soybean oil. Wood products Waxes are used as finishes and coatings for wood products. Beeswax is frequently used as a lubricant on drawer slides where wood-to-wood contact occurs. Other uses Sealing wax was used to close important documents in the Middle Ages. Wax tablets were used as writing surfaces. There were different types of wax in the Middle Ages, namely four kinds of wax (Ragusan, Montenegro, Byzantine, and Bulgarian), "ordinary" waxes from Spain, Poland, and Riga, unrefined waxes and colored waxes (red, white, and green).
Waxes are used to make waxed paper, impregnating and coating paper and card to waterproof it or make it resistant to staining, or to modify its surface properties. Waxes are also used in shoe polishes, wood polishes, and automotive polishes, as mold release agents in mold making, as a coating for many cheeses, and to waterproof leather and fabric. Wax has been used since antiquity as a temporary, removable model in lost-wax casting of gold, silver and other materials. Wax with colorful pigments added has been used as a medium in encaustic painting, and is used today in the manufacture of crayons, china markers and colored pencils. Carbon paper, used for making duplicate typewritten documents was coated with carbon black suspended in wax, typically montan wax, but has largely been superseded by photocopiers and computer printers. In another context, lipstick and mascara are blends of various fats and waxes colored with pigments, and both beeswax and lanolin are used in other cosmetics. Ski wax is used in skiing and snowboarding. Also, the sports of surfing and skateboarding often use wax to enhance the performance. Some waxes are considered food-safe and are used to coat wooden cutting boards and other items that come into contact with food. Beeswax or coloured synthetic wax is used to decorate Easter eggs in Romania, Ukraine, Poland, Lithuania and the Czech Republic. Paraffin wax is used in making chocolate covered sweets. Wax is also used in wax bullets, which are used as simulation aids, and for wax sculpturing. Specific examples Animal waxes Beeswax – produced by honey bees Chinese wax – produced by the scale insect Ceroplastes ceriferus Lanolin (wool wax) – from the sebaceous glands of sheep Shellac wax – from the lac insect Kerria lacca Spermaceti – from the head cavities and blubber of the sperm whale Vegetable waxes Bayberry wax – from the surface wax of the fruits of the bayberry shrub, Myrica faya Candelilla wax – from the Mexican shrubs Euphorbia cerifera and Euphorbia antisyphilitica Carnauba wax – from the leaves of the carnauba palm, Copernicia cerifera Castor wax – catalytically hydrogenated castor oil Esparto wax – a byproduct of making paper from esparto grass (Macrochloa tenacissima) Japan wax – a vegetable triglyceride (not a true wax), from the berries of Rhus and Toxicodendron species Jojoba oil – a liquid wax ester, from the seed of Simmondsia chinensis. Ouricury wax – from the Brazilian feather palm, Syagrus coronata. Rice bran wax – obtained from rice bran (Oryza sativa) Soy wax – from soybean oil Tallow tree wax – from the seeds of the tallow tree Triadica sebifera. Mineral waxes Ceresin waxes Montan wax – extracted from lignite and brown coal Ozocerite – found in lignite beds Peat waxes Petroleum waxes Paraffin wax – made of long-chain alkane hydrocarbons Microcrystalline wax – with very fine crystalline structure See also Slip melting point Wax acid Wax argument, or the "ball of wax example", is a thought experiment originally articulated by Renė Descartes. References External links Waxes Petroleum products Plant products Animal products Lipids Esters Soft matter
Wax
[ "Physics", "Chemistry", "Materials_science" ]
2,279
[ "Biomolecules by chemical classification", "Natural products", "Petroleum products", "Esters", "Soft matter", "Animal products", "Functional groups", "Organic compounds", "Petroleum", "Materials", "Condensed matter physics", "Plant products", "Lipids", "Matter", "Waxes" ]
58,267
https://en.wikipedia.org/wiki/Conceptual%20schema
A conceptual schema or conceptual data model is a high-level description of informational needs underlying the design of a database. It typically includes only the core concepts and the main relationships among them. This is a high-level model with insufficient detail to build a complete, functional database. It describes the structure of the whole database for a group of users. The conceptual model is also known as the data model that can be used to describe the conceptual schema when a database system is implemented. It hides the internal details of physical storage and targets the description of entities, datatypes, relationships and constraints. Overview A conceptual schema is a map of concepts and their relationships used for databases. This describes the semantics of an organization and represents a series of assertions about its nature. Specifically, it describes the things of significance to an organization (entity classes), about which it is inclined to collect information, and their characteristics (attributes) and the associations between pairs of those things of significance (relationships). Because a conceptual schema represents the semantics of an organization, and not a database design, it may exist on various levels of abstraction. The original ANSI four-schema architecture began with the set of external schemata that each represents one person's view of the world around him or her. These are consolidated into a single conceptual schema that is the superset of all of those external views. A data model can be as concrete as each person's perspective, but this tends to make it inflexible. If that person's world changes, the model must change. Conceptual data models take a more abstract perspective, identifying the fundamental things, of which the things an individual deals with are just examples. The model does allow for what is called inheritance in object oriented terms. The set of instances of an entity class may be subdivided into entity classes in their own right. Thus, each instance of a sub-type entity class is also an instance of the entity class's super-type. Each instance of the super-type entity class, then is also an instance of one of the sub-type entity classes. Super-type/sub-type relationships may be exclusive or not. A methodology may require that each instance of a super-type may only be an instance of one sub-type. Similarly, a super-type/sub-type relationship may be exhaustive or not. It is exhaustive if the methodology requires that each instance of a super-type must be an instance of a sub-type. A sub-type named "Other" is often necessary. Example relationships Each PERSON may be the vendor in one or more ORDERS. Each ORDER must be from one and only one PERSON. PERSON is a sub-type of PARTY. (Meaning that every instance of PERSON is also an instance of PARTY.) Each EMPLOYEE may have a supervisor who is also an EMPLOYEE. Data structure diagram A data structure diagram (DSD) is a data model or diagram used to describe conceptual data models by providing graphical notations which document entities and their relationships, and the constraints that bind them. See also References Further reading Perez, Sandra K., & Anthony K. Sarris, eds. (1995) Technical Report for IRDS Conceptual Schema, Part 1: Conceptual Schema for IRDS, Part 2: Modeling Language Analysis, X3/TR-14:1995, American National Standards Institute, New York, NY. Halpin T, Morgan T (2008) Information Modeling and Relational Databases, 2nd edn., San Francisco, CA: Morgan Kaufmann. 
External links A different point of view, as described by the agile community Data modeling Conceptual modelling
Conceptual schema
[ "Engineering" ]
743
[ "Data modeling", "Data engineering" ]
58,282
https://en.wikipedia.org/wiki/Thermal%20diffusivity
In heat transfer analysis, thermal diffusivity is the thermal conductivity divided by density and specific heat capacity at constant pressure. It is a measure of the rate of heat transfer inside a material and has SI units of m2/s. It is an intensive property. Thermal diffusivity is usually denoted by lowercase alpha (α), although other symbols, such as κ (kappa), are also used. The formula is α = k / (ρ c_p), where k is thermal conductivity (W/(m·K)), c_p is specific heat capacity (J/(kg·K)), and ρ is density (kg/m3). Together, ρ c_p can be considered the volumetric heat capacity (J/(m3·K)). As seen in the heat equation, one way to view thermal diffusivity is as the ratio of the time derivative of temperature to its curvature, quantifying the rate at which temperature concavity is "smoothed out". Thermal diffusivity is a contrasting measure to thermal effusivity. In a substance with high thermal diffusivity, heat moves rapidly through it because the substance conducts heat quickly relative to its volumetric heat capacity or 'thermal bulk'. Thermal diffusivity is often measured with the flash method. It involves heating a strip or cylindrical sample with a short energy pulse at one end and analyzing the temperature change (reduction in amplitude and phase shift of the pulse) a short distance away. Thermal diffusivity of selected materials and substances See also Heat equation Laser flash analysis Thermophoresis Thermal effusivity Thermal time constant References Heat transfer Physical quantities Heat conduction
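A minimal sketch of the defining relation α = k / (ρ c_p); the property values below are rough, assumed figures for pure copper near room temperature, used only to illustrate the computation.

```python
def thermal_diffusivity(k: float, rho: float, cp: float) -> float:
    """Return thermal diffusivity alpha = k / (rho * cp) in m^2/s.

    k   : thermal conductivity, W/(m·K)
    rho : density, kg/m^3
    cp  : specific heat capacity at constant pressure, J/(kg·K)
    """
    return k / (rho * cp)

# Rough, assumed values for copper near room temperature (illustrative only).
alpha_cu = thermal_diffusivity(k=400.0, rho=8960.0, cp=385.0)
print(f"alpha ≈ {alpha_cu:.2e} m^2/s")   # on the order of 1e-4 m^2/s
```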
Thermal diffusivity
[ "Physics", "Chemistry", "Mathematics" ]
329
[ "Transport phenomena", "Thermodynamic properties", "Physical phenomena", "Heat transfer", "Physical quantities", "Quantity", "Thermodynamics", "Heat conduction", "Physical properties" ]
58,283
https://en.wikipedia.org/wiki/Prandtl%20number
The Prandtl number (Pr) or Prandtl group is a dimensionless number, named after the German physicist Ludwig Prandtl, defined as the ratio of momentum diffusivity to thermal diffusivity. The Prandtl number is given as Pr = ν / α = (μ/ρ) / (k/(ρ c_p)) = c_p μ / k, where: ν is the momentum diffusivity (kinematic viscosity), ν = μ/ρ (SI units: m2/s); α is the thermal diffusivity, α = k/(ρ c_p) (SI units: m2/s); μ is the dynamic viscosity (SI units: Pa s = N s/m2); k is the thermal conductivity (SI units: W/(m·K)); c_p is the specific heat (SI units: J/(kg·K)); and ρ is the density (SI units: kg/m3). Note that whereas the Reynolds number and Grashof number are subscripted with a scale variable, the Prandtl number contains no such length scale and is dependent only on the fluid and the fluid state. The Prandtl number is often found in property tables alongside other properties such as viscosity and thermal conductivity. The mass transfer analog of the Prandtl number is the Schmidt number, and the ratio of the Prandtl number to the Schmidt number is the Lewis number. Experimental values Typical values For most gases over a wide range of temperature and pressure, Pr is approximately constant. Therefore, it can be used to determine the thermal conductivity of gases at high temperatures, where it is difficult to measure experimentally due to the formation of convection currents. Typical values for Pr are: 0.003 for molten potassium at 975 K; around 0.015 for mercury; 0.065 for molten lithium at 975 K; around 0.16–0.7 for mixtures of noble gases or noble gases with hydrogen; 0.63 for oxygen; around 0.71 for air and many other gases; 1.38 for gaseous ammonia; between 4 and 5 for R-12 refrigerant; around 7.56 for water (at 18 °C); 13.4 and 7.2 for seawater (at 0 °C and 20 °C respectively); 50 for n-butanol; between 100 and 40,000 for engine oil; 1000 for glycerol; 10,000 for polymer melts; and around 1 for Earth's mantle. Formula for the calculation of the Prandtl number of air and water For air at a pressure of 1 bar, the Prandtl numbers in the temperature range between −100 °C and +500 °C can be calculated from an empirical fit in which the temperature is entered in degrees Celsius; the deviations are a maximum of 0.1% from the literature values. The Prandtl numbers for water (1 bar) can be determined in the temperature range between 0 °C and 90 °C from a similar empirical fit, again with the temperature entered in degrees Celsius; the deviations are a maximum of 1% from the literature values. Physical interpretation Small values of the Prandtl number, Pr ≪ 1, mean that the thermal diffusivity dominates, whereas with large values, Pr ≫ 1, the momentum diffusivity dominates the behavior. For example, the listed value for liquid mercury indicates that the heat conduction is more significant compared to convection, so thermal diffusivity is dominant. However, engine oil, with its high viscosity and low thermal conductivity, has a higher momentum diffusivity than thermal diffusivity. The Prandtl numbers of gases are about 1, which indicates that both momentum and heat dissipate through the fluid at about the same rate. Heat diffuses very quickly in liquid metals (Pr ≪ 1) and very slowly in oils (Pr ≫ 1) relative to momentum. Consequently the thermal boundary layer is much thicker for liquid metals and much thinner for oils relative to the velocity boundary layer. In heat transfer problems, the Prandtl number controls the relative thickness of the momentum and thermal boundary layers. When Pr is small, it means that the heat diffuses quickly compared to the velocity (momentum).
This means that for liquid metals the thermal boundary layer is much thicker than the velocity boundary layer. In laminar boundary layers, the ratio of the thermal to momentum boundary layer thickness over a flat plate is well approximated by δ_t/δ ≈ Pr^(−1/3), where δ_t is the thermal boundary layer thickness and δ is the momentum boundary layer thickness. For incompressible flow over a flat plate, two Nusselt number correlations, one asymptotically correct in the low-Prandtl-number limit and one in the high-Prandtl-number limit, are available, each expressed in terms of the Reynolds number Re. These two asymptotic solutions can be blended together using the concept of the Norm (mathematics). See also Turbulent Prandtl number Magnetic Prandtl number References Further reading Convection Dimensionless numbers of fluid mechanics Dimensionless numbers of thermodynamics Fluid dynamics
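A small sketch of the identity Pr = c_p μ / k and of the flat-plate boundary layer thickness ratio δ_t/δ ≈ Pr^(−1/3) discussed above; the water and air property values are rough, assumed figures used only for illustration.

```python
def prandtl_number(mu: float, cp: float, k: float) -> float:
    """Return Pr = cp * mu / k (equivalently nu / alpha).

    mu : dynamic viscosity, Pa·s
    cp : specific heat capacity, J/(kg·K)
    k  : thermal conductivity, W/(m·K)
    """
    return cp * mu / k

def boundary_layer_thickness_ratio(pr: float) -> float:
    """Laminar flat-plate estimate: delta_t / delta ≈ Pr**(-1/3)."""
    return pr ** (-1.0 / 3.0)

# Approximate properties of liquid water near 20 °C (assumed, illustrative).
pr_water = prandtl_number(mu=1.0e-3, cp=4182.0, k=0.60)
print(f"Pr(water) ≈ {pr_water:.1f}")   # roughly 7, consistent with the table above
print(f"delta_t/delta (water) ≈ {boundary_layer_thickness_ratio(pr_water):.2f}")

# Air near room conditions has Pr ≈ 0.71, so the two layers are of similar thickness.
print(f"delta_t/delta (air) ≈ {boundary_layer_thickness_ratio(0.71):.2f}")
```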
Prandtl number
[ "Physics", "Chemistry", "Engineering" ]
978
[ "Transport phenomena", "Thermodynamic properties", "Physical phenomena", "Physical quantities", "Dimensionless numbers of thermodynamics", "Chemical engineering", "Convection", "Thermodynamics", "Piping", "Fluid dynamics" ]
58,285
https://en.wikipedia.org/wiki/Nusselt%20number
In thermal fluid dynamics, the Nusselt number (Nu, after Wilhelm Nusselt) is the ratio of total heat transfer to conductive heat transfer at a boundary in a fluid. Total heat transfer combines conduction and convection. Convection includes both advection (fluid motion) and diffusion (conduction). The conductive component is measured under the same conditions as the convective but for a hypothetically motionless fluid. It is a dimensionless number, closely related to the fluid's Rayleigh number. A Nusselt number of order one represents heat transfer by pure conduction. A value between one and 10 is characteristic of slug flow or laminar flow. A larger Nusselt number corresponds to more active convection, with turbulent flow typically in the 100–1000 range. A similar non-dimensional property is the Biot number, which concerns thermal conductivity for a solid body rather than a fluid. The mass transfer analogue of the Nusselt number is the Sherwood number. Definition The Nusselt number is the ratio of total heat transfer (convection + conduction) to conductive heat transfer across a boundary. The convection and conduction heat flows are parallel to each other and to the surface normal of the boundary surface, and are all perpendicular to the mean fluid flow in the simple case. The Nusselt number is defined as Nu = h L / k, where h is the convective heat transfer coefficient of the flow, L is the characteristic length, and k is the thermal conductivity of the fluid. Selection of the characteristic length should be in the direction of growth (or thickness) of the boundary layer; some examples of characteristic length are: the outer diameter of a cylinder in (external) cross flow (perpendicular to the cylinder axis), the length of a vertical plate undergoing natural convection, or the diameter of a sphere. For complex shapes, the length may be defined as the volume of the fluid body divided by the surface area. The thermal conductivity of the fluid is typically (but not always) evaluated at the film temperature, which for engineering purposes may be calculated as the mean-average of the bulk fluid temperature and wall surface temperature. In contrast to the definition given above, known as the average Nusselt number, the local Nusselt number is defined by taking the length to be the distance from the surface boundary to the local point of interest. The mean, or average, number is obtained by integrating the expression over the range of interest, for example by averaging the local heat transfer coefficient over the plate length, h̄ = (1/L) ∫₀ᴸ h(x) dx. Context An understanding of convection boundary layers is necessary to understand convective heat transfer between a surface and a fluid flowing past it. A thermal boundary layer develops if the fluid free stream temperature and the surface temperature differ. A temperature profile exists due to the energy exchange resulting from this temperature difference. The heat transfer rate can be written using Newton's law of cooling as Q = h A (T_s − T_∞), where h is the heat transfer coefficient, A is the heat transfer surface area, T_s is the surface temperature, and T_∞ is the free-stream fluid temperature. Because heat transfer at the surface is by conduction, the same quantity can be expressed in terms of the thermal conductivity k as Q = −k A (∂T/∂y) evaluated at the surface (y = 0). These two expressions are equal; thus h (T_s − T_∞) = −k (∂T/∂y)|_(y=0). Rearranging, h/k = −(∂T/∂y)|_(y=0) / (T_s − T_∞). Multiplying by a representative length L gives a dimensionless expression: hL/k = −L (∂T/∂y)|_(y=0) / (T_s − T_∞). The right-hand side is now the ratio of the temperature gradient at the surface to the reference temperature gradient, while the left-hand side is similar to the Biot modulus. This becomes the ratio of conductive thermal resistance to the convective thermal resistance of the fluid, otherwise known as the Nusselt number, Nu = hL/k.
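A minimal sketch of the defining relation Nu = hL/k, used in both directions (computing Nu from h and recovering h from a known Nu); the numeric inputs are arbitrary, assumed values for illustration only.

```python
def nusselt_number(h: float, length: float, k: float) -> float:
    """Nu = h * L / k  (h in W/(m^2·K), L in m, k in W/(m·K))."""
    return h * length / k

def htc_from_nusselt(nu: float, length: float, k: float) -> float:
    """Invert the definition to get h = Nu * k / L."""
    return nu * k / length

# Illustrative, assumed numbers: air (k ≈ 0.026 W/(m·K)) over a 0.1 m plate.
h = htc_from_nusselt(nu=50.0, length=0.1, k=0.026)
print(f"h ≈ {h:.1f} W/(m^2·K)")                          # ≈ 13 W/(m^2·K)
print(f"check: Nu = {nusselt_number(h, 0.1, 0.026):.1f}")  # recovers 50
```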
Derivation The Nusselt number may be obtained by a non-dimensional analysis of Fourier's law since it is equal to the dimensionless temperature gradient at the surface: , where q is the heat transfer rate, k is the constant thermal conductivity and T the fluid temperature. Indeed, if: and we arrive at then we define so the equation becomes By integrating over the surface of the body: , where . Empirical correlations Typically, for free convection, the average Nusselt number is expressed as a function of the Rayleigh number and the Prandtl number, written as: Otherwise, for forced convection, the Nusselt number is generally a function of the Reynolds number and the Prandtl number, or Empirical correlations for a wide variety of geometries are available that express the Nusselt number in the aforementioned forms. See also Heat transfer coefficient#Convective_heat_transfer_correlations. Free convection Free convection at a vertical wall Cited as coming from Churchill and Chu: Free convection from horizontal plates If the characteristic length is defined where is the surface area of the plate and is its perimeter. Then for the top surface of a hot object in a colder environment or bottom surface of a cold object in a hotter environment And for the bottom surface of a hot object in a colder environment or top surface of a cold object in a hotter environment Free convection from enclosure heated from below Cited as coming from Bejan: This equation "holds when the horizontal layer is sufficiently wide so that the effect of the short vertical sides is minimal." It was empirically determined by Globe and Dropkin in 1959: "Tests were made in cylindrical containers having copper tops and bottoms and insulating walls." The containers used were around 5" in diameter and 2" high. Flat plate in laminar flow The local Nusselt number for laminar flow over a flat plate, at a distance downstream from the edge of the plate, is given by The average Nusselt number for laminar flow over a flat plate, from the edge of the plate to a downstream distance , is given by Sphere in convective flow In some applications, such as the evaporation of spherical liquid droplets in air, the following correlation is used: Forced convection in turbulent pipe flow Gnielinski correlation Gnielinski's correlation for turbulent flow in tubes: where f is the Darcy friction factor that can either be obtained from the Moody chart or for smooth tubes from correlation developed by Petukhov: The Gnielinski Correlation is valid for: Dittus–Boelter equation The Dittus–Boelter equation (for turbulent flow) as introduced by W.H. McAdams is an explicit function for calculating the Nusselt number. It is easy to solve but is less accurate when there is a large temperature difference across the fluid. It is tailored to smooth tubes, so use for rough tubes (most commercial applications) is cautioned. The Dittus–Boelter equation is: where: is the inside diameter of the circular duct is the Prandtl number for the fluid being heated, and for the fluid being cooled. The Dittus–Boelter equation is valid for The Dittus–Boelter equation is a good approximation where temperature differences between bulk fluid and heat transfer surface are minimal, avoiding equation complexity and iterative solving. Taking water with a bulk fluid average temperature of , viscosity and a heat transfer surface temperature of (viscosity , a viscosity correction factor for can be obtained as 1.45. 
This increases to 3.57 with a heat transfer surface temperature of (viscosity ), making a significant difference to the Nusselt number and the heat transfer coefficient. Sieder–Tate correlation The Sieder–Tate correlation for turbulent flow is an implicit function, as it analyzes the system as a nonlinear boundary value problem. The Sieder–Tate result can be more accurate as it takes into account the change in viscosity ( and ) due to temperature change between the bulk fluid average temperature and the heat transfer surface temperature, respectively. The Sieder–Tate correlation is normally solved by an iterative process, as the viscosity factor will change as the Nusselt number changes. where: is the fluid viscosity at the bulk fluid temperature is the fluid viscosity at the heat-transfer boundary surface temperature The Sieder–Tate correlation is valid for Forced convection in fully developed laminar pipe flow For fully developed internal laminar flow, the Nusselt numbers tend towards a constant value for long pipes. For internal flow: where: Dh = Hydraulic diameter kf = thermal conductivity of the fluid h = convective heat transfer coefficient Convection with uniform temperature for circular tubes From Incropera & DeWitt, OEIS sequence gives this value as . Convection with uniform heat flux for circular tubes For the case of constant surface heat flux, See also Sherwood number (mass transfer Nusselt number) Churchill–Bernstein equation Biot number Reynolds number Convective heat transfer Heat transfer coefficient Thermal conductivity References External links Simple derivation of the Nusselt number from Newton's law of cooling (Accessed 23 September 2009) Convection Dimensionless numbers of fluid mechanics Dimensionless numbers of thermodynamics Fluid dynamics Heat transfer
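The Dittus–Boelter equation discussed above is commonly quoted as Nu = 0.023·Re^0.8·Pr^n, with n = 0.4 for a fluid being heated and n = 0.3 for a fluid being cooled. The sketch below assumes that standard form; the flow conditions used are arbitrary illustrative values, not data from the article.

```python
def dittus_boelter(re: float, pr: float, heating: bool = True) -> float:
    """Dittus–Boelter estimate of Nu for fully developed turbulent pipe flow.

    Commonly quoted form (assumed here): Nu = 0.023 * Re**0.8 * Pr**n,
    with n = 0.4 when the fluid is heated and n = 0.3 when it is cooled.
    Intended for smooth tubes and modest wall-to-bulk temperature differences.
    """
    n = 0.4 if heating else 0.3
    return 0.023 * re**0.8 * pr**n

# Assumed, illustrative conditions: water-like fluid, Re = 50 000, Pr = 5.
nu = dittus_boelter(re=5.0e4, pr=5.0, heating=True)
print(f"Nu ≈ {nu:.0f}")   # a few hundred, typical of turbulent liquid flow
```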
Nusselt number
[ "Physics", "Chemistry", "Engineering" ]
1,792
[ "Transport phenomena", "Thermodynamic properties", "Physical phenomena", "Heat transfer", "Physical quantities", "Dimensionless numbers of thermodynamics", "Chemical engineering", "Convection", "Thermodynamics", "Piping", "Fluid dynamics" ]
58,287
https://en.wikipedia.org/wiki/Grashof%20number
In fluid mechanics (especially fluid thermodynamics), the Grashof number (Gr, after Franz Grashof) is a dimensionless number which approximates the ratio of the buoyancy to viscous forces acting on a fluid. It frequently arises in the study of situations involving natural convection and is analogous to the Reynolds number (Re). Definition Heat transfer Free convection is caused by a change in density of a fluid due to a temperature change or gradient. Usually the density decreases due to an increase in temperature and causes the fluid to rise. This motion is caused by the buoyancy force. The major force that resists the motion is the viscous force. The Grashof number is a way to quantify the opposing forces. The Grashof number is Gr_L = g β (T_s − T_∞) L³ / ν² for vertical flat plates, and Gr_D = g β (T_s − T_∞) D³ / ν² for pipes and bluff bodies, where: g is gravitational acceleration due to Earth, β is the coefficient of volume expansion (equal to approximately 1/T for ideal gases, with T the absolute temperature), T_s is the surface temperature, T_∞ is the bulk temperature, L is the vertical length, D is the diameter, and ν is the kinematic viscosity. The L and D subscripts indicate the length scale basis for the Grashof number. The transition to turbulent flow occurs in the range 10^8 < Gr_L < 10^9 for natural convection from vertical flat plates. At higher Grashof numbers, the boundary layer is turbulent; at lower Grashof numbers, that is, for Gr_L below this range, the boundary layer is laminar. Mass transfer There is an analogous form of the Grashof number used in cases of natural convection mass transfer problems. In the case of mass transfer, natural convection is caused by concentration gradients rather than temperature gradients: Gr_c = g β* (C_(a,s) − C_(a,a)) L³ / ν², where β* = −(1/ρ)(∂ρ/∂C_a) at constant temperature and pressure, and: g is gravitational acceleration due to Earth, C_(a,s) is the concentration of species a at the surface, C_(a,a) is the concentration of species a in the ambient medium, L is the characteristic length, ν is the kinematic viscosity, ρ is the fluid density, C_a is the concentration of species a, T is the temperature (constant), and p is the pressure (constant). Relationship to other dimensionless numbers The Rayleigh number, Ra = Gr · Pr, is a dimensionless number that characterizes convection problems in heat transfer. A critical value exists for the Rayleigh number, above which fluid motion occurs. The ratio of the Grashof number to the square of the Reynolds number may be used to determine if forced or free convection may be neglected for a system, or if there is a combination of the two. This characteristic ratio is known as the Richardson number (Ri). If the ratio is much less than one, then free convection may be ignored. If the ratio is much greater than one, forced convection may be ignored. Otherwise, the regime is combined forced and free convection. Derivation The first step to deriving the Grashof number is manipulating the volume expansion coefficient, β = (1/v)(∂v/∂T)_p. The v in the equation above, which represents specific volume, is not the same as the v in the subsequent sections of this derivation, which will represent a velocity. This partial relation of the volume expansion coefficient with respect to fluid density, given constant pressure, can be rewritten as β = −(1/ρ)(∂ρ/∂T)_p ≈ −(1/ρ)(ρ − ρ_o)/ΔT, where: ρ_o is the bulk fluid density, ρ is the boundary layer density, and ΔT = (T − T_o) is the temperature difference between boundary layer and bulk fluid. There are two different ways to find the Grashof number from this point. One involves the energy equation while the other incorporates the buoyant force due to the difference in density between the boundary layer and bulk fluid. Energy equation This discussion involving the energy equation is with respect to rotationally symmetric flow. 
This analysis will take into consideration the effect of gravitational acceleration on flow and heat transfer. The mathematical equations to follow apply both to rotational symmetric flow as well as two-dimensional planar flow. where: is the rotational direction, i.e. direction parallel to the surface is the tangential velocity, i.e. velocity parallel to the surface is the planar direction, i.e. direction normal to the surface is the normal velocity, i.e. velocity normal to the surface is the radius. In this equation the superscript is to differentiate between rotationally symmetric flow from planar flow. The following characteristics of this equation hold true. = 1: rotationally symmetric flow = 0: planar, two-dimensional flow is gravitational acceleration This equation expands to the following with the addition of physical fluid properties: From here we can further simplify the momentum equation by setting the bulk fluid velocity to 0 (). This relation shows that the pressure gradient is simply a product of the bulk fluid density and the gravitational acceleration. The next step is to plug in the pressure gradient into the momentum equation. where the volume expansion coefficient to density relationship found above and the kinematic viscosity relationship were substituted into the momentum equation. To find the Grashof number from this point, the preceding equation must be non-dimensionalized. This means that every variable in the equation should have no dimension and should instead be a ratio characteristic to the geometry and setup of the problem. This is done by dividing each variable by corresponding constant quantities. Lengths are divided by a characteristic length, . Velocities are divided by appropriate reference velocities, , which, considering the Reynolds number, gives . Temperatures are divided by the appropriate temperature difference, . These dimensionless parameters look like the following: , , , , and . The asterisks represent dimensionless parameter. Combining these dimensionless equations with the momentum equations gives the following simplified equation. where: is the surface temperature is the bulk fluid temperature is the characteristic length. The dimensionless parameter enclosed in the brackets in the preceding equation is known as the Grashof number: Buckingham π theorem Another form of dimensional analysis that will result in the Grashof number is known as the Buckingham π theorem. This method takes into account the buoyancy force per unit volume, due to the density difference in the boundary layer and the bulk fluid. This equation can be manipulated to give, The list of variables that are used in the Buckingham π method is listed below, along with their symbols and dimensions. With reference to the Buckingham π theorem there are dimensionless groups. Choose , , and as the reference variables. Thus the groups are as follows: , , , . Solving these groups gives: , , , From the two groups and the product forms the Grashof number: Taking and the preceding equation can be rendered as the same result from deriving the Grashof number from the energy equation. In forced convection the Reynolds number governs the fluid flow. But, in natural convection the Grashof number is the dimensionless parameter that governs the fluid flow. Using the energy equation and the buoyant force combined with dimensional analysis provides two different ways to derive the Grashof number. 
Physical reasoning It is also possible to derive the Grashof number from a physical definition of the number as a ratio of buoyancy to viscous effects; however, the resulting expression, especially its final right-hand-side factor, differs slightly from the Grashof number as it usually appears in the literature. A dimensionally correct scale written in terms of the dynamic viscosity can be used to obtain the final form, and substituting this scale into Gr recovers the standard expression. Physical reasoning is helpful for grasping the meaning of the number. Alternatively, the associated velocity scale can be used as a characteristic velocity for making certain velocities nondimensional. Effects of Grashof number on the flow of different fluids Recent research has examined the effects of the Grashof number on the flow of different fluids driven by convection over various surfaces. Using the slope of a linear regression line through the data points, it is concluded that an increase in the value of the Grashof number, or of any buoyancy-related parameter, implies an increase in the wall temperature; this weakens the bonds within the fluid, decreases the strength of the internal friction, and makes gravity relatively more influential (i.e. it makes the specific weight appreciably different between the fluid layers immediately adjacent to the wall). The effects of the buoyancy parameter are highly significant for laminar flow within the boundary layer formed on a vertically moving cylinder. This holds when the prescribed surface temperature (PST) and prescribed wall heat flux (WHF) conditions are considered. The buoyancy parameter has a negligible positive effect on the local Nusselt number, but only when the magnitude of the Prandtl number is small or the prescribed wall heat flux (WHF) condition is considered. The Sherwood number, Bejan number, entropy generation, Stanton number, and pressure gradient are increasing functions of the buoyancy-related parameter, while the concentration profiles, frictional force, and motile microorganism density are decreasing functions. Notes References Further reading Buoyancy Convection Dimensionless numbers of fluid mechanics Dimensionless numbers of thermodynamics Fluid dynamics Heat transfer
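A brief sketch of the vertical-plate definition Gr_L = g β (T_s − T_∞) L³ / ν² and of the Gr/Re² (Richardson number) criterion mentioned earlier in the article; the air properties and the factor-of-ten cutoff used below are illustrative assumptions, not standard values.

```python
def grashof_number(g: float, beta: float, t_s: float, t_inf: float,
                   length: float, nu: float) -> float:
    """Gr_L = g * beta * (T_s - T_inf) * L**3 / nu**2 (all SI units)."""
    return g * beta * (t_s - t_inf) * length**3 / nu**2

def convection_regime(gr: float, re: float, cutoff: float = 10.0) -> str:
    """Classify the regime from the Richardson number Ri = Gr / Re**2.

    Ri much less than one  -> free convection negligible (forced dominates)
    Ri much greater than one -> forced convection negligible (free dominates)
    otherwise -> combined forced and free convection
    The factor-of-ten cutoff is an arbitrary illustrative choice.
    """
    ri = gr / re**2
    if ri < 1.0 / cutoff:
        return f"Ri = {ri:.3g}: forced convection dominates"
    if ri > cutoff:
        return f"Ri = {ri:.3g}: free convection dominates"
    return f"Ri = {ri:.3g}: combined forced and free convection"

# Rough, assumed values: air near 300 K (beta ≈ 1/T for an ideal gas),
# a 0.5 m tall vertical plate 20 K hotter than its surroundings.
gr = grashof_number(g=9.81, beta=1.0 / 300.0, t_s=320.0, t_inf=300.0,
                    length=0.5, nu=1.6e-5)
print(f"Gr_L ≈ {gr:.2e}")                # a few times 1e8
print(convection_regime(gr, re=2.0e5))   # strong external flow: forced dominates
print(convection_regime(gr, re=5.0e3))   # weak external flow: free dominates
```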
Grashof number
[ "Physics", "Chemistry", "Engineering" ]
1,767
[ "Transport phenomena", "Thermodynamic properties", "Physical phenomena", "Heat transfer", "Physical quantities", "Dimensionless numbers of thermodynamics", "Chemical engineering", "Convection", "Thermodynamics", "Piping", "Fluid dynamics" ]
58,296
https://en.wikipedia.org/wiki/California%20gold%20rush
The California gold rush (1848–1855) was a gold rush that began on January 24, 1848, when gold was found by James W. Marshall at Sutter's Mill in Coloma, California. The news of gold brought approximately 300,000 people to California from the rest of the United States and abroad. The sudden influx of gold into the money supply reinvigorated the American economy; the sudden population increase allowed California to go rapidly to statehood in the Compromise of 1850. The gold rush had severe effects on Native Californians and accelerated the Native American population's decline from disease, starvation, and the California genocide. The effects of the gold rush were substantial. Whole indigenous societies were attacked and pushed off their lands by the gold-seekers, called "forty-niners" (referring to 1849, the peak year for gold rush immigration). Outside of California, the first to arrive were from Oregon, the Sandwich Islands (Hawaii), and Latin America in late 1848. Of the approximately 300,000 people who came to California during the gold rush, about half arrived by sea and half came overland on the California Trail and the California Road; forty-niners often faced substantial hardships on the trip. While most of the newly arrived were Americans, the gold rush attracted thousands from Latin America, Europe, Australia, and China. Agriculture and ranching expanded throughout the state to meet the needs of the settlers. San Francisco grew from a small settlement of about 200 residents in 1846 to a boomtown of about 36,000 by 1852. Roads, churches, schools and other towns were built throughout California. In 1849, a state constitution was written. The new constitution was adopted by referendum vote; the future state's interim first governor and legislature were chosen. In September 1850, California became a state. At the beginning of the gold rush, there was no law regarding property rights in the goldfields and a system of "staking claims" was developed. Prospectors retrieved the gold from streams and riverbeds using simple techniques, such as panning. Although mining caused environmental harm, more sophisticated methods of gold recovery were developed and later adopted around the world. New methods of transportation developed as steamships came into regular service. By 1869, railroads were built from California to the eastern United States. At its peak, technological advances reached a point where significant financing was required, increasing the proportion of gold companies to individual miners. Gold worth tens of billions of today's US dollars was recovered, which led to great wealth for a few, though many who participated in the California gold rush earned little more than they had started with. History Earlier discoveries Gold was discovered in California as early as March 9, 1842, at Rancho San Francisco, in the mountains north of present-day Los Angeles. Californian native Francisco Lopez was searching for stray horses and stopped on the bank of a small creek (in today's Placerita Canyon), about east of present-day Newhall, and about northwest of Los Angeles. While the horses grazed, Lopez dug up some wild onions and found a small gold nugget in the roots among the bulbs. He looked further and found more gold. Lopez took the gold to authorities who confirmed its worth. Lopez and others began to search for other streambeds with gold deposits in the area. They found several in the northeastern section of the forest, within present-day Ventura County. 
In November, some of the gold was sent to the U.S. Mint, although otherwise attracted little notice. In 1843, Lopez found gold in San Feliciano Canyon near his first discovery. Mexican miners from Sonora worked the placer deposits until 1846. Minor finds of gold in California were also made by Mission Indians prior to 1848. The friars instructed them to keep its location secret to avoid a gold rush. Marshall's discovery In January 1847, nine months into the Mexican–American War, the Treaty of Cahuenga was signed, leading to the resolution of the military conflict in Alta California (Upper California). On January 24, 1848, James W. Marshall found shiny metal in the tailrace of a lumber mill he was building for Sacramento pioneer John Sutter—known as Sutter's Mill, near Coloma on the American River. Marshall brought what he found to Sutter, and the two privately tested the metal. After the tests showed that it was gold, Sutter expressed dismay, wanting to keep the news quiet because he feared what would happen to his plans for an agricultural empire if there were a gold rush in the region. The Mexican–American War ended on May 30 with the ratification of the Treaty of Guadalupe Hidalgo, which formally transferred California to the United States. Having sworn all concerned at the mill to secrecy, in February 1848, Sutter sent Charles Bennett to Monterey to meet with Colonel Mason, the chief U.S. official in California, to secure the mineral rights of the land where the mill stood. Bennett was not to tell anyone of the discovery of gold, but when he stopped at Benicia, he heard talk about the discovery of coal near Mount Diablo, and he blurted out the discovery of gold. He continued to San Francisco, where again, he could not keep the secret. At Monterey, Mason declined to make any judgement of title to lands and mineral rights, and Bennett for the third time revealed the gold discovery. By March 1848, rumors of the discovery were confirmed by San Francisco newspaper publisher and merchant Samuel Brannan. Brannan hurriedly set up a store to sell gold prospecting supplies, and he walked through the streets of San Francisco, holding aloft a vial of gold, shouting "Gold! Gold! Gold from the American River!" On August 19, 1848, the New York Herald was the first major newspaper on the East Coast to report the discovery of gold. On December 5, 1848, US President James K. Polk confirmed the discovery of gold in an address to Congress. As a result, individuals seeking to benefit from the gold rush—later called the "forty-niners"—began moving to the Gold Country of California or "Mother Lode" from other countries and from other parts of the United States. As Sutter had feared, his business plans were ruined after his workers left in search of gold, and squatters took over his land and stole his crops and cattle. San Francisco had been a tiny settlement before the rush began. When residents learned about the discovery, it at first became a ghost town of abandoned ships and businesses, but then boomed as merchants and new people arrived. The population of San Francisco increased quickly from about 1,000 in 1848 to 25,000 full-time residents by 1850. Miners lived in tents, wood shanties, or deck cabins removed from abandoned ships. There were no churches or religious services in the rapidly growing city, which prompted missionaries like William Taylor to meet the need, where he held services in the street, using a barrel head as his pulpit. 
Crowds would gather to listen to his sermons, and before long he received enough generous donations from successful gold miners and built San Francisco's first church. Transportation and supplies In what has been referred to as the "first world-class gold rush," there was no easy way to get to California; forty-niners faced hardship and often death on the way. At first, most s, as they were also known, traveled by sea. From the East Coast, a sailing voyage around the tip of South America would take four to five months, and cover approximately . An alternative was to sail to the Atlantic side of the Isthmus of Panama, take canoes and mules for a week through the jungle, and then on the Pacific side, wait for a ship sailing for San Francisco. There was also a route across Mexico starting at Veracruz. The companies providing such transportation created vast wealth among their owners and included the U.S. Mail Steamship Company, the federally subsidized Pacific Mail Steamship Company, and the Accessory Transit Company. Many gold-seekers took the overland route across the continental United States, particularly along the California Trail. Each of these routes had its own deadly hazards, from shipwreck to typhoid fever and cholera. In the early years of the rush, much of the population growth in the San Francisco area was due to steamship travel from New York City through overland portages in Nicaragua and Panama and then back up by steamship to San Francisco. While traveling, many steamships from the eastern seaboard required the passengers to bring kits, which were typically full of personal belongings such as clothes, guidebooks, tools, etc. In addition to personal belongings, Argonauts were required to bring barrels full of beef, biscuits, butter, pork, rice, and salt. While on the steamships, travelers could talk to each other, smoke, fish, and engage in other activities depending on the ship they traveled. Still, the dominant activity held throughout the steamships was gambling, which was ironic because segregation between wealth gaps was prominent throughout the ships. Everything was segregated between the rich vs. the poor. There were different levels of travel one could pay for to get to California. The cheaper steamships tended to have longer routes. In contrast, the more expensive would get passengers to California quicker. There were clear social and economic distinctions between those who traveled together, being that those who spent more money would receive accommodations that others were not allowed. They would do this with the clear intent to distinguish their higher class power over those that could not afford those accommodations. Supply ships arrived in San Francisco with goods to supply the needs of the growing population. When hundreds of ships were abandoned after their crews deserted to go into the goldfields, many ships were converted to warehouses, stores, taverns, hotels, and one into a jail. As the city expanded and new places were needed on which to build, many ships were destroyed and used as landfill. Other developments Within a few years, there was an important but lesser-known surge of prospectors into far Northern California, specifically into present-day Siskiyou, Shasta and Trinity Counties. Discovery of gold nuggets at the site of present-day Yreka in 1851 brought thousands of gold-seekers up the Siskiyou Trail and throughout California's northern counties. 
Settlements of the gold rush era, such as Portuguese Flat on the Sacramento River, sprang into existence and then faded. The Gold Rush town of Weaverville on the Trinity River today retains the oldest continuously used Taoist temple in California, a legacy of Chinese miners who came. While there are not many Gold Rush era ghost towns still in existence, the remains of the once-bustling town of Shasta have been preserved in a California State Historic Park in Northern California. By 1850, most of the easily accessible gold had been collected, and attention turned to extracting gold from more difficult locations. Faced with gold increasingly difficult to retrieve, Americans began to drive out foreigners to get at the most accessible gold that remained. The new California State Legislature passed a foreign miners tax of twenty dollars per month ($ per month as of ), and American prospectors began organized attacks on foreign miners, particularly Latin Americans and Chinese. In addition, the huge numbers of newcomers were driving Native Americans out of their traditional hunting, fishing and food-gathering areas. To protect their homes and livelihood, some Native Americans responded by attacking the miners. This provoked counter-attacks on native villages. The Native Americans, out-gunned, were often slaughtered. Those who escaped massacres were many times unable to survive without access to their food-gathering areas, and they starved to death. Novelist and poet Joaquin Miller vividly captured one such attack in his semi-autobiographical work, Life Amongst the Modocs. Forty-niners The first people to rush to the goldfields, beginning in the spring of 1848, were the residents of California themselves—primarily agriculturally oriented Americans and Europeans living in Northern California, along with Native Californians and some Californios (Spanish-speaking Californians; at the time, commonly referred to in English as simply 'Californians'). These first miners tended to be families in which everyone helped in the effort. Women and children of all ethnicities were often found panning next to the men. Some enterprising families set up boarding houses to accommodate the influx of men; in such cases, the women often brought in steady income while their husbands searched for gold. Word of the gold rush spread slowly at first. The earliest gold-seekers were people who lived near California or people who heard the news from ships on the fastest sailing routes from California. The first large group of Americans to arrive were several thousand Oregonians who came down the Siskiyou Trail. Next came people from the Sandwich Islands, and several thousand Latin Americans, including people from Mexico, from Peru and from as far away as Chile, both by ship and overland. By the end of 1848, some 6,000 Argonauts had come to California. Only a small number (probably fewer than 500) traveled overland from the United States that year. Some of these "forty-eighters", as the earliest gold-seekers were sometimes called, were able to collect large amounts of easily accessible gold—in some cases, thousands of dollars worth each day. Even ordinary prospectors averaged daily gold finds worth 10 to 15 times the daily wage of a laborer on the East Coast. A person could work for six months in the goldfields and find the equivalent of six years' wages back home. Some hoped to get rich quick and return home, and others wished to start businesses in California. 
By the beginning of 1849, word of the gold rush had spread around the world, and an overwhelming number of gold-seekers and merchants began to arrive from virtually every continent. The largest group of forty-niners in 1849 were Americans, arriving by the tens of thousands overland across the continent and along various sailing routes (the name "forty-niner" was derived from the year 1849). Many from the East Coast negotiated a crossing of the Appalachian Mountains, taking to riverboats in Pennsylvania, poling the keelboats to Missouri River wagon train assembly ports, and then traveling in a wagon train along the California Trail. Many others came by way of the Isthmus of Panama and the steamships of the Pacific Mail Steamship Company. Australians and New Zealanders picked up the news from ships carrying Hawaiian newspapers, and thousands, infected with "gold fever", boarded ships for California. Forty-niners came from Latin America, particularly from the Mexican mining districts near Sonora and Chile. Gold-seekers and merchants from Asia, primarily from China, began arriving in 1849, at first in modest numbers to Gum San ("Gold Mountain"), the name given to California in Chinese. The first immigrants from Europe, reeling from the effects of the Revolutions of 1848 and with a longer distance to travel, began arriving in late 1849, mostly from France, with some Germans, Italians, and Britons. It is estimated that approximately 90,000 people arrived in California in 1849—about half by land and half by sea. Of these, perhaps 50,000 to 60,000 were Americans, and the rest were from other countries. By 1855, it is estimated at least 300,000 gold-seekers, merchants, and other immigrants had arrived in California from around the world. The largest group continued to be Americans, but there were tens of thousands each of Mexicans, Chinese, Britons, Australians, French, and Latin Americans, together with many smaller groups of miners, such as African Americans, Filipinos, Basques and Turks. People from small villages in the hills near Genoa, Italy were among the first to settle permanently in the Sierra Nevada foothills; they brought with them traditional agricultural skills, developed to survive cold winters. A modest number of miners of African ancestry (probably less than 4,000) had come from the Southern States, the Caribbean and Brazil. A number of immigrants were from China. Several hundred Chinese arrived in California in 1849 and 1850, and in 1852 more than 20,000 landed in San Francisco. Their distinctive dress and appearance was highly recognizable in the goldfields. Chinese miners suffered enormously, enduring violent racism from white miners who aimed their frustrations at foreigners. Further animosity toward the Chinese led to legislation such as the Chinese Exclusion Act and Foreign Miners Tax. There were also women in the gold rush. However, their numbers were small. Of the 40,000 people who arrived by ship to the San Francisco Bay in 1849, only 700 were women (including those who were poor, wealthy, entrepreneurs, prostitutes, single, and married). They were of various ethnicities including Anglo-American, African-American, Hispanic, Native, European, Chinese, and Jewish. The reasons they came varied: some came with their husbands, refusing to be left behind to fend for themselves, some came because their husbands sent for them, and others came (singles and widows) for the adventure and economic opportunities. 
On the trail many people died from accidents, cholera, fever, and myriad other causes, and many women became widows before even setting eyes on California. While in California, women became widows quite frequently due to mining accidents, disease, or mining disputes of their husbands. Life in the goldfields offered opportunities for women to break from their traditional work. Because of many thousands of people flooding into California at Sacramento and San Francisco and surrounding areas, the Methodist church deemed it necessary to send missionaries there to preach the gospel, as churches in that part of the state were not to be found. The first missionary to arrive was William Taylor who arrived in San Francisco in September 1849. For many months he preached in the streets to hundreds of people without salary, and ultimately after saving often generous donations from successful miners, he built and established the first Methodist church in California, and California's first professional hospital. Legal rights When the Gold Rush began, the California goldfields were peculiarly lawless places. When gold was discovered at Sutter's Mill, California was still technically part of Mexico, under American military occupation as the result of the Mexican–American War. With the signing of the treaty ending the war on February 2, 1848, California became a possession of the United States, but it was not a formal "territory" and did not become a state until September 9, 1850. California existed in the unusual condition of a region under military control. There was no civil legislature, executive or judicial body for the entire region. Local residents operated under a confusing and changing mixture of Mexican rules, American principles, and personal dictates. Lax enforcement of federal laws, such as the Fugitive Slave Act of 1850, encouraged the arrival of free blacks and escaped slaves. While the treaty ending the Mexican–American War obliged the United States to honor Mexican land grants, almost all the goldfields were outside those grants. Instead, the goldfields were primarily on "public land", meaning land formally owned by the United States government. However, there were no legal rules yet in place, and no practical enforcement mechanisms. The benefit to the forty-niners was that the gold was simply "free for the taking" at first. In the goldfields at the beginning, there was no private property, no licensing fees, and no taxes. The miners informally adapted Mexican mining law that had existed in California. For example, the rules attempted to balance the rights of early arrivers at a site with later arrivers; a "claim" could be "staked" by a prospector, but that claim was valid only as long as it was being actively worked. Miners worked at a claim only long enough to determine its potential. If a claim was deemed as low-value—as most were—miners would abandon the site in search of a better one. In the case where a claim was abandoned or not worked upon, other miners would "claim-jump" the land. "Claim-jumping" meant that a miner began work on a previously claimed site. Disputes were often handled personally and violently, and were sometimes addressed by groups of prospectors acting as arbitrators. This often led to heightened ethnic tensions. In some areas the influx of many prospectors could lead to a reduction of the existing claim size by simple pressure. 
Development of gold-recovery techniques Approximately four hundred million years ago, California lay at the bottom of a large sea; underwater volcanoes deposited lava and minerals (including gold) onto the sea floor. By tectonic forces these minerals and rocks came to the surface of the Sierra Nevada, and eroded. Water carried the exposed gold downstream and deposited it in quiet gravel beds along the sides of old rivers and streams.</ref> The forty-niners first focused their efforts on these deposits of gold. Because the gold in the California gravel beds was so richly concentrated, early forty-niners were able to retrieve loose gold flakes and nuggets with their hands, or simply "pan" for gold in rivers and streams. Panning cannot take place on a large scale, and industrious miners and groups of miners graduated to placer mining, using "cradles" and "rockers" or "long-toms" to process larger volumes of gravel. Miners would also engage in "coyoteing", a method that involved digging a shaft deep into placer deposits along a stream. Tunnels were then dug in all directions to reach the richest veins of pay dirt. In the most complex placer mining, groups of prospectors would divert the water from an entire river into a sluice alongside the river and then dig for gold in the newly exposed river bottom. Modern estimates are that as much as 12 million ounces (370 t) of gold were removed in the first five years of the Gold Rush. In the next stage, by 1853, hydraulic mining was used on ancient gold-bearing gravel beds on hillsides and bluffs in the goldfields. In a modern style of hydraulic mining first developed in California, and later used around the world, a high-pressure hose directed a powerful stream or jet of water at gold-bearing gravel beds. The loosened gravel and gold would then pass over sluices, with the gold settling to the bottom where it was collected. By the mid-1880s, it is estimated that of gold (worth approximately US$15 billion at December 2010 prices) had been recovered by hydraulic mining. A byproduct of these extraction methods was that large amounts of gravel, silt, heavy metals, and other pollutants went into streams and rivers. Court rulings (1882 Gold Run and 1884 "Sawyer Act") and 1893 federal legislation limited hydraulic mining in California. many areas still bear the scars of hydraulic mining, since the resulting exposed earth and downstream gravel deposits do not support plant life. After the gold rush had concluded, gold recovery operations continued. The final stage to recover loose gold was to prospect for gold that had slowly washed down into the flat river bottoms and sandbars of California's Central Valley and other gold-bearing areas of California (such as Scott Valley in Siskiyou County). By the late 1890s, dredging technology (also invented in California) had become economical, and it is estimated that more than were recovered by dredging. Both during the gold rush and in the decades that followed, gold-seekers also engaged in "hard-rock" mining, extracting the gold directly from the rock that contained it (typically quartz), usually by digging and blasting to follow and remove veins of the gold-bearing quartz. Once the gold-bearing rocks were brought to the surface, the rocks were crushed and the gold separated, either using separation in water, using its density difference from quartz sand, or by washing the sand over copper plates coated with mercury (with which gold forms an amalgam). 
Loss of mercury in the amalgamation process was a source of environmental contamination. Eventually, hard-rock mining became the single largest source of gold produced in the Gold Country. The total production of gold in California from then until now is estimated at . Image gallery Profits Recent scholarship confirms that merchants made far more money than miners during the gold rush. The wealthiest man in California during the early years of the rush was Samuel Brannan, a tireless self-promoter, shopkeeper and newspaper publisher. Brannan opened the first supply stores in Sacramento, Coloma, and other spots in the goldfields. Just as the rush began, he purchased all the prospecting supplies available in San Francisco and resold them at a substantial profit. Some gold-seekers made a significant amount of money. On average, half the gold-seekers made a modest profit, after taking all expenses into account; economic historians have suggested that white miners were more successful than black, Indian, or Chinese miners. However, taxes such as the California foreign miners tax passed in 1851, targeted mainly Latino miners and kept them from making as much money as whites, who did not have any taxes imposed on them. In California most late arrivals made little or wound up losing money. Similarly, many unlucky merchants set up in settlements that disappeared, or which succumbed to one of the calamitous fires that swept the towns that sprang up. By contrast, a businessman who went on to great success was Levi Strauss, who first began selling denim overalls in San Francisco in 1853. Other businessmen reaped great rewards in retail, shipping, entertainment, lodging, or transportation. Boardinghouses, food preparation, sewing, and laundry were highly profitable businesses often run by women (married, single, or widowed) who realized men would pay well for a service done by a woman. Brothels also brought in large profits, especially when combined with saloons and gaming houses. By 1855, the economic climate had changed dramatically. Gold could be retrieved profitably from the goldfields only by medium to large groups of workers, either in partnerships or as employees. By the mid-1850s, it was the owners of these gold-mining companies who made the money. Also, the population and economy of California had become large and diverse enough that money could be made in a wide variety of conventional businesses. Path of the gold Once extracted, the gold itself took many paths. First, much of the gold was used locally to purchase food, supplies and lodging for the miners. It also went towards entertainment, which consisted of anything from a traveling theater to alcohol, gambling, and prostitutes. These transactions often took place using the recently recovered gold, carefully weighed out. These merchants and vendors, in turn, used the gold to purchase supplies from ship captains or packers bringing goods to California. The gold then left California aboard ships or mules to go to the makers of the goods from around the world. A second path was the Argonauts themselves who, having personally acquired a sufficient amount, sent the gold home, or returned home taking with them their hard-earned "diggings". For example, one estimate is that some US$80 million worth of California gold (equivalent to US$ billion today) was sent to France by French prospectors and merchants. A majority of the gold went back to New York City brokerage houses. 
As the gold rush progressed, local banks and gold dealers issued "banknotes" or "drafts"—locally accepted paper currency—in exchange for gold, and private mints created private gold coins. With the building of the San Francisco Mint in 1854, gold bullion was turned into official United States gold coins for circulation. The gold was also later sent by California banks to U.S. national banks in exchange for national paper currency to be used in the booming California economy. Effects The arrival of hundreds of thousands of new people in California within a few years, compared to a population of some 15,000 Europeans and Californios beforehand, had many dramatic effects. A 2017 study attributes the record-long economic expansion of the United States in the recession-free period of 1841–1856 primarily to "a boom in transportation-goods investment following the discovery of gold in California." Government and commerce The gold rush propelled California from a sleepy, little-known backwater to a center of the global imagination and the destination of hundreds of thousands of people. The new immigrants often showed remarkable inventiveness and civic mindedness. For example, in the midst of the gold rush, towns and cities were chartered, a state constitutional convention was convened, a state constitution written, elections held, and representatives sent to Washington, D.C., to negotiate the admission of California as a state. Large-scale agriculture (California's second "Gold Rush") began during this time. Roads, schools, churches, and civic organizations quickly came into existence. The vast majority of the immigrants were Americans. Pressure grew for better communications and political connections to the rest of the United States, leading to statehood for California on September 9, 1850, in the Compromise of 1850 as the state of the United States. Between 1847 and 1870, the population of San Francisco increased from 500 to 150,000. The Gold Rush wealth and population increase led to significantly improved transportation between California and the East Coast. The Panama Railway, spanning the Isthmus of Panama, was finished in 1855. Steamships, including those owned by the Pacific Mail Steamship Company, began regular service from San Francisco to Panama, where passengers, goods and mail would take the train across the Isthmus and board steamships headed to the East Coast. One ill-fated journey, that of the S.S. Central America, ended in disaster as the ship sank in a hurricane off the coast of the Carolinas in 1857, with approximately three tons of California gold aboard. Native Americans The human and environmental costs of the Gold Rush were substantial. Native Americans, dependent on traditional hunting, gathering and agriculture, became the victims of starvation and disease, as gravel, silt and toxic chemicals from prospecting operations killed fish and destroyed habitats. The surge in the mining population also resulted in the disappearance of game and food gathering locales as gold camps and other settlements were built amidst them. Later farming spread to supply the settlers' camps, taking more land away from the Native Americans. In some areas, systematic attacks against tribespeople in or near mining districts occurred. Various conflicts were fought between natives and settlers. Miners often saw Native Americans as impediments to their mining activities. 
Ed Allen, interpretive lead for Marshall Gold Discovery State Historic Park, reported that there were times when miners would kill up to 50 or more Natives in one day. Retribution attacks on solitary miners could result in larger scale attacks against Native populations, at times tribes or villages not involved in the original act. During the 1852 Bridge Gulch Massacre, a group of settlers attacked a band of Wintu Indians in response to the killing of a citizen named J. R. Anderson. After his killing, the sheriff led a group of men to track down the Indians, whom the men then attacked, killing more than 150 Wintu people. Only three children survived the massacre that was against a different band of Wintu than the one that had killed Anderson. Historian Benjamin Madley recorded the numbers of killings of California Indians between 1846 and 1873 and estimated that during this period at least 9,400 to 16,000 California Indians were killed by non-Indians, mostly occurring in more than 370 massacres (defined as the "intentional killing of five or more disarmed combatants or largely unarmed noncombatants, including women, children, and prisoners, whether in the context of a battle or otherwise"). According to demographer Russell Thornton, between 1849 and 1890, the Indigenous population of California fell below 20,000 – primarily because of the killings. According to the government of California, some 4,500 Native Americans suffered violent deaths between 1849 and 1870. Furthermore, California stood in opposition of ratifying the eighteen treaties signed between tribal leaders and federal agents in 1851. The state government, in support of miner activities, funded and supported death squads, appropriating over 1 million dollars towards the funding and operation of the paramilitary organizations. Peter Burnett, California's first governor declared that California was a battleground between the races and that there were only two options towards California Indians, extermination or removal. "That a war of extermination will continue to be waged between the two races until the Indian race becomes extinct, must be expected. While we cannot anticipate the result with but painful regret, the inevitable destiny of the race is beyond the power and wisdom of man to avert." For Burnett, like many of his contemporaries, the genocide was part of God's plan, and it was necessary for Burnett's constituency to move forward in California. The Act for the Government and Protection of Indians, passed on April 22, 1850, by the California Legislature, allowed settlers to capture and use Native people as bonded workers, prohibited Native peoples' testimony against settlers, and allowed the adoption of Native children by settlers, often for labor purposes. After the initial boom had ended, explicitly anti-foreign and racist attacks, laws and confiscatory taxes sought to drive out foreigners—in addition to Native Americans—from the mines, especially the Chinese and Latin American immigrants mostly from Sonora, Mexico, and Chile. The toll on the American immigrants was severe as well: one in twelve forty-niners perished, as the death and crime rates during the Gold Rush were extraordinarily high, and the resulting vigilantism also took its toll. World-wide economic stimulation The gold rush stimulated economies around the world as well. Farmers in Chile, Australia, and Hawaii found a huge new market for their food; British manufactured goods were in high demand; clothing and even prefabricated houses arrived from China. 
The return of large amounts of California gold to pay for these goods raised prices and stimulated investment and the creation of jobs around the world. Australian prospector Edward Hargraves, noting similarities between the geography of California and his home country, returned to Australia to discover gold and spark the Australian gold rushes. Preceding the gold rush, the United States was on a bi-metallic standard, but the sudden increase in physical gold supply increased the relative value of physical silver and drove silver money from circulation. The increase in gold supply also created a monetary supply shock. Within a few years after the end of the gold rush, in 1863, the groundbreaking ceremony for the western leg of the First transcontinental railroad was held in Sacramento. The line's completion, some six years later, financed in part with Gold Rush money, united California with the central and eastern United States. Travel that had taken weeks or even months could now be accomplished in days. Gender practices As the California gold rush brought a disproportionate population of men and set an environment of experimental lawlessness separate from the bounds of standard society, conventional American gender roles came into question. In the large absence of women, these migrant young men were made to reorganize their social and sexual practices, leading to cross-gender practices that most often took place as cross-dressing. Dance events were a notable social space for cross-dressing, where a piece of cloth (such as a handkerchief or sackcloth patch) would denote a 'woman.' Beyond social events, these subverted gender expectations continued into domestic duties as well. Though cross-dressing occurred most frequently with men as women, the reverse also applied. These miners and merchants of various genders and gendered appearances, encouraged by the social fluidity and population limitations of the Wild West, shaped the beginnings of San Francisco's prominent queer history. Longer-term California's name became indelibly connected with the gold rush, and fast success in a new world became known as the "California Dream". California was perceived as a place of new beginnings, where great wealth could reward hard work and good luck. Historian H. W. Brands noted that in the years after the Gold Rush, the California Dream spread across the nation: Overnight California gained the international reputation as the "golden state". Generations of immigrants have been attracted by the California Dream. California farmers, oil drillers, movie makers, airplane builders, computer and microchip makers, and "dot-com" entrepreneurs have each had their boom times in the decades after the Gold Rush. In addition, the standard route shield of state highways in California is in the shape of a miner's spade to honor the California gold rush. Today, the aptly named State Route 49 travels through the Sierra Nevada foothills, connecting many Gold Rush-era towns such as Placerville, Auburn, Grass Valley, Nevada City, Coloma, Jackson, and Sonora. This state highway also passes very near Columbia State Historic Park, a protected area encompassing the historic business district of the town of Columbia; the park has preserved many gold rush-era buildings, which are presently occupied by tourist-oriented businesses. 
Cultural references The literary history of the gold rush is reflected in the works of Mark Twain (The Celebrated Jumping Frog of Calaveras County), Bret Harte (A Millionaire of Rough-and-Ready), Joaquin Miller (Life Amongst the Modocs), and many others. The San Francisco 49ers, a professional American football team based in the San Francisco Bay Area and competing in the National Football League, are named after miners. See also Barbary Coast California Mining and Mineral Museum Pike's Peak gold rush Doré bar Gold in California References Footnotes Citations Works cited Further reading Ngai, Mae. The Chinese Question: The Gold Rushes and Global Politics (2021), Mid 19c in California, Australia and South Africa online edition Witschi, N. S. (2004). "Bret Harte." Oxford Encyclopedia of American Literature. Ed. Jay Parini. New York: Oxford University Press. 154–157. Maps Ord, Edward Otho Cresap, Topographical sketch of the gold & quicksilver district of California, 1848. from loc.gov accessed October 4, 2018. Lawson's Map from Actual Survey of the Gold, Silver & Quicksilver Regions of Upper California Exhibiting the Mines, Diggings, Roads, Paths, Houses, Mills, Missions &c. &c by J.T. Lawson, Esq. Cala. ... New York, 1849. from raremaps.com accessed October 4, 2018. Lawson's map of the Gold Regions is the first map to accurately depict California's Gold Regions. Issued in January 1849, at the beginning of the California gold rush, Lawson's map was produced specifically for prospectors and miners. A Correct Map of the Bay of San Francisco and the Gold Region from actual Survey June 20th. 1849 for J.J. Jarves. Embracing all the New Towns, Ranchos, Roads, Dry and Wet Diggings, with their several distances from each other, James Munroe & Co. of Boston, 1849 from raremaps.com accessed October 4, 2018. One of the earliest maps of the gold region made from personal observation, Jarves' map states on it that it was the result of a survey of the diggings made for him on June 20, 1849. George Derby, Sketch of General Riley's Route Through the Mining Districts July and Aug., J. McH. Hollingsworth, New York, 1849 from raremaps.com accessed October 4, 2018. The Sacramento Valley from The American River to Butte Creek, Surveyed & Drawn by Order of Gen.l Riley ... by Lt. George H. Derby,... September & October 1849, Washington, 1849 from raremaps.com accessed October 4, 2018. Map by Lt. George H. Derby, from Tyson's Information in Relation to the Geology and Topography of California. Jackson, William A., Map of the mining district of California, Lambert & Lane's Lith., 1850. from loc.gov accessed October 4, 2018. Map of the Gold Region of California taken from a recent survey By Robert H. Ellis 1850 (with early manuscript annotations), George F. Nesbitt, Lith., New York, 1850 from raremaps.com accessed October 4, 2018. A later 1850 map showing the growing settlement in the goldfields and in that vicinity of the state. Map of North America during the California gold rush at omniatlas.com External links California gold rush chronology at The Virtual Museum of the City of San Francisco University of California, Berkeley, Bancroft Library Lewis B. Rush diary, diary of a gold rush miner, MSS SC 161 at L. Tom Perry Special Collections, Harold B. Lee Library, Brigham Young University Gold Rush Collection. Yale Collection of Western Americana, Beinecke Rare Book and Manuscript Library. 
1848 in California 1849 in California 1850s in California History of mining in the United States History of United States expansionism Maritime history of California Hydraulic engineering 1840s in economic history 1850s in economic history
California gold rush
[ "Physics", "Engineering", "Environmental_science" ]
8,321
[ "Hydrology", "Physical systems", "Hydraulics", "Civil engineering", "Hydraulic engineering" ]
58,319
https://en.wikipedia.org/wiki/Serial%20number
A serial number is a unique identifier assigned to an item, usually incrementally or sequentially, in order to distinguish it from other items of the same kind. Despite being called serial "numbers", they do not need to be strictly numerical and may contain letters and other typographical symbols, or may consist entirely of a character string. Applications of serial numbering Serial numbers identify otherwise identical individual units, thereby serving various practical uses. Serial numbers are a deterrent against theft and counterfeit products, as they can be recorded, and stolen or otherwise irregular goods can be identified. Banknotes and other transferable documents of value bear serial numbers to assist in preventing counterfeiting and tracing stolen ones. They are valuable in quality control, as once a defect is found in the production of a particular batch of product, the serial number will identify which units are affected. Items that commonly carry serial numbers include automobiles, firearms, electronics, and appliances. Smartphones and other smart devices In smartphones, serial numbers are extended to the integrated components in addition to the electronic device as a whole, a practice also known as serialization. This gives individual parts such as the screen, battery, chip and camera a separate serial number, which the device's software queries before releasing the part for use. This practice by manufacturers limits the repair and interchange of parts in such electronic devices. Serial numbers for intangible goods Serial numbers may be used to identify individual physical or intangible objects; for example, computer software or the right to play an online multiplayer game. The purpose and application differ from those of serial numbers on physical goods. A software serial number, otherwise called a product key, is usually not embedded in the software but is assigned to a specific user with a right to use the software. The software will function only if a potential user enters a valid product code. The vast majority of possible codes are rejected by the software. If an unauthorised user is found to be using the software, the legitimate user can be identified from the code. It is not impossible, however, for an unauthorised user to create a valid but unallocated code, either by trying many possible codes or by reverse engineering the software; use of unallocated codes can be monitored if the software makes an Internet connection. Other uses of the term The term serial number is sometimes used for codes which do not identify a single instance of something. For example, the International Standard Serial Number (ISSN) used on magazines, journals and other periodicals, an equivalent of the International Standard Book Number (ISBN) applied to books, is assigned to each periodical title rather than to an individual copy. It takes its name from the library science use of the word serial to mean a periodical. Certificates and certificate authorities (CA) are necessary for widespread use of cryptography. These depend on applying mathematically rigorous serial numbers and serial number arithmetic, again not identifying a single instance of the content being protected. Military and government use The term serial number is also used in military formations as an alternative to the expression service number. In air forces, the serial number is used to uniquely identify individual aircraft and is usually painted on both sides of the aircraft fuselage, most often in the tail area, although in some cases the serial is painted on the side of the aircraft's fin/rudder(s). Because of this, the serial number is sometimes called a tail number. 
In the UK Royal Air Force (RAF) the individual serial takes the form of two letters followed by three digits, e.g., BT308—the prototype Avro Lancaster, or XS903—an English Electric Lightning F.6 at one time based at RAF Binbrook. During the Second World War RAF aircraft that were secret or carrying secret equipment had "/G" (for "Guard") appended to the serial, denoting that the aircraft was to have an armed guard at all times while on the ground, e.g., LZ548/G—the prototype de Havilland Vampire jet fighter, or ML926/G—a de Havilland Mosquito XVI experimentally fitted with H2S radar. Prior to this scheme the RAF, and its predecessor the Royal Flying Corps (RFC), utilised a serial consisting of a letter followed by four figures, e.g., D8096—a Bristol F.2 Fighter currently owned by the Shuttleworth Collection, or K5054—the prototype Supermarine Spitfire. The serial number follows the aircraft throughout its period of service. In 2009, the U.S. FDA published draft guidance for the pharmaceutical industry to use serial numbers on prescription drug packages. This measure is intended to enhance the traceability of drugs and to help prevent counterfeiting. Serial number arithmetic Serial numbers are often used in network protocols. However, most sequence numbers in computer protocols are limited to a fixed number of bits, and will wrap around after sufficiently many numbers have been allocated. Thus, recently allocated serial numbers may duplicate very old serial numbers, but not other recently allocated serial numbers. To avoid ambiguity with these non-unique numbers, RFC 1982, "Serial Number Arithmetic", defines special rules for calculations involving these kinds of serial numbers. Lollipop sequence number spaces are a more recent and sophisticated scheme for dealing with finite-sized sequence numbers in protocols. See also (serial identifiers for databases) – one of the first machines to sport a unique serial number Sources Elz, R., and R. Bush, "Serial Number Arithmetic", Network Working Group, August 1996. Plummer, William W. "Sequence Number Arithmetic". Cambridge, Massachusetts: Bolt Beranek and Newman, Inc., 21 September 1978. References External links ISSN International Centre
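The wrap-around rules sketched in the serial number arithmetic section above can be made concrete with a small example. The following Python snippet is an illustrative sketch of the addition and comparison operations from the Elz and Bush document listed in the sources, assuming a 32-bit serial number space; the function names are invented for this sketch and are not part of any standard library.

```python
# Illustrative sketch of RFC 1982-style serial number arithmetic
# for a fixed-width (here 32-bit) serial number space.

BITS = 32
MOD = 1 << BITS          # serial numbers live in [0, 2**32)
HALF = 1 << (BITS - 1)   # largest permitted addend is HALF - 1

def serial_add(s, n):
    """Add n to serial s, wrapping around the fixed-width space."""
    if not 0 <= n <= HALF - 1:
        raise ValueError("addend out of range for serial arithmetic")
    return (s + n) % MOD

def serial_lt(a, b):
    """Return True if serial a is 'less than' serial b under wrap-around rules."""
    return (a < b and b - a < HALF) or (a > b and a - b > HALF)

if __name__ == "__main__":
    newest = serial_add(0xFFFFFFF0, 0x20)   # wraps past zero to 0x10
    print(hex(newest))                      # 0x10
    # Although 0x10 < 0xFFFFFFF0 as plain integers, it is "newer"
    # in the wrap-around ordering:
    print(serial_lt(0xFFFFFFF0, newest))    # True
    print(serial_lt(newest, 0xFFFFFFF0))    # False
```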
Serial number
[ "Mathematics" ]
1,139
[ "Serial numbers", "Mathematical objects", "Numbers" ]
58,320
https://en.wikipedia.org/wiki/Potential%20flow
In fluid dynamics, potential flow or irrotational flow refers to a description of a fluid flow with no vorticity in it. Such a description typically arises in the limit of vanishing viscosity, i.e., for an inviscid fluid and with no vorticity present in the flow. Potential flow describes the velocity field as the gradient of a scalar function: the velocity potential. As a result, a potential flow is characterized by an irrotational velocity field, which is a valid approximation for several applications. The irrotationality of a potential flow is due to the curl of the gradient of a scalar always being equal to zero. In the case of an incompressible flow the velocity potential satisfies Laplace's equation, and potential theory is applicable. However, potential flows also have been used to describe compressible flows and Hele-Shaw flows. The potential flow approach occurs in the modeling of both stationary as well as nonstationary flows. Applications of potential flow include: the outer flow field for aerofoils, water waves, electroosmotic flow, and groundwater flow. For flows (or parts thereof) with strong vorticity effects, the potential flow approximation is not applicable. In flow regions where vorticity is known to be important, such as wakes and boundary layers, potential flow theory is not able to provide reasonable predictions of the flow. Fortunately, there are often large regions of a flow where the assumption of irrotationality is valid which is why potential flow is used for various applications. For instance in: flow around aircraft, groundwater flow, acoustics, water waves, and electroosmotic flow. Description and characteristics In potential or irrotational flow, the vorticity vector field is zero, i.e., , where is the velocity field and is the vorticity field. Like any vector field having zero curl, the velocity field can be expressed as the gradient of certain scalar, say which is called the velocity potential, since the curl of the gradient is always zero. We therefore have The velocity potential is not uniquely defined since one can add to it an arbitrary function of time, say , without affecting the relevant physical quantity which is . The non-uniqueness is usually removed by suitably selecting appropriate initial or boundary conditions satisfied by and as such the procedure may vary from one problem to another. In potential flow, the circulation around any simply-connected contour is zero. This can be shown using the Stokes theorem, where is the line element on the contour and is the area element of any surface bounded by the contour. In multiply-connected space (say, around a contour enclosing solid body in two dimensions or around a contour enclosing a torus in three-dimensions) or in the presence of concentrated vortices, (say, in the so-called irrotational vortices or point vortices, or in smoke rings), the circulation need not be zero. In the former case, Stokes theorem cannot be applied and in the later case, is non-zero within the region bounded by the contour. Around a contour encircling an infinitely long solid cylinder with which the contour loops times, we have where is a cyclic constant. This example belongs to a doubly-connected space. 
In an -tuply connected space, there are such cyclic constants, namely, Incompressible flow In the case of an incompressible flow — for instance of a liquid, or a gas at low Mach numbers; but not for sound waves — the velocity has zero divergence: Substituting here shows that satisfies the Laplace equation where is the Laplace operator (sometimes also written ). Since solutions of the Laplace equation are harmonic functions, every harmonic function represents a potential flow solution. As evident, in the incompressible case, the velocity field is determined completely from its kinematics: the assumptions of irrotationality and zero divergence of flow. Dynamics, in connection with the momentum equations, only has to be applied afterwards, if one is interested in computing the pressure field: for instance for flow around airfoils through the use of Bernoulli's principle. In incompressible flows, contrary to common misconception, the potential flow indeed satisfies the full Navier–Stokes equations, not just the Euler equations, because the viscous term is identically zero. It is the inability of the potential flow to satisfy the required boundary conditions, especially near solid boundaries, that makes it invalid in representing the required flow field. If the potential flow satisfies the necessary conditions, then it is the required solution of the incompressible Navier–Stokes equations. In two dimensions, with the help of the harmonic function and its conjugate harmonic function (stream function), incompressible potential flow reduces to a very simple system that is analyzed using complex analysis (see below). Compressible flow Steady flow Potential flow theory can also be used to model irrotational compressible flow. The derivation of the governing equation for the velocity potential from Euler's equation is quite straightforward. The continuity and the (potential flow) momentum equations for steady flows are given by where the last equation follows from the fact that entropy is constant for a fluid particle and that the square of the sound speed is . Eliminating from the two governing equations results in The incompressible version emerges in the limit . Substituting here results in where is expressed as a function of the velocity magnitude . For a polytropic gas, , where is the specific heat ratio and is the stagnation enthalpy. In two dimensions, the equation simplifies to Validity: As it stands, the equation is valid for any inviscid potential flow, irrespective of whether the flow is subsonic or supersonic (e.g. Prandtl–Meyer flow). However, in supersonic and also in transonic flows, shock waves can occur, which can introduce entropy and vorticity into the flow, making the flow rotational. Nevertheless, there are two cases for which potential flow prevails even in the presence of shock waves, which are explained from the (not necessarily potential) momentum equation written in the following form where is the specific enthalpy, is the vorticity field, is the temperature and is the specific entropy. Since in front of the leading shock wave we have a potential flow, Bernoulli's equation shows that is constant, which is also constant across the shock wave (Rankine–Hugoniot conditions) and therefore we can write 1) When the shock wave is of constant intensity, the entropy discontinuity across the shock wave is also constant, i.e., and therefore vorticity production is zero. Shock waves at the pointed leading edge of a two-dimensional wedge or a three-dimensional cone (Taylor–Maccoll flow) have constant intensity. 
2) For weak shock waves, the entropy jump across the shock wave is a third-order quantity in terms of the shock wave strength and therefore can be neglected. Shock waves in flows past slender bodies lie nearly parallel to the body, and they are weak. Nearly parallel flows: When the flow is predominantly unidirectional with small deviations, such as in flow past slender bodies, the full equation can be further simplified. Let be the mainstream and consider small deviations from this velocity field. The corresponding velocity potential can be written as where characterizes the small departure from the uniform flow and satisfies the linearized version of the full equation. This is given by where is the constant Mach number corresponding to the uniform flow. This equation is valid provided is not close to unity. When is small (transonic flow), we have the following nonlinear equation where is the critical value of the Landau derivative and is the specific volume. The transonic flow is completely characterized by the single parameter , which for a polytropic gas takes the value . Under the hodograph transformation, the transonic equation in two dimensions becomes the Euler–Tricomi equation. Unsteady flow The continuity and the (potential flow) momentum equations for unsteady flows are given by The first integral of the (potential flow) momentum equation is given by where is an arbitrary function. Without loss of generality, we can set since is not uniquely defined. Combining these equations, we obtain Substituting here results in Nearly parallel flows: As before, for nearly parallel flows, we can write (after introducing a rescaled time ) provided the constant Mach number is not close to unity. When is small (transonic flow), we have the following nonlinear equation Sound waves: In sound waves, the velocity magnitude (or the Mach number) is very small, although the unsteady term is now comparable to the other leading terms in the equation. Thus, neglecting all quadratic and higher-order terms and noting that, in the same approximation, is a constant (for example, in a polytropic gas ), we have which is a linear wave equation for the velocity potential . Again the oscillatory part of the velocity vector is related to the velocity potential by , while as before is the Laplace operator, and is the average speed of sound in the homogeneous medium. Note that the oscillatory parts of the pressure and density also each individually satisfy the wave equation in this approximation. Applicability and limitations Potential flow does not include all the characteristics of flows that are encountered in the real world. Potential flow theory cannot be applied to viscous internal flows, except for flows between closely spaced plates. Richard Feynman considered potential flow to be so unphysical that the only fluid to obey the assumptions was "dry water" (quoting John von Neumann). Incompressible potential flow also makes a number of invalid predictions, such as d'Alembert's paradox, which states that the drag on any object moving through an infinite fluid otherwise at rest is zero. More precisely, potential flow cannot account for the behaviour of flows that include a boundary layer. Nevertheless, understanding potential flow is important in many branches of fluid mechanics. In particular, simple potential flows (called elementary flows) such as the free vortex and the point source possess ready analytical solutions. These solutions can be superposed to create more complex flows satisfying a variety of boundary conditions. 
These flows correspond closely to real-life flows over the whole of fluid mechanics; in addition, many valuable insights arise when considering the deviation (often slight) between an observed flow and the corresponding potential flow. Potential flow finds many applications in fields such as aircraft design. For instance, in computational fluid dynamics, one technique is to couple a potential flow solution outside the boundary layer to a solution of the boundary layer equations inside the boundary layer. The absence of boundary layer effects means that any streamline can be replaced by a solid boundary with no change in the flow field, a technique used in many aerodynamic design approaches. Another technique would be the use of Riabouchinsky solids. Analysis for two-dimensional incompressible flow Potential flow in two dimensions is simple to analyze using conformal mapping, by the use of transformations of the complex plane. However, use of complex numbers is not required, as for example in the classical analysis of fluid flow past a cylinder. It is not possible to solve a potential flow using complex numbers in three dimensions. The basic idea is to use a holomorphic (also called analytic) or meromorphic function , which maps the physical domain to the transformed domain . While , , and are all real valued, it is convenient to define the complex quantities Now, if we write the mapping as Then, because is a holomorphic or meromorphic function, it has to satisfy the Cauchy–Riemann equations The velocity components , in the directions respectively, can be obtained directly from by differentiating with respect to . That is So the velocity field is specified by Both and then satisfy Laplace's equation: So can be identified as the velocity potential and is called the stream function. Lines of constant are known as streamlines and lines of constant are known as equipotential lines (see equipotential surface). Streamlines and equipotential lines are orthogonal to each other, since Thus the flow occurs along the lines of constant and at right angles to the lines of constant . is also satisfied, this relation being equivalent to . So the flow is irrotational. The automatic condition then gives the incompressibility constraint . Examples of two-dimensional incompressible flows Any differentiable function may be used for . The examples that follow use a variety of elementary functions; special functions may also be used. Note that multi-valued functions such as the natural logarithm may be used, but attention must be confined to a single Riemann surface. Power laws In case the following power-law conformal map is applied, from to : then, writing in polar coordinates as , we have In the figures to the right examples are given for several values of . The black line is the boundary of the flow, while the darker blue lines are streamlines, and the lighter blue lines are equi-potential lines. Some interesting powers are: : this corresponds with flow around a semi-infinite plate, : flow around a right corner, : a trivial case of uniform flow, : flow through a corner, or near a stagnation point, and : flow due to a source doublet The constant is a scaling parameter: its absolute value determines the scale, while its argument introduces a rotation (if non-zero). Power laws with : uniform flow If , that is, a power law with , the streamlines (i.e. lines of constant ) are a system of straight lines parallel to the -axis. 
This is easiest to see by writing in terms of real and imaginary components: thus giving and . This flow may be interpreted as uniform flow parallel to the -axis. Power laws with If , then and the streamline corresponding to a particular value of are those points satisfying which is a system of rectangular hyperbolae. This may be seen by again rewriting in terms of real and imaginary components. Noting that and rewriting and it is seen (on simplifying) that the streamlines are given by The velocity field is given by , or In fluid dynamics, the flowfield near the origin corresponds to a stagnation point. Note that the fluid at the origin is at rest (this follows on differentiation of at ). The streamline is particularly interesting: it has two (or four) branches, following the coordinate axes, i.e. and . As no fluid flows across the -axis, it (the -axis) may be treated as a solid boundary. It is thus possible to ignore the flow in the lower half-plane where and to focus on the flow in the upper halfplane. With this interpretation, the flow is that of a vertically directed jet impinging on a horizontal flat plate. The flow may also be interpreted as flow into a 90 degree corner if the regions specified by (say) are ignored. Power laws with If , the resulting flow is a sort of hexagonal version of the case considered above. Streamlines are given by, and the flow in this case may be interpreted as flow into a 60° corner. Power laws with : doublet If , the streamlines are given by This is more easily interpreted in terms of real and imaginary components: Thus the streamlines are circles that are tangent to the x-axis at the origin. The circles in the upper half-plane thus flow clockwise, those in the lower half-plane flow anticlockwise. Note that the velocity components are proportional to ; and their values at the origin is infinite. This flow pattern is usually referred to as a doublet, or dipole, and can be interpreted as the combination of a source-sink pair of infinite strength kept an infinitesimally small distance apart. The velocity field is given by or in polar coordinates: Power laws with : quadrupole If , the streamlines are given by This is the flow field associated with a quadrupole. Line source and sink A line source or sink of strength ( for source and for sink) is given by the potential where in fact is the volume flux per unit length across a surface enclosing the source or sink. The velocity field in polar coordinates are i.e., a purely radial flow. Line vortex A line vortex of strength is given by where is the circulation around any simple closed contour enclosing the vortex. The velocity field in polar coordinates are i.e., a purely azimuthal flow. Analysis for three-dimensional incompressible flows For three-dimensional flows, complex potential cannot be obtained. Point source and sink The velocity potential of a point source or sink of strength ( for source and for sink) in spherical polar coordinates is given by where in fact is the volume flux across a closed surface enclosing the source or sink. The velocity field in spherical polar coordinates are See also Potential flow around a circular cylinder Aerodynamic potential-flow code Conformal mapping Darwin drift Flownet Laplacian field Laplace equation for irrotational flow Potential theory Stream function Velocity potential Helmholtz decomposition Notes References Further reading External links — Java applets for exploring conformal maps Potential Flow Visualizations - Interactive WebApps Fluid dynamics
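As a concrete illustration of the two-dimensional complex-potential analysis above, the sketch below superposes a uniform stream, a doublet and a line vortex to obtain the textbook potential for flow past a circular cylinder with circulation, and recovers the velocity from the derivative of the complex potential. It is a minimal sketch using standard formulas rather than code from any particular library; the parameter values are arbitrary.

```python
# Minimal sketch: flow past a circular cylinder with circulation,
# built from the elementary complex potentials discussed above.
import cmath
import math

U = 1.0      # free-stream speed (arbitrary units)
a = 0.5      # cylinder radius
Gamma = 2.0  # circulation of the superposed line vortex

def w(z):
    """Complex potential w = phi + i*psi: uniform flow + doublet + vortex."""
    return U * (z + a**2 / z) - 1j * Gamma / (2 * math.pi) * cmath.log(z)

def velocity(z):
    """Velocity (u, v) recovered from dw/dz = u - i v."""
    dwdz = U * (1 - a**2 / z**2) - 1j * Gamma / (2 * math.pi * z)
    return dwdz.real, -dwdz.imag

if __name__ == "__main__":
    # The stream function psi = Im(w) should be constant on the cylinder
    # surface |z| = a, so the surface is a streamline.
    psi_on_surface = [w(a * cmath.exp(1j * t)).imag
                      for t in (0.3, 1.1, 2.5, 4.0)]
    print("psi on |z| = a:", [round(p, 12) for p in psi_on_surface])

    # Far upstream the velocity approaches the uniform stream (U, 0).
    print("velocity far upstream:", velocity(complex(-100.0, 0.0)))
```

The printed values show that the stream function takes the same value at every sampled point of the cylinder surface, and that the velocity far from the body is close to the undisturbed free stream.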
Potential flow
[ "Chemistry", "Engineering" ]
3,560
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
58,358
https://en.wikipedia.org/wiki/Tryptophan
Tryptophan (symbol Trp or W) is an α-amino acid that is used in the biosynthesis of proteins. Tryptophan contains an α-amino group, an α-carboxylic acid group, and a side-chain indole, making it a polar molecule with a non-polar aromatic beta carbon substituent. Tryptophan is also a precursor to the neurotransmitter serotonin, the hormone melatonin, and vitamin B3 (niacin). It is encoded by the codon UGG. Like other amino acids, tryptophan is a zwitterion at physiological pH, where the amino group is protonated (–NH3+; pKa = 9.39) and the carboxylic acid is deprotonated (–COO−; pKa = 2.38). Humans and many animals cannot synthesize tryptophan: they need to obtain it through their diet, making it an essential amino acid. Tryptophan is named after the digestive enzyme trypsin, which was used in its first isolation from casein proteins. It was assigned the one-letter symbol W based on the double ring being visually suggestive of the bulky letter. Function Amino acids, including tryptophan, are used as building blocks in protein biosynthesis, and proteins are required to sustain life. Tryptophan is among the less common amino acids found in proteins, but it plays important structural or functional roles whenever it occurs. For instance, tryptophan and tyrosine residues play special roles in "anchoring" membrane proteins within the cell membrane. Tryptophan, along with other aromatic amino acids, is also important in glycan-protein interactions. In addition, tryptophan functions as a biochemical precursor for the following compounds: Serotonin (a neurotransmitter), synthesized by tryptophan hydroxylase. Melatonin (a neurohormone) is in turn synthesized from serotonin, via the N-acetyltransferase and 5-hydroxyindole-O-methyltransferase enzymes. Kynurenine, to which tryptophan is mainly (more than 95%) metabolized. Two enzymes, namely indoleamine 2,3-dioxygenase (IDO) in the immune system and the brain, and tryptophan 2,3-dioxygenase (TDO) in the liver, are responsible for the synthesis of kynurenine from tryptophan. The kynurenine pathway of tryptophan catabolism is altered in several diseases, including psychiatric disorders such as schizophrenia, major depressive disorder, and bipolar disorder. Niacin, also known as vitamin B3, is synthesized from tryptophan via kynurenine and quinolinic acids. Auxins (a class of phytohormones) are synthesized from tryptophan. The disorder fructose malabsorption causes improper absorption of tryptophan in the intestine, reduced levels of tryptophan in the blood, and depression. In bacteria that synthesize tryptophan, high cellular levels of this amino acid activate a repressor protein, which binds to the trp operon. Binding of this repressor to the tryptophan operon prevents transcription of downstream DNA that codes for the enzymes involved in the biosynthesis of tryptophan. So high levels of tryptophan prevent tryptophan synthesis through a negative feedback loop, and when the cell's tryptophan levels go down again, transcription from the trp operon resumes. This permits tightly regulated and rapid responses to changes in the cell's internal and external tryptophan levels. Recommended dietary allowance In 2002, the U.S. Institute of Medicine set a Recommended Dietary Allowance (RDA) of 5 mg/kg body weight/day of tryptophan for adults 19 years and over. Dietary sources Tryptophan is present in most protein-based foods or dietary proteins. 
It is particularly plentiful in chocolate, oats, dried dates, milk, yogurt, cottage cheese, red meat, eggs, fish, poultry, sesame, chickpeas, almonds, sunflower seeds, pumpkin seeds, hemp seeds, buckwheat, spirulina, and peanuts. Contrary to the popular belief that cooked turkey contains an abundance of tryptophan, the tryptophan content in turkey is typical of poultry. Medical use Depression Because tryptophan is converted into 5-hydroxytryptophan (5-HTP) which is then converted into the neurotransmitter serotonin, it has been proposed that consumption of tryptophan or 5-HTP may improve depression symptoms by increasing the level of serotonin in the brain. Tryptophan is sold over the counter in the United States (after being banned to varying extents between 1989 and 2005) and the United Kingdom as a dietary supplement for use as an antidepressant, anxiolytic, and sleep aid. It is also marketed as a prescription drug in some European countries for the treatment of major depression. There is evidence that blood tryptophan levels are unlikely to be altered by changing the diet, but consuming purified tryptophan increases the serotonin level in the brain, whereas eating foods containing tryptophan does not. In 2001 a Cochrane review of the effect of 5-HTP and tryptophan on depression was published. The authors included only studies of a high rigor and included both 5-HTP and tryptophan in their review because of the limited data on either. Of 108 studies of 5-HTP and tryptophan on depression published between 1966 and 2000, only two met the authors' quality standards for inclusion, totaling 64 study participants. The substances were more effective than placebo in the two studies included but the authors state that "the evidence was of insufficient quality to be conclusive" and note that "because alternative antidepressants exist which have been proven to be effective and safe, the clinical usefulness of 5-HTP and tryptophan is limited at present". The use of tryptophan as an adjunctive therapy in addition to standard treatment for mood and anxiety disorders is not supported by the scientific evidence. Insomnia The American Academy of Sleep Medicine's 2017 clinical practice guidelines recommended against the use of tryptophan in the treatment of insomnia due to poor effectiveness. Side effects Potential side effects of tryptophan supplementation include nausea, diarrhea, drowsiness, lightheadedness, headache, dry mouth, blurred vision, sedation, euphoria, and nystagmus (involuntary eye movements). Interactions Tryptophan taken as a dietary supplement (such as in tablet form) has the potential to cause serotonin syndrome when combined with antidepressants of the MAOI or SSRI class or other strongly serotonergic drugs. Because tryptophan supplementation has not been thoroughly studied in a clinical setting, its interactions with other drugs are not well known. Isolation The isolation of tryptophan was first reported by Frederick Hopkins in 1901. Hopkins recovered tryptophan from hydrolysed casein, recovering 4–8 g of tryptophan from 600 g of crude casein. Biosynthesis and industrial production As an essential amino acid, tryptophan is not synthesized from simpler substances in humans and other animals, so it needs to be present in the diet in the form of tryptophan-containing proteins. Plants and microorganisms commonly synthesize tryptophan from shikimic acid or anthranilate: anthranilate condenses with phosphoribosylpyrophosphate (PRPP), generating pyrophosphate as a by-product. 
The ring of the ribose moiety is opened and subjected to reductive decarboxylation, producing indole-3-glycerol phosphate; this, in turn, is transformed into indole. In the last step, tryptophan synthase catalyzes the formation of tryptophan from indole and the amino acid serine. The industrial production of tryptophan is also biosynthetic and is based on the fermentation of serine and indole using either wild-type or genetically modified bacteria such as B. amyloliquefaciens, B. subtilis, C. glutamicum or E. coli. These strains carry mutations that prevent the reuptake of aromatic amino acids or multiple/overexpressed trp operons. The conversion is catalyzed by the enzyme tryptophan synthase. Society and culture Showa Denko contamination scandal There was a large outbreak of eosinophilia-myalgia syndrome (EMS) in the U.S. in 1989, with more than 1,500 cases reported to the CDC and at least 37 deaths. After preliminary investigation revealed that the outbreak was linked to intake of tryptophan, the U.S. Food and Drug Administration (FDA) recalled tryptophan supplements in 1989 and banned most public sales in 1990, with other countries following suit. Subsequent studies suggested that EMS was linked to specific batches of L-tryptophan supplied by a single large Japanese manufacturer, Showa Denko. It eventually became clear that recent batches of Showa Denko's L-tryptophan were contaminated by trace impurities, which were subsequently thought to be responsible for the 1989 EMS outbreak. However, other evidence suggests that tryptophan itself may be a potentially major contributory factor in EMS. There are also claims that a precursor reached sufficient concentrations to form a toxic dimer. The FDA loosened its restrictions on sales and marketing of tryptophan in February 2001, but continued to limit the importation of tryptophan not intended for an exempted use until 2005. The fact that the Showa Denko facility used genetically engineered bacteria to produce the contaminated batches of L-tryptophan later found to have caused the outbreak of eosinophilia-myalgia syndrome has been cited as evidence of a need for "close monitoring of the chemical purity of biotechnology-derived products". Those calling for purity monitoring have, in turn, been criticized as anti-GMO activists who overlook possible non-GMO causes of contamination and threaten the development of biotech. Turkey meat and drowsiness hypothesis A common assertion in the US and the UK is that heavy consumption of turkey meat—as seen during Thanksgiving and Christmas—results in drowsiness, due to high levels of tryptophan contained in turkey. However, the amount of tryptophan in turkey is comparable with that of other meats. Drowsiness after eating may be caused by other foods eaten with the turkey, particularly carbohydrates. Ingestion of a meal rich in carbohydrates triggers the release of insulin. Insulin in turn stimulates the uptake of large neutral branched-chain amino acids (BCAA), but not tryptophan, into muscle, increasing the ratio of tryptophan to BCAA in the blood stream. The resulting increased tryptophan ratio reduces competition at the large neutral amino acid transporter (which transports both BCAA and aromatic amino acids), resulting in more uptake of tryptophan across the blood–brain barrier into the cerebrospinal fluid (CSF). Once in the CSF, tryptophan is converted into serotonin in the raphe nuclei by the normal enzymatic pathway. 
The resultant serotonin is further metabolised into the hormone melatonin—which is an important mediator of the circadian rhythm—by the pineal gland. Hence, these data suggest that "feast-induced drowsiness"—or postprandial somnolence—may be the result of a heavy meal rich in carbohydrates, which indirectly increases the production of melatonin in the brain, and thereby promotes sleep. Research Yeast amino acid metabolism In 1912 Felix Ehrlich demonstrated that yeast metabolizes the natural amino acids essentially by splitting off carbon dioxide and replacing the amino group with a hydroxyl group. By this reaction, tryptophan gives rise to tryptophol. Serotonin precursor Tryptophan affects brain serotonin synthesis when given orally in a purified form and is used to modify serotonin levels for research. Low brain serotonin level is induced by administration of tryptophan-poor protein in a technique called acute tryptophan depletion. Studies using this method have evaluated the effect of serotonin on mood and social behavior, finding that serotonin reduces aggression and increases agreeableness. Psychedelic effects Tryptophan produces the head-twitch response (HTR) in rodents when administered at sufficiently high doses. The HTR is induced by serotonergic psychedelics like lysergic acid diethylamide (LSD) and psilocybin and is a behavioral proxy of psychedelic effects. Tryptophan is converted into the trace amine tryptamine and tryptamine is N-methylated by indolethylamine N-methyltransferase (INMT) into N-methyltryptamine (NMT) and N,N-dimethyltryptamine (N,N-DMT), which are known serotonergic psychedelics. Fluorescence Tryptophan is an important intrinsic fluorescent probe (amino acid), which can be used to estimate the nature of the microenvironment around the tryptophan residue. Most of the intrinsic fluorescence emissions of a folded protein are due to excitation of tryptophan residues. See also 5-Hydroxytryptophan (5-HTP) α-Methyltryptophan Acree–Rosenheim reaction Adamkiewicz reaction Attenuator (genetics) N,N-Dimethyltryptamine Hopkins–Cole reaction Serotonin Tryptamine References Further reading External links Alpha-Amino acids Proteinogenic amino acids indole alkaloids Glucogenic amino acids Ketogenic amino acids Aromatic amino acids Essential amino acids Tryptamine alkaloids Dietary supplements Carbonic anhydrase activators Monoamine precursors Serotonin
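The negative-feedback regulation of the trp operon described in the Function section above lends itself to a small numerical illustration. The following Python sketch is a purely qualitative toy model, not a quantitative biochemical model: the repression curve, rate constants and time step are all invented, and it shows only how synthesis that shuts off at high tryptophan levels keeps the simulated level near a steady state.

```python
# Purely illustrative toy model of the negative feedback described for the
# trp operon: high tryptophan levels repress expression of the biosynthetic
# enzymes, so synthesis slows; when levels fall, transcription resumes.
# All numbers are arbitrary and chosen only to make the feedback visible.

def repressor_active_fraction(trp, k=50.0, n=4.0):
    """Fraction of operons repressed, rising steeply as tryptophan accumulates."""
    return trp**n / (k**n + trp**n)

def simulate(steps=200, dt=0.1, max_synthesis=10.0, consumption_rate=0.08):
    trp = 0.0
    history = []
    for _ in range(steps):
        synthesis = max_synthesis * (1.0 - repressor_active_fraction(trp))
        trp += dt * (synthesis - consumption_rate * trp)
        history.append(trp)
    return history

if __name__ == "__main__":
    levels = simulate()
    # The level rises and then settles near a steady state instead of
    # growing without bound, because synthesis shuts off as trp accumulates.
    print("early:", [round(x, 1) for x in levels[:5]])
    print("late: ", [round(x, 1) for x in levels[-5:]])
```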
Tryptophan
[ "Chemistry" ]
3,042
[ "Tryptamine alkaloids", "Alkaloids by chemical classification", "Indole alkaloids" ]
16,252,996
https://en.wikipedia.org/wiki/Neumann%E2%80%93Dirichlet%20method
In mathematics, the Neumann–Dirichlet method is a domain decomposition preconditioner which involves solving a Neumann boundary value problem on one subdomain and a Dirichlet boundary value problem on another, adjacent to it across the interface between the subdomains. On a problem with many subdomains organized in a rectangular mesh, the subdomains are assigned Neumann or Dirichlet problems in a checkerboard fashion. See also Neumann–Neumann method References Domain decomposition methods
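The alternating idea behind this family of methods can be illustrated on a one-dimensional model problem. The Python sketch below is schematic rather than the preconditioner itself: for the Poisson problem -u'' = 1 on (0, 1) with homogeneous boundary values, a Dirichlet solve on one subdomain supplies an interface flux for a Neumann solve on the other, and the interface value is updated with a relaxation parameter. The subdomain solves are written out analytically to keep the example short; all names and constants are the sketch's own.

```python
# Schematic 1D illustration of the alternating Neumann/Dirichlet idea
# for -u'' = 1 on (0, 1) with u(0) = u(1) = 0, split at x = gamma.
# Exact solution: u(x) = x * (1 - x) / 2, so u(gamma) = gamma * (1 - gamma) / 2.

gamma = 0.5        # interface location
theta = 0.5        # relaxation parameter for the interface update
lam = 0.0          # initial guess for the interface value u(gamma)

def dirichlet_flux(lam):
    """Solve -u'' = 1 on (0, gamma) with u(0)=0, u(gamma)=lam; return u'(gamma)."""
    # General solution u = -x**2/2 + a*x, with a fixed by u(gamma) = lam.
    a = (lam + gamma**2 / 2.0) / gamma
    return -gamma + a

def neumann_trace(flux):
    """Solve -u'' = 1 on (gamma, 1) with u'(gamma)=flux, u(1)=0; return u(gamma)."""
    # General solution u = -x**2/2 + c*x + d.
    c = flux + gamma
    d = 0.5 - c
    return -gamma**2 / 2.0 + c * gamma + d

exact = gamma * (1.0 - gamma) / 2.0
for it in range(5):
    flux = dirichlet_flux(lam)                                # Dirichlet solve on the left
    lam = theta * neumann_trace(flux) + (1.0 - theta) * lam   # Neumann solve + relaxation
    print(f"iteration {it + 1}: interface value {lam:.6f} (exact {exact:.6f})")
```

For this symmetric split with relaxation 0.5 the iteration reaches the exact interface value after the first update; for other splits or operators it converges geometrically and the choice of relaxation parameter matters.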
Neumann–Dirichlet method
[ "Mathematics" ]
95
[ "Applied mathematics", "Applied mathematics stubs" ]
16,254,249
https://en.wikipedia.org/wiki/Mortar%20methods
In numerical analysis, mortar methods are discretization methods for partial differential equations which use separate finite element discretizations on nonoverlapping subdomains. The meshes on the subdomains do not match on the interface, and the equality of the solution is enforced by Lagrange multipliers, judiciously chosen to preserve the accuracy of the solution. Mortar discretizations lend themselves naturally to solution by iterative domain decomposition methods such as FETI and balancing domain decomposition. In engineering practice in the finite element method, continuity of solutions between non-matching subdomains is implemented by multiple-point constraints. Similar to penalty methods, mortar methods are explicit in nature, i.e. they require the contacting surfaces to be defined. This is in contrast to fully implicit methods, such as the Third medium contact method, where contacting surfaces do not need to be defined. References Domain decomposition methods
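A drastically simplified illustration of the constraint-enforcement idea is given below: two independently meshed one-dimensional subdomains are tied together at a single shared interface point by a Lagrange multiplier, producing the usual saddle-point system. This is a sketch of the multiple-point-constraint mechanism mentioned above, assuming NumPy is available; a genuine mortar method couples non-matching interface meshes through mortar spaces, which this toy example does not attempt.

```python
# Toy example: two independently discretized 1D subdomains of -u'' = 1 on
# (0, 1), split at x = 0.5, coupled by one Lagrange multiplier that enforces
# continuity of the solution at the shared interface point.
import numpy as np

def stiffness_1d(n_elems, length):
    """Assemble the stiffness matrix of a uniform 1D bar with unit coefficient."""
    h = length / n_elems
    K = np.zeros((n_elems + 1, n_elems + 1))
    ke = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
    for e in range(n_elems):
        K[e:e + 2, e:e + 2] += ke
    return K

def load_1d(n_elems, length):
    """Consistent nodal load vector for a unit body load f = 1."""
    h = length / n_elems
    f = np.full(n_elems + 1, h)
    f[[0, -1]] = h / 2
    return f

K1, K2 = stiffness_1d(4, 0.5), stiffness_1d(7, 0.5)   # deliberately different meshes
f1, f2 = load_1d(4, 0.5), load_1d(7, 0.5)
n1, n2 = K1.shape[0], K2.shape[0]
N = n1 + n2

# Saddle-point system for unknowns [u1, u2, lambda]; the multiplier enforces
# u1(end) = u2(start) at the shared interface point.
A = np.zeros((N + 1, N + 1))
A[:n1, :n1], A[n1:N, n1:N] = K1, K2
B = np.zeros(N)
B[n1 - 1], B[n1] = 1.0, -1.0
A[:N, N], A[N, :N] = B, B
b = np.concatenate([f1, f2, [0.0]])

# Homogeneous Dirichlet conditions u(0) = u(1) = 0, imposed row-wise.
for fixed in (0, N - 1):
    A[fixed, :] = 0.0
    A[fixed, fixed] = 1.0
    b[fixed] = 0.0

u = np.linalg.solve(A, b)
exact = 0.5 * (1.0 - 0.5) / 2.0          # u(x) = x(1 - x)/2 at the interface
print("interface values:", u[n1 - 1], u[n1], "exact:", exact)
print("multiplier (interface reaction, zero here by symmetry):", u[N])
```

The multiplier returned by the solve can be read as the reaction exchanged between the two subdomains at the interface, which in this symmetric model problem is zero.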
Mortar methods
[ "Mathematics" ]
189
[ "Applied mathematics", "Applied mathematics stubs" ]
16,254,333
https://en.wikipedia.org/wiki/Mushkin
Mushkin is an American computer hardware company best known for producing computer memory modules (RAM). Its customers include gamers and industry professionals. Mushkin products include solid-state drives, computer power supply units (PSUs), and RAM modules for desktops, servers, and laptops. They also produce a line of USB flash drives. Their memory products are available in several performance categories, including those intended for overclockers. History Mushkin was founded in Denver, Colorado, in 1994. Ramtron International acquired Mushkin in June 2000; Mushkin became a wholly owned subsidiary of Ramtron, which used it to distribute memory modules made by its subsidiary, Enhanced Memory Systems. In 2003, George Stathakis, the general manager, purchased Mushkin from Ramtron, becoming its new owner and president and converting it into a worker-owned company. Mushkin is now owned by Avant Technology. References External links Computer companies of the United States Computer hardware companies Companies based in Austin, Texas American companies established in 1994 Computer memory companies Computer power supply unit manufacturers
Mushkin
[ "Technology" ]
250
[ "Computer hardware companies", "Computers" ]
16,254,476
https://en.wikipedia.org/wiki/Relative%20identifier
In the context of the Microsoft Windows NT line of computer operating systems, the relative identifier (RID) is a variable-length number that is assigned to objects at creation and becomes part of the object's Security Identifier (SID), which uniquely identifies an account or group within a domain. The Relative ID Master allocates pools of RIDs to domain controllers for assignment to new Active Directory security principals (users, groups or computer objects). It also manages objects moving between domains. The Relative ID Master is one of the Flexible Single Master Operation (FSMO) roles, the one responsible for assigning RIDs. See also Security Identifier References Security Identifiers (SID) ObjectSID and Active Directory Windows NT kernel
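As a small illustration of the relationship between a SID and its RID, the snippet below extracts the final sub-authority from the textual SID of a domain account. The SID string is an invented example value, and the helper function is defined only for this sketch.

```python
# Simple sketch: for a domain account, the RID is the final sub-authority
# of the textual SID.  The SID below is an invented example, not a real account.

def rid_from_sid(sid):
    """Extract the relative identifier (the last dash-separated component)."""
    parts = sid.split("-")
    if len(parts) < 4 or parts[0] != "S":
        raise ValueError(f"not a valid textual SID: {sid!r}")
    return int(parts[-1])

example_sid = "S-1-5-21-3623811015-3361044348-30300820-1013"
print(rid_from_sid(example_sid))   # 1013 -> identifies the account within the domain
```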
Relative identifier
[ "Technology" ]
140
[ "Computing stubs" ]
16,254,742
https://en.wikipedia.org/wiki/Acatech
Acatech (styled acatech), founded in 2002 and established as the German Academy of Science and Engineering () on 1 January 2008, represents the interests of German technical sciences independently, in self-determination and guided by the common good, at home and abroad. acatech is organized as a working academy that advises politicians and the public on forward-looking issues concerning the technical sciences and technology politics. The academy sees itself as an institution that provides neutral, fact and science-based assessments of technology-related questions and serves society with far-sighted recommendations of excellent quality. Also, acatech aims to facilitate the knowledge transfer between science and business and to promote new talent in the technical sciences. To further the acceptance of technical progress in Germany and demonstrate the potential of forward-looking technologies for the economy and for society, acatech organizes symposia, forums, panel discussions and workshops. acatech communicates with the public by publishing studies, recommendations and issue papers. Also, in the future, the “Three Academies”, acatech, the Berlin-Brandenburg Academy of Sciences and Humanities (BBAW) and the German Academy of Sciences Leopoldina in Halle, which will act as the lead organization in this enterprise, are intended to undertake the functions of a national academy of science. acatech, whose name stands for the combination of academia and technology, is made up of three bodies: General Assembly, Executive Board and Senate. The executive board of acatech is chaired by its presidents, Karl-Heinz Streibich and Prof. Dr. Johann-Dietrich Wörner. The acatech head office is located in Munich. In addition the academy operates offices in Berlin and Brussels. Internationally, acatech plays its role in the European Council of Applied Sciences, Technologies and Engineering (Euro-CASE) and its global equivalent, the International Council of Academies of Engineering and Technological Sciences (CAETS). Operation The way acatech operates is characterized by its close interlinking between science and business. The paramount aim of acatech's work is to promote sustainable growth through more innovation in Germany. This entails the assessment of the potentials, chances and risks of new technical developments. Following this concept, every scientific or technical discipline that serves to produce scientific knowledge and invest that knowledge into practical uses is represented in acatech. These include the engineering sciences, but also the applied natural sciences and parts of the humanities and social sciences. acatech is organized as a flexible, working academy, so that the members can engage in topical networks and projects and thus determine the contents of their work. Every distinguished scientist from academia or industry can become a member of acatech. The co-option of new members is decided, on recommendation, by the executive board and the General Assembly. Presently, acatech has about 600 members, each of them involved in the development of projects and engaged in the topical networks of the academy. The selection of topical fields and projects is decided by the executive board. To foster the quality of coverage relating to technology issues, acatech founded the PUNKT prize in 2005. Distinguished examples of journalism in the field of photography, text and multimedia in Germany are eligible for the PUNKT. The prize is awarded yearly. 
Structure acatech is governed by three organs: the General Assembly, the Executive Board and the Senate. There is also a Board of Trustees. The Executive Board represents acatech and defines the guidelines for the contents of the work undertaken by members in project groups and topical networks. The honorary members of the executive board are elected by the General Assembly, for a period of three years. The members of Board elect the presidents. The incumbent presidents of acatech are Karl-Heinz Streibich and Prof. Dr. Johann-Dietrich Wörner. The Senate advises acatech on questions of strategy. Its members are CEOs and chairmen of major technological corporations, presidents of the main science organizations in Germany, such as the Fraunhofer Society, the German Research Foundation (DFG), the Leibniz-Gemeinschaft and representatives from politics. The Senate is currently chaired by Karl-Heinz Streibich, President of acatech. Corporations and private individuals are affiliated in the Board of Trustees, which engages in the interest of acatech and supports the work of acatech with grants and donations. The chairman of the Board of Trustees is Prof. Dr. Henning Kagermann, Global Representative and Advisor of the Plattform Industrie 4.0 and former Chief Executive Officer of SAP AG. acatech operates a head office at the Residenz in Munich and branch offices in Berlin and Brussels. The employees at the three offices work to support the organs of acatech in their activities. Results acatech publishes the results of its scientific work in three series of monographs: Position Statements Project Reports acatech debates Apart from that, acatech also publishes annual reports and a newsletter, TRANSFER. Position Statements – “acatech takes a position”: This is the format of brief statements on current issues from the fields of technical sciences and technology politics. The statements are drawn up by leading experts for the respective subjects and are authorized by the acatech Executive Board. They are published by acatech. Project Reports – “acatech reports and advises”: In this series, the results achieved by acatech project groups are reported. The reports are of the form of studies developed by interdisciplinary working groups in projects lasting one or several years and resulting in specific recommendations for action. Each project report is authorized by the executive board and published by acatech. “acatech debates”: This series reports about symposia, workshops and other projects whose results do not have the status of official, authorized recommendations by acatech. The responsibility for the contents of these publications lies with the individual publishers and authors. Apart from these series, acatech publishes an Annual Report, each spring, documenting the main activities and events of the past year. The quarterly newsletter, TRANSFER, offers up-to-date information about events, news and current affairs. History The idea to create a national academy of sciences to represent the interests of the technical sciences in Germany is not exactly new. However, in contrast to other European countries, e.g. the United Kingdom with its “Royal Society”, France with the “Académie des Sciences” or Sweden with the “Royal Swedish Academy of Sciences”, the idea of national academies, generally, did not come to fruition in Germany for a long time. Consequently, there was no such superordinate representation of the technical sciences, either. 
The first important step towards an integrated representation of the technical sciences was made only after years of discussion, on 21 November 1997, the day of the constituent session of the "Convent for the Technical Sciences". This convent came into existence thanks to the initiative of the Berlin-Brandenburg and North Rhine-Westphalia academies of sciences. The inaugural assembly elected Professor Dr.-Ing. Günter Spur as chairman of the executive board of the convent. The members, initially just 50 in number, came mostly from the technical sciences, natural sciences, engineering sciences and economics faculties of the two founding academies. Right from the beginning, the tasks to be undertaken by the Convent for the Technical Sciences included the promotion of research and new talent in the technical sciences; the strengthening of international cooperation; and the dialogue with the natural sciences and humanities, politics, business and society about the role of forward-looking technologies. To provide a broader base for the development of the Convent for the Technical Sciences, the presidents of the then seven academies of sciences in Germany agreed in April 2001 to bundle all national technical-sciences activities under the umbrella of the Union of the German Academies of Sciences and Humanities. So, on 15 February 2002, the "Convent for the Technical Sciences of the Union of the German Academies of Sciences" was established and subsequently incorporated and registered as an association for the common public good. The chairmanship of the executive board was awarded to Joachim Milberg, with Franz Pischinger as his deputy. (The two functions were renamed, by a change of statute, to president and vice president, respectively, in May 2003.) The Convent adopted the short name "akatech", which, in view of the international context of its functions, was subsequently changed to "acatech". The breakthrough for the Convent to become a national academy was marked by the decision of the Federal and State Governments' Commission (Bund-Länder-Kommission, BLK) of 23 October 2006 to accept acatech into the common institutional funding framework of the Federal and State governments of Germany. On 23 April 2007, the BLK issued the recommendation to the "heads of government of the Bund and the Länder to implement an amendment to the framework agreement for research funding". In its reasoning for this decision, the BLK emphasized that the technical sciences are an important pillar of the "science landscape" and commended the concept of acatech as a convincing basis for the work of an independent, national academy of the technical sciences. Since 1 January 2008, acatech has operated under the name "German Academy of Science and Engineering". References Further reading External links acatech — German Academy of Science and Engineering Scientific organisations based in Germany Scientific societies based in Germany National academies of engineering
Acatech
[ "Engineering" ]
1,926
[ "National academies of engineering" ]
16,255,266
https://en.wikipedia.org/wiki/Apoptotic%20DNA%20fragmentation
Apoptotic DNA fragmentation is a key feature of apoptosis, a type of programmed cell death. Apoptosis is characterized by the activation of endogenous endonucleases, particularly the caspase-3 activated DNase (CAD), with subsequent cleavage of nuclear DNA into internucleosomal fragments of roughly 180 base pairs (bp) and multiples thereof (360, 540 etc.). Apoptotic DNA fragmentation is used as a marker of apoptosis and for the identification of apoptotic cells, either via the DNA laddering assay, the TUNEL assay, or by the detection of cells with fractional DNA content ("sub-G1 cells") on DNA content frequency histograms, e.g. as in the Nicoletti assay. Mechanism The enzyme responsible for apoptotic DNA fragmentation is the Caspase-Activated DNase (CAD). CAD is normally inhibited by another protein, the Inhibitor of Caspase Activated DNase (ICAD). During apoptosis, the apoptotic effector caspase, caspase-3, cleaves ICAD and thus causes CAD to become activated. CAD cleaves DNA at internucleosomal linker sites between nucleosomes, protein-containing structures that occur in chromatin at ~180-bp intervals. This is because the DNA is normally tightly wrapped around histones, the core proteins of the nucleosomes. The linker sites are the only parts of the DNA strand that are exposed and thus accessible to CAD. Degradation of nuclear DNA into nucleosomal units is one of the hallmarks of apoptotic cell death. It occurs in response to various apoptotic stimuli in a wide variety of cell types. Molecular characterization of this process identified a specific DNase (CAD, caspase-activated DNase) that cleaves chromosomal DNA in a caspase-dependent manner. CAD is synthesized with the help of ICAD (inhibitor of CAD), which works as a specific chaperone for CAD, and is found complexed with ICAD in proliferating cells. When cells are induced to undergo apoptosis, caspase 3 cleaves ICAD to dissociate the CAD:ICAD complex, allowing CAD to cleave chromosomal DNA. Cells that lack ICAD or that express caspase-resistant mutant ICAD thus do not show DNA fragmentation during apoptosis, although they do exhibit some other features of apoptosis and die. Even though much work has been performed on the analysis of apoptotic events, little information is available to link the timing of morphological features at the cell surface and in the nucleus to the biochemical degradation of DNA in the same cells. Apoptosis can be initiated by a myriad of different mechanisms in different cell types, and the kinetics of these events vary widely, from only a few minutes to several days depending on the cell system. The presence or absence of particular apoptotic event(s), including DNA fragmentation, depends on the "time window" at which the kinetic process of apoptosis is being investigated. Often this may complicate identification of apoptotic cells if cell populations are analyzed only at a single time point, e.g. after induction of apoptosis. Historical background The discovery of the internucleosomal fragmentation of genomic DNA into regular repeating oligonucleosomal fragments generated by a Ca/Mg-dependent endonuclease is accepted as one of the best-characterized biochemical markers of apoptosis (programmed cell death). In 1970, Williamson described that cytoplasmic DNA isolated from mouse liver cells after culture was characterized by DNA fragments with a molecular weight consisting of multiples of 135 kDa. This finding was consistent with the hypothesis that these DNA fragments were a specific degradation product of nuclear DNA. 
In 1972, Kerr, Wyllie, and Currie coined the term apoptosis and distinguished this type of cell death from necrosis based on morphological features. In 1973, Hewish and Burgoyne, during the study of subchromatin structure, found that chromatin is accessible to the Ca++/Mg++ endonuclease, resulting in the formation of a digestion product with a regular series of molecular weights similar to the one previously described by Williamson (1970). In 1974, it was found, using cells exposed to widely differing types of trauma, that during cell death degraded DNA in "every case had a modal value of between 10^6 and 10^7 Dalton and cellular metabolism is required to produce degradation of DNA". However, this observation gave no indication of "whether the incision attack on the DNA molecule was a random or rather at a particular site, that have structural or functional meaning". In 1976, internucleosomal fragmentation of irradiated lymphoid chromatin DNA was described in vivo. Six years passed from 1972 to 1978/1980 until the discovery and evaluation of internucleosomal fragmentation of DNA during apoptotic cell death as a hallmark of apoptosis. Since 1972, it has been accepted that glucocorticoid-induced death of lymphocytes is a form of apoptosis. In 1978, a paper was presented revealing that glucocorticoid-induced DNA degradation in rat lymphoid tissue, thymus, and spleen occurred in a specific pattern, producing fragments of DNA that were electrophoretically similar to those observed after treatment of chromatin with micrococcal nuclease, which indicated that an internucleosomal cleavage pattern of DNA degradation occurred during apoptosis. Thus, the first link between programmed cell death/apoptosis and internucleosomal fragmentation of chromatin DNA was discovered, and this fragmentation soon became recognised as a specific feature of apoptosis. In 1980, Wyllie reported additional evidence for an internucleosomal DNA cleavage pattern as a specific feature of glucocorticoid-treated thymocytes undergoing apoptosis. The internucleosomal DNA cleavage pattern was observed as a specific feature of apoptosis in 1978/1980 and has become a recognised hallmark of programmed cell death since then. In 1992, Gorczyca et al. [3] and Gavrieli et al. [4] independently described the DNA fragmentation assay based on the use of terminal deoxynucleotidyl transferase (the TUNEL assay), which became one of the standard methods to detect and identify apoptotic cells. Detection assays Flow cytometry is most frequently used to detect apoptotic DNA fragmentation. Analysis of DNA content by flow cytometry can identify apoptotic cells with fragmented DNA as the cells with fractional DNA content, often called the sub-G1 cells. The flow-cytometric assay utilizing the fluorochrome acridine orange shows that DNA fragmentation within individual cells is discontinuous, likely reflecting different levels of restriction in the accessibility of DNA to DNase imposed by the supranucleosomal and nucleosomal levels of chromatin structure. The presence of apoptotic "sub-G1 cells" can also be detected in cells pre-fixed in ethanol, but not after fixation in crosslinking fixatives such as formaldehyde. The late-S and G2 apoptotic cells may not be detected with this approach because their fractional DNA content may overlap with that of the non-apoptotic G1 cells. 
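As a rough, illustrative sketch of the sub-G1 readout described above, the following Python fragment estimates the apoptotic fraction from a list of per-cell DNA-content measurements. The cutoff placed below the G1 peak and the simulated event counts are assumptions made for this example only; real analyses gate calibrated flow-cytometry histograms rather than applying a fixed threshold.

import numpy as np

def sub_g1_fraction(dna_content, g1_peak, cutoff=0.9):
    # Fraction of events whose DNA-bound fluorescence falls below a cutoff placed
    # under the G1 peak, i.e. events with fractional ("sub-G1") DNA content.
    # The 0.9 cutoff is an illustrative assumption, not a standardized value.
    dna_content = np.asarray(dna_content, dtype=float)
    return float(np.mean(dna_content < cutoff * g1_peak))

# Toy DNA-content distribution: a G1 peak near 100, a G2/M peak near 200,
# and a population of fragmented (sub-G1) events below the G1 peak.
rng = np.random.default_rng(0)
events = np.concatenate([
    rng.normal(100, 5, 6000),   # G1 cells
    rng.normal(200, 8, 2500),   # G2/M cells
    rng.normal(55, 15, 1500),   # apoptotic events with fractional DNA content
])
print(f"estimated sub-G1 fraction: {sub_g1_fraction(events, g1_peak=100):.1%}")

On this toy data the reported fraction comes out close to the simulated 15% of sub-G1 events; the sketch says nothing about the sensitivity limits (late-S and G2 apoptotic cells) discussed above.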
Treatment of cells with detergent, prior to or concurrently with the DNA fluorochrome, also reveals DNA fragmentation by virtue of the presence of sub-G1 cells or cell fragments, as defined by Nicoletti et al. [5] Apoptotic DNA fragmentation can also be detected by the TUNEL assay. The fluorochrome-based TUNEL assay, applicable for flow cytometry, correlates the detection of DNA strand breaks with the cellular DNA content and thus with cell cycle-phase position. The avidin-peroxidase labeling TUNEL assay is applicable for light absorption microscopy. Many TUNEL-related kits are commercially available. Apoptotic DNA fragmentation is also analyzed using agarose gel electrophoresis to demonstrate a "ladder" pattern at ~180-bp intervals. [1] Necrosis, on the other hand, is usually characterized by random DNA fragmentation which forms a "smear" on agarose gels. See also Caspase-activated DNase DNA laddering References Further reading Apoptosis DNA
Apoptotic DNA fragmentation
[ "Chemistry", "Biology" ]
1,780
[ "Cell biology", "Signal transduction", "Apoptosis", "Molecular biology", "Biochemistry" ]
16,255,401
https://en.wikipedia.org/wiki/Cray%20APP
The Cray APP (Attached Parallel Processor) was a parallel computer sold by Cray Research from 1992 onwards. It was based on the Intel i860 microprocessor and could be configured with up to 84 processors. The design was based on "computational nodes" of 12 processors interconnected by a shared bus, with multiple nodes connected to each other, memory and I/O nodes via an 8×8 crossbar switch. The APP was marketed as a "matrix co-processor" system and required a SPARC-based host system to operate, such as the Cray S-MP. Connection to the host system was via VMEbus or HiPPI. A fully configured APP had a peak performance of 6.7 (single-precision) gigaflops. The APP was originally designed by FPS Computing as the FPS MCP-784. FPS were acquired by Cray Research in 1991, becoming Cray Research Superservers Inc., and the MCP-784 was relaunched by Cray in 1992 as the APP. References Cray Attached Parallel Processor (APP) brief, SunFLASH Vol 58 #2, October 1993. App Supercomputers
Cray APP
[ "Technology" ]
244
[ "Supercomputers", "Computing stubs", "Supercomputing", "Computer hardware stubs" ]
16,255,496
https://en.wikipedia.org/wiki/Zirconyl%20chloride
Zirconyl chloride is the inorganic compound with the formula of [Zr4(OH)8(H2O)16]Cl8(H2O)12, more commonly written ZrOCl2·8H2O, and referred to as zirconyl chloride octahydrate. It is a white solid and is the most common water-soluble derivative of zirconium. A compound with the formula ZrOCl2 has not been characterized. Production and structure The salt is produced by hydrolysis of zirconium tetrachloride or by treating zirconium oxide with hydrochloric acid. It adopts a tetrameric structure consisting of the cation [Zr4(OH)8]8+, which features four pairs of hydroxide bridging ligands linking four Zr4+ centers. The chloride anions are not ligands, consistent with the high oxophilicity of Zr(IV). The salt crystallizes as tetragonal crystals. See also Zirconyl acetate References External links MSDS data Sigma-Aldrich Zirconium(IV) compounds Chlorides Metal halides Oxychlorides
Zirconyl chloride
[ "Chemistry" ]
253
[ "Chlorides", "Inorganic compounds", "Metal halides", "Salts" ]
16,256,753
https://en.wikipedia.org/wiki/Shell%E2%80%93Paques%20process
The Shell–Paques process, also known by the trade name of Thiopaq O&G, is a gas desulfurization technology for the removal of hydrogen sulfide from natural-, refinery-, synthesis- and biogas. The process was initially named after the Shell Oil and Paques purification companies. After the founders set up a dedicated joint venture, Paqell B.V., the trade name for applications in the Oil & Gas industry was changed to "THIOPAQ O&G". It is based on the biocatalytic conversion of sulfide into elemental sulfur. It operates at near-ambient conditions of temperature (about 30–40 °C) and pressure, which results in inherent safety. It is an alternative to, for example, the Claus process. Process chemistry Each reaction can be applied individually or sequentially as dictated by the characteristics of the stream to be treated. The process consists of three main sections: an absorber (gas washing section), a bioreactor (sulfide oxidation and regeneration of the washing liquid) and a sulfur handling section. The washing step uses a dilute alkaline solution to remove hydrogen sulfide (H2S) from the sour gas according to: H2S + NaOH → NaHS + H2O The loaded washing liquid is transported to a bioreactor where a biocatalyst oxidises the aqueous NaHS to elemental sulfur with about 95% selectivity according to: NaHS + ½ O2 → S + NaOH Combined reaction equation: H2S + ½ O2 → S + H2O The regenerated washing liquid is sent back to the washing column. The controlled partial oxidation of sulfide to elemental sulfur (the second reaction above) is catalyzed by naturally occurring microorganisms of the genus Halothiobacillus in the bioreactor. These natural, living microorganisms present in the bioreactor catalyse the sulfur conversions and are, by their nature, resilient and adaptive. In many situations the process can be used for sulfur removal and recovery. When sulfur recovery is desired, the elemental sulfur produced in the aerobic bioreactor will be separated from the aqueous effluent in a separator inside the reactor. The excess sulfur will be removed as an aqueous slurry or a cake of up to 65% dry solids content. There are several options for handling this slurry and converting it into products for sulfuric acid generation, fertiliser or fungicide. The system is flexible and has several processing options that have ready application in the petroleum refinery or petrochemical complex for managing a variety of sulfur-containing streams including sulfidic caustic, LPG, hydrotreater offgas and fuel gas. See also Flue-gas desulfurization References Industrial processes Shell plc Desulfurization
Shell–Paques process
[ "Chemistry" ]
599
[ "Desulfurization", "Separation processes" ]
16,257,394
https://en.wikipedia.org/wiki/Laboratory%20Unit%20for%20Computer%20Assisted%20Surgery
Laboratory Unit for Computer Assisted Surgery is a system used for virtual surgical planning. Starting in 1998, LUCAS was developed at the University of Regensburg, Germany, with the support of the Carl Zeiss Company. The resulting surgical planning is then reproduced onto the patient by using a navigation system. LUCAS is integrated into the same platform together with the Surgical Segment Navigator (SSN), the Surgical Tool Navigator (STN), the Surgical Microscope Navigator (SMN) and the 6DOF Manipulator (or, in German, "Mehrkoordinatenmanipulator" - MKM), also from the Carl Zeiss Company. Workflow Data from separate two-dimensional slices generated by a CT or MRI scan are uploaded into the LUCAS system. The resulting dataset is then processed to eliminate image noise and to enhance the anatomical contours and the general contrast of the images. The next step is to create a virtual 3D model from the gathered collection of 2D images. The bone segment that is to be repositioned is marked on the reconstructed 3D grid model; then the actual repositioning of that bone segment is done on the virtual model until the optimal anatomical position is obtained. The criteria for the optimal position of the bone segment are: symmetry with the opposite side, the continuity of the normal bone contours, or the normal volume of an anatomical region (such as the orbit). Afterwards, a textured final image is rendered. The calculated vectors for the bone segment repositioning, together with the whole virtual model, are finally transferred to the Surgical Segment Navigator. References Marmulla R, Niederdellmann H: Surgical Planning of Computer Assisted Repositioning Osteotomies, Plast Reconstr Surg 104 (4): 938-944, 1999 Oral and maxillofacial surgery Medical software Radiology Tomography Computer-assisted surgery
Laboratory Unit for Computer Assisted Surgery
[ "Biology" ]
397
[ "Medical software", "Medical technology" ]
16,258,324
https://en.wikipedia.org/wiki/Iranian%20Science%20and%20Culture%20Hall%20of%20Fame
Iranian Science and Culture Hall of Fame or Ever-lasting Names / People (; read as čehre-hā-ye māndegār) is a formal ceremony to honor influential contemporary scientific and cultural figures. The title is awarded to those who "will remain always alive with its impact on the life of the people of the country". The ceremony is hosted by the IRIB. Presenters The main presenter is IRIB TV4 with the collaboration of: Academy of Persian Language and Literature Iranian Academy of Sciences Iranian Academy of Medical Sciences Iranian Academy of the Arts University of Tehran Sharif University of Technology Iranian Research Institute of Philosophy Some of the notable members Majid Samii (1937–, neurosurgery) Ali Shariatmadari (1924–, educationist) Sayed Jafar Shahidi (1918–2008, historian of Islam) Caro Lucas (1951–2010, academician, father of Iranian robotics) Mahmoud Farshchian (1930–, master of Persian miniature) Ali Nassirian (1934–, actor) Mohammad Nouri (1929–2010, singer) Gholamreza Aavani (1943–, philosopher) Hashem Rafii-Tabar (1948–, nanotechnologist) Ali Reza Eftekhari (1958–, vocalist) Non-Iranian members Roger Garaudy (1913–2012, French author and philosopher) Annemarie Schimmel (1922–2003, German Iranologist) See also Science in Iran Culture of Iran References Science and technology halls of fame Halls of fame in Iran Science and technology in Iran Iranian Science and Culture Hall of Fame recipients
Iranian Science and Culture Hall of Fame
[ "Technology" ]
329
[ "Science and technology awards", "Science and technology halls of fame" ]
16,258,342
https://en.wikipedia.org/wiki/Formalism%20%28philosophy%20of%20mathematics%29
In the philosophy of mathematics, formalism is the view that holds that statements of mathematics and logic can be considered to be statements about the consequences of the manipulation of strings (alphanumeric sequences of symbols, usually as equations) using established manipulation rules. A central idea of formalism "is that mathematics is not a body of propositions representing an abstract sector of reality, but is much more akin to a game, bringing with it no more commitment to an ontology of objects or properties than ludo or chess." According to formalism, the truths expressed in logic and mathematics are not about numbers, sets, or triangles or any other coextensive subject matter — in fact, they are not "about" anything at all. Rather, mathematical statements are syntactic forms whose shapes and locations have no meaning unless they are given an interpretation (or semantics). In contrast to mathematical realism, logicism, or intuitionism, formalism's contours are less defined due to broad approaches that can be categorized as formalist. Along with realism and intuitionism, formalism is one of the main theories in the philosophy of mathematics that developed in the late nineteenth and early twentieth century. Among formalists, David Hilbert was the most prominent advocate. Early formalism The early mathematical formalists attempted "to block, avoid, or sidestep (in some way) any ontological commitment to a problematic realm of abstract objects." German mathematicians Eduard Heine and Carl Johannes Thomae are considered early advocates of mathematical formalism. Heine and Thomae's formalism can be found in Gottlob Frege's criticisms in The Foundations of Arithmetic. According to Alan Weir, the formalism of Heine and Thomae that Frege attacks can be "describe[d] as term formalism or game formalism." Term formalism is the view that mathematical expressions refer to symbols, not numbers. Heine expressed this view as follows: "When it comes to definition, I take a purely formal position, in that I call certain tangible signs numbers, so that the existence of these numbers is not in question." Thomae is characterized as a game formalist who claimed that "[f]or the formalist, arithmetic is a game with signs which are called empty. That means that they have no other content (in the calculating game) than they are assigned by their behaviour with respect to certain rules of combination (rules of the game)." Frege provides three criticisms of Heine and Thomae's formalism: "that [formalism] cannot account for the application of mathematics; that it confuses formal theory with metatheory; [and] that it can give no coherent explanation of the concept of an infinite sequence." Frege's criticism of Heine's formalism is that his formalism cannot account for infinite sequences. Dummett argues that more developed accounts of formalism than Heine's account could avoid Frege's objections by claiming they are concerned with abstract symbols rather than concrete objects. Frege objects to the comparison of formalism with that of a game, such as chess. Frege argues that Thomae's formalism fails to distinguish between game and theory. Hilbert's formalism A major figure of formalism was David Hilbert, whose program was intended to be a complete and consistent axiomatization of all of mathematics. 
Hilbert aimed to show the consistency of mathematical systems from the assumption that the "finitary arithmetic" (a subsystem of the usual arithmetic of the positive integers, chosen to be philosophically uncontroversial) was consistent (i.e. no contradictions can be derived from the system). The way that Hilbert tried to show that an axiomatic system was consistent was by formalizing it using a particular language. In order to formalize an axiomatic system, a language must first be chosen in which operations can be expressed and performed within that system. This language must include five components: It must include variables such as x, which can stand for some number. It must have quantifiers such as the symbol for the existence of an object. It must include equality. It must include connectives such as ↔ for "if and only if." It must include certain undefined terms called parameters. For geometry, these undefined terms might be something like a point or a line, which we still choose symbols for. By adopting this language, Hilbert thought that all theorems could be proven within any axiomatic system using nothing more than the axioms themselves and the chosen formal language. Gödel's conclusion in his incompleteness theorems was that one cannot prove consistency within any consistent axiomatic system rich enough to include classical arithmetic. On the one hand, only the formal language chosen to formalize this axiomatic system must be used; on the other hand, it is impossible to prove the consistency of this language in itself. Hilbert was originally frustrated by Gödel's work because it shattered his life's goal to completely formalize everything in number theory. However, Gödel did not feel that he contradicted everything about Hilbert's formalist point of view. After Gödel published his work, it became apparent that proof theory still had some use, the only difference is that it could not be used to prove the consistency of all of number theory as Hilbert had hoped. Hilbert was initially a deductivist, but he considered certain metamathematical methods to yield intrinsically meaningful results and was a realist with respect to the finitary arithmetic. Later, he held the opinion that there was no other meaningful mathematics whatsoever, regardless of interpretation. Further developments Other formalists, such as Rudolf Carnap, considered mathematics to be the investigation of formal axiom systems. Haskell Curry defines mathematics as "the science of formal systems." Curry's formalism is unlike that of term formalists, game formalists, or Hilbert's formalism. For Curry, mathematical formalism is about the formal structure of mathematics and not about a formal system. Stewart Shapiro describes Curry's formalism as starting from the "historical thesis that as a branch of mathematics develops, it becomes more and more rigorous in its methodology, the end-result being the codification of the branch in formal deductive systems." Criticism Kurt Gödel indicated one of the weak points of formalism by addressing the question of consistency in axiomatic systems. Bertrand Russell has argued that formalism fails to explain what is meant by the linguistic application of numbers in statements such as "there are three men in the room". See also QED project Mathematical formalism Formalized mathematics Formal system References External links Philosophy of mathematics
Formalism (philosophy of mathematics)
[ "Mathematics" ]
1,370
[ "nan" ]
16,258,350
https://en.wikipedia.org/wiki/FRAS1
Extracellular matrix protein FRAS1 is a protein that in humans is encoded by the FRAS1 (Fraser syndrome 1) gene. This gene encodes an extracellular matrix protein that appears to function in the regulation of epidermal-basement membrane adhesion and organogenesis during development. Metastatic prostate cancer A single nucleotide polymorphism in the FRAS1 promoter region is associated with metastatic prostate cancer. The promoter region is directly related to the NFkB pathway and has been shown to be associated with lethal prostate cancer. FRAS1-related extracellular matrix protein 1 (FREM1) is directly related to congenital diaphragmatic hernia in developing fetuses. Decreased expression of FREM1 may be linked with disruptions in the growth of diaphragm cells. Both FRAS1 and FREM1 are among the proteins that primarily interact during embryonic development. It has been shown that a decrease in these two proteins leads to an increased incidence of congenital diaphragmatic hernia in both humans and mice. Clinical significance Mutations in this gene have been observed to cause Fraser syndrome. See also Fraser syndrome References Further reading Extracellular matrix proteins
FRAS1
[ "Chemistry", "Biology" ]
237
[ "Biochemistry stubs", "Biotechnology stubs", "Biochemistry" ]
16,259,622
https://en.wikipedia.org/wiki/Fleming%E2%80%93Tamao%20oxidation
The Fleming–Tamao oxidation, or Tamao–Kumada–Fleming oxidation, converts a carbon–silicon bond to a carbon–oxygen bond with a peroxy acid or hydrogen peroxide. Fleming–Tamao oxidation refers to two slightly different conditions developed concurrently in the early 1980s by the Kohei Tamao and Ian Fleming research groups. The reaction is stereospecific with retention of configuration at the carbon–silicon bond. This allows the silicon group to be used as a functional equivalent of the hydroxyl group. Another key feature of the silicon group is that it is relatively stable due to the presence of the silicon atom, and therefore can tolerate various reaction conditions that the hydroxyl group can not tolerate. Due to the stability of the silicon group, organosilicon compounds are useful in the total synthesis of complex natural products and pharmaceutical drugs. For instance, the Fleming–Tamao oxidation has been used to accomplish the synthesis of subunits of tautomycin, an inhibitor that is used as a lead cancer compound and as an immunosuppressant. History In 1983, Tamao and co-workers were the first to report the successful transformation of an allyl alkoxy silyl group to an allyl alcohol without an allylic shift. In their report, the chemists observed that the hydroxyl group was introduced exclusively onto the carbon atom to which the silicon atom was attached. In the same year, Tamao and his group published another paper that showed that the carbon–silicon bond in alkoxy organosilicon compounds can be cleaved using H2O2 or m-CPBA under acidic, basic, or neutral conditions, to afford the corresponding alcohols. A year later, Ian Fleming and his group reported that the dimethylphenylsilyl (Me2PhSi) group can be converted to a hydroxyl group in a two-pot sequence. Later, in 1987, Fleming reported a one-pot variant of the two-pot sequence in which either bromine or mercuric ion acts as the electrophile. These early findings paved the way for the development of a large number of silicon-based reagents and the use of various silyl groups as functional equivalents of the hydroxyl group. Mechanisms Tamao–Kumada oxidation Although the mechanism below is for the basic condition, the proposed mechanism for the Tamao oxidation is similar under each condition. The mechanism below involves a substrate with at least one fluorine atom as a substituent, which is the prototype structure that Tamao studied. Fluoride, provided by a fluoride source or a donor solvent, attacks the fluorosilane in a fast and reversible step to give a pentacoordinated species. This species is more electrophilic than the fluorosilane, thereby promoting attack by the nucleophilic oxidant to yield the negatively charged hexacoordinated transition state. This step was determined to be the rate-determining step based on kinetic studies done by Tamao. Further studies by Tamao on the steric and electronic effects of different groups attached to the silicon led him to suggest that attack by the oxidant trans to the electronegative fluoride group is energetically favored. The group cis to the peroxide oxygen in the transition state structure then migrates preferentially, thus explaining the retention of configuration at the carbon center. Finally, the new silicon–oxygen bond of the hexacoordinated species is hydrolyzed by water in the reaction medium. Subsequent workup produces the expected alcohol. 
Fleming oxidation Two-pot sequence Unlike the Tamao oxidation, whose starting material is an activated heteroatom-substituted silyl group, the Fleming oxidation utilizes a more robust silyl group which has only carbon atoms attached to the silicon atom. The prototype silyl structure that Fleming used was dimethylphenylsilyl. This aryl silane is then converted to the more reactive halo- or heterosilane to initiate the oxidation. The mechanism of the two-pot sequence differs from the Tamao oxidation since the reagents are different. First, an electrophile attacks the phenyl ring in the ipso position to give a beta-carbocation that is stabilized by the silicon group. A heteroatom then attacks the silicon group, which allows the phenyl ring to leave, in a key step referred to as protodesilylation of the arylsilane. The alkyl group undergoes 1,2-migration from the silicon to the oxygen atom. Aqueous acid-mediated hydrolysis and subsequent workup yield the desired alcohol. It is difficult to prevent small resulting silyl alcohols from dehydrating to form siloxanes. One-pot sequence The main difference between the one-pot and two-pot sequences is that the former has bromine or mercuric ion as the electrophile that is attacked by the benzene ring. The bromine electrophile is generated by diatomic bromine or another source such as potassium bromide, which can be oxidized to generate bromine in situ by the peracetic acid. The source of the mercuric ion is mercuric acetate, and this reagent is mixed with peracetic acid in AcOH to provide the oxidizing conditions. The mechanism for the one-pot and two-pot sequences is the same since the bromine or mercuric ion is attacked by the phenyl ring instead of the hydrogen ion. Scope The Tamao–Kumada oxidation, or the Tamao oxidation, uses a silyl group with a hydrogen atom, a heteroatom or an electron-donating group attached to the silicon atom to make it more reactive. Tamao used either a fluorine or chlorine atom, or an alkoxy (OR) or amine (NR2) group, as the substituent on the substrates. In addition to varying the percent composition of oxidants and combining different solvents, Tamao also used additives such as acetic anhydride (Ac2O), potassium hydrogen fluoride (KHF2), and potassium hydrogen carbonate (KHCO3) or sodium hydrogen carbonate (NaHCO3) to make the reaction conditions slightly acidic, neutral, and alkaline, respectively. The different conditions were used to observe the effect that the pH environment had on the oxidative cleavage of the various alkoxy groups. Variations Recently, the Fleming–Tamao oxidation has been used to generate phenol and substituted phenols in very good yield. The Tamao oxidation has been used to synthesize acids, aldehydes, and ketones under varying reaction conditions. Whereas the carbon-silicon bond of a substituted alkylsilyl group is cleaved to a carbon-oxygen single bond, a substituted alkenylsilyl group is transformed to a carbonyl under the same Tamao oxidation conditions employed for alkylsilanes. Advantages of a C–Si linkage The silyl group is a non-polar and relatively unreactive species and is therefore tolerant of many reagents and reaction conditions that might be incompatible with free alcohols. Consequently, the silyl group also eliminates the need for introduction of hydroxyl protecting groups. 
In short, by deferring introduction of an alcohol to a late synthetic stage, opting instead to carry through a silane, a number of potential problems experienced in total syntheses can be mitigated or avoided entirely. Steric effects One of the major pitfalls of either the Fleming or Tamao oxidations is steric hindrance. Increasing the steric bulk at the silicon center generally slows down the reaction, potentially even suppressing the reaction entirely when certain substituents are employed. In general, less bulky groups such as methyl or ethyl favor oxidation, while bulkier groups such as tert-butyl slow down or stop oxidation. There are special cases in which this pattern is not followed. For example, alkoxy groups tend to enhance oxidation, while oxidation does not proceed under normal conditions when three alkyl substituents are attached to the silicon atom. Applications Natural product synthesis The natural product (+)-pramanicin became an interesting target for synthesis because it was observed to be active against a fungal pathogen that resulted in meningitis in AIDS patients. Therefore, its synthesis, which utilized the Fleming–Tamao oxidation as a crucial step, has been relevant to chemists as well as to patients afflicted by AIDS. The antifungal agent has also been shown previously to induce cell death and increase calcium levels in vascular endothelial cells. Furthermore, (+)-pramanicin has a wide range of potential applications against human diseases. Polyol synthesis Polyols and diols are especially useful to the food industry and polymer chemistry. Their importance is underscored by the fact that they can be used as sugar replacers for diabetics or those who choose to have sugar-free or low-calorie diets. The Fleming–Tamao oxidation has been applied in the synthesis of stereoselective diols. Woerpel used the reaction to synthesize anti-1,3-diols from a functionalized silyl anion. Alternatively, Hara, Moralee, and Ojima achieved syn-1,3-diols using the Tamao oxidation. See also Baeyer-Villiger oxidation External links Tamao-Fleming Oxidation References Organic oxidation reactions Name reactions
Fleming–Tamao oxidation
[ "Chemistry" ]
1,987
[ "Name reactions", "Organic oxidation reactions", "Organic redox reactions", "Organic reactions" ]
16,259,862
https://en.wikipedia.org/wiki/Polynomial%20code
In coding theory, a polynomial code is a type of linear code whose set of valid code words consists of those polynomials (usually of some fixed length) that are divisible by a given fixed polynomial (of shorter length, called the generator polynomial). Definition Fix a finite field GF(q), whose elements we call symbols. For the purposes of constructing polynomial codes, we identify a string of n symbols a_{n-1}...a_1a_0 with the polynomial a_0 + a_1x + ... + a_{n-1}x^{n-1}. Fix integers m ≤ n and let g(x) be some fixed polynomial of degree m, called the generator polynomial. The polynomial code generated by g(x) is the code whose code words are precisely the polynomials of degree less than n that are divisible (without remainder) by g(x). Example Consider the polynomial code over GF(2) with n = 5, m = 2, and generator polynomial g(x) = x^2 + x + 1. This code consists of the code words q(x)·g(x) for every quotient polynomial q(x) of degree less than 3: 0·g(x), 1·g(x), x·g(x), (x + 1)·g(x), x^2·g(x), (x^2 + 1)·g(x), (x^2 + x)·g(x) and (x^2 + x + 1)·g(x). Since the polynomial code is defined over the binary Galois field GF(2), coefficients are added as a modulo-2 sum and the final polynomials are: 0, x^2 + x + 1, x^3 + x^2 + x, x^3 + 1, x^4 + x^3 + x^2, x^4 + x^3 + x + 1, x^4 + x and x^4 + x^2 + 1. Equivalently, expressed as strings of binary digits (coefficients of x^4 down to x^0), the codewords are: 00000, 00111, 01110, 01001, 11100, 11011, 10010, 10101. This, as every polynomial code, is indeed a linear code, i.e., linear combinations of code words are again code words. In a case like this where the field is GF(2), linear combinations are found by taking the XOR of the codewords expressed in binary form (e.g. 00111 XOR 10010 = 10101). Encoding In a polynomial code over GF(q) with code length n and generator polynomial g(x) of degree m, there will be exactly q^(n−m) code words. Indeed, by definition, p(x) is a code word if and only if it is of the form p(x) = q(x)·g(x), where q(x) (the quotient) is of degree less than n − m. Since there are q^(n−m) such quotients available, there are the same number of possible code words. Plain (unencoded) data words should therefore be of length n − m. Some authors, such as (Lidl & Pilz, 1999), only discuss the mapping d(x) → d(x)·g(x) as the assignment from data words to code words. However, this has the disadvantage that the data word does not appear as part of the code word. Instead, the following method is often used to create a systematic code: given a data word d(x) of length n − m, first multiply d(x) by x^m, which has the effect of shifting d(x) by m places to the left. In general, x^m·d(x) will not be divisible by g(x), i.e., it will not be a valid code word. However, there is a unique code word that can be obtained by adjusting the rightmost m symbols of x^m·d(x). To calculate it, compute the remainder r(x) of dividing x^m·d(x) by g(x): x^m·d(x) = q(x)·g(x) + r(x), where r(x) is of degree less than m. The code word corresponding to the data word d(x) is then defined to be c(x) = x^m·d(x) − r(x). Note the following properties: c(x) = q(x)·g(x), which is divisible by g(x). In particular, c(x) is a valid code word. Since r(x) is of degree less than m, the leftmost n − m symbols of c(x) agree with the corresponding symbols of x^m·d(x). In other words, the first n − m symbols of the code word are the same as the original data word. The remaining m symbols are called checksum digits or check bits. Example For the above code with n = 5, m = 2, and generator polynomial g(x) = x^2 + x + 1, we obtain the following assignment from data words to codewords: 000 → 00000, 001 → 00111, 010 → 01001, 011 → 01110, 100 → 10010, 101 → 10101, 110 → 11011, 111 → 11100. Decoding An erroneous message can be detected in a straightforward way through polynomial division by the generator polynomial resulting in a non-zero remainder. Assuming that the code word is free of errors, a systematic code can be decoded simply by stripping away the m checksum digits. If there are errors, then error correction should be performed before decoding. Efficient decoding algorithms exist for specific polynomial codes, such as BCH codes. 
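To make the systematic encoding procedure above concrete, here is a minimal Python sketch (the helper names are illustrative, not from any standard library). It performs polynomial division over GF(2) using integers as coefficient bit-vectors and reproduces the example assignment table for g(x) = x^2 + x + 1.

def gf2_mod(dividend: int, divisor: int) -> int:
    # Remainder of polynomial division over GF(2); bit i of each integer is the
    # coefficient of x^i, and subtracting a shifted copy of the divisor is an XOR.
    while dividend.bit_length() >= divisor.bit_length():
        shift = dividend.bit_length() - divisor.bit_length()
        dividend ^= divisor << shift
    return dividend

def encode_systematic(data: str, generator: str) -> str:
    # Systematic encoding: shift the data word left by m = deg(g) places,
    # then adjust the rightmost m symbols with the remainder.
    m = len(generator) - 1
    d, g = int(data, 2), int(generator, 2)
    r = gf2_mod(d << m, g)              # remainder of x^m * d(x) modulo g(x)
    return format((d << m) ^ r, "0{}b".format(len(data) + m))

# Reproduce the table above: n = 5, m = 2, g(x) = x^2 + x + 1 encoded as "111".
for word in ["000", "001", "010", "011", "100", "101", "110", "111"]:
    print(word, "->", encode_systematic(word, "111"))

Because the remainder has degree less than m, the leftmost n − m bits of each printed codeword are exactly the data word, matching the table and the definition of a systematic code.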
Properties of polynomial codes As for all digital codes, the error detection and correction abilities of polynomial codes are determined by the minimum Hamming distance of the code. Since polynomial codes are linear codes, the minimum Hamming distance is equal to the minimum weight of any non-zero codeword. In the example above, the minimum Hamming distance is 2, since 01001 is a codeword, and there is no nonzero codeword with only one bit set. More specific properties of a polynomial code often depend on particular algebraic properties of its generator polynomial. Here are some examples of such properties: A polynomial code is cyclic if and only if the generator polynomial divides x^n − 1. If the generator polynomial is primitive, then the resulting code has Hamming distance at least 3, provided that n ≤ 2^m − 1. In BCH codes, the generator polynomial is chosen to have specific roots in an extension field, in a way that achieves high Hamming distance. The algebraic nature of polynomial codes, with cleverly chosen generator polynomials, can also often be exploited to find efficient error correction algorithms. This is the case for BCH codes. Specific families of polynomial codes Cyclic codes – every cyclic code is also a polynomial code; a popular example is the CRC code. BCH codes – a family of cyclic codes with high Hamming distance and efficient algebraic error correction algorithms. Reed–Solomon codes – an important subset of BCH codes with particularly efficient structure. References W.J. Gilbert and W.K. Nicholson: Modern Algebra with Applications, 2nd edition, Wiley, 2004. R. Lidl and G. Pilz. Applied Abstract Algebra, 2nd edition. Wiley, 1999. Coding theory
Polynomial code
[ "Mathematics" ]
1,068
[ "Discrete mathematics", "Coding theory" ]
16,260,043
https://en.wikipedia.org/wiki/John%20E.%20Amoore
John E. Amoore (1930–1998) was a British biochemist who first proposed the stereochemical theory for olfaction. Bibliography Molecular Basis of Odor John E. Amoore, Published 1970, Thomas How Smells Shape Up John E. Amoore, Published 1977, American Chemical Society References British biochemists 1930 births 1998 deaths
John E. Amoore
[ "Chemistry" ]
72
[ "Biochemistry stubs", "Biochemists", "Biochemist stubs" ]
16,260,262
https://en.wikipedia.org/wiki/Adrian%20John%20Brown
Adrian John Brown, FRS (27 April 1852 – 2 July 1919) was a British Professor of Malting and Brewing at the University of Birmingham and a pioneer in the study of enzyme kinetics. He was born at Burton-on-Trent, Staffordshire to Edwin Brown, a bank manager in the town. His elder brother was Horace Tabberer Brown. He attended the local grammar school and then went up to study chemistry at the Royal College of Science in London. He became private assistant to Dr Russell at St Bartholomew's Hospital Medical School. In 1873 he returned to Burton to work as a chemist in the brewing industry for the next twenty-five years. In 1899 he left to become Professor of Brewing and Malting at Mason University College (which became Birmingham University in 1900). He studied the rate of fermentation of sucrose by yeast and suggested in 1892 that a substance in the yeast might be responsible for speeding up the reaction. This was the first time enzymes were suggested as separate entities from organisms and talked about in chemical terms. He later studied the enzyme responsible and made the striking suggestion that the kinetics he observed were the result of an enzyme–substrate complex being formed during the reaction, a concept that has formed the basis of all later work on enzyme kinetics. Similar ideas had been put earlier by German chemist and Nobel laureate Hermann Emil Fischer by comparing substrate and enzyme with a key and a lock. References Royal Society of Chemistry Biography External links A Brief History of Enzyme Kinetics 1852 births 1919 deaths People from Burton upon Trent British biochemists Fellows of the Royal Society Academics of the University of Birmingham
Adrian John Brown
[ "Chemistry" ]
326
[ "Biochemistry stubs", "Biochemists", "Biochemist stubs" ]
16,260,775
https://en.wikipedia.org/wiki/Cyberethics
Cyberethics is "a branch of ethics concerned with behavior in an online environment". In another definition, it is the "exploration of the entire range of ethical and moral issues that arise in cyberspace" while cyberspace is understood to be "the electronic worlds made visible by the Internet." For years, various governments have enacted regulations while organizations have defined policies about cyberethics. Theory According to Larry Lessig in Code and Other Laws of Cyberspace, there are four constraints that govern human behavior: law, (social) norms, the market and the code/architecture. The same four apply in cyberspace. Ethics are outside these four and complementary to them. In 2001, Herman T. Tavani considered whether computer ethics were different from cyberethics. While he agreed that "The internet has perpetuated and, in certain cases, exacerbated many of the ethical issues associated with the use of earlier computing technologies", he did not agree that there was enough of a difference to warrant introducing a new field. He extended the same opinion to internet ethics. Challenges According to Baird, Ramsower and Rosenbaum, it is difficult to unravel cyberethical issues since "the building material of cyberspace is information" and that is invisible and carries "value and ethical implications." They also point out that new ethical issues will arise since technology is changing and growing. Another challenge is that the internet is a borderless phenomenon and, according to some, it is "quite difficult for any nation to exercise local jurisdiction over information available in cyberspace", and so governments are better left with a "modest" role in Internet regulation. According to the International Telecommunication Union, 5.4 billion people were using the internet in 2023. That amounted to 67% of the world population. The number had increased by 45% since 2018. Privacy history In the late 19th century, the invention of cameras spurred ethical debates similar to those the internet raises today. In an 1890 Harvard Law Review article, Samuel D. Warren II and Louis Brandeis defined privacy from an ethical and moral point of view to be: "central to dignity and individuality and personhood. Privacy is also indispensable to a sense of autonomy—to 'a feeling that there is an area of an individual's life that is totally under his or her control, an area that is free from outside intrusion.' The deprivation of privacy can even endanger a person's health." Over the past century, the advent of the internet and the rapid expansion of e-commerce have ushered in a new era of privacy concerns. Governments and organizations collect vast amounts of private data, raising questions about individual autonomy and control over personal information. With the rise of online transactions and digital footprints, individuals face increased risks of privacy breaches and identity theft. This modern landscape necessitates a renewed ethical debate surrounding privacy rights in the digital age. Privacy can be decomposed into the limitation of others' access to an individual, with "three elements of secrecy, anonymity, and solitude." Anonymity refers to the individual's right to protection from undesired attention. Solitude refers to the lack of physical proximity of an individual to others. Secrecy refers to the protection of personalized information from being freely distributed. Moreover, digital security encompasses psychological and technical aspects, shaping users' perceptions of trust and safety in online interactions. 
Users' awareness of cybersecurity risks, alongside incident response protocols, authentication mechanisms, and encryption protocols, is pivotal in protecting digital environments. Despite advancements in defensive technologies, the cybersecurity landscape presents ongoing challenges, evident through a continuous influx of data breaches and cyber incidents reported across diverse sectors. This emphasizes the significance of comprehending user behavior and perceptions within the realm of cyberethics, as individuals navigate the intricacies of digital security in their online endeavors. Individuals surrender private information when conducting transactions and registering for services. Ethical business practice protects the privacy of customers by securing information which may contribute to the loss of secrecy, anonymity, and solitude. Credit card information, social security numbers, phone numbers, mothers' maiden names and addresses freely collected and shared over the internet may lead to a loss of privacy. Fraud and impersonation are some of the malicious activities that occur due to the direct or indirect abuse of private information. Identity theft is rising rapidly due to the availability of private information on the internet. For instance, seven million Americans fell victim to identity theft in 2002, and nearly 12 million Americans were victims of identity theft in 2011, making it the fastest-growing crime in the United States. Moreover, with the widespread use of social media and online transactions, the chances of identity theft are increasing. It's essential for people and businesses to stay cautious and implement strong security measures to prevent identity theft and financial fraud. Public records search engines and databases are the main culprits contributing to the rise of cybercrime. Listed below are a few recommendations to restrict online databases from proliferating sensitive personal information. Exclude sensitive unique identifiers from database records such as social security numbers, birth dates, hometown and mothers' maiden names. Exclude phone numbers that are normally unlisted. Clear provision of a method which allows people to have their names removed from a database. Banning reverse social security number lookup services. History of Cyberethics in Hacking The evolution of hacking raises ethical questions in cybersecurity. Once a hobby driven by curiosity, hacking has transformed into a profitable underground industry, with cybercriminals exploiting vulnerabilities for personal gain or political motives. This shift raises concerns about privacy violations, financial losses, and societal harm resulting from cyberattacks. The emergence of cybercriminals exploiting vulnerabilities in digital systems for personal gain or political motives has led to ethical dilemmas surrounding hacking practices. Bug bounty programs and vulnerability disclosure introduce complexities, blurring the lines between legitimate security research and malicious exploitation. Balancing security imperatives with respect for privacy rights presents challenges in safeguarding critical infrastructure while upholding individual liberties. Addressing the ethical dimensions of hacking requires collaborative efforts across industry sectors, governmental agencies, and academia. Establishing ethical frameworks for vulnerability disclosure, bug bounty programs, and penetration testing is essential to ensure responsible cybersecurity practices. 
International cooperation and information sharing are imperative to combat cyber threats that transcend national borders and jurisdictions. Private collection Data warehouses are used today to collect and store huge amounts of personal data and consumer transactions. These facilities can preserve large volumes of consumer information for an indefinite amount of time. Some of the key architectures contributing to the erosion of privacy include databases, cookies and spyware. Some may argue that data warehouses are supposed to stand alone and be protected. However, the fact is that enough personal information can be gathered from corporate websites and social networking sites to initiate a reverse lookup. Therefore, is it not important to address some of the ethical issues regarding how protected data ends up in the public domain? As a result, identity theft protection businesses are on the rise. The market is predicted to reach 34.7 billion (USD) by 2032, according to Market.us. Property Ethical debate has long included the concept of property. This concept has created many clashes in the world of cyberethics. One philosophy of the internet is centered around the freedom of information. The controversy over ownership occurs when the property of information is infringed upon or uncertain. Intellectual property rights The ever-increasing speed of the internet and the emergence of compression technology, such as mp3, opened the doors to peer-to-peer file sharing, a technology that allowed users to anonymously transfer files to each other, previously seen on programs such as Napster and now seen through communications protocols such as BitTorrent. Much of this, however, was copyrighted music and illegal to transfer to other users. Whether it is ethical to transfer copyrighted media is another question. Proponents of unrestricted file sharing point out how file sharing has given people broader and faster access to media, has increased exposure to new artists, and has reduced the costs of transferring media (including less environmental damage). Supporters of restrictions on file sharing argue that we must protect the income of our artists and other people who work to create our media. This argument is partially answered by pointing to the small proportion of money artists receive from the legitimate sale of media. A similar debate can be seen over intellectual property rights in respect to software ownership. The two opposing views are for closed source software distributed under restrictive licenses or for free and open-source software. The argument can be made that restrictions are required because companies would not invest weeks and months in development if there were no incentive for revenue generated from sales and licensing fees. A counterargument to this is that standing on the shoulders of giants is far cheaper when the giants do not hold IP rights. Some proponents of free software believe that source code for most programs should be available to anyone who uses them, in a manner which respects their freedoms. Digital rights management (DRM) With the introduction of digital rights management software, new issues are raised over whether the subverting of DRM is ethical. Some champion the hackers of DRM as defenders of users' rights, allowing the blind to make audio books of PDFs they receive, allowing people to burn music they have legitimately bought to CD or to transfer it to a new computer. 
Others see this as simply a violation of the rights of the intellectual property holders, opening the door to uncompensated use of copyrighted media. Another ethical issue concerning DRMs involves the way these systems could undermine the fair use provisions of the copyright laws. The reason is that these systems allow content providers to choose who can view or listen to their materials, making discrimination against certain groups possible. In addition, the level of control given to content providers could lead to the invasion of user privacy, since the system is able to keep tabs on the personal information and activities of users who access their materials. In the United States, the Digital Millennium Copyright Act (DMCA) reinforces this aspect of DRM technology, particularly in the way the flow of information is controlled by content providers. Programs or any technologies that attempt to circumvent DRM controls are in violation of one of its provisions (Section 1201). Accessibility, censorship and filtering Accessibility, censorship and filtering bring up many ethical issues that have several branches in cyberethics. Many questions have arisen which continue to challenge our understanding of privacy, security and our participation in society. Throughout the centuries, mechanisms have been constructed in the name of protection and security. Today the applications are in the form of software that filters domains and content so that they may not be easily accessed or obtained without elaborate circumvention, or, on a personal and business level, through free or content-control software. Internet censorship and filtering are used to control or suppress the publishing or accessing of information. The legal issues are similar to those of offline censorship and filtering. The same arguments that apply to offline censorship and filtering apply to online censorship and filtering: whether people are better off with free access to information or should be protected from what is considered by a governing body as harmful, indecent or illicit. The fear of access by minors drives much of the concern, and many online advocacy groups have sprung up to raise awareness and to promote control over minors' access to the internet. Censorship and filtering occur on small to large scales, whether it be a company restricting its employees' access to cyberspace by blocking certain websites deemed relevant only to personal usage and therefore damaging to productivity, or, on a larger scale, a government creating large firewalls which censor and filter access to certain information available online, frequently from outside the country, for its citizens and anyone within its borders. One of the most famous examples of a country controlling access is the Golden Shield Project, also referred to as the Great Firewall of China, a censorship and surveillance project set up and operated by the People's Republic of China. Another instance is the 2000 case of the League Against Racism and Antisemitism (LICRA), French Union of Jewish Students, vs. Yahoo! Inc (USA) and Yahoo! France, where the French Court declared that "access by French Internet users to the auction website containing Nazi objects constituted a contravention of French law and an offence to the 'collective memory' of the country and that the simple act of displaying such objects (e.g. 
exhibition of uniforms, insignia or emblems resembling those worn or displayed by the Nazis) in France constitutes a violation of the Article R645-1 of the Penal Code and is therefore considered as a threat to internal public order." Since the French judicial ruling, many websites must abide by the rules of the countries in which they are accessible. Freedom of information Freedom of information, that is, the freedom of speech as well as the freedom to seek, obtain and impart information, brings up the question of who or what has jurisdiction in cyberspace. The right of freedom of information is commonly subject to limitations dependent upon the country, society and culture concerned. Generally there are three standpoints on the issue as it relates to the internet. The first is the argument that the internet is a form of media, put out and accessed by citizens of governments, and therefore should be regulated by each individual government within the borders of its respective jurisdiction. The second is the argument that "Governments of the Industrial World... have no sovereignty [over the Internet] ... We have no elected government, nor are we likely to have one,... You have no moral right to rule us nor do you possess any methods of enforcement we have true reason to fear." The third holds that, because the internet supersedes all tangible borders such as the borders of countries, authority should be given to an international body, since what is legal in one country may be against the law in another. Digital divide An issue specific to the ethical issues of the freedom of information is what is known as the digital divide. This refers to the unequal socio-economic divide between those who have access to digital and information technology, such as cyberspace, and those who have limited or no access at all. This gap of access between countries or regions of the world is called the global digital divide. Sexuality and pornography Sexuality in terms of sexual orientation, infidelity, sex with or between minors, public display and pornography has always stirred ethical controversy. These issues are reflected online to varying degrees. In terms of its resonance, the historical development of the online pornography industry and user-generated content have been studied by media academics. One of the largest cyberethical debates is over the regulation, distribution and accessibility of pornography online. Hardcore pornographic material is generally controlled by governments, with laws regarding how old one has to be to obtain it and what forms are acceptable or not. The availability of pornography online calls into question jurisdiction as well as brings up the problem of regulation, in particular over child pornography, which is illegal in most countries, as well as pornography involving violence or animals, which is restricted within most countries. Gambling Gambling is often a topic in ethical debate as some view it as inherently wrong and support prohibition or controls while others advocate for no legal restrictions. "Between these extremes lies a multitude of opinions on what types of gambling the government should permit and where it should be allowed to take place. Discussion of gambling forces public policy makers to deal with issues as diverse as addiction, tribal rights, taxation, senior living, professional and college sports, organized crime, neurobiology, suicide, divorce, and religion." Due to its controversy, gambling is either banned or heavily controlled on local or national levels. 
The accessibility of the internet and its ability to cross geographic borders have led to illegal online gambling, often run as offshore operations. Over the years online gambling, both legal and illegal, has grown exponentially, which has led to difficulties in regulation. This enormous growth has led some to question the ethical place of gambling online. In education There are particular cyberethics concerns in an educational setting: plagiarism or other appropriation of intellectual property, cyberbullying and other harmful activities, as well as accessing inappropriate material such as a test key. There is also the issue of bringing into the classroom material that was meant for a different audience on a social media platform and whose authors did not give permission for its classroom use. Another issue is the authenticity and accuracy of online material used for learning. On the other hand, some might only feel able to express themselves under anonymous conditions, where true collaboration can happen. Cyberbullying Cyberbullying occurs when "a student is threatened, humiliated, harassed, embarrassed or targeted by another student". It encompasses many of the same issues that come with bullying, but it extends beyond "the physical schoolyard". Cyberbullying takes place "on Web or social networking sites, or using email, text messaging or instant messaging". It evolved with the increased use of information and communication technology. It can also reach a victim 24 hours a day, 7 days a week, in places that are outside the reach of traditional forms of bullying. The term cyberstalking, "the use of electronic communication to harass or threaten someone with physical harm", is sometimes used interchangeably with cyberbullying; however, cyberstalking is a specific form of cyberbullying. Cyberstalking is a federal crime in the United States as part of the Violence Against Women Act of 2005. This law was amended in 2013 to include stalking over the Internet and by telephone, and introduced penalties of up to five years in prison and a US$250,000 fine. The UK-based Internet Watch Foundation reported in September 2023 that sextortion was on the rise, as numbers for the first half of that year "surged by 257% compared with the whole of 2022". Similarly, the American Federal Bureau of Investigation reported in January 2024 that in the period of October 2022 to March 2023 there was "at least a 20% increase" in cases as compared to the same period the previous year. Between October 2021 and March 2023, 12,600 victims were registered and 20 suicides were linked to sextortion. The victims of sextortion are most often young boys.
Related organizations The following organizations are of notable interest in cyberethics debates: International Federation for Information Processing (IFIP) Association for Computing Machinery, Special Interest Group: Computers and Society (SIGCAS) Electronic Privacy Information Center (EPIC) Electronic Frontier Foundation (EFF) International Center for Information Ethics (ICIE) Directions and Implications in Advanced Computing (DIAC) The Centre for Computing and Social Responsibility (CCSR) Cyber-Rights and Cyber-liberties International Journal of Cyber Ethics in Education (IJCEE) The Center for Digital Ethics and Policy (CDEP) Codes of ethics in computing Four notable examples of ethics codes for IT professionals are listed below: The Code of Fair Information Practices The Code of Fair Information Practices is based on five principles outlining the requirements for record-keeping systems. These requirements were implemented in 1973 by the U.S. Department of Health, Education and Welfare. There must be no personal data record-keeping systems whose very existence is secret. There must be a way for a person to find out what information about the person is in a record and how it is used. There must be a way for a person to prevent information about the person that was obtained for one purpose from being used or made available for other purposes without the person's consent. There must be a way for a person to correct or amend a record of identifiable information about the person. Any organization creating, maintaining, using, or disseminating records of identifiable personal data must assure the reliability of the data for their intended use and must take precautions to prevent misuses of the data. RFC 1087 In January 1989, the Internet Architecture Board (IAB), in RFC 1087, titled "Ethics and the Internet," defined an activity as unethical and unacceptable if it: Seeks to gain unauthorized access to the resources of the Internet. Disrupts the intended use of the internet. Wastes resources (people, capacity, computer) through such actions. Destroys the integrity of computer-based information, or Compromises the privacy of users. The document also defined the roles of government and users. However, these provisions were seen as intended for the protection of U.S. government investment in the infrastructure of the Internet. Ten Commandments of Computer Ethics In 1992, Ramon C. Barquin authored a set of principles based on the IAB's RFC 1087 in a paper called "In Pursuit of a 'Ten Commandments' for Computer Ethics". These were published in 1992 by the Computer Ethics Institute, a nonprofit organization whose mission is to advance technology by ethical means. It lists these rules: Thou shalt not use a computer to harm other people. Thou shalt not interfere with other people's computer work. Thou shalt not snoop around in other people's computer files. Thou shalt not use a computer to steal. Thou shalt not use a computer to bear false witness. Thou shalt not copy or use proprietary software for which you have not paid. Thou shalt not use other people's computer resources without authorization or proper compensation. Thou shalt not appropriate other people's intellectual output. Thou shalt think about the social consequences of the program you are writing or the system you are designing. Thou shalt always use a computer in ways that ensure consideration and respect for your fellow humans.
(ISC)² Code of Ethics The International Information System Security Certification Consortium is a professional association known as (ISC)², which seeks to inspire a safe and secure cyber world. It has defined its own code of ethics. The code is based on four canons, under a general preamble: Code of Ethics Preamble: The safety and welfare of society and the common good, duty to our principals, and duty to each other, require that we adhere, and be seen to adhere, to the highest ethical standards of behavior. Code of Ethics Canons: Protect society, the common good, necessary public trust and confidence, and the infrastructure. Act honorably, honestly, justly, responsibly, and legally. Provide diligent and competent service to principals. Advance and protect the profession. Ethical considerations in emerging technology Though it is impossible to predict all potential ethical implications resulting from new or emerging technology, ethical considerations early in the Research and Development (R&D) phases of a system or technology's lifecycle can help ensure the development of technology that adheres to ethical standards. Several methodologies, including frameworks and checklists, have been proposed by researchers for conducting ethical impact assessments on developing technology. The goal of these assessments is to identify potential ethical scenarios prior to the deployment and adoption of an emerging technology. The output from these assessments allows for the mitigation of potential ethical risk and ultimately helps to ensure that ethical standards are upheld as technology evolves. Additionally, the overlap of ethics and cybersecurity reveals a complex situation. Safeguarding important infrastructure and private data often clashes with worries about privacy. Deciding on security measures must balance protecting national interests with preserving civil liberties. Ethical concerns are crucial in dealing with the differences in cybersecurity practices between public and private sectors. Despite efforts to improve funding and cooperation, challenges remain in finding and stopping cyber threats, especially in government agencies. This shows the need for clear ethical guidelines to guide cybersecurity decisions. See also References External links Association for Computing Machinery, Special Interest Group: Computers and Society website International Center for Information Ethics website The Centre for Computing and Social Responsibility website Safer Internet Center which includes Awareness Node, Helpline and Hotline Cyprus Safer Internet Helpline, Cyprus Safer Internet Hotline Cyber-Rights and Cyber-Liberties website IEEE Website Association for Computing Machinery website ISC2 website Internet Architecture Board Get Cyber Space (Canada) Computer ethics Computing and society Cyberspace
Cyberethics
[ "Technology" ]
4,884
[ "Cyberspace", "Information technology", "Internet ethics", "Computing and society", "Ethics of science and technology", "Computer ethics" ]
16,261,888
https://en.wikipedia.org/wiki/HD%20268835
HD 268835 (or R66) (about 30 solar masses) is one of two stars that were identified by NASA's Spitzer Space Telescope in one of the Milky Way's nearest neighbor galaxies, the Large Magellanic Cloud (the other being R 126 or HD 37974), as being circled by monstrous dust disks that are theorised to be the origin of planets. Significance Both HD 268835 and HD 37974 are classified as hypergiants, very large and very bright. The dust clouds around them surprised astronomers because stars as big as these were thought to be inhospitable to planet formation, as their very strong winds make it difficult, if not impossible, for the dust clouds to "condense" into planets. "We do not know if planets like those in our solar system are able to form in the highly energetic, dynamic environment of these massive stars, but if they could, their existence would be a short and exciting one" said Charles Beichman, an astronomer at NASA's Jet Propulsion Laboratory and the California Institute of Technology, both in Pasadena, California. References Stars in the Large Magellanic Cloud Mensa (constellation) Luminous blue variables B-type hypergiants 268835 022989 CD-70 00273 B(e) stars
HD 268835
[ "Astronomy" ]
279
[ "Mensa (constellation)", "Constellations" ]
16,262,012
https://en.wikipedia.org/wiki/HD%2037974
HD 37974 (or R 126) is a variable B[e] hypergiant in the Large Magellanic Cloud. It is surrounded by an unexpected dust disk. Properties R126, formally RMC (Radcliffe Observatory Magellanic Cloud) 126, is a massive luminous star with several unusual properties. It exhibits the B[e] phenomenon, where forbidden emission lines appear in the spectrum due to extended circumstellar material. Its spectrum also shows normal (permitted) emission lines formed in denser material closer to the star, indicative of a powerful stellar wind. The spectra include silicate and polycyclic aromatic hydrocarbon (PAH) features that suggest a dusty disc. The star itself is a hot supergiant thought to be seventy times more massive than the Sun and over a million times more luminous. It has evolved away from the main sequence (having been an O-class star while on the main sequence) and is so luminous and large that it is losing material through its stellar wind over a billion times faster than the Sun. At that rate it would lose more material than the Sun contains in about 25,000 years. It is expected to evolve into a Wolf–Rayet star in several hundred thousand years. Dusty disc The dust cloud around R126 is surprising because stars as massive as these were thought to be inhospitable to planet formation due to powerful stellar winds making it difficult for dust particles to condense. The nearby hypergiant HD 268835 shows similar features and is also likely to have a dusty disc, so R126 is not unique. The disc extends outwards for 60 times the size of Pluto's orbit around the Sun, and probably contains as much material as the entire Kuiper belt. It is unclear whether such a disc represents the first or last stages of the planet-forming process. Variability The brightness of R126 varies in an unpredictable way by around 0.6 magnitude over timescales of tens to hundreds of days. The faster variations are characteristic of α Cygni variables, irregular pulsating supergiants. The slower variations are accompanied by changes in the colour of the star, with it being redder when it is visually brighter, typical of the S Doradus phases of luminous blue variables. See also List of most massive stars References Stars in the Large Magellanic Cloud Dorado B-type hypergiants Large Magellanic Cloud R126 037974 Alpha Cygni variables B(e) stars J05362586-6922558 CPD-69 420
HD 37974
[ "Astronomy" ]
523
[ "Dorado", "Constellations" ]
16,262,317
https://en.wikipedia.org/wiki/Gallagher%E2%80%93Hollander%20degradation
In the Gallagher–Hollander degradation (1946), pyruvic acid is removed from a linear aliphatic carboxylic acid, yielding a new acid with two fewer carbon atoms. The original publication concerns the conversion of a bile acid in a series of reactions: formation of the acid chloride (2) with thionyl chloride, formation of the diazoketone (3) with diazomethane, formation of the chloromethyl ketone (4) with hydrochloric acid, reduction of the chloromethyl ketone to the methyl ketone (5), ketone halogenation to 6, elimination with pyridine to the enone 7 and finally oxidation with chromium trioxide to the bisnorcholanic acid 8. References Organic reactions Name reactions Degradation reactions
Gallagher–Hollander degradation
[ "Chemistry" ]
156
[ "Name reactions", "Degradation reactions", "Organic reactions" ]
16,262,543
https://en.wikipedia.org/wiki/An%20Investigation%20of%20Global%20Policy%20with%20the%20Yamato%20Race%20as%20Nucleus
An Investigation of Global Policy with the Yamato Race as Nucleus was a Japanese government report created by the Ministry of Health and Welfare's Institute of Population Problems (now the National Institute of Population and Social Security Research), and completed on July 1, 1943. The document, comprising six volumes totaling 3,127 pages, deals with race theory in general, and the rationale behind policies adopted by wartime Japan towards other races, while also providing a vision of the Asia-Pacific under Japanese control. The document was written in an academic style, surveying Western philosophy on race from the writings of Plato and Aristotle to modern German social scientists, such as Karl Haushofer. A connection between racism, nationalism and imperialism was also claimed, with the conclusion, drawn by citing both British and German sources, that overseas expansionism was essential not only for military and economic security, but also for preserving racial consciousness. Concerns pertaining to the cultural assimilation of second and third generation immigrants into foreign cultures were also mentioned. Discovery The document was classified and largely forgotten until 1981, when portions were discovered in a used bookstore in Japan, and subsequently publicized by being used as source material for a chapter in historian John W. Dower's book War Without Mercy: Race and Power in the Pacific War. In 1982 the Ministry of Health and Welfare re-issued the full six-volume version along with another two volumes entitled The Influence of War upon Population as a reference work for historians. Impact Although external Japanese propaganda during World War II emphasized Pan-Asianist and anti-colonial themes, specifically anti-Western imperialist themes, domestic propaganda always took Japanese superiority over other Asians for granted. However, Japan never had an overarching racial theory for Asia until well into the 1930s, when, following the Japanese invasion of China, military planners decided that they should raise Japanese racial consciousness in order to forestall the potential assimilation of Japanese colonists. The document was written by the Ministry of Health and Welfare, which was not a powerful arm of the bureaucracy at the time. It had to essentially censor its own recommendations so as not to violate official doctrine and policy, and could not even obtain a public hearing for its ideas; the document therefore likely had little impact on Japanese policy. Themes Colonization and living space Some statements in the document coincide with the then-publicly espoused concept of the Yamato people; however, much of the work borrowed heavily from Nazi racial, political and economic theories, including mention of the "Jewish question" and inclusion of racist anti-Jewish political cartoons, although Japan had a mostly negligible and overlooked Jewish minority. The term "Blood and Soil" was frequently used, though usually in quotes, as if to indicate its alien origin. The authors rationalized Japanese colonization of most of the Eastern Hemisphere, including New Zealand and Australia, with projected populations by the 1950s, as "securing the living space of the Yamato race," a very clear reflection of the Nazi concept of Lebensraum. Racial supremacy It has been noted that even in the decades before World War II, Japanese culture regarded Gaijin (non-Japanese) people as subhuman, and Yamato master race ideology was included in government propaganda and in schools as well.
The belief that Japan was the superior Asian country was also common by the Meiji era, with discrimination even being enacted against racial minorities such as the Ryūkyū people. However, where the document deviated from Nazi ideology was in its use of Confucianism and the metaphor of the patriarchal family. This metaphor, with the non-Japanese Asians serving as children of the Japanese, rationalized the "equitable inequality" of Japanese political, economic, and cultural dominance. Just as a family has harmony and reciprocity, but with a clear-cut hierarchy, the Japanese, as a purportedly racially superior people, were destined to rule Asia "eternally" and become the supreme dominant leader of all humanity and ruler of the world. The term "proper place" was used frequently throughout the document. The document left open whether Japan was destined eventually to become head of the global family of nations. Jinshu and Minzoku The document drew an explicit distinction between jinshu (人種) or Rasse (English: race), and minzoku (民族) or Volk (English: people), describing a minzoku as "a natural and spiritual community bound by a common destiny". However, the authors went on to assert that blood mattered. The document approved of Hitler's concern with finding the "Germanness" of his people. It made explicit calls, sometimes approaching Nazi attitudes, for eugenic improvements, calling for the medical profession not to concentrate on treating the sickly and the weak, and calling for mental and physical training and selective marriages to improve the population. See also Ethnic issues in Japan Hakkō ichiu – "eight cords, one roof" Honorary Aryan "Manifesto of Race" Scientific racism Shinmin no Michi Tanaka Memorial Yamato people Yamato nationalism Yamato-damashii – "the Japanese spirit" North Korea: The Cleanest Race, a book by Brian Reynolds Myers in which he suggests that the ideology of the North Korean government is derived from 1930s Japanese racialism. References 1943 non-fiction books Racism in Japan Ethnic supremacy Government reports Japan in World War II Japanese nationalism Jewish Japanese history Politics of World War II Race and intelligence controversy Race in Japan Scientific racism Yamato people
An Investigation of Global Policy with the Yamato Race as Nucleus
[ "Biology" ]
1,087
[ "Biology theories", "Obsolete biology theories", "Scientific racism" ]
16,263,257
https://en.wikipedia.org/wiki/Murid%20gammaherpesvirus%2068
Murid gammaherpesvirus 68 (MuHV-68) is an isolate of the virus species Murid gammaherpesvirus 4, a member of the genus Rhadinovirus. It is a member of the subfamily Gammaherpesvirinae in the family Herpesviridae. MuHV-68 serves as a model for the study of human gammaherpesviruses, which cause significant human disease including B-cell lymphoma and Kaposi's sarcoma. The WUMS strain of MuHV-68 was fully sequenced and annotated in 1997, and the necessity of most of its genes in viral replication was characterized by random transposon mutagenesis. Surface proteins Alpha-, beta-, and gammaherpesviruses display a heterodimer composed of glycoprotein H (gH) and glycoprotein L (gL) on their envelopes. This complex is involved in the cell-to-cell transmission of the virus. Glycoprotein H has two conformations. Glycoprotein L is a chaperone protein which ensures that gH takes on the correct conformation. When Murid gammaherpesvirus 68 lacks gL, gH misfolds. When alpha- or betaherpesviruses lack gL, they are noninfectious; when Murid gammaherpesvirus 68 lacks gL, it remains infectious but is less able to bind to fibroblasts and epithelial cells. The open reading frame M7 of the MuHV-68 genome encodes the surface glycoprotein 150 (gp150). It is homologous to the Epstein-Barr virus membrane antigen gp350/220. MuHV-68 is more closely related to the Kaposi's sarcoma-associated herpesvirus (KSHV) than it is to the Epstein-Barr virus. Glycoprotein K8.1 is the KSHV homolog of MuHV-68 gp150. MuHV-68 is a very close relative of MuHV-72. The MuHV-68 M7 gene differs from the corresponding MuHV-72 gene by five point mutations which alter four codons. Glycoprotein 150 allows MuHV-68 to bind to B-cells. References Animal virology Gammaherpesvirinae Unaccepted virus taxa
Murid gammaherpesvirus 68
[ "Biology" ]
512
[ "Biological hypotheses", "Unaccepted virus taxa", "Controversial taxa" ]
16,263,446
https://en.wikipedia.org/wiki/Victim%20soul
The concept of a victim soul is an unofficial belief derived from interpretations of the Catholic Church teachings on redemptive suffering. A person believes themselves or is considered by others to be chosen by God to suffer more than most, accepting this condition based on the example of Christ's own Passion. Neither the Catholic Church, nor any other Christian denomination, officially designates anyone as a victim soul. As it is not considered dogma, the Church classifies belief in victim souls as a matter of private revelation and thus not obligatory for members to subscribe to. Background In the apostolic letter Salvifici doloris (1984), which deals with human suffering and redemption, Pope John Paul II noted that: "The Redeemer suffered in place of man and for man. Every man has his own share in the Redemption. Each one is also called to share in that suffering through which the Redemption was accomplished. ..." An exposition of the tradition of victim soul appears in the autobiography of the Carmelite monastic Thérèse of Lisieux, The Story of a Soul. In her personal view, the victim soul is a chosen one whose suffering is mysteriously joined with the redemptive suffering of Christ and is used for the redemption of others. The Catholic Church does not officially designate anyone as a victim soul. The issue came up when the family of Audrey Santo, an ailing child in a vegetative state, claimed that she had volunteered to be a victim soul. Rev. Daniel P. Reilly, Bishop of Worcester, made clear that the Church does not acknowledge such claims. The term comes from the testimony of those who have observed Christians who seem to or purport to undergo redemptive suffering. Victim soul status is a matter of private revelation unlike dogmas; therefore, individual believers are not required to accept, as part of the Catholic faith, the legitimacy of any particular person for whom such a claim is made, nor the genuineness of any miraculous claims that have been made in connection with such a person. Notable cases Examples of alleged victim souls are: Mary of the Divine Heart (1863–1899), the noble countess Droste zu Vischering and Mother Superior of the Convent of Good Shepherd Sisters in Porto, Portugal, wrote in her autobiography "I offered myself to God as a victim for the sanctification of priests" and added "I know that the Lord has accepted my suffering". Gemma Galgani (1878 – April 11, 1903) wrote in her autobiography how Jesus told her "I need souls who, by their sufferings, trials and sacrifices, make amends for sinners and for their ingratitude." Alexandrina of Balazar (1904–1955), whose Vatican biography states that she saw her vocation in life to invite others to conversion, and to "offer a living witness of Christ's passion, contributing to the redemption of humanity." Faustina Kowalska (1905–1938), who wrote in her diary that Christ had chosen her to be a "victim offering", a role that she voluntarily accepted. Anneliese Michel (1952–1976), who is said to have suffered from demonic possession and undergone subsequent exorcisms; she is also said to have been visited by the Blessed Virgin Mary who asked her "to be a victim soul who would show the German people and the world the devil does really exist." Although the notion of a scapegoat has been present within Judeo-Christian teachings for a long time, the concept of a victim soul is distinct and different, in that in this case the victim soul willingly offers the suffering to God, unlike the unwitting scapegoat scenario. 
Journalist Peggy Noonan likened John Paul II to a "victim soul" as his health failed in his final years. However, she views it in a somewhat different context. "He is teaching us something through his pain." This is more akin to philosopher Michael Novak's view of Thérèse of Lisieux and Redemptive suffering. See also Our Lady of Seven Dolours Pieta Ecce Homo Reparation to the Immaculate Heart of Mary Persecution of Christians Redemptive suffering References Catholic spirituality Soul
Victim soul
[ "Biology" ]
852
[ "Behavior", "Aggression", "Victims" ]
16,263,779
https://en.wikipedia.org/wiki/Braitenberg%20vehicle
A Braitenberg vehicle is a concept conceived in a thought experiment by the Italian cyberneticist Valentino Braitenberg in his book Vehicles: Experiments in Synthetic Psychology. The book models the animal world in a minimalistic and constructive way, from simple reactive behaviours (like phototaxis) through the simplest vehicles, to the formation of concepts, spatial behaviour, and generation of ideas. For the simplest vehicles, the motion of the vehicle is directly controlled by some sensors (for example photo cells). Yet the resulting behaviour may appear complex or even intelligent. Mechanism A Braitenberg vehicle is an agent that can autonomously move around based on its sensor inputs. It has primitive sensors that measure some stimulus at a point, and wheels (each driven by its own motor) that function as actuators or effectors. In the simplest configuration, a sensor is directly connected to an effector, so that a sensed signal immediately produces a movement of the wheel. Depending on how sensors and wheels are connected, the vehicle exhibits different behaviors (which can be goal-oriented). This means that, depending on the sensor-motor wiring, it appears to strive to achieve certain situations and to avoid others, changing course when the situation changes. The connections between sensors and actuators for the simplest vehicles (2 and 3) can be ipsilateral or contralateral, and excitatory or inhibitory, producing four combinations with different behaviours named fear, aggression, liking, and love. These correspond to biological positive and negative taxes present in many animal species. Examples The following examples are some of Braitenberg's simplest vehicles. Vehicle 1 - Getting Around The first vehicle has one sensor (e.g. a temperature detector) that directly stimulates its single wheel in a directly proportional way. The vehicle moves ideally in one dimension only and can stand still or move forward at varying speeds depending on the sensed temperature. When forces like asymmetric friction come into play, the vehicle could deviate from its straight line motion in unpredictable ways akin to Brownian motion. This behavior might be understood by a human observer as a creature that is 'alive' like an insect and 'restless', never stopping in its movement. The low velocity in regions of low temperature might be interpreted as a preference for cold areas. Vehicle 2a A slightly more complex agent has two (left and right) symmetric sensors (e.g. light detectors) each stimulating a wheel on the same side of the body. This vehicle represents a model of negative animal tropotaxis. It obeys the following rule: More light right → right wheel turns faster → turns towards the left, away from the light. This is more efficient as a behavior to escape from the light source, since the creature can move in different directions, and tends to orient towards the direction from which least light comes. In another variation, the connections are negative or inhibitory: more light → slower movement. In this case, the agents move away from the dark and towards the light. Vehicle 2b The agent has the same two (left and right) symmetric sensors (e.g. light detectors), but each one stimulates a wheel on the other side of the body. It obeys the following rule: More light left → right wheel turns faster → turns towards the left, closer to the light. As a result, the robot follows the light; it moves to be closer to the light. 
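The sensor-to-wheel wirings described above for vehicles 2a and 2b can be captured in a short simulation. The following Python sketch is only an illustration, not code from Braitenberg's book: it assumes a single point light source whose intensity falls off with distance, two sensors offset to the left and right of the vehicle, and wheel speeds set in direct proportion to the sensor readings (ipsilateral connections for 2a, contralateral for 2b). All function names and parameter values are invented for the example.

import math

def light_intensity(x, y, source=(0.0, 0.0)):
    """Toy stimulus: intensity falls off with the square of the distance to the source."""
    d2 = (x - source[0]) ** 2 + (y - source[1]) ** 2
    return 1.0 / (1.0 + d2)

def step(x, y, heading, kind="2a", gain=2.0, base=0.05, dt=0.1, width=0.2):
    """Advance a two-wheeled vehicle one time step.
    kind="2a": each sensor drives the wheel on the same side (turns away from the light).
    kind="2b": each sensor drives the wheel on the opposite side (turns toward the light)."""
    # Sensor positions, offset to the left and right of the heading direction.
    lx = x + width * math.cos(heading + math.pi / 2)
    ly = y + width * math.sin(heading + math.pi / 2)
    rx = x + width * math.cos(heading - math.pi / 2)
    ry = y + width * math.sin(heading - math.pi / 2)
    s_left, s_right = light_intensity(lx, ly), light_intensity(rx, ry)

    if kind == "2a":              # ipsilateral, excitatory
        v_left, v_right = base + gain * s_left, base + gain * s_right
    else:                         # "2b": contralateral, excitatory
        v_left, v_right = base + gain * s_right, base + gain * s_left

    # Differential-drive kinematics: forward speed is the mean wheel speed,
    # turning rate is proportional to the wheel-speed difference.
    v = (v_left + v_right) / 2.0
    omega = (v_right - v_left) / (2.0 * width)
    heading += omega * dt
    x += v * math.cos(heading) * dt
    y += v * math.sin(heading) * dt
    return x, y, heading

# A 2b vehicle started away from the light steers toward it and then loops near it.
x, y, heading = 3.0, -2.0, 0.0
for _ in range(500):
    x, y, heading = step(x, y, heading, kind="2b")
print(f"final distance to light: {math.hypot(x, y):.2f}")  # should be small compared with the starting distance of about 3.6

Running the same loop with kind="2a" instead should show the opposite tendency: the vehicle turns away from the source and wanders off, consistent with the "fear" behaviour described above.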
Behavior In a complex environment with several sources of stimulus, Braitenberg vehicles will exhibit complex and dynamic behavior. Depending on the connections between sensors and actuators, a Braitenberg vehicle might move close to a source, but not touch it, run away very fast, or describe circles or figures-of-eight around a point. This behavior is undoubtedly goal-directed, flexible and adaptive, and might even appear to be intelligent, the way some minimal intelligence is attributed to a cockroach. Yet, the functioning of the agent is purely mechanical, without any information processing or other apparently cognitive processes. Analog robots, such as those used in the BEAM robotics approach, often utilise these sorts of behaviors. See also BEAM robotics Turtle (robot) Unicycle cart References Notes Lambrinos, D., Scheier, Ch. (1995). Extended braitenberg architectures. Technical Report AI Lab no. 95.10, Computer Science Department, University of Zurich. Headleand, Chris, Llyr Ap Cynedd, and William J. Teahan. "Berry Eaters: Learning Colour Concepts with Template Based Evolution Evaluation." ALIFE 14: The Fourteenth Conference on the Synthesis and Simulation of Living Systems. Vol. 14. External links Valentino Braitenberg's homepage A software Braitenberg vehicle simulator Another Braitenberg vehicle simulator, lets you play around with different settings, vehicles and sources An Apple Playground on Braitenberg Vehicles, an APPLE playground in SWIFT language which implements some Braitenberg vehicles, it lets experiment in a very interactive way. Cybernetics BEAM robotics Thought experiments
Braitenberg vehicle
[ "Biology" ]
1,031
[ "BEAM robotics" ]
16,264,786
https://en.wikipedia.org/wiki/Neuroscience%20Information%20Framework
The Neuroscience Information Framework is a repository of global neuroscience web resources, including experimental, clinical, and translational neuroscience databases, knowledge bases, atlases, and genetic/genomic resources, and provides many authoritative links throughout the neuroscience portal of Wikipedia. Description The Neuroscience Information Framework (NIF) is an initiative of the NIH Blueprint for Neuroscience Research, which was established in 2004 by the National Institutes of Health. Development of the NIF started in 2008, when the University of California, San Diego School of Medicine obtained an NIH contract to create and maintain "a dynamic inventory of web-based neurosciences data, resources, and tools that scientists and students can access via any computer connected to the Internet". The project is headed by Maryann Martone, co-director of the National Center for Microscopy and Imaging Research (NCMIR), part of the multi-disciplinary Center for Research in Biological Systems (CRBS), headquartered at UC San Diego. Together with co-principal investigators Jeffrey S. Grethe and Amarnath Gupta, Martone leads a national collaboration that includes researchers at Yale University, the California Institute of Technology, George Mason University, Harvard, and Washington University. Goals Unlike general search engines, NIF provides much deeper access to a focused set of resources that are relevant to neuroscience, search strategies tailored to neuroscience, and access to content that is traditionally "hidden" from web search engines. The NIF is a dynamic inventory of neuroscience databases, annotated and integrated with a unified system of biomedical terminology (i.e. NeuroLex). NIF supports concept-based queries across multiple scales of biological structure and multiple levels of biological function, making it easier to search for and understand the results. NIF also provides a registry through which resource providers can disclose the availability of resources relevant to neuroscience research. NIF is not intended to be a warehouse or repository itself, but a means for disclosing and locating resources available elsewhere via the web. The NIFSTD, or NIF Standard Ontology, contains many of the terms, synonyms and abbreviations useful for neuroscience, as well as dynamic categories such as cell classes defined by various properties (for example, neurons by neurotransmitter or by circuit role) or drugs of abuse as classified by the National Institute on Drug Abuse. Any term (with associated synonyms) or dynamic category (all terms with their synonyms) can be used to simultaneously query all of the data that NIF currently indexes; several examples are given below: available data about the hippocampus, including synonyms; data about Parkinson's disease, including archaic synonyms like paralysis agitans; neocortical neuron, a dynamic category that includes all neurons that have their cell soma in any part of the neocortex. Content NIF content can be thought of as a catalog (the NIF Registry) and a deep database search (the NIF Data Federation). The NIF Catalog has the largest listing of NIH-funded, neuroscience-relevant resources, including scientific databases, software tools, experimental reagents and tools, knowledge bases and portals, and other entities identified by the neuroscience research community.
A listing of current resources can be found at www.neuinfo.org/registry. The NIF Data Federation searches the deep database content of over 150 databases, including various NCBI databases (PubMed, Gensat, Entrez Gene, Homologene, GEO) as well as many large and small neuroscience-related databases, including Gemma (microarray data from the nervous system), CCDB & CIL (mainly images of neurons and astrocytes), GeneNetwork, AgingGenesDB, XNAT, and 1000 Functional Connectomes. An updated list can be found on the Data Federation page. In addition, many databases that hold very similar types of data have been integrated into 'virtual databases', which combine many databases into one table. For example, the AntibodyRegistry combines data from 200+ vendors; the NIF Integrated BrainGeneExpression combines gene expression data from Gensat, the Allen Brain Atlas, and Mouse Genome Informatics; the Connectivity view combines six databases that have statements about nervous system connectivity; and the Integrated Animal view combines catalogs of experimental animals available to researchers, from transgenic or inbred worms, zebrafish, mice and rats. More of these are added as data are registered. An exhaustive and up-to-date list of databases and datasets registered to NIF is available at www.neurolex.org. Data Via Web Services The idea of NIF is that while scientific databases have a plethora of interfaces, some quite complex, there should be a uniform way of looking at them and searching through them. This uniform search idea has been extended to services, so that developers can take advantage of the work done at NIF to enhance their own applications by gaining access to all of the data available through the NIF interface. When data is made public via NIF, it also becomes immediately available via web services. These RESTful web services can be thought of as programming functions that can be built into other applications. Currently, the data can be queried and pulled as an XML feed, and several other sites are now pulling NIF data via services, including DOMEO and Eagle i. Developers can learn how to access data by viewing the WADL file available at http://neuinfo.org/developers. Below are some public RESTful services that can be accessed by students or used in building applications: Annotate any text by using this url: * http://nif-services.neuinfo.org/servicesv1/v1/annotate?content=The%20cerebellum%20is%20a%20wonderful%20thing&longestOnly=true The url contains the text to annotate, in this case "The cerebellum is a wonderful thing"; any other text can be substituted. The service returns the sentence with a SPAN tag denoting that it recognized the term cerebellum and that it is a type of anatomical_structure. Terms that are not recognized are returned without span tags. Note that the longestOnly=true parameter is optional; it means that only the longest matching set of terms will be recognized. In this example it makes no difference, but for terms like hippocampal neuron it will return only one response.
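A minimal sketch of how such a call might look from Python is shown below. It simply issues an HTTP GET against the annotate URL quoted above and prints the raw response body. The endpoint and parameters are taken from the description in this section; the service may no longer be available in this form, so this should be read as an illustration of the calling pattern rather than a tested client.

# Minimal sketch of calling the NIF annotate service described above.
# Assumes the historical endpoint quoted in the text; it may no longer respond.
import urllib.parse
import urllib.request

BASE = "http://nif-services.neuinfo.org/servicesv1/v1/annotate"

def annotate(text, longest_only=True):
    """Send free text to the annotate endpoint and return the raw response body."""
    params = urllib.parse.urlencode({
        "content": text,
        "longestOnly": "true" if longest_only else "false",
    })
    with urllib.request.urlopen(f"{BASE}?{params}", timeout=30) as resp:
        return resp.read().decode("utf-8")

if __name__ == "__main__":
    # Recognised terms come back wrapped in SPAN tags, as noted above.
    print(annotate("The cerebellum is a wonderful thing"))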
Developers can use the span tags to bring back information about the recognized term, because the identifier is unique and linked to definitions, synonyms, other brain regions and, in some cases, images: For a human readable version see * http://neurolex.org/wiki/Birnlex_1489 For a machine readable version see * http://nif-services.neuinfo.org/ontoquest/concepts/Birnlex_1489?get_super=true Retrieve neuroscience auto-complete suggestions, e.g., * http://nif-services.neuinfo.org/servicesv1/v1/vocabulary?prefix=hippocampu The above example shows term completion for "hippocampu", but any other string of letters can be typed into the url. The service returns a set of terms that match this string, including Hippocampus and many hippocampal cells. Retrieve the registry items that match a search term: * http://nif-services.neuinfo.org/servicesv1/v1/federation/data/nlx_144509-1?q=miame The NIF Registry is a data source, and this service will return all items in the registry that match the particular search term. In this case the term is miame, as in the MIAME standard. To use this data-retrieval function, query terms can be typed at the end of this url in addition to or instead of the term miame. This works the same way as typing terms into the search box here: * https://neuinfo.org/mynif/search.php?q=hippocampus&t=registry Note that the terms and conditions for any source of data should be checked; terms and conditions are available as a courtesy in NIF, but the individual sources to be incorporated in an application may also be consulted. All of the above-described data is owned by NIF and is covered under the Creative Commons Attribution license, so it can be freely distributed and shared. Notes and references See also NeuroLex NeuroNames External links Neuroscience Information Framework (NIF) website NIF NeuroLex - The Neuroscience Lexicon Neuroscience Information Framework News & Announcements Neuroscience Information Framework Facebook Page Neuroscience Information Framework Mendeley Group Internet search engines Neuroscience software Neuroinformatics Online databases Ontology (information science) Semantic Web Anatomy websites Biological databases
Neuroscience Information Framework
[ "Biology" ]
1,898
[ "Bioinformatics", "Biological databases", "Neuroinformatics" ]
16,265,386
https://en.wikipedia.org/wiki/Kepler%20scientific%20workflow%20system
Kepler is a free software system for designing, executing, reusing, evolving, archiving, and sharing scientific workflows. Kepler's facilities provide process and data monitoring, provenance information, and high-speed data movement. Workflows in general, and scientific workflows in particular, are directed graphs where the nodes represent discrete computational components, and the edges represent paths along which data and results can flow between components. In Kepler, the nodes are called 'Actors' and the edges are called 'channels'. Kepler includes a graphical user interface for composing workflows in a desktop environment, a runtime engine for executing workflows within the GUI and independently from a command-line, and a distributed computing option that allows workflow tasks to be distributed among compute nodes in a computer cluster or computing grid. The Kepler system principally targets the use of a workflow metaphor for organizing computational tasks that are directed towards particular scientific analysis and modeling goals. Thus, Kepler scientific workflows generally model the flow of data from one step to another in a series of computations that achieve some scientific goal. Scientific workflow A scientific workflow is the process of combining data and processes into a configurable, structured set of steps that implement semi-automated computational solutions to a scientific problem. Scientific workflow systems often provide graphical user interfaces to combine different technologies along with efficient methods for using them, and thus increase the efficiency of the scientists. Access to scientific data Kepler provides direct access to scientific data that has been archived in many of the commonly used data archives. For example, Kepler provides access to data stored in the Knowledge Network for Biocomplexity (KNB) Metacat server and described using Ecological Metadata Language. Additional data sources that are supported include data accessible using the DiGIR protocol, the OPeNDAP protocol, GridFTP, JDBC, SRB, and others. Models of Computation Kepler differs from many of the other bioinformatics workflow management systems in that it separates the structure of the workflow model from its model of computation, such that different models for the computation of the workflow can be bound to a given workflow graph. Kepler inherits several common models of computation from the Ptolemy system, including Synchronous Data Flow (SDF), Continuous Time (CT), Process Network (PN), and Dynamic Data Flow (DDF), among others. Hierarchical workflows Kepler supports hierarchy in workflows, which allows complex tasks to be composed of simpler components. This feature allows workflow authors to build re-usable, modular components that can be saved for use across many different workflows. Workflow semantics Kepler provides a model for the semantic annotation of workflow components using terms drawn from an ontology. These annotations support many advanced features, including improved search capabilities, automated workflow validation, and improved workflow editing. Sharing workflows Kepler components can be shared by exporting the workflow or component into a Kepler Archive (KAR) file, which is an extension of the JAR file format from Java. Once a KAR file is created, it can be emailed to colleagues, shared on web sites, or uploaded to the Kepler Component Repository. 
The Component Repository is a centralized system for sharing Kepler workflows that is accessible via both a web portal and a web service interface. Users can directly search for and utilize components from the repository from within the Kepler workflow composition GUI. Provenance Provenance is a critical concept in scientific workflows, since it allows scientists to understand the origin of their results, to repeat their experiments, and to validate the processes that were used to derive data products. In order for a workflow to be reproduced, provenance information must be recorded that indicates where the data originated, how it was altered, and which components and what parameter settings were used. This allows other scientists to re-conduct the experiment, confirming the results. Little support exists in current systems to allow end-users to query provenance information in scientifically meaningful ways, in particular when advanced workflow execution models go beyond simple DAGs (as in process networks). Kepler history The Kepler Project was created in 2002 by members of the Science Environment for Ecological Knowledge (SEEK) project and the Scientific Data Management (SDM) project. The project was founded by researchers at the National Center for Ecological Analysis and Synthesis (NCEAS) at the University of California, Santa Barbara and the San Diego Supercomputer Center at the University of California, San Diego. Kepler extends Ptolemy II, which is a software system for modeling, simulation, and design of concurrent, real-time, embedded systems developed at UC Berkeley. Collaboration on Kepler quickly grew as members of various scientific disciplines realized the benefits of scientific workflows for analysis and modeling and began contributing to the system. As of 2008, Kepler collaborators come from many science disciplines, including ecology, molecular biology, genetics, physics, chemistry, conservation science, oceanography, hydrology, library science, computer science, and others. Kepler thus functions as a workflow orchestration engine in which workflows are composed from reusable components called actors. See also Apache Taverna Discovery Net VisTrails LONI Pipeline Bioinformatics workflow management systems DataONE Investigator Toolkit References External links Kepler Project website Kepler Component Repository Ptolemy II project website Knowledge Network for Biocomplexity (KNB) Data archive List of software tools related to workflows on the DataONE website Workflow applications Bioinformatics software Free and open-source software Software using the BSD license Free software programmed in Java (programming language)
Kepler scientific workflow system
[ "Biology" ]
1,149
[ "Bioinformatics", "Bioinformatics software" ]
16,265,577
https://en.wikipedia.org/wiki/Vacuum%20Rabi%20oscillation
A vacuum Rabi oscillation is a damped oscillation of an initially excited atom coupled to an electromagnetic resonator or cavity in which the atom alternately emits photon(s) into a single-mode electromagnetic cavity and reabsorbs them. The atom interacts with a single-mode field confined to a limited volume V in an optical cavity. Spontaneous emission is a consequence of coupling between the atom and the vacuum fluctuations of the cavity field. Mathematical treatment A mathematical description of vacuum Rabi oscillation begins with the Jaynes–Cummings model, which describes the interaction between a single mode of a quantized field and a two level system inside an optical cavity. The Hamiltonian for this model in the rotating wave approximation is $H_{\text{JC}} = \hbar \omega_c a^\dagger a + \tfrac{1}{2}\hbar \omega_a \sigma_z + \hbar g \left( a \sigma_+ + a^\dagger \sigma_- \right)$, where $\sigma_z$ is the Pauli z spin operator for the two eigenstates $|g\rangle$ and $|e\rangle$ of the isolated two level system separated in energy by $\hbar \omega_a$; $\sigma_+ = |e\rangle\langle g|$ and $\sigma_- = |g\rangle\langle e|$ are the raising and lowering operators of the two level system; $a^\dagger$ and $a$ are the creation and annihilation operators for photons of energy $\hbar \omega_c$ in the cavity mode; and $g = (d/\hbar)\sqrt{\hbar \omega_c / (2 \epsilon_0 V)}$ is the strength of the coupling between the dipole moment $d$ of the two level system and the cavity mode with volume $V$ and electric field polarized along the dipole. The energy eigenvalues and eigenstates for this model are $E_\pm(n) = \hbar \omega_c \left( n + \tfrac{1}{2} \right) \pm \tfrac{\hbar}{2} \sqrt{\delta^2 + 4 g^2 (n+1)}$, with $|n,+\rangle = \cos(\theta_n) |e, n\rangle + \sin(\theta_n) |g, n+1\rangle$ and $|n,-\rangle = -\sin(\theta_n) |e, n\rangle + \cos(\theta_n) |g, n+1\rangle$, where $\delta = \omega_a - \omega_c$ is the detuning, and the angle $\theta_n$ is defined as $\theta_n = \tfrac{1}{2} \tan^{-1}\!\left( 2 g \sqrt{n+1} / \delta \right)$. Given the eigenstates of the system, the time evolution operator can be written down in the form $U(t) = e^{-iHt/\hbar} = \sum_{n,\pm} e^{-i E_\pm(n) t/\hbar} |n,\pm\rangle\langle n,\pm| + e^{i \omega_a t/2} |g,0\rangle\langle g,0|$. If the system starts in the state $|g,n\rangle$, where the atom is in the ground state of the two level system and there are $n \geq 1$ photons in the cavity mode, the application of the time evolution operator yields a superposition of $|g,n\rangle$ and $|e,n-1\rangle$. The probability that the two level system is in the excited state as a function of time is then $P_e(t) = \frac{4 g^2 n}{\delta^2 + 4 g^2 n} \sin^2\!\left( \tfrac{\Omega_n t}{2} \right)$, where $\Omega_n = \sqrt{\delta^2 + 4 g^2 n}$ is identified as the Rabi frequency. For the vacuum Rabi oscillation proper, the cavity initially contains no photons and the atom starts in its excited state, $|e,0\rangle$; the same calculation in the manifold spanned by $|e,0\rangle$ and $|g,1\rangle$ gives the vacuum Rabi frequency $\Omega_0 = \sqrt{\delta^2 + 4 g^2}$, which reduces to $2g$ on resonance. Then, the probability that the two level system goes from its excited state to its ground state (depositing a photon in the initially empty cavity) as a function of time is $P_g(t) = \frac{4 g^2}{\delta^2 + 4 g^2} \sin^2\!\left( \tfrac{\Omega_0 t}{2} \right)$. For a cavity that admits a single mode perfectly resonant with the energy difference between the two energy levels, the detuning vanishes, and $P_g(t)$ becomes a squared sinusoid with unit amplitude and period $\pi / g$. Generalization to N atoms The situation in which $N$ two level systems are present in a single-mode cavity is described by the Tavis–Cummings model, which has Hamiltonian $H_{\text{TC}} = \hbar \omega_c a^\dagger a + \sum_{j=1}^{N} \left[ \tfrac{1}{2}\hbar \omega_a \sigma_z^{(j)} + \hbar g \left( a \sigma_+^{(j)} + a^\dagger \sigma_-^{(j)} \right) \right]$. Under the assumption that all two level systems have equal individual coupling strength $g$ to the field, the ensemble as a whole will have enhanced coupling strength $g_N = g\sqrt{N}$. As a result, the vacuum Rabi splitting is correspondingly enhanced by a factor of $\sqrt{N}$. See also Jaynes–Cummings model Quantum fluctuation Rabi cycle Rabi frequency Rabi problem Spontaneous emission Isidor Isaac Rabi References and notes Quantum optics Atomic physics Atomic, molecular, and optical physics
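As a rough numerical illustration of the resonant and detuned cases, the short Python script below evaluates the emission probability formula given in the treatment above for an atom prepared in the excited state with an empty cavity. The coupling and detuning values are arbitrary choices for demonstration only, not measured parameters.

import math

def p_ground(t, g, delta=0.0):
    """Probability that an atom prepared in |e, 0> has emitted its photon by time t:
    P(t) = (4 g^2 / (delta^2 + 4 g^2)) * sin^2(sqrt(delta^2 + 4 g^2) * t / 2)."""
    omega = math.sqrt(delta ** 2 + 4.0 * g ** 2)   # generalised vacuum Rabi frequency
    return (4.0 * g ** 2 / omega ** 2) * math.sin(omega * t / 2.0) ** 2

g = 1.0                                  # coupling strength (arbitrary units)
for delta in (0.0, 2.0):                 # resonant and detuned cases
    peak = max(p_ground(0.01 * n, g, delta) for n in range(1000))
    print(f"delta = {delta}: peak emission probability ~ {peak:.2f}")
# The resonant case oscillates with unit amplitude and period pi/g;
# detuning reduces the amplitude to 4g^2/(delta^2 + 4g^2), i.e. 0.5 for delta = 2g here.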
Vacuum Rabi oscillation
[ "Physics", "Chemistry" ]
571
[ "Quantum optics", "Quantum mechanics", "Atomic physics", " molecular", "Atomic", " and optical physics" ]
17,487,236
https://en.wikipedia.org/wiki/Promise%20theory
Promise theory is a method of analysis suitable for studying any system of interacting components. In the context of information science, promise theory offers a methodology for organising and understanding systems by modelling voluntary cooperation between individual actors or agents, which make public their intentions to one another in the form of promises. Promise theory is grounded in graph theory and set theory. The goal of promise theory is to reveal the behavior of a whole by taking the viewpoint of the parts rather than the whole. In other words, it is a bottom-up, constructionist view of the world. Promise theory is not a technology or design methodology. It doesn't advocate any position or design principle, except as a method of analysis. Promise theory is being used in a variety of disciplines ranging from network (SDN) and computer systems management to organizations and finance. History An early form of promise theory was proposed by physicist and computer scientist Mark Burgess in 2004, initially in the context of information science, in order to solve observed problems with the use of obligation-based logics in computer management schemes, in particular for policy-based management. A collaboration between Burgess and Dutch computer scientist Jan Bergstra refined the model of a promise, which included the notion of impositions and the role of trust. The cooperation resulted in several books and many scientific papers covering a range of different applications. In spite of wider applications of promise theory, it was originally proposed by Burgess as a way of modelling the computer management software CFEngine and its autonomous behaviour. CFEngine had been under development since 1993 and Burgess had found that existing theories based on obligations were unsuitable as "they amounted to wishful thinking". Consequently, CFEngine uses a model of autonomy - as implied by promise theory—both as a way of avoiding distributed inconsistency in policy and as a security principle against external attack. As of January 2023, more than 2700 companies are using CFEngine worldwide. Outside the configuration management and DevOps disciplines, promise theory had a slow start. In the essay Promise You A Rose Garden (2007) Burgess used a more popular, less academic style, but it failed to widen the general visibility of the concept at the time. A few years later, in 2012, things changed when Cisco began using promise theory in their growing SDN initiatives, also known as Application Centric Infrastructure (ACI). The tech media picked up the usage in 2013, which led to a number of applications of promise theory in new disciplines in the years following, such as biology, supply chain management, design, business/leadership and systems architecture. Tim O'Reilly discusses promise theory in his bestseller WTF: What's the Future. Key ideas Promise theory is described as a modeling tool or a method of analysis suitable for studying any system of interacting components. It is not a technology or design methodology and does not advocate any position or design principle, except as a method of analysis. Agents Agents in promise theory are said to be autonomous, meaning that they are causally independent of one another. This independence implies that they cannot be controlled from without, they originate their own behaviours entirely from within, yet they can rely on one another's services through the making of promises to signal cooperation. 
Agents are thus self-determined until such a time as they partially or completely give up their independence by promising to accept guidance from other agents. Agents may be as simple as a heading in an HTML document, or as complex as a name server in a network (e.g., a DNS server). Regardless of internal complexity, agents encapsulate mechanisms which make and keep promises. An HTML heading makes promises about its own rendering via CSS statements. A DNS server promises to provide answers to questions about domain names, host names and IP addresses. The first is extremely simple, the latter much more sophisticated. These differences in internal process complexity lead to a definition of so-called semantic scaling of agent complexity. Intentions and outcomes Agents in promise theory may have intentions. An intention may be realized by a behaviour or a target outcome. Intentions are thus made concrete by defining a set of acceptable outcomes associated with each intention. An outcome is most useful when it describes an invariant or mathematical fixed point in some description of states, because this can be both dynamically and semantically stable. Each intention expresses a quantifiable outcome, which may be described as a state of an agent. Intentions are sometimes described as targets, goals, or desired states. The selection of intentions by an agent is left unexplained so as to avoid questions about free will. Agents express their intentions to one another by promise or by imposition. This provides a measure by which they can assess whether intentions are fulfilled or not (i.e. whether promises are kept). Promises Promises arise when an agent shares one of its intentions with another agent voluntarily (e.g., by publishing its intent). The method of sharing is left to the modeller to explain. For example, an object, such as a door handle, is an agent that promises to be suitable for opening a door, although it could be used for something else (e.g., for digging a hole in the ground). We cannot assume that agents will accept the promises given in the spirit in which they were intended, because every agent has its own context and capabilities. The promise of door handleness could be expressed by virtue of its physical form or by having a written label attached in some language. An agent that uses this promise can assess whether the agent keeps its promise, or is fit for purpose. Any agent can decide this for itself. A promise may be used voluntarily by another agent in order to influence its usage of the other agent. Promises facilitate interaction, cooperation, and tend to maximize an intended outcome. Promises are not commands or deterministic controls. Autonomy Obligations, rather than promises have been the traditional way of modelling behaviour—in society, in technology, and in other areas. While still dominant, the obligation based model has known weaknesses, in particular in areas like scalability and predictability, because of its rigidness and lack of dynamism. Promise theory's point of departure from obligation logics is the idea that all agents in a system should have autonomy of control—i.e. that they cannot be coerced or forced into a specific behaviour. Obligation theories in computer science often view an obligation as a deterministic command that causes its proposed outcome. In promise theory an agent may only make promises about its own behaviour. For autonomous agents, it is meaningless to make promises about another's behaviour. 
Although this assumption could be interpreted morally or ethically, in promise theory this is simply a pragmatic engineering principle, which leads to a more complete declaration of the intended roles of the actors or agents in a system: When making assumptions about others' behaviour is disallowed, one is forced to document every promise more completely in order to make predictions, which in turn will reveal possible failure modes by which cooperative behaviour could fail. Command and control systems—like those that motivate obligation theories—can easily be reproduced by having agents voluntarily promise to follow the instructions of another agent (this is also viewed as a more realistic model of behaviour). Since a promise can always be withdrawn, there is no contradiction between voluntary cooperation and command and control. In philosophy and law, a promise is often viewed as something that leads to an obligation; promise theory rejects that point of view. Bergstra and Burgess state that the concept of a promise is quite independent of that of obligation. Economics Promises can be valuable to the promisee or even to the promiser. They might also lead to costs. There is thus an economic story to tell about promises. The economics of promises naturally motivate selfish agent behaviour, and promise theory can be seen as a motivation for game theoretical decision making, in which multiple promises play the role of strategies in a game. Promise theory has also been used to model and build new insights into monetary systems. Emergent behaviour In computer science, the promise theory describes policy governed services, in a framework of completely autonomous agents, which assist one another by voluntary cooperation alone. It is a framework for analyzing realistic models of modern networking, and as a formal model for swarm intelligence. Promise theory may be viewed as a logical and graph theoretical framework for understanding complex relationships in networks, where many constraints have to be met, which was developed at Oslo University College, by drawing on ideas from several different lines of research conducted there, including policy based management, graph theory, logic and configuration management. It uses a constructivist approach that builds conventional management structures from graphs of interacting, autonomous agents. Promises can be asserted either from an agent to itself or from one agent to another and each promise implies a constraint on the behavior of the promising agent. The atomicity of the promises makes them a tool for finding contradictions and inconsistencies. Agency as a model of systems in space and time The promises made by autonomous agents lead to a mutually approved graph structure, which in turn leads to spatial structures in which the agents represent point-like locations. This allows models of smart spaces, i.e. semantically labeled or even functional spaces, such as databases, knowledge maps, warehouses, hotels, etc., to be unified with other more conventional descriptions of space and time. The model of semantic spacetime uses promise theory to discuss these spacetime concepts. Promises are more mathematically primitive than graph adjacencies, since a link requires the mutual consent of two autonomous agents, thus the concept of a connected space requires more work to build structure. This makes them mathematically interesting as a notion of space, and offers a useful way of modeling physical and virtual information systems. 
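To make the agent and promise vocabulary used above a little more concrete, here is a small toy model in Python. It is only an illustrative sketch of the basic concepts (autonomous agents, promises made only about one's own behaviour, and independent assessment by the promisee), not an implementation of any formal promise-theory calculus; all class names, method names and the example promise body are invented for this illustration.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Promise:
    """A promise is made by one agent, about its own behaviour, to another agent."""
    promiser: str
    promisee: str
    body: str            # the promised behaviour, e.g. "+answer name lookups"

@dataclass
class Agent:
    name: str
    promises_made: list = field(default_factory=list)
    assessments: dict = field(default_factory=dict)   # promise -> kept?, in this agent's own view

    def promise(self, other: "Agent", body: str) -> Promise:
        # An agent can only promise its own behaviour, never another agent's.
        p = Promise(promiser=self.name, promisee=other.name, body=body)
        self.promises_made.append(p)
        return p

    def assess(self, p: Promise, kept: bool) -> None:
        # Each agent decides for itself whether a promise was kept.
        self.assessments[p] = kept

server = Agent("dns-server")
client = Agent("laptop")

p = server.promise(client, "+answer name lookups within 100 ms")
client.assess(p, kept=True)     # the promisee judges from its own observations
server.assess(p, kept=True)     # the promiser may also assess its own promise

print(p)
print(client.assessments[p])

The design point the sketch tries to reflect is that the promise itself carries no enforcement: keeping it, and judging whether it was kept, are separate, local decisions made by each autonomous agent.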
Promise theory, agile transformation, and social science The Open Leadership Network and Open Space Technology organizers Daniel Mezick and Mark Sheffield invited promise theory originator Mark Burgess to keynote at the Open Leadership Network's Boston conference in 2019. This led to applying the formal development of promise theory to teach agile concepts. Burgess later extended the lecture notes into an online study course, which he claims prompted an even deeper study of the concepts of social systems, including trust and authority. References Theoretical computer science Management cybernetics Formal methods Economic theories Sociological theories
Promise theory
[ "Mathematics", "Engineering" ]
2,084
[ "Theoretical computer science", "Applied mathematics", "Software engineering", "Formal methods" ]
17,487,293
https://en.wikipedia.org/wiki/Nicholson%E2%80%93Bailey%20model
The Nicholson–Bailey model was developed in the 1930s to describe the population dynamics of a coupled host-parasitoid system. It is named after Alexander John Nicholson and Victor Albert Bailey. Host-parasite and prey-predator systems can also be represented with the Nicholson-Bailey model. The model is closely related to the Lotka–Volterra model, which describes the dynamics of antagonistic populations (prey and predators) using differential equations. The model uses (discrete time) difference equations to describe the population growth of host-parasite populations. The model assumes that parasitoids search for hosts at random, and that both parasitoids and hosts are distributed in a non-contiguous ("clumped") fashion in the environment. In its original form, the model does not allow for stable coexistence. Subsequent refinements of the model, notably adding density dependence on several terms, allowed this coexistence to happen. Equations Derivation The model is defined in discrete time. It is usually expressed as H_{t+1} = k H_t e^{-a P_t} and P_{t+1} = c H_t (1 - e^{-a P_t}), with H the population size of the host, P the population size of the parasitoid, k the reproductive rate of the host, a the searching efficiency of the parasitoid, and c the average number of viable eggs that a parasitoid lays on a single host. This model can be explained based on probability: e^{-a P_t} is the probability that a host will escape parasitism, whereas 1 - e^{-a P_t} is the probability that it will not, bearing in mind that the parasitoid egg laid in such a host eventually hatches into a larva and gives rise to a new parasitoid. Analysis of the Nicholson–Bailey model When k < 1, (0, 0) is the unique non-negative fixed point and all non-negative solutions converge to it. When k = 1, all non-negative solutions lie on level curves of a conserved function and converge to a fixed point on the H-axis. When k > 1, this system admits one unstable positive fixed point, at H* = k ln(k) / (a c (k - 1)) and P* = ln(k) / a. It has been proven that all positive solutions whose initial conditions are not equal to (H*, P*) are unbounded and exhibit oscillations with infinitely increasing amplitude. Variations Density dependence can be added to the model, by assuming that the growth rate of the host decreases at high abundances. The equation for the parasitoid is unchanged, and the equation for the host is modified: H_{t+1} = H_t e^{r(1 - H_t/K)} e^{-a P_t}. The host rate of increase k is replaced by the factor e^{r(1 - H_t/K)}, whose exponent becomes negative when the host population density exceeds K. See also Lotka–Volterra inter-specific competition equations Population dynamics Notes Parasitoids encompass insects that place their ova inside the eggs or larvae of other creatures (generally other insects as well). References Further reading External links Nicholson–Bailey model Nicholson-Bailey model with density dependence Nicholson-Bailey spatial model Predation Population models Mathematical and theoretical biology
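The following sketch iterates these difference equations directly. It assumes the standard textbook form of the model matching the symbol definitions above (H, P, k, a, c); the function names, initial values and parameter choices are illustrative only, not taken from the article.

```python
# Minimal sketch of the Nicholson-Bailey iteration (standard textbook form assumed).
import math

def nicholson_bailey(h0, p0, k, a, c, steps):
    """Return host and parasitoid trajectories for the basic model."""
    hs, ps = [h0], [p0]
    for _ in range(steps):
        h, p = hs[-1], ps[-1]
        escape = math.exp(-a * p)           # fraction of hosts escaping parasitism
        hs.append(k * h * escape)           # surviving hosts reproduce at rate k
        ps.append(c * h * (1.0 - escape))   # each parasitised host yields c parasitoids
    return hs, ps

def nicholson_bailey_density(h0, p0, r, K, a, c, steps):
    """Density-dependent variant: host growth slows as H approaches K."""
    hs, ps = [h0], [p0]
    for _ in range(steps):
        h, p = hs[-1], ps[-1]
        escape = math.exp(-a * p)
        hs.append(h * math.exp(r * (1.0 - h / K)) * escape)
        ps.append(c * h * (1.0 - escape))
    return hs, ps

# Example run: with k > 1 the basic model shows oscillations of growing amplitude,
# while the density-dependent variant can settle toward coexistence.
H, P = nicholson_bailey(h0=25.0, p0=10.0, k=2.0, a=0.05, c=1.0, steps=50)
```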
Nicholson–Bailey model
[ "Mathematics" ]
540
[ "Applied mathematics", "Mathematical and theoretical biology" ]
17,488,633
https://en.wikipedia.org/wiki/Dioxin%20affair
The Dioxin affair was a political crisis that struck Belgium during the spring of 1999. Contamination of feedstock with polychlorinated biphenyls (PCB) was detected in animal food products, mainly eggs and chickens. Although health inspectors reported the problem in January, measures were taken only from May 1999, when the media revealed the case. The then Flemish Liberals and Democrats (VLD) opposition leader Guy Verhofstadt claimed that the government was trying to cover up the so-called "nota Destickere", which proved that several secretaries of state were informed much earlier that the food contained PCBs and dioxins. Political scandal The Dioxin Affair started with complaints from chicken farmers who noticed increased mortality among newborn chickens. Laboratory analysis confirmed the presence of dioxin-like toxins well above normal limits in the eggs, tissues and feed of the affected birds. It was later confirmed that the dioxin-like toxicity was a result of the presence of PCBs, many of which form part of the group of dioxins and dioxin-like compounds which have toxic properties, in the birds’ feed. Karel Pinxten, Minister of Agriculture, and Marcel Colla, Minister of Health, immediately resigned their positions and a commission was installed to investigate the probable sources of contamination and the errors that had been made by the government. Later investigations revealed that the source of the contamination was an oil-and-fat-recycling company, Verkest, from Deinze. The fats were reprocessed into animal feed that also contained transformer oil (coolant fluid), a known source of PCBs. Public concern about the quality of animal food in general became a hot issue in the media. This forced the commission to ban certain recycling streams (like frying oil) from entering the food chain in order to prevent future contamination. Later studies indicated that there was never a significant danger to human health because contamination only affected a small proportion (at most 2%) of the food chain over a limited period. Seven million chickens and fifty thousand pigs were slaughtered and discarded. Many farms were closed down for months and animal food products were banned from the market. During the investigation, questions were raised as to whether the costs for destroying the food and feedstock were necessary, as it seemed obvious that the contaminated food had already passed through the food market during the period from January to May. To protect the farmers, the Belgian government promised to compensate them for their losses. The crisis also damaged the export of Belgian animal products. Many Belgians went shopping for meat and dairy products in foreign countries. The total costs of the food crisis are estimated at 25 billion francs, or 625 million euros. Political implications The dioxin crisis strongly influenced the federal elections of 1999 (as well as the regional elections of 1999). The governing party, the Christian People's Party (CVP), suffered a historic loss, which forced the end of premier Jean-Luc Dehaene's eight-year tenure. This meant a victory for the VLD and Guy Verhofstadt, who had brought the affair to public attention in the first place, resulting in him becoming Prime Minister of Belgium until 2007. The green parties Ecolo and Agalev were also able to profit from the public concern about the environment and food quality. Health implications In 2001, a public report announced that high dioxin levels were detected in Belgians' blood plasma compared to other European populations.
A direct link to the dioxin crisis seemed obvious. Later comparison with blood samples that had been taken before the crisis disproved this hypothesis. The elevated levels could instead be attributed to Belgium's dense population and industry. See also Dioxin controversy Federal Agency for the Safety of the Food Chain References External links Review on Belgium dioxin crisis of 1999 Government crises Political scandals in Belgium 20th-century scandals Environment of Belgium Dioxins 1999 in Belgium 1999 in the environment 1999 health disasters
Dioxin affair
[ "Chemistry" ]
797
[ "Dioxins", "Toxins by chemical classification" ]
17,489,797
https://en.wikipedia.org/wiki/Criminal%20Justice%20and%20Immigration%20Act%202008
The Criminal Justice and Immigration Act 2008 (c. 4) is an Act of the Parliament of the United Kingdom which makes significant changes in many areas of the criminal justice system in England and Wales and, to a lesser extent, in Scotland and Northern Ireland. In particular, it changes the law relating to custodial sentences and the early release of prisoners to reduce prison overcrowding, which reached crisis levels in 2008. It also reduces the right of prison officers to take industrial action, and changed the law on the deportation of foreign criminals. It received royal assent on 8 May 2008, but most of its provisions came into force on various later dates. Many sections came into force on 14 July 2008. Specific provisions Sentencing Non-custodial sentences Section 1 of the Act provides a comprehensive list of new community orders, called youth rehabilitation orders, which can be imposed on offenders aged under 18. They can only be imposed if the offence is imprisonable (i.e. an adult could receive a prison sentence for the offence) and, if the offender is aged under 15, he is a persistent offender. Neither of these criteria are necessary under the old law. (This section and sections 2 to 4 came into force on 30 November 2009.) Section 11 deals with adult offenders, and provides that adult community orders may not be imposed unless the offence is imprisonable, or unless the offender has been fined (without additional punishment) on three previous occasions. (This section came into force on 14 July 2008.) Section 35 extends the availability of referral orders (sentences designed to rehabilitate young offenders). Previously only available to first offenders, referral orders may be passed on offenders with previous convictions, subject to certain conditions being met. (This section came into force on 27 April 2009.) All of these sections were repealed and replaced by the Sentencing Act 2020. Dangerous offenders The Criminal Justice Act 2003 introduced mandatory sentencing for violent and sexual offenders, which significantly reduced judicial discretion in sentencing defendants who judges considered were a danger to the public. The increase in life sentences and "extended sentences" which resulted contributed to a major crisis of prison overcrowding, in which the prison population of England and Wales reached unprecedented levels. Sections 13 to 17 restored a proportion of judicial discretion and imposed stricter criteria for the imposition of such sentences. Section 25 provided for the automatic early release of prisoners serving extended (as opposed to life) sentences, instead of discretionary release by the Parole Board. (These sections all came into force on 14 July 2008.) Curfew English law already provided the courts with the power to impose a curfew as a condition of bail, and the power to require the defendant to wear an electronic tag to monitor compliance. Section 21 introduces a new power enabling a court which imposes a custodial sentence to order that half of the time for which the defendant was on a curfew is to count as time served towards that sentence, provided that the curfew was in force for at least 9 hours each day and that it was monitored by a tag. Although there is a presumption that the court is to make such an order, the court may decline to do so, and is obliged to take into account any breaches of the bail condition. (This power only applies to offences committed on or after 4 April 2005, the last date on which major changes to sentencing were made. 
This section came into force on 3 November 2008.) Obscene publications Section 71 increases the maximum sentence for publishing an obscene article under section 2 of the Obscene Publications Act 1959 from 3 to 5 years. (This section came into force on 26 January 2009.) Offences Extreme pornographic images Section 63 creates a new offence of possessing "an extreme pornographic image". An image is deemed to be extreme if it "is grossly offensive, disgusting or otherwise of an obscene character" and "it portrays, in an explicit and realistic way, any of the following— (a) an act which threatens a person's life, (b) an act which results, or is likely to result, in serious injury to a person's anus, breasts or genitals, (c) an act which involves sexual interference with a human corpse, or (d) a person performing an act of intercourse or oral sex with an animal (whether dead or alive), and a reasonable person looking at the image would think that any such person or animal was real". Where (a) or (b) apply, the maximum sentence is three years; otherwise the maximum is two years. Those sentenced to at least two years will be placed on the Violent and Sex Offender Register. Section 64 excludes classified works, but states that extracts from classified works are not exempt, if "it is of such a nature that it must reasonably be assumed to have been extracted (whether with or without other images) solely or principally for the purpose of sexual arousal". Sections 65 to 66 provide defences to this offence. (These sections all came into force on 26 January 2009.) Child pornography Section 69 extends the definition of indecent photographs in the Protection of Children Act 1978 (which creates offences relating to child pornography) to cover tracings of such photographs or pseudo-photographs. Child sex offences Section 72 amends section 72 of the Sexual Offences Act 2003 to extend extraterritorial jurisdiction over sexual offences against children overseas. Section 73 and Schedule 15 extend the definition of the offence of child grooming. (These provisions all came into force on 14 July 2008.) Hate crimes Section 74 and Schedule 16 amend Part 3A of the Public Order Act 1986 to extend hate crime legislation to cover "hatred against a group of persons defined by reference to sexual orientation (whether towards persons of the same sex, the opposite sex or both)". To prevent the Act being used to inhibit freedom of speech on the subject of homosexuality, paragraph 14 of Schedule 16 inserts a new section 29JA, entitled "Protection of freedom of expression (sexual orientation)" but sometimes known as the Waddington Amendment (after Lord Waddington who introduced it). It reads: The government tried to insert a clause in the 2009 Coroners and Justice Bill which would have explicitly repealed section 29JA, but the proposed repeal failed and section 29JA remains. The section was extended to protect criticism of gay marriage by the Marriage (Same Sex Couples) Act 2013. Section 74 and Schedule 16 came into force on 23 March 2010. Nuclear terrorism Section 75 and Schedule 17 make major amendments to the Nuclear Material (Offences) Act 1983 to extend extraterritorial jurisdiction over offences under section 1 of that Act, and to increase penalties. It also creates new offences (under sections 1B and 1C) pertaining to nuclear and radioactive material, also with extraterritorial jurisdiction. (This section came into force on 30 November 2009.) 
Blasphemy Section 79 abolished the common law offences of blasphemy and blasphemous libel in England and Wales. This section came into force two months after royal assent (that is, on 8 July 2008). Violent offender orders Part 7 (sections 98 to 117) creates violent offender orders. These are orders made by a magistrates' court under section 101 to control violent offenders, and are similar to anti-social behaviour orders. They must be "necessary for the purpose of protecting the public from the risk of serious violent harm caused by the offender". (Part 7 came into force on 3 August 2009.) Applications for an order To be eligible for an order a person must be at least 18, have been convicted of a "specified offence" (or an equivalent offence under the law of a foreign country), and have received a sentence of at least one year in prison or incarceration in a psychiatric hospital. The "specified offences" are manslaughter, attempted murder, conspiracy to murder, and offences under sections 4, 18 or 20 of the Offences against the Person Act 1861 (inciting murder and serious assaults). A conviction for murder under the law of a foreign country is also sufficient; this was added by section 119 of the Anti-social Behaviour, Crime and Policing Act 2014, which came into force on 13 May 2014. Before deciding whether to make the order, a court may make an interim violent offender order, which lasts until it decides whether or not to make a final order. The court may make an interim order if it decides that it would be "likely" to make a final order if it were dealing with the main application. An application for a final or interim order can only be made by the police, who can only apply for one if the offender has, since he became eligible for the order, acted in a way that "gives reasonable cause" to believe that the order is necessary. The defendant must be served with a notice giving the time and place of the hearing at which the application will be made. The court must be satisfied that the notice was given before it can hear the application. The court may only make the final order if it decides that the order is necessary to protect the public from "a current risk of serious physical or psychological harm caused by that person committing one or more specified offences". When making this decision the court must take into account any other statutory measures that are in place to protect the public from the person. If the order is made, the defendant may appeal to the Crown Court, which does not review the decision but decides the matter afresh for itself. Effect of an order A final violent offender order lasts for between two and five years, but may be renewed for up to five years at a time. It may not be in force during any time that the offender is in custody or on parole subject to licence. After two years the defendant may apply to the magistrates' court to have the order discharged. A final or interim order "may contain prohibitions, restrictions or conditions preventing the offender— (a) from going to any specified premises or any other specified place (whether at all, or at or between any specified time or times); (b) from attending any specified event; (c) from having any, or any specified description of, contact with any specified individual". 
The offender must also notify the police, within 3 days of the order being made, of his date of birth, national insurance number, his name on the date the order came into force and on the day he notifies the police (or all of his names if he uses more than one), his home address on each of those dates, and the address of any other premises in the United Kingdom at which on the latter date the offender regularly resides or stays, and any other information prescribed by regulations. He must repeat the notification every year (except if it is an interim order), and must notify any subsequent change of name or address within 3 days of the change. He may be fingerprinted and photographed by the police whenever he gives any of these notifications. If he leaves the United Kingdom he may also be required (by regulations made under the Act) to notify, before he leaves, the date he intends to leave, where he intends to go, his movements outside the UK, and any information about his return. Breaching a violent offender order (whether it is a final or interim order), or failing to make a required notification on time, is an offence punishable with imprisonment for 5 years. Miscellaneous Early release of prisoners Section 26 brought forward the release date of prisoners serving sentences greater than 4 years imposed before 4 April 2005. It did not apply to prisoners serving life sentences or serving sentences for violent or sexual offences. This section came into force on 9 June 2008. This was in order to alleviate prison overcrowding. Absence of defendants Section 54 creates a presumption that when an adult defendant fails to attend a magistrates' court for his trial or sentence, the hearing should continue without him. (This section came into force on 14 July 2008.) Non-legal staff Before the Act, the Crown Prosecution Service already employed staff who were not qualified lawyers to prosecute cases at pre-trial hearings and sentences in the magistrates' court. Section 55 grants them the right to prosecute trials for offences which are non-imprisonable and not triable on indictment. The original version of this section, when the Act was still a bill, would have allowed them to prosecute imprisonable, indictable offences. This proved to be controversial, and was amended following representations by concerned groups such as the Bar Council. (This section came into force on 14 July 2008.) Self-defence Section 76 codifies English and Northern Irish case law on the subject of self-defence. However it made no changes to the existing law. The Secret Barrister described this as "an exercise of pure political conmanship", since politicians had pretended that they were strengthening the right of self-defence. The section was amended on 25 April 2013 by section 43 of the Crime and Courts Act 2013 to allow people to use greater force in defence of their homes against burglars. The government told the public that in those circumstances, the new law meant that force need no longer be reasonable as long as it is not "grossly disproportionate". However, in a 2016 court case the government's lawyer successfully argued that this was not what the law really said, and that the primary test a jury would have to consider was still whether reasonable force had been used. Section 76, as amended, only meant that grossly disproportionate force would never be reasonable, not that merely disproportionate force would always be reasonable. 
Anti-social behaviour Section 118 created a new Part 1A to the Anti-Social Behaviour Act 2003. This permitted police and local authorities to apply for a court order to close for a period of three months residential premises associated with persistent noise and nuisance. This section came into force on 1 December 2008. When an ASBO was made on a person aged under 17, section 123 required the courts to review the order every twelve months, until the subject of the order is 18. This section came into force on 1 February 2009. These sections, along with the relevant sections of the 2003 Act, were repealed, and thereby ASBOs abolished, by the Anti-social Behaviour, Crime and Policing Act 2014. Public order Section 119 created a new offence of causing "a nuisance or disturbance" to a member of staff of the National Health Service. It is non-imprisonable and carries a maximum fine of £1,000. This section came into force on 30 November 2009. Section 122 makes similar provision for Northern Ireland. Foreign criminals Part 10 of the Act (sections 130 to 137) gives the Secretary of State the power to designate as "foreign criminals" certain criminals who are not British citizens and do not have the right of abode. Designated foreign criminals have a special status under immigration law, and may be required to comply with conditions as to their residence, employment, and compulsory reporting to the police or a government office. Failure to comply is an imprisonable offence. As of September 2023, Part 10 is not yet in force. Prison officers Section 138 curtails the right of prison officers to strike. This section came into force on royal assent. Child sex offenders Section 140 requires local authorities to consider disclosing to members of the public details about the previous convictions of convicted child sex offenders. (This legislation took effect as new sections 327A and 327B of the Criminal Justice Act 2003, on 14 July 2008.) Tobacco Section 143 inserts new sections 12A to 12D into the Children and Young Persons Act 1933. These create two new civil orders, which may be imposed by the magistrates' courts, prohibiting the sale of tobacco or cigarette paper, or keeping a cigarette vending machine, for up to one year. Breaching the order is a summary offence punishable with a fine of up to £20,000 (the usual maximum on summary convictions is £5,000). These orders (called restricted premises orders and restricted sale orders) can be imposed on anyone who has been convicted of an offence under section 7 of the 1933 Act, which prohibits selling tobacco to children under 18. (Section 143 came into force on 1 April 2009.) Commencement Section 153 of the Act provides that most of its sections will come into force on dates to be determined by the Secretary of State. However the restriction on prison officers' right to strike came into force on royal assent (8 May 2008), and the abolition of the offence of blasphemy came into force two months later. Fifteen commencement orders have been made under section 153. The second one brought most of the remaining provisions into effect on 14 July 2008. Commencement orders Criminal Justice and Immigration Act 2008 (Commencement No.1 and Transitional Provisions) Order 2008. Brought into force section 26 (in part) on 9 June 2008. Criminal Justice and Immigration Act 2008 (Commencement No. 2 and Transitional and Saving Provisions) Order 2008. 
Brought into force sections 10, 11(1), 12–18, 20, 24–25, 27–32, 38, 40, 42–47 (except 46(2)), 52, 54–59, 72–73, 76, 93–97, 140–142, Schedules 5, 8, 12, 15, 24, and miscellaneous amendments of other legislation on 14 July 2008. Criminal Justice and Immigration Act 2008 (Commencement No. 3 and Transitional Provisions) Order 2008. Brought into force sections 21 (except 21(2)), 22–23, 33(1), (3), (5) and (6), 34 (mostly), 41, 51, 60, 126(1) (in part), 127 (in part), 129, Schedules 6, 11, 22 (in part), 23 (in part), and miscellaneous amendments of other legislation on 3 November 2008. Criminal Justice and Immigration Act 2008 (Commencement No. 4 and Saving Provision) Order 2008. Brought into force sections 61, 118, 126 (in part), 127 (in part), Schedules 20, 22 (in part), 23 (in part), 27 (in part) and 28 (in part) on 1 December 2008. Also brought into force sections 63–68, 71, Schedules 14, 26 (paragraph 58 only) and 27 (paragraphs 23 and 25 only) on 26 January 2009. Criminal Justice and Immigration Act 2008 (Commencement No. 5) Order 2008. Brought into force sections 49-50, Schedules 10, and 27 (paragraphs 19 and 20 only) on 19 December 2008. Also brought into force sections 119(4), 120(5) and (6), and 121(1) to (3), (5) and (6) on 1 January 2009 in England only. Criminal Justice and Immigration Act 2008 (Commencement No. 6 and Transitional Provisions) Order 2009. Brought into force sections 48(1)(a), 123, 124, Schedules 9 (in part) and 27 (paragraphs 33 and 34 only) on 1 February 2009. Criminal Justice and Immigration Act 2008 (Commencement No. 7) Order 2009. Brought into force sections 125, 143, 146, Schedules 1 (paragraphs 26(5) and 35) and 28 (Part 7) on 1 April 2009. Also brought into force sections 35-37 on 27 April 2009. Criminal Justice and Immigration Act 2008 (Commencement No. 8) Order 2009. Brought into force most of Schedule 25 on 31 October 2009. Criminal Justice and Immigration Act 2008 (Commencement No. 9) Order 2009. Brought into force some miscellaneous provisions on 8 July 2009. Criminal Justice and Immigration Act 2008 (Commencement No. 10) Order 2009. Brought into force sections 98–117 on 3 August 2009. Criminal Justice and Immigration Act 2008 (Commencement No. 11) Order 2009. Brought into force sections 80–92 and Schedules 18 and 19 on 1 October 2009. Also brought into force sections 21, 26 (fully) and 29 on 31 October 2009. Criminal Justice and Immigration Act 2008 (Commencement No. 12) Order 2009. Brought into force section 48(1)(b) and some miscellaneous provisions in Cambridgeshire, Hampshire, Humberside, Merseyside, and Norfolk only, on 16 November 2009. Criminal Justice and Immigration Act 2008 (Commencement No.13 and Transitory Provision) Order 2009. Brought into force sections 1 to 5, 6 (in part), 7, 8, 75, 119 to 121, Schedules 1 to 3, 4 (in part) and 17, and other miscellaneous provisions on 30 November 2009. Criminal Justice and Immigration Act 2008 (Commencement No.14) Order 2010. Brought into force section 74, Schedule 16, and other miscellaneous provisions on 23 March 2010. Also brought into force section 144 on 6 April 2010 and some other miscellaneous provisions on 1 April 2010. Criminal Justice and Immigration Act 2008 (Commencement No. 15) Order 2013. Brought into force section 148(1) and paragraphs 59, 60 and 62 of Schedule 26 on 8 April 2013. See also Criminal Justice Act Immigration Act References External links The Criminal Justice and Immigration Act 2008, as amended from the National Archives. 
The Criminal Justice and Immigration Act 2008, as originally enacted from the National Archives. Explanatory notes to the Criminal Justice and Immigration Act 2008. United Kingdom Acts of Parliament 2008 English criminal law Anti-social behaviour Immigration legislation Immigration law in the United Kingdom
Criminal Justice and Immigration Act 2008
[ "Biology" ]
4,368
[ "Anti-social behaviour", "Behavior", "Human behavior" ]
17,490,047
https://en.wikipedia.org/wiki/Slow%20vertex%20response
The slow vertex response (also called SVR or V potential) is an electrochemical signal associated with electrophysiological recordings of the auditory system, specifically auditory evoked potentials (AEPs). In a normal human, the SVR recorded with surface electrodes is found at the end of the recorded AEP waveform, at latencies of 50–500 ms. Detection of the SVR is used to estimate thresholds for the hearing pathways. References Physiology
Slow vertex response
[ "Biology" ]
94
[ "Physiology" ]
17,490,264
https://en.wikipedia.org/wiki/Pseudo%20Stirling%20cycle
The pseudo Stirling cycle, also known as the adiabatic Stirling cycle, is a thermodynamic cycle with an adiabatic working volume and an isothermal heater and cooler, in contrast to the ideal Stirling cycle with an isothermal working space. The working fluid has no bearing on the maximum thermal efficiency of the pseudo Stirling cycle. Practical Stirling engines usually follow an adiabatic Stirling cycle, as the ideal Stirling cycle cannot be practically implemented. The shared nomenclature (practical engines and the ideal cycle are both named Stirling) and a lack of specificity (omitting "ideal" or "adiabatic" before Stirling cycle) can cause confusion. History The pseudo Stirling cycle was designed to address predictive shortcomings in the ideal isothermal Stirling cycle. Specifically, the ideal cycle does not give usable figures or criteria for judging the performance of real-world Stirling engines. See also Stirling engine Stirling cycle References External links Abstract of "The Pseudo Stirling cycle - A suitable performance criterion" Brief History of Stirling Machines p. 4 and on Thermodynamic cycles
Pseudo Stirling cycle
[ "Physics", "Chemistry" ]
212
[ "Thermodynamics stubs", "Physical chemistry stubs", "Thermodynamics" ]
17,494,135
https://en.wikipedia.org/wiki/High-efficiency%20hybrid%20cycle
The high-efficiency hybrid cycle (HEHC) is a 4-stroke thermodynamic cycle combining elements of the Otto cycle, Diesel cycle, Atkinson cycle and Rankine cycle. HEHC engines The third-generation design of the LiquidPiston engine, currently in development, is the only engine designed around the HEHC. It is a rotary combustion engine. References External links LiquidPiston Inc. – The company designing the first HEHC-based engine MIT News article: "Small engine packs a punch" (December 5, 2014) Thermodynamic cycles
High-efficiency hybrid cycle
[ "Physics", "Chemistry" ]
117
[ "Thermodynamics stubs", "Physical chemistry stubs", "Thermodynamics" ]
17,496,320
https://en.wikipedia.org/wiki/HP%20660LX
The HP 660LX (F1270A) is a handheld palmtop organizer, launched in 1998, that runs Windows CE 2.0 or 2.11. It is similar to the previous model, the HP 620LX. It has a CompactFlash Type I card slot, a PC card slot, a serial link cable plug, and an infrared port. It can connect to the internet via an add-on modem or through an Ethernet or Wi-Fi card. Only Type I PC cards are supported, and special drivers for the Windows CE operating system are required. On June 4, 1998, the 660LX was announced to ship in July with a 75 MHz Hitachi SH-3 RISC processor and 32 MB of RAM at a price of $999. By August 1998, the 660LX was available for purchase through corporate resellers including CompUSA. Compared to the 620LX The 660LX has 32 MB of RAM, compared to only 16 MB on the 620LX. The 660LX is bundled with a 56 kbps fax/modem card. The 660LX includes Microsoft Windows CE Services 2.1 on CD, whereas the 620LX uses Services 2.0. See also List of HP pocket computers HP 300LX HP 320LX References 660LX
HP 660LX
[ "Technology" ]
276
[ "Computing stubs", "Computer hardware stubs" ]
17,498,504
https://en.wikipedia.org/wiki/Bresler%E2%80%93Pister%20yield%20criterion
The Bresler–Pister yield criterion is a function that was originally devised to predict the strength of concrete under multiaxial stress states. This yield criterion is an extension of the Drucker–Prager yield criterion and can be expressed in terms of the stress invariants as \sqrt{J_2} = A + B I_1 + C I_1^2, where I_1 is the first invariant of the Cauchy stress, J_2 is the second invariant of the deviatoric part of the Cauchy stress, and A, B, C are material constants. Yield criteria of this form have also been used for polypropylene and polymeric foams. The parameters have to be chosen with care for reasonably shaped yield surfaces. If \sigma_c is the yield stress in uniaxial compression, \sigma_t is the yield stress in uniaxial tension, and \sigma_b is the yield stress in biaxial compression, the parameters A, B, C can be expressed in terms of \sigma_c, \sigma_t and \sigma_b. Derivation of expressions for parameters A, B, C: In terms of the principal stresses, I_1 = \sigma_1 + \sigma_2 + \sigma_3 and J_2 = \tfrac{1}{6}[(\sigma_1-\sigma_2)^2 + (\sigma_2-\sigma_3)^2 + (\sigma_3-\sigma_1)^2]. If \sigma_t is the yield stress in uniaxial tension, then \sigma_t/\sqrt{3} = A + B \sigma_t + C \sigma_t^2. If \sigma_c is the yield stress in uniaxial compression, then \sigma_c/\sqrt{3} = A - B \sigma_c + C \sigma_c^2. If \sigma_b is the yield stress in equibiaxial compression, then \sigma_b/\sqrt{3} = A - 2 B \sigma_b + 4 C \sigma_b^2. Solving these three equations for A, B and C (using Maple) gives closed-form expressions for the three constants in terms of \sigma_c, \sigma_t and \sigma_b. Alternative forms of the Bresler-Pister yield criterion In terms of the equivalent stress (\sigma_e = \sqrt{3 J_2}) and the mean stress (\sigma_m = I_1/3), the Bresler–Pister yield criterion can be written as a quadratic relation \sigma_e = a + b \sigma_m + c \sigma_m^2, with a, b, c simply related to A, B, C. The Etse-Willam form of the Bresler–Pister yield criterion for concrete can be expressed as a relation of the same quadratic form, where \sigma_c is the yield stress in uniaxial compression and \sigma_t is the yield stress in uniaxial tension. The GAZT yield criterion for plastic collapse of foams also has a form similar to the Bresler–Pister yield criterion and can be expressed in terms of \rho, the density of the foam, and \rho_m, the density of the matrix material. References See also Yield surface Yield (engineering) Plasticity (physics) Plasticity (physics) Solid mechanics Yield criteria
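As a minimal sketch (the helper names, the compression-negative sign convention, and the rule that a non-negative value of the yield function indicates yielding are assumptions; A, B and C must be supplied rather than derived here), the criterion can be evaluated from the principal stresses as follows:

```python
# Evaluate the Bresler-Pister yield function f = sqrt(J2) - (A + B*I1 + C*I1^2).
import math

def stress_invariants(s1, s2, s3):
    """First invariant of the stress and second invariant of its deviatoric part."""
    i1 = s1 + s2 + s3
    j2 = ((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2) / 6.0
    return i1, j2

def bresler_pister(s1, s2, s3, A, B, C):
    """Negative return value: elastic state; zero or positive: yield surface reached."""
    i1, j2 = stress_invariants(s1, s2, s3)
    return math.sqrt(j2) - (A + B * i1 + C * i1 ** 2)

# Example: a uniaxial compressive stress state (s1 = -sigma, s2 = s3 = 0)
# with illustrative, user-chosen constants A, B, C.
print(bresler_pister(-30.0, 0.0, 0.0, A=5.0, B=-0.2, C=0.001))
```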
Bresler–Pister yield criterion
[ "Materials_science" ]
438
[ "Deformation (mechanics)", "Plasticity (physics)" ]
17,500,066
https://en.wikipedia.org/wiki/List%20of%20low-energy%20building%20techniques
Low-energy buildings, which include zero-energy buildings, passive houses and green buildings, may use any of a large number of techniques to lower energy use. The following are some of the techniques used to achieve low-energy buildings, which excludes energy generation (microgeneration). Improvements to building envelope Active daylighting Barra system Brise soleil Cool roof and green roof Daylighting Double envelope house Earth sheltering Energy plus house Fluorescent lighting, compact fluorescent lamp, and LED lighting Green building and wood History of passive solar building design Low-energy house Passive daylighting Passive house Passive solar Passive solar building design Quadruple glazing Solar energy Superinsulation Sustainable architecture Sustainability Trombe wall Windcatcher Zero energy building Zero heating building Improvements to heating, cooling, ventilation and water heating Absorption refrigerator Annualized geothermal solar Earth cooling tubes Geothermal heat pump Heat recovery ventilation Hot water heat recycling Passive cooling Renewable heat Seasonal thermal energy storage (STES) Solar air conditioning Solar hot water Energy rating systems EnerGuide (Canada) Home energy rating (US) House Energy Rating (Australia) LEED - Leadership in Energy and Environmental Design National Home Energy Rating (UK) Techniques Sustainable building Architecture lists Energy-related lists Lists related to renewable energy
List of low-energy building techniques
[ "Engineering" ]
252
[ "Sustainable building", "Building engineering", "Architecture", "Construction", "Architecture lists" ]
17,501,906
https://en.wikipedia.org/wiki/Seismic%20microzonation
Seismic microzonation is defined as the process of subdividing a potential seismic or earthquake-prone area into zones with respect to some geological and geophysical characteristics of the sites, such as ground shaking, liquefaction susceptibility, landslide and rock fall hazard, and earthquake-related flooding, so that seismic hazards at different locations within the area can correctly be identified. Microzonation provides the basis for site-specific risk analysis, which can assist in the mitigation of earthquake damage. In most general terms, seismic microzonation is the process of estimating the response of soil layers under earthquake excitations and thus the variation of earthquake characteristics on the ground surface. Regional geology can have a large effect on the characteristics of ground motion. The site response of the ground motion may vary in different locations of the city according to the local geology. A seismic zonation map for a whole country may, therefore, be inadequate for detailed seismic hazard assessment of the cities. This necessitates the development of microzonation maps for big cities for detailed seismic hazard analysis. Microzonation maps can serve as a basis for site-specific risk analysis, which is essential for critical structures like nuclear power plants, subways, bridges, elevated highways, sky trains and dam sites. Seismic microzonation can be considered as the preliminary phase of earthquake risk mitigation studies. It requires multi-disciplinary contributions as well as comprehensive understanding of the effects of earthquake-generated ground motions on man-made structures. Many large cities around the world have put effort into developing microzonation maps for the better understanding of earthquake hazard within the cities. Effect of site conditions on earthquake ground motion It has long been recognized that the intensity of ground shaking during earthquakes and the associated damage to structures are significantly influenced by local geologic and soil conditions. Unconsolidated sediments are found to amplify ground motion during earthquakes and are hence more prone to earthquake damage than ground with hard strata. Modern cities built on soft sediments are especially vulnerable to damage caused by amplified ground motions. The Mexico City earthquake of September 19, 1985, is a good example of earthquake damage to a modern city built on soft sediment. Though the earthquake epicenter was located around 350 km from the city, the sites with soft clay deposits exhibited a huge amplification of ground motion resulting in severe damage. Mexico City is built on a thick layer of soft soil over a hard stratum. The western part of the city is located on the edge of an old lakebed, whereas soft clay deposits filling the former lakebed underlie the eastern part. In the lake bed area, the soft clay deposits have shear wave velocities ranging from 40 to 90 m/s and the underlying hard strata have shear wave velocities of 500 m/s or greater. During the earthquake of 1985, the seismic waves were trapped in the soft strata. The soft soil layer allowed the upward propagating shear waves to propagate easily; however, the hard strata at the bottom acted like a reflector and bounced back the downward propagating waves. This kind of trapping of waves created a resonance and consequently resulted in an enormous amplification of the ground motion.
As a result, the lake bed area suffered catastrophic damage; however, in the southwest part of the city, ground motions were moderate and building damage was minor. The accelerations recorded in the hill zones were relatively low-amplitude, short-period ground motions, compared with the high-amplitude, long-period ground motions recorded at stations located in the lake zone. Similar site amplification of ground motion was observed in the Loma Prieta earthquake in October 1989. Deep clay deposits underlying sites around the perimeter of the San Francisco Bay area amplified the ground motion tremendously in the San Francisco and Oakland area, causing severe damage. The San Francisco-Oakland Bay Bridge, founded on a deep clay site, was extensively damaged in this earthquake. The site amplification phenomenon observed during these earthquakes clearly highlighted the possibility of severe ground motions at sites with soft soil profiles located at a large distance from causative faults, and underscored the importance of site-specific risk analysis. Methods of seismic microzonation Dynamic characteristics of a site, such as the predominant period, amplification factor, shear wave velocity and standard penetration test values, can be used for seismic microzonation. Shear wave velocity measurements and standard penetration tests are generally expensive and are not feasible to carry out at a large number of sites for the purpose of microzonation. Ambient vibration (microtremor) measurement has become a popular method for determining the dynamic properties of soil strata and is being used extensively for microzonation. Microtremor observations are easy to perform, inexpensive and can be applied in places with low seismicity as well; hence, microtremor measurements can be used conveniently for microzonation. References Earthquake and seismic risk mitigation Earthquake engineering
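As a rough illustration (the quarter-wavelength relation T0 = 4H/Vs for a single uniform soft layer over a much stiffer base is a standard textbook approximation, not something stated in this article, and the layer thickness used below is an assumed value), the predominant site period discussed above can be estimated from the soft-layer thickness and its shear wave velocity:

```python
# Estimate the fundamental (predominant) period of a soft layer over stiff ground.
def fundamental_site_period(thickness_m, vs_soft_mps):
    """Quarter-wavelength approximation T0 = 4H / Vs for a uniform layer."""
    return 4.0 * thickness_m / vs_soft_mps

# Illustrative lake-bed-like profile: an assumed 40 m of soft clay with Vs = 75 m/s
# (within the 40-90 m/s range quoted in the text) gives a period of roughly 2 s,
# i.e. long-period shaking is the kind most strongly amplified at such a site.
print(fundamental_site_period(40.0, 75.0))  # ~2.13 s
```

Sites whose estimated period coincides with the dominant periods of the incoming ground motion are candidates for strong resonance, which is one of the characteristics microzonation maps try to capture.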
Seismic microzonation
[ "Engineering" ]
995
[ "Structural engineering", "Earthquake engineering", "Earthquake and seismic risk mitigation", "Civil engineering" ]
17,502,363
https://en.wikipedia.org/wiki/Nato%20wood
Nato wood is a general term for wood from Mora trees. The best-known species are Mora excelsa (mora) and Mora gonggrijpii (morabukea). This should not be confused with nyatoh, an Asian hardwood from the family Sapotaceae with a very similar look and characteristics to Honduras mahogany, though totally unrelated. Mora may vary in appearance, with reddish brown being the dominant color, but with varying shades and often with darker or lighter streaks. It has a similar appearance to mahogany, and as such it is often referred to as "eastern mahogany". Despite this, the two are unrelated. The heartwood is light to medium reddish brown. Wide pale yellow-brown sapwood is clearly demarcated from heartwood. It has a straight to interlocked grain, with a medium to coarse texture and good natural luster. The wood is dense and it is not particularly easy to dry or to work, although it finishes well. Mora wood species are not listed in the CITES Appendices or on the IUCN Red List of Threatened Species. Because of its similar properties to more traditional tone woods like mahogany, many guitar manufacturers use nato in their construction. Squier, Epiphone, Gretsch, BC Rich, Eastwood and Japan-based manufacturers Yamaha, Hondo and Takamine are amongst them. The wood is available in large solid cuts and is well above average in properties such as resistance to wear, strength, and durability, making it an excellent candidate for heavy construction, industrial flooring, railroad ties, and boatbuilding. References External links Wood properties at the USDA Forest Products Laboratory, Madison, WI Wood by type
Nato wood
[ "Physics" ]
342
[ "Materials stubs", "Materials", "Matter" ]
1,030,860
https://en.wikipedia.org/wiki/Buffer%20state
A buffer state is a country geographically lying between two rival or potentially hostile great powers. Its existence can sometimes be thought to prevent conflict between them. A buffer state is sometimes a mutually agreed upon area lying between two greater powers, which is demilitarised in the sense of not hosting the armed forces of either power (though it will usually have its own military forces). The invasion of a buffer state by one of the powers surrounding it will often result in war between the powers. Research shows that buffer states are significantly more likely to be conquered and occupied than are nonbuffer states. This is because "states that great powers have an interest in preserving—buffer states—are in fact in a high-risk group for death. Regional or great powers surrounding buffer states face a strategic imperative to take over buffer states: if these powers fail to act against the buffer, they fear that their opponent will take it over instead. By contrast, these concerns do not apply to nonbuffer states, where powers face no competition for influence or control." Buffer states, when authentically independent, typically pursue a neutralist foreign policy, which distinguishes them from satellite states. The concept of buffer states is part of a theory of the balance of power that entered European strategic and diplomatic thinking in the 18th century. After the First World War, notable examples of buffer states were Poland and Czechoslovakia, situated between major powers such as Germany and the Soviet Union. Lebanon is another significant example, positioned between Syria and Israel, thereby experiencing challenges as a result. Examples Americas Bolivia, created by Gran Colombia as a buffer between Peru and Argentina during the Upper Peru question Uruguay, served as a demilitarised buffer between Argentina and the Empire of Brazil during the early independence period in South America Paraguay, maintained after the end of the Paraguayan War in 1870, as a buffer separating Argentina and Brazil Georgia, a colony established by Great Britain in 1732 as a buffer between its other colonies along the Atlantic coast of North America and Spanish Florida Ecuador, served as a "cushion state" between Colombia and Peru, which had a bigger extension and military force and fought a war in the 1820s. Asia Kingdom of Judah was a buffer state between Egyptian Empire and Neo-Babylonian Empire. Multiple buffer states played major roles during the Roman–Persian Wars (66 BC – 628 AD). Armenia was a frequently contested buffer between the Roman Empire (as well as the later Byzantine Empire) and the various Persian and Muslim states. North Korea, during and after the Cold War, has been seen by some analysts as a buffer state between the military forces of China, the Soviet Union and those of South Korea, Japan, and the United States (stationed in South Korea, Japan, and Taiwan from 1954 to 1979). Manchukuo was a pro-Japanese buffer state between the Empire of Japan, the Soviet Union, and the Republic of China during World War II. Thailand, historically known as Siam, was an independent buffer state between the British Raj, British Malaya, French Indochina, and their competing colonial interests in Laos and Cambodia. Korea acted as a buffer zone between the growing superpowers of Imperial Japan and the Russian Empire. The Far Eastern Republic was a formally independent state created to act as a buffer between Bolshevik Russia and the Empire of Japan. 
Afghanistan was a buffer state between the British Empire, which ruled much of South Asia, and the Russian Empire, which ruled much of Central Asia, during the Anglo–Russian conflicts of the 19th century. Later, the Wakhan Corridor extended the buffer eastwards to the Chinese border. The Himalayan nations of Tibet, Nepal, Bhutan, and Sikkim were buffer states between the British Empire and China. Later, during the Sino-Indian War of 1962, they became buffers between China and India as the two powers fought along their borders. Mongolia acted as a buffer between the Soviet Union and China until 1991. It currently serves as a buffer between Russia and China. Lebanon is a buffer state between Israel and Syria. Iraq and Kuwait are buffer states between Iran and Saudi Arabia. Africa Morocco served as a buffer state between the Ottoman Empire, Spain, and Portugal in the 16th century. The Bechuanaland Protectorate (present-day Botswana) was initially created as a buffer between the British Empire and the two Boer republics of the Orange Free State and the Transvaal Republic until the Second Boer War. Europe The Principality of Transylvania was a buffer state between the Ottoman Empire and the Habsburg Empire until the Treaty of Karlowitz was signed. Switzerland has been a buffer state between Italy, Austria, France, Germany, and other state powers in medieval and modern Europe. The United Kingdom of the Netherlands, composed of today's Belgium and the Netherlands, was created by the Congress of Vienna in 1815 to maintain peace between France, Prussia, and the United Kingdom. The kingdom existed for 15 years until the Belgian Revolution. Belgium acted as a buffer state between France, the German Empire, the Netherlands, and the British Empire before the First World War. The Rhineland served as a demilitarised zone between France and Germany during the interwar years of the 1920s and early 1930s. There were early French attempts at creating a Rhenish Republic. The Socialist Soviet Republic of Byelorussia was founded as a buffer state between Soviet Russia and the European powers. The Qasim Khanate (1452–1681) may have served as a buffer between Muscovy and the Kazan Khanate. Austria acted as a buffer state between Germany and Italy during the interwar period. Poland and other states between Germany and the Soviet Union have sometimes been described as buffer states, both as non-communist states before World War II and later as communist states of the Eastern Bloc. Yugoslavia, which broke with the Soviet Union before the formation of the Warsaw Pact, became a buffer state between NATO and the Eastern Bloc during the Cold War. West Germany and East Germany were also regarded as buffer states between NATO and the Warsaw Pact during the Cold War in Europe. During the Cold War, Sweden and Finland were sometimes regarded as buffer states between NATO and the Soviet Union. More recently, the Russo-Ukrainian War has helped push both countries into joining NATO. Oceania The New Hebrides served as a buffer between the United Kingdom and France in Oceania during the New Imperialism period. Papua New Guinea served as a buffer state between Indonesia, the Solomon Islands, and Vanuatu. Indonesia accused both the Solomon Islands and Vanuatu of supporting the Free Papua Movement during the Papua conflict.
See also Indian barrier state, a British proposal to establish a Native American buffer state in the Great Lakes region of North America during the 18th and early 19th centuries Limitrophe states Neutral and Non-Aligned European States Puppet state Satellite state References Former countries Types of countries Independence Sovereignty Borders Geopolitics
Buffer state
[ "Physics" ]
1,357
[ "Spacetime", "Borders", "Space" ]
1,030,916
https://en.wikipedia.org/wiki/Nod%20factor
Nod factors (nodulation factors or NF) are signaling molecules produced by soil bacteria known as rhizobia in response to flavonoid exudation from plants under nitrogen-limited conditions. Nod factors initiate the establishment of a symbiotic relationship between legumes and rhizobia by inducing nodulation. Nod factors produce the differentiation of plant tissue in root hairs into nodules where the bacteria reside and are able to fix nitrogen from the atmosphere for the plant in exchange for photosynthates and the appropriate environment for nitrogen fixation. One of the most important features provided by the plant in this symbiosis is the production of leghemoglobin, which keeps the oxygen concentration low and prevents the inhibition of nitrogenase activity. Chemical Structure Nod factors structurally are lipochitooligosaccharides (LCOs) that consist of an N-acetyl-D-glucosamine chain linked through β-1,4 linkages, with a fatty acid of variable identity attached to the nitrogen of the non-reducing end of the backbone and various functional group substitutions at the terminal or non-terminal residues. Nod factors are produced in complex mixtures differing in the following characteristics: Length of the chain, which can vary from three to six units of N-acetyl-D-glucosamine, with the exception of M. loti, which can produce Nod factors with only two units Presence or absence of strain-specific substitutions along the chain Identity of the fatty acid component Presence or absence of unsaturated fatty acids Nod gene expression is induced by the presence of certain flavonoids in the soil, which are secreted by the plant and act as an attractant to bacteria and induce Nod factor production. Flavonoids activate NodD, a LysR family transcription factor, which binds to the nod box and initiates the transcription of the nod genes which encode the proteins necessary for the production of a wide range of LCOs. Function Nod factors are potentially recognized by plant receptors made of two histidine kinases with extracellular LysM domains, which have been identified in L. japonicus, soybean, and M. truncatula. Binding of Nod factors to these receptors depolarizes the plasma membrane of root hairs via an influx of Ca2+, which induces the expression of early nodulin (ENOD) genes and swelling of the root hairs. In M. truncatula, signal transduction is initiated by the activation of dmi1, dmi2, and dmi3, which leads to the deformation of root hairs, early nodulin expression, cortical cell division and bacterial infection. Additionally, nsp and hcl genes are recruited later and aid in the process of early nodulin expression, cortical cell division, and infection. Genes dmi1, dmi2, and dmi3 have also been found to aid in the establishment of interactions between M. truncatula and arbuscular mycorrhiza, indicating that the two very different symbioses may share some common mechanisms. The end result is the nodule, the structure in which nitrogen is fixed. Nod factors act by inducing changes in gene expression in the legume, most notably the nodulin genes, which are needed for nodule organogenesis.
They have now been isolated also from soybean and the model legume Medicago truncatula. NFR5 lacks the classical activation loop in the kinase domain. The NFR5 gene lacks introns. First the cell membrane is depolarized, the root hairs start to swell, and cell division stops. Nod factors cause the fragmentation and rearrangement of the actin network, which, coupled with the resumption of cell growth, leads to the curling of the root hair around the bacteria. This is followed by the localized breakdown of the cell wall and the invagination of the plant cell membrane, allowing the bacterium to form an infection thread. As the infection thread grows, the rhizobia travel down its length towards the site of the nodule. During this process the pericycle cells in plants become activated, and cells in the inner cortex start growing and become the nodule primordium, where the rhizobia infect and differentiate into bacteroids and fix nitrogen. Activation of adjacent middle cortex cells leads to the formation of the nodule meristem. See also ENOD40 Notes Fabaceae Oligosaccharides Signal transduction Plant physiology
Nod factor
[ "Chemistry", "Biology" ]
1,025
[ "Plant physiology", "Carbohydrates", "Plants", "Signal transduction", "Oligosaccharides", "Biochemistry", "Neurochemistry" ]
1,031,206
https://en.wikipedia.org/wiki/Pinocytosis
In cellular biology, pinocytosis, otherwise known as fluid endocytosis and bulk-phase pinocytosis, is a mode of endocytosis in which small molecules dissolved in extracellular fluid are brought into the cell through an invagination of the cell membrane, resulting in their containment within a small vesicle inside the cell. These pinocytotic vesicles then typically fuse with early endosomes to hydrolyze (break down) the particles. Pinocytosis is variably subdivided into categories depending on the molecular mechanism and the fate of the internalized molecules. Function In humans, this process occurs primarily for absorption of fat droplets. In endocytosis the cell plasma membrane extends and folds around desired extracellular material, forming a pouch that pinches off creating an internalized vesicle. The invaginated pinocytosis vesicles are much smaller than those generated by phagocytosis. The vesicles eventually fuse with the lysosome, whereupon the vesicle contents are digested. Pinocytosis involves a considerable investment of cellular energy in the form of ATP. Pinocytosis and ATP Pinocytosis is used primarily for clearing extracellular fluids (ECF) and as part of immune surveillance. In contrast to phagocytosis, it generates very small amounts of ATP from the wastes of alternative substances such as lipids (fat). Unlike receptor-mediated endocytosis, pinocytosis is nonspecific in the substances that it does transport: the cell takes in surrounding fluids, including all solutes present. Etymology and pronunciation The word pinocytosis () uses combining forms of pino- + cyto- + -osis, all Neo-Latin from Greek, reflecting píno, to drink, and cytosis. The term was proposed by W. H. Lewis in 1931. Non-specific, adsorptive pinocytosis Non-specific, adsorptive pinocytosis is a form of endocytosis, a process in which small particles are taken in by a cell by splitting off small vesicles from the cell membrane. Cationic proteins bind to the negative cell surface and are taken up via the clathrin-mediated system, thus the uptake is intermediate between receptor-mediated endocytosis and non-specific, non-adsorptive pinocytosis. The clathrin-coated pits occupy about 2% of the surface area of the cell and only last about a minute, with an estimated 2500 leaving the average cell surface each minute. The clathrin coats are lost almost immediately, and the membrane is subsequently recycled to the cell surface. Macropinocytosis Macropinocytosis is a clathrin-independent endocytic mechanism that can be activated in practically all animal cells, resulting in uptake. In most cell types, it does not occur continuously but rather is induced for a limited time in response to cell-surface receptor activation by specific cargoes, including growth factors, ligands of integrins, and apoptotic cell remnants. These ligands activate a complex signaling pathway, resulting in a change in actin dynamics and the formation of cell-surface protrusions of filopodia and lamellopodia, commonly called ruffles. When ruffles collapse back onto the membrane, large fluid-filled endocytic vesicles form called macropinosomes, which can transiently increase the bulk fluid uptake of a cell by up to tenfold. Macropinocytosis is a solely degradative pathway: macropinosomes acidify and then fuse with late endosomes or endolysosomes, without recycling their cargo back to the plasma membrane. Some bacteria and viruses have evolved to induce macropinocytosis as a mechanism for entering host cells. 
Some of these can stop the degradation processes in order to survive inside the macropinosome, which may transform into smaller and long-lasting vacuoles containing the viruses or bacteria (some of which may replicate inside), or simply escape through the wall of the macropinosome when inside. For example, the gut pathogen Salmonella typhimurium injects toxins into the host cell in order to induce macropinocytosis as a form of uptake, inhibits the degradation of the macropinosome, and forms a salmonella-containing vacuole, or SCV, wherein it can replicate. Inhibitors Virapinib See also Caveolae Phagocytosis Receptor-mediated endocytosis References Cellular processes
Pinocytosis
[ "Biology" ]
1,072
[ "Cellular processes" ]
1,031,339
https://en.wikipedia.org/wiki/Ballochroy
Ballochroy is a megalithic site in Kintyre on the Argyll peninsula in Scotland. It consists of three vertical stones, side by side, aligned with various distant land features. Alexander Thom, known for his work on Stonehenge, maintained that the long sightlines from the stones to distant landscape features lent precision to pinpointing the midsummer and winter solstices for ancient observers. These three stones are considered the most spectacular set of megalithic monuments that cluster around south Argyll. The heights of the three mica schist stones have been measured. It is possible that the smallest of the stones may have been broken off at the top. The line of stones is orientated north-east to south-west. The flat face of the central stone (at right angles to the alignment) indicates the mountain of Cora Bheinn, on the island of Jura. The shortest stone also faces across the alignment, and points to Beinn a' Chaolais, the southernmost of the three Paps of Jura. The sun setting here would have given warning of the approach of the solstice. As with many megalithic sites, the current theories concerning the exact use of the stones at Ballochroy are somewhat controversial. See also List of archaeoastronomical sites sorted by country References External links Archaeoastronomy Archaeological sites in Argyll and Bute Megalithic monuments in Scotland Kintyre Scheduled monuments in Argyll and Bute
Ballochroy
[ "Astronomy" ]
302
[ "Archaeoastronomy", "Astronomical sub-disciplines" ]
1,031,440
https://en.wikipedia.org/wiki/Forensic%20medicine
Forensic medicine is a broad term used to describe a group of medical specialties which deal with the examination and diagnosis of individuals who have been injured by, or who have died because of, external or unnatural causes such as poisoning, assault, suicide and other forms of violence, and which apply the findings to law (i.e. court cases). Forensic medicine is a multi-disciplinary branch which includes the practice of forensic pathology, forensic psychiatry, forensic odontology, forensic radiology and forensic toxicology. There are two main categories of forensic medicine, clinical forensic medicine and pathological forensic medicine, the differing factor being the condition of the patient. Clinical forensic medicine is the investigation of trauma to living patients, whereas pathological forensic medicine involves the examination of trauma to the deceased to find the cause of death. History The term clinical forensic medicine, coined by Thomas Stuart, dates back to the 19th century and refers to the use of medical evidence for judicial purposes, although this form of forensic medicine was practised before the term was coined. Clinical forensic medicine could not develop as a recognised field until both legal and medical systems were well established, but there is evidence of some form of forensics as far back as 220 B.C.E., in the Qin Dynasty, where links between the medical and legal systems were written down. Forensic medicine emerged as a discipline in France in the late 18th century. Pathological forensic medicine was not considered its own subfield until 1819, when Joan Lobstein was appointed to the Professorship of Pathology at the University of Strassburg. However, forensic pathology has been used throughout history to determine the cause and other factors of a death (e.g. its mechanism) by examining the body of the deceased. Autopsies of animals were conducted as early as 400 B.C.E. Until the 13th century the body of the deceased was considered holy and could not be operated on. However, around 1231 C.E. the first law allowing the dissection and observation of a human body was enacted. This led to a further easing of attitudes towards human autopsies, and more and more were performed. This development led to many advancements in pathology, as the human body was properly mapped for its structure and function and studied for the causes of diseases. It also contributed to an overall improvement in health, as ancient techniques were discarded and new scientific medical practices were implemented. See also Jean-Jacques Belloc References Histopathology
Forensic medicine
[ "Chemistry" ]
508
[ "Histopathology", "Microscopy" ]
1,031,478
https://en.wikipedia.org/wiki/Glauconite
Glauconite is an iron potassium phyllosilicate (mica group) mineral of characteristic green color which is very friable and has very low weathering resistance. It crystallizes with a monoclinic geometry. Its name is derived from the Greek meaning 'bluish green', referring to the mineral's common blue-green color and its micaceous sheen. Its color ranges from olive green and blackish green to bluish green, becoming yellowish on exposed surfaces due to oxidation. On the Mohs scale it has a hardness of 2, roughly the same as gypsum. The relative specific gravity range is 2.4–2.95. It is normally found as dark green rounded concretions with the dimensions of a sand grain. It can be confused with chlorite (also of green color) or with a clay mineral. Glauconite has the chemical formula . Glauconite particles are one of the main components of greensand, glauconitic siltstone and glauconitic sandstone. Glauconite has been called a marl in an old and broad sense of that word. Thus references to "greensand marl" sometimes refer specifically to glauconite. The Glauconitic Marl formation is named after it, and there is a glauconitic sandstone formation in the Mannville Group of Western Canada. Occurrence At the broadest level, glauconite is an authigenic mineral and forms exclusively in marine settings. It is commonly associated with low-oxygen conditions. Normally, glauconite is considered a diagnostic mineral indicative of continental shelf marine depositional environments with slow rates of accumulation and gradational boundaries. For instance, it appears in Jurassic/lower Cretaceous deposits of greensand, so-called after the coloration caused by glauconite, its presence gradually lessening further landward. It can also be found in sand or clay formations, or in impure limestones and in chalk. It develops as a consequence of diagenetic alteration of sedimentary deposits at the surface, bio-chemical reduction and subsequent mineralogical changes affecting iron-bearing micas such as biotite, and is also influenced by the decaying process of organic matter degraded by bacteria in marine animal shells. In these cases, the organic matter creates the reducing environment needed to form glauconite within otherwise oxygenated sediment. Glauconite deposits are commonly found in nearshore sands, open oceans and shallow seas, such as the Mediterranean Sea. Glauconite is absent from fresh-water lakes, but is noted in shelf sediments of the western Black Sea. The wide distribution of these sandy deposits was first made known by naturalists on board the fifth HMS Challenger, in the expedition of 1872–1876. Uses Glauconite has long been used in Europe as a green pigment for artistic oil paint under the name green earth. One example is its use in Russian "icon paintings"; another widespread use was for the underpainting of human flesh in medieval painting. It is also found as a mineral pigment in wall paintings from ancient Roman Gaul. Fertilizers Glauconite, a major component of greensand, is a common source of potassium (K+) in plant fertilizers and is also used to adjust soil pH. It is used for soil conditioning in both organic and non-organic farming, whether as an unprocessed material (mixed in adequate proportions) or as a feedstock in the synthesis of commercial fertilizer powders. In Brazil, greensand refers to a fertilizer produced from a glauconitic siltstone unit belonging to the Serra da Saudade Formation, Bambuí Group, of Neoproterozoic/Ediacaran age. 
The outcrops occur in the Serra da Saudade ridge, in the Alto Paranaíba region, Minas Gerais state. It is a silty-clayed sedimentary rock, laminated, bluish-green, composed of glauconite (40-80%), potassium feldspar (10-15%), quartz (10-60%), muscovite (5%) and minor quantities of biotite (2%), goethite (<1%), titanium and manganese oxides (<1%), barium phosphate and rare-earth element phosphates (<1%). Enriched levels of potash have K2O grades between 8 and 12%, thickness up to and are associated to the glauconitic levels, dark-green in color. Glauconite is authigenic and highly mature. The high concentration of this mineral is related to a depositional environment with a low sedimentation rate. The glauconitic siltstone has resulted from a high-level flooding event in the Bambuí Basin. The sedimentary provenance is from supracrustal felsic elements in a continental margin environment with acid magmatic arc (foreland basin). Hazards In the wind farm industry off the coasts of Massachusetts, New York and New Jersey, glauconite-rich sands of Cretaceous to Paleogene age found in the seabed have become a hazard to the installation of monopiles used for turbine foundation. When these sands are manipulated, during the driving of monopiles, they start to crush, changing their geotechnical behaviour from sand-like to clay-like, with the risk of pile refusal, making it impossible to reach the target depth of the piles. The pile driving difficulties stem from the high frictional resistance of the native glauconite sand at the pile tip, combined with the high cohesive resistance of the altered, now clay-like material along the pile shaft. References Potassium minerals Sodium minerals Magnesium minerals Aluminium minerals Iron(III) minerals Mica group Monoclinic minerals Minerals in space group 12 Inorganic pigments
Glauconite
[ "Chemistry" ]
1,218
[ "Inorganic pigments", "Inorganic compounds" ]
1,031,607
https://en.wikipedia.org/wiki/Squidgygate
Squidgygate or Dianagate refers to the controversy over pre-1990 telephone conversations between Diana, Princess of Wales, and her lover, James Gilbey (heir to Gilbey's Gin), which were published by The Sun newspaper. In 1992, The Sun publicly revealed the recording's existence in an article titled "Squidgygate" (the "-gate" suffix being a reference for a scandal). During the calls, Gilbey affectionately called Diana by the names "Squidgy" and "Squidge". In the conversation, the Princess of Wales likens her situation to that of a character in the popular British soap opera EastEnders, and expresses concern that she might be pregnant and there is discussion of abortion. The publication of the tapes was a highpoint of the media attention which surrounded Diana's serial adultery leading to the marriage, separation, and eventual divorce of The Prince and Princess of Wales. Recording and publication The tape was published after it was allegedly accidentally recorded by a retired bank manager who was a radio enthusiast. First eavesdropper: Cyril Reenan In January 1990, two reporters from The Sun newspaper met Cyril Reenan in the parking bay of Didcot railway station, six miles from his home in Abingdon. Reenan, a 70-year-old retired manager for the Trustee Savings Bank, regularly listened in on non-commercial radio frequencies for amusement with his wife. Reenan played them excerpts from a tape without having previously told them what he had recorded. Two days later the journalists were shown round Mr Reenan's home-made eavesdropping studio, in which "Above the scanners was a 1960s-style tape recorder with a microphone dangling down above the scanning equipment so that the couple could tape 'interesting' conversations". Reenan was quoted as saying he was "so nervous I just want you [the reporters] to take the tape away." He added, "I didn't know what to do with it once I'd got it. I was stuck with it, and I was frightened of it," he was quoted as saying, claiming that if the paper had told him that "the tape was 'dangerous', I would have burned it or scrubbed it out." Reenan claimed that he had been so worried by the evident security breach that he had first thought of attempting to gain an audience with Diana: "I could have used a code-word, perhaps the nickname Squidgy... I was trying to save her face in a way." However, having thought on it "for a day, at least", Reenan decided that he "would not get to see Diana." So he "rang the Sun instead." Publication Published in The Sun on 23 August 1992, "Squidgygate" (initially called "Dianagate") was the front-page revelation of the existence of a tape-recording of Diana, Princess of Wales talking to a close friend, who later turned out to be Gilbey, heir to the eponymous gin fortune. Gilbey, who initially denied The Sun's charges, was a 33-year-old Lotus car-dealer who had been a friend of Diana's since childhood. Their conversation, which took place on New Year's Eve 1989, was wide-ranging. A special phone line allowed about 60,000 callers to hear the contents of the 30-minute tape for themselves, at 36 pence per minute. The tape begins in mid-conversation, with the man asking: "And so, darling, what other lows today?" To which the woman replies: "I was very bad at lunch, and I nearly started blubbing. I just felt so sad and empty and thought 'bloody hell, after all I've done for this fucking family...' It's just so desperate. 
Always being innuendo, the fact that I'm going to do something dramatic because I can't stand the confines of this marriage [...] He makes my life real torture, I've decided." The conversation covered topics as diverse as the BBC soap opera EastEnders, and the strange looks that Diana received from the Queen Mother: "It's not hatred, it's sort of pity and interest mixed in one [...] every time I look up, she's looking at me, and then looks away and smiles." Additionally, in view of a fascination with spiritualism that was later to become well-known, Diana was also heard explaining how she had startled the Bishop of Norwich by claiming to be "aware that people I have loved and [who] have died [...] are now in the spirit world, looking after me." Diana expressed worries about whether a recent meeting with Gilbey would be discovered. She also discussed a fear of becoming pregnant, and Gilbey referred to her as "Darling" 14 times, and as "Squidgy" (or "Squidge") 53 times. Second eavesdropper: Jane Norgrove On 5 September 1992, The Sun announced that the same call had also been recorded by an Oxfordshire eavesdropper, 25-year-old Jane Norgrove, who claimed she had recorded the call on New Year's Eve 1989, but "didn't even listen to it. I just put the tape in a drawer. I didn't play it until weeks later, and then I suddenly realised who was speaking on the tape." In January 1991, after sitting on the tape for a year, Norgrove approached The Sun. The paper made a copy of her recording, and offered her £200 for her time: Norgrove refused the money, claiming that she "got scared and didn't want to know about it any more." Norgrove claimed: "I wanted to speak out now to clear up all this nonsense about a conspiracy [...] I'm not part of a Palace plot to smear the Princess of Wales." The Sun had initially published the opinions of "a senior courtier [who] claims the tape is part of a plot to blacken Diana's name" and the verdicts of other anonymous Palace staffers, who said that the tape was "a sophisticated attempt to get even by friends loyal to Prince Charles after Diana's co-operation with the book Diana: Her True Story, by Andrew Morton." Such speculation had not been confined to tabloid newspapers: William Parsons, of anti-surveillance consultants Systems Elite, remarked that the technical and atmospheric requirements for such a recording to be possible (both halves of a cellular telephone call, with equal clarity, when the callers were over 100 miles apart, in different network cells), were so improbable as to arouse suspicion: "My money would not be on somebody accidentally picking it up [...] There is more to this than meets the eye." Jane Norgrove was adamant: "It was just me, recording a telephone conversation in my bedroom. Nothing more and nothing less than that." Context and reaction According to Tina Brown, Diana and Gilbey had first met each other before her marriage to Charles and reconnected in the late 1980s. At the time of publication, the Prince and Princess of Wales, engaged in acrimonious pre-divorce proceedings, were involved in a protracted battle for public sympathy which became known as the "War of the Waleses". The Duke and Duchess of York had separated months before, and now all eyes were on Charles and Diana, the next king and queen, whose marriage had been the subject of rumour for years. Speculation in the media—and in court circles—reached fever pitch. 
In his memoirs, Diana's private secretary Patrick Jephson recounts a fraught game of media one-upmanship by the feuding couple: secret briefings to friendly journalists, open collaboration with TV documentaries, and separate appearances at different public events on the same day were just some of the many strategies with which Charles and Diana attempted to force each other out of the limelight. Jephson recalls that the atmosphere at Kensington Palace at the time was "like a slowly-spreading pool of blood leaking from under a locked door." Throughout 1991 and into 1992, Diana had been secretly collaborating with a previously little-known court correspondent, Andrew Morton, on the book Diana: Her True Story, which revealed in graphic detail the previously hidden disaster that the Waleses' marriage had become. Diana's bulimia, suicide attempts and self-harm were spelt out unambiguously, as were Charles's relationship with Camilla Parker Bowles, and the intrigues of Palace officials in attempting to contain the disintegrating royal marriage. Analysis of the tape In 1993, The Sunday Times published the findings of an analysis of the "Squidgygate" tape, commissioned from Corby-based surveillance specialists Audiotel International. Audiotel concluded that the presence of data bursts on the tape was suspicious. Data bursts ("pips" at intervals of approximately 10 seconds, containing information for billing purposes) would normally be filtered out at the exchange before Cellnet transmission. That these "pips" were present at all was therefore anomalous, but they were also too fast, too loud, and exhibited a "low-frequency [audio] 'shadow'," implying "some kind of doctoring of the tape," said Audiotel's managing director, Andrew Martin, in his firm's report. "The balance of probability suggests something irregular about the recording which may indicate a rebroadcasting of the conversation some time after the conversation took place." Within a week of the Sunday Timess announcement, a further independent analysis was carried out for the same newspaper by John Nelson of Crew Green Consulting, with assistance from Martin Colloms, audio analyst for Sony International. Their analysis demonstrated convincingly that the conversation could not have been recorded by a scanning receiver in the manner claimed by Reenan. Amongst several relevant factors, there was a 50 hertz hum in the background of the "Squidgygate" conversation together with components in the recorded speech with frequencies in excess of 4 kHz. Neither could have passed through the filters of Reenan's Icom receiver or indeed have been transmitted by the cellular telephone system. The 50 Hz hum was consistent with the effect of attempting to record a telephone conversation via a direct tap on a landline. Since Gilbey was known to have been speaking from a mobile phone, inside a parked car, this left Diana's telephone line at Sandringham as the source of the recording. Nelson's analysis, written after a visit to Reenan and an examination of his unsophisticated receiving system (which consisted essentially of an Icom wideband scanning receiver and a conventional television antenna), showed that the recording was most likely to have been made as a result of a local tapping of the telephone line somewhere between Diana's telephone itself, and the local exchange. 
Furthermore, narrow-band spectrum analysis showed this 50 Hz "hum" to consist of two separate but superimposed components, possibly indicating a remixing of the tape after the initial recording. The spectral frequency content of the tape was demonstrably inconsistent with its supposed origin as an off-air recording of an analogue cellular telephone channel but quite feasible if the recording had been made via a local-end direct tap. As well as the strong technical case he made against the recording, Nelson established two other salient points. The first was that Gilbey's mobile telephone was registered to the Cellnet network. Secondly, the Cellnet base-station transmitter site in Abingdon Town, the data channel of which was the only one receivable on Reenan's receiving system at the time of his visit, was not in service at the date of the alleged telephone conversation; it was first commissioned on 3 March 1990. It was therefore not possible that the purported recording could have been made off-air by Reenan or Norgrove in December 1989 or January 1990. With regard to the data-bursts that had aroused the suspicion of Audiotel International, Colloms and Nelson stated: "We are forced to conclude that these data-bursts are not genuine, but were added later to the tape. They originated with a locally-made recording, and show that an attempt has been made to disguise a local tap by making it appear that it was recorded over cellular radio." Telecommunications company Cellnet admitted that it had automatically conducted its own internal investigation after publication of the "Squidgygate" transcript, because Gilbey had been speaking on a Cellnet phone. "It is a very sensitive issue if a cellular network has been bugged," said Cellnet spokesman William Ostrom: "We wished to satisfy ourselves exactly what happened." Cellnet's inquiry, claimed Ostrom, had "replicated" the findings of Colloms and Nelson: Cellnet announced that it was "completely satisfied that we can dismiss this as an example of our network being eavesdropped." Government reaction Suspicion about responsibility for the "Squidgygate" leak focused on the United Kingdom's security service, MI5. Home Secretary Kenneth Clarke said: "The security services are strictly controlled in their telephone tapping, and I know of no evidence whatever to indicate that they were involved." Such suggestions, he added, were "wild" and "extremely silly." On the same day as these remarks, members of the Commons all-party Home Affairs Select Committee had their first meeting with Stella Rimington, director general of MI5. Committee member John Greenway MP (Conservative) remarked that the recent "Camillagate" leak "strengthens the case for a parliamentary committee to have responsibility to oversee or scrutinise the work of the security services [...] I suspect that colleagues will want to ask how true the allegations [of MI5 complicity in the 'Camillagate' leak] are, and I suspect that she [Rimington] will refuse to tell us." No record exists of matters discussed at the meeting. The first major "Establishment" figure to question the official line on "Squidgygate" was Lord Rees-Mogg, the arch-conservative chairman of the Broadcasting Standards Authority. He had proved an early proponent of the "rogue spies" school of thought in January 1993, when he used his Times column to accuse elements within the British security services of being responsible for both the recording and its leak. "All those tapes were made within a month," he wrote. 
"The most likely explanation is that MI5 did it to protect the Royal Family at a time of danger from the IRA. I don't think there was any sense of wrong-doing, but once they were made there was the danger of a leak." A few days before Clarke's remarks, the Daily Mirror had run with "Camillagate", an eight-minute tape of Prince Charles engaging in explicit conversation with his mistress, Camilla Parker Bowles. Richard Stott, editor of the Mirror, claimed that the tape had been recorded by "a very ordinary member of the public", although the paper was not allowed to keep or to make a copy of the tape. But The Sunday Times reported that an anonymous freelance journalist from Manchester was known to be attempting to sell a complete copy of the original tape, asking price £50,000. The re-ignition of the controversy over "Squidgygate" had been instantaneous: the date of the "Camillagate" recording was known to be 18 December 1989, just weeks before the "Squidgygate" tape had been recorded. Political fallout Before any investigation into "Squidgygate" or "Camillagate" had begun, Home Secretary Kenneth Clarke told the House of Commons: "There is nothing to investigate. [...] I am absolutely certain that the allegation that this is anything to do with the security services or GCHQ [...] is being put out by newspapers, who I think feel rather guilty that they are using plainly tapped telephone calls." The Labour Party, then in Opposition, accused Kenneth Clarke of irresponsibility, issuing a statement: "He has to show that he is taking these allegations seriously, otherwise he will be perceived as being unable to control an organisation for which he is responsible." Official position John Major's government eventually published two reports, both of which cleared MI5 and MI6 of involvement in the "Royalgates" tapes. One of these was the annual report of the Interceptions Commissioner, Lord Bingham of Cornhill, who oversaw the intelligence-gathering practices of the security services. Excerpt follows: "[Lord Bingham] was impressed by the scrupulous adherence to the statutory provisions [against misconduct] of those involved in the [intelligence-gathering] procedures." In a clear reference to the "Squidgygate" affair, he commented on "the stories which occasionally circulated in the press with regard to the interceptions by MI5, MI6 and GCHQ," stating that such stories were, in his experience, "without exception false, and gave an entirely misleading impression to the public both of the extent of official interception and of the targets against which interception is directed." Conservative MP Richard Shepherd called the official reports: "two old buffers saying that in their opinion the security services act with integrity." The National Heritage Secretary Peter Brooke gave MPs "a categorical assurance that the heads of the agencies concerned have said there is no truth in the rumours." The Queen was so disturbed by the "Squidgygate" episode that she requested that MI5 conduct an investigation to discover the culprit or culprits. Since the motive could not have been financial, said the investigators—the only winners were the radio hams and the press—it must have been political. In 2002, Diana's former Personal Protection Officer, Inspector Ken Wharfe, stated that the investigation had "identified all those involved, but for legal reasons I cannot expand further, and nor is it necessary to do so." Wharfe added that: "It does [...] 
lend credence to the Princess's belief, so often dismissed by her detractors, that the Establishment was out to destroy her." See also Diana: Her True Story, a 1993 television film based on the publication of the same name by Andrew Morton, with Serena Scott Thomas as Princess Diana James Hewitt References Sources Adams, James: The New Spies: Exploring the Frontiers of Espionage; Hutchinson, London, 1994, . Cockerell, Michael: Live from Number 10: The Inside Story of Prime Ministers and Television; Faber and Faber, London, 1988, . Dorril, Stephen, and Ramsay, Robin: Smear! Wilson and the Secret State; Fourth Estate, London, 1991; . Jephson, Patrick: Shadows of a Princess; HarperCollins, London 2000; . Morton, Andrew: Diana: Her True Story; Michael O'Mara, London, 1993 (2nd edition); . Wharfe, Ken: Diana: Closely Guarded Secret; Michael O'Mara, London, 2002, . External links Part of John Nelson's technical analysis of Squidgygate recording Political scandals in the United Kingdom Royal scandals in the United Kingdom 1993 in the United Kingdom Telephone tapping Diana, Princess of Wales Field recording
Squidgygate
[ "Engineering" ]
3,955
[ "Audio engineering", "Field recording" ]
1,031,641
https://en.wikipedia.org/wiki/Pathogenicity%20island
Pathogenicity islands (PAIs), as termed in 1990, are a distinct class of genomic islands acquired by microorganisms through horizontal gene transfer. Pathogenicity islands are found in both animal and plant pathogens. Additionally, PAIs are found in both gram-positive and gram-negative bacteria. They are transferred through horizontal gene transfer events such as transfer by a plasmid, phage, or conjugative transposon. Although the general makeup of pathogenicity islands might vary among bacterial pathogen strains, all PAIs share the features common to genomic islands, which include virulence genes, functional mobility elements, and regions of homology to tRNA genes and direct repeats. PAIs therefore enable microorganisms to cause disease and also contribute to microorganisms' ability to evolve. The spread of antibiotic resistance and, more generally, the conversion of non-pathogenic strains in natural environments to strains that infect animal and plant hosts with disease are two examples of the evolutionary and ecological changes brought about by the transmission and acquisition of PAIs among bacterial species. Their impact on bacterial evolution cannot be overlooked, since a PAI that is acquired and stably incorporated can irreversibly change the bacterial genome. One species of bacteria may have more than one PAI. For example, Salmonella has at least five. An analogous genomic structure in rhizobia is termed a symbiosis island. Properties Pathogenicity islands (PAIs) are gene clusters incorporated in the genome, chromosomally or extrachromosomally, of pathogenic organisms, but are usually absent from those nonpathogenic organisms of the same or closely related species. They may be located on a bacterial chromosome or may be transferred within a plasmid or can be found in bacteriophage genomes. Every genomic island has the following characteristics: a GC-content that differs from the surrounding DNA sequence, a connection with tRNA genes, the presence of flanking repeats on both ends, and the capacity to recombine, which is usually shown by the presence of an integrase. The GC-content and codon usage of pathogenicity islands often differ from those of the rest of the genome, potentially aiding in their detection within a given DNA sequence, unless the donor and recipient of the PAI have similar GC-content. The most basic kind of mobile genetic element is an insertion sequence (IS), which usually has just one or two open reading frames that encode genes to make transposition easier. Sections inside the PAI may be rearranged or deleted with the use of IS elements. These changes encourage adaptation and aid in the generation of alternative strains. PAIs also contain transposons, which are more sophisticated forms of IS elements. The majority are surrounded by brief terminal inverted repeats that serve as homologous recombination sites, enhancing a PAI's stability. Bacteriophage integrases, also found on pathogenicity islands, are enzymes produced by bacteriophages to enable site-specific recombination between two recognition sequences; they serve as another form of mobility element and enable the insertion of PAIs into host DNA. PAIs are often associated with tRNA genes, which act as target sites for this integration event. 
Given that integration may result in tRNA truncation, it is probable that only non-essential tRNA loci found in multiple locations, or those possessing wobble capacity (the ability of a 5' base of a tRNA anticodon to mispair with the third base of an mRNA codon), can become common integration sites. They can be transferred as a single unit to new bacterial cells, thus conferring virulence to formerly benign strains. Pathogenicity islands carry genes encoding one or more virulence factors, including, but not limited to, adhesins, secretion systems (type III and IV secretion systems), toxins, invasins, modulins, effectors, superantigens, iron uptake systems, O-antigen synthesis, serum resistance, immunoglobulin A proteases, apoptosis, capsule synthesis, and plant tumorigenesis via Agrobacterium tumefaciens. Type III and type IV secretion systems, which are both expressed in Gram-negative bacteria, are the secretion systems most frequently linked to PAIs. The bacterial membranes contain the type III secretion system (T3SS), which functions essentially as a molecular syringe. The needle-like apparatus secretes effectors, which pass from the bacterial cell to the host cell via the tip of the apparatus, creating a hole in the membrane of the host cell. There are various combinations of regulation involving pathogenicity islands. The first combination is that the pathogenicity island contains the genes to regulate the virulence genes encoded on the PAI. The second combination is that the pathogenicity island contains the genes to regulate genes located outside of the pathogenicity island. Additionally, regulatory genes outside of the PAI may regulate virulence genes in the pathogenicity island. Regulatory genes typically encoded on PAIs include AraC-like proteins and two-component response regulators. PAIs can be considered unstable DNA regions as they are susceptible to deletions or mobilization. This may be due to the structure of PAIs, with direct repeats, insertion sequences and association with tRNA that enable deletion and mobilization at higher frequencies. Additionally, deletion of pathogenicity islands inserted in the genome can disrupt tRNA genes and subsequently affect the metabolism of the cell. Examples The P fimbriae island contains virulence factors such as haemolysin, pili, cytotoxic necrotising factor, and uropathogenic specific protein (USP). The Yersinia pestis high pathogenicity island I has genes regulating iron uptake and storage. The Salmonella pathogenicity islands SPI-1 and SPI-2 regulate the bacterium's invasion of, and survival within, host cells. The Rhodococcus equi virulence plasmid pathogenicity island encodes virulence factors for proliferation in macrophages. The SaPI family of Staphylococcus aureus pathogenicity islands, mobile genetic elements, encode superantigens, including the gene for toxic shock syndrome toxin, and are mobilized at high frequencies by specific bacteriophages. Phage-encoded toxins include the cholera toxin of Vibrio cholerae, the diphtheria toxin of Corynebacterium diphtheriae, the neurotoxins of Clostridium botulinum and the cytotoxin of Pseudomonas aeruginosa. Helicobacter pylori has two strains, one being more virulent than the other due to the presence of the Cag pathogenicity island. Escherichia coli pathogenicity islands carry toxin genes (e.g., Shiga toxin) in enterohemorrhagic E. coli (EHEC) strains. References External links BAC definition of pathogenicity Genetics
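The GC-content anomaly mentioned under Properties can be made concrete with a small sliding-window scan. The following is a minimal sketch rather than a published PAI-detection method; the toy genome, window size and deviation threshold are all illustrative assumptions.

```python
# Minimal sketch: flag candidate horizontally acquired regions by anomalous GC-content.
# The toy genome, window size and deviation threshold are illustrative assumptions,
# not values taken from any published pathogenicity-island finder.

def gc_content(seq: str) -> float:
    """Fraction of G and C bases in a DNA string."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq) if seq else 0.0

def gc_anomalies(genome: str, window: int = 1000, step: int = 200, threshold: float = 0.08):
    """Yield (start, end, window_gc) for windows whose GC-content deviates
    from the genome-wide average by more than `threshold`."""
    genome_gc = gc_content(genome)
    for start in range(0, max(len(genome) - window + 1, 1), step):
        chunk = genome[start:start + window]
        window_gc = gc_content(chunk)
        if abs(window_gc - genome_gc) > threshold:
            yield start, start + len(chunk), window_gc

if __name__ == "__main__":
    # Toy genome: a ~50% GC backbone with an AT-rich insert standing in for an island.
    backbone = "ACGT" * 2500
    island = "AT" * 500
    genome = backbone[:4000] + island + backbone[4000:]
    for start, end, window_gc in gc_anomalies(genome):
        print(f"candidate island-like region {start}-{end}, GC = {window_gc:.2f}")
```

Running the sketch on the toy genome reports only the windows overlapping the AT-rich insert, which is the basic signal that real island finders combine with codon usage, tRNA proximity and flanking repeats.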
Pathogenicity island
[ "Biology" ]
1,494
[ "Genetics" ]
1,031,713
https://en.wikipedia.org/wiki/Computer%20experiment
A computer experiment or simulation experiment is an experiment used to study a computer simulation, also referred to as an in silico system. This area includes computational physics, computational chemistry, computational biology and other similar disciplines. Background Computer simulations are constructed to emulate a physical system. Because these are meant to replicate some aspect of a system in detail, they often do not yield an analytic solution. Therefore, methods such as discrete event simulation or finite element solvers are used. A computer model is used to make inferences about the system it replicates. For example, climate models are often used because experimentation on an Earth-sized object is impossible. Objectives Computer experiments have been employed with many purposes in mind. Some of those include: Uncertainty quantification: Characterize the uncertainty present in a computer simulation arising from unknowns during the computer simulation's construction. Inverse problems: Discover the underlying properties of the system from the physical data. Bias correction: Use physical data to correct for bias in the simulation. Data assimilation: Combine multiple simulations and physical data sources into a complete predictive model. Systems design: Find inputs that result in optimal system performance measures. Computer simulation modeling Modeling of computer experiments typically uses a Bayesian framework. Bayesian statistics is an interpretation of the field of statistics where all evidence about the true state of the world is explicitly expressed in the form of probabilities. In the realm of computer experiments, the Bayesian interpretation would imply we must form a prior distribution that represents our prior belief on the structure of the computer model. The use of this philosophy for computer experiments started in the 1980s and is nicely summarized by Sacks et al. (1989). While the Bayesian approach is widely used, frequentist approaches have been recently discussed. The basic idea of this framework is to model the computer simulation as an unknown function of a set of inputs. The computer simulation is implemented as a piece of computer code that can be evaluated to produce a collection of outputs. Examples of inputs to these simulations are coefficients in the underlying model, initial conditions and forcing functions. It is natural to see the simulation as a deterministic function that maps these inputs into a collection of outputs. On the basis of seeing our simulator this way, it is common to refer to the collection of inputs as x, the computer simulation itself as f, and the resulting output as y = f(x). Both x and y are vector quantities, and they can be very large collections of values, often indexed by space, or by time, or by both space and time. Although f is known in principle, in practice this is not the case. Many simulators comprise tens of thousands of lines of high-level computer code, which is not accessible to intuition. For some simulations, such as climate models, evaluation of the output for a single set of inputs can require millions of computer hours. Gaussian process prior The typical model for a computer code output is a Gaussian process. For notational simplicity, assume the output y is a scalar. Owing to the Bayesian framework, we fix our belief that the function f follows a Gaussian process, f ~ GP(m(·), C(·, ·)), where m is the mean function and C is the covariance function. 
Popular mean functions are low-order polynomials, and a popular covariance function is the Matérn covariance, which includes both the exponential (ν = 1/2) and Gaussian (ν → ∞) covariances as limiting cases. Design of computer experiments The design of computer experiments has considerable differences from the design of experiments for parametric models. Since a Gaussian process prior has an infinite-dimensional representation, the concepts of A- and D-optimality criteria (see Optimal design), which focus on reducing the error in the parameters, cannot be used. Replications would also be wasteful in cases when the computer simulation has no error. Criteria used to determine a good experimental design include integrated mean squared prediction error and distance-based criteria. Popular strategies for design include Latin hypercube sampling and low-discrepancy sequences. Problems with massive sample sizes Unlike physical experiments, it is common for computer experiments to have thousands of different input combinations. Because the standard inference requires the inversion of a square matrix whose dimension n equals the number of samples, the cost grows as n³. Matrix inversion of large, dense matrices can also cause numerical inaccuracies. Currently, this problem is solved by greedy decision tree techniques, allowing effective computations for unlimited dimensionality and sample size (patent WO2013055257A1), or avoided by using approximation methods. See also Simulation Uncertainty quantification Bayesian statistics Gaussian process emulator Design of experiments Molecular dynamics Monte Carlo method Surrogate model Grey box completion and validation Artificial financial market Further reading Computational science Design of experiments Simulation
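As a concrete illustration of the workflow described above (a deterministic simulator evaluated over a space-filling Latin hypercube design and emulated with a Gaussian process using a Matérn covariance), here is a minimal sketch. It assumes the SciPy and scikit-learn libraries are available; the toy simulator function and all numerical settings are illustrative choices, not taken from any particular study.

```python
# Minimal sketch of a computer experiment: a deterministic "simulator" is evaluated
# at a Latin hypercube design and emulated with a Gaussian-process (Matern) prior.
# The toy simulator and all settings below are illustrative assumptions.

import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def simulator(x):
    """Stand-in for an expensive deterministic code: y = f(x), x in [0, 1]^2."""
    return np.sin(6.0 * x[:, 0]) + 0.5 * np.cos(4.0 * x[:, 1])

# Space-filling design: Latin hypercube sample of 40 runs in d = 2 inputs.
sampler = qmc.LatinHypercube(d=2, seed=0)
X_train = sampler.random(40)
y_train = simulator(X_train)

# Gaussian-process emulator; nu = 2.5 gives a Matern covariance between the
# exponential (nu = 1/2) and Gaussian (nu -> infinity) limits mentioned above.
gp = GaussianProcessRegressor(kernel=Matern(length_scale=0.2, nu=2.5),
                              normalize_y=True)
gp.fit(X_train, y_train)

# Emulate the simulator at new inputs, with predictive uncertainty.
X_new = sampler.random(5)
mean, std = gp.predict(X_new, return_std=True)
for x, m, s in zip(X_new, mean, std):
    print(f"x = {x.round(3)}  emulator mean = {m:.3f}  +/- {2 * s:.3f}")
```

Because the simulator is deterministic, no replicate runs are included in the design; the predictive standard deviation reflects only the emulator's interpolation uncertainty between design points.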
Computer experiment
[ "Mathematics" ]
944
[ "Computational science", "Applied mathematics" ]
1,031,810
https://en.wikipedia.org/wiki/MSISDN
MSISDN is a number uniquely identifying a subscription in a Global System for Mobile communications or a Universal Mobile Telecommunications System mobile network. It is the mapping of the telephone number to the subscriber identity module in a mobile or cellular phone. This abbreviation has several interpretations, the most common one being "Mobile Station International Subscriber Directory Number". The MSISDN and international mobile subscriber identity (IMSI) are two important numbers used for identifying a mobile subscriber. The IMSI is stored in the SIM (the card inserted into the mobile phone), and uniquely identifies the mobile station, its home wireless network, and the home country of the home wireless network. The MSISDN is used for routing calls to the subscriber. The IMSI is often used as a key in the home location register ("subscriber database") and the MSISDN is the number normally dialed to connect a call to the mobile phone. A SIM has a unique IMSI that does not change, while the MSISDN can change in time, i.e. different MSISDNs can be associated with the SIM. The MSISDN follows the numbering plan defined in the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) recommendation E.164. Abbreviation Depending on the source or standardization body, the abbreviation MSISDN can be written out in several different ways. MSISDN format The ITU-T recommendation E.164 limits the maximum length of an MSISDN to 15 digits. One to three digits are reserved for the country code. Prefixes are not included (e.g., 00 prefixes an international MSISDN when dialing from Sweden). The minimum length of the MSISDN is not specified by ITU-T but is instead specified in the national numbering plans by the telecommunications regulator in each country. In GSM and its variant DCS 1800, MSISDN is built up as MSISDN = CC + NDC + SN CC = Country Code NDC = National Destination Code, identifies one or part of a PLMN SN = Subscriber Number In the GSM variant PCS 1900, MSISDN is built up as MSISDN = CC + NPA + SN CC = Country Code NPA = Number Planning Area SN = Subscriber Number The country code identifies a country or geographical area, and may be between one and three digits. The ITU defines and maintains the list of assigned country codes. Example The number +880 15 00121121 (a Teletalk hotline number) corresponds to the MSISDN 8801500121121, which breaks down as CC = 880, NDC = 15 and SN = 00121121. For further information on the MSISDN format, see the ITU-T specification E.164. See also E.164 International Mobile Equipment Identity (IMEI) International Mobile Subscriber Identity (IMSI) SIM card Mobile phone GSM HLR E.214 Mobile identification number References External links http://www.3gpp.org, GSM 03.03 (see section 3.3) http://www.openmobilealliance.org http://www.itu.int, E.164, E.212, E.213, E.214 https://web.archive.org/web/20110222090438/http://www.gsmworld.com/ ITU-T recommendations Telephone numbers Identifiers
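The CC + NDC + SN decomposition described above can be illustrated with a short parsing routine. This is a minimal sketch under simplifying assumptions: the country-code table holds only a few illustrative entries, and the two-digit NDC length is assumed for the example number; real national numbering plans vary in NDC and SN lengths.

```python
# Minimal sketch: split an E.164-style MSISDN into CC + NDC + SN.
# The country-code table and the assumed 2-digit NDC are illustrative only;
# real national numbering plans differ in NDC and SN lengths.

KNOWN_COUNTRY_CODES = {"880": "Bangladesh", "46": "Sweden", "44": "United Kingdom"}

def split_msisdn(msisdn: str, ndc_length: int = 2):
    digits = "".join(ch for ch in msisdn if ch.isdigit())
    if not 1 <= len(digits) <= 15:            # E.164 allows at most 15 digits
        raise ValueError("MSISDN must contain 1-15 digits")
    # Try the longest country-code match first (country codes are 1-3 digits).
    for cc_len in (3, 2, 1):
        cc = digits[:cc_len]
        if cc in KNOWN_COUNTRY_CODES:
            ndc = digits[cc_len:cc_len + ndc_length]
            sn = digits[cc_len + ndc_length:]
            return {"cc": cc, "country": KNOWN_COUNTRY_CODES[cc], "ndc": ndc, "sn": sn}
    raise ValueError("country code not recognised by this toy table")

print(split_msisdn("+880 15 00121121"))
# {'cc': '880', 'country': 'Bangladesh', 'ndc': '15', 'sn': '00121121'}
```

The prefix-stripping step mirrors the note above that dialing prefixes such as 00 are not part of the MSISDN itself.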
MSISDN
[ "Mathematics" ]
754
[ "Mathematical objects", "Numbers", "Telephone numbers" ]
1,031,816
https://en.wikipedia.org/wiki/Chemokine
Chemokines (), or chemotactic cytokines, are a family of small cytokines or signaling proteins secreted by cells that induce directional movement of leukocytes, as well as other cell types, including endothelial and epithelial cells. In addition to playing a major role in the activation of host immune responses, chemokines are important for biological processes, including morphogenesis and wound healing, as well as in the pathogenesis of diseases like cancers. Cytokine proteins are classified as chemokines according to behavior and structural characteristics. In addition to being known for mediating chemotaxis, chemokines are all approximately 8–10 kilodaltons in mass and have four cysteine residues in conserved locations that are key to forming their 3-dimensional shape. These proteins have historically been known under several other names including the SIS family of cytokines, SIG family of cytokines, SCY family of cytokines, Platelet factor-4 superfamily or intercrines. Some chemokines are considered pro-inflammatory and can be induced during an immune response to recruit cells of the immune system to a site of infection, while others are considered homeostatic and are involved in controlling the migration of cells during normal processes of tissue maintenance or development. Chemokines are found in all vertebrates, some viruses and some bacteria, but none have been found in other invertebrates. Chemokines have been classified into four main subfamilies: CXC, CC, CX3C and C. All of these proteins exert their biological effects by interacting with G protein-linked transmembrane receptors called chemokine receptors, that are selectively found on the surfaces of their target cells. Function The major role of chemokines is to act as a chemoattractant to guide the migration of cells. Cells that are attracted by chemokines follow a signal of increasing chemokine concentration towards the source of the chemokine. Some chemokines control cells of the immune system during processes of immune surveillance, such as directing lymphocytes to the lymph nodes so they can screen for invasion of pathogens by interacting with antigen-presenting cells residing in these tissues. These are known as homeostatic chemokines and are produced and secreted without any need to stimulate their source cells. Some chemokines have roles in development; they promote angiogenesis (the growth of new blood vessels), or guide cells to tissues that provide specific signals critical for cellular maturation. Other chemokines are inflammatory and are released from a wide variety of cells in response to bacterial infection, viruses and agents that cause physical damage such as silica or the urate crystals that occur in gout. Their release is often stimulated by pro-inflammatory cytokines such as interleukin 1. Inflammatory chemokines function mainly as chemoattractants for leukocytes, recruiting monocytes, neutrophils and other effector cells from the blood to sites of infection or tissue damage. Certain inflammatory chemokines activate cells to initiate an immune response or promote wound healing. They are released by many different cell types and serve to guide cells of both innate immune system and adaptive immune system. Types by function Chemokines are functionally divided into two groups: Homeostatic: are constitutively produced in certain tissues and are responsible for basal leukocyte migration. These include: CCL14, CCL19, CCL20, CCL21, CCL25, CCL27, CXCL12 and CXCL13. 
This classification is not strict; for example, CCL20 can also act as a pro-inflammatory chemokine. Inflammatory: these are formed under pathological conditions (upon pro-inflammatory stimuli, such as IL-1, TNF-alpha, LPS, or viruses) and actively participate in the inflammatory response, attracting immune cells to the site of inflammation. Examples are: CXCL-8, CCL2, CCL3, CCL4, CCL5, CCL11, CXCL10. Homing The main function of chemokines is to manage the migration of leukocytes (homing) to the appropriate anatomical locations during inflammatory and homeostatic processes. Basal: homeostatic chemokines are basally produced in the thymus and lymphoid tissues. Their homeostatic function in homing is best exemplified by the chemokines CCL19 and CCL21 (expressed within lymph nodes and on lymphatic endothelial cells) and their receptor CCR7 (expressed on cells destined for homing to these organs). These ligands make it possible to route antigen-presenting cells (APCs) to lymph nodes during the adaptive immune response. Other homeostatic chemokine receptors include CCR9, CCR10, and CXCR5, which are important as part of the cell addresses for tissue-specific homing of leukocytes. CCR9 supports the migration of leukocytes into the intestine, CCR10 supports migration to the skin, and CXCR5 supports the migration of B cells to the follicles of lymph nodes. Likewise, CXCL12 (SDF-1), constitutively produced in the bone marrow, promotes proliferation of progenitor B cells in the bone marrow microenvironment. Inflammatory: inflammatory chemokines are produced in high concentrations during infection or injury and determine the migration of inflammatory leukocytes into the damaged area. Typical inflammatory chemokines include: CCL2, CCL3 and CCL5, CXCL1, CXCL2 and CXCL8. A typical example is CXCL-8, which acts as a chemoattractant for neutrophils. In contrast to the homeostatic chemokine receptors, there is significant promiscuity (redundancy) associated with the binding of inflammatory chemokines to their receptors. This often complicates research on receptor-specific therapeutics in this area. Types by cell attracted Monocytes / macrophages: the key chemokines that attract these cells to the site of inflammation include: CCL2, CCL3, CCL5, CCL7, CCL8, CCL13, CCL17 and CCL22. T-lymphocytes: the four key chemokines that are involved in the recruitment of T lymphocytes to the site of inflammation are: CCL2, CCL1, CCL22 and CCL17. Furthermore, CXCR3 expression by T cells is induced following T-cell activation, and activated T cells are attracted to sites of inflammation where the IFN-γ-inducible chemokines CXCL9, CXCL10 and CXCL11 are secreted. Mast cells: express several chemokine receptors on their surface: CCR1, CCR2, CCR3, CCR4, CCR5, CXCR2, and CXCR4. The ligands of these receptors, CCL2 and CCL5, play an important role in mast cell recruitment and activation in the lung. There is also evidence that CXCL8 might be inhibitory to mast cells. Eosinophils: the migration of eosinophils into various tissues involves several chemokines of the CC family: CCL11, CCL24, CCL26, CCL5, CCL7, CCL13, and CCL3. The chemokines CCL11 (eotaxin) and CCL5 (RANTES) act through the specific receptor CCR3 on the surface of eosinophils, and eotaxin plays an essential role in the initial recruitment of eosinophils into the lesion. Neutrophils: are regulated primarily by CXC chemokines. For example, CXCL8 (IL-8) is a chemoattractant for neutrophils and also activates their metabolism and degranulation. 
Structural characteristics Proteins are classified into the chemokine family based on their structural characteristics, not just their ability to attract cells. All chemokines are small, with a molecular mass of between 8 and 10 kDa. They are approximately 20-50% identical to each other; that is, they share gene sequence and amino acid sequence homology. They all also possess conserved amino acids that are important for creating their 3-dimensional or tertiary structure, such as (in most cases) four cysteines that interact with each other in pairs to create a Greek key shape that is a characteristic of chemokines. Intramolecular disulfide bonds typically join the first to third, and the second to fourth cysteine residues, numbered as they appear in the protein sequence of the chemokine. Typical chemokine proteins are produced as pro-peptides, beginning with a signal peptide of approximately 20 amino acids that gets cleaved from the active (mature) portion of the molecule during the process of its secretion from the cell. The first two cysteines, in a chemokine, are situated close together near the N-terminal end of the mature protein, with the third cysteine residing in the centre of the molecule and the fourth close to the C-terminal end. A loop of approximately ten amino acids follows the first two cysteines and is known as the N-loop. This is followed by a single-turn helix, called a 310-helix, three β-strands and a C-terminal α-helix. These helices and strands are connected by turns called 30s, 40s and 50s loops; the third and fourth cysteines are located in the 30s and 50s loops. Types by structure Members of the chemokine family are divided into four groups depending on the spacing of their first two cysteine residues. Thus the nomenclature for chemokines is, e.g.: CCL1 for the ligand 1 of the CC-family of chemokines, and CCR1 for its respective receptor. CC chemokines The CC chemokine (or β-chemokine) proteins have two adjacent cysteines (amino acids), near their amino terminus. There have been at least 27 distinct members of this subgroup reported for mammals, called CC chemokine ligands (CCL)-1 to -28; CCL10 is the same as CCL9. Chemokines of this subfamily usually contain four cysteines (C4-CC chemokines), but a small number of CC chemokines possess six cysteines (C6-CC chemokines). C6-CC chemokines include CCL1, CCL15, CCL21, CCL23 and CCL28. CC chemokines induce the migration of monocytes and other cell types such as NK cells and dendritic cells. Examples of CC chemokine include monocyte chemoattractant protein-1 (MCP-1 or CCL2) which induces monocytes to leave the bloodstream and enter the surrounding tissue to become tissue macrophages. CCL5 (or RANTES) attracts cells such as T cells, eosinophils and basophils that express the receptor CCR5. Increased CCL11 levels in blood plasma are associated with aging (and reduced neurogenesis) in mice and humans. CXC chemokines The two N-terminal cysteines of CXC chemokines (or α-chemokines) are separated by one amino acid, represented in this name with an "X". There have been 17 different CXC chemokines described in mammals, that are subdivided into two categories, those with a specific amino acid sequence (or motif) of glutamic acid-leucine-arginine (or ELR for short) immediately before the first cysteine of the CXC motif (ELR-positive), and those without an ELR motif (ELR-negative). ELR-positive CXC chemokines specifically induce the migration of neutrophils, and interact with chemokine receptors CXCR1 and CXCR2. 
An example of an ELR-positive CXC chemokine is interleukin-8 (IL-8), which induces neutrophils to leave the bloodstream and enter into the surrounding tissue. Other CXC chemokines that lack the ELR motif, such as CXCL13, tend to be chemoattractant for lymphocytes. CXC chemokines bind to CXC chemokine receptors, of which seven have been discovered to date, designated CXCR1-7. C chemokines The third group of chemokines is known as the C chemokines (or γ chemokines), and is unlike all other chemokines in that it has only two cysteines; one N-terminal cysteine and one cysteine downstream. Two chemokines have been described for this subgroup and are called XCL1 (lymphotactin-α) and XCL2 (lymphotactin-β). CX3C chemokines A fourth group has also been discovered and members have three amino acids between the two cysteines and is termed CX3C chemokine (or d-chemokines). The only CX3C chemokine discovered to date is called fractalkine (or CX3CL1). It is both secreted and tethered to the surface of the cell that expresses it, thereby serving as both a chemoattractant and as an adhesion molecule. Receptors Chemokine receptors are G protein-coupled receptors containing 7 transmembrane domains that are found on the surface of leukocytes. Approximately 19 different chemokine receptors have been characterized to date, which are divided into four families depending on the type of chemokine they bind; CXCR that bind CXC chemokines, CCR that bind CC chemokines, CX3CR1 that binds the sole CX3C chemokine (CX3CL1), and XCR1 that binds the two XC chemokines (XCL1 and XCL2). They share many structural features; they are similar in size (with about 350 amino acids), have a short, acidic N-terminal end, seven helical transmembrane domains with three intracellular and three extracellular hydrophilic loops, and an intracellular C-terminus containing serine and threonine residues important for receptor regulation. The first two extracellular loops of chemokine receptors each has a conserved cysteine residue that allow formation of a disulfide bridge between these loops. G proteins are coupled to the C-terminal end of the chemokine receptor to allow intracellular signaling after receptor activation, while the N-terminal domain of the chemokine receptor determines ligand binding specificity. Signal transduction Chemokine receptors associate with G-proteins to transmit cell signals following ligand binding. Activation of G proteins, by chemokine receptors, causes the subsequent activation of an enzyme known as phospholipase C (PLC). PLC cleaves a molecule called phosphatidylinositol (4,5)-bisphosphate (PIP2) into two second messenger molecules known as Inositol triphosphate (IP3) and diacylglycerol (DAG) that trigger intracellular signaling events; DAG activates another enzyme called protein kinase C (PKC), and IP3 triggers the release of calcium from intracellular stores. These events promote many signaling cascades (such as the MAP kinase pathway) that generate responses like chemotaxis, degranulation, release of superoxide anions and changes in the avidity of cell adhesion molecules called integrins within the cell harbouring the chemokine receptor. 
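To make the cysteine-spacing rules from the "Types by structure" section concrete, the following minimal sketch classifies a mature chemokine-like sequence by the spacing of its first two cysteines. It is a toy illustration only; real subfamily assignment also relies on overall sequence and structural homology, and the example strings below are synthetic placeholders rather than real protein sequences.

```python
# Minimal sketch: assign a chemokine subfamily from the spacing of the first two
# cysteines of a mature sequence, following the CC / CXC / CX3C / C definitions above.
# Toy illustration only: real classification also uses overall homology, and the
# example sequences below are synthetic placeholders, not real chemokines.

def chemokine_subfamily(seq: str) -> str:
    cys = [i for i, aa in enumerate(seq.upper()) if aa == "C"]
    if len(cys) < 2:
        return "not chemokine-like (fewer than two cysteines)"
    if len(cys) == 2:
        # C (gamma) chemokines such as XCL1/XCL2 have only two cysteines in total.
        return "C (gamma) chemokine"
    spacing = cys[1] - cys[0] - 1  # residues between the first two cysteines
    return {0: "CC (beta) chemokine",
            1: "CXC (alpha) chemokine",
            3: "CX3C (delta) chemokine"}.get(spacing, "unclassified spacing")

examples = {
    "GGGGCCGGGGGGCGGGGC": "adjacent cysteines",
    "GGGGCACGGGGGCGGGGC": "one residue between cysteines",
    "GGGGCAAACGGGGCGGGC": "three residues between cysteines",
    "GGGGCGGGGGGGGGCGGG": "only two cysteines in total",
}
for seq, note in examples.items():
    print(f"{note:35s} -> {chemokine_subfamily(seq)}")
```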
Infection control The discovery that the β chemokines RANTES, MIP (macrophage inflammatory proteins) 1α and 1β (now known as CCL5, CCL3 and CCL4 respectively) suppress HIV-1 provided the initial connection between chemokines and the control of HIV infection, and indicated that these molecules might control infection as part of immune responses in vivo, and that sustained delivery of such inhibitors has the capacity for long-term infection control. The association of chemokine production with antigen-induced proliferative responses, more favorable clinical status in HIV infection, as well as with an uninfected status in subjects at risk for infection, suggests a positive role for these molecules in controlling the natural course of HIV infection. See also Paracrine signalling References External links The cytokine family database – Chemokines at kumamoto-u.ac.jp The correct chemokine nomenclature at rndsystems.com Cytokines Signal transduction
Chemokine
[ "Chemistry", "Biology" ]
3,655
[ "Biochemistry", "Cytokines", "Neurochemistry", "Signal transduction" ]
1,031,895
https://en.wikipedia.org/wiki/Stephen%20Tweedie
Stephen C. Tweedie is a Scottish software developer who is known for his work on the Linux kernel, in particular his work on filesystems. After becoming involved with the development of the ext2 filesystem, initially working on performance issues, he led the development of the ext3 filesystem, which involved adding a journaling layer (JBD) to the ext2 filesystem. For his work on the journaling layer, he has been described by fellow Linux developer Andrew Morton as "a true artisan". Born in Edinburgh, Scotland in 1969, Tweedie studied computer science at Churchill College, Cambridge and the University of Edinburgh, where he did his thesis on Contention and Achieved Performance in Multicomputer Wormhole Routing Networks. Having contributed to the Linux kernel in his spare time since the early 1990s and worked on VMS filesystem support for DEC for two years, Tweedie was employed by Linux distributor Red Hat, where he continues to work on the Linux kernel. Tweedie has published a number of papers on Linux, including Design and Implementation of the Second Extended Filesystem in 1994, Journaling the Linux ext2fs Filesystem in 1998, and Planned Extensions to the Linux Ext2/Ext3 Filesystem in 2002. Tweedie is also a frequent speaker on the subject of Linux kernel development at technical conferences. Amongst others, he has given talks on Linux kernel development at the 1997 and 1998 USENIX Annual Technical Conferences and the 2000 UKUUG conference in London, and he gave the keynote speech at the Ottawa Linux Symposium in 2002. References 1969 births Living people Linux kernel programmers Alumni of Churchill College, Cambridge Alumni of the University of Edinburgh
Stephen Tweedie
[ "Technology" ]
347
[ "Computing stubs", "Computer specialist stubs" ]
1,032,013
https://en.wikipedia.org/wiki/Darwin%20%28spacecraft%29
Darwin was a suggested ESA Cornerstone mission which would have involved a constellation of four to nine spacecraft designed to directly detect Earth-like planets orbiting nearby stars and search for evidence of life on these planets. The most recent design envisaged three free-flying space telescopes, each three to four metres in diameter, flying in formation as an astronomical interferometer. These telescopes were to redirect light from distant stars and planets to a fourth spacecraft, which would have contained the beam combiner, spectrometers, and cameras for the interferometer array, and which would have also acted as a communications hub. There was also an earlier design, called the "Robin Laurance configuration," which included six 1.5 metre telescopes, a beam combiner spacecraft, and a separate power and communications spacecraft. The study of this proposed mission ended in 2007 with no further activities planned. To produce an image, the telescopes would have had to operate in formation with distances between the telescopes controlled to within a few micrometres, and the distance between the telescopes and receiver controlled to within about one nanometre. Several more detailed studies would have been needed to determine whether technology capable of such precision is actually feasible. Concept The space telescopes were to observe in the infrared part of the electromagnetic spectrum. As well as studying extrasolar planets, the telescopes would probably have been useful for general purpose imaging, producing very high resolution (i.e. milliarcsecond) infrared images, allowing detailed study of a variety of astrophysical processes. The infrared region was chosen because in the visible spectrum an Earth-like planet is outshone by its star by a factor of a billion. However, in the infrared, the difference is less by a few orders of magnitude. According to a 2000 ESA bulletin, all spacecraft components in the optical path would have to be passively cooled to 40 kelvins to allow infrared observations to take place. The planet search would have used a nulling interferometer configuration. In this system, phase shifts would be introduced into the three beams, so that light from the central star would suffer destructive interference and cancel itself out. However, light from any orbiting planets would not cancel out, as the planets are offset slightly from the star's position. This would allow planets to be detected, despite the much brighter signal from the star. For planet detection, the telescopes would operate in an imaging mode. The detection of an Earth-like planet would require about 10 hours of observation in total, spread out over several months. A 2002 design which would have used 1.5 metre mirrors was expected to take about 100 hours to get a spectrum of a possibly Earth-like planet. Were the Darwin spacecraft to detect a suitable planet, a more detailed study of its atmosphere would have been made by taking an infrared spectrum of the planet. By analyzing this spectrum, the chemistry of the atmosphere could be determined, and this could provide evidence for life on the planet. The presence of oxygen and water vapour in the atmosphere could be evidence for life. Oxygen is very reactive so if large amounts of oxygen exist in a planet's atmosphere some process such as photosynthesis must be continuously producing it. The presence of oxygen alone, however, is not conclusive evidence for life. 
Jupiter's moon Europa, for example, has a tenuous oxygen atmosphere thought to be produced by radiolysis of water molecules. Numerical simulations have shown that under proper conditions it is possible to build up an oxygen atmosphere via photolysis of carbon dioxide. Photolysis of water vapor and carbon dioxide produces hydroxyl ions and atomic oxygen, respectively, and these in turn produce oxygen in small concentrations, with hydrogen escaping into space. When O2 is produced by H2O photolysis at high altitude, hydrogenous compounds like H+, OH− and H2O are produced which attack very efficiently O3 and prevent its accumulation. The only known way to have a significant amount of O3 in the atmosphere is that O2 be produced at low altitude, e.g. by biological photosynthesis, and that little H2O gets to high altitudes where UV is present. For terrestrial planets, the simultaneous presence of O3, H2O and CO2 in the atmosphere appears to be a reliable biosignature, and the Darwin spacecraft would have been capable of detecting these atmospheric components. Candidate planets Planet Gliese 581 d, discovered in 2007, was considered a good candidate for the Darwin project. It orbits within the theoretical habitable zone of its star, and scientists surmise that conditions on the planet might be conducive to supporting life. Similar initiatives The interferometric version of NASA's Terrestrial Planet Finder mission is similar in concept to Darwin and also has very similar scientific aims. According to NASA's 2007 budget documentation, released on February 6, 2006, the project was deferred indefinitely, and in June 2011 the project was reported as cancelled. Antoine Labeyrie has proposed a much larger space-based astronomical interferometer similar to Darwin, but with the individual telescopes positioned in a spherical arrangement and with an emphasis on interferometric imaging. This Hypertelescope project would be much more expensive and complex than the Darwin and TPF missions, involving many large free-flying spacecraft. References External links Darwin: study ended, no further activities planned DARWIN Exoplanet Space Mission European Space Agency space probes Interferometric telescopes Space telescopes Exoplanet search projects Astronomy projects
Darwin (spacecraft)
[ "Astronomy" ]
1,108
[ "Astronomy projects", "Exoplanet search projects", "Space telescopes" ]
1,032,155
https://en.wikipedia.org/wiki/All%20one%20polynomial
In mathematics, an all one polynomial (AOP) is a polynomial in which all coefficients are one. Over the finite field of order two, conditions for the AOP to be irreducible are known, which allow this polynomial to be used to define efficient algorithms and circuits for multiplication in finite fields of characteristic two. The AOP is a 1-equally spaced polynomial. Definition An AOP of degree m has all terms from x^m to x^0 with coefficients of 1, and can be written as AOP(x) = x^m + x^(m−1) + ... + x + 1, or AOP(x) = ∑_{i=0}^{m} x^i, or AOP(x) = (x^(m+1) − 1)/(x − 1). Thus the roots of the all one polynomial of degree m are all (m+1)th roots of unity other than unity itself. Properties Over GF(2) the AOP has many interesting properties, including: The Hamming weight of the AOP is m + 1, the maximum possible for its degree The AOP is irreducible if and only if m + 1 is prime and 2 is a primitive root modulo m + 1 (over GF(p) with prime p, it is irreducible if and only if m + 1 is prime and p is a primitive root modulo m + 1) The only AOP that is a primitive polynomial is x^2 + x + 1. Despite the fact that the Hamming weight is large, because of the ease of representation and other improvements there are efficient implementations in areas such as coding theory and cryptography. Over the rational numbers ℚ, the AOP is irreducible whenever m + 1 is a prime p, and in these cases it is the pth cyclotomic polynomial. References External links Field (mathematics) Polynomials
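The GF(2) irreducibility criterion quoted above (m + 1 prime and 2 a primitive root modulo m + 1) is simple to check computationally. A minimal Python sketch, with illustrative function names that do not come from any particular library:

```python
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def is_primitive_root(a: int, p: int) -> bool:
    """True if a generates the multiplicative group modulo the prime p,
    i.e. the powers a, a^2, ..., a^(p-1) run through every nonzero residue."""
    seen = set()
    x = 1
    for _ in range(p - 1):
        x = (x * a) % p
        seen.add(x)
    return len(seen) == p - 1

def aop_irreducible_over_gf2(m: int) -> bool:
    """Criterion from the article: x^m + ... + x + 1 is irreducible over
    GF(2) iff m + 1 is prime and 2 is a primitive root modulo m + 1."""
    return is_prime(m + 1) and is_primitive_root(2, m + 1)

# Degrees 2 <= m <= 100 whose AOP is irreducible over GF(2):
print([m for m in range(2, 101) if aop_irreducible_over_gf2(m)])
# e.g. m = 2 gives x^2 + x + 1, the only AOP that is also primitive.
```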
All one polynomial
[ "Mathematics" ]
324
[ "Polynomials", "Algebra" ]
1,032,170
https://en.wikipedia.org/wiki/Equally%20spaced%20polynomial
An equally spaced polynomial (ESP) is a polynomial used in finite fields, specifically GF(2) (binary). An s-ESP of degree sm can be written as ESP(x) = ∑_{i=0}^{m} x^(si) for s ≥ 1, or equivalently ESP(x) = x^(sm) + x^(s(m−1)) + ... + x^s + 1. Properties Over GF(2) the ESP (which for s = 1 is the all one polynomial, AOP) has many interesting properties, including: The Hamming weight of the ESP is m + 1. A 1-ESP is known as an all one polynomial (AOP) and has additional properties including the above. References Field (mathematics) Polynomials
Equally spaced polynomial
[ "Mathematics" ]
119
[ "Polynomials", "Algebra" ]
1,032,215
https://en.wikipedia.org/wiki/1089%20%28number%29
1089 is the integer after 1088 and before 1090. It is a square number (33 squared), a nonagonal number, a 32-gonal number, a 364-gonal number, and a centered octagonal number. 1089 is the first reverse-divisible number. The next is 2178 , and they are the only four-digit numbers that divide their reverse. In magic 1089 is widely used in magic tricks because it can be "produced" from any two three-digit numbers. This allows it to be used as the basis for a Magician's Choice. For instance, one variation of the book test starts by having the spectator choose any two suitable numbers and then apply some basic maths to produce a single four-digit number. That number is always 1089. The spectator is then asked to turn to page 108 of a book and read the 9th word, which the magician has memorized. To the audience it looks like the number is random, but through manipulation, the result is always the same. In base 10, the following steps always yield 1089: Take any three-digit number where the first and last digits differ by more than 1. Reverse the digits, and subtract the smaller from the larger one. Add to this result the number produced by reversing its digits. For example, if the spectator chooses 237 (or 732): 732 − 237 = 495 495 + 594 = 1089 as expected. On the other hand, if the spectator chooses 102 (or 201): 201 − 102 = 99 99 + 99 ≠ 1089 contradicting the rule. However, if we amend the third rule by reading 99 as a three-digit number 099 and take its reverse, we obtain: 201 − 102 = 099 099 + 990 = 1089 as expected. Explanation The spectator's 3-digit number can be written as 100 × A + 10 × B + 1 × C, and its reversal as 100 × C + 10 × B + 1 × A, where 1 ≤ A ≤ 9, 0 ≤ B ≤ 9 and 1 ≤ C ≤ 9. Their difference is 99 × (A − C) (For convenience, we assume A > C; if A < C, we first swap A and C.). If A − C is 0, the difference is 0, and we do not get a 3-digit number for the next step. If A − C is 1, the difference is 99. Using a leading 0 gives us a 3-digit number for the next step. 99 × (A − C) can also be written as 99 × [(A − C) − 1] + 99 = 100 × [(A − C) − 1] − 1 × [(A − C) − 1] + 90 + 9 = 100 × [(A − C) − 1] + 90 + 9 − (A − C) + 1 = 100 × [(A − C) − 1] + 10 × 9 + 1 × [10 − (A − C)]. (The first digit is (A − C) − 1, the second is 9 and the third is 10 − (A − C). As 2 ≤ A − C ≤ 9, both the first and third digits are guaranteed to be single digits.) Its reversal is 100 × [10 − (A − C)] + 10 × 9 + 1 × [(A − C) − 1]. The sum is thus 101 × [(A − C) − 1] + 20 × 9 + 101 × [10 − (A − C)] = 101 × [(A − C) − 1 + 10 − (A − C)] + 20 × 9 = 101 × [−1 + 10] + 180 = 1089. Other properties Multiplying the number 1089 by the integers from 1 to 9 produces a pattern: multipliers adding up to 10 give products that are the digit reversals of each other: 1 × 1089 = 1089 ↔ 9 × 1089 = 9801 2 × 1089 = 2178 ↔ 8 × 1089 = 8712 3 × 1089 = 3267 ↔ 7 × 1089 = 7623 4 × 1089 = 4356 ↔ 6 × 1089 = 6534 5 × 1089 = 5445 ↔ 5 × 1089 = 5445 Also note the patterns within each column: 1 × 1089 = 1089 2 × 1089 = 2178 3 × 1089 = 3267 4 × 1089 = 4356 5 × 1089 = 5445 6 × 1089 = 6534 7 × 1089 = 7623 8 × 1089 = 8712 9 × 1089 = 9801 Numbers formed analogously in other bases, e.g. octal 1067 or hexadecimal 10EF, also have these properties. Extragalactic astronomy The numerical value of the cosmic microwave background radiation redshift is about ( corresponds to present time) Other uses In the Rich Text Format, the language code 1089 indicates the text is in Swahili. References Integers
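The steps described above translate directly into code. A small Python sketch, assuming base 10 and the leading-zero convention used for differences such as 099:

```python
def reverse_digits(n: int, width: int = 3) -> int:
    """Reverse n as a zero-padded string of the given width,
    so 99 is read as 099 and reverses to 990."""
    return int(str(n).zfill(width)[::-1])

def magic_1089(n: int) -> int:
    """Apply the trick to a three-digit number whose first and last digits differ."""
    assert 100 <= n <= 999 and n // 100 != n % 10
    diff = abs(n - reverse_digits(n))
    return diff + reverse_digits(diff)

print(magic_1089(237))   # 732 - 237 = 495; 495 + 594 = 1089
print(magic_1089(102))   # 201 - 102 = 099; 099 + 990 = 1089
# Every admissible starting number gives the same result:
assert all(magic_1089(n) == 1089
           for n in range(100, 1000) if n // 100 != n % 10)
```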
1089 (number)
[ "Mathematics" ]
1,051
[ "Elementary mathematics", "Integers", "Mathematical objects", "Numbers" ]
1,032,221
https://en.wikipedia.org/wiki/National%20Nanotechnology%20Initiative
The National Nanotechnology Initiative (NNI) is a research and development initiative which provides a framework to coordinate nanoscale research and resources among United States federal government agencies and departments. History Mihail C. Roco proposed the initiative in a 1999 presentation to the White House under the Clinton administration. The NNI was officially launched in 2000 and received funding for the first time in FY2001. President Bill Clinton advocated nanotechnology development. In a 21 January 2000 speech at the California Institute of Technology, Clinton stated that "Some of our research goals may take twenty or more years to achieve, but that is precisely why there is an important role for the federal government." President George W. Bush further increased funding for nanotechnology. On 3 December 2003 Bush signed into law the 21st Century Nanotechnology Research and Development Act (), which authorizes expenditures for five of the participating agencies totaling $3.63 billion over four years.. This law is an authorization, not an appropriation, and subsequent appropriations for these five agencies have not met the goals set out in the 2003 Act. However, there are many agencies involved in the Initiative that are not covered by the Act, and requested budgets under the Initiative for all participating agencies in Fiscal Years 2006 – 2015 totaled over $1 billion each. In February 2014, the National Nanotechnology Initiative released a Strategic Plan outlining updated goals and "program component areas" ," as required under the terms of the Act. This document supersedes the NNI Strategic Plans released in 2004 and 2007. The NNI's budget supplement proposed by the Obama administration for Fiscal Year 2015 provides $1.5 billion in requested funding. The cumulative NNI investment since fiscal year 2001, including the 2015 request, totals almost $21 billion. Cumulative investments in nanotechnology-related environmental, health, and safety research since 2005 to 2015 total nearly $900 million. The Federal agencies with the largest investments are the National Institutes of Health, National Science Foundation, Department of Energy, Department of Defense, and the National Institute of Standards and Technology. The NNI received increased support for emerging technologies during the Trump administration and a special focus on clean energy and mitigating climate change during the Biden administration. NNI cumulative investment by 2023 inclusive reached $40 billion, and nanotechnology has become pervasive in material, energy and biosystem related discoveries and applications. Goals The four primary goals of NNI are: Advance a world-class nanotechnology research and development program; Foster the transfer of new technologies into products for commercial and public benefits; Develop and sustain educational resources, a skilled workforce, and a dynamic infrastructure and toolset to advance nanotechnology; Support responsible development of nanotechnology. Initiatives Nanotechnology Signature Initiatives Nanotechnology Signature Initiatives (NSIs) spotlight areas of nanotechnology where significant advances in nanoscale science and technology can be made with the focus and cooperation of participating agencies. NSIs accelerate research, development, and application of nanotechnology in these critical areas. 
As of December 2020, the current NSIs are: NSI: Water Sustainability through Nanotechnology – Nanoscale Solutions for a Global-Scale Challenge, NSI: Nanotechnology for Sensors and Sensors for Nanotechnology – Improving and Protecting Health, Safety, and the Environment, NSI: Sustainable Nanomanufacturing - Creating the Industries of the Future, NSI: Nanoelectronics for 2020 and Beyond. NSIs are dynamic and are retired as they achieve their specified goals or develop an established community they no longer require the spotlight provided as a NSI. Retired NSIs are: NSI: Nanoelectronics for 2020 and Beyond, NSI: Nanotechnology for Solar Energy Collection and Conversion - Contributing to Energy Solutions for the Future, NSI: Nanotechnology Knowledge Infrastructure - Enabling National Leadership in Sustainable Design. Nanotechnology-Inspired Grand Challenges A nanotechnology-inspired grand challenge (GC) is an ambitious goal that utilizes nanotechnology and nanoscience to solve national and global issues. The first and current GC was announced in October 2015 after receiving input and suggestions from the public. As of December 2020, the grand challenge is: A Nanotechnology-Inspired Grand Challenge for Future Computing: Create a new type of computer that can proactively interpret and learn from data, solve unfamiliar problems using what it has learned, and operate with the energy efficiency of the human brain. Participating Federal Agencies and Departments Departments and agencies with nanotechnology R&D budgets: Other participating departments and agencies: Results and Effects Only a very small number of studies attempted to evaluate the effects of the NNI objectively. A study of Corporate and University Nanotechnology patenting published in 2023, looked at patent grants since the launch of the NNI in 2000 through 2009 and maintenance events on those patents through 2021. US-invented nanopatents with US assignees, were somewhat more apt to renew at least once (14.5% vs. 11.7%) compared to the US -assignees on average, but somewhat less inclined to pay for full maintenance of 20 years from filing (40.5% vs. 52.5%). The lower propensity to renew could be attributed to a quickly changing technology-landscape. See also National Science and Technology Council President's Council of Advisors on Science and Technology Translational research References External links Nanotechnology institutions Government research
National Nanotechnology Initiative
[ "Materials_science" ]
1,113
[ "Nanotechnology", "Nanotechnology institutions" ]
1,032,254
https://en.wikipedia.org/wiki/Speaker%20recognition
Speaker recognition is the identification of a person from characteristics of voices. It is used to answer the question "Who is speaking?" The term voice recognition can refer to speaker recognition or speech recognition. Speaker verification (also called speaker authentication) contrasts with identification, and speaker recognition differs from speaker diarisation (recognizing when the same speaker is speaking). Recognizing the speaker can simplify the task of translating speech in systems that have been trained on specific voices or it can be used to authenticate or verify the identity of a speaker as part of a security process. Speaker recognition has a history dating back some four decades as of 2019 and uses the acoustic features of speech that have been found to differ between individuals. These acoustic patterns reflect both anatomy and learned behavioral patterns. Verification versus identification There are two major applications of speaker recognition technologies and methodologies. If the speaker claims to be of a certain identity and the voice is used to verify this claim, this is called verification or authentication. On the other hand, identification is the task of determining an unknown speaker's identity. In a sense, speaker verification is a 1:1 match where one speaker's voice is matched to a particular template whereas speaker identification is a 1:N match where the voice is compared against multiple templates. From a security perspective, identification is different from verification. Speaker verification is usually employed as a "gatekeeper" in order to provide access to a secure system. These systems operate with the users' knowledge and typically require their cooperation. Speaker identification systems can also be implemented covertly without the user's knowledge to identify talkers in a discussion, alert automated systems of speaker changes, check if a user is already enrolled in a system, etc. In forensic applications, it is common to first perform a speaker identification process to create a list of "best matches" and then perform a series of verification processes to determine a conclusive match. Working to match the samples from the speaker to the list of best matches helps figure out if they are the same person based on the amount of similarities or differences. The prosecution and defense use this as evidence to determine if the suspect is actually the offender. Training One of the earliest training technologies to commercialize was implemented in Worlds of Wonder's 1987 Julie doll. At that point, speaker independence was an intended breakthrough, and systems required a training period. A 1987 ad for the doll carried the tagline "Finally, the doll that understands you." - despite the fact that it was described as a product "which children could train to respond to their voice." The term voice recognition, even a decade later, referred to speaker independence. Variants of speaker recognition Each speaker recognition system has two phases: enrollment and verification. During enrollment, the speaker's voice is recorded and typically a number of features are extracted to form a voice print, template, or model. In the verification phase, a speech sample or "utterance" is compared against a previously created voice print. For identification systems, the utterance is compared against multiple voice prints in order to determine the best match(es) while verification systems compare an utterance against a single voice print. 
Because of the process involved, verification is faster than identification. Speaker recognition systems fall into two categories: text-dependent and text-independent. Text-dependent recognition requires the text to be the same for both enrollment and verification. In a text-dependent system, prompts can either be common across all speakers (e.g. a common pass phrase) or unique. In addition, the use of shared-secrets (e.g.: passwords and PINs) or knowledge-based information can be employed in order to create a multi-factor authentication scenario. Conversely, text-independent systems do not require the use of a specific text. They are most often used for speaker identification as they require very little if any cooperation by the speaker. In this case the text during enrollment and test is different. In fact, the enrollment may happen without the user's knowledge, as in the case for many forensic applications. As text-independent technologies do not compare what was said at enrollment and verification, verification applications tend to also employ speech recognition to determine what the user is saying at the point of authentication. In text independent systems both acoustics and speech analysis techniques are used. Technology Speaker recognition is a pattern recognition problem. The various technologies used to process and store voice prints include frequency estimation, hidden Markov models, Gaussian mixture models, pattern matching algorithms, neural networks, matrix representation, vector quantization and decision trees. For comparing utterances against voice prints, more basic methods like cosine similarity are traditionally used for their simplicity and performance. Some systems also use "anti-speaker" techniques such as cohort models and world models. Spectral features are predominantly used in representing speaker characteristics. Linear predictive coding (LPC) is a speech coding method used in speaker recognition and speech verification. Ambient noise levels can impede both collections of the initial and subsequent voice samples. Noise reduction algorithms can be employed to improve accuracy, but incorrect application can have the opposite effect. Performance degradation can result from changes in behavioural attributes of the voice and from enrollment using one telephone and verification on another telephone. Integration with two-factor authentication products is expected to increase. Voice changes due to ageing may impact system performance over time. Some systems adapt the speaker models after each successful verification to capture such long-term changes in the voice, though there is debate regarding the overall security impact imposed by automated adaptation Legal implications Due to the introduction of legislation like the General Data Protection Regulation in the European Union and the California Consumer Privacy Act in the United States, there has been much discussion about the use of speaker recognition in the work place. In September 2019 Irish speech recognition developer Soapbox Labs warned about the legal implications that may be involved. Applications The first international patent was filed in 1983, coming from the telecommunication research in CSELT (Italy) by Michele Cavazza and Alberto Ciaramella as a basis for both future telco services to final customers and to improve the noise-reduction techniques across the network. 
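The cosine-similarity comparison mentioned in the Technology section above can be sketched in a few lines. This is a schematic illustration rather than any deployed system: the embeddings, threshold and function names are hypothetical, and production systems derive voice prints from much richer acoustic models.

```python
import math

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def verify(utterance_embedding, enrolled_voice_print, threshold=0.75):
    """Speaker verification: a 1:1 comparison of a new utterance against a
    single enrolled voice print (the threshold here is illustrative only)."""
    return cosine_similarity(utterance_embedding, enrolled_voice_print) >= threshold

def identify(utterance_embedding, enrolled_voice_prints):
    """Speaker identification: a 1:N comparison, returning the best-matching
    enrolled speaker and the similarity score."""
    scores = {name: cosine_similarity(utterance_embedding, vp)
              for name, vp in enrolled_voice_prints.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

# Toy 4-dimensional "voice prints" (real embeddings have hundreds of dimensions).
enrolled = {"alice": [0.9, 0.1, 0.3, 0.2], "bob": [0.1, 0.8, 0.4, 0.1]}
sample = [0.85, 0.15, 0.25, 0.2]
print(verify(sample, enrolled["alice"]))   # True
print(identify(sample, enrolled))          # ('alice', <score>)
```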
Between 1996 and 1998, speaker recognition technology was used at the Scobey–Coronach Border Crossing to enable enrolled local residents with nothing to declare to cross the Canada–United States border when the inspection stations were closed for the night. The system was developed for the U.S. Immigration and Naturalization Service by Voice Strategies of Warren, Michigan. In 2013 Barclays Wealth, the private banking division of Barclays, became the first financial services firm to deploy voice biometrics as the primary means of identifying customers to their call centers. The system used passive speaker recognition to verify the identity of telephone customers within 30 seconds of normal conversation. It was developed by voice recognition company Nuance (that in 2011 acquired the company Loquendo, the spin-off from CSELT itself for speech technology), the company behind Apple's Siri technology. 93% of customers gave the system at "9 out of 10" for speed, ease of use and security. Speaker recognition may also be used in criminal investigations, such as those of the 2014 executions of, amongst others, James Foley and Steven Sotloff. In February 2016 UK high-street bank HSBC and its internet-based retail bank First Direct announced that it would offer 15 million customers its biometric banking software to access online and phone accounts using their fingerprint or voice. In 2023 Vice News and The Guardian separately demonstrated they could defeat standard financial speaker-authentication systems using AI-generated voices generated from about five minutes of the target's voice samples. See also AI effect Applications of artificial intelligence Speaker diarisation Speech recognition Voice changer Lists List of emerging technologies Outline of artificial intelligence Notes References Homayoon Beigi (2011), "Fundamentals of Speaker Recognition", Springer-Verlag, Berlin, 2011, . "Biometrics from the movies" –National Institute of Standards and Technology Elisabeth Zetterholm (2003), Voice Imitation. A Phonetic Study of Perceptual Illusions and Acoustic Success, Phd thesis, Lund University. Md Sahidullah (2015), Enhancement of Speaker Recognition Performance Using Block Level, Relative and Temporal Information of Subband Energies, PhD thesis, Indian Institute of Technology Kharagpur. External links Circumventing Voice Authentication The PLA Radio podcast recently featured a simple way to fool rudimentary voice authentication systems. Speaker recognition – Scholarpedia Voice recognition benefits and challenges in access control Software bob.bio.spear ALIZE Speech processing Voice technology Automatic identification and data capture Biometrics
Speaker recognition
[ "Technology" ]
1,741
[ "Data", "Automatic identification and data capture" ]
1,032,545
https://en.wikipedia.org/wiki/Rainwater%20harvesting
Rainwater harvesting (RWH) is the collection and storage of rain, rather than allowing it to run off. Rainwater is collected from a roof-like surface and redirected to a tank, cistern, deep pit (well, shaft, or borehole), aquifer, or a reservoir with percolation, so that it seeps down and restores the ground water. Rainwater harvesting differs from stormwater harvesting as the runoff is typically collected from roofs and other area surfaces for storage and subsequent reuse. Its uses include watering gardens, livestock, irrigation, domestic use with proper treatment, and domestic heating. The harvested water can also be used for long-term storage or groundwater recharge. Rainwater harvesting is one of the simplest and oldest methods of self-supply of water for households, having been used in South Asia and other countries for many thousands of years. Installations can be designed for different scales, including households, neighborhoods, and communities, and can also serve institutions such as schools, hospitals, and other public facilities. Uses Domestic use Rooftop rainwater harvesting is used to provide drinking water, domestic water, water for livestock, water for small irrigation, and a way to replenish groundwater levels. Kenya has already been successfully harvesting rainwater for toilets, laundry, and irrigation. Since the establishment of the 2016 Water Act, Kenya has prioritized regulating its agriculture industry. Additionally, areas in Australia use harvested rainwater for cooking and drinking. Studies by Stout et al. on the feasibility of RWH in India found it most beneficial for small-scale irrigation, which provides income from produce sales, and for groundwater recharge. Agriculture In regards to urban agriculture, rainwater harvesting in urban areas reduces the impact of runoff and flooding. The combination of urban 'green' rooftops with rainwater catchments have been found to reduce building temperatures by more than 1.3 degrees Celsius. Rainwater harvesting in conjunction with urban agriculture would be a viable way to help meet the United Nations Sustainable Development Goals for cleaner and sustainable cities, health and wellbeing, and food and water security (Sustainable Development Goal 6). The technology is available, however, it needs to be remodeled in order to use water more efficiently, especially in an urban setting. Missions to five Caribbean countries have shown that the capture and storage of rainwater runoff for later use is able to significantly reduce the risk of losing some or all of the year's harvest because of soil or water scarcity. In addition, the risks associated with flooding and soil erosion during high rainfall seasons would decrease. Small farmers, especially those farming on hillsides, could benefit the most from rainwater harvesting because they are able to capture runoff and decrease the effects of soil erosion. Many countries, especially those with arid environments, use rainwater harvesting as a cheap and reliable source of clean water. To enhance irrigation in arid environments, ridges of soil are constructed to trap and prevent rainwater from running down hills and slopes. Even in periods of low rainfall, enough water is collected for crops to grow. Water can be collected from roofs, dams and ponds can be constructed to hold large quantities of rainwater so that even on days when little to no rainfall occurs, enough is available to irrigate crops. 
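A first-order estimate of how much water such a system can capture multiplies catchment area, rainfall and a runoff coefficient. The sketch below uses hypothetical numbers and is only a rough illustration, not a design method; real sizing also has to account for storage losses, first-flush diversion and seasonal demand.

```python
def harvestable_volume_litres(roof_area_m2: float,
                              annual_rainfall_mm: float,
                              runoff_coefficient: float = 0.8) -> float:
    """Rough annual harvest estimate: area (m^2) x rainfall (mm) gives litres,
    scaled by a runoff coefficient for losses on the catchment surface."""
    return roof_area_m2 * annual_rainfall_mm * runoff_coefficient

# Hypothetical household: 120 m^2 roof, 750 mm of rain per year.
volume = harvestable_volume_litres(120, 750)
print(f"{volume:,.0f} litres per year")                  # 72,000 litres
print(f"{volume / 365:.0f} litres per day on average")   # about 197 litres/day
```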
Industry Frankfurt Airport has the largest rainwater harvesting system in Germany, saving approximately 1 million cubic meters of water per year. The cost of the system was 1.5 million dm (US$63,000) in 1993. This system collects water from the roofs of the new terminal which has an area of 26,800 square meters. The water is collected in the basement of the airport in six tanks with a storage capacity of 100 cubic meters. The water is mainly used for toilet flushing, watering plants and cleaning the air conditioning system. Rainwater harvesting was adopted at The Velodrome – The London Olympic Park – in order to increase the sustainability of the facility. A 73% decrease in potable water demand by the park was estimated. Despite this, it was deemed that rainwater harvesting was a less efficient use of financial resources to increase sustainability than the park's blackwater recycling program. Technologies Traditionally, stormwater management using detention basins served a single purpose. However, optimized real-time control lets this infrastructure double as a source of rainwater harvesting without compromising the existing detention capacity. This has been used in the EPA headquarters to evacuate stored water prior to storm events, thus reducing wet weather flow while ensuring water availability for later reuse. This has the benefit of increasing water quality released and decreasing the volume of water released during combined sewer overflow events. Generally, check dams are constructed across the streams to enhance the percolation of surface water into the subsoil strata. The water percolation in the water-impounded area of the check dams can be enhanced artificially manyfold by loosening the subsoil strata and ANFO explosives as used in open cast mining. Thus, local aquifers can be recharged quickly using the available surface water fully for use in the dry season. System setup Rainwater harvesting systems can range in complexity, from systems that can be installed with minimal skills, to automated systems that require advanced setup and installation. The basic rainwater harvesting system is more of a plumbing job than a technical job, as all the outlets from the building's terrace are connected through a pipe to an underground tank that stores water. There are common components that are installed in such systems, such as pre-filters (see e.g. vortex filter), drains/gutters, storage containers, and depending on whether the system is pressurized, also pumps, and treatment devices such as UV lights, chlorination devices and post-filtration equipment. Systems are ideally sized to meet the water demand throughout the dry season since it must be big enough to support daily water consumption. Specifically, the rainfall capturing area such as a building roof must be large enough to maintain an adequate flow of water. The water storage tank size should be large enough to contain the captured water. For low-tech systems, many low-tech methods are used to capture rainwater: rooftop systems, surface water capture, and pumping the rainwater that has already soaked into the ground or captured in reservoirs and storing it in tanks (cisterns). Rainwater harvesting by solar power panels Good quality water resources near populated areas are becoming scarce and costly for consumers. In addition to solar and wind energy, rainwater is a major renewable resource for any land. Vast areas are being covered by solar PV panels every year in all parts of the world. 
Solar panels can also be used for harvesting most of the rainwater falling on them and drinking quality water, free from bacteria and suspended matter, can be generated by simple filtration and disinfection processes as rainwater is very low in salinity. Exploiting rainwater for value-added products like bottled drinking water makes solar PV power plants profitable even in high rainfall or cloudy areas by generating additional income. Recently, cost-effective rainwater collection in existing wells has been found highly effective in raising groundwater levels in India. Other innovations The Groasis Waterboxx is an example of low scale technology, in this case to assist planting of trees in arid area. It harvests rainwater and dew. Advantages Rainwater harvesting provides the independent water supply during regional water restrictions, and in developed countries, it is often used to supplement the main supply. It provides water when a drought occurs, can help mitigate flooding of low-lying areas, and reduces demand on wells which may enable groundwater levels to be sustained. Rainwater harvesting increases the availability of water during dry seasons by increasing the levels of dried borewells and wells. Surface water supply is readily available for various purposes thus reducing dependence on underground water. It improves the quality of ground by diluting salinity. It does not cause pollution and is environmentally friendly. It is cost-effective and easily affordable. It also helps in the availability of potable water, as rainwater is substantially free of salinity and other salts. Applications of rainwater harvesting in urban water system provides a substantial benefit for both water supply and wastewater subsystems by reducing the need for clean water in water distribution systems, less generated stormwater in sewer systems, and a reduction in stormwater runoff polluting freshwater bodies. A large body of work has focused on the development of life cycle assessment and its costing methodologies to assess the level of environmental impacts and money that can be saved by implementing rainwater harvesting systems. Independent water supply Rainwater harvesting provides an independent water supply during water restrictions. In areas where clean water is costly, or difficult to come by, rainwater harvesting is a critical source of clean water. In developed countries, rainwater is often harvested to be used as a supplemental source of water rather than the main source, but the harvesting of rainwater can also decrease a household's water costs or overall usage levels. Rainwater is safe to drink if the consumers do additional treatments before drinking. Boiling water helps to kill germs. Adding another supplement to the system such as a first flush diverter is also a common procedure to avoid contaminants of the water. Supplemental in drought When drought occurs, rainwater harvested in past months can be used. If rain is scarce but also unpredictable, the use of a rainwater harvesting system can be critical to capturing the rain when it does fall. Many countries with arid environments, use rainwater harvesting as a cheap and reliable source of clean water. To enhance irrigation in arid environments, ridges of soil are constructed to trap and prevent rainwater from running downhills. Even in periods of low rainfall, enough water is collected for crops to grow. Water can be collected from roofs and tanks can be constructed to hold large quantities of rainwater. 
In addition, rainwater harvesting decreases the demand for water from wells, enabling groundwater levels to be further sustained rather than depleted. Life-cycle assessment Life-cycle assessment is a methodology used to evaluate the environmental impacts of a system from cradle-to-grave of its lifetime. Devkota et al, developed such a methodology for rainwater harvesting, and found that the building design (e.g., dimensions) and function (e.g., educational, residential, etc.) play critical roles in the environmental performance of the system. To address the functional parameters of rainwater harvesting systems, a new metric was developed – the demand to supply ratio (D/S) – identifying the ideal building design (supply) and function (demand) in regard to the environmental performance of rainwater harvesting for toilet flushing. With the idea that supply of rainwater not only saves the potable water but also saves the stormwater entering the combined sewer network (thereby requiring treatment), the savings in environmental emissions were higher if the buildings are connected to a combined sewer network compared to separate one. Cost-effectiveness Although standard RWH systems can provide a water source to developing regions facing poverty, the average cost for an RWH setup can be costly depending on the type of technology used. Governmental aid and NGOs can assist communities facing poverty by providing the materials and education necessary to develop and maintain RWH setups. Some studies show that rainwater harvesting is a widely applicable solution for water scarcity and other multiple usages, owing to its cost-effectiveness and eco-friendliness. Constructing new substantial, centralized water supply systems, such as dams, is prone to damage local ecosystems, generates external social costs, and has limited usages, especially in developing countries or impoverished communities. On the other hand, installing rainwater harvesting systems is verified by a number of studies to provide local communities a sustainable water source, accompanied by other various benefits, including protection from flood and control of water runoff, even in poor regions. Rainwater harvesting systems that do not require major construction or periodic maintenance by a professional from outside the community are more friendly to the environment and more likely to benefit the local people for a longer period of time. Thus, rainwater harvesting systems that could be installed and maintained by local people have bigger chances to be accepted and used by more people. The usage of in-situ technologies can reduce investment costs in rainwater harvesting. In-situ technologies for rainwater harvesting could be a feasible option for rural areas since less material is required to construct them. They can provide a reliable water source that can be utilized to expand agricultural outputs. Above-ground tanks can collect water for domestic use; however, such units can be unaffordable to people in poverty. Limitations Rainwater harvesting is a widely used method of storing rainwater in countries presenting with drought characteristics. Several pieces of research have derived and developed different criteria and techniques to select suitable sites for harvesting rainwater. Some research was identified and selected suitable sites for the potential erection of dams, as well as derived a model builder in ArcMap 10.4.1. 
The model combined several parameters, such as slope, runoff potential, land cover/use, stream order, soil quality, and hydrology to determine the suitability of the site for harvesting rainwater. Harvested water from RWH systems can be minimal during below-average precipitation in arid urban regions such as the Middle East. RWH is useful for developing areas as it collects water for irrigation and domestic purposes. However, the gathered water should be adequately filtered to ensure safe drinking. Quality of water Rainwater may need to be analyzed properly, and used in a way appropriate to its safety. In the Gansu province, for example, solar water disinfection is used by boiling harvested rainwater in parabolic solar cookers before being used for drinking. These so-called "appropriate technology" methods provide low-cost disinfection options for treatment of stored rainwater for drinking. While rainwater itself is a clean source of water, often better than groundwater or water from rivers or lakes, the process of collection and storage often leaves the water polluted and non-potable. Rainwater harvested from roofs can contain human, animal and bird feces, mosses and lichens, windblown dust, particulates from urban pollution, pesticides, and inorganic ions from the sea (Ca, Mg, Na, K, Cl, SO4), and dissolved gases (CO2, NOx, SOx). High levels of pesticide have been found in rainwater in Europe with the highest concentrations occurring in the first rain immediately after a dry spell; the concentration of these and other contaminants are reduced significantly by diverting the initial flow of run-off water to waste. Improved water quality can also be obtained by using a floating draw-off mechanism (rather than from the base of the tank) and by using a series of tanks, withdraw from the last in series. Prefiltration is a common practice used in the industry to keep the system healthy and ensure that the water entering the tank is free of large sediments. A concept of rainwater harvesting and cleaning it with solar energy for rural household drinking purposes has been developed by Nimbkar Agricultural Research Institute. Conceptually, a water supply system should match the quality of water with the end-user. However, in most of the developed world, high-quality potable water is used for all end uses. This approach wastes money and energy and imposes unnecessary impacts on the environment. Supplying rainwater that has gone through preliminary filtration measures for non-potable water uses, such as toilet flushing, irrigation, and laundry, may be a significant part of a sustainable water management strategy. Rainwater cisterns can also act as habitat for pathogen-bearing mosquitoes. As a result, care must be taken to ensure that female mosquitoes can not access the cistern to lay eggs. Larvae eating fish can also be added to the cistern, or it can be chemically treated. Country examples Canada India While rainwater harvesting in an urban context has gained traction in recent years, evidence points toward rainwater harvesting in rural India since ancient times. United Kingdom United States Other countries Uganda: Rainwater harvesting has been used in Uganda to promote household and community scale water security for many years. Regular maintenance is an ongoing challenge with existing installation and there are many examples of installations that have failed due to poor maintenance. 
Research has also shown that awareness of RWH and how to access necessary resources to implement RWH is variable across Ugandan society. Thailand has the largest fraction of the population in the rural area relying on rainwater harvesting (currently around 40%). Rainwater harvesting was promoted heavily by the government in the 1980s. In the 1990s, after government funding for the collection tanks ran out, the private sector stepped in and provided several million tanks to private households, many of which continue to be used. This is one of the largest examples of self-supply of water worldwide. In Bermuda, the law requires all new construction to include rainwater harvesting adequate for the residents. New Zealand has plentiful rainfall in the West and South, and rainwater harvesting is the normal practice in many rural areas, using roof water directed by spouting into covered, 1000 litre storage tanks, with the encouragement of most local councils. In Sri Lanka, rainwater harvesting has been a popular method of obtaining water for agriculture and for drinking purposes in rural homes. The legislation to promote rainwater harvesting was enacted through the Urban Development Authority (Amendment) Act, No. 36 of 2007. The Lanka Rainwater Harvesting Forum is leading Sri Lanka's initiative. The tank cascade system is an ancient irrigation system spanning the island of Sri Lanka. History The construction and use of cisterns to store rainwater can be traced back to the Neolithic Age, when waterproof lime plaster cisterns were built in the floors of houses in village locations of the Levant, a large area in Southwest Asia, south of the Taurus Mountains, bounded by the Mediterranean Sea in the west, the Arabian Desert in the south, and Mesopotamia in the east. By the late 4000 BC, cisterns were essential elements of emerging water management techniques used in dry-land farming. Many ancient cisterns have been discovered in some parts of Jerusalem and throughout what is today Israel/Palestine. At the site believed by some to be that of the biblical city of Ai (Khirbet et-Tell), a large cistern dating back to around 2500 BC was discovered that had a capacity of nearly . It was carved out of a solid rock, lined with large stones, and sealed with clay to keep it from leaking. The Greek island of Crete is also known for its use of large cisterns for rainwater collection and storage during the Minoan period from 2,600 BC–1,100 BC. Four large cisterns have been discovered at Myrtos-Pyrgos, Archanes, and Zakroeach. The cistern found at Myrtos-Pyrgos was found to have a capacity of more than and to date back to 1700 BC. Around 300 BC, farming communities in Balochistan (now located in Pakistan, Afghanistan, and Iran), and Kutch, India, used rainwater harvesting for agriculture and many other uses. Rainwater harvesting was done by Chola kings as well. Rainwater from the Brihadeeswarar temple (located in Balaganapathy Nagar, Thanjavur, India) was collected in Shivaganga tank. During the later Chola period, the Vīrānam tank was built (1011 to 1037 AD) in the Cuddalore district of Tamil Nadu to store water for drinking and irrigation purposes. Vīrānam is a 16-km-long tank with a storage capacity of . Rainwater harvesting was also common in the Roman Empire. While Roman aqueducts are well-known, Roman cisterns were also commonly used and their construction expanded with the Empire. For example, in Pompeii, rooftop water storage was common before the construction of the aqueduct in the 1st century BC. 
This history continued with the Byzantine Empire; for example, the Basilica Cistern in Istanbul. Though little known, the town of Venice for centuries depended on rainwater harvesting. The lagoon surrounding Venice is brackish water, which is unsuitable for drinking. Venice's ancient inhabitants established a rainwater collection system based on man-made insulated collection wells. Water percolated down the specially designed stone flooring, and was filtered by a layer of sand, then collected at the bottom of the well. Later, as Venice acquired territories on the mainland, it started to import water by boat from local rivers. Still, the wells remained in use and were especially important in times of war when an enemy could block access to the mainland water. See also References External links Water supply Water conservation Irrigation Appropriate technology Hydrology and urban planning Sustainable gardening DIY culture
Rainwater harvesting
[ "Chemistry", "Engineering", "Environmental_science" ]
4,242
[ "Hydrology", "Hydrology and urban planning", "Water supply", "Environmental engineering" ]
1,032,607
https://en.wikipedia.org/wiki/Focus%20%28geometry%29
In geometry, focuses or foci (; : focus) are special points with reference to which any of a variety of curves is constructed. For example, one or two foci can be used in defining conic sections, the four types of which are the circle, ellipse, parabola, and hyperbola. In addition, two foci are used to define the Cassini oval and the Cartesian oval, and more than two foci are used in defining an n-ellipse. Conic sections Defining conics in terms of two foci An ellipse can be defined as the locus of points for which the sum of the distances to two given foci is constant. A circle is the special case of an ellipse in which the two foci coincide with each other. Thus, a circle can be more simply defined as the locus of points each of which is a fixed distance from a single given focus. A circle can also be defined as the circle of Apollonius, in terms of two different foci, as the locus of points having a fixed ratio of distances to the two foci. A parabola is a limiting case of an ellipse in which one of the foci is a point at infinity. A hyperbola can be defined as the locus of points for which the absolute value of the difference between the distances to two given foci is constant. Defining conics in terms of a focus and a directrix It is also possible to describe all conic sections in terms of a single focus and a single directrix, which is a given line not containing the focus. A conic is defined as the locus of points for each of which the distance to the focus divided by the distance to the directrix is a fixed positive constant, called the eccentricity . If the conic is an ellipse, if the conic is a parabola, and if the conic is a hyperbola. If the distance to the focus is fixed and the directrix is a line at infinity, so the eccentricity is zero, then the conic is a circle. Defining conics in terms of a focus and a directrix circle It is also possible to describe all the conic sections as loci of points that are equidistant from a single focus and a single, circular directrix. For the ellipse, both the focus and the center of the directrix circle have finite coordinates and the radius of the directrix circle is greater than the distance between the center of this circle and the focus; thus, the focus is inside the directrix circle. The ellipse thus generated has its second focus at the center of the directrix circle, and the ellipse lies entirely within the circle. For the parabola, the center of the directrix moves to the point at infinity (see Projective geometry). The directrix "circle" becomes a curve with zero curvature, indistinguishable from a straight line. The two arms of the parabola become increasingly parallel as they extend, and "at infinity" become parallel; using the principles of projective geometry, the two parallels intersect at the point at infinity and the parabola becomes a closed curve (elliptical projection). To generate a hyperbola, the radius of the directrix circle is chosen to be less than the distance between the center of this circle and the focus; thus, the focus is outside the directrix circle. The arms of the hyperbola approach asymptotic lines and the "right-hand" arm of one branch of a hyperbola meets the "left-hand" arm of the other branch of a hyperbola at the point at infinity; this is based on the principle that, in projective geometry, a single line meets itself at a point at infinity. The two branches of a hyperbola are thus the two (twisted) halves of a curve closed over infinity. 
In projective geometry, all conics are equivalent in the sense that every theorem that can be stated for one can be stated for the others. Astronomical significance In the gravitational two-body problem, the orbits of the two bodies about each other are described by two overlapping conic sections with one of the foci of one being coincident with one of the foci of the other at the center of mass (barycenter) of the two bodies. Thus, for instance, the minor planet Pluto's largest moon Charon has an elliptical orbit which has one focus at the Pluto-Charon system's barycenter, which is a point that is in space between the two bodies; and Pluto also moves in an ellipse with one of its foci at that same barycenter between the bodies. Pluto's ellipse is entirely inside Charon's ellipse, as shown in this animation of the system. By comparison, the Earth's Moon moves in an ellipse with one of its foci at the barycenter of the Moon and the Earth, this barycenter being within the Earth itself, while the Earth (more precisely, its center) moves in an ellipse with one focus at that same barycenter within the Earth. The barycenter is about three-quarters of the distance from Earth's center to its surface. Moreover, the Pluto-Charon system moves in an ellipse around its barycenter with the Sun, as does the Earth-Moon system (and every other planet-moon system or moonless planet in the solar system). In both cases the barycenter is well within the body of the Sun. Two binary stars also move in ellipses sharing a focus at their barycenter; for an animation, see here. Cartesian and Cassini ovals A Cartesian oval is the set of points for each of which the weighted sum of the distances to two given foci is constant. If the weights are equal, the special case of an ellipse results. A Cassini oval is the set of points for each of which the product of the distances to two given foci is constant. Generalizations An n-ellipse is the set of points all having the same sum of distances to foci (the case being the conventional ellipse). The concept of a focus can be generalized to arbitrary algebraic curves. Let be a curve of class and let and denote the circular points at infinity. Draw the tangents to through each of and . There are two sets of lines which will have points of intersection, with exceptions in some cases due to singularities, etc. These points of intersection are the defined to be the foci of . In other words, a point is a focus if both and are tangent to . When is a real curve, only the intersections of conjugate pairs are real, so there are in a real foci and imaginary foci. When is a conic, the real foci defined this way are exactly the foci which can be used in the geometric construction of . Confocal curves Let be given as foci of a curve of class . Let be the product of the tangential equations of these points and the product of the tangential equations of the circular points at infinity. Then all the lines which are common tangents to both and are tangent to . So, by the AF+BG theorem, the tangential equation of has the form . Since has class , must be a constant and but have degree less than or equal to . The case can be eliminated as degenerate, so the tangential equation of can be written as where is an arbitrary polynomial of degree . For example, let , , and . The tangential equations are so . The tangential equations for the circular points at infinity are so . Therefore, the tangential equation for a conic with the given foci is or where is an arbitrary constant. 
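The two-focus definition of the ellipse given at the start of this article is easy to verify numerically. A short sketch, assuming an axis-aligned ellipse centred at the origin with semi-axes a > b, so that the foci lie at (±c, 0) with c = √(a² − b²):

```python
import math

def ellipse_foci(a: float, b: float):
    """Foci of x^2/a^2 + y^2/b^2 = 1 with a > b > 0."""
    c = math.sqrt(a * a - b * b)
    return (-c, 0.0), (c, 0.0)

def sum_of_focal_distances(a: float, b: float, t: float) -> float:
    """Distance sum from the point (a cos t, b sin t) on the ellipse
    to the two foci; by the focal definition this should equal 2a."""
    (f1x, f1y), (f2x, f2y) = ellipse_foci(a, b)
    x, y = a * math.cos(t), b * math.sin(t)
    return math.hypot(x - f1x, y - f1y) + math.hypot(x - f2x, y - f2y)

a, b = 5.0, 3.0
for t in (0.0, 0.7, 1.9, 3.1, 5.5):
    assert abs(sum_of_focal_distances(a, b, t) - 2 * a) < 1e-9
print("sum of distances to the foci is constant (= 2a) around the ellipse")
```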
In point coordinates this becomes References Conic sections Geometric centers
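The Cartesian and Cassini ovals described above are defined by simple conditions on the two focal distances, which can be written directly as membership tests. The foci and constants in this sketch are hypothetical values chosen only for illustration; they are not taken from the article.

import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Hypothetical foci chosen only for illustration.
F1, F2 = (-1.0, 0.0), (1.0, 0.0)

def on_cassini_oval(p, b, tol=1e-9):
    """Cassini oval: product of the distances to the two foci equals b**2."""
    return math.isclose(dist(p, F1) * dist(p, F2), b**2, rel_tol=tol)

def on_cartesian_oval(p, w1, w2, k, tol=1e-9):
    """Cartesian oval: weighted sum w1*d1 + w2*d2 of the focal distances equals k.
    With w1 == w2 this reduces to an ordinary ellipse (sum of distances constant)."""
    return math.isclose(w1 * dist(p, F1) + w2 * dist(p, F2), k, rel_tol=tol)

# The origin lies on the Cassini oval with b = 1 (both focal distances are 1, product 1);
# with b equal to the focal half-distance this particular curve is a lemniscate.
print(on_cassini_oval((0.0, 0.0), b=1.0))                      # True
# A point with d1 + d2 = 4 and equal weights satisfies the Cartesian-oval test (an ellipse).
print(on_cartesian_oval((0.0, math.sqrt(3)), 1.0, 1.0, 4.0))   # True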
Focus (geometry)
[ "Physics", "Mathematics" ]
1,649
[ "Point (geometry)", "Geometric centers", "Symmetry" ]
1,032,856
https://en.wikipedia.org/wiki/Greeble
Greebles, also greeblies (singular: greebly), or "nurnies", are parts harvested from plastic modeling kits to be applied to an original model as a detail element. The practice of using parts in this manner is called "kitbashing". Etymology The term "greeblies" was first used by effects artists at Industrial Light & Magic in the 1970s to refer to small details added to models. According to model designer and fabricator Adam Savage, George Lucas, Industrial Light & Magic's founder, coined the term "greeblies". Ron Thornton is widely believed to have coined the term "nurnies" referring to CGI technical detail that his company Foundation Imaging produced for the Babylon 5 series, while the model-making team of 2001: A Space Odyssey referred to them as "wiggets". See also Bump mapping Diapering Fractal art Horror vacui Kitbashing References External links Staffan Norling's comments about greebling Computer graphic techniques Film and video technology Scale modeling Special effects
Greeble
[ "Physics" ]
220
[ "Scale modeling" ]
1,032,961
https://en.wikipedia.org/wiki/Tom%20Miller%20%28computer%20programmer%29
Tom Miller (born 1950) is a software developer who was employed by Microsoft. Miller was a member of the original team of developers who followed Dave Cutler from DEC to Microsoft, where he initially worked in the networking group. After less than two years, Miller moved to the Windows NT team, where he worked with John Nelson on file systems and wrote the original 50-page specification document for the NT File System. References American computer programmers Microsoft employees Microsoft Windows people Living people 1950 births

Tom Miller (computer programmer)
[ "Technology" ]
98
[ "Computing stubs", "Computer specialist stubs" ]
1,032,998
https://en.wikipedia.org/wiki/Gas%20centrifuge
A gas centrifuge is a device that performs isotope separation of gases. A centrifuge relies on the principles of centrifugal force accelerating molecules so that particles of different masses are physically separated in a gradient along the radius of a rotating container. A prominent use of gas centrifuges is for the separation of uranium-235 (235U) from uranium-238 (238U). The gas centrifuge was developed to replace the gaseous diffusion method of 235U extraction. High degrees of separation of these isotopes relies on using many individual centrifuges arranged in series that achieve successively higher concentrations. This process yields higher concentrations of 235U while using significantly less energy compared to the gaseous diffusion process. History Suggested in 1919, the centrifugal process was first successfully performed in 1934. American scientist Jesse Beams and his team at the University of Virginia developed the process by separating two chlorine isotopes through a vacuum ultracentrifuge. It was one of the initial isotopic separation means pursued during the Manhattan Project, more particularly by Harold Urey and Karl P. Cohen, but research was discontinued in 1944 as it was felt that the method would not produce results by the end of the war, and that other means of uranium enrichment (gaseous diffusion and electromagnetic separation) had a better chance of success in the short term. This method was successfully used in the Soviet nuclear program, making the Soviet Union the most effective supplier of enriched uranium. Franz Simon, Rudolf Peierls, Klaus Fuchs and Nicholas Kurti made important contributions to the centrifugal process. Paul Dirac made important theoretical contributions to the centrifugal process during World War II; Dirac developed the fundamental theory of separation processes that underlies the design and analysis of modern uranium enrichment plants. In the long term, especially with the development of the Zippe-type centrifuge, the gas centrifuge has become a very economical mode of separation, using considerably less energy than other methods and having numerous other advantages. Research in the physical performance of centrifuges was carried out by the Pakistani scientist Abdul Qadeer Khan in the 1970s–80s, using vacuum methods for advancing the role of centrifuges in the development of nuclear fuel for Pakistan's atomic bomb. Many of the theorists working with Khan were unsure that either gaseous and enriched uranium would be feasible on time. One scientist recalled: "No one in the world has used the [gas] centrifuge method to produce military-grade uranium.... This was not going to work. He was simply wasting time." In spite of skepticism, the program was quickly proven to be feasible. Enrichment via centrifuge has been used in experimental physics, and the method was smuggled to at least three different countries by the end of the 20th century. Centrifugal process The centrifuge relies on the force resulting from centrifugal acceleration to separate molecules according to their mass and can be applied to most fluids. The dense (heavier) molecules move towards the wall, and the lighter ones remain close to the center. The centrifuge consists of a rigid body rotor rotating at full period at high speed. Concentric gas tubes located on the axis of the rotor are used to introduce feed gas into the rotor and extract the heavier and lighter separated streams. 
For 235U production, the heavier stream is the waste stream and the lighter stream is the product stream. Modern Zippe-type centrifuges are tall cylinders spinning on a vertical axis. A vertical temperature gradient can be applied to create a convective circulation rising in the center and descending at the periphery of the centrifuge. Such a countercurrent flow can also be stimulated mechanically by the scoops that take out the enriched and depleted fractions. Diffusion between these opposing flows increases the separation by the principle of countercurrent multiplication. In practice, since there are limits to how tall a single centrifuge can be made, several such centrifuges are connected in series. Each centrifuge receives one input line and produces two output lines, corresponding to light and heavy fractions. The input of each centrifuge is the product stream of the previous centrifuge. This produces an almost pure light fraction from the product stream of the last centrifuge and an almost pure heavy fraction from the waste stream of the first centrifuge. Gas centrifugation process The gas centrifugation process uses a unique design that allows gas to constantly flow in and out of the centrifuge. Unlike most centrifuges which rely on batch processing, the gas centrifuge uses continuous processing, allowing cascading in which multiple identical processes occur in succession. The gas centrifuge consists of a cylindrical rotor, a casing, an electric motor, and three lines for material to travel. The gas centrifuge is designed with a casing that completely encloses the centrifuge. The cylindrical rotor is located inside the casing, which is evacuated of all air to produce a near frictionless rotation when operating. The motor spins the rotor, creating the centrifugal force on the components as they enter the cylindrical rotor. This force acts to separate the molecules of the gas, with heavier molecules moving towards the wall of the rotor and the lighter molecules towards the central axis. There are two output lines, one for the fraction enriched in the desired isotope (in uranium separation, this is 235U), and one depleted of that isotope. The output lines take these separations to other centrifuges to continue the centrifugation process. The process begins when the rotor is balanced in three stages. Most of the technical details on gas centrifuges are difficult to obtain because they are shrouded in "nuclear secrecy". The early gas centrifuges used in the UK used an alloy body wrapped in epoxy-impregnated glass fibre. Dynamic balancing of the assembly was accomplished by adding small traces of epoxy at the locations indicated by the balancing test unit. The motor was usually a pancake type located at the bottom of the cylinder. The early units were typically around 2 metres long, but subsequent developments gradually increased the length. The present generation are over 4 metres in length. The bearings are gas-based devices, as mechanical bearings would not survive at the normal operating speeds of these centrifuges. A section of centrifuges would be fed with variable-frequency alternating current from an electronic (bulk) inverter, which would slowly ramp them up to the required speed, generally in excess of 50,000 rpm. One precaution was to quickly get past frequencies at which the cylinder was known to suffer resonance problems. The inverter is a high-frequency unit capable of operating at frequencies around 1 kilohertz. 
The whole process is normally silent; if a noise is heard coming from a centrifuge, it is a warning of failure (which normally occurs very quickly). The design of the cascade normally allows for the failure of at least one centrifuge unit without compromising the operation of the cascade. The units are normally very reliable, with early models having operated continuously for over 30 years. Later models have steadily increased the rotation speed of the centrifuges, as it is the velocity of the centrifuge wall that has the most effect on the separation efficiency. A feature of the cascade system of centrifuges is that it is possible to increase plant throughput incrementally, by adding cascade "blocks" to the existing installation at suitable locations, rather than having to install a completely new line of centrifuges. Concurrent and countercurrent centrifuges The simplest gas centrifuge is the concurrent centrifuge, where separative effect is produced by the centrifugal effects of the rotor's rotation. In these centrifuges, the heavy fraction is collected at the periphery of the rotor and the light fraction from nearer the axis of rotation. Inducing a countercurrent flow uses countercurrent multiplication to enhance the separative effect. A vertical circulating current is set up, with the gas flowing axially along the rotor walls in one direction and a return flow closer to the center of the rotor. The centrifugal separation continues as before (heavier molecules preferentially moving outwards), which means that the heavier molecules are collected by the wall flow, and the lighter fraction collects at the other end. In a centrifuge with downward wall flow, the heavier molecules collect at the bottom. The outlet scoops are then placed at the ends of the rotor cavity, with the feed mixture injected along the axis of the cavity (ideally, the injection point is at the point where the mixture in the rotor is equal to the feed). This countercurrent flow can be induced mechanically or thermally, or a combination. In mechanically induced countercurrent flow, the arrangement of the (stationary) scoops and internal rotor structures are used to generate the flow. A scoop interacts with the gas by slowing it, which tends to draw it into the centre of the rotor. The scoops at each end induce opposing currents, so one scoop is protected from the flow by a "baffle": a perforated disc within the rotor which rotates along with the gas—at this end of the rotor, the flow will be outwards, towards the rotor wall. Thus, in a centrifuge with a baffled top scoop, the wall flow is downwards, and heavier molecules are collected at the bottom. Thermally induced convection currents can be created by heating the bottom of the centrifuge and/or cooling the top end. Separative work units The separative work unit (SWU) is a measure of the amount of work done by the centrifuge and has units of mass (typically kilogram separative work unit). The work necessary to separate a mass of feed of assay into a mass of product assay , and tails of mass and assay is expressed in terms of the number of separative work units needed, given by the expression where is the value function, defined as Practical application of centrifugation Separation of uranium-235 from uranium-238 The separation of uranium requires the material in a gaseous form; uranium hexafluoride (UF6) is used for uranium enrichment. Upon entering the centrifuge cylinder, the UF6 gas is rotated at a high speed. 
The rotation creates a strong centrifugal force that draws more of the heavier gas molecules (containing the 238U) toward the wall of the cylinder, while the lighter gas molecules (containing the 235U) tend to collect closer to the center. The stream that is slightly enriched in 235U is withdrawn and fed into the next higher stage, while the slightly depleted stream is recycled back into the next lower stage. Separation of zinc isotopes For some uses in nuclear technology, the content of zinc-64 in zinc metal has to be lowered in order to prevent formation of radioisotopes by its neutron activation. Diethyl zinc is used as the gaseous feed medium for the centrifuge cascade. An example of a resulting material is depleted zinc oxide, used as a corrosion inhibitor. See also Nuclear technology Nuclear power Nuclear fuel Notes References "Basics of Centrifugation." Cole-Parmer Technical Lab. 14 Mar. 2008 "Gas Centrifuge Uranium Enrichment." Global Security.Org. 27 Apr. 2005. 13 Mar. 2008 "What is a Gas Centrifuge?" 2003. Institute for Science and International Security. 10 Oct. 2013 External links Annotated bibliography on the gas centrifuge from the Alsos Digital Library History of the Centrifuge What is a Gas Centrifuge? Agreement between the Government of the United States of America and the Four Governments of the French Republic, the United Kingdom of Great Britain and Northern Ireland, the Kingdom of the Netherlands, and the Federal Republic of Germany Regarding the Establishment, Construction and Operation of Uranium Enriching Installations Using Gas Centrifuge Technology in the United States of America United States Department of State Centrifuges Isotope separation Nuclear chemistry Nuclear proliferation Uranium P R A
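The separative work unit described in the article above can be made concrete with a short calculation. The sketch below uses the standard textbook separative-potential (value) function; the enrichment assays in the example (4.5% product from 0.711% natural-uranium feed with 0.25% tails, for 1 kg of product) are illustrative assumptions, not figures from the article.

import math

def value_function(x):
    """Standard separative-potential (value) function V(x) = (1 - 2x) * ln((1 - x) / x)."""
    return (1.0 - 2.0 * x) * math.log((1.0 - x) / x)

def swu(product_kg, x_product, x_feed, x_tails):
    """Separative work (kg SWU) to make product_kg of product at assay x_product
    from feed at assay x_feed, rejecting tails at assay x_tails.
    Feed and tails masses follow from the uranium and 235U mass balances."""
    feed_kg = product_kg * (x_product - x_tails) / (x_feed - x_tails)
    tails_kg = feed_kg - product_kg
    return (product_kg * value_function(x_product)
            + tails_kg * value_function(x_tails)
            - feed_kg * value_function(x_feed))

# Illustrative numbers only: 1 kg of 4.5% product from natural uranium (0.711%) with 0.25% tails.
print(round(swu(1.0, 0.045, 0.00711, 0.0025), 2), "kg SWU")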
Gas centrifuge
[ "Physics", "Chemistry", "Engineering" ]
2,540
[ "Centrifugation", "Chemical equipment", "Nuclear chemistry", "Nuclear physics", "Centrifuges" ]
1,033,036
https://en.wikipedia.org/wiki/Thin%20film
A thin film is a layer of materials ranging from fractions of a nanometer (monolayer) to several micrometers in thickness. The controlled synthesis of materials as thin films (a process referred to as deposition) is a fundamental step in many applications. A familiar example is the household mirror, which typically has a thin metal coating on the back of a sheet of glass to form a reflective interface. The process of silvering was once commonly used to produce mirrors, while more recently the metal layer is deposited using techniques such as sputtering. Advances in thin film deposition techniques during the 20th century have enabled a wide range of technological breakthroughs in areas such as magnetic recording media, electronic semiconductor devices, integrated passive devices, light-emitting diodes, optical coatings (such as antireflective coatings), hard coatings on cutting tools, and for both energy generation (e.g. thin-film solar cells) and storage (thin-film batteries). It is also being applied to pharmaceuticals, via thin-film drug delivery. A stack of thin films is called a multilayer. In addition to their applied interest, thin films play an important role in the development and study of materials with new and unique properties. Examples include multiferroic materials, and superlattices that allow the study of quantum phenomena. Nucleation Nucleation is an important step in growth that helps determine the final structure of a thin film. Many growth methods rely on nucleation control such as atomic-layer epitaxy (atomic layer deposition). Nucleation can be modeled by characterizing surface process of adsorption, desorption, and surface diffusion. Adsorption and desorption Adsorption is the interaction of a vapor atom or molecule with a substrate surface. The interaction is characterized the sticking coefficient, the fraction of incoming species thermally equilibrated with the surface. Desorption reverses adsorption where a previously adsorbed molecule overcomes the bounding energy and leaves the substrate surface. The two types of adsorptions, physisorption and chemisorption, are distinguished by the strength of atomic interactions. Physisorption describes the van der Waals bonding between a stretched or bent molecule and the surface characterized by adsorption energy . Evaporated molecules rapidly lose kinetic energy and reduces its free energy by bonding with surface atoms. Chemisorption describes the strong electron transfer (ionic or covalent bond) of molecule with substrate atoms characterized by adsorption energy . The process of physic- and chemisorption can be visualized by the potential energy as a function of distance. The equilibrium distance for physisorption is further from the surface than chemisorption. The transition from physisorbed to chemisorbed states are governed by the effective energy barrier . Crystal surfaces have specific bonding sites with larger values that would preferentially be populated by vapor molecules to reduce the overall free energy. These stable sites are often found on step edges, vacancies and screw dislocations. After the most stable sites become filled, the adatom-adatom (vapor molecule) interaction becomes important. Nucleation models Nucleation kinetics can be modeled considering only adsorption and desorption. First consider case where there are no mutual adatom interactions, no clustering or interaction with step edges. 
The rate of change of adatom surface density , where is the net flux, is the mean surface lifetime prior to desorption and is the sticking coefficient: Adsorption can also be modeled by different isotherms such as Langmuir model and BET model. The Langmuir model derives an equilibrium constant based on the adsorption reaction of vapor adatom with vacancy on the substrate surface. The BET model expands further and allows adatoms deposition on previously adsorbed adatoms without interaction between adjacent piles of atoms. The resulting derived surface coverage is in terms of the equilibrium vapor pressure and applied pressure. Langmuir model where is the vapor pressure of adsorbed adatoms: BET model where is the equilibrium vapor pressure of adsorbed adatoms and is the applied vapor pressure of adsorbed adatoms: As an important note, surface crystallography and differ from the bulk to minimize the overall free electronic and bond energies due to the broken bonds at the surface. This can result in a new equilibrium position known as “selvedge”, where the parallel bulk lattice symmetry is preserved. This phenomenon can cause deviations from theoretical calculations of nucleation. Surface diffusion Surface diffusion describes the lateral motion of adsorbed atoms moving between energy minima on the substrate surface. Diffusion most readily occurs between positions with lowest intervening potential barriers. Surface diffusion can be measured using glancing-angle ion scattering. The average time between events can be describes by: In addition to adatom migration, clusters of adatom can coalesce or deplete. Cluster coalescence through processes, such as Ostwald ripening and sintering, occur in response to reduce the total surface energy of the system. Ostwald repining describes the process in which islands of adatoms with various sizes grow into larger ones at the expense of smaller ones. Sintering is the coalescence mechanism when the islands contact and join. Deposition The act of applying a thin film to a surface is thin-film deposition – any technique for depositing a thin film of material onto a substrate or onto previously deposited layers. "Thin" is a relative term, but most deposition techniques control layer thickness within a few tens of nanometres. Molecular beam epitaxy, the Langmuir–Blodgett method, atomic layer deposition and molecular layer deposition allow a single layer of atoms or molecules to be deposited at a time. It is useful in the manufacture of optics (for reflective, anti-reflective coatings or self-cleaning glass, for instance), electronics (layers of insulators, semiconductors, and conductors form integrated circuits), packaging (i.e., aluminium-coated PET film), and in contemporary art (see the work of Larry Bell). Similar processes are sometimes used where thickness is not important: for instance, the purification of copper by electroplating, and the deposition of silicon and enriched uranium by a chemical vapor deposition-like process after gas-phase processing. Deposition techniques fall into two broad categories, depending on whether the process is primarily chemical or physical. Chemical deposition Here, a fluid precursor undergoes a chemical change at a solid surface, leaving a solid layer. An everyday example is the formation of soot on a cool object when it is placed inside a flame. 
Since the fluid surrounds the solid object, deposition happens on every surface, with little regard to direction; thin films from chemical deposition techniques tend to be conformal, rather than directional. Chemical deposition is further categorized by the phase of the precursor: Plating relies on liquid precursors, often a solution of water with a salt of the metal to be deposited. Some plating processes are driven entirely by reagents in the solution (usually for noble metals), but by far the most commercially important process is electroplating. In semiconductor manufacturing, an advanced form of electroplating known as electrochemical deposition is now used to create the copper conductive wires in advanced chips, replacing the chemical and physical deposition processes used to previous chip generations for aluminum wires Chemical solution deposition or chemical bath deposition uses a liquid precursor, usually a solution of organometallic powders dissolved in an organic solvent. This is a relatively inexpensive, simple thin-film process that produces stoichiometrically accurate crystalline phases. This technique is also known as the sol-gel method because the 'sol' (or solution) gradually evolves towards the formation of a gel-like diphasic system. The Langmuir–Blodgett method uses molecules floating on top of an aqueous subphase. The packing density of molecules is controlled, and the packed monolayer is transferred on a solid substrate by controlled withdrawal of the solid substrate from the subphase. This allows creating thin films of various molecules such as nanoparticles, polymers and lipids with controlled particle packing density and layer thickness. Spin coating or spin casting, uses a liquid precursor, or sol-gel precursor deposited onto a smooth, flat substrate which is subsequently spun at a high velocity to centrifugally spread the solution over the substrate. The speed at which the solution is spun and the viscosity of the sol determine the ultimate thickness of the deposited film. Repeated depositions can be carried out to increase the thickness of films as desired. Thermal treatment is often carried out in order to crystallize the amorphous spin coated film. Such crystalline films can exhibit certain preferred orientations after crystallization on single crystal substrates. Dip coating is similar to spin coating in that a liquid precursor or sol-gel precursor is deposited on a substrate, but in this case the substrate is completely submerged in the solution and then withdrawn under controlled conditions. By controlling the withdrawal speed, the evaporation conditions (principally the humidity, temperature) and the volatility/viscosity of the solvent, the film thickness, homogeneity and nanoscopic morphology are controlled. There are two evaporation regimes: the capillary zone at very low withdrawal speeds, and the draining zone at faster evaporation speeds. Chemical vapor deposition generally uses a gas-phase precursor, often a halide or hydride of the element to be deposited. In the case of metalorganic vapour phase epitaxy, an organometallic gas is used. Commercial techniques often use very low pressures of precursor gas. Plasma Enhanced Chemical Vapor Deposition uses an ionized vapor, or plasma, as a precursor. Unlike the soot example above, this method relies on electromagnetic means (electric current, microwave excitation), rather than a chemical-reaction, to produce a plasma. 
Atomic layer deposition and its sister technique molecular layer deposition, uses gaseous precursor to deposit conformal thin film's one layer at a time. The process is split up into two half reactions, run in sequence and repeated for each layer, in order to ensure total layer saturation before beginning the next layer. Therefore, one reactant is deposited first, and then the second reactant is deposited, during which a chemical reaction occurs on the substrate, forming the desired composition. As a result of the stepwise, the process is slower than chemical vapor deposition; however, it can be run at low temperatures. When performed on polymeric substrates, atomic layer deposition can become sequential infiltration synthesis, where the reactants diffuse into the polymer and interact with functional groups on the polymer chains. Physical deposition Physical deposition uses mechanical, electromechanical or thermodynamic means to produce a thin film of solid. An everyday example is the formation of frost. Since most engineering materials are held together by relatively high energies, and chemical reactions are not used to store these energies, commercial physical deposition systems tend to require a low-pressure vapor environment to function properly; most can be classified as physical vapor deposition. The material to be deposited is placed in an energetic, entropic environment, so that particles of material escape its surface. Facing this source is a cooler surface which draws energy from these particles as they arrive, allowing them to form a solid layer. The whole system is kept in a vacuum deposition chamber, to allow the particles to travel as freely as possible. Since particles tend to follow a straight path, films deposited by physical means are commonly directional, rather than conformal. Examples of physical deposition include: A thermal evaporator that uses an electric resistance heater to melt the material and raise its vapor pressure to a useful range. This is done in a high vacuum, both to allow the vapor to reach the substrate without reacting with or scattering against other gas-phase atoms in the chamber, and reduce the incorporation of impurities from the residual gas in the vacuum chamber. Only materials with a much higher vapor pressure than the heating element can be deposited without contamination of the film. Molecular beam epitaxy is a particularly sophisticated form of thermal evaporation. An electron beam evaporator fires a high-energy beam from an electron gun to boil a small spot of material; since the heating is not uniform, lower vapor pressure materials can be deposited. The beam is usually bent through an angle of 270° in order to ensure that the gun filament is not directly exposed to the evaporant flux. Typical deposition rates for electron beam evaporation range from 1 to 10 nanometres per second. In molecular beam epitaxy, slow streams of an element can be directed at the substrate, so that material deposits one atomic layer at a time. Compounds such as gallium arsenide are usually deposited by repeatedly applying a layer of one element (i.e., gallium), then a layer of the other (i.e., arsenic), so that the process is chemical, as well as physical; this is known also as atomic layer deposition. If the precursors in use are organic, then the technique is called molecular layer deposition. The beam of material can be generated by either physical means (that is, by a furnace) or by a chemical reaction (chemical beam epitaxy). 
Sputtering relies on a plasma (usually a noble gas, such as argon) to knock material from a "target" a few atoms at a time. The target can be kept at a relatively low temperature, since the process is not one of evaporation, making this one of the most flexible deposition techniques. It is especially useful for compounds or mixtures, where different components would otherwise tend to evaporate at different rates. Note, sputtering's step coverage is more or less conformal. It is also widely used in optical media. The manufacturing of all formats of CD, DVD, and BD are done with the help of this technique. It is a fast technique and also it provides a good thickness control. Presently, nitrogen and oxygen gases are also being used in sputtering. Pulsed laser deposition systems work by an ablation process. Pulses of focused laser light vaporize the surface of the target material and convert it to plasma; this plasma usually reverts to a gas before it reaches the substrate. Thermal laser epitaxy uses focused light from a continuous-wave laser to thermally evaporate sources of material. By adjusting the power density of the laser beam, the evaporation of any solid, non-radioactive element is possible. The resulting atomic vapor is then deposited upon a substrate, which is also heated via a laser beam. The vast range of substrate and deposition temperatures allows of the epitaxial growth of various elements considered challenging by other thin film growth techniques. Cathodic arc deposition (arc-physical vapor deposition), which is a kind of ion beam deposition where an electrical arc is created that blasts ions from the cathode. The arc has an extremely high power density resulting in a high level of ionization (30–100%), multiply charged ions, neutral particles, clusters and macro-particles (droplets). If a reactive gas is introduced during the evaporation process, dissociation, ionization and excitation can occur during interaction with the ion flux and a compound film will be deposited. Electrohydrodynamic deposition (electrospray deposition) is a relatively new process of thin-film deposition. The liquid to be deposited, either in the form of nanoparticle solution or simply a solution, is fed to a small capillary nozzle (usually metallic) which is connected to a high voltage. The substrate on which the film has to be deposited is connected to ground. Through the influence of electric field, the liquid coming out of the nozzle takes a conical shape (Taylor cone) and at the apex of the cone a thin jet emanates which disintegrates into very fine and small positively charged droplets under the influence of Rayleigh charge limit. The droplets keep getting smaller and smaller and ultimately get deposited on the substrate as a uniform thin layer. Growth modes Frank–van der Merwe growth ("layer-by-layer"). In this growth mode the adsorbate-surface and adsorbate-adsorbate interactions are balanced. This type of growth requires lattice matching, and hence considered an "ideal" growth mechanism. Stranski–Krastanov growth ("joint islands" or "layer-plus-island"). In this growth mode the adsorbate-surface interactions are stronger than adsorbate-adsorbate interactions. Volmer–Weber ("isolated islands"). In this growth mode the adsorbate-adsorbate interactions are stronger than adsorbate-surface interactions, hence "islands" are formed right away. There are three distinct stages of stress evolution that arise during Volmer-Weber film deposition. 
The first stage consists of the nucleation of individual atomic islands. During this first stage, the overall observed stress is very low. The second stage commences as these individual islands coalesce and begin to impinge on each other, resulting in an increase in the overall tensile stress in the film. This increase in overall tensile stress can be attributed to the formation of grain boundaries upon island coalescence that results in interatomic forces acting over the newly formed grain boundaries. The magnitude of this generated tensile stress depends on the density of the formed grain boundaries, as well as their grain-boundary energies. During this stage, the thickness of the film is not uniform because of the random nature of the island coalescence but is measured as the average thickness. The third and final stage of the Volmer-Weber film growth begins when the morphology of the film’s surface is unchanging with film thickness. During this stage, the overall stress in the film can remain tensile, or become compressive.   On a stress-thickness vs. thickness plot, an overall compressive stress is represented by a negative slope, and an overall tensile stress is represented by a positive slope. The overall shape of the stress-thickness vs. thickness curve depends on various processing conditions (such as temperature, growth rate, and material). Koch states that there are three different modes of Volmer-Weber growth. Zone I behavior is characterized by low grain growth in subsequent film layers and is associated with low atomic mobility. Koch suggests that Zone I behavior can be observed at lower temperatures. The zone I mode typically has small columnar grains in the final film. The second mode of Volmer-Weber growth is classified as Zone T, where the grain size at the surface of the film deposition increases with film thickness, but the grain size in the deposited layers below the surface does not change. Zone T-type films are associated with higher atomic mobilities, higher deposition temperatures, and V-shaped final grains. The final mode of proposed Volmer-Weber growth is Zone II type growth, where the grain boundaries in the bulk of the film at the surface are mobile, resulting in large yet columnar grains. This growth mode is associated with the highest atomic mobility and deposition temperature. There is also a possibility of developing a mixed Zone T/Zone II type structure, where the grains are mostly wide and columnar, but do experience slight growth as their thickness approaches the surface of the film. Although Koch focuses mostly on temperature to suggest a potential zone mode, factors such as deposition rate can also influence the final film microstructure. Epitaxy A subset of thin-film deposition processes and applications is focused on the so-called epitaxial growth of materials, the deposition of crystalline thin films that grow following the crystalline structure of the substrate. The term epitaxy comes from the Greek roots epi (ἐπί), meaning "above", and taxis (τάξις), meaning "an ordered manner". It can be translated as "arranging upon". The term homoepitaxy refers to the specific case in which a film of the same material is grown on a crystalline substrate. This technology is used, for instance, to grow a film which is more pure than the substrate, has a lower density of defects, and to fabricate layers having different doping levels. Heteroepitaxy refers to the case in which the film being deposited is different from the substrate. 
Techniques used for epitaxial growth of thin films include molecular beam epitaxy, chemical vapor deposition, and pulsed laser deposition. Mechanical Behavior Stress Thin films may be biaxially loaded via stresses originated from their interface with a substrate. Epitaxial thin films may experience stresses from misfit strains between the coherent lattices of the film and substrate, and from the restructuring of the surface triple junction. Thermal stress is common in thin films grown at elevated temperatures due to differences in thermal expansion coefficients with the substrate. Differences in interfacial energy and the growth and coalescence of grains contribute to intrinsic stress in thin films. These intrinsic stresses can be a function of film thickness. These stresses may be tensile or compressive and can cause cracking, buckling, or delamination along the surface. In epitaxial films, initially deposited atomic layers may have coherent lattice planes with the substrate. However, past a critical thickness misfit dislocations will form leading to relaxation of stresses in the film. Strain Films may experience a dilatational transformation strain relative to its substrate due to a volume change in the film. Volume changes that cause dilatational strain may come from changes in temperature, defects, or phase transformations. A temperature change will induce a volume change if the film and substrate thermal expansion coefficients are different. The creation or annihilation of defects such as vacancies, dislocations, and grain boundaries will cause a volume change through densification. Phase transformations and concentration changes will cause volume changes via lattice distortions. Thermal Strain A mismatch of thermal expansion coefficients between the film and substrate will cause thermal strain during a temperature change. The elastic strain of the film relative to the substrate is given by: where is the elastic strain, is the thermal expansion coefficient of the film, is the thermal expansion coefficient of the substrate, is the temperature, and is the initial temperature of the film and substrate when it is in a stress-free state. For example, if a film is deposited onto a substrate with a lower thermal expansion coefficient at high temperatures, then cooled to room temperature, a positive elastic strain will be created. In this case, the film will develop tensile stresses. Growth Strain A change in density due to the creation or destruction of defects, phase changes, or compositional changes after the film is grown on the substrate will generate a growth strain. Such as in the Stranski–Krastanov mode, where the layer of film is strained to fit the substrate due to an increase in supersaturation and interfacial energy which shifts from island to island. The elastic strain to accommodate these changes is related to the dilatational strain by: A film experiencing growth strains will be under biaxial tensile strain conditions, generating tensile stresses in biaxial directions in order to match the substrate dimensions. Epitaxial Strains An epitaxially grown film on a thick substrate will have an inherent elastic strain given by: where and are the lattice parameters of the substrate and film, respectively. It is assumed that the substrate is rigid due to its relative thickness. Therefore, all of the elastic strain occurs in the film to match the substrate. 
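The thermal and epitaxial strain expressions discussed above can be stated compactly in code. This is a minimal sketch under the usual sign convention (positive = tensile) and the rigid-substrate approximation; the material numbers in the example are generic illustrative values and are not taken from the article.

def thermal_mismatch_strain(alpha_film, alpha_substrate, T, T0):
    """Elastic strain imposed on a film clamped to its substrate when the temperature
    changes from the stress-free temperature T0 to T (positive = tensile, by assumption)."""
    return (alpha_substrate - alpha_film) * (T - T0)

def epitaxial_misfit_strain(a_substrate, a_film):
    """In-plane elastic strain of a coherent epitaxial film forced to adopt the
    substrate's lattice parameter (rigid, thick substrate assumed)."""
    return (a_substrate - a_film) / a_film

# Illustrative values (assumptions): a metal film (alpha ~ 23e-6 /K) on silicon
# (alpha ~ 2.6e-6 /K) cooled from 400 C to 25 C -> tensile strain, as described above.
print(f"thermal strain: {thermal_mismatch_strain(23e-6, 2.6e-6, 25.0, 400.0):+.2e}")
# A film with lattice parameter 5.65 A grown coherently on a 5.43 A substrate -> compressive.
print(f"misfit strain:  {epitaxial_misfit_strain(5.43, 5.65):+.2e}")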
Measuring stress and strain The stresses in Films deposited on flat substrates such as wafers can be calculated by measuring the curvature of the wafer due to the strain by the film. Using optical setups, such as those with lasers, allow for whole wafer characterization pre and post deposition. Lasers are reflected off the wafer in a grid pattern and distortions in the grid are used to calculate the curvature as well as measure the optical constants. Strain in thin films can also be measured by x-ray diffraction or by milling a section of the film using a focused ion beam and monitoring the relaxation via scanning electron microscopy. Wafer Curvature Measurements A common method for determining the stress evolution of a film is to measure the wafer curvature during its deposition. Stoney relates a film’s average stress to its curvature through the following expression:   where , where is the bulk elastic modulus of the material comprising the film, and is the Poisson’s ratio of the material comprising the film, is the thickness of the substrate, is the height of the film, and is the average stress in the film. The assumptions made regarding the Stoney formula assume that the film and substrate are smaller than the lateral size of the wafer and that the stress is uniform across the surface. Therefore the average stress thickness of a given film can be determined by integrating the stress over a given film thickness:   where is the direction normal to the substrate and represents the in-place stress at a particular height of the film. The stress thickness (or force per unit width) is represented by is an important quantity as it is directionally proportional to the curvature by . Because of this proportionality, measuring the curvature of a film at a given film thickness can directly determine the stress in the film at that thickness. The curvature of a wafer is determined by the average stress of in the film. However, if stress is not uniformly distributed in a film (as it would be for epitaxially grown film layers that have not relaxed so that the intrinsic stress is due to the lattice mismatch of the substrate and the film), it is impossible to determine the stress at a specific film height without continuous curvature measurements. If continuous curvature measurements are taken, the time derivative of the curvature data: can show how the intrinsic stress is changing at any given point. Assuming that stress in the underlying layers of a deposited film remains constant during further deposition, we can represent the incremental stress as: Nanoindentation Nanoindentation is a popular method of measuring the mechanical properties of films. Measurements can be used to compare coated and uncoated films to reveal the effects of surface treatment on both elastic and plastic responses of the film. Load-displacement curves may reveal information about cracking, delamination, and plasticity in both the film and substrate. The Oliver and Pharr method can be used to evaluate nanoindentation results for hardness and elastic modulus evaluation by the use of axisymmetric indenter geometries like a spherical indenter. This method assumes that during unloading, only elastic deformations are recovered (where reverse plastic deformation is negligible). The parameter designates the load, is the displacement relative to the undeformed coating surface and is the final penetration depth after unloading. 
These are used to approximate the power law relation for unloading curves: After the contact area is calculated, the hardness is estimated by: From the relationship of contact area, the unloading stiffness can be expressed by the relation: Where is the effective elastic modulus and takes into account elastic displacements in the specimen and indenter. This relation can also be applied to elastic-plastic contact, which is not affected by pile-up and sink-in during indentation. Due to the low thickness of the films, accidental probing of the substrate is a concern. To avoid indenting beyond the film and into the substrate, penetration depths are often kept to less than 10% of the film thickness. For a conical or pyramidal indenters, the indentation depth scales as where is the radius of the contact circle and is the film thickness. The ratio of penetration depth and film thickness can be used as a scale parameter for soft films. Strain engineering Stress and relaxation of stresses in films can influence the materials properties of the film, such as mass transport in microelectronics applications. Therefore precautions are taken to either mitigate or produce such stresses; for example a buffer layer may be deposited between the substrate and film. Strain engineering is also used to produce various phase and domain structures in thin films such as in the domain structure of the ferroelectric Lead Zirconate Titanate (PZT). Multilayer medium In the physical sciences, a multilayer or stratified medium is a stack of different thin films. Typically, a multilayer medium is made for a specific purpose. Since layers are thin with respect to some relevant length scale, interface effects are much more important than in bulk materials, giving rise to novel physical properties. The term "multilayer" is not an extension of "monolayer" and "bilayer", which describe a single layer that is one or two molecules thick. A multilayer medium rather consists of several thin films. Examples An optical coating, as used for instance in a dielectric mirror, is made of several layers that have different refractive indexes. Giant magnetoresistance is a macroscopic quantum effect observed in alternating ferromagnetic and non-magnetic conductive layers. Applications Decorative coatings The usage of thin films for decorative coatings probably represents their oldest application. This encompasses ca. 100 nm thin gold leaves that were already used in ancient India more than 5000 years ago. It may also be understood as any form of painting, although this kind of work is generally considered as an arts craft rather than an engineering or scientific discipline. Today, thin-film materials of variable thickness and high refractive index like titanium dioxide are often applied for decorative coatings on glass for instance, causing a rainbow-color appearance like oil on water. In addition, intransparent gold-colored surfaces may either be prepared by sputtering of gold or titanium nitride. Optical coatings These layers serve in both reflective and refractive systems. Large-area (reflective) mirrors became available during the 19th century and were produced by sputtering of metallic silver or aluminum on glass. Refractive lenses for optical instruments like cameras and microscopes typically exhibit aberrations, i.e. non-ideal refractive behavior. 
While large sets of lenses had to be lined up along the optical path previously, nowadays, the coating of optical lenses with transparent multilayers of titanium dioxide, silicon nitride or silicon oxide etc. may correct these aberrations. A well-known example for the progress in optical systems by thin-film technology is represented by the only a few mm wide lens in smart phone cameras. Other examples are given by anti-reflection coatings on eyeglasses or solar panels. Protective coatings Thin films are often deposited to protect an underlying work piece from external influences. The protection may operate by minimizing the contact with the exterior medium in order to reduce the diffusion from the medium to the work piece or vice versa. For instance, plastic lemonade bottles are frequently coated by anti-diffusion layers to avoid the out-diffusion of , into which carbonic acid decomposes that was introduced into the beverage under high pressure. Another example is represented by thin TiN films in microelectronic chips separating electrically conducting aluminum lines from the embedding insulator in order to suppress the formation of . Often, thin films serve as protection against abrasion between mechanically moving parts. Examples for the latter application are diamond-like carbon layers used in car engines or thin films made of nanocomposites. Electrically operating coatings Thin layers from elemental metals like copper, aluminum, gold or silver etc. and alloys have found numerous applications in electrical devices. Due to their high electrical conductivity they are able to transport electrical currents or supply voltages. Thin metal layers serve in conventional electrical system, for instance, as Cu layers on printed circuit boards, as the outer ground conductor in coaxial cables and various other forms like sensors etc. A major field of application became their use in integrated passive devices and integrated circuits, where the electrical network among active and passive devices like transistors and capacitors etc. is built up from thin Al or Cu layers. These layers dispose of thicknesses in the range of a few 100 nm up to a few μm, and they are often embedded into a few nm thin titanium nitride layers in order to block a chemical reaction with the surrounding dielectric like . The figure shows a micrograph of a laterally structured TiN/Al/TiN metal stack in a microelectronic chip. Heterostructures of gallium nitride and similar semiconductors can lead to electrons being bound to a sub-nanometric layer, effectively behaving as a two-dimensional electron gas. Quantum effects in such thin films can significantly enhance electron mobility as compared to that of a bulk crystal, which is employed in high-electron-mobility transistors. Biosensors and plasmonic devices Noble metal thin films are used in plasmonic structures such as surface plasmon resonance (SPR) sensors. Surface plasmon polaritons are surface waves in the optical regime that propagate in between metal-dielectric interfaces; in Kretschmann-Raether configuration for the SPR sensors, a prism is coated with a metallic film through evaporation. Due to the poor adhesive characteristics of metallic films, germanium, titanium or chromium films are used as intermediate layers to promote stronger adhesion. Metallic thin films are also used in plasmonic waveguide designs. Thin-film photovoltaic cells Thin-film technologies are also being developed as a means of substantially reducing the cost of solar cells. 
The rationale for this is thin-film solar cells are cheaper to manufacture owing to their reduced material costs, energy costs, handling costs and capital costs. This is especially represented in the use of printed electronics (roll-to-roll) processes. Other thin-film technologies, that are still in an early stage of ongoing research or with limited commercial availability, are often classified as emerging or third generation photovoltaic cells and include, organic, dye-sensitized, and polymer solar cells, as well as quantum dot, copper zinc tin sulfide, nanocrystal and perovskite solar cells. Thin-film batteries Thin-film printing technology is being used to apply solid-state lithium polymers to a variety of substrates to create unique batteries for specialized applications. Thin-film batteries can be deposited directly onto chips or chip packages in any shape or size. Flexible batteries can be made by printing onto plastic, thin metal foil, or paper. Thin-film bulk acoustic wave resonators (TFBARs/FBARs) For miniaturising and more precise control of resonance frequency of piezoelectric crystals thin-film bulk acoustic resonators TFBARs/FBARs are developed for oscillators, telecommunication filters and duplexers, and sensor applications. See also Coating Dielectric mirror Dual-polarisation interferometry Ellipsometry Flexible display Flexible electronics Hydrogenography Kelvin probe force microscope Langmuir–Blodgett film Layer by layer Microfabrication Organic LED Sarfus Thin-film interference Thin-film optics Thin-film solar cell Thin-film bulk acoustic resonator Transfer-matrix method (optics) References Further reading Textbooks Historical Artificial materials Materials science Nanotechnology
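For the wafer-curvature method covered under "Measuring stress and strain" above, the Stoney relation reduces to a one-line function. The sketch below is an illustration only: the film and wafer numbers are assumptions, and it uses the common textbook statement of the formula, in which the biaxial modulus in the prefactor is that of the substrate (the film's elastic constants drop out under the thin-film assumption).

def stoney_stress(E_s, nu_s, t_s, t_f, curvature):
    """Average film stress from measured wafer curvature via the Stoney relation,
    common textbook form with the substrate biaxial modulus E_s / (1 - nu_s):
        sigma_f = E_s * t_s**2 * kappa / (6 * (1 - nu_s) * t_f)
    Assumes t_f << t_s, uniform in-plane stress, and small curvature kappa (1/m)."""
    biaxial_modulus = E_s / (1.0 - nu_s)
    return biaxial_modulus * t_s**2 * curvature / (6.0 * t_f)

# Illustrative numbers (assumptions): a 1 um film on a 500 um silicon wafer
# whose curvature changes by 0.02 1/m after deposition.
sigma = stoney_stress(E_s=130e9, nu_s=0.28, t_s=500e-6, t_f=1e-6, curvature=0.02)
print(f"average film stress ~ {sigma / 1e6:.0f} MPa")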
Thin film
[ "Physics", "Materials_science", "Mathematics", "Engineering" ]
7,226
[ "Applied and interdisciplinary physics", "Materials science", "Artificial materials", "Materials", "Nanotechnology", "Planes (geometry)", "Thin films", "Matter" ]
1,033,045
https://en.wikipedia.org/wiki/Marcinkiewicz%20interpolation%20theorem
In mathematics, the Marcinkiewicz interpolation theorem, discovered by , is a result bounding the norms of non-linear operators acting on Lp spaces. Marcinkiewicz' theorem is similar to the Riesz–Thorin theorem about linear operators, but also applies to non-linear operators. Preliminaries Let f be a measurable function with real or complex values, defined on a measure space (X, F, ω). The distribution function of f is defined by Then f is called weak if there exists a constant C such that the distribution function of f satisfies the following inequality for all t > 0: The smallest constant C in the inequality above is called the weak norm and is usually denoted by or Similarly the space is usually denoted by L1,w or L1,∞. (Note: This terminology is a bit misleading since the weak norm does not satisfy the triangle inequality as one can see by considering the sum of the functions on given by and , which has norm 4 not 2.) Any function belongs to L1,w and in addition one has the inequality This is nothing but Markov's inequality (aka Chebyshev's Inequality). The converse is not true. For example, the function 1/x belongs to L1,w but not to L1. Similarly, one may define the weak space as the space of all functions f such that belong to L1,w, and the weak norm using More directly, the Lp,w norm is defined as the best constant C in the inequality for all t > 0. Formulation Informally, Marcinkiewicz's theorem is Theorem. Let T be a bounded linear operator from to and at the same time from to . Then T is also a bounded operator from to for any r between p and q. In other words, even if one only requires weak boundedness on the extremes p and q, regular boundedness still holds. To make this more formal, one has to explain that T is bounded only on a dense subset and can be completed. See Riesz-Thorin theorem for these details. Where Marcinkiewicz's theorem is weaker than the Riesz-Thorin theorem is in the estimates of the norm. The theorem gives bounds for the norm of T but this bound increases to infinity as r converges to either p or q. Specifically , suppose that so that the operator norm of T from Lp to Lp,w is at most Np, and the operator norm of T from Lq to Lq,w is at most Nq. Then the following interpolation inequality holds for all r between p and q and all f ∈ Lr: where and The constants δ and γ can also be given for q = ∞ by passing to the limit. A version of the theorem also holds more generally if T is only assumed to be a quasilinear operator in the following sense: there exists a constant C > 0 such that T satisfies for almost every x. The theorem holds precisely as stated, except with γ replaced by An operator T (possibly quasilinear) satisfying an estimate of the form is said to be of weak type (p,q). An operator is simply of type (p,q) if T is a bounded transformation from Lp to Lq: A more general formulation of the interpolation theorem is as follows: If T is a quasilinear operator of weak type (p0, q0) and of weak type (p1, q1) where q0 ≠ q1, then for each θ ∈ (0,1), T is of type (p,q), for p and q with p ≤ q of the form The latter formulation follows from the former through an application of Hölder's inequality and a duality argument. Applications and examples A famous application example is the Hilbert transform. Viewed as a multiplier, the Hilbert transform of a function f can be computed by first taking the Fourier transform of f, then multiplying by the sign function, and finally applying the inverse Fourier transform. 
Hence Parseval's theorem easily shows that the Hilbert transform is bounded from to . A much less obvious fact is that it is bounded from to . Hence Marcinkiewicz's theorem shows that it is bounded from to for any 1 < p < 2. Duality arguments show that it is also bounded for 2 < p < ∞. In fact, the Hilbert transform is really unbounded for p equal to 1 or ∞. Another famous example is the Hardy–Littlewood maximal function, which is only sublinear operator rather than linear. While to bounds can be derived immediately from the to weak estimate by a clever change of variables, Marcinkiewicz interpolation is a more intuitive approach. Since the Hardy–Littlewood Maximal Function is trivially bounded from to , strong boundedness for all follows immediately from the weak (1,1) estimate and interpolation. The weak (1,1) estimate can be obtained from the Vitali covering lemma. History The theorem was first announced by , who showed this result to Antoni Zygmund shortly before he died in World War II. The theorem was almost forgotten by Zygmund, and was absent from his original works on the theory of singular integral operators. Later realized that Marcinkiewicz's result could greatly simplify his work, at which time he published his former student's theorem together with a generalization of his own. In 1964 Richard A. Hunt and Guido Weiss published a new proof of the Marcinkiewicz interpolation theorem. See also Interpolation space References . . . Fourier analysis Theorems in functional analysis Lp spaces
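Several of the displayed formulas in the article above did not survive extraction. For reference, a compact restatement under the standard conventions is given below; this follows the usual textbook formulation and is not a quotation of the article.

% Distribution function and weak L^p "norm" (standard conventions):
\lambda_f(t) = \omega\bigl(\{\, x \in X : |f(x)| > t \,\}\bigr), \qquad t > 0,
\qquad
\|f\|_{L^{p,w}} = \sup_{t>0}\, t\,\lambda_f(t)^{1/p}.

% Markov's (Chebyshev's) inequality, which gives L^1 \subset L^{1,w}:
\lambda_f(t) \le \frac{1}{t}\int_X |f|\, d\omega
\quad\Longrightarrow\quad
\|f\|_{L^{1,w}} \le \|f\|_{L^1}.

% Weak type (p,q) means \|Tf\|_{L^{q,w}} \le C\,\|f\|_{L^p}.
% General form of the theorem: if T is quasilinear, of weak types (p_0,q_0) and
% (p_1,q_1) with q_0 \ne q_1, then for each \theta \in (0,1) with
\frac{1}{p} = \frac{1-\theta}{p_0} + \frac{\theta}{p_1},
\qquad
\frac{1}{q} = \frac{1-\theta}{q_0} + \frac{\theta}{q_1},
\qquad p \le q,
% T is of strong type (p,q): \|Tf\|_{L^q} \le C_\theta\, \|f\|_{L^p}.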
Marcinkiewicz interpolation theorem
[ "Mathematics" ]
1,170
[ "Theorems in mathematical analysis", "Theorems in functional analysis" ]
1,033,084
https://en.wikipedia.org/wiki/Gaseous%20diffusion
Gaseous diffusion is a technology that was used to produce enriched uranium by forcing gaseous uranium hexafluoride (UF6) through microporous membranes. This produces a slight separation (enrichment factor 1.0043) between the molecules containing uranium-235 (235U) and uranium-238 (238U). By use of a large cascade of many stages, high separations can be achieved. It was the first process to be developed that was capable of producing enriched uranium in industrially useful quantities, but is nowadays considered obsolete, having been superseded by the more-efficient gas centrifuge process (enrichment factor 1.05 to 1.2). Gaseous diffusion was devised by Francis Simon and Nicholas Kurti at the Clarendon Laboratory in 1940, tasked by the MAUD Committee with finding a method for separating uranium-235 from uranium-238 in order to produce a bomb for the British Tube Alloys project. The prototype gaseous diffusion equipment itself was manufactured by Metropolitan-Vickers (MetroVick) at Trafford Park, Manchester, at a cost of £150,000 for four units, for the M. S. Factory, Valley. This work was later transferred to the United States when the Tube Alloys project became subsumed by the later Manhattan Project. Background Of the 33 known radioactive primordial nuclides, two (235U and 238U) are isotopes of uranium. These two isotopes are similar in many ways, except that only 235U is fissile (capable of sustaining a nuclear chain reaction of nuclear fission with thermal neutrons). In fact, 235U is the only naturally occurring fissile nucleus. Because natural uranium is only about 0.72% 235U by mass, it must be enriched to a concentration of 2–5% to be able to support a continuous nuclear chain reaction when normal water is used as the moderator. The product of this enrichment process is called enriched uranium. Technology Scientific basis Gaseous diffusion is based on Graham's law, which states that the rate of effusion of a gas is inversely proportional to the square root of its molecular mass. For example, in a box with a microporous membrane containing a mixture of two gases, the lighter molecules will pass out of the container more rapidly than the heavier molecules, if the pore diameter is smaller than the mean free path length (molecular flow). The gas leaving the container is somewhat enriched in the lighter molecules, while the residual gas is somewhat depleted. A single container wherein the enrichment process takes place through gaseous diffusion is called a diffuser. Uranium hexafluoride UF6 is the only compound of uranium sufficiently volatile to be used in the gaseous diffusion process. Fortunately, fluorine consists of only a single isotope 19F, so that the 1% difference in molecular weights between 235UF6 and 238UF6 is due only to the difference in weights of the uranium isotopes. For these reasons, UF6 is the only choice as a feedstock for the gaseous diffusion process. UF6, a solid at room temperature, sublimes at 56.4 °C (133 °F) at 1 atmosphere. The triple point is at 64.05 °C and 1.5 bar. Applying Graham's law gives: where: Rate1 is the rate of effusion of 235UF6. Rate2 is the rate of effusion of 238UF6. M1 is the molar mass of 235UF6 = 235.043930 + 6 × 18.998403  = 349.034348 g·mol−1 M2 is the molar mass of 238UF6 = 238.050788 + 6 × 18.998403  = 352.041206 g·mol−1 This explains the 0.4% difference in the average velocities of 235UF6 molecules over that of 238UF6 molecules. UF6 is a highly corrosive substance. 
UF6 is a highly corrosive substance. It is an oxidant and a Lewis acid which is able to bind to fluoride; for instance, the reaction of copper(II) fluoride with uranium hexafluoride in acetonitrile is reported to form copper(II) heptafluorouranate(VI), Cu(UF7)2. It reacts with water to form a solid compound, and is very difficult to handle on an industrial scale. As a consequence, internal gaseous pathways must be fabricated from austenitic stainless steel and other heat-stabilized metals. Non-reactive fluoropolymers such as Teflon must be applied as a coating to all valves and seals in the system. Barrier materials Gaseous diffusion plants typically use aggregate barriers (porous membranes) constructed of sintered nickel or aluminum, with a pore size of 10–25 nanometers (this is less than one-tenth the mean free path of the UF6 molecule). They may also use film-type barriers, which are made by boring pores through an initially nonporous medium. One way this can be done is by removing one constituent of an alloy, for instance using hydrogen chloride to remove the zinc from a silver-zinc (Ag-Zn) alloy or sodium hydroxide to remove aluminum from a Ni-Al alloy. Energy requirements Because the molecular weights of 235UF6 and 238UF6 are nearly equal, very little separation of the 235U and 238U occurs in a single pass through a barrier, that is, in one diffuser. It is therefore necessary to connect a great many diffusers together in a sequence of stages, using the outputs of the preceding stage as the inputs for the next stage. Such a sequence of stages is called a cascade. In practice, diffusion cascades require thousands of stages, depending on the desired level of enrichment. All components of a diffusion plant must be maintained at an appropriate temperature and pressure to assure that the UF6 remains in the gaseous phase. The gas must be compressed at each stage to make up for a loss in pressure across the diffuser. This leads to compression heating of the gas, which then must be cooled before entering the diffuser. The requirements for pumping and cooling make diffusion plants enormous consumers of electric power. Because of this, gaseous diffusion was until recently the most expensive method in use for producing enriched uranium. History Workers on the Manhattan Project in Oak Ridge, Tennessee, developed several different methods for the separation of isotopes of uranium. Three of these methods were used sequentially at three different plants in Oak Ridge to produce the 235U for "Little Boy" and other early nuclear weapons. In the first step, the S-50 uranium enrichment facility used the thermal diffusion process to enrich the uranium from 0.7% up to nearly 2% 235U. This product was then fed into the gaseous diffusion process at the K-25 plant, the product of which was around 23% 235U. Finally, this material was fed into calutrons at the Y-12 plant. These machines (a type of mass spectrometer) employed electromagnetic isotope separation to boost the final 235U concentration to about 84%. The preparation of UF6 feedstock for the K-25 gaseous diffusion plant was the first ever application for commercially produced fluorine, and significant obstacles were encountered in the handling of both fluorine and UF6. For example, before the K-25 gaseous diffusion plant could be built, it was first necessary to develop non-reactive chemical compounds that could be used as coatings, lubricants and gaskets for the surfaces that would come into contact with the UF6 gas (a highly reactive and corrosive substance).
Scientists of the Manhattan Project recruited William T. Miller, a professor of organic chemistry at Cornell University, to synthesize and develop such materials, because of his expertise in organofluorine chemistry. Miller and his team developed several novel non-reactive chlorofluorocarbon polymers that were used in this application. Calutrons were inefficient and expensive to build and operate. As soon as the engineering obstacles posed by the gaseous diffusion process had been overcome and the gaseous diffusion cascades began operating at Oak Ridge in 1945, all of the calutrons were shut down. Gaseous diffusion then became the preferred technique for producing enriched uranium. At the time of their construction in the early 1940s, the gaseous diffusion plants were some of the largest buildings ever constructed. Large gaseous diffusion plants were constructed by the United States, the Soviet Union (including a plant that is now in Kazakhstan), the United Kingdom, France, and China. Most of these have now closed or are expected to close, unable to compete economically with newer enrichment techniques. Some of the technology used in pumps and membranes remains top secret. Some of the materials that were used remain subject to export controls, as a part of the continuing effort to control nuclear proliferation. Current status In 2008, gaseous diffusion plants in the United States and France still generated 33% of the world's enriched uranium. However, the French plant (Eurodif's Georges-Besse plant) definitively closed in June 2012, and the Paducah Gaseous Diffusion Plant in Kentucky operated by the United States Enrichment Corporation (USEC) (the last fully functioning uranium enrichment facility in the United States to employ the gaseous diffusion process) ceased enrichment in 2013. The only other such facility in the United States, the Portsmouth Gaseous Diffusion Plant in Ohio, ceased enrichment activities in 2001. Since 2010, the Ohio site has been used mainly by AREVA, a French conglomerate, for the conversion of depleted UF6 to uranium oxide. As existing gaseous diffusion plants became obsolete, they were replaced by second-generation gas centrifuge technology, which requires far less electric power to produce equivalent amounts of separated uranium. AREVA replaced its Georges Besse gaseous diffusion plant with the Georges Besse II centrifuge plant. See also Capenhurst Fick's laws of diffusion K-25 Lanzhou Marcoule Molecular diffusion Nuclear fuel cycle Thomas Graham (chemist) Tomsk References External links Annotated references on gaseous diffusion from the Alsos Library Isotope separation Uranium Membrane technology
Gaseous diffusion
[ "Chemistry" ]
2,093
[ "Membrane technology", "Separation processes" ]
1,033,177
https://en.wikipedia.org/wiki/Convenience
Convenient procedures, products and services are those intended to increase ease in accessibility, save resources (such as time, effort and energy) and decrease frustration. A modern convenience is a labor-saving device, service or substance which makes a task easier or more efficient than a traditional method. Convenience is a relative concept, and depends on context. For example, automobiles were once considered a convenience, yet today are regarded as a normal part of life. Because of differences in lifestyles around the world, the term is relative, reflecting the conveniences previously available to a person or group. For instance, an American definition of 'modern convenience' is likely different from that of an individual living in a developing country. Most of the time, the term 'modern convenience' is used to express personal lifestyle and home life. Examples Service conveniences are those that save shoppers time or effort, and include variables such as credit availability and extended store hours. Service convenience pertains to the facilitation of selling both goods and services, and combinations of the two. Convenience goods are widely distributed products that "require minimal time and physical and mental effort to purchase." Ready meals and convenience cooking spare the consumer effort in preparation of a meal while providing high levels of energy and pronounced, if mostly artificial, flavour. Filling stations sell items that have nothing to do with refuelling a motor vehicle (e.g. milk, newspapers, cigarettes), but purchasing at that location can save the consumer time compared to making a separate journey to a supermarket. Conveniences such as direct deposit can save companies and consumers money, though this may or may not be passed along to the consumer. Some conveniences can become nuisances when they break down or don't function correctly. It costs time and money to fix items of convenience when they break down, and a failure may cause much greater costs if something else that depends on them cannot take place. History Early 20th century Household In 1911, architect and author Louis H. Gibson defined modern conveniences as "those arrangements and appliances which make it possible for people to live comfortably in a larger house, without seriously increasing the cares which they had in a smaller one". The supposition is that at that time, if a family lived in a smaller home, they would have less furniture and fewer appliances and other goods to take care of, and as a result the family's lifestyle and housekeeping would be relatively easy. If, on the other hand, a family moved into a larger home, the increased area and furnishings would be much more difficult to manage without labor-saving devices. Examples of modern conveniences at that time included: Kitchen sinks with hot and cold running tap water and wastewater drainage The addition of bathrooms as separate rooms with sinks and toilets, also with waste water and sewage drainage A furnace, also identified as a significant cost savings Closets in bedrooms and bathrooms, hallway linen closets, and broom closets Gas lighting, stoves and fireplaces, where gas was available Icebox or refrigerator 20th century The homes of the 20th century are much bigger than those of the 19th century, both in terms of square footage and number of rooms. Homes built at the beginning of the 21st century have 2–3 times more rooms than homes at the turn of the 20th century.
In terms of square footage, new homes built in 2000 are 50% larger than homes built in the 1960s. The 20th century also saw a proliferation of home appliances like washing machines, dryers, dishwashers, microwave ovens, frost-free refrigerators, water heaters, air conditioning, vacuum cleaners, and irons. Electricity and innovative electronics products including stereo equipment, color televisions, answering machines, and video cassette recorders also facilitated modern life. 21st century Comparison of modern conveniences in new housing construction In his 2011 book America's Ticking Bankruptcy Bomb: How the Looming Debt Crisis Threatens the American Dream—and How We Can Turn the Tide Before It's Too Late, Peter Ferrara says that residential access to modern conveniences is markedly different in the 21st century compared to the beginning of the 20th century. Upcoming technological advancements David Kirkpatrick, author of The Facebook Effect (2010), projects in an article called "Tech Targets the Third World" that technological advancements in education and health care, mobile computing and broadband will empower the poor and provide economic opportunities to which they would not otherwise have access. These technologies are relatively easy and cost-effective to implement because of technological advancements that have driven down the costs and because developing countries do not have expensive and outdated legacy systems to manage alongside emerging technology. Religious groups Religious groups that shun modern conveniences include Anabaptists (and their direct descendants, the Amish, Hutterites, and Mennonites) and, with respect to the Sabbath, Orthodox and Conservative Judaism. Anabaptists Key beliefs that determine an Anabaptist community's position on use of modern conveniences are: The belief that in order to enter the Kingdom of God, they must live apart from the "world", or the unreformed. Avoiding "worldly" behaviors that pull their attention and intentions away from their religious community. Orthodox and Conservative Judaism For Orthodox and Conservative Jews, Shabbat is the seventh day of the Jewish week and is a day of rest in Judaism. Shabbat is observed from a few minutes before sunset on Friday evening until a few minutes after the appearance of three stars in the sky on Saturday night. On Shabbat, Jews recall the Genesis creation narrative describing God creating the Heavens and the Earth in six days and resting on the seventh. It also recalls the giving of the Torah at Mount Sinai, when God commanded the Israelite nation to observe the seventh day and keep it holy. Shabbat is considered a festive day, when a Jew is freed from the regular labors of everyday life, can contemplate the spiritual aspects of life, and can spend time with family. Orthodox and some Conservative authorities rule that the 39 categories of work (referred to as "melakhot") listed in Mishnah Tractate Shabbat are prohibited during Shabbat, and extend these prohibitions to activities such as turning electric devices on or off and driving cars. Consequences There are many ramifications of the development of modern conveniences for individuals and their families over the past 150 or more years. The many labor-saving devices have kept pace with growing houses and furnishings and allow for greater leisure. There are also some negative effects, some of which are the result of advancements in chemical technology in the foods we eat or the products we use. In these cases there are also conflicting opinions about the extent to which some of the products are harmful.
Here are a few examples of positive and negative effects of modern conveniences. Positive effects Health care Some of the major improvements over the past century have been in health care. For example, modern medicine has made leaps in preventing infectious diseases, in part due to improved water and sewage treatment. This is evident in the marked rise in life expectancy. Technological advancement in underdeveloped countries Some of the most dramatic technological benefits are seen in underdeveloped countries. For instance, cabling for landline telephone service is expensive and requires a lot of time to complete, especially in the most remote areas. The introduction of cellphone service, on the other hand, is much cheaper and dramatically improves individuals' ability to be economically productive, often in microbusinesses. It is estimated that 80% of the world's population is now located within range of cellular towers, 1.5 billion cellular phones are in use in developing countries and, in India alone, five million customers sign up for cellular service each week. The Four Asian Tigers (Hong Kong, Singapore, Taiwan, and South Korea) are a few of the economies that have leveraged technology to become a presence in the global community. Another example, a project led by Nicholas Negroponte of MIT's Media Lab, provides $100 laptop computers to underdeveloped countries in Asia, Latin America and Africa. Negative effects In 1905, the Journal of the American Medical Association published an article titled "Nervous Strain" about how "modern conveniences" make life busier and involve less direct contact than in preceding generations. As an example, the author compared having a calming cup of tea with a person to the more distant practice of placing a telephone call. Labor-saving devices meant that people now spent more time sitting, breathed machine-generated smoke, and ate food, especially meat, fat and sugars, in greater abundance, changing people's diets. These activities were speculated to result in high blood pressure, obesity, and "nervous strain". Meat consumption Because of the enormous productivity growth in intensive agriculture and the meat industry, meat has become a major part of the diet in most developed countries and is on the rise in developing countries. Red meat consumption has been linked to colon cancer; in addition, growth hormone and antibiotic treatment of cattle and poultry has raised serious concerns about the adverse effects of those substances in industrially produced meat. Processed food and food preparation Greater reliance on processed, packaged, microwaveable foods high in fat and high-fructose corn syrup has been associated with a rise in Type 2 diabetes, obesity, and other health concerns. Margarine, once seen as a great alternative to butter, does not help with absorption of nutrients and may contribute to heart disease. Other Styrofoam cups can release styrene into the food or drink as it is consumed. Leaded fuel is another hazardous chemical. Although it has been outlawed in the United States, its use in developing countries impacts the health of local people and the global environment.
See also Amish life in the modern world Appropriate technology Canadians of convenience Consumerism Convenience function (computing) Convenience store Convenience store crime Convenience translation (finance) Critique of technology Flag of convenience Flag of convenience (business) Gamaekjip List of convenience stores Marriage of convenience Modern technology Public convenience, a term for a public toilet Social construction of technology Technology Technology and society References Further reading Carlin, Dale. (2002) Acid-Base Balancing: Magic Bullet Against Aging. Lincoln, NE: iUniverse. Price, DDS, Weston A. (2008) [1939]. Nutrition and Physical Degeneration. United States. Dept. of Agriculture. Office of the Secretary. Information Office (1915). Reports: Needs of farm women, Issues 103-106. Washington, D.C.: Government Printing Office. Morse, Dan. "Still Called by Faith to the Booth: As Pay Phones Vanish, Amish and Mennonites Build Their Own", The Washington Post, September 3, 2006, p. C1. Zimmerman Umble, Diane. Work on the subject of the Amish and telephones. External links Consumer behaviour
Convenience
[ "Biology" ]
2,204
[ "Behavior", "Consumer behaviour", "Human behavior" ]
1,033,226
https://en.wikipedia.org/wiki/R%C3%A9my%20Card
Rémy Card is a French software developer who is credited as one of the primary developers of the Extended file system (ext) and Second Extended file system (ext2) for Linux. References Bibliography Card, Rémy. (1997) Programmation Linux 2.0. Gestion 2000. . Card, Rémy; Dumas, Éric; & Mével, Franck. (1998). The Linux Kernel Book. John Wiley & Sons. . External links Design and Implementation of the Second Extended Filesystem - written by Rémy Card, Theodore Ts'o and Stephen Tweedie, published at the First Dutch International Symposium on Linux (December 1994) Rémy Card Interview - in French (April 1998) French computer programmers Free software programmers Linux kernel programmers Year of birth missing (living people) Living people Academic staff of Versailles Saint-Quentin-en-Yvelines University
Rémy Card
[ "Technology" ]
176
[ "Computing stubs", "Computer specialist stubs" ]
1,033,293
https://en.wikipedia.org/wiki/Rutan%20Boomerang
The Rutan Model 202 Boomerang is an aircraft designed and built by Burt Rutan, with the first prototype taking flight in 1996. The design was intended to be a multi-engine aircraft that, in the event of the failure of a single engine, would not become dangerously difficult to control due to asymmetric thrust. The result is an asymmetrical aircraft with a very distinct appearance. Design and development The Boomerang was designed around the specifications of the Beechcraft Baron 58, one of the best known and most numerous twin-engine civilian aircraft. The asymmetrical design allows the Boomerang to fly faster and farther than the Baron while using smaller engines and seating the same number of occupants. The Boomerang is powered by two engines, with the right engine producing 10 hp (8 kW) more power than the left one (the engines are in fact the same model, just rated differently). The wings are forward-swept. A single prototype was completed in 1996, registered as N24BT. It was operated by Rutan for six years. Rutan's Boomerang was restored to flying condition in 2011 and made an appearance at Oshkosh that year as part of the Tribute to Rutan. Morrow Aircraft Corporation MB-300 In 1997, avionics entrepreneur Ray Morrow and his son, Neil, founded an air taxi company, Skytaxi. Initially they used Cessna 414s as interim aircraft, but in the long term they planned to use a modified version of Rutan's Boomerang design, which they designated the MB-300, and founded Morrow Aircraft Corporation in order to design and manufacture the MB-300. In 1999, Morrow applied to the Federal Aviation Administration (FAA) of the United States for a type certificate for the MB-300; however, development was suspended in 2002, before the prototype was completed, and was not resumed. Specifications (Boomerang) See also Blohm & Voss BV 141 - asymmetric German design of World War II References External links Autopia - Burt Rutan’s Boomerang: Safety Through Asymmetry - July 29, 2011 SCALED and RAF Manned Research Projects - April 2011 1990s United States civil utility aircraft Boomerang Asymmetrical aircraft Twin-fuselage aircraft Forward-swept-wing aircraft Aircraft first flown in 1996 Twin piston-engined tractor aircraft Twin-tail aircraft Aircraft with retractable tricycle landing gear
Rutan Boomerang
[ "Physics" ]
494
[ "Asymmetrical aircraft", "Symmetry", "Asymmetry" ]