Polycystic liver disease (PLD) usually describes the presence of multiple cysts scattered throughout normal liver tissue. PLD is commonly seen in association with autosomal-dominant polycystic kidney disease (ADPKD), which has a prevalence of 1 in 400 to 1,000 and accounts for 8–10% of all cases of end-stage renal disease. The much rarer autosomal-dominant polycystic liver disease progresses without any kidney involvement. == Presentation == == Pathophysiology == Associations with PRKCSH and SEC63 have been described. Polycystic liver disease comes in two forms: autosomal dominant polycystic kidney disease (with kidney cysts) and autosomal dominant polycystic liver disease (liver cysts only). == Diagnosis == Most patients with PLD are asymptomatic, with simple cysts found incidentally during routine investigations. After confirming the presence of cysts in the liver, laboratory tests may be ordered to check liver function, including bilirubin, alkaline phosphatase, alanine aminotransferase, and prothrombin time. Patients with PLD often have an enlarged liver that can compress adjacent organs, leading to nausea, respiratory issues, and limited physical ability. Classification of the progression of the disease takes into consideration the amount of remaining liver parenchyma compared to the number and size of cysts. Polycystic liver disease can exist as isolated polycystic liver disease (PCLD), as part of autosomal dominant polycystic kidney disease (ADPKD), or as part of autosomal recessive polycystic kidney disease (ARPKD). == Treatment == Many patients are asymptomatic and thus are not candidates for surgery. For patients with pain or complications from the cysts, the goal of treatment is to reduce the size of the cysts while protecting the functioning liver parenchyma. Cysts may be removed surgically or by using aspiration sclerotherapy. == References == == Further reading == == External links ==
Wikipedia/Polycystic_liver_disease
Thyroid disease in women most commonly takes the form of Hashimoto's thyroiditis (theye-royd-EYET-uhss), an autoimmune disease in which the immune system attacks the thyroid gland. The condition can have a profound effect during pregnancy and on the child. If the disease is untreated during pregnancy, the infant may be seriously affected and can have a variety of birth defects. Many women with Hashimoto's disease develop an underactive thyroid. They may have mild or no symptoms at first, but symptoms tend to worsen over time. If a woman is pregnant and has symptoms of Hashimoto's disease, the clinician will do an exam and order one or more tests. The thyroid is a small gland in the front of the neck. The thyroid makes hormones called T3 and T4 that regulate how the body uses energy. Thyroid hormone levels are controlled by the pituitary gland, which is a pea-sized gland in the brain. It makes thyroid stimulating hormone (TSH), which triggers the thyroid to make thyroid hormone. In Hashimoto's disease the immune system makes antibodies that damage thyroid cells and interfere with their ability to make thyroid hormone. Over time, thyroid damage can cause thyroid hormone levels to be too low. This is called an underactive thyroid or hypothyroidism (heye-poh-THEYE-royd-ism). An underactive thyroid causes every function of the body to slow down, such as heart rate, brain function, and the rate at which the body turns food into energy. Hashimoto's disease is the most common cause of an underactive thyroid. It is closely related to Graves' disease, another autoimmune disease affecting the thyroid. == Effects of hormone == Normal hormone changes during pregnancy cause thyroid hormone levels to increase. The thyroid may enlarge slightly in healthy women during pregnancy, but not enough to be felt. These changes do not affect the pregnancy or unborn baby. Yet untreated thyroid problems can threaten pregnancy and the growing baby. Symptoms of normal pregnancy, like fatigue, can make it easy to overlook thyroid problems in pregnancy. Thyroid hormone is vital during pregnancy. The unborn baby's brain and nervous system need thyroid hormone to develop. During the first trimester, the baby depends on the mother's supply of thyroid hormone. At 10 to 12 weeks of pregnancy, the baby's thyroid begins to work on its own. But the baby still depends on the mother for iodine, which the thyroid uses to make thyroid hormone. Pregnant women need about 250 micrograms (mcg) of iodine a day. Some women might not get all the iodine they need through the foods they eat or prenatal vitamins. Using iodized salt (salt that has had iodine added to it) rather than plain table salt is recommended. Prenatal vitamins that contain iodine are also recommended. Some women develop thyroid problems in the first year after giving birth. This is called postpartum thyroiditis. It often begins with symptoms of an overactive thyroid, which last 2 to 4 months. Mild symptoms might be overlooked. Affected women then develop symptoms of an underactive thyroid, which can last up to a year. An underactive thyroid needs to be treated. In most cases, thyroid function returns to normal as the thyroid heals. == Testing == Experts have not reached agreement on whether all pregnant women should be routinely screened for thyroid problems. But if an underactive thyroid, with or without symptoms, is found during pregnancy, it should be treated to lower the risk of pregnancy problems. An underactive thyroid without symptoms occurs in 2 to 3 in every 100 pregnancies. Women can request thyroid screening.
== Disease prior to pregnancy == Women being treated for Hashimoto's disease can become pregnant. It is recommended that thyroid function be well-controlled before getting pregnant. Untreated or poorly treated underactive thyroid can lead to problems for the mother, such as preeclampsia, anemia, miscarriage, placental abruption, high cholesterol, and postpartum bleeding. It also can cause serious problems for the baby, such as preterm birth, low birth weight, stillbirth, birth defects, and thyroid problems. == Treatment during pregnancy == During pregnancy, women may want to see both an OB/GYN and an endocrinologist, a doctor who treats people with hormone problems. Levothyroxine is safe to use during pregnancy and necessary for the health of the baby. Women with Hashimoto's disease or an underactive thyroid who are taking levothyroxine before pregnancy may need a higher dose to maintain normal thyroid function. Clinicians may check thyroid function every 6 to 8 weeks during pregnancy. After delivery, hormone levels usually go back to the pre-pregnancy level. == Breastfeeding == Levothyroxine does pass through breast milk. It is not likely to cause problems for the baby. In some cases, an underactive thyroid may inhibit the production of breast milk. == References ==
Wikipedia/Thyroid_disease_in_women
Lugol's iodine, also known as aqueous iodine and strong iodine solution, is a solution of potassium iodide with iodine in water. It is a medication and disinfectant used for a number of purposes. Taken by mouth, it is used to treat thyrotoxicosis until surgery can be carried out, to protect the thyroid gland from radioactive iodine, and to treat iodine deficiency. When applied to the cervix it is used to help in screening for cervical cancer. As a disinfectant it may be applied to small wounds such as a needle stick injury. A small amount may also be used for emergency disinfection of drinking water. Side effects may include allergic reactions, headache, vomiting, and conjunctivitis. Long term use may result in trouble sleeping and depression. It should not typically be used during pregnancy or breastfeeding. Lugol's iodine is a liquid made up of two parts potassium iodide for every one part elemental iodine in water. Lugol's iodine was first made in 1829 by the French physician Jean Lugol. It is on the World Health Organization's List of Essential Medicines. Lugol's iodine is available as a generic medication and over the counter. Lugol's solution is available in different strengths of iodine. Large volumes of concentrations of more than 2.2% may be subject to regulation. == Uses == === Medical uses === Preoperative administration of Lugol's solution decreases intraoperative blood loss during thyroidectomy in patients with Graves' disease. However, it appears ineffective in patients who are already euthyroid on anti-thyroid drugs and levothyroxine. During colposcopy, Lugol's iodine is applied to the vagina and cervix. Normal vaginal tissue stains brown due to its high glycogen content, while tissue suspicious for cancer does not stain, and thus appears pale compared to the surrounding tissue. Biopsy of suspicious tissue can then be performed. This is called a Schiller's test. Patients at high risk of oesophageal squamous cell carcinoma are usually followed using a combination of Lugol's chromoendoscopy and narrow-band imaging. With Lugol's iodine, low-grade dysplasia appears as an unstained or weakly stained area; high-grade dysplasia is consistently unstained. Lugol's iodine may also be used to better visualize the mucogingival junction in the mouth. Similar to the staining method described above for colposcopy, alveolar mucosa has a high glycogen content that gives a positive iodine reaction, in contrast to the keratinized gingiva. Lugol's iodine may also be used as an oxidizing germicide; however, it is somewhat undesirable for this purpose because it may lead to scarring and temporarily discolors the skin. One way to avoid this problem is to use a solution of 70% ethanol to wash off the iodine later. Lugol's iodine was distributed in the Polish People's Republic after the Chernobyl catastrophe because the government had not been informed of how severe the event was, radiation levels were overestimated, and iodine tablets were unavailable. === Science === Lugol's iodine is used as a mordant when performing a Gram stain. It is applied for 1 minute after staining with crystal violet, but before ethanol, to ensure that the peptidoglycan of gram-positive organisms remains stained, making them easy to identify as gram-positive under the microscope. This solution is used as an indicator test for the presence of starches in organic compounds, with which it reacts by turning a dark-blue/black. Elemental iodine solutions like Lugol's will stain starches due to iodine's interaction with the coil structure of the polysaccharide.
Starches include the plant starches amylose and amylopectin and glycogen in animal cells. Lugol's solution will not detect simple sugars such as glucose or fructose. In the pathologic condition amyloidosis, amyloid deposits (i.e., deposits that stain like starch, but are not) can be so abundant that affected organs will also stain grossly positive for the Lugol reaction for starch. It can be used as a cell stain, making the cell nuclei more visible and for preserving phytoplankton samples. Lugol's solution can also be used in various experiments to observe how a cell membrane uses osmosis and diffusion. Lugol's solution is also used in the marine aquarium industry. Lugol's solution provides a strong source of free iodine and iodide to reef inhabitants and macroalgae. Although the solution is thought to be effective when used with stony corals, systems containing xenia and soft corals are assumed to be particularly benefited by the use of Lugol's solution. Used as a dip for stony and soft or leather corals, Lugol's may help rid the animals of unwanted parasites and harmful bacteria. The solution is thought to foster improved coloration and possibly prevent bleaching of corals due to changes in light intensity, and to enhance coral polyp expansion. The blue colors of Acropora spp. are thought to be intensified by the use of potassium iodide. Specially packaged supplements of the product intended for aquarium use can be purchased at specialty stores and online. This and other iodide-iodine solutions can also function as a less toxic replacement for aqua regia, allowing for gold to be dissolved without the use of strong acids. === Outdated uses === Up until early 1970s, it was often recommended for use in victims of rape in order to avoid pregnancy. The idea stemmed from the fact that, in the laboratory, Lugol's iodine appeared to kill sperm cells even in such great dilutions as 1:32. Thus it was thought that an intrauterine application of Lugol's iodine, immediately after the event, would help avoid pregnancy. == Side effects == Because it contains free iodine, Lugol's solution at 2% or 5% concentration without dilution is irritating and destructive to mucosa, such as the lining of the esophagus and stomach. Doses of 10 mL of undiluted 5% solution have been reported to cause gastric lesions when used in endoscopy. The LD50 for 5% Iodine is 14,000 mg/kg (14 g/kg) in rats, and 22,000 mg/kg (22 g/kg) in mice. The World Health Organization classifies substances taken orally with an LD50 of 5–50 mg/kg as the second highest toxicity class, Class Ib (Highly Hazardous). The Global Harmonized System of Classification and Labeling of Chemicals categorizes this as Category 2 with a hazard statement "Fatal if swallowed". Potassium iodide is not considered hazardous. == Mechanism of action == The above uses and effects are consequences of the fact that the solution is a source of effectively free elemental iodine, which is readily generated from the equilibrium between elemental iodine molecules and polyiodide ions in the solution. == History == It was historically used as a first-line treatment for hyperthyroidism, as the administration of pharmacologic amounts of iodine leads to temporary inhibition of iodine organification in the thyroid gland, caused by phenomena including the Wolff–Chaikoff effect and the Plummer effect. However it is not used to treat certain autoimmune causes of thyroid disease as iodine-induced blockade of iodine organification may result in hypothyroidism. 
Iodine solutions are not considered a first-line therapy, because of the possible induction of resistant hyperthyroidism, but may be considered as an adjuvant therapy when used together with other hyperthyroidism medications. Lugol's iodine has been used traditionally to correct iodine deficiency. Because of its wide availability as a drinking-water decontaminant and its high content of potassium iodide, its emergency use was initially recommended to the Polish government in 1986, after the Chernobyl disaster, to replace and block any intake of radioactive 131I, even though it was known to be a non-optimal agent due to its somewhat toxic free-iodine content. Other sources state that pure potassium iodide solution in water (SSKI) was eventually used for most of the thyroid protection after this accident. There is "strong scientific evidence" that potassium iodide thyroid protection helps prevent thyroid cancer. Potassium iodide does not provide immediate protection but can be a component of a general strategy in a radiation emergency. Historically, Lugol's iodine solution has been widely available and used for a number of health problems, with some precautions. Lugol's is sometimes prescribed in a variety of alternative medical treatments. Only since the end of the Cold War has the compound become subject to national regulation in the English-speaking world. == Society and culture == === Regulation === Until 2007, in the United States, Lugol's solution was unregulated and available over the counter as a general reagent, an antiseptic, a preservative, or as a medicament for human or veterinary application. Since 1 August 2007, the DEA regulates all iodine solutions containing greater than 2.2% elemental iodine as a List I precursor because they may potentially be used in the illicit production of methamphetamine. Transactions of up to one fluid ounce (30 ml) of Lugol's solution are exempt from this regulation. === Formula and manufacture === Lugol's solution is commonly available in different potencies of (nominal) 1%, 2%, 5% or 10%. Iodine concentrations greater than 2.2% are subject to US regulations. If the US regulations are taken literally as limiting the total iodine content (elemental iodine plus iodide), the 2.2% maximum restricts a Lugol's solution to a maximum nominal strength of about 0.87%. The most commonly used (nominal) 5% solution consists of 5% (wt/v) iodine (I2) and 10% (wt/v) potassium iodide (KI) mixed in distilled water and has a total iodine content of 126.4 mg/mL. The (nominal) 5% solution thus has a total iodine content of 6.32 mg per drop of 0.05 mL; the (nominal) 2% solution has 2.53 mg total iodine content per drop. Potassium iodide renders the elemental iodine soluble in water through the formation of the triiodide (I3−) ion. It is not to be confused with tincture of iodine solutions, which consist of elemental iodine and iodide salts dissolved in water and alcohol; Lugol's solution contains no alcohol. Other names for Lugol's solution are I2KI (iodine-potassium iodide); Markodine, Strong solution (Systemic); and Aqueous Iodine Solution BP. === Economics === In the United Kingdom, in 2015, the NHS paid £9.57 per 500 ml of solution. == References ==
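The dilution arithmetic in the Formula and manufacture section above can be double-checked with a short calculation. The sketch below (Python) computes the total iodine content of a nominal-strength Lugol's solution; the 76% iodine mass fraction of KI is derived from standard atomic masses, and the 0.05 mL drop volume follows the figure quoted in the article.

```python
# Rough check of the iodine-content figures quoted above for Lugol's solution.
# Assumptions: a "nominal c%" solution is c% (wt/v) I2 plus 2c% (wt/v) KI,
# and one drop is 0.05 mL, as stated in the article.

M_I = 126.904   # molar mass of iodine, g/mol
M_K = 39.098    # molar mass of potassium, g/mol
iodine_fraction_of_KI = M_I / (M_I + M_K)   # ~0.764

def total_iodine_mg_per_ml(nominal_percent: float) -> float:
    """Total iodine (elemental I2 plus iodide from KI) in mg per mL."""
    i2_mg_per_ml = nominal_percent * 10.0          # c% wt/v -> mg/mL
    ki_mg_per_ml = 2.0 * nominal_percent * 10.0    # KI is twice the I2 mass
    return i2_mg_per_ml + ki_mg_per_ml * iodine_fraction_of_KI

for strength in (5.0, 2.0):
    per_ml = total_iodine_mg_per_ml(strength)
    per_drop = per_ml * 0.05                       # one 0.05 mL drop
    print(f"nominal {strength:.0f}%: {per_ml:.1f} mg/mL total iodine, "
          f"{per_drop:.2f} mg per drop")

# The output roughly matches the article: ~126.4 mg/mL and ~6.32 mg per drop
# for the 5% solution, and ~2.53 mg per drop for the 2% solution.
```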
Wikipedia/Lugol's_solution
Iodine-131 (131I, I-131) is an important radioisotope of iodine discovered by Glenn Seaborg and John Livingood in 1938 at the University of California, Berkeley. It has a radioactive decay half-life of about eight days. It is associated with nuclear energy, medical diagnostic and treatment procedures, and natural gas production. It also plays a major role as a radioactive isotope present in nuclear fission products, and was a significant contributor to the health hazards from open-air atomic bomb testing in the 1950s, and from the Chernobyl disaster, as well as being a large fraction of the contamination hazard in the first weeks in the Fukushima nuclear crisis. This is because 131I is a major fission product of uranium and plutonium, comprising nearly 3% of the total products of fission (by weight). See fission product yield for a comparison with other radioactive fission products. 131I is also a major fission product of uranium-233, produced from thorium. Due to its mode of beta decay, iodine-131 causes mutation and death in cells that it penetrates, and other cells up to several millimeters away. For this reason, high doses of the isotope are sometimes less dangerous than low doses, since they tend to kill thyroid tissues that would otherwise become cancerous as a result of the radiation. For example, children treated with moderate dose of 131I for thyroid adenomas had a detectable increase in thyroid cancer, but children treated with a much higher dose did not. Likewise, most studies of very-high-dose 131I for treatment of Graves' disease have failed to find any increase in thyroid cancer, even though there is linear increase in thyroid cancer risk with 131I absorption at moderate doses. Thus, iodine-131 is increasingly less employed in small doses in medical use (especially in children), but increasingly is used only in large and maximal treatment doses, as a way of killing targeted tissues. This is known as "therapeutic use". Iodine-131 can be "seen" by nuclear medicine imaging techniques (e.g., gamma cameras) whenever it is given for therapeutic use, since about 10% of its energy and radiation dose is via gamma radiation. However, since the other 90% of radiation (beta radiation) causes tissue damage without contributing to any ability to see or "image" the isotope, other less-damaging radioisotopes of iodine such as iodine-123 (see isotopes of iodine) are preferred in situations when only nuclear imaging is required. The isotope 131I is still occasionally used for purely diagnostic (i.e., imaging) work, due to its low expense compared to other iodine radioisotopes. Very small medical imaging doses of 131I have not shown any increase in thyroid cancer. The low-cost availability of 131I, in turn, is due to the relative ease of creating 131I by neutron bombardment of natural tellurium in a nuclear reactor, then separating 131I out by various simple methods (i.e., heating to drive off the volatile iodine). By contrast, other iodine radioisotopes are usually created by far more expensive techniques, starting with cyclotron radiation of capsules of pressurized xenon gas. Iodine-131 is also one of the most commonly used gamma-emitting radioactive industrial tracer. Radioactive tracer isotopes are injected with hydraulic fracturing fluid to determine the injection profile and location of fractures created by hydraulic fracturing. 
Some studies suppose that incidental doses of iodine-131, much smaller than those used in medical therapeutic procedures, are the major cause of increased thyroid cancers after accidental nuclear contamination. These studies suppose that the cancers arise from residual tissue radiation damage caused by the 131I and should appear mostly years after exposure, long after the 131I has decayed. Other studies did not find a correlation. == Production == Most 131I production is from neutron irradiation of a natural tellurium target in a nuclear reactor. Irradiation of natural tellurium produces almost entirely 131I as the only radionuclide with a half-life longer than hours, since most lighter isotopes of tellurium become heavier stable isotopes, or else stable iodine or xenon. However, the heaviest naturally occurring tellurium nuclide, 130Te (34% of natural tellurium), absorbs a neutron to become tellurium-131, which beta decays with a half-life of 25 minutes to 131I. A tellurium compound can be irradiated while bound as an oxide to an ion exchange column, with evolved 131I then eluted into an alkaline solution. More commonly, powdered elemental tellurium is irradiated and then 131I separated from it by dry distillation of the iodine, which has a far higher vapor pressure. The element is then dissolved in a mildly alkaline solution in the standard manner, to produce 131I as iodide and hypoiodate (which is soon reduced to iodide). 131I is a fission product with a yield of 2.878% from uranium-235, and can be released in nuclear weapons tests and nuclear accidents. However, the short half-life means it is not present in significant quantities in cooled spent nuclear fuel, unlike iodine-129 whose half-life is nearly a billion times that of 131I. It is discharged to the atmosphere in small quantities by some nuclear power plants. == Radioactive decay == 131I decays with a half-life of 8.0249(6) days with beta minus and gamma emissions. This isotope of iodine has 78 neutrons in its nucleus, while the only stable nuclide, 127I, has 74. On decaying, 131I most often (89% of the time) expends its 971 keV of decay energy by transforming into stable xenon-131 in two steps, with gamma decay following rapidly after beta decay: $^{131}_{53}\mathrm{I} \rightarrow \beta^{-} + \bar{\nu}_{e} + {}^{131}_{54}\mathrm{Xe}^{*} + 606\ \mathrm{keV}$, followed by $^{131}_{54}\mathrm{Xe}^{*} \rightarrow {}^{131}_{54}\mathrm{Xe} + \gamma + 364\ \mathrm{keV}$. The primary emissions of 131I decay are thus electrons with a maximal energy of 606 keV (89% abundance, others 248–807 keV) and 364 keV gamma rays (81% abundance, others 723 keV). Beta decay also produces an antineutrino, which carries off variable amounts of the beta decay energy. The electrons, due to their high mean energy (190 keV, with typical beta-decay spectra present), have a tissue penetration of 0.6 to 2 mm. == Effects of exposure == Iodine in food is absorbed by the body and preferentially concentrated in the thyroid where it is needed for the functioning of that gland. When 131I is present in high levels in the environment from radioactive fallout, it can be absorbed through contaminated food, and will also accumulate in the thyroid. As it decays, it may cause damage to the thyroid. The primary risk from exposure to 131I is an increased risk of radiation-induced cancer in later life. Other risks include the possibility of non-cancerous growths and thyroiditis.
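As a numerical companion to the 8.0249-day half-life quoted in the Radioactive decay section above, the short sketch below (Python) shows how quickly 131I activity falls by physical decay alone; the example time points are illustrative choices rather than values from the article, and biological excretion, discussed later in the article, removes the isotope from the body even faster.

```python
# Physical decay of iodine-131, using the 8.0249-day half-life quoted above.
# Biological elimination (sweat, urine) is ignored, so this is an upper bound
# on how long a given quantity of activity persists.
import math

HALF_LIFE_DAYS = 8.0249

def fraction_remaining(days: float) -> float:
    """Fraction of the initial 131I activity left after `days` of decay."""
    return 0.5 ** (days / HALF_LIFE_DAYS)

# Illustrative time points: one half-life, one month, and the ~95 days after
# which airport radiation detectors reportedly no longer trigger for patients.
for days in (8, 30, 95):
    print(f"after {days:3d} days: {fraction_remaining(days) * 100:.4f}% remains")
```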
The risk of thyroid cancer in later life appears to diminish with increasing age at time of exposure. Most risk estimates are based on studies in which radiation exposures occurred in children or teenagers. When adults are exposed, it has been difficult for epidemiologists to detect a statistically significant difference in the rates of thyroid disease above that of a similar but otherwise-unexposed group. The risk can be mitigated by taking iodine supplements, raising the total amount of iodine in the body and, therefore, reducing uptake and retention in the face and chest and lowering the relative proportion of radioactive iodine. However, such supplements were not consistently distributed to the population living nearest to the Chernobyl nuclear power plant after the disaster, though they were widely distributed to children in Poland. Within the US, the highest 131I fallout doses occurred during the 1950s and early 1960s to children who consumed fresh milk from sources contaminated as the result of above-ground testing of nuclear weapons. The National Cancer Institute provides additional information on the health effects from exposure to 131I in fallout, as well as individualized estimates, for those born before 1971, for each of the 3070 counties in the US. The calculations are taken from data collected regarding fallout from the nuclear weapons tests conducted at the Nevada Test Site. On 27 March 2011, the Massachusetts Department of Public Health reported that 131I was detected in very low concentrations in rainwater from samples collected in Massachusetts, and that this likely originated from the Fukushima power plant. Farmers near the plant dumped raw milk, while testing in the United States found 0.8 picocuries per liter of iodine-131 in a milk sample, but the radiation levels were 5,000 times lower than the FDA's "defined intervention level". The levels were expected to drop relatively quickly. === Treatment and prevention === A common method of protecting against iodine-131 exposure is saturating the thyroid with regular, stable iodine-127, given as an iodide or iodate salt. == Medical use == Iodine-131 is used for unsealed source radiotherapy in nuclear medicine to treat several conditions. It can also be detected by gamma cameras for diagnostic imaging; however, it is rarely administered for diagnostic purposes only, and imaging will normally be done following a therapeutic dose. Use of 131I as an iodide salt exploits the mechanism of absorption of iodine by the normal cells of the thyroid gland. === Treatment of thyrotoxicosis === Major uses of 131I include the treatment of thyrotoxicosis (hyperthyroidism) due to Graves' disease, and sometimes hyperactive thyroid nodules (abnormally active thyroid tissue that is not malignant). The therapeutic use of radioiodine to treat hyperthyroidism from Graves' disease was first reported by Saul Hertz in 1941. The dose is typically administered orally (either as a liquid or capsule), in an outpatient setting, and is usually 400–600 megabecquerels (MBq). Radioactive iodine (iodine-131) alone can potentially worsen thyrotoxicosis in the first few days after treatment. One side effect of treatment is an initial period of a few days of increased hyperthyroid symptoms. This occurs because when the radioactive iodine destroys the thyroid cells, they can release thyroid hormone into the blood stream.
For this reason, sometimes patients are pre-treated with thyrostatic medications such as methimazole, and/or they are given symptomatic treatment such as propranolol. Radioactive iodine treatment is contraindicated in breast-feeding and pregnancy === Treatment of thyroid cancer === Iodine-131, in higher doses than for thyrotoxicosis, is used for ablation of remnant thyroid tissue following a complete thyroidectomy to treat thyroid cancer. ==== Administration of I-131 for ablation ==== Typical therapeutic doses of I-131 are between 2220 and 7400 megabecquerels (MBq). Because of this high radioactivity and because the exposure of stomach tissue to beta radiation would be high near an undissolved capsule, I-131 is sometimes administered to human patients in a small amount of liquid. Administration of this liquid form is usually by straw which is used to slowly and carefully suck up the liquid from a shielded container. For administration to animals (for example, cats with hyperthyroidism), for practical reasons the isotope must be administered by injection. European guidelines recommend administration of a capsule, due to "greater ease to the patient and the superior radiation protection for caregivers". ==== Post-treatment isolation ==== Ablation doses are usually administered on an inpatient basis, and IAEA International Basic Safety Standards recommend that patients are not discharged until the activity falls below 1100 MBq. ICRP advice states that "comforters and carers" of patients undergoing radionuclide therapy should be treated as members of the public for dose constraint purposes and any restrictions on the patient should be designed based on this principle. Patients receiving I-131 radioiodine treatment may be warned not to have sexual intercourse for one month (or shorter, depending on dose given), and women told not to become pregnant for six months afterwards. "This is because a theoretical risk to a developing fetus exists, even though the amount of radioactivity retained may be small and there is no medical proof of an actual risk from radioiodine treatment. Such a precaution would essentially eliminate direct fetal exposure to radioactivity and markedly reduce the possibility of conception with sperm that might theoretically have been damaged by exposure to radioiodine." These guidelines vary from hospital to hospital and will depend on national legislation and guidance, as well as the dose of radiation given. Some also advise not to hug or hold children when the radiation is still high, and a one- or two- metre distance to others may be recommended. I-131 will be eliminated from the body over the next several weeks after it is given. The majority of I-131 will be eliminated from the human body in 3–5 days, through natural decay, and through excretion in sweat and urine. Smaller amounts will continue to be released over the next several weeks, as the body processes thyroid hormones created with the I-131. For this reason, it is advised to regularly clean toilets, sinks, bed sheets and clothing used by the person who received the treatment. Patients may also be advised to wear slippers or socks at all times, and avoid prolonged close contact with others. This minimizes accidental exposure by family members, especially children. Use of a decontaminant specially made for radioactive iodine removal may be advised. The use of chlorine bleach solutions, or cleaners that contain chlorine bleach for cleanup, are not advised, since radioactive elemental iodine gas may be released. 
Airborne I-131 may cause a greater risk of second-hand exposure, spreading contamination over a wide area. Patient is advised if possible to stay in a room with a bathroom connected to it to limit unintended exposure to family members. Many airports have radiation detectors to detect the smuggling of radioactive materials. Patients should be warned that if they travel by air, they may trigger radiation detectors at airports up to 95 days after their treatment with 131I. === Other therapeutic uses === The 131I isotope is also used as a radioactive label for certain radiopharmaceuticals that can be used for therapy, e.g. 131I-metaiodobenzylguanidine (131I-MIBG) for imaging and treating pheochromocytoma and neuroblastoma. In all of these therapeutic uses, 131I destroys tissue by short-range beta radiation. About 90% of its radiation damage to tissue is via beta radiation, and the rest occurs via its gamma radiation (at a longer distance from the radioisotope). It can be seen in diagnostic scans after its use as therapy, because 131I is also a gamma-emitter. === Diagnostic uses === Because of the carcinogenicity of its beta radiation in the thyroid in small doses, I-131 is rarely used primarily or solely for diagnosis (although in the past this was more common due to this isotope's relative ease of production and low expense). Instead the more purely gamma-emitting radioiodine iodine-123 is used in diagnostic testing (nuclear medicine scan of the thyroid). The longer half-lived iodine-125 is also occasionally used when a longer half-life radioiodine is needed for diagnosis, and in brachytherapy treatment (isotope confined in small seed-like metal capsules), where the low-energy gamma radiation without a beta component makes iodine-125 useful. The other radioisotopes of iodine are never used in brachytherapy. The use of 131I as a medical isotope has been blamed for a routine shipment of biosolids being rejected from crossing the Canada—U.S. border. Such material can enter the sewers directly from the medical facilities, or by being excreted by patients after a treatment == Industrial radioactive tracer uses == Used for the first time in 1951 to localize leaks in a drinking water supply system of Munich, Germany, iodine-131 became one of the most commonly used gamma-emitting industrial radioactive tracers, with applications in isotope hydrology and leak detection. Since the late 1940s, radioactive tracers have been used by the oil industry. Tagged at the surface, water is then tracked downhole, using the appropriated gamma detector, to determine flows and detect underground leaks. I-131 has been the most widely used tagging isotope in an aqueous solution of sodium iodide. It is used to characterize the hydraulic fracturing fluid to help determine the injection profile and location of fractures created by hydraulic fracturing. == In popular culture == The use of iodine-131 as a poison – used in small doses over a period of time to disrupt a person's ability to think and tell right from wrong – played a central role in the episode "The Case of the Melancholy Marksman" of the long-running CBS TV series Perry Mason (season 5, episode 24, first broadcast March 24, 1962). == See also == Isotopes of iodine Iodine in biology == References == == External links == "ANL factsheet" (PDF). Archived from the original (PDF) on 14 June 2003. 
RadiologyInfo – The radiology information resource for patients: Radioiodine (I −131) Therapy Case Studies in Environmental Medicine: Radiation Exposure from Iodine 131 Sensitivity of Personal Homeland Security Radiation Detectors to Medical Radionuclides and Implications for Counseling of Nuclear Medicine Patients NLM Hazardous Substances Databank – Iodine, Radioactive
Wikipedia/Radioiodine_therapy
Mitochondrial disease is a group of disorders caused by mitochondrial dysfunction. Mitochondria are the organelles that generate energy for the cell and are found in every cell of the human body except red blood cells. They convert the energy of food molecules into the ATP that powers most cell functions. Mitochondrial diseases take on unique characteristics both because of the way the diseases are often inherited and because mitochondria are so critical to cell function. A subclass of these diseases with neuromuscular symptoms is known as the mitochondrial myopathies. == Types == Mitochondrial disease can manifest in many different ways, whether in children or adults. Examples of mitochondrial diseases include: mitochondrial myopathy; maternally inherited diabetes mellitus and deafness (MIDD), a combination that can occur together for other reasons but that at an early age can be due to mitochondrial disease, as may occur in Kearns–Sayre syndrome and Pearson syndrome; Leber's hereditary optic neuropathy (LHON), an eye disorder characterized by progressive loss of central vision due to degeneration of the optic nerves and retina (apparently affecting between 1 in 30,000 and 1 in 50,000 people), with visual loss typically beginning in young adulthood; Leigh syndrome (subacute necrotizing encephalomyelopathy), in which, after normal development, the disease usually begins late in the first year of life (although onset may occur in adulthood) and a rapid decline in function occurs, marked by seizures, altered states of consciousness, dementia, and ventilatory failure; neuropathy, ataxia, retinitis pigmentosa, and ptosis (NARP), with progressive symptoms as described in the acronym, as well as dementia; myoneurogenic gastrointestinal encephalopathy (MNGIE), with gastrointestinal pseudo-obstruction and neuropathy; MERRF syndrome, with progressive myoclonic epilepsy, "ragged red fibers" (clumps of diseased mitochondria that accumulate in the subsarcolemmal region of the muscle fiber and appear when muscle is stained with modified Gömöri trichrome stain), short stature, hearing loss, lactic acidosis, and exercise intolerance; MELAS syndrome (mitochondrial encephalopathy, lactic acidosis, and stroke-like episodes); and mitochondrial DNA depletion syndrome. Conditions such as Friedreich's ataxia can affect the mitochondria but are not associated with mitochondrial proteins. == Presentation == === Associated conditions === Acquired conditions in which mitochondrial dysfunction has been implicated include ALS, Alzheimer's disease, bipolar disorder, schizophrenia, aging and senescence, anxiety disorders, cancer, cardiovascular disease, diabetes, Huntington's disease, Long Covid, ME/CFS, Parkinson's disease, and sarcopenia. The body, and each mutation, is modulated by other genome variants; the mutation that in one individual may cause liver disease might in another person cause a brain disorder. The severity of the specific defect may also be great or small. Some defects include exercise intolerance. Defects often affect the operation of the mitochondria and multiple tissues more severely, leading to multi-system diseases. It has also been reported that drug-tolerant cancer cells have an increased number and size of mitochondria, which suggests an increase in mitochondrial biogenesis. A recent study in Nature Nanotechnology has reported that cancer cells can hijack the mitochondria from immune cells via physical tunneling nanotubes.
As a rule, mitochondrial diseases are worse when the defective mitochondria are present in the muscles, cerebrum, or nerves, because these cells use more energy than most other cells in the body. Although mitochondrial diseases vary greatly in presentation from person to person, several major clinical categories of these conditions have been defined, based on the most common phenotypic features, symptoms, and signs associated with the particular mutations that tend to cause them. An outstanding question and area of research is whether ATP depletion or reactive oxygen species are in fact responsible for the observed phenotypic consequences. Cerebellar atrophy or hypoplasia has sometimes been reported to be associated. == Causes == Mitochondrial disorders may be caused by mutations (acquired or inherited), in mitochondrial DNA (mtDNA), or in nuclear genes that code for mitochondrial components. They may also be the result of acquired mitochondrial dysfunction due to adverse effects of drugs, infections, or other environmental causes. Nuclear DNA has two copies per cell (except for sperm and egg cells), one copy being inherited from the father and the other from the mother. Mitochondrial DNA, however, is inherited from the mother only (with some exceptions) and each mitochondrion typically contains between 2 and 10 mtDNA copies. During cell division the mitochondria segregate randomly between the two new cells. Those mitochondria make more copies, normally reaching 500 mitochondria per cell. As mtDNA is copied when mitochondria proliferate, they can accumulate random mutations, a phenomenon called heteroplasmy. If only a few of the mtDNA copies inherited from the mother are defective, mitochondrial division may cause most of the defective copies to end up in just one of the new mitochondria (for more detailed inheritance patterns, see human mitochondrial genetics). Mitochondrial disease may become clinically apparent once the number of affected mitochondria reaches a certain level; this phenomenon is called "threshold expression". Mitochondria possess many of the same DNA repair pathways as nuclei do—but not all of them; therefore, mutations occur more frequently in mitochondrial DNA than in nuclear DNA (see Mutation rate). This means that mitochondrial DNA disorders may occur spontaneously and relatively often. Defects in enzymes that control mitochondrial DNA replication (all of which are encoded for by genes in the nuclear DNA) may also cause mitochondrial DNA mutations. Most mitochondrial function and biogenesis is controlled by nuclear DNA. Human mitochondrial DNA encodes 13 proteins of the respiratory chain, while most of the estimated 1,500 proteins and components targeted to mitochondria are nuclear-encoded. Defects in nuclear-encoded mitochondrial genes are associated with hundreds of clinical disease phenotypes including anemia, dementia, hypertension, lymphoma, retinopathy, seizures, and neurodevelopmental disorders. A study by Yale University researchers (published in the February 12, 2004, issue of the New England Journal of Medicine) explored the role of mitochondria in insulin resistance among the offspring of patients with type 2 diabetes. Other studies have shown that the mechanism may involve the interruption of the mitochondrial signaling process in body cells (intramyocellular lipids). A study conducted at the Pennington Biomedical Research Center in Baton Rouge, Louisiana showed that this, in turn, partially disables the genes that produce mitochondria. 
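The random segregation and "threshold expression" described in the Causes section above can be illustrated with a toy simulation. The sketch below (Python) is not a physiological model; the copy number, initial mutant load, number of divisions, and 60% expression threshold are all invented for illustration. It randomly partitions mutant and wild-type mtDNA copies at each cell division and reports how often a followed lineage drifts past the threshold.

```python
# Toy illustration of heteroplasmy drift and "threshold expression".
# All numbers here (200 mtDNA copies, 20% initial mutant load, 25 divisions,
# 60% threshold, 2,000 trials) are illustrative assumptions, not article values.
import random

COPIES = 200           # mtDNA copies per cell
INITIAL_MUTANT = 0.20  # starting mutant fraction (heteroplasmy level)
DIVISIONS = 25         # cell divisions to simulate
THRESHOLD = 0.60       # mutant fraction above which the defect is "expressed"
TRIALS = 2_000

def final_mutant_fraction() -> float:
    mutant = int(COPIES * INITIAL_MUTANT)
    for _ in range(DIVISIONS):
        # Duplicate the mtDNA pool, then randomly allocate half of the copies
        # to the daughter cell that we continue to follow.
        pool = [True] * (2 * mutant) + [False] * (2 * (COPIES - mutant))
        random.shuffle(pool)
        mutant = sum(pool[:COPIES])
    return mutant / COPIES

random.seed(0)
over_threshold = sum(final_mutant_fraction() > THRESHOLD for _ in range(TRIALS))
print(f"{over_threshold / TRIALS:.1%} of simulated lineages drift above "
      f"the {THRESHOLD:.0%} expression threshold after {DIVISIONS} divisions")
```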
== Mechanisms == The effective overall energy unit for the available body energy is referred to as the daily glycogen generation capacity, and is used to compare the mitochondrial output of affected or chronically glycogen-depleted individuals to healthy individuals. The glycogen generation capacity is entirely dependent on, and determined by, the operating levels of the mitochondria in all of the cells of the human body; however, the relation between the energy generated by the mitochondria and the glycogen capacity is very loose and is mediated by many biochemical pathways. The energy output of full healthy mitochondrial function can be predicted exactly by a complicated theoretical argument, but this argument is not straightforward, as most energy is consumed by the brain and is not easily measurable. == Diagnosis == Mitochondrial diseases are usually detected by analysing muscle samples, where the presence of these organelles is higher. The most common tests for the detection of these diseases are: Southern blot to detect large deletions or duplications Polymerase chain reaction and specific mutation testing Sequencing == Treatments == Although research is ongoing, treatment options are currently limited; vitamins are frequently prescribed, though the evidence for their effectiveness is limited. Pyruvate has been proposed in 2007 as a treatment option. N-acetyl cysteine reverses many models of mitochondrial dysfunction. === Mood disorders === In the case of mood disorders, specifically bipolar disorder, it is hypothesized that N-acetyl-cysteine (NAC), acetyl-L-carnitine (ALCAR), S-adenosylmethionine (SAMe), coenzyme Q10 (CoQ10), alpha-lipoic acid (ALA), creatine monohydrate (CM), and melatonin could be potential treatment options. === Gene therapy prior to conception === Mitochondrial replacement therapy (MRT), where the nuclear DNA is transferred to another healthy egg cell leaving the defective mitochondrial DNA behind, is an IVF treatment procedure. Using a similar pronuclear transfer technique, researchers at Newcastle University led by Douglass Turnbull successfully transplanted healthy DNA in human eggs from women with mitochondrial disease into the eggs of women donors who were unaffected. In such cases, ethical questions have been raised regarding biological motherhood, since the child receives genes and gene regulatory molecules from two different women. Using genetic engineering in attempts to produce babies free of mitochondrial disease is controversial in some circles and raises important ethical issues. A male baby was born in Mexico in 2016 from a mother with Leigh syndrome using MRT. In September 2012 a public consultation was launched in the UK to explore the ethical issues involved. Human genetic engineering was used on a small scale to allow infertile women with genetic defects in their mitochondria to have children. In June 2013, the United Kingdom government agreed to develop legislation that would legalize the 'three-person IVF' procedure as a treatment to fix or eliminate mitochondrial diseases that are passed on from mother to child. The procedure could be offered from 29 October 2015 once regulations had been established. Embryonic mitochondrial transplant and protofection have been proposed as a possible treatment for inherited mitochondrial disease, and allotopic expression of mitochondrial proteins as a radical treatment for mtDNA mutation load. 
In June 2018 Australian Senate's Senate Community Affairs References Committee recommended a move towards legalising Mitochondrial replacement therapy (MRT). Research and clinical applications of MRT were overseen by laws made by federal and state governments. State laws were, for the most part, consistent with federal law. In all states, legislation prohibited the use of MRT techniques in the clinic, and except for Western Australia, research on a limited range of MRT was permissible up to day 14 of embryo development, subject to a license being granted. In 2010, the Hon. Mark Butler MP, then Federal Minister for Mental Health and Ageing, had appointed an independent committee to review the two relevant acts: the Prohibition of Human Cloning for Reproduction Act 2002 and the Research Involving Human Embryos Act 2002. The committee's report, released in July 2011, recommended the existing legislation remain unchanged Currently, human clinical trials are underway at GenSight Biologics (ClinicalTrials.gov # NCT02064569) and the University of Miami (ClinicalTrials.gov # NCT02161380) to examine the safety and efficacy of mitochondrial gene therapy in Leber's hereditary optic neuropathy. == Epidemiology == About 1 in 4,000 children in the United States will develop mitochondrial disease by the age of 10 years. Up to 4,000 children per year in the US are born with a type of mitochondrial disease. Because mitochondrial disorders contain many variations and subsets, some particular mitochondrial disorders are very rare. The average number of births per year among women at risk for transmitting mtDNA disease is estimated to approximately 150 in the United Kingdom and 800 in the United States. == History == The first pathogenic mutation in mitochondrial DNA was identified in 1988; from that time to 2016, around 275 other disease-causing mutations were identified. == Notable cases == Notable people with mitochondrial disease include: Mattie Stepanek, a poet, peace advocate, and motivational speaker who had dysautonomic mitochondrial myopathy, and who died at age 13. Rocco Baldelli, a coach and former center fielder in Major League Baseball who had to retire from active play at age 29 due to mitochondrial channelopathy. Charlie Gard, a British boy who had mitochondrial DNA depletion syndrome; decisions about his care were taken to various law courts. Charles Darwin, a nineteenth century naturalist who suffered from a disabling illness, is speculated to have MELAS syndrome. == References == == External links == International Mito Patients (IMP)
Wikipedia/Mitochondrial_dysfunction
The homeostatic model assessment (HOMA) is a method used to quantify insulin resistance and beta-cell function. It was first described under the name HOMA by Matthews et al. in 1985. == Derivation == The HOMA authors used data from physiological studies to develop mathematical equations describing glucose regulation as a feedback loop. They published computer software that solves the equations, so that insulin resistance and β-cell function can be estimated from fasting glucose and insulin levels. They also published an equation (see below) that gave approximately the same answers as an early version of the computer software. The computer model has since been improved to a HOMA2 model to better reflect human physiology and recalibrated to modern insulin assays. In this updated version it is possible to determine insulin sensitivity and β-cell function from paired fasting plasma glucose and radioimmunoassay insulin, specific insulin, or C-peptide concentrations. The authors recommend the computer software be used wherever possible. == Notes == The HOMA model was originally designed as a special case of a more general structural (HOMA-CIGMA) model that includes the continuous infusion of glucose with model assessment (CIGMA) approach; both techniques use mathematical equations to describe the functioning of the major effector organs influencing glucose/insulin interactions. The approximating equations of the early model used a fasting plasma sample and were derived from the insulin-glucose product divided by a constant, with the constants chosen so that normal-weight, healthy subjects under 35 years of age have 100% β-cell function and an insulin resistance of 1: IR = (glucose × insulin) / 22.5 and %β = (20 × insulin) / (glucose − 3.5), where IR is insulin resistance and %β is the β-cell function (more precisely, an index for glucose tolerance, i.e. a measure of the ability to counteract the glucose load). Insulin is given in μU/mL and glucose in mmol/L; both are fasting values. This model correlated well with estimates using the euglycemic clamp method (r = 0.88). The authors have tested HOMA and HOMA2 extensively against other measures of insulin resistance (or its reciprocal, insulin sensitivity) and β-cell function. The approximation formulae above relate to HOMA and are crude estimates of the model near normal levels of glucose and insulin in man. The actual calculated HOMA2 compartmental model is published and is available online. == See also == QUICKI SPINA-GBeta SPINA-GR == References == == External links == iHOMA - Nathan R Hill - University of Oxford iHOMA2 — Nuffield Department of Primary Care Health Sciences, University of Oxford
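As a minimal numeric illustration of the approximating equations in the Notes section above (the simple HOMA1 formulas, not the recalibrated HOMA2 computer model the authors recommend), the sketch below computes HOMA-IR and HOMA-%β from fasting values; the example inputs are invented.

```python
# HOMA1 approximating equations (Matthews et al., 1985).
# Fasting glucose in mmol/L, fasting insulin in uU/mL (= mU/L).
# These are crude estimates near normal glucose/insulin levels; the HOMA2
# computer model is recommended for actual research or clinical use.

def homa_ir(glucose_mmol_l: float, insulin_uu_ml: float) -> float:
    """Insulin resistance index; ~1.0 for a young, normal-weight, healthy adult."""
    return glucose_mmol_l * insulin_uu_ml / 22.5

def homa_beta(glucose_mmol_l: float, insulin_uu_ml: float) -> float:
    """Beta-cell function as a percentage; ~100% for a healthy adult."""
    return 20.0 * insulin_uu_ml / (glucose_mmol_l - 3.5)

# Invented example: fasting glucose 4.5 mmol/L (81 mg/dL), insulin 5.0 uU/mL.
g, i = 4.5, 5.0
print(f"HOMA-IR = {homa_ir(g, i):.2f}")     # 1.00
print(f"HOMA-%B = {homa_beta(g, i):.0f}%")  # 100%
```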
Wikipedia/Homeostatic_model_assessment
Intensive insulin therapy or flexible insulin therapy is a therapeutic regimen for diabetes mellitus treatment. This newer approach contrasts with conventional insulin therapy. Rather than minimize the number of insulin injections per day (a technique which demands a rigid schedule for food and activities), the intensive approach favors flexible meal times with variable carbohydrate as well as flexible physical activities. The trade-off is the increase from 2 or 3 injections per day to 4 or more injections per day, which was considered "intensive" relative to the older approach. In North America in 2004, many endocrinologists prefer the term "flexible insulin therapy" (FIT) to "intensive therapy" and use it to refer to any method of replacing insulin that attempts to mimic the pattern of small continuous basal insulin secretion of a working pancreas combined with larger insulin secretions at mealtimes. The semantic distinction reflects changing treatment. == Rationale == Long-term studies like the UK Prospective Diabetes Study (UKPDS) and the Diabetes control and complications trial (DCCT) showed that intensive insulin therapy achieved blood glucose levels closer to non-diabetic people and that this was associated with reduced frequency and severity of blood vessel damage. Damage to large and small blood vessels (macro- and microvascular disease) is central to the development of complications of diabetes. This evidence convinced most physicians who specialize in diabetes care that an important goal of treatment is to make the biochemical profile of the diabetic patient (blood lipids, HbA1c, etc.) as close to the values of non-diabetic people as possible. This is especially true for young patients with many decades of life ahead. == General description == A working pancreas continually secretes small amounts of insulin into the blood to maintain normal glucose levels, which would otherwise rise from glucose release by the liver, especially during the early morning dawn phenomenon. This insulin is referred to as basal insulin secretion, and constitutes almost half the insulin produced by the normal pancreas. Bolus insulin is produced during the digestion of meals. Insulin levels rise immediately as we begin to eat, remaining higher than the basal rate for 1 to 4 hours. This meal-associated (prandial) insulin production is roughly proportional to the amount of carbohydrate in the meal. Intensive or flexible therapy involves supplying a continual supply of insulin to serve as the basal insulin, supplying meal insulin in doses proportional to nutritional load of the meals, and supplying extra insulin when needed to correct high glucose levels. These three components of the insulin regimen are commonly referred to as basal insulin, bolus insulin, and high glucose correction insulin. === Two common regimens: pens, injection ports, and pumps === One method of intensive insulinotherapy is based on multiple daily injections (sometimes referred to in medical literature as MDI). Meal insulin is supplied by injection of rapid-acting insulin before each meal in an amount proportional to the meal. Basal insulin is provided as a once or twice daily injection of dose of a long-acting insulin. In an MDI regimen, long-acting insulins are preferred for basal use. An older insulin used for this purpose is ultralente, and beef ultralente in particular was considered for decades to be the gold standard of basal insulin. 
Long-acting insulin analogs such as insulin glargine (brand name Lantus, made by Sanofi-Aventis) and insulin detemir (brand name Levemir, made by Novo Nordisk) are also used, with insulin glargine used more than insulin detemir. Rapid-acting insulin analogs such as lispro (brand name Humalog, made by Eli Lilly and Company), aspart (brand name Novolog/Novorapid, made by Novo Nordisk), and glulisine (brand name Apidra, made by Sanofi-Aventis) are preferred by many clinicians over older regular insulin for meal coverage and high correction. Many people on MDI regimens carry insulin pens to inject their rapid-acting insulins instead of traditional syringes. Some people on an MDI regimen also use injection ports such as the I-port to minimize the number of daily skin punctures. The other method of intensive/flexible insulin therapy is an insulin pump. It is a small mechanical device about the size of a deck of cards. It contains a syringe-like reservoir with about three days' insulin supply. This is connected by thin, disposable, plastic tubing to a needle-like cannula inserted into the patient's skin and held in place by an adhesive patch. The infusion tubing and cannula must be removed and replaced every few days. An insulin pump can be programmed to infuse a steady amount of rapid-acting insulin under the skin. This steady infusion is termed the basal rate and is designed to supply the background insulin needs. Each time the patient eats, he or she must press a button on the pump to deliver a specified dose of insulin to cover that meal. Extra insulin is also given the same way to correct a high glucose reading. Although current pumps can include a glucose sensor, they cannot automatically respond to meals or to rising or falling glucose levels. Both MDI and pumping can achieve similarly excellent glycemic control. Some people prefer injections because they are less expensive than pumps and do not require the wearing of a continually attached device. However, the clinical literature is clear that patients whose basal insulin requirements do not vary much throughout the day, or who do not require dosage precision finer than 0.5 IU, are much less likely to realize a significant advantage from pump therapy. Another perceived advantage of pumps is freedom from syringes and injections; however, inserting an infusion set into the subcutaneous tissue still requires a skin puncture, albeit less frequently than daily injections. Intensive/flexible insulin therapy requires frequent blood glucose checking. To achieve the best balance of blood sugar with either intensive/flexible method, a patient must check his or her glucose level with a blood glucose meter several times a day. This allows optimization of the basal insulin and meal coverage as well as correction of high glucose episodes. == Advantages and disadvantages == The two primary advantages of intensive/flexible therapy over more traditional two or three injection regimens are greater flexibility of meal times, carbohydrate quantities, and physical activities, and better glycemic control to reduce the incidence and severity of the complications of diabetes. Major disadvantages of intensive/flexible therapy are that it requires greater amounts of education and effort to achieve the goals, and that monitoring glucose four or more times a day increases the daily cost. This cost can substantially increase when the therapy is implemented with an insulin pump and/or continuous glucose monitor.
It is a common notion that more frequent hypoglycemia is a disadvantage of intensive/flexible regimens. The frequency of hypoglycemia increases with increasing effort to achieve normal blood glucose levels with most insulin regimens, but hypoglycemia can be minimized with appropriate glucose targets and control strategies. The difficulties lie in remembering to test, estimating meal size, taking the meal bolus and eating within the prescribed time, and being aware of snacks and meals that are not the expected size. When implemented correctly, flexible regimens offer greater ability to achieve good glycemic control with easier accommodation to variations of eating and physical activity. A 2020 Cochrane systematic review did not find enough evidence of reduction of cardiovascular mortality, non-fatal myocardial infarction or non-fatal stroke when comparing insulin with metformin monotherapy. == Semantics of changing care: why "flexible" is replacing "intensive" therapy == Over the last two decades, the evidence that better glycemic control (i.e., keeping blood glucose and HbA1c levels as close to normal as possible) reduces the rates of many complications of diabetes has become overwhelming. As a result, diabetes specialists have expended increasing effort to help most people with diabetes achieve blood glucose levels as close to normal as achievable. It takes about the same amount of effort to achieve good glycemic control with a traditional two or three injection regimen as it does with flexible therapy: frequent glucose monitoring and attention to the timing and amounts of meals. Many diabetes specialists no longer think of flexible insulin therapy as "intensive" or "special" treatment for a select group of patients but simply as standard care for most patients with type 1 diabetes. == Treatment devices used == The insulin pump is one device used in intensive insulinotherapy. The insulin pump is about the size of a beeper. It can be programmed to send a steady stream of insulin as basal insulin. It contains a reservoir or cartridge holding several days' worth of insulin, the tiny battery-operated pump, and the computer chip that regulates how much insulin is pumped. The infusion set is a thin plastic tube with a fine needle at the end; it carries the insulin from the pump to the infusion site beneath the skin. There are also newer "pods" which do not require tubing. The pump delivers a larger amount of insulin before meals as "bolus" doses. The insulin pump replaces insulin injections. This device is useful for people who regularly forget to inject themselves or for people who do not like injections. This machine does the injecting by replacing the slow-acting insulin for basal needs with an ongoing infusion of rapid-acting insulin. Basal insulin: the insulin that controls blood glucose levels between meals and overnight; it controls glucose in the fasting state. Boluses: the insulin that is delivered when food is eaten or to correct a high reading. Another device used in intensive insulinotherapy is the injection port. An injection port is a small disposable device, similar to the infusion set used with an insulin pump, configured to accept a syringe. Standard insulin injections are administered through the injection port. When using an injection port, the syringe needle always stays above the surface of the skin, thus reducing the number of skin punctures associated with intensive insulinotherapy. == References ==
Wikipedia/Intensive_insulin_therapy
Self-protein refers to all proteins endogenously produced by DNA-level transcription and translation within an organism of interest. This does not include proteins synthesized due to viral infection, but may include those synthesized by commensal bacteria within the intestines. Proteins that are not created within the body of the organism of interest, but nevertheless enter through the bloodstream, a breach in the skin, or a mucous membrane, may be designated as “non-self” and subsequently targeted and attacked by the immune system. Tolerance to self-protein is crucial for overall wellbeing; when the body erroneously identifies self-proteins as “non-self”, the subsequent immune response against endogenous proteins may lead to the development of an autoimmune disease. == Examples == Of note, the list provided above is not exhaustive; it does not mention every protein targeted in the autoimmune diseases listed. == Identification by the immune system == Autoimmune responses and diseases are primarily instigated by T lymphocytes that are incorrectly screened for reactivity to self-protein during cell development. During T-cell development, early T-cell progenitors first move via chemokine gradients from the bone marrow into the thymus, where T-cell receptor genes are randomly rearranged to allow for T-cell receptor generation. These T-cells have the potential to bind to anything, including self-proteins. The immune system must differentiate the T-cells that have receptors capable of binding to self versus non-self proteins; T-cells that can bind to self-proteins must be destroyed to prevent development of an autoimmune disorder. In a process known as “central tolerance”, T-cells are exposed to cortical epithelial cells that express a variety of different major histocompatibility complexes (MHC) of both class I and class II, which have the ability to bind to the T-cell receptors of CD8+ cytotoxic T-cells and CD4+ helper T-cells, respectively. The T-cells that display affinity for these MHC molecules are positively selected to continue to the second stage of development, while those that cannot bind to MHC undergo apoptosis. In the second stage, immature T-cells are exposed to a variety of macrophages, dendritic cells, and medullary epithelial cells that express self-protein on MHC class I and class II. These epithelial cells also express the transcription factor known as autoimmune regulator (AIRE); this crucial transcription factor allows the medullary epithelial cells of the thymus to express proteins that would normally be present in peripheral tissue rather than in an epithelial cell, such as insulin-like peptides and myelin-like peptides. As these epithelial cells now present a large variety of self-proteins that could be encountered across the body, the immature T-cells are tested for affinity to self-protein and self-MHC. If any T-cell has strong affinity for self-protein and self-MHC, the cell undergoes apoptosis to prevent autoimmune function. T-cells that display low/medium affinity are allowed to leave the thymus and circulate throughout the body to react to novel non-self antigen. In this manner, the body attempts to systematically destroy T-cells that could lead to autoimmunity. == References ==
Wikipedia/Self-protein
Massachusetts General Hospital (Mass General or MGH) is a teaching hospital located in the West End neighborhood of Boston, Massachusetts. It is the original and largest clinical education and research facility of Harvard Medical School/Harvard University, and houses the world's largest hospital-based research program with an annual research budget of more than $1.2 billion in 2021. It is the third-oldest general hospital in the United States with a patient capacity of 999 beds. Along with Brigham and Women's Hospital, Mass General is a founding member of Mass General Brigham, formerly known as Partners HealthCare, the largest healthcare provider in Massachusetts. == History == Founded in 1811, the original hospital was designed by the famous American architect Charles Bulfinch. It is the third-oldest general hospital in the United States; only Pennsylvania Hospital (1751) and NewYork-Presbyterian Hospital's predecessor New York Hospital (1771) are older. John Warren, Professor of Anatomy and Surgery at Harvard Medical School, spearheaded the move of the medical school to Boston. Warren's son, John Collins Warren, a graduate of the University of Edinburgh Medical School, along with James Jackson, led the efforts to start the Massachusetts General Hospital, which was initially proposed in 1810 by Rev. John Bartlett, the Chaplain of the Almshouse in Boston. Because all those who had sufficient money were cared for at home, Massachusetts General Hospital, like most hospitals that were founded in the 19th century, was intended to care for the poor. A 30-year-old sailor was the first patient admitted to the hospital on September 3, 1821. During the mid-to-late 19th century, Harvard Medical School was located adjacent to Massachusetts General Hospital. Walter J. Dodd established the radiology department at the hospital. From just after the discovery of X-rays in 1895 until his early death in 1916, from metastatic disease caused by multiple radiation-induced cancers, he oversaw the radiology department. He also underwent over 50 surgical procedures at the hospital to treat his radiation injuries, from skin grafts to amputations. The first American hospital social workers were based in the hospital. The hospital's work with developing specialized computer software systems for medical use in the 1960s led to the development of the MUMPS programming language, which stands for "Massachusetts General Hospital Utility Multi-Programming System", an important programming language and database system heavily used in medical applications such as patient records and billing. A major patient database system called File Manager, which was developed by the Veterans Administration (now the Department of Veterans Affairs), was created using this language. === Early use of anesthesia === It was in the Ether Dome of MGH in October 1846 that a local dentist, William Thomas Green Morton, was invited to perform a public demonstration of the administration of inhaled ether to produce insensibility to pain during surgery. Several years prior, Dr. Crawford Long of Danielsville, Georgia had given ether for surgery, but his work was unknown outside Georgia until he published his experience in 1849. On 16 October 1846, after administration of ether by Morton, MGH Chief of Surgery, John Collins Warren, painlessly removed a tumor from the neck of a local printer, Edward Gilbert Abbott.
Upon completion of the procedure, which was without screaming or restraint, the usually skeptical Warren reportedly quipped, "Gentlemen, this is no humbug." News of this "anesthesia" invention traveled around the world within months. A reenactment of the Ether Dome event was painted in 2000 by artists Warren and Lucia Prosperi. They used then-current MGH staff to pose as their counterparts from 1846. The Ether Dome still exists and is open to the public. An anesthesia department was established at the MGH in 1936 under the leadership of Henry Knowles Beecher. === First successful replantation of a severed limb === On 23 May 1962, under the direction of Ronald A. Malt, a team of surgeons successfully accomplished the first replantation of a completely severed limb. While attempting to hitch a ride on the back of a freight train, Everett Knowles (1950–2016), aged 12 at the time, hit an abutment when the train lurched, severing his arm completely at the shoulder. He and his arm were rushed to MGH, where the 30-year-old Malt led the team of surgeons. Some doctors prepared Everett for surgery, while others worked on the separated arm. First, they rejoined the "chaotically mangled blood vessels, then the bone and finally the skin". In the time since the accident, the arm had grown a "deathly gray", but grew steadily pink as the surgery progressed and blood vessels were reattached. The nerves would be reconnected in a later surgery. "All we did," said the modest Dr. Malt, "was apply techniques we've known about for a long time and simply never had occasion to correlate before...The astonishing thing was not the newness of the operation but the teamwork—the way 12 doctors with expert skills, as distinguished a collection of authorities as you could find anywhere, were willing to stand by and feed the incomparable extent of their knowledge to me, for no gain other than to know they had contributed." In April 2019, MGH received a $200 million gift from Cambridge entrepreneur Phillip "Terry" Ragon to endow a permanent vaccine research center. This gift is the largest in the hospital's history and is in addition to the $100 million gift he previously gave the hospital. The center is currently testing an HIV vaccine in South Africa. == Facilities and current operations == The main MGH campus is located at 55 Fruit Street in Boston, Massachusetts. It has expanded into an area formerly known as the West End, adjacent to the Charles River and Beacon Hill. The hospital handles around 1.5 million outpatient visits each year at its main campus, as well as at its seven satellite facilities in Back Bay, Charlestown, Chelsea, Everett, Revere, Waltham, and Danvers. With more than 25,000 employees, the hospital is the largest non-governmental employer in Boston. The hospital has 1,011 beds and admits around 50,000 patients each year. The surgical staff performs over 34,000 operations yearly. The obstetrics service handles over 3,800 births each year. The Massachusetts General Hospital Trauma Center is the oldest and largest American College of Surgeons-verified Level One Trauma Center in New England, evaluating and treating over 2,600 trauma patients per year. Architect Hisham N. Ashkouri, working in conjunction with Hoskins Scott Taylor and Partners, provided the space designs and schematics for the pediatrics, neonatal intensive care, and in-patient related floors, as well as the third-floor surgical suites and support facilities.
In the fall of 2004, the Yawkey Center for Outpatient Care (named for Jean R. Yawkey) opened. This 380,000-square-foot (35,000 m2) ten-floor facility is the largest and most comprehensive outpatient building in New England. In 2011, the Lunder Building, a 530,000-square-foot (49,000 m2), 14-floor building opened. The building houses three floors of operating rooms, an expanded emergency room, radiation oncology suites, inpatient neurology and neurosurgery floors, and inpatient oncology floors, all of which increased the inpatient capacity by 150 beds. In 2022, the state approved a more than $2 billion expansion proposal from MGB, which included a $1.9 billion expansion of MGH with a new clinical building on Cambridge Street. This building will house 482 inpatient beds. After utilizing many of these beds to convert other parts of the hospital from 2-bed to 1-bed rooms, the hospital will net 94 new inpatient beds. The project is expected to be fully completed in 2030. === Transportation === The closest MBTA stop to the main campus is Charles/MGH on the Red Line. On March 27, 2007, the renovated Charles/MGH station opened, with improvements including handicap-accessible elevators. There are five main food service areas for the general public on the MGH campus. They include the Eat Street Cafe in the lower level of the Ellison Building, the Blossom Street Cafe in the Cox lobby, Coffee Central in the White lobby, Tea Leaves and Coffee Beans in the Wang Ambulatory Care Center, and Coffee South in the Yawkey outpatient center. == Massachusetts General Hospital for Children == Massachusetts General Hospital for Children (MGHfC) is a pediatric acute care children's teaching hospital located in Boston, Massachusetts. The hospital has an estimated 100 pediatric beds and is affiliated with Harvard Medical School. The hospital is a member of Mass General Brigham and is the only children's hospital in the network. The hospital provides comprehensive pediatric specialties and subspecialties to patients aged 0–21 throughout Boston and the wider Massachusetts area. Massachusetts General Hospital for Children also sometimes treats adults who require pediatric care. The hospital is directly attached to Massachusetts General Hospital and a Ronald McDonald House of New England family room is on site. The hospital has an American Academy of Pediatrics-verified level III neonatal intensive care unit with a capacity of 18 bassinets. The hospital also has a 14-bed pediatric intensive care unit for critical pediatric patients aged 0–21. In 2020, amidst the COVID-19 pandemic, the hospital converted its PICU into an adult ICU to help with surge capacity for COVID-19. Patients who had been in the PICU were transferred to the Floating Hospital for Children and Boston Children's Hospital for treatment. === Awards === As of 2021, Massachusetts General Hospital for Children has placed nationally in 5 ranked pediatric specialties on U.S. News & World Report. == The Mass General Research Institute (MGRI) == Massachusetts General Hospital houses the largest hospital-based research program in the United States, the Mass General Research Institute, with an annual research budget of over $1 billion in 2019. MGRI received the 10th-most funding from the National Institutes of Health in 2018, with approximately $500 million going to support 959 awards. The Mass General Research Institute was launched in 2015 as a formalized way to support, promote, and guide research at Massachusetts General Hospital.
Research at MGRI takes place in over 30 departments, centers, and institutes across the hospital. The Institute, in conjunction with clinical staff based in the hospital, is home to fundamental research labs investigating the basic building blocks of life as well as a clinical research program with approximately 1,200 active clinical trials. The hospital has six thematic research centers: the Center for Systems Biology, the Center for Regenerative Medicine, the Center for Genomic Medicine, the Wellman Center for Photomedicine, the Center for Computational and Integrative Biology, and the Ragon Institute of MGH, MIT and Harvard. Notable scientists at MGH include Jack Szostak, PhD, 2009 winner of the Nobel Prize in Physiology or Medicine, Rakesh Jain, PhD, a 2015 recipient of the National Medal of Science, and Gary Ruvkun, PhD, winner of the 2014 Wolf Prize in Medicine, the 2014 Gruber Prize in Genetics, and the 2014 Breakthrough Prize in Life Sciences. In 2019, 55 scientists from MGH were listed in Clarivate Analytics' Web of Science annual Highly Cited Researchers Report. There are 23 MGH researchers in the National Academy of Medicine (some are listed under their Harvard Medical School affiliation), and four MGH researchers in the National Academy of Sciences. A number of notable medications have also resulted from research at Mass General. == Mass General Cancer Center == The Mass General Cancer Center is a center for cancer treatment within the broader Mass General Hospital system, located at 55 Fruit Street in Boston, Massachusetts. It was formally established in 1989 as a distinct part of the Mass General Hospital system, and remains active today. The center was one of only 53 locations to be designated a National Cancer Institute Comprehensive Cancer Center. As part of the Mass General Hospital system, the Cancer Center extends throughout New England and has several community locations where patients can receive treatment. It continues to expand, and in September 2023 received a donation from philanthropists Jason and Keely Krantz to establish the Krantz Family Center for Cancer Research. The gift is the largest individual donation in the Center's 34-year history, although the amount has not been disclosed. === Second opinions === The hospital offers a global second opinion service in cooperation with Grand Rounds. == Affiliated institutions == Massachusetts General Hospital is affiliated with Harvard Medical School and is its original and largest teaching hospital. Together they form an academic health science center. In February 2009, the Phillip T. and Susan M. Ragon Institute of immunology was founded to bolster research into creating vaccines and other therapies for acquired immune system conditions, chiefly AIDS. It was made possible by a $100 million gift over ten years, and represents the largest single donation made to MGH. The Recovery Research Institute was created in 2013 by John F. Kelly, the first endowed professor of Addiction Medicine at Harvard Medical School. The institute is a part of the Massachusetts General Hospital Department of Psychiatry and published the National Recovery Study, the first-ever nationally representative study on the number of Americans in recovery from alcohol or other drug use. The institute also created the Addictionary, the first-ever glossary of addiction-related terms, with a system of alerts for stigmatized terminology.
MGH is affiliated with the Dana–Farber Cancer Institute through Dana-Farber/Partners Cancer Care and the Dana–Farber/Harvard Cancer Center. The hospital is also affiliated with Project Pinball Charity. In 2015, MGH Home Base Program became a founding partner of the Warrior Care Network health system focused on treating post-traumatic stress disorder (PTSD) in veterans, along with partners Emory Healthcare, Rush University Medical Center, UCLA Health and Wounded Warrior Project. Though it has its own chief of psychiatry and a top-ranking psychiatry department, MGH is closely affiliated with nearby McLean Hospital, a psychiatric hospital also affiliated with Harvard Medical School. === Educational units === Massachusetts General Hospital Academy; Massachusetts General Hospital Psychiatry Academy; MGH Institute of Health Professions (in partnership with Harvard University). == Awards and recognition == === Nobel laureates === There have been fourteen Nobel laureates who have either worked or trained at MGH: 1934, George R. Minot, MD; 1947, Carl F. Cori, PhD; 1953, Fritz A. Lipmann, MD, PhD; 1972, Gerald M. Edelman, MD, PhD; 1985, Michael S. Brown, MD, and Joseph L. Goldstein, MD; 1989, J. Michael Bishop, MD; 1990, Joseph Edward Murray, MD, and Donnall Thomas, MD; 1998, Ferid Murad, MD, PhD; 2009, Jack W. Szostak, PhD; 2011, Ralph Steinman, MD; 2012, Robert Lefkowitz, MD; 2024, Gary Ruvkun, PhD. === Rankings === In 2015, MGH was named the number one hospital in the United States by U.S. News & World Report and was nationally ranked in 16 specialties. In 2012, MGH was named the number one hospital in the United States by U.S. News & World Report. In 2011, MGH was named the second-best hospital in the United States by U.S. News & World Report. MGH consistently ranks as one of the country's top hospitals in U.S. News & World Report. In 2011, MGH was also ranked as one of the top three hospitals in the country for Diabetes & Endocrinology; Ear, Nose & Throat; Neurology & Neurosurgery; Ophthalmology; Orthopedics; and Psychiatry. In August 2011, Becker's Hospital Review listed MGH as number 12 on its list of the 100 'Top Grossing Hospitals' in America, with $5.64 billion in gross revenue. In 2003, MGH was named the state's first Magnet hospital by the American Nurses Credentialing Center, a subsidiary of the American Nurses Association. Magnet recognition represents the highest honor awarded for nursing excellence. == Controversies == In 1972, MGH received criticism from activists and legislators for its role in conducting a study of the use of amygdalotomy to reduce violence in individuals who received the procedure. This study came after significant pressure on medical practitioners to stop using invasive procedures to try to alter the behavior of patients, and was denounced as "a new form of lobotomy". Although the study was ended before any surgery was conducted on incarcerated people, MGH was simultaneously criticized for conducting genetic and fingerprint studies of people incarcerated at MCI–Cedar Junction (known as MCI–Walpole at the time), Bridgewater State Hospital, and MCI–Framingham in an attempt to discover markers for "criminal" behavior. This discredited science is often associated with attempts at the time to pathologize and incarcerate black people as a response to the Black Liberation Movement. In May 2015, a former MGH physician filed a lawsuit under seal alleging that at least five orthopedic surgeons endangered patient safety by keeping patients under anesthesia longer than necessary while the surgeons performed simultaneous surgeries.
That year, MGH fired Dr. Dennis Burke after he spoke to The Boston Globe about the dual-booking practice. In 2019, MGH paid $13 million and agreed to improve safety practices to settle Burke's wrongful termination suit. Also in 2019, MGH paid $5.1 million to settle a medical malpractice lawsuit involving a concurrent surgery performed on former Boston Red Sox baseball team pitcher Bobby Jenks. Dr. Lisa Wollman refiled her lawsuit in June 2017 under the federal False Claims Act, citing concerns that the hospital was driven by economic benefit and was keeping patients unaware of the practice of concurrent surgeries. Wollman's attorney claimed that Medicare and Medicaid were being defrauded because they require that surgeons be present for all "critical portions" of the surgery in order to be compensated. MGH settled the lawsuit in 2022 for $14.9 million, including reimbursement for the disputed government payments, and agreed to get specific consent for the practice from patients. In June 2019, approximately 10,000 patients participating in research studies at MGH had their names, dates of birth, diagnoses, tests, medical record numbers, and medical histories exposed in a data breach by "an unauthorized third party". The incident did not become public until August 2019. == See also == Proto (magazine) Schwartz Center for Compassionate Healthcare == References == == External links == Official website
Wikipedia/Mallinckrodt_General_Clinical_Research_Center
Idiopathic orbital inflammatory (IOI) disease refers to a marginated mass-like enhancing soft tissue involving any area of the orbit. It is the most common painful orbital mass in the adult population, and is associated with proptosis, cranial nerve palsy (Tolosa–Hunt syndrome), uveitis, and retinal detachment. Idiopathic orbital inflammatory syndrome, also known as orbital pseudotumor, was first described by Gleason in 1903 and by Busse and Hochheim. It was then characterized as a distinct entity in 1905 by Birch-Hirschfeld. It is a benign, nongranulomatous orbital inflammatory process characterized by extraocular orbital and adnexal inflammation with no known local or systemic cause. It is a diagnosis of exclusion, made once neoplasm, primary infection, and systemic disorders have been ruled out. Once diagnosed, it is characterized by its chronicity, anatomic location, or histologic subtype. Idiopathic orbital inflammation has a varied clinical presentation depending on the involved tissue. It can range from a diffuse inflammatory process to a more localized inflammation of muscle, lacrimal gland, or orbital fat. Its former name, orbital pseudotumor, derives from its resemblance to a neoplasm. However, histologically it is characterized by inflammation. Although a benign condition, it may present with an aggressive clinical course with severe vision loss and oculomotor dysfunction. == Signs and symptoms == Affected individuals typically present with sudden painful proptosis, redness, and edema. Proptosis will vary according to the degree of inflammation, fibrosis, and mass effect. Occasionally, ptosis, chemosis, motility dysfunction (ophthalmoplegia), and optic neuropathy are seen. In the setting of extensive sclerosis there may be restriction, compression, and destruction of orbital tissue. Symptoms usually develop acutely (hours to days), but have also been seen to develop over several weeks or even months. Malaise, headaches, and nausea may accompany these symptoms. Other unusual presentations described include cystoid macular edema, temporal arteritis, and cluster headaches. Pediatric IOI accounts for about 17% of cases of idiopathic orbital inflammation. The most common sign is proptosis, but redness and pain are also experienced. Presentation varies slightly compared to adults, with bilateral involvement, uveitis, disc edema, and tissue eosinophilia being more common in this population. The presence of uveitis generally implies a poor outcome for pediatric IOI. Bilateral presentation may have a higher incidence of systemic disease. == Pathogenesis == The exact cause of IOI is unknown, but infectious and immune-mediated mechanisms have been proposed. Several studies have described cases where onset of orbital pseudotumor was seen simultaneously with, or several weeks after, upper respiratory infections. Another study by Wirostko et al. proposes that organisms resembling Mollicutes cause orbital inflammation by destroying the cytoplasmic organelles of parasitized cells. Orbital pseudotumor has also been observed in association with Crohn's disease, systemic lupus erythematosus, rheumatoid arthritis, diabetes mellitus, myasthenia gravis, and ankylosing spondylitis, all of which strengthen the basis of IOI being an immune-mediated disease. Response to corticosteroid treatment and immunosuppressive agents also supports this idea. Trauma has also been seen to precede some cases of orbital pseudotumor.
However, one study by Mottow-Lippe, Jakobiec, and Smith suggests that the release of circulating antigens caused by local vascular permeability triggers an inflammatory cascade in the affected tissues. Although these mechanisms have been postulated as possible causes of IOI, their exact nature and relationships to the condition still remain unclear. === Histopathology === The histopathology of idiopathic orbital inflammation is described as nondiagnostic and diverse. It includes diverse polymorphous infiltrate, atypical granulomatous inflammation, tissue eosinophilia, and infiltrative sclerosis. Although several classification schemes have been postulated, none have been definitively accepted due to the absence of distinct differences among the histopathological types as to the signs, symptoms, clinical course, and outcome. == Diagnosis == A differential diagnosis includes lymphoproliferative lesions, thyroid ophthalmopathy, IgG4-related ophthalmic disease, sarcoidosis, granulomatosis with polyangiitis, orbital cellulitis and carotid-cavernous fistula. === Imaging === The best imaging modality for idiopathic orbital inflammatory disease is contrast-enhanced, thin-section magnetic resonance imaging with fat suppression. The best diagnostic clue is a poorly marginated, mass-like enhancing soft tissue involving any area of the orbit. Overall, radiographic features for idiopathic orbital inflammatory syndrome vary widely. They include inflammation of the extraocular muscles (myositis) with tendinous involvement, orbital fat stranding, lacrimal gland inflammation and enlargement (dacryoadenitis), involvement of the optic sheath complex, uvea, and sclera, a focal intraorbital mass or even diffuse orbital involvement. Bone destruction and intracranial extension are rare, but have been reported. Depending on the area of involvement, IOI may be categorized as myositic; lacrimal; anterior (involvement of the globe and retrobulbar orbit); diffuse (multifocal intraconal involvement with or without an extraconal component); or apical (involving the orbital apex, with intracranial involvement). Tolosa–Hunt syndrome is a variant of orbital pseudotumor in which there is extension into the cavernous sinus through the superior orbital fissure. Another disease variant is sclerosing pseudotumor, which more often presents bilaterally and may extend into the sinuses. CT findings: On non-enhanced CT, one may observe a lacrimal, extraocular muscle, or other orbital mass. It may be focal or infiltrative and will have poorly circumscribed soft tissue. On contrast-enhanced CT there is moderate diffuse irregularity and enhancement of the involved structures. A dynamic CT will show an attenuation increase in the late phase, contrary to lymphoma, where there is an attenuation decrease. Bone CT will rarely show bone remodeling or erosion, as mentioned above. MR findings: On MR examination there is hypointensity on T1-weighted imaging (T1WI), particularly in sclerosing disease. T1WI with contrast will show moderate to marked diffuse irregularity and enhancement of involved structures. T2-weighted imaging with fat suppression will show iso- or slight hyperintensity compared to muscle. There is also decreased signal intensity compared to most orbital lesions due to cellular infiltrate and fibrosis. In chronic disease or the sclerosing variant, T2WI with FS will show hypointensity (due to fibrosis). Findings on STIR (short tau inversion recovery) are similar to those on T2WI with FS.
In Tolosa–Hunt syndrome, findings include enhancement and fullness of the anterior cavernous sinus and superior orbital fissure on T1WI with contrast, while MRA may show narrowing of the cavernous sinus segment of the internal carotid artery (ICA). Ultrasonographic findings: On grayscale ultrasound there is reduced reflectivity, regular internal echoes, and weak attenuation, somewhat similar to lymphoproliferative lesions. == Treatment == Corticosteroids remain the main treatment modality for IOI. There is usually a dramatic response to this treatment, and this response is often viewed as pathognomonic for the disease. Although the response is usually quick, many agree that corticosteroids should be continued on a tapering basis to avoid breakthrough inflammation. Although many patients respond to corticosteroid treatment alone, there are several cases in which adjuvant therapy is needed. While many alternatives are available, there is no particular well-established protocol to guide adjuvant therapy. The available options include surgery, alternative corticosteroid delivery routes, radiation therapy, non-steroidal anti-inflammatory drugs, cytotoxic agents (chlorambucil, cyclophosphamide), corticosteroid-sparing immunosuppressants (methotrexate, cyclosporine, azathioprine), intravenous immunoglobulin, plasmapheresis, and biologic treatments (such as TNF-α inhibitors). == Epidemiology == IOI, or orbital pseudotumor, is the second most common cause of exophthalmos after Graves' orbitopathy and the third most common orbital disorder after thyroid orbitopathy and lymphoproliferative disease, accounting for 5–17.6% of orbital disorders. There is no age, sex, or race predilection, but it is most frequently seen in middle-aged individuals. Pediatric cases account for about 17% of all cases of IOI. == See also == Graves' ophthalmopathy IgG4-related ophthalmic disease == References ==
Wikipedia/Idiopathic_orbital_inflammatory_disease
The signs and symptoms of Graves' disease generally result from the direct and indirect effects of hyperthyroidism, with the main exceptions being Graves' ophthalmopathy, goitre, and pretibial myxedema, which are caused by the autoimmune processes of the disease itself. These clinical manifestations can involve virtually every system in the body. The mechanisms that mediate these effects are not well understood. The severity of the signs and symptoms of hyperthyroidism is related to the duration of the disease, the magnitude of the thyroid hormone excess, and the patient's age. Although the majority of patients experience significant improvement and remission after proper medical care, health care providers should be aware of variability in individual response to hyperthyroidism and individual sensitivity to thyroid hormone fluctuations. Patients with Graves' disease can also undergo periods of hypothyroidism (inadequate production of thyroid hormone; see symptoms of hypothyroidism), due to the challenges of finding the right dosage of thyroid hormone suppression and/or supplementation. The body's need for thyroid hormone can also change over time, such as in the first months after radioactive iodine treatment (RAI). Thyroid autoimmune diseases can also be volatile, as hyperthyroidism can alternate with hypothyroidism and euthyroidism. == General symptoms == === Skin/Hair/Nails === Pretibial myxedema: Waxy, discolored induration of the skin Warm and moist skin Redness of the elbows is frequently present. It is probably the result of the combination of increased activity, an exposed part, and a hyperirritable vasomotor system. Thin and fine hair Brittle nails Plummer's nail === Nose/Sinuses === Chronic sinus infections === Neck === Goitre (enlarged thyroid): If the thyroid grows large enough, it may compress the recurrent laryngeal nerve, producing vocal cord paralysis, dysphonia, and even respiratory stridor. Compression of the sympathetic chain may result in Horner's syndrome. === Eyes === Graves' ophthalmopathy (protrusion of one or both eyes) === Respiratory/Cardiac === Shortness of breath Cardiovascular features may include hypertension and a heart rate that may be rapid or irregular in character; these may be perceived as palpitations. Less common findings include high-output heart failure, left ventricular hypertrophy, premature atrial and ventricular contractions, atrial fibrillation, congestive heart failure, angina, myocardial infarction, systemic embolization, death from cardiovascular collapse, and resistance to some drug effects (digoxin, coumadin). === Gastrointestinal === Gastrointestinal symptoms: these include increased bowel movements, but malabsorption is unusual. === Musculoskeletal === Hyperreflexia, with a rapid relaxation phase. Tremor (usually fine shaking; tremor of the outstretched fingers). In a small study of newly diagnosed hyperthyroid patients, tremor was observed in 76% of them. Some studies attribute hyperthyroid tremor to a heightened beta-adrenergic state; others suggest an increased metabolism of dopamine. Weakness or muscle weakness (especially in the large muscles of the arms and legs). The latter occurs in 60 to 80 percent of patients with untreated hyperthyroidism. Muscle weakness is rarely the chief complaint. The likelihood and degree of muscle weakness are correlated with the duration and severity of the hyperthyroid state, and muscle weakness becomes more likely after the age of 40. Muscle strength returns gradually over several months after the hyperthyroidism has been treated.
Muscle degeneration === Neurologic === A distinctly excessive reaction to all sorts of stimuli. A marked increase in fatigue, or asthenia, is often prominent. This increased weariness may be combined with hyperactivity; patients remark that they are impelled to incessant activity, which, however, causes great fatigue. Neurological seizures, neuropathy from nerve entrapment by lesions of pretibial myxedema, and hypokalemic periodic paralysis may occur. Athetosis, chorea, and corticospinal tract damage are rare. An acute thyrotoxic encephalopathy is very rare. === Endocrine === Weight loss despite normal or increased appetite. Some patients (especially younger ones) gain weight due to excessive appetite stimulation that exceeds the weight loss effect. Increased appetite. Increased sweating Heat intolerance Abnormal breast enlargement in men Elevated calcium levels in the blood (by as much as 25%), known as hypercalcaemia. This can cause stomach upset, excessive urination, and impaired kidney function. Diabetes may be activated or intensified, and its control worsened. The diabetes is ameliorated or may disappear when the thyrotoxicosis is treated. Reproductive symptoms in men may include reduced free testosterone (due to the elevation of testosterone-estrogen binding globulin level), diminished libido, erectile dysfunction and (reversible) impaired sperm production with lower mean sperm density, a high incidence of sperm abnormalities, and reduced motility of the sperm cells. Women may experience infrequent menstruation or irregular and scant menstrual flow along with difficulty conceiving, infertility and recurrent miscarriage. === Psychiatric === Insomnia === Other === Evidence of mild or severe liver disease may be found. == Effects on the skeleton == Overt hyperthyroidism caused by Graves' disease is associated with accelerated bone remodeling (resulting in increased porosity of cortical bone and reduced volume of trabecular bone). This can lead to reduced bone density and eventually osteoporosis, as well as increased fracture rates. The increased hip fracture rates later in life in turn cause excess late mortality. The changes in bone metabolism are connected with negative calcium balance, an increased excretion of calcium and phosphorus in the urine (hypercalciuria) and stool, and, rarely, hypercalcemia. In hyperthyroidism, the normal cycle duration of bone resorption of approximately 200 days is halved, and each cycle is associated with a 9.6 percent loss of mineralized bone. In hypothyroidism, cycle length approximates 700 days and is associated with a 17 percent increase in mineralized bone. The extent of the reduction in bone density in most studies is 10–20%. The clinical manifestations on bone differ depending on the age of the patient. Postmenopausal women are most sensitive to accelerated bone loss from thyrotoxicosis. Accelerated bone growth in growing children can increase ossification in the short term, but generally results in short-stature adults compared with the predicted heights. If thyrotoxicosis is treated early, bone loss can be minimized. The level of calcium in the blood can be determined by a simple blood test, and a dual-energy X-ray absorptiometry (DXA) scan can help determine patient bone density relative to the rest of the population. There are many medications that can help to rebuild bone mass and to prevent further bone loss, such as bisphosphonates.
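Bone density from a DXA scan is usually reported relative to reference populations as T-scores and Z-scores, that is, the number of standard deviations the measured bone mineral density lies from a young-adult mean and an age-matched mean, respectively. A minimal sketch of that arithmetic follows; the reference means and standard deviations are placeholders, not values from any particular densitometer or population.

def dxa_scores(bmd, young_adult_mean, young_adult_sd, age_matched_mean, age_matched_sd):
    """Return (T-score, Z-score): standard deviations from young-adult and age-matched means."""
    t_score = (bmd - young_adult_mean) / young_adult_sd
    z_score = (bmd - age_matched_mean) / age_matched_sd
    return round(t_score, 1), round(z_score, 1)

# Example with placeholder reference values (g/cm^2)
print(dxa_scores(bmd=0.85, young_adult_mean=1.00, young_adult_sd=0.12,
                 age_matched_mean=0.92, age_matched_sd=0.12))  # (-1.2, -0.6)

By the widely used WHO convention, a T-score at or below -2.5 indicates osteoporosis, and a T-score between -1.0 and -2.5 indicates osteopenia.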
Risedronate treatment has been demonstrated to help restore bone mass in osteopenia/osteoporosis associated with Graves' disease. Nevertheless, weight-bearing exercise, a balanced diet, a calcium intake of about 1500 mg a day, and adequate vitamin D remain elementary foundations. == Eye symptoms == Hyperthyroidism almost always causes general eye symptoms like dryness and irritation, regardless of the cause of the hyperthyroid state. However, these need to be distinguished from Graves' ophthalmopathy, which can only occur in patients who have Graves' disease. (It may also, rarely, be seen in Hashimoto's thyroiditis, primary hypothyroidism, and thyroid cancer). About 20–25% of patients with Graves' disease will suffer from clinically obvious Graves' ophthalmopathy, and not just from the eye signs of hyperthyroidism. Only 3 to 5% will develop severe ophthalmopathy. However, when subjected to closer inspection (e.g. by magnetic resonance imaging of the orbits) many more patients have evidence of ophthalmopathy. It is estimated that for every 100,000 persons, 16 women and 3 men develop Graves' ophthalmopathy every year. Although it is true that in most patients ophthalmopathy, goiter, and symptoms of thyrotoxicosis appear more or less coincidentally, it is also true that in certain cases eye signs may appear long before thyrotoxicosis is evident, or become worse when the thyrotoxicosis is subsiding or has been controlled by treatment. In approximately 20% of ophthalmopathy patients, ophthalmopathy appears before the onset of hyperthyroidism, in about 40% concurrently, and in about 20% in the six months after diagnosis. In the remainder, the eye disease first becomes apparent after treatment of the hyperthyroidism, more often in patients treated with radioiodine. It can sometimes be difficult to distinguish between eye symptoms due to hyperthyroidism and those due to Graves' antibodies, because the two often occur coincidentally. What can make things particularly difficult is that many patients with hyperthyroidism have lid retraction, which leads to stare and lid lag (due to contraction of the levator palpebrae muscles of the eyelids). This stare may then give the appearance of protruding eyeballs (proptosis), when none in fact exists. This subsides when the hyperthyroidism is treated. === Due to Graves' ophthalmopathy === Graves' ophthalmopathy is characterized by inflammation of the extraocular muscles, orbital fat and connective tissue. It results in the following signs, which can be extremely distressing to the patient: Most frequent are symptoms due to conjunctival or corneal irritation: burning, photophobia, tearing, pain, and a gritty or sandy sensation. Protruding eyeballs (known as proptosis and exophthalmos). Diplopia (double vision) is common. Limitation of eye movement (due to impairment of eye muscle function). Periorbital and conjunctival edema (accumulation of fluid beneath the skin around the eyes). In severe cases, the optic nerve may be compressed and acuity of vision impaired. Occasionally, loss of vision. === Due to hyperthyroidism === In the absence of Graves' ophthalmopathy, patients may demonstrate other ophthalmic symptoms and signs due to hyperthyroidism: Dry eyes (due to loss of corneal moisture). A sense of irritation, discomfort, or pain in the eyes. A tingling sensation behind the eyes or the feeling of grit or sand in the eyes. Excessive tearing that is often made worse by exposure to cold air, wind, or bright lights. Swelling or redness of the eyes.
Stare. Lid lag (von Graefe's sign). Sensitivity to light. Blurring of vision. Widened palpebral fissures. Infrequent blinking. The appearance of lid retraction. == Neuropsychological manifestations == Several studies have suggested a high prevalence of neuropsychiatric disorders and mental disorder symptoms in Graves' disease (and thyroid disease in general), which are similar to those in patients with organic brain disease. These manifestations are diverse, affecting the central and peripheral nervous systems. The vast majority of patients with hyperthyroidism meet criteria for some psychiatric disorders, and those with milder presentations are probably not entirely free of mental symptoms such as emotional lability, tension, depression and anxiety. Anxiety syndromes related to hyperthyroidism are typically complicated by major depression and cognitive decline, such as in memory and attention. Some studies contradict the psychological findings. For example, a large 2002 study found "no statistical association between thyroid dysfunction, and the presence of depression or anxiety disorder." In one study on hospitalised elderly patients, over half had cognitive impairment with either dementia or confusion. However, a controlled study on 31 Graves' disease patients found that while patients had subjective reports of cognitive deficits in the toxic phase of Graves' thyrotoxicosis, formal testing found no cognitive impairment and suggested the reported symptoms may reflect the affective and somatic manifestations of hyperthyroidism. Notably, a 2006 literature review noted methodological issues in the consistency of Graves' disease diagnostic criteria, which might explain the apparently contradictory findings. These researchers found many reports about residual complaints in patients who were euthyroid after treatment, with a high prevalence of anxiety disorders and bipolar disorder, as well as elevated scores on scales of anxiety, depression and psychological distress. In a 1992 study, a significant proportion of the 137 questioned patients with Graves' disease reported – among other things – increased crying (55%), being easily startled (53%), being tired all the time (47%), a significant decrease in social activity (46%), feelings of being out of control (45%), feelings of hopelessness (43%), loss of sense of humor (41%), loss of interest in things they formerly enjoyed (39%), and not being able to 'connect' with others (34%). Several studies point out that the severity of psychiatric symptoms could easily result in an inappropriate referral to a psychiatrist prior to the diagnosis of hyperthyroidism. Consequently, undiagnosed hyperthyroidism sometimes results in inappropriate use of psychotropic medications; prompt recognition of hyperthyroidism (or hypothyroidism) through thyroid function screening is therefore recommended in the evaluation of patients with psychiatric symptoms. Naturally, the management of patients would be improved by collaboration between an endocrinologist and a psychiatrist. Overall, reported symptoms vary from mild to severe aspects of anxiety or depression, and may include psychotic and behavioral disturbances: Varying degrees of anxiety, such as a very active mind, irritability, hyperactivity, agitation, restlessness, nervousness, distractibility and panic attacks. In addition, patients may experience vivid dreams and, occasionally, nightmares. Depressive features, such as mental impairment, memory lapses, diminished attention span, and fluctuating depression.
Emotional lability and, in some patients, hypomania. The pathological well-being (euphoria) or hyperactivity may produce a state of exhaustion, and profound fatigue or asthenia chiefly characterizes the picture. Erratic behaviour may include intermittent rage disorder and mild attention deficit disorder. Some patients become hyperirritable and combative, which can precipitate accidents or even assaultive behaviour. In more extreme cases, features of psychosis, such as delusions of persecution or delusions of reference and pressure of speech, may present themselves. Rarely, patients develop visual or auditory hallucinations or a frank psychosis, and may appear schizophrenic, lose touch with reality and become delirious. Such psychotic symptoms may not completely clear up after the hyperthyroidism has been treated. Paranoia and paranoid-hallucinatory psychosis in hyperthyroidism usually have a manic disposition, and it is therefore often not clear if the patient is experiencing a paranoid psychosis with depressive streaks, or a depression that has paranoid streaks. Treatment of hyperthyroidism typically leads to improvement in cognitive and behavioral impairments. Agitation, inattention, and frontal lobe impairment may improve more rapidly than other cognitive functions. However, several studies confirm that a substantial proportion of patients with hyperthyroidism have psychiatric disorders or mental symptoms and decreased quality of life even after successful treatment of their hyperthyroidism. === Effects on pre-existing psychiatric disorders === Patients with pre-existing psychiatric disorders will experience a worsening of their usual symptoms, as observed in several studies. A 1999 study found that Graves' disease exacerbated the symptoms of Tourette's disorder and attention-deficit hyperactivity disorder (ADHD), and pointed out that the lack of a diagnosis of the Graves' disease compromised the efficacy of the treatment of these disorders. Patients who are known to have a convulsive disorder may become more difficult to control with the usual medications, and seizures may appear in patients who have never previously manifested such symptoms. == Sub-clinical hyperthyroidism == In sub-clinical hyperthyroidism, serum TSH is abnormally low, but T4 and T3 levels fall within laboratory reference ranges. It primarily affects the skeleton and the cardiovascular system (abnormalities in other systems have also been reported), in a similar but less severe and less frequent way than overt hyperthyroidism does. It can alter cardiac function, with increased heart rate, increased left ventricular mass index, increased cardiac contractility, diastolic dysfunction, and induction of ectopic atrial beats. Long-term mild excess of thyroid hormone can thus cause impaired cardiac reserve and exercise capacity. In a large 2008 population-based study, the odds of having poorer cognitive function were greater for sub-clinical hyperthyroidism than for stroke, diabetes mellitus, and Parkinson's disease. Sub-clinical hyperthyroidism might modestly increase the risk of cognitive impairment and dementia. Subclinical hyperthyroidism in pregnancy is associated with an increased risk of pre-eclampsia, low birth weight, miscarriage and preterm birth. Propylthiouracil is the preferred treatment of hyperthyroidism (both overt and subclinical) in the first trimester of pregnancy as it is associated with fewer birth defects than methimazole.
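The biochemical definition of sub-clinical hyperthyroidism given at the start of this section (suppressed TSH with thyroid hormone levels inside, rather than above, the reference range) can be written out directly. The following is a toy sketch using placeholder reference ranges; actual ranges are laboratory-specific, units matter, and free T3 would be checked in the same way.

def thyroid_status(tsh, free_t4, tsh_range=(0.4, 4.0), ft4_range=(0.8, 1.8)):
    """Toy classification of hyperthyroid states from TSH and free T4
    (reference ranges are illustrative placeholders, not laboratory values)."""
    low_tsh = tsh < tsh_range[0]
    if low_tsh and free_t4 > ft4_range[1]:
        return "overt hyperthyroidism"
    if low_tsh and ft4_range[0] <= free_t4 <= ft4_range[1]:
        return "subclinical hyperthyroidism"
    return "not hyperthyroid on these two values alone"

print(thyroid_status(0.05, 1.2))  # suppressed TSH, free T4 within range: subclinical
print(thyroid_status(0.05, 2.6))  # suppressed TSH, elevated free T4: overt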
A possible explanation for the mental symptoms of sub-clinical thyroid disease might be found in the fact that the brain has among the highest expression of thyroid hormone receptors (THRs), and that neurons are often more sensitive than other tissues to thyroid abnormalities, including sub-clinical hyperthyroidism and thyrotoxicosis. In a 1996 survey study, respondents reported a significant decline in memory, attention, planning, and overall productivity from the period 2 years prior to the onset of Graves' symptoms to the period when hyperthyroid. Also, hypersensitivity of the central nervous system to low-grade hyperthyroidism can result in an anxiety disorder before other Graves' disease symptoms emerge. For example, panic disorder has been reported to precede Graves' hyperthyroidism by 4 to 5 years in some cases, although it is not known how frequently this occurs. However, while clinical hyperthyroidism is associated with frank neuropsychological and affective alterations, the occurrence of these alterations and their treatment in mild and sub-clinical hyperthyroidism remains a controversial issue. Regardless of the inconsistent findings, a 2007 study by Andersen et al. states that the distinction between sub-clinical and overt thyroid disease is in any case somewhat arbitrary. Sub-clinical hyperthyroidism has been reported in 63% of cases of euthyroid Graves' disease, but only in 4% of cases where Graves' disease was in remission. Subclinical hyperthyroidism has an 8% risk of converting to overt hyperthyroidism at 1 year, and a 26% risk at 5 years. == Children and adolescents == Hyperthyroidism has unique effects on growth and pubertal development in children, for example by accelerating epiphyseal maturation. In growing children, accelerated bone growth from hyperthyroidism can increase osteogenesis in the short term, but generally results in short-stature adults compared with the predicted heights. Pubertal development tends to be delayed, or slowed. Girls who have undergone menarche may develop secondary amenorrhea. Hyperthyroidism is associated with high sex hormone-binding globulin (SHBG), which may result in high serum estradiol levels in girls and testosterone levels in boys. However, unbound or free levels of these hormones are decreased. Hyperthyroidism before the age of four may cause neurodevelopmental delay. A study by Segni et al. suggests that permanent brain damage can occur as a result of the illness. Ophthalmopathic findings are more common but less severe in children (severe infiltrative exophthalmos is virtually unknown before mid-adolescence), but besides that, many of the typical clinical features of hyperthyroidism in children and adolescents are similar to those in adults. An important difference between children and adults with Graves' disease is that children are still developing, both psychologically and physiologically, and are far more dependent on their environment. The encephalopathy will have profound effects on children's developing personalities and their developing relationships with their environment. Disturbances in bodily development further complicate matters. The consequences for the development and the somatic and psychological well-being of the child can be profound and sometimes irreversible. The earlier a person is affected by thyroid disease, the more the development of personality is affected and the greater the delay relative to their potential developmental level.
The child falls behind in its cognitive, emotional and sexual growth, which, by itself, also influences its ability to process and cope with the endocrine disease. Children with hyperthyroidism tend to have greater mood swings and disturbances of behavior, as compared with adults. Their attention span decreases, they are usually hyperactive and distractible, they sleep poorly, and their school performance deteriorates. Because devastating personality and emotional changes often appear in the child or adolescent with Graves' disease, many hyperthyroid children are (similar to many adults) referred to a developmental specialist or child psychiatrist before the presence of hyperthyroidism is suspected. == Older patients == In older patients, emotional instability may be less evident, or depression may occur, and the symptoms and signs are manifestly circulatory. In many, the thyroid is not readily palpable. Symptoms such as rapid heart rate, shortness of breath on exertion, and edema may predominate. Older patients also tend to have more weight loss and less of an increase in appetite. Thus anorexia in this group is fairly frequent, as is constipation. Elderly patients may have what is called "apathetic thyrotoxicosis", a state in which they have fewer and less severe symptoms, except for weakness, depression and lethargy (making the condition even more likely to escape diagnosis). == Graves' disease and work == Considering the many signs and symptoms, the generally delayed diagnosis, and the possibility of residual complaints after treatment, it is little wonder that a significant number of people with Graves' disease have difficulty keeping their jobs. One study found that, of 303 patients successfully treated for hyperthyroidism (77% of whom had Graves' disease), 53% dealt with a lack of energy. About one-third were unable to resume their customary work, mainly due to persistent mental problems. In their 1986 study of 26 patients (10 years after successful treatment of hyperthyroidism), Perrild et al. note that four patients had been granted disability pensions on the basis of intellectual dysfunction. Between 2006 and 2008, Ponto et al. surveyed 250 Graves' disease patients. Of these, 36% were written off sick and 5% even had to take early retirement. In the same study, 34% of 400 questioned physicians reported treating patients with fully impaired earning capacity. Patients can and do recover with appropriate therapy while continuing to work, but more rapid and certain progress is made if a period away from the usual occupation can be provided. Two important considerations are adequate rest and attention to nutrition. == References ==
Wikipedia/Signs_and_symptoms_of_Graves'_disease
Zeno's Conscience (Italian: La coscienza di Zeno [la koʃˈʃɛntsa di dˈdzɛːno]) is a novel by Italian writer Italo Svevo. The main character is Zeno Cosini, and the book consists of the fictional character's memoirs, which he keeps because his psychoanalyst recommended doing so in order to overcome his illness. He writes about his father, his business, his wife, and his tobacco habit. The original English translation was published under the title Confessions of Zeno. After two novels, Una vita and Senilità, were ignored by critics and the public, and after a long period of literary silence devoted entirely to work, Svevo self-published this novel, quite different in style, in 1923. It was appreciated by a close friend of Svevo's, James Joyce, who presented the book to two French critics, Valery Larbaud and Benjamin Crémieux. The success of the novel spread to Italy, thanks to the poet Eugenio Montale. The previous novels follow a naturalistic model; Zeno's Conscience, instead, is narrated in the first person and focuses entirely on the character's thoughts and feelings. Moreover, Zeno is an unreliable narrator, since on the very first page his doctor, who publishes the diary, says that it is made of truths and lies. However, there are important similarities among Svevo's three novels. Even in the third-person narrative of Una vita and Senilità, the reader knows only the point of view of the main characters; and these characters are, or feel themselves to be, unfit for life. The titles are revealing: Una vita previously bore the title Un inetto (An Inept One); Senilità (literally, Senility) is a state of mind, a feeling of weakness in the face of everyday events and encounters. As for La coscienza di Zeno, the title's ambiguity accounts for the different ways it has been translated, not only into English but also into German (Zenos Gewissen, Zenos Bewusstsein). Zeno tells of his various failures, of feeling ill, and of looking for a cure he will never find. In the end, however, he recognizes that life itself is an illness, and that it always leads to death. == Plot summary == The novel is presented as a diary written by Zeno, published by his doctor (who claims that it is full of lies). The doctor has left a short note at the beginning, saying he had Zeno write an autobiography to help him in his psychoanalysis. The doctor has published the work as revenge for Zeno discontinuing his visits. The diary, however, does not follow chronological order; instead, it is structured in large chapters, each developing a particular theme (tobacco addiction, his father's death, the story of his marriage, and so on). Only the last chapter is a real diary, with pages referring to specific dates at the time of the First World War. Zeno first writes about his cigarette addiction and recalls the first times he smoked. In his first few paragraphs, he remembers his life as a child. One of his friends bought cigarettes for him and his brother. Soon, he steals money from his father to buy tobacco, but finally stops doing so out of shame. Eventually, he starts to smoke his father's half-smoked cigars instead. The problem with his "last cigarette" starts when he is twenty. He contracts a fever and his doctor tells him that to heal he must abstain from smoking. He decides smoking is bad for him and smokes his "last cigarette" so he can quit. However, this is not his last, and he soon becomes plagued with "last cigarettes." He attempts to quit on days of important events in his life and soon obsessively attempts to quit on the basis of the harmony of the numbers in dates. 
Each time, the cigarette fails to truly be the last. He goes to doctors and asks friends to help him give up the habit, but to no avail. He even commits himself to a clinic, but escapes. The whole theme, while objectively serious, is often treated in a humorous way. When Zeno reaches his thirties, his father's health begins to deteriorate. He starts to live closer to his father in case he dies. Zeno is very different from his father, who is a serious man, while Zeno likes to joke. For instance, when his father states that Zeno is crazy, Zeno goes to the doctor and gets an official certification that he is sane. He shows this to his father, who is hurt by the joke and becomes even more convinced that Zeno must be crazy. His father is also afraid of death, being very uncomfortable with the drafting of his will. One night, his father falls gravely ill and loses consciousness. The doctor comes and works on the patient, who is brought out of the clutches of death momentarily. Over the next few days, his father is able to get up and recovers a little of his old self. He is restless and often shifts position for comfort, even though the doctor says that staying in bed would be good for his circulation. One night, as his father tries to roll out of bed, Zeno blocks him from moving, to do as the doctor wished. His angry father then stands up and accidentally slaps Zeno in the face before dying. His father's last act will haunt Zeno until he reaches his sixties, as he is not able to tell whether it was a final punishment or just the illness taking over his father's body. His memoirs then trace how he meets his wife. When he is starting to learn about the business world, he meets his future father-in-law Giovanni Malfenti, an intelligent and successful businessman, whom Zeno admires. Malfenti has four daughters, Ada, Augusta, Alberta, and Anna, and when Zeno meets them, he decides that he wants to court Ada because of her beauty, since Alberta is quite young, he regards Augusta as too plain, and Anna is only a little girl. He is unsuccessful, and the Malfentis think that he is actually trying to court Augusta, who has fallen in love with him. He soon meets his rival for Ada's love, Guido Speier. Guido speaks perfect Tuscan (while Zeno speaks the dialect of Trieste), is handsome, and has a full head of hair (compared with Zeno's bald head). That evening, while Guido and Zeno both visit the Malfentis, Zeno proposes to Ada and she rejects him for Guido. Zeno then proposes to Alberta, who is not interested in marrying, and he is rejected by her also. Finally, he proposes to Augusta (who knows that Zeno first proposed to the other two) and she accepts, because she loves him. Very soon, the couples marry, and Zeno begins to realize that he can love Augusta. This surprises him, as his love for her does not diminish. However, he meets Carla, a poor aspiring singer, and they start an affair, with Carla thinking that Zeno does not love his wife. Meanwhile, Ada and Guido marry and Mr. Malfenti falls ill. Zeno's affection for both Augusta and Carla increases, and he has a daughter named Antonia around the time Giovanni dies. Finally, one day, Carla expresses a sudden whim to see Augusta. Zeno deceives Carla and causes her to meet Ada instead. Carla, mistaking Ada for Zeno's wife and moved by her beauty and sadness, breaks off the affair. Zeno goes on to relate the business partnership between himself and Guido. The two men set up a merchant business together in Trieste. 
They hire two workers named Luciano and Carmen (who becomes Guido's mistress) and attempt to make as much profit as possible. However, due to Guido's obsession with debts and credit as well as with the notion of profit, the company does poorly. Guido and Ada's marriage begins to crumble, as do Ada's health and beauty, owing to Morbus Basedowii (Basedow's disease). Guido fakes a suicide attempt to gain Ada's compassion, and she asks Zeno to help Guido's failing company. Guido starts playing on the Bourse (stock exchange) and loses even more money. On a fishing trip, he asks Zeno about the difference in effect between sodium veronal and veronal, and Zeno answers that sodium veronal is fatal while veronal is not. Guido's gambling on the Bourse becomes very destructive, and he finally tries to fake another suicide to gain Ada's compassion. However, his doctor and his wife do not believe him, and he dies. Soon thereafter, Zeno misses Guido's funeral because he himself is gambling Guido's money on the Bourse, recovering three-quarters of the losses. Zeno describes his current life during the Great War. His daughter Antonia (who greatly resembles Ada) and son Alfio have grown up. He spends his time visiting doctors, looking for a cure for his imagined sickness. One of the doctors claims he is suffering from the Oedipus complex, but Zeno does not believe it to be true. Not a single doctor is able to treat him. In May 1915, while Italy is still neutral, as Zeno wants it to be, he and his family spend a vacation on the green banks of the Isonzo. Zeno does not yet guess that the area will soon become a major battlefield. Renting a house in the village of Lucinico, he sets out on a casual morning stroll without his hat and jacket, when the outbreak of the war between Italy and Austria-Hungary turns the area into a war zone and Zeno is separated from his wife and children by the front line. Forced to go back to Trieste alone, only much later does he find out that Augusta and the children reached Turin safely. The final entry is from March 1916, after Zeno, alone in war-torn Trieste, has become wealthy by speculating and hoarding, though money has not made him happy or pleased with life. He comes to the realization that life itself resembles sickness, because it has advances and setbacks and always ends in death. Human advancement has given mankind not more able bodies, but weapons that can be bought, sold, or stolen to prolong life. This deviation from natural selection causes more sickness and weakness in humans. Zeno imagines a time when someone will invent a powerful new weapon of mass destruction, much like the modern atomic bomb, which had not yet been invented at the time of writing, and another person will steal it and destroy the world, setting it free of sickness. == See also == Unreliable narrator == References == == External links == e-text of La coscienza di Zeno on Liberliber (the Italian analogue of Project Gutenberg) Listen to some chapters of La coscienza di Zeno on audio mp3 – free download
Wikipedia/Zeno's_Conscience
A randomized controlled trial (or randomized control trial; RCT) is a form of scientific experiment used to control factors not under direct experimental control. Examples of RCTs are clinical trials that compare the effects of drugs, surgical techniques, medical devices, diagnostic procedures, diets or other medical treatments. Participants who enroll in RCTs differ from one another in known and unknown ways that can influence study outcomes, and yet cannot be directly controlled. By randomly allocating participants among compared treatments, an RCT enables statistical control over these influences. Provided it is designed well, conducted properly, and enrolls enough participants, an RCT may achieve sufficient control over these confounding factors to deliver a useful comparison of the treatments studied. == Definition and examples == An RCT in clinical research typically compares a proposed new treatment against an existing standard of care; these are then termed the 'experimental' and 'control' treatments, respectively. When no such generally accepted treatment is available, a placebo may be used in the control group so that participants are blinded, or not given information, about their treatment allocations. This blinding principle is ideally also extended as much as possible to other parties including researchers, technicians, data analysts, and evaluators. Effective blinding experimentally isolates the physiological effects of treatments from various psychological sources of bias. The randomness in the assignment of participants to treatments reduces selection bias and allocation bias, balancing both known and unknown prognostic factors across the compared groups. Blinding reduces other forms of experimenter and subject biases. A well-blinded RCT is considered the gold standard for clinical trials. Blinded RCTs are commonly used to test the efficacy of medical interventions and may additionally provide information about adverse effects, such as drug reactions. A randomized controlled trial can provide compelling evidence that the study treatment causes an effect on human health. The terms "RCT" and "randomized trial" are sometimes used synonymously, but the latter term omits mention of controls and can therefore describe studies that compare multiple treatment groups with each other in the absence of a control group. Similarly, the initialism is sometimes expanded as "randomized clinical trial" or "randomized comparative trial", leading to ambiguity in the scientific literature. Not all randomized clinical trials are randomized controlled trials (and some of them could never be, as in cases where controls would be impractical or unethical to use). The term randomized controlled clinical trial is an alternative term used in clinical research; however, RCTs are also employed in other research areas, including many of the social sciences. == History == The first reported clinical trial was conducted by James Lind in 1747 to identify a treatment for scurvy. The first blind experiment was conducted by the French Royal Commission on Animal Magnetism in 1784 to investigate the claims of mesmerism. An early essay advocating the blinding of researchers came from Claude Bernard in the latter half of the 19th century. Bernard recommended that the observer of an experiment should not have knowledge of the hypothesis being tested. 
This suggestion contrasted starkly with the prevalent Enlightenment-era attitude that scientific observation can only be objectively valid when undertaken by a well-educated, informed scientist. The first study recorded to have a blinded researcher was published in 1907 by W. H. R. Rivers and H. N. Webber to investigate the effects of caffeine. Randomized experiments first appeared in psychology, where they were introduced by Charles Sanders Peirce and Joseph Jastrow in the 1880s, and in education. The earliest experiments comparing treatment and control groups were published by Robert Woodworth and Edward Thorndike in 1901, and by John E. Coover and Frank Angell in 1907. In the early 20th century, randomized experiments appeared in agriculture, due to Jerzy Neyman and Ronald A. Fisher. Fisher's experimental research and his writings popularized randomized experiments. The first published Randomized Controlled Trial in medicine appeared in the 1948 paper entitled "Streptomycin treatment of pulmonary tuberculosis", which described a Medical Research Council investigation. One of the authors of that paper was Austin Bradford Hill, who is credited as having conceived the modern RCT. Trial design was further influenced by the large-scale ISIS trials on heart attack treatments that were conducted in the 1980s. By the late 20th century, RCTs were recognized as the standard method for "rational therapeutics" in medicine. As of 2004, more than 150,000 RCTs were in the Cochrane Library. To improve the reporting of RCTs in the medical literature, an international group of scientists and editors published Consolidated Standards of Reporting Trials (CONSORT) Statements in 1996, 2001 and 2010, and these have become widely accepted. Randomization is the process of assigning trial subjects to treatment or control groups using an element of chance to determine the assignments in order to reduce the bias. == Ethics == Although the principle of clinical equipoise ("genuine uncertainty within the expert medical community... about the preferred treatment") common to clinical trials has been applied to RCTs, the ethics of RCTs have special considerations. For one, it has been argued that equipoise itself is insufficient to justify RCTs. For another, "collective equipoise" can conflict with a lack of personal equipoise (e.g., a personal belief that an intervention is effective). Finally, Zelen's design, which has been used for some RCTs, randomizes subjects before they provide informed consent, which may be ethical for RCTs of screening and selected therapies, but is likely unethical "for most therapeutic trials." Although subjects almost always provide informed consent for their participation in an RCT, studies since 1982 have documented that RCT subjects may believe that they are certain to receive treatment that is best for them personally; that is, they do not understand the difference between research and treatment. Further research is necessary to determine the prevalence of and ways to address this "therapeutic misconception". The RCT method variations may also create cultural effects that have not been well understood. For example, patients with terminal illness may join trials in the hope of being cured, even when treatments are unlikely to be successful. 
=== Trial registration === In 2004, the International Committee of Medical Journal Editors (ICMJE) announced that all trials starting enrolment after July 1, 2005, must be registered prior to consideration for publication in one of the 12 member journals of the committee. However, trial registration may still occur late or not at all. Medical journals have been slow in adapting policies requiring mandatory clinical trial registration as a prerequisite for publication. == Classifications == === By study design === One way to classify RCTs is by study design. From most to least common in the healthcare literature, the major categories of RCT study designs are: Parallel-group – each participant is randomly assigned to a group, and all the participants in the group receive (or do not receive) an intervention. Crossover – over time, each participant receives (or does not receive) an intervention in a random sequence. Cluster – pre-existing groups of participants (e.g., villages, schools) are randomly selected to receive (or not receive) an intervention. Factorial – each participant is randomly assigned to a group that receives a particular combination of interventions or non-interventions (e.g., group 1 receives vitamin X and vitamin Y, group 2 receives vitamin X and placebo Y, group 3 receives placebo X and vitamin Y, and group 4 receives placebo X and placebo Y). An analysis of the 616 RCTs indexed in PubMed during December 2006 found that 78% were parallel-group trials, 16% were crossover, 2% were split-body, 2% were cluster, and 2% were factorial. === By outcome of interest (efficacy vs. effectiveness) === RCTs can be classified as "explanatory" or "pragmatic." Explanatory RCTs test efficacy in a research setting with highly selected participants and under highly controlled conditions. In contrast, pragmatic RCTs (pRCTs) test effectiveness in everyday practice with relatively unselected participants and under flexible conditions; in this way, pragmatic RCTs can "inform decisions about practice." === By hypothesis (superiority vs. noninferiority vs. equivalence) === Another classification of RCTs categorizes them as "superiority trials", "noninferiority trials", and "equivalence trials", which differ in methodology and reporting. Most RCTs are superiority trials, in which one intervention is hypothesized to be superior to another in a statistically significant way. Some RCTs are noninferiority trials "to determine whether a new treatment is no worse than a reference treatment." Other RCTs are equivalence trials in which the hypothesis is that two interventions are indistinguishable from each other. == Randomization == The advantages of proper randomization in RCTs include: "It eliminates bias in treatment assignment," specifically selection bias and confounding. "It facilitates blinding (masking) of the identity of treatments from investigators, participants, and assessors." "It permits the use of probability theory to express the likelihood that any difference in outcome between treatment groups merely indicates chance." There are two processes involved in randomizing patients to different interventions. First is choosing a randomization procedure to generate an unpredictable sequence of allocations; this may be a simple random assignment of patients to any of the groups at equal probabilities, may be "restricted", or may be "adaptive." 
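To make the first of these two processes concrete, the following short Python sketch (an illustrative addition, not part of the source article; the function and group names are arbitrary) generates a simple, unrestricted randomization sequence, equivalent to repeated fair coin-tossing:

import random

def simple_randomization(n_participants, groups=("treatment", "control"), seed=None):
    # Each participant is assigned independently with equal probability,
    # so group sizes can be unbalanced by chance in small trials.
    rng = random.Random(seed)
    return [rng.choice(groups) for _ in range(n_participants)]

sequence = simple_randomization(20, seed=42)
print(sequence.count("treatment"), sequence.count("control"))

Because nothing constrains the running totals, such a sequence is unpredictable but may yield unequal group sizes, which is what motivates the restricted procedures described below.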
A second and more practical issue is allocation concealment, which refers to the stringent precautions taken to ensure that the group assignment of patients is not revealed prior to definitively allocating them to their respective groups. Non-random "systematic" methods of group assignment, such as alternating subjects between one group and the other, can cause "limitless contamination possibilities" and can cause a breach of allocation concealment. However, empirical evidence that adequate randomization changes outcomes relative to inadequate randomization has been difficult to detect. === Procedures === The treatment allocation is the desired proportion of patients in each treatment arm. An ideal randomization procedure would achieve the following goals: Maximize statistical power, especially in subgroup analyses. Generally, equal group sizes maximize statistical power; however, unequal group sizes may be more powerful for some analyses (e.g., multiple comparisons of placebo versus several doses using Dunnett's procedure), and are sometimes desired for non-analytic reasons (e.g., patients may be more motivated to enroll if there is a higher chance of getting the test treatment, or regulatory agencies may require a minimum number of patients exposed to treatment). Minimize selection bias. This may occur if investigators can consciously or unconsciously preferentially enroll patients into particular treatment arms. A good randomization procedure will be unpredictable so that investigators cannot guess the next subject's group assignment based on prior treatment assignments. The risk of selection bias is highest when previous treatment assignments are known (as in unblinded studies) or can be guessed (perhaps if a drug has distinctive side effects). Minimize allocation bias (or confounding). This may occur when covariates that affect the outcome are not equally distributed between treatment groups, and the treatment effect is confounded with the effect of the covariates (i.e., an "accidental bias"). If the randomization procedure causes an imbalance in covariates related to the outcome across groups, estimates of effect may be biased if not adjusted for the covariates (which may be unmeasured and therefore impossible to adjust for). However, no single randomization procedure meets those goals in every circumstance, so researchers must select a procedure for a given study based on its advantages and disadvantages. ==== Simple ==== This is a commonly used and intuitive procedure, similar to "repeated fair coin-tossing." Also known as "complete" or "unrestricted" randomization, it is robust against both selection and accidental biases. However, its main drawback is the possibility of imbalanced group sizes in small RCTs. It is therefore recommended only for RCTs with over 200 subjects. ==== Restricted ==== To balance group sizes in smaller RCTs, some form of "restricted" randomization is recommended. The major types of restricted randomization used in RCTs are: Permuted-block randomization or blocked randomization: a "block size" and "allocation ratio" (number of subjects in one group versus the other group) are specified, and subjects are allocated randomly within each block. For example, a block size of 6 and an allocation ratio of 2:1 would lead to random assignment of 4 subjects to one group and 2 to the other. 
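As an illustration of that worked example, the following Python sketch (an addition for illustration only, not from the source article; the function and group labels are arbitrary) produces a permuted-block sequence with a block size of 6 and a 2:1 allocation ratio, so every block assigns exactly 4 participants to one group and 2 to the other:

import random

def blocked_randomization(n_participants, block_size=6, ratio=(2, 1), seed=None):
    rng = random.Random(seed)
    unit = sum(ratio)
    block_counts = (block_size * ratio[0] // unit, block_size * ratio[1] // unit)
    sequence = []
    while len(sequence) < n_participants:
        block = ["A"] * block_counts[0] + ["B"] * block_counts[1]
        rng.shuffle(block)  # permute the assignments within each block
        sequence.extend(block)
    return sequence[:n_participants]

sequence = blocked_randomization(18, seed=1)
print(sequence.count("A"), sequence.count("B"))  # 12 6, matching the 2:1 ratio

Keeping the group totals close to the target ratio throughout enrollment is the point of blocking; stratified variants typically maintain a separate block sequence for each stratum.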
This type of randomization can be combined with "stratified randomization", for example by center in a multicenter trial, to "ensure good balance of participant characteristics in each group." A special case of permuted-block randomization is random allocation, in which the entire sample is treated as one block. The major disadvantage of permuted-block randomization is that even if the block sizes are large and randomly varied, the procedure can lead to selection bias. Another disadvantage is that "proper" analysis of data from permuted-block-randomized RCTs requires stratification by blocks. Adaptive biased-coin randomization methods (of which urn randomization is the most widely known type): In these relatively uncommon methods, the probability of being assigned to a group decreases if the group is overrepresented and increases if the group is underrepresented. The methods are thought to be less affected by selection bias than permuted-block randomization. ==== Adaptive ==== At least two types of "adaptive" randomization procedures have been used in RCTs, but much less frequently than simple or restricted randomization: Covariate-adaptive randomization, of which one type is minimization: The probability of being assigned to a group varies in order to minimize "covariate imbalance." Minimization is reported to have "supporters and detractors"; because only the first subject's group assignment is truly chosen at random, the method does not necessarily eliminate bias on unknown factors. Response-adaptive randomization, also known as outcome-adaptive randomization: The probability of being assigned to a group increases if the responses of the prior patients in the group were favorable. Although arguments have been made that this approach is more ethical than other types of randomization when the probability that a treatment is effective or ineffective increases during the course of an RCT, ethicists have not yet studied the approach in detail. === Allocation concealment === "Allocation concealment" (defined as "the procedure for protecting the randomization process so that the treatment to be allocated is not known before the patient is entered into the study") is important in RCTs. In practice, clinical investigators in RCTs often find it difficult to maintain impartiality. Stories abound of investigators holding up sealed envelopes to lights or ransacking offices to determine group assignments in order to dictate the assignment of their next patient. Such practices introduce selection bias and confounders (both of which should be minimized by randomization), possibly distorting the results of the study. Adequate allocation concealment should prevent patients and investigators from discovering treatment allocation once a study is underway and after the study has concluded. Treatment-related side effects or adverse events may be specific enough to reveal allocation to investigators or patients, thereby introducing bias or influencing any subjective parameters collected by investigators or requested from subjects. Some standard methods of ensuring allocation concealment include sequentially numbered, opaque, sealed envelopes (SNOSE); sequentially numbered containers; pharmacy controlled randomization; and central randomization. 
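As a rough illustration of the idea behind central randomization (an added sketch, not from the source article; the class and method names are invented for the example), the allocation list can be held by a central service that reveals an assignment only after a participant has been irrevocably enrolled, so site investigators never see upcoming allocations:

import random

class CentralRandomizationService:
    def __init__(self, allocation_sequence):
        self._pending = list(allocation_sequence)  # known only to the central service
        self._enrolled = {}

    def enroll(self, participant_id):
        if participant_id in self._enrolled:
            raise ValueError("participant already enrolled")
        if not self._pending:
            raise RuntimeError("trial fully enrolled")
        # The allocation is disclosed only now, after enrollment is irrevocable.
        self._enrolled[participant_id] = self._pending.pop(0)
        return self._enrolled[participant_id]

rng = random.Random(7)
service = CentralRandomizationService(rng.sample(["treatment"] * 3 + ["control"] * 3, 6))
print(service.enroll("P001"))  # the remaining sequence stays concealed

The sealed-envelope and pharmacy-controlled methods listed above aim at the same property by non-electronic means.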
It is recommended that allocation concealment methods be included in an RCT's protocol, and that the allocation concealment methods should be reported in detail in a publication of an RCT's results; however, a 2005 study determined that most RCTs have unclear allocation concealment in their protocols, in their publications, or both. On the other hand, a 2008 study of 146 meta-analyses concluded that the results of RCTs with inadequate or unclear allocation concealment tended to be biased toward beneficial effects only if the RCTs' outcomes were subjective as opposed to objective. === Sample size === The number of treatment units (subjects or groups of subjects) assigned to control and treatment groups, affects an RCT's reliability. If the effect of the treatment is small, the number of treatment units in either group may be insufficient for rejecting the null hypothesis in the respective statistical test. The failure to reject the null hypothesis would imply that the treatment shows no statistically significant effect on the treated in a given test. But as the sample size increases, the same RCT may be able to demonstrate a significant effect of the treatment, even if this effect is small. == Blinding == An RCT may be blinded, (also called "masked") by "procedures that prevent study participants, caregivers, or outcome assessors from knowing which intervention was received." Unlike allocation concealment, blinding is sometimes inappropriate or impossible to perform in an RCT; for example, if an RCT involves a treatment in which active participation of the patient is necessary (e.g., physical therapy), participants cannot be blinded to the intervention. Traditionally, blinded RCTs have been classified as "single-blind", "double-blind", or "triple-blind"; however, in 2001 and 2006 two studies showed that these terms have different meanings for different people. The 2010 CONSORT Statement specifies that authors and editors should not use the terms "single-blind", "double-blind", and "triple-blind"; instead, reports of blinded RCT should discuss "If done, who was blinded after assignment to interventions (for example, participants, care providers, those assessing outcomes) and how." RCTs without blinding are referred to as "unblinded", "open", or (if the intervention is a medication) "open-label". In 2008 a study concluded that the results of unblinded RCTs tended to be biased toward beneficial effects only if the RCTs' outcomes were subjective as opposed to objective; for example, in an RCT of treatments for multiple sclerosis, unblinded neurologists (but not the blinded neurologists) felt that the treatments were beneficial. In pragmatic RCTs, although the participants and providers are often unblinded, it is "still desirable and often possible to blind the assessor or obtain an objective source of data for evaluation of outcomes." == Analysis of data == The types of statistical methods used in RCTs depend on the characteristics of the data and include: For dichotomous (binary) outcome data, logistic regression (e.g., to predict sustained virological response after receipt of peginterferon alfa-2a for hepatitis C) and other methods can be used. For continuous outcome data, analysis of covariance (e.g., for changes in blood lipid levels after receipt of atorvastatin after acute coronary syndrome) tests the effects of predictor variables. 
For time-to-event outcome data that may be censored, survival analysis (e.g., Kaplan–Meier estimators and Cox proportional hazards models for time to coronary heart disease after receipt of hormone replacement therapy in menopause) is appropriate. Regardless of the statistical methods used, important considerations in the analysis of RCT data include: Whether an RCT should be stopped early due to interim results. For example, RCTs may be stopped early if an intervention produces "larger than expected benefit or harm", or if "investigators find evidence of no important difference between experimental and control interventions." The extent to which the groups can be analyzed exactly as they existed upon randomization (i.e., whether a so-called "intention-to-treat analysis" is used). A "pure" intention-to-treat analysis is "possible only when complete outcome data are available" for all randomized subjects; when some outcome data are missing, options include analyzing only cases with known outcomes and using imputed data. Nevertheless, the more that analyses can include all participants in the groups to which they were randomized, the less bias that an RCT will be subject to. Whether subgroup analysis should be performed. These are "often discouraged" because multiple comparisons may produce false positive findings that cannot be confirmed by other studies. == Reporting of results == The CONSORT 2010 Statement is "an evidence-based, minimum set of recommendations for reporting RCTs." The CONSORT 2010 checklist contains 25 items (many with sub-items) focusing on "individually randomised, two group, parallel trials" which are the most common type of RCT. For other RCT study designs, "CONSORT extensions" have been published, some examples are: Consort 2010 Statement: Extension to Cluster Randomised Trials Consort 2010 Statement: Non-Pharmacologic Treatment Interventions "Reporting of surrogate endpoints in randomised controlled trial reports (CONSORT-Surrogate): extension checklist with explanation and elaboration" === Relative importance and observational studies === Two studies published in The New England Journal of Medicine in 2000 found that observational studies and RCTs overall produced similar results. The authors of the 2000 findings questioned the belief that "observational studies should not be used for defining evidence-based medical care" and that RCTs' results are "evidence of the highest grade." However, a 2001 study published in Journal of the American Medical Association concluded that "discrepancies beyond chance do occur and differences in estimated magnitude of treatment effect are very common" between observational studies and RCTs. According to a 2014 (updated in 2024) Cochrane review, there is little evidence for significant effect differences between observational studies and randomized controlled trials. To evaluate differences it is necessary to consider things other than design, such as heterogeneity, population, intervention or comparator. Two other lines of reasoning question RCTs' contribution to scientific knowledge beyond other types of studies: If study designs are ranked by their potential for new discoveries, then anecdotal evidence would be at the top of the list, followed by observational studies, followed by RCTs. RCTs may be unnecessary for treatments that have dramatic and rapid effects relative to the expected stable or progressively worse natural course of the condition treated. 
One example is combination chemotherapy including cisplatin for metastatic testicular cancer, which increased the cure rate from 5% to 60% in a 1977 non-randomized study. === Interpretation of statistical results === Like all statistical methods, RCTs are subject to both type I ("false positive") and type II ("false negative") statistical errors. Regarding Type I errors, a typical RCT will use 0.05 (i.e., 1 in 20) as the probability that the RCT will falsely find two equally effective treatments significantly different. Regarding Type II errors, despite the publication of a 1978 paper noting that the sample sizes of many "negative" RCTs were too small to make definitive conclusions about the negative results, by 2005-2006 a sizeable proportion of RCTs still had inaccurate or incompletely reported sample size calculations. === Peer review === Peer review of results is an important part of the scientific method. Reviewers examine the study results for potential problems with design that could lead to unreliable results (for example by creating a systematic bias), evaluate the study in the context of related studies and other evidence, and evaluate whether the study can be reasonably considered to have proven its conclusions. To underscore the need for peer review and the danger of overgeneralizing conclusions, two Boston-area medical researchers performed a randomized controlled trial in which they randomly assigned either a parachute or an empty backpack to 23 volunteers who jumped from either a biplane or a helicopter. The study was able to accurately report that parachutes fail to reduce injury compared to empty backpacks. The key context that limited the general applicability of this conclusion was that the aircraft were parked on the ground, and participants had only jumped about two feet. == Advantages == RCTs are considered to be the most reliable form of scientific evidence in the hierarchy of evidence that influences healthcare policy and practice because RCTs reduce spurious causality and bias. Results of RCTs may be combined in systematic reviews which are increasingly being used in the conduct of evidence-based practice. Some examples of scientific organizations' considering RCTs or systematic reviews of RCTs to be the highest-quality evidence available are: As of 1998, the National Health and Medical Research Council of Australia designated "Level I" evidence as that "obtained from a systematic review of all relevant randomised controlled trials" and "Level II" evidence as that "obtained from at least one properly designed randomised controlled trial." Since at least 2001, in making clinical practice guideline recommendations the United States Preventive Services Task Force has considered both a study's design and its internal validity as indicators of its quality. It has recognized "evidence obtained from at least one properly randomized controlled trial" with good internal validity (i.e., a rating of "I-good") as the highest quality evidence available to it. The GRADE Working Group concluded in 2008 that "randomised trials without important limitations constitute high quality evidence." For issues involving "Therapy/Prevention, Aetiology/Harm", the Oxford Centre for Evidence-based Medicine as of 2011 defined "Level 1a" evidence as a systematic review of RCTs that are consistent with each other, and "Level 1b" evidence as an "individual RCT (with narrow Confidence Interval)." 
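Returning to the type I and type II error rates discussed under "Interpretation of statistical results" above, a minimal Python sketch (an illustrative addition, not from the source article; the 30% versus 40% event rates and the function name are assumptions chosen for the example) applies a standard normal-approximation formula for the per-group sample size needed to compare two proportions at a two-sided significance level of 0.05 with 80% power:

import math
from statistics import NormalDist

def per_group_sample_size(p_control, p_treatment, alpha=0.05, power=0.80):
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided type I error
    z_beta = NormalDist().inv_cdf(power)           # power = 1 - type II error
    variance = p_control * (1 - p_control) + p_treatment * (1 - p_treatment)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p_treatment - p_control) ** 2)

print(per_group_sample_size(0.30, 0.40))  # roughly 350-360 participants per arm

Halving the detectable difference roughly quadruples the required sample size, which is why many small "negative" trials cannot rule out a clinically meaningful effect.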
Notable RCTs with unexpected results that contributed to changes in clinical practice include: After Food and Drug Administration approval, the antiarrhythmic agents flecainide and encainide came to market in 1986 and 1987 respectively. The non-randomized studies concerning the drugs were characterized as "glowing", and their sales increased to a combined total of approximately 165,000 prescriptions per month in early 1989. In that year, however, a preliminary report of an RCT concluded that the two drugs increased mortality. Sales of the drugs then decreased. Prior to 2002, based on observational studies, it was routine for physicians to prescribe hormone replacement therapy for post-menopausal women to prevent myocardial infarction. In 2002 and 2004, however, published RCTs from the Women's Health Initiative claimed that women taking hormone replacement therapy with estrogen plus progestin had a higher rate of myocardial infarctions than women on a placebo, and that estrogen-only hormone replacement therapy caused no reduction in the incidence of coronary heart disease. Possible explanations for the discrepancy between the observational studies and the RCTs involved differences in methodology, in the hormone regimens used, and in the populations studied. The use of hormone replacement therapy decreased after publication of the RCTs. == Disadvantages == Many papers discuss the disadvantages of RCTs. Among the most frequently cited drawbacks are: === Time and costs === RCTs can be expensive; one study found 28 Phase III RCTs funded by the National Institute of Neurological Disorders and Stroke prior to 2000 with a total cost of US$335 million, for a mean cost of US$12 million per RCT. Nevertheless, the return on investment of RCTs may be high, in that the same study projected that the 28 RCTs produced a "net benefit to society at 10-years" of 46 times the cost of the trials program, based on evaluating a quality-adjusted life year as equal to the prevailing mean per capita gross domestic product. An RCT commonly takes several years to conduct and publish; thus, data are withheld from the medical community for years and may be less relevant by the time of publication. It is costly to maintain RCTs for the years or decades that would be ideal for evaluating some interventions. Interventions to prevent events that occur only infrequently (e.g., sudden infant death syndrome) and uncommon adverse outcomes (e.g., a rare side effect of a drug) would require RCTs with extremely large sample sizes and may, therefore, best be assessed by observational studies. Due to the costs of running RCTs, these usually examine only one variable or very few variables, rarely reflecting the full picture of a complicated medical situation; whereas the case report, for example, can detail many aspects of the patient's medical situation (e.g. patient history, physical examination, diagnosis, psychosocial aspects, follow up). === Conflict of interest dangers === A 2011 study done to disclose possible conflicts of interest in underlying research studies used for medical meta-analyses reviewed 29 meta-analyses and found that conflicts of interest in the studies underlying the meta-analyses were rarely disclosed. The 29 meta-analyses included 11 from general medicine journals, 15 from specialty medicine journals, and 3 from the Cochrane Database of Systematic Reviews. The 29 meta-analyses reviewed an aggregate of 509 randomized controlled trials (RCTs). 
Of these, 318 RCTs reported funding sources, of which 219 (69%) were industry funded. 132 of the 509 RCTs reported author conflict of interest disclosures, with 91 studies (69%) disclosing industry financial ties with one or more authors. The information was, however, seldom reflected in the meta-analyses. Only two (7%) reported RCT funding sources and none reported RCT author-industry ties. The authors concluded "without acknowledgment of COI due to industry funding or author industry financial ties from RCTs included in meta-analyses, readers' understanding and appraisal of the evidence from the meta-analysis may be compromised." Some RCTs are fully or partly funded by the health care industry (e.g., the pharmaceutical industry) as opposed to government, nonprofit, or other sources. A systematic review published in 2003 found four 1986–2002 articles comparing industry-sponsored and nonindustry-sponsored RCTs, and in all the articles there was a correlation of industry sponsorship and positive study outcome. A 2004 study of 1999–2001 RCTs published in leading medical and surgical journals determined that industry-funded RCTs "are more likely to be associated with statistically significant pro-industry findings." These results have been mirrored in surgical trials, where industry funding did not affect the rate of trial discontinuation but was associated with lower odds of publication for completed trials. One possible reason for the pro-industry results in industry-funded published RCTs is publication bias. Other authors have cited the differing goals of academic and industry sponsored research as contributing to the difference. Commercial sponsors may be more focused on performing trials of drugs that have already shown promise in early stage trials, and on replicating previous positive results to fulfill regulatory requirements for drug approval. === Ethics === If a disruptive innovation in medical technology is developed, it may be difficult to test this ethically in an RCT if it becomes "obvious" that the control subjects have poorer outcomes, either due to other foregoing testing or within the initial phase of the RCT itself. Ethically it may be necessary to abort the RCT prematurely, and getting ethics approval (and patient agreement) to withhold the innovation from the control group in future RCTs may not be feasible. Historical control trials (HCT) exploit the data of previous RCTs to reduce the sample size; however, these approaches are controversial in the scientific community and must be handled with care. == In social science == Because RCTs have emerged only recently in social science, their use in the social sciences is a contested issue. Some writers from a medical or health background have argued that existing research in a range of social science disciplines lacks rigour, and should be improved by greater use of randomized control trials. === Transport science === Researchers in transport science argue that public spending on programmes such as school travel plans could not be justified unless their efficacy is demonstrated by randomized controlled trials. Graham-Rowe and colleagues reviewed 77 evaluations of transport interventions found in the literature, categorising them into 5 "quality levels". They concluded that most of the studies were of low quality and advocated the use of randomized controlled trials wherever possible in future transport research. Dr. 
Steve Melia took issue with these conclusions, arguing that claims about the advantages of RCTs, in establishing causality and avoiding bias, have been exaggerated. He proposed the following eight criteria for the use of RCTs in contexts where interventions must change human behaviour to be effective: The intervention: Has not been applied to all members of a unique group of people (e.g. the population of a whole country, all employees of a unique organisation etc.) Is applied in a context or setting similar to that which applies to the control group Can be isolated from other activities—and the purpose of the study is to assess this isolated effect Has a short timescale between its implementation and maturity of its effects And the causal mechanisms: Are either known to the researchers, or else all possible alternatives can be tested Do not involve significant feedback mechanisms between the intervention group and external environments Have a stable and predictable relationship to exogenous factors Would act in the same way if the control group and intervention group were reversed === Criminology === A 2005 review found 83 randomized experiments in criminology published in 1982–2004, compared with only 35 published in 1957–1981. The authors classified the studies they found into five categories: "policing", "prevention", "corrections", "court", and "community". Focusing only on offending behavior programs, Hollin (2008) argued that RCTs may be difficult to implement (e.g., if an RCT required "passing sentences that would randomly assign offenders to programmes") and therefore that experiments with quasi-experimental design are still necessary. === Education === RCTs have been used in evaluating a number of educational interventions. Between 1980 and 2016, over 1,000 reports of RCTs have been published. For example, a 2009 study randomized 260 elementary school teachers' classrooms to receive or not receive a program of behavioral screening, classroom intervention, and parent training, and then measured the behavioral and academic performance of their students. Another 2009 study randomized classrooms for 678 first-grade children to receive a classroom-centered intervention, a parent-centered intervention, or no intervention, and then followed their academic outcomes through age 19. == Criticism == A 2018 review of the 10 most cited randomised controlled trials noted poor distribution of background traits, difficulties with blinding, and discussed other assumptions and biases inherent in randomised controlled trials. These include the "unique time period assessment bias", the "background traits remain constant assumption", the "average treatment effects limitation", the "simple treatment at the individual level limitation", the "all preconditions are fully met assumption", the "quantitative variable limitation" and the "placebo only or conventional treatment only limitation". == See also == Drug development Hypothesis testing Impact evaluation Jadad scale Pipeline planning Patient and public involvement Observational study Blinded experiment Statistical inference Royal Commission on Animal Magnetism – 1784 French scientific bodies' investigations involving systematic controlled trials == References == == Further reading ==
Wikipedia/Randomized_control_trial
Pelvic inflammatory disease (PID), also known as pelvic inflammatory disorder, is an infection of the upper part of the female reproductive system, mainly the uterus, fallopian tubes, and ovaries, and inside of the pelvis. Often, there may be no symptoms. Signs and symptoms, when present, may include lower abdominal pain, vaginal discharge, fever, burning with urination, pain with sex, bleeding after sex, or irregular menstruation. Untreated PID can result in long-term complications including infertility, ectopic pregnancy, chronic pelvic pain, and cancer. The disease is caused by bacteria that spread from the vagina and cervix. It has been reported that infections by Neisseria gonorrhoeae or Chlamydia trachomatis are present in 75 to 90 percent of cases. However, in the UK it is reported by the NHS that infections by Neisseria gonorrhoeae and Chlamydia trachomatis are responsible for only a quarter of PID cases. Often, multiple different bacteria are involved. Without treatment, about 10 percent of those with a chlamydial infection and 40 percent of those with a gonorrhea infection will develop PID. Risk factors are generally similar to those of sexually transmitted infections and include a high number of sexual partners and drug use. Vaginal douching may also increase the risk. The diagnosis is typically based on the presenting signs and symptoms. It is recommended that the disease be considered in all women of childbearing age who have lower abdominal pain. A definitive diagnosis of PID is made by finding pus involving the fallopian tubes during surgery. Ultrasound may also be useful in diagnosis. Efforts to prevent the disease include not having sex or having few sexual partners and using condoms. Screening women at risk for chlamydial infection followed by treatment decreases the risk of PID. If the diagnosis is suspected, treatment is typically advised. Treating a woman's sexual partners should also occur. In those with mild or moderate symptoms, a single injection of the antibiotic ceftriaxone along with two weeks of doxycycline and possibly metronidazole by mouth is recommended. For those who do not improve after three days or who have severe disease, intravenous antibiotics should be used. Globally, about 106 million cases of chlamydia and 106 million cases of gonorrhea occurred in 2008. The number of cases of PID, however, is not clear. It is estimated to affect about 1.5 percent of young women yearly. In the United States, PID is estimated to affect about one million people each year. A type of intrauterine device (IUD) known as the Dalkon shield led to increased rates of PID in the 1970s. Current IUDs are not associated with this problem after the first month. == Signs and symptoms == Symptoms in PID range from none to severe. If there are symptoms, fever, cervical motion tenderness, lower abdominal pain, new or different discharge, painful intercourse, uterine tenderness, adnexal tenderness, or irregular menstruation may be noted. Other complications include endometritis, salpingitis, tubo-ovarian abscess, pelvic peritonitis, periappendicitis, and perihepatitis. === Complications === PID can cause scarring inside the reproductive system, which can later cause serious complications, including chronic pelvic pain, infertility, ectopic pregnancy (the leading cause of pregnancy-related deaths in adult females), and other complications of pregnancy. 
Occasionally, the infection can spread to the peritoneum, causing inflammation and the formation of scar tissue on the external surface of the liver (Fitz-Hugh–Curtis syndrome). == Cause == Chlamydia trachomatis and Neisseria gonorrhoeae are common causes of PID. However, PID can also be caused by other untreated infections, like bacterial vaginosis. Data suggest that PID is often polymicrobial. Isolated anaerobes and facultative microorganisms have been obtained from the upper genital tract. N. gonorrhoeae has been isolated from fallopian tubes, and facultative and anaerobic organisms have been recovered from endometrial tissues. The anatomical structure of the internal organs and tissues of the female reproductive tract provides a pathway for pathogens to ascend from the vagina to the pelvic cavity through the infundibulum. The disturbance of the naturally occurring vaginal microbiota associated with bacterial vaginosis increases the risk of PID. N. gonorrhoeae and C. trachomatis are the most common organisms. The least common were infections caused exclusively by anaerobes and facultative organisms. Anaerobes and facultative bacteria were also isolated from 50 percent of the patients from whom Chlamydia and Neisseria were recovered; thus, anaerobes and facultative bacteria were present in the upper genital tract of nearly two-thirds of the PID patients. PCR and serological tests have associated extremely fastidious organisms with endometritis, PID, and tubal factor infertility. Microorganisms associated with PID are listed below. Cases of PID have developed in people who have stated they have never had sex. === Bacteria === == Diagnosis == On pelvic examination, cervical motion, uterine, or adnexal tenderness may be elicited. Mucopurulent cervicitis and/or urethritis may be observed. In severe cases, more testing may be required, such as laparoscopy, intra-abdominal sampling and culturing of bacteria, or tissue biopsy. Laparoscopy can visualize "violin-string" adhesions, characteristic of Fitz-Hugh–Curtis perihepatitis, and other abscesses that may be present. Other imaging methods, such as ultrasonography, computed tomography (CT), and magnetic resonance imaging (MRI), can aid in diagnosis. Blood tests can also help identify the presence of infection: the erythrocyte sedimentation rate (ESR), the C-reactive protein (CRP) level, and chlamydial and gonococcal DNA probes. Nucleic acid amplification tests (NAATs), direct fluorescent antibody tests (DFA), and enzyme-linked immunosorbent assays (ELISA) are highly sensitive tests that can identify specific pathogens present. Serology testing for antibodies is not as useful, since the presence of the microorganisms in healthy people can confound interpretation of antibody titer levels, although antibody levels can indicate whether an infection is recent or long-term. Definitive criteria include histopathologic evidence of endometritis, thickened fluid-filled fallopian tubes, or laparoscopic findings. Gram stain/smear becomes definitive in the identification of rare, atypical, and possibly more serious organisms. Two-thirds of patients with laparoscopic evidence of previous PID were not aware they had PID, but even asymptomatic PID can cause serious harm. Laparoscopic identification is helpful in diagnosing tubal disease; a 65 percent to 90 percent positive predictive value exists in patients with presumed PID. 
On gynecologic ultrasound, a potential finding is a tubo-ovarian complex, which consists of edematous and dilated pelvic structures, as evidenced by vague margins, but without abscess formation. === Differential diagnosis === A number of other conditions may produce similar symptoms, including appendicitis, ectopic pregnancy, hemorrhagic or ruptured ovarian cysts, ovarian torsion, and endometriosis, as well as gastroenteritis, peritonitis, and bacterial vaginosis, among others. Pelvic inflammatory disease is more likely to recur when there is a prior history of the infection, recent sexual contact, recent onset of menses, or an IUD (intrauterine device) in place, or if the partner has a sexually transmitted infection. Acute pelvic inflammatory disease is highly unlikely when recent intercourse has not taken place or an IUD is not being used. A sensitive serum pregnancy test is typically obtained to rule out ectopic pregnancy. Culdocentesis will differentiate hemoperitoneum (ruptured ectopic pregnancy or hemorrhagic cyst) from pelvic sepsis (salpingitis, ruptured pelvic abscess, or ruptured appendix). Pelvic and vaginal ultrasounds are helpful in the diagnosis of PID. In the early stages of infection, the ultrasound may appear normal. As the disease progresses, nonspecific findings can include free pelvic fluid, endometrial thickening, and uterine cavity distension by fluid or gas. In some instances the borders of the uterus and ovaries appear indistinct. Enlarged ovaries accompanied by increased numbers of small cysts correlate with PID. Laparoscopy is infrequently used to diagnose pelvic inflammatory disease since it is not readily available. Moreover, it might not detect subtle inflammation of the fallopian tubes, and it fails to detect endometritis. Nevertheless, laparoscopy is conducted if the diagnosis is not certain or if the person has not responded to antibiotic therapy after 48 hours. No single test has adequate sensitivity and specificity to diagnose pelvic inflammatory disease. A large multisite U.S. study found that cervical motion tenderness as a minimum clinical criterion increases the sensitivity of the CDC diagnostic criteria from 83 percent to 95 percent. However, even the modified 2002 CDC criteria do not identify women with subclinical disease. == Prevention == Regular testing for sexually transmitted infections is encouraged for prevention. The risk of contracting pelvic inflammatory disease can be reduced by the following: Using barrier methods such as condoms; see human sexual behaviour for other listings. Using latex condoms to prevent STIs that may go untreated. Seeking medical attention if you are experiencing symptoms of PID. Using hormonal combined contraceptive pills, which also help in reducing the chances of PID by thickening the cervical mucosal plug and hence preventing the ascent of causative organisms from the lower genital tract. Seeking medical attention after learning that a current or former sex partner has, or might have had, a sexually transmitted infection. Getting an STI history from your current partner and strongly encouraging that they be tested and treated before intercourse. Diligence in avoiding vaginal activity, particularly intercourse, after the end of a pregnancy (delivery, miscarriage, or abortion) or certain gynecological procedures, to ensure that the cervix closes. Reducing the number of sexual partners, as in sexual monogamy. Avoiding the use of douches, which can upset the natural vaginal microbiota balance. 
== Treatment == Treatment is often started without confirmation of infection because of the serious complications that may result from delayed treatment. Treatment depends on the infectious agent and generally involves the use of antibiotic therapy, although there is no clear evidence of which antibiotic regimen is more effective and safe in the management of PID. If there is no improvement within two to three days, the patient is typically advised to seek further medical attention. Hospitalization sometimes becomes necessary if there are other complications. Treating sexual partners for possible STIs can help in treatment and prevention. Treatment should be started without waiting for STI test results, and should not be delayed for longer than 2–3 days, since delay increases the risk of infertility. For women with PID of mild to moderate severity, parenteral and oral therapies appear to be effective. Short- and long-term outcomes do not differ whether antibiotics are administered on an inpatient or outpatient basis. Typical regimens include cefoxitin or cefotetan plus doxycycline, and clindamycin plus gentamicin. An alternative parenteral regimen is ampicillin/sulbactam plus doxycycline. Erythromycin-based medications can also be used. A single study suggests superiority of azithromycin over doxycycline. Another alternative is to use a parenteral regimen with ceftriaxone or cefoxitin plus doxycycline. Clinical experience guides decisions regarding transition from parenteral to oral therapy, which usually can be initiated within 24–48 hours of clinical improvement. When PID is caught early there are treatments that can be utilized; however, these treatments will not undo any damage PID may have caused. In those with a previous PID diagnosis, exposure to another STI increases the risk of PID recurring. Early treatment cannot prevent the following: chronic abdominal pain, infertility and/or ectopic pregnancies, and scar tissue within or outside the fallopian tubes. == Prognosis == Early diagnosis and immediate treatment are vital in reducing the chances of later complications from PID. Delaying treatment for even a few days could greatly increase the chances of further complications. Even when the PID infection is cured, effects of the infection may be permanent or long-lasting. This makes early identification essential. A limitation is that diagnostic tests are not included in routine check-ups and the diagnosis cannot be made from signs and symptoms alone; the required diagnostic tests are invasive. Treatment resulting in cure is very important in the prevention of damage to the reproductive system. Around 20 percent of women with PID develop infertility. Even women who do not experience intense symptoms or are asymptomatic can become infertile. This can be caused by the formation of scar tissue due to one or more episodes of PID, and can lead to tubal blockage. Both of these increase the risk of being unable to become pregnant, and in about 1% of cases an ectopic pregnancy results. Chronic pelvic/abdominal pain develops after PID about 40% of the time. Certain occurrences, such as pelvic surgery, the period of time immediately after childbirth (postpartum), miscarriage, or abortion, increase the risk of acquiring another infection leading to PID. == Epidemiology == Globally about 106 million cases of chlamydia and 106 million cases of gonorrhea occurred in 2008. 
The number of cases of PID, however, is not clear. This is largely due to diagnostic tests being invasive and not included in routine check-ups, despite PID being the most common reason for individuals to be admitted under gynecological care. It is estimated to affect about 1.5 percent of young women yearly. In the United States PID is estimated to affect about one million people yearly. Rates are highest among teenagers and first-time mothers. PID causes over 100,000 women to become infertile in the US each year. == Prevalence == Records show that: 18 per 10,000 recorded discharges in the US follow a diagnosis of PID; the prevalence of self-reported cases of PID among women aged 18–44 was approximately 4.4%; PID carries an associated risk in women with a previous STI diagnosis compared to women with no previous STI diagnosis; and 1.1% of women aged 16–46 in England and Wales are diagnosed with PID. Despite the indications of a general decrease in PID rates, there is an observed rise in the prevalence of gonorrhea and chlamydia. Accordingly, in order to decrease the prevalence of PID, one should test for gonorrhea and chlamydia. Two nationally representative probability surveys, the National Health and Nutrition Examination Survey (NHANES) and the National Survey of Family Growth (NSFG), surveyed women aged 18 to 44 from 2013 to 2014. The results: 2.5 million women have had a PID diagnosis in the past. The self-reported history decreased from 4.1% in 2013 to 3.6% in 2017. It is possible that increased screening at annual gynecologist appointments has led to earlier detection and prevention of PID. In white non-Hispanic women, the prevalence decreased from 4.9% to 3.9%, and in Hispanic women, the prevalence decreased from 5.3% to 3.7%. In black non-Hispanic women, the prevalence increased from 3.8% to 6.3%. The highest burden of PID recently is in black women and women living in the Southern United States, where there is a higher prevalence of STIs as well. Disparities between races could be due to lower socioeconomic status. Those with a lower income are less likely to get an annual gynecologist appointment or other preventative measures and are more likely to be uninsured. == Population at risk == Those who are sexually active with female (intact) reproductive organs and are under the age of 25 (PID is rarely observed in females who have had a hysterectomy); overall age range 18–44; those who have an STI that has gone untreated; women with more than one sexual partner; and those with inconsistent condom use who are not in a mutually monogamous relationship. == References == == External links == CDC Unpacking PID: Mysterious Microbes, Diagnostic Dilemmas and Triple Treatments Webinar - 2013 National STD Curriculum, University of Washington Spach, D.H.; Wendel, K.A. (April 2023). "Pelvic Inflammatory Disease". Core Concepts. Spach, D.H.; Wendel, K.A. (April 2023). "Pelvic Inflammatory Disease". STD Lessons (2nd ed.).
Wikipedia/Pelvic_inflammatory_disease
Abnormal uterine bleeding is vaginal bleeding from the uterus that is abnormally frequent, lasts excessively long, is heavier than normal, or is irregular. The term "dysfunctional uterine bleeding" was used when no underlying cause was present. Quality of life may be negatively affected. The underlying causes may be structural or non-structural and are classified in accordance with FIGO systems 1 and 2. Common causes include: ovulation problems, fibroids, the lining of the uterus growing into the uterine wall, uterine polyps, underlying bleeding problems, side effects from birth control, or cancer. Susceptibility to each cause is often dependent on an individual's stage in life (prepubescent, premenopausal, postmenopausal). More than one category of causes may apply in an individual case. The first step in work-up is to rule out a tumor or pregnancy. Vaginal bleeding during pregnancy may be abnormal in certain circumstances; see obstetrical bleeding and early pregnancy bleeding for more information. Medical imaging or hysteroscopy may help with the diagnosis. Treatment depends on the underlying cause. Options may include hormonal birth control, gonadotropin-releasing hormone agonists, tranexamic acid, nonsteroidal anti-inflammatory drugs, and surgery such as endometrial ablation or hysterectomy. Over the course of a year, roughly 20% of reproductive-aged women self-report at least one symptom of abnormal uterine bleeding. == Signs and symptoms == Although uterine bleeding can be alarming and abnormal, there are many instances in which uterine bleeding is normal. FIGO System 1 is the first part of the classification system developed by the International Federation of Gynecology and Obstetrics to standardize the differences between normal uterine bleeding and abnormal uterine bleeding based on frequency, duration, regularity and individual flow volume. === Normal uterine bleeding === A monthly menstrual cycle occurring every 21–35 days (the most common cause of uterine bleeding). Neonatal uterine bleeding can occur in newborn females due to rapidly decreasing estrogen levels. Postpartum lochia is a bloody discharge that occurs after pregnancy and can last for several weeks. Uterine procedures such as biopsies, myomectomies, intrauterine device insertion and Pap smears can cause light bleeding that may last for several days. === Abnormal uterine bleeding === Menstrual bleeding that starts before 21 days or after 35 days. Menstrual bleeding that lasts more than 7 days. Heavy menstrual bleeding that necessitates changing a pad or tampon roughly every hour (about 80 mL of blood loss). Any bleeding between menstrual cycles, after sexual intercourse, or after six months of menopause. Premenopausal menstrual bleeding that stops for more than 3 months. == Causes and mechanisms == The causes of abnormal uterine bleeding are divided into nine categories (PALM-COEIN) under the FIGO System 2, which is the second part of the classification system developed by the International Federation of Gynecology and Obstetrics. More than one category of causes may apply in an individual case. Causes of abnormal uterine bleeding can also be narrowed down according to age group, because each stage of life brings unique changes to an individual's uterine structure and systemic hormones. The prepubescent group includes all persons with a uterus who have not yet started menstruation (monthly bleeding). Newborn uterine bleeding is a normal occurrence and should gradually stop as estrogen leaves the infant's body. 
Any bleeding outside of the newborn period is abnormal and should be investigated for a cause, including sexual abuse. The premenopausal group includes all persons with a uterus who have started and are currently experiencing menstruation. Adolescents between the ages of 13 and 19 commonly experience irregular menstrual cycles as their hormones and ovulation cycles regulate. Birth control, coagulopathies, pregnancy, abnormal uterine lining growths and infection are also common causes of abnormal bleeding in this age range. Adults between the ages of 20 and 40 most commonly experience abnormal uterine bleeding due to pregnancy and hormonal birth control. Uterine structural abnormalities (see PALM in the chart below) and ovulatory and endometrial dysregulation are also common causes. Uterine cancer is a rare cause of abnormal uterine bleeding in this group. The postmenopausal group includes all persons with a uterus who have stopped menstruation for more than one year, or 12 consecutive months. Declining ovulatory function, or menopause, is the most common cause of abnormal bleeding. Menstrual bleeding becomes gradually less frequent and lighter until it completely stops. Thinning or overgrowth of the uterine lining, as well as cancer, are common causes of concern for abnormal uterine bleeding. The mechanisms, or reasons, by which each of the PALM-COEIN abnormalities causes uterine bleeding are not well understood, but the table below includes some scientific hypotheses and observations that give a strong indication of what may be happening. For more in-depth information about each of these causes, see the links in the table below. == Diagnosis == Diagnosis of abnormal uterine bleeding starts with a medical history and physical examination. Normal menstrual bleeding patterns vary from woman to woman, so the medical history covers specific details about the woman's individual menstrual bleeding pattern, such as its predictability, length, volume, and whether she experiences cramps or other pain. The healthcare provider will also check to see whether she or any family members have any potentially related health conditions, and whether she is taking medication that might increase or decrease menstrual bleeding, such as herbal supplements, hormonal contraceptives, over-the-counter drugs such as aspirin, or blood thinners. Medical tests include a blood test, to see whether the abnormal bleeding has caused anemia, and a pelvic ultrasound, to see whether the abnormal bleeding is caused by a structural problem, such as a uterine fibroid. Ultrasound is specifically recommended in those over the age of 35 or those in whom bleeding continues despite initial treatment. Laboratory assessment of thyroid stimulating hormone (TSH), pregnancy, and chlamydia is also recommended. More extensive testing might include magnetic resonance imaging and endometrial sampling. Endometrial sampling is recommended in those over the age of 45 who do not improve with treatment and in those with intermenstrual bleeding that persists. The PALM-COEIN system may be used to classify the uterine bleeding. == Management == Treatment depends on the underlying cause. Options may include hormonal birth control, gonadotropin-releasing hormone (GnRH) agonists, tranexamic acid, nonsteroidal anti-inflammatory drugs, and surgery such as endometrial ablation or hysterectomy. Polyps, adenomyosis, and cancer are generally treated by surgery. Iron supplementation may be needed. 
== Terminology == The terminology "dysfunctional uterine bleeding" is no longer recommended. Historically, dysfunctional uterine bleeding meant that no structural or systemic problems were present. In abnormal uterine bleeding, underlying causes may be present. == Epidemiology == About one-third of all medical appointments with gynecologists involve abnormal uterine bleeding, with the proportion rising to 70% in the years around menopause. == References == == External links == Merck Manual Abnormal Uterine Bleeding
Wikipedia/Dysfunctional_uterine_bleeding
A vaginal disease is a pathological condition that affects part or all of the vagina. == Types == === Sexually transmitted infections === Sexually transmitted infections that affect the vagina include: Herpes genitalis. The herpes simplex virus (HSV) can infect the vulva, vagina, and cervix, and this may result in small, painful, recurring blisters and ulcers. It is also common for there to be an absence of any noticeable symptoms. Gonorrhea Chlamydia Trichomoniasis Human papillomavirus (HPV), which may cause genital warts. Because of STIs, health authorities and other health outlets recommend safe sex practices when engaging in sexual activity. === Other infectious diseases === Candidal vulvovaginitis Bacterial vaginosis (BV), associated with Gardnerella and formerly called "nonspecific vaginitis" === Vaginismus === Vaginismus, which is not the same thing as vaginitis (an inflammation of the vagina), is an involuntary tightening of the vagina due to a conditioned reflex of the muscles in the area during vaginal penetration. It can affect any form of vaginal penetration, including sexual intercourse, insertion of tampons and menstrual cups, and the penetration involved in gynecological examinations. Various psychological and physical treatments are possible to help alleviate it. === Obstruction === A vaginal obstruction is often caused by an imperforate hymen or, less commonly, a transverse vaginal septum. A sign of vaginal obstruction is hydrocolpos, that is, accumulation of watery fluid within the vagina. It may extend to become hydrometrocolpos, that is, accumulation of watery fluid within the vagina as well as within the uterus. === Hypoplasia === Vaginal hypoplasia is the underdevelopment or incomplete development of the vagina. Vaginal hypoplasia can vary in severity from being smaller than normal to being completely absent. The absence of a vagina is a result of vaginal agenesis. Diagnostically, it may look similar to a vaginal obstruction. It is frequently associated with Mayer-Rokitansky-Küster-Hauser (MRKH) syndrome, in which the most common result is an absent uterus in conjunction with a deformed or missing vagina, despite the presence of normal ovaries and normal external genitalia. It is also associated with cervical agenesis, in which the uterus is present but the uterine cervix is absent. === Lumps === The presence of unusual lumps in the wall or base of the vagina is always abnormal. The most common of these is Bartholin's cyst. The cyst, which can feel like a pea, is formed by a blockage in glands which normally supply the opening of the vagina. This condition is easily treated with minor surgery or silver nitrate. A less common cause of small lumps or vesicles is herpes simplex. These lumps are usually multiple and very painful, with a clear fluid that leaves a crust. They may be associated with generalized swelling and are very tender. Lumps associated with cancer of the vaginal wall are very rare and the average age of onset is seventy years. The most common form is squamous cell carcinoma, followed by cancer of the glands (adenocarcinoma) and, even more rarely, vaginal melanoma. === Persistent genital arousal disorder === Persistent genital arousal disorder (PGAD) results in a spontaneous, persistent, and uncontrollable genital arousal, with or without orgasm, unrelated to any feelings of sexual desire. 
Because PGAD is relatively rare and, as its own concept apart from clitoral priapism (a rare, potentially painful medical condition in which, for an unusually extended period of time, the erect clitoris does not return to its relaxed state), has only been researched since 2001, there is little research into what may cure or remedy the disorder. In some recorded cases, PGAD was caused by, or caused, a pelvic arterial-venous malformation with arterial branches to the clitoris; surgical treatment was effective in these cases. === Other === Vulvodynia Vaginal prolapse may result from weakened pelvic muscles, a common result of childbirth; in this prolapse, the rectum, uterus, or bladder pushes on the vagina, and severe cases result in the vagina protruding out of the body. Kegel exercises have been used to strengthen the pelvic floor, and may help prevent or remedy vaginal prolapse. Cervical cancer (may be prevented by Pap smear screening and HPV vaccines) Vaginal cancer is very rare, but its symptoms include abnormal vaginal bleeding or vaginal discharge. Air embolism is a potentially fatal condition where an air bubble travels through the bloodstream and can obstruct a vessel. It can result if air is blown into a pregnant woman's vagina during cunnilingus; this is because pregnant women have an increased vascularity of the vagina and uterus, and an air embolism can force air into the uterine veins. == Symptoms == === Discharge === Most vaginal discharges occur due to normal bodily functions, such as menstruation or sexual arousal (vaginal lubrication). Abnormal discharges, however, can indicate disease. Normal vaginal discharges include blood or menses (from the uterus), which is the most common, and clear fluid, either as a result of sexual arousal or as secretions from the cervix. Other non-infective causes include dermatitis. Non-sexually transmitted discharges occur from bacterial vaginosis, aerobic vaginitis and thrush or candidiasis. The final group of discharges include the sexually transmitted diseases gonorrhea, chlamydia, and trichomoniasis. The discharge from thrush is slightly pungent and white, that from trichomoniasis more foul and greenish, and that from foreign bodies resembles the discharge of gonorrhea, greyish or yellow and purulent (pus-like). === Sores === All sores involve a breakdown of the fine membrane of the vaginal wall. The most common of these are abrasions and small ulcers caused by trauma. While these can be inflicted during rape, most are actually caused by excessive rubbing from clothing or improper insertion of a sanitary tampon. The typical ulcer or sore caused by syphilis is painless with raised edges. These are often undetected because they occur mostly inside the vagina. The sores of herpes, which occur with vesicles, are extremely tender and may cause such swelling that passing urine is difficult. In the developing world, a group of parasitic diseases, such as leishmaniasis, also cause vaginal ulceration, but these are rarely encountered in the West. All of the aforementioned local vulvovaginal diseases are easily treated. Often, only shame prevents patients from presenting for treatment. === Inflammation === Vaginitis is an inflammation of the vagina, such as that caused by infection, hormone disturbance or irritation/allergy. == See also == List of bacterial vaginosis microbiota == References ==
Wikipedia/Vaginal_disease
An endometrial polyp or uterine polyp is a mass in the inner lining of the uterus. They may have a large flat base (sessile) or be attached to the uterus by an elongated pedicle (pedunculated). Pedunculated polyps are more common than sessile ones. They range in size from a few millimeters to several centimeters. If pedunculated, they can protrude through the cervix into the vagina. Small blood vessels may be present, particularly in large polyps. == Signs and symptoms == They often cause no symptoms. Where they occur, symptoms include irregular menstrual bleeding, bleeding between menstrual periods, excessively heavy menstrual bleeding (menorrhagia), and vaginal bleeding after menopause. Bleeding from the blood vessels of the polyp contributes to an increase of blood loss during menstruation and blood "spotting" between menstrual periods, or after menopause. If the polyp protrudes through the cervix into the vagina, pain (dysmenorrhea) may result. == Cause == No definitive cause of endometrial polyps is known, but they appear to be affected by hormone levels and grow in response to circulating estrogen. Risk factors include obesity, high blood pressure and a history of cervical polyps. Taking tamoxifen or hormone replacement therapy can also increase the risk of uterine polyps. The use of an intrauterine system containing levonorgestrel in women taking tamoxifen may reduce the incidence of polyps. == Diagnosis == Endometrial polyps can be detected by vaginal ultrasound (sonohysterography), hysteroscopy and dilation and curettage. Detection by ultrasonography can be difficult, particularly when there is endometrial hyperplasia (excessive thickening of the endometrium). Larger polyps may be missed by curettage. Endometrial polyps can be solitary or occur with others. They are round or oval and measure between a few millimeters and several centimeters in diameter. They are usually the same red/brown color of the surrounding endometrium, although large ones can appear to be a darker red. The polyps consist of dense, fibrous tissue (stroma), blood vessels and glandlike spaces lined with endometrial epithelium. If they are pedunculated, they are attached by a thin stalk (pedicle). If they are sessile, they are connected by a flat base to the uterine wall. Pedunculated polyps are more common than sessile ones. == Treatment == Polyps can be surgically removed using curettage with or without hysteroscopy. When curettage is performed without hysteroscopy, polyps may be missed. To reduce this risk, the uterus can be first explored using grasping forceps at the beginning of the curettage procedure. Hysteroscopy involves visualising the endometrium (inner lining of the uterus) and polyp with a camera inserted through the cervix. Large polyps can be cut into sections before each section is removed. The presence of cancerous cells may suggest a hysterectomy (surgical removal of the uterus). A hysterectomy is usually not considered when cancer is not present. In either procedure, a general anesthetic is typically used. The effects of polyp removal on fertility have not been studied. == Prognosis == Endometrial polyps are usually benign, although some may be precancerous or cancerous. About 0.5% of endometrial polyps contain adenocarcinoma cells. Polyps can increase the risk of miscarriage in women undergoing IVF treatment. If they develop near the fallopian tubes, they may lead to difficulty in becoming pregnant. 
Although treatments such as hysteroscopy usually cure the polyp concerned, recurrence of endometrial polyps is frequent. Untreated, small polyps may regress on their own. == Epidemiology == Endometrial polyps usually occur in women in their 40s and 50s. Endometrial polyps occur in up to 10% of women. It is estimated that they are present in 25% of women with abnormal vaginal bleeding. == See also == Cervical polyp Uterine fibroids == References == == External links ==
Wikipedia/Endometrial_polyp
A vulvar disease is a particular abnormal, pathological condition that affects part or all of the vulva. Several pathologies are defined. Some can be prevented by vulvovaginal health maintenance. == Vulvar cancer == Vulvar cancer accounts for about 5% of all gynecological cancers and typically affects women in later life. Five-year survival rates in the United States are around 70%. Symptoms of vulvar cancer include itching, a lump or sore on the vulva which does not heal and/or grows larger, and sometimes discomfort/pain/swelling in the vulval area. Treatments include vulvectomy – removal of all or part of the vulva. == Vulvo-perineal localization of dermatologic disorders == Systemic disorders may be localized in the vulvo-perineal region. In Langerhans cell histiocytosis, lesions initially are erythematous, purpuric papules and they then become scaly, crusted and sometimes confluent. In Kawasaki disease, an erythematous, desquamating perineal rash may occur in the second week of symptom onset, almost at the same time as palmoplantar desquamation. Acrodermatitis enteropathica is a biochemical disorder of zinc metabolism. Diaper dermatitis in infancy == Blemishes and cysts == Epidermal cysts Angiomas Moles Freckles Lentigos Scars Scarification Vitiligo Tattoos Hypertrophy Sinus pudoris Bartholin's cyst Skene's duct cyst, a paraurethral cyst == Infections == Candidiasis (thrush) Bacterial vaginosis (BV) Genital warts, due to human papilloma virus (HPV) Molluscum contagiosum Herpes simplex (genital herpes) Herpes zoster (shingles) Tinea cruris (fungus) Hidradenitis suppurativa Infestations with pinworms (rare), scabies and lice. == Inflammatory diseases == Eczema/Dermatitis Lichen simplex (chronic eczema) Psoriasis Lichen sclerosus Lichen planus Zoon's vulvitis (Zoon's balanitis in men) Pemphigus vulgaris Pemphigoid (mucous membrane pemphigoid, cicatricial pemphigoid, bullous pemphigoid) == Pain syndromes == Vulvodynia and vulvar vestibulitis Vaginismus == Ulcers == Aphthous ulcer Behcet's Disease == Developmental disorders == Septate vagina Vaginal opening extremely close to the urethra or anus An imperforate hymen Various stages of genital masculinization including fused labia, an absent or partially formed vagina, urethra located on the clitoris. Intersex === Tumoral and hamartomatous diseases === Hemangiomas and vascular dysplasia may involve the perineal region Infantile perianal pyramidal protrusion == Other == Vulvar lymphangioma Paget's disease of the vulva Vulvar intraepithelial neoplasia (VIN) Bowen's disease Bowenoid papulosis Vulvar varicose veins Labial adhesions Perineodynia (perineal pain) Desquamative inflammatory vaginitis (DIV) Childbirth tears and episiotomy related changes Vestibulodynia == See also == Vaginal disease list of ICD-10 codes == References ==
Wikipedia/Vulvar_disease
Apolipoprotein AI (Apo-AI) is a protein that in humans is encoded by the APOA1 gene. As the major component of high-density lipoprotein (HDL) particles, it has a specific role in lipid metabolism. == Structure == APOA1 is located on chromosome 11, with its specific location being 11q23-q24. The gene contains 4 exons. The encoded apolipoprotein AI is a 28.1 kDa protein composed of 243 amino acids; 21 peptides have been observed through mass spectrometry data. Due to alternative splicing, there exist multiple transcript variants of APOA1, including at least one which encodes an Apo-AI preprotein. == Function == Apolipoprotein AI is the major protein component of high density lipoprotein (HDL) particles in plasma. Chylomicrons secreted from the intestinal enterocyte also contain Apo-AI, but it is quickly transferred to HDL in the bloodstream. The protein, as a component of HDL particles, enables efflux of fat molecules by accepting fats from within cells (including macrophages within the walls of arteries which have become overloaded with ingested fats from oxidized low-density lipoprotein (LDL) particles) for transport (in the water outside cells) elsewhere, including back to LDL particles or to the liver for excretion. It is a cofactor for lecithin–cholesterol acyltransferase (LCAT), which is responsible for the formation of most plasma cholesteryl esters. Apolipoprotein AI has also been isolated as a prostacyclin (PGI2) stabilizing factor, and thus may have an anticlotting effect. Defects in the gene encoding it are associated with HDL deficiencies, including Tangier disease, and with systemic non-neuropathic amyloidosis. Apo-AI is often used as a biomarker for prediction of cardiovascular diseases. The apoB-100/apoA-I ratio (i.e. LDL and larger particles vs. HDL particles), and even more so NMR-measured lipoprotein (LDL/HDL) particle ratios, has consistently shown a stronger correlation with myocardial infarction event rates than older methods of measuring lipid transport in the water outside cells. Apo-AI is routinely measured using immunoassays such as ELISA or nephelometry. == Applications == Apo-AI can be used to create in vitro lipoprotein nanodiscs for cell-free membrane expression systems. == Clinical significance == === Activity associated with high HDL-C and protection from heart disease === As a major component of the high-density lipoprotein complex (protective "fat removal" particles), Apo-AI helps to clear fats, including cholesterol, from white blood cells within artery walls, making the white blood cells (WBCs) less likely to become fat overloaded, transform into foam cells, die and contribute to progressive atheroma. Five of nine men found to carry a mutation (E164X) who were at least 35 years of age had developed premature coronary artery disease. One of four mutants of Apo-AI is present in roughly 0.3% of the Japanese population, but is found in 6% of those with low HDL cholesterol levels. ApoA-I Milano is a naturally occurring mutant of Apo-AI, found in a few families in Limone sul Garda, Italy, and, by genetic and church record family tree detective work, traced to a single individual, Giovanni Pomarelli, in the 18th century. Described in 1980, it was the first known molecular abnormality of apolipoproteins. Paradoxically, carriers of this mutation have very low HDL-C (HDL-cholesterol) levels, but no increase in the risk of heart disease, often living to age 100 or older. 
This unusual observation was what led Italian investigators to track down what was going on, and it led to the discovery of apoA-I Milano (named after Milano, the city ~160 km away in which the researchers' lab was located). Biochemically, apoA-I Milano contains an extra cysteine bridge, causing it to exist as a homodimer or as a heterodimer with Apo-AII. However, the enhanced cardioprotective activity of this mutant (which likely depends on fat and cholesterol efflux) cannot easily be replicated by other cysteine mutants. Recombinant Apo-AI Milano dimers formulated into liposomes can reduce atheromas in animal models by up to 30%. Apo-AI Milano has also been shown in small clinical trials to have a statistically significant effect in reducing (reversing) plaque build-up on arterial walls. In human trials the reversal of plaque build-up was measured over the course of five weeks. === Novel haplotypes within apolipoprotein AI-CIII-AIV gene cluster === A study from 2008 describes two novel susceptibility haplotypes, P2-S2-X1 and P1-S2-X1, discovered in the ApoAI-CIII-AIV gene cluster on chromosome 11q23, which confer an approximately threefold higher risk of coronary heart disease in normal individuals as well as in patients with type 2 diabetes mellitus. === Role in other diseases === A G/A polymorphism in the promoter of the APOA1 gene has been associated with the age at which Alzheimer's disease presents. Protection from Alzheimer's disease by Apo-AI may rely on a synergistic interaction with alpha-tocopherol. Amyloid deposited in the knee following surgery consists largely of Apo-AI secreted from chondrocytes (cartilage cells). A wide variety of amyloidosis symptoms are associated with rare APOA1 mutants. Apo-AI binds to lipopolysaccharide or endotoxin, and has a major role in the anti-endotoxin function of HDL. In one study, a decrease in Apo-AI levels was detected in schizophrenia patients' CSF, brain and peripheral tissues. === Epistatic impact of Apo-AI === Apolipoprotein AI and ApoE interact epistatically to modulate triglyceride levels in coronary heart disease patients. Individually, neither Apo-AI nor ApoE was found to be associated with triglyceride (TG) levels, but pairwise epistasis (additive x additive model) revealed a significant synergistic contribution to raised TG levels (P<0.01). === Factors affecting Apo-AI activity === A study from 2005 reported that Apo-AI production is decreased by calcitriol. It was concluded that this regulation happens at the transcriptional level: calcitriol alters as-yet-unknown coactivators or corepressors, resulting in repression of APOA1 promoter activity. Simultaneously, Apo-AI production was increased by the vitamin D antagonist ZK-191784. Exercise or statin treatment may cause an increase in HDL-C levels by inducing Apo-AI production, but this depends on the G/A promoter polymorphism. == Interactions == Apolipoprotein A1 has been shown to interact with: ABCA1 GPLD1 PLTP === Potential binding partners === Apolipoprotein AI binding precursor, a relative of APOA1 abbreviated APOA1BP, has a predicted biochemical interaction with the carbohydrate kinase domain containing protein (CARKD). The relationship between these two proteins is substantiated by co-occurrence across genomes and coexpression. The ortholog of CARKD in E. coli contains a domain not present in any eukaryotic ortholog. This domain has a high sequence identity to APOA1BP. CARKD is a protein of unknown function, and the biochemical basis for this interaction is unknown. 
=== Interactive pathway map === == See also == Apolipoprotein B Cardiovascular disease ApoA-1 Milano == References == == External links == Apolipoprotein+A-I at the U.S. National Library of Medicine Medical Subject Headings (MeSH) Applied Research on Apolipoprotein-A1 Human APOA1 genome location and APOA1 gene details page in the UCSC Genome Browser. Overview of all the structural information available in the PDB for UniProt: P02647 (Human Apolipoprotein A-I) at the PDBe-KB. Overview of all the structural information available in the PDB for UniProt: Q00623 (Mouse Apolipoprotein A-I) at the PDBe-KB.
Wikipedia/Apolipoprotein_A1
Stargardt disease is the most common inherited single-gene retinal disease. As first described, the disease follows an autosomal recessive inheritance pattern, which has since been linked to bi-allelic ABCA4 gene variants (STGD1). However, there are Stargardt-like diseases with mimicking phenotypes, referred to as STGD3 and STGD4, which have an autosomal dominant inheritance due to defects in the ELOVL4 or PROM1 genes, respectively. It is characterized by macular degeneration that begins in childhood, adolescence or adulthood, resulting in progressive loss of vision. == Signs and symptoms == The presentation usually occurs in childhood or adolescence, though there is no upper age limit for presentation and late onset is possible. The main symptom is loss of visual acuity, uncorrectable with glasses. This manifests as an inability to see fine details when reading or viewing distant objects. Symptoms typically develop before age 20 (median age of onset: ~17 years old), and include: wavy vision, blind spots, blurriness, loss of depth perception, sensitivity to glare, impaired colour vision, and difficulty adapting to dim lighting (delayed dark adaptation). There is a wide variation between individuals in the symptoms experienced as well as the rate of deterioration in vision. Vision loss can be attributed to a buildup of byproducts of vitamin A in photoreceptor cells. Peripheral vision is usually less affected than fine, central (foveal) vision. == Genetics == Historically, from Stargardt's first description of his eponymous disease until recently, the diagnosis was based on looking at the phenotype using examination and investigation of the eye. Since the advent of genetic testing, the picture has become more complex. What was thought to be one disease is, in fact, probably at least three different diseases, each related to a different genetic change. Therefore, it is currently somewhat difficult to define precisely what Stargardt disease is. Stargardt disease (STGD1) is caused by bi-allelic ABCA4 gene variants (i.e., autosomal recessive). Importantly, the exact genotype (i.e., combinations of both ABCA4 variants along with the presence of additional genetic modifiers) is highly prognostic for the age of onset and disease progression. Autosomal-dominant Stargardt-like diseases have been linked to genes such as ELOVL4 (STGD3) and PROM1 (STGD4); whether additional missense mutations play a role remains to be seen. The carrier frequency of ABCA4 alleles in the general population is 5 to 10%. Different combinations of ABCA4 alleles will result in widely different ages of onset and retinal pathology. The severity of the disease is inversely proportional to ABCA4 function, and it is thought that ABCA4-related disease has a role to play in other diseases such as retinitis pigmentosa, cone-rod dystrophies and age-related macular degeneration (AMD). STGD1: By far the most common form of Stargardt disease is the recessive form caused by mutations in the ABCA4 gene. STGD3: A rare dominant form of Stargardt disease caused by mutations in the ELOVL4 gene. STGD4: A rare dominant defect in the PROM1 gene. Late-onset Stargardt disease is associated with missense mutations outside known functional domains of ABCA4. == Pathophysiology == In STGD1, the genetic defect causes malfunction of the ATP-binding cassette transporter (ABCA4) protein of the visual phototransduction cycle. 
Defective ABCA4 leads to improper shuttling of vitamin A throughout the retina, accelerated formation of toxic vitamin A dimers (also known as bisretinoids), and associated degradation byproducts. Vitamin A dimers and other byproducts are widely accepted as the cause of STGD1. As such, slowing the formation of vitamin A dimers might lead to a treatment for Stargardt. When vitamin A dimers and byproducts damage the retinal cells, fluorescent granules called lipofuscin appear in the retinal pigmented epithelium of the retina, reflecting such damage. In STGD3, a butterfly pattern of dystrophy is caused by mutations in a gene that encodes a membrane-bound protein that is involved in the elongation of very long chain fatty acids (ELOVL4). == Diagnosis == Diagnosis is firstly clinical, through history and examination, usually with a slit lamp. If characteristic features are found, the investigations undertaken will depend on locally available equipment and may include scanning laser ophthalmoscopy, which highlights areas of autofluorescence that are associated with retinal pathology. Spectral-domain optical coherence tomography, electroretinography and microperimetry are also useful for diagnostic and prognostic purposes. Fluorescein angiography is used less often than in the past. These investigations may be followed by genetic testing in order to avoid misdiagnosis. Other diseases may have overlapping phenotypic features with Stargardt disease, and the disease itself has multiple variants. In one study, 35% of patients diagnosed with Stargardt disease through physical ophthalmic examination were found to be misdiagnosed when subsequent genetic testing was done. Genetic testing can be utilized to ensure a proper diagnosis for which the correct treatment can be applied. == Treatment == At present there is no gene therapy for Stargardt disease. However, ophthalmologists recommend measures that could slow the rate of progression. There are no prospective clinical trials to support these recommendations, but they are based on scientific understanding of the mechanisms underlying the disease pathology. There are three strategies doctors recommend for potential harm reduction: reducing retinal exposure to damaging ultraviolet light, avoiding excess vitamin A with the hope of lowering lipofuscin accumulation, and maintaining good general health and diet. MD Stem Cells' approach using bone marrow derived stem cells has shown benefit in various retinal diseases. In Stargardt, 94.1% of patients had improved vision or remained stable, with results showing high statistical significance (p=0.0004). Reasons for improvement may include transfer of organelles (mitochondria, lysosomes), enhanced clearing of toxic vitamin A byproducts, and neuroprotection of photoreceptors. Ultraviolet light has more energy and is more damaging than visible light. In an effort to mitigate this, some ophthalmologists may recommend that the patient wear a broad-brimmed hat or sunglasses when outdoors. Sometimes, doctors also instruct their patients to wear yellow-tinted glasses (which filter out blue light) when indoors and in artificial light or in front of a digital screen. Certain foods, especially carrots, are rich in vitamin A, but the amount from food is not harmful. Foods with a high vitamin A content are often yellow or orange in color, such as squash, pumpkin, and sweet potato, but some, such as liver, are not. 
There are supplements on the market with more than a daily allowance of vitamin A that should be avoided, but each individual should discuss this with their doctor. Smoking, overweight or obesity, and poor diet quality may also contribute to more rapid degeneration. On the other hand, the consumption of oily fish, in a diet similar to that which doctors recommend for age-related macular degeneration, can be used to slow the progression of the disease. Advances in technology have brought devices that help Stargardt patients who are losing their vision maintain their independence. Low-vision aids can range from hand lenses to electronic devices and can allow those losing their vision to carry out daily activities. Some patients may even opt for in-person services. == Prognosis == The long-term prognosis for patients with Stargardt disease is widely variable and depends on the age of onset and genetic alleles. The majority of patients will progress to legal blindness, which means that central reading vision will be lost. However, perimetry and microperimetry studies indicate that peripheral light sensitivity is preserved over a long time in a significant fraction of all patients (i.e., >50%). Stargardt disease has no impact on general health and life expectancy is normal. Some patients, usually those with the late-onset form, can maintain excellent visual acuities for extended periods and are therefore able to perform tasks such as reading or driving. == Epidemiology == A 2017 prospective epidemiologic study that recruited 81 patients with STGD over 12 months reported an incidence of between 1 and 1.28 per 10,000 individuals. The median age of presentation was 27 years (range 5–64 years), most (90%) were symptomatic, with a median visual acuity of Snellen equivalent 20/66. == History == Karl Stargardt (1875–1927) was a German ophthalmologist born in Berlin. He studied medicine at the University of Kiel, qualifying in 1899. He later became head of Bonn University's ophthalmology clinic, followed by a post as chair of ophthalmology at the University of Marburg. In 1909 he described 7 patients with a recessively inherited macular dystrophy, now known as Stargardt's disease, characterized by a progressive and severe reduction of central vision that develops in the first and second decades of life. == Research == There are several clinical trials in various stages involving several potential therapeutic areas: gene therapy, stem cell therapy, drug therapy and artificial retinas. In general all are testing the safety and benefits of their respective therapies in phase I or II trials. These studies are designed to evaluate safety, dose and effectiveness in a small number of people in Phase I, with Phase II evaluating similar criteria in a larger population while providing greater insight into potential side effects. Gene therapy is designed to insert a copy of a corrected gene into retinal cells. The hope is to return cell function to normal, and the treatment has the potential to stop disease progression. This therapy will not restore impaired vision back to normal. The research is being undertaken by a partnership between Sanofi and Oxford BioMedica. A lentiviral vector is used to deliver a normal gene to the target tissue via a subretinal injection. The therapy is known as SAR422459; it was terminated prematurely after development of the drug product was halted. 
Kubota Vision is in Phase III clinical trials of a visual cycle modulator that modulates RPE65 activity to treat Stargardt's. Kubota Vision published the results of a dose-range study of a drug known as Emixustat, with findings that will affect dose selection for their Phase III trial, set to complete in June 2022. Stem-cell therapy involves injecting cells with the potential to mature into differentiated and functioning retinal cells. This therapy has the potential to stop disease progression and, in the long term, improve vision. To improve vision, this technique will need to replicate the complex, multi-layered neural anatomy of the retina. There are a number of research groups working with stem cells, one of which is Ocata Therapeutics. Alkeus Pharma is evaluating the potential of deuterated vitamin A as the drug ALK-001. The hope is that the deuterated vitamin A will reduce the build-up of toxic vitamin A metabolites in the retina and therefore slow the rate of visual deterioration. To create deuterated vitamin A, some of the hydrogen atoms are replaced with the isotope deuterium, which has an extra neutron and is therefore twice the standard atomic weight of hydrogen. A Phase II clinical trial is taking place using ALK-001 with an estimated completion date of December 2024. MD Stem Cells, a research-physician clinical development company using autologous bone marrow derived stem cells (BMSC), has released results of the Stargardt disease cohort within their ongoing Stem Cell Ophthalmology Study II (SCOTS2) clinical trial (NCT 03011541). Average visual improvement was 17.96% (95% CI, 16.39 to 19.53%), with 61.8% of eyes improving and 23.5% remaining stable, and with no adverse events occurring. Retinal implants are in the early stages of development and their use could be of benefit to many people with visual impairment, though implanting and maintaining an electrical device within the eye that interfaces with the optic nerve presents many challenges. An example of such a device is the Argus retinal prosthesis: the camera is an external device held on spectacles; the camera signal is processed and then fed via wires into the retina, terminating in electrodes that interface with the optic nerve. == References == == External links == NCBI Genetic Testing Registry
Wikipedia/Stargardt_disease
Surfactant metabolism dysfunction is a condition where pulmonary surfactant is insufficient for adequate respiration. Surface tension at the liquid-air interface in the alveoli makes the air sacs prone to collapsing after expiration. This is because water molecules in the liquid-air surface of the alveoli are more attracted to one another than they are to molecules in the air. For sphere-like structures like alveoli, water molecules line the inner walls of the air sacs and stick tightly together through hydrogen bonds. These intermolecular forces put great restraint on the inner walls of the air sac, tightening the surface and making it unyielding to the stretch needed for inhalation. Thus, without something to alleviate this surface tension, alveoli can collapse and cannot be filled up again. Surfactant is an essential mixture that is released onto the air-facing surface of the inner walls of the air sacs to lessen the strength of surface tension. This mixture inserts itself among water molecules and breaks up the hydrogen bonds that hold the tension. Multiple lung diseases, like ISD or RDS, in newborns as well as late-onset cases, have been linked to dysfunction of surfactant metabolism. Surfactant is a mixture of 90% phospholipids and 10% proteins, produced by epithelial type II cells in the alveoli. This mixture is made and packaged into lysosomally-derived structures called lamellar bodies. Lamellar bodies are then secreted onto the liquid-air interface of the alveoli through membrane fusion initiated by an influx of Ca2+. Released pulmonary surfactant acts as a protective layer to prevent alveoli from collapsing due to surface tension. Furthermore, surfactant also contains some innate immune components to defend against pulmonary infections. Surfactant proteins are classified into two types: hydrophilic proteins that are responsible for the innate immune system, and hydrophobic proteins that carry out the physical functions of pulmonary surfactant. Surfactant metabolism dysfunction involves mutations or malfunctions of those hydrophobic proteins that lead to an ineffective surfactant layer and loss of protection of alveolar integrity. SP-B and SP-C are the two hydrophobic surfactant proteins that participate in its physical functions; these proteins are encoded by the SFTPB and SFTPC genes on chromosomes 2 and 8, respectively. Thus, mutations in these genes produce incomplete or nonfunctioning SP-B and SP-C proteins and lead to lung diseases. Both SP-B and SP-C are synthesized in epithelial type II cells as large precursor proteins (proSP-B and proSP-C) and subsequently cleaved by proteolytic enzymes at both the amino and carboxyl termini to produce functional mature proteins. proSP-B and proSP-C are first made in the endoplasmic reticulum of epithelial type II cells; they are then translocated through the Golgi apparatus to multivesicular bodies for delivery to lamellar bodies. During this transition, proteolytic processing begins to cleave the precursor proteins. Once the multivesicular body reaches the membrane of the lamellar body, both membranes fuse together so that the processed proteins can be transported into the lamellar body, where the last steps of maturation for both SP-B and SP-C occur. When the lamellar body is ready to be secreted, exocytosis is initiated through an influx of Ca2+, and the lamellar body membrane fuses with the plasma membrane to release the surfactant phospholipid contents onto the surface of the cell. SP-B and SP-C are responsible for adsorption of the lipid monolayer at the liquid-air interface to prevent post-expiration atelectasis. 
Used surfactant phospholipid materials are taken up by epithelial type II cells and pulmonary macrophages. Another important protein that contributes to the outcome of surfactant metabolism dysfunction is ABCA3, a transmembrane phospholipid transporter in the lamellar body. ABCA3 has two ATP-binding sites in the cytoplasmic domain to power phospholipid transport through ATP hydrolysis. ABCA3 is synthesized in the endoplasmic reticulum and transported through the Golgi apparatus to the membrane of the lamellar body. Once inserted into the membrane, ABCA3 can help deliver surfactant lipids into the lumen of the lamellar body, creating a tightly packed internal environment of surfactant lipids and surfactant proteins. Mutations in ABCA3 cause failure in lamellar body synthesis and result in decreased production of surfactant, along with deficiency of SP-B and SP-C. == Cause == Surfactant metabolism dysfunction describes a group of dysfunctions caused by different mutations in surfactant-related genes. Severe deficiency of pulmonary surfactant due to disturbed metabolism of any of these proteins can lead to some form of interstitial lung disease in newborns and adults. These conditions share similar pathophysiology and overlapping phenotypes because surfactant gene products interactively communicate with and control one another. Thus, dysfunction of a surfactant protein, or a related protein, generates deficiencies of others. === SFTPB mutations === Most disease-causing mutations in SFTPB result in a complete lack of mature SP-B protein (OMIM 265120). Lung disease is inherited in an autosomal recessive manner, requiring mutations in both alleles. Surfactant produced by infants with SP-B deficiency is abnormal in composition and does not function normally in lowering surface tension. More than 40 different mutations along the length of the SFTPB gene have been reported in surfactant metabolism dysfunction. SFTPB mutations are inherited in an autosomal recessive fashion; loss-of-function mutations in both alleles are required for full expression of the disease. About two-thirds (60–70%) of the reported disease-causing alleles come from a frameshift mutation, called 121ins2, in exon 4 of the SFTPB gene, which also accounts for ~65% of US cases. The rest of the mutated alleles come from nonsense, missense, splice-site mutations, and other possible insertion and deletion mutations throughout the entire gene. These mutations cause total absence or loss of function of SP-B and lead to an imbalance in surfactant homeostasis. Since SP-B has a major role in surfactant biogenesis and in spreading of the surfactant lipid layer, any disruption to SP-B results in ineffective respiration and lethal pulmonary conditions at birth. The pathology in full-term infants resembles that of newborns with respiratory distress syndrome. Imaging of epithelial type II cells with SP-B deficiency shows immature lamellar bodies without tightly packed membranes, but rather with loose and unorganized membranes. The phospholipid-to-protein ratio also decreases, with abnormal phospholipids present. In addition, surfactant collected from SP-B-deficient epithelial type II cells is not as effective as normal surfactant in lowering surface tension and creating a film. Immunohistochemical features of SP-B deficiency show decreased levels of proSP-B and SP-B proteins, along with an increased presence of the immune protein SP-A and partially processed intermediate peptides of proSP-C. 
The appearance of partially processed proSP-C shows the significance of mature SP-B in the biogenesis and processing of SP-C. Absence of both proSP-B and SP-B proteins is observed in frameshift and nonsense mutations of SFTPB, while low levels of proSP-B are detected in missense and in-frame deletion or insertion mutations. However, these mutations prevent proSP-B from fully maturing into SP-B, resulting in deficiency of SP-B and surfactant. === SFTPC mutations === Familial cases of SP-C dysfunction (OMIM 610913) are inherited in an autosomal dominant pattern, although the onset and severity of lung disease are highly variable, even within the same family. More than 40 distinct mutation variations in the SFTPC gene have also been described in patients. Wild-type SP-C proteins are embedded inside the phospholipid bilayer of epithelial type II cells and function to generate and maintain the monolayer of surfactant on the alveolar surface. Individuals with mutated SFTPC genes tend to manifest lung diseases in late childhood or adulthood. Mutated alleles are inherited in an autosomal dominant fashion, although de novo mutations can also cause sporadic emergence of disease. The age of onset and severity vary significantly among patients with SFTPC mutations; some only manifest symptoms in the fifth or sixth decade of life. Most of these mutations are missense, but there have been reports of frameshift and splice-site mutations, together with small insertions or deletions, along the carboxyl terminus of SFTPC. Mutations in the SFTPC gene are thought to prevent proSP-C peptides from being fully processed into mature SP-C proteins. ProSP-C proteins tend to self-accumulate along the secretory pathway, due to their highly hydrophobic nature, and may activate a cellular destruction response. SFTPC mutations cause proSP-C proteins to aggregate and misfold during the secretory process. These misfolded proteins trigger the unfolded protein response (UPR) and cellular apoptosis to get rid of clusters of mutated peptides. Patients with SP-C dysfunction show a lack of mature SP-C in epithelial type II cells and up-regulation of the UPR. The SFTPC mutation with the highest occurrence frequency is a substitution of threonine for isoleucine at codon 73, termed I73T, found in more than 25% of patients with SP-C-related disorders. Staining of proSP-C shows diffuse staining strictly in the cytoplasm and accumulation of immunoreactive substances surrounding the nucleus. Evaluation of diseases related to SFTPC mutations shows an association with chronic parenchymal lung disease. === ABCA3 mutations === Mutations in ABCA3 appear to be the most common cause of genetic surfactant dysfunction in humans. The mutations result in a loss of or reduced function of the ABCA3 protein, and are inherited in an autosomal recessive manner (OMIM 610921). There are more than 150 different mutations throughout the ABCA3 gene, with considerable allelic heterogeneity, making this the largest class of genetic causes of surfactant dysfunction. Like SP-B deficiency, ABCA3 mutations are inherited as an autosomal recessive trait. Mutations of ABCA3 include missense, nonsense, frameshift, splice-site, insertion, and deletion mutations. These mutations are classified into two types: those that preclude normal trafficking of ABCA3 from the ER to the lamellar body membrane, and those that affect the ATP-binding ability of ABCA3 needed for phospholipid transport. Due to its roles in lamellar body biogenesis and the maturation of surfactant proteins, epithelial type II cells with altered ABCA3 exhibit immature lamellar bodies and impaired maturation of SP-B and SP-C. 
Surfactant samples from patients with ABCA3 deficiency do not lower surface tension as effectively. This impaired ability to lower surface tension results from incomplete formation of lamellar bodies, owing to the lack of lipid influx normally provided by ABCA3. Immunostaining of SP-B in ABCA3 patients shows decreased levels of mature SP-B and impaired processing of proSP-B to SP-B, helping to explain why ABCA3 dysfunction leads to severe surfactant metabolism dysfunction. == Diagnosis == === Types === Non-invasive genetic testing can be used to infer possible interstitial lung disorders caused by surfactant metabolism dysfunction. However, these sequencing tests can take up to several weeks, which limits their usefulness in newborns with acute respiratory problems. Overlapping phenotypes of surfactant metabolism dysfunction and other interstitial lung diseases make it hard to propose a definitive diagnosis for surfactant disorders. Overall test results, family history, external factors, and clinical presentation should all be considered in diagnosing surfactant metabolism dysfunction. Testing for surfactant metabolism dysfunction should be considered for newborns with diffuse lung disease or hypoxemia, especially in families with a history of neonatal lung disease or ILD in adults. Neonatal- and adult-onset lung diseases without an identified cause should also be tested early for surfactant dysfunction. ABCA3 and SP-B dysfunction manifest in newborns and progress aggressively within the first few months of life; thus, testing for ABCA3 and SP-B disorders should precede testing for SP-C, especially when infants are showing symptoms of ILD or diffuse lung disease. Distinctions between SP-B and ABCA3 deficiency include that ABCA3 dysfunction tends to occur in families with a history of neonatal lung disease, and that SP-B testing is rarely needed in older children. Late-onset disease inherited in a dominant fashion suggests SP-C dysfunction. Antibodies against proSP-B, proSP-C, SP-B, SP-C, and ABCA3 are well developed, which makes detection of these proteins highly accessible and accurate. Immunostaining in each of these types of surfactant dysfunction differs in the absence or presence of specific proteins and propeptides; thus, immunohistochemistry can help determine which type of deficiency is present. In addition, defects in NKX2.1, a transcription factor required for surfactant gene expression, can cause both hypothyroidism and insufficient transcription of multiple surfactant proteins. == Treatment == For neonates with surfactant metabolism dysfunction, especially those with SP-B deficiency, lung transplantation is the only definitive treatment option. Children who receive lung transplants for surfactant metabolism dysfunction do about as well as those transplanted for other reasons. Some less severe cases of ABCA3 dysfunction that manifest in late childhood or adulthood are due to missense mutations that allow partially sufficient levels of active surfactant, while the clinical presentation of SP-C disease varies greatly depending on the penetrance of the mutated alleles. == See also == == References == == Further reading == "Surfactant dysfunction". Genetics Home Reference. Retrieved 5 November 2017. == External links ==
Wikipedia/Surfactant_metabolism_dysfunction
The endocrine system is a messenger system in an organism comprising feedback loops of hormones that are released by internal glands directly into the circulatory system and that target and regulate distant organs. In vertebrates, the hypothalamus is the neural control center for all endocrine systems. In humans, the major endocrine glands are the thyroid, parathyroid, pituitary, pineal, and adrenal glands, and the (male) testis and (female) ovaries. The hypothalamus, pancreas, and thymus also function as endocrine glands, among other functions. (The hypothalamus and pituitary glands are organs of the neuroendocrine system. One of the most important functions of the hypothalamus—it is located in the brain adjacent to the pituitary gland—is to link the endocrine system to the nervous system via the pituitary gland.) Other organs, such as the kidneys, also have roles within the endocrine system by secreting certain hormones. The study of the endocrine system and its disorders is known as endocrinology. The thyroid secretes thyroxine, the pituitary secretes growth hormone, the pineal secretes melatonin, the testis secretes testosterone, and the ovaries secrete estrogen and progesterone. Glands that signal each other in sequence are often referred to as an axis, such as the hypothalamic–pituitary–adrenal axis. In addition to the specialized endocrine organs mentioned above, many other organs that are part of other body systems have secondary endocrine functions, including bone, kidneys, liver, heart and gonads. For example, the kidney secretes the endocrine hormone erythropoietin. Hormones can be amino acid complexes, steroids, eicosanoids, leukotrienes, or prostaglandins. The endocrine system is contrasted both to exocrine glands, which secrete hormones to the outside of the body, and to the system known as paracrine signalling between cells over a relatively short distance. Endocrine glands have no ducts, are vascular, and commonly have intracellular vacuoles or granules that store their hormones. In contrast, exocrine glands, such as salivary glands, mammary glands, and submucosal glands within the gastrointestinal tract, tend to be much less vascular and have ducts or a hollow lumen. Endocrinology is a branch of internal medicine. == Structure == === Major endocrine systems === The human endocrine system consists of several systems that operate via feedback loops. Several important feedback systems are mediated via the hypothalamus and pituitary. TRH – TSH – T3/T4 GnRH – LH/FSH – sex hormones CRH – ACTH – cortisol Renin – angiotensin – aldosterone Leptin vs. ghrelin === Glands === Endocrine glands are glands of the endocrine system that secrete their products, hormones, directly into interstitial spaces where they are absorbed into blood rather than through a duct. The major glands of the endocrine system include the pineal gland, pituitary gland, pancreas, ovaries, testes, thyroid gland, parathyroid gland, hypothalamus and adrenal glands. The hypothalamus and pituitary gland are neuroendocrine organs. The hypothalamus and the anterior pituitary are two out of the three endocrine glands that are important in cell signaling. They are both part of the HPA axis which is known to play a role in cell signaling in the nervous system. Hypothalamus: The hypothalamus is a key regulator of the autonomic nervous system. The endocrine system has three sets of endocrine outputs which include the magnocellular system, the parvocellular system, and autonomic intervention. 
The magnocellular system is involved in the expression of oxytocin or vasopressin. The parvocellular system is involved in controlling the secretion of hormones from the anterior pituitary. Anterior Pituitary: The main role of the anterior pituitary gland is to produce and secrete tropic hormones. Some examples of tropic hormones secreted by the anterior pituitary gland include TSH, ACTH, GH, LH, and FSH. === Endocrine cells === There are many types of cells that make up the endocrine system and these cells typically make up larger tissues and organs that function within and outside of the endocrine system. Hypothalamus Anterior pituitary gland Pineal gland Posterior pituitary gland The posterior pituitary gland is a section of the pituitary gland. This organ does not produce any hormone but stores and secretes hormones such as antidiuretic hormone (ADH), which is synthesized by the supraoptic nucleus of the hypothalamus, and oxytocin, which is synthesized by the paraventricular nucleus of the hypothalamus. ADH helps the body retain water; this is important in maintaining a homeostatic balance between blood solutes and water. Oxytocin induces uterine contractions, stimulates lactation, and facilitates ejaculation. Thyroid gland The follicular cells of the thyroid gland produce and secrete T3 and T4 in response to elevated levels of TRH, produced by the hypothalamus, and subsequently elevated levels of TSH, produced by the anterior pituitary gland; thyroid hormone regulates the metabolic activity and rate of all cells, including cell growth and tissue differentiation. Parathyroid gland Epithelial cells of the parathyroid glands are richly supplied with blood from the inferior and superior thyroid arteries and secrete parathyroid hormone (PTH). PTH acts on bone, the kidneys, and the GI tract to increase calcium reabsorption and phosphate excretion. In addition, PTH stimulates the conversion of vitamin D to its most active variant, 1,25-dihydroxyvitamin D3, which further stimulates calcium absorption in the GI tract. Thymus gland Adrenal glands Adrenal cortex Adrenal medulla Pancreas The pancreas contains roughly 1 to 2 million islets of Langerhans (clusters of hormone-secreting cells) and acini; the acini secrete digestive enzymes. Alpha cells The alpha cells of the pancreas secrete glucagon, which helps maintain blood sugar homeostasis. Glucagon is secreted in response to low blood sugar levels; it stimulates glycogen stores in the liver to release sugar into the bloodstream, raising blood sugar to normal levels. (Insulin, which lowers blood sugar to normal levels, is produced by the beta cells.) Beta cells About 60% of the cells present in the islets of Langerhans are beta cells. Beta cells secrete insulin. Along with glucagon, insulin helps maintain glucose levels in the body. Insulin decreases the blood glucose level (a hypoglycemic hormone) whereas glucagon increases the blood glucose level. Delta cells F Cells Ovaries Granulosa cells Testis Leydig cells == Development == The fetal endocrine system is one of the first systems to develop during prenatal development. === Adrenal glands === The fetal adrenal cortex can be identified within four weeks of gestation. The adrenal cortex originates from the thickening of the intermediate mesoderm. At five to six weeks of gestation, the mesonephros differentiates into a tissue known as the genital ridge. 
The genital ridge produces the steroidogenic cells for both the gonads and the adrenal cortex. The adrenal medulla is derived from ectodermal cells. Cells that will become adrenal tissue move retroperitoneally to the upper portion of the mesonephros. At seven weeks of gestation, the adrenal cells are joined by sympathetic cells that originate from the neural crest to form the adrenal medulla. At the end of the eighth week, the adrenal glands have been encapsulated and have formed a distinct organ above the developing kidneys. At birth, the adrenal glands weigh approximately eight to nine grams (twice that of the adult adrenal glands) and are 0.5% of the total body weight. At 25 weeks, the adult adrenal cortex zone develops and is responsible for the primary synthesis of steroids during the early postnatal weeks. === Thyroid gland === The thyroid gland develops from two different clusterings of embryonic cells. One part is from the thickening of the pharyngeal floor, which serves as the precursor of the thyroxine (T4)-producing follicular cells. The other part is from the caudal extensions of the fourth pharyngobranchial pouches, which gives rise to the parafollicular calcitonin-secreting cells. These two structures are apparent by 16 to 17 days of gestation. Around the 24th day of gestation, the foramen cecum, a thin, flask-like diverticulum of the median anlage, develops. At approximately 24 to 32 days of gestation, the median anlage develops into a bilobed structure. By 50 days of gestation, the medial and lateral anlage have fused together. At 12 weeks of gestation, the fetal thyroid is capable of storing iodine for the production of thyroid hormone. At 20 weeks, the fetus is able to implement feedback mechanisms for the production of thyroid hormones. During fetal development, T4 is the major thyroid hormone being produced, while triiodothyronine (T3) and its inactive derivative, reverse T3, are not detected until the third trimester. === Parathyroid glands === Once the embryo reaches four weeks of gestation, the parathyroid glands begin to develop. The human embryo forms five sets of endoderm-lined pharyngeal pouches. The third and fourth pouches are responsible for developing into the inferior and superior parathyroid glands, respectively. The third pharyngeal pouch encounters the developing thyroid gland and they migrate down to the lower poles of the thyroid lobes. The fourth pharyngeal pouch later encounters the developing thyroid gland and migrates to the upper poles of the thyroid lobes. At 14 weeks of gestation, the parathyroid glands begin to enlarge from 0.1 mm in diameter to approximately 1–2 mm at birth. The developing parathyroid glands are physiologically functional beginning in the second trimester. Studies in mice have shown that interfering with the HOX15 gene can cause parathyroid gland aplasia, which suggests the gene plays an important role in the development of the parathyroid gland. The genes TBX1, CRKL, GATA3, GCM2, and SOX3 have also been shown to play a crucial role in the formation of the parathyroid gland. Mutations in TBX1 and CRKL genes are correlated with DiGeorge syndrome, while mutations in GATA3 have also resulted in a DiGeorge-like syndrome. Mutations in the GCM2 gene have resulted in hypoparathyroidism. 
Studies of SOX3 gene mutations have demonstrated that the gene plays a role in parathyroid development. These mutations also lead to varying degrees of hypopituitarism. === Pancreas === The human fetal pancreas begins to develop by the fourth week of gestation. Five weeks later, the pancreatic alpha and beta cells have begun to emerge. By eight to ten weeks into development, the pancreas starts producing insulin, glucagon, somatostatin, and pancreatic polypeptide. During the early stages of fetal development, pancreatic alpha cells outnumber pancreatic beta cells. The alpha cells reach their peak in the middle stage of gestation. From the middle stage until term, the beta cells continue to increase in number until they reach an approximate 1:1 ratio with the alpha cells. The insulin concentration within the fetal pancreas is 3.6 pmol/g at seven to ten weeks, which rises to 30 pmol/g at 16–25 weeks of gestation. Near term, the insulin concentration increases to 93 pmol/g. The endocrine cells have dispersed throughout the pancreas within 10 weeks. At 31 weeks of development, the islets of Langerhans have differentiated. While the fetal pancreas has functional beta cells by 14 to 24 weeks of gestation, the amount of insulin that is released into the bloodstream is relatively low. In a study of pregnant women carrying fetuses in the mid-gestation and near-term stages of development, the fetuses did not have an increase in plasma insulin levels in response to injections of high levels of glucose. In contrast to insulin, the fetal plasma glucagon levels are relatively high and continue to increase during development. At the mid-stage of gestation, the glucagon concentration is 6 μg/g, compared to 2 μg/g in adult humans. Just like insulin, fetal glucagon plasma levels do not change in response to an infusion of glucose. However, an infusion of alanine into pregnant women was shown to increase cord blood and maternal glucagon concentrations, demonstrating a fetal response to amino acid exposure. As such, while the fetal pancreatic alpha and beta islet cells have fully developed and are capable of hormone synthesis during the remaining fetal maturation, the islet cells are relatively immature in their capacity to produce glucagon and insulin. This is thought to be a result of the relatively stable levels of fetal serum glucose concentrations achieved via maternal transfer of glucose through the placenta. On the other hand, the stable fetal serum glucose levels could be attributed to the absence of pancreatic signaling initiated by incretins during feeding. In addition, the fetal pancreatic islet cells are unable to produce sufficient cAMP and rapidly degrade cAMP via phosphodiesterase, and adequate cAMP is necessary for the secretion of glucagon and insulin. During fetal development, the storage of glycogen is controlled by fetal glucocorticoids and placental lactogen. Fetal insulin is responsible for increasing glucose uptake and lipogenesis during the stages leading up to birth. Fetal cells contain a higher number of insulin receptors than adult cells, and fetal insulin receptors are not downregulated in cases of hyperinsulinemia. In comparison, fetal hepatic glucagon receptors are reduced compared with adult cells, and the glycemic effect of glucagon is blunted. This temporary physiological change aids the increased rate of fetal development during the final trimester. 
Poorly managed maternal diabetes mellitus is linked to fetal macrosomia, increased risk of miscarriage, and defects in fetal development. Maternal hyperglycemia is also linked to increased insulin levels and beta cell hyperplasia in the post-term infant. Children of diabetic mothers are at an increased risk for conditions such as polycythemia, renal vein thrombosis, hypocalcemia, respiratory distress syndrome, jaundice, cardiomyopathy, congenital heart disease, and improper organ development. === Gonads === The reproductive system begins development at four to five weeks of gestation with germ cell migration. The bipotential gonad results from the collection of the medioventral region of the urogenital ridge. At the five-week point, the developing gonads break away from the adrenal primordium. Gonadal differentiation begins 42 days following conception. ==== Male gonadal development ==== For males, the testes form at six fetal weeks and the Sertoli cells begin developing by the eighth week of gestation. SRY, the sex-determining locus, serves to differentiate the Sertoli cells. The Sertoli cells are the point of origin for anti-Müllerian hormone. Once synthesized, the anti-Müllerian hormone initiates the ipsilateral regression of the Müllerian tract and inhibits the development of female internal features. At 10 weeks of gestation, the Leydig cells begin to produce androgen hormones. The androgen hormone dihydrotestosterone is responsible for the development of the male external genitalia. The testicles descend during prenatal development in a two-stage process that begins at eight weeks of gestation and continues through the middle of the third trimester. During the transabdominal stage (8 to 15 weeks of gestation), the gubernacular ligament contracts and begins to thicken. The craniosuspensory ligament begins to break down. This stage is regulated by the secretion of insulin-like 3 (INSL3), a relaxin-like factor produced by the testicles, and the INSL3 G protein-coupled receptor, LGR8. During the transinguinal phase (25 to 35 weeks of gestation), the testicles descend into the scrotum. This stage is regulated by androgens, the genitofemoral nerve, and calcitonin gene-related peptide. During the second and third trimesters, testicular development concludes with the diminution of the fetal Leydig cells and the lengthening and coiling of the seminiferous cords. ==== Female gonadal development ==== For females, the ovaries become morphologically visible by the 8th week of gestation. The absence of testosterone results in the diminution of the Wolffian structures. The Müllerian structures remain and develop into the fallopian tubes, uterus, and the upper region of the vagina. The urogenital sinus develops into the urethra and lower region of the vagina, the genital tubercle develops into the clitoris, the urogenital folds develop into the labia minora, and the urogenital swellings develop into the labia majora. At 16 weeks of gestation, the ovaries produce FSH and LH/hCG receptors. At 20 weeks of gestation, the theca cell precursors are present and oogonia mitosis is occurring. At 25 weeks of gestation, the ovary is morphologically defined and folliculogenesis can begin. Studies of gene expression show that a specific complement of genes, such as follistatin and multiple cyclin kinase inhibitors, are involved in ovarian development. 
An assortment of genes and proteins, such as WNT4, RSPO1, FOXL2, and various estrogen receptors, have been shown to prevent the development of testicles or the lineage of male-type cells. === Pituitary gland === The pituitary gland is formed within the rostral neural plate. Rathke's pouch, a cavity of ectodermal cells of the oropharynx, forms between the fourth and fifth weeks of gestation and, upon full development, gives rise to the anterior pituitary gland. By seven weeks of gestation, the anterior pituitary vascular system begins to develop. During the first 12 weeks of gestation, the anterior pituitary undergoes cellular differentiation. At 20 weeks of gestation, the hypophyseal portal system has developed. Rathke's pouch grows towards the third ventricle and fuses with the diverticulum. This eliminates the lumen, and the structure becomes Rathke's cleft. The posterior pituitary lobe is formed from the diverticulum. Portions of the pituitary tissue may remain in the nasopharyngeal midline. In rare cases this results in functioning ectopic hormone-secreting tumors in the nasopharynx. The functional development of the anterior pituitary involves spatiotemporal regulation of transcription factors expressed in pituitary stem cells and dynamic gradients of local soluble factors. The coordination of the dorsal gradient of pituitary morphogenesis is dependent on neuroectodermal signals from infundibular bone morphogenetic protein 4 (BMP4). This protein is responsible for the development of the initial invagination of Rathke's pouch. Other essential proteins necessary for pituitary cell proliferation are fibroblast growth factor 8 (FGF8), Wnt4, and Wnt5. Ventral developmental patterning and the expression of transcription factors are influenced by the gradients of BMP2 and sonic hedgehog protein (SHH). These factors are essential for coordinating early patterns of cell proliferation. Six weeks into gestation, the corticotroph cells can be identified. By seven weeks of gestation, the anterior pituitary is capable of secreting ACTH. Within eight weeks of gestation, somatotroph cells begin to develop with cytoplasmic expression of human growth hormone. Once a fetus reaches 12 weeks of development, the thyrotrophs begin expressing the beta subunit of TSH, while the gonadotrophs begin to express the beta subunits of LH and FSH. Male fetuses predominantly produce LH-expressing gonadotrophs, while female fetuses produce an equal proportion of LH- and FSH-expressing gonadotrophs. At 24 weeks of gestation, prolactin-expressing lactotrophs begin to emerge. == Function == === Hormones === A hormone is any of a class of signaling molecules produced by cells in glands in multicellular organisms that are transported by the circulatory system to target distant organs to regulate physiology and behaviour. Hormones have diverse chemical structures, mainly of 3 classes: eicosanoids, steroids, and amino acid/protein derivatives (amines, peptides, and proteins). The glands that secrete hormones comprise the endocrine system. The term hormone is sometimes extended to include chemicals produced by cells that affect the same cell (autocrine or intracrine signalling) or nearby cells (paracrine signalling). Hormones are used to communicate between organs and tissues for physiological regulation and behavioral activities, such as digestion, metabolism, respiration, tissue function, sensory perception, sleep, excretion, lactation, stress, growth and development, movement, reproduction, and mood. 
Hormones affect distant cells by binding to specific receptor proteins in the target cell, resulting in a change in cell function. This may lead to cell type-specific responses that include rapid changes to the activity of existing proteins, or slower changes in the expression of target genes. Amino acid–based hormones (amines and peptide or protein hormones) are water-soluble and act on the surface of target cells via signal transduction pathways; steroid hormones, being lipid-soluble, move through the plasma membranes of target cells to act within their nuclei. === Cell signalling === The typical mode of cell signalling in the endocrine system is endocrine signaling, that is, using the circulatory system to reach distant target organs. However, there are also other modes, i.e., paracrine, autocrine, and neuroendocrine signaling. Purely neurocrine signaling between neurons, on the other hand, belongs completely to the nervous system. ==== Autocrine ==== Autocrine signaling is a form of signaling in which a cell secretes a hormone or chemical messenger (called the autocrine agent) that binds to autocrine receptors on the same cell, leading to changes in the cell. ==== Paracrine ==== Some endocrinologists and clinicians include the paracrine system as part of the endocrine system, but there is no consensus. Paracrines are slower acting, targeting cells in the same tissue or organ. An example of this is somatostatin, which is released by some pancreatic cells and targets other pancreatic cells. ==== Juxtacrine ==== Juxtacrine signaling is a type of intercellular communication that is transmitted via oligosaccharide, lipid, or protein components of a cell membrane, and may affect either the emitting cell or the immediately adjacent cells. It occurs between adjacent cells that possess broad patches of closely opposed plasma membrane linked by transmembrane channels known as connexons. The gap between the cells can usually be between only 2 and 4 nm. == Clinical significance == === Disease === Diseases of the endocrine system are common, including conditions such as diabetes mellitus, thyroid disease, and obesity. Endocrine disease is characterized by misregulated hormone release (a productive pituitary adenoma), inappropriate response to signaling (hypothyroidism), lack of a gland (diabetes mellitus type 1, diminished erythropoiesis in chronic kidney failure), or structural enlargement in a critical site such as the thyroid (toxic multinodular goitre). Hypofunction of endocrine glands can occur as a result of loss of reserve, hyposecretion, agenesis, atrophy, or active destruction. Hyperfunction can occur as a result of hypersecretion, loss of suppression, hyperplastic or neoplastic change, or hyperstimulation. Endocrinopathies are classified as primary, secondary, or tertiary. Primary endocrine disease inhibits the action of downstream glands. Secondary endocrine disease is indicative of a problem with the pituitary gland. Tertiary endocrine disease is associated with dysfunction of the hypothalamus and its releasing hormones. Hormones have been implicated in signaling distant tissues to proliferate; for example, the estrogen receptor has been shown to be involved in certain breast cancers. Endocrine, paracrine, and autocrine signaling have all been implicated in proliferation, one of the required steps of oncogenesis. Other common diseases that result from endocrine dysfunction include Addison's disease, Cushing's disease and Graves' disease. 
Cushing's disease and Addison's disease are pathologies involving the dysfunction of the adrenal gland. Dysfunction in the adrenal gland could be due to primary or secondary factors and can result in hypercortisolism or hypocortisolism. Cushing's disease is characterized by the hypersecretion of the adrenocorticotropic hormone (ACTH) due to a pituitary adenoma that ultimately causes endogenous hypercortisolism by stimulating the adrenal glands. Some clinical signs of Cushing's disease include obesity, moon face, and hirsutism. Addison's disease is an endocrine disease that results from hypocortisolism caused by adrenal gland insufficiency. Adrenal insufficiency is significant because it is correlated with decreased ability to maintain blood pressure and blood sugar, a defect that can prove to be fatal. Graves' disease involves the hyperactivity of the thyroid gland which produces the T3 and T4 hormones. Graves' disease effects range from excess sweating, fatigue, heat intolerance and high blood pressure to swelling of the eyes that causes redness, puffiness and in rare cases reduced or double vision. == DALY rates == A DALY (Disability-Adjusted Life Year) is a measure that reflects the total burden of disease. It combines years of life lost (due to premature death) and years lived with disability (adjusted for the severity of the disability). The lower the DALY rates, the lower the burden of endocrine disorders in a country. Large parts of Asia have relatively low DALY rates, suggesting that endocrine disorders have a comparatively small impact on overall health there, whereas some countries in South America and Africa (notably Suriname and Somalia) have higher DALY rates, indicating a greater disease burden from endocrine disorders. == Other animals == A neuroendocrine system has been observed in all animals with a nervous system and all vertebrates have a hypothalamus–pituitary axis. All vertebrates have a thyroid, which in amphibians is also crucial for transformation of larvae into adult form. All vertebrates have adrenal gland tissue, with mammals unique in having it organized into layers. All vertebrates have some form of a renin–angiotensin axis, and all tetrapods have aldosterone as a primary mineralocorticoid. == Additional images == == See also == == References == == External links == Media related to Endocrine system at Wikimedia Commons
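To make the DALY definition above concrete, here is a minimal sketch in Python; the function names, the 0-to-1 disability-weight convention, and the example numbers are illustrative assumptions rather than figures taken from this article.

```python
def years_lived_with_disability(years_with_condition: float, disability_weight: float) -> float:
    """Years lived with disability, adjusted for severity.

    The disability weight (0 = full health, 1 = equivalent to death) is an
    assumption borrowed from standard burden-of-disease methodology.
    """
    return years_with_condition * disability_weight


def daly(years_of_life_lost: float, yld: float) -> float:
    """Disability-Adjusted Life Years as defined above: DALY = YLL + YLD."""
    return years_of_life_lost + yld


# Illustrative example only: dying 5 years prematurely after living 10 years
# with a condition weighted 0.2 contributes 5 + 2 = 7 DALYs.
print(daly(5.0, years_lived_with_disability(10.0, 0.2)))  # 7.0
```

Under this definition, a country's DALY rate for endocrine disorders simply sums such contributions across its population and normalizes by population size.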
Wikipedia/Endocrine_systems
Oguchi disease is an autosomal recessive form of congenital stationary night blindness associated with fundus discoloration and abnormally slow dark adaptation. == Presentation == == Genetics == Several mutations have been implicated as a cause of Oguchi disease. These include mutations in the arrestin gene or the rhodopsin kinase gene. The condition is more frequent in individuals of Japanese ethnicity. == Diagnosis == Patients with Oguchi disease present with nonprogressive night blindness from birth or early childhood with normal day vision, but they frequently report improved light sensitivity after remaining for some time in a darkened environment. On examination patients have normal visual fields but the fundi have a diffuse or patchy, silver-gray or golden-yellow metallic sheen and the retinal vessels stand out in relief against the background. Prolonged dark adaptation of three hours or more leads to disappearance of this unusual discoloration and the return of a normal reddish fundus appearance. This is known as the Mizuo-Nakamura phenomenon and is thought to be caused by overstimulation of rod cells. === Differential diagnosis === Other conditions with similar-appearing fundi include cone dystrophy, X-linked retinitis pigmentosa, and juvenile macular dystrophy. These conditions do not show the Mizuo-Nakamura phenomenon. === Electroretinographic studies === Oguchi's disease is unique in its electroretinographic responses in the light- and dark-adapted conditions. The a- and b-waves on single-flash electroretinograms (ERG) are decreased or absent under lighted conditions but increase after prolonged dark adaptation. Rod b-waves are nearly undetectable in the scotopic 0.01 ERG, and the scotopic 3.0 ERGs are nearly negative. Dark-adaptation studies have shown that highly elevated rod thresholds decrease several hours later and eventually result in a recovery to the normal or nearly normal level. The S, M and L cone systems are normal. == Management == Treatment of the disease is limited; various supplements, including vitamin K and high-dose zinc, have been tried, but none has proven benefit. == History == The disease was described by Chuta Oguchi (1875–1945), a Japanese ophthalmologist, in 1907. The characteristic fundal appearances were described by Mizuo in 1913. == References == == External links == Oguchi disease at NIH's Office of Rare Diseases
Wikipedia/Oguchi_disease
Merlin (also called neurofibromin 2 or schwannomin) is a cytoskeletal protein. In humans, it is a tumor suppressor protein involved in neurofibromatosis type II. Sequence data reveal its similarity to the ERM protein family. Merlin is an acronym for "Moesin-Ezrin-Radixin-like protein". == Gene == Human merlin is encoded by the NF2 gene on chromosome 22. The mouse merlin gene is located on chromosome 11 and the rat merlin gene on chromosome 17. The fruit fly merlin gene (symbol Mer) is located on chromosome 1 and shares 58% similarity with its human homologue. Other merlin-like genes are known from a wide range of animals, and the derivation of merlin is thought to be in early metazoa. Merlin is a member of the ERM family of proteins, which includes ezrin, moesin, and radixin; these proteins belong to the protein 4.1 superfamily. Merlin is also known as schwannomin, a name derived from the most common type of tumor in the NF2 patient phenotype, the schwannoma. == Structure == Vertebrate merlin is a 70 kDa protein. There are 10 known isoforms of the human merlin molecule (the full-length molecule being 595 amino acids long). The two most common of these are also found in the mouse and are called type 1 and type 2, differing by the absence or presence of exon 16 or 17, respectively. All the known varieties have a conserved N-terminal part, which contains a FERM domain (a domain found in most cytoskeletal-membrane organizing proteins). The FERM domain is followed by an alpha-helical domain and a hydrophilic tail. Merlin can dimerize with itself and heterodimerize with other ERM family proteins. == Function == Merlin is a membrane-cytoskeleton scaffolding protein, i.e. it links actin filaments to the cell membrane or to membrane glycoproteins. Human merlin is predominantly found in nervous tissue, but also in several other fetal tissues, and is mainly located in adherens junctions. Its tumor suppressor properties are probably associated with contact-mediated growth inhibition. Drosophila merlin is expressed in embryonic hindgut, salivary glands, and imaginal discs, and apparently has a slightly different role than in vertebrates. The phosphorylation of serine 518 is known to alter the functional state of merlin. The signaling pathway of merlin is proposed to include several salient cell growth controlling molecules, including eIF3c, CD44, protein kinase A, and p21-activated kinases. Work in Drosophila identified Merlin as an upstream regulator of the Hippo tumor suppressor pathway, a function that is conserved in mammals. The Hippo pathway is a well conserved signalling pathway that coordinately regulates cell proliferation and apoptosis. Mutations of the NF2 gene cause a human autosomal dominant disease called neurofibromatosis type 2. It is characterized by the development of tumors of the nervous system, most commonly bilateral vestibular schwannomas (also called acoustic neuromas). NF2 belongs to the tumor suppressor group of genes. == Interactions == Merlin has been shown to interact with a number of other proteins. == References == == External links == GeneReviews/NCBI/NIH/UW entry on Neurofibromatosis 2 FlyBase synopsis of gene Mer
Wikipedia/Merlin_(protein)
Myotonin-protein kinase (MT-PK), also known as myotonic dystrophy protein kinase (MDPK) or dystrophia myotonica protein kinase (DMPK), is an enzyme that in humans is encoded by the DMPK gene. The DMPK gene product is a Ser/Thr protein kinase homologous to the MRCK p21-activated kinases and the Rho kinase family. Data obtained by using antibodies that detect specific isoforms of DMPK indicate that the most abundant isoform of DMPK is an 80-kDa protein expressed almost exclusively in smooth, skeletal, and cardiac muscles. This kinase exists both as a membrane-associated and as a soluble form in human left ventricular samples. The different C termini of DMPK that arise from alternative splicing determine its localization to the endoplasmic reticulum, mitochondria, or cytosol in transfected COS-1 cells. Among the substrates for DMPK proposed by in vitro studies are phospholemman, the dihydropyridine receptor, and the myosin phosphatase targeting subunit. However, an in vivo demonstration of the phosphorylation of these substrates by DMPK remains to be established, and a link between these substrates and the clinical manifestations of myotonic dystrophy (DM) is unclear. == Function == Myotonin-protein kinase is a serine-threonine kinase that is closely related to other kinases that interact with members of the Rho family of small GTPases. Substrates for this enzyme include myogenin, the beta-subunit of the L-type calcium channels, and phospholemman. Although the specific function of this protein is unknown, it appears to play an important role in muscle, heart, and brain cells. This protein may be involved in communication within cells. It also appears to regulate the production and function of important structures inside muscle cells by interacting with other proteins. For example, myotonic dystrophy protein kinase has been shown to turn off (inhibit) part of a muscle protein called myosin phosphatase. Myosin phosphatase is an enzyme that plays a role in muscle tensing (contraction) and relaxation. == Structure == Dystrophia myotonica protein kinase (DMPK) is a serine/threonine kinase composed of a kinase domain and a coiled-coil domain involved in multimerization. The crystal structure of the kinase domain of DMPK bound to the inhibitor bisindolylmaleimide VIII (BIM-8) revealed a dimeric enzyme associated by a conserved dimerization domain. The affinity of dimerization suggested that the kinase domain alone is insufficient for dimerization in vivo and that the coiled-coil domains are required for stable dimer formation. The kinase domain is in an active conformation, with a fully ordered and correctly positioned αC helix, and catalytic residues in a conformation competent for catalysis. The conserved hydrophobic motif at the C-terminal extension of the kinase domain is bound to the N-terminal lobe of the kinase domain, despite being unphosphorylated. == Clinical significance == The 3' untranslated region of this gene contains 5–37 copies of a CTG trinucleotide repeat. Expansion of this unstable motif to 50–5,000 copies causes myotonic dystrophy type 1, which increases in severity with increasing repeat element copy number. Repeat expansion is associated with condensation of local chromatin structure that disrupts the expression of genes in this region. As the DMPK repeat is replicated, hairpin loops that form can lead to repeat expansions or contractions. Myotonic dystrophy type 1 (DM1) is an autosomal dominant neuromuscular disorder affecting approximately 1 in 8,000 individuals. 
Affected individuals display a wide range of symptoms including myotonia, skeletal muscle weakness and wasting, cardiac conduction abnormalities, and cataracts. Despite cloning of the locus, the complex disease phenotype of DM has proven difficult to interpret, and the exact role of DMPK in the pathogenesis of DM remains unclear. == Interactions == Myotonic dystrophy protein kinase has been shown to interact with HSPB2 and RAC1. == Regulation == The close relationship of DMPK to the Rho kinases has led to speculation as to whether DMPK activity may be regulated in vivo by small G proteins, particularly of the Rho family. Although DMPK lacks obvious binding sites for known G proteins, DMPK-1 oligomers exhibit low basal catalytic activity due to the presence of the C-terminal autoinhibitory (AI) domain. A protease within the membrane cleaves DMPK-1, removing the C-terminal autoinhibitory and membrane-association domains and releasing cytosolic, basally active DMPK-2. This processing event would produce long-term activation of the kinase. Short-term activation of DMPK-1 and -2 may be mediated by transitory interaction with a small GTPase. A general model that accounts for DMPK oligomerization, processing, and regulation has been proposed. In this model, transient activation of kinase activity would occur in response to G protein second messengers, while long-term activation of DMPK could be mediated by a membrane-associated protease that cleaves DMPK-1 to release cytosolic DMPK-2 in a persistently activated form. The persistent activation of serine/threonine kinases has been shown to play a role in the determination of cell fate as well as memory formation in the nervous system. In this respect, DMPK may be similar to PKA and PKC, two kinases that can be transiently activated in response to second messengers or persistently activated by proteolytic removal of an autoinhibitory domain. Thus, this model suggests that the two endogenous DMPK forms may possess different activities, localizations, regulators, and substrates and perform distinct physiological functions. == References == == Further reading == == External links == GeneReviews/NCBI/NIH/UW entry on Myotonic Dystrophy Type 1
Wikipedia/Myotonin-protein_kinase
Collagen disease is a term previously used to describe systemic autoimmune diseases (e.g., rheumatoid arthritis, systemic lupus erythematosus, and systemic sclerosis), but is now thought to be more appropriate for diseases associated with defects in collagen, which is a component of the connective tissue. The term "collagen disease" was coined by Dr. Alvin F. Coburn in 1932, during his work to establish streptococcal infection as the cause of rheumatic fever. == See also == Collagenopathy, types II and XI Connective tissue disease == References == == External links == Collagen disease entry in the public domain NCI Dictionary of Cancer Terms This article incorporates public domain material from Dictionary of Cancer Terms. U.S. National Cancer Institute.
Wikipedia/Collagen_disease
Chronic kidney disease (CKD) is a type of long-term kidney disease, defined by the sustained presence of abnormal kidney function and/or abnormal kidney structure. To meet criteria for CKD, the abnormalities must be present for at least three months. Early in the course of CKD, patients are usually asymptomatic, but later symptoms may include leg swelling, feeling tired, vomiting, loss of appetite, and confusion. Complications can relate to hormonal dysfunction of the kidneys and include (in chronological order) high blood pressure (often related to activation of the renin–angiotensin system), bone disease, and anemia. Additionally CKD patients have markedly increased cardiovascular complications with increased risks of death and hospitalization. CKD can lead to end-stage kidney failure requiring kidney dialysis or kidney transplantation. Causes of chronic kidney disease include diabetes, high blood pressure, glomerulonephritis, and polycystic kidney disease. Risk factors include a family history of chronic kidney disease. Diagnosis is by blood tests to measure the estimated glomerular filtration rate (eGFR), and a urine test to measure albumin. Ultrasound or kidney biopsy may be performed to determine the underlying cause. Several severity-based staging systems are in use. Testing people with risk factors (case-finding) is recommended. Initial treatments may include medications to lower blood pressure, blood sugar, and cholesterol. Angiotensin converting enzyme inhibitors (ACEIs) or angiotensin II receptor antagonists (ARBs) are generally first-line agents for blood pressure control, as they slow progression of the kidney disease and the risk of heart disease. Loop diuretics may be used to control edema and, if needed, to further lower blood pressure. NSAIDs should be avoided. Other recommended measures include staying active, and "to adopt healthy and diverse diets with a higher consumption of plant-based foods compared to animal-based foods and a lower consumption of ultraprocessed foods." Plant-based diets are feasible and are associated with improved intermediate outcomes and biomarkers. An example of a general, healthy diet, suitable for people with CKD who do not require restrictions, is the Canada Food Guide Diet. People with CKD who require dietary restrictions or who have other specific nutritional problems should be referred to a dietitian. Treatments for anemia and bone disease may also be required. Severe disease requires hemodialysis, peritoneal dialysis, or a kidney transplant for survival. Chronic kidney disease affected 753 million people globally in 2016 (417 million females and 336 million males.) In 2015, it caused 1.2 million deaths, up from 409,000 in 1990. The causes that contribute to the greatest number of deaths are high blood pressure at 550,000, followed by diabetes at 418,000, and glomerulonephritis at 238,000. == Signs and symptoms == CKD is initially without symptoms and is usually detected on routine screening blood work by either an increase in serum creatinine, or protein in the urine. As the kidney function decreases, more unpleasant symptoms may emerge: Blood pressure is increased due to fluid overload and the production of vasoactive hormones created by the kidney via the renin-angiotensin system, increasing the risk of developing hypertension and heart failure. People with CKD are more likely than the general population to develop atherosclerosis with consequent cardiovascular disease, an effect that may be at least partly mediated by uremic toxins. 
People with both CKD and cardiovascular disease have significantly worse prognoses than those with only cardiovascular disease. Urea accumulates, leading to azotemia and ultimately uremia (symptoms ranging from lethargy to pericarditis and encephalopathy). Due to its high systemic concentration, urea is excreted in eccrine sweat at high concentrations and crystallizes on the skin as the sweat evaporates ("uremic frost"). Potassium accumulates in the blood (hyperkalemia, with a range of symptoms including malaise and potentially fatal cardiac arrhythmias). Hyperkalemia usually does not develop until the glomerular filtration rate falls to less than 20–25 mL/min/1.73 m2, when the kidneys have decreased ability to excrete potassium. Hyperkalemia in CKD can be exacerbated by acidemia (which triggers cells to release potassium into the bloodstream to buffer the acid) and by lack of insulin. Fluid overload symptoms may range from mild edema to life-threatening pulmonary edema. Hyperphosphatemia results from poor phosphate elimination in the kidney, and contributes to increased cardiovascular risk by causing vascular calcification. Circulating concentrations of fibroblast growth factor-23 (FGF-23) increase progressively as the kidney capacity for phosphate excretion declines, which may contribute to left ventricular hypertrophy and increased mortality in people with CKD. Hypocalcemia results from 1,25 dihydroxyvitamin D3 deficiency (caused by high FGF-23 and reduced kidney mass) and the skeletal resistance to the calcemic action of parathyroid hormone. Osteocytes are responsible for the increased production of FGF-23, which is a potent inhibitor of the enzyme 1-alpha-hydroxylase (responsible for the conversion of 25-hydroxycholecalciferol into 1,25 dihydroxyvitamin D3). Later, this progresses to secondary hyperparathyroidism, kidney osteodystrophy, and vascular calcification that further impairs cardiac function. An extreme consequence is the occurrence of the rare condition named calciphylaxis. Changes in mineral and bone metabolism may cause 1) abnormalities of calcium, phosphorus (phosphate), parathyroid hormone, or vitamin D metabolism; 2) abnormalities in bone turnover, mineralization, volume, linear growth, or strength (kidney osteodystrophy); and 3) vascular or other soft-tissue calcification. CKD–mineral and bone disorders have been associated with poor outcomes. Metabolic acidosis may result from decreased capacity to generate enough ammonia from the cells of the proximal tubule. Acidemia affects the function of enzymes and increases the excitability of cardiac and neuronal membranes by the promotion of hyperkalemia. Anemia is common and is especially prevalent in those requiring hemodialysis. It is multifactorial in cause but includes increased inflammation, reduction in erythropoietin, and hyperuricemia leading to bone marrow suppression. Hypoproliferative anemia occurs due to inadequate production of erythropoietin by the kidneys. In later stages, cachexia may develop, leading to unintentional weight loss, muscle wasting, weakness, and anorexia. Cognitive decline in patients with CKD is an emerging finding in the research literature. Research suggests that patients with CKD face a 35–40% higher likelihood of cognitive decline and/or dementia. This relation depends on the severity of CKD in each patient, although emerging literature indicates that patients at all stages of CKD have a higher risk of developing these cognitive issues. 
Sexual dysfunction is very common in both men and women with CKD. A majority of men have a reduced sex drive and difficulty obtaining an erection and reaching orgasm, and the problems worsen with age. Most women have trouble with sexual arousal, and painful menstruation and problems with performing and enjoying sex are common. == Causes == The most common causes of CKD are diabetes mellitus, hypertension, and glomerulonephritis. About one of five adults with hypertension and one of three adults with diabetes have CKD. If the cause is unknown, it is called idiopathic. === By anatomical location === Vascular disease includes large-vessel disease such as bilateral kidney artery stenosis and small-vessel disease such as ischemic nephropathy, hemolytic–uremic syndrome, and vasculitis. Glomerular disease comprises a diverse group and is classified into: Primary glomerular disease such as focal segmental glomerulosclerosis and IgA nephropathy (or nephritis) Secondary glomerular disease such as diabetic nephropathy and lupus nephritis Tubulointerstitial disease includes drug- and toxin-induced chronic tubulointerstitial nephritis, and reflux nephropathy Obstructive nephropathy, as exemplified by bilateral kidney stones and benign prostatic hyperplasia; rarely, pinworms infecting the kidney can cause obstructive nephropathy. === Other === Genetic congenital disease such as polycystic kidney disease or 17q12 microdeletion syndrome. Mesoamerican nephropathy is "a new form of kidney disease that could be called agricultural nephropathy". A high and so-far unexplained number of new cases of CKD, referred to as Mesoamerican nephropathy, has been noted among male workers in Central America, mainly in sugarcane fields in the lowlands of El Salvador and Nicaragua. Heat stress from long hours of piece-rate work at high average temperatures of about 36 °C (96 °F) is suspected, as are agricultural chemicals. Chronic lead exposure is another cause. == Diagnosis == Diagnosis of CKD is largely based on history, examination, and urine dipstick combined with the measurement of the serum creatinine level. Differentiating CKD from acute kidney injury (AKI) is important because AKI can be reversible. One diagnostic clue that helps differentiate CKD from AKI is a gradual rise in serum creatinine (over several months or years) as opposed to a sudden increase in the serum creatinine (several days to weeks). In many people with CKD, previous kidney disease or other underlying diseases are already known. A significant number present with CKD of unknown cause. === Screening === Screening those who have neither symptoms nor risk factors for CKD is not recommended. Those who should be screened include: those with hypertension or history of cardiovascular disease, those with diabetes or marked obesity, those aged > 60 years, subjects with African American ancestry, those with a history of kidney disease in the past, and subjects who have relatives who had kidney disease requiring dialysis. Screening should include calculation of the estimated GFR (eGFR) from the serum creatinine level, and measurement of the urine albumin-to-creatinine ratio (ACR) in a first-morning urine specimen (this reflects the amount of a protein called albumin in the urine), as well as a urine dipstick screen for hematuria. The GFR is derived from the serum creatinine and is proportional to 1/creatinine, i.e. it is a reciprocal relationship; the higher the creatinine, the lower the GFR. 
It reflects one aspect of kidney function: how efficiently the glomeruli, the filtering units, work. The normal GFR is >90 mL/min. The units used to report creatinine vary from country to country. Because the glomeruli comprise <5% of the mass of the kidney, the GFR does not indicate all aspects of kidney health and function; a fuller assessment combines the GFR with clinical assessment of the person, including fluid status, and measurement of the levels of hemoglobin, potassium, phosphate, and parathyroid hormone. === Ultrasound === Kidney ultrasonography is useful for diagnostic and prognostic purposes in chronic kidney disease. Whether the underlying pathologic change is glomerular sclerosis, tubular atrophy, interstitial fibrosis, or inflammation, the result is often increased echogenicity of the cortex. The echogenicity of the kidney should be related to the echogenicity of the liver or the spleen. Moreover, decreased kidney size and cortical thinning are often seen, especially as the disease progresses. However, kidney size correlates with height, and short persons tend to have small kidneys; thus, kidney size as the only parameter is unreliable. === Additional imaging === Additional tests may include a nuclear medicine MAG3 scan to confirm blood flow and establish the differential function between the two kidneys. Dimercaptosuccinic acid (DMSA) scans are also used in kidney imaging; both MAG3 and DMSA are used chelated with the radioactive element technetium-99m. === Stages === A glomerular filtration rate (GFR) ≥ 60 mL/min/1.73 m2 is considered normal without chronic kidney disease if there is no kidney damage present. Kidney damage is defined as signs of damage seen in blood, urine, or imaging studies, including a laboratory albumin-to-creatinine ratio (ACR) ≥ 30. All people with a GFR <60 mL/min/1.73 m2 for 3 months are defined as having chronic kidney disease. Protein in the urine is regarded as an independent marker for the worsening of kidney function and cardiovascular disease. Hence, British guidelines append the letter "P" to the stage of chronic kidney disease if protein loss is significant. Stage 1: Slightly diminished function; kidney damage with normal or relatively high GFR (≥90 mL/min/1.73 m2) and persistent albuminuria. Kidney damage is defined as pathological abnormalities or markers of damage, including abnormalities in blood or urine tests or imaging studies. Stage 2: Mild reduction in GFR (60–89 mL/min/1.73 m2) with kidney damage. Kidney damage is defined as pathological abnormalities or markers of damage, including abnormalities in blood or urine tests or imaging studies. Stage 3: Moderate reduction in GFR (30–59 mL/min/1.73 m2). British guidelines distinguish between stage 3A (GFR 45–59) and stage 3B (GFR 30–44) for purposes of screening and referral. Stage 4: Severe reduction in GFR (15–29 mL/min/1.73 m2); preparation for kidney replacement therapy. Stage 5: Established kidney failure (GFR <15 mL/min/1.73 m2), permanent kidney replacement therapy, or end-stage kidney disease. These stage thresholds are illustrated in the short sketch below. The term "non-dialysis-dependent chronic kidney disease" (NDD-CKD) is a designation used to encompass the status of those persons with established CKD who do not yet require the life-supporting treatments for kidney failure known as kidney replacement therapy (RRT, including maintenance dialysis or kidney transplantation). 
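As a rough illustration of the staging thresholds listed above, the following Python sketch maps an eGFR value to a CKD stage. The function name, its parameters, and the simplified handling of kidney damage and proteinuria are illustrative assumptions, not part of any clinical guideline, and the three-month duration requirement is not modelled.

```python
def ckd_stage(egfr: float, kidney_damage: bool, significant_proteinuria: bool = False) -> str:
    """Classify CKD stage from eGFR (mL/min/1.73 m2), per the thresholds above.

    kidney_damage: markers of damage in blood, urine, or imaging (required for
    stages 1-2, since eGFR >= 60 without damage is considered normal).
    significant_proteinuria: appends "P", following the British convention.
    """
    if egfr >= 90:
        stage = "1" if kidney_damage else "no CKD"
    elif egfr >= 60:
        stage = "2" if kidney_damage else "no CKD"
    elif egfr >= 45:
        stage = "3A"
    elif egfr >= 30:
        stage = "3B"
    elif egfr >= 15:
        stage = "4"
    else:
        stage = "5"
    if stage != "no CKD" and significant_proteinuria:
        stage += "P"
    return stage


# Illustrative values only:
print(ckd_stage(75, kidney_damage=True))                                # "2"
print(ckd_stage(40, kidney_damage=True, significant_proteinuria=True))  # "3BP"
print(ckd_stage(95, kidney_damage=False))                               # "no CKD"
```

Because the GFR is roughly proportional to 1/creatinine, a doubling of serum creatinine corresponds roughly to a halving of the eGFR fed into such a classification.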
The condition of individuals with CKD, who require either of the two types of kidney replacement therapy (dialysis or transplant), is referred to as end-stage kidney disease (ESKD). Hence, the start of the ESKD is practically the irreversible conclusion of the NDD-CKD. Even though the NDD-CKD status refers to the status of persons with earlier stages of CKD (stages 1 to 4), people with advanced stages of CKD (stage 5), who have not yet started kidney replacement therapy, are also referred to as NDD-CKD. == Management == Chronic kidney disease (CKD) is a serious condition often linked to diabetes and high blood pressure. There is no cure, but a combination of lifestyle changes and medications can help slow its progression. This might include a plant-dominant diet with less protein and salt, medications to control blood pressure and sugar, and potentially newer anti-inflammatory drugs. Doctors may also focus on managing heart disease risk, preventing infections, and avoiding further kidney damage. While dialysis may eventually be needed, a gradual transition can help preserve remaining kidney function. More research is ongoing to improve CKD management and patient outcomes. === Blood pressure === Angiotensin converting enzyme inhibitors (ACEIs) or angiotensin II receptor antagonists (ARBs) are recommended as first-line agents since they have been found to slow the decline of kidney function, relative to a more rapid decline in those not on one of these agents. They have also been found to reduce the risk of major cardiovascular events such as myocardial infarction, stroke, heart failure, and death from cardiovascular disease when compared to placebo in individuals with CKD. ACEIs may be superior to ARBs for protection against progression to kidney failure and death from any cause in those with CKD. Aggressive blood pressure lowering decreases people's risk of death. === Other measures === Aggressive treatment of high blood lipids is recommended. A low-protein, low-salt diet may result in slower progression of CKD and reduction in proteinuria as well as controlling symptoms of advanced CKD to delay dialysis start. A tailored low-protein diet, designed for low acidity, may help prevent damage to kidneys for people with CKD. Additionally, controlling salt ingestion helps to decrease the incidence of coronary heart disease, lowering blood pressure and reducing albuminuria. Anemia – A target hemoglobin level of 100–120 g/L is recommended; raising hemoglobin levels to the normal range has not been found to be of benefit. Guidelines recommend treatment with parenteral iron prior to treatment with erythropoietin. Replacement of erythropoietin is often necessary in people with advanced disease. It is unclear if androgens improve anemia. Calcitriol is recommended for vitamin D deficiency and control of metabolic bone disease. Phosphate binders are used to control the serum phosphate levels, which are usually elevated in advanced chronic kidney disease. Phosphodiesterase-5 inhibitors and zinc may improve sexual dysfunction in men. === Lifestyle interventions === ==== Weight loss ==== Obesity may have a negative impact in CKD, increasing the risk of disease progression to ESKD or kidney failure compared to controls with healthy weight, and when in advanced stages also may hinder people's eligibility to kidney transplantation. For example, the consumption of high calorie and high fructose beverages can make an individual "60% more likely to develop CKD". 
Weight management interventions in overweight and obese adults with CKD include lifestyle interventions (dietary changes, physical activity/exercise, or behavioural strategies), pharmacological interventions (used to reduce absorption or suppress appetite), and surgical interventions. Any of these can help people with CKD lose weight; however, it is not known if they can also prevent death or cardiovascular events such as heart complications or stroke. It is recommended that weight management interventions be individualised, based on a thorough assessment of the patient's clinical condition, motivations, and preferences. ==== Dietary salt intake ==== High dietary sodium intake may increase the risk of hypertension and cardiovascular disease. The effect of dietary restriction of salt in foods has been investigated in people with chronic kidney disease. For people with CKD, including those on dialysis, reduced salt intake may help to lower both systolic and diastolic blood pressure, as well as albuminuria. Some people may experience low blood pressure and associated symptoms, such as dizziness, with lower salt intake. The effect of salt restriction on extracellular fluid, oedema, and total body weight reduction is unknown. eHealth interventions may improve dietary sodium intake and fluid management for people with CKD. ==== Omega-3 supplementation ==== In people with CKD who require hemodialysis, there is a risk that vascular blockage due to clotting may prevent dialysis therapy from being possible. Even though omega-3 fatty acids contribute to the production of eicosanoid molecules that reduce clotting, they do not appear to have any impact on the prevention of vascular blockage in people with CKD. ==== Protein supplementation ==== Regular consumption of oral protein-based nutritional supplements may increase serum albumin levels slightly in people with CKD, especially among those requiring hemodialysis or who are malnourished. Prealbumin level and mid-arm muscle circumference may also be increased following supplementation. Despite possible improvement in these indicators of nutritional status, it is not certain that protein supplements affect quality of life, life expectancy, inflammation, or body composition. ==== Iron supplementation ==== Intravenous (IV) iron therapy may help more than oral iron supplements in reaching target hemoglobin levels. However, allergic reactions may also be more likely following IV iron therapy. === Sleep === Individuals with CKD have an increased prevalence of sleep apnea compared to the general population (both obstructive sleep apnea and central sleep apnea). The presence of sleep apnea in CKD has been associated with an increased risk of cardiovascular events and mortality. People with CKD also experience sleep disorders and are thus unable to get quality sleep. There are several strategies that could help, such as relaxation techniques, exercise, and medication. Exercise may be helpful with sleep regulation and may decrease fatigue and depression in people with CKD. However, none of these options has been proven to be effective in the treatment of sleep disorders, which means that it is unknown what the best guidance is for improving sleep quality in this population. === Referral to a nephrologist === Guidelines for referral to a nephrologist vary between countries. Most agree that nephrology referral is required by stage 4 CKD (when the eGFR is less than 30 mL/min/1.73 m2, or decreasing by more than 3 mL/min/year). It may also be useful at an earlier stage (e.g.
CKD3) when urine albumin-to-creatinine ratio is more than 30 mg/mmol, when blood pressure is difficult to control, or when hematuria or other findings suggest either a primarily glomerular disorder or secondary disease amenable to a specific treatment. Other benefits of early nephrology referral include proper education regarding options for kidney replacement therapy as well as pre-emptive transplantation, and timely workup and placement of an arteriovenous fistula in those people with chronic kidney disease opting for future hemodialysis. === Renal replacement therapy === At stage 5 CKD, kidney replacement therapy is usually required, in the form of either dialysis or a kidney transplant. In CKD numerous uremic toxins accumulate in the blood. Even when ESKD (largely synonymous with CKD5) is treated with dialysis, the toxin levels do not go back to normal as dialysis is not that efficient. Similarly, after a kidney transplant, the levels may not go back to normal as the transplanted kidney may not work 100%. If it does, the creatinine level is often normal. The toxins show various cytotoxic activities in the serum and have different molecular weights, and some of them are bound to other proteins, primarily to albumin. Uremic toxins are classified into three groups as small water-soluble solutes, middle molecular-weight solutes, and protein-bound solutes. Hemodialysis with high-flux dialysis membrane, long or frequent treatment, and increased blood/dialysate flow has improved removal of water-soluble small molecular weight uremic toxins. Middle molecular weight molecules are removed more effectively with hemodialysis using a high-flux membrane, hemodiafiltration, and hemofiltration. However, conventional dialysis treatment is limited in its ability to remove protein-bound uremic toxins. == Prognosis == CKD increases the risk of cardiovascular disease, and people with CKD often have other risk factors for heart disease, such as high blood lipids. The most common cause of death in people with CKD is cardiovascular disease rather than kidney failure. Chronic kidney disease results in worse all-cause mortality (the overall death rate) which increases as kidney function decreases. The leading cause of death in chronic kidney disease is cardiovascular disease, regardless of whether there is progression to stage 5. While kidney replacement therapies can maintain people indefinitely and prolong life, the quality of life is negatively affected. Kidney transplantation increases the survival of people with stage 5 CKD when compared to other options; however, it is associated with an increased short-term mortality due to complications of the surgery. Transplantation aside, high-intensity home hemodialysis appears to be associated with improved survival and a greater quality of life, when compared to the conventional three-times-a-week hemodialysis and peritoneal dialysis. People with ESKD are at increased overall risk for cancer. This risk is particularly high in younger people and gradually diminishes with age. Medical specialty professional organizations recommend that physicians do not perform routine cancer screening in people with limited life expectancies due to ESKD because the evidence does not show that such tests lead to improved outcomes. In children, growth failure is a common complication of CKD. Children with CKD will be shorter than 97% of children the same age and sex. This can be treated with additional nutritional support or medication such as growth hormone. 
=== Survival without dialysis === Survival rates of CKD are generally longer with dialysis than without (having only conservative kidney management). However, from the age of 80, and in elderly patients with comorbidities, there is no difference in survival between the two groups. Quality of life might be better for people without dialysis. People who had decided against dialysis treatment when reaching end-stage chronic kidney disease could survive several years and experience improvements in their mental well-being, in addition to sustained physical well-being and overall quality of life, until late in their illness course. However, use of acute care services in these cases is common, and the intensity of end-of-life care is highly variable among people opting out of dialysis. == High prevalence of CKD in some areas == Chronic kidney disease (CKD) is a growing global health problem, affecting over 10% of the world's population, or more than 800 million people. While CKD is prevalent worldwide, certain regions and communities experience exceptionally high rates, often due to a combination of traditional and unique risk factors. === Regional hotspots and prevalence === In India, the Uddanam region of Andhra Pradesh has reported a CKD prevalence of 18.2%, which is 3–4 times higher than in most other areas of India and comparable to global hotspots like Sri Lanka and Central America. Notably, a significant proportion of CKD cases here are of unknown cause (CKDu), not linked to common risk factors like diabetes or hypertension. In Thailand, the overall CKD prevalence was found to be 17.5%, with certain rural areas reaching as high as 22.2%. Globally, the estimated prevalence of CKD is about 13.4%. === Risk factors and contributing causes === Traditional risk factors include diabetes, hypertension, obesity, and aging, which are rapidly increasing in low- and middle-income countries (LMICs). Environmental and occupational exposures, such as toxins, heat stress, and possibly contaminated water, are suspected contributors in CKD hotspots like Uddanam, Sri Lanka, and Central America, though the exact causes often remain unclear. Limited access to healthcare, high poverty rates, and underfunded health systems exacerbate the problem, leading to late diagnosis and high mortality. === Public health impact === CKD is a leading cause of morbidity and mortality, particularly in regions with high prevalence and limited treatment options. The financial burden of CKD is substantial, especially where access to dialysis and transplantation is limited. Dr Y. Sreehari, a physician from Visakhapatnam and chairman of Dr Sreehari hospitals, coined the term 'CKDum', standing for chronic kidney disease with unconclusive etiology and unconclusive mystery. Unconclusive etiology means that there is no hypertension, no diabetes, and no other known risk factor. Unconclusive mystery means that none of the usual environmental explanations apply: no exposure to environmental toxins or heavy metals (cadmium, arsenic, lead, or fluoride) through drinking water or soil; no chronic exposure to pesticides and herbicides such as glyphosate, organophosphates, or other agrochemicals used in farming; no heat stress or dehydration; no prolonged physical labor in hot, humid climates; no contaminated water sources; no hard water with high fluoride, silica, or sodium content; no groundwater contamination; no poor nutrition; and no use of NSAIDs (painkillers). The world has crores of villages; let 'X' be the world average prevalence of CKD.
By probability alone, like lottery chances, some villages can be expected to have a prevalence higher than the world average 'X'. If we count the top 10 villages with the highest CKDum prevalence, some 10 villages must occupy the top positions. Probability theory and CKDum: probability theory governs the likelihood of events in large populations. In the context of CKDum, prevalence can be modelled with a binomial distribution, which describes the number of occurrences of an event (e.g. CKDum cases) in a fixed number of trials (e.g. individuals in a village). Let p be the global average prevalence of CKDum (a subset of X, estimated to be lower, e.g. 1–2% in high-risk rural populations), n the population of a village, and k the number of CKDum cases in the village. The probability of observing k cases in a village of size n is given by the binomial probability formula P(k) = \binom{n}{k} p^k (1-p)^{n-k}, where \binom{n}{k} is the binomial coefficient, representing the number of ways to choose k cases from n individuals. In a world with millions of villages, the law of large numbers suggests that most villages will have a CKDum prevalence close to p. However, the central limit theorem implies that, due to random variation, some villages will exhibit a prevalence significantly higher or lower than p. This is analogous to a lottery, where a small number of tickets (villages) win the "prize" of unusually high prevalence by chance alone. The top 10 villages, a statistical inevitability: Dr. Sreehari posits that if we rank villages worldwide by CKDum prevalence, the top 10 will inevitably show rates far above the global average, purely due to statistical chance. This can be understood through extreme value theory, which studies the behavior of the tails of probability distributions. In a large sample (e.g. millions of villages), the maximum values of CKDum prevalence are expected to be significantly higher than the mean. For example, suppose p = 0.02 (2% prevalence) and a village has n = 1000 residents. The expected number of CKDum cases is n \cdot p = 20, and the standard deviation of the binomial distribution is \sigma = \sqrt{n \cdot p \cdot (1-p)} = \sqrt{1000 \cdot 0.02 \cdot 0.98} \approx 4.43. Using a normal approximation, the probability of observing, say, 50 or more cases (5% prevalence, 2.5 times the expected value) can be calculated from the z-score z = (50 - 20)/4.43 \approx 6.77. The probability of z \geq 6.77 is extremely small (on the order of 10^{-11}), but with millions of villages globally the expected number of such outliers becomes non-negligible; for instance, with 10 million villages worldwide, the expected number of villages exceeding this threshold is 10,000,000 \cdot P(z \geq 6.77), a small but positive number (a numerical sketch follows below). Thus, it is statistically inevitable that some villages will have a CKDum prevalence far exceeding the global average, even without specific causal factors. Global applicability: with millions of villages worldwide, Dr. Sreehari's framework applies universally. Whether in Sri Lanka, India, or Central America, the lottery-like distribution of CKDum explains why certain communities are disproportionately affected. Conclusion: Dr. Y. Sreehari's application of probability theory offers a proposed resolution to the CKDum mystery.
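To make the numbers above concrete, the following is a small, self-contained Python sketch of the same normal-approximation calculation. The values of p, n, the 50-case threshold, and the figure of 10 million villages are the illustrative assumptions used in the text, not measured data.

```python
import math

def normal_tail(z: float) -> float:
    """Upper-tail probability P(Z >= z) for a standard normal variable."""
    return 0.5 * math.erfc(z / math.sqrt(2))

p = 0.02               # assumed CKDum prevalence
n = 1000               # assumed village population
threshold = 50         # case count corresponding to 5% prevalence
villages = 10_000_000  # assumed number of villages worldwide

mean = n * p                        # expected cases per village (20)
sigma = math.sqrt(n * p * (1 - p))  # binomial standard deviation (~4.43)
z = (threshold - mean) / sigma      # ~6.77
tail = normal_tail(z)               # chance a single village reaches the threshold

print(f"z = {z:.2f}, P(cases >= {threshold}) ~ {tail:.1e}")
print(f"Expected such villages out of {villages:,}: {villages * tail:.1e}")
```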
Sreehari’s innovative application of probability theory offers a compelling resolution to the CKDum mystery. ... == Epidemiology == About one in ten people have chronic kidney disease. In Canada 1.9 to 2.3 million people were estimated to have CKD in 2008. CKD affected an estimated 13.9% of U.S. adults aged 18 years and older in the period from 2017 to 2020. In 2007 8.8% of the population of Great Britain and Northern Ireland had symptomatic CKD. Chronic kidney disease was the cause of 956,000 deaths globally in 2013, up from 409,000 deaths in 1990. === Chronic kidney disease of unknown aetiology === The cause of chronic kidney disease is sometimes unknown; it is referred to as chronic kidney disease of unknown aetiology (CKDu). As of 2020 a rapidly progressive chronic kidney disease, unexplained by diabetes and hypertension, had increased dramatically in prevalence over a few decades in several regions in Central America and Mexico, a CKDu referred to as the Mesoamerican nephropathy (MeN). It was estimated in 2013 that at least 20,000 men had died prematurely, some in their 20s and 30s; a figure of 40,000 per year was estimated in 2020. In some affected areas CKD mortality was five times the national rate. MeN primarily affects men working as sugarcane labourers. The cause is unknown, but in 2020 the science found a clearer connection between heavy labour in high temperatures and incidence of CKDu; improvements such as regular access to water, rest and shade, can significantly decrease the potential CKDu incidence. CKDu also affects people in Sri Lanka where it is the eighth largest cause of in-hospital mortality. === Race === African, Hispanic, and South Asian (particularly those from Pakistan, Sri Lanka, Bangladesh, and India) populations are at high risk of developing CKD. Africans are at greater risk due to the number of people affected with hypertension among them. As an example, 37% of ESKD cases in African Americans can be attributed to high blood pressure, compared with 19% among Caucasians. Treatment efficacy also differs between racial groups. Administration of antihypertensive drugs generally halts disease progression in white populations but has little effect in slowing kidney disease among black people, and additional treatment such as bicarbonate therapy is often required. While lower socioeconomic status contributes to the number of people affected with CKD, differences in the number of people affected by CKD are still evident between Africans and Whites when controlling for environmental factors. Although CKD of unknown etiology was first documented among sugar cane workers in Costa Rica in the 1970s, it may well have affected plantation laborers since the introduction of sugar cane farming to the Caribbean in the 1600s. In colonial times the death records of slaves on sugar plantations were much higher than for slaves forced into other labor. ==== Denial of care ==== Denial of care in chronic kidney disease treatment and management is a significant issue for minority populations. This can be due to healthcare provider prejudice, structural barriers, and health insurance coverage disparities. Healthcare provider biases can lead to under-treatment, misdiagnosis, or delayed diagnosis. Structural barriers, such as lack of insurance and limited healthcare facilities, hinder access to timely care. Furthermore, health insurance coverage disparities, with minority populations lacking adequate coverage, contribute to these disparities. 
Denial of care worsens health outcomes and perpetuates existing health inequities. ==== Race-based kidney function metric ==== Race-based kidney function metrics, particularly the race-based normalization of creatinine, pose ethical challenges in diagnosing and managing chronic kidney disease (CKD). While certain racial and ethnic groups are at higher risk, using race to set a reference range may reinforce stereotypes and perpetuate health disparities. This approach fails to account for the complex interplay of genetic, environmental, and social factors influencing kidney function. Depending solely on race-based metrics may lead to misdiagnosis or underdiagnosis in minority populations. Alternative approaches that consider socioeconomic status, environmental exposures, and genetic vulnerability are needed to accurately assess kidney function and address CKD care disparities. == Society and culture == === Organisations === The International Society of Nephrology is an international body representing specialists in kidney diseases. ==== United States ==== The National Kidney Foundation is a national organization representing people with chronic kidney diseases and professionals who treat kidney diseases. The American Kidney Fund is a national nonprofit organization providing treatment-related financial assistance to one of every five people undergoing dialysis each year. The Renal Support Network is a nonprofit, patient-focused, patient-run organization that provides non-medical services to those affected by CKD. The American Association of Kidney Patients is a nonprofit, patient-centric group focused on improving the health and well-being of people with CKD and those undergoing dialysis. The Renal Physicians Association is an association representing nephrology professionals. ==== United Kingdom ==== CKD was said to be costing the National Health Service about £1.5 billion a year in 2020. Kidney Care UK and the UK National Kidney Federation represent people with chronic kidney disease. The Renal Association represents kidney physicians and works closely with the National Service Framework for kidney disease. ==== Australia ==== Kidney Health Australia serves that country. == Other animals == === Dogs === The incidence rate of CKD in dogs was 15.8 cases per 10,000 dog-years at risk, and the mortality rate was 9.7 deaths per 10,000 dog-years at risk (rates developed from a population of 600,000 insured Swedish dogs; one dog-year at risk is one dog at risk for one year). The breeds with the highest rates were the Bernese mountain dog, miniature schnauzer, and boxer. The Swedish elkhound, Siberian husky, and Finnish spitz were the breeds with the lowest rates. === Cats === Cats with chronic kidney disease may have a buildup of waste products usually removed by the kidneys. They may appear lethargic, unkempt, and lose weight, and may have hypertension. The disease can prevent appropriate concentration of urine, causing cats to urinate in greater volumes and drink more water to compensate. Loss of important proteins and vitamins through the urine may cause abnormal metabolism and loss of appetite. The buildup of acids within the blood can result in acidosis, which can lead to anemia (sometimes indicated by pink or whitish gums, although normal-colored gums do not guarantee that anemia is not present or developing) and lethargy. == Research == As of 2019, several compounds are in development for the treatment of CKD.
These include the angiotensin receptor blocker (ARB) olmesartan medoxomil; and sulodexide, a mixture of low molecular weight heparin and dermatan sulfate. == References == == External links == Dialysis Complications of Chronic Renal Failure at eMedicine Chronic Renal Failure Information Archived 2013-03-15 at the Wayback Machine from Great Ormond Street Hospital
Wikipedia/Chronic_renal_disease
Kashin–Beck disease (KBD) is a chronic, endemic type of osteochondropathy (disease of the bone) that is mainly distributed from northeastern to southwestern China, including 15 provinces. As of 2011, Tibet has the highest incidence rate of KBD in China. Southeast Siberia and North Korea are other affected areas. KBD usually involves children ages 5–15. To date, more than a million individuals have had KBD. The symptoms of KBD include joint pain, morning stiffness in the joints, disturbances of flexion and extension in the elbows, enlarged inter-phalangeal joints, and limited motion in many joints of the body. Death of cartilage cells in the growth plate and articular surface is the basic pathologic feature; this can result in growth retardation and secondary osteoarthrosis. Histological diagnosis of KBD is particularly difficult; clinical and radiological examinations have proved to be the best means of identifying KBD. Little is known about the early stages of KBD before the visible appearance of the disease becomes evident in the destruction of the joints. This disease has been recognized for over 150 years, but its cause has not yet been completely defined. Currently, the accepted potential causes of KBD include mycotoxins present in grain, trace mineral deficiency in nutrition, and high levels of fulvic acid in drinking water. Selenium and iodine have been considered the major deficiencies associated with KBD. Mycotoxins produced by fungi can contaminate grain, which may cause KBD because mycotoxins cause the production of free radicals. T-2 is the mycotoxin implicated in KBD; it is produced by members of several fungal genera. T-2 toxin can cause lesions in hematopoietic, lymphoid, gastrointestinal, and cartilage tissues, especially in physeal cartilage. Fulvic acid present in drinking water damages cartilage cells. Selenium supplementation in selenium-deficient areas has been shown to prevent this disease. However, selenium supplementation in some areas showed no significant effect, meaning that deficiency of selenium may not be the dominant cause of KBD. Recently, a significant association between SNP rs6910140 of COL9A1 and Kashin–Beck disease was discovered, suggesting a role of COL9A1 in the development of Kashin–Beck disease. == Cause == The cause of KBD remains controversial. Studies of the pathogenesis and risk factors of KBD have proposed selenium deficiency, inorganic (e.g. manganese, phosphate) and organic matter (humic and fulvic acids) in drinking water, and fungi on self-produced storage grain (Alternaria sp., Fusarium sp.) producing trichothecene (T-2) mycotoxins. Most authors accept that the cause of KBD is multifactorial, selenium deficiency being the underlying factor that predisposes the target cells (chondrocytes) to oxidative stress from free-radical carriers, such as mycotoxins in storage grain and fulvic acid in drinking water. In Tibet, epidemiological studies carried out in 1995–1996 attempted to correlate cases of KBD with fungal contamination of barley grains by Alternaria sp., Trichotecium sp., Cladosporium sp. and Drechslera sp. A severe selenium deficiency was documented as well, but selenium status was not associated with the disease, suggesting that selenium deficiency alone could not explain the occurrence of KBD in the villages under study. An association with the gene Peroxisome Proliferator-Activated Receptor Gamma Coactivator 1 Beta (PPARGC1B) has been reported.
This gene is a transcription factor, and mutations in it would be expected to affect several other genes. == Prevention == Prevention of Kashin–Beck disease has a long history. Intervention strategies have mostly been based on one of the three major theories of its cause. Selenium supplementation, with or without additional antioxidant therapy (vitamin E and vitamin C), has been reported to be successful, but in other studies no significant decrease could be shown compared to a control group. Major drawbacks of selenium supplementation are logistic difficulties (including daily or weekly intake and drug supply), potential toxicity (in the case of less well-controlled supplementation strategies), associated iodine deficiency (which should be corrected before selenium supplementation to prevent further deterioration of thyroid status), and low compliance. The latter was certainly the case in Tibet, where selenium supplementation was implemented from 1987 to 1994 in areas of high endemicity. With the mycotoxin theory in mind, baking of grains before storage was proposed in Guangxi province, but results have not been reported in the international literature. Changing the grain source has been reported to be effective in Heilongjiang Province and North Korea. With respect to the role of drinking water, changing water sources to deep wells has been reported to decrease the X-ray metaphyseal detection rate in different settings. In general, however, the effect of preventive measures remains controversial, due to methodological problems (no randomised controlled trials), lack of documentation, or, as discussed above, inconsistency of results. == Treatment == Treatment of KBD is palliative. Surgical corrections have been made with success by Chinese and Russian orthopedists. By the end of 1992, Médecins Sans Frontières-Belgium started a physical therapy programme aimed at alleviating the symptoms of KBD patients with advanced joint impairment and pain (mainly adults) in Nyemo county, Lhasa prefecture. Physical therapy had significant effects on joint mobility and joint pain in KBD patients. Later on (1994–1996), the programme was extended to several other counties and prefectures in Tibet. == Epidemiology == Kashin–Beck disease occurrence is limited to 13 provinces and two autonomous regions of China. It has also been reported in Siberia and North Korea, but incidence in these regions is reported to have decreased with socio-economic development. KBD is estimated to affect some 2 million to 3 million people across China, and 30 million are living in endemic areas. Life expectancy in KBD regions has been reported to be significantly decreased in relation to selenium deficiency and Keshan disease (endemic juvenile dilative cardiomyopathy). The prevalence of KBD in Tibet varies strongly from valley to valley and village to village. The prevalence of clinical symptoms suggestive of KBD reaches 100% in 5- to 15-year-old children in at least one village, and prevalence rates of over 50% are not uncommon. A clinical prevalence survey carried out in Lhasa prefecture yielded a figure of 11.4% for a study population of approximately 50,000 inhabitants. As in other regions of China, farmers are by far the most affected population group. == Eponymy == The condition was named after Russian military physicians Evgeny Vladimirovich Bek (1865–1915) and Nicolai Ivanowich Kashin (1825–1872).
Because of varied transliteration from Cyrillic script into the Latin script of both German orthography and English orthography, the disease name has been spelled variously as Kashin–Beck disease, Kashin-Bek disease, and Kaschin-Beck disease. The noneponymous names endemic osteoarthritis, osteoarthritis deformans endemica, and osteoarthritis deformans have also been used. == See also == Iodine deficiency in China == References == == External links == Kashin–Beck disease
Wikipedia/Kashin–Beck_disease
Computer science is the study of computation, information, and automation. Computer science spans theoretical disciplines (such as algorithms, theory of computation, and information theory) to applied disciplines (including the design and implementation of hardware and software). Algorithms and data structures are central to computer science. The theory of computation concerns abstract models of computation and general classes of problems that can be solved using them. The fields of cryptography and computer security involve studying the means for secure communication and preventing security vulnerabilities. Computer graphics and computational geometry address the generation of images. Programming language theory considers different ways to describe computational processes, and database theory concerns the management of repositories of data. Human–computer interaction investigates the interfaces through which humans and computers interact, and software engineering focuses on the design and principles behind developing software. Areas such as operating systems, networks and embedded systems investigate the principles and design behind complex systems. Computer architecture describes the construction of computer components and computer-operated equipment. Artificial intelligence and machine learning aim to synthesize goal-orientated processes such as problem-solving, decision-making, environmental adaptation, planning and learning found in humans and animals. Within artificial intelligence, computer vision aims to understand and process image and video data, while natural language processing aims to understand and process textual and linguistic data. The fundamental concern of computer science is determining what can and cannot be automated. The Turing Award is generally recognized as the highest distinction in computer science. == History == The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks such as the abacus have existed since antiquity, aiding in computations such as multiplication and division. Algorithms for performing computations have existed since antiquity, even before the development of sophisticated computing equipment. Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623. In 1673, Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner. Leibniz may be considered the first computer scientist and information theorist, because of various reasons, including the fact that he documented the binary number system. In 1820, Thomas de Colmar launched the mechanical calculator industry when he invented his simplified arithmometer, the first calculating machine strong enough and reliable enough to be used daily in an office environment. Charles Babbage started the design of the first automatic mechanical calculator, his Difference Engine, in 1822, which eventually gave him the idea of the first programmable mechanical calculator, his Analytical Engine. He started developing this machine in 1834, and "in less than two years, he had sketched out many of the salient features of the modern computer". "A crucial step was the adoption of a punched card system derived from the Jacquard loom" making it infinitely programmable. 
In 1843, during the translation of a French article on the Analytical Engine, Ada Lovelace wrote, in one of the many notes she included, an algorithm to compute the Bernoulli numbers, which is considered to be the first published algorithm ever specifically tailored for implementation on a computer. Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information; eventually his company became part of IBM. Following Babbage, although unaware of his earlier work, Percy Ludgate in 1909 published the 2nd of the only two designs for mechanical analytical engines in history. In 1914, the Spanish engineer Leonardo Torres Quevedo published his Essays on Automatics, and designed, inspired by Babbage, a theoretical electromechanical calculating machine which was to be controlled by a read-only program. The paper also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, a prototype that demonstrated the feasibility of an electromechanical analytical engine, on which commands could be typed and the results printed automatically. In 1937, one hundred years after Babbage's impossible dream, Howard Aiken convinced IBM, which was making all kinds of punched card equipment and was also in the calculator business to develop his giant programmable calculator, the ASCC/Harvard Mark I, based on Babbage's Analytical Engine, which itself used cards and a central computing unit. When the machine was finished, some hailed it as "Babbage's dream come true". During the 1940s, with the development of new and more powerful computing machines such as the Atanasoff–Berry computer and ENIAC, the term computer came to refer to the machines rather than their human predecessors. As it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. In 1945, IBM founded the Watson Scientific Computing Laboratory at Columbia University in New York City. The renovated fraternity house on Manhattan's West Side was IBM's first laboratory devoted to pure science. The lab is the forerunner of IBM's Research Division, which today operates research facilities around the world. Ultimately, the close relationship between IBM and Columbia University was instrumental in the emergence of a new scientific discipline, with Columbia offering one of the first academic-credit courses in computer science in 1946. Computer science began to be established as a distinct academic discipline in the 1950s and early 1960s. The world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge Computer Laboratory in 1953. The first computer science department in the United States was formed at Purdue University in 1962. Since practical computers became available, many applications of computing have become distinct areas of study in their own rights. == Etymology and scope == Although first proposed in 1956, the term "computer science" appears in a 1959 article in Communications of the ACM, in which Louis Fein argues for the creation of a Graduate School in Computer Sciences analogous to the creation of Harvard Business School in 1921. 
Louis justifies the name by arguing that, like management science, the subject is applied and interdisciplinary in nature, while having the characteristics typical of an academic discipline. His efforts, and those of others such as numerical analyst George Forsythe, were rewarded: universities went on to create such departments, starting with Purdue in 1962. Despite its name, a significant amount of computer science does not involve the study of computers themselves. Because of this, several alternative names have been proposed. Certain departments of major universities prefer the term computing science, to emphasize precisely that difference. Danish scientist Peter Naur suggested the term datalogy, to reflect the fact that the scientific discipline revolves around data and data treatment, while not necessarily involving computers. The first scientific institution to use the term was the Department of Datalogy at the University of Copenhagen, founded in 1969, with Peter Naur being the first professor in datalogy. The term is used mainly in the Scandinavian countries. An alternative term, also proposed by Naur, is data science; this is now used for a multi-disciplinary field of data analysis, including statistics and databases. In the early days of computing, a number of terms for the practitioners of the field of computing were suggested (albeit facetiously) in the Communications of the ACM—turingineer, turologist, flow-charts-man, applied meta-mathematician, and applied epistemologist. Three months later in the same journal, comptologist was suggested, followed next year by hypologist. The term computics has also been suggested. In Europe, terms derived from contracted translations of the expression "automatic information" (e.g. "informazione automatica" in Italian) or "information and mathematics" are often used, e.g. informatique (French), Informatik (German), informatica (Italian, Dutch), informática (Spanish, Portuguese), informatika (Slavic languages and Hungarian) or pliroforiki (πληροφορική, which means informatics) in Greek. Similar words have also been adopted in the UK (as in the School of Informatics, University of Edinburgh). "In the U.S., however, informatics is linked with applied computing, or computing in the context of another domain." A folkloric quotation, often attributed to—but almost certainly not first formulated by—Edsger Dijkstra, states that "computer science is no more about computers than astronomy is about telescopes." The design and deployment of computers and computer systems is generally considered the province of disciplines other than computer science. For example, the study of computer hardware is usually considered part of computer engineering, while the study of commercial computer systems and their deployment is often called information technology or information systems. However, there has been exchange of ideas between the various computer-related disciplines. Computer science research also often intersects other disciplines, such as cognitive science, linguistics, mathematics, physics, biology, Earth science, statistics, philosophy, and logic. Computer science is considered by some to have a much closer relationship with mathematics than many scientific disciplines, with some observers saying that computing is a mathematical science. 
Early computer science was strongly influenced by the work of mathematicians such as Kurt Gödel, Alan Turing, John von Neumann, Rózsa Péter and Alonzo Church and there continues to be a useful interchange of ideas between the two fields in areas such as mathematical logic, category theory, domain theory, and algebra. The relationship between computer science and software engineering is a contentious issue, which is further muddied by disputes over what the term "software engineering" means, and how computer science is defined. David Parnas, taking a cue from the relationship between other engineering and science disciplines, has claimed that the principal focus of computer science is studying the properties of computation in general, while the principal focus of software engineering is the design of specific computations to achieve practical goals, making the two separate but complementary disciplines. The academic, political, and funding aspects of computer science tend to depend on whether a department is formed with a mathematical emphasis or with an engineering emphasis. Computer science departments with a mathematics emphasis and with a numerical orientation consider alignment with computational science. Both types of departments tend to make efforts to bridge the field educationally if not across all research. == Philosophy == === Epistemology of computer science === Despite the word science in its name, there is debate over whether or not computer science is a discipline of science, mathematics, or engineering. Allen Newell and Herbert A. Simon argued in 1975, Computer science is an empirical discipline. We would have called it an experimental science, but like astronomy, economics, and geology, some of its unique forms of observation and experience do not fit a narrow stereotype of the experimental method. Nonetheless, they are experiments. Each new machine that is built is an experiment. Actually constructing the machine poses a question to nature; and we listen for the answer by observing the machine in operation and analyzing it by all analytical and measurement means available. It has since been argued that computer science can be classified as an empirical science since it makes use of empirical testing to evaluate the correctness of programs, but a problem remains in defining the laws and theorems of computer science (if any exist) and defining the nature of experiments in computer science. Proponents of classifying computer science as an engineering discipline argue that the reliability of computational systems is investigated in the same way as bridges in civil engineering and airplanes in aerospace engineering. They also argue that while empirical sciences observe what presently exists, computer science observes what is possible to exist and while scientists discover laws from observation, no proper laws have been found in computer science and it is instead concerned with creating phenomena. Proponents of classifying computer science as a mathematical discipline argue that computer programs are physical realizations of mathematical entities and programs that can be deductively reasoned through mathematical formal methods. Computer scientists Edsger W. Dijkstra and Tony Hoare regard instructions for computer programs as mathematical sentences and interpret formal semantics for programming languages as mathematical axiomatic systems. 
=== Paradigms of computer science === A number of computer scientists have argued for the distinction of three separate paradigms in computer science. Peter Wegner argued that those paradigms are science, technology, and mathematics. Peter Denning's working group argued that they are theory, abstraction (modeling), and design. Amnon H. Eden described them as the "rationalist paradigm" (which treats computer science as a branch of mathematics, which is prevalent in theoretical computer science, and mainly employs deductive reasoning), the "technocratic paradigm" (which might be found in engineering approaches, most prominently in software engineering), and the "scientific paradigm" (which approaches computer-related artifacts from the empirical perspective of natural sciences, identifiable in some branches of artificial intelligence). Computer science focuses on methods involved in design, specification, programming, verification, implementation and testing of human-made computing systems. == Fields == As a discipline, computer science spans a range of topics from theoretical studies of algorithms and the limits of computation to the practical issues of implementing computing systems in hardware and software. CSAB, formerly called Computing Sciences Accreditation Board—which is made up of representatives of the Association for Computing Machinery (ACM), and the IEEE Computer Society (IEEE CS)—identifies four areas that it considers crucial to the discipline of computer science: theory of computation, algorithms and data structures, programming methodology and languages, and computer elements and architecture. In addition to these four areas, CSAB also identifies fields such as software engineering, artificial intelligence, computer networking and communication, database systems, parallel computation, distributed computation, human–computer interaction, computer graphics, operating systems, and numerical and symbolic computation as being important areas of computer science. === Theoretical computer science === Theoretical computer science is mathematical and abstract in spirit, but it derives its motivation from practical and everyday computation. It aims to understand the nature of computation and, as a consequence of this understanding, provide more efficient methodologies. ==== Theory of computation ==== According to Peter Denning, the fundamental question underlying computer science is, "What can be automated?" Theory of computation is focused on answering fundamental questions about what can be computed and what amount of resources are required to perform those computations. In an effort to answer the first question, computability theory examines which computational problems are solvable on various theoretical models of computation. The second question is addressed by computational complexity theory, which studies the time and space costs associated with different approaches to solving a multitude of computational problems. The famous P = NP? problem, one of the Millennium Prize Problems, is an open problem in the theory of computation. ==== Information and coding theory ==== Information theory, closely related to probability and statistics, is related to the quantification of information. This was developed by Claude Shannon to find fundamental limits on signal processing operations such as compressing data and on reliably storing and communicating data. 
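As a small, concrete illustration of quantifying information, the sketch below computes the Shannon entropy of a string's symbol distribution, the quantity that bounds how far the string can be losslessly compressed. It is a minimal Python example; the sample strings are invented for the demonstration.

```python
import math
from collections import Counter

def shannon_entropy(message: str) -> float:
    """Entropy in bits per symbol of the empirical symbol distribution."""
    counts = Counter(message)
    total = len(message)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A string dominated by one symbol carries little information per character...
print(round(shannon_entropy("aaaaaaab"), 3))  # ~0.544 bits per symbol
# ...while a uniform mix of four symbols carries two bits per symbol.
print(round(shannon_entropy("abcdabcd"), 3))  # 2.0 bits per symbol
```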
Coding theory is the study of the properties of codes (systems for converting information from one form to another) and their fitness for a specific application. Codes are used for data compression, cryptography, error detection and correction, and more recently also for network coding. Codes are studied for the purpose of designing efficient and reliable data transmission methods. ==== Data structures and algorithms ==== Data structures and algorithms are the studies of commonly used computational methods and their computational efficiency. ==== Programming language theory and formal methods ==== Programming language theory is a branch of computer science that deals with the design, implementation, analysis, characterization, and classification of programming languages and their individual features. It falls within the discipline of computer science, both depending on and affecting mathematics, software engineering, and linguistics. It is an active research area, with numerous dedicated academic journals. Formal methods are a particular kind of mathematically based technique for the specification, development and verification of software and hardware systems. The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analysis can contribute to the reliability and robustness of a design. They form an important theoretical underpinning for software engineering, especially where safety or security is involved. Formal methods are a useful adjunct to software testing since they help avoid errors and can also give a framework for testing. For industrial use, tool support is required. However, the high cost of using formal methods means that they are usually only used in the development of high-integrity and life-critical systems, where safety or security is of utmost importance. Formal methods are best described as the application of a fairly broad variety of theoretical computer science fundamentals, in particular logic calculi, formal languages, automata theory, and program semantics, but also type systems and algebraic data types, to problems in software and hardware specification and verification. === Applied computer science === ==== Computer graphics and visualization ==== Computer graphics is the study of digital visual contents and involves the synthesis and manipulation of image data. The study is connected to many other fields in computer science, including computer vision, image processing, and computational geometry, and is heavily applied in the fields of special effects and video games. ==== Image and sound processing ==== Information can take the form of images, sound, video or other multimedia. Bits of information can be streamed via signals. Its processing is the central notion of informatics, the European view on computing, which studies information-processing algorithms independently of the type of information carrier – whether it is electrical, mechanical or biological. This field plays an important role in information theory, telecommunications, and information engineering, and has applications in medical image computing and speech synthesis, among others. The question "What is the lower bound on the complexity of fast Fourier transform algorithms?" is one of the unsolved problems in theoretical computer science.
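To ground the signal-processing view sketched above, here is a deliberately naive discrete Fourier transform in pure Python. It is a direct O(n^2) implementation for illustration only; fast Fourier transform algorithms compute the same result in O(n log n) time, and, as noted, the exact lower bound on their complexity remains an open question.

```python
import cmath

def dft(signal):
    """Naive discrete Fourier transform: O(n^2) complex multiplications."""
    n = len(signal)
    return [
        sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
        for k in range(n)
    ]

# A pure tone completing one cycle over 8 samples concentrates its energy in bin 1.
samples = [cmath.cos(2 * cmath.pi * t / 8) for t in range(8)]
spectrum = dft(samples)
print([round(abs(x), 3) for x in spectrum])  # bins 1 and 7 have magnitude ~4, the rest ~0
```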
==== Computational science, finance and engineering ==== Scientific computing (or computational science) is the field of study concerned with constructing mathematical models and quantitative analysis techniques and using computers to analyze and solve scientific problems. A major usage of scientific computing is simulation of various processes, including computational fluid dynamics; physical, electrical, and electronic systems and circuits; societies and social situations (notably war games) along with their habitats; and interactions among biological cells. Modern computers enable optimization of such designs as complete aircraft. Notable in electrical and electronic circuit design are SPICE, as well as software for physical realization of new (or modified) designs. The latter includes essential design software for integrated circuits. ==== Human–computer interaction ==== Human–computer interaction (HCI) is the field of study and research concerned with the design and use of computer systems, mainly based on the analysis of the interaction between humans and computer interfaces. HCI has several subfields that focus on the relationship between emotions, social behavior and brain activity with computers. ==== Software engineering ==== Software engineering is the study of designing, implementing, and modifying software in order to ensure it is of high quality, affordable, maintainable, and fast to build. It is a systematic approach to software design, involving the application of engineering practices to software. Software engineering deals with the organizing and analyzing of software—it does not just deal with the creation or manufacture of new software, but also its internal arrangement and maintenance. Examples include software testing, systems engineering, technical debt and software development processes. ==== Artificial intelligence ==== Artificial intelligence (AI) aims to or is required to synthesize goal-orientated processes such as problem-solving, decision-making, environmental adaptation, learning, and communication found in humans and animals. From its origins in cybernetics and in the Dartmouth Conference (1956), artificial intelligence research has been necessarily cross-disciplinary, drawing on areas of expertise such as applied mathematics, symbolic logic, semiotics, electrical engineering, philosophy of mind, neurophysiology, and social intelligence. AI is associated in the popular mind with robotic development, but the main field of practical application has been as an embedded component in areas of software development, which require computational understanding. The starting point in the late 1940s was Alan Turing's question "Can computers think?", and the question remains effectively unanswered, although the Turing test is still used to assess computer output on the scale of human intelligence. But the automation of evaluative and predictive tasks has been increasingly successful as a substitute for human monitoring and intervention in domains of computer application involving complex real-world data. === Computer systems === ==== Computer architecture and microarchitecture ==== Computer architecture, or digital computer organization, is the conceptual design and fundamental operational structure of a computer system. It focuses largely on the way by which the central processing unit performs internally and accesses addresses in memory.
Computer engineers study computational logic and the design of computer hardware, from individual processor components and microcontrollers to personal computers, supercomputers, and embedded systems. The term "architecture" in computer literature can be traced to the work of Lyle R. Johnson and Frederick P. Brooks Jr., members of the Machine Organization department in IBM's main research center in 1959. ==== Concurrent, parallel and distributed computing ==== Concurrency is a property of systems in which several computations are executing simultaneously, and potentially interacting with each other. A number of mathematical models have been developed for general concurrent computation, including Petri nets, process calculi and the parallel random access machine model. When multiple computers are connected in a network while using concurrency, this is known as a distributed system. Computers within that distributed system have their own private memory, and information can be exchanged to achieve common goals. ==== Computer networks ==== This branch of computer science studies the construction and behavior of computer networks. It addresses their performance, resilience, security, scalability, and cost-effectiveness, along with the variety of services they can provide. ==== Computer security and cryptography ==== Computer security is a branch of computer technology with the objective of protecting information from unauthorized access, disruption, or modification while maintaining the accessibility and usability of the system for its intended users. Historical cryptography is the art of writing and deciphering secret messages. Modern cryptography is the scientific study of problems relating to distributed computations that can be attacked. Technologies studied in modern cryptography include symmetric and asymmetric encryption, digital signatures, cryptographic hash functions, key-agreement protocols, blockchain, zero-knowledge proofs, and garbled circuits. ==== Databases and data mining ==== A database is intended to organize, store, and retrieve large amounts of data easily. Digital databases are managed using database management systems to store, create, maintain, and search data, through database models and query languages. Data mining is a process of discovering patterns in large data sets. == Discoveries == The philosopher of computing Bill Rapaport noted three Great Insights of Computer Science: Gottfried Wilhelm Leibniz's, George Boole's, Alan Turing's, Claude Shannon's, and Samuel Morse's insight: there are only two objects that a computer has to deal with in order to represent "anything". All the information about any computable problem can be represented using only 0 and 1 (or any other bistable pair that can flip-flop between two easily distinguishable states, such as "on/off", "magnetized/de-magnetized", "high-voltage/low-voltage", etc.). Alan Turing's insight: there are only five actions that a computer has to perform in order to do "anything". Every algorithm can be expressed in a language for a computer consisting of only five basic instructions: move left one location; move right one location; read symbol at current location; print 0 at current location; print 1 at current location. Corrado Böhm and Giuseppe Jacopini's insight: there are only three ways of combining these actions (into more complex ones) that are needed in order for a computer to do "anything".
Only three rules are needed to combine any set of basic instructions into more complex ones: sequence: first do this, then do that; selection: IF such-and-such is the case, THEN do this, ELSE do that; repetition: WHILE such-and-such is the case, DO this. The three rules of Boehm's and Jacopini's insight can be further simplified with the use of goto (which means it is more elementary than structured programming). == Programming paradigms == Programming languages can be used to accomplish different tasks in different ways. Common programming paradigms include: Functional programming, a style of building the structure and elements of computer programs that treats computation as the evaluation of mathematical functions and avoids state and mutable data. It is a declarative programming paradigm, which means programming is done with expressions or declarations instead of statements. Imperative programming, a programming paradigm that uses statements that change a program's state. In much the same way that the imperative mood in natural languages expresses commands, an imperative program consists of commands for the computer to perform. Imperative programming focuses on describing how a program operates. Object-oriented programming, a programming paradigm based on the concept of "objects", which may contain data, in the form of fields, often known as attributes; and code, in the form of procedures, often known as methods. A feature of objects is that an object's procedures can access and often modify the data fields of the object with which they are associated. Thus object-oriented computer programs are made out of objects that interact with one another. Service-oriented programming, a programming paradigm that uses "services" as the unit of computer work, to design and implement integrated business applications and mission critical software programs. Many languages offer support for multiple paradigms, making the distinction more a matter of style than of technical capabilities. == Research == Conferences are important events for computer science research. During these conferences, researchers from the public and private sectors present their recent work and meet. Unlike in most other academic fields, in computer science, the prestige of conference papers is greater than that of journal publications. One proposed explanation for this is the quick development of this relatively new field requires rapid review and distribution of results, a task better handled by conferences than by journals. == See also == == Notes == == References == == Further reading == == External links == DBLP Computer Science Bibliography Association for Computing Machinery Institute of Electrical and Electronics Engineers
Wikipedia/computer_science
A modeling perspective in information systems is a particular way to represent pre-selected aspects of a system. Any perspective has a different focus, conceptualization, dedication and visualization of what the model is representing. The traditional distinction is between structural, functional and behavioral/processual perspectives. This, together with the rule, object, communication, and actor and role perspectives, is one way of classifying modeling approaches. == Types of perspectives == === Structural modeling perspective === This approach concentrates on describing the static structure. The main concept in this modeling perspective is the entity, which could be an object, phenomenon, concept, thing, etc. The data modeling languages have traditionally handled this perspective, examples of such being: The ER-language (Entity-Relationship) Generic Semantic Modeling language (GSM) Other approaches including: The NIAM language (Binary relationship language) Conceptual graphs (Sowa) Looking at the ER-language we have the basic components: Entities: Distinctly identifiable phenomena. Relationships: An association among the entities. Attributes: Used to give value to a property of an entity/relationship. Looking at the generic semantic modeling language we have the basic components: Constructed types built by abstraction: Aggregation, generalization, and association. Attributes. Primitive types: Data types in GSM are classified into printable and abstract types. Printable: Used to specify visible values. Abstract: Representing entities. === Functional modeling perspective === The functional modeling approach concentrates on describing the dynamic process. The main concept in this modeling perspective is the process, which could be a function, transformation, activity, action, task, etc. A well-known example of a modeling language employing this perspective is data flow diagrams. The perspective uses four symbols to describe a process, these being: Process: Illustrates transformation from input to output. Store: Data-collection or some sort of material. Flow: Movement of data or material in the process. External Entity: External to the modeled system, but interacts with it. Now, with these symbols, a process can be represented as a network of these symbols. This decomposed process is a DFD, data flow diagram. === Behavioral perspective === Behavioral perspective gives a description of system dynamics. The main concepts in behavioral perspective are states and transitions between states. State transitions are triggered by events. State Transition Diagrams (STD/STM), State charts and Petri-nets are some examples of well-known behaviorally oriented modeling languages. Different types of State Transition Diagrams are used particularly within real-time systems and telecommunications systems. === Rule perspective === Rule perspective gives a description of goals/means connections. The main concepts in rule perspective are rule, goal and constraint. A rule is something that influences the actions of a set of actors. The standard form of a rule is “IF condition THEN action/expression”. Rule hierarchies (goal-oriented modeling), Tempora and Expert systems are some examples of rule-oriented modeling. === Object perspective === The object-oriented perspective describes the world as autonomous, communicating objects. An object is an “entity” which has a unique and unchangeable identifier and a local state consisting of a collection of attributes with assignable values. 
The state can only be manipulated with a set of methods defined on the object. The value of the state can only be accessed by sending a message to the object to call on one of its methods. An event occurs when an operation is triggered by receiving a message, and the trace of the events during the existence of the object is called the object’s life cycle or the process of the object. Several objects that share the same definitions of attributes and operations can be part of an object class. The perspective is originally based on design and programming of object-oriented systems. Unified Modelling Language (UML) is a well-known language for modeling with an object perspective. === Communication perspective === This perspective is based on language/action theory from philosophical linguistics. The basic assumption in this perspective is that persons/objects cooperate on a process/action through communication between them. An illocutionary act consists of five elements: speaker, hearer, time, location and circumstances. There is a reason and goal for the communication, where the participants in a communication act are oriented towards mutual agreement. In a communication act, the speaker generally can raise three claims: truth (referring to an object), justice (referring to the social world of the participants) and a claim to sincerity (referring to the subjective world of the speaker). === Actor and role perspective === Actor and role perspective is a description of organisational and system structure. An actor can be defined as a phenomenon that influences the history of another actor, whereas a role can be defined as the behaviour which is expected of an actor, amongst other actors, when filling the role. Modeling within these perspectives is based both on work with object-oriented programming languages and work with intelligent agents in artificial intelligence. i* is an example of an actor-oriented language. == See also == Domain-Specific Modeling (DSM) Glossary of Unified Modeling Language terms General-purpose modeling Model Driven Engineering (MDE) Modeling language Three schema approach for data modeling View model == References == == Further reading == Ingeman Arbnor and Björn Bjerke (1997). Methodology for Creating Business Knowledge. California: Sage Publications. (Third Edition 2009).
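As a rough, hypothetical illustration of how the structural, behavioral, and object perspectives described in this article can surface in code, the following Python sketch models an Order entity with attributes (structural perspective), a small set of permissible states and transitions (behavioral perspective), and methods through which alone the state can be changed (object perspective); the class, attribute, and state names are invented and are not taken from any of the modeling languages mentioned above.

from dataclasses import dataclass

@dataclass
class Order:                          # structural: an entity with attributes
    order_id: str
    amount: float
    state: str = "created"            # behavioral: the current state

    _transitions = {("created", "paid"), ("paid", "shipped")}  # permissible state transitions

    def transition_to(self, new_state):
        # Object perspective: the state is manipulated only through methods,
        # triggered by a message sent to the object.
        if (self.state, new_state) not in self._transitions:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

order = Order("A-1", 99.0)
order.transition_to("paid")           # an event triggers a state transition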
Wikipedia/Modeling_perspectives
Application software is any computer program that is intended for end-user use – not operating, administering or programming the computer. An application (app, application program, software application) is any program that can be categorized as application software. Common types of applications include word processors, media players and accounting software. The term application software refers to all applications collectively and can be used to differentiate from system and utility software. Applications may be bundled with the computer and its system software or published separately. Applications may be proprietary or open-source. The short term app (coined in 1981 or earlier) became popular with the 2008 introduction of the iOS App Store, to refer to applications for mobile devices such as smartphones and tablets. Later, with the introduction of the Mac App Store (in 2010) and Windows Store (in 2011), the term was extended in popular use to include desktop applications. == Terminology == The delineation between system software such as operating systems and application software is not exact and is occasionally the object of controversy. For example, one of the key questions in the United States v. Microsoft Corp. antitrust trial was whether Microsoft's Internet Explorer web browser was part of its Windows operating system or a separate piece of application software. As another example, the GNU/Linux naming controversy is, in part, due to disagreement about the relationship between the Linux kernel and the operating systems built over this kernel. In some types of embedded systems, the application software and the operating system software may be indistinguishable by the user, as in the case of software used to control a VCR, DVD player, or microwave oven. The above definitions may exclude some applications that may exist on some computers in large organizations. For an alternative definition of an app, see Application Portfolio Management. When used as an adjective, "application" is not restricted to meaning "of or on application software". For example, concepts such as application programming interface (API), application server, application virtualization, application lifecycle management and portable application apply to all computer programs alike, not just application software. === Killer app === Sometimes a new and popular application arises that runs on only one platform, increasing the desirability of that platform. This is called a killer application or killer app, a term coined in the late 1980s. For example, VisiCalc was the first modern spreadsheet software for the Apple II and helped sell the then-new personal computers into offices. For the BlackBerry, it was its email software. === Platform specific naming === Some applications are available for multiple platforms while others only work on one and are thus called, for example, a geography application for Microsoft Windows, or an Android application for education, or a Linux game. == Classification == There are many different and alternative ways to classify application software. From the legal point of view, application software is mainly classified with a black-box approach, about the rights of its end-users or subscribers (with eventual intermediate and tiered subscription levels). Software applications are also classified with respect to the programming language in which the source code is written or executed, and concerning their purpose and outputs. 
=== By property and use rights === Application software is usually distinguished into two main classes: closed source vs open source software applications, and free or proprietary software applications. Proprietary software is placed under exclusive copyright, and a software license grants limited usage rights. The open-closed principle states that software may be "open only for extension, but not for modification". Such applications can only get add-ons from third parties. Free and open-source software (FOSS) may be run, distributed, sold, or extended for any purpose, and, being open, may be modified or reverse-engineered in the same way. FOSS software applications released under a free license may be perpetual and also royalty-free. In some cases, the owner, the holder or a third-party enforcer of any right (copyright, trademark, patent, or ius in re aliena) is entitled to add exceptions, limitations, time decays or expiry dates to the license terms of use. Public-domain software is a type of FOSS which is royalty-free and (openly or reservedly) can be run, distributed, modified, reverse-engineered, republished, or used to create derivative works without any copyright attribution and therefore without revocation. It can even be sold, but without transferring the public domain property to other single subjects. Public-domain software can be released under an (un)licensing legal statement, which enforces those terms and conditions for an indefinite duration (for a lifetime, or forever). === By coding language === Since the development and near-universal adoption of the web, an important distinction that has emerged has been between web applications — written with HTML, JavaScript and other web-native technologies and typically requiring one to be online and running a web browser — and the more traditional native applications written in whatever languages are available for one's particular type of computer. There has been a contentious debate in the computing community regarding web applications replacing native applications for many purposes, especially on mobile devices such as smartphones and tablets. Web apps have indeed greatly increased in popularity for some uses, but the advantages of native applications make them unlikely to disappear soon, if ever. Furthermore, the two can be complementary, and even integrated. === By purpose and output === Application software can also be seen as being either horizontal or vertical. Horizontal applications are more popular and widespread, because they are general purpose, for example word processors or databases. Vertical applications are niche products, designed for a particular type of industry or business, or department within an organization. Integrated suites of software will try to handle every specific aspect possible of, for example, manufacturing or banking work, accounting, or customer service. There are many types of application software: An application suite consists of multiple applications bundled together. They usually have related functions, features, and user interfaces, and may be able to interact with each other, e.g. open each other's files. Business applications often come in suites, e.g. Microsoft Office, LibreOffice and iWork, which bundle together a word processor, a spreadsheet, etc.; but suites exist for other purposes, e.g. graphics or music. Enterprise software addresses the needs of an entire organization's processes and data flows, across several departments, often in a large distributed environment. 
Examples include enterprise resource planning systems, customer relationship management (CRM) systems, data replication engines, and supply chain management software. Departmental Software is a sub-type of enterprise software with a focus on smaller organizations or groups within a large organization. (Examples include travel expense management and IT Helpdesk.) Enterprise infrastructure software provides common capabilities needed to support enterprise software systems. (Examples include databases, email servers, and systems for managing networks and security.) Application platform as a service (aPaaS) is a cloud computing service that offers development and deployment environments for application services. Information worker software lets users create and manage information, often for individual projects within a department, in contrast to enterprise management. Examples include time management, resource management, analytical, collaborative and documentation tools. Word processors, spreadsheets, email and blog clients, personal information systems, and individual media editors may aid in multiple information worker tasks. Content access software is used primarily to access content without editing, but may include software that allows for content editing. Such software addresses the needs of individuals and groups to consume digital entertainment and published digital content. (Examples include media players, web browsers, and help browsers.) Educational software is related to content access software, but has the content or features adapted for use by educators or students. For example, it may deliver evaluations (tests), track progress through material, or include collaborative capabilities. Simulation software simulates physical or abstract systems for either research, training, or entertainment purposes. Media development software generates print and electronic media for others to consume, most often in a commercial or educational setting. This includes graphic-art software, desktop publishing software, multimedia development software, HTML editors, digital-animation editors, digital audio and video composition, and many others. Product engineering software is used in developing hardware and software products. This includes computer-aided design (CAD), computer-aided engineering (CAE), computer language editing and compiling tools, integrated development environments, and application programmer interfaces. Entertainment Software can refer to video games, screen savers, programs to display motion pictures or play recorded music, and other forms of entertainment which can be experienced through the use of a computing device. === By platform === Applications can also be classified by computing platforms such as a desktop application for a particular operating system, delivery network such as in cloud computing and Web 2.0 applications, or delivery devices such as mobile apps for mobile devices. The operating system itself can be considered application software when performing simple calculating, measuring, rendering, and word processing tasks not used to control hardware via a command-line interface or graphical user interface. This does not include application software bundled within operating systems such as a software calculator or text editor. 
=== Information worker software === Accounting software Data management Contact manager Spreadsheet Database software Documentation Document automation Word processor Desktop publishing software Diagramming software Presentation software Email Blog software Enterprise resource planning Financial software Banking software Clearing systems Financial accounting software Financial software Field service management Workforce management software Project management software Calendaring software Employee scheduling software Workflow software Reservation systems === Entertainment software === Screen savers Video games Arcade video games Console games Mobile games Personal computer games Software art Demo 64K intro === Educational software === Classroom management Reference software Sales readiness software Survey management Encyclopedia software === Enterprise infrastructure software === Artificial Intelligence for IT Operations (AIOps) Business workflow software Database management system (DBMS) Digital asset management (DAM) software Document management software Geographic information system (GIS) === Simulation software === Computer simulators Scientific simulators Social simulators Battlefield simulators Emergency simulators Vehicle simulators Flight simulators Driving simulators Simulation games Vehicle simulation games === Media development software === 3D computer graphics software Animation software Graphic art software Raster graphics editor Vector graphics editor Image organizer Video editing software Audio editing software Digital audio workstation Music sequencer Scorewriter HTML editor Game development tool === Product engineering software === Hardware engineering Computer-aided engineering Computer-aided design (CAD) Computer-aided manufacturing (CAM) Finite element analysis === Software engineering === Compiler software Integrated development environment Compiler Linker Debugger Version control Game development tool License manager == See also == Software development – Creation and maintenance of software Mobile app – Software application designed to run on mobile devices Web application – Application that uses a web browser as a client Server application – Computer to access a central resource or service on a network Super-app – Mobile application that provides multiple services including financial transactions == References == == External links == Learning materials related to Application software at Wikiversity
Wikipedia/Application_program
The basic study of system design is the understanding of component parts and their subsequent interaction with one another. Systems design has appeared in a variety of fields, including sustainability, computer/software architecture, and sociology. == Product Development == If the broader topic of product development "blends the perspective of marketing, design, and manufacturing into a single approach to product development," then design is the act of taking the marketing information and creating the design of the product to be manufactured. Thus in product development, systems design involves the process of defining and developing systems, such as interfaces and data, for an electronic control system to satisfy specified requirements. Systems design could be seen as the application of systems theory to product development. There is some overlap with the disciplines of systems analysis, systems architecture and systems engineering. === Physical design === The physical design relates to the actual input and output processes of the system. This is explained in terms of how data is input into a system, how it is verified/authenticated, how it is processed, and how it is displayed. In physical design, the following requirements about the system are decided. Input requirement, Output requirements, Storage requirements, Processing requirements, System control and backup or recovery. Put another way, the physical portion of system design can generally be broken down into three sub-tasks: User Interface Design Data Design Process Design === Architecture design === Designing the overall structure of a system focuses on creating a scalable, reliable, and efficient system. For example, services like Google, Twitter, Facebook, Amazon, and Netflix exemplify large-scale distributed systems. Here are key considerations: Functional and non-functional requirements Capacity estimation Usage of relational and/or NoSQL databases Vertical scaling, horizontal scaling, sharding Load balancing Primary-secondary replication Cache and CDN Stateless and Stateful servers Datacenter georouting Message Queue, Publish-Subscribe Architecture Performance Metrics Monitoring and Logging Build, test, configure deploy automation Finding single point of failure API Rate Limiting Service Level Agreement === Machine Learning Systems Design === Machine learning systems design focuses on building scalable, reliable, and efficient systems that integrate machine learning (ML) models to solve real-world problems. ML systems require careful consideration of data pipelines, model training, and deployment infrastructure. ML systems are often used in applications such as recommendation engines, fraud detection, and natural language processing. Key components to consider when designing ML systems include: Problem Definition: Clearly define the problem, data requirements, and evaluation metrics. Success criteria often involve accuracy, latency, and scalability. Data Pipeline: Build automated pipelines to collect, clean, transform, and validate data. Model Selection and Training: Choose appropriate algorithms (e.g., linear regression, decision trees, neural networks) and train models using frameworks like TensorFlow or PyTorch. Deployment and Serving: Deploy trained models to production environments using scalable architectures such as containerized services (e.g., Docker and Kubernetes). Monitoring and Maintenance: Continuously monitor model performance, retrain as necessary, and ensure data drift is addressed. 
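As a deliberately small sketch of the stages listed above (data pipeline, model training and evaluation, serving, and monitoring), the following Python skeleton uses scikit-learn as a stand-in for the frameworks mentioned; the function names, feature format, and retraining threshold are hypothetical and only indicate where each concern would live in a real system.

from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def load_and_clean(raw_rows):
    # Data pipeline: collect, clean, transform, and validate records (simplified).
    return [(features, label) for features, label in raw_rows if label is not None]

def train(dataset):
    # Model selection and training, evaluated against a simple success criterion (accuracy).
    X = [features for features, _ in dataset]
    y = [label for _, label in dataset]
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
    model = LogisticRegression().fit(X_train, y_train)
    return model, accuracy_score(y_test, model.predict(X_test))

def serve(model, features):
    # Deployment and serving: in production this call would sit behind an API endpoint.
    return model.predict([features])[0]

def monitor(live_accuracy, threshold=0.8):
    # Monitoring and maintenance: flag retraining when performance drifts below a threshold.
    return "retrain" if live_accuracy < threshold else "ok"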
Designing an ML system involves balancing trade-offs between accuracy, latency, cost, and maintainability, while ensuring system scalability and reliability. The discipline overlaps with MLOps, a set of practices that unifies machine learning development and operations to ensure smooth deployment and lifecycle management of ML systems. == See also == == References == == Further reading == Bentley, Lonnie D.; Dittman, Kevin C.; Whitten, Jeffrey L. (2004) [1986]. System analysis and design methods. Churchman, C. West (1971). The Design of Inquiring Systems: Basic Concepts of Systems and Organization. New York: Basic Books. ISBN 0-465-01608-1. Gosling, William (1962). The design of engineering systems. New York: Wiley. Hawryszkiewycz, Igor T. (1994). Introduction to system analysis and design. Prentice Hall PTR. Levin, Mark S. (2015). Modular system design and evaluation. Springer. Maier, Mark W.; Rechtin, Eberhardt (2000). The Art of System Architecting (Second ed.). Boca Raton: CRC Press. J. H. Saltzer; D. P. Reed; D. D. Clark (1 November 1984). "End-to-end arguments in system design" (PDF). ACM Transactions on Computer Systems. 2 (4): 277–288. doi:10.1145/357401.357402. ISSN 0734-2071. S2CID 215746877. Wikidata Q56503280. Whitten, Jeffrey L.; Bentley, Lonnie D.; Dittman, Kevin C. (2004). Fundamentals of system analysis and design methods. == External links == Interactive System Design. Course by Chris Johnson, 1993 [1] Course by Prof. Birgit Weller, 2020
Wikipedia/System_design
A capability, in the systems engineering sense, is defined as the ability to execute a specified course of action. A capability may or may not be accompanied by an intention. The term is used in the defense industry but also in private industry (e.g. gap analysis). == Capability gap analysis == The Joint Capabilities Integration Development System is an important part of DoD military planning. The "Operation of the JCIDS" introduces a Capability Based Analysis (CBA) process that includes identification of capability gaps. In essence, a Capability Gap Analysis is the determination of needed capabilities that do not yet exist. The Department of Defense Architecture Framework (DoDAF) suggests the use of the Operational Activity Model (OV-5) in conducting a CGA. == See also == Capability management Operational Activity Model (OV-5) Operational Event-Trace Description (OV-6c) Joint Capabilities Integration Development System == References ==
Wikipedia/Capability_(systems_engineering)
An architectural model (in software) contains several diagrams representing static properties or dynamic (behavioral) properties of the software under design. The diagrams represent different viewpoints of the system at the appropriate scope of analysis. The diagrams are created by using available standards in which the primary aim is to illustrate a specific set of tradeoffs inherent in the structure and design of a system or ecosystem. Software architects utilize architectural models to facilitate communication and obtain peer feedback. Some key elements in a software architectural model include: Rich: For the viewpoint in question, there should be sufficient information to describe the area in detail. The information should not be lacking or vague. The goal is to minimize misunderstandings, not perpetuate them. See notes below on 'primary concern.' Rigorous: The architect has applied a specific methodology to create this particular model, and the resulting model 'looks' a particular way. A test of rigorousness may state that if two architects, in different cities, were describing the same thing, the resulting diagrams would be nearly identical (with the possible exception of visual layout, to a point). Diagram: In general, a model may refer to any abstraction that simplifies something for the sake of addressing a particular viewpoint. This definition specifically subclasses 'architectural models' to the subset of model descriptions that are represented as diagrams. Standards: Standards work when everyone knows them and everyone uses them. This allows a level of communication that cannot be achieved when each diagram is substantially different from another. Unified Modeling Language (UML) is the most often quoted standard. Primary Concern: It is easy to be too detailed by including many different needs in a single diagram. This should be avoided. It is better to draw multiple diagrams, one for each viewpoint, than to draw a 'mega diagram' that is extremely rich in content. Remember this: when building houses, the architect delivers many different diagrams. Each is used differently. Frequently the final package of plans will include diagrams with the floor plan many times: framing plan, electrical plan, heating plan, plumbing, etc. They ensure that the information provided is only what is needed. For example, a plumbing subcontractor does not need the details that an electrician would need to know. Illustrate: The idea behind creating a model is to communicate and seek valuable feedback. The goal of the diagram should be to answer a specific question and to share that answer with others to: see if they agree, and to guide their work. Rule of thumb: know what it is you want to say, and whose work you intend to influence with it. Specific Set of Tradeoffs: The architecture tradeoff analysis method (ATAM) methodology describes a process whereby software architecture can be peer-reviewed for appropriateness. ATAM does this by starting with a basic notion: there is no such thing as a design for all occasions. People can create a generic design, but then they need to alter it to specific situations based on the business requirements. In effect, people make tradeoffs. The diagram should make those specific tradeoffs visible. Therefore, before an architect creates a diagram, they should be prepared to describe, in words, which tradeoffs they are attempting to illustrate in this model. Tradeoffs Inherent in the Structure and Design: A component is not a tradeoff. 
Tradeoffs rarely translate into an image on the diagram. Tradeoffs are the first principles that produce the design models. When an architect wishes to describe or defend a particular tradeoff, the diagram can be used to defend the position. System or Ecosystem: Modeling in general can be done at different levels of abstraction. It is useful to model the architecture of a specific application, complete with components and interactions. It is also reasonable to model the systems of applications needed to deliver a complete business process (like order-to-cash). It is not commonly useful, however, to view the model of a single component and its classes as software architecture. At that level, the model, while valuable in its own right, illustrates design much more so than architecture. == See also == Service-oriented modeling framework (SOMF) == References == == External links == SEI published Software Architecture Definitions contains a list of definitions of architecture used by classic and modern authors. Architectural Model contains a definition of an architectural model from the University of Ottawa's Object Oriented Software Engineering database. Architectural Tradeoff Analysis Method (ATAM) is a method by which architecture can be evaluated for suitability and fit to requirements.
Wikipedia/Software_Architectural_Model
Health systems engineering or health engineering (often known as health care systems engineering (HCSE)) is an academic and a pragmatic discipline that approaches the health care industry, and other industries connected with health care delivery, as complex adaptive systems, and identifies and applies engineering design and analysis principles in such areas. This can overlap with biomedical engineering (BME) which focuses on design and development of various medical products; industrial engineering (IE) and operations management which involve improving organizational operations; and various health care practice fields like medicine, pharmacy, dentistry, nursing, etc. Other fields participating in this interdisciplinary area include public health, information technology, management studies, and regulatory law. People whose work implicates this field in some capacity can include members of all the above-noted fields, many of which have sub-fields targeted toward health care matters even if health or health care is not a principal focus of the overall field (e.g. management, law). Areas of biomedical engineering in this area often include clinical engineering (sometimes also called "hospital engineering") as well as those BMEs developing medical devices and pharmaceutical drugs. The industrial engineering principles employed tend to include optimization, decision analysis, human factors engineering, quality engineering, and value engineering. The field came to be in the 1950s and 1960s as an outgrowth of industrial engineering as applied to hospitals. == See also == Health system Healthcare engineering Health systems science Biological engineering Systems engineering Regulatory science Complex adaptive systems == References ==
Wikipedia/Health_systems_engineering
Object-oriented modeling (OOM) is an approach to modeling an application that is used at the beginning of the software life cycle when using an object-oriented approach to software development. The software life cycle is typically divided up into stages going from abstract descriptions of the problem to designs then to code and testing and finally to deployment. Modeling is done at the beginning of the process. The reasons to model a system before writing the code are: Communication. Users typically cannot understand programming languages or code. Model diagrams can be more understandable and can allow users to give developers feedback on the appropriate structure of the system. A key goal of the Object-Oriented approach is to decrease the "semantic gap" between the system and the real world by using terminology that is the same as the functions that users perform. Modeling is an essential tool to facilitate achieving this goal. Abstraction. A goal of most software methodologies is to first address "what" questions and then address "how" questions. I.e., first determine the functionality the system is to provide without consideration of implementation constraints and then consider how to take this abstract description and refine it into an implementable design and code given constraints such as technology and budget. Modeling enables this by allowing abstract descriptions of processes and objects that define their essential structure and behavior. Object-oriented modeling is typically done via use cases and abstract definitions of the most important objects. The most common language used to do object-oriented modeling is the Object Management Group's Unified Modeling Language (UML). == See also == Object-oriented analysis and design == References ==
Wikipedia/Object-oriented_modeling
UML color standards are a set of four colors associated with Unified Modeling Language (UML) diagrams. The coloring system indicates which of several archetypes apply to the UML object. UML typically identifies a stereotype with a bracketed comment for each object identifying whether it is a class, interface, etc. These colors were first suggested by Peter Coad, Eric Lefebvre, and Jeff De Luca in a series of articles in The Coad Letter,[1][2] and later published in their book Java Modeling In Color With UML.[3] Over hundreds of domain models, it became clear that four major "types" of classes appeared again and again, though they had different names in different domains. After much discussion, these were termed archetypes, which is meant to convey that the classes of a given archetype follow more or less the same form. That is, attributes, methods, associations, and interfaces are fairly similar among classes of a given archetype. When attempting to classify a given domain class, one typically asks about the color standards in this order: pink moment-interval — Does it represent a moment or interval of time that we need to remember and work with for legal or business reasons? Examples in business systems generally model activities involving people, places and things such as a sale, an order, a rental, an employment, making a journey, etc. yellow roles — Is it a way of participating in an activity (by either a person, place, or thing)? A person playing the role of an employee in an employment, a thing playing the role of a product in a sale, a location playing the role of a classroom for a training course, are examples of roles. blue description — Is it simply a catalog-entry-like description which classifies or 'labels' an object? For example, the make and model of a car categorises or describes a number of physical vehicles. The relationship between the blue description and green party, place or thing is a type-instance relationship based on differences in the values of data items held in the 'type' object. green party, place, or thing — Something tangible, uniquely identifiable. Typically the role-players in a system. People are green. Organizations are green. The physical objects involved in a rental such as the physical DVDs are green things. Normally, if you get through the above three questions and end up here, your class is a "green." Although the actual colors vary, most systems tend to use lighter color palettes so that black text can also be easily read on a colored background. Coad, et al., used the 4-color pastel Post-it notes,[4] and later had UML modeling tools support the color scheme by associating a color to one or more class stereotypes. Many people feel colored objects appeal to the pattern recognition section of the brain. Others advocate that you can begin a modeling process with a stack of four-color note cards or colored sticky notes. The value of color modeling was especially obvious when standing back from a model drawn or projected on a wall. That extra dimension allowed modelers to see important aspects of the models (the pink classes, for instance), and to spot areas that may need reviewing (unusual combinations of color classes linked together). The technique also made it easy to help determine aspects of the domain model – especially for newcomers to modeling. For example, by simply looking first for "pinks" in the domain, it was easy to begin to get some important classes identified for a given domain. 
It was also easy to review the standard types of attributes, methods, and so on, for applicability to the current domain effort. == See also == Upper ontology Object-oriented design == References == ^ The Coad Letter (dead) (Wayback Machine's archived version from 2006) ^ The Coad Letter: Modeling and Design Edition, Issue 44 (dead) The original color scheme was changed slightly. Further articles appeared in issues 51, 54, 58-65 and others. ^ Peter Coad, Eric Lefebvre, Jeff De Luca: Java Modeling In Color With UML: Enterprise Components and Process, Prentice Hall, 1999, ISBN 0-13-011510-X Edward Tufte: Envisioning Information, Graphics Press, 1990, ISBN 0-9613921-1-8 == External links == Developing a UI Design from a UML Color Model Stephen R. Palmer (2009). "Peter Coad's Object Modelling in Colour". Retrieved 2009-01-23. Object-oriented analysis with class archetypes Stephen R. Palmer (2002). "A New Beginning". Retrieved 2006-06-07. Appeared in The Coad Letter: Modeling and Design Edition, Issue 68
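As a hypothetical code-level illustration of the four archetypes described in this article, the following Python sketch tags each class with the colour it would carry on a diagram: Sale is a pink moment-interval, Customer is a yellow role played by a green Person, ProductDescription is a blue catalog-entry-like description, and Person is a green party; the classes and fields are invented for illustration and are not part of the published colour standard.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class Person:                  # green: a tangible, uniquely identifiable party
    name: str

@dataclass
class Customer:                # yellow: a role a Person plays in an activity
    person: Person

@dataclass
class ProductDescription:      # blue: a catalog-entry-like description (make and model)
    sku: str
    price: float

@dataclass
class Sale:                    # pink: a moment-interval to remember for business reasons
    customer: Customer
    item: ProductDescription
    timestamp: datetime

sale = Sale(Customer(Person("Ada")), ProductDescription("SKU-1", 9.99), datetime(2024, 1, 1))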
Wikipedia/Object_Modeling_in_Color
The energy systems language, also referred to as energese, or energy circuit language, or generic systems symbols, is a modelling language used for composing energy flow diagrams in the field of systems ecology. It was developed by Howard T. Odum and colleagues in the 1950s during studies of the tropical forests funded by the United States Atomic Energy Commission. == Design intent == The design intent of the energy systems language was to facilitate the generic depiction of energy flows through any scale system while encompassing the laws of physics, and in particular, the laws of thermodynamics (see energy transformation for an example). In particular, H.T. Odum aimed to produce a language which could facilitate the intellectual analysis, engineering synthesis and management of global systems such as the geobiosphere and its many subsystems. Within this aim, H.T. Odum had a strong concern that many abstract mathematical models of such systems were not thermodynamically valid. Hence he used analog computers to make system models due to their intrinsic value; that is, the electronic circuits are of value for modelling natural systems which are assumed to obey the laws of energy flow, because, in themselves, the circuits, like natural systems, also obey the known laws of energy flow, where the energy form is electrical. However, Odum was interested not only in the electronic circuits themselves but also in how they might be used as formal analogies for modeling other systems which also had energy flowing through them. As a result, Odum did not restrict his inquiry to the analysis and synthesis of any one system in isolation. The discipline that is most often associated with this kind of approach, together with the use of the energy systems language is known as systems ecology. == General characteristics == When applying the electronic circuits (and schematics) to modeling ecological and economic systems, Odum believed that generic categories, or characteristic modules, could be derived. Moreover, he felt that a general symbolic system, fully defined in electronic terms (including the mathematics thereof) would be useful for depicting real system characteristics, such as the general categories of production, storage, flow, transformation, and consumption. Central principles of electronics also therefore became central features of the energy systems language – Odum's generic symbolism. Depicted to the left is the generic symbol for storage, which Odum named the Bertalanffy module, in honor of the general systems theorist Ludwig von Bertalanffy. For Odum, in order to achieve a holistic understanding of how many apparently different systems actually affect each other, it was important to have a generic language with a massively scalable modeling capacity – to model global-to-local, ecological, physical and economic systems. The intention was, and for those who still apply it, is, to make biological, physical, ecological, economic and other system models thermodynamically, and so also energetically, valid and verifiable. As a consequence the designers of the language also aimed to include the energy metabolism of any system within the scope of inquiry. == Pictographic icons == In order to aid learning, in Modeling for all Scales Odum and Odum (2000) suggested systems might first be introduced with pictographic icons, and then later defined in the generic symbolism. 
Pictograms have therefore been used in software programs like ExtendSim to depict the basic categories of the Energy Systems Language. Some have argued that such an approach shares similar motivations to Otto Neurath's isotype project, Leibniz's (Characteristica Universalis) Enlightenment Project and Buckminster Fuller's works. == See also == == References == == External links ==
Wikipedia/Energy_systems_language
Object process methodology (OPM) is a conceptual modeling language and methodology for capturing knowledge and designing systems, specified as ISO/PAS 19450. Based on a minimal universal ontology of stateful objects and processes that transform them, OPM can be used to formally specify the function, structure, and behavior of artificial and natural systems in a large variety of domains. OPM was conceived and developed by Dov Dori. The ideas underlying OPM were published for the first time in 1995. Since then, OPM has evolved and developed. In 2002, the first book on OPM was published, and on December 15, 2015, after six years of work by ISO TC184/SC5, ISO adopted OPM as ISO/PAS 19450. A second book on OPM was published in 2016. Since 2019, OPM has become a foundation for a Professional Certificate program in Model-Based Systems Engineering - MBSE at EdX. Lectures are available as web videos on YouTube. == Overview == Object process methodology (OPM) is a conceptual modeling language and methodology for capturing knowledge and designing systems. Based on a minimal universal ontology of stateful objects and processes that transform them, OPM can be used to formally specify the function, structure, and behavior of artificial and natural systems in a large variety of domains. Catering to human cognitive abilities, an OPM model represents the system under design or study bimodally in both graphics and text for improved representation, understanding, communication, and learning. In OPM, an object is a thing that exists or might exist. Objects are stateful—they may have states, such that at each point in time, the object is at one of its states or in transition between states. A process is a thing that transforms an object by creating or consuming it, or by changing its state. OPM is bimodal; it is expressed both visually/graphically in object-process diagrams (OPD) and verbally/textually in Object-Process Language (OPL), a set of automatically generated sentences in a subset of English. A patented software package called OPCAT, for generating OPD and OPL, is freely available. == History == The shift to the object-oriented (OO) paradigm for computer programming languages, which occurred in the 1980s and 1990s, was followed by the idea that programming should be preceded by object-oriented analysis and design of the programs, and, more generally, the systems those programs represent and serve. Thus, in the early 1990s, over 30 object-oriented analysis and design methods and notations flourished, leading to what was known as the "methods war". Around that time, in 1991, Dov Dori, who then joined Technion – Israel Institute of Technology as faculty, said in his 2016 book Model-Based Systems Engineering with OPM and SysML that he: realized that just as the procedural approach to software was inadequate, so was the “pure” OO approach, which puts objects as the sole “first class” citizens, with “methods” (or “services”) being their second-class subordinate procedures. Dori published the first paper on OPM in 1995. In 1997, Unified Modeling Language (UML), by the Object Management Group (OMG), became the de facto standard for software design. UML 1.1 was submitted to the OMG in August 1997 and adopted by the OMG in November 1997. The first book on OPM, Object-Process Methodology: a Holistic Systems Paradigm, was published in 2002, and OPM has since been applied in many domains. In December 2015, the ISO adopted OPM as ISO/PAS 19450. A second book on OPM, which also covers SysML, was published in 2016. 
== Design == Object-Process Methodology (OPM) is a systems modeling paradigm that integrates two aspects inherent in any system: its structure and its behavior. Structure is represented via objects and structural relations among them, such as aggregation-participation (whole-part relation) and generalization-specialization ("is-a" relation). Behavior is represented by processes and how they transform objects: How they create or consume objects, or how they change the states of an object.: 2  OPM offers a way to model systems of almost any domain, be it artificial or natural.: x  === Modeling === OPM consists of object process diagramׂs (OPD) and a corresponding set of sentences in a subset of English, called Object Process Language (OPL). OPL is generated automatically by OPCAT, a software tool that supports modeling in OPM. Object process diagram (OPD) OPD is the one and only kind of diagram of OPM. This uniqueness of diagram kind is a major contributor to OPM's simplicity, and it is in sharp contrast to UML, which has 14 kinds of diagrams, and to SysML, which has nine such kinds. An OPD graphically describes objects, processes and links among them. Links can be structural and procedural. Structural links connect objects to objects or processes to processes, expressing the static system aspect—how the system is structured. Procedural links connect objects to processes, expressing the dynamic system aspect—how the system changes over time. The entire system is represented by a set of hierarchically organized OPDs, such that the root OPD, called the systems diagram (SD), specifies the "bird's eye" view of the system, and lower-level OPDs specify the system in increasing levels of detail. All the OPDs in the system's OPD set are "aware" of each other, with each showing the system, or part of it, at some level of detail. The entire system is specified in its entirety by the union of the details (model facts) appearing in all the OPDs. Object process language (OPL) Each OPD construct (i.e., two or more things connected by one or more links) is translated to a sentence in OPL—a subset of natural English. The power of OPL lies in the fact that it is readable by humans but also interpretable by computers. These are the stages where the most important design decisions are made. The graphics-text bimodality of OPM makes it suitable to jointly model requirements by a team that involves both the customer or his domain expert on one hand, and the system architect, modelers, and designers on the other hand.: 3  OPM model animated simulation OPM models are not just static graphical and textual representations of the system—they are also executable. A correct OPM model constructed in OPCAT can be simulated by animating it, visually expressing how the system behaves over time to achieve its function at all detail levels. An incorrect OPM model will not execute all the way through, and will indicate where and why it is stuck, effectively serving as a visual debugger. === Development === In his foreword to Dori's book Model-Based Systems Engineering with OPM and SysML, Edward F. Crawley said: OPM semantics was originally geared towards systems engineering, as it can model information, hardware, people, and regulation. However, in recent years OPM started to serve also researchers in molecular biology, yielding new published findings related to the mRNA lifecycle. 
This is a clear indication of the universality of the object-and-process ontology.: vi  == Basics == OPM has two main parts: the language and the methodology. The language is bimodal—it is expressed in two complementary ways (modalities): the visual, graphical part—a set of one or more object-process diagrams (OPDs), and a corresponding textual part—a set of sentences in object-process language (OPL), which is a subset of English. The top-level OPD is the system diagram (SD), which provides the context for the system's function. For man-made systems this function is expected to benefit a person or a group of people—the beneficiary. The function is the main process in SD, which also contains the objects involved in this process: the beneficiary, the operand (the object upon which the process operates), and possibly the attribute whose value the process changes. OPM graphical elements are divided into entities, expressed as closed shapes, and relations, expressed as links that connect entities. === Entities === Entities are the building blocks of OPM. They include objects and processes, collectively called things, and object states. Object Associations among objects constitute the object structure of the system being modeled. In OPL text, the object name shall appear in bold face with capitalization of each word. Object state An object state is a particular situation classification of an object at some point during its lifetime. At every point in time, the object is in one of its states or in transition between two of its states—from its input state to its output state. Process A process is an expression of the pattern of transformation of objects in the system. A process does not exist in isolation; it is always associated with and occurs or happens to one or more objects. A process transforms objects by creating them, consuming them, or changing their state. Thus, processes complement objects by providing the dynamic, behavioral aspect of the system. In OPL text, the process name shall appear in bold face with capitalization of each word. === Links === Structural link A structural link defines a structural relation. A structural relation shall specify an association that persists in the system for at least some interval of time. Procedural link A procedural link defines a procedural relation. A procedural relation shall specify how the system operates to attain its function, designating time dependent or conditional triggering of processes, which transform objects. Event and condition The Event-Condition-Action paradigm provides the OPM operational semantics and flow of control. An event is a point in time at which an object is created (or appears to be created from the system's perspective) or an object enters a specified state. At runtime, this process triggering initiates evaluation of the process precondition. Thus, starting a process execution has two prerequisites: (1) a triggering event, and (2) satisfaction of a precondition. Once the event triggers a process, the event ceases to exist. == Syntax and semantics == === Things === Objects and processes are symmetric in many regards and have much in common in terms of relations, such as aggregation, generalization, and characterization. To apply OPM in a useful manner, the modeler has to make the essential distinction between objects and processes, as a prerequisite for successful system analysis and design. By default, a noun shall identify an object. 
=== Thing generic attributes === OPM things have three generic attributes: Perseverance Essence Affiliation OPM thing generic attributes have the following default values: The default value of the Affiliation generic attribute of a thing is systemic. System essence shall be the primary essence of the system. Like thing essence, its values are informatical and physical. Information systems, in which the majority of things are informatical, shall be primarily informatical, while systems in which the majority of things are physical shall be primarily physical. The default value of the Essence generic attribute of a thing in a primarily informatical [physical] system shall be informatical [physical]. === Object states === Stateful and stateless objects Dov Dori explains in Model-Based Systems Engineering with OPM and SysML that "An object state is a possible situation in which an object may exist. An object state has meaning only in the context of the object to which it belongs." A stateless object shall be an object that has no specification of states. A stateful object shall be an object for which a set of permissible states are specified. In a runtime model, at any point in time, any stateful object instance is at a particular permissible state or in transition between two states. Attribute values An attribute is an object that characterizes a thing. An attribute value is a specialization of state in the sense that a value is a state of an attribute: an object has an attribute, which is a different object, to which that value is assigned for some period of time during the existence of the object exhibiting that attribute. Object state representation A state is graphically defined by a labelled, rounded-corner rectangle placed inside the owning object. It can not live without an object. In OPL text, the state name shall appear in bold face without capitalization. Initial, default, and final states Initial, final, and default state representation A state that is initial is graphically defined by a state representation with thick contour. A state that is final is graphically defined by a state representation with double contour. A state that is default is graphically defined by a state representation with an open arrow pointing diagonally from the left. The corresponding OPL sentences shall include explicit indicators for an initial, final or default state. === Links === ==== Procedural links ==== A procedural link is one of three kinds: Transforming link, which connects a transformer (an object that the process transforms) or its state with a process to model object transformation, namely generation, consumption, or state change of that object as a result of the process execution. Enabling link, which connects an enabler (an object that enables the process occurrence but is not transformed by that process) or its state, to a process, which enables the occurrence of that process. Control link, which is a procedural (transforming or enabling) link with a control modifier—the letter e (for event) or c (for condition), which adds semantics of a control element. The letter e signifies an event for triggering the linked process, while the letter c signifies a condition for execution of the linked process, or connection of two processes denoting invocation, or exception. Procedural link uniqueness OPM principle A process needs to transform at least one object. Hence, a process shall be connected via a transforming link to at least one object or object state. 
At any particular extent of abstraction, an object or any one of its states shall have exactly one role as a model element with respect to a process to which it links: the object may be a transformee or an enabler. Additionally, it can be a trigger for an event (if it has the control modifier e), or a conditioning object (if it has the control modifier c), or both. State-specified procedural links A state-specified procedural link is a detailed version of its procedural link counterpart in that rather than connecting a process to an object, it connects a process to a specific state of that object. Transforming links The three kinds of transforming links are: Consumption link: Graphically, an arrow with a closed arrowhead pointing from the consumee to the consuming process defines the consumption link. By assumption, the consumed object disappears as soon as the process begins execution. The syntax of a consumption link OPL sentence is: Processing consumes Consumee. Effect link: A transforming link specifying that the linked process affects the linked object, which is the affectee, i.e., the process causes some unspecified change in the state of the affectee. Graphically, a bidirectional arrow with two closed arrowheads, one pointing in each direction between the affecting process and the affected object, shall define the effect link. The syntax of an effect link OPL sentence is: Processing affects Affectee. Result link: Graphically, an arrow with a closed arrowhead pointing from the creating process to the resultee shall define a result link. The syntax of a result link OPL sentence is: Processing yields Resultee. Enabling links An enabling link is a procedural link specifying an enabler for a process—an object that must be present for that process to occur, but the existence and state of that object after the process is complete are the same as just before the process began. The two kinds of enabling links are: Agent and agent link: A human or a group of humans capable of intelligent decision-making, who enable a process by interacting with the system to enable or control the process throughout execution. Graphically, a line with a filled circle ("black lollipop") at the terminal end extending from the agent object to the process it enables defines an agent link. The syntax of an agent link OPL sentence is: Agent handles Processing. Instrument and instrument link: An inanimate or otherwise non-decision-making enabler of a process that cannot start or take place without the existence and availability of the instrument. State-specified transforming links State-specified consumption link: A consumption link that originates from a particular state of the consumee, meaning that the consumee must be in that state for it to be consumed by the process to which it is linked. Graphically, an arrow with a closed arrowhead pointing from the particular object state to the process, which consumes the object, defines the state-specified consumption link. State-specified result link: A result link that terminates at a specific state of the resultee, meaning that the resultee shall be in that resultant state upon its construction. Graphically, an arrow with a closed arrowhead pointing from the process to the particular object state defines the state-specified result link. The syntax OPL sentence is: Process yields qualified-state Object. 
State-specified effect links: Input and output effect links- An input link is the link from the object's input state to the transforming process, while the output link is the link from the transforming process to the object's output state. Input-output-specified effect link: A pair of effect links, where the input link originates from a particular state of the affectee and the output link originates from that process and terminates at the output state of the same affectee. Graphically, a pair of arrows with a closed arrowhead from the input state of the affectee to the affecting process and a similar arrow from that process to the state of the affectee at process terminates defines the input-output-specified effect link. The syntax OPL sentence is: Process changes Object from input-state to output-state. Input-specified effect link: A pair of effect links, where the input link originates from a particular state of the affectee and the output link originates from that process and terminates at the affectee without specifying a particular state. Graphically, a pair of arrows consisting of an arrow with a closed arrowhead from a particular state—the input state—of the affectee to the process, and a similar arrow from that process to the affectee but not to any one of its states defines the input-specified effect link. The syntax OPL sentence is: Process changes Object from input-state. Output-specified effect link: A pair of effect links, where the input (source) link originates from an affectee, and the output link originates from the process and terminates at the output (destination, resultant) state of the same affectee. Graphically, a pair of arrows consisting of an arrow with a closed arrowhead from the affectee, but not from any one of its states, to the affecting process, and a similar arrow from that process to a particular state of that affectee— the output state— defines the output-specified effect link. State-specified enabling links Originate from a specific qualifying state and terminate at a process, meaning that the process may occur if and only if the object exists at the state from which the link originates. State-specified agent link: Graphically, a line with a filled circle ("black lollipop") at the terminal end extending from the qualifying state of the agent object to the process it enables defines a state-specified agent link. The syntax OPL sentence is: Qualifying-state Agent handles Processing. State-specified instrument link: An instrument link that originates from a specific qualifying state of the instrument. Graphically, a line with an empty circle ("white lollipop") at the terminal end extending from the qualifying state of the instrument object to the process it enables defines a state-specified instrument link. The syntax OPL sentence is: Processing requires qualifying-state instrument. ==== Event-Condition-Action control ==== Preprocess object set and process precondition In order for an OPM process to start executing once it has been triggered, it needs a set of objects comprising one or more consumes, some possibly at specific states, and/or affects, collectively called the preprocess object set. At instance-level execution, each consume B in the pre-process object set of process P shall be consumed and stop to exist at the beginning of the lowest level sub-process of P which consumes B. 
Each affected (an object whose state changes) B in the preprocess object set of process P shall exit from its input state at the beginning of the lowest level sub-process of P. Post-process object set and process post-condition A set of objects, comprising one or more results, some possibly at given states, and/or affects, collectively called the post-process object set, shall result from executing a process and carrying out the transformations associated with its execution. Each resulted B in the post process object set of process P shall be created and start to exist at the end of the lowest level sub process of P which yields B. Each affected B in the post-process object set of process P shall enter its output state at the end of the lowest level sub-process of P. ==== Control links ==== An event link and a condition link express an event and a condition, respectively. Control links occur either between an object and a process or between two processes. Event links Triggering a process initiates an attempt to execute the process, but it does not guarantee success of this attempt. The triggering event forces an evaluation of the process' precondition for satisfaction, which, if and only if satisfied, allows process execution to proceed and the process becomes active. Regardless of whether the precondition is satisfied or not, the event will be lost. If the precondition is not satisfied, process execution will not occur until another event activates the process and a successful precondition evaluation allows the process to execute. Basic transforming event links: A consumption event link is a link between an object and a process, which an instance of the object activates. Consumption event link: Graphically, an arrow with a closed arrowhead pointing from the object to the process with the small letter e (for event). The syntax of a consumption event link OPL sentence is: Object triggers Process, which consumes Object. Effect event link: Graphically, a bidirectional arrow with closed arrowheads at each end between the object and the process with a small letter e (for event). The syntax of an effect event link OPL sentence is: Object triggers Process, which affects Object. Basic enabling event links: Agent event link: An agent event link is an enabling link from an agent object to the process that it activates and enables. Graphically, a line with a filled circle ("black lollipop") at the terminal end extending from an agent object to the process it activates and enables with a small letter e (for event). The syntax of an agent event link OPL sentence is: Agent triggers and handles Process. Instrument event link: Graphically, a line with an empty circle ("white lollipop') at the terminal end extending from the instrument object to the process it activates and enables with a small letter e (for event). The syntax of an instrument event link OPL sentence is: Instrument triggers Process, which requires Instrument. State-specified transforming event links: State-specified consumption event link: A state-specified consumption event link is a consumption link that originates from a specific state of an object and terminates at a process, which an instance of the object activates. Graphically, an arrow with a closed arrowhead pointing from the object state to the process with the small letter e (for event). The syntax of a state-specified consumption event link OPL sentence is: Specified-state Object triggers Process, which consumes Object. 
Input-output-specified effect event link: An input-output-specified effect event link is an input-output-specified effect link with the additional meaning of activating the affecting process when the object enters the specified input state. Graphically, the input-output-specified effect link with a small letter e (for event). The syntax of an input-output specified effect event link OPL sentence is: Input-state Object triggers Process, which changes Object from input-state to output-state. Input-specified effect event link: An input-specified effect event link is an input-specified effect link with the additional meaning of activating the affecting process when the object enters the specified input state. Graphically, the input-specified effect link with a small letter e (for event. The syntax of an input-specified effect event link OPL sentence is: Input-state Object triggers Process, which changes Object from input-state. Output-specified effect event link: An output-specified effect event link is an output-specified effect link with the additional meaning of activating the affecting process when the object comes into existence. Graphically, the output-specified effect link with a small letter e (for event). The syntax of an output-specified effect event link OPL sentence is: Object in any state triggers Process, which changes Object to destination-state State-specified agent event link: State-specified agent event link: A state-specified agent event link is a state-specified agent link with the additional meaning of activating the process when the agent enters the specified state. Graphically, the state-specified agent link with a small letter e (for event). The syntax of a state-specified agent event link OPL sentence is: Qualifying-state Agent triggers and handles Processing". State-specified instrument event link: A state-specified instrument event link is a state-specified instrument link with the additional meaning of activating the process when the instrument enters the specified state. Graphically, the state-specified instrument link with a small letter e (for event). The syntax of a state-specified instrument event link OPL sentence is: Qualifying-state Instrument triggers Processing, which requires qualifying-state Instrument." Invocation links Process invocation Self-invocation link Implicit invocation link: Implicit invocation occurs upon sub-process termination within the context of an in-zoomed process, at which time the sub-process invokes the one(s) immediately below it. Graphically, there is no link between the invoking and the invoked sub-processes; their relative heights within the in-zoom context of their ancestor process implies this semantics. Condition links A condition link is a procedural link between a source object or object state and a destination process that provides a bypass mechanism. Condition consumption link: A condition consumption link is a condition link from an object to a process, meaning that if in run-time an object instance exists, then the process precondition is satisfied, the process executes and consumes the object instance. Graphically, an arrow with a closed arrowhead pointing from the object to the process with the small letter c (for condition) near the arrowhead shall denote a condition consumption link. Condition effect link: However, if that object instance does not exist, then the process precondition evaluation fails and the control skips the process. 
Graphically, a bidirectional arrow with two closed arrowheads, one pointing in each direction between the affected object and the affecting process, with the small letter c (for condition) near the process end of the arrow. Condition agent link: Graphically, a line with a filled circle ('black lollipop") at the terminal end extending from an agent object to the process it enables, with the small letter c (for condition) near the process end. The syntax of the condition agent link OPL sentence is: Agent handles Process if Agent exists, else Process is skipped. Condition instrument link: Graphically, a line with an empty circle ("white lollipop") at the terminal end, extending from an instrument object to the process it enables, with the small letter c (for condition) near the process end, shall denote a condition instrument link. The syntax of the condition instrument link OPL sentence shall be: Process occurs if Instrument exists, else Process is skipped. Condition state-specified consumption link: A condition state-specified consumption link is a condition consumption link that originates from a specified state of an object and terminates at a process, meaning that if an object instance exists in the specified state and the rest of the process precondition is satisfied, then the process executes and consumes the object instance. Graphically, an arrow with a closed arrowhead pointing from the object qualifying state to the process with the small letter c (for condition) near the arrowhead. Condition input-output-specified effect link: A condition input-output-specified effect link is an input-output specified effect link with the additional meaning that if at run-time an object instance exists and it is in the process input state (and assuming that the rest of the process precondition is satisfied), then the process executes and affects the object instance. Graphically, the condition input-output-specified effect link with the small letter c (for condition) near the arrowhead of the input. The syntax of the condition input-output-specified effect link OPL sentence is: Process occurs if Object is input-state, in which case Process changes Object from input-state to output-state, otherwise Process is skipped. Condition input-specified effect link: A condition input specified effect link is an input-specified effect link with the additional meaning that if at run-time an object instance exists in the specified input state and the rest of the process precondition is satisfied, then the process executes and affects the object instance by changing its state from its input state to an unspecified state. However, if that object instance does not exist at the input state, then the process precondition evaluation fails and the control skips the process. Graphically, the condition input-specified effect link with the small letter c (for condition) near the arrowhead of the input link. The syntax of a condition input-specified effect link OPL sentence is: Process occurs if Object is input state, in which case Process changes Object from input-state, otherwise Process is skipped. Condition output-specified effect link: A condition output-specified effect link is an output-specified effect link with the additional meaning that if at run-time an object instance exists and the rest of the process precondition is satisfied, then the process executes and affects the object instance by changing its state to the specified output-state. 
However, if that object instance does not exist, then the process precondition evaluation fails and the control skips the process. Graphically, the condition output-specified effect link with the small letter c (for condition) near the arrowhead of the input link. The syntax of the condition output-specified effect OPL sentence is: Process occurs if Object exists, in which case Process changes Object to output-state, otherwise Process is skipped. Condition state-specified agent link: The syntax of the condition state-specified agent link OPL sentence is: Agent handles Process if Agent is qualifying-state, else Process is skipped. Condition state-specified instrument link More information and examples can be found in Model-Based Systems Engineering with OPM and SysML, Chapter 13 "The Dynamic System Aspect". ==== Structural links ==== Structural links specify static, time-independent, long-lasting relations in the system. A structural link connects two or more objects or two or more processes, but not an object and a process, except in the case of an exhibition-characterization link. Unidirectional tagged structural link Has user-defined semantics regarding the nature of the relation from one thing to the other. Graphically, an arrow with an open arrowhead. Along the tagged structural link, the modeler should record a meaningful tag in the form of a textual phrase that expresses the nature of the structural relation between the connected objects (or processes) and makes sense when placed in the OPL sentence whose syntax follows. Unidirectional null-tagged structural link A unidirectional tagged structural link with no tag. In this case, the default unidirectional tag is used. The modeler has the option of setting the default unidirectional tag for a specific system or a set of systems. If no default is defined, the default tag is "relates to". Bidirectional tagged structural link When the tags in both directions are meaningful and not just the inverse of each other, they may be recorded by two tags on either side of a single bidirectional tagged structural link. The syntax of the resulting tagged structural link is two separate tagged structural link OPL sentences, one for each direction. Graphically, a line with harpoon-shaped arrowheads on opposite sides at both ends of the link's line shall denote the bidirectional tagged structural link. Reciprocal tagged structural link A bidirectional tagged structural link with one tag. In either case, reciprocity indicates that the tag of a bidirectional structural link has the same semantics for its forward and backward directions. When no tag appears, the default tag shall be "are related". The syntax of the reciprocal tagged structural link with only one tag shall be: Source-thing and Destination-thing are reciprocity-tag. The syntax of the reciprocal tagged structural link with no tag is: Source-thing and Destination-thing are related. Fundamental structural relations These are the most prevalent structural relations among OPM things and are of particular significance for specifying and understanding systems. Each of the fundamental relations elaborates or refines one OPM thing, the source thing, or refinee, into a collection of one or more OPM things, the destination thing or things, or refineables. Aggregation-participation link A refinee—the whole—aggregates one or more other refineables—the parts.
Graphically, a black solid (filled in) triangle with its apex connecting by a line to the whole and the parts connecting by lines to the opposite horizontal base shall denote the aggregation-participation relation link. Exhibition-characterization link A thing exhibits, or is characterized by, another thing. The exhibition-characterization relation binds a refinee—the exhibitor—with one or more refineables, which shall identify features that characterize the exhibitor Graphically, a smaller black triangle inside a larger empty triangle with that larger triangle's apex connecting by a line to the exhibitor and the features connecting to the opposite (horizontal) base defines the exhibition-characterization relation link. Generalization-specialization and inheritance These are structural relations which provide for abstracting any number of objects or process classes into superclasses, and assigning attributes of superclasses to subordinate classes. Generalization-specialization link Inheritance through specialization Specialization restriction through discriminating attribute: A subset of the possible values of an inherited attribute may restrict the specialization. Classification-instantiation and system execution Classification-instantiation link: A source thing, which is an object class or a process class connect to one or more destination things, which are valued instances of the source thing's pattern, i.e. the features specified by the pattern acquire explicit values. This relation provides the modeler with an explicit mechanism for expressing the relationship between a class and its instances created by the provision of feature values. Graphically, a small black circle inside an otherwise empty larger triangle with apex connecting by a line to the class thing and the instance things connecting by lines to the opposite base defines the classification-instantiation relation link. The syntax is: Instance-thing is an instance of Class-thing. Instances of object class and process class State-specified structural relations and links State-specified characterization relation and link: An exhibition-characterization relation from a specialized object that exhibits a value for a discriminating attribute of that object, meaning that the specialized object shall have only that value. Graphically, the exhibition-characterization link triangular symbol, with its apex connecting to the specialized object and its opposite base connecting to the value, defines the state-specified characterization relation. The syntax is: Specialized-object exhibits value-name Attribute-Name. State-specified tagged structural relations and links: A structural relation between a state of an object or value of an attribute and another object or its state or value, meaning that these two entities are associated with the tag expressing the semantics of the association. In case of a null tag (i.e., the tag is not specified), the corresponding default null tag is used. Three groups of state-specified tagged structural relations exist: (1) source state-specified tagged structural relation, (2) destination state-specified tagged structural relation, (3) source-and-destination state-specified tagged structural relation. Each of these groups includes the appropriate unidirectional, bidirectional, and reciprocal tagged structural relation, giving rise to seven kinds of state-specified tagged structural relation link and corresponding OPL sentences. 
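The fundamental structural relations above lend themselves to a compact data representation. The following sketch is illustrative only and is not part of the OPM specification or of any OPM tool; the example things (Car, Engine, Body, Color, Sports Car, My Car) are assumptions chosen purely for this example.

```python
# Minimal sketch of OPM's four fundamental structural relations as plain data.
# The relation names follow the text above; the example things are invented.
from dataclasses import dataclass, field

@dataclass
class Thing:
    name: str
    parts: list = field(default_factory=list)            # aggregation-participation
    features: list = field(default_factory=list)         # exhibition-characterization
    specializations: list = field(default_factory=list)  # generalization-specialization
    instances: list = field(default_factory=list)        # classification-instantiation

car = Thing("Car")
car.parts += [Thing("Engine"), Thing("Body")]        # Car aggregates Engine and Body
car.features.append(Thing("Color"))                  # Car exhibits Color
car.specializations.append(Thing("Sports Car"))      # Sports Car specializes Car
car.instances.append(Thing("My Car"))                # My Car is an instance of Car

# Only the classification-instantiation OPL pattern is quoted in the text above,
# so only that sentence is generated here.
for inst in car.instances:
    print(f"{inst.name} is an instance of {car.name}.")
```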
More information and examples can be found in Model-Based Systems Engineering with OPM and SysML, Chapter 3.3 "Adding structural links". === Relationship cardinalities === Object multiplicity in structural and procedural links Object multiplicity shall refer to a requirement or constraint specification on the quantity or count of object instances associated with a link. Unless a multiplicity specification is present, each end of a link shall specify only one thing instance. The syntax of an OPL sentence that includes an object with multiplicity shall include the object multiplicity preceding the object name, with the object name appearing in its plural form. Multiplicity specifications may appear in the following cases: to specify multiple source or destination object instances for a tagged structural link of any kind; to specify a participant object with multiple instances in an aggregation-participation link, where a different participation specification may be attached to each one of the parts of the whole; to specify an object with multiple instances in a procedural relation. Object multiplicity expressions and constraints Object multiplicity may include arithmetic expressions, which shall use the operator symbols "+", "–", "*", "/", "(", and ")" with their usual semantics and shall use the usual textual correspondence in the corresponding OPL sentences. An integer or an arithmetic expression may constrain object multiplicity. Graphically, expression constraints shall appear after a semicolon separating them from the expression that they constrain and shall use the equality/inequality symbols "=", "<", ">", "<=", and ">=", the curly braces "{" and "}" for enclosing set members, and the membership operator "in" (element of, ∈), all with their usual semantics. The corresponding OPL sentence shall place the constraint phrase in bold letters after the object to which the constraint applies in the form ", where constraint". Attribute value and multiplicity constraints The expression of object multiplicity for structural and procedural links specifies integer values or parameter symbols that resolve to integer values. In contrast, the values associated with attributes of objects or processes may be integer or real values, or parameter symbols that resolve to integer or real values, as well as character strings and enumerated values. Graphically, a labelled, rounded-corner rectangle placed inside the attribute to which it belongs shall denote an attribute value with the value or value range (integers, real numbers, or string characters) corresponding to the label name. In OPL text, the attribute value shall appear in bold face without capitalization. The syntax for an object with an attribute value OPL sentence shall be: Attribute of Object is value. The syntax for an object with an attribute value range OPL sentence shall be: Attribute of Object range is value-range. A structural or a procedural link connecting with an attribute that has a real number value may specify a relationship constraint, which is distinct from an object multiplicity. Graphically, an attribute value constraint is an annotation by a number, integer or real, or a symbol parameter, near the attribute end of the link and aligning with the link. === Logical operators: AND, XOR, and OR === Logical AND procedural links The logical operators AND, XOR, and OR among procedural relations enable specification of elaborate process precondition and postcondition. Separate, non-touching links shall have the semantics of logical AND. 
For example, if three keys are each connected by separate instrument links to an Unlocking process, unlocking the safe requires all three keys. Logical XOR and OR procedural links A link fan shall follow the semantics of either an XOR or an OR operator. The link fan end that is common to the links shall be the convergent link end. The link end that is not common to the links shall be the divergent link end. The XOR operator shall mean that exactly one of the things in the span of the link fan exists, if the divergent link end has objects, or happens, if the divergent link end has processes. Graphically, a dashed arc across the links in the link fan with the arc focal point at the convergent end point of contact shall denote the XOR operator. The OR operator shall mean that at least one of the two or more things in the span of the link fan exists, if the divergent link end has objects, or happens, if the divergent end has processes. Graphically, two concentric dashed arcs across the links with their focal point at the convergent end point of contact shall denote the OR operator. State-specified XOR and OR link fans Control-modified link fans Link probabilities and probabilistic link fans Execution path and path labels A path label shall be a label along a procedural link, which, in the case that there is more than one option to follow upon process termination, prescribes that the link to follow shall be the one carrying the same label as the link through which the process was entered. == Modeling principles and model comprehension == The definition of system purpose, scope, and function in terms of boundary, stakeholders and preconditions is the basis for determining whether other elements should appear in the model. This determines the scope of the system model. OPM provides abstracting and refining mechanisms to manage the expression of model clarity and completeness. Stakeholder and system's beneficiary identification For man-made systems, this function is expected to benefit a person or a group of people—the beneficiary. After the function of the system aligns with the functional value expectation of its main beneficiary, the modeler identifies and adds other principal stakeholders to the OPM model. System diagram The resulting top-level OPD is the system diagram (SD), which includes the stakeholder group, in particular the beneficiary group, and additional top-level environmental things, which provide the context for the system's operation. The SD should contain only the central and important things—those things indispensable for understanding the function and context of the system. The function is the main process in the SD, which also contains the objects involved in this process: the beneficiary, the operand (the object upon which the process operates), and possibly the attribute of the operand whose value the process changes. The SD should also contain an object representing the system that enables the function. The default name of this system is created by adding the word "System" to the name of the function. For example, if the function is Car Painting, the name of the system would be Car Painting System. OPD tree Clarity and completeness trade-off Establishing an appropriate balance requires careful management of context during model development. However, the modeler may take advantage of the union of information provided by the entire OPD set of an OPM system model and have one OPD which is clear and unambiguous but not complete, and another that focuses on completeness for some smaller part of the system by adding more details.
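Building on the Car Painting example above, the essence of such a system diagram can be approximated by a few OPL-style sentences. The sketch below is illustrative only: it assembles sentences from the OPL patterns quoted earlier in this article, and the object names (Car Owner, Car, Car Painting System) and states (unpainted, painted) are assumptions rather than part of any published OPM model.

```python
# Illustrative sketch only: OPL-style sentences for a minimal "Car Painting" SD,
# built from the sentence patterns quoted earlier (handles / requires / changes).
# Object names and states are assumptions, not taken from the OPM specification.

def opl_sentences(function, agent, instrument, operand, in_state, out_state):
    return [
        f"{agent} handles {function}.",                                   # agent (enabling) link
        f"{function} requires {instrument}.",                             # instrument (enabling) link
        f"{function} changes {operand} from {in_state} to {out_state}.",  # effect link with states
    ]

for sentence in opl_sentences(
    function="Car Painting",
    agent="Car Owner",
    instrument="Car Painting System",
    operand="Car",
    in_state="unpainted",
    out_state="painted",
):
    print(sentence)
```

Running the sketch prints three sentences that together name the function, its human agent, its enabling system, and the state change of the operand, which is the minimal content expected of an SD.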
Refinement-abstraction mechanisms OPM shall provide abstracting and refining mechanisms to manage the expression of model clarity and completeness. These mechanisms shall enable presenting and viewing the system, and the things that comprise it, in various contexts that are interrelated by the objects, processes and relations that are common amongst them. State expression and state suppression The inverse of state suppression shall be state expression, i.e., refining the OPD by adding the information concerning possible object states. The OPL corresponding to the OPD shall express only the states of the objects that are depicted. Unfolding and folding Unfolding reveals a set of things that are hierarchically below the unfolded thing. The result is a hierarchy tree, the root of which is the unfolded thing. Linked to the root are the things that constitute the context of the unfolded thing. Conversely, folding is a mechanism for abstraction or composition, which applies to an unfolded hierarchical tree. In-zooming and out-zooming In-zooming is a kind of unfolding, which is applicable to aggregation-participation only and has additional semantics. For processes, in-zooming enables modeling the sub-processes, their temporal order, their interactions with objects, and passing of control to and from this context. For objects, in-zooming creates a distinct context that enables modeling the constituent objects' spatial or logical order. Graphically, the timeline within the context of an in-zoomed process flows from the top of its process ellipse symbol to the ellipse bottom. == Meta modeling == OPM model structure Model of OPD Construct and Basic Construct The model, as seen in the image of the OPD metamodel, elaborates the OPD Construct concept. The purpose of this model is to distinguish Basic Construct from other possible OPD Constructs. A Basic Construct is a specialization of OPD Construct, which consists of exactly two Things connected by exactly one Link. The non-basic constructs include, among others, those with link fans or more than two refinees. A modeller could add a process to the model by adding the states disconnected and connected to Thing Set. The purpose of the model thus includes the action of transforming a disconnected Thing Set to a connected Thing Set using the Link Set as an instrument of connection. OPM model of Thing The OPM model of Thing is a model for an OPM Thing, showing its specialization into Object and Process, as depicted in the image of the model of Thing below. A set of States characterizes Object; this set can be empty, in the case of a Stateless Object, or non-empty, in the case of a Stateful Object. A Stateful Object with s States gives rise to a set of s stateless State-Specific Objects, one for each State. A particular State-Specific Object refers to an object in a specific state. Modelling the concept of State-Specific Object as both an Object and a State enables simplifying the conceptual model by referring to an object and any one of its states by simply specifying Object. OPM model of Thing generic properties The OPM model of Thing generic properties depicts Thing and its Perseverance, Essence, and Affiliation generic properties modelled as attribute refinees of an exhibition-characterization link. Perseverance is the discriminating attribute between Object and Process. In-zooming and out-zooming models Both new-diagram in-zooming and new-diagram out-zooming create a new OPD context from an existing OPD context.
New-diagram in-zooming starts with an OPD of relatively less details and adds elaboration or refinement as a descendant OPD that applies to a specific thing in the less detailed OPD. == Versions == OPM The current version of OPM is ISO/PAS 19450:2015 as specified in Automation Systems and Integration — Object-Process Methodology. The specification in Dori's 2016 book is a superset of ISO/PAS 19450:2015. The previous version of OPM was specified in Dori's 2002 book. OPCAT The current OPCAT version is 4.1. It is available freely from Technion's Enterprise Systems Modeling Laboratory. A previous OPCAT version, 3.1, with fewer capabilities, is also available from the same site. Both are coded in Java. The first OPCAT version, OPCAT 1.X, was written in Visual C++ in 1998. In the beginning of 2016 a team of students under the management of Dori began working on the new generation of OPCAT which will be called OPCloud. As suggested by the name of the software, it will be a cloud-based application, and will enable users to create OPM models using a web-based application. == Standardization == ISO—the International Organization for Standardization—is an independent, non-governmental international organization with a membership of 162 national standards bodies, which develops voluntary, consensus-based, market relevant International Standards that support innovation and provide solutions to global challenges. These standards provide world-class specifications for products, services and systems, to ensure quality, safety and efficiency. === ISO and OPM === In June 2008, Richard Martin approached Dov Dori after his presentation at the INCOSE International Symposium in Utrecht, the Netherlands, to inquire about the possibility of creating an International Standard for OPM. Martin, convener of ISO TC184/SC5/WG1 for automation systems interoperability architecture and modelling, had for some time been searching for methodologies offering more than static information and process modeling. He provided Dori with a simple example to model that could demonstrate both the modelling capability of OPM and its dynamic simulation opportunity. In May 2010, Dori presented a brief overview of OPM and his demonstration model at the ISO Technical Committee 184/Sub-Committee 5 (TC184/SC5) plenary meeting, which then adopted a resolution to create an OPM Study Group for the purpose of examining the potential for OPM to enhance the standards created by SC5. The OPM Study Group began its work in October 2010 and issued an interim report for the 2011 SC5 Plenary. The report included several uses of OPM to model existing SC5 standards and led to an initial motivation for the standardization of OPM with the realization that being text-based, ISO standards are prone to suffer from inconsistencies and incomplete information. This deficiency could be significantly reduced if the standards were model-based rather than text-based, and OPM offered a useful underlying modeling paradigm for this purpose. A final OPM Study Group Report and a draft for a metamodel for model-based standards authoring document were delivered at the 2012 SC5 Plenary. As the OPM Study Group effort progressed, it became obvious that OPM could also serve as a solid and comprehensive basis for model-based systems engineering (MBSE) and for modeling both natural and man-made systems. 
=== ISO 19450 Document === TC184/SC5/WG1 participants received the first draft of the OPM PAS in September 2011 with 16 pages, 2 annexes and a bibliography for a total of 25 pages. Most of the content simply identified sub-clause headings and placeholder graphics. By the 2012 SC5 Plenary, the PAS draft included 10 full clauses describing OPM features and 6 annexes totaling 86 pages. One annex was an EBNF (Extended Backus-Naur Form, used to formally specify context-free languages, enabling parsing of programming languages) specification for OPL and another detailed the OPD graph grammar. To facilitate verification of the EBNF specification, David Shorter wrote a script to evaluate consistency and completeness of the EBNF statement set. Further effort to add meaningful examples and complete all of the identified sections resulted in a draft of 138 pages by the time of the 2013 SC5 Plenary. Subsequently, the working draft was registered with the SC5 Secretariat as a Committee Draft for initial circulation to SC5 members. Because the SC5 resolution calling for the OPM specification indicated that the document was to be registered as a Publicly Available Specification (PAS), it would have only one acceptance ballot opportunity. In April 2014, the New Work Item Proposal and revised Committee Draft for ISO/PAS 19450 were delivered to SC5 for consideration. By now the Committee Draft was 98 pages plus front matter, four annexes and 30 bibliographic references, totaling 183 pages. In March 2015, ISO registered the result of balloting for ISO/PAS 19450 as 8 Approve, 1 Approve with comments, and 1 Abstain. ISO/PAS 19450 was formally published with a total of 162 pages by ISO on December 15, 2015, culminating a six-year effort to provide the standardization community with a formal specification for a new approach to modeling that binds together graphical and textual representations into a single paradigm suitable for automated simulation of model behavior. == OPM vs. SysML and UML == OPM vs. SysML SysML is defined as an extension of the Unified Modeling Language (UML) using UML's profile mechanism. OPM vs. UML The differences between OPM and UML are most apparent during the analysis and design stages. While UML is a multi-model approach, OPM supports a single unifying structure-behavior model. The crucial differences stem from the structure-oriented approach of UML, in which behavior is spread over thirteen diagram types, a fact that inevitably invokes the model multiplicity problem. First, the OPM approach makes it possible to view, in the main system diagram (SD), the main process, the objects involved, and the connections between them. In addition, it is easy to understand the system's main benefit, which is presented in the SD. In OPM, it is also easier to understand the three main aspects of the system (behavior, structure and functionality) within a single model, whereas UML describes these aspects with different diagram types. Modeling the database through unfolding contributes to the understanding of the system and of all the details stored in it. In addition, in-zooming simplifies the model. On the other hand, OPM requires extensive knowledge of the system's processes, such as how the system stores information and makes decisions. == Generating SysML views from an OPM model == While both languages aim to provide a means for general-purpose systems engineering, they take different approaches to realizing this goal. SysML is a profile of UML (Unified Modeling Language).
The OPM-to SysML translation is one-to-many in the sense that a single OPM element (entity or link) usually translates to several SysML elements that belong in different SysML diagram types. For example, an OPM process, which is defined as an entity that transforms (generates, consumes, or changes the state of) an object, can be mapped to any subset of the following SysML entities: Use case (in a use case diagram) Action (in an activity diagram) State transition trigger (in a state machine diagram). As OPM and SysML are two distinct and differently designed languages, not all the constructs in one language have equivalent constructs in the other language. The first type of diagram in UML that can be generated from an OPM diagram is the use case diagram which is intended for modeling the usage of a system. The main elements comprising the use case diagram are actors and use cases (the entities) along with the relationships (links) among them. Generation of a use case diagram from OPM is therefore based on environmental objects (the actors) and the processes (the use cases) linked to them. Figure 1 is an example of use case diagram generation of SD0. The figure shows the root OPM diagram (a), the corresponding OPL text (b), and the created use case diagram (c). Figure 2 shows a SD1 level of OPD from the same OPM model (a), and the generated use case diagram (b). The second type of diagram is the block definition diagram (BDD) which defines features of blocks (like properties and operations) and relationships between blocks, such as associations and generalizations. Generating a BDD is based upon the systemic objects of the OPM model and their relationships—mainly structural relations to other model elements. The third type of diagram is activity diagrams which are intended to specify flow. Key components included in the activity diagram are actions and routing flow elements. In our context, a separate Activity Diagram can be generated for each OPM process containing child subprocesses, i.e., a process which is in-zoomed in the OPM model. There are two kinds of user parameters that can be specified via the settings dialog. The first one deals with selection of the OPM processes: One option is to explicitly specify the required OPM processes by selection from a list. The alternative, which is the default option, is to start with the root OPD (SD) and go down the hierarchy. Here we reach the second parameter (that is independent of the first one), which is the required number of OPD levels (k) to go down the hierarchy. In order to give the user control over the level of abstraction, the diagrams are generated up to k levels down the hierarchy. Each level will result in the generation of an additional activity diagram, which is a child activity (subdiagram) contained in the enclosing higher-level activity. The default setting for this option is "all levels down" (i.e., "k = ∞"). == See also == Formal ontology Process ontology Ontology language Upper ontology == References == == External links == Object-Process Methodology and Its Application to the Visual Semantic Web, presentation by Dov Dori, 2003. Some Features of the Technical Language of Navya-Nyāya Formalizing the Conceptual Modeling Thought Process to Benefit Engineers and Scientists., presentation by Dov Dori, 2015. Formalizing the Conceptual Modeling Thought Process to Benefit Engineers and Scientists US Patent US7099809B2 on conversion of OPD to and from text formats
Wikipedia/Object_process_methodology
The V-model is a graphical representation of a systems development lifecycle. It is used to produce rigorous development lifecycle models and project management models. The V-model falls into three broad categories, the German V-Modell, a general testing model, and the US government standard. The V-model summarizes the main steps to be taken in conjunction with the corresponding deliverables within computerized system validation framework, or project life cycle development. It describes the activities to be performed and the results that have to be produced during product development. The left side of the "V" represents the decomposition of requirements, and the creation of system specifications. The right side of the "V" represents an integration of parts and their validation. However, requirements need to be validated first against the higher level requirements or user needs. Furthermore, there is also something as validation of system models. This can partially be done on the left side also. To claim that validation only occurs on the right side may not be correct. The easiest way is to say that verification is always against the requirements (technical terms) and validation is always against the real world or the user's needs. The aerospace standard RTCA DO-178B states that requirements are validated—confirmed to be true—and the end product is verified to ensure it satisfies those requirements. Validation can be expressed with the query "Are you building the right thing?" and verification with "Are you building it right?" == Types == There are three general types of V-model. === V-Modell === "V-Modell" is the official project management method of the German government. It is roughly equivalent to PRINCE2, but more directly relevant to software development. The key attribute of using a "V" representation was to require proof that the products from the left-side of the V were acceptable by the appropriate test and integration organization implementing the right-side of the V. === General testing === Throughout the testing community worldwide, the V-model is widely seen as a vaguer illustrative depiction of the software development process as described in the International Software Testing Qualifications Board Foundation Syllabus for software testers. There is no single definition of this model, which is more directly covered in the alternative article on the V-Model (software development). === US government standard === The US also has a government standard V-model. Its scope is a narrower systems development lifecycle model, but far more detailed and more rigorous than most UK practitioners and testers would understand by the V-model. == Validation vs. verification == It is sometimes said that validation can be expressed by the query "Are you building the right thing?" and verification by "Are you building it right?" In practice, the usage of these terms varies. The PMBOK guide, also adopted by the IEEE as a standard (jointly maintained by INCOSE, the Systems engineering Research Council SERC, and IEEE Computer Society) defines them as follows in its 4th edition: "Validation. The assurance that a product, service, or system meets the needs of the customer and other identified stakeholders. It often involves acceptance and suitability with external customers. Contrast with verification." "Verification. The evaluation of whether or not a product, service, or system complies with a regulation, requirement, specification, or imposed condition. It is often an internal process. 
Contrast with validation." == Objectives == The V-model provides guidance for the planning and realization of projects. The following objectives are intended to be achieved by a project execution: Minimization of project risks: The V-model improves project transparency and project control by specifying standardized approaches and describing the corresponding results and responsible roles. It permits an early recognition of planning deviations and risks and improves process management, thus reducing the project risk. Improvement and guarantee of quality: As a standardized process model, the V-model ensures that the results to be provided are complete and have the desired quality. Defined interim results can be checked at an early stage. Uniform product contents will improve readability, understandability and verifiability. Reduction of total cost over the entire project and system life cycle: The effort for the development, production, operation and maintenance of a system can be calculated, estimated and controlled in a transparent manner by applying a standardized process model. The results obtained are uniform and easily retraced. This reduces the acquirer's dependency on the supplier and the effort for subsequent activities and projects. Improvement of communication between all stakeholders: The standardized and uniform description of all relevant elements and terms is the basis for the mutual understanding between all stakeholders. Thus, the frictional loss between user, acquirer, supplier and developer is reduced. == V-model topics == === Systems engineering and verification === The systems engineering process (SEP) provides a path for improving the cost-effectiveness of complex systems as experienced by the system owner over the entire life of the system, from conception to retirement. It involves early and comprehensive identification of goals, a concept of operations that describes user needs and the operating environment, thorough and testable system requirements, detailed design, implementation, rigorous acceptance testing of the implemented system to ensure it meets the stated requirements (system verification), measuring its effectiveness in addressing goals (system validation), on-going operation and maintenance, system upgrades over time, and eventual retirement. The process emphasizes requirements-driven design and testing. All design elements and acceptance tests must be traceable to one or more system requirements and every requirement must be addressed by at least one design element and acceptance test. Such rigor ensures nothing is done unnecessarily and everything that is necessary is accomplished. === The two streams === ==== Specification stream ==== The specification stream mainly consists of: User requirement specifications Functional requirement specifications Design specifications ==== Testing stream ==== The testing stream generally consists of: Installation qualification (IQ) Operational qualification (OQ) Performance qualification (PQ) The development stream can consist (depending on the system type and the development scope) of customization, configuration or coding. == Applications == The V-model is used to regulate the software development process within the German federal administration. Nowadays it is still the standard for German federal administration and defense projects, as well as software developers within the region. 
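The specification and testing streams described above are conventionally paired across the two sides of the V, with each specification level verified by a corresponding qualification level. The following sketch illustrates one such pairing; the specific mapping shown is an assumption made for illustration, not a normative mapping taken from any V-model standard.

```python
# Sketch of the V shape: each left-side specification is paired with the
# right-side qualification stage that verifies it. The exact pairing is an
# illustrative assumption, not a requirement of a particular standard.
V_MODEL_PAIRS = {
    "User requirement specification": "Performance qualification (PQ)",
    "Functional requirement specification": "Operational qualification (OQ)",
    "Design specification": "Installation qualification (IQ)",
}

for spec, test in V_MODEL_PAIRS.items():
    print(f"{spec}  <->  verified by  {test}")
```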
The concept of the V-model was developed simultaneously, but independently, in Germany and in the United States in the late 1980s: The German V-model was originally developed by IABG in Ottobrunn, near Munich, in cooperation with the Federal Office for Defense Technology and Procurement in Koblenz, for the Federal Ministry of Defense. It was taken over by the Federal Ministry of the Interior for the civilian public authorities domain in summer 1992. The US V-model, as documented in the 1991 proceedings for the National Council on Systems Engineering (NCOSE; now INCOSE as of 1995), was developed for satellite systems involving hardware, software, and human interaction. The V-model first appeared at Hughes Aircraft circa 1982 as part of the pre-proposal effort for the FAA Advanced Automation System (AAS) program. It eventually formed the test strategy for the Hughes AAS Design Competition Phase (DCP) proposal. It was created to show the test and integration approach which was driven by new challenges to surface latent defects in the software. The need for this new level of latent defect detection was driven by the goal to start automating the thinking and planning processes of the air traffic controller as envisioned by the automated enroute air traffic control (AERA) program. The reason the V is so powerful comes from the Hughes culture of coupling all text and analysis to multi dimensional images. It was the foundation of Sequential Thematic Organization of Publications (STOP) created by Hughes in 1963 and used until Hughes was divested by the Howard Hughes Medical Institute in 1985. The US Department of Defense puts the systems engineering process interactions into a V-model relationship. It has now found widespread application in commercial as well as defense programs. Its primary use is in project management and throughout the project lifecycle. One fundamental characteristic of the US V-model is that time and maturity move from left to right and one cannot move back in time. All iteration is along a vertical line to higher or lower levels in the system hierarchy, as shown in the figure. This has proven to be an important aspect of the model. The expansion of the model to a dual-Vee concept is treated in reference. As the V-model is publicly available many companies also use it. In project management it is a method comparable to PRINCE2 and describes methods for project management as well as methods for system development. The V-model, while rigid in process, can be very flexible in application, especially as it pertains to the scope outside of the realm of the System Development Lifecycle normal parameters. == Advantages == These are the advantages V-model offers in front of other systems development models: The users of the V-model participate in the development and maintenance of the V-model. A change control board publicly maintains the V-model. The change control board meets anywhere from every day to weekly and processes all change requests received during system development and test. The V-model provides concrete assistance on how to implement an activity and its work steps, defining explicitly the events needed to complete a work step: each activity schema contains instructions, recommendations and detailed explanations of the activity. == Limitations == The following aspects are not covered by the V-model, they must be regulated in addition, or the V-model must be adapted accordingly: The placing of contracts for services is not regulated. 
The organization and execution of operation, maintenance, repair and disposal of the system are not covered by the V-model. However, planning and preparation of a concept for these tasks are regulated in the V-model. The V-model addresses software development within a project rather than a whole organization. == See also == Engineering information management (EIM) ARCADIA (as supporting systems modeling method) IBM Rational Unified Process (as a supporting software process) Waterfall model of software development Systems architecture Systems design Systems engineering Model-based systems engineering Theory U == References == == External links == "INCOSE G2SEBOK 3.30: Vee Model of Systems Engineering Design and Integration". g2sebok.incose.org. International Council on Systems Engineering. Archived from the original on 2007-09-27. "Das V-Modell XT". cio.bund.de (in German). Federal Office for Information Security (BMI). "Using V Models for Testing". insights.sei.cmu.edu. Software Engineering Institute, Carnegie Mellon University. 11 November 2013.
Wikipedia/V-model
The term system of systems refers to a collection of task-oriented or dedicated systems that pool their resources and capabilities together to create a new, more complex system which offers more functionality and performance than simply the sum of the constituent systems. Currently, systems of systems is a critical research discipline for which frames of reference, thought processes, quantitative analysis, tools, and design methods are incomplete. The methodology for defining, abstracting, modeling, and analyzing such problems is referred to as system of systems engineering. == Overview == Commonly proposed descriptions—not necessarily definitions—of systems of systems are outlined below in order of their appearance in the literature: Linking systems into joint system of systems allows for the interoperability and synergism of Command, Control, Computers, Communications and Information (C4I) and Intelligence, Surveillance and Reconnaissance (ISR) Systems: description in the field of information superiority in modern military. Systems of systems are large-scale concurrent and distributed systems the components of which are complex systems themselves: description in the field of communicating structures and information systems in private enterprise. System of systems education involves the integration of systems into system of systems that ultimately contribute to evolution of the social infrastructure: description in the field of education of engineers on the importance of systems and their integration. System of systems integration is a method to pursue development, integration, interoperability and optimization of systems to enhance performance in future battlefield scenarios: description in the field of information-intensive systems integration in the military. Modern systems that comprise system of systems problems are not monolithic; rather, they have five common characteristics: operational independence of the individual systems, managerial independence of the systems, geographical distribution, emergent behavior and evolutionary development: description in the field of evolutionary acquisition of complex adaptive systems in the military. Enterprise systems of systems engineering is focused on coupling traditional systems engineering activities with enterprise activities of strategic planning and investment analysis: description in the field of information-intensive systems in private enterprise. System of systems problems are a collection of trans-domain networks of heterogeneous systems that are likely to exhibit operational and managerial independence, geographical distribution, and emergent and evolutionary behaviors that would not be apparent if the systems and their interactions were modeled separately: description in the field of National Transportation System, Integrated Military and Space Exploration. Taken together, all these descriptions suggest that a complete system of systems engineering framework is needed to improve decision support for system of systems problems. Specifically, an effective system of systems engineering framework is needed to help decision makers determine whether related infrastructure, policy and/or technology considerations as an interrelated whole are good, bad or neutral over time. The need to solve system of systems problems is urgent not only because of the growing complexity of today's challenges, but also because such problems require large monetary and resource investments with multi-generational consequences.
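The five common characteristics listed above are often used informally as a checklist for deciding whether a given problem is best treated as a system of systems. The sketch below illustrates that idea under the assumption of simple yes/no judgments for each trait; it is not a formal test drawn from the literature, and the example problem named in it is hypothetical.

```python
# Illustrative checklist sketch: the five common characteristics of system of
# systems problems noted above, applied as informal yes/no questions.
TRAITS = [
    "operational independence of the individual systems",
    "managerial independence of the systems",
    "geographical distribution",
    "emergent behavior",
    "evolutionary development",
]

def present_traits(answers):
    """answers: dict mapping each trait to True/False for the problem at hand."""
    return [trait for trait in TRAITS if answers.get(trait, False)]

# Hypothetical example: a national air-transportation network.
example = {trait: True for trait in TRAITS}
print(f"{len(present_traits(example))} of {len(TRAITS)} traits present")
```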
== System-of-systems topics == === The system-of-systems approach === While the individual systems constituting a system of systems can be very different and operate independently, their interactions typically expose and deliver important emergent properties. These emergent patterns have an evolving nature that stakeholders must recognize, analyze and understand. The system of systems approach does not advocate particular tools, methods or practices; instead, it promotes a new way of thinking for solving grand challenges where the interactions of technology, policy, and economics are the primary drivers. System of systems study is related to the general study of designing, complexity and systems engineering, but also brings to the fore the additional challenge of design. Systems of systems typically exhibit the behaviors of complex systems, but not all complex problems fall in the realm of systems of systems. Inherent to system of systems problems are several combinations of traits, not all of which are exhibited by every such problem: Operational Independence of Elements Managerial Independence of Elements Evolutionary Development Emergent Behavior Geographical Distribution of Elements Interdisciplinary Study Heterogeneity of Systems Networks of Systems The first five traits are known as Maier's criteria for identifying system of systems challenges. The remaining three traits have been proposed from the study of mathematical implications of modeling and analyzing system of systems challenges by Dr. Daniel DeLaurentis and his co-researchers at Purdue University. === Research === Current research into effective approaches to system of systems problems includes: Establishment of an effective frame of reference Crafting of a unifying lexicon Developing effective methodologies to visualize and communicate complex systems Distributed resource management Study of designing architecture Interoperability Data distribution policies: policy definition, design guidance and verification Formal modelling language with integrated tools platform Study of various modeling, simulation and analysis techniques network theory agent based modeling general systems theory probabilistic robust design (including uncertainty modeling/management) object-oriented simulation and programming multi-objective optimization Study of various numerical and visual tools for capturing the interaction of system requirements, concepts and technologies === Applications === Systems of systems, while still being investigated predominantly in the defense sector, is also seeing application in such fields as national air and auto transportation and space exploration. Other fields where it can be applied include health care, design of the Internet, software integration, and energy management and power systems. Social-ecological interpretations of resilience, where different levels of our world (e.g., the Earth system, the political system) are interpreted as interconnected or nested systems, take a systems-of-systems approach. An application in business can be found for supply chain resilience. == Educational institutions and industry == Collaboration among a wide array of organizations is helping to drive the development of defining system of systems problem class and methodology for modeling and analysis of system of systems problems. There are ongoing projects throughout many commercial entities, research institutions, academic programs, and government agencies. 
Major stakeholders in the development of this concept are: Universities working on system of systems problems, including Purdue University, the Georgia Institute of Technology, Old Dominion University, George Mason University, the University of New Mexico, the Massachusetts Institute of Technology, Naval Postgraduate School and Carnegie Mellon University. Corporations active in this area of research such as The MITRE Corporation, Airbus, BAE Systems, Northrop Grumman, Boeing, Raytheon, Thales Group, CAE Inc., Altair Engineering, Saber Astronautics and Lockheed Martin. Government agencies that perform and support systems of systems research and applications, such as DARPA, the U.S. Federal Aviation Administration, NASA and the Department of Defense (DoD). For example, DoD recently established the National Centers for System of Systems Engineering to develop a formal methodology for system-of-systems engineering for applications in defense-related projects. In another example, according to the Exploration Systems Architecture Study, NASA established the Exploration Systems Mission Directorate (ESMD) organization to lead the development of a new exploration "system-of-systems" to accomplish the goals outlined by President G.W. Bush in the 2004 Vision for Space Exploration. A number of research projects and support actions, sponsored by the European Commission, were performed in the Seventh Framework Programme. These target Strategic Objective IST-2011.3.3 in the FP7 ICT Work Programme (New paradigms for embedded systems, monitoring and control towards complex systems engineering). This objective had a specific focus on the "design, development and engineering of System-of-Systems". These projects included: T-AREA-SoS (Trans-Atlantic Research and Education Agenda on Systems of Systems), which aims "to increase European competitiveness in, and improve the societal impact of, the development and management of large complex systems in a range of sectors through the creation of a commonly agreed EU-US Systems of Systems (SoS) research agenda". COMPASS (Comprehensive Modelling for Advanced Systems of Systems), aiming to provide a semantic foundation and open tools framework to allow complex SoSs to be successfully and cost-effectively engineered, using methods and tools that promote the construction and early analysis of models. DANSE (Designing for Adaptability and evolutioN in System of systems Engineering), which aims to develop "a new methodology to support evolving, adaptive and iterative System of Systems life-cycle models based on a formal semantics for SoS inter-operations and supported by novel tools for analysis, simulation, and optimisation". ROAD2SOS (Roadmaps for System-of-System Engineering), aiming to develop "strategic research and engineering roadmaps in Systems of Systems Engineering and related case studies". DYMASOS (DYnamic MAnagement of physically-coupled Systems Of Systems), aiming to develop theoretical approaches and engineering tools for dynamic management of SoS based on industrial use cases. AMADEOS (Architecture for Multi-criticality Agile Dependable Evolutionary Open System-of-Systems) aiming to bring time awareness and evolution into the design of System-of-Systems (SoS) with possible emergent behavior, to establish a sound conceptual model, a generic architectural framework and a design methodology. 
Ongoing European projects which are using a System of Systems approach include: Arctic PASSION (Pan-Arctic observing System of Systems: Implementing Observations for societal Needs; July 2021 - June 2025) is a Horizon 2020 research project with the key motivation of co-creating and implementing a coherent, integrated Arctic observing system: the Pan-Arctic Observing System of Systems - pan-AOSS. The project aims to overcome shortcomings in the present observing system by refining its operability, improving and extending pan-Arctic scientific and community-based monitoring and the integration with indigenous and local knowledge. COLOSSUS (Collaborative System of Systems Exploration of Aviation Products, Services and Business Models; Feb 2023 - Jan 2026) is a Horizon Europe research project for the development of a system-of-systems design methodology which for the first time will enable the combined optimization of aircraft, operations and business models. The project aims at establishing a transformative digital collaborative (TDC) framework to enable European aviation to conduct research, technology development, and innovation. The TDC framework will support the simulation, analysis, optimization and evaluation of complex products and services in real-world scenarios. == See also == Inheritance Software library Object-oriented programming Model-based systems engineering Enterprise systems engineering Complex adaptive system Systems architecture Process architecture Software architecture Enterprise architecture Ultra-large-scale systems Department of Defense Architecture Framework New Cybernetics == References == == Further reading == Yaneer Bar-Yam et al. (2004) "The Characteristics and Emerging Behaviors of System-of-Systems" in: NECSI: Complex Physical, Biological and Social Systems Project, January 7, 2004. Kenneth E. Boulding (1954) "General Systems Theory - The Skeleton of Science," Management Science, Vol. 2, No. 3, ABI/INFORM Global, pp. 197–208. Crossley, W.A., System-of-Systems: Introduction of Purdue University Schools of Engineering's Signature Area. Mittal, S., Martin, J.L.R. (2013) Netcentric System of Systems Engineering with DEVS Unified Process, CRC Press, Boca Raton, FL. DeLaurentis, D. "Understanding Transportation as a System of Systems Design Problem," 43rd AIAA Aerospace Sciences Meeting, Reno, Nevada, January 10–13, 2005. AIAA-2005-0123. J. Lewe, D. Mavris, Foundation for Study of Future Transportation Systems Through Agent-Based Simulation, in: Proceedings of 24th International Congress of the Aeronautical Sciences (ICAS), Yokohama, Japan, August 2004. Session 8.1. Maier, M.W. (1998). "Architecting Principles for System of Systems". Systems Engineering. 1 (4): 267–284. doi:10.1002/(sici)1520-6858(1998)1:4<267::aid-sys3>3.0.co;2-d. Retrieved 2012-12-13. Held, J.M., The Modelling of Systems of Systems, PhD Thesis, University of Sydney, 2008 D. Luzeaux & J.R. Ruault, "Systems of Systems", ISTE Ltd and John Wiley & Sons Inc, 2010 D. Luzeaux, J.R. Ruault & J.L. Wippler, "Complex Systems and Systems of Systems Engineering", ISTE Ltd and John Wiley & Sons Inc, 2011 Popper, S., Bankes, S., Callaway, R., and DeLaurentis, D. (2004) System-of-Systems Symposium: Report on a Summer Conversation, July 21–22, 2004, Potomac Institute for Policy Studies, Arlington, VA. == External links == System of Systems - video IBM "A Smarter Planet blog - The Internet of Things and System of Systems". IBM. October 2015. 
IEEE International Conference on System of Systems Engineering (SoSE) System of Systems Engineering Center of Excellence System of Systems, Systems Engineering Guide (USD AT&L Aug 2008) International Journal of System of Systems Engineering (IJSSE)
Wikipedia/System_of_systems
Model-based systems engineering (MBSE) represents a paradigm shift in systems engineering, replacing traditional document-centric approaches with a methodology that uses structured domain models as the primary means of information exchange and system representation throughout the engineering lifecycle. Unlike document-based approaches where system specifications are scattered across numerous text documents, spreadsheets, and diagrams that can become inconsistent over time, MBSE centralizes information in interconnected models that automatically maintain relationships between system elements. These models serve as the authoritative source of truth for system design, enabling automated verification of requirements, real-time impact analysis of proposed changes, and generation of consistent documentation from a single source. This approach significantly reduces errors from manual synchronization, improves traceability between requirements and implementation, and facilitates earlier detection of design flaws through simulation and analysis. The MBSE approach has been widely adopted across industries dealing with complex systems development, including aerospace, defense, rail, automotive, and manufacturing. By enabling consistent system representation across disciplines and development phases, MBSE helps organizations manage complexity, reduce development risks, improve quality, and enhance collaboration among multidisciplinary teams. The International Council on Systems Engineering (INCOSE) defines MBSE as the formalized application of modeling to support system requirements, design, analysis, verification and validation activities beginning in the conceptual design phase and continuing throughout development and later life cycle phases. == History == The first known prominent public usage of the term "Model-Based Systems Engineering" is a book by A. Wayne Wymore with the same name. The MBSE term was also commonly used among the SysML Partners consortium during the formative years of their Systems Modeling Language (SysML) open source specification project (2003–2005), so they could distinguish SysML from its parent language UML v2, where the latter was software-centric and associated with the term Model-Driven Development (MDD). The standardization of SysML in 2006 resulted in widespread modeling tool support for it and associated MBSE processes that emphasized SysML as their lingua franca. In September 2007, the MBSE approach was further generalized and popularized when INCOSE introduced its "MBSE 2020 Vision", which was not restricted to SysML, and supported other competitive modeling language standards, such as AP233, HLA, and Modelica. According to the MBSE 2020 Vision: "MBSE is expected to replace the document-centric approach that has been practiced by systems engineers in the past and to influence the future practice of systems engineering by being fully integrated into the definition of systems engineering processes." As of 2014, the scope of MBSE started to cover more Modeling and Simulation topics, in an attempt to bridge the gap between system model specifications and related system software simulations. As a consequence, the term "modeling and simulation-based systems engineering" has also been increasingly associated with MBSE. According to the INCOSE SEBoK (Systems Engineering Body of Knowledge), MBSE may be considered a "subset of digital engineering". INCOSE hosts an annual meeting on MBSE, as well as MBSE working groups. 
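The claim that interconnected models enable automated requirement verification and impact analysis can be made concrete with a toy model repository. The sketch below is a minimal, hypothetical example in plain Python, not SysML and not any particular MBSE tool: requirements are linked to design blocks in one place, and the same linked structure answers both a coverage question and a change-impact question. All element names and the two example requirements are invented for illustration.

```python
# Toy "single source of truth": requirements and design blocks live in one
# linked model, so traceability checks and impact analysis are just queries.
# All element names are hypothetical and chosen only for illustration.
from dataclasses import dataclass, field

@dataclass
class Requirement:
    rid: str
    text: str

@dataclass
class Block:
    name: str
    satisfies: list = field(default_factory=list)  # requirement ids this block satisfies

class Model:
    def __init__(self):
        self.requirements = {}
        self.blocks = {}

    def add_requirement(self, req):
        self.requirements[req.rid] = req

    def add_block(self, block):
        self.blocks[block.name] = block

    def uncovered_requirements(self):
        """Requirements not satisfied by any block (a simple verification check)."""
        covered = {rid for b in self.blocks.values() for rid in b.satisfies}
        return [r for rid, r in self.requirements.items() if rid not in covered]

    def impact_of_change(self, rid):
        """Blocks that must be re-examined if requirement `rid` changes."""
        return [b.name for b in self.blocks.values() if rid in b.satisfies]

m = Model()
m.add_requirement(Requirement("R1", "The vehicle shall brake from 100 km/h within 40 m."))
m.add_requirement(Requirement("R2", "The vehicle shall log all braking events."))
m.add_block(Block("BrakeController", satisfies=["R1"]))

print(m.uncovered_requirements())      # R2 has no design element yet
print(m.impact_of_change("R1"))        # ['BrakeController']
```

In a document-centric workflow the same two questions would require manually cross-checking a requirements document against design documents; keeping the links in one model is what makes the checks automatic.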
== See also == AUTOSAR (AUTomotive Open System ARchitecture) Engineering Information Management (EIM) Hardware-in-the-loop simulation List of requirements engineering tools List of SysML tools Model-based design (MBD) Model-driven development (MDD) Object Management Group OPM - Object Process Methodology (ISO/PAS 19450:2015) Systems engineering (SE) SysML - Systems Modeling Language UML - Unified Modeling Language == References == == Further reading == Eclipse IDE Modeling Project: Gronback, Richard. "Eclipse Modeling Project". www.eclipse.org. Retrieved 2021-04-10. Estefan, Jeff A. "Survey of model-based systems engineering (MBSE) methodologies." Incose MBSE Focus Group 25 (2007): 8. David Long, Zane Scott. A Primer for Model-Based Systems Engineering, 2011, Vitech Corporation. Patrice Micouin, Model Based Systems Engineering: Fundamentals and Methods, 2014. Ana Luísa Ramos, José Vasconcelos Ferreira and Jaume Barceló. "Model-based systems engineering: An emerging approach for modern systems." Systems, Man, and Cybernetics, Part C: Applications and Reviews, IEEE Transactions on 42.1 (2012): 101-111. A. Wayne Wymore, Model-Based Systems Engineering, 1993. Pascal Roques. MBSE with the ARCADIA Method and the Capella Tool - 8th European Congress on Embedded Real Time Software and Systems (ERTS 2016). Somers, James. "The Coming Software Apocalypse," The Atlantic Magazine, 2017. Casteran, Regis. "Functions in systems model.", Medium article, 2019. Knizhnik et al. "An Exploration of Lessons Learned from NASA's MBSE Infusion and Modernization Initiative (MIAMI)", NASA Systems Engineering, 2020. Meillier, Renaud. "System Simulation in the Context of MBSE", Siemens article 2021. == External links == An Introduction to MBSE - SEI Blog Model Based Systems Engineering (MBSE) - NASA
Wikipedia/Model-based_systems_engineering
System of systems engineering (SoSE) is a set of developing processes, tools, and methods for designing, re-designing and deploying solutions to system-of-systems challenges. == Overview == System of Systems Engineering (SoSE) methodology is heavily used in U.S. Department of Defense applications, but is increasingly being applied to non-defense related problems such as architectural design of problems in air and auto transportation, healthcare, global communication networks, search and rescue, space exploration, industry 4.0 and many other System of Systems application domains. SoSE is more than systems engineering of monolithic, complex systems because design for System-of-Systems problems is performed under some level of uncertainty in the requirements and the constituent systems, and it involves considerations in multiple levels and domains. Whereas systems engineering focuses on building the system right, SoSE focuses on choosing the right system(s) and their interactions to satisfy the requirements. System-of-Systems Engineering and Systems Engineering are related but different fields of study. Whereas systems engineering addresses the development and operations of monolithic products, SoSE addresses the development and operations of evolving programs. In other words, traditional systems engineering seeks to optimize an individual system (i.e., the product), while SoSE seeks to optimize network of various interacting legacy and new systems brought together to satisfy multiple objectives of the program. SoSE should enable the decision-makers to understand the implications of various choices on technical performance, costs, extensibility and flexibility over time; thus, effective SoSE methodology should prepare decision-makers to design informed architectural solutions for System-of-Systems problems. Due to varied methodology and domains of applications in existing literature, there does not exist a single unified consensus for processes involved in System-of-Systems Engineering. One of the proposed SoSE frameworks, by Dr. Daniel A. DeLaurentis, recommends a three-phase method where a SoS problem is defined (understood), abstracted, modeled and analyzed for behavioral patterns. More information on this method and other proposed methods can be found in the listed SoSE focused organizations and SoSE literature in the subsequent sections. == See also == Enterprise systems engineering System of systems Enterprise architecture == References == == Further reading == Kenneth Cureton, F. Stan Settlers, "System-of-Systems Architecting: Educational Findings and Implications," 2005 IEEE International Conference on Systems, Man and Cybernetics, Waikoloa, Hawaii, October 10–12, 2005. pp. 2726–2731. Mo Jamshidi, "System-of-Systems Engineering — A Definition," IEEE SMC 2005, Big Island, Hawaii, URL: http://ieeesmc2005.unm.edu/SoSE_Defn.htm Saurabh Mittal, Jose L. Risco Martin, "Netcentric System of Systems Engineering with DEVS Unified Process", CRC Press, Boca Raton, Florida, 2013 URL:http://www.crcpress.com/product/isbn/9781439827062 Charles Keating, Ralph Rogers, Resit Unal, David Dryer, et al. "System of Systems Engineering," Engineering Management Journal, Vol. 15, no. 3, pp. 36. Charles Keating, "Research Foundations for System of Systems Engineering," 2005 IEEE International Conference on Systems, Man and Cybernetics, Waikoloa, Hawaii, October 10–12, 2005. pp. 2720–2725. 
Jack Ring, Azad Madni, "Key Challenges and Opportunities in 'System of Systems' Engineering," 2005 IEEE International Conference on Systems, Man and Cybernetics, Waikoloa, Hawaii, October 10–12, 2005. pp. 973–978. R.E. Raygan, "Configuration management in a system-of-systems environment delivering IT services," 2007 IEEE International Engineering Management Conference, Austin, Texas, July 29, 2007-Aug, 1 2007. pp. 330 – 335. D. Luzeaux & JR Ruault, "Systems of Systems", ISTE Ltd and John Wiley & Sons Inc, 2010 D. Luzeaux, JR Ruault & JL Wippler, "Complex System and Systems of Systems Engineering", ISTE Ltd and John Wiley & Sons Inc, 2011 == External links == System of Systems Signature Area at Purdue University's College of Engineering (Apr 2015 - content no longer specific to System of Systems) National Centers for System of Systems Engineering at Old Dominion University (Apr 2015 - content blocked) Center for Intelligent Networked Systems at Stevens Institute of Technology (Apr 2015 - page timed out, presumed to no longer exist) System of Systems Engineering Center of Excellence (Apr 2015 - no SOSE content)
Wikipedia/System_of_systems_engineering
Cognitive systems engineering (CSE) is an interdisciplinary field that examines the intersection of people, work, and technology, with a particular focus on safety-critical systems. The central tenet of CSE is to treat collections of people and technologies as a single unified entity—called a joint cognitive system (JCS)—capable of performing cognitive work rather than as separate human and technological components. The field was formally established in the early 1980s by Erik Hollnagel and David Woods. Unlike cognitive engineering, which primarily applies cognitive science to design technological systems that support user cognition, CSE takes a more holistic approach by analyzing how cognition is distributed across entire work systems. This perspective emphasizes understanding the functional relationships between humans and technology in complex operational environments such as air traffic control, medical systems, nuclear power plants, and other high-risk contexts. CSE draws on theoretical foundations from multiple disciplines including cognitive psychology, cognitive anthropology, systems theory, and ecological psychology. Key intellectual influences include Edwin Hutchins's distributed cognition, James Gibson's ecological theory of visual perception, Ulric Neisser's perceptual cycle, and William Clancey's situated cognition. The field has also been shaped by Jens Rasmussen's work on human error and abstraction hierarchy. Methodologically, CSE employs techniques such as cognitive task analysis, cognitive work analysis, and work domain analysis to understand how cognition is distributed across human and technological agents. These approaches focus on identifying system constraints and designing for resilience rather than merely preventing errors. == History == Cognitive systems engineering emerged in the wake of the Three Mile Island (TMI) accident. At the time, existing theories about safety were unable to explain how the operators at TMI could be confused about what was actually happening inside of the plant. Following the accident, Jens Rasmussen did early research on cognitive aspects of nuclear power plant control rooms. This work influenced a generation of researchers who would later come to be associated with cognitive systems engineering, including Morten Lind, Erik Hollnagel, and David Woods. Following the publication of a textbook on cognitive systems engineering by Kim Vicente in 1999, the techniques employed to establish a cognitive work analysis (CWA) were used to aid the design of any kind of system where humans have to interact with technology. The tools outlined by Vicente were not tried and tested, and there are few if any published accounts of the five phases of analysis being implemented. === "Cognitive systems engineering" vs "Cognitive engineering" === The term "cognitive systems engineering" was introduced in a 1983 paper by Hollnagel and Woods. Although the term cognitive engineering had already been introduced by Don Norman, Hollnagel and Woods deliberately introduced new terminology. They were unhappy with the framing of the term cognitive engineering, which they felt focused too much on improving the interaction between humans and computers through the application of cognitive science. Instead, Hollnagel and Woods wished to emphasize a shift in focus from human-computer interaction to joint cognitive systems as the unit of analysis. 
Despite the intention by Hollnagel and Woods to distinguish cognitive engineering from cognitive systems engineering, some researchers continue to use the two terms interchangeably. == Themes == === Joint cognitive systems === As mentioned in the History section above, one of the key tenets of cognitive systems engineering is that the base unit of analysis is the joint cognitive system. Instead of viewing cognitive tasks as being done only by individuals, CSE views cognitive work as being accomplished by a collection of people coordinating with each other and using technology to jointly perform cognitive work as a system. === Studying work in context === CSE researchers focus their studies on work in situ, as opposed to studying how work is done in controlled laboratory environments. This research approach, known as macrocognition, is similar to the one taken by naturalistic decision-making. Examples of studies of work done in context include Julian Orr's ethnographic studies of copy machine technicians, Lucy Suchman's ethnographic studies of how people use photocopiers, Diane Vaughan's study of engineering work at NASA in the wake of the Space Shuttle Challenger disaster, and Scott Snook's study of military work in the wake of the 1994 Black Hawk shootdown incident. === Coping with complexity === A general thread that runs through cognitive systems engineering research is the question of how to design joint cognitive systems that can deal effectively with complexity, including common patterns in how such systems can fail to deal effectively with complexity. === Anomaly response === As mentioned in the History section above, CSE researchers were influenced by TMI. One specific application of coping with complexity is the work that human operators must do when they are supervising a process such as a nuclear power plant, and they must then deal with a problem that arises. This work is sometimes known as anomaly response or dynamic fault management. This type of work often involves uncertainty, quickly changing conditions, and risk tradeoffs in deciding what remediation actions to take. === Coordination === Because joint cognitive systems involve multiple agents that must work together to complete cognitive tasks, coordination is another topic of interest in CSE. One specific example is the notion of common ground and its implications for building software that can contribute effectively as agents in a joint cognitive system. === Cognitive artifacts === CSE researchers study how people use technology to support cognitive work and coordinate this work across multiple people. Examples of such cognitive artifacts, which have been studied by researchers, include "the bed book" used in intensive care units, "voice loops" used in space operations, "speed bugs" used in aviation, drawings and sketches in engineering work, and the various tools used in marine navigation. Of particular interest to CSE researchers is how computer-based tools influence joint cognitive work, in particular the impact of automation, and computerized interfaces used by system operators. == Founders and Foundational Contributors == Erik Hollnagel* David Woods* Robert Hoffman Philip Smith Jens Rasmussen Emilie Roth Gary Klein == Books == Cognitive Systems Engineering: The Future for a Changing World by Philip J. Smith and Robert R. Hoffman, eds. 2017 Joint Cognitive Systems: Patterns in Cognitive Systems Engineering by David Woods and Erik Hollnagel, 2005. 
978-0849328213 Joint Cognitive Systems: Foundations of Cognitive Systems Engineering by Erik Hollnagel and David Woods, 2005. 978-0367864156 Cognitive Systems Engineering by Jens Rasmussen, Annelise Mark Pejtersen, and L.P. Goodstein, 1994. == See also == Cognitive work analysis Ecological interface design == References == == External links == === Journals === Cognition, Technology & Work International Journal of Human-Computer Studies Ergonomics Computer Supported Cooperative Work (CSCW): The Journal of Collaborative Computing and Work Practices
Wikipedia/Cognitive_systems_engineering
Object-oriented analysis and design (OOAD) is a technical approach for analyzing and designing an application, system, or business by applying object-oriented programming, as well as using visual modeling throughout the software development process to guide stakeholder communication and product quality. OOAD in modern software engineering is typically conducted in an iterative and incremental way. The outputs of OOAD activities are analysis models (for OOA) and design models (for OOD) respectively. The intention is for these to be continuously refined and evolved, driven by key factors like risks and business value. == History == In the early days of object-oriented technology before the mid-1990s, there were many different competing methodologies for software development and object-oriented modeling, often tied to specific Computer Aided Software Engineering (CASE) tool vendors. No standard notations, consistent terms and process guides were the major concerns at the time, which degraded communication efficiency and lengthened learning curves. Some of the well-known early object-oriented methodologies were from and inspired by gurus such as Grady Booch, James Rumbaugh, Ivar Jacobson (the Three Amigos), Robert Martin, Peter Coad, Sally Shlaer, Stephen Mellor, and Rebecca Wirfs-Brock. In 1994, the Three Amigos of Rational Software started working together to develop the Unified Modeling Language (UML). Later, together with Philippe Kruchten and Walker Royce (eldest son of Winston Royce), they have led a successful mission to merge their own methodologies, OMT, OOSE and Booch method, with various insights and experiences from other industry leaders into the Rational Unified Process (RUP), a comprehensive iterative and incremental process guide and framework for learning industry best practices of software development and project management. Since then, the Unified Process family has become probably the most popular methodology and reference model for object-oriented analysis and design. == Overview == An object contains encapsulated data and procedures grouped to represent an entity. The 'object interface' defines how the object can be interacted with. An object-oriented program is described by the interaction of these objects. Object-oriented design is the discipline of defining the objects and their interactions to solve a problem that was identified and documented during object-oriented analysis. What follows is a description of the class-based subset of object-oriented design, which does not include object prototype-based approaches where objects are not typically obtained by instantiating classes but by cloning other (prototype) objects. Object-oriented design is a method of design encompassing the process of object-oriented decomposition and a notation for depicting both logical and physical as well as state and dynamic models of the system under design. The software life cycle is typically divided up into stages, going from abstract descriptions of the problem, to designs, then to code and testing, and finally to deployment. The earliest stages of this process are analysis and design. The analysis phase is also often called "requirements acquisition". In some approaches to software development—known collectively as waterfall models—the boundaries between each stage are meant to be fairly rigid and sequential. 
The term "waterfall" was coined for such methodologies to signify that progress went sequentially in one direction only, i.e., once analysis was complete then and only then was design begun and it was rare (and considered a source of error) when a design issue required a change in the analysis model or when a coding issue required a change in design. The alternative to waterfall models are iterative models. This distinction was popularized by Barry Boehm in a very influential paper on his Spiral Model for iterative software development. With iterative models it is possible to do work in various stages of the model in parallel. So for example it is possible—and not seen as a source of error—to work on analysis, design, and even code all on the same day and to have issues from one stage impact issues from another. The emphasis on iterative models is that software development is a knowledge-intensive process and that things like analysis can't really be completely understood without understanding design issues, that coding issues can affect design, that testing can yield information about how the code or even the design should be modified, etc. Although it is possible to do object-oriented development using a waterfall model, in practice most object-oriented systems are developed with an iterative approach. As a result, in object-oriented processes "analysis and design" are often considered at the same time. The object-oriented paradigm emphasizes modularity and re-usability. The goal of an object-oriented approach is to satisfy the "open–closed principle". A module is open if it supports extension, or if the module provides standardized ways to add new behaviors or describe new states. In the object-oriented paradigm this is often accomplished by creating a new subclass of an existing class. A module is closed if it has a well defined stable interface that all other modules must use and that limits the interaction and potential errors that can be introduced into one module by changes in another. In the object-oriented paradigm this is accomplished by defining methods that invoke services on objects. Methods can be either public or private, i.e., certain behaviors that are unique to the object are not exposed to other objects. This reduces a source of many common errors in computer programming. The software life cycle is typically divided up into stages going from abstract descriptions of the problem to designs then to code and testing and finally to deployment. The earliest stages of this process are analysis and design. The distinction between analysis and design is often described as "what vs. how". In analysis developers work with users and domain experts to define what the system is supposed to do. Implementation details are supposed to be mostly or totally (depending on the particular method) ignored at this phase. The goal of the analysis phase is to create a functional model of the system regardless of constraints such as appropriate technology. In object-oriented analysis this is typically done via use cases and abstract definitions of the most important objects. The subsequent design phase refines the analysis model and makes the needed technology and other implementation choices. In object-oriented design the emphasis is on describing the various objects, their data, behavior, and interactions. The design model should have all the details required so that programmers can implement the design in code. 
== Object-oriented analysis == The purpose of any analysis activity in the software life-cycle is to create a model of the system's functional requirements that is independent of implementation constraints. The main difference between object-oriented analysis and other forms of analysis is that by the object-oriented approach we organize requirements around objects, which integrate both behaviors (processes) and states (data) modeled after real world objects that the system interacts with. In other or traditional analysis methodologies, the two aspects: processes and data are considered separately. For example, data may be modeled by ER diagrams, and behaviors by flow charts or structure charts. Common models used in OOA are use cases and object models. Use cases describe scenarios for standard domain functions that the system must accomplish. Object models describe the names, class relations (e.g. Circle is a subclass of Shape), operations, and properties of the main objects. User-interface mockups or prototypes can also be created to help understanding. == Object-oriented design == Object-oriented design (OOD) is the process of planning a system of interacting objects to solve a software problem. It is a method for software design. By defining classes and their functionality for their children (instantiated objects), each object can run the same implementation of the class with its state. During OOD, a developer applies implementation constraints to the conceptual model produced in object-oriented analysis. Such constraints could include the hardware and software platforms, the performance requirements, persistent storage and transaction, usability of the system, and limitations imposed by budgets and time. Concepts in the analysis model which is technology independent, are mapped onto implementing classes and interfaces resulting in a model of the solution domain, i.e., a detailed description of how the system is to be built on concrete technologies. Important topics during OOD also include the design of software architectures by applying architectural patterns and design patterns with the object-oriented design principles. === Input (sources) for object-oriented design === The input for object-oriented design is provided by the output of object-oriented analysis. Realize that an output artifact does not need to be completely developed to serve as input of object-oriented design; analysis and design may occur in parallel, and in practice, the results of one activity can feed the other in a short feedback cycle through an iterative process. Both analysis and design can be performed incrementally, and the artifacts can be continuously grown instead of completely developed in one shot. Some typical input artifacts for object-oriented design are: Conceptual model: The result of object-oriented analysis, captures concepts in the problem domain. The conceptual model is explicitly chosen to be independent of implementation details, such as concurrency or data storage. Use case: A description of sequences of events that, taken together, lead to a system doing something useful. Each use case provides one or more scenarios that convey how the system should interact with the users called actors to achieve a specific business goal or function. Use case actors may be end users or other systems. In many circumstances use cases are further elaborated into use case diagrams. Use case diagrams are used to identify the actor (users or other systems) and the processes they perform. 
System sequence diagram: A system sequence diagram (SSD) is a picture that shows, for a particular scenario of a use case, the events that external actors generate, their order, and possible inter-system events. User interface documentation (if applicable): Document that shows and describes the look and feel of the end product's user interface. It is not mandatory to have this, but it helps to visualize the end product and therefore helps the designer. Relational data model (if applicable): A data model is an abstract model that describes how data is represented and used. If an object database is not used, the relational data model should usually be created before the design since the strategy chosen for object–relational mapping is an output of the OO design process. However, it is possible to develop the relational data model and the object-oriented design artifacts in parallel, and the growth of an artifact can stimulate the refinement of other artifacts. === Object-oriented concepts === The five basic concepts of object-oriented design are the implementation-level features built into the programming language. These features are often referred to by these common names: Object/Class: A tight coupling or association of data structures with the methods or functions that act on the data. This is called a class, or object (an object is created based on a class). Each object serves a separate function. It is defined by its properties, what it is and what it can do. An object can be part of a class, which is a set of similar objects. Information hiding: The ability to protect some object components from external entities. This is realized by language keywords to enable a variable to be declared as private or protected to the owning class. Inheritance: The ability for a class to extend or override the functionality of another class. The so-called subclass has a whole section derived (inherited) from the superclass and has its own set of functions and data. Interface (object-oriented programming): The ability to defer the implementation of a method. The ability to define the functions or methods signatures without implementing them. Polymorphism (specifically, Subtyping): The ability to replace an object with its sub-objects. The ability of an object-variable to contain not only that object but also all of its sub-objects. === Designing concepts === Defining objects, creating class diagram from conceptual diagram: Usually map entity to class. Identifying attributes and their models. Use design patterns (if applicable): A design pattern is not a finished design, it is a description of a solution to a common problem, in a context. The main advantage of using a design pattern is that it can be reused in multiple applications. It can also be thought of as a template for how to solve a problem that can be used in many different situations and/or applications. Object-oriented design patterns typically show relationships and interactions between classes or objects, without specifying the final application classes or objects involved. Define application framework (if applicable): An application framework is usually a set of libraries or classes that are used to implement the standard structure of an application for a specific operating system. By bundling a large amount of reusable code into a framework, much time is saved for the developer since he/she is saved the task of rewriting large amounts of standard code for each new application that is developed. 
Identify persistent objects/data (if applicable): Identify objects that have to last longer than a single runtime of the application. Design the object relation mapping if a relational database is used. Identify and define remote objects (if applicable) and their variations. === Output (deliverables) of object-oriented design === Sequence diagram – Extend the system sequence diagram to add specific objects that handle the system events. A sequence diagram shows, as parallel vertical lines, different processes or objects that live simultaneously, and, as horizontal arrows, the messages exchanged between them, in the order in which they occur. Class diagram – A class diagram is a type of static structure UML diagram that describes the structure of a system by showing the system's classes, its attributes, and the relationships between the classes. The messages and classes identified through the development of the sequence diagrams can serve as input to the automatic generation of the global class diagram of the system. === Some design principles and strategies === Dependency injection: The basic idea is that if an object depends upon having an instance of some other object then the needed object is "injected" into the dependent object; for example, being passed a database connection as an argument to the constructor instead of creating one internally. Acyclic dependencies principle: The dependency graph of packages or components (the granularity depends on the scope of work for one developer) should have no cycles. This is also referred to as having a directed acyclic graph. For example, package C depends on package B, which depends on package A. If package A depended on package C, you would have a cycle. Composite reuse principle: Favor polymorphic composition of objects over inheritance. == Object-oriented modeling == Object-oriented modeling (OOM) is a common approach to modeling applications, systems, and business domains by using the object-oriented paradigm throughout the entire development life cycles. OOM is a main technique heavily used by both OOD and OOA activities in modern software engineering. Object-oriented modeling typically divides into two aspects of work: the modeling of dynamic behaviors like business processes and use cases, and the modeling of static structures like classes and components. OOA and OOD are the two distinct abstract levels (i.e. the analysis level and the design level) during OOM. The Unified Modeling Language (UML) and SysML are the two popular international standard languages used for object-oriented modeling. The benefits of OOM are: Efficient and effective communication Users typically have difficulties in understanding comprehensive documents and programming language codes well. Visual model diagrams can be more understandable and can allow users and stakeholders to give developers feedback on the appropriate requirements and structure of the system. A key goal of the object-oriented approach is to decrease the "semantic gap" between the system and the real world, and to have the system be constructed using terminology that is almost the same as the stakeholders use in everyday business. Object-oriented modeling is an essential tool to facilitate this. Useful and stable abstraction Modeling helps coding. A goal of most modern software methodologies is to first address "what" questions and then address "how" questions, i.e. 
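The dependency-injection and composition-over-inheritance principles just listed can be sketched as follows. The database-connection case mirrors the constructor-argument idea mentioned above; all class names are hypothetical and no real database driver is assumed.

```python
# Dependency injection and "favor composition over inheritance", illustrated
# with invented classes; FakeConnection stands in for a real database library.

class FakeConnection:
    """Stand-in for a real database connection object."""
    def execute(self, query: str):
        print(f"executing: {query}")

class OrderRepository:
    def __init__(self, connection):
        # The connection is *injected* rather than created internally, so the
        # class has no hidden dependency on a concrete database library and
        # can be tested with a fake.
        self._connection = connection

    def save(self, order_id: str):
        self._connection.execute(f"INSERT INTO orders VALUES ('{order_id}')")

class OrderService:
    def __init__(self, repository: OrderRepository):
        # Composite reuse: OrderService *has a* repository instead of
        # inheriting from one, keeping the two classes independently replaceable.
        self._repository = repository

    def place_order(self, order_id: str):
        self._repository.save(order_id)

service = OrderService(OrderRepository(FakeConnection()))
service.place_order("A-1001")
```

Because each dependency arrives through a constructor, the dependency graph stays acyclic and easy to rewire, which is the practical payoff of the acyclic dependencies principle listed above.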
first determine the functionality the system is to provide without consideration of implementation constraints, and then consider how to make specific solutions to these abstract requirements, and refine them into detailed designs and codes by constraints such as technology and budget. Object-oriented modeling enables this by producing abstract and accessible descriptions of both system requirements and designs, i.e. models that define their essential structures and behaviors like processes and objects, which are important and valuable development assets with higher abstraction levels above concrete and complex source code. == See also == == References == == Further reading == Grady Booch. "Object-oriented Analysis and Design with Applications, 3rd edition":http://www.informit.com/store/product.aspx?isbn=020189551X Addison-Wesley 2007. Rebecca Wirfs-Brock, Brian Wilkerson, Lauren Wiener. Designing Object Oriented Software. Prentice Hall, 1990. [A down-to-earth introduction to the object-oriented programming and design.] A Theory of Object-Oriented Design: The building-blocks of OOD and notations for representing them (with focus on design patterns.) Martin Fowler. Analysis Patterns: Reusable Object Models. Addison-Wesley, 1997. [An introduction to object-oriented analysis with conceptual models] Bertrand Meyer. Object-oriented software construction. Prentice Hall, 1997 Craig Larman. Applying UML and Patterns – Introduction to OOA/D & Iterative Development. Prentice Hall PTR, 3rd ed. 2005. Setrag Khoshafian. Object Orientation. Ulrich Norbisrath, Albert Zündorf, Ruben Jubeh. Story Driven Modeling. Amazon Createspace. p. 333, 2013. ISBN 9781483949253. == External links == Article Object-Oriented Analysis and Design with UML and RUP an overview (also about CRC cards). Applying UML – Object Oriented Analysis & Design tutorial OOAD & UML Resource website and Forums – Object Oriented Analysis & Design with UML Software Requirement Analysis using UML article by Dhiraj Shetty Article Object-Oriented Analysis in the Real World Object-Oriented Analysis & Design – overview using UML Larman, Craig. Applying UML and Patterns – Third Edition Object-Oriented Analysis and Design LePUS3 and Class-Z: formal modelling languages for object-oriented design The Hierarchy of Objects
Wikipedia/Object-oriented_analysis_and_design
A functional flow block diagram (FFBD) is a multi-tier, time-sequenced, step-by-step flow diagram of a system's functional flow. The term "functional" in this context is different from its use in functional programming or in mathematics, where pairing "functional" with "flow" would be ambiguous. Here, "functional flow" pertains to the sequencing of operations, with "flow" arrows expressing dependence on the success of prior operations. FFBDs may also express input and output data dependencies between functional blocks, as shown in figures below, but FFBDs primarily focus on sequencing. The FFBD notation was developed in the 1950s, and is widely used in classical systems engineering. FFBDs are one of the classic business process modeling methodologies, along with flow charts, data flow diagrams, control flow diagrams, Gantt charts, PERT diagrams, and IDEF. FFBDs are also referred to as functional flow diagrams, functional block diagrams, and functional flows. == History == The first structured method for documenting process flow, the flow process chart, was introduced by Frank Gilbreth to members of American Society of Mechanical Engineers (ASME) in 1921 as the presentation “Process Charts—First Steps in Finding the One Best Way”. Gilbreth's tools quickly found their way into industrial engineering curricula. In the early 1930s, an industrial engineer, Allan H. Mogensen began training business people in the use of some of the tools of industrial engineering at his Work Simplification Conferences in Lake Placid, New York. A 1944 graduate of Mogensen's class, Art Spinanger, took the tools back to Procter and Gamble where he developed their Deliberate Methods Change Program. Another 1944 graduate, Ben S. Graham, Director of Formcraft Engineering at Standard Register Industrial, adapted the flow process chart to information processing with his development of the multi-flow process chart to display multiple documents and their relationships. In 1947, ASME adopted a symbol set as the ASME Standard for Operation and Flow Process Charts, derived from Gilbreth's original work. The modern Functional Flow Block Diagram was developed by TRW Incorporated, a defense-related business, in the 1950s. In the 1960s it was exploited by NASA to visualize the time sequence of events in space systems and flight missions. FFBDs became widely used in classical systems engineering to show the order of execution of system functions. == Development of functional flow block diagrams == FFBDs can be developed in a series of levels. FFBDs show the same tasks identified through functional decomposition and display them in their logical, sequential relationship. For example, the entire flight mission of a spacecraft can be defined in a top level FFBD, as shown in Figure 2. Each block in the first level diagram can then be expanded to a series of functions, as shown in the second level diagram for "perform mission operations." Note that the diagram shows both input (transfer to operational orbit) and output (transfer to space transportation system orbit), thus initiating the interface identification and control process. Each block in the second level diagram can be progressively developed into a series of functions, as shown in the third level diagram on Figure 2. These diagrams are used both to develop requirements and to identify profitable trade studies. 
For example, does the spacecraft antenna acquire the tracking and data relay satellite (TDRS) only when the payload data are to be transmitted, or does it track TDRS continually to allow for the reception of emergency commands or transmission of emergency data? The FFBD also incorporates alternate and contingency operations, which improve the probability of mission success. The flow diagram provides an understanding of total operation of the system, serves as a basis for development of operational and contingency procedures, and pinpoints areas where changes in operational procedures could simplify the overall system operation. In certain cases, alternate FFBDs may be used to represent various means of satisfying a particular function until data are acquired, which permits selection among the alternatives. == Building blocks == === Key attributes === An overview of the key FFBD attributes: Function block: Each function on an FFBD should be separate and be represented by single box (solid line). Each function needs to stand for definite, finite, discrete action to be accomplished by system elements. Function numbering: Each level should have a consistent number scheme and provide information concerning function origin. These numbers establish identification and relationships that will carry through all Functional Analysis and Allocation activities and facilitate traceability from lower to top levels. Functional reference: Each diagram should contain a reference to other functional diagrams by using a functional title reference (box in brackets). Flow connection: Lines connecting functions should only indicate function flow and not a lapse in time or intermediate activity. Flow direction: Diagrams should be laid out so that the flow direction is generally from left to right. Arrows are often used to indicate functional flows. Summing gate: A circle is used to denote a summing gate and is used when AND/OR is present. AND is used to indicate parallel functions and all conditions must be satisfied to proceed. OR is used to indicate that alternative paths can be satisfied to proceed. GO and NO-GO path: “G” and “bar G” are used to denote “go” and “no-go” conditions. These symbols are placed adjacent to lines leaving a particular function to indicate alternative paths. === Function symbolism === A function shall be represented by a rectangle containing the title of the function (an action verb followed by a noun phrase) and its unique decimal delimited number. A horizontal line shall separate this number and the title, as shown in see Figure 3 above. The figure also depicts how to represent a reference function, which provides context within a specific FFBD. See Figure 9 for an example regarding use of a reference function. === Directed lines === A line with a single arrowhead shall depict functional flow from left to right, see Figure 4. === Logic symbols === The following basic logic symbols shall be used. AND: A condition in which all preceding or succeeding paths are required. The symbol may contain a single input with multiple outputs or multiple inputs with a single output, but not multiple inputs and outputs combined (Figure 5). Read the figure as follows: F2 AND F3 may begin in parallel after completion of F1. Likewise, F4 may begin after completion of F2 AND F3. Exclusive OR: A condition in which one of multiple preceding or succeeding paths is required, but not all. 
The symbol may contain a single input with multiple outputs or multiple inputs with single output, but not multiple inputs and outputs combined (Figure 6). Read the figure as follows: F2 OR F3 may begin after completion of F1. Likewise, F4 may begin after completion of either F2 OR F3. Inclusive OR: A condition in which one, some, or all of the multiple preceding or succeeding paths are required. Figure 7 depicts Inclusive OR logic using a combination of the AND symbol (Figure 5) and the Exclusive OR symbol (Figure 6). Read Figure 7 as follows: F2 OR F3 (exclusively) may begin after completion of F1, OR (again exclusive) F2 AND F3 may begin after completion of F1. Likewise, F4 may begin after completion of either F2 OR F3 (exclusively), OR (again exclusive) F4 may begin after completion of both F2 AND F3 === Contextual and administrative data === Each FFBD shall contain the following contextual and administrative data: Date the diagram was created Name of the engineer, organization, or working group that created the diagram Unique decimal delimited number of the function being diagrammed Unique function name of the function being diagrammed. Figure 8 and Figure 9 present the data in an FFBD. Figure 9 is a decomposition of the function F2 contained in Figure 8 and illustrates the context between functions at different levels of the model. == See also == Activity diagram Block diagram Business process mapping Dataflow Data and information visualization DRAKON Flow diagram Flow process chart Function model Functional block diagram IDEF0 N2 Chart SADT Signal flow Signal-flow graph == Notes == == Further reading == DAU (2001) Systems Engineering Fundamentals. Defense Acquisition University Press. FAA (2007) System Engineering Manual. Federal Aviation Administration Washington.
Wikipedia/Functional_flow_block_diagram
The Unified Modeling Language (UML) is a general-purpose visual modeling language that is intended to provide a standard way to visualize the design of a system. UML provides a standard notation for many types of diagrams which can be roughly divided into three main groups: behavior diagrams, interaction diagrams, and structure diagrams. The creation of UML was originally motivated by the desire to standardize the disparate notational systems and approaches to software design. It was developed at Rational Software in 1994–1995, with further development led by them through 1996. In 1997, UML was adopted as a standard by the Object Management Group (OMG) and has been managed by this organization ever since. In 2005, UML was also published by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) as the ISO/IEC 19501 standard. Since then the standard has been periodically revised to cover the latest revision of UML. In software engineering, most practitioners do not use UML, but instead produce informal hand drawn diagrams; these diagrams, however, often include elements from UML.: 536  == History == === Before UML 1.0 === UML has evolved since the second half of the 1990s and has its roots in the object-oriented programming methods developed in the late 1980s and early 1990s. The timeline (see image) shows the highlights of the history of object-oriented modeling methods and notation. It is originally based on the notations of the Booch method, the object-modeling technique (OMT), and object-oriented software engineering (OOSE), which it has integrated into a single language. Rational Software Corporation hired James Rumbaugh from General Electric in 1994 and after that, the company became the source for two of the most popular object-oriented modeling approaches of the day: Rumbaugh's object-modeling technique (OMT) and Grady Booch's method. They were soon assisted in their efforts by Ivar Jacobson, the creator of the object-oriented software engineering (OOSE) method, who joined them at Rational in 1995. === UML 1.x === Under the technical leadership of those three (Rumbaugh, Jacobson, and Booch), a consortium called the UML Partners was organized in 1996 to complete the Unified Modeling Language (UML) specification and propose it to the Object Management Group (OMG) for standardization. The partnership also contained additional interested parties (for example HP, DEC, IBM, and Microsoft). The UML Partners' UML 1.0 draft was proposed to the OMG in January 1997 by the consortium. During the same month, the UML Partners formed a group, designed to define the exact meaning of language constructs, chaired by Cris Kobryn and administered by Ed Eykholt, to finalize the specification and integrate it with other standardization efforts. The result of this work, UML 1.1, was submitted to the OMG in August 1997 and adopted by the OMG in November 1997. After the first release, a task force was formed to improve the language, which released several minor revisions, 1.3, 1.4, and 1.5. The standards it produced (as well as the original standard) have been noted as being ambiguous and inconsistent. ==== Cardinality notation ==== As with database Chen, Bachman, and ISO ER diagrams, class models are specified to use "look-across" cardinalities, even though several authors (Merise, Elmasri & Navathe, amongst others) prefer same-side or "look-here" for roles and both minimum and maximum cardinalities. Recent researchers (Feinerer and Dullea et al.) 
have shown that the "look-across" technique used by UML and ER diagrams is less effective and less coherent when applied to n-ary relationships of order strictly greater than 2. Feinerer says: "Problems arise if we operate under the look-across semantics as used for UML associations. Hartmann investigates this situation and shows how and why different transformations fail.", and: "As we will see on the next few pages, the look-across interpretation introduces several difficulties which prevent the extension of simple mechanisms from binary to n-ary associations." === UML 2 === UML 2.0 major revision replaced version 1.5 in 2005, which was developed with an enlarged consortium to improve the language further to reflect new experiences on the usage of its features. Although UML 2.1 was never released as a formal specification, versions 2.1.1 and 2.1.2 appeared in 2007, followed by UML 2.2 in February 2009. UML 2.3 was formally released in May 2010. UML 2.4.1 was formally released in August 2011. UML 2.5 was released in October 2012 as an "In progress" version and was officially released in June 2015. The formal version 2.5.1 was adopted in December 2017. There are four parts to the UML 2.x specification: The Superstructure that defines the notation and semantics for diagrams and their model elements The Infrastructure that defines the core metamodel on which the Superstructure is based The Object Constraint Language (OCL) for defining rules for model elements The UML Diagram Interchange that defines how UML 2 diagram layouts are exchanged Until UML 2.4.1, the latest versions of these standards were: UML Superstructure version 2.4.1 UML Infrastructure version 2.4.1 OCL version 2.3.1 UML Diagram Interchange version 1.0. Since version 2.5, the UML Specification has been simplified (without Superstructure and Infrastructure), and the latest versions of these standards are now: UML Specification 2.5.1 OCL version 2.4 It continues to be updated and improved by the revision task force, who resolve any issues with the language. == Design == UML offers a way to visualize a system's architectural blueprints in a diagram, including elements such as: any activities (jobs); individual components of the system; and how they can interact with other software components; how the system will run; how entities interact with others (components and interfaces); external user interface. Although originally intended for object-oriented design documentation, UML has been extended to a larger set of design documentation (as listed above), and has been found useful in many contexts. === Software development methods === UML is not a development method by itself; however, it was designed to be compatible with the leading object-oriented software development methods of its time, for example, OMT, Booch method, Objectory, and especially RUP it was originally intended to be used with when work began at Rational Software. === Modeling === It is important to distinguish between the UML model and the set of diagrams of a system. A diagram is a partial graphic representation of a system's model. The set of diagrams need not completely cover the model and deleting a diagram does not change the model. The model may also contain documentation that drives the model elements and diagrams (such as written use cases). UML diagrams represent two different views of a system model: Static (or structural) view: emphasizes the static structure of the system using objects, attributes, operations and relationships. 
It includes class diagrams and composite structure diagrams. Dynamic (or behavioral) view: emphasizes the dynamic behavior of the system by showing collaborations among objects and changes to the internal states of objects. This view includes sequence diagrams, activity diagrams and state machine diagrams. UML models can be exchanged among UML tools by using the XML Metadata Interchange (XMI) format. In UML, one of the key tools for behavior modeling is the use-case model, which originated in OOSE. Use cases are a way of specifying required usages of a system. Typically, they are used to capture the requirements of a system, that is, what a system is supposed to do. == Diagrams == UML 2 has many types of diagrams, which are divided into two categories. Some types represent structural information, and the rest represent general types of behavior, including a few that represent different aspects of interactions. These diagrams can be categorized hierarchically as shown in the following class diagram: These diagrams may all contain comments or notes explaining usage, constraint, or intent. === Structure diagrams === Structure diagrams represent the static aspects of the system. They emphasize the things that must be present in the system being modeled. Since structure diagrams represent the structure, they are used extensively in documenting the software architecture of software systems. For example, the component diagram describes how a software system is split up into components and shows the dependencies among these components. === Behavior diagrams === Behavior diagrams represent the dynamic aspects of the system. They emphasize what must happen in the system being modeled. Since behavior diagrams illustrate the behavior of a system, they are used extensively to describe the functionality of software systems. As an example, the activity diagram describes the business and operational step-by-step activities of the components in a system. An interaction can also be written out as an ordered sequence of messages, for example: Staff User → Complaints System: Submit Complaint; Complaints System → HR System: Forward Complaint; HR System → Department: Assign Complaint; Department → Complaints System: Update Resolution; Complaints System → Feedback System: Request Feedback; Feedback System → Staff User: Provide Feedback; Staff User → Feedback System: Submit Feedback. This description can be used to draw a sequence diagram using tools such as Lucidchart, Draw.io, or any UML diagram software. The diagram would have actors on the left side, with arrows indicating the sequence of actions and interactions between systems and actors as described. Sequence diagrams should be drawn for each use case to show how different objects interact with each other to achieve the functionality of the use case. == Artifacts == In UML, an artifact is the "specification of a physical piece of information that is used or produced by a software development process, or by deployment and operation of a system." "Examples of artifacts include model files, source files, scripts, and binary executable files, a table in a database system, a development deliverable, a word-processing document, or a mail message." Artifacts are the physical entities that are deployed on Nodes (i.e. Devices and Execution Environments). Other UML elements such as classes and components are first manifested into artifacts and instances of these artifacts are then deployed. Artifacts can also be composed of other artifacts.
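To make the mapping between a structural model and code concrete, the following sketch shows how a small class-diagram fragment (a Customer associated with zero or more Order objects, each carrying an amount attribute and supporting a total operation) might be realized in Java. The class names, attributes, and multiplicities are illustrative assumptions, not taken from the UML specification.

```java
// Hypothetical realization of a class-diagram fragment:
//   Customer "1" ---- "0..*" Order
// An association end with multiplicity 0..* is commonly realized
// as a collection-valued field on the owning class.
import java.util.ArrayList;
import java.util.List;

class Order {
    private final double amount;          // attribute from the class compartment

    Order(double amount) {
        this.amount = amount;
    }

    double getAmount() {
        return amount;
    }
}

class Customer {
    private final List<Order> orders = new ArrayList<>();   // the 0..* association end

    void addOrder(Order order) {
        orders.add(order);
    }

    // An operation listed in the class diagram becomes a method.
    double totalSpent() {
        double total = 0.0;
        for (Order o : orders) {
            total += o.getAmount();
        }
        return total;
    }
}
```

Behavior diagrams such as sequence diagrams would then describe how instances of these classes exchange messages at run time, which the static view above does not show.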
== Metamodeling == The Object Management Group (OMG) has developed a metamodeling architecture to define the UML, called the Meta-Object Facility. MOF is designed as a four-layered architecture, as shown in the image at right. It provides a meta-meta model at the top, called the M3 layer. This M3-model is the language used by Meta-Object Facility to build metamodels, called M2-models. The most prominent example of a Layer 2 Meta-Object Facility model is the UML metamodel, which describes the UML itself. These M2-models describe elements of the M1-layer, and thus M1-models. These would be, for example, models written in UML. The last layer is the M0-layer or data layer. It is used to describe runtime instances of the system. The meta-model can be extended using a mechanism called stereotyping. This has been criticized as being insufficient/untenable by Brian Henderson-Sellers and Cesar Gonzalez-Perez in "Uses and Abuses of the Stereotype Mechanism in UML 1.x and 2.0". == Adoption == In 2013, UML had been marketed by OMG for many contexts, but aimed primarily at software development with limited success. It has been treated, at times, as a design silver bullet, which leads to problems. UML misuse includes overuse (designing every part of the system with it, which is unnecessary) and assuming that novices can design with it. It is considered a large language, with many constructs. Some people (including Jacobson) feel that UML's size hinders learning and therefore uptake. MS Visual Studio dropped support for UML in 2016 due to lack of usage. According to Google Trends, UML has been on a steady decline since 2004. == See also == Applications of UML BPMN (Business Process Model and Notation) C4 model Department of Defense Architecture Framework DOT (graph description language) List of Unified Modeling Language tools MODAF Model-based testing Model-driven engineering Object-oriented role analysis and modeling Process Specification Language Systems Modeling Language (SysML) == References == == Further reading == Ambler, Scott William (2004). The Object Primer: Agile Model Driven Development with UML 2. Cambridge University Press. ISBN 0-521-54018-6. Archived from the original on 31 January 2010. Retrieved 29 April 2006. Chonoles, Michael Jesse; James A. Schardt (2003). UML 2 for Dummies. Wiley Publishing. ISBN 0-7645-2614-6. Fowler, Martin (2004). UML Distilled: A Brief Guide to the Standard Object Modeling Language (3rd ed.). Addison-Wesley. ISBN 0-321-19368-7. Jacobson, Ivar; Grady Booch; James Rumbaugh (1998). The Unified Software Development Process. Addison Wesley Longman. ISBN 0-201-57169-2. Martin, Robert Cecil (2003). UML for Java Programmers. Prentice Hall. ISBN 0-13-142848-9. Noran, Ovidiu S. "Business Modelling: UML vs. IDEF" (PDF). Retrieved 14 November 2022. Horst Kargl. "Interactive UML Metamodel with additional Examples". Penker, Magnus; Hans-Erik Eriksson (2000). Business Modeling with UML. John Wiley & Sons. ISBN 0-471-29551-5. Douglass, Bruce Powel. "Bruce Douglass: Real-Time Agile Systems and Software Development" (web). Retrieved 1 January 2019. Douglass, Bruce (2014). Real-Time UML Workshop 2nd Edition. Newnes. ISBN 978-0-471-29551-8. Douglass, Bruce (2004). Real-Time UML 3rd Edition. Newnes. ISBN 978-0321160768. Douglass, Bruce (2002). Real-Time Design Patterns. Addison-Wesley Professional. ISBN 978-0201699562. Douglass, Bruce (2009). Real-Time Agility. Addison-Wesley Professional. ISBN 978-0321545497. Douglass, Bruce (2010). Design Patterns for Embedded Systems in C. 
Newnes. ISBN 978-1856177078. == External links == Official website Current UML specification: Unified Modeling Language 2.5.1. OMG Document Number formal/2017-12-05. Object Management Group Standards Development Organization (OMG SDO). December 2017.
Wikipedia/Unified_Modelling_Language
A method in object-oriented programming (OOP) is a procedure associated with an object, and generally also a message. An object consists of state data and behavior; these compose an interface, which specifies how the object may be used. A method is a behavior of an object parametrized by a user. Data is represented as properties of the object, and behaviors are represented as methods. For example, a Window object could have methods such as open and close, while its state (whether it is open or closed at any given point in time) would be a property. In class-based programming, methods are defined within a class, and objects are instances of a given class. One of the most important capabilities that a method provides is method overriding: the same name (e.g., area) can be used for multiple different kinds of classes. This allows the sending objects to invoke behaviors and to delegate the implementation of those behaviors to the receiving object. In Java, for example, a method defines the behavior of a class object: an object can send an area message to another object and the appropriate formula is invoked whether the receiving object is a rectangle, circle, triangle, etc. Methods also provide the interface that other classes use to access and modify the properties of an object; this is known as encapsulation. Encapsulation and overriding are the two primary distinguishing features between methods and procedure calls. == Overriding and overloading == Method overriding and overloading are two of the most significant ways that a method differs from a conventional procedure or function call. Overriding refers to a subclass redefining the implementation of a method of its superclass. For example, findArea may be a method defined on a shape class, and subclasses such as triangle or circle would each define the appropriate formula to calculate their area. The idea is to look at objects as "black boxes" so that changes to the internals of the object can be made with minimal impact on the other objects that use it. This is known as encapsulation and is meant to make code easier to maintain and re-use. Method overloading, on the other hand, refers to differentiating the code used to handle a message based on the parameters of the method. If one views the receiving object as the first parameter in any method then overriding is just a special case of overloading where the selection is based only on the first argument. The following simple Java example illustrates the difference: == Accessor, mutator and manager methods == Accessor methods are used to read the data values of an object. Mutator methods are used to modify the data of an object. Manager methods are used to initialize and destroy objects of a class, e.g. constructors and destructors. These methods provide an abstraction layer that facilitates encapsulation and modularity. For example, if a bank-account class provides a getBalance() accessor method to retrieve the current balance (rather than directly accessing the balance data fields), then later revisions of the same code can implement a more complex mechanism for balance retrieval (e.g., a database fetch), without the dependent code needing to be changed. The concepts of encapsulation and modularity are not unique to object-oriented programming. Indeed, in many ways the object-oriented approach is simply the logical extension of previous paradigms such as abstract data types and structured programming.
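A minimal Java sketch of the bank-account example above might look like the following; the class and member names are assumptions chosen for illustration.

```java
// getBalance() is an accessor, deposit() is a mutator, and the constructor
// is a manager method. The balance field is hidden behind these methods
// (encapsulation), so callers never touch the state directly.
class BankAccount {
    private double balance;

    BankAccount(double openingBalance) {      // manager method (constructor)
        this.balance = openingBalance;
    }

    double getBalance() {                     // accessor: reads state, does not change it
        return balance;
    }

    void deposit(double amount) {             // mutator: modifies state
        if (amount > 0) {
            balance += amount;
        }
    }
}
```

Because dependent code only calls getBalance(), a later revision could compute the balance differently (for example, via a database fetch) without any caller having to change.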
=== Constructors === A constructor is a method that is called at the beginning of an object's lifetime to create and initialize the object, a process called construction (or instantiation). Initialization may include an acquisition of resources. In most languages, constructors may have parameters but do not return values. See the following example in Java: === Destructor === A destructor is a method that is called automatically at the end of an object's lifetime, a process called destruction. In most languages, destructors do not accept arguments and do not return values. Destructors can be implemented so as to perform cleanup chores and other tasks at object destruction. ==== Finalizers ==== In garbage-collected languages, such as Java, C#, and Python, destructors are known as finalizers. They have a similar purpose and function to destructors, but because of the differences between languages that utilize garbage-collection and languages with manual memory management, the sequence in which they are called is different. == Abstract methods == An abstract method is one with only a signature and no implementation body. It is often used to specify that a subclass must provide an implementation of the method, as in an abstract class. Abstract methods are used to specify interfaces in some programming languages. === Example === The following Java code shows an abstract class that needs to be extended: The following subclass extends the main class: === Reabstraction === If a subclass provides an implementation for an abstract method, another subclass can make it abstract again. This is called reabstraction. In practice, this is rarely used. ==== Example ==== In C#, a virtual method can be overridden with an abstract method. (This also applies to Java, where all non-private methods are virtual.) Interfaces' default methods can also be reabstracted, requiring subclasses to implement them. (This also applies to Java.) == Class methods == Class methods are methods that are called on a class rather than an instance. They are typically used as part of an object meta-model. That is, for each class defined, an instance of the class object is created in the meta-model. Meta-model protocols allow classes to be created and deleted. In this sense, they provide the same functionality as the constructors and destructors described above. But in some languages such as the Common Lisp Object System (CLOS) the meta-model allows the developer to dynamically alter the object model at run time: e.g., to create new classes, redefine the class hierarchy, modify properties, etc. == Special methods == Special methods are very language-specific and a language may support none, some, or all of the special methods defined here. A language's compiler may automatically generate default special methods or a programmer may be allowed to optionally define special methods. Most special methods cannot be directly called, but rather the compiler generates code to call them at appropriate times. === Static methods === Static methods are meant to be relevant to all the instances of a class rather than to any specific instance. They are similar to static variables in that sense. An example would be a static method to sum the values of all the variables of every instance of a class. For example, if there were a Product class it might have a static method to compute the average price of all products. A static method can be invoked even if no instances of the class exist yet.
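A minimal sketch of the Product example above might look like the following in Java; the class, field, and method names are illustrative assumptions rather than part of any standard API.

```java
// averagePrice() belongs to the class, not to any one instance,
// so it can be called before any Product object exists.
import java.util.List;

class Product {
    private final double price;

    Product(double price) {
        this.price = price;
    }

    double getPrice() {
        return price;
    }

    // Static method: computes a value over a collection of instances passed to it.
    static double averagePrice(List<Product> products) {
        if (products.isEmpty()) {
            return 0.0;
        }
        double sum = 0.0;
        for (Product p : products) {
            sum += p.getPrice();
        }
        return sum / products.size();
    }
}
```

For example, Product.averagePrice(List.of(new Product(2.0), new Product(4.0))) evaluates to 3.0 without requiring any particular receiving instance.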
Static methods are called "static" because they are resolved at compile time based on the class they are called on and not dynamically as in the case with instance methods, which are resolved polymorphically based on the runtime type of the object. ==== Examples ==== ===== In Java ===== In Java, a commonly used static method is: Math.max(double a, double b) This static method has no owning object and does not run on an instance. It receives all information from its arguments. === Copy-assignment operators === Copy-assignment operators define actions to be performed by the compiler when a class object is assigned to a class object of the same type. === Operator methods === Operator methods define or redefine operator symbols and define the operations to be performed with the symbol and the associated method parameters. C++ example: == Member functions in C++ == Some procedural languages were extended with object-oriented capabilities to leverage the large skill sets and legacy code for those languages but still provide the benefits of object-oriented development. Perhaps the most well-known example is C++, an object-oriented extension of the C programming language. Due to the design requirements to add the object-oriented paradigm on to an existing procedural language, message passing in C++ has some unique capabilities and terminologies. For example, in C++ a method is known as a member function. C++ also has the concept of virtual functions which are member functions that can be overridden in derived classes and allow for dynamic dispatch. === Virtual functions === Virtual functions are the means by which a C++ class can achieve polymorphic behavior. Non-virtual member functions, or regular methods, are those that do not participate in polymorphism. C++ Example: == See also == Property (programming) Remote method invocation Subroutine, also called subprogram, routine, procedure or function == Notes == == References ==
Wikipedia/Method_(computer_programming)
A timing diagram in Unified Modeling Language 2.5.1 is a specific type of interaction diagram, where the focus is on timing constraints. Timing diagrams are used to explore the behaviors of objects throughout a given period of time. A timing diagram is a special form of a sequence diagram. The differences between a timing diagram and a sequence diagram are that the axes are reversed, so that time increases from left to right, and that the lifelines are shown in separate compartments arranged vertically. There are two basic flavors of timing diagram: the concise notation and the robust notation. == References == == External links == Introduction to UML 2 Timing Diagrams UML 2 Timing Diagrams
Wikipedia/Timing_diagram_(Unified_Modeling_Language)
Analytical chemistry studies and uses instruments and methods to separate, identify, and quantify matter. In practice, separation, identification or quantification may constitute the entire analysis or be combined with another method. Separation isolates analytes. Qualitative analysis identifies analytes, while quantitative analysis determines the numerical amount or concentration. Analytical chemistry consists of classical, wet chemical methods and modern analytical techniques. Classical qualitative methods use separations such as precipitation, extraction, and distillation. Identification may be based on differences in color, odor, melting point, boiling point, solubility, radioactivity or reactivity. Classical quantitative analysis uses mass or volume changes to quantify amount. Instrumental methods may be used to separate samples using chromatography, electrophoresis or field flow fractionation. Then qualitative and quantitative analysis can be performed, often with the same instrument, and may use light interaction, heat interaction, electric fields or magnetic fields. Often the same instrument can separate, identify and quantify an analyte. Analytical chemistry is also focused on improvements in experimental design, chemometrics, and the creation of new measurement tools. Analytical chemistry has broad applications to medicine, science, and engineering. == History == Analytical chemistry has been important since the early days of chemistry, providing methods for determining which elements and chemicals are present in the object in question. Early significant contributions to analytical chemistry included the development of systematic elemental analysis by Justus von Liebig and systematized organic analysis based on the specific reactions of functional groups. The first instrumental analysis was flame emission spectrometry, developed by Robert Bunsen and Gustav Kirchhoff, who discovered rubidium (Rb) and caesium (Cs) in 1860. Most of the major developments in analytical chemistry took place after 1900. During this period, instrumental analysis became progressively dominant in the field. In particular, many of the basic spectroscopic and spectrometric techniques were discovered in the early 20th century and refined in the late 20th century. The separation sciences followed a similar timeline of development and also became increasingly transformed into high-performance instruments. In the 1970s many of these techniques began to be used together as hybrid techniques to achieve a complete characterization of samples. Starting in the 1970s, analytical chemistry became progressively more inclusive of biological questions (bioanalytical chemistry), whereas it had previously been largely focused on inorganic or small organic molecules. Lasers have been increasingly used as probes and even to initiate and influence a wide variety of reactions. The late 20th century also saw an expansion of the application of analytical chemistry from somewhat academic chemical questions to forensic, environmental, industrial and medical questions, such as in histology. Modern analytical chemistry is dominated by instrumental analysis. Many analytical chemists focus on a single type of instrument. Academics tend to either focus on new applications and discoveries or on new methods of analysis. The discovery of a chemical present in blood that increases the risk of cancer would be a discovery that an analytical chemist might be involved in.
An effort to develop a new method might involve the use of a tunable laser to increase the specificity and sensitivity of a spectrometric method. Many methods, once developed, are kept purposely static so that data can be compared over long periods of time. This is particularly true in industrial quality assurance (QA), forensic and environmental applications. Analytical chemistry plays an increasingly important role in the pharmaceutical industry where, aside from QA, it is used in the discovery of new drug candidates and in clinical applications where understanding the interactions between the drug and the patient is critical. == Classical methods == Although modern analytical chemistry is dominated by sophisticated instrumentation, the roots of analytical chemistry and some of the principles used in modern instruments are from traditional techniques, many of which are still used today. These techniques also tend to form the backbone of most undergraduate analytical chemistry educational labs. === Qualitative analysis === Qualitative analysis determines the presence or absence of a particular compound, but not the mass or concentration. By definition, qualitative analyses do not measure quantity. ==== Chemical tests ==== There are numerous qualitative chemical tests, for example, the acid test for gold and the Kastle-Meyer test for the presence of blood. ==== Flame test ==== Inorganic qualitative analysis generally refers to a systematic scheme to confirm the presence of certain aqueous ions or elements by performing a series of reactions that eliminate a range of possibilities and then confirm suspected ions with a confirming test. Sometimes small carbon-containing ions are included in such schemes. With modern instrumentation, these tests are rarely used but can be useful for educational purposes and in fieldwork or other situations where state-of-the-art instruments are not available or expedient. === Quantitative analysis === Quantitative analysis is the measurement of the quantities of particular chemical constituents present in a substance. Quantities can be measured by mass (gravimetric analysis) or volume (volumetric analysis). ==== Gravimetric analysis ==== Gravimetric analysis involves determining the amount of material present by weighing the sample before and/or after some transformation. A common example used in undergraduate education is the determination of the amount of water in a hydrate by heating the sample to remove the water such that the difference in weight is due to the loss of water. ==== Volumetric analysis ==== Titration involves the gradual addition of a measurable reactant to an exact volume of a solution being analyzed until some equivalence point is reached. Titration is a family of techniques used to determine the concentration of an analyte. Titrating accurately to either the half-equivalence point or the endpoint of a titration allows the chemist to determine the amount of moles used, which can then be used to determine a concentration or composition of the titrant. Most familiar to those who have taken chemistry during secondary education is the acid-base titration involving a color-changing indicator, such as phenolphthalein. There are many other types of titrations, for example, potentiometric titrations or precipitation titrations. Chemists might also create titration curves by systematically measuring the pH after each drop in order to understand different properties of the titrant.
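As a worked illustration of the volumetric calculation, the following Java sketch computes an analyte concentration from an acid-base titration, assuming a simple 1:1 stoichiometry at the equivalence point; the volumes and concentrations are made-up illustrative values.

```java
// At the equivalence point (1:1 stoichiometry assumed):
//   moles of titrant = moles of analyte
//   c_analyte = (c_titrant * V_titrant) / V_analyte
public class TitrationExample {
    public static void main(String[] args) {
        double titrantMolarity = 0.100;    // mol/L of NaOH (known standard)
        double titrantVolume   = 0.02530;  // L delivered at the endpoint
        double analyteVolume   = 0.02500;  // L of acid sample analyzed

        double molesTitrant    = titrantMolarity * titrantVolume;  // mol
        double analyteMolarity = molesTitrant / analyteVolume;     // mol/L

        System.out.printf("Analyte concentration: %.4f mol/L%n", analyteMolarity);
        // Prints approximately 0.1012 mol/L for these assumed values.
    }
}
```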
== Instrumental methods == === Spectroscopy === Spectroscopy measures the interaction of molecules with electromagnetic radiation. Spectroscopy consists of many different applications such as atomic absorption spectroscopy, atomic emission spectroscopy, ultraviolet-visible spectroscopy, X-ray spectroscopy, fluorescence spectroscopy, infrared spectroscopy, Raman spectroscopy, dual polarization interferometry, nuclear magnetic resonance spectroscopy, photoemission spectroscopy, Mössbauer spectroscopy and so on. === Mass spectrometry === Mass spectrometry measures the mass-to-charge ratio of molecules using electric and magnetic fields. In a mass spectrometer, a small amount of sample is ionized and converted to gaseous ions, which are then separated and analyzed according to their mass-to-charge ratios. There are several ionization methods: electron ionization, chemical ionization, electrospray ionization, fast atom bombardment, matrix-assisted laser desorption/ionization, and others. Also, mass spectrometry is categorized by approaches of mass analyzers: magnetic-sector, quadrupole mass analyzer, quadrupole ion trap, time-of-flight, Fourier transform ion cyclotron resonance, and so on. === Electrochemical analysis === Electroanalytical methods measure the potential (volts) and/or current (amps) in an electrochemical cell containing the analyte. These methods can be categorized according to which aspects of the cell are controlled and which are measured. The four main categories are potentiometry (the difference in electrode potentials is measured), coulometry (the transferred charge is measured over time), amperometry (the cell's current is measured over time), and voltammetry (the cell's current is measured while actively altering the cell's potential). In short, potentiometry measures the cell's potential, coulometry measures the charge transferred, amperometry measures the current, and voltammetry measures the change in current as the cell's potential is varied. === Thermal analysis === Calorimetry and thermogravimetric analysis measure the interaction of a material and heat. === Separation === Separation processes are used to decrease the complexity of material mixtures. Chromatography, electrophoresis and field flow fractionation are representative of this field. ==== Chromatographic assays ==== Chromatography can be used to determine the presence of substances in a sample as different components in a mixture have different tendencies to adsorb onto the stationary phase or dissolve in the mobile phase. Thus, different components of the mixture move at different speeds. Different components of a mixture can therefore be identified by their respective Rƒ values, each of which is the ratio between the migration distance of the substance and the migration distance of the solvent front during chromatography. In combination with the instrumental methods, chromatography can be used in quantitative determination of the substances. Chromatography separates the analyte from the rest of the sample so that it may be measured without interference from other compounds. There are different types of chromatography that differ in the media they use to separate the analyte and the sample. In thin-layer chromatography, the analyte mixture moves up and separates along the coated sheet, carried by the volatile mobile phase. In gas chromatography, a gas separates the volatile analytes. A common method for chromatography using liquid as a mobile phase is high-performance liquid chromatography.
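A small worked example of the Rƒ calculation described above, using made-up plate measurements, might look like the following in Java.

```java
// Retention factor from a thin-layer chromatography plate:
//   Rf = (distance travelled by the substance) / (distance travelled by the solvent front)
// The distances below are illustrative assumptions.
public class RetentionFactor {
    static double rf(double substanceDistanceCm, double solventFrontDistanceCm) {
        return substanceDistanceCm / solventFrontDistanceCm;
    }

    public static void main(String[] args) {
        double spotA = rf(2.1, 6.0);   // = 0.35
        double spotB = rf(4.5, 6.0);   // = 0.75
        System.out.printf("Rf(A) = %.2f, Rf(B) = %.2f%n", spotA, spotB);
    }
}
```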
=== Hybrid techniques === Combinations of the above techniques produce a "hybrid" or "hyphenated" technique. Several examples are in popular use today and new hybrid techniques are under development. For example, gas chromatography-mass spectrometry, gas chromatography-infrared spectroscopy, liquid chromatography-mass spectrometry, liquid chromatography-NMR spectroscopy, liquid chromatography-infrared spectroscopy, and capillary electrophoresis-mass spectrometry. Hyphenated separation techniques refer to a combination of two (or more) techniques to detect and separate chemicals from solutions. Most often the other technique is some form of chromatography. Hyphenated techniques are widely used in chemistry and biochemistry. A slash is sometimes used instead of a hyphen, especially if the name of one of the methods contains a hyphen itself. === Microscopy === The visualization of single molecules, single cells, biological tissues, and nanomaterials is an important and attractive approach in analytical science. Also, hybridization with other traditional analytical tools is revolutionizing analytical science. Microscopy can be categorized into three different fields: optical microscopy, electron microscopy, and scanning probe microscopy. Recently, this field has been progressing rapidly because of the rapid development of the computer and camera industries. === Lab-on-a-chip === Lab-on-a-chip devices integrate (multiple) laboratory functions on a single chip of only millimeters to a few square centimeters in size and are capable of handling extremely small fluid volumes, down to less than picoliters. == Errors == Error can be defined as the numerical difference between the observed value and the true value. The experimental error can be divided into two types, systematic error and random error. Systematic error results from a flaw in equipment or the design of an experiment, while random error results from uncontrolled or uncontrollable variables in the experiment. In chemical analysis, the true value and the observed value can be related to each other by the equation $\varepsilon_{a} = |x - \bar{x}|$, where $\varepsilon_{a}$ is the absolute error, $x$ is the true value, and $\bar{x}$ is the observed value. The error of a measurement is an inverse measure of its accuracy: the smaller the error, the greater the accuracy of the measurement. Errors can also be expressed relatively. The relative error is $\varepsilon_{r} = \frac{\varepsilon_{a}}{|x|} = \left|\frac{x - \bar{x}}{x}\right|$, and the percent error is $\varepsilon_{r} \times 100\%$. If we want to use these values in a function, we may also want to calculate the error of the function. Let $f$ be a function with $N$ variables.
Therefore, the propagation of uncertainty must be calculated in order to know the error in $f$: $\varepsilon_{a}(f) \approx \sum_{i=1}^{N} \left|\frac{\partial f}{\partial x_{i}}\right| \varepsilon_{a}(x_{i}) = \left|\frac{\partial f}{\partial x_{1}}\right| \varepsilon_{a}(x_{1}) + \left|\frac{\partial f}{\partial x_{2}}\right| \varepsilon_{a}(x_{2}) + \ldots + \left|\frac{\partial f}{\partial x_{N}}\right| \varepsilon_{a}(x_{N})$ == Standards == === Standard curve === A general method for analysis of concentration involves the creation of a calibration curve. This allows for the determination of the amount of a chemical in a material by comparing the results of an unknown sample to those of a series of known standards. If the concentration of element or compound in a sample is too high for the detection range of the technique, it can simply be diluted in a pure solvent. If the amount in the sample is below an instrument's range of measurement, the method of addition can be used. In this method, a known quantity of the element or compound under study is added, and the difference between the concentration added and the concentration observed is the amount actually in the sample. === Internal standards === Sometimes an internal standard is added at a known concentration directly to an analytical sample to aid in quantitation. The amount of analyte present is then determined relative to the internal standard as a calibrant. An ideal internal standard is an isotopically enriched analyte which gives rise to the method of isotope dilution. === Standard addition === The method of standard addition is used in instrumental analysis to determine the concentration of a substance (analyte) in an unknown sample by comparison to a set of samples of known concentration, similar to using a calibration curve. Standard addition can be applied to most analytical techniques and is used instead of a calibration curve to solve the matrix effect problem. == Signals and noise == One of the most important components of analytical chemistry is maximizing the desired signal while minimizing the associated noise. The analytical figure of merit is known as the signal-to-noise ratio (S/N or SNR). Noise can arise from environmental factors as well as from fundamental physical processes. === Thermal noise === Thermal noise results from the motion of charge carriers (usually electrons) in an electrical circuit generated by their thermal motion. Thermal noise is white noise meaning that the power spectral density is constant throughout the frequency spectrum. The root mean square value of the thermal noise in a resistor is given by $v_{\mathrm{RMS}} = \sqrt{4 k_{B} T R \Delta f}$, where $k_{B}$ is the Boltzmann constant, $T$ is the temperature, $R$ is the resistance, and $\Delta f$ is the bandwidth of the frequency $f$. === Shot noise === Shot noise is a type of electronic noise that occurs when the finite number of particles (such as electrons in an electronic circuit or photons in an optical device) is small enough to give rise to statistical fluctuations in a signal. Shot noise is a Poisson process, and the charge carriers that make up the current follow a Poisson distribution.
The root mean square current fluctuation is given by $i_{\mathrm{RMS}} = \sqrt{2 e I \Delta f}$, where $e$ is the elementary charge and $I$ is the average current. Shot noise is white noise. === Flicker noise === Flicker noise is electronic noise with a 1/ƒ frequency spectrum; as ƒ increases, the noise decreases. Flicker noise arises from a variety of sources, such as impurities in a conductive channel, generation and recombination noise in a transistor due to base current, and so on. This noise can be avoided by modulation of the signal at a higher frequency, for example, through the use of a lock-in amplifier. === Environmental noise === Environmental noise arises from the surroundings of the analytical instrument. Sources of electromagnetic noise are power lines, radio and television stations, wireless devices, compact fluorescent lamps and electric motors. Many of these noise sources are narrow bandwidth and, therefore, can be avoided. Temperature and vibration isolation may be required for some instruments. === Noise reduction === Noise reduction can be accomplished either in hardware or in software. Examples of hardware noise reduction are the use of shielded cable, analog filtering, and signal modulation. Examples of software noise reduction are digital filtering, ensemble averaging, boxcar averaging, and correlation methods. == Applications == Analytical chemistry has applications in forensic science, bioanalysis, clinical analysis, environmental analysis, and materials analysis. Analytical chemistry research is largely driven by performance (sensitivity, detection limit, selectivity, robustness, dynamic range, linear range, accuracy, precision, and speed) and cost (purchase, operation, training, time, and space). Among the main branches of contemporary analytical atomic spectrometry, the most widespread and universal are optical and mass spectrometry. In the direct elemental analysis of solid samples, the new leaders are laser-induced breakdown and laser ablation mass spectrometry, and the related techniques with transfer of the laser ablation products into inductively coupled plasma. Advances in the design of diode lasers and optical parametric oscillators promote developments in fluorescence and ionization spectrometry and also in absorption techniques, where the use of optical cavities for increased effective absorption pathlength is expected to expand. The use of plasma- and laser-based methods is increasing. Interest in absolute (standardless) analysis has revived, particularly in emission spectrometry. Great effort is being put into shrinking the analysis techniques to chip size (micro total analysis systems (μTAS) or lab-on-a-chip). Although there are few examples of such systems competitive with traditional analysis techniques, potential advantages include size/portability, speed, and cost. Microscale chemistry reduces the amounts of chemicals used. Many developments improve the analysis of biological systems.
Examples of rapidly expanding fields in this area are genomics, DNA sequencing and related research in genetic fingerprinting and DNA microarrays; proteomics, the analysis of protein concentrations and modifications, especially in response to various stressors, at various developmental stages, or in various parts of the body; metabolomics, which deals with metabolites; transcriptomics, including mRNA and associated fields; lipidomics, which deals with lipids and their associated fields; peptidomics, which deals with peptides and their associated fields; and metallomics, dealing with metal concentrations and especially with their binding to proteins and other molecules. Analytical chemistry has played a critical role in fields ranging from the understanding of basic science to a variety of practical applications, such as biomedical applications, environmental monitoring, quality control of industrial manufacturing, forensic science, and so on. The recent developments in computer automation and information technologies have extended analytical chemistry into a number of new biological fields. For example, automated DNA sequencing machines were the basis for completing human genome projects, leading to the birth of genomics. Protein identification and peptide sequencing by mass spectrometry opened a new field of proteomics. In addition to automating specific processes, there are efforts to automate larger sections of lab testing, for example at companies such as Emerald Cloud Lab and Transcriptic. Analytical chemistry has been an indispensable area in the development of nanotechnology. Surface characterization instruments, electron microscopes and scanning probe microscopes enable scientists to visualize atomic structures with chemical characterizations. == See also == Calorimeter Clinical chemistry Environmental chemistry Ion beam analysis List of chemical analysis methods Important publications in analytical chemistry List of materials analysis methods Measurement uncertainty Metrology Microanalysis Nuclear reaction analysis Quality of analytical results Radioanalytical chemistry Rutherford backscattering spectroscopy Sensory analysis - in the field of Food science Virtual instrumentation Working range == References == == Further reading == == External links == Infographic and animation showing the progress of analytical chemistry AAS Atomic Absorption Spectrophotometer
Wikipedia/Analytical_method
Strategy (from Greek στρατηγία stratēgia, "troop leadership; office of general, command, generalship") is a general plan to achieve one or more long-term or overall goals under conditions of uncertainty. In the sense of the "art of the general", which included several subsets of skills including military tactics, siegecraft, logistics etc., the term came into use in the 6th century C.E. in Eastern Roman terminology, and was translated into Western vernacular languages only in the 18th century. From then until the 20th century, the word "strategy" came to denote "a comprehensive way to try to pursue political ends, including the threat or actual use of force, in a dialectic of wills" in a military conflict, in which both adversaries interact. Strategy is important because the resources available to achieve goals are usually limited. Strategy generally involves setting goals and priorities, determining actions to achieve the goals, and mobilizing resources to execute the actions. A strategy describes how the ends (goals) will be achieved by the means (resources). Strategy can be intended or can emerge as a pattern of activity as the organization adapts to its environment or competes. It involves activities such as strategic planning and strategic thinking. Henry Mintzberg from McGill University defined strategy as a pattern in a stream of decisions, in contrast with a view of strategy as planning, while Max McKeown (2011) argues that "strategy is about shaping the future" and is the human attempt to get to "desirable ends with available means". Vladimir Kvint defines strategy as "a system of finding, formulating, and developing a doctrine that will ensure long-term success if followed faithfully." == Military theory == Subordinating the political point of view to the military would be absurd, for it is policy that has created war...Policy is the guiding intelligence, and war only the instrument, not vice-versa. In military theory, strategy is "the utilization during both peace and war, of all of the nation's forces, through large scale, long-range planning and development, to ensure security and victory" (Random House Dictionary). The father of modern Western strategic study, Carl von Clausewitz, defined military strategy as "the employment of battles to gain the end of war." B. H. Liddell Hart's definition put less emphasis on battles, defining strategy as "the art of distributing and applying military means to fulfill the ends of policy". Hence, both gave pre-eminence to political aims over military goals. U.S. Naval War College instructor Andrew Wilson defined strategy as the "process by which political purpose is translated into military action." Lawrence Freedman defined strategy as the "art of creating power." Eastern military philosophy dates back much further, with examples such as The Art of War by Sun Tzu, dated to around 500 B.C. === Counterterrorism Strategy === Because counterterrorism involves the synchronized efforts of numerous competing bureaucratic entities, national governments frequently create overarching counterterrorism strategies at the national level. A national counterterrorism strategy is a government's plan to use the instruments of national power to neutralize terrorists, their organizations, and their networks in order to render them incapable of using violence to instill fear and to coerce the government or its citizens to react in accordance with the terrorists' goals.
The United States has had several such strategies in the past, including the United States National Strategy for Counterterrorism (2018); the Obama-era National Strategy for Counterterrorism (2011); and the National Strategy for Combatting Terrorism (2003). There have also been a number of ancillary or supporting plans, such as the 2014 Strategy to Counter the Islamic State of Iraq and the Levant, and the 2016 Strategic Implementation Plan for Empowering Local Partners to Prevent Violent Extremism in the United States. Similarly, the United Kingdom's counterterrorism strategy, CONTEST, seeks "to reduce the risk to the UK and its citizens and interests overseas from terrorism so that people can go about their lives freely and with confidence." == Management theory == The essence of formulating competitive strategy is relating a company to its environment. Modern business strategy emerged as a field of study and practice in the 1960s; prior to that time, the words "strategy" and "competition" rarely appeared in the most prominent management literature. Alfred Chandler wrote in 1962 that: "Strategy is the determination of the basic long-term goals of an enterprise, and the adoption of courses of action and the allocation of resources necessary for carrying out these goals." Michael Porter defined strategy in 1980 as the "...broad formula for how a business is going to compete, what its goals should be, and what policies will be needed to carry out those goals" and the "...combination of the ends (goals) for which the firm is striving and the means (policies) by which it is seeking to get there." === Definition === Henry Mintzberg described five definitions of strategy in 1998: Strategy as plan – a directed course of action to achieve an intended set of goals; similar to the strategic planning concept; Strategy as pattern – a consistent pattern of past behavior, with a strategy realized over time rather than planned or intended. Where the realized pattern was different from the intent, he referred to the strategy as emergent; Strategy as position – locating brands, products, or companies within the market, based on the conceptual framework of consumers or other stakeholders; a strategy determined primarily by factors outside the firm; Strategy as ploy – a specific maneuver intended to outwit a competitor; and Strategy as perspective – executing strategy based on a "theory of the business" or natural extension of the mindset or ideological perspective of the organization. Complexity theorists define strategy as the unfolding of the internal and external aspects of the organization that results in actions in a socio-economic context. === Strategic Problem === In 1998, Crouch defined the strategic problem as maintaining flexible relationships that can range from intense competition to harmonious cooperation among different players in a dynamic market. While Crouch was open to the idea of cooperation between players, his approach still emphasized that strategy is shaped by market conditions and organizational structure. This view aligns with the definitions of strategy proposed by Porter and Mintzberg. In contrast, Burnett regards strategy as a plan formulated through methodology in which the strategic problem encompasses six tasks: goal formulation, environmental analysis, strategy formulation, strategy evaluation, strategy implementation, and strategy control. The literature identifies two main sources for defining a strategic problem. 
The first is related to environmental factors, and the second focuses on the organizational context (Mukherji and Hurtado, 2001). These two sources summarize three dimensions originally proposed by Ansoff and Hayes (1981). According to them, a strategic problem arises from analysis of internal and external issues, the processes to solve them, and the variables involved. In Terra and Passador's view, organizations and the systems around them are tightly connected, so they rely on each other to survive. This means a strategy should balance being proactive and reactive. This involves recognizing the organization’s impact on the environment and acting to minimize harm while adapting to new demands. The strategy should also align internal and external aspects of the organization and include all related entities. This helps build a complex socio-economic system where the organization is part of a sustainable ecosystem. === Complexity theory === Complexity science, as articulated by R. D. Stacey, represents a conceptual framework capable of harmonizing emergent and deliberate strategies. Within complexity approaches the term "strategy" is intricately linked to action. Complexity theorists view programs merely as predetermined sequences effective in highly ordered and less chaotic environments. Conversely, strategy emerges from a simultaneous examination of determined conditions (order) and uncertainties (disorder) that drive action. Complexity theory posits that strategy involves execution, encompasses control and emergence, scrutinizes both internal and external organizational aspects, and can take the form of maneuvers or any other act or process. The works of Stacey stand as pioneering efforts in applying complexity principles to the field of strategy. This author applied self-organization and chaos principles to describe strategy, organizational change dynamics, and learning. Their propositions advocate for strategy approached through choices and the evolutionary process of competitive selection. In this context, corrections of anomalies occur through actions involving negative feedback, while innovation and continuous change stem from actions guided by positive feedback. Dynamically, complexity in strategic management can be elucidated through the model of "Symbiotic Dynamics" by Terra and Passador. This model conceives the social organization of production as an interplay between two distinct systems existing in a symbiotic relationship while interconnected with the external environment. The organization's social network acts as a self-referential entity controlling the organization's life, while its technical structure resembles a purposeful "machine" supplying the social system by processing resources. These intertwined structures exchange disturbances and residues while interacting with the external world through their openness. Essentially, as the organization produces itself, it also hetero-produces, surviving through energy and resource flows across its subsystems. This dynamic has strategic implications, governing organizational dynamics through a set of attraction basins establishing operational and regenerative capabilities. Hence, one of the primary roles of strategists is to identify "human attractors" and assess their impacts on organizational dynamics. According to the theory of Symbiotic Dynamics, both leaders and the technical system can act as attractors, directly influencing organizational dynamics and responses to external disruptions. 
Terra and Passador further assert that while producing, organizations contribute to environmental entropy, potentially leading to abrupt ruptures and collapses within their subsystems, even within the organizations themselves. Given this issue, the authors conclude that organizations intervening to maintain the environment's stability within suitable parameters for survival tend to exhibit greater longevity. The theory of Symbiotic Dynamics posits that organizations must acknowledge their impact on the external environment (markets, society, and the environment) and act systematically to reduce their degradation while adapting to the demands arising from these interactions. To achieve this, organizations need to incorporate all interconnected systems into their decision-making processes, enabling the envisioning of complex socio-economic systems where they integrate in a stable and sustainable manner. This blend of proactivity and reactivity is fundamental to ensuring the survival of the organization itself. === Components === Professor Richard P. Rumelt described strategy as a type of problem solving in 2011. He wrote that good strategy has an underlying structure he called a kernel. The kernel has three parts: 1) A diagnosis that defines or explains the nature of the challenge; 2) A guiding policy for dealing with the challenge; and 3) Coherent actions designed to carry out the guiding policy. President Kennedy illustrated these three elements of strategy in his Cuban Missile Crisis Address to the Nation of 22 October 1962: Diagnosis: "This Government, as promised, has maintained the closest surveillance of the Soviet military buildup on the island of Cuba. Within the past week, unmistakable evidence has established the fact that a series of offensive missile sites are now in preparation on that imprisoned island. The purpose of these bases can be none other than to provide a nuclear strike capability against the Western Hemisphere." Guiding Policy: "Our unswerving objective, therefore, must be to prevent the use of these missiles against this or any other country, and to secure their withdrawal or elimination from the Western Hemisphere." Action Plans: First among seven numbered steps was the following: "To halt this offensive buildup a strict quarantine on all offensive military equipment under shipment to Cuba is being initiated. All ships of any kind bound for Cuba from whatever nation or port will, if found to contain cargoes of offensive weapons, be turned back." Rumelt wrote in 2011 that three important aspects of strategy include "premeditation, the anticipation of others' behavior, and the purposeful design of coordinated actions." He described strategy as solving a design problem, with trade-offs among various elements that must be arranged, adjusted and coordinated, rather than a plan or choice. === Formulation and implementation === Strategy typically involves two major processes: formulation and implementation. Formulation involves analyzing the environment or situation, making a diagnosis, and developing guiding policies. It includes such activities as strategic planning and strategic thinking. Implementation refers to the action plans taken to achieve the goals established by the guiding policy. Bruce Henderson wrote in 1981 that: "Strategy depends upon the ability to foresee future consequences of present initiatives." 
He wrote that the basic requirements for strategy development include, among other factors: 1) extensive knowledge about the environment, market and competitors; 2) ability to examine this knowledge as an interactive dynamic system; and 3) the imagination and logic to choose between specific alternatives. Henderson wrote that strategy was valuable because of: "finite resources, uncertainty about an adversary's capability and intentions; the irreversible commitment of resources; necessity of coordinating action over time and distance; uncertainty about control of the initiative; and the nature of adversaries' mutual perceptions of each other." == Game theory == In game theory, a player's strategy is any of the options that the player would choose in a specific setting. Any optimal outcome they receive depends not only on their own actions but also on the actions of other players. == See also == Concept Driven Strategy Consultant Odds algorithm (Odds strategy) Sports strategy Strategy game Strategic management Strategy pattern Strategic planning Strategic voting Strategist Strategy Markup Language Tactic (method) Time management U.S. Army Strategist == Further reading == Burgelman, Robert A. Strategy Is Destiny: How Strategy-Making Shapes a Company's Future (2002). Freedman, Lawrence. Strategy: A History (2013). Heuser, Beatrice. The Evolution of Strategy: Thinking War from Antiquity to the Present (2010). Kvint, Vladimir. Strategy for the Global Market: Theory and Practical Applications (2016). == References == == External links ==
Wikipedia/Strategies
Management Science is a peer-reviewed academic journal that covers research on all aspects of management related to strategy, entrepreneurship, innovation, information technology, and organizations as well as all functional areas of business, such as accounting, finance, marketing, and operations. It is published by the Institute for Operations Research and the Management Sciences and was established in 1954 by the institute's precursor, the Institute of Management Sciences. C. West Churchman was the founding editor-in-chief. According to the Journal Citation Reports, the journal has a 2022 impact factor of 5.4. == Editors-in-chief == The following persons are, or have been, editors-in-chief: 2018–2023: David Simchi-Levi 2014–2018: Teck-Hua Ho 2009–2014: Gérard Cachon 2003–2008: Wallace Hopp 1997–2002: Hau L. Lee 1993–1997: Gabriel R. Bitran 1983–1990: Donald G. Morrison 1968–1983: Martin K. Starr 1960–1967: Robert M. Thrall 1954–1960: C. West Churchman == Notable papers == According to Google Scholar, the following three papers have been cited most frequently: Sharpe, William F. (1963). "A Simplified Model for Portfolio Analysis". Management Science. 9 (2): 277–293. doi:10.1287/mnsc.9.2.277. Harsanyi, John C. (1967). "Games with Incomplete Information Played by "Bayesian" Players, I–III: Part I. The Basic Model". Management Science. 50 (12 supplement): 1804–1817. doi:10.1287/mnsc.1040.0270. Bass, Frank M. (1969). "A New Product Growth for Model Consumer Durables". Management Science. 15 (5): 215–227. doi:10.1287/mnsc.15.5.215. == References == == External links == Official website
Wikipedia/Management_Science_(journal)
A process design kit (PDK) is a set of files used within the semiconductor industry to model a fabrication process for the design tools used to design an integrated circuit. The PDK is created by the foundry defining a certain technology variation for their processes. It is then passed to their customers to use in the design process. The customers may enhance the PDK, tailoring it to their specific design styles and markets. The designers use the PDK to design, simulate, draw and verify the design before handing the design back to the foundry to produce chips. The data in the PDK is specific to the foundry's process variation and is chosen early in the design process, influenced by the market requirements for the chip. An accurate PDK will increase the chances of first-pass successful silicon. == Description == Different tools in the design flow have different input formats for the PDK data. The PDK engineers have to decide which tools they will support in the design flows and create the libraries and rule sets which support those flows. A typical PDK contains:
- A primitive device library: symbols, device parameters, PCells
- Verification checks: design rule checking (DRC), layout versus schematic (LVS), antenna and electrical rule checks, physical extraction
- Technology data: layers, layer names, layer/purpose pairs; colors, fills and display attributes; process constraints; electrical rules
- Rule files: LEF and tool-dependent rule formats
- Simulation models of primitive devices (SPICE or SPICE derivatives): transistors (typically SPICE), capacitors, resistors, inductors
- Design Rule Manual: a user-friendly representation of the process requirements
A PDK may also include standard cell libraries, from the foundry, a library vendor, or developed internally, comprising LEF format of abstracted layout data, symbols, library (.lib) files, and GDSII layout data. == References == == Further reading == Yu Cao, "Predictive process design kits", ch. 8 in, Predictive Technology Model for Robust Nanoelectronic Design, Springer Science & Business Media, 2011 ISBN 1461404452. Lukas Chrostowski, Michael Hochberg, "Process design kit (PDK)", section 10.1 in, Silicon Photonics Design, Cambridge University Press, 2015 ISBN 1107085454. Michael Liehr et al., "Silicon photonics integrated circuit process design kit", section 4.8 in, Alan Willner (ed), Optical Fiber Telecommunications, vol. 11, Elsevier, 2019 ISBN 0128165022. Ian Robertson, Nutapong Somjit, Mitchai Chongcheawchamnan, "Process design kits for RFIC and MMIC design", section 17.8.1 in, Microwave and Millimetre-Wave Design for Wireless Communications, John Wiley & Sons, 2016 ISBN 1118917219.
Wikipedia/Process_design_kit
In electronic design automation, a design rule is a geometric constraint imposed on circuit board, semiconductor device, and integrated circuit (IC) designers to ensure their designs function properly, reliably, and can be produced with acceptable yield. Design rules for production are developed by process engineers based on the capability of their processes to realize design intent. Electronic design automation is used extensively to ensure that designers do not violate design rules; a process called design rule checking (DRC). DRC is a major step during physical verification signoff on the design, which also involves LVS (layout versus schematic) checks, XOR checks, ERC (electrical rule check), and antenna checks. The importance of design rules and DRC is greatest for ICs, which have micro- or nano-scale geometries; for advanced processes, some fabs also insist upon the use of more restricted rules to improve yield. == Design rules == Design rules are a series of parameters provided by semiconductor manufacturers that enable the designer to verify the correctness of a mask set. Design rules are specific to a particular semiconductor manufacturing process. A design rule set specifies certain geometric and connectivity restrictions to ensure sufficient margins to account for variability in semiconductor manufacturing processes, so as to ensure that most of the parts work correctly. The most basic design rules are shown in the diagram on the right. The first are single layer rules. A width rule specifies the minimum width of any shape in the design. A spacing rule specifies the minimum distance between two adjacent objects. These rules will exist for each layer of semiconductor manufacturing process, with the lowest layers having the smallest rules (typically 100 nm as of 2007) and the highest metal layers having larger rules (perhaps 400 nm as of 2007). A two layer rule specifies a relationship that must exist between two layers. For example, an enclosure rule might specify that an object of one type, such as a contact or via, must be covered, with some additional margin, by a metal layer. A typical value as of 2007 might be about 10 nm. There are many other rule types not illustrated here. A minimum area rule is just what the name implies. Antenna rules are complex rules that check ratios of areas of every layer of a net for configurations that can result in problems when intermediate layers are etched. Many other such rules exist and are explained in detail in the documentation provided by the semiconductor manufacturer. Academic design rules are often specified in terms of a scalable parameter, λ, so that all geometric tolerances in a design may be defined as integer multiples of λ. This simplifies the migration of existing chip layouts to newer processes. Industrial rules are more highly optimized, and only approximate uniform scaling. Design rule sets have become increasingly more complex with each subsequent generation of semiconductor process. == Software == The main objective of design rule checking (DRC) is to achieve a high overall yield and reliability for the design. If design rules are violated the design may not be functional. To meet this goal of improving die yields, DRC has evolved from simple measurement and Boolean checks, to more involved rules that modify existing features, insert new features, and check the entire design for process limitations such as layer density. 
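The single-layer width and spacing rules described above can be illustrated with a small sketch. The following is a toy check over axis-aligned rectangles with invented rule values; production DRC engines operate on arbitrary polygons with edge-based or scan-line algorithms and read their rules from a foundry rule deck rather than hard-coded constants.

```python
from dataclasses import dataclass
import math

MIN_WIDTH = 100      # nm; placeholder value, not any foundry's real rule
MIN_SPACING = 120    # nm; placeholder value

@dataclass
class Rect:
    x1: int
    y1: int
    x2: int
    y2: int          # nm coordinates, with x1 < x2 and y1 < y2

    @property
    def width(self) -> int:
        # "width" in the DRC sense: the narrower dimension of the shape
        return min(self.x2 - self.x1, self.y2 - self.y1)

def separation(a: Rect, b: Rect) -> float:
    """Edge-to-edge distance between two rectangles (0 if they touch or overlap)."""
    dx = max(a.x1 - b.x2, b.x1 - a.x2, 0)
    dy = max(a.y1 - b.y2, b.y1 - a.y2, 0)
    return math.hypot(dx, dy)

def check_layer(shapes: list[Rect]) -> list[str]:
    """Report width and spacing violations for shapes on a single layer."""
    violations = []
    for i, s in enumerate(shapes):
        if s.width < MIN_WIDTH:
            violations.append(f"shape {i}: width {s.width} nm < {MIN_WIDTH} nm")
    for i in range(len(shapes)):
        for j in range(i + 1, len(shapes)):
            d = separation(shapes[i], shapes[j])
            if 0 < d < MIN_SPACING:   # touching/overlapping shapes are treated as connected
                violations.append(f"shapes {i}/{j}: spacing {d:.0f} nm < {MIN_SPACING} nm")
    return violations

metal1 = [Rect(0, 0, 400, 90), Rect(480, 0, 700, 200)]
print(check_layer(metal1))   # one width violation and one spacing violation
```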
A completed layout consists not only of the geometric representation of the design, but also data that provides support for the manufacture of the design. While design rule checks do not validate that the design will operate correctly, they are constructed to verify that the structure meets the process constraints for a given design type and process technology. DRC software usually takes as input a layout in the GDSII standard format and a list of rules specific to the semiconductor process chosen for fabrication. From these it produces a report of design rule violations that the designer may or may not choose to correct. Carefully "stretching" or waiving certain design rules is often used to increase performance and component density at the expense of yield. DRC products define their rules in a language that describes the operations to be performed during checking. For example, Mentor Graphics uses the Standard Verification Rule Format (SVRF) language in its DRC rules files, and Magma Design Automation uses a Tcl-based language. A set of rules for a particular process is referred to as a run-set, rule deck, or just a deck. DRC is a very computationally intensive task. Usually DRC checks will be run on each sub-section of the ASIC to minimize the number of errors that are detected at the top level. If run on a single CPU, customers may have to wait up to a week to get the result of a design rule check for modern designs. Most design companies require DRC to run in less than a day to achieve reasonable cycle times, since the DRC will likely be run several times prior to design completion. With today's processing power, full-chip DRCs may run in much shorter times, sometimes as little as one hour, depending on the chip complexity and size. Some categories of design rules (checked by DRC) in IC design include: Active to active spacing Well to well spacing Minimum channel length of the transistor Minimum metal width Metal to metal spacing Metal fill density (for processes using CMP) Poly density ESD and I/O rules Antenna effect === Commercial === Major products in the DRC area of EDA include: Altium Designer Advanced Design System Desktop DRC by PathWave Design (Keysight Technologies, previously Agilent's EEsof EDA division) Calibre by Mentor Graphics Diva, DRACULA, Assura, PVS and Pegasus by Cadence Design Systems Hercules and IC Validator by Synopsys Guardian by Silvaco HyperLynx DRC Free/Gold by Mentor Graphics PowerDRC (now SmartDRC) by Silvaco Quartz by Magma Design Automation === Free software === Electric VLSI Design System KLayout Magic Alliance -- A Free VLSI/CAD System Opencircuitdesign software: Microwind -- An educational layout CAD system Open-source 130nm CMOS PDK by Google and SkyWater Technology Foundry == References == Electronic Design Automation For Integrated Circuits Handbook, by Lavagno, Martin, and Scheffer, ISBN 0-8493-3096-3 A survey of the field, from which part of the above summary was derived, with permission.
Wikipedia/Design_rule_checking
In electronics and especially synchronous digital circuits, a clock signal (historically also known as logic beat) is an electronic logic signal (voltage or current) which oscillates between a high and a low state at a constant frequency and is used like a metronome to synchronize actions of digital circuits. In a synchronous logic circuit, the most common type of digital circuit, the clock signal is applied to all storage devices, flip-flops and latches, and causes them all to change state simultaneously, preventing race conditions. A clock signal is produced by an electronic oscillator called a clock generator. The most common clock signal is in the form of a square wave with a 50% duty cycle. Circuits using the clock signal for synchronization may become active at either the rising edge, falling edge, or, in the case of double data rate, both in the rising and in the falling edges of the clock cycle. == Digital circuits == Most integrated circuits (ICs) of sufficient complexity use a clock signal in order to synchronize different parts of the circuit, cycling at a rate slower than the worst-case internal propagation delays. In some cases, more than one clock cycle is required to perform a predictable action. As ICs become more complex, the problem of supplying accurate and synchronized clocks to all the circuits becomes increasingly difficult. The preeminent example of such complex chips is the microprocessor, the central component of modern computers, which relies on a clock from a crystal oscillator. The only exceptions are asynchronous circuits such as asynchronous CPUs. A clock signal might also be gated, that is, combined with a controlling signal that enables or disables the clock signal for a certain part of a circuit. This technique is often used to save power by effectively shutting down portions of a digital circuit when they are not in use, but comes at a cost of increased complexity in timing analysis. === Single-phase clock === Most modern synchronous circuits use only a "single phase clock" – in other words, all clock signals are (effectively) transmitted on a single wire. === Two-phase clock === In synchronous circuits, a "two-phase clock" refers to clock signals distributed on two wires, each with non-overlapping pulses. Traditionally one wire is called "phase 1" or "φ1" (phi1), the other wire carries the "phase 2" or "φ2" signal. Because the two phases are guaranteed non-overlapping, gated latches rather than edge-triggered flip-flops can be used to store state information so long as the inputs to latches on one phase only depend on outputs from latches on the other phase. Since a gated latch uses only four gates versus six gates for an edge-triggered flip-flop, a two phase clock can lead to a design with a smaller overall gate count but usually at some penalty in design difficulty and performance. Metal oxide semiconductor (MOS) ICs typically used dual clock signals (a two-phase clock) in the 1970s. These were generated externally for both the Motorola 6800 and Intel 8080 microprocessors. The next generation of microprocessors incorporated the clock generation on chip. The 8080 uses a 2 MHz clock but the processing throughput is similar to the 1 MHz 6800. The 8080 requires more clock cycles to execute a processor instruction. Due to their dynamic logic, the 6800 has a minimum clock rate of 100 kHz and the 8080 has a minimum clock rate of 500 kHz. Higher speed versions of both microprocessors were released by 1976. The 6501 requires an external 2-phase clock generator. 
The MOS Technology 6502 uses the same 2-phase logic internally, but also includes a 2-phase clock generator on-chip, so it only needs a single phase clock input, simplifying system design. === 4-phase clock === Some early integrated circuits use four-phase logic, requiring a four-phase clock input consisting of four separate, non-overlapping clock signals. This was particularly common among early microprocessors such as the National Semiconductor IMP-16, Texas Instruments TMS9900, and the Western Digital MCP-1600 chipset used in the DEC LSI-11. Four-phase clocks have only rarely been used in newer CMOS processors, such as the DEC WRL MultiTitan microprocessor and in Intrinsity's Fast14 technology. Most modern microprocessors and microcontrollers use a single-phase clock. === Clock multiplier === Many modern microcomputers use a "clock multiplier" which multiplies a lower frequency external clock to the appropriate clock rate of the microprocessor. This allows the CPU to operate at a much higher frequency than the rest of the computer, which affords performance gains in situations where the CPU does not need to wait on an external factor (like memory or input/output). === Dynamic frequency change === The vast majority of digital devices do not require a clock at a fixed, constant frequency. As long as the minimum and maximum clock periods are respected, the time between clock edges can vary widely from one edge to the next and back again. Such digital devices work just as well with a clock generator that dynamically changes its frequency, such as spread-spectrum clock generation, dynamic frequency scaling, etc. Devices that use static logic do not even have a maximum clock period (or in other words, a minimum clock frequency); such devices can be slowed and paused indefinitely, then resumed at full clock speed at any later time. == Other circuits == Some sensitive mixed-signal circuits, such as precision analog-to-digital converters, use sine waves rather than square waves as their clock signals, because square waves contain high-frequency harmonics that can interfere with the analog circuitry and cause noise. Such sine wave clocks are often differential signals, because this type of signal has twice the slew rate, and therefore half the timing uncertainty, of a single-ended signal with the same voltage range. Differential signals radiate less strongly than a single line. Alternatively, a single line shielded by power and ground lines can be used. In CMOS circuits, gate capacitances are charged and discharged continually. A capacitor does not dissipate energy, but energy is wasted in the driving transistors. In reversible computing, inductors can be used to store this energy and reduce the energy loss, but they tend to be quite large. Alternatively, using a sine wave clock, CMOS transmission gates and energy-saving techniques, the power requirements can be reduced. == Distribution == The most effective way to get the clock signal to every part of a chip that needs it, with the lowest skew, is a metal grid. In a large microprocessor, the power used to drive the clock signal can be over 30% of the total power used by the entire chip. The whole structure, with the gates at the ends and all amplifiers in between, has to be loaded and unloaded every cycle. To save energy, clock gating temporarily shuts off part of the tree.
The clock distribution network (or clock tree, when this network forms a tree such as an H-tree) distributes the clock signal(s) from a common point to all the elements that need it. Since this function is vital to the operation of a synchronous system, much attention has been given to the characteristics of these clock signals and the electrical networks used in their distribution. Clock signals are often regarded as simple control signals; however, these signals have some very special characteristics and attributes. Clock signals are typically loaded with the greatest fanout and operate at the highest speeds of any signal within the synchronous system. Since the data signals are provided with a temporal reference by the clock signals, the clock waveforms must be particularly clean and sharp. Furthermore, these clock signals are particularly affected by technology scaling (see Moore's law), in that long global interconnect lines become significantly more resistive as line dimensions are decreased. This increased line resistance is one of the primary reasons for the increasing significance of clock distribution on synchronous performance. Finally, the control of any differences and uncertainty in the arrival times of the clock signals can severely limit the maximum performance of the entire system and create race conditions in which an incorrect data signal may latch within a register. Most synchronous digital systems consist of cascaded banks of sequential registers with combinational logic between each set of registers. The functional requirements of the digital system are satisfied by the logic stages. Each logic stage introduces delay that affects timing performance, and the timing performance of the digital design can be evaluated relative to the timing requirements by a timing analysis. Often special considerations must be given in order to meet the timing requirements. For example, the global performance and local timing requirements may be satisfied by the careful insertion of pipeline registers into equally spaced time windows to satisfy critical worst-case timing constraints. A proper design of the clock distribution network helps ensure that critical timing requirements are satisfied and that no race conditions exist (see also clock skew). The delay components that make up a general synchronous system are composed of three individual subsystems: the memory storage elements, the logic elements, and the clocking circuitry and distribution network. Novel structures are currently under development to ameliorate these issues and provide effective solutions. Important areas of research include resonant clocking techniques ("resonant clock mesh"), on-chip optical interconnect, and local synchronization methodologies. 
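The interplay between register delays, combinational logic delay and clock skew described above can be summarised by the usual setup-time budget for a register-to-register path. The sketch below uses invented delay values (in nanoseconds) and a simplified sign convention for skew; it is a back-of-the-envelope illustration, not a substitute for static timing analysis.

```python
def min_clock_period(t_clk_to_q, t_logic, t_setup, t_skew):
    """Smallest clock period (ns) that still meets setup time on one path.

    t_skew here is the amount by which the capture clock arrives *earlier*
    than the launch clock; with that convention the constraint is
        T >= t_clk_to_q + t_logic + t_setup + t_skew
    """
    return t_clk_to_q + t_logic + t_setup + t_skew

paths = [
    # (name, clock-to-Q, combinational delay, setup, skew), all in ns; invented numbers
    ("alu_stage",  0.12, 1.80, 0.08, 0.05),
    ("bypass_mux", 0.12, 0.65, 0.08, 0.10),
]

# The slowest (critical) path sets the minimum period for the whole clock domain.
worst = max(min_clock_period(*p[1:]) for p in paths)
print(f"minimum period {worst:.2f} ns -> maximum frequency {1e3 / worst:.0f} MHz")
```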
== See also == Bit-synchronous operation Clock domain crossing – Crossing in digital electronic design Clock rate – Frequency at which a CPU chip or core is operating Design flow (EDA) – Suite of electronic design tools Electronic design automation – Software for designing electronic systems Four-phase logic – type of, and design methodology for dynamic logic Integrated circuit design – Engineering process for electronic hardware Interface Logic Model Jitter – Clock deviation from perfect periodicity Pulse-per-second signal – Class of electrical signals Timecode – Sequence of numeric codes generated at regular intervals by a timing synchronization system Self-clocking signal – Signal able to be decoded without an outside source of synchronization == References == == Further reading == Eby G. Friedman (Ed.), Clock Distribution Networks in VLSI Circuits and Systems, ISBN 0-7803-1058-6, IEEE Press. 1995. Eby G. Friedman, "Clock Distribution Networks in Synchronous Digital Integrated Circuits", Proceedings of the IEEE, Vol. 89, No. 5, pp. 665–692, May 2001. "ISPD 2010 High Performance Clock Network Synthesis Contest", International Symposium on Physical Design, Intel, IBM, 2010. D.-J. Lee, "High-performance and Low-power Clock Network Synthesis in the Presence of Variation", Ph.D. dissertation, University of Michigan, 2011. I. L. Markov, D.-J. Lee, "Algorithmic Tuning of Clock Trees and Derived Non-Tree Structures", in Proc. Int'l. Conf. Comp.-Aided Design (ICCAD), 2011. V. G. Oklobdzija, V. M. Stojanovic, D. M. Markovic, and N. M. Nedovic, Digital System Clocking: High-Performance and Low-Power Aspects, ISBN 0-471-27447-X, IEEE Press/Wiley-Interscience, 2003. Mitch Dale, "The power of RTL Clock-gating", Electronic Systems Design Engineering Incorporating Chip Design, January 20, 2007. Adapted from Eby Friedman's column in the ACM SIGDA e-newsletter by Igor Markov. Original text is available at https://web.archive.org/web/20100711135550/http://www.sigda.org/newsletter/2005/eNews_051201.html
Wikipedia/Clock_distribution_network
A logical data model or logical schema is a data model of a specific problem domain expressed independently of a particular database management product or storage technology (physical data model) but in terms of data structures such as relational tables and columns, object-oriented classes, or XML tags. This is as opposed to a conceptual data model, which describes the semantics of an organization without reference to technology. == Overview == Logical data models represent the abstract structure of a domain of information. They are often diagrammatic in nature and are most typically used in business processes that seek to capture things of importance to an organization and how they relate to one another. Once validated and approved, the logical data model can become the basis of a physical data model and form the design of a database. Logical data models should be based on the structures identified in a preceding conceptual data model, since this describes the semantics of the information context, which the logical model should also reflect. Even so, since the logical data model anticipates implementation on a specific computing system, the content of the logical data model is adjusted to achieve certain efficiencies. The term 'Logical Data Model' is sometimes used as a synonym of 'domain model' or as an alternative to the domain model. While the two concepts are closely related, and have overlapping goals, a domain model is more focused on capturing the concepts in the problem domain rather than the structure of the data associated with that domain. == History == When ANSI first laid out the idea of a logical schema in 1975, the choices were hierarchical and network. The relational model – where data is described in terms of tables and columns – had just been recognized as a data organization theory but no software existed to support that approach. Since that time, an object-oriented approach to data modelling – where data is described in terms of classes, attributes, and associations – has also been introduced. == Logical data model topics == === Reasons for building a logical data structure === Helps common understanding of business data elements and requirement Provides foundation for designing a database Facilitates avoidance of data redundancy and thus prevent data and business transaction inconsistency Facilitates data re-use and sharing Decreases development and maintenance time and cost Confirms a logical process model and helps impact analysis. === Conceptual, logical and physical data model === A logical data model is sometimes incorrectly called a physical data model, which is not what the ANSI people had in mind. The physical design of a database involves deep use of particular database management technology. For example, a table/column design could be implemented on a collection of computers, located in different parts of the world. That is the domain of the physical model. Conceptual, logical and physical data models are very different in their objectives, goals and content. Key differences noted below. == See also == DODAF Core architecture data model Database design Entity-relationship model Database schema Object-role modeling FCO-IM == References == == External links == Building a Logical Data Model By George Tillmann, DBMS, June 1995.
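As a rough illustration of what a logical data model captures (entity types, attributes, identifiers and relationships, with no storage or DBMS detail), the sketch below records a small hypothetical model as plain data structures. The entity and attribute names are invented; a physical design step would later map such a description onto tables, indexes and constraints for a particular database product.

```python
# Hypothetical logical data model captured as plain data: entity types with
# attributes and identifiers, plus named relationships with cardinalities,
# and no DBMS- or storage-specific detail. All names are invented.

LOGICAL_MODEL = {
    "entities": {
        "Customer": {
            "attributes": {"customer_id": "identifier", "name": "text", "email": "text"},
            "identifier": ["customer_id"],
        },
        "Order": {
            "attributes": {"order_id": "identifier", "order_date": "date", "total": "decimal"},
            "identifier": ["order_id"],
        },
    },
    "relationships": [
        # one Customer places zero or more Orders; each Order belongs to one Customer
        {"name": "places", "from": "Customer", "to": "Order", "cardinality": "1:N"},
    ],
}

def related_entities(model: dict, entity: str) -> list[str]:
    """Entity types that participate in a relationship with the given entity type."""
    out = []
    for rel in model["relationships"]:
        if rel["from"] == entity:
            out.append(rel["to"])
        elif rel["to"] == entity:
            out.append(rel["from"])
    return out

print(related_entities(LOGICAL_MODEL, "Customer"))  # ['Order']
```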
Wikipedia/Logical_data_model
The term process model is used in various contexts. For example, in business process modeling the enterprise process model is often referred to as the business process model. == Overview == Process models are processes of the same nature that are classified together into a model. Thus, a process model is a description of a process at the type level. Since the process model is at the type level, a process is an instantiation of it. The same process model is used repeatedly for the development of many applications and thus, has many instantiations. One possible use of a process model is to prescribe how things must/should/could be done in contrast to the process itself, which is really what happens. A process model is roughly an anticipation of what the process will look like. What the process shall be will be determined during actual system development. The goals of a process model are to be: Descriptive Track what actually happens during a process Take the point of view of an external observer who looks at the way a process has been performed and determines the improvements that must be made to make it perform more effectively or efficiently. Prescriptive Define the desired processes and how they should/could/might be performed. Establish rules, guidelines, and behavior patterns which, if followed, would lead to the desired process performance. They can range from strict enforcement to flexible guidance. Explanatory Provide explanations about the rationale of processes. Explore and evaluate the several possible courses of action based on rational arguments. Establish an explicit link between processes and the requirements that the model needs to fulfill. Pre-defines points at which data can be extracted for reporting purposes. == Purpose == From a theoretical point of view, meta-process modeling explains the key concepts needed to describe what happens in the development process, on what, when it happens, and why. From an operational point of view, meta-process modeling is aimed at providing guidance for method engineers and application developers. The activity of modeling a business process usually indicates a need to change processes or identify issues to be corrected. This transformation may or may not require IT involvement, although that is a common driver for the need to model a business process. Change management programmes are required to put the processes into practice. With advances in technology from larger platform vendors, the vision of business process models (BPM) becoming fully executable (and capable of round-trip engineering) is coming closer to reality every day. Supporting technologies include Unified Modeling Language (UML), model-driven architecture, and service-oriented architecture. Process modeling addresses the process aspects of an enterprise business architecture, leading to an all-encompassing enterprise architecture. The relationships of business processes in the context of the rest of the enterprise systems, data, organizational structure, strategies, etc. create greater capabilities in analyzing and planning a change. One real-world example is in corporate mergers and acquisitions: understanding the processes of both companies in detail allows management to identify redundancies, resulting in a smoother merger. Process modeling has always been a key aspect of business process reengineering, and continuous improvement approaches seen in Six Sigma.
== Classification of process models == === By coverage === There are five types of coverage where the term process model has been defined differently: Activity-oriented: related set of activities conducted for the specific purpose of product definition; a set of partially ordered steps intended to reach a goal. Product-oriented: series of activities that cause sensitive product transformations to reach the desired product. Decision-oriented: set of related decisions conducted for the specific purpose of product definition. Context-oriented: sequence of contexts causing successive product transformations under the influence of a decision taken in a context. Strategy-oriented: allow building models representing multi-approach processes and plan different possible ways to elaborate the product based on the notion of intention and strategy. === By alignment === Processes can be of different kinds. These definitions "correspond to the various ways in which a process can be modelled". Strategic processes investigate alternative ways of doing a thing and eventually produce a plan for doing it are often creative and require human co-operation; thus, alternative generation and selection from an alternative are very critical activities Tactical processes help in the achievement of a plan are more concerned with the tactics to be adopted for actual plan achievement than with the development of a plan of achievement Implementation processes are the lowest level processes are directly concerned with the details of the what and how of plan implementation === By granularity === Granularity refers to the level of detail of a process model and affects the kind of guidance, explanation and trace that can be provided. Coarse granularity restricts these to a rather limited level of detail whereas fine granularity provides more detailed capability. The nature of granularity needed is dependent on the situation at hand. Project manager, customer representatives, the general, top-level, or middle management require rather coarse-grained process description as they want to gain an overview of time, budget, and resource planning for their decisions. In contrast, software engineers, users, testers, analysts, or software system architects will prefer a fine-grained process model where the details of the model can provide them with instructions and important execution dependencies such as the dependencies between people. While notations for fine-grained models exist, most traditional process models are coarse-grained descriptions. Process models should, ideally, provide a wide range of granularity (e.g. Process Weaver). === By flexibility === It was found that while process models were prescriptive, in actual practice departures from the prescription can occur. Thus, frameworks for adopting methods evolved so that systems development methods match specific organizational situations and thereby improve their usefulness. The development of such frameworks is also called situational method engineering. Method construction approaches can be organized in a flexibility spectrum ranging from 'low' to 'high'. Lying at the 'low' end of this spectrum are rigid methods, whereas at the 'high' end there are modular method construction. Rigid methods are completely pre-defined and leave little scope for adapting them to the situation at hand. On the other hand, modular methods can be modified and augmented to fit a given situation. 
Selecting a rigid method allows each project to choose its method from a panel of rigid, pre-defined methods, whereas selecting a path within a method consists of choosing the appropriate path for the situation at hand. Finally, selecting and tuning a method allows each project to select methods from different approaches and tune them to the project's needs. == Quality of methods == Since the quality of process models is under discussion here, there is a need to consider the quality of modeling techniques as an important contributor to the quality of process models. In most existing frameworks created for understanding quality, the line between the quality of modeling techniques and the quality of the models that result from applying those techniques is not clearly drawn. The discussion here therefore covers both the quality of process modeling techniques and the quality of process models, in order to clearly differentiate the two. Various frameworks have been developed to help in understanding the quality of process modeling techniques; one example is the quality-based modeling evaluation framework, known as the Q-Me framework, which is argued to provide a set of well-defined quality properties and procedures that make an objective assessment of these properties possible. This framework also has the advantage of providing a uniform and formal description of the model elements within one or several model types using one modeling technique. In short, this allows assessment of both the product quality and the process quality of modeling techniques with regard to a set of properties that have been defined beforehand. Quality properties that relate to business process modeling techniques discussed in the literature are: Expressiveness: the degree to which a given modeling technique is able to denote the models of any number and kinds of application domains. Arbitrariness: the degree of freedom one has when modeling one and the same domain. Suitability: the degree to which a given modeling technique is specifically tailored for a specific kind of application domain. Comprehensibility: the ease with which the way of working and way of modeling are understood by participants. Coherence: the degree to which the individual sub-models of a way of modeling constitute a whole. Completeness: the degree to which all necessary concepts of the application domain are represented in the way of modeling. Efficiency: the degree to which the modeling process uses resources such as time and people. Effectiveness: the degree to which the modeling process achieves its goal. To assess the quality of the Q-Me framework, it has been used to illustrate the quality of the dynamic essentials modeling of the organisation (DEMO) business modeling technique. It is stated that this evaluation of the Q-Me framework against the DEMO modeling technique has revealed shortcomings of Q-Me. One in particular is that it does not include quantifiable metrics to express the quality of a business modeling technique, which makes it hard to compare the quality of different techniques in an overall rating. There is also a systematic approach to the quality measurement of modeling techniques, known as complexity metrics, suggested by Rossi et al. (1996). Metamodelling techniques are used as a basis for the computation of these complexity metrics. In comparison to the quality framework proposed by Krogstie, this quality measurement focuses more on the technical level than on the individual model level. The authors (Cardoso, Mendling, Neuman and Reijers, 2006) used complexity metrics to measure the simplicity and understandability of a design.
This is supported by later research done by Mendling et al., who argued that, without quality metrics to help question the quality properties of a model, a simple process can be modeled in a complex and unsuitable way. This in turn can lead to lower understandability, higher maintenance cost and perhaps inefficient execution of the process in question. The quality of the modeling technique is therefore important in creating models that are themselves of quality and that contribute to the correctness and usefulness of models. == Quality of models == The earliest process models reflected the dynamics of the process, with a practical process obtained by instantiation in terms of relevant concepts, available technologies, specific implementation environments, process constraints and so on. A large amount of research has been done on the quality of models, but less attention has been directed towards the quality of process models. Quality issues of process models cannot be evaluated exhaustively; however, there are four main guidelines and frameworks in practice for doing so. These are: top-down quality frameworks, bottom-up metrics related to quality aspects, empirical surveys related to modeling techniques, and pragmatic guidelines. Hommes, quoting Wang et al. (1994), notes that all the main characteristics of model quality can be grouped into two categories, namely the correctness and the usefulness of a model. Correctness ranges from the model's correspondence to the phenomenon that is modeled to its correspondence to the syntactical rules of the modeling, and it is independent of the purpose for which the model is used. Usefulness, in contrast, can be seen as the model being helpful for the specific purpose for which it was constructed in the first place. Hommes also makes a further distinction between internal correctness (empirical, syntactical and semantic quality) and external correctness (validity). A common starting point for defining the quality of a conceptual model is to look at the linguistic properties of the modeling language, of which syntax and semantics are most often applied. A broader approach is based on semiotics rather than linguistics, as was done by Krogstie using the top-down quality framework known as SEQUAL. It defines several quality aspects based on relationships between a model, knowledge externalisation, domain, a modeling language, and the activities of learning, taking action, and modeling. The framework does not, however, provide ways to determine various degrees of quality, but it has been used extensively for business process modeling in empirical tests that have been carried out. According to previous research done by Moody et al., using the conceptual model quality framework proposed by Lindland et al. (1994) to evaluate the quality of process models, three levels of quality were identified: Syntactic quality: assesses the extent to which the model conforms to the grammar rules of the modeling language being used. Semantic quality: whether the model accurately represents user requirements. Pragmatic quality: whether the model can be understood sufficiently by all relevant stakeholders in the modeling process; that is, the model should enable its interpreters to make use of it for fulfilling their needs. From the research it was noted that the quality framework was found to be both easy to use and useful in evaluating the quality of process models; however, it had limitations with regard to reliability and difficulty in identifying defects. These limitations led to refinement of the framework through subsequent research done by Krogstie.
This framework is the SEQUAL framework by Krogstie et al. (1995), refined further by Krogstie & Jørgensen (2002), which included three more quality aspects. Physical quality: whether the externalized model is persistent and available for the audience to make sense of it. Empirical quality: whether the model is modeled according to the established regulations regarding a given language. Social quality: this regards the agreement between the stakeholders in the modeling domain. The dimensions of the conceptual quality framework are as follows. Modeling Domain is the set of all statements that are relevant and correct for describing a problem domain. Language Extension is the set of all statements that are possible given the grammar and vocabulary of the modeling languages used. Model Externalization is the conceptual representation of the problem domain; it is defined as the set of statements about the problem domain that are actually made. Social Actor Interpretation and Technical Actor Interpretation are the sets of statements that actors (human model users and the tools that interact with the model, respectively) 'think' the conceptual representation of the problem domain contains. Finally, Participant Knowledge is the set of statements that human actors, who are involved in the modeling process, believe should be made to represent the problem domain. These quality dimensions were later divided into two groups that deal with physical and social aspects of the model. In later work, Krogstie et al. stated that while the extension of the SEQUAL framework has fixed some of the limitations of the initial framework, other limitations remain. In particular, the framework is too static in its view upon semantic quality, mainly considering models, not modeling activities, and comparing these models to a static domain rather than seeing the model as a facilitator for changing the domain. Also, the framework's definition of pragmatic quality is quite narrow, focusing on understanding, in line with the semiotics of Morris, while newer research in linguistics and semiotics has focused beyond mere understanding, on how the model is used and affects its interpreters. The need for a more dynamic view in the semiotic quality framework is particularly evident when considering process models, which themselves often prescribe or even enact actions in the problem domain; hence a change to the model may also change the problem domain directly. That work discusses the quality framework in relation to active process models and suggests a revised framework based on this. Further work by Krogstie et al. (2006) revised the SEQUAL framework to be more appropriate for active process models by redefining physical quality with a narrower interpretation than previous research. The other framework in use is the Guidelines of Modeling (GoM), based on general accounting principles, which includes six principles: correctness, clarity, relevance, comparability, economic efficiency and systematic design. Clarity deals with the comprehensibility and explicitness (system description) of model systems. Comprehensibility relates to the graphical arrangement of the information objects and, therefore, supports the understandability of a model. Relevance relates to the model and the situation being presented. Comparability involves the ability to compare models, that is, semantic comparison between two models. Economic efficiency requires that the cost of the design process be at least covered by the proposed use of cost cuttings and revenue increases.
Since the purpose of organizations in most cases is the maximization of profit, the principle defines the borderline for the modeling process. The last principle, systematic design, defines that there should be an accepted differentiation between diverse views within modeling. Correctness, relevance and economic efficiency are prerequisites for the quality of models and must be fulfilled, while the remaining guidelines are optional but necessary. The two frameworks, SEQUAL and GoM, share a limitation in that they cannot be used by people who are not competent in modeling. They provide major quality metrics but are not easily applicable by non-experts. The use of bottom-up metrics related to quality aspects of process models tries to bridge this gap and make such evaluation usable by non-experts in modeling, but it is mostly theoretical and no empirical tests have been carried out to support their use. Most experiments carried out relate to the relationship between metrics and quality aspects, and these works have been done individually by different authors: Canfora et al. study the connection mainly between count metrics (for example, the number of tasks or splits) and the maintainability of software process models; Cardoso validates the correlation between control flow complexity and perceived complexity; and Mendling et al. use metrics to predict control flow errors such as deadlocks in process models. The results reveal that an increase in the size of a model appears to reduce its quality and comprehensibility. Further work by Mendling et al. investigates the connection between metrics and understanding. While some metrics are confirmed regarding their effect, personal factors of the modeler, such as competence, are also revealed as important for understanding the models. The empirical surveys carried out so far still do not give clear guidelines or ways of evaluating the quality of process models, but a clear set of guidelines is necessary to guide modelers in this task. Pragmatic guidelines have been proposed by different practitioners, even though it is difficult to provide an exhaustive account of such guidelines from practice. Most of the guidelines are not easily put into practice, but the "label activities verb–noun" rule has been suggested by practitioners before and analyzed empirically. From that research, the value of process models is dependent not only on the choice of graphical constructs but also on their annotation with textual labels, which need to be analyzed. It was found that this rule results in models that are better in terms of understanding than alternative labelling styles. From the earlier research and ways to evaluate process model quality, it has been seen that the process model's size, structure, the expertise of the modeler and modularity affect its overall comprehensibility. Based on these findings, a set of guidelines was presented: the 7 Process Modeling Guidelines (7PMG). These guidelines use the verb-object style, as well as guidelines on the number of elements in a model, the application of structured modeling, and the decomposition of a process model.
The guidelines are as follows: G1 Minimize the number of elements in a model G2 Minimize the routing paths per element G3 Use one start and one end event G4 Model as structured as possible G5 Avoid OR routing elements G6 Use verb-object activity labels G7 Decompose a model with more than 50 elements 7PMG still has limitations in its use. The first is a validity problem: 7PMG does not relate to the content of a process model, but only to the way this content is organized and represented. It suggests ways of organizing different structures of the process model while the content is kept intact, but the pragmatic issue of what must be included in the model is still left out. The second limitation relates to the prioritization of the guidelines: the derived ranking has a small empirical basis, as it relies on the involvement of only 21 process modelers. This could be seen on the one hand as a need for a wider involvement of process modelers' experience, but it also raises the question of what alternative approaches may be available to arrive at a prioritization of the guidelines. == See also == Model selection Process (science) Process architecture Process calculus Process flow diagram Process ontology Process Specification Language == References == == External links == Modeling processes regarding workflow patterns "Abstraction Levels for Processes Presentation: Process Modeling Principles" (PDF). Archived from the original (PDF) on 2011-07-14. Retrieved 2008-06-12. American Productivity and Quality Center (APQC), a worldwide organization for process and performance improvement The Application of Petri Nets to Workflow Management, W.M.P. van der Aalst, 1998.
Wikipedia/Process_model
Generic data models are generalizations of conventional data models. They define standardised general relation types, together with the kinds of things that may be related by such a relation type. == Overview == The definition of generic data model is similar to the definition of a natural language. For example, a generic data model may define relation types such as a 'classification relation', being a binary relation between an individual thing and a kind of thing (a class) and a 'part-whole relation', being a binary relation between two things, one with the role of part, the other with the role of whole, regardless the kind of things that are related. Given an extensible list of classes, this allows the classification of any individual thing and to specify part-whole relations for any individual object. By standardisation of an extensible list of relation types, a generic data model enables the expression of an unlimited number of kinds of facts and will approach the capabilities of natural languages. Conventional data models, on the other hand, have a fixed and limited domain scope, because the instantiation (usage) of such a model only allows expressions of kinds of facts that are predefined in the model. == History == Generic data models are developed as an approach to solve some shortcomings of conventional data models. For example, different modelers usually produce different conventional data models of the same domain. This can lead to difficulty in bringing the models of different people together and is an obstacle for data exchange and data integration. Invariably, however, this difference is attributable to different levels of abstraction in the models and differences in the kinds of facts that can be instantiated (the semantic expression capabilities of the models). The modelers need to communicate and agree on certain elements which are to be rendered more concretely, in order to make the differences less significant. == Generic data model topics == === Generic patterns === There are generic patterns that can be used to advantage for modeling business. These include entity types for PARTY (with included PERSON and ORGANIZATION), PRODUCT TYPE, PRODUCT INSTANCE, ACTIVITY TYPE, ACTIVITY INSTANCE, CONTRACT, GEOGRAPHIC AREA, and SITE. A model which explicitly includes versions of these entity classes will be both reasonably robust and reasonably easy to understand. More abstract models are suitable for general purpose tools, and consist of variations on THING and THING TYPE, with all actual data being instances of these. Such abstract models are on one hand more difficult to manage, since they are not very expressive of real world things, but on the other hand they have a much wider applicability, especially if they are accompanied by a standardised dictionary. More concrete and specific data models will risk having to change as the scope or environment changes. === Approach to generic data modeling === One approach to generic data modeling has the following characteristics: A generic data model shall consist of generic entity types, such as 'individual thing', 'class', 'relationship', and possibly a number of their subtypes. Every individual thing is an instance of a generic entity called 'individual thing' or one of its subtypes. Every individual thing is explicitly classified by a kind of thing ('class') using an explicit classification relationship. 
The classes used for that classification are separately defined as standard instances of the entity 'class' or one of its subtypes, such as 'class of relationship'. These standard classes are usually called 'reference data'. This means that domain specific knowledge is captured in those standard instances and not as entity types. For example, concepts such as car, wheel, building, ship, and also temperature, length, etc. are standard instances. But also standard types of relationship, such as 'is composed of' and 'is involved in' can be defined as standard instances. This way of modeling allows the addition of standard classes and standard relation types as data (instances), which makes the data model flexible and prevents data model changes when the scope of the application changes. === Generic data model rules === A generic data model obeys the following rules: Candidate attributes are treated as representing relationships to other entity types. Entity types are represented, and are named after, the underlying nature of a thing, not the role it plays in a particular context. Entity types are chosen. Thus as a result of this principle, any occurrence of an entity type will belong to it from the time it is created to the time it is destroyed, not just whilst it is of interest. This is important when managing the underlying data, rather than the views on it used by applications. We call entity types that conform to this principle generic entity types. Entities have a local identifier within a database or exchange file. These should be artificial and managed to be unique. Relationships are not used as part of the local identifier. Activities, relationships and event-effects are represented by entity types (not attributes). Entity types are part of a sub-type/super-type hierarchy of entity types, in order to define a universal context for the model. As types of relationships are also entity types, they are also arranged in a sub-type/super-type hierarchy of types of relationship. Types of relationships are defined on a high (generic) level, being the highest level where the type of relationship is still valid. For example, a composition relationship (indicated by the phrase: 'is composed of') is defined as a relationship between an 'individual thing' and another 'individual thing' (and not just between e.g. an order and an order line). This generic level means that the type of relation may in principle be applied between any individual thing and any other individual thing. Additional constraints are defined in the 'reference data', being standard instances of relationships between kinds of things. == Examples == Examples of generic data models are ISO 10303-221, ISO 15926 and Gellish or Gellish English. Found in Data Model Patterns: Conventions of Thought by David C. Hay. 1995 Found in Enterprise Model Patterns: Describing the World by David C. Hay. 2011 == See also == Entity-attribute-value model Attribute-value system Common data model == References == 1. David C. Hay. 1995. Data Model Patterns: Conventions of Thought. (New York: Dorset House). 2. David C. Hay. 2011. Enterprise Model Patterns: Describing the World. (Bradley Beach, New Jersey: Technics Publications). 3. Matthew West 2011. Developing High Quality Data Models (Morgan Kaufmann) == External links == Data Flow Diagram Gellish English and the Gellish Dictionary and documents about Gellish
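The approach described above, in which domain knowledge lives in standard instances ('reference data') rather than in entity types, can be sketched as follows. Only three generic notions appear (individual things, classes and relationships); everything domain-specific, including the relation types themselves, is ordinary data. All identifiers and class names are invented for the example.

```python
# The only generic notions here are individual things, classes and
# relationships; everything domain-specific is reference data (instances).
# Identifiers and class names are invented for the example.

classes = {"car", "wheel", "is composed of", "classification"}   # reference data

individual_things = {"thing-001", "thing-002"}

relationships = [
    # (relation type, left-hand thing, right-hand thing or class)
    ("classification", "thing-001", "car"),
    ("classification", "thing-002", "wheel"),
    ("is composed of", "thing-001", "thing-002"),
]

def facts_about(thing_id: str) -> list[tuple]:
    """All relationship instances in which the given thing participates."""
    return [r for r in relationships if thing_id in r[1:]]

# Extending the scope needs only new reference data, not a schema change:
classes.add("bicycle")

print(facts_about("thing-001"))
# [('classification', 'thing-001', 'car'), ('is composed of', 'thing-001', 'thing-002')]
```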
Wikipedia/Generic_data_model
The enhanced entity–relationship (EER) model (or extended entity–relationship model) in computer science is a high-level or conceptual data model incorporating extensions to the original entity–relationship (ER) model, used in the design of databases. It was developed to reflect more precisely the properties and constraints that are found in more complex databases, such as in engineering design and manufacturing (CAD/CAM), telecommunications, complex software systems and geographic information systems (GIS). == Mechanics == The EER model includes all of the concepts introduced by the ER model. Additionally it includes the concepts of a subclass and superclass (Is-a), along with the concepts of specialization and generalization. Furthermore, it introduces the concept of a union type or category, which represents a collection of objects that is the union of objects of different entity types. The EER model also includes EER diagrams that are conceptual models that accurately represent the requirements of complex databases. === Subclass and superclass === Entity type Y is a subtype (subclass) of an entity type X if and only if every Y is necessarily an X. A subclass entity inherits all attributes and relationships of its superclass entity. This property is called the attribute and relationship inheritance. A subclass entity may have its specific attributes and relationships (together with all the attributes and relationships it inherits from the superclass). A common superclass example is a Vehicle superclass along with the subclasses of Car and Truck. There are a number of common attributes between a car and a truck, which would be part of the superclass, while the attributes specific to a car or a truck (such as max payload, truck type...) would make up two subclasses. == Tools == The MySQL Workbench offers creating, editing and exporting EER Models. Exporting to PNG and PDF allows easy sharing for presentations. Skipper allows users to create, import and export from object–relational mapping (ORM) schema definitions to editable EER models. SAP PowerDesigner is a complex tool for modelling and transforming different models. == See also == Object–relational database Slowly changing dimension Structured type == References == == Further reading == Textbooks discussing EER and implementation using purely relational databases: Elmasri, Ramez; Navathe, Shamkant B. (2011). Fundamentals of Database Systems (6th ed.). Pearson/Addison Wesley. Chapters 8 and 9. ISBN 978-0-136-08620-8. Coronel, Carlos; Morris, Steven; Rob, Peter (2011). Database Systems: Design, Implementation, and Management (9th ed.). Cengage Learning. Chapter 5. ISBN 978-0-538-46968-5. Connolly, Thomas M.; Begg, Carolyn E. (2005). Database Systems: A Practical Approach to Design, Implementation, and Management (4th ed.). Addison-Wesley. Chapters 12 and 16. ISBN 978-0-321-21025-8. Booklet discussing EER and implementation using object-oriented and object–relational databases: Dietrich, Suzanne W.; Urban, Susan D. (2011). Fundamentals of Object Databases: Object-Oriented and Object–Relational Design. Morgan & Claypool Publishers. ISBN 978-1-60845-476-1. Textbook discussing implementation in relational and object–relational databases: Ricardo, Catherine (2011). Databases Illuminated (2nd ed.). Jones & Bartlett Publishers. Chapter 8. ISBN 978-1-4496-0600-8. Shorter survey articles: Teorey, Toby J.; Yang, Dongqing; Fry, James P. (1986). "A logical design methodology for relational databases using the extended entity–relationship model". 
ACM Computing Surveys. 18 (2): 197–222. CiteSeerX 10.1.1.105.7211. doi:10.1145/7474.7475. Sikha Bagui (2006). "Extended Entity Relationship Modeling". In Laura C. Rivero; Jorge H. Doorn; Viviana E. Ferraggine (eds.). Encyclopedia of Database Technologies and Applications. Idea Group Inc (IGI). pp. 233–239. ISBN 978-1-59140-795-9. == External links == [1] - Slides for chapter 8 from Fundamentals of Database Systems by Elmasri and Navathe (Pearson, 2011) [2] - Lecture notes from the University of Toronto [3] - The ER Conference
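The Vehicle/Car/Truck superclass and subclass example mentioned above maps naturally onto class inheritance in an object-oriented language. The sketch below shows only the attribute-inheritance idea; the attribute names are invented, and a relational implementation of the same EER fragment would instead use one of the standard mapping strategies (single table, table per subclass, and so on).

```python
# Sketch of the Vehicle/Car/Truck superclass-subclass example as a Python
# class hierarchy: shared attributes live on the superclass and are inherited,
# while each subclass adds its own specific attributes. Names are illustrative.

from dataclasses import dataclass

@dataclass
class Vehicle:                 # superclass: attributes common to all vehicles
    vin: str
    manufacturer: str
    number_of_wheels: int

@dataclass
class Car(Vehicle):            # subclass: inherits every Vehicle attribute
    number_of_doors: int

@dataclass
class Truck(Vehicle):          # subclass with its own specific attributes
    max_payload_kg: float
    truck_type: str

t = Truck(vin="X123", manufacturer="ExampleCo", number_of_wheels=6,
          max_payload_kg=9000.0, truck_type="flatbed")
print(isinstance(t, Vehicle))  # True: every Truck is-a Vehicle
```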
Wikipedia/Enhanced_entity–relationship_model
An entity–relationship model (or ER model) describes interrelated things of interest in a specific domain of knowledge. A basic ER model is composed of entity types (which classify the things of interest) and specifies relationships that can exist between entities (instances of those entity types). In software engineering, an ER model is commonly formed to represent things a business needs to remember in order to perform business processes. Consequently, the ER model becomes an abstract data model, that defines a data or information structure that can be implemented in a database, typically a relational database. Entity–relationship modeling was developed for database and design by Peter Chen and published in a 1976 paper, with variants of the idea existing previously. Today it is commonly used for teaching students the basics of database structure. Some ER models show super and subtype entities connected by generalization-specialization relationships, and an ER model can also be used to specify domain-specific ontologies. == Introduction == An ER model usually results from systematic analysis to define and describe the data created and needed by processes in a business area. Typically, it represents records of entities and events monitored and directed by business processes, rather than the processes themselves. It is usually drawn in a graphical form as boxes (entities) that are connected by lines (relationships) which express the associations and dependencies between entities. It can also be expressed in a verbal form, for example: one building may be divided into zero or more apartments, but one apartment can only be located in one building. Entities may be defined not only by relationships, but also by additional properties (attributes), which include identifiers called "primary keys". Diagrams created to represent attributes as well as entities and relationships may be called entity-attribute-relationship diagrams, rather than entity–relationship models. An ER model is typically implemented as a database. In a simple relational database implementation, each row of a table represents one instance of an entity type, and each field in a table represents an attribute type. In a relational database a relationship between entities is implemented by storing the primary key of one entity as a pointer or "foreign key" in the table of another entity. There is a tradition for ER/data models to be built at two or three levels of abstraction. The conceptual-logical-physical hierarchy below is used in other kinds of specification, and is different from the three schema approach to software engineering. Conceptual data model This is the highest level ER model in that it contains the least granular detail but establishes the overall scope of what is to be included within the model set. The conceptual ER model normally defines master reference data entities that are commonly used by the organization. Developing an enterprise-wide conceptual ER model is useful to support documenting the data architecture for an organization. A conceptual ER model may be used as the foundation for one or more logical data models (see below). The purpose of the conceptual ER model is then to establish structural metadata commonality for the master data entities between the set of logical ER models. The conceptual data model may be used to form commonality relationships between ER models as a basis for data model integration. 
Logical data model A logical ER model does not require a conceptual ER model, especially if the scope of the logical ER model includes only the development of a distinct information system. The logical ER model contains more detail than the conceptual ER model. In addition to master data entities, operational and transactional data entities are now defined. The details of each data entity are developed and the relationships between these data entities are established. The logical ER model is however developed independently of the specific database management system into which it can be implemented. Physical data model One or more physical ER models may be developed from each logical ER model. The physical ER model is normally developed to be instantiated as a database. Therefore, each physical ER model must contain enough detail to produce a database and each physical ER model is technology dependent since each database management system is somewhat different. The physical model is normally instantiated in the structural metadata of a database management system as relational database objects such as database tables, database indexes such as unique key indexes, and database constraints such as a foreign key constraint or a commonality constraint. The ER model is also normally used to design modifications to the relational database objects and to maintain the structural metadata of the database. The first stage of information system design uses these models during the requirements analysis to describe information needs or the type of information that is to be stored in a database. The data modeling technique can be used to describe any ontology (i.e. an overview and classifications of used terms and their relationships) for a certain area of interest. In the case of the design of an information system that is based on a database, the conceptual data model is, at a later stage (usually called logical design), mapped to a logical data model, such as the relational model. This in turn is mapped to a physical model during physical design. Sometimes, both of these phases are referred to as "physical design." == Components == An entity may be defined as a thing that is capable of an independent existence that can be uniquely identified, and is capable of storing data. An entity is an abstraction from the complexities of a domain. When we speak of an entity, we normally speak of some aspect of the real world that can be distinguished from other aspects of the real world. An entity is a thing that exists either physically or logically. An entity may be a physical object such as a house or a car (they exist physically), an event such as a house sale or a car service, or a concept such as a customer transaction or order (they exist logically—as a concept). Although the term entity is the one most commonly used, following Chen, entities and entity-types should be distinguished. An entity-type is a category. An entity, strictly speaking, is an instance of a given entity-type. There are usually many instances of an entity-type. Because the term entity-type is somewhat cumbersome, most people tend to use the term entity as a synonym. Entities can be thought of as nouns. Examples include a computer, an employee, a song, or a mathematical theorem. A relationship captures how entities are related to one another. Relationships can be thought of as verbs, linking two or more nouns. 
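To make the physical level concrete, below is a minimal sketch of how the verbal example from the introduction ("one building may be divided into zero or more apartments, but one apartment can only be located in one building") might be instantiated as relational database objects. It uses Python's built-in sqlite3 module; the table names, column names, and sample rows are illustrative assumptions rather than part of any published model.

```python
import sqlite3

# In-memory database standing in for the target DBMS.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # have SQLite enforce foreign keys

conn.executescript("""
    -- Entity type "building": each row is one entity instance.
    CREATE TABLE building (
        building_id INTEGER PRIMARY KEY,
        address     TEXT NOT NULL
    );

    -- Entity type "apartment": the one-to-many relationship
    -- "building is divided into apartments" is implemented by storing
    -- the building's primary key here as a foreign key.
    CREATE TABLE apartment (
        apartment_id INTEGER PRIMARY KEY,
        unit_number  TEXT NOT NULL,
        building_id  INTEGER NOT NULL REFERENCES building(building_id)
    );

    -- Structural metadata beyond tables: an index on the foreign key.
    CREATE INDEX idx_apartment_building ON apartment(building_id);
""")

conn.execute("INSERT INTO building VALUES (1, '12 High Street')")
conn.execute("INSERT INTO apartment VALUES (1, '1A', 1)")
rows = conn.execute(
    "SELECT b.address, a.unit_number FROM apartment a "
    "JOIN building b ON b.building_id = a.building_id").fetchall()
print(rows)  # [('12 High Street', '1A')]
```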
Examples include an owns relationship between a company and a computer, a supervises relationship between an employee and a department, a performs relationship between an artist and a song, and a proves relationship between a mathematician and a conjecture. The model's linguistic aspect described above is used in the declarative database query language ERROL, which mimics natural language constructs. ERROL's semantics and implementation are based on reshaped relational algebra (RRA), a relational algebra that is adapted to the entity–relationship model and captures its linguistic aspect. Entities and relationships can both have attributes. For example, an employee entity might have a Social Security Number (SSN) attribute, while a proved relationship may have a date attribute. All entities except weak entities must have a minimal set of uniquely identifying attributes that may be used as a unique/primary key. Entity-relationship diagrams (ERDs) do not show single entities or single instances of relations. Rather, they show entity sets (all entities of the same entity type) and relationship sets (all relationships of the same relationship type). For example, a particular song is an entity, the collection of all songs in a database is an entity set, the eaten relationship between a child and his lunch is a single relationship, and the set of all such child-lunch relationships in a database is a relationship set. In other words, a relationship set corresponds to a relation in mathematics, while a relationship corresponds to a member of the relation. Certain cardinality constraints on relationship sets may be indicated as well. Physical views show how data is actually stored. === Relationships, roles, and cardinalities === Chen's original paper gives an example of a relationship and its roles. He describes a relationship "marriage" and its two roles, "husband" and "wife". A person plays the role of husband in a marriage (relationship) and another person plays the role of wife in the (same) marriage. These words are nouns. Chen's terminology has also been applied to earlier ideas. The lines, arrows, and crow's feet of some diagrams owe more to the earlier Bachman diagrams than to Chen's relationship diagrams. Another common extension to Chen's model is to "name" relationships and roles as verbs or phrases. === Role naming === It has also become prevalent to name roles with phrases such as is the owner of and is owned by. Correct nouns in this case are owner and possession. Thus, person plays the role of owner and car plays the role of possession rather than person plays the role of, is the owner of, etc. Using nouns has direct benefit when generating physical implementations from semantic models. When a person has two relationships with car, it is possible to generate names such as owner_person and driver_person, which are immediately meaningful. === Cardinalities === Modifications to the original specification can be beneficial. Chen described look-across cardinalities. As an aside, the Barker–Ellis notation, used in Oracle Designer, uses same-side for minimum cardinality (analogous to optionality) and role, but look-across for maximum cardinality (the crow's foot). Research by Merise, Elmasri & Navathe and others has shown there is a preference for same-side for roles and both minimum and maximum cardinalities, and researchers (Feinerer, Dullea et al.) have shown that this is more coherent when applied to n-ary relationships of order greater than 2. Dullea et al. 
states: "A 'look across' notation such as used in the UML does not effectively represent the semantics of participation constraints imposed on relationships where the degree is higher than binary." Feinerer says: "Problems arise if we operate under the look-across semantics as used for UML associations. Hartmann investigates this situation and shows how and why different transformations fail." (Although the "reduction" mentioned is spurious as the two diagrams 3.4 and 3.5 are in fact the same) and also "As we will see on the next few pages, the look-across interpretation introduces several difficulties that prevent the extension of simple mechanisms from binary to n-ary associations." Chen's notation for entity–relationship modeling uses rectangles to represent entity sets, and diamonds to represent relationships appropriate for first-class objects: they can have attributes and relationships of their own. If an entity set participates in a relationship set, they are connected with a line. Attributes are drawn as ovals and connected with a line to exactly one entity or relationship set. Cardinality constraints are expressed as follows: a double line indicates a participation constraint, totality, or surjectivity: all entities in the entity set must participate in at least one relationship in the relationship set; an arrow from an entity set to a relationship set indicates a key constraint, i.e. injectivity: each entity of the entity set can participate in at most one relationship in the relationship set; a thick line indicates both, i.e. bijectivity: each entity in the entity set is involved in exactly one relationship. an underlined name of an attribute indicates that it is a key: two different entities or relationships with this attribute always have different values for this attribute. Attributes are often omitted as they can clutter up a diagram. Other diagram techniques often list entity attributes within the rectangles drawn for entity sets. == Related diagramming convention techniques == Bachman notation Barker's notation EXPRESS IDEF1X § Crow's foot notation (also Martin notation) (min, max)-notation of Jean-Raymond Abrial in 1974 UML class diagrams Merise Object-role modeling === Crow's foot notation === Crow's foot notation, the beginning of which dates back to an article by Gordon Everest (1976), is used in Barker's notation, Structured Systems Analysis and Design Method (SSADM), and information technology engineering. Crow's foot diagrams represent entities as boxes, and relationships as lines between the boxes. Different shapes at the ends of these lines represent the relative cardinality of the relationship. Crow's foot notation was in use in ICL in 1978, and was used in the consultancy practice CACI. Many of the consultants at CACI (including Richard Barker) came from ICL and subsequently moved to Oracle UK, where they developed the early versions of Oracle's CASE tools, introducing the notation to a wider audience. With this notation, relationships cannot have attributes. Where necessary, relationships are promoted to entities in their own right: for example, if it is necessary to capture where and when an artist performed a song, a new entity "performance" is introduced (with attributes reflecting the time and place), and the relationship of an artist to a song becomes an indirect relationship via the performance (artist-performs-performance, performance-features-song). 
Three symbols are used to represent cardinality: the ring represents "zero", the dash represents "one", and the crow's foot represents "many" or "infinite". These symbols are used in pairs to represent the four types of cardinality that an entity may have in a relationship. The inner component of the notation represents the minimum, and the outer component represents the maximum: ring and dash → minimum zero, maximum one (optional); dash and dash → minimum one, maximum one (mandatory); ring and crow's foot → minimum zero, maximum many (optional); dash and crow's foot → minimum one, maximum many (mandatory). == Model usability issues == Users of a modeled database can encounter two well-known issues where the returned results differ from what the query author assumed. These are known as the fan trap and the chasm trap, and they can lead to inaccurate query results if not properly handled during the design of the Entity-Relationship Model (ER Model). Both the fan trap and chasm trap underscore the importance of ensuring that ER models are not only technically correct but also fully and accurately reflect the real-world relationships they are designed to represent. Identifying and resolving these traps early in the design process helps avoid significant issues later, especially in complex databases intended for business intelligence or decision support. === Fan trap === The first issue is the fan trap. It occurs when a (master) table links to multiple tables in a one-to-many relationship. The issue derives its name from the visual appearance of the model when it is drawn in an entity–relationship diagram, as the linked tables 'fan out' from the master table. This type of model resembles a star schema, which is a common design in data warehouses. When attempting to calculate sums over aggregates using standard SQL queries based on the master table, the results can be unexpected and often incorrect due to the way relationships are structured. The miscalculation happens because SQL treats each relationship individually, which may result in double-counting or other inaccuracies. This issue is particularly common in decision support systems. To mitigate this, either the data model or the SQL query itself must be adjusted. Some database querying software designed for decision support includes built-in methods to detect and address fan traps. === Chasm trap === The second issue is the chasm trap. A chasm trap occurs when a model suggests the existence of a relationship between entity types, but the pathway between these entities is incomplete or missing in certain instances. For example, imagine a database where a Building has one or more Rooms, and these Rooms hold zero or more Computers. One might expect to query the model to list all Computers in a Building. However, if a Computer is temporarily not assigned to a Room (perhaps under repair or stored elsewhere), it won't be included in the query results. The query would only return Computers currently assigned to Rooms, not all Computers in the Building. This reflects a flaw in the model, as it fails to account for Computers that are in the Building but not in a Room. To resolve this, an additional relationship directly linking the Building and Computers would be required (a runnable sketch of this trap and its resolution follows below). == In semantic modeling == === Semantic model === A semantic model is a model of concepts and is sometimes called a "platform independent model". It is an intensional model. 
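The following is a minimal, self-contained demonstration of the chasm trap described above, using Python's sqlite3 module. The schema and data are hypothetical; the point is only that the Building→Room→Computer pathway silently drops a computer that is not currently assigned to a room, and that a direct Building–Computer relationship resolves it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE building (building_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE room     (room_id INTEGER PRIMARY KEY,
                           building_id INTEGER REFERENCES building(building_id));
    CREATE TABLE computer (computer_id INTEGER PRIMARY KEY,
                           room_id INTEGER REFERENCES room(room_id));  -- NULL = unassigned

    INSERT INTO building VALUES (1, 'HQ');
    INSERT INTO room     VALUES (10, 1);
    INSERT INTO computer VALUES (100, 10);    -- assigned to a room
    INSERT INTO computer VALUES (101, NULL);  -- in the building, but under repair
""")

# Querying "all computers in HQ" through the Room pathway drops computer 101.
via_rooms = conn.execute("""
    SELECT c.computer_id
    FROM building b
    JOIN room r     ON r.building_id = b.building_id
    JOIN computer c ON c.room_id     = r.room_id
    WHERE b.name = 'HQ'
""").fetchall()
print(via_rooms)  # [(100,)]  -- the chasm trap: computer 101 is missing

# Resolution: add a direct Building-Computer relationship and query it instead.
conn.executescript("""
    ALTER TABLE computer ADD COLUMN building_id INTEGER
        REFERENCES building(building_id);
    UPDATE computer SET building_id = 1;
""")
direct = conn.execute("""
    SELECT c.computer_id
    FROM computer c
    JOIN building b ON b.building_id = c.building_id
    WHERE b.name = 'HQ'
""").fetchall()
print(direct)  # [(100,), (101,)]
```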
At least since Carnap, it is well known that: "...the full meaning of a concept is constituted by two aspects, its intension and its extension. The first part comprises the embedding of a concept in the world of concepts as a whole, i.e. the totality of all relations to other concepts. The second part establishes the referential meaning of the concept, i.e. its counterpart in the real or in a possible world". === Extension model === An extensional model is one that maps to the elements of a particular methodology or technology, and is thus a "platform specific model". The UML specification explicitly states that associations in class models are extensional and this is in fact self-evident by considering the extensive array of additional "adornments" provided by the specification over and above those provided by any of the prior candidate "semantic modelling languages" ("UML as a Data Modeling Notation, Part 2"). === Entity–relationship origins === Peter Chen, the father of ER modeling, said in his seminal paper: "The entity-relationship model adopts the more natural view that the real world consists of entities and relationships. It incorporates some of the important semantic information about the real world." In his original 1976 article, Chen explicitly contrasts entity–relationship diagrams with record modelling techniques: "The data structure diagram is a representation of the organization of records and is not an exact representation of entities and relationships." Several other authors also support Chen's program. ==== Philosophical alignment ==== Chen is in accord with philosophical traditions from the time of the Ancient Greek philosophers: Plato and Aristotle. Plato himself associates knowledge with the apprehension of unchanging Forms (namely, archetypes or abstract representations of the many types of things, and properties) and their relationships to one another. == Limitations == An ER model is primarily conceptual, an ontology that expresses predicates in a domain of knowledge. ER models are readily used to represent relational database structures (after Codd and Date) but not so often to represent other kinds of data structure (such as data warehouses and document stores). Some ER model notations include symbols to show super-sub-type relationships and mutual exclusion between relationships; some do not. An ER model does not show an entity's life history (how its attributes and/or relationships change over time in response to events). For many systems, such state changes are nontrivial and important enough to warrant explicit specification. Some have extended ER modeling with constructs to represent state changes, an approach supported by the original author; an example is Anchor Modeling. Others model state changes separately, using state transition diagrams or some other process modeling technique. Many other kinds of diagram are drawn to model other aspects of systems, including the 14 diagram types offered by UML. Today, even where ER modeling could be useful, it is uncommon because many use tools that support similar kinds of model, notably class diagrams for OO programming and data models for relational database management systems. Some of these tools can generate code from diagrams and reverse-engineer diagrams from code. In a survey, Brodie and Liu could not find a single instance of entity–relationship modeling inside a sample of ten Fortune 100 companies. 
Badia and Lemire blame this lack of use on the lack of guidance but also on the lack of benefits, such as lack of support for data integration. The enhanced entity–relationship model (EER modeling) introduces several concepts not in ER modeling but closely related to object-oriented design, such as is-a relationships. For modelling temporal databases, numerous ER extensions have been considered. Similarly, the ER model was found unsuitable for multidimensional databases (used in OLAP applications); no dominant conceptual model has emerged in this field yet, although they generally revolve around the concept of OLAP cube (also known as data cube within the field). == See also == Associative entity – Term in relational and entity–relationship theory Concept map – Diagram showing relationships among concepts Database design – Designing how data is held in a database Data structure diagram – Visual representation of a certain kind of data model that contains entities, their relationships, and the constraints that are placed on them Enhanced entity–relationship model – Data model Enterprise architecture framework – Frame in which the architecture of a company is defined Entity Data Model – Open source object-relational mapping framework Value range structure diagrams Comparison of data modeling tools – Comparison of notable data modeling tools Knowledge graph – Type of knowledge base Ontology – Specification of a conceptualization Object-role modeling – Programming technique Three schema approach – Approach to building information systems Structured entity relationship model Schema-agnostic databases – Type of databank == References == == Further reading == Chen, Peter (2002). "Entity-Relationship Modeling: Historical Events, Future Trends, and Lessons Learned" (PDF). Software pioneers. Springer-Verlag. pp. 296–310. ISBN 978-3-540-43081-0. Barker, Richard (1990). CASE Method: Entity Relationship Modelling. Addison-Wesley. ISBN 978-0201416961. Barker, Richard (1990). CASE Method: Tasks and Deliverables. Addison-Wesley. ISBN 978-0201416978. Mannila, Heikki; Räihä, Kari-Jouko (1992). The Design of Relational Databases. Addison-Wesley. ISBN 978-0201565232. Thalheim, Bernhard (2000). Entity-Relationship Modeling: Foundations of Database Technology. Springer. ISBN 978-3-540-65470-4. Bagui, Sikha; Earp, Richard Walsh (2022). Database Design Using Entity-Relationship Diagrams. Auerbach Publications. ISBN 978-1-032-01718-1. == External links == "The Entity Relationship Model: Toward a Unified View of Data" Entity Relationship Modelling Logical Data Structures (LDSs) - Getting started by Tony Drewry. Crow's Foot Notation Kinds of Data Models -- and How to Name Them presentation by David Hay
Wikipedia/Entity_relationship_model
Object–role modeling (ORM) is used to model the semantics of a universe of discourse. ORM is often used for data modeling and software engineering. An object–role model uses graphical symbols that are based on first order predicate logic and set theory to enable the modeler to create an unambiguous definition of an arbitrary universe of discourse. Attribute free, the predicates of an ORM Model lend themselves to the analysis and design of graph database models in as much as ORM was originally conceived to benefit relational database design. The term "object–role model" was coined in the 1970s and ORM based tools have been used for more than 30 years – principally for data modeling. More recently ORM has been used to model business rules, XML-Schemas, data warehouses, requirements engineering and web forms. == History == The roots of ORM can be traced to research into semantic modeling for information systems in Europe during the 1970s. There were many pioneers and this short summary does not by any means mention them all. An early contribution came in 1973 when Michael Senko wrote about "data structuring" in the IBM Systems Journal. In 1974 Jean-Raymond Abrial contributed an article about "Data Semantics". In June 1975, Eckhard Falkenberg's doctoral thesis was published and in 1976 one of Falkenberg's papers mentions the term "object–role model". G.M. Nijssen made fundamental contributions by introducing the "circle-box" notation for object types and roles, and by formulating the first version of the conceptual schema design procedure. Robert Meersman extended the approach by adding subtyping, and introducing the first truly conceptual query language. Object role modeling also evolved from the Natural language Information Analysis Method, a methodology that was initially developed by the academic researcher, G.M. Nijssen in the Netherlands (Europe) in the mid-1970s and his research team at the Control Data Corporation Research Laboratory in Belgium, and later at the University of Queensland, Australia in the 1980s. The acronym NIAM originally stood for "Nijssen's Information Analysis Methodology", and later generalised to "Natural language Information Analysis Methodology" and Binary Relationship Modeling since G. M. Nijssen was only one of many people involved in the development of the method. In 1989, Terry Halpin completed his PhD thesis on ORM, providing the first full formalization of the approach and incorporating several extensions. Also in 1989, Terry Halpin and G.M. Nijssen co-authored the book "Conceptual Schema and Relational Database Design" and several joint papers, providing the first formalization of object–role modeling. A graphical NIAM design tool which included the ability to generate database-creation scripts for Oracle, DB2 and DBQ was developed in the early 1990s in Paris. It was originally named Genesys and was marketed successfully in France and later Canada. It could also handle ER diagram design. It was ported to SCO Unix, SunOs, DEC 3151's and Windows 3.0 platforms, and was later migrated to succeeding Microsoft operating systems, utilising XVT for cross operating system graphical portability. The tool was renamed OORIANE and is currently being used for large data warehouse and SOA projects. Also evolving from NIAM is "Fully Communication Oriented Information Modeling" FCO-IM (1992). It distinguishes itself from traditional ORM in that it takes a strict communication-oriented perspective. 
Rather than attempting to model the domain and its essential concepts, it models the communication in this domain (universe of discourse). Another important difference is that it does this at the instance level, deriving the type level and object/fact level during analysis. Another recent development is the use of ORM in combination with standardised relation types with associated roles and a standard machine-readable dictionary and taxonomy of concepts as are provided in the Gellish English dictionary. Standardisation of relation types (fact types), roles and concepts enables increased possibilities for model integration and model reuse. == Concepts == === Facts === Object–role models are based on elementary facts, and expressed in diagrams that can be verbalised into natural language. A fact is a proposition such as "John Smith was hired on 5 January 1995" or "Mary Jones was hired on 3 March 2010". With ORM, propositions such as these are abstracted into "fact types", for example "Person was hired on Date", and the individual propositions are regarded as sample data. The difference between a "fact" and an "elementary fact" is that an elementary fact cannot be simplified without loss of meaning. This "fact-based" approach facilitates modeling, transforming, and querying information from any domain. === Attribute-free === ORM is attribute-free: unlike models in the entity–relationship (ER) and Unified Modeling Language (UML) methods, ORM treats all elementary facts as relationships and so treats decisions for grouping facts into structures (e.g. attribute-based entity types, classes, relation schemes, XML schemas) as implementation concerns irrelevant to semantics. By avoiding attributes, ORM improves semantic stability and enables verbalization into natural language. === Fact-based modeling === Fact-based modeling includes procedures for mapping facts to attribute-based structures, such as those of ER or UML. Fact-based textual representations are based on formal subsets of native languages. ORM proponents argue that ORM models are easier for people without a technical education to understand. For example, proponents argue that object–role models are easier to understand than declarative languages such as Object Constraint Language (OCL) and other graphical languages such as UML class models. Fact-based graphical notations are more expressive than those of ER and UML. An object–role model can be automatically mapped to relational and deductive databases (such as datalog). === ORM 2 graphical notation === ORM2 is the latest generation of object–role modeling. The main objectives for the ORM 2 graphical notation are: more compact display of ORM models without compromising clarity; improved internationalization (e.g. avoiding English-language symbols); simplified drawing rules to facilitate creation of a graphical editor; extended use of views for selectively displaying/suppressing detail; and support for new features (e.g. role path delineation, closure aspects, modalities). === Design procedure === System development typically involves several stages such as: feasibility study; requirements analysis; conceptual design of data and operations; logical design; external design; prototyping; internal design and implementation; testing and validation; and maintenance. 
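To make the earlier notion of fact types concrete, here is a minimal sketch in Python (the class name and sample data are illustrative assumptions, not ORM tool syntax). It abstracts the two propositions above into a "Person was hired on Date" fact type, keeps the propositions as its sample population, and verbalises them back into natural language.

```python
from dataclasses import dataclass
from datetime import date

# Fact type: "Person was hired on Date".
@dataclass(frozen=True)
class WasHiredOn:
    person: str  # role played by an object of type Person
    on: date     # role played by an object of type Date

# Sample population: the individual elementary facts.
facts = {
    WasHiredOn("John Smith", date(1995, 1, 5)),
    WasHiredOn("Mary Jones", date(2010, 3, 3)),
}

# Verbalisation: one natural-language sentence per elementary fact.
for f in sorted(facts, key=lambda f: f.on):
    print(f"{f.person} was hired on {f.on.day} {f.on:%B %Y}.")
```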
The seven steps of the conceptual schema design procedure are: (1) transform familiar information examples into elementary facts, and apply quality checks; (2) draw the fact types, and apply a population check; (3) check for entity types that should be combined, and note any arithmetic derivations; (4) add uniqueness constraints, and check the arity of fact types; (5) add mandatory role constraints, and check for logical derivations; (6) add value, set comparison and subtyping constraints; (7) add other constraints and perform final checks. ORM's conceptual schema design procedure (CSDP) focuses on the analysis and design of data. == See also == Concept map Conceptual schema Enhanced entity–relationship model (EER) Information flow diagram Ontology double articulation Ontology engineering Relational algebra Three-schema approach == References == == Further reading == Halpin, Terry (1989), Conceptual Schema and Relational Database Design, Sydney: Prentice Hall, ISBN 978-0-13-167263-5 Rossi, Matti; Siau, Keng (April 2001), Information Modeling in the New Millennium, IGI Global, ISBN 978-1-878289-77-3 Halpin, Terry; Evans, Ken; Hallock, Pat; Maclean, Bill (September 2003), Database Modeling with Microsoft Visio for Enterprise Architects, Morgan Kaufmann, ISBN 978-1-55860-919-8 Halpin, Terry; Morgan, Tony (March 2008), Information Modeling and Relational Databases: From Conceptual Analysis to Logical Design (2nd ed.), Morgan Kaufmann, ISBN 978-0-12-373568-3 == External links ==
Wikipedia/Object–role_modeling
Core architecture data model (CADM) in enterprise architecture is a logical data model of information used to describe and build architectures. The CADM is essentially a common database schema, defined within the US Department of Defense Architecture Framework DoDAF. It was initially published in 1997 as a logical data model for architecture data. == Overview == Core architecture data model (CADM) is designed to capture DoDAF architecture information in a standardized structure. CADM was developed to support the data requirements of the DoDAF. The CADM defines the entities and relationships for DoDAF architecture data elements that enable integration within and across architecture descriptions. In this manner, the CADM supports the exchange of architecture information among mission areas, components, and federal and coalition partners, thus facilitating the data interoperability of architectures. CADM is a critical aspect of being able to integrate architectures in conformance with DoDAF. This includes the use of common data element definitions, semantics, and data structure for all architecture description entities or objects. The use of the underlying CADM faithfully relates common objects across multiple views. Adherence with the framework, which includes conformance with the currently approved version of CADM, provides both a common approach for developing architectures and a basic foundation for relating architectures. Conformance with the CADM ensures the use of common architecture data elements (or types). == History == The CADM was initially published in 1997 as a logical data model for architecture data. It was revised in 1998 to meet all the requirements of the C4ISR Architecture Framework Version 2.0.1 As a logical data model, the initial CADM provided a conceptual view of how architecture information is organized. It identified and defined entities, attributes, and relations. The CADM has evolved since 1998, so that it now has a physical view providing the data types, abbreviated physical names, and domain values that are needed for a database implementation. Because the CADM is also a physical data model, it constitutes a database design and can be used to automatically generate databases. The CADM v1.01 was released with the DoD Architecture Framework v1.0 in August 2003. This DoDAF version restructured the C4ISR Framework v2.0 to offer guidance, product descriptions, and supplementary information in two volumes and a desk book. It broadened the applicability of architecture tenets and practices to all mission areas rather than just the C4ISR community. This document addressed usage, integrated architectures, DoD and Federal policies, value of architecture, architecture measures, DoD decision support processes, development techniques, analytical techniques, and the CADM v1.01, and moved towards a repository-based approach by placing emphasis on architecture data elements that comprise architecture products. The CADM v1.5 was pre-released with the DoD Architecture Framework, v1.5 in April 2007. The DoDAF v1.5 was an evolution of the DoDAF v1.0 and reflects and leverages the experience that the DoD components have gained in developing and using architecture descriptions. 
This transitional version provided additional guidance on how to reflect net-centric concepts within architecture descriptions, includes information on architecture data management and federating architectures through the department, and incorporates the pre-release CADM v1.5, a simplified model of previous CADM versions that includes net-centric elements. Pre-release CADM v1.5 is also backward compatible with previous CADM versions. Data sets built in accordance with the vocabulary of CADM v1.02/1.03 can be expressed faithfully and completely using the constructs of CADM v1.5. Note: For DoDAF V2.0, The DoDAF Meta-model (DM2) is working to replace the core architecture data model (CADM) which supported previous versions of the DoDAF. DM2 is a data construct that facilitates reader understanding of the use of data within an architecture document. CADM can continue to be used in support of architectures created in previous versions of DoDAF. == Topics == === Building blocks === The major elements of a core architecture data model are described as follows: Core : The essential elements of architecture information that need to be developed, validated, and maintained and that should be sharable across architecture concerns to achieve architecture goals (e.g., interoperability, investment optimization). Architecture data : The possible piece-parts of architecture products and related analytical tools in a rigorous definition of the pieces (object classes), their properties, features, or attributes, and inter-relationships. Data model: A data model defines the objects of a domain, their inter-relationships, and their properties, normally for the purpose of a database design. There are three data model levels, from highest to lowest: conceptual, logical, and physical. Conceptual data models are the highest level. They model the user concepts in terms familiar to users. Details may be left out to improve clarity and focus with users. Logical models are more formal, often with considerations of unique data representation (non-redundancy or database normalization), emphasis on semantic well-definedness and exclusivity (nonoverlapping entities), and domain-level completeness. Logical data models need not commit to a specific Data Base Management System (DBMS). Physical data models are usually the most detailed and the level sufficient for database generation. The Physical model must contain all the information necessary for implementation. The Physical model often addresses performance considerations. === Data modeling and visualization === The DoDAF incorporates data modeling (CADM) and visualization aspects (products and views) to support architecture analysis. The DoDAF's data model, CADM, defines architecture data entities, the relationships between them, and the data entity attributes, essentially specifying the “grammar” for the architecture community. It contains a set of “nouns,” “verbs,” and “adjectives” that, together with the “grammar,” allow one to create “sentences” about architecture artifacts that are consistent with the DoDAF. The CADM is a necessary aspect of the architecture and provides the meaning behind the architectural visual representations (products). It enables the effective comparing and sharing of architecture data across the enterprise, contributing to the overall usefulness of architectures. 
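As a loose illustration of that "grammar" idea (and not the actual CADM schema, which is far larger), the sketch below uses Python's sqlite3 module to capture one such sentence pattern: an operational node performs an operational activity. All table and column names are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- "Nouns": architecture data entities.
    CREATE TABLE operational_node     (node_id     INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE operational_activity (activity_id INTEGER PRIMARY KEY, name TEXT);

    -- "Verb": the performs relationship between the two entities,
    -- with an attribute qualifying each such sentence.
    CREATE TABLE node_performs_activity (
        node_id     INTEGER REFERENCES operational_node(node_id),
        activity_id INTEGER REFERENCES operational_activity(activity_id),
        periodicity TEXT,
        PRIMARY KEY (node_id, activity_id)
    );
""")
```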
The CADM describes the following data model levels in further detail: Conceptual : Models the user concepts in terms familiar to users Logical : More formal model that considers unique data representation, emphasis on semantic well-definedness and exclusivity, and domain-level completeness Physical : Models all the information necessary for database implementation Data visualization is a way of graphically or textually representing architecture data to support decision-making analysis. The DoDAF provides products as a way of representing the underlying data in a user-friendly manner. In some cases, the existing DoDAF products are sufficient for representing the required information. Regardless of how one chooses to represent the architecture description, the underlying data (CADM) remains consistent, providing a common foundation to which analysis requirements are mapped. === Data model diagram notation === In this notation, boxes represent entities for which architecture data are collected (representing tables when used for a relational database); they are depicted by open boxes with square corners (independent entities) or rounded corners (dependent entities). The entity name is outside and on top of the open box. The lines of text inside the box denote the attributes of that entity (representing columns in the entity table when used for a relational database). The horizontal line in each box separates the primary key attributes (used to find unique instances of the entity) from the non-key descriptive attributes. The symbol with a circle and line underneath indicates subtyping, for which all the entities connected below are non-overlapping subsets of the entity connected at the top of the symbol. Relationships are represented by dotted (non-identifying) and solid (identifying) lines, in which the child entity (the one nearest the solid dot) has zero, one, or many instances associated with each instance of the parent entity (the other entity connected by the relationship line). === Basic architectural elements === An architecture data repository responsive to the architecture products of the DoDAF contains information on basic architectural elements such as the following: Operational nodes may be organizations, organization types, and operational (human) roles. (A role may be a skill, occupation, occupational specialty, or position.) Operational activities including tasks defined in the Universal Joint Task List (UJTL). Information and data refer to information provided by domain databases and other information asset sources (which may be network centric) and systems data that implement that information. These information sources and systems data may define information exchanges or details for system interfaces. Systems nodes refer to nodes associated with physical entities as well as systems and may be facilities, platforms, units, or locations. Systems include families of systems (FOSs) and systems of systems (SOSs) and contain software and hardware equipment items. System functions are required by operational activities and are performed by one or more systems. Performance refers to performance characteristics of systems, system functions, links (i.e., physical links), computer networks, and system data exchanges. Standards are associated with technologies, systems, systems nodes, and data, and refer to technical standards for information processing, information transfer, data, security, and human computer interface. 
Technologies include future technologies and relate to systems and emerging standards concerning the use of such technologies. The depicted (conceptual) relationships include the following (among many others): Operational nodes perform many operational activities. Operational nodes require information. Information is related to systems and implemented as data, which is associated with standards. Systems perform system functions. Systems have performance characteristics; both systems and performance may relate to a system function being performed. With these relationships, many types of architectural and related information can be represented, such as networks, information flows, information requirements, interfaces, and so forth. === Related models === The counterpart to CADM within NASA is the NASA Exploration Information Ontology Model (NeXIOM), which is designed to capture and expressively describe the engineering and programmatic data that drives exploration program decisions. NeXIOM is intended to be a repository that can be accessed by various simulation tools and models that need to exchange information and data. == References == == External links == Media related to Core Architecture Data Model at Wikimedia Commons
Wikipedia/Core_architecture_data_model
An entity–relationship model (or ER model) describes interrelated things of interest in a specific domain of knowledge. A basic ER model is composed of entity types (which classify the things of interest) and specifies relationships that can exist between entities (instances of those entity types). In software engineering, an ER model is commonly formed to represent things a business needs to remember in order to perform business processes. Consequently, the ER model becomes an abstract data model, that defines a data or information structure that can be implemented in a database, typically a relational database. Entity–relationship modeling was developed for database and design by Peter Chen and published in a 1976 paper, with variants of the idea existing previously. Today it is commonly used for teaching students the basics of database structure. Some ER models show super and subtype entities connected by generalization-specialization relationships, and an ER model can also be used to specify domain-specific ontologies. == Introduction == An ER model usually results from systematic analysis to define and describe the data created and needed by processes in a business area. Typically, it represents records of entities and events monitored and directed by business processes, rather than the processes themselves. It is usually drawn in a graphical form as boxes (entities) that are connected by lines (relationships) which express the associations and dependencies between entities. It can also be expressed in a verbal form, for example: one building may be divided into zero or more apartments, but one apartment can only be located in one building. Entities may be defined not only by relationships, but also by additional properties (attributes), which include identifiers called "primary keys". Diagrams created to represent attributes as well as entities and relationships may be called entity-attribute-relationship diagrams, rather than entity–relationship models. An ER model is typically implemented as a database. In a simple relational database implementation, each row of a table represents one instance of an entity type, and each field in a table represents an attribute type. In a relational database a relationship between entities is implemented by storing the primary key of one entity as a pointer or "foreign key" in the table of another entity. There is a tradition for ER/data models to be built at two or three levels of abstraction. The conceptual-logical-physical hierarchy below is used in other kinds of specification, and is different from the three schema approach to software engineering. Conceptual data model This is the highest level ER model in that it contains the least granular detail but establishes the overall scope of what is to be included within the model set. The conceptual ER model normally defines master reference data entities that are commonly used by the organization. Developing an enterprise-wide conceptual ER model is useful to support documenting the data architecture for an organization. A conceptual ER model may be used as the foundation for one or more logical data models (see below). The purpose of the conceptual ER model is then to establish structural metadata commonality for the master data entities between the set of logical ER models. The conceptual data model may be used to form commonality relationships between ER models as a basis for data model integration. 
Logical data model A logical ER model does not require a conceptual ER model, especially if the scope of the logical ER model includes only the development of a distinct information system. The logical ER model contains more detail than the conceptual ER model. In addition to master data entities, operational and transactional data entities are now defined. The details of each data entity are developed and the relationships between these data entities are established. The logical ER model is however developed independently of the specific database management system into which it can be implemented. Physical data model One or more physical ER models may be developed from each logical ER model. The physical ER model is normally developed to be instantiated as a database. Therefore, each physical ER model must contain enough detail to produce a database and each physical ER model is technology dependent since each database management system is somewhat different. The physical model is normally instantiated in the structural metadata of a database management system as relational database objects such as database tables, database indexes such as unique key indexes, and database constraints such as a foreign key constraint or a commonality constraint. The ER model is also normally used to design modifications to the relational database objects and to maintain the structural metadata of the database. The first stage of information system design uses these models during the requirements analysis to describe information needs or the type of information that is to be stored in a database. The data modeling technique can be used to describe any ontology (i.e. an overview and classifications of used terms and their relationships) for a certain area of interest. In the case of the design of an information system that is based on a database, the conceptual data model is, at a later stage (usually called logical design), mapped to a logical data model, such as the relational model. This in turn is mapped to a physical model during physical design. Sometimes, both of these phases are referred to as "physical design." == Components == An entity may be defined as a thing that is capable of an independent existence that can be uniquely identified, and is capable of storing data. An entity is an abstraction from the complexities of a domain. When we speak of an entity, we normally speak of some aspect of the real world that can be distinguished from other aspects of the real world. An entity is a thing that exists either physically or logically. An entity may be a physical object such as a house or a car (they exist physically), an event such as a house sale or a car service, or a concept such as a customer transaction or order (they exist logically—as a concept). Although the term entity is the one most commonly used, following Chen, entities and entity-types should be distinguished. An entity-type is a category. An entity, strictly speaking, is an instance of a given entity-type. There are usually many instances of an entity-type. Because the term entity-type is somewhat cumbersome, most people tend to use the term entity as a synonym. Entities can be thought of as nouns. Examples include a computer, an employee, a song, or a mathematical theorem. A relationship captures how entities are related to one another. Relationships can be thought of as verbs, linking two or more nouns. 
Examples include an owns relationship between a company and a computer, a supervises relationship between an employee and a department, a performs relationship between an artist and a song, and a proves relationship between a mathematician and a conjecture. The model's linguistic aspect described above is used in the declarative database query language ERROL, which mimics natural language constructs. ERROL's semantics and implementation are based on reshaped relational algebra (RRA), a relational algebra that is adapted to the entity–relationship model and captures its linguistic aspect. Entities and relationships can both have attributes. For example, an employee entity might have a Social Security Number (SSN) attribute, while a proved relationship may have a date attribute. All entities except weak entities must have a minimal set of uniquely identifying attributes that may be used as a unique/primary key. Entity-relationship diagrams (ERDs) do not show single entities or single instances of relations. Rather, they show entity sets (all entities of the same entity type) and relationship sets (all relationships of the same relationship type). For example, a particular song is an entity, the collection of all songs in a database is an entity set, the eaten relationship between a child and his lunch is a single relationship, and the set of all such child-lunch relationships in a database is a relationship set. In other words, a relationship set corresponds to a relation in mathematics, while a relationship corresponds to a member of the relation. Certain cardinality constraints on relationship sets may be indicated as well. Physical views show how data is actually stored. === Relationships, roles, and cardinalities === Chen's original paper gives an example of a relationship and its roles. He describes a relationship "marriage" and its two roles, "husband" and "wife". A person plays the role of husband in a marriage (relationship) and another person plays the role of wife in the (same) marriage. These words are nouns. Chen's terminology has also been applied to earlier ideas. The lines, arrows, and crow's feet of some diagrams owes more to the earlier Bachman diagrams than to Chen's relationship diagrams. Another common extension to Chen's model is to "name" relationships and roles as verbs or phrases. === Role naming === It has also become prevalent to name roles with phrases such as is the owner of and is owned by. Correct nouns in this case are owner and possession. Thus, person plays the role of owner and car plays the role of possession rather than person plays the role of, is the owner of, etc. Using nouns has direct benefit when generating physical implementations from semantic models. When a person has two relationships with car it is possible to generate names such as owner_person and driver_person, which are immediately meaningful. === Cardinalities === Modifications to the original specification can be beneficial. Chen described look-across cardinalities. As an aside, the Barker–Ellis notation, used in Oracle Designer, uses same-side for minimum cardinality (analogous to optionality) and role, but look-across for maximum cardinality (the crow's foot). Research by Merise, Elmasri & Navathe and others has shown there is a preference for same-side for roles and both minimum and maximum cardinalities, and researchers (Feinerer, Dullea et al.) have shown that this is more coherent when applied to n-ary relationships of order greater than 2. Dullea et al. 
states: "A 'look across' notation such as used in the UML does not effectively represent the semantics of participation constraints imposed on relationships where the degree is higher than binary." Feinerer says: "Problems arise if we operate under the look-across semantics as used for UML associations. Hartmann investigates this situation and shows how and why different transformations fail." (Although the "reduction" mentioned is spurious as the two diagrams 3.4 and 3.5 are in fact the same) and also "As we will see on the next few pages, the look-across interpretation introduces several difficulties that prevent the extension of simple mechanisms from binary to n-ary associations." Chen's notation for entity–relationship modeling uses rectangles to represent entity sets, and diamonds to represent relationships appropriate for first-class objects: they can have attributes and relationships of their own. If an entity set participates in a relationship set, they are connected with a line. Attributes are drawn as ovals and connected with a line to exactly one entity or relationship set. Cardinality constraints are expressed as follows: a double line indicates a participation constraint, totality, or surjectivity: all entities in the entity set must participate in at least one relationship in the relationship set; an arrow from an entity set to a relationship set indicates a key constraint, i.e. injectivity: each entity of the entity set can participate in at most one relationship in the relationship set; a thick line indicates both, i.e. bijectivity: each entity in the entity set is involved in exactly one relationship. an underlined name of an attribute indicates that it is a key: two different entities or relationships with this attribute always have different values for this attribute. Attributes are often omitted as they can clutter up a diagram. Other diagram techniques often list entity attributes within the rectangles drawn for entity sets. == Related diagramming convention techniques == Bachman notation Barker's notation EXPRESS IDEF1X § Crow's foot notation (also Martin notation) (min, max)-notation of Jean-Raymond Abrial in 1974 UML class diagrams Merise Object-role modeling === Crow's foot notation === Crow's foot notation, the beginning of which dates back to an article by Gordon Everest (1976), is used in Barker's notation, Structured Systems Analysis and Design Method (SSADM), and information technology engineering. Crow's foot diagrams represent entities as boxes, and relationships as lines between the boxes. Different shapes at the ends of these lines represent the relative cardinality of the relationship. Crow's foot notation was in use in ICL in 1978, and was used in the consultancy practice CACI. Many of the consultants at CACI (including Richard Barker) came from ICL and subsequently moved to Oracle UK, where they developed the early versions of Oracle's CASE tools, introducing the notation to a wider audience. With this notation, relationships cannot have attributes. Where necessary, relationships are promoted to entities in their own right: for example, if it is necessary to capture where and when an artist performed a song, a new entity "performance" is introduced (with attributes reflecting the time and place), and the relationship of an artist to a song becomes an indirect relationship via the performance (artist-performs-performance, performance-features-song). 
Three symbols are used to represent cardinality: the ring represents "zero" the dash represents "one" the crow's foot represents "many" or "infinite" These symbols are used in pairs to represent the four types of cardinality that an entity may have in a relationship. The inner component of the notation represents the minimum, and the outer component represents the maximum. ring and dash → minimum zero, maximum one (optional) dash and dash → minimum one, maximum one (mandatory) ring and crow's foot → minimum zero, maximum many (optional) dash and crow's foot → minimum one, maximum many (mandatory) == Model usability issues == Users of a modeled database can encounter two well-known issues where the returned results differ from what the query author assumed. These are known as the fan trap and the chasm trap, and they can lead to inaccurate query results if not properly handled during the design of the Entity-Relationship Model (ER Model). Both the fan trap and chasm trap underscore the importance of ensuring that ER models are not only technically correct but also fully and accurately reflect the real-world relationships they are designed to represent. Identifying and resolving these traps early in the design process helps avoid significant issues later, especially in complex databases intended for business intelligence or decision support. === Fan trap === The first issue is the fan trap. It occurs when a (master) table links to multiple tables in a one-to-many relationship. The issue derives its name from the visual appearance of the model when it is drawn in an entity–relationship diagram, as the linked tables 'fan out' from the master table. This type of model resembles a star schema, which is a common design in data warehouses. When attempting to calculate sums over aggregates using standard SQL queries based on the master table, the results can be unexpected and often incorrect due to the way relationships are structured. The miscalculation happens because SQL treats each relationship individually, which may result in double-counting or other inaccuracies. This issue is particularly common in decision support systems. To mitigate this, either the data model or the SQL query itself must be adjusted. Some database querying software designed for decision support includes built-in methods to detect and address fan traps. === Chasm trap === The second issue is the chasm trap. A chasm trap occurs when a model suggests the existence of a relationship between entity types, but the pathway between these entities is incomplete or missing in certain instances. For example, imagine a database where a Building has one or more Rooms, and these Rooms hold zero or more Computers. One might expect to query the model to list all Computers in a Building. However, if a Computer is temporarily not assigned to a Room (perhaps under repair or stored elsewhere), it won't be included in the query results. The query would only return Computers currently assigned to Rooms, not all Computers in the Building. This reflects a flaw in the model, as it fails to account for Computers that are in the Building but not in a Room. To resolve this, an additional relationship directly linking the Building and Computers would be required. == In semantic modeling == === Semantic model === A semantic model is a model of concepts and is sometimes called a "platform independent model". It is an intensional model. 
== In semantic modeling == === Semantic model === A semantic model is a model of concepts and is sometimes called a "platform independent model". It is an intensional model. At least since Carnap, it is well known that: "...the full meaning of a concept is constituted by two aspects, its intension and its extension. The first part comprises the embedding of a concept in the world of concepts as a whole, i.e. the totality of all relations to other concepts. The second part establishes the referential meaning of the concept, i.e. its counterpart in the real or in a possible world". === Extension model === An extensional model is one that maps to the elements of a particular methodology or technology, and is thus a "platform specific model". The UML specification explicitly states that associations in class models are extensional, and this is in fact self-evident by considering the extensive array of additional "adornments" provided by the specification over and above those provided by any of the prior candidate "semantic modelling languages" ("UML as a Data Modeling Notation, Part 2"). === Entity–relationship origins === Peter Chen, the father of ER modeling, said in his seminal paper: "The entity-relationship model adopts the more natural view that the real world consists of entities and relationships. It incorporates some of the important semantic information about the real world." In his original 1976 article Chen explicitly contrasts entity–relationship diagrams with record modelling techniques: "The data structure diagram is a representation of the organization of records and is not an exact representation of entities and relationships." Several other authors also support Chen's program. ==== Philosophical alignment ==== Chen is in accord with philosophical traditions from the time of the Ancient Greek philosophers: Plato and Aristotle. Plato himself associates knowledge with the apprehension of unchanging Forms (namely, archetypes or abstract representations of the many types of things, and properties) and their relationships to one another. == Limitations == An ER model is primarily conceptual, an ontology that expresses predicates in a domain of knowledge. ER models are readily used to represent relational database structures (after Codd and Date) but not so often to represent other kinds of data structure (such as data warehouses and document stores). Some ER model notations include symbols to show super-sub-type relationships and mutual exclusion between relationships; some do not. An ER model does not show an entity's life history (how its attributes and/or relationships change over time in response to events). For many systems, such state changes are nontrivial and important enough to warrant explicit specification. Some have extended ER modeling with constructs to represent state changes, an approach supported by the original author; an example is Anchor Modeling. Others model state changes separately, using state transition diagrams or some other process modeling technique. Many other kinds of diagram are drawn to model other aspects of systems, including the 14 diagram types offered by UML. Today, even where ER modeling could be useful, it is uncommon because many use tools that support similar kinds of model, notably class diagrams for OO programming and data models for relational database management systems. Some of these tools can generate code from diagrams and reverse-engineer diagrams from code. In a survey, Brodie and Liu could not find a single instance of entity–relationship modeling inside a sample of ten Fortune 100 companies.
Badia and Lemire blame this lack of use on the lack of guidance but also on the lack of benefits, such as lack of support for data integration. The enhanced entity–relationship model (EER modeling) introduces several concepts not in ER modeling but closely related to object-oriented design, like is-a relationships. For modelling temporal databases, numerous ER extensions have been considered. Similarly, the ER model was found unsuitable for multidimensional databases (used in OLAP applications); no dominant conceptual model has emerged in this field yet, although they generally revolve around the concept of OLAP cube (also known as data cube within the field). == See also ==
Associative entity – Term in relational and entity–relationship theory
Concept map – Diagram showing relationships among concepts
Database design – Designing how data is held in a database
Data structure diagram – Visual representation of a certain kind of data model that contains entities, their relationships, and the constraints that are placed on them
Enhanced entity–relationship model – Data model
Enterprise architecture framework – Frame in which the architecture of a company is defined
Entity Data Model – Open source object-relational mapping framework
Value range structure diagrams
Comparison of data modeling tools – Comparison of notable data modeling tools
Knowledge graph – Type of knowledge base
Ontology – Specification of a conceptualization
Object-role modeling – Programming technique
Three schema approach – Approach to building information systems
Structured entity relationship model
Schema-agnostic databases – Type of databank
== References == == Further reading ==
Chen, Peter (2002). "Entity-Relationship Modeling: Historical Events, Future Trends, and Lessons Learned" (PDF). Software pioneers. Springer-Verlag. pp. 296–310. ISBN 978-3-540-43081-0.
Barker, Richard (1990). CASE Method: Entity Relationship Modelling. Addison-Wesley. ISBN 978-0201416961.
Barker, Richard (1990). CASE Method: Tasks and Deliverables. Addison-Wesley. ISBN 978-0201416978.
Mannila, Heikki; Räihä, Kari-Jouko (1992). The Design of Relational Databases. Addison-Wesley. ISBN 978-0201565232.
Thalheim, Bernhard (2000). Entity-Relationship Modeling: Foundations of Database Technology. Springer. ISBN 978-3-540-65470-4.
Bagui, Sikha; Earp, Richard Walsh (2022). Database Design Using Entity-Relationship Diagrams. Auerbach Publications. ISBN 978-1-032-01718-1.
== External links == "The Entity Relationship Model: Toward a Unified View of Data" Entity Relationship Modelling Logical Data Structures (LDSs) - Getting started by Tony Drewry. Crow's Foot Notation Kinds of Data Models -- and How to Name Them presentation by David Hay
Wikipedia/Entity-relationship_model
Integrated Computer-Aided Manufacturing (ICAM) is a US Air Force program that develops tools, techniques, and processes to support manufacturing integration. It influenced the computer-integrated manufacturing (CIM) and computer-aided manufacturing (CAM) project efforts of many companies. The ICAM program was founded in 1976, and the initiative was managed by the US Air Force at Wright-Patterson as a part of their technology modernization efforts. The program initiated the development of a series of standards for modeling and analysis in management and business improvement, called Integrated Definitions, or IDEFs for short. == Overview == The USAF ICAM program was founded in 1976 at the US Air Force Materials Laboratory, Wright-Patterson Air Force Base in Ohio by Dennis E. Wisnosky and Dan L. Shunk and others. In the mid-1970s Joseph Harrington had assisted Wisnosky and Shunk in designing the ICAM program and had broadened the concept of CIM to include the entire manufacturing company. Harrington considered manufacturing a "monolithic function". The ICAM program was visionary in showing that a new approach was necessary to achieve integration in manufacturing firms. Wisnosky and Shunk developed a "wheel" to illustrate the architecture of their ICAM project and to show the various elements that had to work together. Wisnosky and Shunk were among the first to understand the web of interdependencies needed for integration. Their work represents the first major step in shifting the focus of manufacturing from a series of sequential operations to parallel processing. The ICAM program has spent over $100 million to develop tools, techniques, and processes to support manufacturing integration. The Air Force's ICAM program recognizes the role of data as central to any integration effort. Data must be common and shareable across functions. The concept still remains ahead of its time, because most major companies did not seriously begin to attack the data architecture challenge until well into the 1990s. The ICAM program also recognizes the need for ways to analyze and document major activities within the manufacturing establishment. Thus, from ICAM came the IDEFs, the standard for modeling and analysis in management and business improvement efforts. IDEF means ICAM DEFinition. == The impact == === Standard data models === To extract real meaning from the data, we must also have formulated, and agreed on, a model of the world the data describes. We now understand that this actually involves two different kinds of model: static associations between data and the real-world physical and conceptual objects it describes, called the information model; and rules for use and modification of the data, which derive from the dynamic characteristics of the objects themselves, called the functional model. The significance of these models to data interchange for manufacturing and materials flow was recognized early in the Air Force Integrated Computer Aided Manufacturing (ICAM) Project and gave rise to the IDEF formal modeling project. IDEF produced a specification for a formal functional modeling approach (IDEF0) and an information modeling language (IDEF1).
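As a rough illustration of the functional-modeling side, an IDEF0-style activity is conventionally described by the inputs it transforms, the controls that constrain it, the outputs it produces and the mechanisms that perform it. The sketch below is only a schematic rendering of that idea in code; the class and field names are illustrative, not taken from the IDEF0 specification.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Activity:
    """Schematic IDEF0-style function box with its four kinds of arrows."""
    name: str
    inputs: List[str] = field(default_factory=list)      # what the activity transforms
    controls: List[str] = field(default_factory=list)    # constraints governing it
    outputs: List[str] = field(default_factory=list)     # what it produces
    mechanisms: List[str] = field(default_factory=list)  # resources that perform it

machine_part = Activity(
    name="Machine part",
    inputs=["Raw stock"],
    controls=["Engineering drawing", "Process plan"],
    outputs=["Finished part"],
    mechanisms=["CNC mill", "Machinist"],
)
print(machine_part.outputs)  # ['Finished part']
```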
The more recent "Product Data Exchange Specification" (PDES) project in the U.S., the related ISO Standard for the exchange of product model data (STEP) and the Computer Integrated Manufacture Open Systems Architecture (CIMOSA) [ISO87] project in the European Economic Community have whole heartedly accepted the notion that useful data sharing is not possible without formal semantic data models of the context the data describes. Within their respective spectra of efforts, each of these projects has a panoply of information models for manufactured objects, materials and product characteristics, and for manufacturing and assembly processes. Each also has a commitment to detailed functional models of the various phases of product life cycle. The object of all of these recent efforts is to standardize the interchange of information in many aspects of product design, manufacture, delivery and support. === Further research with ICAM definitions === The research in expending and applying the ICAM definitions have proceeded. In the 1990s for example the Material Handling Research Center (MHRC) of the Georgia Institute of Technology and University of Arkansas had included it in their Information Systems research area. That area focuses on the information that must accompany material movements and the application of artificial intelligence to material handling problems. MHRC's research involves expanding the integrated computer-aided manufacturing definition (IDEF) approach to include the information flow as well as the material flow needed to support a manufacturing enterprise, as well as models to handle unscheduled events such as machine breakdowns or material shortages. Past research resulted in software to automatically palletize random-size packages, a system to automatically load and unload truck trailers, and an integrated production control system to fabricate optical fibers. == See also == CIMOSA IDEF == References == == Further reading == Charles Savage, 1996, Fifth Generation Management, Dynamic Teaming, Virtual Enterprising and Knowledge Networking, page 184, ISBN 0-7506-9701-6, Butterworth-Heinemann. Joseph Harrington (1984). Understanding the Manufacturing Process. ISBN 978-0-8247-7170-6
Wikipedia/Integrated_Computer-Aided_Manufacturing
Building information modeling (BIM) is an approach involving the generation and management of digital representations of the physical and functional characteristics of buildings or other physical assets and facilities. BIM is supported by various tools, processes, technologies and contracts. Building information models (BIMs) are computer files (often but not always in proprietary formats and containing proprietary data) which can be extracted, exchanged or networked to support decision-making regarding a built asset. BIM software is used by individuals, businesses and government agencies who plan, design, construct, operate and maintain buildings and diverse physical infrastructures, such as water, refuse, electricity, gas, communication utilities, roads, railways, bridges, ports and tunnels. The concept of BIM has been in development since the 1970s, but it only became an agreed term in the early 2000s. The development of standards and the adoption of BIM has progressed at different speeds in different countries. Developed by buildingSMART, Industry Foundation Classes (IFCs) – data structures for representing information – became an international standard, ISO 16739, in 2013, and BIM process standards developed in the United Kingdom from 2007 onwards formed the basis of an international standard, ISO 19650, launched in January 2019. == History == The concept of BIM has existed since the 1970s. The first software tools developed for modeling buildings emerged in the late 1970s and early 1980s, and included workstation products such as Chuck Eastman's Building Description System and GLIDE, RUCAPS, Sonata, Reflex and Gable 4D Series. The early applications, and the hardware needed to run them, were expensive, which limited widespread adoption. The pioneering role of applications such as RUCAPS, Sonata and Reflex has been recognized by Laiserin as well as the UK's Royal Academy of Engineering; former GMW employee Jonathan Ingram worked on all three products. What became known as BIM products differed from architectural drafting tools such as AutoCAD by allowing the addition of further information (time, cost, manufacturers' details, sustainability, and maintenance information, etc.) to the building model. As Graphisoft had been developing such solutions for longer than its competitors, Laiserin regarded its ArchiCAD application as then "one of the most mature BIM solutions on the market." Following its launch in 1987, ArchiCAD became regarded by some as the first implementation of BIM, as it was the first CAD product on a personal computer able to create both 2D and 3D geometry, as well as the first commercial BIM product for personal computers. However, Graphisoft founder Gábor Bojár has acknowledged to Jonathan Ingram in an open letter, that Sonata "was more advanced in 1986 than ArchiCAD at that time", adding that it "surpassed already the matured definition of 'BIM' specified only about one and a half decade later". The term 'building model' (in the sense of BIM as used today) was first used in papers in the mid-1980s: in a 1985 paper by Simon Ruffle eventually published in 1986, and later in a 1986 paper by Robert Aish – then at GMW Computers Ltd, developer of RUCAPS software – referring to the software's use at London's Heathrow Airport. The term 'Building Information Model' first appeared in a 1992 paper by G.A. van Nederveen and F. P. Tolman. 
However, the terms 'Building Information Model' and 'Building Information Modeling' (including the acronym "BIM") did not become popularly used until some 10 years later. Facilitating exchange and interoperability of information in digital format was promoted variously, with differing terminology: by Graphisoft as "Virtual Building" or "Single Building Model", by Bentley Systems as "Integrated Project Models", and by Autodesk or Vectorworks as "Building Information Modeling". In 2002, Autodesk released a white paper entitled "Building Information Modeling," and other software vendors also started to assert their involvement in the field. By hosting contributions from Autodesk, Bentley Systems and Graphisoft, plus other industry observers, in 2003, Jerry Laiserin helped popularize and standardize the term as a common name for the digital representation of the building process. === Interoperability and BIM standards === As some BIM software developers have created proprietary data structures in their software, data and files created by one vendor's applications may not work in other vendor solutions. To achieve interoperability between applications, neutral, non-proprietary or open standards for sharing BIM data among different software applications have been developed. Poor software interoperability has long been regarded as an obstacle to industry efficiency in general and to BIM adoption in particular. In August 2004 a US National Institute of Standards and Technology (NIST) report conservatively estimated that $15.8 billion was lost annually by the U.S. capital facilities industry due to inadequate interoperability arising from "the highly fragmented nature of the industry, the industry's continued paper-based business practices, a lack of standardization, and inconsistent technology adoption among stakeholders". An early BIM standard was the CIMSteel Integration Standard, CIS/2, a product model and data exchange file format for structural steel project information (CIMsteel: Computer Integrated Manufacturing of Constructional Steelwork). CIS/2 enables seamless and integrated information exchange during the design and construction of steel framed structures. It was developed by the University of Leeds and the UK's Steel Construction Institute in the late 1990s, with inputs from Georgia Tech, and was approved by the American Institute of Steel Construction as its data exchange format for structural steel in 2000. BIM is often associated with Industry Foundation Classes (IFCs) and aecXML – data structures for representing information – developed by buildingSMART. IFC is recognised by the ISO and has been an international standard, ISO 16739, since 2013. OpenBIM is an initiative by buildingSMART that promotes open standards and interoperability. Based on the IFC standard, it allows vendor-neutral BIM data exchange. OpenBIM standards also include BIM Collaboration Format (BCF) for issue tracking and Information Delivery Specification (IDS) for defining model requirements. Construction Operations Building information exchange (COBie) is also associated with BIM. COBie was devised by Bill East of the United States Army Corps of Engineers in 2007, and helps capture and record equipment lists, product data sheets, warranties, spare parts lists, and preventive maintenance schedules. This information is used to support operations, maintenance and asset management once a built asset is in service.
In December 2011, it was approved by the US-based National Institute of Building Sciences as part of its National Building Information Model (NBIMS-US) standard. COBie has been incorporated into software, and may take several forms including spreadsheet, IFC, and ifcXML. In early 2013 BuildingSMART was working on a lightweight XML format, COBieLite, which became available for review in April 2013. In September 2014, a code of practice regarding COBie was issued as a British Standard: BS 1192-4. In January 2019, ISO published the first two parts of ISO 19650, providing a framework for building information modelling, based on process standards developed in the United Kingdom. UK BS and PAS 1192 specifications form the basis of further parts of the ISO 19650 series, with parts on asset management (Part 3) and security management (Part 5) published in 2020. The IEC/ISO 81346 series for reference designation has published 81346-12:2018, also known as RDS-CW (Reference Designation System for Construction Works). The use of RDS-CW offers the prospect of integrating BIM with complementary international standards based classification systems being developed for the Power Plant sector. == Definition == ISO 19650-1:2018 defines BIM as: Use of a shared digital representation of a built asset to facilitate design, construction and operation processes to form a reliable basis for decisions. The US National Building Information Model Standard Project Committee has the following definition: Building Information Modeling (BIM) is a digital representation of physical and functional characteristics of a facility. A BIM is a shared knowledge resource for information about a facility forming a reliable basis for decisions during its life-cycle; defined as existing from earliest conception to demolition. Traditional building design was largely reliant upon two-dimensional technical drawings (plans, elevations, sections, etc.). Building information modeling extends the three primary spatial dimensions (width, height and depth), incorporating information about time (so-called 4D BIM), cost (5D BIM), asset management, sustainability, etc. BIM therefore covers more than just geometry. It also covers spatial relationships, geospatial information, quantities and properties of building components (for example, manufacturers' details), and enables a wide range of collaborative processes relating to the built asset from initial planning through to construction and then throughout its operational life. BIM authoring tools present a design as combinations of "objects" – vague and undefined, generic or product-specific, solid shapes or void-space oriented (like the shape of a room), that carry their geometry, relations, and attributes. BIM applications allow extraction of different views from a building model for drawing production and other uses. These different views are automatically consistent, being based on a single definition of each object instance. BIM software also defines objects parametrically; that is, the objects are defined as parameters and relations to other objects so that if a related object is amended, dependent ones will automatically also change. Each model element can carry attributes for selecting and ordering them automatically, providing cost estimates as well as material tracking and ordering. 
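The parametric behaviour described above, where dependent objects update when a related object is amended, can be sketched in a few lines of code. This is a toy illustration, not any BIM vendor's API; the object and parameter names are invented.

```python
class GridAxis:
    """A referenced object; other objects derive their geometry from it."""
    def __init__(self, spacing_mm: float):
        self.spacing_mm = spacing_mm

class Wall:
    """A dependent object defined by parameters and a relation to the grid."""
    def __init__(self, grid: GridAxis, bays: int, height_mm: float):
        self.grid = grid            # relation to another object
        self.bays = bays            # parameter
        self.height_mm = height_mm  # parameter

    @property
    def length_mm(self) -> float:
        # Derived on demand, so amending the grid automatically changes the wall.
        return self.grid.spacing_mm * self.bays

grid = GridAxis(spacing_mm=6000.0)
wall = Wall(grid, bays=4, height_mm=3000.0)
print(wall.length_mm)     # 24000.0
grid.spacing_mm = 7500.0  # amend the related object
print(wall.length_mm)     # 30000.0 -- the dependent element follows
```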
For the professionals involved in a project, BIM enables a virtual information model to be shared by the design team (architects, landscape architects, surveyors, civil, structural and building services engineers, etc.), the main contractor and subcontractors, and the owner/operator. Each professional adds discipline-specific data to the shared model – commonly, a 'federated' model which combines several different disciplines' models into one. Combining models enables visualisation of all models in a single environment, better coordination and development of designs, enhanced clash avoidance and detection, and improved time and cost decision-making. === BIM wash === "BIM wash" or "BIM washing" is a term sometimes used to describe inflated, and/or deceptive, claims of using or delivering BIM services or products. == Usage throughout the asset life cycle == Use of BIM goes beyond the planning and design phase of a project, extending throughout the life cycle of the asset. The supporting processes of building lifecycle management include cost management, construction management, project management, facility operation and application in green building. === Common Data Environment === A 'Common Data Environment' (CDE) is defined in ISO 19650 as an: Agreed source of information for any given project or asset, for collecting, managing and disseminating each information container through a managed process. A CDE workflow describes the processes to be used while a CDE solution can provide the underlying technologies. A CDE is used to share data across a project or asset lifecycle, supporting collaboration across a whole project team. The concept of a CDE overlaps with enterprise content management, ECM, but with a greater focus on BIM issues. === Management of building information models === Building information models span the whole concept-to-occupation time-span. To ensure efficient management of information processes throughout this span, a BIM manager might be appointed. The BIM manager is retained by a design build team on the client's behalf from the pre-design phase onwards to develop and to track the object-oriented BIM against predicted and measured performance objectives, supporting multi-disciplinary building information models that drive analysis, schedules, take-off and logistics. Companies are also now considering developing BIMs in various levels of detail, since depending on the application of BIM, more or less detail is needed, and there is varying modeling effort associated with generating building information models at different levels of detail. === BIM in construction management === Participants in the building process are constantly challenged to deliver successful projects despite tight budgets, limited staffing, accelerated schedules, and limited or conflicting information. The significant disciplines such as architectural, structural and MEP designs should be well-coordinated, as two things can't take place at the same place and time. BIM additionally is able to aid in collision detection, identifying the exact location of discrepancies. The BIM concept envisages virtual construction of a facility prior to its actual physical construction, in order to reduce uncertainty, improve safety, work out problems, and simulate and analyze potential impacts. Sub-contractors from every trade can input critical information into the model before beginning construction, with opportunities to pre-fabricate or pre-assemble some systems off-site. 
Waste can be minimised on-site and products delivered on a just-in-time basis rather than being stock-piled on-site. Quantities and shared properties of materials can be extracted easily. Scopes of work can be isolated and defined. Systems, assemblies and sequences can be shown in a relative scale with the entire facility or group of facilities. BIM also prevents errors by enabling conflict or 'clash detection' whereby the computer model visually highlights to the team where parts of the building (e.g.:structural frame and building services pipes or ducts) may wrongly intersect. === BIM in facility operation and asset management === BIM can bridge the information loss associated with handing a project from design team, to construction team and to building owner/operator, by allowing each group to add to and reference back to all information they acquire during their period of contribution to the BIM model. Enabling an effective handover of information from design and construction (including via IFC or COBie) can thus yield benefits to the facility owner or operator. BIM-related processes relating to longer-term asset management are also covered in ISO-19650 Part 3. For example, a building owner may find evidence of a water leak in a building. Rather than exploring the physical building, the owner may turn to the model and see that a water valve is located in the suspect location. The owner could also have in the model the specific valve size, manufacturer, part number, and any other information ever researched in the past, pending adequate computing power. Such problems were initially addressed by Leite and Akinci when developing a vulnerability representation of facility contents and threats for supporting the identification of vulnerabilities in building emergencies. Dynamic information about the building, such as sensor measurements and control signals from the building systems, can also be incorporated within software to support analysis of building operation and maintenance. As such, BIM in facility operation can be related to internet of things approaches; rapid access to data may also be aided by use of mobile devices (smartphones, tablets) and machine-readable RFID tags or barcodes; while integration and interoperability with other business systems - CAFM, ERP, BMS, IWMS, etc - can aid operational reuse of data. There have been attempts at creating information models for older, pre-existing facilities. Approaches include referencing key metrics such as the Facility Condition Index (FCI), or using 3D laser-scanning surveys and photogrammetry techniques (separately or in combination) or digitizing traditional building surveying methodologies by using mobile technology to capture accurate measurements and operation-related information about the asset that can be used as the basis for a model. Trying to retrospectively model a building constructed in, say 1927, requires numerous assumptions about design standards, building codes, construction methods, materials, etc, and is, therefore, more complex than building a model during design. One of the challenges to the proper maintenance and management of existing facilities is understanding how BIM can be utilized to support a holistic understanding and implementation of building management practices and "cost of ownership" principles that support the full product lifecycle of a building. 
An American National Standard entitled APPA 1000 – Total Cost of Ownership for Facilities Asset Management incorporates BIM to factor in a variety of critical requirements and costs over the life-cycle of the building, including but not limited to: replacement of energy, utility, and safety systems; continual maintenance of the building exterior and interior and replacement of materials; updates to design and functionality; and recapitalization costs. === BIM in green building === BIM in green building, or "green BIM", is a process that can help architecture, engineering and construction firms to improve sustainability in the built environment. It can allow architects and engineers to integrate and analyze environmental issues in their design over the life cycle of the asset. The ERANet projects EPC4SES and FinSESCo worked on the digital representation of the energy demand of buildings. The nucleus is the XML produced when issuing Energy Performance Certificates, amended with roof data so that the position and size of PV or PV/T installations can be retrieved. == International developments == === Asia === ==== China ==== China began its exploration of informatisation in 2001. The Ministry of Construction announced BIM as the key application technology of informatisation in "Ten new technologies of construction industry" (by 2010). The Ministry of Science and Technology (MOST) clearly announced BIM technology as a national key research and application project in "12th Five-Year" Science and Technology Development Planning. Therefore, the year 2011 was described as "The First Year of China's BIM". ==== Hong Kong ==== In 2006 the Hong Kong Housing Authority introduced BIM, and then set a target of full BIM implementation in 2014/2015. BuildingSmart Hong Kong was inaugurated in Hong Kong SAR in late April 2012. The Government of Hong Kong has mandated the use of BIM for all government projects over HK$30M since 1 January 2018. ==== India ==== India Building Information Modelling Association (IBIMA) is a national-level society that represents the entire Indian BIM community. In India BIM is also known as VDC: Virtual Design and Construction. Due to its population and economic growth, India has an expanding construction market. In spite of this, BIM usage was reported by only 22% of respondents in a 2014 survey. In 2019, government officials said BIM could help save up to 20% by shortening construction time, and urged wider adoption by infrastructure ministries. ==== Iran ==== The Iran Building Information Modeling Association (IBIMA) was founded in 2012 by professional engineers from five universities in Iran, including the Civil and Environmental Engineering Department at Amirkabir University of Technology. While it is not currently active, IBIMA aims to share knowledge resources to support construction engineering management decision-making. ==== Malaysia ==== BIM implementation is targeted towards BIM Stage 2 by the year 2020, led by the Construction Industry Development Board (CIDB Malaysia). Under the Construction Industry Transformation Plan (CITP 2016–2020), it is hoped more emphasis on technology adoption across the project life-cycle will induce higher productivity. ==== Singapore ==== The Building and Construction Authority (BCA) has announced that BIM would be introduced for architectural submission (by 2013), structural and M&E submissions (by 2014) and eventually for plan submissions of all projects with gross floor area of more than 5,000 square meters by 2015.
The BCA Academy is training students in BIM. ==== Japan ==== The Ministry of Land, Infrastructure and Transport (MLIT) has announced "Start of BIM pilot project in government building and repairs" (by 2010). Japan Institute of Architects (JIA) released the BIM guidelines (by 2012), which showed the agenda and expected effect of BIM to architects. MLIT announced " BIM will be mandated for all of its public works from the fiscal year of 2023, except those having particular reasons". The works subject to WTO Government Procurement Agreement shall comply with the published ISO standards related to BIM such as ISO19650 series as determined by the Article 10 (Technical Specification) of the Agreement. ==== South Korea ==== Small BIM-related seminars and independent BIM effort existed in South Korea even in the 1990s. However, it was not until the late 2000s that the Korean industry paid attention to BIM. The first industry-level BIM conference was held in April 2008, after which, BIM has been spread very rapidly. Since 2010, the Korean government has been gradually increasing the scope of BIM-mandated projects. McGraw Hill published a detailed report in 2012 on the status of BIM adoption and implementation in South Korea. ==== United Arab Emirates ==== Dubai Municipality issued a circular (196) in 2014 mandating BIM use for buildings of a certain size, height or type. The one page circular initiated strong interest in BIM and the market responded in preparation for more guidelines and direction. In 2015 the Municipality issued another circular (207) titled 'Regarding the expansion of applying the (BIM) on buildings and facilities in the emirate of Dubai' which made BIM mandatory on more projects by reducing the minimum size and height requirement for projects requiring BIM. This second circular drove BIM adoption further with several projects and organizations adopting UK BIM standards as best practice. In 2016, the UAE's Quality and Conformity Commission set up a BIM steering group to investigate statewide adoption of BIM. === Europe === ==== Austria ==== Austrian standards for digital modeling are summarized in the ÖNORM A 6241, published on 15 March 2015. The ÖNORM A 6241-1 (BIM Level 2), which replaced the ÖNORM A 6240-4, has been extended in the detailed and executive design stages, and corrected in the lack of definitions. The ÖNORM A 6241-2 (BIM Level 3) includes all the requirements for the BIM Level 3 (iBIM). ==== Czech Republic ==== The Czech BIM Council, established in May 2011, aims to implement BIM methodologies into the Czech building and designing processes, education, standards and legislation. ==== Estonia ==== In Estonia digital construction cluster (Digitaalehituse Klaster) was formed in 2015 to develop BIM solutions for the whole life-cycle of construction. The strategic objective of the cluster is to develop an innovative digital construction environment as well as VDC new product development, Grid and e-construction portal to increase the international competitiveness and sales of Estonian businesses in the construction field. The cluster is equally co-funded by European Structural and Investment Funds through Enterprise Estonia and by the members of the cluster with a total budget of 600 000 euros for the period 2016–2018. ==== France ==== The French arm of buildingSMART, called Mediaconstruct (existing since 1989), is supporting digital transformation in France. 
A building transition digital plan – French acronym PTNB – was created in 2013 (mandated since 2015 to 2017 and under several ministries). A 2013 survey of European BIM practice showed France in last place, but, with government support, in 2017 it had risen to third place with more than 30% of real estate projects carried out using BIM. PTNB was superseded in 2018 by Plan BIM 2022, administered by an industry body, the Association for the Development of Digital in Construction (AND Construction), founded in 2017, and supported by a digital platform, KROQI, developed and launched in 2017 by CSTB (France's Scientific and Technical Centre for Building). ==== Germany ==== In December 2015, the German minister for transport Alexander Dobrindt announced a timetable for the introduction of mandatory BIM for German road and rail projects from the end of 2020. Speaking in April 2016, he said digital design and construction must become standard for construction projects in Germany, with Germany two to three years behind The Netherlands and the UK in aspects of implementing BIM. BIM was piloted in many areas of German infrastructure delivery and in July 2022 Volker Wissing, Federal Minister for Digital and Transport, announced that, from 2025, BIM will be used as standard in the construction of federal trunk roads in addition to the rail sector. ==== Ireland ==== In November 2017, Ireland's Department for Public Expenditure and Reform launched a strategy to increase use of digital technology in delivery of key public works projects, requiring the use of BIM to be phased in over the next four years. ==== Italy ==== Through the new D.l. 50, in April 2016 Italy has included into its own legislation several European directives including 2014/24/EU on Public Procurement. The decree states among the main goals of public procurement the "rationalization of designing activities and of all connected verification processes, through the progressive adoption of digital methods and electronic instruments such as Building and Infrastructure Information Modelling". A norm in 8 parts is also being written to support the transition: UNI 11337-1, UNI 11337-4 and UNI 11337-5 were published in January 2017, with five further chapters to follow within a year. In early 2018 the Italian Ministry of Infrastructure and Transport issued a decree (DM 01/12/17) creating a governmental BIM Mandate compelling public client organisations to adopt a digital approach by 2025, with an incremental obligation which will start on 1 January 2019. ==== Lithuania ==== Lithuania is moving towards adoption of BIM infrastructure by founding a public body "Skaitmeninė statyba" (Digital Construction), which is managed by 13 associations. Also, there is a BIM work group established by Lietuvos Architektų Sąjunga (a Lithuanian architects body). The initiative intends Lithuania to adopt BIM, Industry Foundation Classes (IFC) and National Construction Classification as standard. An international conference "Skaitmeninė statyba Lietuvoje" (Digital Construction in Lithuania) has been held annually since 2012. ==== The Netherlands ==== On 1 November 2011, the Rijksgebouwendienst, the agency within the Dutch Ministry of Housing, Spatial Planning and the Environment that manages government buildings, introduced the Rgd BIM Standard, which it updated on 1 July 2012. ==== Norway ==== In Norway BIM has been used increasingly since 2008. Several large public clients require use of BIM in open formats (IFC) in most or all of their projects. 
The Government Building Authority bases its processes on BIM in open formats to increase process speed and quality, and all large and several small and medium-sized contractors use BIM. National BIM development is centred around the local organisation, buildingSMART Norway, which represents 25% of the Norwegian construction industry. ==== Poland ==== BIMKlaster (BIM Cluster) is a non-governmental, non-profit organisation established in 2012 with the aim of promoting BIM development in Poland. In September 2016, the Ministry of Infrastructure and Construction began a series of expert meetings concerning the application of BIM methodologies in the construction industry. ==== Portugal ==== Created in 2015 to promote the adoption of BIM in Portugal and its normalisation, the Technical Committee for BIM Standardisation, CT197-BIM, has created the first strategic document for construction 4.0 in Portugal, aiming to align the country's industry around a common vision, integrated and more ambitious than a simple technology change. ==== Russia ==== The Russian government has approved a list of regulations that provide for the creation of a legal framework for the use of information modeling of buildings in construction and encourages the use of BIM in government projects. ==== Slovakia ==== The BIM Association of Slovakia, "BIMaS", was established in January 2013 as the first Slovak professional organisation focused on BIM. Although there are neither standards nor legislative requirements to deliver projects in BIM, many architects, structural engineers and contractors, plus a few investors, are already applying BIM. A Slovak implementation strategy created by BIMaS and supported by the Chamber of Civil Engineers and Chamber of Architects has yet to be approved by Slovak authorities due to their low interest in such innovation. ==== Spain ==== A July 2015 meeting at Spain's Ministry of Infrastructure [Ministerio de Fomento] launched the country's national BIM strategy, making BIM a mandatory requirement on public sector projects with a possible starting date of 2018. Following a February 2015 BIM summit in Barcelona, professionals in Spain established a BIM commission (ITeC) to drive the adoption of BIM in Catalonia. ==== Switzerland ==== Through the initiative of buildingSmart Switzerland from 2009, and then in 2013, BIM awareness was raised among a broader community of engineers and architects by the open competition for Basel's Felix Platter Hospital, for which a BIM coordinator was sought. BIM has also been a subject of events by the Swiss Society for Engineers and Architects, SIA. ==== United Kingdom ==== In May 2011 UK Government Chief Construction Adviser Paul Morrell called for BIM adoption on UK government construction projects. Morrell also told construction professionals to adopt BIM or be "Betamaxed out". In June 2011 the UK government published its BIM strategy, announcing its intention to require collaborative 3D BIM (with all project and asset information, documentation and data being electronic) on its projects by 2016. Initially, compliance would require building data to be delivered in a vendor-neutral 'COBie' format, thus overcoming the limited interoperability of BIM software suites available on the market. The UK Government BIM Task Group led the government's BIM programme and requirements, including a free-to-use set of UK standards and tools that defined 'level 2 BIM'.
In April 2016, the UK Government published a new central web portal as a point of reference for the industry for 'level 2 BIM'. The work of the BIM Task Group then continued under the stewardship of the Cambridge-based Centre for Digital Built Britain (CDBB), announced in December 2017 and formally launched in early 2018. Outside of government, industry adoption of BIM since 2016 has been led by the UK BIM Alliance, an independent, not-for-profit, collaboratively-based organisation formed to champion and enable the implementation of BIM, and to connect and represent organisations, groups and individuals working towards digital transformation of the UK's built environment industry. In November 2017, the UK BIM Alliance merged with the UK and Ireland chapter of BuildingSMART. In October 2019, CDBB, the UK BIM Alliance and the BSI Group launched the UK BIM Framework. Superseding the BIM levels approach, the framework describes an overarching approach to implementing BIM in the UK, giving free guidance on integrating the international ISO 19650 series of standards into UK processes and practice. National Building Specification (NBS) has published research into BIM adoption in the UK since 2011, and in 2020 published its 10th annual BIM report. In 2011, 43% of respondents had not heard of BIM; in 2020 73% said they were using BIM. === North America === ==== Canada ==== BIM is not mandatory in Canada. Several organizations support BIM adoption and implementation in Canada: the Canada BIM Council (CANBIM, founded in 2008), the Institute for BIM in Canada, and buildingSMART Canada (the Canadian chapter of buildingSMART International). Public Services and Procurement Canada (formerly Public Works and Government Services Canada) is committed to using non-proprietary or "OpenBIM" BIM standards and avoids specifying any specific proprietary BIM format. Designers are required to use the international standards of interoperability for BIM (IFC). ==== United States ==== The Associated General Contractors of America and US contracting firms have developed various working definitions of BIM that describe it generally as: an object-oriented building development tool that utilizes 5-D modeling concepts, information technology and software interoperability to design, construct and operate a building project, as well as communicate its details. Although the concept of BIM and relevant processes are being explored by contractors, architects and developers alike, the term itself has been questioned and debated with alternatives including Virtual Building Environment (VBE) also considered. Unlike some countries such as the UK, the US has not adopted a set of national BIM guidelines, allowing different systems to remain in competition. In 2021, the National Institute of Building Sciences (NIBS) looked at applying UK BIM experiences to developing shared US BIM standards and processes. The US National BIM Standard had largely been developed through volunteer efforts; NIBS aimed to create a national BIM programme to drive effective adoption at a national scale. BIM is seen to be closely related to Integrated Project Delivery (IPD) where the primary motive is to bring the teams together early on in the project. A full implementation of BIM also requires the project teams to collaborate from the inception stage and formulate model sharing and ownership contract documents. 
The American Institute of Architects has defined BIM as "a model-based technology linked with a database of project information", and this reflects the general reliance on database technology as the foundation. In the future, structured text documents such as specifications may be able to be searched and linked to regional, national, and international standards. === Africa === ==== Nigeria ==== BIM has the potential to play a vital role in the Nigerian AEC sector. In addition to its potential clarity and transparency, it may help promote standardization across the industry. For instance, Utiome suggests that, in conceptualizing a BIM-based knowledge transfer framework from industrialized economies to urban construction projects in developing nations, generic BIM objects can benefit from rich building information within specification parameters in product libraries, and be used for efficient, streamlined design and construction. Similarly, an assessment of the current 'state of the art' by Kori found that medium and large firms were leading the adoption of BIM in the industry. Smaller firms were less advanced with respect to process and policy adherence. There has been little adoption of BIM in the built environment due to construction industry resistance to changes or new ways of doing things. The industry is still working with conventional 2D CAD systems in services and structural designs, although production could be in 3D systems. There is virtually no utilisation of 4D and 5D systems. BIM Africa Initiative, primarily based in Nigeria, is a non-profit institute advocating the adoption of BIM across Africa. Since 2018, it has been engaging with professionals and the government towards the digital transformation of the built industry. Produced annually by its research and development committee, the African BIM Report gives an overview of BIM adoption across the African continent. ==== South Africa ==== The South African BIM Institute, established in May 2015, aims to enable technical experts to discuss digital construction solutions that can be adopted by professionals working within the construction sector. Its initial task was to promote the SA BIM Protocol. There are no mandated or national best practice BIM standards or protocols in South Africa. Organisations implement company-specific BIM standards and protocols at best (there are isolated examples of cross-industry alliances). === Oceania === ==== Australia ==== In February 2016, Infrastructure Australia recommended: "Governments should make the use of Building Information Modelling (BIM) mandatory for the design of large-scale complex infrastructure projects. In support of a mandatory rollout, the Australian Government should commission the Australasian Procurement and Construction Council, working with industry, to develop appropriate guidance around the adoption and use of BIM; and common standards and protocols to be applied when using BIM". ==== New Zealand ==== In 2015, many projects in the rebuilding of Christchurch were being assembled in detail on a computer using BIM well before workers set foot on the site. The New Zealand government started a BIM acceleration committee, as part of a productivity partnership with the goal of 20 per cent more efficiency in the construction industry by 2020. Today, BIM use is still not mandated in New Zealand, although several challenges have been identified for its implementation in the country.
However, members of the AEC industry and academia have developed a national BIM handbook providing definitions, case studies and templates. == Purposes or dimensionality == Some purposes or uses of BIM may be described as 'dimensions'. However, there is little consensus on definitions beyond 5D. Some organisations dismiss the term; for example, the UK Institution of Structural Engineers does not recommend using nD modelling terms beyond 4D, adding "cost (5D) is not really a 'dimension'." === 3D === 3D BIM, an acronym for three-dimensional building information modeling, refers to the graphical representation of an asset's geometric design, augmented by information describing attributes of individual components. 3D BIM work may be undertaken by professional disciplines such as architectural, structural, and MEP, and the use of 3D models enhances coordination and collaboration between disciplines. A 3D virtual model can also be created by creating a point cloud of the building or facility using laser scanning technology. === 4D === 4D BIM, an acronym for 4-dimensional building information modeling, refers to the intelligent linking of individual 3D CAD components or assemblies with time- or scheduling-related information. The term 4D refers to the fourth dimension: time, i.e. 3D plus time. 4D modelling enables project participants (architects, designers, contractors, clients) to plan, sequence the physical activities, visualise the critical path of a series of events, mitigate the risks, report and monitor progress of activities through the lifetime of the project. 4D BIM enables a sequence of events to be depicted visually on a time line that has been populated by a 3D model, augmenting traditional Gantt charts and critical path (CPM) schedules often used in project management. Construction sequences can be reviewed as a series of problems using 4D BIM, enabling users to explore options, manage solutions and optimize results. As an advanced construction management technique, it has been used by project delivery teams working on larger projects. 4D BIM has traditionally been used for higher end projects due to the associated costs, but technologies are now emerging that allow the process to be used by laymen or to drive processes such as manufacture. === 5D === 5D BIM, an acronym for 5-dimensional building information modeling, refers to the intelligent linking of individual 3D components or assemblies with time schedule (4D BIM) constraints and then with cost-related information. 5D models enable participants to visualise construction progress and related costs over time. This BIM-centric project management technique has potential to improve management and delivery of projects of any size or complexity. In June 2016, McKinsey & Company identified 5D BIM technology as one of five big ideas poised to disrupt construction. It defined 5D BIM as "a five-dimensional representation of the physical and functional characteristics of any project. It considers a project's time schedule and cost in addition to the standard spatial design parameters in 3-D."
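A schematic sketch of what 4D/5D linking amounts to in data terms follows. It is not any BIM tool's actual data model; the element names, dates and rates are invented purely to show model components tied to schedule (time) and unit cost, from which a cost-over-time figure can be read off.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Element:
    """One model component linked to schedule (4D) and cost (5D) information."""
    name: str
    quantity: float      # e.g. m3 of concrete or m2 of cladding
    unit_cost: float     # 5D: cost per unit quantity
    start: date          # 4D: planned start
    finish: date         # 4D: planned finish

elements = [
    Element("Foundations",   120.0, 310.0, date(2024, 3, 1),  date(2024, 3, 20)),
    Element("Frame level 1",  80.0, 450.0, date(2024, 3, 21), date(2024, 4, 15)),
    Element("Cladding",      600.0,  95.0, date(2024, 4, 16), date(2024, 5, 10)),
]

def cost_committed_by(cutoff: date) -> float:
    """Total cost of elements scheduled to be complete by the cutoff date."""
    return sum(e.quantity * e.unit_cost for e in elements if e.finish <= cutoff)

print(cost_committed_by(date(2024, 4, 30)))  # 73200.0 -- foundations + frame only
print(cost_committed_by(date(2024, 5, 31)))  # 130200.0 -- all three work packages
```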
=== 6D === 6D BIM, an acronym for 6-dimensional building information modeling, is sometimes used to refer to the intelligent linking of individual 3D components or assemblies with all aspects of project life-cycle management information. However, there is less consensus about the definition of 6D BIM; it is also sometimes used to cover use of BIM for sustainability purposes. In the project life cycle context, a 6D model is usually delivered to the owner when a construction project is finished. The "As-Built" BIM model is populated with relevant building component information such as product data and details, maintenance/operation manuals, cut sheet specifications, photos, warranty data, web links to product online sources, manufacturer information and contacts, etc. This database is made accessible to the users/owners through a customized proprietary web-based environment. This is intended to aid facilities managers in the operation and maintenance of the facility. The term is less commonly used in the UK and has been replaced with reference to the Asset Information Requirements (AIR) and an Asset Information Model (AIM) as specified in BS EN ISO 19650-3:2020. == See also ==
Data model
Design computing
Digital twin (the physical manifestation instrumented and connected to the model)
BCF
GIS
Digital Building Logbook
Landscape design software
Lean construction
List of BIM software
Macro BIM
Open-source 3D file formats
OpenStreetMap
Pre-fire planning
System information modelling
Whole Building Design Guide
Facility management (or Building management)
Building automation (and Building management systems)
== Notes == == References == == Further reading ==
Kensek, Karen (2014). Building Information Modeling, Routledge. ISBN 978-0-415-71774-8
Kensek, Karen and Noble, Douglas (2014). Building Information Modeling: BIM in Current and Future Practice, Wiley. ISBN 978-1-118-76630-9
Eastman, Chuck; Teicholz, Paul; Sacks, Rafael; Liston, Kathleen (2011). BIM Handbook: A Guide to Building Information Modeling for Owners, Managers, Designers, Engineers, and Contractors (2 ed.). John Wiley. ISBN 978-0-470-54137-1.
Lévy, François (2011). BIM in Small-Scale Sustainable Design, Wiley. ISBN 978-0470590898
Weygant, Robert S. (2011). BIM Content Development: Standards, Strategies, and Best Practices, Wiley. ISBN 978-0-470-58357-9
Hardin, Brad (2009). Martin Viveros (ed.). BIM and Construction Management: Proven Tools, Methods and Workflows. Sybex. ISBN 978-0-470-40235-1.
Smith, Dana K. and Tardif, Michael (2009). Building Information Modeling: A Strategic Implementation Guide for Architects, Engineers, Constructors, and Real Estate Asset Managers, Wiley. ISBN 978-0-470-25003-7
Underwood, Jason, and Isikdag, Umit (2009). Handbook of Research on Building Information Modeling and Construction Informatics: Concepts and Technologies, Information Science Publishing. ISBN 978-1-60566-928-1
Krygiel, Eddy and Nies, Brad (2008). Green BIM: Successful Sustainable Design with Building Information Modeling, Sybex. ISBN 978-0-470-23960-5
Kymmell, Willem (2008). Building Information Modeling: Planning and Managing Construction Projects with 4D CAD and Simulations, McGraw-Hill Professional. ISBN 978-0-07-149453-3
Jernigan, Finith (2007). BIG BIM little bim. 4Site Press. ISBN 978-0-9795699-0-6.
Wikipedia/Facility_information_model
The term process model is used in various contexts. For example, in business process modeling the enterprise process model is often referred to as the business process model. == Overview == Process models are processes of the same nature that are classified together into a model. Thus, a process model is a description of a process at the type level. Since the process model is at the type level, a process is an instantiation of it. The same process model is used repeatedly for the development of many applications and thus has many instantiations. One possible use of a process model is to prescribe how things must/should/could be done, in contrast to the process itself, which is really what happens. A process model is roughly an anticipation of what the process will look like. What the process shall be will be determined during actual system development. The goals of a process model are to be:
Descriptive: track what actually happens during a process; take the point of view of an external observer who looks at the way a process has been performed and determines the improvements that must be made to make it perform more effectively or efficiently.
Prescriptive: define the desired processes and how they should/could/might be performed; establish rules, guidelines, and behavior patterns which, if followed, would lead to the desired process performance; these can range from strict enforcement to flexible guidance.
Explanatory: provide explanations about the rationale of processes; explore and evaluate the several possible courses of action based on rational arguments; establish an explicit link between processes and the requirements that the model needs to fulfill; pre-define points at which data can be extracted for reporting purposes.
== Purpose == From a theoretical point of view, meta-process modeling explains the key concepts needed to describe what happens in the development process, on what, when it happens, and why. From an operational point of view, meta-process modeling is aimed at providing guidance for method engineers and application developers. The activity of modeling a business process usually predicates a need to change processes or identify issues to be corrected. This transformation may or may not require IT involvement, although that is a common driver for the need to model a business process. Change management programmes are desired to put the processes into practice. With advances in technology from larger platform vendors, the vision of business process models (BPM) becoming fully executable (and capable of round-trip engineering) is coming closer to reality every day. Supporting technologies include Unified Modeling Language (UML), model-driven architecture, and service-oriented architecture. Process modeling addresses the process aspects of an enterprise business architecture, leading to an all-encompassing enterprise architecture. The relationships of business processes in the context of the rest of the enterprise systems, data, organizational structure, strategies, etc. create greater capabilities in analyzing and planning a change. One real-world example is in corporate mergers and acquisitions: understanding the processes in both companies in detail allows management to identify redundancies, resulting in a smoother merger. Process modeling has always been a key aspect of business process reengineering and of continuous improvement approaches such as Six Sigma.
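The type/instance distinction in the Overview above can be made concrete with a short sketch. The class and step names below are invented for illustration; the point is simply that the process model is the type-level description, and each enactment of it is a separate instance.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProcessModel:
    """Type level: the prescribed, reusable description of the process."""
    name: str
    steps: List[str]

@dataclass
class ProcessInstance:
    """Instance level: one concrete enactment of a process model."""
    model: ProcessModel
    completed: List[str] = field(default_factory=list)

    def perform(self, step: str) -> None:
        if step not in self.model.steps:
            raise ValueError(f"'{step}' is not prescribed by the model")
        self.completed.append(step)

change_request = ProcessModel("Change request", ["submit", "review", "approve", "deploy"])
ticket_42 = ProcessInstance(change_request)   # one of many possible instantiations
ticket_42.perform("submit")
ticket_42.perform("review")
print(ticket_42.completed)                    # ['submit', 'review']
```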
== Classification of process models == === By coverage === There are five types of coverage for which the term process model has been defined differently: Activity-oriented: a related set of activities conducted for the specific purpose of product definition; a set of partially ordered steps intended to reach a goal. Product-oriented: a series of activities that cause successive product transformations to reach the desired product. Decision-oriented: a set of related decisions conducted for the specific purpose of product definition. Context-oriented: a sequence of contexts causing successive product transformations under the influence of a decision taken in a context. Strategy-oriented: allows building models representing multi-approach processes and planning different possible ways to elaborate the product based on the notion of intention and strategy. === By alignment === Processes can be of different kinds. These definitions "correspond to the various ways in which a process can be modelled". Strategic processes investigate alternative ways of doing a thing and eventually produce a plan for doing it; they are often creative and require human co-operation, so alternative generation and selection from among alternatives are very critical activities. Tactical processes help in the achievement of a plan; they are more concerned with the tactics to be adopted for actual plan achievement than with the development of a plan of achievement. Implementation processes are the lowest-level processes; they are directly concerned with the details of the what and how of plan implementation. === By granularity === Granularity refers to the level of detail of a process model and affects the kind of guidance, explanation and trace that can be provided. Coarse granularity restricts these to a rather limited level of detail, whereas fine granularity provides more detailed capability. The nature of granularity needed is dependent on the situation at hand. Project managers, customer representatives, and general, top-level, or middle management require a rather coarse-grained process description, as they want to gain an overview of time, budget, and resource planning for their decisions. In contrast, software engineers, users, testers, analysts, or software system architects will prefer a fine-grained process model, where the details of the model can provide them with instructions and important execution dependencies such as the dependencies between people. While notations for fine-grained models exist, most traditional process models are coarse-grained descriptions. Process models should, ideally, provide a wide range of granularity (e.g. Process Weaver). === By flexibility === It was found that while process models were prescriptive, in actual practice departures from the prescription can occur. Thus, frameworks for adopting methods evolved so that systems development methods match specific organizational situations and thereby improve their usefulness. The development of such frameworks is also called situational method engineering. Method construction approaches can be organized along a flexibility spectrum ranging from 'low' to 'high'. Lying at the 'low' end of this spectrum are rigid methods, whereas at the 'high' end is modular method construction. Rigid methods are completely pre-defined and leave little scope for adapting them to the situation at hand. On the other hand, modular methods can be modified and augmented to fit a given situation.
Selecting a rigid method allows each project to choose its method from a panel of rigid, pre-defined methods, whereas selecting a path within a method consists of choosing the appropriate path for the situation at hand. Finally, selecting and tuning a method allows each project to select methods from different approaches and tune them to the project's needs. == Quality of methods == Since the quality of process models is being discussed here, the quality of modeling techniques needs to be considered as an important contributor to the quality of process models. In most existing frameworks created for understanding quality, the line between the quality of modeling techniques and the quality of the models that result from applying those techniques is not clearly drawn. The discussion below therefore covers both the quality of process modeling techniques and the quality of process models, in order to clearly differentiate the two. Various frameworks have been developed to help in understanding the quality of process modeling techniques; one example is the quality-based modeling evaluation framework, known as the Q-ME framework, which is argued to provide a set of well-defined quality properties and procedures to make an objective assessment of these properties possible. This framework also has the advantage of providing a uniform and formal description of the model elements within one or several model types using one modeling technique. In short, this makes it possible to assess both the product quality and the process quality of modeling techniques with regard to a set of properties that have been defined beforehand. Quality properties that relate to business process modeling techniques are: Expressiveness: the degree to which a given modeling technique is able to denote the models of any number and kinds of application domains. Arbitrariness: the degree of freedom one has when modeling one and the same domain. Suitability: the degree to which a given modeling technique is specifically tailored for a specific kind of application domain. Comprehensibility: the ease with which the way of working and way of modeling are understood by participants. Coherence: the degree to which the individual sub-models of a way of modeling constitute a whole. Completeness: the degree to which all necessary concepts of the application domain are represented in the way of modeling. Efficiency: the degree to which the modeling process uses resources such as time and people. Effectiveness: the degree to which the modeling process achieves its goal. To assess the quality of the Q-ME framework, it was used to evaluate the quality of the dynamic essentials modeling of the organisation (DEMO) business modeling technique. It is stated that the application of the Q-ME framework to the DEMO modeling technique revealed shortcomings of Q-ME. One in particular is that it does not include a quantifiable metric to express the quality of a business modeling technique, which makes it hard to compare the quality of different techniques in an overall rating. There is also a systematic approach to the quality measurement of modeling techniques known as complexity metrics, suggested by Rossi et al. (1996). Meta-model techniques are used as a basis for the computation of these complexity metrics. In comparison to the quality framework proposed by Krogstie, this quality measurement focuses more on the technical level than on the individual model level. Cardoso, Mendling, Neuman and Reijers (2006) used complexity metrics to measure the simplicity and understandability of a design.
This is supported by later research done by Mendling et al., who argued that without using quality metrics to help question the quality properties of a model, a simple process can be modeled in a complex and unsuitable way. This in turn can lead to lower understandability, higher maintenance cost and perhaps inefficient execution of the process in question. The quality of the modeling technique is therefore important in creating models that are themselves of quality and that contribute to the correctness and usefulness of models. == Quality of models == The earliest process models reflected the dynamics of the process, with a practical process obtained by instantiation in terms of relevant concepts, available technologies, specific implementation environments, process constraints and so on. An enormous amount of research has been done on the quality of models, but less attention has been paid to the quality of process models. Quality issues of process models cannot be evaluated exhaustively, but there are four main types of guidelines and frameworks in practice for doing so: top-down quality frameworks, bottom-up metrics related to quality aspects, empirical surveys related to modeling techniques, and pragmatic guidelines. Hommes quotes Wang et al. (1994) in stating that all the main characteristics of the quality of models can be grouped under two headings, namely the correctness and the usefulness of a model. Correctness ranges from the model's correspondence to the phenomenon that is modeled to its correspondence to the syntactical rules of the modeling, and it is independent of the purpose for which the model is used, whereas usefulness can be seen as the model being helpful for the specific purpose for which the model was constructed in the first place. Hommes also makes a further distinction between internal correctness (empirical, syntactical and semantic quality) and external correctness (validity). A common starting point for defining the quality of a conceptual model is to look at the linguistic properties of the modeling language, of which syntax and semantics are most often applied. A broader approach is based on semiotics rather than linguistics, as was done by Krogstie using the top-down quality framework known as SEQUAL. It defines several quality aspects based on relationships between a model, knowledge externalisation, the domain, a modeling language, and the activities of learning, taking action, and modeling. The framework does not, however, provide ways to determine various degrees of quality, but it has been used extensively for business process modeling in empirical tests. According to previous research done by Moody et al. using the conceptual model quality framework proposed by Lindland et al. (1994) to evaluate the quality of process models, three levels of quality were identified: Syntactic quality: assesses the extent to which the model conforms to the grammar rules of the modeling language being used. Semantic quality: whether the model accurately represents user requirements. Pragmatic quality: whether the model can be understood sufficiently by all relevant stakeholders in the modeling process; that is, the model should enable its interpreters to make use of it for fulfilling their needs. From the research it was noticed that the quality framework was found to be both easy to use and useful in evaluating the quality of process models; however, it had limitations with regard to reliability and to the difficulty of identifying defects. These limitations led to refinement of the framework through subsequent research done by Krogstie.
This refined framework is the SEQUAL framework of Krogstie et al. (1995), further refined by Krogstie & Jørgensen (2002), which included three more quality aspects. Physical quality: whether the externalized model is persistent and available for the audience to make sense of it. Empirical quality: whether the model is modeled according to the established conventions of a given language. Social quality: the agreement between the stakeholders in the modeling domain. The dimensions of the conceptual quality framework are as follows. The modeling domain is the set of all statements that are relevant and correct for describing a problem domain. Language extension is the set of all statements that are possible given the grammar and vocabulary of the modeling languages used. Model externalization is the conceptual representation of the problem domain; it is defined as the set of statements about the problem domain that are actually made. Social actor interpretation and technical actor interpretation are the sets of statements that actors (human model users and the tools that interact with the model, respectively) 'think' the conceptual representation of the problem domain contains. Finally, participant knowledge is the set of statements that the human actors who are involved in the modeling process believe should be made to represent the problem domain. These quality dimensions were later divided into two groups that deal with the physical and social aspects of the model. In later work, Krogstie et al. stated that while the extension of the SEQUAL framework fixed some of the limitations of the initial framework, other limitations remain. In particular, the framework is too static in its view of semantic quality, mainly considering models, not modeling activities, and comparing these models to a static domain rather than seeing the model as a facilitator for changing the domain. Also, the framework's definition of pragmatic quality is quite narrow, focusing on understanding, in line with the semiotics of Morris, while newer research in linguistics and semiotics has focused beyond mere understanding, on how the model is used and affects its interpreters. The need for a more dynamic view in the semiotic quality framework is particularly evident when considering process models, which themselves often prescribe or even enact actions in the problem domain; hence a change to the model may also change the problem domain directly. That work discusses the quality framework in relation to active process models and suggests a revised framework based on this. Further work by Krogstie et al. (2006) revised the SEQUAL framework to be more appropriate for active process models by redefining physical quality with a narrower interpretation than in previous research. The other framework in use is the Guidelines of Modeling (GoM), based on general accounting principles, which comprises six principles: correctness; clarity, which deals with the comprehensibility and explicitness (system description) of model systems, where comprehensibility relates to the graphical arrangement of the information objects and therefore supports the understandability of a model; relevance, which relates to the model and the situation being presented; comparability, which involves the ability to compare models, that is, semantic comparison between two models; economic efficiency, where the cost incurred by the design process needs at least to be covered by the proposed cost cuttings and revenue increases.
Since the purpose of organizations in most cases is the maximization of profit, this principle defines the borderline for the modeling process. The last principle, systematic design, requires an accepted differentiation between diverse views within modeling. Correctness, relevance and economic efficiency are prerequisites for the quality of models and must be fulfilled, while the remaining guidelines are optional. The two frameworks SEQUAL and GoM share a limitation in that they cannot be used by people who are not competent in modeling: they provide major quality metrics but are not easily applicable by non-experts. The use of bottom-up metrics related to quality aspects of process models attempts to bridge this gap and make quality assessment accessible to non-experts in modeling, but it is mostly theoretical and no empirical tests have been carried out to support their use. Most experiments carried out relate to the relationship between metrics and quality aspects, and these works have been done individually by different authors: Canfora et al. study the connection mainly between count metrics (for example, the number of tasks or splits) and the maintainability of software process models; Cardoso validates the correlation between control flow complexity and perceived complexity; and Mendling et al. use metrics to predict control flow errors such as deadlocks in process models. The results reveal that an increase in the size of a model appears to reduce its quality and comprehensibility. Further work by Mendling et al. investigates the connection between metrics and understanding: while some metrics are confirmed regarding their effect, personal factors of the modeler, such as competence, are also revealed as important for understanding the models. The empirical surveys carried out so far still do not give clear guidelines or ways of evaluating the quality of process models, but a clear set of guidelines is necessary to guide modelers in this task. Pragmatic guidelines have been proposed by different practitioners, even though it is difficult to provide an exhaustive account of such guidelines from practice. Most of the guidelines are not easily put into practice, but the "label activities verb–noun" rule has been suggested by practitioners before and analyzed empirically. From this research, the value of process models is dependent not only on the choice of graphical constructs but also on their annotation with textual labels, which need to be analyzed. It was found that the verb–noun style results in models that are better understood than those using alternative labelling styles. From the earlier research and ways to evaluate process model quality it has been seen that the process model's size, structure, the expertise of the modeler and modularity affect its overall comprehensibility. Based on these findings a set of guidelines was presented: the 7 Process Modeling Guidelines (7PMG). These guidelines use the verb-object style, as well as guidelines on the number of elements in a model, the application of structured modeling, and the decomposition of a process model.
The guidelines are as follows: G1, minimize the number of elements in a model; G2, minimize the routing paths per element; G3, use one start and one end event; G4, model as structured as possible; G5, avoid OR routing elements; G6, use verb-object activity labels; G7, decompose a model with more than 50 elements. 7PMG still has limitations in its use. The first is a validity problem: 7PMG does not relate to the content of a process model, but only to the way this content is organized and represented. It does suggest ways of organizing different structures of the process model while the content is kept intact, but the pragmatic issue of what must be included in the model is still left out. The second limitation relates to the prioritizing of the guidelines: the derived ranking has a small empirical basis, as it relies on the involvement of only 21 process modelers. This could be seen on the one hand as a need for wider involvement of process modelers' experience, but it also raises the question of what alternative approaches may be available to arrive at a prioritization of the guidelines. == See also == Model selection Process (science) Process architecture Process calculus Process flow diagram Process ontology Process Specification Language == References == == External links == Modeling processes regarding workflow patterns "Abstraction Levels for Processes Presentation: Process Modeling Principles" (PDF). Archived from the original (PDF) on 2011-07-14. Retrieved 2008-06-12. American Productivity and Quality Center (APQC), a worldwide organization for process and performance improvement The Application of Petri Nets to Workflow Management, W.M.P. van der Aalst, 1998.
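Several of the 7PMG guidelines listed above (model size, routing paths per element, single start and end events) can be checked mechanically against a simple graph representation of a process model. The sketch below is illustrative only: the node/edge representation and the G2 threshold are assumptions, and only the 50-element limit of G7 comes from the guidelines themselves.

```python
# Minimal sketch: checking a process model graph against a few 7PMG guidelines.
# Nodes are dicts with an "id" and a "type" ("start", "end", "task", "gateway");
# edges are (source_id, target_id) pairs. This representation is an assumption.

def check_7pmg(nodes, edges, max_elements=50):
    findings = []
    if len(nodes) + len(edges) > max_elements:          # G1 / G7: keep models small
        findings.append("G7: model has more than 50 elements; consider decomposition")
    starts = [n for n in nodes if n["type"] == "start"]
    ends = [n for n in nodes if n["type"] == "end"]
    if len(starts) != 1 or len(ends) != 1:              # G3: one start and one end event
        findings.append("G3: model should have exactly one start and one end event")
    for n in nodes:                                      # G2: minimize routing paths per element
        degree = sum(1 for s, t in edges if s == n["id"] or t == n["id"])
        if degree > 3:                                   # threshold chosen for illustration
            findings.append(f"G2: element {n['id']} has {degree} routing paths")
    return findings

nodes = [{"id": "s", "type": "start"}, {"id": "a", "type": "task"}, {"id": "e", "type": "end"}]
edges = [("s", "a"), ("a", "e")]
print(check_7pmg(nodes, edges) or "no findings")
```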
Wikipedia/Process_modeling
Frameworx is an enterprise architecture framework geared towards communications service providers. It is developed by the TM Forum. == Structure == Frameworx consists of four frameworks: Application Framework (sometimes referred to as the Telecom Application Map (TAM)) Business Process Framework (eTOM) Information Framework (sometimes referred to as the Shared Information/Data (SID) model) Integration Frameworks (which is developed in the TM Forum Integration Program (TIP)) === Information Framework === The Information Framework (formally Shared Information/Data Model or SID) is a unified reference data model providing a single set of terms for business objects in telecommunications. The objective is to enable people in different departments, companies or geographical locations to use the same terms to describe the same real world objects, practices and relationships. It is part of Frameworx. The Information Framework, as the Frameworx information model, provides an information/data reference model and a common information/data vocabulary from a business as well as a systems perspective. The Information Framework uses Unified Modeling Language to formalize the expression of the needs of a particular stakeholder viewpoint. The Information Framework provides the common language for communicating the concerns of the four major groups of constituents (stakeholders) represented by the Frameworx Viewpoints - Business, System, Implementation and Deployment, as defined in the Frameworx Lifecycle. Used in combination with the Business Process Framework (eTOM) business process and activity descriptions and the Telecom Application Map the Information Framework make it possible to bridge between the business and Information Technology groups within an organization by providing definitions that are understandable by the business, but are also rigorous enough to be used for software development. The Information Framework model takes inspiration from a wide variety of industry sources, but its principal origins are the Alliance Common Information Architecture (ACIA) created by a team led by Bill Brook from AT&T and BT Group and the Directory Enabled Networks - next generation (DEN-ng) model created by John Strassner. When initially released in 2000, the Information Framework model covered the business (BSS) arena well, and also the device management field well, but was insufficient in its ability to represent logical networks and capacity. These deficiencies are being addressed through revision of the model to include concepts such as topologies, but the history has resulted in poor utilisation of the model in certain telecom fields, such as inventory management. == Principles == Frameworx is based around these key principles. === Separation of Business Process from Component Implementation === When Operations Support Systems (OSSs) are linked together, the business processes they support become distributed across the IT estate. In effect the situation is reached where a process starts with application A, which processes some data and then knows that it must call application B, which also does some processing and then calls C, etc. The result of this is that it's extremely difficult to understand where any of these flows actually are (e.g. if the process flow is one intended to take a customer order, is it Application A or B or C that's currently handling that order?) and it's even more difficult to change the process owing to its distributed nature. 
Frameworx proposes that the process is managed as part of the centralised infrastructure, using a workflow engine that is responsible for controlling the flow of the business process between the applications. Therefore, the workflow engine would initiate a process on application A, which would then return control to the workflow engine, which would then call application B, and so on. In this way it is always possible to find out where an individual process flow is, since it is controlled by the central workflow engine, and process modifications can be made using the engine's process definition tools. Clearly some lower level process flows will be embedded in the individual applications, but this should be below the level of business-significant processing (i.e. below the level at which business policy and rules are implemented). === Loosely Coupled Distributed System === "Loosely coupled" means that each application is relatively independent of the other applications in the overall system. Therefore, in a loosely coupled environment, one application can be altered without the alteration necessarily affecting others. Taken to the extreme, this can sometimes be viewed as producing the ability to "plug and play" applications, where they are so independent that they can be changed without affecting the overall system behaviour. That extreme is considered an unlikely nirvana at the present time. The term "distributed system" emphasises that Frameworx is not based on a Communication Service Provider (CSP) using a single monolithic application to manage all its activities, but on a set of integrated and co-operating applications. === Shared Information Model === Integrating OSSs means that data must be shared between the applications. For this to be effective, either each application must understand how every other application understands/interprets that part of the data that is shared, or there must be a common model of the shared data. To understand this, consider an order handling application (application A) which has gone through a process to enter a customer order and now needs to send out a bill using application B (a billing system). Application A will have a record of the customer address and it therefore needs to ensure that application B sends the bill to this address. Passing this data between the systems simply requires a common format for the address information – each system needs to expect the same number of address lines, with each line being the same length. That is fairly straightforward. But imagine the difficulty that would occur if the ordering application worked on products that consist of bundles of sub-products (e.g. a broadband access product made from a copper line, a modem, a set of filters and a broadband conversion), whereas the billing application only expected single product/order lines. Trying to convert hierarchical products into non-hierarchical ones without losing information would not be possible. A single information model for data that is shared between applications in this way provides a solution to this problem. The TMF solution to this is called the Shared Information/Data Model (SID). === Common communications infrastructure === Through the mid-1980s, computer-based OSSs were developed as stand-alone applications.
However, during the early 1990s it became apparent that employing these as purely isolated applications was highly inefficient, since it led to a situation where, for example, orders would be taken on one system but the details would then need to be re-keyed into another in order to configure the relevant network equipment. Major efficiency gains were shown to be available from linking the standalone OSSs together, to allow such features as "Flow-through provisioning", where an order could be placed online and automatically result in equipment being provisioned, without any human intervention. However, for large operators with many hundreds of separate OSSs, the proliferation of interfaces became a serious problem. Each OSS needed to "talk to" many others, leading to the number of interfaces increasing with the square of the number of OSSs. Frameworx describes the use of a Common Communications Infrastructure (CCI). In this model, OSSs interface with the CCI rather than directly with each other. The CCI thus allows applications to work together using the CCI to link them together. In this way, each application only requires one interface (to the CCI) rather than many (to other applications). The complexity is therefore reduced to one of order n, rather than n2. The CCI may also provide other services, including security, data translation, etc. === Contract defined interfaces === Given the description above of how applications interface to the CCI, it's clear that we need a way of documenting those interfaces, both in terms of the technology employed (e.g. is it Java/JMS or Web services/SOAP?) but also the functionality of the application, the data used, the pre- and post-conditions, etc. The Frameworx contract specification provides a means to document these interfaces, and these are therefore contract defined interfaces. Frameworx contracts can be seen as extensions of Application Programming Interface (API) specifications. == Deliverables == === Process model === The eTOM (enhanced Telecom Operations Map, pronounced ee-tom) is the Frameworx business process framework. === Shared information model === The Frameworx Information is the Shared Information/Data Model (SID). === Lifecycle model === The Frameworx lifecycle model is aimed at defining the use and deployment of Frameworx within an organisation, and provides a framework for using the SID, eTOM and the Frameworx architecture. The model is based on considerable earlier work, including Zachman Framework, Kernighan, Yourdon, and the Object Management Group's Model Driven Architecture. The Frameworx lifecycle divides systems development into 4 stages: requirements, system design, implementation and operation. === Contract Specifications === As stated earlier, the Frameworx Contract is the fundamental unit of interoperability in a Frameworx system. Interoperability is important for each of the four views defined by the Frameworx Lifecycle. For example, the Contract is used to define the service to be delivered, as well as to specify information and code that implement the service. The Contract is also used to monitor, administer and maintain the service and ensure that any external obligations of the contract (e.g., from an SLA (Service Level Agreement)) are met and to define what measures to take if they are violated in some way. === Telecom Application Map === The Applications Framework (formally Telecom Application Map (TAM)) is one of the primary Frameworx artifacts. 
It considers the role and the functionality of the various applications that deliver OSS (Operations Support System) and BSS (Business Support System) capability. In doing so it enables procurement documents to be written with reference to the framework, thereby providing clear and unambiguous statements of the functionality required of any given application; it also allows functional overlaps of existing applications to be identified, thereby facilitating rationalization, and functional gaps to be identified. The level of functional decomposition is such that these benefits can be realized without being over-prescriptive. Within the TM Forum there is a strong definition of process and data. The Applications Framework provides a formalized way of grouping together function and data into recognised components, which would then be regarded as potentially procurable as either applications or services. An application or service (for example, a web service) can be a relatively coarse-grained piece of software that implements functions/processes and acts on or uses data. In daily life applications include word processors or mail clients; in OSS terms an application is something such as a CRM component, a billing system or an inventory solution, although these can be decomposed to some extent; for example, a billing system will include a number of smaller applications, such as a rating engine. An "application" is defined as a set of one or more software artifacts comprising well-defined functions, data, business flows, rules and interfaces. This would include a data model, for data used to interface to and within an application; policies, for governing external and internal application resources; a flow model, for functionality within the application; and contract specifications for externally visible interfaces to the functionality within the application. Applications are implementable as deployable packages and are procurable in the system marketplace. The Applications Framework is neither a part of the Information Framework nor of the Business Process Framework (eTOM) definitions, but it links to both in an easily understandable way and also provides a mapping between them. == External links == TM Forum Frameworx page Telecommunications OSS and BSS The TMF Reference page == See also == Model-driven architecture Service-oriented architecture Telecommunications Management Network Web service
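The difficulty, described under the shared information model principle above, of exchanging hierarchical product bundles with a system that only understands flat order lines can be sketched in a few lines of code. The class names and the flattening function below are illustrative assumptions; they are not taken from the SID specification.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Product:
    """A product that may be a bundle of sub-products, as in the broadband example above."""
    name: str
    components: List["Product"] = field(default_factory=list)

def flatten(product: Product) -> List[str]:
    """Flatten a hierarchical bundle into simple order lines for a system that only
    understands single product/order lines. The hierarchy itself (which component
    belongs to which bundle) is lost in the process."""
    lines = [product.name]
    for child in product.components:
        lines.extend(flatten(child))
    return lines

broadband = Product("broadband access", [
    Product("copper line"),
    Product("modem"),
    Product("set of filters"),
])
print(flatten(broadband))   # ['broadband access', 'copper line', 'modem', 'set of filters']
```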
Wikipedia/Frameworx_Shared_Information/Data_Model
Operations support systems (OSS), operational support systems in British usage, or Operation System (OpS) in NTT are computer systems used by telecommunications service providers to manage their networks (e.g., telephone networks). They support management functions such as network inventory, service provisioning, network configuration and fault management. Together with business support systems (BSS), operations support systems support various end-to-end telecommunication services. BSS and OSS have their own data and service responsibilities. The two systems together are often abbreviated OSS/BSS, BSS/OSS or simply B/OSS. The acronym OSS is also used in a singular form to refer to all the Operations Support Systems viewed as a whole system. Different subdivisions of OSS have been proposed by the TM Forum, industrial research labs, or OSS vendors. In general, an OSS covers at least the following five functions: Network management systems Service delivery Service fulfillment, including the network inventory, activation and provisioning Service assurance Customer care == History == Before about 1970, many OSS activities were performed by manual administrative processes. However, it became obvious that much of this activity could be replaced by computers. In the next 5 years or so, the telephone companies created a number of computer systems (or software applications) which automated much of this activity. This was one of the driving factors for the development of the Unix operating system and the C programming language. The Bell System purchased their own product line of PDP-11 computers from Digital Equipment Corporation for a variety of OSS applications. OSS systems used in the Bell System include AMATPS, CSOBS, EADAS, Remote Memory Administration System (RMAS), Switching Control Center System (SCCS), Service Evaluation System (SES), Trunks Integrated Record Keeping System (TIRKS), and many more. OSS systems from this era are described in the Bell System Technical Journal, Bell Labs Record, and Telcordia Technologies (now part of Ericsson) SR-2275. Many OSS systems were initially not linked to each other and often required manual intervention. For example, consider the case where a customer wants to order a new telephone service. The ordering system would take the customer's details and details of their order, but would not be able to configure the telephone exchange directly—this would be done by a switch management system. Details of the new service would need to be transferred from the order handling system to the switch management system—and this would normally be done by a technician re-keying the details from one screen into another—a process often referred to as "swivel chair integration". This was clearly another source of inefficiency, so the focus for the next few years was on creating automated interfaces between the OSS applications—OSS integration. Cheap and simple OSS integration remains a major goal of most telecom companies. == Architecture == Much of the work on OSS has centered on defining its architecture. Put simply, there are four key elements of OSS: processes, the sequence of events; data, the information that is acted upon; applications, the components that implement processes to manage data; and technology, how the applications are implemented. During the 1990s, new OSS architecture definitions were produced by the ITU Telecommunication Standardization Sector (ITU-T) in its Telecommunications Management Network (TMN) model.
This established a 4-layer model of TMN applicable within an OSS: Business Management Level (BML) Service Management Level (SML) Network Management Level (NML) Element Management Level (EML) A fifth level is mentioned at times being the elements themselves, though the standards speak of only four levels. This was a basis for later work. Network management was further defined by the ISO using the FCAPS model—Fault, Configuration, Accounting, Performance and Security. This basis was adopted by the ITU-T TMN standards as the Functional model for the technology base of the TMN standards M.3000 – M.3599 series. Although the FCAPS model was originally conceived and is applicable for an IT enterprise network, it was adopted for use in the public networks run by telecommunication service providers adhering to ITU-T TMN standards. A big issue of network and service management is the ability to manage and control the network elements of the access and core networks. Historically, many efforts have been spent in standardization fora (ITU-T, 3GPP) in order to define standard protocol for network management, but with no success and practical results. On the other hand IETF SNMP protocol (Simple Network Management Protocol) has become the de facto standard for internet and telco management, at the EML-NML communication level. From 2000 and beyond, with the growth of the new broadband and VoIP services, the management of home networks is also entering the scope of OSS and network management. DSL Forum TR-069 specification has defined the CPE WAN Management Protocol (CWMP), suitable for managing home networks devices and terminals at the EML-NML interface. == TM Forum == The TM Forum, formerly the TeleManagement Forum, is an international membership organization of communications service providers and suppliers to the communications industry. While OSS is generally dominated by proprietary and custom technologies, TM Forum promotes standards and frameworks in OSS and BSS. By 2005, developments in OSS architecture were the results of the TM Forum's New Generation Operations Systems and Software (NGOSS) program, which was established in 2000. This established a set of principles that OSS integration should adopt, along with a set of models that provide standardized approaches. NGOSS was renamed Frameworx. === Frameworx models === An information model (the Shared Information/Data model, or SID) – now more commonly referred to as the Information Framework, A process model (the enhanced Telecom Operation Map, or eTOM) – now more commonly known as the Business Process Framework, An application model (the Telecom Applications Map) – now known as the Application Framework, an architecture (the Technology Neutral Architecture) and a lifecycle model. The TM Forum describes Frameworx as an architecture that is: "loosely coupled" distributed component based The components interact through a common communications vehicle (using an information exchange infrastructure; e.g., EAI, Web Services, EJB). The behavior can be controlled through the use of process management and/or policy management to orchestrate the functionality provided by the services offered by the components. The early focus of the TM Forum's NGOSS work was on building reference models to support a business stakeholder view on process, information and application interaction. Running in parallel were activities that supported an implementation stakeholder view on interface specifications to provide access to OSS capability (primarily MTNM). 
The MTNM work evolved into a set of Web Services providing Multi-Technology Operations System Interfaces MTOSI. Most recently, the OSS through Java initiative (OSS/J) joined the TMF to provide NGOSS-based BSS/OSS APIs. === Ongoing work - Open Digital Architecture (ODA) === Open Digital Architecture (ODA) offers an industry-agreed blueprint, language and set of key design principles to follow. It will provide pragmatic pathways for the journey from maintaining monolithic, legacy software solutions, towards managing nimble, cloud based capabilities that can be orchestrated using AI. It is a reference architecture that maps TM Forum’s Open APIs against technical and business platform functions. == See also == Business support system COSMOS (telecommunications) Loop maintenance operations system OA&M Service Evaluation System Switching Control Center System == References == == External links == Video: What is OSS/BSS? TM Forum OSS through Java initiative OSS News Review OSS Observer landing page of Analysys Mason Pipeline Magazine InsideTelephony OSS/BSS (2017.05.20 via Wayback Machine) Billing & OSS World OSS Line Telecommunications OSS and BSS Telcordia SR-2275
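The TMN layering and the FCAPS functional areas mentioned above are simple enumerations and can be written down directly. The sketch below does this in Python; the placement of example OSS functions on the layers is an illustrative assumption, not part of the ITU-T standards.

```python
from enum import Enum

class TMNLayer(Enum):
    """The four TMN management layers applicable within an OSS."""
    BUSINESS = "Business Management Level (BML)"
    SERVICE = "Service Management Level (SML)"
    NETWORK = "Network Management Level (NML)"
    ELEMENT = "Element Management Level (EML)"

# FCAPS: the ISO functional areas adopted for the technology base of TMN.
FCAPS = ["Fault", "Configuration", "Accounting", "Performance", "Security"]

# Assumed examples of OSS functions placed on the TMN layers, for illustration only.
examples = {
    TMNLayer.BUSINESS: "billing and SLA reporting",
    TMNLayer.SERVICE: "service provisioning and fulfilment",
    TMNLayer.NETWORK: "network inventory and fault correlation",
    TMNLayer.ELEMENT: "per-device configuration, e.g. via SNMP",
}
for layer, example in examples.items():
    print(f"{layer.value}: {example}")
```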
Wikipedia/Operational_Support_Systems
The Common Information Model (CIM) is an open standard that defines how managed elements in an IT environment are represented as a common set of objects and relationships between them. The Distributed Management Task Force maintains the CIM to allow consistent management of these managed elements, independent of their manufacturer or provider. == Overview == One way to describe CIM is to say that it allows multiple parties to exchange management information about these managed elements. However, this falls short of fully capturing CIM's ability not only to describe these managed elements and the management information, but also to actively control and manage them. By using a common model of information, management software can be written once and work with many implementations of the common model without complex and costly conversion operations or loss of information. The CIM standard is defined and published by the Distributed Management Task Force (DMTF). A related standard is Web-Based Enterprise Management (WBEM, also defined by the DMTF), which defines a particular implementation of CIM, including protocols for discovering and accessing such CIM implementations. == Schema and specifications == The CIM standard includes the CIM Infrastructure Specification and the CIM Schema: CIM Infrastructure Specification The CIM Infrastructure Specification defines the architecture and concepts of CIM, including a language by which the CIM Schema (including any extension schema) is defined, and a method for mapping CIM to other information models, such as SNMP. The CIM architecture is based upon UML, so it is object-oriented: the managed elements are represented as CIM classes and any relationships between them are represented as CIM associations. Inheritance allows specialization of common base elements into more specific derived elements. CIM Schema The CIM Schema is a conceptual schema which defines the specific set of objects and relationships between them that represent a common base for the managed elements in an IT environment. The CIM Schema covers most of today's elements in an IT environment, for example computer systems, operating systems, networks, middleware, services and storage. Classes can be, for example: CIM_ComputerSystem, CIM_OperatingSystem, CIM_Process, CIM_DataFile. The CIM Schema defines a common basis for representing these managed elements. Since most managed elements have product- and vendor-specific behavior, the CIM Schema is extensible in order to allow the producers of these elements to represent their specific features seamlessly together with the common base functionality defined in the CIM Schema. Updates to the CIM Schema are published regularly. CIM is the basis for most of the other DMTF standards (e.g. WBEM or SMASH). It is also the basis for the SMI-S standard for storage management. == Implementations == === Infrastructure Implementations === Many vendors provide implementations of CIM in various forms. Some operating systems provide a CIM implementation, for example: the Windows Management Instrumentation (WMI) API, available in Microsoft Windows 2000 and higher; the Windows Management Infrastructure (MI) API, for Microsoft Windows 2012 and higher; and some Linux distributions, through the SBLIM (Standards Based Linux Instrumentation for Manageability) project. Some implementations are independent of the systems they support, for example the Open Group's Pegasus and WSI's J WBEM Server. There is also a growing market of tools built around CIM.
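CIM's object-oriented structure (classes, inheritance, associations) can be mimicked in ordinary code. The sketch below reuses class names that do appear in the CIM Schema (CIM_ComputerSystem, CIM_OperatingSystem), but the attributes and the association shown are simplified illustrations and are not the normative schema definitions.

```python
from dataclasses import dataclass

@dataclass
class CIMManagedElement:
    """Simplified stand-in for a common CIM base class."""
    caption: str

@dataclass
class CIMComputerSystem(CIMManagedElement):
    """Corresponds loosely to CIM_ComputerSystem; attributes are illustrative."""
    hostname: str = ""

@dataclass
class CIMOperatingSystem(CIMManagedElement):
    """Corresponds loosely to CIM_OperatingSystem; attributes are illustrative."""
    version: str = ""

@dataclass
class RunningOS:
    """A simplified association linking a computer system to the OS it runs."""
    system: CIMComputerSystem
    os: CIMOperatingSystem

host = CIMComputerSystem(caption="Server", hostname="db01.example.org")
os_ = CIMOperatingSystem(caption="Operating system", version="6.1")
link = RunningOS(host, os_)
print(link.system.hostname, "runs", link.os.caption, link.os.version)
```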
=== Management Standards based on the CIM Schema === Standards organizations have defined management standards based on the CIM Schema: The Storage Networking Industry Association (SNIA) has heavily bought into using CIM and WBEM: they have defined their usage of CIM (called Storage Management Initiative – Specification or SMI-S) as a standard. Some server manufacturers collaborate in the DMTF under the SMASH initiative to define CIM-based management of servers. The DASH initiative in the DMTF attempts to define CIM-based management of desktop computers. === Communication protocols used === A number of protocols are defined for messages transmitted between clients and servers. The message protocols are transmitted on top of HTTP. There are two message types: operational messages, which provoke a response from the receiver (RPC) export messages, which are indications/events. ==== CIM Operations over HTTP (CIM-XML) ==== CIM-XML forms part of the WBEM protocol family, and is standardised by the DMTF. CIM-XML comprises three specifications: CIM Operations over HTTP Representation of CIM using XML CIM DTD ==== WS-Management ==== WS-MAN forms part of the WBEM protocol family, and is standardised by the DMTF. WS-MAN comprises 3 specifications: WS-CIM Mapping Specification WS-Management CIM Binding Specification Web Services for Management (WS- Management) Specification ==== CIM operations over RESTful services ==== CIM-RS forms part of the WBEM protocol family, and is standardised by the DMTF. CIM-RS comprises three specifications: CIM Operations Over RESTful Services CIM-RS Protocol Specification CIM-RS Payload Representation in JSON == See also == Storage Management Initiative – Specification == References == == External links == CIM, Standards, DMTF, including CIM Schema and CIM Infrastructure Specification. CIM definition, Linktionary. CIM definition, Networkcomputing, archived from the original on 2007-10-09, retrieved 2006-12-11. CIM definition, Searchstorage, Techtarget. CIM, Tutorials, WBEM Solutions, archived from the original on 2008-04-10, retrieved 2006-12-11. SBLIM, Sourceforge.
Wikipedia/Common_Information_Model_(computing)
A semantic data model (SDM) is a high-level semantics-based database description and structuring formalism (database model) for databases. This database model is designed to capture more of the meaning of an application environment than is possible with contemporary database models. An SDM specification describes a database in terms of the kinds of entities that exist in the application environment, the classifications and groupings of those entities, and the structural interconnections among them. SDM provides a collection of high-level modeling primitives to capture the semantics of an application environment. By accommodating derived information in a database structural specification, SDM allows the same information to be viewed in several ways; this makes it possible to directly accommodate the variety of needs and processing requirements typically present in database applications. The design of SDM was based on experience gained from using a preliminary version of it. SDM is designed to enhance the effectiveness and usability of database systems. An SDM database description can serve as a formal specification and documentation tool for a database; it can provide a basis for supporting a variety of powerful user interface facilities; it can serve as a conceptual database model in the database design process; and it can be used as the database model for a new kind of database management system. == In software engineering == A semantic data model in software engineering has various meanings: It is a conceptual data model in which semantic information is included. This means that the model describes the meaning of its instances. Such a semantic data model is an abstraction that defines how the stored symbols (the instance data) relate to the real world. It is a conceptual data model that includes the capability to express and exchange information which enables parties to interpret meaning (semantics) from the instances, without the need to know the meta-model. Such semantic models are fact-oriented (as opposed to object-oriented). Facts are typically expressed by binary relations between data elements, whereas higher order relations are expressed as collections of binary relations. Typically binary relations have the form of triples: Object-RelationType-Object. For example: the Eiffel Tower <is located in> Paris. Typically the instance data of semantic data models explicitly include the kinds of relationships between the various data elements, such as <is located in>. To interpret the meaning of the facts from the instances, it is required that the meaning of the kinds of relations (relation types) be known. Therefore, semantic data models typically standardize such relation types. This means that the second kind of semantic data model enables the instances to express facts that include their own meaning. The second kind of semantic data model is usually meant for creating semantic databases. The ability to include meaning in semantic databases facilitates building distributed databases that enable applications to interpret the meaning from the content. This implies that semantic databases can be integrated when they use the same (standard) relation types. This also implies that in general they have a wider applicability than relational or object-oriented databases.
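Fact-oriented semantic data models express facts as Object-RelationType-Object triples, as in the Eiffel Tower example above. A minimal sketch of such a triple store follows; the standardized relation types of a real semantic modelling language such as Gellish are far richer than the single example relation used here.

```python
# Facts as (object, relation type, object) triples, following the example above.
facts = [
    ("the Eiffel Tower", "is located in", "Paris"),
    ("Paris", "is located in", "France"),
]

def query(facts, relation_type, subject=None):
    """Return all facts using a given relation type, optionally for one subject."""
    return [f for f in facts
            if f[1] == relation_type and (subject is None or f[0] == subject)]

# Because the relation type carries the meaning, an application can interpret
# the facts without needing a separate meta-model.
print(query(facts, "is located in", "the Eiffel Tower"))
```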
== Overview == The logical data structure of a database management system (DBMS), whether hierarchical, network, or relational, cannot totally satisfy the requirements for a conceptual definition of data, because it is limited in scope and biased toward the implementation strategy employed by the DBMS. Therefore, the need to define data from a conceptual view has led to the development of semantic data modeling techniques. That is, techniques to define the meaning of data within the context of its interrelationships with other data. The real world, in terms of resources, ideas, events, etc., is symbolically defined within physical data stores. A semantic data model is an abstraction which defines how the stored symbols relate to the real world. Thus, the model must be a true representation of the real world. According to Klas and Schrefl (1995), the "overall goal of semantic data models is to capture more meaning of data by integrating relational concepts with more powerful abstraction concepts known from the Artificial Intelligence field. The idea is to provide high level modeling primitives as an integral part of a data model in order to facilitate the representation of real world situations". == History == The need for semantic data models was first recognized by the U.S. Air Force in the mid-1970s as a result of the Integrated Computer-Aided Manufacturing (ICAM) Program. The objective of this program was to increase manufacturing productivity through the systematic application of computer technology. The ICAM Program identified a need for better analysis and communication techniques for people involved in improving manufacturing productivity. As a result, the ICAM Program developed a series of techniques known as the IDEF (ICAM Definition) Methods, which included the following: IDEF0, used to produce a "function model", which is a structured representation of the activities or processes within the environment or system. IDEF1, used to produce an "information model", which represents the structure and semantics of information within the environment or system. IDEF1X, a semantic data modeling technique used to produce a graphical information model which represents the structure and semantics of information within an environment or system. Use of this standard permits the construction of semantic data models which may serve to support the management of data as a resource, the integration of information systems, and the building of computer databases. IDEF2, used to produce a "dynamics model", which represents the time-varying behavioral characteristics of the environment or system. During the 1990s, the application of semantic modelling techniques resulted in the semantic data models of the second kind. An example of such is the semantic data model that is standardised as ISO 15926-2 (2002), which was further developed into the semantic modelling language Gellish (2005). The definition of the Gellish language is documented in the form of a semantic data model. Gellish itself is a semantic modelling language that can be used to create other semantic models. Those semantic models can be stored in Gellish Databases, being semantic databases.
The model can then be analyzed to identify and scope projects to build shared data resources. Building of shareable databases: A fully developed model can be used to define an application independent view of data which can be validated by users and then transformed into a physical database design for any of the various DBMS technologies. In addition to generating databases which are consistent and shareable, development costs can be drastically reduced through data modeling. Evaluation of vendor software: Since a data model actually represents the infrastructure of an organization, vendor software can be evaluated against a company’s data model in order to identify possible inconsistencies between the infrastructure implied by the software and the way the company actually does business. Integration of existing databases: By defining the contents of existing databases with semantic data models, an integrated data definition can be derived. With the proper technology, the resulting conceptual schema can be used to control transaction processing in a distributed database environment. The U.S. Air Force Integrated Information Support System (I2S2) is an experimental development and demonstration of this kind of technology, applied to a heterogeneous type of DBMS environments. == See also == Computational mathematics Conceptual schema Entity-relationship model Information model Object-role modeling Ontology (information science) Relational Model/Tasmania Semantic technology Three-schema approach == References == This article incorporates public domain material from the National Institute of Standards and Technology == Further reading == Naphtali D. Rishe (1992). Database Design: The Semantic Modeling Approach. McGraw-Hill. Johan ter Bekke (1992). Semantic Data Modeling. Prentice Hall. Alfonso F. Cardenas and Dennis McLeod (1990). Research Foundations in Object-Oriented and Semantic Database Systems. Prentice Hall. Peter Gray, Krishnarao G. Kulkarni and, Norman W. Paton (1992). Object-Oriented Databases: A Semantic Data Model Approach. Prentice-Hall International Series in Computer Science. Michael Hammer and Dennis McLeod (1978). "The Semantic Data Model: a Modeling Mechanism for Data Base Applications." In: Proc. ACM SIGMOD Int’l. Conf. on Management of Data. Austin, Texas, May 31 - June 2, 1978, pp. 26–36. Hammer, Michael, and Dennis McLeod. "Database Description with SDM: A Semantic Database Model." ACM Transactions on Database Systems (TODS) 6.3 (1981): 351-86. Web. == External links == Data related to Semantic data model at Wikidata Semantic Data Modeling Johan ter Bekke tribute site. Technical analysis of semantic data modeling layer in BI tools
Wikipedia/Semantic_data_model
Enterprise modelling is the abstract representation, description and definition of the structure, processes, information and resources of an identifiable business, government body, or other large organization. It deals with the process of understanding an organization and improving its performance through creation and analysis of enterprise models. This includes the modelling of the relevant business domain (usually relatively stable), business processes (usually more volatile), and uses of information technology within the business domain and its processes. == Overview == Enterprise modelling is the process of building models of whole or part of an enterprise with process models, data models, resource models and/or new ontologies etc. It is based on knowledge about the enterprise, previous models and/or reference models as well as domain ontologies using model representation languages. An enterprise in general is a unit of economic organization or activity. These activities are required to develop and deliver products and/or services to a customer. An enterprise includes a number of functions and operations such as purchasing, manufacturing, marketing, finance, engineering, and research and development. The enterprise of interest are those corporate functions and operations necessary to manufacture current and potential future variants of a product. The term "enterprise model" is used in industry to represent differing enterprise representations, with no real standardized definition. Due to the complexity of enterprise organizations, a vast number of differing enterprise modelling approaches have been pursued across industry and academia. Enterprise modelling constructs can focus upon manufacturing operations and/or business operations; however, a common thread in enterprise modelling is an inclusion of assessment of information technology. For example, the use of networked computers to trigger and receive replacement orders along a material supply chain is an example of how information technology is used to coordinate manufacturing operations within an enterprise. The basic idea of enterprise modelling according to Ulrich Frank is "to offer different views on an enterprise, thereby providing a medium to foster dialogues between various stakeholders - both in academia and in practice. For this purpose they include abstractions suitable for strategic planning, organisational (re-) design and software engineering. The views should complement each other and thereby foster a better understanding of complex systems by systematic abstractions. The views should be generic in the sense that they can be applied to any enterprise. At the same time they should offer abstractions that help with designing information systems which are well integrated with a company's long term strategy and its organisation. Hence, enterprise models can be regarded as the conceptual infrastructure that support a high level of integration." == History == Enterprise modelling has its roots in systems modelling and especially information systems modelling. One of the earliest pioneering works in modelling information systems was done by Young and Kent (1958), who argued for "a precise and abstract way of specifying the informational and time characteristics of a data processing problem". They wanted to create "a notation that should enable the analyst to organize the problem around any piece of hardware". 
Their work was a first effort to create an abstract specification and invariant basis for designing different alternative implementations using different hardware components. A next step in IS modelling was taken by CODASYL, an IT industry consortium formed in 1959, which essentially aimed at the same thing as Young and Kent: the development of "a proper structure for machine independent problem definition language, at the system level of data processing". This led to the development of a specific IS information algebra. The first methods dealing with enterprise modelling emerged in the 1970s. They were the entity-relationship approach of Peter Chen (1976) and SADT of Douglas T. Ross (1977), the one concentrating on the information view and the other on the function view of business entities. These first methods were followed at the end of the 1970s by numerous methods for software engineering, such as SSADM, Structured Design, Structured Analysis and others. Specific methods for enterprise modelling in the context of Computer Integrated Manufacturing appeared in the early 1980s. They include the IDEF family of methods (ICAM, 1981) and the GRAI method by Guy Doumeingts in 1984 followed by GRAI/GIM by Doumeingts and others in 1992. This second generation of methods consisted of activity-based methods, which were superseded on the one hand by process-centred modelling methods developed in the 1990s, such as Architecture of Integrated Information Systems (ARIS), CIMOSA and Integrated Enterprise Modeling (IEM), and on the other hand by object-oriented methods, such as Object-oriented analysis (OOA) and Object-modelling technique (OMT). == Enterprise modelling basics == === Enterprise model === An enterprise model is a representation of the structure, activities, processes, information, resources, people, behavior, goals, and constraints of a business, government, or other enterprises. Thomas Naylor (1970) defined a (simulation) model as "an attempt to describe the interrelationships among a corporation's financial, marketing, and production activities in terms of a set of mathematical and logical relationships which are programmed into the computer." These interrelationships should, according to Gershefski (1971), represent in detail all aspects of the firm, including "the physical operations of the company, the accounting and financial practices followed, and the response to investment in key areas". Programming the modelled relationships into the computer is not always necessary: enterprise models, under different names, have existed for centuries and were described, for example, by Adam Smith, Walter Bagehot, and many others. According to Fox and Gruninger (1998) from "a design perspective, an enterprise model should provide the language used to explicitly define an enterprise... From an operations perspective, the enterprise model must be able to represent what is planned, what might happen, and what has happened. It must supply the information and knowledge necessary to support the operations of the enterprise, whether they be performed by hand or machine." In a two-volume set entitled The Managerial Cybernetics of Organization, Stafford Beer introduced a model of the enterprise, the Viable System Model (VSM). Volume 2, The Heart of Enterprise, analyzed the VSM as a recursive organization of five systems: System One (S1) through System Five (S5). 
Beer's model differs from others in that the VSM is recursive, not hierarchical: "In a recursive organizational structure, any viable system contains, and is contained in, a viable system." === Function modelling === Function modelling in systems engineering is a structured representation of the functions, activities or processes within the modelled system or subject area. A function model, also called an activity model or process model, is a graphical representation of an enterprise's function within a defined scope. The purposes of the function model are: to describe the functions and processes, assist with discovery of information needs, help identify opportunities, and establish a basis for determining product and service costs. A function model is created with a functional modelling perspective. A functional perspective is one of several perspectives possible in process modelling; other possible perspectives are, for example, behavioural, organisational or informational. A functional modelling perspective concentrates on describing the dynamic process. The main concept in this modelling perspective is the process; this could be a function, transformation, activity, action, task, etc. A well-known example of a modelling language employing this perspective is data flow diagrams. The perspective uses four symbols to describe a process, these being: Process: Illustrates transformation from input to output. Store: Data-collection or some sort of material. Flow: Movement of data or material in the process. External Entity: External to the modelled system, but interacts with it. With these symbols, a process can be represented as a network of such symbols; this decomposed process is a DFD (data flow diagram). In Dynamic Enterprise Modeling, for example, a division is made in the Control model, Function Model, Process model and Organizational model. === Data modelling === Data modelling is the process of creating a data model by applying formal data model descriptions using data modelling techniques. Data modelling is a technique for defining business requirements for a database. It is sometimes called database modelling because a data model is eventually implemented in a database. In current practice, a conceptual data model is developed based on the data requirements for the application that is being developed, perhaps in the context of an activity model. The data model will normally consist of entity types, attributes, relationships, integrity rules, and the definitions of those objects. This is then used as the starting point for interface or database design. === Business process modelling === Business process modelling, not to be confused with the wider Business Process Management (BPM) discipline, is the activity of representing processes of an enterprise, so that the current ("as is") process may be analyzed and improved in future ("to be"). Business process modelling is typically performed by business analysts and managers who are seeking to improve process efficiency and quality. The process improvements identified by business process modelling may or may not require Information Technology involvement, although that is a common driver for the need to model a business process, by creating a process master. Change management programs are typically involved to put the improved business processes into practice. 
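A process model of this kind can also be captured in a machine-readable form so that tools can analyse it or, eventually, execute it. The following Python sketch is purely illustrative; the task names, roles and the simple handover metric are invented for the example and are not taken from any particular notation or tool:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """A single activity in the process, e.g. 'Approve order' (illustrative)."""
    name: str
    role: str  # organisational unit or role responsible for the task

@dataclass
class ProcessModel:
    """A minimal 'as is' or 'to be' process: tasks plus directed flows."""
    name: str
    tasks: dict[str, Task] = field(default_factory=dict)
    flows: list[tuple[str, str]] = field(default_factory=list)  # (from_task, to_task)

    def add_task(self, name: str, role: str) -> None:
        self.tasks[name] = Task(name, role)

    def add_flow(self, source: str, target: str) -> None:
        if source not in self.tasks or target not in self.tasks:
            raise ValueError("both ends of a flow must be defined tasks")
        self.flows.append((source, target))

    def handover_count(self) -> int:
        """Number of flows that cross role boundaries, a crude efficiency indicator."""
        return sum(1 for s, t in self.flows if self.tasks[s].role != self.tasks[t].role)

# Example: a tiny order-handling process (names are illustrative only).
p = ProcessModel("Order handling (as is)")
p.add_task("Receive order", role="Sales")
p.add_task("Check credit", role="Finance")
p.add_task("Ship goods", role="Logistics")
p.add_flow("Receive order", "Check credit")
p.add_flow("Check credit", "Ship goods")
print(p.handover_count())  # -> 2 handovers between departments
```

Even such a small, explicit representation lets an analyst query the "as is" model (here, counting handovers between departments) before proposing a "to be" variant.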
With advances in technology from large platform vendors, the vision of business process models becoming fully executable (and capable of simulations and round-trip engineering) is coming closer to reality every day. === Systems architecture === The RM-ODP reference model identifies enterprise modelling as providing one of the five viewpoints of an open distributed system. Note that such a system need not be a modern-day IT system: a banking clearing house in the 19th century may be used as an example. == Enterprise modelling techniques == There are several techniques for modelling the enterprise, such as Active Knowledge Modeling, Design & Engineering Methodology for Organizations (DEMO) Dynamic Enterprise Modeling Enterprise Modelling Methodology/Open Distributed Processing (EMM/ODP) Extended Enterprise Modeling Language Multi-Perspective Enterprise Modelling (MEMO), Process modelling such as BPMN, CIMOSA, DYA, IDEF3, LOVEM, PERA, etc. Integrated Enterprise Modeling (IEM), and Modelling the enterprise with multi-agent systems. More enterprise modelling techniques have been developed into Enterprise Architecture frameworks, such as: ARIS - ARchitecture of Integrated Information Systems DoDAF - the US Department of Defense Architecture Framework RM-ODP - Reference Model of Open Distributed Processing TOGAF - The Open Group Architecture Framework Zachman Framework - an architecture framework, based on the work of John Zachman at IBM in the 1980s Service-oriented modeling framework (SOMF), based on the work of Michael Bell And metamodelling frameworks such as: Generalised Enterprise Reference Architecture and Methodology == Enterprise engineering == Enterprise engineering is the discipline concerning the design and the engineering of enterprises, regarding both their business and organization. In theory and practice, two types of enterprise engineering have emerged: a more general type connected to engineering and the management of enterprises, and a more specific type related to software engineering, enterprise modelling and enterprise architecture. In the field of engineering, a more general enterprise engineering emerged, defined as the application of engineering principles to the management of enterprises. It encompasses the application of knowledge, principles, and disciplines related to the analysis, design, implementation and operation of all elements associated with an enterprise. In essence, this is an interdisciplinary field which combines systems engineering and strategic management as it seeks to engineer the entire enterprise in terms of the products, processes and business operations. The view is one of continuous improvement and continued adaptation as firms, processes and markets develop along their life cycles. This total systems approach encompasses the traditional areas of research and development, product design, operations and manufacturing as well as information systems and strategic management. This field is related to engineering management, operations management, service management and systems engineering. In the context of software development, a specific field of enterprise engineering has emerged, which deals with the modelling and integration of various organizational and technical parts of business processes. In the context of information systems development, it has been the area of activity in the organization of systems analysis, and an extension of the scope of Information Modelling. 
It can also be viewed as the extension and generalization of the systems analysis and systems design phases of the software development process. Here enterprise modelling can be part of the early, middle and late information system development life cycle. Explicit representation of the organizational and technical system infrastructure is being created in order to understand the orderly transformations of existing work practices. This field is also called Enterprise architecture, or, together with Enterprise Ontology, is regarded as one of the two major parts of Enterprise architecture. == Related fields == === Business reference modelling === Business reference modelling is the development of reference models concentrating on the functional and organizational aspects of the core business of an enterprise, service organization or government agency. In enterprise engineering, a business reference model is part of an enterprise architecture framework. This framework defines, in a series of reference models, how to organize the structure and views associated with an Enterprise Architecture. A reference model in general is a model that embodies the basic goal or idea of something and can then be looked at as a reference for various purposes. A business reference model is a means to describe the business operations of an organization, independent of the organizational structure that performs them. Other types of business reference model can also depict the relationship between the business processes, business functions, and the business area’s business reference model. These reference models can be constructed in layers and offer a foundation for the analysis of service components, technology, data, and performance. === Economic modelling === Economic modelling is the theoretical representation of economic processes by a set of variables and a set of logical and/or quantitative relationships between them. The economic model is a simplified framework designed to illustrate complex processes, often but not always using mathematical techniques. Frequently, economic models use structural parameters. Structural parameters are underlying parameters in a model or class of models. A model may have various parameters and those parameters may change to create various properties. In general terms, economic models have two functions: first as a simplification of and abstraction from observed data, and second as a means of selection of data based on a paradigm of econometric study. The simplification is particularly important for economics given the enormous complexity of economic processes. This complexity can be attributed to the diversity of factors that determine economic activity; these factors include: individual and cooperative decision processes, resource limitations, environmental and geographical constraints, institutional and legal requirements and purely random fluctuations. Economists therefore must make a reasoned choice of which variables and which relationships between these variables are relevant and which ways of analyzing and presenting this information are useful. === Ontology engineering === Ontology engineering or ontology building is a subfield of knowledge engineering that studies the methods and methodologies for building ontologies. In the domain of enterprise architecture, an ontology is an outline or a schema used to structure objects, their attributes and relationships in a consistent manner. As in enterprise modelling, an ontology can be composed of other ontologies. 
The purpose of ontologies in enterprise modelling is to formalize and establish the sharability, re-usability, assimilation and dissemination of information across all organizations and departments within an enterprise. Thus, an ontology enables integration of the various functions and processes which take place in an enterprise. One common language with well-articulated structure and vocabulary would enable the company to be more efficient in its operations. A common ontology will allow for effective communication, understanding and thus coordination among the various divisions of an enterprise. There are various kinds of ontologies used in numerous environments. While the language example given earlier dealt with the area of information systems and design, other ontologies may be defined for processes, methods, activities, etc., within an enterprise. Using ontologies in enterprise modelling offers several advantages. Ontologies bring clarity, consistency, and structure to a model. They promote efficient model definition and analysis. Generic enterprise ontologies allow for reusability and automation of components. Because ontologies are schemata or outlines, the use of ontologies does not ensure proper enterprise model definition, analysis, or clarity. Ontologies are limited by how they are defined and implemented. An ontology may or may not include the potential or capability to capture all of the aspects of what is being modelled. === Systems thinking === The modelling of the enterprise and its environment could facilitate the creation of enhanced understanding of the business domain and processes of the extended enterprise, and especially of the relations, both those that "hold the enterprise together" and those that extend across the boundaries of the enterprise. Since an enterprise is a system, concepts used in systems thinking can be successfully reused in modelling enterprises. This way, an understanding can quickly be achieved throughout the enterprise of how business functions work and how they depend upon other functions in the organization. == See also == Business process modelling Enterprise architecture Enterprise Architecture framework Enterprise integration Enterprise life cycle ISO 19439 Enterprise Data Modeling == References == == Further reading == August-Wilhelm Scheer (1992). Architecture of Integrated Information Systems: Foundations of Enterprise Modelling. Springer-Verlag. ISBN 3-540-55131-X François Vernadat (1996) Enterprise Modeling and Integration: Principles and Applications, Chapman & Hall, London, ISBN 0-412-60550-3 == External links == Agile Enterprise Modeling, by S.W. Ambler, 2003-2008. Enterprise Modeling Anti-patterns, by S.W. Ambler, 2005. Enterprise Modelling and Information Systems Architectures - An International Journal (EMISA) is a scholarly open access journal with a unique focus on novel and innovative research on Enterprise Models and Information Systems Architectures.
Wikipedia/Enterprise_modelling
A common data model (CDM) can refer to any standardised data model which allows for data and information exchange between different applications and data sources. Common data models aim to standardise logical infrastructure so that related applications can "operate on and share the same data", and can be seen as a way to "organize data from many sources that are in different formats into a standard structure". A common data model has been described as one of the components of a "strong information system". A standardised common data model has also been described as a typical component of a well-designed agile application besides a common communication protocol. Providing a single common data model within an organisation is one of the typical tasks of a data warehouse. == Examples of common data models == === Border crossings === X-trans.eu was a cross-border pilot project between the Free State of Bavaria (Germany) and Upper Austria with the aim of developing a faster procedure for the application and approval of cross-border large-capacity transports. The portal was based on a common data model that contained all the information required for approval. === Climate data === The Climate Data Store Common Data Model is a common data model set up by the Copernicus Climate Change Service for harmonising essential climate variables from different sources and data providers. === General information technology === Within service-oriented architecture, S-RAMP is a specification released by HP, IBM, Software AG, TIBCO, and Red Hat which defines a common data model for SOA repositories as well as an interaction protocol to facilitate the use of common tooling and sharing of data. Content Management Interoperability Services (CMIS) is an open standard for inter-operation of different content management systems over the internet, and provides a common data model for typed files and folders used with version control. The NetCDF software libraries for array-oriented scientific data implement a common data model called the NetCDF Java common data model, which consists of three layers built on top of each other to add successively richer semantics. === Health === Within genomic and medical data, the Observational Medical Outcomes Partnership (OMOP) research program established under the U.S. National Institutes of Health has created a common data model for claims and electronic health records which can accommodate data from different sources around the world. PCORnet, which was developed by the Patient-Centered Outcomes Research Institute, is another common data model for health data including electronic health records and patient claims. The Sentinel Common Data Model was initially started as Mini-Sentinel in 2008. It is used by the Sentinel Initiative of the USA's Food and Drug Administration. The Generalized Data Model was first published in 2019. It was designed to be a stand-alone data model as well as to allow for further transformation into other data models (e.g., OMOP, PCORNet, Sentinel). It has a hierarchical structure to flexibly capture relationships among data elements. The JANUS clinical trial data repository also provides a common data model which is based on the SDTM standard to represent clinical data submitted to regulatory agencies, such as tabulation datasets, patient profiles, listings, etc. 
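The common thread in these examples is a single target schema onto which differently structured source records are mapped. The following Python sketch illustrates that idea only; the field names, source layouts and unit conversion are invented for the example and do not correspond to OMOP, PCORnet, Sentinel or any other model mentioned above:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CommonObservation:
    """Target record in a hypothetical common data model."""
    patient_id: str
    observed_on: date
    code: str      # in practice, a shared coding system would be agreed on
    value: float
    unit: str

def from_source_a(row: dict) -> CommonObservation:
    """Source A stores dates as ISO strings and uses its own field names."""
    return CommonObservation(
        patient_id=row["pid"],
        observed_on=date.fromisoformat(row["obs_date"]),
        code=row["measure"],
        value=float(row["result"]),
        unit=row["unit"],
    )

def from_source_b(row: dict) -> CommonObservation:
    """Source B splits the date into fields and reports glucose in mg/dL."""
    return CommonObservation(
        patient_id=row["patient"],
        observed_on=date(row["year"], row["month"], row["day"]),
        code=row["measure"],
        value=row["value_mg_dl"] / 18.0,  # convert glucose mg/dL to mmol/L
        unit="mmol/L",
    )

# Records from two differently shaped sources end up in one comparable structure.
records = [
    from_source_a({"pid": "p1", "obs_date": "2023-05-01", "measure": "glucose",
                   "result": "5.4", "unit": "mmol/L"}),
    from_source_b({"patient": "p2", "year": 2023, "month": 5, "day": 2,
                   "measure": "glucose", "value_mg_dl": 99.0}),
]
print(records)
```

Once both sources are expressed in the shared structure, downstream analyses can be written once against the common schema rather than once per source system.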
=== Logistics === SX000i is a specification developed jointly by the Aerospace and Defence Industries Association of Europe (ASD) and the American Aerospace Industries Association (AIA) to provide information, guidance and instructions to ensure compatibility and commonality. The associated SX002D specification contains a common data model. === Microsoft Common Data Model === The Microsoft Common Data Model is a collection of many standardised extensible data schemas with entities, attributes, semantic metadata, and relationships, which represent commonly used concepts and activities in various business areas. It is maintained by Microsoft and its partners, and is published on GitHub. Microsoft's Common Data Model is used, amongst others, in Microsoft Dataverse and with various Microsoft Power Platform and Microsoft Dynamics 365 services. === Rail transport === RailTopoModel is a common data model for the railway sector. === Other === There are many more examples of various common data models for different uses published by different sources. == See also == Apache OFBiz, an open source enterprise resource planning system which provides a common data model Canonical model Data Reference Model, one of five reference models of the U.S. government federal enterprise architecture Data platform Metadata Open Semantic Framework, which internally uses RDF to convert all data to a common data model Requirements Interchange Format Generic data model == References ==
Wikipedia/Common_data_model
The Distributed Management Task Force (DMTF) is a 501(c)(6) nonprofit industry standards organization that creates open manageability standards spanning diverse emerging and traditional IT infrastructures including cloud, virtualization, network, servers and storage. Member companies and alliance partners collaborate on standards to improve interoperable management of information technologies. Based in Portland, Oregon, the DMTF is led by a board of directors representing technology companies including: Broadcom Inc., Cisco, Dell Technologies, Hewlett Packard Enterprise, Intel Corporation, Lenovo, Positivo Tecnologia S.A., and Verizon. == History == Founded in 1992 as the Desktop Management Task Force, the organization produced as its first standard the now-legacy Desktop Management Interface (DMI). As the organization evolved to address distributed management through additional standards, such as the Common Information Model (CIM), it changed its name to the Distributed Management Task Force in 1999 and is now known simply as DMTF. The DMTF continues to address converged, hybrid IT and the Software Defined Data Center (SDDC) with its latest specifications, such as the Redfish, SMBIOS, SPDM, and PMCI standards. == Standards == DMTF standards include: CADF - Cloud Auditing Data Federation CIMI - Cloud Infrastructure Management Interface CIM - Common Information Model CMDBf - Configuration Management Database Federation DASH - Desktop and Mobile Architecture for System Hardware MCTP - Management Component Transport Protocol Including NVMe-MI, I2C/SMBus and PCIe Bindings NC-SI - Network Controller Sideband Interface OVF - Open Virtualization Format PLDM - Platform Level Data Model Including Firmware Update, Redfish Device Enablement (RDE) Redfish – Including Protocols, Schema, Host Interface, Profiles SMASH - Systems Management Architecture for Server Hardware System Management BIOS (SMBIOS) – Standardized Host Management Information SPDM - Security Protocol and Data Model == See also == Cloud Infrastructure Management Interface Common Information Model (computing) Desktop and mobile Architecture for System Hardware Management Component Transport Protocol NC-SI Open Virtualization Format Redfish (specification) Systems Management Architecture for Server Hardware SMBIOS == References == === General === https://www.theregister.co.uk/2015/08/05/dmtf_signs_off_redfish_server_management_spec_v_10/ https://digitalisationworld.com/news/49120/dmtf-announces-redfish-api-advancements https://searchstorage.techtarget.com/tip/Choose-the-right-storage-management-interface-for-you == External links == Official website
Wikipedia/Distributed_Management_Task_Force
A communication protocol is a system of rules that allows two or more entities of a communications system to transmit information via any variation of a physical quantity. The protocol defines the rules, syntax, semantics, and synchronization of communication and possible error recovery methods. Protocols may be implemented by hardware, software, or a combination of both. Communicating systems use well-defined formats for exchanging various messages. Each message has an exact meaning intended to elicit a response from a range of possible responses predetermined for that particular situation. The specified behavior is typically independent of how it is to be implemented. Communication protocols have to be agreed upon by the parties involved. To reach an agreement, a protocol may be developed into a technical standard. A programming language describes the same for computations, so there is a close analogy between protocols and programming languages: protocols are to communication what programming languages are to computations. An alternate formulation states that protocols are to communication what algorithms are to computation. Multiple protocols often describe different aspects of a single communication. A group of protocols designed to work together is known as a protocol suite; when implemented in software they are a protocol stack. Internet communication protocols are published by the Internet Engineering Task Force (IETF). The IEEE (Institute of Electrical and Electronics Engineers) handles wired and wireless networking and the International Organization for Standardization (ISO) handles other types. The ITU-T handles telecommunications protocols and formats for the public switched telephone network (PSTN). As the PSTN and Internet converge, the standards are also being driven towards convergence. == Communicating systems == === History === The first use of the term protocol in a modern data-communication context occurs in April 1967 in a memorandum entitled A Protocol for Use in the NPL Data Communications Network. Under the direction of Donald Davies, who pioneered packet switching at the National Physical Laboratory in the United Kingdom, it was written by Roger Scantlebury and Keith Bartlett for the NPL network. On the ARPANET, the starting point for host-to-host communication in 1969 was the 1822 protocol, written by Bob Kahn, which defined the transmission of messages to an IMP. The Network Control Program (NCP) for the ARPANET, developed by Steve Crocker and other graduate students including Jon Postel and Vint Cerf, was first implemented in 1970. The NCP interface allowed application software to connect across the ARPANET by implementing higher-level communication protocols, an early example of the protocol layering concept. The CYCLADES network, designed by Louis Pouzin in the early 1970s, was the first to implement the end-to-end principle, and to make the hosts responsible for the reliable delivery of data on a packet-switched network, rather than this being a service of the network itself. His team was the first to tackle the highly complex problem of providing user applications with a reliable virtual circuit service while using a best-effort service, an early contribution to what would become the Transmission Control Protocol (TCP). Bob Metcalfe and others at Xerox PARC outlined the idea of Ethernet and the PARC Universal Packet (PUP) for internetworking. Research in the early 1970s by Bob Kahn and Vint Cerf led to the formulation of the Transmission Control Program (TCP). 
Its RFC 675 specification was written by Cerf with Yogen Dalal and Carl Sunshine in December 1974, still a monolithic design at this time. The International Network Working Group agreed on a connectionless datagram standard which was presented to the CCITT in 1975 but was not adopted by the CCITT nor by the ARPANET. Separate international research, particularly the work of Rémi Després, contributed to the development of the X.25 standard, based on virtual circuits, which was adopted by the CCITT in 1976. Computer manufacturers developed proprietary protocols such as IBM's Systems Network Architecture (SNA), Digital Equipment Corporation's DECnet and Xerox Network Systems. TCP software was redesigned as a modular protocol stack, referred to as TCP/IP. This was installed on SATNET in 1982 and on the ARPANET in January 1983. The development of a complete Internet protocol suite by 1989, as outlined in RFC 1122 and RFC 1123, laid the foundation for the growth of TCP/IP as a comprehensive protocol suite as the core component of the emerging Internet. International work on a reference model for communication standards led to the OSI model, published in 1984. For a period in the late 1980s and early 1990s, engineers, organizations and nations became polarized over the issue of which standard, the OSI model or the Internet protocol suite, would result in the best and most robust computer networks. === Concept === The information exchanged between devices through a network or other media is governed by rules and conventions that can be set out in communication protocol specifications. The nature of communication, the actual data exchanged and any state-dependent behaviors, is defined by these specifications. In digital computing systems, the rules can be expressed by algorithms and data structures. Protocols are to communication what algorithms or programming languages are to computations. Operating systems usually contain a set of cooperating processes that manipulate shared data to communicate with each other. This communication is governed by well-understood protocols, which can be embedded in the process code itself. In contrast, because there is no shared memory, communicating systems have to communicate with each other using a shared transmission medium. Transmission is not necessarily reliable, and individual systems may use different hardware or operating systems. To implement a networking protocol, the protocol software modules are interfaced with a framework implemented on the machine's operating system. This framework implements the networking functionality of the operating system. When protocol algorithms are expressed in a portable programming language the protocol software may be made operating system independent. The best-known frameworks are the TCP/IP model and the OSI model. At the time the Internet was developed, abstraction layering had proven to be a successful design approach for both compiler and operating system design and, given the similarities between programming languages and communication protocols, the originally monolithic networking programs were decomposed into cooperating protocols. This gave rise to the concept of layered protocols which nowadays forms the basis of protocol design. Systems typically do not use a single protocol to handle a transmission. Instead they use a set of cooperating protocols, sometimes called a protocol suite. Some of the best-known protocol suites are TCP/IP, IPX/SPX, X.25, AX.25 and AppleTalk. 
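In practice, an application rarely touches most of a suite directly; it talks to the top of the stack and relies on the operating system for the layers underneath. A minimal Python sketch of this division of labour (it assumes network access to example.com and that the server answers plain HTTP on port 80):

```python
import socket

# The application only speaks to the top of the stack; TCP, IP and the
# link layer underneath are provided by the operating system's protocol suite.
with socket.create_connection(("example.com", 80), timeout=5) as sock:
    request = b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n"  # a text-based application protocol
    sock.sendall(request)
    reply = sock.recv(1024)

# e.g. b'HTTP/1.0 200 OK' -- the exact status line depends on the server.
print(reply.split(b"\r\n", 1)[0])
```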
The protocols can be arranged into groups based on functionality; for instance, there is a group of transport protocols. The functionalities are mapped onto the layers, each layer solving a distinct class of problems relating to, for instance: application-, transport-, internet- and network interface-functions. To transmit a message, a protocol has to be selected from each layer. The selection of the next protocol is accomplished by extending the message with a protocol selector for each layer. == Types == There are two types of communication protocols, based on their representation of the content being carried: text-based and binary. === Text-based === A text-based protocol or plain text protocol represents its content in human-readable format, often in plain text encoded in a machine-readable encoding such as ASCII or UTF-8, or in structured text-based formats such as Intel hex format, XML or JSON. The immediate human readability stands in contrast to native binary protocols which have inherent benefits for use in a computer environment (such as ease of mechanical parsing and improved bandwidth utilization). Network applications have various methods of encapsulating data. One method very common with Internet protocols is a text-oriented representation that transmits requests and responses as lines of ASCII text, terminated by a newline character (and usually a carriage return character). Examples of protocols that use plain, human-readable text for their commands are FTP (File Transfer Protocol), SMTP (Simple Mail Transfer Protocol), early versions of HTTP (Hypertext Transfer Protocol), and the finger protocol. Text-based protocols are typically optimized for human parsing and interpretation and are therefore suitable whenever human inspection of protocol contents is required, such as during debugging and during early protocol development design phases. === Binary === A binary protocol utilizes all values of a byte, as opposed to a text-based protocol which only uses values corresponding to human-readable characters in ASCII encoding. Binary protocols are intended to be read by a machine rather than a human being. Binary protocols have the advantage of terseness, which translates into speed of transmission and interpretation. Binary protocols have been used in the normative documents describing modern standards like ebXML, HTTP/2, HTTP/3 and EDOC. An interface in UML may also be considered a binary protocol. == Basic requirements == Getting the data across a network is only part of the problem for a protocol. The data received has to be evaluated in the context of the progress of the conversation, so a protocol must include rules describing the context. These kinds of rules are said to express the syntax of the communication. Other rules determine whether the data is meaningful for the context in which the exchange takes place. These kinds of rules are said to express the semantics of the communication. Messages are sent and received on communicating systems to establish communication. Protocols should therefore specify rules governing the transmission. In general, much of the following should be addressed: Data formats for data exchange Digital message bitstrings are exchanged. The bitstrings are divided into fields and each field carries information relevant to the protocol. Conceptually the bitstring is divided into two parts called the header and the payload. The actual message is carried in the payload. The header area contains the fields with relevance to the operation of the protocol.
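The split into header and payload, and the difference between binary and text-based representations, can be illustrated with a toy message format. The sketch below is not any real standard; the field layout and values are invented for the example:

```python
import struct

# A toy message: the header carries protocol-relevant fields (version,
# message type, payload length); the payload carries the actual data.
VERSION, MSG_DATA = 1, 0x02
payload = "temperature=21.5".encode("utf-8")

# Binary representation: a fixed-size header packed as bytes in network byte order.
binary_frame = struct.pack("!BBH", VERSION, MSG_DATA, len(payload)) + payload

# Text-based representation of the same message: one human-readable line.
text_frame = f"DATA v{VERSION} {payload.decode()}\r\n".encode("ascii")

print(len(binary_frame), binary_frame)  # compact, machine-oriented
print(len(text_frame), text_frame)      # longer, but readable during debugging

# Receiver side for the binary frame: parse the header, then slice out the payload.
version, msg_type, length = struct.unpack("!BBH", binary_frame[:4])
assert binary_frame[4:4 + length] == payload
```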
Bitstrings longer than the maximum transmission unit (MTU) are divided in pieces of appropriate size. Address formats for data exchange Addresses are used to identify both the sender and the intended receiver(s). The addresses are carried in the header area of the bitstrings, allowing the receivers to determine whether the bitstrings are of interest and should be processed or should be ignored. A connection between a sender and a receiver can be identified using an address pair (sender address, receiver address). Usually, some address values have special meanings. An all-1s address could be taken to mean an addressing of all stations on the network, so sending to this address would result in a broadcast on the local network. The rules describing the meanings of the address value are collectively called an addressing scheme. Address mapping Sometimes protocols need to map addresses of one scheme on addresses of another scheme. For instance, to translate a logical IP address specified by the application to an Ethernet MAC address. This is referred to as address mapping. Routing When systems are not directly connected, intermediary systems along the route to the intended receiver(s) need to forward messages on behalf of the sender. On the Internet, the networks are connected using routers. The interconnection of networks through routers is called internetworking. Detection of transmission errors Error detection is necessary on networks where data corruption is possible. In a common approach, a CRC of the data area is added to the end of packets, making it possible for the receiver to detect differences caused by corruption. The receiver rejects the packets on CRC differences and arranges somehow for retransmission. Acknowledgements Acknowledgement of correct reception of packets is required for connection-oriented communication. Acknowledgments are sent from receivers back to their respective senders. Loss of information - timeouts and retries Packets may be lost on the network or be delayed in transit. To cope with this, under some protocols, a sender may expect an acknowledgment of correct reception from the receiver within a certain amount of time. Thus, on timeouts, the sender may need to retransmit the information. In case of a permanently broken link, the retransmission has no effect, so the number of retransmissions is limited. Exceeding the retry limit is considered an error. Direction of information flow Direction needs to be addressed if transmissions can only occur in one direction at a time as on half-duplex links or from one sender at a time as on a shared medium. This is known as media access control. Arrangements have to be made to accommodate the case of collision or contention where two parties respectively simultaneously transmit or wish to transmit. Sequence control If long bitstrings are divided into pieces and then sent on the network individually, the pieces may get lost or delayed or, on some types of networks, take different routes to their destination. As a result, pieces may arrive out of sequence. Retransmissions can result in duplicate pieces. By marking the pieces with sequence information at the sender, the receiver can determine what was lost or duplicated, ask for necessary retransmissions and reassemble the original message. Flow control Flow control is needed when the sender transmits faster than the receiver or intermediate network equipment can process the transmissions. Flow control can be implemented by messaging from receiver to sender. 
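Several of the requirements above, fragmentation below the MTU, sequence numbers and error detection, can be sketched together in a few lines. The frame layout and the tiny MTU below are invented for illustration; a real protocol would also handle acknowledgements and retransmission:

```python
import struct
import zlib

MTU = 8  # deliberately tiny so the message is split into several pieces

def fragment(message: bytes) -> list[bytes]:
    """Split a message, prefixing each piece with a sequence number and a CRC."""
    pieces = [message[i:i + MTU] for i in range(0, len(message), MTU)]
    frames = []
    for seq, piece in enumerate(pieces):
        crc = zlib.crc32(piece)
        frames.append(struct.pack("!HI", seq, crc) + piece)  # header: seq + CRC
    return frames

def reassemble(frames: list[bytes]) -> bytes:
    """Check each piece, drop corrupted ones, and restore the original order."""
    good = {}
    for frame in frames:
        seq, crc = struct.unpack("!HI", frame[:6])
        piece = frame[6:]
        if zlib.crc32(piece) == crc:   # detection of transmission errors
            good[seq] = piece          # duplicates simply overwrite each other
        # in a real protocol, a missing or corrupt seq would trigger a retransmission
    return b"".join(good[seq] for seq in sorted(good))

frames = fragment(b"protocols govern the exchange")
frames.reverse()                       # pieces may arrive out of sequence
assert reassemble(frames) == b"protocols govern the exchange"
```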
Queueing Communicating processes or state machines employ queues (or "buffers"), usually FIFO queues, to deal with the messages in the order sent, and may sometimes have multiple queues with different prioritization. == Protocol design == Systems engineering principles have been applied to create a set of common network protocol design principles. The design of complex protocols often involves decomposition into simpler, cooperating protocols. Such a set of cooperating protocols is sometimes called a protocol family or a protocol suite, within a conceptual framework. Communicating systems operate concurrently. An important aspect of concurrent programming is the synchronization of software for receiving and transmitting messages of communication in proper sequencing. Concurrent programming has traditionally been a topic in operating systems theory texts. Formal verification seems indispensable because concurrent programs are notorious for the hidden and sophisticated bugs they contain. A mathematical approach to the study of concurrency and communication is referred to as communicating sequential processes (CSP). Concurrency can also be modeled using finite-state machines, such as Mealy and Moore machines. Mealy and Moore machines are in use as design tools in digital electronics systems encountered in the form of hardware used in telecommunication or electronic devices in general. The literature presents numerous analogies between computer communication and programming. In analogy, a transfer mechanism of a protocol is comparable to a central processing unit (CPU). The framework introduces rules that allow the programmer to design cooperating protocols independently of one another. === Layering === In modern protocol design, protocols are layered to form a protocol stack. Layering is a design principle that divides the protocol design task into smaller steps, each of which accomplishes a specific part, interacting with the other parts of the protocol only in a small number of well-defined ways. Layering allows the parts of a protocol to be designed and tested without a combinatorial explosion of cases, keeping each design relatively simple. The communication protocols in use on the Internet are designed to function in diverse and complex settings. Internet protocols are designed for simplicity and modularity and fit into a coarse hierarchy of functional layers defined in the Internet Protocol Suite. The first two cooperating protocols, the Transmission Control Protocol (TCP) and the Internet Protocol (IP) resulted from the decomposition of the original Transmission Control Program, a monolithic communication protocol, into this layered communication suite. The OSI model was developed internationally based on experience with networks that predated the internet as a reference model for general communication with much stricter rules of protocol interaction and rigorous layering. Typically, application software is built upon a robust data transport layer. Underlying this transport layer is a datagram delivery and routing mechanism that is typically connectionless in the Internet. Packet relaying across networks happens over another layer that involves only network link technologies, which are often specific to certain physical layer technologies, such as Ethernet. Layering provides opportunities to exchange technologies when needed, for example, protocols are often stacked in a tunneling arrangement to accommodate the connection of dissimilar networks. 
For example, IP may be tunneled across an Asynchronous Transfer Mode (ATM) network. ==== Protocol layering ==== Protocol layering forms the basis of protocol design. It allows the decomposition of single, complex protocols into simpler, cooperating protocols. The protocol layers each solve a distinct class of communication problems. Together, the layers make up a layering scheme or model. Computations deal with algorithms and data; communication involves protocols and messages; so the analog of a data flow diagram is some kind of message flow diagram. To visualize protocol layering and protocol suites, consider a diagram of the message flows in and between two systems, A and B, which both make use of the same protocol suite. The vertical flows (and protocols) are in-system and the horizontal message flows (and protocols) are between systems. The message flows are governed by rules, and data formats specified by protocols. ==== Software layering ==== The software supporting protocols has a layered organization that parallels the protocol layering. To send a message on system A, the top-layer software module interacts with the module directly below it and hands over the message to be encapsulated. The lower module fills in the header data in accordance with the protocol it implements and interacts with the bottom module which sends the message over the communications channel to the bottom module of system B. On the receiving system B the reverse happens, so ultimately the message gets delivered in its original form to the top module of system B. Program translation is divided into subproblems. As a result, the translation software is layered as well, allowing the software layers to be designed independently. The same approach can be seen in the TCP/IP layering. The modules below the application layer are generally considered part of the operating system. Passing data between these modules is much less expensive than passing data between an application program and the transport layer. The boundary between the application layer and the transport layer is called the operating system boundary. ==== Strict layering ==== Strictly adhering to a layered model, a practice known as strict layering, is not always the best approach to networking. Strict layering can have a negative impact on the performance of an implementation. Although the use of protocol layering is today ubiquitous across the field of computer networking, it has historically been criticized by many researchers because abstracting the protocol stack in this way may cause a higher layer to duplicate the functionality of a lower layer, a prime example being error recovery on both a per-link basis and an end-to-end basis. === Design patterns === Commonly recurring problems in the design and implementation of communication protocols can be addressed by software design patterns. === Formal specification === Popular formal methods of describing communication syntax are Abstract Syntax Notation One (an ISO standard) and augmented Backus–Naur form (an IETF standard). Finite-state machine models and communicating finite-state machines are used to formally describe the possible interactions of the protocol. == Protocol development == For communication to occur, protocols have to be selected. The rules can be expressed by algorithms and data structures. 
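A minimal sketch of such rules as data structures plus a small algorithm is shown below; the layer names and header contents are invented and do not correspond to any real protocol suite:

```python
# Each layer wraps the data handed down from the layer above with its own
# header; the receiving stack peels the headers off in reverse order.
# The layer names and header fields are invented for illustration only.
LAYERS = ["application", "transport", "internet", "link"]

def send(message: str) -> str:
    frame = message
    for layer in LAYERS[1:]:              # application data is the innermost part
        frame = f"[{layer}-hdr]{frame}"   # encapsulation at each layer
    return frame

def receive(frame: str) -> str:
    for layer in reversed(LAYERS[1:]):    # decapsulation, outermost header first
        prefix = f"[{layer}-hdr]"
        assert frame.startswith(prefix), f"unexpected frame at the {layer} layer"
        frame = frame[len(prefix):]
    return frame

wire = send("GET /index.html")
print(wire)   # [link-hdr][internet-hdr][transport-hdr]GET /index.html
assert receive(wire) == "GET /index.html"
```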
Hardware and operating system independence is enhanced by expressing the algorithms in a portable programming language. Source independence of the specification provides wider interoperability. Protocol standards are commonly created by obtaining the approval or support of a standards organization, which initiates the standardization process. The members of the standards organization agree to adhere to the work result on a voluntary basis. Often the members are in control of large market shares relevant to the protocol and in many cases, standards are enforced by law or the government because they are thought to serve an important public interest, so getting approval can be very important for the protocol. === The need for protocol standards === The need for protocol standards can be shown by looking at what happened to the Binary Synchronous Communications (BSC) protocol invented by IBM. BSC is an early link-level protocol used to connect two separate nodes. It was originally not intended to be used in a multinode network, but doing so revealed several deficiencies of the protocol. In the absence of standardization, manufacturers and organizations felt free to enhance the protocol, creating incompatible versions on their networks. In some cases, this was deliberately done to discourage users from using equipment from other manufacturers. There are more than 50 variants of the original bi-sync protocol. One can assume, that a standard would have prevented at least some of this from happening. In some cases, protocols gain market dominance without going through a standardization process. Such protocols are referred to as de facto standards. De facto standards are common in emerging markets, niche markets, or markets that are monopolized (or oligopolized). They can hold a market in a very negative grip, especially when used to scare away competition. From a historical perspective, standardization should be seen as a measure to counteract the ill-effects of de facto standards. Positive exceptions exist; a de facto standard operating system like Linux does not have this negative grip on its market, because the sources are published and maintained in an open way, thus inviting competition. === Standards organizations === Some of the standards organizations of relevance for communication protocols are the International Organization for Standardization (ISO), the International Telecommunication Union (ITU), the Institute of Electrical and Electronics Engineers (IEEE), and the Internet Engineering Task Force (IETF). The IETF maintains the protocols in use on the Internet. The IEEE controls many software and hardware protocols in the electronics industry for commercial and consumer devices. The ITU is an umbrella organization of telecommunication engineers designing the public switched telephone network (PSTN), as well as many radio communication systems. For marine electronics the NMEA standards are used. The World Wide Web Consortium (W3C) produces protocols and standards for Web technologies. International standards organizations are supposed to be more impartial than local organizations with a national or commercial self-interest to consider. Standards organizations also do research and development for standards of the future. In practice, the standards organizations mentioned, cooperate closely with each other. Multiple standards bodies may be involved in the development of a protocol. 
If they are uncoordinated, then the result may be multiple, incompatible definitions of a protocol, or multiple, incompatible interpretations of messages; important invariants in one definition (e.g., that time-to-live values are monotone decreasing to prevent stable routing loops) may not be respected in another. === The standardization process === In the ISO, the standardization process starts off with the commissioning of a sub-committee workgroup. The workgroup issues working drafts and discussion documents to interested parties (including other standards bodies) in order to provoke discussion and comments. This will generate a lot of questions, much discussion and usually some disagreement. These comments are taken into account and a draft proposal is produced by the working group. After feedback, modification, and compromise the proposal reaches the status of a draft international standard, and ultimately an international standard. International standards are reissued periodically to handle the deficiencies and reflect changing views on the subject. === OSI standardization === A lesson learned from ARPANET, the predecessor of the Internet, was that protocols need a framework to operate. It is therefore important to develop a general-purpose, future-proof framework suitable for structured protocols (such as layered protocols) and their standardization. This would prevent protocol standards with overlapping functionality and would allow clear definition of the responsibilities of a protocol at the different levels (layers). This gave rise to the Open Systems Interconnection model (OSI model), which is used as a framework for the design of standard protocols and services conforming to the various layer specifications. In the OSI model, communicating systems are assumed to be connected by an underlying physical medium providing a basic transmission mechanism. The layers above it are numbered. Each layer provides service to the layer above it using the services of the layer immediately below it. The top layer provides services to the application process. The layers communicate with each other by means of an interface, called a service access point. Corresponding layers at each system are called peer entities. To communicate, two peer entities at a given layer use a protocol specific to that layer which is implemented by using services of the layer below. For each layer, there are two types of standards: protocol standards defining how peer entities at a given layer communicate, and service standards defining how a given layer communicates with the layer above it. In the OSI model, the layers and their functionality are (from highest to lowest layer): The Application layer may provide the following services to the application processes: identification of the intended communication partners, establishment of the necessary authority to communicate, determination of availability and authentication of the partners, agreement on privacy mechanisms for the communication, agreement on responsibility for error recovery and procedures for ensuring data integrity, synchronization between cooperating application processes, identification of any constraints on syntax (e.g. character sets and data structures), determination of cost and acceptable quality of service, selection of the dialogue discipline, including required logon and logoff procedures. 
The presentation layer may provide the following services to the application layer: a request for the establishment of a session, data transfer, negotiation of the syntax to be used between the application layers, any necessary syntax transformations, formatting and special purpose transformations (e.g., data compression and data encryption). The session layer may provide the following services to the presentation layer: establishment and release of session connections, normal and expedited data exchange, a quarantine service which allows the sending presentation entity to instruct the receiving session entity not to release data to its presentation entity without permission, interaction management so presentation entities can control whose turn it is to perform certain control functions, resynchronization of a session connection, reporting of unrecoverable exceptions to the presentation entity. The transport layer provides reliable and transparent data transfer in a cost-effective way as required by the selected quality of service. It may support the multiplexing of several transport connections on to one network connection or split one transport connection into several network connections. The network layer does the setup, maintenance and release of network paths between transport peer entities. When relays are needed, routing and relay functions are provided by this layer. The quality of service is negotiated between network and transport entities at the time the connection is set up. This layer is also responsible for network congestion control. The data link layer does the setup, maintenance and release of data link connections. Errors occurring in the physical layer are detected and may be corrected. Errors are reported to the network layer. The exchange of data link units (including flow control) is defined by this layer. The physical layer describes details like the electrical characteristics of the physical connection, the transmission techniques used, and the setup, maintenance and clearing of physical connections. In contrast to the TCP/IP layering scheme, which assumes a connectionless network, RM/OSI assumed a connection-oriented network. Connection-oriented networks are more suitable for wide area networks and connectionless networks are more suitable for local area networks. Connection-oriented communication requires some form of session and (virtual) circuits, hence the (in the TCP/IP model lacking) session layer. The constituent members of ISO were mostly concerned with wide area networks, so the development of RM/OSI concentrated on connection-oriented networks and connectionless networks were first mentioned in an addendum to RM/OSI and later incorporated into an update to RM/OSI. At the time, the IETF had to cope with this and the fact that the Internet needed protocols that simply were not there. As a result, the IETF developed its own standardization process based on "rough consensus and running code". The standardization process is described by RFC 2026. Nowadays, the IETF has become a standards organization for the protocols in use on the Internet. RM/OSI has extended its model to include connectionless services and because of this, both TCP and IP could be developed into international standards. == Wire image == The wire image of a protocol is the information that a non-participant observer is able to glean from observing the protocol messages, including both information explicitly given meaning by the protocol, but also inferences made by the observer. 
Unencrypted protocol metadata is one source making up the wire image, and side-channels including packet timing also contribute. Different observers with different vantages may see different wire images. The wire image is relevant to end-user privacy and the extensibility of the protocol. If some portion of the wire image is not cryptographically authenticated, it is subject to modification by intermediate parties (i.e., middleboxes), which can influence protocol operation. Even if authenticated, if a portion is not encrypted, it will form part of the wire image, and intermediate parties may intervene depending on its content (e.g., dropping packets with particular flags). Signals deliberately intended for intermediary consumption may be left authenticated but unencrypted. The wire image can be deliberately engineered, encrypting parts that intermediaries should not be able to observe and providing signals for what they should be able to. If provided signals are decoupled from the protocol's operation, they may become untrustworthy. Benign network management and research are affected by metadata encryption; protocol designers must balance observability for operability and research against ossification resistance and end-user privacy. The IETF announced in 2014 that it had determined that large-scale surveillance of protocol operations is an attack due to the ability to infer information from the wire image about users and their behaviour, and that the IETF would "work to mitigate pervasive monitoring" in its protocol designs; this had not been done systematically previously. The Internet Architecture Board recommended in 2023 that disclosure of information by a protocol to the network should be intentional, performed with the agreement of both recipient and sender, authenticated to the degree possible and necessary, only acted upon to the degree of its trustworthiness, and minimised and provided to a minimum number of entities. Engineering the wire image and controlling what signals are provided to network elements was a "developing field" in 2023, according to the IAB. == Ossification == Protocol ossification is the loss of flexibility, extensibility and evolvability of network protocols. This is largely due to middleboxes that are sensitive to the wire image of the protocol, and which can interrupt or interfere with messages that are valid but which the middlebox does not correctly recognize. This is a violation of the end-to-end principle. Secondary causes include inflexibility in endpoint implementations of protocols. Ossification is a major issue in Internet protocol design and deployment, as it can prevent new protocols or extensions from being deployed on the Internet, or place strictures on the design of new protocols; new protocols may have to be encapsulated in an already-deployed protocol or mimic the wire image of another protocol. Because of ossification, the Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) are the only practical choices for transport protocols on the Internet, and TCP itself has significantly ossified, making extension or modification of the protocol difficult. Recommended methods of preventing ossification include encrypting protocol metadata, and ensuring that extension points are exercised and wire image variability is exhibited as fully as possible; remedying existing ossification requires coordination across protocol participants. 
QUIC is the first IETF transport protocol to have been designed with deliberate anti-ossification properties. == Taxonomies == Classification schemes for protocols usually focus on the domain of use and function. As an example of domain of use, connection-oriented protocols and connectionless protocols are used on connection-oriented networks and connectionless networks respectively. An example of function is a tunneling protocol, which is used to encapsulate packets in a high-level protocol so that the packets can be passed across a transport system using the high-level protocol. A layering scheme combines both function and domain of use. The dominant layering schemes are the ones developed by the IETF and by ISO. Despite the fact that the underlying assumptions of the layering schemes are different enough to warrant distinguishing the two, it is a common practice to compare the two by relating common protocols to the layers of the two schemes. The layering scheme from the IETF is called Internet layering or TCP/IP layering. The layering scheme from ISO is called the OSI model or ISO layering. In networking equipment configuration, a term-of-art distinction is often drawn: The term protocol strictly refers to the transport layer, and the term service refers to protocols utilizing a protocol for transport. In the common case of TCP and UDP, services are distinguished by port numbers. Conformance to these port numbers is voluntary, so in content inspection systems the term service strictly refers to port numbers, and the term application is often used to refer to protocols identified through inspection signatures. == See also == Cryptographic protocol – Aspect of cryptography Lists of network protocols Protocol Builder – Programming tool to build network connectivity components == Notes == == References == === Bibliography === Radia Perlman (1999). Interconnections: Bridges, Routers, Switches, and Internetworking Protocols (2nd ed.). Addison-Wesley. ISBN 0-201-63448-1. In particular Ch. 18 on "network design folklore", which is also available online Gerard J. Holzmann (1991). Design and Validation of Computer Protocols. Prentice Hall. ISBN 0-13-539925-4. Douglas E. Comer (2000). Internetworking with TCP/IP - Principles, Protocols and Architecture (4th ed.). Prentice Hall. ISBN 0-13-018380-6. In particular Ch.11 Protocol layering. Also has an RFC guide and a Glossary of Internetworking Terms and Abbreviations. R. Braden, ed. (1989). Requirements for Internet Hosts -- Communication Layers. Internet Engineering Task Force abbr. IETF. doi:10.17487/RFC1122. RFC 1122. Describes TCP/IP to the implementors of protocol software. In particular the introduction gives an overview of the design goals of the suite. M. Ben-Ari (1982). Principles of concurrent programming (10th Print ed.). Prentice Hall International. ISBN 0-13-701078-8. C.A.R. Hoare (1985). Communicating sequential processes (10th Print ed.). Prentice Hall International. ISBN 0-13-153271-5. R.D. Tennent (1981). Principles of programming languages (10th Print ed.). Prentice Hall International. ISBN 0-13-709873-1. Brian W Marsden (1986). Communication network protocols (2nd ed.). Chartwell Bratt. ISBN 0-86238-106-1. Andrew S. Tanenbaum (1984). Structured computer organization (10th Print ed.). Prentice Hall International. ISBN 0-13-854605-3. Bryant, Stewart; Morrow, Monique, eds. (November 2009). Uncoordinated Protocol Development Considered Harmful. doi:10.17487/RFC5704. RFC 5704. Farrell, Stephen; Tschofenig, Hannes (May 2014).
Pervasive Monitoring Is an Attack. doi:10.17487/RFC7258. RFC 7258. Trammell, Brian; Kuehlewind, Mirja (April 2019). The Wire Image of a Network Protocol. doi:10.17487/RFC8546. RFC 8546. Hardie, Ted, ed. (April 2019). Transport Protocol Path Signals. doi:10.17487/RFC8558. RFC 8558. Fairhurst, Gorry; Perkins, Colin (July 2021). Considerations around Transport Header Confidentiality, Network Operations, and the Evolution of Internet Transport Protocols. doi:10.17487/RFC9065. RFC 9065. Thomson, Martin; Pauly, Tommy (December 2021). Long-Term Viability of Protocol Extension Mechanisms. doi:10.17487/RFC9170. RFC 9170. Arkko, Jari; Hardie, Ted; Pauly, Tommy; Kühlewind, Mirja (July 2023). Considerations on Application - Network Collaboration Using Path Signals. doi:10.17487/RFC9419. RFC 9419. McQuistin, Stephen; Perkins, Colin; Fayed, Marwan (July 2016). Implementing Real-Time Transport Services over an Ossified Network. 2016 Applied Networking Research Workshop. doi:10.1145/2959424.2959443. hdl:1893/26111. Papastergiou, Giorgos; Fairhurst, Gorry; Ros, David; Brunstrom, Anna; Grinnemo, Karl-Johan; Hurtig, Per; Khademi, Naeem; Tüxen, Michael; Welzl, Michael; Damjanovic, Dragana; Mangiante, Simone (2017). "De-Ossifying the Internet Transport Layer: A Survey and Future Perspectives". IEEE Communications Surveys & Tutorials. 19: 619–639. doi:10.1109/COMST.2016.2626780. hdl:2164/8317. S2CID 1846371. Moschovitis, Christos J. P. (1999). History of the Internet: A Chronology, 1843 to the Present. ABC-CLIO. ISBN 978-1-57607-118-2. == External links == Javvin's Protocol Dictionary at the Wayback Machine (archived 2004-06-10) Overview of protocols in telecontrol field with OSI Reference Model
Wikipedia/Interface_(computer_science)
Systems engineering is an interdisciplinary field of engineering and engineering management that focuses on how to design, integrate, and manage complex systems over their life cycles. At its core, systems engineering utilizes systems thinking principles to organize this body of knowledge. The individual outcome of such efforts, an engineered system, can be defined as a combination of components that work in synergy to collectively perform a useful function. Issues such as requirements engineering, reliability, logistics, coordination of different teams, testing and evaluation, maintainability, and many other disciplines, aka "ilities", necessary for successful system design, development, implementation, and ultimate decommission become more difficult when dealing with large or complex projects. Systems engineering deals with work processes, optimization methods, and risk management tools in such projects. It overlaps technical and human-centered disciplines such as industrial engineering, production systems engineering, process systems engineering, mechanical engineering, manufacturing engineering, production engineering, control engineering, software engineering, electrical engineering, cybernetics, aerospace engineering, organizational studies, civil engineering and project management. Systems engineering ensures that all likely aspects of a project or system are considered and integrated into a whole. The systems engineering process is a discovery process that is quite unlike a manufacturing process. A manufacturing process is focused on repetitive activities that achieve high-quality outputs with minimum cost and time. The systems engineering process must begin by discovering the real problems that need to be resolved and identifying the most probable or highest-impact failures that can occur. Systems engineering involves finding solutions to these problems. == History == The term systems engineering can be traced back to Bell Telephone Laboratories in the 1940s. The need to identify and manipulate the properties of a system as a whole, which in complex engineering projects may greatly differ from the sum of the parts' properties, motivated various industries, especially those developing systems for the U.S. military, to apply the discipline. When it was no longer possible to rely on design evolution to improve upon a system and the existing tools were not sufficient to meet growing demands, new methods began to be developed that addressed the complexity directly. The continuing evolution of systems engineering comprises the development and identification of new methods and modeling techniques. These methods aid in a better comprehension of the design and developmental control of engineering systems as they grow more complex. Popular tools that are often used in the systems engineering context were developed during these times, including Universal Systems Language (USL), Unified Modeling Language (UML), Quality function deployment (QFD), and Integration Definition (IDEF). In 1990, a professional society for systems engineering, the National Council on Systems Engineering (NCOSE), was founded by representatives from a number of U.S. corporations and organizations. NCOSE was created to address the need for improvements in systems engineering practices and education. As a result of growing involvement from systems engineers outside of the U.S., the name of the organization was changed to the International Council on Systems Engineering (INCOSE) in 1995. 
Schools in several countries offer graduate programs in systems engineering, and continuing education options are also available for practicing engineers. == Concept == Systems engineering signifies only an approach and, more recently, a discipline in engineering. The aim of education in systems engineering is to formalize various approaches simply and, in doing so, to identify new methods and research opportunities, similar to what occurs in other fields of engineering. As an approach, systems engineering is holistic and interdisciplinary in flavor. === Origins and traditional scope === The traditional scope of engineering embraces the conception, design, development, production, and operation of physical systems. Systems engineering, as originally conceived, falls within this scope. "Systems engineering", in this sense of the term, refers to the building of engineering concepts. === Evolution to a broader scope === The use of the term "systems engineer" has evolved over time to embrace a wider, more holistic concept of "systems" and of engineering processes. This evolution of the definition has been a subject of ongoing controversy, and the term continues to apply to both the narrower and a broader scope. Traditional systems engineering was seen as a branch of engineering in the classical sense, that is, as applied only to physical systems, such as spacecraft and aircraft. More recently, systems engineering has evolved to take on a broader meaning especially when humans were seen as an essential component of a system. Peter Checkland, for example, captures the broader meaning of systems engineering by stating that 'engineering' "can be read in its general sense; you can engineer a meeting or a political agreement.": 10  Consistent with the broader scope of systems engineering, the Systems Engineering Body of Knowledge (SEBoK) has defined three types of systems engineering: Product Systems Engineering (PSE) is the traditional systems engineering focused on the design of physical systems consisting of hardware and software. Enterprise Systems Engineering (ESE) pertains to the view of enterprises, that is, organizations or combinations of organizations, as systems. Service Systems Engineering (SSE) has to do with the engineering of service systems. Checkland defines a service system as a system which is conceived as serving another system. Most civil infrastructure systems are service systems. === Holistic view === Systems engineering focuses on analyzing and eliciting customer needs and required functionality early in the development cycle, documenting requirements, then proceeding with design synthesis and system validation while considering the complete problem, the system lifecycle. This includes fully understanding all of the stakeholders involved. Oliver et al. claim that the systems engineering process can be decomposed into: A Systems Engineering Technical Process A Systems Engineering Management Process Within Oliver's model, the goal of the Management Process is to organize the technical effort in the lifecycle, while the Technical Process includes assessing available information, defining effectiveness measures, creating a behavior model, creating a structure model, performing trade-off analysis, and creating a sequential build and test plan. Although several models are used in industry, depending on the application, all of them aim to identify the relationship between the various stages mentioned above and to incorporate feedback.
Examples of such models include the Waterfall model and the VEE model (also called the V model). === Interdisciplinary field === System development often requires contribution from diverse technical disciplines. By providing a systems (holistic) view of the development effort, systems engineering helps mold all the technical contributors into a unified team effort, forming a structured development process that proceeds from concept to production to operation and, in some cases, to termination and disposal. In an acquisition, the holistic integrative discipline combines contributions and balances tradeoffs among cost, schedule, and performance while maintaining an acceptable level of risk covering the entire life cycle of the item. This perspective is often replicated in educational programs, in that systems engineering courses are taught by faculty from other engineering departments, which helps create an interdisciplinary environment. === Managing complexity === The need for systems engineering arose with the increase in complexity of systems and projects, in turn exponentially increasing the possibility of component friction, and therefore the unreliability of the design. When speaking in this context, complexity incorporates not only engineering systems but also the logical human organization of data. At the same time, a system can become more complex due to an increase in size as well as with an increase in the amount of data, variables, or the number of fields that are involved in the design. The International Space Station is an example of such a system. The development of smarter control algorithms, microprocessor design, and analysis of environmental systems also come within the purview of systems engineering. Systems engineering encourages the use of tools and methods to better comprehend and manage complexity in systems. Some examples of these tools can be seen here: System architecture System model, modeling, and simulation Mathematical optimization System dynamics Systems analysis Statistical analysis Reliability engineering Decision making Taking an interdisciplinary approach to engineering systems is inherently complex since the behavior of and interaction among system components is not always immediately well defined or understood. Defining and characterizing such systems and subsystems and the interactions among them is one of the goals of systems engineering. In doing so, the gap that exists between informal requirements from users, operators, marketing organizations, and technical specifications is successfully bridged. === Scope === The principles of systems engineering – holism, emergent behavior, boundary, et al. – can be applied to any system, complex or otherwise, provided systems thinking is employed at all levels. Besides defense and aerospace, many information and technology-based companies, software development firms, and industries in the field of electronics & communications require systems engineers as part of their team. An analysis by the INCOSE Systems Engineering Center of Excellence (SECOE) indicates that optimal effort spent on systems engineering is about 15–20% of the total project effort. At the same time, studies have shown that systems engineering essentially leads to a reduction in costs among other benefits. However, no quantitative survey at a larger scale encompassing a wide variety of industries has been conducted until recently. Such studies are underway to determine the effectiveness and quantify the benefits of systems engineering. 
Systems engineering encourages the use of modeling and simulation to validate assumptions or theories on systems and the interactions within them. Methods that allow the early detection of possible failures, as used in safety engineering, are integrated into the design process. At the same time, decisions made at the beginning of a project whose consequences are not clearly understood can have enormous implications later in the life of a system, and it is the task of the modern systems engineer to explore these issues and make critical decisions. No method guarantees today's decisions will still be valid when a system goes into service years or decades after first conceived. However, there are techniques that support the process of systems engineering. Examples include soft systems methodology, Jay Wright Forrester's System dynamics method, and the Unified Modeling Language (UML)—all currently being explored, evaluated, and developed to support the engineering decision process. == Education == Education in systems engineering is often seen as an extension to the regular engineering courses, reflecting the industry attitude that engineering students need a foundational background in one of the traditional engineering disciplines (e.g. aerospace engineering, civil engineering, electrical engineering, mechanical engineering, manufacturing engineering, industrial engineering, chemical engineering)—plus practical, real-world experience to be effective as systems engineers. Undergraduate university programs explicitly in systems engineering are growing in number but remain uncommon; degrees that include such material are most often presented as a BS in Industrial Engineering. Typically, programs (either by themselves or in combination with interdisciplinary study) are offered beginning at the graduate level in both academic and professional tracks, resulting in the grant of either an MS/MEng or Ph.D./EngD degree. INCOSE, in collaboration with the Systems Engineering Research Center at Stevens Institute of Technology, maintains a regularly updated directory of worldwide academic programs at suitably accredited institutions. As of 2017, it lists over 140 universities in North America offering more than 400 undergraduate and graduate programs in systems engineering. Widespread institutional acknowledgment of the field as a distinct subdiscipline is quite recent; the 2009 edition of the same publication reported the number of such schools and programs at only 80 and 165, respectively. Education in systems engineering can be taken as systems-centric or domain-centric: Systems-centric programs treat systems engineering as a separate discipline and most of the courses are taught focusing on systems engineering principles and practice. Domain-centric programs offer systems engineering as an option that can be exercised with another major field in engineering. Both of these patterns strive to educate the systems engineer who is able to oversee interdisciplinary projects with the depth required of a core engineer. == Systems engineering topics == Systems engineering tools are strategies, procedures, and techniques that aid in performing systems engineering on a project or product. The purpose of these tools varies from database management, graphical browsing, simulation, and reasoning, to document production, neutral import/export, and more. === System === There are many definitions of what a system is in the field of systems engineering.
Below are a few authoritative definitions: ANSI/EIA-632-1999: "An aggregation of end products and enabling products to achieve a given purpose." DAU Systems Engineering Fundamentals: "an integrated composite of people, products, and processes that provide a capability to satisfy a stated need or objective." IEEE Std 1220-1998: "A set or arrangement of elements and processes that are related and whose behavior satisfies customer/operational needs and provides for life cycle sustainment of the products." INCOSE Systems Engineering Handbook: "homogeneous entity that exhibits predefined behavior in the real world and is composed of heterogeneous parts that do not individually exhibit that behavior and an integrated configuration of components and/or subsystems." INCOSE: "A system is a construct or collection of different elements that together produce results not obtainable by the elements alone. The elements, or parts, can include people, hardware, software, facilities, policies, and documents; that is, all things required to produce systems-level results. The results include system-level qualities, properties, characteristics, functions, behavior, and performance. The value added by the system as a whole, beyond that contributed independently by the parts, is primarily created by the relationship among the parts; that is, how they are interconnected." ISO/IEC 15288:2008: "A combination of interacting elements organized to achieve one or more stated purposes." NASA Systems Engineering Handbook: "(1) The combination of elements that function together to produce the capability to meet a need. The elements include all hardware, software, equipment, facilities, personnel, processes, and procedures needed for this purpose. (2) The end product (which performs operational functions) and enabling products (which provide life-cycle support services to the operational end products) that make up a system." === Systems engineering processes === Systems engineering processes encompass all creative, manual, and technical activities necessary to define the product and which need to be carried out to convert a system definition to a sufficiently detailed system design specification for product manufacture and deployment. Design and development of a system can be divided into four stages, each with different definitions: Task definition (informative definition) Conceptual stage (cardinal definition) Design stage (formative definition) Implementation stage (manufacturing definition) Depending on their application, tools are used for various stages of the systems engineering process: === Using models === Models play important and diverse roles in systems engineering. A model can be defined in several ways, including: An abstraction of reality designed to answer specific questions about the real world An imitation, analog, or representation of a real-world process or structure; or A conceptual, mathematical, or physical tool to assist a decision-maker. Together, these definitions are broad enough to encompass physical engineering models used in the verification of a system design, as well as schematic models like a functional flow block diagram and mathematical (i.e. quantitative) models used in the trade study process. This section focuses on the last. The main reason for using mathematical models and diagrams in trade studies is to provide estimates of system effectiveness, performance or technical attributes, and cost from a set of known or estimable quantities. 
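As a minimal sketch of such a quantitative relationship, the toy model below rolls a few estimable quantities up into the two outcome variables most trade studies need, an effectiveness score and a lifecycle cost; the attribute names, weights, and dollar figures are invented purely for illustration.

```python
def effectiveness(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum of normalized attribute scores (each 0..1) -- a common simple form."""
    return sum(weights[name] * value for name, value in scores.items())

def lifecycle_cost(unit_cost: float, quantity: int, annual_support: float, years: int) -> float:
    """Acquisition cost plus support cost over the planned service life."""
    return unit_cost * quantity + annual_support * years

# Invented figures for one candidate design alternative.
candidate = {"reliability": 0.9, "throughput": 0.7, "usability": 0.6}
weights = {"reliability": 0.5, "throughput": 0.3, "usability": 0.2}

print(f"effectiveness = {effectiveness(candidate, weights):.2f}")
print(f"lifecycle cost = {lifecycle_cost(120_000, 4, 15_000, 10):,.0f}")
```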
Typically, a collection of separate models is needed to provide all of these outcome variables. The heart of any mathematical model is a set of meaningful quantitative relationships among its inputs and outputs. These relationships can be as simple as adding up constituent quantities to obtain a total, or as complex as a set of differential equations describing the trajectory of a spacecraft in a gravitational field. Ideally, the relationships express causality, not just correlation. Furthermore, key to successful systems engineering activities are also the methods with which these models are efficiently and effectively managed and used to simulate the systems. However, diverse domains often present recurring problems of modeling and simulation for systems engineering, and new advancements are aiming to cross-fertilize methods among distinct scientific and engineering communities, under the title of 'Modeling & Simulation-based Systems Engineering'. === Modeling formalisms and graphical representations === Initially, when the primary purpose of a systems engineer is to comprehend a complex problem, graphic representations of a system are used to communicate a system's functional and data requirements. Common graphical representations include: Functional flow block diagram (FFBD) Model-based design Data flow diagram (DFD) N2 chart IDEF0 diagram Use case diagram Sequence diagram Block diagram Signal-flow graph USL function maps and type maps Enterprise architecture frameworks A graphical representation relates the various subsystems or parts of a system through functions, data, or interfaces. Any or each of the above methods is used in an industry based on its requirements. For instance, the N2 chart may be used where interfaces between systems are important. Part of the design phase is to create structural and behavioral models of the system. Once the requirements are understood, it is now the responsibility of a systems engineer to refine them and to determine, along with other engineers, the best technology for a job. At this point starting with a trade study, systems engineering encourages the use of weighted choices to determine the best option. A decision matrix, or Pugh method, is one way (QFD is another) to make this choice while considering all criteria that are important. The trade study in turn informs the design, which again affects graphic representations of the system (without changing the requirements). In an SE process, this stage represents the iterative step that is carried out until a feasible solution is found. A decision matrix is often populated using techniques such as statistical analysis, reliability analysis, system dynamics (feedback control), and optimization methods. === Other tools === ==== Systems Modeling Language ==== Systems Modeling Language (SysML), a modeling language used for systems engineering applications, supports the specification, analysis, design, verification and validation of a broad range of complex systems. ==== Lifecycle Modeling Language ==== Lifecycle Modeling Language (LML), is an open-standard modeling language designed for systems engineering that supports the full lifecycle: conceptual, utilization, support, and retirement stages. == Related fields and sub-fields == Many related fields may be considered tightly coupled to systems engineering. 
The following areas have contributed to the development of systems engineering as a distinct entity: === Cognitive systems engineering === Cognitive systems engineering (CSE) is a specific approach to the description and analysis of human-machine systems or sociotechnical systems. The three main themes of CSE are how humans cope with complexity, how work is accomplished by the use of artifacts, and how human-machine systems and socio-technical systems can be described as joint cognitive systems. CSE has since its beginning become a recognized scientific discipline, sometimes also referred to as cognitive engineering. The concept of a Joint Cognitive System (JCS) has in particular become widely used as a way of understanding how complex socio-technical systems can be described with varying degrees of resolution. The more than 20 years of experience with CSE has been described extensively. === Configuration management === Like systems engineering, configuration management as practiced in the defense and aerospace industry is a broad systems-level practice. The field parallels the taskings of systems engineering; where systems engineering deals with requirements development, allocation to development items and verification, configuration management deals with requirements capture, traceability to the development item, and audit of development item to ensure that it has achieved the desired functionality and outcomes that systems engineering and/or Test and Verification Engineering have obtained and proven through objective testing. === Control engineering === Control engineering and its design and implementation of control systems, used extensively in nearly every industry, is a large sub-field of systems engineering. The cruise control on an automobile and the guidance system for a ballistic missile are two examples. Control systems theory is an active field of applied mathematics involving the investigation of solution spaces and the development of new methods for the analysis of the control process. === Industrial engineering === Industrial engineering is a branch of engineering that concerns the development, improvement, implementation, and evaluation of integrated systems of people, money, knowledge, information, equipment, energy, material, and process. Industrial engineering draws upon the principles and methods of engineering analysis and synthesis, as well as mathematical, physical, and social sciences together with the principles and methods of engineering analysis and design to specify, predict, and evaluate results obtained from such systems. === Production Systems Engineering === Production Systems Engineering (PSE) is an emerging branch of Engineering intended to uncover fundamental principles of production systems and utilize them for analysis, continuous improvement, and design. === Interface design === Interface design and its specification are concerned with assuring that the pieces of a system connect and inter-operate with other parts of the system and with external systems as necessary. Interface design also includes assuring that system interfaces are able to accept new features, including mechanical, electrical, and logical interfaces, including reserved wires, plug-space, command codes, and bits in communication protocols. This is known as extensibility. Human-Computer Interaction (HCI) or Human-Machine Interface (HMI) is another aspect of interface design and is a critical aspect of modern systems engineering. 
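In message-based interfaces, the extensibility mentioned above is often achieved by carrying an explicit version number and reserved fields in every message, so that a later revision can assign them meaning without breaking deployed parsers. The five-byte layout below is hypothetical and exists only to make the idea concrete.

```python
import struct

# Hypothetical command message header: version, command code, reserved byte, payload length.
HEADER = struct.Struct("!BBBH")
PROTOCOL_VERSION = 1

def build_message(command: int, payload: bytes) -> bytes:
    # The reserved byte is always sent as zero today; a future revision may give it
    # a meaning, and current receivers simply ignore whatever arrives there.
    return HEADER.pack(PROTOCOL_VERSION, command, 0, len(payload)) + payload

def parse_message(data: bytes) -> tuple[int, bytes]:
    version, command, _reserved, length = HEADER.unpack(data[:HEADER.size])
    if version > PROTOCOL_VERSION:
        raise ValueError("message uses a newer interface revision than this parser")
    return command, data[HEADER.size:HEADER.size + length]

command, body = parse_message(build_message(0x10, b"set-point=72"))
print(hex(command), body)
```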
Systems engineering principles are applied in the design of communication protocols for local area networks and wide area networks. === Mechatronic engineering === Mechatronic engineering, like systems engineering, is a multidisciplinary field of engineering that uses dynamic systems modeling to express tangible constructs. In that regard, it is almost indistinguishable from Systems Engineering, but what sets it apart is the focus on smaller details rather than larger generalizations and relationships. As such, both fields are distinguished by the scope of their projects rather than the methodology of their practice. === Operations research === Operations research supports systems engineering. Operations research, briefly, is concerned with the optimization of a process under multiple constraints. === Performance engineering === Performance engineering is the discipline of ensuring a system meets customer expectations for performance throughout its life. Performance is usually defined as the speed with which a certain operation is executed or the capability of executing a number of such operations in a unit of time. Performance may be degraded when operations queued to execute are throttled by limited system capacity. For example, the performance of a packet-switched network is characterized by the end-to-end packet transit delay or the number of packets switched in an hour. The design of high-performance systems uses analytical or simulation modeling, whereas the delivery of high-performance implementation involves thorough performance testing. Performance engineering relies heavily on statistics, queueing theory, and probability theory for its tools and processes. === Program management and project management === Program management (or project management) has many similarities with systems engineering, but has broader-based origins than the engineering ones of systems engineering. Project management is also closely related to both program management and systems engineering. Both include scheduling as engineering support tool in assessing interdisciplinary concerns under management process. In particular, the direct relationship of resources, performance features, and risk to the duration of a task or the dependency links among tasks and impacts across the system lifecycle are systems engineering concerns. === Proposal engineering === Proposal engineering is the application of scientific and mathematical principles to design, construct, and operate a cost-effective proposal development system. Basically, proposal engineering uses the "systems engineering process" to create a cost-effective proposal and increase the odds of a successful proposal. === Reliability engineering === Reliability engineering is the discipline of ensuring a system meets customer expectations for reliability throughout its life (i.e. it does not fail more frequently than expected). Next to the prediction of failure, it is just as much about the prevention of failure. Reliability engineering applies to all aspects of the system. It is closely associated with maintainability, availability (dependability or RAMS preferred by some), and integrated logistics support. Reliability engineering is always a critical component of safety engineering, as in failure mode and effects analysis (FMEA) and hazard fault tree analysis, and of security engineering. === Risk management === Risk management, the practice of assessing and dealing with risk is one of the interdisciplinary parts of Systems Engineering. 
In development, acquisition, or operational activities, including risk in tradeoffs against cost, schedule, and performance involves iteratively managing the configuration, traceability, and evaluation of schedules and requirements across domains and across the system lifecycle, which requires the interdisciplinary technical approach of systems engineering. Systems engineering relies on risk management to define, tailor, implement, and monitor a structured risk process that is integrated into the overall effort. === Safety engineering === The techniques of safety engineering may be applied by non-specialist engineers in designing complex systems to minimize the probability of safety-critical failures. The "System Safety Engineering" function helps to identify "safety hazards" in emerging designs and may assist with techniques to "mitigate" the effects of (potentially) hazardous conditions that cannot be designed out of systems. === Security engineering === Security engineering can be viewed as an interdisciplinary field that integrates the community of practice for control systems design, reliability, safety, and systems engineering. It may involve such sub-specialties as authentication of system users, system targets, and others: people, objects, and processes. === Software engineering === From its beginnings, software engineering has helped shape modern systems engineering practice. The techniques used in the handling of the complexities of large software-intensive systems have had a major effect on the shaping and reshaping of the tools, methods, and processes of Systems Engineering. == See also == == References == == Further reading == Madhavan, Guru (2024). Wicked Problems: How to Engineer a Better World. New York: W.W. Norton & Company. ISBN 978-0-393-65146-1. Blockley, D. Godfrey, P. Doing it Differently: Systems for Rethinking Infrastructure, Second Edition, ICE Publications, London, 2017. Buede, D.M., Miller, W.D. The Engineering Design of Systems: Models and Methods, Third Edition, John Wiley and Sons, 2016. Chestnut, H., Systems Engineering Methods. Wiley, 1967. Gianni, D. et al. (eds.), Modeling and Simulation-Based Systems Engineering Handbook, CRC Press, 2014 at CRC Goode, H.H., Robert E. Machol, System Engineering: An Introduction to the Design of Large-scale Systems, McGraw-Hill, 1957. Hitchins, D. (1997) World Class Systems Engineering at hitchins.net. Lienig, J., Bruemmer, H., Fundamentals of Electronic Systems Design, Springer, 2017 ISBN 978-3-319-55839-4. Malakooti, B. (2013). Operations and Production Systems with Multiple Objectives. John Wiley & Sons. ISBN 978-1-118-58537-5 MITRE, The MITRE Systems Engineering Guide (pdf) NASA (2007) Systems Engineering Handbook, NASA/SP-2007-6105 Rev1, December 2007. NASA (2013) NASA Systems Engineering Processes and Requirements Archived 27 December 2016 at the Wayback Machine NPR 7123.1B, April 2013 NASA Procedural Requirements Oliver, D.W., et al. Engineering Complex Systems with Models and Objects. McGraw-Hill, 1997. Parnell, G.S., Driscoll, P.J., Henderson, D.L. (eds.), Decision Making in Systems Engineering and Management, 2nd ed., Hoboken, NJ: Wiley, 2011. This is a textbook for undergraduate students of engineering. Ramo, S., St.Clair, R.K. The Systems Approach: Fresh Solutions to Complex Problems Through Combining Science and Practical Common Sense, Anaheim, CA: KNI, Inc, 1998. Sage, A.P., Systems Engineering. Wiley IEEE, 1992. ISBN 0-471-53639-3.
Sage, A.P., Olson, S.R., Modeling and Simulation in Systems Engineering, 2001. SEBOK.org, Systems Engineering Body of Knowledge (SEBoK) Shermon, D. Systems Cost Engineering, Gower Publishing, 2009 Shishko, R., et al. (2005) NASA Systems Engineering Handbook. NASA Center for AeroSpace Information, 2005. Stevens, R., et al. Systems Engineering: Coping with Complexity. Prentice Hall, 1998. US Air Force, SMC Systems Engineering Primer & Handbook, 2004 US DoD Systems Management College (2001) Systems Engineering Fundamentals. Defense Acquisition University Press, 2001 US DoD Guide for Integrating Systems Engineering into DoD Acquisition Contracts Archived 29 August 2017 at the Wayback Machine, 2006 US DoD MIL-STD-499 System Engineering Management == External links == ICSEng homepage INCOSE homepage INCOSE UK homepage PPI SE Goldmine homepage Systems Engineering Body of Knowledge Systems Engineering Tools AcqNotes DoD Systems Engineering Overview NDIA Systems Engineering Division
Wikipedia/Systems_engineer
Software architecture pattern is a reusable, proven solution to a specific, recurring problem focused on architectural design challenges, which can be applied within various architectural styles. == Examples == Some examples of architectural patterns: Publish–subscribe pattern Message broker == See also == List of software architecture styles and patterns Process Driven Messaging Service Enterprise architecture Common layers in an information system logical architecture == References == == Bibliography == Avgeriou, Paris; Zdun, Uwe (2005). "Architectural patterns revisited:a pattern language" (PDF). 10th European Conference on Pattern Languages of Programs (EuroPlop 2005), Irsee, Germany, July. UVK Verlagsgesellschaft. pp. 1–39. CiteSeerX 10.1.1.141.7444. ISBN 9783879408054. Buschmann F.; Meunier R.; Rohnert H.; Sommerlad P.; Stal M. (1996). Pattern-Oriented Software Architecture: A System of Patterns. Wiley. ISBN 9781118725269. Bass L.; Clements P.; Kazman R. (2003). Software Architecture in Practice. Addison-Wesley. ISBN 9780321154958.
Wikipedia/Architectural_pattern_(computer_science)
The systems architect is an information and communications technology professional. Systems architects define the architecture of a computerized system (i.e., a system composed of software and hardware) in order to fulfill certain requirements. Such definitions include: a breakdown of the system into components, the component interactions and interfaces (including with the environment, especially the user), and the technologies and resources to be used in its design and implementation. The systems architect's work should seek to avoid implementation issues and readily permit unanticipated extensions/modifications in future stages. Because of the extensive experience required for this, the systems architect is typically a very senior technologist with substantial, but general, knowledge of hardware, software, and similar (user) systems. Above all, the systems architect must be reasonably knowledgeable of the users' domain of experience. For example, the architect of an air traffic system needs to be more than superficially familiar with all of the tasks of an air traffic system, including those of all levels of users. The title of systems architect connotes higher-level design responsibilities than a systems engineer, software engineer or programmer, though day-to-day activities may overlap. == Overview == Systems architects interface with multiple stakeholders in an organization in order to understand the various levels of requirements, the domain, the viable technologies, and anticipated development process. Their work includes determining multiple design and implementation alternatives, assessing such alternatives based on all identified constraints (such as cost, schedule, space, power, safety, usability, reliability, maintainability, availability, and other "ilities"), and selecting the most suitable options for further design. The output of such work sets the core properties of the system and those that are hardest to change later. In small systems the architecture is typically defined directly by the developers. However, in larger systems, a systems architect should be appointed to outline the overall system, and to interface between the users, sponsors, and other stakeholders on one side and the engineers on the other. Very large, highly complex systems may include multiple architects, in which case the architects work together to integrate their subsystems or aspects, and respond to a chief architect responsible for the entire system. In general, the role of the architect is to act as a mediator between the users and the engineers, reconciling the users' needs and requirements with what the engineers have determined to be doable within the given (engineering) constraints. In systems design, the architects (and engineers) are responsible for: Interfacing with the user(s) and sponsor(s) and all other stakeholders in order to determine their (evolving) needs. Generating the highest level of system requirements, based on the users' needs and other constraints. Ensuring that this set of high level requirements is consistent, complete, correct, and operationally defined. Performing cost–benefit analyses to determine whether requirements are best met by manual, software, or hardware functions; making maximum use of commercial off-the-shelf or already developed components. 
Developing partitioning algorithms (and other processes) to allocate all present and foreseeable requirements into discrete partitions such that a minimum of communications is needed among partitions, and between the users and the system. Partitioning large systems into (successive layers of) subsystems and components, each of which can be handled by a single engineer, team of engineers, or subordinate architect. Interfacing with the design and implementation engineers and architects, so that any problems arising during design or implementation can be resolved in accordance with the fundamental design concepts, and users' needs and constraints. Ensuring that a maximally robust and extensible design is developed. Generating a set of acceptance test requirements, together with the designers, test engineers, and the users, which determines that all of the high-level requirements have been met, especially for the computer-human-interface. Generating products such as sketches, models, an early user guide, and prototypes to keep the users and the engineers constantly up to date and in agreement on the system to be provided as it is evolving. Ensuring that all architectural products and products with architectural input are maintained in the most current state and never allowed to seriously lag or become obsolete. == Systems architect: topics == Large systems architecture was developed as a way to handle systems too large for one person to conceive of, let alone design. Systems of this size are rapidly becoming the norm, so architectural approaches and architects are increasingly needed to solve the problems of large to very large systems. In general, increasingly large systems are reduced to 'human' proportions by a layering approach, where each layer is composed of a number of individually comprehensible sub-layers, each having its own principal engineer and/or architect. A complete layer at one level will be shown as a functional 'component' of a higher layer (and may disappear altogether at the highest layers). === Users and sponsors === Architects are expected to understand human needs and develop humanly functional and aesthetically pleasing products. A good architect is also the principal keeper of the users' vision of the end product, and of the process of deriving requirements from and implementing that vision. Architects do not follow exact procedures. They communicate with users/sponsors in a highly interactive, relatively informal way; together they extract the true requirements necessary for the designed (end) system. The architect must remain constantly in communication with the end users and with the (principal) systems engineers. Therefore, the architect must be intimately familiar with the users' environment and problem, and with the engineering environment(s) of likely solution spaces. === High level requirements === The user requirements specification should be a joint product of the users and architect: the users bring their needs and wish list, the architect brings knowledge of what is likely to prove doable within the cost, time and other constraints. The point at which the users' needs are translated into a set of high-level requirements is also the best time to write the first version of the acceptance test, which should, thereafter, be religiously kept up to date with the requirements. That way, the users will be absolutely clear about what they are getting. It is also a safeguard against untestable requirements, misunderstandings, and requirements creep.
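A requirement is testable when it can be restated as a concrete check. As a purely illustrative sketch, a high-level requirement such as "the system shall acknowledge an operator command within 200 ms" might be carried into the first version of the acceptance test roughly as follows; the system under test is a stand-in and the threshold is an assumed figure.

```python
import time

RESPONSE_BUDGET_S = 0.200  # taken from the written requirement: acknowledge within 200 ms

def acknowledge_command(command: str) -> str:
    """Stand-in for the real system under test."""
    return f"ack:{command}"

def test_command_acknowledged_within_budget() -> None:
    start = time.perf_counter()
    reply = acknowledge_command("open-valve-3")
    elapsed = time.perf_counter() - start
    assert reply.startswith("ack:"), "command was not acknowledged"
    assert elapsed <= RESPONSE_BUDGET_S, f"took {elapsed:.3f} s, budget is {RESPONSE_BUDGET_S} s"

test_command_acknowledged_within_budget()
print("acceptance check passed")
```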
The development of the first level of engineering requirements is not a purely analytical exercise and should also involve both the architect and engineer. If any compromises are to be made to meet constraints, the architect must ensure that the final product and overall look and feel do not stray very far from the users' intent. The engineer should focus on developing a design that optimizes against the constraints while ensuring a workable, reliable, extensible and robust product. The provision of needed services to the users is the true function of an engineered system. However, as systems become ever larger and more complex, and as their emphases move away from simple hardware and software components, the narrow application of traditional systems development principles has been found to be insufficient; the application of more general principles of systems, hardware, and software architecture to the design of (sub)systems is seen to be needed. Architecture may also be seen as a simplified model of the finished end product; its primary function is to define the parts and their relationships to each other so that the whole can be seen to be a consistent, complete, and correct representation of what the users had in mind, especially for the computer-human-interface. It is also used to ensure that the parts fit together and relate in the desired way. It is necessary to distinguish between the architecture of the users' world and the engineered systems architecture. The former represents and addresses problems and solutions in the user's world. It is principally captured in the computer-human-interfaces (CHI) of the engineered system. The engineered system represents the engineering solutions: how the engineer proposes to develop and/or select and combine the components of the technical infrastructure to support the CHI. In the absence of an experienced architect, there is an unfortunate tendency to confuse the two architectures. But the engineer thinks in terms of hardware and software and the technical solution space, whereas the users may be thinking in terms of solving a problem of getting people from point A to point B in a reasonable amount of time and with a reasonable expenditure of energy, or of getting needed information to customers and staff. A systems architect is expected to combine knowledge of both the architecture of the users' world and of (all potentially useful) engineering systems architectures. The former is a joint activity with the users; the latter is a joint activity with the engineers. The product is a set of high-level requirements reflecting the users' requirements which can be used by the engineers to develop systems design requirements. Because requirements evolve over the course of a project, especially a long one, an architect is needed until the system is accepted by the user: the architect ensures that all changes and interpretations made during the course of development do not compromise the users' viewpoint. === Cost/benefit analyses === Architects are generalists. They are not expected to be experts in any one technology but are expected to be knowledgeable of many technologies and able to judge their applicability to specific situations. They also apply their knowledge to practical situations, but evaluate the cost/benefits of various solutions using different technologies, for example, hardware versus software versus manual, and assure that the system as a whole performs according to the users' expectations.
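A minimal sketch of how such a hardware-versus-software-versus-manual comparison might be tabulated is shown below; the alternatives, benefit scores, and cost figures are invented solely to show the mechanics and are not drawn from any real trade study.

```python
# Invented alternatives with rough benefit scores (0-10) and cost estimates in dollars.
alternatives = {
    "manual procedure":   {"benefit": 4, "cost": 30_000},
    "software function":  {"benefit": 8, "cost": 120_000},
    "dedicated hardware": {"benefit": 9, "cost": 400_000},
}

def rank_by_value(options: dict) -> list[tuple[str, float]]:
    """Order options by benefit per unit cost, one crude but common figure of merit."""
    scored = [(name, option["benefit"] / option["cost"]) for name, option in options.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)

for name, value in rank_by_value(alternatives):
    print(f"{name}: {value * 1_000:.2f} benefit points per $1k")
```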
Many commercial-off-the-shelf or already developed hardware and software components may be selected independently according to constraints such as cost, response, throughput, etc. In some cases, the architect can already assemble the end system (almost) unaided. Or, s/he may still need the help of a hardware or software engineer to select components and to design and build any special purpose function. The architects (or engineers) may also enlist the aid of other specialists in safety, security, communications, special purpose hardware, graphics, human factors, test and evaluation, quality control, reliability, maintainability, availability, interface management, etc. An effective systems architectural team must have access to specialists in critical specialties as needed. === Partitioning and layering === An architect planning a building works on the overall design, making sure it will be pleasing and useful to its inhabitants. While a single architect by himself may be enough to build a single-family house, many engineers may be needed, in addition, to solve the detailed problems that arise when a novel high-rise building is designed. If the job is large and complex enough, parts of the architecture may be designed as independent components. That is, if we are building a housing complex, we may have one architect for the complex, and one for each type of building, as part of an architectural team. Large automation systems also require an architect and much engineering talent. If the engineered system is large and complex enough, the systems architect may defer to a hardware architect and/or a software architect for parts of the job, although they all may be members of a joint architectural team. The architect should sub-allocate the system requirements to major components or subsystems that are within the scope of a single hardware or software engineer, or engineering manager and team. But the architect must never be viewed as an engineering supervisor. (If the item is sufficiently large and/or complex, the chief architect will sub-allocate portions to more specialized architects.) Ideally, each such component/subsystem is a sufficiently stand-alone object that it can be tested as a complete component, separate from the whole, using only a simple testbed to supply simulated inputs and record outputs. That is, it is not necessary to know how an air traffic control system works in order to design and build a data management subsystem for it. It is only necessary to know the constraints under which the subsystem will be expected to operate. A good architect ensures that the system, however complex, is built upon relatively simple and "clean" concepts for each (sub)system or layer and is easily understandable by everyone, especially the users, without special training. The architect will use a minimum of heuristics to ensure that each partition is well defined and clean of kludges, work-arounds, short-cuts, or confusing detail and exceptions. As users' needs evolve once the system is fielded and in use, it is much easier to evolve a simple concept than one laden with exceptions, special cases, and much "fine print." Layering the architecture is important for keeping the architecture sufficiently simple at each layer so that it remains comprehensible to a single mind. As layers are ascended, whole systems at lower layers become simple components at the higher layers, and may disappear altogether at the highest layers.
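As noted above, a well-partitioned subsystem can be exercised on its own by a testbed that only supplies simulated inputs and records outputs. The sketch below illustrates the idea with a toy data-management component in the spirit of the air-traffic example; the component, its smoothing constant, and the test data are all invented for illustration.

```python
def track_filter(position_reports: list[float]) -> list[float]:
    """Toy data-management component: smooth a stream of position reports."""
    smoothed, estimate = [], position_reports[0]
    for report in position_reports:
        estimate = 0.8 * estimate + 0.2 * report  # assumed smoothing constant
        smoothed.append(round(estimate, 2))
    return smoothed

# Simple testbed: feed simulated inputs, record the outputs, check expectations.
simulated_inputs = [100.0, 102.0, 101.0, 150.0, 103.0]  # includes one outlier
outputs = track_filter(simulated_inputs)
print(outputs)
assert len(outputs) == len(simulated_inputs)
assert outputs[-1] < 150.0, "the outlier should have been damped"
```

Nothing in the testbed needs to know how the rest of the system works; it only exercises the constraints the subsystem is expected to operate under.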
=== Acceptance test === The acceptance test is a principal responsibility of the systems architect. It is the chief means by which the program lead will prove to the users that the system is as originally planned and that all involved architects and engineers have met their objectives. === Communications with users and engineers === A building architect uses sketches, models, and drawings. An automation systems (or software or hardware) architect should use sketches, models, and prototypes to discuss different solutions and results with users, engineers, and other architects. An early, draft version of the users' manual is invaluable, especially in conjunction with a prototype. Nevertheless, it is important that a workable, well written set of requirements, or specification, be created which is reasonably understandable to the customer (so that they can properly sign off on it, but the principal users' requirements should be captured in a preliminary users' manual for intelligibility). But it must use precise and unambiguous language so that designers and other implementers are left in no doubt as to meanings or intentions. In particular, all requirements must be testable, and the initial draft of the test plan should be developed contemporaneously with the requirements. All stakeholders should sign off on the acceptance test descriptions, or equivalent, as the sole determinant of the satisfaction of the requirements, at the outset of the program. == Architect metaphor == The use of any form of the word "architect" is regulated by "title acts" in many states in the US, and a person must be licensed as a building architect to use it. In the UK the architects registration board excludes the usage of architect (when used in the context of software and IT) from its restricted usage. == See also == Enterprise architecture Enterprise architect Hardware architecture Requirements analysis Software architecture Software engineering Systems architecture Systems modeling Systems engineering Systems design Business analyst Service-oriented modeling framework (SOMF) == References == == Further reading == Donald Firesmith et al.: The Method Framework for Engineering System Architectures, (2008) Mark W. Maier and Rechtin, Eberhardt, The Art of Systems Architecting, Third Edition (2009) Gerrit Muller, "Systems architecting: A business perspective," CRC Press, (2012). Eberhardt Rechtin, Systems Architecting: Creating & Building Complex Systems, 1991. J. H. Saltzer, M. F. Kaashoek, Principles of Computer System Design: An Introduction, Morgan Kaufmann, 2009. Rob Williams, Computer Systems Architecture: a Networking Approach, Second Edition (December 2006). == External links == Principles of Computer System Design: An Introduction – MIT OpenCourseWare Systems Architecture: Canaxia Brings an Architect on Board, Article
Wikipedia/Systems_architect
Software-defined networking (SDN) is an approach to network management that uses abstraction to enable dynamic and programmatically efficient network configuration to create grouping and segmentation while improving network performance and monitoring in a manner more akin to cloud computing than to traditional network management. SDN is meant to improve the static architecture of traditional networks and may be employed to centralize network intelligence in one network component by disassociating the forwarding process of network packets (data plane) from the routing process (control plane). The control plane consists of one or more controllers, which are considered the brains of the SDN network, where the whole intelligence is incorporated. However, centralization has certain drawbacks related to security, scalability and elasticity. Since OpenFlow's emergence in 2011, SDN has commonly been associated with the OpenFlow protocol, which enables remote communication with network plane elements to determine the path of network packets across network switches. However, since 2012, proprietary systems have also used the term. These include Cisco Systems' Open Network Environment and Nicira's network virtualization platform. SD-WAN applies similar technology to a wide area network (WAN). == History == The history of SDN principles can be traced back to the separation of the control and data plane first used in public switched telephone networks. This provided a manner of simplifying provisioning and management years before the architecture was used in data networks. The Internet Engineering Task Force (IETF) began considering various ways to decouple the control and data forwarding functions in a proposed interface standard published in 2004 named Forwarding and Control Element Separation (ForCES). The ForCES Working Group also proposed a companion SoftRouter architecture. Additional early standards from the IETF that pursued separating control from data include the Linux Netlink as an IP services protocol and a path computation element (PCE)-based architecture. These early attempts failed to gain traction. One reason is that many in the Internet community viewed separating control from data as risky, especially given the potential for failure in the control plane. Another reason is that vendors were concerned that creating standard application programming interfaces (APIs) between the control and data planes would result in increased competition. The use of open-source software in these separated architectures traces its roots to the Ethane project at Stanford's computer science department. Ethane's simple switch design led to the creation of OpenFlow, and an API for OpenFlow was first created in 2008. In that same year, NOX, an operating system for networks, was created. SDN research included emulators such as vSDNEmul, EstiNet, and Mininet. Work on OpenFlow continued at Stanford, including with the creation of testbeds to evaluate the use of the protocol in a single campus network, as well as across the WAN as a backbone for connecting multiple campuses. In academic settings, there were several research and production networks based on OpenFlow switches from NEC and Hewlett-Packard, as well as those based on Quanta Computer whiteboxes starting in about 2009. Beyond academia, the first deployments were by Nicira in 2010 to control OVS from Onix, codeveloped with NTT and Google. A notable deployment was Google's B4 in 2012. Later, Google announced the first OpenFlow/Onix deployments in its datacenters.
Another large deployment exists at China Mobile. The Open Networking Foundation was founded in 2011 to promote SDN and OpenFlow. At the 2014 Interop and Tech Field Day, software-defined networking was demonstrated by Avaya using shortest-path bridging (IEEE 802.1aq) and OpenStack as an automated campus, extending automation from the data center to the end device and removing manual provisioning from service delivery. == Concept == SDN architectures decouple network control (control plane) and forwarding (data plane) functions, enabling the network control to become directly programmable and the underlying infrastructure to be abstracted from applications and network services. The OpenFlow protocol can be used in SDN technologies. The SDN architecture is: Directly programmable: Network control is directly programmable because it is decoupled from forwarding functions. Agile: Abstracting control from forwarding lets administrators dynamically adjust network-wide traffic flow to meet changing needs. Centrally managed: Network intelligence is (logically) centralized in software-based SDN controllers that maintain a global view of the network, which appears to applications and policy engines as a single, logical switch. Programmatically configured: SDN lets network managers configure, manage, secure, and optimize network resources very quickly via dynamic, automated SDN programs, which they can write themselves because the programs do not depend on proprietary software. Open standards-based and vendor-neutral: When implemented through open standards, SDN simplifies network design and operation because instructions are provided by SDN controllers instead of multiple, vendor-specific devices and protocols. == New network architecture == The explosion of mobile devices and content, server virtualization, and the advent of cloud services are among the trends driving the networking industry to re-examine traditional network architectures. Many conventional networks are hierarchical, built with tiers of Ethernet switches arranged in a tree structure. This design made sense when client-server computing was dominant, but such a static architecture may be ill-suited to the dynamic computing and storage needs of today's enterprise data centers, campuses, and carrier environments. Some of the key computing trends driving the need for a new network paradigm include: Changing traffic patterns Within the enterprise data center, traffic patterns have changed significantly. In contrast to client-server applications where the bulk of the communication occurs between one client and one server, today's applications access different databases and servers, creating a flurry of east-west machine-to-machine traffic before returning data to the end user device in the classic north-south traffic pattern. At the same time, users are changing network traffic patterns as they push for access to corporate content and applications from any type of device, connecting from anywhere, at any time. Finally, many enterprise data center managers are deploying a utility computing model, which may include a private cloud, public cloud, or some mix of both, resulting in additional traffic across the wide-area network. The consumerization of IT Users are increasingly employing mobile personal devices such as smartphones, tablets, and notebooks to access the corporate network. 
IT is under pressure to accommodate these personal devices in a fine-grained manner while protecting corporate data and intellectual property and meeting compliance mandates. The rise of cloud services Enterprises have enthusiastically embraced both public and private cloud services, resulting in unprecedented growth of these services. Many enterprise businesses want the agility to access applications, infrastructure and other IT resources on demand and discretely. IT planning for cloud services must be performed in an environment of increased security, compliance and auditing requirements, along with business reorganizations, consolidations and mergers that can rapidly change assumptions. Providing self-service provisioning, whether in a private or public cloud, requires elastic scaling of computing, storage and network resources, ideally from a common viewpoint and with a common suite of tools. Big data means more bandwidth Handling today's big data requires massive parallel processing on thousands of servers, all of which need direct connections to each other. The rise of these large data sets is fueling a constant demand for additional network capacity in the data center. Operators of hyperscale data center networks face the daunting task of scaling the network to previously unimaginable size, maintaining any-to-any connectivity within a limited budget. Energy use in large data centers As the Internet of things, cloud computing and SaaS have emerged, the need for larger data centers has increased the energy consumption of those facilities. Many researchers have improved SDN's energy efficiency by applying existing routing techniques to dynamically adjust the network data plane to save energy. Techniques to improve control-plane energy efficiency are also being researched. == Architectural components == The following list defines and explains the SDN architectural components: SDN application SDN applications are programs that communicate their network requirements and desired network behavior to the SDN controller via a northbound interface (NBI). In addition, they may consume an abstracted view of the network for their internal decision-making purposes. An SDN application consists of SDN application logic and one or more NBI drivers. SDN applications may themselves expose another layer of abstracted network control, thus offering one or more higher-level NBIs through respective NBI agents. SDN controller The SDN controller is a logically centralized entity in charge of (i) translating the requirements from the SDN application layer down to the SDN datapaths and (ii) providing the SDN applications with an abstract view of the network (which may include statistics and events). An SDN controller consists of one or more NBI agents, the SDN control logic, and the control to data-plane interface (CDPI) driver. The controller's definition as a logically centralized entity neither prescribes nor precludes implementation architectures such as the federation of multiple controllers, the hierarchical connection of controllers, communication interfaces between controllers, or virtualization or slicing of network resources. SDN datapath The SDN datapath is a logical network device that exposes visibility and unfettered control over its advertised forwarding and data processing capabilities. The logical representation may encompass all or a subset of the capabilities. An SDN datapath consists of a CDPI agent and a set of one or more traffic forwarding engines and zero or more traffic processing functions. 
These engines and functions may include simple forwarding between the datapath's external interfaces or internal traffic processing or termination functions. One or more SDN datapaths may be contained in a single (physical) network element (an integrated physical combination of communications resources, managed as a unit). An SDN datapath may also be defined across multiple physical network elements. This logical definition neither prescribes nor precludes implementation details such as the logical to physical mapping, management of shared physical resources, virtualization or slicing of the SDN datapath, interoperability with non-SDN networking, or the data processing functionality, which can include OSI layer 4-7 functions. SDN Control to Data-Plane Interface (CDPI) The SDN CDPI is the interface defined between an SDN controller and an SDN datapath, which provides at least programmatic control of all forwarding operations, capabilities advertisement, statistics reporting, and event notification. One value of SDN lies in the expectation that the CDPI is implemented in an open, vendor-neutral and interoperable way. SDN Northbound Interfaces (NBI) SDN NBIs are interfaces between SDN applications and SDN controllers and typically provide abstract network views and enable direct expression of network behavior and requirements. This may occur at any level of abstraction and across different sets of functionality. == SDN control plane == The implementation of the SDN control plane can follow a centralized, hierarchical, or decentralized design. Initial SDN control plane proposals focused on a centralized solution, where a single control entity had a global view of the network. While this simplifies the implementation of the control logic, it has scalability limitations as the size and dynamics of the network increase. To overcome these limitations, hierarchical and distributed approaches have been proposed. In hierarchical solutions, controllers operate on a partitioned network view, while decisions that require network-wide knowledge are taken by a logically centralized root controller. In distributed approaches, controllers operate on their local view or they may exchange synchronization messages to enhance their knowledge. Distributed solutions are more suitable for supporting adaptive SDN applications. A key issue when designing a distributed SDN control plane is to decide on the number and placement of control entities. An important parameter to consider while doing so is the propagation delay between the controllers and the network devices, especially in the context of large networks. Other objectives that have been considered involve control path reliability, fault tolerance, and application requirements. == SDN data plane == In SDN, the data plane is responsible for processing data-carrying packets using a set of rules specified by the control plane. The data plane may be implemented in physical hardware switches or in software implementations, such as Open vSwitch. The memory capacity of hardware switches may limit the number of rules that can be stored, whereas software implementations may have higher capacity. The location of the SDN data plane and agent can be used to classify SDN implementations: Hardware Switch-based SDNs: This approach implements the data plane processing inside a physical device. OpenFlow switches may use TCAM tables to route packet sequences (flows). These switches may use an ASIC for their implementation. 
Software Switch-Based SDNs: Some physical switches may implement SDN support using software on the device, such as Open vSwitch, to populate flow tables and to act as the SDN agent when communicating with the controller. Hypervisors may likewise use software implementations to support SDN protocols in the virtual switches used to support their virtual machines. Host-Based SDNs: Rather than deploying the data plane and SDN agent in network infrastructure, host-based SDNs deploy the SDN agent inside the operating system of the communicating endpoints. Such implementations can provide additional context about the application, user, and activity associated with network flows. To achieve the same traffic engineering capabilities of switch-based SDNs, host-based SDNs may require the use of carefully designed VLAN and spanning tree assignments. Flow table entries may be populated in a proactive, reactive, or hybrid fashion. In the proactive mode, the controller populates flow table entries in advance for all traffic matches possible for this switch. This mode can be compared with typical routing table entries today, where all static entries are installed ahead of time. Following this, no request is sent to the controller, since all incoming flows will find a matching entry. A major advantage of proactive mode is that all packets are forwarded at line rate (considering all flow table entries in TCAM) and no delay is added. In the reactive mode, entries are populated on demand. If a packet arrives without a corresponding match rule in the flow table, the SDN agent sends a request to the controller for further instruction. The controller examines the SDN agent's request and provides instructions, installing a rule in the flow table for the corresponding packet if necessary. The hybrid mode uses the low-latency proactive forwarding mode for a portion of traffic while relying on the flexibility of reactive mode processing for the remaining traffic (a minimal sketch contrasting the proactive and reactive modes appears after the SD-LAN subsection below). == Applications == === SDMN === Software-defined mobile networking (SDMN) is an approach to the design of mobile networks where all protocol-specific features are implemented in software, maximizing the use of generic and commodity hardware and software in both the core network and radio access network. It is proposed as an extension of the SDN paradigm to incorporate mobile network specific functionalities. Since 3GPP Rel. 14, control and user plane separation has been introduced in the mobile core network architectures with the PFCP protocol. === SD-WAN === An SD-WAN is a WAN managed using the principles of software-defined networking. The main driver of SD-WAN is to lower WAN costs using more affordable and commercially available leased lines, as an alternative or partial replacement of more expensive MPLS lines. Control and management are administered separately from the hardware, with central controllers allowing for easier configuration and administration. === SD-LAN === An SD-LAN is a local area network (LAN) built around the principles of software-defined networking, though there are key differences in topology, network security, application visibility and control, management and quality of service. SD-LAN decouples the control, management, and data planes to enable a policy-driven architecture for wired and wireless LANs. SD-LANs are characterized by their use of a cloud management system and wireless connectivity without the presence of a physical controller. 
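The contrast between the proactive and reactive flow-table population modes described in the data plane section above can be illustrated with a short, self-contained sketch. The switch, controller, policy and port numbers below are simplified, hypothetical stand-ins for illustration only; they do not implement the OpenFlow protocol or any particular SDN product.

# Simplified illustration of proactive vs. reactive flow-table population.
# All classes, rules and port numbers here are hypothetical stand-ins.

class Controller:
    def decide(self, packet):
        # Toy policy: forward web traffic out port 1, everything else out port 2.
        return 1 if packet["dst_port"] == 80 else 2

class Switch:
    def __init__(self, controller, mode):
        self.controller = controller
        self.mode = mode              # "proactive" or "reactive"
        self.flow_table = {}          # match field (dst_port) -> output port

    def install_proactive_rules(self, expected_ports=(80, 443, 22)):
        # The controller pre-installs an entry for every expected match.
        for dst_port in expected_ports:
            self.flow_table[dst_port] = self.controller.decide({"dst_port": dst_port})

    def handle(self, packet):
        match = packet["dst_port"]
        if match not in self.flow_table:
            if self.mode == "proactive":
                return None           # no entry: drop (or hit a default rule)
            # Reactive mode: ask the controller, then cache the returned rule.
            self.flow_table[match] = self.controller.decide(packet)
        return self.flow_table[match]

proactive = Switch(Controller(), mode="proactive")
proactive.install_proactive_rules()          # every entry installed ahead of time
reactive = Switch(Controller(), mode="reactive")
print(reactive.handle({"dst_port": 80}))     # first packet of the flow triggers a controller request
print(reactive.handle({"dst_port": 80}))     # later packets match the cached entry

In the reactive case the first packet of each flow pays the controller round-trip delay; in the proactive case every packet is forwarded immediately, at the cost of table space for entries that may never be used.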
=== Security using the SDN paradigm === SDN architecture may enable, facilitate or enhance network-related security applications due to the controller's central view of the network, and its capacity to reprogram the data plane at any time. While the security of the SDN architecture itself remains an open question that has already been studied several times in the research community, the following paragraphs only focus on the security applications made possible or revisited using SDN. Several research works on SDN have already investigated security applications built upon the SDN controller, with different aims in mind. Distributed Denial of Service (DDoS) detection and mitigation, as well as detection of botnet and worm propagation, are some concrete use-cases of such applications: the idea consists of periodically collecting network statistics from the forwarding plane of the network in a standardized manner (e.g. using OpenFlow) and then applying classification algorithms to those statistics in order to detect any network anomalies. If an anomaly is detected, the application instructs the controller how to reprogram the data plane in order to mitigate it. Another kind of security application leverages the SDN controller by implementing moving target defense (MTD) algorithms. MTD algorithms are typically used to make any attack on a given system or network more difficult than usual by periodically hiding or changing key properties of that system or network. In traditional networks, implementing MTD algorithms is not a trivial task, since it is difficult to build a central authority capable of determining, for each part of the system to be protected, which key properties should be hidden or changed. In an SDN network, such tasks become more straightforward thanks to the centrality of the controller. One application can, for example, periodically assign virtual IPs to hosts within the network, with the mapping between virtual and real IPs then performed by the controller. Another application can simulate fake open/closed/filtered ports on random hosts in the network in order to add significant noise during the reconnaissance phase (e.g. scanning) performed by an attacker. Additional value regarding security in SDN-enabled networks can also be gained using FlowVisor and FlowChecker, respectively. The former allows a single hardware forwarding plane to be shared among multiple separated logical networks. Following this approach, the same hardware resources can be used for production and development purposes as well as for separating monitoring, configuration and internet traffic, where each scenario can have its own logical topology, called a slice. In conjunction with this approach, FlowChecker validates new OpenFlow rules that are deployed by users using their own slice. SDN controller applications are mostly deployed in large-scale scenarios, which requires comprehensive checks of possible programming errors. A system to do this called NICE was described in 2012. Introducing an overarching security architecture requires a comprehensive and sustained approach to SDN. Since it was introduced, designers have been looking at possible ways to secure SDN that do not compromise scalability. One proposed architecture is SN-SECA (SDN+NFV Security Architecture). 
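As a rough illustration of the statistics-collection approach described above, the following sketch polls per-flow packet counters and flags flows whose packet rate exceeds a threshold. The functions get_flow_stats and install_drop_rule, the threshold value, and the sample data are hypothetical placeholders, not calls into any real controller API.

# Hypothetical sketch of flow-statistics-based anomaly detection in an SDN controller application.
# get_flow_stats() and install_drop_rule() stand in for controller-specific APIs.

from collections import defaultdict

PACKET_RATE_THRESHOLD = 10000   # packets per polling interval (illustrative value)

def get_flow_stats():
    # Placeholder: a real application would query switches for per-flow counters
    # (for example via periodic statistics requests). Static sample data is returned here.
    return [
        {"src": "10.0.0.1", "dst": "10.0.0.9", "packets": 120},
        {"src": "10.0.0.7", "dst": "10.0.0.9", "packets": 45000},
    ]

def install_drop_rule(src, dst):
    # Placeholder for reprogramming the data plane to mitigate the detected anomaly.
    print("installing drop rule for", src, "->", dst)

def poll_once(previous_counts):
    # Compare current counters against the previous polling interval and flag large deltas.
    suspicious = []
    for stat in get_flow_stats():
        key = (stat["src"], stat["dst"])
        delta = stat["packets"] - previous_counts[key]
        previous_counts[key] = stat["packets"]
        if delta > PACKET_RATE_THRESHOLD:
            suspicious.append(key)
    for src, dst in suspicious:
        install_drop_rule(src, dst)

counts = defaultdict(int)
poll_once(counts)   # one polling interval; a real application would run this on a timer

A production application would replace the fixed threshold with a trained classifier, as noted above, but the control loop (collect statistics, classify, reprogram the data plane) has the same shape.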
=== Group Data Delivery Using SDN === Distributed applications that run across datacenters usually replicate data for the purpose of synchronization, fault resiliency, load balancing and getting data closer to users (which reduces latency to users and increases their perceived throughput). Also, many applications, such as Hadoop, replicate data within a datacenter across multiple racks to increase fault tolerance and make data recovery easier. All of these operations require data delivery from one machine or datacenter to multiple machines or datacenters. The process of reliably delivering data from one machine to multiple machines is referred to as Reliable Group Data Delivery (RGDD). SDN switches can be used for RGDD via installation of rules that allow forwarding to multiple outgoing ports. For example, OpenFlow has provided support for group tables since version 1.1, which makes this possible. Using SDN, a central controller can carefully and intelligently set up forwarding trees for RGDD. Such trees can be built while paying attention to network congestion and load status to improve performance. For example, MCTCP is a scheme for delivery to many nodes inside datacenters that relies on the regular and structured topologies of datacenter networks, while DCCast and QuickCast are approaches for fast and efficient data and content replication across datacenters over private WANs. == Relationship to NFV == Network Function Virtualization, or NFV for short, is a concept that complements SDN. Thus, NFV is not dependent on SDN or SDN concepts. NFV separates software from hardware to enable flexible network deployment and dynamic operation. NFV deployments typically use commodity servers to run network services software versions that previously were hardware-based. These software-based services that run in an NFV environment are called virtual network functions (VNFs). Hybrid SDN-NFV programs have been proposed to provide highly efficient, elastic and scalable capabilities, with NFV aiming to accelerate service innovation and provisioning using standard IT virtualization technologies. SDN provides the agility of controlling generic forwarding devices such as routers and switches by using SDN controllers, while NFV provides agility for network applications by using virtualized servers. It is entirely possible to implement a virtualized network function (VNF) as a standalone entity using existing networking and orchestration paradigms. However, there are inherent benefits in leveraging SDN concepts to implement and manage an NFV infrastructure, particularly when looking at the management and orchestration of VNFs, which is why multivendor platforms are being defined that incorporate SDN and NFV in concerted ecosystems. == Relationship to DPI == Deep packet inspection (DPI) provides the network with application-awareness, while SDN provides applications with network-awareness. Although SDN will radically change generic network architectures, it must be able to work with traditional network architectures to offer high interoperability. The new SDN-based network architecture should consider all the capabilities that are currently provided in separate devices or software other than the main forwarding devices (routers and switches), such as DPI and security appliances. == Quality of Experience (QoE) estimation using SDN == When using an SDN-based model for transmitting multimedia traffic, an important aspect to take into account is QoE estimation. 
To estimate QoE, the traffic must first be classified; ideally, the system should then be able to resolve critical problems on its own by analyzing that traffic. == See also == == References ==
Wikipedia/Software-defined_networking
Air traffic control (ATC) is a service provided by ground-based air traffic controllers who direct aircraft on the ground and through a given section of controlled airspace, and can provide advisory services to aircraft in non-controlled airspace. The primary purpose of ATC is to prevent collisions, organise and expedite the flow of traffic in the air, and provide information and other support for pilots. Personnel of air traffic control monitor aircraft location in their assigned airspace by radar and communicate with the pilots by radio. To prevent collisions, ATC enforces traffic separation rules, which ensure each aircraft maintains a minimum amount of 'empty space' around it at all times. It is also common for ATC to provide services to all private, military, and commercial aircraft operating within its airspace; not just civilian aircraft. Depending on the type of flight and the class of airspace, ATC may issue instructions that pilots are required to obey, or advisories (known as flight information in some countries) that pilots may, at their discretion, disregard. The pilot in command of an aircraft always retains final authority for its safe operation, and may, in an emergency, deviate from ATC instructions to the extent required to maintain safe operation of the aircraft. == Language == Pursuant to requirements of the International Civil Aviation Organization (ICAO), ATC operations are conducted either in the English language, or the local language used by the station on the ground. In practice, the native language for a region is used; however, English must be used upon request. == History == In 1920, Croydon Airport near London, England, was the first airport in the world to introduce air traffic control. The 'aerodrome control tower' was a wooden hut 15 feet (5 metres) high with windows on all four sides. It was commissioned on 25 February 1920, and provided basic traffic, weather, and location information to pilots. In the United States, air traffic control developed three divisions. The first of several air mail radio stations (AMRS) was created in 1922, after World War I, when the U.S. Post Office began using techniques developed by the U.S. Army to direct and track the movements of reconnaissance aircraft. Over time, the AMRS morphed into flight service stations. Today's flight service stations do not issue control instructions, but provide pilots with many other flight related informational services. They do relay control instructions from ATC in areas where flight service is the only facility with radio or phone coverage. The first airport traffic control tower, regulating arrivals, departures, and surface movement of aircraft in the US at a specific airport, opened in Cleveland in 1930. Approach- and departure-control facilities were created after adoption of radar in the 1950s to monitor and control the busy airspace around larger airports. The first air route traffic control center (ARTCC), which directs the movement of aircraft between departure and destination, was opened in Newark in 1935, followed in 1936 by Chicago and Cleveland. After the 1956 Grand Canyon mid-air collision, killing all 128 on board, the FAA was given the air-traffic responsibility in the United States in 1958, and this was followed by other countries. In 1960, Britain, France, Germany, and the Benelux countries set up Eurocontrol, intending to merge their airspaces. 
The first and only attempt to pool controllers between countries is the Maastricht Upper Area Control Centre (MUAC), founded in 1972 by Eurocontrol, and covering Belgium, Luxembourg, the Netherlands, and north-western Germany. In 2001, the European Union (EU) aimed to create a 'Single European Sky', hoping to boost efficiency and gain economies of scale. In the USSR, the first air traffic control service was organized in 1929 on the Moscow - Irkutsk air route; in 1930, control areas were defined along all existing air routes. == Airport traffic control tower == The primary method of controlling the immediate airport environment is visual observation from the airport control tower. The tower is typically a tall, windowed structure, located within the airport grounds. The air traffic controllers, usually abbreviated 'controller', are responsible for separation and efficient movement of aircraft and vehicles operating on the taxiways and runways of the airport itself, and aircraft in the air near the airport, generally 5 to 10 nautical miles (9 to 19 kilometres; 6 to 12 miles), depending on the airport procedures. A controller must carry out the job using the precise and effective application of rules and procedures, while remaining able to adjust flexibly to differing circumstances, often under time pressure. A study comparing controllers with the general population found markedly higher stress levels among controllers. This difference can be explained, at least in part, by the characteristics of the job. Remote and virtual tower (RVT) is a system based on air traffic controllers being located somewhere other than at the local airport tower, and still able to provide air traffic control services. === Ground control === Ground control (sometimes known as ground movement control, GMC) is responsible for the airport movement areas. Some busier airports have surface movement radar (SMR). === Air control or local control === Air control (known to pilots as tower or tower control) is responsible for the active runway surfaces. === Flight data and clearance delivery === Clearance delivery is the position that issues route clearances to aircraft, typically before they commence taxiing. These clearances contain details of the route that the aircraft is expected to fly after departure. == Approach and terminal control == In the U.S., TRACONs are additionally designated by a three-digit alphanumeric code. For example, the Chicago TRACON is designated C90. == Area control centre / en-route centre == === General characteristics === === Radar coverage === Some air navigation service providers (e.g., Airservices Australia, the U.S. Federal Aviation Administration, Nav Canada, etc.) have implemented automatic dependent surveillance – broadcast (ADS-B) as part of their surveillance capability. This newer technology reverses the radar concept. Instead of radar 'finding' a target by interrogating the transponder, the ADS-B equipped aircraft 'broadcasts' a position report as determined by the navigation equipment on board the aircraft. ADS-C is another mode of automatic dependent surveillance; however, ADS-C operates in the 'contract' mode, where the aircraft reports a position, automatically or when initiated by the pilot, based on a predetermined time interval. It is also possible for controllers to request more frequent reports to more quickly establish aircraft position for specific reasons. 
However, since the cost for each report is charged by the ADS service providers to the company operating the aircraft, more frequent reports are not commonly requested, except in emergency situations. ADS-C is significant, because it can be used where it is not possible to locate the infrastructure for a radar system (e.g., over water). Computerised radar displays are now being designed to accept ADS-C inputs as part of their display. A radar archive system (RAS) keeps an electronic record of all radar information, preserving it for a few weeks. This information can be useful for search and rescue. When an aircraft has 'disappeared' from radar screens, a controller can review the last radar returns from the aircraft to determine its likely position. For an example, see the crash report in the following citation. === Flight traffic mapping === == Problems == === Traffic === Air traffic control errors occur when the separation (either vertical or horizontal) between airborne aircraft falls below the minimum prescribed separation set (for the domestic United States) by the US Federal Aviation Administration. Separation minimums for terminal control areas (TCAs) around airports are lower than en-route standards. Errors generally occur during periods following times of intense activity, when controllers tend to relax and overlook the presence of traffic and conditions that lead to loss of minimum separation. === Weather === According to the Civil Air Navigation Services Organisation (CANSO), weather significantly impacts global aviation, with more than 70% of air traffic delays being attributed to adverse weather conditions. These disruptions cause widespread delays, rerouting by ATC, and cancellations across continents. In 2024, Europe experienced a 40% increase in weather-related en-route delays compared to 2023. As climate change intensifies the frequency and severity of these events, CANSO urges collaboration and real-time solutions among global aviation stakeholders to mitigate the increased effects of weather on flight operations. === Infrastructure === Global ATC infrastructure is a complex network that varies significantly by region, with many countries facing challenges related to outdated technology, staffing shortages, and increasing traffic demand. While some regions, like parts of Europe and the U.S., have implemented modernization programs such as SESAR and NextGen, many others, especially in developing nations, still rely on legacy radar systems and voice-based communication, which limit efficiency and safety. These disparities contribute to delays and reduce the overall resilience of global air traffic management. According to the ICAO, coordinating ATC systems and accelerating digitalization is essential for meeting future aviation demands. Similarly, a 2024 report from the International Air Transport Association (IATA) emphasizes the urgency of investing in scalable, data-driven infrastructure to handle post-pandemic growth and ensure sustainability across the network. === Congestion === Constrained control capacity and growing traffic lead to flight cancellation and delays. In America, delays caused by ATC grew by 69% between 2012 and 2017. ATC staffing issues were a major factor in congestion. By then the market for air-traffic services was worth $14bn. More efficient ATC could save 5-10% of aviation fuel by avoiding holding patterns and indirect airways. The military takes 80% of Chinese airspace, congesting the thin corridors open to airliners. 
The United Kingdom closes its military airspace only during military exercises. == Call signs == A prerequisite to safe air traffic separation is the assignment and use of distinctive call signs. These are permanently allocated by ICAO on request, usually to scheduled flights and to some air forces and other military services for military flights. Written call signs consist of a two- or three-letter combination followed by the flight number, such as AAL872 or VLG1011. As such, they appear on flight plans and ATC radar labels. There are also the audio or radio-telephony call signs used in radio contact between pilots and air traffic control. These are not always identical to their written counterparts. An example of an audio call sign would be 'Speedbird 832', instead of the written 'BAW832'. This is used to reduce the chance of confusion between ATC and the aircraft. By default, the call sign for any other flight is the registration number (or tail number in US parlance) of the aircraft, such as 'N12345', 'C-GABC', or 'EC-IZD'. The short radio-telephony call signs for these tail numbers are the last three letters spoken using the NATO phonetic alphabet (e.g. ABC, spoken alpha-bravo-charlie, for C-GABC), or the last three numbers (e.g. three-four-five for N12345). In the United States, the prefix may be an aircraft type, model, or manufacturer in place of the first registration character; for example, 'N11842' could become 'Cessna 842'. == Technology == The Federal Aviation Administration (FAA) has spent over US$3 billion on software, but a fully automated system has yet to be achieved. In 2002, the United Kingdom commissioned a new area control centre into service at the London Area Control Centre (LACC) at Swanwick in Hampshire, relieving a busy suburban centre at West Drayton in Middlesex, north of London Heathrow Airport. Software from Lockheed-Martin predominates at the London Area Control Centre. However, the centre was initially troubled by software and communications problems causing delays and occasional shutdowns. Some tools are available in different domains to help the controller further: Flight data processing systems: this is the system (usually one per centre) that processes all the information related to the flight (the flight plan), typically in the time horizon from gate to gate (airport departure / arrival gates). It uses such processed information to invoke other flight-plan-related tools, such as medium-term conflict detection (MTCD). Short-term conflict alert (STCA): checks possible conflicting trajectories in a time horizon of about two or three minutes (or even less in the approach context; 35 seconds in the French Roissy & Orly approach centres). Center TRACON automation system (CTAS): a suite of human-centred decision support tools developed by NASA Ames Research Center. Several of the CTAS tools have been field tested and transitioned to the FAA for operational evaluation and use. Some of the CTAS tools are: traffic management advisor (TMA), passive final approach spacing tool (pFAST), collaborative arrival planning (CAP), direct-to (D2), en route descent advisor (EDA), and multi-center TMA. The software runs on Linux. MTCD and URET: In Europe, several MTCD tools are available: iFACTS (National Air Traffic Services), VAFORIT (Deutsche Flugsicherung), new FDPS (Maastricht Upper Area Control). The Single European Sky ATM Research (SESAR). The Nav Canada system known as EXCDS. 
Screen content recording: a hardware- or software-based recording function, part of most modern automation systems, that captures the screen content shown to the ATCO. Such recordings are used for later replay, together with audio recordings, for investigations and post-event analysis. Communication navigation surveillance / air traffic management (CNS / ATM) systems are communications, navigation, and surveillance systems, employing digital technologies, including satellite systems, together with various levels of automation, applied in support of a seamless global air traffic management system. == Air navigation service providers (ANSPs) and air traffic service providers (ATSPs) == Spain – AENA, now AENA S.A. (Spanish Airports) and ENAIRE (ATC & ATSP) Vietnam – Vietnam Air Traffic Management Corporation (VATM) Zambia – Zambia Civil Aviation Authority (ZCAA) Zimbabwe – Zimbabwe Civil Aviation Authority == Proposed changes == In the United States, some alterations to traffic control procedures are being examined: Free flight is a developing air traffic control method that uses no centralised control (e.g. air traffic controllers). Instead, parts of airspace are reserved dynamically and automatically in a distributed way using computer communication to ensure the required separation between aircraft. In Europe, the Single European Sky ATM Research (SESAR) programme plans to develop new methods, technologies, procedures, and systems to accommodate future (2020 and beyond) air traffic needs. In October 2018, European controller unions dismissed setting targets to improve ATC as "a waste of time and effort", as new technology could cut costs for users but threaten their jobs. In April 2019, the EU called for a 'Digital European Sky', focusing on cutting costs by including a common digitisation standard, and allowing controllers to move to where they are needed instead of merging national ATCs, as it would not solve all problems. Single air-traffic control services in continent-sized America and China do not alleviate congestion. Eurocontrol tries to reduce delays by diverting flights to less busy routes: flight paths across Europe were redesigned to accommodate the new airport in Istanbul, which opened in April, but the extra capacity will be absorbed by rising demand for air travel. Well-paid jobs in western Europe could move east with cheaper labour. The average Spanish controller earns over €200,000 a year, over seven times the national average salary, more than pilots, and at least ten controllers were paid over €810,000 ($1.1m) a year in 2010. French controllers spent a cumulative nine months on strike between 2004 and 2016. === Privatisation === Many countries have also privatised or corporatised their air navigation service providers. There are several models that can be used for ATC service providers. The first is to have the ATC services be part of a government agency, as is currently the case in the United States. The problem with this model is that funding can be inconsistent and can disrupt the development and operation of services. Sometimes funding can disappear when lawmakers cannot approve budgets in time. Both proponents and opponents of privatisation recognise that stable funding is one of the major factors for successful upgrades of ATC infrastructure. Some of the funding issues include sequestration and politicisation of projects. 
Proponents argue that moving ATC services to a private corporation could stabilise funding over the long term, which would result in more predictable planning and rollout of new technology, as well as training of personnel. As of November 2024, the United States had 265 contractor towers that are staffed by private companies but administered by the FAA through its FAA Contract Tower Program, which was established in 1982. These contract control towers cover 51% of all the Federal air traffic control towers in the U.S. Another model is to have ATC services provided by a government corporation. This model is used in Germany, where funding is obtained through user fees. Yet another model is to have a for-profit corporation operate ATC services. This is the model used in the United Kingdom, but there have been several issues with the system there, including a large-scale failure in December 2014 which caused delays and cancellations and has been attributed to cost-cutting measures put in place by this corporation. In fact, earlier that year, the corporation owned by the German government won the bid to provide ATC services for Gatwick Airport in the United Kingdom. The last model, often suggested as the one the United States should transition to, is to have a non-profit organisation handle ATC services, as is done in Canada. The Canadian system is the one most often used as a model by proponents of privatisation. Air traffic control privatisation has been successful in Canada with the creation of Nav Canada, a private non-profit organisation which has reduced costs and has allowed new technologies to be deployed faster due to the elimination of much of the bureaucratic red tape. This has resulted in shorter flights and less fuel usage. It has also resulted in flights being safer due to new technology. Nav Canada is funded from fees that are collected from the airlines based on the weight of the aircraft and the distance flown. Air traffic control is operated by national governments with few exceptions: in the European Union, only Italy has private shareholders. Privatisation does not guarantee lower prices: the profit margin of MUAC was 70% in 2017, as there is no competition, but governments could offer fixed-term concessions. == ATC regulations in the United States == The United States airspace is divided into 21 zones (centres), and each zone is divided into sectors. Also within each zone are portions of airspace, about 50 miles (80 kilometres) in diameter, called TRACON (Terminal Radar Approach Control) airspaces. Within each TRACON airspace are a number of airports, each of which has its own airspace with a 5-mile (8.0-kilometre) radius. FAA control tower operators (CTO) / air traffic controllers use FAA Order 7110.65 as the authority for all procedures regarding air traffic. == See also == == References == == External links == The short film A Traveler Meets Air Traffic Control (1963) is available for free viewing and download at the Internet Archive. NASA video of US air traffic Radar antennas in air traffic management (YouTube-video, part of a video series about radar basics)
Wikipedia/Air_traffic_control
A weapon, arm, or armament is any implement or device that is used to deter, threaten, inflict physical damage, harm, or kill. Weapons are used to increase the efficacy and efficiency of activities such as hunting, crime (e.g., murder), law enforcement, self-defense, warfare, or suicide. In a broader context, weapons may be construed to include anything used to gain a tactical, strategic, material, or mental advantage over an adversary or enemy target. While ordinary objects such as rocks and bottles can be used as weapons, many objects are expressly designed for the purpose; these range from simple implements such as clubs and swords to complicated modern firearms, tanks, missiles and biological weapons. Something that has been repurposed, converted, or enhanced to become a weapon of war is termed weaponized, such as a weaponized virus or weaponized laser. == History == The use of weapons has been a major driver of cultural evolution and human history up to today since weapons are a type of tool that is used to dominate and subdue autonomous agents such as animals and, by doing so, allow for an expansion of the cultural niche, while simultaneously other weapon users (i.e., agents such as humans, groups, and cultures) are able to adapt to the weapons of enemies by learning, triggering a continuous process of competitive technological, skill, and cognitive improvement (arms race). === Prehistoric === The use of objects as weapons has been observed among chimpanzees, leading to speculation that early hominids used weapons as early as five million years ago. However, this cannot be confirmed using physical evidence because wooden clubs, spears, and unshaped stones would have left an ambiguous record. The earliest unambiguous weapons to be found are the Schöningen spears, eight wooden throwing spears dating back more than 300,000 years. At the site of Nataruk in Turkana, Kenya, numerous human skeletons dating to 10,000 years ago may present evidence of traumatic injuries to the head, neck, ribs, knees, and hands, including obsidian projectiles embedded in the bones that might have been caused by arrows and clubs during conflict between two hunter-gatherer groups. But the interpretation of warfare at Nataruk has been challenged due to conflicting evidence. === Ancient history === The earliest ancient weapons were evolutionary improvements of late Neolithic implements, but significant improvements in materials and crafting techniques led to a series of revolutions in military technology. The development of metal tools began with copper during the Copper Age (about 3,300 BC) and was followed by the Bronze Age, leading to the creation of the Bronze Age sword and similar weapons. During the Bronze Age, the first defensive structures and fortifications appeared as well, indicating an increased need for security. Weapons designed to breach fortifications followed soon after, such as the battering ram, which was in use by 2500 BC. The development of ironworking around 1300 BC in Greece had an important impact on the development of ancient weapons. It was not the introduction of early Iron Age swords, however, as they were not superior to their bronze predecessors, but rather the domestication of the horse and widespread use of spoked wheels by c. 2000 BC. This led to the creation of the light, horse-drawn chariot, whose improved mobility proved important during this era. Spoke-wheeled chariot usage peaked around 1300 BC and then declined, ceasing to be militarily relevant by the 4th century BC. 
Cavalry developed once horses were bred to support the weight of a human. The horse extended the range and increased the speed of attacks. Alexander's conquests saw the increased use of spears and shields in the Middle East and Western Asia. As Greek culture spread, many Greek and other European weapons came to be used in these regions, and many of these weapons were in turn adapted to fit their new use in war. In addition to land-based weaponry, warships, such as the trireme, were in use by the 7th century BC. During the First Punic War, the use of advanced warships contributed to a Roman victory over the Carthaginians. === Post-classical history === European warfare during post-classical history was dominated by elite groups of knights supported by massed infantry. They were involved in mobile combat and sieges, which involved various siege weapons and tactics. Knights on horseback developed tactics for charging with lances, providing an impact on the enemy formations, and then drawing more practical weapons (such as swords) once they entered melee. By contrast, infantry, in the age before structured formations, relied on cheap, sturdy weapons such as spears and billhooks in close combat and bows from a distance. As armies became more professional, their equipment was standardized, and infantry transitioned to pikes. Pikes are normally seven to eight feet in length and used in conjunction with smaller sidearms (short swords). In Eastern and Middle Eastern warfare, similar tactics were developed independent of European influences. The introduction of gunpowder from Asia at the end of this period revolutionized warfare. Formations of musketeers, protected by pikemen, came to dominate open battles, and the cannon replaced the trebuchet as the dominant siege weapon. The Ottomans used cannon to destroy much of the fortifications at Constantinople, a development that changed warfare as gunpowder became more available and technology improved. === Modern history === ==== Early modern ==== The European Renaissance marked the beginning of the implementation of firearms in western warfare. Guns and rockets were introduced to the battlefield. Firearms are qualitatively different from earlier weapons because they release energy from combustible propellants, such as gunpowder, rather than from a counterweight or spring. This energy is released very rapidly and can be replicated without much effort by the user. Therefore, even early firearms such as the arquebus were much more powerful than human-powered weapons. Firearms became increasingly important and effective during the 16th–19th centuries, with progressive improvements in ignition mechanisms followed by revolutionary changes in ammunition handling and propellant. During the American Civil War, new applications of firearms, including the machine gun and ironclad warship, emerged that would still be recognizable and useful military weapons today, particularly in limited conflicts. In the 19th century, warship propulsion changed from sail power to fossil fuel-powered steam engines. From the mid-18th-century North American French and Indian War through the beginning of the 20th century, human-powered weapons went from being the primary weaponry of the battlefield to yielding to gunpowder-based weaponry. Sometimes referred to as the "Age of Rifles", this period was characterized by the development of firearms for infantry and cannons for support, as well as the beginnings of mechanized weapons such as the machine gun. 
Artillery pieces such as howitzers were able to destroy masonry fortresses and other fortifications, and this single invention caused a revolution in military affairs, establishing tactics and doctrine that are still in use today. ==== World War I ==== An important feature of industrial age warfare was technological escalation – innovations were rapidly matched through replication or countered by another innovation. World War I marked the entry of fully industrialized warfare as well as weapons of mass destruction (e.g., chemical and biological weapons), and new weapons were developed quickly to meet wartime needs. The technological escalation during World War I was profound, including the wide introduction of aircraft into warfare and naval warfare with the introduction of aircraft carriers. Above all, it promised the military commanders independence from horses and a resurgence in maneuver warfare through the extensive use of motor vehicles. The changes that these military technologies underwent were evolutionary but defined their development for the rest of the century. ==== Interwar ==== This period of innovation in weapon design continued in the interwar period (between WWI and WWII) with the continuous evolution of weapon systems by all major industrial powers. The major armament firms were Schneider-Creusot (based in France), Škoda Works (Czechoslovakia), and Vickers (Great Britain). The 1920s were committed to disarmament and the outlawing of war and poison gas, but rearmament picked up rapidly in the 1930s. The munitions makers responded nimbly to the rapidly shifting strategic and economic landscape. The main purchasers of munitions from the big three companies were Romania, Yugoslavia, Greece, and Turkey – and, to a lesser extent, Poland, Finland, the Baltic States, and the Soviet Union. ===== Criminalizing poison gas ===== Realistic critics understood that war could not really be outlawed, but its worst excesses might be banned. Poison gas became the focus of a worldwide crusade in the 1920s. Poison gas did not win battles, and the generals did not want it. The soldiers hated it far more intensely than bullets or explosive shells. By 1918, chemical shells made up 35 percent of French ammunition supplies, 25 percent of British, and 20 percent of American stock. The "Protocol for the Prohibition of the Use in War of Asphyxiating, Poisonous, or Other Gases and of Bacteriological Methods of Warfare", also known as the Geneva Protocol, was issued in 1925 and was accepted as policy by all major countries. In 1937, poison gas was manufactured in large quantities but not used except against nations that lacked modern weapons or gas masks. ==== World War II and postwar ==== Many modern military weapons, particularly ground-based ones, are relatively minor improvements to weapon systems developed during World War II. World War II marked perhaps the most frantic period of weapon development in the history of humanity. Massive numbers of new designs and concepts were fielded, and all existing technologies were improved between 1939 and 1945. The most powerful weapon invented during this period was the nuclear bomb; however, many other weapons influenced the world, such as jet aircraft and radar, but were overshadowed by the visibility of nuclear weapons and long-range rockets. ==== Nuclear weapons ==== Since the realization of mutual assured destruction (MAD), the nuclear option of all-out war is no longer considered a survivable scenario. 
During the Cold War in the years following World War II, both the United States and the Soviet Union engaged in a nuclear arms race. Each country and its allies continually attempted to out-develop each other in the field of nuclear armaments. Once the joint technological capabilities reached the point of being able to ensure the destruction of the Earth 100-fold, a new tactic had to be developed. With this realization, armaments development funding shifted back to primarily sponsoring the development of conventional arms technologies for support of limited wars rather than total war. == Types == === By user === – what person or unit uses the weapon Personal weapons (or small arms) – designed to be used by a single person. Light weapons – 'man-portable' weapons that may require a small team to operate. Heavy weapons – artillery and similar weapons larger than light weapons (see SALW). Crew served weapons – larger than personal weapons, requiring two or more people to operate correctly. Fortification weapons – mounted in a permanent installation or used primarily within a fortification. Mountain weapons – for use by mountain forces or those operating in difficult terrain. Vehicle-mounted weapons – to be mounted on any type of combat vehicle. Railway weapons – designed to be mounted on railway cars, including armored trains. Aircraft weapons – carried on and used by some type of aircraft, helicopter, or other aerial vehicle. Naval weapons – mounted on ships and submarines. Space weapons – designed to be used in or launched from space. Autonomous weapons – capable of accomplishing a mission with limited or no human intervention. === By function === – the construction of the weapon and the principle of operation Antimatter weapons (theoretical) – would combine matter and antimatter to cause a powerful explosion. Archery weapons – operate by using a tensioned string and a bent solid to launch a projectile. Artillery – firearms capable of launching heavy projectiles over long distances. Biological weapons – spread biological agents, causing disease or infection. Blunt instruments – designed to break or fracture bones, produce concussions, create organ ruptures, or crush injuries. Chemical weapons – poison people and cause reactions. Edged and bladed weapons – designed to pierce or cut through skin, muscle, or bone and cause internal or external bleeding. Energy weapons – rely on concentrating forms of energy to attack, such as lasers or sonic attacks. Explosive weapons – use a physical explosion to create a blast, concussion, or spread shrapnel. Firearms – use a chemical charge to launch projectiles. Improvised weapons – common objects reused as weapons, such as crowbars and kitchen knives. Incendiary weapons – cause damage by fire. Loitering munitions – designed to loiter over a battlefield, striking once a target is located. Magnetic weapons – use magnetic fields to propel projectiles or focus particle beams. Missiles – rockets that are guided to their target after launch. (Also a general term for projectile weapons.) Non-lethal weapons – designed to subdue without killing. Nuclear weapons – use radioactive materials to create nuclear fission or nuclear fusion detonations. Rockets – self-propelled projectiles. Suicide weapons – exploit the willingness of their operators not to survive the attack. === By target === – the type of target the weapon is designed to attack Anti-aircraft weapons – target missiles and aerial vehicles in flight. 
Anti-fortification weapons – designed to target enemy installations. Anti-personnel weapons – designed to attack people, either individually or in numbers. Anti-radiation weapons – target sources of electronic radiation, particularly radar emitters. Anti-satellite weapons – target orbiting satellites. Anti-ship weapons – target ships and vessels on water. Anti-submarine weapons – target submarines and other underwater targets. Anti-tank weapons – designed to defeat armored targets. Area denial weapons – target territory, making it unsafe or unsuitable for enemy use or travel. Hunting weapons – weapons used to hunt game animals. Infantry support weapons – designed to attack various threats to infantry units. Siege engines – designed to break or circumvent heavy fortifications in siege warfare. == Manufacture of weapons == The arms industry is a global industry that involves the sale and manufacture of weaponry. It consists of a commercial industry involved in the research and development, engineering, production, and servicing of military material, equipment, and facilities. Many industrialized countries have a domestic arms industry to supply their own military forces, and some also have a substantial trade in weapons for use by their citizens for self-defense, hunting, or sporting purposes. Contracts to supply a given country's military are awarded by governments, making arms contracts of substantial political importance. The link between politics and the arms trade can result in the development of a "military–industrial complex", where the armed forces, commerce, and politics become closely linked. According to research institute SIPRI, the volume of international transfers of major weapons in 2010–2014 was 16 percent higher than in 2005–2009, and the arms sales of the world's 100 largest private arms-producing and military services companies totaled $420 billion in 2018. == Legislation == The production, possession, trade, and use of many weapons are controlled. This may be at a local or central government level or by international treaty. Examples of such controls include: === Gun laws === All countries have laws and policies regulating aspects such as the manufacture, sale, transfer, possession, modification, and use of small arms by civilians. Countries that regulate access to firearms will typically restrict access to certain categories of firearms and then restrict the categories of persons who may be granted a license for access to such firearms. There may be separate licenses for hunting, sport shooting (a.k.a. target shooting), self-defense, collecting, and concealed carry, with different sets of requirements, permissions, and responsibilities. === Arms control laws === International treaties and agreements place restrictions on the development, production, stockpiling, proliferation, and usage of weapons, from small arms and heavy weapons to weapons of mass destruction. Arms control is typically exercised through the use of diplomacy, which seeks to impose such limitations upon consenting participants, although it may also comprise efforts by a nation or group of nations to enforce limitations upon a non-consenting country. === Arms trafficking laws === Arms trafficking is the trafficking of contraband weapons and ammunition. What constitutes legal trade in firearms varies widely, depending on local and national laws. In 2001, the United Nations had made a protocol against the manufacturing and trafficking of illicit arms. 
This protocol required governments to dispose of illegal arms and to license newly produced firearms in order to ensure their legitimacy. It was signed by 122 parties. == Lifecycle problems == There are a number of issues around the potential ongoing risks from deployed weapons, the safe storage of weapons, and their eventual disposal when they are no longer effective or safe. Ocean dumping of unused weapons such as bombs, ordnance, landmines, and chemical weapons has been common practice by many nations and has created hazards. Unexploded ordnance (UXO) are bombs, land mines, naval mines, and similar devices that did not explode when they were employed and still pose a risk for many years or decades. Demining or mine clearance from areas of past conflict is a difficult process, but every year, landmines kill 15,000 to 20,000 people and severely maim countless more. Nuclear terrorism was a serious concern after the fall of the Soviet Union, with the prospect of "loose nukes" being available. While this risk may have receded, similar situations may arise in the future. == In science fiction == Strange and exotic weapons are a recurring feature or theme in science fiction. In some cases, weapons first introduced in science fiction have now become a reality. Other science fiction weapons, such as force fields and stasis fields, remain purely fictional and are often beyond the realms of known physical possibility. At its most prosaic, science fiction features an endless variety of sidearms, mostly variations on real weapons such as guns and swords. Among the best-known of these are the phaser used in the Star Trek television series, films, and novels, and the lightsaber and blaster featured in the Star Wars movies, comics, novels, and TV series. In addition to adding action and entertainment value, weaponry in science fiction sometimes becomes a theme when it touches on deeper concerns, often motivated by contemporary issues. One example is science fiction that deals with weapons of mass destruction like doomsday devices. == See also == == References == == External links == The dictionary definition of weapon at Wiktionary Quotations related to Weapon at Wikiquote Media related to Weapons at Wikimedia Commons
Wikipedia/Defense_systems
Network functions virtualization (NFV) is a network architecture concept that leverages IT virtualization technologies to virtualize entire classes of network node functions into building blocks that may connect, or chain together, to create and deliver communication services. NFV relies upon traditional server-virtualization techniques such as those used in enterprise IT. A virtualized network function, or VNF, is implemented within one or more virtual machines or containers running different software and processes, on top of commercial off-the-shelf (COTS) high-volume servers, switches and storage devices, or even cloud computing infrastructure, instead of having custom hardware appliances for each network function, thereby avoiding vendor lock-in. For example, a virtual session border controller could be deployed to protect a network without the typical cost and complexity of obtaining and installing physical network protection units. Other examples of NFV include virtualized load balancers, firewalls, intrusion detection devices and WAN accelerators, to name a few. The decoupling of the network function software from the customized hardware platform realizes a flexible network architecture that enables agile network management and fast new service roll-outs, with significant reductions in CAPEX and OPEX. == Background == Product development within the telecommunication industry has traditionally followed rigorous standards for stability, protocol adherence and quality, reflected by the use of the term carrier grade to designate equipment demonstrating this high reliability and performance factor. While this model worked well in the past, it inevitably led to long product cycles, a slow pace of development and reliance on proprietary or specific hardware, e.g., bespoke application-specific integrated circuits (ASICs). This development model resulted in significant delays when rolling out new services, posed complex interoperability challenges, and caused significant increases in CAPEX and OPEX when scaling network systems and infrastructure and enhancing network service capabilities to meet increasing network load and performance demands. Moreover, the rise of significant competition in communication service offerings from agile organizations operating at large scale on the public Internet (such as Google Talk, Skype, Netflix) has spurred service providers to look for innovative ways to disrupt the status quo and increase revenue streams. == History == In October 2012, a group of telecom operators published a white paper at a conference in Darmstadt, Germany, on software-defined networking (SDN) and OpenFlow. The Call for Action concluding the White Paper led to the creation of the Network Functions Virtualization (NFV) Industry Specification Group (ISG) within the European Telecommunications Standards Institute (ETSI). The ISG was made up of representatives from the telecommunication industry from Europe and beyond. ETSI ISG NFV addresses many aspects, including functional architecture, information model, data model, protocols, APIs, testing, reliability, security, future evolutions, etc. In May 2021, the ETSI ISG NFV announced Release 5 of its specifications, aiming to produce new specifications and to extend the already published specifications with new features and enhancements. 
Since the publication of the white paper, the group has produced over 100 publications, which have gained wider acceptance in the industry and are being implemented in prominent open source projects such as OpenStack, ONAP and Open Source MANO (OSM), to name a few. Due to active cross-liaison activities, the ETSI NFV specifications are also being referenced in other SDOs such as 3GPP, IETF and ETSI MEC. == Framework == The NFV framework consists of three main components: Virtualized network functions (VNFs) are software implementations of network functions that can be deployed on a network functions virtualization infrastructure (NFVI). Network functions virtualization infrastructure (NFVI) is the totality of all hardware and software components that build the environment where VNFs are deployed. The NFV infrastructure can span several locations. The network providing connectivity between these locations is considered part of the NFV infrastructure. Network functions virtualization management and orchestration architectural framework (NFV-MANO Architectural Framework) is the collection of all functional blocks, data repositories used by these blocks, and reference points and interfaces through which these functional blocks exchange information for the purpose of managing and orchestrating NFVI and VNFs. The building block for both the NFVI and the NFV-MANO is the NFV platform. In the NFVI role, it consists of both virtual and physical processing and storage resources, and virtualization software. In its NFV-MANO role it consists of VNF and NFVI managers and virtualization software operating on a hardware controller. The NFV platform implements carrier-grade features used to manage and monitor the platform components, recover from failures and provide effective security – all required for the public carrier network. == Practical aspects == A service provider that follows the NFV design implements one or more virtualized network functions, or VNFs. A VNF by itself does not automatically provide a usable product or service to the provider's customers. To build more complex services, the notion of service chaining is used, where multiple VNFs are used in sequence to deliver a service. Another aspect of implementing NFV is the orchestration process. To build highly reliable and scalable services, NFV requires that the network be able to instantiate VNF instances, monitor them, repair them, and (most importantly for a service provider's business) bill for the services rendered. These attributes, referred to as carrier-grade features, are allocated to an orchestration layer in order to provide high availability and security, and low operation and maintenance costs. Importantly, the orchestration layer must be able to manage VNFs irrespective of the underlying technology within the VNF. For example, an orchestration layer must be able to manage an SBC VNF from vendor X running on VMware vSphere just as well as an IMS VNF from vendor Y running on KVM. == Distributed NFV == The initial perception of NFV was that virtualized capability should be implemented in data centers. This approach works in many – but not all – cases. NFV presumes and emphasizes the widest possible flexibility as to the physical location of the virtualized functions. Ideally, therefore, virtualized functions should be located where they are the most effective and least expensive. That means a service provider should be free to locate NFV in all possible locations, from the data center to the network node to the customer premises. 
This approach, known as distributed NFV, has been emphasized from the beginning as NFV was being developed and standardized, and is prominent in the recently released NFV ISG documents. In some cases there are clear advantages for a service provider to locate this virtualized functionality at the customer premises. These advantages range from economics to performance to the feasibility of the functions being virtualized. The first ETSI NFV ISG-approved public multi-vendor proof of concept (PoC) of D-NFV was conducted by Cyan, Inc., RAD, Fortinet and Certes Networks in Chicago in June 2014, and was sponsored by CenturyLink. It was based on RAD's dedicated customer-edge D-NFV equipment running Fortinet's Next Generation Firewall (NGFW) and Certes Networks' virtual encryption/decryption engine as Virtual Network Functions (VNFs), with Cyan's Blue Planet system orchestrating the entire ecosystem. RAD's D-NFV solution, a Layer 2/Layer 3 network termination unit (NTU) equipped with a D-NFV x86 server module that functions as a virtualization engine at the customer edge, became commercially available by the end of that month. During 2014, RAD also organized a D-NFV Alliance, an ecosystem of vendors and international systems integrators specializing in new NFV applications. == NFV modularity benefits == When designing and developing the software that provides the VNFs, vendors may structure that software into software components (implementation view of a software architecture) and package those components into one or more images (deployment view of a software architecture). These vendor-defined software components are called VNF Components (VNFCs). VNFs are implemented with one or more VNFCs, and it is assumed, without loss of generality, that VNFC instances map 1:1 to VM images. VNFCs should in general be able to scale up and/or scale out. By being able to allocate flexible (virtual) CPUs to each of the VNFC instances, the network management layer can scale up (i.e., scale vertically) the VNFC to meet throughput, performance and scalability expectations on a single system or platform. Similarly, the network management layer can scale out (i.e., scale horizontally) a VNFC by activating multiple instances of the VNFC across multiple platforms, thereby meeting the performance and architecture specifications without compromising the stability of the other VNFCs. Early adopters of such architecture blueprints have already implemented the NFV modularity principles. == Relationship to SDN == Network Functions Virtualisation is highly complementary to SDN. In essence, SDN is an approach to building data networking equipment and software that separates and abstracts elements of these systems. It does this by decoupling the control plane and data plane from each other, such that the control plane resides centrally and the forwarding components remain distributed. The control plane interacts with both northbound and southbound interfaces. In the northbound direction the control plane provides a common abstracted view of the network to higher-level applications and programs using high-level APIs and novel management paradigms, such as Intent-based networking. In the southbound direction the control plane programs the forwarding behavior of the data plane, using device level APIs of the physical network equipment distributed around the network. 
Thus, NFV is not dependent on SDN or SDN concepts, but NFV and SDN can cooperate to enhance the management of an NFV infrastructure and to create a more dynamic network environment. It is entirely possible to implement a virtualized network function (VNF) as a standalone entity using existing networking and orchestration paradigms. However, there are inherent benefits in leveraging SDN concepts to implement and manage an NFV infrastructure, particularly when looking at the management and orchestration of Network Services (NSs), composed of different types of Network Functions (NFs), such as Physical Network Functions (PNFs) and VNFs, and placed between different geo-located NFV infrastructures, which is why multivendor platforms are being defined that incorporate SDN and NFV in concerted ecosystems. An NFV system needs a central orchestration and management system that takes operator requests associated with an NS or a VNF and translates them into the appropriate processing, storage and network configuration needed to bring the NS or VNF into operation. Once in operation, the VNF and the networks it is connected to must potentially be monitored for capacity and utilization, and adapted if necessary. All network control functions in an NFV infrastructure can be accomplished using SDN concepts, and NFV could be considered one of the primary SDN use cases in service provider environments. For example, within each NFV infrastructure site, a VIM could rely upon an SDN controller to set up and configure the overlay networks (e.g., VXLAN) interconnecting the VNFs and PNFs composing an NS. The SDN controller would then configure the NFV infrastructure switches and routers, as well as the network gateways, as needed. Similarly, a Wide Area Infrastructure Manager (WIM) could rely upon an SDN controller to set up overlay networks to interconnect NSs that are deployed to different geo-located NFV infrastructures. It is also apparent that many SDN use-cases could incorporate concepts introduced in the NFV initiative. Examples include where the centralized controller is controlling a distributed forwarding function that could in fact be also virtualized on existing processing or routing equipment. == Industry impact == NFV has proven a popular standard even in its infancy. Its immediate applications are numerous, such as virtualization of mobile base stations, platform as a service (PaaS), content delivery networks (CDN), fixed access and home environments. The potential benefits of NFV are anticipated to be significant. Virtualization of network functions deployed on general purpose standardized hardware is expected to reduce capital and operational expenditures, and service and product introduction times. Many major network equipment vendors have announced support for NFV. This has coincided with NFV announcements from major software suppliers who provide the NFV platforms used by equipment suppliers to build their NFV products. However, to realize the anticipated benefits of virtualization, network equipment vendors are improving IT virtualization technology to incorporate carrier-grade attributes required to achieve high availability, scalability, performance, and effective network management capabilities. To minimize the total cost of ownership (TCO), carrier-grade features must be implemented as efficiently as possible. 
This requires that NFV solutions make efficient use of redundant resources to achieve five-nines availability (99.999%), and of computing resources, without compromising performance predictability. The NFV platform is the foundation for achieving efficient carrier-grade NFV solutions. It is a software platform running on standard multi-core hardware and built using open source software that incorporates carrier-grade features. The NFV platform software is responsible for dynamically reassigning VNFs due to failures and changes in traffic load, and therefore plays an important role in achieving high availability. There are numerous initiatives underway to specify, align and promote NFV carrier-grade capabilities, such as the ETSI NFV Proof of Concept, the ATIS Open Platform for NFV Project, the Carrier Network Virtualization Awards and various supplier ecosystems. The vSwitch, a key component of NFV platforms, is responsible for providing connectivity both between VMs and between VMs and the outside network. Its performance determines both the bandwidth of the VNFs and the cost-efficiency of NFV solutions. The performance of the standard Open vSwitch (OVS) has shortcomings that must be resolved to meet the needs of NFVI solutions. Significant performance improvements are being reported by NFV suppliers for both OVS and Accelerated Open vSwitch (AVS) versions. Virtualization is also changing the way availability is specified, measured and achieved in NFV solutions. As VNFs replace traditional function-dedicated equipment, there is a shift from equipment-based availability to a service-based, end-to-end, layered approach. Virtualizing network functions breaks the explicit coupling with specific equipment; therefore, availability is defined by the availability of VNF services. Because NFV technology can virtualize a wide range of network function types, each with its own service availability expectations, NFV platforms should support a wide range of fault tolerance options. This flexibility enables CSPs to optimize their NFV solutions to meet any VNF availability requirement. == Management and orchestration (MANO) == ETSI has already indicated that an important part of controlling the NFV environment should be done through automated orchestration. NFV Management and Orchestration (NFV-MANO) refers to a set of functions within an NFV system to manage and orchestrate the allocation of virtual infrastructure resources to virtualized network functions (VNFs) and network services (NSs). These functions are the brains of the NFV system and a key automation enabler. The main functional blocks within the NFV-MANO architectural framework (ETSI GS NFV-006) are: Network Functions Virtualisation Orchestrator (NFVO); Virtualised Network Function Manager (VNFM); Virtualised Infrastructure Manager (VIM). The entry point in NFV-MANO for external operations support systems (OSS) and business support systems (BSS) is the NFVO, which is in charge of managing the lifecycle of NS instances. The management of the lifecycle of VNF instances constituting an NS instance is delegated by the NFVO to one or more VNFMs. Both the NFVO and the VNFMs use the services exposed by one or more VIMs for allocating virtual infrastructure resources to the objects they manage. Additional functions are used for managing containerized VNFs: the Container Infrastructure Service Management (CISM) and the Container Image Registry (CIR) functions. 
The CISM is responsible for maintaining the containerized workloads, while the CIR is responsible for storing and maintaining information about OS container software images. The behavior of the NFVO and VNFM is driven by the contents of deployment templates (a.k.a. NFV descriptors) such as a Network Service Descriptor (NSD) and a VNF Descriptor (VNFD). ETSI delivers a full set of standards enabling an open ecosystem where Virtualized Network Functions (VNFs) can be interoperable with independently developed management and orchestration systems, and where the components of a management and orchestration system are themselves interoperable. This includes a set of RESTful API specifications as well as the specifications of a packaging format for delivering VNFs to service providers and of the deployment templates to be packaged with the software images to enable managing the lifecycle of VNFs. Deployment templates can be based on TOSCA or YANG. An OpenAPI (a.k.a. Swagger) representation of the API specifications is available and maintained on the ETSI forge server, along with TOSCA and YANG definition files to be used when creating deployment templates. An overview of the different versions of the OpenAPI representations of NFV-MANO APIs is available on the ETSI NFV wiki. The OpenAPI files as well as the TOSCA YAML definition files and YANG modules applicable to NFV descriptors are available on the ETSI Forge. Additional studies are ongoing within ETSI on possible enhancements to the NFV-MANO framework to improve its automation capabilities and introduce autonomous management mechanisms (see ETSI GR NFV-IFA 041). == Performance study == Recent performance studies on NFV have focused on the throughput, latency and jitter of virtualized network functions (VNFs), as well as NFV scalability in terms of the number of VNFs a single physical server can support. Open source NFV platforms are available; one representative is openNetVM. openNetVM is a high-performance NFV platform based on DPDK and Docker containers. openNetVM provides a flexible framework for deploying network functions and interconnecting them to build service chains. openNetVM is an open source version of the NetVM platform described in NSDI 2014 and HotMiddlebox 2016 papers, released under the BSD license. The source code can be found at GitHub:openNetVM. == Cloud-native network functions == From 2018, many VNF providers began to migrate many of their VNFs to a container-based architecture. Such VNFs, also known as Cloud-Native Network Functions (CNFs), utilize many innovations deployed commonly on internet infrastructure. These include auto-scaling, supporting a continuous delivery / DevOps deployment model, and efficiency gains by sharing common services across platforms. Through service discovery and orchestration, a network based on CNFs will be more resilient to infrastructure resource failures. Utilizing containers, and thus dispensing with the overhead inherent in traditional virtualization through the elimination of the guest OS, can greatly increase infrastructure resource efficiency. == See also == Hardware virtualization Network management Network virtualization OASIS TOSCA Open Platform for NFV == References == == External links == NFV basics Open Platform for NFV (OPNFV) The ETSI NFV FAQ What are the benefits of NFV?
Wikipedia/Network_function_virtualization
In object-oriented programming, a class defines the shared aspects of objects created from the class. The capabilities of a class differ between programming languages, but generally the shared aspects consist of state (variables) and behavior (methods) that are each either associated with a particular object or with all objects of that class. Object state can differ between instances of the class, whereas the class state is shared by all of them. The object methods include access to the object state (via an implicit or explicit parameter that references the object), whereas class methods do not. If the language supports inheritance, a class can be defined based on another class with all of its state and behavior plus additional state and behavior that further specializes the class. The specialized class is a sub-class, and the class it is based on is its superclass. == Attributes == === Object lifecycle === As an instance of a class, an object is constructed from a class via instantiation. Memory is allocated and initialized for the object state and a reference to the object is provided to consuming code. The object is usable until it is destroyed – its state memory is de-allocated. Most languages allow for custom logic at lifecycle events via a constructor and a destructor. === Type === An object expresses data type as an interface – the type of each member variable and the signature of each member function (method). A class defines an implementation of an interface, and instantiating the class results in an object that exposes the implementation via the interface. In terms of type theory, a class is an implementation – a concrete data structure and collection of subroutines – while a type is an interface. Different (concrete) classes can produce objects of the same (abstract) type (depending on type system). For example, the type (interface) Stack might be implemented by SmallStack, which is fast for small stacks but scales poorly, and ScalableStack, which scales well but has high overhead for small stacks. === Structure === A class contains data field descriptions (or properties, fields, data members, or attributes). These are usually field types and names that will be associated with state variables at program run time; these state variables either belong to the class or specific instances of the class. In most languages, the structure defined by the class determines the layout of the memory used by its instances. Other implementations are possible: for example, objects in Python use associative key-value containers. Some programming languages such as Eiffel support specification of invariants as part of the definition of the class, and enforce them through the type system. Encapsulation of state is necessary for being able to enforce the invariants of the class. === Behavior === The behavior of a class or its instances is defined using methods. Methods are subroutines with the ability to operate on objects or classes. These operations may alter the state of an object or simply provide ways of accessing it. Many kinds of methods exist, but support for them varies across languages. Some types of methods are created and called by programmer code, while other special methods—such as constructors, destructors, and conversion operators—are created and called by compiler-generated code. A language may also allow the programmer to define and call these special methods. === Class interface === Every class implements (or realizes) an interface by providing structure and behavior. 
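The Stack example just mentioned can be made concrete with a brief sketch. This is a minimal illustration only: the interface and the two class names follow the text, but the code itself is hypothetical rather than taken from any particular library.

```cpp
#include <cstddef>
#include <deque>
#include <stdexcept>

// The type (interface): what every stack can do, with no implementation.
class Stack {
public:
    virtual ~Stack() = default;
    virtual void push(int value) = 0;
    virtual int pop() = 0;
    virtual bool empty() const = 0;
};

// One class implementing the interface: a small fixed-capacity array,
// cheap for small stacks but unable to grow.
class SmallStack : public Stack {
public:
    void push(int value) override {
        if (count_ == kCapacity) throw std::overflow_error("SmallStack is full");
        data_[count_++] = value;
    }
    int pop() override {
        if (count_ == 0) throw std::underflow_error("SmallStack is empty");
        return data_[--count_];
    }
    bool empty() const override { return count_ == 0; }

private:
    static constexpr std::size_t kCapacity = 16;
    int data_[kCapacity] = {};
    std::size_t count_ = 0;
};

// Another class implementing the same interface: grows without bound,
// at the price of more overhead per element.
class ScalableStack : public Stack {
public:
    void push(int value) override { data_.push_back(value); }
    int pop() override {
        if (data_.empty()) throw std::underflow_error("ScalableStack is empty");
        int value = data_.back();
        data_.pop_back();
        return value;
    }
    bool empty() const override { return data_.empty(); }

private:
    std::deque<int> data_;
};
```

Code written against Stack works with either class; which concrete class was instantiated is invisible to it, which is exactly the type-versus-class distinction described above.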
Structure consists of data and state, and behavior consists of code that specifies how methods are implemented. There is a distinction between the definition of an interface and the implementation of that interface; however, this line is blurred in many programming languages because class declarations both define and implement an interface. Some languages, however, provide features that separate interface and implementation. For example, an abstract class can define an interface without providing an implementation. Languages that support class inheritance also allow classes to inherit interfaces from the classes that they are derived from. For example, if "class A" inherits from "class B" and if "class B" implements the interface "interface B", then "class A" also inherits the functionality (constants and method declarations) provided by "interface B". In languages that support access specifiers, the interface of a class is considered to be the set of public members of the class, including both methods and attributes (via implicit getter and setter methods); any private members or internal data structures are not intended to be depended on by external code and thus are not part of the interface. Object-oriented programming methodology dictates that the operations of any interface of a class are to be independent of each other. This results in a layered design where clients of an interface use the methods declared in the interface. An interface places no requirements for clients to invoke the operations of one interface in any particular order. This approach has the benefit that client code can assume that the operations of an interface are available for use whenever the client has access to the object. Class interface example: The buttons on the front of your television set are the interface between you and the electrical wiring on the other side of its plastic casing. You press the "power" button to toggle the television on and off. In this example, your particular television is the instance, each method is represented by a button, and all the buttons together compose the interface (other television sets that are the same model as yours would have the same interface). In its most common form, an interface is a specification of a group of related methods without any associated implementation of the methods. A television set also has a myriad of attributes, such as size and whether it supports color, which together comprise its structure. A class represents the full description of a television, including its attributes (structure) and buttons (interface). Getting the total number of televisions manufactured could be a static method of the television class. This method is associated with the class, yet is outside the domain of each instance of the class. A static method that finds a particular instance out of the set of all television objects is another example. === Member accessibility === The following is a common set of access specifiers: Private (or class-private) restricts access to the class itself. Only methods that are part of the same class can access private members. Protected (or class-protected) allows the class itself and all its subclasses to access the member. Public means that any code can access the member by its name. Although many object-oriented languages support the above access specifiers, their semantics may differ. 
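A minimal C++ sketch of these three levels, using a hypothetical Account class, might look as follows; the member names are illustrative only.

```cpp
#include <stdexcept>
#include <string>
#include <utility>

class Account {
public:
    explicit Account(std::string owner) : owner_(std::move(owner)) {}

    // Public: any code holding an Account may call these members.
    void deposit(double amount) { set_balance(balance_ + amount); }
    double balance() const { return balance_; }

protected:
    // Protected: usable by Account itself and by classes derived from it.
    void set_balance(double value) {
        if (value < 0.0) throw std::invalid_argument("balance cannot be negative");
        balance_ = value;
    }

private:
    // Private: reachable only from Account's own member functions (and friends).
    std::string owner_;
    double balance_ = 0.0;
};
```

Here the protected setter is the only path to the private balance_, which is one common way of keeping an invariant (a non-negative balance) under the class's control.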
Object-oriented design uses the access specifiers in conjunction with careful design of public method implementations to enforce class invariants—constraints on the state of the objects. A common usage of access specifiers is to separate the internal data of a class from its interface: the internal structure is made private, while public accessor methods can be used to inspect or alter such private data. Access specifiers do not necessarily control visibility, in that even private members may be visible to client external code. In some languages, an inaccessible but visible member may be referred to at runtime (for example, by a pointer returned from a member function), but an attempt to use it by referring to the name of the member from the client code will be prevented by the type checker. The various object-oriented programming languages enforce member accessibility and visibility to various degrees and, depending on the language's type system and compilation policies, enforce them at either compile time or runtime. For example, the Java language does not allow client code that accesses the private data of a class to compile. In the C++ language, private methods are visible, but not accessible in the interface; however, they may be made invisible by explicitly declaring fully abstract classes that represent the interfaces of the class. Some languages feature other accessibility schemes: Instance vs. class accessibility: Ruby supports instance-private and instance-protected access specifiers in lieu of class-private and class-protected, respectively. They differ in that they restrict access based on the instance itself, rather than the instance's class. Friend: C++ supports a mechanism where a function explicitly declared as a friend function of the class may access the members designated as private or protected. Path-based: Java supports restricting access to a member within a Java package, which is the logical path of the file. However, it is a common practice when extending a Java framework to implement classes in the same package as a framework class to access protected members. The source file may exist in a completely different location, and may be deployed to a different .jar file, yet still be in the same logical path as far as the JVM is concerned. ==== Inheritance ==== Conceptually, a superclass is a superset of its subclasses. For example, GraphicObject could be a superclass of Rectangle and Ellipse, while Square would be a subclass of Rectangle. These are all subset relations in set theory as well, i.e., all squares are rectangles but not all rectangles are squares. A common conceptual error is to mistake a part-of relation for a subclass relation. For example, a car and truck are both kinds of vehicles and it would be appropriate to model them as subclasses of a vehicle class. However, it would be an error to model the parts of the car as subclass relations. For example, a car is composed of an engine and body, but it would not be appropriate to model an engine or body as a subclass of a car. In object-oriented modeling these kinds of relations are typically modeled as object properties. In this example, the Car class would have a property called parts. parts would be typed to hold a collection of objects, such as instances of Body, Engine, Tires, etc. Object modeling languages such as UML include capabilities to model various aspects of "part of" and other kinds of relations – data such as the cardinality of the objects, constraints on input and output values, etc. 
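In C++, the two kinds of relation from this example read roughly as follows; the class bodies are elided, the names echo the text, and the code is otherwise hypothetical.

```cpp
class Vehicle { /* state and behavior common to all vehicles */ };

class Engine { /* ... */ };
class Body   { /* ... */ };

// "is-a": a car and a truck are kinds of vehicle, so subclassing is appropriate.
class Car : public Vehicle {
public:
    // "part-of": an engine and a body are parts of a car, not kinds of car, so
    // they appear as properties (data members) - the text's 'parts' property
    // could equally be a collection holding these component objects.
    Engine engine;
    Body body;
};

class Truck : public Vehicle { /* ... */ };
```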
The information captured by modeling languages such as UML can be utilized by developer tools to generate additional code besides the basic data definitions for the objects, such as error checking on get and set methods. One important question when modeling and implementing a system of object classes is whether a class can have one or more superclasses. In the real world with actual sets, it would be rare to find sets that did not intersect with more than one other set. However, while some systems such as Flavors and CLOS provide the capability for more than one parent, doing so at run time introduces complexity that many in the object-oriented community consider antithetical to the goals of using object classes in the first place. Understanding which class will be responsible for handling a message can get complex when dealing with more than one superclass. If used carelessly, this feature can introduce some of the same system complexity and ambiguity classes were designed to avoid. Most modern object-oriented languages such as Smalltalk and Java require single inheritance at run time. For these languages, multiple inheritance may be useful for modeling but not for an implementation. However, semantic web application objects do have multiple superclasses. The volatility of the Internet requires this level of flexibility and the technology standards such as the Web Ontology Language (OWL) are designed to support it. A similar issue is whether or not the class hierarchy can be modified at run time. Languages such as Flavors, CLOS, and Smalltalk all support this feature as part of their meta-object protocols. Since classes are themselves first-class objects, it is possible to have them dynamically alter their structure by sending them the appropriate messages. Other languages that focus more on strong typing such as Java and C++ do not allow the class hierarchy to be modified at run time. Semantic web objects have the capability for run time changes to classes. The rationale is similar to the justification for allowing multiple superclasses, that the Internet is so dynamic and flexible that dynamic changes to the hierarchy are required to manage this volatility. Although many class-based languages support inheritance, inheritance is not an intrinsic aspect of classes. An object-based language (e.g., Classic Visual Basic) supports classes yet does not support inheritance. === Local and inner === In some languages, classes can be declared in scopes other than the global scope. There are various types of such classes. An inner class is a class defined within another class. The relationship between an inner class and its containing class can also be treated as another type of class association. An inner class is typically neither associated with instances of the enclosing class nor instantiated along with its enclosing class. Depending on the language, it may or may not be possible to refer to the class from outside the enclosing class. A related concept is inner types, also known as inner data type or nested type, which is a generalization of the concept of inner classes. C++ is an example of a language that supports both inner classes and inner types (via typedef declarations). A local class is a class defined within a procedure or function. Such structure limits references to the class name to within the scope where the class is declared. Depending on the semantic rules of the language, there may be additional restrictions on local classes compared to non-local ones. 
One common restriction is to disallow local class methods to access local variables of the enclosing function. For example, in C++, a local class may refer to static variables declared within its enclosing function, but may not access the function's automatic variables. === Metaclass === A metaclass is a class where instances are classes. A metaclass describes a common structure of a collection of classes and can implement a design pattern or describe particular kinds of classes. Metaclasses are often used to describe frameworks. In some languages, such as Python, Ruby or Smalltalk, a class is also an object; thus each class is an instance of a unique metaclass that is built into the language. The Common Lisp Object System (CLOS) provides metaobject protocols (MOPs) to implement those classes and metaclasses. === Sealed === A sealed class cannot be subclassed. It is basically the opposite of an abstract class, which must be derived to be used. A sealed class is implicitly concrete. A class is declared as sealed via the keyword sealed in C# or final in Java or PHP. For example, Java's String class is marked as final. Sealed classes may allow a compiler to perform optimizations that are not available for classes that can be subclassed. === Open === An open class can be changed. Typically, an executable program cannot be changed by customers. Developers can often change some classes, but typically cannot change standard or built-in ones. In Ruby, all classes are open. In Python, classes can be created at runtime, and all can be modified afterward. Objective-C categories permit the programmer to add methods to an existing class without the need to recompile that class or even have access to its source code. === Mixin === Some languages have special support for mixins, though, in any language with multiple inheritance, a mixin is simply a class that does not represent an is-a-type-of relationship. Mixins are typically used to add the same methods to multiple classes; for example, a class UnicodeConversionMixin might provide a method called unicode_to_ascii when included in classes FileReader and WebPageScraper that do not share a common parent. === Partial === In languages supporting the feature, a partial class is a class whose definition may be split into multiple pieces, within a single source-code file or across multiple files. The pieces are merged at compile time, making compiler output the same as for a non-partial class. The primary motivation for the introduction of partial classes is to facilitate the implementation of code generators, such as visual designers. It is otherwise a challenge or compromise to develop code generators that can manage the generated code when it is interleaved within developer-written code. Using partial classes, a code generator can process a separate file or coarse-grained partial class within a file, and is thus alleviated from intricately interjecting generated code via extensive parsing, increasing compiler efficiency and eliminating the potential risk of corrupting developer code. In a simple implementation of partial classes, the compiler can perform a phase of precompilation where it "unifies" all the parts of a partial class. Then, compilation can proceed as usual. Other benefits and effects of the partial class feature include: Enables separation of a class's interface and implementation code in a unique way. Eases navigation through large classes within an editor. 
Enables separation of concerns, in a way similar to aspect-oriented programming but without using any extra tools. Enables multiple developers to work on a single class concurrently without the need to merge individual code into one file at a later time. Partial classes have existed in Smalltalk under the name of Class Extensions for a considerable time. With the arrival of the .NET framework 2, Microsoft introduced partial classes, supported in both C# 2.0 and Visual Basic 2005. WinRT also supports partial classes. === Uninstantiable === Uninstantiable classes allow programmers to group together per-class fields and methods that are accessible at runtime without an instance of the class. Indeed, instantiation is prohibited for this kind of class. For example, in C#, a class marked "static" cannot be instantiated, can only have static members (fields, methods, other), may not have instance constructors, and is sealed. === Unnamed === An unnamed class or anonymous class is not bound to a name or identifier upon definition. This is analogous to named versus unnamed functions. == Benefits == The benefits of organizing software into object classes fall into three categories: Rapid development Ease of maintenance Reuse of code and designs Object classes facilitate rapid development because they lessen the semantic gap between the code and the users. System analysts can talk to both developers and users using essentially the same vocabulary, talking about accounts, customers, bills, etc. Object classes often facilitate rapid development because most object-oriented environments come with powerful debugging and testing tools. Instances of classes can be inspected at run time to verify that the system is performing as expected. Also, rather than get dumps of core memory, most object-oriented environments have interpreted debugging capabilities so that the developer can analyze exactly where in the program the error occurred and can see which methods were called and with what arguments. Object classes facilitate ease of maintenance via encapsulation. When developers need to change the behavior of an object they can localize the change to just that object and its component parts. This reduces the potential for unwanted side effects from maintenance enhancements. Software reuse is also a major benefit of using object classes. Classes facilitate re-use via inheritance and interfaces. When a new behavior is required it can often be achieved by creating a new class and having that class inherit the default behaviors and data of its superclass and then tailoring some aspect of the behavior or data accordingly. Re-use via interfaces (also known as methods) occurs when another object wants to invoke (rather than create a new kind of) some object class. This method for re-use removes many of the common errors that can make their way into software when one program re-uses code from another. == Runtime representation == As a data type, a class is usually considered a compile-time construct. A language or library may also support prototype or factory metaobjects that represent runtime information about classes, or even represent metadata that provides access to reflective programming (reflection) facilities and the ability to manipulate data structure formats at runtime. Many languages distinguish this kind of run-time type information about classes from a class on the basis that the information is not needed at runtime. 
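As one small, language-specific illustration, C++ exposes a limited run-time view of a class through its RTTI facilities; the classes below are hypothetical and the sketch only shows the general idea.

```cpp
#include <iostream>
#include <typeinfo>

class Shape {
public:
    virtual ~Shape() = default;  // polymorphic base, so typeid reports the dynamic type
};

class Circle : public Shape {};

int main() {
    Circle c;
    Shape& s = c;

    // typeid yields a std::type_info object - a small piece of run-time
    // information about the object's class, distinct from the class itself.
    std::cout << typeid(s).name() << "\n";  // implementation-defined spelling of "Circle"
    std::cout << std::boolalpha
              << (typeid(s) == typeid(Circle)) << "\n";  // true
}
```

This is far more limited than the metaobjects discussed next, but it shows the same separation between a class and the run-time information about it.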
Some dynamic languages do not make strict distinctions between runtime and compile time constructs, and therefore may not distinguish between metaobjects and classes. For example, if Human is a metaobject representing the class Person, then instances of class Person can be created by using the facilities of the Human metaobject. == Prototype-based programming == In contrast to creating an object from a class, some programming contexts support object creation by copying (cloning) a prototype object. == See also == Class diagram – Diagram that describes the static structure of a software system Class variable – Variable defined in a class whose objects all possess the same copy Instance variable – Member variable of a class that all its objects possess a copy of List of object-oriented programming languages Trait (computer programming) – Set of methods that extend the functionality of a class == Notes == == References == == Further reading == Abadi; Cardelli: A Theory of Objects ISO/IEC 14882:2003 Programming Language C++, International standard Class Warfare: Classes vs. Prototypes, by Brian Foote Meyer, B.: "Object-oriented software construction", 2nd edition, Prentice Hall, 1997, ISBN 0-13-629155-4 Rumbaugh et al.: "Object-oriented modeling and design", Prentice Hall, 1991, ISBN 0-13-630054-5
Wikipedia/Class_(computer_science)
In some programming languages, function overloading or method overloading is the ability to create multiple functions of the same name with different implementations. Calls to an overloaded function will run a specific implementation of that function appropriate to the context of the call, allowing one function call to perform different tasks depending on context. == Basic definition == For example, doTask() and doTask(object o) are overloaded functions. To call the latter, an object must be passed as a parameter, whereas the former does not require a parameter, and is called with an empty parameter field. A common error would be to assign a default value to the object in the second function, which would result in an ambiguous call error, as the compiler wouldn't know which of the two methods to use. Another example is a Print(object o) function that executes different actions based on whether it's printing text or photos. The two different functions may be overloaded as Print(text_object T); Print(image_object P). If we write the overloaded print functions for all objects our program will "print", we never have to worry about the type of the object or about calling the correct function; the call is always Print(something). == Languages supporting overloading == Languages which support function overloading include, but are not necessarily limited to, the following: Ada Apex C++ C# Clojure D Swift Fortran Kotlin Java Julia PostgreSQL and PL/SQL Scala TypeScript Visual Basic (.NET) Wolfram Language Elixir Nim Crystal Delphi Python Languages that do not support function overloading include C, Rust and Zig. == Rules in function overloading == The same function name is used for more than one function definition in a particular module, class or namespace. The functions must have different type signatures, i.e. differ in the number or the types of their formal parameters (as in C++) or additionally in their return type (as in Ada). Function overloading is usually associated with statically-typed programming languages that enforce type checking in function calls. An overloaded function is a set of different functions that are callable with the same name. For any particular call, the compiler determines which overloaded function to use and resolves this at compile time. This is true for programming languages such as Java. Function overloading differs from forms of polymorphism where the choice is made at runtime, e.g. through virtual functions, instead of statically. Example: Function overloading in C++ – in the first sketch below, the volume of each component is calculated using one of three functions named volume, with selection based on the differing number and type of actual parameters. == Constructor overloading == Constructors, used to create instances of an object, may also be overloaded in some object-oriented programming languages. Because in many languages the constructor's name is predetermined by the name of the class, it would seem that there can be only one constructor. Whenever multiple constructors are needed, they are to be implemented as overloaded functions. In C++, default constructors take no parameters, instantiating the object members with their appropriate default values, "which is normally zero for numeral fields and empty string for string fields". For example, a default constructor for a restaurant bill object written in C++ might set the tip to 15%, as in the second sketch below. The drawback to this is that it takes two steps to change the value of the created Bill object. 
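The following two sketches are minimal reconstructions consistent with the descriptions above, not the original listings; all names are illustrative. The first shows three overloads named volume, distinguished only by their parameter lists:

```cpp
#include <iostream>

// volume of a cube
double volume(double side) { return side * side * side; }

// volume of a cylinder
double volume(double radius, int height) { return 3.14159 * radius * radius * height; }

// volume of a cuboid
double volume(long length, int breadth, int height) {
    return static_cast<double>(length) * breadth * height;
}

int main() {
    std::cout << volume(2.0) << "\n";       // one parameter: cube overload
    std::cout << volume(1.0, 10) << "\n";   // two parameters: cylinder overload
    std::cout << volume(3L, 4, 5) << "\n";  // three parameters: cuboid overload
}
```

The second sketches a Bill class whose default constructor sets the tip to 15%, alongside an overloaded two-parameter constructor:

```cpp
#include <iostream>

class Bill {
public:
    // Default constructor: no parameters, tip defaults to 15%, total to zero.
    Bill() : tip_(0.15), total_(0.0) {}

    // Overloaded constructor: tip and total are supplied at creation.
    Bill(double tip, double total) : tip_(tip), total_(total) {}

    void set_tip(double tip) { tip_ = tip; }
    void set_total(double total) { total_ = total; }
    double amount_due() const { return total_ * (1.0 + tip_); }

private:
    double tip_;
    double total_;
};

int main() {
    // Default constructor: creating the object and setting its values are separate steps.
    Bill cafe;
    cafe.set_tip(0.20);
    cafe.set_total(4.50);

    // Overloaded constructor: both values are set in one step at creation.
    Bill dinner(0.20, 35.00);

    std::cout << cafe.amount_due() << " " << dinner.amount_due() << "\n";
}
```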
The main function in the second sketch shows creation followed by changing the values as separate steps within the main program. By overloading the constructor, one can instead pass the tip and total as parameters at creation. The sketch includes such an overloaded constructor with two parameters, placed in the class alongside the original default constructor. Which one gets used depends on the number of parameters provided when the new Bill object is created (none, or two). A function that creates a new Bill object can then pass two values into the constructor and set the data members in one step, as the sketch also shows. This can be useful in increasing program efficiency and reducing code length. Another reason for constructor overloading can be to enforce mandatory data members. In this case the default constructor is declared private or protected (or preferably deleted since C++11) to make it inaccessible from outside. For the Bill above, total might be the only constructor parameter – since a Bill has no sensible default for total – whereas tip defaults to 0.15. == Complications == Two issues interact with and complicate function overloading: Name masking (due to scope) and implicit type conversion. If a function is declared in one scope, and then another function with the same name is declared in an inner scope, there are two natural possible overloading behaviors: the inner declaration masks the outer declaration (regardless of signature), or both the inner declaration and the outer declaration are included in the overload, with the inner declaration masking the outer declaration only if the signature matches. The first is taken in C++: "in C++, there is no overloading across scopes." As a result, to obtain an overload set with functions declared in different scopes, one needs to explicitly import the functions from the outer scope into the inner scope, with the using keyword. Implicit type conversion complicates function overloading because if the types of parameters do not exactly match the signature of one of the overloaded functions, but can match after type conversion, resolution depends on which type conversion is chosen. These can combine in confusing ways: An inexact match declared in an inner scope can mask an exact match declared in an outer scope, for instance. For example, to have a derived class with an overload set containing a function taking a double together with the base class's function taking an int, in C++, one would add a using-declaration for the base class's F to the derived class. Failing to include the using results in an int parameter passed to F in the derived class being converted to a double and matching the function in the derived class, rather than the one in the base class; including the using results in an overload set in the derived class and thus matching the function in the base class. == Caveats == If a method is designed with an excessive number of overloads, it may be difficult for developers to discern which overload is being called simply by reading the code. This is particularly true if some of the overloaded parameters are of types that are inherited types of other possible parameters (for example "object"). An IDE can perform the overload resolution and display (or navigate to) the correct overload. Type-based overloading can also hamper code maintenance, where code updates can accidentally change which method overload is chosen by the compiler, as the sketch below illustrates. 
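A minimal, hypothetical illustration of that maintenance hazard: adding an overload changes the behavior of an existing call without any edit at the call site.

```cpp
#include <iostream>

// Originally only this overload existed, so print_area(5) converted 5 to 5.0.
void print_area(double side) { std::cout << "area: " << side * side << "\n"; }

// A later update adds an int overload. The existing call print_area(5) now
// silently resolves here instead, because an exact match beats a conversion.
void print_area(int side) { std::cout << "integer area: " << side * side << "\n"; }

int main() {
    print_area(5);    // now selects the int overload
    print_area(2.5);  // still selects the double overload
}
```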
== See also == Abstraction (computer science) Constructor (computer science) Default argument Dynamic dispatch Factory method pattern Method signature Method overriding Object-oriented programming Operator overloading == Citations == == References == Bloch, Joshua (2018). "Effective Java: Programming Language Guide" (third ed.). Addison-Wesley. ISBN 978-0134685991. == External links == Meyer, Bertrand (October 2001). "Overloading vs Object Technology" (PDF). Eiffel column. Journal of Object-Oriented Programming. 14 (4). 101 Communications LLC: 3–7. Retrieved 27 August 2020.
Wikipedia/Method_overloading
In computer programming, a macro (short for "macro instruction"; from Greek μακρο- 'long, large') is a rule or pattern that specifies how a certain input should be mapped to a replacement output. Applying a macro to an input is known as macro expansion. The input and output may be a sequence of lexical tokens or characters, or a syntax tree. Character macros are supported in software applications to make it easy to invoke common command sequences. Token and tree macros are supported in some programming languages to enable code reuse or to extend the language, sometimes for domain-specific languages. Macros are used to make a sequence of computing instructions available to the programmer as a single program statement, making the programming task less tedious and less error-prone. Thus, they are called "macros" because a "big" block of code can be expanded from a "small" sequence of characters. Macros often allow positional or keyword parameters that dictate what the conditional assembler program generates and have been used to create entire programs or program suites according to such variables as operating system, platform or other factors. The term derives from "macro instruction", and such expansions were originally used in generating assembly language code. == Keyboard and mouse macros == Keyboard macros and mouse macros allow short sequences of keystrokes and mouse actions to transform into other, usually more time-consuming, sequences of keystrokes and mouse actions. In this way, frequently used or repetitive sequences of keystrokes and mouse movements can be automated. Separate programs for creating these macros are called macro recorders. During the 1980s, macro programs – originally SmartKey, then SuperKey, KeyWorks, Prokey – were very popular, first as a means to automatically format screenplays, then for a variety of user-input tasks. These programs were based on the terminate-and-stay-resident mode of operation and applied to all keyboard input, no matter in which context it occurred. They have to some extent fallen into obsolescence following the advent of mouse-driven user interfaces and the availability of keyboard and mouse macros in applications, such as word processors and spreadsheets, making it possible to create application-sensitive keyboard macros. Keyboard macros can be used in massively multiplayer online role-playing games (MMORPGs) to perform repetitive, but lucrative tasks, thus accumulating resources. As this is done without human effort, it can skew the economy of the game. For this reason, use of macros is a violation of the TOS or EULA of most MMORPGs, and their administrators spend considerable effort to suppress them. === Application macros and scripting === Keyboard and mouse macros that are created using an application's built-in macro features are sometimes called application macros. They are created by carrying out the sequence once and letting the application record the actions. An underlying macro programming language, most commonly a scripting language, with direct access to the features of the application may also exist. The programmers' text editor Emacs (short for "editing macros") follows this idea to a conclusion. In effect, most of the editor is made of macros. Emacs was originally devised as a set of macros in the editing language TECO; it was later ported to dialects of Lisp. Another programmers' text editor, Vim (a descendant of vi), also has an implementation of keyboard macros. 
It can record into a register (macro) what a person types on the keyboard and it can be replayed or edited just like VBA macros for Microsoft Office. Vim also has a scripting language called Vimscript to create macros. Visual Basic for Applications (VBA) is a programming language included in Microsoft Office from Office 97 through Office 2019 (although it was available in some components of Office prior to Office 97). However, its function has evolved from and replaced the macro languages that were originally included in some of these applications. XEDIT, running on the Conversational Monitor System (CMS) component of VM, supports macros written in EXEC, EXEC2 and REXX, and some CMS commands were actually wrappers around XEDIT macros. The Hessling Editor (THE), a partial clone of XEDIT, supports Rexx macros using Regina and Open Object REXX (oorexx). Many common applications, and some on PCs, use Rexx as a scripting language. ==== Macro virus ==== VBA has access to most Microsoft Windows system calls and executes when documents are opened. This makes it relatively easy to write computer viruses in VBA, commonly known as macro viruses. In the mid-to-late 1990s, this became one of the most common types of computer virus. However, during the late 1990s and to date, Microsoft has been patching and updating its programs. In addition, current anti-virus programs immediately counteract such attacks. == Parameterized and parameterless macro == A parameterized macro is a macro that is able to insert given objects into its expansion. This gives the macro some of the power of a function. As a simple example, in the C programming language, this is a typical macro that is not a parameterized macro, i.e., a parameterless macro: #define PI 3.14159 This causes PI to always be replaced with 3.14159 wherever it occurs. An example of a parameterized macro, on the other hand, is this: #define pred(x) ((x)-1) What this macro expands to depends on what argument x is passed to it. Here are some possible expansions: pred(2) → ((2) -1) pred(y+2) → ((y+2) -1) pred(f(5)) → ((f(5))-1) Parameterized macros are a useful source-level mechanism for performing in-line expansion, but in languages such as C where they use simple textual substitution, they have a number of severe disadvantages over other mechanisms for performing in-line expansion, such as inline functions. The parameterized macros used in languages such as Lisp, PL/I and Scheme, on the other hand, are much more powerful, able to make decisions about what code to produce based on their arguments; thus, they can effectively be used to perform run-time code generation. == Text-substitution macros == Languages such as C and some assembly languages have rudimentary macro systems, implemented as preprocessors to the compiler or assembler. C preprocessor macros work by simple textual substitution at the token, rather than the character level. However, the macro facilities of more sophisticated assemblers, e.g., IBM High Level Assembler (HLASM) can't be implemented with a preprocessor; the code for assembling instructions and data is interspersed with the code for assembling macro invocations. A classic use of macros is in the computer typesetting system TeX and its derivatives, where most functionality is based on macros. MacroML is an experimental system that seeks to reconcile static typing and macro systems. Nemerle has typed syntax macros, and one productive way to think of these syntax macros is as a multi-stage computation. 
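Because these macros substitute text at the token level, argument expressions interact with the surrounding operators in ways that a function call never would – which is why definitions such as pred above wrap everything in parentheses. A short, self-contained illustration (the macro names are hypothetical):

```cpp
#include <iostream>

#define SQUARE_BAD(x)  x * x          // plain textual substitution
#define SQUARE_GOOD(x) ((x) * (x))    // parenthesized argument and result

int main() {
    // SQUARE_BAD(1 + 2) expands to 1 + 2 * 1 + 2, which evaluates to 5, not 9.
    std::cout << SQUARE_BAD(1 + 2) << "\n";   // prints 5
    // SQUARE_GOOD(1 + 2) expands to ((1 + 2) * (1 + 2)), which evaluates to 9.
    std::cout << SQUARE_GOOD(1 + 2) << "\n";  // prints 9
}
```

An inline function with the same body would not need this care, since its argument is evaluated once as a value rather than pasted in as text.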
Other examples:
m4 is a sophisticated stand-alone macro processor.
TRAC
Macro Extension TAL, accompanying Template Attribute Language
SMX: for web pages
ML/1 (Macro Language One)
troff and nroff: for typesetting and formatting Unix manpages
CMS EXEC: for command-line macros and application macros
EXEC 2 in Conversational Monitor System (CMS): for command-line macros and application macros
CLIST in IBM's Time Sharing Option (TSO): for command-line macros and application macros
REXX: for command-line macros and application macros in, e.g., AmigaOS, CMS, OS/2, TSO
SCRIPT: for formatting documents
Various shells for, e.g., Linux
Some major applications have been written as text macros invoked by other applications, e.g., by XEDIT in CMS. === Embeddable languages === Some languages, such as PHP, can be embedded in free-format text, or the source code of other languages. The mechanism by which the code fragments are recognised (for instance, being bracketed by <?php and ?>) is similar to a textual macro language, but they are much more powerful, fully featured languages. == Procedural macros == Macros in the PL/I language are written in a subset of PL/I itself: the compiler executes "preprocessor statements" at compilation time, and the output of this execution forms part of the code that is compiled. The ability to use a familiar procedural language as the macro language gives power much greater than that of text substitution macros, at the expense of a larger and slower compiler. Macros in PL/I, as well as in many assemblers, may have side effects, e.g., setting variables that other macros can access. Frame technology's frame macros have their own command syntax but can also contain text in any language. Each frame is both a generic component in a hierarchy of nested subassemblies, and a procedure for integrating itself with its subassembly frames (a recursive process that resolves integration conflicts in favor of higher level subassemblies). The outputs are custom documents, typically compilable source modules. Frame technology can avoid the proliferation of similar but subtly different components, an issue that has plagued software development since the invention of macros and subroutines. Most assembly languages have less powerful procedural macro facilities, for example allowing a block of code to be repeated N times for loop unrolling; but these have a completely different syntax from the actual assembly language. == Syntactic macros == Macro systems—such as the C preprocessor described earlier—that work at the level of lexical tokens cannot preserve the lexical structure reliably. Syntactic macro systems work instead at the level of abstract syntax trees, and preserve the lexical structure of the original program. The most widely used implementations of syntactic macro systems are found in Lisp-like languages. These languages are especially suited for this style of macro due to their uniform, parenthesized syntax (known as S-expressions). In particular, uniform syntax makes it easier to determine the invocations of macros. Lisp macros transform the program structure itself, with the full language available to express such transformations. While syntactic macros are often found in Lisp-like languages, they are also available in other languages such as Prolog, Erlang, Dylan, Scala, Nemerle, Rust, Elixir, Nim, Haxe, and Julia. They are also available as third-party extensions to JavaScript and C#.
=== Early Lisp macros === Before Lisp had macros, it had so-called FEXPRs, function-like operators whose inputs were not the values computed by the arguments but rather the syntactic forms of the arguments, and whose output were values to be used in the computation. In other words, FEXPRs were implemented at the same level as EVAL, and provided a window into the meta-evaluation layer. This was generally found to be a difficult model to reason about effectively. In 1963, Timothy Hart proposed adding macros to Lisp 1.5 in AI Memo 57: MACRO Definitions for LISP. === Anaphoric macros === An anaphoric macro is a type of programming macro that deliberately captures some form supplied to the macro which may be referred to by an anaphor (an expression referring to another). Anaphoric macros first appeared in Paul Graham's On Lisp and their name is a reference to linguistic anaphora—the use of words as a substitute for preceding words. === Hygienic macros === In the mid-eighties, a number of papers introduced the notion of hygienic macro expansion (syntax-rules), a pattern-based system where the syntactic environments of the macro definition and the macro use are distinct, allowing macro definers and users not to worry about inadvertent variable capture (cf. referential transparency). Hygienic macros have been standardized for Scheme in the R5RS, R6RS, and R7RS standards. A number of competing implementations of hygienic macros exist such as syntax-rules, syntax-case, explicit renaming, and syntactic closures. Both syntax-rules and syntax-case have been standardized in the Scheme standards. Recently, Racket has combined the notions of hygienic macros with a "tower of evaluators", so that the syntactic expansion time of one macro system is the ordinary runtime of another block of code, and showed how to apply interleaved expansion and parsing in a non-parenthesized language. A number of languages other than Scheme either implement hygienic macros or implement partially hygienic systems. Examples include Scala, Rust, Elixir, Julia, Dylan, Nim, and Nemerle. === Applications === Evaluation order Macro systems have a range of uses. Being able to choose the order of evaluation (see lazy evaluation and non-strict functions) enables the creation of new syntactic constructs (e.g. control structures) indistinguishable from those built into the language. For instance, in a Lisp dialect that has cond but lacks if, it is possible to define the latter in terms of the former using macros. For example, Scheme has both continuations and hygienic macros, which enables a programmer to design their own control abstractions, such as looping and early exit constructs, without the need to build them into the language. Data sub-languages and domain-specific languages Next, macros make it possible to define data languages that are immediately compiled into code, which means that constructs such as state machines can be implemented in a way that is both natural and efficient. Binding constructs Macros can also be used to introduce new binding constructs. The most well-known example is the transformation of let into the application of a function to a set of arguments. Felleisen conjectures that these three categories make up the primary legitimate uses of macros in such a system. Others have proposed alternative uses of macros, such as anaphoric macros in macro systems that are unhygienic or allow selective unhygienic transformation. 
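The accidental variable capture that hygienic expansion prevents can be sketched with the unhygienic C preprocessor described earlier; the REPEAT macro and the variable names below are purely illustrative and are not drawn from any of the macro systems named above:

#include <iostream>

// Illustrative, unhygienic macro: it introduces its own loop variable, n.
#define REPEAT(count, stmt) for (int n = 0; n < (count); ++n) { stmt; }

int main() {
    REPEAT(3, std::cout << "hello\n");   // prints "hello" three times

    int n = 3;                           // the caller's own variable happens to be named n
    REPEAT(n, std::cout << "world\n");   // expands to: for (int n = 0; n < (n); ++n) ...
                                         // the macro's n shadows the caller's n, the
                                         // condition becomes 0 < 0, and nothing prints
    return 0;
}

A hygienic macro system automatically renames identifiers such as the n introduced by the macro, so a collision like this cannot occur; an anaphoric macro, by contrast, makes such a binding visible to the caller on purpose.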
The interaction of macros and other language features has been a productive area of research. For example, components and modules are useful for large-scale programming, but the interaction of macros and these other constructs must be defined for their use together. Module and component-systems that can interact with macros have been proposed for Scheme and other languages with macros. For example, the Racket language extends the notion of a macro system to a syntactic tower, where macros can be written in languages including macros, using hygiene to ensure that syntactic layers are distinct and allowing modules to export macros to other modules. == Macros for machine-independent software == Macros are normally used to map a short string (macro invocation) to a longer sequence of instructions. Another, less common, use of macros is to do the reverse: to map a sequence of instructions to a macro string. This was the approach taken by the STAGE2 Mobile Programming System, which used a rudimentary macro compiler (called SIMCMP) to map the specific instruction set of a given computer into machine-independent macros. Applications (notably compilers) written in these machine-independent macros can then be run without change on any computer equipped with the rudimentary macro compiler. The first application run in such a context is a more sophisticated and powerful macro compiler, written in the machine-independent macro language. This macro compiler is applied to itself, in a bootstrap fashion, to produce a compiled and much more efficient version of itself. The advantage of this approach is that complex applications can be ported from one computer to a very different computer with very little effort (for each target machine architecture, just the writing of the rudimentary macro compiler). The advent of modern programming languages, notably C, for which compilers are available on virtually all computers, has rendered such an approach superfluous. This was, however, one of the first instances (if not the first) of compiler bootstrapping. == Assembly language == While macro instructions can be defined by a programmer for any set of native assembler program instructions, typically macros are associated with macro libraries delivered with the operating system allowing access to operating system functions such as peripheral access by access methods (including macros such as OPEN, CLOSE, READ and WRITE) operating system functions such as ATTACH, WAIT and POST for subtask creation and synchronization. Typically such macros expand into executable code, e.g., for the EXIT macroinstruction, a list of define constant instructions, e.g., for the DCB macro—DTF (Define The File) for DOS—or a combination of code and constants, with the details of the expansion depending on the parameters of the macro instruction (such as a reference to a file and a data area for a READ instruction); the executable code often terminated in either a branch and link register instruction to call a routine, or a supervisor call instruction to call an operating system function directly. Generating a Stage 2 job stream for system generation in, e.g., OS/360. Unlike typical macros, sysgen stage 1 macros do not generate data or code to be loaded into storage, but rather use the PUNCH statement to output JCL and associated data. 
In older operating systems such as those used on IBM mainframes, full operating system functionality was only available to assembler language programs, not to high level language programs (unless assembly language subroutines were used, of course), as the standard macro instructions did not always have counterparts in routines available to high-level languages. == History == In the mid-1950s, when assembly language programming was the main way to program a computer, macro instruction features were developed to reduce source code (by generating multiple assembly statements from each macro instruction) and to enforce coding conventions (e.g. specifying input/output commands in standard ways). A macro instruction embedded in the otherwise assembly source code would be processed by a macro compiler, a preprocessor to the assembler, to replace the macro with one or more assembly instructions. The resulting code, pure assembly, would be translated to machine code by the assembler. Two of the earliest programming installations to develop macro languages for the IBM 705 computer were at Dow Chemical Corp. in Delaware and the Air Material Command, Ballistics Missile Logistics Office in California. Some consider macro instructions as an intermediate step between assembly language programming and the high-level programming languages that followed, such as FORTRAN and COBOL. By the late 1950s the macro language was followed by the Macro Assemblers. This was a combination of both where one program served both functions, that of a macro pre-processor and an assembler in the same package. Early examples are FORTRAN Assembly Program (FAP) and Macro Assembly Program (IBMAP) on the IBM 709, 7094, 7040 and 7044, and Autocoder on the 7070/7072/7074. In 1959, Douglas E. Eastwood and Douglas McIlroy of Bell Labs introduced conditional and recursive macros into the popular SAP assembler, creating what is known as Macro SAP. McIlroy's 1960 paper was seminal in the area of extending any (including high-level) programming languages through macro processors. Macro Assemblers allowed assembly language programmers to implement their own macro-language and allowed limited portability of code between two machines running the same CPU but different operating systems, for example, early versions of MS-DOS and CP/M-86. The macro library would need to be written for each target machine but not the overall assembly language program. Note that more powerful macro assemblers allowed use of conditional assembly constructs in macro instructions that could generate different code on different machines or different operating systems, reducing the need for multiple libraries. In the 1980s and early 1990s, desktop PCs were only running at a few MHz and assembly language routines were commonly used to speed up programs written in C, Fortran, Pascal and others. These languages, at the time, used different calling conventions. Macros could be used to interface routines written in assembly language to the front end of applications written in almost any language. Again, the basic assembly language code remained the same, only the macro libraries needed to be written for each target language. In modern operating systems such as Unix and its derivatives, operating system access is provided through subroutines, usually provided by dynamic libraries. High-level languages such as C offer comprehensive access to operating system functions, obviating the need for assembler language programs for such functionality. 
Moreover, the standard libraries of several newer programming languages, such as Go, actively discourage the use of syscalls when not strictly necessary, in favor of platform-agnostic libraries, to improve portability and security. == See also == Anaphoric macro – type of programming macro Assembly language § Macros – backstory of macros Compound operator – basic programming language construct Extensible programming – mechanisms for extending the language, compiler and runtime environment Fused operation – basic programming language construct Hygienic macro – macros whose expansion is guaranteed not to cause the capture of identifiers Macro and security Programming by demonstration – technique for teaching a computer or a robot new behaviors String interpolation – replacing placeholders in a string with values == References == == External links == How to write Macro Instructions (Rochester Institute of Technology, professor's PowerPoint)
Wikipedia/Macro_(computer_science)
In software engineering, a software design pattern or design pattern is a general, reusable solution to a commonly occurring problem in many contexts in software design. A design pattern is not a rigid structure to be transplanted directly into source code. Rather, it is a description or a template for solving a particular type of problem that can be deployed in many different situations. Design patterns can be viewed as formalized best practices that the programmer may use to solve common problems when designing a software application or system. Object-oriented design patterns typically show relationships and interactions between classes or objects, without specifying the final application classes or objects that are involved. Patterns that imply mutable state may be unsuited for functional programming languages. Some patterns can be rendered unnecessary in languages that have built-in support for solving the problem they are trying to solve, and object-oriented patterns are not necessarily suitable for non-object-oriented languages. Design patterns may be viewed as a structured approach to computer programming intermediate between the levels of a programming paradigm and a concrete algorithm. == History == Patterns originated as an architectural concept by Christopher Alexander as early as 1977 in A Pattern Language (cf. his article, "The Pattern of Streets," JOURNAL OF THE AIP, September, 1966, Vol. 32, No. 5, pp. 273–278). In 1987, Kent Beck and Ward Cunningham began experimenting with the idea of applying patterns to programming – specifically pattern languages – and presented their results at the OOPSLA conference that year. In the following years, Beck, Cunningham and others followed up on this work. Design patterns gained popularity in computer science after the book Design Patterns: Elements of Reusable Object-Oriented Software was published in 1994 by the so-called "Gang of Four" (Erich Gamma, Richard Helm, Ralph Johnson and John Vlissides), which is frequently abbreviated as "GoF". That same year, the first Pattern Languages of Programming Conference was held, and the following year the Portland Pattern Repository was set up for documentation of design patterns. The scope of the term remains a matter of dispute. Notable books in the design pattern genre include: Gamma, Erich; Helm, Richard; Johnson, Ralph; Vlissides, John (1994). Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley. ISBN 978-0-201-63361-0. Brinch Hansen, Per (1995). Studies in Computational Science: Parallel Programming Paradigms. Prentice Hall. ISBN 978-0-13-439324-7. Buschmann, Frank; Meunier, Regine; Rohnert, Hans; Sommerlad, Peter (1996). Pattern-Oriented Software Architecture, Volume 1: A System of Patterns. John Wiley & Sons. ISBN 978-0-471-95869-7. Beck, Kent (1997). Smalltalk Best Practice Patterns. Prentice Hall. ISBN 978-0134769042. Schmidt, Douglas C.; Stal, Michael; Rohnert, Hans; Buschmann, Frank (2000). Pattern-Oriented Software Architecture, Volume 2: Patterns for Concurrent and Networked Objects. John Wiley & Sons. ISBN 978-0-471-60695-6. Fowler, Martin (2002). Patterns of Enterprise Application Architecture. Addison-Wesley. ISBN 978-0-321-12742-6. Hohpe, Gregor; Woolf, Bobby (2003). Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions. Addison-Wesley. ISBN 978-0-321-20068-6. Freeman, Eric T.; Robson, Elisabeth; Bates, Bert; Sierra, Kathy (2004). Head First Design Patterns. O'Reilly Media. ISBN 978-0-596-00712-6. Larman, Craig (2004). 
Applying UML and Patterns (3rd Ed, 1st Ed 1995). Pearson. ISBN 978-0131489066. Although design patterns have been applied practically for a long time, formalization of the concept of design patterns languished for several years. == Practice == Design patterns can speed up the development process by providing proven development paradigms. Effective software design requires considering issues that may not become apparent until later in the implementation. Freshly written code can often have hidden, subtle issues that take time to be detected, issues that can sometimes cause major problems down the road. Reusing design patterns can help to prevent such issues, and enhance code readability for those familiar with the patterns. Individual software design techniques, however, tend to be tied to specific problems and are difficult to apply to a broader range of problems; design patterns provide general solutions, documented in a format that does not require specifics tied to a particular problem. In 1996, Christopher Alexander was invited to give a keynote speech at the OOPSLA convention. Here he reflected on how his work on Patterns in Architecture had developed and his hopes for how the Software Design community could help Architecture extend Patterns to create living structures that use generative schemes that are more like computer code. == Motif == A pattern describes a design motif, a.k.a. prototypical micro-architecture, as a set of program constituents (e.g., classes, methods...) and their relationships. A developer adapts the motif to their codebase to solve the problem described by the pattern. The resulting code has structure and organization similar to the chosen motif. == Domain-specific patterns == Efforts have also been made to codify design patterns in particular domains, including the use of existing design patterns as well as domain-specific design patterns. Examples include user interface design patterns, information visualization, secure design, "secure usability", Web design and business model design. The annual Pattern Languages of Programming Conference proceedings include many examples of domain-specific patterns. == Object-oriented programming == Object-oriented design patterns typically show relationships and interactions between classes or objects, without specifying the final application classes or objects that are involved. Patterns that imply mutable state may be unsuited for functional programming languages. Some patterns can be rendered unnecessary in languages that have built-in support for solving the problem they are trying to solve, and object-oriented patterns are not necessarily suitable for non-object-oriented languages. == Examples == Design patterns can be organized into groups based on what kind of problem they solve. Creational patterns create objects. Structural patterns organize classes and objects to form larger structures that provide new functionality. Behavioral patterns describe collaboration between objects. === Creational patterns === === Structural patterns === === Behavioral patterns === === Concurrency patterns === == Documentation == The documentation for a design pattern describes the context in which the pattern is used, the forces within the context that the pattern seeks to resolve, and the suggested solution. There is no single, standard format for documenting design patterns. Rather, a variety of different formats have been used by different pattern authors.
However, according to Martin Fowler, certain pattern forms have become more well-known than others, and consequently become common starting points for new pattern-writing efforts. One example of a commonly used documentation format is the one used by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides in their book Design Patterns. It contains the following sections: Pattern Name and Classification: A descriptive and unique name that helps in identifying and referring to the pattern. Intent: A description of the goal behind the pattern and the reason for using it. Also Known As: Other names for the pattern. Motivation (Forces): A scenario consisting of a problem and a context in which this pattern can be used. Applicability: Situations in which this pattern is usable; the context for the pattern. Structure: A graphical representation of the pattern. Class diagrams and Interaction diagrams may be used for this purpose. Participants: A listing of the classes and objects used in the pattern and their roles in the design. Collaboration: A description of how classes and objects used in the pattern interact with each other. Consequences: A description of the results, side effects, and trade offs caused by using the pattern. Implementation: A description of an implementation of the pattern; the solution part of the pattern. Sample Code: An illustration of how the pattern can be used in a programming language. Known Uses: Examples of real usages of the pattern. Related Patterns: Other patterns that have some relationship with the pattern; discussion of the differences between the pattern and similar patterns. == Criticism == Some suggest that design patterns may be a sign that features are missing in a given programming language (Java or C++ for instance). Peter Norvig demonstrates that 16 out of the 23 patterns in the Design Patterns book (which is primarily focused on C++) are simplified or eliminated (via direct language support) in Lisp or Dylan. Related observations were made by Hannemann and Kiczales who implemented several of the 23 design patterns using an aspect-oriented programming language (AspectJ) and showed that code-level dependencies were removed from the implementations of 17 of the 23 design patterns and that aspect-oriented programming could simplify the implementations of design patterns. See also Paul Graham's essay "Revenge of the Nerds". Inappropriate use of patterns may unnecessarily increase complexity. FizzBuzzEnterpriseEdition offers a humorous example of over-complexity introduced by design patterns. By definition, a pattern must be programmed anew into each application that uses it. Since some authors see this as a step backward from software reuse as provided by components, researchers have worked to turn patterns into components. Meyer and Arnout were able to provide full or partial componentization of two-thirds of the patterns they attempted. In order to achieve flexibility, design patterns may introduce additional levels of indirection, which may complicate the resulting design and decrease runtime performance. == Relationship to other topics == Software design patterns offer finer granularity compared to software architecture patterns and software architecture styles, as design patterns focus on solving detailed, low-level design problems within individual components or subsystems. Examples include Singleton, Factory Method, and Observer. 
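As a concrete illustration of that finer granularity, the following minimal C++ sketch of the Observer pattern (a behavioral pattern) uses conventional, illustrative names (Subject, Observer, Logger) rather than anything prescribed by the sources above:

#include <iostream>
#include <string>
#include <vector>

// Minimal sketch of Observer: a Subject notifies registered Observers when
// something happens; the Subject depends only on the abstract interface.
class Observer {
public:
    virtual ~Observer() = default;
    virtual void update(const std::string& event) = 0;
};

class Subject {
public:
    void attach(Observer* o) { observers_.push_back(o); }
    void notify(const std::string& event) {
        for (Observer* o : observers_) o->update(event);
    }
private:
    std::vector<Observer*> observers_;   // non-owning pointers, for brevity
};

class Logger : public Observer {
public:
    void update(const std::string& event) override {
        std::cout << "logged: " << event << "\n";
    }
};

int main() {
    Subject subject;
    Logger logger;
    subject.attach(&logger);
    subject.notify("state changed");   // prints "logged: state changed"
    return 0;
}

The pattern concerns the collaboration between a handful of classes inside one component, which is the kind of low-level design problem that distinguishes a design pattern from the system-level concerns of architecture patterns and styles discussed next.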
Software Architecture Pattern refers to a reusable, proven solution to a recurring problem at the system level, addressing concerns related to the overall structure, component interactions, and quality attributes of the system. Software architecture patterns operate at a higher level of abstraction than design patterns, solving broader system-level challenges. While these patterns typically affect system-level concerns, the distinction between architectural patterns and architectural styles can sometimes be blurry. Examples include Circuit Breaker. Software Architecture Style refers to a high-level structural organization that defines the overall system organization, specifying how components are organized, how they interact, and the constraints on those interactions. Architecture styles typically include a vocabulary of component and connector types, as well as semantic models for interpreting the system's properties. These styles represent the most coarse-grained level of system organization. Examples include Layered Architecture, Microservices, and Event-Driven Architecture. == See also == == References == == Further reading ==
Wikipedia/Software_design_pattern
In computer science, data (treated as singular, plural, or as a mass noun) is any sequence of one or more symbols; datum is a single symbol of data. Data requires interpretation to become information. Digital data is data that is represented using the binary number system of ones (1) and zeros (0), instead of analog representation. In modern (post-1960) computer systems, all data is digital. Data exists in three states: data at rest, data in transit and data in use. Data within a computer, in most cases, moves as parallel data. Data moving to or from a computer, in most cases, moves as serial data. Data sourced from an analog device, such as a temperature sensor, may be converted to digital using an analog-to-digital converter. Data representing quantities, characters, or symbols on which operations are performed by a computer are stored and recorded on magnetic, optical, electronic, or mechanical recording media, and transmitted in the form of digital electrical or optical signals. Data pass in and out of computers via peripheral devices. Physical computer memory elements consist of an address and a byte/word of data storage. Digital data are often stored in relational databases, like tables or SQL databases, and can generally be represented as abstract key/value pairs. Data can be organized in many different types of data structures, including arrays, graphs, and objects. Data structures can store data of many different types, including numbers, strings and even other data structures. == Characteristics == Metadata helps translate data to information. Metadata is data about the data. Metadata may be implied, specified or given. Data relating to physical events or processes will have a temporal component. This temporal component may be implied. This is the case when a device such as a temperature logger receives data from a temperature sensor. When the temperature is received it is assumed that the data has a temporal reference of now. So the device records the date, time and temperature together. When the data logger communicates temperatures, it must also report the date and time as metadata for each temperature reading. Fundamentally, computers follow a sequence of instructions they are given in the form of data. A set of instructions to perform a given task (or tasks) is called a program. A program is data in the form of coded instructions to control the operation of a computer or other machine. In the nominal case, the program, as executed by the computer, will consist of machine code. The elements of storage manipulated by the program, but not actually executed by the central processing unit (CPU), are also data. At its most essential, a single datum is a value stored at a specific location. Therefore, it is possible for computer programs to operate on other computer programs, by manipulating their programmatic data. To store data bytes in a file, they have to be serialized in a file format. Typically, programs are stored in special file types, different from those used for other data. Executable files contain programs; all other files are also data files. However, executable files may also contain data used by the program which is built into the program. In particular, some executable files have a data segment, which nominally contains constants and initial values for variables, both of which can be considered data. The line between program and data can become blurry. An interpreter, for example, is a program. 
The input data to an interpreter is itself a program, just not one expressed in native machine language. In many cases, the interpreted program will be a human-readable text file, which is manipulated with a text editor program. Metaprogramming similarly involves programs manipulating other programs as data. Programs like compilers, linkers, debuggers, program updaters, virus scanners and such use other programs as their data. For example, a user might first instruct the operating system to load a word processor program from one file, and then use the running program to open and edit a document stored in another file. In this example, the document would be considered data. If the word processor also features a spell checker, then the dictionary (word list) for the spell checker would also be considered data. The algorithms used by the spell checker to suggest corrections would be either machine code data or text in some interpretable programming language. In an alternate usage, binary files (which are not human-readable) are sometimes called data as distinguished from human-readable text. The total amount of digital data in 2007 was estimated to be 281 billion gigabytes (281 exabytes). == Data keys and values, structures and persistence == Keys in data provide the context for values. Regardless of the structure of data, there is always a key component present. Keys in data and data-structures are essential for giving meaning to data values. Without a key that is directly or indirectly associated with a value, or collection of values in a structure, the values become meaningless and cease to be data. That is to say, there has to be a key component linked to a value component in order for it to be considered data. Data can be represented in computers in multiple ways, as per the following examples: === RAM === Random access memory (RAM) holds data that the CPU has direct access to. A CPU may only manipulate data within its processor registers or memory. This is as opposed to data storage, where the CPU must direct the transfer of data between the storage device (disk, tape...) and memory. RAM is an array of linear contiguous locations that a processor may read or write by providing an address for the read or write operation. The processor may operate on any location in memory at any time in any order. In RAM the smallest element of data is the binary bit. The capabilities and limitations of accessing RAM are processor specific. In general main memory is arranged as an array of locations beginning at address 0 (hexadecimal 0). Each location can store usually 8 or 32 bits depending on the computer architecture. === Keys === Data keys need not be a direct hardware address in memory. Indirect, abstract and logical keys codes can be stored in association with values to form a data structure. Data structures have predetermined offsets (or links or paths) from the start of the structure, in which data values are stored. Therefore, the data key consists of the key to the structure plus the offset (or links or paths) into the structure. When such a structure is repeated, storing variations of the data values and the data keys within the same repeating structure, the result can be considered to resemble a table, in which each element of the repeating structure is considered to be a column and each repetition of the structure is considered as a row of the table. In such an organization of data, the data key is usually a value in one (or a composite of the values in several) of the columns. 
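A minimal C++ sketch of this idea, with illustrative field names (Reading, sensor_id, temperature) that are assumptions rather than anything from the text: each field of a repeating structure sits at a fixed offset from the start of the structure, each repetition acts as a row of a table, and the key is a value in one of the columns.

#include <cstddef>
#include <iostream>
#include <vector>

// A repeating structure: each field ("column") sits at a fixed offset
// from the start of the structure.
struct Reading {
    int   sensor_id;     // key column
    float temperature;   // value column
};

int main() {
    // offsetof shows the fixed offset of each column; the full key to a value
    // is the address of its row plus the offset of its field.
    std::cout << "sensor_id offset: "   << offsetof(Reading, sensor_id)   << "\n";
    std::cout << "temperature offset: " << offsetof(Reading, temperature) << "\n";

    // Repeating the structure yields something table-like: one row per element.
    std::vector<Reading> table = {{7, 21.5f}, {9, 19.0f}, {7, 22.1f}};

    // The data key is usually a value in one of the columns (here, sensor_id).
    for (const Reading& row : table)
        if (row.sensor_id == 7)
            std::cout << "sensor 7: " << row.temperature << "\n";
    return 0;
}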
=== Organised recurring data structures === The tabular view of repeating data structures is only one of many possibilities. Repeating data structures can be organised hierarchically, such that nodes are linked to each other in a cascade of parent-child relationships. Values and potentially more complex data-structures are linked to the nodes. Thus the nodal hierarchy provides the key for addressing the data structures associated with the nodes. This representation can be thought of as an inverted tree. Modern computer operating system file systems are a common example; and XML is another. === Sorted or ordered data === Data has some inherent features when it is sorted on a key. All the values for subsets of the key appear together. When passing sequentially through groups of the data with the same key, or a subset of the key changes, this is referred to in data processing circles as a break, or a control break. It particularly facilitates the aggregation of data values on subsets of a key. === Peripheral storage === Until the advent of bulk non-volatile memory like flash, persistent data storage was traditionally achieved by writing the data to external block devices like magnetic tape and disk drives. These devices typically seek to a location on the magnetic media and then read or write blocks of data of a predetermined size. In this case, the seek location on the media, is the data key and the blocks are the data values. Early used raw disk data file-systems or disc operating systems reserved contiguous blocks on the disc drive for data files. In those systems, the files could be filled up, running out of data space before all the data had been written to them. Thus much unused data space was reserved unproductively to ensure adequate free space for each file. Later file-systems introduced partitions. They reserved blocks of disc data space for partitions and used the allocated blocks more economically, by dynamically assigning blocks of a partition to a file as needed. To achieve this, the file system had to keep track of which blocks were used or unused by data files in a catalog or file allocation table. Though this made better use of the disc data space, it resulted in fragmentation of files across the disc, and a concomitant performance overhead due additional seek time to read the data. Modern file systems reorganize fragmented files dynamically to optimize file access times. Further developments in file systems resulted in virtualization of disc drives i.e. where a logical drive can be defined as partitions from a number of physical drives. === Indexed data === Retrieving a small subset of data from a much larger set may imply inefficiently searching through the data sequentially. Indexes are a way to copy out keys and location addresses from data structures in files, tables and data sets, then organize them using inverted tree structures to reduce the time taken to retrieve a subset of the original data. In order to do this, the key of the subset of data to be retrieved must be known before retrieval begins. The most popular indexes are the B-tree and the dynamic hash key indexing methods. Indexing is overhead for filing and retrieving data. There are other ways of organizing indexes, e.g. sorting the keys and using a binary search algorithm. 
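A deliberately simplified C++ sketch of such an index, using illustrative names (Record, IndexEntry): the keys and locations are copied out of the data, sorted, and then searched with a binary search instead of scanning the data sequentially. It stands in for the B-tree and hashed indexes mentioned above, which serve the same purpose with better scaling.

#include <algorithm>
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

// A record in the underlying data set; the field names are illustrative only.
struct Record {
    int         key;
    std::string value;
};

// An index entry copies out the key together with the location (here, the
// position in the vector) of the record it refers to.
struct IndexEntry {
    int         key;
    std::size_t location;
};

int main() {
    std::vector<Record> data = {{42, "b"}, {7, "a"}, {99, "c"}};

    // Build the index: copy out keys and locations, then sort by key so that
    // the index can be binary-searched.
    std::vector<IndexEntry> index;
    for (std::size_t i = 0; i < data.size(); ++i)
        index.push_back({data[i].key, i});
    std::sort(index.begin(), index.end(),
              [](const IndexEntry& a, const IndexEntry& b) { return a.key < b.key; });

    // Retrieve the record with key 42: binary-search the index, then follow
    // the stored location back into the original data.
    int wanted = 42;
    auto it = std::lower_bound(index.begin(), index.end(), wanted,
                               [](const IndexEntry& e, int k) { return e.key < k; });
    if (it != index.end() && it->key == wanted)
        std::cout << data[it->location].value << "\n";   // prints "b"
    return 0;
}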
=== Abstraction and indirection === Object-oriented programming uses two basic concepts for understanding data and software: The taxonomic rank-structure of classes, which is an example of a hierarchical data structure; and at run time, the creation of references to in-memory data-structures of objects that have been instantiated from a class library. It is only after instantiation that an object of a specified class exists. After an object's reference is cleared, the object also ceases to exist. The memory locations where the object's data was stored are garbage and are reclassified as unused memory available for reuse. === Database data === The advent of databases introduced a further layer of abstraction for persistent data storage. Databases use metadata, and a structured query language protocol between client and server systems, communicating over a computer network, using a two phase commit logging system to ensure transactional completeness, when saving data. === Parallel distributed data processing === Modern scalable and high-performance data persistence technologies, such as Apache Hadoop, rely on massively parallel distributed data processing across many commodity computers on a high bandwidth network. In such systems, the data is distributed across multiple computers and therefore any particular computer in the system must be represented in the key of the data, either directly, or indirectly. This enables the differentiation between two identical sets of data, each being processed on a different computer at the same time. == See also == == References ==
Wikipedia/Data_(computer_science)
A modeling language is any artificial language that can be used to express data, information or knowledge or systems in a structure that is defined by a consistent set of rules. The rules are used for interpretation of the meaning of components in the structure of a programming language. == Overview == A modeling language can be graphical or textual. Graphical modeling languages use a diagram technique with named symbols that represent concepts and lines that connect the symbols and represent relationships and various other graphical notation to represent constraints. Textual modeling languages may use standardized keywords accompanied by parameters or natural language terms and phrases to make computer-interpretable expressions. An example of a graphical modeling language and a corresponding textual modeling language is EXPRESS. Not all modeling languages are executable, and for those that are, the use of them doesn't necessarily mean that programmers are no longer required. On the contrary, executable modeling languages are intended to amplify the productivity of skilled programmers, so that they can address more challenging problems, such as parallel computing and distributed systems. A large number of modeling languages appear in the literature. == Type of modeling languages == === Graphical types === Example of graphical modeling languages in the field of computer science, project management and systems engineering: Behavior Trees are a formal, graphical modeling language used primarily in systems and software engineering. Commonly used to unambiguously represent the hundreds or even thousands of natural language requirements that are typically used to express the stakeholder needs for a large-scale software-integrated system. Business Process Modeling Notation (BPMN, and the XML form BPML) is an example of a Process Modeling language. C-K theory consists of a modeling language for design processes. DRAKON is a general-purpose algorithmic modeling language for specifying software-intensive systems, a schematic representation of an algorithm or a stepwise process, and a family of programming languages. EXPRESS and EXPRESS-G (ISO 10303-11) is an international standard general-purpose data modeling language. Extended Enterprise Modeling Language (EEML) is commonly used for business process modeling across a number of layers. Flowchart is a schematic representation of an algorithm or a stepwise process. Fundamental Modeling Concepts (FMC) modeling language for software-intensive systems. IDEF is a family of modeling languages, which include IDEF0 for functional modeling, IDEF1X for information modeling, IDEF3 for business process modeling, IDEF4 for Object-Oriented Design and IDEF5 for modeling ontologies. Jackson Structured Programming (JSP) is a method for structured programming based on correspondences between data stream structure and program structure. LePUS3 is an object-oriented visual Design Description Language and a formal specification language that is suitable primarily for modeling large object-oriented (Java, C++, C#) programs and design patterns. Lifecycle Modeling Language is an open-standard language for systems engineering that supports the full system lifecycle: conceptual, utilization, support and retirement stages. Object-Role Modeling (ORM) in the field of software engineering is a method for conceptual modeling, and can be used as a tool for information and rules analysis. 
Petri nets use variations on exactly one diagramming technique and topology, namely the bipartite graph. The simplicity of its basic user interface easily enabled extensive tool support over the years, particularly in the areas of model checking, graphically oriented simulation, and software verification. Southbeach Notation is a visual modeling language used to describe situations in terms of agents that are considered useful or harmful from the modeler's perspective. The notation shows how the agents interact with each other and whether this interaction improves or worsens the situation. Specification and Description Language (SDL) is a specification language targeted at the unambiguous specification and description of the behavior of reactive and distributed systems. SysML is a Domain-Specific Modeling language for systems engineering that is defined as a UML profile (customization). Unified Modeling Language (UML) is a general-purpose modeling language that is an industry standard for specifying software-intensive systems. UML 2.0, the current version, supports thirteen different diagram techniques, and has widespread tool support. FLINT — language which allows a high-level description of normative systems. Service-oriented modeling framework (SOMF) is a holistic language for designing enterprise and application level architecture models in the space of enterprise architecture, virtualization, service-oriented architecture (SOA), cloud computing, and more. Architecture description language (ADL) is a language used to describe and represent the systems architecture of a system. Architecture Analysis & Design Language (AADL) is a modeling language that supports early and repeated analyses of a system's architecture with respect to performance-critical properties through an extendable notation, a tool framework, and precisely defined semantics. Examples of graphical modeling languages in other fields of science. EAST-ADL is a Domain-Specific Modeling language dedicated to automotive system design. Energy Systems Language (ESL), a language that aims to model ecological energetics & global economics. IEC 61499 defines Domain-Specific Modeling language dedicated to distribute industrial process measurement and control systems. === Textual types === Information models can also be expressed in formalized natural languages, such as Gellish. Gellish has natural language variants such as Gellish Formal English and Gellish Formal Dutch (Gellish Formeel Nederlands), etc. Gellish Formal English is an information representation language or semantic modeling language that is defined in the Gellish English Dictionary-Taxonomy, which has the form of a Taxonomy-Ontology (similarly for Dutch). Gellish Formal English is not only suitable to express knowledge, requirements and dictionaries, taxonomies and ontologies, but also information about individual things. All that information is expressed in one language and therefore it can all be integrated, independent of the question whether it is stored in central or distributed or in federated databases. Information models in Gellish Formal English consists of collections of Gellish Formal English expressions, that use natural language terms and formalized phrases. 
For example, a geographic information model might consist of a number of Gellish Formal English expressions, such as: - the Eiffel tower <is located in> Paris - Paris <is classified as a> city whereas information requirements and knowledge can be expressed for example as follows: - tower <shall be located in a> geographical area - city <is a kind of> geographical area Such Gellish Formal English expressions use names of concepts (such as "city") and phrases that represent relation types (such as ⟨is located in⟩ and ⟨is classified as a⟩) that should be selected from the Gellish English Dictionary-Taxonomy (or of your own domain dictionary). The Gellish English Dictionary-Taxonomy enables the creation of semantically rich information models, because the dictionary contains more than 600 standard relation types and contains definitions of more than 40000 concepts. An information model in Gellish can express facts or make statements, queries and answers. === More specific types === In the field of computer science recently more specific types of modeling languages have emerged. ==== Algebraic ==== Algebraic Modeling Languages (AML) are high-level programming languages for describing and solving high complexity problems for large scale mathematical computation (i.e. large scale optimization type problems). One particular advantage of AMLs like AIMMS, AMPL, GAMS, Gekko, Mosel, OPL, MiniZinc, and OptimJ is the similarity of its syntax to the mathematical notation of optimization problems. This allows for a very concise and readable definition of problems in the domain of optimization, which is supported by certain language elements like sets, indices, algebraic expressions, powerful sparse index and data handling variables, constraints with arbitrary names. The algebraic formulation of a model does not contain any hints how to process it. ==== Behavioral ==== Behavioral languages are designed to describe the observable behavior of complex systems consisting of components that execute concurrently. These languages focus on the description of key concepts such as: concurrency, nondeterminism, synchronization, and communication. The semantic foundations of Behavioral languages are process calculus or process algebra. ==== Discipline-specific ==== A discipline-specific modeling (DspM) language is focused on deliverables affiliated with a specific software development life cycle stage. Therefore, such language offers a distinct vocabulary, syntax, and notation for each stage, such as discovery, analysis, design, architecture, contraction, etc. For example, for the analysis phase of a project, the modeler employs specific analysis notation to deliver an analysis proposition diagram. During the design phase, however, logical design notation is used to depict the relationship between software entities. In addition, the discipline-specific modeling language best practices does not preclude practitioners from combining the various notations in a single diagram. ==== Domain-specific ==== Domain-specific modeling (DSM) is a software engineering methodology for designing and developing systems, most often IT systems such as computer software. It involves the systematic use of a graphical domain-specific language (DSL) to represent the various facets of a system. DSM languages tend to support higher-level abstractions than General-purpose modeling languages, so they require less effort and fewer low-level details to specify a given system. 
==== Framework-specific ==== A framework-specific modeling language (FSML) is a kind of domain-specific modeling language which is designed for an object-oriented application framework. FSMLs define framework-provided abstractions as FSML concepts and decompose the abstractions into features. The features represent implementation steps or choices. A FSML concept can be configured by selecting features and providing values for features. Such a concept configuration represents how the concept should be implemented in the code. In other words, concept configuration describes how the framework should be completed in order to create the implementation of the concept. ==== Information and knowledge modeling ==== Linked data and ontology engineering require 'host languages' to represent entities and the relations between them, constraints between the properties of entities and relations, and metadata attributes. JSON-LD and RDF are two major (and semantically almost equivalent) languages in this context, primarily because they support statement reification and contextualisation which are essential properties to support the higher-order logic needed to reason about models. Model transformation is a common example of such reasoning. ==== Object-oriented ==== Object modeling languages are modeling languages based on a standardized set of symbols and ways of arranging them to model (part of) an object oriented software design or system design. Some organizations use them extensively in combination with a software development methodology to progress from initial specification to an implementation plan and to communicate that plan to an entire team of developers and stakeholders. Because a modeling language is visual and at a higher-level of abstraction than code, using models encourages the generation of a shared vision that may prevent problems of differing interpretation later in development. Often software modeling tools are used to construct these models, which may then be capable of automatic translation to code. ==== Virtual reality ==== Virtual Reality Modeling Language (VRML), before 1995 known as the Virtual Reality Markup Language is a standard file format for representing 3-dimensional (3D) interactive vector graphics, designed particularly with the World Wide Web in mind. ==== Others ==== Architecture Description Language Face Modeling Language Generative Modelling Language Java Modeling Language Promela Rebeca Modeling Language Service Modeling Language Web Services Modeling Language X3D == Applications == Various kinds of modeling languages are applied in different disciplines, including computer science, information management, business process modeling, software engineering, and systems engineering. Modeling languages can be used to specify: system requirements, structures and behaviors. Modeling languages are intended to be used to precisely specify systems so that stakeholders (e.g., customers, operators, analysts, designers) can better understand the system being modeled. The more mature modeling languages are precise, consistent and executable. 
Informal diagramming techniques applied with drawing tools are expected to produce useful pictorial representations of system requirements, structures and behaviors, which can be useful for communication, design, and problem solving but cannot be used programmatically.: 539  Executable modeling languages applied with proper tool support, however, are expected to automate system verification and validation, simulation and code generation from the same representations. == Quality == A review of modelling languages is essential to be able to assign which languages are appropriate for different modelling settings. In the term settings we include stakeholders, domain and the knowledge connected. Assessing the language quality is a means that aims to achieve better models. === Framework for evaluation === Here language quality is stated in accordance with the SEQUAL framework for quality of models developed by Krogstie, Sindre and Lindland (2003), since this is a framework that connects the language quality to a framework for general model quality. Five areas are used in this framework to describe language quality and these are supposed to express both the conceptual as well as the visual notation of the language. We will not go into a thorough explanation of the underlying quality framework of models but concentrate on the areas used to explain the language quality framework. ==== Domain appropriateness ==== The framework states the ability to represent the domain as domain appropriateness. The statement appropriateness can be a bit vague, but in this particular context it means able to express. You should ideally only be able to express things that are in the domain but be powerful enough to include everything that is in the domain. This requirement might seem a bit strict, but the aim is to get a visually expressed model which includes everything relevant to the domain and excludes everything not appropriate for the domain. To achieve this, the language has to have a good distinction of which notations and syntaxes that are advantageous to present. ==== Participant appropriateness ==== To evaluate the participant appropriateness we try to identify how well the language expresses the knowledge held by the stakeholders. This involves challenges since a stakeholder's knowledge is subjective. The knowledge of the stakeholder is both tacit and explicit. Both types of knowledge are of dynamic character. In this framework only the explicit type of knowledge is taken into account. The language should to a large extent express all the explicit knowledge of the stakeholders relevant to the domain. ==== Modeller appropriateness ==== Last paragraph stated that knowledge of the stakeholders should be presented in a good way. In addition it is imperative that the language should be able to express all possible explicit knowledge of the stakeholders. No knowledge should be left unexpressed due to lacks in the language. ==== Comprehensibility appropriateness ==== Comprehensibility appropriateness makes sure that the social actors understand the model due to a consistent use of the language. To achieve this the framework includes a set of criteria. The general importance that these express is that the language should be flexible, easy to organize and easy to distinguish different parts of the language internally as well as from other languages. In addition to this, the goal should be as simple as possible and that each symbol in the language has a unique representation. 
This is also connected to the structure of the development requirements. ==== Tool appropriateness ==== To ensure that the domain actually modelled is usable for analyzing and further processing, the language has to ensure that it is possible to reason in an automatic way. To achieve this it has to include formal syntax and semantics. Another advantage of formalizing is the ability to discover errors at an early stage. The language best fitted for the technical actors is not always the same as the one best fitted for the social actors. ==== Organizational appropriateness ==== The language used should be appropriate for the organizational context, e.g. standardized within the organization, or supported by tools that are chosen as standard in the organization. == See also == == References == == Further reading == John Krogstie (2003). "Evaluating UML using a generic quality framework". SINTEF Telecom and Informatics and IDI, NTNU, Norway. Krogstie and Sølvsberg (2003). Information Systems Engineering: Conceptual Modeling in a Quality Perspective. Institute of Computer and Information Sciences. Anna Gunhild Nysetvold and John Krogstie (2005). "Assessing business processing modeling languages using a generic quality framework". Institute of Computer and Information Sciences. == External links == Fundamental Modeling Concepts Software Modeling Languages Portal BIP – Incremental Component-based Construction of Real-time Systems Gellish Formal English
Wikipedia/Software_modeling
In programming language theory and type theory, polymorphism is the use of one symbol to represent multiple different types. In object-oriented programming, polymorphism is the provision of one interface to entities of different data types. The concept is borrowed from a principle in biology where an organism or species can have many different forms or stages. The most commonly recognized major forms of polymorphism are: Ad hoc polymorphism: defines a common interface for an arbitrary set of individually specified types. Parametric polymorphism: not specifying concrete types and instead using abstract symbols that can substitute for any type. Subtyping (also called subtype polymorphism or inclusion polymorphism): when a name denotes instances of many different classes related by some common superclass. == History == Interest in polymorphic type systems developed significantly in the 1960s, with practical implementations beginning to appear by the end of the decade. Ad hoc polymorphism and parametric polymorphism were originally described in Christopher Strachey's Fundamental Concepts in Programming Languages, where they are listed as "the two main classes" of polymorphism. Ad hoc polymorphism was a feature of ALGOL 68, while parametric polymorphism was the core feature of ML's type system. In a 1985 paper, Peter Wegner and Luca Cardelli introduced the term inclusion polymorphism to model subtypes and inheritance, citing Simula as the first programming language to implement it. == Forms == === Ad hoc polymorphism === Christopher Strachey chose the term ad hoc polymorphism to refer to polymorphic functions that can be applied to arguments of different types, but that behave differently depending on the type of the argument to which they are applied (also known as function overloading or operator overloading). The term "ad hoc" in this context is not pejorative: instead, it means that this form of polymorphism is not a fundamental feature of the type system. In Java, for example, two add functions can seem to work generically over two types (integer and string) when looking at the invocations, but are considered to be two entirely distinct functions by the compiler for all intents and purposes (the sketch below illustrates this). In dynamically typed languages the situation can be more complex as the correct function that needs to be invoked might only be determinable at run time. Implicit type conversion has also been defined as a form of polymorphism, referred to as "coercion polymorphism". === Parametric polymorphism === Parametric polymorphism allows a function or a data type to be written generically, so that it can handle values uniformly without depending on their type. Parametric polymorphism is a way to make a language more expressive while still maintaining full static type safety. The concept of parametric polymorphism applies to both data types and functions. A function that can evaluate to or be applied to values of different types is known as a polymorphic function. A data type that can appear to be of a generalized type (e.g., a list with elements of arbitrary type) is designated a polymorphic data type, like the generalized type from which such specializations are made. Parametric polymorphism is ubiquitous in functional programming, where it is often simply referred to as "polymorphism". In Haskell, for example, one can define a parameterized list data type and parametrically polymorphic functions over it. Parametric polymorphism is also available in several object-oriented languages.
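As a stand-in for the Java and Haskell listings mentioned above, here is a minimal Java sketch of both forms; the class and method names (PolymorphismSketch, add, first) are illustrative rather than taken from the original examples. The two add overloads show ad hoc polymorphism, and the generic first method shows parametric polymorphism.

import java.util.List;

public class PolymorphismSketch {
    // Ad hoc polymorphism: two overloads share a name but are resolved
    // by the compiler to two entirely distinct methods.
    static int add(int x, int y) { return x + y; }
    static String add(String x, String y) { return x + y; }

    // Parametric polymorphism: a single definition that handles values
    // uniformly, whatever the element type T turns out to be.
    static <T> T first(List<T> items) { return items.get(0); }

    public static void main(String[] args) {
        System.out.println(add(1, 2));                 // 3
        System.out.println(add("poly", "morphism"));   // polymorphism
        System.out.println(first(List.of(1, 2, 3)));   // 1
        System.out.println(first(List.of("a", "b")));  // a
    }
}

Each call to add is resolved at compile time to one specific overload, while first is compiled once and reused for every element type.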
For instance, templates in C++ and D, or under the name generics in C#, Delphi, Java, and Go: John C. Reynolds (and later Jean-Yves Girard) formally developed this notion of polymorphism as an extension to lambda calculus (called the polymorphic lambda calculus or System F). Any parametrically polymorphic function is necessarily restricted in what it can do, working on the shape of the data instead of its value, leading to the concept of parametricity. === Subtyping === Some languages employ the idea of subtyping (also called subtype polymorphism or inclusion polymorphism) to restrict the range of types that can be used in a particular case of polymorphism. In these languages, subtyping allows a function to be written to take an object of a certain type T, but also work correctly, if passed an object that belongs to a type S that is a subtype of T (according to the Liskov substitution principle). This type relation is sometimes written S <: T. Conversely, T is said to be a supertype of S, written T :> S. Subtype polymorphism is usually resolved dynamically (see below). In the following Java example cats and dogs are made subtypes of pets. The procedure letsHear() accepts a pet, but will also work correctly if a subtype is passed to it: In another example, if Number, Rational, and Integer are types such that Number :> Rational and Number :> Integer (Rational and Integer as subtypes of a type Number that is a supertype of them), a function written to take a Number will work equally well when passed an Integer or Rational as when passed a Number. The actual type of the object can be hidden from clients into a black box, and accessed via object identity. If the Number type is abstract, it may not even be possible to get your hands on an object whose most-derived type is Number (see abstract data type, abstract class). This particular kind of type hierarchy is known, especially in the context of the Scheme language, as a numerical tower, and usually contains many more types. Object-oriented programming languages offer subtype polymorphism using subclassing (also known as inheritance). In typical implementations, each class contains what is called a virtual table (shortly called vtable) — a table of functions that implement the polymorphic part of the class interface—and each object contains a pointer to the vtable of its class, which is then consulted whenever a polymorphic method is called. This mechanism is an example of: late binding, because virtual function calls are not bound until the time of invocation; single dispatch (i.e., single-argument polymorphism), because virtual function calls are bound simply by looking through the vtable provided by the first argument (the this object), so the runtime types of the other arguments are completely irrelevant. The same goes for most other popular object systems. Some, however, such as Common Lisp Object System, provide multiple dispatch, under which method calls are polymorphic in all arguments. The interaction between parametric polymorphism and subtyping leads to the concepts of variance and bounded quantification. === Row polymorphism === Row polymorphism is a similar, but distinct concept from subtyping. It deals with structural types. It allows the usage of all values whose types have certain properties, without losing the remaining type information. === Polytypism === A related concept is polytypism (or data type genericity). 
A polytypic function is more general than polymorphic, and in such a function, "though one can provide fixed ad hoc cases for specific data types, an ad hoc combinator is absent". === Rank polymorphism === Rank polymorphism is one of the defining features of the array programming languages, like APL. The essence of the rank-polymorphic programming model is implicitly treating all operations as aggregate operations, usable on arrays with arbitrarily many dimensions, which is to say that rank polymorphism allows functions to be defined to operate on arrays of any shape and size. == Implementation aspects == === Static and dynamic polymorphism === Polymorphism can be distinguished by when the implementation is selected: statically (at compile time) or dynamically (at run time, typically via a virtual function). This is known respectively as static dispatch and dynamic dispatch, and the corresponding forms of polymorphism are accordingly called static polymorphism and dynamic polymorphism. Static polymorphism executes faster, because there is no dynamic dispatch overhead, but requires additional compiler support. Further, static polymorphism allows greater static analysis by compilers (notably for optimization), source code analysis tools, and human readers (programmers). Dynamic polymorphism is more flexible but slower—for example, dynamic polymorphism allows duck typing, and a dynamically linked library may operate on objects without knowing their full type. Static polymorphism typically occurs in ad hoc polymorphism and parametric polymorphism, whereas dynamic polymorphism is usual for subtype polymorphism. However, it is possible to achieve static polymorphism with subtyping through more sophisticated use of template metaprogramming, namely the curiously recurring template pattern. When polymorphism is exposed via a library, static polymorphism becomes impossible for dynamic libraries as there is no way of knowing what types the parameters are when the shared object is built. While languages like C++ and Rust use monomorphized templates, the Swift programming language makes extensive use of dynamic dispatch to build the application binary interface for these libraries by default. As a result, more code can be shared for a reduced system size at the cost of runtime overhead. == See also == Type class Virtual inheritance == References == == External links == C++ examples of polymorphism Objects and Polymorphism (Visual Prolog) Polymorphism on MSDN Polymorphism Java Documentation on Oracle
Wikipedia/Polymorphism_(computer_science)
Functional reactive programming (FRP) is a programming paradigm for reactive programming (asynchronous dataflow programming) using the building blocks of functional programming (e.g., map, reduce, filter). FRP has been used for programming graphical user interfaces (GUIs), robotics, games, and music, aiming to simplify these problems by explicitly modeling time. == Formulations of FRP == The original formulation of functional reactive programming can be found in the ICFP 97 paper Functional Reactive Animation by Conal Elliott and Paul Hudak. FRP has taken many forms since its introduction in 1997. One axis of diversity is discrete vs. continuous semantics. Another axis is how FRP systems can be changed dynamically. === Continuous === The earliest formulation of FRP used continuous semantics, aiming to abstract over many operational details that are not important to the meaning of a program. The key properties of this formulation are: Modeling values that vary over continuous time, called "behaviors" and later "signals". Modeling "events" which have occurrences at discrete points in time. The system can be changed in response to events, generally termed "switching." The separation of evaluation details such as sampling rate from the reactive model. This semantic model of FRP in side-effect free languages is typically in terms of continuous functions, and typically over time. This formulation is also referred to as denotative continuous time programming (DCTP). === Discrete === Formulations such as Event-Driven FRP and versions of Elm prior to 0.17 require that updates are discrete and event-driven. These formulations have pushed for practical FRP, focusing on semantics that have a simple API that can be implemented efficiently in a setting such as robotics or in a web-browser. In these formulations, it is common that the ideas of behaviors and events are combined into signals that always have a current value, but change discretely. == Interactive FRP == It has been pointed out that the ordinary FRP model, from inputs to outputs, is poorly suited to interactive programs. Lacking the ability to "run" programs within a mapping from inputs to outputs may mean one of the following solutions must be used: Create a data structure of actions which appear as the outputs. The actions must be run by an external interpreter or environment. This inherits all of the difficulties of the original stream input/output (I/O) system of Haskell. Use Arrowized FRP and embed arrows which are capable of performing actions. The actions may also have identities, which allows them to maintain separate mutable stores for example. This is the approach taken by the Fudgets library and, more generally, Monadic Stream Functions. The novel approach is to allow actions to be run now (in the IO monad) but defer the receipt of their results until later. This makes use of an interaction between the Event and IO monads, and is compatible with a more expression-oriented FRP: == Implementation issues == There are two types of FRP systems, push-based and pull-based. Push-based systems take events and push them through a signal network to achieve a result. Pull-based systems wait until the result is demanded, and work backwards through the network to retrieve the value demanded. Some FRP systems such as Yampa use sampling, where samples are pulled by the signal network. This approach has a drawback: the network must wait up to the duration of one computation step to learn of changes to the input. 
Sampling is an example of pull-based FRP. The Reactive and Etage libraries on Hackage introduced an approach called push-pull FRP. In it, only when the next event on a purely defined stream (such as a list of fixed events with times) is demanded, that event is constructed. These purely defined streams act like lazy lists in Haskell. That is the pull-based half. The push-based half is used when events external to the system are brought in. The external events are pushed to consumers, so that they can find out about an event the instant it is issued. == Implementations == Implementations exist for many programming languages, including: Yampa is an arrowized, efficient, pure Haskell implementation with SDL, SDL2, OpenGL and HTML DOM support. The language Elm used to support FRP but has since replaced it with a different pattern. reflex is an efficient push–pull FRP implementation in Haskell with hosts for web browser – Document Object Model (DOM), Simple DirectMedia Layer (SDL), and Gloss. reactive-banana is a target-agnostic push FRP implementation in Haskell. netwire and varying are arrowized, pull FRP implementations in Haskell. Flapjax is a behavior–event FRP implementation in JavaScript. React is an OCaml module for functional reactive programming. Sodium is a push FRP implementation independent of a specific user interface (UI) framework for several languages, such as Java, TypeScript, and C#. Dunai is a fast implementation in Haskell using Monadic Stream Functions that supports Classic and Arrowized FRP. ObservableComputations, a cross-platform .NET implementation. Stella is an actor model-based reactive language that demonstrates a model of "actors" and "reactors" which aims to avoid the issues of combining imperative code with reactive code (by separating them in actors and reactors). Actors are suitable for use in distributed reactive systems. TidalCycles is a pure FRP domain specific language for musical pattern, embedded in the Haskell language. ReactiveX, popularized by its JavaScript implementation rxjs, is functional and reactive but differs from functional reactive programming. == See also == Incremental computing Stream processing == References ==
Wikipedia/Functional_reactive_programming
In the theory of programming languages in computer science, deforestation (also known as fusion) is a program transformation to eliminate intermediate lists or tree structures that are created and then immediately consumed by a program. The term "deforestation" was originally coined by Philip Wadler in his 1990 paper "Deforestation: transforming programs to eliminate trees". Deforestation is typically applied to programs in functional programming languages, particularly non-strict programming languages such as Haskell. One particular algorithm for deforestation, shortcut deforestation, is implemented in the Glasgow Haskell Compiler. Deforestation is closely related to escape analysis. == See also == Hylomorphism (computer science) == References ==
Wikipedia/Deforestation_(computer_science)
In computer science, an expression is a syntactic entity in a programming language that may be evaluated to determine its value. It is a combination of one or more constants, variables, functions, and operators that the programming language interprets (according to its particular rules of precedence and of association) and computes to produce ("to return", in a stateful environment) another value. This process, for mathematical expressions, is called evaluation. In simple settings, the resulting value is usually one of various primitive types, such as string, boolean, or numerical (such as integer, floating-point, or complex). Expressions are often contrasted with statements—syntactic entities that have no value (an instruction). == Examples == 2 + 3 is both an arithmetic and programming expression, which evaluates to 5. A variable is an expression because it denotes a value in memory, so y + 6 is also an expression. An example of a relational expression is 4 ≠ 4, which evaluates to false. == Void as a result type == In C and most C-derived languages, a call to a function with a void return type is a valid expression, of type void. Values of type void cannot be used, so the value of such an expression is always thrown away. == Side effects and elimination == In many programming languages, a function, and hence an expression containing a function, may have side effects. An expression with side effects does not normally have the property of referential transparency. In many languages (e.g. C++), expressions may be ended with a semicolon (;) to turn the expression into an expression statement. This asks the implementation to evaluate the expression for its side-effects only and to disregard the result of the expression (e.g. x+1;) unless it is a part of an expression statement that induces side-effects (e.g. y=x+1; or func1(func2());). === Caveats === The formal notion of a side effect is a change to the abstract state of the running program. Another class of side effects are changes to the concrete state of the computational system, such as loading data into cache memories. Languages that are often described as "side effect–free" will generally still have concrete side effects that can be exploited, for example, in side-channel attacks. Furthermore, the elapsed time evaluating an expression (even one with no other apparent side effects), is sometimes essential to the correct operation of a system, as behaviour in time is easily visible from outside the evaluation environment by other parts of the system with which it interacts, and might even be regarded as the primary effect such as when performing benchmark testing. It depends on the particular programming language specification whether an expression with no abstract side effects can legally be eliminated from the execution path by the processing environment in which the expression is evaluated. == See also == Evaluation strategy == References == == External links == This article is based on material taken from Expression at the Free On-line Dictionary of Computing prior to 1 November 2008 and incorporated under the "relicensing" terms of the GFDL, version 1.3 or later.
Wikipedia/Expression_(computer_science)
In computing, data-oriented design is a program optimization approach motivated by efficient usage of the CPU cache, often used in video game development. The approach is to focus on the data layout, separating and sorting fields according to when they are needed, and to think about transformations of data. Proponents include Mike Acton, Scott Meyers, and Jonathan Blow. The parallel array (or structure of arrays) is the main example of data-oriented design. It is contrasted with the array of structures typical of object-oriented designs. The definition of data-oriented design as a programming paradigm can be seen as contentious as many believe that it can be used side by side with another paradigm, but due to the emphasis on data layout, it is also incompatible with most other paradigms. == Motives == These methods became especially popular in the mid to late 2000s during the seventh generation of video game consoles that included the IBM PowerPC based PlayStation 3 (PS3) and Xbox 360 consoles. Historically, game consoles often have relatively weak central processing units (CPUs) compared to the top-of-line desktop computer counterparts. This is a design choice to devote more power and transistor budget to the graphics processing units (GPUs). For example, the 7th generation CPUs were not manufactured with modern out-of-order execution processors, but instead use in-order processors with high clock speeds and deep pipelines. In addition, most types of computing systems have main memory located hundreds of clock cycles away from the processing elements. Furthermore, as CPUs have become faster alongside a large increase in main memory capacity, there is massive data consumption that increases the likelihood of cache misses in the shared bus, otherwise known as Von Neumann bottlenecking. Consequently, locality of reference methods have been used to control performance, requiring improvement of memory access patterns to fix bottlenecking. Some of the software issues were also similar to those encountered on the Itanium, requiring loop unrolling for upfront scheduling. == Contrast with object orientation == The claim is that traditional object-oriented programming (OOP) design principles result in poor data locality, more so if runtime polymorphism (dynamic dispatch) is used (which is especially problematic on some processors). Although OOP appears to "organise code around data", it actually organises source code around data types rather than physically grouping individual fields and arrays in an efficient format for access by specific functions. Moreover, it often hides layout details under abstraction layers, while a data-oriented programmer wants to consider this first and foremost. == See also == CPU cache Data-driven programming Entity component system Memory access pattern Video game development == References ==
Wikipedia/Data-oriented_design
Inductive programming (IP) is a special area of automatic programming, covering research from artificial intelligence and programming, which addresses learning of typically declarative (logic or functional) and often recursive programs from incomplete specifications, such as input/output examples or constraints. Depending on the programming language used, there are several kinds of inductive programming. Inductive functional programming, which uses functional programming languages such as Lisp or Haskell, and most especially inductive logic programming, which uses logic programming languages such as Prolog and other logical representations such as description logics, have been more prominent, but other (programming) language paradigms have also been used, such as constraint programming or probabilistic programming. == Definition == Inductive programming incorporates all approaches which are concerned with learning programs or algorithms from incomplete (formal) specifications. Possible inputs in an IP system are a set of training inputs and corresponding outputs or an output evaluation function, describing the desired behavior of the intended program, traces or action sequences which describe the process of calculating specific outputs, constraints for the program to be induced concerning its time efficiency or its complexity, various kinds of background knowledge such as standard data types, predefined functions to be used, program schemes or templates describing the data flow of the intended program, heuristics for guiding the search for a solution or other biases. Output of an IP system is a program in some arbitrary programming language containing conditionals and loop or recursive control structures, or any other kind of Turing-complete representation language. In many applications the output program must be correct with respect to the examples and partial specification, and this leads to the consideration of inductive programming as a special area inside automatic programming or program synthesis, usually opposed to 'deductive' program synthesis, where the specification is usually complete. In other cases, inductive programming is seen as a more general area where any declarative programming or representation language can be used and we may even have some degree of error in the examples, as in general machine learning, the more specific area of structure mining or the area of symbolic artificial intelligence. A distinctive feature is the number of examples or partial specification needed. Typically, inductive programming techniques can learn from just a few examples. The diversity of inductive programming usually comes from the applications and the languages that are used: apart from logic programming and functional programming, other programming paradigms and representation languages have been used or suggested in inductive programming, such as functional logic programming, constraint programming, probabilistic programming, abductive logic programming, modal logic, action languages, agent languages and many types of imperative languages. == History == Research on the inductive synthesis of recursive functional programs started in the early 1970s and was brought onto firm theoretical foundations with the seminal THESIS system of Summers and work of Biermann. 
These approaches were split into two phases: first, input-output examples are transformed into non-recursive programs (traces) using a small set of basic operators; second, regularities in the traces are searched for and used to fold them into a recursive program. The main results until the mid-1980s are surveyed by Smith. Due to limited progress with respect to the range of programs that could be synthesized, research activities decreased significantly in the next decade. The advent of logic programming brought a new elan but also a new direction in the early 1980s, especially due to the MIS system of Shapiro eventually spawning the new field of inductive logic programming (ILP). The early works of Plotkin, and his "relative least general generalization (rlgg)", had an enormous impact in inductive logic programming. Most of ILP work addresses a wider class of problems, as the focus is not only on recursive logic programs but on machine learning of symbolic hypotheses from logical representations. However, there were some encouraging results on learning recursive Prolog programs such as quicksort from examples together with suitable background knowledge, for example with GOLEM. But again, after initial success, the community got disappointed by limited progress about the induction of recursive programs with ILP less and less focusing on recursive programs and leaning more and more towards a machine learning setting with applications in relational data mining and knowledge discovery. In parallel to work in ILP, Koza proposed genetic programming in the early 1990s as a generate-and-test based approach to learning programs. The idea of genetic programming was further developed into the inductive programming system ADATE and the systematic-search-based system MagicHaskeller. Here again, functional programs are learned from sets of positive examples together with an output evaluation (fitness) function which specifies the desired input/output behavior of the program to be learned. The early work in grammar induction (also known as grammatical inference) is related to inductive programming, as rewriting systems or logic programs can be used to represent production rules. In fact, early works in inductive inference considered grammar induction and Lisp program inference as basically the same problem. The results in terms of learnability were related to classical concepts, such as identification-in-the-limit, as introduced in the seminal work of Gold. More recently, the language learning problem was addressed by the inductive programming community. In the recent years, the classical approaches have been resumed and advanced with great success. Therefore, the synthesis problem has been reformulated on the background of constructor-based term rewriting systems taking into account modern techniques of functional programming, as well as moderate use of search-based strategies and usage of background knowledge as well as automatic invention of subprograms. Many new and successful applications have recently appeared beyond program synthesis, most especially in the area of data manipulation, programming by example and cognitive modelling (see below). Other ideas have also been explored with the common characteristic of using declarative languages for the representation of hypotheses. 
For instance, the use of higher-order features, schemes or structured distances have been advocated for a better handling of recursive data types and structures; abstraction has also been explored as a more powerful approach to cumulative learning and function invention. One powerful paradigm that has been recently used for the representation of hypotheses in inductive programming (generally in the form of generative models) is probabilistic programming (and related paradigms, such as stochastic logic programs and Bayesian logic programming). == Application areas == The first workshop on Approaches and Applications of Inductive Programming (AAIP) held in conjunction with ICML 2005 identified all applications where "learning of programs or recursive rules are called for, [...] first in the domain of software engineering where structural learning, software assistants and software agents can help to relieve programmers from routine tasks, give programming support for end users, or support of novice programmers and programming tutor systems. Further areas of application are language learning, learning recursive control rules for AI-planning, learning recursive concepts in web-mining or for data-format transformations". Since then, these and many other areas have shown to be successful application niches for inductive programming, such as end-user programming, the related areas of programming by example and programming by demonstration, and intelligent tutoring systems. Other areas where inductive inference has been recently applied are knowledge acquisition, artificial general intelligence, reinforcement learning and theory evaluation, and cognitive science in general. There may also be prospective applications in intelligent agents, games, robotics, personalisation, ambient intelligence and human interfaces. == See also == Evolutionary programming Inductive reasoning Test-driven development == References == == Further reading == == External links == Inductive Programming community page, hosted by the University of Bamberg.
Wikipedia/Inductive_functional_programming
Q is a programming language for array processing, developed by Arthur Whitney. It is proprietary software, commercialized by Kx Systems. Q serves as the query language for kdb+, a disk-based and in-memory, column-based database. Kdb+ is based on the language k, a terse variant of the language APL. Q is a thin wrapper around k, providing a more readable, English-like interface. One of its use cases is financial time series analysis, since it allows inexact time matches: for example, a trade can be matched with the bid and the ask that immediately precede it, even though the timestamps differ slightly. == Overview == The fundamental building blocks of q are atoms, lists, and functions. Atoms are scalars and include the data types numeric, character, date, and time. Lists are ordered collections of atoms (or other lists) upon which the higher level data structures dictionaries and tables are internally constructed. A dictionary is a map of a list of keys to a list of values. A table is a transposed dictionary of symbol keys and equal length lists (columns) as values. A keyed table, analogous to a table with a primary key placed on it, is a dictionary where the keys and values are arranged as two tables. The following code demonstrates the relationships of the data structures. Expressions to evaluate appear prefixed with the q) prompt, with the output of the evaluation shown beneath. These entities are manipulated via functions, which include the built-in functions that come with Q (which are defined as K macros) and user-defined functions. Functions are a data type, and can be placed in lists, dictionaries and tables, or passed to other functions as parameters. == Examples == Like K, Q is interpreted and the result of the evaluation of an expression is immediately displayed, unless terminated with a semi-colon. The Hello world program is thus trivial. The following expression sorts a list of strings stored in the variable x in descending order of their lengths. The expression is evaluated from right to left as follows: "count each x" returns the length of each word in the list x. "idesc" returns the indices that would sort a list of values in descending order. "@" uses the integer values on the right to index into the original list of strings. The factorial function can be implemented in Q either directly or recursively. Note that in both cases the function implicitly takes a single argument called x - in general it is possible to use up to three implicit arguments, named x, y and z, or to give arguments local variable bindings explicitly. In the direct implementation, the expression "til x" enumerates the integers from 0 to x-1, "1+" adds 1 to every element of the list and "prd" returns the product of the list. In the recursive implementation, the syntax "$[condition; expr1; expr2]" is a ternary conditional - if the condition is true then expr1 is returned; otherwise expr2 is returned. The expression ".z.s" is loosely equivalent to 'this' in Java or 'self' in Python - it is a reference to the containing object, and enables functions in q to call themselves. When x is an integer greater than 2, the following function will return 1 if it is a prime, otherwise 0. The function is evaluated from right to left: "til x" enumerates the non-negative integers less than x. "2_" drops the first two elements of the enumeration (0 and 1). "x mod" performs modulo division between the original integer and each value in the truncated list. "min" finds the minimum value of the list of modulo results; x is prime exactly when this minimum is non-zero.
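The Q listings themselves are not shown above; as a rough cross-check of the logic just described, the same two computations can be sketched in Java as follows (the class and method names are illustrative, and this is not Q code).

public class QWalkthroughInJava {
    // Factorial as described above: take "1 + til x", i.e. 1..x, and multiply ("prd").
    static long factorial(int x) {
        long product = 1;
        for (int i = 1; i <= x; i++) {
            product *= i;
        }
        return product;
    }

    // Prime test as described above for x > 2: enumerate 2..x-1 ("2_ til x"),
    // compute x mod each value, and take the minimum; x is prime exactly when
    // that minimum is non-zero.
    static int isPrime(int x) {
        int smallestRemainder = Integer.MAX_VALUE;
        for (int d = 2; d < x; d++) {
            smallestRemainder = Math.min(smallestRemainder, x % d);
        }
        return smallestRemainder == 0 ? 0 : 1; // 1 if prime, otherwise 0
    }

    public static void main(String[] args) {
        System.out.println(factorial(5)); // 120
        System.out.println(isPrime(7));   // 1
        System.out.println(isPrime(9));   // 0
    }
}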
The q programming language contains its own table query syntax called qSQL, which resembles traditional SQL but has important differences, mainly due to the fact that the underlying tables are oriented by column, rather than by row. == References == == Further reading == Borror, Jeffry A. Q For Mortals: A Tutorial in Q Programming. ISBN 978-1-4348-2901-6. Psaris, Nick. Q Tips: Fast, Scalable and Maintainable Kdb+. ISBN 978-9-8813-8990-9. == External links == Official website, Kx Systems Official website, kdb+ Online documentation and developer site Online kdb Tutorials qStudio an IDE with timeseries charting for kdb Kx Developer, an IDE for kdb+ kdb+ repositories on GitHub Free online version of Q for Mortals Q for All video tutorials Technical Whitepapers jq, an implementation of q on the JVM
Wikipedia/Q_(programming_language_from_Kx_Systems)
In computer programming, the strategy pattern (also known as the policy pattern) is a behavioral software design pattern that enables selecting an algorithm at runtime. Instead of implementing a single algorithm directly, code receives runtime instructions as to which in a family of algorithms to use. Strategy lets the algorithm vary independently from clients that use it. Strategy is one of the patterns included in the influential book Design Patterns by Gamma et al. that popularized the concept of using design patterns to describe how to design flexible and reusable object-oriented software. Deferring the decision about which algorithm to use until runtime allows the calling code to be more flexible and reusable. For instance, a class that performs validation on incoming data may use the strategy pattern to select a validation algorithm depending on the type of data, the source of the data, user choice, or other discriminating factors. These factors are not known until runtime and may require radically different validation to be performed. The validation algorithms (strategies), encapsulated separately from the validating object, may be used by other validating objects in different areas of the system (or even different systems) without code duplication. Typically, the strategy pattern stores a reference to code in a data structure and retrieves it. This can be achieved by mechanisms such as the native function pointer, the first-class function, classes or class instances in object-oriented programming languages, or accessing the language implementation's internal storage of code via reflection. == Structure == === UML class and sequence diagram === In the above UML class diagram, the Context class does not implement an algorithm directly. Instead, Context refers to the Strategy interface for performing an algorithm (strategy.algorithm()), which makes Context independent of how an algorithm is implemented. The Strategy1 and Strategy2 classes implement the Strategy interface, that is, implement (encapsulate) an algorithm. The UML sequence diagram shows the runtime interactions: The Context object delegates an algorithm to different Strategy objects. First, Context calls algorithm() on a Strategy1 object, which performs the algorithm and returns the result to Context. Thereafter, Context changes its strategy and calls algorithm() on a Strategy2 object, which performs the algorithm and returns the result to Context. === Class diagram === == Strategy and open/closed principle == According to the strategy pattern, the behaviors of a class should not be inherited. Instead, they should be encapsulated using interfaces. This is compatible with the open/closed principle (OCP), which proposes that classes should be open for extension but closed for modification. As an example, consider a car class. Two possible functionalities for car are brake and accelerate. Since accelerate and brake behaviors change frequently between models, a common approach is to implement these behaviors in subclasses. This approach has significant drawbacks; accelerate and brake behaviors must be declared in each new car model. The work of managing these behaviors increases greatly as the number of models increases, and requires code to be duplicated across models. Additionally, it is not easy to determine the exact nature of the behavior for each model without investigating the code in each. The strategy pattern uses composition instead of inheritance. 
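A minimal Java sketch of the car example discussed here and elaborated below might look as follows; the BrakeBehavior, Brake and BrakeWithABS names follow the behaviors mentioned in the text, while the remaining details (the Car fields and the printed messages) are illustrative assumptions.

// The strategy: each brake behavior is encapsulated behind a common interface.
interface BrakeBehavior {
    void brake();
}

class Brake implements BrakeBehavior {
    public void brake() { System.out.println("Simple braking"); }
}

class BrakeWithABS implements BrakeBehavior {
    public void brake() { System.out.println("Braking with ABS"); }
}

// The context: a car is composed with a brake behavior instead of inheriting one.
class Car {
    private BrakeBehavior brakeBehavior;

    Car(BrakeBehavior brakeBehavior) { this.brakeBehavior = brakeBehavior; }

    void applyBrake() { brakeBehavior.brake(); }

    // The strategy can be swapped at runtime without touching the Car class.
    void setBrakeBehavior(BrakeBehavior brakeBehavior) { this.brakeBehavior = brakeBehavior; }
}

public class StrategyDemo {
    public static void main(String[] args) {
        Car sedan = new Car(new BrakeWithABS());
        sedan.applyBrake();                   // Braking with ABS
        sedan.setBrakeBehavior(new Brake());  // change the strategy at runtime
        sedan.applyBrake();                   // Simple braking
    }
}

Adding a new car model or a new braking algorithm then only requires a new BrakeBehavior implementation, leaving Car unchanged.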
In the strategy pattern, behaviors are defined as separate interfaces and specific classes that implement these interfaces. This allows better decoupling between the behavior and the class that uses the behavior. The behavior can be changed without breaking the classes that use it, and the classes can switch between behaviors by changing the specific implementation used without requiring any significant code changes. Behaviors can also be changed at runtime as well as at design-time. For instance, a car object's brake behavior can be changed from BrakeWithABS() to Brake() by changing the brakeBehavior member to: == See also == Dependency injection Higher-order function List of object-oriented programming terms Mixin Policy-based design Type class Entity–component–system Composition over inheritance == References == == External links == Strategy Pattern in UML (in Spanish) Geary, David (April 26, 2002). "Strategy for success". Java Design Patterns. JavaWorld. Retrieved 2020-07-20. Strategy Pattern for C article Refactoring: Replace Type Code with State/Strategy The Strategy Design Pattern at the Wayback Machine (archived 2017-04-15) Implementation of the Strategy pattern in JavaScript
Wikipedia/Strategy_pattern
In computer science, choreographic programming is a programming paradigm where programs are compositions of interactions among multiple concurrent participants. == Overview == === Choreographies === In choreographic programming, developers use a choreographic programming language to define the intended communication behaviour of concurrent participants. Programs in this paradigm are called choreographies. Choreographic languages are inspired by security protocol notation (also known as "Alice and Bob" notation). The key to these languages is the communication primitive; a typical communication statement reads "Alice communicates the result of evaluating the expression expr to Bob, which stores it in its local variable x". Alice, Bob, etc. are typically called roles or processes. As an example, consider a choreography for a simplified single sign-on (SSO) protocol based on a Central Authentication Service (CAS) that involves three roles: Client, which wishes to obtain an access token from CAS to interact with Service. Service, which needs to know from CAS if the Client should be given access. CAS, which is the Central Authentication Service responsible for checking the Client's credentials. The choreography starts in Line 1, where Client communicates a pair consisting of some credentials and the identifier of the service it wishes to access to CAS. CAS stores this pair in its local variable authRequest (for authentication request). In Line 2, the CAS checks if the request is valid for obtaining an authentication token. If so, it generates a token and communicates a Success message containing the token to both Client and Service (Lines 3–5). Otherwise, the CAS informs Client and Service that authentication failed, by sending a Failure message (Lines 7–8). We refer to this choreography as the "SSO choreography" in the remainder. === Endpoint Projection === A key feature of choreographic programming is the capability of compiling choreographies to distributed implementations. These implementations can be libraries for software that needs to participate in a computer network by following a protocol, or standalone distributed programs. The translation of a choreography into distributed programs is called endpoint projection (EPP for short). Endpoint projection returns a program for each role described in the source choreography. For example, given the choreography above, endpoint projection would return three programs: one for Client, one for Service, and one for CAS. Each can be expressed in pseudocode, where send and recv are primitives for sending and receiving messages to/from other roles. For each role, its code contains the actions that the role should execute to implement the choreography correctly together with the others. == Development == The paradigm of choreographic programming originates from its titular PhD thesis. The inspiration for the syntax of choreographic programming languages can be traced back to security protocol notation, also known as "Alice and Bob" notation. Choreographic programming has also been heavily influenced by standards for service choreography and interaction diagrams, as well as developments of the theory of process calculi. Choreographic programming is an active area of research. The paradigm has been used in the study of information flow, parallel computing, cyber-physical systems, runtime adaptation, and system integration. == Languages == AIOCJ (website). A choreographic programming language for adaptable systems that produces code in Jolie. Chor.
A session-typed choreographic programming language that compiled to microservices in Jolie. In the meantime superseded by Choral. Choral (website). An object-oriented choreographic programming language that compiles to libraries in Java. Choral is the first choreographic programming language with decentralised data structures and higher-order parameters. ChoRus (website). Library-level choreographic programming in Rust. Core Choreographies. A core theoretical model for choreographic programming. A mechanised implementation is available in Coq. HasChor (website). A library for choreographic programming in Haskell. Kalas. A choreographic programming language with a verified compiler to CakeML. Pirouette. A mechanised choreographic programming language theory with higher-order procedures. == See also == Security protocol notation Sequence diagram Service choreography Structured concurrency Multitier programming == References == == External links == www.choral-lang.org
Wikipedia/Choreographic_programming
In computer science, coinduction is a technique for defining and proving properties of systems of concurrent interacting objects. Coinduction is the mathematical dual to structural induction. Coinductively defined data types are known as codata and are typically infinite data structures, such as streams. As a definition or specification, coinduction describes how an object may be "observed", "broken down" or "destructed" into simpler objects. As a proof technique, it may be used to show that an equation is satisfied by all possible implementations of such a specification. To generate and manipulate codata, one typically uses corecursive functions, in conjunction with lazy evaluation. Informally, rather than defining a function by pattern-matching on each of the inductive constructors, one defines each of the "destructors" or "observers" over the function result. In programming, co-logic programming (co-LP for brevity) "is a natural generalization of logic programming and coinductive logic programming, which in turn generalizes other extensions of logic programming, such as infinite trees, lazy predicates, and concurrent communicating predicates. Co-LP has applications to rational trees, verifying infinitary properties, lazy evaluation, concurrent logic programming, model checking, bisimilarity proofs, etc." Experimental implementations of co-LP are available from the University of Texas at Dallas and in the language Logtalk and in SWI-Prolog. == Description == In his book Types and Programming Languages, Benjamin C. Pierce gives a concise statement of both the principle of induction and the principle of coinduction. While this article is not primarily concerned with induction, it is useful to consider their somewhat generalized forms at once. In order to state the principles, a few preliminaries are required. === Preliminaries === Let U be a set and let F : 2^U → 2^U be a monotone function, that is: X ⊆ Y ⇒ F(X) ⊆ F(Y). Unless otherwise stated, F will be assumed to be monotone. X is F-closed if F(X) ⊆ X. X is F-consistent if X ⊆ F(X). X is a fixed point if X = F(X). These terms can be intuitively understood in the following way. Suppose that X is a set of assertions, and F(X) is the operation that yields the consequences of X. Then X is F-closed when one cannot conclude any more than has already been asserted, while X is F-consistent when all of the assertions are supported by other assertions (i.e. there are no "non-F-logical assumptions"). The Knaster–Tarski theorem tells us that the least fixed point of F (denoted μF) is given by the intersection of all F-closed sets, while the greatest fixed point (denoted νF) is given by the union of all F-consistent sets. We can now state the principles of induction and coinduction. === Definition === Principle of induction: If X is F-closed, then μF ⊆ X. Principle of coinduction: If X is F-consistent, then X ⊆ νF. == Discussion == The principles, as stated, are somewhat opaque, but can be usefully thought of in the following way.
Suppose you wish to prove a property of μF. By the principle of induction, it suffices to exhibit an F-closed set X for which the property holds. Dually, suppose you wish to show that x ∈ νF. Then it suffices to exhibit an F-consistent set that x is known to be a member of. == Examples == === Defining a set of data types === Consider the following grammar of datatypes: T = ⊥ | ⊤ | T × T. That is, the set of types includes the "bottom type" ⊥, the "top type" ⊤, and (non-homogenous) lists. These types can be identified with strings over the alphabet Σ = {⊥, ⊤, ×}. Let Σ^≤ω denote all (possibly infinite) strings over Σ. Consider the function F : 2^(Σ^≤ω) → 2^(Σ^≤ω) given by F(X) = {⊥, ⊤} ∪ {x × y : x, y ∈ X}. In this context, x × y means "the concatenation of string x, the symbol ×, and string y." We should now define our set of datatypes as a fixpoint of F, but it matters whether we take the least or greatest fixpoint. Suppose we take μF as our set of datatypes. Using the principle of induction, we can prove the claim that every element of μF is a finite string. To arrive at this conclusion, consider the set of all finite strings over Σ. Clearly F cannot produce an infinite string, so it turns out this set is F-closed and the conclusion follows. Now suppose that we take νF as our set of datatypes. We would like to use the principle of coinduction to prove the claim that ⊥ × ⊥ × ⋯ ∈ νF, where ⊥ × ⊥ × ⋯ denotes the infinite list consisting of all ⊥. To use the principle of coinduction, consider the set {⊥ × ⊥ × ⋯}. This set turns out to be F-consistent, and therefore ⊥ × ⊥ × ⋯ ∈ νF. This depends on the suspicious statement that ⊥ × ⊥ × ⋯ = (⊥ × ⊥ × ⋯) × (⊥ × ⊥ × ⋯). The formal justification of this is technical and depends on interpreting strings as sequences, i.e. functions from ℕ → Σ. Intuitively, the argument is similar to the argument that 0.000…1 = 0 (see Repeating decimal). === Coinductive datatypes in programming languages === Consider defining a stream as a first element paired with the rest of the stream (a sketch of such a definition is given below). This would seem to be a definition that is not well-founded, but it is nonetheless useful in programming and can be reasoned about. In any case, a stream is an infinite list of elements from which you may observe the first element, or place an element in front of it to get another stream.
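A rough Java rendering of such a stream (a sketch only; the CoStream name, the Supplier-based lazy tail, and the from example are illustrative assumptions, not the original definition) reads the type through its observers: every stream offers a head and a tail.

import java.util.function.Supplier;

// Codata viewed through its observers: a stream always has a head and a tail.
// The tail is computed lazily, so the structure may be infinite.
final class CoStream<A> {
    private final A head;
    private final Supplier<CoStream<A>> tail;

    CoStream(A head, Supplier<CoStream<A>> tail) {
        this.head = head;
        this.tail = tail;
    }

    A head() { return head; }
    CoStream<A> tail() { return tail.get(); }

    // Corecursive definition: the stream of integers counting up from n.
    static CoStream<Integer> from(int n) {
        return new CoStream<>(n, () -> from(n + 1));
    }

    public static void main(String[] args) {
        CoStream<Integer> naturals = from(0);
        System.out.println(naturals.head());               // 0
        System.out.println(naturals.tail().tail().head()); // 2
    }
}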
=== Relationship with F-coalgebras === Consider the endofunctor F in the category of sets given by F(x) = A × x and F(f) = ⟨id_A, f⟩. The final F-coalgebra νF has the following morphism associated with it: out : νF → F(νF) = A × νF. This induces another coalgebra F(νF) with associated morphism F(out). Because νF is final, there is a unique morphism, here written g : F(νF) → νF, such that out ∘ g = F(g) ∘ F(out) = F(g ∘ out). The composition g ∘ out induces another F-coalgebra homomorphism νF → νF. Since νF is final, this homomorphism is unique and therefore equal to id_νF. Altogether we have: g ∘ out = id_νF and out ∘ g = F(g ∘ out) = id_F(νF). This witnesses the isomorphism νF ≃ F(νF), which in categorical terms indicates that νF is a fixed point of F and justifies the notation. ==== Stream as a final coalgebra ==== We will show that Stream A is the final coalgebra of the functor F(x) = A × x. Consider implementations of out : Stream A → A × Stream A and of its inverse. These are easily seen to be mutually inverse, witnessing the isomorphism. See the reference for more details. === Relationship with mathematical induction === We will demonstrate how the principle of induction subsumes mathematical induction. Let P be some property of natural numbers. We will take the following definition of mathematical induction: 0 ∈ P ∧ (n ∈ P ⇒ n + 1 ∈ P) ⇒ P = ℕ. Now consider the function F : 2^ℕ → 2^ℕ given by F(X) = {0} ∪ {x + 1 : x ∈ X}. It should not be difficult to see that μF = ℕ. Therefore, by the principle of induction, if we wish to prove some property P of ℕ, it suffices to show that P is F-closed. In detail, we require F(P) ⊆ P, that is, {0} ∪ {x + 1 : x ∈ P} ⊆ P. This is precisely mathematical induction as stated. == See also == F-coalgebra Corecursion Bisimulation Anamorphism Total functional programming == References == == Further reading == Textbooks Davide Sangiorgi (2012).
Introduction to Bisimulation and Coinduction. Cambridge University Press. Davide Sangiorgi and Jan Rutten (2011). Advanced Topics in Bisimulation and Coinduction. Cambridge University Press. Introductory texts Andrew D. Gordon (1994). "A Tutorial on Co-induction and Functional Programming". 1994. pp. 78–95. CiteSeerX 10.1.1.37.3914. — mathematically oriented description Bart Jacobs and Jan Rutten (1997). A Tutorial on (Co)Algebras and (Co)Induction (alternate link) — describes induction and coinduction simultaneously Eduardo Giménez and Pierre Castéran (2007). "A Tutorial on [Co-]Inductive Types in Coq" Coinduction — short introduction History Davide Sangiorgi. "On the Origins of Bisimulation and Coinduction", ACM Transactions on Programming Languages and Systems, Vol. 31, Nb 4, Mai 2009. Miscellaneous Co-Logic Programming: Extending Logic Programming with Coinduction — describes the co-logic programming paradigm
Wikipedia/Codata_(computer_science)
In computer science, an associative array, key-value store, map, symbol table, or dictionary is an abstract data type that stores a collection of (key, value) pairs, such that each possible key appears at most once in the collection. In mathematical terms, an associative array is a function with finite domain. It supports 'lookup', 'remove', and 'insert' operations. The dictionary problem is the classic problem of designing efficient data structures that implement associative arrays. The two major solutions to the dictionary problem are hash tables and search trees. It is sometimes also possible to solve the problem using directly addressed arrays, binary search trees, or other more specialized structures. Many programming languages include associative arrays as primitive data types, while many other languages provide software libraries that support associative arrays. Content-addressable memory is a form of direct hardware-level support for associative arrays. Associative arrays have many applications including such fundamental programming patterns as memoization and the decorator pattern. The name does not come from the associative property known in mathematics. Rather, it arises from the association of values with keys. It is not to be confused with associative processors. == Operations == In an associative array, the association between a key and a value is often known as a "mapping"; the same word may also be used to refer to the process of creating a new association. The operations that are usually defined for an associative array are: Insert or put: add a new (key, value) pair to the collection, mapping the key to its new value. Any existing mapping is overwritten. The arguments to this operation are the key and the value. Remove or delete: remove a (key, value) pair from the collection, unmapping a given key from its value. The argument to this operation is the key. Lookup, find, or get: find the value (if any) that is bound to a given key. The argument to this operation is the key, and the value is returned from the operation. If no value is found, some lookup functions raise an exception, while others return a default value (such as zero, null, or a specific value passed to the constructor). Associative arrays may also include other operations such as determining the number of mappings or constructing an iterator to loop over all the mappings. For such operations, the order in which the mappings are returned is usually implementation-defined. A multimap generalizes an associative array by allowing multiple values to be associated with a single key. A bidirectional map is a related abstract data type in which the mappings operate in both directions: each value must be associated with a unique key, and a second lookup operation takes a value as an argument and looks up the key associated with that value. === Properties === The operations of the associative array should satisfy various properties: lookup(k, insert(j, v, D)) = if k == j then v else lookup(k, D) lookup(k, new()) = fail, where fail is an exception or default value remove(k, insert(j, v, D)) = if k == j then remove(k, D) else insert(j, v, remove(k, D)) remove(k, new()) = new() where k and j are keys, v is a value, D is an associative array, and new() creates a new, empty associative array. === Example === Suppose that the set of loans made by a library is represented in a data structure.
=== Example ===
Suppose that the set of loans made by a library is represented in a data structure. Each book in a library may be checked out by one patron at a time. However, a single patron may be able to check out multiple books. Therefore, the information about which books are checked out to which patrons may be represented by an associative array, in which the books are the keys and the patrons are the values. Using notation from Python or JSON, the data structure would be a dictionary literal mapping each checked-out title to its patron, for example {"Great Expectations": "John", ...}. A lookup operation on the key "Great Expectations" would return "John". If John returns his book, that would cause a deletion operation, and if Pat checks out a book, that would cause an insertion operation, leading to a different state in which "Great Expectations" no longer appears and a new entry maps the book Pat checked out to "Pat".
== Implementation ==
For dictionaries with very few mappings, it may make sense to implement the dictionary using an association list, which is a linked list of mappings. With this implementation, the time to perform the basic dictionary operations is linear in the total number of mappings. However, it is easy to implement and the constant factors in its running time are small. Another very simple implementation technique, usable when the keys are restricted to a narrow range, is direct addressing into an array: the value for a given key k is stored at the array cell A[k], or if there is no mapping for k then the cell stores a special sentinel value that indicates the lack of a mapping. This technique is simple and fast, with each dictionary operation taking constant time. However, the space requirement for this structure is the size of the entire keyspace, making it impractical unless the keyspace is small. The two major approaches for implementing dictionaries are a hash table or a search tree.
=== Hash table implementations ===
The most frequently used general-purpose implementation of an associative array is with a hash table: an array combined with a hash function that maps each key to a "bucket" of the array. The basic idea behind a hash table is that accessing an element of an array via its index is a simple, constant-time operation. Therefore, the average overhead of an operation for a hash table is only the computation of the key's hash, combined with accessing the corresponding bucket within the array. As such, hash tables usually perform in O(1) time, and usually outperform alternative implementations. Hash tables must be able to handle collisions: the mapping by the hash function of two different keys to the same bucket of the array. The two most widespread approaches to this problem are separate chaining and open addressing. In separate chaining, the array does not store the value itself but stores a pointer to another container, usually an association list, that stores all the values matching the hash. By contrast, in open addressing, if a hash collision is found, the table seeks an empty spot in the array to store the value in a deterministic manner, usually by looking at the next immediate position in the array. Open addressing has a lower cache miss ratio than separate chaining when the table is mostly empty. However, as the table fills up, open addressing's performance degrades sharply. Additionally, separate chaining uses less memory in most cases, unless the entries are very small (less than four times the size of a pointer).
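As a rough illustration of separate chaining, the following Haskell sketch keeps a fixed number of buckets, each holding an association list; restricting keys to Int and using mod as the "hash" are simplifications assumed here for brevity, not how production hash tables are built.

```haskell
import Data.Array (Array, listArray, (!), (//))

numBuckets :: Int
numBuckets = 16

newtype Table v = Table (Array Int [(Int, v)])

emptyTable :: Table v
emptyTable = Table (listArray (0, numBuckets - 1) (replicate numBuckets []))

bucketOf :: Int -> Int
bucketOf k = k `mod` numBuckets   -- stand-in for a real hash function

insertT :: Int -> v -> Table v -> Table v
insertT k v (Table a) =
  let i     = bucketOf k
      chain = (k, v) : filter ((/= k) . fst) (a ! i)  -- overwrite any existing mapping
  in  Table (a // [(i, chain)])

lookupT :: Int -> Table v -> Maybe v
lookupT k (Table a) = lookup k (a ! bucketOf k)

deleteT :: Int -> Table v -> Table v
deleteT k (Table a) =
  let i = bucketOf k
  in  Table (a // [(i, filter ((/= k) . fst) (a ! i))])
```

Two keys that hash to the same bucket simply end up in the same chain, which is the collision-handling behaviour described above; open addressing would instead probe for a free slot in the array itself.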
=== Tree implementations ===
==== Self-balancing binary search trees ====
Another common approach is to implement an associative array with a self-balancing binary search tree, such as an AVL tree or a red–black tree. Compared to hash tables, these structures have both strengths and weaknesses. The worst-case performance of self-balancing binary search trees is significantly better than that of a hash table, with a time complexity in big O notation of O(log n). This is in contrast to hash tables, whose worst-case performance involves all elements sharing a single bucket, resulting in O(n) time complexity. In addition, and like all binary search trees, self-balancing binary search trees keep their elements in order. Thus, traversing their elements follows a least-to-greatest pattern, whereas traversing a hash table can result in elements appearing in seemingly random order. Because they are kept in order, tree-based maps can also satisfy range queries (find all values between two bounds), whereas a hashmap can only find exact values. However, hash tables have a much better average-case time complexity of O(1) than self-balancing binary search trees, and their worst-case performance is highly unlikely when a good hash function is used. A self-balancing binary search tree can be used to implement the buckets for a hash table that uses separate chaining. This allows for average-case constant lookup, but assures a worst-case performance of O(log n). However, this introduces extra complexity into the implementation and may cause even worse performance for smaller hash tables, where the time spent inserting into and balancing the tree is greater than the time needed to perform a linear search on all elements of a linked list or similar data structure.
==== Other trees ====
Associative arrays may also be stored in unbalanced binary search trees or in data structures specialized to a particular type of keys such as radix trees, tries, Judy arrays, or van Emde Boas trees, though the relative performance of these implementations varies. For instance, Judy trees have been found to perform less efficiently than hash tables, while carefully selected hash tables generally perform more efficiently than adaptive radix trees, with potentially greater restrictions on the data types they can handle. The advantages of these alternative structures come from their ability to handle additional associative array operations, such as finding the mapping whose key is the closest to a queried key when the query is absent in the set of mappings.
== Ordered dictionary ==
The basic definition of a dictionary does not mandate an order. To guarantee a fixed order of enumeration, ordered versions of the associative array are often used. There are two senses of an ordered dictionary: either the order of enumeration is always deterministic for a given set of keys, by sorting (the case for tree-based implementations, one representative being the <map> container of C++), or the order of enumeration is key-independent and is instead based on the order of insertion (the case for the "ordered dictionary" in the .NET Framework, the LinkedHashMap of Java, and the standard dict in Python, which preserves insertion order as of Python 3.7). The latter sense is more common. Such ordered dictionaries can be implemented using an association list, by overlaying a doubly linked list on top of a normal dictionary, or by moving the actual data out of the sparse (unordered) array and into a dense insertion-ordered one.
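A brief sketch of the first, sorted-order sense using Haskell's Data.Map (a balanced-tree map); the sample phone-book data is made up purely for illustration.

```haskell
import qualified Data.Map as Map

phones :: Map.Map String Int
phones = Map.fromList [("carol", 3), ("alice", 1), ("dave", 4), ("bob", 2)]

-- Keys come back in sorted order, regardless of insertion order:
inOrder :: [(String, Int)]
inOrder = Map.toAscList phones   -- [("alice",1),("bob",2),("carol",3),("dave",4)]

-- A range query: all entries whose keys fall between two bounds.
between :: String -> String -> Map.Map String Int -> Map.Map String Int
between lo hi = Map.filterWithKey (\k _ -> lo <= k && k <= hi)
```

An insertion-ordered dictionary in the second sense would instead have to remember the sequence of inserts separately, for example with the doubly linked list overlay mentioned above.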
== Language support ==
Associative arrays can be implemented in any programming language as a package, and many language systems provide them as part of their standard library. In some languages, they are not only built into the standard system, but have special syntax, often using array-like subscripting. Built-in syntactic support for associative arrays was introduced in 1969 by SNOBOL4, under the name "table". TMG offered tables with string keys and integer values. MUMPS made multi-dimensional associative arrays, optionally persistent, its key data structure. SETL supported them as one possible implementation of sets and maps. Most modern scripting languages, starting with AWK and including Rexx, Perl, PHP, Tcl, JavaScript, Maple, Python, Ruby, Wolfram Language, Go, and Lua, support associative arrays as a primary container type. In many more languages, they are available as library functions without special syntax. In Smalltalk, Objective-C, .NET, Python, REALbasic, Swift, VBA and Delphi they are called dictionaries; in Perl, Ruby and Seed7 they are called hashes; in C++, C#, Java, Go, Clojure, Scala, OCaml, and Haskell they are called maps (see map (C++), unordered_map (C++), and Map); in Common Lisp and Windows PowerShell, they are called hash tables (since both typically use this implementation); in Maple and Lua, they are called tables. In PHP and R, all arrays can be associative, except that the keys are limited to integers and strings. In JavaScript (see also JSON), all objects behave as associative arrays with string-valued keys, while the Map and WeakMap types take arbitrary objects as keys. In Lua, they are used as the primitive building block for all data structures. In Visual FoxPro, they are called Collections. The D language also supports associative arrays.
== Permanent storage ==
Many programs using associative arrays will need to store that data in a more permanent form, such as a computer file. A common solution to this problem is a generalized concept known as archiving or serialization, which produces a text or binary representation of the original objects that can be written directly to a file. This is most commonly implemented in the underlying object model, such as .NET or Cocoa, which includes standard functions that convert the internal data into text. The program can create a complete text representation of any group of objects by calling these methods, which are almost always already implemented in the base associative array class. For programs that use very large data sets, this sort of individual file storage is not appropriate, and a database management system (DBMS) is required. Some DBMSs natively store associative arrays by serializing the data and then storing that serialized data together with the key. Individual arrays can then be loaded or saved from the database using the key to refer to them. These key–value stores have been used for many years and have a history as long as that of the more common relational databases (RDBs), but a lack of standardization, among other reasons, limited their use to certain niche roles. RDBs were used for these roles in most cases, although saving objects to an RDB can be complicated, a problem known as object-relational impedance mismatch. After approximately 2010, the need for high-performance databases suitable for cloud computing and more closely matching the internal structure of the programs using them led to a renaissance in the key–value store market. These systems can store and retrieve associative arrays in a native fashion, which can greatly improve performance in common web-related workflows.
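As a minimal illustration of the archiving idea, the sketch below serializes a Haskell Data.Map to a file using a plain show/read text representation; real object models and key-value stores use richer binary or structured formats, so this only conveys the shape of the approach.

```haskell
import qualified Data.Map as Map

type Store = Map.Map String String

-- Write the map out as its textual representation.
saveStore :: FilePath -> Store -> IO ()
saveStore path = writeFile path . show . Map.toList

-- Read it back; `read` here assumes the file was produced by saveStore.
loadStore :: FilePath -> IO Store
loadStore path = Map.fromList . read <$> readFile path
```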
== See also ==
Tuple
Function (mathematics)
== References ==
== External links ==
NIST's Dictionary of Algorithms and Data Structures: Associative Array
Wikipedia/Map_(computer_science)
Total functional programming (also known as strong functional programming, to be contrasted with ordinary, or weak functional programming) is a programming paradigm that restricts the range of programs to those that are provably terminating.
== Restrictions ==
Termination is guaranteed by the following restrictions:
A restricted form of recursion, which operates only upon 'reduced' forms of its arguments, such as Walther recursion, substructural recursion, or "strongly normalizing" as proven by abstract interpretation of code.
Every function must be a total (as opposed to partial) function. That is, it must have a definition for everything inside its domain. There are several possible ways to extend commonly used partial functions such as division to be total: choosing an arbitrary result for inputs on which the function is normally undefined (such as ∀x ∈ ℕ. x ÷ 0 = 0 for division); adding another argument to specify the result for those inputs; or excluding them by use of type system features such as refinement types.
These restrictions mean that total functional programming is not Turing-complete. However, the set of algorithms that can be used is still huge. For example, any algorithm for which an asymptotic upper bound can be calculated (by a program that itself only uses Walther recursion) can be trivially transformed into a provably terminating function by using the upper bound as an extra argument decremented on each iteration or recursion. For example, quicksort is not trivially shown to be substructurally recursive, but it only recurs to a maximum depth of the length of the vector (worst-case time complexity O(n²)). A quicksort implementation on lists (which would be rejected by a substructural recursion checker), together with a variant made acceptable by using the length of the list as an explicit limit, is sketched below in Haskell.
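The identifiers in this sketch are illustrative; the bounded variant simply threads the length of the input as an extra argument that strictly decreases on every recursive call, which is the transformation described above.

```haskell
-- Ordinary quicksort on lists: the recursive calls act on filtered sublists,
-- which a purely substructural checker does not recognise as smaller arguments.
qsort :: Ord a => [a] -> [a]
qsort []       = []
qsort (a : as) = qsort [x | x <- as, x < a] ++ [a] ++ qsort [x | x <- as, x >= a]

-- The same algorithm with the list length as an explicit, decreasing bound.
qsortBounded :: Ord a => [a] -> [a]
qsortBounded xs = qs xs (length xs)
  where
    qs [] _       = []
    qs _  0       = []   -- unreachable when the bound is correct, but makes every case total
    qs (a : as) n = qs [x | x <- as, x < a] (n - 1) ++ [a] ++ qs [x | x <- as, x >= a] (n - 1)
```

Because each filtered sublist of as has length at most n - 1, the bound never runs out before the list does, so qsortBounded computes the same result as qsort while making termination evident.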
Some classes of algorithms have no theoretical upper bound but do have a practical upper bound (for example, some heuristic-based algorithms can be programmed to "give up" after so many recursions, also ensuring termination). Another outcome of total functional programming is that both strict evaluation and lazy evaluation result in the same behaviour, in principle; however, one or the other may still be preferable (or even required) for performance reasons. In total functional programming, a distinction is made between data and codata—the former is finitary, while the latter is potentially infinite. Such potentially infinite data structures are used for applications such as I/O. Using codata entails the usage of such operations as corecursion. However, it is possible to do I/O in a total functional programming language (with dependent types) also without codata. Both Epigram and Charity could be considered total functional programming languages, even though they do not work in the way Turner specifies in his paper. So could programming directly in plain System F, in Martin-Löf type theory or the Calculus of Constructions.
== See also ==
Termination analysis
== References ==
Wikipedia/Total_functional_programming
In many programming languages, map is a higher-order function that applies a given function to each element of a collection, e.g. a list or set, returning the results in a collection of the same type. It is often called apply-to-all when considered in functional form. The concept of a map is not limited to lists: it works for sequential containers, tree-like containers, or even abstract containers such as futures and promises.
== Examples: mapping a list ==
Suppose we have a list of integers [1, 2, 3, 4, 5] and would like to calculate the square of each integer. To do this, we first define a function to square a single number (shown in Haskell in the sketch below) and then call map with that function and the list, which yields [1, 4, 9, 16, 25], demonstrating that map has gone through the entire list and applied the function square to each element.
=== Visual example ===
Stepping through the mapping process for the list of integers X = [0, 5, 8, 3, 2, 1] under the function f(x) = x + 1, each element is visited in turn and has 1 added to it, producing the new list X' = [1, 6, 9, 4, 3, 2]. The map function is provided as part of Haskell's base Prelude (i.e. its "standard library"); an equivalent implementation appears in the sketch below.
== Generalization ==
In Haskell, the polymorphic function map :: (a -> b) -> [a] -> [b] is generalized to a polytypic function fmap :: Functor f => (a -> b) -> f a -> f b, which applies to any type belonging to the Functor type class. The type constructor of lists [] can be defined as an instance of the Functor type class using the map function from the previous example, and other examples of Functor instances include trees; mapping over a tree applies the function to the value at every node or leaf while preserving the tree's shape (see the sketch below). For every instance of the Functor type class, fmap is contractually obliged to obey the functor laws: fmap id = id and fmap (f . g) = fmap f . fmap g, where . denotes function composition in Haskell. Among other uses, this allows defining element-wise operations for various kinds of collections.
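Collecting the Haskell definitions referred to above into one sketch; the concrete Tree type (values at the leaves) is one conventional choice rather than a canonical one, and map' is renamed only to avoid clashing with the Prelude's map.

```haskell
square :: Int -> Int
square x = x * x

squares :: [Int]
squares = map square [1, 2, 3, 4, 5]   -- [1, 4, 9, 16, 25]

-- map as it is defined in the Prelude (up to naming):
map' :: (a -> b) -> [a] -> [b]
map' _ []       = []
map' f (x : xs) = f x : map' f xs

-- Lists as a Functor (the Prelude already provides this instance):
--   instance Functor [] where
--     fmap = map

-- A binary tree with values at the leaves, and its Functor instance:
data Tree a = Leaf a | Fork (Tree a) (Tree a)
  deriving Show

instance Functor Tree where
  fmap f (Leaf x)   = Leaf (f x)
  fmap f (Fork l r) = Fork (fmap f l) (fmap f r)

-- Mapping over a tree preserves its shape:
--   fmap square (Fork (Leaf 2) (Leaf 3))  ==  Fork (Leaf 4) (Leaf 9)

-- The functor laws every instance is expected to satisfy:
--   fmap id      == id
--   fmap (f . g) == fmap f . fmap g
```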
=== Category-theoretic background ===
In category theory, a functor F : C → D consists of two maps: one that sends each object A of the category to another object FA, and one that sends each morphism f : A → B to another morphism Ff : FA → FB, which acts as a homomorphism on categories (i.e. it respects the category axioms). Interpreting the universe of data types as a category Type, with morphisms being functions, a type constructor F that is a member of the Functor type class is the object part of such a functor, and fmap :: (a -> b) -> F a -> F b is the morphism part. The functor laws described above are precisely the category-theoretic functor axioms for this functor. Functors can also be objects in categories, with "morphisms" called natural transformations. Given two functors F, G : C → D, a natural transformation η : F → G consists of a collection of morphisms η_A : FA → GA, one for each object A of the category C, which are 'natural' in the sense that they act as a 'conversion' between the two functors, taking no account of the objects that the functors are applied to. Natural transformations correspond to functions of the form eta :: F a -> G a, where a is a universally quantified type variable – eta knows nothing about the type which inhabits a. The naturality axiom of such functions is automatically satisfied because it is a so-called free theorem, depending on the fact that it is parametrically polymorphic. For example, reverse :: List a -> List a, which reverses a list, is a natural transformation, as is flattenInorder :: Tree a -> List a, which flattens a tree from left to right, and even sortBy :: (a -> a -> Bool) -> List a -> List a, which sorts a list based on a provided comparison function.
== Optimizations ==
The mathematical basis of maps allows for a number of optimizations. The composition law ensures that both (map f . map g) list and map (f . g) list lead to the same result; that is, map(f) ∘ map(g) = map(f ∘ g). However, the second form is more efficient to compute than the first form, because each map requires rebuilding an entire list from scratch. Therefore, compilers will attempt to transform the first form into the second; this type of optimization is known as map fusion and is the functional analog of loop fusion. Map functions can be and often are defined in terms of a fold such as foldr, which means one can do a map-fold fusion: foldr f z . map g is equivalent to foldr (f . g) z. The implementation of map above on singly linked lists is not tail-recursive, so it may build up a lot of frames on the stack when called with a large list. Many languages alternatively provide a "reverse map" function, which is equivalent to reversing a mapped list, but is tail-recursive; an implementation using the fold-left function is given in the sketch below. Since reversing a singly linked list is also tail-recursive, reverse and reverse-map can be composed to perform a normal map in a tail-recursive way, though it requires performing two passes over the list.
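Both fold-based definitions mentioned above might look like the following sketch; the names mapViaFoldr, revMap, and mapTailRec are illustrative.

```haskell
-- map written as a right fold; like the direct definition, it is not tail-recursive.
mapViaFoldr :: (a -> b) -> [a] -> [b]
mapViaFoldr f = foldr (\x acc -> f x : acc) []

-- "Reverse map": a tail-recursive left fold that yields the mapped elements
-- in reverse order.
revMap :: (a -> b) -> [a] -> [b]
revMap f = foldl (\acc x -> f x : acc) []

-- Composing with reverse recovers an in-order map at the cost of a second pass.
mapTailRec :: (a -> b) -> [a] -> [b]
mapTailRec f = reverse . revMap f
```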
== Language comparison ==
The map function originated in functional programming languages. The language Lisp introduced a map function called maplist in 1959, with slightly different versions already appearing in 1958. This is the original definition for maplist, mapping a function over successive rest lists:
maplist[x;f] = [null[x] -> NIL;T -> cons[f[x];maplist[cdr[x];f]]]
The function maplist is still available in newer Lisps like Common Lisp, though functions like mapcar or the more generic map would be preferred. Squaring the elements of a list with maplist requires the supplied function to square the first element of each successive rest list, whereas with mapcar the function is applied directly to each element. Today mapping functions are supported (or may be defined) in many procedural, object-oriented, and multi-paradigm languages as well: in C++'s Standard Library it is called std::transform, and in C# (3.0)'s LINQ library it is provided as an extension method called Select. Map is also a frequently used operation in high-level languages such as ColdFusion Markup Language (CFML), Perl, Python, and Ruby; the operation is called map in all four of these languages. A collect alias for map is also provided in Ruby (from Smalltalk). Common Lisp provides a family of map-like functions; the one corresponding to the behavior described here is called mapcar (-car indicating access using the CAR operation). There are also languages with syntactic constructs providing the same functionality as the map function. Map is sometimes generalized to accept dyadic (2-argument) functions that can apply a user-supplied function to corresponding elements from two lists. Some languages use special names for this, such as map2 or zipWith. Languages using explicit variadic functions may have versions of map with variable arity to support variable-arity functions. Map over two or more lists raises the question of how to handle lists of different lengths, and languages differ on this: some raise an exception; some stop after the length of the shortest list and ignore extra items on the other lists; and some continue on to the length of the longest list, passing a placeholder value indicating "no value" to the function for the lists that have already ended. In languages which support first-class functions and currying, map may be partially applied to lift a function that works on only one value to an element-wise equivalent that works on an entire container; for example, map square is a Haskell function which squares each element of a list.
== See also ==
Functor (functional programming)
Zipping (computer science) or zip, mapping 'list' over multiple lists
Filter (higher-order function)
Fold (higher-order function)
foreach loop
Free monoid
Functional programming
Higher-order function
List comprehension
Map (parallel pattern)
== References ==
Wikipedia/Map_(higher-order_function)