**Nandrolone**
Nandrolone:
Nandrolone, also known as 19-nortestosterone, is an endogenous androgen which exists in the male body at a ratio of about 1:50 compared to testosterone. It is also an anabolic steroid (AAS) which is medically used in the form of esters such as nandrolone decanoate (brand name Deca-Durabolin) and nandrolone phenylpropionate (brand name Durabolin). Nandrolone esters are used in the treatment of anemias, cachexia (muscle wasting syndrome), osteoporosis, breast cancer, and other indications. They are not used by mouth and instead are given by injection into muscle or fat. Side effects of nandrolone esters include symptoms of masculinization like acne, increased hair growth, and voice changes, as well as decreased sexual desire in men, owing to nandrolone's ability to suppress endogenous testosterone synthesis while not being a sufficient androgen itself. They are synthetic androgens and anabolic steroids and hence are agonists of the androgen receptor (AR), the biological target of androgens like testosterone and dihydrotestosterone (DHT). Nandrolone has strong anabolic effects and weak androgenic effects, which give it a mild side effect profile and make it especially suitable for use in women and children. Nandrolone also has metabolites, such as 5α-dihydronandrolone, that act as long-lasting prodrugs in the body.
Nandrolone esters were first described and introduced for medical use in the late 1950s. They are among the most widely used AAS worldwide. In addition to their medical use, nandrolone esters are used to improve physique and performance, and are said to be the most widely used AAS for such purposes. The drugs are controlled substances in many countries and so non-medical use is generally illicit.
Medical uses:
Nandrolone esters are used clinically, although increasingly rarely, for people in catabolic states with major burns, cancer, and AIDS, and an ophthalmological formulation was available to support cornea healing. The positive effects of nandrolone esters include muscle growth, appetite stimulation, and increased red blood cell production and bone density. Clinical studies have shown them to be effective in treating anemia, osteoporosis, and breast cancer.
Nandrolone sulfate has been used in an eye drop formulation as an ophthalmic medication.
Non-medical uses:
Nandrolone esters are used for physique- and performance-enhancing purposes by competitive athletes, bodybuilders, and powerlifters.
Side effects:
Side effects of nandrolone esters include masculinization among others. In women, nandrolone and nandrolone esters have been reported to produce increased libido, acne, facial and body hair growth, voice changes, and clitoral enlargement. However, the masculinizing effects of nandrolone and its esters are reported to be slighter than those of testosterone. Nandrolone has also been found to produce penile growth in prepubertal boys. Amenorrhea and menorrhagia have been reported as side effects of nandrolone cypionate. Nandrolone may theoretically produce erectile dysfunction as a side effect, although there is no clinical evidence to support this notion at present. Side effects of high doses of nandrolone may include cardiovascular toxicity as well as hypogonadism and infertility. Nandrolone is thought not to produce scalp hair loss, although this too is theoretical.
Pharmacology:
Pharmacodynamics:
Nandrolone is an agonist of the AR, the biological target of androgens like testosterone and DHT. Unlike testosterone and certain other AAS, nandrolone is not potentiated in androgenic tissues like the scalp, skin, and prostate, hence deleterious effects in these tissues are lessened. This is because nandrolone is metabolized by 5α-reductase to the much weaker AR ligand 5α-dihydronandrolone (DHN), which has both reduced affinity for the AR relative to nandrolone in vitro and weaker AR agonistic potency in vivo. The lack of alkylation on the 17α-carbon drastically reduces the hepatotoxic potential of nandrolone. Estrogenic effects resulting from reaction with aromatase are also reduced owing to lessened interaction with the enzyme, but effects such as gynecomastia and reduced libido may still occur at sufficiently high doses. In addition to its AR agonistic activity, and unlike many other AAS, nandrolone is also a potent progestogen. It binds to the progesterone receptor with approximately 22% of the affinity of progesterone. The progestogenic activity of nandrolone serves to augment its antigonadotropic effects, as antigonadotropic action is a known property of progestogens.
Anabolic and androgenic activity:
Nandrolone has a very high ratio of anabolic to androgenic activity. In fact, many nandrolone-like AAS, and even nandrolone itself, are said to have among the highest ratios of anabolic to androgenic effect of all AAS. This is attributed to the fact that whereas testosterone is potentiated via conversion into dihydrotestosterone (DHT) in androgenic tissues, the opposite is true with nandrolone and similar AAS (i.e., other 19-nortestosterone derivatives). As such, nandrolone-like AAS, namely nandrolone esters, are the most frequently used AAS in clinical settings in which anabolic effects are desired; for instance, in the treatment of AIDS-associated cachexia, severe burns, and chronic obstructive pulmonary disease. However, AAS with a very high ratio of anabolic to androgenic action like nandrolone still have significant androgenic effects and can produce symptoms of masculinization like hirsutism and voice deepening in women and children with extended use.
Pharmacokinetics:
The oral activity of nandrolone has been studied. With oral administration in rodents, nandrolone had about one-tenth of the potency of subcutaneous injection. Nandrolone has very low affinity for human serum sex hormone-binding globulin (SHBG), about 5% of that of testosterone and 1% of that of DHT. It is metabolized by the enzyme 5α-reductase, among others. Nandrolone is less susceptible to metabolism by 5α-reductase and 17β-hydroxysteroid dehydrogenase than testosterone. This results in it being transformed less in so-called "androgenic" tissues like the skin, hair follicles, and prostate gland and in the kidneys, respectively. Metabolites of nandrolone include 5α-dihydronandrolone, 19-norandrosterone, and 19-noretiocholanolone, and these metabolites may be detected in urine. Single intramuscular injections of 100 mg nandrolone phenylpropionate or nandrolone decanoate have been found to produce an anabolic effect for 10 to 14 days and 20 to 25 days, respectively. Conversely, unesterified nandrolone has been used by intramuscular injection once daily.
Chemistry:
Nandrolone, also known as 19-nortestosterone (19-NT) or as estrenolone, as well as estra-4-en-17β-ol-3-one or 19-norandrost-4-en-17β-ol-3-one, is a naturally occurring estrane (19-norandrostane) steroid and a derivative of testosterone (androst-4-en-17β-ol-3-one). It is specifically the C19 demethylated (nor) analogue of testosterone. Nandrolone is an endogenous intermediate in the production of estradiol from testosterone via aromatase in mammals including humans and is present in the body naturally in trace amounts. It can be detected during pregnancy in women. Nandrolone esters have an ester such as decanoate or phenylpropionate attached at the C17β position.
Derivatives:
Esters:
A variety of esters of nandrolone have been marketed and used medically. The most commonly used esters are nandrolone decanoate and to a lesser extent nandrolone phenylpropionate. Examples of other nandrolone esters that have been marketed and used medically include nandrolone cyclohexylpropionate, nandrolone cypionate, nandrolone hexyloxyphenylpropionate, nandrolone laurate, nandrolone sulfate, and nandrolone undecanoate.
Anabolic steroids:
Nandrolone is the parent compound of a large group of AAS. Notable examples include the non-17α-alkylated trenbolone and the 17α-alkylated ethylestrenol (ethylnandrol) and metribolone (R-1881), as well as the 17α-alkylated designer steroids norboletone and tetrahydrogestrinone (THG). Many other derivatives of nandrolone have also been developed as AAS.
Progestins:
Nandrolone, together with ethisterone (17α-ethynyltestosterone), is also the parent compound of a large group of progestins, the norethisterone (17α-ethynyl-19-nortestosterone) derivatives. This family is subdivided into two groups: the estranes and the gonanes. The estranes include norethisterone (norethindrone), norethisterone acetate, norethisterone enanthate, lynestrenol, etynodiol diacetate, and noretynodrel, while the gonanes include norgestrel, levonorgestrel, desogestrel, etonogestrel, gestodene, norgestimate, dienogest (actually a 17α-cyanomethyl-19-nortestosterone derivative), and norelgestromin.
Synthesis:
The elaboration of a method for the reduction of aromatic rings to the corresponding dihydrobenzenes under controlled conditions by A. J. Birch opened a convenient route to compounds related to the putative 19-norprogesterone.
This reaction, now known as the Birch reduction, is typified by the treatment of the monomethyl ether of estradiol (1) with a solution of lithium metal in liquid ammonia in the presence of alcohol as a proton source. The initial step of the reaction consists of 1,4-dimetalation of the most electron-deficient positions of the aromatic ring, which in the case of an estrogen are the 1 and 4 positions. Reaction of the intermediate with the proton source leads to a dihydrobenzene; a special virtue of this sequence in steroids is the fact that the double bond at position 2 in effect becomes an enol ether moiety. Treatment of this product (2) with a weak acid, for example oxalic acid, leads to hydrolysis of the enol ether, producing the β,γ-unconjugated ketone 3. Hydrolysis under more strenuous conditions (mineral acids) results in migration/conjugation of the olefin to yield nandrolone (4).
Esters:
Treatment of 4 with decanoic anhydride and pyridine affords nandrolone decanoate.
Acylation of 4 with phenylpropionyl chloride yields nandrolone phenpropionate.
Detection in body fluids:
Nandrolone use is directly detectable in hair or indirectly detectable in urine by testing for the presence of 19-norandrosterone, a metabolite. The International Olympic Committee has set a limit of 2.0 μg/L of 19-norandrosterone in urine as the upper limit, beyond which an athlete is suspected of doping. In the largest nandrolone study, performed on 621 athletes at the 1998 Nagano Olympic Games, no athlete tested over 0.4 μg/L. 19-Norandrosterone was identified as a trace contaminant in commercial preparations of androstenedione, which until 2004 was available without a prescription as a dietary supplement in the U.S. A number of nandrolone cases in athletics occurred in 1999, including high-profile athletes such as Merlene Ottey, Dieter Baumann and Linford Christie. However, the following year the detection method for nandrolone in use at the time was shown to be faulty. Mark Richardson, a British Olympic relay runner who tested positive for the substance, gave a significant number of urine samples in a controlled environment and delivered a positive test for the drug, demonstrating that false positives could occur; this led to the overturning of his competitive ban. Heavy consumption of the essential amino acid lysine (as indicated in the treatment of cold sores) has allegedly produced false positives in some cases and was cited by American shot putter C. J. Hunter as the reason for his positive test, though in 2004 he admitted to a federal grand jury that he had injected nandrolone. A possible cause of incorrect urine test results is the presence of metabolites from other AAS, though modern urinalysis can usually determine the exact AAS used by analyzing the ratio of the two remaining nandrolone metabolites. As a result of the numerous overturned verdicts, the testing procedure was reviewed by UK Sport. On October 5, 2007, three-time Olympic gold medalist in track and field Marion Jones admitted to use of the drug and was sentenced to six months in jail for lying to a federal grand jury in 2000. Mass spectrometry is also used to detect small amounts of nandrolone in urine samples, as it has a unique molar mass.
History:
Nandrolone was first synthesized in 1950. It was first introduced, as nandrolone phenylpropionate, in 1959, and then as nandrolone decanoate in 1962, followed by additional esters.
Society and culture:
Generic names:
Nandrolone is the generic name of the drug and its INN, BAN, DCF, and DCIT. The formal generic names of nandrolone esters include nandrolone cyclohexylpropionate (BANM), nandrolone cyclotate (USAN), nandrolone decanoate (USAN, USP, BANM, JAN), nandrolone laurate (BANM), nandrolone phenpropionate (USP), and nandrolone phenylpropionate (BANM, JAN).
Doping in sports:
Nandrolone was probably among the first AAS to be used as a doping agent in sports in the 1960s. It has been banned at the Olympics since 1974. There are many known cases of doping in sports with nandrolone esters by professional athletes.
Research:
Nandrolone esters have been studied for several indications. They were intensively studied for osteoporosis, and they increased calcium uptake and decreased bone loss, but they caused virilization in about half of the women who took them and were mostly abandoned for this use when better drugs like the bisphosphonates became available. They have also been studied in clinical trials for chronic kidney failure, aplastic anemia, and as male contraceptives.
**Diurnal motion**
Diurnal motion:
Diurnal motion (from Latin diurnus 'daily', from Latin diēs 'day') is an astronomical term referring to the apparent motion of celestial objects (e.g. the Sun and stars) around Earth, or more precisely around the two celestial poles, over the course of one day. It is caused by Earth's rotation around its axis, so almost every star appears to follow a circular arc path, called the diurnal circle, often depicted in star trail photography.
The time for one complete rotation is 23 hours, 56 minutes, and 4.09 seconds – one sidereal day. The first experimental demonstration of this motion was conducted by Léon Foucault. Because Earth orbits the Sun once a year, the sidereal time at any given place and time will gain about four minutes against local civil time, every 24 hours, until, after a year has passed, one additional sidereal "day" has elapsed compared to the number of solar days that have gone by.
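A quick back-of-the-envelope check makes this bookkeeping concrete; the 365.25-day year used below is an illustrative assumption.

```python
solar_day_s = 24 * 3600                      # mean solar day in seconds
sidereal_day_s = 23 * 3600 + 56 * 60 + 4.09  # sidereal day, from the figure above

gain_per_day_min = (solar_day_s - sidereal_day_s) / 60
print(gain_per_day_min)                      # ~3.93 minutes gained per civil day

days_per_year = 365.25                       # assumed number of solar days in a year
extra_sidereal_days = (solar_day_s - sidereal_day_s) * days_per_year / sidereal_day_s
print(extra_sidereal_days)                   # ~1.0, i.e. one extra sidereal "day" per year
```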
Relative direction:
The relative directions of diurnal motion in the Northern Celestial Hemisphere are as follows:
Facing north, below Polaris: rightward, or eastward
Facing north, above Polaris: leftward, or westward
Facing south: rightward, or westward
Thus, northern circumpolar stars move counterclockwise around Polaris, the north pole star.
At the North Pole, the cardinal directions do not apply to diurnal motion. Within the circumpolar circle, all the stars move simply rightward, or looking directly overhead, counterclockwise around the zenith, where Polaris is.
Southern Celestial Hemisphere observers are to replace north with south, left with right, and Polaris with Sigma Octantis, sometimes called the south pole star. The circumpolar stars move clockwise around Sigma Octantis. East and west are not interchanged.
As seen from the Equator, the two celestial poles are on the horizon due north and south, and the motion is counterclockwise (i.e. leftward) around Polaris and clockwise (i.e. rightward) around Sigma Octantis. All motion is westward, except for the two fixed points.
Apparent speed:
The daily arc path of an object on the celestial sphere, including the possible part below the horizon, has a length proportional to the cosine of the declination. Thus, the speed of the diurnal motion of a celestial object equals this cosine times 15° per hour, 15 arcminutes per minute, or 15 arcseconds per second.
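As a small sketch of that relation (the function name and unit choices are our own, not from the text), the apparent speed for a given declination can be computed directly:

```python
import math

def diurnal_speed_deg_per_hour(declination_deg: float) -> float:
    """Apparent angular speed of diurnal motion: cos(declination) * 15 degrees per hour."""
    return 15.0 * math.cos(math.radians(declination_deg))

print(diurnal_speed_deg_per_hour(0.0))   # 15.0 deg/hour for an object on the celestial equator
print(diurnal_speed_deg_per_hour(60.0))  # 7.5 deg/hour for a star at declination 60 deg
```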
Over a given period of time, the angular distance travelled by an object along or near the celestial equator may be compared to the angular diameter of familiar objects:
up to one Sun or Moon diameter (about 0.5° or 30') every 2 minutes
up to one diameter of the planet Venus at inferior conjunction (about 1' or 60") about every 4 seconds
about 2,000 diameters of the largest stars per second
Star trail and time-lapse photography capture diurnal motion blur. The apparent motion of stars near the celestial pole seems slower than that of stars closer to the celestial equator. Conversely, following the diurnal motion with the camera to eliminate its arcing effect on a long exposure is best done with an equatorial mount, which requires adjusting only the right ascension; a telescope may have a sidereal motor drive to do that automatically.
**Facial prosthetic**
Facial prosthetic:
A facial prosthetic or facial prosthesis is an artificial device used to change or adapt the outward appearance of a person's face or head.
When used in the theatre, film or television industry, facial prosthetic makeup alters a person's normal face into something extraordinary. Facial prosthetics can be made from a wide range of materials - including gelatin, foam latex, silicone, and cold foam. Effects can be as subtle as altering the curve of a cheek or nose, or making someone appear older or younger than they are. A facial prosthesis can also transform an actor into any creature, such as legendary creatures, animals and others.
To apply facial prosthetics, Pros-Aide, Beta Bond, Medical Adhesive or Liquid Latex is generally used. Pros-Aide is a water-based adhesive that has been the "industry standard" for over 30 years. It's completely waterproof and is formulated for use with sensitive skin. It is easily removed with Pros-Aide Remover. BetaBond is growing in popularity among Hollywood artists who say it's easier to remove. Medical Adhesive has the advantage that it's specifically designed not to cause allergies or skin irritation. Liquid Latex can only be used for a few hours, but can be used to create realistic blends from skin to prosthetics.
After application, cosmetics and/or paint is used to color the prosthetics and skin the desired colors, and achieve a realistic transition from skin to prosthetic. This can be done by the wearer, but is often done by a separate, trained artist.
At the end of its use, some prosthetics can be removed simply by being pulled off. Others need special solvents to help remove the prosthetics, such as Pros-Aide Remover (water based and completely safe) for Pros-Aide, Beta Solv for Beta Bond, and medical adhesive remover for medical adhesive.
Prosthetic make-up is becoming increasingly popular for everyday use. This kind of make-up is used by people who wish to significantly alter their features.
History of Facial Prosthetic:
The Emergence in Ancient History:
Artificial facial parts appear to have been worn since antiquity, although direct proof is scarce. Archaeologists discovered a false eye inside the left eye socket of a skull in Iran dating back to around 3000-2900 B.C.; traces of thread were visible around the eye socket, and the false eye appears to have been inserted after the person's death. Gold masks were found on mummies in Ancient Egyptian tombs from around 2500 B.C., along with cosmetic gold and silver coins. These findings represent the earliest known evidence of the craft of facial prosthetics and reflect the social importance placed on the face in ancient times. In Ancient India, body parts such as noses, ears and hands were cut off as punishment for adultery. In the Vedic era, The Sushruta Samhita, a well-known treatise on Indian medicine, reported the surgical reconstruction of the nasal pyramid with a cutaneous flap taken from the forehead region. The chance of success was much lower than it is today, which suggests that other attempts at prosthetic reconstruction in history may have gone unreported. In Mesopotamia around 1810-1750 B.C., punitive mutilations were carried out under King Hammurabi, even though his interest in medicine and morality is recognised; those who mutilated others were punished in retaliation, and the restoration of lost parts encouraged a few attempts at surgical grafting. There is barely any mention of facial prostheses in the writings of the Greco-Roman period; Hippocrates, Galen and Celsus were more interested in the reduction and restraint of long bone fractures than in the treatment of maxillofacial defects.
Facial Prostheses of the Kings in Post-Classical History:
In the Middle Ages, the Byzantines believed that an individual with a severed nose could not become Emperor; the deliberate mutilation of the nose was known as rhinokopia. Leonce (Leontios) ordered such a mutilation of the nose of the Emperor Justinian. History was made when Otto III (980-1002), Emperor of the Holy Roman Empire, visited the tomb of Charlemagne at Aix-la-Chapelle in 1000 A.D. Otto removed a tooth of Charlemagne as a relic and had a gold plate fitted to replace a broken piece of the cadaver's nose. Around the same period, facial prostheses made of ivory were described by Abulcasis (936-1013).
The Birth of Maxillofacial Prosthetics in Modern History (Early Modern Period):
Ambroise Paré (1510-1590), whose clinical knowledge was shaped by military medicine, founded maxillofacial prosthetics and produced the first maxillofacial prosthesis with surgical anchorage. After spending three years learning human anatomy through dissections at the Hôtel-Dieu in Paris, the largest hospital in the kingdom of France, he decided to relocate to Vitré to learn surgery from a barber. He went on to treat severe mutilations as a military surgeon before being appointed "Surgeon of the King" of France (to Henri II and Charles IX).
Materials and Techniques for Facial Prosthesis in Modern History (Late Modern Period):
In the 19th century, during the industrial revolution, newly developed materials available for facial prostheses greatly improved their appearance. Silver and gold, which were stiff and uncomfortable on the face, were replaced by lighter materials. Epitheses were used to mask disfigurement, as they were practical and therapeutically more successful. In 1851, sulphur was incorporated into rubber, allowing Goodyear to obtain vulcanite, which turned out to be a vital component of conventional dental and facial prostheses: an easy-to-work, colourable material that could be used in both hard and soft structures. The application of vulcanite for facial prostheses was also mentioned by Norman Kingsley (1829-1931) and Apoléoni Preterre (1821-1893) in 1864 and 1866, respectively. In 1879, Kingsley used celluloid. By the end of the 19th century, the French physician and dentist Claude Martin (1843-1911) gave maxillofacial prosthetics a new dimension by combining maxillofacial surgery with dental prosthetics. The terms "surgical" and "prosthesis" were used in conjunction for the first time by Martin in De la prothèse immédiate à la résection des maxillaires. He explained that the use of translucent ceramics for nasal prostheses after amputation was key to achieving a convincing simulation of skin.
Problems:
Being exposed to high temperatures can cause problems when wearing prosthetics. Glues that were sturdy at normal temperatures can become less effective under heat. This could lead to prosthetics falling apart or peeling from the skin.
Higher temperatures can cause sweating which can also affect the durability of the prosthetics. The negative effects of sweating can be prevented by cleaning the skin well with 99% alcohol before applying the adhesive. Another way to ensure that the facial prosthetics stay on once they have been applied is to treat the skin with an anti-perspirant beforehand.
**Genetic erosion**
Genetic erosion:
Genetic erosion (also known as genetic depletion) is a process whereby the already limited gene pool of an endangered species diminishes further when reproductive individuals die before breeding with others in their small population. The term is sometimes used in a narrow sense, such as when describing the loss of particular alleles or genes, as well as more broadly, as when referring to the loss of a phenotype or whole species.
Genetic erosion occurs because each individual organism has many unique genes which get lost when it dies without getting a chance to breed. Low genetic diversity in a population of wild animals and plants leads to a further diminishing gene pool – inbreeding and a weakening immune system can then "fast-track" that species towards eventual extinction.
By definition, endangered species suffer varying degrees of genetic erosion. Many species benefit from a human-assisted breeding program to keep their population viable, thereby avoiding extinction over long time-frames. Small populations are more susceptible to genetic erosion than larger populations.
Genetic erosion is compounded and accelerated by habitat loss and habitat fragmentation – many endangered species are threatened by both. Fragmented habitats create barriers to gene flow between populations.
The gene pool of a species or a population is the complete set of unique alleles that would be found by inspecting the genetic material of every living member of that species or population. A large gene pool indicates extensive genetic diversity, which is associated with robust populations that can survive bouts of intense selection. Meanwhile, low genetic diversity (see inbreeding and population bottlenecks) can cause reduced biological fitness and increase the chance of extinction of that species or population.
Processes and consequences:
Population bottlenecks create shrinking gene pools, which leave fewer and fewer fertile mating partners. The genetic implications can be illustrated by considering the analogy of a high-stakes poker game with a crooked dealer. Consider that the game begins with a 52-card deck (representing high genetic diversity). Reduction of the number of breeding pairs with unique genes resembles the situation where the dealer deals only the same five cards over and over, producing only a few limited "hands".
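A minimal simulation can make the card analogy concrete. The sketch below is our own illustrative Wright-Fisher-style drift model (the population sizes, generation count and allele numbers are arbitrary assumptions, not data from the text); it shows how a small, bottlenecked population loses variants far faster than a large one.

```python
import random

def alleles_remaining(pop_size: int, n_alleles: int = 52,
                      generations: int = 100, seed: int = 1) -> int:
    """Toy drift model: each generation, every individual's allele is drawn
    at random (with replacement) from the previous generation's pool.
    Returns how many distinct starting alleles are still present."""
    rng = random.Random(seed)
    pool = [i % n_alleles for i in range(pop_size)]  # start with a varied "deck" of alleles
    for _ in range(generations):
        pool = [rng.choice(pool) for _ in range(pop_size)]
    return len(set(pool))

# A small (bottlenecked) population keeps only a few of the original variants,
# while a large population retains most of them over the same time span.
print(alleles_remaining(pop_size=50))
print(alleles_remaining(pop_size=5000))
```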
As specimens begin to inbreed, both physical and reproductive congenital effects and defects appear more often. Abnormal sperm increases, infertility rises, and birthrates decline. "Most perilous are the effects on the immune defense systems, which become weakened and less and less able to fight off an increasing number of bacterial, viral, fungal, parasitic, and other disease-producing threats. Thus, even if an endangered species in a bottleneck can withstand whatever human development may be eating away at its habitat, it still faces the threat of an epidemic that could be fatal to the entire population."
Loss of agricultural and livestock biodiversity:
Genetic erosion in agriculture and livestock is the loss of genetic diversity – including the loss of individual genes and the loss of particular recombinants of genes (or gene complexes) – such as that manifested in locally adapted landraces of domesticated animals or plants adapted to the natural environment in which they originated.
The major driving forces behind genetic erosion in crops are variety replacement, land clearing, overexploitation of species, population pressure, environmental degradation, overgrazing, governmental policy, and changing agricultural systems. The main factor, however, is the replacement of local varieties of domestic plants and animals by other varieties or species that are non-local. A large number of varieties can also often be dramatically reduced when commercial varieties are introduced into traditional farming systems. Many researchers believe that the main problem related to agro-ecosystem management is the general tendency towards genetic and ecological uniformity imposed by the development of modern agriculture.
In the case of Animal Genetic Resources for Food and Agriculture, major causes of genetic erosion are reported to include indiscriminate cross-breeding, increased use of exotic breeds, weak policies and institutions in animal genetic resources management, neglect of certain breeds because of a lack of profitability or competitiveness, the intensification of production systems, the effects of diseases and disease management, loss of pastures or other elements of the production environment, and poor control of inbreeding.
Prevention by human intervention, modern science and safeguards:
In situ conservation:
With advances in modern bioscience, several techniques and safeguards have emerged to check the relentless advance of genetic erosion and the resulting acceleration of endangered species towards eventual extinction. However, many of these techniques and safeguards are as yet too expensive to be practical, and so the best way to protect species is to protect their habitat and to let them live in it as naturally as possible.
Wildlife sanctuaries and national parks have been created to preserve entire ecosystems with all the web of species native to the area. Wildlife corridors are created to join fragmented habitats (see Habitat fragmentation) to enable endangered species to travel, meet, and breed with others of their kind. Scientific conservation and modern wildlife management techniques, with the expertise of scientifically trained staff, help manage these protected ecosystems and the wildlife found in them. Wild animals are also translocated and reintroduced to other locations physically when fragmented wildlife habitats are too far and isolated to be able to link together via a wildlife corridor, or when local extinctions have already occurred.
Ex situ conservation:
Modern policies of zoo associations and zoos around the world have begun putting dramatically increased emphasis on keeping and breeding wild-sourced species and subspecies of animals in their registered endangered species breeding programs. These specimens are intended to have a chance to be reintroduced and to survive back in the wild. The main objectives of zoos today have changed, and greater resources are being invested in breeding species and subspecies for the ultimate purpose of assisting conservation efforts in the wild. Zoos do this by maintaining extremely detailed scientific breeding records (i.e. studbooks) and by loaning their wild animals to other zoos around the country (and often globally) for breeding, safeguarding against inbreeding by attempting to maximize genetic diversity wherever possible.
Costly (and sometimes controversial) ex-situ conservation techniques aim to increase the genetic biodiversity on our planet, as well as the diversity in local gene pools, by guarding against genetic erosion. Modern concepts like seedbanks, sperm banks, and tissue banks have become much more commonplace and valuable. Sperm, eggs, and embryos can now be frozen and kept in banks, which are sometimes called "Modern Noah's Arks" or "Frozen Zoos". Cryopreservation techniques are used to freeze these living materials and keep them viable in perpetuity by storing them submerged in liquid nitrogen tanks at very low temperatures. The preserved materials can then be used for artificial insemination, in vitro fertilization, embryo transfer, and cloning methodologies to protect diversity in the gene pool of critically endangered species.
It can be possible to save an endangered species from extinction by preserving only parts of specimens, such as tissues, sperm, eggs, etc. – even after the death of a critically endangered animal, or collected from one found freshly dead, in captivity or from the wild. A new specimen can then be "resurrected" with the help of cloning, so as to give it another chance to breed its genes into the living population of the respective threatened species. Resurrection of dead critically endangered wildlife specimens with the help of cloning is still being perfected, and is still too expensive to be practical, but with time and further advancements in science and methodology it may well become a routine procedure not too far into the future.
Recently, strategies for finding an integrated approach to in situ and ex situ conservation techniques have been given considerable attention, and progress is being made.
**Activity-driven model**
Activity-driven model:
In network science, the activity-driven model is a temporal network model in which each node has a randomly assigned "activity potential", which governs how it links to other nodes over time. Each node i (out of N total) has its activity potential x_i drawn from a given distribution F(x). A sequence of timesteps unfolds, and in each timestep each node i forms ties to m random other nodes at rate a_i = ηx_i (more precisely, it does so with probability a_iΔt per timestep). All links are then deleted after each timestep.
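A minimal sketch of this generative process is shown below; the truncated power-law form for F(x), the function name and all parameter values are illustrative assumptions on our part, not part of the model's definition.

```python
import numpy as np

def simulate_activity_driven(N=1000, T=200, m=2, eta=1.0, gamma=2.1, eps=1e-3, seed=0):
    """Generate an activity-driven temporal network and return each node's
    activity potential x_i together with its time-aggregated out-degree."""
    rng = np.random.default_rng(seed)
    # Sample x_i on [eps, 1] from a density proportional to x^(-gamma) (inverse-CDF method).
    u = rng.random(N)
    x = (eps ** (1 - gamma) + u * (1.0 - eps ** (1 - gamma))) ** (1.0 / (1 - gamma))
    a = np.clip(eta * x, 0.0, 1.0)        # per-timestep activation probabilities (dt = 1)
    out_degree = np.zeros(N, dtype=int)   # links accumulated over all timesteps
    for _ in range(T):
        for i in np.flatnonzero(rng.random(N) < a):   # nodes that activate this step
            targets = rng.choice(N - 1, size=m, replace=False)
            targets[targets >= i] += 1    # shift indices so node i never links to itself
            out_degree[i] += m            # only the out-degree is tracked in this sketch
        # the instantaneous network is discarded before the next timestep
    return x, out_degree

x, k = simulate_activity_driven()
# Each node's aggregated out-degree k should be close to m * eta * x * T.
```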
Properties of time-aggregated network snapshots can be studied in terms of F(x). For example, since each node i will after T timesteps have on average mηx_iT outgoing links, the degree distribution after T timesteps in the time-aggregated network is related to the activity-potential distribution by P_T(k) ∝ F(k / (mηT)).
Spreading behavior according to the SIS epidemic model was investigated on activity-driven networks, and the following condition was derived for large-scale outbreaks to be possible: β/λ > 2⟨a⟩ / (⟨a⟩ + √⟨a²⟩), where β is the per-contact transmission probability, λ is the per-timestep recovery probability, and ⟨a⟩ and ⟨a²⟩ are the first and second moments of the random activity rate a_i.
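As a small illustration of this criterion, the check below plugs sampled activity rates and assumed (hypothetical) transmission and recovery probabilities into the inequality above; the heavy-tailed activity sample is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
a = 0.05 * rng.pareto(2.1, size=10_000) + 1e-3   # illustrative heavy-tailed activity rates
threshold = 2 * a.mean() / (a.mean() + np.sqrt((a ** 2).mean()))

beta, lam = 0.10, 0.05         # assumed per-contact transmission and per-timestep recovery
print(beta / lam > threshold)  # True would indicate that large-scale outbreaks are possible
```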
Extensions:
A variety of extensions to the activity-driven model have been studied. One example is activity-driven networks with attractiveness, in which the links that a given node forms do not attach to other nodes at random, but rather with a probability proportional to a variable encoding nodewise attractiveness. Another example is activity-driven networks with memory, in which activity-levels change according to a self-excitation mechanism.
**Dorrite**
Dorrite:
Dorrite is a silicate mineral that is isostructural with the aenigmatite group. Although it is chemically most similar to the mineral rhönite [Ca2Mg5Ti(Al2Si4)O20], its lack of titanium (Ti) and its dominant Fe3+ content justified its recognition as an independent species. Dorrite is named for Dr. John (Jack) A. Dorr, a late professor at the University of Michigan who did research in the outcrops where dorrite was found in 1982. The mineral has a sub-metallic luster and ranges in color from brownish-black and dark brown to reddish brown.
Discovery:
Dorrite was first reported in 1982 by A. Havette in a basalt-limestone contact on Réunion Island off the coast of Africa. The second report of dorrite was made by Franklin Foit and his associates while examining a paralava from the Powder River Basin, Wyoming, in 1987. Analyses determined that this newly found mineral was surprisingly similar to the mineral rhönite, lacking Ti but presenting dominant Fe3+ in its octahedral sites. Other minerals that coexist with this phase are plagioclase, gehlenite-akermanite, magnetite-magnesioferrite-spinel solid solutions, esseneite, nepheline, wollastonite, Ba-rich feldspar, apatite, ulvöspinel, ferroan sahamalite, and secondary barite and calcite.
Occurrence:
Dorrite can be found in mineral reactions that relate dorrite + magnetite + clinopyroxene, rhönite + magnetite + olivine + clinopyroxene, and aenigmatite + pyroxene + olivine assemblages in nature. These assemblages favor low pressures and high temperatures. Dorrite is stable in strongly oxidizing, high-temperature, low-pressure environments. It occurs in paralava, pyrometamorphic melt rock, formed from the burning of coal beds.
Crystallography:
Researchers conclusively determined that dorrite is triclinic-pseudomonoclinic and twinned by a twofold rotation about the pseudomonoclinic b axis. The parameters for dorrite are a=10.505, b=10.897, c=9.019 Å, α=106.26°, β=95.16°, γ=124.75°.
Chemical composition: Calcium 8.97%, Magnesium 5.44%, Aluminum 6.04%, Iron 37.48%, Silicon 6.28%, Oxygen 35.79%
Oxides: CaO 12.55%, MgO 9.02%, Al2O3 11.41%, Fe2O3 53.59%, SiO2 13.44%
**Master of Information Management**
Master of Information Management:
A Master of Information Management (MIM) is an interdisciplinary degree program designed to provide studies in strategic information management, knowledge management, usability, business administration, information systems, information architecture, information design, computer sciences, policy, ethics, and project management. The degree is relatively new and has typically been developed alongside other, more established programs in university Schools of Information. The MIM degree has emerged to address the growing and unique need for information professionals who understand the conflux of multiple organizational issues across several disciplines.
The MIM degree is distinguished from closely related degrees (for example, Master of Science in Information System Management, Master of Information System Management, Master of Information Systems) that provide focused areas of study in computer science, information technology, information science, telecommunications, or some combination of these.
History:
The first MIM Program in New Zealand was started in 2002 at Victoria University of Wellington.
The first MIM degree program in the United States began in the fall of 2003 at The University of Maryland. Canada's first MIM program was established in Spring 2008.
MIM Programs:
United States: Arizona State University; Missouri Western State University; Syracuse University School of Information Studies; UIUC Graduate School of Library and Information Science; University of Maryland College of Information Studies (Master of Information Management); University of Michigan School of Information; University of Washington Information School; Washington University School of Engineering
Canada: Dalhousie University
Colombia: Escuela colombiana de ingeniería Julio Garavito
New Zealand: Victoria University of Wellington
Australia: RMIT University
United Kingdom: The Information School at The University of Sheffield; London School of Economics and Political Science
Belgium: Katholieke Universiteit Leuven
Czech Republic: Brno University of Technology
The Netherlands: Tilburg University; Maastricht University
Denmark: Copenhagen Business School
India: University of Mumbai; Jamnalal Bajaj Institute of Management Studies, Mumbai; MET (Mumbai Educational Trust) Institute of Management Studies, Bandra, Mumbai; Welingkar Institute of Management Studies, Mumbai; K.J. Somaiya Institute of Management Studies, Mumbai; IES Institute of Management Studies, Mumbai; Thakur Institute of Management Studies and Research, Mumbai; Aditya Institute of Management Studies And Research, Mumbai
Philippines: University of the East; Colegio de San Juan de Letran-Calamba (Master in Management-Information Technology Management)
Poland: Jagiellonian University (Master in Information Management)
Portugal: NOVA University Lisbon (NOVA IMS - NOVA Information Management School)
Spain: Universidad de Murcia
Iceland: Reykjavík University
Taiwan: National Dong Hwa University School of Management; National Taiwan University of Science and Technology
Curriculum:
The University of Maryland College of Information Studies describes the MIM degree as a focus "on ways information and technology can be best organized, implemented and managed to meet the needs of end users in a variety of business, legal, nonprofit, government and institutional settings, which are affected by changes in the global environment every day." The curriculum will vary from school to school.
At the University of Maryland, MIM students can specialize, earning a degree with a concentration in Strategic Management of Information or in Technology Development and Deployment (formerly Socio-Tech Information Systems). Strategic Management of Information is intended for students who want to become organizations' chief information officers or to follow that general management path. Technology Development and Deployment is designed for students who want to follow technology director career paths.
**Versatile Service Engine**
Versatile Service Engine:
Versatile Service Engine is a second-generation IP Multimedia Subsystem developed by Nortel Networks that is compliant with Advanced Telecommunications Computing Architecture specifications. Nortel's Versatile Service Engine gives telecommunication service providers the capability to offer Global System for Mobile Communications and code-division multiple access services in both wireline and wireless modes.
History:
The Versatile Service Engine is a joint effort of Nortel and Motorola. The aim of collaboration was to develop an Advanced Telecommunications Computing Architecture compliant platform for Nortel IP Multimedia Subsystem applications. Nortel joined the PCI Industrial Computer Manufacturers Group in 2002 and the work on Versatile Service Engine was started in 2004.
Architecture:
A single versatile service engine frame consists of three shelves, each shelf having three slots.
A single slot can have many sub-slots, each staging a blade. Advanced Telecommunications Computing Architecture blades can be processors, switches, AMC carriers, etc. A typical shelf will contain one or more switch blades and several processor blades. The power supply and cooling fans are located at the back plane of the Versatile Service Engine.
Ericsson ownership:
After Nortel Networks filed for bankruptcy protection in January 2009, Ericsson acquired the code-division multiple access and LTE-based assets of what was then Canada's largest telecom equipment maker, thereby taking ownership of the Versatile Service Engine.
**Material flow**
Material flow:
Material flow is the description of the transportation of raw materials, pre-fabricates, parts, components, integrated objects and final products as a flow of entities. The term applies mainly to advanced modeling of supply chain management.
As industrial material flow can easily become very complex, several specialized simulation tools have been developed for such systems. Typical tools are:
AnyLogic
AutoMod, for logistics systems
Plant Simulation, for production systems
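To illustrate the kind of model such tools build, here is a minimal, dependency-free sketch of a single-machine material flow (queueing) simulation; the exponential arrival and processing times and all parameter values are illustrative assumptions, not tied to any of the tools named above.

```python
import heapq
import random

def simulate_material_flow(sim_time=480.0, arrival_mean=5.0, process_mean=4.0, seed=42):
    """Toy model: parts arrive at random intervals, wait in a buffer,
    and are processed one at a time by a single machine."""
    rng = random.Random(seed)
    events = [(rng.expovariate(1 / arrival_mean), "arrival")]  # (time, event kind)
    queue, busy, finished = 0, False, 0
    while events:
        t, kind = heapq.heappop(events)
        if t > sim_time:
            break
        if kind == "arrival":
            queue += 1
            heapq.heappush(events, (t + rng.expovariate(1 / arrival_mean), "arrival"))
        else:  # the machine finishes the part it was working on
            busy = False
            finished += 1
        if queue and not busy:  # start processing the next waiting part
            queue -= 1
            busy = True
            heapq.heappush(events, (t + rng.expovariate(1 / process_mean), "done"))
    return finished, queue

completed, still_waiting = simulate_material_flow()
print(completed, still_waiting)  # throughput over the run and the final buffer level
```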
**Dithiane**
Dithiane:
A dithiane is a heterocyclic compound composed of a cyclohexane core structure wherein two methylene bridges (-CH2- units) are replaced by sulfur centres. The three isomeric parent heterocycles are 1,2-dithiane, 1,3-dithiane and 1,4-dithiane.
1,3-Dithianes:
1,3-Dithianes serve as protecting groups for some carbonyl-containing compounds owing to their inertness to many conditions. They form by treatment of the carbonyl compound with 1,3-propanedithiol under conditions that remove water from the system. The protecting group can be removed with mercuric reagents, a process that exploits the high affinity of Hg(II) for thiolates. 1,3-Dithianes are also employed in umpolung reactions, such as the Corey–Seebach reaction. Typically, however, in organic synthesis ketones and aldehydes are protected as their dioxolanes rather than as dithianes.
**Erythropoietin receptor**
Erythropoietin receptor:
The erythropoietin receptor (EpoR) is a protein that in humans is encoded by the EPOR gene. EpoR is a 52 kDa peptide with a single carbohydrate chain, resulting in an approximately 56-57 kDa protein found on the surface of EPO-responding cells. It is a member of the cytokine receptor family. EpoR pre-exists as dimers. These dimers were originally thought to be formed by extracellular domain interactions; however, it is now assumed that dimerization occurs through interactions of the transmembrane domain and that the originally reported structure of the extracellular interaction site was due to crystallisation conditions and does not depict the native conformation. Binding of the 30 kDa ligand erythropoietin (Epo) induces a conformational change in the receptor, resulting in the autophosphorylation of Jak2 kinases that are pre-associated with the receptor (i.e., EpoR does not possess intrinsic kinase activity and depends on Jak2 activity). At present, the most well-established function of EpoR is to promote the proliferation of erythroid (red blood cell) progenitors and to rescue them from apoptosis.
Function and mechanism of action:
The cytoplasmic domains of the EpoR contain a number of phosphotyrosines that are phosphorylated by Jak2 and serve as docking sites for a variety of intracellular pathway activators and Stats (such as Stat5). In addition to activating Ras/AKT and ERK/MAP kinase, phosphatidylinositol 3-kinase/AKT pathway and STAT transcription factors, phosphotyrosines also serve as docking sites for phosphatases that negatively affect EpoR signaling in order to prevent overactivation that may lead to such disorders as erythrocytosis. In general, the defects in the erythropoietin receptor may produce erythroleukemia and familial erythrocytosis. Mutations in Jak2 kinases associated with EpoR can also lead to polycythemia vera.
Erythroid survival:
The primary role of EpoR is to promote proliferation of erythroid progenitor cells and to rescue erythroid progenitors from cell death. EpoR-induced Jak2-Stat5 signaling, together with the transcription factor GATA-1, induces the transcription of the pro-survival protein Bcl-xL. Additionally, EpoR has been implicated in suppressing expression of the death receptors Fas, Trail and TNFa that negatively affect erythropoiesis. Based on current evidence, it is still unknown whether Epo/EpoR directly cause "proliferation and differentiation" of erythroid progenitors in vivo, although such direct effects have been described based on in vitro work.
Erythroid differentiation:
It is thought that erythroid differentiation is primarily dependent on the presence and induction of erythroid transcriptional factors such as GATA-1, FOG-1 and EKLF, as well as the suppression of myeloid/lymphoid transcriptional factors such as PU.1. Direct and significant effects of EpoR signaling specifically upon the induction of erythroid-specific genes such as beta-globin have been mainly elusive. It is known that GATA-1 can induce EpoR expression. In turn, EpoR's PI3-K/AKT signaling pathway augments GATA-1 activity.
Erythroid cell cycle/proliferation:
Induction of proliferation by the EpoR is likely cell type-dependent. It is known that EpoR can activate mitogenic signaling pathways and can lead to cell proliferation in erythroleukemic cell lines in vitro, various non-erythroid cells, and cancer cells. So far, there is insufficient evidence that in vivo EpoR signaling can induce erythroid progenitors to undergo cell division, or that Epo levels can modulate the cell cycle. EpoR signaling may still have a proliferative effect upon BFU-e progenitors, but these progenitors cannot be directly identified, isolated and studied. CFU-e progenitors enter the cell cycle at the time of GATA-1 induction and PU.1 suppression in a developmental manner rather than due to EpoR signaling. Subsequent differentiation stages (proerythroblast to orthochromatic erythroblast) involve a decrease in cell size and eventual expulsion of the nucleus, and are likely dependent upon EpoR signaling only for their survival. In addition, some evidence on macrocytosis in hypoxic stress (when Epo can increase 1000-fold) suggests that mitosis is actually skipped in later erythroid stages, when EpoR expression is low or absent, in order to provide an emergency reserve of red blood cells as soon as possible. Such data, though sometimes circumstantial, argue that there is limited capacity to proliferate specifically in response to Epo (and not other factors). Together, these data suggest that EpoR in erythroid differentiation may function primarily as a survival factor, while its effect on the cell cycle (for example, rate of division and corresponding changes in the levels of cyclins and Cdk inhibitors) in vivo awaits further work. In other cell systems, however, EpoR may provide a specific proliferative signal.
Commitment of multipotent progenitors to the erythroid lineage:
EpoR's role in lineage commitment is currently unclear. EpoR expression can extend as far back as the hematopoietic stem cell compartment. It is unknown whether EpoR signaling plays a permissive (i.e. induces only survival) or an instructive (i.e. upregulates erythroid markers to lock progenitors to a predetermined differentiation path) role in early, multipotent progenitors in order to produce sufficient erythroblast numbers. Current publications in the field suggest that it is primarily permissive. The generation of BFU-e and CFU-e progenitors was shown to be normal in rodent embryos knocked out for either Epo or EpoR. An argument against such lack of requirement is that in response to Epo or hypoxic stress, the number of early erythroid stages, the BFU-e and CFU-e, increases dramatically. However, it is unclear if it is an instructive signal or, again, a permissive signal. One additional point is that signaling pathways activated by the EpoR are common to many other receptors; replacing EpoR with prolactin receptor supports erythroid survival and differentiation in vitro. Together, these data suggest that commitment to the erythroid lineage likely does not happen due to EpoR's as-yet-unknown instructive function, but possibly due to its role in survival at the multipotent progenitor stages.
Animal studies on Epo Receptor mutations:
Mice with truncated EpoR are viable, which suggests that Jak2 activity is sufficient to support basal erythropoiesis by activating the necessary pathways without the phosphotyrosine docking sites being needed. The EpoR-H truncated form of EpoR contains the first, and arguably most important, tyrosine 343, which serves as a docking site for the Stat5 molecule, but lacks the rest of the cytoplasmic tail. These mice exhibit elevated erythropoiesis, consistent with the idea that phosphatase recruitment (and therefore the shutting down of signaling) is aberrant in these mice.
The EpoR-HM receptor also lacks the majority of the cytoplasmic domain, and its tyrosine 343 is mutated to phenylalanine, making it unsuitable for efficient Stat5 docking and activation. These mice are anemic and show a poor response to hypoxic stress, such as phenylhydrazine treatment or erythropoietin injection. EpoR knockout mice have defects in the heart, brain and vasculature. These defects may be due to blocks in RBC formation and thus insufficient oxygen delivery to developing tissues, because mice engineered to express Epo receptors only in erythroid cells develop normally.
Clinical significance:
Defects in the erythropoietin receptor may produce erythroleukemia and familial erythrocytosis. Overproduction of red blood cells increases the chance of adverse cardiovascular events, such as thrombosis and stroke.
Rarely, seemingly beneficial mutations in the EpoR may arise, in which an increased red blood cell number allows for improved oxygen delivery in athletic endurance events with no apparent adverse effects on the athlete's health (as, for example, in the Finnish athlete Eero Mäntyranta). Erythropoietin has been reported to maintain endothelial cells and to promote tumor angiogenesis, so dysregulation of EpoR may affect the growth of certain tumors. However, this hypothesis is not universally accepted.
Interactions:
Erythropoietin receptor has been shown to interact with:
**PSTPIP1**
PSTPIP1:
Proline-serine-threonine phosphatase-interacting protein 1 is an enzyme that in humans is encoded by the PSTPIP1 gene.
Interactions:
PSTPIP1 has been shown to interact with:
**Ilinx**
Ilinx:
Ilinx is a kind of play, described by sociologist Roger Caillois, a major figure in game studies. Ilinx creates a temporary disruption of perception, as with vertigo, dizziness, or disorienting changes in direction of movement.
Caillois identified several categories of play in Les Jeux et Les Hommes (ISBN 978-2070326723; 1958; published in English as Man, Play and Games, ISBN 978-0-252-07033-4; 2001). In the book, Caillois described the category of ilinx as games that "...are based on the pursuit of vertigo and which consist of an attempt to momentarily destroy the stability of perception and inflict a kind of voluptuous panic upon an otherwise lucid mind. In all cases, it is a question of surrendering to a kind of spasm, seizure, or shock which destroys reality with sovereign brusqueness." Caillois's other categories, which should be considered alongside ilinx as any form of play rarely fits wholly and discretely into one category, are "agon", "alea" and "mimesis" (or "mimicry").
**Aluminium toxicity in people on dialysis**
Aluminium toxicity in people on dialysis:
Aluminium toxicity in people on dialysis is a problem for people on haemodialysis. The dialysis process does not efficiently remove excess aluminium from the body, so it may build up over time. Aluminium is a potentially toxic metal, and aluminium poisoning may lead to mainly three disorders: aluminium-induced bone disease, microcytic anemia and neurological dysfunction (encephalopathy). Such conditions are more prominently observed in people with chronic kidney failure and especially in people on haemodialysis.
About 5–10 mg of aluminium enters the human body daily through different sources such as water, food, and occupational exposure to aluminium in industry. In people with normal kidney function, serum aluminium is normally lower than 6 micrograms/L. Baseline levels of serum aluminium should be below 20 micrograms/L. According to AAMI, aluminium levels in the dialysis fluid should be less than 0.01 milligrams/L.
**Electric chair**
Electric chair:
The electric chair is a specialized device employed for carrying out capital punishment through the process of electrocution. During its use, the individual sentenced to death is securely strapped to a specifically designed wooden chair and subjected to electrocution via strategically positioned electrodes affixed to the head and leg. This method of execution was conceptualized by Alfred P. Southwick, a dentist based in Buffalo, New York, in 1881. Over the following decade, this execution technique was developed further, aiming to provide a more humane alternative to the conventional form of execution, particularly hanging. The electric chair was first utilized in 1890 and subsequently became known as a symbol of this method of execution.
The electric chair has been closely associated with the history of capital punishment in the United States and has also been utilized for a significant period in the Philippines. Originally, it was believed that death resulted from cerebral damage, but in 1899, it was scientifically established that the primary cause of death is ventricular fibrillation followed by cardiac arrest.
Electric chair:
Despite its historical significance in the context of the American death penalty, the use of the electric chair has diminished over time due to the increasing adoption of lethal injection as a more humane method of execution. While certain states still retain electrocution as a legally authorized method of execution, it is often employed as a secondary option, contingent upon the preference of the condemned individual. Exceptions to this include states like Tennessee and South Carolina, where electrocution can be used without prisoner input if the necessary drugs for lethal injection are unavailable.
Electric chair:
As of 2021, electrocution remains a selectable method of execution in states such as Alabama and Florida, where inmates may opt for lethal injection instead. In contrast, Kentucky has retired the electric chair, except for individuals sentenced to capital punishment before March 31, 1998, who can choose electrocution. Inmates who do not select this method, as well as those convicted after the aforementioned date, are executed through lethal injection. Kentucky has also authorized the use of electrocution as a potential alternative if lethal injection is deemed unconstitutional by a court.
Electric chair:
The electric chair continues to be an accepted alternative method of execution in states like Arkansas, Mississippi, and Oklahoma, to be utilized if other forms of execution are ruled unconstitutional at the time of the execution.
A significant turning point occurred on February 8, 2008, when the Nebraska Supreme Court ruled that execution by electric chair constituted a form of "cruel and unusual punishment" under the state's constitution. This decision marked the cessation of electric chair executions in Nebraska, making it the last state to rely solely on this method of execution.
Historical Background:
Invention In the late 1870s to early 1880s, the spread of arc lighting, a type of outdoor street lighting that required high voltages in the range of 3000–6000 volts, was followed by one story after another in newspapers about how the high voltages used were killing people, usually unwary linemen; it was a strange new phenomenon that seemed to instantaneously strike a victim dead without leaving a mark. One of these accidents, in Buffalo, New York, on August 7, 1881, led to the inception of the electric chair. That evening a drunken dock worker named George Lemuel Smith, looking for the thrill of a tingling sensation he had noticed when grabbing the guard rail in a Brush Electric Company arc lighting power house, managed to sneak his way back into the plant at night and grabbed the brush and ground of a large electric dynamo. He died instantly. The coroner who investigated the case brought it up that year at a local Buffalo scientific society. Another member attending that lecture, Alfred P. Southwick, a dentist who had a technical background, thought some application could be found for the curious phenomenon. Southwick joined physician George E. Fell and the head of the Buffalo ASPCA in a series of experiments electrocuting hundreds of stray dogs. They ran trials with the dog in water and out of water, and varied the electrode type and placement until they came up with a repeatable method to euthanize animals using electricity. Southwick went on in the early 1880s to advocate that this method be used as a more humane replacement for hanging in capital cases, coming to national attention when he published his ideas in scientific journals in 1882 and 1883. He worked out calculations based on the dog experiments, trying to develop a scaled-up method that would work on humans. Early on in his designs he adopted a modified version of the dental chair as a way to restrain the condemned, a device that from then on would be called the electric chair.
Historical Background:
The Gerry Commission After a series of botched hangings in the United States, there was mounting criticism of that form of capital punishment and the death penalty in general. In 1886, newly elected New York State governor David B. Hill set up a three-member death penalty commission, which was chaired by the human rights advocate and reformer Elbridge Thomas Gerry and included New York lawyer and politician Matthew Hale and Southwick, to investigate a more humane means of execution.
Historical Background:
The commission members surveyed the history of execution and sent out a fact-finding questionnaire to government officials, lawyers, and medical experts all around the state asking for their opinion. A slight majority of respondents recommended hanging over electrocution, with a few instead recommending the abolition of capital punishment. The commission also contacted electrical experts, including Thomson-Houston Electric Company's Elihu Thomson (who recommended high voltage AC connected to the head and the spine) and the inventor Thomas Edison (who also recommended AC, as well as using a Westinghouse generator). They also attended electrocutions of dogs by George Fell who had worked with Southwick in the early 1880s experiments. Fell was conducting further experiments, electrocuting anesthetized vivisected dogs trying to discern exactly how electricity killed a subject. In 1888, the Commission recommended electrocution using Southwick's electric chair idea with metal conductors attached to the condemned person's head and feet. They further recommended that executions be handled by the state instead of the individual counties with three electric chairs set up at Auburn, Clinton, and Sing Sing prisons. A bill following these recommendations passed the legislature and was signed by Governor Hill on June 4, 1888, set to go into effect on January 1, 1889.
Historical Background:
The New York Medico-Legal Commission The bill itself contained no details on the type or amount of electricity that should be used and the New York Medico-Legal Society, an informal society composed of doctors and lawyers, was given the task of determining these factors. In September 1888, a committee was formed and recommended 3000 volts, although the type of electricity, direct current (DC) or alternating current (AC), was not determined, and since tests up to that point had been done on animals smaller than a human (dogs), some members were unsure that the lethality of AC had been conclusively proven.
Historical Background:
At this point the state's efforts to design the electric chair became intermixed with what has come to be known as the war of the currents, a competition between Thomas Edison's direct current power system and George Westinghouse's alternating current based system. The two companies had been competing commercially since 1886 and a series of events had turned it into an all-out media war in 1888. The committee head, neurologist Frederick Peterson, enlisted the services of Harold P. Brown as a consultant. Brown had been on his own crusade against alternating current after the shoddy installation of pole-mounted AC arc lighting lines in New York City had caused several deaths in early 1888. Peterson had been an assistant at Brown's July 1888 public electrocution of dogs with AC at Columbia College, an attempt by Brown to prove AC was more deadly than DC. Technical assistance in these demonstrations was provided by Thomas Edison's West Orange laboratory and there grew to be some form of collusion between Edison Electric and Brown. Back at West Orange on December 5, 1888, Brown set up an experiment with members of the press, members of the Medico-Legal Society including Elbridge Gerry, who was also chairman of the death penalty commission, and Thomas Edison looking on. Brown used alternating current for all of his tests on animals larger than a human, including four calves and a lame horse, all dispatched with 750 volts of AC. Based on these results the Medico-Legal Society recommended the use of 1,000–1,500 volts of alternating current for executions, and newspapers noted the AC used was half the voltage used in the power lines over the streets of American cities. Westinghouse criticized these tests as a skewed self-serving demonstration designed to be a direct attack on alternating current and accused Brown of being in the employ of Edison. At the request of death penalty commission chairman Gerry, Medico-Legal Society members (electrotherapy expert Alphonse David Rockwell, Carlos Frederick MacDonald, and Columbia College professor Louis H. Laudy) were given the task of working out the details of electrode placement. They again turned to Brown to supply the technical assistance. Brown asked Edison Electric Light to supply equipment for the tests and treasurer Francis S. Hastings (who seemed to be one of the primary movers at the company trying to portray Westinghouse as a peddler of death dealing AC current) tried to obtain a Westinghouse AC generator for the test but found none could be acquired. They ended up using Edison's West Orange laboratory for the animal tests they conducted in mid-March 1889. Superintendent of Prisons Austin E. Lathrop asked Brown to design the chair, but Brown turned down the offer. George Fell drew up the final designs for a simple oak chair and went against the Medico-Legal Society recommendations, changing the position of the electrodes to the head and the middle of the back. Brown did take on the job of finding the generators needed to power the chair. He managed to surreptitiously acquire three Westinghouse AC generators that were being decommissioned with the help of Edison and Westinghouse's chief AC rival, the Thomson-Houston Electric Company, a move that made sure that Westinghouse's equipment would be associated with the first execution. The electric chair was built by Edwin F. Davis, the first "state electrician" (executioner) for the State of New York.
Historical Background:
First execution The first person in line to die under New York's new electrocution law was Joseph Chapleau, convicted for beating his neighbor to death with a sled stake, but his sentence was commuted to life imprisonment. The next person scheduled to be executed was William Kemmler, convicted of murdering his wife with a hatchet. An appeal on Kemmler's behalf was made to the New York Court of Appeals on the grounds that use of electricity as a means of execution constituted a "cruel and unusual punishment" and was thus contrary to the constitutions of the United States and the state of New York. On December 30, 1889, the writ of habeas corpus sworn out on Kemmler's behalf was denied by the court, with Judge Dwight writing in a lengthy ruling: We have no doubt that if the Legislature of this State should undertake to proscribe for any offense against its laws the punishment of burning at the stake, breaking at the wheel, etc., it would be the duty of the courts to pronounce upon such attempt the condemnation of the Constitution. The question now to be answered is whether the legislative act here assailed is subject to the same condemnation. Certainly, it is not so on its face, for, although the mode of death described is conceded to be unusual, there is no common knowledge or consent that it is cruel; it is a question of fact whether an electric current of sufficient intensity and skillfully applied will produce death without unnecessary suffering.
Historical Background:
Kemmler was executed in New York's Auburn Prison on August 6, 1890; the "state electrician" was Edwin Davis. The first 17-second passage of 1,000 volts AC through Kemmler caused unconsciousness, but failed to stop his heart and breathing. The attending physicians, Edward Charles Spitzka and Carlos Frederick MacDonald, came forward to examine Kemmler. After confirming Kemmler was still alive, Spitzka reportedly called out, "Have the current turned on again, quick, no delay." The generator needed time to re-charge, however. In the second attempt, Kemmler received a 2,000 volt AC shock. Blood vessels under the skin ruptured and bled, and the areas around the electrodes singed; some witnesses reported that his body caught fire. The entire execution took about eight minutes. George Westinghouse later commented that, "They would have done better using an axe", and the New York Times ran the headline: "Far worse than hanging".
Historical Background:
Adoption The electric chair was adopted by Ohio (1897), Massachusetts (1900), New Jersey (1906) and Virginia (1908), and soon became the prevalent method of execution in the United States, replacing hanging. Twenty-six U.S. states, the District of Columbia, the federal government, and the U.S. military either had death by electrocution on the books or actively executed criminals using the method. The electric chair remained the most prominent execution method until the mid-1980s, when lethal injection became widely accepted for conducting judicial executions. Other countries appear to have contemplated using the method, sometimes for special reasons. The Philippines also adopted the electric chair from 1926 to 1987. A well-publicized triple execution took place there in May 1972, when Jaime Jose, Basilio Pineda and Edgardo Aquino were electrocuted for the 1967 abduction and gang-rape of the young actress Maggie de la Riva. The last electric chair execution in the Philippines was in 1976; the method was later replaced with lethal injection when executions resumed in that country. Ethiopia had, according to some sources, attempted to adopt the electric chair as a method of executing criminals. The emperor Menelik II is said to have acquired three electric chairs in 1896 at the behest of a missionary, but could not make the devices work as his nation did not have a reliable source of electric power available at that time. Two of the chairs were either used as garden furniture or given to friends, and Menelik II is said to have used the third electric chair as a throne.
Historical Background:
United Kingdom The Royal Commission on Capital Punishment set up by the Clement Attlee government in 1949 reviewed the application of the death penalty in the United Kingdom, including the questions of what crimes should receive the death penalty and what method of execution should be employed. The Commission examined the various methods of execution as an alternative to hanging, but concluded that the electric chair had no particular advantages. The Commission described their own task as "trying to find some practical half-way house between the present scope of the death penalty and its abolition". Capital punishment in the United Kingdom was suspended in 1965 for 5 years and this suspension was made permanent in 1969.
Historical Background:
Key events in the United States Serial killer Lizzie Halliday was the first woman sentenced to die in the electric chair, in 1894, but governor Roswell P. Flower commuted her sentence to life in a mental institution after a medical commission declared her insane. A second woman sentenced to death in 1895, Maria Barbella, was acquitted the next year. Martha M. Place became the first woman executed in the electric chair at Sing Sing Prison on March 20, 1899, for the murder of her 17-year-old stepdaughter, Ida Place. Leon Czolgosz was executed in the electric chair at New York's Auburn Prison on October 29, 1901, for the assassination of then-President William McKinley.
Historical Background:
The first photograph of an execution by electric chair was of housewife Ruth Snyder at Sing Sing on the evening of January 12, 1928, for the March 1927 murder of her husband. It was photographed for a front-page story in the New York Daily News the following morning by news photographer Tom Howard, who had smuggled a camera into the death chamber and photographed her in the electric chair as the current was turned on. It remains one of the best-known examples of photojournalism. A record was set on July 13, 1928, when seven men were executed consecutively in the electric chair at the Kentucky State Penitentiary in Eddyville, Kentucky. On June 16, 1944, an African-American teenager, 14-year-old George Stinney, became the youngest person ever executed in the electric chair when he was electrocuted at the Central Correctional Institution in Columbia, South Carolina. His conviction was overturned in 2014 after a circuit court judge vacated his sentence on the grounds that Stinney did not receive a fair trial. The judge determined that Stinney's legal counsel was inadequate, thus violating his rights under the Sixth Amendment to the U.S. Constitution. On May 25, 1979, John Spenkelink became the first person to be electrocuted after the Gregg v. Georgia decision by the Supreme Court of the United States in 1976. He was the first person to be executed in the United States in this manner since 1966.
Historical Background:
The last person to be executed by electric chair without the choice of an alternative method was Lynda Lyon Block on May 10, 2002, in Alabama.
Process and mechanism:
The condemned inmate's head and legs are shaved on the day of the execution. After the condemned inmate is escorted to and seated in the chair, their arms and legs are tightly strapped with leather belts to restrict movement or resistance. A cap with a brine or saltwater soaked sponge is affixed to the inmate's head and electrodes are attached to the inmate's shaved legs. The inmate is typically hooded or blindfolded.
Process and mechanism:
After the inmate is read the order of execution and permitted to make a final statement, the execution commences. Various cycles (changes in voltage and duration) of alternating current are passed through the individual's body in order to cause lethal damage to the internal organs. The first, more powerful jolt (between 2,000 and 2,500 volts) of electric current is intended to cause immediate unconsciousness, ventricular fibrillation, and eventual cardiac arrest. The second, less powerful jolt (500–1,500 volts) is intended to cause lethal damage to the vital organs. In 1999, Allen Lee Davis was the last person executed in Florida's electric chair; up to 10 amperes of electric current were applied for 38 seconds. After the cycles are completed, a doctor checks the inmate for any signs of life. If none are present, the doctor reports and records the time of death, and prison officials will wait for the body to cool down before removing it to prepare for autopsy. If the inmate exhibits signs of life, the doctor notifies the warden, who usually will order another round of electric current or (rarely) postpone the execution, as happened with Willie Francis.
Controversies and criticisms:
Possibility of consciousness and pain during execution Critics of the electric chair dispute whether the first jolt of electricity reliably induces immediate unconsciousness as proponents often claim. Witness testimony, botched electrocutions (see Willie Francis and Allen Lee Davis), and post-mortem examinations suggest that execution by electric chair is often painful.
Controversies and criticisms:
Botched executions The electric chair has been criticized because of several instances in which the subjects were killed only after being subjected to multiple electric shocks. This led to calls for ending the practice as a "cruel and unusual punishment". Trying to address such concerns, Nebraska introduced a new electrocution protocol in 2004, which called for the administration of a 15-second application of current at 2,450 volts; after a 15-minute wait, an official then checks for signs of life. In April 2007, new concerns about the 2004 protocol led Nebraska to adopt a different protocol, calling for a 20-second application of current at 2,450 volts. Prior to the 2004 protocol change, an initial eight-second application of current at 2,450 volts was administered, followed by a one-second pause, then a 22-second application at 480 volts. After a 20-second break, the cycle was repeated three more times. In 1946, the electric chair failed to kill Willie Francis, who reportedly shrieked, "Take it off! Let me breathe!", after the current was applied. It turned out that the portable electric chair had been improperly set up by an intoxicated prison guard and inmate. A case was brought before the U.S. Supreme Court (Louisiana ex rel. Francis v. Resweber), with lawyers for the condemned arguing that although Francis did not die, he had, in fact, been executed. The argument was rejected on the basis that re-execution did not violate the double jeopardy clause of the 5th Amendment of the United States Constitution, and Francis was returned to the electric chair and executed in 1947. Florida saw three highly controversial botched electrocutions in the 1990s, starting with the 1990 execution of Jesse Tafero. His case generated significant controversy, as with the first administration of electricity, Tafero's face and head caught on fire. Tafero's execution ultimately required three shocks over the course of seven minutes. The error was blamed on prison officials replacing Florida's old natural sea sponge with a kitchen sponge. The 1997 execution of Pedro Medina in Florida created controversy when flames burst from his head. An autopsy found that Medina had died instantly when the first surge of electricity had destroyed his brain and brain stem. A judge ruled that the incident arose from "unintentional human error" rather than any faults in the "apparatus, equipment, and electrical circuitry" of Florida's electric chair. In Florida, on July 8, 1999, Allen Lee Davis, convicted of murder, was executed in the Florida electric chair "Old Sparky". Davis' face was bloodied, and photographs were taken, which were later posted on the Internet. An investigation concluded that Davis had begun bleeding before the electricity was applied and that the chair had functioned as intended. Florida's Supreme Court ruled that the electric chair did not constitute "cruel and unusual punishment".
Decline and current status:
The use of the electric chair has declined since the 1979 advent of lethal injection, which is now the default method in all U.S. jurisdictions that authorize capital punishment.
Decline and current status:
As of 2023, the only places that still reserve the electric chair as an option for execution are the U.S. states of Alabama, Florida, South Carolina, Kentucky, and Tennessee. Arkansas, Mississippi, and Oklahoma laws provide for its use should lethal injection ever be held to be unconstitutional. Inmates in the other states that retain it must select either electrocution or lethal injection. In Kentucky, only inmates sentenced before a certain date can choose to be executed by electric chair. Electrocution is also authorized in Kentucky in case lethal injection is found unconstitutional by a court. Tennessee was among the states that provided inmates with a choice of the electric chair or lethal injection; in May 2014, however, the state passed a law allowing the use of the electric chair if lethal injection drugs were unavailable or made unconstitutional. On February 15, 2008, the Nebraska Supreme Court declared execution by electrocution to be "cruel and unusual punishment" prohibited by the Nebraska Constitution. The last judicial electrocution in the U.S. prior to Furman v. Georgia took place in Oklahoma in 1966. The electric chair was used quite frequently in post-Gregg v. Georgia executions during the 1980s, but its use in the United States gradually declined in the 1990s due to the widespread adoption of lethal injection. A number of states still allow the condemned person to choose between electrocution and lethal injection, with the most recent U.S. electrocution, of Nicholas Todd Sutton, taking place in February 2020 in Tennessee. In 2021, South Carolina's governor Henry McMaster signed a law requiring inmates to be executed by electrocution if lethal injection was not available. The law also mandated electrocution in the event that an inmate refused to select their execution method, between South Carolina's options of lethal injection, the electric chair, and a firing squad. In 2022, a judge in Richland County, South Carolina, declared that execution by firing squad and electrocution were both in violation of the South Carolina State Constitution, which bans methods that are "cruel, unusual, or corporal." The court, in their decision, stated that there was no evidence that electrocution could instantaneously or painlessly kill an inmate, writing that the idea of the electric chair inducing instant unconsciousness was based on "underlying assumptions upon which the electric chair is based, dating back to the 1800s, [that] have since been disproven." The decision also called electrocution "inconsistent with both the concepts of evolving standards of decency and the dignity of man," and stated, "Even if an inmate survived only fifteen or thirty seconds, he would suffer the experience of being burned alive – a punishment that has 'long been recognized as manifestly cruel and unusual.'" The ruling led to a permanent injunction being issued against both methods of execution, preventing the state from subjecting death row inmates to death by firing squad or electrocution. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Joint compatibility branch and bound**
Joint compatibility branch and bound:
Joint compatibility branch and bound (JCBB) is an algorithm in computer vision and robotics commonly used for data association in simultaneous localization and mapping. JCBB measures the joint compatibility of a set of pairings that successfully rejects spurious matchings and is hence known to be robust in complex environments. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
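As an illustration of the idea, the sketch below is a minimal, simplified rendering of JCBB in Python (not the canonical formulation from the SLAM literature): observations and predicted landmarks are treated as 2-D points, the joint compatibility of a candidate set of pairings is tested with a chi-square gate on the stacked Mahalanobis distance, and the branch-and-bound step prunes branches that cannot beat the best pairing set found so far. The block-diagonal joint covariance is an assumption made for brevity; a full implementation would use the cross-correlated innovation covariance from the filter.

```python
# Minimal illustrative JCBB sketch (simplified; assumes independent 2-D measurements).
import numpy as np
from scipy.stats import chi2

def joint_mahalanobis(pairings, observations, predictions, cov):
    """Squared Mahalanobis norm of the stacked innovation for a set of pairings."""
    r = np.concatenate([observations[i] - predictions[j] for i, j in pairings])
    S = np.kron(np.eye(len(pairings)), cov)   # block-diagonal joint covariance (simplification)
    return float(r @ np.linalg.solve(S, r))

def jcbb(observations, predictions, cov, alpha=0.95):
    """Return the largest jointly compatible set of (observation, landmark) pairings."""
    best = []

    def recurse(i, pairings, used):
        nonlocal best
        if i == len(observations):
            if len(pairings) > len(best):
                best = list(pairings)
            return
        if len(pairings) + (len(observations) - i) <= len(best):
            return                                        # bound: cannot beat current best
        for j in range(len(predictions)):
            if j in used:
                continue
            cand = pairings + [(i, j)]
            d2 = joint_mahalanobis(cand, observations, predictions, cov)
            if d2 <= chi2.ppf(alpha, 2 * len(cand)):      # joint compatibility gate
                recurse(i + 1, cand, used | {j})
        recurse(i + 1, pairings, used)                    # branch: leave observation i unmatched

    recurse(0, [], set())
    return best

if __name__ == "__main__":
    predictions = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
    observations = predictions + np.random.normal(scale=0.05, size=predictions.shape)
    print(jcbb(observations, predictions, cov=0.01 * np.eye(2)))  # e.g. [(0, 0), (1, 1), (2, 2)]
```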
**Process psychology**
Process psychology:
Process psychology is a branch of psychotherapeutic psychology which was derived from process philosophy as developed by Alfred North Whitehead. Process psychology got its start at a conference sponsored by the Center for Process Studies in 1998. In 2000, Michel Weber created the Whitehead Psychology Nexus: an open forum dedicated to the cross-examination of Alfred North Whitehead's process philosophy and the various facets of the contemporary psychological field. David Ray Griffin, a retired professor, has also been instrumental in encouraging the development of Process Psychology. Process Psychology is closely aligned with process theology and its practitioners frequently refer to spiritual concerns.
Process psychology:
John Buchanan described Process Psychology as a transpersonal psychology providing an empirical basis for what has been called mystical experience. Yet other theorists reference systems thinking and the work of Ludwig von Bertalanffy, whose concept of a "system" is compared to Whitehead's idea of the "organism". The influence of Carl G. Jung is also referenced, and he is considered to be among the discipline's founding fathers. Jon Mills (psychologist) has proposed a process psychology known as "dialectical psychoanalysis" (which is based, in part, on Hegelianism). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Divine light**
Divine light:
In theology, divine light (also called divine radiance or divine refulgence) is an aspect of divine presence perceived as light during a theophany or vision, or represented as such in allegory or metaphor.
The term "light" has been widely used in spirituality and religion, such as: An Nūr – Islamic term and concept, referenced in Surah an-Nur and Ayat an-Nur of the Quran.
Inner light – Christian concept and Quaker doctrine.
Jyoti or Jyot – a holy flame that is lit with cotton wicks and ghee or mustard oil. It is a prayer ritual of devotional worship that Hindus offer to the deities. Jyoti is also a representation of the divine light and a form of the Hindu goddess Durga's shakti.
Ohr Ein Sof – in Rabbinic Judaism and Kabbalah.
Prakāśa – Kashmiri Shaiva concept of the light of Divine Consciousness of Shiva.
Tabor Light – the uncreated light revealed to the apostles present during the Transfiguration of Jesus; also experienced as illumination on the path to theosis in Eastern Orthodox theology during theoria, a form of Christian contemplation.
Buddhism:
Buddhist scripture speaks of numerous buddhas of light, including a Buddha of Boundless Light, a Buddha of Unimpeded Light, and the Buddhas of Unopposed Light, of Pure Light, of Incomparable Light, and of Unceasing Light.: 32
Christianity:
In 1 John 1:5, it says that "God is light", which means that God is part of the system that provides light to the whole universe. God created light (Genesis 1:3) and is light.
Bible commentators such as John W. Ritenbaugh see the presence of light as a metaphor of truth, good and evil, knowledge, and ignorance. In the first Chapter of the Bible, Elohim is described as creating light by fiat and seeing the light to be good.
Christianity:
Eastern Orthodoxy In the Eastern Orthodox tradition, the Divine Light illuminates the intellect of man through "theoria" or contemplation. In the Gospel of John, the opening verses describe God as Light: "In Him was life and the life was the light of men. And the light shines in the darkness and the darkness did not comprehend it." (John 1:5) In John 8:12, Christ proclaims "I am the light of the world", bringing the Divine Light to mankind. The Tabor Light, also called the Uncreated Light, was revealed to the three apostles present at the Transfiguration.
Christianity:
Quakers Quakers, known formally as the Religious Society of Friends, are generally united by a belief in each human's ability to experience the light within or see "that of God in every one". Most Quakers believe in continuing revelation: that God continuously reveals truth directly to individuals. George Fox said, "Christ has come to teach His people Himself." Friends often focus on feeling the presence of God. As Isaac Penington wrote in 1670, "It is not enough to hear of Christ, or read of Christ, but this is the thing – to feel him to be my root, my life, and my foundation..." Quakers reject the idea of priests, believing in the priesthood of all believers. Some express their concept of God using phrases such as "the inner light", "inward light of Christ", or "Holy Spirit". Quakers first gathered around George Fox in the mid-17th century and belong to a historically Protestant Christian set of denominations.
Hinduism:
In Hinduism, Diwali—the festival of lights—is a celebration of the victory of light over darkness. A mantra in Bṛhadāraṇyaka Upaniṣad (1.3.28) urges to God: "from darkness, lead us unto light". The Rig Veda includes nearly two dozen hymns to the dawn and its goddess, Ushas.
Sant Mat In the terminology of Sant Mat, Light and Sound are the two main expressions of God, and from them all of creation comes into existence. The Inner Light (and Inner Sound) can be experienced during meditation, with and after an initiation by a competent Guru, and are considered the best way to reach Enlightenment.
Manichaeism:
Manichaeism, the most widespread Western religion prior to Christianity, was based on the belief that God was, literally, light. From about 250–350 CE, devout Manichaeans followed the teachings of the self-proclaimed prophet Mani. Mani's faithful, who could be found from Greece to China, believed in warring kingdoms of Light and Darkness, in "beings of light," and in a Father of Light who would conquer the demons of darkness and remake the earth through shards of light found in human souls. Manichaeism also co-opted elements of other religions, incorporating Buddhist teachings in its scripture and worshipping Jesus the Luminous, who was crucified on a cross of pure light.
Manichaeism:
Among the many followers of Manichaeism was the young Augustine, who later wrote, "I thought that you, Lord God and Truth, were like a luminous body of immense size, and myself a bit of that body.": 30 When he converted to Christianity in 386 CE, Augustine denounced Manichaeism. By then, Manichaeism had been supplanted by ascendant Christianity.
Manichaeism's legacy is the word Manichaean—relating to a dualistic view of the world, dividing things into either good or evil, light or dark, black or white.
Neoplatonism:
In On the Mysteries of the Egyptians, Chaldeans, and Assyrians, Iamblichus refers to the divine light as the manifestation of the gods by which divination, theurgy, and other forms of ritual are accomplished.
Zoroastrianism:
Light is a core concept in Iranian mysticism. The root of this thought lies in Zoroastrian beliefs, which define the supreme God, Ahura Mazda, as the source of light. This essential attribute is manifested in various schools of thought in Persian mysticism and philosophy. Later, this notion was dispersed across the entire Middle East, shaping the paradigms of religions and philosophies emerging in the region. After the Arab invasion, this concept was incorporated into Islamic teachings by Iranian thinkers, the most famous of whom was Shahab al-Din Suhrawardi, the founder of the philosophy of illumination. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Master-checker**
Master-checker:
A master-checker is a hardware-supported fault tolerance method for multiprocessor systems, in which two processors, referred to as the master and checker, calculate the same functions in parallel in order to increase the probability that the result is exact. The checker-CPU is synchronised at clock level with the master-CPU and processes the same programs as the master. Whenever the master-CPU generates an output, the checker-CPU compares this output to its own calculation and in the event of a difference raises a warning.
Master-checker:
The master-checker system generally gives more accurate answers by ensuring that the answer is correct before passing it on to the application requesting the algorithm being completed. It also allows for error handling if the results are inconsistent. A recurrence of discrepancies between the two processors could indicate a flaw in the software, hardware problems, or timing issues between the clock, CPUs, and/or system memory. However, such redundant processing wastes time and energy. If the master-CPU is correct 95% or more of the time, the power and time used by the checker-CPU to verify answers is wasted. Depending on the merit of a correct answer, a checker-CPU may or may not be warranted. In order to alleviate some of the cost in these situations, the checker-CPU may be used to calculate something else in the same algorithm, increasing the speed and processing output of the CPU system. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
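A rough software analogue of the scheme, assuming the two "processors" can be modelled as plain functions and that comparison happens per result rather than per clock cycle as in real hardware, might look like the following sketch.

```python
# Minimal software analogue of master-checker execution (illustrative only).
class MasterCheckerError(RuntimeError):
    """Raised when the master and checker outputs disagree."""

def run_lockstep(master, checker, inputs):
    results = []
    for x in inputs:
        m_out = master(x)
        c_out = checker(x)           # checker runs the same program redundantly
        if m_out != c_out:           # checker compares the master's output to its own
            raise MasterCheckerError(f"mismatch on input {x!r}: {m_out!r} != {c_out!r}")
        results.append(m_out)        # only verified results are passed on
    return results

if __name__ == "__main__":
    square = lambda x: x * x
    print(run_lockstep(square, square, [1, 2, 3]))   # -> [1, 4, 9]
```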
**Oppo Joy Plus**
Oppo Joy Plus:
The Oppo Joy Plus was launched at the end of April 2015. The phone carried the slogan "Leap Up, Reach Joy." One of the phone's key selling points was an improved touchscreen that utilized an "all-new touch IC chip", allowing users to use the device while wearing gloves or with wet hands. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**60S ribosomal protein L32**
60S ribosomal protein L32:
60S ribosomal protein L32 is a protein that in humans is encoded by the RPL32 gene. Ribosomes, the organelles that catalyze protein synthesis, consist of a small 40S subunit and a large 60S subunit. Together these subunits are composed of 4 RNA species and approximately 80 structurally distinct proteins. This gene encodes a ribosomal protein that is a component of the 60S subunit. The protein belongs to the L32E family of ribosomal proteins. It is located in the cytoplasm. Although some studies have mapped this gene to 3q13.3-q21, it is believed to map to 3p25-p24. As is typical for genes encoding ribosomal proteins, there are multiple processed pseudogenes of this gene dispersed through the genome. Alternatively spliced transcript variants encoding the same protein have been observed for this gene. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Amycus Probe**
Amycus Probe:
Amycus Probe is a 1981 role-playing game adventure for Traveller published by Judges Guild.
Plot summary:
Amycus Probe is an adventure involving the ruins of a mysterious alien installation found on the planet Amycus, located in the Osiris Deep subsector of the Gateway Quadrant.
Publication history:
Amycus Probe was written by Dave Sering and was published in 1981 by Judges Guild as a 32-page book.
Reception:
William A. Barton reviewed Amycus Probe in The Space Gamer No. 47. Barton commented that "I recommend that if Amycus Probe is used, it be used in the campaign version and not as a one-time scenario. Provided that the latter adventures in the series carry through on the theme, it could form the basis of an interesting campaign situation." | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**World Radio TV Handbook**
World Radio TV Handbook:
The World Radio TV Handbook, also known as WRTH, is a directory of virtually every radio and TV station on Earth, published yearly. The importance of the book has greatly diminished with the online availability of up-to-date frequency information.
World Radio TV Handbook:
It was started in 1947 by Oluf Lund Johansen (1891–1975) as the World Radio Handbook (WRH). The word "TV" was added to the title in 1965, when Jens M. Frost (1919–1999) took over as editor; by then the book had already included data for television broadcasting for some years. After the 40th edition in 1986, Frost handed over editorship to Andrew G. (Andy) Sennitt.
History:
The first edition that bears an edition number is the 4th edition, published in 1949. The three previous editions appear to have been: the 1st edition, marked "Winter Ed. 1947" on the cover and completed in November 1947; the 2nd edition, marked "1948 (May–November)" on the cover and completed in May 1948; and the 3rd edition, marked "1948-49" on the cover and completed in November 1948. Summer Supplements appear to have been issued from 1959 through 1971. From 1959 through 1966 they were called the Summer Supplement. From 1967 through 1971 they were called the Summer Edition.
History:
Through the 1969 edition, the WRTH indicated the date on which the manuscript was completed.
History:
Issues with covers in Danish are known to have been available for the years 1948 May–November (2d ed.), 1950-51 (5th ed.; cover and 1st page in Danish, rest in English, most ads in Danish), 1952 (6th ed.; cover and 1st page in Danish, rest in English, most ads in Danish), and probably others. The 1952 English ed., which is completely in English, has an extra page with world times and agents, and ads in English which are sometimes different from the ads in the Danish edition. Also, the 1953 ed. mentions the availability of a German edition.
History:
Oluf Lund Johansen published, in conjunction with Libreria Hispanoamericana of Barcelona, Spain, a softbound Spanish-language version of the 1960 WRTH. The book was printed in Spain and called Guia Mundial de Radio y Television, and carried the WRTH logo at the time as well as all the editorial references contained in the English-language version.
Hardbound editions are known to have been available for the years 1963 through 1966, 1968, 1969, and 1975–1978, and probably others.
Publications:
Various authors, World Radio TV Handbook, 75th ed. (2021), WRTH Publications Limited, 2020. ISBN 978-1-9998300-3-8. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**ILLIAC III**
ILLIAC III:
The ILLIAC III was a fine-grained SIMD pattern recognition computer built by the University of Illinois in 1966.
This ILLIAC's initial task was image processing of bubble chamber experiments used to detect nuclear particles. Later it was used on biological images.
ILLIAC III:
The machine was destroyed in a fire, caused by a Variac shorting on one of the wooden-top benches, in 1968. It was rebuilt in the early 1970s, and the core parallel-processing element of the machine, the Pattern Articulation Unit, was successfully implemented. In spite of this and the productive exploration of other advanced concepts, such as multiple-radix arithmetic, the project was eventually abandoned.
ILLIAC III:
Bruce H. McCormick was the leader of the project throughout its history. John P. Hayes was responsible for the logic design of the input-output channel control units. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**OntoCAPE**
OntoCAPE:
OntoCAPE is a large-scale ontology for the domain of Computer-Aided Process Engineering (CAPE). It can be downloaded free of charge via the OntoCAPE Homepage.
OntoCAPE is partitioned into 62 sub-ontologies, which can be used individually or as an integrated suite. The sub-ontologies are organized across different abstraction layers, which separate general knowledge from knowledge about particular domains and applications.
The upper layers have the character of an upper ontology, covering general topics such as mereotopology, systems theory, quantities and units.
The lower layers conceptualize the domain of chemical process engineering, covering domain-specific topics such as materials, chemical reactions, or unit operations. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
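As a purely hypothetical illustration of how such a modular, layered ontology suite can be inspected programmatically (the file name below is a placeholder, not an actual OntoCAPE module path), one sub-ontology could be loaded with rdflib and its owl:imports followed to see which upper-layer modules it builds on.

```python
# Hypothetical sketch: list the modules a sub-ontology imports (placeholder file name).
from rdflib import Graph
from rdflib.namespace import OWL, RDF

g = Graph()
g.parse("chemical_process_system.owl")              # placeholder path to one sub-ontology

for ontology in g.subjects(RDF.type, OWL.Ontology):
    print("Module:", ontology)
    for imported in g.objects(ontology, OWL.imports):
        print("  imports:", imported)               # e.g. upper-layer modules it depends on
```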
**Pickled herring**
Pickled herring:
Pickled herring is a traditional way of preserving herring as food by pickling or curing.
Pickled herring:
Most cured herring uses a two-step curing process: it is first cured with salt to extract water; then the salt is removed and the herring is brined in a vinegar, salt, and sugar solution, often with peppercorns, bay leaves, raw onions, and so on. Additional flavourings include sherry, mustard and dill, while other, non-traditional ingredients have also come into use in recent years.
Pickled herring:
Pickled herring remains a popular food or ingredient in dishes in many parts of Europe, including Scandinavia, Great Britain, the Baltic, Eastern and Central Europe, and the Netherlands. It is also popular in parts of Canada such as British Columbia and Newfoundland, and it is associated with Ashkenazi Jewish cuisine, having become a staple at kiddushes and social gatherings. Pickled herring is one of the twelve dishes traditionally served on Christmas Eve in Russia, Poland, Lithuania, and Ukraine. Pickled herring is also eaten at the stroke of midnight on New Year's Eve to symbolize a prosperous New Year in Poland, the Czech Republic, Germany, and parts of Scandinavia.
History:
Pickled herrings have been a staple in Northern Europe since medieval times, being a way to store and transport fish, especially necessary in meatless periods like Lent. The herrings would be prepared, then packed in barrels for storage or transportation. In 1801 Dutch fishermen amongst the prisoners of war in the Norman Cross Prison were sent to Scotland to teach the Scottish herring fishermen how to cure fish using the Dutch method.
History:
Geographic distribution In the Nordic countries, once the pickling process is finished and depending on which of the dozens of herring flavourings (mustard, onion, garlic, lingonberries etc.) are selected, it is eaten with dark rye bread, crisp bread, sour cream, or potatoes. This dish is common at Christmas, Easter and Midsummer, where it is frequently accompanied by spirits like akvavit. Soused herring (maatjesharing or just maatjes in Dutch) is an especially mild salt herring, which is made from young, immature herrings. The herrings are ripened for a couple of days in oak barrels in a salty solution, or brine. In English, a "soused herring" can also be a cooked marinated herring. Rollmops are pickled herring fillets rolled (hence the name) into a cylindrical shape around a piece of pickled gherkin or an onion. They are thought to have developed as a special treat in 19th century Berlin, and the word was borrowed from German.
History:
Fish cured through pickling or salting have long been consumed in the British Isles. Like jellied eel, it was primarily eaten by, and is sometimes associated with, the working class. Kipper is a dish eaten in Great Britain, Ireland, and parts of Canada. It consists of a split open herring, pickled or salted, and cold-smoked.
History:
Red herring is similar to kippers but is whole and ungutted; it is more heavily salted and is smoked for 2–3 weeks. The main UK export markets are Europe and West Africa. Pickled herring, especially brined herring, is common in Russia and Ukraine, where it is served cut into pieces and seasoned with sunflower oil and onions, or can be part of herring salads, such as dressed herring (Russian: Сельдь под шубой, Ukrainian: Оселедець під шубою, lit. 'herring under a fur coat'), which are usually prepared with vegetables and seasoned with mayonnaise dressing.
History:
Brined herring is common in Ashkenazi Jewish cuisine, perhaps best known for vorschmack salad known in English simply as "chopped herring" and as schmaltz herring in Yiddish. In Israel it is commonly known as dag maluach which means "salted fish".
Pickled herring can also be found in the cuisine of Hokkaidō in Japan, where families traditionally preserved large quantities for winter.
In Nova Scotia, Canada, pickled herring with onions is called "Solomon Gundy" (not to be confused with the Jamaican pickled fish pâté of the same name).
"Bismarck herring" (German Bismarckhering) is the common name for pickled herring in Germany, and is sometimes sold elsewhere under that name. There are various theories as to why the product is associated with Bismarck.
Nutritional content:
Pickled herring is rich in tyramine and thus should be avoided in the diet of people being treated with an antidepressant monoamine oxidase inhibitor. As with fresh herring, pickled herring is an excellent natural source of both vitamin D3 and omega-3 fatty acids. It is also a good source of selenium and vitamin B12. 100 grams may provide 680 IU of vitamin D, or 170% of the DV, as well as 84% of the DV for selenium, and 71% of the DV for vitamin B12. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Zirconium(IV) hydroxide**
Zirconium(IV) hydroxide:
Zirconium(IV) hydroxide, often called hydrous zirconia, is an ill-defined material or family of materials variously described as ZrO2·nH2O and Zr(OH)4·nH2O. All are white solids with low solubility in water. These materials are widely employed in the preparation of solid acid catalysts. They are generated by mild base hydrolysis of zirconium halides and nitrates; a typical precursor is zirconium oxychloride. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Reductio ad absurdum**
Reductio ad absurdum:
In logic, reductio ad absurdum (Latin for "reduction to absurdity"), also known as argumentum ad absurdum (Latin for "argument to absurdity") or apagogical arguments, is the form of argument that attempts to establish a claim by showing that the opposite scenario would lead to absurdity or contradiction. This argument form traces back to Ancient Greek philosophy and has been used throughout history in both formal mathematical and philosophical reasoning, as well as in debate.
Reductio ad absurdum:
The equivalent formal rule is known as negation introduction. A related mathematical proof technique is called proof by contradiction.
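As a small formal illustration of negation introduction (a sketch in Lean 4 syntax, not taken from the article): to prove ¬P, one assumes P and derives a contradiction, here from Q and ¬Q.

```lean
-- Negation introduction: assume P, derive False, conclude ¬P.
example (P Q : Prop) (hpq : P → Q) (hnq : ¬Q) : ¬P := by
  intro hp             -- assume P; the goal becomes False
  exact hnq (hpq hp)   -- P yields Q, contradicting ¬Q
```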
Examples:
The "absurd" conclusion of a reductio ad absurdum argument can take a range of forms, as these examples show: The Earth cannot be flat; otherwise, since the Earth is assumed to be finite in extent, we would find people falling off the edge.
Examples:
There is no smallest positive rational number because, if there were, then it could be divided by two to get a smaller one. The first example argues that denial of the premise would result in a ridiculous conclusion, against the evidence of our senses. The second example is a mathematical proof by contradiction (also known as an indirect proof), which argues that the denial of the premise would result in a logical contradiction (there is a "smallest" number and yet there is a number smaller than it).
Greek philosophy:
Reductio ad absurdum was used throughout Greek philosophy. The earliest example of a reductio argument can be found in a satirical poem attributed to Xenophanes of Colophon (c. 570 – c. 475 BCE). Criticizing Homer's attribution of human faults to the gods, Xenophanes states that humans also believe that the gods' bodies have human form. But if horses and oxen could draw, they would draw the gods with horse and ox bodies. The gods cannot have both forms, so this is a contradiction. Therefore, the attribution of other human characteristics to the gods, such as human faults, is also false.
Greek philosophy:
Greek mathematicians proved fundamental propositions using reductio ad absurdum. Euclid of Alexandria (mid-4th – mid-3rd centuries BCE) and Archimedes of Syracuse (c. 287 – c. 212 BCE) are two very early examples. The earlier dialogues of Plato (424–348 BCE), relating the discourses of Socrates, raised the use of reductio arguments to a formal dialectical method (elenchus), also called the Socratic method. Typically, Socrates' opponent would make what would seem to be an innocuous assertion. In response, Socrates, via a step-by-step train of reasoning, bringing in other background assumptions, would make the person admit that the assertion resulted in an absurd or contradictory conclusion, forcing him to abandon his assertion and adopt a position of aporia. The technique was also a focus of the work of Aristotle (384–322 BCE), particularly in his Prior Analytics, where he referred to it as ἡ εἰς τὸ ἀδύνατον ἀπόδειξις (Greek, lit. "demonstration to the impossible", 62b). Another example of this technique is found in the sorites paradox, where it was argued that if 1,000,000 grains of sand formed a heap, and removing one grain from a heap left it a heap, then a single grain of sand (or even no grains) forms a heap.
Buddhist philosophy:
Much of Madhyamaka Buddhist philosophy centers on showing how various essentialist ideas have absurd conclusions through reductio ad absurdum arguments (known as prasaṅga, "consequence" in Sanskrit). In the Mūlamadhyamakakārikā, Nāgārjuna's reductio ad absurdum arguments are used to show that any theory of substance or essence was unsustainable and therefore, phenomena (dharmas) such as change, causality, and sense perception were empty (sunya) of any essential existence. Nāgārjuna's main goal is often seen by scholars as refuting the essentialism of certain Buddhist Abhidharma schools (mainly Vaibhasika) which posited theories of svabhava (essential nature) and also the Hindu Nyāya and Vaiśeṣika schools which posited a theory of ontological substances (dravyatas).
Principle of non-contradiction:
Aristotle clarified the connection between contradiction and falsity in his principle of non-contradiction, which states that a proposition cannot be both true and false. That is, a proposition Q and its negation ¬Q (not-Q) cannot both be true. Therefore, if a proposition and its negation can both be derived logically from a premise, it can be concluded that the premise is false. This technique, known as indirect proof or proof by contradiction, has formed the basis of reductio ad absurdum arguments in formal fields such as logic and mathematics.
Sources:
Hyde, Dominic; Raffman, Diana (2018). "Sorites Paradox". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy (Summer 2018 ed.).
Pasti, Mary. Reductio Ad Absurdum: An Exercise in the Study of Population Change. United States, Cornell University, Jan., 1977.
Daigle, Robert W. The Reductio Ad Absurdum Argument Prior to Aristotle. San Jose State University, 1991. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Off-key**
Off-key:
Off-key is musical content that is not at the expected frequency or pitch period, either with respect to some absolute reference frequency, or in a ratiometric sense (i.e. after removal of exactly one degree of freedom, such as the frequency of a keynote), or whose pitch intervals are not well defined as ratios of small whole numbers.
The term may also refer to a person or situation being out of step with what is considered normal or appropriate. A single note deliberately played or sung off-key can be called an "off-note". It is sometimes used the same way as a blue note in jazz.
Explanation of on-key:
The opposite of off-key is on-key or in-key, which suggests that there is a well-defined keynote, or reference pitch. This does not necessarily have to be an absolute pitch but rather one that is relative for at least the duration of a song. A song is usually in a certain key, which is usually the note that the song ends on and the base frequency to which it resolves at the end.
Explanation of on-key:
The base frequency is usually called the harmonic or key center. Being on-key presumes that there is a key-center frequency to which some portion of the notes have well-defined intervals.
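One common way to make this quantitative (a small sketch, not from the article) is to measure a note's deviation in cents from the nearest equal-tempered pitch relative to the keynote; a note is audibly off-key when that deviation is large.

```python
# Illustrative sketch: deviation of a frequency from the nearest equal-tempered semitone.
import math

def cents(freq, ref):
    """Interval from ref to freq in cents (1200 cents = one octave)."""
    return 1200.0 * math.log2(freq / ref)

def off_key_by(freq, keynote):
    """Signed deviation in cents from the nearest equal-tempered semitone above the keynote."""
    interval = cents(freq, keynote)
    return interval - round(interval / 100.0) * 100.0

if __name__ == "__main__":
    print(off_key_by(449.0, 440.0))   # ~ +35 cents sharp of A4, with A4 = 440 Hz as the keynote
```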
Deliberate use of off-key content:
In jazz and blues music, certain notes called "blue notes" are deliberately sung somewhat flat for expressive effect. Examples include the words "Thought He Was a Goner" in the song "And the Cat Came Back" and the words "Yum Yum" in the children's song "Five Green and Speckled Frogs". | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**D2-like receptor**
D2-like receptor:
The D2-like receptors are a subfamily of dopamine receptors that bind the endogenous neurotransmitter dopamine. The D2-like subfamily consists of three G-protein coupled receptors that are coupled to Gi/Go and mediate inhibitory neurotransmission: D2, D3, and D4. For more information, please see the respective main articles of the individual subtypes: | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Sculpture trail**
Sculpture trail:
A sculpture trail, also known as a "culture walk" or "art trail", is a walkway through open-air galleries of outdoor sculptures along a defined route, with sequenced viewings encountered from planned preview and principal sight lines.
Settings:
Often the distinct walkway is one choice among other, less structured ways of exploring intimate sculpture gardens, larger sculpture parks and expansive environmental art sites. The routes are often accessible to disabled and wheelchair-using visitors, enabling many people to view and experience the art.
Sculptural works of land art and larger site-specific outdoor installation art, especially in fragile natural habitats, use sculpture trails for low-impact accessibility. Some culture walks have sculptor-in-residence programs for creating new temporary or permanent works.
Sculpture trail settings can range from urban parks and private estates, through art museum gardens, to large regional open space and art park sites, with walkways giving access to the sculptures.
Examples:
Sculpture by the Sea is a free annual 2 kilometres (1.2 mi) outdoor sculpture walk that goes from Bronte Beach to Bondi Beach via Tamarama Beach. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Time in Honduras**
Time in Honduras:
Honduras observes Central Standard Time (UTC−6) year-round.
IANA time zone database:
In the IANA time zone database, Honduras is given one zone in the file zone.tab: America/Tegucigalpa. "HN" refers to the country's ISO 3166-1 alpha-2 country code. The data for Honduras come directly from zone.tab of the IANA time zone database. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
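For reference, the zone can be resolved from the standard tz database through Python's zoneinfo module (a small sketch, assuming the system tz data is installed), confirming the fixed UTC−6 offset.

```python
# Illustrative check of the America/Tegucigalpa zone and its year-round UTC-6 offset.
from datetime import datetime
from zoneinfo import ZoneInfo

tz = ZoneInfo("America/Tegucigalpa")
print(datetime.now(tz).utcoffset())   # -1 day, 18:00:00  (i.e. UTC-6, no DST)
```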
**Discriminant of an algebraic number field**
Discriminant of an algebraic number field:
In mathematics, the discriminant of an algebraic number field is a numerical invariant that, loosely speaking, measures the size of the (ring of integers of the) algebraic number field. More specifically, it is proportional to the squared volume of the fundamental domain of the ring of integers, and it regulates which primes are ramified.
Discriminant of an algebraic number field:
The discriminant is one of the most basic invariants of a number field, and occurs in several important analytic formulas such as the functional equation of the Dedekind zeta function of K, and the analytic class number formula for K. A theorem of Hermite states that there are only finitely many number fields of bounded discriminant; however, determining this quantity is still an open problem and the subject of current research. The discriminant of K can be referred to as the absolute discriminant of K to distinguish it from the relative discriminant of an extension K/L of number fields. The latter is an ideal in the ring of integers of L, and like the absolute discriminant it indicates which primes are ramified in K/L. It is a generalization of the absolute discriminant allowing for L to be bigger than Q; in fact, when L = Q, the relative discriminant of K/Q is the principal ideal of Z generated by the absolute discriminant of K.
Definition:
Let K be an algebraic number field, and let OK be its ring of integers. Let b1, ..., bn be an integral basis of OK (i.e. a basis as a Z-module), and let {σ1, ..., σn} be the set of embeddings of K into the complex numbers (i.e. injective ring homomorphisms K → C). The discriminant of K is the square of the determinant of the n by n matrix B whose (i,j)-entry is σi(bj). Symbolically,
$$\Delta_K = \det\begin{pmatrix} \sigma_1(b_1) & \sigma_1(b_2) & \cdots & \sigma_1(b_n) \\ \sigma_2(b_1) & \ddots & & \vdots \\ \vdots & & \ddots & \vdots \\ \sigma_n(b_1) & \cdots & \cdots & \sigma_n(b_n) \end{pmatrix}^{2}.$$
Definition:
Equivalently, the trace from K to Q can be used. Specifically, define the trace form to be the matrix whose (i,j)-entry is TrK/Q(bibj). This matrix equals B^T B, so the discriminant of K is the determinant of this matrix.
The discriminant of an order in K with integral basis b1, ..., bn is defined in the same way.
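To make the definition concrete, consider K = Q(√2) with integral basis {1, √2}: the two embeddings send √2 to ±√2, the embedding matrix has determinant −2√2, and so ΔK = 8. The following is a minimal sketch of this computation using the sympy library (an illustration added here, not part of the original text):

```python
from sympy import Matrix, sqrt, simplify

# Integral basis of Q(sqrt(2)): b1 = 1, b2 = sqrt(2).
# The two embeddings into C send sqrt(2) to +sqrt(2) and -sqrt(2).
B = Matrix([[1,  sqrt(2)],
            [1, -sqrt(2)]])
print(simplify(B.det() ** 2))  # 8, i.e. Delta_K for K = Q(sqrt(2))

# Equivalent computation via the trace form Tr_{K/Q}(b_i * b_j):
# Tr(1*1) = 2, Tr(1*sqrt(2)) = 0, Tr(sqrt(2)*sqrt(2)) = 4.
T = Matrix([[2, 0],
            [0, 4]])
print(T.det())  # 8 again, since the trace-form matrix equals B^T * B
```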
Examples:
Quadratic number fields: let d be a square-free integer and K = Q(√d). Then the discriminant of K is d if d ≡ 1 (mod 4), and 4d if d ≡ 2, 3 (mod 4).
Examples:
An integer that occurs as the discriminant of a quadratic number field is called a fundamental discriminant.
Cyclotomic fields: let n > 2 be an integer, let ζn be a primitive nth root of unity, and let Kn = Q(ζn) be the nth cyclotomic field. The discriminant of Kn is given by
$$\Delta_{K_n} = (-1)^{\varphi(n)/2}\,\frac{n^{\varphi(n)}}{\prod_{p \mid n} p^{\varphi(n)/(p-1)}},$$
where φ(n) is Euler's totient function, and the product in the denominator is over the primes p dividing n.
Power bases: In the case where the ring of integers has a power integral basis, that is, can be written as OK = Z[α], the discriminant of K is equal to the discriminant of the minimal polynomial of α. To see this, one can choose the integral basis of OK to be b1 = 1, b2 = α, b3 = α^2, ..., bn = α^(n−1). Then, the matrix in the definition is the Vandermonde matrix associated to αi = σi(α), whose determinant squared is
$$\prod_{1 \le i < j \le n} (\alpha_i - \alpha_j)^2,$$
which is exactly the definition of the discriminant of the minimal polynomial.
Let K = Q(α) be the number field obtained by adjoining a root α of the polynomial x^3 − x^2 − 2x − 8. This is Richard Dedekind's original example of a number field whose ring of integers does not possess a power basis. An integral basis is given by {1, α, α(α + 1)/2} and the discriminant of K is −503.
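As a numerical check of the power-basis discussion (added here as an illustrative sketch using the sympy library, not part of the original text), the discriminant of Dedekind's polynomial x^3 − x^2 − 2x − 8 is −2012 = 2^2 · (−503); the square factor is the square of the index [OK : Z[α]] = 2, consistent with the field discriminant −503 quoted above.

```python
from sympy import symbols, discriminant, factorint

x = symbols('x')

# Discriminant of the minimal polynomial in Dedekind's example.
poly_disc = discriminant(x**3 - x**2 - 2*x - 8)
print(poly_disc)              # -2012
print(factorint(-poly_disc))  # {2: 2, 503: 1}, so -2012 = 2**2 * (-503)

# The field discriminant is -503; the extra factor 2**2 is the square of the
# index [O_K : Z[alpha]] = 2, which is why Z[alpha] is not all of O_K.
```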
Examples:
Repeated discriminants: the discriminant of a quadratic field uniquely identifies it, but this is not true, in general, for higher-degree number fields. For example, there are two non-isomorphic cubic fields of discriminant 3969. They are obtained by adjoining a root of the polynomial x3 − 21x + 28 or x3 − 21x − 35, respectively.
Basic results:
Brill's theorem: The sign of the discriminant is (−1)^{r2}, where r2 is the number of complex places of K.
A prime p ramifies in K if and only if p divides ΔK .
Stickelberger's theorem: ΔK ≡ 0 or 1 (mod 4).
Minkowski's bound: Let n denote the degree of the extension K/Q and r2 the number of complex places of K; then
$$|\Delta_K|^{1/2} \geq \frac{n^n}{n!}\left(\frac{\pi}{4}\right)^{r_2} \geq \frac{n^n}{n!}\left(\frac{\pi}{4}\right)^{n/2}.$$
Minkowski's theorem: If K is not Q, then |ΔK| > 1 (this follows directly from the Minkowski bound).
Hermite–Minkowski theorem: Let N be a positive integer. There are only finitely many (up to isomorphisms) algebraic number fields K with |ΔK| < N. Again, this follows from the Minkowski bound together with Hermite's theorem (that there are only finitely many algebraic number fields with prescribed discriminant).
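For a sense of scale, the right-hand side of Minkowski's bound can be evaluated numerically for small degrees; the sketch below (an added illustration, not part of the original text) uses the weaker inequality with exponent n/2 and squares it to obtain a lower bound on |ΔK|:

```python
from math import pi, factorial

# Lower bound on |Delta_K| implied by Minkowski's bound, using the
# weakest case r2 = n/2 (the exponent n/2 in the second inequality).
for n in range(2, 8):
    bound_sqrt = (n**n / factorial(n)) * (pi / 4) ** (n / 2)
    print(n, round(bound_sqrt ** 2, 1))
# Already for n = 2 the bound on |Delta_K| is about 2.5 > 1, and it grows
# rapidly with n, illustrating Minkowski's and the Hermite-Minkowski theorems.
```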
History:
The definition of the discriminant of a general algebraic number field, K, was given by Dedekind in 1871. At this point, he already knew the relationship between the discriminant and ramification. Hermite's theorem predates the general definition of the discriminant, with Charles Hermite publishing a proof of it in 1857. In 1877, Alexander von Brill determined the sign of the discriminant. Leopold Kronecker first stated Minkowski's theorem in 1882, though the first proof was given by Hermann Minkowski in 1891. In the same year, Minkowski published his bound on the discriminant. Near the end of the nineteenth century, Ludwig Stickelberger obtained his theorem on the residue of the discriminant modulo four.
Relative discriminant:
The discriminant defined above is sometimes referred to as the absolute discriminant of K to distinguish it from the relative discriminant ΔK/L of an extension of number fields K/L, which is an ideal in OL. The relative discriminant is defined in a fashion similar to the absolute discriminant, but must take into account that ideals in OL may not be principal and that there may not be an OL basis of OK. Let {σ1, ..., σn} be the set of embeddings of K into C which are the identity on L. If b1, ..., bn is any basis of K over L, let d(b1, ..., bn) be the square of the determinant of the n by n matrix whose (i,j)-entry is σi(bj). Then, the relative discriminant of K/L is the ideal generated by the d(b1, ..., bn) as {b1, ..., bn} varies over all integral bases of K/L (i.e. bases with the property that bi ∈ OK for all i). Alternatively, the relative discriminant of K/L is the norm of the different of K/L. When L = Q, the relative discriminant ΔK/Q is the principal ideal of Z generated by the absolute discriminant ΔK. In a tower of fields K/L/F the relative discriminants are related by
$$\Delta_{K/F} = \mathcal{N}_{L/F}\left(\Delta_{K/L}\right)\, \Delta_{L/F}^{[K:L]},$$
where N denotes the relative norm.
Relative discriminant:
Ramification The relative discriminant regulates the ramification data of the field extension K/L. A prime ideal p of L ramifies in K if, and only if, it divides the relative discriminant ΔK/L. An extension is unramified if, and only if, the discriminant is the unit ideal. The Minkowski bound above shows that there are no non-trivial unramified extensions of Q. Fields larger than Q may have unramified extensions: for example, for any field with class number greater than one, its Hilbert class field is a non-trivial unramified extension.
Root discriminant:
The root discriminant of a degree n number field K is defined by the formula
$$\operatorname{rd}_K = |\Delta_K|^{1/n}.$$
The relation between relative discriminants in a tower of fields shows that the root discriminant does not change in an unramified extension.
Root discriminant:
Asymptotic lower bounds Given nonnegative rational numbers ρ and σ, not both 0, and a positive integer n such that the pair (r, 2s) = (ρn, σn) is in Z × 2Z, let αn(ρ, σ) be the infimum of rdK as K ranges over degree n number fields with r real embeddings and 2s complex embeddings, and let α(ρ, σ) = liminf n→∞ αn(ρ, σ). Then
$$\alpha(\rho, \sigma) \geq 60.8^{\rho}\, 22.3^{\sigma},$$
and the generalized Riemann hypothesis implies the stronger bound
$$\alpha(\rho, \sigma) \geq 215.3^{\rho}\, 44.7^{\sigma}.$$
Root discriminant:
There is also a lower bound that holds in all degrees, not just asymptotically: For totally real fields, the root discriminant is > 14, with 1229 exceptions.
Root discriminant:
Asymptotic upper bounds On the other hand, the existence of an infinite class field tower can give upper bounds on the values of α(ρ, σ). For example, the infinite class field tower over Q(√-m) with m = 3·5·7·11·19 produces fields of arbitrarily large degree with root discriminant 2√m ≈ 296.276, so α(0,1) < 296.276. Using tamely ramified towers, Hajir and Maire have shown that α(1,0) < 954.3 and α(0,1) < 82.2, improving upon earlier bounds of Martinet.
Relation to other quantities:
When embedded into K ⊗Q R, the volume of the fundamental domain of OK is $\sqrt{|\Delta_K|}$ (sometimes a different measure is used and the volume obtained is $2^{-r_2}\sqrt{|\Delta_K|}$, where r2 is the number of complex places of K).
Due to its appearance in this volume, the discriminant also appears in the functional equation of the Dedekind zeta function of K, and hence in the analytic class number formula, and the Brauer–Siegel theorem.
The relative discriminant of K/L is the Artin conductor of the regular representation of the Galois group of K/L. This provides a relation to the Artin conductors of the characters of the Galois group of K/L, called the conductor-discriminant formula. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Psychagogy**
Psychagogy:
Psychagogy is a psycho-therapeutic method of influencing behavior by suggesting desirable life goals. In a more spiritual context, it can mean guidance of the soul. It is considered to be one of many antecedents and components of modern psychology. European psychagogy's beginnings can be dated back to the time of Socrates and Plato. Psychagogic methods were implemented by such groups as the Stoics, Epicureans, and Cynics. The method was also eventually adopted by Paul the Apostle, James, as well as other early Christian thinkers. Enduring well into the 20th century, psychagogy influenced and was influenced by other psychological disciplines. The term psychagogy itself largely died out during the 1970s and 1980s; however, the concept continues to be practiced through modalities like cognitive behavioral therapy, life coaching and pastoral counseling.
Etymology:
The word comes from the Greek ψυχαγωγία from ψυχή "soul" and ἄγω "lead"; so it literally means "soul guidance".
History:
Ancient Greek psychagogy The psychagogy of Ancient Greece, also known as maieutic psychagogy, involved Socrates (or another advanced teacher) helping a participant to give birth to realities from within the participant himself.
History:
Maieutic: from "midwife", one who helps in the delivery of new life. Psychagogy: from Greek psûchê (soul) and agogê (transport). Within the ancient Greek tradition, psychagogy was viewed as the art of influencing the soul by means of rhetoric. Plato believed that the human soul possesses latent knowledge, which could be brought out and elucidated by a specific type of discourse which he called dialectic: a bringing to birth from the depths of a person's higher being. He believed that a higher consciousness was needed in order to do this, and the result would bring forth a literal enlightenment and a furthered understanding of human nature.
History:
Dialectic is the only philosophical process which seeks for wisdom by anagogically uplifting our Intellectual foundations so that our Higher Self ascends to the Origin.
Plato also believed that only a prepared student can be involved in this process, and that the only way to prepare a student was to have them learn by doing. The process of maieutic psychagogy cannot be transmitted through writing, since it requires that a person actually experience the dynamically unfolding procedure.
History:
Dialectic took place in public areas as well as private ones, as can be seen in many of Plato's works (such as Phaedo, Meno, Phaedrus and Theaetetus). Socrates is often recorded in these works as using the process of dialectic to bring the ideas of others into being, acting as a sort of soul guide (also known as a psychagogue). In Plato's Theaetetus Socrates equates himself to a midwife, helping to bring the thoughts of others to light through his words. The term is also used in Plato's Phaedrus (261a and 271c).
History:
Additionally, key to ancient Greek philosophy was the idea of living life well and becoming the best that a person can be. This idea can be summed up by the term eudaimonia (human flourishing). Psychagogy was one practice philosophers would use to encourage people to strive toward such a goal. Although this end goal may have differed slightly between the Stoics, Epicureans, and Cynics, each group included the use of psychagogic methods in their guiding of others. Greco-Roman philosophers often practiced psychagogy by asking people to drop their thoughts of traditional wisdom, and to ignore reputation, wealth and luxury. The term was also used by the ancient Greeks to describe plays intended to teach civilians higher concepts; if a play had no higher teachings but was still captivating, it was considered "entertainment" (a word that has been glossed as "entering and holding the mind": enter- from Latin intro, from intra ("inside"); -tain from Latin teneo ("hold, grasp, possess, occupy, control"), as in sustain and obtain; and -ment from Latin mens ("the mind")). Early Christian psychagogy It is thought that the idea of psychagogy was taken up by the Apostle Paul of Tarsus and early Christian thinkers, who relied on psychagogic techniques in writing the New Testament. However, psychagogy in Early Christianity took on a flavor of its own, differing slightly from the form of psychagogy that was familiar to the ancient Greeks. Psychagogy in the Early Christian sense, while retaining its use of rhetoric, placed a special emphasis on the emotions. Paul especially used this tactic while writing his epistles. He wrote these letters to new members of the Christian faith, often encouraging them toward virtue and to become mature and complete. Paul used psychagogy in order to do so effectively, fashioning his words to fit the needs of the community. Paul presented his words gently, unlike most Cynics, who were known to speak critically and aggressively. Psychagogy around this time was widespread and was recognized by nearly all religious and philosophical groups. Considering this, it makes sense that psychagogy would have been taught in many philosophical schools, which was perhaps how Paul learned to use such language to influence the mindset and behaviors of his audience. One such group that recognized and applied psychagogic methods was those who led monastic lifestyles. Paul Dilley, an assistant professor of religious studies at the University of Iowa, has extensively studied this topic. Much of his research is summarized in his book Care of the Other in Ancient Monasticism: A Cultural History of Ascetic Guidance. In it, he argues that monastic psychagogy is based on the fundamental concept of a struggle for identity, a battle against hostile forces which challenge disciples' progress in virtue and salvation.
History:
He describes the two fundamental ascetic exercises, which recent converts began to practice immediately: the recitation of scripture and the fear of God, a complex sense of shame, guilt, and aversion to pain which could be mobilized to combat temptation. These exercises were learned both through individual effort, and the often harsh chastisement, both physical and verbal, of one's teacher. This style of psychagogy is similar to Plato's in that it involves a teacher in order to properly convey the techniques.
History:
Dilley states that the war with thoughts and emotions is definitely one of the most distinctive aspects of Christian psychagogy, and is connected to the importance of teachers and their emotional support, for the progress of disciples, until they are qualified to instruct others.
History:
20th century psychagogy Psychagogy maintained its association with ethical and moral self-improvement, and during the 1920s psychagogic methods were assimilated into the work of hypnosis, psychoanalysis, and psychotherapy. The International Institute for Psychagogy and Psychotherapy was founded in 1924 by Charles Baudouin, a Swiss psychoanalyst. In turn, psychagogy was influenced by other psychological fields such as social psychology, developmental psychology, and depth psychology. Due to the additional effect of special education and social work on the field during the 1950s and 1960s, psychagogy and its practitioners found their way to the specialized role of working with emotionally disturbed adolescents.
History:
In 1955, Rational Emotive Behavior Therapy (REBT) was developed by Albert Ellis, an American psychologist. Heavily influenced by psychagogic methods, REBT is an evidence-based psychotherapy that promotes goal achievement and well-being by first resolving negative emotions and behaviors. Ellis' work was extended by American psychiatrist Aaron Beck through his development of cognitive therapy. The work of Ellis, Beck and their students became known as cognitive behavioral therapy (CBT), which became a very common form of psychotherapy.
History:
As CBT became more known and practiced, the term psychagogy fell out of use during the 1970s and 1980s.
History:
Psychagogy today Although the term itself is no longer common, psychagogy's influence on modern day psychology can be seen mostly within the context of pastoral counseling and cognitive behavioral therapy. Like those previously labeled "psychagogues", pastoral counselors and practitioners of CBT exhibit the same kind of care, gentleness, and encouragement in the interest of helping their patients to alter maladaptive thoughts and behaviors (or in other words, changing negative patterns of thinking and behaving to more positive ways of thinking and behaving in response to a given stimulus). These are the people "guiding souls" today. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Energy & Fuels**
Energy & Fuels:
Energy & Fuels is a peer-reviewed scientific journal published by the American Chemical Society. It was established in 1987. Its publication frequency switched from bimonthly to monthly in 2009. The editor-in-chief is Hongwei Wu (Curtin University).
According to the American Chemical Society, Energy & Fuels publishes reports of research in the technical area defined by the intersection of the disciplines of chemistry and chemical engineering and the application domain of non-nuclear energy and fuels.
Editors:
The following is the current list of Associate Editors serving the journal.
Anthony Dufour, The National Center for Scientific Research (CNRS)
H. Scott Fogler, University of Michigan
Praveen Linga, National University of Singapore
Anja Oasmaa, VTT Technical Research Centre of Finland Ltd.
Ah-Hyung (Alissa) Park, Columbia University
Andrew Pomerantz, Schlumberger-Doll
Luiz P. Ramos, Federal University of Parana
Ryan P. Rodgers, Florida State University
Zongping Shao, Nanjing Tech University
John M. Shaw, University of Alberta
Jennifer Wilcox, Worcester Polytechnic Institute
Minghou Xu, Huazhong University of Science and Technology
Abstracting and indexing:
The journal is abstracted and indexed in several bibliographic databases. According to the Journal Citation Reports, the journal has a 2021 Journal Impact Factor of 4.654. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cu2+-exporting ATPase**
Cu2+-exporting ATPase:
Cu2+-exporting ATPase (EC 3.6.3.4) is an enzyme with systematic name ATP phosphohydrolase (Cu2+-exporting). This enzyme catalyses the following chemical reaction: ATP + H2O + Cu2+ (in) ⇌ ADP + phosphate + Cu2+ (out). This P-type ATPase undergoes covalent phosphorylation during the transport cycle. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Telegeodynamics**
Telegeodynamics:
Telegeodynamics is an electromechanical earth-resonance concept for underground seismic exploration proposed by Nikola Tesla.
Description:
Tesla designed this system for use in prospecting and discerning the location of underground mineral structures through the transmission of mechanical energy through the subsurface. Data from reflected and refracted signals can be analyzed to deduce the location and characteristics of underground formations. Additional non-mechanical responses to the initial acoustic impulses may also be detectable using instruments that measure various electrical and magnetic parameters. Such predicted responses would, at a minimum, take the form of induced electric and magnetic fields, telluric currents, and changes in earth conductivity.
Description:
The electromechanical oscillator was originally designed as a source of isochronous (that is to say, frequency-stable) alternating electric current used with both wireless transmitting and receiving apparatus. In dynamical system theory an oscillator is called isochronous if its frequency is independent of its amplitude. An electromechanical device runs at the same rate regardless of changes in its drive force, so it maintains a constant frequency (in hertz). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Transitional epithelium**
Transitional epithelium:
Transitional epithelium is a type of stratified epithelium. Transitional epithelium is a type of tissue that changes shape in response to stretching (stretchable epithelium). The transitional epithelium usually appears cuboidal when relaxed and squamous when stretched. This tissue consists of multiple layers of epithelial cells which can contract and expand in order to adapt to the degree of distension needed. Transitional epithelium lines the organs of the urinary system and is known here as urothelium (PL: urothelia). The bladder, for example, has a need for great distension.
Structure:
The appearance of transitional epithelium differs according to its cell layer. Cells of the basal layer are cuboidal (cube-shaped), or columnar (column-shaped), while the cells of the superficial layer vary in appearance depending on the degree of distension. These cells appear to be cuboidal with a domed apex when the organ or the tube in which they reside is not stretched. When the organ or tube is stretched (such as when the bladder is filled with urine), the tissue compresses and the cells become stretched. When this happens, the cells flatten, and they appear to be squamous and irregular.
Structure:
Cell layers Transitional epithelium is made up of three types of cell layers: basal, intermediate, and superficial. The basal layer fosters the epithelial stem cells in order to provide constant renewal of the epithelium. These cells' cytoplasm is rich in tonofilaments and mitochondria; however, they contain few rough endoplasmic reticulum. The tonofilaments play a role in the attachment of the basal layer to the basement membrane via desmosomes. The intermediate cell layer is highly proliferative and, therefore, provides for rapid cell regeneration in response to injury or infection of the organ or tube in which it resides. These cells contain a prominent Golgi apparatus and an array of membrane-bound vesicles. These function in the packaging and transport of proteins, such as keratin, to the superficial cell layer. The cells of the superficial cell layer that lines the lumen are known as facet cells or umbrella cells. This layer is the only fully differentiated layer of the epithelium. It provides an impenetrable barrier between the lumen and the bloodstream, so as not to allow the bloodstream to reabsorb harmful wastes or pathogens. All transitional epithelial cells are covered in microvilli and a fibrillar mucous coat.The epithelium contains many intimate and delicate connections to neural and connective tissue. These connections allow for communication to tell the cells to expand or contract. The superficial layer of transitional epithelium is connected to the basal layer via cellular projections, such as intermediate filaments protruding from the cellular membrane. These structural elements cause the epithelium to allow distension; however, these also cause the tissue to be relatively fragile and, therefore, difficult to study. All cells touch the basement membrane.
Structure:
Cell membrane The urothelium is the most impermeable membrane in the mammalian body. Because of its importance in acting as an osmotic barrier between the contents of the urinary tract and the surrounding organs and tissues, transitional epithelium is relatively impermeable to water and salts. This impermeability is due to a highly keratinized cellular membrane synthesized in the Golgi apparatus. The membrane is made up of a hexagonal lattice put together in the Golgi apparatus and implanted into the surface of the cell by reverse pinocytosis, a type of exocytosis. The cells in the superficial layer of the transitional epithelium are highly differentiated, allowing for maintenance of this barrier membrane. The basal layer of the epithelium is much less differentiated; however, it does act as a replacement source for more superficial layer. While the Golgi complex is much less prominent in the cells of the basal layer, these cells are rich in cytoplasmic proteins that bundle together to form tonofibrils. These tonofibrils converge at hemidesmosomes to attach the cells at the basement membrane.
Function:
The transitional epithelium cells stretch readily in order to accommodate fluctuation of volume of the liquid in an organ (the distal part of the urethra becomes non-keratinized stratified squamous epithelium in females; the part that lines the bottom of the tissue is called the basement membrane). Transitional epithelium also functions as a barrier between the lumen, or inside hollow space of the tract that it lines and the bloodstream. To help achieve this, the cells of transitional epithelium are connected by tight junctions, or virtually impenetrable junctions that seal together to the cellular membranes of neighboring cells. This barrier prevents re-absorption of toxic wastes and pathogens by the bloodstream.
Clinical significance:
Urothelium is susceptible to carcinoma. Because the bladder is in contact with urine for extended periods, chemicals that become concentrated in the urine can cause bladder cancer. For example, cigarette smoking leads to the concentration of carcinogens in the urine and is a leading cause of bladder cancer. Aristolochic acid, a compound found in plants of the family Aristolochiaceae, also causes DNA mutations and is a cause of liver, urothelial and bladder cancers. Occupational exposure to certain chemicals is also a risk factor for bladder cancer. This can include aromatic amines (aniline dye), polycyclic aromatic hydrocarbons, and diesel engine exhaust.
Clinical significance:
Carcinoma Carcinoma is a type of cancer that occurs in epithelial cells. Transitional cell carcinoma is the leading type of bladder cancer, occurring in 9 out of 10 cases. It is also the leading cause of cancer of the ureter, urethra, and urachus, and the second leading cause of cancer of the kidney. Transitional cell carcinoma can develop in two different ways. Should the transitional cell carcinoma grow toward the inner surface of the bladder via finger-like projections, it is known as papillary carcinoma. Otherwise, it is known as flat carcinoma. Either form can transition from non-invasive to invasive by spreading into the muscle layers of the bladder. Transitional cell carcinoma is commonly multifocal, more than one tumor occurring at the time of diagnosis.
Clinical significance:
Transitional cell carcinoma can metastasize, or spread to other parts of the body via the surrounding tissues, the lymph system, and the bloodstream. It can spread to the tissues and fat surrounding the kidney, the fat surrounding the ureter, or, more progressively, lymph nodes and other organs, including bone. Common risk factors of transitional cell carcinoma include long-term misuse of pain medication, smoking, and exposure to chemicals used in the making of leather, plastic, textiles, and rubber. Transitional cell carcinoma patients have a variety of treatment options. These include nephroureterectomy, or the removal of kidney, ureter, and bladder cuff, and segmental resection of the ureter. The latter is an option only when the cancer is superficial and affects only the bottom third of the ureter; the procedure entails removing the segment of cancerous ureter and reattaching the end. Patients with advanced bladder cancer or disease also often look to bladder reconstruction as a treatment. Current methods of bladder reconstruction include the use of gastrointestinal tissue. However, while this method is effective in improving the function of the bladder, it can actually increase the risk of cancer and can cause other complications, such as infections, urinary stones, and electrolyte imbalance. Other methods are therefore being explored; for example, current research paves the way for the use of pluripotent stem cells to derive urothelium, as they are highly and indefinitely proliferative in vitro (i.e. outside of the body).
Clinical significance:
Interstitial cystitis Interstitial cystitis (IC), a type of painful bladder syndrome, is a chronic disease of the bladder that causes feelings of pressure and pain in the bladder, among other symptoms which can range from mild to severe. Urinary frequency and urgency are the most common symptoms associated with the disease. The exact causes of IC/BPS are unknown, but there is evidence of an association between increased permeability of the urothelium and IC. Since the purpose of the urothelium is to act as a highly resistant barrier, the loss of this function has serious clinical implications. Many patients with IC have exhibited a loss of umbrella cells.
Clinical significance:
Urothelial lesions
Papillary urothelial lesions: Papillary urothelial hyperplasia; Urothelial papilloma; Papillary urothelial neoplasm of low malignant potential (PUNLMP); Low-grade papillary urothelial carcinoma; High-grade papillary urothelial carcinoma; Invasive urothelial carcinoma.
Flat urothelial lesions: Reactive urothelial atypia; Urothelial inverted papilloma; Urothelial atypia of unknown significance; Urothelial dysplasia; Urothelial carcinoma in situ.
Invasive urothelial carcinoma: Invasive urothelial carcinoma (NOS); Urothelial carcinoma with inverted growth pattern; Urothelial carcinoma with squamous differentiation; Urothelial carcinoma with villoglandular differentiation; Urothelial carcinoma, micropapillary variant; Urothelial carcinoma, lymphoepithelioma-like variant; Urothelial carcinoma, clear cell (glycogen-rich) variant; Urothelial carcinoma, lipoid cell variant; Urothelial carcinoma with syncitiotrophoblastic giant cells; Urothelial carcinoma with rhabdoid differentiation; Urothelial carcinoma similar to giant cell tumor of bone. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Papagoite**
Papagoite:
Papagoite is a rare cyclosilicate mineral. Chemically, it is a calcium copper aluminium silicate hydroxide, found as a secondary mineral on slip surfaces and in altered granodiorite veins, either in massive form or as microscopic crystals that may form spherical aggregates. Its chemical formula is CaCuAlSi2O6(OH)3.
Papagoite:
It was discovered in 1960 in Ajo, Arizona, US, and was named after the Hia C-ed O'odham people (also known as the Sand Papago) who inhabit the area. This location is the only papagoite source within the United States, while worldwide it is also found in South Africa and Namibia. It is associated with aurichalcite, shattuckite, ajoite and baryte in Arizona, and with quartz, native copper and ajoite in South Africa. Its bright blue color is the mineral's most notable characteristic.
Papagoite:
It is used as a gemstone. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**PL/Perl**
PL/Perl:
PL/Perl (Procedural Language/Perl) is a procedural language supported by the PostgreSQL RDBMS.
PL/Perl, as an imperative programming language, allows more control than the relational algebra of SQL.
Programs created in the PL/Perl language are called functions and can use most of the features that the Perl programming language provides, including common flow control structures and syntax that has incorporated regular expressions directly.
These functions can be evaluated as part of a SQL statement, or in response to a trigger or rule.
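As an illustration of how a PL/Perl function is created and then evaluated from a SQL statement, here is a minimal sketch driven from Python via the psycopg2 driver. The connection parameters are hypothetical, and it assumes a PostgreSQL server on which the plperl extension has been installed:

```python
import psycopg2

# Hypothetical connection parameters; adjust for a real server that has
# the plperl extension installed (CREATE EXTENSION plperl).
conn = psycopg2.connect(dbname="testdb", user="postgres")
cur = conn.cursor()

# Define a PL/Perl function: the body between the $$ markers is ordinary Perl.
cur.execute("""
    CREATE OR REPLACE FUNCTION perl_max(integer, integer) RETURNS integer AS $$
        my ($x, $y) = @_;
        return $x >= $y ? $x : $y;
    $$ LANGUAGE plperl;
""")

# The function can now be evaluated as part of a SQL statement.
cur.execute("SELECT perl_max(10, 42);")
print(cur.fetchone()[0])  # 42

conn.commit()
cur.close()
conn.close()
```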
PL/Perl:
The design goals of PL/Perl were to create a loadable procedural language that: can be used to create functions and trigger procedures; adds control structures to the SQL language; can perform complex computations; can be defined to be either trusted or untrusted by the server; and is easy to use. PL/Perl is one of many "PL" languages available for PostgreSQL, alongside PL/pgSQL, PL/Java, plPHP, PL/Python, PL/R, PL/Ruby, PL/sh, and PL/Tcl. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Eyewear retailer**
Eyewear retailer:
Eyewear is a term used to refer to all accessories worn over both of a person's eyes, or occasionally a single eye, for one or more of a variety of purposes. Though historically used for vision improvement and correction, eyewear has also evolved into eye protection, for fashion and aesthetic purposes, and starting in the late 20th century, computers and virtual reality.
Eyewear retailer:
The primary intention of wearing eyewear can vary based on the need or desire of the wearer. Corrective lenses, such as glasses, contact lenses, and, historically, monocles are used to aid in one's vision and enable users to see clearly. Eyewear also can be used for protection, such as sunglasses which protect wearers from the Sun's ultraviolet rays which are damaging to the eyes when unprotected, eyepatches to protect injured eyes from further damage, or goggles which protect the wearer's eyes from debris, water and other chemicals. Variants of eyewear can conversely inhibit or disable vision for its bearers, such as blindfolds and view-limiting device for humans, blinkers for horses, or blinders for birds, especially poultry. Eyewear also exists for other specialized or niche purposes, such as active shutter 3D systems and anaglyph 3D glasses for stereoscopy, and night-vision goggles for low-light environments.
Eyewear retailer:
The eyewear industry is estimated to be valued at $100 billion USD as of May 2018. Much of the eyewear industry's prominence and use in fashion occurred in Western cultures during the 1950s, with individual designers and celebrities at the time wearing eyewear in public and increasing its popularity, especially sunglasses. The growth of the industry through the latter half of the 20th century is largely attributed to Luxottica, generally credited with acquiring brands popular with Western culture such as Ray-Ban, Persol, and later Oakley, raising their prices and increasing the perceived status of eyewear in society. The 2010s and early 2020s saw an increasingly technical focus on the utility of eyewear, with early experiments such as Google Glass, Microsoft HoloLens and later Apple Vision Pro bringing augmented reality to eyewear; virtual reality headsets also began a growth in popularity in the 2010s.
History:
Pre-modern innovations Quartz was among the earliest used materials for reading stones, the precursors to wearable optics; quartz also became the foundation for glasses, the first major form of eyewear. The first incarnations of glasses were made with the aim of providing aid to reading. Though innovations in pre-modern eyewear technology occurred in both Imperial China and the Inuit territories, which both invented early forms of sunglasses and goggles, Venice and Northern Italy have historically been the place of consolidation for eyewear innovation in the Western world. With the spread of the printing press and the mass adoption of literature, larger sectors of the population began to buy into eyewear to assist with reading. Eyewear frames around this time were mainly made of animal bones, horns and fabric; the implementation of wire frames in the 16th century further allowed glasses to be mass-produced. The 16th century also saw the earliest ancestors of pince-nez eyewear, which secured itself to the wearer by "pinching" the nose and later would become popular in the 18th and 19th centuries.
History:
Temple eyeglasses The first half of the 18th century saw British optician Edward Scarlett perfect temple eyeglasses which would rest on the nose and the ears. The innovations presented by Scarlett would not only spark some to look at aesthetic customization of eyewear for fashion within Europe but also lead Benjamin Franklin to invent bifocals in colonial America. Later in the middle of the century, Britain also saw its first popularized wave of sunglasses as James Ayscough created and sold blue and green tinted sunglasses for general vision improvement.
History:
Surge in popularity Despite earlier developments, eyewear began its surge in popularity in 1929. Foster Grant, which first went into business that year, was among the earliest large retailers of eyeglasses in the United States, setting up shop on the Atlantic City Boardwalk in New Jersey. The United States Army Air Corps was among the first large clients for sunglasses when it worked with Bausch + Lomb to create sunglasses which protected its pilots from glare. These sunglasses later evolved into aviator sunglasses, and the resulting name and brand, Ray-Ban, became synonymous with army pilots and, later on, with fashion. Foster Grant continued contributing to the growth of the eyewear industry for fashion by running large campaigns featuring celebrities. By the 1960s, the company had become synonymous with eyewear in America and was the dominant producer of sunglasses in the Western world. Ray-Ban had also become a leader in sunglasses around this time, with its aviator style and later Wayfarer style taking off in popularity. Mass-market eyewear experienced a popularity drought in the 1970s due to luxury brands like Dior and Yves Saint Laurent entering the industry, though Ray-Ban began to experience a cultural revival during the 1980s due to adoption by Hollywood celebrities both inside and outside of movies.
History:
Industry consolidation into EssilorLuxottica 1971 saw the rise of the Italian company Luxottica onto the scene when founder Leonardo Del Vecchio launched his finished eyeglasses at the Milan International Optics Exhibition. The next two decades saw Luxottica, at this point exclusively focusing on sunglasses, grow within Europe and slowly begin to buy up sunglasses brands and retailers; 1988 saw its first major licensing deal to produce sunglasses for Giorgio Armani. By the year 2001, Luxottica had acquired retailers LensCrafters and Sunglass Hut; the company additionally acquired the entirety of Persol in 1995 for an undisclosed amount and Ray-Ban from Bausch + Lomb in 1999 for $640 million USD. The Italian eyewear firm pulled Ray-Ban from stores across the United States in order to re-engineer the product and mark Ray-Ban up as a premium sunglasses brand, pushing for a global expansion afterwards; Luxottica additionally pushed Ray-Ban into Far Eastern markets to diversify the brand's appeal beyond the Western world. Luxottica's rise also coincided with a battle between the United Fruit Company (today Chiquita) and Goody Brands for the remaining stock of Foster Grant. Both contenders eventually lost out to the German chemicals firm Hoechst AG after each company pulled out due to non-eyewear-related factors. In 2006, Essilor bought Foster Grant for $465 million. About a year after Essilor acquired Foster Grant, Luxottica acquired sports eyewear manufacturer Oakley in 2007 for $2.1 billion USD. The acquisition followed a pricing dispute between Luxottica and Oakley, during which Luxottica caused Oakley's stock price to plummet by pulling the brand from its stores. Ten years later, Luxottica would merge with Essilor in a €48 billion deal.
History:
Technology and disruption of the industry Virtual reality slowly became a more prominent technology starting in the 1990s, after refinement of 1950s prototypes pushed by NASA and other technology companies. Sega was among the first companies to introduce head-mounted virtual reality headsets for theme park rides at Joypolis locations. The first major jump in virtual reality, however, was the Oculus Rift, which later evolved into the Quest line made by Facebook owner Meta Platforms. The success of the Rift later incentivized other tech companies like Sony (through its PlayStation brand) and HTC to release their own competitors to Oculus; Microsoft, Google, and Apple also all released or announced mixed-reality eyewear products throughout the 2010s and early 2020s. The internet also incentivized the founding of Warby Parker, which stated that its express purpose was to combat the high markups charged by other eyewear companies. Warby Parker disrupted the eyewear market with its price point, as well as the ability to try on up to five of its glasses for free and order products online. The company's success in disrupting the traditionally brick-and-mortar eyewear industry through an online alternative has led to other companies outside of eyewear being described as "the Warby Parker of" certain industries, though the company has also recently invested in brick-and-mortar stores. Online technologies also led to greater exposure of Luxottica's dominance over the eyewear industry, with CBS's 60 Minutes, CNBC, and Adam Ruins Everything all releasing episodes on the subject.
Eyewear industry:
Since the beginning of fashionable eyewear in the 20th century, much of the eyewear industry has been headquartered in either North America or Northern Italy, with early industry giants Foster Grant and Bausch & Lomb contracting with Hollywood and the U.S. Armed Forces respectively. During the Great Depression, both Bausch & Lomb and Polaroid Corporation founder Edwin H. Land experimented with polarization of lenses, intended to reduce glare; Bausch & Lomb's experiments, delivered to the American armed forces, created the Ray-Ban brand. The eyewear industry is estimated to reach a valuation of around $111 billion USD by 2026, and $172 billion USD by 2028.
Eyewear industry:
Eyewear retail Eyewear retail is a steadily growing business, driven by the rising global population, economic development, increased consumer purchasing power, and the global prevalence of ocular diseases. The increased use of digital screens has led to an increase in vision impairment, cataracts, myopia, hypermetropia, eye irritation, dry eyes, computer vision syndrome and double vision. Sunglasses make up 42% of the global eyewear market as of 2020. They protect the eyes from sun damage and conjunctivitis, but are also sold as fashion accessories, with many consumers opting to have a number of sunglasses for different occasions. EssilorLuxottica controls a dominant portion of the eyewear retail market. As of 2021, the largest single eyewear retail chain in the United States by sales revenue is Essilor subsidiary Vision Source, which sold $2.672 billion USD in 2021. Chains controlled by the Luxottica division of EssilorLuxottica, which include LensCrafters and Sunglass Hut, made a combined $2.41 billion USD that same year; the largest non-Luxottica chain by sales was National Vision Holdings, making $2.080 billion USD. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Rain and snow mixed**
Rain and snow mixed:
Rain and snow mixed is precipitation composed of a mixture of rain and partially melted snow. Unlike ice pellets, which are hard, and freezing rain, which is fluid until striking an object where it fully freezes, this precipitation is soft and translucent, but it contains some traces of ice crystals from partially fused snowflakes, also called slush. In any one location, it usually occurs briefly as a transition phase from rain to snow or vice-versa, but hits the surface before fully transforming. Its METAR code is RASN or SNRA.
Terminology:
This precipitation type is commonly known as sleet in most Commonwealth countries. However, the United States National Weather Service uses the term sleet to refer to ice pellets instead.
Formation:
This precipitation occurs when the temperature in the lowest part of the atmosphere is slightly above the freezing point of water (0 °C or 32 °F). The depth of low-level warm air (below the freezing level) needed to melt snow falling from above to rain varies from about 230–460 m (750–1,500 ft) and depends on the mass of the flakes and the lapse rate of the melting layer. Rain and snow typically mix when the depth of the melting layer falls within this range, because snow falling through it only partially melts into rain.
"Wintry showers" or "wintry mixes":
Wintry showers is a somewhat informal meteorological term, used primarily in the United Kingdom, to refer to various mixtures of rain, graupel and snow at once. Though no "official" criteria exist for the term, in the United Kingdom the term is not used when any significant accumulation of snow on the ground takes place. It is often used when the temperature of the ground surface is above 0 °C (32 °F), preventing accumulation from occurring even if the air temperature near the surface is marginally below 0 °C (32 °F); but even then, the falling precipitation must generally include something other than snow alone.
"Wintry showers" or "wintry mixes":
In the United States, wintry mix generally refers to a mixture of freezing rain, ice pellets, and snow. In contrast to the usage in the United Kingdom, in the United States it is usually used when both air and ground temperatures are below 0 °C (32 °F). Additionally, it is generally used when some surface accumulation of ice and snow is expected to occur. During winter, a wide area can be affected by the multiple mixed precipitation types typical of a wintry mix during a single winter storm, as counterclockwise winds around a storm system bring warm air northwards ahead of the system, and then bring cold air back southwards behind it. Most often, it is the region ahead of the approaching storm system which sees the wintry mix, as warm air moves northward and above retreating cold air in a warm front, causing snow to change into ice pellets, freezing rain and finally rain. The reverse transition can occur behind the departing low-pressure system, though it is more common for precipitation to freeze directly from rain to snow, or for it to stop before a transition back. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Visual cryptography**
Visual cryptography:
Visual cryptography is a cryptographic technique which allows visual information (pictures, text, etc.) to be encrypted in such a way that the decrypted information appears as a visual image.
Visual cryptography:
One of the best-known techniques has been credited to Moni Naor and Adi Shamir, who developed it in 1994. They demonstrated a visual secret sharing scheme, where an image was broken up into n shares so that only someone with all n shares could decrypt the image, while any n − 1 shares revealed no information about the original image. Each share was printed on a separate transparency, and decryption was performed by overlaying the shares. When all n shares were overlaid, the original image would appear. There are several generalizations of the basic scheme including k-out-of-n visual cryptography, and using opaque sheets but illuminating them by multiple sets of identical illumination patterns under the recording of only one single-pixel detector. Using a similar idea, transparencies can be used to implement a one-time pad encryption, where one transparency is a shared random pad, and another transparency acts as the ciphertext. Normally, there is an expansion of space requirement in visual cryptography. But if one of the two shares is structured recursively, the efficiency of visual cryptography can be increased to 100%. Some antecedents of visual cryptography are in patents from the 1960s. Other antecedents are in the work on perception and secure communication. Visual cryptography can be used to protect biometric templates, in which case decryption does not require any complex computations.
Example:
In this example, the image has been split into two component images. Each component image has a pair of pixels for every pixel in the original image. These pixel pairs are shaded black or white according to the following rule: if the original image pixel was black, the pixel pairs in the component images must be complementary; randomly shade one ■□, and the other □■. When these complementary pairs are overlapped, they will appear dark gray. On the other hand, if the original image pixel was white, the pixel pairs in the component images must match: both ■□ or both □■. When these matching pairs are overlapped, they will appear light gray.
Example:
So, when the two component images are superimposed, the original image appears. However, without the other component, a component image reveals no information about the original image; it is indistinguishable from a random pattern of ■□ / □■ pairs. Moreover, if you have one component image, you can use the shading rules above to produce a counterfeit component image that combines with it to produce any image at all.
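The pixel-pair rule described above is straightforward to implement. The following is a minimal sketch in Python of the (2, 2) case for a small binary image given as a list of lists (1 = black, 0 = white); it is an added illustration, not code from the original text:

```python
import random

def make_shares(image):
    """Split a binary image into two shares of paired subpixels.

    Black pixels get complementary pairs; white pixels get matching pairs.
    """
    patterns = [(1, 0), (0, 1)]  # the two possible subpixel pairs
    share1, share2 = [], []
    for row in image:
        r1, r2 = [], []
        for pixel in row:
            p = random.choice(patterns)
            r1.extend(p)
            r2.extend((1 - p[0], 1 - p[1]) if pixel == 1 else p)
        share1.append(r1)
        share2.append(r2)
    return share1, share2

def overlay(s1, s2):
    """Stack two transparencies: a subpixel is black if it is black in either share."""
    return [[a | b for a, b in zip(r1, r2)] for r1, r2 in zip(s1, s2)]

secret = [[1, 0],
          [0, 1]]
s1, s2 = make_shares(secret)
# Black secret pixels overlay to fully black pairs; white pixels overlay to
# half-black (light gray) pairs, so the secret reappears when stacked.
print(overlay(s1, s2))
```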
(2, N) Visual Cryptography Sharing Case:
Sharing a secret with an arbitrary number of people N such that at least 2 of them are required to decode the secret is one form of the visual secret sharing scheme presented by Moni Naor and Adi Shamir in 1994. In this scheme we have a secret image which is encoded into N shares printed on transparencies. The shares appear random and contain no decipherable information about the underlying secret image; however, if any 2 of the shares are stacked on top of one another, the secret image becomes decipherable by the human eye.
(2, N) Visual Cryptography Sharing Case:
Every pixel from the secret image is encoded into multiple subpixels in each share image using a matrix to determine the color of the pixels. In the (2, N) case a white pixel in the secret image is encoded using a matrix from the following set, where each row gives the subpixel pattern for one of the components: {all permutations of the columns of}
$$C_0 = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 1 & 0 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 1 & 0 & \cdots & 0 \end{bmatrix}.$$
(2, N) Visual Cryptography Sharing Case:
While a black pixel in the secret image is encoded using a matrix from the following set: {all permutations of the columns of}
$$C_1 = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}.$$
(2, N) Visual Cryptography Sharing Case:
For instance in the (2,2) sharing case (the secret is split into 2 shares and both shares are required to decode the secret) we use complementary matrices to share a black pixel and identical matrices to share a white pixel. Stacking the shares we have all the subpixels associated with the black pixel now black while 50% of the subpixels associated with the white pixel remain white.
Cheating the (2,N) Visual Secret Sharing Scheme:
Horng et al. proposed a method that allows N − 1 colluding parties to cheat an honest party in visual cryptography. They take advantage of knowing the underlying distribution of the pixels in the shares to create new shares that combine with existing shares to form a new secret message of the cheaters' choosing. We know that 2 shares are enough to decode the secret image using the human visual system. But examining two shares also gives some information about the 3rd share. For instance, colluding participants may examine their shares to determine when they both have black pixels and use that information to determine that another participant will also have a black pixel in that location. Knowing where black pixels exist in another party's share allows them to create a new share that will combine with the predicted share to form a new secret message. In this way a set of colluding parties that have enough shares to access the secret code can cheat other honest parties.
In popular culture:
In "Do Not Forsake Me Oh My Darling", a 1967 episode of TV series The Prisoner, the protagonist uses a visual cryptography overlay of multiple transparencies to reveal a secret message – the location of a scientist friend who had gone into hiding. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Soil health**
Soil health:
Soil health is a state of a soil meeting its range of ecosystem functions as appropriate to its environment. In more colloquial terms, the health of soil arises from favorable interactions of all soil components (living and non-living) that belong together, as in microbiota, plants and animals. It is possible that a soil can be healthy in terms of eco-system functioning but not necessarily serve crop production or human nutrition directly, hence the scientific debate on terms and measurements.
Soil health:
Soil health testing is pursued as an assessment of this status but tends to be confined largely to agronomic objectives, for obvious reasons. Soil health depends on soil biodiversity (with a robust soil biota), and it can be improved via soil management, especially by care to keep protective living covers on the soil and by natural (carbon-containing) soil amendments. Inorganic fertilizers do not necessarily damage soil health if 1) used at appropriate and not excessive rates and 2) if they bring about a general improvement of overall plant growth which contributes more carbon-containing residues to the soil.
Aspects:
The term soil health is used to describe the state of a soil in: Sustaining plant and animal productivity (agronomic focus); Enhancing biodiversity (Soil biodiversity) (ecological focus); Maintaining or enhancing water and air quality (environmental/climate focus); Supporting human health and habitation; Sequestering carbon.
Aspects:
Soil Health has partly if not largely replaced the expression "Soil Quality" that was extant in the 1990s. The primary difference between the two expressions is that soil quality was focused on individual traits within a functional group, as in "quality of soil for maize production" or "quality of soil for roadbed preparation" and so on. The addition of the word "health" shifted the perception to be integrative, holistic and systematic. The two expressions still overlap considerably. Soil Health as an expression derives from organic or "biological farming" movements in Europe, however, well before soil quality was first applied as a discipline around 1990. In 1978, Swiss soil biologist Dr Otto Buess wrote an essay "The Health of Soil and Plants" which largely defines the field even today.
Aspects:
The underlying principle in the use of the term "soil health" is that soil is not just an inert, lifeless growing medium, which modern intensive farming tends to represent, rather it is a living, dynamic and ever-so-subtly changing whole environment. It turns out that soils highly fertile from the point of view of crop productivity are also lively from a biological point of view. It is now commonly recognized that soil microbial biomass is large: in temperate grassland soil the bacterial and fungal biomass have been documented to be 1–2 t (2.0 long tons; 2.2 short tons)/hectare and 2–5 t (4.9 long tons; 5.5 short tons)/ha, respectively.
Aspects:
Some microbiologists now believe that 80% of soil nutrient functions are essentially controlled by microbes. Using the human health analogy, a healthy soil can be categorized as one: In a state of composite well-being in terms of biological, chemical and physical properties; Not diseased or infirmed (i.e. not degraded, nor degrading), nor causing negative off-site impacts; With each of its qualities cooperatively functioning such that the soil reaches its full potential and resists degradation; Providing a full range of functions (especially nutrient, carbon and water cycling) and in such a way that it maintains this capacity into the future.
Conceptualisation:
Soil health is the condition of the soil in a defined space and at a defined scale relative to a set of benchmarks that encompass healthy functioning. It would not be appropriate to refer to soil health for soil-roadbed preparation, as in the analogy of soil quality in a functional class. The definition of soil health may vary between users of the term as alternative users may place differing priorities upon the multiple functions of a soil.
Conceptualisation:
Therefore, the term soil health can only be understood within the context of the user of the term, and their aspirations of a soil, as well as by the boundary definition of the soil at issue. Finally, intrinsic to the discussion on soil health are many potentially conflicting interpretations, especially ecological landscape assessment vs agronomic objectives, each claiming to have soil health criteria.
Interpretation:
Different soils will have different benchmarks of health depending on the "inherited" qualities, and on the geographic circumstance of the soil.
Interpretation:
The generic aspects defining a healthy soil can be considered as follows: "productive" options are broad; life diversity is broad; absorbency, storing, recycling and processing are high in relation to limits set by climate; water runoff quality is of a high standard; entropy is low; and there is no damage to, or loss of, the fundamental components. This translates to: a comprehensive cover of vegetation; carbon levels relatively close to the limits set by soil type and climate; little leakage of nutrients from the ecosystem; biological and agricultural productivity relatively close to the limits set by the soil environment and climate; only geological rates of erosion; and no accumulation of contaminants. An unhealthy soil is thus the simple converse of the above.
Measurement:
On the basis of the above, soil health will be measured in terms of individual ecosystem services provided relative to the benchmark. Specific benchmarks used to evaluate soil health include CO2 release, humus levels, microbial activity, and available calcium. Soil health testing is spreading in the United States, Australia and South Africa.
Measurement:
Cornell University, a land-grant college in NY State, has had a Soil Health Test since 2006. Woods End Laboratories, a private soil lab founded in Maine in 1975, has offered a soil quality package since 1985. Both these services combine tests of physical (aggregate stability), chemical (mineral balance) and biological (CO2 respiration) properties, which today are considered hallmarks of soil health testing. The approach of other soil labs also entering the soil health field is to add into common chemical nutrient testing a biological set of factors not normally included in routine soil testing. The best example is adding biological soil respiration ("CO2-Burst") as a test procedure; this has already been adapted to modern commercial labs in the period since 2006.
Measurement:
There is, however, resistance among soil testing labs and university scientists to adding new biological tests, primarily since interpretation of soil fertility is based on models from "crop response" studies which match yield to test levels of specific chemical nutrients, and no similar models for interpretation appear to exist for soil health tests. Critics of novel soil health tests argue that they may be insensitive to management changes. Soil test methods have evolved slowly over the past 40 years. However, in this same time USA soils have also lost up to 75% of their carbon (humus), causing biological fertility and ecosystem functioning to decline; how much is debatable. Many critics of the conventional system say the loss of soil quality is sufficient evidence that the old soil testing models have failed us and need to be replaced with new approaches. These older models have stressed "maximum yield" and "yield calibration" to such an extent that related factors have been overlooked. Thus, surface and groundwater pollution with excess nutrients (nitrates and phosphates) has grown enormously, and in the early 2000s such pollution was reported (in the United States) to be the worst it had been since the 1970s, before the advent of environmental consciousness.
Soil health gap:
The importance of soil for global food security, agro-ecosystems, the environment, and human life has sharply shifted research attention towards soil health. However, the lack of site- or region-specific benchmarks has limited research efforts towards understanding the true effect of different agronomic management practices on soil health. In 2020, Maharjan and his team introduced the new term and concept "Soil Health Gap" and described how native land in a particular region can help in establishing a benchmark against which the efficacies of different management practices can be compared, and which can at the same time be used to understand quantitative differences in soil health status. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**TANGO**
TANGO:
The TANGO control system is a free, open-source, device-oriented controls toolkit for controlling any kind of hardware or software and building SCADA systems. It is used for controlling synchrotrons, lasers, and physics experiments at over 20 sites. It is being actively developed by a consortium of research institutes.
TANGO:
TANGO is a distributed control system. It runs on a single machine as well as on hundreds of machines. TANGO uses two network protocols: the omniORB implementation of CORBA and ZeroMQ. The basic communication model is the client-server model. Communication between clients and servers can be synchronous, asynchronous or event-driven. CORBA is used for synchronous and asynchronous communication, and ZeroMQ is used for event-driven communication (since version 8 of TANGO).
TANGO:
TANGO is based on the concept of Devices. Devices implement object-oriented and service-oriented approaches to software architecture. The Device model in TANGO implements commands (methods), attributes (data fields) and properties for configuring Devices. In TANGO all control objects are Devices.
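For illustration, a minimal client-side sketch using the PyTango (Python) binding is shown below; the device name follows the convention of the standard TangoTest demonstration device, and the attribute, command and property names are assumptions rather than part of any particular installation:

```python
# Minimal TANGO client sketch using the PyTango binding (illustrative only).
# "sys/tg_test/1" is the conventional name of the TangoTest demo device; a real
# deployment would use its own device names, attributes and commands.
import tango

# A DeviceProxy is the client-side stub for a remote Device.
dev = tango.DeviceProxy("sys/tg_test/1")

# Read an attribute (a named data field exposed by the Device).
reading = dev.read_attribute("double_scalar")
print("value:", reading.value, "quality:", reading.quality)

# Execute a command (a method exposed by the Device); DevString echoes its input.
print(dev.command_inout("DevString", "hello TANGO"))

# Properties used to configure the Device are stored in the TANGO database.
print(dev.get_property("my_property"))
```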
Device Servers:
TANGO is software for building control systems which need to provide network access to hardware. Hardware can range from single bits of digital input/output up to sophisticated detector systems or entire plant control systems (SCADAs). Hardware access is managed in a process called a Device Server. The Device Server contains Devices belonging to different Device Classes which implement the hardware access. At Device Server startup time Devices (instances of Device Classes) are created which then represent logical instances of hardware in the control system. Clients "import" the Devices via a database and send requests to the devices using TANGO. Devices can store configuration and setup values permanently in a MySQL database.
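On the server side, a minimal Device Class sketch using PyTango's high-level server API might look as follows; the PowerSupply class and its attribute and command names are hypothetical, and a real Device Class would wrap actual hardware access:

```python
# Hypothetical Device Class sketch using PyTango's high-level server API.
# The names (PowerSupply, voltage, TurnOn) are invented for illustration.
from tango import DevState
from tango.server import Device, attribute, command, run

class PowerSupply(Device):
    """A toy Device Class; a real class would talk to hardware here."""

    def init_device(self):
        super().init_device()
        self._voltage = 0.0
        self.set_state(DevState.OFF)

    @attribute(dtype=float, unit="V")
    def voltage(self):
        # In a real Device this would query the instrument.
        return self._voltage

    @command
    def TurnOn(self):
        self.set_state(DevState.ON)

if __name__ == "__main__":
    # Starts the Device Server for the instance name given on the command line
    # and serves client requests over the TANGO protocols.
    run((PowerSupply,))
```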
Device Servers:
Hundreds of Device Classes have been written by the community.
TANGO manages complexity using hierarchies.
Bindings:
TANGO supports bindings to the following languages: C, C++, Java, Python, MATLAB, LabVIEW, and IGOR Pro.
Licensing:
TANGO is distributed under 2 licenses. The libraries are licensed under the GNU Lesser General Public License (LGPLv3). Tools and device servers are (unless otherwise stated) under the GNU General Public License (GPLv3). The LGPL licence allows the TANGO libraries to be used in products which are not GNU GPL.
Projects using TANGO:
Some of the projects using TANGO (in addition to the consortium) include the diagnostics of the Laser Mégajoule.
Consortium:
The consortium is a group of institutes who are actively developing TANGO. To join the consortium an institute has to sign the Memorandum of Understanding and actively commit resources to the development of TANGO. The consortium currently consists of the following institutes: ESRF - European Synchrotron Radiation Facility, Grenoble, France; SOLEIL - Soleil Synchrotron, Paris, France; ELETTRA - Elettra Synchrotron, Trieste, Italy; ALBA - Alba Synchrotron, Barcelona, Spain; DESY - Petra III Synchrotron, Hamburg, Germany; MAXIV - MAXIV Synchrotron, Lund, Sweden; FRMII - FRMII neutron source, Munich, Germany; SOLARIS - National Synchrotron Radiation Centre SOLARIS, Kraków, Poland; ANKA - ANKA Synchrotron, Karlsruhe, Germany; INAF - Istituto Nazionale di Astrofisica, IT. The goal of the consortium is to guarantee the development of TANGO. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Front Panel Data Port**
Front Panel Data Port:
The front panel data port (FPDP) is a bus that provides high-speed, low-latency data transfer between two or more VMEbus boards at up to 160 MB/s. The FPDP bus uses a 32-bit parallel synchronous bus wired with an 80-conductor ribbon cable. The following interface functions are supported: FPDP/TM (transmitter master) - drives data and timing signals onto the FPDP, and also terminates the bus signals at one end of the ribbon cable; FPDP/RM (receiver master) - receives data from the FPDP synchronously with the timing signals provided by the FPDP/TM, and also terminates the bus at the opposite end of the cable to the FPDP/TM; FPDP/R (receiver) - receives data from the FPDP synchronously with the timing signals provided by the FPDP/TM, but does not terminate the bus. More than one FPDP/R can be connected to the FPDP bus, and FPDP/R can also be an alternate function to that of FPDP/RM via software control. The connector specified by the FPDP specification is a KEL P/N 8825E-080-175.
Interface signals:
D<31:0>: data bus driven by FPDP/TM. DIR_n: active-low direction signal driven by FPDP/TM. DVALID_n: active-low data valid indication driven by FPDP/TM. STROBE: a free-running clock supplied by FPDP/TM. NRDY_n: active-low not-ready signal driven by FPDP/R or FPDP/RM, asserted before the commencement of a data transfer by the FPDP/R or FPDP/RM, asynchronous to STROBE.
Interface signals:
PSTROBE: optional differential PECL version of STROBE, driven by FPDP/TM. SUSPEND_n: active-low suspend signal asserted by FPDP/R or FPDP/RM, asynchronous to STROBE, to inform the transmitter that a buffer overflow condition may occur; the transmitter may delay no more than 16 clocks before it suspends the data transfer.
SYNC_n : Active low synchronization pulse provided by FPDP/TM.
PIO1, PIO2 : Programmable I/O lines for user purposes
Data frames:
The following types of data frames are supported: unframed data, single-frame data, fixed-size repeating frame data, and dynamic-size repeating frame data.
Cable length:
FPDP interfaces work with cable lengths of up to 1 meter in a multi-drop configuration, up to 2 meters when using the STROBE signal in a point-to-point configuration, and up to 5 meters when using the differential PSTROBE signal in a point-to-point configuration. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Holyhedron**
Holyhedron:
In mathematics, a holyhedron is a type of 3-dimensional geometric body: a polyhedron each of whose faces contains at least one polygon-shaped hole, and whose holes' boundaries share no point with each other or the face's boundary.The concept was first introduced by John H. Conway; the term "holyhedron" was coined by David W. Wilson in 1997 as a pun involving polyhedra and holes. Conway also offered a prize of 10,000 USD, divided by the number of faces, for finding an example, asking: Is there a polyhedron in Euclidean three-dimensional space that has only finitely many plane faces, each of which is a closed connected subset of the appropriate plane whose relative interior in that plane is multiply connected?No actual holyhedron was constructed until 1999, when Jade P. Vinson presented an example of a holyhedron with a total of 78,585,627 faces; another example was subsequently given by Don Hatch, who presented a holyhedron with 492 faces in 2003, worth about 20.33 USD prize money. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Hydroxyglutamate decarboxylase**
Hydroxyglutamate decarboxylase:
The enzyme hydroxyglutamate decarboxylase (EC 4.1.1.16) catalyzes the chemical reaction 3-hydroxy-L-glutamate ⇌ 4-amino-3-hydroxybutanoate + CO2. Hence, this enzyme has one substrate, 3-hydroxy-L-glutamate, and two products, 4-amino-3-hydroxybutanoate and CO2.
This enzyme belongs to the family of lyases, specifically the carboxy-lyases, which cleave carbon-carbon bonds. The systematic name of this enzyme class is 3-hydroxy-L-glutamate 1-carboxy-lyase (4-amino-3-hydroxybutanoate-forming). This enzyme is also called 3-hydroxy-L-glutamate 1-carboxy-lyase. It employs one cofactor, pyridoxal phosphate. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Facial masculinization surgery**
Facial masculinization surgery:
Facial masculinization surgery (FMS) is a set of plastic surgery procedures that can transform the patient's face to exhibit typical masculine morphology. Cisgender men may elect to undergo these procedures, and in the context of transgender people, FMS is a type of facial gender confirmation surgery (FGCS), which also includes facial feminization surgery (FFS) for transgender women. FMS can include various bony procedures such as chin augmentation and cheek augmentation, as well as augmentation of the forehead, jaw, and Adam's apple. In FMS, most procedures involve "having structures added to give more angles to the face."
History:
Trans men have requested FMS procedures since the 20th century. FMS is currently less common than FFS. Urologist Miriam Hadj-Moussa notes that "transgender men rarely undergo facial masculinization surgery since testosterone therapy leads to growth of facial hair and makes it easier for them to present." In 2011, Douglas Ousterhout outlined the available FMS procedures, drawing on the work of Paul Tessier. In 2015 Shane Morrison published an overview of all gender confirming surgeries for trans men, including FMS. In 2017, Ousterhout's successor Jordan Deschamps-Braly published a case report on the first female-to-male facial confirmation surgery that included masculinization of the Adam's apple. According to the World Professional Association for Transgender Health (WPATH), for many transgender men, FMS is considered medically necessary to treat gender dysphoria. Following the WPATH recommendations, several literature reviews and summaries of the state of the art were published in 2017 and 2018.
Surgical procedures:
The surgical procedures most frequently performed during FMS often involve facial implants and include the following, as outlined in the literature.
Surgical procedures:
Forehead augmentation The purpose of forehead augmentation is to create a less rounded forehead with a more prominent supraorbital ridge typical of cisgender men. It can be done with a customized implant, a calvarial bone graft, fat grafting, or materials such as bone cement that are molded into shape before they harden. Injectable fillers may also be used as an outpatient procedure.
Surgical procedures:
Jaw augmentation Orthognathic surgery was first performed for functional reasons in the late 19th century, with cosmetic procedures being improved and refined throughout the 20th century. In facial masculinization surgery, the goal is to create a more robust and square jaw with a sharper mandibular angle. This can be achieved through hydroxyapatite (bone mineral) grafts, which promote new bone growth, or through customized implants.
Surgical procedures:
Chin augmentation To change the appearance of the jaw, chin augmentation may also be performed. This can consist of chin implants or an osteotomy to make the chin tip appear wider and more prominent.
Adam's apple augmentation This newer procedure uses an implant made from cartilage taken from the patient's rib cage to augment the tip of the thyroid cartilage known as the "Adam's apple." It was first performed in 2017. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Dry pot chicken**
Dry pot chicken:
Dry pot chicken, also referred to as hot pot chicken, 鸡爆 - jī bào or 干锅鸡 - gān guō jī, is a dish served as a dry hot pot, cooked with chili pepper, garlic and chicken, by fast frying the chicken in oil.
Dry pot chicken is but one type of hot pot dish. The seasoning, ingredients and cooking methods are similar to the more common "wet" hot pots, but dry hot pots do not have the soup base.
Origin:
There are two main theories regarding the origins of dry pot chicken.
Origin:
The dry hot pot chicken's origin traces back to Guizhou people's eating habits. It first appeared among the Miao ethnic people. They dug a small round pit in their yard where they placed firewood and charcoal. In the center of the pit, they put a stone or clay pot on the top of the wood. This is the so-called “fire pot", which was the prototype of dry hot pot chicken.
Origin:
A second theory holds that dry hot pot derives from the northern area of Sichuan province. While most dry pot chicken restaurant owners consider Chongqing as the origin of this dish, they also claim that what matters most is which place really popularized dry pot chicken. Dry hot pot dishes began to spread throughout China as rural Sichuan people migrated, mainly to the south-east of China, to find employment, taking with them their traditional home town dishes. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Mnemonist**
Mnemonist:
The title mnemonist refers to an individual with the ability to remember and recall unusually long lists of data, such as unfamiliar names, lists of numbers, entries in books, etc. Some mnemonists also memorize texts such as long poems, speeches, or even entire books of fiction or non-fiction. The term is derived from the term mnemonic, which refers to a strategy to support remembering (such as the method of loci or major system), but not all mnemonists report using mnemonics. Mnemonists may have superior innate ability to recall or remember, in addition to (or instead of) relying on techniques.
Structure of mnemonic skills:
While the innateness of mnemonists' skills is debated, the methods that mnemonists use to memorize are well-documented. Many mnemonists have been studied in psychology labs over the last century, and most have been found to use mnemonic devices. Currently, all memory champions at the World Memory Championships have said that they use mnemonic strategies, such as the method of loci, to perform their memory feats.
Structure of mnemonic skills:
Skilled memory theory was proposed by K. Anders Ericsson and Bill Chase to explain the effectiveness of mnemonic devices in memory expertise. Generally, short-term memory has a capacity of seven items; however, in order to memorize long strings of unrelated information, this constraint must be overcome. Skilled memory theory involves three steps: meaningful encoding, retrieval structure, and speed-up.
Structure of mnemonic skills:
Encoding In encoding, information is encoded in terms of knowledge structures through meaningful associations. This may initially involve breaking down long lists into more manageable chunks that fall within the capacity of short term memory. Verbal reports of memory experts show a consistent grouping of three or four. A digit sequence 1-9-4-5, for example, can then be remembered as "the year World War Two ended". Luria reported that Solomon Shereshevsky used synesthesia to associate numbers and words as visual images or colors to encode the information presented to him, but Luria did not clearly distinguish between synesthesia and mnemonic techniques like the method of loci and number shapes. Other subjects studied have used previous knowledge such as racing times or historical information to encode new information. This is supported by studies that have shown that previous knowledge about a subject will increase one's ability to remember it. Chess experts, for example, can memorize more pieces of a chess game in progress than a novice chess player. However, while there is some correlation between memory expertise and general intelligence, as measured by either IQ or the general intelligence factor, the two are by no means identical. Many memory experts have been shown to be average to above-average by these two measures, but not exceptional.
Structure of mnemonic skills:
Retrieval The next step is to create a retrieval structure by which the associations can be recalled. It serves the function of storing retrieval cues without having to use short term memory. It is used to preserve the order of items to be remembered. Verbal reports of memory experts show two prominent methods of retrieving information: hierarchical nodes and the method of loci. Retrieval structures are hierarchically organized and can be thought of as nodes that are activated when information is retrieved. Verbal reports have shown that memory experts have different retrieval structures. One expert clustered digits into groups, groups into supergroups, and supergroups into clusters of supergroups. However, by far the most common method of retrieval structure is the method of loci.
Structure of mnemonic skills:
Method of loci The method of loci is "the use of an orderly arrangement of locations into which one could place the images of things or people that are to be remembered." The encoding process happens in three steps. First, an architectural area, such as the houses on a street, must be memorized. Second, each item to be remembered must be associated with a separate image. Finally, this set of images can be distributed in a "locus", or place within the architectural area in a pre-determined order. Then, as one tries to recall the information, the mnemonist simply has to "walk" down the street, see each symbol, and recall the associated information. An example of a mnemonist who used this method is Solomon Shereshevsky; he would use Gorky Street, his own street. When he read, each word would form a graphic image. He would then place this image in a place along the street; later, when he needed to recall the information, he would simply "stroll" down the street again to recall the necessary information. Neuroimaging studies have shown results that support the method of loci as the retrieval method in world-class memory performers. An fMRI recorded brain activity in memory experts and a control group as they were memorizing selected data. Previous studies have shown that teaching a control group the method of loci leads to changes in brain activation during memorization. Consistent with their use of the method of loci, memory experts had higher activity in the medial parietal cortex, retrosplenial cortex, and right posterior hippocampus; these brain areas have been linked to spatial memory and navigation. These differences were observable even when the memory experts were trying to memorize stimuli, such as snowflakes, where they showed no superior ability to the control group.
Structure of mnemonic skills:
Acceleration The final step in skilled memory theory is acceleration. With practice, time necessary for encoding and retrieval operations can be dramatically reduced. As a result, storage of information can then be performed within a few seconds. Indeed, one confounding factor in the study of memory is that the subjects often improve from day-to-day as they are tested over and over.
Learned skill or innate ability:
The innateness of expert performance in the memory field has been studied thoroughly by many scientists; it is a matter which has still not been definitively resolved.
Learned skill or innate ability:
Evidence for memory expertise as a learned skill Much evidence exists which points towards memory expertise as a learned skill which can only be learned through hours of deliberate practice. Anecdotally, the performers in top memory competitions like the World Memory Championships and the Extreme Memory Tournament all deny any ability of a photographic memory; rather, these experts have averaged 10 years practicing their encoding strategies. Another piece of evidence which points away from an innate superiority of memory is the specificity of memory expertise in memorists. For example, though memory experts have an exceptional ability to remember digits, their ability to remember unrelated items which are more difficult to encode, such as symbols or snowflakes, is the same as that of an average person. The same holds true for memory experts in other fields: studies of mental calculators and chess experts show the same specificity for superior memory. In some cases, other types of memory, such as visual memory for faces, may even be impaired. Another piece of evidence of memory expertise as a learned ability is the fact that dedicated individuals can make exceptional memory gains when exposed to mnemonics and given a chance to practice. One subject, SF, a college student of average intelligence, was able to attain world-class memory performance after hundreds of hours of practice over two years. His memory, in fact, improved over 70 standard deviations, while his digit span, or memory span for digits, grew to 80 digits, which was higher than the digit span for all memory experts previously recorded. Similarly, adults of average intelligence taught encoding strategies also show large gains in memory performance. Finally, neuroimaging studies performed on memory experts and compared to a control group have found no systematic anatomical differences in the brain between memory experts and a control group. While it is true that there are activation differences between the brains of memory experts and a control group, they are due to the use of spatial techniques to form retrieval structures, not any structural differences.
Learned skill or innate ability:
Evidence for memory expertise as an innate ability Much of the evidence for innate superiority of memory is anecdotal and is therefore rejected by scientists who have moved toward accepting only reproducible studies as evidence for elite performance. There have been exceptions, however, that do not fit skilled memory theory as proposed by Chase and Ericsson. Synesthetes, for example, show a memory advantage for material that induces their synesthesia over a control group. This advantage tends to be in retention of new information rather than learning. However, synesthetes are likely to have some brain differences which give them an innate advantage when it comes to memory. Another group which may have some innate memory advantage are autistic savants. Unfortunately, many savants who have performed memory feats, such as Kim Peek and Daniel Tammet, have not been rigorously studied; they do claim not to need to use encoding strategies. A recent imaging study of savants found that there are activation differences between savants and typically developing individuals; these cannot be explained by the method of loci as mnemonic savants do not tend to use encoding strategies for their memory. Savants activated the right inferior occipital areas of their brain, whereas control participants activated the left parietal region which is generally associated with attentional processes.
Famous mnemonists:
Femi Francis Akinsiku Yanjaa Wintersoul: double world record holding memory champion.
Derren Brown Creighton Carvello Dominic O'Brien: 8x world memory champion (1991, 1993, 1995–97, 1999–01).
Ben Pridmore: 3x world memory champion (2004, 2008–09).
Harry Lorayne Edward Cyril De Hault Laston Rajan Mahadevan Kim Peek, the real-life inspiration for the character of Raymond in the film Rain Man Shass Pollak S.V. Shereshevskii, from AR Luria's The Mind of a Mnemonist Daniel Tammet Ed Cooke: author and grandmaster of memory Wang Feng: 2x world memory champion (2010–11).
Johannes Mallow: world memory champion (2012).
Jonas von Essen: 2x world memory champion (2013–14).
Alex Mullen: 3x world memory champion (2015–17).
Nelson Dellis: 5x USA memory champion (2011–12, 2014–15, 2021).
Joshua Foer: author and USA memory champion (2006).
Brad Williams Dave Farrow: most playing cards memorized after a single sighting, with 59 decks.
Shraman N L, Republic of India; Arun Phoke, Aurangabad, Maharashtra; Nidhip Vora, Upleta, Gujarat, memory expert, researcher of IRSRO mnemonic system of memory, psychologist; Moshe Feinstein, eminent rabbi. Memory sport contains a more comprehensive list of well-known memory athletes. The complete, up-to-date memory world rankings can be found at the International Association of Memory website. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Marvel No-Prize**
Marvel No-Prize:
The Marvel No-Prize is a fake or satirical award given out by Marvel Comics to readers. Originally for those who spotted continuity errors in the comics, the current "No-Prizes" are given out for charitable works or other types of "meritorious service to the cause of Marveldom". As the No-Prize evolved, it was distinguished by its role in explaining away potential continuity errors. Initially awarded simply for identifying such errors, a No-Prize was later given only when a reader successfully explained why the continuity error was not an error at all.
History:
Predecessors and antecedents of the No-Prize The No-Prize was inspired by the policies of many other comic book publishers of the early 1960s — namely, that if a fan found a continuity error in a comic and wrote a letter to the publisher of the comic, he or she would receive a prize of cash, free comics, or even original artwork. In a similar vein, in 1962, Marvel Comics writer/editor Stan Lee promised, in the letters page of Fantastic Four #4, that he would send five dollars to a reader who would write in with the best explanation for a continuity error from an earlier issue. When the Marvel offices were inundated with suggestions, Lee awarded the $5 to the first letter received, and printed the names of all the other correspondents who had sent in good answers.
History:
The first No-Prizes This sort of interaction with the readers continued, with contests and polls being run on the Fantastic Four letters page for the next few years. In the letters page for issue #22, featuring a contest for which reader had the largest comics collection, Lee announced that "no prizes" would be given ("because we're cheapskates!"). The winner of the contest was announced in issue #25, where it was officially dubbed a "No-Prize." In Fantastic Four #26, Lee ran a contest asking readers to send in their definition of what "the Marvel Age of Comics" really meant. As part of the letter, Lee wrote "there will be no prizes, and therefore, no losers". Originally, the "prize" was simply Lee publishing the letter and informing the letter-writer that he or she had won a No-Prize, which was actually nothing. Other No-Prize contests asked readers questions and rewarded the most creative responses. One example asked readers for proof of whether the Sub-Mariner was a mutant or not (it has since been firmly established in continuity that the Sub-Mariner is a mutant). Winners had their letters printed, along with Lee congratulating them on winning a No-Prize.
History:
For "meritorious service" The No-Prize had been intended as a reminder to Marvel readers to "lighten up" and read comics for pleasure; to not write in for prizes, but instead for the thrill of being recognized for their efforts. Letters soon multiplied, however, as fans wrote in looking for errors in every comic they could, and suddenly the non-existent prize was in high demand. In response, Lee took on a new approach. Since other comic companies had given out prizes for pointing out oversights and continuity errors in their books, Lee began awarding No-Prizes in such situations only "to the fan who could explain a seemingly unexplainable situation." The reader who inspired this version of the No-Prize was a teenage George R. R. Martin, later a successful novelist.The No-Prize soon evolved into a reward to those who performed "meritorious service to the cause of Marveldom": readers who first spotted a mistake, or came up with a plausible way to explain a mistake others spotted, or made some great suggestion or performed a service for Marvel in general.
History:
No-Prize distribution As time went by, some recipients of the "award" began to write Lee and ask why they had not received an actual prize. In response, in 1967 Lee began mailing No-Prize-winners pre-printed empty envelopes that said "Congratulations, this envelope contains a genuine Marvel Comics No-Prize which you have just won!" However, some uncomprehending fans wrote back asking where their prize was, even going so far as to suggest their prize had fallen out of the envelope.
History:
Confusion and decline After Lee stepped down as Marvel editor-in-chief in 1972 (becoming Marvel's publisher), Marvel's various editors, who were left in charge of dispensing No-Prizes, developed differing policies toward awarding them. By 1986, these policies ranged from Ralph Macchio's practice of giving them away to anyone who wrote a letter asking for one to Mike Higgins' policy of not awarding them at all. As reported in Iron Man #213 (Dec. 1986), these were the various editors' policies: Ann Nocenti (X-Men): "The spirit of the No-Prize is not just to complain and nitpick but to offer an exciting solution. Do that and you will get one from me." Carl Potts (Alpha Flight and Power Pack): "If someone points out a major story problem I'm not aware of and solves it to my satisfaction, I'll award a No-Prize. I give away very few." Mike Higgins (Star Brand): "No No-Prizes for New Universe no-no's no way!" Larry Hama (Conan, G.I. Joe): "No one writes in for them in the Conan books so we don't award them. On G.I. Joe, which I write, I give them to people who get me out of jams if they are very ingenious about it." Archie Goodwin (Epic): "We acknowledge our mistakes in print, but Epic Comics doesn't award No-Prizes." Bob Budiansky (Secret Wars II): "If someone finds a clever enough explanation for what seems to be a mistake, I'll send them a No-Prize." Bob Harras (The Incredible Hulk, X-Factor): "My policy is if a certain mistake wouldn't have bothered me when I was a kid, it's not worth a No-Prize. But if someone does really help us out, I'll send them one." Don Daley (Captain America): "First I place a temporal statute of limitations on No-Prize mistakes. If the mistake is more than six issues old, it doesn't qualify anymore. Second, I only give them out for things that count, not trivial nitpicking and faultfinding. Third, the explanation should not only be logical but emotionally appealing. I don't award many of them." James Owsley (Spider-Man): "We only mail them out to people who send us the best possible explanations for important mistakes. Panels where someone's shirt is colored wrong do not count. We send out the No-Prize envelopes to everyone who gets the same best answer, and sometimes will send out postcards to runners-up who come close." Ralph Macchio (Daredevil): "The No-Prize is an honored Marvel tradition. Of course I give them away—for just about any old stupid thing. I have a million of them." A typical mid-1980s attempt at a No-Prize comes from the letters page of The Incredible Hulk #324 (Oct. 1986), in response to Hulk #321: ". . . On page 12, panel 5, Wonder Man's glasses are knocked off, but in following panels on the next page, he has them on. He didn't have enough time to get them after they fell off, and Hawkeye's explosive arrow probably would have destroyed them when it detonated on the Hulk. Never fear, though. I have the solution — while flying down to help Hawkeye, Wonder Man pulled out an extra pair he carries in case of just such emergencies." (Editor Bob Harras awarded the writer a No-Prize.) Editor Mark Gruenwald believed the quest for No-Prizes negatively impacted the quality of letters sent to comic book letter columns, as readers were becoming more focused on nitpicking and pointing out errors than in responding to the comics' stories themselves (he even cited one letter which focused on Captain America's glove being yellow in one panel, instead of the correct color red).
Gruenwald then temporarily adopted a new policy, which was to award No-Prizes to readers who not only pointed out an error but also devised a clever explanation as to why it was not really an error (Gruenwald was also known for awarding the "fred-prize" to readers of Captain America). But in 1986, still believing that the quest for No-Prizes was degrading the quality of reader communication, Gruenwald informed the public that his office would no longer award No-Prizes at all. In January 1989, Marvel was purchased by Ronald Perelman. One of the first casualties of the new financial belt-tightening was the No-Prize, considered in one memo to be "a silly, expensive extravagance to mail out".
History:
Early 1990s reinstatement In 1991, then-Marvel editor-in-chief Tom DeFalco reinstated the No-Prize, introducing the "meritorious service to Marvel above and beyond the call of duty" criteria: What constitutes 'meritorious service'? Lots of things could! Like sending a box of comics to the children's wing of a hospital. Or compiling a chronological cross-title index to a character's appearance. Or coming up with an explanation for a major discontinuity or discrepancy. So if you think spotting a misspelled word or miscolored boot is worth a No-Prize, you're living in the wrong decade! This policy is in effect for all Marvel titles whose editors award No-Prizes.
History:
In the late 1990s, Stan Lee returned to writing the Bullpen Bulletins column. He would answer fan questions, and anyone whose question was used would receive a physical No-Prize. No-Prizes were still irregularly offered for any number of reasons. In one example, the first reader to name the last story Stan Lee wrote before becoming Marvel's publisher was promised a No-Prize.
History:
Digital No-Prize On July 31, 2006, Marvel executive editor Tom Brevoort instituted the digital No-Prize to be awarded for "meritorious service to Marveldom". The first was awarded on August 12, 2006, to a group of Marvel fans who donated a large number of comics to U.S. service members stationed in Iraq.
2023 variants In February 2023, Marvel released three variant covers featuring No-Prize envelopes. The covers were printed on Amazing Spider-Man #19, Black Panther #14, and Hulk #12.
No-Prize book:
In late 1982 (cover dated January 1983), Marvel published a humorous one-shot comic featuring some of their most notorious goofs. Subtitled "Mighty Marvel's Most Massive Mistakes", the book was organized and spearheaded by Jim Owsley and had a cover which was deliberately printed upside-down. In the comic's story Lee, with the help of artists Bob Camp and Vince Colletta, exposes and pokes fun at typos, misspellings and other errors. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Portopulmonary hypertension**
Portopulmonary hypertension:
Portopulmonary hypertension (PPH) is defined by the coexistence of portal and pulmonary hypertension. PPH is a serious complication of liver disease, present in 0.25 to 4% of all patients with cirrhosis. Once an absolute contraindication to liver transplantation, PPH no longer rules it out, thanks to rapid advances in the treatment of this condition. Today, PPH is comorbid in 4-6% of those referred for a liver transplant.
Presentation:
PPH presents roughly equally in male and female cirrhotics; 71% female in an American series and 57% male in a larger French series. Typically, patients present in their fifth decade, aged 49 +/- 11 years on average. In general, PPH is diagnosed 4–7 years after the patient is diagnosed with portal hypertension, and in roughly 65% of cases the diagnosis is actually made at the time of invasive hemodynamic monitoring following anesthesia induction prior to liver transplantation. Once patients are symptomatic, they present with right heart dysfunction secondary to pulmonary hypertension and its consequent dyspnea, fatigue, chest pain and syncope. Patients tend to have a poor cardiac status, with 60% having stage III-IV NYHA heart failure. PPH is actually independent of the severity of cirrhosis but may be more common in specific types of cirrhosis, in one series more so in Autoimmune Hepatitis and less in Hepatitis C cirrhosis, while in another it was equally distributed throughout the diagnoses.
Pathophysiology:
PPH pathology arises both from the humoral consequences of cirrhosis and the mechanical obstruction of the portal vein. A central paradigm holds responsible an excess local pulmonary production of vasoconstrictors that occurs while vasodilatation predominates systemically. Key here are imbalances between vasodilatory and vasoconstricting molecules: endogenous prostacyclin and thromboxane (from Kupffer cells) or nitric oxide (NO) and endothelin-1 (ET-1). ET-1 is the most potent vasoconstrictor under investigation and it has been found to be increased in both cirrhosis and pulmonary hypertension. Endothelin-1 has two receptors in the pulmonary arterial tree, ET-A which mediates vasoconstriction and ET-B which mediates vasodilation. Rat models have shown decreased ET-B receptor expression in pulmonary arteries of cirrhotic and portal hypertensive animals, leading to a predominant vasoconstricting response to endothelin-1. In portal hypertension, blood will shunt from portal to systemic circulation, bypassing the liver. This leaves unmetabolized potentially toxic or vasoconstricting substances to reach and attack the pulmonary circulation. Serotonin, normally metabolized by the liver, is returned to the lung instead where it mediates smooth muscle hyperplasia and hypertrophy. Moreover, a key pathogenic factor in the decline in status of PPH patients related to this shunting is the cirrhotic cardiomyopathy with myocardial thickening and diastolic dysfunction. Finally, the pulmonary pathology of PPH is very similar to that of primary pulmonary hypertension. The muscular pulmonary arteries become fibrotic and hypertrophy while the smaller arteries lose smooth muscle cells and their elastic intima. One study found at autopsy significant thickening of pulmonary arteries in cirrhotic patients. This thickening and remodeling forms a positive feedback loop that serves to increase PAP and induce right heart hypertrophy and dysfunction.
Diagnosis:
The diagnosis of portopulmonary hypertension is based on hemodynamic criteria: (1) portal hypertension and/or liver disease (clinical diagnosis: ascites/varices/splenomegaly); (2) mean pulmonary artery pressure (MPAP) > 20 mmHg at rest (revised from 25 to 20 according to the 6th World Symposium on Pulmonary Hypertension); (3) pulmonary vascular resistance (PVR) > 240 dynes·s·cm−5; and (4) pulmonary artery occlusion pressure (PAOP) < 15 mmHg or transpulmonary gradient (TPG) > 12 mmHg, where TPG = MPAP − PAOP. The diagnosis is usually first suggested by a transthoracic echocardiogram, part of the standard pre-transplantation work-up. Echocardiogram-estimated pulmonary artery systolic pressures of 40 to 50 mm Hg are used as a screening cutoff for PPH diagnosis, with a sensitivity of 100% and a specificity as high as 96%. The negative predictive value of this method is 100% but the positive predictive value is 60%. Thereafter, these patients are referred for pulmonary artery catheterization. The limitations of echocardiography are related to the derivative nature of non-invasive PAP estimation. The measurement of PAP by echocardiogram is made using a simplified Bernoulli equation. High cardiac index and pulmonary capillary wedge pressures, however, may lead to false positives by this standard. By one institution's evaluation, the correlation between estimated systolic PAP and directly measured PAP was poor, 0.49. For these reasons, right heart catheterization is needed to confirm the diagnosis.
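For illustration with purely hypothetical values: a patient with an MPAP of 35 mmHg and a PAOP of 10 mmHg would have TPG = 35 − 10 = 25 mmHg, satisfying both the occlusion-pressure criterion (PAOP < 15 mmHg) and the gradient criterion (TPG > 12 mmHg).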
Treatment:
In general, the treatment of PPH is derived from the treatment of pulmonary hypertension. The best treatment available is the combination of medical therapy and liver transplantation. The ideal treatment for PPH management is that which can achieve pulmonary vasodilatation and smooth muscle relaxation without exacerbating systemic hypotension. Most of the therapies for PPH have been adapted from the primary pulmonary hypertension literature. Calcium channel blockers, beta-blockers and nitrates have all been used – but the most potent and widely used aids are prostaglandin (and prostacyclin) analogs, phosphodiesterase inhibitors, nitric oxide and, most recently, endothelin receptor antagonists and agents capable of reversing the remodeling of pulmonary vasculature. Inhaled nitric oxide vasodilates, decreasing pulmonary arterial pressure (PAP) and pulmonary vascular resistance (PVR) without affecting systemic artery pressure because it is rapidly inactivated by hemoglobin, and improves oxygenation by redistributing pulmonary blood flow to ventilated areas of lung. Inhaled nitric oxide has been used successfully to bridge patients through liver transplantation and the immediate perioperative period, but there are two significant drawbacks: it requires intubation and cannot be used for long periods of time due to methemoglobinemia. Prostaglandin PGE1 (Alprostadil) binds G-protein linked cell surface receptors that activate adenylate cyclase to relax vascular smooth muscle. Prostacyclin – PGI2, an arachidonic acid-derived lipid mediator (Epoprostenol, Flolan, Treprostinil) – is a vasodilator and, at the same time, the most potent inhibitor of platelet aggregation. More importantly, PGI2 (and not nitric oxide) is also associated with an improvement in splanchnic perfusion and oxygenation. Epoprostenol and iloprost (a more stable, longer-acting variation) can successfully bridge patients to transplant. Epoprostenol therapy can lower PAP by 29-46% and PVR by 21-71%. Iloprost shows no evidence of generating tolerance, increases cardiac output and improves gas exchange while lowering PAP and PVR. A subset of patients does not respond to any therapy, likely having fixed vascular anatomic changes. Phosphodiesterase inhibitors (PDE-i) have been employed with excellent results. They have been shown to reduce mean PAP by as much as 50%, though they prolong bleeding time by inhibiting collagen-induced platelet aggregation. Another drug, Milrinone, a Type 3 PDE-i, increases vascular smooth muscle adenosine-3,5-cyclic monophosphate concentrations to cause selective pulmonary vasodilation. Also, by causing the buildup of cAMP in the myocardium, Milrinone increases contractile force, heart rate and the extent of relaxation.
Treatment:
The newest generation of PPH pharmacotherapy shows great promise. Bosentan is a nonspecific endothelin-receptor antagonist capable of neutralizing the most identifiable cirrhosis-associated vasoconstrictor, safely and efficaciously improving oxygenation and PVR, especially in conjunction with sildenafil. Finally, where the high pressures and pulmonary tree irritations of PPH cause a medial thickening of the vessels (smooth muscle migration and hyperplasia), one can remove the cause – control the pressure, transplant the liver – yet those morphological changes persist, sometimes necessitating lung transplantation. Imatinib, designed to treat chronic myeloid leukemia, has been shown to reverse the pulmonary remodeling associated with PPH.
Prognosis:
Following diagnosis, mean survival of patients with PPH is 15 months. The survival of those with cirrhosis is sharply curtailed by PPH but can be significantly extended by both medical therapy and liver transplantation, provided the patient remains eligible. Eligibility for transplantation is generally related to mean pulmonary artery pressure (PAP). Given the fear that those PPH patients with high PAP will have right heart failure following the stress of post-transplant reperfusion or in the immediate perioperative period, patients are typically risk-stratified based on mean PAP. Indeed, the operation-related mortality rate is greater than 50% when pre-operative mean PAP values lie between 35 and 50 mm Hg; if mean PAP exceeds 40–45 mm Hg, transplantation is associated with a perioperative mortality of 70-80% (in those cases without preoperative medical therapy). Patients, then, are considered to have a high risk of perioperative death once their mean PAP exceeds 35 mmHg. Survival is best inferred from published institutional experiences. At one institution, without treatment, 1-year survival was 46% and 5-year survival was 14%. With medical therapy, 1-year survival was 88% and 5-year survival was 55%. Survival at 5 years with medical therapy followed by liver transplantation was 67%. At another institution, of the 67 patients with PPH from 1652 total cirrhotics evaluated for transplant, half (34) were placed on the waiting list. Of these, 16 (48%) were transplanted at a time when 25% of all patients who underwent full evaluation received new livers, meaning the diagnosis of PPH made a patient twice as likely to be transplanted, once on the waiting list. Of those listed for transplant with PPH, 11 (33%) were eventually removed because of PPH, and 5 (15%) died on the waitlist. Of the 16 transplanted patients with PPH, 11 (69%) survived for more than a year after transplant, at a time when overall one-year survival in that center was 86.4%. The three-year post-transplant survival for patients with PPH was 62.5% when it was 81.02% overall at this institution. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Fard**
Fard:
Farḍ (Arabic: فرض) or farīḍah (فريضة) or fardh in Islam is a religious duty commanded by God. The word is also used in Turkish, Persian, Pashto, Urdu (spelled farz), and Malay (spelled fardu or fardhu) in the same meaning. Muslims who obey such commands or duties are said to receive hasanat (حسنة), ajr (أجر) or thawab (ثواب) for each good deed.
Fard:
Fard or its synonym wājib (واجب) is one of the five types of ahkam (أحكام) into which fiqh categorizes acts of every Muslim. The Hanafi fiqh, however, does not consider both terms to be synonymous, and makes a distinction between wajib and fard, the latter being obligatory and the former slightly lesser degree than being obligatory.
Individual duty and sufficiency:
The Fiqh distinguishes two sorts of duties: Individual duty or farḍ al-'ayn (فرض العين) relates to acts that every individual Muslim is required to perform, such as daily prayer (salat), and the pilgrimage to Mecca at least once in a lifetime if the person can afford the journey (hajj). An individual not performing this will be punished in the afterlife (but can be excused on the basis of incapability), whereas one who acknowledges its necessity and fulfils it will be rewarded.
Individual duty and sufficiency:
Sufficiency duty or farḍ al-kifāya (فرض الكفاية) is a duty which is imposed on the whole community of believers (ummah). The classic example for it is janaza (Funeral prayer): the individual is not required to perform it as long as a sufficient number of community members fulfill it.
Examples of fard acts:
Salah (daily prayer, including Friday prayer), Zakat (giving alms), Sawm (fasting during Ramadan), Hajj (pilgrimage to Mecca), and protecting one's children. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Strontium chlorate**
Strontium chlorate:
Strontium chlorate is a chemical compound, with the formula Sr(ClO3)2. It is a strong oxidizing agent.
Preparation:
Strontium chlorate is created by warming a solution of strontium hydroxide and adding chlorine to it, with subsequent crystallization. Chlorine has no action on dry Sr(OH)2, but it converts the hydrate (Sr(OH)2·8H2O) into the chloride and chlorate, with a small quantity of strontium hypochlorite also being produced. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
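For illustration only, an idealized overall equation for the hot disproportionation of chlorine in strontium hydroxide solution, balanced here as an assumption from the products named above (neglecting the water of hydration and the minor hypochlorite by-product), can be written as:

6 Sr(OH)2 + 6 Cl2 → Sr(ClO3)2 + 5 SrCl2 + 6 H2O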
**Object-oriented programming**
Object-oriented programming:
Object-Oriented Programming (OOP) is a programming paradigm based on the concept of "objects", which can contain data and code. The data is in the form of fields (often known as attributes or properties), and the code is in the form of procedures (often known as methods). A common feature of objects is that procedures (or methods) are attached to them and can access and modify the object's data fields. In this brand of OOP, there is usually a special name such as this or self used to refer to the current object. In OOP, computer programs are designed by making them out of objects that interact with one another. OOP languages are diverse, but the most popular ones are class-based, meaning that objects are instances of classes, which also determine their types.
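As a purely illustrative sketch (not part of the original article), the following Python code shows an object bundling data fields with methods, with self referring to the current object:

```python
# Minimal illustration of an object bundling data (fields) with code (methods).
class Counter:
    def __init__(self, start=0):
        self.count = start      # data field (attribute) stored on this object

    def increment(self, step=1):
        self.count += step      # a method reading and modifying the object's data
        return self.count

c = Counter()       # 'c' is an object: an instance of the class Counter
c.increment()
print(c.count)      # -> 1
```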
Object-oriented programming:
Many of the most widely used programming languages (such as C++, Java, Python, etc.) are multi-paradigm and they support object-oriented programming to a greater or lesser degree, typically in combination with imperative, procedural programming. Significant object-oriented languages include: Ada, ActionScript, C++, Common Lisp, C#, Dart, Eiffel, Fortran 2003, Haxe, Java, JavaScript, Kotlin, logo, MATLAB, Objective-C, Object Pascal, Perl, PHP, Python, R, Raku, Ruby, Scala, SIMSCRIPT, Simula, Smalltalk, Swift, Vala and Visual Basic.NET.
History:
Terminology invoking "objects" in the modern sense of object-oriented programming made its first appearance at MIT in the late 1950s and early 1960s. In the environment of the artificial intelligence group, as early as 1960, "object" could refer to identified items (LISP atoms) with properties (attributes);Alan Kay later cited a detailed understanding of LISP internals as a strong influence on his thinking in 1966, and that he used the term "object-oriented programming" in conversation as early as 1967. Although sometimes called "the father of object-oriented programming", Alan Kay has differentiated his notion of OO from the more conventional abstract data type notion of object, and has implied that the computer science establishment did not adopt his notion. A 1976 MIT memo co-authored by Barbara Liskov lists Simula 67, CLU, and Alphard as object-oriented languages, but does not mention Smalltalk.
History:
Another early MIT example was Sketchpad created by Ivan Sutherland in 1960–1961; in the glossary of the 1963 technical report based on his dissertation about Sketchpad, Sutherland defined notions of "object" and "instance" (with the class concept covered by "master" or "definition"), albeit specialized to graphical interaction.
History:
Also, an MIT ALGOL version, AED-0, established a direct link between data structures ("plexes", in that dialect) and procedures, prefiguring what were later termed "messages", "methods", and "member functions". Simula introduced important concepts that are today an essential part of object-oriented programming, such as class and object, inheritance, and dynamic binding. The object-oriented Simula programming language was used mainly by researchers involved with physical modelling, such as models to study and improve the movement of ships and their content through cargo ports. In the 1970s, the first version of the Smalltalk programming language was developed at Xerox PARC by Alan Kay, Dan Ingalls and Adele Goldberg. Smalltalk-72 included a programming environment and was dynamically typed, and at first was interpreted, not compiled. Smalltalk became noted for its application of object orientation at the language level and its graphical development environment. Smalltalk went through various versions and interest in the language grew. While Smalltalk was influenced by the ideas introduced in Simula 67, it was designed to be a fully dynamic system in which classes could be created and modified dynamically. In the 1970s, Smalltalk influenced the Lisp community to incorporate object-based techniques that were introduced to developers via the Lisp machine. Experimentation with various extensions to Lisp (such as LOOPS and Flavors introducing multiple inheritance and mixins) eventually led to the Common Lisp Object System, which integrates functional programming and object-oriented programming and allows extension via a Meta-object protocol. In the 1980s, there were a few attempts to design processor architectures that included hardware support for objects in memory but these were not successful. Examples include the Intel iAPX 432 and the Linn Smart Rekursiv.
History:
In 1981, Goldberg edited the August issue of Byte Magazine, introducing Smalltalk and object-oriented programming to a wider audience. In 1986, the Association for Computing Machinery organised the first Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA), which was unexpectedly attended by 1,000 people. In the mid-1980s Objective-C was developed by Brad Cox, who had used Smalltalk at ITT Inc., and Bjarne Stroustrup, who had used Simula for his PhD thesis, eventually went on to create the object-oriented C++. In 1985, Bertrand Meyer also produced the first design of the Eiffel language. Focused on software quality, Eiffel is a purely object-oriented programming language and a notation supporting the entire software lifecycle. Meyer described the Eiffel software development method, based on a small number of key ideas from software engineering and computer science, in Object-Oriented Software Construction. Essential to the quality focus of Eiffel is Meyer's reliability mechanism, Design by Contract, which is an integral part of both the method and language.
History:
In the early and mid-1990s object-oriented programming developed as the dominant programming paradigm when programming languages supporting the techniques became widely available. These included Visual FoxPro 3.0, C++, and Delphi. Its dominance was further enhanced by the rising popularity of graphical user interfaces, which rely heavily upon object-oriented programming techniques. An example of a closely related dynamic GUI library and OOP language can be found in the Cocoa frameworks on Mac OS X, written in Objective-C, an object-oriented, dynamic messaging extension to C based on Smalltalk. OOP toolkits also enhanced the popularity of event-driven programming (although this concept is not limited to OOP).
History:
At ETH Zürich, Niklaus Wirth and his colleagues had also been investigating such topics as data abstraction and modular programming (although modular programming had been in common use since the 1960s or earlier, Wirth added type checking across module boundaries). Modula-2 (1978) included both, and their succeeding design, Oberon, included a distinctive approach to object orientation, classes, and such. Inheritance is not obvious in Wirth's design since his nomenclature looks in the opposite direction: it is called type extension and the viewpoint is from the parent down to the inheritor.
History:
Object-oriented features have been added to many previously existing languages, including Ada, BASIC, Fortran, Pascal, and COBOL. Adding these features to languages that were not initially designed for them often led to problems with compatibility and maintainability of code.
History:
More recently, a number of languages have emerged that are primarily object-oriented, but that are also compatible with procedural methodology. Two such languages are Python and Ruby. Probably the most commercially important recent object-oriented languages are Java, developed by Sun Microsystems, as well as C# and Visual Basic.NET (VB.NET), both designed for Microsoft's .NET platform. Each of these two frameworks shows, in its own way, the benefit of using OOP by creating an abstraction from implementation. VB.NET and C# support cross-language inheritance, allowing classes defined in one language to subclass classes defined in the other language.
Features:
Object-oriented programming uses objects, but not all of the associated techniques and structures are supported directly in languages that claim to support OOP. It performs operations on operands. The features listed below are common among languages considered to be strongly class- and object-oriented (or multi-paradigm with OOP support), with notable exceptions mentioned.
Shared with non-OOP languages Variables that can store information formatted in a small number of built-in data types like integers and alphanumeric characters. This may include data structures like strings, lists, and hash tables that are either built-in or result from combining variables using memory pointers.
Features:
Procedures – also known as functions, methods, routines, or subroutines – that take input, generate output, and manipulate data. Modern languages include structured programming constructs like loops and conditionals. Modular programming support provides the ability to group procedures into files and modules for organizational purposes. Modules are namespaced so identifiers in one module will not conflict with a procedure or variable sharing the same name in another file or module.
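A tiny Python illustration of these shared, non-OOP features – a procedure and module namespacing; the use of the standard math module is only for illustration.

```python
import math                       # a module acts as a namespace for procedures

def sqrt(x):
    """A local procedure: takes input, generates output."""
    return x ** 0.5

# The module name qualifies the identifier, so the two sqrt procedures coexist
# without conflict.
print(math.sqrt(2.0))             # from the math module
print(sqrt(2.0))                  # the local procedure defined above
```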
Features:
Objects and classes Languages that support object-oriented programming (OOP) typically use inheritance for code reuse and extensibility in the form of either classes or prototypes. Those that use classes support two main concepts:
Classes – the definitions for the data format and available procedures for a given type or class of object; may also contain data and procedures (known as class methods) themselves, i.e. classes contain the data members and member functions.
Objects – instances of classes.
Objects sometimes correspond to things found in the real world. For example, a graphics program may have objects such as "circle", "square", "menu". An online shopping system might have objects such as "shopping cart", "customer", and "product". Sometimes objects represent more abstract entities, like an object that represents an open file, or an object that provides the service of translating measurements from U.S. customary to metric.
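As a minimal sketch of the class/object distinction, here is the online-shopping example in Python; the class name ShoppingCart and its fields are hypothetical, chosen only to mirror the text.

```python
# A class defines the data format and the procedures available to its instances.
class ShoppingCart:
    def __init__(self):
        # Each instance gets its own list of items (its data members).
        self.items = []

    def add_product(self, name, price):
        # A member function (method) that manipulates the object's data.
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

# Objects are instances of the class; each one is independent.
cart = ShoppingCart()
cart.add_product("book", 12.50)
cart.add_product("pen", 1.20)
print(cart.total())   # 13.7
```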
Features:
Each object is said to be an instance of a particular class (for example, an object with its name field set to "Mary" might be an instance of class Employee). Procedures in object-oriented programming are known as methods; variables are also known as fields, members, attributes, or properties. This leads to the following terms:
Class variables – belong to the class as a whole; there is only one copy of each variable, shared across all instances of the class.
Instance variables or attributes – data that belongs to individual objects; every object has its own copy of each one.
Member variables – refers to both the class and instance variables that are defined by a particular class.
Class methods – belong to the class as a whole and have access to only class variables and inputs from the procedure call.
Instance methods – belong to individual objects, and have access to instance variables for the specific object they are called on, inputs, and class variables.
Objects are accessed somewhat like variables with complex internal structure, and in many languages are effectively pointers, serving as actual references to a single instance of said object in memory within a heap or stack. They provide a layer of abstraction which can be used to separate internal from external code. External code can use an object by calling a specific instance method with a certain set of input parameters, read an instance variable, or write to an instance variable. Objects are created by calling a special type of method in the class known as a constructor. A program may create many instances of the same class as it runs, which operate independently. This is an easy way for the same procedures to be used on different sets of data.
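A minimal Python sketch of the terms just listed; the Employee class and its fields are illustrative and not taken from any particular library.

```python
class Employee:
    # Class variable: one copy, shared by every instance.
    company = "Example Corp"

    def __init__(self, name):
        # The constructor creates a new instance; instance variables
        # belong to that object alone.
        self.name = name

    def describe(self):
        # Instance method: can read instance variables (self.name)
        # as well as class variables (Employee.company).
        return f"{self.name} works at {Employee.company}"

    @classmethod
    def rename_company(cls, new_name):
        # Class method: operates only on class-level data.
        cls.company = new_name

mary = Employee("Mary")
bob = Employee("Bob")          # an independent instance with its own name field
print(mary.describe())         # Mary works at Example Corp
Employee.rename_company("Acme")
print(bob.describe())          # Bob works at Acme (the class variable is shared)
```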
Features:
Object-oriented programming that uses classes is sometimes called class-based programming, while prototype-based programming does not typically use classes. As a result, significantly different yet analogous terminology is used to define the concepts of object and instance.
In some languages classes and objects can be composed using other concepts like traits and mixins.
Features:
Class-based vs prototype-based In class-based languages the classes are defined beforehand and the objects are instantiated based on the classes. If two objects apple and orange are instantiated from the class Fruit, they are inherently fruits and it is guaranteed that you may handle them in the same way; e.g. a programmer can expect the existence of the same attributes such as color or sugar_content or is_ripe.
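The Fruit example can be written directly in a class-based language; a minimal Python sketch (the attribute names follow the text, everything else is illustrative):

```python
class Fruit:
    def __init__(self, color, sugar_content, is_ripe):
        self.color = color
        self.sugar_content = sugar_content
        self.is_ripe = is_ripe

# Both objects are instantiated from the same class, so a caller can rely
# on the same attributes existing on each of them.
apple = Fruit(color="red", sugar_content=10, is_ripe=True)
orange = Fruit(color="orange", sugar_content=9, is_ripe=False)
for fruit in (apple, orange):
    print(fruit.color, fruit.is_ripe)
```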
Features:
In prototype-based languages the objects are the primary entities. No classes even exist. The prototype of an object is just another object to which the object is linked. Every object has one prototype link (and only one). New objects can be created based on already existing objects chosen as their prototype. You may call two different objects apple and orange a fruit, if the object fruit exists, and both apple and orange have fruit as their prototype. The idea of the fruit class doesn't exist explicitly, but as the equivalence class of the objects sharing the same prototype. The attributes and methods of the prototype are delegated to all the objects of the equivalence class defined by this prototype. The attributes and methods owned individually by the object may not be shared by other objects of the same equivalence class; e.g. the attribute sugar_content may be unexpectedly not present in apple. Only single inheritance can be implemented through the prototype.
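Python is class-based, but the delegation mechanism described above can be sketched by hand: each object keeps a link to its prototype and looks missing attributes up along that link. This is only an illustrative model of prototype-based languages such as JavaScript or Self, not how any of them is actually implemented.

```python
class ProtoObject:
    """A toy object that delegates missing attributes to its prototype."""
    def __init__(self, prototype=None, **slots):
        self.prototype = prototype
        self.slots = dict(slots)

    def get(self, name):
        # Look in the object itself first, then walk the prototype chain.
        if name in self.slots:
            return self.slots[name]
        if self.prototype is not None:
            return self.prototype.get(name)
        raise AttributeError(name)

fruit = ProtoObject(color="unknown", sugar_content=0)
apple = ProtoObject(prototype=fruit, color="red")    # owns color, delegates the rest
orange = ProtoObject(prototype=fruit)                # delegates everything

print(apple.get("color"))           # red (owned by apple itself)
print(orange.get("sugar_content"))  # 0 (delegated to the fruit prototype)
```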
Features:
Dynamic dispatch/message passing It is the responsibility of the object, not any external code, to select the procedural code to execute in response to a method call, typically by looking up the method at run time in a table associated with the object. This feature is known as dynamic dispatch. If the call variability relies on more than the single type of the object on which it is called (i.e. at least one other parameter object is involved in the method choice), one speaks of multiple dispatch.
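A short Python sketch of dynamic dispatch: the method actually executed is chosen at run time from the class of the receiving object, not by the calling code. The class names are hypothetical.

```python
class Animal:
    def speak(self):
        return "..."

class Dog(Animal):
    def speak(self):
        return "woof"

class Cat(Animal):
    def speak(self):
        return "meow"

def greet(animal):
    # The caller does not know (or care) which concrete class it has;
    # the object itself selects the code to run for the 'speak' message.
    return animal.speak()

for pet in (Dog(), Cat()):
    print(greet(pet))   # woof, then meow
```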
Features:
A method call is also known as message passing. It is conceptualized as a message (the name of the method and its input parameters) being passed to the object for dispatch.
Data abstraction Data abstraction is a design pattern in which data are visible only to semantically related functions, so as to prevent misuse. The success of data abstraction leads to frequent incorporation of data hiding as a design principle in object oriented and pure functional programming.
Features:
If a class does not allow calling code to access internal object data and permits access through methods only, this is a form of information hiding known as abstraction. Some languages (Java, for example) let classes enforce access restrictions explicitly, for example denoting internal data with the private keyword and designating methods intended for use by code outside the class with the public keyword. Methods may also be designated public, private, or an intermediate level such as protected (which allows access from the same class and its subclasses, but not objects of a different class). In other languages (like Python) this is enforced only by convention (for example, private methods may have names that start with an underscore). In C#, Swift, and Kotlin, the internal keyword permits access only to files in the same assembly, package, or module as the class.
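A minimal sketch of the Python convention mentioned above: access control is by convention (a leading underscore), whereas languages like Java enforce private/public at compile time. The class and field names are illustrative.

```python
class BankAccount:
    def __init__(self):
        self._balance = 0          # leading underscore: "internal, please don't touch"

    def deposit(self, amount):     # public method: the intended way in
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def balance(self):
        return self._balance

acct = BankAccount()
acct.deposit(50)
print(acct.balance())   # 50
# acct._balance is still reachable in Python; the underscore only signals intent,
# whereas Java's 'private' keyword would make the equivalent access a compile error.
```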
Features:
Encapsulation Encapsulation prevents external code from being concerned with the internal workings of an object. This facilitates code refactoring, for example allowing the author of the class to change how objects of that class represent their data internally without changing any external code (as long as "public" method calls work the same way). It also encourages programmers to put all the code that is concerned with a certain set of data in the same class, which organizes it for easy comprehension by other programmers. Encapsulation is a technique that encourages decoupling.
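The refactoring benefit can be shown with a small sketch: the internal representation changes, but external code that only uses the public methods keeps working. This example is illustrative, not from the source.

```python
class Temperature:
    """Public API: set_celsius() and celsius(). Internals are free to change."""
    def __init__(self):
        self._kelvin = 273.15      # stored internally in kelvin...

    def set_celsius(self, value):
        self._kelvin = value + 273.15

    def celsius(self):
        return self._kelvin - 273.15

# External code depends only on the public methods, so the author could later
# store the value in celsius (or fahrenheit) internally without breaking callers.
t = Temperature()
t.set_celsius(21.0)
print(round(t.celsius(), 1))   # 21.0
```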
Features:
Composition, inheritance, and delegation Objects can contain other objects in their instance variables; this is known as object composition. For example, an object in the Employee class might contain (either directly or through a pointer) an object in the Address class, in addition to its own instance variables like "first_name" and "position". Object composition is used to represent "has-a" relationships: every employee has an address, so every Employee object has access to a place to store an Address object (either directly embedded within itself, or at a separate location addressed via a pointer).
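A minimal sketch of the has-a relationship from the text; the Address fields are hypothetical.

```python
class Address:
    def __init__(self, street, city):
        self.street = street
        self.city = city

class Employee:
    def __init__(self, first_name, position, address):
        self.first_name = first_name
        self.position = position
        self.address = address        # composition: an Employee *has an* Address

home = Address("1 Main St", "Springfield")
emp = Employee("Mary", "Engineer", home)
print(emp.address.city)               # Springfield
```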
Features:
Languages that support classes almost always support inheritance. This allows classes to be arranged in a hierarchy that represents "is-a-type-of" relationships. For example, class Employee might inherit from class Person. All the data and methods available to the parent class also appear in the child class with the same names. For example, class Person might define variables "first_name" and "last_name" with method "make_full_name()". These will also be available in class Employee, which might add the variables "position" and "salary". This technique allows easy re-use of the same procedures and data definitions, in addition to potentially mirroring real-world relationships in an intuitive way. Rather than utilizing database tables and programming subroutines, the developer utilizes objects the user may be more familiar with: objects from their application domain. Subclasses can override the methods defined by superclasses. Multiple inheritance is allowed in some languages, though this can make resolving overrides complicated. Some languages have special support for mixins, though in any language with multiple inheritance, a mixin is simply a class that does not represent an is-a-type-of relationship. Mixins are typically used to add the same methods to multiple classes. For example, class UnicodeConversionMixin might provide a method unicode_to_ascii() when included in class FileReader and class WebPageScraper, which don't share a common parent.
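A sketch of the Person/Employee hierarchy and the mixin idea described above, using Python's multiple inheritance; the names come from the text or are illustrative.

```python
class Person:
    def __init__(self, first_name, last_name):
        self.first_name = first_name
        self.last_name = last_name

    def make_full_name(self):
        return f"{self.first_name} {self.last_name}"

class Employee(Person):                  # Employee is-a-type-of Person
    def __init__(self, first_name, last_name, position, salary):
        super().__init__(first_name, last_name)
        self.position = position
        self.salary = salary

class UnicodeConversionMixin:
    # A mixin: not an is-a relationship, just shared behaviour to mix in.
    def unicode_to_ascii(self, text):
        return text.encode("ascii", errors="ignore").decode("ascii")

class FileReader(UnicodeConversionMixin):
    pass

emp = Employee("Mary", "Poppins", "Nanny", 30000)
print(emp.make_full_name())                      # inherited from Person
print(FileReader().unicode_to_ascii("naïve"))    # nave
```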
Features:
Abstract classes cannot be instantiated into objects; they exist only for the purpose of inheritance into other "concrete" classes that can be instantiated. In Java, the final keyword can be used to prevent a class from being subclassed. The doctrine of composition over inheritance advocates implementing has-a relationships using composition instead of inheritance. For example, instead of inheriting from class Person, class Employee could give each Employee object an internal Person object, which it then has the opportunity to hide from external code even if class Person has many public attributes or methods. Some languages, like Go, do not support inheritance at all.
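A sketch of an abstract class in Python (via the standard abc module) together with the composition-over-inheritance idea from the text; the names are illustrative.

```python
from abc import ABC, abstractmethod

class Shape(ABC):                 # abstract: cannot be instantiated directly
    @abstractmethod
    def area(self):
        ...

class Square(Shape):              # a concrete subclass must implement area()
    def __init__(self, side):
        self.side = side

    def area(self):
        return self.side * self.side

class Person:
    def __init__(self, name):
        self.name = name

class Employee:
    """Composition instead of inheritance: wraps a Person rather than extending it."""
    def __init__(self, name, position):
        self._person = Person(name)   # internal, hidden from external code
        self.position = position

    def display_name(self):
        return self._person.name      # expose only what is needed

print(Square(3).area())               # 9
print(Employee("Mary", "Engineer").display_name())   # Mary
```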
Features:
The "open/closed principle" advocates that classes and functions "should be open for extension, but closed for modification".
Delegation is another language feature that can be used as an alternative to inheritance.
Polymorphism Subtyping – a form of polymorphism – is when calling code can be independent of which class in the supported hierarchy it is operating on – the parent class or one of its descendants. Meanwhile, the same operation name among objects in an inheritance hierarchy may behave differently.
For example, objects of type Circle and Square are derived from a common class called Shape. The Draw function for each type of Shape implements what is necessary to draw itself while calling code can remain indifferent to the particular type of Shape being drawn.
This is another type of abstraction that simplifies code external to the class hierarchy and enables strong separation of concerns.
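The Circle/Square example above in a minimal Python sketch; the draw implementations are placeholders.

```python
class Shape:
    def draw(self):
        raise NotImplementedError

class Circle(Shape):
    def draw(self):
        return "drawing a circle"

class Square(Shape):
    def draw(self):
        return "drawing a square"

def render(shapes):
    # The calling code is indifferent to which concrete Shape it is handed.
    for shape in shapes:
        print(shape.draw())

render([Circle(), Square()])
```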
Open recursion In languages that support open recursion, object methods can call other methods on the same object (including themselves), typically using a special variable or keyword called this or self. This variable is late-bound; it allows a method defined in one class to invoke another method that is defined later, in some subclass thereof.
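A small Python sketch of open recursion: a method written in the base class calls self.greeting(), and because self is late-bound, a subclass's override is the one that actually runs. The names are illustrative.

```python
class Greeter:
    def greet(self, name):
        # 'self.greeting' is looked up at run time, so a subclass can
        # replace it even though greet() was written here.
        return f"{self.greeting()}, {name}!"

    def greeting(self):
        return "Hello"

class PirateGreeter(Greeter):
    def greeting(self):          # defined "later, in some subclass thereof"
        return "Ahoy"

print(Greeter().greet("Mary"))        # Hello, Mary!
print(PirateGreeter().greet("Mary"))  # Ahoy, Mary!  (open recursion in action)
```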
OOP languages:
Simula (1967) is generally accepted as being the first language with the primary features of an object-oriented language. It was created for making simulation programs, in which what came to be called objects were the most important information representation. Smalltalk (1972 to 1980) is another early example, and the one with which much of the theory of OOP was developed. Concerning the degree of object orientation, the following distinctions can be made: Languages called "pure" OO languages, because everything in them is treated consistently as an object, from primitives such as characters and punctuation, all the way up to whole classes, prototypes, blocks, modules, etc. They were designed specifically to facilitate, even enforce, OO methods. Examples: Ruby, Scala, Smalltalk, Eiffel, Emerald, JADE, Self, Raku.
OOP languages:
Languages designed mainly for OO programming, but with some procedural elements. Examples: Java, Python, C++, C#, Delphi/Object Pascal, VB.NET.
Languages that are historically procedural languages, but have been extended with some OO features. Examples: PHP, JavaScript, Perl, Visual Basic (derived from BASIC), MATLAB, COBOL 2002, Fortran 2003, ABAP, Ada 95, Pascal.
Languages with most of the features of objects (classes, methods, inheritance), but in a distinctly original form. Examples: Oberon (Oberon-1 or Oberon-2).
Languages with abstract data type support which may be used to resemble OO programming, but without all features of object-orientation. This includes object-based and prototype-based languages. Examples: JavaScript, Lua, Modula-2, CLU.
Chameleon languages that support multiple paradigms, including OO. Tcl stands out among these for TclOO, a hybrid object system that supports both prototype-based programming and class-based OO.
OOP in dynamic languages In recent years, object-oriented programming has become especially popular in dynamic programming languages. Python, PowerShell, Ruby and Groovy are dynamic languages built on OOP principles, while Perl and PHP have been adding object-oriented features since Perl 5 and PHP 4, and ColdFusion since version 6.
The Document Object Model of HTML, XHTML, and XML documents on the Internet has bindings to the popular JavaScript/ECMAScript language. JavaScript is perhaps the best known prototype-based programming language, which employs cloning from prototypes rather than inheriting from a class (contrast to class-based programming). Another scripting language that takes this approach is Lua.
OOP languages:
OOP in a network protocol The messages that flow between computers to request services in a client-server environment can be designed as the linearizations of objects defined by class objects known to both the client and the server. For example, a simple linearized object would consist of a length field, a code point identifying the class, and a data value (a toy sketch of such an object appears after the list below). A more complex example would be a command consisting of the length and code point of the command and values consisting of linearized objects representing the command's parameters. Each such command must be directed by the server to an object whose class (or superclass) recognizes the command and is able to provide the requested service. Clients and servers are best modeled as complex object-oriented structures. Distributed Data Management Architecture (DDM) took this approach and used class objects to define objects at four levels of a formal hierarchy: Fields defining the data values that form messages, such as their length, code point and data values.
OOP languages:
Objects and collections of objects similar to what would be found in a Smalltalk program for messages and parameters.
Managers similar to IBM i Objects, such as a directory to files and files consisting of metadata and records. Managers conceptually provide memory and processing resources for their contained objects.
A client or server consisting of all the managers necessary to implement a full processing environment, supporting such aspects as directory services, security and concurrency control. The initial version of DDM defined distributed file services. It was later extended to be the foundation of Distributed Relational Database Architecture (DRDA).
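A toy sketch of the "simple linearized object" described above: a length field, a code point identifying the class, and a data value packed into bytes. The code points and byte layout here are invented for illustration and are not the actual DDM wire format.

```python
import struct

# Hypothetical code points mapping a class to a number shared by client and server.
CODE_POINTS = {"Temperature": 0x01, "Pressure": 0x02}

def linearize(class_name, value):
    """Pack a simple object as: 2-byte total length | 1-byte code point | 4-byte float."""
    payload = struct.pack(">Bf", CODE_POINTS[class_name], value)
    return struct.pack(">H", len(payload) + 2) + payload

def delinearize(data):
    length, code_point, value = struct.unpack(">HBf", data)
    class_name = next(k for k, v in CODE_POINTS.items() if v == code_point)
    return class_name, value

message = linearize("Temperature", 21.5)
print(delinearize(message))   # ('Temperature', 21.5)
```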
Design patterns:
Challenges of object-oriented design are addressed by several approaches. The most common approach is the set of design patterns codified by Gamma et al. More broadly, the term "design patterns" can be used to refer to any general, repeatable, solution pattern to a commonly occurring problem in software design. Some of these commonly occurring problems have implications and solutions particular to object-oriented development.
Design patterns:
Inheritance and behavioral subtyping It is intuitive to assume that inheritance creates a semantic "is a" relationship, and thus to infer that objects instantiated from subclasses can always be safely used instead of those instantiated from the superclass. This intuition is unfortunately false in most OOP languages, in particular in all those that allow mutable objects. Subtype polymorphism as enforced by the type checker in OOP languages (with mutable objects) cannot guarantee behavioral subtyping in any context. Behavioral subtyping is undecidable in general, so it cannot be implemented by a program (compiler). Class or object hierarchies must be carefully designed, considering possible incorrect uses that cannot be detected syntactically. This issue is known as the Liskov substitution principle.
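The classic rectangle/square example is often used to illustrate this point; it is not from the source text, but it shows how a subclass that passes the type checker can still break callers' expectations once objects are mutable.

```python
class Rectangle:
    def __init__(self, width, height):
        self.width, self.height = width, height

    def set_width(self, w):
        self.width = w

    def area(self):
        return self.width * self.height

class Square(Rectangle):
    # A square "is a" rectangle geometrically, but keeping the sides equal
    # changes the behaviour callers rely on.
    def set_width(self, w):
        self.width = self.height = w

def stretch(rect):
    rect.set_width(10)
    # Callers of Rectangle expect only the width to change:
    return rect.area()

print(stretch(Rectangle(2, 5)))  # 50, as the caller expects
print(stretch(Square(5, 5)))     # 100 - type-checks, yet violates the expectation
```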
Design patterns:
Gang of Four design patterns Design Patterns: Elements of Reusable Object-Oriented Software is an influential book published in 1994 by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides, often referred to humorously as the "Gang of Four". Along with exploring the capabilities and pitfalls of object-oriented programming, it describes 23 common programming problems and patterns for solving them.
Design patterns:
The book describes the following patterns:
Creational patterns (5): Factory method pattern, Abstract factory pattern, Singleton pattern, Builder pattern, Prototype pattern
Structural patterns (7): Adapter pattern, Bridge pattern, Composite pattern, Decorator pattern, Facade pattern, Flyweight pattern, Proxy pattern
Behavioral patterns (11): Chain-of-responsibility pattern, Command pattern, Interpreter pattern, Iterator pattern, Mediator pattern, Memento pattern, Observer pattern, State pattern, Strategy pattern, Template method pattern, Visitor pattern
Object-orientation and databases Both object-oriented programming and relational database management systems (RDBMSs) are extremely common in software today. Since relational databases don't store objects directly (though some RDBMSs have object-oriented features to approximate this), there is a general need to bridge the two worlds. The problem of bridging object-oriented programming accesses and data patterns with relational databases is known as object-relational impedance mismatch. There are a number of approaches to cope with this problem, but no general solution without downsides. One of the most common approaches is object-relational mapping, as found in IDE languages such as Visual FoxPro and libraries such as Java Data Objects and Ruby on Rails' ActiveRecord.
Design patterns:
There are also object databases that can be used to replace RDBMSs, but these have not been as technically and commercially successful as RDBMSs.
Design patterns:
Real-world modeling and relationships OOP can be used to associate real-world objects and processes with digital counterparts. However, not everyone agrees that OOP facilitates direct real-world mapping (see Criticism section) or that real-world mapping is even a worthy goal; Bertrand Meyer argues in Object-Oriented Software Construction that a program is not a model of the world but a model of some part of the world; "Reality is a cousin twice removed". At the same time, some principal limitations of OOP have been noted.
Design patterns:
For example, the circle-ellipse problem is difficult to handle using OOP's concept of inheritance.
Design patterns:
However, Niklaus Wirth (who popularized the adage now known as Wirth's law: "Software is getting slower more rapidly than hardware becomes faster") said of OOP in his paper, "Good Ideas through the Looking Glass", "This paradigm closely reflects the structure of systems 'in the real world', and it is therefore well suited to model complex systems with complex behaviours" (contrast KISS principle).
Design patterns:
Steve Yegge and others noted that natural languages lack the OOP approach of strictly prioritizing things (objects/nouns) before actions (methods/verbs). This problem may cause OOP to suffer more convoluted solutions than procedural programming.
OOP and control flow OOP was developed to increase the reusability and maintainability of source code. Transparent representation of the control flow had no priority and was meant to be handled by a compiler. With the increasing relevance of parallel hardware and multithreaded coding, developing transparent control flow becomes more important, something hard to achieve with OOP.
Responsibility- vs. data-driven design Responsibility-driven design defines classes in terms of a contract, that is, a class should be defined around a responsibility and the information that it shares. This is contrasted by Wirfs-Brock and Wilkerson with data-driven design, where classes are defined around the data-structures that must be held. The authors hold that responsibility-driven design is preferable.
SOLID and GRASP guidelines SOLID is a mnemonic invented by Michael Feathers which spells out five software engineering design principles:
Single responsibility principle
Open/closed principle
Liskov substitution principle
Interface segregation principle
Dependency inversion principle
GRASP (General Responsibility Assignment Software Patterns) is another set of guidelines advocated by Craig Larman.
Criticism:
The OOP paradigm has been criticised for a number of reasons, including not meeting its stated goals of reusability and modularity, and for overemphasizing one aspect of software design and modeling (data/objects) at the expense of other important aspects (computation/algorithms). Luca Cardelli has claimed that OOP code is "intrinsically less efficient" than procedural code, that OOP can take longer to compile, and that OOP languages have "extremely poor modularity properties with respect to class extension and modification", and tend to be extremely complex. The latter point is reiterated by Joe Armstrong, the principal inventor of Erlang, who is quoted as saying: The problem with object-oriented languages is they've got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle.
Criticism:
A study by Potok et al. has shown no significant difference in productivity between OOP and procedural approaches. Christopher J. Date stated that critical comparison of OOP to other technologies, relational in particular, is difficult because of lack of an agreed-upon and rigorous definition of OOP; however, Date and Darwen have proposed a theoretical foundation on OOP that uses OOP as a kind of customizable type system to support RDBMS. In an article Lawrence Krubner claimed that compared to other languages (LISP dialects, functional languages, etc.) OOP languages have no unique strengths, and inflict a heavy burden of unneeded complexity. Alexander Stepanov compares object orientation unfavourably to generic programming: I find OOP technically unsound. It attempts to decompose the world in terms of interfaces that vary on a single type. To deal with the real problems you need multisorted algebras — families of interfaces that span multiple types. I find OOP philosophically unsound. It claims that everything is an object. Even if it is true it is not very interesting — saying that everything is an object is saying nothing at all.
Criticism:
Paul Graham has suggested that OOP's popularity within large companies is due to "large (and frequently changing) groups of mediocre programmers". According to Graham, the discipline imposed by OOP prevents any one programmer from "doing too much damage". Leo Brodie has suggested a connection between the standalone nature of objects and a tendency to duplicate code in violation of the don't repeat yourself principle of software development.
Criticism:
Steve Yegge noted that, as opposed to functional programming: Object Oriented Programming puts the nouns first and foremost. Why would you go to such lengths to put one part of speech on a pedestal? Why should one kind of concept take precedence over another? It's not as if OOP has suddenly made verbs less important in the way we actually think. It's a strangely skewed perspective.
Criticism:
Rich Hickey, creator of Clojure, described object systems as overly simplistic models of the real world. He emphasized the inability of OOP to model time properly, which is getting increasingly problematic as software systems become more concurrent. Eric S. Raymond, a Unix programmer and open-source software advocate, has been critical of claims that present object-oriented programming as the "One True Solution", and has written that object-oriented programming languages tend to encourage thickly layered programs that destroy transparency. Raymond compares this unfavourably to the approach taken with Unix and the C programming language. Rob Pike, a programmer involved in the creation of UTF-8 and Go, has called object-oriented programming "the Roman numerals of computing" and has said that OOP languages frequently shift the focus from data structures and algorithms to types. Furthermore, he cites an instance of a Java professor whose "idiomatic" solution to a problem was to create six new classes, rather than to simply use a lookup table. Regarding inheritance, Bob Martin states that because they are software, related classes do not necessarily share the relationships of the things they represent.
Formal semantics:
Objects are the run-time entities in an object-oriented system. They may represent a person, a place, a bank account, a table of data, or any item that the program has to handle.
Formal semantics:
There have been several attempts at formalizing the concepts used in object-oriented programming. The following concepts and constructs have been used as interpretations of OOP concepts:
co-algebraic data types
recursive types
encapsulated state
inheritance
records are a basis for understanding objects if function literals can be stored in fields (like in functional-programming languages), but the actual calculi need be considerably more complex to incorporate essential features of OOP. Several extensions of System F<: that deal with mutable objects have been studied; these allow both subtype polymorphism and parametric polymorphism (generics).
Attempts to find a consensus definition or theory behind objects have not proven very successful (however, see Abadi & Cardelli, A Theory of Objects for formal definitions of many OOP concepts and constructs), and often diverge widely. For example, some definitions focus on mental activities, and some on program structuring. One of the simpler definitions is that OOP is the act of using "map" data structures or arrays that can contain functions and pointers to other maps, all with some syntactic and scoping sugar on top. Inheritance can be performed by cloning the maps (sometimes called "prototyping").
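A minimal sketch of this "simpler definition": objects as map data structures whose entries may be functions, with inheritance done by cloning the map. Purely illustrative.

```python
# An "object" is just a dict that can hold data and functions.
point = {
    "x": 1,
    "y": 2,
    "describe": lambda self: f"({self['x']}, {self['y']})",
}

# "Inheritance" by cloning the map and then overriding or extending entries.
point3d = dict(point)
point3d["z"] = 3
point3d["describe"] = lambda self: f"({self['x']}, {self['y']}, {self['z']})"

def send(obj, method_name, *args):
    # A little scoping sugar: look the function up in the map and pass the map as 'self'.
    return obj[method_name](obj, *args)

print(send(point, "describe"))    # (1, 2)
print(send(point3d, "describe"))  # (1, 2, 3)
```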
**Penumbra (medicine)**
Penumbra (medicine):
In pathology and anatomy the penumbra is the area surrounding an ischemic event such as thrombotic or embolic stroke. Immediately following the event, blood flow and therefore oxygen transport is reduced locally, leading to hypoxia of the cells near the location of the original insult. This can lead to hypoxic cell death (infarction) and amplify the original damage from the ischemia; however, the penumbra area may remain viable for several hours after an ischemic event due to the collateral arteries that supply the penumbral zone.
Penumbra (medicine):
As time elapses after the onset of stroke, the extent of the penumbra tends to decrease; therefore, in the emergency department a major concern is to protect the penumbra by increasing oxygen transport and delivery to cells in the danger zone, thereby limiting cell death. The existence of a penumbra implies that salvage of the cells is possible. There is a high correlation between the extent of spontaneous neurological recovery and the volume of penumbra that escapes infarction; therefore, saving the penumbra should improve the clinical outcome.
Definition:
One widely accepted definition for penumbra describes the area as "ischemic tissue potentially destined for infarction but it isn't irreversibly injured and [is therefore] the target of any acute therapies." The original definition of the penumbra referred to areas of the brain that were damaged but not yet dead, and offered promise to rescue the brain tissue with the appropriate therapies.
Blood flow:
The penumbra region typically occurs when blood flow drops below 20 mL/100 g/min. At this point, electrical communication between neurons ceases. Cells in this region are alive, but metabolic pumps are inhibited and oxidative metabolism is reduced, though neurons may begin to depolarize again. Areas of the brain generally do not become infarcted until blood flow to the region drops below 10 to 12 mL/100 g/min. At this point, glutamate release becomes unregulated, ion pumps are inhibited, and adenosine triphosphate (ATP) synthesis stops, which ultimately leads to the disruption of intracellular processes and neuronal death.
Identification by imaging:
Positron emission tomography (PET) can quantify the size of the penumbra, but is neither widely available nor rapidly accessible. Magnetic resonance imaging can estimate the size of the penumbra with a combination of two MRI sequences: Perfusion weighted imaging (PWI) shows decreased blood perfusion in the infarcted core and the penumbra; Diffusion weighted imaging (DWI) can estimate the size of the infarcted core. Both of these sequences somewhat overestimate their volumes of interest, but the size of the penumbra can roughly be estimated by subtracting the abnormal volume on DWI from the abnormal volume on PWI. The penumbral area can also be detected based upon an integration of three factors. These factors include: the site of vessel occlusion, the extent of oligaemia (the hypoperfused area surrounding the penumbra, but not at risk of infarction) at that moment, and the mismatch between this perfusion defect and the area of the brain already infarcted.
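Expressed as a simple relation (only a rough approximation, as noted above, since both sequences overestimate their targets; the assumption here is merely that both abnormal volumes are measured in the same units):

```latex
V_{\text{penumbra}} \;\approx\; V_{\text{PWI abnormal}} \;-\; V_{\text{DWI abnormal}}
```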
Clinical relevance:
A higher volume of penumbra around a cerebral infarction means a greater volume of potentially salvageable brain matter by thrombolysis and thrombectomy. Such therapies have a greater effect on regaining functions such as movement after a cerebral infarction. After the initial ischemic event the penumbra transitions from a tissue remodeling characterized by damage to a remodeling characterized by repair. In the penumbra, microglia are thought to exert neuroprotective effects via specialized contacts with neuronal somata, termed somatic junctions. Understanding and supporting these microglial actions could broaden the therapeutic window and lead to a greater amount of preserved nervous tissue.
History:
The concept of the ischemic penumbra was developed in Lindsay Symon's laboratory at The National Hospital, Queen Square, London, in 1976, by combined focal measurements of neurofunction, blood flow, and extracellular K+ in the baboon brain following MCA occlusion. Critical levels of blood flow were observed for function and energy metabolism. These results and the first mention of the term ischemic penumbra were published in 1977 in Stroke (1), and further substantiated by an editorial in 1981 (2). The first decade of research focused on the physiologic profile of the penumbra tissue after stroke, mapping the cerebral blood flow and quantifying oxygen and glucose consumption to define these areas. The second decade revealed the mechanism of neuronal cell death. As the biochemical pathways were dissected, penumbral science became a rapidly evolving area of molecular biology. The third decade of penumbral research brought a translational leap: positron emission tomography (PET) scanning can identify brain tissue with decreased blood flow, and magnetic resonance imaging (MRI) can detect portions of the ischemic tissue that have not yet died. These images have allowed vision into the brain to see the areas of tissue that may be salvaged, the penumbra.
1. Astrup J, Symon L, Branston NM, Lassen NA. Cortical evoked potential and extracellular K+ and H+ at critical levels of brain ischemia. Stroke. 1977 Jan-Feb;8(1):51-7. doi: 10.1161/01.str.8.1.51. PMID: 13521.
History:
2. Astrup J, Siesjö BK, Symon L. Thresholds in cerebral ischemia - the ischemic penumbra. Stroke. 1981 Nov-Dec;12(6):723-5. doi: 10.1161/01.str.12.6.723. PMID: 6272455.
**LoLa (software)**
LoLa (software):
LoLa (low latency audio visual streaming system) is proprietary networked music performance software, first conceived in 2005, that enables real-time rehearsing and performing with musicians at remote locations, overcoming latency - the time lapse that occurs while (compressed) audio streams travel to and from each musician.
LoLa (software):
Unlike similar systems, LoLa offers ultra-low latency video as well as audio streaming, and for this reason has extremely stringent hardware requirements (estimated cost over 12,600 euros). The current version supports up to 3 connections, with up to 4 cameras per site. Over 140 sites - primarily universities and conservatoires - are listed as LoLa installations. LoLa was conceived in 2005, when a Miami orchestra ran a master class accompanied by the Italian Research and Academic Network (GARR). Alternative solutions suggested at the time included EtherSound (Paris), NetworkSound (Silicon Valley) and Dante (Sydney) but these were limited to high-speed university or laboratory-based local networks. It has been used for live streaming by individual professional musicians unable to perform in public during the 2020 COVID-19 pandemic, as well as international concerts. Pinchas Zukerman described the technology as "the savior of the profession".
**Knorr-Bremse**
Knorr-Bremse:
Knorr-Bremse AG is a German manufacturer of braking systems for rail and commercial vehicles that has operated in the field for over 110 years. Other products in Group's portfolio include intelligent door systems, control components, air conditioning systems for rail vehicles, torsional vibration dampers, and transmission control systems for commercial vehicles.
In 2022, the Group's workforce of over 31,000 achieved worldwide sales of EUR 7.15 billion. The Group has a presence in over 30 countries, at 100 locations. On 13 October 2022, it was announced that Knorr-Bremse AG had chosen Marc Llistosella to be a member of the Executive Board and CEO. The appointment takes effect as of 1 January 2023.
History:
Foundation Engineer Georg Knorr established Knorr-Bremse GmbH in 1905 in Boxhagen-Rummelsburg, Neue Bahnhofstraße, near Berlin (since 1920 part of Berlin-Friedrichshain). Its production of railway braking systems derived from a company ("Carpenter & Schulze") founded in 1883. In 1911 the company merged with "Continentale Bremsen-GmbH" to found Knorr-Bremse Aktiengesellschaft (AG). From 1913 onwards, a second manufacturing plant, new headquarters, a heating plant and other annex buildings were erected.
History:
The initial basis for Knorr's commercial success was provided by an agreement with the Prussian State Railways, which at that time had formed the Prussian-Hessian Railway Company, to supply single-chamber express braking systems, first for passenger and later on for freight trains. A compressed-air brake, the "Knorr Druckluft-Einkammerschnellbremse" (K1), along with its derivatives, offered considerably enhanced safety performance compared with traditional systems.
History:
In the early twentieth century, train guards still had to operate brakes by hand, from so-called "brake vans". The first pneumatic brakes were of a basic design, but before long, indirect automatic systems using control valves were developed. See History of rail transport in Germany for an overview.
History:
Expansion In 1920 the manufacturing plant of the first Bayerische Motoren-Werke AG (BMW, established in 1917/1918) located in Munich, Moosacher Straße, became a subsidiary of Knorr-Bremse, delivering brake systems as Süddeutsche Bremsen-AG for the Bavarian Group Administration, the former "Royal Bavarian State Railways". There was no further interest in motor engines for aircraft and automobiles. The engine construction and the company name "BMW" were sold in 1922 to financier Camillo Castiglioni to be combined with the Bayerische Flugzeugwerke AG (BFW, located not far away), establishing the company a second time. For details see History of BMW and BFW/Messerschmitt.
History:
From 1922 until 1927 the new main manufacturing plant in Berlin at Hirschberger Straße/Schreiberhauer Straße, next to the Berlin Ringbahn, was erected; a tunnelled road connected the old and the new sites.
History:
The second main area of activity emerged in 1922, when Knorr moved into pneumatic braking systems for commercial road vehicles. The company was the first in Europe to develop a system that applied the brakes simultaneously to all four wheels of a truck as well as its trailer. The resultant reduction in braking distances made a significant contribution to improving road safety.
History:
A small number of the Swedish light MG35/36 machine guns, also known as "Knorr-Bremse machine guns", were also manufactured by Knorr-Bremse for the Wehrmacht during the Second World War.
Re-establishment The company was relocated to the Süddeutsche Bremsen-AG plant in Munich, the former sites in the eastern part of Berlin having been expropriated after 1945.
Timeline
Products:
Rail vehicles Knorr-Bremse not only produces complete braking systems for all types of rolling stock but also door systems, toilets, air conditioning, couplings and windscreen wipers. In 2000, it purchased the British manufacturer Westinghouse Brakes (formerly the brakes division of Westinghouse Brake and Signal Company Ltd) from Invensys, and subsequently moved its operations from Chippenham to the nearby English town of Melksham, Wiltshire. Since 2002, Knorr-Bremse has been working on variable gauge systems for more efficient solutions to break-of-gauge problems.
Products:
Commercial vehicles Knorr-Bremse has been developing and manufacturing braking systems for commercial vehicles since 1920, for trucks and semi-trailer tractor units over 6 tonnes, buses, trailers or special vehicles.
**Comparison triangle**
Comparison triangle:
Define $M_k^2$ as the 2-dimensional metric space of constant curvature $k$. So, for example, $M_0^2$ is the Euclidean plane, $M_1^2$ is the surface of the unit sphere, and $M_{-1}^2$ is the hyperbolic plane.
Comparison triangle:
Let $X$ be a metric space. Let $T$ be a triangle in $X$, with vertices $p$, $q$ and $r$. A comparison triangle $T^*$ in $M_k^2$ for $T$ is a triangle in $M_k^2$ with vertices $p'$, $q'$ and $r'$ such that $d(p,q) = d(p',q')$, $d(p,r) = d(p',r')$ and $d(r,q) = d(r',q')$. Such a triangle is unique up to isometry. The interior angle of $T^*$ at $p'$ is called the comparison angle between $q$ and $r$ at $p$. This is well-defined provided $q$ and $r$ are both distinct from $p$.
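In the Euclidean case $M_0^2$, the comparison angle at $p$ (often written $\tilde{\angle}_p(q,r)$) can be computed explicitly from the three distances via the law of cosines; this is a standard consequence of the definition above rather than something stated in the text:

```latex
\tilde{\angle}_p(q,r) \;=\; \arccos\!\left( \frac{d(p,q)^2 + d(p,r)^2 - d(q,r)^2}{2\, d(p,q)\, d(p,r)} \right)
```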
**DryvIQ**
DryvIQ:
DryvIQ is a software application that enables businesses to migrate on-site system files and associated data across storage and content management platforms, as well as create synchronized hybrid storage systems.
History:
Before it was DryvIQ, the software SkySync was released in 2013 by the Ann Arbor, Michigan-based company Portal Architects, Inc. The company created SkySync, a back-end, administrative application designed to transfer content across storage platforms, after abandoning 18 months of development on a desktop application called SkyBrary in 2011. Between 2014 and 2015, Portal Architects established partnerships with the following companies: Autodesk, Box, Dropbox, Egnyte, EMC, Google, Syncplicity, Huddle, IBM, Microsoft, OpenText, Oracle, Citrix ShareFile, Hightail and Internet2. SkySync (currently DryvIQ) was named a "Cool Vendor in Content Management" by Gartner in 2015. In 2022, SkySync changed its name to DryvIQ, which is the name the company is currently known by.
Overview:
DryvIQ is a software application that syncs, migrates or backs up files including their associated properties, metadata, versions, user accounts and permissions across on-premises and Cloud-based storage platforms. The software deploys on a server, virtual machine or within Microsoft Azure, Amazon Web Services or other cloud computing services.
**A Treatise Concerning the Principles of Human Knowledge**
A Treatise Concerning the Principles of Human Knowledge:
A Treatise Concerning the Principles of Human Knowledge (commonly called the Principles of Human Knowledge, or simply the Treatise) is a 1710 work, in English, by Irish Empiricist philosopher George Berkeley. This book largely seeks to refute the claims made by Berkeley's contemporary John Locke about the nature of human perception. Whilst, like all the Empiricist philosophers, both Locke and Berkeley agreed that we have experiences, regardless of whether material objects exist, Berkeley sought to prove that the outside world (the world which causes the ideas one has within one's mind) is also composed solely of ideas. Berkeley did this by suggesting that "Ideas can only resemble Ideas" – the mental ideas that we possess can only resemble other ideas (not material objects) and thus the external world consists not of physical form, but rather of ideas. This world is (or, at least, was) given logic and regularity by some other force, which Berkeley concludes is God.
Content:
Introduction Berkeley declared that his intention was to make an inquiry into the First Principles of Human Knowledge in order to discover the principles that have led to doubt, uncertainty, absurdity, and contradiction in philosophy. In order to prepare the reader, he discussed two topics that lead to errors. First, he claimed that the mind cannot conceive abstract ideas. We can't have an idea of some abstract thing that is common to many particular ideas and therefore has, at the same time, many different predicates and no predicates. Second, Berkeley declared that words, such as names, do not signify abstract ideas. With regard to ideas, he asserted that we can only think of particular things that have been perceived. Names, he wrote, signify general ideas, not abstract ideas. General ideas represent any one of several particular ideas. Berkeley criticized Locke for saying that words signify general, but abstract, ideas. At the end of his Introduction, he advised the reader to let his words engender clear, particular ideas instead of trying to associate them with non–existent abstractions.
Content:
Part I The following is a summary of Part I (Part II was never published).
Content:
"To be" means "to be perceived" Berkeley began his treatise by asserting that existence is the state of being perceived by a perceiver. Human minds know ideas, not objects. The three kinds of ideas are those of sensation, thought, and imagination. When several ideas are associated together, they are thought to be ideas of one distinct thing, which is then signified by one name.Ideas are known and perceived by a knowing perceiver. This active perceiver is designated by the names mind, spirit, soul, or self. Ideas exist by virtue of a perceiver. The existence of an idea consists in being perceived.What is meant by the term "exist" when it is applied to a thing that is known through the senses? To say that something exists is to say that it is perceived by a perceiver (Esse is percipi). This is the main principle of human knowledge.
Content:
External objects are things that are perceived through our senses. We perceive only our own sensations or ideas. Ideas and sensations cannot exist unperceived. To say that an object exists without being perceived is to attempt to abstract that which cannot be abstracted. We cannot separate or abstract objects and their qualities from our perception of them. If an object exists or is perceived, it must be perceived by me or some other perceiver. It is impossible to separate the being of a sensible thing from its existence as a perception of a perceiver. There can be no unthinking substance or substratum of ideas. Therefore, the perceiving mind or spirit is the only substance of ideas. Ideas inhere in or belong to a perceiver. Are there things that exist in an unthinking substance outside of the perceiver's mind? Can they be the originals that the ideas copy or resemble? An idea can only be like an idea, not something undetectable. It is impossible for us to conceive of a copy or resemblance unless it is between two ideas.
Content:
Locke's primary and secondary qualities According to Locke, a thing's primary qualities, such as its extension, shape, motion, solidity, and number, exist unperceived, apart from any perceiver's mind, in an inert, senseless substance called matter. Berkeley opposed Locke's assertion. Qualities that are called primary are, according to Berkeley, ideas that exist in a perceiver's mind. These ideas can only be like other ideas. They cannot exist in an unperceiving, corporeal substance or matter. The primary qualities of figure, motion, etc., cannot be conceived as being separate from the secondary qualities, which are related to sensations. Therefore, primary qualities, like secondary qualities, exist only in the mind. The properties of primary qualities are relative and change according to the observer's perspective. The greatness and smallness of figure, the swiftness and slowness of motion, exist in the mind and depend on point of view or position.
Content:
Number Number exists only in the mind. The same thing is described by different numbers according to the mind's viewpoint. An object can have an extension of one, three, and thirty six, according to its measurement in yards, feet, and inches. Number is relative and does not exist separately from a mind.
Content:
Sensed qualities are mental Unity is merely an abstract idea. Primary qualities, such as figure, extension, and motion, are relative, as are secondary qualities such as red, bitter, and soft. They all depend on the observer's frame of reference, position, or point of view. Berkeley's "…method of arguing does not so much prove that there is no extension or colour in an outward object, as that we do not know by sense which is the true extension or colour of the object." Idealism, here, is epistemological, not ontological. Berkeley declared that it is "…impossible that any colour or extension at all, or other sensible quality whatsoever, should exist in an unthinking subject without the mind, or in truth, that there should be any such thing as an outward object." Any quality that depends on sensation for its existence requires that a sense organ and a mind be conscious of it. By "unthinking subject," he means "mindless matter" or "substance, substratum, or support that is not a thinking mind." By "without the mind," he means "not in the mind."
Meaning of material substance Matter is material substance. What does this mean? "Material substance" has two meanings: "being in general" and "support of accidents." (The word accident is used here to mean an unessential quality.) "Being in general" is incomprehensible because it is extremely abstract. To speak of supporting accidents such as extension, figure, and motion is to speak of being a substance, substratum, or support in an unusual, figurative, senseless manner. Sensible qualities, such as extension, figure, or motion, do not have an existence outside of a mind.
Content:
Knowledge of external objects Comparing ontology with epistemology, Berkeley asked, "But, though it were possible that solid, figured, moveable substances may exist without the mind, corresponding to the ideas we have of bodies, yet how is it possible for us to know this?" Knowledge through our senses only gives us knowledge of our senses, not of any unperceived things. Knowledge through reason does not guarantee that there are, necessarily, unperceived objects. In dreams and frenzies, we have ideas that do not correspond to external objects. "…[T]he supposition of external bodies is not necessary for the producing our ideas…." Materialists do not know how bodies affect spirit. We can't suppose that there is matter because we don't know how ideas occur in our minds. "In short, if there were external bodies, it is impossible we should ever come to know it…." Suppose that there were an intelligence that was not affected by external bodies. If that intelligence had orderly and vivid sensations and ideas, what reason would it have to believe that bodies external to the mind were exciting those sensations and ideas? None.
Content:
Berkeley's challenge Through reflection or introspection it is possible to attempt to know if a sound, shape, movement, or color can exist unperceived by a mind. Berkeley declared that he will surrender and admit the unperceived existence of material objects, even though this doctrine is unprovable and useless, if "…you can conceive it possible for one extended moveable substance or, in general, for any one idea, or anything like an idea, to exist otherwise than in a mind perceiving it…." In answer to Berkeley's summons, it might be said that it is easy to imagine objects that are not perceived by anyone. But, he asked, "…what is all this, I beseech you, more than framing in your mind certain ideas which you call books and trees, and at the same time omitting to frame the idea of any one that may perceive them? But do not you yourself perceive or think of them all the while?" The mind had merely forgotten to include itself as the imaginer of those imagined objects.
Content:
Absolute existence It is impossible to understand what is meant by the words absolute existence of sensible objects in themselves. To speak of perceived objects that are not perceived is to use words that have no meaning or to utter a contradiction.
Content:
What causes ideas? Ideas exist only in a mind and have no power to cause any effects. Ideas of extension, figure, and motion cannot cause sensations. "To say, therefore, that these [sensations] are the effects of powers resulting from the configuration, number, motion, and size of corpuscles must certainly be false." Some non–idea must produce the succession of ideas in our minds. Since the cause can't be another idea, it must be a substance. If there are no material substances, then it must be an immaterial substance. Such an incorporeal, active substance is called a Spirit. A Spirit is that which acts. A Spirit is one simple, undivided, active being. It cannot be perceived. Only its effects can be perceived. The two principal powers of Spirit are Understanding and Will. Understanding is a Spirit that perceives ideas. Will is a Spirit that operates with or produces ideas. The words will, soul, or spirit designate something that is active but cannot be represented by an idea. Berkeley claimed that a person's active mind can imaginatively generate ideas at will. Ideas that are sensually perceived, however, are not dependent on the observer's will. The ideas that are imprinted on the mind when observing the external world are not the result of willing. "There is therefore some other Will or Spirit that produces them."
Natural laws Ideas that are perceived through our senses are lively and distinct, unlike imagined ideas. Their orderly connection and coherence reflects the wisdom and benevolence of the mind that made them. The ideas of sense occur according to rules. We call these connections and associations laws of nature. Necessary connections are not discovered by us. We only observe settled laws of nature and use them to manage our affairs. Erroneously, we attribute power and agency to ideas of sense, which are mere secondary causes. Ideas, we think, can cause other ideas. The primary cause, the "Governing Spirit whose Will constitutes the laws of nature" is ignored.
Content:
Strong and faint ideas There are strong ideas and there are faint ideas. We call strong ideas real things. They are regular, vivid, constant, distinct, orderly, and coherent. These strong ideas of sense are less dependent on the perceiver. Ideas of imagination, however, are less vivid and distinct. They are copies or images of strong ideas and are more the creation of a perceiver. Nevertheless, both strong and faint ideas are ideas and therefore exist only in a perceiver's mind.
Content:
13 objections
Objection 1 Objection: [A]ll that is real and substantial in nature is banished out of the world, and instead thereof a chimerical scheme of ideas takes place. Answer: Real things and chimeras are both ideas and therefore exist in the mind. Real things are more strongly affecting, steady, orderly, distinct, and independent of the perceiver than imaginary chimeras, but both are ideas. If by substance is meant that which supports accidents or qualities outside of the mind, then substance has no existence. "The only thing whose existence we deny is that which Philosophers call Matter or corporeal substance." All of our experiences are of things (ideas) which we perceive immediately by our senses. These things, or ideas, exist only in the mind that perceives them. "That what I see, hear, and feel doth exist, that is to say, is perceived by me, I no more doubt than I do of my own being."
Objection 2 Objection: [T]here is a great difference betwixt real fire for instance, and the idea of fire, …if you suspect it to be only the idea of fire which you see, do but put your hand into it…. Answer: Real fire and the real pain that it causes are both ideas. They are known only by some mind that perceives them.
Content:
Objection 3 Objection: [W]e "see" things… at a distance from us, and which consequently do not exist in the mind…. Answer: Distant things in a dream are actually in the mind. Also, we do not directly perceive distance while we are awake. We infer distance from a combination of sensations, such as sight and touch. Distant ideas are ideas that we could perceive through touch if we were to move our bodies.
Content:
Objection 4 Objection: It would follow from Berkeley's principles that …things are every moment annihilated and created anew… . When no one perceives them, objects become nothing. When a perceiver opens his eyes, the objects are created again. Answer: Berkeley requests that the reader "…consider whether he means anything by the actual existence of an idea distinct from its being perceived." "[I]t is the mind that frames all that variety of bodies which compose the visible world, any one whereof does not exist longer than it is perceived." If one perceiver closes his eyes, though, the objects that he had been perceiving could still exist in the mind of another perceiver.
Content:
Objection 5 Objection: "[I]f extension and figure exist only in the mind, it follows that the mind is extended and figured…." Extension would be an attribute that is predicated of the subject, the mind, in which it exists. Answer: Extension and figure are in the mind because they are ideas that are perceived by the mind. They are not in the mind as attributes that are predicated of the mind, which is the subject. The color red may be an idea in the mind, but that doesn't mean that the mind is red.
Content:
Objection 6 Objection: "[A] great many things have been explained by matter and motion…." Natural science ("Natural Philosophy" in the text) has made much progress by assuming the existence of matter and mechanical motion. Answer: Scientists ("they who attempt to account of things", the term "scientist" being introduced in the nineteenth century by W. Whewell), do not need to assume that matter and motion exist and that they have effects on an observer's mind. All scientists need to do is to explain why we are affected by certain ideas on certain occasions.
Content:
Objection 7 Objection: It is absurd to ascribe everything to Spirits instead of natural causes. Answer: Using common language, we can speak of natural causes. We do this in order to communicate. However, in actuality we must know that we are speaking only of ideas in a perceiver's mind. We should "think with the learned and speak with the vulgar."
Objection 8 Objection: Humans universally agree that there are external things and that matter exists. Is everyone wrong? Answer: Universal assent doesn't guarantee the truth of a statement. Many false notions are believed by many people. Also, humans may act as if matter is the cause of their sensations. They can't, however, really understand any meaning in the words "matter exists."
Objection 9 Objection: Then why does everyone think that matter and an external world exist? Answer: People notice that some ideas appear in their minds independently of their wishes or desires. They then conclude that those ideas or perceived objects exist outside of the mind. This judgment, however, is a contradiction. Some philosophers, who know that ideas exist only in the mind, assume that there are external objects that resemble the ideas. They think that external objects cause internal, mental ideas. The most important reason why philosophers do not consider God ("Supreme Spirit") as the only possible cause of our perceptions, is "because His operations are regular and uniform". Order and concatenation of things are "an argument of the greatest wisdom, power and goodness in their Creator".
Content:
Objection 10 Objection: Berkeley's principles are not consistent with science and mathematics. The motion of the Earth is considered to be true. But, according to Berkeley, motion is only an idea and does not exist if it is not perceived. Answer: To ask if the Earth moves is really to ask if we could view the Earth's movement if we were in a position to perceive the relation between the Earth and the Sun. In accordance with our knowledge of the way that ideas have appeared in our minds in the past, we can make reasonable predictions about how ideas will appear to us in the future.
Content:
Objection 11 Objection: Ideas appear in a causal sequence. If ideas are mere superficial appearances without internal parts, what is the purpose of the complicated causal sequence in which they appear? It would be less effort for objects to appear as ideas with simple exterior surfaces, without so many internal connections. Answer: Scientists should not explain things as though they are effects of causes. The connection of ideas is a relationship between signs and the things that are signified. We should study our ideas as though they are informative signs in a language of nature. If we understand the language in which these idea–signs are used, then we understand how we can produce connections of ideas.
Content:
Objection 12 Objection: Matter may possibly exist as an inert, thoughtless substance, or occasion, of ideas. Answer: If matter is an unknown support for qualities such as figure, motion and color, then it doesn't concern us. Such qualities are sensations or ideas in a perceiving mind.
Content:
Objection 13 Objection: Holy Scripture speaks of real things such as mountains, cities, and human bodies. Holy Writ also describes miracles, such as the marriage feast at Cana, in which things are changed into other things. Are these nothing but appearances or ideas? Answer: Real things are strong, distinct, vivid ideas. Imaginary things are weak, indistinct, faint ideas. Things that people are able to see, smell, and taste are real things.
Content:
Consequences As a result of these principles, the following consequences follow: Banished questions Because the following inquiries depend on the assumption of the existence of matter, these questions can no longer be asked: Can material substance think? Is matter infinitely divisible? What is the relationship between matter and spirit? We can know only ideas and spirits "Human Knowledge may naturally be reduced to two heads — that of IDEAS and that of SPIRITS".
Content:
Ideas, or unthinking things It is an error to think that objects of sense, or real things, exist in two ways: in the mind and not in the mind (apart from the mind). Scepticism results because we can't know if the perceived objects are like the unperceived objects.
Content:
Sensed ideas are real, existing things. They cannot exist without a perceiving mind. They cannot resemble anything that exists apart from a mind. This is because the existence of a sensation or idea consists in being perceived, and an idea cannot be like anything that is not an idea. If things originate or persist when I do not perceive them, it is because another mind perceives them. Sceptics, fatalists, idolators, and atheists believe that matter exists unperceived.
Content:
Another source of errors is the attempt to think about abstract ideas. Particular ideas are known as being real. Abstractions, made by subtracting all particularity from ideas, lead to errors and difficulties.

Sceptics say that we can never know the true, real nature of things. There is no way, they say, that we can compare the ideas in our mind to what is in the external, material world. We are ignorant of the real essence (internal qualities and constitution) of any object. They say that the cause of an object's properties is its unknown essence, occult qualities, or mechanical causes. But motion, color, sound, figure, magnitude, etc., are ideas, and one idea or quality cannot cause another. The sceptics are wrong because only a spirit can cause an idea.

The mechanical principle of attraction is used to explain the tendency of bodies to move toward each other. But attraction is merely a general name that describes an effect. It does not signify the cause of the observed motion. All efficient causes are produced by the will of a mind or spirit (mind or spirit being that which thinks, wills, and perceives). Gravitation (mutual attraction) is said to be universal. We, however, don't know if gravitation is necessary or essential everywhere in the universe. Gravitation depends only on the will of the mind or spirit that governs the universe.

Four conclusions result from these premisses: (1) Mind or spirit is the efficient cause in nature; (2) We should investigate the final causes or purposes of things; (3) We should study the history of nature and make observations and experiments in order to draw useful general conclusions; (4) We should observe the phenomena that we see in order to discover general laws of nature in order to deduce other phenomena from them. These four conclusions are based on the wisdom, goodness, and kindness of God.

Newton asserted that time, space, and motion can be distinguished into absolute/relative, true/apparent, mathematical/vulgar. In so doing, he assumed that time, space, and motion are usually thought of as being related to sensible things. But they also, he assumed, have an inner nature that exists apart from a spectator's mind and has no relation to sensible things. He described an absolute time, space, and motion that are distinguished from relative or apparent time, space, and motion. Berkeley disagreed. To him, all motion is relative because the idea that Berkeley had of motion necessarily included relation. By pure space, I mean that I conceive that I can move my arms and legs without anything resisting them. Space is less pure when there is more resistance by other bodies. Space, therefore, is an idea that is relative to body and motion.

Errors made by mathematicians occur because of (1) their reliance on general abstract ideas and (2) their belief that an object exists as such without being an idea in a spectator's mind. In arithmetic, those things which pass for abstract truths and theorems concerning numbers are, in reality, concerned with particular things that can be counted. In geometry, a source of confusion is the assumption that a finite extension is infinitely divisible or contains an infinite number of parts. Every particular finite line, surface, or solid which may possibly be the object of our thought is an idea existing only in the mind, and consequently each part of it must be perceived. Any line, surface, or solid that I perceive is an idea in my mind. I can't divide my idea into an infinite number of other ideas.
We can't conceive of an inch–long line being divided into a thousand parts, much less infinities of infinities. There is no such thing as an infinite number of parts contained in a finite quantity. In order to use mathematics, it is not necessary to assume that there are infinite parts of finite lines or any quantities smaller than the smallest that can be sensed.
Content:
Spirits, or thinking things A spirit or mind is that which thinks, wills, or perceives. It is thought that we are ignorant of the nature of mind or spirit because we have no idea of it. But it was demonstrated in § 27 that ideas exist in spirits or minds. It is absurd to expect that the spirit or mind that supports an idea should itself also be an idea. In § 27, it was shown that the soul is indivisible. Therefore, it is naturally immortal. I know that spirits or minds other than myself exist because I perceive the ideas that they cause. When I perceive the order and harmony of nature, I know that God, as infinitely wise spirit or mind, is the cause. We can't see God because He is a spirit or mind, not an idea. We see Him in the same way that we see a man, when actually we are seeing only the ideas, such as color, size, and motion that the man causes. Following a line of thought which can be traced back to Augustine's Theodicy, Berkeley argues that imperfections in nature, such as floods, blights, monstrous births, etc., are absolutely necessary. They are not the result of God's direct influence. They are the result of the working of the system of simple, general, consistent rules that God has established in nature in order that living things can survive. Such natural defects are useful in that they act as an agreeable variety and accentuate the beauty of the rest of nature by their contrast. The pain that exists in the world is indispensably necessary to our well–being. When seen from a higher, broader perspective, particular evils are known to be good when they are comprehended as parts of a beautiful, orderly whole system.
Content:
Main purpose Berkeley claimed that the main design of his efforts in writing this book was to promote the "Consideration of GOD, and our DUTY" (Berkeley's emphasis). If we are clearly convinced of God's existence, then we will fill our hearts with awful circumspection and holy fear. Berkeley claimed that the world exists as it does, when no one is looking at it, because it consists of ideas that are perceived by the mind of God. If we think that the eyes of the Lord are everywhere, beholding the evil and the good, knowing our innermost thoughts, then we will realize our total dependence on Him. In this way, we will have an incentive to be virtuous and to avoid vice.
Sources:
Berkeley, George; Turbayne, Colin Murray (1957). A Treatise Concerning the Principles of Human Knowledge. Forgotten Books. ISBN 978-1-60506-970-8. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Menlo Report**
Menlo Report:
The Menlo Report is a report published by the U.S. Department of Homeland Security Science and Technology Directorate, Cyber Security Division that outlines an ethical framework for research involving Information and Communications Technologies (ICT). The 17-page report was published on August 3, 2012. The following year, the Department of Homeland Security published a 33-page companion report that includes case studies that illustrate how the principles can be applied.
Menlo Report:
The Menlo Report adapted the original Belmont Report principles (Respect for Persons, Beneficence, and Justice) to the context of cybersecurity research & development, as well as adding a fourth principle, "Respect for Law and Public Interest." The Menlo Report was created under an informal, grassroots process that was catalyzed by the ethical issues raised in ICT Computer security research. Discussions at conferences and in public discourse exposed growing awareness of ethical debates in computer security research, including issues that existing oversight authorities (e.g., Institutional Review Boards) might have been unaware of or determined were beyond their purview. The Menlo Report is the core document stemming from the series of working group meetings that broached these issues in an attempt to pre-empt research harms and galvanize the community around common ethical principles and applications.
Menlo Report:
This report proposes a framework for ethical guidelines for computer and information security research, based on the principles set forth in the 1979 Belmont Report, a seminal guide for ethical research in the biomedical and behavioral sciences. The Menlo Report describes how the three principles in the Belmont report can be applied in fields related to research about or involving information and communication technology. ICT research raises new challenges resulting from interactions between humans and communications technologies. In particular, today's ICT research contexts contend with ubiquitously connected network environments, overlaid with varied, often discordant legal regimes and social norms. The Menlo Report proposes the application of these principles to information systems security research although the researchers expect the proposed framework to be relevant to other disciplines, including those targeted by the Belmont report but now operating in more complex and interconnected contexts. The Menlo Report details four core ethical principles, three of them from the original Belmont Report: respect for persons, beneficence, and justice. It has an additional principle: respect for law and public interest. The report explains each of these in the context of ICT research.
Principles of the Menlo Report:
The Menlo Report attempts to summarize a set of basic principles to guide the identification and resolution of ethical problems arising in research of or involving ICT. The report believes that ICT has increasingly become integrated into individual and collective daily lives and affects our social interactions.
Principles of the Menlo Report:
It believes that the challenges of ICTR risk assessment are derived from three factors: the researcher-subject relationships, which tend to be disconnected, dispersed, and intermediated by technology; the proliferation of data sources and analytics, which can heighten risk incalculably; and the inherent overlap between research and operations. In order to properly apply any of the principles in the complex setting of ICT research, it deems it first necessary to perform a systematic and comprehensive stakeholder analysis.
Principles of the Menlo Report:
The proposed guidelines for ethical assessment of ICT Research are as follows: Respect for Persons. Participation as a research subject is voluntary, and follows from informed consent. Therefore the research should treat individuals as autonomous agents and respect their right to determine their own best interests; respect individuals who are not targets of research yet are impacted; and recognize that individuals with diminished autonomy, who are incapable of deciding for themselves, are entitled to protection.
Principles of the Menlo Report:
Beneficence. Do not harm. Maximize probable benefits and minimize probable harms. Systematically assess both risk of harm and benefit.
Justice. Each person deserves equal consideration in how to be treated, and the benefits of research should be fairly distributed according to individual need, effort, societal contribution, and merit. Selection of subjects should be fair, and burdens should be allocated equitably across impacted subjects.
Respect for Law and Public Interest. Engage in legal due diligence and be transparent in methods and results. Be accountable for actions.
Implementation of the Principles of the Menlo Report:
Respect for Persons Appropriate application of the four principles requires that stakeholder analysis first be performed. Thorough stakeholder analysis is important to identify: the correct entity(s) from whom to seek informed consent; the party(s) who bear the burdens or face risks of research; the party(s) who will benefit from research activity; and, the party(s) who are critical to mitigation in the event that chosen risks come to fruition.
Implementation of the Principles of the Menlo Report:
Informed consent assures that research subjects who are put at risk through their involvement in research understand the proposed research, the purpose for which they are being asked to participate in research, the anticipated benefits of the research, and the risks of the subject's participation in that research. They are then free to choose to accept or decline participation. These risks may involve identifiability in research data but can extend to other potential harms.
Implementation of the Principles of the Menlo Report:
Beneficence Assessing potential research harm involves considering risks related to information and information systems as a whole. Information-centric harms stem from contravening data confidentiality, availability, and integrity requirements. This also includes infringing rights and interests related to privacy and reputation, and psychological, financial, and physical well-being. Some personal information is more sensitive than others. Very sensitive information includes government-issued identifiers such as Social Security, driver's license, health care, and financial account numbers, and biometric records. A combination of personal information is typically more sensitive than a single piece of personal information.
Implementation of the Principles of the Menlo Report:
Basic research typically has long-term benefits to society through the advancement of scientific knowledge. Applied research generally has immediately visible benefits. Operational improvements include improved search algorithms, new queuing techniques, and new user interface capabilities.
Implementation of the Principles of the Menlo Report:
The principle of balancing risks and benefits involves weighing the burdens of research and risks of harm to stakeholders (direct or indirect), against the benefits that will accrue to the larger society as a result of the research activity. The application of this principle is perhaps the most complicated because of the characteristics of ICTR. This compels us to revisit the existing guidance on research design and ethical evaluation.
Implementation of the Principles of the Menlo Report:
Circumstances may arise where significant harm occurs despite attempts to prevent or minimize risks, and additional harm-mitigating steps are required. ICT researchers should have (a) a response plan for reasonably foreseeable harms, and (b) a general contingency plan for low probability and high impact risks.
Implementation of the Principles of the Menlo Report:
Justice The report believes that research should be designed and conducted equitably between and across stakeholders, distributing research benefits and burdens. Research directed at ICT itself may be predicated on exploiting an attribute (e.g., economically disadvantaged) of persons which is not related to the research purpose. Hence, it can facilitate arbitrary targeting by proxy. On the other hand, the opacity and attribution challenges associated with ICT can inherently facilitate unbiased selection in all research as it is often impracticable to even discern those attributes.
Implementation of the Principles of the Menlo Report:
Respect for Law and Public Interest Applying respect for law and public interest through compliance assures that researchers engage in legal due diligence. Although ethics may be implicitly embedded in many established laws, they can extend beyond those strictures and address obligations that relate to reputation and individual well-being, for example.
Implementation of the Principles of the Menlo Report:
Transparency is an application of respect for law and public interest that can encourage assessing and implementing accountability. Accountability ensures that researchers behave responsibly, and ultimately it galvanizes trust in ICTR. Transparency-based accountability helps researchers, oversight entities, and other stakeholders avoid guesswork and incorrect inferences regarding if, when, and how ethical principles are being addressed. Transparency can expose ethical tensions, such as the researcher's interest in promoting openness and reproducibility versus withholding research findings in the interests of protecting a vulnerable population.
Companion Report:
The Companion Report is a complement to the Menlo Report that details the principles and applications in more detail and illustrates their implementation in real and synthetic case studies. It is intended for the benefit of society, by showing the potential for harm to humans (direct or indirect) and by helping researchers understand and preempt or minimize these risks in the lifecycle of their research. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Xóc đĩa**
Xóc đĩa:
Xóc đĩa (Chữ Nôm: 觸碟) is a gambling game that originated in Vietnam and is widespread there. The game probably originated around 1909. It is considered illegal by the governmental authorities because it is thought to be linked with criminal activities, and gambling is defined as an illegal act in the Vietnamese Criminal Code.
Playing:
It is played with 4 coin-shaped tokens in 4 different colors. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Truffle butter**
Truffle butter:
Truffle butter is a compound butter made by combining butter with other ingredients, including truffles or synthetic truffle flavorings such as 2,4-dithiapentane. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**GRIN2C**
GRIN2C:
Glutamate [NMDA] receptor subunit epsilon-3 is a protein that in humans is encoded by the GRIN2C gene.
Function:
N-methyl-D-aspartate (NMDA) receptors are a class of ionotropic glutamate receptors. The NMDA channel has been shown to be involved in long-term potentiation, an activity-dependent increase in the efficiency of synaptic transmission thought to underlie certain kinds of memory and learning. NMDA receptor channels are heteromers composed of the key receptor subunit NMDAR1 (GRIN1) and 1 or more of the 4 NMDAR2 subunits: NMDAR2A (GRIN2A), NMDAR2B (GRIN2B), NMDAR2C (GRIN2C), and NMDAR2D (GRIN2D).
Interactions:
GRIN2C has been shown to interact with DLG4 and DLG3. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Excited delirium**
Excited delirium:
Excited delirium (ExDS), also known as agitated delirium (AgDS) or hyperactive delirium syndrome with severe agitation, is a controversial diagnosis sometimes characterized as a potentially fatal state of extreme agitation and delirium. It is typically diagnosed postmortem in young adult males, disproportionally black men, who were physically restrained at the time of death, most often by law enforcement personnel.
Excited delirium:
Symptoms are said to include aggressive behavior, extreme physical strength and hyperthermia. It is not listed in the Diagnostic and Statistical Manual of Mental Disorders or the International Classification of Diseases, and is not recognized by the World Health Organization, the American Psychiatric Association, the American Medical Association, the American Academy of Emergency Medicine, or the National Association of Medical Examiners. It is accepted primarily by the American College of Emergency Physicians.
Excited delirium:
Excited delirium is particularly associated with taser use. A 2017 investigative report by Reuters found that excited delirium had been listed as a factor in autopsy reports, court records or other sources in at least 276 deaths that followed taser use since 2000. Tasers are manufactured by the firm Axon, which has been involved in police training in their use, in the publication of numerous medical studies which promote its product, and in other promotional activities. There have also been concerns raised over the use of sedative drugs during an arrest following claims of excited delirium. The drugs ketamine, or midazolam (a benzodiazepine) together with haloperidol (an antipsychotic), injected into a muscle, have sometimes been used to sedate a person at the discretion of paramedics and sometimes at direct police request. Ketamine can cause respiratory arrest, and in many cases there is no evidence of a medical condition that would justify its use. The term excited delirium is sometimes used interchangeably with acute behavioural disturbance,: 1 a symptom of a number of conditions which is also responded to with involuntary injection of benzodiazepines, antipsychotics, or ketamine.: 624 : 152 A 2020 investigation by the United Kingdom's forensic science regulator found that the diagnosis should not have been used since it "has been applied in some cases where other important pathological mechanisms, such as positional asphyxia and trauma may have been more appropriate". In the U.S., a diverse group of neurologists writing for the Brookings Institution called it "a misappropriation of medical terminology, used by law enforcement to legitimize police brutality and to retroactively explain certain deaths occurring in police custody". The American Psychiatric Association's position is that the term "is too non-specific to meaningfully describe and convey information about a person."
History:
Throughout the 19th and early-20th century, "excited delirium" was used to describe an emotional and agitated state related to drug overdose and withdrawal or poisonings, similar to catatonia or Bell's mania, with some believing them to be the same condition. The term "excited delirium" (ExDS) began to be used as a diagnosis to explain deaths in police custody, especially during or after restraint use, in Miami, Florida in the 1980s, as in a 1985 Journal of Forensic Sciences article entitled Cocaine-induced psychosis and sudden death in recreational cocaine users, co-authored by Charles Victor Wetli (1943–2020), deputy chief medical examiner for Dade County, Florida. The JFS article reported that in "five of the seven" cases they studied, deaths occurred while in police custody. Wetli determined that nineteen women, all Black prostitutes, had died of the condition due to "sexual excitement" while under the influence of cocaine. In 1992, police announced they had found a serial killer responsible for deaths determined by Wetli to be excited delirium. The legitimacy of the condition has since been controversial, with most of the medical community not recognizing it, and there is no entry for it in the Diagnostic and Statistical Manual of Mental Disorders.
History:
The supposed risk factors vary, including "bizarre behavior generating phone calls to police", "failure to respond to police presence", and "continued struggle despite restraint". It supposedly endows individuals with "superhuman strength" and renders them "impervious to pain". It is disproportionately diagnosed among young Black males, and has clear undertones of racial bias. In 1849, a superficially similar condition was described by Luther Bell as "Bell's mania". Bell was one of the thirteen mental hospital superintendents who met in Philadelphia in 1844 to organize the Association of Medical Superintendents of American Institutions for the Insane (AMSAII), now the American Psychiatric Association.
Incidence:
People diagnosed with excited delirium are frequently claimed to have acute drug intoxication, generally involving phencyclidine, prolintanone, methylenedioxypyrovalerone, cocaine, or methamphetamine. Multiple other factors may be in evidence. These may include positional asphyxia, hyperthermia, drug toxicity, and/or catecholamine-induced fatal abnormal heart rhythms. Other conditions which can resemble excited delirium are mania, neuroleptic malignant syndrome, serotonin syndrome, thyroid storm, and catatonia of the malignant or excited type.
Deaths:
A 2017 report by Reuters found that excited delirium had been listed as a factor in autopsy reports, court records or other sources in at least 276 deaths that followed taser use since 2000, with diagnosis often based on a test conducted by Deborah Mash, a paid consultant to Axon, manufacturers of the Taser. In one case within four hours of a man dying after being tasered, Axon had provided model press releases, instructions for gathering evidence of excited delirium, and advised that samples be sent to Mash. Amnesty International found that the syndrome was cited in 75 of the 330 deaths following police use of a taser on suspects between 2001 and 2008, and a Florida-based study found it was listed as a cause of death in over half of all deaths in police custody, though many Florida districts do not use it at all. While diagnosis is habitually of men under police restraint, medical preconditions and symptoms attributed to the syndrome are far more varied.
Lack of acceptance by most medical associations:
Excited delirium is not recognized by the World Health Organization, the American Psychiatric Association, the American Medical Association, and not listed as a medical condition in the Diagnostic and Statistical Manual of Mental Disorders or International Classification of Diseases. Dr. Michael Baden, a specialist in investigating deaths in custody, describes excited delirium as "a boutique kind of diagnosis created, unfortunately, by many of my forensic pathology colleagues specifically for persons dying when being restrained by law enforcement". In June 2021, the Royal College of Psychiatrists in the UK released a statement that they do "not support the use of such terminology [as ExDS or AgDS], which has no empirical evidential basis" and said "the use of these terms is, in effect, racial discrimination". A 2020 scientific literature review looked at reported cases of excited delirium and agitated delirium. The authors noted that most published current information has indicated that excited delirium-related deaths are due to an occult pathophysiologic process. A database of cases was created which included the use of force, drug intoxication, mental illness, demographics, and survival outcome. A review of cases revealed there was no evidence to support ExDS as a cause of death in the absence of restraint. The authors found that when death occurred in an aggressively restrained individual that fits the profile of either ExDS or AgDS, restraint-related asphyxia must be considered the more likely cause of the death.
Position of the American College of Emergency Physicians:
Following a 2009 review by an internal task force, the American College of Emergency Physicians (ACEP) accepted excited delirium as a "real and unique syndrome." At that time their list of symptoms describing the condition stated: [The patient is] usually agitated, often speaking or yelling uncontrollably and pacing or running with no purpose. They often threaten others verbally or physically; they sweat profusely, appear ill, and are unable to control themselves. Often, their condition is associated with mental disorders or the use of drugs such as cocaine. Their actions make for riveting television, and alarm law enforcement officials and EMTs.
Position of the American College of Emergency Physicians:
Commenting on ACEP's position, in a 2020 position paper the American Psychiatric Association stated: The concept of "excited delirium" (also referred to as "excited delirium syndrome" (ExDs)) has been invoked in a number of cases to explain or justify injury or death to individuals in police custody, and the term excited delirium is disproportionately applied to Black men in police custody. Although the American College of Emergency Physicians has explicitly recognized excited delirium as a medical condition, the criteria are unclear and to date there have been no rigorous studies validating excited delirium as a medical diagnosis.
Position of the American College of Emergency Physicians:
Three of the members of ACEP's task force were linked to Axon, the corporation that manufactures Taser stun guns. Axon frequently blames excited delirium for stun-gun-related deaths.
Position of the American College of Emergency Physicians:
ACEP then created a new task force to investigate this syndrome and their report led to a new ACEP position statement in April 2023 which recognized the syndrome, but discouraged the term "excited delirium": The American College of Emergency Physicians (ACEP) recognizes the existence of hyperactive delirium syndrome with severe agitation, a potentially life threatening clinical condition characterized by a combination of vital sign abnormalities (e.g., elevated temperature and blood pressure), pronounced agitation, altered mental status, and metabolic derangements.... ACEP does not recognize the use of the term “excited delirium” and its use in clinical settings.
Police involvement:
Males account for more documented diagnoses than females. Often law enforcement has used tasers or physical measures in these cases, and death most frequently occurs after the person is forcefully restrained. Critics of excited delirium have stated that the condition is primarily attributed to deaths while in the custody of law enforcement and is disproportionately applied to Black and Hispanic victims. One study looking at cocaine-related deaths in the 1970s and 1980s in Florida, showed that the deaths were more likely to be diagnosed as excited delirium when involving young Black men dying in police custody and "accidental cocaine toxicity" when involving white people. A 1998 study found that "In all 21 cases of unexpected death associated with excited delirium, the deaths were associated with restraint (for violent agitation and hyperactivity), with the person either in a prone position (18 people [86%]) or subjected to pressure on the neck (3 [14%]). All of those who died had suddenly lapsed into tranquillity shortly after being restrained". In 2003, the NAACP argued that excited delirium is used to explain the deaths of minorities more often than whites, and the American Psychiatric Association also notes that "the term excited delirium is disproportionately applied to Black men in police custody". The American Civil Liberties Union argued in 2007 that the diagnosis served "as a means of white-washing what may be excessive use of force and inappropriate use of control techniques by officers during an arrest." The UK Independent Advisory Panel on Deaths in Custody (IAP) suggests that the syndrome should be termed "Sudden death in restraint syndrome" in order to enhance clarity. Some civil-rights groups have argued that excited delirium diagnoses are being used to absolve law enforcement of guilt in cases where alleged excessive force may have contributed to patient deaths. Prominent cases include Daniel Prude, who was said to be in a state of excited delirium in 2020 when police put a hood over his head and pressed his naked body against the pavement. Prude, a Black man, lost consciousness and died. Excited delirium was also cited by the defense in State v. Chauvin, a murder trial related to the murder of George Floyd in 2020. Prosecutor Steve Schleicher refuted the defense suggestion that Floyd had "superhuman strength" during his arrest because he was suffering from the condition.
Police involvement:
Ketamine Ketamine or midazolam and haloperidol injected into a muscle have frequently been used, sometimes at direct police request, to sedate the person. Ketamine can cause respiratory arrest, and in many cases there is no evidence of a medical condition that would justify its use. Following an injection the person must be transported to a hospital. In 2018 a Minneapolis hospital published a paper which reported that 57 percent of the people who had been injected for agitation needed intubation. Concern has been raised about the increasing usage of a claim of excited delirium to justify tranquilizing persons during arrest, with requests for tranquilization often being made by law enforcement rather than medical professionals. Ketamine is the most commonly used drug in these cases. There have been deaths related to use of ketamine on restrained prisoners. A controversial study into ketamine use was terminated due to ethics concerns. The study was also linked to Axon via Jeffrey Ho. In 2019 Elijah McClain, a Black man, was arrested by police officers after receiving a 911 call which reported a man walking, waving his arms and wearing a ski mask. The officers said that he was exhibiting "crazy strength" when they attempted to arrest him, but all three said that their body cams had fallen off and thus there was no video of what they claimed to be a violent struggle. McClain weighed 140 pounds and was 5 feet 6 inches tall. He was handcuffed and then a choke hold was used twice, once "successfully", meaning that McClain lost consciousness. When paramedics arrived they administered enough ketamine to sedate a 220-pound man. He went into cardiac arrest a few minutes later. In a report of the case on 60 Minutes, John Dickerson interviewed the District Attorney, who justified the use of ketamine, adding that since excited delirium could not be ruled out as a cause of death it would be impossible to win a homicide case because "you can't file a homicide charge without cause of death."

Taser use According to an article in the Harvard Civil Rights–Civil Liberties Law Review, since 2000, over one thousand people in the United States have died shortly after being tased, with the deaths sharing several commonalities: "the deceased often were mentally ill or under the influence of drugs at the time of death, they tend to have been shocked multiple times by officers during arrest, and they often share an exceptionally rare cause of death, 'excited delirium.'" Axon Enterprise, formerly Taser International, provides training for police on recognizing excited delirium, and several prominent proponents of the diagnosis are retained by Axon, with diagnosis often based on a test conducted by Deborah Mash, a paid consultant to Axon. In one case reported in a Reuters investigation, within four hours of a man dying after being tasered, Axon had provided model press releases, instructions for gathering evidence of excited delirium, and advised that samples be sent to Mash for lab work to establish a diagnosis. Axon has paid thousands of dollars to proponents of the excited delirium diagnosis, including Charles Wetli, who first proposed the term, and these proponents have repeatedly used "excited delirium" as a defense in liability suits and to shield police officers from criminal liability for deaths in custody. The Harvard Civil Rights–Civil Liberties Law Review reports that "Axon has actively pursued litigation against some medical examiners who attribute deaths to tasers rather than excited delirium.
These lawsuits seem to have a chilling effect on medical examiners' work; a 2011 survey found that 14% of medical examiners had modified a diagnostic finding out of fear of litigation by the company." In Canada, the 2007 case of Robert Dziekanski received national attention and placed a spotlight on the use of tasers in police actions and the diagnosis of excited delirium. Police psychologist Mike Webster testified at a British Columbia inquiry into taser deaths that police have been "brainwashed" by Taser International to justify "ridiculously inappropriate" use of the electric weapon. He called excited delirium a "dubious disorder" used by Taser International in its training of police. In a 2008 report, the Royal Canadian Mounted Police argued that excited delirium should not be included in the operational manual for the Royal Canadian Mounted Police without formal approval after consultation with a mental-health-policy advisory body. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Uterosacral ligament**
Uterosacral ligament:
The uterosacral ligaments (or rectouterine ligaments) are major ligaments of the uterus that extend posterior-ward from the cervix to attach onto the (anterior aspect of the) sacrum.
Anatomy:
Microanatomy/histology The uterosacral ligaments consist of fibrous connective tissue, and smooth muscle tissue.
Relations The uterosacral ligaments pass inferior to the peritoneum. They embrace the rectouterine pouch, and rectum. The pelvic splanchnic nerves run on top of the ligament.
Function:
The uterosacral ligaments pull the cervix posterior-ward, counteracting the anterior-ward pull exerted by the round ligament of uterus upon the fundus of the uterus, thus maintaining anteversion of the body of the uterus.
Clinical significance:
The uterosacral ligaments may be palpated during a rectal examination, but not during pelvic examination. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Periodontal ligament stem cells**
Periodontal ligament stem cells:
Periodontal ligament stem cells are stem cells found near the periodontal ligament of the teeth. They are involved in adult regeneration of the periodontal ligament, alveolar bone, and cementum. The cells are known to express STRO-1 and CD146 proteins. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**MESP2**
MESP2:
Mesoderm posterior protein 2 (MESP2), also known as class C basic helix-loop-helix protein 6 (bHLHc6), is a protein that in humans is encoded by the MESP2 gene.
Function:
This gene encodes a member of the bHLH family of transcription factors and plays a key role in defining the rostrocaudal patterning of somites via interactions with multiple Notch signaling pathways. This gene is expressed in the anterior presomitic mesoderm and is downregulated immediately after the formation of segmented somites. This gene also plays a role in the formation of epithelial somitic mesoderm and cardiac mesoderm. In zebrafish, the homolog mesp-b is critical for dermomyotome development.
Clinical significance:
Mutations in the MESP2 gene cause autosomal recessive Spondylocostal dysostosis type 2. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Colpitts oscillator**
Colpitts oscillator:
A Colpitts oscillator, invented in 1918 by Canadian-American engineer Edwin H. Colpitts, is one of a number of designs for LC oscillators, electronic oscillators that use a combination of inductors (L) and capacitors (C) to produce an oscillation at a certain frequency. The distinguishing feature of the Colpitts oscillator is that the feedback for the active device is taken from a voltage divider made of two capacitors in series across the inductor.
Overview:
The Colpitts circuit, like other LC oscillators, consists of a gain device (such as a bipolar junction transistor, field-effect transistor, operational amplifier, or vacuum tube) with its output connected to its input in a feedback loop containing a parallel LC circuit (tuned circuit), which functions as a bandpass filter to set the frequency of oscillation. The amplifier will have differing input and output impedances, and these need to be coupled into the LC circuit without overly damping it.
Overview:
A Colpitts oscillator uses a pair of capacitors to provide voltage division to couple the energy in and out of the tuned circuit. (It can be considered as the electrical dual of a Hartley oscillator, where the feedback signal is taken from an "inductive" voltage divider consisting of two coils in series (or a tapped coil).) Fig. 1 shows the common-base Colpitts circuit. The inductor L and the series combination of C1 and C2 form the resonant tank circuit, which determines the frequency of the oscillator. The voltage across C2 is applied to the base-emitter junction of the transistor, as feedback to create oscillations. Fig. 2 shows the common-collector version. Here the voltage across C1 provides feedback. The frequency of oscillation is approximately the resonant frequency of the LC circuit, which is the series combination of the two capacitors in parallel with the inductor: $f_0 = \frac{1}{2\pi\sqrt{L\,\frac{C_1 C_2}{C_1 + C_2}}}$.
Overview:
The actual frequency of oscillation will be slightly lower due to junction capacitances and resistive loading of the transistor.
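As a quick numerical illustration of the resonance formula above, the sketch below computes the approximate oscillation frequency from L, C1 and C2. The component values are hypothetical assumptions chosen for illustration only; they are not the Fig. 3 values, which are not reproduced in this text.

```java
public class ColpittsFrequency {
    public static void main(String[] args) {
        // Hypothetical component values (assumptions, not the Fig. 3 parts list).
        double L  = 10e-6;   // inductance in henries (10 uH)
        double c1 = 100e-12; // capacitance C1 in farads (100 pF)
        double c2 = 100e-12; // capacitance C2 in farads (100 pF)

        // The two capacitors appear in series as seen by the inductor.
        double cSeries = (c1 * c2) / (c1 + c2);

        // f0 = 1 / (2*pi*sqrt(L * C1*C2 / (C1 + C2)))
        double f0 = 1.0 / (2.0 * Math.PI * Math.sqrt(L * cSeries));

        System.out.printf("Approximate oscillation frequency: %.2f MHz%n", f0 / 1e6);
    }
}
```

With these assumed values the tank resonates at roughly 7 MHz; as noted above, the frequency of a built circuit would come out slightly lower.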
Overview:
As with any oscillator, the amplification of the active component should be marginally larger than the attenuation of the resonator losses and its voltage division, to obtain stable operation. Thus, a Colpitts oscillator used as a variable-frequency oscillator (VFO) performs best when a variable inductance is used for tuning, as opposed to tuning just one of the two capacitors. If tuning by variable capacitor is needed, it should be done with a third capacitor connected in parallel to the inductor (or in series as in the Clapp oscillator).
Overview:
Practical example Fig. 3 shows a working example with component values. Instead of bipolar junction transistors, other active components such as field-effect transistors or vacuum tubes, capable of producing gain at the desired frequency, could be used.
The capacitor at the base provides an AC path to ground for parasitic inductances that could lead to unwanted resonance at undesired frequencies. Selection of the base's biasing resistors is not trivial. Periodic oscillation starts for a critical bias current, and with the variation of the bias current to a higher value, chaotic oscillations are observed.
Theory:
One method of oscillator analysis is to determine the input impedance of an input port neglecting any reactive components. If the impedance yields a negative resistance term, oscillation is possible. This method will be used here to determine conditions of oscillation and the frequency of oscillation.
An ideal model is shown to the right. This configuration models the common collector circuit in the section above. For initial analysis, parasitic elements and device non-linearities will be ignored. These terms can be included later in a more rigorous analysis. Even with these approximations, acceptable comparison with experimental results is possible.
Theory:
Ignoring the inductor, the input impedance at the base can be written as $Z_{in} = \frac{v_1}{i_1}$, where $v_1$ is the input voltage and $i_1$ is the input current. The voltage $v_2$ is given by $v_2 = i_2 Z_2$, where $Z_2$ is the impedance of $C_2$. The current flowing into $C_2$ is $i_2$, which is the sum of two currents: $i_2 = i_1 + i_s$, where $i_s$ is the current supplied by the transistor. $i_s$ is a dependent current source given by $i_s = g_m (v_1 - v_2)$, where $g_m$ is the transconductance of the transistor. The input current $i_1$ is given by $i_1 = \frac{v_1 - v_2}{Z_1}$, where $Z_1$ is the impedance of $C_1$. Solving for $v_2$ and substituting above yields $Z_{in} = Z_1 + Z_2 + g_m Z_1 Z_2$.
Theory:
The input impedance appears as the two capacitors in series with the term $R_{in}$, which is proportional to the product of the two impedances: $R_{in} = g_m Z_1 Z_2$.
If $Z_1$ and $Z_2$ are complex and of the same sign, then $R_{in}$ will be a negative resistance. If the impedances for $Z_1$ and $Z_2$ are substituted, $R_{in}$ is $R_{in} = \frac{-g_m}{\omega^2 C_1 C_2}$.
If an inductor is connected to the input, then the circuit will oscillate if the magnitude of the negative resistance is greater than the resistance of the inductor and any stray elements. The frequency of oscillation is as given in the previous section.
For the example oscillator above, the emitter current is roughly 1 mA. The transconductance is roughly 40 mS. Given all other values, the input resistance is roughly $R_{in} \approx -30\ \Omega$.
This value should be sufficient to overcome any positive resistance in the circuit. By inspection, oscillation is more likely for larger values of transconductance and smaller values of capacitance. A more complicated analysis of the common-base oscillator reveals that a low-frequency amplifier voltage gain must be at least 4 to achieve oscillation. The low-frequency gain is given by $A_v = g_m R_p \ge 4$, where $R_p$ is the effective (parallel) load resistance.
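To make the start-up condition concrete, here is a minimal sketch that evaluates the negative resistance $R_{in} = -g_m/(\omega^2 C_1 C_2)$ at the tank's resonant frequency and compares its magnitude with an assumed series loss resistance. Only the roughly 40 mS transconductance comes from the example above; every other value is an assumption for illustration, so the resulting numbers will differ from the −30 Ω figure quoted for the Fig. 3 circuit.

```java
public class ColpittsStartup {
    public static void main(String[] args) {
        double gm    = 40e-3;   // transconductance from the example above (~40 mS)
        double c1    = 100e-12; // assumed C1
        double c2    = 100e-12; // assumed C2
        double L     = 10e-6;   // assumed inductance
        double rLoss = 5.0;     // assumed series loss resistance of the tank, in ohms

        // Angular resonant frequency of the tank (same resonance formula as before).
        double cSeries = (c1 * c2) / (c1 + c2);
        double omega = 1.0 / Math.sqrt(L * cSeries);

        // Negative resistance presented at the input: R_in = -gm / (omega^2 * C1 * C2)
        double rIn = -gm / (omega * omega * c1 * c2);

        System.out.printf("R_in = %.0f ohms%n", rIn);
        // Oscillation is expected when |R_in| exceeds the positive loss resistance.
        System.out.println("Start-up condition met: " + (Math.abs(rIn) > rLoss));
    }
}
```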
If the two capacitors are replaced by inductors, and magnetic coupling is ignored, the circuit becomes a Hartley oscillator. In that case, the input impedance is the sum of the two inductors and a negative resistance given by $R_{in} = -g_m \omega^2 L_1 L_2$.
In the Hartley circuit, oscillation is more likely for larger values of transconductance and larger values of inductance.
Theory:
The above analysis also describes the behavior of the Pierce oscillator. The Pierce oscillator, with two capacitors and one inductor, is equivalent to the Colpitts oscillator. Equivalence can be shown by choosing the junction of the two capacitors as the ground point. An electrical dual of the standard Pierce oscillator using two inductors and one capacitor is equivalent to the Hartley oscillator.
Theory:
Oscillation amplitude The amplitude of oscillation is generally difficult to predict, but it can often be accurately estimated using the describing function method.
For the common-base oscillator in Figure 1, this approach applied to a simplified model predicts an output (collector) voltage amplitude given by $V_C = 2 I_C R_L \frac{C_2}{C_1 + C_2}$, where $I_C$ is the bias current, and $R_L$ is the load resistance at the collector.
This assumes that the transistor does not saturate, the collector current flows in narrow pulses, and that the output voltage is sinusoidal (low distortion).
This approximate result also applies to oscillators employing a different active device, such as MOSFETs and vacuum tubes. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Deep-sky object**
Deep-sky object:
A deep-sky object (DSO) is any astronomical object that is not an individual star or a Solar System object (such as the Sun, the Moon, a planet, or a comet). The classification is used for the most part by amateur astronomers to denote visually observed faint naked eye and telescopic objects such as star clusters, nebulae and galaxies. This distinction is practical and technical, implying a variety of instruments and techniques appropriate to observation, and does not distinguish the nature of the object itself.
Origins and classification:
Classifying non-stellar astronomical objects began soon after the invention of the telescope. One of the earliest comprehensive lists was Charles Messier's 1774 Messier catalog, which included 103 "nebulae" and other faint fuzzy objects he considered a nuisance since they could be mistaken for comets, the objects he was actually searching for. As telescopes improved these faint nebulae would be broken into more descriptive scientific classifications such as interstellar clouds, star clusters, and galaxies.
Origins and classification:
"Deep-sky object", as an astronomical classification for these objects, has its origins in the modern field of amateur astronomy. The origin of the term is unknown but it was popularized by Sky & Telescope magazine's "Deep-Sky Wonders" column, which premiered in their first edition in 1941, created by Leland S. Copeland, written for the majority of its run by Walter Scott Houston, and currently penned by Sue French. Houston's columns, and later book compilations of those columns, helped popularize the term, each month giving the reader a guided tour of a small part of the sky highlighting well known and lesser known objects for binoculars and small telescopes.
Observations and activities:
There are many amateur astronomical techniques and activities associated with deep-sky objects. Some of these objects are bright enough to find and see in binoculars and small telescopes. But the faintest objects need the light-gathering power of telescopes with large objectives, and since they are invisible to the naked eye, can be hard to find. This has led to increased popularity of GoTo telescopes that can find DSOs automatically, and large reflecting telescopes, such as Dobsonian style telescopes, with wide fields of view well suited to such observing. Observing faint objects needs dark skies, so these relatively portable types of telescopes also lend themselves to the majority of amateurs who need to travel outside light polluted urban locations. To cut down light pollution and enhance contrast, observers employ "nebular filters" designed to admit certain wavelengths of light, and block others.
Observations and activities:
There are organized activities associated with DSOs such as the Messier marathon, which occurs at a specific time each year and involves observers trying to spot all 110 Messier objects in one night. Since the Messier catalog objects were discovered with relatively small 18th-century telescopes, it is a popular list with observers, being well within the grasp of most modern amateur telescopes. A much more demanding test known as the Herschel 400 is designed to tax larger telescopes and experienced amateur astronomers.
List of deep-sky object types:
There are many astronomical object types that come under the description of deep-sky objects. Since the definition is objects that are non-Solar System and non-stellar, the list includes: star clusters (open clusters and globular clusters); nebulae (bright nebulae, including emission nebulae and reflection nebulae, as well as dark nebulae and planetary nebulae); and galaxies. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Jakarta Persistence Query Language**
Jakarta Persistence Query Language:
The Jakarta Persistence Query Language (JPQL; formerly Java Persistence Query Language) is a platform-independent object-oriented query language: 284, §12 defined as part of the Jakarta Persistence (JPA; formerly Java Persistence API) specification.
JPQL is used to make queries against entities stored in a relational database. It is heavily inspired by SQL, and its queries resemble SQL queries in syntax,: 17, §1.3 but operate against JPA entity objects rather than directly with database tables.: 26, §2.2.3 In addition to retrieving objects (SELECT queries), JPQL supports set based UPDATE and DELETE queries.
Examples:
Example JPA classes, with getters and setters omitted for simplicity, are sketched below.
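The original example classes are not reproduced in this text. The following is a minimal, hypothetical sketch of what such entities might look like, chosen to be consistent with the queries described below (an Author with a last name and a list of books, each Book carrying a publisher name); the class and field names are assumptions, not the article's own listing.

```java
// Author.java (hypothetical example entity)
import jakarta.persistence.*;
import java.util.List;

@Entity
public class Author {
    @Id @GeneratedValue
    private Long id;

    private String firstName;
    private String lastName;

    @OneToMany(mappedBy = "author")
    private List<Book> books;

    // getters and setters omitted for simplicity
}
```

```java
// Book.java (hypothetical example entity)
import jakarta.persistence.*;

@Entity
public class Book {
    @Id @GeneratedValue
    private Long id;

    private String title;
    private String publisherName;

    @ManyToOne
    private Author author;

    // getters and setters omitted for simplicity
}
```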
Then a simple query to retrieve the list of all authors, ordered alphabetically, would follow, as would a query to retrieve the list of authors that have ever been published by XYZ Press. JPQL supports named parameters, which begin with the colon (:), so we could also write a function returning a list of authors with a given last name. Sketches of all three queries are given below.
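The query listings themselves are likewise missing from this text; the sketches below are reconstructions matching those descriptions, written against the hypothetical Author and Book entities above through a standard EntityManager. The method names and the field used to match "XYZ Press" are assumptions.

```java
import jakarta.persistence.EntityManager;
import jakarta.persistence.TypedQuery;
import java.util.List;

public class AuthorQueries {

    // All authors, ordered alphabetically by last name.
    public static List<Author> findAllAuthors(EntityManager em) {
        return em.createQuery(
                "SELECT a FROM Author a ORDER BY a.lastName, a.firstName",
                Author.class).getResultList();
    }

    // Authors that have ever been published by "XYZ Press"
    // (assuming the publisher is stored as a simple string field on Book).
    public static List<Author> findAuthorsPublishedByXyzPress(EntityManager em) {
        return em.createQuery(
                "SELECT DISTINCT a FROM Author a JOIN a.books b "
                        + "WHERE b.publisherName = 'XYZ Press'",
                Author.class).getResultList();
    }

    // Named parameters begin with a colon, e.g. :lastName.
    public static List<Author> findAuthorsByLastName(EntityManager em, String lastName) {
        TypedQuery<Author> query = em.createQuery(
                "SELECT a FROM Author a WHERE a.lastName = :lastName",
                Author.class);
        query.setParameter("lastName", lastName);
        return query.getResultList();
    }
}
```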
Hibernate Query Language:
JPQL is based on the Hibernate Query Language (HQL), an earlier non-standard query language included in the Hibernate object-relational mapping library. Hibernate and the HQL were created before the JPA specification.
As of Hibernate 3, JPQL is a subset of HQL. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Waveform monitor**
Waveform monitor:
A waveform monitor is a special type of oscilloscope used in television production applications. It is typically used to measure and display the level, or voltage, of a video signal with respect to time.
Waveform monitor:
The level of a video signal usually corresponds to the brightness, or luminance, of the part of the image being drawn onto a regular video screen at the same point in time. A waveform monitor can be used to display the overall brightness of a television picture, or it can zoom in to show one or two individual lines of the video signal. It can also be used to visualize and observe special signals in the vertical blanking interval of a video signal, as well as the colorburst between each line of video.
Waveform monitor:
Waveform monitors are used for the following purposes: To assist with the calibration of professional video cameras, and to "line up" multiple-camera setups being used at the same location in order to ensure that the same scene shot under the same conditions will produce the same results.
As a tool to assist in telecine (film-to-tape transfer), color correction, and other video production activities To monitor video signals to make sure that neither the color gamut, nor the analog transmission limits, are violated.
To diagnose and troubleshoot a television studio, or the equipment located therein.
To assist with installation of equipment into a television facility, or with the commissioning or certification of a facility.
In manufacturing test and research and development applications.
For setting camera exposure in the case of video and digital cinema cameras. A waveform monitor is often used in conjunction with a vectorscope. Originally, these were separate devices; however modern waveform monitors include vectorscope functionality as a separate mode. (The combined device is simply called a "waveform monitor").
Waveform monitor:
Originally, waveform monitors were entirely analog devices; the incoming (analog) video signal was filtered and amplified, and the resulting voltage was used to drive the vertical axis of a cathode ray tube. A sync stripper circuit was used to isolate the sync pulses and colorburst from the video signal; the recovered sync information was fed to a sweep circuit which drove the horizontal axis. Early waveform monitors differed little from oscilloscopes, except for the specialized video trigger circuitry. Waveform monitors also permit the use of external reference; in this mode the sync and burst signals are taken from a separate input (thus allowing all devices in a facility to be genlocked, or synchronized to the same timing source).
Waveform monitor:
With the advent of digital television and digital signal processing, the waveform monitor acquired many new features and capabilities. Modern waveform monitors contain many additional modes of operation, including picture mode (where the video picture is simply presented on the screen, much like a television), various modes optimized for color gamut checking, support for the audio portion of a television program (either embedded with the video, or on separate inputs), eye pattern and jitter displays for measuring the physical layer parameters of serial-digital television formats, modes for examining the serial digital protocol layer, and support for ancillary data and television-related metadata such as timecode, closed captions and the v-chip rating systems.
Waveform monitor:
Modern waveform monitors and other oscilloscopes have largely abandoned old-style CRT technology as well. All new waveform monitors are based on a rasterizer, a piece of graphics hardware that duplicates the behavior of a CRT vector display, generating a raster signal. They may come with a flat-panel liquid crystal display, or they may be sold without a display, in which case the user can connect any VGA display. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Binary tetrahedral group**
Binary tetrahedral group:
In mathematics, the binary tetrahedral group, denoted 2T or ⟨2,3,3⟩, is a certain nonabelian group of order 24. It is an extension of the tetrahedral group T or (2,3,3) of order 12 by a cyclic group of order 2, and is the preimage of the tetrahedral group under the 2:1 covering homomorphism Spin(3) → SO(3) of the special orthogonal group by the spin group. It follows that the binary tetrahedral group is a discrete subgroup of Spin(3) of order 24. The complex reflection group named 3(24)3 by G. C. Shephard, or 3[3]3 by Coxeter, is isomorphic to the binary tetrahedral group.
Binary tetrahedral group:
The binary tetrahedral group is most easily described concretely as a discrete subgroup of the unit quaternions, under the isomorphism Spin(3) ≅ Sp(1), where Sp(1) is the multiplicative group of unit quaternions. (For a description of this homomorphism see the article on quaternions and spatial rotations.)
Elements:
Explicitly, the binary tetrahedral group is given as the group of units in the ring of Hurwitz integers. There are 24 such units, given by {±1, ±i, ±j, ±k, ½(±1 ± i ± j ± k)} with all possible sign combinations.
All 24 units have absolute value 1 and therefore lie in the unit quaternion group Sp(1). The convex hull of these 24 elements in 4-dimensional space forms a convex regular 4-polytope called the 24-cell.
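As a quick illustration (a minimal sketch, assuming plain Python and quaternions represented as (w, x, y, z) tuples), one can enumerate these 24 units and confirm that they are closed under quaternion multiplication:

```python
# A minimal sketch (plain Python; quaternions as (w, x, y, z) tuples):
# enumerate the 24 unit Hurwitz quaternions and confirm they are closed
# under multiplication, i.e. they form a group of order 24.
from itertools import product

def qmul(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

units = set()
for sign, axis in product((1, -1), range(4)):      # the 8 units ±1, ±i, ±j, ±k
    q = [0, 0, 0, 0]
    q[axis] = sign
    units.add(tuple(q))
for signs in product((0.5, -0.5), repeat=4):       # the 16 units ½(±1 ± i ± j ± k)
    units.add(signs)

assert len(units) == 24
assert all(qmul(a, b) in units for a in units for b in units)  # closure
```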
Properties:
The binary tetrahedral group, denoted by 2T, fits into the short exact sequence 1 → {±1} → 2T → T → 1.
This sequence does not split, meaning that 2T is not a semidirect product of {±1} by T. In fact, there is no subgroup of 2T isomorphic to T.
Properties:
The binary tetrahedral group is the covering group of the tetrahedral group. Thinking of the tetrahedral group as the alternating group on four letters, T ≅ A4, we thus have the binary tetrahedral group as the covering group, 2T ≅ Â4. The center of 2T is the subgroup {±1}. The inner automorphism group is isomorphic to A4, and the full automorphism group is isomorphic to S4.
Properties:
The binary tetrahedral group can be written as a semidirect product 2T = Q ⋊ C3, where Q is the quaternion group consisting of the 8 Lipschitz units and C3 is the cyclic group of order 3 generated by ω = −½(1 + i + j + k). The group C3 acts on the normal subgroup Q by conjugation. Conjugation by ω is the automorphism of Q that cyclically rotates i, j, and k.
Properties:
One can show that the binary tetrahedral group is isomorphic to the special linear group SL(2,3) – the group of all 2 × 2 matrices over the finite field F3 with unit determinant, with this isomorphism covering the isomorphism of the projective special linear group PSL(2,3) with the alternating group A4.
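A small sketch (assuming plain Python) can confirm the order of SL(2,3) by brute force over the 3⁴ candidate matrices:

```python
# Small sketch (plain Python): enumerate SL(2,3), the 2x2 matrices over F3
# with determinant 1, and confirm it has 24 elements, the order of 2T.
from itertools import product

sl23 = [(a, b, c, d)
        for a, b, c, d in product(range(3), repeat=4)
        if (a * d - b * c) % 3 == 1]
print(len(sl23))  # 24
```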
Presentation:
The group 2T has a presentation given by ⟨r, s, t ∣ r² = s³ = t³ = rst⟩ or, equivalently, ⟨s, t ∣ (st)² = s³ = t³⟩.
Generators with these relations are given by r = i, s = ½(1 + i + j + k), t = ½(1 + i + j − k), with r² = s³ = t³ = −1.
Subgroups:
The quaternion group consisting of the 8 Lipschitz units forms a normal subgroup of 2T of index 3. This group and the center {±1} are the only nontrivial normal subgroups.
All other subgroups of 2T are cyclic groups generated by the various elements, with orders 3, 4, and 6.
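The stated values of the generators can be checked numerically; the following minimal sketch (assuming the same (w, x, y, z) tuple representation as above) verifies that r² = s³ = t³ = −1:

```python
# Numerical check (quaternions as (w, x, y, z) tuples, as in the sketch above)
# that the stated generators satisfy r² = s³ = t³ = −1.
def qmul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

r = (0, 1, 0, 0)              # i
s = (0.5, 0.5, 0.5, 0.5)      # ½(1 + i + j + k)
t = (0.5, 0.5, 0.5, -0.5)     # ½(1 + i + j − k)
minus_one = (-1, 0, 0, 0)

assert qmul(r, r) == minus_one
assert qmul(qmul(s, s), s) == minus_one
assert qmul(qmul(t, t), t) == minus_one
```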
Higher dimensions:
Just as the tetrahedral group generalizes to the rotational symmetry group of the n-simplex (as a subgroup of SO(n)), there is a corresponding higher binary group which is a 2-fold cover, coming from the cover Spin(n) → SO(n).
Higher dimensions:
The rotational symmetry group of the n-simplex can be considered as the alternating group on n + 1 points, An+1, and the corresponding binary group is a 2-fold covering group. For all higher dimensions except A6 and A7 (corresponding to the 5-dimensional and 6-dimensional simplexes), this binary group is the covering group (maximal cover) and is superperfect, but for dimensions 5 and 6 there is an additional exceptional 3-fold cover, and the binary groups are not superperfect.
Usage in theoretical physics:
The binary tetrahedral group was used in the context of Yang–Mills theory in 1956 by Chen Ning Yang and others.
It was first used in flavor physics model building by Paul Frampton and Thomas Kephart in 1994.
In 2012 it was shown that a relation between two neutrino mixing angles, derived by using this binary tetrahedral flavor symmetry, agrees with experiment. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Fréchet distance**
Fréchet distance:
In mathematics, the Fréchet distance is a measure of similarity between curves that takes into account the location and ordering of the points along the curves. It is named after Maurice Fréchet.
Intuitive definition:
Imagine a person traversing a finite curved path while walking their dog on a leash, with the dog traversing a separate finite curved path. Each can vary their speed to keep slack in the leash, but neither can move backwards. The Fréchet distance between the two curves is the length of the shortest leash sufficient for both to traverse their separate paths from start to finish. Note that the definition is symmetric with respect to the two curves—the Fréchet distance would be the same if the dog were walking its owner.
Formal definition:
Let S be a metric space. A curve A in S is a continuous map from the unit interval into S, i.e., A : [0,1] → S. A reparameterization α of [0,1] is a continuous, non-decreasing surjection α : [0,1] → [0,1]. Let A and B be two given curves in S. Then, the Fréchet distance between A and B is defined as the infimum over all reparameterizations α and β of [0,1] of the maximum over all t ∈ [0,1] of the distance in S between A(α(t)) and B(β(t)). In mathematical notation, the Fréchet distance is F(A,B) = inf_{α,β} max_{t∈[0,1]} d(A(α(t)), B(β(t))), where d is the distance function of S. Informally, we can think of the parameter t as "time". Then, A(α(t)) is the position of the dog and B(β(t)) is the position of the dog's owner at time t (or vice versa). The length of the leash between them at time t is the distance between A(α(t)) and B(β(t)). Taking the infimum over all possible reparametrizations of [0,1] corresponds to choosing the walk along the given paths where the maximum leash length is minimized. The restriction that α and β be non-decreasing means that neither the dog nor its owner can backtrack.
Formal definition:
The Fréchet metric takes into account the flow of the two curves because the pairs of points whose distance contributes to the Fréchet distance sweep continuously along their respective curves. This makes the Fréchet distance a better measure of similarity for curves than alternatives, such as the Hausdorff distance, for arbitrary point sets. It is possible for two curves to have small Hausdorff distance but large Fréchet distance.
Formal definition:
The Fréchet distance and its variants find application in several problems, from morphing and handwriting recognition to protein structure alignment. Alt and Godau were the first to describe a polynomial-time algorithm to compute the Fréchet distance between two polygonal curves in Euclidean space, based on the principle of parametric search. The running time of their algorithm is O(mn log(mn)) for two polygonal curves with m and n segments.
The free-space diagram:
An important tool for calculating the Fréchet distance of two curves is the free-space diagram, which was introduced by Alt and Godau.
The free-space diagram:
The free-space diagram between two curves for a given distance threshold ε is a two-dimensional region in the parameter space that consists of all point pairs on the two curves at distance at most ε: Dε(A,B) := {(α,β) ∈ [0,1]² ∣ d(A(α), B(β)) ≤ ε}. The Fréchet distance F(A,B) is at most ε if and only if the free-space diagram Dε(A,B) contains a path from the lower left corner to the upper right corner, which is monotone both in the horizontal and in the vertical direction.
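A coarse way to see the free-space idea in code (a discretized illustration only, not the exact Alt–Godau construction, which works cell by cell on the continuous parameter space) is to sample both curves, build the boolean free-space matrix, and test for a monotone path:

```python
# A coarse, discretized illustration of the free-space idea (not the exact
# Alt–Godau construction, which works cell by cell on the continuous
# parameter space): sample both curves, mark parameter pairs within eps,
# and test for a path that is monotone in both indices.
import numpy as np

def frechet_at_most(P: np.ndarray, Q: np.ndarray, eps: float) -> bool:
    """P, Q: arrays of sampled curve points with shapes (m, d) and (n, d)."""
    free = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1) <= eps
    reach = np.zeros_like(free)
    for i in range(free.shape[0]):
        for j in range(free.shape[1]):
            if not free[i, j]:
                continue
            if i == 0 and j == 0:
                reach[i, j] = True
            else:
                reach[i, j] = ((i > 0 and reach[i - 1, j]) or
                               (j > 0 and reach[i, j - 1]) or
                               (i > 0 and j > 0 and reach[i - 1, j - 1]))
    return bool(reach[-1, -1])

P = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
Q = np.array([[0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
print(frechet_at_most(P, Q, eps=1.0), frechet_at_most(P, Q, eps=0.5))  # True False
```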
As a distance between probability distributions (the FID score):
In addition to measuring the distances between curves, the Fréchet distance can also be used to measure the difference between probability distributions. For two multivariate Gaussian distributions with means μX and μY and covariance matrices ΣX and ΣY, the squared Fréchet distance between these distributions is d(X,Y)² = ‖μX − μY‖² + tr(ΣX + ΣY − 2(ΣX ΣY)^(1/2)). This distance is the basis for the Fréchet inception distance (FID) that is used to compare images produced by a generative adversarial network with the real images that were used for training.
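A minimal sketch of this formula (assuming NumPy and SciPy's scipy.linalg.sqrtm; note that FID itself is conventionally reported as the squared distance):

```python
# Hedged sketch of the Gaussian Fréchet distance used in FID (assumes NumPy
# and SciPy's scipy.linalg.sqrtm; FID itself conventionally reports the
# squared distance rather than the distance).
import numpy as np
from scipy.linalg import sqrtm

def frechet_gaussian_distance(mu_x, sigma_x, mu_y, sigma_y) -> float:
    mu_x, mu_y = np.asarray(mu_x), np.asarray(mu_y)
    sigma_x, sigma_y = np.asarray(sigma_x), np.asarray(sigma_y)
    diff = mu_x - mu_y
    covmean = np.real(sqrtm(sigma_x @ sigma_y))   # (ΣX ΣY)^(1/2), real part only
    d2 = diff @ diff + np.trace(sigma_x + sigma_y - 2.0 * covmean)
    return float(np.sqrt(max(d2, 0.0)))

# Identical distributions are at distance zero.
print(frechet_gaussian_distance(np.zeros(2), np.eye(2), np.zeros(2), np.eye(2)))  # 0.0
```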
Variants:
The weak Fréchet distance is a variant of the classical Fréchet distance without the requirement that the endpoints move monotonically along their respective curves — the dog and its owner are allowed to backtrack to keep the leash between them short. Alt and Godau describe a simpler algorithm to compute the weak Fréchet distance between polygonal curves, based on computing minimax paths in an associated grid graph.
Variants:
The discrete Fréchet distance, also called the coupling distance, is an approximation of the Fréchet metric for polygonal curves, defined by Eiter and Mannila. The discrete Fréchet distance considers only positions of the leash where its endpoints are located at vertices of the two polygonal curves and never in the interior of an edge. This special structure allows the discrete Fréchet distance to be computed in polynomial time by an easy dynamic programming algorithm.
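A minimal sketch of the dynamic program (curves given as lists of coordinate tuples; helper names are hypothetical):

```python
# Minimal sketch of the Eiter–Mannila dynamic program for the discrete
# Fréchet (coupling) distance; curves are given as lists of coordinate tuples.
import math

def discrete_frechet(P, Q) -> float:
    m, n = len(P), len(Q)
    ca = [[0.0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            dij = math.dist(P[i], Q[j])          # Euclidean distance of the two vertices
            if i == 0 and j == 0:
                ca[i][j] = dij
            elif i == 0:
                ca[i][j] = max(ca[0][j - 1], dij)
            elif j == 0:
                ca[i][j] = max(ca[i - 1][0], dij)
            else:
                ca[i][j] = max(min(ca[i - 1][j], ca[i - 1][j - 1], ca[i][j - 1]), dij)
    return ca[-1][-1]

# Two parallel three-vertex polylines one unit apart:
print(discrete_frechet([(0, 0), (1, 0), (2, 0)], [(0, 1), (1, 1), (2, 1)]))  # 1.0
```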
Variants:
When the two curves are embedded in a metric space other than Euclidean space, such as a polyhedral terrain or some Euclidean space with obstacles, the distance between two points on the curves is most naturally defined as the length of the shortest path between them. The leash is required to be a geodesic joining its endpoints. The resulting metric between curves is called the geodesic Fréchet distance. Cook and Wenk describe a polynomial-time algorithm to compute the geodesic Fréchet distance between two polygonal curves in a simple polygon.
Variants:
If we further require that the leash must move continuously in the ambient metric space, then we obtain the notion of the homotopic Fréchet distance between two curves. The leash cannot switch discontinuously from one position to another — in particular, the leash cannot jump over obstacles, and can sweep over a mountain on a terrain only if it is long enough. The motion of the leash describes a homotopy between the two curves. Chambers et al. describe a polynomial-time algorithm to compute the homotopic Fréchet distance between polygonal curves in the Euclidean plane with obstacles.
Examples:
The Fréchet distance between two concentric circles of radius r1 and r2 respectively is |r1−r2|.
The longest leash is required when the owner stands still and the dog travels to the opposite side of the circle (r1 + r2), and the shortest leash when both owner and dog walk at a constant angular velocity around the circle (|r1 − r2|).
Applications:
Fréchet distance has been used to study visual hierarchy, a graphic design principle. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Heat death paradox**
Heat death paradox:
The heat death paradox, also known as thermodynamic paradox, Clausius' paradox and Kelvin’s paradox, is a reductio ad absurdum argument that uses thermodynamics to show the impossibility of an infinitely old universe. It was formulated in February 1862 by Lord Kelvin and expanded upon by Hermann von Helmholtz and William John Macquorn Rankine.
The paradox:
Assuming that the universe is eternal, a question arises: How is it that thermodynamic equilibrium has not already been achieved? This theoretical paradox is directed at the then-mainstream strand of belief in a classical view of a sempiternal universe, whereby its matter is postulated as everlasting and having always been recognisably the universe. The heat death paradox is born of a paradigm resulting from fundamental ideas about the cosmos; resolving the paradox requires changing that paradigm.
The paradox:
The paradox was based upon the rigid mechanical point of view of the second law of thermodynamics postulated by Rudolf Clausius and Lord Kelvin, according to which heat can only be transferred from a warmer to a colder object. It notes: if the universe were eternal, as claimed classically, it should already be cold and isotropic (its objects should have the same temperature, and the distribution of matter or radiation should be even). Kelvin compared the universe to a clock that runs slower and slower, constantly dissipating energy in impalpable heat, although he was unsure whether it would stop forever (reach thermodynamic equilibrium). According to this model, the existence of usable energy, which can be used to perform work and produce entropy, means that the clock has not stopped, since a conversion of heat into mechanical energy (which Kelvin called a rejuvenating universe scenario) is not contemplated. According to the laws of thermodynamics, any hot object transfers heat to its cooler surroundings, until everything is at the same temperature. For two objects at the same temperature as much heat flows from one body as flows from the other, and the net effect is no change. If the universe were infinitely old, there must have been enough time for the stars to cool and warm their surroundings. Everywhere should therefore be at the same temperature and there should either be no stars, or everything should be as hot as stars. The universe should thus achieve, or asymptotically tend to, thermodynamic equilibrium, which corresponds to a state where no thermodynamic free energy is left, and therefore no further work is possible: this is the heat death of the universe, as predicted by Lord Kelvin in 1852. The average temperature of the cosmos should also asymptotically tend to zero kelvin, and it is possible that a maximum entropy state will be reached.
Solution:
In February 1862, Lord Kelvin used the existence of the Sun and the stars as an empirical proof that the universe has not achieved thermodynamic equilibrium, as entropy production and free work are still possible, and there are temperature differences between objects. Helmholtz and Rankine expanded Kelvin’s work soon after.
Since there are stars and colder objects, the universe is not in thermodynamic equilibrium, so it cannot be infinitely old.
Modern cosmology:
The paradox does not arise in modern cosmology, which posits that the universe began in a Big Bang roughly 13.8 billion years ago – which is not long enough ago for the universe to have approached thermodynamic equilibrium.
Related paradoxes:
Olbers' paradox is another paradox which aims to disprove an infinitely old static universe, but it only fits with a static universe scenario. Also, unlike Kelvin’s paradox, it relies on Cosmology rather than Thermodynamics. The Boltzmann Brain can also be related to Kelvin’s, as it focuses on the spontaneous generation of a brain (filled with false memories) from entropy fluctuations, in a universe which has been lying in a heat death state for an indefinite amount of time. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Spoken game**
Spoken game:
A spoken game is a game which uses words instead of cards, boards, game pieces, or other paraphernalia.
Spoken games can often also be categorized as guessing games, word games, or because of their freedom from equipment or visual engagement, car games.
Well-known spoken games include Twenty Questions, Riddle Me Ree, and Password. Because of their nature, spoken games are usually non-commercial. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Unigine**
Unigine:
UNIGINE is a proprietary cross-platform game engine developed by UNIGINE Company and used in simulators, virtual reality systems, serious games and visualization. It supports OpenGL 4, Vulkan and DirectX 12. UNIGINE Engine is a core technology for a lineup of benchmarks (CPU, GPU, power supply, cooling system), which are used by overclockers and technical media such as Tom's Hardware, Linus Tech Tips, PC Gamer, and JayzTwoCents. UNIGINE benchmarks are also included as part of the Phoronix Test Suite for benchmarking purposes on Linux and other systems.
UNIGINE 1:
The first public release was the 0.3 version on May 4, 2005.
Platforms:
UNIGINE 1 supported Microsoft Windows, Linux, OS X, PlayStation 3, Android, and iOS. Experimental support for WebGL existed but was not included in the official SDK. UNIGINE 1 supported DirectX 9, DirectX 10, DirectX 11, OpenGL, OpenGL ES and PlayStation 3, while initial versions (v0.3x) only supported OpenGL.
UNIGINE 1 provided C++, C#, and UnigineScript APIs for developers. It also supported the shading languages GLSL and HLSL.
Game features:
UNIGINE 1 had support for large virtual scenarios and specific hardware required by professional simulators and enterprise VR systems, often called serious games.
UNIGINE 1:
Support for large virtual worlds was implemented via double precision of coordinates (64-bit per axis), zone-based background data streaming, and optional operations in a geographic coordinate system (latitude, longitude, and elevation instead of X, Y, Z). Display output was implemented via multi-channel rendering (network-synchronized image generation of a single large image with several computers), which is typical for professional simulators. The same system enabled support of multiple output devices with asymmetric projections (e.g. CAVE). Curved screens with multiple projectors were also supported. UNIGINE 1 had stereoscopic output support for anaglyph rendering, separate images output, Nvidia 3D Vision, and virtual reality headsets. It also supported multi-monitor output.
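The need for double precision in large worlds comes down to floating-point spacing; the toy sketch below (illustrative only, unrelated to UNIGINE's actual code) shows a centimetre step vanishing in 32-bit coordinates a few thousand kilometres from the origin:

```python
# Toy illustration (unrelated to UNIGINE's actual code) of why large worlds
# need 64-bit coordinates: ~4,000 km from the origin, a 1 cm step is lost in
# 32-bit floats but preserved in 64-bit floats.
import numpy as np

pos, step = 4_000_000.0, 0.01                      # metres
print(np.float32(pos + step) - np.float32(pos))    # 0.0  - the step disappears
print(np.float64(pos + step) - np.float64(pos))    # ~0.01 - the step survives
```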
UNIGINE 1:
Other features:
The UNIGINE renderer supported Shader Model 5.0 with hardware tessellation, DirectCompute, and OpenCL. It also used screen space ambient occlusion and real-time global illumination. UNIGINE used a proprietary physics engine to process events such as collision detection, rigid body physics, and dynamic destruction of objects. It also used a proprietary engine for path finding and basic AI components. UNIGINE had features such as an interactive 3D GUI, video playback using the Theora codec, a 3D audio system based on the OpenAL library, and a WYSIWYG scene editor (UNIGINE Editor).
UNIGINE 2:
UNIGINE 2 was released on October 10, 2015.
UNIGINE 2 has all features from UNIGINE 1, transitioned from forward rendering to a deferred rendering approach with PBR shading, and introduced new graphical technologies like geometry water, multi-layered volumetric clouds, SSRTGI and voxel-based lighting.
Platforms:
UNIGINE 2 supports Microsoft Windows, Linux and OS X (OS X support stopped starting from version 2.6).
UNIGINE 2 supports the following graphical APIs: DirectX 11, OpenGL 4.x. Since version 2.16 UNIGINE experimentally supports DirectX 12 and Vulkan.
There are 3 APIs for developers: C++, C#, Unigine Script.
Supported Shader languages: HLSL, GLSL, UUSL (Unified UNIGINE Shader Language).
SSRTGI:
Proprietary SSRTGI (Screen Space Ray-Traced Global Illumination) rendering technology was introduced in version 2.5. It was presented at the SIGGRAPH 2017 Real-Time Live! event.
Development:
The roots of UNIGINE are in the frustum.org open source project, which was initiated in 2002 by Alexander "Frustum" Zaprjagaev, who is a co-founder (along with Denis Shergin, CEO) and ex-CTO of UNIGINE Company.
Development:
Linux game competition:
On November 25, 2010, UNIGINE Company announced a competition to support Linux game development. They agreed to give away a free license of the UNIGINE engine to anyone willing to develop and release a game with a Linux native client, and would also grant the team a Windows license. The competition ran until December 10, 2010, with a considerable number of entries being submitted. Due to the unexpected response, UNIGINE decided to extend the offer to the three best applicants, with each getting full UNIGINE licenses. The winners were announced on December 13, 2010, with the developers selected being Kot-in-Action Creative Artel (who previously developed Steel Storm), Gamepulp (who intended to make a puzzle platformer), and MED-ART (who previously worked on Painkiller: Resurrection).
UNIGINE-based projects:
As of 2021, the company claimed to have more than 250 B2B customers worldwide. Some companies that develop software for professional aircraft, ship & vehicle simulators use UNIGINE Engine as a base for 3D & VR visualization.
UNIGINE-based projects:
Games:
Released:
Cradle - released for Windows and Linux in 2015
Oil Rush - released for Windows, Linux and Mac OS X in 2012; released for iOS in 2013
Syndicates of Arkon - released for Windows in 2010
Tryst - released for Windows in 2012
Petshop - released for Windows and Mac in 2011
Sumoman - released for Windows and Linux in 2017
Demolicious - released for iOS in 2012
Dual Universe - released in 2022
Upcoming:
Dilogus: The Winds of War
Node - VR shooter (Steam page)
Kingdom of Kore - action RPG for PC (in future for PS3) - cancelled by publisher
El Somni Quas - MMORPG (Patreon page)
Acro FS - aerobatic flight simulator (Steam page)
Simulation and visualization:
Metro Simulator by Smart Simulation
CarMaker 10.0 by IPG Automotive
NAUTIS maritime simulators by VSTEP
Train driver simulator by Oktal Sydac
Be-200 flight simulator
Klee 3D (3D visualization solution for digital marketing and research applications)
The visualization component of the analytical software complex developed for JSC "ALMAZ-ANTEY" MSDB", an affiliate of JSC "Concern "Almaz-Antey"
Real-time interactive architectural visualization projects of AI3D
Bell-206 Ranger rescue helicopter simulator
Magus ex Machina (3D animated movie)
SIMREX CDS, SIMREX FDS, SIMREX FTS car driving simulators by INNOSIMULATION
Real-time artworks by John Gerrard (artist): Farm, Solar Reserve, Exercise, Western Flag (Spindletop, Texas), X. laevis (Spacelab)
Train simulators by SPECTR
DVS3D by GDI
RF-X flight simulator
NAVANTIS Ship Simulator
VR simulator for learning of computer vision for autonomous flight control at Daedalean AI
Benchmarks:
UNIGINE Engine is used as a platform for a series of benchmarks, which can be used to determine the stability of PC hardware (CPU, GPU, power supply, cooling system) under extremely stressful conditions, as well as for overclocking:
Superposition Benchmark (featuring online leaderboards) - UNIGINE 2 (2017)
Valley Benchmark - UNIGINE 1 (2013)
Heaven Benchmark (the first DirectX 11 benchmark) - UNIGINE 1 (2009)
Tropics Benchmark - UNIGINE 1 (2008)
Sanctuary Benchmark - UNIGINE 1 (2007) | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Tower crane anti-collision system**
Tower crane anti-collision system:
A tower crane anti-collision system is an operator support system for tower cranes on construction sites. It helps an operator to anticipate the risk of contact between the moving parts of a tower crane and other tower cranes and structures. In the event that a collision becomes imminent, the system can send a command to the crane's control system, ordering it to slow down or stop. An anti-collision system can describe an isolated system installed on an individual tower crane. It can also describe a site wide coordinated system, installed on many tower cranes in close proximity.
History:
Developments in tower crane design and the increasing complexity of construction sites in the 1970s and 1980s led to an increase in the quantity and proximity of tower cranes on construction sites. This increased the risk of collisions between cranes, particularly when their operating areas overlapped. The first tower crane anti-collision systems were developed in France in 1985 by SMIE. A Ministry of Labour directive issued in 1987 made anti-collision systems compulsory on all tower cranes in France. In 2011, Hong Kong introduced a "Code of Practice for the Safe Use of Tower Cranes" and Singapore introduced a "Workplace Safety and Health construction Regulation". Both required the provision of an anti-collision system where more than one tower crane is in use. In 2015, Luxembourg required automatic devices to be installed to avoid the risk of collision between tower cranes.
Collision avoidance with structures and other tower cranes:
Various sensors are used to measure the position, velocity and angle of each tower crane’s moving parts. These sensors can be part of the anti-collision system or the crane. This information is sent via radio link to a computer and a display in the operator’s cabin. Several features commonly found across tower crane anti-collision systems use this data.
Zoning:
Anti-collision systems allow prohibited zones to be defined. These are areas (such as schools, transport links, electrical power lines and areas beyond the site boundary) where the crane is not allowed to operate.
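As a purely illustrative sketch of the idea (hypothetical names and geometry, not any vendor's algorithm), a zone check can be reduced to projecting the hook position from slew angle and trolley radius and testing it against a prohibited polygon:

```python
# Purely illustrative sketch (hypothetical names and geometry, not any
# vendor's algorithm): project the hook position from slew angle and trolley
# radius, then test it against a prohibited zone given as a polygon.
import math

def hook_position(base_xy, slew_deg, trolley_radius_m):
    x0, y0 = base_xy
    a = math.radians(slew_deg)
    return (x0 + trolley_radius_m * math.cos(a), y0 + trolley_radius_m * math.sin(a))

def point_in_polygon(pt, poly):
    """Ray-casting test: True if pt lies inside the polygon (list of vertices)."""
    x, y = pt
    inside = False
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

school_zone = [(50, 50), (80, 50), (80, 90), (50, 90)]   # hypothetical prohibited area
hook = hook_position(base_xy=(0, 0), slew_deg=45, trolley_radius_m=90)
if point_in_polygon(hook, school_zone):
    print("ALARM: hook over prohibited zone - command slow-down / stop")
```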
Situational awareness:
The operator's cabin hosts a display showing the tower crane's position, movement and operating area. Where the tower crane’s operating area overlaps with other cranes or prohibited zones these are also displayed. The system alerts the operator when the crane is approaching a prohibited area or another crane.
Tower crane control:
Anti-collision systems are often connected to the tower crane’s control system. This allows the anti-collision system to automatically slow down and stop the crane if there is a risk of an accident. The operator is then prevented from moving the crane towards the danger and can only move it away.
Supervisory system:
A supervisory system is a typical feature of an anti-collision system that covers an entire construction site. It allows a site supervisor to have a complete view of tower crane operations on a construction site. It also allows for centralised configuration and maintenance of the system.
Fail-safe operation:
If a fault occurs on a tower crane's anti-collision system, or it is bypassed, other tower cranes will be prevented from operating within the volume of the faulty system.
Collision avoidance with other vehicles:
Anti-collision lights are required on tower cranes operating in or near to airfield flight paths. Three red flashing lights are positioned on each end and the top of the crane. They provide a visual warning to aircraft pilots.
Limitations:
Tower crane anti-collision systems do not prevent collisions with mobile construction equipment such as mobile cranes and aerial work platforms.
Standards:
A draft standard setting out the functional requirements of tower crane anti-collision devices and systems is open for comment. It is BS EN 17076, Anti-collision devices and systems for tower cranes. Safety characteristics and requirements.
**Gastrointestinal disease**
Gastrointestinal disease:
Gastrointestinal diseases (abbrev. GI diseases or GI illnesses) refer to diseases involving the gastrointestinal tract, namely the esophagus, stomach, small intestine, large intestine and rectum, and the accessory organs of digestion, the liver, gallbladder, and pancreas.
Oral disease:
The oral cavity is part of the gastrointestinal system and as such the presence of alterations in this district can be the first sign of both systemic and gastrointestinal diseases. By far the most common oral conditions are plaque-induced diseases (e.g., gingivitis, periodontitis, dental caries). Oral symptoms can be similar to lesions occurring elsewhere in the digestive tract, with a pattern of swelling, inflammation, ulcers, and fissures. If these signs are present, then patients are more likely to also have anal and esophageal lesions and experience other extra-intestinal disease manifestations. Some diseases which involve other parts of the GI tract can manifest in the mouth, alone or in combination, including: Gastroesophageal reflux disease can cause acid erosion of the teeth and halitosis.
Oral disease:
Gardner's syndrome can be associated with failure of tooth eruption, supernumerary teeth, and dentigerous cysts.
Peutz–Jeghers syndrome can cause dark spots on the oral mucosa or on the lips or the skin around the mouth.
Several GI diseases, especially those associated with malabsorption, can cause recurrent mouth ulcers, atrophic glossitis, and angular cheilitis (e.g., Crohn's disease is sometimes termed orofacial granulomatosis when it involves the mouth alone).
Sideropenic dysphagia can cause glossitis, angular cheilitis.
Oesophageal disease:
Oesophageal diseases include a spectrum of disorders affecting the oesophagus. The most common condition of the oesophagus in Western countries is gastroesophageal reflux disease, which in chronic forms is thought to result in changes to the epithelium of the oesophagus, known as Barrett's oesophagus.: 863–865 Acute disease might include infections such as oesophagitis, trauma caused by the ingestion of corrosive substances, or rupture of veins such as oesophageal varices, Boerhaave syndrome or Mallory-Weiss tears. Chronic diseases might include congenital diseases such as Zenker's diverticulum and esophageal webbing, and oesophageal motility disorders including the nutcracker oesophagus, achalasia, diffuse oesophageal spasm, and oesophageal stricture.: 853, 863–868 Oesophageal disease may result in a sore throat, throwing up blood, difficulty swallowing or vomiting. Chronic or congenital diseases might be investigated using barium swallows, endoscopy and biopsy, whereas acute diseases such as reflux may be investigated and diagnosed based on symptoms and a medical history alone.: 863–867
Gastric disease:
Gastric diseases refer to diseases affecting the stomach. Inflammation of the stomach by infection from any cause is called gastritis, and when it also involves other parts of the gastrointestinal tract it is called gastroenteritis. When gastritis persists in a chronic state, it is associated with several diseases, including atrophic gastritis, pyloric stenosis, and gastric cancer. Another common condition is gastric ulceration (peptic ulcers). Ulceration erodes the gastric mucosa, which protects the tissue of the stomach from the stomach acids. Peptic ulcers are most commonly caused by a bacterial Helicobacter pylori infection. Epstein–Barr virus infection is another factor implicated in gastric cancer. As well as peptic ulcers, vomiting blood may result from abnormal arteries or veins that have ruptured, including Dieulafoy's lesion and gastric antral vascular ectasia. Disorders of the stomach also include pernicious anaemia, in which a targeted immune response against parietal cells results in an inability to absorb vitamin B12. Other common symptoms that stomach disease might cause include indigestion or dyspepsia, vomiting, and in chronic disease, digestive problems leading to forms of malnutrition.: 850–853 In addition to routine tests, an endoscopy might be used to examine or take a biopsy from the stomach.: 848
Intestinal disease:
The small and large intestines may be affected by infectious, autoimmune, and physiological states. Inflammation of the intestines is called enterocolitis, which may lead to diarrhea.
Intestinal disease:
Acute conditions affecting the bowels include infectious diarrhea and mesenteric ischaemia. Causes of constipation may include faecal impaction and bowel obstruction, which may in turn be caused by ileus, intussusception, volvulus. Inflammatory bowel disease is a condition of unknown aetiology, classified as either Crohn's disease or ulcerative colitis, that can affect the intestines and other parts of the gastrointestinal tract. Other causes of illness include intestinal pseudoobstruction, and necrotizing enterocolitis.: 850–862, 895–903 Diseases of the intestine may cause vomiting, diarrhoea or constipation, and altered stool, such as with blood in stool. Colonoscopy may be used to examine the large intestine, and a person's stool may be sent for culture and microscopy. Infectious disease may be treated with targeted antibiotics, and inflammatory bowel disease with immunosuppression. Surgery may also be used to treat some causes of bowel obstruction.: 850–862 The normal thickness of the small intestinal wall is 3–5 mm, and 1–5 mm in the large intestine. Focal, irregular and asymmetrical gastrointestinal wall thickening on CT scan suggests a malignancy. Segmental or diffuse gastrointestinal wall thickening is most often due to ischemic, inflammatory or infectious disease. Though less common, medications such as ACE inhibitors can cause angioedema and small bowel thickening.
Intestinal disease:
Small intestine The small intestine consists of the duodenum, jejunum and ileum. Inflammation of the small intestine is called enteritis, which if localised to just part is called duodenitis, jejunitis and ileitis, respectively. Peptic ulcers are also common in the duodenum.: 879–884 Chronic diseases of malabsorption may affect the small intestine, including the autoimmune coeliac disease, infective tropical sprue, and congenital or surgical short bowel syndrome. Other rarer diseases affecting the small intestine include Curling's ulcer, blind loop syndrome, Milroy disease and Whipple's disease. Tumours of the small intestine include gastrointestinal stromal tumours, lipomas, hamartomas and carcinoid syndromes.: 879–887 Diseases of the small intestine may present with symptoms such as diarrhoea, malnutrition, fatigue and weight loss. Investigations pursued may include blood tests to monitor nutrition, such as iron levels, folate and calcium, endoscopy and biopsy of the duodenum, and barium swallow. Treatments may include renutrition, and antibiotics for infections.: 879–887 Large intestine Diseases that affect the large intestine may affect it in whole or in part. Appendicitis is one such disease, caused by inflammation of the appendix. Generalised inflammation of the large intestine is referred to as colitis, which when caused by the bacteria Clostridium difficile is referred to as pseudomembranous colitis. Diverticulitis is a common cause of abdominal pain resulting from outpouchings that particularly affects the colon. Functional colonic diseases refer to disorders without a known cause, including irritable bowel syndrome and intestinal pseudoobstruction. Constipation may result from lifestyle factors, impaction of a rigid stool in the rectum, or in neonates, Hirschprung's disease.: 913–915 Diseases affecting the large intestine may cause blood to be passed with stool, may cause constipation, or may result in abdominal pain or a fever. Tests that specifically examine the function of the large intestine include barium swallows, abdominal x-rays, and colonoscopy.: 913–915 Rectum and anus Diseases affecting the rectum and anus are extremely common, especially in older adults. Hemorrhoids, vascular outpouchings of skin, are very common, as is pruritus ani, referring to anal itchiness. Other conditions, such as anal cancer may be associated with ulcerative colitis or with sexually transmitted infections such as HIV. Inflammation of the rectum is known as proctitis, one cause of which is radiation damage associated with radiotherapy to other sites such as the prostate. Faecal incontinence can result from mechanical and neurological problems, and when associated with a lack of voluntary voiding ability is described as encopresis. Pain on passing stool may result from anal abscesses, small inflamed nodules, anal fissures, and anal fistulas.: 915–916 Rectal and anal disease may be asymptomatic, or may present with pain when passing stools, fresh blood in stool, a feeling of incomplete emptying, or pencil-thin stools. In addition to regular tests, medical tests used to investigate the anus and rectum include the digital rectal exam and proctoscopy.
Accessory digestive gland disease:
Hepatic Hepatic diseases refers to those affecting the liver. Hepatitis refers to inflammation of liver tissue, and may be acute or chronic. Infectious viral hepatitis, such as hepatitis A, B and C, affect in excess of (X) million people worldwide. Liver disease may also be a result of lifestyle factors, such as fatty liver and NASH. Alcoholic liver disease may also develop as a result of chronic alcohol use, which may also cause alcoholic hepatitis. Cirrhosis may develop as a result of chronic hepatic fibrosis in a chronically inflamed liver, such as one affected by alcohol or viral hepatitis.: 947–958 Liver abscesses are often acute conditions, with common causes being pyogenic and amoebic. Chronic liver disease, such as cirrhosis, may be a cause of liver failure, a state where the liver is unable to compensate for chronic damage, and unable to meet the metabolic demands of the body. In the acute setting, this may be a cause of hepatic encephalopathy and hepatorenal syndrome. Other causes of chronic liver disease are genetic or autoimmune disease, such as hemochromatosis, Wilson's disease, autoimmune hepatitis, and primary biliary cirrhosis.: 959–963, 971 Acute liver disease rarely results in pain, but may result in jaundice. Infectious liver disease may cause a fever. Chronic liver disease may result in a buildup of fluid in the abdomen, yellowing of the skin or eyes, easy bruising, immunosuppression, and feminization. Portal hypertension is often present, and this may lead to the development of prominent veins in many parts of the body, such as oesophageal varices, and haemorrhoids.: 959–963, 971–973 In order to investigate liver disease, a medical history, including regarding a person's family history, travel to risk-prone areas, alcohol use and food consumption, may be taken. A medical examination may be conducted to investigate for symptoms of liver disease. Blood tests may be used, particularly liver function tests, and other blood tests may be used to investigate the presence of the Hepatitis viruses in the blood, and ultrasound used. If ascites is present, abdominal fluid may be tested for protein levels.: 921, 926–927 Pancreatic Pancreatic diseases that affect digestion refers to disorders affecting the exocrine pancreas, which is a part of the pancreas involved in digestion.One of the most common conditions of the exocrine pancreas is acute pancreatitis, which in the majority of cases relates to gallstones that have impacted in the pancreatic part of the biliary tree, or due to acute or chronic hazardous alcohol use or as a side-effect of ERCP. Other forms of pancreatitis include chronic and hereditary forms. Chronic pancreatitis may predispose to pancreatic cancer and is strongly linked to alcohol use. Other rarer diseases affecting the pancreas may include pancreatic pseudocysts, exocrine pancreatic insufficiency, and pancreatic fistulas.: 888–891 Pancreatic disease may present with or without symptoms. When symptoms occur, such as in acute pancreatitis, a person may experience acute-onset, severe mid-abdominal pain, nausea and vomiting. In severe cases, pancreatitis may lead to rapid blood loss and systemic inflammatory response syndrome. When the pancreas is unable to secrete digestive enzymes, such as with a pancreatic cancer occluding the pancreatic duct, result in jaundice. 
Pancreatic disease might be investigated using abdominal x-rays, MRCP or ERCP, CT scans, and through blood tests such as measurement of the amylase and lipase enzymes.: 888–894 Gallbladder and biliary tract Diseases of the hepatobiliary system affect the biliary tract (also known as the biliary tree), which secretes bile in order to aid digestion of fats. Diseases of the gallbladder and bile ducts are commonly diet-related, and may include the formation of gallstones that impact in the gallbladder (cholecystolithiasis) or in the common bile duct (choledocholithiasis).: 977–978 Gallstones are a common cause of inflammation of the gallbladder, called cholecystitis. Inflammation of the biliary duct is called cholangitis, which may be associated with autoimmune disease, such as primary sclerosing cholangitis, or a result of bacterial infection, such as ascending cholangitis.: 977–978, 963–968 Disease of the biliary tree may cause pain in the upper right abdomen, particularly when pressed. Disease might be investigated using ultrasound or ERCP, and might be treated with drugs such as antibiotics or UDCA, or by the surgical removal of the gallbladder.: 977–979 Cancer The Wikipedia article "Gastrointestinal cancer" describes the specific malignant conditions of the gastrointestinal tract. In general, a significant factor in the etiology of gastrointestinal cancers appears to be excessive exposure of the digestive organs to bile acids. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Ras p21 protein activator 2**
Ras p21 protein activator 2:
RAS p21 protein activator 2 is a protein that in humans is encoded by the RASA2 gene.
Function:
The protein encoded by this gene is member of the GAP1 family of GTPase-activating proteins. The gene product stimulates the GTPase activity of normal RAS p21 but not its oncogenic counterpart. Acting as a suppressor of RAS function, the protein enhances the weak intrinsic GTPase activity of RAS proteins resulting in the inactive GDP-bound form of RAS, thereby allowing control of cellular proliferation and differentiation. Alternative splicing results in multiple transcript variants. [provided by RefSeq, Dec 2014]. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Catholic dogmatic theology**
Catholic dogmatic theology:
Catholic dogmatic theology can be defined as "a special branch of theology, the object of which is to present a scientific and connected view of the accepted doctrines of the Christian faith."
Definition:
According to Joseph Pohle, writing in the Catholic Encyclopedia, "theology comprehends all those and only those doctrines which are to be found in the sources of faith, namely Scripture and Tradition...For, just as the Bible,...was written under the immediate inspiration of the Holy [Spirit], so Tradition was, and is, guided in a special manner by God, Who preserves it from being curtailed, mutilated, or falsified." The scientific character of dogmatic theology does not rest so much on the exactness of its exegetical and historical proofs as on the philosophical grasp of the content of dogma.The functions of dogmatic theology are twofold: first, to establish what constitutes a doctrine of the Christian faith, and to elucidate it in both its religious and its philosophical aspects; secondly, to connect the individual doctrines into a system. “In current Catholic usage, the term ‘dogma’ means a divinely revealed truth, proclaimed as such by the infallible teaching authority of the Church, and hence binding on all the faithful without exception, now and forever."
Subjects:
Dogmatic theology begins with the doctrine of God, whose existence, essence, and attributes are to be investigated. A philosophical understanding of the dogma of the Trinity was attempted by the Fathers. The theologian investigates the activity of creation. As the beginning of the world supposes creation out of nothing, so its continuation supposes Divine conservation, which is nothing less than a continued creation. However, God's creative activity is not thereby exhausted. The topics of Original Sin and Angelology come under Creation.
Subjects:
The subject of Redemption includes Christology, Soteriology, Mariology. The Redeemer's activity as Mediator stands out most prominently in His triple office of high priest, prophet, and king. For the most part, dogmatic theologians prefer to treat Mariology and the veneration paid to relics and images under Eschatology, together with the Communion of Saints.
History:
Patristic period (about A.D. 100–800) At first, Dogmatic theology comprised apologetics, dogmatic and moral theology, and canon law. The Fathers of the Church are honoured by the Church as her principal theologians. It was not so much in the catechetical schools of Alexandria, Antioch, and Edessa as in the struggle with the great heresies of the age that patristic theology developed. This serves to explain the character of the patristic literature, which is apologetical and polemical, parenetical and ascetic. It was not the intention of the Fathers to give a systematic treatment of theology. It may be said in general that the apologetic style predominated up to the time of Constantine the Great.Christian writers had to explain the truths of natural religion, such as God, the soul, creation, immortality, and freedom of the will; at the same time they had to defend the chief mysteries of the Christian faith. The efforts of the Fathers to define and combat heresy brought writings against Gnosticism, Manichæism, and Priscillianism. Those who wrote against pagan polytheism include: Justin Martyr, Lactantius, and Eusebius of Cæsarea. Prominent writers against the practices of Judaizing Christians were: Hippolytus of Rome, Epiphanius of Salamis, and Chrysostom. At the First Council of Nicaea, the Church took steps to define revealed doctrine more precisely in response to a challenge from a heretical theology."Eastern Christians in this dispute on the Trinity and Christology included: the Alexandrines, and Didymus the Blind; Athanasius and the three Cappadocians; Cyril of Alexandria and Leontius of Byzantium; and Maximus the Confessor. In the West the leaders were: Cyprian, Jerome, Fulgentius of Ruspe, Pope Leo I and Pope Gregory I. As the contest with Pelagianism and Semi-pelagianism clarified the dogmas of grace and liberty, providence and predestination, original sin and the condition of our first parents in Paradise, so also the contests with the Donatists brought codification to the doctrine of the sacraments (baptism), the hierarchical constitution of the Church her magisterium or teaching authority, and her infallibility. A culminating contest was decided by the Second Council of Nicæa (787).These developments left the dogmatic teachings of the Fathers as a collection of monographs rather than a systematic exposition. Irenæus shows attempts at synthesis; the trilogy of Clement of Alexandria (d. 217) marks an advance in the same direction. Gregory of Nyssa (d. 394) then endeavoured in his "Large Catechetical Treatise" (logos katechetikos ho megas) to correlate in a broad synthetic view the fundamental dogmas of the Trinity, the Incarnation, and the Sacraments. In the same manner, though somewhat fragmentarily, Hilary of Poitiers (d. 366) developed in his work "De Trinitate" the principal truths of Christianity.The catechetical instructions of Cyril of Jerusalem (d. 386) especially his five mystagogical treatises, on the Apostles' Creed and the three sacraments of Baptism, Confirmation, and the Holy Eucharist, contain an almost complete dogmatic treatise. Ambrose (d. 397) in his chief works: "De fide", "De Spiritu S.", "De incarnatione", "De mysteriis", "De poenitentia", treated the main points of dogma in classic Latin, though without any attempt at a unifying synthesis. Augustine of Hippo (d. 
430) wrote one or two works, as the "De fide et symbolo" and the "Enchiridium", which are compendia of dogmatic and moral theology, as well as his speculative work De Trinitate.In regard to the Trinity and Christology, Cyril of Alexandria (d. 444) was a model for later dogmatic theologians. Towards the end of the Patristic Age Isidore of Seville (d. 636) in the West and John Damascene (b. ab. 700) in the East paved the way for a systematic treatment of dogmatic theology. Following closely the teachings of Augustine and Gregory the Great, Isidore proposed to collect all the writings of the earlier Fathers and to hand them down as a precious inheritance to posterity. The results of this undertaking were the "Libri III sententiarum seu de summo bono". The work of John Damascene (d. after 754) not only gathered the teachings and views of the Greek Fathers, but reduced them to a systematic whole; he deserves to be called the first and the only scholastic among the Greeks. His main work, which is divided into three parts, is entitled: "Fons scientiæ" (pege gnoseos), because it was intended to be the source, not merely of theology, but of philosophy and Church history as well. Under his leadership, the communion of saints, the veneration of relics and holy images were placed on a basis of orthodoxy. The only Greek prior to him who had produced a complete system of theology was Pseudo-Dionysius the Areopagite, in the fifth century.
History:
Other notable theologians of the period Middle Ages (800–1500) Beginning of Scholasticism (800–1200) The scope of the scholastic method is to analyze the content of dogma by means of dialectics. Scholasticism did not take its guidance from John Damascene or Pseudo-Dionysius, but from Augustine. Augustinian thought runs through the whole progress of Western Catholic philosophy and theology. The Venerable Bede (d. 735), is the link which joins the patristic with the medieval history of theology.Up to the time of Anselm of Canterbury, the theologians were more concerned with preserving than with developing the writings of the Fathers. The beginnings of Scholasticism may be traced back to the days of Charlemagne (d. 814). Theology was cultivated nowhere with greater industry than in the cathedral and monastic schools, founded and fostered by Charlemagne. The earliest signs of a new approach appeared in the ninth century in the work of Paschasius Radbertus, and Rabanus Maurus. These speculations were carried to a greater depth by (Lanfranc, Hugh of Langres, etc.).Anselm of Canterbury (d. 1109) was the first to bring a sharp logic to bear upon the principal dogmas of Christianity, and to draw up a plan for dogmatic theology. Taking the substance of his doctrine from Augustine, Anselm, as a philosopher, was not so much a disciple of Aristotle as of Plato, in whose dialogues he had been schooled.The great mystics, Bernard of Clairvaux, and Bonaventure, were at the same time distinguished Scholastics. It is upon the doctrine of Anselm and Bernard that the Scholastics of succeeding generations took their stand, and it was their spirit which lived in the theological efforts of the University of Paris.The first attempts at a theological system may be seen in the so-called Books of Sentences, collections and interpretations of quotations from the Fathers, more especially of Augustine. One of the earliest of these books is the Summa sententiarum, an anonymous compilation created at the School of Loan some time after 1125. Another is The Sacraments of the Christian Faith written by Hugh of St. Victor around 1135. His works are characterized throughout by a close adherence to Augustine and may serve as guides for beginners in the theology of Augustine. Peter Lombard, called the "Magister Sententiarum" (d. 1164), stands above them all. What Gratian had done for canon law Lombard did for dogmatic and moral theology. He sifted and explained and paraphrased the patristic lore in his "Libri IV sententiarum", and the arrangement which he adopted was, in spite of the lacunæ, so excellent that up to the sixteenth century his work was the standard text-book of theology. The work of interpreting this text began in the thirteenth century, and there was no theologian of note in the Middle Ages who did not write a commentary on the Sentences of Lombard. No other work exerted such a powerful influence on the development of scholastic theology.William of Auvergne (d. 1248), who died as bishop of Paris, deserves special mention. Though preferring the free, unscholastic method of an earlier age, he yet shows himself at once an original philosopher and a profound theologian. Inasmuch as in his numerous monographs on the Trinity, the Incarnation, the Sacraments, etc., he took into account the anti-Christian attacks of the Arabic writers on Aristoteleanism, he is the connecting link between this age and the thirteenth century.
History:
Other notable theologians of the period Scholasticism at its zenith (1200–1300) The most brilliant period of Scholasticism embraces about 100 years, and with it are connected the names of Alexander of Hales, Albertus Magnus, Bonaventure, Thomas Aquinas, and Duns Scotus. This period of Scholasticism was marked by the appearance of the theological Summae, as well as the mendicant orders. In the thirteenth century the champions of Scholasticism were to be found in the Franciscans and Dominicans, beside whom worked also the Augustinians, Carmelites, and Servites.Alexander of Hales (d. about 1245) was a Franciscan, while Albert the Great (d. 1280) was a Dominican. The Summa theologiæ of Alexander of Hales is the largest and most comprehensive work of its kind, flavoured with Platonism. Albert was an intellectual working not only in matters philosophical and theological but in the natural sciences as well. He made a first attempt to present the entire philosophy of Aristotle and to place it at the service of Catholic theology. The logic of Aristotle had been rendered into Latin by Boethius and had been used in the schools since the end of the sixth century; but his physics and metaphysics were made known to Western Christendom only through the Arabic philosophers of the thirteenth century. His works were prohibited by the Synod of Paris, in 1210, and again by a Bull of Pope Gregory IX in 1231. Later Scholastics, led by Albert the Great, went over the faulty Latin translation once more, and reconstructed the doctrine of Aristotle and its principles.Bonaventure (d. 1274) and Thomas Aquinas (d. 1274), mark the highest development of Scholastic theology. St. Bonaventure follows Alexander of Hales, his fellow-religious and predecessor, but surpasses him in mysticism and clearness of diction. Unlike the other Scholastics of this period, he did not write a theological Summa, but a Commentary on the Sentences, as well as his Breviloquium, a condensed Summa. Alexander of Hales and Bonaventure represent the old Franciscan Schools, from which the later School of Duns Scotus essentially differed.Thomas Aquinas holds the same rank among the theologians as does Augustine among the Fathers of the Church. He is distinguished by wealth of ideas, systematic exposition of them, and versatility. For dogmatic theology his most important work is the Summa theologica.
History:
Duns Scotus (1266–1308), by bold and virulent criticism of the Thomistic system, was to a great extent responsible for its decline. Scotus is the founder of a new Scotistic School in the speculative treatment of dogma. Later Franciscans, among them Costanzo de Sarnano, set about minimizing or even reconciling the doctrinal differences of the two schools.
History:
Other notable theologians of the period
Gradual decline of Scholasticism (1300–1500)
The following period showed both consolidation and disruption: the Fraticelli, nominalism, and conflict between Church and State (Philip the Fair, Louis of Bavaria, the Avignon Papacy). The spread of Nominalism owed much to two pupils of Duns Scotus: the Frenchman Peter Aureolus (d. 1321) and the Englishman William Occam (d. 1347). Nominalism had less effect on the Dominican theologians, who were as a rule loyal Thomists.
History:
It was in the early part of the sixteenth century that commentaries on the "Summa Theologica" of Aquinas began to appear. The Franciscans partly favoured Nominalism, partly adhered to pure Scotism. The Augustinian James of Viterbo (d. 1308) attached himself to Ægidius of Rome; Gregory of Rimini (d. 1359) championed an undisguised nominalism. Among the Carmelites, Gerard of Bologna (d. 1317) was a staunch Thomist. Generally speaking, the later Carmelites were followers of Aquinas. The Order of the Carthusians produced in the fifteenth century a prominent theologian in the person of Dionysius Ryckel (d. 1471), surnamed "the Carthusian", who set up his chair in Roermond (the Netherlands). Outside the religious orders were many others. The Englishman Thomas Bradwardine (d. 1340) was the foremost mathematician of his day and a celebrated scholastic philosopher and doctor of theology. He is often called Doctor Profundus. The Carmelite Thomas Netter (d. 1430), surnamed Waldensis, was an English Scholastic theologian and controversialist. Nicholas of Cusa (d. 1464) was an early proponent of Renaissance humanism, and inaugurated a new and speculative system in dogmatic theology. A thorough treatise on the Church was written by John Torquemada (d. 1468), and a similar work by St. John Capistran (d. 1456). Alphonsus Tostatus (d. 1454) interspersed his Biblical commentaries on the Scriptures with dogmatic treatises. His work "Quinque paradoxa" is a treatise on Christology and Mariology.
History:
Other notable theologians of the period
Modern era (1500–1900):
The Protestant Reformation brought about a more accurate definition of important Catholic articles of faith. From the period of the Renaissance the revival of classical studies gave new vigour to exegesis and patrology, while the Reformation stimulated the universities which had remained Catholic, especially in Spain (Salamanca, Alcalá), Portugal (Coimbra) and in the Netherlands (Louvain), to intellectual research. The Sorbonne of Paris regained its lost prestige only towards the end of the sixteenth century. Among the religious orders the newly founded Society of Jesus probably contributed most to the revival and growth of theology. Matthias Joseph Scheeben distinguishes five phases in this period.
Modern era (1500–1900):
First phase: to the Council of Trent (1500–1570) The whole literature of this period bears an apologetical and controversial character and deals with those subjects which had been attacked most bitterly: the rule and sources of faith, the Church, grace, the sacraments, especially the holy Eucharist. Peter Canisius (d. 1597) gave to the Catholics not only his world-renowned catechism, but also a most valuable Mariology. In England John Fisher, Bishop of Rochester (d. 1535), and Thomas More (d. 1535) championed the cause of the Catholic faith. The Jesuit Nicholas Sanders wrote one of the best treatises on the Church. In Belgium the professors of the University of Louvain opened new paths for the study of theology; foremost among them were Jodocus Ravesteyn (d. 1570) and John Hessels (d. 1566). In France Jacques Merlin and Gilbert Génebrard (d. 1597) rendered great services to dogmatic theology. Sylvester Prierias (d. 1523), Ambrose Catharinus (d. 1553), and Cardinal Seripandus are the boast of Italy. But, above all other countries, Spain is distinguished: Alphonsus of Castro (d. 1558), Michael de Medina (d. 1578), Peter de Soto (d. 1563). Some of their works have remained classics, such as "De natura et gratia" (Venice, 1547) of Dominic Soto; "De justificatione libri XV" (Venice, 1546) of Andrew Vega; and "De locis theologicis" (Salamanca, 1563) of Melchior Cano.
Modern era (1500–1900):
Second phase: late Scholasticism at its height (1570–1660) It was not until the seventeenth century, and then only for practical reasons, that moral theology was separated from the main body of Catholic dogma. The necessity of a further division of labour led to the independent development of other disciplines: apologetics, exegesis, church history. While apologetics uses historical and philosophical arguments, dogmatic theology makes use of Scripture and Tradition to prove the Divine character of the different dogmas. Robert Bellarmine (d. 1621) was a controversialist theologian who defended almost the whole of Catholic theology against the attacks of the Reformers. Jacques Davy Duperron (d. 1618) of France wrote a treatise on the Holy Eucharist. The pulpit orator Bossuet (d. 1704) preached from the standpoint of history. The Præscriptiones Catholicae was a voluminous work of the Italian Gravina (7 vols., Naples, 1619–39). Adrian (d. 1669) and Peter de Walemburg (d. 1675) ranked easily among the best controversialists. The development of positive theology went hand in hand with the progress of research into the Patristic Era and into the history of dogma. These studies were especially cultivated in France and Belgium. A number of scholars, thoroughly versed in history, published in monographs the results of their investigations into the history of particular dogmas. Joannes Morinus (d. 1659) made the Sacrament of Penance the subject of special study; Hallier (d. 1659), the Sacrament of Holy Orders; Jean Garnier (d. 1681), Pelagianism; Étienne Agard de Champs (d. 1701), Jansenism; Tricassinus (d. 1681), Augustine's doctrine on grace. The Jesuit Petavius (d. 1652) and the Oratorian Louis Thomassin (d. 1695) wrote "Dogmata theologica". They placed positive theology on a new basis without disregarding the speculative element.
Modern era (1500–1900):
Neo-scholasticism Religious orders fostered scholastic theology. Thomas Aquinas and Bonaventure were proclaimed Doctors of the Church, respectively by Pope Pius V and Pope Sixtus V. At the head of the Thomists was Domingo Bañez (d. 1604), who wrote a commentary on the theological Summa of Aquinas, which, combined with a similar work by Bartholomew Medina (d. 1581), forms a harmonious whole. The Carmelites of Salamanca produced the Cursus Salmanticensis (Salamanca, 1631–1712) in 15 folios, as commentary on the Summa. At Louvain William Estius (d. 1613) wrote a Thomist commentary on the "Liber Sententiarum" of Peter the Lombard, while his colleague Francis Sylvius (d. 1649) explained the theological Summa of the master himself. In the Sorbonne Thomism was represented by Nicholas Ysambert (d. 1624). The University of Salzburg also furnished the Theologia scholastica of Augustine Reding, who held the chair of theology in that university from 1645 to 1658. The Franciscans maintained doctrinal opposition to the Thomists, continuing to produce Scotist commentaries on Peter the Lombard. Scotistic manuals for use in schools were published from about 1580; William Herincx was among their authors. The Capuchins, on the other hand, adhered to Bonaventure, as, e.g., Gaudentius of Brescia (d. 1672).
Modern era (1500–1900):
Jesuit theologians The Society of Jesus substantially adhered to the Summa of Thomas Aquinas, yet at the same time it made use of an eclectic freedom. Luis Molina (d. 1600) was the first Jesuit to write a commentary on the Summa of St. Thomas. Leading Jesuits were the Spaniards Francisco Suárez (d. 1617), and Gabriel Vasquez (d. 1604). Suárez was named "Doctor Eximius" by Pope Benedict XIV. Caspar Hurtado (d. 1646) wrote a commentary on Aquinas. A theological manual was written by Sylvester Maurus (d. 1687). Francesco Sforza Pallavicino, (d. 1667), known as the historiographer of the Council of Trent, won repute as a dogmatic theologian by several of his writings.
Modern era (1500–1900):
Third phase: decline of Scholasticism (1660–1760) Other counter-currents of thought set in: Cartesianism in philosophy, Gallicanism, and Jansenism. Bernard de Rubeis (d. 1775) produced a monograph on original sin. José Saenz d'Aguirre (d. 1699) wrote the three-volume work "Theology of St. Anselm". Among the Franciscans Claudius Frassen (d. 1680) issued his elegant Scotus academicus. Eusebius Amort (d. 1775), the foremost theologian in Germany, combined conservatism with due regard for modern demands. The "Theologia Wirceburgensis" was published in 1766–71 by the Jesuits of Würzburg. The new school of Augustinians based its theology on the system of Gregory of Rimini rather than on that of Ægidius of Rome. To this school belonged Henry Noris (d. 1704). Its best work on dogmatic theology came from the pen of Giovanni Lorenzo Berti (d. 1766). The French Oratory took up Jansenism, with Pasquier Quesnel and Lebrun. The Sorbonne of Paris also adopted aspects of Jansenism and Gallicanism. Exceptions were Louis Abelly (d. 1691) and Honoratus Tournély (d. 1729), whose "Prælectiones dogmaticæ" are numbered among the best theological text-books. Against Jansenism stood the Jesuits Dominic Viva (d. 1726) and La Fontaine (d. 1728). Gallicanism and Josephinism were also opposed by the Jesuit theologians, especially by Francesco Antonio Zaccaria (d. 1795), Alfonso Muzzarelli (d. 1813), Bolgeni (d. 1811), Roncaglia, and others. The Jesuits were seconded by the Dominicans Giuseppe Agostino Orsi (d. 1761) and Thomas Maria Mamachi (d. 1792). The Barnabite Hyacinthe Sigismond Gerdil (d. 1802) was a significant figure in the response of the papacy to the upheavals caused by the French Revolution. Alphonsus Liguori (d. 1787) wrote popular works.
Modern era (1500–1900):
Fourth phase: at a low ebb (1760–1840) In France the influences of Jansenism and Gallicanism were still powerful; in the German Empire Josephinism and Febronianism spread. The suppression of the Society of Jesus by Pope Clement XIV occurred in 1773. The period was dominated by the European Enlightenment, the French Revolution, and German idealism.
Modern era (1500–1900):
Marian Dobmayer (d. 1805) wrote a standard manual. Benedict Stattler (d. 1797) was a member of the German Catholic Enlightenment and wrote against Immanuel Kant's Critique of Pure Reason, as did Patrick Benedict Zimmer (d. 1820).
Modern era (1500–1900):
Fifth phase: restoration of dogmatic theology (1840–1900) Harold Acton remarked on the large number of histories of dogma published in Germany in the years 1838 to 1841. Joseph Görres (d. 1848) and Ignaz von Döllinger (d. 1890) intended that Catholic theology should influence the development of German states. Johann Adam Möhler advanced patrology and symbolism. Both positive and speculative theology received a new lease of life, the former through Heinrich Klee (d. 1840), the latter through Franz Anton Staudenmaier (d. 1856). At the same time men like Joseph Kleutgen (d. 1883), Karl Werner (d. 1888), and Albert Stöckl (d. 1895) supported Scholasticism by thorough historical and systematic writings. In France and Belgium the dogmatic theology of Thomas-Marie-Joseph Gousset (d. 1866) of Reims and the writings of Jean-Baptiste Malou, Bishop of Bruges (d. 1865), exerted great influence. In North America there were the works of Francis Kenrick (d. 1863); Cardinal Camillo Mazzella (d. 1900) wrote his dogmatic works while occupying the chair of theology at Woodstock College, Maryland. In England Nicholas Wiseman (d. 1865) and Cardinal Manning (d. 1892) advanced Catholic theology. In Italy, Gaetano Sanseverino (d. 1865), Matteo Liberatore (d. 1892), and Salvator Tongiorgi (d. 1865) worked to restore Scholastic philosophy, against traditionalism and ontologism, which had a numerous following among Catholic scholars in Italy, France, and Belgium. The pioneer work in positive theology fell to the Jesuit Giovanni Perrone (d. 1876) in Rome. Other theologians, such as Carlo Passaglia (d. 1887), Clement Schrader (d. 1875), Cardinal Franzelin (d. 1886), Domenico Palmieri (d. 1909), and others, continued his work. Among the Dominicans was Cardinal Zigliara, an inspiring teacher and fertile author. Germany produced a number of prominent theologians, such as Johannes von Kuhn (d. 1887), Anton Berlage (d. 1881), Franz Xaver Dieringer (d. 1876), Albert Knoll (d. 1863), Heinrich Joseph Dominicus Denzinger (d. 1883), Constantine von Schäzler (d. 1880), Bernard Jungmann (d. 1895), and others. Germany's leading orthodox theologian at this time was Joseph Scheeben (d. 1888).
Modern era (1500–1900):
First Vatican Council The First Vatican Council was held (1870) and sought a middle ground between the competing approaches of traditionalism and rational liberalism. The Council issued the dogmatic constitution Dei Filius, which stated in part that there is no real discrepancy between faith and reason, since the same God who reveals mysteries and infuses faith has bestowed the light of reason on the human mind; and that any apparent contradiction is mainly due either to the dogmas of faith not having been understood and interpreted fully, or to unproven scientific or critical theories being assumed as certain. Pope Leo XIII in his Encyclical Æterni Patris (1879) restored the study of the Scholastics, especially of St. Thomas, in all higher Catholic schools, a measure which was again emphasized by Pope Pius X.
Modern era (1500–1900):
Ludwig Ott's 1952 Fundamentals of Catholic Dogma is considered a standard reference work on dogmatics. An updated and revised edition was issued by Baronius Press in 2018.
Development of Dogma:
Around 434, Vincent of Lérins wrote Commonitorium, in which he recognized that doctrine can develop over time. New doctrines could not be declared, but older ones better understood. In John Henry Newman's 1845 "Essay on the Development of Christian Doctrine", Newman listed seven criteria which "...can be applied in proper proportions to that further interpretation of dogmas aimed at giving them contemporary relevance." After its publication, Newman developed a lengthy correspondence with Giovanni Perrone, chair of dogmatic theology at the Roman College, particularly on the development of doctrine. An advisor to Popes Gregory XVI and Pius IX, Perrone was consultor of various congregations and was active in the discussions which resulted in the 1854 dogmatic definition of the Immaculate Conception.
**Airport Expressway**
Airport Expressway:
An airport expressway is an expressway that connects to an airport, carrying road traffic to and from it. Airport Expressway may also refer to:
Roads:
Canada
The portion of Highway 427 between Highway 401 and Highway 409 in Toronto, Ontario, from 1964 to 1971
Ontario Highway 409 (Belfield Expressway), unofficially called the Airport Expressway, as the road connects Highway 401 to Toronto Pearson International Airport
China
Airport Expressway (Beijing) in Beijing, China
Philippines
Ninoy Aquino International Airport Expressway, an elevated airport expressway in Manila, Philippines
The portion of the Subic–Clark–Tarlac Expressway between Clark South Interchange and Clark North Interchange located in the Clark Freeport Zone
Sri Lanka
Colombo–Katunayake Expressway, between Bandaranaike International Airport, Katunayake, and Colombo
United States
Airport Expressway (Fort Wayne, Indiana) in Fort Wayne, Indiana
The Sam Jones Expressway in Indianapolis, Indiana, formerly known as the Airport Expressway
Airport Expressway in Miami, Florida
A one-mile portion of New York State Route 204 near Rochester, New York
**Linear trend estimation**
Linear trend estimation:
Linear trend estimation is a statistical technique to aid interpretation of data. When a series of measurements of a process is treated as, for example, a sequence or time series, trend estimation can be used to make and justify statements about tendencies in the data, by relating the measurements to the times at which they occurred. This model can then be used to describe the behaviour of the observed data, without explaining it.
Linear trend estimation:
In particular, it may be useful to determine if measurements exhibit an increasing or decreasing trend which is statistically distinguished from random behaviour. Some examples are determining the trend of the daily average temperatures at a given location from winter to summer, and determining the trend in a global temperature series over the last 100 years. In the latter case, issues of homogeneity are important (for example, about whether the series is equally reliable throughout its length).
Fitting a trend: least-squares:
Given a set of data and the desire to produce some kind of model of those data, there are a variety of functions that can be chosen for the fit. If there is no prior understanding of the data, then the simplest function to fit is a straight line with the data values on the y axis, and time (t = 1, 2, 3, ...) on the x axis.
Fitting a trend: least-squares:
Once it has been decided to fit a straight line, there are various ways to do so, but the most usual choice is a least-squares fit. This method minimizes the sum of the squared errors in the data series y.
Fitting a trend: least-squares:
Given a set of points in time t and data values yt observed for those points in time, values of a^ and b^ are chosen so that the sum of squared deviations ∑t [yt − (a^t + b^)]² is minimized. Here a^t + b^ is the fitted trend line, so the sum of squared deviations from the trend line is what is being minimized. This can always be done in closed form since this is a case of simple linear regression.
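As an illustration, the closed-form simple-linear-regression solution can be computed directly. This is a minimal sketch in Python; the function name and the toy series are illustrative choices, not taken from the article.

```python
import numpy as np

def fit_trend(y):
    """Least-squares straight-line fit y ≈ a*t + b with t = 1, 2, ..., n.

    Returns the estimated slope a and intercept b in closed form
    (simple linear regression).
    """
    y = np.asarray(y, dtype=float)
    t = np.arange(1, len(y) + 1)
    t_bar, y_bar = t.mean(), y.mean()
    a = np.sum((t - t_bar) * (y - y_bar)) / np.sum((t - t_bar) ** 2)
    b = y_bar - a * t_bar
    return a, b

# Example: a noiseless increasing series recovers slope 2 and intercept 3.
print(fit_trend([5, 7, 9, 11]))  # approximately (2.0, 3.0)
```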
Fitting a trend: least-squares:
For the rest of this article, “trend” will mean the slope of the least squares line, since this is a common convention.
Trends in random data:
Before considering trends in real data, it is useful to understand trends in random data.
Trends in random data:
If a series which is known to be random is analysed – fair dice falls, or computer-generated pseudo-random numbers – and a trend line is fitted through the data, the chances of an exactly zero estimated trend are negligible. But the trend would be expected to be small. If an individual series of observations is generated from simulations that employ a given variance of noise that equals the observed variance of our data series of interest, and a given length (say, 100 points), a large number of such simulated series (say, 100,000 series) can be generated. These 100,000 series can then be analysed individually to calculate estimated trends in each series, and these results establish a distribution of estimated trends that are to be expected from such random data – see diagram. Such a distribution will be normal according to the central limit theorem except in pathological cases. A level of statistical certainty, S, may now be selected – 95% confidence is typical; 99% would be stricter, 90% looser – and the following question can be asked: what is the borderline trend value V that would result in S% of trends being between −V and +V? The above procedure can be replaced by a permutation test. For this, the set of 100,000 generated series would be replaced by 100,000 series constructed by randomly shuffling the observed data series; clearly such a constructed series would be trend-free, so as with the approach of using simulated data these series can be used to generate borderline trend values V and −V.
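The simulation procedure described above can be sketched as follows. This is an illustrative Python example; the number of points, the number of simulated series, and the noise level are placeholders to be replaced by values appropriate to the data of interest.

```python
import numpy as np

rng = np.random.default_rng(0)

def slope(y):
    """Least-squares slope of y against t = 1..n (closed form)."""
    t = np.arange(1, len(y) + 1)
    return np.sum((t - t.mean()) * (y - y.mean())) / np.sum((t - t.mean()) ** 2)

n_points, n_series = 100, 100_000
noise_sd = 1.0  # set this to the observed standard deviation of the series of interest

# Simulate many trend-free series and collect their estimated slopes.
trends = np.array([slope(rng.normal(0.0, noise_sd, n_points)) for _ in range(n_series)])

# Borderline value V such that about 95% of trend-free slopes fall in (-V, V).
V = np.quantile(np.abs(trends), 0.95)
print(V)

# For the permutation-test variant, replace rng.normal(...) above with
# rng.permutation(observed_series), i.e. random shuffles of the actual data.
```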
Trends in random data:
In the above discussion the distribution of trends was calculated by simulation, from a large number of trials. In simple cases (normally distributed random noise being a classic) the distribution of trends can be calculated exactly without simulation.
Trends in random data:
The range (−V, V) can be employed in deciding whether a trend estimated from the actual data is unlikely to have come from a data series that truly has a zero trend. If the estimated value of the regression parameter a lies outside this range, such a result could have occurred in the presence of a true zero trend only, for example, one time out of twenty if the confidence value S=95% was used; in this case, it can be said that, at degree of certainty S, we reject the null hypothesis that the true underlying trend is zero.
Trends in random data:
However, note that whatever value of S we choose, then a given fraction, 1 − S, of truly random series will be declared (falsely, by construction) to have a significant trend. Conversely, a certain fraction of series that in fact have a non-zero trend will not be declared to have a trend.
Data as trend plus noise:
To analyse a (time) series of data, we assume that it may be represented as trend plus noise: yt = at + b + et, where a and b are unknown constants and the et's are randomly distributed errors. If one can reject the null hypothesis that the errors are non-stationary, then the series {yt}, although non-stationary because of the trend, is called trend-stationary. The least squares method assumes the errors to be independently distributed with a normal distribution. If this is not the case, hypothesis tests about the unknown parameters a and b may be inaccurate. It is simplest if the et's all have the same distribution, but if not (if some have higher variance, meaning that those data points are effectively less certain), then this can be taken into account during the least squares fitting by weighting each point by the inverse of the variance of that point.
Data as trend plus noise:
In most cases, where only a single time series exists to be analysed, the variance of the et's is estimated by fitting a trend to obtain the estimated parameter values a^ and b^. The predicted values y^ = a^t + b^ are then subtracted from the data yt (detrending the data), leaving the residuals e^t as the detrended data, and the variance of the et's is estimated from these residuals; this is often the only way of estimating that variance.
Data as trend plus noise:
Once we know the "noise" of the series, we can then assess the significance of the trend by making the null hypothesis that the trend, a, is not different from 0. From the above discussion of trends in random data with known variance, we know the distribution of calculated trends to be expected from random (trendless) data. If the estimated trend, a^, is larger than the critical value for a certain significance level, then the estimated trend is deemed significantly different from zero at that significance level, and the null hypothesis of zero underlying trend is rejected.
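A hedged sketch of this workflow (fit, detrend, test the slope against zero) in Python might look like the following; the synthetic series and the use of scipy.stats.linregress for the significance test are illustrative choices, not prescribed by the article.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
t = np.arange(1, 101)
y = 0.05 * t + rng.normal(0.0, 1.0, t.size)   # synthetic series: trend 0.05 plus noise

res = stats.linregress(t, y)                  # least-squares fit y ≈ slope*t + intercept
residuals = y - (res.slope * t + res.intercept)   # detrended data
noise_sd = residuals.std(ddof=2)              # residual standard deviation (2 fitted parameters)

# Under the standard assumptions, res.pvalue tests the null hypothesis slope = 0.
print(res.slope, noise_sd, res.pvalue)
```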
Data as trend plus noise:
The use of a linear trend line has been the subject of criticism, leading to a search for alternative approaches to avoid its use in model estimation. One of the alternative approaches involves unit root tests and the cointegration technique in econometric studies.
Data as trend plus noise:
The estimated coefficient associated with a linear trend variable such as time is interpreted as a measure of the impact of a number of unknown or known but unmeasurable factors on the dependent variable over one unit of time. Strictly speaking, that interpretation is applicable for the estimation time frame only. Outside that time frame, one does not know how those unmeasurable factors behave both qualitatively and quantitatively. Furthermore, the linearity of the time trend poses many questions: (i) Why should it be linear? (ii) If the trend is non-linear then under what conditions does its inclusion influence the magnitude as well as the statistical significance of the estimates of other parameters in the model? (iii) The inclusion of a linear time trend in a model precludes by assumption the presence of fluctuations in the tendencies of the dependent variable over time; is this necessarily valid in a particular context? (iv) And, does a spurious relationship exist in the model because an underlying causative variable is itself time-trending? Research results of mathematicians, statisticians, econometricians, and economists have been published in response to those questions. For example, detailed notes on the meaning of linear time trends in regression models are given in Cameron (2005); Granger, Engle and many other econometricians have written on stationarity, unit root testing, co-integration and related issues (a summary of some of the works in this area can be found in an information paper by the Royal Swedish Academy of Sciences (2003)); and Ho-Trieu & Tucker (1990) have written on logarithmic time trends, with results indicating that linear time trends are special cases of cycles.
Data as trend plus noise:
Example: noisy time series It is harder to see a trend in a noisy time series. For example, if the true series is 0, 1, 2, 3 all plus some independent normally distributed "noise" e of standard deviation E, and we have a sample series of length 50, then if E = 0.1 the trend will be obvious; if E = 100 the trend will probably be visible; but if E = 10000 the trend will be buried in the noise.
Data as trend plus noise:
If we consider a concrete example, the global surface temperature record of the past 140 years as presented by the IPCC: then the interannual variation is about 0.2 °C and the trend about 0.6 °C over 140 years, with 95% confidence limits of 0.2 °C (by coincidence, about the same value as the interannual variation). Hence the trend is statistically different from 0. However, as noted elsewhere this time series doesn't conform to the assumptions necessary for least squares to be valid.
Goodness of fit (r-squared) and trend:
The least-squares fitting process produces a value – r-squared (r2) – which is 1 minus the ratio of the variance of the residuals to the variance of the dependent variable. It says what fraction of the variance of the data is explained by the fitted trend line. It does not relate to the statistical significance of the trend line (see graph); statistical significance of the trend is determined by its t-statistic. Often, filtering a series increases r2 while making little difference to the fitted trend.
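The point that filtering can raise r-squared without materially changing the fitted trend can be illustrated with a small experiment. The moving-average filter, window length, and synthetic data below are assumptions made for the sake of the sketch.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
t = np.arange(1, 201)
y = 0.02 * t + rng.normal(0.0, 1.0, t.size)

raw = stats.linregress(t, y)

# Smooth the series with a simple moving average (a low-pass filter).
window = 11
kernel = np.ones(window) / window
y_smooth = np.convolve(y, kernel, mode="valid")
t_smooth = t[window - 1:]   # align the time axis with the 'valid' convolution output

smooth = stats.linregress(t_smooth, y_smooth)

# The fitted slopes are typically similar, but r-squared is much higher for the
# filtered series; note also that the p-value reported for the smoothed series is
# not trustworthy, because smoothing introduces autocorrelation in the residuals.
print(raw.slope, raw.rvalue ** 2)
print(smooth.slope, smooth.rvalue ** 2)
```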
Real data may need more complicated models:
Thus far the data have been assumed to consist of a trend plus noise, with the noise at each data point being an independent, identically distributed random variable with a normal distribution. Real data (for example climate data) may not fulfill these criteria. This is important, as it makes an enormous difference to the ease with which the statistics can be analysed so as to extract maximum information from the data series. If there are other non-linear effects that have a correlation to the independent variable (such as cyclic influences), the use of least-squares estimation of the trend is not valid. Also where the variations are significantly larger than the resulting straight line trend, the choice of start and end points can significantly change the result. That is, the model is mathematically misspecified. Statistical inferences (tests for the presence of trend, confidence intervals for the trend, etc.) are invalid unless departures from the standard assumptions are properly accounted for, for example as follows: Dependence: autocorrelated time series might be modeled using autoregressive moving average models.
Real data may need more complicated models:
Non-constant variance: in the simplest cases weighted least squares might be used.
Non-normal distribution for errors: in the simplest cases a generalised linear model might be applicable.
Unit root: taking first (or occasionally second) differences of the data, with the level of differencing being identified through various unit root tests. In R, the linear trend in data can be estimated by using the 'tslm' function of the 'forecast' package.
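For readers not using R, a rough Python analogue of fitting a linear trend and screening for a unit root might look like this; the statsmodels calls shown (ordinary least squares and the augmented Dickey–Fuller test) are one possible choice, and the random-walk series is purely illustrative.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(3)
y = np.cumsum(rng.normal(0.0, 1.0, 200))   # a random walk: unit root, not trend-stationary

# Linear trend fit (roughly what R's forecast::tslm(y ~ trend) does).
t = np.arange(1, y.size + 1)
X = sm.add_constant(t)
fit = sm.OLS(y, X).fit()
print(fit.params)            # [intercept, slope]

# Augmented Dickey-Fuller test: a high p-value is consistent with a unit root,
# suggesting differencing rather than fitting a deterministic trend.
adf_stat, pvalue, *_ = adfuller(y)
print(pvalue)

# First differences of the data.
dy = np.diff(y)
```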
Trends in clinical data:
Medical and biomedical studies often seek to determine a link in sets of data, such as (as indicated above) three different diseases. But data may also be linked in time (such as change in the effect of a drug from baseline, to month 1, to month 2), or by an external factor that may or may not be determined by the researcher and/or their subject (such as no pain, mild pain, moderate pain, severe pain). In these cases one would expect the effect test statistic (e.g. influence of a statin on levels of cholesterol, an analgesic on the degree of pain, or increasing doses of a drug on a measurable index) to change in direct order as the effect develops. Suppose the mean level of cholesterol before and after the prescription of a statin falls from 5.6 mmol/L at baseline to 3.4 mmol/L at one month and to 3.7 mmol/L at two months. Given sufficient power, an ANOVA would most likely find a significant fall at one and two months, but the fall is not linear. Furthermore, a post-hoc test may be required. An alternative test may be repeated measures (two way) ANOVA, or Friedman test, depending on the nature of the data. Nevertheless, because the groups are ordered, a standard ANOVA is inappropriate. Should the cholesterol fall from 5.4 to 4.1 to 3.7, there is a clear linear trend. The same principle may be applied to the effects of allele/genotype frequency, where it could be argued that SNPs in nucleotides XX, XY, YY are in fact a trend of no Y's, one Y, and then two Y's. The mathematics of linear trend estimation is a variant of the standard ANOVA, giving different information, and would be the most appropriate test if the researchers are hypothesising a trend effect in their test statistic. One example [1] is of levels of serum trypsin in six groups of subjects ordered by age decade (10–19 years up to 60–69 years). Levels of trypsin (ng/mL) rise in a direct linear trend of 128, 152, 194, 207, 215, 218. Unsurprisingly, a 'standard' ANOVA gives p < 0.0001, whereas linear trend estimation gives p = 0.00006. Incidentally, it could be reasonably argued that as age is a natural continuously variable index, it should not be categorised into decades, and an effect of age and serum trypsin sought by correlation (assuming the raw data is available). A further example is of a substance measured at four time points in different groups: mean [SD] (1) 1.6 [0.56], (2) 1.94 [0.75], (3) 2.22 [0.66], (4) 2.40 [0.79], which is a clear trend. ANOVA gives p = 0.091, because the overall variance exceeds the means, whereas linear trend estimation gives p = 0.012. However, should the data have been collected at four time points in the same individuals, linear trend estimation would be inappropriate, and a two-way (repeated measures) ANOVA applied.
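The contrast between an ordering-blind one-way ANOVA and a simple trend test can be sketched as follows. The data are synthetic, and regressing the measurements on ordered group codes is only one common way to encode a linear trend; it is not necessarily the exact test used in the cited example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Hypothetical data: four ordered groups (e.g. four time points), 20 subjects each,
# with group means that rise steadily; the numbers are illustrative only.
group_means = [1.6, 1.94, 2.22, 2.40]
groups = [rng.normal(m, 0.7, 20) for m in group_means]

# Standard one-way ANOVA ignores the ordering of the groups.
f_stat, p_anova = stats.f_oneway(*groups)

# A simple trend test regresses the measurements on the ordered group codes.
codes = np.concatenate([np.full(len(g), i + 1) for i, g in enumerate(groups)])
values = np.concatenate(groups)
trend = stats.linregress(codes, values)

print(p_anova, trend.pvalue)  # the trend test is usually more sensitive to an ordered effect
```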
**Omitted-variable bias**
Omitted-variable bias:
In statistics, omitted-variable bias (OVB) occurs when a statistical model leaves out one or more relevant variables. The bias results in the model attributing the effect of the missing variables to those that were included.
More specifically, OVB is the bias that appears in the estimates of parameters in a regression analysis, when the assumed specification is incorrect in that it omits an independent variable that is a determinant of the dependent variable and correlated with one or more of the included independent variables.
In linear regression:
Intuition Suppose the true cause-and-effect relationship is given by: y=a+bx+cz+u with parameters a, b, c, dependent variable y, independent variables x and z, and error term u. We wish to know the effect of x itself upon y (that is, we wish to obtain an estimate of b).
In linear regression:
Two conditions must hold true for omitted-variable bias to exist in linear regression: the omitted variable must be a determinant of the dependent variable (i.e., its true regression coefficient must not be zero); and the omitted variable must be correlated with an independent variable specified in the regression (i.e., cov(z,x) must not equal zero).Suppose we omit z from the regression, and suppose the relation between x and z is given by z=d+fx+e with parameters d, f and error term e. Substituting the second equation into the first gives y=(a+cd)+(b+cf)x+(u+ce).
In linear regression:
If a regression of y is conducted upon x only, this last equation is what is estimated, and the regression coefficient on x is actually an estimate of (b + cf ), giving not simply an estimate of the desired direct effect of x upon y (which is b), but rather of its sum with the indirect effect (the effect f of x on z times the effect c of z on y). Thus by omitting the variable z from the regression, we have estimated the total derivative of y with respect to x rather than its partial derivative with respect to x. These differ if both c and f are non-zero.
In linear regression:
The direction and extent of the bias are both contained in cf, since the effect sought is b but the regression estimates b+cf. The extent of the bias is the absolute value of cf, and the direction of bias is upward (toward a more positive or less negative value) if cf > 0 (if the direction of correlation between y and z is the same as that between x and z), and it is downward otherwise.
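A quick simulation makes the b + cf result concrete. This is an illustrative Python sketch with arbitrarily chosen parameter values.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
a, b, c = 1.0, 2.0, 3.0     # true model: y = a + b*x + c*z + u
d, f = 0.5, 0.8             # z = d + f*x + e, so x and z are correlated

x = rng.normal(0.0, 1.0, n)
z = d + f * x + rng.normal(0.0, 1.0, n)
y = a + b * x + c * z + rng.normal(0.0, 1.0, n)

# Correctly specified regression on [1, x, z] recovers roughly (a, b, c).
X_full = np.column_stack([np.ones(n), x, z])
print(np.linalg.lstsq(X_full, y, rcond=None)[0])

# Omitting z: the coefficient on x converges to b + c*f = 2 + 3*0.8 = 4.4, not b.
X_short = np.column_stack([np.ones(n), x])
print(np.linalg.lstsq(X_short, y, rcond=None)[0])
```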
In linear regression:
Detailed analysis As an example, consider a linear model of the form yi = xiβ + ziδ + ui, i = 1, …, n, where xi is a 1 × p row vector of values of p independent variables observed at time i or for the i-th study participant; β is a p × 1 column vector of unobservable parameters (the response coefficients of the dependent variable to each of the p independent variables in xi) to be estimated; zi is a scalar and is the value of another independent variable that is observed at time i or for the i-th study participant; δ is a scalar and is an unobservable parameter (the response coefficient of the dependent variable to zi) to be estimated; ui is the unobservable error term occurring at time i or for the i-th study participant, an unobserved realization of a random variable having expected value 0 (conditionally on xi and zi); and yi is the observation of the dependent variable at time i or for the i-th study participant. We collect the observations of all variables subscripted i = 1, ..., n and stack them one below another to obtain the n × p matrix X (whose i-th row is xi) and the n × 1 column vectors Y, Z, and U (whose i-th entries are yi, zi, and ui respectively).
In linear regression:
If the independent variable z is omitted from the regression, then the estimated values of the response parameters of the other independent variables will be given by the usual least squares calculation, β^ = (X′X)⁻¹X′Y (where the "prime" notation means the transpose of a matrix and the ⁻¹ superscript denotes matrix inversion).
Substituting for Y based on the assumed linear model gives β^ = (X′X)⁻¹X′(Xβ + Zδ + U) = (X′X)⁻¹X′Xβ + (X′X)⁻¹X′Zδ + (X′X)⁻¹X′U = β + (X′X)⁻¹X′Zδ + (X′X)⁻¹X′U.
On taking expectations, the contribution of the final term is zero; this follows from the assumption that U is uncorrelated with the regressors X. On simplifying the remaining terms, E[β^ | X] = β + (X′X)⁻¹X′Zδ, so the expected value of the estimator differs from the true β by the bias term (X′X)⁻¹X′Zδ.
In linear regression:
The second term after the equal sign is the omitted-variable bias in this case, which is non-zero if the omitted variable z is correlated with any of the included variables in the matrix X (that is, if X′Z does not equal a vector of zeroes). Note that the bias is equal to the weighted portion of zi which is "explained" by xi.
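The matrix form of the bias can be checked numerically; the following sketch uses assumed dimensions and coefficients purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
n, p = 5_000, 2
beta = np.array([1.5, -0.7])   # true coefficients on the included regressors
delta = 2.0                    # true coefficient on the omitted regressor z

X = rng.normal(size=(n, p))
Z = 0.6 * X[:, 0] + rng.normal(size=n)   # z correlated with the first column of X
U = rng.normal(size=n)
Y = X @ beta + Z * delta + U

# OLS estimate that omits Z.
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)

# Bias predicted by the formula: (X'X)^(-1) X'Z * delta.
predicted_bias = np.linalg.solve(X.T @ X, X.T @ Z) * delta

print(beta_hat - beta)     # observed deviation from the true beta
print(predicted_bias)      # close to the line above, up to sampling noise
```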
Effect in ordinary least squares:
The Gauss–Markov theorem states that regression models which fulfill the classical linear regression model assumptions provide the most efficient, linear and unbiased estimators. In ordinary least squares, the relevant assumption of the classical linear regression model is that the error term is uncorrelated with the regressors.
Effect in ordinary least squares:
The presence of omitted-variable bias violates this particular assumption. The violation causes the OLS estimator to be biased and inconsistent. The direction of the bias depends on the estimators as well as the covariance between the regressors and the omitted variables. A positive covariance of the omitted variable with both a regressor and the dependent variable will lead the OLS estimate of the included regressor's coefficient to be greater than the true value of that coefficient. This effect can be seen by taking the expectation of the parameter, as shown in the previous section.
**Ubiquitin carboxyl-terminal hydrolase L5**
Ubiquitin carboxyl-terminal hydrolase L5:
Ubiquitin carboxyl-terminal hydrolase isozyme L5 is an enzyme that in humans is encoded by the UCHL5 gene.
**Quark (dairy product)**
Quark (dairy product):
Quark or quarg is a type of fresh dairy product made from milk. The milk is soured, usually by adding lactic acid bacteria cultures, and strained once the desired curdling is achieved. It can be classified as fresh acid-set cheese. Traditional quark can be made without rennet, but in modern dairies small quantities of rennet are typically added. It is soft, white and unaged, and usually has no salt added.
Name:
Quark is possibly described by Tacitus in his book Germania as lac concretum ("thick milk"), eaten by Germanic peoples. However, this could also have meant soured milk or any other kind of fresh cheese or fermented milk product.
Although quark is sometimes referred to loosely as a type of "cottage cheese", they can be distinguished by the different production aspects and textural quality, with the cottage cheese grains described as more chewy or meaty.
Name:
Etymology The word Quark (Late Middle High German: quarc, twarc, zwarg; Lower Saxon: dwarg), with usage in German documented since the 14th century, is thought to derive from a West Slavic equivalent, such as Lower Sorbian twarog, Upper Sorbian twaroh, Polish twaróg, Czech and Slovak tvaroh. The word is also cognate with Russian tvorog (творог) and Belarusian tvaroh (тварог). The original Old Slavonic tvarogъ (тварогъ) is supposed to be related to the Church Slavonic творъ, tr. tvor, meaning "form". The meaning can thus be interpreted as "milk that solidified and took a form". The word formation is thus similar to that of the Italian formaggio and French fromage.
Name:
More cognates and forms The Slavic words may also be cognate with the Greek name for cheese, τῡρός (tūrós). A cognate term for quark, túró, is used in Hungarian.
Cognates also occur in Scandinavia (Danish kvark, Norwegian and Swedish kvarg) and the Netherlands (Dutch kwark). The Old English form is geþweor. Other German forms include Quarck and Quaergel (Quärgel).
Production:
Quark is a member of the acid-set cheese group, whose coagulation mainly relies on the acidity produced by lactic acid bacteria feeding on the lactose. But moderate amounts of rennet have also been in use, both at the home consumption level and the industrial level. Manufacture of quark normally uses pasteurized skim milk as the main ingredient, but cream can be added later to adjust fat content. The lactic acid bacteria are introduced in the form of mesophilic Lactococcus starter cultures. In the dairy industry today, quark is mostly produced with a small quantity of rennet, added after the culture when the solution is still only slightly acidic (pH 6.1). The solution will then continue to acidify, allowed to reach an approximate pH of 4.6. At this point, the acidity causes the casein proteins in the milk to begin precipitating. In Germany, it is continuously stirred to prevent hardening, resulting in a thick, creamy texture. According to German regulations on cheese (Käseverordnung), "fresh cheeses" (Frischkäse) such as quark or cottage cheese must contain at least 73% water in the fat-free component. German quark is usually sold in plastic tubs. This type of quark has the firmness of sour cream but is slightly drier, resulting in a somewhat crumbly texture (like ricotta). Basic quark contains about 0.2% fat; this basic quark or skimmed quark (Magerquark) must under German law have less than 10% fat by dry mass. Quark with higher fat content is made by adding cream after cooling. It has a smooth and creamy texture, and is slightly sweet (unlike sour cream). A firmer version called Schichtkäse (layer cheese) is often used for baking. Schichtkäse is distinguished from quark by having an added layer of cream sandwiched between two layers of quark.
Production:
Quark may be flavored with herbs, spices, or fruit. In general, the dry mass of quark has 1% to 40% fat; most of the rest is protein (80% of which is casein), calcium, and phosphate.
In the 19th century, there was no industrial production of quark (as an end product) and it was produced entirely for home use. In the traditional home-made process, the milk would be allowed to stand until it soured naturally through the presence of naturally occurring bacteria, although the hardening could be encouraged with the addition of some rennet.
Production:
Some or most of the whey is removed to standardize the quark to the desired thickness. Traditionally, this is done by hanging the cheese in a muslin bag or a loosely woven cotton gauze called cheesecloth and letting the whey drip off, which gives quark its distinctive shape of a wedge with rounded edges. In industrial production, however, cheese is separated from whey in a centrifuge and later formed into blocks. Variations in quark preparation occur across different regions of Germany and Austria.
Common uses:
Various cuisines feature quark as an ingredient for appetizers, salads, main dishes, side dishes and desserts.
In Germany, quark is sold in cubic plastic tubs and usually comes in three different varieties: Magerquark (skimmed quark, <10% fat by dry mass), "regular" quark (20% fat in dry mass) and Sahnequark ("creamy quark", 40% fat in dry mass) with added cream. Similar gradations in fat content are also common in Eastern Europe.
While Magerquark is often used for baking or is eaten as breakfast with a side of fruit or muesli, Sahnequark also forms the basis of a large number of quark desserts (called Quarkspeise when homemade or Quarkdessert when sold in German).
Much like yoghurts in some parts of the world, these foods mostly come with fruit flavoring (Früchtequark, fruit quark), sometimes with vanilla and are often also simply referred to as quark.
Common uses:
Dishes in Germanic-speaking areas One common use for quark is in making cheesecake called Käsekuchen or Quarkkuchen in Germany. Quark cheesecake is called Topfenkuchen in Austria. The Quarktorte in Switzerland may be equivalent, though this has also been described as a torte that combines quark and cream. In the neighboring Netherlands there is a different variant; these cakes, called kwarktaart in Dutch, usually have a cookie crumb crust, and the quark is typically mixed with whipped cream, gelatine, and sugar. These cakes do not require baking or frying, but instead are placed in the refrigerator to firm up. They may be made with quark or with the yogurt-like quark that is common in the Netherlands (see above). In Austria, Topfen is commonly used in baking for desserts like the above-mentioned Topfenkuchen, Topfenstrudel and Topfen-Palatschinken (Topfen-filled crèpes).
Common uses:
Quark is also often used as an ingredient for sandwiches, salads, and savory dishes. Quark, vegetable oil and wheat flour are the ingredients of a popular kind of baking powder leavened dough called Quarkölteig ("quark oil dough"), used in German cuisine as an alternative to yeast-leavened dough in home baking, since it is considerably easier to handle and requires no rising period. The resulting baked goods look and taste very similar to yeast-leavened goods, although they do not last as long and are thus usually consumed immediately after baking.
Common uses:
In Germany, quark mixed with chopped onions and herbs such as parsley and chives is called Kräuterquark. Kräuterquark is commonly eaten with boiled potatoes and has some similarity to tzatziki which is based on yoghurt.
Quark with linseed oil and potatoes is the national dish of the Sorbs in Lusatia and an iconic dish in Brandenburg and parts of Saxony. Quark also has been used among Ashkenazi Jews.
Availability in other countries:
Most of the Austrian and other Central and Eastern European varieties contain less whey and are therefore drier and more solid than the German and Scandinavian ones.
Availability in other countries:
In the Netherlands, many products labelled "kwark" are not based on quark as described in this article (fresh acid-set cheese), but instead a thick yogurt-like product made using yogurt bacteria (such as Streptococcus thermophilus and Lactobacillus acidophilus) in a quicker process using a centrifuge. Under Russian governmental regulations, tvorog is distinguished from cheeses, and classified as a separate type of dairy product. Typical tvorog usually contains 65–80% water out of the total mass. In several languages quark is also known as "white cheese" (French: fromage blanc, southern German: Weißkäse or weißer Käs, Hebrew: גבינה לבנה, romanized: gevina levana, Lithuanian: baltas sūris, Polish: biały ser, Serbian: beli sir), as opposed to any rennet-set "yellow cheese". Another French name for it is fromage frais (fresh cheese), where the difference to fromage blanc is defined by French legislation: a product named fromage frais must contain live cultures when sold, whereas with fromage blanc fermentation has been halted. In Swiss French, it is usually called séré.
Availability in other countries:
In Israel, gevina levana denotes the creamy variety similar to the German types of quark. The firmer version which was introduced to Israel during the Aliyah of the 1990s by immigrants from the former Soviet Union is differentiated as tvorog.
Availability in other countries:
In Austria, the name Topfen (pot cheese) is common. In Flanders, it is called plattekaas (runny cheese). In Finnish, it is known as rahka, while in Estonian as kohupiim (foamy milk), in Lithuanian as varškės sūris (curd cheese), in Ukrainian it is frequently called cир, and in Latvian is known as biezpiens (thick milk). Its Italian name is giuncata or cagliata (curd). Among the Albanians quark is known as gjizë.
Availability in other countries:
It is traditional in the cuisines of Baltic, Germanic and Slavic-speaking countries as well as amongst Ashkenazi Jews and various Turkic peoples.
Dictionaries sometimes translate it as curd cheese, cottage cheese, farmer cheese or junket. In Germany, quark and cottage cheese are considered to be different types of fresh cheese and quark is often not considered cheese at all, while in Eastern Europe cottage cheese is usually viewed as a type of quark (e.g. Ukrainian for cottage cheese is "сир" syr).
Availability in other countries:
Quark is similar to French fromage blanc. It is distinct from Italian ricotta because ricotta (Italian "recooked") is made from scalded whey. Quark is somewhat similar to yogurt cheeses such as the South Asian chak(k)a, the Arabic labneh, and the Central Asian suzma or kashk, but while these products are obtained by straining yogurt (milk fermented with thermophile bacteria), quark is made from soured milk fermented with mesophile bacteria.
Availability in other countries:
Although common in continental Europe, manufacturing of quark is rare in the Americas. A few dairies manufacture it, such as the Vermont Creamery in Vermont, and some specialty retailers carry it. Lifeway Foods manufactures a product under the title "farmer cheese" which is available in a variety of metropolitan locations with Jewish, as well as former Soviet populations. Elli Quark, a Californian manufacturer of quark, offers soft quark in different flavors. In Canada, the firmer East European variety of quark is manufactured by Liberté Natural Foods; a softer German-style quark is manufactured in the Didsbury, Alberta, plant of Calgary-based Foothills Creamery. Glengarry Fine Cheesemaking in Lancaster (Eastern Ontario) also produces Quark. Also available in Canada is the very similar Dry Curd Cottage Cheese manufactured by Dairyland. Quark may also be available as baking cheese, pressed cottage cheese, or fromage frais. In Australia, Ukrainian traditional quark is produced by Blue Bay Cheese in the Mornington Peninsula. It is also sometimes available from supermarkets labelled as quark or quarg. In New Zealand, European traditional Kwark is produced by Karikaas in North Canterbury. It is available in 350 g pots and available online and in speciality stores such as Moore Wilsons. In the United Kingdom, fat-free quark is produced by several independent manufacturers based throughout the country. All the big four supermarkets in the UK sell their own branded quark, as well as other brands of quark. In Finland, quark (rahka) is commonly available in supermarkets, both in plain and flavored forms. It is produced by Arla, Valio and is also sold under private labels by Kesko and S Group. It is often used as a dessert when mixed with berries and whipped cream. Karelians have a dish called piimäpiirakka, which is a quark pie.
Availability in other countries:
Slavic and Baltic countries Desserts using quark (Russian: tvorog, etc.) in Slavic regions include the tvarohovník in Slovakia, tvarožník in the Czech Republic, sernik in Poland, and syrnyk in Ukraine, as well as cheese pancakes (syrnyky in Russia and Ukraine).
In Poland, twaróg is mixed with mashed potatoes to produce a filling for pierogi. Twaróg is also used to make gnocchi-shaped dumplings called leniwe pierogi ("lazy pierogi"). Ukrainian recipes for varenyky or lazy varenyky are similar but tvorog and mashed potatoes are different fillings which are usually not mixed together.
Availability in other countries:
In Russia, Ukraine, and Belarus, tvorog (Russian: творог) is highly popular and is bought frequently or made at home by almost every family. In Russian families, it is especially recommended for growing babies. It can be enjoyed simply with sour cream, jam, sugar, or sweetened condensed milk, or as a breakfast food. It is often used as a stuffing in blinchiki offered at many fast-food restaurants. It is also commonly used as the base for making Easter cakes. It is mixed with eggs, sugar, raisins and nuts and dried into a solid pyramid-shaped mass called paskha. The mass can also be fried, then known as syrniki.
Availability in other countries:
In Latvia, quark is eaten savory mixed with sour cream and scallions on rye bread or with potatoes. In desserts, quark is commonly baked into biezpiena plātsmaize, a crusted sheet cake baked with or without raisins. A sweetened treat biezpiena sieriņš (small curd cheese) is made of small sweetened blocks of quark dipped in chocolate.
**Barrier troops**
Barrier troops:
Barrier troops, blocking units, or anti-retreat forces are military units that are located in the rear or on the front line (behind the main forces) to maintain military discipline, prevent the flight of servicemen from the battlefield, capture spies, saboteurs and deserters, and return troops who flee from the battlefield or lag behind their units.
According to research by Jason Lyall, barrier troops are more likely to be used by the militaries of states that discriminate against the ethnic groups that comprise the state's military.
National Revolutionary Army:
During the Battle of Nanking of the Second Sino-Japanese War, a battalion in the New 36th Division of the National Revolutionary Army (NRA) was stationed at the Yijiang Gate with orders to guard the gate and "let no one through". On 12 December 1937, the NRA collapsed in the face of an offensive by the Imperial Japanese Army (IJA), and various units attempted to retreat without orders through the gate. The battalion responded by opening fire and killing a large number of the retreating NRA units and fleeing civilians.
Soviet Red Army:
In the Red Army of the Russian SFSR and later the Soviet Union, the concept of barrier troops first arose in August 1918 with the formation of the заградительные отряды (zagraditelnye otriady), translated as "blocking troops" or "anti-retreat detachments" (Russian: заградотряды, заградительные отряды, отряды заграждения). The barrier troops comprised personnel drawn from the Cheka secret police punitive detachments or from regular Red Army infantry regiments. The first use of the barrier troops by the Red Army occurred in the late summer and fall of 1918 in the Eastern front during the Russian Civil War, when People's Commissar of Military and Naval Affairs (War Commissar) Leon Trotsky of the Communist Bolshevik government authorized Mikhail Tukhachevsky, the commander of the 1st Army, to station blocking detachments behind unreliable Red Army infantry regiments in the 1st Red Army, with orders to shoot if front-line troops either deserted or retreated without permission.
Soviet Red Army:
In December 1918, Trotsky ordered that detachments of additional barrier troops be raised for attachment to each infantry formation in the Red Army. On December 18 he cabled: How do things stand with the blocking units? As far as I am aware they have not been included in our establishment and it appears they have no personnel. It is absolutely essential that we have at least an embryonic network of blocking units and that we work out a procedure for bringing them up to strength and deploying them. The barrier troops were also used to enforce Bolshevik control over food supplies in areas controlled by the Red Army as part of Lenin's war communism policies, a role which soon earned them the hatred of the Russian civilian population. These policies led to the Russian famine of 1921–1922, which killed about five million people. The concept was re-introduced on a large scale during the Second World War. On June 27, 1941, in response to reports of unit disintegration in battle and desertion from the ranks in the Soviet Red Army, the 3rd Department (military counterintelligence of Soviet Army) of the People's Commissariat of Defense of the Soviet Union (NKO) issued a directive establishing mobile barrier forces composed of NKVD secret police personnel to operate on roads, railways, forests, etc. for the purpose of catching "deserters and suspicious persons". With the continued deterioration of the military situation in the face of the German offensive of 1941, NKVD detachments acquired a new mission: to prevent the unauthorized withdrawal of Red Army forces from the battle line. The first troops of this kind were formed in the Bryansk Front on September 5, 1941.
Soviet Red Army:
On September 12, 1941 Joseph Stalin issued the Stavka Directive No. 1919 (Директива Ставки ВГК №001919) concerning the creation of barrier troops in rifle divisions of the Southwestern Front, to suppress panic retreats. Each Red Army division was to have an anti-retreat detachment equipped with transport totaling one company for each regiment. Their primary goal was to maintain strict military discipline and to prevent disintegration of the front line by any means. These barrier troops were usually formed from ordinary military units and placed under NKVD command.
Soviet Red Army:
In 1942, after Stavka Directive No. 227 (Директива Ставки ВГК №227), issued on 28 July 1942, set up penal battalions, anti-retreat detachments were used to prevent withdrawal or desertion by penal units as well. Penal military unit personnel were always rearguarded by NKVD anti-retreat detachments, and not by regular Red Army infantry forces. As per Order No. 227, each Army should have had 3–5 barrier squads of up to 200 persons each.
Soviet Red Army:
A report to the Commissar General of State Security (NKVD chief) Lavrentiy Beria on 10 October 1941 noted that since the beginning of the war, NKVD anti-retreat troops had detained a total of 657,364 retreating servicemen, spies, traitors, instigators and deserters, of whom 25,878 were arrested (of these, 10,201 were sentenced to death by court martial) and the rest were returned to active duty. At times, barrier troops were involved in battle operations along with regular soldiers, as noted by Aleksandr Vasilevsky in his directive N 157338 from October 1, 1942.
Soviet Red Army:
Order No. 227 also stipulated the capture or shooting of "cowards" and fleeing panicked troops at the rear of the blocking detachments, who in the first three months shot 1,000 penal troops and sent 24,993 more to penal battalions. By October 1942, the idea of regular blocking detachments was quietly dropped, and on 29 October 1944 Stalin officially ordered the disbanding of the units, although they continued to be utilized in a semi-official capacity until 1945.
Soviet Red Army:
Practice and results of use According to an official letter addressed in October 1941 to Lavrentiy Beria, in the period between the beginning of Operation Barbarossa and early December 1941, NKVD detachments had detained 657,364 servicemen who had fallen behind their lines and fled from the front. Of these detainees, 25,878 were arrested, and the remaining 632,486 were formed in units and sent back to the front. Those arrested included 1,505 accused spies, 308 saboteurs, 2,621 traitors, 2,643 "cowards and alarmists", 3,987 distributors of "provocative rumors", and 4,371 others. 10,201 of them were shot, meaning approximately 1.5% of those detained were sentenced by military tribunals to death. Richard Overy mentions the total number of those sentenced to be shot during the war was 158,000. For a thorough check of the Red Army soldiers who were in captivity or surrounded by the enemy, by the decision of the State Defense Committee No. 1069ss of December 27, 1941, army collection and forwarding points were established in each army and special camps of the NKVD were organized. In 1941–1942, 27 special camps were created, but in connection with the inspection and shipment of verified servicemen to the front, they were gradually eliminated (by the beginning of 1943, only 7 special camps were operating). According to Soviet official data, 177,081 former prisoners of war and surrounded men were sent to special camps in 1942. After checking by special departments of the NKVD, 150,521 people were transferred to the Red Army. On 29 October 1944, by Order No. 0349 of the People's Commissar for Defense, I. V. Stalin, the barrier detachments were disbanded due to a significant change in the situation at the front; their personnel joined rifle units.
Syrian Army:
It has been reported that in the initial stages of the Syrian civil war, regular soldiers sent to subdue protesters were surrounded by an outer cordon manned by forces known to be loyal to the regime, with orders to shoot those who refused their orders or attempted to flee.
Russian Ground Forces:
According to Fedir Venislavsky, a member of the Ukrainian parliament's committee on national security and defence, the Russian Ground Forces used Chechen troops from the 141st Special Motorized Regiment as barrier troops to shoot deserters who tried to leave combat zones during the full-scale invasion of Ukraine in 2022. In October 2022, Ukrainian intelligence published a purported phone call in which a Russian soldier described both his task of killing inmates, recruited from prisons by the Wagner Group, if they were retreating, and how he would be killed by others if he himself retreated. In November, the British Ministry of Defence assessed that Russia was using blocking units. In a video appeal to Russian President Vladimir Putin published on 25 March 2023, members of a unit tasked with assaulting Vuhledar claimed that their commanders were utilizing anti-retreat troops to force them to advance or risk being shot. On 12 June 2023, video emerged of an alleged event in which three soldiers of the RGF executed three retreating soldiers in Bakhmut.
In film:
The 2001 film Enemy at the Gates shows Soviet Red Army commissars and barrier troops using a PM M1910 alongside their own small arms to gun down the few retreating survivors of a failed charge on a German position during the Battle of Stalingrad. The 2011 South Korean film My Way also depicts Soviet blocking troops shooting retreating soldiers during a charge. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Cannabinol**
Cannabinol:
Cannabinol (CBN) is a mildly psychoactive cannabinoid that acts as a low affinity partial agonist at both CB1 and CB2 receptors. This activity at CB1 and CB2 receptors constitutes interaction of CBN with the endocannabinoid system (ECS).
Cannabinol:
CBN was the first cannabis compound to be isolated from cannabis extract in the late 1800s. Its structure and chemical synthesis were achieved by 1940, followed by some of the first basic research studies to determine the effects of individual cannabis-derived compounds in vivo. Although CBN shares the same mechanism of action as other phytocannabinoids (e.g., delta-9 tetrahydrocannabinol or D9THC), it has a lower affinity for CB1 receptors, meaning that much higher doses of CBN are required in order to experience effects, such as mild sedation.
Chemical structure:
Cannabinoid receptor agonists are categorized into four groups based on chemical structure. CBN, as one of the many phytocannabinoids derived from Cannabis Sativa L, is considered a classical cannabinoid. Other examples of compounds in this group include dibenzopyran derivatives such as D9THC, well-known for underlying the subjective “high” experienced by cannabis users, as well as D8THC, and their synthetic analogs. In contrast, endogenously produced cannabinoids (i.e., endocannabinoids), which also exert effects through CB agonism, are considered eicosanoids, distinguished by notable differences in chemical structure.
Chemical structure:
Compared to D9THC, an additional aromatic ring gives CBN a slower and more limited metabolic profile (see Synthesis and metabolism, below). In contrast to THC, CBN has no double-bond isomers or stereoisomers. CBN can degrade into HU-345 through oxidation. When CBN is administered orally, first-pass metabolism in the liver involves the addition of a hydroxyl group at C9 or C11, increasing the affinity and specificity of CBN for both CB1 and CB2 receptors (see 11-OH-CBN).
Synthesis and metabolism:
This diagram represents the biosynthetic and metabolic pathways by which phytocannabinoids (e.g., CBD, THC, CBN) are created in the cannabis plant. Starting with CBG-A, the acidic forms of certain phytocannabinoids are generated via enzymatic conversion. From there, decarboxylation (i.e., catalyzed by combustion or heat) yields the most well-known metabolites present in the cannabis plant. CBN is unique in that it does not arise from a pre-existing acidic form, but rather is generated through the oxidation of THC.
Synthesis and metabolism:
CBN is unique among phytocannabinoids in that its biosynthetic pathway involves conversion directly from D9THC, rather than from an acidic precursor form of CBN (e.g., D9THC arises through decarboxylation of THC-A). CBN can be found in trace amounts in the Cannabis plant, mostly in cannabis that has been aged and stored, which allows CBN to form through the oxidation of the cannabis plant's main psychoactive and intoxicating chemical, tetrahydrocannabinol (THC). This oxidation occurs via exposure to heat, oxygen, and/or light. Although reports are limited, CBN-A has also been measured at very low levels in the cannabis plant, and is thought to form via oxidation of THC-A (see Phytocannabinoid Biosynthesis diagram, below).
Synthesis and metabolism:
When administered orally, CBN demonstrates a similar metabolism to D9THC, with the primary active metabolite produced through hydroxylation at C9 as part of first-pass metabolism in the liver. The active metabolite generated via this process is called 11-OH-CBN, which is twice as potent as CBN and has demonstrated activity as a weak CB2 antagonist. This metabolism contrasts starkly with that of D9THC in terms of potency, given that 11-OH-THC has been reported to have 10x the potency of D9THC.
Synthesis and metabolism:
Due to high lipophilicity and first-pass metabolism, the bioavailability of CBN and other cannabinoids following oral administration is low. CBN metabolism is mediated in part by the CYP450 isoforms 2C9 and 3A4. The metabolism of CBN may also be catalyzed by UGTs (UDP-glucuronosyltransferases), with a subset of UGT isoforms (1A7, 1A8, 1A9, 1A10, 2B7) identified as potentially responsible for CBN glucuronidation. The bioavailability of CBN following administration via inhalation (e.g., smoking or vaporizing) is approximately 40% of that following intravenous administration.
Pharmacology:
CBN was the first cannabis compound to be isolated from cannabis extract in the late 1800s. Its structure and chemical synthesis were achieved by 1940, followed by some of the first preclinical research studies to determine the effects of individual cannabis-derived compounds in vivo.Both THC and CBN activate the CB1 (Ki = 211.2 nM) and CB2 (Ki = 126.4 nM) receptors. Each compound acts as a low affinity partial agonist at CB1 receptors with THC demonstrating 10-13x greater affinity to the CB1 receptor. Compared to THC, CBN has an equivalent or higher affinity to CB2 receptors, which are located throughout the central and peripheral nervous system, but are primarily associated with immune function. CB2 receptors are known to be located on immune cells throughout the body, including macrophages, T cells, and B cells. These immune cells have been shown to decrease production of immune-related chemical signals (e.g., cytokines) or undergo apoptosis as a consequence of CB2 agonism by CBN. In cell culture, CBN demonstrates antimicrobial effects, particularly in instances of antibiotic-resistant bacteria. CBN has also been reported to act as an ANKTM1 channel agonist at high concentrations (>20nM). While some phytocannabinoids have been shown to interact with nociceptive and immune-related signaling via transient receptor potential channels (e.g., TRPV1 and TRPM8), there is currently limited evidence to suggest that CBN acts in this way. In preclinical rodent studies, CBN, anandamide and other CB1 agonists have demonstrated inhibitory effects on GI motility, reversible via CB1R blockade (i.e., antagonism).In considering the efficacy of cannabis-based products, there remains controversy surrounding a concept termed “the entourage effect”. This concept describes a widely-reported but poorly-understood synergistic effect of certain cannabinoids when phytocannabinoids are coadministered with other naturally-occurring chemical compounds in the cannabis plant (e.g., flavonoids, terpenoids, alkaloids). This entourage effect is often cited to explain the superior efficacy observed in some studies of whole-plant-derived cannabis therapeutics as compared to isolated or synthesized individual cannabis constituents.
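A rough way to read the Ki values above is through the simple single-site occupancy relation, occupancy = [L] / ([L] + Ki). The sketch below (plain Python) plugs in the CB1 and CB2 values quoted in this section; the ligand concentrations are illustrative assumptions, not data from the text.

```python
# Sketch: fractional receptor occupancy from the Ki values quoted above,
# using the simple single-site binding model occupancy = [L] / ([L] + Ki).
# The CBN concentrations below are illustrative assumptions, not measured data.

CB1_KI_NM = 211.2  # CBN affinity at CB1, from the text (nM)
CB2_KI_NM = 126.4  # CBN affinity at CB2, from the text (nM)

def occupancy(ligand_nm: float, ki_nm: float) -> float:
    """Fraction of receptors bound at equilibrium (single-site model)."""
    return ligand_nm / (ligand_nm + ki_nm)

for conc in (10.0, 100.0, 1000.0):  # hypothetical CBN concentrations in nM
    print(f"[CBN] = {conc:7.1f} nM  "
          f"CB1 occupancy = {occupancy(conc, CB1_KI_NM):.2f}  "
          f"CB2 occupancy = {occupancy(conc, CB2_KI_NM):.2f}")
```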
Pharmacology:
Putative receptor targets The table highlights several common cannabinoids along with putative receptor targets and therapeutic properties. Exogenous (plant-derived) phytocannabinoids are identified with an asterisk while remaining chemicals represent well-known endocannabinoids (i.e., endogenously-produced cannabinoid receptor ligands).
Pharmacology:
Neurotransmitter interactions In the brain, the canonical mechanism of CB1 receptor activation is a form of short-term synaptic plasticity initiated via retrograde signaling of endogenous CB1 agonists such as 2AG or AEA (two primary endocannabinoids). This mechanism of action is called depolarization-induced suppression of inhibition (DSI) or depolarization-induced suppression of excitation (DSE), depending on the classification of the presynaptic neuron acted upon by the retrograde messenger. In the case of CB1R agonism on the presynaptic membrane of a GABAergic interneuron, activation leads to a net effect of increased activity, while the same activity on a glutamatergic neuron leads to the opposite net effect. The release of other neurotransmitters is also modulated in this way, particularly dopamine, dynorphin, oxytocin, and vasopressin.
Pharmacology:
Pharmacokinetics A small study of six cannabis users found a highly variable half-life of 32 ± 17 hours upon intravenous administration. Similar to CBD, CBN is metabolized by the CYP2C9 and CYP3A4 liver enzymes, and thus the half-life is sensitive to genetic factors that affect the levels of these enzymes.
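For a sense of what this half-life implies, first-order elimination gives C(t)/C0 = 2^(−t/t½). The short sketch below evaluates the fraction of an intravenous dose remaining after 24 hours across the reported spread; the 24-hour time point is an arbitrary example, not from the study.

```python
# Sketch: first-order elimination using the half-life reported above.
# C(t)/C0 = 2 ** (-t / t_half); the 24 h time point is an arbitrary example.

def fraction_remaining(t_hours: float, t_half_hours: float) -> float:
    return 2.0 ** (-t_hours / t_half_hours)

for t_half in (32 - 17, 32, 32 + 17):  # lower bound, mean, upper bound (hours)
    print(f"t1/2 = {t_half:2d} h -> {fraction_remaining(24, t_half):.0%} "
          "of the dose remains after 24 h")
```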
Legal status:
CBN is not listed in the schedules set out by the United Nations' Single Convention on Narcotic Drugs from 1961 nor their Convention on Psychotropic Substances from 1971, so the signatory countries to these international drug control treaties are not required by these treaties to control CBN.
United States According to the 2018 Farm Bill, extracts from the Cannabis sativa L. plant, including CBN, are legal under US federal law as long as they have a delta-9 tetrahydrocannabinol (THC) concentration of 0.3 percent or less. However, sales or possession of CBN could potentially be prosecuted under the Federal Analogue Act. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Bloch's theorem (complex variables)**
Bloch's theorem (complex variables):
In complex analysis, a branch of mathematics, Bloch's theorem describes the behaviour of holomorphic functions defined on the unit disk. It gives a lower bound on the size of a disk in which an inverse to a holomorphic function exists. It is named after André Bloch.
Statement:
Let f be a holomorphic function in the unit disk |z| ≤ 1 for which |f′(0)| = 1. Bloch's theorem states that there is a disk S ⊂ D on which f is biholomorphic and such that f(S) contains a disk of radius 1/72.
Landau's theorem:
If f is a holomorphic function in the unit disk with the property |f′(0)| = 1, then let Lf be the radius of the largest disk contained in the image of f.
Landau's theorem states that there is a constant L defined as the infimum of Lf over all such functions f, and that L is at least Bloch's constant: L ≥ B.
This theorem is named after Edmund Landau.
Valiron's theorem:
Bloch's theorem was inspired by the following theorem of Georges Valiron: Theorem. If f is a non-constant entire function then there exist disks D of arbitrarily large radius and analytic functions φ in D such that f(φ(z)) = z for z in D.
Bloch's theorem corresponds to Valiron's theorem via the so-called Bloch's Principle.
Proof:
Landau's theorem We first prove the case when f(0) = 0, f′(0) = 1, and |f′(z)| ≤ 2 in the unit disk. By Cauchy's integral formula, we have the bound
$$|f''(z)| = \left|\frac{1}{2\pi i}\oint_{\gamma}\frac{f'(w)}{(w-z)^{2}}\,dw\right| \le \frac{1}{2\pi}\cdot 2\pi r\cdot\sup_{w=\gamma(t)}\frac{|f'(w)|}{|w-z|^{2}} \le \frac{2}{r},$$
where γ is the counterclockwise circle of radius r around z, and 0 < r < 1 − |z|.
By Taylor's theorem, for each z in the unit disk there exists 0 ≤ t ≤ 1 such that f(z) = z + z²f″(tz)/2. Thus, if |z| = 1/3 and |w| < 1/6, we have
$$|(f(z)-w)-(z-w)| = \tfrac{1}{2}|z|^{2}|f''(tz)| \le \frac{|z|^{2}}{1-t|z|} \le \frac{|z|^{2}}{1-|z|} = \frac{1}{6} < |z|-|w| \le |z-w|.$$
By Rouché's theorem, the range of f contains the disk of radius 1/6 around 0.
Proof:
Let D(z0, r) denote the open disk of radius r around z0. For an analytic function g : D(z0, r) → C such that g′(z0) ≠ 0, the case above applied to (g(z0 + rz) − g(z0)) / (rg′(z0)) implies that the range of g contains D(g(z0), |g′(z0)|r / 6). For the general case, let f be an analytic function in the unit disk such that |f′(0)| = 1, and z0 = 0. If |f′(z)| ≤ 2|f′(z0)| for |z − z0| < 1/4, then by the first case, the range of f contains a disk of radius |f′(z0)| / 24 = 1/24.
Proof:
Otherwise, there exists z1 such that |z1 − z0| < 1/4 and |f′(z1)| > 2|f′(z0)|.
If |f′(z)| ≤ 2|f′(z1)| for |z − z1| < 1/8, then by the first case, the range of f contains a disk of radius |f′(z1)| / 48 > |f′(z0)| / 24 = 1/24.
Otherwise, there exists z2 such that |z2 − z1| < 1/8 and |f′(z2)| > 2|f′(z1)|. Repeating this argument, we either find a disk of radius at least 1/24 in the range of f, proving the theorem, or find an infinite sequence (zn) such that |zn − zn−1| < 1/2^(n+1) and |f′(zn)| > 2|f′(zn−1)|.
In the latter case the sequence is in D(0, 1/2), so f′ is unbounded in D(0, 1/2), a contradiction.
Proof:
Bloch's Theorem In the proof of Landau's Theorem above, Rouché's theorem implies that not only can we find a disk D of radius at least 1/24 in the range of f, but there is also a small disk D0 inside the unit disk such that for every w ∈ D there is a unique z ∈ D0 with f(z) = w. Thus, f is a bijective analytic function from D0 ∩ f−1(D) to D, so its inverse φ is also analytic by the inverse function theorem.
Bloch's and Landau's constants:
The number B is called Bloch's constant. The lower bound 1/72 in Bloch's theorem is not the best possible. Bloch's theorem tells us B ≥ 1/72, but the exact value of B is still unknown. The best known bounds for B at present are
$$0.4332 \le B \le \sqrt{\frac{\sqrt{3}-1}{2}}\cdot\frac{\Gamma(1/3)\,\Gamma(11/12)}{\Gamma(1/4)} \approx 0.47186,$$
where Γ is the Gamma function. The lower bound was proved by Chen and Gauthier, and the upper bound dates back to Ahlfors and Grunsky.
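The closed-form bounds can be evaluated numerically with the standard library alone. The snippet below checks the Ahlfors–Grunsky expression quoted above and, for comparison, the closed form Γ(1/3)Γ(5/6)/Γ(1/6) commonly cited in the literature for the upper bound of Landau's constant discussed in the next paragraph (that second expression is supplied here from the standard literature, not from the text).

```python
# Numerical check of the closed-form upper bounds, standard library only.
from math import gamma, sqrt

# Ahlfors–Grunsky upper bound for Bloch's constant B (quoted above)
bloch_upper = sqrt((sqrt(3) - 1) / 2) * gamma(1/3) * gamma(11/12) / gamma(1/4)

# Commonly cited upper bound for Landau's constant L (from the literature,
# not quoted in the text): Gamma(1/3) * Gamma(5/6) / Gamma(1/6)
landau_upper = gamma(1/3) * gamma(5/6) / gamma(1/6)

print(f"Bloch upper bound  ≈ {bloch_upper:.5f}")    # ≈ 0.47186
print(f"Landau upper bound ≈ {landau_upper:.12f}")  # ≈ 0.543258965342
```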
Bloch's and Landau's constants:
The similarly defined optimal constant L in Landau's theorem is called Landau's constant. Its exact value is also unknown, but it is known that 0.5 < L ≤ 0.543258965342...
(sequence A081760 in the OEIS). In their paper, Ahlfors and Grunsky conjectured that their upper bounds are actually the true values of B and L.
For injective holomorphic functions on the unit disk, a constant A can similarly be defined. It is known that 0.5 ≤ A ≤ 0.7853. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Perspectives on Science**
Perspectives on Science:
Perspectives on Science is a peer-reviewed academic journal that publishes contributions to science studies that integrate historical, philosophical, and sociological perspectives. The journal contains theoretical essays, case studies, and review essays. Perspectives on Science was established in 1993 and is published online and in hard copy by the MIT Press.
Abstracting and indexing:
The journal is abstracted and indexed by the following bibliographic databases: | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**EUginius**
EUginius:
EUginius is an Internet-based database application for Genetically modified organisms (GMOs). The name EUginius is an acronym and stands for EUropean GMO Initiative for a Unified Database System.
Development and commissioning:
The EUginius database was created on the initiative of the German Federal Office of Consumer Protection and Food Safety (BVL) and the Dutch research Institute Wageningen Food Safety Research (WFSR, formerly RIKILT). Building on parallel preparatory work by both cooperation partners, EUginius has been jointly developed and maintained since 2010, and has been online since October 2014. The information is provided in English.
Goal:
EUginius aims to assist competent authorities as well as interested private users in finding accurate information on the presence, detection and identification of GMOs. Data on the GMOs' molecular characterisation and traits, detection methods, reference materials, and authorisation status (currently limited to the EU) are provided. EUginius is tax-financed and therefore its information on GMOs is freely accessible. Information on releases carried out and their geographical locations is not provided in EUginius.
Types of organisms present in EUginius:
Most of the GMOs present in EUginius are used for genetically modified food and feed, the majority of them plants (e.g. pest-resistant Bt maize, or Golden rice with enhanced synthesis of β-carotene), and thus come from the field of green biotechnology. There is also information on genetically modified animals. EUginius provides, for example, information on a fast-growing genetically modified salmon (AquAdvantage) as well as information on genetically modified insects that have been developed to combat vectors of pathogens (e.g. Aedes aegypti OX5034, used to reduce the yellow fever mosquito population). In addition, in some cases, information is provided for the detection of genetically modified microbial production strains of food or feed additives (white biotechnology). Since the European Union classifies organisms developed using New Breeding Techniques (NBTs) as GMOs, EUginius provides information about commercialised NBT organisms, including genome-edited organisms such as the high-oleic soybean, the larger-growing pufferfish or the heat-tolerant cattle. Furthermore, EUginius lists, to some extent, published NGT organisms which present market-relevant traits.
Data in EUginius:
As of August 2022, EUginius contains 870 genetically modified organisms (detailed information on GMOs), 259 PCR detection methods (methods for the detection and identification of GMOs), and 440 reference materials (non-certified and certified reference materials).
Server locations and service:
The database and web servers are located in Germany and are mirrored on servers in the Netherlands. Further development and troubleshooting are decided jointly by the cooperation partners.
Partnerships:
Since 2018, there have also been partnerships with the Institute of Plant Breeding and Acclimatization (Instytut Hodowli i Aklimatyzacji Roślin – IHAR, based in Blonie, Poland), the Austrian Agency for Health and Food Safety (AGES, based in Vienna, Austria) and the Experimental ZooProphylactic Institute of Lazio and Tuscany (Istituto Zooprofilattico Sperimentale – IZS, based in Rome, Italy).
Beyond that, EUginius uses the element thesaurus GMO-GET1 (GMO Genetic Element Thesaurus) developed in collaboration with the Biosafety Clearing House (BCH, Montreal, Canada).
Outlook:
EUginius is continuously maintained and further developed. In this way, the work of the control laboratories using EUginius is supported in a timely manner. The adaptation of the contents and their provision (e.g. information on organisms developed by NBTs and sequencing information) is carried out on an ongoing basis. The renewal of the module for accessing GMO authorisation applications is under development. Finally, an optimisation of the design to improve usability and make navigation more intuitive is planned (2023 – 2024). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Bio-FET**
Bio-FET:
A field-effect transistor-based biosensor, also known as a biosensor field-effect transistor (Bio-FET or BioFET), field-effect biosensor (FEB), or biosensor MOSFET, is a field-effect transistor (based on the MOSFET structure) that is gated by changes in the surface potential induced by the binding of molecules. When charged molecules, such as biomolecules, bind to the FET gate, which is usually a dielectric material, they can change the charge distribution of the underlying semiconductor material resulting in a change in conductance of the FET channel. A Bio-FET consists of two main compartments: one is the biological recognition element and the other is the field-effect transistor. The BioFET structure is largely based on the ion-sensitive field-effect transistor (ISFET), a type of metal–oxide–semiconductor field-effect transistor (MOSFET) where the metal gate is replaced by an ion-sensitive membrane, electrolyte solution, and reference electrode.
Mechanism of operation:
Bio-FETs couple a transistor device with a bio-sensitive layer that can specifically detect bio-molecules such as nucleic acids and proteins. A Bio-FET system consists of a semiconducting field-effect transistor that acts as a transducer separated by an insulator layer (e.g. SiO2) from the biological recognition element (e.g. receptors or probe molecules) which are selective to the target molecule called analyte. Once the analyte binds to the recognition element, the charge distribution at the surface changes with a corresponding change in the electrostatic surface potential of the semiconductor. This change in the surface potential of the semiconductor acts like a gate voltage would in a traditional MOSFET, i.e. changing the amount of current that can flow between the source and drain electrodes. This change in current (or conductance) can be measured, thus the binding of the analyte can be detected. The precise relationship between the current and analyte concentration depends upon the region of transistor operation.
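As a rough illustration of the transduction step described above, in the subthreshold regime the drain current depends approximately exponentially on the effective gate (surface) potential, so a binding event that shifts the surface potential by only a few millivolts produces a readily measurable current change. The sketch below is a minimal, generic model, I = I0·exp(Δψ/(n·V_T)); the baseline current I0 and the ideality factor n are assumed illustrative values, not parameters of any particular device.

```python
# Minimal sketch of a Bio-FET readout in the subthreshold regime.
# Model: I = I0 * exp(delta_psi / (n * V_T)); I0 and n are assumed,
# illustrative values, not parameters of any real device.
import math

V_T = 0.0259       # thermal voltage kT/q at ~300 K, in volts
N_IDEALITY = 1.5   # assumed subthreshold ideality factor
I0 = 1e-9          # assumed baseline drain current, in amperes

def drain_current(delta_psi_volts: float) -> float:
    """Drain current after a binding-induced surface-potential shift."""
    return I0 * math.exp(delta_psi_volts / (N_IDEALITY * V_T))

for dpsi_mv in (0, 5, 10, 20):  # hypothetical surface-potential shifts (mV)
    i = drain_current(dpsi_mv / 1000.0)
    print(f"Δψ = {dpsi_mv:2d} mV -> I = {i:.3e} A  (ΔI/I0 = {i / I0 - 1:+.2f})")
```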
Fabrication of Bio-FET:
The fabrication of a Bio-FET system consists of several steps, as follows:
1. Finding a substrate suitable for serving as a FET site, and forming a FET on the substrate;
2. Exposing an active site of the FET from the substrate;
3. Providing a sensing film layer on the active site of the FET;
4. Providing a receptor on the sensing film layer to be used for ion detection;
5. Removing a semiconductor layer, and thinning a dielectric layer;
6. Etching the remaining portion of the dielectric layer to expose an active site of the FET;
7. Removing the photoresist, and depositing a sensing film layer followed by formation of a photoresist pattern on the sensing film;
8. Etching the unprotected portion of the sensing film layer, and removing the photoresist.
Advantages:
The principle of operation of Bio-FET devices is based on detecting changes in electrostatic potential due to the binding of analyte. This is the same mechanism of operation as glass electrode sensors, which also detect changes in surface potential and were developed as early as the 1920s. Due to the small magnitude of the changes in surface potential upon binding of biomolecules or changing pH, glass electrodes require a high-impedance amplifier, which increases the size and cost of the device. In contrast, the advantage of Bio-FET devices is that they operate as an intrinsic amplifier, converting small changes in surface potential into large changes in current (through the transistor component) without the need for additional circuitry. This means BioFETs have the capability to be much smaller and more affordable than glass electrode-based biosensors. If the transistor is operated in the subthreshold region, then an exponential increase in current is expected for a unit change in surface potential.
Advantages:
Bio-FETs can be used for detection in fields such as medical diagnostics, biological research, environmental protection and food analysis. Conventional measurements like optical, spectrometric, electrochemical, and SPR measurements can also be used to analyze biological molecules. Nevertheless, these conventional methods are relatively time-consuming and expensive, involving multi-stage processes, and are not compatible with real-time monitoring, in contrast to Bio-FETs. Bio-FETs have low weight, low mass-production cost and small size, and are compatible with commercial planar processes for large-scale circuitry. They can be easily integrated into digital microfluidic devices for lab-on-a-chip applications. For example, a microfluidic device can control sample droplet transport whilst enabling detection of bio-molecules, signal processing, and data transmission, using an all-in-one chip. Bio-FETs also do not require any labeling step, and simply utilise a specific molecular recognition element (e.g. an antibody or ssDNA) on the sensor surface to provide selectivity. Some Bio-FETs display notable electronic and optical properties. An example is a glucose-sensitive FET based on the modification of the gate surface of an ISFET with SiO2 nanoparticles and the enzyme glucose oxidase (GOD); this device showed markedly enhanced sensitivity and an extended lifetime compared with the same device without SiO2 nanoparticles.
Optimization:
The choice of reference electrode (liquid gate) or back-gate voltage determines the carrier concentration within the field-effect transistor, and therefore its region of operation, so the response of the device can be optimised by tuning the gate voltage. If the transistor is operated in the subthreshold region, then an exponential increase in current is expected for a unit change in surface potential. The response is often reported as the change in current on analyte binding divided by the initial current (ΔI/I0), and this value is always maximal in the subthreshold region of operation due to this exponential amplification. For most devices, the optimum signal-to-noise ratio, defined as the change in current divided by the baseline noise, is also obtained when operating in the subthreshold region; however, as the noise sources vary between devices, this is device dependent. One optimization of a Bio-FET may be to put a hydrophobic passivation surface on the source and the drain to reduce non-specific biomolecular binding to regions which are not the sensing surface. Many other optimisation strategies have been reviewed in the literature.
History:
The MOSFET (metal–oxide–semiconductor field-effect transistor, or MOS transistor) was invented by Mohamed M. Atalla and Dawon Kahng in 1959, and demonstrated in 1960. Two years later, Leland C. Clark and Champ Lyons invented the first biosensor in 1962. Biosensor MOSFETs (BioFETs) were later developed, and they have since been widely used to measure physical, chemical, biological and environmental parameters.The first BioFET was the ion-sensitive field-effect transistor (ISFET), invented by Piet Bergveld for electrochemical and biological applications in 1970. Other early BioFETs include the adsorption FET (ADFET) patented by P.F. Cox in 1974, and a hydrogen-sensitive MOSFET demonstrated by I. Lundstrom, M.S. Shivaraman, C.S. Svenson and L. Lundkvist in 1975. The ISFET is a special type of MOSFET with a gate at a certain distance, and where the metal gate is replaced by an ion-sensitive membrane, electrolyte solution and reference electrode. The ISFET is widely used in biomedical applications, such as the detection of DNA hybridization, biomarker detection from blood, antibody detection, glucose measurement, pH sensing, and genetic technology.By the mid-1980s, other BioFETs had been developed, including the gas sensor FET (GASFET), pressure sensor FET (PRESSFET), chemical field-effect transistor (ChemFET), reference ISFET (REFET), enzyme-modified FET (ENFET) and immunologically modified FET (IMFET). By the early 2000s, BioFETs such as the DNA field-effect transistor (DNAFET), gene-modified FET (GenFET), and cell-potential BioFET (CPFET) had been developed. Current research in this area has produced new formations of the BioFET such as the Organic Electrolyte Gated FET (OEGFET). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Ring extension**
Ring extension:
In mathematics, a subring of R is a subset of a ring that is itself a ring when binary operations of addition and multiplication on R are restricted to the subset, and which shares the same multiplicative identity as R. For those who define rings without requiring the existence of a multiplicative identity, a subring of R is just a subset of R that is a ring for the operations of R (this does imply it contains the additive identity of R). The latter gives a strictly weaker condition, even for rings that do have a multiplicative identity, so that for instance all ideals become subrings (and they may have a multiplicative identity that differs from the one of R). With definition requiring a multiplicative identity (which is used in this article), the only ideal of R that is a subring of R is R itself.
Definition:
A subring of a ring (R, +, ∗, 0, 1) is a subset S of R that preserves the structure of the ring, i.e. a ring (S, +, ∗, 0, 1) with S ⊆ R. Equivalently, it is both a subgroup of (R, +, 0) and a submonoid of (R, ∗, 1).
Examples:
The ring Z and its quotients Z/nZ have no subrings (with multiplicative identity) other than the full ring.: 228 Every ring has a unique smallest subring, isomorphic to some ring Z/nZ with n a nonnegative integer (see characteristic). The integers Z correspond to n = 0 in this statement, since Z is isomorphic to Z/0Z .: 89–90
Subring test:
The subring test is a theorem that states that for any ring R, a subset S of R is a subring if and only if it is closed under multiplication and subtraction, and contains the multiplicative identity of R.: 228 As an example, the ring Z of integers is a subring of the field of real numbers and also a subring of the ring of polynomials Z[X].
Ring extensions:
If S is a subring of a ring R, then equivalently R is said to be a ring extension of S, written as R/S in similar notation to that for field extensions.
Subring generated by a set:
Let R be a ring. Any intersection of subrings of R is again a subring of R. Therefore, if X is any subset of R, the intersection of all subrings of R containing X is a subring S of R. S is the smallest subring of R containing X. ("Smallest" means that if T is any other subring of R containing X, then S is contained in T.) S is said to be the subring of R generated by X. If S = R, we may say that the ring R is generated by X.
Relation to ideals:
Proper ideals are subrings (without unity) that are closed under both left and right multiplication by elements of R.
Relation to ideals:
If one omits the requirement that rings have a unity element, then subrings need only be non-empty and otherwise conform to the ring structure, and ideals become subrings. Ideals may or may not have their own multiplicative identity (distinct from the identity of the ring): The ideal I = {(z,0) | z in Z} of the ring Z × Z = {(x,y) | x,y in Z} with componentwise addition and multiplication has the identity (1,0), which is different from the identity (1,1) of the ring. So I is a ring with unity, and a "subring-without-unity", but not a "subring-with-unity" of Z × Z.
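The Z × Z example above can be checked mechanically. The short Python sketch below (a finite sample of elements, purely for illustration) verifies that (1, 0) acts as a multiplicative identity on elements of I but not on all of Z × Z, whose identity is (1, 1).

```python
# Sketch: componentwise operations on Z x Z, checking that (1, 0) acts as a
# multiplicative identity inside I = {(z, 0)} but not on all of Z x Z.

def mul(a, b):
    """Componentwise multiplication in Z x Z."""
    return (a[0] * b[0], a[1] * b[1])

I_sample = [(z, 0) for z in range(-3, 4)]                     # elements of the ideal I
ring_sample = [(x, y) for x in range(-2, 3) for y in range(-2, 3)]  # elements of Z x Z

e = (1, 0)
print(all(mul(e, a) == a for a in I_sample))          # True: (1,0) is an identity on I
print(all(mul(e, a) == a for a in ring_sample))       # False: not an identity on Z x Z
print(all(mul((1, 1), a) == a for a in ring_sample))  # True: (1,1) is the ring identity
```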
Relation to ideals:
The proper ideals of Z have no multiplicative identity.
Profile by commutative subrings:
A ring may be profiled by the variety of commutative subrings that it hosts: The quaternion ring H contains only the complex plane as a planar subring The coquaternion ring contains three types of commutative planar subrings: the dual number plane, the split-complex number plane, as well as the ordinary complex plane The ring of 3 × 3 real matrices also contains 3-dimensional commutative subrings generated by the identity matrix and a nilpotent ε of order 3 (εεε = 0 ≠ εε). For instance, the Heisenberg group can be realized as the join of the groups of units of two of these nilpotent-generated subrings of 3 × 3 matrices. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Supply network operations**
Supply network operations:
Supply network operations are the synchronized execution of compliant manufacturing and logistics processes across a dynamically reconfigurable supply network to profitably meet demand. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Journal of Luminescence**
Journal of Luminescence:
The Journal of Luminescence is a monthly peer-reviewed scientific journal published by Elsevier. The editor-in-chief is Xueyuan Chen. According to the Journal Citation Reports, the journal has a 2021 impact factor of 4.171, ranking it 26th out of 101 journals in the category "Optics". The journal covers all aspects related to the emission of light (luminescence).
Editors:
The editors-in-chief are: S. Tanabe (Kyoto University), D. Poelman (Universiteit Gent, Ghent, Belgium), K.-L. Wong (Hong Kong Baptist University), and M.F. Reid (Dodd-Walls Centre for Photonic and Quantum Technologies). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**PhotoRC RNA motifs**
PhotoRC RNA motifs:
PhotoRC RNA motifs refer to conserved RNA structures that are associated with genes acting in the photosynthetic reaction centre of photosynthetic bacteria. Two such RNA classes were identified and called the PhotoRC-I and PhotoRC-II motifs. PhotoRC-I RNAs were detected in the genomes of some cyanobacteria. Although no PhotoRC-II RNA has been detected in cyanobacteria, one is found in the genome of a purified phage that infects cyanobacteria. Both PhotoRC-I and PhotoRC-II RNAs are present in sequences derived from DNA that was extracted from uncultivated marine bacteria.
PhotoRC RNA motifs:
The PhotoRC motif RNAs are located upstream of, and presumably in the 5′ untranslated regions (5′ UTRs), of genes that are sometimes annotated as psbA. The proteins encoded by psbA genes form the reaction center of the photosystem II complex. It was proposed that PhotoRC RNAs are cis-regulatory elements functioning at the RNA level, since bacterial cis-regulatory RNAs typically reside in 5′ UTRs. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Abraham–Lorentz force**
Abraham–Lorentz force:
In the physics of electromagnetism, the Abraham–Lorentz force (also known as the Lorentz–Abraham force) is the recoil force on an accelerating charged particle caused by the particle emitting electromagnetic radiation by self-interaction. It is also called the radiation reaction force, the radiation damping force, or the self-force. It is named after the physicists Max Abraham and Hendrik Lorentz.
Abraham–Lorentz force:
The formula, although predating the theory of special relativity and initially calculated in the non-relativistic velocity approximation, was extended to arbitrary velocities by Max Abraham and was shown to be physically consistent by George Adolphus Schott. The non-relativistic form is called the Lorentz self-force, while the relativistic version is called the Lorentz–Dirac force; collectively they are known as the Abraham–Lorentz–Dirac force. The equations are in the domain of classical physics, not quantum physics, and therefore may not be valid at distances of roughly the Compton wavelength or below. There are, however, two analogs of the formula that are both fully quantum and relativistic: one is called the "Abraham–Lorentz–Dirac–Langevin equation", the other is the self-force on a moving mirror. The force is proportional to the square of the object's charge, multiplied by the jerk (rate of change of acceleration) that it is experiencing. The force points in the direction of the jerk. For example, in a cyclotron, where the jerk points opposite to the velocity, the radiation reaction is directed opposite to the velocity of the particle, providing a braking action. The Abraham–Lorentz force is the source of the radiation resistance of a radio antenna radiating radio waves.
Abraham–Lorentz force:
There are pathological solutions of the Abraham–Lorentz–Dirac equation in which a particle accelerates in advance of the application of a force, so-called pre-acceleration solutions. Since this would represent an effect occurring before its cause (retrocausality), some theories have speculated that the equation allows signals to travel backward in time, thus challenging the physical principle of causality. One resolution of this problem was discussed by Arthur D. Yaghjian and was further discussed by Fritz Rohrlich and Rodrigo Medina.
Definition and description:
Mathematically, the Lorentz self-force derived in the non-relativistic approximation v ≪ c is given in SI units by
$$\mathbf{F}_\mathrm{rad} = \frac{\mu_0 q^2}{6\pi c}\,\dot{\mathbf{a}} = \frac{q^2}{6\pi \varepsilon_0 c^3}\,\dot{\mathbf{a}},$$
or in Gaussian units by
$$\mathbf{F}_\mathrm{rad} = \frac{2}{3}\frac{q^2}{c^3}\,\dot{\mathbf{a}},$$
where F_rad is the force, ȧ is the derivative of acceleration (the third derivative of displacement), also called jerk, μ0 is the magnetic constant, ε0 is the electric constant, c is the speed of light in free space, and q is the electric charge of the particle.
Definition and description:
Physically, an accelerating charge emits radiation (according to the Larmor formula), which carries momentum away from the charge. Since momentum is conserved, the charge is pushed in the direction opposite the direction of the emitted radiation. In fact the formula above for radiation force can be derived from the Larmor formula, as shown below.
The Abraham–Lorentz force, Abraham's generalization of the Lorentz self-force to arbitrary velocities, involves the Lorentz factor γ associated with the velocity v of the particle. The formula is consistent with special relativity and reduces to Lorentz's self-force expression in the low-velocity limit.
The covariant form of the radiation reaction deduced by Dirac, valid without assuming any particular shape for the elementary charge, is found to be
$$F^{\mu}_{\mathrm{rad}} = \frac{\mu_0 q^2}{6\pi m c}\left[\frac{d^2 p^{\mu}}{d\tau^2} - \frac{p^{\mu}}{m^2 c^2}\left(\frac{dp_{\nu}}{d\tau}\frac{dp^{\nu}}{d\tau}\right)\right],$$
where p^μ is the particle four-momentum and τ its proper time.
History:
The first calculation of electromagnetic radiation energy due to current was given by George Francis FitzGerald in 1883, in which radiation resistance appears. However, dipole antenna experiments by Heinrich Hertz made a bigger impact and gathered commentary by Poincaré on the amortissement, or damping, of the oscillator due to the emission of radiation. Qualitative discussions of the damping effects of radiation emitted by accelerating charges were sparked by Henri Poincaré in 1891. In 1892, Hendrik Lorentz derived the self-interaction force of charges for low velocities but did not relate it to radiation losses. The suggestion of a relationship between radiation energy loss and self-force was first made by Max Planck. Planck's concept of the damping force, which did not assume any particular shape for elementary charged particles, was applied by Max Abraham to find the radiation resistance of an antenna in 1898, which remains the most practical application of the phenomenon. In the early 1900s, Abraham formulated a generalization of the Lorentz self-force to arbitrary velocities, the physical consistency of which was later shown by Schott. Schott was able to derive the Abraham equation and identified "acceleration energy" as the source of energy of the electromagnetic radiation. Originally submitted as an essay for the 1908 Adams Prize, he won the competition and had the essay published as a book in 1912. The relationship between self-force and radiation reaction became well established at this point. Wolfgang Pauli first obtained the covariant form of the radiation reaction, and in 1938 Paul Dirac found that the equation of motion of charged particles, without assuming the shape of the particle, contained Abraham's formula within reasonable approximations. The equations derived by Dirac are considered exact within the limits of classical theory.
Background:
In classical electrodynamics, problems are typically divided into two classes: Problems in which the charge and current sources of fields are specified and the fields are calculated, and The reverse situation, problems in which the fields are specified and the motion of particles are calculated.In some fields of physics, such as plasma physics and the calculation of transport coefficients (conductivity, diffusivity, etc.), the fields generated by the sources and the motion of the sources are solved self-consistently. In such cases, however, the motion of a selected source is calculated in response to fields generated by all other sources. Rarely is the motion of a particle (source) due to the fields generated by that same particle calculated. The reason for this is twofold: Neglect of the "self-fields" usually leads to answers that are accurate enough for many applications, and Inclusion of self-fields leads to problems in physics such as renormalization, some of which are still unsolved, that relate to the very nature of matter and energy.These conceptual problems created by self-fields are highlighted in a standard graduate text. [Jackson] The difficulties presented by this problem touch one of the most fundamental aspects of physics, the nature of the elementary particle. Although partial solutions, workable within limited areas, can be given, the basic problem remains unsolved. One might hope that the transition from classical to quantum-mechanical treatments would remove the difficulties. While there is still hope that this may eventually occur, the present quantum-mechanical discussions are beset with even more elaborate troubles than the classical ones. It is one of the triumphs of comparatively recent years (~ 1948–1950) that the concepts of Lorentz covariance and gauge invariance were exploited sufficiently cleverly to circumvent these difficulties in quantum electrodynamics and so allow the calculation of very small radiative effects to extremely high precision, in full agreement with experiment. From a fundamental point of view, however, the difficulties remain. The Abraham–Lorentz force is the result of the most fundamental calculation of the effect of self-generated fields. It arises from the observation that accelerating charges emit radiation. The Abraham–Lorentz force is the average force that an accelerating charged particle feels in the recoil from the emission of radiation. The introduction of quantum effects leads one to quantum electrodynamics. The self-fields in quantum electrodynamics generate a finite number of infinities in the calculations that can be removed by the process of renormalization. This has led to a theory that is able to make the most accurate predictions that humans have made to date. (See precision tests of QED.) The renormalization process fails, however, when applied to the gravitational force. The infinities in that case are infinite in number, which causes the failure of renormalization. Therefore, general relativity has an unsolved self-field problem. String theory and loop quantum gravity are current attempts to resolve this problem, formally called the problem of radiation reaction or the problem of self-force.
Derivation:
The simplest derivation for the self-force is found for periodic motion from the Larmor formula for the power radiated from a point charge that moves with velocity much lower than the speed of light:
$$P = \frac{\mu_0 q^2 a^2}{6\pi c}.$$
If we assume the motion of the charged particle is periodic, then the average work done on the particle by the Abraham–Lorentz force is the negative of the Larmor power integrated over one period from τ1 to τ2:
$$\int_{\tau_1}^{\tau_2} \mathbf{F}_\mathrm{rad}\cdot\mathbf{v}\,dt = -\int_{\tau_1}^{\tau_2} P\,dt = -\int_{\tau_1}^{\tau_2}\frac{\mu_0 q^2}{6\pi c}\,\mathbf{a}\cdot\mathbf{a}\,dt.$$
The above expression can be integrated by parts. If we assume that there is periodic motion, the boundary term in the integration by parts disappears:
$$\int_{\tau_1}^{\tau_2} \mathbf{F}_\mathrm{rad}\cdot\mathbf{v}\,dt = -\frac{\mu_0 q^2}{6\pi c}\Big[\mathbf{a}\cdot\mathbf{v}\Big]_{\tau_1}^{\tau_2} + \int_{\tau_1}^{\tau_2}\frac{\mu_0 q^2}{6\pi c}\,\dot{\mathbf{a}}\cdot\mathbf{v}\,dt = \int_{\tau_1}^{\tau_2}\frac{\mu_0 q^2}{6\pi c}\,\dot{\mathbf{a}}\cdot\mathbf{v}\,dt.$$
Clearly, we can identify the Lorentz self-force equation applicable to slowly moving particles as
$$\mathbf{F}_\mathrm{rad} = \frac{\mu_0 q^2}{6\pi c}\,\dot{\mathbf{a}}.$$
A more rigorous derivation, which does not require periodic motion, was found using an effective field theory formulation. A generalized equation for arbitrary velocities was formulated by Max Abraham and is consistent with special relativity. An alternative derivation, making use of the theory of relativity, which was well established at that time, was found by Dirac without any assumption about the shape of the charged particle.
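As a concrete check of the integration-by-parts step above, one can take one-dimensional periodic motion x(t) = cos(ωt) and verify symbolically that the work done over a period by the candidate self-force (μ0 q²/6πc)·ȧ equals minus the integrated Larmor power. The SymPy snippet below is only a sanity check under these assumptions (the prefactor is kept as a symbol K, and the cosine trajectory is an arbitrary example), not part of the original derivation.

```python
# Sanity check of the integration-by-parts step for periodic motion, using SymPy.
# x(t) = cos(w t); the prefactor K = mu0 q^2 / (6 pi c) is kept symbolic.
import sympy as sp

t, w, K = sp.symbols('t w K', positive=True)
x = sp.cos(w * t)
v = sp.diff(x, t)        # velocity
a = sp.diff(v, t)        # acceleration
jerk = sp.diff(a, t)     # derivative of acceleration

period = 2 * sp.pi / w
work_by_self_force = sp.integrate(K * jerk * v, (t, 0, period))  # ∫ F_rad · v dt
radiated_energy = sp.integrate(K * a**2, (t, 0, period))          # ∫ P dt (Larmor ∝ a²)

print(sp.simplify(work_by_self_force + radiated_energy))  # 0: work = -radiated energy
```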
Signals from the future:
Below is an illustration of how a classical analysis can lead to surprising results. The classical theory can be seen to challenge standard pictures of causality, thus signaling either a breakdown or a need for extension of the theory. In this case the extension is to quantum mechanics and its relativistic counterpart quantum field theory. See the quote from Rohrlich in the introduction concerning "the importance of obeying the validity limits of a physical theory".
Signals from the future:
For a particle in an external force F_ext, we have
$$m\,\dot{\mathbf{v}} = \mathbf{F}_\mathrm{rad} + \mathbf{F}_\mathrm{ext} = m t_0\,\ddot{\mathbf{v}} + \mathbf{F}_\mathrm{ext},$$
where
$$t_0 = \frac{\mu_0 q^2}{6\pi m c}.$$
This equation can be integrated once to obtain
$$m\,\dot{\mathbf{v}}(t) = \frac{1}{t_0}\int_t^{\infty} e^{(t-t')/t_0}\,\mathbf{F}_\mathrm{ext}(t')\,dt'.$$
The integral extends from the present to infinitely far in the future. Thus future values of the force affect the acceleration of the particle in the present. The future values are weighted by the factor e^{(t−t')/t0}, which falls off rapidly for times more than t0 in the future. Therefore, signals from an interval approximately t0 into the future affect the acceleration in the present. For an electron, this time is approximately 10⁻²⁴ seconds, which is the time it takes for a light wave to travel across the "size" of an electron, the classical electron radius. One way to define this "size" is as follows: it is (up to some constant factor) the distance r such that two electrons placed at rest at a distance r apart and allowed to fly apart would have sufficient energy to reach half the speed of light. In other words, it forms the length (or time, or energy) scale where something as light as an electron would be fully relativistic. It is worth noting that this expression does not involve the Planck constant at all, so although it indicates something is wrong at this length scale, it does not directly relate to quantum uncertainty, or to the frequency–energy relation of a photon. Although it is common in quantum mechanics to treat ℏ → 0 as a "classical limit", some speculate that even the classical theory needs renormalization, no matter how the Planck constant would be fixed.
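The characteristic time quoted above follows directly from t0 = μ0 q²/(6π m c); the short snippet below evaluates it for the electron using standard SI constants, confirming the order of 10⁻²⁴ seconds.

```python
# Characteristic radiation-reaction time t0 = mu0 * q^2 / (6 * pi * m * c)
# evaluated for the electron; constants are standard SI values.
import math

MU_0 = 4 * math.pi * 1e-7   # vacuum permeability, H/m (classical value)
Q_E = 1.602176634e-19       # elementary charge, C
M_E = 9.1093837015e-31      # electron mass, kg
C = 2.99792458e8            # speed of light, m/s

t0 = MU_0 * Q_E**2 / (6 * math.pi * M_E * C)
print(f"t0 ≈ {t0:.3e} s")   # ≈ 6.3e-24 s, i.e. of order 10^-24 s
```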
Abraham–Lorentz–Dirac force:
To find the relativistic generalization, Dirac renormalized the mass in the equation of motion with the Abraham–Lorentz force in 1938. This renormalized equation of motion is called the Abraham–Lorentz–Dirac equation of motion.
Definition The expression derived by Dirac is given in signature (−, +, +, +) by
$$F^{\mu}_{\mathrm{rad}} = \frac{\mu_0 q^2}{6\pi m c}\left[\frac{d^2 p^{\mu}}{d\tau^2} - \frac{p^{\mu}}{m^2 c^2}\left(\frac{dp_{\nu}}{d\tau}\frac{dp^{\nu}}{d\tau}\right)\right].$$
With Liénard's relativistic generalization of Larmor's formula in the co-moving frame, one can show this to be a valid force by manipulating the time-averaged equation for power.
Paradoxes:
Pre-acceleration Similar to the non-relativistic case, there are pathological solutions using the Abraham–Lorentz–Dirac equation that anticipate a change in the external force and according to which the particle accelerates in advance of the application of a force, so-called preacceleration solutions. One resolution of this problem was discussed by Yaghjian, and is further discussed by Rohrlich and Medina.
Runaway solutions Runaway solutions are solutions to the ALD equations which suggest that the force on an object increases exponentially over time. They are considered unphysical.
Paradoxes:
Hyperbolic motion The ALD force is known to vanish for constant proper acceleration, i.e. hyperbolic motion in a Minkowski space-time diagram. Whether electromagnetic radiation exists under such conditions was a matter of debate until Fritz Rohrlich resolved the problem by showing that hyperbolically moving charges do emit radiation. Subsequently, the issue has been discussed in the context of energy conservation and the equivalence principle, which is classically resolved by considering the "acceleration energy" or Schott energy.
Self-interactions:
However, the antidamping mechanism resulting from the Abraham–Lorentz force can be compensated by other nonlinear terms, which are frequently disregarded in the expansions of the retarded Liénard–Wiechert potential.
Experimental observations:
While the Abraham–Lorentz force is largely neglected for many experimental considerations, it gains importance for plasmonic excitations in larger nanoparticles due to large local field enhancements. Radiation damping acts as a limiting factor for the plasmonic excitations in surface-enhanced Raman scattering. The damping force was shown to broaden surface plasmon resonances in gold nanoparticles, nanorods and clusters. The effects of radiation damping on nuclear magnetic resonance were also observed by Nicolaas Bloembergen and Robert Pound, who reported its dominance over spin–spin and spin–lattice relaxation mechanisms for certain cases. The Abraham–Lorentz force has been observed in the semiclassical regime in experiments which involve the scattering of a relativistic beam of electrons with a high-intensity laser. In the experiments, a supersonic jet of helium gas is intercepted by a high-intensity (10^18–10^20 W/cm²) laser. The laser ionizes the helium gas and accelerates the electrons via what is known as the "laser-wakefield" effect. A second high-intensity laser beam is then propagated counter to this accelerated electron beam. In a small number of cases, inverse Compton scattering occurs between the photons and the electron beam, and the spectra of the scattered electrons and photons are measured. The photon spectra are then compared with spectra calculated from Monte Carlo simulations that use either the QED or classical LL equations of motion. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Fluorsid**
Fluorsid:
Fluorsid S.p.A. is an Italian company active in the chemical and mining industries, specifically in the production and sale of inorganic fluorine derivatives.
The historic production site is located in Macchiareddu, in the industrial area of Cagliari, in Sardinia, but the company also has plants and headquarters located on the Italian peninsula and in Norway, Switzerland and the United Kingdom.
History:
Historically, since the Neolithic era, the populations that have inhabited Sardinia have exploited the mineral wealth and variety present in its subsoil. These mining activities grew considerably between the nineteenth and twentieth centuries: in 1850 there were more than 250 mining concessions. This period was dominated by the extraction of lead and zinc, which made the fortune of the Arburese, Iglesiente and Barony of Siniscola regions. The sector, however, went into crisis after the Second World War and many of these mines were closed or downsized. In the same period, by contrast, there was a boom in the fluorine and barium extraction sector, especially in the Gerrei area. Many entrepreneurs, supported by the Autonomous Region of Sardinia through the Sardinian Mining Authority (it. Ente Minerario Sardo), therefore chose to invest in this new business. Among these figures was Count Enrico Giulini, who had already obtained the first mining concessions in 1953; a few years later, on April 17, 1969, he founded Fluorsid, based at the Genna Tres Montis mine in the municipality of Silius. The company specialized in the production of synthetic cryolite and aluminum fluoride. It immediately built on site a drying and bagging plant for fluorspar, as well as one for briquetting the fluorite for the steel industry, and a wet production process, implemented in 1972. This process would be used until 1988, when it was decided to switch to dry production.
History:
At the end of the 1980s, however, the problem of the enlargement of the ozone hole emerged: chlorofluorocarbons were considered the main culprits, and some of these compounds were banned by the 1987 Montreal Protocol. This led to a severe crisis in the chemical industry linked to the production of fluorides (in particular hydrofluoric acid), and the Silius mine also went into crisis. The Autonomous Region of Sardinia therefore decided to separate the mining sector from the production and commercial side, taking charge of the former and contributing 80% of the capital of the mines in Gerrei. In 1990, Fluorsid left Silius and focused its business on the production and marketing of fluoroderivatives, purchasing the raw material from other mines around the world, such as in South Africa, Morocco and China. In 2006, 100% of the raw materials were imported. However, Sardinia had not been abandoned; on the contrary, the industrial site of Macchiareddu, in the industrial area of Assemini near Cagliari, was expanded: in 2002 the sulfuric acid plant was started up, with the production of steam and electricity, and at the same time, in the new millennium, the company began to grow worldwide and acquire other companies in the sector, also thanks to the arrival in 2005 of Tommaso Giulini, son of the founder Count Enrico, who took over the leadership of the company. With the closure of the Sardinian mines, in fact, the goal was to take advantage of the energy produced on site and, at the same time, increase production capacity, focus on innovative technologies and diversify production by focusing on the aluminum sector in order to seek new outlets in the Arab and Persian Gulf markets. In 2010 ICIB was acquired, at the time the Italian leader in the market for hydrofluoric acid in solution and synthetic anhydride. Two years later British Fluorspar began mining fluorite in Derbyshire. In 2013 the Swiss office in Lausanne began trading metals, chemicals and minerals, and three years later Fluorsid acquired the Norwegian company Noralf with an investment of 12.5 million euros, buying the second-largest European plant for the production of aluminum fluoride. In 2017 it acquired the SFM plant in Martigny, Switzerland, dedicated to the production of magnesium, and the English company Active Metals, a leader in the production of titanium powders and granules at its Sheffield plant. The following year it acquired 50% of Simplis Logistics, the logistics platform in Bahrain for trade in Asian markets.
History:
In Italy, in 2018, the chemical company Alkeemia was acquired together with its former Solvay plant in Porto Marghera for the production of anhydrous hydrofluoric acid, and in 2021 Fluorsid Deutschland was created, a German branch formed with the acquisition of 50% of CF Carbons, a Frankfurt-based manufacturer of chlorodifluoromethane (also called R22). However, in October of the same year, Fluorsid sold both the subsidiary Alkeemia, including the related assets, namely the Porto Marghera plant, and the investment in Germany to a fund managed by the English investment company Blantyre Capital Limited. Since 2019 all the subsidiaries of the Fluorsid Group have been unified and controlled under the Fluorsid brand.
Products:
The company develops the entire fluorine chain, from the extraction of fluorite to hydrofluoric acid, with the production and marketing of its derivatives as well as the sale of non-ferrous metals dedicated to the aluminum, fluoropolymer and steel markets, but also the production of gypsum and anhydrite for the construction sector. The production of sulfuric acid serves producers of fertilizers and synthetic detergents as well as pharmaceutical companies. Among the chemical products are high-density aluminum fluoride made with a dry process, hydrofluoric acid in aqueous solution (from 2018 to 2021 also in anhydrous form at the Porto Marghera plant, since sold), sulfuric acid from molten sulfur with the double-contact double-absorption process, synthetic cryolite, fluorite (a raw material for many subsequent processes), calcium fluoride and R22, a raw material for the production of many fluoropolymers. Fluorsid also produces calcium sulphate in various forms (raw, milled or granular), sold under the trade name Gypsos. As for metals, the company is the leading European producer of powders, granules and shavings of magnesium, exporting over 90% of its annual production worldwide, as well as powders and granules of titanium. Through partner companies it also trades and produces base metals (copper, zinc, lead, nickel, tin and aluminum), ferrous and non-ferrous alloys, semi-finished metals, steel and other metals for refining, as well as catalysts and products for high-precision applications.
Headquarters and production plants:
The administrative headquarters and the offices of the company are located in Via Vegezio, Milan, in the CityLife district, while the production plants are scattered across several European countries.
Headquarters and production plants:
The company's first plant, that is as well the registered office, is located in Macchiareddu (in the territory of Assemini), an industrial area on the outskirts of Cagliari, in Sardinia. It has been built when fluorite was still extracted in Gerrei. Here aluminum fluoride, sulfuric acid, synthetic cryolite, calcium fluoride and calcium sulfate are produced under the Gypsos brand. Aluminum fluoride is produced in five parallel production lines. The last two built, in 2008 and 2013, are equipped with highly efficient double-bed reactors. Sulfuric acid is produced in two parallel plants, the first built in 2002 and the second, of the same capacity, in 2013. The raw material for both plants is molten sulfur from nearby Saras in Sarroch's refinery, which guarantees the absence of dangerous dust. The process is highly exothermic and, thanks to a very efficient heat recovery, huge quantities of steam are generated and sent to two turbine generators of 5 and 7 MW capacity. These ones, starting from a zero-km by-product, allow the plant to be self-sufficient in terms of sulfuric acid, steam and electricity without the use of fuels, carbon dioxide emissions or other greenhouse gases.In 2010 the group acquired ICIB, which had been the main producer of hydrofluoric acid since 1949, and settled in its Treviglio plant, in Bergamo area. Merging this plant and the one in Porto Marghera, is produced a quantity of hydrofluoric acid equal to 10,000 tons per year.With the acquisition of Noralf in 2013, it is also present in Norway in the Odda plant, capable of producing around 40,000 tons per year of aluminum fluoride.About the mining sector, in Great Britain it has been present since 2012 at Cavendish Mill in Derbyshire within the Peak District National Park, where fluorite is extracted and subsequently transported to the various plants. In the same country there is also a plant, dated 1984, in Sheffield (South Yorkshire) where titanium powders and granules are produced.In Switzerland, the entire magnesium sector is located in Martigny (Canton of Valais). The site specializes in the production of magnesium anodes for the cathodic protection of water heaters, tanks and pipelines. Also in Switzerland, in Zurich and Lausanne, there are offices dedicated to the group's trading activities.Finally, with the subsidiary Simplis Logistics, the company operates in Manama, Bahrain, in the logistics sector and in the transport of production materials.In 2018 Fluorsid acquired from Solvay its plant in Porto Marghera, in the industrial area of Venice. Indeed here there was one of the biggest productions of hydrofluoric acid, in both anhydrous and aqueous form, and of fluoroderivatives (calcium sulphate). The company therefore settled here its company branch called Fluorsid Alkeemia. The plant had an area of 125,000 square meters and every year 27,000 tons of anhydrous hydrofluoric acid and 100,000 tons of calcium sulphate were produced. In February 2021 it was selected by the Italian Ministry of Economic Development for a European Union Project of Common Interest for the development of innovative lithium battery cells and systems. In October 2021 Fluorsid sold the controlled company Alkeemia and its plant.In 2021, with the establishment of Fluorsid Deutschland, the company arrived in Germany at the Höchst plant, near Frankfurt Airport. The plant mainly produced chlorodifluoromethane, a raw material for many fluorine polymers such as PTFE and some special fluoroelastomers. 
The production capacity of the plant was about 24,000 metric tons per year, and the site was managed through the partner Nouryon, its original owner. In October of the same year, alongside the sale of the subsidiary Alkeemia, the 50% stake in CF Carbons was also sold, ending the company's activities in Germany.
Legal controversies:
In May 2017 Fluorsid was implicated in an investigation for environmental disaster which also involved its top management, with accusations of conspiracy and environmental crimes. At the end of the proceedings, the company and its parent company were found to be extraneous to the conduct, because the technical choices had not been made by Fluorsid's top management. Agents of the Regional Forestry Corps seized two areas containing polluting materials: one of three hectares, next to the Fluorsid plant in Macchiareddu, in Sardinia, where piles of materials were stored outdoors, and one of five hectares in Terrasili, in the municipality of Assemini, used for the storage of various materials. The warrant alleged contamination by dispersion of harmful fluorine-containing dust, contamination of the soil by diffusion of fluorine dust onto grazing lands, contamination of groundwater by heavy metals and inorganic compounds, and contamination of livestock by fluorine in Macchiareddu. The investigation closed in December 2018 with 15 suspects, a number that later rose to 22, including managers of public control bodies.
Legal controversies:
On 26 June 2019 the company presented an investment plan of approximately 22 million euros to modernize the area around the Macchiareddu plant and further improve environmental safety compliance. On 25 July of the same year, 11 of the 22 suspects plea-bargained a sentence of 23 months' detention (suspended) and a 7,000 euro fine for pollution, environmental disaster and illegal waste disposal, while the conspiracy charge was dropped. On 18 December 2019 the case was definitively closed with the dismissal of the positions of the remaining 11 suspects, who included the managers of Fluorsid (among them Tommaso Giulini, president of both Cagliari Calcio and the company itself) and of its subsidiaries, as well as managers of the Sardinian Healthcare Board (ATS) and the Sardinian Environmental Protection Agency (ARPAS). Both the plea deal and the final framing of the charges confirmed that Fluorsid and its top management were extraneous to the conduct.
Governance:
Fluorsid consists of eight subsidiaries that report directly to the original company established in 1969. They retain their own organizational charts and legal entities, but are unified from a commercial and brand point of view, with their names preceded by that of the group, Fluorsid. Management is led by the Fluorsid board of directors, to which the boards of the subsidiaries also refer. In terms of organizational reporting, the plant managers of the subsidiaries report directly to Fluorsid's chief executive officer. The whole of Fluorsid is part of the holding company Fluorsid Group, which has full ownership of the professional football club Cagliari Calcio, the main sports club in Sardinia, and minority shareholdings in three other companies in the chemical and metal field: SEMP, Simplis Logistics and Laminazione Sottile.