id            int64          580 to 79M
url           string         lengths 31 to 175
text          string         lengths 9 to 245k
source        string         lengths 1 to 109
categories    string         160 classes
token_count   int64          3 to 51.8k
6,984,314
https://en.wikipedia.org/wiki/Obturator%20ring
An obturator ring was a type of piston ring used in the early rotary engines of some World War I fighter aircraft for improved sealing in the presence of cylinder distortion. Purpose The cylinders of rotary aircraft engines (engines with the crankshaft fixed to the airframe and rotating cylinders) suffered from uneven cylinder cooling, as the side facing the direction of rotation received more cooling air, which led to thermal distortion. To keep weight down, the cylinders on rotary engines had very thin walls (1.5 mm) and some had no cylinder liners. On engine types without cylinder liners, obturator rings, made of bronze in the early Gnome engines, were fitted as these were soft enough not to damage the cylinder walls and could flex to the shape of the cylinder. In operation, wear on the rings was considerable. Engines needed to be overhauled about every 20 hours. The reliability of Gnome engines license-built by The British Gnome and Le Rhone Engine Co. was improved, with an overhaul life of about 80 hours being achieved, mainly as a result of using a special tool to roll the 'L' section obturator rings. Clerget rotary aircraft engines also used obturator rings, which were prone to overheating and seizure. Le Rhône and Bentley BR1/BR2 rotary engines used cylinder liners and were sealed using conventional piston rings rather than obturator rings. See also Piston ring References External links An 'L' section obturator ring is shown in Patent US 1378109A - "Obturator ring". Pistons Aerospace engineering
Obturator ring
Engineering
312
5,355,609
https://en.wikipedia.org/wiki/Ashen%20light
Ashen light is a hypothesised subtle glow that has been claimed to be seen on the night side of the planet Venus. The phenomenon has not been scientifically confirmed, and theories as to the observed phenomenon's cause are numerous, such as emission of light by Venus, or optical phenomena within the observing telescope itself. A modern hypothesis as to the source of light on Venus suggests it to be associated with lightning, for which there is some evidence on Venus. This theory has fallen out of favour, however, as the lightning does not generate enough light to be observed. A more recent hypothesis is that it is a form of transient aurorae or airglow caused by unusually high solar activity interacting with the upper Venusian atmosphere. History of observations While the discovery of the ashen light is often attributed to Italian astronomer Giovanni Battista Riccioli, recent evidence finds that German priest Athanasius Kircher might have been the first to observe the ashen light during his one and only trip to Palermo, Sicily in the spring of 1638. However, the first distinct and detailed record of the ashen light was produced by Riccioli on 9 January 1643, who ascribed it to the refraction of light within the telescope itself: "The colors arise from the various refraction of light in the glass, as it happens with trigonal glasses." This is likely a description of a phenomenon now known as chromatic aberration. Subsequent claims have been made by various observers ever since, including Sir William Herschel, Sir Patrick Moore, Dale P. Cruikshank, and William K. Hartmann. The ashen light has often been sighted when Venus is in the evening sky, when the evening terminator of the planet is toward the Earth. Observation attempts were made on 17 July 2001, when a 67% illuminated Venus reappeared from behind a 13% illuminated Moon. None of the observers of this occurrence (including some using 'Super RADOTS' telescopes) reported seeing the ashen light. Video from the event was captured, but the camera was too insensitive to detect even the earthshine. A particularly favourable viewing opportunity occurred on 8 October 2015, with a 40% illuminated Venus reappearing from behind the unlit limb of a 15% sunlit Moon. The event was visible in dark skies throughout Central Australia and was recorded by David and Joan Dunham (of the International Occultation Timing Association) using a 10" f/4 Newtonian telescope with a Watec 120N+ video camera from a location just north of Alice Springs. They also observed the event visually with an 8" Schmidt–Cassegrain telescope. Neither the real-time visual observation nor close visual inspection of the video recording showed any sign of the dark side of Venus. Light source hypotheses The Keck telescope on Hawaii reported seeing a subtle green glow and suggested it could be produced as ultraviolet light from the Sun splits molecules of carbon dioxide (CO2), known to be common in Venus' atmosphere, into carbon monoxide (CO) and oxygen (O). However, the green light emitted as oxygen recombines to form O2 is thought too faint to explain the effect, and it is too faint to have been observed with amateur telescopes. In 1967, Venera 4 found the Venusian magnetic field to be much weaker than that of Earth. This magnetic field is induced by an interaction between the ionosphere and the solar wind, rather than by an internal dynamo in the core like the one inside Earth.
Venus's small induced magnetosphere provides negligible protection to the atmosphere against cosmic radiation. This radiation may result in cloud-to-cloud lightning discharges. It was hypothesized in 1957 by Urey and Brewer that CO+, CO and O ions produced by the ultraviolet radiation from the Sun were the cause of the glow. In 1969, it was hypothesized that the ashen light is an auroral phenomenon caused by solar particle bombardment of the dark side of Venus. Throughout the 1980s, it was thought that the cause of the glow was lightning on Venus. The Soviet Venera 9 and 10 orbital probes obtained optical and electromagnetic evidence of lightning on Venus. Also, the Pioneer Venus Orbiter recorded visible airglow at Venus in 1978 strong enough to saturate its star sensor. In 1990, Christopher T. Russell and J. L. Phillips gave further support to the lightning hypothesis, stating that if there are several strikes on the night side of the planet, in a sufficiently short period of time, the sequence may give off an overall glow in the skies of Venus. The European Space Agency's Venus Express in 2007 detected whistler waves, providing further evidence for lightning on Venus. The Akatsuki spacecraft, by Japan's space agency JAXA, entered orbit around Venus on 7 December 2015. Part of its scientific payload includes the Lightning and Airglow Camera (LAC), which is looking for lightning in the visible spectrum (552–777 nm). To image lightning, the orbiter has sight of the dark side of Venus for about 30 minutes every 10 days. No lightning has been detected in 16.8 hours of night-side observation (July 2019). Simulations indicate that the lightning hypothesis as the cause of the glow is incorrect, as not enough light could be transmitted through the atmosphere to be seen from Earth. Observers have speculated it may be illusory, resulting from the physiological effect of observing a bright, crescent-shaped object. Spacecraft looking for it have not been able to spot it, leading some astronomers to believe that it is just an enduring myth. A more recent hypothesis is that unusually high solar activity could induce auroral or airglow-like effects on the dark side of Venus. It has been observed that after major solar storms, an emission of light with a wavelength of 557.7 nm (the oxygen green line) occurs across the entire upper atmosphere of Venus. This is the same phenomenon which gives some aurorae on Earth their greenish appearance. Generally, this emission does not occur except during major solar events such as coronal mass ejections (CMEs) or solar flares. However, dim emissions have been detected twice outside of solar storms, on December 27, 2010, and December 12, 2013. Both of these detections coincided with the passage of a "Stream Interaction Region", a region of denser-than-average solar wind. In July 2012, a CME struck Venus, producing a very bright green line emission. Notably, this emission is detected after every CME impact on Venus, but not after every flare. This is taken to indicate that charged particles are responsible for the green line emission, similar to Earth's aurora. See also Atmosphere of Venus References Planetary science Venus Light sources Unexplained phenomena
Ashen light
Astronomy
1,384
18,813,134
https://en.wikipedia.org/wiki/Protosuchia
Protosuchia is a group of extinct Mesozoic crocodyliforms. They were small in size (~1 meter in length) and terrestrial. In phylogenetic terms, Protosuchia is considered an informal group because it is a grade of basal crocodyliforms, not a true clade. Classification Recent phylogenetic analyses have not supported Protosuchia as a natural group. However, two studies found a clade of Late Triassic-Early Jurassic animals: Edentosuchus Hemiprotosuchus Orthosuchus Protosuchus Both of these studies also found a clade more closely related to Hsisosuchus and Mesoeucrocodylia consisting of Late Jurassic-Late Cretaceous genera: Neuquensuchus Shantungosuchus Sichuanosuchus Zosuchus However, other possible protosuchians from the Late Cretaceous of China-Mongolia, the Gobiosuchidae (Gobiosuchus and Zaraasuchus), have been found to be either intermediate between these two clades, or members of the Sichuanosuchus clade. There is also another family of Late Jurassic-Late Cretaceous genera, the Shartegosuchidae (e.g. Kyasuchus, Shartegosuchus and Nominosuchus). Below is a cladogram from Fiorelli and Calvo (2007). Protosuchians are marked by the green bracket. References Crocodyliforms Terrestrial crocodylomorphs Triassic crocodylomorpha Jurassic crocodylomorphs Early Cretaceous crocodylomorphs Late Cretaceous crocodylomorphs Late Triassic first appearances Late Cretaceous extinctions Paraphyletic groups
Protosuchia
Biology
350
38,266,327
https://en.wikipedia.org/wiki/Rhombitetrapentagonal%20tiling
In geometry, the rhombitetrapentagonal tiling is a uniform tiling of the hyperbolic plane. Its Schläfli symbol is t0,2{4,5}. Dual tiling The dual is called the deltoidal tetrapentagonal tiling with face configuration V.4.4.4.5. Related polyhedra and tiling References John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things, 2008 (Chapter 19, The Hyperbolic Archimedean Tessellations) See also Uniform tilings in hyperbolic plane List of regular polytopes External links Hyperbolic and Spherical Tiling Gallery KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings Hyperbolic Planar Tessellations, Don Hatch Hyperbolic tilings Isogonal tilings Uniform tilings
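As an illustrative aside (not stated in the article itself, but the standard criterion for regular tilings), the {4,5} symmetry underlying this tiling can be checked to be hyperbolic rather than Euclidean or spherical:

\[
  \frac{1}{p} + \frac{1}{q} = \frac{1}{4} + \frac{1}{5} = \frac{9}{20} < \frac{1}{2},
\]

so {4,5}, and hence its cantellation t0,2{4,5}, tiles the hyperbolic plane (equality would give a Euclidean tiling, and a larger sum a spherical one).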
Rhombitetrapentagonal tiling
Physics
187
927,421
https://en.wikipedia.org/wiki/Molecular%20lesion
A molecular lesion or point lesion is damage to the structure of a biological molecule such as DNA, RNA, or protein. This damage may result in the reduction or absence of normal function, and in rare cases the gain of a new function. Lesions in DNA may consist of breaks or other changes in the chemical structure of the helix, ultimately preventing transcription. Meanwhile, lesions in proteins consist of both broken bonds and improper folding of the amino acid chain. While many nucleic acid lesions are general across DNA and RNA, some are specific to one, such as thymine dimers being found exclusively in DNA. Several cellular repair mechanisms exist, ranging from global to specific, in order to prevent lasting damage resulting from lesions. Causes There are two broad causes of nucleic acid lesions: endogenous and exogenous factors. Endogenous factors are those that arise within an organism. This is in contrast with exogenous factors, which originate from outside the organism. DNA and RNA lesions caused by endogenous factors generally occur more frequently than damage caused by exogenous ones. Endogenous Factors Endogenous sources of specific DNA damage include pathways like hydrolysis, oxidation, alkylation, mismatch of DNA bases, depurination, depyrimidination, double-strand breaks (DSBs), and cytosine deamination. DNA lesions can also naturally occur from the release of specific compounds such as reactive oxygen species (ROS), reactive nitrogen species (RNS), reactive carbonyl species (RCS), lipid peroxidation products, adducts, and alkylating agents through metabolic processes. ROS is one of the major endogenous sources of DNA damage and the most studied oxidative DNA adduct is 8-oxo-dG. Other adducts known to form are etheno-, propano-, and malondialdehyde-derived DNA adducts. The aldehydes formed from lipid peroxidation also pose another threat to DNA. Proteins such as "damage-up" proteins (DDPs) can promote endogenous DNA lesions by increasing the amount of reactive oxygen through transmembrane transporters, causing chromosome loss through replisome binding, or stalling replication through transcription factors. For RNA lesions specifically, the most abundant types of endogenous damage include oxidation, alkylation, and chlorination. Phagocytic cells produce radical species that include hypochlorous acid (HOCl), nitric oxide (NO•), and peroxynitrite (ONOO−) to fight infections, and many cell types use nitric oxide as a signaling molecule. However, these radical species can also trigger the pathways that form RNA lesions. Exogenous Factors Ultraviolet Radiation UV light, specifically non-ionizing shorter-wavelength radiation such as UVC and UVB, causes direct DNA damage by initiating a synthesis reaction between two thymine molecules. The resulting dimer is very stable. Although they can be removed through excision repair, when UV damage is extensive, the entire DNA molecule breaks down and the cell dies. If the damage is not too extensive, precancerous or cancerous cells are created from healthy cells. Chemotherapy drugs Chemotherapeutics, by design, induce DNA damage and are targeted towards rapidly dividing cancer cells. However, these drugs cannot tell the difference between cancerous and healthy cells, resulting in damage to normal cells. Alkylating agents Alkylating agents are a type of chemotherapeutic drug that keeps cells from undergoing mitosis by damaging their DNA. They work in all phases of the cell cycle.
The use of alkylating agents may result in leukemia due to their ability to target the cells of the bone marrow. Cancer-causing agents Carcinogens are known to cause a number of DNA lesions, such as single-strand breaks, double-strand breaks, and covalently bound chemical DNA adducts. Tobacco products are one of the most prevalent cancer-causing agents of today. Other DNA damaging, cancer-causing agents include asbestos, which can cause damage through physical interaction with DNA or by indirectly setting off a reactive oxygen species, excessive nickel exposure, which can repress the DNA damage-repair pathways, aflatoxins, which are found in food, and many more. Lesions of Nucleic Acids Oxidative lesions Oxidative lesions are an umbrella category of lesions caused by reactive oxygen species (ROS), reactive nitrogen species (RNS), other byproducts of cellular metabolism, and exogenous factors such as ionizing or ultraviolet radiation. Byproducts of oxidative respiration are the main source of reactive species, which cause a background level of oxidative lesions in the cell. DNA and RNA are both affected by this, and oxidative lesions have been found to be more abundant in RNA than in DNA in humans. This may be due to cytoplasmic RNA being in closer proximity to the electron transport chain. Many oxidative lesions have been characterized in DNA and RNA, although oxidized products are unstable and may resolve quickly. The hydroxyl radical and singlet oxygen are common reactive oxygen species responsible for these lesions. 8-oxo-guanine (8-oxoG) is the most abundant and well characterized oxidative lesion, found in both RNA and DNA. Accumulation of 8-oxoG may cause dire damage within the mitochondria and is thought to be a key player in the aging process. RNA oxidation has direct consequences in the production of proteins. mRNA affected by oxidative lesions is still recognized by the ribosome, but the ribosome will undergo stalling and dysfunction. This results in proteins having either decreased expression or truncation, leading to aggregation and general dysfunction. Structural rearrangements Depurination is caused by hydrolysis and results in loss of the purine base of a nucleic acid. DNA is more prone to this, as the transition state in the depurination reaction has much greater energy in RNA. Tautomerization is a chemical reaction that is primarily relevant to the behavior of amino acids and nucleic acids, both of which are related to DNA and RNA. The process of tautomerization of DNA bases occurs during DNA replication. The ability of the wrong tautomer of one of the standard nucleic bases to mispair causes a mutation during the process of DNA replication, which can be cytotoxic or mutagenic to the cell. These mispairings can result in transition, transversion, frameshift, deletion, and/or duplication mutations. Some diseases that result from tautomerization-induced DNA lesions include Kearns-Sayre syndrome, Fragile X syndrome, Kennedy disease, and Huntington's disease. Cytosine deamination commonly occurs under physiological conditions and yields uracil, a base not normally found within DNA. This process causes extensive DNA damage. The rate of this process is slowed down significantly in double-stranded DNA compared to single-stranded DNA.
Single and Double Stranded Breaks Single-strand breaks (SSBs) occur when one strand of the DNA double helix experiences breakage of a single nucleotide accompanied by damaged 5'- and/or 3'-termini at this point. One common source of SSBs is oxidative attack by physiological reactive oxygen species (ROS) such as hydrogen peroxide. H2O2 causes SSBs three times more frequently than double-strand breaks (DSBs). Alternative methods of SSB acquisition include direct disintegration of the oxidized sugar or DNA base-excision repair (BER) of damaged bases. Additionally, cellular enzymes may perform erroneous activity leading to SSBs or DSBs by a variety of mechanisms. One example is the cleavage complex formed by DNA topoisomerase 1 (TOP1), which relaxes DNA during transcription and replication through the transient formation of a nick. While TOP1 normally reseals this nick shortly after, these cleavage complexes may collide with RNA or DNA polymerases or be proximal to other lesions, leading to TOP1-linked SSBs or TOP1-linked DSBs. Chemical Adducts A DNA adduct is a segment of DNA bound to a chemical carcinogen. Some adducts that cause lesions to DNA include oxidatively modified bases and propano-, etheno-, and MDA-induced adducts. 5-Hydroxymethyluracil is an example of an oxidatively modified base, formed by oxidation of the methyl group of thymine. This adduct interferes with the binding of transcription factors to DNA, which can trigger apoptosis or result in deletion mutations. Propano adducts are derived from species generated by lipid peroxidation. For example, HNE is a major toxic product of the process. It regulates the expression of genes that are involved in cell cycle regulation and apoptosis. Some of the aldehydes from lipid peroxidation can be converted to epoxy aldehydes by oxidation reactions. These epoxy aldehydes can damage DNA by producing etheno adducts. An increase in this type of DNA lesion indicates oxidative stress, which is known to be associated with an increased risk of cancer. Malondialdehyde (MDA) is another highly toxic product of lipid peroxidation and is also formed during prostaglandin synthesis. MDA reacts with DNA to form the M1dG adduct, which causes DNA lesions.
TLS polymerases allow DNA replication to proceed past lesions, at the risk of generating mutations at a high frequency. Common mutations that occur after undergoing this process are point mutations and frameshift mutations. Several diseases come as a result of this process, including some cancers and xeroderma pigmentosum. Oxidatively damaged RNA has been implicated in a number of human diseases and is especially associated with chronic degeneration. This type of damage has been observed in many neurodegenerative diseases such as amyotrophic lateral sclerosis, Alzheimer's, Parkinson's, dementia with Lewy bodies, and several prion diseases. This list is growing rapidly, and data suggest that RNA oxidation occurs early in the development of these diseases, rather than as an effect of cellular decay. RNA and DNA lesions are both associated with the development of diabetes mellitus type 2. Repair Mechanisms DNA Damage Response When DNA is damaged, such as by a lesion, a complex signal transduction pathway is activated that recognizes the damage and initiates the cell's repair response. Compared to the other lesion repair mechanisms, the DNA damage response (DDR) is the highest level of repair and is employed for the most complex lesions. DDR consists of various pathways, the most common of which are the DDR kinase signaling cascades. These are controlled by phosphatidylinositol 3-kinase-related kinases (PIKKs) and range from DNA-dependent protein kinase (DNA-PKcs) and ataxia telangiectasia-mutated (ATM), which are most involved in repairing DSBs, to the more versatile ataxia telangiectasia and Rad3-related kinase (ATR). ATR is crucial to human cell viability, while ATM mutations cause the severe disorder ataxia-telangiectasia, leading to neurodegeneration, cancer, and immunodeficiency. These three DDR kinases all recognize damage via protein-protein interactions which localize the kinases to the areas of damage. Next, further protein-protein interactions and posttranslational modifications (PTMs) complete the kinase activation, and a series of phosphorylation events takes place. DDR kinases perform repair regulation at three levels: via PTMs, at the level of chromatin, and at the level of the nucleus. Base Excision Repair Base excision repair (BER) is responsible for removing damaged bases in DNA. This mechanism specifically works on excising small base lesions which do not distort the DNA double helix, in contrast to the nucleotide excision repair pathway which is employed in correcting more prominent distorting lesions. DNA glycosylases initiate BER by recognizing faulty or incorrect bases and removing them, forming AP sites lacking any purine or pyrimidine. AP endonuclease then cleaves the AP site, and the single-strand break is processed either by short-patch BER to replace a single nucleotide or by long-patch BER to create 2-10 replacement nucleotides. Single Stranded Break Repair Single-strand breaks (SSBs) can severely threaten genetic stability and cell survival if not quickly and properly repaired, so cells have developed fast and efficient SSB repair (SSBR) mechanisms. While global SSBR systems detect and repair SSBs throughout the genome and during interphase, S-phase-specific SSBR processes work together with homologous recombination at the replication forks. Double Stranded Break Repair Double-strand breaks (DSBs) are a threat to all organisms, as they can cause cell death and cancer.
They can be caused exogenously as a result of radiation and endogenously from errors in replication or encounters with DNA lesions by the replication fork. DSB repair occurs through a variety of pathways and mechanisms that correct these errors. Nucleotide Excision and Mismatch Repair Nucleotide excision repair (NER) is one of the main mechanisms used to remove bulky adducts from DNA lesions caused by chemotherapy drugs, environmental mutagens, and most importantly UV radiation. This mechanism functions by releasing a short damage-containing oligonucleotide from the DNA site; the resulting gap is then filled in. NER recognizes a variety of structurally unrelated DNA lesions due to the flexibility of the mechanism itself, as NER is highly sensitive to changes in the DNA helical structure. Bulky adducts seem to trigger NER. The XPC-RAD23-CETN2 heterotrimer involved with NER has a critical role in DNA lesion recognition. In addition to its role at other general lesions in the genome, the UV-damaged DNA-binding protein complex (UV-DDB) also has an important role in both the recognition and repair of UV-induced DNA photolesions. Mismatch repair (MMR) mechanisms within the cell correct base mispairs that occur during replication using a variety of pathways. MMR targets DNA lesions with high affinity and specificity, as alterations in base-pair stacking at DNA lesion sites affect the helical structure. This is likely one of many signals that trigger MMR. References Molecular biology
Molecular lesion
Chemistry,Biology
3,345
9,014,291
https://en.wikipedia.org/wiki/Badger%20Creek%20Wilderness
The Badger Creek Wilderness is a wilderness area located east of Mount Hood in the northwestern Cascades of Oregon, United States. It is one of six designated wilderness areas in the Mount Hood National Forest, the others being Mark O. Hatfield, Salmon-Huckleberry, Mount Hood, Mount Jefferson, and Bull of the Woods. Topography The elevation of Badger Creek Wilderness ranges from . Steep-walled glacial valleys lead to the top of Lookout Mountain, at . Annual precipitation in the Wilderness ranges from on the western ridges to in the dry eastern lowlands. Three creeks drain the Wilderness: Badger, Little Badger, and Tygh. Vegetation Lookout Mountain and the high ridgeland extending east support a subalpine ecosystem, with hardy trees and rocky terrain. Penstemon, Indian paintbrush, yellow avalanche lilies, and stonecrop are common in the area. Farther east in the Wilderness the climate is warm and dry, where ponderosa pine forest and extensive growths of Oregon white oak and grasslands are common. Larkspur, shooting star, lupine, balsamroot, death camas, and purple onion can be found in the area. Recreation Common recreational activities in Badger Creek Wilderness include hiking, camping, wildlife watching, cross-country skiing, and horseback riding. There are approximately of developed trails in the Wilderness. These trails lead to Lookout Mountain, Flag Point fire lookout, Badger Lake, and along Badger, Little Badger, and Tygh Creeks. There are several primitive campsites in the wilderness. The Bonney Butte area of the wilderness is in Mount Hood National Recreation Area. See also List of Oregon Wildernesses List of U.S. Wilderness Areas Old growth List of old growth forests Wilderness Act References External links Mt. Hood National Forest - Wilderness Cascade Range Old-growth forests Wilderness areas of Oregon Protected areas of Hood River County, Oregon Protected areas of Wasco County, Oregon Mount Hood National Forest 1984 establishments in Oregon Protected areas established in 1984
Badger Creek Wilderness
Biology
391
4,905,213
https://en.wikipedia.org/wiki/Thomson%20%28unit%29
The thomson (symbol: Th) is a unit that has appeared infrequently in scientific literature relating to the field of mass spectrometry as a unit of mass-to-charge ratio. The unit was proposed by R. Graham Cooks and Alan L. Rockwood, who named it in honour of J. J. Thomson, who measured the mass-to-charge ratio of electrons and ions. Definition The thomson is defined as 1 Th = 1 Da/e, where Da is the symbol for the unit dalton (also called the unified atomic mass unit, symbol u), and e is the elementary charge, which is the unit of electric charge in the system of atomic units. For example, the ion C7H7^2+ has a mass of 91 Da. Its charge number is +2, and hence its charge is 2e. The ion will be observed at 45.5 Th in a mass spectrum. The thomson allows for negative values for negatively charged ions. For example, the benzoate anion would be observed at −121 Th since the charge is −e. Use The thomson has been used by some mass spectrometrists, for example Alexander Makarov, the inventor of the Orbitrap, in a scientific poster and a 2015 presentation. Other uses of the thomson include papers, and (notably) one book. The journal Rapid Communications in Mass Spectrometry (in which the original article appeared) states that "the thomson (Th) may be used for such purposes as a unit of mass-to-charge ratio although it is not currently approved by IUPAP or IUPAC." Even so, the term has been called "controversial" by RCM's former Editor-in-Chief (in a review of the Hoffman text cited above). The book, Mass Spectrometry Desk Reference, argues against the use of the thomson. However, the editor-in-chief of the Journal of the Mass Spectrometry Society of Japan has written an editorial in support of the thomson unit. The thomson is not an SI unit, nor has it been defined by IUPAC. Since 2013, the thomson has been deprecated by IUPAC (Definitions of Terms Relating to Mass Spectrometry). Since 2014, Rapid Communications in Mass Spectrometry regards the thomson as a "term that should be avoided in mass spectrometry publications". References Units of measurement Mass spectrometry
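The worked example above can be restated compactly; this is an illustrative sketch based only on the definition and the numbers already given in the article text, not additional source material:

\[
  1\,\mathrm{Th} = \frac{1\,\mathrm{Da}}{e},
  \qquad
  \frac{m}{z} = \frac{91\,\mathrm{Da}}{2e} = 45.5\,\mathrm{Th}
  \quad \text{for the ion } \mathrm{C_7H_7^{2+}}.
\]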
Thomson (unit)
Physics,Chemistry,Mathematics
495
18,180,411
https://en.wikipedia.org/wiki/Siderophilic%20bacteria
Siderophilic bacteria are bacteria whose growth requires or is facilitated by free iron. They may include Vibrio vulnificus, Listeria monocytogenes, Yersinia enterocolitica, Salmonella enterica (serotype Typhimurium), Klebsiella pneumoniae and Escherichia coli. One possible complication of haemochromatosis is susceptibility to infections from these species. Certain non-bacterial microorganisms such as Rhizopus arrhizus and Mucor may also be siderophilic. See also Iron-oxidizing bacteria Dissimilatory metal-reducing bacteria References Bacteria
Siderophilic bacteria
Biology
142
2,925,602
https://en.wikipedia.org/wiki/Pulegone
Pulegone is a naturally occurring organic compound obtained from the essential oils of a variety of plants such as Nepeta cataria (catnip), Mentha piperita, and pennyroyal. It is classified as a monoterpenoid, which means that it is an oxidized derivative of a monoterpene, a class of naturally occurring C10 hydrocarbons. Pulegone is a colorless oil with a pleasant odor similar to pennyroyal, peppermint, and camphor. It is used in flavoring agents, in perfumery, and in aromatherapy. Isolation and some uses Pulegone comprises 75% of the oil pressed from pennyroyal, which is cultivated for that purpose. Hydrogenation of pulegone gives menthone. Pulegone is also a precursor to menthofuran, another flavorant. Toxicology It was reported that the chemical is toxic to rats if a large quantity is consumed. Pulegone is also an insecticide, the most powerful of three insecticides naturally occurring in many mint species. In October 2018, the FDA withdrew authorization for the use of pulegone as a synthetic flavoring substance in food, but naturally occurring pulegone can continue to be used. Sources Creeping charlie Mentha longifolia Mentha suaveolens Pennyroyal Peppermint Schizonepeta tenuifolia Bursera graveolens See also Menthofuran Menthol References Ketones Flavors Cooling flavors Perfume ingredients Monoterpenes IARC Group 2B carcinogens
Pulegone
Chemistry
326
1,044,681
https://en.wikipedia.org/wiki/Mesh
A mesh is a barrier made of interlaced strands of metal, fiber or other flexible or ductile materials. A mesh is similar to a web or a net in that it has many interwoven strands. Types A plastic mesh may be extruded, oriented, expanded, woven or tubular. It can be made from polypropylene, polyethylene, nylon, PVC or PTFE. A metal mesh may be woven, knitted, welded, expanded, sintered, photo-chemically etched or electroformed (screen filter) from steel or other metals. In clothing, mesh is loosely woven or knitted fabric that has many closely spaced holes. Knitted mesh is frequently used for modern sports jerseys and other clothing like hosiery and lingerie. A meshed skin graft is a piece of harvested skin that has been systematically fenestrated to create a mesh-like patch. Meshing of skin grafts provides coverage of a greater surface area at the recipient site, and also allows for the egress of excess serous or sanguinous fluid, which can compromise the graft establishment via formation of haematoma or seroma. However, it results in a rather pebbled appearance upon healing that may ultimately look less aesthetically pleasing. Fiberglass mesh is a neatly woven, crisscross pattern of fiberglass thread that can be used to create new products such as door screens, filtration components, and reinforced adhesive tapes. It is commonly sprayed with a PVC coating to make it stronger and longer-lasting and to prevent skin irritation. Coiled wire fabric is a type of mesh that is constructed by interlocking metal wire coils via a simple corkscrew method. The resulting spirals are then woven together to create a flexible metal fabric panel. Coiled wire fabric mesh is a product that is used by architects to design commercial and residential structures. It is also used in industrial settings to protect personnel and contain debris. Additionally, coiled wire fabric mesh is used for zoo enclosures, typically aviary and small mammal exhibits. Uses Meshes are often used to screen out insects. Wire screens on windows and mosquito netting are meshes. Wire screens can be used to shield against radio frequency radiation, e.g. in microwave ovens and Faraday cages. Metal and nylon wire mesh filters are used in filtration. Wire mesh is used in guarding for secure areas and as protection in the form of vandal screens. Wire mesh can be fabricated to produce park benches, waste baskets and other baskets for material handling. Woven meshes are basic to screen printing. Surgical mesh is used to provide a reinforcing structure in surgical procedures like inguinal hernioplasty, and umbilical hernia repair. Meshes are used as drum heads in practice and electronic drum sets. Fence for livestock or poultry (chicken wire or hardware cloth) Humane animal trapping uses woven or welded wire mesh cages (chicken wire or hardware cloth) to trap wild animals like raccoons and skunks in populated areas. Meshes can be used for eyes in masks. See also Expanded metal Faraday cage Gauze Wire gauze Heating mantle Latticework Sieve References External links Woven fabrics Net fabrics Filters Building materials Steel
Mesh
Physics,Chemistry,Engineering
659
13,683,591
https://en.wikipedia.org/wiki/Phytochelatin
Phytochelatins are oligomers of glutathione, produced by the enzyme phytochelatin synthase. They are found in plants, fungi, nematodes and all groups of algae including cyanobacteria. Phytochelatins act as chelators, and are important for heavy metal detoxification. They are abbreviated PC2 through PC11. A mutant Arabidopsis thaliana lacking phytochelatin synthase is very sensitive to cadmium, but it grows just as well as the wild-type plant at normal concentrations of zinc and copper, two essential metal ions, indicating that phytochelatin is only involved in resistance to metal poisoning. Because phytochelatin synthase uses glutathione with a blocked thiol group in the synthesis of phytochelatin, the presence of heavy metal ions that bind to glutathione causes the enzyme to work faster. Therefore, the amount of phytochelatin increases when the cell needs more phytochelatin to survive in an environment with high concentrations of metal ions. Phytochelatin binds to Pb ions, leading to their sequestration in plants, and thus serves as an important component of the detoxification mechanism in plants. Phytochelatin seems to be transported into the vacuole of plants, so that the metal ions it carries are stored safely away from the proteins of the cytosol. Related peptides There are groups of other peptides with a similar structure to phytochelatin, but where the last amino acid is not glycine. History Phytochelatin was first discovered in 1981 in fission yeast, and was named cadystin. It was then found in higher plants in 1985 and was named phytochelatin. In 1989, the biosynthetic enzyme, phytochelatin synthase, was discovered. See also Dunaliella References External links Strong induction of phytochelatin synthesis by zinc in marine green alga, Dunaliella tertiolecta. Peptides Chelating agents
Phytochelatin
Chemistry
441
1,023,575
https://en.wikipedia.org/wiki/Glioblastoma
Glioblastoma, previously known as glioblastoma multiforme (GBM), is the most aggressive and most common type of cancer that originates in the brain, and has a very poor prognosis for survival. Initial signs and symptoms of glioblastoma are nonspecific. They may include headaches, personality changes, nausea, and symptoms similar to those of a stroke. Symptoms often worsen rapidly and may progress to unconsciousness. The cause of most cases of glioblastoma is not known. Uncommon risk factors include genetic disorders, such as neurofibromatosis and Li–Fraumeni syndrome, and previous radiation therapy. Glioblastomas represent 15% of all brain tumors. They are thought to arise from astrocytes. The diagnosis typically is made by a combination of a CT scan, MRI scan, and tissue biopsy. There is no known method of preventing the cancer. Treatment usually involves surgery, after which chemotherapy and radiation therapy are used. The medication temozolomide is frequently used as part of chemotherapy. High-dose steroids may be used to help reduce swelling and decrease symptoms. Surgical removal (decompression) of the tumor is linked to increased survival, but only by some months. Despite maximum treatment, the cancer almost always recurs. The typical duration of survival following diagnosis is 10–13 months, with fewer than 5–10% of people surviving longer than five years. Without treatment, survival is typically three months. It is the most common cancer that begins within the brain and the second-most common brain tumor, after meningioma, which is benign in most cases. About 3 in 100,000 people develop the disease per year. The average age at diagnosis is 64, and the disease occurs more commonly in males than females. Tumors of the central nervous system are the 10th leading cause of death worldwide, with up to 90% being brain tumors. Glioblastoma multiforme (GBM) is derived from astrocytes and accounts for 49% of all malignant central nervous system tumors, making it the most common form of central nervous system cancer. Despite countless efforts to develop new therapies for GBM over the years, the median survival of GBM patients worldwide is 8 months; radiation and chemotherapy standard-of-care treatment beginning shortly after diagnosis improve the median survival to around 14 months, with a five-year survival rate of 5–10%. The five-year survival rate for individuals with any form of primary malignant brain tumor is 20%. Even when all detectable traces of the tumor are removed through surgery, most patients with GBM experience recurrence of their cancer. Signs and symptoms Common symptoms include seizures, headaches, nausea and vomiting, memory loss, changes to personality, mood or concentration, and localized neurological problems. The kinds of symptoms produced depend more on the location of the tumor than on its pathological properties. The tumor can start producing symptoms quickly, but occasionally remains asymptomatic until it reaches an enormous size. Risk factors The cause of most cases is unclear. The best-known risk factor is exposure to ionizing radiation, and CT scan radiation is an important cause. About 5% develop from certain hereditary syndromes. Genetics Uncommon risk factors include genetic disorders such as neurofibromatosis, Li–Fraumeni syndrome, tuberous sclerosis, or Turcot syndrome. Previous radiation therapy is also a risk. For unknown reasons, it occurs more commonly in males.
Environmental Other associations include smoking, exposure to pesticides, and working in petroleum refining or rubber manufacturing. Glioblastoma has been associated with the viruses SV40, HHV-6, and cytomegalovirus (CMV). Infection with an oncogenic CMV may even be necessary for the development of glioblastoma. Other Research has been done to see if consumption of cured meat is a risk factor. No risk had been confirmed as of 2003. Similarly, exposure to formaldehyde, and residential electromagnetic fields, such as from cell phones and electrical wiring within homes, have been studied as risk factors. As of 2015, they had not been shown to cause GBM. Pathogenesis The cellular origin of glioblastoma is unknown. Because of the similarities in immunostaining of glial cells and glioblastoma, gliomas such as glioblastoma have long been assumed to originate from glial-type stem cells found in the subventricular zone. More recent studies suggest that astrocytes, oligodendrocyte progenitor cells, and neural stem cells could all serve as the cell of origin. GBMs usually form in the cerebral white matter, grow quickly, and can become very large before producing symptoms. Since the function of glial cells in the brain is to support neurons, they have the ability to divide, to enlarge, and to extend cellular projections along neurons and blood vessels. Once cancerous, these cells are predisposed to spread along existing paths in the brain, typically along white-matter tracts, blood vessels and the perivascular space. The tumor may extend into the meninges or ventricular wall, leading to high protein content in the cerebrospinal fluid (CSF) (> 100 mg/dl), as well as an occasional pleocytosis of 10 to 100 cells, mostly lymphocytes. Malignant cells carried in the CSF may spread (rarely) to the spinal cord or cause meningeal gliomatosis. However, metastasis of GBM beyond the central nervous system is extremely unusual. About 50% of GBMs occupy more than one lobe of a hemisphere or are bilateral. Tumors of this type usually arise from the cerebrum and may exhibit the classic infiltration across the corpus callosum, producing a butterfly (bilateral) glioma. Glioblastoma classification Brain tumor classification has traditionally been based on histopathology, assessed in hematoxylin-eosin sections. The World Health Organization published the first standard classification in 1979 and has periodically updated it since. The 2007 WHO Classification of Tumors of the Central Nervous System was the last classification mainly based on microscopy features. The new 2016 WHO Classification of Tumors of the Central Nervous System was a paradigm shift: some tumors were defined by their genetic composition as well as their cell morphology. In 2021, the fifth edition of the WHO Classification of Tumors of the Central Nervous System was released. This update eliminated the classification of secondary glioblastoma and reclassified those tumors as Astrocytoma, IDH mutant, grade 4. Only tumors that are IDH wild type are now classified as glioblastoma. Molecular alterations There are currently three molecular subtypes of glioblastoma that were identified based on gene expression: Classical: Around 97% of tumors in this subtype carry extra copies of the epidermal growth factor receptor (EGFR) gene, and most have higher than normal expression of EGFR, whereas the gene TP53 (p53), which is often mutated in glioblastoma, is rarely mutated in this subtype.
Loss of heterozygosity in chromosome 10 is also frequently seen in the classical subtype alongside chromosome 7 amplification. The proneural subtype often has high rates of alterations in TP53 (p53), and in PDGFRA, the gene encoding the a-type platelet-derived growth factor receptor. The mesenchymal subtype is characterized by high rates of mutations or other alterations in NF1, the gene encoding neurofibromin 1, along with fewer alterations in the EGFR gene and lower expression of EGFR than other types. Initial analyses of gene expression had revealed a fourth neural subtype. However, further analyses revealed that this subtype is not tumor-specific and likely reflects contamination by normal cells. Many other genetic alterations have been described in glioblastoma, and the majority of them are clustered in two pathways, RB and PI3K/AKT; 68–78% and 88% of glioblastomas have alterations in these pathways, respectively. Another important alteration is methylation of MGMT, a "suicide" DNA repair enzyme. Methylation impairs DNA transcription and expression of the MGMT gene. Since the MGMT enzyme can repair only one DNA alkylation due to its suicide repair mechanism, reserve capacity is low and methylation of the MGMT gene promoter greatly affects DNA-repair capacity. MGMT methylation is associated with an improved response to treatment with DNA-damaging chemotherapeutics, such as temozolomide. Studies using genome-wide profiling have revealed glioblastomas to have a remarkable genetic variety. At least three distinct paths in the development of glioblastomas have been identified with the aid of molecular investigations. The first pathway involves the amplification and mutational activation of receptor tyrosine kinase (RTK) genes, leading to the dysregulation of growth factor signaling. Epidermal growth factor (EGF), vascular endothelial growth factor (VEGF), and platelet-derived growth factor (PDGF) are all recognized by transmembrane proteins called RTKs. Additionally, they can function as receptors for hormones, cytokines, and other signaling molecules. The second pathway involves activating the intracellular signaling system known as phosphatidylinositol-3-OH kinase (PI3K)/AKT/mTOR, which is crucial for controlling cell survival. The third pathway is defined by p53 and retinoblastoma (Rb) tumor suppressor pathway inactivation. Cancer stem cells Glioblastoma cells with properties similar to progenitor cells (glioblastoma cancer stem cells) have been found in glioblastomas. Their presence, coupled with glioblastoma's diffuse nature, makes complete surgical removal difficult, and is therefore believed to be a likely cause of resistance to conventional treatments and of the high recurrence rate. Glioblastoma cancer stem cells share some resemblance with neural progenitor cells, both expressing the surface receptor CD133. CD44 can also be used as a cancer stem cell marker in a subset of glioblastoma tumour cells. Glioblastoma cancer stem cells appear to exhibit enhanced resistance to radiotherapy and chemotherapy mediated, at least in part, by up-regulation of the DNA damage response. Metabolism The IDH1 gene encodes the enzyme isocitrate dehydrogenase 1 and is not mutated in glioblastoma. As such, these tumors behave more aggressively than IDH1-mutated astrocytomas. Ion channels Furthermore, GBM exhibits numerous alterations in genes that encode ion channels, including upregulation of gBK potassium channels and ClC-3 chloride channels.
By upregulating these ion channels, glioblastoma tumor cells are hypothesized to facilitate increased ion movement across the cell membrane, thereby increasing H2O movement through osmosis, which aids glioblastoma cells in changing cellular volume very rapidly. This is helpful in their extremely aggressive invasive behavior because quick adaptations in cellular volume can facilitate movement through the sinuous extracellular matrix of the brain. MicroRNA As of 2012, RNA interference, usually microRNA, was under investigation in tissue culture, pathology specimens, and preclinical animal models of glioblastoma. Additionally, experimental observations suggest that microRNA-451 is a key regulator of LKB1/AMPK signaling in cultured glioma cells and that miRNA clustering controls epigenetic pathways in the disease. Tumor vasculature GBM is characterized by abnormal vessels with disrupted morphology and functionality. The high permeability and poor perfusion of the vasculature result in a disorganized blood flow within the tumor and can lead to increased hypoxia, which in turn facilitates cancer progression by promoting processes such as immunosuppression. Diagnosis When viewed with MRI, glioblastomas often appear as ring-enhancing lesions. The appearance is not specific, however, as other lesions such as abscess, metastasis, tumefactive multiple sclerosis, and other entities may have a similar appearance. Definitive diagnosis of a suspected GBM on CT or MRI requires a stereotactic biopsy or a craniotomy with tumor resection and pathologic confirmation. Because the tumor grade is based upon the most malignant portion of the tumor, biopsy or subtotal tumor resection can result in undergrading of the lesion. Imaging of tumor blood flow using perfusion MRI and measuring tumor metabolite concentration with MR spectroscopy may add diagnostic value to standard MRI in select cases by showing increased relative cerebral blood volume and increased choline peak, respectively, but pathology remains the gold standard for diagnosis and molecular characterization. Distinguishing glioblastoma from high-grade astrocytoma is important. These tumors occur spontaneously (de novo) and have not progressed from a lower-grade glioma, as in high-grade astrocytomas. Glioblastomas have a worse prognosis and different tumor biology, and may have a different response to therapy, which makes this a critical evaluation for determining patient prognosis and therapy. Astrocytomas carry a mutation in IDH1 or IDH2, whereas this mutation is not present in glioblastoma. Thus, IDH1 and IDH2 mutations are a useful tool to distinguish glioblastomas from astrocytomas, since histopathologically they are similar and the distinction without molecular biomarkers is unreliable. IDH-wildtype glioblastomas usually have lower OLIG2 expression compared with IDH-mutant lower grade astrocytomas. In patients aged over 55 years with a histologically typical glioblastoma, without a pre-existing lower grade glioma, with a non-midline tumor location and with retained nuclear ATRX expression, immunohistochemical negativity for IDH1 R132H suffices for the classification as IDH-wild-type glioblastoma. In all other instances of diffuse gliomas, a lack of IDH1 R132H immunopositivity should be followed by IDH1 and IDH2 DNA sequencing to detect or exclude the presence of non-canonical mutations.
IDH-wild-type diffuse astrocytic gliomas without microvascular proliferation or necrosis should be tested for EGFR amplification, TERT promoter mutation and a +7/–10 cytogenetic signature as molecular characteristics of IDH-wild-type glioblastomas. Prevention There are no known methods to prevent glioblastoma. As with most gliomas, and unlike some other forms of cancer, glioblastoma occurs without previous warning, and there are no known ways to prevent it. Treatment Treating glioblastoma is difficult due to several complicating factors: The tumor cells are resistant to conventional therapies. The brain is susceptible to damage from conventional therapy. The brain has a limited capacity to repair itself. Many drugs cannot cross the blood–brain barrier to act on the tumor. Treatment of primary brain tumors consists of palliative (symptomatic) care and therapies intended to improve survival. Symptomatic therapy Supportive treatment focuses on relieving symptoms and improving the patient's neurologic function. The primary supportive agents are anticonvulsants and corticosteroids. Historically, around 90% of patients with glioblastoma underwent anticonvulsant treatment, although only an estimated 40% of patients required this treatment. Neurosurgeons have recommended that anticonvulsants not be administered prophylactically, and that clinicians should wait until a seizure occurs before prescribing this medication. Those receiving phenytoin concurrently with radiation may have serious skin reactions such as erythema multiforme and Stevens–Johnson syndrome. Corticosteroids, usually dexamethasone, can reduce peritumoral edema (through rearrangement of the blood–brain barrier), diminishing mass effect and lowering intracranial pressure, with a decrease in headache or drowsiness. Surgery Surgery is the first stage of treatment of glioblastoma. An average GBM tumor contains 10^11 cells, which is on average reduced to 10^9 cells after surgery (a reduction of 99%). Benefits of surgery include resection for a pathological diagnosis, alleviation of symptoms related to mass effect, and potentially removing disease before secondary resistance to radiotherapy and chemotherapy occurs. The greater the extent of tumor removal, the better. In retrospective analyses, removal of 98% or more of the tumor has been associated with a significantly longer, healthier time than if less than 98% of the tumor is removed. The chances of near-complete initial removal of the tumor may be increased if the surgery is guided by a fluorescent dye known as 5-aminolevulinic acid. GBM cells are widely infiltrative through the brain at diagnosis, and despite a "total resection" of all obvious tumor, most people with GBM later develop recurrent tumors either near the original site or at more distant locations within the brain. Other modalities, typically radiation and chemotherapy, are used after surgery in an effort to suppress and slow recurrent disease through damaging the DNA of rapidly proliferative GBM cells. Between 60% and 85% of glioblastoma patients report cancer-related cognitive impairment following surgery, referring to problems with executive functioning, verbal fluency, attention, and speed of processing. These symptoms may be managed with cognitive behavioral therapy, physical exercise, yoga and meditation. Radiotherapy Subsequent to surgery, radiotherapy becomes the mainstay of treatment for people with glioblastoma. It is typically given along with temozolomide.
A pivotal clinical trial carried out in the early 1970s showed that among 303 GBM patients randomized to radiation or best medical therapy, those who received radiation had a median survival more than double that of those who did not. Subsequent clinical research has attempted to build on the backbone of surgery followed by radiation. Whole-brain radiotherapy does not improve outcomes when compared to the more precise and targeted three-dimensional conformal radiotherapy. A total radiation dose of 60–65 Gy has been found to be optimal for treatment. GBM tumors are well known to contain zones of tissue exhibiting hypoxia, which are highly resistant to radiotherapy. Various approaches to chemotherapy radiosensitizers have been pursued, with limited success. Newer research approaches have included preclinical and clinical investigations into the use of oxygen diffusion-enhancing compounds such as trans sodium crocetinate as radiosensitizers, and a clinical trial was underway. Boron neutron capture therapy has been tested as an alternative treatment for glioblastoma, but is not in common use. Chemotherapy Most studies show no benefit from the addition of chemotherapy. However, a large clinical trial of 575 participants randomized to standard radiation versus radiation plus temozolomide chemotherapy showed that the group receiving temozolomide survived a median of 14.6 months as opposed to 12.1 months for the group receiving radiation alone. This treatment regimen is now standard for most cases of glioblastoma where the person is not enrolled in a clinical trial. Temozolomide seems to work by sensitizing the tumor cells to radiation, and appears more effective for tumors with MGMT promoter methylation. High doses of temozolomide in high-grade gliomas yield low toxicity, but the results are comparable to those of standard doses. Antiangiogenic therapy with medications such as bevacizumab controls symptoms but does not appear to affect overall survival in those with glioblastoma. A 2018 systematic review found that the overall benefit of anti-angiogenic therapies was unclear. In elderly people with newly diagnosed glioblastoma who are reasonably fit, concurrent and adjuvant chemoradiotherapy gives the best overall survival but is associated with a greater risk of haematological adverse events than radiotherapy alone. Immunotherapy Phase 3 clinical trials of immunotherapy treatments for glioblastoma have largely failed. Other procedures Alternating electric field therapy is an FDA-approved therapy for newly diagnosed and recurrent glioblastoma. In 2015, initial results from a phase-III randomized clinical trial of alternating electric field therapy plus temozolomide in newly diagnosed glioblastoma reported a three-month improvement in progression-free survival, and a five-month improvement in overall survival compared to temozolomide therapy alone, representing the first large trial in a decade to show a survival improvement in this setting. Despite these results, the efficacy of this approach remains controversial among medical experts. However, increasing understanding of the mechanistic basis through which alternating electric field therapy exerts anti-cancer effects and results from ongoing phase-III clinical trials in extracranial cancers may help facilitate increased clinical acceptance to treat glioblastoma in the future.
Prognosis The most common length of survival following diagnosis is 10 to 13 months (although recent research points to a median survival rate of 15 months), with fewer than 1–3% of people surviving longer than five years. In the United States between 2012 and 2016 five-year survival was 6.8%. Without treatment, survival is typically three months. Complete cures are extremely rare, but have been reported. Increasing age (> 60 years) carries a worse prognostic risk. Death is usually due to widespread tumor infiltration with cerebral edema and increased intracranial pressure. A good initial Karnofsky performance score (KPS) and MGMT methylation are associated with longer survival. A DNA test can be conducted on glioblastomas to determine whether or not the promoter of the MGMT gene is methylated. Patients with a methylated MGMT promoter have longer survival than those with an unmethylated MGMT promoter, due in part to increased sensitivity to temozolomide. Long-term benefits have also been associated with those patients who receive surgery, radiotherapy, and temozolomide chemotherapy. However, much remains unknown about why some patients survive longer with glioblastoma. Age under 50 is linked to longer survival in GBM, as is 98%+ resection and use of temozolomide chemotherapy and better KPSs. A recent study confirms that younger age is associated with a much better prognosis, with a small fraction of patients under 40 years of age achieving a population-based cure. Cure is thought to occur when a person's risk of death returns to that of the normal population, and in GBM, this is thought to occur after 10 years. UCLA Neuro-oncology publishes real-time survival data for patients with this diagnosis. According to a 2003 study, GBM prognosis can be divided into three subgroups dependent on KPS, the age of the patient, and treatment. Epidemiology About three per 100,000 people develop the disease a year, although regional frequency may be much higher. The frequency in England doubled between 1995 and 2015. It is the second-most common central nervous system tumor after meningioma. It occurs more commonly in males than females. Although the median age at diagnosis is 64, in 2014, the broad category of brain cancers was second only to leukemia in people in the United States under 20 years of age. History The term glioblastoma multiforme was introduced in 1926 by Percival Bailey and Harvey Cushing, based on the idea that the tumor originates from primitive precursors of glial cells (glioblasts), and the highly variable appearance due to the presence of necrosis, hemorrhage, and cysts (multiform). Research Gene therapy Gene therapy has been explored as a method to treat glioblastoma, and while animal models and early-phase clinical trials have been successful, as of 2017, all gene-therapy drugs that had been tested in phase-III clinical trials for glioblastoma had failed. Scientists have developed the core–shell nanostructured LPLNP-PPT (long persistent luminescence nanoparticles. PPT refers to polyetherimide, PEG and trans-activator of transcription, and TRAIL is the human tumor necrosis factor-related apoptosis-induced ligand) for effective gene delivery and tracking, with positive results. This is a TRAIL ligand that has been encoded to induce apoptosis of cancer cells, more specifically glioblastomas. 
Although this study was still in clinical trials in 2017, it has shown diagnostic and therapeutic functionality and has generated considerable interest for clinical applications in stem-cell-based therapy. Other gene therapy approaches have also been explored in the context of glioblastoma, including suicide gene therapy. Suicide gene therapy is a two-step approach in which a foreign enzyme gene is delivered to the cancer cells and then activated with a prodrug, producing a toxic product in the cancer cells that induces cell death. This approach has had success in animal models and small clinical studies, but has not shown a survival benefit in larger clinical studies. Newer, more efficient delivery vectors and suicide gene–prodrug systems could improve the clinical benefit of these types of therapies. Oncolytic virotherapy Oncolytic virotherapy is an emerging novel treatment that is under investigation at both preclinical and clinical stages. Several viruses, including herpes simplex virus, adenovirus, poliovirus, and reovirus, are currently being tested in phase I and II clinical trials for glioblastoma therapy and have been shown to improve overall survival. Intranasal drug delivery Direct nose-to-brain drug delivery is being explored as a means to achieve higher, and hopefully more effective, drug concentrations in the brain. A clinical phase-I/II study with glioblastoma patients in Brazil investigated the natural compound perillyl alcohol for intranasal delivery as an aerosol. The results were encouraging and, as of 2016, a similar trial had been initiated in the United States. See also Adegramotide Asunercept Glioblastoma Foundation Lomustine List of people with brain tumors References External links Information about glioblastoma from the American Brain Tumor Association Aging-associated diseases Brain tumor Cancer Oncology
Glioblastoma
Biology
5,572
37,647,193
https://en.wikipedia.org/wiki/11%20Librae
11 Librae is a single, fifth-magnitude star in the southern zodiac constellation of Libra. It is faintly visible to the naked eye with an apparent visual magnitude of 4.93. The star is moving further from the Sun with a heliocentric radial velocity of +83.6 km/s. The distance to this star, as estimated from its annual parallax shift of , is about 219 light years. This star has a stellar classification of K0 III/IV, indicating that the spectrum displays mixed traits of a giant/subgiant K-type star. Alves (2000) and Afşar et al. (2012) classify it as a red clump star, which means it is an evolved star at the cool end of the horizontal branch and is generating energy through helium fusion in its core region. It is about five billion years old and is spinning with a projected rotational velocity of 4 km/s. The star has 1.1 times the mass of the Sun and has expanded to over 10 times the Sun's radius. It is radiating around 59 times the luminosity of the Sun from its enlarged photosphere at an effective temperature of about 4,749 K. References K-type giants Horizontal-branch stars Libra (constellation) Durchmusterung objects Librae, 11 130952 072631 5535
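The quoted distance and size figures can be sanity-checked with a few lines of Python. This is a rough sketch added here, not part of the article; the parallax value (about 14.9 mas) and the solar effective temperature (5772 K) are assumptions chosen to reproduce the quoted 219 light-years and to apply the Stefan–Boltzmann relation.

import math

# Rough consistency check of the 11 Librae figures quoted above.
# Assumed inputs (not from the article): parallax ~14.9 mas, T_sun = 5772 K.
parallax_mas = 14.9                     # assumed annual parallax in milliarcseconds
distance_pc = 1000.0 / parallax_mas     # d [pc] = 1 / parallax [arcsec]
distance_ly = distance_pc * 3.2616      # 1 pc is about 3.2616 light-years

luminosity_lsun = 59.0                  # from the article
teff = 4749.0                           # effective temperature [K], from the article
t_sun = 5772.0                          # assumed solar effective temperature [K]

# Stefan–Boltzmann: L/Lsun = (R/Rsun)^2 * (T/Tsun)^4, so R/Rsun = sqrt(L/Lsun) / (T/Tsun)^2
radius_rsun = math.sqrt(luminosity_lsun) / (teff / t_sun) ** 2

print(f"distance = {distance_ly:.0f} ly")      # about 219 ly
print(f"radius   = {radius_rsun:.1f} R_sun")   # about 11 R_sun, i.e. 'over 10 times' the Sun's radius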
11 Librae
Astronomy
277
409,951
https://en.wikipedia.org/wiki/Isothermal%20process
An isothermal process is a type of thermodynamic process in which the temperature T of a system remains constant: ΔT = 0. This typically occurs when a system is in contact with an outside thermal reservoir, and a change in the system occurs slowly enough to allow the system to be continuously adjusted to the temperature of the reservoir through heat exchange (see quasi-equilibrium). In contrast, an adiabatic process is where a system exchanges no heat with its surroundings (Q = 0). Simply, we can say that in an isothermal process For ideal gases only, internal energy while in adiabatic processes: Etymology The noun isotherm is derived from the Ancient Greek words (), meaning "equal", and (), meaning "heat". Examples Isothermal processes can occur in any kind of system that has some means of regulating the temperature, including highly structured machines, and even living cells. Some parts of the cycles of some heat engines are carried out isothermally (for example, in the Carnot cycle). In the thermodynamic analysis of chemical reactions, it is usual to first analyze what happens under isothermal conditions and then consider the effect of temperature. Phase changes, such as melting or evaporation, are also isothermal processes when, as is usually the case, they occur at constant pressure. Isothermal processes are often used as a starting point in analyzing more complex, non-isothermal processes. Isothermal processes are of special interest for ideal gases. This is a consequence of Joule's second law which states that the internal energy of a fixed amount of an ideal gas depends only on its temperature. Thus, in an isothermal process the internal energy of an ideal gas is constant. This is a result of the fact that in an ideal gas there are no intermolecular forces. Note that this is true only for ideal gases; the internal energy depends on pressure as well as on temperature for liquids, solids, and real gases. In the isothermal compression of a gas there is work done on the system to decrease the volume and increase the pressure. Doing work on the gas increases the internal energy and will tend to increase the temperature. To maintain the constant temperature energy must leave the system as heat and enter the environment. If the gas is ideal, the amount of energy entering the environment is equal to the work done on the gas, because internal energy does not change. For isothermal expansion, the energy supplied to the system does work on the surroundings. In either case, with the aid of a suitable linkage the change in gas volume can perform useful mechanical work. For details of the calculations, see calculation of work. For an adiabatic process, in which no heat flows into or out of the gas because its container is well insulated, Q = 0. If there is also no work done, i.e. a free expansion, there is no change in internal energy. For an ideal gas, this means that the process is also isothermal. Thus, specifying that a process is isothermal is not sufficient to specify a unique process. Details for an ideal gas For the special case of a gas to which Boyle's law applies, the product pV (p for gas pressure and V for gas volume) is a constant if the gas is kept at isothermal conditions. The value of the constant is nRT, where n is the number of moles of the present gas and R is the ideal gas constant. In other words, the ideal gas law pV = nRT applies. Therefore: holds. The family of curves generated by this equation is shown in the graph in Figure 1. 
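As a small numerical illustration of the isotherm family described above (pV = nRT with T held fixed), the following sketch tabulates a few points on two isotherms. It is not from the article; the amount of gas (1 mol) and the two temperatures are arbitrary illustrative choices.

# Sketch: points on two isotherms pV = nRT for an ideal gas (arbitrary n and T).
R = 8.314          # J/(mol*K), ideal gas constant
n = 1.0            # mol
for T in (300.0, 400.0):
    print(f"T = {T} K")
    for V in (0.01, 0.02, 0.03, 0.04):          # volume in m^3
        p = n * R * T / V                       # pressure in Pa; the product pV stays equal to nRT
        print(f"  V = {V:.2f} m^3 -> p = {p:9.1f} Pa, pV = {p * V:.1f} J")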
Each curve is called an isotherm, meaning a curve at a same temperature T. Such graphs are termed indicator diagrams and were first used by James Watt and others to monitor the efficiency of engines. The temperature corresponding to each curve in the figure increases from the lower left to the upper right. Calculation of work In thermodynamics, the reversible work involved when a gas changes from state A to state B is where p for gas pressure and V for gas volume. For an isothermal (constant temperature T), reversible process, this integral equals the area under the relevant PV (pressure-volume) isotherm, and is indicated in purple in Figure 2 for an ideal gas. Again, p =  applies and with T being constant (as this is an isothermal process), the expression for work becomes: In IUPAC convention, work is defined as work on a system by its surroundings. If, for example, the system is compressed, then the work is done on the system by the surrounding so the work is positive and the internal energy of the system increases. Conversely, if the system expands (i.e., system surrounding expansion, so free expansions not the case), then the work is negative as the system does work on the surroundings and the internal energy of the system decreases. It is also worth noting that for ideal gases, if the temperature is held constant, the internal energy of the system U also is constant, and so ΔU = 0. Since the first law of thermodynamics states that ΔU = Q + W in IUPAC convention, it follows that Q = −W for the isothermal compression or expansion of ideal gases. Example of an isothermal process The reversible expansion of an ideal gas can be used as an example of work produced by an isothermal process. Of particular interest is the extent to which heat is converted to usable work, and the relationship between the confining force and the extent of expansion. During isothermal expansion of an ideal gas, both and change along an isotherm with a constant product (i.e., constant T). Consider a working gas in a cylindrical chamber 1 m high and 1 m2 area (so 1m3 volume) at 400 K in static equilibrium. The surroundings consist of air at 300 K and 1 atm pressure (designated as ). The working gas is confined by a piston connected to a mechanical device that exerts a force sufficient to create a working gas pressure of 2 atm (state ). For any change in state that causes a force decrease, the gas will expand and perform work on the surroundings. Isothermal expansion continues as long as the applied force decreases and appropriate heat is added to keep = 2 [atm·m3] (= 2 atm × 1 m3). The expansion is said to be internally reversible if the piston motion is sufficiently slow such that at each instant during the expansion the gas temperature and pressure is uniform and conform to the ideal gas law. Figure 3 shows the relationship for = 2 [atm·m3] for isothermal expansion from 2 atm (state ) to 1 atm (state ). The work done (designated ) has two components. First, expansion work against the surrounding atmosphere pressure (designated as ), and second, usable mechanical work (designated as ). The output here could be movement of the piston used to turn a crank-arm, which would then turn a pulley capable of lifting water out of flooded salt mines. The system attains state ( = 2 [atm·m3] with = 1 atm and = 2 m3) when the applied force reaches zero. At that point, equals –140.5 kJ, and is –101.3 kJ. By difference, = –39.1 kJ, which is 27.9% of the heat supplied to the process (- 39.1 kJ / - 140.5 kJ). 
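The numbers in the worked example above can be reproduced with a short calculation. The sketch below is an illustration added here, not code from any source: it assumes 1 atm = 101,325 Pa and re-derives the three quoted work terms (−140.5 kJ, −101.3 kJ and −39.1 kJ) using W = −pV·ln(V2/V1) for the reversible isothermal step.

import math

atm = 101_325.0                      # Pa per atmosphere (assumed conversion)
pV = 2.0 * atm * 1.0                 # constant product pV = 2 atm x 1 m^3, in joules
V1, V2 = 1.0, 2.0                    # m^3, state a -> state b
p_surr = 1.0 * atm                   # surrounding pressure, 1 atm

W_total = -pV * math.log(V2 / V1)    # reversible isothermal work done on the gas
W_atm = -p_surr * (V2 - V1)          # work spent pushing back the atmosphere
W_mech = W_total - W_atm             # usable mechanical work

print(f"W_total = {W_total / 1000:.1f} kJ")            # about -140.5 kJ
print(f"W_atm   = {W_atm / 1000:.1f} kJ")              # about -101.3 kJ
print(f"W_mech  = {W_mech / 1000:.1f} kJ")             # about -39.1 kJ
print(f"usable fraction = {W_mech / W_total:.1%}")     # about 27.9%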
This is the maximum amount of usable mechanical work obtainable from the process at the stated conditions. The percentage of usable mechanical work is a function of the constant pV value and of the surrounding pressure, and approaches 100% as the surrounding pressure approaches zero. To pursue the nature of isothermal expansion further, note the red line on Figure 3. The fixed value of pV causes an exponential increase in piston rise vs. pressure decrease. For example, a pressure decrease from 2 to 1.9 atm causes a piston rise of 0.0526 m. In comparison, a pressure decrease from 1.1 to 1 atm causes a piston rise of 0.1818 m. Entropy changes Isothermal processes are especially convenient for calculating changes in entropy since, in this case, the formula for the entropy change, ΔS, is simply ΔS = Qrev/T, where Qrev is the heat transferred (internally reversible) to the system and T is absolute temperature. This formula is valid only for a hypothetical reversible process; that is, a process in which equilibrium is maintained at all times. A simple example is an equilibrium phase transition (such as melting or evaporation) taking place at constant temperature and pressure. For a phase transition at constant pressure, the heat transferred to the system is equal to the enthalpy of transformation, ΔHtr, thus Q = ΔHtr. At any given pressure, there will be a transition temperature, Ttr, for which the two phases are in equilibrium (for example, the normal boiling point for vaporization of a liquid at one atmosphere pressure). If the transition takes place under such equilibrium conditions, the formula above may be used to directly calculate the entropy change ΔStr = ΔHtr/Ttr. Another example is the reversible isothermal expansion (or compression) of an ideal gas from an initial volume VA and pressure PA to a final volume VB and pressure PB. As shown in Calculation of work, the heat transferred to the gas is Q = nRT ln(VB/VA). This result is for a reversible process, so it may be substituted in the formula for the entropy change to obtain ΔS = nR ln(VB/VA). Since an ideal gas obeys Boyle's law, this can be rewritten, if desired, as ΔS = nR ln(PA/PB). Once obtained, these formulas can be applied to an irreversible process, such as the free expansion of an ideal gas. Such an expansion is also isothermal and may have the same initial and final states as in the reversible expansion. Since entropy is a state function (one that depends on an equilibrium state, not on the path that the system takes to reach that state), the change in entropy of the system is the same as in the reversible process and is given by the formulas above. Note that the result Q = 0 for the free expansion cannot be used in the formula for the entropy change since the process is not reversible. The difference between the reversible and irreversible cases is found in the entropy of the surroundings. In both cases, the surroundings are at a constant temperature, T, so that ΔSsur = −Q/T; the minus sign is used since the heat transferred to the surroundings is equal in magnitude and opposite in sign to the heat Q transferred to the system. In the reversible case, the change in entropy of the surroundings is equal and opposite to the change in the system, so the change in entropy of the universe is zero. In the irreversible case, Q = 0, so the entropy of the surroundings does not change and the change in entropy of the universe is equal to ΔS for the system. See also Joule–Thomson effect Joule expansion (also called free expansion) Adiabatic process Cyclic process Isobaric process Isochoric process Polytropic process Spontaneous process References Thermodynamic processes Atmospheric thermodynamics
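The entropy bookkeeping described in the section above can be illustrated numerically. The sketch below is an added illustration, not from the article; it assumes 1 mol of ideal gas doubling its volume at constant temperature and compares the reversible isothermal expansion with the free (Joule) expansion between the same two states.

import math

# Entropy changes for an ideal gas doubling its volume at constant T (n = 1 mol, assumed).
R = 8.314                                # J/(mol*K)
n = 1.0
dS_system = n * R * math.log(2.0)        # dS = nR ln(V_B/V_A), identical for both paths (state function)

# Reversible isothermal expansion: heat Q = T*dS flows in from the surroundings.
dS_surr_reversible = -dS_system          # dS_sur = -Q/T
# Free (Joule) expansion: Q = 0, so the surroundings are unchanged.
dS_surr_free = 0.0

print(f"dS_system               = {dS_system:.3f} J/K")
print(f"dS_universe, reversible = {dS_system + dS_surr_reversible:.3f} J/K")   # zero
print(f"dS_universe, free       = {dS_system + dS_surr_free:.3f} J/K")         # positive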
Isothermal process
Physics,Chemistry
2,299
21,711,296
https://en.wikipedia.org/wiki/Valery%20Glivenko
Valery Ivanovich Glivenko (, ; 2 January 1897 (Gregorian calendar) / 21 December 1896 (Julian calendar) in Kyiv – 15 February 1940 in Moscow) was a Soviet mathematician. He worked in the foundations of mathematics, real analysis, probability theory, and mathematical statistics. He taught at the Moscow Industrial Pedagogical Institute until his death at age 43. Most of Glivenko's work was published in French. See also Glivenko's double-negation translation Glivenko's theorem (probability theory) Glivenko–Cantelli theorem Glivenko–Stone theorem Notes Works External links Photograph Mathematical logicians 1896 births 1940 deaths Soviet logicians Soviet mathematicians Ukrainian mathematicians 20th-century Russian mathematicians Probability theorists Mathematical analysts Moscow State University alumni Mathematical statisticians
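Glivenko's best-known contribution to probability, the Glivenko–Cantelli theorem listed above, states that the empirical distribution function of an i.i.d. sample converges uniformly to the true distribution function. The short sketch below is an illustration added here, not part of the article; it estimates the sup-distance for Uniform(0, 1) samples of increasing size and shows it shrinking.

import random

# Illustration of the Glivenko–Cantelli theorem for Uniform(0, 1) samples:
# sup_x |F_n(x) - F(x)| should shrink as the sample size n grows.
random.seed(0)

def sup_distance(n):
    xs = sorted(random.random() for _ in range(n))
    # For Uniform(0, 1), F(x) = x; the supremum is attained at the sample points.
    return max(max(abs((i + 1) / n - x), abs(i / n - x)) for i, x in enumerate(xs))

for n in (10, 100, 1000, 10000):
    print(f"n = {n:5d}   sup|F_n - F| = {sup_distance(n):.4f}")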
Valery Glivenko
Mathematics
168
22,059,685
https://en.wikipedia.org/wiki/January%202038%20lunar%20eclipse
A penumbral lunar eclipse will occur at the Moon’s ascending node of orbit on Thursday, January 21, 2038, with an umbral magnitude of −0.1127. A lunar eclipse occurs when the Moon moves into the Earth's shadow, causing the Moon to be darkened. A penumbral lunar eclipse occurs when part or all of the Moon's near side passes into the Earth's penumbra. Unlike a solar eclipse, which can only be viewed from a relatively small area of the world, a lunar eclipse may be viewed from anywhere on the night side of Earth. Occurring about 3.1 days before perigee (on January 24, 2038, at 4:50 UTC), the Moon's apparent diameter will be larger. This eclipse will be the first of four penumbral lunar eclipses in 2038, with the others occurring on June 17, July 16, and December 11. Visibility The eclipse will be completely visible over North and South America, west Africa, and Europe, seen rising over the eastern Pacific Ocean and setting over east Africa and west and central Asia. Eclipse details Shown below is a table displaying details about this particular solar eclipse. It describes various parameters pertaining to this eclipse. Eclipse season This eclipse is part of an eclipse season, a period, roughly every six months, when eclipses occur. Only two (or occasionally three) eclipse seasons occur each year, and each season lasts about 35 days and repeats just short of six months (173 days) later; thus two full eclipse seasons always occur each year. Either two or three eclipses happen each eclipse season. In the sequence below, each eclipse is separated by a fortnight. Related eclipses Eclipses in 2038 An annular solar eclipse on January 5. A penumbral lunar eclipse on January 21. A penumbral lunar eclipse on June 17. An annular solar eclipse on July 2. A penumbral lunar eclipse on July 16. A penumbral lunar eclipse on December 11. A total solar eclipse on December 26. Metonic Preceded by: Lunar eclipse of April 3, 2034 Followed by: Lunar eclipse of November 8, 2041 Tzolkinex Preceded by: Lunar eclipse of December 9, 2030 Followed by: Lunar eclipse of March 3, 2045 Half-Saros Preceded by: Solar eclipse of January 14, 2029 Followed by: Solar eclipse of January 26, 2047 Tritos Preceded by: Lunar eclipse of February 20, 2027 Followed by: Lunar eclipse of December 20, 2048 Lunar Saros 144 Preceded by: Lunar eclipse of January 10, 2020 Followed by: Lunar eclipse of February 1, 2056 Inex Preceded by: Lunar eclipse of February 9, 2009 Followed by: Lunar eclipse of December 31, 2066 Triad Preceded by: Lunar eclipse of March 23, 1951 Followed by: Lunar eclipse of November 21, 2124 Lunar eclipses of 2035–2038 Saros 144 Tritos series Half-Saros cycle A lunar eclipse will be preceded and followed by solar eclipses by 9 years and 5.5 days (a half saros). This lunar eclipse is related to two total solar eclipses of Solar Saros 151. See also List of lunar eclipses and List of 21st-century lunar eclipses Notes External links 2038-01 2038-01 2038 in science
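The related-eclipse listing above follows directly from the standard eclipse cycle lengths. The sketch below is an added illustration, not from the article: it adds commonly quoted mean cycle lengths (assumed values, in days) to the January 21, 2038 date; since date arithmetic ignores the fractional day, the results are accurate only to within about a day, but they land next to the tritos, saros and inex eclipses listed above.

from datetime import date, timedelta

# Adding approximate mean eclipse-cycle lengths to the 2038-01-21 eclipse date.
base = date(2038, 1, 21)
cycles = {
    "tritos (~3986.63 d)": 3986.63,    # close to the December 20, 2048 eclipse
    "saros  (~6585.32 d)": 6585.32,    # close to the February 1, 2056 eclipse
    "inex   (~10571.95 d)": 10571.95,  # close to the December 31, 2066 eclipse
}
for name, days in cycles.items():
    print(f"{name}: {base + timedelta(days=days)}")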
January 2038 lunar eclipse
Astronomy
686
41,996,200
https://en.wikipedia.org/wiki/List%20of%20text%20mining%20software
Text mining computer programs are available from many commercial and open source companies and sources. Commercial Angoss – Angoss Text Analytics provides entity and theme extraction, topic categorization, sentiment analysis and document summarization capabilities via the embedded AUTINDEX – is a commercial text mining software package based on sophisticated linguistics by IAI (Institute for Applied Information Sciences), Saarbrücken. DigitalMR – social media listening & text+image analytics tool for market research. FICO Score – leading provider of analytics. General Sentiment – Social Intelligence platform that uses natural language processing to discover affinities between the fans of brands with the fans of traditional television shows in social media. Stand alone text analytics to capture social knowledge base on billions of topics stored to 2004. IBM LanguageWare – the IBM suite for text analytics (tools and Runtime). IBM SPSS – provider of Modeler Premium (previously called IBM SPSS Modeler and IBM SPSS Text Analytics), which contains advanced NLP-based text analysis capabilities (multi-lingual sentiment, event and fact extraction), that can be used in conjunction with Predictive Modeling. Text Analytics for Surveys provides the ability to categorize survey responses using NLP-based capabilities for further analysis or reporting. Inxight – provider of text analytics, search, and unstructured visualization technologies. (Inxight was bought by Business Objects that was bought by SAP AG in 2008). Language Computer Corporation – text extraction and analysis tools, available in multiple languages. Lexalytics – provider of a text analytics engine used in Social Media Monitoring, Voice of Customer, Survey Analysis, and other applications. Salience Engine. The software provides the unique capability of merging the output of unstructured, text-based analysis with structured data to provide additional predictive variables for improved predictive models and association analysis. Linguamatics – provider of natural language processing (NLP) based enterprise text mining and text analytics software, I2E, for high-value knowledge discovery and decision support. Mathematica – provides built in tools for text alignment, pattern matching, clustering and semantic analysis. See Wolfram Language, the programming language of Mathematica. MATLAB offers Text Analytics Toolbox for importing text data, converting it to numeric form for use in machine and deep learning, sentiment analysis and classification tasks. Medallia – offers one system of record for survey, social, text, written and online feedback. NetOwl – suite of multilingual text and entity analytics products, including entity extraction, link and event extraction, sentiment analysis, geotagging, name translation, name matching, and identity resolution, among others. PolyAnalyst - text analytics environment. PoolParty Semantic Suite - graph-based text mining platform. RapidMiner with its Text Processing Extension – data and text mining software. SAS – SAS Text Miner and Teragram; commercial text analytics, natural language processing, and taxonomy software used for Information Management. Sketch Engine – a corpus manager and analysis software which providing creating text corpora from uploaded texts or the Web including part-of-speech tagging and lemmatization or detecting a particular website. Sysomos – provider social media analytics software platform, including text analytics and sentiment analysis on online consumer conversations. 
WordStat – Content analysis and text mining add-on module of QDA Miner for analyzing large amounts of text data. Open source Carrot2 – text and search results clustering framework. GATE – general Architecture for Text Engineering, an open-source toolbox for natural language processing and language engineering. Gensim – large-scale topic modelling and extraction of semantic information from unstructured text (Python). KH Coder – for Quantitative Content Analysis or Text Mining The KNIME Text Processing extension. Natural Language Toolkit (NLTK) – a suite of libraries and programs for symbolic and statistical natural language processing (NLP) for the Python programming language. OpenNLP – natural language processing. Orange with its text mining add-on. The PLOS Text Mining Collection. The programming language R provides a framework for text mining applications in the package tm. The Natural Language Processing task view contains tm and other text mining library packages. spaCy – open-source Natural Language Processing library for Python Stanbol – an open source text mining engine targeted at semantic content management. Voyant Tools – a web-based text analysis environment, created as a scholarly project. References External links Text Mining APIs on Mashape Text Mining APIs on Programmable Web Text Mining APIs at the Text Analysis Portal for Research Data mining and machine learning software Lists of software
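Several of the open-source toolkits listed above can be chained into a minimal text-mining pipeline. The sketch below uses NLTK (listed above) for tokenization, stop-word removal and term frequencies; it assumes NLTK is installed and that the 'punkt' and 'stopwords' resources have been downloaded, and the sample sentence is an arbitrary placeholder.

# Minimal keyword-frequency sketch using NLTK.
# Assumes: pip install nltk, then nltk.download('punkt') and nltk.download('stopwords').
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

text = "Text mining turns unstructured text into structured features for analysis."
tokens = [t.lower() for t in word_tokenize(text) if t.isalpha()]
stop = set(stopwords.words("english"))
keywords = [t for t in tokens if t not in stop]

freq = nltk.FreqDist(keywords)
print(freq.most_common(5))   # e.g. [('text', 2), ('mining', 1), ...]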
List of text mining software
Technology
945
10,059,094
https://en.wikipedia.org/wiki/%CE%94-opioid%20receptor
The δ-opioid receptor, also known as delta opioid receptor or simply delta receptor, abbreviated DOR or DOP, is an inhibitory 7-transmembrane G-protein coupled receptor coupled to the G protein Gi/G0 and has enkephalins as its endogenous ligands. The regions of the brain where the δ-opioid receptor is largely expressed vary from species model to species model. In humans, the δ-opioid receptor is most heavily expressed in the basal ganglia and neocortical regions of the brain. Function The endogenous system of opioid receptors is well known for its analgesic potential; however, the exact role of δ-opioid receptor activation in pain modulation is largely up for debate. This also depends on the model at hand since receptor activity is known to change from species to species. Activation of delta receptors produces analgesia, perhaps as significant potentiators of μ-opioid receptor agonists. However, it seems like delta agonism provides heavy potentiation to any mu agonism. Therefore, even selective mu agonists can cause analgesia under the right conditions, whereas under others can cause none whatsoever. It is also suggested however that the pain modulated by the μ-opioid receptor and that modulated by the δ-opioid receptor are distinct types, with the assertion that DOR modulates the nociception of chronic pain, while MOR modulates acute pain. Evidence for whether delta agonists produce respiratory depression is mixed; high doses of the delta agonist peptide DPDPE produced respiratory depression in sheep. In contrast both the peptide delta agonist Deltorphin II and the non-peptide delta agonist (+)-BW373U86 actually stimulated respiratory function and blocked the respiratory depressant effect of the potent μ-opioid agonist alfentanil, without affecting pain relief. It thus seems likely that while δ-opioid agonists can produce respiratory depression at very high doses, at lower doses they have the opposite effect, a fact that may make mixed mu/delta agonists such as DPI-3290 potentially very useful drugs that might be much safer than the μ agonists currently used for pain relief. Many delta agonists may also cause seizures at high doses, although not all delta agonists produce this effect. Of additional interest is the potential for delta agonists to be developed for use as a novel class of antidepressant drugs, following robust evidence of both antidepressant effects and also upregulation of BDNF production in the brain in animal models of depression. These antidepressant effects have been linked to endogenous opioid peptides acting at δ- and μ-opioid receptors, and so can also be produced by enkephalinase inhibitors such as RB-101. However, in human models the data for antidepressant effects remains inconclusive. In the 2008 Phase 2 clinical trial by Astra Zeneca, NCT00759395, 15 patients were treated with the selective delta agonist AZD 2327. The results showed no significant effect on mood suggesting that δ-opioid receptor modulation might not participate in the regulation of mood in humans. However, doses were administered at low doses and the pharmacological data also remains inconclusive. Further trials are required. Another interesting aspect of δ-opioid receptor function is the suggestion of μ/δ-opioid receptor interactions. At the extremes of this suggestion lies the possibility of a μ/δ opioid receptor oligomer. 
The evidence for this stems from the different binding profiles of typical mu and delta agonists such as morphine and DAMGO respectively, in cells that coexpress both receptors compared to those in cells that express them individually. In addition, work by Fan and coworkers shows the restoration of the binding profiles when distal carboxyl termini are truncated at either receptor, suggesting that the termini play a role in the oligomerization. While this is exciting, rebuttal by the Javitch and coworkers suggest the idea of oligomerization may be overplayed. Relying on RET, Javitch and coworkers showed that RET signals were more characteristic of random proximity between receptors, rather than an actual bond formation between receptors, suggesting that discrepancies in binding profiles may be the result of downstream interactions, rather than novel effects due to oligomerization. Nevertheless, coexpression of receptors remains unique and potentially useful in the treatment of mood disorders and pain. Recent work indicates that exogenous ligands that activate the delta receptors mimic the phenomenon known as ischemic preconditioning. Experimentally, if short periods of transient ischemia are induced the downstream tissues are robustly protected if longer-duration interruption of the blood supply is then affected. Opiates and opioids with DOR activity mimic this effect. In the rat model, introduction of DOR ligands results in significant cardioprotection. Ligands Until comparatively recently, there were few pharmacological tools for the study of δ receptors. As a consequence, our understanding of their function is much more limited than those of the other opioid receptors for which selective ligands have long been available. However, there are now several selective δ-opioid receptor agonists available, including peptides such as DPDPE and deltorphin II, and non-peptide drugs such as SNC-80, the more potent (+)-BW373U86, a newer drug DPI-287, which does not produce the problems with convulsions seen with the earlier agents, and the mixed μ/δ agonist DPI-3290, which is a much more potent analgesic than the more highly selective δ agonists. Selective antagonists for the δ receptor are also available, with the best known being the opiate derivative naltrindole. Agonists Peptides Leu-enkephalin Met-enkephalin Deltorphins DADLE DPDPE Non-peptides ADL-5859 BU-48 BW373U86 DPI-221 DPI-287 DPI-3290 RWJ-394674- SNC-80 TAN-67 Amoxapine (partial agonist) Cannabidiol (allosteric modulator, non-selective) Desmethylclozapine Mitragynine Mitragynine pseudoindoxyl Norbuprenorphine (peripherally restricted) N-Phenethyl-14-ethoxymetopon 7-Spiroindanyloxymorphone Tetrahydrocannabinol (allosteric modulator, non-selective) Xorphanol Antagonists Buprenorphine Naltriben Naltrindole Mitragynine 7-Hydroxymitragynine Interactions δ-opioid receptors have been shown to interact with β2 adrenergic receptors, arrestin β1 and GPRASP1. See also κ-opioid receptor μ-opioid receptor References Further reading External links G protein-coupled receptors Opioid receptors
Δ-opioid receptor
Chemistry
1,521
25,270
https://en.wikipedia.org/wiki/Quine%20%28computing%29
A quine is a computer program that takes no input and produces a copy of its own source code as its only output. The standard terms for these programs in the computability theory and computer science literature are "self-replicating programs", "self-reproducing programs", and "self-copying programs". A quine is a fixed point of an execution environment, when that environment is viewed as a function transforming programs into their outputs. Quines are possible in any Turing-complete programming language, as a direct consequence of Kleene's recursion theorem. For amusement, programmers sometimes attempt to develop the shortest possible quine in any given programming language. Name The name "quine" was coined by Douglas Hofstadter, in his popular 1979 science book Gödel, Escher, Bach, in honor of philosopher Willard Van Orman Quine (1908–2000), who made an extensive study of indirect self-reference, and in particular for the following paradox-producing expression, known as Quine's paradox: "Yields falsehood when preceded by its quotation" yields falsehood when preceded by its quotation. History John von Neumann theorized about self-reproducing automata in the 1940s. Later, Paul Bratley and Jean Millo's article "Computer Recreations: Self-Reproducing Automata" discussed them in 1972. Bratley first became interested in self-reproducing programs after seeing the first known such program written in Atlas Autocode at Edinburgh in the 1960s by the University of Edinburgh lecturer and researcher Hamish Dewar. The "download source" requirement of the GNU Affero General Public License is based on the idea of a quine. Examples Constructive quines In general, the method used to create a quine in any programming language is to have, within the program, two pieces: (a) code used to do the actual printing and (b) data that represents the textual form of the code. The code functions by using the data to print the code (which makes sense since the data represents the textual form of the code), but it also uses the data, processed in a simple way, to print the textual representation of the data itself. Here are three small examples in Python3: # Example A. chr(39) == "'". a = 'a = {}{}{}; print(a.format(chr(39), a, chr(39)))'; print(a.format(chr(39), a, chr(39))) # Example B. chr(39) == "'". b = 'b = %s%s%s; print(b %% (chr(39), b, chr(39)))'; print(b % (chr(39), b, chr(39))) # Example C. %r will quote automatically. c = 'c = %r; print(c %% c)'; print(c % c)The following Java code demonstrates the basic structure of a quine. public class Quine { public static void main(String[] args) { char q = 34; // Quotation mark character String[] l = { // Array of source code "public class Quine", "{", " public static void main(String[] args)", " {", " char q = 34; // Quotation mark character", " String[] l = { // Array of source code", " ", " };", " for (int i = 0; i < 6; i++) // Print opening code", " System.out.println(l[i]);", " for (int i = 0; i < l.length; i++) // Print string array", " System.out.println(l[6] + q + l[i] + q + ',');", " for (int i = 7; i < l.length; i++) // Print this code", " System.out.println(l[i]);", " }", "}", }; for (int i = 0; i < 6; i++) // Print opening code System.out.println(l[i]); for (int i = 0; i < l.length; i++) // Print string array System.out.println(l[6] + q + l[i] + q + ','); for (int i = 7; i < l.length; i++) // Print this code System.out.println(l[i]); } } The source code contains a string array of itself, which is output twice, once inside quotation marks. 
This code was adapted from an original post from c2.com, where the author, Jason Wilson, posted it as a minimalistic version of a Quine, without Java comments. Thanks to new text blocks feature in Java 15 (or newer), a more readable and simpler version is possible: public class Quine { public static void main(String[] args) { String textBlockQuotes = new String(new char[]{'"', '"', '"'}); char newLine = 10; String source = """ public class Quine { public static void main(String[] args) { String textBlockQuotes = new String(new char[]{'"', '"', '"'}); char newLine = 10; String source = %s; System.out.print(source.formatted(textBlockQuotes + newLine + source + textBlockQuotes)); } } """; System.out.print(source.formatted(textBlockQuotes + newLine + source + textBlockQuotes)); } } Eval quines Some programming languages have the ability to evaluate a string as a program. Quines can take advantage of this feature. For example, this Ruby quine: eval s="print 'eval s=';p s" Lua can do: s="print(string.format('s=%c%s%c; load(s)()',34,s,34))"; load(s)() In Python 3.8: exec(s:='print("exec(s:=%r)"%s)') "Cheating" quines Self-evaluation In many functional languages, including Scheme and other Lisps, and interactive languages such as APL, numbers are self-evaluating. In TI-BASIC, if the last line of a program returns a value, the returned value is displayed on the screen. Therefore, in such languages a program consisting of only a single digit results in a 1-byte quine. Since such code does not construct itself, this is often considered cheating. 1 Empty quines In some languages, particularly scripting languages but also C, an empty source file is a fixed point of the language, being a valid program that produces no output. Such an empty program, submitted as "the world's smallest self reproducing program", once won the "worst abuse of the rules" prize in the International Obfuscated C Code Contest. The program was not actually compiled, but used cp to copy the file into another file, which could be executed to print nothing. Source code inspection Quines, per definition, cannot receive any form of input, including reading a file, which means a quine is considered to be "cheating" if it looks at its own source code. The following shell script is not a quine: #!/bin/sh # Invalid quine. # Reading the executed file from disk is cheating. cat $0 A shorter variant, exploiting the behaviour of shebang directives: #!/bin/cat Other questionable techniques include making use of compiler messages; for example, in the GW-BASIC environment, entering "Syntax Error" will cause the interpreter to respond with "Syntax Error". Ouroboros programs The quine concept can be extended to multiple levels of recursion, giving rise to "ouroboros programs", or quine-relays. This should not be confused with multiquines. Example This Java program outputs the source for a C++ program that outputs the original Java code. 
#include <iostream> #include <string> using namespace std; int main(int argc, char* argv[]) { char q = 34; string l[] = { " ", "=============<<<<<<<< C++ Code >>>>>>>>=============", "#include <iostream>", "#include <string>", "using namespace std;", "", "int main(int argc, char* argv[])", "{", " char q = 34;", " string l[] = {", " };", " for(int i = 20; i <= 25; i++)", " cout << l[i] << endl;", " for(int i = 0; i <= 34; i++)", " cout << l[0] + q + l[i] + q + ',' << endl;", " for(int i = 26; i <= 34; i++)", " cout << l[i] << endl;", " return 0;", "}", "=============<<<<<<<< Java Code >>>>>>>>=============", "public class Quine", "{", " public static void main(String[] args)", " {", " char q = 34;", " String[] l = {", " };", " for(int i = 2; i <= 9; i++)", " System.out.println(l[i]);", " for(int i = 0; i < l.length; i++)", " System.out.println(l[0] + q + l[i] + q + ',');", " for(int i = 10; i <= 18; i++)", " System.out.println(l[i]);", " }", "}", }; for(int i = 20; i <= 25; i++) cout << l[i] << endl; for(int i = 0; i <= 34; i++) cout << l[0] + q + l[i] + q + ',' << endl; for(int i = 26; i <= 34; i++) cout << l[i] << endl; return 0; }public class Quine { public static void main(String[] args) { char q = 34; String[] l = { " ", "=============<<<<<<<< C++ Code >>>>>>>>=============", "#include <iostream>", "#include <string>", "using namespace std;", "", "int main(int argc, char* argv[])", "{", " char q = 34;", " string l[] = {", " };", " for(int i = 20; i <= 25; i++)", " cout << l[i] << endl;", " for(int i = 0; i <= 34; i++)", " cout << l[0] + q + l[i] + q + ',' << endl;", " for(int i = 26; i <= 34; i++)", " cout << l[i] << endl;", " return 0;", "}", "=============<<<<<<<< Java Code >>>>>>>>=============", "public class Quine", "{", " public static void main(String[] args)", " {", " char q = 34;", " String[] l = {", " };", " for(int i = 2; i <= 9; i++)", " System.out.println(l[i]);", " for(int i = 0; i < l.length; i++)", " System.out.println(l[0] + q + l[i] + q + ',');", " for(int i = 10; i <= 18; i++)", " System.out.println(l[i]);", " }", "}", }; for(int i = 2; i <= 9; i++) System.out.println(l[i]); for(int i = 0; i < l.length; i++) System.out.println(l[0] + q + l[i] + q + ','); for(int i = 10; i <= 18; i++) System.out.println(l[i]); } } Such programs have been produced with various cycle lengths: Haskell → Python → Ruby Python → Bash → Perl C → Haskell → Python → Perl Haskell → Perl → Python → Ruby → C → Java Ruby → Java → C# → Python C → C++ → Ruby → Python → PHP → Perl Ruby → Python → Perl → Lua → OCaml → Haskell → C → Java → Brainfuck → Whitespace → Unlambda Ruby → Scala → Scheme → Scilab → Shell (bash) → S-Lang → Smalltalk → Squirrel3 → Standard ML → ... → Rexx (128 (and formerly 50) programming languages) Web application → C (web application source code consists of HTML, JavaScript, and CSS) Multiquines David Madore, creator of Unlambda, describes multiquines as follows: "A multiquine is a set of r different programs (in r different languages – without this condition we could take them all equal to a single quine), each of which is able to print any of the r programs (including itself) according to the command line argument it is passed. (Cheating is not allowed: the command line arguments must not be too long – passing the full text of a program is considered cheating)." A multiquine consisting of 2 languages (or biquine) would be a program which: When run, is a quine in language X. When supplied with a user-defined command line argument, would print a second program in language Y. 
Given the second program in language Y, when run normally, would also be a quine in language Y. Given the second program in language Y, and supplied with a user-defined command line argument, would produce the original program in language X. A biquine could then be seen as a set of two programs, both of which are able to print either of the two, depending on the command line argument supplied. Theoretically, there is no limit on the number of languages in a multiquine. A 5-part multiquine (or pentaquine) has been produced with Python, Perl, C, NewLISP, and F# and there is also a 25-language multiquine. Polyglot Similar to, but unlike a multiquine, a polyglot program is a computer program or script written in a valid form of multiple programming languages or file formats by combining their syntax. A polyglot program is not required to have a self-reproducing quality, although a polyglot program can also be a quine in one or more of its possible ways to execute. Unlike quines and multiquines, polyglot programs are not guaranteed to exist between arbitrary sets of languages as a result of Kleene's recursion theorem, because they rely on the interplay between the syntaxes, and not a provable property that one can always be embedded within another. Radiation-hardened A radiation-hardened quine is a quine that can have any single character removed and still produces the original program with no missing character. Of necessity, such quines are much more convoluted than ordinary quines, as is seen by the following example in Ruby: eval='eval$q=%q(puts %q(10210/#{1 1 if 1==21}}/.i rescue##/ 1 1"[13,213].max_by{|s|s.size}#"##").gsub(/\d/){["=\47eval$q=%q(#$q)#\47##\47 ",:eval,:instance_,"||=9"][eval$&]} exit)#'##' instance_eval='eval$q=%q(puts %q(10210/#{1 1 if 1==21}}/.i rescue##/ 1 1"[13,213].max_by{|s|s.size}#"##").gsub(/\d/){["=\47eval$q=%q(#$q)#\47##\47 ",:eval,:instance_,"||=9"][eval$&]} exit)#'##' /#{eval eval if eval==instance_eval}}/.i rescue##/ eval eval"[eval||=9,instance_eval||=9].max_by{|s|s.size}#"##" Automatic generation Using relational programming techniques, it is possible to generate quines automatically by transforming the interpreter (or equivalently, the compiler and runtime) of a language into a relational program, and then solving for a fixed point. See also Diagonal lemma Droste effect Fixed point combinator Self-modifying code Self-interpreter Self-replicating machine Self-replication Self-relocation TiddlyWiki Tupper's self-referential formula Programming languages Quine's paradox Polyglot (computing) Notes References Further reading Douglas Hofstadter: Gödel, Escher, Bach: An Eternal Golden Braid Ken Thompson: "Reflections on Trusting Trust" (Communications of the ACM, 27(8):761-3) External links TiddlyWiki, a quine manifested as a wiki The Quine Page (by Gary P. 
Thompson) A Brief Guide to Self-Referential Programs QuineProgram at the Portland Pattern Repository Wiki David Madore's Discussion of Quines Zip File Quine Zip Files All The Way Down An Introduction to Quines — in particular, quines using more than one language Quine Web Page: A standards-conforming HTML+JavaScript web page that shows its own source code HTML Quine: An HTML page that only uses HTML and CSS to show its own source code Quine Challenge for Tom's JavaScript Machine, with a series of interactive hints A Java Quine built straight from Kleene's fixed point theorem, composition and s-n-m A QR code quine Source code Articles with example C code Willard Van Orman Quine Test items in computer languages Computer programming folklore Self-replication
Quine (computing)
Biology
4,401
2,324,711
https://en.wikipedia.org/wiki/Rogers%E2%80%93Ramanujan%20identities
In mathematics, the Rogers–Ramanujan identities are two identities related to basic hypergeometric series and integer partitions. The identities were first discovered and proved by , and were subsequently rediscovered (without a proof) by Srinivasa Ramanujan some time before 1913. Ramanujan had no proof, but rediscovered Rogers's paper in 1917, and they then published a joint new proof . independently rediscovered and proved the identities. Definition The Rogers–Ramanujan identities are and . Here, denotes the q-Pochhammer symbol. Combinatorial interpretation Consider the following: is the generating function for partitions with exactly parts such that adjacent parts have difference at least 2. is the generating function for partitions such that each part is congruent to either 1 or 4 modulo 5. is the generating function for partitions with exactly parts such that adjacent parts have difference at least 2 and such that the smallest part is at least 2. is the generating function for partitions such that each part is congruent to either 2 or 3 modulo 5. The Rogers–Ramanujan identities could be now interpreted in the following way. Let be a non-negative integer. The number of partitions of such that the adjacent parts differ by at least 2 is the same as the number of partitions of such that each part is congruent to either 1 or 4 modulo 5. The number of partitions of such that the adjacent parts differ by at least 2 and such that the smallest part is at least 2 is the same as the number of partitions of such that each part is congruent to either 2 or 3 modulo 5. Alternatively, The number of partitions of such that with parts the smallest part is at least is the same as the number of partitions of such that each part is congruent to either 1 or 4 modulo 5. The number of partitions of such that with parts the smallest part is at least is the same as the number of partitions of such that each part is congruent to either 2 or 3 modulo 5. Application to partitions Since the terms occurring in the identity are generating functions of certain partitions, the identities make statements about partitions (decompositions) of natural numbers. The number sequences resulting from the coefficients of the Maclaurin series of the Rogers–Ramanujan functions G and H are special partition number sequences of level 5: The number sequence (OEIS code: A003114) represents the number of possibilities for the affected natural number n to decompose this number into summands of the patterns 5a + 1 or 5a + 4 with a ∈ . Thus gives the number of decays of an integer n in which adjacent parts of the partition differ by at least 2, equal to the number of decays in which each part is equal to 1 or 4 mod 5 is. And the number sequence (OEIS code: A003106) analogously represents the number of possibilities for the affected natural number n to decompose this number into summands of the patterns 5a + 2 or 5a + 3 with a ∈ . Thus gives the number of decays of an integer n in which adjacent parts of the partition differ by at least 2 and in which the smallest part is greater than or equal to 2 is equal the number of decays whose parts are equal to 2 or 3 mod 5. This will be illustrated as examples in the following two tables: Rogers–Ramanujan continued fractions R and S Definition of the continued fractions The following continued fraction is called Rogers–Ramanujan continued fraction, Continuing fraction is called alternating Rogers–Ramanujan continued fraction! 
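Because the displayed formulas did not survive extraction above, the following LaTeX block restates the two identities and the standard relation to the continued fraction R(q); these are the usual textbook statements, not a new derivation.

G(q) = \sum_{n=0}^{\infty} \frac{q^{n^2}}{(q;q)_n}
     = \frac{1}{(q;q^5)_\infty\,(q^4;q^5)_\infty},
\qquad
H(q) = \sum_{n=0}^{\infty} \frac{q^{n^2+n}}{(q;q)_n}
     = \frac{1}{(q^2;q^5)_\infty\,(q^3;q^5)_\infty},

where (a;q)_n denotes the q-Pochhammer symbol, R(q) = q^{1/5}\,H(q)/G(q), and the best-known special value is R(e^{-2\pi}) = \sqrt{\tfrac{5+\sqrt{5}}{2}} - \tfrac{1+\sqrt{5}}{2}.

As a numerical cross-check of the combinatorial interpretation described above, the brute-force sketch below (added here; exponential-time enumeration, suitable only for small n) counts partitions of n whose parts differ by at least 2 and partitions of n into parts congruent to 1 or 4 mod 5, and verifies that the counts agree.

# Brute-force check of the first Rogers–Ramanujan identity for small n.
def gap2_partitions(n, max_part=None):
    # partitions of n with parts differing by at least 2, largest part <= max_part
    if max_part is None:
        max_part = n
    if n == 0:
        return 1
    total = 0
    for part in range(1, min(n, max_part) + 1):
        total += gap2_partitions(n - part, part - 2)
    return total

def mod5_partitions(n, max_part=None):
    # partitions of n into parts congruent to 1 or 4 (mod 5), non-increasing order
    if max_part is None:
        max_part = n
    if n == 0:
        return 1
    total = 0
    for part in range(1, min(n, max_part) + 1):
        if part % 5 in (1, 4):
            total += mod5_partitions(n - part, part)
    return total

for n in range(1, 15):
    a, b = gap2_partitions(n), mod5_partitions(n)
    assert a == b
    print(n, a, b)   # the common value is the coefficient sequence of G(q) (OEIS A003114)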
{| class="wikitable" !Standardized continued fraction !Alternating continued fraction |- | | |} The factor creates a quotient of module functions and it also makes these shown continued fractions modular: This definition applies for the continued fraction mentioned: This is the definition of the Ramanujan theta function: With this function, the continued fraction R can be created this way: . The connection between the continued fraction and the Rogers–Ramanujan functions was already found by Rogers in 1894 (and later independently by Ramanujan). The continued fraction can also be expressed by the Dedekind eta function: The alternating continued fraction has the following identities to the remaining Rogers–Ramanujan functions and to the Ramanujan theta function described above: Identities with Jacobi theta functions The following definitions are valid for the Jacobi "Theta-Nullwert" functions: And the following product definitions are identical to the total definitions mentioned: These three so-called theta zero value functions are linked to each other using the Jacobian identity: The mathematicians Edmund Taylor Whittaker and George Neville Watson discovered these definitional identities. The Rogers–Ramanujan continued fraction functions and have these relationships to the theta Nullwert functions: The element of the fifth root can also be removed from the elliptic nome of the theta functions and transferred to the external tangent function. In this way, a formula can be created that only requires one of the three main theta functions: Modular modified functions of G and H Definition of the modular form of G and H An elliptic function is a modular function if this function in dependence on the elliptic nome as an internal variable function results in a function, which also results as an algebraic combination of Legendre's elliptic modulus and its complete elliptic integrals of the first kind in the K and K' form. The Legendre's elliptic modulus is the numerical eccentricity of the corresponding ellipse. If you set (where the imaginary part of is positive), following two functions are modular functions! If q = e2πiτ, then q−1/60G(q) and q11/60H(q) are modular functions of τ. For the Rogers–Ramanujan continued fraction R(q) this formula is valid based on the described modular modifications of G and H: Special values These functions have the following values for the reciprocal of Gelfond's constant and for the square of this reciprocal: The Rogers–Ramanujan continued fraction takes the following ordinate values for these abscissa values: {| class="wikitable" | |- | |} Dedekind eta function identities Derivation by the geometric mean Given are the mentioned definitions of and in this already mentioned way: The Dedekind eta function identities for the functions G and H result by combining only the following two equation chains: The quotient is the Rogers Ramanujan continued fraction accurately: But the product leads to a simplified combination of Pochhammer operators: The geometric mean of these two equation chains directly lead to following expressions in dependence of the Dedekind eta function in their Weber form: In this way the modulated functions and are represented directly using only the continued fraction R and the Dedekind eta function quotient! 
With the Pochhammer products alone, the following identity then applies to the non-modulated functions G and H: Pentagonal number theorem For the Dedekind eta function according to Weber's definition these formulas apply: The fourth formula describes the pentagonal number theorem because of the exponents! These basic definitions apply to the pentagonal numbers and the card house numbers: The fifth formula contains the Regular Partition Numbers as coefficients. The Regular Partition Number Sequence itself indicates the number of ways in which a positive integer number can be split into positive integer summands. For the numbers to , the associated partition numbers with all associated number partitions are listed in the following table: Further Dedekind eta identities The following further simplification for the modulated functions and can be undertaken. This connection applies especially to the Dedekind eta function from the fifth power of the elliptic nome: These two identities with respect to the Rogers–Ramanujan continued fraction were given for the modulated functions and : The combination of the last three formulas mentioned results in the following pair of formulas: {| class="wikitable" | |- | |} Reduced Weber modular function The Weber modular functions in their reduced form are an efficient way of computing the values of the Rogers–Ramanujan functions: First of all we introduce the reduced Weber modular functions in that pattern: This function fulfills following equation of sixth degree: {| class="wikitable" | |} Therefore this function is an algebraic function indeed. But along with the Abel–Ruffini theorem this function in relation to the eccentricity can not be represented by elementary expressions. However there are many values that in fact can be expressed elementarily. Four examples shall be given for this: First example: {| class="wikitable" | |- | |} Second example: {| class="wikitable" | |- | |} Third example: {| class="wikitable" | |- | |} Fourth example: {| class="wikitable" | |- | |} For that function, a further expression is valid: Exact eccentricity identity for the functions G and H In this way the accurate eccentricity dependent formulas for the functions G and H can be generated: Following Dedekind eta function quotient has this eccentricity dependency: This is the eccentricity dependent formula for the continued fraction R: The last three now mentioned formulas will be inserted into the final formulas mentioned in the section above: {| class="wikitable" | |- | |} On the left side of the balances the functions and in relation to the elliptic nome function are written down directly. And on the right side an algebraic combination of the eccentricity is formulated. Therefore these functions and are modular functions indeed! Application to quintic equations Discovery of the corresponding modulus by Charles Hermite The general case of quintic equations in the Bring–Jerrard form has a non-elementary solution based on the Abel–Ruffini theorem and will now be explained using the elliptic nome of the corresponding modulus, described by the lemniscate elliptic functions in a simplified way. The real solution for all real values can be determined as follows: Alternatively, the same solution can be presented in this way: The mathematician Charles Hermite determined the value of the elliptic modulus k in relation to the coefficient of the absolute term of the Bring–Jerrard form. 
Application to quintic equations Discovery of the corresponding modulus by Charles Hermite By the Abel–Ruffini theorem, the general quintic equation in Bring–Jerrard form has no solution in elementary functions; its solution is explained here using the elliptic nome of the corresponding modulus, described in a simplified way by the lemniscate elliptic functions. The real solution for all real values can be determined as follows: Alternatively, the same solution can be presented in this way: The mathematician Charles Hermite determined the value of the elliptic modulus k in relation to the coefficient of the absolute term of the Bring–Jerrard form. In his essay "Sur la résolution de l'équation du cinquième degré", published in the Comptes rendus, he described the calculation method for the elliptic modulus in terms of the absolute term. The Italian version of his essay, "Sulla risoluzione delle equazioni del quinto grado", contains on page 258 the Bring–Jerrard equation formula given above, which can be solved directly with the functions based on the corresponding elliptic modulus. This corresponding elliptic modulus can be worked out using the square of the hyperbolic lemniscate cotangent; for the derivation, see the article on lemniscate elliptic functions. The elliptic nome of this corresponding modulus is denoted here by the letter Q: The abbreviation ctlh denotes the hyperbolic lemniscate cotangent and the abbreviation aclh denotes the hyperbolic lemniscate areacosine. Calculation examples Two examples of this solution algorithm are now given: First calculation example: {|class = "wikitable" | Quintic Bring–Jerrard equation: Solution formula: Decimal places of the nome: Decimal places of the solution: |} Second calculation example: {|class = "wikitable" | Quintic Bring–Jerrard equation: Solution: Decimal places of the nome: Decimal places of the solution: |} Applications in Physics The Rogers–Ramanujan identities appeared in Baxter's solution of the hard hexagon model in statistical mechanics. The demodularized standard form of Ramanujan's continued fraction, detached from the modular form, is as follows: Relations to affine Lie algebras and vertex operator algebras James Lepowsky and Robert Lee Wilson were the first to prove the Rogers–Ramanujan identities using completely representation-theoretic techniques. They proved these identities using level 3 modules for the affine Lie algebra . In the course of this proof they invented and used what they called -algebras. Lepowsky and Wilson's approach is universal, in that it is able to treat all affine Lie algebras at all levels. It can be used to find (and prove) new partition identities. The first such example is that of Capparelli's identities, discovered by Stefano Capparelli using level 3 modules for the affine Lie algebra . See also Rogers polynomials Continuous q-Hermite polynomials References W.N. Bailey, Generalized Hypergeometric Series, (1935) Cambridge Tracts in Mathematics and Mathematical Physics, No. 32, Cambridge University Press, Cambridge. George Gasper and Mizan Rahman, Basic Hypergeometric Series, 2nd Edition, (2004), Encyclopedia of Mathematics and Its Applications, 96, Cambridge University Press, Cambridge. Bruce C. Berndt, Heng Huat Chan, Sen-Shan Huang, Soon-Yi Kang, Jaebum Sohn, Seung Hwan Son, The Rogers–Ramanujan Continued Fraction, J. Comput. Appl. Math. 105 (1999), pp. 9–24. Cilanne Boulet, Igor Pak, A Combinatorial Proof of the Rogers–Ramanujan and Schur Identities, Journal of Combinatorial Theory, Ser. A, vol. 113 (2006), 1019–1030. James Lepowsky and Robert L. Wilson, Construction of the affine Lie algebra , Comm. Math. Phys. 62 (1978), 43–53. James Lepowsky and Robert L. Wilson, A new family of algebras underlying the Rogers–Ramanujan identities, Proc. Natl. Acad. Sci. USA 78 (1981), 7254–7258. James Lepowsky and Robert L. Wilson, The structure of standard modules, I: Universal algebras and the Rogers–Ramanujan identities, Invent. Math. 77 (1984), 199–290. James Lepowsky and Robert L. Wilson, The structure of standard modules, II: The case , principal gradation, Invent. Math. 79 (1985), 417–442.
Stefano Capparelli, Vertex operator relations for affine algebras and combinatorial identities, Thesis (Ph.D.) – Rutgers, The State University of New Jersey, New Brunswick, 1988, 107 pp. External links Hypergeometric functions Integer partitions Mathematical identities Q-analogs Modular forms Srinivasa Ramanujan
Rogers–Ramanujan identities
Mathematics
3,047
64,037,719
https://en.wikipedia.org/wiki/Al%20Hijarah%20%28missile%29
The al-Hijarah missile was an Iraqi liquid-propelled, inertially guided short-range ballistic missile. A Scud-type missile, it was considered an upgrade of the al-Hussein missile and was equipped for chemical warheads. It was developed by 1990 and was first used in the Persian Gulf War, where the al-Hijarah missile would release poison gas clouds and kill personnel on the ground, as well as ignite oil wells. One al-Hijarah missile was confirmed to have been fired at Israel during the Gulf War, landing near Dimona; it was later revealed that this missile had a concrete-filled warhead. Characteristics The al-Hijarah missile warhead was probably filled with the chemical and biological agents possessed by Iraq at that time, such as anthrax, botulinum toxin, aflatoxin, sarin, cyclosarin and VX nerve agent. Being a version of the al-Hussein, the al-Hijarah also suffered from flight instability and poor guidance. Iraq itself at that time was almost fully self-sufficient in ballistic missile components and lacked only the ability to manufacture gyroscopes locally. References Chemical weapons Scud missiles Military history of Iraq Short-range ballistic missiles of Iraq Ballistic missiles of Iraq Surface-to-surface missiles of Iraq Weapons and ammunition introduced in 1990 Theatre ballistic missiles
Al Hijarah (missile)
Chemistry,Biology
270
40,841,737
https://en.wikipedia.org/wiki/75D/Kohoutek
75D/Kohoutek is a short-period comet discovered in February 1975 by Luboš Kohoutek. Even on the discovery plate the comet was only apparent magnitude 14. Assuming the comet has not disintegrated, the 2020–2021 perihelion passage is expected to peak only around apparent magnitude 20. Not to be confused with the much better-known C/1973 E1 (Kohoutek), 75D is a repeat visitor to the inner Solar System, with a period of about seven years. It was placed on its discovery orbit when it passed close to Jupiter on 28 July 1972. Apparitions have been dim, the brightest being in 1988 at about apparent magnitude 13. It was not seen in 1994, 2000, or 2007, nor on its last predicted return in 2014. The comet has been estimated to be in diameter. This comet was last observed from Mauna Kea on 19 May 1988. The Minor Planet Center has given the comet a "D/" designation, as the comet is believed to be lost. The comet is calculated to come to opposition in October 2020 in the constellation of Pisces. See also List of numbered comets References External links Orbital simulation from JPL (Java) / Ephemeris 75D/Kohoutek – Seiichi Yoshida @ aerith.net Periodic comets 0075 Lost comets Astronomical objects discovered in 1975
75D/Kohoutek
Astronomy
286
2,574,552
https://en.wikipedia.org/wiki/Compactly%20supported%20homology
In mathematics, a homology theory in algebraic topology is compactly supported if, in every degree n, the relative homology group Hn(X, A) of every pair of spaces (X, A) is naturally isomorphic to the direct limit of the nth relative homology groups of pairs (Y, B), where Y varies over compact subspaces of X and B varies over compact subspaces of A. Singular homology is compactly supported, since each singular chain is a finite sum of simplices, which are compactly supported. Strong homology is not compactly supported. If one has defined a homology theory over compact pairs, it is possible to extend it into a compactly supported homology theory in the wider category of Hausdorff pairs (X, A) with A closed in X, by defining that the homology of a Hausdorff pair (X, A) is the direct limit over pairs (Y, B), where Y, B are compact, Y is a subset of X, and B is a subset of A. References Homology theory
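Written out as a formula, the defining natural isomorphism above reads as follows (a restatement, with the usual convention that the compact pairs (Y, B) are directed by inclusion):

```latex
H_n(X, A) \;\cong\; \varinjlim_{(Y,\,B)} H_n(Y, B),
\qquad Y \subseteq X \ \text{compact}, \quad B \subseteq A \cap Y \ \text{compact}.
```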
Compactly supported homology
Mathematics
227
55,589,664
https://en.wikipedia.org/wiki/Lynx%20X-ray%20Observatory
The Lynx X-ray Observatory (Lynx) is a NASA-funded Large Mission Concept Study commissioned as part of the National Academy of Sciences 2020 Astronomy and Astrophysics Decadal Survey. The concept study phase is complete as of August 2019, and the Lynx final report has been submitted to the Decadal Survey for prioritization. If launched, Lynx would be the most powerful X-ray astronomy observatory constructed to date, enabling order-of-magnitude advances in capability over the current Chandra X-ray Observatory and XMM-Newton space telescopes. Background In 2016, following recommendations laid out in the so-called Astrophysics Roadmap of 2013, NASA established four space telescope concept studies for future Large strategic science missions. In addition to Lynx (originally called X-ray Surveyor in the Roadmap document), they are the Habitable Exoplanet Imaging Mission (HabEx), the Large Ultraviolet Optical Infrared Surveyor (LUVOIR), and the Origins Space Telescope (OST, originally called the Far-Infrared Surveyor). The four teams completed their final reports in August 2019, and turned them over to both NASA and the National Academy of Sciences, whose independent Decadal Survey committee advises NASA on which mission should take top priority. If it receives top prioritization and therefore funding, Lynx would launch in approximately 2036. It would be placed into a halo orbit around the second Sun–Earth Lagrange point (L2), and would carry enough propellant for more than twenty years of operation without servicing. The Lynx concept study involved more than 200 scientists and engineers across multiple international academic institutions, aerospace, and engineering companies. The Lynx Science and Technology Definition Team (STDT) was co-chaired by Alexey Vikhlinin and Feryal Özel. Jessica Gaskin was the NASA Study Scientist, and the Marshall Space Flight Center managed the Lynx Study Office jointly with the Smithsonian Astrophysical Observatory, which is part of the Center for Astrophysics Harvard & Smithsonian. Scientific objectives According to the concept study's Final Report, the Lynx Design Reference Mission was intentionally optimized to enable major advances in the following three astrophysical discovery areas: The dawn of black holes (Chapter 1 of the Lynx Report) The drivers of galaxy formation and evolution (Lynx Report, Chapter 2) The energetic properties of stellar evolution and stellar ecosystems (Lynx Report, Chapter 3) Collectively, these serve as three "science pillars" that set the baseline requirements for the observatory. Those requirements include greatly enhanced sensitivity, a sub-arcsecond point spread function stable across the telescope's field of view, and very high spectral resolution for both imaging and gratings spectroscopy. These requirements, in turn, enable a broad science case with major contributions across the astrophysical landscape (as summarized in Chapter 4 of the Lynx Report), including multi-messenger astronomy, black hole accretion physics, large-scale structure, Solar System science, and even exoplanets. The Lynx team markets the mission's science capabilities as "transformationally powerful, flexible, and long-lived", inspired by the spirit of NASA's Great Observatories program. 
Mission design and payload Spacecraft As described in Chapters 6-10 of the concept study's Final Report, Lynx is designed as an X-ray observatory with a grazing incidence X-ray telescope and detectors that record the position, energy, and arrival time of individual X-ray photons. Post-facto aspect reconstruction leads to modest requirements on pointing precision and stability, while enabling accurate sky locations for detected photons. The design of the Lynx spacecraft draws heavily on heritage from the Chandra X-ray Observatory, with few moving parts and high technology readiness level elements. Lynx will operate in a halo orbit around Sun-Earth L2, enabling high observing efficiency in a stable environment. Its maneuvers and operational procedures on-orbit are nearly identical to Chandras, and similar design approaches promote longevity. Without in-space servicing, Lynx will carry enough consumables to enable continuous operation for at least twenty years. The spacecraft and payload elements are, however, designed to be serviceable, potentially enabling an even longer lifetime. Payload The major advances in sensitivity, spatial, and spectral resolution in the Lynx Design Reference Mission are enabled by the spacecraft's payload, namely the mirror assembly and suite of three science instruments. The Lynx Report notes that each of the payload elements features state-of-the-art technologies while also representing a natural evolution of existing instrumentation technology development over the last two decades. The key technologies are currently at Technology Readiness Levels (TRL) 3 or 4. The Lynx Report notes that, with three years of targeted pre-phase A development in early 2020s, three of four key technologies will be matured to TRL 5 and one will reach TRL 4 by start of Phase A, achieving TRL 5 shortly thereafter. The Lynx payload consists of the following four major elements: The Lynx X-ray Mirror Assembly (LMA): The LMA is the central element of the observatory, enabling the major advances in sensitivity, spectroscopic throughput, survey speed, and greatly improved imaging relative to Chandra due to greatly improved off-axis performance. The Lynx design reference mission baselines a new technology called Silicon Metashell Optics (SMO), in which thousands of very thin, highly polished segments of nearly pure silicon are stacked into tightly packed concentric shells. Of the three mirror technologies considered for Lynx, the SMO design is currently the most advanced in terms of demonstrated performance (already approaching what is required for Lynx). The SMO's highly modular design lends itself to parallelized manufacturing and assembly, while also providing high fault tolerance: if some individual mirror segments or even modules are damaged, the impact to schedule and cost is minimal. The High Definition X-ray Imager (HDXI): The HDXI is the main imager for Lynx, providing high spatial resolution over a wide field of view (FOV) and high sensitivity over the 0.2–10 keV bandpass. Its 0.3 arcsecond (0.3′′) pixels will adequately sample the Lynx mirror point spread function over a 22′ × 22′ FOV. The 21 individual sensors of the HDXI are laid out along the optimal focal surface to improve the off-axis PSF. 
The Lynx DRM uses Complementary Metal Oxide Semiconductor (CMOS) Active Pixel Sensor (APS) technology, which is projected to have the required capabilities (i.e., high readout rates, high broad-band quantum efficiency, sufficient energy resolution, minimal pixel crosstalk, and radiation hardness). The Lynx team has identified three options with comparable TRL ratings (TRL 3) and sound TRL advancement roadmaps: the Monolithic CMOS, Hybrid CMOS, and Digital CCDs with CMOS readout. All are currently funded for technology development. The Lynx X-ray Microcalorimeter (LXM): The LXM is an imaging spectrometer that provides high resolving power (R ~ 2,000) in both the hard and soft X-ray bands, combined with high spatial resolution (down to 0.5′′ scales). To meet the diverse range of Lynx science requirements, the LXM focal plane includes three arrays that share the same readout technology. Each array is differentiated by its absorber pixel size and thickness, and by how the absorbers are connected to thermal readouts. The total number of pixels exceeds 100,000 — a major leap over past and currently planned X-ray microcalorimeters. This huge improvement does not entail a huge added cost: two of the LXM arrays feature a simple, already proven, “thermal” multiplexing approach where multiple absorbers are connected to a single temperature sensor. This design brings the number of sensors to read out (one of the main power and cost drivers for the X-ray microcalorimeters) to ~7,600. This is only a modest increase over what is planned for the X-IFU instrument on Athena. As of Spring 2019, prototypes of the focal plane have been made that include all three arrays at 2/3 full size. These prototypes demonstrate that arrays with the pixel form factor, size, and wiring density required by Lynx are readily achievable, with high yield. The energy resolution requirements of the different pixel types is also readily achievable. Although the LXM is technically still at TRL 3, there is a clear path for achieving TRL 4 by 2020 and TRL 5 by 2024. The X-ray Grating Spectrometer''' (XGS): The XGS will provide even higher spectral resolution (R = 5,000 with a goal of 7,500) in the soft X-ray band for point sources. Compared to the current state of the art (Chandra), the XGS provides a factor of > 5 higher spectral resolution and a factor of several hundred higher throughput. These gains are enabled by recent advances in X-ray grating technologies. Two strong technology candidates are: critical angle transmission (used for the Lynx DRM) and off-plane reflection gratings. Both are fully feasible, currently at TRL 4, and have demonstrated high efficiencies and resolving powers of ~ 10,000 in recent X-ray tests. Mission operations The Chandra X-ray Observatory experience provides the blueprint for developing the systems required to operate Lynx, leading to a significant cost reduction relative to starting from scratch. This starts with a single prime contractor for the science and operations center, staffed by a seamless, integrated team of scientists, engineers, and programmers. Many of the system designs, procedures, processes, and algorithms developed for Chandra will be directly applicable for Lynx, although all will be recast in a software/hardware environment appropriate for the 2030s and beyond. The science impact of Lynx will be maximized by subjecting all of its proposed observations to peer review, including those related to the three science pillars. 
Time pre-allocation can be considered only for a small number of multi-purpose key programs, such as surveys in pre-selected regions of the sky. Such an open General Observer (GO) program approach has been successfully employed by large missions such as Hubble Space Telescope, Chandra X-ray Observatory, and Spitzer Space Telescope, and is planned for James Webb Space Telescope and Nancy Grace Roman Space Telescope. The Lynx GO program will have ample exposure time to achieve the objectives of its science pillars, make impacts across the astrophysical landscape, open new directions of inquiry, and produce as yet unimagined discoveries. Estimated cost The cost of the Lynx X-ray Observatory is estimated to be between US$4.8 billion to US$6.2 billion (in FY20 dollars at 40% and 70% confidence levels, respectively). This estimated cost range includes the launch vehicle, cost reserves, and funding for five years of mission operations, while excluding potential foreign contributions (such as participation by the European Space Agency (ESA)). As described in Section 8.5 of the concept study's Final Report, the Lynx'' team commissioned five independent cost estimates, all of which arrived at similar estimates for the total mission lifecycle cost. See also Advanced Telescope for High Energy Astrophysics International X-ray Observatory Nuclear Spectroscopic Telescope Array (NuSTAR) List of proposed space observatories References External links Lynx home page Lynx home page for scientists at NASA Space telescopes X-ray telescopes Proposed NASA space probes
Lynx X-ray Observatory
Astronomy
2,360
72,333,949
https://en.wikipedia.org/wiki/Nitryl%20azide
Nitryl azide (tetranitrogen dioxide) is an inorganic compound with the chemical formula . It is an unstable nitrogen oxide consisting of a nitro group and an azide group joined by a covalent nitrogen–nitrogen bond. It has been detected by infrared spectroscopy as a short-lived product of the reaction between sodium azide and nitronium hexafluoroantimonate: The compound quickly decomposes to form nitrous oxide. Calculations suggest that this process occurs via an oxatetrazole oxide intermediate: References Azido compounds Nitrogen oxides
Nitryl azide
Chemistry
119
699,235
https://en.wikipedia.org/wiki/Square%20foot
The square foot (; abbreviated sq ft, sf, or ft2; also denoted by '2 and ⏍, also known as "squeet") is an imperial unit and U.S. customary unit (non-SI, non-metric) of area, used mainly in the United States, Canada, the United Kingdom, Bangladesh, India, Nepal, Pakistan, Ghana, Liberia, Malaysia, Myanmar, Singapore and Hong Kong. It is defined as the area of a square with sides of 1 foot. Although the pluralization is regular in the noun form, when used as an adjective, the singular is preferred. So, an apartment measuring 700 square feet could be described as a 700 square-foot apartment. This corresponds to common linguistic usage of foot. The square foot unit is commonly used in real estate. Dimensions are generally taken with a laser device, the latest in a long line of tools used to gauge the size of apartments or other spaces. Real estate agents often measure straight corner-to-corner, then deduct non-heated spaces, and add heated spaces whose footprints exceed the end-to-end measurement. 1 square foot conversion to other units of area: 1 square foot (ft2) = 0.0000000358701 square miles (mi2) 1 square foot (ft2) = 0.000022956341 acres (ac) 1 square foot (ft2) = 0.111111111111 square yards (yd2) 1 square foot (ft2) = 144 square inches (in2) 1 square foot (ft2) = 144,000,000,000,000 square microinches (μin2) 1 square foot (ft2) = 0.00000009290304 square kilometers (km2) 1 square foot (ft2) = 0.000009290304 hectare (ha) 1 square foot (ft2) = 0.09290304 square meters (m2) 1 square foot (ft2) = 9.290304 square decimeters (dm2) (uncommon) 1 square foot (ft2) = 929.0304 square centimeters (cm2) 1 square foot (ft2) = 92,903.04 square millimeters (mm2) 1 square foot (ft2) = 92,903,040,000 square micrometers (μm2) See also Area (geometry) Conversion of units Cubic foot Metrication in Canada Miscellaneous Technical (Unicode) for a list of miscellaneous technical symbols and fonts which support the square foot symbol Orders of magnitude (area) Square (algebra), square root References External links Square feet conversion to acres and other units of area Units of area Imperial units Customary units of measurement in the United States
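For quick calculations, the conversion factors listed above can be collected into a small table in code. This is only a convenience sketch; the factors follow from the exact definitions 1 ft = 0.3048 m, 1 yd = 3 ft, 1 mile = 5,280 ft and 1 acre = 43,560 sq ft, and the dictionary and function names are arbitrary choices.

```python
# Conversion factors from square feet, derived from exact unit definitions.
SQUARE_FEET_TO = {
    "square_miles": 1.0 / (5280.0 ** 2),
    "acres": 1.0 / 43560.0,
    "square_yards": 1.0 / 9.0,
    "square_inches": 144.0,
    "square_meters": 0.3048 ** 2,        # = 0.09290304 exactly
    "square_centimeters": 30.48 ** 2,    # = 929.0304
    "hectares": 0.3048 ** 2 / 10000.0,
}

def convert_square_feet(value_sq_ft, unit):
    """Convert an area given in square feet to the requested unit."""
    return value_sq_ft * SQUARE_FEET_TO[unit]

print(convert_square_feet(700.0, "square_meters"))  # a 700 sq ft apartment is about 65.03 m^2
```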
Square foot
Mathematics
579
2,388,685
https://en.wikipedia.org/wiki/Anemophily
Anemophily or wind pollination is a form of pollination whereby pollen is distributed by wind. Almost all gymnosperms are anemophilous, as are many plants in the order Poales, including grasses, sedges, and rushes. Other common anemophilous plants are oaks, pecans, pistachios, sweet chestnuts, alders, hops, and members of the family Juglandaceae (hickory or walnut family). Approximately 12% of plants across the globe are pollinated by anemophily, including cereal crops like rice and corn and other prominent crop plants like wheat, rye, barley, and oats. In addition, many pines, spruces, and firs are wind-pollinated. Syndrome Features of the wind-pollination syndrome include a lack of scent production, a lack of showy floral parts (resulting in small, inconspicuous flowers), reduced production of nectar, and the production of enormous numbers of pollen grains. This distinguishes them from entomophilous and zoophilous species (whose pollen is spread by insects and vertebrates respectively). Anemophilous pollen grains are smooth, light, and non-sticky, so that they can be transported by air currents. Wind-pollinating plants have no predisposition to attract pollinating organisms. They freely expel a myriad of these pollen grains, and only a small percentage of them ends up captured by the female floral structures on wind-pollinated plants. They are typically in diameter, although the pollen grains of Pinus species can be much larger and much less dense. Anemophilous plants possess lengthy, well-exposed stamens to catch and distribute pollen. These stamens are exposed to wind currents and also have large, feathery stigma to easily trap airborne pollen grains. Pollen from anemophilous plants tends to be smaller and lighter than pollen from entomophilous ones, with very low nutritional value to insects due to their low protein content. However, insects sometimes gather pollen from staminate anemophilous flowers at times when higher-protein pollens from entomophilous flowers are scarce. Anemophilous pollens may also be inadvertently captured by bees' electrostatic field. This may explain why, though bees are not observed to visit ragweed flowers, its pollen is often found in honey made during the ragweed floral bloom. Other flowers that are generally anemophilous are observed to be actively worked by bees, with solitary bees often visiting grass flowers, and the larger honeybees and bumblebees frequently gathering pollen from corn tassels and other grains. Anemophily is an adaptation that helps to separate the male and female reproductive systems of a single plant, reducing the effects of inbreeding. It often accompanies dioecy – the presence of male and female reproductive structures on separate plants. Anemophily is adaptively beneficial because it promotes outcrossing and thus the avoidance of inbreeding depression that can occur due to the expression of recessive deleterious mutations in inbred progeny plants. Allergies Almost all pollens that are allergens are from anemophilous species. People allergic to the pollen produced by anemophilous plants often have symptoms of hay fever. Grasses (Poaceae) are the most important producers of aeroallergens in most temperate regions, with lowland or meadow species producing more pollen than upland or moorland species. In Morocco, it was found that asthma caused by pollen from Poaceae accounted for 10% of the clinical respiratory diseases that patients faced. 
The nature of how species of Poaceae grasses flower results in an increase in the time that the allergenic pollen circulates through the air, which is not favorable to people who are hypersensitive to it. References External links Plant morphology Pollination Wind
Anemophily
Biology
808
3,971,914
https://en.wikipedia.org/wiki/SIGMOBILE
SIGMOBILE is the Association for Computing Machinery's Special Interest Group on Mobility of Systems, Users, Data and Computing, which specializes in the field of mobile computing and wireless networks and wearable computing. Conceived in early 1995, ACM SIGMOBILE started out as an organization that fostered research in the "field of mobility and tetherless ubiquitous connectivity". It was founded as a provisional SIG on June 13, 1996, gaining permanent status on October 12, 2000. On February 8, 2005, the SIGMOBILE Chapter Program was launched. The NTU Singapore chapter became the first Student Chapter, and the Sydney, Australia Chapter became the first Professional Chapter. SIGMOBILE sponsors four annual international conferences: MobiCom, the International Conference on Mobile Computing and Networking; MobiHoc, the International Symposium on Mobile Ad Hoc Networking and Computing; MobiSys, the International Conference on Mobile Systems, Applications, and Services; and SenSys, the ACM Conference on Embedded Networked Sensor Systems. SIGMOBILE publishes a quarterly journal, Mobile Computing and Communications Review (MC2R), as well as the annual Proceedings of the conferences and many workshops sponsored by SIGMOBILE such as HotMobile, the International Workshop on Mobile Computing Systems and Applications. External links SIGMOBILE MC2R MobiCom MobiHoc MobiSys SenSys HotMobile Association for Computing Machinery Special Interest Groups
SIGMOBILE
Technology
293
74,596,521
https://en.wikipedia.org/wiki/CodeWeek
EU Codeweek (also stylized as CodeWeek) is an initiative started in 2013 by the European Union to increase basic programming knowledge among children and young people. History Codeweek was launched in 2013 by Neelie Kroes – at the time Vice President of the European Commission – as part of a broader European digital agenda. With the increase in the number of devices running software, there is a growing need for programmers, and the organization wants to introduce children to computer language at a young age through this initiative. Codeweek is a week during which free events are organized in schools, libraries, and other locations across Europe to teach more children and young people the basics of programming. In 2022, eighty European countries participated. Participating countries By 2024, 45 countries are taking part in Code Week, featuring an annual competition that ranks countries by the ratio of activities to their population size. These are the participating countries. Albania Argentina Armenia Austria Belgium Bosnia and Herzegovina Bulgaria Croatia Cyprus Czech Republic Denmark Estonia France Georgia Germany Greece Hungary Ireland Italy Jordan Kenya Latvia Lebanon Lithuania Malta Moldova Monaco Netherlands North Macedonia Pakistan Palestine Poland Portugal Romania Serbia Slovakia Slovenia Spain Sweden Thailand Tunisia Türkiye (Turkey) Ukraine United Kingdom United States References Computer science education
CodeWeek
Technology
243
4,197,790
https://en.wikipedia.org/wiki/Germabenzene
Germabenzene (C5H6Ge) is the parent representative of a group of chemical compounds whose molecular structure contains a benzene ring with one carbon atom replaced by a germanium atom. Germabenzene itself has been studied theoretically, and has been synthesized bearing a bulky 2,4,6-tris[bis(trimethylsilyl)methyl]phenyl (Tbt) group. Stable naphthalene derivatives also exist in the laboratory, such as a substituted 2-germanaphthalene. The germanium–carbon bond in this compound is shielded from potential reactants by a Tbt group. This compound is aromatic, just like the other carbon-group representatives silabenzene and stannabenzene. See also 6-membered aromatic rings with one carbon replaced by another group: borabenzene, silabenzene, germabenzene, stannabenzene, pyridine, phosphorine, arsabenzene, bismabenzene, pyrylium, thiopyrylium, selenopyrylium, telluropyrylium References Germanium heterocycles Germanium(IV) compounds Six-membered rings Hypothetical chemical compounds
Germabenzene
Chemistry
266
18,404,411
https://en.wikipedia.org/wiki/Commutation%20theorem%20for%20traces
In mathematics, a commutation theorem for traces explicitly identifies the commutant of a specific von Neumann algebra acting on a Hilbert space in the presence of a trace. The first such result was proved by Francis Joseph Murray and John von Neumann in the 1930s and applies to the von Neumann algebra generated by a discrete group or by the dynamical system associated with a measurable transformation preserving a probability measure. Another important application is in the theory of unitary representations of unimodular locally compact groups, where the theory has been applied to the regular representation and other closely related representations. In particular this framework led to an abstract version of the Plancherel theorem for unimodular locally compact groups due to Irving Segal and Forrest Stinespring and an abstract Plancherel theorem for spherical functions associated with a Gelfand pair due to Roger Godement. Their work was put in final form in the 1950s by Jacques Dixmier as part of the theory of Hilbert algebras. It was not until the late 1960s, prompted partly by results in algebraic quantum field theory and quantum statistical mechanics due to the school of Rudolf Haag, that the more general non-tracial Tomita–Takesaki theory was developed, heralding a new era in the theory of von Neumann algebras. Commutation theorem for finite traces Let H be a Hilbert space and M a von Neumann algebra on H with a unit vector Ω such that M Ω is dense in H M ' Ω is dense in H, where M ' denotes the commutant of M (abΩ, Ω) = (baΩ, Ω) for all a, b in M. The vector Ω is called a cyclic-separating trace vector. It is called a trace vector because the last condition means that the matrix coefficient corresponding to Ω defines a tracial state on M. It is called cyclic since Ω generates H as a topological M-module. It is called separating because if aΩ = 0 for a in M, then aMΩ= (0), and hence a = 0. It follows that the map for a in M defines a conjugate-linear isometry of H with square the identity, J2 = I. The operator J is usually called the modular conjugation operator. It is immediately verified that JMJ and M commute on the subspace M Ω, so that The commutation theorem of Murray and von Neumann states that {| border="1" cellspacing="0" cellpadding="5" | |} One of the easiest ways to see this is to introduce K, the closure of the real subspace Msa Ω, where Msa denotes the self-adjoint elements in M. It follows that an orthogonal direct sum for the real part of the inner product. This is just the real orthogonal decomposition for the ±1 eigenspaces of J. On the other hand for a in Msa and b in Msa, the inner product (abΩ, Ω) is real, because ab is self-adjoint. Hence K is unaltered if M is replaced by M '. In particular Ω is a trace vector for M and J is unaltered if M is replaced by M '. So the opposite inclusion follows by reversing the roles of M and M. Examples One of the simplest cases of the commutation theorem, where it can easily be seen directly, is that of a finite group Γ acting on the finite-dimensional inner product space by the left and right regular representations λ and ρ. These unitary representations are given by the formulas for f in and the commutation theorem implies that The operator J is given by the formula Exactly the same results remain true if Γ is allowed to be any countable discrete group. The von Neumann algebra λ(Γ)' ' is usually called the group von Neumann algebra of Γ. Another important example is provided by a probability space (X, μ). 
The Abelian von Neumann algebra A = L∞(X, μ) acts by multiplication operators on H = L2(X, μ) and the constant function 1 is a cyclic-separating trace vector. It follows that so that A is a maximal Abelian subalgebra of B(H), the von Neumann algebra of all bounded operators on H. The third class of examples combines the above two. Coming from ergodic theory, it was one of von Neumann's original motivations for studying von Neumann algebras. Let (X, μ) be a probability space and let Γ be a countable discrete group of measure-preserving transformations of (X, μ). The group therefore acts unitarily on the Hilbert space H = L2(X, μ) according to the formula for f in H and normalises the Abelian von Neumann algebra A = L∞(X, μ). Let a tensor product of Hilbert spaces. The group–measure space construction or crossed product von Neumann algebra is defined to be the von Neumann algebra on H1 generated by the algebra and the normalising operators . The vector is a cyclic-separating trace vector. Moreover the modular conjugation operator J and commutant M ' can be explicitly identified. One of the most important cases of the group–measure space construction is when Γ is the group of integers Z, i.e. the case of a single invertible measurable transformation T. Here T must preserve the probability measure μ. Semifinite traces are required to handle the case when T (or more generally Γ) only preserves an infinite equivalent measure; and the full force of the Tomita–Takesaki theory is required when there is no invariant measure in the equivalence class, even though the equivalence class of the measure is preserved by T (or Γ). Commutation theorem for semifinite traces Let M be a von Neumann algebra and M+ the set of positive operators in M. By definition, a semifinite trace (or sometimes just trace) on M is a functional τ from M+ into [0, ∞] such that for a, b in M+ and λ, μ ≥ 0 (); for a in M+ and u a unitary operator in M (unitary invariance); τ is completely additive on orthogonal families of projections in M (normality); each projection in M is as orthogonal direct sum of projections with finite trace (semifiniteness). If in addition τ is non-zero on every non-zero projection, then τ is called a faithful trace. If τ is a faithful trace on M, let H = L2(M, τ) be the Hilbert space completion of the inner product space with respect to the inner product The von Neumann algebra M acts by left multiplication on H and can be identified with its image. Let for a in M0. The operator J is again called the modular conjugation operator and extends to a conjugate-linear isometry of H satisfying J2 = I. The commutation theorem of Murray and von Neumann {| border="1" cellspacing="0" cellpadding="5" | |} is again valid in this case. This result can be proved directly by a variety of methods, but follows immediately from the result for finite traces, by repeated use of the following elementary fact: If M1 ⊇ M2 are two von Neumann algebras such that pn M1 = pn M2 for a family of projections pn in the commutant of M1 increasing to I in the strong operator topology, then M1 = M2. Hilbert algebras The theory of Hilbert algebras was introduced by Godement (under the name "unitary algebras"), Segal and Dixmier to formalize the classical method of defining the trace for trace class operators starting from Hilbert–Schmidt operators. Applications in the representation theory of groups naturally lead to examples of Hilbert algebras. 
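The finite-group case described earlier can be checked concretely in a toy example. The sketch below takes Γ = S3 acting on ℓ2(Γ) with the assumed conventions λ(g) δ_h = δ_{gh}, ρ(g) δ_h = δ_{h g^-1} and (J f)(g) = conjugate of f(g^-1); coefficients are kept real, so J acts as an ordinary permutation matrix. It verifies that λ(Γ) and ρ(Γ) commute and that conjugation by J carries λ(g) to ρ(g), which is the content of the commutation theorem in this finite-dimensional setting.

```python
import itertools
import numpy as np

# Elements of S3 as permutation tuples; compose(p, q) applies q first, then p.
group = list(itertools.permutations(range(3)))
index = {g: i for i, g in enumerate(group)}
n = len(group)

def compose(p, q):
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0, 0, 0]
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def lam(g):
    # Left regular representation: lambda(g) delta_h = delta_{g h}
    m = np.zeros((n, n))
    for h in group:
        m[index[compose(g, h)], index[h]] = 1.0
    return m

def rho(g):
    # Right regular representation: rho(g) delta_h = delta_{h g^-1}
    m = np.zeros((n, n))
    for h in group:
        m[index[compose(h, inverse(g))], index[h]] = 1.0
    return m

# Modular conjugation on basis vectors: J delta_h = delta_{h^-1} (real coefficients assumed).
J = np.zeros((n, n))
for g in group:
    J[index[inverse(g)], index[g]] = 1.0

for g in group:
    for h in group:
        assert np.allclose(lam(g) @ rho(h), rho(h) @ lam(g))  # lambda(G) and rho(G) commute
    assert np.allclose(J @ lam(g) @ J, rho(g))                # J lambda(g) J = rho(g)
print("commutation checks passed for the regular representations of S3")
```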
Every von Neumann algebra endowed with a semifinite trace has a canonical "completed" or "full" Hilbert algebra associated with it; and conversely a completed Hilbert algebra of exactly this form can be canonically associated with every Hilbert algebra. The theory of Hilbert algebras can be used to deduce the commutation theorems of Murray and von Neumann; equally well the main results on Hilbert algebras can also be deduced directly from the commutation theorems for traces. The theory of Hilbert algebras was generalised by Takesaki as a tool for proving commutation theorems for semifinite weights in Tomita–Takesaki theory; they can be dispensed with when dealing with states. Definition A Hilbert algebra is an algebra with involution x→x* and an inner product (,) such that (a, b) = (b*, a*) for a, b in ; left multiplication by a fixed a in is a bounded operator; * is the adjoint, in other words (xy, z) = (y, x*z); the linear span of all products xy is dense in . Examples The Hilbert–Schmidt operators on an infinite-dimensional Hilbert space form a Hilbert algebra with inner product (a, b) = Tr (b*a). If (X, μ) is an infinite measure space, the algebra L∞ (X) L2(X) is a Hilbert algebra with the usual inner product from L2(X). If M is a von Neumann algebra with faithful semifinite trace τ, then the *-subalgebra M0 defined above is a Hilbert algebra with inner product (a, b) = τ(b*a). If G is a unimodular locally compact group, the convolution algebra L1(G)L2(G) is a Hilbert algebra with the usual inner product from L2(G). If (G, K) is a Gelfand pair, the convolution algebra L1(K\G/K)L2(K\G/K) is a Hilbert algebra with the usual inner product from L2(G); here Lp(K\G/K) denotes the closed subspace of K-biinvariant functions in Lp(G). Any dense *-subalgebra of a Hilbert algebra is also a Hilbert algebra. Properties Let H be the Hilbert space completion of with respect to the inner product and let J denote the extension of the involution to a conjugate-linear involution of H. Define a representation λ and an anti-representation ρ of on itself by left and right multiplication: These actions extend continuously to actions on H. In this case the commutation theorem for Hilbert algebras states that {| border="1" cellspacing="0" cellpadding="5" | |} Moreover if the von Neumann algebra generated by the operators λ(a), then {| border="1" cellspacing="0" cellpadding="5" | |} These results were proved independently by and . The proof relies on the notion of "bounded elements" in the Hilbert space completion H. An element of x in H is said to be bounded (relative to ) if the map a → xa of into H extends to a bounded operator on H, denoted by λ(x). In this case it is straightforward to prove that: Jx is also a bounded element, denoted x*, and λ(x*) = λ(x)*; a → ax is given by the bounded operator ρ(x) = Jλ(x*)J on H; M ' is generated by the ρ(x)'s with x bounded; λ(x) and ρ(y) commute for x, y bounded. The commutation theorem follows immediately from the last assertion. In particular The space of all bounded elements forms a Hilbert algebra containing as a dense *-subalgebra. It is said to be completed or full because any element in H bounded relative to must actually already lie in . The functional τ on M+ defined by if x = λ(a)*λ(a) and ∞ otherwise, yields a faithful semifinite trace on M with Thus: {| border="1" cellspacing="0" cellpadding="5" |There is a one-one correspondence between von Neumann algebras on H with faithful semifinite trace and full Hilbert algebras with Hilbert space completion H. 
|} See also von Neumann algebra Affiliated operator Tomita–Takesaki theory Notes References (English translation) (English translation) (Section 5) Von Neumann algebras Representation theory of groups Ergodic theory Theorems in functional analysis Theorems in representation theory
Commutation theorem for traces
Mathematics
2,618
329,806
https://en.wikipedia.org/wiki/Sleight%20of%20hand
Sleight of hand (also known as prestidigitation or legerdemain ()) refers to fine motor skills when used by performing artists in different art forms to entertain or manipulate. It is closely associated with close-up magic, card magic, card flourishing and stealing. Because of its heavy use and practice by magicians, sleight of hand is often confused as a branch of magic; however, it is a separate genre of entertainment and many artists practice sleight of hand as an independent skill. Sleight of hand pioneers with worldwide acclaim include Dan and Dave, Ricky Jay, Derek DelGaudio, David Copperfield, Yann Frisch, Norbert Ferré, Dai Vernon, Jerry Sadowitz, Cardini, Tony Slydini, Helder Guimarães and Tom Mullica. Etymology and history The word sleight, meaning "the use of dexterity or cunning, especially so as to deceive", comes from the Old Norse. The phrase sleight of hand means "quick fingers" or "trickster fingers". Common synonyms of Latin and French include prestidigitation and legerdemain respectively. Seneca the Younger, philosopher of the Silver Age of Latin literature, famously compared rhetorical techniques and illusionist techniques. Association with close-up magic Sleight of hand is often used in close-up magic, where the sleights are performed with the audience close to the magician, usually in physical contact or within . This close contact eliminates theories of fake audience members and the use of gimmicks. It makes use of everyday items as props, such as cards, coins, rubber bands, paper, phones and even saltshakers. A well-performed sleight looks like an ordinary, natural and completely innocent gesture, change in hand position or body posture. In addition to manual dexterity, sleight of hand in close-up magic depends on the use of psychology, timing, misdirection, and natural choreography in accomplishing a magical effect. Association with stage magic Sleight of hand during stage magic performances is not common, as most magic events and stunts are performed with objects visible to a much larger audience, but is nevertheless done occasionally by many stage performers. The most common magic tricks performed with sleight of hand on stage are rope manipulations and card tricks, with the first typically being done with a member of the audience to rule out the possibility of stooges and the latter primarily being done on a table while a camera is live-recording, allowing the rest of the audience to see the performance on a big screen. Worldwide acclaimed stage magician David Copperfield often includes illusions featuring sleight of hand in his stage shows. Association with card cheating Although being mostly used for entertainment and comedy purposes, sleight of hand is also notoriously used to cheat at casinos and gambling facilities throughout the world. Common ways to professionally cheat at card games using sleight of hand include palming, switching, ditching, and stealing cards from the table. Such techniques involve extreme misdirection and years of practice. For these reasons, the term sleight of hand frequently carries negative associations of dishonesty and deceit at many gambling halls, and many magicians known around the world are publicly banned from casinos, such as British mentalist and close-up magician Derren Brown, who is banned from every casino in Britain. 
Association with cardistry Unlike card tricks performed on the street or on stage, and unlike card cheating, cardistry is solely about impressing without illusions, deceit, misdirection or the other elements commonly used in card tricks and card cheating. Cardistry is the art of card flourishing; it is intended to be visually impressive and to give the appearance of being difficult to perform. Card flourishing is often associated with card tricks, but many sleight-of-hand artists perform flourishes without considering themselves magicians or having any real interest in card tricks. Association with card throwing The art of card throwing generally consists of throwing standard playing cards with exceptionally high speed and accuracy, powerful enough to slice fruits and vegetables such as carrots and even melons. Like flourishing, card throwing is meant to be visually impressive and does not involve magic elements. Magician Ricky Jay popularized card throwing within the sleight-of-hand community with the release of his 1977 book Cards as Weapons, which was met with strong sales and critical acclaim. Some magic tricks, both close-up and on stage, rely heavily on throwing cards. See also Cups and balls Tenkai palm References Sources Printed Online External links Sleight of hand on YouTube Sleight of hand on https://Cardtricks.info Card magic Coin magic Motor skills
Sleight of hand
Biology
976
15,213,988
https://en.wikipedia.org/wiki/KIAA0930
KIAA0930 is a protein that, in humans, is encoded by the KIAA0930 gene. References External links Further reading Uncharacterized proteins
KIAA0930
Biology
37
36,175,775
https://en.wikipedia.org/wiki/C28H42O2
{{DISPLAYTITLE:C28H42O2}} The molecular formula C28H42O2 (molar mass: 410.63 g/mol, exact mass: 410.3185 u) may refer to: Tocotrienols β-Tocotrienol γ-Tocotrienol
C28H42O2
Chemistry
67
4,701,722
https://en.wikipedia.org/wiki/Ramsey%E2%80%93Cass%E2%80%93Koopmans%20model
The Ramsey–Cass–Koopmans model, or Ramsey growth model, is a neoclassical model of economic growth based primarily on the work of Frank P. Ramsey, with significant extensions by David Cass and Tjalling Koopmans. The Ramsey–Cass–Koopmans model differs from the Solow–Swan model in that the choice of consumption is explicitly microfounded at a point in time and so endogenizes the savings rate. As a result, unlike in the Solow–Swan model, the saving rate may not be constant along the transition to the long run steady state. Another implication of the model is that the outcome is Pareto optimal or Pareto efficient. Originally Ramsey set out the model as a social planner's problem of maximizing consumption levels over successive generations. Only later was a model adopted by Cass and Koopmans as a description of a decentralized dynamic economy with a representative agent. The Ramsey–Cass–Koopmans model aims only at explaining long-run economic growth rather than business cycle fluctuations and does not include sources of disturbances like market imperfections, heterogeneity among households, or exogenous shocks. Subsequent researchers extended the model, allowing for government purchases, employment variations, and other shocks, notably in real business cycle theory. Mathematical description Model setup In the usual setup, time is continuous starting, for simplicity, at and continuing forever. By assumption, the only productive factors are capital and labour , both required to be nonnegative. The labour force, which makes up the entire population, is assumed to grow at a constant rate , i.e. , implying that with initial level at . Finally, let denote aggregate production, and denote aggregate consumption. The variables that the Ramsey–Cass–Koopmans model ultimately aims to describe are , the per capita (or more accurately, per labour) consumption, as well as , known as capital intensity. It does so by first connecting capital accumulation, written in Newton's notation, with consumption , describing a consumption-investment trade-off. More specifically, since the existing capital stock decays by depreciation rate (assumed to be constant), it requires investment of current-period production output . Thus, The relationship between the productive factors and aggregate output is described by the aggregate production function, . A common choice is the Cobb–Douglas production function , but generally any production function satisfying the Inada conditions is permissible. Importantly, though, is required to be homogeneous of degree 1, which economically implies constant returns to scale. With this assumption, we can re-express aggregate output in per capita terms For example, if we use the Cobb–Douglas production function with , then . To obtain the first key equation of the Ramsey–Cass–Koopmans model, the dynamic equation for the capital stock needs to be expressed in per capita terms. Noting the quotient rule for , we have a non-linear differential equation akin to the Solow–Swan model but incorporates endogenous consumption 𝑐, reflecting the model's microfoundations. Maximizing welfare If we ignore the problem of how consumption is distributed, then the rate of utility is a function of aggregate consumption. That is, . To avoid the problem of infinity, we exponentially discount future utility at a discount rate . A high reflects high impatience. The social planner's problem is maximizing the social welfare function . 
Assume that the economy is populated by identical immortal individuals with unchanging utility functions (a representative agent), such that the total utility is:The utility function is assumed to be strictly increasing (i.e., there is no bliss point) and concave in , with , where is marginal utility of consumption . Thus we have the social planner's problem: where an initial non-zero capital stock is given. To ensure that the integral is well-defined, we impose . Solution The solution, usually found by using a Hamiltonian function, is a differential equation that describes the optimal evolution of consumption, the Keynes–Ramsey rule. The term , where is the marginal product of capital, reflects the marginal return on net investment, accounting for capital depreciation and time discounting. Here is the elasticity of intertemporal substitution (EIS), defined byIt is formally equivalent to the inverse of relative risk aversion. The quantity reflects the curvature of the utility function and indicates how much the representative agent wishes to smooth consumption over time. If the agent has high relative risk aversion, then it has low EIS, and thus would be more willing to smooth consumption over time. It is often assumed that is strictly monotonically increasing and concave, thus . In particular, if utility is logarithmic, then it is constant:We can rewrite the Ramsey rule aswhere we interpret as the "consumption delay rate", indicating the rate at which current consumption is being postponed in favor of future consumption. A higher value implies that the agent is prioritizing saving over consuming today, thereby deferring consumption to a later period. Graphical analysis in phase space The two coupled differential equations for and form the Ramsey–Cass–Koopmans dynamical system. A steady state for the system is found by setting and equal to zero. There are three solutions: The first is the only solution in the interior of the upper quadrant. It is a saddle point (as shown below). The second is a repelling point. The third is a degenerate stable equilibrium. By default, the first solution is meant, although the other two solutions are important to keep track of. Any optimal trajectory must follow the dynamical system. However, since the variable is a control variable, at each capital intensity , to find its corresponding optimal trajectory, we still need to find its starting consumption rate . As it turns out, the optimal trajectory is the unique one that converges to the interior equilibrium point. Any other trajectory either converges to the all-saving equilibrium with , or diverges to , which means that the economy expends all its capital in finite time. Both achieve a lower overall utility than the trajectory towards the interior equilibrium point. A qualitative statement about the stability of the solution requires a linearization by a first-order Taylor polynomial where is the Jacobian matrix evaluated at steady state, given by which has determinant since , is positive by assumption, and since is concave (Inada condition). Since the determinant equals the product of the eigenvalues, the eigenvalues must be real and opposite in sign. Hence by the stable manifold theorem, the equilibrium is a saddle point and there exists a unique stable arm, or “saddle path”, that converges on the equilibrium, indicated by the blue curve in the phase diagram. 
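A minimal numerical sketch of this saddle-point structure, under assumed illustrative choices (Cobb–Douglas technology f(k) = k^α, logarithmic utility so θ = 1, no population growth, and arbitrary parameter values): the interior steady state solves f′(k*) = δ + ρ, and the Jacobian of the linearized system has one positive and one negative eigenvalue.

```python
import numpy as np

# Illustrative parameters (assumed, not from any calibrated model).
alpha, delta, rho, theta = 0.33, 0.05, 0.03, 1.0

f = lambda k: k**alpha
fp = lambda k: alpha * k**(alpha - 1)                  # marginal product of capital
fpp = lambda k: alpha * (alpha - 1) * k**(alpha - 2)

# Interior steady state (modified golden rule): f'(k*) = delta + rho, c* = f(k*) - delta*k*.
k_star = (alpha / (delta + rho)) ** (1 / (1 - alpha))
c_star = f(k_star) - delta * k_star

# Jacobian of (k_dot, c_dot) = (f(k) - delta*k - c, (c/theta)*(f'(k) - delta - rho)) at (k*, c*).
jac = np.array([[fp(k_star) - delta, -1.0],
                [c_star * fpp(k_star) / theta, 0.0]])
eigvals = np.linalg.eigvals(jac)

print("steady state (k*, c*):", k_star, c_star)
print("eigenvalues:", eigvals)   # real, opposite signs: a saddle point
```

With these numbers the determinant of the Jacobian is negative, so the two eigenvalues are real with opposite signs, which is exactly the saddle-path property described above.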
The system is called “saddle path stable” since all unstable trajectories are ruled out by the “no Ponzi scheme” condition: implying that the present value of the capital stock cannot be negative. History Spear and Young re-examine the history of optimal growth during the 1950s and 1960s, focusing in part on the veracity of the claimed simultaneous and independent development of Cass' "Optimum growth in an aggregative model of capital accumulation" (published in 1965 in the Review of Economic Studies), and Tjalling Koopman's "On the concept of optimal economic growth" (published in Study Week on the Econometric Approach to Development Planning, 1965, Rome: Pontifical Academy of Science). Over their lifetimes, neither Cass nor Koopmans ever suggested that their results characterizing optimal growth in the one-sector, continuous-time growth model were anything other than "simultaneous and independent". That the issue of priority ever became a discussion point was due only to the fact that in the published version of Koopmans' work, he cited the chapter from Cass' thesis that later became the RES paper. In his paper, Koopmans states in a footnote that Cass independently obtained conditions similar to what Koopmans finds, and that Cass also considers the limiting case where the discount rate goes to zero in his paper. For his part, Cass notes that "after the original version of this paper was completed, a very similar analysis by Koopmans came to our attention. We draw on his results in discussing the limiting case, where the effective social discount rate goes to zero". In the interview that Cass gave to Macroeconomic Dynamics, he credits Koopmans with pointing him to Frank Ramsey's previous work, claiming to have been embarrassed not to have known of it, but says nothing to dispel the basic claim that his work and Koopmans' were in fact independent. Spear and Young dispute this history, based upon a previously overlooked working paper version of Koopmans' paper, which was the basis for Koopmans' oft-cited presentation at a conference held by the Pontifical Academy of Sciences in October 1963. In this Cowles Discussion paper, there is an error. Koopmans claims in his main result that the Euler equations are both necessary and sufficient to characterize optimal trajectories in the model because any solutions to the Euler equations which do not converge to the optimal steady-state would hit either a zero consumption or zero capital boundary in finite time. This error was apparently presented at the Vatican conference, although at the time of Koopmans' presenting it, no participant commented on the problem. This can be inferred because the discussion after each paper presentation at the Vatican conference is preserved verbatim in the conference volume. In the Vatican volume discussion following the presentation of a paper by Edmond Malinvaud, the issue does arise because of Malinvaud's explicit inclusion of a so-called "transversality condition" (which Malinvaud calls Condition I) in his paper. At the end of the presentation, Koopmans asks Malinvaud whether it is not the case that Condition I simply guarantees that solutions to the Euler equations that do not converge to the optimal steady-state hit a boundary in finite time. Malinvaud replies that this is not the case, and suggests that Koopmans look at the example with log utility functions and Cobb-Douglas production functions. 
At this point, Koopmans obviously recognizes he has a problem, but, based on a confusing appendix to a later version of the paper produced after the Vatican conference, he seems unable to decide how to deal with the issue raised by Malinvaud's Condition I. From the Macroeconomic Dynamics interview with Cass, it is clear that Koopmans met with Cass' thesis advisor, Hirofumi Uzawa, at the winter meetings of the Econometric Society in January 1964, where Uzawa advised him that his student [Cass] had solved this problem already. Uzawa must have then provided Koopmans with the copy of Cass' thesis chapter, which he apparently sent along in the guise of the IMSSS Technical Report that Koopmans cited in the published version of his paper. The word "guise" is appropriate here, because the TR number listed in Koopmans' citation would have put the issue date of the report in the early 1950s, which it clearly was not. In the published version of Koopmans' paper, he imposes a new Condition Alpha in addition to the Euler equations, stating that the only admissible trajectories among those satisfying the Euler equations is the one that converges to the optimal steady-state equilibrium of the model. This result is derived in Cass' paper via the imposition of a transversality condition that Cass deduced from relevant sections of a book by Lev Pontryagin. Spear and Young conjecture that Koopmans took this route because he did not want to appear to be "borrowing" either Malinvaud's or Cass' transversality technology. Based on this and other examination of Malinvaud's contributions in 1950s—specifically his intuition of the importance of the transversality condition—Spear and Young suggest that the neo-classical growth model might better be called the Ramsey–Malinvaud–Cass model than the established Ramsey–Cass–Koopmans honorific. Notes References Further reading External links Economics models Differential equations
Ramsey–Cass–Koopmans model
Mathematics
2,500
30,874,071
https://en.wikipedia.org/wiki/Passivity%20%28engineering%29
Passivity is a property of engineering systems, most commonly encountered in analog electronics and control systems. Typically, analog designers use passivity to refer to incrementally passive components and systems, which are incapable of power gain. In contrast, control systems engineers will use passivity to refer to thermodynamically passive ones, which consume, but do not produce, energy. As such, without context or a qualifier, the term passive is ambiguous. An electronic circuit consisting entirely of passive components is called a passive circuit, and has the same properties as a passive component. If a device is not passive, then it is an active device. Thermodynamic passivity In control systems and circuit network theory, a passive component or circuit is one that consumes energy, but does not produce energy. Under this methodology, voltage and current sources are considered active, while resistors, capacitors, inductors, transistors, tunnel diodes, metamaterials and other dissipative and energy-neutral components are considered passive. Circuit designers will sometimes refer to this class of components as dissipative, or thermodynamically passive. While many books give definitions for passivity, many of these contain subtle errors in how initial conditions are treated and, occasionally, the definitions do not generalize to all types of nonlinear time-varying systems with memory. Below is a correct, formal definition, taken from Wyatt et al., which also explains the problems with many other definitions. Given an n-port R with a state representation S, and initial state x, define available energy EA as EA(x) = sup ∫ −⟨v(t), i(t)⟩ dt, with the integral running from 0 to T and the supremum taken over all T ≥ 0 and all admissible pairs {v(·), i(·)} with the fixed initial state x (e.g., all voltage–current trajectories for a given initial condition of the system). A system is considered passive if EA is finite for all initial states x. Otherwise, the system is considered active. Roughly speaking, the inner product ⟨v(t), i(t)⟩ is the instantaneous power (e.g., the product of voltage and current) delivered to the system, and EA is the upper bound on the integral of the negative of the instantaneous power (i.e., the energy that can be extracted from the system). This upper bound (taken over all T ≥ 0) is the available energy in the system for the particular initial condition x. If, for all possible initial states of the system, the energy available is finite, then the system is called passive. If the available energy is finite, it is known to be non-negative, since the trivial trajectory with zero voltage (or zero current) gives an integral equal to zero, and the available energy is the supremum over all possible trajectories. Moreover, by definition, for any trajectory {v(·), i(·)} and any T ≥ 0, the following inequality holds: EA(x(T)) ≤ EA(x(0)) + ∫ ⟨v(t), i(t)⟩ dt, the integral again running from 0 to T. The existence of a non-negative function EA that satisfies this inequality, known as a "storage function", is equivalent to passivity. For a given system with a known model, it is often easier to construct a storage function satisfying the differential inequality than directly computing the available energy, as taking the supremum on a collection of trajectories might require the use of calculus of variations. Incremental passivity In circuit design, informally, passive components refer to ones that are not capable of power gain; this means they cannot amplify signals. Under this definition, passive components include capacitors, inductors, resistors, diodes, transformers, voltage sources, and current sources. They exclude devices like transistors, vacuum tubes, relays, tunnel diodes, and glow tubes.
To give other terminology, systems for which the small signal model is not passive are sometimes called locally active (e.g. transistors and tunnel diodes). Systems that can generate power about a time-variant unperturbed state are often called parametrically active (e.g. certain types of nonlinear capacitors). Formally, for a memoryless two-terminal element, this means that the current–voltage characteristic is monotonically increasing. For this reason, control systems and circuit network theorists refer to these devices as locally passive, incrementally passive, increasing, monotone increasing, or monotonic. It is not clear how this definition would be formalized to multiport devices with memory – as a practical matter, circuit designers use this term informally, so it may not be necessary to formalize it. Other definitions of passivity This term is used colloquially in a number of other contexts: A passive USB to PS/2 adapter consists of wires, and potentially resistors and similar passive (in both the incremental and thermodynamic sense) components. An active USB to PS/2 adapter consists of logic to translate signals (active in the incremental sense) A passive mixer consists of just resistors (incrementally passive), whereas an active mixer includes components capable of gain (active). In audio work one can also find both (incrementally) passive and active converters between balanced and unbalanced lines. A passive balun converter is generally just a transformer along with, of course, the requisite connectors, while an active one typically consists of a differential drive or an instrumentation amplifier. In some books, devices that exhibit gain or a rectifying function (e.g. diodes) are considered active. Only resistors, capacitors, inductors, transformers, and gyrators are considered passive. United States Patent and Trademark Office is amongst the organisations classing diodes as active devices. This definition is somewhat informal, as diodes can be considered non-linear resistors, and virtually all real-world devices exhibit some non-linearity. Sales/product catalogs will often use different informal definitions of this term, as fitting to a particular hierarchies of products being sold. It is not uncommon, for example, to list all silicon devices under "active devices," even if some of those devices are technically passive. Stability Passivity, in most cases, can be used to demonstrate that passive circuits will be stable under specific criteria. This only works if only one of the above definitions of passivity is used – if components from the two are mixed, the systems may be unstable under any criteria. In addition, passive circuits will not necessarily be stable under all stability criteria. For instance, a resonant series LC circuit will have unbounded voltage output for a bounded voltage input, but will be stable in the sense of Lyapunov, and given bounded energy input will have bounded energy output. Passivity is frequently used in control systems to design stable control systems or to show stability in control systems. This is especially important in the design of large, complex control systems (e.g. stability of airplanes). Passivity is also used in some areas of circuit design, especially filter design. Passive filter A passive filter is a kind of electronic filter that is made only from passive components – in contrast to an active filter, it does not require an external power source (beyond the signal). 
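To make the behaviour of a simple passive filter concrete, the following sketch computes the frequency response of a first-order RC low-pass network. It is an illustration only: the component values and the helper function are assumptions made for the example, not values from this article, and the expression assumes an ideal resistor and capacitor feeding a high-impedance load.

```python
import math

# First-order passive RC low-pass filter (illustrative component values).
R = 1_000.0    # resistance in ohms
C = 100e-9     # capacitance in farads

f_cutoff = 1.0 / (2.0 * math.pi * R * C)   # -3 dB corner frequency

def gain_db(f_hz: float) -> float:
    """Magnitude of H(jw) = 1 / (1 + jwRC), expressed in decibels."""
    w = 2.0 * math.pi * f_hz
    magnitude = 1.0 / math.sqrt(1.0 + (w * R * C) ** 2)
    return 20.0 * math.log10(magnitude)

print(f"cutoff frequency ~ {f_cutoff:.0f} Hz")
for f in (100.0, f_cutoff, 10_000.0, 100_000.0):
    print(f"{f:9.0f} Hz -> {gain_db(f):7.2f} dB")
```

Because the computed gain never exceeds 0 dB at any frequency, the network cannot amplify the signal, which is the incremental sense of passivity discussed earlier.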
Since most filters are linear, in most cases, passive filters are composed of just the four basic linear elements – resistors, capacitors, inductors, and transformers. More complex passive filters may involve nonlinear elements, or more complex linear elements, such as transmission lines. A passive filter has several advantages over an active filter: Guaranteed stability Scale better to large signals (tens of amperes, hundreds of volts), where active devices are often expensive or impractical No power supply needed Often less expensive in discrete designs (unless large coils are required). Active filters tend to be less expensive in integrated designs. For linear filters, potentially greater linearity depending on components required (in many cases, active filters allow the use of more linear components; e.g. active components can permit the use of a polypropylene or NP0 ceramic capacitor, while a passive one might require an electrolytic). They are commonly used in speaker crossover design (due to the moderately large voltages and currents, and the lack of easy access to a power supply), filters in power distribution networks (due to the large voltages and currents), power supply bypassing (due to low cost, and in some cases, power requirements), as well as a variety of discrete and home brew circuits (for low-cost and simplicity). Passive filters are uncommon in monolithic integrated circuit design, where active devices are inexpensive compared to resistors and capacitors, and inductors are prohibitively expensive. Passive filters are still found, however, in hybrid integrated circuits. Indeed, it may be the desire to incorporate a passive filter that leads the designer to use the hybrid format. Energic and non-energic passive circuit elements Passive circuit elements may be divided into energic and non-energic kinds. When current passes through it, an energic passive circuit element converts some of the energy supplied to it into heat. It is dissipative. When current passes through it, a non-energic passive circuit element converts none of the energy supplied to it into heat. It is non-dissipative. Resistors are energic. Ideal capacitors, inductors, transformers, and gyrators are non-energic. Notes References Further reading —Very readable introductory discussion on passivity in control systems. —Good collection of passive stability theorems, but restricted to memoryless one-ports. Readable and formal. —Somewhat less readable than Chua, and more limited in scope and formality of theorems. —Gives a definition of passivity for multiports (in contrast to the above), but the overall discussion of passivity is quite limited. — A pair of memos that have good discussions of passivity. —A complete exposition of dissipative systems, with emphasis on the celebrated KYP Lemma, and on Willems' dissipativity and its use in Control. Engineering concepts
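The storage-function characterization given under "Thermodynamic passivity" above can also be checked numerically for a concrete network. The sketch below is only an illustration under assumed component values and drive: it simulates a series RC one-port and verifies that the energy supplied through the port is never less than the increase in energy stored in the capacitor (the difference being dissipated in the resistor), which is the inequality a storage function must satisfy.

```python
import math

# Series RC one-port driven by a sinusoidal source (illustrative values).
R, C = 100.0, 1e-6          # ohms, farads
dt, T = 1e-6, 0.02          # time step and simulated duration, seconds
f = 250.0                   # drive frequency, Hz

v_c = 0.0                   # capacitor voltage (zero initial state)
supplied = 0.0              # running integral of the port power v(t) * i(t)
worst_margin = float("inf")

t = 0.0
while t < T:
    v = 5.0 * math.sin(2.0 * math.pi * f * t)   # port voltage
    i = (v - v_c) / R                           # current flowing into the port
    supplied += v * i * dt                      # energy delivered so far
    v_c += (i / C) * dt                         # update the capacitor state
    stored = 0.5 * C * v_c ** 2                 # energy held in the capacitor
    worst_margin = min(worst_margin, supplied - stored)
    t += dt

# For a passive network, the supplied energy always covers the stored energy.
print(f"minimum of (supplied - stored) over the run: {worst_margin:.3e} J")
```

Starting from a zero-energy initial state, the printed margin stays non-negative, consistent with the dissipation inequality; an active element, by contrast, could deliver more energy to its terminals than it has absorbed.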
Passivity (engineering)
Engineering
2,087
65,931,909
https://en.wikipedia.org/wiki/Manuka%20oil
Manuka oil is an essential oil obtained from the steam distillation of the leaves and small branches of the tree Leptospermum scoparium (commonly known as mānuka, or New Zealand tea tree). Though it is used in a wide range of cosmetics, cosmeceuticals and naturopathic and topical medications, manuka oil is a relatively new development; it was first identified during the 1970s, has been produced commercially since the 1980s, and has been investigated by global research teams since then. Main constituents The composition of manuka oil is dependent on its chemotype. Manuka oil from the East Cape region of New Zealand, described as a high-triketone chemotype, is commercially important because of its antimicrobial properties (the ability to kill bacteria, viruses, yeasts and fungi). The triketone chemotype of manuka oil from the East Cape contains over 20% triketones (often as high as 33%), comprising flavesone, leptospermone and iso-leptospermone. Manuka that grows in the Marlborough Sounds region of New Zealand also has relatively high levels of triketones, between 15 and 20%. In contrast, manuka that grows in Australia has a different essential oil profile that does not include triketones. More than ten other chemotypes of New Zealand manuka have been described. These oils are rich in terpene compounds, particularly sesquiterpenes, such as myrcene, humulene, caryophyllene, α-pinene, linalool, α-copaene, elemene, selinene, calamenene, cubebene and cadinene, amongst others. Production Until recently, most of New Zealand's manuka oil production came from wild-harvested manuka. Harvesters used brush cutters to gather fresh branches, leaving the bushes viable for regrowth and available for harvest in future years. In recent years, manuka plantations in the East Cape region of New Zealand have allowed the mechanical harvesting of manuka leaf to produce essential oil at a commercial scale. The oil is distilled from the leaves and small branches of the manuka bush using the technique of steam distillation, where the steam is passed through the leaf material. The steam is then condensed and the oil floats on top of the condensed water, from where it is drawn off. Distillation processes vary from the super-heated fast extraction method to the slower ambient-pressure distillation at lower temperatures. Each tonne of foliage produces 3–5 litres of manuka essential oil. References Essential oils
Manuka oil
Chemistry
539
77,003,143
https://en.wikipedia.org/wiki/Lead%28II%29%20phthalocyanine
Lead(II) phthalocyanine (PbPc), also known as phthalocyanine lead, is a salt consisting of a lead ion and Pc2−, the conjugate base of phthalocyanine. It is an organolead dye and a bright purple powder. It is also used as a near-infrared light absorber for photodetectors. It has a unique structural feature which is called, and resembles, a "shuttlecock." References Phthalocyanines Macrocycles Lead(II) compounds Organic pigments
Lead(II) phthalocyanine
Chemistry
119
36,106,398
https://en.wikipedia.org/wiki/UGC%205497
UGC 5497 is a dwarf galaxy, located about 12 million light years away in the constellation Ursa Major. It is a member of the M81 Group. See also Lists of galaxies References Dwarf galaxies M81 Group 05497 Ursa Major
UGC 5497
Astronomy
55
16,763,358
https://en.wikipedia.org/wiki/Eles%2C%20Tunisia
Eles (also transliterated as Ellès and Al Las) is a village in the Siliana Governorate, Tunisia. It is located about 13 km northwest of Maktar in Siliana Governorate. Eles sits over a natural spring at the base of the surrounding hills. The village is notable for the large number of dolmens, thought to date from around 2500 BC, found immediately to the west, south and east of the village, which are typical of the tombs found around Maktar. A study of fifty-three of the dolmens by Belmonte, Esteban and Jiménez González suggests that some of these tombs may be orientated towards Alpha Centauri. In contrast, Hoskin argues that Tunisian dolmen orientations can be explained by the local topography, in that the entrances all face downhill. The local rock strata are geologically interesting as they provide a particularly good record of the Cretaceous–Paleogene boundary, which marks the Cretaceous–Paleogene extinction event. During the Roman Empire and late antiquity there was a civitas (Roman town) called Ululi. Citations General and cited references Ancient Berber cities Archaeoastronomy Archaeological sites in Tunisia Catholic titular sees in Africa Roman towns and cities in Tunisia
Eles, Tunisia
Astronomy
262
1,567,889
https://en.wikipedia.org/wiki/Triparental%20mating
Triparental mating is a form of bacterial conjugation where a conjugative plasmid present in one bacterial strain assists the transfer of a mobilizable plasmid present in a second bacterial strain into a third bacterial strain. Plasmids are introduced into bacteria for such purposes as transformation, cloning, or transposon mutagenesis. Triparental matings can help overcome some of the barriers to efficient plasmid mobilization. For instance, if the conjugative plasmid and the mobilizable plasmid are members of the same incompatibility group, they do not need to stably coexist in the second bacterial strain for the mobilizable plasmid to be transferred. History Triparental mating was identified in yeasts in 1960 and then in Escherichia coli in 1962. Process Requirements A helper strain, carrying a conjugative plasmid (such as the F-plasmid) that encodes the genes required for conjugation and DNA transfer. A donor strain, carrying a mobilizable plasmid that can utilize the transfer functions of the conjugative plasmid. A recipient strain, into which the mobilizable plasmid is to be introduced. Five to seven days are required to determine if the plasmid was successfully introduced into the new bacterial strain and to confirm that there is no carryover of the helper or donor strain. In contrast, electroporation does not require a helper or donor strain. This helps avoid possible contamination with other strains. The introduction of the plasmid can be verified in the recipient strain in two days, making electroporation a faster and more efficient method of transformation. Electroporation, however, does not work with all bacteria and is mostly limited to well-characterized model organisms. See also Bacterial conjugation Plasmid Transposon (applications) Bacteriophage Three-parent baby External links Protocol for P. aeruginosa References Molecular biology
Triparental mating
Chemistry,Biology
426
37,784,287
https://en.wikipedia.org/wiki/Voluntary%20Protection%20Program
Voluntary Protection Programs (VPP) is an Occupational Safety and Health Administration (OSHA) initiative that encourages private industry and federal agencies to prevent workplace injuries and illnesses through hazard prevention and control, worksite analysis, training, and cooperation between management and workers. VPP enlists worker involvement to achieve injury and illness rates that are below national Bureau of Labor Statistics averages for their respective industries. History Even though the original OSH Act of 1970 included language that discussed the concept of VPP, the program did not start until an experimental California program began in 1979. The OSHA program started in 1982 with the first approved facilities. Levels/types of certification VPP offers two levels of certification: Star Star is the highest level. It recognizes employers and employees for developing and implementing continuous improvement workplace safety and health management programs that result in injury/illness rates that are below the national averages for their industries. Merit Merit is for employers and employees that have implemented good safety and health programs but require additional improvements. They must also commit to seeking to advance to Star level within three years. VPP offers three types of certification: Site-based Site-based Star and Merit certifications are offered for permanent work sites and long-term construction sites. They may also be used to certify resident contractors at participating VPP sites or under a corporate program. Mobile workforce This type of certification is for companies whose employees work on location at various sites. Corporate Large organizations that implement organization-wide health and safety management programs that extend to their individual sites are able to seek corporate VPP certification. As of October 31, 2012, 2,370 entities were registered as VPP certified, with the vast majority achieving the Star level. All organizations are re-evaluated every three to five years to remain in the programs. See also Safety and health training References Occupational Safety and Health Administration Training Occupational safety and health Industrial hygiene Safety engineering
Voluntary Protection Program
Engineering
379
659,902
https://en.wikipedia.org/wiki/Acid%E2%80%93base%20titration
An acid–base titration is a method of quantitative analysis for determining the concentration of a Brønsted–Lowry acid or base (titrate) by neutralizing it using a solution of known concentration (titrant). A pH indicator is used to monitor the progress of the acid–base reaction, and a titration curve can be constructed. This differs from other modern modes of titration, such as oxidation–reduction titrations, precipitation titrations, and complexometric titrations. Although these types of titrations are also used to determine unknown amounts of substances, these substances vary from ions to metals. Acid–base titration finds extensive applications in various scientific fields, such as pharmaceuticals, environmental monitoring, and quality control in industries. This method's precision and simplicity make it an important tool in quantitative chemical analysis, contributing significantly to the general understanding of solution chemistry. History The history of acid-base titration dates back to the late 19th century, when advancements in analytical chemistry fostered the development of systematic techniques for quantitative analysis. The origins of titration methods can be linked to the work of chemists such as Karl Friedrich Mohr in the mid-1800s. His contributions laid the groundwork for understanding titrations involving acids and bases. Theoretical progress came with the research of Swedish chemist Svante Arrhenius, who, in the late 19th century, introduced the Arrhenius theory, providing a theoretical framework for acid-base reactions. This theoretical foundation, along with ongoing experimental refinements, contributed to the evolution of acid-base titration as a precise and widely applicable analytical method. Over time, the method has undergone further refinements and adaptations, establishing itself as an essential tool in laboratories across various scientific disciplines. Alkalimetry and acidimetry Alkalimetry and acidimetry are types of volumetric analyses in which the fundamental reaction is a neutralization reaction. They involve the controlled addition of either an acid or a base (titrant) of known concentration to the solution of the unknown concentration (titrate) until the reaction reaches its stoichiometric equivalence point. At this point, the moles of acid and base are equal, resulting in a neutral solution: acid + base → salt + water For example: HCl + NaOH → NaCl + H2O Acidimetry is the specialized analytical use of acid-base titration to determine the concentration of a basic (alkaline) substance using a standard acid. This can be used for weak bases and strong bases. An example of an acidimetric titration involving a strong base is as follows: Ba(OH)2 + 2 H+ → Ba2+ + 2 H2O In this case, the strong base (Ba(OH)2) is neutralized by the acid until all of the base has reacted. This allows the analyst to calculate the concentration of the base from the volume of the standard acid that is used. Alkalimetry uses the same concept of specialized analytical acid–base titration, but determines the concentration of an acidic substance using a standard base. An example of an alkalimetric titration involving a strong acid is as follows: H2SO4 + 2 OH− → SO42− + 2 H2O In this case, the strong acid (H2SO4) is neutralized by the base until all of the acid has reacted. This allows the analyst to calculate the concentration of the acid from the volume of the standard base that is used.
The standard solution (titrant) is stored in the burette, while the solution of unknown concentration (analyte/titrate) is placed in the Erlenmeyer flask below it with an indicator. Indicator choice A suitable pH indicator must be chosen in order to detect the end point of the titration. The colour change or other effect should occur close to the equivalence point of the reaction so that the experimenter can accurately determine when that point is reached. The pH of the equivalence point can be estimated using the following rules: A strong acid will react with a strong base to form a neutral (pH = 7) solution. A strong acid will react with a weak base to form an acidic (pH < 7) solution. A weak acid will react with a strong base to form a basic (pH > 7) solution. These indicators are essential tools in chemistry and biology, aiding in the determination of a solution's acidity or alkalinity through the observation of colour transitions. The table below serves as a reference guide for these indicator choices, offering insights into the pH ranges and colour transformations associated with specific indicators: Phenolphthalein is widely recognized as one of the most commonly used acid-base indicators in chemistry. Its popularity is because of its effectiveness in a broad pH range and its distinct colour transitions. Its sharp and easily detectable colour changes makes phenolphthalein a valuable tool for determining the endpoint of acid-base titrations, as a precise pH change signifies the completion of the reaction. When a weak acid reacts with a weak base, the equivalence point solution will be basic if the base is stronger and acidic if the acid is stronger. If both are of equal strength, then the equivalence pH will be neutral. However, weak acids are not often titrated against weak bases because the colour change shown with the indicator is often quick, and therefore very difficult for the observer to see the change of colour. The point at which the indicator changes colour is called the endpoint. A suitable indicator should be chosen, preferably one that will experience a change in colour (an endpoint) close to the equivalence point of the reaction. In addition to the wide variety of indicator solutions, pH papers, crafted from paper or plastic infused with combinations of these indicators, serve as a practical alternative. The pH of a solution can be estimated by immersing a strip of pH paper into it and matching the observed colour to the reference standards provided on the container. Overshot titration Overshot titrations are a common phenomenon, and refer to a situation where the volume of titrant added during a chemical titration exceeds the amount required to reach the equivalence point. This excess titrant leads to an outcome where the solution becomes slightly more alkaline or over-acidified. Overshooting the equivalence point can occur due to various factors, such as errors in burette readings, imperfect reaction stoichiometry, or issues with endpoint detection. The consequences of overshot titrations can affect the accuracy of the analytical results, particularly in quantitative analysis. Researchers and analysts often employ corrective measures, such as back-titration and using more precise titration techniques, to mitigate the impact of overshooting and obtain reliable and precise measurements. Understanding the causes, consequences, and solutions related to overshot titrations is crucial in achieving accurate and reproducible results in the field of chemistry. 
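The indicator-selection rules above can be made quantitative by estimating the pH at the equivalence point before choosing an indicator. The sketch below is illustrative only: the acid, its dissociation constant, and the concentrations and volumes are assumed values, not data from this article.

```python
import math

# Weak acid (acetic-acid-like, Ka = 1.8e-5) titrated with a strong base.
# All numerical values are illustrative assumptions.
Ka = 1.8e-5                      # acid dissociation constant
Kw = 1.0e-14                     # ion product of water at 25 degrees C
c_acid, v_acid = 0.10, 0.02500   # mol/L and L of the acid sample
c_base = 0.10                    # mol/L of the titrant

# Volume of base needed to reach the equivalence point.
v_base = c_acid * v_acid / c_base

# At equivalence, all of the acid has been converted into its conjugate base,
# which is now diluted into the combined volume.
c_conj = (c_acid * v_acid) / (v_acid + v_base)

# The conjugate base hydrolyses: A- + H2O <-> HA + OH-, with Kb = Kw / Ka.
Kb = Kw / Ka
oh = math.sqrt(Kb * c_conj)      # valid when only a small fraction hydrolyses
pH = 14.0 + math.log10(oh)

print(f"equivalence point at {v_base * 1000:.1f} mL of titrant, pH ~ {pH:.2f}")
```

For this weak acid–strong base case the equivalence point falls near pH 8.7, so an indicator that changes colour in the mildly basic range, such as phenolphthalein, is a suitable choice, in line with the rules listed above.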
Mathematical analysis: titration of weak acid For calculating concentrations, an ICE table can be used. ICE stands for initial, change, and equilibrium. The pH of a weak acid solution being titrated with a strong base solution can be found at different points along the way. These points fall into one of four categories: initial pH pH before the equivalence point pH at the equivalence point pH after the equivalence point 1. The initial pH is approximated for a weak acid solution in water using the equation pH = −log10[H3O+]0, where [H3O+]0 is the initial concentration of the hydronium ion. 2. The pH before the equivalence point depends on the amount of weak acid remaining and the amount of conjugate base formed. The pH can be calculated approximately by the Henderson–Hasselbalch equation, pH = pKa + log10([conjugate base]/[weak acid]), where Ka is the acid dissociation constant and pKa = −log10 Ka. 3. The pH at the equivalence point depends on how much the weak acid is consumed to be converted into its conjugate base. Note that when an acid neutralizes a base, the pH may or may not be neutral (pH = 7). The pH depends on the strengths of the acid and base. In the case of a weak acid and strong base titration, the pH is greater than 7 at the equivalence point. Thus the pH can be calculated using the formula pH = pKw + log10[OH−] (approximately 14 + log10[OH−] at 25 °C), where [OH−] is the concentration of the hydroxide ion. The concentration of the hydroxide ion is calculated from the concentration of the hydronium ion using the relationship Kw = [H3O+][OH−], where Kw is the water dissociation constant; the base dissociation constant Kb of the conjugate base is related to Ka by Kb = Kw/Ka. 4. The pH after the equivalence point depends on the concentration of the conjugate base of the weak acid and the strong base of the titrant. However, the base of the titrant is stronger than the conjugate base of the acid. Therefore, the pH in this region is controlled by the strong base. As such the pH can be found from the excess hydroxide, [OH−] = (CbVb − CaVa)/(Va + Vb), from which the pH follows as in case 3, where Cb is the concentration of the strong base that is added, Vb is the volume of base added, Ca is the concentration of the acid, and Va is the initial volume of the acid. Single formula More accurately, a single formula that describes the titration of a weak acid with a strong base from start to finish can be written in terms of the following variables, where φ = fraction of completion of the titration (φ < 1 is before the equivalence point, φ = 1 is the equivalence point, and φ > 1 is after the equivalence point), Ca and Cb are the concentrations of the acid and base respectively, and Va and Vb are the volumes of the acid and base respectively. Graphical methods Identifying the pH associated with any stage in the titration process is relatively simple for monoprotic acids and bases. A monoprotic acid is an acid that donates one proton. A monoprotic base is a base that accepts one proton. A monoprotic acid or base only has one equivalence point on a titration curve. A diprotic acid donates two protons and a diprotic base accepts two protons. The titration curve for a diprotic solution has two equivalence points. A polyprotic substance has multiple equivalence points. All titration reactions contain small buffer regions that appear horizontal on the graph. These regions contain comparable concentrations of acid and base, preventing sudden changes in pH when additional acid or base is added. Pharmaceutical applications In the pharmaceutical industry, acid-base titration serves as a fundamental analytical technique with diverse applications.
One primary use involves the determination of the concentration of Active Pharmaceutical Ingredients (APIs) in drug formulations, ensuring product quality and compliance with regulatory standards. Acid–base titration is particularly valuable in quantifying acidic or basic functional groups within pharmaceutical compounds. Additionally, the method is employed for the analysis of additives or ingredients, making it easier to adjust and control how a product is made. Quality control laboratories utilize acid-base titration to assess the purity of raw materials and to monitor various stages of drug manufacturing processes. The technique's reliability and simplicity make it an integral tool in pharmaceutical research and development, contributing to the production of safe and effective medications. Environmental monitoring applications Acid–base titration plays a crucial role in environmental monitoring by providing a quantitative analytical method for assessing the acidity or alkalinity of water samples. The measurement of parameters such as pH, total alkalinity, and acidity is essential in evaluating the environmental impact of industrial discharges, agricultural runoff, and other sources of water contamination. Acid–base titration allows for the determination of the buffering capacity of natural water systems, aiding in the assessment of their ability to resist changes in pH. Monitoring pH levels is important for preserving aquatic ecosystems and ensuring compliance with environmental regulations. Acid–base titration is also utilized in the analysis of acid rain effects on soil and water bodies, contributing to the overall understanding and management of environmental quality. The method's precision and reliability make it a valuable tool in safeguarding ecosystems and assessing the impact of human activities on natural water resources. See also Henderson–Hasselbalch equation pH indicator References External links Graphical method to solve acid-base problems, including titrations Graphic and numerical solver for general acid-base problems - Software Program for phone and tablets Titration
Acid–base titration
Chemistry
2,517
37,852
https://en.wikipedia.org/wiki/Fusion%20rocket
A fusion rocket is a theoretical design for a rocket driven by fusion propulsion that could provide efficient and sustained acceleration in space without the need to carry a large fuel supply. The design requires fusion power technology beyond current capabilities, and much larger and more complex rockets. Fusion nuclear pulse propulsion is one approach to using nuclear fusion energy to provide propulsion. Fusion's main advantage is its very high specific impulse, while its main disadvantage is the (likely) large mass of the reactor. A fusion rocket may produce less radiation than a fission rocket, reducing the shielding mass needed. The simplest way of building a fusion rocket is to use hydrogen bombs as proposed in Project Orion, but such a spacecraft would be massive and the Partial Nuclear Test Ban Treaty prohibits the use of such bombs. For that reason bomb-based rockets would likely be limited to operating only in space. An alternate approach uses electrical (e.g. ion) propulsion with electric power generated by fusion instead of direct thrust. Electricity generation vs. direct thrust Spacecraft propulsion methods such as ion thrusters require electric power to run, but are highly efficient. In some cases their thrust is limited by the amount of power that can be generated (for example, a mass driver). An electric generator running on fusion power could drive such a ship. One disadvantage is that conventional electricity production requires a low-temperature energy sink, which is difficult (i.e. heavy) in a spacecraft. Direct conversion of the kinetic energy of fusion products into electricity mitigates this problem. One attractive possibility is to direct the fusion exhaust out the back of the rocket to provide thrust without the intermediate production of electricity. This would be easier with some confinement schemes (e.g. magnetic mirrors) than with others (e.g. tokamaks). It is also more attractive for "advanced fuels" (see aneutronic fusion). Helium-3 propulsion would use the fusion of helium-3 atoms as a power source. Helium-3, an isotope of helium with two protons and one neutron, could be fused with deuterium in a reactor. The resulting energy release could expel propellant out the back of the spacecraft. Helium-3 is proposed as a power source for spacecraft mainly because of its lunar abundance. Scientists estimate that 1 million tons of accessible helium-3 are present on the moon. Only 20% of the power produced by the D-T reaction could be used this way; while the other 80% is released as neutrons which, because they cannot be directed by magnetic fields or solid walls, would be difficult to direct towards thrust, and may in turn require shielding. Helium-3 is produced via beta decay of tritium, which can be produced from deuterium, lithium, or boron. Even if a self-sustaining fusion reaction cannot be produced, it might be possible to use fusion to boost the efficiency of another propulsion system, such as a VASIMR engine. Confinement alternatives Magnetic To sustain a fusion reaction, the plasma must be confined. The most widely studied configuration for terrestrial fusion is the tokamak, a form of magnetic confinement fusion. Currently tokamaks weigh a great deal, so the thrust to weight ratio would seem unacceptable. NASA's Glenn Research Center proposed in 2001 a small aspect ratio spherical torus reactor for its "Discovery II" conceptual vehicle design. 
"Discovery II" could deliver a crewed 172 metric tons payload to Jupiter in 118 days (or 212 days to Saturn) using 861 metric tons of hydrogen propellant, plus 11 metric tons of Helium-3-Deuterium (D-He3) fusion fuel. The hydrogen is heated by the fusion plasma debris to increase thrust, at a cost of reduced exhaust velocity (348–463 km/s) and hence increased propellant mass. Inertial The main alternative to magnetic confinement is inertial confinement fusion (ICF), such as that proposed by Project Daedalus. A small pellet of fusion fuel (with a diameter of a couple of millimeters) would be ignited by an electron beam or a laser. To produce direct thrust, a magnetic field forms the pusher plate. In principle, the Helium-3-Deuterium reaction or an aneutronic fusion reaction could be used to maximize the energy in charged particles and to minimize radiation, but it is highly questionable whether using these reactions is technically feasible. Both the detailed design studies in the 1970s, the Orion drive and Project Daedalus, used inertial confinement. In the 1980s, Lawrence Livermore National Laboratory and NASA studied an ICF-powered "Vehicle for Interplanetary Transport Applications" (VISTA). The conical VISTA spacecraft could deliver a 100-tonne payload to Mars orbit and return to Earth in 130 days, or to Jupiter orbit and back in 403 days. 41 tonnes of deuterium/tritium (D-T) fusion fuel would be required, plus 4,124 tonnes of hydrogen expellant. The exhaust velocity would be 157 km/s. Magnetized target Magnetized target fusion (MTF) is a relatively new approach that combines the best features of the more widely studied magnetic confinement fusion (i.e. good energy confinement) and inertial confinement fusion (i.e. efficient compression heating and wall free containment of the fusing plasma) approaches. Like the magnetic approach, the fusion fuel is confined at low density by magnetic fields while it is heated into a plasma, but like the inertial confinement approach, fusion is initiated by rapidly squeezing the target to dramatically increase fuel density, and thus temperature. MTF uses "plasma guns" (i.e. electromagnetic acceleration techniques) instead of powerful lasers, leading to low cost and low weight compact reactors. The NASA/MSFC Human Outer Planets Exploration (HOPE) group has investigated a crewed MTF propulsion spacecraft capable of delivering a 164-tonne payload to Jupiter's moon Callisto using 106-165 metric tons of propellant (hydrogen plus either D-T or D-He3 fusion fuel) in 249–330 days. This design would thus be considerably smaller and more fuel efficient due to its higher exhaust velocity (700 km/s) than the previously mentioned "Discovery II", "VISTA" concepts. Inertial electrostatic Another popular confinement concept for fusion rockets is inertial electrostatic confinement (IEC), such as in the Farnsworth-Hirsch Fusor or the Polywell variation under development by Energy-Matter Conversion Corporation (EMC2). The University of Illinois has defined a 500-tonne "Fusion Ship II" concept capable of delivering a 100,000 kg crewed payload to Jupiter's moon Europa in 210 days. Fusion Ship II utilizes ion rocket thrusters (343 km/s exhaust velocity) powered by ten D-He3 IEC fusion reactors. The concept would need 300 tonnes of argon propellant for a 1-year round trip to the Jupiter system. Robert Bussard published a series of technical articles discussing its application to spaceflight throughout the 1990s. 
His work was popularised by an article in the Analog Science Fiction and Fact publication, where Tom Ligon described how the fusor would make for a highly effective fusion rocket. Antimatter A still more speculative concept is antimatter-catalyzed nuclear pulse propulsion, which would use antimatter to catalyze a fission and fusion reaction, allowing much smaller fusion explosions to be created. During the 1990s an abortive design effort was conducted at Penn State University under the name AIMStar. The project would require more antimatter than can currently be produced. In addition, some technical hurdles need to be surpassed before it would be feasible. Development projects MSNW Magneto-Inertial Fusion Driven Rocket See also Helium-3 Nuclear propulsion Rocket propulsion technologies (disambiguation) References External links Rocket propulsion Nuclear spacecraft propulsion Rocket Hypothetical technology
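The payload, propellant, and exhaust-velocity figures quoted for concepts such as VISTA can be related through the ideal (Tsiolkovsky) rocket equation, Δv = ve ln(m0/m1). The sketch below is illustrative only: the exhaust velocity and the propellant and payload masses echo the VISTA numbers given earlier, but the structural mass is an assumed placeholder, since no dry mass is quoted here, so the resulting Δv is not a figure from the cited studies.

```python
import math

def delta_v(exhaust_velocity_m_s: float, wet_mass_t: float, dry_mass_t: float) -> float:
    """Ideal Tsiolkovsky rocket equation: dv = v_e * ln(m_wet / m_dry)."""
    return exhaust_velocity_m_s * math.log(wet_mass_t / dry_mass_t)

v_exhaust  = 157_000.0        # m/s, exhaust velocity quoted for the VISTA concept
payload    = 100.0            # tonnes
propellant = 4_124.0 + 41.0   # tonnes of hydrogen expellant plus D-T fusion fuel
structure  = 1_500.0          # tonnes -- placeholder assumption, not from the text

dry = payload + structure
wet = dry + propellant

dv = delta_v(v_exhaust, wet, dry)
print(f"ideal delta-v ~ {dv / 1000:.0f} km/s at a mass ratio of {wet / dry:.2f}")
```

The point of the exercise is that, with exhaust velocities of hundreds of kilometres per second, even a modest mass ratio yields a Δv far beyond the reach of chemical propulsion, which is why fusion concepts can quote interplanetary trip times measured in months.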
Fusion rocket
Physics,Chemistry
1,605
31,722,896
https://en.wikipedia.org/wiki/Ciluprevir
Ciluprevir was a drug used experimentally in the treatment of hepatitis C. It was manufactured by Boehringer Ingelheim and developed under the research code BILN 2061. It was the first-in-class NS3/4A protease inhibitor to enter clinical development and to be tested in humans. Ciluprevir is a potent competitive reversible inhibitor of the NS3/4A protease from HCV genotype 1a (Ki = 0.3 nM) and 1b (Ki = 0.66 nM). It shows good selectivity for the NS3 protease against representative serine and cysteine proteases such as human leukocyte elastase and cathepsin B (IC50 > 30 μM). Its development was halted in phase Ib clinical trials because of toxicity in animals. However, the ciluprevir scaffold was exploited to design new macrocyclic inhibitors such as simeprevir (TMC-435) and danoprevir. References Abandoned drugs NS3/4A protease inhibitors
Ciluprevir
Chemistry
221
14,439,377
https://en.wikipedia.org/wiki/OXGR1
OXGR1, i.e., 2-oxoglutarate receptor 1 (also known as GPR99, cysteinyl leukotriene receptor E, i.e., CysLTE, and cysteinyl leukotriene receptor 3, i.e., CysLT3) is a G protein-coupled receptor located on the surface membranes of certain cells. It functions by binding one of its ligands and thereby becoming active in triggering pre-programmed responses in its parent cells. OXGR1 has been shown to be activated by α-ketoglutarate, itaconate, and three cysteinyl-containing leukotrienes (abbreviated as CysLTs), leukotriene E4 (i.e., LTE4), LTC4, and LTD4. α-Ketoglutarate and itaconate are the dianionic forms of α-ketoglutaric acid and itaconic acid, respectively. α-Ketoglutaric and itaconic acids are short-chain dicarboxylic acids that have two carboxyl groups (notated as −COOH), both of which are bound to hydrogen. However, at the basic pH levels (i.e., pH > 7) in virtually all animal tissues, α-ketoglutaric acid and itaconic acid exist almost exclusively as α-ketoglutarate and itaconate, i.e., with their carboxy residues being negatively charged (notated as −COO−), because they are not bound to a hydrogen ion (see Conjugate acid-base theory). It is α-ketoglutarate and itaconate, not α-ketoglutaric or itaconic acids, which activate OXGR1. History In 2001, a human gene projected to code for a G protein-coupled receptor (i.e., a receptor that stimulates cells by activating G proteins) was identified. Its protein product was classified as an orphan receptor, i.e., a receptor whose activating ligand and function are unknown. The projected amino acid sequence of the protein encoded by this gene bore similarities to the purinergic receptor, P2Y1, and therefore might, like P2Y1, be a receptor for purines. This study named the new receptor and its gene GPR80 and GPR80, respectively. Shortly thereafter, a second study found this same gene, indicated that it coded for a G protein-coupled receptor with an amino acid sequence similar to two purinergic receptors, P2Y1 and GPR91, and determined that a large series of purine nucleotides, other nucleotides, and derivatives of these compounds did not activate this receptor. The study named this receptor GPR99. A third study published in 2004 reported that an orphan G protein-coupled receptor with an amino acid sequence similar to the P2Y family of nucleotide receptors was activated by two purines, adenosine and adenosine monophosphate. The study nominated this receptor to be a purinergic receptor and named it the P2Y15 receptor. However, a review in 2004 of these three studies by members of the International Union of Pharmacology Subcommittee for P2Y Receptor Nomenclature and Classification decided that GPR80/GPR99 is not a receptor for adenosine, adenosine monophosphate, or any other nucleotide. A fourth study, also published in 2004, found that GPR80/GPR99-bearing cells responded to α-ketoglutarate. In 2013, IUPHAR accepted this report and the names OXGR1 and OXGR1 for the α-ketoglutarate-responsive receptor and its gene, respectively. In 2013, a fifth study found that LTE4, LTC4, and LTD4 activated OXGR1. Finally, a 2023 study provided evidence that itaconate also activated OXGR1. OXGR1 gene The human OXGR1 gene is located on chromosome 13 at position 13q32.2; that is, it resides at position 32.2 (i.e., region 3, band 2, sub-band 2) on the "q" arm (i.e., long arm) of chromosome 13. OXGR1 codes for a G protein-coupled receptor that is primarily linked to and activates heterotrimeric G proteins containing the Gq alpha subunit.
When bound to one of its ligands, OXGR1 activates Gq alpha subunit-regulated cellular pathways (see Functions of the Gq alpha pathways) that stimulate the cellular responses that these pathways are programmed to elicit. OXGR1 activating and inhibiting ligands Activating ligands OXGR1 is the receptor for α-ketoglutarate, LTE4, LTC4, LTD4, and itaconate. These ligands have the following relative potencies in stimulating responses in cultures of cells expressing human OXGR1: LTE4 >> LTC4 = LTD4 > α-ketoglutarate = itaconate LTE4 is able to stimulate responses in at least some of its target cells at concentrations as low as a few picomoles/liter whereas LTC4, LTD4, α-ketoglutarate, and itaconate require far higher levels to do so. The relative potencies that LTC4, LTD4, and LTE4 have in activating their target receptors, i.e., cysteinyl leukotriene receptor 1 (CysLTR1), cysteinyl leukotriene receptor 2 (CysLTR2), and OXGR1, are: CysLTR1: LTD4 > LTC4 >> LTE4 CysLTR2: LTC4 = LTD4 >> LTE4 OXGR1: LTE4 > LTC4 > LTD4 These relationships suggest that CysLTR1 and CysLTR2 are physiological receptors for LTD4 and LTC4 but, owing to LTE4's relative weakness in stimulating these two receptors, perhaps not (or only to a far lesser extent) for LTE4. Indeed, the LTE4 concentrations needed to activate CysLTR1 and CysLTR2 may be higher than those that normally occur in vivo (see Functions of OXGR1 in mediating the actions of LTE4, LTD4, and LTC4). These potency relationships suggest that LTE4's actions are mediated primarily by OXGR1. The following findings support this suggestion. First, pretreatment of guinea pig trachea and human bronchial smooth muscle with LTE4 but not with LTC4 or LTD4 enhanced their smooth muscle contraction responses to histamine. This suggests LTE4's target receptor differs from the receptors targeted by LTC4 and LTD4. Second, LTE4 was as potent as LTC4 and LTD4 in eliciting vascular leakage when injected into the skin of guinea pigs and humans; the inhalation of LTE4 by asthmatic individuals caused the accumulation of eosinophils and basophils in their bronchial mucosa whereas the inhalation of LTD4 did not have this effect; and mice engineered to lack CysLTR1 and CysLTR2 receptors exhibited edema responses to the intradermal injection of LTC4, LTD4, and LTE4, but LTE4 was 64-fold more potent in triggering this response in these mice than in wild-type mice. Since LTE4 should have been far less active than LTC4 or LTD4 in triggering vascular leakage, the recruitment of the cited cells into the lung, and the vascular edema responses in mice lacking CysLTR1 and CysLTR2 receptors, these findings imply that the latter two receptors are not the primary receptors mediating LTE4's actions. And third, mice engineered to lack all three CysLTR1, CysLTR2, and OXGR1 receptors did not exhibit dermal edema responses to the injection of LTC4, LTD4, or LTE4, thereby indicating that at least one of these receptors was responsible for each of their actions. Overall, these findings suggest that LTE4 commonly acts through a different receptor than LTC4 and LTD4 and that this receptor is OXGR1. Indeed, studies have defined OXGR1 as the high-affinity receptor for LTE4.
Nonetheless, several studies have examined responses to LTE4 in cultures of certain types of inflammatory cells, e.g., the human LAD2 (but not LUVA) mast cell lines, T helper cell lymphocytes that have differentiated into Th2 cells, and mouse ILC2 lymphocytes (also termed type 2 innate lymphoid cells); the levels of LTE4 used in some of these studies may be higher than those that develop in animals or humans. In all events, dysfunctions caused by deleting the OXGR1 gene in cells, tissues or animals and dysfunctions in humans that are associated with a lack of a viable OXGR1 gene implicate the lack of OXGR1 protein in the development of these dysfunctions. Inhibiting ligand OXGR1 is inhibited by montelukast, a well-known and clinically useful receptor antagonist, i.e., inhibitor, of CysLTR1 but not CysLTR2 activation. (Inhibitors of CysLTR2 have not been identified.) In consequence, montelukast blocks the binding and thereby the actions of LTE4, LTC4, and LTD4 that are mediated by OXGR1. It is presumed to act similarly to block the actions of α-ketoglutarate and itaconate on OXGR1. It is not yet known if other CysLTR1 inhibitors can mimic montelukast in blocking OXGR1's responses to α-ketoglutarate and itaconate. Montelukast is used to treat various disorders including asthma, exercise-induced bronchoconstriction, allergic rhinitis, primary dysmenorrhea (i.e. menstrual cramps not associated with known causes, see causes of dysmenorrhea), and urticaria (see Functions of CysLTR1). While it is likely that its inhibition of CysLTR1 accounts for its effects in these diseases, the ability of these leukotrienes, particularly LTE4, to stimulate OXGR1 raises the possibility that montelukast's effects on these conditions may be due at least in part to its ability to block OXGR1. Expression Based on their content of the OXGR1 protein or the mRNA that directs its synthesis, OXGR1 is expressed in human: a) kidney, placenta, and fetal brain; b) cells that promote allergic and other hypersensitivity reactions, i.e., eosinophils and mast cells; c) tissues involved in allergic and other hypersensitivity reactions such as the lung, trachea, salivary glands, and nasal mucosa; and d) fibroblasts, i.e., cells that synthesize the extracellular matrix and collagen (when pathologically activated, these cells produce tissue fibrosis). In mice, Oxgr1 mRNA is highly expressed in kidneys, testes, smooth muscle tissues, nasal epithelial cells, and lung epithelial cells. Functions Associated with OXGR1 gene defects or deficiencies The following studies have defined OXGR1 functions based on the presence of disorders in mice or humans that do not have a viable OXGR1 protein. It has not been determined which of OXGR1's ligands, if any, are responsible for stimulating OXGR1 to prevent these disorders. Otitis media Mice lacking OXGR1 protein due to the knockout of their OXGR1 gene developed (82% penetrance) otitis media (i.e., inflammation in their middle ears), mucus effusions in their middle ears, and hearing losses, all of which had many characteristics of human otitis media. The study did not find evidence that these mice had a middle ear bacterial infection. (Infection with Streptococcus pneumoniae, Moraxella catarrhalis, or other bacteria is one of the most common causes of otitis media.) While the underlying mechanism for the development of this otitis has not been well-defined, the study suggests that OXGR1 functions to prevent middle ear inflammation and that Oxgr1 gene knockout mice may be a good model to study and relate to human ear pathophysiology.
Goblet cells Mice lacking OXGR1 protein due to the knockout of their OXGR1 gene had significantly fewer mucin-containing goblet cells in their nasal mucosa than control mice. Cysltr1 gene knockout mice and Cysltr2 gene knockout mice had normal numbers of these nasal goblet cells. This finding implicates OXGR1 in functioning to maintain higher numbers of airway goblet cells. Kidney stones and nephrocalcinosis Majmunda et al. identified 6 individuals from different families with members who had histories of developing calcium-containing kidney stones (also termed nephrolithiasis) and/or nephrocalcinosis (i.e., the deposition of calcium-containing material in multiple sites throughout the kidney). Each of these 6 individuals had dominant variants in their OXGR1 gene. These variant genes appeared (based on their OXGR1 gene's DNA structure as defined by exome sequencing) to be unable to form an active OXGR1 protein. The study proposed that the OXGR1 gene is a candidate for functioning to suppress the development of calcium-containing nephrolithiasis and nephrocalcinosis in humans. Associated with α-ketoglutarate-regulated functions Studies in rodents have found that the ability of α-ketoglutarate to regulate various functions is dependent on its activation of OXGR1 (see OXGR1 receptor-dependent bioactions of α-ketoglutarate). These functions include: promoting normal kidney functions such as the absorption of key urinary ions and maintenance of acid–base balance; regulating the development of glucose tolerance as defined by glucose tolerance tests; suppressing the development of diet-induced obesity; and suppressing the muscle atrophy response to excessive exercise. Associated with LTE4-induced functions A study showed that LTE4, LTC4, and LTD4 produce similar levels of vascular leakage and localized tissue swelling when injected into the skin of guinea pigs or humans. Studies that examined the effects of various doses of these LTs injected into the earlobes of mice found that, in comparison to control mice, OXGR1 gene knockout mice showed virtually no response to injection of a low dose of LTE4, a greatly reduced response to injection of an intermediate dose of LTE4, and a somewhat delayed but otherwise similar response to a high dose of LTE4 (these doses were 0.008, 0.0625, and 0.5 nmol, respectively). The study concluded that lower levels of LTE4 act primarily through OXGR1 to cause vascular permeability and, since it is the major cysteinyl leukotriene that accumulates in inflamed tissues, suggested that OXGR1 may be a therapeutic target for treating inflammatory disorders. Another study found that the application of an extract of Alternaria alternata (a genus of fungi that infects plants and causes allergic diseases, infections, and toxic reactions in animals and humans) into the noses of mice caused their nasal epithelial cells to release mucin and their nasal submucosa to swell. (The nasal as well as lung epithelial cells of these mice expressed OXGR1.) OXGR1 gene knockout mice did not show these responses to the fungal toxin. The study also showed that a) Cysltr1 and Cysltr2 double gene knockout mice had a full mucin release response to the toxin and b) Cysltr2 gene knockout mice had full submucosal swelling responses to the toxin but Cysltr1 gene knockout mice did not show submucosal swelling responses to the toxin.
The study concluded that LTE4's activation of OXGR1 controls key airway epithelial cell functions in mice and suggested that the inhibition of LTE4-induced OXGR1 activation may prove useful for treating asthma and other allergic and inflammatory disorders. A subsequent study examined the effects of LTE4–OXGR1 signaling on a certain type of tuft cell. When located in the intestinal mucosa, these cells are termed tuft cells, but when located in the nasal respiratory mucosa they are termed solitary chemosensory cells, and when located in the trachea they are termed brush cells. Control mice that inhaled the mold Alternaria alternata, the American house dust mite Dermatophagoides farinae, or LTE4 developed increases in the number of their tracheal brush cells, release of the inflammation-promoting cytokine interleukin 25, and lung inflammation, whereas OXGR1 gene knockout mice did not show these responses. These findings indicate that the activation of OXGR1 regulates airway brush cell numbers, interleukin 25 release, and inflammation. Associated with itaconate-regulated functions A study reported in 2023 was the first and, to date (2024), the only study indicating that itaconate's actions are mediated by activating OXGR1. This study showed that itaconate stimulated the nasal secretion of mucus when applied to the noses of mice, reduced the number of Pseudomonas aeruginosa bacteria in the lung tissue and bronchoalveolar lavage fluid (i.e., airway washing) of mice injected intranasally with these bacteria, and stimulated cultured mouse respiratory epithelial cells to raise their cytosolic Ca2+ levels (an indicator of cell activation). Itaconate was unable to induce these responses in OXGR1 gene knockout mice or in the respiratory epithelial cells isolated from the OXGR1 gene knockout mice. The study concluded that the activation of OXGR1 by itaconate contributes to regulating the pulmonary innate immune response to Pseudomonas aeruginosa and might also do so in other bacterial infections. References Further reading G protein-coupled receptors
OXGR1
Chemistry
3,872
5,600,755
https://en.wikipedia.org/wiki/Quotition%20and%20partition
In arithmetic, quotition and partition are two ways of viewing fractions and division. In quotitive division one asks "how many parts are there?" while in partitive division one asks "what is the size of each part?" In general, a quotient c = a ÷ b, where a, b, and c are integers or rational numbers, can be conceived of in either of two ways: Quotition: "How many parts of size b must be added to get a sum of a?" Partition: "What is the size of each of b equal parts whose sum is a?" For example, the quotient a ÷ b can be conceived of as representing either the decomposition of a into a repeated sum of parts of size b (quotition) or the decomposition of a into a sum of b equal parts (partition). In the rational number system used in elementary mathematics, the numerical answer is always the same no matter which way you put it, as a consequence of the commutativity of multiplication. Quotition Thought of quotitively, a division problem can be solved by repeatedly subtracting groups of the size of the divisor. For instance, suppose each egg carton fits 12 eggs, and the problem is to find how many cartons are needed to fit 36 eggs in total. Groups of 12 eggs at a time can be separated from the main pile until none are left, giving 3 groups: 36 − 12 − 12 − 12 = 0. If the last group is a remainder smaller than the divisor, it can be thought of as forming an additional smaller group. For example, if 45 eggs are to be put into 12-egg cartons, then after the first 3 cartons have been filled there are 9 eggs remaining, which only partially fill the 4th carton. The answer to the question "How many cartons are needed to fit 45 eggs?" is 4 cartons, since 45 ÷ 12 = 3.75, which rounds up to 4. Quotition is the concept of division most used in measurement. For example, measuring the length of a table using a measuring tape involves comparing the table to the markings on the tape. This is conceptually equivalent to dividing the length of the table by a unit of length, the distance between markings. Partition Thought of partitively, a division problem might be solved by sorting the initial quantity into a specific number of groups by adding items to each group in turn. For instance, a deck of 52 playing cards could be divided among 4 players by dealing the cards into 4 piles one at a time, eventually yielding piles of 13 cards each. If there is a remainder in solving a partition problem, the parts will end up with unequal sizes. For example, if 52 cards are dealt out to 5 players, then 3 of the players will receive 10 cards each, and 2 of the players will receive 11 cards each, since 52 = 5 × 10 + 2. See also List of partition topics References External links A University of Melbourne web page shows what to do when the fraction is a ratio of integers or rationals. Operations on numbers Division (mathematics)
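The two interpretations can also be written out as procedures, which makes the contrast explicit. The sketch below is illustrative only; the function names are invented for the example, and the test values reuse the egg-carton and card-dealing scenarios described above.

```python
def quotition(total: int, group_size: int) -> tuple[int, int]:
    """How many groups of group_size fit into total? Returns (groups, remainder)."""
    groups = 0
    while total >= group_size:   # repeatedly subtract groups the size of the divisor
        total -= group_size
        groups += 1
    return groups, total

def partition(total: int, num_groups: int) -> list[int]:
    """Deal total items one at a time into num_groups piles; returns the pile sizes."""
    piles = [0] * num_groups
    for item in range(total):
        piles[item % num_groups] += 1
    return piles

print(quotition(45, 12))   # (3, 9): three full 12-egg cartons with 9 eggs left over
print(partition(52, 5))    # [11, 11, 10, 10, 10]: two players get 11 cards, three get 10
```

Both procedures agree with ordinary division with remainder; they differ only in whether the fixed quantity is the size of each part or the number of parts.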
Quotition and partition
Mathematics
576
269,036
https://en.wikipedia.org/wiki/Cent%20%28music%29
The cent is a logarithmic unit of measure used for musical intervals. Twelve-tone equal temperament divides the octave into 12 semitones of 100 cents each. Typically, cents are used to express small intervals, to check intonation, or to compare the sizes of comparable intervals in different tuning systems. For humans, a single cent is too small to be perceived between successive notes. Cents, as described by Alexander John Ellis, follow a tradition of measuring intervals by logarithms that began with Juan Caramuel y Lobkowitz in the 17th century. Ellis chose to base his measures on the hundredth part of a semitone, 2^(1/1200), at Robert Holford Macdowell Bosanquet's suggestion. Making extensive measurements of musical instruments from around the world, Ellis used cents to report and compare the scales employed, and further described and utilized the system in his 1875 edition of Hermann von Helmholtz's On the Sensations of Tone. It has become the standard method of representing and comparing musical pitches and intervals. History Alexander John Ellis' paper On the Musical Scales of Various Nations, published by the Journal of the Society of Arts in 1885, officially introduced the cent system to be used in exploring, by comparing and contrasting, musical scales of various nations. The cent system had already been defined in his History of Musical Pitch, where Ellis writes: "If we supposed that, between each pair of adjacent notes, forming an equal semitone [...], 99 other notes were interposed, making exactly equal intervals with each other, we should divide the octave into 1200 equal hundredths of an equal semitone, or cents as they may be briefly called." Ellis defined the pitch of a musical note in his 1880 work History of Musical Pitch to be "the number of double or complete vibrations, backwards and forwards, made in each second by a particle of air while the note is heard". He later defined musical pitch to be "the pitch, or V [for "double vibrations"] of any named musical note which determines the pitch of all the other notes in a particular system of tunings." He notes that these notes, when sounded in succession, form the scale of the instrument, and an interval between any two notes is measured by "the ratio of the smaller pitch number to the larger, or by the fraction formed by dividing the larger by the smaller". Absolute and relative pitches were also defined based on these ratios. Ellis noted that "the object of the tuner is to make the interval [...] between any two notes answering to any two adjacent finger keys throughout the instrument precisely the same. The result is called equal temperament or tuning, and is the system at present used throughout Europe." He further gives calculations to approximate the measure of a ratio in cents, adding that "it is, as a general rule, unnecessary to go beyond the nearest whole number of cents." Ellis presents applications of the cent system in this paper on musical scales of various nations, which include: (I. Heptatonic scales) Ancient Greece and Modern Europe, Persia, Arabia, Syria and Scottish Highlands, India, Singapore, Burmah and Siam; (II. Pentatonic scales) South Pacific, Western Africa, Java, China and Japan. And he reaches the conclusion that "the Musical Scale is not one, not 'natural,' nor even founded necessarily on the laws of the constitution of musical sound, so beautifully worked out by Helmholtz, but very diverse, very artificial, and very capricious". Use A cent is a unit of measure for the ratio between two frequencies.
An equally tempered semitone (the interval between two adjacent piano keys) spans 100 cents by definition. An octave—two notes that have a frequency ratio of 2:1—spans twelve semitones and therefore 1200 cents. The ratio of frequencies one cent apart is precisely equal to 2^(1/1200), the 1200th root of 2, which is approximately 1.0005778. Thus, raising a frequency by one cent corresponds to multiplying the original frequency by this constant value. Raising a frequency by 1200 cents doubles the frequency, resulting in its octave. If one knows the frequencies a and b of two notes, the number of cents n measuring the interval from a to b is: n = 1200 × log2(b/a). Likewise, if one knows a and the number of cents n in the interval from a to b, then b equals: b = a × 2^(n/1200). Comparison of major third in just and equal temperament The major third in just intonation has a frequency ratio 5:4 or ~386 cents, but in equal temperament is 400 cents. This 14 cent difference is about a seventh of a half step and large enough to be audible. Piecewise linear approximation As x increases from 0 to 1/12, the function 2^x increases almost linearly from 1.00000 to 1.05946, allowing for a piecewise linear approximation. Thus, although cents represent a logarithmic scale, small intervals (under 100 cents) can be loosely approximated with the linear relation 1 + 0.0005946c instead of the true exponential relation 2^(c/1200). The rounded error is zero when c is 0 or 100, and is only about 0.72 cents high at c = 50 (whose correct value of 2^(1/24) ≅ 1.02930 is approximated by 1 + 0.0005946 × 50 ≅ 1.02973). This error is well below anything humanly audible, making this piecewise linear approximation adequate for most practical purposes. Human perception It is difficult to establish how many cents are perceptible to humans; this precision varies greatly from person to person. One author stated that humans can distinguish a difference in pitch of about 5–6 cents. The threshold of what is perceptible, technically known as the just noticeable difference (JND), also varies as a function of the frequency, the amplitude and the timbre. In one study, changes in tone quality reduced student musicians' ability to recognize, as out-of-tune, pitches that deviated from their appropriate values by ±12 cents. It has also been established that increased tonal context enables listeners to judge pitch more accurately. "While intervals of less than a few cents are imperceptible to the human ear in a melodic context, in harmony very small changes can cause large changes in beats and roughness of chords." When listening to pitches with vibrato, there is evidence that humans perceive the mean frequency as the center of the pitch. One study of modern performances of Schubert's Ave Maria found that vibrato span typically ranged between ±34 cents and ±123 cents with a mean of ±71 cents and noted higher variation in Verdi's opera arias. Normal adults are able to recognize pitch differences as small as 25 cents very reliably. Adults with amusia, however, have trouble recognizing differences of less than 100 cents and sometimes have trouble with these or larger intervals. Other representations of intervals by logarithms Octave The representation of musical intervals by logarithms is almost as old as logarithms themselves. Logarithms had been invented by Lord Napier in 1614. As early as 1647, Juan Caramuel y Lobkowitz (1606-1682) in a letter to Athanasius Kircher described the usage of base-2 logarithms in music. In this base, the octave is represented by 1, the semitone by 1/12, etc.
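The frequency-to-cents conversions given above under Use translate directly into code. The following Python sketch is added here for illustration and is not taken from the original article; it reproduces the octave, the equal-tempered semitone, and the just-versus-equal-tempered major third comparison.

import math

def ratio_to_cents(ratio):
    """Number of cents in the interval with the given frequency ratio:
    n = 1200 * log2(ratio)."""
    return 1200 * math.log2(ratio)

def cents_to_ratio(cents):
    """Frequency ratio of an interval of the given size in cents:
    ratio = 2 ** (cents / 1200)."""
    return 2 ** (cents / 1200)

def interval_in_cents(f_low, f_high):
    """Cents from frequency f_low up to frequency f_high."""
    return ratio_to_cents(f_high / f_low)

if __name__ == "__main__":
    print(ratio_to_cents(2))                      # octave: 1200.0
    print(round(ratio_to_cents(2 ** (1 / 12)), 6))  # equal-tempered semitone: 100.0
    print(round(ratio_to_cents(5 / 4), 1))        # just major third: ~386.3 cents
    print(round(cents_to_ratio(1), 7))            # one cent: ~1.0005778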
Heptamerides Joseph Sauveur, in his Principes d'acoustique et de musique of 1701, proposed the usage of base-10 logarithms, probably because tables were available. He made use of logarithms computed with three decimals. The base-10 logarithm of 2 is equal to approximately 0.301, which Sauveur multiplies by 1000 to obtain 301 units in the octave. In order to work with more manageable units, he suggests taking 7/301 to obtain units of 1/43 octave. The octave therefore is divided into 43 parts, named "merides", themselves divided into 7 parts, the "heptamerides". Sauveur also imagined the possibility of further dividing each heptameride in 10, but does not really make use of such microscopic units. Savart Félix Savart (1791-1841) took over Sauveur's system, without limiting the number of decimals of the logarithm of 2, so that the value of his unit varies according to sources. With five decimals, the base-10 logarithm of 2 is 0.30103, giving 301.03 savarts in the octave. This value often is rounded to 1/301 or to 1/300 octave. Prony Early in the 19th century, Gaspard de Prony proposed a logarithmic unit of base 2^(1/12), where the unit corresponds to a semitone in equal temperament. Alexander John Ellis in 1880 describes a large number of pitch standards that he noted or calculated, indicating in pronys with two decimals, i.e. with a precision to the 1/100 of a semitone, the interval that separated them from a theoretical pitch of 370 Hz, taken as point of reference. Centitones A centitone (also Iring) is a musical interval (2^(1/600), approximately 1.00116) equal to two cents (2^(2/1200)), proposed as a unit of measurement by Widogast Iring in Die reine Stimmung in der Musik (1898) as 600 steps per octave and later by Joseph Yasser in A Theory of Evolving Tonality (1932) as 100 steps per equal tempered whole tone. Iring noticed that the Grad/Werckmeister (1.96 cents, 12 per Pythagorean comma) and the schisma (1.95 cents) are nearly the same (≈ 614 steps per octave) and both may be approximated by 600 steps per octave (2 cents). Yasser promoted the decitone, centitone, and millitone (10, 100, and 1000 steps per whole tone = 60, 600, and 6000 steps per octave = 20, 2, and 0.2 cents). For example: Equal tempered perfect fifth = 700 cents = 175.6 savarts = 583.3 millioctaves = 350 centitones. Sound files The following audio files play various intervals. In each case the first note played is middle C. The next note is sharper than C by the assigned value in cents. Finally, the two notes are played simultaneously. Note that the JND for pitch difference is 5–6 cents. Played separately, the notes may not show an audible difference, but when they are played together, beating may be heard (for example if middle C and a note 10 cents higher are played). At any particular instant, the two waveforms reinforce or cancel each other more or less, depending on their instantaneous phase relationship. A piano tuner may verify tuning accuracy by timing the beats when two strings are sounded at once. Beat frequencies of the example files: 0.16 Hz, 1.53 Hz, and 3.81 Hz. See also Degree Gradian Microtonality Radian References Footnotes Citations Sources External links Cent conversion: Whole number ratio to cent [rounded to whole number] Cent conversion: Online utility with several functions Equal temperaments Intervals (music) 100 (number) Units of level
Cent (music)
Physics,Mathematics
2,300
680,030
https://en.wikipedia.org/wiki/Unilineality
Unilineality is a system of determining descent groups in which one belongs to one's father's or mother's line, whereby one's descent is traced either exclusively through male ancestors (patriline), or exclusively through female ancestors (matriline). Both patrilineality and matrilineality are types of unilineal descent. The main types of the unilineal descent groups are lineages and clans. A lineage is a unilineal descent group that can demonstrate their common descent from a known apical ancestor. It is also called the simple unilineal descent. Unilineal descent organization and deep Christianization Recent research on the unilineal descent organization has studied variables that are usually regarded as the main causes of the decline of unilineal descent organization – viz. statehood, class stratification and commercialization – along with one not previously considered: deep Christianization, defined here as having been Christianized over 500 years before ethnographic study. The research demonstrated that the traditionally accepted causes of the decline are less significant than deep Christianization, while the presence of unilineal descent groups correlates negatively with communal democracy and is especially strong for complex traditional societies. Its conclusion is that as , Christianization might have contributed to the development of modern democracy by helping to replace unilineal descent organization in Europe. References See also Ambilineality Family Cultural anthropology Kinship and descent
Unilineality
Biology
298
2,945,880
https://en.wikipedia.org/wiki/Hobble%20%28device%29
A hobble (also, and perhaps earlier, hopple), or spancel, is a device which prevents or limits the locomotion of an animal, by tethering one or more legs. Although hobbles are most commonly used on horses, they are also sometimes used on other animals. On dogs, they are used especially during force-fetch training to limit the movement of a dog's front paws when training it to stay still. They are made from leather, rope, or synthetic materials such as nylon or neoprene. There are various designs for breeding, casting (causing a horse or other large animal to lie down with its legs underneath it), and mounting horses. Types Western horse hobbles "Western"-style horse hobbles are tied around the pasterns or cannon bones of the horse's front legs. They comprise three basic types: The vaquero or braided hobble, which is often of a quite fancy plaiting and lighter than other varieties, and is therefore only suitable for short term use. The figure eight hobble or Queensland Utility Strap, a common style of hobble that stockmen wear as a belt and can be used neck strap, lunch-time hobble, or tie for a “micky”. This hobble is made with three pieces of leather and two rings, plus a buckle fastening. The twist hobble, made of soft leather or rope, with a twist between the horse's legs. The above patterns are unsuitable for training, as they can tighten around a leg and cause injury. Western hobbles are normally used to secure a horse when no tie device, tree, or other object is available for that purpose; e.g., when, if traveling across open lands, a rider has to dismount for various reasons. Hobbles also allow a horse to graze and move short distances slowly, yet prevent the horse from running off too far. This is handy at night if the rider has to get some sleep; using a hobble ensures that, in the morning, they can find their horse not too far away. Hobble training a horse is a form of sacking out and desensitizing a horse to accept restraints on its legs. This helps a horse accept pressure on its legs in case it ever becomes entangled in barbed wire or fencing. A hobble-trained horse is less likely to pull, struggle, and cut its legs in a panic, since it has been taught to give to pressure in its legs. Other hobbles Breeding or service hobbles usually fasten around a mare's hocks, pass between her front legs to a neck strap. They are used to protect a stallion from kicks. Casting hobbles are the same as the above, but with another rope or strap attached to the other hind foot. When these straps or ropes are pulled up together, the horse will fall. Cattle hobbles are a strong strap with a metal keeper in the middle and a buckle at the end. They are used on the hind legs for a short period when capturing feral cattle. Drovers’ or grazing hobbles have a buckle on a wide double redhide or chrome leather strap and a swivel and 5 ring chain connecting them. They are placed around the pasterns. Hind leg pull up strap passes from a neck strap and around a hind pastern to draw up a hind foot for shoeing or treatment. Hopples (sometimes called hobbles) are a piece of equipment used by Standardbred pacers to help the horse maintain its pacing gait. Humble or one leg hobble is a strap placed around the front pastern, and then the leg is lifted and the strap is wrapped around the upper leg and then buckled, leaving the horse with three legs to stand on. Mounting hobbles are knee hobbles that are made with a quick release, on a lead that passes to the rider. 
They are used to mount fractious horses and when mounted the rider can retrieve them. Picket hobble is a single hobble that is placed on a front pastern and then attached to a tether chain. Sideline hobbles may be made in the same manner as above, but with a longer chain to hobble a front and back leg. Rope may also replace the chain. They, too, are placed around the pasterns. This pattern may be useful on a persistent jumper or a horse that has mastered the art of travelling in front leg hobbles Three or four leg hobbles are made in a similar pattern to the above and hobble three or four legs. Used for securing legs for operations, etc. History Hobbles date at least as far back as Ancient Egypt. Two Egyptian hieroglyphs are believed to depict hobbles. A hobble is illustrated on a silver vase excavated from a 4th century B.C. tomb at Chertomlyk in modern day Ukraine. The Persians were also known for their custom of hobbling. In Anabasis, Xenophon claims "a Persian army is good for nothing at night. Their horses are haltered, and, as a rule, hobbled as well to prevent their escaping as they might if loose." See also Hobble skirt Legcuffs References — A detailed discussion of the various types of Western hobbles Animal equipment Horse tack Physical restraint
Hobble (device)
Biology
1,102
910,263
https://en.wikipedia.org/wiki/Hawaiian%20earring
In mathematics, the Hawaiian earring H is the topological space defined by the union of circles in the Euclidean plane R^2 with center (1/n, 0) and radius 1/n for n = 1, 2, 3, ..., endowed with the subspace topology: H = ⋃ {(x, y) ∈ R^2 : (x − 1/n)^2 + y^2 = (1/n)^2}, the union taken over all positive integers n. The space H is homeomorphic to the one-point compactification of the union of a countable family of disjoint open intervals. The Hawaiian earring is a one-dimensional, compact, locally path-connected metrizable space. Although H is locally homeomorphic to R at all non-origin points, H is not semi-locally simply connected at the origin (0, 0). Therefore, H does not have a simply connected covering space and is usually given as the simplest example of a space with this complication. The Hawaiian earring looks very similar to the wedge sum of countably infinitely many circles; that is, the rose with infinitely many petals, but these two spaces are not homeomorphic. The difference between their topologies is seen in the fact that, in the Hawaiian earring, every open neighborhood of the point of intersection of the circles contains all but finitely many of the circles (an ε-ball around (0, 0) contains every circle whose radius is less than ε/2); in the rose, a neighborhood of the intersection point might not fully contain any of the circles. Additionally, the rose is not compact: the complement of the distinguished point is an infinite union of open intervals; to those add a small open neighborhood of the distinguished point to get an open cover with no finite subcover. Fundamental group The Hawaiian earring is neither simply connected nor semilocally simply connected since, for all n ≥ 1, the loop parameterizing the nth circle is not homotopic to a trivial loop. Thus, H has a nontrivial fundamental group G, sometimes referred to as the Hawaiian earring group. The Hawaiian earring group G is uncountable, and it is not a free group. However, G is locally free in the sense that every finitely generated subgroup of G is free. The homotopy classes of the individual loops generate the free group on a countably infinite number of generators, which forms a proper subgroup of G. The uncountably many other elements of G arise from loops whose image is not contained in finitely many of the Hawaiian earring's circles; in fact, some of them are surjective. For example, the path that on the interval [2^(−n), 2^(−n+1)] circumnavigates the nth circle. More generally, one may form infinite products of the loops indexed over any countable linear order provided that for each n, the loop around the nth circle and its inverse appear within the product only finitely many times. It is a result of John Morgan and Ian Morrison that G embeds into the inverse limit of the free groups F_n on n generators, where the bonding map from F_n to F_(n−1) simply kills the last generator of F_n. However, G is a proper subgroup of the inverse limit since each loop in H may traverse each circle of H only finitely many times. An example of an element of the inverse limit that does not correspond to an element of G is an infinite product of commutators, which appears formally in the inverse limit as the sequence of its increasingly long finite partial products. First singular homology Katsuya Eda and Kazuhiro Kawamura proved that the abelianisation of G, and therefore the first singular homology group H_1(H), is isomorphic to the direct sum of two groups. The first summand is Z^N, the direct product of infinitely many copies of the infinite cyclic group Z (the Baer–Specker group). This factor represents the singular homology classes of loops that do not have winding number 0 around every circle of H and is precisely the first Čech singular homology group of H.
Additionally, Z^N may be considered as the infinite abelianization of G, since every element in the kernel of the natural homomorphism from G to Z^N is represented by an infinite product of commutators. The second summand of H_1(H) consists of homology classes represented by loops whose winding number around every circle of H is zero, i.e. the kernel of the natural homomorphism from H_1(H) to Z^N. The existence of this isomorphism is proven abstractly using infinite abelian group theory and does not have a geometric interpretation. Higher dimensions It is known that H is an aspherical space, i.e. all higher homotopy and homology groups of H are trivial. The Hawaiian earring can be generalized to higher dimensions. Such a generalization was used by Michael Barratt and John Milnor to provide examples of compact, finite-dimensional spaces with nontrivial singular homology groups in dimensions larger than that of the space. The n-dimensional Hawaiian earring is defined as H_n = ⋃ {x ∈ R^(n+1) : ‖x − (1/k, 0, ..., 0)‖ = 1/k}, the union taken over all positive integers k. Hence, H_n is a countable union of n-spheres which have one single point in common, and the topology is given by a metric in which the spheres' diameters converge to zero as k → ∞. Alternatively, H_n may be constructed as the Alexandrov compactification of a countable union of disjoint copies of R^n. Recursively, one has that H_0 consists of a convergent sequence, H_1 is the original Hawaiian earring, and H_n is homeomorphic to the reduced suspension of H_(n−1). For n ≥ 1, the n-dimensional Hawaiian earring is a compact, (n−1)-connected and locally (n−1)-connected space. For n ≥ 2, it is known that the nth homotopy group of H_n is isomorphic to the Baer–Specker group Z^N. For certain dimensions q larger than n, Barratt and Milnor showed that the singular homology group H_q(H_n) is a nontrivial uncountable group for each such q. See also List of topologies References Further reading Topological spaces
Hawaiian earring
Mathematics
1,066
2,377,453
https://en.wikipedia.org/wiki/Cauliflory
Cauliflory is a botanical term referring to plants that flower and fruit from their main stems or woody trunks, rather than from new growth and shoots. It is rare in temperate regions but common in tropical forests. Historically, there have been several strategies for distinguishing among types of cauliflory, including the location or age of the branch where inflorescences grow, whether inflorescences attach to stolons or branches, and whether axillary nodes or adventitious nodes develop into reproductive tissues. Cauliflory is a non-homologous phenomenon with several different sources of development and evolutionary value. The development of buds in axillary cauliflorous species occurs through either the re-use of the same position or old tissue over seasons of growth, or release from dormancy. In both cases, vascularization of the bud must occur from pre-existing tissue, such as the pith. In Cercis canadensis, dormant buds break annually in a sympodial pattern. If flowers develop adventitiously, they form similarly to epicormic tissues and may be reactive to immediate environmental conditions. In certain species of Ficus, flowers may be produced from axillary buds in young plants and change to adventitious buds later. One frequently suggested hypothesis for the evolution of cauliflory is to allow trees to be pollinated or have their seeds dispersed by animals, especially bats, that climb on trunks and sturdy limbs to feed on the nectar and fruits. Some species may instead have fruits which drop from the canopy and ripen only after they reach the ground, an alternative strategy termed nonfunctionally caulicarpic fruits. In Ficus, there is no association between the evolution of cauliflory as an apomorphy and particular ecological factors. Alternative hypotheses have focused on competition for sugar and minerals between flowers and young leaves, mechanical support for larger flowers and fruits particularly in Artocarpus and Durio, and evolutionary theory built on the plant as a metapopulation and differential rates of mutations across large plant bodies. An extreme version is flagelliflory, where long, whip-like branches descend from the main trunk and bear all the inflorescences. The branches grow to and along the ground and even below it. As a result, the plant or tree's flowers can appear to emerge from the soil. Examples are known mostly from the plant families Annonaceae and Moraceae, such as Desmopsis terriflora, but also include Couroupita guianensis (Lecythidaceae) and the cactus Weberocereus tunilla (Cactaceae). Cauliflorous species List of some species of cauliflorous plants with articles (list may be incomplete) Annonaceae Uvariopsis (all species are ramiflorous, cauliflorous or both.) Cauliflorous species are: U. submontana. U. congolana, U. guineensis, U. vanderystii, Piptostigma Annonidium mannii Aristolochiaceae Aristolochia arborea Bignoniaceae Adenocalymma Amphitecna, Parmentiera, Crescentia Rhodocolea, Colea Caricaceae Carica papaya (papaya) Cunoniaceae Davidsonia Ebenaceae Diospyros Fabaceae Cercis siliquastrum Castanospermum australe Cynometra cauliflora Lecythidaceae Couroupita guianensis (cannonball tree) Grias Malvaceae Theobroma cacao (cacao), T. grandiflorum (cupuaçu) (and possibly others) Cola mossambicensis Crescentia cujete (calabash tree). Pavonia strictiflora Durian Meliaceae Didymocheton spectabilis (Kohekohe) Epicharis parasitica Moraceae Ficus racemosa (cluster fig), F. sansibarica (knobby fig), F. sur (Cape fig), F. sycomorus (sycamore fig), F.
coronata (sandpaper fig) Artocarpus heterophyllus (jackfruit), A. integer (chempedak), A. altilis (breadfruit) Myrtaceae Syzygium branderhorstii, S. cormiflorum, S.erythrocalyx, S. moorei Plinia cauliflora Oxalidaceae Averrhoa bilimbi (bilimbi) Sapindaceae Chytranthus Pancovia: Sapotaceae Englerophytum magalismontanum (stamvrug) Omphalocarpum Stilbaceae Halleria lucida (tree fuchsia) Surianaceae Recchia simplicifolia Thymelaeaceae Phaleria clerodendron (scented daphne) Image gallery See also Ramiflory Spur Adventitious buds References External links Cauliflory in Malaysian Rainforest Trees Plant morphology
Cauliflory
Biology
1,050
78,888,751
https://en.wikipedia.org/wiki/Bistrifluron
Bistrifluron is an insecticide of the benzoylurea class. It is used to control chewing insects such as aphids, whiteflies, caterpillars, and termites. It is not highly toxic to mammals, but bioaccumulation may be a concern. It has a low level of toxicity to birds and moderate to high toxicity to most aquatic animals, honeybees, and earthworms. References Insecticides Ureas 2-Fluorophenyl compounds Anilides Benzamides 2-Chlorophenyl compounds Trifluoromethyl compounds
Bistrifluron
Chemistry
124
681,190
https://en.wikipedia.org/wiki/Compression%20%28functional%20analysis%29
In functional analysis, the compression of a linear operator T on a Hilbert space H to a subspace K is the operator P_K T|_K : K → K, where P_K : H → K is the orthogonal projection onto K. This is a natural way to obtain an operator on K from an operator on the whole Hilbert space. If K is an invariant subspace for T, then the compression of T to K is the restricted operator K → K sending k to Tk. More generally, for a linear operator T on a Hilbert space H and an isometry V on a subspace W of H, define the compression of T to W by T_W = V* T V : W → W, where V* is the adjoint of V. If T is a self-adjoint operator, then the compression T_W is also self-adjoint. When V is replaced by the inclusion map I_K : K → H, V* = I_K* = P_K : H → K, and we acquire the special definition above. See also Dilation References P. Halmos, A Hilbert Space Problem Book, Second Edition, Springer-Verlag, 1982. Functional analysis
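As a finite-dimensional illustration (a sketch added here, not part of the original article), if the columns of a matrix V form an orthonormal basis of the subspace K, then the compression of a matrix T to K is V* T V, and compressing a self-adjoint T yields a self-adjoint result.

import numpy as np

# A linear operator T on the Hilbert space C^4, represented as a matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
T = A + A.conj().T              # make T self-adjoint (Hermitian)

# An orthonormal basis of a 2-dimensional subspace K, obtained via QR.
V = np.linalg.qr(rng.standard_normal((4, 2)))[0]   # 4x2, orthonormal columns

# Compression of T to K: V* T V, a 2x2 operator acting on K.
T_K = V.conj().T @ T @ V

# The compression of a self-adjoint operator is again self-adjoint.
print(np.allclose(T_K, T_K.conj().T))   # True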
Compression (functional analysis)
Mathematics
191
1,985,954
https://en.wikipedia.org/wiki/Poka-yoke
is a Japanese term that means "mistake-proofing" or "error prevention". It is also sometimes referred to as a forcing function or a behavior-shaping constraint. A poka-yoke is any mechanism in a process that helps an equipment operator avoid () mistakes () and defects by preventing, correcting, or drawing attention to human errors as they occur. The concept was formalized, and the term adopted, by Shigeo Shingo as part of the Toyota Production System. Etymology Poka-yoke was originally , but as this means "fool-proofing" (or "idiot-proofing") the name was changed to the milder poka-yoke. Poka-yoke is derived from (), a term in shogi that means avoiding an unthinkably bad move. Usage and examples More broadly, the term can refer to any behavior-shaping constraint designed into a process to prevent incorrect operation by the user. A simple poka-yoke example is demonstrated when a driver of the car equipped with a manual gearbox must press on the clutch pedal (a process step, therefore a poka-yoke) prior to starting an automobile. The interlock serves to prevent unintended movement of the car. Another example of poka-yoke would be the car equipped with an automatic transmission, which has a switch that requires the car to be in "Park" or "Neutral" before the car can be started (some automatic transmissions require the brake pedal to be depressed as well). These serve as behavior-shaping constraints as the action of "car in Park (or Neutral)" or "foot depressing the clutch/brake pedal" must be performed before the car is allowed to start. The requirement of a depressed brake pedal to shift most of the cars with an automatic transmission from "Park" to any other gear is yet another example of a poka-yoke application. Over time, the driver's behavior is conformed with the requirements by repetition and habit. When automobiles first started shipping with on-board GPS systems, it was not uncommon to use a forcing function which prevented the user from interacting with the GPS (such as entering in a destination) while the car was in motion. This ensures that the driver's attention is not distracted by the GPS. However, many drivers found this feature irksome, and the forcing function has largely been abandoned. This reinforces the idea that forcing functions are not always the best approach to shaping behavior. The microwave oven provides another example of a forcing function. In all modern microwave ovens, it is impossible to start the microwave while the door is still open. Likewise, the microwave will shut off automatically if the door is opened by the user. By forcing the user to close the microwave door while it is in use, it becomes impossible for the user to err by leaving the door open. Forcing functions are very effective in safety critical situations such as this, but can cause confusion in more complex systems that do not inform the user of the error that has been made. These forcing functions are being used in the service industry as well. Call centers concerned with credit card fraud and friendly fraud are using agent-assisted automation to prevent the agent from seeing or hearing the credit card information so that it cannot be stolen. The customer punches the information into their phone keypad, the tones are masked to the agent and are not visible in the customer relationship management software. History The term poka-yoke was applied by Shigeo Shingo in the 1960s to industrial processes designed to prevent human errors. 
Shingo redesigned a process in which factory workers, while assembling a small switch, would often forget to insert the required spring under one of the switch buttons. In the redesigned process, the worker would perform the task in two steps, first preparing the two required springs and placing them in a placeholder, then inserting the springs from the placeholder into the switch. When a spring remained in the placeholder, the workers knew that they had forgotten to insert it and could correct the mistake effortlessly. Shingo distinguished between the concepts of inevitable human mistakes and defects in the production. Defects occur when the mistakes are allowed to reach the customer. The aim of poka-yoke is to design the process so that mistakes can be detected and corrected immediately, eliminating defects at the source. Implementation in manufacturing Poka-yoke can be implemented at any step of a manufacturing process where something can go wrong or an error can be made. For example, a fixture that holds pieces for processing might be modified to only allow pieces to be held in the correct orientation, or a digital counter might track the number of spot welds on each piece to ensure that the worker executes the correct number of welds. Shingo recognized three types of poka-yoke for detecting and preventing errors in a mass production system: The contact method identifies product defects by testing the product's shape, size, color, or other physical attributes. The fixed-value (or constant number) method alerts the operator if a certain number of movements are not made. The motion-step (or sequence) method determines whether the prescribed steps of the process have been followed. Either the operator is alerted when a mistake is about to be made, or the poka-yoke device actually prevents the mistake from being made. In Shingo's lexicon, the former implementation would be called a warning poka-yoke, while the latter would be referred to as a control poka-yoke. Shingo argued that errors are inevitable in any manufacturing process, but that if appropriate poka-yokes are implemented, then mistakes can be caught quickly and prevented from resulting in defects. By eliminating defects at the source, the cost of mistakes within a company is reduced. Benefits of poka-yoke implementation A typical feature of poka-yoke solutions is that they don't let an error in a process happen. Other advantages include: Less time spent on training workers; Elimination of many operations related to quality control; Unburdening of operators from repetitive operations; Promotion of the work improvement-oriented approach and actions; A reduced number of rejects; Immediate action when a problem occurs; 100% built-in quality control; Preventing bad products from reaching customers; Detecting mistakes as they occur; Eliminating defects before they occur. See also Defensive design Fail-safe Idiot-proof Interlock Murphy's law References Further reading External links Mistake-proofing example wiki Mistake-Proofing - Fool-Proofing - Failsafing Architectures of Control in Design, a site looking at constraints in the design of products, systems and environments Japanese business terms Lean manufacturing Toyota Production System
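Shingo's fixed-value method and the warning/control distinction described above can also be sketched in software terms. The following Python example is an illustration added here, not from the original article; the class and parameter names are hypothetical. It models a weld-count check: the control variant blocks a piece from advancing until the prescribed number of spot welds has been made, while the warning variant only alerts the operator.

class WeldStation:
    """Fixed-value poka-yoke: each piece must receive exactly
    `required_welds` spot welds before it may be released."""

    def __init__(self, required_welds, mode="control"):
        self.required_welds = required_welds
        self.mode = mode          # "control" blocks the piece, "warning" only alerts
        self.welds_done = 0

    def record_weld(self):
        self.welds_done += 1

    def release_piece(self):
        if self.welds_done != self.required_welds:
            message = (f"Piece has {self.welds_done} welds, "
                       f"expected {self.required_welds}.")
            if self.mode == "control":
                # Control poka-yoke: the mistake cannot pass downstream.
                raise RuntimeError("Blocked: " + message)
            print("Warning: " + message)   # Warning poka-yoke: operator is alerted
        self.welds_done = 0                # reset the counter for the next piece
        return "released"


station = WeldStation(required_welds=4)
for _ in range(3):
    station.record_weld()
# Calling station.release_piece() here would raise, catching the missing 4th weld.
station.record_weld()
print(station.release_piece())             # "released"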
Poka-yoke
Engineering
1,372
6,139,311
https://en.wikipedia.org/wiki/1%20euro%20coin
The 1 euro coin (€1) is a euro coin with a value of one euro. It is made of two alloys: the inner part of cupronickel, the outer part of nickel brass. All coins have a common reverse side and country-specific national sides. The coin has been used since 2002, with the present common side design dating from 2007. As of July 2019, there were approximately 7.5 billion one-euro coins in circulation, constituting 25.3% of all circulated euro coins by value and 5.6% by quantity. History The coin dates from 2002, when euro coins and banknotes were introduced in the twelve-member Eurozone and its related territories. The common side was designed by Luc Luycx, a Belgian artist who won a Europe-wide competition to design the new coins. The design of the one and two euro coins was intended to show the European Union (EU) as a whole with the then 15 countries more closely joined together than on the 10- to 50-cent coins (the 1- to 5-cent coins showed the EU as one, though intending to show its place in the world). There were then 15 versions of the national sides (eurozone + Monaco, San Marino and the Vatican who could mint their own) and in each case there was a national competition to decide the design, which had to comply with uniform specifications, such as the requirement to include twelve stars (see euro coins). National designs were not allowed to change until the end of 2008, unless a monarch (whose portrait usually appears on the coins) died or abdicated. This happened in Monaco and the Vatican City, resulting in three new designs in circulation (the Vatican had an interim sede vacante design until the new Pope was elected). National designs have seen some changes, as they are now required to include the name of the issuing country: previously neither Finland nor Belgium showed this. , Austria, Germany and Greece are obliged to change their designs due this requirement in the future. As the EU's membership has since expanded in 2004 and 2007, with further expansions envisaged, the common face of all euro coins of values of 10 cents and above were redesigned in 2007 to show a new map. This map showed Europe, not just the EU, as one continuous landmass; however Cyprus was moved west as the map cut off after the Bosphorus (which was seen as excluding Turkey for political reasons). The 2007 redesign coincided with the first enlargement of the eurozone in that year, with the entry of Slovenia. Hence, the Slovenian design was added to the designs in circulation. Since then designs for Cyprus, Malta, Slovakia, Estonia, Latvia, Lithuania, and Croatia have been added as each of these states joined the eurozone. Andorra began minting its own designs in 2014 after winning the right to do so. Design The coins are composed of two alloys. The inner circle is composed of three layers (copper-nickel, nickel, copper-nickel) and the outer ring of nickel brass, giving the coin a two-colour appearance. The coin has a diameter of 23.25 mm, thickness 2.33 mm and a mass of 7.5 grams. The coins' edges consist of alternating segments: three smooth, three finely ribbed. The coins have been used from 2002, though some are dated 1999, which is the year the euro was created as a currency, but not put into general circulation. Reverse (common) side The reverse was designed by Luc Luycx and displays a map of Europe, not including Iceland and cutting off, in a semicircle, at the Bosphorus, north through the middle of Ukraine, then Russia and through northern Scandinavia. 
Cyprus is located further west than it should be and Malta is shown disproportionately large so that it appears on the map. Six fine lines cut across the map except where there is landmass and have a star at each end—reflecting the twelve stars on the flag of Europe. Across the map is the word EURO, and a large number 1 appears to the left hand side of the coin. The designer's initials, LL, appear next to Cyprus. In 2007, the map was updated to reflect the EU's enlargements in 2004 and 2007. Other than depicting the newly added countries, the new design was much the same. The map was less detailed and showed no national borders. The vertical lines running across the rightmost third of the coin are interrupted in the middle to make way for eastern Europe. Obverse (national) side The obverse side of the coin depends on the issuing country. All have to include twelve stars (in most cases a circle around the edge), the engraver's initials and the year of issue. New designs also have to include the name or initials of the issuing country. The side cannot repeat the denomination of the coin unless the issuing country uses an alphabet other than Latin (currently, Greece is the only such country, hence "1 ΕΥΡΩ" is engraved upon its coin. Austria is currently in breach of the revised rules, but has so far not announced plans to remove "1 EURO" from its coin). Planned designs Austria, Germany and Greece will at some point need to update their designs to comply with guidelines stating they must include the issuing state's name or initial, and not repeat the denomination of the coin. In addition, there are several EU states that have not yet adopted the euro, some of them have already agreed upon their coin designs; however, it is not known exactly when they will adopt the currency, and hence these are not yet minted. See enlargement of the Eurozone for expected entry dates of these countries. Minting One-euro coins have been produced every year in Belgium, Finland, France, the Netherlands and Spain. In Austria, Germany, Greece, Ireland, Luxembourg, Portugal, San Marino and the Vatican City no €1 coins were minted dated 1999, 2000 and 2001. In Monaco, no €1 coins were minted in 1999, 2000, 2005, 2008 and 2010. Malta did not issue €1 coins in 2009. Slovenia and Slovakia have produced coins every year since their respective entries to the eurozone. Proof €1 coins are minted by the majority, but not all, of the eurozone states. One of the most valuable planned issues of a €1 coin was by Vatican City in 2002, which may sell for several hundred euro. However, the French mint marks were mistakenly not placed on some 2007 Monaco coins which are hence worth more than €200 to collectors. PP means the proof-condition coins. Numbers means if more than one coin was minted in that year in that condition by the country. In Germany, there are five mint marks, so they mint ten types of coins in every year. In Greece, there were coins in 2002 which were minted in Finland with S mint mark. In the Vatican, there were coins minted with John Paul II's effigy, and with "Sede Vacante" image in 2005. Error coins There are several error 1-euro coins: Italian types from 2002 without mintmarks; Portuguese coins, also from 2002 with another type of edging (28 stripes instead of 29) and from 2008 with the first type of the common side, officially used until 2007; and the famous Monegasque coin from 2007 without mint marks. 
Similar coins The coins were minted in several of the participating countries, many using blanks produced at the Birmingham Mint in Birmingham, England. A problem has arisen in differentiating coins made using similar blanks and minting techniques. The Turkish 50 New Kuruş coin (which was in circulation from 2005 until 2008) closely resembled the €1 coin in both weight and size, and both coins seem to be recognized and accepted by coin-operated machines as being a €1 coin; however, one euro is worth roughly 10 times as much as 50 Turkish kuruş. Some vending machines have now been upgraded to reject the 50 kuruş coin. The Brazilian 1 real coin is also similar to the 1 euro coin. It is worth around 18 euro cents (about 1/5 of the 1 euro coin). The Polish 2 złotych coin is currently worth about €0.46. The Italian 1000 lire coin, minted from 1997 to 2001, has a diameter 3.75 mm larger. The coin was worth approximately €0.51. References External links Information about the euro coin issues Euro coins Bi-metallic coins One-base-unit coins Maps on coins
1 euro coin
Chemistry
1,754
5,828,038
https://en.wikipedia.org/wiki/Somatomedin%20receptor
A somatomedin receptor is a receptor which binds the somatomedins (IGFs). Somatomedin is abbreviated to IGF, in reference to insulin-like growth factor. There are two types: Insulin-like growth factor 1 receptor (IGF-1R) Insulin-like growth factor 2 receptor (IGF-2R) External links Receptors
Somatomedin receptor
Chemistry,Biology
80
71,638,720
https://en.wikipedia.org/wiki/L%20168-9
L 168-9 (also known as GJ 4332 or TOI-134, officially named Danfeng) is a red dwarf star located away from the Solar System in the constellation of Tucana. The star has about 61% the mass and 60% the radius of the Sun. It has a temperature of and a rotation period of 29 days. L 168-9 is orbited by one known exoplanet. Nomenclature The designation L 168-9 comes from Luyten's first catalogue of stars with high proper motion. In August 2022, this planetary system was included among 20 systems to be named by the third NameExoWorlds project. The approved names, proposed by a team from China, were announced in June 2023. L 168-9 is named Danfeng and its planet is named Qingluan, after mythological birds of ancient China. Planetary system The exoplanet L 168-9 b, officially named Qingluan, was discovered in 2020 using TESS. At the discovery, this terrestrial super-Earth was thought to have about 4.6 times the mass and 1.39 times the radius of Earth, and an estimated equilibrium temperature of . L 168-9 b is a target for observation and atmospheric characterization with the James Webb Space Telescope, and has been observed as one of its first targets. A newer study refined the planetary parameters of L 168-9 b. The newer research found a lower mass of and a higher radius of . These parameters imply a lower density of , in contrast to the previous value of . Given the lower density of the planet, it more likely has a pure rock composition, rather than a 50% iron core and 50% silicate mantle as previously proposed. The orbital parameters show little variation, while the equilibrium temperature was updated to . Transmission spectra of combined near- and mid-infrared observations by the James Webb Space Telescope showed no atmospheric features. However, further observations are required to rule out a thick (100 bar) carbon dioxide atmosphere, which could also explain the data. References Tucana M-type main-sequence stars Planetary systems with one confirmed planet CD-60 08051 4332 115211 0134 Danfeng
L 168-9
Astronomy
447
228,190
https://en.wikipedia.org/wiki/Vanillin
Vanillin is an organic compound with the molecular formula . It is a phenolic aldehyde. Its functional groups include aldehyde, hydroxyl, and ether. It is the primary component of the extract of the vanilla bean. Synthetic vanillin is now used more often than natural vanilla extract as a flavoring in foods, beverages, and pharmaceuticals. Vanillin and ethylvanillin are used by the food industry; ethylvanillin is more expensive, but has a stronger note. It differs from vanillin by having an ethoxy group (−O−CH2CH3) instead of a methoxy group (−O−CH3). Natural vanilla extract is a mixture of several hundred different compounds in addition to vanillin. Artificial vanilla flavoring is often a solution of pure vanillin, usually of synthetic origin. Because of the scarcity and expense of natural vanilla extract, synthetic preparation of its predominant component has long been of interest. The first commercial synthesis of vanillin began with the more readily available natural compound eugenol (4-allyl-2-methoxyphenol). Today, artificial vanillin is made either from guaiacol or lignin. Lignin-based artificial vanilla flavoring is alleged to have a richer flavor profile than that from guaiacol-based artificial vanilla; the difference is due to the presence of acetovanillone, a minor component in the lignin-derived product that is not found in vanillin synthesized from guaiacol. Natural history Although it is generally accepted that vanilla was domesticated in Mesoamerica and subsequently spread to the Old World in the 16th century, in 2019, researchers published a paper stating that vanillin residue had been discovered inside jars within a tomb in Israel dating to the 2nd millennium BCE, suggesting the possible cultivation of an unidentified, Old World-endemic Vanilla species in Canaan since the Middle Bronze Age. Traces of vanillin were also found in wine jars in Jerusalem, which were used by the Judahite elite before the city was destroyed in 586 BCE. Vanilla beans, called tlilxochitl, were discovered and cultivated as a flavoring for beverages by native Mesoamerican peoples, most famously the Totonacs of modern-day Veracruz, Mexico. Since at least the early 15th century, the Aztecs used vanilla as a flavoring for chocolate in drinks called xocohotl. Synthetic history Vanillin was first isolated as a relatively pure substance in 1858 by Théodore Nicolas Gobley, who obtained it by evaporating a vanilla extract to dryness and recrystallizing the resulting solids from hot water. In 1874, the German scientists Ferdinand Tiemann and Wilhelm Haarmann deduced its chemical structure, at the same time finding a synthesis for vanillin from coniferin, a glucoside of isoeugenol found in pine bark. Tiemann and Haarmann founded a company Haarmann and Reimer (now part of Symrise) and started the first industrial production of vanillin using their process (now known as the Reimer–Tiemann reaction) in Holzminden, Germany. In 1876, Karl Reimer synthesized vanillin (2) from guaiacol (1). By the late 19th century, semisynthetic vanillin derived from the eugenol found in clove oil was commercially available. Synthetic vanillin became significantly more available in the 1930s, when production from clove oil was supplanted by production from the lignin-containing waste produced by the sulfite pulping process for preparing wood pulp for the paper industry. By 1981, a single pulp and paper mill in Thorold, Ontario, supplied 60% of the world market for synthetic vanillin. 
However, subsequent developments in the wood pulp industry have made its lignin wastes less attractive as a raw material for vanillin synthesis. Today, approximately 15% of the world's production of vanillin is still made from lignin wastes, while approximately 85% is synthesized in a two-step process from the petrochemical precursors guaiacol and glyoxylic acid. Beginning in 2000, Rhodia began marketing biosynthetic vanillin prepared by the action of microorganisms on ferulic acid extracted from rice bran. This product, sold at USD$700/kg under the trademarked name Rhovanil Natural, is not cost-competitive with petrochemical vanillin, which sells for around US$15/kg. However, unlike vanillin synthesized from lignin or guaiacol, it can be labeled as a natural flavoring. Occurrence Vanillin is most prominent as the principal flavor and aroma compound in vanilla. Cured vanilla pods contain about 2% by dry weight vanillin. Relatively pure vanillin may be visible as a white dust or "frost" on the exteriors of cured pods of high quality. It is also found in Leptotes bicolor, a species of orchid native to Paraguay and southern Brazil, and the Southern Chinese red pine. At lower concentrations, vanillin contributes to the flavor and aroma profiles of foodstuffs as diverse as olive oil, butter, raspberry, and lychee fruits. Aging in oak barrels imparts vanillin to some wines, vinegar, and spirits. In other foods, heat treatment generates vanillin from other compounds. In this way, vanillin contributes to the flavor and aroma of coffee, maple syrup, and whole-grain products, including corn tortillas and oatmeal. Chemistry Natural production Natural vanillin is extracted from the seed pods of Vanilla planifolia, a vining orchid native to Mexico, but now grown in tropical areas around the globe. Madagascar is presently the largest producer of natural vanillin. As harvested, the green seed pods contain vanillin in the form of glucovanillin, its β--glucoside; the green pods do not have the flavor or odor of vanilla. Vanillin is released from glucovanillin by the action of the enzyme β-glucosidase during ripening and during the curing process. After being harvested, their flavor is developed by a months-long curing process, the details of which vary among vanilla-producing regions, but in broad terms it proceeds as follows: First, the seed pods are blanched in hot water, to arrest the processes of the living plant tissues. Then, for 1–2 weeks, the pods are alternately sunned and sweated: during the day they are laid out in the sun, and each night wrapped in cloth and packed in airtight boxes to sweat. During this process, the pods become dark brown, and enzymes in the pod release vanillin as the free molecule. Finally, the pods are dried and further aged for several months, during which time their flavors further develop. Several methods have been described for curing vanilla in days rather than months, although they have not been widely developed in the natural vanilla industry, with its focus on producing a premium product by established methods, rather than on innovations that might alter the product's flavor profile. Biosynthesis Although the exact route of vanillin biosynthesis in V. planifolia is currently unknown, several pathways are proposed for its biosynthesis. Vanillin biosynthesis is generally agreed to be part of the phenylpropanoid pathway starting with -phenylalanine, which is deaminated by phenylalanine ammonia lyase (PAL) to form t-cinnamic acid. 
The para position of the ring is then hydroxylated by the cytochrome P450 enzyme cinnamate 4-hydroxylase (C4H/P450) to create p-coumaric acid. Then, in the proposed ferulate pathway, 4-hydroxycinnamoyl-CoA ligase (4CL) attaches p-coumaric acid to coenzyme A (CoA) to create p-coumaroyl CoA. Hydroxycinnamoyl transferase (HCT) then converts p-coumaroyl CoA to 4-coumaroyl shikimate/quinate. This subsequently undergoes oxidation by the P450 enzyme coumaroyl ester 3’-hydroxylase (C3’H/P450) to give caffeoyl shikimate/quinate. HCT then exchanges the shikimate/quinate for CoA to create caffeoyl CoA, and 4CL removes CoA to afford caffeic acid. Caffeic acid then undergoes methylation by caffeic acid O-methyltransferase (COMT) to give ferulic acid. Finally, vanillin synthase hydratase/lyase (vp/VAN) catalyzes hydration of the double bond in ferulic acid followed by a retro-aldol elimination to afford vanillin. Vanillin can also be produced from vanilla glycoside with the additional final step of deglycosylation. In the past p-hydroxybenzaldehyde was speculated to be a precursor for vanillin biosynthesis. However, a 2014 study using a radiolabelled precursor indicated that p-hydroxybenzaldehyde is not used to synthesise vanillin or vanillin glucoside in the vanilla orchids. Chemical synthesis The demand for vanilla flavoring has long exceeded the supply of vanilla beans. The annual demand for vanillin was 12,000 tons, but only 1,800 tons of natural vanillin were produced. The remainder was produced by chemical synthesis. Vanillin was first synthesized from eugenol (found in oil of clove) in 1874–75, less than 20 years after it was first identified and isolated. Vanillin was commercially produced from eugenol until the 1920s. Later it was synthesized from lignin-containing "brown liquor", a byproduct of the sulfite process for making wood pulp. Counterintuitively, though it uses waste materials, the lignin process is no longer popular because of environmental concerns, and today most vanillin is produced from guaiacol. Several routes exist for synthesizing vanillin from guaiacol. At present, the most significant of these is the two-step process practiced by Rhodia since the 1970s, in which guaiacol (1) reacts with glyoxylic acid by electrophilic aromatic substitution. The resulting vanillylmandelic acid (2) is then converted, via 4-hydroxy-3-methoxyphenylglyoxylic acid (3), to vanillin (4) by oxidative decarboxylation. Wood-based vanillin 15% of the world's production of vanillin is produced from lignosulfonates, a byproduct from the manufacture of cellulose via the sulfite process. The sole producer of wood-based vanillin is the company Borregaard located in Sarpsborg, Norway. Wood-based vanillin is produced by copper-catalyzed oxidation of the lignin structures in lignosulfonates under alkaline conditions and is claimed by the manufacturing company to be preferred by their customers due to, among other reasons, its much lower carbon footprint than petrochemically synthesized vanillin. Fermentation The company Evolva has developed a genetically modified microorganism which can produce vanillin. Because the microbe is a processing aid, the resulting vanillin would not fall under U.S. GMO labeling requirements, and because the production is nonpetrochemical, food using the ingredient can claim to contain "no artificial ingredients". Using ferulic acid as an input and a specific non-GMO species of Amycolatopsis bacteria, natural vanillin can be produced.
Biochemistry Several studies have suggested that vanillin can affect the performance of antibiotics in laboratory conditions. Uses The largest use of vanillin is as a flavoring, usually in sweet foods. The ice cream and chocolate industries together comprise 75% of the market for vanillin as a flavoring, with smaller amounts being used in confections and baked goods. Vanillin is also used in the fragrance industry, in perfumes, and to mask unpleasant odors or tastes in medicines, livestock fodder, and cleaning products. It is also used in the flavor industry, as a very important key note for many different flavors, especially creamy profiles such as cream soda. Additionally, vanillin can be used as a general-purpose stain for visualizing spots on thin-layer chromatography plates. This stain yields a range of colors for these different components. Vanillin–HCl staining can be used to visualize the localisation of tannins in cells. Vanillin is becoming a popular choice for the development of bio-based plastics. Manufacturing Vanillin has been used as a chemical intermediate in the production of pharmaceuticals, cosmetics, and other fine chemicals. In 1970, more than half the world's vanillin production was used in the synthesis of other chemicals. As of 2016, vanillin uses have expanded to include perfumes, flavoring and aromatic masking in medicines, various consumer and cleaning products, and livestock foods. Adverse effects Vanillin can trigger migraine headaches in a small fraction of the people who experience migraines. Some people have allergic reactions to vanilla. They may be allergic to synthetically produced vanilla but not to natural vanilla, or the other way around, or to both. Vanilla orchid plants can trigger contact dermatitis, especially among people working in the vanilla trade if they come into contact with the plant's sap. An allergic contact dermatitis called vanillism produces swelling and redness, and sometimes other symptoms. The sap of most species of vanilla orchid which exudes from cut stems or where beans are harvested can cause moderate to severe dermatitis if it comes in contact with bare skin. The sap of vanilla orchids contains calcium oxalate crystals, which are thought to be the main causative agent of contact dermatitis in vanilla plantation workers. A pseudophytodermatitis called vanilla lichen can be caused by flour mites (Tyroglyphus farinae). Ecology Scolytus multistriatus, one of the vectors of the Dutch elm disease, uses vanillin as a signal to find a host tree during oviposition. See also Phenolic compounds in wine Other positional isomers: Isovanillin ortho-Vanillin 2-Hydroxy-5-methoxybenzaldehyde 2-Hydroxy-4-methoxybenzaldehyde Benzaldehyde Protocatechuic aldehyde Syringaldehyde References Notes Flavors Perfume ingredients Hydroxybenzaldehydes Vanilloids O-methylated natural phenols Vanilla Total synthesis
Vanillin
Chemistry
3,135
18,209,139
https://en.wikipedia.org/wiki/MIKE%20FLOOD
MIKE FLOOD is a computer program that simulates flood inundation in rivers, flood plains and urban drainage systems. It dynamically couples 1D (MIKE 11 and MOUSE) and 2D (MIKE 21) modeling techniques into a single tool. MIKE FLOOD is developed by DHI and is accepted by the US Federal Emergency Management Agency (FEMA) for use in the National Flood Insurance Program (NFIP). It can be expanded with a range of modules and methods, including a flexible-mesh overland flow solver, MIKE URBAN, rainfall-runoff modeling, and dynamic operation of structures. Applications MIKE FLOOD can be used for river-floodplain interaction, integrated urban drainage and river modeling, urban flood analysis, and detailed dam-break studies. Integrated hydrologic modelling Hydraulic engineering Environmental engineering Physical geography
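The dynamic 1D/2D coupling described above amounts to exchanging water between a 1D river solver and a 2D overland-flow grid at every shared time step. The sketch below illustrates only that general idea; it is not the MIKE FLOOD or DHI API (which is proprietary), and the function names, the broad-crested weir formula used for the lateral link, the weir coefficient, and the toy state updates are all assumptions made for illustration.

```python
# Conceptual sketch of dynamic 1D/2D coupling: a river node and a floodplain
# cell exchange discharge through a lateral link each time step. Not the
# MIKE FLOOD API; every name and number here is a stand-in for illustration.

def link_discharge(stage_1d, bank_level, depth_2d, width, c=1.7):
    """Spill from river to floodplain via a simple broad-crested weir formula
    (an assumed stand-in for the structure equations a real coupling uses)."""
    head = stage_1d - max(bank_level, bank_level + depth_2d)
    return c * width * max(head, 0.0) ** 1.5  # discharge in m^3/s


def coupled_step(river_stage, flood_depth, links, dt, cell_area=100.0):
    """Advance both toy 'models' one shared step, exchanging water laterally.

    river_stage : dict node -> water level [m] in the 1D model
    flood_depth : dict cell -> water depth [m] in the 2D model
    links       : list of (node, cell, bank_level, width) lateral links
    """
    for node, cell, bank_level, width in links:
        q = link_discharge(river_stage[node], bank_level, flood_depth[cell], width)
        volume = q * dt
        river_stage[node] -= volume / 1000.0     # toy sink term in the 1D model
        flood_depth[cell] += volume / cell_area  # toy source term in the 2D model
    # A real coupled model would now run the full 1D and 2D solvers for dt.
    return river_stage, flood_depth


# Example: one link spilling from river node "n1" into floodplain cell (3, 4).
stages = {"n1": 12.6}
depths = {(3, 4): 0.0}
coupled_step(stages, depths, [("n1", (3, 4), 12.0, 5.0)], dt=10.0)
```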
MIKE FLOOD
Physics,Chemistry,Engineering,Environmental_science
156
72,492,288
https://en.wikipedia.org/wiki/Amanita%20rhacopus
Amanita rhacopus is a species of Amanita found on the east coast of the United States. References External links rhacopus Fungi of North America Fungi described in 2018 Fungus species
Amanita rhacopus
Biology
41
55,130,878
https://en.wikipedia.org/wiki/Trim%20drag
Trim drag, sometimes denoted Dm, is the component of aerodynamic drag on an aircraft created by the flight control surfaces, mainly the elevators and trimmable horizontal stabilizers, when they are used to offset changes in pitching moment and centre of gravity during flight. For longitudinal stability in pitch and in speed, aircraft are designed so that the centre of mass (centre of gravity) is forward of the neutral point. The resulting nose-down pitching moment is compensated by a downward aerodynamic force on the elevator and the trimmable horizontal stabilizer. This downward force on the tailplane (the horizontal stabilizer and elevator combination) produces lift-induced drag in the same way that the lift on the wing produces lift-induced drag. Shifts in the position of the centre of mass are often caused by fuel being burned off over the course of the flight, and they require the aerodynamic trim force to be adjusted. Systems that actively pump fuel between separate fuel tanks in the aircraft can be used to offset this effect and reduce the trim drag. Fly-by-wire flight control systems can eliminate trim drag entirely at transonic speeds and reduce it substantially at supersonic speeds: the tail is used as a lifting body that adds to wing lift at subsonic speeds, transitions to pushing down against the wing, as in conventional designs, at supersonic speeds, and is aerodynamically neutral at about Mach 1, providing no lift in either direction. This not only eliminates trim drag but also slightly reduces induced drag when crossing the sound barrier. References Drag (physics)
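The tail download and its drag cost can be made concrete with a simplified static balance in which only the wing and tailplane contribute; thrust and fuselage moments are neglected and the CG is taken ahead of the wing aerodynamic centre. This is a back-of-envelope sketch under those assumptions, not a complete trim analysis.

```latex
% Moments about the centre of gravity (nose-up positive), set to zero for trim:
%   M_ac : wing pitching moment about its aerodynamic centre (negative for a cambered wing)
%   L_w  : wing lift, acting at the aerodynamic centre x_ac; the CG sits at x_cg
%   L_t  : tailplane lift (positive upward), acting a distance l_t behind the CG
M_{cg} = M_{ac} + L_w\,(x_{cg} - x_{ac}) - L_t\,l_t = 0
\quad\Longrightarrow\quad
L_t = \frac{M_{ac} + L_w\,(x_{cg} - x_{ac})}{l_t}

% With x_cg < x_ac and M_ac < 0, L_t comes out negative: a download. That load
% carries its own lift-induced drag (e_t = tail span efficiency, b_t = tail span):
D_{i,\mathrm{tail}} = \frac{L_t^{\,2}}{\tfrac{1}{2}\rho V^{2}\,\pi\,e_t\,b_t^{\,2}}

% The wing must also lift L_w = W + |L_t| > W, so its induced drag rises as well;
% the sum of these increments over the untrimmed case is the trim drag.
```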
Trim drag
Chemistry
314
578,412
https://en.wikipedia.org/wiki/Acetone%20peroxide
Acetone peroxide ( also called APEX and mother of Satan) is an organic peroxide and a primary explosive. It is produced by the reaction of acetone and hydrogen peroxide to yield a mixture of linear monomer and cyclic dimer, trimer, and tetramer forms. The monomer is dimethyldioxirane. The dimer is known as diacetone diperoxide (DADP). The trimer is known as triacetone triperoxide (TATP) or tri-cyclic acetone peroxide (TCAP). Acetone peroxide takes the form of a white crystalline powder with a distinctive bleach-like odor when impure, or a fruit-like smell when pure, and can explode powerfully if subjected to heat, friction, static electricity, concentrated sulfuric acid, strong UV radiation, or shock. Until about 2015, explosives detectors were not set to detect non-nitrogenous explosives, as most explosives used preceding 2015 were nitrogen-based. TATP, being nitrogen-free, has been used as the explosive of choice in several terrorist bomb attacks since 2001. History Acetone peroxide (specifically, triacetone triperoxide) was discovered in 1895 by the German chemist Richard Wolffenstein. Wolffenstein combined acetone and hydrogen peroxide, and then he allowed the mixture to stand for a week at room temperature, during which time a small quantity of crystals precipitated, which had a melting point of . In 1899, Adolf von Baeyer and Victor Villiger described the first synthesis of the dimer and described use of acids for the synthesis of both peroxides. Baeyer and Villiger prepared the dimer by combining potassium persulfate in diethyl ether with acetone, under cooling. After separating the ether layer, the product was purified and found to melt at . They found that the trimer could be prepared by adding hydrochloric acid to a chilled mixture of acetone and hydrogen peroxide. By using the depression of freezing points to determine the molecular weights of the compounds, they also determined that the form of acetone peroxide that they had prepared via potassium persulfate was a dimer, whereas the acetone peroxide that had been prepared via hydrochloric acid was a trimer, like Wolffenstein's compound. Work on this methodology and on the various products obtained, was further investigated in the mid-20th century by Milas and Golubović. Chemistry The chemical name acetone peroxide is most commonly used to refer to the cyclic trimer, the product of a reaction between two precursors, hydrogen peroxide and acetone, in an acid-catalyzed nucleophilic addition, although monomeric and dimeric forms are also possible. Specifically, two dimers, one cyclic (C6H12O4) and one open chain (C6H14O4), as well as an open dihydroperoxide monomer (C3H8O4), can also be formed; under a particular set of conditions of reagent and acid catalyst concentration, the cyclic trimer is the primary product. Under neutral conditions, the reaction is reported to produce the monomeric organic peroxide. A tetrameric form has also been described, under different catalytic conditions, albeit not without disputes and controversy. The most common route for nearly pure TATP is H2O2/acetone/HCl in 1:1:0.25 molar ratios, using 30% hydrogen peroxide. This product contains very little or none of DADP with some very small traces of chlorinated compounds. Product that contains large fraction of DADP can be obtained from 50% H2O2 using large amounts of concentrated sulfuric acid as catalyst or alternatively with 30% H2O2 and massive amounts of HCl as a catalyst. 
The product made by using hydrochloric acid is regarded as more stable than the one made using sulfuric acid. It is known that traces of sulfuric acid trapped inside the formed acetone peroxide crystals lead to instability. In fact, the trapped sulfuric acid can induce detonation at temperatures as low as . This is the most likely mechanism behind accidental explosions of acetone peroxide that occur during drying on heated surfaces. Organic peroxides in general are sensitive, dangerous explosives, and all forms of acetone peroxide are sensitive to initiation. TATP decomposes explosively; examination of the explosive decomposition of TATP at the very edge of detonation front predicts "formation of acetone and ozone as the main decomposition products and not the intuitively expected oxidation products." Very little heat is created by the explosive decomposition of TATP at the very edge of the detonation front; the foregoing computational analysis suggests that TATP decomposition is an entropic explosion. However, this hypothesis has been challenged as not conforming to actual measurements. The claim of entropic explosion has been tied to the events just behind the detonation front. The authors of the 2004 Dubnikova et al. study confirm that a final redox reaction (combustion) of ozone, oxygen and reactive species into water, various oxides and hydrocarbons takes place within about 180ps after the initial reaction—within about a micron of the detonation wave. Detonating crystals of TATP ultimately reach temperature of and pressure of 80 kbar. The final energy of detonation is about 2800 kJ/kg (measured in helium), enough to briefly raise the temperature of gaseous products to . Volume of gases at STP is 855 L/kg for TATP and 713 L/kg for DADP (measured in helium). The tetrameric form of acetone peroxide, prepared under neutral conditions using a tin catalyst in the presence of a chelator or general inhibitor of radical chemistry, is reported to be more chemically stable, although still a very dangerous primary explosive. Its synthesis has been disputed. Both TATP and DADP are prone to loss of mass via sublimation. DADP has lower molecular weight and higher vapor pressure. This means that DADP is more prone to sublimation than TATP. This can lead to dangerous crystal growth when the vapors deposit if the crystals have been stored in a container with a threaded lid. This process of repeated sublimation and deposition also results in a change in crystal size via Ostwald ripening. Several methods can be used for trace analysis of TATP, including gas chromatography/mass spectrometry (GC/MS), high performance liquid chromatography/mass spectrometry (HPLC/MS), and HPLC with post-column derivitization. Acetone peroxide is soluble in toluene, chloroform, acetone, dichloromethane and methanol. Recrystalization of primary explosives may yield large crystals that detonate spontaneously due to internal strain. Industrial uses Ketone peroxides, including acetone peroxide and methyl ethyl ketone peroxide, find application as initiators for polymerization reactions, e.g., silicone or polyester resins, in the making of fiberglass-reinforced composites. For these uses, the peroxides are typically in the form of a dilute solution in an organic solvent; methyl ethyl ketone peroxide is more common for this purpose, as it is stable in storage. Acetone peroxide is used as a flour bleaching agent to bleach and "mature" flour. 
Acetone peroxides are unwanted by-products of some oxidation reactions such as those used in phenol syntheses. Due to their explosive nature, their presence in chemical processes and chemical samples creates potential hazardous situations. For example, triacetone peroxide is the major contaminant found in diisopropyl ether as a result of photochemical oxidation in air. Accidental occurrence at illicit MDMA laboratories is possible. Numerous methods are used to reduce their appearance, including shifting pH to more alkaline, adjusting reaction temperature, or adding inhibitors of their production. Use in improvised explosive devices TATP has been used in bomb and suicide attacks and in improvised explosive devices, including the London bombings on 7 July 2005, where four suicide bombers killed 52 people and injured more than 700. It was one of the explosives used by the "shoe bomber" Richard Reid in his 2001 failed shoe bomb attempt and was used by the suicide bombers in the November 2015 Paris attacks, 2016 Brussels bombings, Manchester Arena bombing, June 2017 Brussels attack, Parsons Green bombing, the Surabaya bombings, and the 2019 Sri Lanka Easter bombings. Hong Kong police claim to have found of TATP among weapons and protest materials in July 2019, when mass protests were taking place against a proposed law allowing extradition to mainland China. TATP shockwave overpressure is 70% of that for TNT, and the positive phase impulse is 55% of the TNT equivalent. TATP at 0.4 g/cm3 has about one-third of the brisance of TNT (1.2 g/cm3) measured by the Hess test. TATP is attractive to terrorists because it is easily prepared from readily available retail ingredients, such as hair bleach and nail polish remover. It was also able to evade detection because it is one of the few high explosives that do not contain nitrogen, and could therefore pass undetected through standard explosive detection scanners, which were hitherto designed to detect nitrogenous explosives. By 2016, explosives detectors had been modified to be able to detect TATP, and new types were developed. Legislative measures to limit the sale of hydrogen peroxide concentrated to 12% or higher have been made in the European Union. A key disadvantage is the high susceptibility of TATP to accidental detonation, causing injuries and deaths among illegal bomb-makers, which has led to TATP being referred to as the "Mother of Satan". TATP was found in the accidental explosion that preceded the 2017 terrorist attacks in Barcelona and surrounding areas. Large-scale TATP synthesis is often betrayed by excessive bleach-like or fruity smells. This smell can even penetrate into clothes and hair in amounts that are quite noticeable; this was reported in the 2016 Brussels bombings. References External links Explosive chemicals Ketals Organic peroxides Organic peroxide explosives Oxygen heterocycles Radical initiators
Acetone peroxide
Chemistry,Materials_science
2,149
415,000
https://en.wikipedia.org/wiki/Edward%20Drinker%20Cope
Edward Drinker Cope (July 28, 1840 – April 12, 1897) was an American zoologist, paleontologist, comparative anatomist, herpetologist, and ichthyologist. Born to a wealthy Quaker family, he distinguished himself as a child prodigy interested in science, publishing his first scientific paper at the age of 19. Though his father tried to raise Cope as a gentleman farmer, he eventually acquiesced to his son's scientific aspirations. Cope had little formal scientific training, and he eschewed a teaching position for field work. He made regular trips to the American West, prospecting in the 1870s and 1880s, often as a member of U.S. Geological Survey teams. A personal feud between Cope and paleontologist Othniel Charles Marsh led to a period of intense fossil-finding competition now known as the Bone Wars. Cope's financial fortunes soured after failed mining ventures in the 1880s, forcing him to sell off much of his fossil collection. He experienced a resurgence in his career toward the end of his life before dying on April 12, 1897. Though Cope's scientific pursuits nearly bankrupted him, his contributions helped to define the field of American paleontology. He was a prodigious writer with 1,400 papers published over his lifetime, although his rivals debated the accuracy of his rapidly published works. He discovered, described, and named more than 1,000 vertebrate species, including hundreds of fishes and dozens of dinosaurs. His proposal for the origin of mammalian molars is notable among his theoretical contributions. Biography Early life Edward Drinker Cope was born on July 28, 1840, the eldest son of Alfred Cope and Hannah, daughter of Thomas Edge, of Chester County, Pennsylvania. He was a distant cousin of historian Gilbert Cope. His middle name, "Drinker", was his paternal grandmother's maiden name, she being daughter of John Drinker, of Philadelphia. The Cope family were of English origin; the first to settle in America, in 1683, was Oliver Cope, a tailor formerly of Avebury, Wiltshire, who was granted two hundred and fifty acres in Delaware. The death of his mother when he was three years old seemed to have had little effect on young Edward, as he mentioned in his letters that he had no recollection of her. His stepmother, Rebecca Biddle, filled the maternal role; Cope referred to her warmly, as well as his younger stepbrother, James Biddle Cope. Alfred, an orthodox member of the Religious Society of Friends (Quakers), operated a lucrative shipping business started by his father, Thomas P. Cope, in 1821. He was a philanthropist who gave money to the Society of Friends, the Philadelphia Zoological Gardens, and the Institute for Colored Youth. Edward was born and raised in a large stone house called "Fairfield", whose location is now within the boundaries of Philadelphia. The of pristine and exotic gardens of the house offered a landscape that Edward was able to explore. The Copes began teaching their children to read and write while very young, and took Edward on trips across New England and to museums, zoos, and gardens. Cope's interest in animals became apparent at a young age, as did his natural artistic ability. Alfred intended to give his son the same education he himself had received. At age nine, Edward was sent to a day school in Philadelphia; at 12, he attended the Friends' Boarding School at Westtown, near West Chester, Pennsylvania. The school was founded in 1799 with fundraising by members of the Society of Friends (Quakers), and provided much of the Cope family's education. 
The prestigious school was expensive, costing Alfred $500 in tuition each year, and in his first year, Edward studied algebra, chemistry, scripture, physiology, grammar, astronomy, and Latin. Edward's letters home requesting a larger allowance show he was able to manipulate his father, and he was, according to author and Cope biographer Jane Davidson, "a bit of a spoiled brat". His letters suggest he was lonely at the school—it was the first time he had been away from his home for an extended period. Otherwise, Edward's studies progressed much like a typical boy—he consistently had "less than perfect" or "not quite satisfactory" marks for conduct from his teachers, and did not work hard on his penmanship lessons, which may have contributed to his often-illegible handwriting as an adult. Edward returned to Westtown in 1855, accompanied by two of his sisters. Biology began to interest him more that year, and he studied natural history texts in his spare time. While at the school, he frequently visited the Academy of Natural Sciences. Edward often obtained bad marks due to quarreling and bad conduct. His letters to his father show he chafed at farm work and betrayed flashes of the temper for which he would later become well known. After sending Edward back to the farm for summer break in 1854 and 1855, Alfred did not return Edward to school after spring 1856. Instead, Alfred attempted to turn his son into a gentleman farmer, which he considered a wholesome profession that would yield enough profit to lead a comfortable life, and improve the undersized Edward's health. Until 1863, Cope's letters to his father continually expressed his yearning for a more professional scientific career than that of a farmer, which he called "dreadfully boring". While working on farms, Edward continued his education on his own. In 1858, he began working part-time at the Academy of Natural Sciences, reclassifying and cataloguing specimens, and published his first series of research results in January 1859. Cope began taking French and German classes with a former Westtown teacher. Though Alfred resisted his son's pursuit of a science career, he paid for his son's private studies. Instead of working the farm his father bought for him, Edward rented out the land and used the income to further his scientific endeavors. Alfred finally gave in to Edward's wishes and paid for university classes. Cope attended the University of Pennsylvania in the 1861 and/or 1862 academic years, studying comparative anatomy under Joseph Leidy, one of the most influential anatomists and paleontologists at the time. Cope asked his father to pay for a tutor in German and French to allow him to read scholarly works in those languages. During this period, he had a job recataloging the herpetological collection at the Academy of Natural Sciences, where he became a member at Leidy's urging. Cope visited the Smithsonian Institution on occasion, where he became acquainted with Spencer Baird, who was an expert in the fields of ornithology and ichthyology. In 1861, he published his first paper on Salamandridae classification; over the next five years he published primarily on reptiles and amphibians. Cope's membership in the Academy of Natural Sciences and American Philosophical Society gave him outlets to publish and announce his work; many of his early paleontological works were published by the Philosophical Society. 
European travels In 1863 and 1864, during the American Civil War, Cope traveled through Europe, taking the opportunity to visit the most esteemed museums and societies of the time. Initially, he seemed interested in helping out at a field hospital, but in letters to his father later on in the war, this aspiration seemed to disappear; instead he considered working in the American South to assist freed African Americans. Davidson suggests Cope's correspondence with Leidy and Ferdinand Hayden, who worked as field surgeons during the war, might have informed Cope of the horrors of the occupation. Edward was involved in a love affair; his father did not approve. Whether Edward or the unnamed woman (whom he at one point intended to marry) broke off the relationship is unknown, but he took the breakup poorly. Biographer and paleontologist Henry Fairfield Osborn attributed Edward's sudden departure for Europe as a method of keeping him from being drafted into the Civil War. Cope did write to his father from London on February 11, 1864, "I shall get home in time to catch and be caught by the new draft. I shall not be sorry for this, as I know certain persons who would be mean enough to say that I have gone to Europe to avoid the war." Eventually, Cope took the pragmatic approach and waited out the conflict. He may have suffered from mild depression during this period, and often complained of boredom. Despite his torpor, Edward proceeded with his tour of Europe, and met with some of the most highly esteemed scientists of the world during his travels through France, Germany, Great Britain, Ireland, Austria, Italy, and Eastern Europe, most likely with introductory letters from Leidy and Spencer Baird. In the winter of 1863, Edward met Othniel Charles Marsh while in Berlin. Marsh, age 32, was attending the University of Berlin. He held two university degrees in comparison to Edward's lack of formal schooling past 16, but Edward had written 37 scientific papers in comparison to Marsh's two published works. While they would later become rivals, the two men appeared to take a liking to each other upon meeting. Marsh led Edward on a tour of the city, and they stayed together for days. After Edward left Berlin, the two maintained correspondence, exchanging manuscripts, fossils, and photographs. Edward burned many of his journals and letters from Europe upon his return to the United States. Friends intervened and stopped Cope from destroying some of his drawings and notes, in what author Url Lanham deemed a "partial suicide". Family and early career When Cope returned to Philadelphia in 1864, his family made every effort to secure him a teaching post as the Professor of Zoology at Haverford College, a small Quaker school where the family had philanthropic ties. The college awarded him an honorary master's degree so he could have the position. Cope even began to think about marriage and consulted his father in the matter, telling him of the girl he would like to marry: "an amiable woman, not over sensitive, with considerable energy, and especially one inclined to be serious and not inclined to frivolity and display—the more truly Christian of course the better—seems to be the most practically the most suitable for me, though intellect and accomplishments have more charm." 
Cope thought of Annie Pim, a member of the Society of Friends, as less a lover than companion, declaring, "her amiability and domestic qualities generally, her capability of taking care of a house, etc., as well as her steady seriousness weigh far more with me than any of the traits which form the theme of poets!" Cope's family approved of his choice, and the marriage took place in July 1865 at Pim's farmhouse in Chester County, Pennsylvania. The two had a single daughter, Julia Biddle Cope, born June 10, 1866. Cope's return to the United States also marked an expansion of his scientific studies; in 1864, he described several fishes, a whale, and the amphibian Amphibamus grandiceps (his first paleontological contribution). During the period between 1866 and 1867, Cope went on trips to western parts of the country. He related to his father his scientific experiences; to his daughter he sent descriptions of animal life as part of her education. Cope found educating his students at Haverford "a pleasure", but wrote to his father that he "could not get any work done." He resigned from his position at Haverford and moved his family to Haddonfield, in part to be closer to the fossil beds of western New Jersey. Due to the time-consuming nature of his Haverford position, Cope had not had time to attend to his farm and had let it out to others, but eventually found he was in need of more money to fuel his scientific habits. Pleading with his father for money to pursue his career, he finally sold the farm in 1869. Alfred apparently did not press his son to continue farming, and Edward focused on his scientific career. He continued his continental travels, including trips to Virginia, Tennessee, and North Carolina. He visited caves across the region. He stopped these cave explorations after an 1871 trip to the Wyandotte Caves in Indiana, but remained interested in the subject. Cope had visited Haddonfield many times in the 1860s, paying periodic visits to the marl pits. The fossils he found in these pits became the focus of several papers, including a description in 1868 of Elasmosaurus platyurus and Laelaps. Marsh accompanied him on one of these excursions. Cope's proximity to the beds after moving to Haddonfield made more frequent trips possible. The Copes lived comfortably in a frame house backed by an apple orchard. Two maids tended the estate, which entertained a number of guests. Cope's only concern was for more money to spend on his scientific work. The 1870s were the golden years of Cope's career, marked by his most prominent discoveries and rapid flow of publications. Among his descriptions were the therapsid Lystrosaurus (1870), the archosauromorph Champsosaurus (1876), and the sauropod Amphicoelias (1878), possibly the largest dinosaur ever discovered. In the period of one year, from 1879 to 1880, Cope published 76 papers based on his travels through New Mexico and Colorado, and on the findings of his collectors in Texas, Kansas, Oregon, Colorado, Wyoming, and Utah. During the peak years, Cope published around 25 reports and preliminary observations each year. The hurried publications led to errors in interpretation and naming—many of his scientific names were later canceled or withdrawn. In comparison, Marsh wrote and published less frequently and more succinctly—his works appeared in the widely read American Journal of Science, which led to faster reception abroad, and Marsh's reputation grew more rapidly than Cope's. 
In autumn 1871, Cope began prospecting farther west to the fossil fields of Kansas. Leidy and Marsh had been to the region earlier, and Cope employed one of Marsh's guides, Benjamin Mudge, who was in want of a job. Cope's companion Charles Sternberg described the lack of water and good food available to Cope and his helpers on these expeditions. Cope would suffer from a "severe attack of nightmare" in which "every animal of which we had found trace during the day played with him at night ... sometimes he would lose half the night in this exhausting slumber." Nevertheless, Cope continued to lead the party from sunrise to sunset, sending letters to his wife and child describing his finds. The severe desert conditions and Cope's habit of overworking himself till he was bedridden caught up with him, and in 1872, he broke down from exhaustion. Cope maintained a regular pattern of summers spent prospecting and winters writing up his findings from 1871 to 1879. Throughout the decade, Cope traveled across the West, exploring rocks of the Eocene in 1872 and the Titanothere Beds of Colorado in 1873. In 1874, Cope was employed with the Wheeler Survey, a group of surveys led by George Montague Wheeler that mapped parts of the United States west of the 100th meridian. The survey traveled through New Mexico, whose Puerco formations, he wrote to his father, provided "the most important find in geology I have ever made". The New Mexico bluffs contained millions of years of formation and subsequent deformation, and were in an area which had not been visited by Leidy or Marsh. Being part of the survey had other advantages; Cope was able to draw on fort commissaries and defray publishing costs. While there was no salary, his findings would be published in the annual reports the surveys printed. Cope brought Annie and Julia along on one such survey, and rented a house for them at Fort Bridger, but he spent more of his own money on these survey trips than he would have liked. Alfred died December 4, 1875, and left Edward with an inheritance of nearly a quarter of a million dollars. Alfred's death was a blow to Cope; his father was a constant confidant. The same year marked a suspension of much of Cope's field work and a new emphasis on writing up discoveries of the previous years. His chief publication of the time, The Vertebrata of the Cretaceous Formations of the West, was a collection of 303 pages and 54 illustration plates. The memoir summarized his experiences prospecting in New Jersey and Kansas. Cope now had the finances to hire multiple teams to search for fossils for him year-round and he advised the Philadelphia Centennial Exhibition on their fossil displays. Cope's studies of marine reptiles of Kansas closed in 1876, opening a new focus on terrestrial reptiles. The same year, Cope moved from Haddonfield to 2100 and 2102 Pine Street in Philadelphia. He converted one of the two houses into a museum where he stored his growing collection of fossils. Cope's expeditions took him across Kansas, Colorado, New Mexico, Wyoming, and Montana. His initial journey into the Clarendon beds of Upper Miocene and Lower Pliocene of Texas led to an affiliation with the Geological Survey of Texas. Cope's papers on the region constitute some of his most important paleontological contributions. In 1877, he purchased half the rights to the American Naturalist to publish the papers he produced at a rate so high, Marsh questioned their dating. 
Cope returned to Europe in August 1878 in response to an invitation to join the British Association for the Advancement of Science's Dublin meeting. He was warmly welcomed in England and France, and met with the distinguished paleontologists and archeologists of the period. Marsh's attempts to sully Cope's reputation had made little impact on anyone save paleontologist Thomas Henry Huxley, who according to Osborn, "alone treated [Cope] with coolness". Following the Dublin meeting, Cope spent two days with the French Association for the Advancement of Science. At each gathering, Cope exhibited dinosaur restorations by Philadelphia colleague John A. Ryder and various charts and plates from geological surveys of the 1870s led by Ferdinand Vandeveer Hayden. He returned to London on October 12, meeting with anatomist Richard Owen, ichthyologist Albert Günther, and paleontologist H. G. Seeley. While in Europe, Cope purchased a great collection of fossils from Argentina. Cope never found time to describe the collection and many of the boxes remained unopened until his death. Bone Wars Cope's relations with Marsh turned into a competition for fossils between the two, known today as the Bone Wars. The conflict's seeds began upon the men's return to the United States in the 1860s, although Cope named Colosteus marshii for Marsh in 1867, and Marsh returned the favor, naming Mosasaurus copeanus for Cope in 1869. Cope introduced his colleague to the marl pit owner Albert Vorhees when the two visited the site. Marsh went behind Cope's back and made a private agreement with Vorhees: any fossils that Vorhees's men found were sent back to Marsh at New Haven. When Marsh was at Haddonfield examining one of Cope's fossil finds—a complete skeleton of a large aquatic plesiosaur, Elasmosaurus, that had four flippers and a long neck—he commented that the fossil's head was on the wrong end, evidently stating that Cope had put the skull at the end of the vertebrae of the tail. Cope was outraged and the two argued for some time until they agreed to have Leidy examine the bones and determine who was right. Leidy came, picked up the head of the fossil and put it on the other end. Cope was horrified since he had already published a paper on the fossil with the error at the American Philosophical Society. He immediately tried to buy back the copies, but some remained with their buyers (Marsh and Leidy kept theirs). The whole ordeal might have passed easily enough had Leidy not exposed the cover-up at the next society meeting, not to alienate Cope, but only in response to Cope's brief statement where he never admitted he was wrong. Cope and Marsh would never talk to each other amicably again, and by 1873, open hostility had broken out between them. The rivalry between the two increased towards the latter half of the 1870s. In 1877, Marsh received a letter from Arthur Lakes, a schoolteacher in Golden, Colorado. Lakes had been hiking in the mountains near the town of Morrison with his friend, H. C. Beckwith, looking for fossilized leaves in the Dakota sandstone. Instead, the pair found large bones embedded in the rock. Lakes wrote that the bones were "apparently a vertebra and a humerus bone of some gigantic saurian." While Lakes sent Marsh some 1,500 pounds of bone, he also sent Cope some of the specimens. Marsh published his finds first, and having been paid $100 for the finds Lakes wrote to Cope that the samples should be forwarded to Marsh. Cope was offended by the slight. 
Meanwhile, Cope received bones from school superintendent O.W. Lucas in March 1877 from Cañon City; the remains were of a dinosaur even bigger than Lakes' that Marsh had described. Word that Lakes had notified Cope of his finds galvanized Marsh into action. When Marsh heard from Union Pacific Railroad workers W.E. Carlin and W.H. Reed about a vast boneyard northwest of Laramie in Como Bluff, Marsh sent his agent, Samuel Wendell Williston, to take charge of the digging. Cope, in response, learned of Carlin and Reed's discoveries and sent his own men to find bones in the area. The two scientists attempted to sabotage each other's progress. Cope was described as a genius and what Marsh lacked in intelligence, he easily made up for in connections—Marsh's uncle was George Peabody, a rich banker who supported Marsh with money, and a secure position at the Peabody Museum. Marsh lobbied John Wesley Powell to act against Cope and attempted to persuade Hayden to "muzzle" Cope's publishing. Both men tried to spy on the other's whereabouts and attempted to offer their collectors more money in the hopes of recruiting them to their own side. Cope was able to recruit David Baldwin in New Mexico and Frank Williston in Wyoming from Marsh. Cope and Marsh were extremely secretive as to the source of their fossils. When Henry Fairfield Osborn, at the time a student at Princeton, visited Cope to ask where to travel to look for fossils in the West, Cope politely refused to answer. When Cope arrived back in the United States after his tour of Europe in 1878, he had nearly two years of fossil findings from Lucas. Among these dinosaurs was Camarasaurus, one of the most recognizable dinosaur recreations of the time. The summer of 1879 took Cope to Salt Lake City, San Francisco, and north to Oregon, where he was amazed at the rich flora and the blueness of the Pacific Ocean. In 1879, the United States Congress consolidated the various government survey teams into the United States Geological Survey with Clarence King as its leader. This was discouraging to Cope because King named Marsh, an old college friend, as the chief paleontologist. The period of Cope's and Marsh's paleontological digs in the American West spanned from 1877 to 1892, by which time both men exhausted much of their financial resources. Later years The 1880s proved disastrous for Cope. Marsh's close association with the Geological Survey gave him the resources to employ 54 staff members over the course of ten years. His teaching position at Yale meant he had guaranteed access to the American Journal of Science for publication. Cope had his interest in the Naturalist, but it drained him of funds. After Hayden was removed from the survey, Cope lost his source of government funding. His fortune was not enough to support his rivalry, so Cope invested in mining. Most of his properties were silver mines in New Mexico; one mine yielded an ore vein worth $3 million in silver chloride. Cope visited the mines each summer from 1881 to 1885, taking the opportunity to supervise or collect other minerals. For a while he made good money, but the mines stopped producing and by 1886 he had to give up his now-worthless stocks. The same year he received a teaching position at the University of Pennsylvania. He continued to travel west, but realized he would not be able to best Marsh in cornering the market for bones; he had to release the collectors he had hired and sell his collections. During this period, he published 40 to 75 papers each year. 
With the failure of his mines, Cope began searching for a job, but was turned down at the Smithsonian and American Museum of Natural History. He turned to giving lectures for hire and writing magazine articles. Each year, he lobbied Congress for an appropriation with which to finish his work on "Cope's Bible", a volume on Tertiary vertebrates, but was continually turned down. Rather than work with Powell and the survey, Cope tried to inflame sentiment against them. At Marsh's urging, Powell pushed for Cope to return specimens he had unearthed during his employment under the government surveys. This was an outrage to Cope, who had used his own money while working as a volunteer. In response, Cope went to the editor of the New York Herald and promised a scandalous headline. Since 1885, Cope had kept an elaborate journal of mistakes and misdeeds that both Marsh and Powell had committed over the years. From scientific errors to publishing mistakes, he had them written down in a journal he kept in the bottom drawer of his Pine Street desk. Cope sought out Marsh's assistants, who complained of being denied access and credit by their employer and of being chronically underpaid. Reporter William Hosea Ballou ran the first article on January 12, 1890, in what would become a series of newspaper debates between Marsh, Powell, and Cope. Cope attacked Marsh for plagiarism and financial mismanagement, and attacked Powell for his geological classification errors and misspending of government-allocated funds. Marsh and Powell published their own side of the story and, in the end, little changed. No congressional hearing was created to investigate Powell's alleged misallocation of funds, while Cope and Marsh were not held responsible for any mistakes. Indirectly, however, the attacks may have been influential in Marsh's fall from power in the survey. Due to pressure from Powell over bad press, Marsh was removed from his position for the government surveys. Cope's relations with the president of the University of Pennsylvania soured, and the entire funding for paleontology in the government surveys was pulled. Cope took his sinking fortunes in stride. In writing to Osborn about the articles, he laughed at the outcome, saying, "It will now rest largely with you whether or not I am supposed to be a liar and am actuated by jealousy and disappointment. I think Marsh is impaled on the horns of Monoclonius sphenocerus." Cope was well aware of his enemies and was carefree enough to name a species after a combination of "Cope" and "hater", Anisonchus cophater. Through his years of financial hardship, he was able to continue publishing papers—his most productive years were 1884 and 1885, with 79 and 62 papers published, respectively. The 1880s marked the publication of two of the best-known fossil taxa described by Cope: the pelycosaur Edaphosaurus in 1882 and the early dinosaur Coelophysis in 1889. In 1889, he succeeded Leidy, who had died the previous year, as professor of zoology at the University of Pennsylvania. The small yearly stipend was enough for Cope's family to move back into one of the townhouses he had been forced to relinquish earlier. In 1892, Cope (then 52 years old) was granted expense money for field work from the Texas Geological Survey. 
With his finances improved, he was able to publish a massive work on the Batrachians of North America, which was the most detailed analysis and organization of the continent's frogs and amphibians ever mastered, and the 1,115-page The Crocodilians Lizards and Snakes of North America. In the 1890s, his publication rate increased to an average of 43 articles a year. His final expedition to the West took place in 1894, when he prospected for dinosaurs in South Dakota and visited sites in Texas and Oklahoma. The same year, Julia was married to William H. Collins, a Haverford astronomy professor. The couple's ages—Julia was 28 and the groom 35—were past the conventions of Victorian marriage. After their European honeymoon, the couple returned to Haverford. While Annie moved to Haverford, as well, Cope did not. His official reason was the long commute and late lectures he gave in Philadelphia. In private correspondence, however, Osborn wrote that the two had essentially separated, though they remained on amiable terms. Cope sold his collections to the American Museum of Natural History in 1895; his set of 10,000 American fossil mammals sold for $32,000, lower than Cope's asking price of $50,000. The purchase was financed by the donations from New York's high society. Cope sold three other collections for $29,000. While his collection contained more than 13,000 specimens, Cope's fossil hoard was still much smaller than Marsh's collection, valued at over a million dollars. The University of Pennsylvania bought part of Cope's ethnological artifact collection for $5,500. The Academy of Natural Sciences, Philadelphia's foremost museum, did not bid on any of Cope's sales due to bad blood between Cope and the museum's leaders; as a result, many of Cope's major finds left the city. Cope's proceeds from the sales allowed him to rehire Sternberg to prospect for fossils on his behalf. Death In 1896, Cope began suffering from a gastrointestinal illness he said was cystitis. His wife cared for him in Philadelphia when she was able; at other times, Cope's university secretary, Anna Brown, tended to him. Cope at this time lived in his Pine Street museum and rested on a cot surrounded by his fossil finds. Cope often prescribed himself medications, including large amounts of morphine, belladonna, and formalin, a substance based on formaldehyde used to preserve specimens. Osborn was horrified by Cope's actions and made arrangements for surgery, but the plans were put on hold after a temporary improvement in Cope's health. Cope went to Virginia looking for fossils, became ill again, and returned to his home very weak. Osborn visited Cope on April 5, inquiring about Cope's health, but the sick paleontologist pressed his friend for his views on the origin of mammals. Word of Cope's illness spread, and he was visited by friends and colleagues; even in a feverish condition Cope delivered lectures from his bed. Cope died on April 12, 1897, 16 weeks short of his 57th birthday. Sternberg, still prospecting for Cope that spring, was woken by a liveryman who relayed word from Annie that Cope had died three days earlier. Sternberg wrote in his memoirs, "I had lost friends before, and I had known what it was to bury my own dead, even my firstborn son, but I had never sorrowed more deeply than I did over the news." Cope's Quaker funeral consisted of six men: Osborn, his colleague William Berryman Scott, Cope's friend Persifor Frazer, son-in-law Collins, Horatio Wood, and Harrison Allen. 
The six sat around Cope's coffin among the fossils and Cope's pets, a tortoise and a Gila monster, for what Osborn called "a perfect Quaker silence ... an interminable length of time." Anticipating the quiet, Osborn had brought along a Bible and read an excerpt from the Book of Job, ending by saying, "These are the problems to which our friend devoted his life." The coffin was loaded on a hearse and carried to a gathering at Fairfield; much of the gathering was spent in silence. After the coffin was removed, the assembled began talking. Frazer recalled that each person remembered Cope differently, and "Few men succeeded so well in concealing from anyone ... all the sides of his multiform character." Osborn, intending to follow the coffin to the graveyard, was instead pulled aside by Collins and taken to the reading of Cope's will—Osborn and Cope's brother-in-law John Garrett were named executors. Cope gave his family a choice of his books, with the remainder to be sold or donated to the University of Pennsylvania. After debts were handled, Cope left small bequests to friends and family—Anna Brown and Julia received $5000 each, while the remainder went to Annie. Cope's estate was valued at $75,327, not including additional revenue raised by sales of fossils to the American Museum of Natural History, for a total of $84,600. Some specimens preserved in alcohol made their way to the Academy of Natural Sciences, including a few Gordian worms. Cope insisted through his will that no graveside service or burial be held; he had donated his body to science. He issued a final challenge to Marsh at his death: he had his skull donated to science so his brain could be measured, hoping his brain would be larger than that of his adversary; at the time, brain size was thought to be the true measure of intelligence. Marsh never accepted the challenge. Osborn listed Cope's cause of death as uremic poisoning, combined with a large prostate, but the true cause of death is unknown. Many believed Cope had died of syphilis contracted from the women with whom he fraternized during his travels. In 1995, Davidson gained permission to have the skeleton examined by a medical doctor at the university. Dr. Morrie Kricun, a professor of radiology, concluded no evidence of bony syphilis was found on Cope's skeleton. Public mentions of Cope's death were relatively slight. The Naturalist ran four photographs, a six-page obituary by editor J. S. Kingsley, and a two-page remembrance by Frazer. The National Academy of Sciences' official memoir was submitted years later and written by Osborn. The American Journal of Science devoted six paragraphs to Cope's passing, and incorrectly gave his age as 46. Cope was outlived by his rival Marsh, who was suffering poor health. Personality Julia assisted Osborn in writing a biography of her father, titled Cope: Master Naturalist. She would not comment on the name of the woman with whom her father had had an affair prior to his first European travel. Julia is believed to have burned any of the scandalous letters and journals Cope had kept, but many of his friends were able to give their recollections of the scandalous nature of some of Cope's unpublished routines. Charles R. Knight, a former friend called, "Cope's mouth the filthiest, from hearsay that in [Cope's] heyday no woman was safe within five miles of him." 
As Julia was the major financier behind The Master Naturalist, she wanted to keep her father's name in good standing and refused to comment on any misdeeds her father might have committed. Cope was described by zoologist Henry Weed Fowler as "a man of medium height and build, but always impressive with his great energy and activity". To him, Fowler wrote, "[Cope] was both genial and always interesting, easily approachable, and both kindly and helpful." Cope's affability during visits to the Academy of Natural Sciences to compare specimens was later recalled by his colleague Witmer Stone: "I have often seen him busily engaged in such comparisons, all the while whistling whole passages from grand opera, or else counting the scales on the back of a lizard, while he conversed in a most amusing manner with some small street urchin who had drifted into the museum and was watching in awe with eyes and mouth wide open." His self-taught nature, however, meant that he was largely hostile to bureaucracy and politics. He had a famous temper; one friend called Cope a "militant paleontologist". Despite his faults, he was generally well liked by his contemporaries. American paleontologist Alfred Romer wrote that, "[Cope's] little slips from virtue were those we might make ourselves, were we bolder". Cope was raised as a Quaker, and was taught that the Bible was literal truth. Although he never confronted his family about their religious views, Osborn writes that Cope was at least aware of the conflict between his scientific career and his religion. Osborn writes: "If Edward harbored intellectual doubts about the literalness of the Bible ... he did not express them in his letters to his family but there can be little question ... that he shared the intellectual unrest of the period." Lanham writes that Cope's religious fervor (which seems to have subsided after his father's death) was embarrassing to even his devout Quaker associates. Biographer Jane Davidson believes that Osborn overstated Cope's internal religious conflicts. She ascribes Cope's deference to his father's beliefs as an act of respect or a measure to retain his father's financial support. Frazer's reminiscences about his friend suggest Cope often told people what they wanted to hear, rather than his true views. Views As a young man, Cope read Charles Darwin's Voyage of a Naturalist, which had little effect on him. The only comment about Darwin's book recorded by Cope was that Darwin discussed "too much geology" from the account of his voyage. Due to his background in taxonomy and paleontology, Cope focused on evolution in terms of changing structure, rather than emphasizing geography and variation within populations as Darwin had. Over his lifetime, Cope's views on evolution shifted. His early views held that while Darwin's natural selection may affect the preservation of superficial characteristics in organisms, natural selection alone could not explain the formation of genera. Cope's suggested mechanism for this action was a "steady progressive development of organization" through what Cope termed "a continual crowding backward of the successive steps of individual development". His beliefs later evolved to one with an increased emphasis on continual and utilitarian evolution with less involvement of a Creator. He became one of the founders of the Neo-Lamarckism school of thought, which holds that an individual can pass on traits acquired in its lifetime to offspring. 
Although the view has been shown incorrect, it was the prevalent theory among paleontologists in Cope's time. In 1887, Cope published his own "Origin of the Fittest: Essays in Evolution", detailing his views on the subject. He was a strong believer in the law of use and disuse—that an individual will slowly, over time, favor an anatomical part of its body so much that it will become stronger and larger as time progresses down the generations. The giraffe, for example, stretched its neck to reach taller trees and passed this acquired characteristic to its offspring in a developmental phase that is added to gestation in the womb. Cope's Theology of Evolution (1887) argued that consciousness comes from the mind of the universe and governs evolution by directing animals to new goals. According to Sideris (2003), "[Cope] argued that organisms respond to changes in their environments by an exercise of choice. Consciousness itself, he maintained, was the principal force in evolution. Cope credited God with having built into evolution a life force that propelled organisms toward even higher levels of consciousness." Cope's views on human races and sex were influenced by his Lamarckian beliefs, which posited those of European ancestry as more highly evolved than nonwhite groups. In his essays on evolution, he assessed the physiognomies of three "sub-species of human" — termed the Negro, the Mongolian, and the Indo-European — in comparison to those of apes and human embryos, and drew the following conclusion:The Indo-European race is then the highest by virtue of the acceleration of growth in the development of the muscles by which the body is maintained in the erect position (extensors of the leg), and in those important elements of beauty, a well-developed nose and beard. It is also superior in those points in which it is more embryonic than the other races, viz., the want of prominence of the jaws and cheek-bones, since these are associated with a greater predominance of the cerebral part of the skull, increased size of cerebral hemispheres, and greater intellectual power.He believed that if, "a race was not white then it was inherently more ape-like". He was opposed to blacks because of their "degrading vices", believing that the "inferior Negro should go back to Africa" and that their "poor virtue" was inherent. Cope was against the modern view of women's rights, believing in the husband's role as protector; he was opposed to women's suffrage, as he felt they would be unduly influenced by their husbands, and that it would lead to the retrogression of the race as women took on more "masculine" attributes. Legacy In fewer than 40 years as a scientist, Cope published over 1,400 scientific papers, a record that is rivaled by few other scientists. His major works include three volumes: On the Origin of Genera (1867), The Vertebrata of the Tertiary Formations of the West (1884) and "Essays in Evolution". He discovered a total of 56 new dinosaur species during the Bone Wars compared to Marsh's 80. Although Cope is today known as a herpetologist and paleontologist, his contributions extended to ichthyology, in which he cataloged 300 species of fishes and described over 300 species of reptiles over three decades. In total, he discovered and described over 1,000 species of fossil vertebrates and published 600 separate titles. One species of Caribbean snake, Liophis juliae, he named in honor of his daughter Julia. 
The salamander Dicamptodon copei , the dinosaur Drinker nisti , the lizards Alopoglossus copii , Gambelia copeii , Plestiodon copei , Sepsina copei ,Sphaerodactylus copei , the snakes Thamnophis copei , Aspidura copei , Cemophora coccinea copei , Coniophanes imperialis copei ,Dipsas copei , Cope's gray treefrog Hyla chrysoscelis Cope, 1880, and the splash tetra genus Copella are among the many taxa named in honor of Cope. Currently, 21 fish species named copei are distributed among 11 families. Cope lent his name to the Journal of the American Society of Ichthyologists and Herpetologists (nee Copeia) from 1913 to 2020. Cope's Pine Street home is recognized as a national landmark. Cope's remains are still kept as scientific specimens. His brain was donated to the American Anthropometric Society and is still preserved in alcohol at the Wistar Institute. His skull is at the University of Pennsylvania Museum of Archaeology and Anthropology. His ashes were placed at the institute with those of Leidy, while his bones were extracted and kept in a locked drawer to be studied by anatomy students. In 1907, Edward Anthony Spitzka published a paper of his analysis of six brains at the American Anthropometric Society, including Cope's with additional analysis of Cope's skull. In 1993, the American photographer Louie Psihoyos loaned Cope's skull and prominently incorporated it into his book Hunting Dinosaurs (1994). Psihoyos and his assistant, John Knoebber, traveled around America with Cope's skull, taking pictures of the skull in different places on the "adventure" and showing it to various modern paleontologists. Among the notable incidents during the journey was Robert T. Bakker pouring pasta into the skull to measure Cope's brain size. See also Port Kennedy Bone Cave List of species in Port Kennedy Bone Cave :Category:Taxa named by Edward Drinker Cope Footnotes References Bibliography Beolens, Bo; Watkins, Michael; Grayson, Michael (2011). The Eponym Dictionary of Reptiles. Baltimore: Johns Hopkins University Press. xiii + 296 pp. . Jackson, J.R. & Quinn, A. (2023), "Post-Darwinian Fish Classifications: Theories and Methodologies of Günther, Cope, and Gill", History and Philosophy of the Life Sciences, Vol.45, No.4, (2023), pp. 1–37. Selected works On The Origin of Genera; From the Proceedings of the Academy of Natural Sciences of Philadelphia, Oct. 1868 (Merrihew & Son, 1869) The Vertebrata of the Tertiary Formations of the West (Government Printing Office, 1884). "The Origin of the Fittest: Essays on Evolution" (Nature, 1887). Internet Archive / Archive.org The Crocodilians, Lizards and Snakes of North America (Government Printing Office, 1900). External links Profile of Edward Drinker Cope at the Niagara Falls Museum Collection Edward Drinker Cope obituary, 1897, View works by Edward Drinker Cope online at the Biodiversity Heritage Library. National Academy of Sciences Biographical Memoir 1840 births 1897 deaths 19th-century American zoologists American taxonomists American anatomists American herpetologists American ichthyologists American paleontologists American Quakers American expatriates in West Germany American expatriates in England Lamarckism Members of the American Anthropometric Society Orthogenesis People from Haddonfield, New Jersey Scientists from Philadelphia Theistic evolutionists
Edward Drinker Cope
Biology
9,383
4,646,810
https://en.wikipedia.org/wiki/TM%2031-210%20Improvised%20Munitions%20Handbook
The TM 31-210 Improvised Munitions Handbook is a 256-page United States Army technical manual intended for the United States Army Special Forces. It was first published in 1969 by the Department of the Army. Like many other U.S. military manuals dealing with improvised explosive devices (IEDs) and unconventional warfare, it was declassified and released into the public domain as a result of provisions such as the Freedom of Information Act (FOIA), and is now freely available to the public in both electronic and printed formats. The manual explains how, in unconventional warfare operations, for logistical or security reasons, it may be impossible or unwise to use conventional military munitions as tools when conducting certain missions. Starting from this consideration, the manual describes the manufacture of various types of ordnance from readily available materials: junk piles, common household chemicals, and supplies purchased from regular stores. The manual was mentioned in news reports by various media after it was seized from people suspected of planning guerrilla or terrorism activities. The manual is one of the best official references on IED manufacturing, and some of the weapons described in it have been used against U.S. troops by foreign forces. For example, the hand-grenade-in-a-can trap was used against U.S. troops in Vietnam. Furthermore, the manual was found in many abandoned safe houses of various Islamist groups, for example in Kabul, Mazar-e Sharif and Kandahar (Afghanistan), as well as in destroyed training camps. The TM 31-210 manual has been the subject of debate regarding the repercussions of easy public access to information on the artisanal manufacture of weapons and explosives. The manual has also been mentioned in scientific literature, used as a reference for works dealing with topics such as ballistics, forensic investigations, security engineering and counterterrorism. Sections The TM 31-210 manual consists of seven main sections: Explosives and propellants (including igniters) Mines and grenades Small arms weapons and ammunition Mortars and rockets Incendiary devices Fuses, detonators & delay mechanisms Miscellaneous The miscellaneous section deals with the production of various types of trigger mechanisms (pressure, pressure release, traction, etc.), a makeshift precision balance, electric batteries, makeshift bulletproof barricades and more. The manual ends with two appendices, which briefly deal with the properties of some primary and secondary explosives. Popular culture The TM 31-210 manual appeared as an "Easter egg" in the 1995 CGI animated film Toy Story. In the scene where Woody is trapped under a blue plastic box in Sid's bedroom, a document titled "TM 31-210 Improvised Interrogation Handbook" is visible behind him, a clear reference to the actual document. References External links 1969 books Bombs Explosives Explosive weapons Guerrilla warfare Improvised explosive devices Improvised weapons Special Operations Forces of the United States United States Army United States Army Field Manuals Vietnam War
TM 31-210 Improvised Munitions Handbook
Chemistry
601
40,095,376
https://en.wikipedia.org/wiki/Panvalet
Computer Associates Panvalet (also known as CA-Panvalet) is a revision control and source code management system originally developed by Pansophic Systems for mainframe computers such as the IBM System z and IBM System/370 running the z/OS and z/VSE operating systems. CA-PAN/LCM is a similar product for PCs. Overview Panvalet can be used to manage program source code, JCL, macros/commands for utilities such as Easytrieve, and object module files. History Panvalet was developed by Pansophic Systems in 1969 as a program to store and manage computer program source code on direct-access storage devices. Before Panvalet, code was saved on paper punch cards, typically 500 to 3,000 cards per program and often 1,000,000 or more per data center. Cards were bulky, difficult to store and transport, difficult and costly to back up, and prone to catastrophic errors since one misplaced card could prevent a program from running correctly. Pansophic began selling the program in 1970 at a price of $2,880 per copy. It was immediately successful. In 1978, it was reported that Panvalet, at the time a product of Pansophic Systems, Inc., was in use at over 3,000 sites. Throughout much of its existence, the main competitor to Panvalet was The Librarian product from Applied Data Research, which had roughly the same number of installations as Panvalet. As recollected by Piscopo, "Panvalet and Librarian basically divided the program library market between the two of them.... Virtually everyone ended up with one or the other of the products." Computer Associates acquired Panvalet in 1991 when it purchased Pansophic Systems for $390M. Broadcom acquired Panvalet in 2018 when it purchased Computer Associates. See also CA Technologies References External links Configuration management Proprietary version control systems IBM mainframe software CA Technologies
Panvalet
Engineering
399
57,483,158
https://en.wikipedia.org/wiki/Bunbury%20woodchip%20bombing
The Bunbury woodchip bombing was an unprecedented and politically motivated act of property destruction that took place on 19 July 1976 at a woodchip export terminal in Bunbury, Western Australia. More than 1,000 sticks of gelignite were planted by two environmental protesters, with the resulting partial detonation causing an estimated $300,000 in damages; two charges failed to detonate, which limited the bombing's impact. No injuries were reported, although a security guard was held at gunpoint and shrapnel impacted a nearby residential area. The intention of the bombing was to prevent the export of woodchips from Western Australian old growth forests for 18 months. This act of protest is now largely unknown outside of Western Australia but was considered to have been a serious setback for the emerging environmental movement at the time, despite the perpetrators being unaffiliated with any environmental organisation. Background The introduction of the wood chip industry to Western Australia in the 1960s initially attracted less opposition than it did in the Eastern states. This started to change in the 1970s. When woodchipping began around the town of Manjimup in 1975, local residents Michael Haabjoern and John Chester felt they had no recourse to legal means of stopping the destruction of the old growth forests. The bombing On the morning of 19 July 1976, Michael Haabjoern and John Chester arrived in Bunbury, having driven a stolen car from Manjimup. The car contained more than 1,000 sticks of gelignite, fuses, detonators and timing devices stolen from a Perth explosives magazine. The motive of the protesters was to destroy the port's loading facilities and prevent the export of woodchips from Western Australia's old-growth karri and jarrah forests for at least 18 months. During that time they hoped to create a groundswell of opposition to the woodchipping industry, with laws passed to prevent it in future. Wearing stocking masks and armed with a .303 rifle, the two men cut through the fencing and held up the watchman, Trevor Morritt, at gunpoint. Haabjoern set the charges at three critical points of the machinery and informed the watchman of their plans whilst also checking whether any personnel would be passing through the site. They then left the facility in Morritt's car, taking him with them. Morritt would later describe Chester as agitated and a "loose cannon", whereas Haabjoern was "fairly cool and calm". On the access road they placed two signs reading "Danger Separate Explosives" and "Danger Charges Ahead" and left the watchman 9 km away in Australind. At 5:25 am the first charge detonated, causing damage estimated at $300,000 to the gantry structure and sending steel pieces over the Leschenault inlet into a housing estate, smashing windows. The remaining two charges failed to detonate. Aftermath The facilities were not critically damaged and the port was able to resume exports with barely an interruption. The remaining explosives were defused by bomb expert Jack Billing, who was flown to Bunbury from Perth. Meanwhile, police located Haabjoern and Chester within a week. In court, the two men pleaded guilty to the charges whilst justifying their actions with the argument that limited violence against industrial equipment would prevent greater violence against the unique environmental heritage of Western Australia's native forests. The two men were sentenced to seven years imprisonment, with a minimum term of ten months. 
Justice Jones said that he believed that the two men had been motivated by a higher ideal and were unlikely to commit the same offence again. This attracted criticism for being too lenient, and an appeal by the Crown led to an increase in the minimum term to three and a half years. Neither Haabjoern nor Chester was affiliated with any environmental organisation; however, the bombing allowed opponents of the emerging movement to characterize all protesters as violent extremists. GW Kelly of the Forest Products Association said that the incident had been caused by extremists who "inflamed emotions with a campaign of bitterness and hatred, divorced almost entirely from the truth." Premier Charles Court considered the bombing to be an act of terrorism, whereas the WA Police treated it as "just" a criminal act. Both the Campaign to Save Native Forests and the South West Forests Defence Foundation distanced themselves from the bombing and reaffirmed a commitment to protest by lawful means. Chester would later escape from Geraldton Regional Prison on 3 February 1978. Inquiries were made throughout Australia, with Sydney detectives suggesting he could be a suspect in the Hilton Hotel bombing. WA Police, however, thought it unlikely that he had left the state, and a manhunt took place in and around Manjimup, where Chester was located on 13 March. Chester once again escaped from custody the following day and went into hiding in the bush. Chester made contact with a journalist and issued threats against Premier Charles Court and mining magnate Lang Hancock, and said he might blow up a woodchip train. References 1976 crimes in Australia Crime in Western Australia Eco-terrorism Timber industry in Western Australia Bombing
Bunbury woodchip bombing
Chemistry
1,038
24,045,080
https://en.wikipedia.org/wiki/Calcium%20bromate
Calcium bromate, Ca(BrO3)2, is a calcium salt of bromic acid. It is most commonly encountered as the monohydrate, Ca(BrO3)2•H2O. It can be prepared by reacting calcium hydroxide with sodium bromate or calcium sulfate with barium bromate. Above 180 °C, calcium bromate decomposes to form calcium bromide and oxygen. In theory, electrolysis of calcium bromide solution will also yield calcium bromate. It is used as a bread dough and flour "improver" or conditioner (E number E924b) in some countries. References Calcium compounds Bromates Oxidizing agents
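As a point of clarification for the thermal decomposition described in the Calcium bromate entry above, the balanced equation consistent with the stated products (calcium bromide and oxygen) can be written in the same plain notation used elsewhere in this document; it is supplied here as an illustration rather than taken from the cited references:

Ca(BrO3)2 → CaBr2 + 3 O2

Each side carries six oxygen atoms, so the equation balances with whole-number coefficients.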
Calcium bromate
Chemistry
139
36,210,519
https://en.wikipedia.org/wiki/Recombinant%20factor%20VIIa
Recombinant factor VIIa (rfVIIa) is a form of blood factor VII that has been manufactured via recombinant technology. It is administered via an injection into a vein. It is used to treat bleeding episodes in people who have acquired haemophilia, among other indications. There are several dissimilar forms, and biosimilars for each. All forms are activated. The most common side effects with Novoseven include venous thromboembolic events (problems caused by blood clots in the veins), rash, pruritus (itching), urticaria (hives), fever and reduced effectiveness of treatment. The most common side effects with Cevenfacta include injection site discomfort and hematoma (a collection of blood under the skin) as well as injection-related reactions, an increase in body temperature, dizziness and headache. Novoseven was approved for medical use in the European Union in February 1996, and in the United States in March 1999. Medical uses Novoseven is indicated for the treatment of bleeding episodes and for the prevention of bleeding in surgical interventions or invasive procedures in people with acquired hemophilia. Novoseven RT is indicated for the treatment of bleeding episodes and peri-operative management in adults and children with hemophilia A or B with inhibitors, congenital factor VII deficiency, and Glanzmann's thrombasthenia with refractoriness to platelet transfusions, with or without antibodies to platelets, and for the treatment of bleeding episodes and peri-operative management in adults with acquired hemophilia. Sevenfact [coagulation factor VIIa (recombinant)-jncw] is approved for use in the United States and is indicated for the treatment and control of bleeding episodes occurring in adults and adolescents twelve years of age and older with hemophilia A or B with inhibitors (neutralizing antibodies). As of 2012, recombinant factor VIIa is not supported by the evidence for treating most cases of major bleeding. There is a significant risk of arterial thrombosis with its use and thus, other than in those with factor VII deficiency or acquired hemophilia, it should only be given in clinical trials. Recombinant human factor VII, while initially looking promising in intracerebral hemorrhage, failed to show benefit following further study and is no longer recommended. A possible role in severe postpartum hemorrhage has been suggested. In people with hemophilia type A and B who have a deficiency of factors VIII and IX, these two factors are administered for controlling bleeding or as prophylaxis before surgery. However, in some cases patients subsequently develop neutralizing antibodies, called inhibitors, against the drug. These inhibitors often increase over time and inhibit the action of coagulation in the body. Recombinant factor VIIa, which is an activated form of factor VII, bypasses factors VIII and IX and causes coagulation without the need for factors VIII and IX. It may be used in acquired hemophilia patients with higher inhibitor titers. Other indications include use for patients with inherited deficiency of factor VII, and people with Glanzmann's thrombasthenia. Pharmacology Mechanism of action This treatment results in activation of the extrinsic pathway of blood coagulation. Recombinant factor VIIa activates factor X, which starts the clotting process and thereby provides control of the bleeding. 
Because factor VII acts directly on factor X, independently from factors VIII and IX, recombinant factor VIIa can be used to restore haemostasis in their absence or in the presence of inhibitors. Coagulation factor VIIa (recombinant)-jncw Coagulation factor VIIa (recombinant)-jncw (Sevenfact) is expressed in the mammary gland of genetically engineered rabbits and secreted into the rabbits' milk. During purification and processing of the milk, FVII is converted into activated FVII (FVIIa). The recombinant DNA (rDNA) construct in the genetically engineered rabbits used for the production of Sevenfact was approved by the FDA's Center for Veterinary Medicine. The safety and efficacy of coagulation factor VIIa (recombinant)-jncw were determined using data from a clinical study that evaluated 27 patients with hemophilia A or B with inhibitors, which included treatment of 465 mild or moderate, and three severe, bleeding episodes. The study assessed the efficacy of treatment twelve hours after the initial dose was given. The proportion of mild or moderate bleeding episodes treated successfully both with the lower dose of 75 mcg/kg and the higher dose of 225 mcg/kg (requiring no further treatment for the bleeding episode, no administration of blood products and no increase in pain beyond 12 hours from the initial dose) was approximately 86%. The study also included three severe bleeding episodes that were treated successfully with the higher dose. Another study evaluated the safety and pharmacokinetics of three escalating doses of coagulation factor VIIa (recombinant)-jncw in 15 subjects with severe hemophilia A or B with or without inhibitors. Results from this study were used to select the two doses, 75 mcg/kg and 225 mcg/kg, that were evaluated in the study described above. The most common side effects of coagulation factor VIIa (recombinant)-jncw are headache, dizziness, infusion site discomfort, infusion related reaction, infusion site hematoma and fever. Coagulation factor VIIa (recombinant)-jncw is contraindicated in those with known allergy or hypersensitivity to rabbits or rabbit proteins. In 2022, the EU approved eptacog beta (Cevenfacta), which is very similar to jncw. Both are made by the same manufacturer (LFB) and both are produced in rabbit milk. Eptacog beta is almost identical to, and functions like, coagulation factor VII. Society and culture Legal status Novoseven was approved for use in the United States in March 1999, and indicated for the treatment of bleeding episodes in hemophilia A or B patients with inhibitors to Factor VIII or Factor IX. It was approved in October 2006, and indicated for the treatment of bleeding episodes and for the prevention of bleeding in surgical interventions or invasive procedures in patients with acquired hemophilia. Novoseven RT was approved for use in the United States in May 2008 as a room-temperature stable formulation. In January 2010, the label was updated to include a boxed warning on serious thrombotic adverse events associated with the use of Novoseven RT outside labeled indications. In April 2020, coagulation factor VIIa (recombinant)-jncw (Sevenfact) was approved for use in the United States. In May 2022, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Cevenfacta, intended for the treatment of bleeding episodes. 
The applicant for this medicinal product is Laboratoire français du Fractionnement et des Biotechnologies (LFB). Eptacog beta (activated) was approved for medical use in the EU in July 2022. Military use Recombinant factor VIIa was used routinely in severely wounded American troops during the Iraq War, credited with saving many lives but also resulting in a high number of deep venous thromboses and pulmonary emboli, as well as unexpected strokes, heart attacks, and deaths. References Further reading Antihemorrhagics Glycoproteins
Recombinant factor VIIa
Chemistry
1,634
1,572,553
https://en.wikipedia.org/wiki/Shravana
Shravana (Devanagari: श्रवण), also known as Thiruvonam in Tamil and Malayalam (Tamil: திருவோணம், Malayalam: തിരുവോണം), is the 22nd nakshatra (Devanagari: नक्षत्र) or lunar mansion as used in Hindu astronomy, the Hindu calendar and Hindu astrology. It belongs to the constellation Makara (Devanagari: मकर), a legendary sea creature resembling a crocodile, or Capricorn. The name alludes to Shravan, a mythological character who attained repute due to his utmost devotion to his aged and blind parents. Lord Venkateswara of Tirupati and Lord Oppiliappan near Kumbakonam, who married Markandeya Rishi's daughter Bhuvalli, are believed to have been born under this nakshatra in the Bhadrapada maasa. Onam, the biggest festival of Kerala, is celebrated on this nakshatra in the Malayalam month of Chingam. Traditional Hindu given names are determined by which pada (quarter) of a nakshatra the Ascendant/Lagna was in at the time of birth. In the case of Shravana Nakshatra, the given name would begin with the following syllables: Khi (Devanagari: खी) Khu (Devanagari: खू) Khe (Devanagari: खे) Kho (Devanagari: खो) References Nakshatra
Shravana
Astronomy
314
3,651,757
https://en.wikipedia.org/wiki/Weaving%20%28horse%29
Weaving is a behaviour in horses that is classified as a stable vice, in which the horse repetitively sways on its forelegs, shifting its weight back and forth by moving the head and neck side to side. It may also include swaying of the rest of the body and picking up the front legs. Some horses exhibit non-stereotypical weaving, and instead engage in variations on this behavior. Causes Ultimately, the housing conditions of horses are considered to be the cause of stable vices such as weaving. There are no reports of wild horses displaying weaving behaviour, mainly because these horses are in their natural state, i.e. they are not confined or on a schedule. Domestic horses are often housed in stalls (typically 8x8 or 12x12) at night, and are allowed turnout (i.e. time outside) during the day. Horses that are housed in isolation from other horses, or that receive no daily turnout or inadequate turnout, are more at risk of developing stable vices such as weaving. Horses often perform this vice due to stress. Horses tend to weave near the door of their stall, possibly because they desire to leave the stall to move around. Horses also sometimes weave near a window to the aisle or the exterior of the stable, which would provide visual stimulation. Stress during critical periods such as weaning may also contribute to the development of stable vices. However, some horses that have developed the habit will also weave while turned out, signifying that the problem is not solely one of confinement. Many equestrians believe weaving is a learned habit. However, some experts theorize that weaving could more likely develop in horses with a genetic predisposition to it. Thus, there is a debate over whether weaving is a learned behavior that horses pick up by observing another horse that weaves, or if it is an inborn tendency that develops under a certain set of environmental conditions. These two arguments fail to take into account the fact that most behaviours can be both genetically and environmentally influenced. It is possible that both sides are correct to some extent. Horses that exhibit non-stereotypical weaving do not necessarily begin after watching another horse weaving (stereotypical or non-stereotypical), suggesting that horses can begin weaving without learning it from another horse. Some people claim that it is usually safe to allow other horses to see a weaver, unless it is known that the horse may be genetically predisposed (their sire or dam was a weaver). Others feel it is caused by environmental factors, and that other horses in the same setting will pick up the behavior once a single horse starts. However, this may be due to all horses experiencing similar stresses, and thus engaging in similar behavior. Weaving may also be caused by anticipation of predictable events such as feeding or turnout, as the behavior has been observed to increase prior to these events. Negative effects Weaving is generally not a very damaging vice over short periods of time, but horses that are consistent weavers may show abnormal hoof wear and stress on their joints (which can cause lameness). Damage to the stall floor may also occur. The overall value of a horse is not necessarily diminished by its weaving, but the underlying cause of stress or boredom that is causing the behavior should be investigated and rectified to ensure the horse's well-being. Weaving is also linked to weight loss, uneven muscle development, and performance problems. 
Management Like most vices, weaving is a very difficult habit to break, and may not disappear even after the original problem has been resolved. However, there are several ways to manage a weaver and reduce its stress: Allow a weaver to see other horses, even if he is stalled separately. Provide a companion for the horse, if possible. Some options include goats, cats, or chickens. Provide visual stimulation. In a stall, an open window often helps the situation. Keep the horse occupied when stalled. For example, provide continuous access to good-quality hay or a toy. Allow the horse to spend more time outside of its stall. This mimics a horse's natural environment and should reduce stress levels. Hanging a mirror in a stall often helps weaving, because the horse believes there is a nearby horse. This trick is often very effective, and recent studies in the UK have demonstrated that it can reduce weaving by 97%. Note that the mirror should be made from stainless steel to minimize safety concerns. Feed a high-quality, high-fiber hay and/or grain to reduce feeding frustration. Consistent feeding times and quality are important, and finding ways to increase eating time (such as hay nets) can help. Try not to feed wean-age horses concentrated feed (grain), as this can increase stress levels and frustration. Alter stall design if the horse weaves over the door. V-shaped anti-weaving bars prevent weaving; this method is strictly a prevention, and may actually increase the horse's frustration. Increase the horse's exercise, especially if it has limited turnout time during the day. Pay particular attention to lowering levels of stress during critical periods in a horse's life, especially during weaning. Gradual weaning techniques have been shown to reduce the risk of developing stable vices. See also Horse care Horse behavior References Abnormal behaviour in animals Ethology Horse management Horse behavior
Weaving (horse)
Biology
1,073
28,795,715
https://en.wikipedia.org/wiki/Favstar
Favstar (also known as favstar.fm) was an online service that tracked Twitter activity, launched in July 2009. Favstar utilized Twitter's API to rank tweets based on popularity metrics such as Favorites and Retweets. The platform gained popularity among users who wanted to see trending tweets and engage with popular content. Favstar ceased operations on June 19, 2018, when Twitter deprecated the API it relied on for its functionality. This decision was part of a broader overhaul of Twitter's API system, which aimed to limit third-party applications' access to certain features. In a statement regarding the shutdown, Favstar's creator, Tim Haines, expressed that the uncertainty surrounding Twitter's API changes made it impractical for Favstar to continue operating. Following the shutdown of Favstar, many users and developers of other third-party Twitter applications expressed concerns about the impact of Twitter's API changes on their services. The deprecation of the legacy APIs, originally scheduled for June 19, was later postponed to August 16, 2018, allowing some developers additional time to transition to the new system. However, the eventual shutdown of Favstar highlighted the challenges faced by third-party developers in adapting to Twitter's evolving platform policies. References Twitter services and applications Internet properties established in 2009 Internet properties disestablished in 2018
Favstar
Technology
291
5,779,959
https://en.wikipedia.org/wiki/Harmonic%20mixer
The harmonic mixer and subharmonic mixer are types of frequency mixer, which is a circuit that changes one signal frequency to another. The ordinary mixer has two input signals and one output signal. If the two input signals are sinewaves at frequencies f1 and f2, then the output signal consists of frequency components at the sum f1+f2 and difference f1−f2 frequencies. In contrast, the harmonic and subharmonic mixers form sum and difference frequencies at a harmonic multiple of one of the inputs. The output signal then contains frequencies such as f1+kf2 and f1−kf2 where k is an integer. Background The classic frequency mixer is a multiplier. Multiplying two sinewaves produces just the sum and difference frequencies; the input frequencies are suppressed, and, in theory, there are no other heterodyne products. In practice, the multiplier is not perfect, and the input frequencies and other heterodyne products will be present. An actual multiplier is not needed. The significant requirement is a nonlinearity, and at microwave frequencies it is easier to use a nonlinearity rather than an ideal multiplier. A Taylor series expansion of a nonlinearity will show multiplications that give rise to the desired higher order products. Design goals for mixers seek to select the desired heterodyne products and suppress the undesired ones. Common implementations include diode mixers and overdriven diode bridge mixers, in which the drive signal looks like an odd-harmonic waveform (essentially a square wave). Harmonic mixer One classic design for a harmonic mixer uses a step recovery diode (SRD). The mixer's subharmonic input is first amplified to a power level that might be around 1 watt. That signal then drives a step recovery diode impulse generator circuit that turns the sine wave into something approximating an impulse train. The resulting impulse train has the harmonics of the input sine wave present to a high frequency (such as 18 GHz). The impulse train can then be used with a diode mixer (also called a sampler). The SRD usually has a very high frequency multiplication ratio, and can be used as the basis of a comb receiver, monitoring several harmonically related frequencies at once. This forms the basis of many simple 'bug detectors' where the intention is to detect transmission on any frequency, even if not known in advance. (This is not the same as a 'rake' receiver, which is a correlation device.) When the required frequency multiple is lower, such as doubling, tripling or quadrupling, Schottky diode circuits are more common. The conduction angle can be adjusted by changing drive level or temperature, and determines which part of the I/V curve is used and therefore the relative strengths of the different harmonically related outputs. If an even multiple is desired then an anti-parallel pair of diodes will suppress the odd local oscillator contribution, to the extent that the diodes can be made identical and experience the same source impedance. Unlike a normal mixer, there is a fairly clear optimum drive level, above which the conversion loss increases. A harmonic mixer can be used to avoid the complexity of generating a microwave local oscillator, and is common as a simple and reliable frequency extender to a low frequency design. 
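To make the harmonic-mixing behaviour described in the Harmonic mixer entry above concrete, the following is a minimal numerical sketch, not taken from the article: it models the mixer as a simple multiplier driven by a hard-limited (square-wave-like) LO, so the output spectrum contains products at f_RF ± k·f_LO for odd k. The sample rate, frequencies, and hard-limiting nonlinearity are illustrative assumptions rather than a description of any particular circuit.

```python
import numpy as np

# Illustrative sketch (assumed parameters, not from the article): show that a
# harmonic-rich LO drive produces mixing products at f_RF +/- k*f_LO.
fs = 1_000_000                      # sample rate in Hz (assumption)
t = np.arange(0, 0.02, 1 / fs)      # 20 ms of signal

f_rf, f_lo = 130_000.0, 40_000.0    # arbitrary example frequencies
rf = 0.1 * np.cos(2 * np.pi * f_rf * t)   # weak input signal
lo = np.cos(2 * np.pi * f_lo * t)         # sinusoidal local oscillator

# Hard-limiting the LO approximates the overdriven, "essentially a square wave"
# drive mentioned above; a square wave contains strong odd harmonics of f_lo.
lo_drive = np.sign(lo)

# Simple multiplicative mixer model: the RF input is chopped by the
# harmonic-rich drive, producing components at f_rf +/- k*f_lo for odd k.
out = rf * lo_drive

spectrum = np.abs(np.fft.rfft(out)) / len(out)
freqs = np.fft.rfftfreq(len(out), 1 / fs)

# Print the six strongest spectral lines; expect values near 90 kHz and
# 170 kHz (k = 1), 10 kHz and 250 kHz (k = 3), 70 kHz and 330 kHz (k = 5).
strongest = np.sort(freqs[np.argsort(spectrum)[-6:]])
print(strongest)
```

In an SRD-based design the drive is closer to an impulse train, so useful products extend to much higher k, but the same sum-and-difference structure appears.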
Usage Subharmonic mixers (a particular form of harmonic mixer where the LO is provided at a submultiple of the frequency to be mixed with the incoming signal) are often used in direct-digital, or zero-IF, communications systems in order to eliminate the unwanted effects of LO self-mixing, which occurs in many fundamental frequency mixers. They are also used in frequency synthesizers and network analyzers. A variation on the subharmonic mixer that has two switching stages is used to improve mixer gain in a direct downconversion receiver. The first switching stage mixes a received RF signal to an intermediate frequency that is one-half the received RF signal frequency. The second switching stage mixes the intermediate frequency to baseband. By connecting the two switching stages in series, current is reused and harmonic content from the first stage is fed into the second stage, thereby improving the mixer gain. See also Frequency multiplier Sampling (signal processing) Synthesizer using harmonic mixing References External links http://www.microwaves101.com/encyclopedia/mixerssubharmonic.cfm Just hints at elements of SHM design. Describes some theory, use of antiparallel mixer diodes, odd harmonic selection Frequency mixers Electrical circuits Communication circuits Radio electronics Telecommunication theory
Harmonic mixer
Engineering
940
48,554,735
https://en.wikipedia.org/wiki/Friedrich%20Hecht
Friedrich Hecht (3 August 1903, Vienna – 8 March 1980, Vienna) was an Austrian chemist and writer. Hecht studied chemistry at the University of Vienna and was awarded a PhD in 1928. He was an assistant at the Institute of Chemistry. He wrote science fiction under the pseudonym Manfred Langrenus. He died in Vienna, Austria, in 1980. Even before the Anschluss, Hecht was a member of the then-illegal National Socialist German Workers' Party (NSDAP, the Nazi Party) and the Sturmabteilung (SA) from 1933, and of the Schutzstaffel (SS) from 1934. In 1938, Hecht moved to the Analytical Department of the University of Vienna and achieved habilitation there in 1941. From 1943 to 1950 he was Professor of Microchemistry and Geochemistry at the Graz University of Technology. From 1959 to 1973, he was Associate Professor of Analytical Chemistry and head of the Analytical Institute in Vienna. At Vienna, Hecht was assisted by Edith Kroupa. In 1938, Hecht received the Fritz Pregl Prize for distinguished achievements in chemistry from the Austrian Academy of Sciences. Novels Reich im Mond. Utopisch-wissenschaftlicher Roman aus naher Zukunft und jahrmillionenferner Vergangenheit (Empire in the Moon. A utopian-scientific novel of the near future and of a past millions of years distant), 1951. New edition in 1959 as Reich im Mond. Utopisch-wissenschaftlicher Roman (Empire in the Moon. Utopian-scientific novel). Im Banne des Alpha Centauri (Under the Spell of Alpha Centauri). Roman, 1955. References Ernst Klee: Das Personenlexikon zum Dritten Reich (The People Lexicon of the Third Reich), 2005. Reich im Mond and Im Banne des Alpha Centauri in: Werkführer durch die utopisch-phantastische Literatur (work-guide on utopian-fantastic literature), edited by Franz Rottensteiner and Michael Koseler (loose-leaf collection, publisher: Corian-Verlag, Meitingen). External links About Empire in the Moon 1903 births 1980 deaths Austrian chemists Austrian Nazis Scientists from Vienna University of Vienna alumni Academic staff of the University of Vienna Austrian geochemists Sturmabteilung personnel SS personnel Austrian male writers Austrian science fiction writers
Friedrich Hecht
Chemistry
515
36,805,856
https://en.wikipedia.org/wiki/Kappa%20Pyxidis
Kappa Pyxidis, Latinized from κ Pyxidis, is a single, orange-hued star in the southern constellation of Pyxis. It is visible to the naked eye as a faint point of light with an apparent visual magnitude of +4.62. The star is located approximately 520 light years from the Sun based on parallax, but is drifting closer with a radial velocity of −45 km/s and may come as close as in around 2.6 million years. It is moving through space at the rate of 53.7 km/s relative to the Sun and is following an orbit through the Milky Way galaxy with a large eccentricity of 0.68. This is an aging giant with a stellar classification of K4III, having exhausted the supply of hydrogen at its core then expanded and cooled. At present it has 67 times the radius of the Sun. It is a variable star of uncertain type, changing brightness with an amplitude of 0.0058 in visual magnitude over a period of 8.5 days. The star radiates 927 times the luminosity of the Sun from its bloated photosphere at an effective temperature of 3,931 K. A magnitude 10 visual companion is located at an angular separation of . References K-type giants Pyxidis, Kappa Pyxis Durchmusterung objects 078541 044824 3628
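The figures quoted in the Kappa Pyxidis entry above (radius about 67 solar radii, effective temperature 3,931 K, luminosity 927 times solar) can be cross-checked with the Stefan–Boltzmann relation L/L☉ = (R/R☉)²·(T/T☉)⁴. The short sketch below is illustrative only; the adopted solar effective temperature of 5,772 K is an assumption of the example, not a value from the article.

```python
# Back-of-the-envelope consistency check of the Kappa Pyxidis parameters above.
# Assumes L/L_sun = (R/R_sun)**2 * (T/T_sun)**4 and T_sun = 5772 K (assumption).
R_RATIO = 67.0     # radius in solar radii (from the article)
T_EFF = 3931.0     # effective temperature in kelvin (from the article)
T_SUN = 5772.0     # adopted solar effective temperature

luminosity = R_RATIO ** 2 * (T_EFF / T_SUN) ** 4
print(f"Implied luminosity: about {luminosity:.0f} L_sun (article quotes 927 L_sun)")
```

The result, roughly 970 L☉, agrees with the published 927 L☉ to within the rounding of the quoted radius and temperature.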
Kappa Pyxidis
Astronomy
288
63,070,573
https://en.wikipedia.org/wiki/Derived%20noncommutative%20algebraic%20geometry
In mathematics, derived noncommutative algebraic geometry, the derived version of noncommutative algebraic geometry, is the geometric study of derived categories and related constructions of triangulated categories using categorical tools. Some basic examples include the bounded derived category of coherent sheaves on a smooth variety, , called its derived category, or the derived category of perfect complexes on an algebraic variety, denoted . For instance, the derived category of coherent sheaves on a smooth projective variety can be used as an invariant of the underlying variety for many cases (if has an ample (anti-)canonical sheaf). Unfortunately, studying derived categories as geometric objects of themselves does not have a standardized name. Derived category of projective line The derived category of is one of the motivating examples for derived non-commutative schemes due to its easy categorical structure. Recall that the Euler sequence of is the short exact sequence if we consider the two terms on the right as a complex, then we get the distinguished triangle Since we have constructed this sheaf using only categorical tools. We could repeat this again by tensoring the Euler sequence by the flat sheaf , and apply the cone construction again. If we take the duals of the sheaves, then we can construct all of the line bundles in using only its triangulated structure. It turns out the correct way of studying derived categories from its objects and triangulated structure is with exceptional collections. Semiorthogonal decompositions and exceptional collections The technical tools for encoding this construction are semiorthogonal decompositions and exceptional collections. A semiorthogonal decomposition of a triangulated category is a collection of full triangulated subcategories such that the following two properties hold (1) For objects we have for (2) The subcategories generate , meaning every object can be decomposed in to a sequence of , such that . Notice this is analogous to a filtration of an object in an abelian category such that the cokernels live in a specific subcategory. We can specialize this a little further by considering exceptional collections of objects, which generate their own subcategories. An object in a triangulated category is called exceptional if the following property holds where is the underlying field of the vector space of morphisms. A collection of exceptional objects is an exceptional collection of length if for any and any , we have and is a strong exceptional collection if in addition, for any and any , we have We can then decompose our triangulated category into the semiorthogonal decomposition where , the subcategory of objects in such that . If in addition then the strong exceptional collection is called full. Beilinson's theorem Beilinson provided the first example of a full strong exceptional collection. In the derived category the line bundles form a full strong exceptional collection. He proves the theorem in two parts. First showing these objects are an exceptional collection and second by showing the diagonal of has a resolution whose compositions are tensors of the pullback of the exceptional objects. Technical Lemma An exceptional collection of sheaves on is full if there exists a resolution in where are arbitrary coherent sheaves on . Another way to reformulate this lemma for is by looking at the Koszul complex associated towhere are hyperplane divisors of . 
This gives the exact complexwhich gives a way to construct using the sheaves , since they are the sheaves used in all terms in the above exact sequence, except for which gives a derived equivalence of the rest of the terms of the above complex with . For the Koszul complex above is the exact complexgiving the quasi isomorphism of with the complex Orlov's reconstruction theorem If is a smooth projective variety with ample (anti-)canonical sheaf and there is an equivalence of derived categories , then there is an isomorphism of the underlying varieties. Sketch of proof The proof starts out by analyzing two induced Serre functors on and finding an isomorphism between them. It particular, it shows there is an object which acts like the dualizing sheaf on . The isomorphism between these two functors gives an isomorphism of the set of underlying points of the derived categories. Then, what needs to be check is an ismorphism , for any , giving an isomorphism of canonical rings If can be shown to be (anti-)ample, then the proj of these rings will give an isomorphism . All of the details are contained in Dolgachev's notes. Failure of reconstruction This theorem fails in the case is Calabi-Yau, since , or is the product of a variety which is Calabi-Yau. Abelian varieties are a class of examples where a reconstruction theorem could never hold. If is an abelian variety and is its dual, the Fourier–Mukai transform with kernel , the Poincare bundle, gives an equivalence of derived categories. Since an abelian variety is generally not isomorphic to its dual, there are derived equivalent derived categories without isomorphic underlying varieties. There is an alternative theory of tensor triangulated geometry where we consider not only a triangulated category, but also a monoidal structure, i.e. a tensor product. This geometry has a full reconstruction theorem using the spectrum of categories. Equivalences on K3 surfaces K3 surfaces are another class of examples where reconstruction fails due to their Calabi-Yau property. There is a criterion for determining whether or not two K3 surfaces are derived equivalent: the derived category of the K3 surface is derived equivalent to another K3 if and only if there is a Hodge isometry , that is, an isomorphism of Hodge structure. Moreover, this theorem is reflected in the motivic world as well, where the Chow motives are isomorphic if and only if there is an isometry of Hodge structures. Autoequivalences One nice application of the proof of this theorem is the identification of autoequivalences of the derived category of a smooth projective variety with ample (anti-)canonical sheaf. This is given by Where an autoequivalence is given by an automorphism , then tensored by a line bundle and finally composed with a shift. Note that acts on via the polarization map, . Relation with motives The bounded derived category was used extensively in SGA6 to construct an intersection theory with and . Since these objects are intimately relative with the Chow ring of , its chow motive, Orlov asked the following question: given a fully-faithful functor is there an induced map on the chow motives such that is a summand of ? In the case of K3 surfaces, a similar result has been confirmed since derived equivalent K3 surfaces have an isometry of Hodge structures, which gives an isomorphism of motives. Derived category of singularities On a smooth variety there is an equivalence between the derived category and the thick full triangulated of perfect complexes. 
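Because the inline formulas in the entry above have been lost in extraction, the quotient that the following paragraph defines can be written out explicitly. The display below is a hedged LaTeX sketch using the common notation D^b(Coh(X)) for the bounded derived category and Perf(X) for perfect complexes; the symbols may differ from those used in the original text:

\[
  D_{\mathrm{sg}}(X) \;:=\; D^{b}(\mathrm{Coh}(X)) \,/\, \operatorname{Perf}(X),
  \qquad
  D_{\mathrm{sg}}(X) \simeq 0 \quad \text{when } X \text{ is smooth},
\]

since on a smooth variety every bounded complex of coherent sheaves is perfect, as the surrounding text notes.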
For separated, Noetherian schemes of finite Krull dimension (called the ELF condition) this is not the case, and Orlov defines the derived category of singularities as their difference using a quotient of categories. For an ELF scheme its derived category of singularities is defined as for a suitable definition of localization of triangulated categories. Construction of localization Although localization of categories is defined for a class of morphisms in the category closed under composition, we can construct such a class from a triangulated subcategory. Given a full triangulated subcategory the class of morphisms , in where fits into a distinguished trianglewith and . It can be checked this forms a multiplicative system using the octahedral axiom for distinguished triangles. Given with distinguished triangles where , then there are distinguished triangles where since is closed under extensions. This new category has the following properties It is canonically triangulated where a triangle in is distinguished if it is isomorphic to the image of a triangle in The category has the following universal property: any exact functor where where , then it factors uniquely through the quotient functor , so there exists a morphism such that . Properties of singularity category If is a regular scheme, then every bounded complex of coherent sheaves is perfect. Hence the singularity category is trivial Any coherent sheaf which has support away from is perfect. Hence nontrivial coherent sheaves in have support on . In particular, objects in are isomorphic to for some coherent sheaf . Landau–Ginzburg models Kontsevich proposed a model for Landau–Ginzburg models which was worked out to the following definition: a Landau–Ginzburg model is a smooth variety together with a morphism which is flat. There are three associated categories which can be used to analyze the D-branes in a Landau–Ginzburg model using matrix factorizations from commutative algebra. Associated categories With this definition, there are three categories which can be associated to any point , a -graded category , an exact category , and a triangulated category , each of which has objects where are multiplication by . There is also a shift functor send to.The difference between these categories are their definition of morphisms. The most general of which is whose morphisms are the -graded complex where the grading is given by and differential acting on degree homogeneous elements by In the morphisms are the degree morphisms in . Finally, has the morphisms in modulo the null-homotopies. Furthermore, can be endowed with a triangulated structure through a graded cone-construction in . Given there is a mapping code with maps where and where Then, a diagram in is a distinguished triangle if it is isomorphic to a cone from . D-brane category Using the construction of we can define the category of D-branes of type B on with superpotential as the product category This is related to the singularity category as follows: Given a superpotential with isolated singularities only at , denote . Then, there is an exact equivalence of categories given by a functor induced from cokernel functor sending a pair . In particular, since is regular, Bertini's theorem shows is only a finite product of categories. Computational tools Knörrer periodicity There is a Fourier-Mukai transform on the derived categories of two related varieties giving an equivalence of their singularity categories. This equivalence is called Knörrer periodicity. 
This can be constructed as follows: given a flat morphism from a separated regular Noetherian scheme of finite Krull dimension, there is an associated scheme and morphism such that where are the coordinates of the -factor. Consider the fibers , , and the induced morphism . And the fiber . Then, there is an injection and a projection forming an -bundle. The Fourier-Mukai transform induces an equivalence of categories called Knörrer periodicity. There is another form of this periodicity where is replaced by the polynomial . These periodicity theorems are the main computational techniques because it allows for a reduction in the analysis of the singularity categories. Computations If we take the Landau–Ginzburg model where , then the only fiber singular fiber of is the origin. Then, the D-brane category of the Landau–Ginzburg model is equivalent to the singularity category . Over the algebra there are indecomposable objects whose morphisms can be completely understood. For any pair there are morphisms where for these are the natural projections for these are multiplication by where every other morphism is a composition and linear combination of these morphisms. There are many other cases which can be explicitly computed, using the table of singularities found in Knörrer's original paper. See also Derived category Triangulated category Perfect complex Semiorthogonal decomposition Fourier–Mukai transform Bridgeland stability condition Homological mirror symmetry Derived Categories notes - http://www.math.lsa.umich.edu/~idolga/derived9.pdf References Research articles A noncommutative version of Beilinson's theorem Derived Categories of Toric Varieties Derived Categories of Toric Varieties II Algebraic geometry Noncommutative geometry
Derived noncommutative algebraic geometry
Mathematics
2,498
65,380,994
https://en.wikipedia.org/wiki/Huang%27s%20law
Huang's law is the observation in computer science and engineering that advancements in graphics processing units (GPUs) are growing at a rate much faster than with traditional central processing units (CPUs). The observation is in contrast to Moore's law, which predicted the number of transistors in a dense integrated circuit (IC) doubles about every two years. Huang's law states that the performance of GPUs will more than double every two years. The hypothesis is subject to questions about its validity. History The observation was made by Jensen Huang, the chief executive officer of Nvidia, at its 2018 GPU Technology Conference (GTC) held in San Jose, California. He observed that Nvidia's GPUs were "25 times faster than five years ago" whereas Moore's law would have expected only a ten-fold increase. As microchip components became smaller, it became harder for chip advancement to meet the speed of Moore's law. In 2006, Nvidia's GPU had a 4x performance advantage over CPUs. In 2018 the Nvidia GPU was 20 times faster than a comparable CPU node: the GPUs were 1.7x faster each year. Moore's law would predict a doubling every two years; however, Nvidia's GPU performance more than tripled every two years, fulfilling Huang's law. Huang's law claims that a synergy between hardware, software, and artificial intelligence makes the new 'law' possible. "The innovation isn't just about chips," Huang said. "It's about the entire stack." He said that graphics processors especially are important to a new paradigm. Elimination of bottlenecks can speed up the process and create advantages in getting to the goal. "Nvidia is a one trick pony," Huang has said. According to Huang: "Accelerated computing is liberating, … Let’s say you have an airplane that has to deliver a package. It takes 12 hours to deliver it. Instead of making the plane go faster, concentrate on how to deliver the package faster, look at 3D printing at the destination." The object "… is to deliver the goal faster." For artificial intelligence tasks, Huang said that training the convolutional network AlexNet took six days on two of Nvidia's GTX 580 processors but only 18 minutes on a modern DGX-2 AI server, resulting in a speed-up factor of 500. Compared to Moore's law, which focuses purely on CPU transistors, Huang's law describes a combination of advances in architecture, interconnects, memory technology, and algorithms. Reception Bharath Ramsundar wrote that deep learning is being coupled with "[i]mprovements in custom architecture". For example, machine learning systems have been implemented in the blockchain world, where Bitmain assaulted "many cryptocurrencies by designing custom mining ASICs (application-specific integrated circuits)", something that had been envisioned as undoable. "Nvidia's grand achievement however is in making the case that these improvement in architectures are not merely isolated victories for specific applications but perhaps broadly applicable to all of computer science." He has suggested that broad harnessing of GPUs and the GPU stack (cf. the CPU stack) can deliver "dramatic growth in deep learning architecture." The "magic" of Huang's law's promise is that, as nascent deep-learning-powered software becomes more widely available, the improvements from GPU scaling and more generally from architectural improvements will concretely improve the "performance and behavior of modern software stacks." There has been criticism. 
Journalist Joel Hruska, writing in ExtremeTech in 2020, said "there is no such thing as Huang's Law", calling it an "illusion" that rests on the gains made possible by Moore's law, and arguing that it is too soon to determine that such a law exists. The research nonprofit Epoch has found that, between 2006 and 2021, GPU price performance (in terms of FLOPS/$) has tended to double approximately every 2.5 years, much slower than predicted by Huang's law. See also Accelerating change List of eponymous laws Notes References External links 2018 introductions Computer architecture statements Digital Revolution History of computing hardware Rules of thumb
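As a purely arithmetical illustration of the competing growth claims in the Huang's law entry above, the quoted figures can be converted into implied annual growth factors. The conversion (annual factor = total factor raised to the power 1/years) is standard; the specific figures are taken from the article, and the comparison is illustrative rather than an endorsement of either position.

```python
# Convert the growth claims quoted above into implied annual growth factors.
# Figures are from the article; the conversion itself is elementary arithmetic.
claims = {
    "Huang (25x over 5 years)": 25 ** (1 / 5),
    "Moore's law (2x every 2 years)": 2 ** (1 / 2),
    "Epoch (2x every 2.5 years)": 2 ** (1 / 2.5),
}
for label, annual_factor in claims.items():
    print(f"{label}: about {annual_factor:.2f}x per year")
```

On these numbers, Huang's claim corresponds to roughly 1.9x per year, against about 1.41x per year for Moore's-law doubling and about 1.32x per year for the Epoch estimate.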
Huang's law
Technology
894
25,918,856
https://en.wikipedia.org/wiki/Psilocybe%20tasmaniana
Psilocybe tasmaniana is a species of coprophilous agaric fungus in the family Hymenogastraceae. It was described by Gastón Guzmán and Roy Watling in 1978 as a small tawny orange mushroom that grows on dung, with a slight blueing reaction to damage, known only from Tasmania and southeastern Australia. It was likened to Psilocybe subaeruginosa although characteristics, appearance, and the association with dung were not typical for that species. As a blueing member of the genus Psilocybe it contains the psychoactive compounds psilocin and psilocybin. In 1992 an attempt was made to combine the name as a synonym of Psilocybe subaeruginosa. This was unsuccessful but the species was not well known and it gained a reputation as invalid, and with a lack of authentic records the name fell out of use. In recent years it has been applied to a species in Australia and New Zealand which may or may not be the same species Guzmán and Watling described. There are similarities but it is not on dung, and there are departures from the described appearance and geographic range. It is reported most often from New Zealand. The holotype is deposited at the Royal Botanic Gardens in Edinburgh but no DNA sequences are available for comparison with current records. Taxonomy Psilocybe tasmaniana was first described in 1978 by mycologists Gastón Guzmán and Roy Watling as a dung-inhabiting species found in Tasmania and southeastern Australia. The species was placed in the newly created section Cyanescens by Guzmán in 1983, alongside P. australiana, P. cyanescens, P. eucalypta and P. mairei. The section was to group bluing species with hyaline pleurocystidia (clear or transparent cystidia on the gill face), and thick-walled, sub-ellipsoid spores measuring up to 10 μm. Guzmán notes P. tasmaniana was tentatively considered; microscopic features of pleurocystidia and spores were close, but the cheilocystidia (cystidia on the gill edge), minor bluing reaction, association with dung, and appearance were not typical of other species in the group. In 1992 botanist Yu Shyun Chang critically re-examined P. australiana, P. eucalypta, and P. tasmaniana, by this time reported from both Australia and New Zealand. He proposed combining them as synonyms of Psilocybe subaeruginosa, arguing that the described features and those of the specimens he examined overlapped in range, and characteristics such as habitat and elements of appearance were not enough to distinguish them. Guzmán opposed the combination, calling the comparisons confused, but it was accepted in 1995 by New Zealand mycologists Peter R. Johnston and Peter K Buchanan with the exception of P. tasmaniana which they removed from the synonymy. They did not consider the description of characteristic cheilocystidia and coprophilous habit to fit, and the single specimen examined, PDD 57404 from New Zealand, was examined again and found to be P. subaeruginosa that had been misidentified. They comment that "authentic records of Psilocybe tasmaniana are not known from New Zealand." In 2015-2016 photos of an unidentified Psilocybe species in Australia and another from New Zealand appeared online with similar characteristics to the description. The mushrooms were not on dung but other features fit, and both were determined as P. tasmaniana, leading to the current use of the name. There are some departures from the original concept, including appearance, habitat and geographic range. The species presence in New Zealand (biostatus) is listed as uncertain and recorded in error. 
Etymology The epithet tasmaniana refers to the type locality and geographic region of Tasmania. Description The pileus or cap is small, 10–20 mm in diameter, convex to subcampanulate (domed to somewhat bell-shaped), without umbo or papilla. The surface is glabrous apart from whitish flecks of veil remnants at the margin. It is slightly striate at the margin when moist, feels a little tacky or sticky, and is hygrophanous, changing colour abruptly from wet to dry. Coloured tawny orange, drying dull or ochraceous straw. The gills are broad, with an adnate attachment to the stem, coloured violaceous brown with the edges remaining whiter. The stipe is long and slender in proportion to the cap, 40-50 x 1–2 mm, cylindrical and equal, silky fibrillose, coloured white to almost the same colour as the cap, and slightly bluish green at the base. The veil is well developed as a white arachnoid mass, but not forming any annulus on the stem. The trama or flesh is pallid and whitish. A moderate blueing reaction is observed in this species when handled or damaged. The spore print is purple-brown. Microscopic characteristics Spores are 12-13 x 7.1-7.7 μm, ellipsoid or subellipsoid to subovate in shape and dark brownish yellow. They have thick walls and a broad apical germ pore (situated at one end). The basidia are 4-spored, transparent and subcylindric, measuring 22-33 x 5.5-9.9 μm. The pleurocystidia (cystidia on the gill face) are 19-24 x 6.6-8.8 μm, abundant, fusoid-ventricose in shape (tapered toward both ends but distinctly enlarged in the middle), with short necks measuring 1.6-2.8 μm. The cheilocystidia (cystidia on the gill edge) are 22-23 x 4.4-9.9 μm, abundant, fusoid-ventricose, globose-ventricose (having the shape of a globe but swollen or distended) or sublageniform (somewhat shaped like a flask), with long necks measuring 5-11 x 1.6-3.3 μm, some of which bifurcate (branch into two), while others have a transparent swollen drop at the tip. The subhymenium (tissue just beneath the surface of the hymenium) is subcellular with short elements, brownish yellow appearing or encrusted on the hyphal walls. Trama (flesh) hyaline with parallel elongated hyphae 4-6 μm. Epicutis (the outermost layer of the cap cuticle) a thin, gelatinized layer of parallel elongated hyphae up to 3 μm, brownish to transparent. Hypodermium (the layer of tissue beneath the epicutis) brownish to transparent, with subglobose (almost globe-shaped) to elongated hyphae 5-10 μm wide. Clamp connections are present. Published description The Genus Psilocybe (1983) is not the original publication but was authored by Guzmán, and contains Watling and Guzmán's descriptions, microscopy, drawings, information on placement into sections, and descriptions for related taxa. See pages 332–41. Habitat and distribution The fruitbodies were described growing on animal dung, some from kangaroo, or with wood and leaves intermixed with dung, in grasslands and Australian Eucalyptus forests. The growth pattern is solitary to gregarious in small groups (close together but not densely clustered). Under its current use this species is observed growing solitary to gregarious from soil mixed with woody debris, sticks and grasses, or in potting mix and in areas of landscaping from clay soil and decomposing bark mulch. Described from Tasmania and southeastern Australia (New South Wales in part, and the Australian Capital Territory); currently reported from New South Wales and, predominantly, New Zealand. 
Watling collected the type material from the small rural farming and logging localities of Nugent and Buckland, approximately 50 kilometres northeast of the Tasmanian state capital of Hobart. Further collections came from Mount Field National Park in Tasmania, which ranges from temperate Eucalyptus rainforest to alpine moorland, and Tidbinbilla Nature Reserve, a large, steep walled valley and mountain range near Canberra, now an IUCN Category II Protected Area and the traditional Country of the Ngunnawal people. Similar species Guzmán related Psilocybe tasmaniana with Psilocybe subaeruginosa, which is very occasionally seen on dung, and placed it in the section Cyanescens of Psilocybe. DNA of the current species shows similarity to P. alutacea, P. angulospora, P. baeocystis, P. semilanceata, and P. stuntzii, all species in the phylogenetic Semilanceata group. Psilocybe alutacea is extremely similar, small and dung-associated. It was described from Tasmania and Australia in 2006. It has overlapping macroscopic and microscopic characteristics, although some differences seem to exist. Most notably the cheilocystidia of P. alutacea can be three-pronged, and pleurocystidia are reportedly rare, with long necks. Under the current use in New Zealand, P. tasmaniana may be confused with other Psilocybe species that share the same habitats. These include a mushroom related to P. stuntzii, and P. angulospora; the three are similar in size and found in potted plants and landscaping. P. angulospora can be recognised by an often acute central papilla on the cap. See also List of psilocybin mushrooms References External links Psilocybe tasmaniana on iNaturalist Psilocybe tasmaniana on Mushroom Observer Landcare Research Manaaki Whenua New Zealand online fungarium records. Entheogens Psychoactive fungi tasmaniana Psychedelic tryptamine carriers Fungi described in 1978 Fungi of Australia Taxa named by Roy Watling Taxa named by Gastón Guzmán Fungi of New Zealand Fungus species
Psilocybe tasmaniana
Biology
2,081
63,425,559
https://en.wikipedia.org/wiki/Caesium%20cyanide
Caesium cyanide (chemical formula: CsCN) is the caesium salt of hydrogen cyanide. It is a white solid, easily soluble in water, with a smell reminiscent of bitter almonds, and with crystals similar in appearance to sugar. Caesium cyanide has chemical properties similar to potassium cyanide and is very toxic. Production Hydrogen cyanide reacts with caesium hydroxide, giving caesium cyanide and water: HCN + CsOH → CsCN + H2O. References Cyanides Caesium compounds
Caesium cyanide
Chemistry
121
73,519,725
https://en.wikipedia.org/wiki/Californium%28II%29%20iodide
Californium(II) iodide is a binary inorganic compound of californium and iodine with the formula CfI2. Synthesis It can be produced by reducing californium triiodide with hydrogen in a thin quartz tube at 570 °C: 2 CfI3 + H2 → 2 CfI2 + 2 HI. Physical properties The compound forms a dark purple solid. At slightly higher temperatures, it melts and reacts with the silica in the thin tube, producing CfOI. Californium diiodide has two crystal structures: one, stable at room temperature, is rhombohedral, with lattice parameters a = 743.4 ± 1.1 pm and α = 35.83 ± 0.07°; the other is metastable and hexagonal, with lattice parameters a = 455.7 ± 0.4 pm and c = 699.2 ± 0.6 pm. Californium diiodide has an absorption band in the wavelength range from 300 to 1100 nm, which proves the existence of Cf(II). References Californium compounds Iodides Actinide halides
Californium(II) iodide
Chemistry
222
14,509,372
https://en.wikipedia.org/wiki/List%20of%20ProCurve%20products
HP ProCurve was the name of the networking division of Hewlett-Packard from 1998 to 2010 and associated with the products that it sold. The name of the division was changed to HP Networking in September 2010. Please use HP Networking Products for an actual list of products. The HP ProCurve division sold network switches, wireless access points, WAN routers, and Access Control Servers/Software under the "HP ProCurve" brand name. Switching Core Switches 8212zl Series - (Released September 2007) Core switch offering, 12-module slot chassis with dual fabric modules and options for dual management modules and system support modules for high availability (HA). IPV6-ready, 692 Gbit/s fabric. Up to 48 10GbE ports, 288 Gb ports, or 288 SFPs. Powered by a combination of either 875W or 1500W PSUs, to provide a maximum of 3600W (5400W using additional power supplies) of power for PoE. Datacenter Switches 6600 Series - (Released February 2009) Datacenter switch offered in five versions. There are four switches with either 24 or 48 Gb ports, with two models featuring four 10GbE SFP ports. There is also a 24 port 10GbE version. All of these feature front to back cooling and removable power supplies. Interconnect Fabric 8100fl series - Chassis based, 8 or 16 slot bays. Supports up to 16 10 Gigabit Ethernet ports / 160 Gigabit Ethernet Ports / 160 SFPs. Distribution/Aggregator 6200yl - Stackable switch, Layer 3, with 24 SFP transceiver ports, and the capability of 10GE ports 6400cl series - Stackable switch, Layer 3, with either CX4 10GE ports or X2 10GE ports 6108 - Stackable switch, with 6 Gigabit ports, and a further 2 Dual Personality Gigabit ports (either 1000BASE-T or SFPs) Managed edge switches Entry level 2530, 2620 and 2540 lines are Aruba/HPE branded and included for comparison purposes only Mainstream 2920 line is Aruba/HPE branded and included for comparison purposes only Chassis/Advanced Web managed switches 1800 series - Fanless 8 or 24 Gb ports. The 1800-24G also has 2 Dual Personality Ports (2 x Gb or SFP). No CLI or SNMP management. 1700 series - Fanless 7 10/100 ports plus 1 Gb or 22 10/100 ports plus 2 Gb. The 1700-24 also has 2 Dual Personality Ports (2 x Gb or SFPs). No CLI or SNMP management. Unmanaged switches 2300 series 2124 1400 series 408 Routing WAN Routers 7000dl - Stackable WAN routers with modules for T1/E1, E1+G.703, ADSL2+, Serial, ISDN, and also IPsec VPN. German company .vantronix marketed software products until 2009. Mobility Due to country laws, ProCurve released different versions of their wireless access points and MultiService Access points. MultiService Mobility / Access Controllers The MSM Access and Mobility Controllers support security, roaming and quality of service across MSM Access Points utilising 802.11 a/b/g/n wireless technology. MSM710 - Supports up to 10 x MSM Access points. Supports up to 100 Guest Users. MSM730 - Supports up to 40 x MSM Access points. Supports up to 500 Guest Users. MSM750 - Supports up to 200 x MSM Access points. Supports up to 2000 Guest Users. MSM760 - Supports 40 x MSM Access Points, plus license support up to 200 MSM765 - Supports 40 x MSM Access Points, plus license support up to 200. This is a module form, and based on the ProCurve ONE. MultiService Access Points Most access points are designed to work in controlled mode: a controller manages and provides authentication services for them. MSM310 - Single 802.11a/b/g radio. Includes 2.4 GHz dipole antennas MSM310-R - External use. Single 802.11a/b/g radio. 
Includes 2.4 GHz dipole antennas MSM313 - Integrated MSM Controller + single radio Access Point MSM313-R - External Use. Integrated MSM Controller + single radio Access Point MSM317 - Single 802.11b/g radio, with integrated 4 port switch MSM320 - Dual radios (802.11a/b/g + 802.11a/b/g) for outdoor deployment options. Includes 2.4 GHz dipole antennas. Supports PoE. MSM320-R - External use. Dual radios (802.11a/b/g + 802.11a/b/g). Includes 2.4 GHz dipole antennas. Supports PoE. MSM323 - Integrated MSM Controller + dual radio Access point. MSM323-R - External Use. Integrated MSM Controller + dual radio Access point. MSM325 - Dual radios (802.11a/b/g + 802.11a/b/g) including RF security sensor. Requires PoE MSM335 - Triple radios (802.11a/b/g + 802.11a/b/g + 802.11a/b/g RF security sensor) MSM410 - Single 802.11 a/b/g/n radio. Requires PoE. Internal antenna only. MSM422 - Dual-radio 802.11n + 802.11a/b/g. MSM460 - Dual-Radio 802.11a/b/g/n. Only internal 3x3 MIMO Antenna. Requires PoE. MSM466 - Same as MSM460 but with external 3x3 MIMO antenna connectors, no internal antenna. Requires PoE. Centralised wireless Wireless Edge Services Module - Controls Radio ports, and is an integrated module that fits into ProCurve Switches 5300xl / 5400zl / 8200zl only. Redundant Module available for failover. Supports the following Radio Ports: RP-210 - Single 802.11b/g radio and integrated antenna RP-220 - Dual-radio design (one 802.11a and one 802.11b/g); plenum rated; external antennas required RP-230 - Dual-radio design (one 802.11a and one 802.11b/g); features internal, integrated antennas Wireless access points M110 - Single 802.11a/b/g radio M111 - Wireless Client Bridge including dual band antennas AP-530 - Wireless access point; Dual radios support simultaneous 802.11a and 802.11b/g transmissions. The AP-530 has two integrated radios (one of which supports 802.11a/b/g; the other of which supports 802.11b/g). The AP supports the Wireless Distribution System. AP-420 - Wireless access point; Features a single, dual-diversity 802.11b/g radio. AP-10ag - Wireless access point; Dual radios support simultaneous 802.11a and 802.11b/g transmissions. Management Software ProCurve Manager (PCM) is a network management suite for products by ProCurve. ProCurve Manager ProCurve Manager comes in two versions; a base version supplied both free of charge with all managed ProCurve Products and also for download, and a "Plus" version that incorporates more advanced functionality and also enables plugin support. There is a 60-day trial version including all modules. Both derive from the trial version and need to be activated via Internet. The Plus version can also be implemented in HP OpenView Network Node Manager for Windows. The software ProCurve Manager is predominantly for ProCurve products. Protocols PCM uses Link Layer Discovery Protocol (LLDP, Cisco Discovery Protocol (CDP) and FDP (Foundry) for detecting network devices For identification and deep inspection of network devices SNMP V2c or V3 is used. Network traffic is analysed using RMON and sFlow. 
Plugins IDM (Identity Driven Manager) - Add-on Module for PCM+; contains Intranet Network Access Security using 802.1X; compatible with MicrosoftNetwork Access Protection (NAP) since Version IDM V2.3 NIM (Network Immunity Manager) - Add-On Module for PCM+ v2.2 and above; contains Intranet Intrusion Detection and Network Behavior Anomaly Detection (NBAD) using sFlow PMM (ProCurve Mobility Manager) - Add-on Module for PCM+; contains Element Management for ProCurve Access Points (420/520/530) starting from Version PMM V1; WESM Modules and Radio Ports are supported since Version PMM V2. Since PMM v3, the MSM Access Points and Controllers are now supported Security Network access control with endpoint testing ProCurve Network Access Controller 800 - Management and security for endpoints when they access the network Firewall The Threat Management Services Module is based on the ProCurve ONE Module, and is primarily a firewall with additional Intrusion-prevention system and VPN capabilities Accessories External power supplies ProCurve 600 Redundant external power supply - supports one of six times Redundant Power for series 2600-PWR (not series 2600 w/o PWR), 2610, 2800, 3400cl, 6400cl and 7000dl as well as two times optional External PoE Power for series 2600-PWR, 2610-PWR or mandatory External PoE Power for series 5300 with xl 24-Port 10/100-TX PoE Module only ProCurve 610 External power supply - supports four times optional External PoE Power for series 2600-PWR, 2610-PWR, or mandatory External PoE Power for series 5300 with xl 24-Port 10/100-TX PoE Module only ProCurve 620 Redundant/External power supply - supports two times optional External PoE Power for series 3500yl and two times Redundant Power for series 2900, 3500yl and 6200yl ProCurve Switch zl power supply shelf - supports two times optional External PoE Power for series 5400zl and 8200zl; must be additionally equipped with max. two 875W or 1500W (typical) ProCurve Switch zl power supplies GBICs and optics ProCurve have a range of Transceivers, GBICs and 10GbE optics for use within ProCurve devices. Transceivers are used in the unmanaged 2100 & 2300 series, and the managed 2500 series of switches GBICs are used for most switches for 100 Mbit/s and 1000 Mbit/s fiber connectivity. All fiber GBICs have an LC presentation. ProCurve ONE HP ProCurve ONE Services zl Module The HP ProCurve ONE Services zl Module is an x86-based server module that provides two 10-GbE network links into the switch backplane. Coupled with ProCurve-certified services and applications that can take advantage of the switch-targeted API for better performance, this module creates a virtual appliance within a switch slot to provide solutions for business needs, such as network security. The ProCurve ONE Services zl Module is supported in the following switches: HP ProCurve Switch 5406zl HP ProCurve Switch 5412zl HP ProCurve Switch 8212zl The following applications have completed, or will complete the ProCurve ONE Integrated certification on the HP ProCurve Services zl Module in early 2009. 
Data center automation HP ProCurve Data Center Connection Manager ONE Software (Q3 2009) Location Ekahau Real Time Location System Wireless IPS AirTight Networks SpectraGuard Enterprise Network management InMon Corporation Traffic Sentinel VoIP / Unified Communications Aastra 5000 Next Generation IP telephony Avaya Unified Communications Solutions Video distribution VBrick Systems ViP Other - Unsupported Other 'unofficial' methods for loading alternative platform software such as pfSense and VMware's ESXi on to ONE Service modules have been discovered. HP ZL Compute Blade on the Cheap | Tinkeringdadblog Discontinued Products 1600M - stackable Layer 2 switch 2400M - stackable Layer 2 switch 2424M - stackable Layer 2 switch 4000M - modular Layer 2 switch 8000M - modular Layer 2 switch 9400 - modular Layer 3 Router AP 520 - Access Point 4100gl - modular Layer 2 switch 2700 series - unmanaged Layer 2 switch 9300m series - modular Layer 3 Router ProCurve Access Controller Series 700wl 745wl ACM (Access Control Module) for the 5300xl only 5300xl series - Chassis based, Layer 3, in either 4 or 8 slot bays. See also Aruba Networks ProCurve References External links HPE Networking website VisioCafe (Home of HP VisioShapes) ProCurve Network management ProCurve products
List of ProCurve products
Technology,Engineering
2,754
30,497,848
https://en.wikipedia.org/wiki/UgMicroSatdb
UgMicroSatdb (UniGene Microsatellites database) is a database of microsatellites present in UniGene. See also Microsatellites UniGene References External links http://veenuash.info/web1/index.htm Biological databases Genetics databases Repetitive DNA sequences
UgMicroSatdb
Biology
69
33,109,308
https://en.wikipedia.org/wiki/2-Oleoylglycerol
2-Oleoylglycerol (2OG) is a monoacylglycerol that is found in biologic tissues. Its synthesis is derived from diacylglycerol precursors. It is metabolized to oleic acid and glycerol primarily by the enzyme monoacylglycerol lipase (MAGL). In 2011, 2OG was found to be an endogenous ligand to GPR119. 2OG has been shown to increase glucagon-like peptide-1 (GLP-1) and gastric inhibitory polypeptide (GIP) levels following administration to the small intestine. 2OG has also been discovered to potentiate G protein and not β-arrestin signaling via allosteric binding of the 5-HT2A receptor. See also 2-Arachidonoylglycerol JZL184 References Fatty acid esters Lipids Endocannabinoids
2-Oleoylglycerol
Chemistry
202
26,777,830
https://en.wikipedia.org/wiki/HL7%20Services%20Aware%20Interoperability%20Framework
This article documents the effort of the Health Level Seven(HL7) community and specifically the former HL7 Architecture Board (ArB) to develop an interoperability framework that would support services, messages, and Clinical Document Architecture(CDA) ISO 10871. HL7 provides a framework and standards for the exchange, integration, sharing, and retrieval of electronic health information. SAIF Overview The HL7 Services-Aware Interoperability Framework Canonical Definition (SAIF-CD) provides consistency between all artifacts, and enables a standardized approach to enterprise architecture (EA) development and implementation, and a way to measure the consistency. SAIF is a way of thinking about producing specifications that explicitly describe the governance, conformance, compliance, and behavioral semantics that are needed to achieve computable semantic working interoperability. The intended information transmission technology might use a messaging, document exchange, or services approach. SAIF is the framework that is required to rationalize interoperability of standards. SAIF is an architecture for achieving interoperability, but it is not a whole-solution design for enterprise architecture management. The informative document may be found at Public SAIF CD. Since the release of this document, the SAIF-CD has been balloted as a Draft Standard for Trial Use. The document will be made available by the end of May, 2012. A Short introduction to SAIF was made to provide insight to users of the SAIF-CD. Document Divisions The SAIF-CD consists of the following sections: Introduction Governance Framework (GF) Behavioral Framework (BF) Information Framework(IF) Enterprise Consistency and Conformity Framework (ECCF) Interoperability Specification Matrix (ISM) Compliant SAIF Implementation Guides Appendix Introduction The SAIF Introduction and Overview describes the general constructs that frame the SAIF. SAIF represents a synthesis of best practices and concepts from multiple architectural frameworks. This introduction to SAIF goes into a lot of technical depth and assumes you are already familiar with architectural standards and the HL7 organization. Governance Framework The Governance Framework (GF) describes the motivation for, the structure, content and utilization of the GF. The ECCF and BF are discussed in detail in separate documents, and are mentioned in the course of this document only when necessary to either contextualize or logically link GF content to the larger context of the Services Aware Interoperability Framework. Behavioral Framework The Behavioral Framework (BF) provides a set of constructs for defining the behavioral semantics of specifications, which enable working interoperability. As a result, the focus of the BF is accountability – a description of “who does what when.” Accountability describes the perspective of the various technology components that are involved in a particular instance or scenario designed to achieve Working Interoperability. The BF is technology-neutral and, therefore, can be used within model-driven specification stacks. Information Framework The Information Framework (IF) is a SAIF-compliant recasting of existing HL7 expertise regarding the specification of static semantics. 
The Information Framework will draw on the information available from the following sources: Storyboards Domain Analysis Models (DAM) Reference Information Model (RIM) ISO21731 Vocabulary concepts HL7 Core Principles Enterprise Consistency and Conformity Framework The major goal of the Enterprise Consistency and Conformity Framework (ECCF) is enabling working interoperability between different users, organizations, and systems. The ECCF is manifest in a structure called the ECCF specification stack (SS). This structure identifies, defines, organizes, and relates a set of artifacts that collectively specify the relevant semantics of a software component specification or other system-of- interest. In summary, the ECCF SS provides an organizational framework in which inter-related artifacts are sorted by content – for example a Unified Modeling Language (UML) activity diagram in the Business viewpoint that contains static data constructs (for example, documents or data structures) which passes between the various structures would all have the relevant static constructs detailed in artifacts, specifically, business rules, information constructors, behavioral contracts, and level-of-abstraction. Interoperability Specification Matrix (ISM) The Interoperability Specification Matrix (ISM) defines a 5-column-by-3-row matrix (“table”) which distributes the multiple aspects of a given component's specification across the various cells of the matrix. The structure of the ISM is based on proven cognitive models for describing complex systems which revolve around the notion of partitioning complexity based on a number of Dimensions while simultaneously viewing each of these dimensions from multiple Perspectives. Compliant SAIF Implementation Guides This section defines the creation of a compliant implementation guide. References International standards Agent-based software Health standards Information science American National Standards Institute Data interchange standards
HL7 Services Aware Interoperability Framework
Technology
976
448,270
https://en.wikipedia.org/wiki/Daniel%20Rutherford
Daniel Rutherford (3 November 1749 – 15 November 1819) was a Scottish physician, chemist and botanist who is known for the isolation of nitrogen in 1772. Life Rutherford was born on 3 November 1749, the son of Anne Mackay and Professor John Rutherford (1695–1779). He began college at the age of 16 at Mundell's School on the West Bow close to his family home, and then studied medicine under William Cullen and Joseph Black at the University of Edinburgh, graduating with a doctorate (MD) in 1772. From 1775 to 1786 he practiced as a physician in Edinburgh. On 12 April 1782 Rutherford was one of the founding members of the Harveian Society of Edinburgh and served as President in 1787. In 1783 he was a joint founder of the Royal Society of Edinburgh. In 1784 he was elected a member of the Aesculapian Club. At this time he lived at Hyndford Close on the Royal Mile a house he (or his father) had purchased from Dunbar Douglas, 4th Earl of Selkirk He was a professor of botany at the University of Edinburgh and the 5th Regius Keeper of the Royal Botanic Garden Edinburgh from 1786 to 1819. He was president of the Royal College of Physicians of Edinburgh from 1796 to 1798. His pupils included Thomas Brown of Lanfine and Waterhaughs. Around 1805 he moved from Hyndfords Close to a newly built townhouse at 20 Picardy Place at the top of Leith Walk, where he lived for the rest of his life. He died suddenly in Edinburgh on 15 November 1819. His sister died two days later and the second sister (Scott's mother) only seven days after the latter. Family He was the uncle of novelist Sir Walter Scott. In 1786 he married Harriet Mitchelson of Middleton. Isolation of nitrogen Rutherford discovered nitrogen by the isolation of the particle in 1772. When Joseph Black was studying the properties of carbon dioxide, he found that a candle would not burn in it. Black turned this problem over to his student at the time, Rutherford. Rutherford kept a mouse in a space with a confined quantity of air until it died. Then, he burned a candle in the remaining air until it went out. Afterwards, he burned phosphorus in that, until it would not burn. Then the air was passed through a carbon dioxide absorbing solution. The remaining component of the air did not support combustion, and a mouse could not live in it. Rutherford called the gas (which we now know would have consisted primarily of nitrogen) "noxious air" or "phlogisticated air". Rutherford reported the experiment in 1772. He and Black were convinced of the validity of the phlogiston theory, so they explained their results in terms of it. 
Botanical reference References External links Biographical note at “Lectures and Papers of Professor Daniel Rutherford (1749–1819), and Diary of Mrs Harriet Rutherford” Scottish antiquarians 1749 births 1819 deaths Discoverers of chemical elements Founder fellows of the Royal Society of Edinburgh Fellows of the Linnean Society of London Industrial gases Academics of the University of Edinburgh Members of the Philosophical Society of Edinburgh Presidents of the Royal College of Physicians of Edinburgh Scientists from Edinburgh People educated at James Mundell's School Alumni of the University of Edinburgh 18th-century Scottish botanists 19th-century Scottish botanists 18th-century British chemists 19th-century Scottish chemists 18th-century Scottish medical doctors 19th-century Scottish medical doctors Fellows of the Society of Antiquaries of Scotland Fellows of the Royal Society of Edinburgh Medical doctors from Edinburgh Office bearers of the Harveian Society of Edinburgh Members of the Harveian Society of Edinburgh
Daniel Rutherford
Chemistry
734
10,450,522
https://en.wikipedia.org/wiki/Nystr%C3%B6m%20method
In numerical analysis, a branch of mathematics, the Nyström method or quadrature method seeks the numerical solution of an integral equation by replacing the integral with a representative weighted sum. The continuous problem is broken into discrete intervals; quadrature or numerical integration determines the weights and locations of representative points for the integral. The problem becomes a system of linear equations with n equations and n unknowns, and the underlying function is implicitly represented by an interpolation using the chosen quadrature rule. This discrete problem may be ill-conditioned, depending on the original problem and the chosen quadrature rule. Since solving the linear equations requires O(n^3) operations, high-order quadrature rules perform better because low-order quadrature rules require a large n for a given accuracy. Gaussian quadrature is normally a good choice for smooth, non-singular problems. Discretization of the integral Standard quadrature methods seek to represent an integral as a weighted sum in the following manner: ∫_a^b h(x) dx ≈ Σ_{k=1}^{n} w_k h(x_k), where the w_k are the weights of the quadrature rule, and the points x_k are the abscissas. Example Applying this to the inhomogeneous Fredholm equation of the second kind, u(x) = f(x) + λ ∫_a^b K(x, x′) u(x′) dx′, results in u(x) ≈ f(x) + λ Σ_{k=1}^{n} w_k K(x, x_k) u(x_k). See also Boundary element method References Bibliography Leonard M. Delves & Joan E. Walsh (eds): Numerical Solution of Integral Equations, Clarendon, Oxford, 1974. Hans-Jürgen Reinhardt: Analysis of Approximation Methods for Differential and Integral Equations, Springer, New York, 1985. Integral equations Numerical analysis Numerical integration (quadrature)
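As a concrete illustration (not part of the original article), the following Python sketch applies the Nyström discretization with Gauss–Legendre quadrature to a Fredholm equation of the second kind. The function name nystrom_solve, the chosen kernel, and the test problem are illustrative assumptions, not taken from the source:

import numpy as np

def nystrom_solve(kernel, f, a, b, n=20, lam=1.0):
    # Solve u(x) = f(x) + lam * integral_a^b K(x, t) u(t) dt
    # (inhomogeneous Fredholm equation of the second kind) by the
    # Nystrom method with n-point Gauss-Legendre quadrature.
    x, w = np.polynomial.legendre.leggauss(n)   # nodes and weights on [-1, 1]
    t = 0.5 * (b - a) * x + 0.5 * (b + a)       # map nodes to [a, b]
    w = 0.5 * (b - a) * w                       # rescale weights accordingly

    # Collocating at the quadrature nodes gives the linear system (I - lam*K*W) u = f
    K = kernel(t[:, None], t[None, :])          # K[i, j] = K(t_i, t_j)
    A = np.eye(n) - lam * K * w[None, :]
    u_nodes = np.linalg.solve(A, f(t))

    # Nystrom interpolation extends the nodal solution to arbitrary points
    def u(s):
        s = np.atleast_1d(s)
        return f(s) + lam * (kernel(s[:, None], t[None, :]) * w) @ u_nodes

    return t, u_nodes, u

# Test problem with known solution u(x) = x on [0, 1]:
# u(x) = 2x/3 + integral_0^1 (x*t) u(t) dt  is solved exactly by u(x) = x.
t, u_nodes, u = nystrom_solve(lambda x, s: x * s, lambda x: 2.0 * x / 3.0, 0.0, 1.0, n=8)
print(np.max(np.abs(u_nodes - t)))              # nodal error near machine precision

Because the quadrature rule here integrates the polynomial kernel exactly, the nodal error is at the level of rounding; for general smooth kernels the error decreases rapidly with n, which is the practical appeal of the method.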
Nyström method
Mathematics
302
70,400
https://en.wikipedia.org/wiki/Bounded%20rationality
Bounded rationality is the idea that rationality is limited when individuals make decisions, and under these limitations, rational individuals will select a decision that is satisfactory rather than optimal. Limitations include the difficulty of the problem requiring a decision, the cognitive capability of the mind, and the time available to make the decision. Decision-makers, in this view, act as satisficers, seeking a satisfactory solution, with everything that they have at the moment rather than an optimal solution. Therefore, humans do not undertake a full cost-benefit analysis to determine the optimal decision, but rather, choose an option that fulfills their adequacy criteria. Some models of human behavior in the social sciences assume that humans can be reasonably approximated or described as rational entities, as in rational choice theory or Downs' political agency model. The concept of bounded rationality complements the idea of rationality as optimization, which views decision-making as a fully rational process of finding an optimal choice given the information available. Therefore, bounded rationality can be said to address the discrepancy between the assumed perfect rationality of human behaviour (which is utilised by other economics theories), and the reality of human cognition. In short, bounded rationality revises notions of perfect rationality to account for the fact that perfectly rational decisions are often not feasible in practice because of the intractability of natural decision problems and the finite computational resources available for making them. The concept of bounded rationality continues to influence (and be debated in) different disciplines, including political science, economics, psychology, law, philosophy, and cognitive science. Background and motivation The term bounded rationality was coined by Herbert A. Simon, who proposed it as an alternative basis for the mathematical and neoclassical economic modelling of decision-making, as used in economics, political science, and related disciplines. Many economics models assume that agents are on average rational, and can in large quantities be approximated to act according to their preferences in order to maximise utility. With bounded rationality, Simon's goal was "to replace the global rationality of economic man with a kind of rational behavior that is compatible with the access to information and the computational capacities that are actually possessed by organisms, including man, in the kinds of environments in which such organisms exist." Soon after the term bounded rationality appeared, studies in the topic area began examining the issue in depth. A study completed by Allais in 1953 began to generate ideas of the irrationality of decision making as he found that given preferences, individuals will not always choose the most rational decision and therefore the concept of rationality was not always reliable in economic predictions. In Models of Man, Simon argues that most people are only partly rational, and are irrational in the remaining part of their actions. In another work, he states "boundedly rational agents experience limits in formulating and solving complex problems and in processing (receiving, storing, retrieving, transmitting) information". 
Simon used the analogy of a pair of scissors, where one blade represents "cognitive limitations" of actual humans and the other the "structures of the environment", illustrating how minds compensate for limited resources by exploiting known structural regularity in the environment. Simon describes a number of dimensions along which classical models of rationality can be made somewhat more realistic, while remaining within the vein of fairly rigorous formalization. These include: limiting the types of utility functions recognizing the costs of gathering and processing information the possibility of having a vector or multi-valued utility function Simon suggests that economic agents use heuristics to make decisions rather than a strict rigid rule of optimization. They do this because of the complexity of the situation. An example of behaviour inhibited by heuristics can be seen when comparing the cognitive strategies utilised in simple situations (e.g. tic-tac-toe), in comparison to strategies utilised in difficult situations (e.g. chess). Both games, as defined by game theory economics, are finite games with perfect information, and therefore equivalent. However, within chess, mental capacities and abilities are a binding constraint, therefore optimal choices are not a possibility. Thus, in order to test the mental limits of agents, complex problems, such as those within chess, should be studied to test how individuals work around their cognitive limits, and what behaviours or heuristics are used to form solutions Anchoring and adjustment are types of heuristics that give some explanation to bounded rationality and why decision makers do not make rational decisions. A study undertaken by Zenko et al. showed that the amount of physical activity completed by decision makers was able to be influenced by anchoring and adjustment as most decision makers would typically be considered irrational and would unlikely do the amount of physical activity instructed and it was shown that these decision makers use anchoring and adjustment to decide how much exercise they will complete. Other heuristics that are closely related to the concept of bounded rationality include the availability heuristic and representativeness heuristic. The availability heuristic refers to how people tend to overestimate the likelihood of events that are easily brought to mind, such as vivid or recent experiences. This can lead to biased judgments based on incomplete or unrepresentative information. The representativeness heuristic states that people often judge the probability of an event based on how closely it resembles a typical or representative case, ignoring other relevant factors like base rates or sample size. These mental shortcuts and systematic errors in thinking demonstrate how people's decision-making abilities are limited and often deviate from perfect rationality. Example An example of bounded rationality in individuals would be a customer who made a suboptimal decision to order some food at the restaurant because they felt rushed by the waiter who was waiting beside the table. Another example is a trader who would make a moderate and risky decision to trade their stock due to time pressure and imperfect information of the market at that time. In organisational context, a CEO cannot make fully rational decisions in an ad-hoc situation because their cognition was overwhelmed by a lot of information in that tense situation. 
The CEO also needs to take time to process all the information given to them, but due to the limited time and fast decision making needed, they will disregard some information in determining the decision. Bounded rationality can have significant effects on political decision-making, voter behavior, and policy outcomes. A prominent example of this is heuristic-based voting. According to the theory of bounded rationality, individuals have limited time, information, and cognitive resources to make decisions. In the context of voting, this means that most voters cannot realistically gather and process all available information about candidates, issues, and policies. Even if such information were available, the time and effort required to analyze it would be prohibitively high for many voters. As a result, voters often resort to heuristics, which allow voters to make decisions based on cues like party affiliation, candidate appearance, or single-issue positions, rather than engaging in a comprehensive evaluation of all relevant factors. For example, a voter who relies on the heuristic of party affiliation may vote for a candidate whose policies do not actually align with their interests, simply because the candidate belongs to their preferred party. Model extensions As decision-makers have to make decisions about how and when to decide, Ariel Rubinstein proposed to model bounded rationality by explicitly specifying decision-making procedures, since decision-makers with the same information are not necessarily able to analyse the situation equally and thus reach the same rational decision. Rubinstein argues that consistency in reaching a final decision for the same level of information must factor in the decision making procedure itself. This puts the study of decision procedures on the research agenda. Gerd Gigerenzer stated that decision theorists, to some extent, have not adhered to Simon's original ideas. Rather, they have considered how decisions may be crippled by limitations to rationality, or have modeled how people might cope with their inability to optimize. Gigerenzer proposes and shows that simple heuristics often lead to better decisions than theoretically optimal procedures. Moreover, Gigerenzer claimed, agents react relative to their environment and use their cognitive processes to adapt accordingly. Huw Dixon later argued that it may not be necessary to analyze in detail the process of reasoning underlying bounded rationality. If we believe that agents will choose an action that gets them close to the optimum, then we can use the notion of epsilon-optimization, which means we choose our actions so that the payoff is within epsilon of the optimum. If we define the optimum (best possible) payoff as U*, then the set of epsilon-optimizing options S(ε) can be defined as all those options s such that: U(s) ≥ U* − ε. The notion of strict rationality is then a special case (ε = 0). The advantage of this approach is that it avoids having to specify in detail the process of reasoning, but rather simply assumes that whatever the process is, it is good enough to get near to the optimum. From a computational point of view, decision procedures can be encoded in algorithms and heuristics. Edward Tsang argues that the effective rationality of an agent is determined by its computational intelligence. Everything else being equal, an agent that has better algorithms and heuristics could make more rational (closer to optimal) decisions than one that has poorer heuristics and algorithms. 
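To make the epsilon-optimization idea above concrete, the following is a minimal Python sketch (not from the source) of selecting the set S(ε) over a finite option set; the function name and the example payoffs are purely illustrative assumptions:

def epsilon_optimal_set(options, payoff, epsilon):
    # Return S(eps): all options whose payoff is within epsilon of the optimum.
    u_star = max(payoff(s) for s in options)        # best possible payoff U*
    return [s for s in options if payoff(s) >= u_star - epsilon]

# Illustrative payoffs for four hypothetical actions
payoffs = {"a": 10.0, "b": 9.6, "c": 7.2, "d": 9.9}
options = list(payoffs)

print(epsilon_optimal_set(options, payoffs.get, 0.5))   # ['a', 'b', 'd']
print(epsilon_optimal_set(options, payoffs.get, 0.0))   # ['a']  (strict rationality)

Setting epsilon to zero recovers strict optimization, while a positive epsilon admits any "good enough" option, which is the sense in which the approach sidesteps modelling the reasoning process itself.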
Tshilidzi Marwala and Evan Hurwitz in their study on bounded rationality observed that advances in technology (e.g. computer processing power because of Moore's law, artificial intelligence, and big data analytics) expand the bounds that define the feasible rationality space. Because of this expansion of the bounds of rationality, machine automated decision making makes markets more efficient. The model of bounded rationality also extends to bounded self-interest, in which humans are sometimes willing to forsake their own self-interests for the benefits of others due to incomplete information that the individuals have at the time being. This is something that had not been considered in earlier economic models. The theory of rational inattention, an extension of bounded rationality, studied by Christopher Sims, found that decisions may be chosen with incomplete information as opposed to affording the cost to receive complete information. This shows that decision makers choose to endure bounded rationality. On the other hand, another extension came from the notion of bounded rationality and was explained by Ulrich Hoffrage and Torsten Reimer in their studies of a "fast and frugal heuristic approach". The studies explained that complete information sometimes is not needed as there are easier and simpler ways to reach the same optimal outcome. However, this approach which is usually known as the gaze heuristic was explained to be the theory for non-complex decision making only. Behavioral Economics Bounded rationality attempts to address assumption points discussed within neoclassical economics theory during the 1950s. This theory assumes that the complex problem, the way in which the problem is presented, all alternative choices, and a utility function, are all provided to decision-makers in advance, where this may not be realistic. This was widely used and accepted for a number of decades, however economists realised some disadvantages exist in utilising this theory. This theory did not consider how problems are initially discovered by decision-makers, which could have an impact on the overall decision. Additionally, personal values, the way in which alternatives are discovered and created, and the environment surrounding the decision-making process are also not considered when using this theory. Alternatively, bounded rationality focuses on the cognitive ability of the decision-maker and the factors which may inhibit optimal decision-making. Additionally, placing a focus on organisations rather than focusing on markets as neoclassical economics theory does, bounded rationality is also the basis for many other economics theories (e.g. organisational theory) as it emphasises that the "...performance and success of an organisation is governed primarily by the psychological limitations of its members..." as stated by John D.W. Morecroft (1981). One concept closely related to the idea of bounded rationality is nudging. The connection between nudging and bounded rationality lies in the fact that nudges are designed to help people overcome the cognitive limitations and biases that arise from their bounded rationality. Nudging involves designing choice architectures that guide people towards making better decisions without limiting their freedom of choice. The concept was popularized by Richard Thaler and Cass Sunstein in their 2008 book "Nudge: Improving Decisions About Health, Wealth, and Happiness." 
As nudging has become more popular in the last decade, governments around the world and nongovernmental organizations like the United Nations have established behavioral insights teams or incorporated nudging into their policy-making processes. One way nudges are used is with the aim of simplifying complex decisions by presenting information in a clear and easily understandable format, reducing the cognitive burden on individuals. Nudges can also be designed to counteract common heuristics and biases, such as the default bias (people's tendency to stick with the default option). For example, with adequate other policies in place, making posthumous organ donation the default option with an opt-out provision has been shown to increase actual donation rates. Moreover, in cases where the information needed to make an informed decision is incomplete, nudges can provide the relevant information. For instance, displaying the calorie content of menu items can help people make healthier food choices. Nudges can also guide people towards satisfactory options when they are unable or unwilling to invest the time and effort to find the optimal choice. For example, providing a limited set of well-designed investment options in a retirement plan can help people make better financial decisions. In economic models based on behavioral economics, implementing bounded rationality implies finding replacements for utility maximization and profit maximization as used in conventional general equilibrium models. Stock-flow consistent models (SFC) and agent-based models (ABM) often implement that agents follow a sequence of simple rule-of-thumb behavior instead of an optimization procedure. Other dynamic models interpret bounded rationality as “looking for the direction of improvement“ such that agents use a gradient climbing approach to increase their utility. Principles of Boundedness In addition to bounded rationality, bounded willpower and bounded selfishness are two other key concepts in behavioral economics that challenge the traditional neoclassical economic assumption of perfectly rational, self-interested, and self-disciplined individuals.  Bounded willpower refers to the idea that people often have difficulty following through on their long-term plans and intentions due to limited self-control and the tendency to prioritize short-term desires. This can lead to problems like procrastination, impulsive spending, and unhealthy lifestyle choices. The concept of bounded willpower is closely related to the idea of hyperbolic discounting, which describes how people tend to value immediate rewards more highly than future ones, leading to inconsistent preferences over time. While traditional economic models assume that people are primarily motivated by self-interest, bounded selfishness suggests that people also have social preferences and care about factors such as fairness, reciprocity, and the well-being of others. This concept helps explain phenomena like charitable giving, cooperation in social dilemmas, and the existence of social norms. However, people's concern for others is often bounded in the sense that it is limited in scope and can be influenced by factors such as in-group favoritism and emotional distance. Together, these three concepts form the core of behavioral economics and have been used to develop more realistic models of human decision-making and behavior. 
By recognizing the limitations and biases that people face in their daily lives, behavioral economists aim to design policies, institutions, and choice architectures that can help people make better decisions and achieve their long-term goals. In psychology The collaborative works of Daniel Kahneman and Amos Tversky expand upon Herbert A. Simon's ideas in the attempt to create a map of bounded rationality. The research attempted to explore the choices made by what was assumed as rational agents compared to the choices made by individuals optimal beliefs and their satisficing behaviour. Kahneman cites that the research contributes mainly to the school of psychology due to imprecision of psychological research to fit the formal economic models; however, the theories are useful to economic theory as a way to expand simple and precise models and cover diverse psychological phenomena. Three major topics covered by the works of Daniel Kahneman and Amos Tversky include heuristics of judgement, risky choice, and framing effect, which were a culmination of research that fit under what was defined by Herbert A. Simon as the psychology of bounded rationality. In contrast to the work of Simon; Kahneman and Tversky aimed to focus on the effects bounded rationality had on simple tasks which therefore placed more emphasis on errors in cognitive mechanisms irrespective of the situation. The study undertaken by Kahneman found that emotions and the psychology of economic decisions play a larger role in the economics field than originally thought. The study focused on the emotions behind decision making such as fear and personal likes and dislikes and found these to be significant factors in economic decision making. Bounded rationality is also shown to be useful in negotiation techniques as shown in research undertaken by Dehai et al. that negotiations done using bounded rationality techniques by labourers and companies when negotiating a higher wage for workers were able to find an equal solution for both parties. Influence on social network structure Recent research has shown that bounded rationality of individuals may influence the topology of the social networks that evolve among them. In particular, Kasthurirathna and Piraveenan have shown that in socio-ecological systems, the drive towards improved rationality on average might be an evolutionary reason for the emergence of scale-free properties. They did this by simulating a number of strategic games on an initially random network with distributed bounded rationality, then re-wiring the network so that the network on average converged towards Nash equilibria, despite the bounded rationality of nodes. They observed that this re-wiring process results in scale-free networks. Since scale-free networks are ubiquitous in social systems, the link between bounded rationality distributions and social structure is an important one in explaining social phenomena. See also References Further reading Bayer, R. C., Renner, E., & Sausgruber, R. (2009). Confusion and reinforcement learning in experimental public goods games. NRN working papers 2009–22, The Austrian Center for Labor Economics and the Analysis of the Welfare State, Johannes Kepler University Linz, Austria. Felin, T., Koenderink, J., & Krueger, J. (2017). "Rationality, perception and the all-seeing eye." Psychonomic Bulletin and Review, 25: 1040-1059. DOI 10.3758/s13423-016-1198-z Gershman, S.J., Horvitz, E.J., & Tenenbaum, J.B. (2015). 
Computational rationality: A converging paradigm for intelligence in brains, minds, and machines. Science, 49: 273-278. DOI: 10.1126/science.aac6076 Hayek, F.A (1948) Individualism and Economic order Simon, Herbert (1957). "A Behavioral Model of Rational Choice", in Models of Man, Social and Rational: Mathematical Essays on Rational Human Behavior in a Social Setting. New York: Wiley. External links Bounded Rationality in Stanford Encyclopedia of Philosophy Mapping Bounded Rationality by Daniel Kahneman Artificial Intelligence and Economic Theory chapter 7 of Surfing Economics by Huw Dixon. Behavioral economics Game theory Rational choice theory Cognitive biases
Bounded rationality
Mathematics,Biology
4,050
3,061,815
https://en.wikipedia.org/wiki/Homogeneous%20broadening
Homogeneous broadening is a type of emission spectrum broadening in which all atoms radiating from a specific level under consideration radiate with equal opportunity. If an optical emitter (e.g. an atom) shows homogeneous broadening, its spectral linewidth is its natural linewidth, with a Lorentzian profile. Broadening in laser systems Broadening in laser physics is a physical phenomenon that affects the spectroscopic line shape of the laser emission profile. The laser emission is due to the (excitation and subsequent) relaxation of a quantum system (atom, molecule, ion, etc.) between an excited state (higher in energy) and a lower one. These states can be thought of as the eigenstates of the energy operator. The difference in energy between these states is proportional to the frequency/wavelength of the photon emitted. Since this energy difference has a fluctuation, the frequency/wavelength of the "macroscopic emission" (the beam) will have a certain width (i.e. it will be "broadened" with respect to the "ideal" perfectly monochromatic emission). Depending on the nature of the fluctuation, there can be two types of broadening. If the fluctuation in the frequency/wavelength is due to a phenomenon that is the same for each quantum emitter, there is homogeneous broadening, while if each quantum emitter has a different type of fluctuation, the broadening is inhomogeneous. Examples of situations where the fluctuation is the same for each system (homogeneous broadening) are natural or lifetime broadening, and collisional or pressure broadening. In these cases each system is affected "on average" in the same way (e.g. by the collisions due to the pressure). The most frequent situation in solid state systems where the fluctuation is different for each system (inhomogeneous broadening) is when, because of the presence of dopants, the local electric field is different for each emitter, and so the Stark effect changes the energy levels in an inhomogeneous way. The homogeneously broadened emission line will have a Lorentzian profile (i.e. will be best fitted by a Lorentzian function), while the inhomogeneously broadened emission will have a Gaussian profile. One or more phenomena may be present at the same time, but if one has a wider fluctuation, it will be the one responsible for the character of the broadening. These effects are not limited to laser systems, or even to optical spectroscopy. They are relevant in magnetic resonance as well, where the frequency range is in the radiofrequency region for NMR, and one can also refer to these effects in EPR where the lineshape is observed at fixed (microwave) frequency and in a magnetic field range. Semiconductors In semiconductors, if all oscillations have the same eigenfrequency and the broadening in the imaginary part of the dielectric function results only from a finite damping constant, the system is said to be homogeneously broadened, and has a Lorentzian profile. If, however, the system contains many oscillators with slightly different eigenfrequencies distributed about a central value, then the system is inhomogeneously broadened. See also Homogeneity (physics) Voigt profile Spectral line shape References Laser science Atomic, molecular, and optical physics
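For reference, the standard normalized line-shape functions usually associated with the two cases can be written out explicitly; the symbols below (centre frequency ω0, Lorentzian half-width γ, Gaussian width σ) follow common spectroscopy conventions and are not taken from the article itself:

% Homogeneous (Lorentzian) line shape, half-width gamma at half maximum
g_L(\omega) = \frac{1}{\pi}\,\frac{\gamma}{(\omega-\omega_0)^2 + \gamma^2}

% Inhomogeneous (Gaussian) line shape, standard deviation sigma
g_G(\omega) = \frac{1}{\sigma\sqrt{2\pi}}\,\exp\!\left(-\frac{(\omega-\omega_0)^2}{2\sigma^2}\right)

Both functions are normalized to unit area; when homogeneous and inhomogeneous mechanisms are comparable in strength, the observed line is their convolution, the Voigt profile listed above.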
Homogeneous broadening
Physics,Chemistry
707
11,861,063
https://en.wikipedia.org/wiki/Global%20serializability
In concurrency control of databases, transaction processing (transaction management), and other transactional distributed applications, global serializability (or modular serializability) is a property of a global schedule of transactions. A global schedule is the unified schedule of all the individual database (and other transactional object) schedules in a multidatabase environment (e.g., federated database). Complying with global serializability means that the global schedule is serializable (has the serializability property), while each component database (module) has a serializable schedule as well. Note that the common assumption that a collection of serializable components automatically provides overall system serializability is usually incorrect. A need for correctness across databases in multidatabase systems makes global serializability a major goal for global concurrency control (or modular concurrency control). With the proliferation of the Internet, Cloud computing, Grid computing, and small, portable, powerful computing devices (e.g., smartphones), as well as an increase in systems management sophistication, the need for atomic distributed transactions and thus effective global serializability techniques, to ensure correctness in and among distributed transactional applications, seems to increase. In a federated database system or any other more loosely defined multidatabase system, which is typically distributed over a communication network, transactions span multiple (and possibly distributed) databases. Enforcing global serializability in such a system, where different databases may use different types of concurrency control, is problematic. Even if every local schedule of a single database is serializable, the global schedule of a whole system is not necessarily serializable. The massive communication exchanges of conflict information needed between databases to reach conflict serializability globally would lead to unacceptable performance, primarily due to computer and communication latency. Achieving global serializability effectively over different types of concurrency control has been an open problem for several years. The global serializability problem Problem statement The difficulties described above translate into the following problem: Find an efficient (high-performance and fault tolerant) method to enforce Global serializability (global conflict serializability) in a heterogeneous distributed environment of multiple autonomous database systems. The database systems may employ different concurrency control methods. No limitation should be imposed on the operations of either local transactions (confined to a single database system) or global transactions (spanning two or more database systems). Quotations Lack of an appropriate solution for the global serializability problem has driven researchers to look for alternatives to serializability as a correctness criterion in a multidatabase environment (e.g., see Relaxing global serializability below), and the problem has been characterized as difficult and open. The following two quotations demonstrate the mindset about it by the end of the year 1991, with similar quotations in numerous other articles: "Without knowledge about local as well as global transactions, it is highly unlikely that efficient global concurrency control can be provided... Additional complications occur when different component DBMSs [Database Management Systems] and the FDBMSs [Federated Database Management Systems] support different concurrency mechanisms... 
It is unlikely that a theoretically elegant solution that provides conflict serializability without sacrificing performance (i.e., concurrency and/or response time) and availability exists." Proposed solutions Several solutions, some partial, have been proposed for the global serializability problem. Among them: Global conflict graph (serializability graph, precedence graph) checking Distributed Two phase locking (Distributed 2PL) Distributed Timestamp ordering Tickets (local logical timestamps which define local total orders, and are propagated to determine global partial order of transactions) Relaxing global serializability Some techniques have been developed for relaxed global serializability (i.e., they do not guarantee global serializability; see also Relaxing serializability). Among them (with several publications each): Quasi serializability Two-level serializability Another common reason nowadays for Global serializability relaxation is the requirement of availability of internet products and services. This requirement is typically answered by large scale data replication. The straightforward solution for synchronizing replicas' updates of a same database object is including all these updates in a single atomic distributed transaction. However, with many replicas such a transaction is very large, and may span several computers and networks that some of them are likely to be unavailable. Thus such a transaction is likely to end with abort and miss its purpose. Consequently, Optimistic replication (Lazy replication) is often utilized (e.g., in many products and services by Google, Amazon, Yahoo, and alike), while global serializability is relaxed and compromised for eventual consistency. In this case relaxation is done only for applications that are not expected to be harmed by it. Classes of schedules defined by relaxed global serializability properties either contain the global serializability class, or are incomparable with it. What differentiates techniques for relaxed global conflict serializability (RGCSR) properties from those of relaxed conflict serializability (RCSR) properties that are not RGCSR is typically the different way global cycles (span two or more databases) in the global conflict graph are handled. No distinction between global and local cycles exists for RCSR properties that are not RGCSR. RCSR contains RGCSR. Typically RGCSR techniques eliminate local cycles, i.e., provide local serializability (which can be achieved effectively by regular, known concurrency control methods); however, obviously they do not eliminate all global cycles (which would achieve global serializability). References Data management Databases Transaction processing Concurrency control
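To illustrate the first proposed solution listed above, the sketch below (in Python) builds a global conflict (precedence) graph from per-database precedence edges and checks it for cycles. It is a toy model under simplifying assumptions — edge lists are given directly and transaction identifiers are global — not an implementation of any published protocol:

from collections import defaultdict

def has_cycle(edges):
    # Detect a cycle in a directed conflict (precedence) graph via depth-first search.
    # edges: iterable of (t_from, t_to) pairs meaning t_from must precede t_to.
    graph = defaultdict(list)
    nodes = set()
    for a, b in edges:
        graph[a].append(b)
        nodes.update((a, b))

    WHITE, GREY, BLACK = 0, 1, 2           # unvisited / on the DFS stack / finished
    colour = dict.fromkeys(nodes, WHITE)

    def dfs(node):
        colour[node] = GREY
        for nxt in graph[node]:
            if colour[nxt] == GREY:        # back edge found, so the graph has a cycle
                return True
            if colour[nxt] == WHITE and dfs(nxt):
                return True
        colour[node] = BLACK
        return False

    return any(colour[n] == WHITE and dfs(n) for n in nodes)

# Two databases, each locally serializable (acyclic local graphs),
# whose union nevertheless contains the global cycle T1 -> T2 -> T1.
local_db_a = [("T1", "T2")]    # at database A, T1 precedes T2
local_db_b = [("T2", "T1")]    # at database B, T2 precedes T1
print(has_cycle(local_db_a), has_cycle(local_db_b))  # False False
print(has_cycle(local_db_a + local_db_b))            # True: not globally serializable

The example mirrors the point made in the article: each local schedule is conflict serializable on its own, yet the combined global graph contains a cycle, so the global schedule is not serializable, and detecting such cycles requires exchanging conflict information across databases.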
Global serializability
Technology
1,156
61,932,070
https://en.wikipedia.org/wiki/Grazing%20%28behaviour%29
Grazing is a method of feeding in which a herbivore feeds on low-growing plants such as grasses or other multicellular organisms, such as algae. Many species of animals can be said to be grazers, from large animals such as hippopotamuses to small aquatic snails. Grazing behaviour is a type of feeding strategy within the ecology of a species. Specific grazing strategies include graminivory (eating grasses); coprophagy (producing part-digested pellets which are reingested); pseudoruminant (having a multi-chambered stomach but not chewing the cud); and grazing on plants other than grass, such as on marine algae. Grazing's ecological effects can include redistributing nutrients, keeping grasslands open or favouring a particular species over another. Ecology Many small selective herbivores follow larger grazers which skim off the highest, tough growth of grasses, exposing tender shoots. For terrestrial animals, grazing is normally distinguished from browsing in that grazing is eating grass or forbs, whereas browsing is eating woody twigs and leaves from trees and shrubs. Grazing differs from predation because the organism being grazed upon may not be killed. It differs from parasitism because the two organisms live together in a constant state of physical externality (i.e., low intimacy). Water animals that feed by rasping algae and other micro-organisms from stones are called grazers–scrapers. Graminivory Graminivory is a form of grazing involving feeding primarily on grass (specifically "true" grasses in the Poaceae). Horses, cattle, capybara, hippopotamuses, grasshoppers, geese, and giant pandas are graminivores. Giant pandas (Ailuropoda melanoleuca) are obligate bamboo grazers, 99% of their diet consisting of sub-alpine bamboo species. Cecotrophy For lagomorphs (rabbits, hares, pikas), easily digestible food is processed in the gastrointestinal tract & expelled as regular feces. But to get nutrients out of hard-to-digest fiber, lagomorphs ferment fiber in the cecum (in the GI tract) and then expel the contents as cecotropes, which are reingested (cecotrophy). The cecotropes are then absorbed in the small intestine to utilize the nutrients. This process is different from cows chewing their cud but with similar results. Capybara (Hydrochoerus hydrochaeris) are herbivores that graze mainly on grasses and aquatic plants, as well as fruit and tree bark. As with other grazers, they can be very selective, feeding on the leaves of one species and disregarding other species surrounding it. They eat a greater variety of plants during the dry season, as fewer plants are available. While they eat grass during the wet season, they have to switch to more abundant reeds during the dry season. The capybara's jaw hinge is not perpendicular; hence, it chews food by grinding back-and-forth rather than side-to-side. Like lagomorphs, capybara create, expel & eat cecotropes (cecotrophy) to get more nutrition from their food. They may also regurgitate food to masticate again, similar to cud-chewing by a cow. As with other rodents, the front teeth of capybara grow continually to compensate for the constant wear from eating grasses. Their cheek teeth also grow continuously. Pseudoruminant The hippopotamus is a large, semi-aquatic mammal inhabiting rivers, lakes, and mangrove swamps. During the day, they remain cool by staying in the water or mud; reproduction and childbirth occur in water. They emerge at dusk to graze on grasses. While hippopotamuses rest near each other in the water, grazing is solitary. 
Their incisors can be as long as and the canines (tusks) up to ; however, the canines and incisors are used for combat, and play no role in feeding. Hippos rely on their broad, horny lips to grasp and pull grasses which are then ground by the molars. The hippo is considered to be a pseudoruminant; it has a complex three- or four-chambered stomach but does not "chew cud". Non-grass grazing Although grazing is typically associated with mammals feeding on grasslands, ecologists sometimes use the word in a broader sense to include any organism that feeds on any other species without ending the life of the prey organism. Use of the term "grazing" varies further; for example, a marine biologist may describe herbivorous sea urchins that feed on kelp as grazers, even when they kill the organism by cutting the plant at the base. Malacologists sometimes apply the word to aquatic snails that feed by consuming the microscopic film of algae, diatoms and detritus—a biofilm—that covers the substrate and other surfaces underwater. In marine ecosystems, grazing by mesograzers such as some crustaceans maintains habitat structure by preventing algal overgrowth, especially in coral reefs. Benefits Environmental Grazer urine and feces "recycle nitrogen, phosphorus, potassium and other plant nutrients and return them to the soil". Grazing can allow for the accumulation of organic matter which may help to combat soil erosion. This acts as nutrition for insects and organisms found within the soil. These organisms "aid in carbon sequestration and water filtration". Biodiversity When grass is grazed, dead litter grass is reduced which is advantageous for birds such as waterfowl. Grazing can increase biodiversity. Without grazing, many of the same grasses grow, for example brome and bluegrass, consequently producing a monoculture. References External links Ethology Herbivory Land use
Grazing (behaviour)
Biology
1,226
2,923,194
https://en.wikipedia.org/wiki/Off-the-grid
Off-the-grid or off-grid is a characteristic of buildings and a lifestyle designed in an independent manner without reliance on one or more public utilities. The term "off-the-grid" traditionally refers to not being connected to the electrical grid, but can also include other utilities like water, gas, and sewer systems, and can scale from residential homes to small communities. Off-the-grid living allows for buildings and people to be self-sufficient, which is advantageous in isolated locations where normal utilities cannot reach and is attractive to those who want to reduce environmental impact and cost of living. Generally, an off-grid building must be able to supply energy and potable water for itself, as well as manage food, waste and wastewater. Energy Energy for electrical power and heating can be derived from burning hydrocarbons (e.g., diesel generators, propane heating), or generated on-site with renewable energy sources such as solar (particularly with photovoltaics), wind, or micro hydro. Additional forms of energy include biomass, commonly in the form of wood, waste, and alcohol fuels, and geothermal energy, which uses differences in the underground temperature to regulate indoor air environments in buildings. It is also possible to reduce energy needs outright, as in Old Order Amish and Old Order Mennonite communities, where solar and wind technology, while used and sanctioned by some, is not universally agreed upon, and many Amish people still use steam engines. Electrical power Grid-connected buildings receive electricity from power plants, which mainly use natural resources such as coal and natural gas as energy to convert into electrical power. A 2017 breakdown of world energy sources shows that the world, mainly dependent on grid power, relies mostly on non-renewables, while popular renewables such as solar PV and wind power make up a small portion. When off the grid, such as in Africa, where 55% of people do not have access to electricity, buildings and homes must take advantage of the renewable energy sources around them, because these are the most abundant and allow for self-sufficiency. Solar photovoltaics Solar photovoltaics (PV), which use energy from the sun, are one of the most popular energy solutions for off-grid buildings. PV arrays (solar panels) allow for energy from the sun to be converted into electrical energy. PV is dependent upon solar radiation and ambient temperature. Other components needed in a PV system include charge controllers, inverters, and rapid shutdown controls. These systems give off-grid sites the ability to generate energy without grid connection. Every quarter, Bloomberg New Energy Finance evaluates manufacturers on their actual projects over the previous quarter and publishes a list of Tier 1 Solar Module (panel) Manufacturers. Wind turbines Wind energy can be harnessed by wind turbines. Wind turbine components consist of blades that get pushed by wind, gearboxes, controllers, generators, brakes, and a tower. The amount of mechanical power captured from a wind turbine is a function of the wind speed, air density, blade rotational area, and the aerodynamic power coefficient of the turbine. Micro-hydro Where water is abundant, hydropower is a promising energy solution. Large scale hydropower involves a dam and reservoir, and small scale micro-hydro can use turbines in rivers with constant levels of water. The amount of mechanical power generated is a function of the flow of the stream, turbine size, water density, and power coefficient, similar to wind turbines. The energy from waves and tides can also provide power to coastal areas.
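The statement above that captured wind power depends on wind speed, air density, rotor area and the power coefficient corresponds to the standard relation P = 1/2 · ρ · A · v³ · Cp, and micro-hydro output is commonly estimated from flow and head. The Python sketch below works through both with made-up numbers; the rotor radius, wind speed, flow, head and efficiencies are all assumptions for illustration, not data for any real installation.

import math

rho_air = 1.225   # air density in kg/m^3 (sea-level value, assumed)
radius = 5.0      # blade radius in metres (hypothetical small turbine)
wind_speed = 8.0  # wind speed in m/s (assumed)
cp = 0.40         # aerodynamic power coefficient (assumed; the Betz limit is about 0.59)

swept_area = math.pi * radius ** 2
wind_power_w = 0.5 * rho_air * swept_area * wind_speed ** 3 * cp
print(f"Wind turbine estimate: {wind_power_w / 1000:.1f} kW from {swept_area:.0f} m^2 of rotor area")

# Analogous micro-hydro estimate in the common head-and-flow form P = rho * g * Q * h * efficiency.
rho_water, g = 1000.0, 9.81  # kg/m^3 and m/s^2
flow = 0.05                  # stream flow through the turbine in m^3/s (assumed)
head = 10.0                  # height drop in metres (assumed)
efficiency = 0.6             # overall turbine and generator efficiency (assumed)
print(f"Micro-hydro estimate: {rho_water * g * flow * head * efficiency / 1000:.1f} kW")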
Batteries When renewables produce energy that is not currently needed, the electrical energy is usually directed to charge a battery. This solves intermittency issues caused by the non-constant production of renewables and allows for variations in building loads. Common batteries include the lead-acid battery and lithium-ion battery. There are portable batteries and non-portable batteries. Portable power stations are often used in remote areas, since they don't require installation and can be used in a variety of scenarios. The technology of these portable batteries has evolved considerably over the years. Most of the portable power stations use two types of lithium-ion batteries: Nickel Manganese Cobalt (NMC) batteries and Lithium Iron Phosphate (LFP or LiFePO4) batteries. Hybrid energy systems In order to protect against intermittency issues and system failures, many off-grid communities create hybrid energy systems. These combine traditional renewables such as solar PV, wind, and micro-hydro with batteries or even diesel generators. This can be cheaper and more effective than extending or maintaining grids to isolated communities. Radioisotope thermoelectric generator Historically, remote applications such as lighthouses, weather stations and the like, which draw a small but continuous amount of power, were powered by Radioisotope thermoelectric generators (RTGs), with the needed radioisotopes either extracted from spent nuclear fuel or produced in dedicated facilities. Both the Soviet Union and the United States employed numerous such devices on Earth, and almost every deep space probe reaching beyond the orbit of Mars (and even some in the inner solar system) has had an RTG to provide power where solar panels no longer deliver sufficient electricity per unit of mass. Direct current buildings Electricity produced by photovoltaics is direct current (DC) and is stored in batteries as direct current; DC buildings would eliminate the need for conversions between AC and DC. One third of electricity in the home is already used as DC for electronics, LED lights, and other appliances. The market for DC home appliances is maturing, which is necessary to have a 100% DC powered home. The electrical panel, circuit breakers, and fuses would need to be replaced with DC compatible components if retrofitting an AC house to DC. For net metering (selling back to the grid) an inverter would still be needed, as it would be to use the grid as a backup in a grid-tied electrical system. DC electricity doesn't transmit over power lines efficiently over long distances, but if it is generated and stored in batteries on site, it is more efficient by 10-20 percent to keep it as DC and run appliances that way without inverting. Temperature control Types of solar-energy passive off-grid cooling systems could be used for cooling houses and/or refrigeration – including some that do not require electrical components and allow for chemically stored on-demand energy. Such systems may be useful for climate change mitigation and adaptation. Communications Meshnets such as B.A.T.M.A.N. could be used to sustain or establish communications without conventional infrastructure. Moreover, off-grid communications technologies could be used for environmental, security and agricultural monitoring as well as for emergency communications and coordination – such as for work assignment.
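As a rough illustration of how battery storage is matched to loads and intermittent production, the sketch below sizes a battery bank from an assumed daily load, days of autonomy and depth of discharge; the figures and the simple first-pass formula are illustrative only, not a recommendation for any specific battery chemistry or product.

daily_load_kwh = 6.0       # assumed average daily electricity consumption
days_of_autonomy = 2.0     # days the bank should cover without any production
depth_of_discharge = 0.8   # usable fraction of capacity (typical assumption for LiFePO4)
system_voltage = 48.0      # nominal DC bus voltage (assumed)

required_kwh = daily_load_kwh * days_of_autonomy / depth_of_discharge
required_ah = required_kwh * 1000 / system_voltage
print(f"Battery bank: about {required_kwh:.1f} kWh, or {required_ah:.0f} Ah at {system_voltage:.0f} V")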
Healthcare Drones have been used for off-grid healthcare, especially in the most remote regions of the world. With communications enabled, they deliver test samples, medicine, vaccines, food, water and anti-venoms. Waste management Small-scale waste management techniques in Western Europe, often for specific or standardized waste, were reported to mostly use one of two main strategies: aerobic treatment (with plants) and anaerobic treatment (with biogas production). Water and sanitation Water is a crucial consideration in the off-grid environment; it must be collected, used, and disposed of efficiently to make good use of the local environment. There are many ways to supply water for indoor domestic use, which vary based on local access and preference. Sources Local water bodies Nearby streams, ponds, rivers, and lakes are easy access points for fresh water. Oceans can also be considered with proper desalination. Wells and springs This traditional method involves digging down to where water is present and abundant underground, usually to the water table or to an aquifer, and bringing it up for use, or collecting at springs where underground water comes to the surface. Systems for bringing underground water to buildings include wind and solar driven pumps or hand pumps. Well water should be tested on a regular basis, and when changes in the water's taste, odor, or appearance occur, to ensure its quality. Rain catchments This system relies on the weather to provide water. Catchment systems are designed based on the water demand of the users and local rainfall characteristics. Rainwater is typically funneled from the roof of a building to water tanks where the water is stored until needed. Foreign supplies Another, less self-sufficient method involves bringing large amounts of clean water to the site where it is stored. This system relies on access to clean drinking water elsewhere and transportation to the off-grid site. Devices Atmospheric water generators have a large potential for off-the-grid water generation. Treatment Wherever the water does come from, it must be safe to drink and use indoors. For various issues with water quality, different water treatment strategies are available. Filtration A physical barrier allows water to pass through and blocks impurities in the water and, if the filter is fine enough, can filter out biological contaminants. Chemical treatment In order to disinfect water, chemicals such as chlorine, chlorine dioxide, and ozone are introduced, which kill microorganisms. Ultraviolet light (UV) A UV system uses bulbs that emit ultraviolet light into filtered water to kill all types of viruses, bacteria, and protozoa. Electrochemically activated solutions A less typical approach, this involves applying a current to water to which a small amount of salt has been added, in order to disinfect biological contaminants. Combined with filtration, this is a means to provide safe drinking water. Desalination Some groundwater may have high salinity levels and can be non-potable, which can be remedied through distillation. Coastal communities may benefit by getting water from the ocean through the use of desalination plants that remove salt. Water softening The presence of certain minerals in water creates hard water, which can clog pipes over time, interfere with soap and detergents, and leave scum on glasses and dishes. Water softening systems introduce sodium and potassium ions which make the hard minerals precipitate.
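The remark above that catchment systems are sized from user demand and local rainfall can be made concrete with a simple harvest estimate; the roof area, rainfall, runoff coefficient and per-person demand below are all assumed values chosen purely for illustration.

roof_area_m2 = 100.0         # assumed roof footprint
monthly_rain_mm = 50.0       # assumed rainfall for the month
runoff_coefficient = 0.8     # fraction of rainfall actually collected (assumed)
daily_demand_litres = 120.0  # assumed indoor demand per person per day

harvest_litres = roof_area_m2 * monthly_rain_mm * runoff_coefficient  # 1 mm on 1 m^2 = 1 litre
people_supported = harvest_litres / (daily_demand_litres * 30)
print(f"Harvest: {harvest_litres:.0f} L/month, roughly enough for {people_supported:.1f} people")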
Usage and sanitation For off-grid buildings, efficient use of water is needed to prevent water supplies from running out. While this is ultimately habit-dependent, measures include low-flow fixtures for faucets, shower heads, and toilets, which decrease the flow rate or the volume of water per flush to reduce total water used. Water use in toilets can be eliminated through the use of a composting toilet. Automatic leak detectors and tap closures can reduce amounts of wasted water. Greywater recycling can further save on water by reusing water from faucets, showers, dishwashers, and clothes washers. This is done through storing and treating the greywater, which can then be reused as a non-potable water source. If an off-grid home is not connected to a sewer system, a wastewater system must also be included. On-site wastewater management is usually done through storage and leaching. This involves storing greywater and blackwater in a septic tank or aeration tank to be treated, which is connected to a leaching field that slowly allows the water to percolate out into the ground. While other, more expensive wastewater treatment options are also available, this is a common, reliable means to dispose of wastewater without polluting the environment. Financial Going off the grid financially could be done by using cash, cash cards, cryptocurrencies, alternative community currencies, off grid peer-to-peer lending, and bullion coins. It could be useful to protect financial assets from bank failures, bank fraud, asset freezing, electromagnetic pulse, and from creditors or debt collectors. Environmental impact and sustainability Because off-grid buildings and communities mainly rely upon renewable energy, off-grid living is generally good for the environment with little negative impact. Hybrid energy systems also provide communities with a sustainable way to live without the dependence and cost of being connected to public infrastructure, which can be unreliable in developing countries. Generally, isolated concerns about environmental impacts are the use of diesel generators, which produce greenhouse gases, batteries, which use many resources to make and can be hazardous, and pollution in natural environments from solid waste and wastewater. It is prudent to note that, while the concerns below address negative environmental impacts, going off-grid as a whole is a viable option to help reduce impacts on the environment when replacing grid-connected buildings that contribute to global warming and climate change. Diesel generator concerns in Canadian off-grid communities Canada has about 175 aboriginal and northern off-grid communities, defined as "a community that is neither connected to the North American electrical grid nor to the piped natural gas network; it is permanent or long-term (5 years or more), and the settlements have at least 10 permanent buildings." Aboriginal Affairs and Northern Development Canada lists the following environmental concerns for these off-grid communities: Burning large amounts of diesel produces substantial greenhouse gas emissions. This contributes to climate change which negatively affects communities. Fuel must be transported long distances by airplane, truck or barge, leading to a greater risk of fuel spills. The transportation of fuel by trucks on winter roads impacts the environment negatively through high greenhouse gas emissions from the vehicles. Fuel spills may take place while the fuel is being transported and stored, posing environmental risks.
Fuel tank leaks contaminate soil and groundwater. Generators can be noisy and disruptive, especially in quiet, remote communities. Emissions from diesel generators could contribute to health problems in community members. The environmental impacts of the systems used in off-grid buildings must also be considered due to embodied energy, embodied carbon, choice and source of materials, which can contribute to world issues such as climate change, air, water, and soil pollution, resource depletion and more. Sustainable communities The concept of a sustainable off-grid community must take into consideration the basic needs of all who live in the community. To become truly self-sufficient, the community would need to provide all of its own electrical power, food, shelter and water. Using renewable energy, an on-site water source, sustainable agriculture and vertical farming techniques is paramount in taking a community off the grid. A recent concept design by Eric Wichman shows a multi-family community, which combines all of these technologies into one self-sufficient neighborhood. To grow the community you simply add neighborhoods using the same model as the first. A self-sustained community reduces its impact on the environment by controlling its waste and carbon footprint. Economic consideration In situations where grid parity has been reached, it becomes cheaper to generate one's own electricity rather than purchasing it from the grid. This depends on equipment costs, the availability of renewable energy sources, and the cost of a grid connection. For example, in certain remote areas a grid connection would be prohibitively expensive, resulting in grid parity being reached immediately. It is often done to residential buildings only occasionally occupied, such as vacation cabins, to avoid high initial costs of traditional utility connections. Other people choose to live in houses where the cost of outside utilities is prohibitive, or such a distance away as to be impractical. In his book How to live off-grid Nick Rosen lists seven reasons for going off-grid. The top two are saving money, and reducing the carbon footprint. Others include survivalists, preparing for the collapse of the oil economy and bringing life back to the countryside. Off-grid power for marginalized communities Reliable centralized electricity systems have provided supply constancy which has bolstered societies and their economies. Electricity provides opportunities for improved productivity, learning, and hygienic end-uses in the home, such as cooking without the use of polluting biomass fuel sources, yet as of 2016, 20 percent of people worldwide lived without it. Bridging the gap from the current under-provision of grid electricity to universal access has been projected to require US$17 trillion and 30 years even on a rigorous timetable. Researchers have argued that a lack of centralized energy infrastructure can result in low resilience to damage to productivity and property from changing climates and severe weather. In addition, the advantages of central power generation and distribution are receding in the face of climatic degradation due to fossil fuel powered generation, vulnerabilities to extreme weather events and electronic manipulation, and increasingly complex design and regulatory processes. Decentralized, off-grid energy systems can constitute a sustainable interim alternative to extending national grids to rural customers. 
Those using limited off-grid power as a stepping stool to eventual grid access can accumulate energy efficient knowledge, behavior, and products that confer added resiliency while grid networks increase in reliability and carbon neutrality. However, providing off-grid electricity to rural users without also including training and education about its use and applications can result in under-utilization. To counteract this possibility, off-grid systems should reflect the cultural structures, values, and mores of host communities. Off-grid electrical systems can power individual residences or a community linked in a shared arrangement known as a micro-grid. In addition, they may be powered by renewable energy sources or by conventional fossil fuels. In Kenya, Mpeketoni township began a community-based, diesel-powered micro-grid project (the Mpeketoni Electricity Project [MEP]) in 1994 with an outlay of approximately US$40,000, and eventually grew to serve 105 residences and 116 commercial, educational, government, and healthcare buildings. The MEP demonstrated unanticipated supply and demand effects when artisans using tools powered by MEP electricity increased their productivity enough to cause depreciation of their wares, necessitating lowering of their prices; however, higher volumes of sales eventually offset these losses. MEP electricity facilitated cold storage of agricultural products, in addition to well pumping, which allowed students who previously spent several hours per day fetching water to spend that time studying in the evening by electric light. Electricity provided by the MEP also expanded teaching hours and sanitation at local schools through electric lighting and pumped water. The MEP off-grid project had numerous direct and indirect benefits for community members, and because the MEP emphasized promotion of the uses for electricity and the community had the ability to pay nominal rates for its use, the project achieved 94 percent cost recovery in its first ten years of operation. Relation to alternatives Off-the-grid generation may sometimes inhibit efforts to develop permanent infrastructure – such as in the case of devices for water generation and permanent piped water supply networks. Furthermore, grids may often be substantially more efficient and effective or necessary – such as in the case of smart grids and super grids for sustainable energy – and hence may often only be useful on large scales for autonomous community development of alternatives, as fallback, for disaster response, for other humanitarian aid during temporary relocation, and for initial support of long-term infrastructure development. Land labs as off-grid educational environments Land labs provide an outdoor classroom environment for students to learn about off-grid technology and methods. Within a land lab, students can learn about permaculture, photovoltaics, rainwater catchment, animal husbandry, composting, market gardening, biochar systems, methane digesters, rocket mass heaters, horticulture, ecology, and countless other off-grid concepts. Public schools, charter schools, private schools and homeschools can all benefit from using a land lab environment to teach students about sustainability, independence, and ecological systems. 
See also Anarcho-primitivism Autonomous building Back-to-the-land movement Battery charger Camping Distributed generation Domestic energy consumption Hazards of outdoor recreation Human-wildlife conflict Inverter Microgeneration Rural electrification Rural-urban fringe Simple living Slab City, California Soft power Solar charge controller Solar Guerrilla Survival skills Survivalism Wide area synchronous grid Wildland-urban interface Zero energy building Gallery References External links Expert platform that helps to find the right off grid power system Smithsonian magazine interview about living off-grid OffGridWizard a DIY Guide for Off grid Energy OffGridEnclave a collection of information, projects and community about living off-grid West Texas Weekly article on John Wells' life off the grid Into the wild: the rebels living off-grid all over Europe – in pictures. "They've opted out of cities and started all-new rural lives, building their own straw homes, teepees and bath tubs. Since 2010, photographer Antoine Bruy has travelled from the Pyrenees to Romania tracking down urban refuseniks." The Guardian House types Simple living Renewable energy Lifestyles Low-energy building Electric power
Off-the-grid
Physics,Engineering
4,199
9,996,449
https://en.wikipedia.org/wiki/Atarsamain
Atarsamain (also spelled Attar-shamayin, Attarshamayin, Attarsame (ʿAttarsamē); "morning star of heaven") () was an astral deity of uncertain gender, worshipped in the pre-Islamic northern and central Arabian Peninsula. Worshipped widely by Arab tribes, Atarsamain is known from around 800 BC and is identified in letters of the Assyrian kings Esarhaddon and Assurbanipal. Atarsamain may be identical with Allāt, whose cult was centred on Palmyra and also with Attar. According to Dierk Lange, Atarsamain was the main deity in a trinity of gods worshipped by what he calls the Yumu'il Confederation, which he describes as a northern Arab tribal confederation of Ishmaelite ancestry headed by the "clan of Kedar" (i.e. the Qedarites). Lange identifies Nuha as the solar deity, Ruda as the lunar deity, and Atarsamin as the main deity associated with Venus. A similar trinity of gods representing the sun, moon and Venus is found among the peoples of the South Arabian kingdoms of Awsan, Ma'in, Qataban and Hadhramawt between the 9th and 4th centuries BC. There, the deity associated with Venus was Astarte, the sun deity was Yam, and moon deity was variously called Wadd, Amm and Sin. Atarsamain is twice mentioned in the annals of Ashurbanipal, king of the Neo-Assyrian empire in the 7th century BC. The reference is to a?lu (sā) a-tar-sa-ma-a-a-in ("the people of Attar of Heaven") who are said to have been defeated together with the Nebayot (Nebaioth/Nabataeans) and the Qedarites led by Yauta ben Birdadda, who was also known as "king of the Arabs". References Bibliography Further reading Encyclopedia of Gods, Michael Jordan, Kyle Cathie Limited, 2002 Arabian deities History of the Arabian Peninsula Stellar deities Venusian deities
Atarsamain
Astronomy
434
45,468
https://en.wikipedia.org/wiki/Pareto%20efficiency
In welfare economics, a Pareto improvement formalizes the idea of an outcome being "better in every possible way". A change is called a Pareto improvement if it leaves at least one person in society better-off without leaving anyone else worse off than they were before. A situation is called Pareto efficient or Pareto optimal if all possible Pareto improvements have already been made; in other words, there are no longer any ways left to make one person better-off, without making some other person worse-off. In social choice theory, the same concept is sometimes called the unanimity principle, which says that if everyone in a society (non-strictly) prefers A to B, society as a whole also non-strictly prefers A to B. The Pareto front consists of all Pareto-efficient situations. In addition to the context of efficiency in allocation, the concept of Pareto efficiency also arises in the context of efficiency in production vs. x-inefficiency: a set of outputs of goods is Pareto-efficient if there is no feasible re-allocation of productive inputs such that output of one product increases while the outputs of all other goods either increase or remain the same. Besides economics, the notion of Pareto efficiency has also been applied to selecting alternatives in engineering and biology. Each option is first assessed, under multiple criteria, and then a subset of options is identified with the property that no other option can categorically outperform the specified option. It is a statement of impossibility of improving one variable without harming other variables in the subject of multi-objective optimization (also termed Pareto optimization). History The concept is named after Vilfredo Pareto (1848–1923), an Italian civil engineer and economist, who used the concept in his studies of economic efficiency and income distribution. Pareto originally used the word "optimal" for the concept, but this is somewhat of a misnomer: Pareto's concept more closely aligns with an idea of "efficiency", because it does not identify a single "best" (optimal) outcome. Instead, it only identifies a set of outcomes that might be considered optimal, by at least one person. Overview Formally, a state is Pareto-optimal if there is no alternative state where at least one participant's well-being is higher, and nobody else's well-being is lower. If there is a state change that satisfies this condition, the new state is called a "Pareto improvement". When no Pareto improvements are possible, the state is a "Pareto optimum". In other words, Pareto efficiency is when it is impossible to make one party better off without making another party worse off. This state indicates that resources can no longer be allocated in a way that makes one party better off without harming other parties. In a state of Pareto Efficiency, resources are allocated in the most efficient way possible. Pareto efficiency is mathematically represented when there is no other strategy profile s such that ui (s') ≥ ui (s) for every player i and uj (s') > uj (s) for some player j. In this equation s represents the strategy profile, u represents the utility or benefit, and j represents the player. Efficiency is an important criterion for judging behavior in a game. In a notable and often analyzed game known as Prisoner's Dilemma, depicted below as a normal-form game, this concept of efficiency can be observed, in that the strategy profile (Cooperate, Cooperate) is more efficient than (Defect, Defect). 
Using the definition above, let s = (Defect, Defect), with payoff profile (-2, -2), and s' = (Cooperate, Cooperate), with payoff profile (-1, -1). Then ui(s') > ui(s) for all i. Thus Both Cooperate is a Pareto improvement over Both Defect, which means that Both Defect is not Pareto-efficient. Furthermore, neither of the remaining strategy profiles, with payoffs (0, -5) or (-5, 0), is a Pareto improvement over Both Cooperate, since -5 < -1. Thus Both Cooperate is Pareto-efficient. In zero-sum games, every outcome is Pareto-efficient. A special case of a state is an allocation of resources. The formal presentation of the concept in an economy is the following: Consider an economy with n agents and k goods. Then an allocation {x1, ..., xn}, where xi is the consumption bundle (a vector of the k goods) of agent i for all i, is Pareto-optimal if there is no other feasible allocation {x1', ..., xn'} where, for utility function ui for each agent i, ui(xi') ≥ ui(xi) for all i with ui(xi') > ui(xi) for some i. Here, in this simple economy, "feasibility" refers to an allocation where the total amount of each good that is allocated sums to no more than the total amount of the good in the economy. In a more complex economy with production, an allocation would consist both of consumption vectors and production vectors, and feasibility would require that the total amount of each consumed good is no greater than the initial endowment plus the amount produced. Under the assumptions of the first welfare theorem, a competitive market leads to a Pareto-efficient outcome. This result was first demonstrated mathematically by economists Kenneth Arrow and Gérard Debreu. However, the result only holds under the assumptions of the theorem: markets exist for all possible goods, there are no externalities, markets are perfectly competitive, and market participants have perfect information. In the absence of perfect information or complete markets, outcomes will generally be Pareto-inefficient, per the Greenwald–Stiglitz theorem. The second welfare theorem is essentially the reverse of the first welfare theorem. It states that under similar, ideal assumptions, any Pareto optimum can be obtained by some competitive equilibrium, or free market system, although it may also require a lump-sum transfer of wealth. Pareto efficiency and market failure An ineffective distribution of resources in a free market is known as market failure. Given that there is room for improvement, market failure implies Pareto inefficiency. For instance, excessive use of negative commodities (such as drugs and cigarettes) results in expenses to non-smokers as well as early mortality for smokers. Cigarette taxes may help individuals stop smoking while also raising money to address ailments brought on by smoking. Pareto efficiency and equity A Pareto improvement may be seen, but this does not always imply that the result is desirable or equitable. After a Pareto improvement, inequality could still exist. However, Pareto efficiency does imply that any further change will violate the "do no harm" principle, because at least one person will be made worse off. A society may be Pareto efficient but have significant levels of inequality. If there were three persons and a pie, the most equitable course of action would be to split the pie into three equal portions. However, splitting it in half and giving it to two individuals would also be considered Pareto efficient, because the third person does not lose out (even if he does not partake in the pie).
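The dominance check used in the Prisoner's Dilemma discussion above can be spelled out in a few lines of Python; the payoff profiles are the ones quoted in the text, and the helper function is an invention for this sketch rather than part of the article.

profiles = {
    "Cooperate/Cooperate": (-1, -1),
    "Cooperate/Defect": (-5, 0),
    "Defect/Cooperate": (0, -5),
    "Defect/Defect": (-2, -2),
}

def pareto_dominates(u, v):
    # u dominates v if it is at least as good everywhere and strictly better somewhere.
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

for name, payoff in profiles.items():
    dominated = any(pareto_dominates(other, payoff)
                    for other in profiles.values() if other != payoff)
    print(name, "is", "not Pareto-efficient" if dominated else "Pareto-efficient")

Running this marks only Defect/Defect as not Pareto-efficient, matching the argument above.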
On a frontier of production possibilities, Pareto efficiency will occur. It is impossible to raise the output of products without decreasing the output of services when an economy is functioning on a basic production possibilities frontier, such as at point A, B, or C. Pareto order If multiple sub-goals f1, ..., fk (with k ≥ 2) exist, combined into a vector-valued objective function f = (f1, ..., fk), then generally, finding a unique optimum becomes challenging. This is due to the absence of a total order relation on the objective values which would not always prioritize one target over another target (like the lexicographical order). In the multi-objective optimization setting, various solutions can be "incomparable" as there is no total order relation to facilitate the comparison of their objective vectors. Only the Pareto order is applicable: Consider a vector-valued minimization problem with objective vectors y = (y1, ..., yk) and y' = (y1', ..., yk'). Then y Pareto dominates y' if and only if: yi ≤ yi' for all i, and yj < yj' for some j. We then write y ≺ y', where ≺ is the Pareto order. This means that y is not worse than y' in any goal but is better (since smaller) in at least one goal j. The Pareto order is a strict partial order, though it is not a product order (neither non-strict nor strict). If f(x) ≺ f(x'), then this defines a preorder on the search space and we say x Pareto dominates the alternative x' and we write x ≺ x'. Variants Weak Pareto efficiency Weak Pareto efficiency is a situation that cannot be strictly improved for every individual. Formally, a strong Pareto improvement is defined as a situation in which all agents are strictly better-off (in contrast to just "Pareto improvement", which requires that one agent is strictly better-off and the other agents are at least as good). A situation is weak Pareto-efficient if it has no strong Pareto improvements. Any strong Pareto improvement is also a weak Pareto improvement. The opposite is not true; for example, consider a resource allocation problem with two resources, which Alice values at {10, 0}, and George values at {5, 5}. Consider the allocation giving all resources to Alice, where the utility profile is (10, 0): It is a weak PO, since no other allocation is strictly better for both agents (there are no strong Pareto improvements). But it is not a strong PO, since the allocation in which George gets the second resource is strictly better for George and weakly better for Alice (it is a weak Pareto improvement); its utility profile is (10, 5). A market does not require local nonsatiation to get to a weak Pareto optimum. Constrained Pareto efficiency Constrained Pareto efficiency is a weakening of Pareto optimality, accounting for the fact that a potential planner (e.g., the government) may not be able to improve upon a decentralized market outcome, even if that outcome is inefficient. This will occur if it is limited by the same informational or institutional constraints as are individual agents. An example is a setting where individuals have private information (for example, a labor market where the worker's own productivity is known to the worker but not to a potential employer, or a used-car market where the quality of a car is known to the seller but not to the buyer) which results in moral hazard or adverse selection and a sub-optimal outcome. In such a case, a planner who wishes to improve the situation is unlikely to have access to any information that the participants in the markets do not have. Hence, the planner cannot implement allocation rules which are based on the idiosyncratic characteristics of individuals; for example, "if a person is of type A, they pay price p1, but if of type B, they pay price p2" (see Lindahl prices).
Essentially, only anonymous rules are allowed (of the sort "Everyone pays price p") or rules based on observable behavior ("if any person chooses x at price px, then they get a subsidy of ten dollars, and nothing otherwise"). If there exists no allowed rule that can successfully improve upon the market outcome, then that outcome is said to be "constrained Pareto-optimal". Fractional Pareto efficiency Fractional Pareto efficiency is a strengthening of Pareto efficiency in the context of fair item allocation. An allocation of indivisible items is fractionally Pareto-efficient (fPE or fPO) if it is not Pareto-dominated even by an allocation in which some items are split between agents. This is in contrast to standard Pareto efficiency, which only considers domination by feasible (discrete) allocations. As an example, consider an item allocation problem with two items, which Alice values at {3, 2} and George values at {4, 1}. Consider the allocation giving the first item to Alice and the second to George, where the utility profile is (3, 1): It is Pareto-efficient, since any other discrete allocation (without splitting items) makes someone worse-off. However, it is not fractionally Pareto-efficient, since it is Pareto-dominated by the allocation giving Alice 1/2 of the first item and the whole second item, and the other 1/2 of the first item to George; its utility profile is (3.5, 2). Ex-ante Pareto efficiency When the decision process is random, such as in fair random assignment or random social choice or fractional approval voting, there is a difference between ex-post and ex-ante Pareto efficiency: Ex-post Pareto efficiency means that any outcome of the random process is Pareto-efficient. Ex-ante Pareto efficiency means that the lottery determined by the process is Pareto-efficient with respect to the expected utilities. That is: no other lottery gives a higher expected utility to one agent and at least as high expected utility to all agents. If some lottery L is ex-ante PE, then it is also ex-post PE. Proof: suppose that one of the ex-post outcomes x of L is Pareto-dominated by some other outcome y. Then, by moving some probability mass from x to y, one attains another lottery L' that ex-ante Pareto-dominates L. The opposite is not true: ex-ante PE is stronger than ex-post PE. For example, suppose there are two objects, a car and a house. Alice values the car at 2 and the house at 3; George values the car at 2 and the house at 9. Consider the following two lotteries: With probability 1/2, give car to Alice and house to George; otherwise, give car to George and house to Alice. The expected utility is 1/2·2 + 1/2·3 = 2.5 for Alice and 1/2·9 + 1/2·2 = 5.5 for George. Both allocations are ex-post PE, since the one who got the car cannot be made better-off without harming the one who got the house. With probability 1, give the car to Alice; then, with probability 1/3 give the house to Alice, otherwise give it to George. The expected utility is 2 + 1/3·3 = 3 for Alice and 2/3·9 = 6 for George. Again, both allocations are ex-post PE. While both lotteries are ex-post PE, lottery 1 is not ex-ante PE, since it is Pareto-dominated by lottery 2. Another example involves dichotomous preferences. There are 5 possible outcomes (a, b, c, d, e) and 6 voters. The voters' approval sets are ac, ad, ae, bc, bd, be. All five outcomes are PE, so every lottery is ex-post PE. But the lottery selecting c, d, e with probability 1/3 each is not ex-ante PE, since it gives an expected utility of 1/3 to each voter, while the lottery selecting a, b with probability 1/2 each gives an expected utility of 1/2 to each voter.
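The car-and-house example above can be verified numerically. The short Python sketch below recomputes the expected utilities of the two lotteries from the stated valuations and confirms that lottery 2 ex-ante Pareto-dominates lottery 1; it adds nothing beyond checking the arithmetic, and the data structures are an invention for the sketch.

values = {"Alice": {"car": 2, "house": 3},
          "George": {"car": 2, "house": 9}}

# Each lottery is a list of (probability, allocation); an allocation maps each
# object to the agent who receives it.
lottery1 = [(0.5, {"car": "Alice", "house": "George"}),
            (0.5, {"car": "George", "house": "Alice"})]
lottery2 = [(1 / 3, {"car": "Alice", "house": "Alice"}),
            (2 / 3, {"car": "Alice", "house": "George"})]

def expected_utility(lottery, agent):
    return sum(p * sum(value for obj, value in values[agent].items() if alloc[obj] == agent)
               for p, alloc in lottery)

for name, lottery in (("lottery 1", lottery1), ("lottery 2", lottery2)):
    print(name, {agent: round(expected_utility(lottery, agent), 2) for agent in values})
# lottery 1 yields (2.5, 5.5) and lottery 2 yields (3.0, 6.0), so lottery 2
# ex-ante Pareto-dominates lottery 1, as stated above.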
Bayesian Pareto efficiency Bayesian efficiency is an adaptation of Pareto efficiency to settings in which players have incomplete information regarding the types of other players. Ordinal Pareto efficiency Ordinal Pareto efficiency is an adaptation of Pareto efficiency to settings in which players report only rankings on individual items, and we do not know for sure how they rank entire bundles. Pareto efficiency and equity Although an outcome may be a Pareto improvement, this does not imply that the outcome is equitable. It is possible that inequality persists even after a Pareto improvement. Despite the fact that it is frequently used in conjunction with the idea of Pareto optimality, the term "efficiency" refers to the process of increasing societal productivity. It is possible for a society to have Pareto efficiency while also having high levels of inequality. Consider the following scenario: there is a pie and three persons; the most equitable way would be to divide the pie into three equal portions. However, if the pie is divided in half and shared between two people, it is considered Pareto efficient, meaning that the third person does not lose out (despite the fact that he does not receive a piece of the pie). When making judgments, it is critical to consider a variety of aspects, including social efficiency, overall welfare, and issues such as diminishing marginal value. Pareto efficiency and market failure In order to fully understand market failure, one must first comprehend market success, which is defined as the ability of a set of idealized competitive markets to achieve an equilibrium allocation of resources that is Pareto-optimal in terms of resource allocation. According to the definition of market failure, it is a circumstance in which the conclusion of the first fundamental theorem of welfare is erroneous; that is, when the allocations made through markets are not efficient. In a free market, market failure is defined as an inefficient allocation of resources. Due to the fact that it is feasible to improve, market failure implies Pareto inefficiency. For example, excessive consumption of depreciating items (drugs/tobacco) results in external costs to non-smokers, as well as premature death for smokers who do not quit. An increase in the price of cigarettes could motivate people to quit smoking while also raising funds for the treatment of smoking-related ailments. Approximate Pareto efficiency Given some ε > 0, an outcome is called ε-Pareto-efficient if no other outcome gives all agents at least the same utility, and one agent a utility at least a factor of (1 + ε) higher. This captures the notion that improvements smaller than a factor of (1 + ε) are negligible and should not be considered a breach of efficiency. Pareto-efficiency and welfare-maximization Suppose each agent i is assigned a positive weight ai. For every allocation x, define the welfare of x as the weighted sum of utilities of all agents in x: Wa(x) = a1·u1(x) + a2·u2(x) + ... + an·un(x). Let xa be an allocation that maximizes the welfare over all allocations: xa ∈ argmaxx Wa(x). It is easy to show that the allocation xa is Pareto-efficient: since all weights are positive, any Pareto improvement would increase the sum, contradicting the definition of xa. Japanese neo-Walrasian economist Takashi Negishi proved that, under certain assumptions, the opposite is also true: for every Pareto-efficient allocation x, there exists a positive vector a such that x maximizes Wa. A shorter proof is provided by Hal Varian. Use in engineering The notion of Pareto efficiency has been used in engineering.
Given a set of choices and a way of valuing them, the Pareto front (or Pareto set or Pareto frontier''') is the set of choices that are Pareto-efficient. By restricting attention to the set of choices that are Pareto-efficient, a designer can make trade-offs within this set, rather than considering the full range of every parameter. Use in public policy Modern microeconomic theory has drawn heavily upon the concept of Pareto efficiency for inspiration. Pareto and his successors have tended to describe this technical definition of optimal resource allocation in the context of it being an equilibrium that can theoretically be achieved within an abstract model of market competition. It has therefore very often been treated as a corroboration of Adam Smith's "invisible hand" notion. More specifically, it motivated the debate over "market socialism" in the 1930s. However, because the Pareto-efficient outcome is difficult to assess in the real world when issues including asymmetric information, signalling, adverse selection, and moral hazard are introduced, most people do not take the theorems of welfare economics as accurate descriptions of the real world. Therefore, the significance of the two welfare theorems of economics is in their ability to generate a framework that has dominated neoclassical thinking about public policy. That framework is that the welfare economics theorems allow the political economy to be studied in the following two situations: "market failure" and "the problem of redistribution". Analysis of "market failure" can be understood by the literature surrounding externalities. When comparing the "real" economy to the complete contingent markets economy (which is considered efficient), the inefficiencies become clear. These inefficiencies, or externalities, are then able to be addressed by mechanisms, including property rights and corrective taxes. Analysis of "the problem with redistribution" deals with the observed political question of how income or commodity taxes should be utilized. The theorem tells us that no taxation is Pareto-efficient and that taxation with redistribution is Pareto-inefficient. Because of this, most of the literature is focused on finding solutions where given there is a tax structure, how can the tax structure prescribe a situation where no person could be made better off by a change in available taxes. Use in biology Pareto optimisation has also been studied in biological processes. In bacteria, genes were shown to be either inexpensive to make (resource-efficient) or easier to read (translation-efficient). Natural selection acts to push highly expressed genes towards the Pareto frontier for resource use and translational efficiency. Genes near the Pareto frontier were also shown to evolve more slowly (indicating that they are providing a selective advantage). Common misconceptions It would be incorrect to treat Pareto efficiency as equivalent to societal optimization, as the latter is a normative concept, which is a matter of interpretation that typically would account for the consequence of degrees of inequality of distribution. An example would be the interpretation of one school district with low property tax revenue versus another with much higher revenue as a sign that more equal distribution occurs with the help of government redistribution. Criticism Some commentators contest that Pareto efficiency could potentially serve as an ideological tool. 
With it implying that capitalism is self-regulated thereof, it is likely that the embedded structural problems such as unemployment would be treated as deviating from the equilibrium or norm, and thus neglected or discounted. Pareto efficiency does not require a totally equitable distribution of wealth, which is another aspect that draws in criticism. An economy in which a wealthy few hold the vast majority of resources can be Pareto-efficient. A simple example is the distribution of a pie among three people. The most equitable distribution would assign one third to each person. However, the assignment of, say, a half section to each of two individuals and none to the third is also Pareto-optimal despite not being equitable, because none of the recipients could be made better off without decreasing someone else's share; and there are many other such distribution examples. An example of a Pareto-inefficient distribution of the pie would be allocation of a quarter of the pie to each of the three, with the remainder discarded. The liberal paradox elaborated by Amartya Sen shows that when people have preferences about what other people do, the goal of Pareto efficiency can come into conflict with the goal of individual liberty. Lastly, it is proposed that Pareto efficiency to some extent inhibited discussion of other possible criteria of efficiency. As Wharton School professor Ben Lockwood argues, one possible reason is that any other efficiency criteria established in the neoclassical domain will reduce to Pareto efficiency at the end. See also Admissible decision rule, analog in decision theory Arrow's impossibility theorem Bayesian efficiency Fundamental theorems of welfare economics Deadweight loss Economic efficiency Highest and best use Kaldor–Hicks efficiency Marginal utility Market failure, when a market result is not Pareto-optimal Maximal element, concept in order theory Maxima of a point set Multi-objective optimization Nash equilibrium Pareto-efficient envy-free division Social Choice and Individual Values'' for the "(weak) Pareto principle" TOTREP Welfare economics References Pareto, V (1906). Manual of Political Economy. Oxford University Press. https://global.oup.com/academic/product/manual-of-political-economy-9780199607952?cc=ca&lang=en&. Further reading Book preview. Game theory Law and economics Welfare economics Management theory Mathematical optimization Electoral system criteria Vilfredo Pareto
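The engineering use of the Pareto front described in the article can be illustrated with a small, self-contained sketch that filters a set of candidate designs down to those that are Pareto-efficient, assuming two objectives that are both to be minimized; the design names and scores below are made up for the example and do not come from the article.

candidates = {
    "design A": (10.0, 5.0),
    "design B": (8.0, 7.0),
    "design C": (12.0, 4.0),
    "design D": (9.0, 9.0),   # dominated by design B
}

def dominates(u, v):
    # u dominates v if it is no worse in every objective and strictly better in at least one.
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

pareto_front = [name for name, score in candidates.items()
                if not any(dominates(other, score)
                           for other in candidates.values() if other != score)]
print("Pareto front:", pareto_front)   # designs A, B and C survive; D is dominated

Restricting further trade-off decisions to this surviving set is exactly the design workflow the engineering section describes.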
Pareto efficiency
Mathematics
4,865
3,251,189
https://en.wikipedia.org/wiki/Pseudorandom%20generator%20theorem
In computational complexity theory and cryptography, the existence of pseudorandom generators is related to the existence of one-way functions through a number of theorems, collectively referred to as the pseudorandom generator theorem. Introduction Pseudorandomness A distribution is considered pseudorandom if no efficient computation can distinguish it from the true uniform distribution by a non-negligible advantage. Formally, a family of distributions Dn is pseudorandom if for any polynomial size circuit C, and any ε inversely polynomial in n |Probx∈U [C(x)=1] − Probx∈D [C(x)=1] | ≤ ε. Pseudorandom generators A function Gl: {0,1}l → {0,1}m, where l < m is a pseudorandom generator if: Gl can be computed in time polynomial in l Gl(x) is pseudorandom, when x is uniformly random. One additional pseudorandom bit implies polynomially more pseudorandom bits It can be shown that if there is a pseudorandom generator Gl: {0,1}l → {0,1}l+1, i.e. a generator that adds only one pseudorandom bit, then for any m = poly(l), there is a pseudorandom generator G'l: {0,1}l → {0,1}m. The idea of the proof is as follows: first s bits from uniform distribution Ul are picked and used as the seed to the first instance of Gl, which is known to be a pseudorandom generator. Next, the output of the first instance of Gl is divided into two parts: first l bits are fed into the second instance of Gl as a seed, while the last bit becomes the first bit of the output. Repeating this process for m times yields an output of m pseudorandom bits. It can be shown that such G'l, that consists of m instances of Gl, is indeed a pseudorandom generator by using a hybrid approach and proof by contradiction as follows: Consider m+1 intermediate distributions Hi: 0  ≤  i  ≤  m, where first i bits are chosen from the uniform distribution, and last m − i bits are chosen from the output of G'l. Thus, H0 is the full output of G'l and Hm is a true uniform distribution Um. Therefore, distributions Hi and Hi+1 will be different in only one bit (bit number i+1). Now, assume that G'l is not a pseudorandom distribution; that is, there exists some circuit C that can distinguish between G'l and Um with an advantage ε  =   1/poly(l). In other words, this circuit can distinguish between H0 and Hm. Therefore, there exists such i that the circuit C can distinguish between Hi and Hi+1 by at least ε / m. Note, that since m are polynomial in l, then ε / m is also polynomial in l and is still a non-negligible advantage. Now, assume we are given l+1 bits that are either output of Gl or a drawn from uniform distribution. Let's reuse the approach of building large pseudorandom generators out of instances of Gl and construct a string of pseudorandom bits of length m−i−1 in the same way the G'l was constructed above using the first l given bits as the seed. Then, let's create a string consisting of i bits drawn from uniform, concatenated with the last one of the given bits, followed by the created m−i−1 bits. The resulting output is either Hi or Hi+1, since the i+1 bit is either drawn from uniform or from Gl. Since by assumption we can distinguish between Hi and Hi+1 with non-negligible advantage, then we can distinguish between U and Gl, which implies that Gl is not a pseudorandom generator, which is a contradiction to the hypothesis. Q.E.D. Now, let's illustrate that if exists, the circuit for distinguishing between Gl and Ul+1 does not have to toss random coins. 
As we showed above, if exists a circuit C for distinguishing between G'l and Um, where m = poly(l), then exists a circuit C' for distinguishing between Gl and Ul+1 that uses i random bits. For this circuit C' : | Probu, s [C' (u1,...,ui,Gl(s)) = 1 ] − Probu, y [C' (u1,>,...,ui,y) = 1] | ≥ ε / m, where u is a string of i uniformly random bits, s is a string of l uniformly random bits, and y is a string of l+1 uniformly random bits. Then, Probu[ | Probs [C' (u1,...,ui,Gl(s)) = 1] - Proby [C' (u1,...,ui,y) = 1] | ] ≥ ε / m; Which means, there exists some fixed string u of i bits that can be used as an "advice" to circuit C' for distinguishing between Gl and Ul+1. Existence of pseudorandom generators The existence of pseudorandom generators is related to the existence of one-way functions and hard-core predicates. Formally, pseudorandom generators exist if and only if one-way functions exist, or PRG ↔ OWF Definitions One-way functions Intuitively one-way functions are functions that are easy to compute and hard to invert. In other words, the complexity (or circuit size) of the function is much smaller than that of its inverse. Formally: A function ƒ:  {0,1}n → {0,1}n is (S,ε) one-way if for any circuit C of size ≤ S, Prob[ƒ(C(ƒ(x))) = ƒ(x)] ≤ ε. Moreover, ƒ is a one-way function if ƒ can be computed in polynomial time ƒ is (poly(n), 1/poly(n)) one-way Hard-core predicate Function B: {0,1}n → {0,1} is a hard-core predicate for function ƒ if B can be computed in polynomial time for any polynomial size circuit C and any non-negligible ε = 1/poly(n), Probx~U [C(ƒ(x))  = B(x)] ≤ 1/2+ε In other words, it is hard to predict B(x) from function ƒ(x). Proof Here an outline of the proof is given. Please see references for detailed proofs. PRG → OWF Consider a pseudorandom generator Gl: {0,1}l → {0,1}2l. Let's create the following one-way function ƒ:  {0,1}n → {0,1}n that uses the first half of the output of Gl as its output. Formally, ƒ(x,y) → Gl(x) A key observation that justifies such selection, is that the size of the pre-image universe is 2n and is a negligible fraction of the image of the function of size 22n. To prove that ƒ is indeed a one-way function let's construct an argument by contradiction. Assume there exists a circuit C that inverts ƒ with advantage ε: Prob[ƒ(C(ƒ(x,y)))  = ƒ(x,y)] > ε Then we can create the following algorithm that will distinguish Gl from uniform, which contradicts the hypothesis. The algorithm would take an input of 2n bits z and compute (x,y) = C(z). If Gl(x) = z the algorithm would accept, otherwise it rejects. Now, if z is drawn from uniform distribution, the probability that the above algorithm accepts is ≤ 1/2l, since the size of the pre-image is 1/2l of the size of the image. However, if z was drawn from the output of Gl then the probability of acceptance is > ε by assumption of the existence of circuit C. Therefore, the advantage that circuit C has in distinguishing between the uniform U and output of Gl is > ε − 1/2l, which is non-negligible and thus contradicts our assumption of Gl being a pseudorandom generator. Q.E.D. OWF → PRG For this case we prove a weaker version of the theorem: One-way permutation → pseudorandom generator A one-way permutation is a one-way function that is also a permutation of the input bits. A pseudorandom generator can be constructed from one-way permutation ƒ as follows: Gl: {0,1}l→{0,1}l+1  =  ƒ(x).B(x), where B is hard-core predicate of ƒ and "." is a concatenation operator. 
Note that, by the theorem proven above, it suffices to show the existence of a generator that adds just one pseudorandom bit. Hard-core predicate → PRG First, let's show that if B is a hard-core predicate for ƒ then Gl is indeed pseudorandom. Again, we'll use an argument by contradiction. Assume that Gl is not a pseudorandom generator; that is, there exists a circuit C of polynomial size that distinguishes Gl(x) = ƒ(x).B(x) from Ul+1 with advantage ≥ ε, where ε is non-negligible. Note that since ƒ(x) is a permutation, if x is drawn from the uniform distribution, then so is ƒ(x). Therefore, Ul+1 is equivalent to ƒ(x).b, where b is a bit drawn independently from a uniform distribution. Formally, Probx~U [C(ƒ(x).B(x))=1] − Probx~U,b~U [C(ƒ(x).b)=1] ≥ ε. Let's construct the following algorithm C''':
Given z = ƒ(x), guess bit b uniformly at random
Run C on z.b
IF C(z.b) = 1
    output b
ELSE
    output 1−b
Given the output of ƒ, the algorithm first guesses bit b by tossing a random coin, i.e. Prob[b=0] = Prob[b=1] = 0.5. Then algorithm (circuit) C is run on ƒ(x).b, and if the result is 1 then b is output, otherwise the inverse of b is returned. The probability of C''' guessing B(x) correctly is then:
Probx~U [C'''(z)=B(x)]
= Prob[b=B(x) ∧ C(z.b)=1] + Prob[b≠B(x) ∧ C(z.b)=0]
= Prob[b=B(x)]⋅Prob[C(z.b)=1 | b=B(x)] + Prob[b≠B(x)]⋅Prob[C(z.b)=0 | b≠B(x)]
= 1/2⋅Prob[C(z.b)=1 | b=B(x)] + 1/2⋅(1 − Prob[C(z.b)=1 | b≠B(x)])
= 1/2 + Probz.b~G(x) [C(z.b)=1] − 1/2⋅(Prob[C(z.b)=1 | b=B(x)] + Prob[C(z.b)=1 | b≠B(x)])
= 1/2 + Probz.b~G(x) [C(z.b)=1] − Probz.b~U [C(z.b)=1]
≥ 1/2 + ε.
This implies that the algorithm C''' can predict B(x) with probability at least 1/2 + ε, which means that B cannot be a hard-core predicate for ƒ, and the hypothesis is contradicted. Q.E.D. OWP → hard-core predicate The outline of the proof is as follows: If ƒ: {0,1}n→{0,1}n is a one-way permutation, then so is ƒ': {0,1}2n→{0,1}2n, where ƒ'(x,y)=ƒ(x).y by definition. Then B(x,y)=x⋅y is a hard-core predicate for ƒ', where ⋅ is the inner product of the bit vectors modulo 2. To prove that it is indeed hard-core, let's assume otherwise and show a contradiction with the hypothesis of ƒ being one-way. If B is not a hard-core predicate, then there exists a circuit C that predicts it, so Probx,y[C(ƒ(x),y)=x⋅y] ≥ 1/2+ε. That fact can be used to recover x by cleverly choosing the vectors y so as to isolate bits of x. In fact, for a constant fraction of x, there exists a polynomial-time algorithm that lists O(1/ε^2) candidates that include all valid x. Thus, an algorithm can invert ƒ(x) in polynomial time for a non-negligible fraction of x, which contradicts the hypothesis. References W. Diffie, M.E. Hellman. "New Directions in Cryptography." IEEE Transactions on Information Theory, IT-22, pp. 644–654, 1976. A.C. Yao. "Theory and Application of Trapdoor Functions." 23rd IEEE Symposium on Foundations of Computer Science, pp. 80–91, 1982. M. Blum and S. Micali. "How to Generate Cryptographically Strong Sequences of Pseudo-Random Bits." SIAM Journal on Computing, v13, pp. 850–864, 1984. J. Hastad, R. Impagliazzo, L.A. Levin and M. Luby. "A Pseudorandom Generator from any One-way Function." SIAM Journal on Computing, v28 n4, pp. 1364–1396, 1999. Pseudorandomness Theorems in computational complexity theory
Pseudorandom generator theorem
Mathematics
3,151
208,656
https://en.wikipedia.org/wiki/Non-standard%20cosmology
A non-standard cosmology is any physical cosmological model of the universe that was, or still is, proposed as an alternative to the then-current standard model of cosmology. The term non-standard is applied to any theory that does not conform to the scientific consensus. Because the term depends on the prevailing consensus, the meaning of the term changes over time. For example, hot dark matter would not have been considered non-standard in 1990, but would have been in 2010. Conversely, a non-zero cosmological constant resulting in an accelerating universe would have been considered non-standard in 1990, but is part of the standard cosmology in 2010. Several major cosmological disputes have occurred throughout the history of cosmology. One of the earliest was the Copernican Revolution, which established the heliocentric model of the Solar System. More recent was the Great Debate of 1920, in the aftermath of which the Milky Way's status as but one of the Universe's many galaxies was established. From the 1940s to the 1960s, the astrophysical community was equally divided between supporters of the Big Bang theory and supporters of a rival steady state universe; this dispute was decided in favour of the Big Bang theory by advances in observational cosmology in the late 1960s. Nevertheless, there remained vocal detractors of the Big Bang theory including Fred Hoyle, Jayant Narlikar, Halton Arp, and Hannes Alfvén, whose cosmologies were relegated to the fringes of astronomical research. The few Big Bang opponents still active today often ignore well-established evidence from newer research, and as a consequence, today non-standard cosmologies that reject the Big Bang entirely are rarely published in peer-reviewed science journals but appear online in marginal journals and private websites. The current standard model of cosmology is the Lambda-CDM model, wherein the Universe is governed by general relativity, began with a Big Bang, and today is a nearly flat universe that consists of approximately 5% baryons, 27% cold dark matter, and 68% dark energy. Lambda-CDM has been a successful model, but recent observational evidence seems to indicate significant tensions in Lambda-CDM, such as the Hubble tension, the KBC void, the dwarf galaxy problem, ultra-large structures, et cetera. Research on extensions or modifications to Lambda-CDM, as well as fundamentally different models, is ongoing. Topics investigated include quintessence, Modified Newtonian Dynamics (MOND) and its relativistic generalization TeVeS, and warm dark matter. History Modern physical cosmology as it is currently studied first emerged as a scientific discipline in the period after the Shapley–Curtis debate and discoveries by Edwin Hubble of a cosmic distance ladder, when astronomers and physicists had to come to terms with a universe that was of a much larger scale than the previously assumed galactic size. Theorists who successfully developed cosmologies applicable to the larger-scale universe are remembered today as the founders of modern cosmology. Among these scientists are Arthur Milne, Willem de Sitter, Alexander Friedmann, Georges Lemaître, and Albert Einstein himself. After Hubble's law was confirmed by observation, the two most popular cosmological theories became the Steady State theory of Hoyle, Gold and Bondi, and the big bang theory of Ralph Alpher, George Gamow, and Robert Dicke, with a small number of supporters of a smattering of alternatives. 
One of the major successes of the Big Bang theory compared to its competitor was its prediction for the abundance of light elements in the universe that corresponds with the observed abundances of light elements. Alternative theories do not have a means to explain these abundances. Theories which assert that the universe has an infinite age with no beginning have trouble accounting for the abundance of deuterium in the cosmos, because deuterium easily undergoes nuclear fusion in stars and there are no known astrophysical processes other than the Big Bang itself that can produce it in large quantities. Hence the fact that deuterium is not an extremely rare component of the universe suggests both that the universe has a finite age and that there was a process that created deuterium in the past that no longer occurs. Theories which assert that the universe has a finite life, but that the Big Bang did not happen, have problems with the abundance of helium-4. The observed amount of 4He is far larger than the amount that should have been created via stars or any other known process. By contrast, the abundance of 4He in Big Bang models is very insensitive to assumptions about baryon density, changing only a few percent as the baryon density changes by several orders of magnitude. The observed value of 4He is within the range calculated. Still, it was not until the discovery of the Cosmic microwave background radiation (CMB) by Arno Penzias and Robert Wilson in 1965, that most cosmologists finally concluded that observations were best explained by the big bang model. Steady State theorists and other non-standard cosmologies were then tasked with providing an explanation for the phenomenon if they were to remain plausible. This led to original approaches including integrated starlight and cosmic iron whiskers, which were meant to provide a source for a pervasive, all-sky microwave background that was not due to an early universe phase transition. Scepticism about the non-standard cosmologies' ability to explain the CMB caused interest in the subject to wane since then, however, there have been two periods in which interest in non-standard cosmology has increased due to observational data which posed difficulties for the big bang. The first occurred in the late 1970s when there were a number of unsolved problems, such as the horizon problem, the flatness problem, and the lack of magnetic monopoles, which challenged the big bang model. These issues were eventually resolved by cosmic inflation in the 1980s. This idea subsequently became part of the understanding of the big bang, although alternatives have been proposed from time to time. The second occurred in the mid-1990s when observations of the ages of globular clusters and the primordial helium abundance, apparently disagreed with the big bang. However, by the late 1990s, most astronomers had concluded that these observations did not challenge the big bang and additional data from COBE and the WMAP, provided detailed quantitative measures which were consistent with standard cosmology. Today, heterodox non-standard cosmologies are generally considered unworthy of consideration by cosmologists while many of the historically significant nonstandard cosmologies are considered to have been falsified. The essentials of the big bang theory have been confirmed by a wide range of complementary and detailed observations, and no non-standard cosmologies have reproduced the range of successes of the big bang model. 
Speculations about alternatives are not normally part of research or pedagogical discussions, except as object lessons or for their historical importance. An open letter started by some remaining advocates of non-standard cosmology has affirmed that: "today, virtually all financial and experimental resources in cosmology are devoted to big bang studies...." In the 1990s, a dawning of a "golden age of cosmology" was accompanied by a startling discovery that the expansion of the universe was, in fact, accelerating. Previous to this, it had been assumed that matter either in its visible or invisible dark matter form was the dominant energy density in the universe. This "classical" big bang cosmology was overthrown when it was discovered that nearly 70% of the energy in the universe was attributable to the cosmological constant, often referred to as "dark energy". This has led to the development of a so-called concordance ΛCDM model which combines detailed data obtained with new telescopes and techniques in observational astrophysics with an expanding, density-changing universe. Today, it is more common to find in the scientific literature proposals for "non-standard cosmologies" that actually accept the basic tenets of the big bang cosmology, while modifying parts of the concordance model. Such theories include alternative models of dark energy, such as quintessence, phantom energy and some ideas in brane cosmology; alternative models of dark matter, such as modified Newtonian dynamics; alternatives or extensions to inflation such as chaotic inflation and the ekpyrotic model; and proposals to supplement the universe with a first cause, such as the Hartle–Hawking boundary condition, the cyclic model, and the string landscape. There is no consensus about these ideas amongst cosmologists, but they are nonetheless active fields of academic inquiry. Alternatives to Big Bang cosmologies Before observational evidence was gathered, theorists developed frameworks based on what they understood to be the most general features of physics and philosophical assumptions about the universe. When Albert Einstein developed his general theory of relativity in 1915, this was used as a mathematical starting point for most cosmological theories. In order to arrive at a cosmological model, however, theoreticians needed to make assumptions about the nature of the largest scales of the universe. The assumptions that the current standard model of cosmology relies upon are: the universality of physical laws – that the laws of physics do not change from one place and time to another, the cosmological principle – that the universe is roughly homogeneous and isotropic in space though not necessarily in time, and the Copernican principle – that we are not observing the universe from a preferred locale. These assumptions when combined with General Relativity result in a universe that is governed by the Friedmann–Robertson–Walker metric (FRW metric). The FRW metric allows for a universe that is either expanding or contracting (as well as stationary but unstable universes). When Hubble's law was discovered, most astronomers interpreted the law as a sign the universe is expanding. 
This implies the universe was smaller in the past, and therefore led to the following conclusions: the universe emerged from a hot, dense state at a finite time in the past, because the universe heats up as it contracts and cools as it expands, in the first minutes that time existed as we know it, the temperatures were high enough for Big Bang nucleosynthesis to occur, and a cosmic microwave background pervading the entire universe should exist, which is a record of a phase transition that occurred when the atoms of the universe first formed. These features were derived by numerous individuals over a period of years; indeed it was not until the middle of the twentieth century that accurate predictions of the last feature and observations confirming its existence were made. Non-standard theories developed either by starting from different assumptions or by contradicting the features predicted by the prevailing standard model of cosmology. Steady State theories The Steady State theory extends the homogeneity assumption of the cosmological principle to reflect a homogeneity in time as well as in space. This "perfect cosmological principle" as it would come to be called asserted that the universe looks the same everywhere (on the large scale), the same as it always has and always will. This is in contrast to Lambda-CDM, in which the universe looked very different in the past and will look very different in the future. Steady State theory was proposed in 1948 by Fred Hoyle, Thomas Gold, Hermann Bondi and others. In order to maintain the perfect cosmological principle in an expanding universe, steady state cosmology had to posit a "matter-creation field" (the so-called C-field) that would insert matter into the universe in order to maintain a constant density. The debate between the Big Bang and the Steady State models would happen for 15 years with camps roughly evenly divided until the discovery of the cosmic microwave background (CMB) radiation. This radiation is a natural feature of the Big Bang model which demands a "time of last scattering" where photons decouple with baryonic matter. The Steady State model proposed that this radiation could be accounted for by so-called "integrated starlight" which was a background caused in part by Olbers' paradox in an infinite universe. In order to account for the uniformity of the background, steady state proponents posited a fog effect associated with microscopic iron particles that would scatter radio waves in such a manner as to produce an isotropic CMB. The proposed phenomena was whimsically named "cosmic iron whiskers" and served as the thermalization mechanism. The Steady State theory did not have the horizon problem of the Big Bang because it assumed an infinite amount of time was available for thermalizing the background. As more cosmological data began to be collected, cosmologists began to realize that the Big Bang correctly predicted the abundance of light elements observed in the cosmos. What was a coincidental ratio of hydrogen to deuterium and helium in the steady state model was a feature of the Big Bang model. Additionally, detailed measurements of the CMB since the 1990s with the COBE, WMAP and Planck observations indicated that the spectrum of the background was closer to a blackbody than any other source in nature. The best integrated starlight models could predict was a thermalization to the level of 10% while the COBE satellite measured the deviation at one part in 105. 
After this dramatic discovery, the majority of cosmologists became convinced that the steady state theory could not explain the observed CMB properties. Although the original steady state model is now considered to be contrary to observations (particularly the CMB) even by its one-time supporters, modifications of the steady state model have been proposed, including a model that envisions the universe as originating through many little bangs rather than one big bang (the so-called "quasi-steady state cosmology"). It supposes that the universe goes through periodic expansion and contraction phases, with a soft "rebound" in place of the Big Bang. Thus the Hubble law is explained by the fact that the universe is currently in an expansion phase. Work continues on this model (most notably by Jayant V. Narlikar), although it has not gained widespread mainstream acceptance. Alternatives and extensions to Lambda-CDM The standard model of cosmology today, the Lambda-CDM model, has been extremely successful at providing a theoretical framework for structure formation, the anisotropies in the cosmic microwave background, and the accelerating expansion of the universe. However, it is not without its problems. There are many proposals today that challenge various aspects of the Lambda-CDM model. These proposals typically modify some of the main features of Lambda-CDM, but do not reject the Big Bang. Anisotropic universe Isotropicity – the idea that the universe looks the same in all directions – is one of the core assumptions that enters into the Friedmann equations. In 2008 however, scientists working on the Wilkinson Microwave Anisotropy Probe data claimed to have detected a 600–1000 km/s flow of clusters toward a 20-degree patch of sky between the constellations of Centaurus and Vela. They suggested that the motion may be a remnant of the influence of no-longer-visible regions of the universe prior to inflation. The detection is controversial, and other scientists have found that the universe is isotropic to a great degree. Massive compact halo object (MACHO) Solitary black holes, neutron stars, burnt-out dwarf stars, and other massive objects that are hard to detect are collectively known as MACHOs; some scientists initially hoped that baryonic MACHOs could account for and explain all the dark matter. However, evidence has accumulated that these objects cannot explain a large fraction of the dark matter mass. Exotic dark matter In Lambda-CDM, dark matter is a form of matter that interacts with both ordinary matter and light only through gravitational effects. To produce the large-scale structure we see today, dark matter is "cold" (the 'C' in Lambda-CDM), i.e. non-relativistic. Dark matter has not been conclusively identified, and its exact nature is the subject of intense study. Hypothetical weakly interacting massive particles (WIMPs), axions and primordial black holes are the leading dark matter candidates but there are a variety of other proposals, e.g.: Self-interacting dark matter, wherein dark matter particles interact with themselves. Warm dark matter, which are more relativistic than cold dark matter, but less relativistic than the observationally-excluded hot dark matter. Fuzzy cold dark matter, which have particles much lighter than axions – in the 10−22 eV range. Yet other theories attempt to explain dark matter and dark energy as different facets of the same underlying fluid (see dark fluid), or hypothesize that dark matter could decay into dark energy. 
Exotic dark energy In Lambda-CDM, dark energy is an unknown form of energy that tends to accelerate the expansion of the universe. It is less well understood than dark matter, and similarly mysterious. The simplest explanation of dark energy is the cosmological constant (the 'Lambda' in Lambda-CDM). This is a simple constant added to the Einstein field equations to provide a repulsive force. Thus far observations are fully consistent with the cosmological constant, but leave room for a plethora of alternatives, e.g.: Quintessence, which is a scalar field similar to the one that drove cosmic inflation shortly after the Big Bang. In quintessence, dark energy will usually vary over time (as opposed to the cosmological constant, which remains constant). Inhomogeneous cosmology. One of the fundamental assumptions of Lambda-CDM is that the universe is homogeneous – that is, it looks broadly the same regardless of where the observer is. In the inhomogeneous universe scenario, the observed dark energy is a measurement artefact caused by our being located in an emptier-than-average region of space. Variable dark energy, which is similar to quintessence in that the properties of dark energy vary over time, but different in that dark energy is not due to a scalar field. Alternatives to general relativity General relativity, upon which the FRW metric is based, is an extremely successful theory which has met every observational test so far. However, at a fundamental level it is incompatible with quantum mechanics, and by predicting singularities, it also predicts its own breakdown. Any alternative theory of gravity would immediately imply an alternative cosmological theory, since Lambda-CDM depends on general relativity as a framework assumption. There are many different motivations to modify general relativity, such as eliminating the need for dark matter or dark energy, or avoiding such paradoxes as the firewall. There are very many modified gravity theories, none of which have gained widespread acceptance, although it remains an active field of research. Some of the more notable theories are below. Machian universe Ernst Mach proposed that inertia was due to the gravitational effects of the mass distribution of the universe. This led naturally to speculation about the cosmological implications of such a proposal. Carl Brans and Robert Dicke were able to incorporate Mach's principle into general relativity, which admitted cosmological solutions implying a variable gravitational constant. The homogeneously distributed mass of the universe would give rise to a roughly homogeneous scalar field permeating the universe that would serve as a source for Newton's gravitational constant, creating a scalar–tensor theory of gravity. MOND Modified Newtonian Dynamics (MOND) is a relatively modern proposal to explain the galaxy rotation problem based on a variation of Newton's second law of dynamics at low accelerations. This would produce a large-scale variation of Newton's universal theory of gravity. A modification of Newton's theory would also imply a modification of general relativistic cosmology, inasmuch as Newtonian cosmology is a limit of Friedmann cosmology. While almost all astrophysicists today reject MOND in favor of dark matter, a small number of researchers continue to enhance it, recently incorporating Brans–Dicke theories into treatments that attempt to account for cosmological observations. 
Tensor–vector–scalar gravity (TeVeS) is a proposed relativistic theory that is equivalent to Modified Newtonian dynamics (MOND) in the non-relativistic limit, which purports to explain the galaxy rotation problem without invoking dark matter. Originated by Jacob Bekenstein in 2004, it incorporates various dynamical and non-dynamical tensor fields, vector fields and scalar fields. The breakthrough of TeVeS over MOND is that it can explain the phenomenon of gravitational lensing, in which matter bends light, which has been confirmed many times. A recent preliminary finding is that it can explain structure formation without CDM, but it requires massive neutrinos of roughly 2 eV (such neutrinos are also required to fit some clusters of galaxies, including the Bullet Cluster). However, other authors (see Slosar, Melchiorri and Silk) argue that TeVeS cannot explain cosmic microwave background anisotropies and structure formation at the same time, thereby ruling out those models at high significance. f(R) gravity f(R) gravity is a family of theories that modify general relativity by defining a different function of the Ricci scalar (R). The simplest case is just the function being equal to the scalar; this is general relativity. As a consequence of introducing an arbitrary function, there may be freedom to explain the accelerated expansion and structure formation of the Universe without adding unknown forms of dark energy or dark matter. Some functional forms may be inspired by corrections arising from a quantum theory of gravity. f(R) gravity was first proposed in 1970 by Hans Adolph Buchdahl (although φ was used rather than f for the name of the arbitrary function). It has become an active field of research following work by Starobinsky on cosmic inflation. A wide range of phenomena can be produced from this theory by adopting different functions f(R); however, many functional forms can now be ruled out on observational grounds, or because of pathological theoretical problems. Other alternatives Kaluza–Klein theory, which posits an extra spatial dimension, thereby making our universe 5D instead of the 4D of General Relativity. The DGP model is one of the models in this category, claimed to be able to explain dark energy without invoking a cosmological constant. Entropic gravity, which describes gravity as an entropic force with macro-scale homogeneity but which is subject to quantum-level disorder. The theory claims to be able to remove the need for dark matter, as well as provide a natural explanation for dark energy. The GRSI model modifies General Relativity by adding self-interaction terms similar to those in quantum chromodynamics, leading to an effect similar to quark confinement in gravity. It is claimed to be able to explain observations without needing dark matter or dark energy. Shockwave cosmology, proposed by Joel Smoller and Blake Temple in 2003, has the “big bang” as an explosion inside a black hole, producing the expanding volume of space and matter that includes the observable universe. This black hole eventually becomes a white hole as the matter density reduces with the expansion. A related theory proposes that the acceleration of the expansion of the observable universe, normally attributed to dark energy, may be caused by an effect of the shockwave. See also Quantum cosmology Notes Bibliography Arp, Halton, Seeing Red. Apeiron, Montreal, Canada. 1998. Alfvén, Hannes, Cosmic Plasma. D. Reidel Publishing Company, 1981. Hoyle, Fred; Geoffrey Burbidge, and Jayant V. 
Narlikar, A Different Approach to Cosmology: From a Static Universe through the Big Bang towards Reality. Cambridge University Press. 2000. Lerner, Eric J., Big Bang Never Happened, Vintage Books, 1992. Narlikar, Jayant Vishnu, Introduction to Cosmology. Jones & Bartlett Pub. 2nd edition, 1993. External links and references Narlikar, Jayant V. and T. Padmanabhan, "Standard Cosmology and Alternatives: A Critical Appraisal". Annual Review of Astronomy and Astrophysics, Vol. 39, pp. 211–248 (2001) Wright, Edward L. "Cosmological Fads and Fallacies:" Errors in some popular attacks on the Big Bang Physical cosmology Fringe physics Astronomical controversies Big Bang
Non-standard cosmology
Physics,Astronomy
5,071
2,146,106
https://en.wikipedia.org/wiki/Edip%20Y%C3%BCksel
Edip Yüksel (born December 20, 1957) is an American-Kurdish activist and prominent figure in the Quranism movement. Born in Güroymak, Yüksel is the author of more than twenty books on religion, politics, philosophy and law in Turkish. After settling in the United States, where he began his career as a lawyer, he became a colleague and friend of Rashad Khalifa. However, his interpretation of the Qur'an has differed from Khalifa's on a number of issues, and his work has represented a new trend within the Quranist movement. Biography Yüksel comes from a Kurdish family who lived in Turkey, and is the brother of Metin Yüksel. He is the author of more than twenty books on religion, politics, philosophy and law in Turkish. He has also written various articles and essays in English. He was a Turkish Islamist and a popular Islamic commentator until the mid-1980s, when he rejected his previous religious beliefs and came to use only the Quran as the source of divine laws. He became a Quran-only Muslim, also known as a Quranist. However, this movement is very controversial in mainstream Muslim circles, and Yüksel thus drew the rejection and hostility of many religious Islamic authorities in his home country. In 1989 Yüksel was forced to emigrate. He then settled in the United States of America, where he began his career as a lawyer. In the US, he worked with Rashad Khalifa, who claimed to have discovered a code, known as Code 19, in the Quran and called on Muslims to return to the Quran alone and to abandon all hadiths. Yüksel is critical of Islamic creationists, such as Harun Yahya. Professor Aisha Musa, of Florida International University, discusses Yüksel in her book Hadith as Scripture. References External links 1957 births Living people People from Güroymak American people of Kurdish descent Turkish emigrants to the United States Turkish Kurdish people Turkish Quranist Muslims American Quranist Muslims Kurdish scholars Theistic evolutionists Muslim evolutionists
Edip Yüksel
Biology
416
40,253,698
https://en.wikipedia.org/wiki/Sclerotiorin
Sclerotiorin is an antimicrobial compound isolated from Penicillium frequentans. Sclerotiorin is an aldose reductase inhibitor (IC50 = 0.4 μM) as well as a reversible lipoxygenase inhibitor (IC50 = 4.2 μM). Notes Chloroarenes Acetate esters Heterocyclic compounds with 2 rings Oxygen heterocycles
Sclerotiorin
Chemistry
91
25,459,021
https://en.wikipedia.org/wiki/Chlornaphazine
Chlornaphazine, a derivative of 2-naphthylamine, is a nitrogen mustard that was developed in the 1950s for the treatment of polycythemia and Hodgkin's disease. However, a high incidence of bladder cancers in patients receiving treatment with chlornaphthazine led to use of the drug being discontinued. The International Agency for Research on Cancer has listed chlornaphazine as a human carcinogen. Chlornaphazine appears as a brown solid or as colorless plates and has a boiling point of 210 °C at 5 mmHg. History Medical use Chlornaphazine was clinically used as a cytostatic agent for the treatment of Hodgkin's disease and polycythemia vera in multiple countries including Denmark and Italy. Discontinued use Chlornaphazine was discontinued as a clinical drug due to sufficient evidence for carcinogenicity in humans. The drug caused cancer of the urinary bladder. In the Medical Department of the Finsen Institute in Copenhagen, Danish researchers observed many patients over the years with polycythemia vera who had been administered different total doses of chlornaphazine. The initial therapeutic results reported in 1961 indicated that 75% of 32 patients that used chlornaphazine experienced a favorable effect. At the time of the analysis, seven patients died and in the autopsy of one of these patients, a carcinoma of the bladder was accidentally found. In a subsequent study from the Medical Department of the Finsen Institute, 61 patients diagnosed with polycythemia vera that had been treated with chlornaphazine were followed. It was found that among the 61 patients, eight patients developed an invasive carcinoma of the bladder, another eight patients had abnormal urinary cytology, and five patients had developed a papillary carcinoma grade II of the bladder. This led to the discontinuation of chlornaphazine in Denmark in 1963. Mechanism of action Chlornaphazine is a nitrogen mustard that was predominantly used in Scandinavia as a treatment for polycythemia and Hodgkin's disease. The possibility of nitrogen mustards to chemically inhibit abnormal cell growth was explored and accelerated through wartime studies of the physiological effects of war gases. The war gas known as nitrogen mustard appeared to be a potential drug for the treatment of leukemia, Hodgkin's disease, and lymphadenopathies in general due to its destructive effect on the bone marrow and particularly on the hemopoietic system. The anti-carcinoma activity of nitrogen mustards is based on their action on mitosis, inhibiting the growth of both normal and abnormal dividing cells. Due to the undesirable side effects of nitrogen mustard, chlornaphazine, another member of the nitrogen mustard series with a similar tumor-inhibiting capacity was developed. Just like the other active nitrogen mustards, chlornaphazine belongs to the direct-acting alkylating agents. It exerts its cytotoxic action by attaching an alkyl group to a lone pair of electrons on an atom of a wide variety of biological molecules by nucleophilic substitution. Through this covalent modification of DNA, chlornaphazine can interfere with essential processes in cancer cells, including DNA replication and protein synthesis. Since this drug is a bifunctional alkylator, it can react at two different sites in the DNA, forming intra- and interstrand cross-links. The structural modifications of DNA caused by chlornaphazine lead to misreading of the DNA code, the inhibition of DNA, RNA, and protein synthesis, and programmed cell death. 
Cancer cells are among the most affected since alkylating agents have their primary effect on rapidly proliferating cells which do not have time for DNA repair. Reactivity The ability to alkylate DNA bases is the predominant aspect of the reactivity of chlornaphazine in the body. The N7 of guanine bases is the preferred position for alkylation since it is the most nucleophilic and accessible site. The mechanism of reaction with DNA proceeds through two successive SN2 reactions in which the N(CH2CH2Cl)2 moiety of chlornaphazine is involved. In the first reaction, the nitrogen acts as a nucleophile to form an aziridinium ion by displacing the halogen. The aziridinium ion is subsequently attacked by nucleophilic sites in DNA. When these two steps are repeated with the second CH2CH2Cl side chain, intra- or interstrand cross-links can be formed. Metabolism After oral administration and subsequent absorption, chlornaphazine is metabolized to 2-naphthylamine which is N-acetylated by N-acetyltransferase (NAT) 2 in the liver. This is a detoxification reaction since it leads to the formation of non-reactive compounds. Alternatively, CYP1A2, a member of the cytochrome P450 superfamily may convert 2-naphthylamine in its N-hydroxy metabolite. The N-hydroxy metabolite can be further metabolized in the liver or transported to the urinary bladder. In the liver, it can undergo S-glutathionylation catalyzed by glutathione S-transferase Mu 1 (GSTM1), which involves the substitution of the hydroxy group by glutathione. This reaction leads to the detoxification of the N-hydroxy metabolite. The other biotransformation that may occur in the liver is the conjugation with glucuronic acid for which the cosubstrate uridine diphosphate-glucuronic acid (UDPGA) and enzyme UDP-glucuronosyltransferase are required. The stability of the N-glucuronides at neutral pH allows the transport via the blood to the kidneys where they are excreted into the urine. Under the mildly acidic conditions of the urine, the glucuronide is hydrolyzed, liberating the N-hydroxy metabolite in the bladder. The bladder epithelium further activates the N-hydroxy amine to an arylnitrenium ion. A second mechanism through which the N-hydroxy metabolites can be activated to arylnitrenium ions is via NAT1-catalyzed O-acetylation in the bladder. The products of chlornaphazine biotransformation are eliminated in the urine. Efficacy Due to its pronounced cytostatic effect, free solubility in water, and easy absorption from the intestinal tract, chlornaphazine appeared to be a suitable treatment for malignant systemic diseases such as Hodgkin's disease and polycythemia vera. Chlornaphazine was an effective medicine for treating fever, weight loss, itching, and sweating which are symptoms of advanced stages of Hodgkin's disease. These beneficial effects were notable when 200-400 mg chlornaphazine was administered daily for months to years. Hodgkin's disease can be treated with chemotherapy consisting of chlornaphazine alone or in combination with other cytostatic drugs. A study that investigated the efficacy of cytostatic treatment for Hodgkin's disease reported that 73% of the patients went into complete remission after receiving cytostatic drugs for 23 months. Chemotherapy can also be combined with radiotherapy. To patients who became resistant to radiation, nitrogen mustards such as chlornaphazine could be administered. In patients with Hodgkin's disease treated with nitrogen mustards tumor masses regressed rapidly. 
However, tumors recurred more rapidly than after X-ray therapy. Although chlornaphazine was an effective treatment for polycythemia vera, the risks of chlornaphazine were too high, making alternative treatments more advantageous. Adverse effects Initially, it was reported that chlornaphazine had no major adverse effects, since no immediate side effects were found. However, years later, multiple cases were reported of patients treated with chlornaphazine for Hodgkin's disease who were diagnosed with bladder cancer several years after the chlornaphazine treatment had been stopped. In addition, 30% of the polycythemia patients treated with chlornaphazine had developed bladder cancer. These cases and additional research shed light on this alarming side effect of chlornaphazine. It was concluded that chlornaphazine treatment can induce bladder cancer 3 to 10 years after treatment. Development of bladder cancer was only observed in patients who had received a total dose of at least 100 g of chlornaphazine, and therefore this late adverse effect is dose-dependent. Common adverse effects of alkylating agents in general are increased risk of malignancy, impaired spermatogenesis, intestinal mucosal damage, alopecia, anemia, pancytopenia, and amenorrhea. Toxicity Cancer of the urinary bladder has been observed in many cases treated with chlornaphazine. It has been implied that the carcinogenic effect is caused by the metabolites, whereas the chemotherapeutic action is due to the drug itself. Chlornaphazine consists of a nitrogen mustard group attached to the base molecule 2-naphthylamine. The biotransformation of chlornaphazine involves the cleavage of the chloroethyl group, resulting in the formation of 2-naphthylamine. The carcinogenic effect of this compound on the human urinary bladder is well known. The bioactivation of 2-naphthylamine in the liver and urinary bladder results in the formation of products that readily decompose to form reactive arylnitrenium ions. These ions are reactive electrophiles that form adducts by covalently binding to nucleophilic sites on proteins, DNA, and RNA. The tumor induction of chlornaphazine derivatives is specific to the urinary bladder since the carcinogenic metabolites can only be liberated by the acidic environment of urine. Effects on animals The mutagenic effects of chlornaphazine have been studied in multiple animal models. Many studies have shown that rodents are inappropriate animal models to study the carcinogenicity of chlornaphazine due to differences in metabolic pathways between humans and rodents. Therefore, rodents treated with chlornaphazine usually do not develop bladder cancer as humans do. Chlornaphazine was shown to cause chromosomal abnormalities in Chinese hamster lung cells, mutations in lymphoma cells of mice, and spontaneous in vitro synthesis of DNA in rat hepatocytes. In addition, chlornaphazine induces chromosomal abnormalities in erythrocytes in both mouse and rat bone marrow. It has been reported that chlornaphazine is a genetic hazard to the offspring of mice previously exposed to this drug, since it is highly mutagenic in post-meiotic male germ cells. Moreover, mice injected with chlornaphazine intraperitoneally developed lung tumors, whereas rats injected with chlornaphazine subcutaneously developed local sarcomas. 
In contrast to rodents, dogs are suitable for studying bladder carcinogens, such as chlornaphazine, since the metabolism of aromatic amines in dogs is similar to that in humans due to the comparable urine pH and urination frequency of both species. Animal experiments had also been performed to expand the knowledge of the carcinogenic effects of chlornaphazine's metabolite 2-naphthylamine. Like in humans, 2-naphthylamine induces bladder tumors in dogs, monkeys, and hamsters. Bladder cancer can be induced in rats, but 2-naphthylamine is a very weak carcinogen in rats due to their urine pH, urination frequency, and resorption. In mice, 2-naphthylamine caused an increase in hepatomas, liver adenomas, and cholangiomas. References Alkylating antineoplastic agents IARC Group 1 carcinogens Naphthylamines Nitrogen mustards Chloroethyl compounds Withdrawn drugs 2-Naphthyl compounds
Chlornaphazine
Chemistry
2,586
5,065,981
https://en.wikipedia.org/wiki/Hydrodynamic%20radius
The hydrodynamic radius of a macromolecule or colloid particle is denoted R_hyd. The macromolecule or colloid particle is treated as a collection of N subparticles. This is done most commonly for polymers; the subparticles would then be the units of the polymer. For polymers in solution, R_hyd is defined by 1/R_hyd = (1/N^2) Σ_{i≠j} ⟨ 1/r_ij ⟩, where r_ij is the distance between subparticles i and j, the sum runs over all pairs of distinct subparticles, and the angular brackets represent an ensemble average. The theoretical hydrodynamic radius was originally an estimate by John Gamble Kirkwood of the Stokes radius of a polymer, and some sources still use hydrodynamic radius as a synonym for the Stokes radius. Note that in biophysics, hydrodynamic radius refers to the Stokes radius, or commonly to the apparent Stokes radius obtained from size exclusion chromatography. The theoretical hydrodynamic radius arises in the study of the dynamic properties of polymers moving in a solvent. It is often similar in magnitude to the radius of gyration. Applications to aerosols The mobility of non-spherical aerosol particles can be described by the hydrodynamic radius. In the continuum limit, where the mean free path of the particle is negligible compared to a characteristic length scale of the particle, the hydrodynamic radius is defined as the radius that gives the same magnitude of the frictional force as that of a sphere of that radius in Stokes flow, F_drag = 6π μ R_hyd v, where μ is the viscosity of the surrounding fluid and v is the velocity of the particle. This is analogous to the Stokes radius; however, this correspondence no longer holds as the mean free path becomes comparable to the characteristic length scale of the particle, and a correction factor is introduced such that the friction is correct over the entire Knudsen regime. As is often the case, the Cunningham correction factor C is used, with F_drag = 6π μ R_hyd v / C and C = 1 + Kn (A1 + A2 e^(−A3/Kn)), where Kn is the Knudsen number and the constants A1, A2 and A3 were found by Millikan to be 1.234, 0.414, and 0.876 respectively. Notes References Grosberg AY and Khokhlov AR. (1994) Statistical Physics of Macromolecules (translated by Atanov YA), AIP Press. Polymer physics Radii
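To make the Kirkwood estimate concrete, the short sketch below computes a hydrodynamic radius from a set of subparticle coordinates. It is a minimal illustration under the assumptions stated above: the double-sum form 1/R_hyd = (1/N^2) Σ_{i≠j} ⟨1/r_ij⟩ and a single conformation standing in for the ensemble average. The function name and array layout are choices made here, not part of the article.

```python
# Minimal sketch: Kirkwood estimate of the hydrodynamic radius from bead coordinates.
# A single conformation is used in place of a proper ensemble average over many
# conformations, which would be needed in practice.
import numpy as np

def kirkwood_hydrodynamic_radius(coords):
    """coords: (N, 3) array of subparticle positions for one conformation."""
    n = len(coords)
    disp = coords[:, None, :] - coords[None, :, :]   # pairwise displacement vectors
    r = np.linalg.norm(disp, axis=-1)                # pairwise distances r_ij
    mask = ~np.eye(n, dtype=bool)                    # exclude the i == j terms
    inv_sum = (1.0 / r[mask]).sum()                  # sum of reciprocal distances
    return n**2 / inv_sum

# Example: a straight chain of 10 beads spaced one length unit apart.
chain = np.column_stack([np.arange(10.0), np.zeros(10), np.zeros(10)])
print(kirkwood_hydrodynamic_radius(chain))
```

Averaging 1/r_ij over many sampled conformations before inverting, rather than using one snapshot as above, gives the ensemble-averaged quantity that the definition actually calls for.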
Hydrodynamic radius
Chemistry,Materials_science
441
65,176,759
https://en.wikipedia.org/wiki/Peter%20Marks%20%28physician%29
Peter Marks is an American hematologist oncologist serving as the director of the Center for Biologics Evaluation and Research within the Food and Drug Administration. He was appointed to the position in 2016 after previously serving as deputy director. Education Marks earned a Bachelor of Science degree from Columbia University, followed by a Doctor of Medicine and PhD in cell and molecular biology from New York University in the lab of Fredrick R. Maxfield. As an undergraduate, he volunteered at Mount Sinai St. Luke's in New York City, where he worked in the radioimmunoassay lab. He completed an internal medicine residency and oncology training at the Brigham and Women's Hospital. Career After completing his training, Marks worked at the Brigham and Women's Hospital as a clinician-scientist, and later served as Clinical Director of Hematology. He then worked in the pharmaceutical industry, where he worked on the development of hematology and oncology products. He later managed the Adult Leukemia Service at Yale University and served as the Chief Clinical Officer of the Yale New Haven Hospital Cancer Center. Marks joined the Center for Biologics Evaluation and Research as deputy director in 2012, and was promoted to director in 2016. In May 2020, he was selected to serve as a member of the White House Coronavirus Task Force, although he left a few days later over concerns that his participation would represent a conflict with his position at FDA. Marks also played a role in establishing Operation Warp Speed, a partnership between the federal government and various private companies to develop a COVID-19 vaccine, but left the project in May 2020 shortly after it was launched. Marks believed he would be more useful in his role as chief regulator of vaccines as the Director of FDA's Center for Biologics Evaluation and Research. In 2021, Marks served as a plenary speaker at the State of the Science Research Summit. In 2024, Marks overruled FDA staff to approve gene pharmacotherapy Elevidys—intended to treat Duchenne muscular dystrophy—despite it failing in Phase III clinical trial. Personal life Marks has two children and resides in Washington, D.C., with his wife. References American oncologists Columbia University alumni New York University Graduate School of Arts and Science alumni New York University Grossman School of Medicine alumni Yale University faculty Food and Drug Administration people Living people Year of birth missing (living people) Members of the National Academy of Medicine
Peter Marks (physician)
Biology
500