50,602,087
https://en.wikipedia.org/wiki/Adiabatic%20MRI%20Pulses
Adiabatic radio frequency (RF) pulses are used in magnetic resonance imaging (MRI) to achieve excitation that is insensitive to spatial inhomogeneities in the excitation field or to off-resonances in the sampled object. Nuclear magnetic resonance (NMR) experiments are often performed with surface transceiver coils that have desirable sensitivity, but have the disadvantage of producing an inhomogeneous excitation field. This inhomogeneous field causes spatial variations in spin flip angles, which, in turn, cause errors and degrade the receiver's sensitivity. RF pulses can be designed to create low-variation flip angles or uniform magnetization inversion across a sample, even in the presence of inhomogeneities such as B1 variation and off-resonance.

Analysis - Adiabatic Excitation Principles

Traditional RF Excitation
In traditional MRI RF excitation, an RF pulse, B1, is applied with a frequency that is resonant with the Larmor precession frequency of the spins of interest. In the frame rotating at the Larmor frequency, the effective field experienced by the spins lies in the transverse plane. Observing the spins in this frame shows spins precessing about $\vec{B}_{\mathrm{eff}}$ at a frequency proportional to $|\vec{B}_{\mathrm{eff}}|$. If the RF pulse is applied for a time shorter than the period of this precession, one can engineer the flip angle (the angle with the z-axis) by turning the pulse on and off at the appropriate time. In RF excitation analysis, the effective field is derived in the rotating frame of reference, which depends on the radial frequency $\omega$ of the radio-frequency field. A circularly polarized RF field in the laboratory frame can be written as
$$\vec{B}(t) = B_1\left[\cos(\omega t)\,\hat{x} - \sin(\omega t)\,\hat{y}\right] + B_0\,\hat{z},$$
where B0 is the background magnetic field, which points along the laboratory z-axis. In the frame rotating about the z-axis at radial frequency $\omega$, the effective magnetic field can be derived:
$$\vec{B}_{\mathrm{eff}} = B_1\,\hat{x}' + \left(B_0 - \frac{\omega}{\gamma}\right)\hat{z}.$$
When $\omega$ is equal to the Larmor frequency for a particular spin, defined as $\omega_0 = \gamma B_0$, $\vec{B}_{\mathrm{eff}}$ has only an x-component for that spin. Therefore, the spin will precess about the x-axis in the rotating frame.

Adiabatic Passage
In general, the RF magnetic field can be written with a time-varying phase and a time-varying amplitude,
$$B_1(t) = A(t)\,e^{-i\phi(t)},$$
and $\vec{B}_{\mathrm{eff}}$ in the frame rotating at $\omega(t) = d\phi/dt$ can be written as
$$\vec{B}_{\mathrm{eff}}(t) = A(t)\,\hat{x}' + \frac{\omega_0 - \omega(t)}{\gamma}\,\hat{z}.$$
In an "adiabatic passage" process, the parameters A and ω are varied gradually. Magnetic spin components that are parallel to $\vec{B}_{\mathrm{eff}}$ in the rotating frame will "track" $\vec{B}_{\mathrm{eff}}$ as it changes, provided that $\vec{B}_{\mathrm{eff}}$ changes slowly enough. Similarly, components that are perpendicular to $\vec{B}_{\mathrm{eff}}$ will remain perpendicular to it as it gradually changes direction. Figure 1 shows how spins track $\vec{B}_{\mathrm{eff}}$ in an adiabatic passage transition. The gradualness of the changing $\vec{B}_{\mathrm{eff}}$ is quantified by the "adiabaticity" K of the pulse, given by
$$K = \frac{\gamma\,|\vec{B}_{\mathrm{eff}}|}{|d\varphi/dt|},$$
where $\varphi$ is defined as the angle of $\vec{B}_{\mathrm{eff}}$ with the z-axis in the rotating frame of reference. The term K can be understood by examining the ratio by which it is defined: the precession frequency of a spin about $\vec{B}_{\mathrm{eff}}$ is proportional to the strength of the effective field, and the angle of the field, $\varphi$, must change more slowly than this precession frequency so that the spin can track the effective field as it changes direction. This means that for a pulse to be considered adiabatic, the K-factor, or adiabaticity, must be much greater than 1 for the entire duration of the excitation sequence.
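A minimal numerical sketch of this condition is given below. It assumes a hyperbolic-secant (sech/tanh) amplitude and frequency modulation, a standard choice in the adiabatic-pulse literature rather than the specific pulse discussed here, with illustrative values for A0, β, and μ, and evaluates K(t) directly from the definitions above:

```python
import numpy as np

GAMMA = 2 * np.pi * 42.58e6   # 1H gyromagnetic ratio, rad/s/T

def adiabaticity(t, A0=20e-6, beta=800.0, mu=5.0):
    """K(t) for A(t) = A0*sech(beta*t) and omega0 - omega(t) = -mu*beta*tanh(beta*t)."""
    A = A0 / np.cosh(beta * t)               # amplitude modulation, tesla
    dw = -mu * beta * np.tanh(beta * t)      # omega0 - omega(t), rad/s
    b_eff = np.hypot(A, dw / GAMMA)          # |B_eff| in the rotating frame, tesla
    phi = np.arctan2(A, dw / GAMMA)          # angle of B_eff with the z-axis
    dphi_dt = np.gradient(phi, t)            # rate of change of that angle
    return GAMMA * b_eff / np.abs(dphi_dt)

t = np.linspace(-5e-3, 5e-3, 2001)           # a 10 ms sweep
K = adiabaticity(t)
print(f"minimum adiabaticity K = {K.min():.1f}")
```

With these illustrative parameters the minimum K, reached as the sweep crosses resonance, comes out around 9, comfortably above the K >> 1 requirement.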
Figure 2 shows the magnitude and angle of $\vec{B}_{\mathrm{eff}}$ for a commonly used type of adiabatic sweep modulation, in which the amplitude A(t) and frequency ω(t) are modulated together; a standard example is the hyperbolic-secant pulse, with $A(t) = A_0\,\operatorname{sech}(\beta t)$ and $\omega(t) = \omega_0 + \mu\beta\tanh(\beta t)$. It is common to examine the so-called "sweep diagram" for adiabatic pulses. Sweep diagrams plot the trajectory of the effective field for a spin with a particular resonance frequency. Figure 3 shows the sweep diagram for two on-resonance spins: a blue sweep trajectory for a spin experiencing a B1 field strength without error, and a red sweep trajectory for a spin experiencing a B1 field strength with significant deviation. Since the direction of $\vec{B}_{\mathrm{eff}}$ is largely independent of B1 strength, adiabatic pulses are considered insensitive to B1 inhomogeneities. For off-resonance spins, the $\vec{B}_{\mathrm{eff}}$ sweep diagram is shifted up or down by the amount of off-resonance. Another interpretation: the frequency excursion of the adiabatic pulse is always centered at the presumed Larmor frequency, so for a spin that is not at the Larmor frequency, the relative frequency excursion of the adiabatic RF pulse will not be centered at that spin's resonance.

Adiabatic Pulse Design
Adiabatic passage can be used to design several different kinds of pulses that are insensitive to variations common in modern MRI system design. Several common adiabatic sequences are summarized here: adiabatic half-passage (AHP), adiabatic full-passage (AFP), and B1-insensitive rotation (BIR) pulses.

Adiabatic Half-Passage (AHP)
The sweep diagram for AHP is the same as in Figure 3. Half-passage refers to a 90-degree rotation of $\vec{B}_{\mathrm{eff}}$. If the initial magnetization is along the initial axis of $\vec{B}_{\mathrm{eff}}$, the magnetization will track $\vec{B}_{\mathrm{eff}}$ as it rotates. If the initial magnetization has components in the transverse plane, these perpendicular components will precess around $\vec{B}_{\mathrm{eff}}$ during the AHP pulse. The accumulated phase will depend on the length of the pulse and the strength of $\vec{B}_{\mathrm{eff}}$, so signal strength after AHP can be sensitive to initial transverse magnetization.

Adiabatic Full-Passage (AFP)
The sweep diagram for AFP is shown in Figure 4. AFP pulses are the same as AHP pulses, except that $\vec{B}_{\mathrm{eff}}$ is swept through 180 degrees rather than 90. AFP pulses are also known as adiabatic inversion pulses. An interesting feature of AFP pulses is their insensitivity to off-resonance spins within a particular bandwidth. Figure 4 shows sweep diagrams for an on-resonance spin (blue) and the same sweep parameters for an off-resonance spin (red). The effective field has a trajectory that is shifted upward but still ends pointing along the −z-axis. For this reason, AFP pulses are considered insensitive to off-resonance sources within a certain bandwidth. Spins with off-resonance outside this bandwidth will not experience an inversion, as shown in the sweep diagram.
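The sweep diagrams described above are straightforward to reproduce numerically. The sketch below assumes an illustrative half-ellipse amplitude envelope and a linear frequency sweep, not the parameters of any specific published pulse; it traces the (x, z) trajectory of B_eff for a nominal B1 and a 40%-low B1 and confirms that the start and end directions of the sweep are unchanged by the B1 scale:

```python
import numpy as np

GAMMA = 2 * np.pi * 42.58e6                     # rad/s/T

def sweep_trajectory(b1_scale=1.0, off_res=0.0, n=1001):
    """(x, z) components of B_eff in microtesla during a full passage.

    b1_scale models B1 inhomogeneity; off_res (rad/s) is the spin's
    offset, which shifts the whole trajectory up or down along z.
    """
    tau = np.linspace(-1.0, 1.0, n)             # normalized time
    A = 15e-6 * b1_scale * np.sqrt(1 - tau**2)  # amplitude envelope, tesla
    dw = -8000.0 * tau + off_res                # frequency sweep + offset, rad/s
    return A * 1e6, (dw / GAMMA) * 1e6

for scale in (1.0, 0.6):                        # nominal B1 vs. 40% low
    bx, bz = sweep_trajectory(b1_scale=scale)
    start = np.degrees(np.arctan2(bx[0], bz[0]))
    end = np.degrees(np.arctan2(bx[-1], bz[-1]))
    print(f"B1 scale {scale}: B_eff angle sweeps {start:.0f} -> {end:.0f} degrees")
```

Both scales report the same 0-to-180-degree excursion of the effective-field angle, which is the geometric content of the B1-insensitivity claim; passing a nonzero off_res shifts the whole trajectory along z, as in Figure 4.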
B1-Insensitive Rotation (BIR) Pulses
BIR pulses, also called "universal rotators," induce arbitrary flip angles for all spins in a plane perpendicular to the rotation axis defined by $\vec{B}_{\mathrm{eff}}$. That plane is defined only by the direction of $\vec{B}_{\mathrm{eff}}$, not by its strength. As long as adiabaticity is maintained during rotations of $\vec{B}_{\mathrm{eff}}$, inhomogeneities in the B1 field strength will not affect the flip angle of the magnetization after a BIR pulse. Two commonly analyzed BIR pulse sequences are explained and summarized here: the BIR-1 and BIR-4 pulse sequences.

BIR-1
An important BIR pulse is known as the BIR-1. In this sequence, $\vec{B}_{\mathrm{eff}}$ is applied initially along the +x-axis in the rotating frame and then adiabatically swept from the +x-axis to the +z-axis. Magnetization in the plane initially perpendicular to $\vec{B}_{\mathrm{eff}}$ remains perpendicular to $\vec{B}_{\mathrm{eff}}$ throughout the adiabatic sweep. In the time that it takes the field to sweep, M precesses about the $\vec{B}_{\mathrm{eff}}$ axis. Spins will accrue an arbitrary phase due to precession about $\vec{B}_{\mathrm{eff}}$ during the period TP/2, and the amount of phase accrual will depend on the strength of B1. In the second half of the pulse, $\vec{B}_{\mathrm{eff}}$ is applied along the −z-axis and adiabatically swept to point along the −y-axis. If the time of the −z to −y sweep is the same as the original +x to +z sweep, the phase accrual due to B1 inhomogeneity will be reversed, and M will end up pointing along the −x-axis. This process is shown in Figure 5, along the 90-degree path. BIR-1 pulses can be used to achieve an arbitrary flip angle, which is done by applying a phase shift relative to the initial $\vec{B}_{\mathrm{eff}}$; in the case described above, the phase shift is 90 degrees. This can be understood by examining how the plane rotates during the second half of the BIR pulse. For a flip angle θ, the phase shift is chosen as a function of θ. The BIR-1 pulse with arbitrary flip angle will rotate M about the x-axis by θ and then apply a phase shift of θ. This means that the flip angles induced by BIR-1 pulses will not all lie in the same plane. BIR-1 pulses can be sensitive to off-resonance effects for several reasons. First, off-resonance spins will have asymmetric phase accrual in the first and second halves of the pulse, meaning that off-resonance spins may not be refocused by the BIR-1. Second, the initial $\vec{B}_{\mathrm{eff}}$ for an off-resonance spin can have a significant component pointing along the z-axis of the rotating frame, which causes the spin to track $\vec{B}_{\mathrm{eff}}$ during the entire adiabatic pulse (known as "spin-locking"). Spin-locked sources will end up pointing along the −y-axis after a BIR-1 pulse, since that is the final direction of $\vec{B}_{\mathrm{eff}}$.

BIR-4
A BIR-4 pulse is designed simply as two BIR-1 pulses back-to-back. For a 180-degree excitation (inversion), the second BIR-1 sequence is performed with $\vec{B}_{\mathrm{eff}}$ initially pointing along the −y-axis; it sweeps to the +z-axis, flips to the −z-axis, and sweeps to the +x-axis. Exactly as in the first BIR-1 pulse, phase accrual of M occurs in the sweep from −y to +z and is undone in the sweep from −z to +x. Arbitrary flip angles are achieved by selecting a phase shift for each of the BIR-1 parts of the pulse, set as a function of the desired flip angle θ. The BIR-4 pulse with arbitrary flip angle will always rotate M by θ about the initial direction of the $\vec{B}_{\mathrm{eff}}$ field, which is not true of the BIR-1 pulse. This means that all flip angles induced by BIR-4 pulses will lie in the same plane (if the initial $\vec{B}_{\mathrm{eff}}$ is in the same direction for each of the BIR-4 pulses). Off-resonance spins will exhibit some degree of spin-locking to the $\vec{B}_{\mathrm{eff}}$ field, similarly to the BIR-1 case.

Modulation Functions for Adiabatic Inversion Pulses
Many different combinations of phase- and amplitude-modulated pulses can perform similar adiabatic inversions. The selection and/or design of adiabatic pulses depends on the required adiabaticity of the application.
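The B1-insensitivity that these pulses rely on can be checked directly by integrating the Bloch precession equation dM/dt = γ M × B through a full passage. The following sketch uses the same sech/tanh modulation as above, with illustrative parameters rather than a specific published design, and shows the final Mz landing near −1 even when B1 is mis-set by ±30%:

```python
import numpy as np

GAMMA = 2 * np.pi * 42.58e6  # rad/s/T

def rotate(m, axis, angle):
    """Rodrigues rotation of vector m about a unit axis by angle (radians)."""
    return (m * np.cos(angle)
            + np.cross(axis, m) * np.sin(angle)
            + axis * np.dot(axis, m) * (1 - np.cos(angle)))

def afp_final_mz(b1_scale, A0=20e-6, beta=800.0, mu=5.0, T=10e-3, n=5000):
    """Final Mz after a sech/tanh full passage, starting from M = +z."""
    t = np.linspace(-T / 2, T / 2, n)
    dt = t[1] - t[0]
    m = np.array([0.0, 0.0, 1.0])
    for ti in t:
        bx = b1_scale * A0 / np.cosh(beta * ti)        # A(t), tesla
        bz = -mu * beta * np.tanh(beta * ti) / GAMMA   # (omega0 - omega(t)) / gamma
        b = np.array([bx, 0.0, bz])
        bmag = np.linalg.norm(b)
        m = rotate(m, b / bmag, -GAMMA * bmag * dt)    # dM/dt = gamma * M x B
    return m[2]

for scale in (0.7, 1.0, 1.3):    # deliberate +/-30% B1 miscalibration
    print(f"B1 scale {scale}: final Mz = {afp_final_mz(scale):+.3f}")
```

Because the magnetization tracks the slowly rotating effective field rather than responding to the exact B1 amplitude, all three runs end close to full inversion, whereas a conventional hard pulse mis-scaled by 30% would miss its flip angle by 30%.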
Once a required adiabaticity is defined, the amplitude and phase functions can be optimized against several desired features, such as reduction in the total time of the pulse, insensitivity to off-resonance or constant gradient fields, and reduction in the peak power required in the B1 pulse. We can examine the cases of AHP and AFP to demonstrate principles of adiabatic pulse design. In NMR excitation, it is desirable to create offset-frequency-independent excitation with low RF peak power. Consider a spin with an offset frequency of $\Delta\omega$. The effective field experienced by this spin is
$$\vec{B}_{\mathrm{eff}}(t) = A(t)\,\hat{x}' + \frac{\omega_0 + \Delta\omega - \omega(t)}{\gamma}\,\hat{z},$$
and the adiabaticity of an AHP pulse is
$$K(t) = \frac{\gamma\,|\vec{B}_{\mathrm{eff}}(t)|}{|d\varphi/dt|}.$$
Considering the time $t_\Delta$ at which $\omega(t_\Delta) = \omega_0 + \Delta\omega$ for a particular off-resonance $\Delta\omega$, the K-factor reduces to
$$K(t_\Delta) = \frac{\gamma\,A(t_\Delta)}{|d\varphi/dt|_{t_\Delta}}.$$
This is the time when the rotating reference frame is rotating in resonance with the frequency-shifted spin. At this time, which is correlated with the amount of off-resonance, the adiabaticity K is at a minimum for the off-resonant spin, which makes it a minimum constraint on the design of A(t) and ω(t). One way to design A(t) and ω(t) is to examine the minimum K-factor as a function of off-resonance. When this is done, the bandwidth of the adiabatic pulse can be defined as the range of off-resonance over which the minimum K-factor stays above a chosen threshold. The pulses can be parameterized with desired constraints, such as pulse length or RF peak power, and the AHP pulse modulation functions can then be selected based on the desired application specifications.

Applications and Current Research
Most applications of adiabatic pulse design are in MR spectroscopy or MR imaging where B1 inhomogeneity or effects like chemical shift cause significant errors. These applications include: spectroscopy and imaging with surface coils that are used for both transmission and receiving; ultra-high-field NMR, where standing-wave (dielectric) effects can have non-trivial effects on B1 homogeneity; imaging in the presence of chemical shift, where normal excitation would exhibit apparent spatial displacement (error in position) of off-resonant spins; and spectroscopy and imaging applications that have constraints on RF peak power for excitation. In addition, several research efforts have demonstrated methods for inverse design of adiabatic pulse sequences.

Magnetic resonance imaging
Adiabatic MRI Pulses
Chemistry
2,867
22,449,897
https://en.wikipedia.org/wiki/Tulosesus%20heterosetulosus
Tulosesus heterosetulosus is a species of mushroom-producing fungus in the family Psathyrellaceae.

Taxonomy
It was first classified as Coprinus heterosetulosus by the French mycologist Marcel Locquin in 1947. In 2001 a phylogenetic study resulted in a major reorganization and reshuffling of that genus, and this species was transferred to Coprinellus. The species was known as Coprinellus heterosetulosus until 2020, when the German mycologists Dieter Wächter and Andreas Melzer reclassified many species in the family Psathyrellaceae based on phylogenetic analysis.

Description
It is a coprophilous fungus, known to grow on the dung of sheep and goats.

Fungi described in 1976 Fungi of Greece heterosetulosus Fungus species
Tulosesus heterosetulosus
Biology
164
15,718,248
https://en.wikipedia.org/wiki/Conference%20on%20Implementation%20and%20Application%20of%20Automata
CIAA, the International Conference on Implementation and Application of Automata, is an annual academic conference in the field of computer science. Its purpose is to bring together members of the academic, research, and industrial community who have an interest in the theory, implementation, and application of automata and related structures. The conference concerns research on all aspects of implementation and application of automata and related structures, including theoretical aspects. The conference grew out of the Workshop on Implementation of Automata (WIA) in 2000. Like most theoretical computer science conferences, its contributions are strongly peer-reviewed; the articles appear in proceedings published in Springer Lecture Notes in Computer Science. Extended versions of selected papers of each year's conference appear, in alternating years, in the journals Theoretical Computer Science and International Journal of Foundations of Computer Science. Every year a best paper award is presented.

Topics of the Conference
Since the focus of the conference is on applied theory, contributions usually come from a widespread range of application domains. Typical topics of the conference include, among others, the following, as they relate to automata: bio-inspired computing; complexity of automata operations, state complexity; compilers; computer-aided verification, model checking; concurrency; data and image compression; design and architecture of automata software; document engineering; natural language processing; pattern matching; teaching of automata theory; text processing; and techniques for graphical display of automata.

History of the Conference
The CIAA conference series was founded by Darrell Raymond and Derick Wood. Since 2013, the steering committee has been chaired by Kai Salomaa.

See also
List of computer science conferences, which contains other academic conferences in computer science.

External links
Official website of CIAA
CIAA proceedings information from DBLP
"List of conferences and workshops" question at cstheory.stackexchange

Theoretical computer science conferences Automata (computation) Formal languages
Conference on Implementation and Application of Automata
Mathematics,Technology
364
26,128,168
https://en.wikipedia.org/wiki/Dharni%20%28unit%29
The dharni is an ancient unit of mass, still used in Nepal, of about 2.5 seer. It was divided into 2 bisauli (बिसौलि), 4 boṛi (बोड़ि), or 12 pāu (पाउ). The United Nations Statistical Office gave an approximate equivalence of 2.3325 kilograms (5.142 pounds avoirdupois) in 1966.

External links
Sizes.com

Units of mass Customary units in India Obsolete units of measurement
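The subdivisions and the UN metric equivalence can be collected into a small conversion sketch; the numeric values come from the text above, while the function and variable names are our own illustrative choices:

```python
# Sketch: dharni subdivisions and the 1966 UN metric equivalence.
DHARNI_KG = 2.3325                                  # UN Statistical Office, 1966
SUBDIVISIONS_PER_DHARNI = {"bisauli": 2, "bori": 4, "pau": 12}

def to_kg(amount, unit="dharni"):
    """Convert an amount of dharni (or one of its subdivisions) to kilograms."""
    if unit == "dharni":
        return amount * DHARNI_KG
    return amount * DHARNI_KG / SUBDIVISIONS_PER_DHARNI[unit]

print(to_kg(1))                # 2.3325 kg per dharni
print(to_kg(1, "pau"))         # one pau is about 0.194 kg
print(to_kg(1) * 2.20462)      # ~5.142 lb avoirdupois, matching the quoted figure
```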
Dharni (unit)
Physics,Mathematics
103
31,093,358
https://en.wikipedia.org/wiki/Linnean%20Tercentenary%20Medal
The Linnean Tercentenary Medal was commissioned in 2007 by the Linnean Society to commemorate the tercentenary of the birth of Carl Linnaeus. Recipients were in two categories, Silver Medal and Bronze Medal, for outstanding contributions to natural history. The front of the medal features an illustration by Linnaeus of the mythological Andromeda next to one of the plant he named Andromeda, from his 1732 expedition to Lapland; the back carries a spiral design made from illustrations taken from Systema Naturae.

Silver Medal
Sir David Attenborough CBE, Hon. FLS, FRS
Professor Steve Jones FRS, FLS
Professor Edward O. Wilson FMLS, ForMemRS

Bronze Medal
Ms Gina Douglas FLS
Dr Jenny Edmonds FLS
Professor Carl-Olof Jacobsen FLS
Professor Bengt Jonsell FLS
Dr Martyn Rix FLS
Mr Nigel Rowland FLS
Ms Elaine Shaughnessy FLS

See also
List of biology awards

External links
The Linnean Tercentenary Medal: The Linnean Society of London

Biology awards Commemoration of Carl Linnaeus
Linnean Tercentenary Medal
Technology
219
37,926,134
https://en.wikipedia.org/wiki/Fish%20gill
Fish gills are organs that allow fish to breathe underwater. Most fish exchange gases like oxygen and carbon dioxide using gills that are protected under gill covers (operculum) on both sides of the pharynx (throat). Gills are tissues made up of short, thread-like protein structures called filaments. These filaments have many functions, including the transfer of ions and water, as well as the exchange of oxygen, carbon dioxide, acids and ammonia. Each filament contains a capillary network that provides a large surface area for exchanging oxygen and carbon dioxide. Fish exchange gases by pulling oxygen-rich water through their mouths and pumping it over their gills. Within the gill filaments, capillary blood flows in the opposite direction to the water, causing counter-current exchange. The gills push the oxygen-poor water out through openings in the sides of the pharynx. Some fish, like sharks and lampreys, possess multiple gill openings. However, bony fish have a single gill opening on each side, hidden beneath a protective bony cover called the operculum. Juvenile bichirs have external gills, a very primitive feature that they share with larval amphibians. Previously, the evolution of gills was thought to have occurred through two diverging lines: gills formed from the endoderm, as seen in jawless fish species, or those formed from the ectoderm, as seen in jawed fish. However, recent studies on gill formation in the little skate (Leucoraja erinacea) have shown potential evidence supporting the claim that the gills of all current fish species have in fact evolved from a common ancestor.

Breathing with gills
All basal vertebrates (types of fish) breathe with gills. The gills are carried right behind the head, bordering the posterior margins of a series of openings from the esophagus to the exterior. Each gill is supported by a cartilaginous or bony gill arch. The gills of vertebrates typically develop in the walls of the pharynx, along a series of gill slits opening to the exterior. Most species employ a counter-current exchange system to enhance the diffusion of substances in and out of the gill, with blood and water flowing in opposite directions to each other. The gills are composed of comb-like filaments, the gill lamellae, which help increase their surface area for oxygen exchange. When a fish breathes, it draws in a mouthful of water at regular intervals. Then it draws the sides of its throat together, forcing the water through the gill openings, so that it passes over the gills to the outside. Bony fish have three pairs of gill arches, cartilaginous fish have five to seven pairs, while the primitive jawless fish have seven. The vertebrate ancestor no doubt had more arches, as some of their chordate relatives have more than 50 pairs of gills. Gills usually consist of thin filaments of tissue, branches, or slender tufted processes that have a highly folded surface to increase surface area. A high surface area is crucial to the gas exchange of aquatic organisms, as water contains only a small fraction of the dissolved oxygen that air does. A cubic meter of air contains about 250 grams of oxygen at STP. The oxygen content of freshwater is about 8 cm³ per litre, compared with 210 cm³ in the same volume of air. Water is 777 times more dense than air and is 100 times more viscous. Oxygen has a diffusion rate in air 10,000 times greater than in water.
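A toy discretized model makes the advantage of counter-current flow concrete. In the sketch below, water and blood move through a chain of exchange segments, with transfer proportional to the local concentration difference; the segment count and transfer coefficient are arbitrary illustrative assumptions. Run in counter-current mode it extracts roughly 90% of the oxygen from the water, in line with the high extraction efficiencies attributed to real gills below, while a co-current variant saturates near 50%:

```python
import numpy as np

def extraction(countercurrent=True, n_seg=200, k=0.05, n_iter=20000):
    """Fraction of oxygen removed from the water at steady state."""
    water = np.ones(n_seg)                 # water O2 level, inlet at index 0
    blood = np.zeros(n_seg)                # blood O2 level
    for _ in range(n_iter):
        # advect one cell per step: water flows left-to-right
        water[1:] = water[:-1].copy()
        water[0] = 1.0                     # fresh water at the inlet
        if countercurrent:
            blood[:-1] = blood[1:].copy()  # blood flows right-to-left
            blood[-1] = 0.0                # deoxygenated blood enters opposite end
        else:
            blood[1:] = blood[:-1].copy()  # co-current: blood flows with the water
            blood[0] = 0.0
        flux = k * (water - blood)         # transfer down the local gradient
        water -= flux
        blood += flux
    return 1.0 - water[-1]                 # 1 - O2 remaining at the water outlet

for flag, label in ((True, "counter-current"), (False, "co-current")):
    print(f"{label}: {extraction(flag):.0%} of oxygen extracted")
```

The co-current arrangement stalls once water and blood equilibrate mid-channel, whereas the counter-current arrangement maintains a concentration gradient along the entire lamella, which is exactly the argument made for real gills in the next section.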
The use of sac-like lungs to remove oxygen from water would not be efficient enough to sustain life. Rather than using lungs, "Gaseous exchange takes place across the surface of highly vascularised gills over which a one-way current of water is kept flowing by a specialised pumping mechanism. The density of the water prevents the gills from collapsing and lying on top of each other, which is what happens when a fish is taken out of water." Higher vertebrates do not develop gills; in them, the gill arches form during fetal development and lay the basis of essential structures such as the jaws, the thyroid gland, the larynx, the columella (corresponding to the stapes in mammals) and, in mammals, the malleus and incus. Fish gill slits may be the evolutionary ancestors of the tonsils, thymus gland, and Eustachian tubes, as well as many other structures derived from the embryonic branchial pouches.

Bony fish
In bony fish, the gills lie in a branchial chamber covered by a bony operculum (branchia is an Ancient Greek word for gills). The great majority of bony fish species have five pairs of gills, although a few have lost some over the course of evolution. The operculum can be important in adjusting the pressure of water inside the pharynx to allow proper ventilation of the gills, so that bony fish do not have to rely on ram ventilation (and hence near-constant motion) to breathe. Valves inside the mouth keep the water from escaping. The gill arches of bony fish typically have no septum, so the gills alone project from the arch, supported by individual gill rays. Some species retain gill rakers. Though all but the most primitive bony fish lack a spiracle, the pseudobranch associated with it often remains, being located at the base of the operculum. This is, however, often greatly reduced, consisting of a small mass of cells without any remaining gill-like structure. Fish transfer oxygen from the sea water to their blood using a highly efficient mechanism called countercurrent exchange: the flow of water over the gills is in the opposite direction to the flow of blood through the capillaries in the lamellae. The effect of this is that the blood flowing in the capillaries always encounters water with a higher oxygen concentration, allowing diffusion to occur all the way along the lamellae. As a result, the gills can extract over 80% of the oxygen available in the water. Marine teleosts also use their gills to excrete osmolytes (e.g. Na⁺, Cl⁻). The gills' large surface area tends to create a problem for fish that seek to regulate the osmolarity of their internal fluids. Seawater contains more osmolytes than the fish's internal fluids, so marine fishes naturally lose water through their gills via osmosis. To regain the water, marine fishes drink large amounts of sea water while simultaneously expending energy to excrete salt through Na+/K+-ATPase ionocytes (formerly known as mitochondrion-rich cells and chloride cells). Conversely, freshwater contains fewer osmolytes than the fish's internal fluids, so freshwater fishes must use their gill ionocytes to obtain ions from their environment to maintain optimal blood osmolarity. In some primitive bony fishes and amphibians, the larvae bear external gills, branching off from the gill arches. These are reduced in adulthood, their function taken over by the gills proper in fishes and by lungs in most amphibians.
Some amphibians retain the external larval gills in adulthood, the complex internal gill system seen in fish apparently having been irrevocably lost very early in the evolution of tetrapods.

Cartilaginous fish
Sharks and rays typically have five pairs of gill slits that open directly to the outside of the body, though some more primitive sharks have six or seven pairs. Adjacent slits are separated by a cartilaginous gill arch from which projects a long sheet-like septum, partly supported by a further piece of cartilage called the gill ray. The individual lamellae of the gills lie on either side of the septum. The base of the arch may also support gill rakers, small projecting elements that help to filter food from the water. A smaller opening, the spiracle, lies at the back of the first gill slit. This bears a small pseudobranch that resembles a gill in structure but receives only blood already oxygenated by the true gills. The spiracle is thought to be homologous to the ear opening in higher vertebrates. Most sharks rely on ram ventilation, forcing water into the mouth and over the gills by rapidly swimming forward. In slow-moving or bottom-dwelling species, especially among skates and rays, the spiracle may be enlarged, and the fish breathes by sucking water through this opening instead of through the mouth. Chimaeras differ from other cartilaginous fish, having lost both the spiracle and the fifth gill slit. The remaining slits are covered by an operculum, developed from the septum of the gill arch in front of the first gill. The shared trait of breathing via gills in bony fish and cartilaginous fish is a famous example of symplesiomorphy. Bony fish are more closely related to terrestrial vertebrates, which evolved out of a clade of bony fishes that breathe through their skin or lungs, than they are to sharks, rays, and the other cartilaginous fish. Their kind of gill respiration is shared by the "fishes" because it was present in their common ancestor and lost in the other living vertebrates. But based on this shared trait, we cannot infer that bony fish are more closely related to sharks and rays than they are to terrestrial vertebrates.

Lampreys and hagfish
Lampreys and hagfish do not have gill slits as such. Instead, the gills are contained in spherical pouches, with a circular opening to the outside. Like the gill slits of higher fish, each pouch contains two gills. In some cases, the openings may be fused together, effectively forming an operculum. Lampreys have seven pairs of pouches, while hagfishes may have six to fourteen, depending on the species. In the hagfish, the pouches connect with the pharynx internally. In adult lampreys, a separate respiratory tube develops beneath the pharynx proper, separating food and water from respiration by closing a valve at its anterior end.

Breathing without gills
Some fish can at least partially respire without gills. In some species cutaneous respiration accounts for 5 to 40 per cent of total respiration, depending on temperature. Cutaneous respiration is more important in species that breathe air, such as mudskippers and reedfish, and in such species can account for nearly half of total respiration. Fish from multiple groups can live out of the water for extended periods. Air-breathing fish can be divided into obligate air breathers and facultative air breathers. Obligate air breathers, such as the African lungfish, must breathe air periodically or they suffocate.
Facultative air breathers, such as the catfish Hypostomus plecostomus, only breathe air if they need to and can otherwise rely on their gills for oxygen. Most air-breathing fish are facultative air breathers that avoid the energetic cost of rising to the surface and the fitness cost of exposure to surface predators. Catfish of the families Loricariidae, Callichthyidae, and Scoloplacidae absorb air through their digestive tracts.

Parasites on gills
Fish gills are the preferred habitat of many ectoparasites (parasites attached to the gill but living out of it); the most common are monogeneans and certain groups of parasitic copepods, which can be extremely numerous. Other ectoparasites found on gills are leeches and, in seawater, larvae of gnathiid isopods. Endoparasites (parasites living inside the gills) include encysted adult didymozoid trematodes, a few trichosomoidid nematodes of the genus Huffmanela, including Huffmanela ossicola, which lives within the gill bone, and the encysted parasitic turbellarian Paravortex. Various protists and Myxosporea are also parasitic on gills, where they form cysts.

See also
Aquatic respiration
Book lung
Gill raker
Gill slit
Lung
Artificial gills (human)

External links
Fish Dissection - Gills exposed, Australian Museum. Updated: 11 June 2010. Retrieved 16 January 2012.

Fish anatomy Organs (anatomy) Respiratory system
Fish gill
Biology
2,565
213,070
https://en.wikipedia.org/wiki/Artificial%20insemination
Artificial insemination is the deliberate introduction of sperm into a female's cervix or uterine cavity for the purpose of achieving a pregnancy through in vivo fertilization by means other than sexual intercourse. It is a fertility treatment for humans, and is a common practice in animal breeding, including dairy cattle (see frozen bovine semen) and pigs. Artificial insemination may employ assisted reproductive technology, sperm donation and animal husbandry techniques. Artificial insemination techniques available include intracervical insemination (ICI) and intrauterine insemination (IUI). Where gametes from a third party are used, the procedure may be known as 'assisted insemination'.

Humans

History
The first recorded case of artificial insemination was performed by John Hunter in 1790, who helped impregnate a linen draper's wife. The first reported case of artificial insemination by donor occurred in 1884: William H. Pancoast, a professor in Philadelphia, took sperm from his "best looking" student to inseminate an anesthetized woman without her knowledge. The case was reported 25 years later in a medical journal. The sperm bank was developed in Iowa starting in the 1950s in research conducted by University of Iowa medical school researchers Jerome K. Sherman and Raymond Bunge. In the United Kingdom, the British obstetrician Mary Barton founded one of the first fertility clinics to offer donor insemination in the 1930s, with her husband Bertold Wiesner fathering hundreds of offspring. In the 1980s, direct intraperitoneal insemination (DIPI) was occasionally used, where doctors injected sperm into the lower abdomen through a surgical hole or incision, with the intention of letting the sperm find the oocyte at the ovary or after entering the genital tract through the ostium of the fallopian tube.

Patients and gamete donors
There are multiple methods used to obtain the semen necessary for artificial insemination, and the sperm used may be provided by the recipient patient's partner or by a sperm donor whose identity is known or unknown. Artificial insemination techniques were originally used mainly to assist heterosexual couples to conceive where they were having difficulties, but with the advancement of techniques in this field, notably ICSI, the use of artificial insemination for such couples has largely been rendered unnecessary. However, there are still reasons why a couple would seek to use artificial insemination with the male partner's sperm. For such couples, before artificial insemination is turned to as the solution, doctors will require an examination of both the male and female involved in order to remove any and all physical hindrances that are preventing them from naturally achieving a pregnancy, including any factors which prevent the couple from having satisfactory sexual intercourse. The couple is also given a fertility test to determine the motility, number, and viability of the male's sperm and the success of the female's ovulation. From these tests, the doctor may or may not recommend a form of artificial insemination. The results of the investigations may, for example, show that the woman's immune system is rejecting her partner's sperm as invading molecules. Women who have issues with the cervix – such as cervical scarring, cervical blockage from endometriosis, or thick cervical mucus – may also benefit from artificial insemination, since the sperm must pass through the cervix to result in fertilization.
Nowadays artificial insemination in humans is mainly used as a substitute for sexual intercourse for women without a male partner who wish to have their own children – such as women in lesbian relationships and single women – and thus where sperm from a sperm donor is used.

Barriers for patients and donors
Some countries have laws which restrict and regulate who can donate sperm and who is able to receive artificial insemination. A woman who lives in a jurisdiction which does not permit artificial insemination in the circumstance in which she finds herself may travel to another jurisdiction which permits it. Compared with natural insemination, artificial insemination can be more expensive and more invasive, and may require professional assistance.

Preparations
Timing is critical, as the window of opportunity for fertilization is little more than twelve hours from the release of the ovum. To increase the chance of success, the woman's menstrual cycle is closely observed, often using ovulation kits, ultrasounds or blood tests such as basal body temperature tests over time, noting the color and texture of the vaginal mucus, and the softness of the nose of her cervix. To improve the success rate of artificial insemination, drugs to create a stimulated cycle may be used, but the use of such drugs also results in an increased chance of a multiple birth. Sperm can be provided fresh or washed. Washed sperm is required in certain situations, and the motile sperm count is measured both before and after concentration. Sperm from a sperm bank will be frozen and quarantined for a period, and the donor will be tested before and after production of the sample to ensure that he does not carry a transmissible disease. Sperm from a sperm bank will also be suspended in a semen extender which assists with freezing, storing and shipping. If sperm is provided by a private donor, either directly or through a sperm agency, it is usually supplied fresh, not frozen, and it will not be quarantined. Donor sperm provided in this way may be given directly to the recipient woman or her partner, or it may be transported in specially insulated containers. Some donors have their own freezing apparatus to freeze and store their sperm.

Techniques
Semen used is either fresh, raw, or frozen. Where donor sperm is supplied by a sperm bank, it will always be quarantined and frozen, and will need to be thawed before use. The sperm is ideally donated after two or three days of abstinence, and without lubrication, as lubricant can inhibit sperm motility. When an ovum is released, semen is introduced into the woman's vagina, uterus or cervix, depending on the method being used. Sperm is occasionally inserted twice within a 'treatment cycle'.

Intracervical
Intracervical insemination (ICI) is the method of artificial insemination which most closely mimics the natural ejaculation of semen by the penis into the vagina during sexual intercourse. It is painless and is the simplest, easiest and most common method of artificial insemination, involving the introduction of unwashed or raw semen into the vagina at the entrance to the cervix, usually by means of a needleless syringe. The vagina acts as a filter to separate out the sperm from the other components of the ejaculate, as with intercourse, so that only sperm pass through the cervix on their way to the uterus. ICI is commonly used in the home, by self-insemination and practitioner insemination.
Sperm used in ICI inseminations does not have to be 'washed' to remove seminal fluid, so raw semen from a private donor may be used. Semen supplied by a sperm bank prepared for ICI or IUI use is suitable for ICI. ICI is a popular method of insemination amongst single and lesbian women purchasing donor sperm online. Although ICI is the simplest method of artificial insemination, a meta-analysis has shown no difference in live birth rates compared with IUI. It may also be performed privately by the woman, or, if she has a partner, in the presence of her partner, or by her partner. ICI was previously used in many fertility centers as a method of insemination, but its popularity in this context has waned as other, more reliable methods of insemination have become available. During ICI, air is expelled from a needleless syringe, which is then filled with semen that has been allowed to liquefy. A specially designed syringe, wider and with a more rounded end, may be used for this purpose. Any further enclosed air is removed by gently pressing the plunger forward. The woman lies on her back and the syringe is inserted into the vagina. Care is taken when inserting the syringe, so that the tip is as close to the entrance to the cervix as possible. A vaginal speculum may be used for this purpose, and a catheter may be attached to the tip of the syringe to ensure delivery of the semen as close to the entrance to the cervix as possible. The plunger is then slowly pushed forward and the semen in the syringe is gently emptied deep into the vagina. It is important that the syringe is emptied slowly, for safety and for the best results, bearing in mind that the purpose of the procedure is to replicate as closely as possible a natural deposit of semen in the vagina. The syringe (and catheter, if used) may be left in place for several minutes before removal. The woman can bring herself to orgasm so that the cervix 'dips down' into the pool of semen, again closely replicating vaginal intercourse, and this may improve the success rate. Following insemination, fertile sperm will swim through the cervix into the uterus and from there to the fallopian tubes in a natural way, as if the sperm had been deposited in the vagina through intercourse. The woman is therefore advised to lie still for about half an hour to assist conception. One insemination during a cycle is usually sufficient; additional inseminations during the same cycle may not improve the chances of a pregnancy. Ordinary sexual lubricants should not be used in the process, but special fertility or 'sperm-friendly' lubricants can be used for increased ease and comfort. When performed at home without the presence of a professional, aiming the sperm in the vagina at the neck of the cervix may be more difficult to achieve, and the effect may be to 'flood' the vagina with semen rather than to target it specifically at the entrance to the cervix. This procedure is sometimes referred to as 'intravaginal insemination' (IVI). Sperm supplied by a sperm bank will be frozen and must be allowed to thaw before insemination. The sealed end of the straw must be cut off, and the open end of the straw is usually fixed straight onto the tip of the syringe, allowing the contents to be drawn into the syringe. Sperm from more than one straw can generally be used in the same syringe. Where fresh semen is used, it must be allowed to liquefy before inserting it into the syringe, or alternatively, the syringe may be back-loaded.
A conception cap, which is a form of conception device, may be inserted into the vagina following insemination and may be left in place for several hours. Using this method, a woman may go about her usual activities while the cervical cap holds the semen in the vagina close to the entrance to the cervix. Advocates of this method claim that it increases the chances of conception. One advantage of the conception device is that fresh, non-liquefied semen may be used: the man may ejaculate straight into the cap so that his fresh semen can be inserted immediately into the vagina without waiting for it to liquefy, although a collection cup may also be used. Other methods may be used to insert semen into the vagina, notably involving different uses of a conception cap. These include a specially designed conception cap with a tube attached, which may be inserted empty into the vagina, after which liquefied semen is poured into the tube. These methods are designed to ensure that semen is inseminated as close as possible to the cervix and that it is kept in place there to increase the chances of conception.

Intrauterine
Intrauterine insemination (IUI) involves injection of 'washed' sperm directly into the uterus with a catheter. Washing involves the removal of components other than sperm which are present in the natural ejaculate. In forms of vaginal insemination, including natural vaginal insemination and ICI, these components are filtered out by the vagina. Insemination in this way also means that the sperm do not have to swim through the cervix, which is coated with a mucus layer. This layer of mucus can slow down the passage of sperm and can result in many sperm perishing before they can enter the uterus. Donor sperm is sometimes tested for mucus penetration if it is to be used for ICI inseminations, but partner sperm may or may not be able to pass through the cervix. In these cases, the use of IUI can provide a more efficient delivery of the sperm. In general terms, IUI is usually regarded as more efficient than ICI or IVI. It is therefore the method of choice for single and lesbian women wishing to conceive using donor sperm, since this group of recipients usually requires artificial insemination because they do not have a male partner, not because they have medical problems. Owing to the high number of these recipients using donor sperm services, IUI is the most popular method of insemination today at fertility clinics, and the term 'artificial insemination' has, in many cases, come to mean IUI insemination. It is important that washed sperm is used, because unwashed sperm may elicit uterine cramping, expelling the semen and causing pain, due to its content of prostaglandins. (Prostaglandins are also the compounds responsible for causing the myometrium to contract and expel the menses from the uterus during menstruation.) Resting on the table for fifteen minutes after an IUI is optimal for the woman to increase the pregnancy rate. Using this technique, as with ICI, fertilization takes place naturally in the external part of the fallopian tubes, in the same way as occurs following intercourse. For heterosexual couples, the indications to perform an intrauterine insemination are usually a moderate male factor, an incapability to ejaculate in the vagina, or idiopathic infertility. A short period of ejaculatory abstinence before intrauterine insemination is associated with higher pregnancy rates. For the man, a total motile sperm count (TMS) of more than 5 million per ml is optimal.
In practice, donor sperm will satisfy these criteria, and since IUI is a more efficient method of artificial insemination than ICI, with a generally higher success rate, IUI is usually the insemination procedure of choice for single women and lesbians using donor semen in a fertility centre. Lesbians and single women are less likely to have fertility issues of their own, and enabling donor sperm to be inserted directly into the womb will often produce a better chance of conceiving. A 2019 study showed that pregnancy rates were similar between lesbian women and heterosexual women undergoing IUI. However, it found a significantly higher multiple gestation rate among lesbian women undergoing ovulation induction (OI) compared with lesbian women undergoing natural cycles. Unlike ICI, intrauterine insemination normally requires a medical practitioner to perform the procedure. One of the requirements is to have at least one patent fallopian tube, proven by hysterosalpingography. The duration of infertility is also important. A female under 30 years of age has optimal chances with IUI, and a promising cycle is one that offers two follicles measuring more than 16 mm and estrogen of more than 500 pg/mL on the day of hCG administration. However, GnRH agonist administration at the time of implantation does not improve pregnancy outcome in intrauterine insemination cycles, according to a randomized controlled trial. One prominent private clinic in Europe has published data in which a multiple logistic regression model showed that sperm origin, maternal age, follicle count on the day of hCG administration, follicle rupture, and the number of uterine contractions observed after the second insemination procedure were associated with the live-birth rate.

The steps followed to perform an intrauterine insemination are:
Mild controlled ovarian stimulation (COS): there is no control over how many oocytes mature at the same time when stimulating ovulation, so it is necessary to check the number being ovulated via ultrasound (counting the follicles developing at the same time) and to administer the desired amount of hormones.
Ovulation induction: using substances known as ovulation inductors.
Semen capacitation: wash and centrifugation, swim-up, or gradient. The insemination should not be performed later than an hour after capacitation. 'Washed sperm' may be purchased directly from a sperm bank if donor semen is used, or 'unwashed semen' may be thawed and capacitated before performing IUI insemination, provided that the capacitation leaves a minimum of, usually, five million motile sperm.
Luteal phase support: a lack of progesterone in the endometrium could end a pregnancy. To avoid that, 200 mg/day of micronized progesterone is administered vaginally. If there is a pregnancy, this hormone continues to be administered until the tenth week of pregnancy.

The cost breakdown for intrauterine insemination involves several components. The procedure itself typically ranges from $300 to $1,000 per cycle without insurance. The cost of the sperm may vary widely, with prices per vial ranging from $500 to $1,000 or more from a sperm bank. Additional expenses might include consultation fees, ovulation-inducing medication, ultrasounds, and blood tests. The extent of insurance coverage for fertility treatments, including IUI, varies considerably.
Some insurance plans may cover some of the costs, while others may not provide any financial support for fertility treatments. Coverage depends on various factors, such as the insurance plan, state policies and regulations, and the underlying cause of infertility. Several states have mandated that insurers provide coverage for infertility services. IUI can be used in conjunction with controlled ovarian hyperstimulation (COH). Clomiphene citrate is the first-line agent and letrozole the second-line agent for stimulating the ovaries before moving on to IVF. Still, advanced maternal age causes decreased success rates; women aged 38–39 years appear to have reasonable success during the first two cycles of ovarian hyperstimulation and IUI. However, for women aged over 40 years, there appears to be no benefit after a single cycle of COH/IUI. Medical experts therefore recommend considering in vitro fertilization after one failed COH/IUI cycle for women aged over 40 years. A double intrauterine insemination theoretically increases pregnancy rates by decreasing the risk of missing the fertile window during ovulation. However, a randomized trial of insemination after ovarian hyperstimulation found no difference in live birth rate between single and double intrauterine insemination. A Cochrane review found uncertain evidence about the effect of IUI compared with timed intercourse or expectant management on live birth rates, but IUI with controlled ovarian hyperstimulation is probably better than expectant management. Due to the lack of reliable evidence from controlled clinical trials, it is not certain which semen preparation techniques (wash and centrifugation, swim-up, or gradient) are more effective in terms of pregnancy and live birth rates.

Intrauterine insemination success factors
Intrauterine insemination procedures have been shown to be more successful and effective when certain factors are taken into account. One major factor is the health of the sperm that is used. Sperm motility (which is improved by the sperm-washing procedure), sperm density, and the sperm concentration index, all of which are found through washing and studying the health of the specimen, are major indicators of a positive pregnancy test following IUI. The ages of both the male and female (egg and sperm donors) involved in the process are extremely important. Although age has typically been pinned on the woman as a determining factor, research shows that male and female age have about equal impact on the success of the procedure. Along with age, the duration of infertility is also a factor in IUI success: the longer one faces infertility, the lower the chance of a positive pregnancy test. When people talk about age as a risk factor, they are generally referring to the way in which the DNA in eggs and sperm has an increased probability of mutations. Finally, the biological factors of the female's body can have some impact on the success of the IUI procedure. The endometrial thickness at the time of insemination is moderately important, though less of a concern than some of the other factors. The number of follicles developed, grown, and retrieved from the ovaries during ovarian stimulation is particularly important and a major success factor in fertility treatments. Lastly, the estradiol concentration in the body on the day of hCG administration also plays a role.
Who IUI can be used for
Because IUI is less expensive and less invasive than other fertility options (for example, in vitro fertilisation, or IVF), it is typically the first outlet for those looking for fertility treatments. Individuals or couples who struggle with getting pregnant but have not yet explored any fertility treatments are good candidates for IUI. IUI provides a more affordable and accessible outlet for fertility treatments; however, IUI may not be the most successful option if female-factor infertility is determined to be the cause. IUI is also a very good option for single individuals who are using donor sperm, as donor sperm undergoes regulations and checks which may not be the case for a partner's sperm donation. IUI can additionally be a good fertility outlet for lesbian or queer couples, as they most often do not face infertility and would most likely be using regulated and checked donor sperm. Furthermore, surrogates can be artificially inseminated through IUI to help other individuals and/or couples become pregnant using their sperm.

Intrauterine tuboperitoneal
Intrauterine tuboperitoneal insemination (IUTPI) involves injection of washed sperm into both the uterus and fallopian tubes. The cervix is then clamped to prevent leakage to the vagina, best achieved with a specially designed double nut bivalve (DNB) speculum. The sperm is mixed to create a volume of 10 ml, sufficient to fill the uterine cavity, pass through the interstitial part of the tubes and the ampulla, and finally reach the peritoneal cavity and the Pouch of Douglas, where it would be mixed with the peritoneal and follicular fluid. IUTPI can be useful in unexplained infertility, mild or moderate male infertility, and mild or moderate endometriosis. In non-tubal subfertility, fallopian tube sperm perfusion may be the preferred technique over intrauterine insemination.

Intratubal
Intratubal insemination (ITI) involves injection of washed sperm into the fallopian tube, although this procedure is no longer generally regarded as having any beneficial effect compared with IUI. ITI, however, should not be confused with gamete intrafallopian transfer, where both eggs and sperm are mixed outside the woman's body and then immediately inserted into the fallopian tube, where fertilization takes place.

LGBTQ+ concerns
Although many fertilization procedures such as IUI are typically carried out in a medical setting, society is increasingly recognizing the important role that this plays in the lives of individuals who might otherwise not conceive through heterosexual penetrative sexual intercourse. Artificial insemination using a sperm donor is one of the more cost-effective avenues to parenting for LGBTQ+ individuals and couples. While clinic-based IUI may be open to many, it typically still carries hetero-reproductive narratives dating from the early days of fertilization procedures, when these were often exclusively for married couples and when there was resistance in many societies to extending these services to the LGBTQ+ community. Indeed, in the early days, there were very few fertility clinics which would provide services to single women and lesbian couples. In the UK, notable pioneers in this respect were the British Pregnancy Advisory Service (BPAS) and the Pregnancy Advisory Service (PAS), both of which operated before statutory control of fertility services in 1992, and the London Women's Clinic (LWC), which provided artificial insemination to single women and lesbians from 1998.
Most donor insemination procedures undertaken in many countries today are for lesbian couples or single (mainly lesbian) women, yet much of the rhetoric and advertising of sperm banks is directed at heterosexual couples. Indeed, many sperm banks seem reluctant to inform donors that most of their donations will be used for lesbians and single women. To improve the way society talks about and carries out donor insemination, inclusive language may be used. One way to do this is to bring LGBTQ narratives into the process, with a particular emphasis on it being a family-centered process. Even in a medical setting, it is important to bring intimacy and family-centeredness into the process, as this promotes connectedness and inclusiveness in what can be seen as a hostile and discriminatory environment. LGBTQ couples or individuals typically have to navigate more complexities and barriers than heterosexual couples when undergoing fertility treatment, such as stigma and carrier decisions, so allowing room for intimacy and connectedness in the process can improve the experience for individuals, reduce stress, and minimize barriers that target marginalized individuals. Lesbian couples may either select a friend or family member as their sperm donor or choose an anonymous donor. After a sperm donor is selected, a couple can proceed with donor-sperm IUI. IUI is an economical option for same-sex couples and can be done without the use of medication. According to a study from 2021, lesbian women undergoing IUI had an average clinical pregnancy rate of 13.2% per cycle and a 42.2% overall success rate, with an average of 3.6 cycles.

Pregnancy rate
The rates of successful pregnancy for artificial insemination are 10–15% per menstrual cycle using ICI, and 15–20% per cycle for IUI. In IUI, about 60 to 70% of patients achieve pregnancy after 6 cycles. However, these pregnancy rates may be very misleading, since many factors have to be included to give a meaningful answer, e.g. the definition of success and the calculation of the total population. These rates can be influenced by age, overall reproductive health, and whether the patient had an orgasm during the insemination. The literature is conflicting on whether immobilization after insemination increases the chances of pregnancy: previous data suggest that remaining immobile for 15 minutes after insemination makes a statistically significant difference, while another review article claims that it does not. A point of consideration is that having the patient remain immobile for 15 minutes carries a cost to the patient or healthcare system, justifiable only if it does increase the chances. For couples with unexplained infertility, unstimulated IUI is no more effective than natural means of conception. The pregnancy rate also depends on the total sperm count, or, more specifically, the total motile sperm count (TMSC), used in a cycle. The success rate increases with increasing TMSC, but only up to a certain count, when other factors become limiting to success. The summed pregnancy rate of two cycles, each using a TMSC of 5 million (perhaps a TSC of ~10 million on a graph), is substantially higher than that of one single cycle using a TMSC of 10 million. However, although more cost-efficient, using a lower TMSC also increases the average time taken to achieve pregnancy. Women whose age is becoming a major factor in fertility may not want to spend that extra time.
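As a rough illustration of how per-cycle rates compound, the sketch below assumes, simplistically, that successive cycles are independent with a constant success rate; with the quoted 15–20% per-cycle IUI rates it brackets the reported 60–70% figure after six cycles:

```python
def cumulative_success(per_cycle_rate, n_cycles):
    """Probability of at least one success in n_cycles independent attempts."""
    return 1 - (1 - per_cycle_rate) ** n_cycles

for rate in (0.15, 0.20):        # reported IUI per-cycle range
    p = cumulative_success(rate, 6)
    print(f"{rate:.0%} per cycle -> {p:.0%} after 6 cycles")
# prints roughly 62% and 74%, bracketing the reported 60-70% figure
```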
However, the following equations generalize the main factors involved: For intracervical insemination: N = Vs × c × rs / nr, where: N is how many children a single sample can give rise to. Vs is the volume of a sample (ejaculate), usually between 1.0 mL and 6.5 mL. c is the concentration of motile sperm in a sample after freezing and thawing, approximately 5–20 million per ml, but varies substantially. rs is the pregnancy rate per cycle, between 10% and 35%. nr is the total motile sperm count recommended for vaginal insemination (VI) or intra-cervical insemination (ICI), approximately 20 million. The pregnancy rate increases with increasing number of motile sperm used, but only up to a certain degree, when other factors become limiting instead. With these numbers, one sample would on average give rise to 0.1–0.6 children; that is, it actually takes on average 2–5 samples to make a child. For intrauterine insemination, a centrifugation fraction (fc) may be added to the equation: N = Vs × c × fc × rs / nr, where: fc is the fraction of the volume that remains after centrifugation of the sample, which may be about half (0.5) to a third (0.33). On the other hand, only 5 million motile sperm may be needed per cycle with IUI (nr = 5 million). Thus, only 1–3 samples may be needed for a child if used for IUI. Social implications One of the key issues arising from the rise of dependency on assisted reproductive technologies (ARTs) is the pressure placed on couples to conceive, "where children are highly desired, parenthood is culturally mandatory, and childlessness socially unacceptable". The medicalization of infertility creates a framework in which individuals are encouraged to think of infertility quite negatively. In many cultures donor insemination is religiously and culturally prohibited, often meaning that less accessible "high tech" and expensive ARTs, like IVF, are the only solution. An over-reliance on reproductive technologies in dealing with infertility prevents many – especially, for example, in the "infertility belt" of central and southern Africa – from dealing with many of the key causes of infertility treatable by artificial insemination techniques; namely preventable infections and dietary and lifestyle influences. If good records are not kept, the offspring when grown up risk accidental incest. Risk factors The risks of artificial insemination are low compared with other forms of fertility treatment. The most prominent risk is infection after the procedure, with other risks including a higher chance of having twins or triplets and minor vaginal bleeding during the procedure. Although these risks are minor and generally manageable, there is a significant knowledge gap between identity groups around the risks of fertility treatments in general. For instance, it was found that LGBTQ+ individuals "had significant knowledge gaps of risk factors associated with reproductive outcomes when compared to heterosexual female peers." Therefore, it is imperative that providers take extra care in educating their LGBTQ+ patients on the potential risks of artificial insemination. The implications of this knowledge gap between LGBTQ+ individuals and their heterosexual counterparts are serious and worth noting. Lack of access to proper information about these procedures and their risks may dissuade someone from pursuing them altogether.
As a result, there will be less normalization of LGBTQ+ family making and reproduction, which only perpetuates this cycle of lack of information among LGBTQ+ people. Legal restrictions Some countries restrict artificial insemination in a variety of ways. For example, some countries do not permit AI for single women, and other countries do not permit the use of donor sperm. As of May 2013, the following European countries permit medically assisted AI for single women: Belarus Belgium Britain Bulgaria Denmark Estonia Finland Germany Greece Hungary Iceland Ireland Latvia Moldova Montenegro Netherlands North Macedonia Romania Russia Spain Ukraine Armenia Cyprus Law in the United States History of Law Around Artificial Insemination Artificial insemination was regarded as adultery and was illegal until the 1960s, when states started recognizing children born from artificial insemination as legitimate. Once these children began to be recognized as legitimate, legal questions arose around who the parents of the child are, how to handle surrogacy, paternity rights, and eventually artificial insemination and LGBT+ parents. Prior to the use of artificial insemination, the legal parents of a child were the two people who conceived the child, or the person who birthed the child and their legal spouse; artificial insemination complicates the legal process of becoming a parent as well as the question of who the parent of the child is. Deciding who the parents of the child are is the largest legal predicament around artificial insemination, though questions around surrogacy and donors' rights also arise alongside it. Some major cases that deal with artificial insemination and parental rights are K.M. v. E.G., Johnson v. Calvert, Matter of Baby M, and In re K.M.H. Legal Parental Relations and Artificial Insemination When children are conceived the traditional way, there is little dispute about who the legal parents of the child are. However, because children conceived using artificial insemination may not be genetically related to one or more of their parents, who the legal parents of the child are can come into question. Prior to the passage of the Uniform Parentage Act in 1973, children conceived via artificial insemination were deemed "illegitimate". The Uniform Parentage Act then recognized children born from artificial insemination as legitimate and set precedent for how the legal parents of the child are decided. However, this act only applied to the children of married couples. It established that the person who birthed the child was the mother and that the father was the husband of that woman. In 2002, the Uniform Parentage Act, which is adopted individually on a state-by-state basis, was revised to address unmarried couples, stating that an unmarried couple has the same rights to the child that a married couple would. This extended the right to be a parent to a man who would fill the social role of a "father." There were now numerous ways to establish parental rights for both the mother and the father, depending on whether the child was born using a sperm donor or a surrogate. Currently, a revised version of the Uniform Parentage Act that expands how parental relations can be determined is starting to be passed in a few states.
This bill expands "father" to mean any person who would fill the role of a father, regardless of their gender, and expands "mother" to anyone who gives birth to the child, regardless of gender. In addition, this act would change any language of "husband" or "wife" to "spouse." Paternity rights There is no federal law that applies to all fifty states when it comes to artificial insemination and paternity rights, but the Uniform Parentage Act is a model which many states have adopted. Under the 1973 UPA, married heterosexual couples making use of artificial insemination through a licensed physician could list the husband as the natural father of the child, rather than the sperm donor. Since then a revised version of the Act has been introduced, though to less widespread adoption. Generally, paternity is not an issue when artificial insemination involves a married woman and an anonymous donor. Most states provide that anonymous donors' paternity claims are not recognized, and most sperm donation centers use contracts that require donors to sign away their paternity rights before they can participate. When the mother knows the donor, however, or engages in artificial insemination while unmarried, complications may arise. In cases of private sperm donation, paternity rights and responsibilities are often conferred onto sperm donors when: the donor and recipient did not comply with state laws regarding artificial insemination, the sperm donor and recipient know one another, or the donor had the intent of being a father to the child. When one or more of these things is true, courts have at times found written agreements relinquishing parental rights to be unenforceable. Opposition and criticism Religious opposition Some theologically grounded arguments, such as those of Pope John XXIII, reject the moral validity of this practice. However, according to a document of the USCCB, the intrauterine insemination (IUI) of a "licitly obtained" (through normal intercourse with a silastic sheath, i.e. a perforated condom) but technologically prepared semen sample (washed, etc.) has been neither approved nor disapproved by Church authority, and its moral validity remains under discussion. Some religious groups, such as the Catholic Church, and individuals have also criticized artificial insemination because acquiring sperm for the procedure is seen as "a form of adultery promoting the vice of masturbation." Other morality-based opposition There are critics of artificial insemination who voice concerns regarding the potential for AI to encourage eugenicist practices through selection of particular traits. This line of reasoning follows the history of artificial insemination in breeding livestock and other domesticated animals, wherein preferred traits are encouraged through human-controlled selection. Other animals Artificial insemination is used for pets, livestock, endangered species, and animals in zoos or marine parks that are difficult to transport. Reasons and techniques It may be used for many reasons, including to allow a male to inseminate a much larger number of females, to allow the use of genetic material from males separated by distance or time, to overcome physical breeding difficulties, to control the paternity of offspring, to synchronize births, to avoid injury incurred during natural mating, and to avoid the need to keep a male at all (such as for small numbers of females or in species whose fertile males may be difficult to manage).
Artificial insemination is much more common than natural mating in commercial breeding, as it allows many female animals to be impregnated from a single male. For instance, up to 30–40 female pigs can be impregnated from a single boar. Workers collect the semen by masturbating the boars, then insert it into the sows via a raised catheter known as a pork stork. Boars are still physically used to excite the females prior to insemination, but are prevented from actually mating. Semen is collected, extended, then cooled or frozen. It can be used on-site or shipped to the female's location. If frozen, the small plastic tube holding the semen is referred to as a straw. To allow the sperm to remain viable during the time before and after it is frozen, the semen is mixed with a solution containing glycerol or other cryoprotectants. An extender is a solution that allows the semen from a donor to impregnate more females by making insemination possible with fewer sperm. Antibiotics, such as streptomycin, are sometimes added to the sperm to control some bacterial venereal diseases. Before the actual insemination, estrus may be induced through the use of progestogen and another hormone (usually PMSG or prostaglandin F2α). History The first viviparous animal to be artificially inseminated was a dog. The experiment was conducted successfully by the Italian Lazzaro Spallanzani in 1780. Another pioneer was the Russian Ilya Ivanov in 1899. In 1935, diluted semen from Suffolk sheep was flown from Cambridge in Britain to Kraków, Poland, as part of an international research project. The participants included Prawochenki (Poland), Milovanoff (USSR), Hammond and Walton (UK), and Thomasset (Uruguay). Modern artificial insemination was pioneered by John O. Almquist of Pennsylvania State University. He improved breeding efficiency by the use of antibiotics (first proven with penicillin in 1946) to control bacterial growth, decrease embryonic mortality, and increase fertility. This, along with various new techniques for processing, freezing, and thawing of frozen semen, significantly enhanced the practical utilization of artificial insemination in the livestock industry and earned him the 1981 Wolf Foundation Prize in Agriculture. Many techniques developed by him have since been applied to other species, including humans. Species Artificial insemination is used in many non-human animals, including sheep, horses, cattle, pigs, dogs, pedigree animals generally, zoo animals, turkeys, and creatures as tiny as honeybees and as massive as orcas (killer whales). Artificial insemination of farm animals is common in the developed world, especially for breeding dairy cattle (75% of all inseminations). Swine are also bred using this method (up to 85% of all inseminations). It is an economical means for a livestock breeder to improve their herds by using males with desirable traits. Although common with cattle and swine, artificial insemination is not as widely practiced in the breeding of horses. A small number of equine associations in North America accept only horses that have been conceived by "natural cover" or "natural service" – the actual physical mating of a mare to a stallion – the Jockey Club being the most notable of these, as no artificial insemination is allowed in Thoroughbred breeding.
Other registries such as the AQHA and warmblood registries allow registration of foals created through artificial insemination, and the process is widely used, allowing the breeding of mares to stallions not resident at the same facility – or even in the same country – through the use of transported frozen or cooled semen. In modern species conservation, semen collection and artificial insemination are also used in birds. In 2013, scientists at the Justus Liebig University of Giessen, Germany, from the working group of Michael Lierz (Clinic for birds, reptiles, amphibians, and fish) developed a novel technique for semen collection and artificial insemination in parrots, producing the world's first macaw by assisted reproduction. Scientists working with captive orcas were able to pioneer the technique in the early 2000s, resulting in "the first successful conceptions, resulting in live offspring, using artificial insemination in any cetacean species". John Hargrove, a SeaWorld trainer, describes Kasatka as being the first orca to receive artificial insemination. Violation of rights Artificial insemination of animals has been criticised as a violation of animal rights, with animal rights advocates equating it with rape and arguing that it constitutes institutionalized bestiality. Artificial insemination of farm animals is condemned by animal rights campaigners such as People for the Ethical Treatment of Animals (PETA) and Joey Carbstrong, who identify the practice as a form of rape due to its sexual, involuntary and, as they perceive it, painful nature. Animal rights organizations such as PETA and Mercy for Animals frequently write against the practice in their articles. Much of the meat production in the United States depends on artificial insemination, resulting in explosive growth of the procedure over the past three decades. The state of Kansas makes no exceptions for artificial insemination under its bestiality law, thus making the procedure illegal there. Criteria for benefiting from artificial insemination according to the 2021 Bioethics Law According to the 2021 Bioethics Law (in France), the criteria that must be met to benefit from artificial insemination are as follows: Artificial insemination can be performed using sperm from the husband or frozen sperm from an anonymous donor. Both spouses or the unmarried woman must consent in advance to artificial insemination or embryo transfer. The parenting project must be validated through a series of interviews with professionals (doctors, psychologists, etc.). Individuals benefiting from artificial insemination must be of reproductive age. The 2021 Bioethics Law has expanded the scope of Medically Assisted Procreation (MAP). See also Accidental incest Conception device Donor conceived people Embryo transfer Ex-situ conservation Frozen bovine semen Frozen zoo Intracytoplasmic sperm injection Semen extender Sperm bank Sperm donation Sperm sorting Surrogacy References Further reading Hammond, John, et al., The Artificial Insemination of Cattle (Cambridge, Heffer, 1947, 61pp) External links Detailed description of the different fertility treatment options available A history of artificial insemination What are the Ethical Considerations for Sperm Donation?
United States state court rules sperm donor is not liable for children UK Sperm Donors Lose Anonymity AI technique in the equine IntraUterine TuboPeritoneal Insemination (IUTPI) The Hastings Center's Bioethics Briefing Book entry on assisted reproduction Annales de Gembloux L'Organisation Scientifique de l'Industrie Animale en URSS (Artificial Insemination in the USSR), by Luis Thomasset, 1936 Fertility medicine Reproduction in mammals Livestock Pets Cryobiology Semen Assisted reproductive technology Theriogenology Ethically disputed business practices towards animals
Artificial insemination
Physics,Chemistry,Biology
9,459
13,203,980
https://en.wikipedia.org/wiki/Hull%20classification%20symbol%20%28Canada%29
The Royal Canadian Navy uses hull classification symbols to identify the types of its ships, which are similar to the United States Navy's hull classification symbol system. The Royal Navy and some European and Commonwealth navies (19 in total) use a somewhat analogous system of pennant numbers. In a ship name such as HMCS Algonquin (DDG 283), the ship prefix HMCS, for Her or His Majesty's Canadian Ship, indicates the vessel is a warship in service to the Monarch of Canada, while the proper name Algonquin may follow a naming convention for the class of vessel. The hull classification symbol in the example is the parenthetical suffix (DDG 283), where the hull classification type DDG indicates that the Algonquin is a guided-missile destroyer and the hull classification number 283 is unique within that type. Listed below are various hull classification types, some currently in use and others retired and no longer in use. Auxiliary ships AGOR: Auxiliary General Oceanographic Research (retired) AGSC: surveying vessel (retired) AOR: Auxiliary Oiler Replenishment ARE: Auxiliary Replenishment Escort (retired) ASL: diving support vessel (retired from the Royal Canadian Navy) F: escort armed ships (retired; pre-World War II passenger ships that were converted to military roles during the war) FHE: Fast Hydrofoil Escort (retired; prototype tested 1968–1971) K: sloop and submarine tender (also used for frigates and corvettes) KC: sail training PCT: Patrol Craft Training (supersedes YAG) T: armed trawler (retired) YAG: Yard Auxiliary General (retired training vessels, superseded by PCT); YAG training vessels included CFAV Grizzly (YAG 306) and CFAV Cougar (YAG 308) YTB: Yard Tug YTL: Yard Tug; examples include Lawrenceville (YTL 590) and CFAV Parksville (YTL 591) YTM: Yard Tug YTR: Yard Tractor tug fireboats Aircraft carriers CVL: light carrier (retired) D: World War II escort carrier (retired) R: carrier, World War II (retired; was also used for destroyers) Corvettes K: corvette (retired; was also used for frigates and a sloop-of-war) Cruisers C: light cruiser (retired) Destroyers D: destroyer, World War II era (retired) DD: destroyer, World War II era (retired; DD was used by the United States Navy, I was used by the Royal Canadian Navy for US-built DD destroyers) DDE: escort destroyer (retired) DDH: air defence destroyer - helicopter DDG: area air defence - guided missile G: destroyer, World War II era (retired) H: escort destroyer, World War II era (retired) I: destroyer, World War II era (retired) R: destroyer, post-World War II (retired; was also used for a carrier) Frigates F: frigate FFE: escort frigate (post-World War II; retired) FFH: multi-role patrol frigate - helicopter Minesweepers J: minesweeper (retired; used for World War II era classes) MCB: post-World War II minesweeper (retired) MSA: Mine Sweeper Auxiliary (in use 1989–2000, retired) MM: Mechanical Minesweeper - more recently known as coastal defence vessels Submarines CC: World War I era gas-powered submarines CH: World War I era diesel-electric submarines S: Submarine (retired; Cold War era diesel-electric) SS: Submarine (retired; used for US-built vessels of the 1961–1969 and 1968–1974 classes) SSK: Hunter-Killer Submarine or long-range submarine, e.g. the Victoria-class submarines Patrol AOPV: Arctic and Offshore Patrol Vessel Notes References Royal Canadian Navy Naval ships of Canada Ship identification numbers
Hull classification symbol (Canada)
Mathematics
881
314,366
https://en.wikipedia.org/wiki/H-infinity%20methods%20in%20control%20theory
H∞ (i.e. "H-infinity") methods are used in control theory to synthesize controllers to achieve stabilization with guaranteed performance. To use H∞ methods, a control designer expresses the control problem as a mathematical optimization problem and then finds the controller that solves this optimization. H∞ techniques have the advantage over classical control techniques in that H∞ techniques are readily applicable to problems involving multivariable systems with cross-coupling between channels; disadvantages of H∞ techniques include the level of mathematical understanding needed to apply them successfully and the need for a reasonably good model of the system to be controlled. It is important to keep in mind that the resulting controller is only optimal with respect to the prescribed cost function and does not necessarily represent the best controller in terms of the usual performance measures used to evaluate controllers, such as settling time, energy expended, etc. Also, non-linear constraints such as saturation are generally not well-handled. These methods were introduced into control theory in the late 1970s and early 1980s by George Zames (sensitivity minimization), J. William Helton (broadband matching), and Allen Tannenbaum (gain margin optimization). The phrase H∞ control comes from the name of the mathematical space over which the optimization takes place: H∞ is the Hardy space of matrix-valued functions that are analytic and bounded in the open right-half of the complex plane defined by Re(s) > 0; the H∞ norm is the supremum singular value of the matrix over that space. In the case of a scalar-valued function, the elements of the Hardy space that extend continuously to the boundary and are continuous at infinity form the disk algebra. For a matrix-valued function, the norm can be interpreted as a maximum gain in any direction and at any frequency; for SISO systems, this is effectively the maximum magnitude of the frequency response. H∞ techniques can be used to minimize the closed loop impact of a perturbation: depending on the problem formulation, the impact will either be measured in terms of stabilization or performance. Simultaneously optimizing robust performance and robust stabilization is difficult. One method that comes close to achieving this is H∞ loop-shaping, which allows the control designer to apply classical loop-shaping concepts to the multivariable frequency response to get good robust performance, and then optimizes the response near the system bandwidth to achieve good robust stabilization. Commercial software is available to support H∞ controller synthesis. Problem formulation First, the process has to be represented according to the following standard configuration: The plant P has two inputs, the exogenous input w, which includes reference signals and disturbances, and the manipulated variables u. There are two outputs, the error signals z that we want to minimize, and the measured variables v that we use to control the system. v is used in K to calculate the manipulated variables u. Notice that all of these are generally vectors, whereas P and K are matrices. In formulae, the system is: z = P11 w + P12 u, v = P21 w + P22 u. It is therefore possible to express the dependency of z on w as z = F_l(P, K) w, where F_l, called the lower linear fractional transformation, is defined (the subscript comes from lower) as: F_l(P, K) = P11 + P12 K (I − P22 K)^(−1) P21. Therefore, the objective of H∞ control design is to find a controller K such that F_l(P, K) is minimised according to the H∞ norm. The same definition applies to H2 control design.
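To make the formulation concrete, the closed-loop map can be sketched numerically. The following is a minimal illustration, not any particular toolbox's API: it transcribes the lower LFT directly and estimates the H∞ norm (defined formally in the next paragraph) by a brute-force sweep of the maximum singular value over a frequency grid. The plant, controller, and grid below are illustrative assumptions.

```python
import numpy as np

def lower_lft(P11, P12, P21, P22, K):
    """F_l(P, K) = P11 + P12 K (I - P22 K)^(-1) P21, for matrices at one frequency."""
    I = np.eye(P22.shape[0])
    return P11 + P12 @ K @ np.linalg.solve(I - P22 @ K, P21)

def hinf_norm_estimate(F, omegas):
    """Grid estimate of sup over w of sigma_max(F(jw)) for a transfer-matrix function F."""
    return max(np.linalg.svd(F(1j * w), compute_uv=False)[0] for w in omegas)

# Illustrative SISO check: G(s) = 1/(s+1), static controller K = 2,
# sensitivity S = 1/(1 + G K) treated as a 1x1 transfer matrix.
S = lambda s: np.atleast_2d(1.0 / (1.0 + 2.0 / (s + 1.0)))
print(hinf_norm_estimate(S, np.logspace(-2, 3, 500)))  # -> approximately 1.0
```

A frequency grid can miss a narrow peak; production tools instead compute the norm to guaranteed accuracy, typically by bisection using the eigenvalues of an associated Hamiltonian matrix.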
The infinity norm of the transfer function matrix F_l(P, K) is defined as: ||F_l(P, K)||∞ = sup over ω of σ_max(F_l(P, K)(jω)), where σ_max is the maximum singular value of the matrix F_l(P, K)(jω). The achievable H∞ norm of the closed loop system is mainly given through the matrix D11 (when the system P is given in the form (A, B1, B2, C1, C2, D11, D12, D22, D21)). There are several ways to come to an H∞ controller: A Youla–Kucera parametrization of the closed loop often leads to a very high-order controller. Riccati-based approaches solve two Riccati equations to find the controller, but require several simplifying assumptions. An optimization-based reformulation of the Riccati equation uses linear matrix inequalities and requires fewer assumptions. See also Blaschke product Hardy space H square H-infinity loop-shaping Linear-quadratic-Gaussian control (LQG) Rosenbrock system matrix References Bibliography Control theory Hardy spaces
H-infinity methods in control theory
Mathematics
886
56,138,626
https://en.wikipedia.org/wiki/Nut%20shell%20filter
A nut shell filter is a device to remove oil from water. In the oil and gas industry, the term walnut shell filter is common, since black walnut shells are most often used. Typically nut shell filters are designed for loadings under 100 mg/L oil and 100 mg/L suspended solids and operate with 90–95% removal efficiency. High oil and solids loadings reduce run times between backwashes and result in reduced effluent quality. Design A bed of nut shell media is contained in a vessel. Vessels are typically vertical, but may also be horizontal. Particles are captured as flow penetrates through the media bed. Although it is possible to use other media for this purpose, walnut and pecan shells are most commonly used since they have several desirable properties making them well suited for oil removal. First, nut shells are hard with a high modulus of elasticity, resulting in a low attrition rate and minimal media replacement, typically <5% per year. Nut shells also have an equal affinity for water and oil, allowing oil to be adsorbed during normal operation but also to be removed from the bed during agitation, allowing for media reuse. During normal operation, water typically flows down through the media bed, where oil is coalesced, attracted to the nut shells, and accumulates in the interstitial spaces between the media. Typical nut shell media is 12/20 mesh (0.8 to 1.7 mm) or 12/16 mesh (1.2 to 1.7 mm). Although not designed for solids removal, an added benefit is that solids accumulate in the bed. As solids are collected, the differential pressure across the bed increases. Periodic backwashes are initiated to regenerate the media. Typically, backwash is triggered by one of the following: differential pressure; a timer (often 24 hours); or operator initiation (often due to exceeding a limit for effluent quality). Backwash occurs through mechanical agitation, such as backwash through a draft tube, backwash through an external media scrubber, or a mechanical mixer. If backwash is not sufficient, oil can cause media to agglomerate, known as mudballing. Typical flux of nut shell filters is 7 to 27 gpm/ft². Commercial vessels are sized to accommodate the flow rate of water and range up to 14 feet in diameter. For continuous operation, multiple vessels are frequently used so flow can continue to be treated while backwash occurs in one vessel. For large flows, several vessels may be used. Unlike some oil/water separators, no chemicals are required for oil removal in nut shell filters. Uses Nut shell filters were designed in the 1970s to separate crude oil from oilfield produced water, which remains the principal use. Nut shell filters can be used onshore and offshore, but are more common onshore, where the treatment requirements are typically more stringent and footprint is not limited. Nut shell filters are used for tertiary treatment, following primary and secondary treatment which remove the bulk of the oil and suspended solids. Typically, effluent is reinjected for reuse or disposal, or discharged to a surface body of water. Categories Media filter References Filters Water filters Walnut
Nut shell filter
Chemistry,Engineering
637
76,824,481
https://en.wikipedia.org/wiki/74%20Geminorum
74 Geminorum (f Geminorum) is a K-type giant star in the constellation Gemini. It is located about 640 light-years from Earth based on its Gaia DR3 parallax. The star is often subject to lunar occultations, allowing an accurate measurement of its angular diameter. It has an apparent magnitude of 5.05, making it faintly visible to the naked eye. Characteristics Based on its spectral type of K5.5III, 74 Geminorum is a star that has left the main sequence and evolved into a K-type giant. It radiates about 670 times the solar luminosity from its photosphere at an effective temperature of 3,933 K. The angular diameter, as measured by a lunar occultation, is . At the current distance of , as measured by a Hipparcos parallax of 6.13 milliarcseconds, this gives a physical size of . 74 Geminorum has an apparent magnitude of 5.05, making it visible to the naked eye only from locations with dark skies, far from light pollution. The absolute magnitude, i.e. the magnitude the star would have if it were seen at , is −1.01. It is located at the coordinates RA , DEC , which are within the Gemini constellation. The star is moving away from Earth at a velocity of 25.38 km/s. f Geminorum is the star's Bayer designation. Other designations for the star include 74 Geminorum (the Flamsteed designation), HIP 37300 (from the Hipparcos catalogue), HR 2938 (from the Bright Star Catalogue) and HD 61338 (from the Henry Draper Catalogue). The star is often subject to lunar occultations. One of these occultations was observed with the SAO RAS 6-m telescope, which allowed the angular diameter of 74 Geminorum to be accurately measured at . See also List of stars in Gemini Notes References Gemini (constellation) K-type giants Geminorum, 74 Geminorum, f Durchmusterung objects 061338 037300 2938
74 Geminorum
Astronomy
435
11,128,261
https://en.wikipedia.org/wiki/Phyllosticta%20coffeicola
Phyllosticta coffeicola is a fungal plant pathogen infecting coffee. References External links USDA ARS Fungal Database Fungal plant pathogens and diseases Coffee diseases coffeicola Fungi described in 1896 Fungus species
Phyllosticta coffeicola
Biology
47
49,905,706
https://en.wikipedia.org/wiki/Moduli%20stack%20of%20elliptic%20curves
In mathematics, the moduli stack of elliptic curves, denoted as M_{1,1} or M_ell, is an algebraic stack over Spec(Z) classifying elliptic curves. Note that it is a special case of the moduli stack of algebraic curves M_{g,n}. In particular its points with values in some field correspond to elliptic curves over the field, and more generally morphisms from a scheme S to it correspond to elliptic curves over S. The construction of this space spans over a century because of the various generalizations of elliptic curves as the field has developed. All of these generalizations are contained in M_{1,1}. Properties Smooth Deligne-Mumford stack The moduli stack of elliptic curves is a smooth separated Deligne–Mumford stack of finite type over Spec(Z), but is not a scheme as elliptic curves have non-trivial automorphisms. j-invariant There is a proper morphism of M_{1,1} to the affine line, the coarse moduli space of elliptic curves, given by the j-invariant of an elliptic curve. Construction over the complex numbers It is a classical observation that every elliptic curve over C is classified by its periods. Given a basis a, b for its integral homology H_1(E, Z) and a global holomorphic differential form ω (which exists since it is smooth and the dimension of the space of such differentials is equal to the genus, 1), the integrals of ω along a and along b give the generators for a Z-lattice of rank 2 inside of C (pg 158). Conversely, given an integral lattice Λ of rank 2 inside of C, there is an embedding of the complex torus C/Λ into P^2 from the Weierstrass ℘-function (pg 165). This isomorphic correspondence C/Λ ≅ E(C) is given by the ℘-function and holds up to homothety of the lattice Λ, which is the equivalence relation Λ ~ cΛ for c in C*. It is standard to then write the lattice in the form Z ⊕ Zτ for τ an element of the upper half-plane h, since the lattice could be multiplied by the inverse of one of its generators, and the rescaled generators 1 and τ generate the same lattice up to homothety. Then, the upper half-plane gives a parameter space of all elliptic curves over C. There is an additional equivalence of curves given by the action of the modular group SL2(Z), where an elliptic curve defined by the lattice Z ⊕ Zτ is isomorphic to curves defined by the lattice Z ⊕ Zτ' for τ' = (aτ + b)/(cτ + d), given by the modular action. Then, the moduli stack of elliptic curves over C is given by the stack quotient [h / SL2(Z)]. Note some authors construct this moduli space by instead using the action of the modular group PSL2(Z). In this case, the points in M_{1,1} having only trivial stabilizers are dense. Stacky/Orbifold points Generically, the points in M_{1,1} are isomorphic to the classifying stack B(Z/2) since every elliptic curve corresponds to a double cover of P^1, so the Z/2-action on the point corresponds to the involution of these two branches of the covering. There are a few special points (pg 10-11) corresponding to elliptic curves with j-invariant equal to 1728 and 0, where the automorphism groups are of order 4, 6, respectively (pg 170). One point in the fundamental domain, with stabilizer of order 4, corresponds to τ = i, and the points corresponding to the stabilizer of order 6 correspond to τ = e^{2πi/3} and τ = e^{πi/3} (pg 78). Representing involutions of plane curves Given a plane curve by its Weierstrass equation y^2 = x^3 + ax + b and a solution (t, s), generically for j-invariant not equal to 0 or 1728, there is the Z/2-involution sending (t, s) to (t, -s). In the special case of a curve y^2 = x^3 + ax with complex multiplication, there is the Z/4-involution sending (t, s) to (-t, is). The other special case is when a = 0, so for a curve of the form y^2 = x^3 + b there is the Z/6-involution sending (t, s) to (ζt, -s), where ζ is the third root of unity e^{2πi/3}. Fundamental domain and visualization There is a subset of the upper-half plane called the fundamental domain which contains every isomorphism class of elliptic curves. It is the subset F = { τ in h : |Re(τ)| ≤ 1/2 and |τ| ≥ 1 }. It is useful to consider this space because it helps visualize the stack M_{1,1}.
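As an aside for intuition, the statement that the fundamental domain contains every isomorphism class can be made algorithmic. The sketch below is not taken from the article; it is the standard reduction using the generators T: τ → τ + 1 and S: τ → -1/τ of SL2(Z), and the function name is hypothetical.

```python
def reduce_to_fundamental_domain(tau: complex, max_iter: int = 10_000) -> complex:
    """Translate tau into the fundamental domain |Re(tau)| <= 1/2, |tau| >= 1
    using the SL2(Z) generators T: tau -> tau + 1 and S: tau -> -1/tau."""
    if tau.imag <= 0:
        raise ValueError("tau must lie in the upper half-plane")
    for _ in range(max_iter):
        # Apply a power of T to center the real part in [-1/2, 1/2].
        tau = complex(tau.real - round(tau.real), tau.imag)
        if abs(tau) >= 1:
            return tau  # already in the fundamental domain
        tau = -1 / tau  # apply S, which strictly increases Im(tau) when |tau| < 1
    raise RuntimeError("reduction did not terminate")
```

Since each step with |τ| < 1 strictly increases the imaginary part, the loop terminates, and every SL2(Z)-orbit meets the region; this is what makes the visualization described above possible.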
From the quotient map h → h/SL2(Z), the image of the fundamental domain F is surjective and its restriction to the interior of F is injective (pg 78). Also, the points on the boundary can be identified with their mirror image under the involution sending τ to -τ̄, so the quotient can be visualized as the projective curve with a point removed at infinity (pg 52). Line bundles and modular functions There are line bundles over the moduli stack whose sections correspond to modular functions on the upper-half plane h. On h × C there are SL2(Z)-actions compatible with the action on h; the degree k action is given by (τ, z) → ((aτ + b)/(cτ + d), (cτ + d)^k z), hence the trivial line bundle h × C with the degree k action descends to a unique line bundle denoted L^k. Notice the action on the C factor is a representation of SL2(Z) on C, hence such representations can be tensored together, showing L^k ⊗ L^l ≅ L^{k+l}. The sections of L^k are then functions f : h → C compatible with the action of SL2(Z), or equivalently, functions f such that f((aτ + b)/(cτ + d)) = (cτ + d)^k f(τ). This is exactly the condition for a holomorphic function to be modular (of weight k). Modular forms The modular forms are the modular functions which can be extended to the compactification; this is because in order to compactify the stack M_{1,1}, a point at infinity must be added, which is done through a gluing process by gluing the q-disk (where a modular function has its q-expansion) (pgs 29-33). Universal curves Constructing the universal curves is a two step process: (1) construct a versal curve E_h → h and then (2) show this behaves well with respect to the SL2(Z)-action on h. Combining these two actions together yields the quotient stack [(h × C) / (Z^2 ⋊ SL2(Z))]. Versal curve Every rank 2 Z-lattice in C induces a canonical Z^2-action on C. As before, since every lattice is homothetic to a lattice of the form Z ⊕ Zτ, the action of (m, n) sends a point z to z + m + nτ. Because the τ in Z ⊕ Zτ can vary in this action, there is an induced Z^2-action on h × C, giving the quotient space E_h → h by projecting onto h. SL2-action on Z2 There is an SL2(Z)-action on Z^2 which is compatible with the action on h, meaning given a point τ in h and a g in SL2(Z), the new lattice associated to gτ carries an induced Z^2-action from g, which behaves as expected. This action is given by (m, n)·g, which is matrix multiplication on the right, so (m, n)·g = (ma + nc, mb + nd). See also Fundamental domain Homothety Level structure (algebraic geometry) Moduli of abelian varieties Shimura variety Modular curve Elliptic cohomology References External links Algebraic geometry
Moduli stack of elliptic curves
Mathematics
1,199
22,842,521
https://en.wikipedia.org/wiki/Black%20morel
Several species of fungi share the name black morel: Morchella angusticeps (L.) Pers. (1801) Morchella conica Pers. Morchella elata Fr. (1822) Morchella tomentosa M.Kuo (2008), the black foot morel
Black morel
Biology
67
66,926
https://en.wikipedia.org/wiki/Demodulation
Demodulation is the process of extracting the original information-bearing signal from a carrier wave. A demodulator is an electronic circuit (or computer program in a software-defined radio) that is used to recover the information content from the modulated carrier wave. There are many types of modulation, and there are many types of demodulators. The signal output from a demodulator may represent sound (an analog audio signal), images (an analog video signal) or binary data (a digital signal). These terms are traditionally used in connection with radio receivers, but many other systems use many kinds of demodulators. For example, in a modem, which is a contraction of the terms modulator/demodulator, a demodulator is used to extract a serial digital data stream from a carrier signal which is used to carry it through a telephone line, coaxial cable, or optical fiber. History Demodulation was first used in radio receivers. In the wireless telegraphy radio systems used during the first three decades of radio (1884–1914) the transmitter did not communicate audio (sound) but transmitted information in the form of pulses of radio waves that represented text messages in Morse code. Therefore, the receiver merely had to detect the presence or absence of the radio signal, and produce a click sound. The device that did this was called a detector. The first detectors were coherers, simple devices that acted as a switch. The term detector stuck; it was used for other types of demodulators and continues to be used to the present day for a demodulator in a radio receiver. The first type of modulation used to transmit sound over radio waves was amplitude modulation (AM), invented by Reginald Fessenden around 1900. An AM radio signal can be demodulated by rectifying it to remove one side of the carrier, and then filtering to remove the radio-frequency component, leaving only the modulating audio component. This is equivalent to peak detection with a suitably long time constant. The amplitude of the recovered audio frequency varies with the modulating audio signal, so it can drive an earphone or an audio amplifier. Fessenden invented the first AM demodulator in 1904, called the electrolytic detector, consisting of a short needle dipping into a cup of dilute acid. The same year John Ambrose Fleming invented the Fleming valve or thermionic diode, which could also rectify an AM signal. Techniques There are several ways of demodulation depending on how parameters of the base-band signal such as amplitude, frequency or phase are transmitted in the carrier signal. For example, for a signal modulated with a linear modulation like amplitude modulation (AM), we can use a synchronous detector. On the other hand, for a signal modulated with an angular modulation, we must use a frequency modulation (FM) demodulator or a phase modulation (PM) demodulator. Different kinds of circuits perform these functions. Many techniques such as carrier recovery, clock recovery, bit slip, frame synchronization, rake receiver, pulse compression, Received Signal Strength Indication, error detection and correction, etc., are only performed by demodulators, although any specific demodulator may perform only some or none of these techniques. Many devices can act as a demodulator if they pass the radio waves through a nonlinearity. AM radio An AM signal encodes the information into the carrier wave by varying its amplitude in direct sympathy with the analogue signal to be sent.
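Before the specific methods are described, a toy numerical sketch may help fix ideas. This is not drawn from the article: it simulates an AM signal in NumPy and recovers the message with the rectify-and-filter (envelope detector) approach described in the next paragraph. The sample rate, carrier frequency, tone, and filter cutoff are all illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 48_000                              # sample rate in Hz
t = np.arange(0, 0.05, 1 / fs)           # 50 ms of signal
message = np.sin(2 * np.pi * 300 * t)    # 300 Hz "audio" tone
carrier = np.cos(2 * np.pi * 8_000 * t)  # 8 kHz carrier
am = (1.0 + 0.5 * message) * carrier     # AM with 50% modulation depth

rectified = np.abs(am)                   # full-wave "diode" rectification
b, a = butter(4, 1_000 / (fs / 2))       # low-pass well below the carrier
audio = filtfilt(b, a, rectified)        # ~ proportional to 1 + 0.5*message
```

Up to a constant scale factor and the DC offset contributed by the carrier, audio tracks the original 300 Hz message; a product detector would instead multiply am by a locally generated carrier before the same low-pass filtering.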
There are two methods used to demodulate AM signals: The envelope detector is a very simple method of demodulation that does not require a coherent demodulator. It consists of a rectifier (anything that will pass current in one direction only) or other non-linear component that enhances one half of the received signal over the other and a low-pass filter. The rectifier may be in the form of a single diode or may be more complex. Many natural substances exhibit this rectification behaviour, which is why it was the earliest modulation and demodulation technique used in radio. The filter is usually an RC low-pass type but the filter function can sometimes be achieved by relying on the limited frequency response of the circuitry following the rectifier. The crystal set exploits the simplicity of AM modulation to produce a receiver with very few parts, using the crystal as the rectifier and the limited frequency response of the headphones as the filter. The product detector multiplies the incoming signal by the signal of a local oscillator with the same frequency and phase as the carrier of the incoming signal. After filtering, the original audio signal will result. SSB is a form of AM in which the carrier is reduced or suppressed entirely, which requires coherent demodulation. For further reading, see sideband. FM radio Frequency modulation (FM) has numerous advantages over AM such as better fidelity and noise immunity. However, it is much more complex to both modulate and demodulate a carrier wave with FM, and AM predates it by several decades. There are several common types of FM demodulators: The quadrature detector, which phase shifts the signal by 90 degrees and multiplies it with the unshifted version. One of the terms that drops out from this operation is the original information signal, which is selected and amplified. The signal is fed into a PLL and the error signal is used as the demodulated signal. The most common is a Foster–Seeley discriminator. This is composed of an electronic filter which decreases the amplitude of some frequencies relative to others, followed by an AM demodulator. If the filter response changes linearly with frequency, the final analog output will be proportional to the input frequency, as desired. A variant of the Foster–Seeley discriminator called the ratio detector. Another method uses two AM demodulators, one tuned to the high end of the band and the other to the low end, and feeds the outputs into a difference amplifier. Using a digital signal processor, as used in software-defined radio. PM QAM QAM demodulation requires a coherent receiver. It uses two product detectors whose local reference signals are a quarter cycle apart in phase: one for the in-phase component and one for the quadrature component. The demodulator keeps these product detectors tuned to a continuous or intermittent pilot signal. See also Detection theory Detector (radio) Fax demodulator References External links Demodulation chapter on All About Circuits
Demodulation
Engineering
1,358
56,636,552
https://en.wikipedia.org/wiki/HackRF%20One
HackRF One is a wide-band software defined radio (SDR) half-duplex transceiver created and manufactured by Great Scott Gadgets. It is able to both send and receive signals. Its principal designer, Michael Ossmann, launched a successful Kickstarter campaign in 2014 with a first run of the project called HackRF. The open source nature of the hardware and software has attracted hackers, amateur radio enthusiasts, and information security practitioners. Overview HackRF One is capable of receiving and transmitting on a frequency range of 1 MHz to 6 GHz, with maximum output power of up to 15 dBm depending on the band. The unit comes with an SMA antenna port, clock input and clock output SMA ports, and a USB 2.0 port. HackRF One integrates with popular software defined radio software such as GNU Radio and SDR#. The popularity of HackRF One as a security research platform has led to it being featured in many information security conference talks, such as at Black Hat, DEF CON and BSides. Academic research Kimmo Heinäaro presented a paper at the 2015 International Conference on Military Communications and Information Systems (ICMCIS) outlining how military tactical communications could be hacked with HackRF One and other open source tools. In 2017, researchers described a GPS spoofing attack that feeds a vehicle false signals and mapping data in order to deliver the target to a desired location. Media attention HackRF One has received criticism in several media reports because it can be used to intercept and replay key fob signals to open car and garage doors. External links HackRF One on Great Scott Gadgets References Software-defined radio
HackRF One
Engineering
329
486,117
https://en.wikipedia.org/wiki/Quinary
Quinary (base 5 or pental) is a numeral system with five as the base. A possible origin of a quinary system is that there are five digits on either hand. In the quinary place system, five numerals, from 0 to 4, are used to represent any real number. According to this method, five is written as 10, twenty-five is written as 100, and sixty is written as 220. As five is a prime number, only the reciprocals of the powers of five terminate, although its location between two highly composite numbers (4 and 6) guarantees that many recurring fractions have relatively short periods. Comparison to other radices Usage Many languages use quinary number systems, including Gumatj, Nunggubuyu, Kuurn Kopan Noot, Luiseño, and Saraveca. Gumatj has been reported to be a true "5–25" language, in which 25 is the higher group of 5. However, Harald Hammarström reports that "one would not usually use exact numbers for counting this high in this language and there is a certain likelihood that the system was extended this high only at the time of elicitation with one single speaker," pointing to the Biwat language as a similar case (previously attested as 5-20, but with one speaker recorded as making an innovation to turn it 5-25). Biquinary In this section, the numerals are in decimal. For example, "5" means five, and "10" means ten. A decimal system with two and five as sub-bases is called biquinary and is found in Wolof and Khmer. Roman numerals are an early biquinary system. The numbers 1, 5, 10, and 50 are written as I, V, X, and L respectively. Seven is VII, and seventy is LXX. Note that these are not positional number systems. In theory, a number such as 73 could be written as IIIXXL (without ambiguity) and as LXXIII. To extend Roman numerals beyond thousands, a vinculum (horizontal overline) was added, multiplying the letter value by a thousand, e.g. M̅ was one million. There is also no sign for zero. But with the introduction of inversions like IV and IX, it was necessary to keep the order from most to least significant. Many versions of the abacus, such as the suanpan and soroban, use a biquinary system to simulate a decimal system for ease of calculation. Urnfield culture numerals and some tally mark systems are also biquinary. Units of currencies are commonly partially or wholly biquinary. Bi-quinary coded decimal is a variant of biquinary that was used on a number of early computers, including Colossus and the IBM 650, to represent decimal numbers. Calculators and programming languages Few calculators support calculations in the quinary system, except for some Sharp models (including some of the EL-500W and EL-500X series, where it is named the pental system) since about 2005, as well as the open-source scientific calculator WP 34S. See also Bi-quinary coded decimal References External links Quinary Base Conversion, includes fractional part, from Math Is Fun Quinary-pentavigesimal and decimal calculator, uses D'ni numerals from the Myst franchise, integers only, fan-made. Positional numeral systems 5 (number)
Quinary
Mathematics
757
22,409,572
https://en.wikipedia.org/wiki/Sibling%20relationship
Siblings play a unique role in one another's lives that simulates the companionship of parents as well as the influence and assistance of friends. Because siblings often grow up in the same household, they have a large amount of exposure to one another, like other members of the immediate family. However, though a sibling relationship can have both hierarchical and reciprocal elements, this relationship tends to be more egalitarian and symmetrical than with family members of other generations. Furthermore, sibling relationships often reflect the overall condition of cohesiveness within a family. Siblings normally spend more time with each other during their childhood than they do with parents or anyone else; they trust and cherish each other, so betrayal by one sibling could cause problems for that person physically as well as mentally and emotionally. Sibling relationships are often the longest-lasting relationship in individuals' lives. Cultural differences The content and context of sibling relationships varies between cultures. In industrialized cultures, sibling relationships are typically discretionary. People are encouraged to stay in contact and cooperate with their brothers and sisters, but this is not an obligation. Older siblings in these cultures are sometimes given responsibilities to watch over a younger sibling, but this is only occasional, with parents taking on the primary role of caretaker. In contrast, close sibling relationships in nonindustrialized cultures are often obligatory, with strong cultural norms prompting cooperation and close proximity between siblings. In India, the brother-sister sibling relationship is so cherished that a festival is held in observance called Raksha Bandhan. At this celebration, the sister presents the brother with a woven bracelet to show their lasting bond even when they have raised their own families. These cultures also extend caregiving roles to older siblings, who are constantly expected to watch over younger siblings. Throughout the lifespan Infancy and childhood A relationship begins with the introduction of two siblings to one another. Older siblings are often made aware of their soon-to-be younger brother or sister at some point during their mother's pregnancy, which may help facilitate adjustment for the older child and result in a better immediate relationship with the newborn. Parents pay attention not only to the newborns but to the older children to avoid sibling rivalry; interactions that can contribute to the older sibling's social aptitude can cognitively stimulate the younger sibling. Older siblings even adapt their speech to accommodate for the low language comprehension of the younger sibling, much like parents do with baby talk. The attachment theory used to describe an infant's relationship to a primary caregiver may also be applied to siblings. If an infant finds an older sibling to be responsive and sees him or her as a source of comfort, a supportive bond may form. On the contrary, a negative bond may form if the older sibling acts in an aggressive, neglectful, or otherwise negative manner. Sibling attachment is further accentuated in the absence of a primary caregiver when the younger sibling must rely on the older one for security and support. Even as siblings age and develop, their relationships have considerable stability from infancy through middle childhood, during which positive and negative interactions remain constant in frequency. Still, this time period marks great changes for both siblings. 
Assuming an age gap of only a few years, this marks the time when the older sibling is beginning school, meeting peers, and making friends. This shift in environment reduces both children's access to one another and depletes the older sibling's dependency on the younger for social support, which can now be found outside the relationship. When the younger sibling begins school, the older sibling may help him or her become acclimated and advise on the new struggles that come with being a student. At the same time, the older sibling is also available to answer questions and discuss topics that the younger sibling may not feel comfortable bringing up with a parent. Adolescence The nature of sibling relationships changes from childhood to adolescence. While young adolescents often provide one another with warmth and support, this period of development is also marked by increased conflict and emotional distance. However, this effect varies based on the sex of siblings. Mixed-sex sibling pairs often experience more drastic decreases in intimacy during adolescence, while same-sex sibling pairs experience a slight rise in intimacy during early adolescence followed by a slight drop. In both instances, intimacy once again increases during young adulthood. This trend may be the result of an increased emphasis on peer relationships during adolescence. Often, adolescents from the same family adopt differing lifestyles which further contributes to emotional distance between one another. Siblings may influence one another in much the same way that peers do, especially during adolescence. These relationships may even compensate for the negative psychological impact of not having friends and may provide individuals with a sense of self-worth. Older siblings can effectively model good behaviour for younger siblings. For instance, there is evidence that communication about safe sex with a sibling may be just as effective as with a parent. Conversely, an older sibling may encourage risky sexual behaviour by modelling a sexually advanced lifestyle, and younger siblings of teen parents are more likely to become teen parents themselves. Research on adolescents suggests positive sibling influences can promote healthy and adaptive functioning while negative interactions can increase vulnerabilities and problem behaviours. Intimate and positive sibling interactions are an important source of support for adolescents and can promote the development of prosocial behaviour. However, when sibling relationships are characterized by conflict and aggression, they can promote delinquency, and antisocial behaviour among peers. Adulthood and old age When siblings reach adulthood, it is more likely that they will no longer live in the same place and that they will become involved in jobs, hobbies, and romantic interests that they do not share and therefore cannot use to relate to one another. In this stage the common struggles of school and being under the strict jurisdiction of parents is dissolved. Despite these factors, siblings often maintain a relationship through adulthood and even old age. Proximity is a large factor in maintaining contact between siblings; those who live closer to one another are more likely to visit each other frequently. In addition, gender also plays a significant role. Sisters are most likely to maintain contact with one another, followed by mixed-gender dyads. Brothers are least likely to contact one another frequently. 
Communication is especially important when siblings do not live near one another. Communication may take place in person, over the phone, by mail, and, with increasing frequency, by means of online communication such as email and social networking. Often, siblings will communicate indirectly through a parent or a mutual friend or relative. Between adult and elderly siblings, conversations tend to focus on family happenings and reflections of the past. In adulthood, siblings still perform a role similar to that of friends. Friends and siblings are often similar in age, with any age gap seeming even less significant in adulthood. Furthermore, both relationships are often egalitarian in nature, although unlike sibling relationships, friendships are voluntary. The specific roles of each relationship also differ, especially later in life. For elderly siblings, friends tend to act as companions while siblings play the roles of confidants. It is difficult to make long-term assumptions about adult sibling relationships, as they may rapidly change in response to individual or shared life events. Marriage of one sibling may either strengthen or weaken the sibling bond. The same can be said for change of location, birth of a child, and numerous other life events. However, divorce or widowhood of one sibling or death of a close family member most often results in increased closeness and support between siblings. Family system Sibling relationships are important within the family system. Family systems theory (Kerr and Bowen, 1988) is a theory of human behavior that defines the family unit as a complex social system, in which members interact to influence each other's behavior. These relationships have an effect on child development, behavior, and support throughout the life span. A child's development is influenced by the dynamics of this family system. The relationship between siblings is the most important, but it receives less attention than other family relationships. Within the family system, not all roles amongst siblings are the same or shared. An older sibling can be placed in a position to fill a parental role, making the older sibling a role model and caretaker for the younger sibling; this may have a positive impact on the younger sibling's development. Sibling rivalry Sibling rivalry describes the competitive relationship or animosity between siblings, blood-related or not. Often competition is the result of a desire for greater attention from parents. However, even the most conscientious parents can expect to see sibling rivalry in play to a degree. Children tend to naturally compete with each other not only for attention from parents but for recognition in the world. Siblings generally spend more time together during childhood than they do with parents. The sibling bond is often complicated and is influenced by factors such as parental treatment, birth order, personality, and people and experiences outside the family. According to child psychologist Sylvia Rimm, sibling rivalry is particularly intense when children are very close in age and of the same gender, or where one child is intellectually gifted. Sibling rivalry involves aggression and insults, especially between siblings close in age. Causes Siblings may be jealous of and harbor resentment toward one another. The main causes of sibling rivalry are lack of social skills, concerns with fairness, individual temperaments, special needs, parenting style, parents' conflict resolution skills, and culture.
In many families, the children count their siblings among their friends. But it is also common for siblings to be great friends one day and hateful to one another the next. There are many things that can influence and shape sibling rivalry. According to Kyla Boyse from the University of Michigan, each child in a family competes to define who they are as individuals and wants to show that they are separate from their siblings. Children may feel they are getting unequal amounts of their parents' attention, discipline, and responsiveness. Children fight more in families where there is no understanding that fighting is not an acceptable way to resolve conflicts, and where there are no alternative ways of handling such conflicts. Stress in the parents' and children's lives can create more conflict and increase sibling rivalry. Psychoanalytic view Sigmund Freud saw the sibling relationship as an extension of the Oedipus complex, where brothers were in competition for their mother's attention and sisters for their father's. For example, in the case of Little Hans, Freud postulated that the young boy's fear of horses was related to jealousy of his baby sister, as well as the boy's desire to replace his father as his mother's mate. This view has been largely discredited by modern research. Parent-offspring conflict theory Formulated by Robert Trivers, parent-offspring conflict theory is important for understanding sibling dynamics and parental decision-making. Because parents are expected to invest whatever is necessary to ensure the survival of their offspring, it is generally thought that parents will allocate the maximum amount of resources available, possibly to their own detriment and that of other potential offspring. While parents invest as much as possible in their offspring, offspring may at the same time attempt to obtain more resources than the parents are able to give, to maximize their own reproductive success. Therefore, there is a conflict between the wants of the individual offspring and what the parent is able or willing to give. An extension of Trivers' theory predicts that it will pay siblings to compete intensely with one another. It can pay to be selfish even to the detriment of not only one's parents but also one's siblings, as long as the total fitness benefits of doing so outweigh the total costs. Other psychological approaches Alfred Adler saw siblings as "striving for significance" within the family and felt that birth order was an important aspect of personality development. The feeling of being replaced or supplanted is often the cause of jealousy on the part of the older sibling. In fact, psychologists and researchers today endorse the influence of birth order, as well as age and gender constellations, on sibling relationships. A child's personality can also have an effect on how much sibling rivalry will occur in a home. Some children seem to naturally accept changes, while others may be naturally competitive, and exhibit this nature long before a sibling enters the home. However, parents are seen as capable of having an important influence on whether or not their children become competitive. David Levy introduced the term "sibling rivalry" in 1941, claiming that for an older sibling "the aggressive response to the new baby is so typical that it is safe to say it is a common feature of family life." Researchers today generally endorse this view, noting that parents can ameliorate this response by being vigilant against favoritism and by taking appropriate preventative steps.
In fact, say researchers, the ideal time to lay the groundwork for a lifetime of supportive relationships between siblings is during the months prior to the new baby's arrival. Throughout life According to observational studies by Judy Dunn, children as young as one year old may be able to exhibit self-awareness and perceive differences in parental treatment between themselves and a sibling, and these early impressions can shape a lifetime relationship with the younger sibling. From 18 months on, siblings can understand family rules and know how to comfort and be kind to each other. By 3 years old, children have a sophisticated grasp of social rules, can evaluate themselves in relation to their siblings, and know how to adapt to circumstances within the family. Whether they have the drive to adapt, to get along with a sibling whose goals and interests may be different from their own, can make the difference between a cooperative relationship and a rivalrous one. Studies have further shown that the greatest sibling rivalry tends to be shown between brothers, and the least between sisters. Naturally, there are exceptions to this rule. What makes brother/brother ties so rivalrous? Deborah Gold has launched a new study that is not yet completed. But she has found a consistent theme running through the interviews she has conducted thus far. "The thing that rides through with brothers that doesn't come across in other sibling pairs is this notion of parental and societal comparison. Somehow with boys, it seems far more natural to compare them, especially more than with sister/brother pairs. Almost from day one, the fundamental developmental markers—who gets a tooth first, who crawls, walks, speaks first—are held up on a larger-than-life scale. And this comparison appears to continue from school to college to the workplace. Who has the biggest house, who makes the most money, drives the best car are constant topics of discussion. In our society, men are supposed to be achievement-oriented, aggressive. They're supposed to succeed." Sibling rivalry often continues throughout childhood and can be very frustrating and stressful to parents. Adolescents fight for the same reasons younger children fight, but they are better equipped physically and intellectually to hurt and be hurt by each other. Physical and emotional changes cause pressures in the teenage years, as do changing relationships with parents and friends. Fighting with siblings as a way to get parental attention may increase in adolescence. One study found that the age group 10 to 15 reported the highest level of competition between siblings. However, the degree of sibling rivalry and conflict is not constant. Longitudinal studies from Western societies looking at the degree of sibling rivalry throughout childhood suggest that, over time, sibling relationships become more egalitarian, which suggests less conflict. Yet this effect is moderated by birth order: older siblings report more or less the same level of conflict and rivalry throughout their childhood. In contrast, younger siblings report a peak in conflict and rivalry around early adolescence and a drop in late adolescence. The decline in late adolescence makes sense from an evolutionary perspective: once parental resources cease and/or individuals have started their own reproductive careers, it makes little sense for siblings to continue fierce competition over resources that no longer affect their reproductive success.
Sibling rivalry can continue into adulthood, and sibling relationships can change dramatically over the years. Events such as a parent's illness may bring siblings closer together, whereas marriage may drive them apart, particularly if the in-law relationship is strained. Approximately one-third of adults describe their relationship with siblings as rivalrous or distant. However, rivalry often lessens over time. At least 80 percent of siblings over age 60 enjoy close ties. Prevention Parents can reduce the opportunity for rivalry by refusing to compare or typecast their children, teaching the children positive ways to get attention from each other and from the parent, planning fun family activities together, and making sure each child has enough time and space of their own. They can also give each child individual attention, encourage teamwork, refuse to hold up one child as a role model for the others, and avoid favoritism. It is also important for parents to invest in time spent together as a whole family. Children who have a strong sense of being part of a family are likely to see siblings as an extension of themselves. However, according to Sylvia Rimm, although sibling rivalry can be reduced, it is unlikely to be eliminated. In moderate doses, rivalry may be a healthy indication that each child is assertive enough to express his or her differences with other siblings. Weihe suggests that four criteria should be used to determine whether questionable behavior is rivalry or sibling abuse. First, one must determine if the questionable behavior is age appropriate: e.g., children use different conflict-resolution tactics during various developmental stages. Second, one must determine if the behavior is an isolated incident or part of an enduring pattern: abuse is, by definition, a long-term pattern rather than occasional disagreements. Third, one must determine if there is an "aspect of victimization" to the behavior: rivalry tends to be incident-specific, reciprocal, and obvious to others, while abuse is characterized by secrecy and an imbalance of power. Fourth, one must determine the goal of the questionable behavior: the goal of abuse tends to be embarrassment or domination of the victim. Parents should remember that sibling rivalry today may someday result in siblings being cut off from each other when the parents are gone. Continuing to encourage family togetherness, treating siblings equitably, and using family counseling to help curb excessive sibling rivalry may ultimately serve children in their adult years. Sibling marriage and incest While cousin marriage is legal in most countries, and avunculate marriage is legal in many, sexual relations between siblings are considered incestuous almost universally. An innate sexual aversion between siblings forms due to close association in childhood, in what is known as the Westermarck effect. Children who grow up together do not normally develop sexual attraction, even if they are unrelated, and conversely, siblings who were separated at a young age may develop sexual attraction. Thus, many cases of sibling incest, including accidental incest, concern siblings who were separated at birth or at a very young age. One study from New England has shown that roughly 10% of males and 15% of females had experienced some form of sexual contact with a brother or sister, with the most common form being fondling or touching of one another's genitalia. Among adults John M. Goggin and William C.
Sturtevant (1964) listed eight societies which generally allowed sibling marriage, and thirty-four societies where sibling marriage was permissible among certain classes only. A historical marriage that took place between full siblings was that between John V, Count of Armagnac and Isabelle d'Armagnac, dame des Quatre-Vallées, c. 1450. The papal dispensation provided for this union was declared forged in 1457. The marriage was declared invalid, and the children were declared bastards and removed from the line of succession. In antiquity, Laodice IV, a Seleucid princess, priestess, and queen, married all three of her brothers in turn. Sibling marriage was especially frequent in Roman Egypt, and probably even the preferred norm among the nobility. In most cases, marriage of siblings in Roman Egypt was a result of the religious belief in divinity and the maintenance of purity. Based on the model from the myth of Osiris and Isis, it was considered necessary for a god to marry a goddess and vice versa. This led to Osiris marrying his sister Isis, given the limited number of gods and goddesses available to marry. In order to preserve the divinity of ruling families, siblings of the royal families would marry each other. Sibling marriage is also common among the Zande people of Central Africa. In a number of European countries such as Belgium, France, Luxembourg, the Netherlands and Spain, marriage between siblings remains prohibited, but incest between siblings is no longer prosecuted. Among children By some estimates, between forty and seventy-five percent of children will engage in some sort of sexual behavior before reaching 13 years of age. In these situations, children are exploring each other's bodies while also exploring gender roles and behaviors, and their sexual experimentation does not indicate that these children are child sex offenders. As siblings are generally close in age and in physical proximity, the opportunity for sexual exploration between siblings is fairly high; if it is simply based on mutual curiosity, these activities are not considered harmful or distressing, either in childhood or later in adulthood. According to Reinisch, studying early sexual behavior generally, over half of all six- and seven-year-old boys have engaged in sex play with other boys, and more than a third of them with girls, while more than a third of six- and seven-year-old girls have engaged in such play with both other girls and with boys. This play includes playing doctor, mutual touching, and attempts at simulated, non-penetrative intercourse. Reinisch views such play as part of a normal progression from the sensual elements of bonding with parents, to masturbation, and then to sex play with others. By the age of eight or nine, according to Reinisch, children become aware that sexual arousal is a specific type of erotic sensation, and will seek these pleasurable experiences through various sights, self-touches, and fantasy, so that earlier generalized sex play shifts into more deliberate and intentional arousal. Abusive incestuous relationships between siblings can have adverse effects on the parties involved. Such abuse can leave victims hindered in developmental processes, such as those necessary for interpersonal relations, and can be the cause of depression, anxiety, and substance abuse in the victim's adult life. Definitions used have varied widely.
Child sexual abuse between siblings is defined by the (US) National Task Force on Juvenile Sexual Offending as: sexual acts initiated by one sibling toward another without the other's consent, by use of force or coercion, or where there is a power differential between the siblings. Sibling child sexual abuse has also been defined as "sexual behavior between siblings that is not age appropriate, not transitory, and not motivated by developmentally, mutually appropriate curiosity". When child sexual experimentation is carried out with siblings, some researchers do consider it incest, but those who use that term distinguish between abusive incest and non-abusive incest. Bank and Kahn say that abusive incest is power-oriented, sadistic, exploitative, and coercive, often including deliberate physical or mental abuse. Views of young sibling sexual contact may be affected by more general views regarding sexuality and minors: some researchers consider sexual contact to be abusive only under these circumstances: it occurs with a child less than 13 years old and the perpetrator is more than five years older than the victim; the child is between 13 and 16 years old and the perpetrator is ten years older than the victim; or coercion, force, or threat is used. Others say that behavior that is sexually abusive of children, generally speaking, depends upon the use of power, authority, bribery, or appeal to the child's trust or affection. De Jong offers four criteria to judge whether sexual behavior involving persons under 14 years old is abusive or not: an age difference of more than five years; use of force, threat, or authority; attempted penile penetration; and physical injury to the victim. According to De Jong, if one or more of these is present, the behavior is abusive, whereas if none is present, the behavior must be considered normal sexual experimentation. See also Siblings Day Sibling estrangement References Interpersonal relationships Kinship and descent Traditions involving siblings
Sibling relationship
Biology
4,980
2,955,276
https://en.wikipedia.org/wiki/Enol%20ether
In organic chemistry, an enol ether is an alkene with an alkoxy substituent. The general structure is R2C=CR-OR where R = H, alkyl or aryl. A common subfamily of enol ethers is the vinyl ethers, with the formula ROCH=CH2. Important enol ethers include the reagent 3,4-dihydropyran and the monomers methyl vinyl ether and ethyl vinyl ether. Reactions and uses Akin to enamines, enol ethers are electron-rich alkenes by virtue of the electron donation from the heteroatom via pi-bonding. Enol ethers have oxonium ion character. By virtue of their bonding situation, enol ethers display distinctive reactivity. In comparison with simple alkenes, enol ethers exhibit enhanced susceptibility to attack by electrophiles such as Brønsted acids. Similarly, they undergo inverse electron demand Diels-Alder reactions. The reactivity of enol ethers is highly dependent on the presence of substituents alpha to the oxygen. The vinyl ethers are susceptible to polymerization to give polyvinyl ethers. They also react readily with thiols in the thiol-ene reaction to form thioethers. This makes enol ether-functionalized monomers ideal for polymerization with thiol-based monomers to form thiol-ene networks. Some vinyl ethers find use as inhalation anesthetics. Enol ethers bearing α substituents do not polymerize readily. They are mainly of academic interest, e.g. as intermediates in the synthesis of more complex molecules. The acid-catalyzed addition of hydrogen peroxide to vinyl ethers gives the hydroperoxide: C2H5OCH=CH2 + H2O2 → C2H5OCH(OOH)CH3 Nazi Germany used vinyl ether mixtures as rocket propellants during WWII, because their hypergolic combustion with a mixture of nitric and sulfuric acids is relatively insensitive to temperature. Preparation Vinyl ethers can be prepared from alcohols by iridium-catalyzed transesterification of vinyl esters, especially the widely available vinyl acetate: ROH + CH2=CHOAc → ROCH=CH2 + HOAc Vinyl ethers can also be prepared by the reaction of acetylene and alcohols in the presence of a base. Although enol ethers can be considered the ethers of the corresponding enolates, they are not prepared by alkylation of enolates. Some enol ethers are prepared from saturated ethers by elimination reactions. Occurrence in nature A prominent enol ether is phosphoenolpyruvate. The enzyme chorismate mutase catalyzes the Claisen rearrangement of the enol ether called chorismate to prephenate, an intermediate in the biosynthesis of phenylalanine and tyrosine. Batyl alcohol and related glyceryl ethers are susceptible to dehydrogenation, catalyzed by desaturases, to give the vinyl ethers called plasmalogens. See also Silyl enol ether References Functional groups
Enol ether
Chemistry
674
48,934,337
https://en.wikipedia.org/wiki/Empty%20weight
The empty weight of a vehicle is its weight without any payload (cargo, passengers, usable fuel, etc.). Aviation Many different empty weight definitions exist. Here are some of the more common ones used. GAMA standardization In 1975 (or 1976 per FAA-H-8083-1B) the General Aviation Manufacturers Association (GAMA) standardized the definition of empty weight terms for Pilot Operating Handbooks as follows: Standard Empty Weight includes the empty weight of the airplane, full hydraulic fluid, unusable fuel, and full oil. Optional Equipment includes all equipment installed beyond standard. Non-GAMA usage Previously, for aircraft certified under CAR Part 3, the following definitions were commonly used: Empty Weight includes the empty weight of the airplane, undrainable oil, and full hydraulic fluid. Note that the weight of oil must be added to the Licensed Empty Weight for it to be equivalent to the Basic Empty Weight (see the sketch at the end of this entry). Ground transportation In the United States, bridge weight limits for trucks and other heavy vehicles may be expressed in terms of gross vehicle weight or empty weight. See also Zero Fuel Weight Maximum Takeoff Weight References Aircraft weight measurements Vehicle law Trucks
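The relationship between these definitions can be made concrete with a small sketch. This is illustrative only: the figures below are hypothetical, and the variable names are mine, not drawn from GAMA or the FAA.

# Hedged sketch of the empty-weight definitions above; weights in pounds.
airframe        = 1480.0   # empty weight of the airplane itself (hypothetical)
hydraulic_fluid = 2.0      # full hydraulic fluid (hypothetical)
unusable_fuel   = 18.0     # unusable fuel (GAMA definition only)
full_oil        = 15.0     # full oil (GAMA definition only)
undrainable_oil = 1.5      # undrainable oil (older CAR Part 3 definition)

# GAMA (1975/76) Standard Empty Weight:
standard_empty_weight = airframe + hydraulic_fluid + unusable_fuel + full_oil

# Older CAR Part 3 style (Licensed) Empty Weight:
licensed_empty_weight = airframe + undrainable_oil + hydraulic_fluid

# Per the note above, adding the weight of oil to the Licensed Empty
# Weight makes it comparable to the Basic Empty Weight:
basic_empty_weight = licensed_empty_weight + full_oil

print(standard_empty_weight, licensed_empty_weight, basic_empty_weight)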
Empty weight
Physics,Engineering
232
46,590,633
https://en.wikipedia.org/wiki/Ammonium%20hexachloroiridate%28IV%29
Ammonium hexachloroiridate(IV) is the inorganic compound with the formula (NH4)2[IrCl6]. This dark red solid is the ammonium salt of the iridium(IV) complex [IrCl6]2−. It is a commercially important iridium compound, one of the most common complexes of iridium(IV). A related but ill-defined compound is iridium tetrachloride, which has been used interchangeably with it. Structure and synthesis The compound has been characterized by X-ray crystallography. The salt crystallizes in a cubic motif like that of ammonium hexachloroplatinate. The [IrCl6]2− centers adopt octahedral molecular geometry. The compound is prepared in the laboratory by the addition of ammonium chloride to an aqueous solution of sodium hexachloroiridate. The salt is poorly soluble, like most other diammonium hexachlorometallates. Uses It is an intermediate in the isolation of iridium from its ores. Most other metals form insoluble sulfides when aqueous solutions of their chlorides are treated with hydrogen sulfide, but [IrCl6]2− resists ligand substitution. Upon heating under hydrogen, the solid salt converts to the metal: (NH4)2[IrCl6] + 2 H2 → Ir + 2 NH4Cl + 4 HCl Bonding The electronic structure of ammonium hexachloroiridate(IV) has attracted much attention. Its magnetic moment is less than that calculated for one unpaired electron (a worked value of this benchmark is given at the end of this entry). This result is attributed to antiferromagnetic coupling between Ir centers mediated by Cl---Cl interactions. Electron spin resonance studies reveal that more than half of the spin density resides on chloride; thus, the description of the complex as Ir(IV) is an oversimplification. References Iridium compounds Ammonium compounds Chloro complexes Chlorometallates
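For reference, the one-electron benchmark mentioned in the Bonding section is the textbook spin-only magnetic moment, a standard relation assumed here rather than given in this entry:

\mu_{\text{spin-only}} = \sqrt{n(n+2)}\,\mu_{\mathrm{B}}, \qquad n = 1 \;\Rightarrow\; \mu = \sqrt{3}\,\mu_{\mathrm{B}} \approx 1.73\,\mu_{\mathrm{B}}

The measured moment of (NH4)2[IrCl6] falls below this ≈1.73 μB value, consistent with the antiferromagnetic coupling and the chloride-delocalized spin density described above.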
Ammonium hexachloroiridate(IV)
Chemistry
382
53,637,314
https://en.wikipedia.org/wiki/Samsung%20DeX
Samsung DeX (stylized as SΛMSUNG DeX) is a feature included on some high-end Samsung handheld devices that enables users to extend their device into a desktop-like experience by connecting a keyboard, mouse, and monitor. The name "DeX" is a contraction of "Desktop eXperience". In terms of technical specifications, Samsung DeX requires the mobile device to support the USB 3.1 transfer specification and to have a USB-C port with DisplayPort Alternate Mode support. Samsung first included the DeX feature on the Galaxy S8, and has continued to support the feature on most of their high-end smartphones, including the Galaxy S, Note and Z Fold lines. The feature is also available on many Galaxy Tab S models, from the Galaxy Tab S4 onwards. History In 2017, the original version of DeX was released, which required the use of a proprietary docking accessory called the DeX Station. This provided a USB-C port, Ethernet, HDMI 2.0 output and two USB 2.0 ports. In August 2018, with the launch of the Note 9, Samsung introduced the DeX HDMI adapter (USB-C to female HDMI), DeX cable (USB-C to male HDMI) and DeX multiport adapter, which, whilst still proprietary and containing active electronics, eliminated the need for the previous docking accessories. Also in 2018, Samsung released the DeX Pad, which provided a USB-C port, HDMI, and two USB ports. This design enabled the cell phone to lie flat and function as a touchpad, or even continue being used as a phone in its usual fashion, whilst being connected to a display with DeX operating. Since 2019, with the Note 10 and Galaxy Fold, DeX can be launched via a direct cable connection to a physical computer using the existing provided charging cable or any similar off-the-shelf USB-C cable with data transfer, eliminating the need for any proprietary docking accessories. DeX has also been used in the public safety setting to replace in-vehicle laptops. Samsung also announced "Linux on Galaxy" (since renamed "Linux on DeX"), which allows the use of a compatible Linux distribution rather than the default Android OS, giving full personal-computer capabilities. The DeX desktop can also be accessed with a downloadable app for Windows and macOS or through third-party accessories. Users can connect to their mobile devices with a USB cable. As of April 2022, macOS and Windows 7 are no longer supported. Samsung DeX devices can be managed by Samsung Knox (3.3 and higher) to allow or restrict access using the Knox platform for added control and security. In October 2019, Samsung announced that Linux on DeX would not be available for Android 10, and warned users that after upgrading to Android 10 they would not be able to downgrade, permanently losing the ability to use full Linux applications. In 2020, wireless DeX was introduced, enabling the Note 9 and newer phones to use Miracast to project the desktop experience to a PC (previously connected via USB) or to a wireless monitor or TV. See also Webtop, a similar feature of the Motorola Atrix series from the early 2010s, which required specialized hardware. Ready For, a feature of some high-end Motorola phones that includes a desktop mode as well as TV, Video Chat and Game modes.
Easy Projection, a similar desktop mode found on the Huawei Mate 10, Mate 20 and Mate 30 phones. Screen+, a similar desktop environment mode found on the LG Velvet and V60 phones. Continuum, a similar feature announced by Microsoft in 2015 for Windows 10 Mobile. Docking station Lapdock References External links Support page Docking stations Mobile/desktop convergence
Samsung DeX
Technology
772
12,832
https://en.wikipedia.org/wiki/G%20protein-coupled%20receptor
G protein-coupled receptors (GPCRs), also known as seven-(pass)-transmembrane domain receptors, 7TM receptors, heptahelical receptors, serpentine receptors, and G protein-linked receptors (GPLR), form a large group of evolutionarily related proteins that are cell surface receptors that detect molecules outside the cell and activate cellular responses. They are coupled with G proteins. They pass through the cell membrane seven times, forming six loops of amino acid residues (three extracellular loops interacting with ligand molecules, three intracellular loops interacting with G proteins, plus an N-terminal extracellular region and a C-terminal intracellular region), which is why they are sometimes referred to as seven-transmembrane receptors. Ligands can bind either to the extracellular N-terminus and loops (e.g. glutamate receptors) or to the binding site within the transmembrane helices (rhodopsin-like family). They are all activated by agonists, although spontaneous auto-activation of an empty receptor has also been observed. G protein-coupled receptors are found only in eukaryotes, including yeast and choanoflagellates. The ligands that bind and activate these receptors include light-sensitive compounds, odors, pheromones, hormones, and neurotransmitters, and vary in size from small molecules to peptides to large proteins. G protein-coupled receptors are involved in many diseases. There are two principal signal transduction pathways involving the G protein-coupled receptors: the cAMP signal pathway and the phosphatidylinositol signal pathway. When a ligand binds to the GPCR it causes a conformational change in the GPCR, which allows it to act as a guanine nucleotide exchange factor (GEF). The GPCR can then activate an associated G protein by exchanging the GDP bound to the G protein for a GTP. The G protein's α subunit, together with the bound GTP, can then dissociate from the β and γ subunits to further affect intracellular signaling proteins or target functional proteins directly, depending on the α subunit type (Gαs, Gαi/o, Gαq/11, Gα12/13). GPCRs are an important drug target, and approximately 34% of all Food and Drug Administration (FDA) approved drugs target 108 members of this family. The global sales volume for these drugs is estimated to be 180 billion US dollars. It is estimated that GPCRs are targets for about 50% of drugs currently on the market, mainly due to their involvement in signaling pathways related to many diseases, i.e. mental; metabolic, including endocrinological disorders; immunological, including viral infections; cardiovascular; inflammatory; sensory disorders; and cancer. The long-established association between GPCRs and many endogenous and exogenous substances, resulting in e.g. analgesia, is another dynamically developing field of pharmaceutical research. History and significance With the determination in 2011 of the first structure of the complex between a G protein-coupled receptor (GPCR) and a G-protein trimer (Gαβγ), a new chapter of GPCR research was opened for structural investigations of global switches involving more than one protein. The previous breakthroughs involved determination of the crystal structure of the first GPCR, rhodopsin, in 2000 and the crystal structure of the first GPCR with a diffusible ligand (β2AR) in 2007.
The way in which the seven transmembrane helices of a GPCR are arranged into a bundle was suspected based on the low-resolution model of frog rhodopsin from cryogenic electron microscopy studies of two-dimensional crystals. The crystal structure of rhodopsin, which appeared three years later, was not a surprise apart from the presence of an additional cytoplasmic helix, H8, and the precise location of a loop covering the retinal binding site. However, it provided a scaffold that was hoped to be a universal template for homology modeling and drug design for other GPCRs – a notion that proved to be too optimistic. Results seven years later were surprising, because the crystallization of the β2-adrenergic receptor (β2AR) with a diffusible ligand revealed a shape of the receptor's extracellular side quite different from that of rhodopsin. This area is important because it is responsible for ligand binding and is targeted by many drugs. Moreover, the ligand binding site was much more spacious than in the rhodopsin structure and was open to the exterior. In the other receptors crystallized shortly afterwards, the binding site was even more easily accessible to the ligand. New structures complemented with biochemical investigations uncovered mechanisms of action of molecular switches which modulate the structure of the receptor, leading to activation states for agonists or to complete or partial inactivation states for inverse agonists. The 2012 Nobel Prize in Chemistry was awarded to Brian Kobilka and Robert Lefkowitz for their work that was "crucial for understanding how G protein-coupled receptors function". There have been at least seven other Nobel Prizes awarded for some aspect of G protein–mediated signaling. As of 2012, two of the top ten global best-selling drugs (Advair Diskus and Abilify) act by targeting G protein-coupled receptors. Classification The exact size of the GPCR superfamily is unknown, but at least 831 different human genes (or about 4% of the entire protein-coding genome) have been predicted to code for them from genome sequence analysis. Although numerous classification schemes have been proposed, the superfamily was classically divided into three main classes (A, B, and C) with no detectable shared sequence homology between classes. The largest class by far is class A, which accounts for nearly 85% of the GPCR genes. Of the class A GPCRs, over half are predicted to encode olfactory receptors, while the remaining receptors are liganded by known endogenous compounds or are classified as orphan receptors. Despite the lack of sequence homology between classes, all GPCRs have a common structure and mechanism of signal transduction. The very large rhodopsin A group has been further subdivided into 19 subgroups (A1-A19). According to the classical A-F system, GPCRs can be grouped into six classes based on sequence homology and functional similarity: Class A (or 1) (Rhodopsin-like) Class B (or 2) (Secretin receptor family) Class C (or 3) (Metabotropic glutamate/pheromone) Class D (or 4) (Fungal mating pheromone receptors) Class E (or 5) (Cyclic AMP receptors) Class F (or 6) (Frizzled/Smoothened) More recently, an alternative classification system called GRAFS (Glutamate, Rhodopsin, Adhesion, Frizzled/Taste2, Secretin) has been proposed for vertebrate GPCRs; these families correspond to the classical classes C, A, B2, F, and B, respectively.
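That GRAFS-to-classical correspondence can be restated as a simple lookup table; a minimal sketch, using exactly the names given above:

# GRAFS family -> classical A-F class, per the correspondence stated above.
GRAFS_TO_CLASSICAL = {
    "Glutamate": "C",
    "Rhodopsin": "A",
    "Adhesion": "B2",
    "Frizzled/Taste2": "F",
    "Secretin": "B",
}

# The Rhodopsin family maps to class A, by far the largest class
# (nearly 85% of GPCR genes, as noted above).
assert GRAFS_TO_CLASSICAL["Rhodopsin"] == "A"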
An early study based on available DNA sequence suggested that the human genome encodes roughly 750 G protein-coupled receptors, about 350 of which detect hormones, growth factors, and other endogenous ligands. Approximately 150 of the GPCRs found in the human genome have unknown functions. Some web servers and bioinformatics prediction methods have been used for predicting the classification of GPCRs according to their amino acid sequence alone, by means of the pseudo amino acid composition approach. Physiological roles GPCRs are involved in a wide variety of physiological processes. Some examples of their physiological roles include: The visual sense: The opsins use a photoisomerization reaction to translate electromagnetic radiation into cellular signals. Rhodopsin, for example, uses the conversion of 11-cis-retinal to all-trans-retinal for this purpose. The gustatory sense (taste): GPCRs in taste cells mediate release of gustducin in response to bitter-, umami- and sweet-tasting substances. The sense of smell: Receptors of the olfactory epithelium bind odorants (olfactory receptors) and pheromones (vomeronasal receptors). Behavioral and mood regulation: Receptors in the mammalian brain bind several different neurotransmitters, including serotonin, dopamine, histamine, GABA, and glutamate. Regulation of immune system activity and inflammation: chemokine receptors bind ligands that mediate intercellular communication between cells of the immune system; receptors such as histamine receptors bind inflammatory mediators and engage target cell types in the inflammatory response. GPCRs are also involved in immune modulation, e.g. regulating interleukin induction or suppressing TLR-induced immune responses from T cells. Autonomic nervous system transmission: Both the sympathetic and parasympathetic nervous systems are regulated by GPCR pathways, responsible for control of many automatic functions of the body such as blood pressure, heart rate, and digestive processes. Cell density sensing: A novel GPCR role in regulating cell density sensing. Homeostasis modulation (e.g., water balance). Involvement in the growth and metastasis of some types of tumors. Use in the endocrine system for peptide and amino-acid derivative hormones that bind to GPCRs on the cell membrane of a target cell; this stimulates production of cAMP, which in turn activates several kinases, allowing for a cellular response such as transcription. Receptor structure GPCRs are integral membrane proteins that possess seven membrane-spanning domains or transmembrane helices. The extracellular parts of the receptor can be glycosylated. These extracellular loops also contain two highly conserved cysteine residues that form disulfide bonds to stabilize the receptor structure. Some seven-transmembrane helix proteins (such as channelrhodopsin) that resemble GPCRs may contain ion channels within the same protein. In 2000, the first crystal structure of a mammalian GPCR, that of bovine rhodopsin, was solved. In 2007, the first structure of a human GPCR was solved. This human β2-adrenergic receptor structure proved highly similar to that of bovine rhodopsin. The structures of activated or agonist-bound GPCRs have also been determined. These structures indicate how ligand binding at the extracellular side of a receptor leads to conformational changes in the cytoplasmic side of the receptor. The biggest change is an outward movement of the cytoplasmic part of the 5th and 6th transmembrane helices (TM5 and TM6).
The structure of the activated β2-adrenergic receptor in complex with Gs confirmed that the Gα subunit binds to a cavity created by this movement. GPCRs exhibit a similar structure to some other proteins with seven transmembrane domains, such as microbial rhodopsins and adiponectin receptors 1 and 2 (ADIPOR1 and ADIPOR2). However, these 7TMH (7-transmembrane helices) receptors and channels do not associate with G proteins. In addition, ADIPOR1 and ADIPOR2 are oriented oppositely to GPCRs in the membrane (i.e. GPCRs usually have an extracellular N-terminus and a cytoplasmic C-terminus, whereas ADIPORs are inverted). Structure–function relationships In terms of structure, GPCRs are characterized by an extracellular N-terminus, followed by seven transmembrane (7-TM) α-helices (TM-1 to TM-7) connected by three intracellular (IL-1 to IL-3) and three extracellular loops (EL-1 to EL-3), and finally an intracellular C-terminus. The GPCR arranges itself into a tertiary structure resembling a barrel, with the seven transmembrane helices forming a cavity within the plasma membrane that serves as a ligand-binding domain that is often covered by EL-2. Ligands may also bind elsewhere, however, as is the case for bulkier ligands (e.g., proteins or large peptides), which instead interact with the extracellular loops, or, as illustrated by the class C metabotropic glutamate receptors (mGluRs), the N-terminal tail. The class C GPCRs are distinguished by their large N-terminal tail, which also contains a ligand-binding domain. Upon glutamate binding to an mGluR, the N-terminal tail undergoes a conformational change that leads to its interaction with the residues of the extracellular loops and TM domains. The eventual effect of all three types of agonist-induced activation is a change in the relative orientations of the TM helices (likened to a twisting motion) leading to a wider intracellular surface and "revelation" of residues of the intracellular helices and TM domains crucial to signal transduction function (i.e., G-protein coupling). Inverse agonists and antagonists may also bind to a number of different sites, but the eventual effect must be prevention of this TM helix reorientation. The structure of the N- and C-terminal tails of GPCRs may also serve important functions beyond ligand binding. For example, the C-terminus of M3 muscarinic receptors is sufficient, and the six-amino-acid polybasic (KKKRRK) domain in the C-terminus is necessary, for its preassembly with Gq proteins. In particular, the C-terminus often contains serine (Ser) or threonine (Thr) residues that, when phosphorylated, increase the affinity of the intracellular surface for the binding of scaffolding proteins called β-arrestins (β-arr). Once bound, β-arrestins both sterically prevent G-protein coupling and may recruit other proteins, leading to the creation of signaling complexes involved in extracellular-signal regulated kinase (ERK) pathway activation or receptor endocytosis (internalization). As the phosphorylation of these Ser and Thr residues often occurs as a result of GPCR activation, the β-arr-mediated G-protein decoupling and internalization of GPCRs are important mechanisms of desensitization. In addition, internalized "mega-complexes" consisting of a single GPCR, β-arr (in the tail conformation), and heterotrimeric G protein exist and may account for protein signaling from endosomes. A final common structural theme among GPCRs is palmitoylation of one or more sites of the C-terminal tail or the intracellular loops.
Palmitoylation is the covalent modification of cysteine (Cys) residues via addition of hydrophobic acyl groups, and has the effect of targeting the receptor to cholesterol- and sphingolipid-rich microdomains of the plasma membrane called lipid rafts. As many of the downstream transducer and effector molecules of GPCRs (including those involved in negative feedback pathways) are also targeted to lipid rafts, this has the effect of facilitating rapid receptor signaling. GPCRs respond to extracellular signals mediated by a huge diversity of agonists, ranging from proteins to biogenic amines to protons, but all transduce this signal via a mechanism of G-protein coupling. This is made possible by a guanine-nucleotide exchange factor (GEF) domain primarily formed by a combination of IL-2 and IL-3 along with adjacent residues of the associated TM helices. Mechanism The G protein-coupled receptor is activated by an external signal in the form of a ligand or other signal mediator. This creates a conformational change in the receptor, causing activation of a G protein. The further effect depends on the type of G protein. G proteins are subsequently inactivated by GTPase-activating proteins, known as RGS proteins. Ligand binding GPCRs include one or more receptors for the following ligands: sensory signal mediators (e.g., light and olfactory stimulatory molecules); adenosine, bombesin, bradykinin, endothelin, γ-aminobutyric acid (GABA), hepatocyte growth factor (HGF), melanocortins, neuropeptide Y, opioid peptides, opsins, somatostatin, GH, tachykinins, members of the vasoactive intestinal peptide family, and vasopressin; biogenic amines (e.g., dopamine, epinephrine, norepinephrine, histamine, serotonin, and melatonin); glutamate (metabotropic effect); glucagon; acetylcholine (muscarinic effect); chemokines; lipid mediators of inflammation (e.g., prostaglandins, prostanoids, platelet-activating factor, and leukotrienes); peptide hormones (e.g., calcitonin, C5a anaphylatoxin, follicle-stimulating hormone [FSH], gonadotropin-releasing hormone [GnRH], neurokinin, thyrotropin-releasing hormone [TRH], and oxytocin); and endocannabinoids. GPCRs that act as receptors for stimuli that have not yet been identified are known as orphan receptors. However, in contrast to other types of receptors that have been studied, wherein ligands bind externally to the membrane, the ligands of GPCRs typically bind within the transmembrane domain. Protease-activated receptors, by exception, are activated by cleavage of part of their extracellular domain. Conformational change The transduction of the signal through the membrane by the receptor is not completely understood. It is known that in the inactive state, the GPCR is bound to a heterotrimeric G protein complex. Binding of an agonist to the GPCR results in a conformational change in the receptor that is transmitted to the bound Gα subunit of the heterotrimeric G protein via protein domain dynamics. The activated Gα subunit exchanges GDP for GTP, which in turn triggers the dissociation of the Gα subunit from the Gβγ dimer and from the receptor. The dissociated Gα and Gβγ subunits interact with other intracellular proteins to continue the signal transduction cascade, while the freed GPCR is able to rebind to another heterotrimeric G protein to form a new complex that is ready to initiate another round of signal transduction. It is believed that a receptor molecule exists in a conformational equilibrium between active and inactive biophysical states.
The binding of ligands to the receptor may shift the equilibrium toward the active receptor states. Three types of ligands exist: Agonists are ligands that shift the equilibrium in favour of active states; inverse agonists are ligands that shift the equilibrium in favour of inactive states; and neutral antagonists are ligands that do not affect the equilibrium. It is not yet known how exactly the active and inactive states differ from each other. G-protein activation/deactivation cycle When the receptor is inactive, the GEF domain may be bound to an also inactive α-subunit of a heterotrimeric G-protein. These "G-proteins" are a trimer of α, β, and γ subunits (known as Gα, Gβ, and Gγ, respectively) that is rendered inactive when reversibly bound to guanosine diphosphate (GDP) (or, alternatively, no guanine nucleotide) but active when bound to guanosine triphosphate (GTP). Upon receptor activation, the GEF domain, in turn, allosterically activates the G-protein by facilitating the exchange of a molecule of GDP for GTP at the G-protein's α-subunit. The cell maintains a 10:1 ratio of cytosolic GTP to GDP, so exchange for GTP is ensured. At this point, the subunits of the G-protein dissociate from the receptor, as well as from each other, to yield a Gα-GTP monomer and a tightly interacting Gβγ dimer, which are now free to modulate the activity of other intracellular proteins. The extent to which they may diffuse, however, is limited due to the palmitoylation of Gα and the presence of an isoprenoid moiety that has been covalently added to the C-termini of Gγ. Because Gα also has slow GTP→GDP hydrolysis capability, the inactive form of the α-subunit (Gα-GDP) is eventually regenerated, thus allowing reassociation with a Gβγ dimer to form the "resting" G-protein, which can again bind to a GPCR and await activation. The rate of GTP hydrolysis is often accelerated due to the actions of another family of allosteric modulating proteins called regulators of G-protein signaling, or RGS proteins, which are a type of GTPase-activating protein, or GAP. In fact, many of the primary effector proteins (e.g., adenylate cyclases) that become activated/inactivated upon interaction with Gα-GTP also have GAP activity. Thus, even at this early stage in the process, GPCR-initiated signaling has the capacity for self-termination; a schematic sketch of this cycle is given below. Crosstalk Downstream signals of GPCRs have been shown to interact with integrin signals, such as those involving FAK. Integrin signaling phosphorylates FAK, which can then decrease GPCR Gαs activity. Signaling If a receptor in an active state encounters a G protein, it may activate it. Some evidence suggests that receptors and G proteins are actually pre-coupled. For example, binding of G proteins to receptors affects the receptor's affinity for ligands. Activated G proteins are bound to GTP. Further signal transduction depends on the type of G protein. The enzyme adenylate cyclase is an example of a cellular protein that can be regulated by a G protein, in this case the G protein Gs. Adenylate cyclase activity is activated when it binds to a subunit of the activated G protein. Activation of adenylate cyclase ends when the G protein returns to the GDP-bound state. Adenylate cyclases (of which nine membrane-bound and one cytosolic form are known in humans) may also be activated or inhibited in other ways (e.g., Ca2+/calmodulin binding), which can modify the activity of these enzymes in an additive or synergistic fashion along with the G proteins.
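The activation/deactivation cycle described above can be summarized in a short schematic sketch. This is illustrative only: the class and method names are mine, and the rate constants are simply the figures quoted later in this article; it is not a biochemical simulation.

# Schematic sketch of the heterotrimeric G-protein cycle described above.
class GProtein:
    def __init__(self):
        self.nucleotide = "GDP"   # the resting trimer is GDP-bound
        self.dissociated = False  # Galpha and Gbetagamma still together

    def receptor_gef_exchange(self):
        # An agonist-bound GPCR acts as a GEF: GDP out, GTP in, after
        # which Galpha-GTP and the Gbetagamma dimer dissociate and are
        # free to modulate other intracellular proteins.
        self.nucleotide = "GTP"
        self.dissociated = True

    def hydrolyze(self, rgs_bound=False):
        # Intrinsic GTP -> GDP hydrolysis is slow; an RGS protein (a GAP)
        # accelerates it greatly (rates per the figures quoted later).
        rate_per_s = 30.0 if rgs_bound else 0.02
        self.nucleotide = "GDP"
        self.dissociated = False  # trimer reassembles, awaiting a receptor
        return rate_per_s

g = GProtein()
g.receptor_gef_exchange()    # receptor activation
g.hydrolyze(rgs_bound=True)  # RGS-accelerated termination; cycle resets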
The signaling pathways activated through a GPCR are limited by the primary sequence and tertiary structure of the GPCR itself but ultimately determined by the particular conformation stabilized by a particular ligand, as well as the availability of transducer molecules. Currently, GPCRs are considered to utilize two primary types of transducers: G-proteins and β-arrestins. Because β-arrs have high affinity only for the phosphorylated form of most GPCRs (see below), the majority of signaling is ultimately dependent upon G-protein activation. However, the possibility for interaction does allow for G-protein-independent signaling to occur. G-protein-dependent signaling There are three main G-protein-mediated signaling pathways, mediated by four sub-classes of G-proteins distinguished from each other by sequence homology (Gαs, Gαi/o, Gαq/11, and Gα12/13). Each sub-class of G-protein consists of multiple proteins, each the product of multiple genes or splice variations that may imbue them with differences ranging from subtle to distinct with regard to signaling properties, but in general they appear reasonably grouped into four classes. Because the signal transducing properties of the various possible βγ combinations do not appear to radically differ from one another, these classes are defined according to the isoform of their α-subunit. While most GPCRs are capable of activating more than one Gα-subtype, they also show a preference for one subtype over another. When the subtype activated depends on the ligand that is bound to the GPCR, this is called functional selectivity (also known as agonist-directed trafficking, or conformation-specific agonism). However, the binding of any single particular agonist may also initiate activation of multiple different G-proteins, as it may be capable of stabilizing more than one conformation of the GPCR's GEF domain, even over the course of a single interaction. In addition, a conformation that preferentially activates one isoform of Gα may activate another if the preferred isoform is less available. Furthermore, feedback pathways may result in receptor modifications (e.g., phosphorylation) that alter the G-protein preference. Regardless of these various nuances, the GPCR's preferred coupling partner is usually defined according to the G-protein most obviously activated by the endogenous ligand under most physiological or experimental conditions. Gα signaling The effector of both the Gαs and Gαi/o pathways is the cyclic adenosine monophosphate (cAMP)-generating enzyme adenylate cyclase, or AC. While there are ten different AC gene products in mammals, each with subtle differences in tissue distribution or function, all catalyze the conversion of cytosolic adenosine triphosphate (ATP) to cAMP, and all are directly stimulated by G-proteins of the Gαs class. In contrast, however, interaction with Gα subunits of the Gαi/o type inhibits AC from generating cAMP. Thus, a GPCR coupled to Gαs counteracts the actions of a GPCR coupled to Gαi/o, and vice versa. The level of cytosolic cAMP may then determine the activity of various ion channels as well as members of the ser/thr-specific protein kinase A (PKA) family. Thus cAMP is considered a second messenger and PKA a secondary effector. The effector of the Gαq/11 pathway is phospholipase C-β (PLCβ), which catalyzes the cleavage of membrane-bound phosphatidylinositol 4,5-bisphosphate (PIP2) into the second messengers inositol (1,4,5)-trisphosphate (IP3) and diacylglycerol (DAG).
IP3 acts on IP3 receptors found in the membrane of the endoplasmic reticulum (ER) to elicit Ca2+ release from the ER, while DAG diffuses along the plasma membrane where it may activate any membrane-localized forms of a second ser/thr kinase called protein kinase C (PKC). Since many isoforms of PKC are also activated by increases in intracellular Ca2+, both these pathways can also converge on each other to signal through the same secondary effector. Elevated intracellular Ca2+ also binds and allosterically activates proteins called calmodulins, which in turn regulate further downstream targets. The effectors of the Gα12/13 pathway are Rho guanine-nucleotide exchange factors (RhoGEFs) which, when bound to Gα12/13, allosterically activate the cytosolic small GTPase, Rho. Once bound to GTP, Rho can then go on to activate various proteins responsible for cytoskeleton regulation such as Rho-kinase (ROCK). Most GPCRs that couple to Gα12/13 also couple to other sub-classes, often Gαq/11. Gβγ signaling The above descriptions ignore the effects of Gβγ signalling, which can also be important, in particular in the case of activated Gαi/o-coupled GPCRs. The primary effectors of Gβγ are various ion channels, such as G-protein-regulated inwardly rectifying K+ channels (GIRKs), P/Q- and N-type voltage-gated Ca2+ channels, as well as some isoforms of AC and PLC, along with some phosphoinositide-3-kinase (PI3K) isoforms. G-protein-independent signaling Although they are classically thought of as working only together, GPCRs may signal through G-protein-independent mechanisms, and heterotrimeric G-proteins may play functional roles independent of GPCRs. GPCRs may signal independently through many proteins already mentioned for their roles in G-protein-dependent signaling such as β-arrs, GRKs, and Srcs. Such signaling has been shown to be physiologically relevant; for example, β-arrestin signaling mediated by the chemokine receptor CXCR3 was necessary for full-efficacy chemotaxis of activated T cells. In addition, further scaffolding proteins involved in subcellular localization of GPCRs (e.g., PDZ-domain-containing proteins) may also act as signal transducers. Most often the effector is a member of the MAPK family. Examples In the late 1990s, evidence began accumulating to suggest that some GPCRs are able to signal without G proteins. The ERK2 mitogen-activated protein kinase, a key signal transduction mediator downstream of receptor activation in many pathways, has been shown to be activated in response to cAMP-mediated receptor activation in the slime mold D. discoideum despite the absence of the associated G protein α- and β-subunits. In mammalian cells, the much-studied β2-adrenoceptor has been demonstrated to activate the ERK2 pathway after arrestin-mediated uncoupling of G-protein-mediated signaling. Therefore, it seems likely that some mechanisms previously believed to relate purely to receptor desensitisation are actually examples of receptors switching their signaling pathway, rather than simply being switched off. In kidney cells, the bradykinin receptor B2 has been shown to interact directly with a protein tyrosine phosphatase. The presence of a tyrosine-phosphorylated ITIM (immunoreceptor tyrosine-based inhibitory motif) sequence in the B2 receptor is necessary to mediate this interaction and subsequently the antiproliferative effect of bradykinin. GPCR-independent signaling by heterotrimeric G-proteins Although it is a relatively immature area of research, it appears that heterotrimeric G-proteins may also take part in non-GPCR signaling.
There is evidence for roles as signal transducers in nearly all other types of receptor-mediated signaling, including integrins, receptor tyrosine kinases (RTKs), cytokine receptors (JAK/STATs), as well as modulation of various other "accessory" proteins such as GEFs, guanine-nucleotide dissociation inhibitors (GDIs) and protein phosphatases. There may even be specific proteins of these classes whose primary function is as part of GPCR-independent pathways, termed activators of G-protein signalling (AGS). Both the ubiquity of these interactions and the importance of Gα vs. Gβγ subunits to these processes are still unclear. Details of cAMP and PIP2 pathways There are two principal signal transduction pathways involving the G protein-linked receptors: the cAMP signal pathway and the phosphatidylinositol signal pathway. cAMP signal pathway The cAMP signal transduction pathway involves five main components: the stimulative hormone receptor (Rs) or inhibitory hormone receptor (Ri); the stimulative regulative G-protein (Gs) or inhibitory regulative G-protein (Gi); adenylyl cyclase; protein kinase A (PKA); and cAMP phosphodiesterase. The stimulative hormone receptor (Rs) is a receptor that can bind with stimulative signal molecules, while the inhibitory hormone receptor (Ri) is a receptor that can bind with inhibitory signal molecules. The stimulative regulative G-protein is a G-protein linked to the stimulative hormone receptor (Rs), and its α subunit, upon activation, can stimulate the activity of an enzyme or other intracellular metabolism. On the contrary, the inhibitory regulative G-protein is linked to an inhibitory hormone receptor, and its α subunit, upon activation, can inhibit the activity of an enzyme or other intracellular metabolism. Adenylyl cyclase is a 12-transmembrane glycoprotein that catalyzes the conversion of ATP to cAMP with the help of the cofactor Mg2+ or Mn2+. The cAMP produced is a second messenger in cellular metabolism and is an allosteric activator of protein kinase A. Protein kinase A is an important enzyme in cell metabolism due to its ability to regulate cell metabolism by phosphorylating specific committed enzymes in the metabolic pathway. It can also regulate specific gene expression, cellular secretion, and membrane permeability. The enzyme contains two catalytic subunits and two regulatory subunits. When there is no cAMP, the complex is inactive. When cAMP binds to the regulatory subunits, their conformation is altered, causing the dissociation of the regulatory subunits, which activates protein kinase A and allows further biological effects. These signals can then be terminated by cAMP phosphodiesterase, an enzyme that degrades cAMP to 5'-AMP and inactivates protein kinase A. Phosphatidylinositol signal pathway In the phosphatidylinositol signal pathway, the extracellular signal molecule binds with a Gq-coupled receptor on the cell surface and activates phospholipase C, which is located on the plasma membrane. The lipase hydrolyzes phosphatidylinositol 4,5-bisphosphate (PIP2) into two second messengers: inositol 1,4,5-trisphosphate (IP3) and diacylglycerol (DAG). IP3 binds with the IP3 receptor in the membrane of the smooth endoplasmic reticulum and mitochondria to open Ca2+ channels. DAG helps activate protein kinase C (PKC), which phosphorylates many other proteins, changing their catalytic activities, leading to cellular responses.
The effects of Ca2+ are also remarkable: it cooperates with DAG in activating PKC and can activate the CaM kinase pathway, in which the calcium-modulated protein calmodulin (CaM) binds Ca2+, undergoes a change in conformation, and activates CaM kinase II. CaM kinase II has the unique ability to increase its binding affinity for CaM by autophosphorylation, making CaM unavailable for the activation of other enzymes. The kinase then phosphorylates target enzymes, regulating their activities. The two signal pathways are connected by Ca2+-CaM, which is also a regulatory subunit of adenylyl cyclase and phosphodiesterase in the cAMP signal pathway. Receptor regulation GPCRs become desensitized when exposed to their ligand for a long period of time. There are two recognized forms of desensitization: 1) homologous desensitization, in which the activated GPCR is downregulated; and 2) heterologous desensitization, wherein the activated GPCR causes downregulation of a different GPCR. The key reaction of this downregulation is the phosphorylation of the intracellular (or cytoplasmic) receptor domain by protein kinases. Phosphorylation by cAMP-dependent protein kinases Cyclic AMP-dependent protein kinases (protein kinase A) are activated by the signal chain coming from the G protein (that was activated by the receptor) via adenylate cyclase and cyclic AMP (cAMP). In a feedback mechanism, these activated kinases phosphorylate the receptor. The longer the receptor remains active, the more kinases are activated and the more receptors are phosphorylated. In β2-adrenoceptors, this phosphorylation results in the switching of the coupling from the Gs class of G-protein to the Gi class. cAMP-dependent PKA-mediated phosphorylation can cause heterologous desensitisation in receptors other than those activated. Phosphorylation by GRKs The G-protein-coupled receptor kinases (GRKs) are key modulators of GPCR signaling: a family of seven mammalian serine-threonine protein kinases that phosphorylate only agonist-bound (active) receptors. GRK-mediated receptor phosphorylation rapidly initiates profound impairment of receptor signaling and desensitization. The activity and subcellular targeting of GRKs are tightly regulated by interaction with receptor domains, G protein subunits, lipids, anchoring proteins and calcium-sensitive proteins. Phosphorylation of the receptor can have two consequences: Translocation: The receptor is, along with the part of the membrane it is embedded in, brought to the inside of the cell, where it is dephosphorylated within the acidic vesicular environment and then brought back. This mechanism is used to regulate long-term exposure, for example, to a hormone, by allowing resensitisation to follow desensitisation. Alternatively, the receptor may undergo lysosomal degradation, or remain internalised, where it is thought to participate in the initiation of signalling events, the nature of which depends on the internalised vesicle's subcellular localisation. Arrestin linking: The phosphorylated receptor can be linked to arrestin molecules that prevent it from binding (and activating) G proteins, in effect switching it off for a short period of time. This mechanism is used, for example, with rhodopsin in retina cells to compensate for exposure to bright light. In many cases, arrestin's binding to the receptor is a prerequisite for translocation. 
For example, beta-arrestin bound to β2-adrenoreceptors acts as an adaptor for binding with clathrin and with the beta-subunit of AP2 (clathrin adaptor molecules); thus, the arrestin here acts as a scaffold assembling the components needed for clathrin-mediated endocytosis of β2-adrenoreceptors. Mechanisms of GPCR signal termination As mentioned above, G-proteins may terminate their own activation due to their intrinsic GTP→GDP hydrolysis capability. However, this reaction proceeds at a slow rate (≈0.02 times/sec), so it would take around 50 seconds for any single G-protein to deactivate if other factors did not come into play. Indeed, there are around 30 isoforms of RGS proteins that, when bound to Gα through their GAP domain, accelerate the hydrolysis rate to ≈30 times/sec. This 1500-fold increase in rate allows the cell to respond to external signals with high speed, as well as spatial resolution, due to the limited amount of second messenger that can be generated and the limited distance a G-protein can diffuse in 0.03 seconds. For the most part, the RGS proteins are promiscuous in their ability to deactivate G-proteins, and which RGS is involved in a given signaling pathway seems more determined by the tissue and GPCR involved than anything else. In addition, RGS proteins have the additional function of increasing the rate of GTP-GDP exchange at GPCRs (i.e., as a sort of co-GEF), further contributing to the time resolution of GPCR signaling. In addition, the GPCR may be desensitized itself. This can occur as: a direct result of ligand occupation, wherein the change in conformation allows recruitment of G-protein-coupled receptor kinases (GRKs), which go on to phosphorylate various serine/threonine residues of the third intracellular loop (IL-3) and the C-terminal tail. Upon GRK phosphorylation, the GPCR's affinity for β-arrestin (β-arrestin-1/2 in most tissues) is increased, at which point β-arrestin may bind and act both to sterically hinder G-protein coupling and to initiate the process of receptor internalization through clathrin-mediated endocytosis. Because only the liganded receptor is desensitized by this mechanism, it is called homologous desensitization. Alternatively, the affinity for β-arrestin may be increased in a ligand-occupation- and GRK-independent manner through phosphorylation of different ser/thr sites (but also of IL-3 and the C-terminal tail) by PKC and PKA. These phosphorylations are often sufficient to impair G-protein coupling on their own as well. PKC/PKA may, instead, phosphorylate GRKs, which can also lead to GPCR phosphorylation and β-arrestin binding in an occupation-independent manner. These latter two mechanisms allow for desensitization of one GPCR due to the activities of others, or heterologous desensitization. GRKs may also have GAP domains and so may contribute to inactivation through non-kinase mechanisms as well. A combination of these mechanisms may also occur. Once β-arrestin is bound to a GPCR, it undergoes a conformational change allowing it to serve as a scaffolding protein for an adaptor complex termed AP-2, which in turn recruits another protein called clathrin. If enough receptors in the local area recruit clathrin in this manner, they aggregate and the membrane buds inwardly as a result of interactions between the molecules of clathrin, forming a clathrin-coated pit. Once the pit has been pinched off the plasma membrane due to the actions of two other proteins called amphiphysin and dynamin, it is now an endocytic vesicle. 
At this point, the adapter molecules and clathrin have dissociated, and the receptor is either trafficked back to the plasma membrane or targeted to lysosomes for degradation. At any point in this process, the β-arrestins may also recruit other proteins, such as the non-receptor tyrosine kinase (nRTK) c-SRC, which may activate ERK1/2 or other mitogen-activated protein kinase (MAPK) signaling through, for example, phosphorylation of the small GTPase Ras, or recruit the proteins of the ERK cascade directly (i.e., Raf-1, MEK, ERK-1/2), at which point signaling is initiated due to their close proximity to one another. Another target of c-SRC is the dynamin molecules involved in endocytosis. Dynamins polymerize around the neck of an incoming vesicle, and their phosphorylation by c-SRC provides the energy necessary for the conformational change allowing the final "pinching off" from the membrane. GPCR cellular regulation Receptor desensitization is mediated through a combination of phosphorylation, β-arr binding, and endocytosis as described above. Downregulation occurs when an endocytosed receptor is embedded in an endosome that is trafficked to merge with an organelle called a lysosome. Because lysosomal membranes are rich in proton pumps, their interiors have low pH (≈4.8 vs. the pH ≈7.2 cytosol), which acts to denature the GPCRs. In addition, lysosomes contain many degradative enzymes, including proteases, which can function only at such low pH, and so the peptide bonds joining the residues of the GPCR together may be cleaved. Whether a given receptor is trafficked to a lysosome, detained in endosomes, or trafficked back to the plasma membrane depends on a variety of factors, including receptor type and magnitude of the signal. GPCR regulation is additionally mediated by gene transcription factors. These factors can increase or decrease gene transcription and thus increase or decrease the generation of new receptors (up- or down-regulation) that travel to the cell membrane. Receptor oligomerization G-protein-coupled receptor oligomerisation is a widespread phenomenon. One of the best-studied examples is the metabotropic GABAB receptor. This so-called constitutive receptor is formed by heterodimerization of GABABR1 and GABABR2 subunits. Expression of the GABABR1 without the GABABR2 in heterologous systems leads to retention of the subunit in the endoplasmic reticulum. Expression of the GABABR2 subunit alone, meanwhile, leads to surface expression of the subunit, although with no functional activity (i.e., the receptor does not bind agonist and cannot initiate a response following exposure to agonist). Expression of the two subunits together leads to plasma membrane expression of a functional receptor. It has been shown that GABABR2 binding to GABABR1 causes masking of a retention signal of functional receptors. Origin and diversification of the superfamily Signal transduction mediated by the superfamily of GPCRs dates back to the origin of multicellularity. Mammalian-like GPCRs are found in fungi, and have been classified according to the GRAFS classification system based on GPCR fingerprints. Identification of the superfamily members across the eukaryotic domain, and comparison of the family-specific motifs, have shown that the superfamily of GPCRs has a common origin. Characteristic motifs indicate that three of the five GRAFS families, Rhodopsin, Adhesion, and Frizzled, evolved from the Dictyostelium discoideum cAMP receptors before the split of opisthokonts. 
Later, the Secretin family evolved from the Adhesion GPCR family before the split of nematodes. Insect GPCRs appear to be in their own group, and Taste2 is identified as descending from Rhodopsin. Note that the Secretin/Adhesion split is based on presumed function rather than signature, as the classical Class B (7tm_2) is used to identify both in the studies. See also G protein-coupled receptors database List of MeSH codes (D12.776) Metabotropic receptor Orphan receptor Pepducins, a class of drug candidates targeted at GPCRs Receptor activated solely by a synthetic ligand, a technique for control of cell signaling through synthetic GPCRs TOG superfamily References Further reading External links GPCR Cell Line; GPCR-HGmod, a database of 3D structural models of all human G-protein coupled receptors, built by the GPCR-I-TASSER pipeline Biochemistry Integral membrane proteins Molecular biology Protein families Signal transduction Protein superfamilies
G protein-coupled receptor
Chemistry,Biology
9,970
5,353,379
https://en.wikipedia.org/wiki/Sonchus%20asper
Sonchus asper, the prickly sow-thistle, rough milk thistle, spiny sowthistle, sharp-fringed sow thistle, or spiny-leaved sow thistle, is a widespread flowering plant in the tribe Cichorieae within the family Asteraceae. Description Sonchus asper is an annual or biennial herb with spiny leaves and yellow flowers resembling those of the dandelion. The leaves are bluish-green, simple, lanceolate, with wavy and sometimes lobed margins, covered in spines on both the margins and beneath. The base of the leaf surrounds the stem. The leaves and stems emit a milky sap when cut. One plant will produce several flat-topped arrays of flower heads, each head containing numerous yellow ray flowers but no disc flowers. Distribution Sonchus asper is native to Europe, North Africa, and western Asia. It has also become naturalized on other continents and is regarded as a noxious, invasive weed in many places. Its edible leaves make a palatable and nutritious leaf vegetable. It is found in cultivated soil, pastures, roadsides, edges of yards, vacant lots, construction sites, waste areas and in grasslands. References External links Spiny Sowthistle in Virginia Tech Weed Identification Guide WeedAlert.com's article on the Spiny Sowthistle photo of herbarium specimen at Missouri Botanical Garden, collected in Madagascar in 1932 asper Flora of Europe Flora of Western Asia Flora of North Africa Leaf vegetables Cosmopolitan species Flora of Syria
Sonchus asper
Biology
318
6,207,936
https://en.wikipedia.org/wiki/Villin-1
Villin-1 is a 92.5 kDa tissue-specific actin-binding protein associated with the actin core bundle of the brush border. Villin-1 is encoded by the VIL1 gene. Villin-1 contains multiple gelsolin-like domains capped by a small (8.5 kDa) "headpiece" at the C-terminus consisting of a fast and independently folding three-helix bundle that is stabilized by hydrophobic interactions. The headpiece domain is a commonly studied protein in molecular dynamics due to its small size, short primary sequence, and fast folding kinetics. Structure Villin-1 is made up of seven domains: six homologous domains make up the N-terminal core, and the remaining domain makes up the C-terminal cap. Villin contains three phosphatidylinositol 4,5-bisphosphate (PIP2) binding sites, one of which is located at the headpiece and the other two in the core. The core domain is approximately 150 amino acid residues grouped in six repeats. On this core is an 87-residue, hydrophobic, C-terminal headpiece. The headpiece (HP67) is made up of a compact, 70-amino-acid folded domain at the C-terminus. This headpiece contains an F-actin binding domain. Residues K38, E39, K65, residues 70-73 (KKEK), G74, L75 and F76 surround a hydrophobic core and are believed to be involved in the binding of F-actin to villin-1. Residues E39 and K70 form a salt bridge buried within the headpiece which serves to connect the N- and C-termini. This salt bridge may also orient and fix the C-terminal residues involved in F-actin binding, as in the absence of this salt bridge no binding occurs. A hydrophobic "cap" is formed by the side chain of residue W64, which is completely conserved throughout the villin family. Below this cap is a crown of alternating positively and negatively charged localities. Villin can undergo post-translational modifications such as tyrosine phosphorylation. Villin-1 has the ability to dimerize, and the dimerization site is located at the amino end of the protein. Expression Villin-1 is an actin-binding protein expressed mainly in the brush border of the epithelium in vertebrates, but it is sometimes ubiquitously expressed in protists and plants. Villin is found localized in the microvilli of the brush border of the epithelial lining of the gut and renal tubules in vertebrates. Function Villin-1 is believed to function in the bundling, nucleation, capping and severing of actin filaments. In vertebrates, villin proteins help to support the microfilaments of the microvilli of the brush border. However, knockout mice appear to show ultra-structurally normal microvilli, indicating that the function of villin is not definitively known; it may play a role in cell plasticity through F-actin severing. The six-repeat villin core is responsible for Ca2+-dependent actin severing, while the headpiece is responsible for actin crosslinking and bundling (Ca2+-independent). Villin is postulated to be the controlling protein for Ca2+-induced actin severing in the brush border. Ca2+ inhibits proteolytic cleavage of the domains of the six-repeat N-terminal core, which inhibits actin severing. In normal mice, raising Ca2+ levels induces the severing of actin by villin, whereas in villin knockout mice this activity does not occur in response to heightened Ca2+ levels. In the presence of low concentrations of Ca2+ the villin headpiece functions to bundle actin filaments, whereas in the presence of high Ca2+ concentrations the N-terminal core caps and severs these filaments. 
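The Ca2+-dependent switch between bundling and severing described above can be caricatured with a Hill-type activation curve. The dissociation constant and Hill coefficient below are hypothetical placeholders, not measured values; the sketch only illustrates the qualitative low-Ca2+/high-Ca2+ behavior.

```python
# Caricature of villin's Ca2+-dependent mode switch: headpiece-driven
# bundling dominates at low [Ca2+], core-driven severing at high [Ca2+].
# Kd and the Hill coefficient are illustrative, not measured values.

def severing_fraction(ca_um, kd_um=10.0, hill=2.0):
    """Hill-type fraction of villin in the Ca2+-activated severing mode."""
    return ca_um**hill / (kd_um**hill + ca_um**hill)

for ca in (0.1, 1.0, 10.0, 100.0):           # [Ca2+] in micromolar
    f = severing_fraction(ca)
    mode = "severing" if f > 0.5 else "bundling"
    print(f"[Ca2+] = {ca:6.1f} uM -> severing fraction {f:.2f} ({mode})")
```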
The association of PIP2 with villin inhibits the actin capping and severing action and increases actin binding at the headpiece region, possibly through structural changes in the protein. PIP2 increases actin bundling not only by decreasing the severing action of villin but also by dissociating capping proteins, releasing actin monomers from sequestering proteins, and stimulating actin nucleation and cross-linking. Villin subdomain The C-terminal subdomain of the villin headpiece (VHP67), denoted VHP35, is stabilised in part by a buried cluster of three phenylalanine residues. Its small size and high helical content are expected to promote rapid folding, and this has been confirmed experimentally. The villin-4 C-terminal construct VHP76 in Arabidopsis thaliana has been shown to exhibit higher affinity for F-actin at increasing concentrations of Ca2+, which further supports the proposed function of villin. Structure It has a simple topology consisting of three α-helices that form a well-packed hydrophobic core. Degradation and regulation Currently, it is theorized that plant villins are regulated by degradation involving auxin, which targets the headpiece domain (VHP). See also Supervillin References Further reading External links Proteins
Villin-1
Chemistry
1,099
46,549,320
https://en.wikipedia.org/wiki/C12orf40
C12orf40, also known as Chromosome 12 Open Reading Frame 40, HEL-206, and Epididymis Luminal Protein 206, is a protein that in humans is encoded by the C12orf40 gene. Gene Human gene In humans, the gene for C12orf40 is located on chromosome 12. There are 13 exons in the canonical isoform, which is transcribed into an mRNA of 2797 base pairs. Three other isoforms have been isolated. Evolution Homologs exist in species as distant as the green sea turtle and the chicken at approximately 60% sequence identity, suggesting that the gene may have arisen in the amniotes after their divergence from other tetrapods; the first 4 exons, however, are conserved with 36% identity as distantly as the anemone. Protein Properties The human C12orf40 protein is 652 amino acids in length. Its molecular weight is predicted to be 74.52 kDa, and its isoelectric point 7.822; such properties can be predicted directly from the amino-acid sequence, as sketched below. Amino acids 229-652 contain a domain of unknown function (DUF4552) which is conserved in vertebrates. C12orf40 is predicted to be a soluble protein with no transmembrane segments. Its secondary and tertiary structures are not currently known. Interactions Experimental evidence shows that C12orf40 physically interacts with dynein light chain 2 (DYNLL2). This protein is part of a complex that regulates the function of the motor protein dynein. Expression Within the cell, C12orf40 is predicted to be present in the nucleus based on signals within its sequence. An analysis of normal human tissues shows that C12orf40 expression occurs primarily in the testis, suggesting importance to the male reproductive system. Clinical significance The function of C12orf40 is not yet well understood. However, the three prime untranslated region (3' UTR) of C12orf40 is highly similar to the 3' UTR of the cystic fibrosis transmembrane conductance regulator (CFTR), which may mean that the two genes share certain expression patterns. In the fibroblasts of hypertrophic scars, exposure to the immunosuppressant Tacrolimus causes C12orf40 up-regulation. In pigs, a region homologous to human C12orf40 plays a role in arthrogryposis, a disease characterized by congenital fibrosis. The common thread of these studies suggests that C12orf40 may have a connection to the formation of healthy connective tissue. References Chromosomes Protein families
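As a hedged illustration of how sequence-derived properties like the molecular weight and isoelectric point quoted above are typically predicted, the sketch below uses Biopython's ProtParam module. The peptide shown is a short hypothetical placeholder, not the actual 652-residue C12orf40 sequence.

```python
# Predicting molecular weight and isoelectric point from a protein sequence
# with Biopython's ProtParam module. The sequence below is a short
# hypothetical placeholder, NOT the real C12orf40 sequence; substituting
# the full 652-residue sequence would reproduce figures like those above.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

placeholder_seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQ"

analysis = ProteinAnalysis(placeholder_seq)
print(f"length:            {len(placeholder_seq)} residues")
print(f"molecular weight:  {analysis.molecular_weight() / 1000:.2f} kDa")
print(f"isoelectric point: {analysis.isoelectric_point():.3f}")
```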
C12orf40
Biology
528
74,638,119
https://en.wikipedia.org/wiki/Tildipirosin
Tildipirosin, sold under the brand name Zuprevo, is a macrolide antibiotic used in pigs and cattle. Medical uses In the United States, tildipirosin is indicated for the treatment or control of bovine respiratory disease associated with Mannheimia haemolytica, Pasteurella multocida, and Histophilus somni in beef and non-lactating dairy cattle. In the European Union, tildipirosin is indicated for the treatment and metaphylaxis of swine respiratory disease associated with Actinobacillus pleuropneumoniae, P. multocida, Bordetella bronchiseptica, and Glaesserella parasuis sensitive to tildipirosin; and for the treatment and prevention of bovine respiratory disease associated with M. haemolytica, P. multocida, and H. somni sensitive to tildipirosin. References Veterinary drugs Piperidines Dimethylamino compounds Lactones Cyclic ketones Glucosides Amino sugars
Tildipirosin
Chemistry
232
383,038
https://en.wikipedia.org/wiki/Carpool
Carpooling is the sharing of car journeys so that more than one person travels in a car, removing the need for others to drive to a location themselves. Carpooling is considered a demand-responsive transport (DRT) service. By having more people use one vehicle, carpooling reduces each person's travel costs, such as fuel costs and tolls, and the stress of driving. Carpooling is also a more environmentally friendly and sustainable way to travel, as sharing journeys reduces air pollution, carbon emissions, traffic congestion on the roads, and the need for parking spaces. Authorities often encourage carpooling, especially during periods of high pollution or high fuel prices. Car sharing is a good way to use the full seating capacity of a car, which would otherwise remain unused if it were just the driver using the car. In 2009, carpooling represented 43.5% of all trips in the United States and 10% of commute trips. The majority of carpool commutes (over 60%) are "fam-pools" with family members. Carpool commuting is more popular for people who work in places with more jobs nearby, and who live in places with higher residential densities. Carpooling is significantly correlated with transport operating costs, including fuel prices and commute length, and with measures of social capital, such as time spent with others, time spent eating and drinking, and being unmarried. However, carpooling is significantly less common among people who spend more time at work, elderly people, and homeowners. Operation Drivers and passengers offer and search for journeys through one of the several media available. After finding a match, they contact each other to arrange any details for the journey(s). Costs, meeting points and other details, such as space for luggage, are agreed on. They then meet and carry out their shared car journey(s) as planned. Carpooling is commonly implemented for commuting but is increasingly popular for longer one-off journeys, with the formality and regularity of arrangements varying between schemes and journeys. Carpooling is not always arranged for the whole length of a journey. Especially on long journeys, it is common for passengers to join only for parts of the journey and give a contribution based on the distance that they travel. This gives carpooling extra flexibility and enables more people to share journeys and save money. Some carpooling is now organized in online marketplaces or ride-matching websites that allow drivers and passengers to find a travel match and/or make a secured transaction to share the planned travel cost. Like other online marketplaces, they use community-based trust mechanisms, such as user ratings, to create an optimal experience for users. Arrangements for carpooling can be made through many different media, including public websites, social media acting as marketplaces, employer websites, smartphone applications, carpooling agencies and pick-up points. Initiatives Many companies and local authorities have introduced programs to promote carpooling. In an effort to reduce traffic and encourage carpooling, some governments have introduced high-occupancy vehicle (HOV) lanes in which only vehicles with two or more passengers are allowed to drive. HOV lanes can create strong practical incentives for carpooling by reducing travel time and expense. In some countries, it is common to find parking spaces reserved for carpoolers. 
In 2011, an organization called Greenxc created a campaign to encourage others to use this form of transportation in order to reduce their carbon footprint. Carpooling, or car sharing as it is called in British English, is promoted by a national UK charity, Carplus, whose mission is to promote responsible car use in order to alleviate the financial, environmental and social costs of motoring today, and to encourage new approaches to car dependency in the UK. Carplus is supported by Transport for London as part of British government initiatives to reduce congestion and parking pressure, relieve the burden on the environment, and reduce traffic-related air pollution in London. However, not all countries are helping carpooling to spread: in Hungary it is a tax crime to carry someone in a car for a cost share (or any payment) unless the driver has a taxi license, an invoice is issued, and taxes are paid. Several people were fined during a 2011 crackdown by undercover tax officers posing as passengers looking for a ride on carpooling websites. On 19 March 2012, Endre Spaller, a member of the Hungarian Parliament, interpellated Zoltán Cséfalvay, Secretary of State for the National Economy, about this practice; Cséfalvay replied that carpooling should be endorsed rather than punished, though care must be taken to prevent some people from turning it into a way to gain untaxed profit. Cost sharing Carpooling usually means dividing the travel expenses equally between all the occupants of the vehicle (driver and passengers). The driver does not try to earn money, but to share with several people the cost of a trip he or she would make anyway. The expenses to be divided basically include the fuel and possible tolls, but if the depreciation of the vehicle purchase, maintenance, insurance and taxes paid by the driver are included in the calculation, the cost comes to around $1/mile. There are platforms that facilitate carpooling by connecting people seeking passengers with people seeking drivers. Usually there is a fare set by the car driver and accepted by passengers, since they reach an agreement before the trip starts. The second generation of these platforms is designed to manage urban trips in real time, using travelers' smartphones. They make it possible to fill the vehicle's empty seats on the fly, collecting and delivering passengers along its entire route (and not only at common points of origin and destination). This system automatically performs an equitable sharing of travel costs, allowing each passenger to reimburse the driver a fair share according to the benefit actually gained from the vehicle usage, proportional to the distance traveled by the passenger and the number of people that shared the car (a minimal sketch of such a proportional split appears at the end of this article). History Carpooling first became prominent in the United States as a rationing tactic during World War II. Ridesharing began during World War II through "car clubs" or "car-sharing clubs". The US Office of Civilian Defense asked neighborhood councils to encourage four workers to share a ride in one car to conserve rubber for the war effort. It also created a ride-sharing program called the Car Sharing Club Exchange and Self-Dispatching System. Carpooling returned in the mid-1970s due to the 1973 oil crisis and the 1979 energy crisis. At that time the first employee vanpools were organized at Chrysler and 3M. Carpooling declined precipitously between the 1970s and the 2000s, peaking in the US in 1970 with a commute mode share of 20.4%. By 2011 it was down to 9.7%. 
In large part this decline has been attributed to the dramatic fall in gas prices (45%) during the 1980s. In the 1990s carpooling was popular among college students, where campuses have limited parking space. Prof. James Davidson from Harvard, together with Dace Campbell, Ivan Lin and Habib Rached from Washington, and others, began to investigate the feasibility of further development, although the comprehensive technologies were not yet commercially available at the time. Their work is considered by many to be a forerunner of the carpooling and ridesharing systems technology used by Garrett Camp, Travis Kalanick, Oscar Salazar and Conrad Whelan at Uber. The character of carpool travel has been shifting from the "Dagwood Bumstead" variety, in which each rider is picked up in sequence, to a "park and ride" variety, in which all the travelers meet at a common location. More recently, however, the Internet has facilitated growth in carpooling, and the commute mode share grew to 10.7% in 2005. In 2007, with the advent of commercially available smartphones and GPS, John Zimmer and Logan Green, from Cornell University and the University of California, Santa Barbara respectively, created a carpooling system called Zimride, a precursor to Lyft. The popularity of the Internet and smartphones has greatly helped carpooling to expand, enabling people to offer and find rides through easy-to-use and reliable online transport marketplaces. These websites are commonly used for one-off long-distance journeys with high fuel costs. In Europe, long-distance carpooling has become increasingly popular over the past years, thanks to BlaBlaCar. According to its website, BlaBlaCar counted more than 80 million users across Europe and beyond. Uber and Lyft suspended carpooling services in the U.S. and Canada in efforts to control the COVID-19 pandemic via social distancing. Other forms Carpooling exists in other forms: Slugging is a form of ad hoc, informal carpooling between strangers. No money changes hands, but a mutual benefit still exists between the driver and passenger(s), making the practice worthwhile. Flexible carpooling expands the idea of ad hoc carpooling by designating formal locations for travelers to join carpools. Ridesharing companies allow people to arrange ad hoc rides on very short notice through the use of smartphone applications or the internet. Passengers are simply picked up at their current location. Challenges Flexibility - Carpooling can struggle to be flexible enough to accommodate en route stops or changes to working times/patterns. One survey identified this as the most common reason for not carpooling. To counter this, some schemes offer 'sweeper services' with later running options, or a 'guaranteed ride home' arrangement with a local taxi company. Reliability - If a carpooling network lacks a "critical mass" of participants, it may be difficult to find a match for certain trips. The parties may not necessarily follow through on the agreed-upon ride. Several internet carpooling marketplaces are addressing this concern by implementing online paid passenger reservation, billed even if passengers do not turn up. Riding with strangers - Concerns over security have been an obstacle to sharing a vehicle with strangers, though in reality the risk of crime is small. 
One remedy used by internet carpooling schemes is reputation systems that flag problematic users and allow responsible users to build up trust capital; such systems greatly increase the value of the website for the user community. Overall efficacy - Though carpooling is officially sanctioned by most governments, including through the construction of lanes specifically allocated for carpooling, some doubts remain as to the overall efficacy of carpool lanes. As an example, many carpool lanes, or lanes restricted to carpools during peak traffic hours, are seldom occupied by carpools in the traditional sense. Instead, these lanes are often empty, leading to an overall net increase in fuel consumption as freeway capacity is effectively contracted, forcing the solo-occupied cars to travel slower with reduced fuel efficiency. In 2012, the Queensland government announced it would end carpool lanes (known as Transit Lanes), claiming they were creating congestion and delays. The move was supported by the RACQ motoring group. No carpooling service lets drivers declare in advance the time range during which they provide services. Although the majority of carpooling services use a mobile application, this is not the case for interurban carpooling services (e.g., Ridejoy and Autostrade carpooling). In addition, no carpooling service was found to guarantee a minimum delay for drivers or a single dropoff/pickup point. Some carpooling platforms (e.g., TwoGo and BlaBlaLines, operated by BlaBlaCar) use intelligent technology to analyze rides from all users to find the best fit for each user. This technology even factors in real-time traffic data to calculate precise routes and arrival times. In popular culture In the 1970s, the US Department of Transportation released a humorous, animated public service announcement to promote carpooling entitled "Kalaka." In the commercial, an interviewer is shown talking to Noah, "the original share-the-ride-with-a-friend man." Noah explains that carpooling is an economical way to get where you're going, but back in his time it was known as "kalaka." Cabbing All the Way is a book written by author Jatin Kuberkar that narrates the success story of a carpool with twelve people on board. Based in the city of Hyderabad, India, the book is a real-life narration and highlights the potential benefits of having a carpool. The 2017 smartphone game Crazy Taxi Tycoon (formerly titled Crazy Taxi Gazillionaire) portrays ride-sharing as a threat to the taxi business, as it becomes a powerful megacorporation that rips off those whom it serves. The player is tasked with hiring taxi drivers to establish a taxi service that offers a more legitimate, friendly and reliable transport experience. Carpool Karaoke is best known as a recurring segment from The Late Late Show with James Corden. See also Flight sharing Fuel tax Hitchhiking Public transportation Ridesharing company Rota (schedule) Shared transport Slugging Sustainable transportation Traditional car sharing and peer-to-peer carsharing Vanpool When You Ride Alone You Ride with bin Laden References External links Car culture Hitchhiking Sustainable transport Commuting
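As a hedged illustration of the segment-based cost sharing described in the Cost sharing section above, the sketch below splits each leg's cost evenly among everyone in the car on that leg, so a passenger pays in proportion to the distance they actually rode and how full the car was. The route, costs, and passenger names are hypothetical.

```python
# Sketch of segment-based carpool cost sharing: each leg's cost is split
# evenly among the occupants of that leg (driver included), so totals are
# proportional to distance ridden and occupancy. All figures are made up.

def fair_shares(legs):
    """legs: list of (cost, riders) tuples; riders includes the driver."""
    shares = {}
    for cost, riders in legs:
        per_head = cost / len(riders)
        for person in riders:
            shares[person] = shares.get(person, 0.0) + per_head
    return shares

# Driver D covers the whole trip; P1 rides legs 1-2, P2 joins for leg 2 only.
legs = [
    (6.0, ["D", "P1"]),        # leg 1: 6.00 in fuel/tolls, 2 occupants
    (9.0, ["D", "P1", "P2"]),  # leg 2: 9.00 in fuel/tolls, 3 occupants
    (3.0, ["D"]),              # leg 3: driver alone
]
for person, share in sorted(fair_shares(legs).items()):
    print(f"{person}: {share:.2f}")
# D: 9.00, P1: 6.00, P2: 3.00 -- the shares sum to the 18.00 trip cost
```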
Carpool
Physics
2,747
22,090,355
https://en.wikipedia.org/wiki/Bovista%20nigrescens
Bovista nigrescens, commonly referred to as the brown puffball or black bovist, is an edible cream-white or brown puffball. Phylogenetic relationships between Bovista nigrescens and species of Lycoperdaceae have been established based on ITS and LSU sequence data from north European taxa. Description The roughly spherical fruit body of Bovista nigrescens is slightly pointed at the bottom. Although it lacks a sterile base, the fruit body is attached to the substrate by a single mycelial cord which often breaks, leaving the fruit body free to roll about in the wind. The outer wall is white at first, but flakes off in large scales at maturity to expose the dark purple-brown to blackish inner wall that encloses the spore mass. The spores escape via an apical pore, which forms through extensive splitting and cracking of the wall. The gleba is often dark purple-brown. The capillitium is highly branched with brown dendroid elements. Spores are brown and ovoid, with a diameter of 4.5–6 μm. They are thick-walled and nearly smooth, with a central oil droplet and a long, warted pedicel. Habitat and distribution Bovista nigrescens puffballs are often found in grass and pastureland. Although they are found most abundantly in late summer to autumn, they persist in an old, dried condition for many months. They are uncommon in most areas, but frequent in North and West Europe. They are edible when young. In addition, they are found on the ground in fields, lawns or on roadsides. Uses The young specimens can be halved and cooked. References Agaricaceae Fungi of Europe Edible fungi Puffballs Fungi described in 1794 Fungus species
Bovista nigrescens
Biology
386
2,026,112
https://en.wikipedia.org/wiki/Construction%20management
Construction management (CM) aims to control the quality of a construction project's scope, time, and cost (sometimes referred to as the project management triangle or "triple constraints") to maximize the project owner's satisfaction. It uses project management techniques and software to oversee the planning, design, construction and closeout of a construction project safely, on time, on budget and within specifications. Practitioners of construction management are called construction managers. They have knowledge and experience in the fields of business management and building science. Professional construction managers may be hired for large-scale, high-budget undertakings (commercial real estate, transportation infrastructure, industrial facilities, and military infrastructure), called capital projects. Construction managers use their knowledge of project delivery methods to deliver the project optimally. The role of the contractor Contractors are assigned to a construction project during the design phase or once the design has been completed by a licensed architect or a licensed civil engineer. This is done through a bidding process with different contractors. As dictated by the project delivery method, the contractor is selected by using one of three common selection methods: low-bid selection, best-value selection, or qualifications-based selection. A construction manager is hired for the following deliverables: means and methods, communications with the authority having jurisdiction, time management, document control, cost controls and management, quality controls, decision making, mathematics, shop drawings, record drawings and human resources. In the US, the Construction Management Association of America (CMAA) states that the most common responsibilities of a construction manager fall into the following seven categories: Project Management Planning, Cost Management, Time Management, Quality Management, Contract Administration, Safety Management, and CM Professional Practice. CM professional practice includes specific activities such as defining the responsibilities and management structure of the project management team, organizing and leading by implementing project controls, defining roles and responsibilities, developing communication protocols, and identifying elements of project design and construction likely to give rise to disputes and claims. Function The functions of construction management typically include the following: Specifying project objectives and plans, including delineation of scope, budgeting, scheduling, setting performance requirements, and selecting project participants. Maximizing resource efficiency through procurement of labor, materials and equipment. Implementing various operations through proper coordination and control of planning, design, estimating, contracting and construction in the entire process. Developing effective communications and mechanisms for resolving conflicts. Obtaining the project Bids A bid is given to the owner by construction managers that are willing to complete their construction project. A bid tells the owner how much money they should expect to pay the construction management company in order for them to complete the project. Open bid: An open bid is used for public projects. Any and all contractors are allowed to submit their bid due to public advertising. Closed bid: A closed bid is used for private projects. A selection of contractors are sent an invitation for bid, so only they can submit a bid for the specified project. 
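As a toy illustration of how an owner might compare the bids just described (here, simple low-bid selection with a qualification screen, formalized under the selection methods below), consider the following sketch; the contractor names and dollar amounts are hypothetical.

```python
# Toy bid evaluation: screen out unqualified contractors, then pick the
# lowest remaining bid (low-bid selection). All names/figures are made up.

bids = [
    {"contractor": "Acme Builders", "amount": 4_200_000, "qualified": True},
    {"contractor": "BuildCo",       "amount": 3_900_000, "qualified": False},
    {"contractor": "Stonebridge",   "amount": 4_050_000, "qualified": True},
]

qualified = [b for b in bids if b["qualified"]]
winner = min(qualified, key=lambda b: b["amount"])
print(f"Award to {winner['contractor']} at ${winner['amount']:,}")
# -> Award to Stonebridge at $4,050,000
```

A best-value variant would instead score each qualified bidder on both price and qualifications before choosing, as described next.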
Selection methods Low-bid selection: This selection focuses on the price of a project. Multiple construction management companies submit to the owner a bid that is the lowest amount for which they are willing to do the job. The owner usually chooses the company with the lowest bid to complete the job for them. Best-value selection: This selection focuses on both the price and the qualifications of the contractors submitting bids. This means that the owner chooses the contractor with the best price and the best qualifications. The owner decides by using a request for proposal (RFP), which provides the owner with the contractor's exact form of scheduling and budgeting that the contractor expects to use for the project. Qualifications-based selection: This selection is used when the owner decides to choose the contractor only on the basis of their qualifications. The owner then uses a request for qualifications (RFQ), which provides the owner with the contractor's experience, management plans, project organization, and budget and schedule performance. The owner may also ask for safety records and individual credentials of their members. This method is most often used when the contractor is hired early during the design process so that the contractor can provide input and cost estimates as the design develops. Payment contracts Lump sum: This is the most common type of contract. The construction manager and the owner agree on the overall cost of the construction project, and the owner is responsible for paying that amount whether the construction project exceeds or falls below the agreed price. Cost plus fee: This contract provides payment for the contractor, including the total cost of the project as well as a fixed fee or percentage of the total cost. This contract is beneficial to the contractor, since any additional costs will be paid for, even though they were unexpected for the owner. Guaranteed maximum price: This contract is the same as the cost-plus-fee contract, except that there is a set price that the overall cost and fee may not exceed. Unit price: This contract is used when the cost cannot be determined ahead of time. The owner provides materials with a specific unit price to limit spending. Project stages The stages of a typical construction project have been defined as feasibility, design, construction and operation, each stage relating to the project life cycle. Feasibility and design Feasibility and design involve four steps: programming and feasibility, schematic design, design development, and contract documents. It is the responsibility of the design team to ensure that the design meets all building codes and regulations. It is during the design stage that the bidding process takes place. Conceptual/programming and feasibility: The needs, goals, and objectives must be determined for the building. Decisions must be made on the building size, number of rooms, how the space will be used, and who will be using the space. This must all be considered to begin the actual designing of the building. This phase is normally a written list of each room or space, the critical information about those spaces, and the approximate square footage of each area. Schematic design: Schematic designs are sketches used to identify spaces, shapes, and patterns. Materials, sizes, colors, and textures must be considered in the sketches. This phase usually involves developing the floor plan, elevations, a site plan, and possibly a few details. 
Design development (DD): This step requires research and investigation into what materials and equipment will be used, as well as their cost. During this phase, the drawings are refined with information from structural, plumbing, mechanical, and electrical engineers. It also involves a more rigorous evaluation of how the applicable building codes will impact the project. Contract documents (CDs): Contract documents are the final drawings and specifications of the construction project. They are used by contractors to determine their bid, while builders use them for the construction process. Contract documents can also be called working drawings. Pre-construction The pre-construction stage begins when the owner gives a notice to proceed to the contractor that they have chosen through the bidding process. A notice to proceed is when the owner gives permission to the contractor to begin their work on the project. The first step is to assign the project team, which includes the project manager (PM), contract administrator, superintendent, and field engineer. Project manager: The project manager is in charge of the project team. Contract administrator: The contract administrator assists the project manager as well as the superintendent with the details of the construction contract. Superintendent: It is the superintendent's job to make sure everything is on schedule, including the flow of materials, deliveries, and equipment. They are also in charge of coordinating on-site construction activities. Field engineer: A field engineer is considered an entry-level position and is responsible for paperwork. During the pre-construction stage, a site investigation must take place. A site investigation determines what steps need to be taken to get the site ready before the actual construction begins. This also includes any unforeseen conditions, such as historical artifacts or environmental problems. A soil test must be done to determine if the soil is in good condition to be built upon. Procurement The procurement stage is when the labor, materials and equipment needed to complete the project are purchased. This can be done by the general contractor if the company does all its own construction work. If the contractor does not do its own work, it obtains it through subcontractors. Subcontractors are contractors who specialize in one particular aspect of the construction work, such as concrete, welding, glass, or carpentry. Subcontractors are hired the same way a general contractor would be, which is through the bidding process. Purchase orders are also part of the procurement stage. Purchase orders: A purchase order is used in various types of businesses. In this case, a purchase order is an agreement between a buyer and seller that the products purchased meet the required specifications for the agreed price. Construction The construction stage begins with a pre-construction meeting brought together by the superintendent (on an American project). The pre-construction meeting is meant to make decisions dealing with work hours, material storage, quality control, and site access. The next step is to move everything onto the construction site and set it all up. A contractor progress payment schedule is a schedule of when (according to project milestones or specified dates) contractors and suppliers will be paid for the current progress of installed work. 
Progress payments or interim payments are partial payments for work completed during a portion of a construction period, usually a month. Progress payments are made to general contractors, subcontractors, and suppliers as construction projects progress. Payments are typically made on a monthly basis but can be modified to meet certain milestones. Progress payments are an important part of contract administration for the contractor. Proper preparation of the information necessary for payment processing can help the contractor complete the project on a sound financial footing. Owner occupancy Once the owner moves into the building, a warranty period begins. This is to ensure that all materials, equipment, and quality meet the expectations of the owner that are included within the contract. Issues resulting from construction Dust and mud When construction vehicles are driving around a site or moving earth, a lot of dust is created, especially during the drier months. This may cause disruption for surrounding businesses or homes. A popular method of dust control is to have a water truck drive through the site spraying water on the dry dirt to minimize the movement of dust within and out of the construction site. When water is introduced, mud is created. This mud sticks to the tires of the construction vehicles and is often tracked out onto the surrounding roads. A street sweeper may clean the roads to reduce dirty road conditions. Environmental protections Storm water pollution: As a result of construction, soil is displaced from its original location, which can possibly cause environmental problems in the future. Runoff can occur during storms, which can transfer harmful pollutants through the soil to rivers, lakes, wetlands, and coastal waters. Endangered species: If endangered species have been found on the construction site, the site must be shut down for as long as it takes the authorities to make a decision on the situation. Once the situation has been assessed, the contractor makes the appropriate accommodations to avoid disturbing the species. Vegetation: There may be particular trees or other vegetation that must be protected on the job site. This may require fences or security tape to warn builders that they must not be harmed. Wetlands: The contractor must make accommodations so that erosion and water flow are not affected by construction. Any liquid spills must be contained, due to contaminants that may enter the wetland. Historical or cultural artifacts: Artifacts may include arrowheads, pottery shards, and bones. All work comes to a halt if any artifacts are found and will not resume until they can be properly examined and removed from the area. Construction activity documentation Project meetings take place at scheduled intervals to discuss the progress on the construction site and any concerns or issues. The discussion and any decisions made at the meeting must be documented. Diaries, logs, and daily field reports keep track of the activities on a job site each day. Diaries: Each member of the project team is expected to keep a project diary. The diary contains summaries of the day's events in the member's own words. They are used to keep track of any daily work activity, conversations, observations, or any other relevant information regarding the construction activities. Diaries can be referred to when disputes arise and a diary happens to contain information connected with the disagreement. 
Diaries that are handwritten can be used as evidence in court. Logs: Logs keep track of the regular activities on the job site, such as phone logs, transmittal logs, delivery logs, and request for information (RFI) logs. Daily field reports: Daily field reports are a more formal way of recording information on the job site. They contain information that includes the day's activities, temperature and weather conditions, delivered equipment or materials, visitors on the site, images of the job site, and equipment used that day. Labor statements are required on a daily basis, and labor lists and PERT/CPM schedules are needed for labor planning to complete a project on time. Resolving disputes Mediation: Mediation uses a third-party mediator to resolve any disputes. The mediator helps both disputing parties to come to a mutual agreement. This cost-saving process ensures that no attorneys become involved in the dispute and is less time-consuming. Minitrial: A minitrial takes more time and money than a mediation. The minitrial takes place in an informal setting and involves some type of advisor or attorney that must be paid. The disputing parties may come to an agreement, or the third-party advisor may offer their advice. The agreement is nonbinding and can be broken. Arbitration: Arbitration is the most costly and time-consuming way to resolve a dispute short of litigation. Each party is represented by an attorney while witnesses and evidence are presented. Once all information is provided on the issue, the arbitrator makes a ruling on what must be done, and that ruling is binding on the disputing parties. Study and practice Construction management education comes in a variety of formats: formal degree programs (two-year associate degree; four-year baccalaureate degree; master's degree; project management, operations management or engineer degree; doctor of philosophy degree; postdoctoral research); on-the-job training; and continuing education and professional development. Information on degree programs is available from ABET, the American Council for Construction Education (ACCE), the American Academy of Project Management (AAPM), the Construction Management Association of America (CMAA) or the Associated Schools of Construction (ASC). According to the American Council for Construction Education (one of the academic accreditation agencies responsible for accrediting construction management programs in the U.S.), the academic field of construction management encompasses a wide range of topics. These range from general management skills, through management skills specifically related to construction, to technical knowledge of construction methods and practices. There are many schools offering construction management programs, including some offering a master's degree. Software Capital project management software (CPMS) refers to the systems that are currently available to help capital project owner/operators, program managers, and construction managers control and manage the vast amount of information that capital construction projects create. A collection, or portfolio, of projects only makes this a bigger challenge. These systems go by different names: capital project management software, computer construction software, construction management software, project management information systems. 
Construction management software can usually be regarded as a subset of CPMS, since the scope of CPMS is not limited to the construction phases of a project. Required knowledge Construction and building Technology Public safety Customer service Human resources Mathematics Leadership Project delivery methods Design, bid, build contracts The phrase "design, bid, build" describes the prevailing model of construction management, in which the general contractor is engaged through a tender process after designs have been completed by the architect or engineer. Design-build contracts Many owners, particularly government agencies, let out contracts known as design-build contracts. In this type of contract, the construction team (known as the design-builder) is responsible for taking the owner's concept and completing a detailed design before (following the owner's approval of the design) proceeding with construction. Virtual design and construction technology may be used by contractors to maintain a tight construction schedule. There are three main advantages to a design-build contract. First, the construction team is motivated to work with the architect to develop a practical design. The team can find creative ways to reduce construction costs without reducing the function of the final product. The second major advantage involves the schedule. Many projects are commissioned within a tight time frame. Under a traditional contract, construction cannot begin until after the design is finished and the project has been awarded to a bidder. In a design-build contract the contractor is established at the outset, and construction activities can proceed concurrently with the design. The third major advantage is that the design-build contractor has an incentive to keep the combined design and construction costs within the owner's budget. If speed is important, design and construction contracts can be awarded separately; bidding takes place on preliminary plans in a not-to-exceed contract instead of a single firm design-build contract. The major problem with design-build contracts is an inherent conflict of interest. In a standard contract the architect works for the owner and is directly responsible to the owner. In a design-build teaming agreement, the architect works for the design-builder, not the owner; therefore the design-builder may make design and construction decisions that benefit the design-builder but not the owner. During construction, the architect normally acts as the owner's representative. This includes reviewing the builder's work and ensuring that the products and methods meet specifications and codes. The architect's role is compromised when the architect works for the design-builder and not for the owner directly. Thus, the owner may get a building that is over-designed to increase profits for the design-builder, or a building built with lesser-quality products to maximize profits. However, incentive clauses can be written into the contract to mitigate these issues. Turnkey contracts (project management as PDM) A project delivery method in which the construction company takes full responsibility for a project. Construction management as PDM The construction industry typically includes three parties: an owner, a licensed designer (architect or engineer) and a builder (usually known as a general contractor). There are traditionally two contracts between these parties as they work together to plan, design and construct the project. 
The first contract is the owner-designer contract, which involves planning, design, and construction contract administration. The second contract is the owner-contractor contract, which involves construction. An indirect third-party relationship exists between the designer and the contractor, due to these two contracts. An owner may also contract with a construction project management company as an adviser, creating a third contract relationship in the project. The construction manager's role is to provide construction advice to the designer, design advice to the constructor on the owner's behalf, and other advice as necessary. The construction project manager is sometimes referred to as an "Owner's Representative." The CM's role is to represent the interests of the Owner throughout the various phases of a project, beginning as early as feasibility studies and conceptual planning of the project. Construction Managers help to inform good decision making on behalf of the owner through planning, design, permitting, construction contract procurement, and during construction. A primary role of the CM is to ensure the terms of the Construction Contract are fulfilled by the Contractor. A CM can be an individual or company focused on providing construction management services. A CM typically does not hold the contracts of the project design firms or construction firms but assists or leads the effort on behalf of the Owner to procure those services and ensure successful execution of those contracts' terms. A CM firm is typically hired as a personal or professional service on the basis of qualifications rather than through a competitive bid. Costs are based on a guaranteed maximum price or a fixed price, substantiated by a level-of-effort or staffing plan that identifies the hours to be provided per service task and the individual billable hourly rates of proposed project-assigned staff. Agency CM Construction cost management is a fee-based service in which the construction manager (CM) is responsible exclusively to the owner, acting in the owner's interests at every stage of the project. The construction manager offers impartial advice on matters such as: Optimum use of available funds Control of the scope of the work Project scheduling Optimum use of design and construction firms' skills and talents Avoidance of delays, changes and disputes Enhancing project design and construction quality Optimum flexibility in contracting and procurement Cash-flow management Comprehensive management of every stage of the project, beginning with the original concept and project definition, yields the greatest benefit to owners. As time progresses beyond the pre-design phase, the CM's ability to effect cost savings diminishes. The agency CM can represent the owner by helping select the design and construction teams and managing the design (preventing scope creep), helping the owner stay within a predetermined budget with value engineering, cost-benefit analysis and best-value comparisons. The software-application field of construction collaboration technology has been developed to apply information technology to construction management. CM at-risk (CMaR) CM at-risk is a delivery method which entails a commitment by the construction manager to deliver the project within a Guaranteed Maximum Price (GMP). The construction manager acts as a consultant to the owner in the development and design phases (preconstruction services), and as a general contractor during construction.
When a construction manager is bound to a GMP, the fundamental character of the relationship is changed. In addition to acting in the owner's interest, the construction manager must control construction costs to stay within the GMP. CM at-risk is a global term referring to the business relationship of a construction contractor, owner and architect (or designer). Typically, a CM at-risk arrangement eliminates a "low-bid" construction project. A GMP agreement is a typical part of the CM-and-owner agreement (comparable to a "low-bid" contract), but with adjustments in responsibility for the CM. The advantage of a CM at-risk arrangement is budget management. Before a project's design is completed (six to eighteen months of coordination between designer and owner), the CM is involved with estimating the cost of constructing a project based on the goals of the designer and owner (design concept) and the project's scope. In balancing the costs, schedule, quality and scope of the project, the design may be modified instead of redesigned; if the owner decides to expand the project, adjustments can be made before pricing. To manage the budget before design is complete and construction crews mobilized, the CM conducts site management and purchases major items to efficiently manage time and cost. Advantages The CM works "at risk" and therefore has an incentive to act in the owner's interest and to manage construction costs efficiently, since it would be liable for any amount in excess of the GMP. The CM is also well placed to handle changes in design or scope. Drawbacks If a cost overrun occurs, it can cost the CM a great deal of money. The CM is allowed some mistake-related contingency, so there is a possibility that it will compensate by reducing the scope of the work to fit the GMP. Since the GMP is settled before design begins, it is difficult for owners to know whether they received the best possible bid. Bottom line An at-risk delivery method is best for large projects—both complete construction and renovation—that are not easy to define, have a possibility of changing in scope, or have strict schedule deadlines. Additionally, it is an efficient method for projects containing technical complexity, multi-trade coordination, or multiple phases. Accelerated construction techniques Starting with its Accelerated Bridge Program in the late 2000s, the Massachusetts Department of Transportation began employing accelerated construction techniques, in which it signs contracts with incentives for early completion and penalties for late completion, and uses intense construction during longer periods of complete closure to shorten the overall project duration and reduce cost. See also Architectural engineering Building officials Civil engineering Construction engineering Construction estimating software Cost overrun Cost engineering Earthquake engineering EPC (contract) International Building Code Quality, cost, delivery - trilemma of project Site manager Structural engineering Work breakdown structure Index of construction articles References Further reading Halpin, Daniel W., Construction Management, Wiley, Third Edition. Rawlinsons Australian Construction Handbook, annual editions Construction Building engineering Construction and extraction occupations Project management by type
Construction management
Engineering
5,007
38,012,368
https://en.wikipedia.org/wiki/UniPrise%20Systems
UniPrise Systems, Inc. was a privately held software company with its headquarters in Irvine, California. The company was founded in November 1993 by Joseph Perry, Randy Knapp, and Robert Mowry. Software development, engineering and technical support were located in North Chelmsford, Massachusetts. UniPrise specialized in compilers and database products for the Unix environment. In 1997 the company signed an agreement with Hewlett-Packard to bundle its database monitoring software with HP OpenView systems management software. As of 1998 products included: IMPERA – a management tool for distributed database systems. Access/DAL – database access middleware. UniPrise PL/I for UNIX. PL/I for OpenVMS. The company abruptly went out of business in 1998. Seventeen employees filed a lawsuit against the company's chairmen the following year, citing unpaid wages, business expenses, and benefits. The case was settled in 2001. References Defunct companies based in Greater Los Angeles Defunct software companies of the United States
UniPrise Systems
Technology
203
13,860
https://en.wikipedia.org/wiki/Hahn%E2%80%93Banach%20theorem
The Hahn–Banach theorem is a central tool in functional analysis. It allows the extension of bounded linear functionals defined on a vector subspace of some vector space to the whole space, and it also shows that there are "enough" continuous linear functionals defined on every normed vector space to make the study of the dual space "interesting". Another version of the Hahn–Banach theorem is known as the Hahn–Banach separation theorem or the hyperplane separation theorem, and has numerous uses in convex geometry. History The theorem is named for the mathematicians Hans Hahn and Stefan Banach, who proved it independently in the late 1920s. The special case of the theorem for the space of continuous functions on an interval was proved earlier (in 1912) by Eduard Helly, and a more general extension theorem, the M. Riesz extension theorem, from which the Hahn–Banach theorem can be derived, was proved in 1923 by Marcel Riesz. The first Hahn–Banach theorem was proved by Eduard Helly in 1912, who showed that certain linear functionals defined on a subspace of a certain type of normed space (the space $C([a, b])$ of continuous functions on an interval) had an extension of the same norm. Helly did this through the technique of first proving that a one-dimensional extension exists (where the linear functional has its domain extended by one dimension) and then using induction. In 1927, Hahn defined general Banach spaces and used Helly's technique to prove a norm-preserving version of Hahn–Banach theorem for Banach spaces (where a bounded linear functional on a subspace has a bounded linear extension of the same norm to the whole space). In 1929, Banach, who was unaware of Hahn's result, generalized it by replacing the norm-preserving version with the dominated extension version that uses sublinear functions. Whereas Helly's proof used mathematical induction, Hahn and Banach both used transfinite induction. The Hahn–Banach theorem arose from attempts to solve infinite systems of linear equations. This is needed to solve problems such as the moment problem, whereby given all the potential moments of a function one must determine if a function having these moments exists, and, if so, find it in terms of those moments. Another such problem is the Fourier cosine series problem, whereby given all the potential Fourier cosine coefficients one must determine if a function having those coefficients exists, and, again, find it if so. Riesz and Helly solved the problem for certain classes of spaces (such as $L^p([0, 1])$ and $C([a, b])$), where they discovered that the existence of a solution was equivalent to the existence and continuity of certain linear functionals. In effect, they needed to solve the following problem: (The vector problem) Given a collection $(f_i)_{i \in I}$ of bounded linear functionals on a normed space $X$ and a collection of scalars $(c_i)_{i \in I}$, determine if there is an $x \in X$ such that $f_i(x) = c_i$ for all $i \in I$. If $X$ happens to be a reflexive space then, to solve the vector problem, it suffices to solve the following dual problem: (The functional problem) Given a collection $(x_i)_{i \in I}$ of vectors in a normed space $X$ and a collection of scalars $(c_i)_{i \in I}$, determine if there is a bounded linear functional $f$ on $X$ such that $f(x_i) = c_i$ for all $i \in I$. Riesz went on to define the space $L^p([0, 1])$ in 1910 and the spaces $\ell^p$ in 1913. While investigating these spaces he proved a special case of the Hahn–Banach theorem. Helly also proved a special case of the Hahn–Banach theorem in 1912. In 1910, Riesz solved the functional problem for some specific spaces and in 1912, Helly solved it for a more general class of spaces.
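The criterion that characterizes solvability of the functional problem, referred to just below, lost its displayed form in extraction. Its standard statement, with the notation $(x_i)$, $(c_i)$ as above and a constant $K > 0$ assumed, is:

```latex
% Solvability of the functional problem (standard criterion):
% there exists a bounded linear functional f on X with
% f(x_i) = c_i for all i and \|f\| \le K if and only if
\left| \sum_{i} s_i c_i \right|
    \;\le\; K \,\left\| \sum_{i} s_i x_i \right\|
% holds for every choice of finitely many scalars (s_i).
```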
It wasn't until 1932 that Banach, in one of the first important applications of the Hahn–Banach theorem, solved the general functional problem. The criterion displayed above states the general functional problem and characterizes its solution; the Hahn–Banach theorem can be deduced from it. If $X$ is reflexive then this criterion also solves the vector problem. Hahn–Banach theorem A real-valued function $f$ defined on a subset $M$ of $X$ is said to be dominated (above) by a function $p$ if $f(m) \leq p(m)$ for every $m \in M$; hence the reason why the following version of the Hahn–Banach theorem is called the dominated extension theorem. The theorem remains true if the requirements on $p$ are relaxed to require only that $p$ be a convex function: a function $p$ is convex and satisfies $p(0) \leq 0$ if and only if $p(ax + by) \leq a\,p(x) + b\,p(y)$ for all vectors $x, y$ and all non-negative reals $a, b$ such that $a + b \leq 1$. Every sublinear function is a convex function. On the other hand, if $p$ is convex with $p(0) \leq 0$, then the function defined by $p_0(x) = \inf_{t > 0} p(tx)/t$ is positively homogeneous (because for all $x$ and $r > 0$ one has $p_0(rx) = r\,p_0(x)$), hence, being convex, it is sublinear. It is also bounded above by $p$ (that is, $p_0 \leq p$) and satisfies $F \leq p_0$ for every linear functional $F \leq p$. So the extension of the Hahn–Banach theorem to convex functionals does not have a much larger content than the classical one stated for sublinear functionals. If $F$ is linear then $F \leq p$ if and only if $-p(-x) \leq F(x) \leq p(x)$ for all $x$, which is the (equivalent) conclusion that some authors write instead of $F \leq p$. It follows that if $p$ is also symmetric, meaning that $p(-x) = p(x)$ holds for all $x$, then $F \leq p$ if and only if $|F| \leq p$. Every norm is a seminorm and both are symmetric balanced sublinear functions. A sublinear function is a seminorm if and only if it is a balanced function. On a real vector space (although not on a complex vector space), a sublinear function is a seminorm if and only if it is symmetric. The identity function $x \mapsto x$ on $\mathbb{R}$ is an example of a sublinear function that is not a seminorm. For complex or real vector spaces The dominated extension theorem for real linear functionals implies the following alternative statement of the Hahn–Banach theorem that can be applied to linear functionals on real or complex vector spaces. The theorem remains true if the requirements on $p$ are relaxed to require only that $p(ax + by) \leq |a|\,p(x) + |b|\,p(y)$ for all $x, y$ and all scalars $a$ and $b$ satisfying $|a| + |b| \leq 1$. This condition holds if and only if $p$ is a convex and balanced function satisfying $p(0) \leq 0$, or equivalently, if and only if it is convex, satisfies $p(0) \leq 0$, and $p(ux) \leq p(x)$ for all $x$ and all unit length scalars $u$. A complex-valued functional $F$ is said to be dominated by $p$ if $|F(x)| \leq p(x)$ for all $x$ in the domain of $F$. With this terminology, the above statements of the Hahn–Banach theorem can be restated more succinctly: Hahn–Banach dominated extension theorem: If $p$ is a seminorm defined on a real or complex vector space $X$, then every dominated linear functional defined on a vector subspace of $X$ has a dominated linear extension to all of $X$. In the case where $X$ is a real vector space and $p$ is merely a convex or sublinear function, this conclusion will remain true if both instances of "dominated" (meaning $|F| \leq p$) are weakened to instead mean "dominated above" (meaning $F \leq p$). Proof The following observations allow the Hahn–Banach theorem for real vector spaces to be applied to (complex-valued) linear functionals on complex vector spaces. Every linear functional $F$ on a complex vector space is completely determined by its real part $\operatorname{Re} F$ through the formula $F(x) = \operatorname{Re} F(x) - i \operatorname{Re} F(ix)$, and moreover, if $\|\cdot\|$ is a norm on $X$, then their dual norms are equal: $\|F\| = \|\operatorname{Re} F\|$. In particular, a linear functional on $X$ extends another one defined on $M \subseteq X$ if and only if their real parts are equal on $M$ (in other words, a linear functional $F$ extends $f$ if and only if $\operatorname{Re} F$ extends $\operatorname{Re} f$).
The real part of a linear functional on $X$ is always a real-linear functional (meaning that it is linear when $X$ is considered as a real vector space), and if $R$ is a real-linear functional on a complex vector space then $x \mapsto R(x) - i R(ix)$ defines the unique linear functional on $X$ whose real part is $R$. If $F$ is a linear functional on a (complex or real) vector space $X$ and if $p$ is a seminorm on $X$, then $|F| \leq p$ if and only if $\operatorname{Re} F \leq p$. Stated in simpler language, a linear functional is dominated by a seminorm $p$ if and only if its real part is dominated above by $p$. The proof above shows that when $p$ is a seminorm, there is a one-to-one correspondence between dominated linear extensions of $f$ and dominated real-linear extensions of $\operatorname{Re} f$; the proof even gives a formula for explicitly constructing a linear extension of $f$ from any given real-linear extension of its real part. Continuity A linear functional on a topological vector space is continuous if and only if this is true of its real part; if the domain is a normed space, then $\|F\| = \|\operatorname{Re} F\|$ (where one side is infinite if and only if the other side is infinite). Assume $X$ is a topological vector space and $p$ is a sublinear function on $X$. If $p$ is a continuous sublinear function that dominates a linear functional $F$, then $F$ is necessarily continuous. Moreover, a linear functional $F$ is continuous if and only if its absolute value $|F|$ (which is a seminorm that dominates $F$) is continuous. In particular, a linear functional is continuous if and only if it is dominated by some continuous sublinear function. Proof The Hahn–Banach theorem for real vector spaces ultimately follows from Helly's initial result for the special case where the linear functional is extended from a vector subspace $M$ to a larger vector space in which $M$ has codimension $1$. This lemma remains true if $p$ is merely a convex function instead of a sublinear function. Assume that $p$ is convex, which means that $p(ty + (1-t)z) \leq t\,p(y) + (1-t)\,p(z)$ for all $0 \leq t \leq 1$ and all vectors $y, z$. Let $M$ and $f$ be as in the lemma's statement. Given any and any positive real the positive real numbers and sum to so that the convexity of on guarantees and hence thus proving that which after multiplying both sides by becomes This implies that the values defined by are real numbers that satisfy As in the proof of the one-dimensional dominated extension theorem above, for any real define by It can be verified that if then where follows from when (respectively, follows from when ). The lemma above is the key step in deducing the dominated extension theorem from Zorn's lemma. When $M$ has countable codimension, using induction together with the lemma completes the proof of the Hahn–Banach theorem. The standard proof of the general case uses Zorn's lemma although the strictly weaker ultrafilter lemma (which is equivalent to the compactness theorem and to the Boolean prime ideal theorem) may be used instead. Hahn–Banach can also be proved using Tychonoff's theorem for compact Hausdorff spaces (which is also equivalent to the ultrafilter lemma). The Mizar project has completely formalized and automatically checked the proof of the Hahn–Banach theorem in the HAHNBAN file. Continuous extension theorem The Hahn–Banach theorem can be used to guarantee the existence of continuous linear extensions of continuous linear functionals. In category-theoretic terms, the underlying field of the vector space is an injective object in the category of locally convex vector spaces. On a normed (or seminormed) space, a linear extension of a bounded linear functional is said to be norm-preserving if it has the same dual norm as the original functional. Because of this terminology, the second part of the above theorem is sometimes referred to as the "norm-preserving" version of the Hahn–Banach theorem.
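Explicitly, the norm-preserving version reads as follows; its displayed statement was stripped in extraction, so this is the standard reconstruction, with $X$ a normed space, $M \subseteq X$ a vector subspace, and $M^{*}$, $X^{*}$ the continuous dual spaces:

```latex
% Continuous extension theorem, norm-preserving form (standard statement):
\text{for every } f \in M^{*} \text{ there exists } F \in X^{*}
\text{ with } F\big|_{M} = f \text{ and } \|F\| = \|f\|.
```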
Proof of the continuous extension theorem The following observations allow the continuous extension theorem to be deduced from the Hahn–Banach theorem. The absolute value of a linear functional is always a seminorm. A linear functional $f$ on a topological vector space is continuous if and only if its absolute value $|f|$ is continuous, which happens if and only if there exists a continuous seminorm $p$ on $X$ such that $|f| \leq p$ on the domain of $f$. If $X$ is a locally convex space then this statement remains true when the linear functional $f$ is defined on a vector subspace of $X$. Proof for normed spaces A linear functional $f$ on a normed space is continuous if and only if it is bounded, which means that its dual norm $\|f\|$ is finite, in which case $|f(m)| \leq \|f\|\,\|m\|$ holds for every point $m$ in its domain. Moreover, if $c \geq 0$ is such that $|f(m)| \leq c\,\|m\|$ for all $m$ in the functional's domain, then necessarily $\|f\| \leq c$. If $F$ is a linear extension of a linear functional $f$, then their dual norms always satisfy $\|f\| \leq \|F\|$, so that equality is equivalent to $\|F\| \leq \|f\|$, which holds if and only if $|F(x)| \leq \|f\|\,\|x\|$ for every point $x$ in the extension's domain. This can be restated in terms of the function $x \mapsto \|f\|\,\|x\|$, which is always a seminorm: A linear extension of a bounded linear functional is norm-preserving if and only if the extension is dominated by this seminorm. Applying the Hahn–Banach theorem to $f$ with this seminorm thus produces a dominated linear extension whose norm is (necessarily) equal to that of $f$, which proves the theorem. Non-locally convex spaces The continuous extension theorem might fail if the topological vector space (TVS) is not locally convex. For example, for $0 < p < 1$, the Lebesgue space $L^p([0, 1])$ is a complete metrizable TVS (an F-space) that is not locally convex (in fact, its only convex open subsets are itself and the empty set) and the only continuous linear functional on it is the constant $0$ function. Since this space is Hausdorff, every finite-dimensional vector subspace is linearly homeomorphic to Euclidean space (by F. Riesz's theorem) and so every non-zero linear functional on such a subspace is continuous, but none has a continuous linear extension to the whole space. However, it is possible for a TVS to not be locally convex but nevertheless have enough continuous linear functionals that its continuous dual space separates points; for such a TVS, a continuous linear functional defined on a vector subspace might have a continuous linear extension to the whole space. If the TVS $X$ is not locally convex, then there might not exist any continuous seminorm on $X$ (not just on $M$) that dominates $|f|$, in which case the Hahn–Banach theorem can not be applied as it was in the above proof of the continuous extension theorem. However, the proof's argument can be generalized to give a characterization of when a continuous linear functional has a continuous linear extension: If $X$ is any TVS (not necessarily locally convex), then a continuous linear functional $f$ defined on a vector subspace $M$ has a continuous linear extension to all of $X$ if and only if there exists some continuous seminorm $p$ on $X$ that dominates $|f|$. Specifically, if given a continuous linear extension $F$, then $p := |F|$ is a continuous seminorm on $X$ that dominates $|f|$; and conversely, if given a continuous seminorm $p$ on $X$ that dominates $|f|$, then any dominated linear extension of $f$ to $X$ (the existence of which is guaranteed by the Hahn–Banach theorem) will be a continuous linear extension. Geometric Hahn–Banach (the Hahn–Banach separation theorems) The key element of the Hahn–Banach theorem is fundamentally a result about the separation of two convex sets. This sort of argument appears widely in convex geometry, optimization theory, and economics.
Lemmas to this end derived from the original Hahn–Banach theorem are known as the Hahn–Banach separation theorems. They are generalizations of the hyperplane separation theorem, which states that two disjoint nonempty convex subsets of a finite-dimensional space can be separated by some affine hyperplane, which is a fiber (level set) of the form $f^{-1}(s)$, where $f$ is a non-zero linear functional and $s$ is a scalar. When the convex sets have additional properties, such as being open or compact for example, then the conclusion can be substantially strengthened. The following important corollary is known as the Geometric Hahn–Banach theorem or Mazur's theorem (also known as the Ascoli–Mazur theorem). It follows from the first bullet above and the convexity of $M$. Mazur's theorem clarifies that vector subspaces (even those that are not closed) can be characterized by linear functionals. Supporting hyperplanes Since points are trivially convex, geometric Hahn–Banach implies that functionals can detect the boundary of a set. In particular, let $X$ be a real topological vector space and $A \subseteq X$ be convex with nonempty interior. If $a_0$ is a boundary point of $A$, then there is a functional that is vanishing at $a_0$ but supported on the interior of $A$. Call a normed space smooth if at each point $x$ in its unit ball there exists a unique closed supporting hyperplane to the unit ball at $x$. Köthe showed in 1983 that a normed space is smooth at a point if and only if the norm is Gateaux differentiable at that point. Balanced or disked neighborhoods Let $U$ be a convex balanced neighborhood of the origin in a locally convex topological vector space $X$ and suppose $x \in X$ is not an element of $U$. Then there exists a continuous linear functional $f$ on $X$ such that $\sup |f(U)| \leq |f(x)|$. Applications The Hahn–Banach theorem is the first sign of an important philosophy in functional analysis: to understand a space, one should understand its continuous functionals. For example, linear subspaces are characterized by functionals: if $X$ is a normed vector space with linear subspace $M$ (not necessarily closed) and if $z$ is an element of $X$ not in the closure of $M$, then there exists a continuous linear map $f$ with $f(m) = 0$ for all $m \in M$, $f(z) = 1$, and $\|f\| = \operatorname{dist}(z, M)^{-1}$. (To see this, note that $x \mapsto \operatorname{dist}(x, M)$ is a sublinear function.) Moreover, if $z$ is a non-zero element of $X$, then there exists a continuous linear map $f$ such that $f(z) = \|z\|$ and $\|f\| \leq 1$. This implies that the natural injection from a normed space into its double dual is isometric. That last result also suggests that the Hahn–Banach theorem can often be used to locate a "nicer" topology in which to work. For example, many results in functional analysis assume that a space is Hausdorff or locally convex. However, suppose $X$ is a topological vector space, not necessarily Hausdorff or locally convex, but with a nonempty, proper, convex, open set $M$. Then geometric Hahn–Banach implies that there is a hyperplane separating $M$ from any other point. In particular, there must exist a nonzero functional on $X$ — that is, the continuous dual space $X^*$ is non-trivial. Considering $X$ with the weak topology induced by $X^*$, then $X$ becomes locally convex; by the second bullet of geometric Hahn–Banach, the weak topology on this new space separates points. Thus $X$ with this weak topology becomes Hausdorff. This sometimes allows some results from locally convex topological vector spaces to be applied to non-Hausdorff and non-locally convex spaces. Partial differential equations The Hahn–Banach theorem is often useful when one wishes to apply the method of a priori estimates. Suppose that we wish to solve the linear differential equation $Pu = f$ for $u$, with $f$ given in some Banach space $X$.
If we have control on the size of $u$ in terms of $\|f\|_X$ and we can think of $u$ as a bounded linear functional on some suitable space of test functions $g$, then we can view $f$ as a linear functional by adjunction: $(f, g) = (u, P^* g)$. At first, this functional is only defined on the image of $P^*$, but using the Hahn–Banach theorem, we can try to extend it to the entire codomain $X$. The resulting functional is often defined to be a weak solution to the equation. Characterizing reflexive Banach spaces Example from Fredholm theory To illustrate an actual application of the Hahn–Banach theorem, we will now prove a result that follows almost entirely from the Hahn–Banach theorem. The above result may be used to show that every closed vector subspace of is complemented because any such space is either finite dimensional or else TVS–isomorphic to Generalizations General template There are now many other versions of the Hahn–Banach theorem. The general template for the various versions of the Hahn–Banach theorem presented in this article is as follows: $p$ is a sublinear function (possibly a seminorm) on a vector space $X$, $M$ is a vector subspace of $X$ (possibly closed), and $f$ is a linear functional on $M$ satisfying $|f| \leq p$ on $M$ (and possibly some other conditions). One then concludes that there exists a linear extension $F$ of $f$ to $X$ such that $|F| \leq p$ on $X$ (possibly with additional properties). For seminorms So for example, suppose that $f$ is a bounded linear functional defined on a vector subspace $M$ of a normed space $X$, so its operator norm $\|f\|$ is a non-negative real number. Then the linear functional's absolute value $p := |f|$ is a seminorm on $M$ and the map $q$ defined by $q(x) = \|f\|\,\|x\|$ is a seminorm on $X$ that satisfies $p \leq q$ on $M$. The Hahn–Banach theorem for seminorms guarantees the existence of a seminorm $P$ on $X$ that is equal to $p$ on $M$ and is bounded above by $q$ everywhere on $X$. Geometric separation Maximal dominated linear extension If is a singleton set (where is some vector) and if is such a maximal dominated linear extension of then Vector valued Hahn–Banach Invariant Hahn–Banach A set $\Gamma$ of maps is commutative (with respect to function composition $\circ$) if $F \circ G = G \circ F$ for all $F, G \in \Gamma$. Say that a function $f$ defined on a subset $M$ of $X$ is $\Gamma$-invariant if $L(M) \subseteq M$ and $f \circ L = f$ on $M$ for every $L \in \Gamma$. This theorem may be summarized: Every $\Gamma$-invariant continuous linear functional defined on a vector subspace of a normed space $X$ has a $\Gamma$-invariant Hahn–Banach extension to all of $X$. For nonlinear functions The following theorem of Mazur–Orlicz (1953) is equivalent to the Hahn–Banach theorem. The following theorem characterizes when a scalar function defined on a vector subspace (not necessarily linear) has a continuous linear extension to all of $X$. Converse Let $X$ be a topological vector space. A vector subspace $M$ of $X$ has the extension property if any continuous linear functional on $M$ can be extended to a continuous linear functional on $X$, and we say that $X$ has the Hahn–Banach extension property (HBEP) if every vector subspace of $X$ has the extension property. The Hahn–Banach theorem guarantees that every Hausdorff locally convex space has the HBEP. For complete metrizable topological vector spaces there is a converse, due to Kalton: every complete metrizable TVS with the Hahn–Banach extension property is locally convex. On the other hand, if a vector space of uncountable dimension is endowed with the finest vector topology, then it is a topological vector space with the Hahn–Banach extension property that is neither locally convex nor metrizable.
A vector subspace $M$ of a TVS $X$ has the separation property if for every element $x \in X$ such that $x \notin M$, there exists a continuous linear functional $f$ on $X$ such that $f(x) \neq 0$ and $f(m) = 0$ for all $m \in M$. Clearly, the continuous dual space of a TVS $X$ separates points on $X$ if and only if the subspace $\{0\}$ has the separation property. In 1992, Kakol proved that for any infinite-dimensional vector space $X$, there exist TVS-topologies on $X$ that do not have the HBEP despite having enough continuous linear functionals for the continuous dual space to separate points on $X$. However, if $X$ is a TVS then a vector subspace of $X$ has the extension property if and only if it has the separation property. Relation to axiom of choice and other theorems The proof of the Hahn–Banach theorem for real vector spaces (HB) commonly uses Zorn's lemma, which in the axiomatic framework of Zermelo–Fraenkel set theory (ZF) is equivalent to the axiom of choice (AC). It was discovered by Łoś and Ryll-Nardzewski and independently by Luxemburg that HB can be proved using the ultrafilter lemma (UL), which is equivalent (under ZF) to the Boolean prime ideal theorem (BPI). BPI is strictly weaker than the axiom of choice and it was later shown that HB is strictly weaker than BPI. The ultrafilter lemma is equivalent (under ZF) to the Banach–Alaoglu theorem, which is another foundational theorem in functional analysis. Although the Banach–Alaoglu theorem implies HB, it is not equivalent to it (said differently, the Banach–Alaoglu theorem is strictly stronger than HB). However, HB is equivalent to a certain weakened version of the Banach–Alaoglu theorem for normed spaces. The Hahn–Banach theorem is also equivalent to the following statement: (∗): On every Boolean algebra $B$ there exists a "probability charge", that is: a non-constant finitely additive map from $B$ into $[0, 1]$. (BPI is equivalent to the statement that there are always non-constant probability charges which take only the values 0 and 1.) In ZF, the Hahn–Banach theorem suffices to derive the existence of a non-Lebesgue measurable set. Moreover, the Hahn–Banach theorem implies the Banach–Tarski paradox. For separable Banach spaces, D. K. Brown and S. G. Simpson proved that the Hahn–Banach theorem follows from WKL0, a weak subsystem of second-order arithmetic that takes a form of Kőnig's lemma restricted to binary trees as an axiom. In fact, they prove that under a weak set of assumptions, the two are equivalent, an example of reverse mathematics. See also Notes Proofs References Bibliography Reed, Michael and Simon, Barry, Methods of Modern Mathematical Physics, Vol. 1, Functional Analysis, Section III.3. Academic Press, San Diego, 1980. Tao, Terence, The Hahn–Banach theorem, Menger's theorem, and Helly's theorem Wittstock, Gerd, Ein operatorwertiger Hahn-Banach Satz, J. of Functional Analysis 40 (1981), 127–150 Zeidler, Eberhard, Applied Functional Analysis: main principles and their applications, Springer, 1995. Articles containing proofs Linear algebra Linear functionals Theorems in functional analysis Topological vector spaces
Hahn–Banach theorem
Mathematics
5,010
460,637
https://en.wikipedia.org/wiki/Continuous%20linear%20extension
In functional analysis, it is often convenient to define a linear transformation on a complete, normed vector space by first defining a linear transformation $T$ on a dense subset of $X$ and then continuously extending $T$ to the whole space via the theorem below. The resulting extension remains linear and bounded, and is thus continuous, which makes it a continuous linear extension. This procedure is known as continuous linear extension. Theorem Every bounded linear transformation $T$ from a normed vector space $X$ to a complete, normed vector space $Y$ can be uniquely extended to a bounded linear transformation $\tilde{T}$ from the completion of $X$ to $Y$. In addition, the operator norm of $\tilde{T}$ is $c$ if and only if the norm of $T$ is $c$. This theorem is sometimes called the BLT theorem. Application Consider, for instance, the definition of the Riemann integral. A step function on a closed interval $[a, b]$ is a function of the form $f = \sum_{i=1}^{n} r_i \mathbf{1}_{[x_{i-1}, x_i)}$, where $r_1, \ldots, r_n$ are real numbers, $a = x_0 < x_1 < \cdots < x_n = b$, and $\mathbf{1}_S$ denotes the indicator function of the set $S$. The space of all step functions on $[a, b]$, normed by the $L^\infty$ norm (see Lp space), is a normed vector space which we denote by $\mathcal{S}$. Define the integral of a step function by $I(f) = \sum_{i=1}^{n} r_i (x_i - x_{i-1})$. $I$, as a function, is a bounded linear transformation from $\mathcal{S}$ into $\mathbb{R}$. Let $\mathcal{PC}$ denote the space of bounded, piecewise continuous functions on $[a, b]$ that are continuous from the right, along with the $L^\infty$ norm. The space $\mathcal{S}$ is dense in $\mathcal{PC}$, so we can apply the BLT theorem to extend the linear transformation $I$ to a bounded linear transformation $\tilde{I}$ from $\mathcal{PC}$ to $\mathbb{R}$. This defines the Riemann integral of all functions in $\mathcal{PC}$: $\tilde{I}(f) = \int_a^b f(x)\,dx$ for every $f \in \mathcal{PC}$. The Hahn–Banach theorem The above theorem can be used to extend a bounded linear transformation $T: M \to Y$ to a bounded linear transformation from $X$ to $Y$ if $M$ is dense in $X$. If $M$ is not dense in $X$, then the Hahn–Banach theorem may sometimes be used to show that an extension exists. However, the extension may not be unique. See also References Functional analysis Linear operators
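As a footnote to the article above: the extension $\tilde{T}$ in the BLT theorem is given concretely by the usual density limit. This is the standard construction, with notation as in the theorem ($T : X \to Y$ bounded, $Y$ complete):

```latex
% The continuous extension by density (standard construction):
\tilde{T}x \;=\; \lim_{n \to \infty} T x_n
    \qquad \text{for any sequence } x_n \in X \text{ with } x_n \to x.
% The limit exists because (T x_n) is Cauchy,
\|T x_n - T x_m\| \;\le\; \|T\|\,\|x_n - x_m\|,
% and Y is complete; it is independent of the chosen sequence,
% and it follows that \|\tilde{T}\| = \|T\|.
```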
Continuous linear extension
Mathematics
354
12,404,241
https://en.wikipedia.org/wiki/Pseudoeurycea%20nigromaculata
Pseudoeurycea nigromaculata, commonly known as the black-spotted salamander or black-spotted false brook salamander, is a species of salamander in the family Plethodontidae. It is endemic to Veracruz, Mexico, and known from Cerro Chicahuaxtla in Cuatlalpan (the type locality, near Fortín de las Flores) and from Volcán San Martín. These separate populations likely represent distinct species. Description Pseudoeurycea nigromaculata is a medium-sized plethodontid: females in the type series (collected by Hobart Muir Smith) measure in snout–vent length. The tail is longer than the snout–vent length, giving a maximum total length of about . The body is blackish (lighter in younger specimens), the tail has lighter coloration, and both carry the black spots that have given the species its name. Two observed egg clutches contained 19 and 25 eggs. Habitat and conservation Pseudoeurycea nigromaculata is an arboreal species living in bromeliads in cloud forest. Once relatively common, it now appears to be very rare. Most of its habitat has disappeared or is severely degraded. It is protected by Mexican law under the "Special Protection" category. References nigromaculata Endemic amphibians of Mexico Fauna of the Sierra Madre de Oaxaca EDGE species Taxonomy articles created by Polbot Amphibians described in 1941 Taxa named by Edward Harrison Taylor
Pseudoeurycea nigromaculata
Biology
306
3,597,377
https://en.wikipedia.org/wiki/Magnetosphere%20particle%20motion
The ions and electrons of a plasma interacting with the Earth's magnetic field generally follow its magnetic field lines. These represent the force that a north magnetic pole would experience at any given point. (Denser lines indicate a stronger force.) Plasmas exhibit more complex second-order behaviors, studied as part of magnetohydrodynamics. Thus in the "closed" model of the magnetosphere, the magnetopause boundary between the magnetosphere and the solar wind is outlined by field lines. Not much plasma can cross such a stiff boundary. Its only "weak points" are the two polar cusps, the points where field lines closing at noon (-z axis GSM) get separated from those closing at midnight (+z axis GSM); at such points the field intensity on the boundary is zero, posing no barrier to the entry of plasma. (This simple definition assumes a noon-midnight plane of symmetry, but closed fields lacking such symmetry also must have cusps, by the fixed point theorem.) The amount of solar wind energy and plasma entering the actual magnetosphere depends on how far it departs from such a "closed" configuration, i.e. the extent to which Interplanetary Magnetic Field field lines manage to cross the boundary. As discussed further below, that extent depends very much on the direction of the Interplanetary Magnetic Field, in particular on its southward or northward slant. Trapping of plasma, e.g. of the ring current, also follows the structure of field lines. A particle interacting with this B field experiences a Lorentz force which is responsible for much of the particle motion in the magnetosphere. Furthermore, Birkeland currents and heat flow are also channeled by such lines — easy along them, blocked in perpendicular directions. Indeed, field lines in the magnetosphere have been likened to the grain in a log of wood, which defines an "easy" direction along which it easily gives way. Motion of charged particles The simplest magnetic field B is a constant one: straight, parallel field lines and constant field intensity. In such a field, if an ion or electron enters perpendicular to the field lines, it can be shown to move in a circle (the field only needs to be constant in the region covering the circle). If q is the charge of the particle, m its mass, v its velocity and Rg the radius of the circle ("gyration radius"), all one need do is notice that the centripetal force mv²/Rg must equal the magnetic force qvB. One gets Rg = mv/(qB). If the initial velocity of the particle has a different direction, one need only resolve it into a component v⊥ perpendicular to B and a component v// parallel to B, and replace v in the above formula with v⊥. If W⊥ = mv⊥²/2 is the energy associated with the perpendicular motion in electron-volts (all calculations here are non-relativistic), in a field of B nT (nanotesla), then Rg in kilometers is, for protons, Rg = (144/B)√W⊥ and, for electrons, Rg = (3.37/B)√W⊥ (a numerical check of these formulas is sketched below). The velocity parallel to the field v// is not affected by the field, because no magnetic force exists in that direction. That velocity just stays constant (as long as the field does), and adding the two motions together gives a spiral around a central guiding field line. If the field curves or changes, the motion is modified, but the general character of spiraling around a central field line persists: hence the name "guiding center motion." Because the magnetic force is perpendicular to the velocity, it performs no work and requires no energy—nor does it provide any.
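The following sketch is the promised numerical check of the gyration-radius formulas. It derives Rg = mv⊥/(qB) from SI constants and compares it with the (144/B)√W⊥ and (3.37/B)√W⊥ rules of thumb; the 1 keV energy and 100 nT field are arbitrary example values, not taken from the article.

```python
import math

# Physical constants (SI units)
E_CHARGE = 1.602176634e-19   # elementary charge, C
M_PROTON = 1.67262192e-27    # proton mass, kg
M_ELECTRON = 9.1093837e-31   # electron mass, kg

def gyroradius_km(mass_kg, w_perp_ev, b_nt):
    """Gyration radius Rg = m*v_perp/(q*B), returned in kilometers.

    w_perp_ev: perpendicular kinetic energy in electron-volts (non-relativistic)
    b_nt:      magnetic field strength in nanotesla
    """
    v_perp = math.sqrt(2 * w_perp_ev * E_CHARGE / mass_kg)  # m/s
    b_tesla = b_nt * 1e-9
    return mass_kg * v_perp / (E_CHARGE * b_tesla) / 1e3

# Arbitrary example: 1 keV particles in a 100 nT field.
W, B = 1000.0, 100.0
print(f"proton:   {gyroradius_km(M_PROTON, W, B):8.1f} km "
      f"(rule of thumb: {144 * math.sqrt(W) / B:8.1f} km)")
print(f"electron: {gyroradius_km(M_ELECTRON, W, B):8.2f} km "
      f"(rule of thumb: {3.37 * math.sqrt(W) / B:8.2f} km)")
```

The coefficients 144 and 3.37 differ by √(mp/me) ≈ 43, as expected from the mass dependence of the gyration radius.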
Because the magnetic force performs no work, magnetic fields (like the Earth's) can profoundly affect particle motion in them, but need no energy input to maintain their effect. Magnetic Mirroring and Magnetic Drift The spacing between field lines is an indicator of the relative strength of the magnetic field. Where magnetic field lines converge the field grows stronger, and where they diverge, weaker. Now, it can be shown that in the motion of gyrating particles, the "magnetic moment" μ = W⊥/B (or relativistically, p⊥²/(2mγB)) stays very nearly constant. The "very nearly" qualifier sets it apart from true constants of motion, such as energy, reducing it to merely an "adiabatic invariant." For most plasmas in the magnetosphere, the deviation from constancy is negligible. The conservation of μ is tremendously important (in laboratory plasmas as well as in space). Suppose the field line guiding a particle, the axis of its spiral path, belongs to a converging bundle of lines, so that the particle is led into an increasingly larger B. To keep μ constant, W⊥ must also grow. However, as noted before, the total energy of a particle in a "purely magnetic" field remains constant. What therefore happens is that energy is converted, from the part associated with the parallel motion v// to the perpendicular part. As v// decreases, the angle between v and B then increases, until it reaches 90°. At that point W⊥ contains all the available energy: it can grow no more, and no further advance into the stronger field can occur. The result is known as magnetic mirroring (a quantitative version of the mirror condition is sketched at the end of this article). The particle briefly gyrates perpendicular to its guiding field line, and then retreats back to the weaker field, the spiral unwinding again in the process. It may be noted that such motion was first derived by Henri Poincaré in 1895, for a charged particle in the field of a magnetic monopole, whose field lines are all straight and converge to a point. The conservation of μ was only pointed out by Alfvén about 50 years later, and the connection to adiabatic invariants was only made afterwards. Magnetic mirroring makes possible the "trapping" in the dipole-like field lines near Earth of particles in the radiation belt and in the ring current. On all such lines the field is much stronger at their ends near Earth, compared to its strength when it crosses the equatorial plane. Assuming such particles are somehow placed in the equatorial region of that field, most of them stay trapped, because every time their motion along the field line brings them into the strong field region, they "get mirrored" and bounce back and forth between hemispheres. Only particles whose motion is very close to parallel to the field line, with near-zero μ, avoid mirroring—and these are quickly absorbed by the atmosphere and lost. Their loss leaves a bundle of directions around the field line which is empty of particles—the "loss cone". In addition to gyrating around their guiding field lines and bouncing back and forth between mirror points, trapped particles also drift slowly around Earth, switching guiding field lines but staying at approximately the same distance (another adiabatic invariant is involved, "the second invariant"). This motion was mentioned earlier in connection with the ring current. One reason for the drift is that the intensity of B increases as Earth is approached. The gyration around the guiding field line is therefore not a perfect circle, but curves a little more tightly on the side closer to the Earth, where the larger B gives a smaller Rg.
This change in curvature makes ions advance sideways, while electrons, which gyrate in the opposite sense, advance sideways in the opposite direction. The net result, as already noted, produces the ring current, though additional effects (like the non-uniform distribution of plasma density) also affect the result. Plasma fountain In the 1980s, a "plasma fountain" of hydrogen, helium, and oxygen ions was discovered flowing from the Earth's North Pole. References External links "3D Earth Magnetic Field Charged-Particle Simulator" Tool dedicated to the 3D simulation of charged particles in the magnetosphere. [VRML plug-in required] Electromagnetism Planetary science Space plasmas
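As a footnote to the article above, the mirror condition promised in the mirroring section can be made quantitative. The derivation is the standard textbook one; the pitch angle α (the angle between v and B) and the "eq" (equatorial) and "atm" (top of the atmosphere) subscripts are assumed notation:

```latex
% Conservation of W and of \mu = W_\perp / B = W \sin^2\alpha / B gives
\frac{\sin^2 \alpha}{B}
    \;=\; \frac{\sin^2 \alpha_{\mathrm{eq}}}{B_{\mathrm{eq}}}
    \;=\; \text{const},
% so the particle mirrors (\alpha = 90^\circ) where the field reaches
B_{\mathrm{m}} \;=\; \frac{B_{\mathrm{eq}}}{\sin^2 \alpha_{\mathrm{eq}}},
% and equatorial pitch angles with
\sin^2 \alpha_{\mathrm{eq}} \;<\; \frac{B_{\mathrm{eq}}}{B_{\mathrm{atm}}}
% (the loss cone) reach the atmosphere before mirroring.
```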
Magnetosphere particle motion
Physics,Astronomy
1,622
52,136,533
https://en.wikipedia.org/wiki/Skeletocutis%20bicolor
Skeletocutis bicolor is a species of poroid crust fungus in the family Polyporaceae. It is found in Singapore. Taxonomy The fungus was originally described as new to science in 1920 by American mycologist Curtis Gates Lloyd as Polystictus bicolor. He characterized it as follows: "Small, about a cm., growing broadly, attached to the host and developing a little, conchate pileus." The type was collected in Singapore by English botanist Thomas Ford Chipp. Leif Ryvarden examined Lloyd's type collections, and transferred the species to the genus Skeletocutis in 1992. Description The fruitbody is in the form of a crust with the edges sticking out to form small caps up to 4 mm wide. It has a smooth, pale brown surface. The pore surface, or hymenium, comprises tiny angular pores that number 6–7 per mm. Skeletocutis bicolor has a dimitic hyphal system, meaning it has both generative hyphae and skeletal hyphae. The skeletal hyphae are covered with spiny crystals, especially in the dissepiments (the tissue between the pores)—a characteristic feature of the genus Skeletocutis. The spores are spherical, hyaline (translucent), and measure 2.5–3 μm in diameter. References Fungi described in 1920 Fungi of Asia bicolor Fungus species
Skeletocutis bicolor
Biology
296
73,542,847
https://en.wikipedia.org/wiki/Chimeric%20small%20molecule%20therapeutics
Chimeric small molecule therapeutics are a class of drugs designed with multiple active domains to operate outside of the typical protein inhibition model. While most small molecule drugs inhibit target proteins by binding their active site, chimerics form protein-protein ternary structures to induce degradation or, less frequently, other protein modifications. Background Small molecule drugs, compounds typically <1 kD in mass, comprise a large portion of the therapeutic market. These drugs usually operate by agonizing or antagonizing the active site on a disease-linked protein of interest, though allosteric regulation is possible. With an estimated 93% of the human proteome lacking druggable binding sites, methods have been developed to modulate protein activity through binding of any available site rather than only the active site. These drugs contain a target protein binding warhead in addition to a linker-separated active domain. This domain may recruit a second protein into proximity, induce protease-mediated degradation, or recruit a kinase for directed phosphorylation, among other functions. These drugs expand both the mechanism of action for small molecule therapeutics and the pool of potential protein targets. Proteolysis-targeting chimeras Proteolysis-targeting chimeras (PROTACs) were first reported by Kathleen Sakamoto, Craig Crews, and Raymond Deshaies in 2001. A chimeric molecule consisting of ovalicin (a MetAP-2 small molecule inhibitor) and the IκBα phosphopeptide (a recruiter of the SCFβ-TRCP E3 ligase complex) separated by a linker was constructed and shown to induce MetAP-2 degradation in in vitro cell models. Further study confirmed that E3 ligase-mediated ubiquitination and subsequent proteasome degradation was responsible for the reduced MetAP-2 levels. Continued work on this system by Craig Crews and others has expanded the potential pool of E3 ligases and degradation targets, with Arvinas Inc. founded in 2013 to bring PROTAC drugs to market. As of April 2023, Arvinas has one drug in Phase 3 clinical trials (ARV-471, an estrogen receptor degrader) and two drugs in Phase 2 clinical trials (androgen receptor degraders ARV-110 and ARV-766) for treatment of breast and prostate cancer, respectively. Arvinas released Phase 2 clinical trial results for ARV-471 in December 2022, reporting a clinical benefit rate of 40% in CDK4/6 inhibitor-pretreated patients and an absence of dose-limiting toxicities. Hydrophobic tag degradation Hydrophobic tag degraders contain a binding domain in addition to a linker-separated hydrophobic moiety, such as adamantyl, to induce protein degradation. An early example of a hydrophobically tagged degrader is fulvestrant, an estrogen receptor antagonist that contains a long hydrophobic side chain that induces the degradation of the estrogen receptor. Fulvestrant has inspired the development of additional selective estrogen receptor degraders (SERDs). As exposed hydrophobicity is characteristic of protein misfolding, the native cell proteasome may recognize and degrade proteins tagged with the hydrophobic moiety. Taavi Neklesa and Craig Crews first reported hydrophobic tag degradation in 2011 as a tool to probe protein function in conjunction with cognate HaloTag fusion proteins. This principle has also been further used to effectively degrade transcription factors (a traditionally difficult class to drug) and cancer-linked EZH2 in in vitro models. As yet, no drug candidates have been publicly identified making use of this technology.
Additional use cases Lysosome-targeting chimeras (LYTACs) have been developed, combining target-binding compounds or antibodies and glycopeptide ligands to stimulate the lysosomal degradation pathway. Unlike the proteasome pathway, this enables the targeted degradation of extracellular and membrane-bound proteins in addition to cytoplasmic ones. Autophagy-targeting chimeras (AUTACs) can be employed to degrade proteins as well as protein aggregates and organelles. AUTAC degradation tags are typically derived from guanine, though the particular mechanism of action is still unclear. Autophagosome-tethering compounds (ATTECs) mimic this strategy, directly appending a target protein to the autophagosome membrane for degradation without the use of a linker. Phosphorylation-inducing chimeric small molecules (PHICS) employ the warhead-linker-recruiter structure to direct phosphorylation of a given target by proximity to a desired kinase. This technique does not necessarily involve protein degradation and may instead be used to modulate protein function to direct or inhibit certain pathways. Further work in the Crews Lab has used chimeric oligonucleotides, the dCas9 protein, and chimeric small molecules to create the TRAFTAC system for generalizable transcription factor degradation. Advantages The ability to inhibit or modify enzyme function without targeting a catalytic pocket binding site greatly expands the potentially druggable portion of the proteome. Furthermore, most classes of chimeric small molecules can act on many targets over their life cycle, lowering the effective dose compared to traditional inhibitors that act only on one protein at a time. These therapeutics provide an alternative mechanism of action that may be useful as a combination therapy in diseases where drug resistance is a concern. Chimeric drug activity is also highly dependent on the distance between the targeted proteins, allowing the effect to be tuned through optimization of the linker structure. Challenges The existence of two or more binding domains increases the difficulty of synthesis for chimeric molecules. Each component must be discovered, optimized, and synthesized in such a way that the components can be linked together, driving up cost relative to single-domain inhibitors. The large size of chimeric molecules (typically 700-1100 Da) makes effective delivery difficult and increases complexity in pharmacokinetic design. Care must be taken to ensure that the molecule is capable of passing through the cell membrane and persisting long enough to have a therapeutic effect. Additionally, protein-protein ternary complexes are generally unstable, adding to the difficulty of chimeric drug design. References Medicinal chemistry
Chimeric small molecule therapeutics
Chemistry,Biology
1,291
36,692,110
https://en.wikipedia.org/wiki/Rad%C3%B3%E2%80%93Kneser%E2%80%93Choquet%20theorem
In mathematics, the Radó–Kneser–Choquet theorem, named after Tibor Radó, Hellmuth Kneser and Gustave Choquet, states that the Poisson integral of a homeomorphism of the unit circle is a harmonic diffeomorphism of the open unit disk. The result was stated as a problem by Radó and solved shortly afterwards by Kneser in 1926. Choquet, unaware of the work of Radó and Kneser, rediscovered the result with a different proof in 1945. Choquet also generalized the result to the Poisson integral of a homeomorphism from the unit circle to a simple Jordan curve bounding a convex region. Statement Let f be an orientation-preserving homeomorphism of the unit circle |z| = 1 in C and define the Poisson integral of f by $F_f(re^{i\theta}) = \frac{1}{2\pi} \int_0^{2\pi} \frac{1 - r^2}{1 - 2r\cos(\theta - \varphi) + r^2}\, f(e^{i\varphi})\, d\varphi$ for r < 1. Standard properties of the Poisson integral show that Ff is a harmonic function on |z| < 1 which extends by continuity to f on |z| = 1. With the additional assumption that f is an orientation-preserving homeomorphism of this circle, Ff is an orientation-preserving diffeomorphism of the open unit disk. Proof To prove that Ff is locally an orientation-preserving diffeomorphism, it suffices to show that the Jacobian at a point a in the unit disk is positive. On the other hand, if g is a Möbius transformation preserving the unit circle and the unit disk, then $F_{f \circ g} = F_f \circ g$. Taking g so that g(a) = 0 and taking the change of variable ζ = g(z), the chain rule shows that the Jacobian of $F_f$ at a is a positive multiple of the Jacobian of $F_{f \circ g}$ at 0. It is therefore enough to prove positivity of the Jacobian when a = 0. In that case the Jacobian at 0 equals $|a_1|^2 - |a_{-1}|^2$, where the $a_n$ are the Fourier coefficients of f: $a_n = \frac{1}{2\pi} \int_0^{2\pi} f(e^{i\theta})\, e^{-in\theta}\, d\theta$. The Jacobian at 0 can then be expressed as a double integral. Writing $f(e^{i\theta}) = e^{ih(\theta)}$, where h is a strictly increasing continuous function satisfying $h(\theta + 2\pi) = h(\theta) + 2\pi$, the double integral can be rewritten as a double integral of a quantity R over pairs of angles. This formula gives R as the sum of the sines of four non-negative angles with sum 2π, so it is always non-negative. But then the Jacobian at 0 is strictly positive and Ff is therefore locally a diffeomorphism. It remains to deduce that Ff is a homeomorphism. By continuity its image is compact, so closed. The non-vanishing of the Jacobian implies that Ff is an open mapping on the unit disk, so that the image of the open disk is open. Hence the image of the closed disk is an open and closed subset of the closed disk. By connectivity, it must be the whole disk. For |w| < 1, the inverse image of w is closed, so compact, and entirely contained in the open disk. Since Ff is locally a homeomorphism, it must be a finite set. The set of points w in the open disk with exactly n preimages is open. By connectivity every point has the same number N of preimages. Since the open disk is simply connected, N = 1. In fact, taking any preimage of the origin, every radial line has a unique lifting to a preimage, and so there is an open subset of the unit disk mapping homeomorphically onto the open disk. If N > 1, its complement would also have to be open, contradicting connectivity. Notes References Theorems in harmonic analysis
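As a footnote to the article above, the theorem is easy to test numerically: discretize the Poisson integral for a sample orientation-preserving circle homeomorphism and check the sign of the Jacobian. The particular map below, f(e^{iθ}) = e^{i(θ + 0.5 sin θ)}, is a hypothetical example (θ + 0.5 sin θ is strictly increasing), not one from the literature.

```python
import numpy as np

# Sample orientation-preserving circle homeomorphism (hypothetical choice):
# theta -> theta + 0.5*sin(theta) is strictly increasing, so f is a homeomorphism.
def f(theta):
    return np.exp(1j * (theta + 0.5 * np.sin(theta)))

def poisson_integral(z, n=4000):
    """Discretized Poisson integral F_f(z) for |z| < 1."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    r, theta = abs(z), np.angle(z)
    kernel = (1 - r**2) / (1 - 2 * r * np.cos(theta - t) + r**2)
    return np.mean(kernel * f(t))

def jacobian(z, h=1e-4):
    """Jacobian determinant of (x, y) -> (Re F, Im F) by central differences."""
    fx = (poisson_integral(z + h) - poisson_integral(z - h)) / (2 * h)
    fy = (poisson_integral(z + 1j * h) - poisson_integral(z - 1j * h)) / (2 * h)
    # F = u + iv; the determinant is u_x * v_y - u_y * v_x.
    return fx.real * fy.imag - fx.imag * fy.real

for z in [0.0 + 0.0j, 0.3 + 0.4j, -0.7 + 0.1j]:
    print(f"J at {z}: {jacobian(z):.4f}")
```

All printed Jacobians come out strictly positive, consistent with the theorem.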
Radó–Kneser–Choquet theorem
Mathematics
699
37,395,610
https://en.wikipedia.org/wiki/Chromium%28III%29%20boride
Chromium(III) boride, also known as chromium monoboride (CrB), is an inorganic compound with the chemical formula CrB. It is one of the six stable binary borides of chromium, which also include Cr2B, Cr5B3, Cr3B4, CrB2, and CrB4. Like many other transition metal borides, it is extremely hard (hardness of 21–23 GPa), has high strength (690 MPa bending strength), conducts heat and electricity as well as many metallic alloys, and has a high melting point (~2100 °C). Unlike pure chromium, CrB is known to be paramagnetic, with a magnetic susceptibility that is only weakly dependent on temperature. Due to these properties, among others, CrB has been considered as a candidate material for wear-resistant coatings and high-temperature diffusion barriers. It can be synthesized as a powder by many methods, including direct reaction of the constituent elemental powders, self-propagating high-temperature synthesis (SHS), borothermic reduction, and molten salt growth. Slow cooling of molten aluminum solutions from high temperatures has been used to grow large single crystals, with a maximum size of 0.6 mm × 0.6 mm × 8.3 mm. CrB has an orthorhombic crystal structure (space group Cmcm) that was first determined in 1951, and subsequently confirmed by later work using single crystals. The crystal structure can be visualized as slabs of face-sharing BCr6 trigonal prisms in the ac-plane that are stacked parallel to the <010> crystallographic direction. Similar to Cr3B4 and Cr2B3, the B atoms in the structure form covalent bonds with each other and are characterized by unidirectional –B–B– chains parallel to the <001> crystallographic direction. The transition metal monoborides VB, NbB, TaB, and NiB have the same crystal structure. References Chromium(III) compounds Borides
Chromium(III) boride
Chemistry
439
4,475,501
https://en.wikipedia.org/wiki/H.%20C.%20%C3%98rsted%20Medal
The H. C. Ørsted Medal is a medal for scientific achievement awarded by the Danish Society for the Dissemination of Natural Science (Danish: Selskabet for naturlærens udbredelse). It is named after the society's founder, Hans Christian Ørsted, and awarded chiefly to Danes. Medals The medal is awarded in three versions: Gold "for excellent scientific work in the fields of physics and chemistry published in recent years" Silver "for excellent research dissemination of exact science to wider circles over a number of years" Bronze "for many years of excellent dissemination of exact science to wider circles through e.g. teaching, museum activities, arrangement of competitions, association activities or similar" Recipients Source: Society for Dissemination of Natural Science Gold 2024 Morten Meldal 2024 Jens Kehlet Nørskov 2020 Charles M. Marcus 2019 Karl Anker Jørgensen 1989 Thor A. Bak 1977 Kai Arne Jensen 1974 Jens Lindhard 1970 Christian Møller 1970 Aage Bohr 1965 Bengt Strömgren 1959 Jens Anton Christiansen 1959 Paul Bergsøe 1952 Alex Langseth 1941 Kaj Linderstrøm-Lang 1928 Peder Oluf Pedersen 1928 Niels Bjerrum 1928 Johannes Nicolaus Brønsted 1924 Niels Bohr 1916 Martin Knudsen 1912 Christian Christiansen 1909 S.P.L. Sørensen Silver 2024 Henrik Stiesdal 2024 Andreas Mogensen 2022 Johan Gotthardt Olsen 2021 Samel Arslanagic 2020 Jens Ramskov 2019 Thomas Bolander 2016 Anja Cetti Andersen 2000 Jens Martin Knudsen 1999 Ove Nathan 1991 Niels Ove Lassen 1990 Jens J. Kjærgaard 1988 Niels Blædel 1980 K.G. Hansen Bronze 2024 Gregers Mogensen 2024 Louise Ibsen 2023 Nicolai Bogø Stabell 2023 Per Saxtorph Jørgensen 2022 Lisbeth Tavs Gregersen 2022 Peter Blirup 2021 Anja Skaar Jacobsen 2021 Hans Emil Sølyst Hjerl 2020 Lasse Seidelin Bendtsen 2020 Claus Rintza 2020 Stefan Emil Lemser Eychenne 2019 Michael Lentfer Jensen 2019 Jeannette Overgaard Tejlmann Madsen 2018 Ole Bakander 2017 Bjarning Grøn 2016 Martin Frøhling Jensen 2015 Henrik Parbo 2014 Pia Halkjær Gommesen 2013 Niels Christian Hartling 2013 Peter Arnborg Videsen 2012 Jannik Johansen (scientist) 2006 Finn Berg Rasmussen 2004 Erik Schou Jensen 2003 Ryan Holm 2001 Asger Høeg See also List of chemistry awards List of physics awards External links Selskabet for naturlærens udbredelse References Chemistry awards Danish science and technology awards Physics awards Science writing awards Science communication awards
H. C. Ørsted Medal
Technology
603
40,982,936
https://en.wikipedia.org/wiki/IWXXM
ICAO Meteorological Information Exchange Model (IWXXM) is a format for reporting weather information in XML/GML. IWXXM includes XML/GML-based representations for products standardized in International Civil Aviation Organization (ICAO) Annex 3, such as METAR/SPECI, TAF, SIGMET, AIRMET, Tropical Cyclone Advisory (TCA), Volcanic Ash Advisory (VAA), Space Weather Advisory and World Area Forecast System (WAFS) Significant Weather (SIGWX) Forecast. IWXXM products are used for operational exchanges of meteorological information for use in aviation. ICAO Annex 3 defines what IWXXM capability is required at different time frames. These capabilities can also be considered in the context of the ICAO SWIM concept (Doc 10039, Manual on System Wide Information Management (SWIM) Concept). Unlike the traditional forms of the ICAO Annex 3 products, IWXXM is not intended to be directly used by aircraft pilots. IWXXM is designed to be consumed by software acting on behalf of pilots, such as display software. History IWXXM Version 1 was introduced in October 2013, representing the METAR, SPECI, TAF and SIGMET formats as specified in ICAO Annex 3, Amendment 76. IWXXM became an optional format for the bilateral exchange of weather reports in November 2013 when the amendment became applicable. The seventeenth WMO Congress approved IWXXM 1.1 as a WMO standard data representation, included in the new Volume I.3 of WMO-No. 306, Manual on Codes. IWXXM Version 2 was issued in August 2016, introducing new products including AIRMET, Tropical Cyclone Advisory and Volcanic Ash Advisory, along with numerous improvements and bug fixes. Supported by the sixteenth session of the WMO Commission for Basic Systems in 2016, a slightly revised version, IWXXM 2.1, was approved by the sixty-ninth WMO Executive Council in May 2017. A patch (IWXXM Version 2.1.1) was released and approved in November 2017 to fix minor issues in validation and examples. IWXXM Version 3 was first made available as version 3.0RC1 in July 2018. Major changes include restructuring and simplification with the removal of the Observations and Measurements (O&M) model, the addition of the new Space Weather Advisory, other changes with regard to Amendment 78 to ICAO Annex 3, and numerous fixes and enhancements. IWXXM 3.0RC2 was released in October 2018 for further comments. Another release candidate, IWXXM 3.0RC3, was released in April 2019. Approval was received in October 2019, and IWXXM 3.0RC4 was released before publication of the finalized version on 7 November 2019. IWXXM Version 2021-2 was published in November 2021 to meet new requirements in Amendments 79 and 80 to ICAO Annex 3, including the introduction of the new WAFS SIGWX Forecast to be provided by World Area Forecast Centers (WAFCs) by 2023. A bug-fix version (IWXXM Version 2023-1) was published on 15 June 2023 to fix a few bugs in the Schematron rules and to introduce the missing icing phenomenon required in the WAFS SIGWX Forecast. A new version of IWXXM is being developed in response to the proposed changes in the upcoming Amendment 81 to ICAO Annex 3. Regulation IWXXM is regulated by WMO in association with ICAO. IWXXM is defined at the technical regulation level in WMO-No. 306 Volume I.3 to meet the regulatory requirements described in ICAO Annex 3. Another document, ICAO Doc 10003, provides a high-level description of the model.
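Because IWXXM reports are namespaced XML, consuming software typically dispatches on the root element's namespace and local name. The sketch below uses only the Python standard library; the namespace URI shown follows IWXXM 3.0 conventions but should be treated as an assumption here, since the published XML schemas are the authoritative reference for each version.

```python
# Minimal sketch of inspecting an IWXXM report with the Python standard library.
# The namespace URI below is assumed to follow IWXXM 3.0 conventions; consult
# the published schemas for the authoritative value for each IWXXM version.
import xml.etree.ElementTree as ET

IWXXM_30_NS = "http://icao.int/iwxxm/3.0"  # assumed IWXXM 3.0 namespace

def describe_report(xml_text: str) -> tuple[str, str]:
    """Return (namespace, report type) for the root element, e.g. 'METAR'."""
    root = ET.fromstring(xml_text)
    # ElementTree spells a qualified tag as '{namespace}localname'
    ns, _, local = root.tag.rpartition("}")
    return ns.lstrip("{"), local

sample = f'<iwxxm:METAR xmlns:iwxxm="{IWXXM_30_NS}"></iwxxm:METAR>'
print(describe_report(sample))  # -> ('http://icao.int/iwxxm/3.0', 'METAR')
```

Dispatching on the namespace as well as the local name matters because IWXXM versions are distinguished by namespace, so a consumer can route reports from different IWXXM versions to different handlers.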
Development The WMO Commission for Observation, Infrastructures and Information Systems (INFCOM) Task Team on Aviation Data, or TT-AvData (previously the Commission for Basic Systems (CBS) Task Team on Aviation XML, or TT-AvXML), and the ICAO Meteorological Panel (METP) Working Group on Meteorological Information Exchange (WG-MIE) are involved in the development of IWXXM. The e-mail group tt-avdata@groups.wmo.int was created to collect feedback from users (subscription required). A GitHub repository, https://github.com/wmo-im/iwxxm, has been created to engage community participation. Relationship with WXXM WXXM is governed by the FAA and EUROCONTROL for international products outside of those represented by ICAO or WMO. WXXM 1.0 was released in 2007. There have been no new releases since the publication of WXXM 3.0.0 in 2019. See also METAR/SPECI TAF SIGMET AIRMET Tropical Cyclone Advisory Volcanic Ash Advisory Space Weather Advisory WAFS Significant Weather Forecast References External links WMO IWXXM Schema Repository WMO Web-Accessible Codes Registry Aviation meteorology Weather forecasting Meteorological data and networks XML-based standards
IWXXM
Technology
1,061
27,373,279
https://en.wikipedia.org/wiki/Trabucco
The trabucco, known in some southern dialects as trabocco or travocc, is an ancient fishing machine typical of the Adriatic shores of Abruzzo (famously dubbed the Costa dei Trabocchi, or Trabocchi Coast) and of the Gargano coast, where trabucchi are preserved as historical monuments within the Gargano National Park. These distinctive structures are prevalent along the southern Adriatic coastline, particularly in the Italian provinces of Chieti, Campobasso, and Foggia. Trabucchi can also be found on select parts of the southern Tyrrhenian Sea coast. The trabocchi in literature The renowned Italian poet Gabriele d'Annunzio was among the first to describe these fishing machines in literature. In his work "Il Trionfo della Morte" he portrays a trabucco extending from the tip of a promontory, above a cluster of rocks, likening it to a colossal spider made entirely of planks and beams. He writes, "From the furthest point of the right promontory, over a group of rocks, a trabucco was extended, a strange fishing machine, all composed of boards and beams, resembling a colossal spider..." Moreover, d'Annunzio vividly captures the trabucco's skeletal form, resembling "the colossal skeleton of a prehistoric amphibian", bleaching white against the landscape. He describes the trabucco as a "great white skeletal structure protesting against the cliff...an erect and treacherous form in perpetual ambush, often contrasting the solitude's benignity." At hot middays and sunsets, it sometimes assumed formidable aspects: "…even in the most distant rocks were poles fixed to support the reinforcement ropes; countless small boards were nailed up the trunks to strengthen weak spots. The long struggle against the fury of the waves seemed inscribed on the great carcass through those knots, those nails, those devices. The machine seemed to live a life of its own, bearing the air and semblance of a living body". Construction features A trabucco is a massive construction built from wood, consisting of a platform anchored to the rock by large logs of Aleppo pine and jutting out into the sea. From this platform, two (or more) long arms called "antennae" stretch out, suspended some feet above the water, supporting a huge, narrow-meshed net (called a "trabocchetto"). The morphology of the Gargano coast and of Abruzzo determined the presence of two different types of trabucco: the Garganic trabucco is usually anchored to a rocky platform, longitudinally extended to the coastline, from which the antennae depart. The variant of Abruzzo and Molise, also called a bilancia, is often found on shallower coasts and is therefore characterized by a platform, transversal to the coast, which is connected to the shore by a narrow bridge made of wooden boards. A bilancia has just one winch, often electrically operated, even when the sea is perfectly calm. Abruzzo bilance also have a much smaller net than that of the Gargano trabucco. Another feature that differentiates the two types is the length and number of antennae: the antennae in Gargano are more extensive (double those of Abruzzo and Molise), and while the bilance of Termoli had more than two antennae, the Gargano trabucchi always have two or more. History According to some historians of Apulia, the trabucco was imported into the region by the Phoenicians. However, its earliest documented existence in Gargano dates back to the 18th century, during which the fishermen of the then sparsely populated Gargano devised an ingenious fishing technique that was not subject to the weather conditions of the area.
Trabucchi were built on the most prominent promontories, jutting their nets out to sea through a system of monumental wooden arms. The development of the trabucchi allowed fishing without being subject to sea conditions, exploiting the morphology of the rocky coast of Gargano. The trabucco is built from the wood of the Aleppo pine (the typical pine of Gargano, common throughout the south-western Adriatic) because this material is widely available in the region, easily worked, elastic, weatherproof and resistant to salt; the trabucco must resist the strong winds that usually blow in these areas. Some trabucchi have been rebuilt in recent years, thanks to public funds. Although they have lost the economic function they held in past centuries, when they were the main source of income for entire families of fishermen, trabucchi have risen to the role of cultural and architectural symbols and tourist attractions. Fishing system The fishing technique, which is quite effective, is carried out on sight. It consists of intercepting, with wide nets, the flows of fish moving along the ravines of the coast. Trabucchi are located where the sea is deep enough (at least 6 meters), and are built on rocky peaks generally oriented southeast or north in order to exploit the favorable marine current. The net is lowered into the water through a complex system of winches and, likewise, promptly pulled up to retrieve its catch. At least two men are entrusted with the tough task of operating the winches that maneuver the giant net. The small trabucchi of the Abruzzo and Molise coast are often electrically powered. The trabucco is managed by at least four fishermen, called trabuccolanti, who share the duties of watching for the fish and maneuvering. Distribution The trabucchi are a distinguishing feature of the coastal landscape of the lower Adriatic. Their presence is also attested on the lower Tyrrhenian Sea. Trabucchi are spread throughout the Trabocchi Coast in the Abruzzo region, where they are called "travocchi" in the dialects of Molise and Abruzzo, in the province of Campobasso (Termoli), in the province of Chieti south of Ortona, and on the Gargano coast, but they are most widely present in the area between Peschici and Vieste (where there isn't a promontory without one of these giant structures). The ancient trabucchi are protected by the National Park of Gargano, which adopted them as a sign of respect for the tradition and environment of the Gargano; as symbols of this civilization, they are now a favorite subject of artists and craftsmen. Costa dei Trabocchi The Trabocchi Coast (Costa dei Trabocchi) is a stretch of the coast of the province of Chieti that includes the towns situated between Francavilla al Mare and Vasto. The coast is full of quaint trabocchi, some of which have been converted into restaurants. See also Chinese fishing nets Trabucco's area of diffusion Gargano Peschici Rodi Garganico San Menaio Vico del Gargano Vieste Area of diffusion of trabocco or bilancia variants Ortona San Vito Chietino Termoli Vasto Notes References G. D'Annunzio, Il Trionfo della Morte, 1894 Paula Hardy, Abigail Hole, Olivia Pozzan, Puglia & Basilicata, 2008, p. 93 P. Barone, L. Marino, O. Pignatelli, I Trabocchi, Macchine da pesca della costa adriatica, CIERRE edizioni, 1999 M. Fasanella, G.
De Nittis, Il Trabucco, Vieste FG, Grafiche Laconeta, 1992 Pietro Cupido, Trabocchi, Traboccanti e Briganti, Ortona CH, Edizioni Menabò, Libreria D'Abruzzo; Teresa Maria Rauzino, Rita Lombardi, Raffaella Specchiulli, Ignazio Polignone, I trabucchi della costa garganica External links Apulia Fishing techniques and methods Marine architecture Coastal construction
Trabucco
Engineering
1,671
320,498
https://en.wikipedia.org/wiki/Computer-mediated%20communication
Computer-mediated communication (CMC) is defined as any human communication that occurs through the use of two or more electronic devices. While the term has traditionally referred to those communications that occur via computer-mediated formats (e.g., instant messaging, email, chat rooms, online forums, social network services), it has also been applied to other forms of text-based interaction such as text messaging. Research on CMC focuses largely on the social effects of different computer-supported communication technologies. Many recent studies involve Internet-based social networking supported by social software. Forms Computer-mediated communication can be broken down into two forms: synchronous and asynchronous. Synchronous computer-mediated communication refers to communication that occurs in real time. All parties are engaged in the communication simultaneously, though they are not necessarily all in the same location. Examples of synchronous communication are video chats and audio calls. On the other hand, asynchronous computer-mediated communication refers to communication that takes place when the parties are not communicating at the same time; in other words, the sender does not receive an immediate response from the receiver. Most forms of computer-mediated technology are asynchronous. Examples of asynchronous communication are text messages and emails. Scope Scholars from a variety of fields study phenomena that can be described under the umbrella term of computer-mediated communication (CMC) (see also Internet studies). For example, many take a sociopsychological approach to CMC by examining how humans use "computers" (or digital media) to manage interpersonal interaction, form impressions and maintain relationships. These studies have often focused on the differences between online and offline interactions, though contemporary research is moving towards the view that CMC should be studied as embedded in everyday life. Another branch of CMC research examines the use of paralinguistic features such as emoticons, pragmatic rules such as turn-taking, the sequential analysis and organization of talk, and the various sociolects, styles, registers or sets of terminology specific to these environments (see Leet). The study of language in these contexts is typically based on text-based forms of CMC, and is sometimes referred to as "computer-mediated discourse analysis". The way humans communicate in professional, social, and educational settings varies widely, depending not only upon the environment but also upon the method by which the communication occurs, which in this case is through computers or other information and communication technologies (ICTs). The study of communication to achieve collaboration (common work products) is termed computer-supported collaboration and includes only some of the concerns of other forms of CMC research. Popular forms of CMC include e-mail; video, audio or text chat (text conferencing including "instant messaging"); bulletin board systems; list-servs; and MMOs. These settings are changing rapidly with the development of new technologies. Weblogs (blogs) have also become popular, and the exchange of RSS data has better enabled users to each "become their own publisher". Characteristics Communication occurring within a computer-mediated format has an effect on many different aspects of an interaction.
Some of those that have received attention in the scholarly literature include impression formation, deception, group dynamics, disclosure reciprocity, disinhibition and especially relationship formation. CMC is examined and compared to other communication media through a number of aspects thought to be universal to all forms of communication, including (but not limited to) synchronicity, persistence or "recordability", and anonymity. The association of these aspects with different forms of communication varies widely. For example, instant messaging is intrinsically synchronous but not persistent, since one loses all the content when one closes the dialog box unless one has a message log set up or has manually copy-pasted the conversation. E-mail and message boards, on the other hand, are low in synchronicity, since response time varies, but high in persistence, since messages sent and received are saved. Properties that separate CMC from other media also include transience, its multimodal nature, and its relative lack of governing codes of conduct. CMC is able to overcome the physical and social limitations of other forms of communication and therefore allows the interaction of people who are not physically sharing the same space. Technology becomes a powerful tool when communication is defined as a learning process that needs a sender and a receiver. According to Nicholas Jankowski in his book The Contours of Multimedia, a third party, such as software, acts as an intermediary between sender and receiver. The sender interacts with this third party in order to send, and the receiver interacts with it as well, creating an additional interaction with the medium itself alongside the one initially intended between sender and receiver. The medium in which people choose to communicate influences the extent to which people disclose personal information. CMC is marked by higher levels of self-disclosure in conversation as opposed to face-to-face interactions. Self-disclosure is any verbal communication of personally relevant information, thought, and feeling which establishes and maintains interpersonal relationships. This is due in part to visual anonymity and the absence of nonverbal cues, which reduce concern for losing positive face. According to Walther's (1996) hyperpersonal communication model, computer-mediated communication is valuable in providing better communication and better first impressions. Moreover, Ramirez and Zhang (2007) indicate that computer-mediated communication allows more closeness and attraction between two individuals than face-to-face communication. Online impression management, self-disclosure, attentiveness, expressivity, composure and other skills contribute to competence in computer-mediated communication. In fact, there is considerable correspondence of skills in computer-mediated and face-to-face interaction, even though there is great diversity of online communication tools. Anonymity, and in part privacy and security, depend more on the context and the particular program being used or web page being visited. However, most researchers in the field acknowledge the importance of considering the psychological and social implications of these factors alongside the technical "limitations". Language learning CMC is widely discussed in language learning because CMC provides opportunities for language learners to practice their language. For example, Warschauer conducted several case studies on using email or discussion boards in different language classes.
Warschauer claimed that information and communications technology "bridge the historic divide between speech...and writing". Thus, the rise of the Internet has prompted considerable attention to reading and writing research in L2. In the learning process, students, especially children, need cognitive learning, but they also need social interaction, which serves their psychological needs. Although technology is a powerful aid to English language learners, it cannot by itself cover every aspect of the learning process. Benefits The nature of CMC means that it is easy for individuals to engage in communication with others regardless of time, location, or other spatial constraints to communication. Because CMC allows individuals to collaborate on projects that would otherwise be impossible due to such factors as geography, it has enhanced social interaction not only between individuals but also in working life. In addition, CMC can also be useful for allowing individuals who might be intimidated due to factors like personality or disabilities to participate in communication. By allowing an individual to communicate in a location of their choosing, CMC allows a person to engage in communication with minimal stress. Making an individual comfortable through CMC also plays a role in self-disclosure, which allows a communicative partner to open up more easily and be more expressive. When communicating through an electronic medium, individuals are less likely to engage in stereotyping and are less self-conscious about physical characteristics. The role that anonymity plays in online communication can also encourage some users to be less defensive and to form relationships with others more rapidly. Disadvantages While computer-mediated communication can be beneficial, technological mediation can also inhibit the communication process. Unlike face-to-face communication, nonverbal cues such as tone and physical gestures, which assist in conveying the message, are lost through computer-mediated communication. As a result, the message being communicated is more vulnerable to being misunderstood due to a wrong interpretation of tone or word meaning. Moreover, according to Dr. Sobel-Lojeski of Stony Brook University and Professor Westwell of Flinders University, the virtual distance that is fundamental to computer-mediated communication can create a psychological and emotional sense of detachment, which can contribute to feelings of societal isolation. Crime Cybersex trafficking and other cyber crimes involve computer-mediated communication. Cybercriminals can carry out these crimes from any location where they have a computer or tablet with a webcam or a smartphone with an internet connection. They also rely on social media networks, videoconferences, pornographic video sharing websites, dating pages, online chat rooms, apps, dark web sites, and other platforms. They use online payment systems and cryptocurrencies to hide their identities. Millions of reports of these crimes are sent to authorities annually. New laws and police procedures are needed to combat crimes involving CMC. See also Emotions in virtual communication Internet relationship Discourse community References Further reading External links Applied linguistics Information systems Internet culture
Computer-mediated communication
Technology
1,918
21,923,868
https://en.wikipedia.org/wiki/Oscillating%20gene
In molecular biology, an oscillating gene is a gene that is expressed in a rhythmic pattern or in periodic cycles. Oscillating genes are usually circadian and can be identified by periodic changes in the state of an organism. Circadian rhythms, controlled by oscillating genes, have a period of approximately 24 hours. For example, plant leaves opening and closing at different times of the day or the sleep-wake schedule of animals can all include circadian rhythms. Other periods are also possible, such as 29.5 days resulting from circalunar rhythms or 12.4 hours resulting from circatidal rhythms. Oscillating genes include both core clock component genes and output genes. A core clock component gene is a gene necessary to the pacemaker. In contrast, an output oscillating gene, such as the AVP gene, is rhythmic but not necessary to the pacemaker. History The first recorded observations of oscillating genes come from the marches of Alexander the Great in the fourth century B.C. At this time, one of Alexander's generals, Androsthenes, wrote that the tamarind tree would open its leaves during the day and close them at nightfall. Until 1729, the rhythms associated with oscillating genes were assumed to be "passive responses to a cyclic environment". In 1729, Jean-Jacques d'Ortous de Mairan demonstrated that the rhythms of a plant opening and closing its leaves continued even when the plant was placed somewhere that sunlight could not reach. This was one of the first indications that there was an active element to the oscillations. In 1923, Ingeborg Beling published her paper "Über das Zeitgedächtnis der Bienen" ("On the Time Memory of Bees"), which extended the study of such oscillations to animals, specifically bees. In 1971, Ronald Konopka and Seymour Benzer discovered that mutations of the PERIOD gene caused changes in the circadian rhythm of flies under constant conditions. They hypothesized that the mutation of the gene was affecting the basic oscillator mechanism. Paul Hardin, Jeffrey Hall, and Michael Rosbash demonstrated that relationship by discovering that within the PERIOD gene there was a feedback mechanism that controlled the oscillation. The mid-1990s saw an outpouring of discoveries, with CLOCK, CRY, and others being added to the growing list of oscillating genes. Molecular circadian mechanisms The primary molecular mechanism behind an oscillating gene is best described as a transcription/translation feedback loop. This loop contains both positive regulators, which increase gene expression, and negative regulators, which decrease gene expression. The fundamental elements of these loops are found across different phyla. In the mammalian circadian clock, for example, the transcription factors CLOCK and BMAL1 are the positive regulators. CLOCK and BMAL1 bind to the E-box of oscillating genes, such as Per1, Per2, and Per3 and Cry1 and Cry2, and upregulate their transcription. When the PERs and CRYs form a heterocomplex in the cytoplasm and re-enter the nucleus, they inhibit their own transcription. This means that over time the mRNA and protein levels of the PERs and CRYs, or of any other oscillating gene under this mechanism, will oscillate. There also exists a secondary feedback loop, or 'stabilizing loop', which regulates the cyclic expression of Bmal1. This loop is driven by two nuclear receptors, REV-ERB and ROR, which suppress and activate Bmal1 transcription, respectively.
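The delayed negative feedback described above can be made concrete with a toy model. The sketch below integrates a Goodwin-type oscillator, a standard minimal caricature of a transcription/translation feedback loop; the parameter values are purely illustrative and are not fitted to any real clock gene.

```python
# A Goodwin-type oscillator: mRNA (m) is translated into protein (p), which is
# converted into a nuclear repressor (r) that feeds back to inhibit
# transcription of m. All parameters are illustrative; with equal degradation
# rates this model needs a steep repression term (Hill coefficient above ~8)
# to sustain oscillations, so n = 10 is used here.
import numpy as np
from scipy.integrate import solve_ivp

def goodwin(t, y, n=10.0, k=1.0, d=0.2):
    m, p, r = y
    dm = 1.0 / (1.0 + (r / k) ** n) - d * m  # repressible transcription, decay
    dp = m - d * p                           # translation, decay
    dr = p - d * r                           # repressor activation, decay
    return [dm, dp, dr]

sol = solve_ivp(goodwin, (0.0, 300.0), [0.1, 0.2, 0.3], max_step=0.1,
                dense_output=True)
t = np.linspace(150.0, 300.0, 3000)          # sample after the transient
m = sol.sol(t)[0]
print(f"mRNA level oscillates between {m.min():.3f} and {m.max():.3f}")
```

Replacing the single repressor with the paired activator/repressor arms described above (CLOCK-BMAL1 driving Per and Cry, whose products feed back) changes the details but not the basic requirement: a delayed, sufficiently nonlinear negative feedback.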
In addition to these feedback loops, post-translational modifications also play a role in changing the characteristics of the circadian clock, such as its period. Without any type of feedback repression, the molecular clock would have a period of just a few hours. The casein kinase family members CK1ε and CK1δ were both found to be mammalian protein kinases involved in circadian regulation. Mutations in these kinases are associated with familial advanced sleep phase syndrome (FASPS). In general, phosphorylation is necessary for the degradation of the PERs via ubiquitin ligases. In contrast, phosphorylation of BMAL1 via CK2 is important for the accumulation of BMAL1. Examples The genes provided in this section are only a small number of the vast number of oscillating genes found in the world. These genes were selected because they were determined to be some of the most important genes in regulating the circadian rhythm of their respective classification. Mammalian genes Cry1 and Cry2 – Cryptochromes are a class of blue-light-sensitive flavoproteins found in plants and animals. Cry1 and Cry2 code for the proteins CRY1 and CRY2. In Drosophila, CRY binds to TIM, the product of a circadian gene that is a component of the transcription-translation negative feedback loop, in a light-dependent fashion and blocks its function. In mammals, CRY1 and CRY2 are light-independent and function to inhibit the CLOCK-BMAL1 dimer of the circadian clock, which regulates the cycling of Per1 transcription. Bmal1 – Bmal1, also known as ARNTL or Aryl hydrocarbon receptor nuclear translocator-like, encodes a protein that forms a heterodimer with the CLOCK protein. This heterodimer binds to E-box enhancers found in the promoter regions of many genes, such as Cry1 and Cry2 and Per1-3, thereby activating transcription. The resulting proteins translocate back into the nucleus and act as negative regulators by interacting with CLOCK and/or BMAL1, inhibiting transcription. Clock – Clock, also known as Circadian Locomotor Output Cycles Kaput, is a transcription factor in the circadian pacemaker of mammals. It affects both the persistence and the period of circadian rhythms through its interactions with the gene Bmal1. For more information, refer to Bmal1. Per genes – There are three different per genes, also known as Period genes (per1, per2, and per3), that are related by sequence in mice. Transcription levels for mPer1 increase in the late night before subjective dawn and are followed by increases in the levels of mPer3 and then of mPer2. mPer1 peaks at CT 4-6, mPer3 at CT 4 and 8, and mPer2 at CT 8. mPer1 is necessary for phase shifts induced by light or glutamate release. mPer2 and mPer3 are involved in resetting the circadian clock to environmental light cues. Drosophila genes Clock – The clock gene in Drosophila encodes the CLOCK protein, which forms a heterodimer with the protein CYCLE in order to control the main oscillating activity of the circadian clock. The heterodimer binds to the E-box promoter region of both per and tim, which causes activation of their respective gene expression. Once protein levels for both PER and TIM have reached a critical point, they too dimerize and interact with the CLOCK-CYCLE heterodimer to prevent it from binding to the E-box and activating transcription. This negative feedback loop is essential for the functioning and timing of the circadian clock. Cycle – The cycle gene encodes the CYCLE protein, which forms a heterodimer with the protein CLOCK.
The heterodimer creates a transcription-translation feedback loop that controls the levels of both PER and TIM. This feedback loop has been shown to be essential for both the functioning and the timing of the circadian clock in Drosophila. For more information, refer to Clock. Per – The per gene is a clock gene that encodes the PER protein in Drosophila. The protein levels and transcription rates of PER demonstrate robust circadian rhythms that peak around CT 16. PER forms a heterodimer with TIM to control the circadian rhythm. The heterodimer enters the nucleus in order to inhibit the CLOCK-CYCLE heterodimer, which acts as a transcriptional activator for per and tim. This results in an inhibition of the transcription of per and tim, thereby lowering the respective mRNA and protein levels. For more information, refer to Clock. Timeless – The tim gene encodes the TIM protein, which is critical in circadian regulation in Drosophila. Its protein levels and transcription rates demonstrate a circadian oscillation that peaks at around CT 16. TIM binds to the PER protein to create a heterodimer whose transcription-translation feedback loop controls the periodicity and phase of the circadian rhythms. For more information, refer to Period and Clock. Fungal genes Frq – The Frq gene, also known as the Frequency gene, encodes central components of an oscillatory loop within the circadian clock in Neurospora. In the oscillator's feedback loop, frq gives rise to transcripts that encode two forms of the FRQ protein. Both forms are required for robust rhythmicity throughout the organism. Rhythmic changes in the amount of frq transcript are essential for synchronous activity, and abrupt changes in frq levels reset the clock. Bacterial genes Kai genes – Found in Synechococcus elongatus, these genes are essential components of the cyanobacterial clock, the leading example of bacterial circadian rhythms. Kai proteins regulate genome-wide gene expression. The oscillation of phosphorylation and dephosphorylation of KaiC acts as the pacemaker of the circadian clock. Plant genes CCA1 – The CCA1 gene, also known as the Circadian and Clock Associated Gene 1, is a gene that is especially important in maintaining the rhythmicity of plant cellular oscillations. Overexpression results in the loss of rhythmic expression of clock-controlled genes (CCGs), loss of photoperiod control, and loss of rhythmicity in LHY expression. See the LHY gene below for more information. LHY – The LHY gene, also known as the Late Elongated Hypocotyl gene, is a gene found in plants that encodes components of mutually regulatory negative feedback loops with CCA1, in which overexpression of either results in dampening of both of their expression. This negative feedback loop affects the rhythmicity of multiple outputs, creating a daytime protein complex. Toc1 gene – Toc1, also known as the Timing of CAB Expression 1 gene, is an oscillating gene found in plants that is known to control the expression of CAB. It has been shown to affect the period of circadian rhythms through its repression of transcription factors. This was found through mutations of toc1 in plants, which shortened the period of CAB expression. See also Chronobiology References Molecular biology Chronobiology
Oscillating gene
Chemistry,Biology
2,234
417,665
https://en.wikipedia.org/wiki/Cubane
Cubane is a synthetic hydrocarbon compound with the formula C8H8. It consists of eight carbon atoms arranged at the corners of a cube, with one hydrogen atom attached to each carbon atom. A solid crystalline substance, cubane is one of the Platonic hydrocarbons and a member of the prismanes. It was first synthesized in 1964 by Philip Eaton and Thomas Cole. Before this work, Eaton believed that cubane would be impossible to synthesize due to the "required 90 degree bond angles". The cubic shape requires the carbon atoms to adopt an unusually sharp 90° bonding angle, which would be highly strained as compared to the 109.45° angle of a tetrahedral carbon. Once formed, cubane is quite kinetically stable, due to a lack of readily available decomposition paths. It is the simplest hydrocarbon with octahedral symmetry. Having high potential energy and kinetic stability makes cubane and its derivative compounds useful for controlled energy storage. For example, octanitrocubane and heptanitrocubane have been studied as high-performance explosives. These compounds also typically have a very high density for hydrocarbon molecules. The resulting high energy density means a large amount of energy can be stored in a comparatively small amount of space, an important consideration for applications in fuel storage and energy transport. Furthermore, their geometry and stability make them suitable isosteres for benzene rings. Synthesis The classic 1964 synthesis starts with the conversion of 2-cyclopentenone to 2-bromocyclopentadienone: allylic bromination with N-bromosuccinimide in carbon tetrachloride, followed by addition of molecular bromine to the alkene, gives a 2,3,4-tribromocyclopentanone. Treating this compound with diethylamine in diethyl ether causes elimination of two equivalents of hydrogen bromide to give the diene product. The construction of the eight-carbon cubane framework begins when 2-bromocyclopentadienone undergoes a spontaneous Diels-Alder dimerization. One ketal of the endo isomer is subsequently selectively deprotected with aqueous hydrochloric acid to 3. In the next step, the endo isomer 3 (with both alkene groups in close proximity) forms the cage-like isomer 4 in a photochemical [2+2] cycloaddition. The bromoketone group is converted to the ring-contracted carboxylic acid 5 in a Favorskii rearrangement with potassium hydroxide. Next, the thermal decarboxylation takes place through the acid chloride (with thionyl chloride) and the tert-butyl perester 6 (with tert-butyl hydroperoxide and pyridine) to 7; afterward, the acetal is once more removed in 8. A second Favorskii rearrangement gives 9, and finally another decarboxylation gives, via 10, cubane (11). A more approachable laboratory synthesis of disubstituted cubane involves bromination of the ethylene ketal of cyclopentanone to give a tribromocyclopentanone derivative. Subsequent steps involve dehydrobromination, Diels-Alder dimerization, etc. The resulting cubane-1,4-dicarboxylic acid is used to synthesize other substituted cubanes. Cubane itself can be obtained nearly quantitatively by photochemical decarboxylation of the thiohydroxamate ester (the Barton decarboxylation). Derivatives The synthesis of the octaphenyl derivative from tetraphenylcyclobutadiene nickel bromide by Freedman in 1962 pre-dates that of the parent compound. It is a sparingly soluble colourless compound that melts at 425–427 °C. A hypercubane, with a hypercube-like structure, was predicted to exist in a 2014 publication.
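Cubane's cage connectivity, as described above, can be checked programmatically. The sketch below assumes the open-source cheminformatics toolkit RDKit is installed; the SMILES string is one valid encoding of the cube graph (eight CH carbons, each bonded to three neighbours) and is supplied here for illustration.

```python
# A sanity check of cubane's cage connectivity with RDKit (assumed installed:
# pip install rdkit). The SMILES below encodes the cube graph of cubane.
from rdkit import Chem
from rdkit.Chem import rdMolDescriptors

cubane = Chem.MolFromSmiles("C12C3C4C1C5C4C3C25")
assert cubane is not None, "SMILES failed to parse"

# Molecular formula should be C8H8.
print(rdMolDescriptors.CalcMolFormula(cubane))  # -> C8H8

# Each carbon bonds to three other carbons and carries one hydrogen.
for atom in cubane.GetAtoms():
    assert atom.GetDegree() == 3 and atom.GetTotalNumHs() == 1

# The smallest set of smallest rings contains 5 four-membered rings
# (12 bonds - 8 atoms + 1), even though the cube has six square faces.
print(rdMolDescriptors.CalcNumRings(cubane))    # -> 5
```

The mismatch between the ring count and the six faces of the cube is a known quirk of smallest-set-of-smallest-rings definitions for cage molecules, since one face is always expressible as a combination of the others.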
Two isomers of cubene have been synthesized, and a third analyzed computationally. The alkene in the ortho isomer is exceptionally reactive due to its pyramidalized geometry. At the time of its synthesis, this was the most pyramidalized alkene to have been made. The meta isomer is even less stable, and the para isomer probably exists only as a diradical rather than with an actual diagonal bond. In 2022, both heptafluorocubane and octafluorocubane were synthesized. Octafluorocubane is of theoretical interest because of its unusual electronic structure, which is indicated by its susceptibility to undergo reduction to a detectable anion, with a free electron trapped inside the cube, in effect making it the world's smallest box. Cubylcubanes and oligocubanes Cubene (1,2-dehydrocubane) and 1,4-cubanediyl (1,4-dehydrocubane) are enormously strained compounds which both undergo nucleophilic addition very rapidly, and this has enabled chemists to synthesize cubylcubane. X-ray diffraction structure solution has shown that the central cubylcubane bond is exceedingly short (1.458 Å), much shorter than the typical C-C single bond (1.578 Å). This is attributed to the fact that the exocyclic orbitals of cubane are s-rich and close to the nucleus. Chemists at the University of Chicago extended and modified the sequence in a way that permits the preparation of a host of [n]cubylcubane oligomers. The [n]cubylcubanes are rigid molecular rods with, at the time, the particular promise of making liquid crystals with exceptional UV transparency. As the number of linked cubane units increases, the solubility of the [n]cubylcubanes plunges; as a result, only limited chain lengths (up to 40 units) have been synthesized in solution. The skeleton of the [n]cubylcubanes is still composed of enormously strained carbon cubes, which therefore limits their stability. In contrast, researchers at Penn State University showed that poly-cubane synthesized by solid-state reaction is 100% sp3 carbon bonded with a tetrahedral angle (109.5°) and exhibits exceptional optical properties (high refractive index). Reactions Cuneane may be produced from cubane by a metal-ion-catalyzed σ-bond rearrangement. With a rhodium catalyst, it first forms syn-tricyclooctadiene, which can thermally decompose to cyclooctatetraene at 50–60 °C. See also Basketane Tetrahedrane Platonic hydrocarbon References External links Eaton's cubane synthesis at SynArchive.com Tsanaktsidis's cubane synthesis at SynArchive.com Cubane chemistry at Imperial College London Polycyclic nonaromatic hydrocarbons Molecular geometry Theoretical chemistry Cyclobutanes Substances discovered in the 1960s Pentacyclic compounds Cubes
Cubane
Physics,Chemistry
1,487
24,392,848
https://en.wikipedia.org/wiki/Coelenteramide
Coelenteramide is the oxidized product, or oxyluciferin, of the bioluminescent reactions in many marine organisms that use coelenterazine. It was first isolated as a blue fluorescent protein from Aequorea victoria after the animals were stimulated to emit light. Under basic conditions, the compound will break down further into coelenteramine and 4-hydroxyphenylacetic acid. It is an aminopyrazine. References External links Bioluminescence Aminopyrazines Carboxamides
Coelenteramide
Chemistry,Biology
113
22,358,951
https://en.wikipedia.org/wiki/Cyptotrama%20asprata
Cyptotrama asprata (alternatively spelled aspratum), commonly known as the golden-scruffy collybia or spiny woodknight, is a saprobic species of mushroom in the family Physalacriaceae. Widely distributed in tropical regions of the world, it is characterized by the bright orange to yellow cap that in young specimens is covered with tufts of fibrils resembling small spikes. This fungus has had a varied taxonomic history, having been placed in fourteen genera before finally settling in Cyptotrama. This species is differentiated from several other similar members of the genus Cyptotrama by variations in cap color and in spore size and shape. History This species was first described from Ceylon by the English naturalist Miles Joseph Berkeley in 1847; soon after (1852), specimens were collected from South Carolina, USA. Later, the fungus was described under a variety of names: Lentinus chrysopeplus from Cuba; Agaricus sabriusculus and Agaricus lacunosa from New York; Collybia lacunosa from Michigan; and Omphalia scabriuscula in Connecticut. As the Canadian mycologists Redhead and Ginns explain in a 1980 article on the species, since its original 1847 description, C. asprata has been given 28 names and placed in 14 different genera. Description The cap is in diameter, convex to cushion-shaped. The cap surface is dry, and younger specimens are covered with characteristic spikes; as the spikes break up with age, they tend to look more hairy or woolly. Older specimens typically have the surface features worn off. The cap margin tends to be rolled inwards when young, gradually becoming straight with maturity. The color of the cap is bright or pale yellow, increasing in intensity towards the center of the cap. C. asprata has a web-like ring that soon disappears. The gills, pale yellow to white in color, are distantly spaced and have an adnate (squarely attached) or short decurrent (running down the length) attachment to the stem; they feel greasy when dried and crushed. The stem is long by thick at the stem apex; the stem is slightly thicker towards the base, and may be covered with hyphae that appear woolly (flocculose) or hairy (fibrillose). The surface of the stem may also be scaly – especially towards the base – or it may be covered with very small particles (granular). The flesh of this mushroom is white or pale yellow, with no distinctive taste or odor. The spore print is white. It is considered inedible. Microscopic features Spores are thin-walled, smooth, and ellipsoidal or oval in shape. Viewed with a microscope, they appear translucent (hyaline), and stain red or blue with Melzer's reagent (inamyloid). Their dimensions are typically 7–10 by 5–7 μm; the spores contain a single large oil droplet. The spore-bearing cells, the basidia, are club-shaped, two- to four-spored, and 25–30 by 5–7 μm. The presence of sterile cells called pleurocystidia (large cells found on the gill face in some mushrooms) is uncommon; specimens may contain few or abundant cheilocystidia (large sterile cells found on the gill edge) that are club-shaped, thin-walled and 39–87.5 by 8.5–16 μm in size. Habitat and distribution Cyptotrama asprata is a saprobic fungus, and grows on the decaying wood of deciduous and coniferous trees. Host species include white fir (Abies concolor), sugar maple (Acer saccharum) and other maple (Acer) species, grey alder (Alnus oblongifolia), beech (Fagus) species, spruce (Picea) species, ponderosa pine (Pinus ponderosa) and other pine (Pinus) species, poplar (Populus) and oak (Quercus) species.
In temperate North America, specimens are typically collected from July through September. The species has a pantropical distribution and is widely distributed in tropical regions of the world. It has been collected from Australia, southeastern Canada, China, Costa Rica, India, Hawaii, New Zealand, Japan, and the Russian Far East. It is absent from Europe and northwestern North America. Similar species Many other members of the genus Cyptotrama are similar in appearance and differ from C. asprata by only one or two readily observable features. For example, C. granulosa is bright yellowish-brown (rather than bright or pale yellow as in C. asprata); C. lachnocephala is ochre-colored; C. deseynesiana is cream-colored with brown scales; C. verruculosa has a "copper-rust-brown" cap; C. costesii has olive-colored pigments. Species may also be distinguished by differences in spore size and shape, although a considerable size range has been noted for C. asprata spores. References Physalacriaceae Fungi described in 1847 Fungi of North America Fungi of Central America Fungi native to Australia Fungi of New Zealand Inedible fungi Taxa named by Miles Joseph Berkeley Fungus species
Cyptotrama asprata
Biology
1,105
61,523,241
https://en.wikipedia.org/wiki/Tomka%20gas%20test%20site
Tomka gas test site was a secret chemical weapons testing facility near a place codenamed Volsk-18 (Wolsk in the German literature), 20 km from Volsk, now Shikhany, Saratov Oblast, Russia, created within the framework of German-Soviet military cooperation to circumvent the demilitarization provisions of the post-World War I Treaty of Versailles. It was co-directed by Yakov Moiseevich Fishman (head of the military chemical directorate of the Red Army) and the German chemists Alexander von Grundherr and Ludwig von Sicherer. It operated (under an agreement signed by fictitious joint-stock companies) during 1926-1933. After 1933 the area was used by the Red Army and expanded, under the name "Volsk-18" or "Schichany-2", into Russia's most important center for the development of chemical warfare agents and of protective measures against NBC weapons. Another chemical site was established near the settlement of Ukhtomsky, Moscow Region. See also Kama tank school Lipetsk fighter-pilot school References Reichswehr Military history of the Soviet Union Military history of Germany 1926 establishments in the Soviet Union Secret military programs Germany–Soviet Union relations Military education and training in the Soviet Union Chemical warfare facilities Soviet chemical weapons program
Tomka gas test site
Chemistry,Engineering
309
6,100,522
https://en.wikipedia.org/wiki/Dirichlet%20density
In mathematics, the Dirichlet density (or analytic density) of a set of primes, named after Peter Gustav Lejeune Dirichlet, is a measure of the size of the set that is easier to use than the natural density. Definition If A is a subset of the prime numbers, the Dirichlet density of A is the limit $$\delta(A) = \lim_{s \to 1^{+}} \frac{\sum_{p \in A} p^{-s}}{\sum_{p} p^{-s}},$$ if it exists. Note that since $\sum_{p} p^{-s} \sim \log\frac{1}{s-1}$ as $s \to 1^{+}$ (see Prime zeta function), this is also equal to $$\lim_{s \to 1^{+}} \frac{\sum_{p \in A} p^{-s}}{\log\frac{1}{s-1}}.$$ This expression is usually the order of the "pole" of $\prod_{p \in A} \frac{1}{1 - p^{-s}}$ at s = 1 (though in general it is not really a pole, as it has non-integral order), at least if this function is a holomorphic function times a (real) power of s − 1 near s = 1. For example, if A is the set of all primes, the product is the Riemann zeta function, which has a pole of order 1 at s = 1, so the set of all primes has Dirichlet density 1. More generally, one can define the Dirichlet density of a sequence of primes (or prime powers), possibly with repetitions, in the same way. Properties If a subset of primes A has a natural density, given by the limit of (number of elements of A less than N)/(number of primes less than N), then it also has a Dirichlet density, and the two densities are the same. However, it is usually easier to show that a set of primes has a Dirichlet density, and this is good enough for many purposes. For example, in proving Dirichlet's theorem on arithmetic progressions, it is easy to show that the set of primes in an arithmetic progression a + nb (for a, b coprime) has Dirichlet density 1/φ(b), which is enough to show that there are infinitely many such primes, but harder to show that this is the natural density. Roughly speaking, proving that some set of primes has a non-zero Dirichlet density usually involves showing that certain L-functions do not vanish at the point s = 1, while showing that they have a natural density involves showing that the L-functions have no zeros on the line Re(s) = 1. In practice, if some "naturally occurring" set of primes has a Dirichlet density, then it also has a natural density, but it is possible to find artificial counterexamples: for example, the set of primes whose first decimal digit is 1 has no natural density, but has Dirichlet density log(2)/log(10). See also Natural density Notes References J.-P. Serre, A Course in Arithmetic, chapter VI, section 4. Analytic number theory
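As a numerical illustration of the definition (not a proof), one can truncate both sums at a finite bound and watch the ratio as s decreases toward 1. The sketch below uses SymPy's prime generator (assumed available) and the set of primes congruent to 1 mod 4, whose Dirichlet density is 1/φ(4) = 1/2 by the argument above.

```python
# Numerically illustrate the Dirichlet density of A = {primes p : p = 1 (mod 4)}.
# Both sums are truncated at a finite bound, so this only suggests the limit;
# the true density is 1/phi(4) = 1/2.
from sympy import primerange

PRIMES = list(primerange(2, 2_000_000))

def truncated_ratio(s: float) -> float:
    num = sum(p ** -s for p in PRIMES if p % 4 == 1)
    den = sum(p ** -s for p in PRIMES)
    return num / den

for s in (1.5, 1.2, 1.1, 1.05):
    print(f"s = {s:<5} ratio = {truncated_ratio(s):.4f}")
# The ratio drifts toward 0.5 as s -> 1+; pushing s closer to 1 requires a
# larger prime bound, since the truncation error grows as the sums diverge.
```
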
Dirichlet density
Mathematics
572
29,586,267
https://en.wikipedia.org/wiki/Origin%20and%20function%20of%20meiosis
The origin and function of meiosis are currently not well understood scientifically; understanding them would provide fundamental insight into the evolution of sexual reproduction in eukaryotes. There is no current consensus among biologists on the questions of how sex in eukaryotes arose in evolution, what basic function sexual reproduction serves, and why it is maintained, given the basic two-fold cost of sex. It is clear that sexual reproduction evolved over 1.2 billion years ago, and that almost all species which are descendants of the original sexually reproducing species are still sexual reproducers, including plants, fungi, and animals. Meiosis is a key event of the sexual cycle in eukaryotes. It is the stage of the life cycle when a cell gives rise to haploid cells (gametes), each having half as many chromosomes as the parental cell. Two such haploid gametes, ordinarily arising from different individual organisms, fuse by the process of fertilization, thus completing the sexual cycle. Meiosis is ubiquitous among eukaryotes. It occurs in single-celled organisms such as yeast, as well as in multicellular organisms, such as humans. Eukaryotes arose from prokaryotes more than 2.2 billion years ago, and the earliest eukaryotes were likely single-celled organisms. To understand sex in eukaryotes, it is necessary to understand (1) how meiosis arose in single-celled eukaryotes, and (2) the function of meiosis. Origin of meiosis There are two conflicting theories on how meiosis arose. One is that meiosis evolved from prokaryotic sex (bacterial recombination) as eukaryotes evolved from prokaryotes. The other is that meiosis arose from mitosis. From prokaryotic sex In prokaryotic sex, DNA from one prokaryote is taken up by another prokaryote and its information integrated into the DNA of the recipient prokaryote. In extant prokaryotes, the donor DNA can be transferred either by transformation or by conjugation. Transformation, in which DNA from one prokaryote is released into the surrounding medium and then taken up by another prokaryotic cell, may have been the earliest form of sexual interaction. One theory on how meiosis arose is that it evolved from transformation. According to this view, the evolutionary transition from prokaryotic sex to eukaryotic sex was continuous. Transformation, like meiosis, is a complex process requiring the function of numerous gene products. A key similarity between prokaryotic sex and eukaryotic sex is that DNA originating from two different individuals (parents) joins up so that homologous sequences are aligned with each other, and this is followed by exchange of genetic information (a process called genetic recombination). After the new recombinant chromosome is formed, it is passed on to progeny. When genetic recombination occurs between DNA molecules originating from different parents, the recombination process is catalyzed in prokaryotes and eukaryotes by enzymes that have similar functions and that are evolutionarily related. One of the most important enzymes catalyzing this process in bacteria is referred to as RecA, and this enzyme has two functionally similar counterparts that act in eukaryotic meiosis, RAD51 and DMC1. Support for the theory that meiosis arose from prokaryotic transformation comes from the increasing evidence that early diverging lineages of eukaryotes have the core genes for meiosis. This implies that the precursor to meiosis was already present in the prokaryotic ancestor of eukaryotes.
For instance, the common intestinal parasite Giardia intestinalis, a simple eukaryotic protozoan, was until recently thought to be descended from an early diverging eukaryotic lineage that lacked sex. However, it has since been shown that G. intestinalis contains within its genome a core set of genes that function in meiosis, including five genes that function only in meiosis. In addition, G. intestinalis was recently found to undergo a specialized, sex-like process involving meiosis gene homologs. This evidence, and other similar examples, suggests that a primitive form of meiosis was present in the common ancestor of all eukaryotes, an ancestor that arose from an antecedent prokaryote. From mitosis Mitosis is the normal process of cell division in eukaryotes, duplicating chromosomes and segregating one of the two copies into each of the two daughter cells, in contrast with meiosis. The mitosis theory states that meiosis evolved from mitosis. According to this theory, early eukaryotes evolved mitosis first, became established, and only then did meiosis and sexual reproduction arise. Supporting this idea are observations of shared features, such as the meiotic spindles that draw chromosome sets into separate daughter cells upon cell division, as well as processes regulating cell division that employ the same or similar molecular machinery. Yet there is no compelling evidence for a period in the early evolution of eukaryotes during which meiosis and accompanying sexual capability did not yet exist. In addition, as noted by Wilkins and Holliday, there are four novel steps needed in meiosis that are not present in mitosis: (1) pairing of homologous chromosomes, (2) extensive recombination between homologs, (3) suppression of sister chromatid separation in the first meiotic division, and (4) avoidance of chromosome replication during the second meiotic division. Although the introduction of these steps seems complicated, Wilkins and Holliday argue that only one new step, homolog synapsis, was truly novel in the evolution of meiosis from mitosis. Meanwhile, two of the other novel features could have been simple modifications, and extensive recombination could have evolved later. Coevolution with mitosis If meiosis arose from prokaryotic transformation, then during the early evolution of eukaryotes, mitosis and meiosis could have evolved in parallel. Both processes use shared molecular components, where mitosis evolved from the molecular machinery used by prokaryotes for DNA replication and segregation, and meiosis evolved from the prokaryotic sexual process of transformation. However, meiosis also made use of the evolving molecular machinery for DNA replication and segregation. Function Stress-induced sex Abundant evidence indicates that facultative sexual eukaryotes tend to undergo sexual reproduction under stressful conditions. For instance, the budding yeast Saccharomyces cerevisiae (a single-celled fungus) reproduces mitotically (asexually) as diploid cells when nutrients are abundant, but switches to meiosis (sexual reproduction) under starvation conditions. The unicellular green alga Chlamydomonas reinhardtii grows as vegetative cells in nutrient-rich growth medium, but depletion of a source of nitrogen in the medium leads to gamete fusion, zygote formation and meiosis. The fission yeast Schizosaccharomyces pombe, treated with H2O2 to cause oxidative stress, substantially increases the proportion of cells which undergo meiosis.
The simple multicellular eukaryote Volvox carteri undergoes sex in response to oxidative stress or stress from heat shock. These examples, and others, suggest that, in simple single-celled and multicellular eukaryotes, meiosis is an adaptation for responding to stress. Prokaryotic sex also appears to be an adaptation to stress. For instance, transformation occurs near the end of logarithmic growth, when amino acids become limiting, in Bacillus subtilis, or in Haemophilus influenzae when cells are grown to the end of the logarithmic phase. In Streptococcus mutans and other streptococci, transformation is associated with high cell density and biofilm formation. In Streptococcus pneumoniae, transformation is induced by the DNA-damaging agent mitomycin C. These and other examples indicate that prokaryotic sex, like meiosis in simple eukaryotes, is an adaptation to stressful conditions. This observation suggests that the natural selection pressures maintaining meiosis in eukaryotes are similar to the selective pressures maintaining prokaryotic sex. This similarity suggests continuity, rather than a gap, in the evolution of sex from prokaryotes to eukaryotes. Stress is, however, a general concept. What is it specifically about stress that needs to be overcome by meiosis? And what is the specific benefit provided by meiosis that enhances survival under stressful conditions? DNA repair In one theory, meiosis is primarily an adaptation for repairing DNA damage. Environmental stresses often lead to oxidative stress within the cell, which is well known to cause DNA damage through the production of reactive forms of oxygen, known as reactive oxygen species (ROS). DNA damage, if not repaired, can kill a cell by blocking DNA replication or the transcription of essential genes. When only one strand of the DNA is damaged, the lost information (nucleotide sequence) can ordinarily be recovered by repair processes that remove the damaged sequence and fill the resulting gap by copying from the opposite intact strand of the double helix. However, ROS also cause a type of damage that is difficult to repair, referred to as double-strand damage. One common example of double-strand damage is the double-strand break. In this case, genetic information (nucleotide sequence) is lost from both strands in the damaged region, and the proper information can only be obtained from another intact chromosome homologous to the damaged chromosome. The process that the cell uses to accurately accomplish this type of repair is called recombinational repair. Meiosis is distinct from mitosis in that a central feature of meiosis is the alignment of homologous chromosomes followed by recombination between them. The two chromosomes which pair are referred to as non-sister chromosomes, since they did not arise simply from the replication of a parental chromosome. Recombination between non-sister chromosomes at meiosis is known to be a recombinational repair process that can repair double-strand breaks and other types of double-strand damage. In contrast, recombination between sister chromosomes cannot repair double-strand damage arising prior to the replication which produced them. Thus, on this view, the adaptive advantage of meiosis is that it facilitates recombinational repair of DNA damage that is otherwise difficult to repair, and that occurs as a result of stress, particularly oxidative stress. If left unrepaired, this damage would likely be lethal to gametes and inhibit the production of viable progeny.
Even in multicellular eukaryotes, such as humans, oxidative stress is a problem for cell survival. In this case, oxidative stress is a byproduct of oxidative cellular respiration occurring during metabolism in all cells. In humans, on average, about 50 DNA double-strand breaks occur per cell in each cell generation. Meiosis, which facilitates recombinational repair between non-sister chromosomes, can efficiently repair these prevalent damages in the DNA passed on to germ cells, and consequently prevent loss of fertility in humans. Thus, under the theory that meiosis arose from prokaryotic sex, recombinational repair is the selective advantage of meiosis in both single-celled eukaryotes and multicellular eukaryotes, such as humans. An argument against this hypothesis is that adequate repair mechanisms, including those involving recombination, already exist in prokaryotes. Prokaryotes do have DNA repair mechanisms that include recombinational repair, and the persistence of prokaryotic life in severe environments indicates the efficiency of these mechanisms in surviving the many DNA damages caused by the environment. This implies that an extra, costly repair process in the form of meiosis would be unnecessary. However, most of these mechanisms are not as accurate as meiosis and are possibly more mutagenic than the repair mechanism provided by meiosis. Primarily, they do not require a second homologous chromosome for the recombination that promotes a more extensive repair. Thus, despite the efficiency of recombinational repair involving sister chromatids, such repair still needed to be improved upon, and another type of repair was required. Moreover, because homologous recombinational repair in meiosis is more extensive than the repair that occurs in mitosis, meiosis as a repair mechanism can accurately remove damage arising at any stage of the cell cycle better than the mitotic repair mechanism can, and it was therefore naturally selected. In contrast, the sister chromatid in mitotic recombination could have been exposed to a similar amount of stress, and thus this type of recombination, instead of eliminating the damage, could actually spread it and decrease fitness. Prophase I arrest Female mammals and birds are born possessing all the oocytes needed for future ovulations, and these oocytes are arrested at the prophase I stage of meiosis. In humans, as an example, oocytes are formed between three and four months of gestation within the fetus and are therefore present at birth. During this prophase I arrested stage (dictyate), which may last for many years, four copies of the genome are present in the oocytes. The arrest of oocytes at the four-genome-copy stage has been proposed to provide the informational redundancy needed to repair damage in the DNA of the germline. The repair process used likely involves homologous recombinational repair. Prophase-arrested oocytes have a high capability for efficient repair of DNA damage. The adaptive function of this DNA repair capability during meiosis appears to be a key quality-control mechanism in the female germ line and a critical determinant of fertility. Genetic diversity Another hypothesis to explain the function of meiosis is that stress is a signal to the cell that the environment is becoming adverse. Under this new condition, it may be beneficial to produce progeny that differ from the parent in their genetic makeup. Among these varied progeny, some may be more adapted to the changed condition than their parents. 
Meiosis generates genetic variation in the diploid cell, in part by the exchange of genetic information between the pairs of chromosomes after they align (recombination). Thus, on this view, an advantage of meiosis is that it facilitates the generation of genomic diversity among progeny, allowing adaptation to adverse changes in the environment. However, in the presence of a fairly stable environment, individuals surviving to reproductive age have genomes that function well in their current environment. This raises the question of why such individuals should risk shuffling their genes with those of another individual, as occurs during meiotic recombination. Considerations such as this have led many investigators to question whether genetic diversity is a major adaptive advantage of sex. See also DNA repair Giardia Oxidative stress Asexual reproduction, ways to avoid the two-fold cost of sexual reproduction Apomixis Parthenogenesis References Mitosis DNA repair Meiosis
Origin and function of meiosis
Biology
3,171
48,520,204
https://en.wikipedia.org/wiki/Computational%20anatomy
Computational anatomy is an interdisciplinary field of biology focused on the quantitative investigation and modelling of the variability of anatomical shapes. It involves the development and application of mathematical, statistical and data-analytical methods for modelling and simulation of biological structures. The field is broadly defined and includes foundations in anatomy, applied mathematics and pure mathematics, machine learning, computational mechanics, computational science, biological imaging, neuroscience, physics, probability, and statistics; it also has strong connections with fluid mechanics and geometric mechanics. Additionally, it complements newer, interdisciplinary fields like bioinformatics and neuroinformatics in the sense that its interpretation uses metadata derived from the original sensor imaging modalities (of which magnetic resonance imaging is one example). It focuses on the anatomical structures being imaged, rather than the medical imaging devices. It is similar in spirit to the history of computational linguistics, a discipline that focuses on the linguistic structures rather than the sensor acting as the transmission and communication media. In computational anatomy, the diffeomorphism group is used to study different coordinate systems via coordinate transformations as generated via the Lagrangian and Eulerian velocities of flow in $\mathbb{R}^3$. The flows between coordinates in computational anatomy are constrained to be geodesic flows satisfying the principle of least action for the kinetic energy of the flow. The kinetic energy is defined through a Sobolev smoothness norm with strictly more than two generalized, square-integrable derivatives for each component of the flow velocity, which guarantees that the flows in $\mathbb{R}^3$ are diffeomorphisms. It also implies that the diffeomorphic shape momentum, taken pointwise satisfying the Euler–Lagrange equation for geodesics, is determined by its neighbors through spatial derivatives on the velocity field. This separates the discipline from the case of incompressible fluids, for which momentum is a pointwise function of velocity. Computational anatomy intersects the study of Riemannian manifolds and nonlinear global analysis, where groups of diffeomorphisms are the central focus. Emerging high-dimensional theories of shape are central to many studies in computational anatomy, as are questions emerging from the fledgling field of shape statistics. The metric structures in computational anatomy are related in spirit to morphometrics, with the distinction that computational anatomy focuses on an infinite-dimensional space of coordinate systems transformed by a diffeomorphism, hence the central use of the terminology diffeomorphometry, the metric-space study of coordinate systems via diffeomorphisms. Genesis At computational anatomy's heart is the comparison of shape by recognizing in one shape the other. This connects it to D'Arcy Wentworth Thompson's developments On Growth and Form, which has led to scientific explanations of morphogenesis, the process by which patterns are formed in biology. Albrecht Dürer's Four Books on Human Proportion were arguably the earliest works on computational anatomy. The efforts of Noam Chomsky in his pioneering of computational linguistics inspired the original formulation of computational anatomy as a generative model of shape and form from exemplars acted upon via transformations. 
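The geodesic-flow formulation just described can be made concrete. The following is a minimal sketch in standard LDDMM notation; the specific operator shown is an assumed illustrative choice satisfying the stated smoothness requirement, not a formula quoted from the article:

```latex
% Flow of coordinates generated by a time-dependent Eulerian velocity field v_t:
\dot{\varphi}_t = v_t \circ \varphi_t, \qquad \varphi_0 = \mathrm{id} .
% Geodesic flows minimize the kinetic energy of the flow, defined through a
% Sobolev smoothness norm on the velocity:
E(v) = \frac{1}{2}\int_0^1 \|v_t\|_V^2 \, dt ,
\qquad
\|v\|_V^2 = \int_{\mathbb{R}^3} Lv \cdot v \; dx ,
% where L has more than two generalized square-integrable derivatives per
% component, e.g. L = (\mathrm{id} - \alpha^2 \Delta)^3 (an assumed example).
```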
Due to the availability of dense 3D measurements via technologies such as magnetic resonance imaging (MRI), computational anatomy has emerged as a subfield of medical imaging and bioengineering for extracting anatomical coordinate systems at the morphome scale in 3D. The spirit of this discipline shares strong overlap with areas such as computer vision and the kinematics of rigid bodies, where objects are studied by analysing the groups responsible for the movement in question. Computational anatomy departs from computer vision and its focus on rigid motions, as the infinite-dimensional diffeomorphism group is central to the analysis of biological shapes. It is a branch of the image analysis and pattern theory school at Brown University pioneered by Ulf Grenander. In Grenander's general metric pattern theory, making spaces of patterns into a metric space is one of the fundamental operations, since being able to cluster and recognize anatomical configurations often requires a metric of close and far between shapes. The diffeomorphometry metric of computational anatomy measures how far two diffeomorphic changes of coordinates are from each other, which in turn induces a metric on the shapes and images indexed to them. The models of metric pattern theory, in particular group action on the orbit of shapes and forms, are a central tool in the formal definitions of computational anatomy. History Computational anatomy is the study of shape and form at the morphome or gross anatomy (millimeter, or morphology) scale, focusing on the study of sub-manifolds of points, curves, surfaces and subvolumes of human anatomy. An early modern computational neuro-anatomist was David Van Essen, who performed some of the early physical unfoldings of the human brain based on printing and cutting of a human cortex. Jean Talairach's publication of Talairach coordinates is an important milestone at the morphome scale, demonstrating the fundamental basis of local coordinate systems in studying neuroanatomy and therefore the clear link to the charts of differential geometry. Concurrently, virtual mapping in computational anatomy across high-resolution dense image coordinates was already happening in Ruzena Bajcsy's and Fred Bookstein's earliest developments based on computed axial tomography and magnetic resonance imagery. The earliest introduction of the use of flows of diffeomorphisms for the transformation of coordinate systems in image analysis and medical imaging was by Christensen, Joshi, Miller, and Rabbitt. The first formalization of computational anatomy as an orbit of exemplar templates under diffeomorphism group action was in the original lecture given by Grenander and Miller with that title in May 1997 at the 50th Anniversary of the Division of Applied Mathematics at Brown University, and a subsequent publication. This was the basis for the strong departure from much of the previous work on advanced methods for spatial normalization and image registration, which were historically built on notions of addition and basis expansion. The structure-preserving transformations central to the modern field of computational anatomy, homeomorphisms and diffeomorphisms, carry smooth submanifolds smoothly. They are generated via Lagrangian and Eulerian flows which satisfy a law of composition of functions forming the group property, but are not additive. The original model of computational anatomy was a triple: the group, the orbit of shapes and forms, and the probability laws which encode the variations of the objects in the orbit. 
The template, or collection of templates, consists of elements in the orbit of shapes. The Lagrangian and Hamiltonian formulations of the equations of motion of computational anatomy took off after 1997, with several pivotal meetings, including the 1997 Luminy meeting organized by the Azencott school at Ecole Normale Cachan on the "Mathematics of Shape Recognition" and the 1998 Trimestre at the Institut Henri Poincaré organized by David Mumford, "Questions Mathématiques en Traitement du Signal et de l'Image", which catalyzed the Hopkins-Brown-ENS Cachan groups and subsequent developments and connections of computational anatomy to developments in global analysis. The developments in computational anatomy included the establishment of the Sobolev smoothness conditions on the diffeomorphometry metric to ensure existence of solutions of variational problems in the space of diffeomorphisms, the derivation of the Euler–Lagrange equations characterizing geodesics through the group and associated conservation laws, the demonstration of the metric properties of the right-invariant metric, the demonstration that the Euler–Lagrange equations have a well-posed initial value problem with unique solutions for all time, and the first results on sectional curvatures for the diffeomorphometry metric in landmarked spaces. Following the Los Alamos meeting in 2002, Joshi's original large deformation singular landmark solutions in computational anatomy were connected to peaked solitons or peakons as solutions of the Camassa–Holm equation. Subsequently, connections were made between computational anatomy's Euler–Lagrange equations for momentum densities for the right-invariant metric satisfying Sobolev smoothness and Vladimir Arnold's characterization of the Euler equation for incompressible flows as describing geodesics in the group of volume-preserving diffeomorphisms. The first algorithms, generally termed LDDMM (large deformation diffeomorphic metric mapping), for computing connections between landmarks in volumes and spherical manifolds, curves, currents and surfaces, volumes, tensors, varifolds, and time-series, followed. These contributions of computational anatomy to the global analysis associated to the infinite-dimensional manifolds of subgroups of the diffeomorphism group are far from trivial. The original idea of doing differential geometry, curvature and geodesics on infinite-dimensional manifolds goes back to Bernhard Riemann's Habilitation (Ueber die Hypothesen, welche der Geometrie zu Grunde liegen); the key modern book laying the foundations of such ideas in global analysis is that of Michor. The applications within medical imaging of computational anatomy continued to flourish after two organized meetings at the Institute for Pure and Applied Mathematics conferences at the University of California, Los Angeles. Computational anatomy has been useful in creating accurate models of the atrophy of the human brain at the morphome scale, as well as cardiac templates, and in modeling biological systems. Since the late 1990s, computational anatomy has become an important part of developing emerging technologies for the field of medical imaging. Digital atlases are a fundamental part of modern medical-school education and of neuroimaging research at the morphome scale. Atlas-based methods and virtual textbooks which accommodate variations, as in deformable templates, are at the center of many neuro-image analysis platforms, including Freesurfer, FSL, MRIStudio and SPM. 
Diffeomorphic registration, introduced in the 1990s, is now an important player, with existing code bases organized around ANTS, DARTEL, DEMONS, LDDMM, StationaryLDDMM and FastLDDMM being examples of actively used computational codes for constructing correspondences between coordinate systems based on sparse features and dense images. Voxel-based morphometry is an important technology built on many of these principles. The deformable template orbit model of computational anatomy The model of human anatomy is a deformable template, an orbit of exemplars under group action. Deformable template models have been central to Grenander's metric pattern theory, accounting for typicality via templates, and accounting for variability via transformation of the template. An orbit under group action as the representation of the deformable template is a classic formulation from differential geometry. The space of shapes is acted upon by a group with a law of composition; the action of the group on shapes is defined so that the action of a composition of two group elements equals the successive actions of the elements on the shape. The orbit of the template becomes the space of all shapes, being homogeneous under the action of the elements of the group. The orbit model of computational anatomy is an abstract algebra, to be compared to linear algebra, since the groups act nonlinearly on the shapes. This is a generalization of the classical models of linear algebra, in which the sets of finite-dimensional vectors are replaced by the finite-dimensional anatomical submanifolds (points, curves, surfaces and volumes) and images of them, and the matrices of linear algebra are replaced by coordinate transformations based on linear and affine groups and the more general high-dimensional diffeomorphism groups. Shapes and forms The central objects are shapes or forms in computational anatomy: one set of examples being the 0-, 1-, 2- and 3-dimensional submanifolds of $\mathbb{R}^3$, a second set of examples being images generated via medical imaging such as magnetic resonance imaging (MRI) and functional magnetic resonance imaging. The 0-dimensional manifolds are landmarks or fiducial points; 1-dimensional manifolds are curves such as sulcal and gyral curves in the brain; 2-dimensional manifolds correspond to boundaries of substructures in anatomy such as the subcortical structures of the midbrain or the gyral surface of the neocortex; subvolumes correspond to subregions of the human body: the heart, the thalamus, the kidney. The landmarks are collections of points with no other structure, delineating important fiducials within human shape and form (see associated landmarked image). The sub-manifold shapes, such as surfaces, are collections of points modeled as parametrized by a local chart or immersion (see the figure showing shapes as mesh surfaces). The images, such as MR images or DTI images, are dense functions whose values are scalars, vectors, and matrices (see the figure showing a scalar image). Groups and group actions Groups and group actions are familiar to the engineering community through the universal popularization and standardization of linear algebra as a basic model for analyzing signals and systems in mechanical engineering, electrical engineering and applied mathematics. In linear algebra the matrix groups (matrices with inverses) are the central structure, with the group action defined by the usual multiplication of a matrix acting on a vector; the orbit in linear algebra is the set of vectors obtained by applying the invertible matrices to a fixed vector, which is a group action of the matrices through that orbit. 
The central group in computational anatomy, defined on volumes in $\mathbb{R}^3$, is the group of diffeomorphisms $\varphi$, which are mappings with three components, with the law of composition of functions $\varphi \circ \varphi'$ and with inverse $\varphi^{-1}$. Most popular are scalar images, with the action on the right via the inverse, $\varphi \cdot I = I \circ \varphi^{-1}$. For sub-manifolds parametrized by a chart or immersion, the diffeomorphic action is the flow of the positions of the manifold. Several group actions in computational anatomy have been defined. Lagrangian and Eulerian flows for generating diffeomorphisms For the study of rigid body kinematics, the low-dimensional matrix Lie groups have been the central focus. The matrix groups are low-dimensional mappings, which are diffeomorphisms that provide one-to-one correspondences between coordinate systems, with a smooth inverse. The matrix group of rotations and scales can be generated via closed-form finite-dimensional matrices which are solutions of simple ordinary differential equations, with solutions given by the matrix exponential. For the study of deformable shape in computational anatomy, a more general diffeomorphism group has been the group of choice, which is the infinite-dimensional analogue. The high-dimensional diffeomorphism groups used in computational anatomy are generated via smooth flows which satisfy the Lagrangian and Eulerian specification of the flow fields, satisfying the ordinary differential equation $\dot{\varphi}_t = v_t \circ \varphi_t$, $\varphi_0 = \mathrm{id}$, with the vector fields $v_t$ on $\mathbb{R}^3$ termed the Eulerian velocity of the particles at position $\varphi_t$ of the flow. The vector fields are functions in a function space, modelled as a smooth Hilbert space of high dimension, with the Jacobian of the flow a high-dimensional field in a function space as well, rather than a low-dimensional matrix as in the matrix groups. Flows were first introduced for large deformations in image matching; $v_t(\varphi_t(x))$ is the instantaneous velocity of particle $x$ at time $t$. The inverse $\varphi_t^{-1}$, required for the group, is defined via the Eulerian vector field, with the advective inverse flow $\frac{\partial}{\partial t}\varphi_t^{-1} = -(D\varphi_t^{-1})\, v_t$. The diffeomorphism group of computational anatomy The group of diffeomorphisms is very big. To ensure smooth flows of diffeomorphisms avoiding shock-like solutions for the inverse, the vector fields must be at least 1-time continuously differentiable in space. For diffeomorphisms on $\mathbb{R}^3$, vector fields are modelled as elements of a Hilbert space $(V, \|\cdot\|_V)$, using the Sobolev embedding theorems so that each element has strictly more than two generalized square-integrable spatial derivatives (thus $H^3$ is sufficient in three dimensions), yielding 1-time continuously differentiable functions. The diffeomorphism group consists of flows with vector fields absolutely integrable in the Sobolev norm, $\int_0^1 \|v_t\|_V \, dt < \infty$, where $\|v\|_V^2 = \int_{\mathbb{R}^3} Av \cdot v \, dx$, with $A$ the linear operator mapping $V$ to its dual space $V^*$, and with the integral calculated by integration by parts when $Av$ is a generalized function in the dual space. Diffeomorphometry: The metric space of shapes and forms The study of metrics on groups of diffeomorphisms and the study of metrics between manifolds and surfaces has been an area of significant investigation. The diffeomorphometry metric measures how close and far two shapes or images are from each other; the metric length is the shortest length of the flow which carries one coordinate system into the other. Oftentimes, the familiar Euclidean metric is not directly applicable, because the patterns of shapes and images do not form a vector space. In the Riemannian orbit model of computational anatomy, diffeomorphisms acting on the forms do not act linearly. 
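To make the Lagrangian flow concrete, here is a small numerical sketch, not from the article: the velocity field, grid and step sizes are illustrative assumptions. It integrates the flow ODE for a smooth, stationary planar velocity field with explicit Euler steps, and checks invertibility by flowing backwards.

```python
# Sketch: integrate the flow ODE d/dt phi_t(x) = v(phi_t(x)), phi_0 = id,
# for a smooth stationary velocity field on R^2. For a smooth, rapidly
# decaying v and small steps, the resulting map is a diffeomorphism.
import numpy as np

def velocity(x):
    """Illustrative (assumed) velocity field: a smooth, localized swirl."""
    c = x - np.array([0.5, 0.5])
    r2 = np.sum(c**2, axis=-1, keepdims=True)
    swirl = np.stack([-c[..., 1], c[..., 0]], axis=-1)
    return 0.5 * swirl * np.exp(-r2 / 0.05)

def integrate_flow(points, n_steps=200, sign=1.0):
    """Flow particle positions from t=0 to t=1 (Lagrangian trajectories).
    sign=-1 integrates the negated field, which is the inverse flow for a
    stationary velocity field."""
    dt = 1.0 / n_steps
    phi = points.astype(float)
    for _ in range(n_steps):
        phi = phi + sign * dt * velocity(phi)  # explicit Euler step
    return phi

# Usage: deform a grid of particles, then verify group-inverse behaviour.
xs = np.linspace(0.0, 1.0, 8)
grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)
forward = integrate_flow(grid)
recovered = integrate_flow(forward, sign=-1.0)
print("max inversion error:", np.abs(recovered - grid).max())  # small
```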
There are many ways to define metrics; for the sets associated to shapes, the Hausdorff metric is one example. The method used here induces the Riemannian metric on the orbit of shapes by defining it in terms of the metric length of the diffeomorphic coordinate-system transformations of the flows. Measuring the lengths of the geodesic flow between coordinate systems in the orbit of shapes is called diffeomorphometry. The right-invariant metric on diffeomorphisms The distance on the group of diffeomorphisms is defined as the infimum, over all flows carrying one diffeomorphism onto the other, of the integrated Sobolev norm of the Eulerian velocity; this is the right-invariant metric of diffeomorphometry, invariant to reparameterization of space, since the distance is unchanged when both elements are composed on the right with any common diffeomorphism. The metric on shapes and forms The distance on shapes and forms is induced from this: the distance between two elements of the orbit is the infimum of the right-invariant distance over all diffeomorphisms carrying one element onto the other, and the same construction applies to the images indexed to the orbit. The action integral for Hamilton's principle on diffeomorphic flows In classical mechanics the evolution of physical systems is described by solutions to the Euler–Lagrange equations associated to the least-action principle of Hamilton. This is a standard way, for example, of obtaining Newton's laws of motion of free particles. More generally, the Euler–Lagrange equations can be derived for systems of generalized coordinates. The Euler–Lagrange equation in computational anatomy describes the geodesic shortest-path flows between coordinate systems of the diffeomorphism metric. In computational anatomy the generalized coordinates are the flow of the diffeomorphism and its Lagrangian velocity, the two related via the Eulerian velocity. Hamilton's principle for generating the Euler–Lagrange equation requires the action integral on the Lagrangian; in computational anatomy the Lagrangian is given by the kinetic energy of the flow. Diffeomorphic or Eulerian shape momentum In computational anatomy, the quantity $Av$ was first called the Eulerian or diffeomorphic shape momentum, since, when integrated against the Eulerian velocity, it gives the energy density, and since a conservation law for diffeomorphic shape momentum holds. The operator $A$ is the generalized moment of inertia or inertial operator. The Euler–Lagrange equation on shape momentum for geodesics on the group of diffeomorphisms Classical calculation of the Euler–Lagrange equation from Hamilton's principle requires the perturbation of the Lagrangian on the vector field in the kinetic energy with respect to a first-order perturbation of the flow. This requires adjustment by the Lie bracket of vector fields, $\mathrm{ad}_v w = [v, w] = (Dv)w - (Dw)v$, which involves the Jacobian. Defining the adjoint operator $\mathrm{ad}_v^*$, the first-order variation gives the Eulerian shape momentum satisfying the generalized equation $\frac{d}{dt}Av_t + \mathrm{ad}_{v_t}^*(Av_t) = 0$, meaning that the equation holds when integrated against all smooth test vector fields in $V$. Computational anatomy is the study of the motions of submanifolds: points, curves, surfaces and volumes. The momenta associated to points, curves and surfaces are all singular, implying that the momentum is concentrated on subsets of $\mathbb{R}^3$ of Lebesgue measure zero. In such cases, the energy is still well defined, since although $Av$ is a generalized function, the vector fields are smooth and the Eulerian momentum is understood via its action on smooth functions. A perfect illustration of this is that, even when the momentum is a superposition of Dirac deltas, the velocity of the coordinates in the entire volume moves smoothly. The Euler–Lagrange equation on diffeomorphisms for generalized functions was first derived in the medical image analysis literature. In Riemannian Metric and Lie-Bracket Interpretation of the Euler–Lagrange Equation on Geodesics, derivations are provided in terms of the adjoint operator and the Lie bracket for the group of diffeomorphisms. 
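A hedged reconstruction of the two distances just described, in standard LDDMM notation; the symbols are assumed, chosen to match common usage rather than quoted from the article:

```latex
% Right-invariant distance on the diffeomorphism group:
d_{\mathrm{Diff}}(\psi, \varphi)
  = \inf \Big\{ \int_0^1 \|v_t\|_V \, dt \;:\;
      \dot{\varphi}_t = v_t \circ \varphi_t,\; \varphi_0 = \psi,\; \varphi_1 = \varphi \Big\},
% right invariance: d(psi, phi) = d(psi o chi, phi o chi) for any chi.
% Induced metric on the orbit of shapes m in M:
d_{\mathcal{M}}(m_0, m_1)
  = \inf \big\{ d_{\mathrm{Diff}}(\mathrm{id}, \varphi) \;:\; \varphi \cdot m_0 = m_1 \big\}.
```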
It has come to be called the EPDiff equation for diffeomorphisms, connecting to the Euler–Poincaré method, having been studied in the context of the inertial operator for incompressible, divergence-free fluids. Diffeomorphic shape momentum: a classical vector function For the case in which the momentum is a density, the Euler–Lagrange equation has a classical solution (see the reconstruction below). The Euler–Lagrange equation on diffeomorphisms, classically defined for momentum densities, first appeared in medical image analysis. Riemannian exponential (geodesic positioning) and Riemannian logarithm (geodesic coordinates) In medical imaging and computational anatomy, positioning and coordinatizing shapes are fundamental operations; the system for positioning anatomical coordinates and shapes built on the metric and the Euler–Lagrange equation is a geodesic positioning system, as first explicated by Miller, Trouvé and Younes. Solving the geodesic from the initial condition at the identity is termed the Riemannian exponential, a mapping from the tangent space at the identity to the group: given an initial vector field (equivalently, an initial diffeomorphic shape momentum for the classical equation, or a generalized momentum for the generalized equation), the exponential computes the flow onto the coordinates. The Riemannian logarithm is the inverse mapping at the identity, from a target diffeomorphism to the initial vector field that generates it; extended to the entire group, the two maps relate arbitrary group elements. These are inverses of each other for unique solutions of the logarithm; the first is called geodesic positioning, the latter geodesic coordinates (see exponential map, Riemannian geometry for the finite-dimensional version). The geodesic metric is a local flattening of the Riemannian coordinate system (see figure). Hamiltonian formulation of computational anatomy In computational anatomy the diffeomorphisms are used to push the coordinate systems, and the vector fields are used as the control within the anatomical orbit or morphological space. The model is that of a dynamical system, with the flow of coordinates as the state and the vector field as the control, the two related via the flow equation. The Hamiltonian view reparameterizes the momentum distribution in terms of the conjugate momentum or canonical momentum, introduced as a Lagrange multiplier constraining the Lagrangian velocity accordingly; this function is the extended Hamiltonian. The Pontryagin maximum principle gives the optimizing vector field which determines the geodesic flow, as well as the reduced Hamiltonian. The Lagrange multiplier, in its action as a linear form, has its own inner product of the canonical momentum acting on the velocity of the flow, which is dependent on the shape: for landmarks a sum, for surfaces a surface integral, and for volumes a volume integral with respect to Lebesgue measure on $\mathbb{R}^3$. In all cases the Green's kernels carry weights which are the canonical momentum, evolving according to an ordinary differential equation which corresponds to the Euler–Lagrange equation but is the geodesic reparameterization in canonical momentum. The optimizing vector field is given in terms of the canonical momentum through the Green's kernel, with the dynamics of the canonical momentum reparameterizing the vector field along the geodesic. Stationarity of the Hamiltonian and kinetic energy along the Euler–Lagrange equation Whereas the vector fields are extended across the entire background space of $\mathbb{R}^3$, the geodesic flows associated to the submanifolds have Eulerian shape momentum which evolves as a generalized function concentrated on the submanifolds. 
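A hedged reconstruction of the classical-density form referred to above, i.e. the EPDiff equation written for a momentum density; the notation is assumed, consistent with the EPDiff literature:

```latex
% EPDiff: the Euler-Lagrange geodesic equation when the shape momentum
% m_t = A v_t is a classical vector density on R^3:
\frac{\partial m_t}{\partial t}
  + (D m_t)\, v_t            % advection of the momentum by the flow
  + (D v_t)^{T} m_t          % stretching by the velocity gradient
  + (\nabla \cdot v_t)\, m_t % compression term
  = 0 .
```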
For landmarks, the geodesics have Eulerian shape momentum which is a superposition of delta distributions travelling with the finite number of particles; the diffeomorphic flow of coordinates has velocities in the range of the weighted Green's kernels. For surfaces, the momentum is a surface integral of delta distributions travelling with the surface. The geodesics connecting coordinate systems satisfy stationarity of the Lagrangian. The Hamiltonian is given by the extremum of the extended Hamiltonian along the path, equals the kinetic energy, and is stationary along the geodesic. Defining the geodesic velocity at the identity, the Hamiltonian remains constant along the geodesic. The stationarity of the Hamiltonian demonstrates the interpretation of the Lagrange multiplier as momentum; integrated against velocity it gives the energy density. The canonical momentum has many names. In optimal control, the flow is interpreted as the state, and the Lagrange multiplier is interpreted as the conjugate state, or conjugate momentum. The geodesic form of the Euler–Lagrange equation implies that specification of the vector field or Eulerian momentum at the identity, or specification of the canonical momentum, determines the flow. The metric on geodesic flows of landmarks, surfaces, and volumes within the orbit In computational anatomy the submanifolds are pointsets, curves, surfaces and subvolumes, which are the basic primitives. The geodesic flows between the submanifolds determine the distance and form the basic measuring and transporting tools of diffeomorphometry. At the identity, the geodesic has a vector field determined by the conjugate momentum and the Green's kernel of the inertial operator defining the Eulerian momentum. The metric distance between coordinate systems connected via the geodesic is determined by the induced distance between the identity and the group element. Conservation laws on diffeomorphic shape momentum for computational anatomy Given the least-action principle, there is a natural definition of momentum associated to generalized coordinates; the quantity acting against velocity gives energy. The field has studied two forms: the momentum associated to the Eulerian vector field, termed Eulerian diffeomorphic shape momentum, and the momentum associated to the initial coordinates or canonical coordinates, termed canonical diffeomorphic shape momentum. Each has a conservation law. The conservation of momentum goes hand in hand with the geodesic equations. In computational anatomy, $Av$ is the Eulerian momentum, since integrated against the Eulerian velocity it gives the energy density; the operator $A$ is the generalized moment of inertia or inertial operator which, acting on the Eulerian velocity, gives the momentum that is conserved along the geodesic. Conservation of Eulerian shape momentum follows from the Euler–Lagrange equation; conservation of canonical momentum follows from the Hamiltonian dynamics. Geodesic interpolation of information between coordinate systems via variational problems Construction of diffeomorphic correspondences between shapes calculates the initial vector field coordinates and the associated weights on the Green's kernels. These initial coordinates are determined by matching of shapes, called large deformation diffeomorphic metric mapping (LDDMM). LDDMM has been solved for landmarks with and without correspondence, and for dense image matching, curves, surfaces, dense vector and tensor imagery, and varifolds removing orientation. LDDMM calculates geodesic flows of the template onto target coordinates, adding to the action integral an endpoint matching condition measuring the correspondence of elements in the orbit under coordinate-system transformation. Existence of solutions was examined for image matching. 
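For the landmark case, the Green's-kernel structure of the momentum makes the geodesic equations finite-dimensional. Here is a hedged Python sketch of geodesic shooting for landmarks with a Gaussian kernel; the kernel choice, step count and parameter values are assumptions for illustration:

```python
# Geodesic shooting for N landmarks in R^d with a Gaussian kernel
# K(x, y) = exp(-|x - y|^2 / (2 sigma^2)) I. The Hamiltonian is
#   H(q, p) = 0.5 * sum_ij (p_i . p_j) K(q_i, q_j),
# and the geodesic equations are qdot_i = dH/dp_i, pdot_i = -dH/dq_i.
import numpy as np

SIGMA = 0.25  # assumed kernel width

def kernel(q):
    """Gram matrix K_ij = exp(-|q_i - q_j|^2 / (2 sigma^2))."""
    d2 = np.sum((q[:, None, :] - q[None, :, :])**2, axis=-1)
    return np.exp(-d2 / (2 * SIGMA**2))

def rhs(q, p):
    """Right-hand sides of the landmark Hamiltonian equations."""
    K = kernel(q)
    qdot = K @ p                                  # dq_i/dt = sum_j K_ij p_j
    pp = p @ p.T                                  # inner products (p_i . p_j)
    diff = q[:, None, :] - q[None, :, :]          # displacements q_i - q_j
    # dp_i/dt = (1/sigma^2) sum_j (p_i . p_j) K_ij (q_i - q_j)
    pdot = np.sum((pp * K)[:, :, None] * diff, axis=1) / SIGMA**2
    return qdot, pdot

def shoot(q0, p0, n_steps=100):
    """Integrate the geodesic from initial landmarks q0 and momentum p0."""
    q, p, dt = q0.copy(), p0.copy(), 1.0 / n_steps
    for _ in range(n_steps):
        qdot, pdot = rhs(q, p)   # simple Euler step; RK4 would be sharper
        q, p = q + dt * qdot, p + dt * pdot
    return q, p

# Usage: two landmarks pushed by opposite initial momenta.
q0 = np.array([[0.0, 0.0], [0.3, 0.0]])
p0 = np.array([[0.0, 1.0], [0.0, -1.0]])
q1, p1 = shoot(q0, p0)
print("deformed landmarks:\n", q1)
```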
The solution of the variational problem satisfies the Euler–Lagrange equation for the flow, together with a boundary condition at the matching endpoint. Matching based on minimizing kinetic energy action with endpoint condition Conservation of momentum extends the boundary condition at the endpoint to the rest of the path. The inexact matching problem with the endpoint matching term has several alternative forms. One of the key consequences of the stationarity of the Hamiltonian along the geodesic solution is that the integrated running cost reduces to the initial cost at t = 0: geodesics of the Euler–Lagrange equation are determined by their initial condition, and the running cost reduces to the initial cost determined by the initial momentum. Matching based on geodesic shooting The matching problem explicitly indexed to the initial condition is called shooting, which can also be reparameterized via the conjugate momentum. Dense image matching in computational anatomy Dense image matching has a long history, with the earliest efforts exploiting a small-deformation framework. Large deformations began in the early 1990s, with the first existence of solutions to the variational problem for flows of diffeomorphisms for dense image matching established in the late 1990s. Beg solved the problem via one of the earliest LDDMM algorithms, based on solving the variational matching problem, with the endpoint defined by the dense imagery, by taking variations with respect to the vector fields. Another solution for dense image matching reparameterizes the optimization problem in terms of the state, giving the solution in terms of the infinitesimal action defined by the advection equation. LDDMM dense image matching For Beg's LDDMM, denote the image by I, with the group acting on the right via the inverse flow. Viewing this as an optimal control problem, the state of the system is the diffeomorphic flow of coordinates, with the dynamics relating the control (the velocity field) to the state given by the flow equation. The endpoint matching condition gives the variational problem (see the reconstruction below). Beg's iterative LDDMM algorithm has fixed points which satisfy the necessary optimizer conditions. The iterative algorithm is given in Beg's LDDMM algorithm for dense image matching. Hamiltonian LDDMM in the reduced advected state Denote the image by I, with the advected image as the state and with the dynamics relating state and control given by the advective term of the transport equation. The endpoint gives the corresponding variational problem. Vialard's iterative Hamiltonian LDDMM has fixed points which satisfy the necessary optimizer conditions. Diffusion tensor image matching in computational anatomy Dense LDDMM tensor matching takes the images as 3x1 vectors and 3x3 tensors, solving the variational problem matching between coordinate systems based on the principal eigenvectors of the diffusion tensor MRI image (DTI), which consists of a 3x3 tensor at every voxel. Several of the group actions are defined based on the Frobenius matrix norm between square matrices. Shown in the accompanying figure is a DTI image illustrated via its color map, depicting the eigenvector orientations of the DTI matrix at each voxel, with color determined by the orientation of the directions. Denote the tensor image by its eigen-elements, the eigenvalues and eigenvectors. Coordinate-system transformation based on DTI imaging has exploited two actions, one based on the principal eigenvector and one based on the entire matrix. LDDMM matching based on the principal eigenvector of the diffusion tensor matrix takes the image as a unit vector field defined by the first eigenvector, with a group action that transports and reorients the vector field. LDDMM matching based on the entire tensor matrix has a group action defined on the transformed eigenvectors. The variational problem matching onto the principal eigenvector or the full matrix is described in LDDMM Tensor Image Matching. 
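The Beg endpoint-matching variational problem mentioned above has a well-known form; a hedged reconstruction, with notation assumed and consistent with the LDDMM literature, is:

```latex
% Beg's LDDMM energy for dense image matching: geodesic kinetic energy plus
% an L^2 endpoint matching term between the deformed template and the target:
E(v) \;=\; \int_0^1 \|v_t\|_V^2 \, dt
      \;+\; \frac{1}{\sigma^2}\,
      \big\| I_0 \circ \varphi_1^{-1} - I_1 \big\|_{L^2}^2 ,
% where phi_1 is the endpoint of the flow generated by v, and sigma^2 weights
% the data-attachment term against the smoothness of the deformation.
```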
High angular resolution diffusion image (HARDI) matching in computational anatomy High angular resolution diffusion imaging (HARDI) addresses the well-known limitation of DTI, namely that DTI can only reveal one dominant fiber orientation at each location. HARDI measures diffusion along uniformly distributed directions on the sphere and can characterize more complex fiber geometries. HARDI can be used to reconstruct an orientation distribution function (ODF) that characterizes the angular profile of the diffusion probability density function of water molecules. The ODF is a function defined on the unit sphere. Dense LDDMM ODF matching takes the HARDI data as an ODF at each voxel and solves the LDDMM variational problem in the space of ODFs. In the field of information geometry, the space of ODFs forms a Riemannian manifold with the Fisher–Rao metric. For the purpose of LDDMM ODF mapping, the square-root representation is chosen because it is one of the most efficient representations found to date, as the various Riemannian operations, such as geodesics, exponential maps, and logarithm maps, are available in closed form. In the following, the square-root ODF is taken to be non-negative, to ensure uniqueness, and of unit norm on the sphere. The variational problem for matching assumes that two ODF volumes can be generated from one to another via flows of diffeomorphisms, which are solutions of ordinary differential equations starting from the identity map. The action of the diffeomorphism on the template is defined at each point of the unit sphere and of the image domain, with the target indexed similarly. The group action of the diffeomorphism on the template involves the Jacobian of the affine-transformed ODF. This group action of diffeomorphisms on the ODF reorients the ODF and reflects changes in both the magnitude of the ODF and its sampling directions due to the affine transformation. It guarantees that the volume fraction of fibers oriented toward a small patch must remain the same after the patch is transformed. The LDDMM variational problem is defined through the Fisher–Rao distance, with the logarithm map between square-root ODFs given in closed form via the dot product between points on the sphere under the metric. This LDDMM-ODF mapping algorithm has been widely used to study brain white matter degeneration in aging, Alzheimer's disease, and vascular dementia. A brain white matter atlas generated based on ODFs has been constructed via Bayesian estimation, and regression analysis on ODFs has been developed in the ODF manifold space. Metamorphosis The principal mode of variation represented by the orbit model is change of coordinates. For settings in which pairs of images are not related by diffeomorphisms but have photometric variation or image variation not represented by the template, active appearance modelling has been introduced, originally by Edwards, Cootes and Taylor, and subsequently in 3D medical imaging. In the context of computational anatomy, in which metrics on the anatomical orbit have been studied, metamorphosis for modelling structures such as tumors and photometric changes which are not resident in the template was introduced for magnetic resonance image models, with many subsequent developments extending the metamorphosis framework. For image matching, the image metamorphosis framework enlarges the action so that the image intensity itself, and not only the coordinates, evolves along the flow. 
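The closed-form Riemannian operations mentioned for the square-root representation are simple to state: square-root ODFs lie on the unit sphere of a Hilbert space, where the Fisher–Rao geodesic distance reduces to an arc length. A hedged sketch follows, in which the discretization of the ODF on sample directions and the quadrature weights are assumptions:

```python
# Square-root ODF representation: psi = sqrt(p), with ||psi||_2 = 1, so psi
# lies on the unit Hilbert sphere and the Fisher-Rao geodesic distance is
# the arc length d(psi1, psi2) = arccos(<psi1, psi2>).
import numpy as np

def sqrt_odf(p, weights):
    """Square root of an ODF sampled on the sphere, normalized to unit norm.
    `weights` are quadrature weights for the sampled directions (assumed)."""
    psi = np.sqrt(np.maximum(p, 0.0))
    norm = np.sqrt(np.sum(weights * psi**2))
    return psi / norm

def fisher_rao_distance(psi1, psi2, weights):
    """Closed-form geodesic distance on the unit sphere of L^2(S^2)."""
    inner = np.clip(np.sum(weights * psi1 * psi2), -1.0, 1.0)
    return np.arccos(inner)

# Usage with a toy discretization: n sample directions, uniform weights.
n = 64
weights = np.full(n, 1.0 / n)
p1 = np.random.dirichlet(np.ones(n)) * n       # toy ODF samples
p2 = np.random.dirichlet(np.ones(n)) * n
psi1, psi2 = sqrt_odf(p1, weights), sqrt_odf(p2, weights)
print("Fisher-Rao distance:", fisher_rao_distance(psi1, psi2, weights))
```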
In this setting metamorphosis combines both the diffeomorphic coordinate-system transformation of computational anatomy and the early morphing technologies which only faded or modified the photometric or image intensity alone. The matching problem then takes a form with equality boundary conditions at the endpoints. Matching landmarks, curves, surfaces Transforming coordinate systems based on landmark point or fiducial marker features dates back to Bookstein's early work on small-deformation spline methods for interpolating correspondences defined by fiducial points to the two-dimensional or three-dimensional background space in which the fiducials are defined. Large-deformation landmark methods came to the fore in the late 1990s. The above figure depicts a series of landmarks associated with three brain structures: the amygdala, entorhinal cortex, and hippocampus. Matching geometrical objects like unlabelled point distributions, curves or surfaces is another common problem in computational anatomy. Even in the discrete setting where these are commonly given as vertices with meshes, there are no predetermined correspondences between points, as opposed to the situation of landmarks described above. From the theoretical point of view, while any submanifold can be parameterized in local charts, all reparametrizations of these charts give geometrically the same manifold. Therefore, early on in computational anatomy, investigators identified the necessity of parametrization-invariant representations. One indispensable requirement is that the endpoint matching term between two submanifolds is itself independent of their parametrizations. This can be achieved via concepts and methods borrowed from geometric measure theory, in particular currents and varifolds, which have been used extensively for curve and surface matching. Landmark or point matching with correspondence Denoting the landmarked shape with its endpoint condition, the variational problem becomes the minimization of the kinetic energy of the flow subject to the endpoint matching term on the landmarks. The geodesic Eulerian momentum is a generalized function supported on the landmarked set in the variational problem. The endpoint condition, with conservation, implies the initial momentum at the identity of the group. The iterative algorithm for large deformation diffeomorphic metric mapping for landmarks follows from these conditions. Measure matching: unregistered landmarks Glaunes and co-workers first introduced diffeomorphic matching of pointsets in the general setting of matching distributions. As opposed to landmarks, this includes in particular the situation of weighted point clouds with no predefined correspondences and possibly different cardinalities. The template and target discrete point clouds are represented as two weighted sums of Diracs living in the space of signed measures. The space is equipped with a Hilbert metric obtained from a real positive kernel, giving a kernel-induced norm (see the sketch below). The matching problem between a template and a target point cloud may then be formulated using this kernel metric for the endpoint matching term, with the template distribution transported by the deformation. Curve matching In the one-dimensional case, a curve in 3D can be represented by an embedding, and the group action of Diff becomes composition with the embedding. However, the correspondence between curves and embeddings is not one to one, as any reparametrization by a diffeomorphism of the interval [0,1] represents geometrically the same curve. In order to preserve this invariance in the endpoint matching term, several extensions of the previous 0-dimensional measure matching approach can be considered. 
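The kernel metric on signed measures mentioned above can be written explicitly for discrete point clouds; here is a hedged Python sketch in which the Gaussian kernel and its width are illustrative assumptions:

```python
# Kernel metric between weighted sums of Diracs mu = sum_i a_i delta_{x_i}:
#   <mu, nu> = sum_ij a_i b_j k(x_i, y_j),
#   ||mu - nu||^2 = <mu,mu> - 2<mu,nu> + <nu,nu>,
# for a real positive kernel k; a Gaussian kernel is assumed here.
import numpy as np

def gaussian_kernel(x, y, sigma=0.2):
    d2 = np.sum((x[:, None, :] - y[None, :, :])**2, axis=-1)
    return np.exp(-d2 / (2 * sigma**2))

def measure_inner(x, a, y, b, sigma=0.2):
    """<mu, nu> for mu = sum_i a_i delta_{x_i}, nu = sum_j b_j delta_{y_j}."""
    return a @ gaussian_kernel(x, y, sigma) @ b

def measure_dist2(x, a, y, b, sigma=0.2):
    """Squared kernel distance, usable as an endpoint matching term."""
    return (measure_inner(x, a, x, a, sigma)
            - 2.0 * measure_inner(x, a, y, b, sigma)
            + measure_inner(y, b, y, b, sigma))

# Usage: point clouds of different cardinalities, no correspondences needed.
x = np.random.rand(5, 2); a = np.full(5, 1.0 / 5)
y = np.random.rand(8, 2); b = np.full(8, 1.0 / 8)
print("squared kernel distance:", measure_dist2(x, a, y, b))
```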
Curve matching with currents In the situation of oriented curves, currents give an efficient setting to construct invariant matching terms. In this representation, curves are interpreted as elements of a functional space dual to a space of vector fields, and compared through kernel norms on these spaces. Matching of two curves may then be written as a variational problem in which the endpoint term is obtained from a kernel norm on currents, the derivative of the parametrization being the tangent vector to the curve, for a given matrix kernel. Such expressions are invariant to any positive reparametrization of the curves, and thus still depend on the orientation of the two curves. Curve matching with varifolds Varifolds are an alternative to currents when orientation becomes an issue, as for instance in situations involving multiple bundles of curves for which no "consistent" orientation may be defined. Varifolds directly extend 0-dimensional measures by adding an extra tangent-space direction to the position of points, leading to curves being represented as measures on the product of the ambient space and the Grassmannian of all straight lines in it. The matching problem between two curves then consists in replacing the endpoint matching term by a varifold norm, built from the non-oriented line directed by the tangent vector and two scalar kernels, respectively on the ambient space and on the Grassmannian. Due to the inherent non-oriented nature of the Grassmannian representation, such expressions are invariant to positive and negative reparametrizations. Surface matching Surface matching shares many similarities with the case of curves. Surfaces are parametrized in local charts by embeddings, with all reparametrizations by a diffeomorphism of the chart domain being equivalent geometrically. Currents and varifolds can also be used to formalize surface matching. Surface matching with currents Oriented surfaces can be represented as 2-currents, which are dual to differential 2-forms. In three dimensions, one can further identify 2-forms with vector fields through the standard wedge product of 3D vectors. In that setting, surface matching again writes as a variational problem, with the endpoint term given through a kernel norm involving the normal vector to the parametrized surface. This surface mapping algorithm has been validated for brain cortical surfaces against CARET and FreeSurfer. LDDMM mapping for multiscale surfaces has also been developed. Surface matching with varifolds For non-orientable or non-oriented surfaces, the varifold framework is often more adequate. Identifying the parametric surface with a varifold in the space of measures on the product of the ambient space and the Grassmannian, one simply replaces the previous current metric by a varifold metric built from the non-oriented line directed by the normal vector to the surface. Growth and atrophy from longitudinal time-series There are many settings in which there is a series of measurements, a time-series, to which the underlying coordinate systems will be matched and flowed onto. This occurs, for example, in the dynamic growth and atrophy models and in motion tracking. An observed time sequence is given, and the goal is to infer the time flow of geometric change of coordinates carrying the exemplars or templates through the period of observations. The generic time-series matching problem considers a series of observation times; the flow optimizes a series of costs, giving optimization problems indexed to the whole observation sequence. There have been at least three solutions offered thus far: piecewise geodesic, principal geodesic, and splines. 
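For polygonal curves, the currents inner product has a simple discrete form: each segment contributes a Dirac at its midpoint weighted by its tangent vector. A hedged sketch, with a Gaussian scalar kernel assumed in place of the general matrix kernel:

```python
# Discrete currents representation of a polygonal curve: each edge becomes
# a Dirac at its midpoint carrying its (oriented) tangent vector tau. Two
# curves are compared via <C1, C2> = sum_ij k(c_i, d_j) (tau_i . sigma_j).
import numpy as np

def curve_to_current(vertices):
    """Midpoints and tangent vectors of the edges of a polyline."""
    centers = 0.5 * (vertices[1:] + vertices[:-1])
    tangents = vertices[1:] - vertices[:-1]   # length-weighted tangents
    return centers, tangents

def current_inner(c1, t1, c2, t2, sigma=0.2):
    d2 = np.sum((c1[:, None, :] - c2[None, :, :])**2, axis=-1)
    k = np.exp(-d2 / (2 * sigma**2))          # assumed scalar kernel
    return np.sum(k * (t1 @ t2.T))

def current_dist2(v1, v2, sigma=0.2):
    """Squared currents distance: an orientation-sensitive endpoint term."""
    c1, t1 = curve_to_current(v1)
    c2, t2 = curve_to_current(v2)
    return (current_inner(c1, t1, c1, t1, sigma)
            - 2.0 * current_inner(c1, t1, c2, t2, sigma)
            + current_inner(c2, t2, c2, t2, sigma))

# Usage: two sampled curves. Reversing one flips the sign of its tangents,
# which changes the distance, since currents are orientation dependent.
s = np.linspace(0.0, 1.0, 20)[:, None]
curve_a = np.hstack([s, 0.1 * np.sin(2 * np.pi * s)])
curve_b = np.hstack([s, np.zeros_like(s)])
print("squared currents distance:", current_dist2(curve_a, curve_b))
```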
The random orbit model of computational anatomy The random orbit model of computational anatomy first appeared in modelling the change in coordinates associated to the randomness of the group acting on the templates, which induces randomness on the source of images in the anatomical orbit of shapes and forms and on the resulting observations through the medical imaging devices. Such a random orbit model, in which randomness on the group induces randomness on the images, was examined for the special Euclidean group for object recognition. The figure depicts random orbits around each exemplar, generated by randomizing the flow through the initial tangent-space vector field at the identity and then generating the random objects. The random orbit model induces the prior on shapes and images conditioned on a particular atlas. For this, the generative model generates the mean field as a random change in coordinates of the template, where the diffeomorphic change in coordinates is generated randomly via the geodesic flows. The prior on random transformations is induced by the flow, constructed with a Gaussian random field prior on the initial vector field. The density on the random observables at the output of the sensor is given by the imaging likelihood. Shown in the figure on the right is a cartoon orbit: a random spray of the subcortical manifolds generated by randomizing the vector fields supported over the submanifolds. The Bayesian model of computational anatomy The central statistical model of computational anatomy in the context of medical imaging has been the source-channel model of Shannon theory; the source is the deformable template of images, and the channel outputs are the imaging sensors with observables (see figure). See The Bayesian model of computational anatomy for discussions of (i) MAP estimation with multiple atlases, (ii) MAP segmentation with multiple atlases, and (iii) MAP estimation of templates from populations. Statistical shape theory in computational anatomy Shape in computational anatomy is a local theory, indexing shapes and structures to templates to which they are bijectively mapped. Statistical shape in computational anatomy is the empirical study of diffeomorphic correspondences between populations and common template coordinate systems. This is a strong departure from Procrustes analyses and the shape theories pioneered by David G. Kendall, in that the central group of Kendall's theories comprises the finite-dimensional Lie groups, whereas the theories of shape in computational anatomy have focused on the diffeomorphism group, which to first order via the Jacobian can be thought of as a field (thus infinite-dimensional) of low-dimensional Lie groups of scale and rotations. The random orbit model provides the natural setting to understand empirical shape and shape statistics within computational anatomy, since the non-linearity of the induced probability law on anatomical shapes and forms is induced via the reduction to the vector fields at the tangent space at the identity of the diffeomorphism group. The successive flow of the Euler equation induces the random space of shapes and forms. Performing empirical statistics on this tangent space at the identity is the natural way of inducing probability laws on the statistics of shape. 
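Because the statistics live in the linear tangent space at the identity, ordinary linear tools apply there. A hedged sketch of principal component analysis on discretized initial vector fields follows; the data here are assumed synthetic stand-ins for fields that would in practice be estimated by LDDMM:

```python
# PCA on tangent-space representations: each subject is summarized by its
# initial vector field at the identity (flattened to a vector), and linear
# statistics are performed in that Hilbert space.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, field_dim = 30, 2 * 16 * 16   # assumed 16x16 grid of 2-vectors

# Stand-in data: in practice each row would be an LDDMM-estimated initial
# vector field mapping the template to one subject.
V = rng.normal(size=(n_subjects, field_dim))

mean_field = V.mean(axis=0)
X = V - mean_field                        # center in the tangent space
# SVD of the centered data gives the principal modes of shape variation.
U, s, Wt = np.linalg.svd(X, full_matrices=False)
explained = s**2 / np.sum(s**2)
print("variance explained by first 3 modes:", explained[:3])

# A subject's shape coordinates: projections onto the leading modes.
coords = X @ Wt[:3].T                     # (n_subjects, 3) embedding
```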
Since both the vector fields and the Eulerian momentum are in a Hilbert space, the natural model is that of a Gaussian random field, so that, given a test function, the inner products with the test functions are Gaussian distributed with a mean and covariance. This is depicted in the accompanying figure, where sub-cortical brain structures are depicted in a two-dimensional coordinate system based on inner products of the initial vector fields that generate them from the template, shown in a two-dimensional span of the Hilbert space. Template estimation from populations The study of shape and statistics in populations is a local theory, indexing shapes and structures to templates to which they are bijectively mapped. Statistical shape is then the study of diffeomorphic correspondences relative to the template. A core operation is the generation of templates from populations, estimating a shape that is matched to the population. There are several important methods for generating templates, including methods based on Fréchet averaging and statistical approaches based on the expectation-maximization algorithm and the Bayesian random orbit models of computational anatomy. Shown in the accompanying figure is a subcortical template reconstruction from a population of MRI subjects. Software for diffeomorphic mapping Software suites containing a variety of diffeomorphic mapping algorithms include the following: ANTS DARTEL Voxel-based morphometry DEFORMETRICA DEMONS LDDMM Large deformation diffeomorphic metric mapping LDDMM based on frame-based kernel StationaryLDDMM Cloud software MRICloud See also Bayesian estimation of templates in computational anatomy Computational neuroanatomy Geometric data analysis Large deformation diffeomorphic metric mapping Procrustes analysis Riemannian metric and Lie-bracket in computational anatomy Shape analysis (disambiguation) Statistical shape analysis References Geometry Fluid mechanics Bayesian estimation Neuroscience Neural engineering Biomedical engineering Computational science
Computational anatomy
Mathematics,Engineering,Biology
9,264
619,984
https://en.wikipedia.org/wiki/Surface%20layer
The surface layer is the layer of a turbulent fluid most affected by interaction with a solid surface or with the surface separating a gas and a liquid, where the characteristics of the turbulence depend on distance from the interface. Surface layers are characterized by large normal gradients of tangential velocity and large concentration gradients of any substances (temperature, moisture, sediments et cetera) transported to or from the interface. The term boundary layer is used in meteorology and physical oceanography. The atmospheric surface layer is the lowest part of the atmospheric boundary layer (typically the bottom 10%, where the log wind profile is valid). The ocean has two surface layers: the benthic, found immediately above the sea floor, and the marine surface layer, at the air-sea interface. Mathematical formulation A simple model of the surface layer can be derived by first examining the turbulent momentum flux through a surface. Using Reynolds decomposition to express the horizontal flow as the sum of a slowly varying component, $\bar{u}$, and a turbulent component, $u'$, and the vertical flow, $w$, in an analogous fashion, we can express the flux of turbulent momentum through a surface, $u_*^2$, as the time-averaged magnitude of vertical turbulent transport of horizontal turbulent momentum, $\overline{u'w'}$: $u_*^2 = \left|\overline{u'w'}\right|$. If the flow is homogeneous within the region, we can set the product of the vertical gradient of the mean horizontal flow and the eddy viscosity coefficient $K_m$ equal to $u_*^2$: $u_*^2 = K_m \frac{\partial \bar{u}}{\partial z}$, where $K_m$ is defined in terms of Prandtl's mixing length hypothesis: $K_m = \ell^2 \left|\frac{\partial \bar{u}}{\partial z}\right|$ where $\ell$ is the mixing length. We can then express $u_*$ as: $u_* = \ell \, \frac{\partial \bar{u}}{\partial z}$. Assumptions about the mixing length From the figure above, we can see that the size of a turbulent eddy near the surface is constrained by its proximity to the surface; turbulent eddies centered near the surface cannot be as large as those centered further from the surface. From this consideration, and in neutral conditions, it is reasonable to assume that the mixing length, $\ell$, is proportional to the eddy's depth in the surface layer: $\ell = k z$, where $z$ is the depth and $k$ is known as the von Kármán constant. Thus the gradient can be integrated to solve for $\bar{u}$: $\bar{u} = \frac{u_*}{k} \ln \frac{z}{z_0}$, where $z_0$ is the roughness length at which the mean flow vanishes. So, we see that the mean flow in the surface layer has a logarithmic relationship with depth. In non-neutral conditions the mixing length is also affected by buoyancy forces, and Monin-Obukhov similarity theory is required to describe the horizontal-wind profile. Surface layer in oceanography The surface layer is studied in oceanography, as both wind stress and the action of surface waves can cause the turbulent mixing necessary for the formation of a surface layer. The world's oceans are made up of many different water masses. Each has particular temperature and salinity characteristics as a result of the location in which it formed. Once formed at a particular source, a water mass will travel some distance via large-scale ocean circulation. Typically, the flow of water in the ocean is described as turbulent (i.e., it does not follow straight lines). Water masses can travel across the ocean as turbulent eddies, or parcels of water, usually along constant-density (isopycnic) surfaces where the expenditure of energy is smallest. When these turbulent eddies of different water masses interact, they will mix together. With enough mixing, some stable equilibrium is reached and a mixed layer is formed. Turbulent eddies can also be produced from wind stress by the atmosphere on the ocean. 
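A hedged numerical sketch of the logarithmic profile just derived; the friction velocity and roughness length values are illustrative assumptions, and the von Kármán constant is taken at its commonly quoted value:

```python
# Log-law mean wind profile u(z) = (u_star / k) * ln(z / z0), valid in the
# neutral surface layer at heights z above the roughness length z0.
import numpy as np

K_VON_KARMAN = 0.40   # von Karman constant (commonly quoted value)
U_STAR = 0.3          # assumed friction velocity, m/s
Z0 = 0.01             # assumed roughness length, m

def log_wind_profile(z):
    """Mean horizontal wind speed at heights z (m) above the surface."""
    return (U_STAR / K_VON_KARMAN) * np.log(np.asarray(z) / Z0)

# Usage: evaluate the profile through the lowest few tens of meters.
heights = np.array([0.1, 1.0, 2.0, 10.0, 30.0])
for z, u in zip(heights, log_wind_profile(heights)):
    print(f"z = {z:5.1f} m   u = {u:5.2f} m/s")
```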
This kind of interaction and mixing through buoyancy at the surface of the ocean also plays a role in the formation of a surface mixed layer. Discrepancies with traditional theory The logarithmic flow profile has long been observed in the ocean, but recent, highly sensitive measurements reveal a sublayer within the surface layer in which turbulent eddies are enhanced by the action of surface waves. It is becoming clear that the surface layer of the ocean is only poorly modeled as being up against the "wall" of the air-sea interface. Observations of turbulence in Lake Ontario reveal that, under wave-breaking conditions, the traditional theory significantly underestimates the production of turbulent kinetic energy within the surface layer. Diurnal cycle The depth of the surface mixed layer is affected by solar insolation and thus is related to the diurnal cycle. After nighttime convection over the ocean, the turbulent surface layer is found to completely decay and restratify. The decay is caused by the decrease in solar insolation, the divergence of turbulent flux, and the relaxation of lateral gradients. During the nighttime, the surface ocean cools because heating from the sun ceases each day after sunset. Cooler water is less buoyant and will sink. This buoyancy effect causes water masses to be transported to depths even lower than those reached during the daytime. During the following daytime, water at depth is restratified, or un-mixed, because of the warming of the sea surface and buoyancy driving the warmed water upward. The entire cycle is repeated, and the water is mixed again during the following nighttime. In general, the surface mixed layer occupies only the first 100 meters of the ocean but can reach 150 m at the end of winter. The diurnal cycle does not change the depth of the mixed layer significantly relative to the seasonal cycle, which produces much larger changes in sea surface temperature and buoyancy. With several vertical profiles, one can estimate the depth of the mixed layer by assigning a set temperature or density difference in the water between surface and deep-ocean observations; this is known as the "threshold method". However, this diurnal cycle does not have the same effect in midlatitudes as it does at tropical latitudes. Tropical regions are less likely than midlatitude regions to have a mixed layer dependent on diurnal temperature changes. One study explored the diurnal variability of the mixed layer depth in the western equatorial Pacific Ocean. Results suggested no appreciable change in the mixed layer depth with the time of day. The significant precipitation in this tropical area would lead to further stratification of the mixed layer. Another study, which instead focused on the central equatorial Pacific Ocean, found a tendency for increased depths of the mixed layer during nighttime. The extratropical or midlatitude mixed layer was shown in one study to be more affected by diurnal variability than in the results of the two tropical ocean studies. Over a 15-day study period in Australia, the diurnal mixed layer cycle repeated in a consistent manner, with decaying turbulence throughout the day. See also Boundary layer Mixed layer Density Salinity Sea surface microlayer References Boundary layer meteorology Oceanography
Surface layer
Physics,Environmental_science
1,320
42,043,691
https://en.wikipedia.org/wiki/Agricultural%20value%20chain
An agricultural value chain is the integrated range of goods and services (value chain) necessary for an agricultural product to move from the producer to the final consumer. The concept has been used since the beginning of the millennium, primarily by those working in agricultural development in developing countries, although there is no universally accepted definition of the term. Background The term value chain was first popularized in a book published in 1985 by Michael Porter, who used it to illustrate how companies could achieve what he called "competitive advantage" by adding value within their organization. Subsequently, the term was adopted for agricultural development purposes and has now become very much in vogue among those working in this field, with an increasing number of bilateral and multilateral aid organisations using it to guide their development interventions. At the heart of the agricultural value chain concept is the idea of actors connected along a chain producing and delivering goods to consumers through a sequence of activities. However, this "vertical" chain cannot function in isolation, and an important aspect of the value chain approach is that it also considers "horizontal" impacts on the chain, such as input and finance provision, extension support and the general enabling environment. The approach has been found useful, particularly by donors, in that it has resulted in a consideration of all the factors impacting on the ability of farmers to access markets profitably, leading to a broader range of chain interventions. It is used both for upgrading existing chains and for donors to identify market opportunities for small farmers. Definitions There is no commonly agreed definition of what is actually meant by an agricultural value chain. Indeed, some agencies use the term without a workable definition, having simply relabelled ongoing activities as "value chain" work when the term came into vogue. Published definitions include the World Bank's "the term 'value chain' describes the full range of value adding activities required to bring a product or service through the different phases of production, including procurement of raw materials and other inputs", UNIDO's "actors connected along a chain producing, transforming and bringing goods and services to end-consumers through a sequenced set of activities", and CIAT's "a strategic network among a number of business organizations". Without a universal definition, the term "value chain" is now used to refer to a range of types of chain, including: An international, or regional, commodity market. Examples could include "the global cotton value chain", "the southern African maize value chain" or "the Brazilian coffee value chain"; A national or local commodity market or marketing system, such as "the Ghanaian tomato value chain" or "the Accra tomato value chain"; A supply chain, which can cover both of the above; An extended supply chain or marketing channel, which embraces all activities needed to produce the product, including information/extension, planning, input supply and finance. This is probably the most common usage of the value chain term; A dedicated chain designed to meet the needs of one or a limited number of buyers.
This usage, which is arguably most faithful to Porter's concept, stresses that a value chain is designed to capture value for all actors by carrying out activities to meet the demand of consumers or of a particular retailer, processor or food service company supplying those consumers. Emphasis is firmly placed on demand as the source of value. Value chain methodologies Donors and others supporting agricultural development, such as FAO, the World Bank, GIZ, DFID, ILO, IIED and UNIDO, have produced a range of documents designed to assist their staff and others to evaluate value chains in order to decide on the most appropriate interventions to either upgrade existing chains or promote new ones. However, the application of value chain analysis is interpreted differently by different organisations, with possible repercussions for their development impact. The proliferation of guides has taken place in an environment where key conceptual and methodological elements of value chain analysis and development are still evolving. Many of these guides not only include detailed procedures that require experts to carry out the analysis but also employ quasi-academic methodologies. One such methodology is to compare the same value chain over time (a comparative or panel study) to assess changes in rents, governance, systemic efficiency and the institutional framework. Linking farmers to markets A major subset of value chain development work is concerned with ways of linking producers to companies, and hence into the value chains. While there are examples of fully integrated value chains that do not involve smallholders (e.g. Unilever operates tea estates and tea processing facilities in Kenya and then blends and packs the tea in Europe before selling it under the Lipton, Brooke Bond or PG Tips brands), the great bulk of agricultural value chains involve sales to companies from independent farmers. Such arrangements frequently involve contract farming, in which the farmer undertakes to supply agreed quantities of a crop or livestock product, based on the quality standards and delivery requirements of the purchaser, often at a price that is established in advance. Companies often also agree to support the farmer through input supply, land preparation, extension advice and transporting produce to their premises. Inclusive value chains Work to promote market linkages in developing countries is often based on the concept of "inclusive value chains", which usually places emphasis on identifying possible ways in which small-scale farmers can be incorporated into existing or new value chains, or can extract greater value from the chain, either by increasing efficiency or by also carrying out activities further along the chain. In the various publications on the topic, the definition of "inclusion" is often imprecise, as it is frequently unclear whether the development aim is to include all farmers or only those best able to take advantage of the opportunities. Emerging literature in the last two decades increasingly references the value of responsible sourcing or what are called "sustainable supply chains". Sustainability in agricultural value chains The private sector's role in achieving sustainability has increasingly been recognized since the publication of Our Common Future (the Brundtland Report) in 1987 by the World Commission on Environment and Development. More recently, the role of value chains has become very prominent, and businesses are emerging as the primary catalyst for sustainability.
Kevin Dooley, Chief Scientist of the Sustainability Consortium, claims that such market-based mechanisms are the most efficient and effective way to induce the adoption of sustainable practices. Still, there are concerns about whether value chains are really driving sustainability or merely green-washing. These concepts can also be understood in terms of power dynamics. In the last decade or so, hybrid forms of governance have emerged in which business, civil society and public actors interact, and these multi-stakeholder approaches claim new forms of legitimacy and, arguably, a greater likelihood of sustainability. Scholars including Michael Schmidt (Dean and Department Chair, University Brandenburg) and Daniele Giovannucci (President of the Committee on Sustainability Assessment) consider that evidence is emerging on what makes a value chain sustainable. There is evidence, too, that the impact of global value chains on the environment and on the societies they serve, such as farmers and suppliers, can be effectively measured. The World Bank also supports the perspective that GVCs can be valuable for sustainable development and provides an array of examples and data. Agricultural value chain finance Agricultural value chain finance is concerned with the flows of funds to and within a value chain to meet the needs of chain actors for finance, to secure sales, to buy inputs or produce, or to improve efficiency. Examining the potential for value chain finance involves a holistic approach to analyzing the chain, those working in it, and their inter-linkages. These linkages allow financing to flow through the chain. For example, inputs can be provided to farmers and the cost can be repaid directly when the product is delivered, without the need for farmers to take a loan from a bank or similar institution. This is common under contract farming arrangements. Types of value chain finance include product financing through trader and input supplier credit, or credit supplied by a marketing company or a lead firm. Other trade finance instruments include receivables financing, where the bank advances funds against an assignment of future receivables from the buyer, and factoring, in which a business sells its accounts receivable at a discount. Also falling under value chain finance are asset collateralization, such as on the basis of warehouse receipts, and risk mitigation, such as forward contracting, futures and insurance. The use of ICTs in value chains Information and communication technologies, or ICTs, have become an important tool in promoting agricultural value chain efficiency. There has been a rapid expansion in the use of mobile technologies in particular. The price of ICT services is falling and the technologies are becoming more affordable to many in developing countries. Applications can support farmers directly through SMS messages. Examples include one application, developed in Kenya, which provides information on the gestation period, on artificial insemination of cows, and on how to look after them. Applications such as M-Pesa can support access to mobile payment services for a large percentage of those without banks, thereby facilitating transactions in the value chain. Other applications have been developed to promote the provision of crop insurance through input dealers, for example. ICTs are also being used to strengthen the capacity of agricultural extension officers and NGO field staff to reach farmers with timely and accurate information and, at the same time, help capture data from the field.
The Grameen Foundation's Community Knowledge Worker (CKW) programme is a small-scale example. Farmer representatives are trained to use ICT applications on a smartphone to provide agricultural information and extension support. Other efforts include Lutheran World Relief's Mobile Farmer and diverse efforts funded by the Bill and Melinda Gates Foundation in Africa. Most market price information is now delivered to farmers via SMS. Further along the chain, technologies offer considerable possibilities to enhance traceability, which is particularly relevant as certification grows in importance. Where necessary, many exporters can now trace consignments back to individual farmers and take the necessary measures to address problems. Finally, systems promoted by the Forum for Agricultural Research in Africa are also supporting agricultural researchers through data collection and analysis and access to up-to-date research publications. Enabling environments As with all agricultural growth, two things appear essential for successful value chain development: creating the right environment for agriculture and investing in rural public goods. An enabling environment implies peace and public order, macro-economic stability, inflation under control, exchange rates based on market fundamentals rather than government allocation of foreign currency, predictable taxation that is reinvested in public goods, and property rights. There is a positive correlation of agricultural growth with investment in irrigation, transport infrastructure and other technologies. Governments have a responsibility to provide essential goods and services, infrastructure such as rural roads, and agricultural research and extension. Value chain development is often constrained by corruption, both at a high level and at the ubiquitous road blocks found in many countries, particularly in Africa. Many measures to improve value chains require collaboration between a wide range of different ministries, and this can be difficult to achieve. See also Agribusiness Agricultural marketing Agricultural diversification Contract farming Value chain References External links Contract farming resource centre: FAO Fin4Ag Agricultural Value Chain Conference, Nairobi, July 2014 Rural Finance Learning Center Agricultural economics Development economics Supply chain management International development Food industry Intensive farming
Agricultural value chain
Chemistry
2,260
11,894,889
https://en.wikipedia.org/wiki/Cellular%20microarray
A cellular microarray (or cell microarray) is a laboratory tool that allows for the multiplex interrogation of living cells on the surface of a solid support. The support, sometimes called a "chip", is spotted with varying materials, such as antibodies, proteins, or lipids, which can interact with the cells, leading to their capture on specific spots. Combinations of different materials can be spotted in a given area, allowing not only cellular capture, when a specific interaction exists, but also the triggering of a cellular response, a change in phenotype, or the detection of a response from the cell, such as a specific secreted factor. There are a large number of types of cellular microarrays: Reverse transfection cell microarrays. David M. Sabatini's laboratory developed reverse-transfection cell microarrays at the Whitehead Institute, publishing the work in 2001. pMHC (peptide-MHC) cellular microarrays. This type of microarray was developed by Daniel Chen, Yoav Soen, Dan Kraft, Patrick Brown and Mark Davis at Stanford University Medical Center. References Chen DS, Davis MM (2006) Molecular and functional analysis using live cell microarrays. Curr Opin Chem Biol 10:28-34. Chen DS, Soen Y, Stuge TB, Lee PP, Weber JS, Brown PO, Davis MM (2005) Marked differences in human melanoma antigen-specific T cell responsiveness after vaccination using a functional microarray. PLoS Med 2(10):e265. Soen Y, Chen DS, Kraft DL, Davis MM, Brown PO (2003) Detection and characterization of cellular immune responses using peptide-MHC microarrays. PLoS Biol 1:E65 (http://biology.plosjournals.org/perlserv/?request=get-document&doi=10.1371/journal.pbio.0000065). Chen DS, Davis MM (2005) Cellular immunotherapy: antigen recognition is just the beginning. Springer Semin Immunopathol 27:119–127. Chen DS, Soen Y, Davis MM, Brown PO (2004) Functional and molecular profiling of heterogeneous tumor samples using a novel cellular microarray. J Clin Oncol 22:9507 (https://web.archive.org/web/20041020122342/http://meeting.jco.org/cgi/content/abstract/22/14_suppl/9507). Soen Y, Chen DS, Stuge TB, Weber JS, Lee PP, et al. (2004) A novel cellular microarray identifies functional deficiencies in tumor-specific T cell responses. J Clin Oncol 22:2510. Ziauddin J, Sabatini DM (2001) Microarrays of cells expressing defined cDNAs. Nature 411(6833):107-10. Biotechnology
Cellular microarray
Biology
649
220,116
https://en.wikipedia.org/wiki/The%20Invincible
The Invincible (Polish: Niezwyciężony) is a hard science fiction novel by Polish writer Stanisław Lem, serialized in Gazeta Bialostocka in 1963 and published as a book in 1964. The Invincible originally appeared as the title story in Lem's collection Niezwyciężony i inne opowiadania ("The Invincible and Other Stories"). A translation into German was published in 1967; an English translation by Wendayne Ackerman, based on the German one, was published in 1973. A direct translation into English from Polish, by Bill Johnston, was published in 2006. It was one of the first novels to explore the ideas of microrobots, smart dust, artificial swarm intelligence, and "necroevolution" (a term suggested by Lem in the novel for the evolution of non-living matter). Plot summary A heavily armed interstellar spacecraft called Invincible lands on the planet Regis III, which seems uninhabited and bleak, to investigate the loss of her sister ship, Condor. During the investigation, the crew finds evidence of a form of quasi-life, born through the evolution of autonomous, self-replicating machines apparently left behind by an alien civilization's ship which landed on Regis III a very long time ago. The protagonists come to speculate that a kind of evolution must have taken place under the selection pressures of "robot wars", with the only surviving form being swarms of minuscule, insect-like micromachines. Individually, or in small groups, they are quite harmless and capable of only very simple behavior. When threatened, they can assemble into huge clouds, travel at high speed, and even climb to the top of the troposphere. These swarms display complex behavior arising from self-organization and can incapacitate any intelligent threat by a powerful surge of electromagnetic interference. The Condor's crew suffered a complete memory erasure as a consequence of attacks from these "clouds". The swarm, however, is reactive. It lacks intelligence and cannot formulate attack strategies proactively. The Invincible's crew mounts an escalating series of attacks on the perceived enemy, but eventually recognizes the futility of their efforts. The robotic "fauna", dubbed the "necrosphere", has become part of the planet's ecology, and would require a disruption on a planetary scale to be destroyed. In the face of defeat and the imminent withdrawal of the Invincible, Rohan, the spaceship's first navigator, undertakes a trip into the "enemy area" in search of four crew members who went missing in action – an attempt which he and the Invincible's commander Horpach see as certainly futile, but necessary for moral reasons. Rohan wanders into canyons covered by metallic "shrubs" and "insects", and finds some of the missing crewmen dead. He gathers some evidence and returns to the ship unharmed, thanks partially to a device that cloaks his brain activity and partially to his calm and nonthreatening behavior. Rohan expresses his intention to petition for the preservation of the planet's artificial ecosystem, which fascinates him. Commentary The novel turns into an analysis of the relationship between different life domains, and their place in the Universe. In particular, it is an imaginary experiment demonstrating that evolution may not necessarily lead to dominance by intellectually superior life forms. The plot also involves a philosophical dilemma, juxtaposing the values of humanity and the efficiency of mechanical insects.
Jarzębski comments that the novel demonstrates that the advantage of humans lies not in the ability to annihilate the enemy but in the "ability to stop", to overcome the Darwinian instinct of struggling for an advantage. Theodore Sturgeon praised The Invincible as "SF in the grand tradition", saying "The Science is hard. The descriptions are vivid and powerful." The idea of an "ultimate weapon system" was finalized by Lem in his 1983/1986 fictitious review "Weapon Systems of the Twenty First Century or The Upside-down Evolution". The themes of microrobots and smart dust from this faux review were used verbatim in his 1985 novel Peace on Earth, where Ijon Tichy reads chapters from the (faux) book. Adaptations In the late 1960s, Michael Redstone acquired the rights to a film adaptation of the novel, but he failed to find producers. In his usual grumpy manner, Lem commented that "it would probably have been awful, but I did earn a lot". In 1991, Swedish author Kerstin Ekman created an educational computer game titled Rymdresa, which is mainly based on The Invincible. In 2019, Rafał Mikołajczyk published the comic book Niezwyciężony [The Invincible]. Reviewers note Mikołajczyk's faithful rendering of Lem's original novel in a different medium. In 2020, Polish video game developer Starward Industries announced a video game adaptation of The Invincible. According to the developer, the adaptation is designed for PC, PlayStation 5 and Xbox Series X/S consoles. The game was released on November 6, 2023. Notes References External links About the novel on the official Stanisław Lem website About the novel on the official Stanisław Lem website (different content) Military science fiction novels Novels by Stanisław Lem 1963 science fiction novels Books with cover art by Richard M. Powers Hard science fiction Novels about artificial intelligence Novels set on fictional planets Fiction about memory erasure and alteration Hive minds in fiction Self-replicating machines in fiction Evolution in popular culture Fiction about nanotechnology Polish novels Polish science fiction novels Science fiction about first contact
The Invincible
Materials_science
1,137
2,067,647
https://en.wikipedia.org/wiki/El%20Farol%20Bar%20problem
The El Farol bar problem is a problem in game theory. Every Thursday night, a fixed population wants to go have fun at the El Farol Bar, unless it is too crowded. If fewer than 60% of the population go to the bar, those who go will all have more fun than if they had stayed home. If more than 60% of the population go to the bar, they will all have less fun than if they had stayed home. Everyone must decide at the same time whether to go or not, with no knowledge of others' choices. Paradoxically, if everyone uses a deterministic pure strategy which is symmetric (the same strategy for all players), it is guaranteed to fail no matter what it is. If the strategy suggests the bar will not be crowded, everyone will go, and thus it will be crowded; but if the strategy suggests it will be crowded, nobody will go, and thus it will not be crowded, but again no one will have fun. Better success is possible with a probabilistic mixed strategy. For the single-stage El Farol Bar problem, there exists a unique symmetric Nash equilibrium mixed strategy in which all players choose to go to the bar with a certain probability, determined according to the number of players, the threshold for crowdedness, and the relative utility of going to a crowded or uncrowded bar compared to staying home. There are also multiple Nash equilibria in which one or more players use a pure strategy, but these equilibria are not symmetric. Several variants are considered in Game Theory Evolving by Herbert Gintis. In some variants of the problem, the players are allowed to communicate before deciding to go to the bar; however, they are not required to tell the truth. Named after a bar in Santa Fe, New Mexico, the problem was created in 1994 by W. Brian Arthur. However, under another name, the problem was formulated and solved dynamically six years earlier by B. A. Huberman and T. Hogg. Minority game A variant is the Minority Game proposed by Yi-Cheng Zhang and Damien Challet from the University of Fribourg. An odd number of players each must make a binary choice independently at each turn, and the winners are those players who end up on the minority side. As in the El Farol Bar problem, no single (symmetric) deterministic strategy can give an equilibrium, but for mixed strategies there is a unique symmetric Nash equilibrium (each player chooses each side with 50% probability), as well as multiple asymmetric equilibria. A multi-stage, cooperative Minority Game was featured in the manga Liar Game, in which the majority was repeatedly eliminated until only one player was left. Kolkata Paise Restaurant Problem Another variant of the El Farol Bar problem is the Kolkata Paise Restaurant Problem (KPR), named for the many cheap restaurants where laborers can grab a quick lunch, but may have to return to work hungry if their chosen restaurant is too crowded. Formally, a large number N of players each choose one of a large number n of restaurants, typically with N = n (while in the El Farol Bar problem, n = 2, including the stay-home option). At each restaurant, one customer at random is served lunch (payoff = 1) while all others lose (payoff = 0). The players do not know each other's choices on a given day, but the game is repeated daily, and the history of all players' choices is available to everyone. Optimally, each player chooses a different restaurant, but this is practically impossible without coordination, resulting in both hungry customers and unattended restaurants wasting capacity.
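The utilization figures quoted below for the KPR are easy to reproduce in simulation. The following Python sketch is a minimal illustration under simplifying assumptions: a modest N, a fixed horizon, and a "return to yesterday's restaurant with probability 1/(yesterday's crowd size)" rule, which is one published variant of the crowd-avoiding strategy described below.

```python
import random

N = 500     # players; also the number of restaurants (N = n)
DAYS = 200  # repeated daily games

def simulate(strategy) -> float:
    """Return the mean fraction of restaurants that serve a customer (utilization)."""
    choices = [random.randrange(N) for _ in range(N)]  # day 0: random choices
    util_sum = 0.0
    for _ in range(DAYS):
        counts = [0] * N
        for c in choices:
            counts[c] += 1
        util_sum += sum(1 for k in counts if k > 0) / N
        choices = [strategy(c, counts) for c in choices]
    return util_sum / DAYS

def random_choice(prev, counts):
    # "Noise trader": pick a restaurant uniformly at random every day.
    return random.randrange(N)

def crowd_avoiding(prev, counts):
    # Return to yesterday's restaurant with probability inversely
    # proportional to yesterday's crowd there; otherwise pick another
    # restaurant uniformly at random.
    if random.random() < 1.0 / counts[prev]:
        return prev
    other = random.randrange(N - 1)
    return other if other < prev else other + 1

print(f"random choice : utilization ~ {simulate(random_choice):.2f}")   # roughly 0.63
print(f"crowd avoiding: utilization ~ {simulate(crowd_avoiding):.2f}")  # roughly 0.8
```

For purely random choice, the expected occupied fraction is 1 − (1 − 1/N)^N, which approaches 1 − 1/e ≈ 0.63 for large N, matching the figure cited below.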
In a similar problem, there are hospital beds in every locality, but patients are tempted to go to prestigious hospitals out of their district. However, if too many patients go to a prestige hospital, some get no hospital bed at all, while the unused beds at their local hospitals are wasted. Strategies are evaluated based on their aggregate payoff and/or the proportion of attended restaurants (the utilization ratio). A leading stochastic strategy, with utilization ~0.79, gives each customer a probability p of choosing the same restaurant as yesterday (p varying inversely with the number of players who chose that restaurant yesterday), while choosing among the other restaurants with uniform probability. This is a better result than deterministic algorithms or simple random choice (the "noise trader" strategy), whose utilization fraction is 1 − 1/e ≈ 0.63. Increased utilization has also been studied for customers allowed to perform local optimization searches using Traveling-Salesman-Problem-type algorithms. Extensions of the KPR to on-call car hire problems have also been explored, as has the stability of the KPR induced by the introduction of dining clubs. Extensions to quantum games for the three-player KPR have also been studied and reviewed. References Further reading External links An Introductory Guide to the Minority Game Minority Games (a popularization account) Minority game on arxiv.org El Farol bar in Santa Fe, New Mexico The El Farol Bar problem in Java using The Java Agent-Based Modelling Toolkit (JABM) Kolkata Paise Restaurant (KPR) Problem: Wolfram Demonstrations Non-cooperative games
El Farol Bar problem
Mathematics
1,059
59,111,219
https://en.wikipedia.org/wiki/Multilevel%20Flow%20Modeling
Multilevel Flow Modeling (MFM) is a framework for modeling industrial processes. MFM is a kind of functional modeling employing the concepts of abstraction, decomposition, and functional representation. The approach regards the purpose, rather than the physical behavior, of a system as its defining element. MFM hierarchically decomposes the function of a system along the means-end and whole-part dimensions in relation to intended actions. Functions are modeled syntactically by relating a small set of fundamental concepts, each contributing as part of a subsystem. Each subsystem is considered in the context of the overall system in terms of the purpose (end) of its function (means) in the system. Using only a few fundamental concepts as building blocks allows qualitative reasoning about action success or failure. MFM defines a graphical modeling language for representing the encompassed knowledge. History MFM originated as a modeling language for capturing how human operators identify and handle unknown operating situations, in order to improve the design of human-machine interfaces. Syntax MFM describes the function of a system as a means to a specific end in terms of mass and energy flow. The flow is the defining element for the underlying function concepts. The concepts of transport and barrier play the most important role, as they connect pairs of the other function types, reflecting the physical flows in the system. Sink and source functions mark the boundary of the considered system and the end or beginning of a flow. Storage and balance concepts can both be collection or splitting points for multiple flow paths. Accordingly, valid MFM syntax requires a transport or a barrier to link two functions of the remaining four types. In addition to the flow within one perspective (mass or energy), MFM connects the influence between mass and energy through the means-end relations (mediate and producer-product), as well as through the causal links introduced by the way the system is controlled, using separate control flow structures. Diagnostic information about the causality between abnormal states through the system is inferred from the physical effect between the functions. Petersen distinguishes direct and indirect influence between functions: direct influence is the effect of a transport taking in mass or energy from the upstream function and passing it on to the downstream function. Indirect influence, on the other hand, is derived from different physical implementations and is represented by an influence or participant relation of another function toward the transport. The state of a transport can be affected, for example, by an abnormal state of an influencing downstream storage, while it would not be affected by a merely participating one. According to the underlying physical interpretation, inference rules for all possible patterns of flow functions have been established. Zhang compiled these patterns and the implied causality. Example The MFM diagram of a heat pump reflects the overarching objective (cob2) of maintaining the energy level on the warm side constant. The energy flow structure efs2 shows the system function from the most prevalent (energetic) perspective, which is further decomposed into the mass flow of coolant (mfs1) as the means for the desired energy transport. Further hierarchical analysis produces efs1, which represents the energy needed for the pump as a means to produce a part of the mass flow.
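Although MFM itself is a graphical language, the style of qualitative reasoning it supports can be sketched in code. The following Python toy illustrates only the idea of tracing an abnormal state upstream along direct-influence relations; the class design, function names, and the heat pump fragment are invented for this sketch and are not MFM-standard syntax or the API of any real MFM tool:

```python
# Toy qualitative model inspired by MFM flow structures. All names below
# are invented for illustration and do not follow the MFM standard.
from dataclasses import dataclass, field

@dataclass
class FlowFunction:
    name: str
    kind: str                                     # e.g. 'source', 'transport', 'storage'
    upstream: list = field(default_factory=list)  # direct-influence links

def possible_causes(fault_at: FlowFunction) -> list:
    """Trace candidate root causes of an abnormal state (e.g. low flow)
    by walking direct-influence relations upstream, mimicking the kind of
    qualitative cause propagation MFM reasoning performs."""
    seen, stack, causes = set(), [fault_at], []
    while stack:
        f = stack.pop()
        if f.name in seen:
            continue
        seen.add(f.name)
        causes.append(f.name)
        stack.extend(f.upstream)
    return causes

# Invented fragment loosely based on the heat pump's coolant mass flow (mfs1):
src = FlowFunction('coolant source', 'source')
pump = FlowFunction('pump transport', 'transport', upstream=[src])
evap = FlowFunction('evaporator storage', 'storage', upstream=[pump])
out = FlowFunction('energy transport to warm side', 'transport', upstream=[evap])

print(possible_causes(out))
# ['energy transport to warm side', 'evaporator storage',
#  'pump transport', 'coolant source']
```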
The operational constraints introduced by control systems are modeled by separate control flow structures: cfs1 for a water flow controller and cfs2 for a temperature controller. Application MFM-based solutions for many aspects of industrial automation have been proposed. Research directions include: Plant-wide diagnosis Alarm management Risk assessment Automatic procedure generation References Automation software Knowledge engineering
Multilevel Flow Modeling
Engineering
698
3,958,742
https://en.wikipedia.org/wiki/List%20of%20building%20materials
This is a list of building materials. Many types of building materials are used in the construction industry to create buildings and structures. These categories of materials and products are used by architects and construction project managers to specify the materials and methods used for building projects. Some building materials, like cold-rolled steel framing, are considered modern methods of construction, in contrast to traditionally slower methods like blockwork and timber. Catalogs Catalogs distributed by architectural product suppliers are typically organized into standard groups of materials and products. Industry standards The Construction Specifications Institute maintains the following industry standards: MasterFormat - 50 standard divisions of building materials, 2004 edition (current in 2009); 16 Divisions - the original 16 divisions of building materials. See also Category: Building materials Alternative natural materials Glass in green buildings Green building and wood List of commercially available roofing material Red List building materials Sources Building Materials: Dangerous Properties of Products in MasterFormat Divisions 7 and 9 - H. Leslie Simmons, Richard J. Lewis, Richard J. Lewis (Sr.). Building Materials - P.C. Varghese. Architectural Building Materials - Salvan, George S. Durability of Building Materials and Components 8: Service Life and Asset Management - Michael A. Lacasse, Dana J. Vanier. Durability of Building Materials and Components - J. M. Baker. Understanding Green Building Materials - Traci Rose Rider, Stacy Glass, Jessica McNaughton. Heat-Air-Moisture Transport: Measurements on Building Materials - Phālgunī Mukhopādhyāẏa, M. K. Kumaran. External links Building materials
List of building materials
Physics,Engineering
330
32,701,959
https://en.wikipedia.org/wiki/Ion%20layer%20gas%20reaction
Ion layer gas reaction (ILGAR®) is a non-vacuum, thin-film deposition technique developed and patented by the group of Professor Dr. Christian-Herbert Fischer at the Helmholtz-Zentrum Berlin for Materials and Energy in Berlin, Germany. It is a sequential and cyclic process that enables the deposition of semiconductor thin films, mainly for (although not restricted to) photovoltaic applications, especially chalcopyrite absorber layers and buffer layers. The ILGAR technique was named a German High Tech Champion 2011 by the Fraunhofer Society. ILGAR is a chemical process that allows for the deposition of layers in a homogeneous, adherent and mechanically stable form without using vacuum or high temperatures. It is a sequential and cyclic process which can be automated and scaled up. It consists basically of the following steps: Application of a precursor solution on a substrate by dipping (Dip-ILGAR) or spraying (Spray-ILGAR). Reaction of the dry solid precursor layer with a hydrogen chalcogenide gas. These steps are repeated until the desired layer thickness is obtained. In the case of Spray-ILGAR, the spray deposition of the ionic layer is performed using equipment similar to that used for atmospheric-pressure aerosol-assisted chemical vapour deposition or spray pyrolysis. Spray pyrolysis can be regarded as a simplified version of the Spray-ILGAR process, in which there is no reaction of the precursor layer with a reactant gas. The cyclical nature of this process makes it similar to atomic layer deposition (ALD), which is also used for buffer layer deposition. Applications The applications of the ILGAR and spray-pyrolysis techniques at the Helmholtz-Zentrum Berlin lie mainly in the field of chalcopyrite thin-film solar cells, although these techniques can be used for other applications involving substrate coating with thin films. The following list summarizes the applications of these techniques: Buffer layers for chalcopyrite-based thin-film devices: replacement of the standard CdS by ecologically more favorable materials (In2S3, etc.) deposited using Spray-ILGAR on different absorber materials. Nano-dot passivation layers: nanodots can be deposited in a controlled way using the Spray-ILGAR technique. ZnS nanodots have been used as combined passivation layers and point-contact buffer layers in chalcopyrite-based thin-film solar cells. These dots (5–10 nm in diameter) act as a passivation layer at the absorber-buffer interface, which in some cases results in an absolute efficiency gain of up to 2%. Al2O3 barrier layers: thin-film solar modules on metallic substrates like steel foil need a barrier layer between the substrate and the Mo back-contact for electrical insulation and to prevent detrimental iron diffusion into the absorber. Uncontrolled sodium diffusion from a glass substrate can also be stopped by a barrier before the absorber is intentionally doped with the desired amount of sodium. Al2O3 layers deposited by spray pyrolysis result in fully functioning barrier layers for the cases stated above. ZnO window layers: high-quality i-ZnO window layers have been grown by spray pyrolysis and constitute a feasible replacement for the standard sputtered i-ZnO layers. Chalcopyrite absorber layers: Spray-ILGAR is a low-temperature technique that enables the growth of chalcopyrite absorber layers such as CuInS2. Spray-ILGAR CuInS2 layers can be used as absorbers in thin-film solar cells.
Surface Coating: The surface coating of ceramics, metal, glass and even plastics for catalytic purposes as well as anti-corrosion, antistatic or mechanical protection is feasible using the ILGAR technique. ILGAR as a replacement for chemical bath deposition The advantage of ILGAR compared to chemical bath deposition (CBD) lies in the fact that it is easier to deposit high quality precursor layers and convert them to the chalcogenide than to directly deposit chalcogenide thin films. It is also possible to grow films with graded properties or compositions by changing the precursors or the process parameters. Furthermore, ILGAR is an in-line process whereas chemical bath deposition is intrinsically a batch process. References Semiconductor device fabrication Thin film deposition Coatings
Ion layer gas reaction
Chemistry,Materials_science,Mathematics
866
47,385,194
https://en.wikipedia.org/wiki/Traverse%20%28trench%20warfare%29
In trench warfare, a traverse is an adaptation to reduce casualties among defenders occupying a trench. One form of traverse is a U-shaped detour in the trench, with the trench going around a protrusion formed of earth and sandbags. The fragments, shrapnel, or shockwave from a shell landing and exploding within the trench then cannot spread horizontally past the obstacle the traverse interposes. Also, an enemy that has entered the trench is unable to fire down its length at the defenders, or otherwise enfilade the trench. A traverse trench is a trench dug perpendicular to a trench line, extending away from the enemy. It has two functions. One is to provide an entry into the main trench. The other is to provide a place for defenders to shelter and regroup should the enemy have penetrated the main trench and be able to fire down its length. On an approach trench, that is, a trench leading from the rear to the frontline or firing trench, defenders may construct an island traverse. With an island traverse, the approach trench splits to go around both sides of a traverse before coming together again. Lastly, a flying or bridge traverse is a sandbagged covering for a stretch of trench that blocks shrapnel or shell fragments from entering the trench. References Anon. (1917) Professional Memoirs, Engineer Bureau, United States Army, Volume 9 (Engineer School). Smith, J.S. (1917) Trench Warfare: A Manual for Officers and Men (New York: E.P. Dutton & Co.). Military terminology Trench warfare Military engineering
Traverse (trench warfare)
Engineering
333
5,972,583
https://en.wikipedia.org/wiki/Misumena%20vatia
Misumena vatia is a species of crab spider found in Europe and North America. In North America, it is called the goldenrod crab spider or flower (crab) spider, as it is commonly found hunting in goldenrod sprays and milkweed plants. Crab spiders are so named because of their unique ability to walk sideways as well as forwards and backwards. Both males and females of this species progress through several molts before reaching their adult sizes, though females must molt more times to reach their larger size; females grow much larger than males. Misumena vatia are usually yellow or white, or a pattern of these two colors. They may also present with pale green or pink instead of yellow, again in a pattern with white. They have the ability to change between these colors based on their surroundings through the molting process. They have a complex visual system, with eight eyes, that they rely on for prey capture and for their color-changing abilities. Sometimes, if Misumena vatia consumes colored prey, the spider itself will take on that color. Misumena vatia feed on common insects, often consuming prey much larger than themselves. They use venom to immobilize their prey, though they are harmless to humans. They face threats from parasites and larger insects. For Misumena vatia, survival depends on the choice of hunting site. The spiders closely monitor multiple sites to see if others nearby are frequented by greater numbers of potential prey. The primary sex ratio is biased toward females. Females are stationary and choose a flower to settle on, while males cover great distances searching for mates. Females do not emit pheromones; rather, they leave "draglines" of silk behind them as they move, which males follow. Females live longer than males, on average. After mating, females guard their nests until the young have hatched, after which they die. Taxonomy and phylogeny The species Misumena vatia was first described by Swedish arachnologist and entomologist Carl Alexander Clerck in his book Aranei Svecici. Misumena vatia belongs to the family Thomisidae, the spiders known as crab spiders. The family includes more than 2,000 species, which are found all over the world. The genus Misumena includes many other species, which are found worldwide. Misumena vatia falls into the Thomisus clade. Other clades in the family Thomisidae include the Borboropactus clade, the Epidius clade, and the Stephanopis clade. Similar species Close relatives include Mecaphesa asperata, which is also found in North America, as well as Central America and the Caribbean. It is similar in size and shape but is light-gray to brown with pink stripes on the abdomen and cephalothorax. It is also coated with short, stiff hairs. Similar species of the Misumenoides and Misumenops genera tend to be found south of Misumena vatia's home range, but some species, such as Misumenoides formosipes, are found in North America as well. Philodromidae is a closely related family of wandering spiders. These spiders differ from those in the family Thomisidae in that their front legs are of a similar length to their back legs; thus, their hunting style is quite different. Description This species has a wide, flat body that is short and crab-like. It can walk sideways in addition to being able to move forward and backward. Of its eight legs, the first two pairs are the longest. These sets of legs are usually held open, as the spider uses them to capture its prey.
Misumena vatia is harmless to humans, as its fangs are not powerful enough to penetrate human skin and its venom is too weak to harm larger animals. Color These spiders may be yellow or white, depending ultimately on the flower on which they are hunting (active camouflage). Younger females especially, which may hunt on a variety of flowers such as daisies and sunflowers, have a strong tendency to adapt to the color of the surrounding flower. However, the color-changing process is not instant and can require up to 25 days to complete. Older females need large amounts of relatively large prey to produce the best possible clutch of eggs. In North America, they are most commonly found in goldenrods, bright yellow flowers which attract large numbers of insects, particularly in autumn. It is often very hard even for a searching human to spot this spider on a yellow flower. These spiders are sometimes called "banana spiders" because of their striking yellow color. Females have light complexions, either white or yellow with darker sides. They may have some markings on the abdomen that can be brown or red. These markings are genetically determined and not affected by a background color change. Males are darker than females, with red or brown outer shells. They have a characteristic white spot in the middle that continues through the area around the eyes. Males specifically have two sets of red and white bands running both dorsally and laterally. Similar species of crab spiders appear in a variety of colors, such as those of the genus Diaea, which can be lime green, or some species of Xysticus and Coriarachne, which are brown. Color change These spiders change color based on visual cues. The color change is most obvious in females of this species. The ability of males and juveniles to change color has not been documented. Two other spiders known to have this color-change ability are Thomisus onustus and Thomisus spectabilis. Depending on the color of the flowers they see around them, they can secrete a liquid yellow pigment into the body's outer cell layer. The baseline color of the spider is white. In its white state, the yellow pigment is sequestered beneath the outer cell layer, so that the inner glands, which are filled with white guanine, are visible. Based on spectral reflectance functions, they can match white flowers, such as Chaerophyllum temulum (the rough chervil), with greater accuracy than yellow flowers. While the spider resides on a white plant, it tends to excrete the yellow pigment instead of storing it in its glands. In order to change back to yellow, the spider must first produce enough of the yellow pigment. For this reason it takes these spiders much longer to turn from white to yellow than from yellow to white: the color change from white to yellow can take between 10 and 25 days, while the opposite change takes only about six days. The yellow pigments are kynurenine and 3-hydroxykynurenine. Color changes are induced by visual cues, and spiders with impaired vision lose this ability. Notably, spiders of this species sometimes choose to hunt on flowers that, to the human eye, they do not appear to match in color. For instance, they can be found hunting on the pink petals of the pasture rose (Rosa carolina). The spider appears white, or changes to white, causing it to stand out to human observers.
Arthropods, on the other hand, serve as both predators and prey of Misumena vatia, and have photoreceptors that allow them to see ultraviolet, blue, and green light but often lack red receptors altogether. As a result, Misumena vatia is camouflaged to such viewers, appearing dark on a dark background. Sexual dimorphism Misumena vatia is highly sexually dimorphic: females are substantially larger than males. The female's legs are white or yellow, while the male's first and second legs are brown or red and the third and fourth are yellow. Additionally, in keeping with the male's smaller size, the male Misumena vatia molts two fewer times than the female. Other physical characteristics Misumena vatia has two rows of eyes. Those in the anterior row are equally spaced and curved backward, while those in the second row vary in appearance from animal to animal and can be curved more or less than the first. The area around the eyes is narrower in the front than the back. The spider's hair is erect and can be either filiform or rod-shaped. The legs do not have spines, except under the tibia and metatarsus of the first two pairs of legs. The appearance of the clypeus and the structure of the cephalothorax can be used to distinguish the genus Misumena within its subfamily. Habitat and distribution Misumena vatia is found only in North America and Europe. Other species of crab spiders, however, can be found all over the world. The species prefers a temperate climate and generally inhabits forest biomes. Misumena vatia is terrestrial and can be found on several plants and flowers, such as milkweed and goldenrod in North America, as well as trillium, white fleabane (Erigeron strigosus), ox-eye daisy (Chrysanthemum leucanthemum), red clover (Trifolium pratense) and buttercups (Ranunculus acris). Home range Females of this species do not travel more than a few yards (meters) from their feeding location. They are attracted by the fragrance of flowers, though other visual and tactile cues also help them choose a territory. Their survival depends on their ability to choose a small area that is home to flowering plants which will attract prey. Males are highly motile and may disperse great distances as they search for mates. Additionally, spiderlings may travel great distances by ballooning if they find the area around their nest to be lacking in resources. However, this is risky, as there is no guarantee that the search for a new territory will be successful. Patch choice Misumena vatia hunts large prey with a low rate of success. The hunting success rate is highly dependent upon the spider's choice of hunting site. Before they lay their eggs, females are heavy and slow, which necessitates that they choose a hunting site on which to stay. When hunting on their preferred plant, milkweed, they monitor nearby umbels to see if another site would be more profitable. Over the course of a few hours, Misumena vatia may recognize and move to another nearby hunting site. Most often, they move to flowers that produce more nectar and attract more prey; however, occasionally some intentionally and consistently pick less profitable sites to hunt on. Thus, the population is dimorphic in terms of patch choice behavior. The reason a minority might choose poorer hunting sites on purpose remains unknown. Diet Hunting patterns Crab spiders are carnivorous, feeding on invertebrate insects such as flies, bees, butterflies, grasshoppers, dragonflies, and hoverflies.
Bumblebees (Bombus appositus) provide the spider with the most biomass, but small syrphid flies (Toxomerus marginatus) are the prey captured most frequently. Other frequently captured prey include honeybees (Apis mellifera) and moths. Immature Misumena vatia commonly feed on smaller prey such as thrips, aphids in the family Aphididae, and dance flies in the family Empididae. They may also use nectar and pollen as food sources when prey is scarce. Misumena vatia is primarily dependent on its vision to hunt, so it typically finds and captures food during the day. Adult males search the upper stratum of field vegetation, where females are commonly found hunting, for potential mates. The spider can hunt bugs and insects larger than itself because it has the ability to use paralyzing venom to immobilize its prey. Misumena vatia waits, camouflaging itself on a flowering plant or on the ground, for prey to pass by, then grabs the prey with its forelegs and quickly injects its venom. Unlike many spiders, which wrap their prey in silk, Misumena vatia forgoes wrapping prey and instead allows its venom alone to subdue insects before eating them. It then uses its fangs to inject the immobilized prey with digestive enzymes before sucking out the rendered bodily liquids. This is a form of external digestion. As a result, prey size is not a limiting factor for consumption. Although Misumena vatia most often hunts during the daytime, there is evidence that it is sometimes driven to hunt at night by an increase in nocturnal prey activity. This behavior occurs most commonly in response to increased night-time activity by moths in early September. Misumena vatia has the ability to retain its excretions for at least 50 days and will not excrete when confined to small spaces or near its hunting sites, as excretion may alert predators to the spider's whereabouts. Diet-induced color change Misumena vatia can also change color as a result of prey consumption. Once consumed, colorful prey can show through the thin, transparent epidermis of the abdomen, affecting opisthosomal coloration. Ingestion of red-eyed fruit flies causes the abdomen to turn pink. Coloration changes caused by prey consumption revert to the normal white or yellow within 4–6 days of prey ingestion. Color change intensity is positively correlated with the amount of colorful prey consumed, and decreases with the spider's age. These spiders have been observed to have pink, orange, yellow, brown, green, or white opisthosomas depending on the prey consumed. Reproduction and lifecycle Sex ratios among Misumena vatia vary from 1.5 females per male at hatching to 2.5–5.1 females per male by the time they reach adulthood. Since males must spend considerable time searching for females, they face dangers from the environment, which reduces their numbers. Males cannot mate multiple times in quick succession but require a two-day interval between matings. In nature, Misumena vatia produces a single brood. However, females are capable of producing another brood if artificially induced. Nests Female Misumena vatia prefer common milkweed (Asclepias syriaca) over spreading dogbane (Apocynum androsaemifolium), pasture rose (Rosa carolina), and chokecherry (Prunus virginiana) for nest construction. Females who lay eggs on milkweed have higher nesting success, which correlates with early survival of clutches. The appearance of the nest can vary widely, depending on the type of plant on which it is constructed.
In the case of the pasture rose and the sensitive fern (Onoclea sensibilis), nests consist of several small leaves bound together. These nests are more vulnerable to predators, though, because they are not as tightly bound as those created on milkweed and have a greater area that is covered only by silk. Mate-guarding A minority of males, only about ten percent, guard pre-reproductive females as they molt into their adult stage. Almost all males who guard such females mate with them after they have molted. The low level of mate guarding is related to the female-biased sex ratio of Misumena vatia. Males of this species tend to guard less frequently and exhibit less aggression than those of closely related species, such as Misumenoides formosipes, which do not have a female-biased sex ratio. Lifecycle In May and early June, males molt into adulthood, with the number of adult males peaking between June 5 and July 15. Females do not molt into adulthood until mid to late June, with the number of adult females peaking around June 25. After males molt, their body mass does not increase further. Males do, however, undergo body changes as they enter the adult stage: their front legs lengthen while the abdomen shrinks. Females live an average of two years and spend most of this time guarding their egg sacs and the territory (flowers) on which they hunt. Males have shorter lifespans, by about a month. Only near the end of the female's second year of life will she allow males onto her territory to mate. Females lay their eggs most commonly in the middle of the summer; these hatch after 3.5 weeks. Females usually die very soon after their eggs have hatched, during their second winter. Young undergo one molt within the egg sac and emerge after hatching as second instars. They can sustain themselves for a few days with the nutrients from their yolk sacs. Mating Mate-searching behavior The much smaller males scamper from flower to flower in search of females and are often seen missing one or more of their legs. This may be due either to near misses by predators such as birds or to fighting with other males. Males exhibit a random pattern of searching for mates until they discover a female dragline. Females leave these draglines behind them as they search for prey, and males follow the draglines in search of potential mates. Unlike many spider species, the females do not deposit any pheromones on these lines; males follow the lines mechanically rather than chemically. The tendency for a male to follow a line is highly influenced by his life stage and that of the female of interest. Adult males preferentially follow adult female and juvenile female draglines, while penultimate males do not display a specific preference. Male Misumena vatia are also more likely to follow lines laid by their own species than those of a related species. Male–male interactions Two males interested in the same female may compete, since encounters with females are relatively rare. Competition may include light touching, chasing, foreleg lashing, grappling, and biting. If the female is being guarded by an existing male, the guarding male may either fend off the challenger or be replaced. The male who finds the female of interest first has an advantage in any ensuing contest. In this species, unlike many other species of spider, older and younger males are generally the same size.
While older males initiate contests more frequently than younger males, their attacks are less likely to include extensive bodily contact. Younger males win significantly more contests than older males. After a contest between males, the winner immediately mates with the female while the loser retreats. Female–male interactions When a male finds a female, he climbs over her head, over her opisthosoma and onto her underside, where he inserts his pedipalps to inseminate her. The male may wrap the female loosely with silk during copulation. Females have a pair of gonopores, which the male pedipalp may enter for copulation. When the male inserts his pedipalp into the female's gonopore, he makes rhythmic, vibratory movements that can last 1–2 seconds. Gonopore contacts of less than 30 seconds result in unfertilized eggs and a failed copulation. Matings last an average of four minutes. Males can accurately identify the reproductive condition of potential mates and prefer to mate with virgin females over those which have previously mated. Males mate for longer with virgin females and produce more pedipalp movements than during copulation with previously mated females. Males are most likely to enter both gonopores of a virgin female, while they may enter only one gonopore of a previously mated female. Females have a low probability of mating with a second male, but a higher probability of mating with a second male than with a third. The female then lays her eggs, preferentially on plants of the genus Asclepias (milkweeds). When preparing to lay eggs, she first identifies a suitable location for her brood. She descends the plant stalk down to the leaf that she chooses and then rolls up the end of the leaf. She secures the leaf by spreading silk, creating a cocoon-like structure, and lays her eggs inside the nest she has created. She tends to lay her eggs at night. The young continue to grow through the autumn and remain on the ground through winter. Their final molt, from penultimate instars to adults, occurs during May of the next year. Because Misumena vatia employs camouflage, it can focus more energy on growth and reproduction rather than on finding food and escaping from predators. As in many Thomisidae species, there is a positive correlation between female weight and egg clutch size, or fecundity. Selection for larger female body size thus increases reproductive success. Sperm quantity This species exhibits high first-male sperm precedence, so providing a virgin female with a large sperm quantity is advantageous. Because there is a very limited number of virgin females available at any given time, there is strong selective pressure favoring males that provide large sperm quantities. The need to produce a large sperm quantity for each copulation prevents males from remating quickly. Additionally, females are likely to deny subsequent males after their first mating, as further reproduction would interfere too strongly with the female's foraging success. More matings are also unfavorable because they can increase the risk of infection with sexually transmitted parasites or diseases. In some cases, males can tell whether a female has mated previously from a distance. This may be due to the female's heightened aggression, which manifests after she has mated once.
It is more common, however, that in order to assess the reproductive history of a given female, the male must first mount her, which is dangerous for the male as there is a chance the female may attack, capture, and kill him. Since females are difficult to locate and the cost of searching for them is so high, the risk is usually worthwhile to the male. In addition, since locating females is difficult, there is a low chance that another male has previously inseminated a given female, so it is beneficial for males to provide large quantities of sperm. This adaptation is also beneficial in the case that a female loses her first brood. When a female has been inseminated with a large quantity of sperm, she may have enough to fertilize a second brood to replace the lost first brood. Sexual cannibalism Like many other arachnids and insects, Misumena vatia may express sexual cannibalism; however, it is only considered moderately common. In cases of precopulatory sexual cannibalism, older males are more likely than younger males to be targets of attack, and are more likely to suffer death or injury as a result of such an attack, especially during the latter half of the mating season. This could be a result of a decreased ability among older males to evade attack from females. Older males do not tend to display riskier mating behavior than younger males. The size of the male does not influence its likelihood of being cannibalized during copulation. Females increase cannibalistic attacks as the mating season progresses. More males tend to be cannibalized after mid-July, which could be a result of male aging but is more likely a result of increased female aggression during this time. Non-reproductive cannibalism is uncommon among Misumena vatia. However, it has been observed in individuals in roughly one percent of broods. In these broods, cannibalism tends to occur among spiderlings. Cannibalistic individuals can be up to three times larger than non-cannibalistic ones. Parental care Egg-guarding Like many other species, Misumena vatia guards its nest to protect its vulnerable eggs from attack. Nest guarding increases the spider's overall reproductive success by protecting against predation from ichneumonid and dipteran egg predators. These spiders are usually observed guarding the nest by standing on its underside, the most vulnerable face of the nest. Most guarding spiders will remain by the nest until the young have begun to emerge from their eggs—about three weeks. A minority of spiders abandon their nests before spiderlings have hatched, while some may remain until all the young have hatched or longer. Most die within a few days of the hatching of their young. Enemies Parasitization by the ichneumonid wasp Trychosis cyperia, an egg predator, is common. The wasp deposits an egg in the nest and its larva feeds on the spider's eggs. One attack can destroy the nest completely. Misumena vatia experience strong selection to minimize attack from wasps, which is why egg guarding by the female is important for reproductive success. Wasps tend to feed on small egg masses guarded by small spiders, as small spiders cannot defend their nests as effectively. Other known predators include ants, other spiders, birds, lizards, and shrews. When defending the nest from an approaching predator, females typically raise their front legs in a display otherwise observed when they are attacking prey. 
Physiology Sensation These spiders respond quickly to motion that is both within and outside of their visual range. To do so, they rely heavily on several types of mechanoreceptors. Tactile hairs sense touch, trichobothria sense air currents, and slit sensilla are sensitive to vibrations and mechanical stresses. Vision, though still important, plays a lesser role in prey detection. Remarkably, Misumena vatia fail to notice prey when it is stationary. Vision These spiders have two rows of four eyes each, for a total of eight eyes. The antero-lateral (75 μm diameter) and postero-lateral (65 μm diameter) eyes are the larger of the four pairs of eyes. The antero-median eyes (59 μm diameter) are considered the principal eyes and, along with the postero-median eyes (55 μm diameter), constitute the smaller pairs. All of the eyes other than the principal eyes are considered the secondary eyes. The antero-median eyes appear the clearest, while the other sets of eyes appear darker. The postero-median eyes look directly upward, and their field of view overlaps somewhat with that of the postero-lateral eyes. The antero-lateral and postero-lateral eyes also share a slight overlap in their visual fields. The antero-lateral eyes give these spiders a region of binocular vision. The organization of the antero-lateral, postero-median, and postero-lateral eyes allows these spiders to see nearly their entire upper visual environment. The four pairs of eyes are similar in structure, all containing a retina, a dioptric apparatus, and a cellular vitreous body. The outermost layer of the eye is the lens. The columnar cells of the vitreous body stand between the lens and the retina, and their nuclei rest next to the retina. Three layers of pigment cells surround the vitreous body. The epidermis is the outer layer, and it contains electron-dense granules and electron-lucent inclusions of micro-crystals. The middle layer contains dark pigment granules, and the innermost layer contains larger dark pigment granules inside glial cells. These layers prevent light that may enter through a nearby transparent cuticle from reaching the retina, keeping each eye isolated. The retina contains photoreceptor cells and other supporting cells. The principal eyes have a complex and unique organization. They have three different photoreceptive segments. The periphery contains a half-circle of one type of rhabdomere, while the center is pigmented and contains two types of rhabdomeres. These spiders also have a "giant rhabdom" in the lowest layer of the center of the retina. Only the light entering along its optic axis stimulates this giant rhabdom, so the visual information comes in the shape of a dot. Misumena vatia can control the trajectory of the giant rhabdom by moving their eye muscles, so these single points of visual information are integrated to generate the spider's visual field. Vision plays an important role in the spider's substrate color matching. Misumena vatia have the necessary physiological machinery to see color, and are most sensitive to wavelengths of light between 340 and 520 nm. Misumena vatia's principal eyes have tiered retinas, with four layers containing different types of photoreceptors. These spiders have been shown to have green and UV photoreceptors, and likely have many other types which allow them to see a full range of colors. The secondary eyes are dichromatic, meaning that they have two types of photoreceptors. 
Since Misumena vatia use their visual systems to inform color changes, they must be able to see color in their environment and on their own bodies. The visual field of the antero-lateral and antero-median eyes allows the spider to see its legs, while the postero-lateral eyes see the opisthosoma. Since the visual fields are so wide, these spiders see their own bodies and the color of their surroundings, which supports the idea that color matching is facilitated by the visual system. Autotomy Autotomy, the self-amputation of a leg, can happen in a variety of critical situations, including fleeing from predators, fighting, and getting rid of parasites. The disadvantage is obvious, but most spiders can regrow lost limbs if the loss occurs during a juvenile stage and before the final molt. The loss of an anterior leg is common among males. Over their lifetimes, approximately 30 percent of males will lose one of their anterior legs. One direct disadvantage of losing a leg is a decrease in mobility. Spiders with all eight legs have considerably higher body weights, which suggests that losing legs negatively impacts foraging; leg loss also significantly decreases the speed with which males can move along draglines. Since females are widely dispersed, the impairment of mobility adversely affects the male's reproductive success. References External links Thomisidae Articles containing video clips Spiders of Europe Mimicry Taxa named by Carl Alexander Clerck Spiders described in 1757 Spiders of North America
Misumena vatia
Biology
6,181
60,377,295
https://en.wikipedia.org/wiki/West%20PC-800
The West PC-800 was a home computer introduced by the Norwegian company West Computer AS in 1984. The computer was designed as an alarm center allowing use of several CPUs (6502, Z80, 8086, 68000) and operating systems. The company introduced an IBM PC compatible in early 1986 and the West PC-800 line was phased out. History West Computer AS was founded in late 1983 by Tov Westby, Terje Holen and Geir Ståle Sætre. In early 1984, the company presented its computer, then called Sherlock, at the Mikrodata'84 fair. The new computer had both 6502 and Z80 CPUs, promised rich expansion capabilities and included two rather unusual features: a wireless keyboard and an alarm device, which could report fire, flood or burglary via phone and the built-in modem. The machine was released in autumn 1984 at the Sjølyst "Home and Hobby" fair. The West PC-800 did not sell as well as expected, probably due to the Apple II's weak position in Norway, and West Computer AS announced in late 1985 the IBM PC compatible West PC 1600. In March 1985, the price of the basic computer was NOK10,200. An additional package with one floppy disk drive (200 KB unformatted capacity), 3 applications and 3 games was available for NOK3,750, and another floppy disk drive for NOK3,300. Features West Computer designed its computer primarily as an alarm center, while emphasizing that it could also function as a games machine (thanks to its Apple II compatibility). From ca. serial no. 100 the machine became Apple II Plus compatible due to an updated BIOS. Built-in software included two BASIC variants (one for the 6502, one for the Z80); for the 6502, only an old BASIC variant was provided, for full Applesoft BASIC compatibility. Disk drives are controlled by West DOS (similar to Apple DOS), whose commands are accessible directly from BASIC. However, ProDOS - at the time of the machine's introduction - was not compatible with West DOS. A Z80 CPU was available for CP/M compatibility. As access to the Z80 is via the 6502, its performance is crippled by design. The company offered additional CPU cards (e.g. Z80B 6 MHz) to improve the performance. The alarm system is independent of the machine and has its own CPU and memory. The supplied 300/300 baud modem can work as an autodial modem and includes a telephone number database. The modem can be connected to sensors, and during an alarm situation the machine will dial selected number(s). The alarm system also works with a wearable "panic button" with an infrared transmitter, and the computer may even dial another number if the first desired number does not respond. The wireless keyboard offers 20 function keys and Caps Lock, with another key to turn the keyboard on and off. It is able to operate up to 12–15 meters from the machine for about three hours, and recharging takes about 16 hours. The West PC-800 can take several CPU cards, including an MS-DOS compatibility package (NOK3,000) and Motorola 68000 expansion cards (NOK7,000–12,000). There was even a Motorola 6809 CPU card for OS-9 compatibility. The computer allows cassette and floppy disk drive data storage. The standard floppy disk drive (FDD) had a 142 KB formatted capacity (Apple II compatible) and there were several other storage options, e.g. an additional 655 KB FDD, a 128 KB RAM disk or hard disk drives up to 20 MB. The West PC-800 offers rich expansion capabilities thanks to its Apple II compatible expansion bus with 7 expansion slots, but some are occupied in the standard configuration (e.g. by the alarm card or RF modulator). 
Hardware details
4 microprocessors:
Z80A 4 MHz CPU for CP/M
6502 1 MHz CPU for Apple II compatibility
8400 CPU for the alarm and modem (300 baud)
8035 CPU in the keyboard
64 KB RAM (expandable to 192 KB, or up to 1 MB with an additional CPU card)
18 KB ROM (10 KB BASIC, 2 KB system monitor, 2 KB character set, 4 KB alarm/modem)
Ports: joystick (analog/digital), composite video (PAL), RF video modulator (PAL), Datasette, RS232, phone outlet for the modem
Graphics:
Text mode: 40x24 (with 5x7 points per character)
Mode 1: 40x48, 15 colours
Mode 2: 140x192, 6 colours
Mode 3: 280x192, black/white
7 expansion slots (4 available for expansion in the standard configuration)
IR receiver for the wireless keyboard
Optional upgrades, e.g. an 8086 or 68000 card with up to 1 MB RAM
Reception The West PC-800 was well received by the press. Its alarm features and the high flexibility of the machine's design were especially lauded. On the other hand, its graphics capabilities were considered dated by 1985 standards, and support for some of the platforms was rather rudimentary (e.g. only an old MS-DOS version was supplied, the Z80 was slow without a dedicated Z80 CPU card, and data transfer on the available floppy disk drive was limited). A review in Hjemme-Data magazine concluded, "it is hard to judge the computer, as it stands too far outside of the regular market." Marketing West Computer chose the advertising agency Næss og Mørch, with Jørgen Gulvik as creative director, for the introduction campaign for the new home computer ahead of the 1984 Christmas sales. Together with founder Tov Westby and CEO Fredrik Stange, they designed the ad, which won an award from the Norwegian Advertising Association as the best advertising for consumer products in 1984. Apple would use the same picture in their advertising for the Think Different campaign in 1997. Notes References External links Facebook group for discussing the West PC-800 More pictures of the West PC-800 West PC-800 emulator Vision and concept for the development of Norway's first home computer with immediate benefit! (Norwegian) The West Story The story of West Computers as seen by author Dag Westby (Norwegian) Norwegian news broadcast from NRK about the West PC-800. Computers Apple II clones Microcomputers Personal computers Computer-related introductions in 1984
West PC-800
Technology
1,313
19,657,952
https://en.wikipedia.org/wiki/Ladder%20graph
In the mathematical field of graph theory, the ladder graph $L_n$ is a planar, undirected graph with $2n$ vertices and $3n-2$ edges. The ladder graph can be obtained as the Cartesian product of two path graphs, one of which has only one edge: $L_n = P_n \times P_2$. Properties By construction, the ladder graph $L_n$ is isomorphic to the grid graph $G_{2,n}$ and looks like a ladder with $n$ rungs. It is Hamiltonian with girth 4 (if n>1) and chromatic index 3 (if n>2). The chromatic number of the ladder graph is 2 and its chromatic polynomial is $(x-1)\,x\,(x^2-3x+3)^{n-1}$. Ladder rung graph Sometimes the term "ladder graph" is used for the $n \times P_2$ ladder rung graph, which is the graph union of $n$ copies of the path graph $P_2$. Circular ladder graph The circular ladder graph $CL_n$ is constructible by connecting the four degree-2 vertices of the ladder graph in a straight (non-crossing) way, or by the Cartesian product of a cycle of length n ≥ 3 and an edge. In symbols, $CL_n = C_n \times P_2$. It has $2n$ nodes and $3n$ edges. Like the ladder graph, it is connected, planar and Hamiltonian, but it is bipartite if and only if $n$ is even. Circular ladder graphs are the polyhedral graphs of prisms, so they are more commonly called prism graphs. Möbius ladder Connecting the four degree-2 vertices crosswise instead creates a cubic graph called a Möbius ladder. References Parametric families of graphs Planar graphs
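These counts and colorings are easy to sanity-check computationally. A minimal sketch using networkx's standard ladder-graph generators (the choice n = 5 is ours, for illustration):

import networkx as nx

n = 5
L = nx.ladder_graph(n)                      # L_n = P_n x P_2
assert L.number_of_nodes() == 2 * n         # 2n vertices
assert L.number_of_edges() == 3 * n - 2     # 3n - 2 edges
assert nx.is_bipartite(L)                   # hence chromatic number 2

CL = nx.circular_ladder_graph(n)            # CL_n, the prism graph
assert CL.number_of_nodes() == 2 * n        # 2n nodes
assert CL.number_of_edges() == 3 * n        # 3n edges
assert nx.is_bipartite(CL) == (n % 2 == 0)  # bipartite iff n is even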
Ladder graph
Mathematics
306
6,135,775
https://en.wikipedia.org/wiki/Grimm%27s%20hydride%20displacement%20law
Grimm's Hydride Displacement Law is an early hypothesis, formulated in 1925, to describe bioisosterism, the ability of certain chemical groups to function as or mimic other chemical groups. “Atoms anywhere up to four places in the periodic system before an inert gas change their properties by uniting with one to four hydrogen atoms, in such a manner that the resulting combinations behave like pseudoatoms, which are similar to elements in the groups one to four places respectively, to their right.” According to Grimm, each vertical column of his table would represent a group of isosteres. References Grimm, H. G. Structure and Size of the Non-metallic Hydrides. Z. Electrochem. 1925, 31, 474–480. Grimm, H. G. On the Systematic Arrangement of Chemical Compounds from the Perspective of Research on Atomic Composition; and on Some Challenges in Experimental Chemistry. Naturwissenschaften 1929, 17, 557–564. Patani, G. A.; LaVoie, E. J. Bioisosterism: A Rational Approach in Drug Design. Chem. Rev. 1996, 96, 3147–3176. Medicinal chemistry Hydrides
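Grimm's table does not survive in this copy of the article. A standard reconstruction of the scheme (as commonly reproduced, e.g. in the Patani–LaVoie review cited above) is:

C    N    O    F    Ne
     CH   NH   OH   FH
          CH2  NH2  OH2
               CH3  NH3
                    CH4

Reading down any column, each added hydrogen creates a pseudoatom that mimics the element at the head of that column: CH behaves like N, CH2 like O, NH2 like F, and CH4, NH3, OH2 and FH all behave like Ne.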
Grimm's hydride displacement law
Chemistry,Biology
254
1,144,848
https://en.wikipedia.org/wiki/Sodium%20amalgam
Sodium amalgam, with the common formula Na(Hg), is an alloy of mercury and sodium. The term amalgam is used for alloys, intermetallic compounds, and solutions (both solid solutions and liquid solutions) involving mercury as a major component. Sodium amalgams are often used in reactions as strong reducing agents with better handling properties compared to solid sodium. They are less dangerously reactive toward water and in fact are often used as an aqueous suspension. Sodium amalgam was used as a reagent as early as 1862. A synthesis method was described by J. Alfred Wanklyn in 1866. Structure and compositions No particular formula is assigned to "sodium amalgam". Na5Hg8 and Na3Hg are well-defined compounds. In sodium amalgams, the Hg-Hg distances are expanded to around 5 Å vs. about 3 Å for mercury itself. Usually amalgams are classified by the weight percent of sodium. Amalgams with 2% Na are solids at room temperature, whereas some more dilute amalgams remain liquid. Preparation Metallic sodium dissolves in mercury exothermically, i.e. with the release of heat; the formation of sodium amalgam is therefore notoriously hazardous, as it can generate sparks. The process causes localised boiling of the mercury, and for this reason the preparation is usually conducted in a fume hood and often performed using air-free techniques, such as synthesis under anhydrous liquid paraffin. Sodium amalgam may be prepared in the laboratory by dissolving sodium metal in mercury or the reverse. Sodium amalgams can be purchased from chemical supply houses. Uses Sodium amalgam has been used in organic chemistry as a powerful reducing agent, which is safer to handle than sodium itself. It is used in the Emde degradation, and also for the reduction of aromatic ketones to hydrols. A sodium amalgam is used in the design of the high-pressure sodium lamp, providing sodium to produce the proper color and mercury to tailor the electrical characteristics of the lamp. Mercury cell electrolysis Sodium amalgam is a by-product of chlorine made by mercury cell electrolysis. In this cell, brine (concentrated sodium chloride solution) is electrolysed between a liquid mercury cathode and a titanium or graphite anode. Chlorine is formed at the anode, while sodium formed at the cathode dissolves into the mercury, making sodium amalgam. Normally this sodium amalgam is drawn off and reacted with water in a "decomposer cell" to produce hydrogen gas, concentrated sodium hydroxide solution, and mercury to be recycled through the process. In principle, all the mercury should be completely recycled, but inevitably a small portion goes missing. Because of concerns about this mercury escaping into the environment, the mercury cell process is generally being replaced by plants which use a less toxic cathode. References External links Oxford MSDS Reducing agents Sodium alloys Amalgams
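In outline, the chemistry of the mercury cell and decomposer described above can be written as the usual textbook equations (a sketch in standard stoichiometry, not quoted from this article):

$$\text{anode:}\quad 2\,\mathrm{Cl^-} \longrightarrow \mathrm{Cl_2} + 2\,e^-$$
$$\text{cathode:}\quad \mathrm{Na^+} + e^- \xrightarrow{\;\mathrm{Hg}\;} \mathrm{Na(Hg)}$$
$$\text{decomposer:}\quad 2\,\mathrm{Na(Hg)} + 2\,\mathrm{H_2O} \longrightarrow 2\,\mathrm{NaOH} + \mathrm{H_2} + 2\,\mathrm{Hg}$$

The decomposer step also illustrates why the amalgam serves as a safer stand-in for sodium: the sodium's activity in the amalgam is greatly reduced, so its reaction with water is slow enough to control.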
Sodium amalgam
Chemistry
623
26,700,564
https://en.wikipedia.org/wiki/Harmonic%20differential
In mathematics, a real differential one-form ω on a surface is called a harmonic differential if ω and its conjugate one-form, written as ω∗, are both closed. Explanation Consider the case of real one-forms defined on a two dimensional real manifold. Moreover, consider real one-forms that are the real parts of complex differentials. Let $\omega = A\,dx + B\,dy$, and formally define the conjugate one-form to be $\omega^* = A\,dy - B\,dx$. Motivation There is a clear connection with complex analysis. Let us write a complex number z in terms of its real and imaginary parts, say x and y respectively, i.e. $z = x + iy$. Since $\omega + i\omega^* = (A - iB)(dx + i\,dy) = (A - iB)\,dz$, from the point of view of complex analysis, the quotient $(\omega + i\omega^*)/dz$ tends to a limit as $dz$ tends to 0. In other words, the definition of ω∗ was chosen for its connection with the concept of a derivative (analyticity). Another connection with the complex unit is that $(\omega^*)^* = -\omega$ (just as $i^2 = -1$). For a given function f, let us write $\omega = df$, i.e. $\omega = \frac{\partial f}{\partial x}\,dx + \frac{\partial f}{\partial y}\,dy$, where ∂ denotes the partial derivative. Then $(df)^* = \frac{\partial f}{\partial x}\,dy - \frac{\partial f}{\partial y}\,dx$. Now $d\left((df)^*\right)$ is not always zero, indeed $d\left((df)^*\right) = \Delta f\,dx\,dy$, where $\Delta f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}$. Cauchy–Riemann equations As we have seen above: we call the one-form ω harmonic if both ω and ω∗ are closed. This means that $\frac{\partial A}{\partial y} = \frac{\partial B}{\partial x}$ (ω is closed) and $\frac{\partial A}{\partial x} = -\frac{\partial B}{\partial y}$ (ω∗ is closed). These are called the Cauchy–Riemann equations on $A - iB$. Usually they are expressed in terms of $u(x,y) + iv(x,y)$ as $\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}$ and $\frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}$. Notable results A harmonic differential (one-form) is precisely the real part of an (analytic) complex differential. To prove this one shows that $w = A - iB$ satisfies the Cauchy–Riemann equations exactly when $w$ is locally an analytic function of $x + iy$. Of course an analytic function is the local derivative of something (namely $\int w(z)\,dz$). The harmonic differentials ω are (locally) precisely the differentials $df$ of solutions $f$ to Laplace's equation $\Delta f = 0$. If ω is a harmonic differential, so is ω∗. See also De Rham cohomology References Mathematical analysis
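As a quick computational check of the identity $d\left((df)^*\right) = \Delta f\,dx\,dy$ above, here is a small SymPy sketch (illustrative only; the component bookkeeping follows the sign convention for ω∗ used in this article):

import sympy as sp

x, y = sp.symbols('x y', real=True)
f = sp.Function('f')(x, y)

# omega = df has components (A, B) = (f_x, f_y) in A dx + B dy
A, B = sp.diff(f, x), sp.diff(f, y)

# conjugate one-form omega* = A dy - B dx, i.e. components (P, Q) = (-B, A)
P, Q = -B, A

# the exterior derivative of P dx + Q dy is (Q_x - P_y) dx dy
d_omega_star = sp.diff(Q, x) - sp.diff(P, y)
print(sp.simplify(d_omega_star))  # prints f_xx + f_yy, the Laplacian of f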
Harmonic differential
Mathematics
419
28,306,592
https://en.wikipedia.org/wiki/NAS%20Award%20in%20the%20Neurosciences
The NAS Award in the Neurosciences is awarded by the U.S. National Academy of Sciences "in recognition of extraordinary contributions to progress in the fields of neuroscience, including neurochemistry, neurophysiology, neuropharmacology, developmental neuroscience, neuroanatomy, and behavioral and clinical neuroscience." It was first awarded in 1988. Recipients Source: National Academy of Sciences Nancy Kanwisher (2022) For her groundbreaking insights into the functional organization of the human brain, including the discovery of neocortical subregions that differentially engage in the perception of faces, places, music and even what others think, thereby linking modularity of mind theories to neuroscience. Eve Marder (2019) For her body of work that has transformed the perception of neuronal circuits as static structures well-described by connectivity diagrams, to our current understanding of microcircuits as flexible and dynamic entities that efficiently balance the needs for plasticity and stability. Mortimer Mishkin (2016) For fundamental contributions to understanding the functional organization of the primate brain, including discovery of the visual functions of inferior temporal cortex, the role of the dorsal and ventral visual pathways in spatial and object processing, and anatomical descriptions of cognitive and non-cognitive memory systems. Solomon H. Snyder (2013) For the elucidation of fundamental mechanisms of chemical signaling, including opiate receptors, NO signaling, and other neurotransmitter/receptor interactions. Roger A. Nicoll (2010) For his seminal discoveries elucidating cellular and molecular bases for synaptic plasticity in the brain. Jean-Pierre Changeux (2007) For the pioneering discovery that fast-acting neurotransmitters mediate their effects through allosteric regulation of the neurotransmitter protein. Brenda Milner (2004) For her pioneering and seminal investigations of the functioning of the temporal lobes and other brain regions in learning, memory, and speech. Seymour Benzer (2001) For his pioneering contributions which have brought neurogenetics to maturity. Benzer's discoveries in fruit flies have identified specific genes contributing to behaviors of central importance. Vernon B. Mountcastle (1998) For his discovery of the columnar organization of the mammalian cerebral cortex and for original studies relating behavior to function of single cells in higher cortical areas. Walle J. H. Nauta (1994) For development of a powerful method for determining connectivity among specific brain sites and thus establishing now-classical circuits in the limbic system. Paul Greengard (1991) For his discovery of the central role played by neuronal phosphoproteins in normal brain function as well as in neuropsychiatric and related disorders. Seymour S. Kety and Louis Sokoloff (1988) For developing techniques to measure brain blood flow and metabolism -- valuable tools in the study of brain function that have major applications in clinical medicine. See also List of neuroscience awards References Awards established in 1988 Neuroscience awards Awards of the United States National Academy of Sciences
NAS Award in the Neurosciences
Technology
622
4,024,861
https://en.wikipedia.org/wiki/Transpiration%20stream
In plants, the transpiration stream is the uninterrupted stream of water and solutes which is taken up by the roots and transported via the xylem to the leaves, where it evaporates into the air/apoplast-interface of the substomatal cavity. It is driven by capillary action and in some plants by root pressure. The main driving factor is the difference in water potential between the soil and the substomatal cavity caused by transpiration. Transpiration Transpiration can be regulated through stomatal closure or opening. It allows plants to transport water efficiently up to their highest organs, to regulate the temperature of the stem and leaves, and it allows for upstream signaling, such as the dispersal of an apoplastic alkalinization during local oxidative stress. Summary of water movement: soil → roots and root hairs → xylem → leaves → stomata → air. Osmosis The water passes from the soil to the root by osmosis. The long and thin shape of root hairs maximizes surface area so that more water can enter. There is greater water potential in the soil than in the cytoplasm of the root hair cells. As the surface membrane of the root hair cell is semi-permeable, osmosis can take place, and water passes from the soil to the root hairs. The next stage in the transpiration stream is water passing into the xylem vessels. The water either goes through the cortex cells (between the root cells and the xylem vessels) or it bypasses them, going through their cell walls. After this, the water moves up the xylem vessels to the leaves through diffusion and a pressure change between the top and bottom of the vessel. Diffusion takes place because there is a water potential gradient between water in the xylem vessel and the leaf (as water is transpiring out of the leaf). This means that water diffuses up the leaf. There is also a pressure change between the top and bottom of the xylem vessels, due to water loss from the leaves. This reduces the pressure of water at the top of the vessels, so water moves up the vessels. The last stage in the transpiration stream is the water moving into the leaves, and then the actual transpiration. First, the water moves into the mesophyll cells from the top of the xylem vessels. Then the water evaporates out of the cells into the spaces between the cells in the leaf. After this, the water leaves the leaf (and the whole plant) by diffusion through stomata. See also Soil plant atmosphere continuum for modelling plant transpiration. References Felle HH, Herrmann A, Hückelhoven R, Kogel K-H (2005) Root-to-shoot signalling: apoplastic alkalinization, a general stress response and defence factor in barley (Hordeum vulgare). Protoplasma 227, 17 - 24. Salisbury F, Ross C (1991) Plant Physiology. Brooks Cole, pp 682. Plant physiology
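The driving gradient can be made explicit with the standard plant-physiology notation for water potential (the symbols below are the textbook convention, not something this article itself uses):

$$\Psi = \Psi_s + \Psi_p, \qquad \Psi_{\text{soil}} > \Psi_{\text{root}} > \Psi_{\text{xylem}} > \Psi_{\text{leaf}} > \Psi_{\text{air}}$$

Here $\Psi_s$ is the solute (osmotic) potential and $\Psi_p$ the pressure potential; at each step of the chain summarized above, water moves spontaneously from the compartment with higher $\Psi$ to the one with lower $\Psi$.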
Transpiration stream
Biology
636
774,009
https://en.wikipedia.org/wiki/Hulbert%20Harrington%20Warner
Hulbert Harrington Warner (1842–1923) was a Rochester, New York businessman and philanthropist who made his fortune from the sales of patent medicine. Biography He was born near Syracuse, New York, in a small settlement called Warners. Warners had been named for Warner's grandfather, Seth, who had moved there in 1807 from Stockbridge, Massachusetts. In 1865, Warner moved to Michigan to engage in the stove and hardware business. In 1870, Warner moved to Rochester and entered into the first business that would make him a millionaire, selling fire- and burglar-proof safes. The demand for safes had escalated dramatically after the discovery of oil in western Pennsylvania; by decade's end, it is estimated that Warner and his sales agents had sold 60,000 safes worth an estimated $10 million ($ in present terms). Marriage and children Warner was married twice. He married Martha L. Keeney of Skaneateles, New York in 1864. Martha died suddenly in 1871, and is buried at Lakeview Cemetery in Skaneateles. In 1872, Warner remarried, this time to Emily Olive Stoddard of Michigan. Although the details of his second marriage remain vague, it appears that Warner and Stoddard separated in 1893. It appears that the couple may have had one child, Maud, but there is little information available about her. Warner later lived with Christina de Martinez of Mexico. Warner and Martinez were never actually married (and it appears that Warner and Stoddard were never divorced), but Martinez took Warner's name as her own and they resided in the same household after Warner moved to Minneapolis. Patent medicines Safe Cure Based upon the history recounted in Warner's early almanacs, Warner used a portion of the wealth he accumulated from the safe business to purchase the formula for a patent medicine from Dr. Charles Craig of Rochester. Warner developed an unexpectedly severe case of Bright's disease, a kidney disease. While close to death, Warner used a vegetable concoction sold by Craig and was restored to health. Based upon his admiration for Craig's Original Kidney Cure, Warner purchased the formula and the rights to the product and in 1879 introduced Warner's Safe Kidney & Liver Cure. Although Warner's early publications herald Craig's potion as a revelation, references to Craig soon disappeared from Warner's advertising, and ultimately the two ended up in court when Craig attempted to reenter the patent medicine business with a cure remarkably similar to the one he had sold to Warner. In addition to his Kidney & Liver Cure, Warner also introduced a Safe Nervine, Safe Diabetes Cure, Safe Tonic, Safe Tonic Bitters, Safe Bitters, Safe Rheumatic Cure, Safe Pills, and later his Tippecanoe Bitters. The Warner's patent medicine products, with the exception of the Safe Pills and Tippecanoe, appeared in a unique bottle, which featured an embossed safe on the front. This drew upon his earlier business and implied to his potential customers that his product posed no risk. In January, 1884, Warner opened his new Rochester headquarters in a lavish multi-story building on St. Paul Street. The H. H. Warner Building became the centerpiece of his medicine production and turned out an estimated of Safe Cure per day. It also served as the headquarters for his promotional department, which published an untold number of almanac and advertising circulars distributed with his medicines to local druggists and grocers. The Warner Building still exists today and houses a variety of businesses. Its granite façade still bears the initial "W". 
Log Cabin Remedies In 1887, Warner introduced a new product line, which he called his Log Cabin Remedies. Unlike his Safe Cures, these products appeared in amber bottles with three slanted panels with the name of the particular remedy embossed. The bottles were in red, white, blue, and yellow boxes that featured the image of a log cabin viewed from a window. The Log Cabin Remedies did not replace the Safe Cure line; they only supplemented it. Warner realized that the nation was in a headlong race for expansion westward, and his marketing pitch appealed to the American desire for self-reliance. Indeed, the entire thrust of Warner's marketing from its inception can best be described as appealing to his customers' desire to "heal thyself". Warner's Foreign Offices Based upon his success in marketing his Safe Cure products in the United States, Warner quickly decided to expand his operation internationally. In 1883, he opened offices in Toronto, Ontario, Canada and London, England. The bottles from Toronto have become known as "3-Cities", because they featured the names of all of his offices at that time: Rochester, London, and Toronto. In 1887, he opened offices in Melbourne, Australia and Frankfurt, Germany. In 1888, he expanded to Pressburg in Hungary; however, this office lasted only two years. In 1891, he opened an office in Dunedin, New Zealand; the bottles from that office have become known as "4-Cities", bearing the names of Rochester, Toronto, London, and Melbourne. The Dunedin office was likely little more than a laboratory and, in fact, bottles from the Melbourne and Dunedin offices were likely produced in either Rochester or London and shipped to the southern-hemisphere offices due to the primitive state of glass production that existed there at the time. Warner's advertising also boasts offices in Kreuzlingen, Switzerland; Brussels; and Paris. No bottles with these cities embossed have ever appeared, and only one bottle labeled in French is known to exist. Warner's offices lasted well into the 20th century, with the Rochester office closing around 1944. Philanthropy and failure Having made millions on his second business in patent medicine, Warner embarked on various philanthropic endeavors, most notably his sponsorship of the Warner Observatory in Rochester. Prior to opening his patent medicine business, Warner had chanced to meet Dr. Lewis Swift, an astronomer, who was ready to leave Rochester for Colorado when Warner convinced him to stay and operate his new observatory. The observatory was completed in 1883 at the then-staggering cost of $100,000 (in current terms, $). It was equipped with a state-of-the-art telescope and was pronounced the best-equipped private observatory in the world. The Observatory was used as a marketing centerpiece by Warner. His almanacs at the time ran essay contests and featured images of the Observatory. Swift used the Observatory to good effect, and reportedly discovered six new comets and 900 nebulae. At one point, Warner offered a reward of $200 for each new comet discovered. This offer was of great help to the young astronomer Edward Emerson Barnard, who claimed eight such awards and used the proceeds to set himself and his new wife up in a newly built house in Nashville, Tennessee. Astronomer Swift and his telescope left Rochester in 1894. The Observatory was demolished in December, 1931. Warner also used his money to construct a lavish mansion for himself on East Avenue in Rochester. The house fell into disuse and was later demolished. 
Warner's patent-medicine empire reached its pinnacle in the late 1880s and began its gradual decline. Flush with success, Warner spent money on highly speculative investments in mining, all of which failed. In an effort to generate more capital, he took the company public, which did generate some revenue. He sold the company to an English investment group in 1889, which incorporated it as H. H. Warner & Co., Ltd. Warner bought up 80 percent of the English stock, and took the position of managing director of the company. However, Warner's speculative investments and his waning interest in the business took their toll. When the Panic of 1893 hit, Warner was unable to generate additional capital through stock sales, forcing him into bankruptcy. The American branch of his company was sold to a group of Rochester investors, who continued to operate it as the Warner's Safe Remedies Company. Life after Safe Cure After failing in Rochester, Warner lived for a time in New York City, then moved to Philadelphia, where he may have attempted to start a new patent medicine business, although this is unconfirmed. He ultimately landed in Minneapolis, where he promoted the Nuera Manufacturing Co., also known as the Neura Remedy Co., with the help of his common-law wife Christina de Martinez. He also operated the Warner Renowned Remedies Company, which produced some products offered by mail order. Warner died in January, 1923, and is buried alongside his first wife, Martha, in Lakeview Cemetery in Skaneateles, NY. His legacy is his patent medicine empire, which produced remedies sold around the world, as well as the bottles in which those remedies were contained. The bottles are prized by collectors. References Atwater, Edward C., "Hulbert Harrington Warner and the Perfect Pitch: Sold Hope, Made Millions," New York History, 56(2): 154-190 (1975). Seeliger, Michael, "H. H. Warner: His Company & His Bottles" (1974). Stecher, Jack, "H. H. Warner: World Renowned Patent Medicine King Biographical Sketch," Applied Seals (April 22, 2001). Seeliger, Michael, "H. H. Warner: His Company & His Bottles 2.0" (revised edition in digital format on a thumb drive). External links Short biography Warner Observatory Warner bottles Warner's Safe Blog 1842 births 1923 deaths People associated with astronomy Businesspeople from Rochester, New York Patent medicine businesspeople
Hulbert Harrington Warner
Astronomy
1,963
17,118,964
https://en.wikipedia.org/wiki/Confusion%20of%20the%20inverse
Confusion of the inverse, also called the conditional probability fallacy or the inverse fallacy, is a logical fallacy whereby a conditional probability is equated with its inverse; that is, given two events A and B, the probability of A happening given that B has happened is assumed to be about the same as the probability of B given A, when there is actually no evidence for this assumption. More formally, P(A|B) is assumed to be approximately equal to P(B|A). Examples Example 1 In one study, physicians were asked to give the chances of malignancy with a 1% prior probability of occurring. A test can detect 80% of malignancies and has a 10% false positive rate. What is the probability of malignancy given a positive test result? Approximately 95 out of 100 physicians responded that the probability of malignancy would be about 75%, apparently because the physicians believed that the chances of malignancy given a positive test result were approximately the same as the chances of a positive test result given malignancy. The correct probability of malignancy given a positive test result as stated above is 7.5%, derived via Bayes' theorem: $$P(\text{malignant}\mid\text{positive}) = \frac{P(\text{positive}\mid\text{malignant})\,P(\text{malignant})}{P(\text{positive})} = \frac{0.80 \times 0.01}{0.80 \times 0.01 + 0.10 \times 0.99} \approx 0.075.$$ Other examples of confusion include: Hard drug users tend to use marijuana; therefore, marijuana users tend to use hard drugs (the first probability is marijuana use given hard drug use, the second is hard drug use given marijuana use). Most accidents occur within 25 miles from home; therefore, you are safest when you are far from home. Terrorists tend to have an engineering background; so, engineers have a tendency towards terrorism. For other errors in conditional probability, see the Monty Hall problem and the base rate fallacy. Compare to illicit conversion. Example 2 In order to identify individuals having a serious disease in an early curable form, one may consider screening a large group of people. While the benefits are obvious, an argument against such screenings is the disturbance caused by false positive screening results: If a person not having the disease is incorrectly found to have it by the initial test, they will most likely be distressed, and even if they subsequently take a more careful test and are told they are well, their lives may still be affected negatively. If they undertake unnecessary treatment for the disease, they may be harmed by the treatment's side effects and costs. The magnitude of this problem is best understood in terms of conditional probabilities. Suppose 1% of the group suffer from the disease, and the rest are well. Choosing an individual at random, $P(\text{ill}) = 1\%$ and $P(\text{well}) = 99\%$. Suppose that when the screening test is applied to a person not having the disease, there is a 1% chance of getting a false positive result (and hence 99% chance of getting a true negative result, a number known as the specificity of the test), i.e. $P(\text{positive}\mid\text{well}) = 1\%$ and $P(\text{negative}\mid\text{well}) = 99\%$. Finally, suppose that when the test is applied to a person having the disease, there is a 1% chance of a false negative result (and 99% chance of getting a true positive result, known as the sensitivity of the test), i.e. $P(\text{negative}\mid\text{ill}) = 1\%$ and $P(\text{positive}\mid\text{ill}) = 99\%$. 
Calculations The fraction of individuals in the whole group who are well and test negative (true negative): $P(\text{well} \cap \text{negative}) = P(\text{well}) \times P(\text{negative}\mid\text{well}) = 99\% \times 99\% = 98.01\%.$ The fraction of individuals in the whole group who are ill and test positive (true positive): $P(\text{ill} \cap \text{positive}) = P(\text{ill}) \times P(\text{positive}\mid\text{ill}) = 1\% \times 99\% = 0.99\%.$ The fraction of individuals in the whole group who have false positive results: $P(\text{well} \cap \text{positive}) = 99\% \times 1\% = 0.99\%.$ The fraction of individuals in the whole group who have false negative results: $P(\text{ill} \cap \text{negative}) = 1\% \times 1\% = 0.01\%.$ Furthermore, the fraction of individuals in the whole group who test positive: $P(\text{positive}) = P(\text{well} \cap \text{positive}) + P(\text{ill} \cap \text{positive}) = 0.99\% + 0.99\% = 1.98\%.$ Finally, the probability that an individual actually has the disease, given that the test result is positive: $P(\text{ill}\mid\text{positive}) = \frac{P(\text{ill} \cap \text{positive})}{P(\text{positive})} = \frac{0.99\%}{1.98\%} = 50\%.$ Conclusion In this example, it should be easy to relate to the difference between the conditional probabilities P(positive | ill), which with the assumed probabilities is 99%, and P(ill | positive), which is 50%: the first is the probability that an individual who has the disease tests positive; the second is the probability that an individual who tests positive actually has the disease. Thus, with the probabilities picked in this example, roughly the same number of individuals receive the benefits of early treatment as are distressed by false positives; these positive and negative effects can then be considered in deciding whether to carry out the screening, or if possible whether to adjust the test criteria to decrease the number of false positives (possibly at the expense of more false negatives). See also Converse (logic) Prosecutor's fallacy References Probability fallacies Misuse of statistics
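Both examples reduce to the same two-line application of Bayes' theorem. A minimal sketch (the function and its parameter names are ours, for illustration only):

def p_ill_given_positive(prior, sensitivity, false_positive_rate):
    """P(ill | positive) via Bayes' theorem."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# Example 1: 1% prior, 80% sensitivity, 10% false positives -> about 7.5%
print(p_ill_given_positive(0.01, 0.80, 0.10))   # 0.0747...

# Example 2: 1% prior, 99% sensitivity, 1% false positives -> exactly 50%
print(p_ill_given_positive(0.01, 0.99, 0.01))   # 0.5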
Confusion of the inverse
Mathematics
918
67,550,887
https://en.wikipedia.org/wiki/Plant%20Protection%20and%20Inspection%20Services%20%28Israel%29
The Plant Protection and Inspection Services unit is an agency of the Ministry of Agriculture of Israel. PPIS handles phytosanitary matters both within Israel and in foreign trade. In pursuit of that purpose, it operates offices both within the country and in its foreign embassies, and acts as representative to some international bodies such as the IPPC (International Plant Protection Convention) and the EPPO (European and Mediterranean Plant Protection Organization). External links References Agriculture in Israel Phytosanitary authorities Export and import control Regulators of biotechnology products Foreign trade of Israel
Plant Protection and Inspection Services (Israel)
Biology
114
58,606,254
https://en.wikipedia.org/wiki/Hugh%20C.%20Williams
Hugh Cowie Williams (born 23 July 1943) is a Canadian mathematician. He works in number theory and cryptography. Early life Williams studied mathematics at the University of Waterloo (bachelor's degree 1966, master's degree 1967), where he received his doctorate in computer science in 1969 under Ronald C. Mullin and Ralph Gordon Stanton (thesis: A generalization of the Lucas functions). He was a post-doctoral student at York University. Career In 1970 he became assistant professor at the University of Manitoba, where he was promoted to associate professor in 1972 and to professor in 1979. In 2001 he became a professor at the University of Calgary, and he has been professor emeritus there since 2004. Since 2001 he has held the "iCore Chair" in Algorithmic Number Theory and Cryptography. Together with Rei Safavi-Naini he heads the Institute for Security, Privacy and Information Assurance (ISPIA) - formerly the Centre for Information Security and Cryptography - at Calgary. Between 1998 and 2001 he was an adjunct professor at the University of Waterloo. He was a visiting scholar at the University of Bordeaux, at Macquarie University and at the University of Leiden. From 1978 to January 2007 he was associate editor of the journal Mathematics of Computation. Among other things, Williams has worked on primality tests; Williams primes were named for him. He developed custom hardware for number-theoretical calculations, for example the MSSU in 1995. In cryptography, he developed in 1994, with Renate Scheidler and Johannes Buchmann, a method of public-key cryptography based on real quadratic number fields. Williams developed algorithms for calculating invariants of algebraic number fields such as class numbers and regulators. Williams also works on the history of mathematics and wrote a book about the history of primality tests. In it, he showed among other things that Édouard Lucas worked shortly before his early death on a test similar to today's elliptic curve method. He reconstructed the method that Fortuné Landry used in 1880 (at the age of 82) to factor the sixth Fermat number (a 20-digit number). Together with Jeffrey Shallit and François Morain he discovered a forgotten mechanical number sieve created by Eugène Olivier Carissan, the first such device from the beginning of the 20th century (1912), and described it in detail. Publications The influence of computers in the development of number theory. In: Computational Mathematics with Applications. Vol. 8, 1982, pp. 75–93. Factoring on a computer. Mathematical Intelligencer, 1984, No. 3. with Attila Pethö, Horst-Günter Zimmer, Michael Pohst (eds.): Computational Number Theory. de Gruyter 1991. with J. O. Shallit: Factoring integers before computers. In: W. Gautschi (ed.): Mathematics of computation – 50 years of computational mathematics 1943–1993. Proc. Symposium Applied Math., Vol. 48. American Mathematical Society, 1994, pp. 481–531. Édouard Lucas and primality testing. Wiley 1998. (Canadian Mathematical Society Series of Monographs and Advanced Texts, Vol. 22.) with M. J. Jacobson: Solving the Pell Equation. Springer 2008. References External links Hugh C. Williams at the website of the University of Calgary Profile of Hugh C. 
Williams at the faculty with links to publications Williams references at the Prime Pages 20th-century Canadian mathematicians 21st-century Canadian mathematicians Number theorists 1943 births Living people Academic staff of the University of Manitoba Academic staff of the University of Calgary University of Waterloo alumni Scientists from Ontario People from London, Ontario
Hugh C. Williams
Mathematics
720
9,742
https://en.wikipedia.org/wiki/Erd%C5%91s%20number
The Erdős number describes the "collaborative distance" between mathematician Paul Erdős and another person, as measured by authorship of mathematical papers. The same principle has been applied in other fields where a particular individual has collaborated with a large and broad number of peers. Overview Paul Erdős (1913–1996) was an influential Hungarian mathematician who in the latter part of his life spent a great deal of time writing papers with a large number of colleagues—over 500—working on solutions to outstanding mathematical problems. He published more papers during his lifetime (at least 1,525) than any other mathematician in history. (Leonhard Euler published more total pages of mathematics but fewer separate papers: about 800.) Erdős spent most of his career with no permanent home or job. He traveled with everything he owned in two suitcases, and would visit mathematicians he wanted to collaborate with, often unexpectedly, and expect to stay with them. The idea of the Erdős number was originally created by the mathematician's friends as a tribute to his enormous output. Later it gained prominence as a tool to study how mathematicians cooperate to find answers to unsolved problems. Several projects are devoted to studying connectivity among researchers, using the Erdős number as a proxy. For example, Erdős collaboration graphs can tell us how authors cluster, how the number of co-authors per paper evolves over time, or how new theories propagate. Several studies have shown that leading mathematicians tend to have particularly low Erdős numbers (i.e. high proximity). The median Erdős number of Fields Medalists is 3. Only 7,097 (about 5% of mathematicians with a collaboration path) have an Erdős number of 2 or lower. As time passes, the lowest Erdős number that can still be achieved will necessarily increase, as mathematicians with low Erdős numbers die and become unavailable for collaboration. Still, historical figures can have low Erdős numbers. For example, renowned Indian mathematician Srinivasa Ramanujan has an Erdős number of only 3 (through G. H. Hardy, Erdős number 2), even though Paul Erdős was only 7 years old when Ramanujan died. Definition and application in mathematics To be assigned an Erdős number, someone must be a coauthor of a research paper with another person who has a finite Erdős number. Paul Erdős himself is assigned an Erdős number of zero. A certain author's Erdős number is one greater than the lowest Erdős number of any of their collaborators; for example, an author who has coauthored a publication with Erdős would have an Erdős number of 1. The American Mathematical Society provides a free online tool to determine the collaboration distance between two mathematical authors listed in the Mathematical Reviews catalogue. Erdős wrote around 1,500 mathematical articles in his lifetime, mostly co-written. He had 509 direct collaborators; these are the people with Erdős number 1. The people who have collaborated with them (but not with Erdős himself) have an Erdős number of 2 (12,600 people as of 7 August 2020), those who have collaborated with people who have an Erdős number of 2 (but not with Erdős or anyone with an Erdős number of 1) have an Erdős number of 3, and so forth. A person with no such coauthorship chain connecting to Erdős has an Erdős number of infinity (or an undefined one). Since the death of Paul Erdős, the lowest Erdős number that a new researcher can obtain is 2. 
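Operationally, this definition is just a breadth-first search outward from the Erdős node of the coauthorship graph. A toy sketch (the graph data below is invented for illustration; a real computation would use e.g. the Mathematical Reviews data):

from collections import deque

# Invented toy coauthorship graph, as undirected adjacency sets.
coauthors = {
    "erdos":    {"graham", "straus"},
    "graham":   {"erdos", "knuth"},
    "straus":   {"erdos", "einstein"},
    "knuth":    {"graham"},
    "einstein": {"straus"},
    "isolated": set(),       # no chain to Erdős: infinite Erdős number
}

def erdos_numbers(graph, root="erdos"):
    dist = {root: 0}
    queue = deque([root])
    while queue:
        a = queue.popleft()
        for b in graph[a]:
            if b not in dist:            # first visit = shortest chain
                dist[b] = dist[a] + 1
                queue.append(b)
    return dist                          # missing authors are unreachable

print(erdos_numbers(coauthors))
# e.g. {'erdos': 0, 'graham': 1, 'straus': 1, 'knuth': 2, 'einstein': 2}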
There is room for ambiguity over what constitutes a link between two authors. The American Mathematical Society collaboration distance calculator uses data from Mathematical Reviews, which includes most mathematics journals but covers other subjects only in a limited way, and which also includes some non-research publications. The Erdős Number Project web site gives its own criteria for which coauthorships count, excluding non-research publications such as elementary textbooks, joint editorships, obituaries, and the like. The "Erdős number of the second kind" restricts assignment of Erdős numbers to papers with only two collaborators. The Erdős number was most likely first defined in print by Casper Goffman, an analyst whose own Erdős number is 2. Goffman published his observations about Erdős' prolific collaboration in a 1969 article entitled "And what is your Erdős number?" See also some comments in an obituary by Michael Golomb. The median Erdős number among Fields medalists is as low as 3. Fields medalists with Erdős number 2 include Atle Selberg, Kunihiko Kodaira, Klaus Roth, Alan Baker, Enrico Bombieri, David Mumford, Charles Fefferman, William Thurston, Shing-Tung Yau, Jean Bourgain, Richard Borcherds, Manjul Bhargava, Jean-Pierre Serre and Terence Tao. There are no Fields medalists with Erdős number 1; however, Endre Szemerédi is an Abel Prize Laureate with Erdős number 1. Most frequent Erdős collaborators While Erdős collaborated with hundreds of co-authors, there were some individuals with whom he co-authored dozens of papers. This is a list of the ten persons who most frequently co-authored with Erdős and their number of papers co-authored with Erdős (i.e. their number of collaborations). Related fields All Fields Medalists have a finite Erdős number, with values that range between 2 and 6, and a median of 3. In contrast, the median Erdős number across all mathematicians (with a finite Erdős number) is 5, with an extreme value of 13. The table below summarizes the Erdős number statistics for Nobel prize laureates in Physics, Chemistry, Medicine, and Economics. The first column counts the number of laureates. The second column counts the number of winners with a finite Erdős number. The third column is the percentage of winners with a finite Erdős number. The remaining columns report the minimum, maximum, average, and median Erdős numbers among those laureates. Physics Among the Nobel Prize laureates in Physics, Albert Einstein and Sheldon Glashow have an Erdős number of 2. Nobel Laureates with an Erdős number of 3 include Enrico Fermi, Otto Stern, Wolfgang Pauli, Max Born, Willis E. Lamb, Eugene Wigner, Richard P. Feynman, Hans A. Bethe, Murray Gell-Mann, Abdus Salam, Steven Weinberg, Norman F. Ramsey, Frank Wilczek, David Wineland, and Giorgio Parisi. Fields Medal-winning physicist Ed Witten has an Erdős number of 3. Biology Computational biologist Lior Pachter has an Erdős number of 2. Evolutionary biologist Richard Lenski has an Erdős number of 3, having co-authored a publication with Lior Pachter and with mathematician Bernd Sturmfels, each of whom has an Erdős number of 2. Finance and economics There are at least two winners of the Nobel Prize in Economics with an Erdős number of 2: Harry M. Markowitz (1990) and Leonid Kantorovich (1975). Other financial mathematicians with Erdős number of 2 include David Donoho, Marc Yor, Henry McKean, Daniel Stroock, and Joseph Keller. Nobel Prize laureates in Economics with an Erdős number of 3 include Kenneth J. Arrow (1972), Milton Friedman (1976), Herbert A. 
Simon (1978), Gerard Debreu (1983), John Forbes Nash, Jr. (1994), James Mirrlees (1996), Daniel McFadden (2000), Daniel Kahneman (2002), Robert J. Aumann (2005), Leonid Hurwicz (2007), Roger Myerson (2007), Alvin E. Roth (2012), and Lloyd S. Shapley (2012) and Jean Tirole (2014). Some investment firms have been founded by mathematicians with low Erdős numbers, among them James B. Ax of Axcom Technologies, and James H. Simons of Renaissance Technologies, both with an Erdős number of 3. Philosophy Since the more formal versions of philosophy share reasoning with the basics of mathematics, these fields overlap considerably, and Erdős numbers are available for many philosophers. Philosophers John P. Burgess and Brian Skyrms have an Erdős number of 2. Jon Barwise and Joel David Hamkins, both with Erdős number 2, have also contributed extensively to philosophy, but are primarily described as mathematicians. Law Judge Richard Posner, having coauthored with Alvin E. Roth, has an Erdős number of at most 4. Roberto Mangabeira Unger, a politician, philosopher, and legal theorist who teaches at Harvard Law School, has an Erdős number of at most 4, having coauthored with Lee Smolin. Politics Angela Merkel, Chancellor of Germany from 2005 to 2021, has an Erdős number of at most 5. Engineering Some fields of engineering, in particular communication theory and cryptography, make direct use of the discrete mathematics championed by Erdős. It is therefore not surprising that practitioners in these fields have low Erdős numbers. For example, Robert McEliece, a professor of electrical engineering at Caltech, had an Erdős number of 1, having collaborated with Erdős himself. Cryptographers Ron Rivest, Adi Shamir, and Leonard Adleman, inventors of the RSA cryptosystem, all have Erdős number 2. Linguistics The Romanian mathematician and computational linguist Solomon Marcus had an Erdős number of 1 for a paper in Acta Mathematica Hungarica that he co-authored with Erdős in 1957. Impact Erdős numbers have been a part of the folklore of mathematicians throughout the world for many years. Among all working mathematicians at the turn of the millennium who have a finite Erdős number, the numbers range up to 15, the median is 5, and the mean is 4.65; almost everyone with a finite Erdős number has a number less than 8. Due to the very high frequency of interdisciplinary collaboration in science today, very large numbers of non-mathematicians in many other fields of science also have finite Erdős numbers. For example, political scientist Steven Brams has an Erdős number of 2. In biomedical research, it is common for statisticians to be among the authors of publications, and many statisticians can be linked to Erdős via John Tukey, who has an Erdős number of 2. Similarly, the prominent geneticist Eric Lander and the mathematician Daniel Kleitman have collaborated on papers, and since Kleitman has an Erdős number of 1, a large fraction of the genetics and genomics community can be linked via Lander and his numerous collaborators. Similarly, collaboration with Gustavus Simmons opened the door for Erdős numbers within the cryptographic research community, and many linguists have finite Erdős numbers, many due to chains of collaboration with such notable scholars as Noam Chomsky (Erdős number 4), William Labov (3), Mark Liberman (3), Geoffrey Pullum (3), or Ivan Sag (4). There are also connections with arts fields. 
According to Alex Lopez-Ortiz, all the Fields and Nevanlinna prize winners during the three cycles from 1986 to 1994 have Erdős numbers of at most 9. Earlier mathematicians published fewer papers than modern ones, and more rarely published jointly written papers. The earliest person known to have a finite Erdős number is either Antoine Lavoisier (born 1743, Erdős number 13), Richard Dedekind (born 1831, Erdős number 7), or Ferdinand Georg Frobenius (born 1849, Erdős number 3), depending on the standard of publication eligibility. Martin Tompa proposed a directed graph version of the Erdős number problem, by orienting edges of the collaboration graph from the alphabetically earlier author to the alphabetically later author and defining the monotone Erdős number of an author to be the length of a longest path from Erdős to the author in this directed graph. He found a path of this type of length 12. Michael Barr has suggested "rational Erdős numbers", generalizing the idea that a person who has written p joint papers with Erdős should be assigned Erdős number 1/p. His construction starts from the collaboration multigraph of the second kind (he also has a way to deal with the case of the first kind), with one edge between two mathematicians for each joint paper they have produced, and forms an electrical network with a one-ohm resistor on each edge. The total resistance between two nodes then measures how "close" these two nodes are. It has been argued that "for an individual researcher, a measure such as Erdős number captures the structural properties of [the] network whereas the h-index captures the citation impact of the publications," and that "One can be easily convinced that ranking in coauthorship networks should take into account both measures to generate a realistic and acceptable ranking." In 2004 William Tozier, a mathematician with an Erdős number of 4, auctioned off a co-authorship on eBay, thereby offering the buyer an Erdős number of 5. The winning bid of $1031 was posted by a Spanish mathematician, who refused to pay and only placed the bid to stop what he considered a mockery. Variations A number of variations on the concept have been proposed to apply to other fields, notably the Bacon number (as in the game Six Degrees of Kevin Bacon), connecting actors to the actor Kevin Bacon by a chain of joint appearances in films. It was created in 1994, 25 years after Goffman's article on the Erdős number. A small number of people are connected to both Erdős and Bacon and thus have an Erdős–Bacon number, which combines the two numbers by taking their sum. One example is the actress-mathematician Danica McKellar, best known for playing Winnie Cooper on the TV series The Wonder Years. Her Erdős number is 4, and her Bacon number is 2. Further extension is possible. For example, the "Erdős–Bacon–Sabbath number" is the sum of the Erdős–Bacon number and the collaborative distance to the band Black Sabbath in terms of singing in public. Physicist Stephen Hawking had an Erdős–Bacon–Sabbath number of 8, and actress Natalie Portman has one of 11 (her Erdős number is 5). In chess, the Morphy number describes a player's connection to Paul Morphy, widely considered the greatest chess player of his time and an unofficial World Chess Champion. In go, the Shusaku number describes a player's connection to Honinbo Shusaku, the strongest player of his time. In video games, the Ryu number describes a video game character's connection to the Street Fighter character Ryu.
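All of these collaboration-distance numbers are, at bottom, shortest-path distances in a co-authorship graph, which breadth-first search computes directly. The following is a minimal sketch in Python, using a small made-up paper list in place of real bibliographic data such as Mathematical Reviews; the author names are placeholders, not real collaborations:

from collections import deque, defaultdict

# Toy co-authorship data (illustrative only): each entry is the author
# list of one paper.
papers = [
    ["Erdos", "A"],
    ["A", "B"],
    ["B", "C"],
    ["C", "D"],
]

# Collaboration graph: an edge joins every pair of co-authors who share
# at least one joint paper.
graph = defaultdict(set)
for authors in papers:
    for a in authors:
        for b in authors:
            if a != b:
                graph[a].add(b)

def erdos_numbers(source="Erdos"):
    """Breadth-first search; dist[v] is the collaboration distance to v."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist  # authors absent from the result have an infinite Erdos number

print(erdos_numbers())  # {'Erdos': 0, 'A': 1, 'B': 2, 'C': 3, 'D': 4}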
See also References External links Jerry Grossman, The Erdős Number Project. Contains statistics and a complete list of all mathematicians with an Erdős number less than or equal to 2. New Erdős Number Project website Migration to new site in 2021. "On a Portion of the Well-Known Collaboration Graph", Jerrold W. Grossman and Patrick D. F. Ion. "Some Analyses of Erdős Collaboration Graph", Vladimir Batagelj and Andrej Mrvar. American Mathematical Society, MR free tools: collaboration distance. A search engine for Erdős numbers and collaboration distance between other authors. Numberphile video. Ronald Graham on imaginary Erdős numbers. Number Social networks Mathematics literature Separation numbers Bibliometrics
Erdős number
Mathematics,Technology
3,209
4,622,852
https://en.wikipedia.org/wiki/NAG%20Numerical%20Library
The NAG Numerical Library is a commercial software product developed and sold by The Numerical Algorithms Group Ltd. It is a software library of numerical-analysis routines, containing more than 1,900 mathematical and statistical algorithms. Areas covered by the library include linear algebra, optimization, quadrature, the solution of ordinary and partial differential equations, regression analysis, and time series analysis. Users of the NAG Library call its routines from within their applications to incorporate its mathematical or statistical functionality and to solve numerical problems - for example, finding the minimum or maximum of a function, fitting a curve or surface to data, or solving a differential equation. The NAG Library can be accessed from a variety of languages and environments such as C/C++, Fortran, Python, AD, MATLAB, Java and .NET. The main supported systems are currently Windows, Linux and macOS running on x86-64 architectures; 32-bit Windows support is being phased out. Some NAG mathematical optimization solvers are accessible via the optimization modelling suite. History The original version of the NAG Library was written in Algol 60 and Fortran. It contained 98 user-callable routines, and was released for the ICL 1906A and 1906S machines on October 1, 1971. Three further Marks of the library appeared in the following five years; during this time the Algol version was ported to Algol 68, with the following platforms being supported: CDC 7600/CYBER (CDC ALGOL 68), IBM 360/370/AMDAHL (FLACC ALGOL 68), ICL 1900 (ALGOL 68R), ICL 1906A/S (ALGOL 68R), ICL 2900 (ALGOL 68RS) and Telefunken TR440 (ALGOL 68C). The first partially vectorized implementation of the NAG Fortran Library for the Cray-1 was released in 1983, while the first release of the NAG Parallel Library (which was specially designed for distributed-memory parallel computer architectures) was in the early 1990s. Mark 1 of the NAG C Library was released in 1990. In 1992, the Library incorporated LAPACK routines for the first time; NAG had been a collaborator in the LAPACK project since 1987. The first release of the NAG Library for SMP & Multicore, which takes advantage of the shared-memory parallelism of symmetric multiprocessors (SMP) and multicore processors, appeared in 1997 for multiprocessor machines built using the DEC Alpha and SPARC architectures. The NAG Library for .NET, which is a CLI DLL assembly containing methods and objects that give Common Language Infrastructure (CLI) users access to NAG algorithms, was first released in 2010. Current version Mark 29 of the NAG Library includes mathematical and statistical algorithms organised into chapters. See also List of numerical-analysis software List of numerical libraries References External links The NAG Library History of computing in the United Kingdom Numerical libraries Science and technology in Oxfordshire
NAG Numerical Library
Technology
616
342,127
https://en.wikipedia.org/wiki/Anti-gravity
Anti-gravity (also known as non-gravitational field) is the phenomenon of creating a place or object that is free from the force of gravity. It does not refer to either the lack of weight under gravity experienced in free fall or orbit, or to balancing the force of gravity with some other force, such as electromagnetism or aerodynamic lift. Anti-gravity is a recurring concept in science fiction. "Anti-gravity" is often used to refer to devices that look as if they reverse gravity even though they operate through other means, such as lifters, which fly in the air by moving air with electromagnetic fields. Historical attempts at understanding gravity The possibility of creating anti-gravity depends upon a complete understanding and description of gravity and its interactions with other physical theories, such as general relativity and quantum mechanics; however, no quantum theory of gravity has yet been found. According to a famous story, Isaac Newton observed an apple falling from a tree in his garden during the summer of 1666, which led him toward the principle of universal gravitation. Albert Einstein in 1915 considered the physical interaction between matter and space, where gravity occurs as a consequence of matter causing a geometric deformation of spacetime, which is otherwise flat. Einstein, both independently and with Walther Mayer, attempted to unify his theory of gravity with electromagnetism, drawing on the work of Theodor Kaluza and James Clerk Maxwell linking gravity and electromagnetism. Theoretical quantum physicists have postulated the existence of a quantum gravity particle, the graviton. Various theoretical explanations of quantum gravity have been created, including superstring theory, loop quantum gravity, E8 theory, and asymptotic safety theory, amongst many others. Hypothetical solutions In Newton's law of universal gravitation, gravity was an external force transmitted by unknown means. In the 20th century, Newton's model was replaced by general relativity, where gravity is not a force but the result of the geometry of spacetime. Under general relativity, anti-gravity is impossible except under contrived circumstances. Gravity shields In 1948 businessman Roger Babson (founder of Babson College) formed the Gravity Research Foundation to study ways to reduce the effects of gravity. Their efforts were initially somewhat "crankish", but they held occasional conferences that drew such people as Clarence Birdseye, known for his frozen-food products, and helicopter pioneer Igor Sikorsky. Over time the Foundation turned its attention away from trying to control gravity to simply better understanding it. The Foundation nearly disappeared after Babson's death in 1967. However, it continues to run an essay award, offering prizes of up to $4,000. As of 2017, it is still administered out of Wellesley, Massachusetts, by George Rideout Jr., son of the foundation's original director. Winners include California astrophysicist George F. Smoot (1993), who later won the 2006 Nobel Prize in Physics, and Gerard 't Hooft (2015), who had previously won the 1999 Nobel Prize in Physics. General relativity research in the 1950s General relativity was introduced in the 1910s, but development of the theory was greatly slowed by a lack of suitable mathematical tools, and it appeared that anti-gravity was ruled out under general relativity. It is claimed the US Air Force also ran a study effort throughout the 1950s and into the 1960s.
Former Lieutenant Colonel Ansel Talbert wrote two series of newspaper articles claiming that most of the major aviation firms had started gravity control propulsion research in the 1950s. However, there is no outside confirmation of these stories, and since they date from the midst of the "policy by press release" era, it is not clear how much weight they should be given. It is known that there were serious efforts underway at the Glenn L. Martin Company, which formed the Research Institute for Advanced Study; major newspapers announced a contract between theoretical physicist Burkhard Heim and the Glenn L. Martin Company. Another private-sector effort to master the understanding of gravitation was the creation of the Institute for Field Physics at the University of North Carolina at Chapel Hill in 1956, by Gravity Research Foundation trustee Agnew H. Bahnson. Military support for anti-gravity projects was terminated by the Mansfield Amendment of 1973, which restricted Department of Defense spending to only the areas of scientific research with explicit military applications. The Mansfield Amendment was passed specifically to end long-running projects that had produced no results. Under general relativity, gravity is the result of objects following the local spatial geometry (a change in the normal shape of space) caused by local mass-energy. The theory holds that it is the altered shape of space, deformed by massive objects, that causes gravity, which is thus a property of deformed space rather than a true force. Although the equations cannot normally produce a "negative geometry", it is possible to do so by using "negative mass". The same equations do not, of themselves, rule out the existence of negative mass. Both general relativity and Newtonian gravity appear to predict that negative mass would produce a repulsive gravitational field. In particular, Sir Hermann Bondi proposed in 1957 that negative gravitational mass, combined with negative inertial mass, would comply with the strong equivalence principle of general relativity and the Newtonian laws of conservation of linear momentum and energy. Bondi's proof yielded singularity-free solutions of the relativity equations. In July 1988, Robert L. Forward presented a paper at the AIAA/ASME/SAE/ASEE 24th Joint Propulsion Conference that proposed a Bondi negative gravitational mass propulsion system. Bondi pointed out that a negative mass will fall toward (and not away from) "normal" matter, since although the gravitational force is repulsive, the negative mass (according to Newton's law, F = ma) responds by accelerating in the direction opposite to the force. Normal mass, on the other hand, will fall away from the negative matter. He noted that two identical masses, one positive and one negative, placed near each other will therefore self-accelerate in the direction of the line between them, with the negative mass chasing after the positive mass. Note that because the negative mass acquires negative kinetic energy, the total energy of the accelerating masses remains zero. Forward pointed out that the self-acceleration effect is due to the negative inertial mass, and could be induced without gravitational forces between the particles. The Standard Model of particle physics, which describes all currently known forms of matter, does not include negative mass.
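The runaway pair that Bondi and Forward describe can be illustrated with a toy one-dimensional Newtonian integration. The following is a numerical sketch of the argument above, in arbitrary units (G = 1, masses +1 and -1); it is an illustration of the mathematics, not a statement about real physics:

# Toy 1-D simulation of Bondi's negative-mass "runaway" pair.
G = 1.0
m_pos, m_neg = 1.0, -1.0     # gravitational = inertial mass for each body
x_pos, x_neg = 1.0, 0.0      # positive mass starts ahead of the negative one
v_pos = v_neg = 0.0
dt = 1e-3

for _ in range(5000):
    r = x_pos - x_neg
    # Newtonian acceleration of each body due to the other:
    # a_self = G * m_other * (x_other - x_self) / |r|**3
    a_pos = G * m_neg * (x_neg - x_pos) / abs(r) ** 3  # pushed away from m_neg
    a_neg = G * m_pos * (x_pos - x_neg) / abs(r) ** 3  # accelerates toward m_pos
    v_pos += a_pos * dt
    v_neg += a_neg * dt
    x_pos += v_pos * dt
    x_neg += v_neg * dt

p = m_pos * v_pos + m_neg * v_neg                      # total momentum
ke = 0.5 * m_pos * v_pos**2 + 0.5 * m_neg * v_neg**2   # total kinetic energy
print(f"separation={x_pos - x_neg:.3f} velocity={v_pos:.3f} p={p:.1e} KE={ke:.1e}")
# The pair self-accelerates: the common velocity grows steadily while the
# separation stays constant, yet total momentum and kinetic energy remain zero,
# exactly as the energy argument above requires.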
Although cosmological dark matter may consist of particles outside the Standard Model whose nature is unknown, the sign of their mass is effectively known, since they were postulated from their gravitational effects on surrounding objects, which implies that their mass is positive. The proposed cosmological dark energy, on the other hand, is more complicated, since according to general relativity the effects of both its energy density and its negative pressure contribute to its gravitational effect. Unique force Under general relativity, any form of energy couples with spacetime to create the geometries that cause gravity. A longstanding question was whether or not these same equations applied to antimatter. The issue was considered solved in 1960 with the development of CPT symmetry, which demonstrated that antimatter follows the same laws of physics as "normal" matter, and therefore has positive energy content and also causes (and reacts to) gravity like normal matter (see gravitational interaction of antimatter). For much of the last quarter of the 20th century, the physics community was involved in attempts to produce a unified field theory, a single physical theory that explains the four fundamental forces: gravity, electromagnetism, and the strong and weak nuclear forces. Scientists have made progress in unifying the three quantum forces, but gravity has remained "the problem" in every attempt. This has not stopped numerous such attempts from being made, however. Generally these attempts tried to "quantize gravity" by positing a particle, the graviton, that carries gravity in the same way that photons (light) carry electromagnetism. Simple attempts along this direction all failed, however, leading to more complex models that attempted to account for these problems. Two of these, supersymmetry and the relativity-related supergravity, both required the existence of an extremely weak "fifth force" carried by a graviphoton, which tied together several "loose ends" in quantum field theory in an organized manner. As a side effect, both theories also all but required that antimatter be affected by this fifth force in a way similar to anti-gravity, dictating repulsion away from mass. Several experiments were carried out in the 1990s to measure this effect, but none yielded positive results. In 2013 CERN looked for an antigravity effect in an experiment designed to study the energy levels within antihydrogen; the antigravity measurement was just an "interesting sideshow" and was inconclusive. Breakthrough Propulsion Physics Program Toward the close of the twentieth century, NASA provided funding for the Breakthrough Propulsion Physics Program (BPP) from 1996 through 2002. This program studied a number of "far out" designs for space propulsion that were not receiving funding through normal university or commercial channels. Anti-gravity-like concepts were investigated under the name "diametric drive". The work of the BPP program continues in the independent, non-NASA-affiliated Tau Zero Foundation. Empirical claims and commercial efforts There have been a number of attempts to build anti-gravity devices, and a small number of reports of anti-gravity-like effects in the scientific literature. None of the examples that follow are accepted as reproducible examples of anti-gravity. Gyroscopic devices When twisted, gyroscopes produce a force that acts "out of plane", and they can appear to lift themselves against gravity.
Although this force is well understood to be illusory, even under Newtonian models, it has nevertheless generated numerous claims of anti-gravity devices and any number of patented devices. None of these devices has ever been demonstrated to work under controlled conditions, and they have often become the subject of conspiracy theories as a result. Another "rotating device" example is shown in a series of patents granted to Henry Wallace between 1968 and 1974. His devices consist of rapidly spinning disks of brass, a material made up largely of elements with a total half-integer nuclear spin. He claimed that by rapidly rotating a disk of such material, the nuclear spin became aligned, and as a result created a "gravitomagnetic" field in a fashion similar to the magnetic field created by the Barnett effect. No independent testing or public demonstration of these devices is known. In 1989, it was reported that a weight decreases along the axis of a right-spinning gyroscope. A test of this claim a year later yielded null results, and at a 1999 AIP conference a recommendation was made to conduct further tests. Thomas Townsend Brown's gravitator In 1921, while still in high school, Thomas Townsend Brown found that a high-voltage Coolidge tube seemed to change mass depending on its orientation on a balance scale. Through the 1920s Brown developed this into devices that combined high voltages with materials of high dielectric constant (essentially large capacitors); he called such a device a "gravitator". Brown claimed to observers and in the media that his experiments showed anti-gravity effects. Brown continued his work and produced a series of high-voltage devices in the following years in attempts to sell his ideas to aircraft companies and the military. He coined the names Biefeld–Brown effect and electrogravitics in conjunction with his devices. Brown tested his asymmetrical capacitor devices in a vacuum, supposedly showing that the effect was not a more down-to-earth electrohydrodynamic effect generated by high-voltage ion flow in air. Electrogravitics is a popular topic in ufology, anti-gravity, and free-energy circles, among government conspiracy theorists, and on related websites, in books and publications claiming that the technology became highly classified in the early 1960s and that it is used to power UFOs and the B-2 bomber. There are also reports and videos on the internet purporting to show lifter-style capacitor devices working in a vacuum, and therefore not receiving propulsion from ion drift or ion wind generated in air. Follow-up studies on Brown's work and other claims have been conducted by R. L. Talley in a 1990 US Air Force study, NASA scientist Jonathan Campbell in a 2003 experiment, and Martin Tajmar in a 2004 paper. They found that no thrust could be observed in a vacuum, and that Brown's and other ion-lifter devices produce thrust along their axis regardless of the direction of gravity, consistent with electrohydrodynamic effects. Gravitoelectric coupling In 1992, the Russian researcher Eugene Podkletnov claimed to have discovered, whilst experimenting with superconductors, that a fast-rotating superconductor reduces the gravitational effect. Many studies have attempted to reproduce Podkletnov's experiment, always with negative results.
In a series of papers published between 1991 and 1993, Ning Li and Douglas Torr of the University of Alabama in Huntsville proposed that a time-dependent magnetic field could cause the spins of the lattice ions in a superconductor to generate detectable gravitomagnetic and gravitoelectric fields. In 1999, Li appeared in Popular Mechanics, claiming to have constructed a working prototype to generate what she described as "AC Gravity". No further evidence of this prototype has been offered. Douglas Torr and Timir Datta were involved in the development of a "gravity generator" at the University of South Carolina. According to a leaked document from the Office of Technology Transfer at the University of South Carolina, confirmed to Wired reporter Charles Platt in 1998, the device would create a "force beam" in any desired direction, and the university planned to patent and license it. No further information about this university research project or the "gravity generator" device was ever made public. Göde Award The Institute for Gravity Research of the Göde Scientific Foundation has tried to reproduce many of the different experiments which claim "anti-gravity" effects. All attempts by this group to observe an anti-gravity effect by reproducing past experiments have been unsuccessful thus far. The foundation has offered a reward of one million euros for a reproducible anti-gravity experiment. In fiction The existence of anti-gravity is a common theme in science fiction. The Encyclopedia of Science Fiction lists Francis Godwin's posthumously published 1638 novel The Man in the Moone, where a "semi-magical" stone has the power to make gravity stronger or weaker, as the earliest variation of the theme. The first story to use anti-gravity for the purpose of space travel, as well as the first to treat the subject from a scientific rather than supernatural angle, was George Tucker's 1827 novel A Voyage to the Moon. Apergy Apergy is a term for a fictitious form of anti-gravitational energy first used by Percy Greg in his 1880 sword and planet novel Across the Zodiac. The term was later adopted by other fiction authors, such as John Jacob Astor IV in his 1894 science fiction novel A Journey in Other Worlds, and it also appeared outside of explicit fiction writing. See also Area 51 Aerodynamic levitation Artificial gravity Burkhard Heim Casimir effect Clinostat Electrostatic levitation Exotic matter Gravitational interaction of antimatter Gravitational shielding Gravitational wave Ion-propelled aircraft Heim theory Magnetic levitation Nazi UFOs Optical levitation Reactionless drive Tractor beam References Bibliography Further reading Cady, W. M. (15 September 1952). "Thomas Townsend Brown: Electro-Gravity Device" (File 24–185). Pasadena, CA: Office of Naval Research. Public access to the report was authorized on 1 October 1952. External links Responding to Mechanical Antigravity, a NASA paper debunking a wide variety of gyroscopic (and related) devices Göde Scientific Foundation KURED Research General relativity History of physics History of science and technology in the United States Historiography of science Science fiction themes Fringe physics
Anti-gravity
Physics,Astronomy
3,322
12,422,990
https://en.wikipedia.org/wiki/C4H8O4
{{DISPLAYTITLE:C4H8O4}} The molecular formula C4H8O4 (molar mass: 120.10 g/mol, exact mass: 120.042259 u) may refer to: Tetroses Erythrose Erythrulose (or D-Erythrulose) Threose Molecular formulas
C4H8O4
Physics,Chemistry
79
34,237,939
https://en.wikipedia.org/wiki/GeneDB
GeneDB was a genome database for eukaryotic and prokaryotic pathogens. References External links http://www.genedb.org Genome databases Pathogen genomics Pathogenic microbes
GeneDB
Biology
43
7,085,773
https://en.wikipedia.org/wiki/Timeline%20of%20planetariums
This is a timeline of the history of planetariums. Historic influences Development of modern planetariums Digital and Fulldome video References Planetariums Planetariums
Timeline of planetariums
Astronomy
34
23,714,121
https://en.wikipedia.org/wiki/C12H18N2O2
{{DISPLAYTITLE:C12H18N2O2}} The molecular formula C12H18N2O2 may refer to: Doxpicomine Isophorone diisocyanate Mexacarbate Miotine Molecular formulas
C12H18N2O2
Physics,Chemistry
56
3,242,271
https://en.wikipedia.org/wiki/HD%20128311
HD 128311 is a variable star in the northern constellation of Boötes. It has the variable star designation HN Boötis, while HD 128311 is the star's designation in the Henry Draper Catalogue. The star is invisible to the naked eye, with an apparent visual magnitude that fluctuates around 7.48. It is located at a distance of 53 light years from the Sun based on parallax, but is drifting closer with a radial velocity of −9.6 km/s. Two confirmed extrasolar planets have been detected in orbit around this star. The stellar classification of HN Boo is K3V, which indicates this is a K-type main sequence star. In 2000, Klaus G. Strassmeier et al. announced that the star's brightness varies; it was given its variable star designation in 2006. It is a BY Draconis-type variable, randomly varying in brightness by 0.04 magnitudes over a period of 11.54 days due to star spots and high chromospheric activity. The star exhibits strong emission, which suggests an age of 0.5–1.0 billion years. It has 82% of the mass of the Sun and 78% of the Sun's radius. The metallicity of the star, meaning its abundance of heavier elements, appears slightly higher than in the Sun. It is radiating 31% of the luminosity of the Sun from its photosphere at an effective temperature of 4,863 K. Planetary system In 2002, the discovery of the exoplanet HD 128311 b was announced by Paul Butler. In 2005, the discovery of a second exoplanet, HD 128311 c, was announced by Steve Vogt. Most likely, the system formed in a very turbulent disc; the authors were able to show with both analytic and numerical models that certain libration modes are readily excited by turbulence. It was initially thought that the system could have resulted from planet–planet scattering, but this is rather unlikely. In 2014, the true mass of HD 128311 c was measured via astrometry. The same study also proposed a third planetary candidate, but it has not been confirmed. See also List of extrasolar planets References External links GJ 3860 Extrasolar Planet Interactions by Rory Barnes & Richard Greenberg, Lunar and Planetary Lab, University of Arizona Image HD 128311 K-type main-sequence stars BY Draconis variables Planetary systems with two confirmed planets Boötes Durchmusterung objects Gliese and GJ objects 128311 071395 Boötis, HN
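The luminosity quoted above is consistent with the Stefan–Boltzmann law, L/L_sun = (R/R_sun)^2 (T/T_sun)^4. A quick consistency check in Python, using the radius and temperature figures from the article and the IAU nominal solar effective temperature of 5772 K:

# Stefan-Boltzmann check of the values quoted for HD 128311.
R = 0.78         # stellar radius, in solar radii (quoted above)
T_eff = 4863.0   # effective temperature, in kelvin (quoted above)
T_SUN = 5772.0   # IAU nominal solar effective temperature
L = R**2 * (T_eff / T_SUN)**4
print(f"L = {L:.2f} L_sun")  # prints ~0.31, matching the 31% figure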
HD 128311
Astronomy
541
153,563
https://en.wikipedia.org/wiki/Scilab
Scilab is a free and open-source, cross-platform numerical computational package and a high-level, numerically oriented programming language. It can be used for signal processing, statistical analysis, image enhancement, fluid dynamics simulations, numerical optimization, modeling and simulation of explicit and implicit dynamical systems, and (if the corresponding toolbox is installed) symbolic manipulations. Scilab is one of the two major open-source alternatives to MATLAB, the other one being GNU Octave. Scilab puts less emphasis on syntactic compatibility with MATLAB than Octave does, but it is similar enough that some authors suggest that it is easy to transfer skills between the two systems. Introduction Scilab is a high-level, numerically oriented programming language. The language provides an interpreted programming environment, with matrices as the main data type. By using matrix-based computation, dynamic typing, and automatic memory management, many numerical problems may be expressed in a reduced number of code lines, as compared to similar solutions using traditional languages, such as Fortran, C, or C++. This allows users to rapidly construct models for a range of mathematical problems. While the language provides simple matrix operations such as multiplication, the Scilab package also provides a library of high-level operations such as correlation and complex multidimensional arithmetic. Scilab also includes a free package called Xcos for modeling and simulation of explicit and implicit dynamical systems, including both continuous and discrete sub-systems. Xcos is the open source equivalent to Simulink from the MathWorks. As the syntax of Scilab is similar to MATLAB, Scilab includes a source code translator for assisting the conversion of code from MATLAB to Scilab. Scilab is available free of cost under an open source license. Due to the open source nature of the software, some user contributions have been integrated into the main program. Syntax Scilab syntax is largely based on the MATLAB language. The simplest way to execute Scilab code is to type it in at the prompt, -->, in the graphical command window. In this way, Scilab can be used as an interactive mathematical shell. Hello World! in Scilab:

disp('Hello World');

Plotting a 3D surface function:

// A simple plot of z = f(x,y)
t=[0:0.3:2*%pi]';
z=sin(t)*cos(t');
plot3d(t,t',z)

Determining the equivalent single index corresponding to a given set of subscript values:

function I=sub2ind(dims,varargin)
    // I = sub2ind(dims,i1,i2,..) returns the linear index equivalent to the
    // row, column, ... subscripts in the arrays i1,i2,.. for a matrix of
    // size dims.
    // I = sub2ind(dims,Mi) returns the linear index equivalent to the n
    // subscripts in the columns of the matrix Mi for a matrix of size dims.
    d=[1;cumprod(matrix(dims(1:$-1),-1,1))]
    for i=1:size(varargin)
        if varargin(i)==[] then I=[], return, end
    end
    if size(varargin)==1 then
        // subindices are the columns of the argument
        I=(varargin(1)-1)*d+1
    else
        // subindices are given as separated arguments
        I=1
        for i=1:size(varargin)
            I=I+(varargin(i)-1)*d(i)
        end
    end
endfunction

Toolboxes Scilab has many contributed toolboxes for different tasks, such as the Scilab Image Processing Toolbox (SIP) and its variants (such as SIVP), the Scilab Wavelet Toolbox, the Scilab Java and .NET Module, and the Scilab Remote Access Module. More are available on the ATOMS Portal or the Scilab forge. History Scilab was created in 1990 by researchers from INRIA and École nationale des ponts et chaussées (ENPC).
It was initially named Ψlab (Psilab). The Scilab Consortium was formed in May 2003 to broaden contributions and promote Scilab as worldwide reference software in academia and industry. In July 2008, in order to improve the technology transfer, the Scilab Consortium joined the Digiteo Foundation. Scilab 5.1, the first release compiled for Mac, was available in early 2009, and supported Mac OS X 10.5, a.k.a. Leopard. Thus, OS X 10.4, Tiger, was never supported except by porting from sources. Linux and Windows builds had been released since the beginning, with Solaris support dropped with version 3.1.1, and HP-UX dropped with version 4.1.2 after spotty support. In June 2010, the Consortium announced the creation of Scilab Enterprises. Scilab Enterprises develops and markets, directly or through an international network of affiliated services providers, a comprehensive set of services for Scilab users. Scilab Enterprises also develops and maintains the Scilab software. The ultimate goal of Scilab Enterprises is to help make the use of Scilab more effective and easy. In February 2017, Scilab 6.0.0 was released, which leveraged the latest C++ standards and lifted memory-allocation limitations. Since July 2012, Scilab has been developed and published by Scilab Enterprises; in early 2017, Scilab Enterprises was acquired by virtual prototyping pioneer ESI Group. Since 2019 and Scilab 6.0.2, the University of Technology of Compiègne has provided resources to build and maintain the macOS version. Since mid-2022, the Scilab team has been part of Dassault Systèmes. Scilab Cloud App & Scilab Cloud API Since 2016, Scilab can be embedded in a browser and called via an interface written in Scilab or via an API. This deployment method has the notable advantages of masking code and data as well as providing large computational power. These features have not been included in the open source version of Scilab and are still proprietary developments. See also SageMath List of numerical-analysis software Comparison of numerical-analysis software SimulationX References Further reading External links Scilab website Array programming languages Dassault Group Free educational software Free mathematics software Free software programmed in Fortran Numerical analysis software for Linux Numerical analysis software for macOS Numerical analysis software for Windows Numerical programming languages Science software that uses GTK
Scilab
Mathematics
1,367
44,940,620
https://en.wikipedia.org/wiki/C/1490%20Y1
C/1490 Y1 is a comet that was recorded and observed across East Asia, particularly China and Korea, from December 1490 to February 1491. It is the parent body of the Quadrantids meteor shower. Orbit John Russell Hind, Benjamin Peirce, and Ichiro Hasegawa made the initial orbital calculations for the comet, which all resulted in a parabolic trajectory around the Sun. References External links
C/1490 Y1
Astronomy
85
77,348,149
https://en.wikipedia.org/wiki/Sacred%20enclosure
In the study of the history of religions and anthropology, a sacred enclosure refers to any structure intended to separate two spaces: a sacred space and a profane space. Generally, it is a separation wall erected to mark the difference between the two spaces, acquiring significant symbolic meaning. Many human cultures have made use of sacred enclosures: they are found in Mesopotamia, in pre-Columbian America, in sub-Saharan Africa (for example at Notsé), and in Mediterranean cultures such as Greece and Rome. The use of sacred enclosures is also a crucial aspect of the Abrahamic religions, as seen in the construction of the Temple of Jerusalem or in pilgrimages such as the Hajj. In some cases, this separation is placed within a single sacred space, dividing it, as with enclosures separating people according to their gender in certain churches, mosques, and synagogues. The term refers to the structure that establishes, reinforces, or accentuates separations, but it is sometimes used more broadly to describe all sacred boundaries imposed on spaces, although the term "sacred boundary" is more accurate in this case. Anthropologically, it is an important aspect of human culture, as it often establishes the limits of the profane space by erecting a visible marker signifying the presence of the sacred space. It is central to the notion of the sacred. Anthropology Clarifying aspect The erection of a sacred enclosure, whether a large compound or a simple wall, serves a clarifying function. By establishing and making visible the boundaries between places, the enclosure defines both the sacred and the profane. It also generally reinforces cultic behaviors; faced with the material impossibility of crossing this space, humans must align their actions with the cult, which is thus materialized and made present to the entire community. Delimiting aspect The sacred enclosure marks an extraction from the profane world. After crossing the boundaries, the individual finds themselves in a different perception of time, where the normal course of events no longer seems to follow its usual rhythm. In this place, after passing through the enclosure, communication with the supernatural is perceived as more natural and evident. History Antiquity The erection of a sacred enclosure is often associated with the foundation of a city. For example, when the Phoenician city of Byblos was refounded in the mid-4th millennium BCE, the sacred enclosure demarcating the future temple was the first structure of the city. Byblos was not unique; older Mesopotamian cities like Eridu and Uruk also centered around sacred enclosures that defined the boundaries of their temples. These two cities have the most significant Mesopotamian sacred enclosures, but nearly all cities of the ancient Near East featured such enclosures, including those in Cyprus. While many myths directly link supernatural intervention to the selection and delimitation of the sacred space, in some cases divine intervention was said to construct the enclosure itself, as seen in Uruk, where the god An was directly involved in its construction. In Minoan Crete and the wider ancient Aegean region, such structures are also attested. The Celts were frequent builders of sacred enclosures, often using them in their rituals. Prehistoric stone circles in France might be of a similar nature. Similar phenomena are attested in North America from the 5th century BCE. The Greeks also used sacred enclosures, which were central to their practices.
They used them to delimit the space of temples or sacred groves, such as the sanctuary at Delphi. It is possible, though not certain, that the second part of the goddess Artemis's name comes from the Greek term for sacred enclosure, "τέμενος" (temenos). The Persians were also known for this practice, as seen at Pasargadae. According to Strabo, the cults of ancient Georgia incorporated such enclosures. Among the Romans, the pomerium referred to the sacred boundary of the city. This boundary was sometimes marked by a sacred enclosure, which also had military and defensive roles, as seen with the Servian Wall. In this case, according to Plutarch, the gates were not part of the sacred enclosure, allowing passage through them. Parallel or similar dynamics are observed in ancient Judaism. For example, it was forbidden for a foreigner to enter the enclosure of the Temple of Jerusalem, as noted by the Temple Warning inscription. The Temple of Jerusalem was constructed as a concentric structure, where each enclosure crossed brought one closer to the Holy of Holies, perceived as the physical dwelling of the God of Israel. Thus, it was a place segmented by numerous sacred enclosures, which were omnipresent markers of the sanctity of each stage in which one stood. Middle Ages In Europe and Asia, this structure was adopted in Christian places of worship, with churches separating themselves from the outside through the erection of walls that enclosed a sanctuary, itself separated from the rest by a wall or veil, the precursor to the iconostasis or rood screen. In some cases, Christians and Jews implemented other built markers within their places of worship, such as establishing a separate gynaeceum for female congregants. Similar internal separations are also found in mosques, with a different space, sometimes even a separate room, allocated for the prayers of men and women. In sub-Saharan Africa, such practices are found among the ancestors of the Ewe people, as evidenced by the stories related to the exodus of the Ewe from Notsé, where the ancestors decided to leave the city after the tyrannical king Agokoli chose to erect a vast sacred enclosure. The Incas in South America also seem to have made use of sacred enclosures. References Temples Building types Types of monuments and memorials Sacral architecture Religious buildings and structures
Sacred enclosure
Engineering
1,181
32,085,457
https://en.wikipedia.org/wiki/Quasielastic%20scattering
In physics, quasielastic scattering designates a limiting case of inelastic scattering, characterized by energy transfers that are small compared to the incident energy of the scattered particles. The term was originally coined in nuclear physics. It was applied to thermal neutron scattering by Léon Van Hove and Pierre-Gilles de Gennes (quasielastic neutron scattering, QENS). Finally, it is sometimes used for dynamic light scattering (also known by the more expressive term photon correlation spectroscopy). References Nuclear physics Neutron scattering
Quasielastic scattering
Physics,Chemistry
101
2,013,362
https://en.wikipedia.org/wiki/Klotski
Klotski (from Polish klocki, meaning "wooden blocks") is a sliding block puzzle thought to have originated in the early 20th century. The name may refer to a specific layout of ten blocks, or in a more global sense to a whole group of similar sliding-block puzzles where the aim is to move a specific block to some predefined location. Rules Like other sliding-block puzzles, several different-sized block pieces are placed inside a box, which is normally 4×5 in size. Among the blocks, there is a special one (usually the largest) which must be moved to a special area designated by the game board. The player is not allowed to remove blocks, and may only slide blocks horizontally and vertically. Common goals are to solve the puzzle with a minimum number of moves or in a minimum amount of time. Naming The earliest known reference to the name Klotski originates from the computer version for Windows 3.x by ZH Computer in 1991, which was also included in Microsoft Windows Entertainment Pack. The sliding puzzle had already been trademarked and sold under different names for decades, including Psychoteaze Square Root, Intreeg, and Ego Buster. Before Klotski appeared, there was no widely used name for this category of sliding puzzles. History It is still unknown which version of the puzzle is the original. There are many confusing and conflicting claims, and several countries claim to be the ultimate origin of this game. One game, lacking the 5 × 4 design of Pennant, Klotski, and Chinese models but a likely inspiration, is the 19th-century 15-puzzle, in which fifteen wooden squares had to be rearranged. It is suggested that unless 19th-century Asian evidence is found, the most reasonably likely path of transmission is from the late 19th-century square designs to the early 20th-century rectangular ones, such as Pennant, thence to Klotski and Huarong Road. United States The 15-puzzle enjoyed immense popularity in western countries during the late 19th century. Around this time, patents appeared for puzzles using differently shaped blocks. Henry Walton filed in 1893 for a sliding puzzle of identically shaped rectangles. Frank E. Moss filed in 1900 for a sliding puzzle of six squares and four rectangles, which is one of the first known occurrences of a sliding puzzle with unequal blocks. However, the early cognate of Klotski closest in design dates to 1909 in Chicago. Lewis W. Hardy obtained copyright for a game named Pennant Puzzle in 1909, manufactured by OK Novelty Co., Chicago. The aim of this puzzle is identical to Klotski's, and only its default blocks and arrangement are different. Hardy also filed a patent application in 1907 for a sliding-block puzzle similar to Pennant Puzzle, but with a slightly different combination of blocks and a different goal: not only must the largest block be moved to a specific location, but all of the other blocks must achieve a specific configuration as well. The patent was granted in 1912. England John Harold Fleming obtained a patent for a puzzle in England in 1934, with a configuration almost identical to the one described in this article. The puzzle concerned has the same blocks and almost identical placement as forget-me-not, except that the unique horizontal 2×1 block is placed at the bottom instead of beneath the 2×2 block. The patent included a 79-step solution. China The Klotski puzzle, with its two-by-two and one-by-two cells and with the same 4 × 5 dimensions, also closely resembles the Chinese game known as Huarong Dao, also known as Huarong Pass or Huarong Road.
Popular versions of Huarong Dao used cells with names or images of Han dynasty heroes and villains, based on a famous battle from the Yuan dynasty novel Romance of the Three Kingdoms. The Chinese design, commonly using Chinese names for the heroes, villains, and soldiers, shows clear inspiration from the Chinese version of chess, which complicates the question of the origins of the genre of sliding puzzles. The first account of the puzzle's occurrence in China is from Shaanxi Province, where Lin Dekuan of the Northwestern Polytechnical University noted in 1938 that children in a village in Chenggu County were playing a version made with pieces of paper. In 1943, the puzzle was publicized among soldiers, to enhance their cultural life, by Liang Qing, a teacher in the New Fourth Army who had learnt it from people living in northern Jiangsu Province. One of the earliest books about the standard puzzle was written in 1949 by Jiang Changying, a professor at the Northwestern Polytechnical University; Jiang believed that the puzzle was invented in the late 1930s and became popular during the late 1930s and early 1940s. Jiang republished the book in 1997. Jiang also believed that the puzzle was most likely introduced to Shanghai in the early 1940s from northern Jiangsu Province. In 1956, the puzzle appeared in a mathematics magazine under the name "Guan Yu Releases Cao Cao"; some years later, it gained the revolutionary name "Chase Away the Paper Tiger" when it was published in the Liaoning Pictorial in 1959. Plastic versions manufactured in the 1960s by the Shanghai No. 14 Toy Factory and the Shanghai Changchun Plastic Factory were named "Parking the Boats". In the 1980s, enthusiasts created an association and organized competitions in Beijing, Shanghai, and Northeast China. Solving The minimum number of moves for the original puzzle is 81, which has been verified by computer to be the absolute minimum for the default starting layout, if sliding a single piece to any reachable position is counted as a single move; such results are established by exhaustive search of the puzzle's state space, as illustrated in the search sketch below. The first published 81-step solution is by Martin Gardner, in the February 1964 issue of Scientific American. In the article he discussed the following puzzles (with Edward Hordern classification codes in parentheses): Pennant Puzzle (C19), L'Âne Rouge (C27d), Line Up the Quinties (C4), Ma's Puzzle (D1), and a form of Stotts' Baby Tiger Puzzle (F10). The earliest known published solution (not an optimal one) is from the Chinese educator Xǔ Chún Fǎng, in his book 數學漫談 (Mathematics Tidbits; Kāi Míng Shū Diàn, March 1952); his solution involves 100 steps. The current Guinness World Record for the fastest time to solve a 4×5 Klotski puzzle is 3.99 seconds, achieved by Lim Kai Yi, a speedsolver from Malaysia. Variations There are several variations of this game, some with names specific to the culture of certain countries, some with different arrangements of blocks. It is still unknown whether these variations affected each other and how. The following variations have essentially the same layout and block arrangement, varying only in the names used (human, animal, or other), usually with some sort of story behind the names. It is unknown whether they share the same origin, though this is quite possible, as they are identical to each other.
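The computer verification mentioned under Solving amounts to a breadth-first search over all reachable board positions. The following is a minimal illustrative sketch in Python, not any particular published solver; note that it counts single-cell slides, so it reports a move count larger than 81, because the 81-move convention allows a piece to travel several cells, even turning corners, in a single move:

from collections import deque

# Standard Klotski start (the "basic" arrangement described below):
# B is the 2x2 block, other letters are the remaining pieces, "." is empty.
START = ("ABBC",
         "ABBC",
         "DEEF",
         "DGHF",
         "I..J")

def piece_cells(board):
    """Map each piece label to the list of cells it occupies."""
    cells = {}
    for r, row in enumerate(board):
        for c, ch in enumerate(row):
            if ch != ".":
                cells.setdefault(ch, []).append((r, c))
    return cells

def canonical(board):
    """Relabel pieces by shape so interchangeable pieces deduplicate."""
    grid = [["."] * 4 for _ in range(5)]
    for ch, occ in piece_cells(board).items():
        rows = {r for r, _ in occ}
        shape = ("Q" if len(occ) == 4 else "S" if len(occ) == 1
                 else "V" if len(rows) == 2 else "H")
        for r, c in occ:
            grid[r][c] = shape
    return tuple("".join(row) for row in grid)

def neighbours(board):
    """All boards reachable by sliding one piece a single cell."""
    for ch, occ in piece_cells(board).items():
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            target = [(r + dr, c + dc) for r, c in occ]
            if all(0 <= r < 5 and 0 <= c < 4 and board[r][c] in (".", ch)
                   for r, c in target):
                grid = [list(row) for row in board]
                for r, c in occ:
                    grid[r][c] = "."
                for r, c in target:
                    grid[r][c] = ch
                yield tuple("".join(row) for row in grid)

def solve(start):
    """Breadth-first search; returns the minimum number of one-cell slides."""
    seen = {canonical(start)}
    queue = deque([(start, 0)])
    while queue:
        board, depth = queue.popleft()
        # Goal: the 2x2 block covers the bottom-middle exit.
        if all(board[r][c] == "B" for r, c in ((3, 1), (3, 2), (4, 1), (4, 2))):
            return depth
        for nxt in neighbours(board):
            key = canonical(nxt)
            if key not in seen:
                seen.add(key)
                queue.append((nxt, depth + 1))

print(solve(START))

Relabelling pieces by shape in canonical() treats interchangeable pieces as identical, which greatly shrinks the search space without changing the answer.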
Huarong Dao (alternatively named Huarong Pass or Huarong Trail) is the Chinese variation, which shows unique Chinese characteristics by basing itself on an episode from Romance of the Three Kingdoms, one of the Four Great Classical Novels, about the warlord Cao Cao retreating through Huarong Trail (present-day Jianli County, Jingzhou, Hubei) after his defeat at the Battle of Red Cliffs in the winter of 208/209 CE during the late Eastern Han dynasty. He encountered an enemy general, Guan Yu, who was guarding the path and waiting for him. Guan Yu spared Cao Cao and allowed the latter to pass through Huarong Trail on account of the generous treatment he had received from Cao in the past. The largest block in the game is named "Cao Cao". Daughter in the box The Daughter in the Box (Japanese name: hakoiri musume 箱入り娘) wood puzzle depicts an "innocent young girl, who knows nothing of the world" trapped in a building. The largest piece is named "daughter", and other blocks are given the names of other family members (like father, mother, and so on). Another Japanese variation uses the names of shogi pieces. L'âne rouge In France, it is well known as L'âne rouge. It features a red donkey (the largest piece) trying to escape a maze of fences and pens to get to its carrots. However, there is no known and documented record of its first existence in France. Khun Chang Khun Phaen This is a variation from Thailand. Khun Phaen is a famous character in Thai legend, and the game is named after the epic poem Khun Chang Khun Phaen, in which the character is imprisoned. The game depicts Khun Phaen breaking out of the prison by overcoming its nine sentries. There is a slight difference between Khun Chang Khun Phaen and the standard layout: the two middle 1×1 blocks are moved to the bottom. Other than that, all other blocks are the same. The origin of this variation is unknown. Other block arrangements In this context, the "basic" arrangement is assumed to be a 4×5 area laid out as follows: in the left-hand column, two 1×2 blocks with a 1×1 block beneath; in the right-hand column, two 1×2 blocks with a 1×1 block beneath; in the middle two columns, a 2×2 block at the top, with a horizontal 2×1 block beneath it and two 1×1 blocks beneath that, leaving a 2×1 empty space at the bottom. This is used globally as the "basic" game of Klotski. It is coded C27d in Hordern's classification of sliding-puzzle games. Pennant Puzzle Coded C19 in Hordern's classification, it was first copyrighted in 1909 by Lewis W. Hardy in the United States. Standard Trailer Co. copyrighted it under the name Dad's Puzzler in 1926 (also in the US). Its arrangement is different: the default locations of all blocks differ from Klotski's. For example, the largest square block is in the upper left corner. It is a 4×5 area, with one 2×2, two 1×2, four 2×1, and two 1×1 pieces. The exit is not at the bottom middle, but at the bottom left. Other than these, the game rules are the same as Klotski's. The minimum number of moves to solve the puzzle is 59. Ma's Puzzle Ma's Puzzle was copyrighted by Standard Trailer Co. in 1927. It was the first sliding puzzle to use non-rectangular pieces. Its goal is to join its two L-shaped pieces together, either anywhere on the board or in its top right corner.
Some include blocks which have special effects. In video games The puzzle is also found in the 1995 Super Nintendo Entertainment System game Lufia II: Rise of the Sinistrals. In a cave on Dragon Mountain, the player is told "The world's most difficult trick is beyond this point." A few floors down, the player is faced with the sliding block puzzle. The central block contains four chests with powerful items: Mega shield, Holy Robe, Legend helm, and Lizard blow. See also N-puzzle Rush Hour (puzzle) Mechanical puzzles Combination puzzles Sliding puzzle Notes and references External links easy-to-follow video. 'Forget-me-not' (L'Âne Rouge) solution. Solving Dad's Puzzle Mechanical puzzles Combination puzzles Puzzle video games Microsoft Entertainment Pack Wooden toys Single-player games
Klotski
Mathematics
2,415
25,636,339
https://en.wikipedia.org/wiki/List%20of%20heaviest%20bells
Following is a list of the heaviest bells known to have been cast, and the period of time during which they held that title. Heaviest functioning bell in the world The title of heaviest functioning bell in the world has been held chronologically by: The Great Bell of Dhammazedi At approximately 300 tons, the Great Bell of Dhammazedi is the largest bell known to have existed in recorded history. Cast in 1484 by King Dhammazedi of Mon, this bell was located at the Shwedagon Pagoda in Rangoon, Burma (now Yangon, Myanmar). The bell was said to be twelve cubits (6.276 m) high and eight cubits (4.184 m) wide. The Great Bell of Dhammazedi remained at the Shwedagon Pagoda as the heaviest functioning bell in the world until 1608. That year, the Portuguese warlord and mercenary Philip de Brito removed it and attempted to carry it by a specially constructed raft down the Yangon River to his stronghold of Thanlyin (later known as Syriam). However, the ship carrying the bell sank at the confluence of the Yangon and Bago rivers. The Dhammazedi Bell remains buried to this day at that location, possibly well preserved beneath the sediment. Numerous attempts have been made to locate and recover the bell, thus far without success. So while the Great Bell of Dhammazedi might indeed be the heaviest bell in the world, it must be disqualified from consideration as such until it has been recovered and restored to functional status. The Chion-in Temple Bell Cast in 1633, the 74-ton Chion-in Temple Bell, located in Kyoto, Japan, held the title of heaviest functioning bell in the world until 1810. From March 1839 until March 1896, the Mingun Bell was not functional because it was not hanging freely from its shackles; during this period, the Chion-in Temple Bell regained its former title. The Mingun Bell Cast in 1808, the 90-ton Mingun Bell in Mingun, Sagaing Division, Burma was the heaviest functioning bell in the world from its suspension in 1810 until 23 March 1839. On that date, it was knocked off its supports by a large earthquake. The Mingun Bell was resuspended in March 1896 by a team of men from the Irrawaddy Flotilla Company, and it was again the world's heaviest functioning bell from its resuspension in 1896 until 1902. The Mingun Bell regained its status as the heaviest functioning bell in the world in 1942 and held that title until 2000. The Shitennō-ji Temple Bell In 1902, the newly cast 114-ton Shitennō-ji Temple Bell was hung in Osaka, Japan. The Shitennō-ji Temple Bell reigned as the heaviest functioning bell in the world from that year until 1942, when it was melted down for its metal to assist the then-ongoing World War II effort. The Bell of Good Luck Cast on New Year's Eve 2000, the Bell of Good Luck is located in the Foquan Temple in Pingdingshan, Henan, China. The bell weighs about 116 tonnes. The Bell of Good Luck has therefore held the title of heaviest functioning bell in the world since its casting in 2000. The Tsar Bell The 216-ton Russian Tsar Bell (also known as the Tsar Kolokol III) on display on the grounds of the Moscow Kremlin is the heaviest bell known to exist in the world today. However, a very large piece broke off from the Tsar Bell during a fire which engulfed the tower the bell was intended to be hung in, so this irreparably damaged bell has never been suspended or rung.
The Tsar Bell cannot be considered the heaviest functioning bell in the world because it cannot serve as a percussion instrument. Rather, it may be considered the largest bell, or at least the largest bell-shaped sculpture, in the world. Existing bells Bells weighing 25 tonnes or more: Destroyed or lost bells Bells weighing 25 tonnes or more, no longer in existence (lost or destroyed): Gallery See also American Bell Association International Bellfounding Campanology Carillon Russian Orthodox bell ringing References External links Blagovest Russian Church Bells: A Select List of Russian Bells Weighing 36,100 Pounds or More Guild of Carillonneurs in North America Russian Orthodox leader blesses new church bells at holy site List of heaviest Bells Burmese musical instruments Chinese musical instruments German musical instruments Japanese musical instruments Russian musical instruments Heaviest or most massive things
List of heaviest bells
Physics
954
31,833,878
https://en.wikipedia.org/wiki/SPRED1
Sprouty-related, EVH1 domain-containing protein 1 (pronounced spread-1) is a protein that in humans is encoded by the SPRED1 gene, which is located on chromosome 15q13.2 and has seven coding exons. Function SPRED-1 is a member of the Sprouty family of proteins and is phosphorylated by tyrosine kinase in response to several growth factors. The encoded protein can act as a homodimer or as a heterodimer with SPRED2 to regulate activation of the MAP kinase cascade. Clinical associations Defects in this gene are a cause of neurofibromatosis type 1-like syndrome (NFLS). Mutations in this gene are associated with Legius syndrome. Childhood leukemia Mutations The following mutations have been observed: An exon 3 c.46C>T mutation leading to p.Arg16Stop; this mutation may result in a truncated, nonfunctional protein. Analysis of blast cells displayed the same abnormality as the germline mutation, with one mutated allele (no somatic SPRED1 single-point mutation or loss of heterozygosity was found). The M4/M5 phenotypes of AML are most closely associated with Ras pathway mutations. Ras pathway mutations are also associated with monosomy 7. Three nonsense mutations (R16X, E73X, R262X), two frameshift mutations (c.1048_1049delGG, c.149_1152del 4 bp), and one missense mutation (V44D). p.R18X and p.Q194X are associated with a phenotype of altered pigmentation without tumorigenesis. Disease Database SPRED1 gene variant database See also Neurofibromin 1 Patients without Neurofibromin 1 or SPRED1 mutations may have SPRED2, SPRED3 or SPRY1, SPRY2, SPRY3 or SPRY4 mutations. References Further reading SPR domain EVH1 domain Human proteins Proteins Hematopathology Neuro-cardio-facial-cutaneous syndromes
SPRED1
Chemistry
432
42,962,805
https://en.wikipedia.org/wiki/Conservation%20and%20restoration%20of%20plastic%20objects
Conservation and restoration of objects made from plastics is work dedicated to the conservation of objects of historical and personal value made from plastics. When applied to cultural heritage, this activity is generally undertaken by a conservator-restorer. Background Within museum collections, there are a variety of artworks and artifacts composed of organic plastic materials, either synthetic or semi-synthetic; these were created for a range of uses, from artistic, to technical, to domestic. Plastics have become an integral component of life, and many plastic artifacts have become cultural icons or objects worth preserving for the future. Although relatively new materials for museum collections, having originated in the 19th century, plastics are deteriorating at an alarming rate. This risks not only the loss of the objects themselves; nearby materials may also be degraded by outgassing or reactions with other released chemicals. Identification of plastics Identification of the plastic components of a collection is extremely important, because some plastics may release a harmful toxin or gas that can damage nearby objects. A preservation plan can be established to slow down these effects and protect a collection. Plastics are identified by various methods, including trade name, trademark, or patent number. Depending on the manufacturer, different chemical formulas and materials may have been used to produce the plastic over the years. A recycling code may be present, giving general information about the material composition. Plastic composites or proprietary blends can be more difficult to identify. If there are no markings to identify the type of plastic used, it may still be identified by using various types of spectroscopic technology, such as optical spectroscopy, Raman spectroscopy, and mid- or near-infrared spectroscopy, along with mass spectrometry. Other forms of identification include elemental analysis or thermal analysis to decipher the composition of plastics. The Museum of Design in Plastics (MoDiP) has created a guide to plastic objects that includes manufacturing dates and manufacturing processes, along with typical characteristics such as feel and smell. If an object in a collection has characteristics that differ from what is expected, it is possible that the piece has begun to deteriorate. In 2022, the Getty Conservation Institute published a book on the properties of commonly used plastics and elastomers, including 56 "fact sheets" summarizing important characteristics of the materials and methods of identification. Common plastics The list below is of chemical compositions that make up common plastics found in museum collections. These are some plastics that may degrade, but are not seriously harmful to nearby objects: Non-plasticized (rigid) polyvinyl chloride (PVC) The following are "malignant" plastic materials that will age rapidly if left untreated, and which have a higher risk of off-gassing or releasing toxic materials that can damage surrounding objects: Polyvinyl chloride treated with plasticizers Polyurethane Cellulose esters, including cellulose nitrate and cellulose acetate Vulcanized rubber Biodegradable plastics Environmental concerns have driven recent changes in plastic manufacturing towards biodegradable plastics, with a potentially negative effect upon the long-term stability of such materials within museum collections.
Deterioration A difficult aspect of plastic deterioration is that one cannot see what types of chemical reactions are occurring in the interior of an object, or easily identify the makeup of proprietary plastic composites. Many plastics will give off a distinct odor, ooze liquids, or begin to shrink or crack as they age. Although deterioration cannot always be stopped, it is important to know the causes and be able to mitigate or slow damage. Causes The causes of deterioration of plastics can be linked to age, chemical composition, storage, and improper handling of the objects: Age – When plastics were first manufactured in the 19th century, they were derived directly from organic materials; over the years these objects have usually deteriorated due to lack of knowledge and improper handling of the early plastics. Chemical – Knowing an object's chemical composition helps conservators understand how it will react over time. Chemical reactions are driven by heat, oxygen, light, liquids, additives, and biological attack. Storage – Improper storage of plastic artifacts can allow contamination and deterioration to occur. This often happens when temperature or relative humidity fluctuate in the storage area, which may cause the polymers to react to the environment, deteriorate, and possibly contaminate surrounding objects. Maintaining stable conditions is also important when an object is on exhibit: when the object is lighted and on display, its temperature and humidity can fluctuate, so conditions inside the exhibit case must be monitored and adjusted when necessary to help prevent damage. Improper handling – Improper cleaning techniques, such as using water or solvents on incompatible materials, can cause damage. Human error when handling objects can also cause abrasions or scratches. Chemical processes Understanding the different types of chemical degradation of plastics helps in planning specific measures to protect plastic artifacts. Listed below are types of chemical reactions that accelerate the deterioration of the polymer's structure: Photo-oxidative degradation occurs when plastic degrades from exposure to ultraviolet (UV) or visible light; the most damaging wavelengths depend on the composition of the polymer. In general, plastic will be affected by light, and it is best practice to keep plastic away from light sources as much as possible, especially during long-term storage. Thermal degradation affects the entire bulk volume of the polymer making up an object, and is strongly affected by the temperature and amount of light exposure. Ozone-induced degradation will deteriorate saturated and unsaturated polymers when the plastic is exposed to atmospheric ozone. A test can be conducted to see if the object has been exposed, by taking small samples for analysis using Fourier-transform infrared spectroscopy (FTIR). Catalytic degradation mainly concerns plastic waste polymers as they are transformed into hydrocarbons. Biodegradation causes the surface or the strength of the plastic to change; this process eventually decomposes vulnerable materials into carbon dioxide and water as microbes consume components of the material. Hydroperoxide decomposition occurs when metals and metal ions within the plastic material lead to the deterioration of the object. Plasticizer migration occurs when additive chemicals intended to keep a plastic resin soft and pliable gradually move to the surface or are shed from an object.
The loss of these chemicals causes the plastic to revert to a brittle state, often shrinking or distorting in shape. The migrating chemicals may cause other nearby objects to deform or otherwise degrade. In addition, many plasticizers, such as phthalates or bisphenol A (BPA), may be toxic, hormone-disrupting, or carcinogenic in their biological effects. Additional effects of deterioration: Plastics composed of cellulose acetate, when exposed to water, often will give off a smell of vinegar (vinegar syndrome); the surface will have a white powder residue and will begin to shrink. Cellulose acetate butyrate (CAB) and cellulose butyrate will produce butyric acid, which has a "vomit odor". Polyvinyl chloride may cause a "blooming" effect, a white powder on the surface that can contaminate nearby materials. Preventive care A yearly checkup of plastic artifacts can help monitor their condition, as well as the condition of the surrounding objects, to verify that they have not been cross-contaminated. Safe handling Impermeable safety gloves such as those made of nitrile can help prevent toxins from entering the skin when handling plastic objects. Dust masks, respirators, or other personal protective equipment may be required for protection from outgassing or airborne microplastic dusts produced by some decaying plastics. Storage environment Plastics are best stored at a relative humidity level of about 50%, at a cool and stable storage temperature, in light-proof enclosures. Because the composition of each plastic material can be different, it is difficult to designate a single uniform storage care plan; understanding the specific composition of a plastic artifact can help determine its preferred climate conditions. Keeping plastics at a stable low temperature and placing these objects either in cold storage or in oxygen-impermeable bags helps to slow degradation. Monitoring plastics in their storage environment is done by tracking their status and condition with log entries in spreadsheets or another database. Monitoring the temperature environment is done using data-logger hardware that tracks hourly changes in temperature (and optionally, humidity); a minimal monitoring sketch appears at the end of this section. Objects composed of flammable and unstable cellulose nitrate especially benefit from cold storage, to reduce their rate of decay. Long-term storage supplies Adsorbents such as activated carbon, silica gel, and zeolites are used to absorb gases that are released from plastics. These adsorbents can also be used to capture any off-gassing that occurs, whether the object is on exhibit or in long-term storage. Adsorbents, along with acid-free boxes, can help slow the process of degradation and vinegar syndrome, which is common in certain types of film, Lego plastics, and artwork. Oxygen-impermeable bags are used to exclude atmospheric oxygen. In combination with oxygen absorbers, this prevents oxidation and deterioration of the contents. Conservation The process of conservation and restoration of plastics requires an understanding of the chemical composition of the material and an appreciation for the possible methods of restoration and their limitations, as well as development of a post-treatment preventive care plan for the object. Cleaning The cleaning of plastics is done with the use of appropriate solvents, after identifying the polymers that make up the composition of the plastic. A spot test can be performed if there is uncertainty how the object will react to water or solvents.
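The data-logger monitoring mentioned above lends itself to light automation. The following Python sketch is offered purely as an illustration, not as any institution's actual system; the file name, column names, and tolerance bands are all assumptions. It reads hourly logger output and flags readings that stray from the 50% relative-humidity target or exceed a chosen cool-storage temperature:

```python
import csv

# Assumed tolerance bands; real targets depend on the specific plastic.
RH_TARGET, RH_TOL = 50.0, 5.0   # percent relative humidity
TEMP_MAX = 18.0                 # degrees Celsius, hypothetical cool-storage ceiling

def flag_excursions(path):
    """Read hourly logger rows (timestamp, temp_c, rh_pct) and collect excursions."""
    excursions = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            temp, rh = float(row["temp_c"]), float(row["rh_pct"])
            if abs(rh - RH_TARGET) > RH_TOL or temp > TEMP_MAX:
                excursions.append((row["timestamp"], temp, rh))
    return excursions

if __name__ == "__main__":
    for ts, temp, rh in flag_excursions("logger_export.csv"):
        print(f"{ts}: {temp:.1f} C, {rh:.1f}% RH is outside the storage band")
```

Any excursions flagged this way can then be entered into the same condition-tracking spreadsheet or database described above.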
Scratch removal Within the field of contemporary art, where the surface finish is part of the artist's intent, the removal of scratches may need to be more nuanced, compared to simply compensating for accidental damage to social-historical artifacts. Conservators have developed and scientifically investigated a variety of methods for scratch removal. Filling Fillings may be needed if an object has suffered considerable loss of material due to accidental damage or chemical deterioration. The process of filling depends on the object's chemical composition, and requires consideration of refractive indexes, transparency, viscosity, and its compatibility with the rest of the object. See also Disc rot References Further reading External links POPART: an international collaborative research project about the preservation of plastic artefacts in museums Conservation of plastics Safe Handling of Plastics in a Museum Environment Deutsches Kunsstoff Museum PlArt museo Conservation of rubber THE CONSERVATION OF A PLASTIC MASK BY MARISOL Care of plastics:Malignant Plastics Care of Objects Made from Rubber and Plastic MoDiP The Getty Conservation Institute Conservation and restoration of cultural heritage Art history Cultural heritage Museology Cultural heritage conservation Sculpture Plastics
Conservation and restoration of plastic objects
Physics
2,196
286,918
https://en.wikipedia.org/wiki/Intermedia
Intermedia is an art theory term coined in the mid-1960s by Fluxus artist Dick Higgins to describe the strategies of interdisciplinarity that occur within artworks existing between artistic genres. It was also used by John Brockman to refer to works in expanded cinema that were associated with Jonas Mekas' Film-Makers' Cinematheque. Gene Youngblood also described intermedia, in his Intermedia column for the Los Angeles Free Press beginning in 1967, as part of a global network of multiple media that was expanding consciousness. Youngblood gathered and expanded upon intermedia ideas from this series of columns in his 1970 book Expanded Cinema, with an introduction by Buckminster Fuller. Over the years, intermedia has been used almost interchangeably with multi-media and, more recently, with the categories of digital media, technoetics, electronic media and post-conceptualism. Characteristics Areas such as those between drawing and poetry, or between painting and theatre, could be described as intermedia. With repeated occurrences, these new genres between genres could develop their own names (e.g. visual poetry, performance art); historically, an example is haiga, which combined brush painting and haiku into one composition. Dick Higgins described the tendency of what he thought was the most interesting and best in the new art to cross the boundaries of recognized media, or even to fuse the boundaries of art with media that had not previously been considered for art forms, including computers. With characteristic modesty, Dick Higgins often noted that Samuel Taylor Coleridge had first used the term. Academia In 1968, Hans Breder founded the first university program in the United States to offer an M.F.A. in intermedia. The Intermedia Area at The University of Iowa graduated artists such as Ana Mendieta and Charles Ray. In addition, the program developed a substantial visiting-artist tradition, bringing artists such as Dick Higgins, Vito Acconci, Allan Kaprow, Karen Finley, Robert Wilson, Eric Andersen and others to work directly with Intermedia students. Two other prominent university programs that focus on intermedia are the Intermedia program at Arizona State University and the Intermedia M.F.A. at the University of Maine, founded and directed by Fluxus scholar and author Owen Smith. Additionally, the Roski School of Fine Arts at the University of Southern California features Intermedia as an area of emphasis in its B.A. and B.F.A. programs. The University of Maryland, Baltimore County offers an M.F.A. in Intermedia and Digital Art. Concordia University in Montreal, QC offers a B.F.A. in Intermedia/Cyberarts. The Herron School of Art and Design at Indiana University–Purdue University Indianapolis has an M.F.A. program with Photography and Intermedia degrees. The University of Oregon offers a Master of Music degree in Intermedia Music Technology. The Pacific Northwest College of Art offers a B.F.A. in Intermedia. In the United Kingdom, Edinburgh College of Art (within the University of Edinburgh) introduced a BA (Hons) degree in Intermedia Arts, and intermedia can be a focus of study in Master's programmes. The Academy of Fine Arts (AVU) in Prague offers a Master's in Intermedia Studies, founded by Milan Knížák, and the Hungarian University of Fine Arts has an Intermedia Program. See also Technoetics Fluxus Multimedia New media art Non-linear media Neo-Dada References Sources Owen Smith (1998), Fluxus: The History of an Attitude, San Diego State University Press Hannah B.
Higgins, "The Computational Word Works of Eric Andersen and Dick Higgins" in H. Higgins, & D. Kahn (eds), Mainframe experimentalism: Early digital computing in the experimental arts. Berkeley, CA: University of California Press (2013). Ina Blom, The Intermedia Dynamic: An Aspect of Fluxus (PhD diss., University of Oslo, 1993). Natilee Harren, "The Crux of Fluxus: Intermedia, Rear-guard," in Art Expanded, 1958-1978, edited by Eric Crosby with Liz Glass. Vol. 2 of Living Collections Catalogue. Minneapolis: Walker Art Center, 2015. Jonas Mekas, “On the Plastic Inevitables and the Strobe Light (May 26, 1966),” in Movie Journal: The Rise of the New American Cinema, 1959–1971 (New York: Columbia University Press, 2016), 249–250. Contemporary art Visual music American art Conceptual art
Intermedia
Technology
923
22,622,659
https://en.wikipedia.org/wiki/MORE%20protocol
MORE, which stands for MAC-independent Opportunistic Routing, is an opportunistic routing protocol designed for wireless mesh networks. The protocol removes the dependency that other opportunistic routing protocols, such as ExOR and SOAR, have on the MAC layer. Both of those protocols make use of a scheduler to coordinate transmission among the nodes: only one node transmits at a given point in time, and all the other nodes listen to it. The listening nodes remove from their own queues any packets they overhear, which ensures that the same packet is not redundantly retransmitted by different nodes. MORE instead makes use of network coding techniques and brings about spatial reuse by allowing all the nodes to transmit at the same time. Given a file, the source node breaks the file up into K packets; the number of packets each file is divided into varies. The uncoded packets are called "native packets". The source node then creates random linear combinations of the K native packets and forwards them; the code vector attached to each coded packet records the random coefficients the node chose to perform the encoding. The source also attaches a MORE header to each packet, along with a forwarding list. The forwarders listen to the transmissions of the source node. If a node that hears a packet is in the forwarding list, it checks whether the packet carries any new information; such packets are called innovative packets. If the packet is innovative, the node transmits a random linear recombination of the coded packets it has received, which is, in effect, again a linear combination of the native packets. The node ignores all non-innovative packets. (A toy sketch of this encoding and the innovativeness test appears below.) The destination receives packets and checks them for innovativeness. Upon receiving K innovative packets, it sends an ACK back to the source and proceeds to decode the packets. The intermediate nodes hear this ACK, stop further transmission, and purge the packets in their buffers. Practical challenges Calculating the number of packets that each forwarder has to send: the paper suggests a distributed heuristic to calculate this. Stopping rule: once the destination has received K innovative packets, it is necessary to stop further traffic being pumped into the network; the ACK is therefore sent with priority along the shortest path back toward the source, so that nodes stop pumping packets into the network. Overhead MORE introduces a few overheads in the network. The use of network coding requires the nodes to have sufficient computing ability. It also requires the nodes to have sufficient memory to store the packets and process them. Finally, the protocol adds an additional MORE header to each packet. References Wireless networking
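To make the encoding and the innovativeness test concrete, here is a toy Python sketch. It is not taken from the MORE paper: real MORE codes over the field GF(2^8) with byte-valued coefficients, while this sketch uses GF(2) (plain XOR) so that a code vector fits in an integer bitmask; all names are illustrative.

```python
import random

def xor_bytes(a, b):
    """XOR two equal-length byte strings (addition over GF(2))."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(native, rng):
    """Source/forwarder step: emit a random linear combination of packets."""
    k, size = len(native), len(native[0])
    coeff = rng.randrange(1, 1 << k)        # nonzero code vector, one bit per packet
    payload = bytes(size)
    for i in range(k):
        if coeff >> i & 1:
            payload = xor_bytes(payload, native[i])
    return coeff, payload

class Decoder:
    """Destination: keep only innovative packets; decode after K of them."""
    def __init__(self, k):
        self.k = k
        self.rows = []                      # (code vector, payload), distinct pivots

    def receive(self, coeff, payload):
        # Reduce the incoming code vector by stored pivots, highest first.
        for c, p in sorted(self.rows, reverse=True):
            if coeff >> (c.bit_length() - 1) & 1:
                coeff, payload = coeff ^ c, xor_bytes(payload, p)
        if coeff == 0:
            return False                    # non-innovative: ignored
        self.rows.append((coeff, payload))
        return True

    def decode(self):
        """Gauss-Jordan elimination over GF(2) to recover the native packets."""
        rows = sorted(self.rows)            # increasing pivot order
        for i in range(self.k):
            ci, pi = rows[i]
            pivot = ci.bit_length() - 1
            for j in range(self.k):         # clear this pivot from every other row
                cj, pj = rows[j]
                if j != i and cj >> pivot & 1:
                    rows[j] = (cj ^ ci, xor_bytes(pj, pi))
        return [p for _, p in sorted(rows)] # row with pivot i holds native packet i

rng = random.Random(0)
native = [bytes([65 + i]) * 8 for i in range(4)]   # K = 4 native packets
dec = Decoder(4)
while len(dec.rows) < dec.k:                       # lossy, duplicated delivery is fine
    dec.receive(*encode(native, rng))
assert dec.decode() == native
```

A packet is innovative exactly when its code vector is linearly independent of those already received, which the sketch checks by Gaussian elimination; the same elimination, completed once K innovative packets have arrived, recovers the native packets.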
MORE protocol
Technology,Engineering
533
4,005,703
https://en.wikipedia.org/wiki/102P/Shoemaker
102P/Shoemaker, also known as Shoemaker 1, is a periodic comet in the Solar System. It was first seen in 1984 and then again in 1991. Images taken of it in 1999 were not recognized until 2006 when it was once again observed. It was unexpectedly dim in each of these returns. References External links Orbital simulation from JPL (Java) / Horizons Ephemeris 102P/Shoemaker 1 – Seiichi Yoshida @ aerith.net 102P at Kronk's Cometography IAU Minor Planet Center, Minor Planet Electronic Circular No. 2006-O54 giving questionable observations from 1999/2000 and new observations from 2006. IAU Central Bureau for Astronomical Telegrams Circular No. 5361 giving visual magnitude estimates for 1991 observations. IAU Central Bureau for Astronomical Telegrams Circular No. 5336 giving calculated orbit based on 1991 observations. IAU Central Bureau for Astronomical Telegrams Circular No. 5286 Describing recovery of 102P/Shoemaker in 1991 IAU Central Bureau for Astronomical Telegrams Circular No. 4017. Describing more visual magnitude estimates and orbital parameters from 1984. IAU Central Bureau for Astronomical Telegrams Circular No. 4002. Describing visual magnitude estimates of comet from 1984. IAU Central Bureau for Astronomical Telegrams Circular No. 4000. Describing positions of comet observed in 1984. Also mentions close pass of Jupiter calculated to have occurred in 1980. IAU Central Bureau for Astronomical Telegrams Circular No. 3998 Describing the initial calculation of the comet's orbit in 1984 Periodic comets 0102 102P 102P 102P 102P 19840927
102P/Shoemaker
Astronomy
328
5,094,208
https://en.wikipedia.org/wiki/Fritz%20Carlson
Fritz David Carlson (23 July 1888 – 28 November 1952) was a Swedish mathematician. After the death of Torsten Carleman, he headed the Mittag-Leffler Institute. Carlson's contributions to analysis include Carlson's theorem, the Pólya–Carlson theorem on rational functions, and Carlson's inequality. In number theory, his results include Carlson's theorem on Dirichlet series. Hans Rådström, Germund Dahlquist, and Tord Ganelius were among his students. Notes External links 1888 births 1952 deaths 20th-century Swedish mathematicians Academic staff of the KTH Royal Institute of Technology Mathematical analysts Directors of the Mittag-Leffler Institute People from Vimmerby Municipality
Fritz Carlson
Mathematics
145
62,165,952
https://en.wikipedia.org/wiki/Neopentylene%20fluorophosphate
Neopentylene fluorophosphate, also known as NPF, is an organophosphate compound that is classified as a nerve agent. It has a comparatively low potency, but is stable and persistent, with a delayed onset of action and long duration of effects. See also Diisopropyl fluorophosphate IPTBO References Organophosphate insecticides Acetylcholinesterase inhibitors Phosphorofluoridates Dioxaphosphorinanes
Neopentylene fluorophosphate
Chemistry
104
2,178,487
https://en.wikipedia.org/wiki/Epitope%20mapping
In immunology, epitope mapping is the process of experimentally identifying the binding site, or epitope, of an antibody on its target antigen (usually, on a protein). Identification and characterization of antibody binding sites aid in the discovery and development of new therapeutics, vaccines, and diagnostics. Epitope characterization can also help elucidate the binding mechanism of an antibody and can strengthen intellectual property (patent) protection. Experimental epitope mapping data can be incorporated into robust algorithms to facilitate in silico prediction of B-cell epitopes based on sequence and/or structural data. Epitopes are generally divided into two classes: linear and conformational/discontinuous. Linear epitopes are formed by a continuous sequence of amino acids in a protein. Conformational epitopes are formed by amino acids that are nearby in the folded 3D structure but distant in the protein sequence; note that conformational epitopes can include some linear segments. B-cell epitope mapping studies suggest that most interactions between antigens and antibodies, particularly autoantibodies and protective antibodies (e.g., in vaccines), rely on binding to discontinuous epitopes. Importance for antibody characterization By providing information on mechanism of action, epitope mapping is a critical component in therapeutic monoclonal antibody (mAb) development. Epitope mapping can reveal how a mAb exerts its functional effects - for instance, by blocking the binding of a ligand or by trapping a protein in a non-functional state. Many therapeutic mAbs target conformational epitopes that are only present when the protein is in its native (properly folded) state, which can make epitope mapping challenging. Epitope mapping has been crucial to the development of vaccines against prevalent or deadly viral pathogens, such as chikungunya, dengue, Ebola, and Zika viruses, by determining the antigenic elements (epitopes) that confer long-lasting immunization effects. Complex target antigens, such as membrane proteins (e.g., G protein-coupled receptors [GPCRs]) and multi-subunit proteins (e.g., ion channels), are key targets of drug discovery. Mapping epitopes on these targets can be challenging because of the difficulty in expressing and purifying these complex proteins. Membrane proteins frequently have short antigenic regions (epitopes) that fold correctly only in the context of a lipid bilayer. As a result, mAb epitopes on these membrane proteins are often conformational and, therefore, are more difficult to map. Importance for intellectual property (IP) protection Epitope mapping has become prevalent in protecting the intellectual property (IP) of therapeutic mAbs. Knowledge of the specific binding sites of antibodies strengthens patents and regulatory submissions by distinguishing between current and prior art (existing) antibodies. The ability to differentiate between antibodies is particularly important when patenting antibodies against well-validated therapeutic targets (e.g., PD1 and CD20) that can be drugged by multiple competing antibodies. In addition to verifying antibody patentability, epitope mapping data have been used to support broad antibody claims submitted to the United States Patent and Trademark Office. Epitope data have been central to several high-profile legal cases involving disputes over the specific protein regions targeted by therapeutic antibodies. In this regard, the Amgen v.
Sanofi/Regeneron Pharmaceuticals PCSK9 inhibitor case hinged on the ability to show that both the Amgen and Sanofi/Regeneron therapeutic antibodies bound to overlapping amino acids on the surface of PCSK9. Methods There are several methods available for mapping antibody epitopes on target antigens: X-ray co-crystallography and cryogenic electron microscopy (cryo-EM). X-ray co-crystallography has historically been regarded as the gold-standard approach for epitope mapping because it allows direct visualization of the interaction between the antigen and antibody. Cryo-EM can similarly provide high-resolution maps of antibody-antigen interactions. However, both approaches are technically challenging, time-consuming, and expensive, and not all proteins are amenable to crystallization. Moreover, these techniques are not always feasible due to the difficulty in obtaining sufficient quantities of correctly folded and processed protein. Finally, neither technique can distinguish key epitope residues (energetic "hot spots") for mAbs that bind to the same group of amino acids. Array-based oligopeptide scanning. Also known as overlapping peptide scan or pepscan analysis, this technique uses a library of oligopeptide sequences from overlapping and non-overlapping segments of a target protein, and tests for their ability to bind the antibody of interest. This method is fast, relatively inexpensive, and specifically suited to profile epitopes for large numbers of candidate antibodies against a defined target. The epitope mapping resolution depends on the number of overlapping peptides that are used. The main disadvantage of this approach is that discontinuous epitopes are deconstructed into smaller peptides, which can cause lower binding affinities. However, advances have been made with technologies such as constrained peptides, which can be used to mimic conformational as well as discontinuous epitopes. For example, an antibody against CD20 was mapped in a study using array-based oligopeptide scanning, by combining non-adjacent peptide sequences from different parts of the target protein and enforcing conformational rigidity onto this combined peptide (e.g., by using CLIPS scaffolds). Replacement analysis on peptides also allows single amino acid resolution, and can therefore pinpoint key epitope residues. Site-directed mutagenesis mapping. The molecular biological technique of site-directed mutagenesis (SDM) can be used to enable epitope mapping. In SDM, systematic mutations of amino acids are introduced into the sequence of the target protein. Binding of an antibody to each mutated protein is tested to identify the amino acids that comprise the epitope. This technique can be used to map both linear and conformational epitopes but is labor-intensive and time-consuming, typically limiting analysis to a small number of amino-acid residues. High-throughput shotgun mutagenesis epitope mapping. Shotgun mutagenesis is a high-throughput approach for mapping the epitopes of mAbs. The shotgun mutagenesis technique begins with the creation of a mutation library of the entire target antigen, with each clone containing a unique amino acid mutation (typically an alanine substitution). Hundreds of plasmid clones from the library are individually arrayed in 384-well microplates, expressed in human cells, and tested for antibody binding. Amino acids of the target required for antibody binding are identified by a loss of immunoreactivity. 
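For instance, the identification step from such a screen can be summarized with a few lines of analysis. The sketch below uses invented reactivity values and arbitrary cutoffs, purely to illustrate the logic; a control mAb is checked so that globally misfolded mutants are not mistaken for epitope residues:

```python
# Hypothetical shotgun-mutagenesis readout: immunoreactivity of each alanine
# mutant, expressed as a fraction of the wild-type signal.
reactivity = {
    "D52A": {"test_mab": 0.08, "control_mab": 0.95},
    "K53A": {"test_mab": 0.15, "control_mab": 0.90},
    "S54A": {"test_mab": 0.92, "control_mab": 0.97},
    "W61A": {"test_mab": 0.05, "control_mab": 0.12},  # control also lost: likely misfolded
}

LOSS_CUTOFF = 0.30     # assumed: below 30% of wild-type binding counts as "lost"
CONTROL_FLOOR = 0.70   # assumed: the control mAb must retain at least 70% binding

epitope_candidates = [
    mutant for mutant, r in reactivity.items()
    if r["test_mab"] < LOSS_CUTOFF and r["control_mab"] > CONTROL_FLOOR
]
print(epitope_candidates)  # ['D52A', 'K53A']
```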
The residues identified in this way are mapped onto structures of the target protein to visualize the epitope. Benefits of high-throughput shotgun mutagenesis epitope mapping include: 1) the ability to identify both linear and conformational epitopes, 2) a shorter assay time than other methods, 3) the presentation of properly folded and post-translationally modified proteins, and 4) the ability to identify the key amino acids that drive the energetic interactions (the energetic "hot spots" of the epitope). Hydrogen–deuterium exchange (HDX). This method gives information about the solvent accessibility of various parts of the antigen and antibody, demonstrating reduced solvent accessibility in regions of protein-protein interaction. One of its advantages is that it determines the interaction site of the antigen-antibody complex in its native solution, and does not introduce any modifications (e.g. mutations) to either the antigen or the antibody. HDX epitope mapping has also been demonstrated to be an effective method for rapidly supplying complete information on epitope structure. It does not usually provide data at the level of individual amino acids, but this limitation is being addressed by new technological advances. It has recently been recommended as a fast and cost-effective epitope mapping approach, using the complex protein system influenza hemagglutinin as an example. Cross-linking-coupled mass spectrometry. Antibody and antigen are bound to a labeled cross-linker, and complex formation is confirmed by high-mass MALDI detection. The binding location of the antibody on the antigen can then be identified by mass spectrometry (MS). The cross-linked complex is highly stable and can be exposed to various enzymatic and digestion conditions, allowing many different peptide options for detection. MS or MS/MS techniques are used to detect the amino-acid locations of the labelled cross-linkers and the bound peptides (both epitope and paratope are determined in one experiment). The key advantage of this technique is the high sensitivity of MS detection, which means that very little material (hundreds of micrograms or less) is needed. Other methods, such as yeast display, phage display, and limited proteolysis, provide high-throughput monitoring of antibody binding but lack resolution, especially for conformational epitopes. See also Epitope binning References External links Immunologic tests Antigenic determinant
Epitope mapping
Biology
1,933
331,080
https://en.wikipedia.org/wiki/Matryoshka%20doll
Matryoshka dolls, also known as stacking dolls, nesting dolls, Russian tea dolls, or Russian dolls, are a set of wooden dolls of decreasing size placed one inside another. The name Matryoshka is a diminutive form of Matryosha, in turn a hypocoristic of the Russian female first name Matryona (Матрёна). A set of matryoshkas consists of a wooden figure, which separates at the middle, top from bottom, to reveal a smaller figure of the same sort inside, which has, in turn, another figure inside of it, and so on. The first Russian nested doll set was made in 1890 by the wood-turning craftsman and wood carver Vasily Zvyozdochkin from a design by Sergey Malyutin, who was a folk crafts painter at Abramtsevo. Traditionally the outer layer is a woman, dressed in a sarafan, a long and shapeless traditional Russian peasant jumper dress. The figures inside may be of any gender; the smallest, innermost doll is typically a baby turned from a single piece of wood. Much of the artistry is in the painting of each doll, which can be very elaborate. The dolls often follow a theme; the themes may vary, from fairy tale characters to Soviet leaders. In some countries, matryoshka dolls are often referred to as babushka dolls, though they are not known by this name in Russian; babushka (бабушка) means "grandmother". History The first Russian nested doll set was carved in 1890 at the Children's Education Workshop by Vasily Zvyozdochkin and designed by Sergey Malyutin, who was a folk crafts painter in the Abramtsevo estate of Savva Mamontov, a Russian industrialist and patron of the arts. Mamontov's brother, Anatoly Ivanovich Mamontov (1839–1905), created the Children's Education Workshop to make and sell children's toys. The doll set was painted by Malyutin. Malyutin's doll set consisted of eight dolls: the outermost was a mother in a traditional dress holding a red-combed rooster. The inner dolls were her children, girls and a boy, and the innermost a baby. The Children's Education Workshop was closed in the late 1890s, but the tradition of the matryoshka simply relocated to Sergiyev Posad, the Russian city known as a toy-making center since the fourteenth century. The inspiration for matryoshka dolls is not clear. Matryoshka dolls may have been inspired by a nesting doll imported from Japan: the Children's Education workshop, where Zvyozdochkin was a lathe operator, received a five-piece, cylinder-shaped nesting doll featuring Fukuruma (Fukurokuju) in the late 1890s, which is now part of the collection at the Sergiev Posad Museum of Toys. Other East Asian dolls share similarities with matryoshka dolls, such as the Kokeshi dolls originating in northern Honshū, the main island of Japan, although they cannot be placed one inside another, and the round hollow daruma doll depicting a Buddhist monk. Another possible source of inspiration is the nesting Easter eggs produced on a lathe by Russian woodworkers during the late 19th century. Savva Mamontov's wife presented a set of matryoshka dolls at the Exposition Universelle in Paris in 1900, and the toy earned a bronze medal. Soon after, matryoshka dolls were being made in several places in Russia and shipped around the world. Manufacture Centers of Production The first matryoshka dolls were produced in the Children's Education (Detskoye vospitanie) workshop in Moscow. After it closed in 1904, production was transferred to the city of Sergiev Posad (Сергиев Посад), known as Sergiev (Сергиев) from 1919 to 1930 and Zagorsk from 1930 to 1991.
Matryoshka factories were later established in other cities and villages: the village of Polkhovsky Maidan (Полховский-Майдан), the primary producer of matryoshka blanks, and its neighboring villages Krutets (Крутец) and Gorodets (Городец); the city of Semenov (Семёнов); the city of Kirov (Киров), known as Vyatka (Вя́тка) from 1780 until it was renamed Kirov in 1934, although many of its institutions reverted to the name Vyatka in 1991; the city of Nolinsk (Нолинск); and the city of Yoshkar-Ola (Йошкар-Ола) in the Republic of Mari El. Following the collapse of the Soviet Union, the closure of many matryoshka factories, and the loosening of restrictions, independent artists began to produce matryoshka dolls in homes and art studios. Method Ordinarily, matryoshka dolls are crafted from linden wood. There is a popular misconception that they are carved from one piece of wood. Rather, they are produced using: a lathe equipped with a balance bar; four distinct types of long, heavy chisels (hook, knife, pipe, and spoon); and a "set of handmade wooden calipers particular to a size of the doll". The tools are hand-forged by a village blacksmith from car axles or other salvage. A wood carver uniquely crafts each set of wooden calipers. Multiple pieces of wood are meticulously carved into the nesting set. Shape, Size, and Pieces per Set The standard shape approximates a human silhouette, with a flared base on the largest doll for stability. Other shapes include potbelly, cone, bell, egg, bottle, sphere, and cylinder. The size and number of pieces vary widely. The industry standard from the Soviet period, which accounts for approximately 50% of all matryoshka produced, is six inches tall and consists of five dolls, except for matryoshka dolls manufactured in Semenov, whose standard is five inches tall and consists of six pieces. Other common sets are the 3-piece, the 7-piece, and the 10-piece. Common Characteristics Matryoshka dolls painted in the traditional style share common elements. They depict female figures wearing a peasant dress (sarafan) and scarf or shawl, usually with an apron and flowers. Each successively smaller doll is identical or nearly so. Distinctive regional styles developed in the different areas of matryoshka manufacture. Themes in dolls Matryoshka dolls are often designed to follow a particular theme; for instance, peasant girls in traditional dress. Originally, themes were often drawn from tradition or fairy tale characters, in keeping with the craft tradition, but since the late 20th century, they have embraced a larger range, including Russian leaders and popular culture. Common themes of matryoshkas are floral and relate to nature. Often Christmas, Easter, and religion are used as themes for the doll. Modern artists create many new styles of nesting dolls, mostly as an alternative purchase option for tourism. These include animal collections, portraits, and caricatures of famous politicians, musicians, athletes, astronauts, "robots", and popular movie stars. Today, some Russian artists specialize in painting themed matryoshka dolls that feature specific categories of subjects, people, or nature. Areas with notable matryoshka styles include Sergiyev Posad, Semionovo (now the town of Semyonov), Polkhovsky Maidan, and the city of Kirov. Political matryoshkas In the late 1980s and early 1990s during Perestroika, freedom of expression allowed the leaders of the Soviet Union to become a common theme of the matryoshka, with the largest doll featuring then-current leader Mikhail Gorbachev.
These became very popular at the time, affectionately earning the nickname of a "Gorba" or "Gorby", after Gorbachev. With the periodic succession of Russian leadership after the collapse of the Soviet Union, newer versions would start to feature Russian presidents Boris Yeltsin, Vladimir Putin, and Dmitry Medvedev. Most sets feature the current leader as the largest doll, with the predecessors decreasing in size. The remaining smaller dolls may feature other former leaders such as Leonid Brezhnev, Nikita Khrushchev, Joseph Stalin, Vladimir Lenin, and sometimes several historically significant Tsars such as Nicholas II and Peter the Great. Yuri Andropov and Konstantin Chernenko rarely appear, due to their unusually brief tenures. Some less common sets may feature the current leader as the smallest doll, with the predecessors increasing in size, usually with Stalin or Lenin as the largest doll. Some sets that include Yeltsin preceding Gorbachev were made during the brief period between the establishment of the post of President of the RSFSR and the collapse of the Soviet Union, when both Yeltsin and Gorbachev concurrently held prominent government positions. During Medvedev's presidency, Medvedev and Putin might both share the largest doll, owing to Putin's still-prominent role in the government as Prime Minister of Russia. Since Putin's re-election as the fourth President of Russia, Medvedev usually succeeds Yeltsin and precedes Putin in stacking order, with Putin once again the largest doll. Political matryoshkas usually range between five and ten dolls per set. World record The largest set of matryoshka dolls in the world is a 51-piece set hand-painted by Youlia Bereznitskaia of Russia, completed in 2003. As metaphor Nesting and onion metaphors Matryoshkas are also used metaphorically, as a design paradigm, known as the "matryoshka principle" or "nested doll principle". It denotes a recognizable relationship of "object-within-similar-object" that appears in the design of many other natural and crafted objects. Examples of this use include the matrioshka brain, the Matroska media-container format, and the Russian Doll model of multi-walled carbon nanotubes. The onion metaphor is similar: if the outer layer is peeled off an onion, a similar onion exists within. This structure is employed by designers in applications such as the layering of clothes or the design of tables, where a smaller table nests within a larger table, and a smaller one within that. The metaphor of the matryoshka doll (or its onion equivalent) is also used in the description of shell companies and similar corporate structures used in the context of tax-evasion schemes in low-tax jurisdictions (for example, offshore tax havens). It has also been used to describe satellites and suspected weapons in space. Other metaphors Matryoshka is often seen as a symbol of the feminine side of Russian culture. Matryoshka is associated in Russia with family and fertility. Matryoshka is used as the symbol for the epithet Mother Russia. Matryoshka dolls are a traditional representation of the mother carrying a child within her and can be seen as a representation of a chain of mothers carrying on the family legacy through the child in their wombs. Furthermore, matryoshka dolls are used to illustrate the unity of body, soul, mind, heart, and spirit.
As an emoji In 2020, the Unicode Consortium approved the matryoshka doll (🪆) as one of the new emoji characters in release v.13. The matryoshka or nesting doll emoji was submitted to the consortium by Jef Gray and Samantha Sunne, as a non-religious, apolitical symbol of Russian-East European-Far East Asian culture. See also Amish doll Chinese boxes Droste effect Fractal Mise en abyme Infinity Recursion Culture of Russia Self-similarity Shaker-style pantry box Stacking (video game) Turducken Turtles all the way down References External links 1890s toys Culture of Armenia Containers Culture of Georgia (country) Handicrafts Nested containers Products introduced in 1890 Culture of Russia Russian inventions Infinity Recursion Culture of the Soviet Union Traditional dolls Culture of Ukraine Wooden dolls 1890 establishments in the Russian Empire
Matryoshka doll
Mathematics
2,585
49,489,200
https://en.wikipedia.org/wiki/Population%20proportion
In statistics a population proportion, generally denoted by $P$ or the Greek letter $\pi$, is a parameter that describes a percentage value associated with a population. A census can be conducted to determine the actual value of a population parameter, but often a census is not practical due to its cost and time consumption. For example, the 2010 United States Census showed that 83.7% of the American population was identified as not being Hispanic or Latino; the value of .837 is a population proportion. In general, the population proportion and other population parameters are unknown. A population proportion is usually estimated through an unbiased sample statistic obtained from an observational study or experiment, resulting in a sample proportion, generally denoted by $\hat{p}$ and in some textbooks by $\hat{\pi}$. For example, the National Technological Literacy Conference conducted a national survey of 2,000 adults to determine the percentage of adults who are economically illiterate; the study showed that 1,440 out of the 2,000 adults sampled did not understand what a gross domestic product is. The value of 72% (or 1440/2000) is a sample proportion. Mathematical definition A proportion is mathematically defined as the ratio of the quantity of elements (a countable quantity) in a subset to the size of a set: $P = \frac{X}{N}$, where $X$ is the count of successes in the population, and $N$ is the size of the population. This mathematical definition can be generalized to provide the definition for the sample proportion: $\hat{p} = \frac{x}{n}$, where $x$ is the count of successes in the sample, and $n$ is the size of the sample obtained from the population. Estimation One of the main focuses of study in inferential statistics is determining the "true" value of a parameter. Generally the actual value for a parameter will never be found, unless a census is conducted on the population of study. However, there are statistical methods that can be used to get a reasonable estimation for a parameter. These methods include confidence intervals and hypothesis testing. Estimating the value of a population proportion can be of great importance in the areas of agriculture, business, economics, education, engineering, environmental studies, medicine, law, political science, psychology, and sociology. A population proportion can be estimated through the usage of a confidence interval known as the one-sample proportion $z$-interval, whose formula is given below: $\hat{p} \pm z^* \sqrt{\frac{\hat{p}(1-\hat{p})}{n}}$, where $\hat{p}$ is the sample proportion, $n$ is the sample size, and $z^*$ is the upper critical value of the standard normal distribution for a level of confidence $C$. Proof To derive the formula for the one-sample proportion $z$-interval, a sampling distribution of sample proportions needs to be taken into consideration. The mean of the sampling distribution of sample proportions is usually denoted as $\mu_{\hat{p}} = P$ and its standard deviation is denoted as $\sigma_{\hat{p}} = \sqrt{\frac{P(1-P)}{n}}$. Since the value of $P$ is unknown, the unbiased statistic $\hat{p}$ will be used for $P$. The mean and standard deviation are rewritten respectively as $\mu_{\hat{p}} = \hat{p}$ and $\sigma_{\hat{p}} = \sqrt{\frac{\hat{p}(1-\hat{p})}{n}}$. Invoking the central limit theorem, the sampling distribution of sample proportions is approximately normal, provided that the sample is reasonably large and unskewed. Suppose the following probability is calculated: $P\!\left(-z^* < \frac{\hat{p} - P}{\sqrt{\hat{p}(1-\hat{p})/n}} < z^*\right) = C$, where $-z^*$ and $z^*$ are the standard critical values. The inequality can be algebraically re-written as follows: $\hat{p} - z^*\sqrt{\frac{\hat{p}(1-\hat{p})}{n}} < P < \hat{p} + z^*\sqrt{\frac{\hat{p}(1-\hat{p})}{n}}$. From the algebraic work done above, it follows with level of certainty $C$ that $P$ could fall between these two bounds.
Conditions for inference In general, the formula used for estimating a population proportion requires substitutions of known numerical values. However, these numerical values cannot be "blindly" substituted into the formula, because statistical inference requires that the estimation of an unknown parameter be justifiable. For a parameter's estimation to be justifiable, there are three conditions that need to be verified: The data's individual observations have to be obtained from a simple random sample of the population of interest. The data's individual observations have to display normality. This can be assumed mathematically with the following definition: let $n$ be the sample size of a given random sample and let $\hat{p}$ be its sample proportion. If $n\hat{p} \geq 10$ and $n(1-\hat{p}) \geq 10$, then the data's individual observations display normality. The data's individual observations have to be independent of each other. This can be assumed mathematically with the following definition: let $N$ be the size of the population of interest and let $n$ be the sample size of a simple random sample of the population. If $N \geq 10n$, then the data's individual observations are independent of each other. The conditions for SRS, normality, and independence are sometimes referred to as the conditions for the inference tool box in most statistical textbooks. For a more detailed look at cases where this simplification is not used, see the Jeffreys interval (https://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval#Jeffreys_interval). Example Suppose a presidential election is taking place in a democracy. A random sample of 400 eligible voters in the democracy's voter population shows that 272 voters support candidate B. A political scientist wants to determine what percentage of the voter population supports candidate B. To answer the political scientist's question, a one-sample proportion $z$-interval with a confidence level of 95% can be constructed in order to determine the population proportion of eligible voters in this democracy that support candidate B. Solution It is known from the random sample that $\hat{p} = \frac{272}{400} = 0.68$ with sample size $n = 400$. Before a confidence interval is constructed, the conditions for inference will be verified. Since a random sample of 400 voters was obtained from the voting population, the condition for a simple random sample has been met. With $n = 400$ and $\hat{p} = 0.68$, it is checked whether $n\hat{p} \geq 10$ and $n(1-\hat{p}) \geq 10$: indeed $400(0.68) = 272 \geq 10$ and $400(0.32) = 128 \geq 10$, so the condition for normality has been met. Let $N$ be the size of the voter population in this democracy; if $N \geq 10n = 4000$, then there is independence. The population size for this democracy's voters can be assumed to be at least 4,000, hence the condition for independence has been met. With the conditions for inference verified, it is permissible to construct a confidence interval. Let $C = 0.95$. To solve for $z^*$, the expression $P(Z \leq z^*) = 0.9750$ is used. By examining a standard normal bell curve, the value for $z^*$ can be determined by identifying which standard score gives the standard normal curve an upper tail area of 0.0250, or equivalently an area of 1 − 0.0250 = 0.9750. The value for $z^*$ can also be found through a table of standard normal probabilities; from such a table, the value of $z$ that gives an area of 0.9750 is 1.96. Hence, the value for $z^*$ is 1.96.
The values for $\hat{p} = 0.68$, $z^* = 1.96$, and $n = 400$ can now be substituted into the formula for the one-sample proportion $z$-interval: $0.68 \pm 1.96\sqrt{\frac{0.68(1-0.68)}{400}} = 0.68 \pm 0.0457$, giving the interval $(0.63429, 0.72571)$. Based on the conditions of inference and the formula for the one-sample proportion $z$-interval, it can be concluded with a 95% confidence level that the percentage of the voter population in this democracy supporting candidate B is between 63.429% and 72.571%. Value of the parameter in the confidence interval range A commonly asked question in inferential statistics is whether the parameter is included within a confidence interval. The only way to answer this question is for a census to be conducted. Referring to the example given above, the probability that the population proportion is in the range of the confidence interval is either 1 or 0; that is, the parameter either is included in the interval range or it is not. The main purpose of a confidence interval is to better illustrate what the ideal value for a parameter could possibly be. Common errors and misinterpretations from estimation A very common error that arises from the construction of a confidence interval is the belief that the level of confidence, such as $C = 95\%$, means a 95% chance that the parameter lies in the interval. This is incorrect. The level of confidence is based on a measure of certainty, not probability. Hence, the values of $C$ fall between 0 and 1, exclusively. Estimation of P using ranked set sampling A more precise estimate of $P$ can be obtained by choosing ranked set sampling instead of simple random sampling. See also Binomial proportion confidence interval Confidence interval Prevalence Statistical hypothesis testing Statistical inference Statistical parameter Tolerance interval References Ratios
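The worked example above is straightforward to reproduce programmatically. The following Python sketch uses only the standard library; the function name is illustrative, and the exact critical value from inv_cdf (about 1.95996) is used in place of the rounded table value 1.96, so the endpoints agree with the hand calculation to the displayed precision:

```python
from statistics import NormalDist

def proportion_z_interval(successes, n, confidence=0.95):
    """One-sample proportion z-interval: p_hat +/- z* sqrt(p_hat(1 - p_hat)/n)."""
    p_hat = successes / n
    # Verify the normality condition before trusting the normal approximation.
    assert n * p_hat >= 10 and n * (1 - p_hat) >= 10, "normality condition fails"
    z_star = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # about 1.96 for 95%
    margin = z_star * (p_hat * (1 - p_hat) / n) ** 0.5
    return p_hat - margin, p_hat + margin

low, high = proportion_z_interval(272, 400)   # 272 of 400 voters support candidate B
print(f"({low:.5f}, {high:.5f})")             # (0.63429, 0.72571)
```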
Population proportion
Mathematics
1,597
11,421,361
https://en.wikipedia.org/wiki/Potato%20virus%20X%20cis-acting%20regulatory%20element
The Potato virus X cis-acting regulatory element is a cis-acting regulatory element found in the 3' UTR of the Potato virus X genome. This element has been found to be required for minus strand RNA accumulation and is essential for efficient viral replication. See also Poxvirus AX element late mRNA cis-regulatory element References External links Cis-regulatory RNA elements
Potato virus X cis-acting regulatory element
Chemistry
74
580,972
https://en.wikipedia.org/wiki/%CE%91-Carotene
α-Carotene (alpha-carotene) is a form of carotene with a β-ionone ring at one end and an α-ionone ring at the opposite end. It is the second most common form of carotene. Human physiology In American and Chinese adults, the mean concentration of serum α-carotene was 4.71 μg/dL, including 4.22 μg/dL among men and 5.31 μg/dL among women. Dietary sources The following vegetables are rich in alpha-carotene: Yellow-orange vegetables: Carrots (the main source for U.S. adults), sweet potatoes, pumpkin, winter squash Dark-green vegetables: Broccoli, green beans, green peas, spinach, turnip greens, collards, leaf lettuce, avocado Research A 2018 meta-analysis found that both dietary and circulating α-carotene are associated with a lower risk of all-cause mortality. The highest circulating α-carotene category, compared to the lowest, correlated with a 32% reduction in the risk of all-cause mortality, while increased dietary α-carotene intake was linked to a 21% decrease in the risk of all-cause mortality. References Carotenoids Tetraterpenes Cyclohexenes Vitamin A
Α-Carotene
Chemistry,Biology
277
32,028,466
https://en.wikipedia.org/wiki/List%20of%20Vocaloid%20products
The following is a list of products released for the Vocaloid software in order of release date. Products Vocaloid Vocaloid 2 VocaloWitter iVocaloid eVocaloid Vocaloid 3 Vocaloid 4 Vocaloid 5 Vocaloid 6 Mobile Vocaloid Editor Vocaloid Neo Commercially unreleased Notes References External links English product lineup Vocaloid products Vocaloid products Vocaloid
List of Vocaloid products
Technology
73
17,862,076
https://en.wikipedia.org/wiki/Phosphate%20test
A range of qualitative and quantitative tests have been developed to detect phosphate ions (PO₄³⁻) in solution. Such tests find use in industrial processes, scientific research, and environmental water monitoring. Quantitative method A quantitative method to determine the amount of phosphate present in samples, such as boiler feedwater, is as follows. A measured amount of boiler water is poured into a mixing tube and ammonium heptamolybdate reagent is added. The tube is then stoppered and vigorously shaken. The next step is to add dilute stannous chloride reagent, freshly prepared from concentrated stannous chloride reagent and distilled water, to the mixture in the tube. This produces a blue colour (due to the formation of molybdenum blue), and the depth of the blue colour indicates the amount of phosphate in the boiler water. The absorbance of the blue solution can be measured with a colorimeter, and the concentration of phosphate in the original solution can then be calculated. Alternatively, a direct (but approximate) reading of phosphate concentration can be obtained by using a Lovibond comparator. This method for phosphate determination is known as Denigès' method. Qualitative method A simple qualitative method to determine the presence of phosphate ions in a sample is as follows. A small amount of the sample is acidified with concentrated nitric acid, to which a little ammonium molybdate is added. The presence of phosphate ions is indicated by the formation of a bright yellow precipitate layer of ammonium phosphomolybdate. The appearance of the precipitate can be facilitated by gentle heating. This test is also used to detect arsenic, a yellow precipitate likewise being formed. See also Phosphate analysis References Chemical tests
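In practice, the colorimeter step amounts to fitting a Beer–Lambert calibration line from standards of known concentration and inverting it for the unknown sample. The brief Python sketch below is illustrative only; the standard concentrations and absorbance values are invented, not taken from any standard method:

```python
import numpy as np

# Hypothetical calibration standards: phosphate concentration (mg/L) versus
# absorbance of the molybdenum-blue complex at a fixed wavelength.
conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
absorbance = np.array([0.002, 0.110, 0.215, 0.430, 0.862])

# Beer-Lambert behaviour is close to linear at low concentration: A = m*c + b.
m, b = np.polyfit(conc, absorbance, 1)

def phosphate_mg_per_l(sample_absorbance):
    """Invert the calibration line to estimate phosphate concentration."""
    return (sample_absorbance - b) / m

print(f"{phosphate_mg_per_l(0.300):.2f} mg/L")  # a sample read against the curve
```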
Phosphate test
Chemistry
368