Columns: id (int64, 580 to 79M) · url (string, 31–175 chars) · text (string, 9–245k chars) · source (string, 1–109 chars) · categories (string, 160 classes) · token_count (int64, 3 to 51.8k)
4,055,891
https://en.wikipedia.org/wiki/High-energy%20X-rays
High-energy X-rays or HEX-rays are very hard X-rays, with typical energies of 80–1000 keV (1 MeV), about one order of magnitude higher than conventional X-rays used for X-ray crystallography (and well into gamma-ray energies over 120 keV). They are produced at modern synchrotron radiation sources such as the Cornell High Energy Synchrotron Source, SPring-8, and the beamlines ID15 and BM18 at the European Synchrotron Radiation Facility (ESRF). Their main benefit is deep penetration into matter, which makes them a probe for thick samples in physics and materials science and permits in-air sample environments and operation. Scattering angles are small, and the forward-directed diffraction allows for simple detector setups. High-energy (megavolt) X-rays are also used in cancer therapy, using beams generated by linear accelerators to suppress tumors. Advantages High-energy X-rays (HEX-rays) between 100 and 300 keV offer unique advantages over conventional hard X-rays, which lie in the range of 5–20 keV. They can be listed as follows: High penetration into materials due to a strongly reduced photo-absorption cross section. The photo-absorption strongly depends on the atomic number of the material and the X-ray energy. Volumes several centimeters thick can be accessed in steel, and millimeters in lead-containing samples (a numerical attenuation example follows this entry). No radiation damage to the sample; such damage can pin incommensurations or destroy the chemical compound to be analyzed. The Ewald sphere has a curvature ten times smaller than in the low-energy case and allows whole regions of the reciprocal lattice to be mapped, similar to electron diffraction. Access to diffuse scattering. This is absorption- rather than extinction-limited at low energies, while the scattering volume is enhanced at high energies. Complete 3D maps over several Brillouin zones can be easily obtained. High momentum transfers are naturally accessible due to the high momentum of the incident wave. This is of particular importance for studies of liquid, amorphous and nanocrystalline materials as well as pair distribution function analysis. Realization of the materials oscilloscope. Simple diffraction setups due to operation in air. Diffraction in the forward direction for easy registration with a 2D detector. Forward scattering and penetration make sample environments easy and straightforward. Negligible polarization effects due to relatively small scattering angles. Special non-resonant magnetic scattering. LLL interferometry. Access to high-energy spectroscopic levels, both electronic and nuclear. Neutron-like, but complementary studies combined with high-precision spatial resolution. Cross sections for Compton scattering are similar to coherent scattering or absorption cross sections. Applications With these advantages, HEX-rays can be applied to a wide range of investigations. An overview, which is far from complete: Structural investigations of real materials, such as metals, ceramics, and liquids. In particular, in-situ studies of phase transitions at elevated temperatures up to the melt of any metal. Phase transitions, recovery, chemical segregation, recrystallization, twinning and domain formation are a few aspects to follow in a single experiment. Materials in chemical or operating environments, such as electrodes in batteries, fuel cells, high-temperature reactors, electrolytes, etc. The penetration and a well-collimated pencil beam allow focusing on the region and material of interest while it undergoes a chemical reaction. 
Study of 'thick' layers, such as oxidation of steel in its production and rolling process, which are too thick for classical reflectometry experiments. Interfaces and layers in complicated environments, such as the intermetallic reaction of Zincalume surface coating on industrial steel in the liquid bath. In-situ studies of industry-like strip casting processes for light metals: a casting setup can be installed on a beamline and probed with the HEX-ray beam in real time. Bulk studies in single crystals, which differ from studies in surface-near regions limited by the penetration of conventional X-rays; it has been found and confirmed in almost all studies that critical scattering and correlation lengths are strongly affected by this effect. Combination of neutron and HEX-ray investigations on the same sample, such as contrast variations due to the different scattering lengths. Residual stress analysis in the bulk with unique spatial resolution in centimeter-thick samples, in situ under realistic load conditions. In-situ studies of thermo-mechanical deformation processes such as forging, rolling, and extrusion of metals. Real-time texture measurements in the bulk during a deformation, phase transition or annealing, such as in metal processing. Structures and textures of thick geological samples, which may contain heavy elements. High-resolution triple-crystal diffraction for the investigation of single crystals, with all the advantages of high penetration and studies from the bulk. Compton spectroscopy for the investigation of the momentum distribution of the valence electron shells. Imaging and tomography with high energies: dedicated sources can be strong enough to obtain 3D tomograms in a few seconds, and the combination of imaging and diffraction is possible due to simple geometries, for example tomography combined with residual stress measurement or structural analysis. See also Bremsstrahlung Cyclotron radiation Electromagnetic radiation Electron–positron annihilation Gamma ray Gamma-ray generation Ionization Synchrotron light source Synchrotron radiation X-radiation X-ray fluorescence X-ray generator X-ray tube References Further reading External links Applied and interdisciplinary physics Gamma rays Materials testing Synchrotron radiation Synchrotron-related techniques X-rays
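The penetration advantage described above follows from Beer–Lambert attenuation, $I = I_0 e^{-\mu x}$. The following minimal Python sketch, not part of the article, compares transmission through 1 cm of iron at a conventional and a high X-ray energy; the mass attenuation coefficients are rough illustrative values of the kind tabulated by NIST, not authoritative data.

```python
import math

# Beer-Lambert law: I/I0 = exp(-mu * x), where mu = (mu/rho) * rho.
# Illustrative mass attenuation coefficients for iron (approximate
# order-of-magnitude values; consult NIST XCOM tables for real work).
MU_RHO_FE = {20e3: 25.7, 100e3: 0.37}  # cm^2/g at 20 keV and 100 keV
RHO_FE = 7.87  # g/cm^3

def transmission(energy_eV, thickness_cm):
    """Fraction of photons transmitted through iron of given thickness."""
    mu = MU_RHO_FE[energy_eV] * RHO_FE  # linear attenuation coefficient, 1/cm
    return math.exp(-mu * thickness_cm)

for E in (20e3, 100e3):
    print(f"{E/1e3:.0f} keV through 1 cm of iron: {transmission(E, 1.0):.2e}")
# Effectively opaque (~1e-88) at 20 keV versus ~5% transmission at 100 keV,
# which is why centimeter-thick steel samples become accessible to HEX-rays.
```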
High-energy X-rays
Physics,Materials_science,Engineering
1,144
40,460,770
https://en.wikipedia.org/wiki/NGC%206134
NGC 6134 is an open cluster in the constellation Norma. It was discovered by James Dunlop in 1826. References Open clusters Norma (constellation) 6134
NGC 6134
Astronomy
32
24,146,759
https://en.wikipedia.org/wiki/C29H35NO2
The molecular formula C29H35NO2 may refer to: Mifepristone, a medication typically used in combination to bring about an abortion during pregnancy Miproxifene, a nonsteroidal selective estrogen receptor modulator
C29H35NO2
Chemistry
65
8,266,376
https://en.wikipedia.org/wiki/CCL3
Chemokine (C-C motif) ligand 3 (CCL3), also known as macrophage inflammatory protein 1-alpha (MIP-1-alpha), is a protein that in humans is encoded by the CCL3 gene. Function CCL3 is a cytokine belonging to the CC chemokine family that is involved in the acute inflammatory state, recruiting and activating polymorphonuclear leukocytes through binding to the receptors CCR1, CCR4 and CCR5. Sherry et al. (1988) demonstrated two protein components of MIP-1, which they called alpha (CCL3, this protein) and beta (CCL4). CCL3 produces a monophasic fever of rapid onset whose magnitude is equal to or greater than that of fevers produced with either recombinant human tumor necrosis factor or recombinant human interleukin-1. However, in contrast to these two endogenous pyrogens, the fever induced by MIP-1 is not inhibited by the cyclooxygenase inhibitor ibuprofen, so CCL3 may participate in a febrile response that is not mediated through prostaglandin synthesis and clinically cannot be ablated by cyclooxygenase inhibitors. Interactions CCL3 has been shown to interact with CCL4. It attracts macrophages, monocytes and neutrophils. See also Macrophage inflammatory proteins References External links Further reading Cytokines
CCL3
Chemistry
316
5,779,282
https://en.wikipedia.org/wiki/Projected%20dynamical%20system
Projected dynamical systems is a mathematical theory investigating the behaviour of dynamical systems where solutions are restricted to a constraint set. The discipline shares connections to and applications with both the static world of optimization and equilibrium problems and the dynamical world of ordinary differential equations. A projected dynamical system is given by the flow of the projected differential equation $\frac{dx(t)}{dt} = \Pi_K(x(t), -F(x(t)))$, where $K$ is our constraint set. Differential equations of this form are notable for having a discontinuous vector field. History of projected dynamical systems Projected dynamical systems have evolved out of the desire to dynamically model the behaviour of nonstatic solutions in equilibrium problems over some parameter, typically taken to be time. Their dynamics differ from those of ordinary differential equations in that solutions are still restricted to whatever constraint set the underlying equilibrium problem was working on, e.g. nonnegativity of investments in financial modeling, convex polyhedral sets in operations research, etc. One particularly important class of equilibrium problems which has aided in the rise of projected dynamical systems has been that of variational inequalities. The formalization of projected dynamical systems began in the 1990s in Section 5.3 of the paper of Dupuis and Ishii. However, similar concepts can be found in the mathematical literature which predate this, especially in connection with variational inequalities and differential inclusions. Projections and Cones Any solution to our projected differential equation must remain inside the constraint set $K$ for all time. This desired result is achieved through the use of projection operators and two particularly important classes of convex cones. Here we take $K$ to be a closed, convex subset of some Hilbert space $X$. The normal cone to the set $K$ at the point $x$ in $K$ is given by $N_K(x) = \{ p \in X \mid \langle p, x - x^{*} \rangle \geq 0, \ \forall x^{*} \in K \}$. The tangent cone (or contingent cone) to the set $K$ at the point $x$ is given by $T_K(x) = \overline{\bigcup_{h > 0} \tfrac{1}{h}(K - x)}$. The projection operator (or closest element mapping) of a point $x$ in $X$ to $K$ is given by the point $P_K(x)$ in $K$ such that $\| x - P_K(x) \| \leq \| x - y \|$ for every $y$ in $K$. The vector projection operator of a vector $v$ in $X$ at a point $x$ in $K$ is given by $\Pi_K(x, v) = \lim_{\delta \to 0^{+}} \frac{P_K(x + \delta v) - x}{\delta}$, which is just the Gateaux derivative of the projection operator computed in the direction of the vector field. Projected Differential Equations Given a closed, convex subset $K$ of a Hilbert space $X$ and a vector field $-F$ which takes elements from $K$ into $X$, the projected differential equation associated with $K$ and $-F$ is defined to be $\frac{dx(t)}{dt} = \Pi_K(x(t), -F(x(t)))$. On the interior of $K$, solutions behave as they would if the system were an unconstrained ordinary differential equation. However, since the vector field is discontinuous along the boundary of the set, projected differential equations belong to the class of discontinuous ordinary differential equations. While this makes much of ordinary differential equation theory inapplicable, it is known that when $-F$ is a Lipschitz continuous vector field, a unique absolutely continuous solution exists through each initial point $x(0) = x_0$ in $K$ on the interval $[0, \infty)$. This differential equation can be alternately characterized by $\frac{dx(t)}{dt} = P_{T_K(x(t))}(-F(x(t)))$ or by the differential inclusion $\frac{dx(t)}{dt} \in -F(x(t)) - N_K(x(t))$. The convention of denoting the vector field $-F$ with a negative sign arises from a particular connection projected dynamical systems share with variational inequalities. The convention in the literature is to refer to the vector field as positive in the variational inequality, and negative in the corresponding projected dynamical system. 
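As a minimal numerical sketch of the projected differential equation defined above, the following Python code implements a projected Euler scheme on a box constraint set, where the closest-point projection $P_K$ reduces to coordinate-wise clipping. The function names and the test problem are illustrative, not from the article.

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection P_K onto the box K = [lo, hi]^n (closed and convex)."""
    return np.clip(x, lo, hi)

def projected_euler(F, x0, lo, hi, dt=1e-2, steps=2000):
    """Forward-Euler discretization of dx/dt = Pi_K(x, -F(x)):
    take an unconstrained Euler step along -F, then project back onto K."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = project_box(x - dt * F(x), lo, hi)
    return x

# Example: F(x) = x - a drives the unconstrained flow toward a,
# but K = [0, 1]^2 constrains it; the equilibrium is P_K(a) = [1, 0].
a = np.array([1.5, -0.5])
F = lambda x: x - a
print(projected_euler(F, x0=[0.5, 0.5], lo=0.0, hi=1.0))  # -> approx [1. 0.]
```

The printed equilibrium solves the associated variational inequality $\langle F(x^{*}), y - x^{*} \rangle \geq 0$ for all $y \in K$, illustrating the connection to variational inequalities mentioned in the text.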
See also Differential variational inequality Dynamical systems theory Ordinary differential equation Variational inequality Differential inclusion Complementarity theory References Henry, C., "Differential equations with discontinuous right-hand side for planning procedures", J. Econom. Theory, 4:545–551, 1972. Henry, C., "An existence theorem for a class of differential equations with multivalued right-hand side", J. Math. Anal. Appl., 41:179–186, 1973. Aubin, J.P. and Cellina, A., Differential Inclusions, Springer-Verlag, Berlin (1984). Dupuis, P. and Ishii, H., "On Lipschitz continuity of the solution mapping to the Skorokhod Problem, with applications", Stochastics and Stochastics Reports, 35:31–62 (1991). Nagurney, A. and Zhang, D., Projected Dynamical Systems and Variational Inequalities with Applications, Kluwer Academic Publishers (1996). Cojocaru, M. and Jonker, L., "Existence of solutions to projected differential equations on Hilbert spaces", Proc. Amer. Math. Soc., 132(1):183–193 (2004). Brogliato, B., Daniilidis, A., Lemaréchal, C. and Acary, V., "On the equivalence between complementarity systems, projected systems and differential inclusions", Systems and Control Letters, 55:45–51 (2006). Differential equations Dynamical systems
Projected dynamical system
Physics,Mathematics
995
12,785,273
https://en.wikipedia.org/wiki/Nik%20Szymanek
Nicholas Szymanek, better known as Nik Szymanek, is a British amateur astronomer and prolific astrophotographer, based in Essex, England. Originally a train driver on the London Underground, he developed an interest in astronomical CCD imaging shortly before 1991; his interest in this kind of observational astronomy grew after he met Ian King, a fellow amateur astronomer from the local Havering Astronomical Society, in 1991. Szymanek is best known for his deep-sky CCD images and his contributions to education and public outreach surrounding amateur astronomy. He collaborates with professional astronomers and works with research-grade telescopes located at La Palma in the Canary Islands and at the Mauna Kea Observatories in the Hawaiian Islands. He publishes his pictures in astronomical magazines and has written a book on astrophotography called Infinity Rising. For his imaging and image-processing accomplishments, Szymanek was awarded the Amateur Achievement Award of the Astronomical Society of the Pacific in 2004. Szymanek is also the drummer for the UK neo-progressive band Trilogy. Gallery References External links CCDLand – Nik Szymanek's web page 20th-century British astronomers Amateur astronomers Astrophotographers 21st-century British astronomers Living people Year of birth missing (living people)
Nik Szymanek
Astronomy
265
17,732,810
https://en.wikipedia.org/wiki/Berkefeld%20filter
A Berkefeld filter is a water filter made of diatomaceous earth (Kieselgur). It was invented in Germany in 1891, and by 1922 was being marketed in the United Kingdom by the Berkefeld Filter Co. Berkefeld was the name of the owner of the mine in Hanover, Germany, where the ceramic material was obtained. The Berkefeld is a good bacterial water filter used in microbiological laboratories, in homes, and out in the field. Design The filter housing consists of two metal or plastic cylinders sitting one on top of the other. The upper one has a lid and can be filled with impure water. In the bottom of the upper cylinder are one or more holes fitted with diatomaceous earth (Kieselgur) filter columns (filter candles). The water is forced through the filters by gravity, and then trickles down to the lower cylinder, where it is stored and tapped off as required. Some types of filters are fitted with a carbon core to act as a deodorizing adsorbent. They may also be impregnated with silver to inhibit bacterial growth. Some types, depending on their grade of porosity, also remove certain microscopic fungi and particulate matter. The filters without silver impregnation are sterilized by autoclaving or by steam sterilizer after a thorough cleaning. Berkey Filters New Millennium Concepts (NMC) Ltd. received a license to distribute British Berkefeld filters in North America. However, in 2003 it developed its own purification element, called the "Black Berkey," which is used in "Berkey" water filters. NMC's Black Berkey purification elements employ a mix of six different filtration media, whereas the British Berkefeld water filter's purification elements were composed primarily of diatomaceous earth. Rainfresh Ceramic Filters In the mid-1990s, Envirogard Products Limited in Canada developed its own proprietary version of the Berkefeld ceramic filter, marketed under the brand Rainfresh. These ceramic filters also utilize diatomaceous earth, but include a unique blend of other materials that results in a 0.3-micron absolute filter providing >7-log reduction of pathogenic bacteria. Types The filters are classified according to the diameter of the pores in the ceramic material: V (Viel) - Coarsest pores N (Normal) - Intermediate-sized pores W (Wenig) - Finest pores Usefulness The Berkefeld is a cheap, portable and efficient bacterial filter in general, though it does not remove viruses such as hepatitis A and some bacteria such as mycoplasma. Some companies claim that their filters remove 100% of particles above 0.9 micrometres and 98% of particles above 0.5 micrometres in diameter. These are very durable filters, and the filter elements may be cleaned over 100 times before requiring replacement. Some of the first Berkefeld filters were used during the 1892 cholera epidemic in Hamburg. References Water filters Microbiology equipment
Berkefeld filter
Chemistry,Biology
630
6,732,326
https://en.wikipedia.org/wiki/Cn3D
Cn3D is a Windows, Macintosh and Unix-based software application from the United States National Library of Medicine that acts as a helper application for web browsers to view three-dimensional structures from the National Center for Biotechnology Information's Entrez retrieval service. It "simultaneously displays structure, sequence, and alignment, and now has powerful annotation and alignment editing features", according to its official site. Cn3D is in the public domain, with source code available. The latest version of the software, 4.3.1, was released on 6 December 2013. This version has the ability to view superpositions of 3D structures with similar biological units and an enhanced version of the Vector Alignment Search Tool (VAST). See also List of molecular graphics systems Molecular graphics List of software for molecular mechanics modeling References External links Cn3D Home Page source code tarball of NCBI C++ toolkit which includes Cn3D Bioinformatics software Molecular modelling software Free software programmed in C++ Free science software Windows multimedia software MacOS multimedia software Science software for Linux Unix Internet software Software that uses wxWidgets
Cn3D
Chemistry,Biology
221
11,330,158
https://en.wikipedia.org/wiki/N%286%29-Carboxymethyllysine
N(6)-Carboxymethyllysine (CML), also known as Nε-(carboxymethyl)lysine, is an advanced glycation end-product (AGE). CML has been the most widely used marker for AGEs in food analysis. Recently, it has been demonstrated that the gut microbiota mediates an aging-associated decline in gut barrier function, allowing AGEs to leak into the bloodstream from the gut and impair microglial function in the brain. It has been suggested that the amount of CML in human blood samples may correlate with age. A humanized monoclonal antibody which binds to N(6)-carboxymethyllysine shows considerable promise as a possible therapeutic agent for treating pancreatic cancer. References Alpha-Amino acids Amino acid derivatives Advanced glycation end-products
N(6)-Carboxymethyllysine
Chemistry,Biology
179
48,366
https://en.wikipedia.org/wiki/Polyurethane
Polyurethane (often abbreviated PUR and PU) refers to a class of polymers composed of organic units joined by carbamate (urethane) links. In contrast to other common polymers such as polyethylene and polystyrene, the term polyurethane does not refer to a single type of polymer but to a group of polymers: unlike polyethylene and polystyrene, polyurethanes can be produced from a wide range of starting materials, resulting in various polymers within the same group. This chemical variety produces polyurethanes with different chemical structures, leading to many different applications. These include rigid and flexible foams, coatings, adhesives, electrical potting compounds, and fibers such as spandex and polyurethane laminate (PUL). Foams are the largest application, accounting for 67% of all polyurethane produced in 2016. A polyurethane is typically produced by reacting a polymeric isocyanate with a polyol. Since a polyurethane contains two types of monomers, which polymerize one after the other, polyurethanes are classed as alternating copolymers. Both the isocyanates and polyols used to make a polyurethane contain two or more functional groups per molecule. Global production in 2019 was 25 million metric tonnes, accounting for about 6% of all polymers produced in that year. History Otto Bayer and his coworkers at IG Farben in Leverkusen, Germany, first made polyurethanes in 1937. The new polymers had some advantages over existing plastics that were made by polymerizing olefins or by polycondensation, and were not covered by patents obtained by Wallace Carothers on polyesters. Early work focused on the production of fibers and flexible foams, and PUs were applied on a limited scale as aircraft coatings during World War II. Polyisocyanates became commercially available in 1952, and production of flexible polyurethane foam began in 1954 by combining toluene diisocyanate (TDI) and polyester polyols. These materials were also used to produce rigid foams, gum rubber, and elastomers. Linear fibers were produced from hexamethylene diisocyanate (HDI) and 1,4-butanediol (BDO). DuPont introduced polyethers, specifically poly(tetramethylene ether) glycol, in 1956. BASF and Dow Chemical introduced polyalkylene glycols in 1957. Polyether polyols were cheaper, easier to handle and more water-resistant than polyester polyols. Union Carbide and Mobay, a U.S. Monsanto/Bayer joint venture, also began making polyurethane chemicals. In 1960 more than 45,000 metric tons of flexible polyurethane foams were produced. The availability of chlorofluoroalkane blowing agents, inexpensive polyether polyols, and methylene diphenyl diisocyanate (MDI) allowed polyurethane rigid foams to be used as high-performance insulation materials. In 1967, urethane-modified polyisocyanurate rigid foams were introduced, offering even better thermal stability and flammability resistance. During the 1960s, automotive interior safety components, such as instrument and door panels, were produced by back-filling thermoplastic skins with semi-rigid foam. In 1969, Bayer exhibited an all-plastic car in Düsseldorf, Germany. Parts of this car, such as the fascia and body panels, were manufactured using a new process called reaction injection molding (RIM), in which the reactants were mixed and then injected into a mold. 
The addition of fillers, such as milled glass, mica, and processed mineral fibers, gave rise to reinforced RIM (RRIM), which provided improvements in flexural modulus (stiffness), reduction in coefficient of thermal expansion and better thermal stability. This technology was used to make the first plastic-body automobile in the United States, the Pontiac Fiero, in 1983. Further increases in stiffness were obtained by incorporating pre-placed glass mats into the RIM mold cavity, also known broadly as resin injection molding, or structural RIM. Starting in the early 1980s, water-blown microcellular flexible foams were used to mold gaskets for automotive panels and air-filter seals, replacing PVC polymers. Polyurethane foams are used in many automotive applications including seating, head and arm rests, and headliners. Polyurethane foam (including foam rubber) is sometimes made using small amounts of blowing agents to give less dense foam, better cushioning/energy absorption or thermal insulation. In the early 1990s, because of their impact on ozone depletion, the Montreal Protocol restricted the use of many chlorine-containing blowing agents, such as trichlorofluoromethane (CFC-11). By the late 1990s, blowing agents such as carbon dioxide, pentane, 1,1,1,2-tetrafluoroethane (HFC-134a) and 1,1,1,3,3-pentafluoropropane (HFC-245fa) were widely used in North America and the EU, although chlorinated blowing agents remained in use in many developing countries. Later, HFC-134a was also restricted due to its high global warming potential, and HCFC-141b was introduced in the early 2000s as an alternate blowing agent in developing nations. Chemistry Polyurethanes are produced by reacting diisocyanates with polyols, often in the presence of a catalyst, or upon exposure to ultraviolet radiation. Common catalysts include tertiary amines, such as DABCO, DMDEE, or metallic soaps, such as dibutyltin dilaurate. The stoichiometry of the starting materials must be carefully controlled, as excess isocyanate can trimerise, leading to the formation of rigid polyisocyanurates. The polymer usually has a highly crosslinked molecular structure, resulting in a thermosetting material which does not melt on heating, although some thermoplastic polyurethanes are also produced. The most common application of polyurethane is as solid foams, which requires the presence of a gas, or blowing agent, during the polymerization step. This is commonly achieved by adding small amounts of water, which reacts with isocyanates to form CO2 gas and an amine, via an unstable carbamic acid group. The amine produced can also react with isocyanates to form urea groups, and as such the polymer will contain both these and urethane linkers. The urea is not very soluble in the reaction mixture and tends to form separate "hard segment" phases consisting mostly of polyurea. The concentration and organization of these polyurea phases can have a significant impact on the properties of the foam. The type of foam produced can be controlled by regulating the amount of blowing agent and also by the addition of various surfactants which change the rheology of the polymerising mixture. Foams can be either "closed-cell", where most of the original bubbles or cells remain intact, or "open-cell", where the bubbles have broken but the edges of the bubbles are stiff enough to retain their shape; in extreme cases reticulated foams can be formed. Open-cell foams feel soft and allow air to flow through, so they are comfortable when used in seat cushions or mattresses. 
Closed-cell foams are used as rigid thermal insulation. High-density microcellular foams can be formed without the addition of blowing agents by mechanically frothing the polyol prior to use. These are tough elastomeric materials used in covering car steering wheels or shoe soles. The properties of a polyurethane are greatly influenced by the types of isocyanates and polyols used to make it. Long, flexible segments, contributed by the polyol, give a soft, elastic polymer. High amounts of crosslinking give tough or rigid polymers. Long chains and low crosslinking give a polymer that is very stretchy; short chains with many crosslinks produce a hard polymer, while long chains and intermediate crosslinking give a polymer useful for making foam. The choices available for the isocyanates and polyols, in addition to other additives and processing conditions, allow polyurethanes to have the very wide range of properties that make them such widely used polymers. Raw materials The main ingredients to make a polyurethane are di- and tri-isocyanates and polyols. Other materials are added to aid processing the polymer or to modify its properties. PU foam formulations sometimes also include water. Isocyanates Isocyanates used to make polyurethane have two or more isocyanate groups on each molecule. The most commonly used isocyanates are the aromatic diisocyanates, toluene diisocyanate (TDI) and methylene diphenyl diisocyanate (MDI). These aromatic isocyanates are more reactive than aliphatic isocyanates. TDI and MDI are generally less expensive and more reactive than other isocyanates. Industrial grade TDI and MDI are mixtures of isomers, and MDI often contains polymeric materials. They are used to make flexible foam (for example, slabstock foam for mattresses or molded foams for car seats), rigid foam (for example, insulating foam in refrigerators), elastomers (shoe soles, for example), and so on. The isocyanates may be modified by partially reacting them with polyols or introducing some other materials to reduce the volatility (and hence toxicity) of the isocyanates, decrease their freezing points to make handling easier, or improve the properties of the final polymers. Aliphatic and cycloaliphatic isocyanates are used in smaller quantities, most often in coatings and other applications where color and transparency are important, since polyurethanes made with aromatic isocyanates tend to darken on exposure to light. The most important aliphatic and cycloaliphatic isocyanates are 1,6-hexamethylene diisocyanate (HDI), 1-isocyanato-3-isocyanatomethyl-3,5,5-trimethyl-cyclohexane (isophorone diisocyanate, IPDI), and 4,4′-diisocyanato dicyclohexylmethane (H12MDI or hydrogenated MDI). Other more specialized isocyanates include tetramethylxylylene diisocyanate (TMXDI). Polyols Polyols are polymers in their own right and have on average two or more hydroxyl groups per molecule. Polyether polyols are made by co-polymerizing ethylene oxide and propylene oxide with a suitable polyol precursor. Polyester polyols are made by the polycondensation of multifunctional carboxylic acids and polyhydroxyl compounds. Polyols can be further classified according to their end use. Higher molecular weight polyols (molecular weights from 2,000 to 10,000) are used to make more flexible polyurethanes, while lower molecular weight polyols make more rigid products. 
Polyols for flexible applications use low-functionality initiators such as dipropylene glycol (f = 2), glycerine (f = 3), or a sorbitol/water solution (f = 2.75). Polyols for rigid applications use higher-functionality initiators such as sucrose (f = 8), sorbitol (f = 6), toluenediamine (f = 4), and Mannich bases (f = 4). Propylene oxide and/or ethylene oxide is added to the initiators until the desired molecular weight is achieved. The order of addition and the amounts of each oxide affect many polyol properties, such as compatibility, water-solubility, and reactivity. Polyols made with only propylene oxide are terminated with secondary hydroxyl groups and are less reactive than polyols capped with ethylene oxide, which contain primary hydroxyl groups. Incorporating carbon dioxide into the polyol structure is being researched by multiple companies. Graft polyols (also called filled polyols or polymer polyols) contain finely dispersed styrene–acrylonitrile, acrylonitrile, or polyurea (PHD) polymer solids chemically grafted to a high molecular weight polyether backbone. They are used to increase the load-bearing properties of low-density high-resiliency (HR) foam, as well as add toughness to microcellular foams and cast elastomers. Initiators such as ethylenediamine and triethanolamine are used to make low molecular weight rigid foam polyols that have built-in catalytic activity due to the presence of nitrogen atoms in the backbone. A special class of polyether polyols, poly(tetramethylene ether) glycols, which are made by polymerizing tetrahydrofuran, are used in high-performance coating, wetting, and elastomer applications. Conventional polyester polyols are based on virgin raw materials and are manufactured by the direct polyesterification of high-purity diacids and glycols, such as adipic acid and 1,4-butanediol. Polyester polyols are usually more expensive and more viscous than polyether polyols, but they make polyurethanes with better solvent, abrasion, and cut resistance. Other polyester polyols are based on reclaimed raw materials. They are manufactured by transesterification (glycolysis) of recycled poly(ethylene terephthalate) (PET) or dimethyl terephthalate (DMT) distillation bottoms with glycols such as diethylene glycol. These low molecular weight, aromatic polyester polyols are used in rigid foam, and bring low cost and excellent flammability characteristics to polyisocyanurate (PIR) boardstock and polyurethane spray foam insulation. Specialty polyols include polycarbonate polyols, polycaprolactone polyols, polybutadiene polyols, and polysulfide polyols. These materials are used in elastomer, sealant, and adhesive applications that require superior weatherability and resistance to chemical and environmental attack. Natural oil polyols derived from castor oil and other vegetable oils are used to make elastomers, flexible bunstock, and flexible molded foam. Co-polymerizing chlorotrifluoroethylene or tetrafluoroethylene with vinyl ethers containing hydroxyalkyl vinyl ether produces fluorinated (FEVE) polyols. Two-component fluorinated polyurethanes prepared by reacting FEVE fluorinated polyols with polyisocyanate have been used to make ambient-cure paints and coatings. Since fluorinated polyurethanes contain a high percentage of fluorine–carbon bonds, which are among the strongest chemical bonds, fluorinated polyurethanes exhibit resistance to UV, acids, alkali, salts, chemicals, solvents, weathering, corrosion, fungi and microbial attack. 
These have been used for high-performance coatings and paints. Phosphorus-containing polyols are available that become chemically bonded to the polyurethane matrix for use as flame retardants. This covalent linkage prevents migration and leaching of the organophosphorus compound. Bio-derived materials Interest in sustainable "green" products has raised interest in polyols derived from vegetable oils. Various oils used in the preparation of polyols for polyurethanes include soybean oil, cottonseed oil, neem seed oil, and castor oil. Vegetable oils are functionalized in various ways and modified to polyetheramides, polyethers, alkyds, etc. Renewable sources used to prepare polyols may be fatty acids or dimer fatty acids. Some biobased and isocyanate-free polyurethanes exploit the reaction between polyamines and cyclic carbonates to produce polyhydroxyurethanes. Chain extenders and cross linkers Chain extenders (f = 2) and cross linkers (f ≥ 3) are low molecular weight hydroxyl- and amine-terminated compounds that play an important role in the polymer morphology of polyurethane fibers, elastomers, adhesives, and certain integral skin and microcellular foams. The elastomeric properties of these materials are derived from the phase separation of the hard and soft copolymer segments of the polymer, such that the urethane hard segment domains serve as cross-links between the amorphous polyether (or polyester) soft segment domains. This phase separation occurs because the mainly nonpolar, low-melting soft segments are incompatible with the polar, high-melting hard segments. The soft segments, which are formed from high molecular weight polyols, are mobile and are normally present in coiled formation, while the hard segments, which are formed from the isocyanate and chain extenders, are stiff and immobile. As the hard segments are covalently coupled to the soft segments, they inhibit plastic flow of the polymer chains, thus creating elastomeric resiliency. Upon mechanical deformation, a portion of the soft segments are stressed by uncoiling, and the hard segments become aligned in the stress direction. This reorientation of the hard segments and the consequent powerful hydrogen bonding contribute to high tensile strength, elongation, and tear resistance values. The choice of chain extender also determines flexural, heat, and chemical resistance properties. The most important chain extenders are ethylene glycol, 1,4-butanediol (1,4-BDO or BDO), 1,6-hexanediol, cyclohexane dimethanol and hydroquinone bis(2-hydroxyethyl) ether (HQEE). All of these glycols form polyurethanes that phase-separate well and form well-defined hard segment domains, and all are melt-processable. They are all suitable for thermoplastic polyurethanes, with the exception of ethylene glycol, since its derived bis-phenyl urethane undergoes unfavorable degradation at high hard segment levels. Diethanolamine and triethanolamine are used in flex-molded foams to build firmness and add catalytic activity. Diethyltoluenediamine is used extensively in RIM, and in polyurethane and polyurea elastomer formulations. Catalysts Polyurethane catalysts can be classified into two broad categories: basic (amine) and acidic (metal) catalysts. Tertiary amine catalysts function by enhancing the nucleophilicity of the diol component. Alkyl tin carboxylates, oxides, and mercaptides function as mild Lewis acids in accelerating the formation of polyurethane. 
As bases, traditional amine catalysts include triethylenediamine (TEDA, also called DABCO, 1,4-diazabicyclo[2.2.2]octane), dimethylcyclohexylamine (DMCHA), dimethylethanolamine (DMEA), dimethylaminoethoxyethanol and bis-(2-dimethylaminoethyl)ether, a blowing catalyst also called A-99. A typical Lewis acidic catalyst is dibutyltin dilaurate. The process is highly sensitive to the nature of the catalyst and is also known to be autocatalytic. Factors affecting catalyst selection include balancing three reactions: urethane (polyol + isocyanate, or gel) formation, urea (water + isocyanate, or "blow") formation, and the isocyanate trimerization reaction (e.g., using potassium acetate, to form isocyanurate rings). A variety of specialized catalysts have been developed. Surfactants Surfactants are used to modify the characteristics of both foam and non-foam polyurethane polymers. They take the form of polydimethylsiloxane-polyoxyalkylene block copolymers, silicone oils, nonylphenol ethoxylates, and other organic compounds. In foams, they are used to emulsify the liquid components, regulate cell size, and stabilize the cell structure to prevent collapse and sub-surface voids. In non-foam applications they are used as air release and antifoaming agents, as wetting agents, and to eliminate surface defects such as pin holes, orange peel, and sink marks. Production Polyurethanes are produced by mixing two or more liquid streams. The polyol stream contains catalysts, surfactants, blowing agents (when making polyurethane foam insulation) and so on. The two components are referred to as a polyurethane system, or simply a system. The isocyanate is commonly referred to in North America as the 'A-side' or just the 'iso'. The blend of polyols and other additives is commonly referred to as the 'B-side' or as the 'poly'. This mixture might also be called a 'resin' or 'resin blend'. In Europe the meanings of 'A-side' and 'B-side' are reversed. Resin blend additives may include chain extenders, cross linkers, surfactants, flame retardants, blowing agents, pigments, and fillers. Polyurethane can be made in a variety of densities and hardnesses by varying the isocyanate, polyol or additives. Health and safety Fully reacted polyurethane polymer is chemically inert. No exposure limits have been established in the U.S. by OSHA (Occupational Safety and Health Administration) or ACGIH (American Conference of Governmental Industrial Hygienists). It is not regulated by OSHA for carcinogenicity. Polyurethanes are combustible. Decomposition from fire can produce significant amounts of carbon monoxide and hydrogen cyanide, in addition to nitrogen oxides, isocyanates, and other toxic products. Due to the flammability of the material, it has to be treated with flame retardants (at least in the case of furniture), almost all of which are considered harmful. California later issued Technical Bulletin 117-2013, which allowed most polyurethane foam to pass flammability tests without the use of flame retardants. The Green Science Policy Institute states: "Although the new standard can be met without flame retardants, it does NOT ban their use. Consumers who wish to reduce household exposure to flame retardants can look for a TB117-2013 tag on furniture, and verify with retailers that products do not contain flame retardants." Liquid resin blends and isocyanates may contain hazardous or regulated components. Isocyanates are known skin and respiratory sensitizers. 
Additionally, amines, glycols, and phosphates present in spray polyurethane foams pose risks. Chemicals that may be emitted during or after application of polyurethane spray foam (such as isocyanates) are harmful to human health, and therefore special precautions are required during and after this process. In the United States, additional health and safety information can be found through organizations such as the Polyurethane Manufacturers Association (PMA) and the Center for the Polyurethanes Industry (CPI), as well as from polyurethane system and raw material manufacturers. Regulatory information can be found in the Code of Federal Regulations Title 21 (Food and Drugs) and Title 40 (Protection of the Environment). In Europe, health and safety information is available from ISOPA, the European Diisocyanate and Polyol Producers Association. Manufacturing The methods of manufacturing polyurethane finished goods range from small, hand-pour piece-part operations to large, high-volume bunstock and boardstock production lines. Regardless of the end product, the manufacturing principle is the same: to meter the liquid isocyanate and resin blend at a specified stoichiometric ratio (illustrated in the formulation sketch following this article's text), mix them together until a homogeneous blend is obtained, dispense the reacting liquid into a mold or onto a surface, wait until it cures, then demold the finished part. Dispensing equipment Although the capital outlay can be high, it is desirable to use a meter-mix or dispense unit for even low-volume production operations that require a steady output of finished parts. Dispense equipment consists of material holding (day) tanks, metering pumps, a mix head, and a control unit. Often, a conditioning or heater–chiller unit is added to control material temperature in order to improve mix efficiency and cure rate, and to reduce process variability. Choice of dispense equipment components depends on shot size, throughput, material characteristics such as viscosity and filler content, and process control. Material day tanks may range from single gallons to hundreds of gallons in size and may be supplied directly from drums, IBCs (intermediate bulk containers, such as caged IBC totes), or bulk storage tanks. They may incorporate level sensors, conditioning jackets, and mixers. Pumps can be sized to meter from single grams per second up to hundreds of pounds per minute. They can be rotary, gear, or piston pumps, or can be specially hardened lance pumps to meter liquids containing highly abrasive fillers such as chopped or hammer-milled glass fiber and wollastonite. The pumps can drive low-pressure (10 to 30 bar, 1 to 3 MPa) or high-pressure (125 to 250 bar, 12.5 to 25.0 MPa) dispense systems. Mix heads can be simple static mix tubes, rotary-element mixers, low-pressure dynamic mixers, or high-pressure hydraulically actuated direct impingement mixers. Control units may have basic on/off and dispense/stop switches, and analogue pressure and temperature gauges, or may be computer-controlled with flow meters to electronically calibrate mix ratio, digital temperature and level sensors, and a full suite of statistical process control software. Add-ons to dispense equipment include nucleation or gas injection units, and third or fourth stream capability for adding pigments or metering in supplemental additive packages. Tooling Distinct from pour-in-place, bun and boardstock, and coating applications, the production of piece parts requires tooling to contain and form the reacting liquid. 
The choice of mold-making material depends on the expected number of uses to end-of-life (EOL), molding pressure, flexibility, and heat transfer characteristics. RTV silicone is used for tooling that has an EOL in the thousands of parts. It is typically used for molding rigid foam parts, where the ability to stretch and peel the mold around undercuts is needed. The heat transfer characteristic of RTV silicone tooling is poor. High-performance, flexible polyurethane elastomers are also used in this way. Epoxy, metal-filled epoxy, and metal-coated epoxy are used for tooling that has an EOL in the tens of thousands of parts. They are typically used for molding flexible foam cushions and seating, integral skin and microcellular foam padding, and shallow-draft RIM bezels and fascia. The heat transfer characteristic of epoxy tooling is fair; the heat transfer characteristic of metal-filled and metal-coated epoxy is good. Copper tubing can be incorporated into the body of the tool, allowing hot water to circulate and heat the mold surface. Aluminum is used for tooling that has an EOL in the hundreds of thousands of parts. It is typically used for molding microcellular foam gasketing and cast elastomer parts, and is milled or extruded into shape. Mirror-finish stainless steel is used for tooling that imparts a glossy appearance to the finished part. The heat transfer characteristic of metal tooling is excellent. Finally, molded or milled polypropylene is used to create low-volume tooling for molded gasket applications. Instead of many expensive metal molds, low-cost plastic tooling can be formed from a single metal master, which also allows greater design flexibility. The heat transfer characteristic of polypropylene tooling is poor, which must be taken into consideration during the formulation process. Applications In 2008, the global consumption of polyurethane raw materials was above 12 million metric tons, and the average annual growth rate was about 5%. Revenues generated with PUR on the global market are expected to rise to approximately US$75 billion by 2022. As polyurethanes are such an important class of materials, research is constantly taking place and papers are regularly published. Degradation and environmental fate Effects of visible light Polyurethanes, especially those made using aromatic isocyanates, contain chromophores that interact with light. This is of particular interest in the area of polyurethane coatings, where light stability is a critical factor and is the main reason that aliphatic isocyanates are used in making polyurethane coatings. When PU foam, which is made using aromatic isocyanates, is exposed to visible light, it discolors, turning from off-white to yellow to reddish brown. It has been generally accepted that, apart from yellowing, visible light has little effect on foam properties. This is especially the case if the yellowing happens on the outer portions of a large foam, as the deterioration of properties in the outer portion has little effect on the overall bulk properties of the foam itself. It has been reported that exposure to visible light can affect the variability of some physical property test results. Higher-energy UV radiation promotes chemical reactions in foam, some of which are detrimental to the foam structure. Hydrolysis and biodegradation Polyurethanes may degrade due to hydrolysis. This is a common problem with shoes left in a closet, where they react with moisture in the air. 
Microbial degradation of polyurethane is believed to be due to the action of esterase, urethanase, hydrolase and protease enzymes. The process is slow, as most microbes have difficulty moving beyond the surface of the polymer. Susceptibility to fungi is higher due to their release of extracellular enzymes, which are better able to permeate the polymer matrix. Two species of the Ecuadorian fungus Pestalotiopsis are capable of biodegrading polyurethane in aerobic and anaerobic conditions, such as those found at the bottom of landfills. Degradation of polyurethane items at museums has been reported. Polyester-type polyurethanes are more easily biodegraded by fungi than polyether-type polyurethanes. See also Botanol, a material with higher plant-based content Passive fire protection Penetrant (mechanical, electrical, or structural) Polyaspartic Polyurethane dispersion Thermoplastic polyurethanes Thermoset polymer matrix References External links Center for the Polyurethanes Industry: information for EH&S issues related to polyurethanes developments Polyurethane synthesis, Polymer Science Learning Center, University of Southern Mississippi Polyurethane Foam Association: Industry information, educational materials and resources related to flexible polyurethane foam PU Europe: European PU insulation industry association (formerly BING): European voice for the national trade associations representing the polyurethane insulation industry ISOPA: European Diisocyanate & Polyol Producers Association: ISOPA represents the manufacturers in Europe of aromatic diisocyanates and polyols 1937 in Germany 1937 in science Adhesives Building insulation materials Coatings Elastomers Plastics Wood finishing materials German inventions of the Nazi period
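The stoichiometric control described in the Chemistry and Production sections can be made concrete with standard formulation arithmetic: a polyol's equivalent weight follows from its hydroxyl number (56100 / OH#), an isocyanate's from its %NCO content (4200 / %NCO), and the isocyanate charge is scaled by the isocyanate index. The Python sketch below uses hypothetical formulation values for illustration; only the bookkeeping is standard.

```python
def polyol_eq_weight(oh_number):
    """Equivalent weight (g/eq) from hydroxyl number (mg KOH/g): 56100 / OH#."""
    return 56100.0 / oh_number

def nco_eq_weight(nco_percent):
    """Equivalent weight (g/eq) of an isocyanate from its %NCO: 4200 / %NCO."""
    return 4200.0 / nco_percent

def iso_parts(polyol_parts, oh_number, water_parts, nco_percent, index=105):
    """Parts of isocyanate per formulation at a given isocyanate index.
    Water has an equivalent weight of 9, since each H2O ultimately
    consumes two NCO groups (forming CO2 and a urea linkage)."""
    equivalents = polyol_parts / polyol_eq_weight(oh_number) + water_parts / 9.0
    return equivalents * nco_eq_weight(nco_percent) * index / 100.0

# Hypothetical example: 100 parts of a 56 OH# polyol, 4 parts water,
# polymeric MDI at 31% NCO, run slightly isocyanate-rich (index 105).
print(iso_parts(100, 56, 4.0, 31.0))  # ≈ 77 parts MDI
```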
Polyurethane
Physics,Chemistry
6,764
21,697,541
https://en.wikipedia.org/wiki/Lactosylceramide
Lactosylceramides, also known as LacCer, are a class of glycosphingolipids composed of a variable hydrophobic ceramide lipid and a hydrophilic sugar moiety. Lactosylceramides are found in microdomains on the plasma membranes of numerous cells. They are ceramides bearing lactose, and are thus an example of a globoside. Composition As with many lipids, the chemical formula and molecular weight vary depending on the fatty acid present. As one example, the chemical formula of lactosylceramide (d18:1/12:0) is C42H79NO13, which has a molar mass of 806.088 g/mol (a small verification script follows this entry); the IUPAC name of this species is N-(dodecanoyl)-1-beta-lactosyl-sphing-4-enine. Function Lactosylceramides were initially called 'cytolipin H'. Lactosylceramide is found in only small amounts in most tissues, but it has various biological functions and is important as the biosynthetic precursor of the majority of the neutral oligoglycosylceramides, sulfatides and gangliosides. In tissues, biosynthesis of lactosylceramide involves addition of a second monosaccharide unit (galactose), as its nucleotide derivative, to monoglucosylceramide, catalyzed by a particular beta-1,4-galactosyltransferase on the lumenal side of the Golgi apparatus. The precursor glucosylceramide is moved by the sphingolipid transport protein FAPP2 to the distal Golgi apparatus, where it first crosses from the cytosolic side of the membrane by means of flippase activity. Biosynthesis of lactosylceramide then involves addition of the second monosaccharide unit, as its activated nucleotide derivative (UDP-galactose), to monoglucosylceramide on the lumenal side of the Golgi apparatus, in a reaction catalyzed by β-1,4-galactosyltransferases, of which two are known. The lactosylceramide produced can be further glycosylated, or it can be moved to the plasma membrane, essentially by a non-vesicular mechanism that is poorly understood; however, it cannot be translocated back to the cytosolic leaflet. It is also regenerated by the catabolism of many of the lipids for which it is the biosynthetic precursor. Deletion of the lactosylceramide synthase by gene targeting is embryonically lethal. Associated disorders Gaucher's disease is a sphingolipidosis characterized by a specific deficiency in acidic glucocerebrosidase, which results in abnormal accumulation of glucosylceramide, primarily within the lysosome. Gaucher's disease has been associated with instances of leukemia, myeloma, glioblastoma, lung cancer, and hepatocellular carcinoma, although the reasons for the association are currently being debated. Some suggest that the effects of Gaucher's disease itself may be connected to malignant growth, while others implicate the therapies used to treat Gaucher's disease. This debate is not entirely surprising, as the theories connecting Gaucher's disease with cancer fail to address the roles of ceramide and glucosylceramide in cancer biology. Gaucher disease is caused by mutations in GBA1, which encodes the lysosomal enzyme glucocerebrosidase (GCase). GBA1 mutations drive broad accumulation of glucosylceramide (GC) in various innate and adaptive immune cells in the spleen, liver, lung and bone marrow, often leading to chronic inflammation. The mechanisms that link excess GC to tissue inflammation remain unknown. 
See also Lactosylceramide 1,3-N-acetyl-beta-D-glucosaminyltransferase Lactosylceramide alpha-2,3-sialyltransferase GAL3ST1 ST3GAL5 References Glycolipids
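The molar mass quoted in the Composition section can be checked directly from the molecular formula. Below is a small Python sketch using average atomic masses; the parser is a minimal illustration that handles only simple Hill-notation formulas such as the one in the article.

```python
import re

# Average atomic masses (g/mol) for the elements appearing in the formula.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def molar_mass(formula):
    """Molar mass from a simple formula like 'C42H79NO13' (no parentheses)."""
    total = 0.0
    for element, count in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        total += ATOMIC_MASS[element] * (int(count) if count else 1)
    return total

print(molar_mass("C42H79NO13"))  # ≈ 806.09 g/mol, matching the article's value
```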
Lactosylceramide
Chemistry
936
44,000,453
https://en.wikipedia.org/wiki/JACKPHY
In library automation, the initialism JACKPHY refers to a group of language scripts not based on Roman characters, specifically: Japanese, Arabic, Chinese, Korean, Persian, Hebrew, and Yiddish. The Library of Congress's focus on these seven writing systems, based on sharing bibliographic records using MARC standards, included a partnership between 1979 and 1983 with the Research Libraries Group to develop cataloging capability for non-Roman scripts in the RLIN bibliographic utility. Ongoing efforts (JACKPHY Plus) enabled functionality for Cyrillic and then Greek in the MARC-8 character set. See also ALA-LC romanization Chinese Character Code for Information Interchange References External links LC Acquisitions and Bibliographic Access: General, Descriptive Cataloging § Non-Latin MARC 21 Specifications: Character Sets and Encoding Options Library automation Library of Congress Library cataloging and classification
JACKPHY
Engineering
170
43,016,216
https://en.wikipedia.org/wiki/Prandtl%20condition
In fluid mechanics, the Prandtl condition was suggested by the German physicist Ludwig Prandtl to identify possible boundary layer separation points of incompressible fluid flows. Prandtl condition in normal shock In the case of a normal shock, the flow is assumed to be in a steady state and the thickness of the shock is very small. It is further assumed that there is no friction or heat loss at the shock (heat transfer is negligible because it occurs over a relatively small surface). It is customary in this field to denote $x$ as the upstream and $y$ as the downstream condition. Since the mass flow rate through the two sides of the shock is constant, the mass balance becomes $\rho_x u_x = \rho_y u_y$. As there is no external force applied, momentum is conserved, which gives rise to the equation $p_x + \rho_x u_x^2 = p_y + \rho_y u_y^2$. Because heat flow is negligible, the process can be treated as adiabatic, so the energy equation is $c_p T_x + \frac{u_x^2}{2} = c_p T_y + \frac{u_y^2}{2} = c_p T_0$, where $T_0$ is the stagnation temperature. From the equation of state for a perfect gas, $p = \rho R T$. As the temperature is discontinuous across the shock wave, the speed of sound differs in the two adjoining media, so it is convenient to define a star Mach number that is independent of the specific Mach number. The speed of sound at the critical (star) condition is also a good reference velocity: $c^{*} = \sqrt{\gamma R T^{*}}$, and the additional Mach number, which is independent of the specific Mach number, is $M^{*} = \frac{u}{c^{*}}$. Since energy remains constant across the shock, dividing the momentum equation by the mass equation gives $\frac{p_x}{\rho_x u_x} + u_x = \frac{p_y}{\rho_y u_y} + u_y$. From the above equations this gives rise to $u_x u_y = c^{*2}$, or equivalently $M^{*}_x M^{*}_y = 1$, which is called the Prandtl condition in a normal shock (a numerical check follows this entry). References Fluid dynamics
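As a numerical check of the relation derived above, the Python sketch below, not part of the article, computes the downstream state from the standard ideal-gas normal-shock relations and verifies that $u_x u_y = c^{*2}$; the upstream values (air at Mach 2, 300 K) are illustrative.

```python
import math

def prandtl_check(Mx, gamma=1.4, R=287.0, Tx=300.0):
    """Verify u_x * u_y = c*^2 across a normal shock in a calorically
    perfect gas, using the standard normal-shock relations."""
    # Upstream velocity from the upstream Mach number and temperature.
    ux = Mx * math.sqrt(gamma * R * Tx)
    # Downstream Mach number and temperature (standard shock relations).
    My = math.sqrt((1 + 0.5 * (gamma - 1) * Mx**2)
                   / (gamma * Mx**2 - 0.5 * (gamma - 1)))
    Ty = Tx * ((1 + 0.5 * (gamma - 1) * Mx**2)
               * (2 * gamma / (gamma - 1) * Mx**2 - 1)
               / (Mx**2 * (gamma + 1)**2 / (2 * (gamma - 1))))
    uy = My * math.sqrt(gamma * R * Ty)
    # Critical speed of sound from the stagnation temperature, which is
    # conserved across the shock: c*^2 = 2*gamma*R*T0 / (gamma + 1).
    T0 = Tx * (1 + 0.5 * (gamma - 1) * Mx**2)
    c_star_sq = 2 * gamma * R * T0 / (gamma + 1)
    return ux * uy, c_star_sq

print(prandtl_check(2.0))  # both values ≈ 1.81e5 m^2/s^2, as the condition predicts
```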
Prandtl condition
Chemistry,Engineering
322
1,335,645
https://en.wikipedia.org/wiki/Neanderthal%20extinction
Neanderthals became extinct around 40,000 years ago. Hypotheses on the causes of the extinction include violence, transmission of diseases from modern humans to which Neanderthals had no immunity, competitive replacement, extinction by interbreeding with early modern human populations, natural catastrophes, climate change and inbreeding depression. It is likely that multiple factors caused the demise of an already low population. Possible coexistence before extinction In research published in Nature in 2014, an analysis of radiocarbon dates from forty Neanderthal sites from Spain to Russia found that the Neanderthals disappeared in Europe between 41,000 and 39,000 years ago with 95% probability. The study also found with the same probability that modern humans and Neanderthals overlapped in Europe for between 2,600 and 5,400 years. Modern humans reached Europe between 45,000 and 43,000 years ago. Improved radiocarbon dating published in 2015 indicates that Neanderthals disappeared around 40,000 years ago, which overturns older carbon dating which indicated that Neanderthals may have lived as recently as 24,000 years ago, including in refugia on the south coast of the Iberian peninsula such as Gorham's Cave. Zilhão et al. (2017) argue for pushing this date forward by some 3,000 years, to 37,000 years ago. Inter-stratification of Neanderthal and modern human remains has been suggested, but is disputed. Stone tools that have been proposed to be linked to Neanderthals have been found at Byzovya (:ru:Бызовая) in the polar Urals and dated to 31,000 to 34,000 years ago, but this is also disputed. At Mandrin Cave, the French palaeontologist Ludovic Slimak and colleagues developed a new method of analysing soot from fires. They were able to distinguish between fires made by Neanderthals and modern humans based on the differing food residues in the soot, a result of their different diets. The researchers found that the last layer of soot from Neanderthal fires was deposited a year or less before the first made by modern humans, and in Slimak's view this shows that the two species met and supports the hypothesis that the Neanderthals disappeared due to competitive replacement. Possible causes of extinction Violence Kwang Hyun Ko discusses the possibility that Neanderthal extinction was either precipitated or hastened by violent conflict with Homo sapiens. Violence in early hunter-gatherer societies usually occurred as a result of resource competition following natural disasters. It is therefore plausible to suggest that violence, including primitive warfare, would have transpired between the two human species. The hypothesis that early humans violently replaced Neanderthals was first proposed by the French paleontologist Marcellin Boule (the first person to publish an analysis of a Neanderthal) in 1912. Parasites and pathogens Infectious diseases carried by Homo sapiens may have passed to Neanderthals, who would have had poor protection against infections they had not previously been exposed to, leading to devastating consequences for Neanderthal populations. Homo sapiens were less vulnerable to Neanderthal diseases, partly because they had evolved to cope with the far higher disease load of the tropics and so were more able to cope with novel pathogens, and partly because the higher numbers of Homo sapiens meant that even devastating outbreaks would still have left enough survivors for a viable population. 
If viruses could easily jump between these two similar species, possibly because they lived close together, Homo sapiens might have kept infecting Neanderthals and prevented epidemics from burning out as Neanderthal numbers declined. The same process may also explain Homo sapiens' resilience to Neanderthal diseases and parasites. Novel human diseases likely moved from Africa into Eurasia. This purported "African advantage" remained until the agricultural revolution 10,000 years ago in Eurasia, after which domesticated animals surpassed other primates as the most prevalent source of new human infections, replacing the "African advantage" with a "Eurasian advantage". The catastrophic impact of Eurasian viruses on Native American populations in the historical past offers a sense of how modern humans may have affected hominin predecessor groups in Eurasia 40,000 years ago. Comparisons of human and Neanderthal genomes, and of their disease or parasite adaptations, may give insight into this question. Disease ecology suggests that interactions mediated by infectious illness may help explain the prolonged period of stagnation before the Middle-to-Upper Paleolithic transition. Mathematical models have been used to generate predictions for future investigations, offering insight into inter-species interactions during that transition. Such modelling can be useful given the sparse material record from this time and the potential of DNA sequencing and dating technology; together with modern technology and prehistoric archaeological methodologies, it may provide a fresh understanding of this period in human origins. Competitive replacement Species specific disadvantages A slight competitive advantage on the part of modern humans may have accounted for the Neanderthals' decline on a timescale of thousands of years. Generally small and widely dispersed fossil sites suggest that Neanderthals lived in less numerous and socially more isolated groups than contemporary Homo sapiens. Tools such as Mousterian flint stone flakes and Levallois points are remarkably sophisticated from the outset, yet they show a slow rate of variability, and general technological inertia is noticeable during the entire fossil period. Artifacts are of a utilitarian nature, and symbolic behavioral traits are undocumented before the arrival of modern humans in Europe around 40,000 to 35,000 years ago. The noticeable morphological differences in skull shape between the two human species also have cognitive implications. These include the Neanderthals' smaller parietal lobes and cerebellum, areas implicated in tool use, visuospatial integration, numeracy, creativity, and higher-order conceptualization. The differences, while slight, would have possibly been enough to affect natural selection and may underlie and explain the differences in social behaviors, technological innovation, and artistic output. Jared Diamond, a supporter of competitive replacement, points out in his book The Third Chimpanzee that the replacement of Neanderthals by modern humans is comparable to patterns of behavior that occur whenever people with advanced technology clash with people with less developed technology. Division of labour In 2006, it was posited that the Neanderthal division of labour between the sexes was less developed than that of Middle Paleolithic Homo sapiens. Both male and female Neanderthals participated in the single occupation of hunting big game, such as bison, deer, gazelles, and wild horses.
This hypothesis proposes that the Neanderthals' relative lack of division of labour resulted in less efficient extraction of resources from the environment as compared to Homo sapiens. Anatomical differences and running ability Researchers such as Karen L. Steudel of the University of Wisconsin have highlighted the relationship between Neanderthal anatomy (shorter and stockier than that of modern humans) and running ability, estimating that Neanderthal locomotion required about 30% more energy. Nevertheless, in a more recent study, researchers Martin Hora and Vladimir Sladek of Charles University in Prague show that the Neanderthal lower limb configuration, particularly the combination of robust knees, long heels, and short lower limbs, increased the effective mechanical advantage of the Neanderthal knee and ankle extensors, thus significantly reducing the force needed and the energy spent on locomotion. The walking cost of the Neanderthal male is now estimated to be 8–12% higher than that of anatomically modern males, whereas the walking cost of the Neanderthal female is considered to be virtually equal to that of anatomically modern females. Other researchers, like Yoel Rak of Tel Aviv University in Israel, have noted that the fossil record shows that Neanderthal pelvises, in comparison to modern human pelvises, would have made it much harder for Neanderthals to absorb shocks and to bounce off from one step to the next, giving modern humans another advantage over Neanderthals in running and walking ability. However, Rak also notes that all archaic humans had wide pelvises, indicating that this is the ancestral morphology and that modern humans underwent a shift towards narrower pelvises in the late Pleistocene. Modern humans and alliance with dogs Pat Shipman argues that the domestication of the dog gave modern humans an advantage when hunting. The oldest remains of domesticated dogs were found in Belgium (31,700 BP) and in Siberia (33,000 BP). A survey of early sites of modern humans and Neanderthals with faunal remains across Spain, Portugal and France provided an overview of what modern humans and Neanderthals ate. Rabbit became more frequent in the diet, while large mammals – mainly eaten by the Neanderthals – became increasingly rare. In 2013, DNA testing on the "Altai dog", the remains of a paleolithic dog from the Razboinichya Cave (Altai Mountains), linked this 33,000-year-old dog with the present lineage of Canis familiaris. Interbreeding At the time of the last Neanderthals, approximately 45 to 40 thousand years ago, genetic analysis suggests that there was a gene flow from Neanderthals to modern humans of around 10%, but almost no flow from modern humans to Neanderthals. This may be an artifact of the small number of late Neanderthal genomes, or it may be because hybrids were not viable in Neanderthal groups, or because fertile Neanderthals were being absorbed into modern human groups but not vice versa. If the effect was real over an extended period, it would have increased the size of the modern human gene pool and reduced that of the already sparse Neanderthals, helping to reduce their numbers below a viable population and thus contributing to their extinction. Inbreeding According to a study by Rios et al., kinship patterns among recovered Neanderthal remains suggest that there was inbreeding, such as pairings between half-siblings and/or uncle/aunt and niece/nephew.
Researchers hypothesize that Neanderthals may have become isolated in small groups during harsh climatic conditions, which contributed to inbreeding behaviours. Due to the lack of genetic diversity, Neanderthal populations would have become more vulnerable to climatic changes, diseases, and other stressors, which may have contributed to their extinction. A similar pattern to the inbreeding hypothesis can be seen among endangered lowland gorillas, whose populations are so small that inbreeding has occurred, making them even more vulnerable to extinction. Climate change Neanderthals went through a demographic crisis in Western Europe that seems to coincide with climate change that resulted in a period of extreme cold there. "The fact that Neanderthals in Western Europe were nearly extinct, but then recovered long before they came into contact with modern humans came as a complete surprise to us," said Love Dalén, associate professor at the Swedish Museum of Natural History in Stockholm. If so, this would indicate that Neanderthals may have been very sensitive to climate change. The data reveal that sudden climatic change, although crucial locally, had a limited effect on the worldwide Neanderthal population. In modelling studies, interbreeding and assimilation, which have been hypothesized as causes of the demise of European Neanderthal populations, succeed only at low levels of food competition. Future research will examine models of interbreeding, and hybridization may be evaluated using genomic records from the last ice age (Fu et al., 2016). Natural catastrophe A number of researchers have argued that the Campanian Ignimbrite Eruption, a volcanic eruption near Naples, Italy, about 39,280 ± 110 years ago (older estimate ~37,000 years), which erupted a large volume of magma, contributed to the extinction of Neanderthals. The argument has been developed by Golovanova et al. The hypothesis posits that although Neanderthals had encountered several interglacials during 250,000 years in Europe, an inability to adapt their hunting methods caused their extinction in the face of competition from H. sapiens when Europe changed into a sparsely vegetated steppe and semi-desert during the last Ice Age. Studies of sediment layers at Mezmaiskaya Cave suggest a severe reduction of plant pollen. The damage to plant life would have led to a corresponding decline in plant-eating mammals hunted by the Neanderthals. See also References Further reading Extinction Upper Paleolithic Archaeology in Europe Pleistocene extinctions
Neanderthal extinction
Biology
2,562
13,678,080
https://en.wikipedia.org/wiki/History%20of%20the%20iPhone
The history of the iPhone by Apple Inc. spans from the early 2000s to about 2010. The first iPhone was unveiled at Macworld 2007 and released later that year. By the end of 2009, iPhone models had been released in all major markets. Genesis of the iPhone The idea of an Apple phone came from Jean-Marie Hullot, a software engineer who worked at NeXT and, later, Apple. Initially, making an Apple phone was not favored by CEO Steve Jobs, but eventually Hullot was able to convince him. The first team was created in Paris; however, it was not until a few years later that Jobs took the project more seriously: the French engineers were asked to move their work back to the US, but Hullot declined and resigned from Apple with his team. Another engineer, Henri Lamiraux, became the new head of the project, working with Scott Forstall to develop the iPhone software. Initial development The iPhone project initially grew out of a conflict between Steve Jobs and his brother-in-law, who worked at Microsoft; Jobs was then persuaded to pursue a phone by Jean-Marie Hullot, a high-level French engineer working for Apple France. The project within Apple Inc. for developing the iPhone began with a request in 2004 from CEO Steve Jobs to the company's hardware engineer Tony Fadell, software engineer Scott Forstall and design engineer Sir Jonathan Ive to work on the highly confidential "Project Purple". While pitting two teams of engineers, led by Fadell and Forstall, against each other, Jobs decided to investigate the use of touchscreen devices and tablet computers (which later came to fruition with the iPad). Jobs ended up pushing for a touch-screen device that many have noted has similarities to Apple's previous touch-screen portable device, the Newton MessagePad. Like the MessagePad, the iPhone is nearly all screen. Its form factor is credited to Apple's Chief Design Officer, Jonathan Ive. Jobs expressed his belief that tablet PCs and traditional PDAs were not good choices as high-demand markets for Apple to enter, despite receiving many requests for Apple to create another PDA. In 2002, after the iPod launched, Jobs realized that the overlap of mobile phones and music players would force Apple to get into the mobile phone business. After seeing millions of Americans carrying separate BlackBerrys, phones, and Apple iPod MP3 players, he felt consumers would eventually prefer just one device. Jobs also saw that as cell phones and mobile devices kept amassing more features, they would challenge the iPod's dominance as a music player. To protect the iPod product line, which by the start of 2007 was responsible for 48% of all of Apple's revenue, Jobs decided he would need to venture into the wireless world. Earlier, instead of focusing on a follow-up to the Newton PDA, Jobs had Apple focus on the iPod. Jobs also had Apple develop the iTunes software, which could be used to synchronize content with iPod devices; iTunes had been released in January 2001. Several enabling technologies made the iPhone possible. These included lithium-ion batteries that were small and powerful enough to power a mobile computer for a reasonable amount of time; multi-touch screens; energy-efficient but powerful CPUs, such as those using the ARM architecture; mobile phone networks; and web browsers. Apple approached the glass manufacturer Corning in 2005 to investigate the possibility of a thin, flexible, and transparent material that could avoid the problem of metal keys scratching up phone screens.
Corning reactivated some old research material that had not yet found an application to produce Gorilla Glass. Beta to production and announcement In an effort to bypass the carriers, Jobs approached Motorola. On September 7, 2005, Apple and Motorola collaborated to develop the Motorola ROKR E1, the first mobile phone to use iTunes. Steve Jobs was unhappy with the ROKR; among other deficiencies, the ROKR E1's firmware limited storage to only 100 iTunes songs to avoid competing with Apple's iPod nano. iTunes Music Store purchases also could not be downloaded wirelessly directly onto the ROKR E1 and had to be transferred through a PC sync. Apple therefore decided to develop its own phone, which would incorporate the iPod's musical functions into a smartphone. Feeling that having to compromise with a non-Apple designer (Motorola) had prevented Apple from designing the phone it wanted to make, Apple discontinued support for the ROKR in September 2006, and, after creating a deal with AT&T (at the time still called Cingular), released a version of iTunes that included references to an as-yet unknown mobile phone that could display pictures and video. This turned out to be the first iPhone (iPhone 2G). On January 9, 2007, Steve Jobs announced the first iPhone at the Macworld convention, receiving substantial media attention. On June 11, 2007, Apple announced at its Worldwide Developers Conference that the iPhone would support third-party applications using the Safari engine. Third parties would be able to create Web 2.0 applications, which users could access via the Internet. Such applications appeared even before the release of the iPhone; the first of these, called OneTrip, was a program meant to keep track of users' shopping lists. On June 29, 2007, the first iPhone was released. The iPod Touch, which brought an iPhone-style touchscreen to the iPod range, was also released later in 2007. The iPad followed in 2010. Connection to AT&T When Apple announced the iPhone on January 9, 2007, it was sold only with AT&T (formerly Cingular) contracts in the United States. After 18 months of negotiations, Steve Jobs had reached an agreement with the wireless division of AT&T to be the iPhone's exclusive carrier. Consumers were unable to use any other carrier without unlocking their device. Apple retained control of the design, manufacturing and marketing of the iPhone. Since some customers were jailbreaking their iPhones to leave the network, AT&T began charging them a $175 early-termination fee for leaving before the end of their contract. Court cases Questions arose about the legality of Apple's arrangement after the iPhone was released. Two class-action lawsuits were filed against the company in October 2007: one in federal court and the other in state court. According to the suits, Apple's exclusive agreement with AT&T violated antitrust law. The state-court suit, filed by the law office of Damian R. Fernandez on behalf of California resident Timothy P. Smith, sought an injunction barring Apple from selling iPhones with a software lock, as well as $200 million in damages. In Smith v. Apple Inc., the plaintiffs said that Apple failed to disclose to purchasers its five-year agreement with AT&T when they bought iPhones with a two-year contract, and cited the Sherman Act's prohibition of monopolies. The second case was filed in the United States District Court for the Northern District of California.
The plaintiff, Paul Holman, filed a complaint against Apple and AT&T Mobility, alleging that he could not switch carriers or change SIM cards without losing iPhone improvements to which he was entitled. Holman also cited a Sherman Act violation by the defendants. On July 8, 2010, the court granted class certification. On December 9 the court ordered a stay on the case, awaiting the Supreme Court's decision in AT&T v. Concepcion (which disputed whether the state's basic standards of fairness were met by a clause in AT&T's contract limiting complaint resolution to arbitration). On April 27, 2011, the Supreme Court ruled that AT&T met the state's fairness standards. In 2017, Apple was sued after it admitted to slowing down older iPhone models. The plaintiffs, Stefan Bogdanovich and Dakota Speas, filed the lawsuit after their iPhone 6S devices became slower following an update, claiming compensation for the interference and the economic damages they suffered. United States release On June 28, 2007, during an address to Apple employees, Steve Jobs announced that all full-time Apple employees and those part-time employees who had been with the company for at least one year would receive a free iPhone. Employees received their phones in July, after the initial demand for the iPhone subsided. Initially priced at $499 and $599 for the 4 GB and 8 GB models respectively, the iPhone went on sale on June 29, 2007. Apple closed its stores at 2:00pm local time to prepare for the 6:00pm iPhone launch, while hundreds of customers lined up at stores nationwide. In the US and some other countries, iPhones could be acquired only with a credit card, preventing completely anonymous purchases of iPhones. At the time, there was no way to opt out of the bundled AT&T data plan. At first, iPhones could not be added to an AT&T Business account, and any existing business account discounts could not be applied to an iPhone AT&T account; AT&T changed these restrictions in late January 2008. The Associated Press also reported in 2007 that some users were unable to activate their phones because, according to AT&T, "[a] high volume of activation requests [was] taxing the company's computer servers." Early projections by technology analysts put sales at between 250,000 and 700,000 iPhones in the first weekend alone, with strong sales continuing after the initial weekend. As part of their quarterly earnings announcement, AT&T reported that 146,000 iPhones were activated in the first weekend. Though this figure does not include units that were purchased for resale on eBay or otherwise not activated until after the opening weekend, it is still less than most initial estimates. It is also estimated that 95% of the units sold were the 8 GB model. Oversized bills Stories of unexpected billing issues began to circulate in blogs and the technical press a little more than a month after the iPhone's heavily advertised and anticipated release. The 300-page iPhone bill in a box received by iJustine on Saturday, August 11, 2007 became the subject of a viral video, posted by the following Monday, which quickly became an Internet meme. This video clip brought the voluminous bills to the attention of the mass media. Ten days later, after the video had been viewed more than 3 million times on the Internet and had received international news coverage, AT&T sent iPhone users a text message outlining changes in its billing practices.
Price drop outcry On September 5, 2007, the 4 GB model was discontinued, and the price of the 8 GB model was cut by a third, from $599 to $399. Those who had purchased an iPhone in the 14-day period before the September 5, 2007, announcement were eligible for a $200 "price protection" rebate from Apple or AT&T. However, it was widely reported that some who bought between the June 29, 2007, launch and the August 22, 2007, price-protection kick-in date complained that this was a larger-than-normal price drop for such a relatively short period and accused Apple of unfair pricing. In response to customer complaints, on September 6, 2007, Apple CEO Steve Jobs wrote in an open letter to iPhone customers that everyone who purchased an iPhone at the higher price "and who is not receiving a rebate or other consideration" would receive a $100 credit to be redeemed towards the purchase of any product sold in Apple's retail or online stores. iPhone 3G pricing model changes With the July 11, 2008, release of the iPhone 3G, Apple and AT&T changed the US pricing model from the previous generation. Following the de facto model for mobile phone service in the United States, AT&T would subsidize a sizable portion of the upfront cost of the iPhone 3G, followed by charging moderately higher monthly fees over a minimum two-year contract. iPhone 4 CDMA release There had been ongoing speculation in the United States that Apple might offer a CDMA-compatible iPhone for Verizon Wireless. This speculation increased on October 6, 2010, when The Wall Street Journal reported that Apple would begin producing a CDMA-compatible iPhone, with such a model going on sale in early 2011. On January 11, 2011, Verizon announced during a media event that it had reached an agreement with Apple and would begin selling a CDMA iPhone 4. The Verizon iPhone went on sale on February 10, 2011. The CDMA version was a bespoke model, lacking a SIM slot and featuring a revised metal chassis whose design would be reused on the iPhone 4S. During Apple's official unveiling of the iPhone 4S on October 4, 2011, it was announced that Sprint would begin carrying the reconfigured CDMA iPhone 4 and iPhone 4S in the US on October 14. Cricket Wireless announced on May 31, 2012, that it would become the first prepaid carrier in the US to offer the iPhone 4 and iPhone 4S, beginning June 22, 2012. A week later, Virgin Mobile USA became the second American prepaid carrier to offer the iPhone 4 and 4S, announcing plans to release the phones on June 29, 2012. T-Mobile USA's inability to provide the iPhone to customers raised its subscription churn rate, decreased the percentage of lucrative postpaid customers, and contributed to parent Deutsche Telekom's decision to sell it to AT&T in March 2011, although AT&T canceled the deal in December 2011 because of antitrust concerns. T-Mobile began offering the iPhone on April 12, 2013. iPhone 5 and the Lightning connector With the release of the iPhone 5 on September 21, 2012, Apple introduced a thinner and stronger design for the iPhone. The design was available in the colors black, white, grey, and gold, with gold used for the first time on an iPhone. Sapphire materials were used for the home button and the camera to help resist scratches and fingerprints, while anodized aluminum and ceramic glass were used for the phone's body. Support for 4G LTE internet was also added. Apple abandoned the 30-pin dock connector in favor of the Lightning connector.
The change took business owners and consumers alike by surprise. Lightning was itself replaced with the release of the iPhone 15, which switched to USB-C ports. International release timeline The international release of the iPhone was staggered over several months. Today, the iPhone is available in most countries. Intellectual property Apple has filed more than 200 patent applications related to the technology behind the iPhone. LG Electronics claimed the design of the iPhone was copied from the LG Prada. Woo-Young Kwak, head of LG Mobile Handset R&D Center, said at a press conference: "we consider that Apple copied Prada phone after the design was unveiled when it was presented in the iF Design Award and won the prize in September 2006." Conversely, the iPhone has also inspired its own share of high-tech clones. On September 3, 1993, Infogear filed for the U.S. trademark "I PHONE" and on March 20, 1996, applied for the trademark "IPhone". "I Phone" was registered in March 1998, and "IPhone" was registered in 1999. The I PHONE mark has since been abandoned. Infogear's trademarks cover "communications terminals comprising computer hardware and software providing integrated telephone, data communications and personal computer functions" (1993 filing), and "computer hardware and software for providing integrated telephone communication with computerized global information networks" (1996 filing). In 2000, Infogear filed an infringement claim against the owners of the iPhones.com domain name. The owners of the iPhones.com domain name challenged the infringement claim in the Northern District Court of California. In June 2000, Cisco Systems acquired Infogear, including the iPhone trademark. In September 2000, Cisco Systems settled with the owners of iPhones.com and allowed them to keep the iPhones.com domain name, along with intellectual property rights to use any designation of the iPhones.com domain name for the sale of cellular phones, cellular phones with Internet access (WAP phones), handheld PDAs, storage devices, computer equipment (hardware/software), and digital cameras (hardware/software). In October 2002, Apple applied for the "iPhone" trademark in the United Kingdom, Australia, Singapore, and the European Union. A Canadian application followed in October 2004, and a New Zealand application in September 2006. As of October 2006, only the Singapore and Australian applications had been granted. In September 2006, a company called Ocean Telecom Services applied for an "iPhone" trademark in the United States, United Kingdom, and Hong Kong, following a filing in Trinidad and Tobago. As the Ocean Telecom trademark applications use exactly the same wording as Apple's New Zealand application, it is assumed that Ocean Telecom was applying on behalf of Apple. The Canadian application was opposed in August 2005 by a Canadian company called Comwave, which itself applied for the trademark three months later. Comwave had been selling VoIP devices called iPhone since 2004.
Shortly after Steve Jobs' January 9, 2007, announcement that Apple would be selling a product called iPhone in June 2007, Cisco issued a statement that it had been negotiating trademark licensing with Apple and expected Apple to agree to the final documents that had been submitted the night before. On January 10, 2007, Cisco announced it had filed a lawsuit against Apple over the infringement of the trademark iPhone, seeking an injunction in federal court to prohibit Apple from using the name. In February 2007, Cisco claimed that the trademark lawsuit was a "minor skirmish" that was not about money, but about interoperability. On February 2, 2007, Apple and Cisco announced that they had agreed to temporarily suspend litigation while they held settlement talks, and subsequently announced on February 20, 2007, that they had reached an agreement: both companies would be allowed to use the "iPhone" name in exchange for "exploring interoperability" between their security, consumer, and business communications products. On October 22, 2009, Nokia filed a lawsuit against Apple for infringement of its GSM, UMTS and WLAN patents, alleging that Apple had been violating ten Nokia patents since the iPhone's initial release. This and further lawsuits by Nokia were eventually settled. In December 2010, Reuters reported that some iPhone and iPad users were suing Apple Inc. because some applications were passing user information to third-party advertisers without permission. Some makers of the applications, such as Textplus4, Paper Toss, The Weather Channel, Dictionary.com, Talking Tom Cat and Pumpkin Maker, were also named as co-defendants in the lawsuit. In August 2012, Apple won a smartphone patent lawsuit in the U.S. against Samsung, the world's largest maker of smartphones; however, on December 6, 2016, the Supreme Court reversed the decision that had awarded nearly $400 million to Apple and returned the case to the Federal Circuit to define the appropriate legal standard for an "article of manufacture", which need not be the smartphone itself but could be just the case and screen to which the design patents relate. Legal battles over brand name In Mexico, the trademark iFone was registered in 2003 by a communications systems and services company, iFone. Apple tried to gain control over the brand name, but a Mexican court denied the request. The case began in 2009, when the Mexican firm sued Apple. The Supreme Court of Mexico upheld iFone as the rightful owner and held that Apple's use of iPhone constituted a trademark violation. In Brazil, the brand IPHONE was registered in 2000 by the company then called Gradiente Eletrônica S.A., now IGB Eletrônica S.A. According to the filing, Gradiente foresaw the coming convergence of voice and data over the Internet. The final battle over the brand name concluded in 2008. On December 18, 2012, IGB launched its own line of Android smartphones under the trade name to which it has exclusive rights in the local market. In February 2013, the Brazilian Patent and Trademark Office (known as "Instituto Nacional da Propriedade Industrial") issued a ruling that Gradiente Eletrônica, not Apple, owned the "iPhone" mark in Brazil. The "iPhone" term had been registered by Gradiente in 2000, seven years before Apple's release of its first iPhone. The decision came three months after Gradiente Eletrônica launched a lower-cost smartphone using the iPhone brand. In June 2014, Apple won, for the second time, the right to use the brand name in Brazil.
The court ruled that Gradiente's registration did not confer exclusive rights to the brand. Although Gradiente intended to appeal, the decision meant that Apple could freely use the brand without paying royalties to the Brazilian company. In the Philippines, Solid Group launched the MyPhone brand in 2007. Stylized as "my|phone", Solid Broadband filed a trademark application for that brand. Apple later filed a trademark case at the Intellectual Property Office of the Philippines (IPOPHL) against Solid Broadband's MyPhone, arguing that the name was "confusingly similar" to iPhone and likely to "deceive" or "cause confusion" among consumers. Apple lost the trademark battle to Solid Group in a 2015 decision made by IPO director Nathaniel Arevalo, who also reportedly said that it was unlikely that consumers would be confused between the "iPhone" and the "MyPhone". "This is a case of a giant trying to claim more territory than what it is entitled to, to the great prejudice of a local 'Pinoy Phone' merchant who has managed to obtain a significant foothold in the mobile phone market through the marketing and sale of innovative products under a very distinctive trademark", Arevalo added. See also History of Apple Inc. History of mobile phones Timeline of Apple Inc. products Timeline of iPhone models Timeline of iOS devices References Further reading IPhone
History of the iPhone
Technology
4,552
56,574,537
https://en.wikipedia.org/wiki/Lists%20of%20black%20holes
This is a list of lists of black holes: List of black holes List of most massive black holes List of nearest known black holes List of quasars See also Lists of astronomical objects Lists of astronomical objects
Lists of black holes
Physics,Astronomy
43
36,867,671
https://en.wikipedia.org/wiki/HD%20115004
HD 115004 is a single star in the northern constellation of Canes Venatici. It is faintly visible to the naked eye with an apparent visual magnitude of 4.94. Based upon its annual parallax shift, it is located around 460 light years from the Sun. The star is moving closer, with a heliocentric radial velocity of −22 km/s, and will make its closest approach in about 1.7 million years. This is an evolved giant star, most likely (97% chance) on the horizontal branch. A suffix in its stellar classification indicates a mild overabundance of the CN molecule in the stellar atmosphere. It has an estimated 3.2 times the mass of the Sun and, at the age of 440 million years, has expanded to 23 times the Sun's radius. The star is radiating around 242 times the Sun's luminosity from its enlarged photosphere at an effective temperature of 4,761 K. References G-type giants Canes Venatici Durchmusterung objects 115004 065450 4997
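The quoted luminosity, radius, and temperature can be cross-checked with the Stefan–Boltzmann relation $L/L_\odot = (R/R_\odot)^2 (T/T_\odot)^4$. A small illustrative Python sketch (not part of the article; it assumes the IAU nominal solar effective temperature of 5772 K):

```python
# Consistency check of the quoted luminosity from radius and temperature,
# using the Stefan-Boltzmann relation L/Lsun = (R/Rsun)^2 * (T/Tsun)^4.
T_SUN = 5772.0  # solar effective temperature in kelvin (IAU nominal value)

def relative_luminosity(r_solar: float, t_eff: float) -> float:
    """Luminosity in solar units for a star of given radius (in solar radii)
    and effective temperature (in kelvin)."""
    return r_solar**2 * (t_eff / T_SUN)**4

print(relative_luminosity(23.0, 4761.0))  # ~245, close to the quoted 242 Lsun
```

The small discrepancy is expected, since the published radius and temperature are themselves rounded values.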
HD 115004
Astronomy
235
34,549,363
https://en.wikipedia.org/wiki/Critical%20code%20studies
Critical code studies (CCS) is an emerging academic subfield, related to software studies, digital humanities, cultural studies, computer science, human–computer interaction, and do-it-yourself maker culture. Its primary focus is on the cultural significance of computer code, without excluding or focusing solely upon the code's functional purpose. As introduced by Mark C. Marino, critical code studies was initially a method by which scholars "can read and explicate code the way we might explicate a work of literature", but the concept also draws upon Espen Aarseth's conception of a cybertext as a "mechanical device for the production and consumption of verbal signs", arguing that in order to understand a digital artifact we must also understand the constraints and capabilities of the authoring tools used by the creator of the artifact, as well as the memory storage and interface required for the user to experience the digital artifact. Evidence that critical code studies has gained momentum since 2006 includes an article by Matthew Kirschenbaum in The Chronicle of Higher Education, CCS sessions at the Modern Language Association in 2011 that were "packed" with attendees, several academic conferences devoted wholly to critical code studies, and a book devoted to the explication of a single line of computer code, titled 10 PRINT CHR$(205.5+RND(1)); : GOTO 10. See also Critical legal studies Critical theory Hermeneutics References Footnotes Bibliography Further reading Subfields of computer science Cultural studies Technology in society
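The BASIC line in that book's title draws an endless random maze on a Commodore 64. A minimal Python re-creation (an illustrative sketch, not part of the article) makes the behavior it enacts easy to see:

```python
import random

# A re-creation of the Commodore 64 BASIC one-liner discussed above:
#   10 PRINT CHR$(205.5+RND(1)); : GOTO 10
# CHR$(205.5 + RND(1)) truncates to PETSCII code 205 or 206, the two
# diagonal line-segment glyphs; Unicode diagonals stand in for them here.
for _ in range(2000):  # the original loops forever via GOTO
    print(random.choice("╱╲"), end="")
print()
```

Reading even this single line closely, as the book does at length, is a compact demonstration of the CCS method: the randomness, the character set, and the infinite loop all carry cultural as well as functional meaning.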
Critical code studies
Technology
315
614,297
https://en.wikipedia.org/wiki/Solvay%20process
The Solvay process or ammonia–soda process is the major industrial process for the production of sodium carbonate (soda ash, Na2CO3). The ammonia–soda process was developed into its modern form by the Belgian chemist Ernest Solvay during the 1860s. The ingredients for it are readily available and inexpensive: salt brine (from inland sources or from the sea) and limestone (from quarries). The worldwide production of soda ash in 2005 was estimated at 42 million tonnes, which is more than six kilograms per year for each person on Earth. Solvay-based chemical plants now produce roughly three-quarters of this supply, with the remainder being mined from natural deposits. This method superseded the Leblanc process. History The name "soda ash" is based on the principal historical method of obtaining alkali, which was by using water to extract it from the ashes of certain plants. Wood fires yielded potash and its predominant ingredient potassium carbonate (K2CO3), whereas the ashes of these special plants yielded "soda ash" and its predominant ingredient sodium carbonate (Na2CO3). The word "soda" (from the Middle Latin) originally referred to certain plants that grow in salt marshes; it was discovered that the ashes of these plants yielded the useful alkali soda ash. The cultivation of such plants reached a particularly high state of development in the 18th century in Spain, where the plants are named barrilla (or "barilla" in English). The ashes of kelp also yield soda ash and were the basis of an enormous 18th-century industry in Scotland. Alkali was also mined from dry lakebeds in Egypt. By the late 18th century these sources were insufficient to meet Europe's burgeoning demand for alkali for the soap, textile, and glass industries. In 1791, the French physician Nicolas Leblanc developed a method to manufacture soda ash using salt, limestone, sulfuric acid, and coal. Although the Leblanc process came to dominate alkali production in the early 19th century, the expense of its inputs and its polluting byproducts (including hydrogen chloride gas) made it apparent that it was far from an ideal solution. It has been reported that in 1811 the French physicist Augustin Jean Fresnel discovered that sodium bicarbonate precipitates when carbon dioxide is bubbled through ammonia-containing brines – the chemical reaction central to the Solvay process. The discovery was not published, however. As Desmond Reilly has noted, "The story of the evolution of the ammonium–soda process is an interesting example of the way in which a discovery can be made and then laid aside and not applied for a considerable time afterwards." Serious consideration of this reaction as the basis of an industrial process dates from the British patent issued in 1834 to H. G. Dyar and J. Hemming. There were several attempts to reduce this reaction to industrial practice, with varying success. In 1861, the Belgian industrial chemist Ernest Solvay turned his attention to the problem; he was apparently largely unaware of the extensive earlier work. His solution was a gas absorption tower in which carbon dioxide bubbled up through a descending flow of brine. This, together with efficient recovery and recycling of the ammonia, proved effective. By 1864 Solvay and his brother Alfred had acquired financial backing and constructed a plant in Couillet, today a suburb of the Belgian town of Charleroi. The new process proved more economical and less polluting than the Leblanc method, and its use spread.
In 1874, the Solvays expanded their facilities with a new, larger plant at Nancy, France. In the same year, Ludwig Mond visited Solvay in Belgium and acquired rights to use the new technology. He and John Brunner formed the firm of Brunner, Mond & Co., and built a Solvay plant at Winnington, near Northwich, Cheshire, England, which began operating in 1874. Mond was instrumental in making the Solvay process a commercial success; he made several refinements between 1873 and 1880 that removed byproducts that could slow or halt the process. In 1884, the Solvay brothers licensed Americans William B. Cogswell and Rowland Hazard to produce soda ash in the US, and formed a joint venture (the Solvay Process Company) to build and operate a plant in Solvay, New York. By the 1890s, Solvay-process plants produced the majority of the world's soda ash. In 1938 large deposits of the mineral trona were discovered near the Green River in Wyoming, from which sodium carbonate can be extracted more cheaply than it can be produced by the process. The original Solvay, New York plant closed in 1986, replaced in the US by a factory in Green River. Throughout the rest of the world, the Solvay process remains the major source of soda ash. Chemistry The Solvay process produces soda ash (predominantly sodium carbonate, Na2CO3) from brine (as a source of sodium chloride, NaCl) and from limestone (as a source of calcium carbonate, CaCO3). The overall process is: 2 NaCl + CaCO3 → Na2CO3 + CaCl2 The actual implementation of this global, overall reaction is intricate. A simplified description can be given using four different, interacting chemical reactions. In the first step in the process, carbon dioxide (CO2) passes through a concentrated aqueous solution of sodium chloride (table salt, NaCl) and ammonia (NH3): NaCl + CO2 + NH3 + H2O → NaHCO3 + NH4Cl (I) In industrial practice, the reaction is carried out by passing concentrated brine (salt water) through two towers. In the first, ammonia bubbles up through the brine and is absorbed by it. In the second, carbon dioxide bubbles up through the ammoniated brine, and sodium bicarbonate (baking soda) precipitates out of the solution. Note that, in a basic solution, NaHCO3 is less water-soluble than sodium chloride. The ammonia (NH3) buffers the solution at a basic (high) pH; without the ammonia, a hydrochloric acid byproduct would render the solution acidic and arrest the precipitation. Here, the NH3 along with the ammoniacal brine acts as a mother liquor. The ammonia needed as a "catalyst" for reaction (I) is reclaimed in a later step, and relatively little ammonia is consumed. The carbon dioxide required for reaction (I) is produced by heating ("calcination") of the limestone at 950–1100 °C, and by calcination of the sodium bicarbonate (see below). The calcium carbonate (CaCO3) in the limestone is partially converted to quicklime (calcium oxide, CaO) and carbon dioxide: CaCO3 → CO2 + CaO (II) The sodium bicarbonate (NaHCO3) that precipitates out in reaction (I) is filtered out from the hot ammonium chloride (NH4Cl) solution, and the solution is then reacted with the quicklime (calcium oxide, CaO) left over from heating the limestone in step (II): 2 NH4Cl + CaO → 2 NH3 + CaCl2 + H2O (III) CaO makes a strongly basic solution. The ammonia from reaction (III) is recycled back to the initial brine solution of reaction (I).
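As a rough illustration of the overall reaction 2 NaCl + CaCO3 → Na2CO3 + CaCl2, the following idealized Python sketch (not from the source; it ignores process losses and feedstock purity, so real plants consume more) estimates the raw materials consumed per tonne of soda ash from the molar masses alone:

```python
# Idealized raw-material estimate from the overall Solvay reaction
#   2 NaCl + CaCO3 -> Na2CO3 + CaCl2
M_NACL, M_CACO3, M_NA2CO3, M_CACL2 = 58.44, 100.09, 105.99, 110.98  # g/mol

def feed_per_tonne_soda_ash() -> dict:
    """Tonnes of salt and limestone per tonne of Na2CO3, plus the CaCl2
    byproduct, from stoichiometry alone."""
    moles = 1.0 / M_NA2CO3  # tonne-moles of Na2CO3 in one tonne of product
    return {
        "NaCl": 2 * moles * M_NACL,          # ~1.10 t
        "CaCO3": moles * M_CACO3,            # ~0.94 t
        "CaCl2 byproduct": moles * M_CACL2,  # ~1.05 t
    }

print(feed_per_tonne_soda_ash())
```

The calculation makes the economics of the process easy to see: roughly a tonne each of cheap salt and limestone yields a tonne of soda ash, with about a tonne of calcium chloride to dispose of or sell.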
The sodium bicarbonate (NaHCO3) precipitate from reaction (I) is then converted to the final product, sodium carbonate (washing soda, Na2CO3), by calcination at 160–230 °C, producing water and carbon dioxide as byproducts: 2 NaHCO3 → Na2CO3 + H2O + CO2 (IV) The carbon dioxide from step (IV) is recovered for re-use in step (I). When properly designed and operated, a Solvay plant can reclaim almost all its ammonia, and consumes only small amounts of additional ammonia to make up for losses. The only major inputs to the Solvay process are salt, limestone and thermal energy, and its only major byproduct is calcium chloride, which is sometimes sold as road salt. After the invention of the Haber process and other new ammonia-producing processes in the 1910s and 1920s, the price of ammonia dropped, and there was less need to reclaim it. In the modified Solvay process developed by the Chinese chemist Hou Debang in the 1930s, the first few steps are the same as in the Solvay process, but the CaCl2 byproduct is supplanted by ammonium chloride (NH4Cl). Instead of treating the remaining solution with lime, carbon dioxide and ammonia are pumped into the solution, and sodium chloride is then added until the solution saturates at 40 °C. Next, the solution is cooled to 10 °C. Ammonium chloride precipitates and is removed by filtration, and the solution is recycled to produce more sodium carbonate. Hou's process eliminates the production of calcium chloride. The byproduct ammonium chloride can be refined, used as a fertilizer, and may have greater commercial value than CaCl2, thus reducing the extent of waste beds. Additional details of the industrial implementation of this process are available in the report prepared for the European Soda Ash Producer's Association. Byproducts and wastes The principal byproduct of the Solvay process is calcium chloride (CaCl2) in aqueous solution. The process has other wastes and byproducts as well. Not all of the limestone that is calcined is converted to quicklime and carbon dioxide (in reaction II); the residual calcium carbonate and other components of the limestone become wastes. In addition, the salt brine used by the process is usually purified to remove magnesium and calcium ions, typically to form carbonates (MgCO3, CaCO3); otherwise, these impurities would lead to scale in the various reaction vessels and towers. These carbonates are additional waste products. In inland plants, such as that in Solvay, New York, the byproducts have been deposited in "waste beds"; the weight of material deposited in these waste beds exceeded that of the soda ash produced by about 50%. These waste beds have led to water pollution, principally by calcium and chloride. The waste beds in Solvay, New York substantially increased the salinity in nearby Onondaga Lake, which used to be among the most polluted lakes in the U.S. and is a Superfund pollution site. As such waste beds age, they begin to support plant communities, which have been the subject of several scientific studies. At seaside locations, such as those at Saurashtra, Gujarat, India, the CaCl2 solution may be discharged directly into the sea, apparently without substantial environmental harm (although small amounts of heavy metals in it may be a problem); the major concern there is that the discharge location falls within the Marine National Park of the Gulf of Kutch, which serves as habitat for coral reefs, seagrass, and seaweed communities.
At Osborne, South Australia, a settling pond is now used to remove 99% of the CaCl2, as the former discharge was silting up the shipping channel. At Rosignano Solvay in Tuscany, Italy, the limestone waste produced by the Solvay factory has changed the landscape, producing the "Spiagge Bianche" ("White Beaches"). A report published in 1999 by the United Nations Environment Programme (UNEP) listed Spiagge Bianche among the priority pollution hot spots in the coastal areas of the Mediterranean Sea. Carbon sequestration and the Solvay process Variations of the Solvay process have been proposed for carbon sequestration. One idea is to react carbon dioxide, produced perhaps by the combustion of coal, to form solid carbonates (such as sodium bicarbonate) that could be permanently stored, thus avoiding carbon dioxide emission into the atmosphere. The Solvay process could be modified to give the overall reaction: 2 NaCl + CaCO3 + CO2 + H2O → 2 NaHCO3 + CaCl2 Variations of the Solvay process have thus been proposed to convert carbon dioxide emissions into sodium carbonates, but carbon sequestration by calcium or magnesium carbonates appears more promising. However, the amount of carbon dioxide that could be captured this way, compared to the total amount of carbon dioxide emitted by humanity, is very low. This is primarily because capturing carbon dioxide is only feasible at controlled and concentrated emission sources such as coal-fired power plants, not at non-concentrated, small-scale sources such as small fires, vehicle exhaust, and human respiration. Moreover, variations of the Solvay process would most probably add an additional energy-consuming step, which would increase carbon dioxide emissions unless carbon-neutral energy sources such as hydropower, nuclear energy, wind or solar power were used. See also Chloralkali process Hou's process, a production method similar to the Solvay process in which ammonia is not recycled References Further reading The minimum energy required to calcine limestone is about per tonne. External links European Soda Ash Producer's Association (ESAPA) Timeline of US plant at Solvay, New York Salt and the Chemical Revolution Process flow diagram of Solvay process Ammonia Chemical processes Belgian inventions
Solvay process
Chemistry
2,816
39,945,265
https://en.wikipedia.org/wiki/Production%20flow%20analysis
In operations management and industrial engineering, production flow analysis refers to methods which share the following characteristics: classification of machines; collection of technological routing (cycle) information; and generation of a binary product–machine matrix (1 if a given product requires processing on a given machine, 0 otherwise). Methods differ in how they group machines together with products. These methods play an important role in designing manufacturing cells. Rank order clustering Given a binary product–machine n-by-m matrix $B = [b_{ip}]$, rank order clustering is an algorithm characterized by the following steps (a sketch in code is given below): 1. For each row i compute the number $w_i = \sum_{p=1}^{m} 2^{m-p} b_{ip}$ (that is, read row i as a binary number). 2. Order the rows by descending $w_i$. 3. For each column p compute the number $w_p = \sum_{i=1}^{n} 2^{n-i} b_{ip}$. 4. Order the columns by descending $w_p$. 5. If no reordering happened in steps 2 and 4, go to step 6; otherwise go to step 1. 6. Stop. Similarity coefficients Given a binary product–machine n-by-m matrix, the algorithm proceeds by the following steps: 1. Compute the similarity coefficient $s_{ij} = a_{ij} / (a_{ij} + u_{ij})$ for all pairs of machines i and j, with $a_{ij}$ being the number of products that need to be processed on both machine i and machine j, and $u_{ij}$ the number of products which visit machine i but not j and vice versa. 2. Group together in cell k the pair (i*, j*) with the highest similarity coefficient, with k being the algorithm iteration index. 3. Remove row i* and column j* from the binary matrix and substitute a single row and column representing the new cell k. 4. Raise the iteration index k by one and go to step 1. Unless this procedure is stopped, the algorithm will eventually put all machines in one single group. References Industrial engineering
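The following Python sketch (an illustration of the rank order clustering steps above, not code from the source; NumPy is assumed, and it is suitable only for matrices small enough that the binary weights fit in an integer) repeatedly sorts rows and columns by their binary weights until the ordering stabilizes:

```python
import numpy as np

def rank_order_clustering(matrix: np.ndarray) -> np.ndarray:
    """Reorder rows and columns of a binary product-machine matrix by
    descending binary weight until stable, so that machine-product
    families emerge as diagonal blocks."""
    m = matrix.copy()
    while True:
        # Steps 1-2: weight each row by reading it as a binary number
        row_keys = m.dot(2 ** np.arange(m.shape[1])[::-1])
        row_order = np.argsort(-row_keys, kind="stable")
        m = m[row_order]
        # Steps 3-4: the same for columns
        col_keys = (2 ** np.arange(m.shape[0])[::-1]).dot(m)
        col_order = np.argsort(-col_keys, kind="stable")
        m = m[:, col_order]
        # Step 5: stop when neither ordering changed in this pass
        if (row_order == np.arange(m.shape[0])).all() and \
           (col_order == np.arange(m.shape[1])).all():
            return m

# Example: two product families emerge as diagonal blocks
A = np.array([[1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1]])
print(rank_order_clustering(A))
```

On the example matrix the output groups the first and third products with the first and third machines (and likewise for the second family), which is exactly the cell structure the method is meant to reveal.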
Production flow analysis
Engineering
302
47,059,127
https://en.wikipedia.org/wiki/Penicillium%20parviverrucosum
Penicillium parviverrucosum is a species of fungus in the genus Penicillium. References parviverrucosum Fungi described in 2011 Fungus species
Penicillium parviverrucosum
Biology
38
14,341,044
https://en.wikipedia.org/wiki/Preselector
A preselector is an electronic device that connects between a radio antenna and a radio receiver. The preselector is a band-pass filter that blocks troublesome out-of-tune frequencies from passing from the antenna into the radio receiver (or preamplifier) that would otherwise be directly connected to the antenna. Purpose A preselector improves the performance of nearly any receiver, but is especially helpful to receivers with broadband front-ends that are prone to overload, such as scanners, wideband software-defined radio receivers, and ordinary consumer-market shortwave and AM broadcast receivers – particularly receivers operating below 10–20 MHz, where static is pervasive. Sometimes faint signals that occupy a very narrow frequency span (such as radiotelegraph or 'CW') can be heard more clearly if the receiving bandwidth is made narrower than the narrowest that a general-purpose receiver may be able to tune; likewise, signals which individually use a fairly wide span of frequencies, such as broadcast AM, can be made less noisy by narrowing the bandwidth of the signal, even though making the span of received frequencies narrower than was transmitted will sacrifice some audio fidelity. A good preselector often can reduce a radio's receive bandwidth to a narrower frequency span than many general-purpose radios can manage on their own. A preselector typically is tuned to have a narrow bandwidth, centered on the receiver's operating frequency. The preselector passes the signal on its tuned frequency through unchanged (or only slightly diminished), but it reduces or removes off-frequency signals, cutting down or eliminating unwanted interference. Extra filtering can be useful because the first input stage ("front end") of a receiver contains at least one RF amplifier, which has power limits ("dynamic range"). Most radios' front ends amplify all radio frequencies delivered to the antenna connection, so off-frequency signals constitute a load on the RF amplifier, wasting part of its dynamic range on unused and unwanted signals. "Limited dynamic range" means that the amplifier circuits have a limit to the total amount of incoming RF signal they can amplify without overloading; symptoms of overload are nonlinearity ("distortion") and ultimately clipping ("buzz"). When the front end overloads, the performance of the receiver is severely reduced, and in extreme cases overload can damage the receiver. In situations with noisy and crowded bands, or where there is loud interference from nearby high-power stations, the dynamic range of the receiver can quickly be exceeded. Extra filtering by the preselector limits the frequency range and power demands applied to all later stages of the receiver, only loading it with signals within the preselected band. Preselect filter bank Like conventional radios, spectrum analyzers, heavy-duty network analyzers, and other RF measuring equipment can incorporate switchable banks of preselector circuits to reject out-of-band signals that could result in spurious signals at the frequencies being analyzed. Automatically switched filter banks can likewise be incorporated into various broadband, general-purpose receivers.
Multifunction preselectors A preselector may be engineered with extra features, so that in addition to attenuating interference from unwanted frequencies it can provide additional services that may be helpful for a receiver: It can limit input signal voltage to protect a sensitive receiver from damage caused by static discharge, nearby voltage spikes, and overload from nearby transmitters' signals. It can provide a DC path to ground, to drain off noisy static charge that tends to collect on the antenna. It can also incorporate a small radio frequency amplifier stage to boost the filtered signal. None of these extra conveniences is necessary for the function of preselection, and in particular, in the typically noisy frequency bands where a preselector is needed, an amplifier in the preselector has no useful function. On the other hand, when an antenna preamplifier (preamp) is actually needed, it can be made "tunable" by incorporating a front-end preselector circuit to improve its performance. The integrated device is both a preamplifier and a preselector, and either name is correct. This ambiguity sometimes leads to confusion, conflating preselection with amplification. Ordinary preselectors (that are just preselectors) contain no amplifier: they are entirely passive devices. A standard, ordinary preselector sometimes has the word "passive" prefixed – hence "passive preselector" means "ordinary preselector". The adjective is redundant, but it emphasizes to those familiar only with tunable preamplifiers that the preselector is normal, has no internal amplifier, and requires no power supply. Since all ordinary preselectors are "passive", adding the redundant word is pedantic; in the noisy longwave, mediumwave, and shortwave bands where preselectors are typically used, they function with "modern" (post-1950) receivers with no noticeable loss of signal strength. Bandwidth vs. signal strength trade-off With all preselectors there is some very small loss at the tuned frequency; usually, most of the loss is in the inductor (the tuning coil). Turning up the inductance gives the preselector a narrower bandwidth (a higher Q, or greater selectivity) and slightly raises the loss, which nonetheless remains very small. Most preselectors have separate settings for at least one inductor and one capacitor. With at least two adjustments available to tune to just one frequency, there are often a variety of possible settings that will tune the preselector to frequencies in its middle range. For the narrowest bandwidth (highest Q), the preselector is tuned using the highest inductance and lowest capacitance for the desired frequency, but this produces the greatest loss. It also requires retuning the preselector more often while searching for faint signals, to keep the preselector's pass band overlapping the radio's receiving frequency. For the lowest loss and widest bandwidth (the lowest Q, or least selectivity), the preselector is tuned using the lowest inductance and highest capacitance for the desired frequency. The wider bandwidth allows more interference through from nearby frequencies, but reduces the need to retune the preselector while tuning the receiver, since any one low-inductance setting will pass a broader span of nearby frequencies.
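The trade-off follows from the textbook formulas for a tuned circuit: the resonant frequency is $f_0 = 1/(2\pi\sqrt{LC})$ and, for a series circuit with loss resistance R, $Q = \sqrt{L/C}/R$, with 3 dB bandwidth $f_0/Q$. The short Python sketch below (illustrative component values, not taken from this article) shows two L-C pairs that tune the same frequency with very different selectivity:

```python
import math

def lc_tuning(l_henry: float, c_farad: float, r_ohm: float):
    """Resonant frequency, Q, and 3 dB bandwidth of a series RLC circuit:
    f0 = 1/(2*pi*sqrt(LC)), Q = sqrt(L/C)/R, BW = f0/Q."""
    f0 = 1.0 / (2.0 * math.pi * math.sqrt(l_henry * c_farad))
    q = math.sqrt(l_henry / c_farad) / r_ohm
    return f0, q, f0 / q

# Two settings that tune the same ~1.59 MHz but trade bandwidth for loss:
# high L / low C -> higher Q (narrower); low L / high C -> lower Q (wider)
for L, C in [(100e-6, 100e-12), (10e-6, 1000e-12)]:
    f0, q, bw = lc_tuning(L, C, r_ohm=10.0)  # 10 ohm loss resistance, assumed
    print(f"L={L:.0e} H, C={C:.0e} F: "
          f"f0={f0/1e6:.2f} MHz, Q={q:.0f}, BW={bw/1e3:.1f} kHz")
```

With the assumed 10 ohm coil resistance, the high-inductance setting yields Q of about 100 (roughly 16 kHz bandwidth) while the low-inductance setting yields Q of about 10 (roughly 160 kHz), matching the behavior described above.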
Different from an antenna tuner Although a preselector is placed in between the radio and the antenna, in the same electrical location as a feedline matching unit, it serves a different purpose: a transmatch or "antenna tuner" connects two transmission lines with different impedances and only incidentally blocks out-of-tune frequencies (if it blocks any at all). A transmatch matches transmitter impedance to feedline impedance and phase, so that signal power from the radio transmitter transfers smoothly into the antenna's feed cable; a properly adjusted transmatch prevents transmitted power from being reflected back into the transmitter ("backlash current"). Some antenna tuner circuits can both impedance-match and preselect; for example, the Series Parallel Capacitor (SPC) tuner and many 'tuned-transformer'-type matching circuits used in balanced line tuners (BLTs) can be adjusted to also function as band-pass filters. See also Antenna tuner Band-pass filter Footnotes References External links Radio electronics Receiver (radio) Wireless tuning and filtering
Preselector
Engineering
1,595
15,489,126
https://en.wikipedia.org/wiki/Nesting%20instinct
Nesting behavior is an instinct in animals during reproduction where they prepare a place with optimal conditions to nurture their offspring. The nesting place provides protection against predators and competitors that mean to exploit or kill offspring. It also provides protection against the physical environment. Nest building is important in family structure and is therefore influenced by different mating behaviours and social settings. It is found in a variety of animals such as birds, fish, mammals, amphibians, and reptiles. In mammals Female dogs may show signs of nesting behaviour about one week before they are due that include pacing and building a nest with items from around the house such as blankets, clothing, and stuffed animals. (They also sometimes do this in cases of false pregnancy, or pseudocyesis). Domestic cats often make nests by bringing straw, cloth scraps, and other soft materials to a selected nook or box; they particularly are attracted to haylofts as nest sites. Commercial whelping and queening boxes are available; however, children's wading pools (dogs) and plastic dishpans (cats) work just as well. In birds it is known as "going broody", and is characterized by the insistence to stay on the nest as much as possible, and by cessation of laying new eggs. Marsupials do not exhibit a nesting instinct per se, because the mother's pouch fulfills the function of housing the newborns. Nest building is performed in order to provide sufficient shelter and comfort to the arriving offspring. Threats, such as predators, that decrease the chance of survival will increase care of offspring. Pigs Under natural conditions, sows will leave the herd and travel up to a day prior to parturition in order to find the appropriate spot for a nest. The sows will use their forelimbs and snouts in order to create excavated depressions within the ground and to gather and transport nesting materials. Although the nests vary in radius depending on the age of the sow, the nests are generally a round to oval shape and are usually located near trees, uprooted stumps or logs. The shelter provided by the nest built by sows is of utmost importance to thermoregulation. For the first two weeks of a piglet's life its physiological thermoregulation is still developing, and due to a lack of brown fat tissue, piglets require an increased ambient temperature. Without the protection of the nest, the piglets will be subjected to climatic influences causing their internal temperature to drop to life-threatening levels. Farrowing crates have been widely implemented in modern pig husbandry in order to reduce piglet mortality via crushing. However, this type of housing disturbs the sow's natural instinct to nest-build due to lack of space. Thus, the sows must farrow without performing this natural pre-partum activity, which results in high stress for the animal. Rodents In rodents and lagomorphs, the nesting instinct is typically characterized by the urge to seek the lowest sheltered spot available; this is where these mammals give birth. Rats, for example, prefer to burrow amongst dense areas of vegetation or around human settlements which they come into contact with often. Some rodent species often create burrows that develop microclimates. This is another way that the nesting instinct aids in thermoregulation. Alzheimer's disease in rats has been observed to impair nesting ability, especially in females.
These impairments become exaggerated with age and progression of disease. Among burrowing animals in particular, such as groundhogs and prairie dogs, nesting material is used throughout the burrows for purposes such as insulation, bedding, litter chambers, transportation, and comfort. Marmot species such as groundhogs and alpine marmots line their burrows with thick grasses in advance of winter; this maintains an insulated, thermoregulated, comfortable environment for the marmots as they undergo hibernation. Hormones and nesting behavior Maternal nest-building is regulated by the hormonal actions of estradiol, progesterone, and prolactin. Given the importance of shelter to offspring survival and reproductive success, it is unsurprising that a set of common hormonal signals has evolved. However, the exact timing and features of nest building vary among species, depending on endocrine and external factors. The initial drive to perform this behavior is stimulated internally via hormones, specifically a rise in prolactin levels. This increase is driven by an increase in prostaglandin and a decrease in progesterone. The second phase of nest building, also known as the material-oriented phase, is driven by external stimuli: suitable nest-building materials must be present. Both internal and external stimuli must exist in conjunction with one another for nest building to commence. The cessation of nest building is correlated with a rise in oxytocin, the hormone responsible for the contraction of the uterus. Shortly after this, parturition will commence. In rabbits, nest building occurs towards the last third of pregnancy. The mother digs and builds a nest of straw and grass, which she lines with hair plucked from her body. This sequential motor pattern is produced by changes in estradiol, progesterone, and prolactin levels. Six to eight days pre-partum, high levels of estradiol and progesterone lead to a peak in digging behavior. Both estradiol and progesterone are produced and released by the ovaries. One to three days pre-partum, straw-carrying behavior is expressed as a function of decreasing progesterone levels, maintenance of high estradiol levels, and increasing prolactin levels. This release of prolactin (from the anterior pituitary) is likely caused by the increase in the estrogen-to-progesterone ratio. One day pre-partum to four days post-partum, hair loosening and plucking occur as a result of low progesterone and high prolactin levels, together with a decrease in testosterone. In house mice and golden hamsters, nest-building takes place earlier, at the start or middle of pregnancy. For these species, nest-building coincides with high levels of estrogen and progestin. External factors also interact with hormones to influence maternal nest-building behavior. Pregnant rabbits that have been shaved will line their straw nest with available alternatives, such as male rabbit hair or synthetic hair. If given both straw and hair, mothers prefer straw during the straw-carrying period, and prefer hair during the nest-lining period. If given hair as the only material, shaved mothers collect the hair even when it is the straw-carrying period. In birds Research on avian paternal behavior shows that nest-building is triggered by different stimuli in the two sexes. Unlike the case for females, male nest-building among ring doves depends on the behavior of the prospective mate rather than on hormonal mechanisms.
Males that are castrated and injected daily with testosterone either court females or build nests, depending purely on the behavior of the female. Hence, the male avian transition from courtship to nest-building is prompted by social cues and not by changes in hormone levels. In fish In the sand goby (Pomatoschistus minutus), the males build the nests. When males exhibit increased paternal care of eggs, they build nests with smaller entrances in comparison to males who provide less parental care. This helps prevent predators from entering the nest and consuming the offspring or developing eggs. In insects Nesting behavior is also present in many invertebrates. The best-known example of nesting behavior in insects is that of the domestic honey bee. Most bees build nests. Solitary bees, like honey bees, make nests; however, solitary bees make individual nests for their larvae and do not always live in colonies. Solitary bees will burrow into the ground, dead wood, and plants. See also Genetic memory Broodiness Parental brain References Human pregnancy Zoology
Nesting instinct
Biology
1,656
20,351,675
https://en.wikipedia.org/wiki/MDynaMix
Molecular Dynamics of Mixtures (MDynaMix) is a computer software package for general purpose molecular dynamics to simulate mixtures of molecules, interacting by AMBER- and CHARMM-like force fields in periodic boundary conditions. Algorithms are included for NVE, NVT, NPT, anisotropic NPT ensembles, and Ewald summation to treat electrostatic interactions. The code was written in a mix of Fortran 77 and 90 (with Message Passing Interface (MPI) for parallel execution). The package runs on Unix and Unix-like (Linux) workstations, clusters of workstations, and on Windows in sequential mode. MDynaMix is developed at the Division of Physical Chemistry, Department of Materials and Environmental Chemistry, Stockholm University, Sweden. It is released as open-source software under a GNU General Public License (GPL). Programs md is the main MDynaMix block makemol is a utility which provides help to create files describing molecular structure and the force field tranal is a suite of utilities to analyze trajectories mdee is a version of the program which implements the expanded ensemble method to compute free energy and chemical potential (not parallelized) mge provides a graphical user interface to construct molecular models and monitor the dynamics process Field of application Thermodynamic properties of liquids Nucleic acid-ion interactions Modeling of lipid bilayers Polyelectrolytes Ionic liquids X-ray spectra of liquid water Force field development See also References External links Ascalaph, graphical shell for MDynaMix (GNU GPL) Molecular dynamics software Free science software Free software programmed in C++ Free software programmed in Fortran
MDynaMix
Chemistry
344
13,722,979
https://en.wikipedia.org/wiki/Melco
Melco Holdings Inc. is a family business founded by Makoto Maki in 1975 and is located in Japan. The company's most recognizable brand is Buffalo Inc. Buffalo Inc., currently one of the 16 subsidiaries of Melco Holdings Inc., was initially founded as an audio equipment manufacturer; the company entered the computer peripheral market in 1981 with an EEPROM writer. The name BUFFALO is derived from one of the company's first products, a printer buffer, and from the name of the American bison (buffalo). Name Melco's name stands for Maki Engineering Laboratory COmpany. History Melco Holdings Inc. was incorporated in 1986; currently its subsidiaries are involved in the manufacture of random-access memory products, Flash memory products, USB products, CD-ROM/DVD-RW drives, hard disk drives, local area network products, printer buffers, liquid-crystal displays, Microsoft Windows accelerators, personal computer components and CPU accelerators. A subsidiary of Melco provides corporate services in Japan like Internet set-up, computer terminal installation/set-up, computer education and computer maintenance. The company has also started selling solid-state drives in Japan. Buffalo Technology (USA) is the North American subsidiary of the group and is based in Austin, Texas. The company has been first to market with a number of new technologies. A Timeline of Firsts January 1999 – First Wireless Router December 2002 – First Draft-11g Wi-Fi Products Shipped November 2003 – First NAS Appliance January 2005 – First RAID NAS Appliance April 2006 – First Draft-11n Wi-Fi Products Shipped November 2009 – First USB 3.0 Storage January 2012 – First Draft-11ac Wi-Fi Solution Demonstrated at CES May 2012 – First Draft-11ac Wi-Fi products shipped June 2012 – First Thunderbolt + USB 3.0 Hybrid Device May 2013 – First DDR Memory Buffer DAS Drive Corporate Structure Products Nintendo Wi-Fi USB Connector Buffalo network-attached storage series External hard drive HD DVD Computer Drive AirStation (Residential gateway) AOSS Memory - SO-DIMM and DIMM USB flash drive UPnP Media Rendering Hardware Litigation In late 2006, the Australian Commonwealth Scientific and Industrial Research Organisation (CSIRO) won a lawsuit against Buffalo Inc. under which it would receive a royalty for every WLAN product worldwide. The lawsuit's basis was that CSIRO was granted US patent 5487069 in 1996, which covers elements of 802.11a/g wireless technology that had become an industry standard. In June 2007, the federal court in Texas granted an injunction to prevent any more wireless products from shipping until a license agreement had been reached. On September 19, 2008, the Federal Circuit ruled in Buffalo's favor and remanded the case to the district court, ruling that the district court's summary judgment was insufficient on the merits of obviousness of CSIRO's patent. The case will therefore be tried again before the district court. In this connection, Buffalo is hopeful that it will shortly be permitted to once again sell IEEE 802.11a- and 802.11g-compliant products in the United States. See also AirStation Buffalo network-attached storage series References External links Buffalo's web page Computer companies established in 1975 Computer hardware companies Computer memory companies Computer peripheral companies Electronics companies of Japan Computer companies of Japan Defunct defense companies of Japan Manufacturing companies based in Nagoya Japanese companies established in 1975 Companies listed on the Tokyo Stock Exchange Japanese brands
Melco
Technology
700
11,693,892
https://en.wikipedia.org/wiki/Uromyces%20junci
Uromyces junci is a fungus species and plant pathogen which causes rust on various plants, including rushes (Juncus species). It appears as a whitish peridium and a pale yellow mass of spores. It can be found on Pulicaria dysenterica, Juncus articulatus, Juncus bufonius, Juncus effusus, Juncus inflexus and Juncus subnodulosus. It is mainly found in Europe, North America, New Zealand and parts of South America. In 1994, it was found in Japan. References junci Fungal plant pathogens and diseases Fungi described in 1854 Fungus species
Uromyces junci
Biology
133
19,074,735
https://en.wikipedia.org/wiki/Wireless%20Home%20Digital%20Interface
Wireless Home Digital Interface (WHDI) is a consumer electronics specification for wireless HDTV connectivity throughout the home. WHDI enables delivery of uncompressed high-definition digital video over a wireless radio channel, connecting any video source (computers, mobile phones, Blu-ray players etc.) to any compatible display device. WHDI is supported and driven by Amimon, Hitachi, LG Electronics, Motorola, Samsung, Sharp Corporation and Sony. Versions The WHDI 1.0 specification was finalized in December 2009. Sharp Corporation will be one of the first companies to roll out wireless HDTVs. At CES 2010, LG Electronics announced a WHDI wireless HDTV product line. In June 2010, WHDI announced an update to WHDI 1.0 which adds support for stereoscopic 3D, with the WHDI 2.0 specification to be completed in Q2 2011. The WHDI 3D update, due in Q4 2010, will add support for the 3D formats defined in the HDMI 1.4a specification. WHDI 2.0 will increase available bandwidth even further, allowing additional 3D formats such as "dual 1080p60", and support for 4K × 2K resolutions. Technology WHDI 1.0 provides a high-quality, uncompressed wireless link which supports data rates of up to 3 Gbit/s (allowing 1920×1080 @ 60 Hz @ 24-bit) in a 40 MHz channel, and data rates of up to 1.5 Gbit/s (allowing 1280×720 @ 60 Hz @ 24-bit or 1920×1080 @ 30 Hz @ 24-bit) in a single 20 MHz channel of the 5 GHz unlicensed band, conforming to FCC and worldwide 5 GHz spectrum regulations; a quick arithmetic check of these raw video rates appears at the end of this article. Range is beyond , through walls, and latency is less than one millisecond. History 2005 December AMIMON releases news of a device capable of "uncompressed high definition video streaming wirelessly." 2007 January AMIMON showcases its WHDI (wireless high definition interface) at CES. Sanyo demonstrates the "world's first wireless HD projector," using AMIMON's technology, which allows for the same quality as a DVI / HDMI cable. August AMIMON begins shipping its WHDI chips to manufacturers. December WHDI becomes High-Bandwidth Digital Content Protection (HDCP) Certified, garnering the necessary approval for any device to deliver HD video to another device, a requirement of Hollywood movie studios. It is considered an Approved Retransmission Technology (ART). The approval allows WHDI to begin selling devices that will carry HD content to a broader market. 2008 April Sharp partners with AMIMON to offer Sharp's X-Series LCD HDTVs with a WHDI wireless link, the first CE product to use WHDI technology. July AMIMON collaborates with Motorola, Samsung, Sony and Sharp in order to form 'a special interest group to develop a comprehensive new industry standard for multi-room audio, video and control connectivity'. August Mitsubishi announces that it will offer television sets in Japan capable of communicating with WHDI-enabled equipment. September JVC plans to produce a wireless HDMI box to launch in 2009. December AMIMON ships its 100,000th wireless high-definition chipset. ABI Research reports wireless HDTV vendors are putting money into products though few are available to consumers in North America. Stryker Endoscopy's WiSe HDTV will use WHDI and be the first HD wireless display specifically for the operating room, the first use of WHDI technology in the professional market. 2009 April AMIMON introduces its second-generation chipset operating in the 5 GHz unlicensed band with the AMN 2120 transmitter and AMN 2220 receiver. The chipset is capable of full uncompressed 1080p/60 Hz HD and supports HDCP 2.0. The unit also becomes available to manufacturers.
May Gefen begins shipping its WHDI towers, targeting the custom installation market. The towers use AMIMON's 5 GHz technology and can support a maximum of five remote receivers on the same video stream. They support 1080p with Dolby 5.1 surround audio. September Philips launches Wireless HDTV Link with an HDMI transmitter and receiver and 1080p/30 HD video transmission. Sony announces it will release the ZX5 LCD television in November. It is capable of receiving 1080p wirelessly. 2010 January LG announces a partnership with AMIMON and prepares shipment of a wireless HDTV product line with second-generation WHDI technology embedded. July WHDI becomes 3D video capable. September ASUS joins the WHDI Consortium and aligns with AMIMON to introduce the WiCast EW2000. The WiCast connects a PC via USB to an HDTV via HDMI. October Galaxy announces the GeForce GTX 460 WHDI Edition video card. The card is intended for PC gamers. AMIMON announces the WHDI stick reference design, a noticeably smaller device than those previously released. November HP announces the WHDI certified HP Wireless TV Connect 2011 January WHDI comes to TVs, PCs, tablets and a projector at the 2011 Consumer Electronics Show (CES). KFA2 (Galaxy) releases the first wireless graphics card, GeForce GTX460 WHDI 1024MB PCIe 2.0. The card uses five aerials to stream 1080p video from a PC to a WHDI-capable television. September AMIMON showcases the HD camera link Falcon-HD, a transmitter and receiver accessory for professional HD cameras and monitors at the International Broadcasting Convention (IBC) in Amsterdam. 2012 January AMIMON teams up with Lenovo to integrate WHDI technology in the IdeaPad S2 7, removing the need for an external transmitter. April AMIMON launches Falcon, a wireless transmitter/receiver system kit for the professional camera and monitor market, at the National Association of Broadcasters (NAB) Show in Las Vegas. June AMIMON announces the AMIMON Pro Line, using WHDI technology to expand uses from the CE market to the Professional market. Elmo introduces MO-1w Visual Presenter, the first use of WHDI technology in the presentation industry. Supporters Promoters AMIMON Hitachi LG Electronics Motorola Samsung Sharp Corporation Sony Contributors D-link Haier Maxim Mitsubishi Electric Rohde & Schwarz Toshiba Adopters Askey ASUS ATEN International Co., Ltd. Belkin Dfine Technology Domo Technologies Elmo Galaxy Microsystems Ltd. Gemtek Hefei Radio Hosiden HP Hunan space satellite Communication co.ltd. IOGear Jupiter (MTI) LiteOn Technology Corp. 
Murata Manufacturing Olympus Corporation Quanta Microsystems - QMI Seamon Science International SRI Radio Systems Syvio Image Limited TCL Corporation TDK Telecommunication Metrology Center Winstars Zinwell See also Ultra-wideband Wireless USB Wireless HDMI: Intel Wireless Display (WiDi) version 3.5 to 6.0 supports Miracast; discontinued Miracast WirelessHD WiGig Wi-Fi Direct IP-based: Chromecast (proprietary media broadcast over IP: Google Cast for audio or audiovisual playback) AirPlay (proprietary, IP-based) Digital Living Network Alliance (DLNA) (IP-based) port / standard for mobile equipment: Mobile High-Definition Link - MHL SlimPort (Mobility DisplayPort), also known as MyDP External links WHDI.org, the official website of WHDI SIG Developing Wireless High-Definition Video Modems for Consumer Electronics Devices by Guy Dorman, AMIMON VE829, FHD 5x2 HDMI Wireless Extender The Main Wireless HDMI Transmission Protocols and Their Typical Products, Comparison of main wireless HDMI transmission protocols References Networking standards Wireless display technologies
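As promised in the Technology section above, here is a quick sanity check of the quoted link rates: an uncompressed video stream needs width × height × frame rate × bits-per-pixel bits per second. The figures below are raw payload rates only, ignoring audio, control and error-correction overhead.

def video_bitrate_gbps(width, height, fps, bits_per_pixel=24):
    # Raw (uncompressed) video payload rate in Gbit/s.
    return width * height * fps * bits_per_pixel / 1e9

print(video_bitrate_gbps(1920, 1080, 60))  # ~2.99 -> needs the ~3 Gbit/s, 40 MHz mode
print(video_bitrate_gbps(1280, 720, 60))   # ~1.33 -> fits the 1.5 Gbit/s, 20 MHz mode
print(video_bitrate_gbps(1920, 1080, 30))  # ~1.49 -> also fits the 20 MHz mode

The three results line up with the WHDI 1.0 modes described above: 1080p60 at 24-bit color only just fits within 3 Gbit/s, while 720p60 and 1080p30 fit within 1.5 Gbit/s.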
Wireless Home Digital Interface
Technology,Engineering
1,570
12,083,818
https://en.wikipedia.org/wiki/Filling%20area%20conjecture
In differential geometry, Mikhail Gromov's filling area conjecture asserts that the hemisphere has minimum area among the orientable surfaces that fill a closed curve of given length without introducing shortcuts between its points. Definitions and statement of the conjecture Every smooth surface M or curve in Euclidean space is a metric space, in which the (intrinsic) distance dM(x, y) between two points x, y of M is defined as the infimum of the lengths of the curves that go from x to y along M. For example, on a closed curve C of length 2L, for each point x of the curve there is a unique other point of the curve (called the antipodal of x) at distance L from x. A compact surface M fills a closed curve C if its border (also called boundary, denoted ∂M) is the curve C. The filling M is said to be isometric if for any two points x, y of the boundary curve C, the distance dM(x, y) between them along M is the same as (not less than) the distance dC(x, y) along the boundary. In other words, to fill a curve isometrically is to fill it without introducing shortcuts. Question: how small can the area of a surface be, if it isometrically fills its boundary curve of a given length? For example, in three-dimensional Euclidean space, the circle C (of length 2π) is filled by the flat disk, which is not an isometric filling, because any straight chord along it is a shortcut. In contrast, the hemisphere is an isometric filling of the same circle C, and it has twice the area of the flat disk (these two areas are worked out explicitly near the end of this article). Is this the minimum possible area? The surface can be imagined as made of a flexible but non-stretchable material, that allows it to be moved around and bent in Euclidean space. None of these transformations modifies the area of the surface or the length of the curves drawn on it, which are the magnitudes relevant to the problem. The surface can be removed from Euclidean space altogether, obtaining a Riemannian surface, which is an abstract smooth surface with a Riemannian metric that encodes the lengths and area. Reciprocally, according to the Nash-Kuiper theorem, any Riemannian surface with boundary can be embedded in Euclidean space preserving the lengths and area specified by the Riemannian metric. Thus the filling problem can be stated equivalently as a question about Riemannian surfaces, that are not placed in Euclidean space in any particular way. Conjecture (Gromov's filling area conjecture, 1983): The hemisphere has minimum area among the orientable compact Riemannian surfaces that isometrically fill their boundary curve, of given length. Gromov's proof for the case of Riemannian disks In the same paper where Gromov stated the conjecture, he proved that the hemisphere has least area among the Riemannian surfaces that isometrically fill a circle of given length, and are homeomorphic to a disk. Proof: Let M be a Riemannian disk that isometrically fills its boundary of length 2L. Glue each point x of the boundary with its antipodal point x', defined as the unique point of the boundary that is at the maximum possible distance L from x. Gluing in this way we obtain a closed Riemannian surface that is homeomorphic to the real projective plane and whose systole (the length of the shortest non-contractible curve) is equal to L. (And reciprocally, if we cut open a projective plane along a shortest noncontractible loop of length L, we obtain a disk that isometrically fills its boundary of length 2L.) Thus the minimum area that the isometric filling can have is equal to the minimum area that a Riemannian projective plane of systole L can have.
But then Pu's systolic inequality asserts precisely that a Riemannian projective plane of given systole has minimum area if and only if it is round (that is, obtained from a Euclidean sphere by identifying each point with its opposite). The area of this round projective plane equals the area of the hemisphere (because each of them has half the area of the sphere). The proof of Pu's inequality relies, in turn, on the uniformization theorem. Fillings with Finsler metrics In 2001, Sergei Ivanov presented another way to prove that the hemisphere has smallest area among isometric fillings homeomorphic to a disk. His argument does not employ the uniformization theorem and is based instead on the topological fact that two curves on a disk must cross if their four endpoints are on the boundary and interlaced. Moreover, Ivanov's proof applies more generally to disks with Finsler metrics, which differ from Riemannian metrics in that they need not satisfy the Pythagorean equation at the infinitesimal level. The area of a Finsler surface can be defined in various inequivalent ways, and the one employed here is the Holmes–Thompson area, which coincides with the usual area when the metric is Riemannian. What Ivanov proved is that the hemisphere has minimum Holmes–Thompson area among Finsler disks that isometrically fill a closed curve of given length. Let M be a Finsler disk that isometrically fills its boundary of length 2L. We may assume that M is the standard round disk in R² and the Finsler metric F is smooth and strongly convex. The Holmes–Thompson area of the filling can be computed by the formula area_HT(M) = (1/π) ∫_M |B*_x| dx, where for each point x of M, the set B*_x is the dual unit ball of the norm F_x (the unit ball of the dual norm F*_x), and |B*_x| is its usual area as a subset of R². Choose a collection of n boundary points p_1, ..., p_n, listed in counterclockwise order. For each point p_i, we define on M the scalar function f_i(x) = dist(p_i, x). These functions have the following properties: Each function f_i is Lipschitz on M and therefore (by Rademacher's theorem) differentiable at almost every point x of M. If f_i is differentiable at an interior point x of M, then there is a unique shortest curve from p_i to x (parametrized with unit speed), that arrives at x with a speed v_i. The differential df_i at x has norm 1 and is the unique covector φ such that φ(v_i) = 1. At each point x where all the functions f_i are differentiable, the covectors df_i are distinct and placed in counterclockwise order on the dual unit sphere ∂B*_x. Indeed, they must be distinct because different geodesics cannot arrive at x with the same speed. Also, if three of these covectors df_i, df_j, df_k (for some i < j < k) appeared in inverted order, then two of the three shortest curves from the points p_i, p_j, p_k to x would cross each other, which is not possible. In summary, for almost every interior point x of M, the covectors df_i are vertices, listed in counterclockwise order, of a convex polygon inscribed in the dual unit ball B*_x. The area of this polygon is (1/2) Σ_i df_i ∧ df_{i+1} (where the index i + 1 is computed modulo n). Therefore we have a lower bound area_HT(M) ≥ (1/(2π)) ∫_M Σ_i df_i ∧ df_{i+1} for the area of the filling. If we define the 1-form ω = (1/2) Σ_i f_i df_{i+1}, then we can rewrite this lower bound using the Stokes formula as area_HT(M) ≥ (1/π) ∫_{∂M} ω. The boundary integral that appears here is defined in terms of the distance functions f_i restricted to the boundary, which do not depend on the isometric filling. The result of the integral therefore depends only on the placement of the points p_i on the circle of length 2L. We omitted the computation, and expressed the result in terms of the lengths L_i of each counterclockwise boundary arc from a point p_i to the following point p_{i+1}. The computation is valid only if the boundary arcs are sufficiently short, which holds once the collection of points is dense enough.
In summary, our lower bound for the area of the Finsler isometric filling converges to 2L²/π as the collection of points is densified. This implies that area_HT(M) ≥ 2L²/π, the area of the hemisphere, as we had to prove. Unlike the Riemannian case, there is a great variety of Finsler disks that isometrically fill a closed curve and have the same Holmes–Thompson area as the hemisphere. If the Hausdorff area is used instead, then the minimality of the hemisphere still holds, but the hemisphere becomes the unique minimizer. This follows from Ivanov's theorem since the Hausdorff area of a Finsler manifold is never less than the Holmes–Thompson area, and the two areas are equal if and only if the metric is Riemannian. Non-minimality of the hemisphere among rational fillings with Finsler metrics A Euclidean disk that fills a circle can be replaced, without decreasing the distances between boundary points, by a Finsler disk that fills the same circle N = 10 times (in the sense that its boundary wraps around the circle N times), but whose Holmes–Thompson area is less than N times the area of the disk. For the hemisphere, a similar replacement can be found. In other words, the filling area conjecture is false if Finsler 2-chains with rational coefficients are allowed as fillings, instead of orientable surfaces (which can be considered as 2-chains with integer coefficients). Riemannian fillings of genus one and hyperellipticity An orientable Riemannian surface of genus one that isometrically fills the circle cannot have less area than the hemisphere. The proof in this case again starts by gluing antipodal points of the boundary. The non-orientable closed surface obtained in this way has an orientable double cover of genus two, and is therefore hyperelliptic. The proof then exploits a formula by J. Hersch from integral geometry. Namely, consider the family of figure-8 loops on a football, with the self-intersection point at the equator. Hersch's formula expresses the area of a metric in the conformal class of the football as an average of the energies of the figure-8 loops from the family. An application of Hersch's formula to the hyperelliptic quotient of the Riemann surface proves the filling area conjecture in this case. Almost flat manifolds are minimal fillings of their boundary distances If a Riemannian manifold M (of any dimension) is almost flat (more precisely, M is a region of Rⁿ with a Riemannian metric that is close to the standard Euclidean metric), then M is a volume minimizer: it cannot be replaced by an orientable Riemannian manifold that fills the same boundary and has less volume without reducing the distance between some boundary points. This implies that if a piece of sphere is sufficiently small (and therefore, nearly flat), then it is a volume minimizer. If this theorem can be extended to large regions (namely, to the whole hemisphere), then the filling area conjecture is true. It has been conjectured that all simple Riemannian manifolds (those that are convex at their boundary, and where every two points are joined by a unique geodesic) are volume minimizers. The proof that each almost flat manifold M is a volume minimizer involves embedding M in L∞(∂M), and then showing that any isometric replacement of M can also be mapped into the same space L∞(∂M), and projected onto M, without increasing its volume. This implies that the replacement has no less volume than the original manifold M.
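The factor of two between the two fillings of the introductory example can be checked directly. The following lines are a small LaTeX-style worked computation, using only the boundary length 2L fixed throughout the article; they recover the bound 2L²/π that Ivanov's argument produces in the limit.

% Both fillings share the boundary circle of length 2L, which fixes the radius:
%   2\pi r = 2L  \implies  r = L/\pi.
\[
  \operatorname{Area}(\text{flat disk}) = \pi r^2 = \frac{L^2}{\pi},
  \qquad
  \operatorname{Area}(\text{hemisphere}) = 2\pi r^2 = \frac{2L^2}{\pi}.
\]
% The hemisphere therefore attains the lower bound 2L^2/\pi, twice the area of
% the (non-isometric) flat-disk filling.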
See also Filling radius Pu's inequality Systolic geometry References Conjectures Unsolved problems in geometry Riemannian geometry Differential geometry Differential geometry of surfaces Surfaces Area Systolic geometry
Filling area conjecture
Physics,Mathematics
2,233
62,105,397
https://en.wikipedia.org/wiki/Lofoten%20Declaration
The Lofoten Declaration, drafted in August 2017, is an international manifesto calling, as a climate change mitigation measure, for the end of hydrocarbon exploration and of further expansion of fossil fuel reserves. It calls for fossil fuel divestment and a phase-out of fossil fuel use, with a just transition to a low-carbon economy. A diverse group of signatories has signed the declaration, affirming demands for early leadership in these efforts from the economies that have benefited the most from fossil fuel extraction. The Declaration was named for the Lofoten archipelago, where public concern has successfully prevented offshore development of petroleum reserves. Signed by 600 organizations spanning 76 countries, the Declaration is believed to have helped influence the government of Norway to divest from investment in exploration and production. The Lofoten Declaration also helped mobilize efforts for a global treaty on a managed decline of fossil fuel production, such as the Fossil Fuel Non-Proliferation Treaty Initiative. References Climate action plans Emissions reduction Climate change policy Ethical investment Low-carbon economy Sustainable energy
Lofoten Declaration
Chemistry
202
71,001,950
https://en.wikipedia.org/wiki/Sulfoxidation
In chemistry, sulfoxidation refers to two distinct reactions. In one meaning, sulfoxidation refers to the reaction of alkanes with a mixture of sulfur dioxide and oxygen. This reaction is employed industrially to produce alkyl sulfonic acids, which are used as surfactants. The reaction requires UV radiation. RH + SO2 + 1/2 O2 -> RSO3H The reaction favors secondary positions, in accord with its free-radical mechanism. Mixtures are produced. Semiconductor-sensitized variants have been reported. Sulfoxidation can also refer to the oxidation of a thioether to a sulfoxide. R2S + O -> R2SO A typical source of "O" is hydrogen peroxide. References Sulfoxides
Sulfoxidation
Chemistry
166
47,971,685
https://en.wikipedia.org/wiki/Extended%20evolutionary%20synthesis
The Extended Evolutionary Synthesis (EES) consists of a set of theoretical concepts argued to be more comprehensive than the earlier modern synthesis of evolutionary biology that took place between 1918 and 1942. The extended evolutionary synthesis was called for in the 1950s by C. H. Waddington, argued for on the basis of punctuated equilibrium by Stephen Jay Gould and Niles Eldredge in the 1980s, and was reconceptualized in 2007 by Massimo Pigliucci and Gerd B. Müller. The extended evolutionary synthesis revisits the relative importance of different factors at play, examining several assumptions of the earlier synthesis, and augmenting it with additional causative factors. It includes multilevel selection, transgenerational epigenetic inheritance, niche construction, evolvability, and several concepts from evolutionary developmental biology. Not all biologists have agreed on the need for, or the scope of, an extended synthesis. Many have collaborated on another synthesis in evolutionary developmental biology, which concentrates on developmental molecular genetics and evolution to understand how natural selection operated on developmental processes and deep homologies between organisms at the level of highly conserved genes. The preceding "modern synthesis" The modern synthesis was the widely accepted early-20th-century synthesis reconciling Charles Darwin's theory of evolution by natural selection and Gregor Mendel's theory of genetics in a joint mathematical framework. It established evolution as biology's central paradigm. The 19th-century ideas of natural selection by Darwin and Mendelian genetics were united by researchers who included Ronald Fisher, J. B. S. Haldane and Sewall Wright, the three founders of population genetics, between 1918 and 1932. Julian Huxley introduced the phrase "modern synthesis" in his 1942 book, Evolution: The Modern Synthesis. Early history During the 1950s, English biologist C. H. Waddington called for an extended synthesis based on his research on epigenetics and genetic assimilation. In 1978, Michael J. D. White wrote about an extension of the modern synthesis based on new research from speciation. In the 1980s, entomologist Ryuichi Matsuda coined the term "pan-environmentalism" as an extended evolutionary synthesis which he saw as a fusion of Darwinism with neo-Lamarckism. He held that heterochrony is a main mechanism for evolutionary change and that novelty in evolution can be generated by genetic assimilation. An extended synthesis was also proposed by the Austrian zoologist Rupert Riedl, with the study of evolvability. Gordon Rattray Taylor in his 1983 book The Great Evolution Mystery called for an extended synthesis, noting that the modern synthesis is only a subsection of a more comprehensive explanation for biological evolution still to be formulated. In 1985, biologist Robert G. B. Reid authored Evolutionary Theory: The Unfinished Synthesis, which argued that the modern synthesis with its emphasis on natural selection is an incomplete picture of evolution, and emergent evolution can explain the origin of genetic variation. In 1988, ethologist John Endler wrote about developing a newer synthesis, discussing processes of evolution that he felt had been neglected. In 2000, Robert L. Carroll called for an "expanded evolutionary synthesis" due to new research from molecular developmental biology, systematics, geology and the fossil record. 
Punctuated equilibrium In the 1980s, the American palaeontologists Stephen Jay Gould and Niles Eldredge argued for an extended synthesis based on their idea of punctuated equilibrium, the role of species selection in shaping large-scale evolutionary patterns, and natural selection working on multiple levels extending from genes to species. Contributions from evolutionary developmental biology Some researchers in the field of evolutionary developmental biology proposed another synthesis. They argue that the modern and extended syntheses should mostly center on genes and suggest an integration of embryology with molecular genetics and evolution, aiming to understand how natural selection operates on gene regulation and deep homologies between organisms at the level of highly conserved genes, transcription factors and signalling pathways. By contrast, a different strand of evo-devo following an organismal approach contributes to the extended synthesis by emphasizing (amongst others) developmental bias (both through facilitation and constraint), evolvability, and inherency of form as primary factors in the evolution of complex structures and phenotypic novelties. Recent history The idea of an extended synthesis was relaunched in 2007 by Massimo Pigliucci and Gerd B. Müller, with a book in 2010 titled Evolution: The Extended Synthesis, which has served as a launching point for work on the extended synthesis. This includes: The role of prior configurations, genomic structures, and other traits in the organism in generating evolutionary variations. How increasing dimensionality of fitness landscapes affects our view of speciation. The role of multilevel selection in the major evolutionary transitions. New types of inheritance, including cultural and epigenetic inheritance. The way that organismal development and developmental plasticity channel evolutionary pathways and generate phenotypic novelty. How organisms modify the environments they belong to through niche construction. Other processes such as evolvability, phenotypic plasticity, reticulate evolution, horizontal gene transfer, and symbiogenesis are said by proponents to have been excluded or missed from the modern synthesis. The goal of Pigliucci's and Müller's extended synthesis is to take evolution beyond the gene-centered approach of population genetics to consider more organism- and ecology-centered approaches. Many of these causes are currently considered secondary in evolutionary causation, and proponents of the extended synthesis want them to be considered first-class evolutionary causes. Michael R. Rose and Todd Oakley have called for a postmodern synthesis; they commented that "it is now abundantly clear that living things often attain a degree of genomic complexity far beyond simple models like the "gene library" genome of the Modern Synthesis". Biologist Eugene Koonin has suggested that the gradualism of the modern synthesis is unsustainable as gene duplication, horizontal gene transfer and endosymbiosis play a pivotal role in evolution. Koonin commented that "the new developments in evolutionary biology by no account should be viewed as refutation of Darwin. On the contrary, they are widening the trails that Darwin blazed 150 years ago and reveal the extraordinary fertility of his thinking." Arlin Stoltzfus and colleagues advocate mutational and developmental bias in the introduction of variation as an important source of orientation or direction in evolutionary change.
They argue that bias in the introduction of variation was not formally recognized throughout the 20th century, due to the influence of neo-Darwinism on thinking about causation. Organism-centered evolution The early biologists of the organicist movement have influenced the modern extended evolutionary synthesis. Recent research has called for expanding the population-genetic framework of evolutionary biology with a more organism-centered perspective. This has been described as "organism-centered evolution", which looks beyond the genome to the ways that individual organisms are participants in their own evolution. Philip Ball has written a research review on organism-centered evolution. Rui Diogo has proposed a revision of evolutionary theory, which he has termed ONCE: Organic Nonoptimal Constrained Evolution. According to ONCE, evolution is mainly driven by the behavioural choices and persistence of organisms themselves, whilst natural selection plays a secondary role. ONCE cites examples of reciprocal causation between organism and the environment, the Baldwin effect, organic selection, developmental bias and niche construction. Predictions The extended synthesis is characterized by an additional set of predictions that differ from those of the standard modern synthesis theory: Change in phenotype can precede change in genotype Changes in phenotype are predominantly positive, rather than neutral (see: neutral theory of molecular evolution) Changes in phenotype are induced in many organisms, rather than one organism Revolutionary change in phenotype can occur through mutation, facilitated variation or threshold events Repeated evolution in isolated populations can be by convergent evolution or developmental bias Adaptation can be caused by natural selection, environmental induction, non-genetic inheritance, learning and cultural transmission (see: Baldwin effect, meme, transgenerational epigenetic inheritance, ecological inheritance, non-Mendelian inheritance) Rapid evolution can result from simultaneous induction, natural selection and developmental dynamics Biodiversity can be affected by features of developmental systems such as differences in evolvability Heritable variation is directed towards variants that are adaptive and integrated with phenotype Niche construction is biased towards environmental changes that suit the constructor's phenotype, or that of its descendants, and enhance their fitness Kin selection Multilevel selection Self-organization Symbiogenesis Testing From 2016 to 2019, there was an organized project entitled "Putting The Extended Evolutionary Synthesis To The Test" supported by a 7.5 million USD grant from the John Templeton Foundation, supplemented with further money from participating institutions including Clark University, Indiana University, Lund University, Stanford University, University of Southampton and University of St Andrews. Publications from the project include over 200 papers, a special issue, and an anthology on Evolutionary Causation. In 2019 a final report of the 2016–2019 consortium was published, Putting the Extended Evolutionary Synthesis to the Test. The project was headed by Kevin N. Laland at the University of St Andrews and Tobias Uller at Lund University. According to Laland, what the extended synthesis "really boils down to is recognition that, in addition to selection, drift, mutation and other established evolutionary processes, other factors, particularly developmental influences, shape the evolutionary process in important ways."
Status Biologists disagree on the need for an extended synthesis. Opponents contend that the modern synthesis is able to fully account for the newer observations, whereas others criticize the extended synthesis for not being radical enough. Proponents think that the conceptions of evolution at the core of the modern synthesis are too narrow and that even when the modern synthesis allows for the ideas in the extended synthesis, using the modern synthesis affects the way that biologists think about evolution. For example, Denis Noble says that using terms and categories of the modern synthesis distorts the picture of biology that modern experimentation has discovered. Proponents therefore claim that the extended synthesis is necessary to help expand the conceptions and framework of how evolution is considered throughout the biological disciplines. In 2022, the John Templeton Foundation published a review of recent literature. References Further reading Defence of the extended synthesis Gilbert, Scott F. (2000). "A New Evolutionary Synthesis". In Developmental Biology, 6th edition. Sinauer. Lange, Axel (2023). Extending the Evolutionary Synthesis: Darwin's Legacy Redesigned. CRC Press. DOI: https://doi.org/10.1201/9781003341413. Lodé, Thierry (2013). Manifeste pour une écologie évolutive, Darwin et après. Eds Odile Jacob, Paris. Messerly, J.G. (1992). Piaget's conception of evolution: Beyond Darwin and Lamarck. Lanham, MD: Rowman & Littlefield. Postdarwinism: "The New Synthesis". A review of Ecological Developmental Biology: Integrating Epigenetics, Medicine, and Evolution, by Scott F. Gilbert and David Epel (Sinauer, 2009). "Post-modern synthesis?" A review of Developmental Plasticity and Evolution by Mary Jane West-Eberhard (Oxford University Press, 2003). Criticism of the extended synthesis Dickens, Thomas; Rahman, Qazi (2012). "The extended evolutionary synthesis and the role of soft inheritance in evolution". Proceedings of the Royal Society B: Biological Sciences, 279 (1740), pp. 2913–2921. External links Extended Evolutionary Synthesis Should Evolutionary Theory Evolve?, by Bob Grant, January 1, 2010, The Scientist. Evolution Biology theories History of biology
Extended evolutionary synthesis
Biology
2,381
30,747,795
https://en.wikipedia.org/wiki/Polyfluorene
Polyfluorene is a polymer with formula , consisting of fluorene units linked in a linear chain – specifically, at carbon atoms 2 and 7 in the standard fluorene numbering. It can also be described as a chain of benzene rings linked in para positions (a polyparaphenylene) with an extra methylene bridge connecting every pair of rings. The two benzene rings in each unit make polyfluorene an aromatic hydrocarbon and, specifically, a conjugated polymer, and give it notable optical and electrical properties, such as efficient photoluminescence. When spoken about as a class, polyfluorenes are derivatives of this polymer, obtained by replacing some of the hydrogen atoms by other chemical groups, and/or by substituting other monomers for some fluorene units. These polymers are being investigated for possible use in light-emitting diodes, field-effect transistors, plastic solar cells, and other organic electronic applications. They stand out among other luminescent conjugated polymers because the wavelength of their light output can be tuned through the entire visible spectrum by appropriate choice of the substituents. History Fluorene, the repeat unit in polyfluorene derivatives, was isolated from coal tar and discovered by Marcellin Berthelot prior to 1883. Its name originates from its interesting fluorescence (and not from fluorine, which is not one of its elements). Fluorene became the subject of chemical-structure-related color variation (visible rather than luminescent), among other things, throughout the early to mid-20th century. Since it was an interesting chromophore, researchers wanted to understand which parts of the molecule were chemically reactive, and how substituting these sites influenced the color. For instance, by adding various electron-donating or electron-accepting moieties to fluorene, and by reacting with bases, researchers were able to change the color of the molecule. The physical properties of the fluorene molecule were recognizably desirable for polymers; as early as the 1970s researchers began incorporating this moiety into polymers. For instance, because of fluorene's rigid, planar shape, a polymer containing fluorene was shown to exhibit enhanced thermo-mechanical stability. However, more promising was integrating the optoelectronic properties of fluorene into a polymer. Reports of the oxidative polymerization of fluorene (into a fully conjugated form) exist from at least 1972. However, it was not until after the highly publicized high conductivity of doped polyacetylene, presented in 1977 by Heeger, MacDiarmid and Shirakawa, that substantial interest in the electronic properties of conjugated polymers took off. As interest in conducting plastics grew, fluorene again found application. The aromatic nature of fluorene makes it an excellent candidate component of a conducting polymer because it can stabilize and conduct a charge; in the early 1980s fluorene was electropolymerized into conjugated polymer films with conductivities of 10−4 S cm−1. The optical properties (such as variable luminescence and visible light spectrum absorption) that accompany the extended conjugation in polymers of fluorene have become increasingly attractive for device applications. Throughout the 1990s and into the 2000s, many devices such as organic light-emitting diodes (OLEDs), organic solar cells, organic thin-film transistors, and biosensors have taken advantage of the luminescent, electronic and absorptive properties of polyfluorenes.
Properties Polyfluorenes are an important class of polymers which have the potential to act as both electroactive and photoactive materials. This is in part due to the shape of fluorene. Fluorene is generally planar; p-orbital overlap at the linkage between its two benzene rings results in conjugation across the molecule. This in turn allows for a reduced band gap, as the excited-state molecular orbitals are delocalized. Since the degree of delocalization and the spatial location of the orbitals on the molecule are influenced by the electron-donating (or withdrawing) character of its substituents, the band gap energy can be varied. This chemical control over the band gap directly influences the color of the molecule by limiting the energies of light which it absorbs. Interest in polyfluorene derivatives has increased because of their high photoluminescence quantum efficiency, high thermal stability, and their facile color tunability, obtained by introducing low-band-gap co-monomers. Research in this field has increased significantly due to its potential application in tuning organic light-emitting diodes (OLEDs). In OLEDs, polyfluorenes are desirable because they are the only family of conjugated polymers that can emit colors spanning the entire visible range with high efficiency and low operating voltage. Furthermore, polyfluorenes are relatively soluble in most solvents, making them ideal for general applications. Another important quality of polyfluorenes is their thermotropic liquid crystallinity, which allows the polymers to align on rubbed polyimide layers. Thermotropic liquid crystallinity refers to the polymers' ability to exhibit a phase transition into the liquid crystal phase as the temperature is changed. This is very important to the development of liquid crystal displays (LCDs) because the synthesis of liquid crystal displays requires that the liquid-crystal molecules at the two glass surfaces of the cell be aligned parallel to the two polarizer foils. This can only be done by coating the inner surfaces of the cell with a thin, transparent film of polyimide which is then rubbed with a velvet cloth. Microscopic grooves are then generated in the polyimide layer, and the liquid crystal in contact with the polyimide (here, the polyfluorene) can align in the rubbing direction. In addition to LCDs, polyfluorene can also be used to synthesize light-emitting diodes (LEDs). Polyfluorene has led to LEDs that can emit polarized light with polarization ratios of more than 20 and with a brightness of 100 cd m−2. Even though this is very impressive, it is not sufficient for general applications. Challenges associated with polyfluorenes Polyfluorenes often show both excimer and aggregate formation upon thermal annealing or when current is passed through them. Excimer formation involves the generation of dimerized units of the polymer which emit light at lower energies than the polymer itself. This hinders the use of polyfluorenes for most applications, including light-emitting diodes (LEDs). When excimer or aggregate formation occurs, it lowers the efficiency of the LEDs by decreasing the efficiency of charge carrier recombination. Excimer formation also causes a red shift in the emission spectrum. Polyfluorenes can also undergo decomposition. There are two known ways in which decomposition can occur. The first involves the oxidation of the polymer that leads to the formation of an aromatic ketone, quenching the fluorescence.
The second decomposition process results in aggregation, leading to a red-shifted fluorescence, reduced intensity, exciton migration and relaxation through excimers. Researchers have attempted to eliminate excimer formation and enhance the efficiency of polyfluorenes by copolymerizing polyfluorene with anthracene and by end-capping polyfluorenes with bulky groups which could sterically hinder excimer formation. Additionally, researchers have tried adding large substituents at the 9-position of the fluorene in order to inhibit excimer and aggregate formation. Furthermore, researchers have tried to improve LEDs by synthesizing fluorene-triarylamine copolymers and other multilayer devices that are based on polyfluorenes that can be cross-linked. These have been found to have brighter fluorescence and reasonable efficiencies. Aggregation has also been combated by varying the chemical structure. For example, when conjugated polymers aggregate, which is natural in the solid state, their emission can be self-quenched, reducing luminescent quantum yields and luminescent device performance. In opposition to this tendency, researchers have used tri-functional monomers to create highly branched polyfluorenes which do not aggregate, due to the bulkiness of the substituents. This design strategy has achieved luminescent quantum yields of 42% in the solid state. This solution reduces the ease of processing the material, because branched polymers have increased chain entanglement and poor solubility. Another problem commonly encountered by polyfluorenes is an observed broad green, parasitic emission which detracts from the color purity and efficiency needed for an OLED. Initially attributed to excimer emission, this green emission has been shown to be due to the formation of ketone defects along the fluorene polymer backbone (oxidation of the 9-position on the monomer) when there is incomplete substitution at the 9-position of the fluorene monomer. Routes to combat this involve ensuring full substitution of the monomer's active site, or including aromatic substituents. These solutions may present structures that lack optimal bulkiness or may be synthetically difficult. Synthesis and design Conjugated polymers, such as polyfluorene, can be designed and synthesized with different properties for a wide variety of applications. The color of the molecules can be designed through synthetic control over the electron-donating or -withdrawing character of the substituents on fluorene or the comonomers in polyfluorene. Solubility of the polymers is important because solution-state processing is very common. Since conjugated polymers, with their planar structure, tend to aggregate, bulky side chains are added (to the 9-position of fluorene) to increase the solubility of the polymer. Oxidative polymerization The earliest polymerizations of fluorene were oxidative polymerizations with AlCl3 or FeCl3, and more commonly electropolymerization. Electropolymerization is an easy route to obtain thin, insoluble conducting polymer films. However, this technique has a few disadvantages in that it does not provide controlled chain-growth polymerization, and processing and characterization are difficult as a result of the films' insolubility. Oxidative polymerization produces a similarly poor site-selectivity on the monomer for chain growth, resulting in poor control over the regularity of the polymer's structure.
However, oxidative polymerization does produce soluble polymers (from side-chain-containing monomers) which are more easily characterized with nuclear magnetic resonance. Cross-coupling polymerizations The design of polymeric properties requires great control over the structure of the polymer. For instance, low-band-gap polymers require regularly alternating electron-donating and electron-accepting monomers. More recently, many popular cross-coupling chemistries have been applied to polyfluorenes and have enabled controlled polymerization; palladium-catalyzed coupling reactions such as Suzuki coupling and Heck coupling, as well as nickel-catalyzed Yamamoto and Grignard coupling reactions, have been applied to the polymerization of fluorene derivatives. Such routes have enabled excellent control over the properties of polyfluorenes; the fluorene-thiophene-benzothiadiazole copolymer shown above, with a band gap of 1.78 eV when the side chains are alkoxy, appears blue because it absorbs in the red wavelengths. Design Modern coupling chemistries allow other properties of polyfluorenes to be controlled through the implementation of complex molecular designs. The polymer structure pictured above has excellent photoluminescent quantum yields (partly due to its fluorene monomer), excellent stability (due to its oxadiazole comonomer), good solubility (due to its many branched alkyl side chains), and an amine-functionalized side chain for ease of tethering to other molecules or to a substrate. The luminescent color of polyfluorenes can be changed, for example from blue to green-yellow, by adding functional groups which participate in excited-state intramolecular proton transfer. Exchanging the alkoxy side chains for alcohol side groups allows for energy dissipation (and a red shift in emission) through reversible transfer of a proton from the alcohol to the nitrogen (on the oxadiazole). These complicated molecular structures were engineered to have these properties and could only be realized through careful control of monomer ordering and side-group functionality. Applications Organic light-emitting diodes (OLEDs) In recent years many industrial efforts have focused on tuning the color of light emitted by polyfluorenes. It was found that by doping green- or red-emitting materials into polyfluorenes one could tune the color emitted by the polymers. Since polyfluorene homopolymers emit higher-energy blue light, they can transfer energy via Förster resonance energy transfer (FRET) to lower-energy emitters. In addition to doping, the color of polyfluorenes can be tuned by copolymerizing the fluorene monomers with other low-band-gap monomers. Researchers at the Dow Chemical Company synthesized several fluorene-based copolymers by alternating copolymerization using 5,5′-dibromo-2,2′-bithiophene, which showed yellow emission, and 4,7-dibromo-2,1,3-benzothiadiazole, which showed green emission. Other copolymerizations are also suitable; researchers at IBM performed random copolymerization of fluorene with 3,9(10)-dibromoperylene, 4,4′-dibromo-α-cyanostilbene, and 1,4-bis(2-(4-bromophenyl)-1-cyanovinyl)-2-(2-ethylhexyl)-5-methoxybenzene. Only a small amount of the co-monomer, approximately 5%, was needed to tune the emission of the polyfluorene from blue to yellow. This example further illustrates that by introducing monomers that have a lower band gap than the fluorene monomer, one can tune the color that is emitted by the polymer.
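The link between the quoted band gaps and the observed colors is simply the Planck relation E = hc/λ: the band gap sets the longest wavelength the polymer can absorb. A minimal sketch of the conversion (the 1.78 eV value is the one given above; the constants are standard values):

```python
# Convert a band gap in eV to the wavelength of the absorption edge.
H_EV_S = 4.135667696e-15   # Planck constant in eV*s
C_NM_S = 2.99792458e17     # speed of light in nm/s

def absorption_edge_nm(band_gap_ev: float) -> float:
    """Longest wavelength (nm) a material with this band gap can absorb."""
    return H_EV_S * C_NM_S / band_gap_ev

# 1.78 eV -> ~697 nm: the copolymer absorbs red light, consistent with
# the blue appearance described in the text.
print(f"{absorption_edge_nm(1.78):.0f} nm")
```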
Substitution at the nine position with various moieties has also been examined as a means to control the color emitted by polyfluorene. In the past, researchers tried alkyl substituents at the nine position; however, it has been found that bulkier groups, such as alkoxyphenyl groups, give the polymers enhanced blue-emission stability and superior polymer light-emitting diode performance (compared with polymers that have alkyl substituents at the nine position). Polymer solar cells Polyfluorenes are also used in polymer solar cells because their properties are readily tuned. Copolymerization of fluorene with other monomers allows researchers to optimize the absorption and electronic energy levels as a means to increase photovoltaic performance. For instance, by lowering the band gap of polyfluorenes, the absorption spectrum of the polymer can be adjusted to coincide with the maximum photon-flux region of the solar spectrum. This helps the solar cell absorb more of the sun's energy and increases its energy conversion efficiency; donor-acceptor structured copolymers of fluorene have achieved efficiencies above 4% when their absorption edge was pushed to 700 nm. The voltage of polymer solar cells has also been increased through the design of polyfluorenes. These devices are typically produced by blending electron-accepting and electron-donating molecules which help separate charge to produce power. In polymer blend solar cells, the voltage produced by the device is determined by the difference between the electron-donating polymer's highest occupied molecular orbital (HOMO) energy level and the electron-accepting molecule's lowest unoccupied molecular orbital (LUMO) energy level. By adding electron-withdrawing pendant molecules to conjugated polymers, their HOMO energy level can be lowered. For instance, by adding electronegative groups on the end of conjugated side chains, researchers lowered the HOMO of a polyfluorene copolymer to −5.30 eV and increased the voltage of a solar cell to 0.99 V. Typical polymer solar cells utilize fullerene molecules as electron acceptors because of their low LUMO energy level (high electron affinity). However, the tunability of polyfluorenes allows their LUMO to be lowered to a level appropriate for use as an electron acceptor. Thus, polyfluorene copolymers have also been used in polymer:polymer blend solar cells, where their electron-accepting, electron-conducting and light-absorbing properties enable device performance. References Further reading Physical organic chemistry Organic polymers
Polyfluorene
Chemistry
3,509
236,696
https://en.wikipedia.org/wiki/Postdigital
Postdigital, in artistic practice, is a term that describes works of art and theory that are more concerned with being human than with being digital. It is similar to the concept of "undigital", introduced in 1995, in which technology and society advance beyond digital limitations to achieve a totally fluid, multimediated reality free from the artefacts of digital computation (quantization noise, pixelation, etc.). The postdigital is concerned with our rapidly changed and changing relationships with digital technologies and art forms. Theory According to Giorgio Agamben (2002), the postdigital is a paradigm that (as with post-humanism) does not aim to describe a life after the digital, but rather attempts to describe the present-day opportunity to explore the consequences of the digital and of the computer age. While the computer age has enhanced human capacity with inviting and uncanny prosthetics, the postdigital provides a paradigm with which it is possible to examine and understand this enhancement. In The Future of Art in a Postdigital Age, Mel Alexenberg defines "postdigital art" as artworks that address the humanization of digital technologies through interplay between digital, biological, cultural, and spiritual systems, between cyberspace and real space, between embodied media and mixed reality in social and physical communication, between high-tech and high-touch experiences, between visual, haptic, auditory, and kinesthetic media experiences, between virtual and augmented reality, between roots and globalization, between autoethnography and community narrative, and between web-enabled peer-produced wikiart and artworks created with alternative media through participation, interaction, and collaboration in which the role of the artist is redefined, and between tactile art and NFTs. Alexenberg proposes that a postdigital age was declared in Wired by MIT Media Lab director Nicholas Negroponte: "Like air and drinking water, being digital will be noticed only in its absence, not by its presence. Face it - the Digital Revolution is over." Music Kim Cascone uses the term in his article The Aesthetics of Failure: "Post-digital" Tendencies in Contemporary Computer Music. He begins the article with a quotation from MIT Media Lab cyberpundit Nicholas Negroponte: "The digital revolution is over." Cascone goes on to describe what he sees as a 'post-digital' line of flight in the music also commonly known as glitch or microsound music, observing that 'with electronic commerce now a natural part of the business fabric of the Western world and Hollywood cranking out digital fluff by the gigabyte, the medium of digital technology holds less fascination for composers in and of itself.' The Japanese theorist Ryota Matsumoto adapts Cascone's postdigital discourse to Japanese culture, construing Japanese social structure as postdigital after the collapse of capitalist accumulation and the subsequent integration of tradition with the pharmacology of the digital age. In Art after Technology, Maurice Benayoun lists possible tracks for "postdigital" art, considering that the digital flooding has altered the entire social, economic and artistic landscape, and that the artist's posture will move in ways that try to escape the technological realm without being able to discard it completely. From low-tech to biotech and critical fusion (the critical intrusion of fiction into reality), new forms of art emerge from the digital era.
See also Circuit bending Databending Digital art Glitch New Aesthetic New media art References Further reading Alexenberg, Mel (2019), Through a Bible Lens: Biblical Insights for Smartphone Photography and Social Media. Nashville, Tennessee: HarperCollins; . Alexenberg, Mel (2011), The Future of Art in a Postdigital Age: From Hellenistic to Hebraic Consciousness. Bristol and Chicago: Intellect Books/University of Chicago Press; . Alexenberg, Mel, ed. (2008), Educating Artists for the Future: Learning at the Intersections of Art, Science, Technology, and Culture. Bristol and Chicago: Intellect Books/University of Chicago Press, 344 pp. . (postdigital chapters by Roy Ascott, Stephen Wilson, Eduardo Kac, and others). Ascott, R. (2003), Telematic Embrace. (E. Shanken, ed.) Berkeley: University of California Press. . Barreto, R. and Perissinotto, P. (2002), The Culture of Immanence, in Internet Art. Ricardo Barreto e Paula Perissinotto (orgs.). São Paulo, IMESP. . Benayoun, M. (2008), Art after Technology, abstract of the text written by Maurice Benayoun in Technology Review - French edition, N°7 June–July 2008, MIT, ISSN 1957-1380. Full text in English. Benayoun, M., The Dump, 207 Hypotheses for Committing Art, bilingual (English/French), Fyp éditions, France, July 2011, . Berry, D. M. (2014), Critical Theory and the Digital, New York: Bloomsbury. . Berry, D. M. and Dieter, M. (2015), Postdigital Aesthetics: Art, Computation and Design, London: Palgrave. . Birnbaum, D. and Kuo, M. (2018), More than Real: Art in the Digital Age, 2018 Verbier Art Summit. London: Koenig Books. . Bolognini, M. (2008), Postdigitale, Rome: Carocci. . Ferguson, J., & Brown, A. R. (2016). "Fostering a post-digital avant-garde: Research-led teaching of music technology". Organised Sound, 21(2), 127–137. Ferreira, P. (2024), Audiovisual Disruption: Post-Digital Aesthetics in Contemporary Audiovisual Arts, Bielefeld, Germany: transcript Verlag. . Pepperell, R. and Punt, M. (2000), The Postdigital Membrane: Imagination, Technology and Desire, Intellect Books, Bristol, UK, 182 pp. Saneoki, Toshiko (2019), Postdigital Theory of Giorgio Agamben, Ryota Matsumoto, Kim Cascone, Japanese Art and Design. Hachimato, Tokyo Institute of Art, Tokyo, Japan. Saneoki, Toshiko (2019), "Postdigital, Giorgio Agamben, Ryota Matsumoto", Tokyo University Press Media Research Journal (Japanese text). Wilson, S. (2003), Information Arts: Intersections of Art, Science, and Technology. . External links Google Books: The Postdigital Membrane What is a paradigm by Giorgio Agamben Post-Digital Humanities: Computation and Cultural Critique in the Arts and Humanities Monoskop: Collection of resources related to Post-Digital Aesthetics Postdigital Science and Education journal Postdigital Science and Education book series Encyclopedia of Postdigital Science and Education Digital art Digital electronics Computer art New media New media art Interactive art Visual arts genres
Postdigital
Technology,Engineering
1,466
4,473,599
https://en.wikipedia.org/wiki/Electrofuge
In chemistry, an electrofuge is a leaving group which does not retain the lone pair of electrons from its previous bond with another species (in contrast to a nucleofuge, which does). It can result from the heterolytic breaking of covalent bonds. After this reaction an electrofuge may possess either a positive or a neutral charge; this is governed by the nature of the specific reaction. An example would be the loss of H+ from a molecule of benzene during nitration. The word 'electrofuge' is commonly found in older literature, but its use in contemporary organic chemistry is now uncommon. See also Nucleofuge Nucleophile Electrophile References Organic chemistry
Electrofuge
Chemistry
146
61,509,757
https://en.wikipedia.org/wiki/Maximilian%20Cercha
Maximilian (also spelled Maksymilian) Cercha (1818–1907) was a Polish painter and draughtsman. He was the nephew of Ezechiel Cercha (1790–1820) and the father of Stanisław Cercha (1867–1919). Life Cercha was born in Kraków. He studied at the Jan Matejko Academy of Fine Arts and at the Painting and Drawing School at the Technical Institute in Kraków with Jan Nepomucen Głowacki and Wojciech Stattler. Cercha died in Kraków and was buried at the Rakowicki Cemetery. Among Cercha's students were his son, Stanisław Cercha, and Stanisław Tarnowski. References External links 1818 births 1907 deaths Artists from Kraków Draughtsmen 19th-century Polish painters Polish male painters Painters from Austria-Hungary
Maximilian Cercha
Engineering
174
37,641,022
https://en.wikipedia.org/wiki/Edison%20Volta%20Prize
The Edison Volta Prize is awarded biennially by the European Physical Society (EPS) to individuals or groups of up to three people in recognition of outstanding achievements in physics. The award consists of a diploma, a medal, and 10,000 euros in prize money. The award was established in 2012 by the Centro di Cultura Scientifica "Alessandro Volta", Edison S.p.A. and the European Physical Society. 2020 Laureates The 2020 EPS Edison Volta Prize was awarded to: Klaus Ensslin, ETH Zurich Laboratorium für Festkörperphysik, Switzerland Jurgen Smet, Max Planck Institute for Solid State Research, Stuttgart, Germany Dieter Weiss, Universität Regensburg Institut für experimentelle und angewandte Physik, Germany "for their seminal contributions to condensed matter nano-science". 2018 Laureates The 2018 EPS Edison Volta Prize was awarded to: Alain Brillet, CNRS, Observatoire de la Côte d’Azur, Nice, France Karsten Danzmann, Max-Planck-Institut für Gravitationsphysik and Leibniz University, Hannover, Germany Adalberto Giazotto (died 2017), INFN, Pisa, Italy Jim Hough, University of Glasgow, UK for "the development, in their respective countries, of key technologies and innovative experimental solutions, that enabled the advanced interferometric gravitational wave detectors LIGO and Virgo to detect the first gravitational wave signals from mergers of Black Holes and of Neutron Stars". 2016 Laureate The 2016 EPS Edison Volta Prize was awarded to Michel A.G. Orrit, University of Leiden, the Netherlands, for "seminal contributions to optical science, to the field of single-molecule spectroscopy and imaging (first single molecule detection by fluorescence and first optical detection of magnetic resonance in single molecule) and for pioneering investigations into the photoblinking and photobleaching behaviors of individual molecules at the heart of many current optical super-resolution experiments." 2015 Laureates The 2015 EPS Edison Volta Prize was awarded to the three principal scientific leaders of the European Space Agency's (ESA) Planck Mission: Nazzareno Mandolesi, University of Ferrara, Italy Jean-Loup Puget, Institut d'Astrophysique Spatiale, Université Paris Sud & CNRS, France Jan Tauber, Directorate of Science and Robotic Exploration, European Space Agency "for directing the development of the Planck payload and the analysis of its data, resulting in the refinement of our knowledge of the temperature fluctuations in the Cosmic Microwave Background as a vastly improved tool for doing precision cosmology at unprecedented levels of accuracy, and consolidating our understanding of the very early universe." 2014 Laureate The 2014 EPS Edison Volta Prize was awarded to: Jean-Michel Raimond, Professor, Université Pierre et Marie Curie, "for seminal contributions to physics that have paved the way for novel explorations of quantum mechanics and have opened new routes in quantum information processing". 2012 Laureates The 2012 EPS Edison Volta Prize was awarded on 12 November 2012 to: Rolf-Dieter Heuer, CERN Director General, Sergio Bertolucci, CERN Director for Research and Computing, Stephen Myers, CERN Director for Accelerators and Technology, "for having led, building on decades of dedicated work by their predecessors, the culminating efforts in the direction, research and operation of the CERN Large Hadron Collider (LHC), which resulted in many significant advances in high energy particle physics, in particular, the first evidence of a Higgs-like boson in July 2012".
See also List of physics awards References Physics awards Awards established in 2012 Awards of the European Physical Society Alessandro Volta
Edison Volta Prize
Technology
754
63,989,376
https://en.wikipedia.org/wiki/Archimedean%20ordered%20vector%20space
In mathematics, specifically in order theory, a binary relation ≤ on a vector space X over the real or complex numbers is called Archimedean if for all x in X, whenever there exists some y in X such that nx ≤ y for all positive integers n, then necessarily x ≤ 0. An Archimedean (pre)ordered vector space is a (pre)ordered vector space whose order is Archimedean. A preordered vector space X is called almost Archimedean if for all x in X, whenever there exists a y in X such that −(1/n)y ≤ x ≤ (1/n)y for all positive integers n, then x = 0. Characterizations A preordered vector space (X, ≤) with an order unit u is Archimedean preordered if and only if nx ≤ u for all non-negative integers n implies x ≤ 0. Properties Let X be an ordered vector space over the reals that is finite-dimensional. Then the order of X is Archimedean if and only if the positive cone of X is closed for the unique topology under which X is a Hausdorff TVS. Order unit norm Suppose (X, ≤) is an ordered vector space over the reals with an order unit u whose order is Archimedean, and let U = [−u, u]. Then the Minkowski functional p_U of U (defined by p_U(x) := inf { r > 0 : x ∈ r[−u, u] }) is a norm called the order unit norm. It satisfies p_U(u) = 1, and the closed unit ball determined by p_U is equal to [−u, u] (that is, { x ∈ X : p_U(x) ≤ 1 } = [−u, u]). Examples The space ℓ∞(S, ℝ) of bounded real-valued maps on a set S with the pointwise order is Archimedean ordered with an order unit u := 1 (that is, the function that is identically 1 on S). The order unit norm on ℓ∞(S, ℝ) is identical to the usual sup norm: ‖f‖ := sup |f(S)|. Examples Every order complete vector lattice is Archimedean ordered. A finite-dimensional vector lattice of dimension n is Archimedean ordered if and only if it is isomorphic to ℝⁿ with its canonical order. However, a totally ordered vector space of dimension > 1 cannot be Archimedean ordered. There exist ordered vector spaces that are almost Archimedean but not Archimedean. The Euclidean space ℝ² over the reals with the lexicographic order is not Archimedean ordered, since r(0, 1) ≤ (1, 1) for every r > 0 but (0, 1) ≠ (0, 0). See also References Bibliography Functional analysis Order theory
Archimedean ordered vector space
Mathematics
405
73,192,604
https://en.wikipedia.org/wiki/1945%E2%80%931998
1945–1998 is a piece created by Isao Hashimoto showing a time-lapse of every nuclear explosion between 1945 and 1998. Contents The piece begins with the two nuclear explosions at Hiroshima and Nagasaki. The United States conducts several nuclear tests after the war. The Soviet Union and United Kingdom then gain nuclear weapons, increasing the number of explosions. The piece continues until it reaches Pakistan's first nuclear test in 1998. The total number of detonations shown is 2,053. The piece uses sound and light to startle the viewer. Each month of the time-lapse is compressed into one second and marked by a sound, and when a nuclear explosion occurs, a musical tone plays. Different countries have different tones, which sometimes results in a polyphonic composition, overwhelming the viewer. Reception The piece was generally well received and has been praised for conveying the costs a nuclear war would entail. It has been described as "eerie", "scary", and "terrifying". References Nuclear weapons Nuclear warfare
1945–1998
Chemistry
199
2,404,348
https://en.wikipedia.org/wiki/Magnetization
In classical electromagnetism, magnetization is the vector field that expresses the density of permanent or induced magnetic dipole moments in a magnetic material. Accordingly, physicists and engineers usually define magnetization as the quantity of magnetic moment per unit volume. It is represented by a pseudovector M. Magnetization can be compared to electric polarization, which is the measure of the corresponding response of a material to an electric field in electrostatics. Magnetization also describes how a material responds to an applied magnetic field as well as the way the material changes the magnetic field, and can be used to calculate the forces that result from those interactions. The origin of the magnetic moments responsible for magnetization can be either microscopic electric currents resulting from the motion of electrons in atoms, or the spin of the electrons or the nuclei. Net magnetization results from the response of a material to an external magnetic field. Paramagnetic materials have a weak induced magnetization in a magnetic field, which disappears when the magnetic field is removed. Ferromagnetic and ferrimagnetic materials have strong magnetization in a magnetic field, and can be magnetized to have magnetization in the absence of an external field, becoming a permanent magnet. Magnetization is not necessarily uniform within a material, but may vary between different points. Definition The magnetization field or M-field can be defined according to the following equation: M = dm/dV, where dm is the elementary magnetic moment and dV is the volume element; in other words, the M-field is the distribution of magnetic moments in the region or manifold concerned. This is better illustrated through the following relation: m = ∭ M dV, where m is an ordinary magnetic moment and the triple integral denotes integration over a volume. This makes the M-field completely analogous to the electric polarisation field, or P-field, used to determine the electric dipole moment p generated by a similar region or manifold with such a polarization: P = dp/dV, where dp is the elementary electric dipole moment. These definitions of P and M as "moments per unit volume" are widely adopted, though in some cases they can lead to ambiguities and paradoxes. The M-field is measured in amperes per meter (A/m) in SI units. In Maxwell's equations The behavior of magnetic fields (B, H), electric fields (E, D), charge density (ρ), and current density (J) is described by Maxwell's equations. The role of the magnetization is described below. Relations between B, H, and M The magnetization defines the auxiliary magnetic field H as H = B/μ0 − M (SI) or H = B − 4πM (Gaussian system), which is convenient for various calculations. The vacuum permeability μ0 is, approximately, 4π × 10−7 V·s/(A·m). A relation between M and H exists in many materials. In diamagnets and paramagnets, the relation is usually linear: M = χH, where χ is called the volume magnetic susceptibility; equivalently B = μH, where μ = μ0(1 + χ) is called the magnetic permeability of the material. The magnetic potential energy per unit volume (i.e. magnetic energy density) of the paramagnet (or diamagnet) in the magnetic field is E = −μ0χH²/2, the negative gradient of which is the magnetic force on the paramagnet (or diamagnet) per unit volume (i.e. force density). In diamagnets (χ < 0) and paramagnets (χ > 0), usually |χ| ≪ 1, and therefore B ≈ μ0H. In ferromagnets there is no one-to-one correspondence between M and H because of magnetic hysteresis.
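The linear relations just described are easy to script. A minimal sketch (the susceptibility values are rough handbook figures, used here purely for illustration):

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, V*s/(A*m)

def linear_response(H: float, chi: float):
    """M and B for a material of volume susceptibility chi in a field H (A/m),
    assuming the linear regime M = chi * H."""
    M = chi * H               # magnetization, A/m
    B = MU0 * (H + M)         # flux density, T; equals mu0 * (1 + chi) * H
    return M, B

# A paramagnet (chi > 0) and a diamagnet (chi < 0) in the same applied field.
for name, chi in [("aluminium (paramagnet)", 2.2e-5),
                  ("copper (diamagnet)", -9.6e-6)]:
    M, B = linear_response(1e5, chi)
    print(f"{name}: M = {M:+.2f} A/m, B = {B:.6f} T")
```

Because |chi| is tiny for both materials, B barely differs from mu0*H, which is the point made above.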
Magnetic polarization Alternatively to the magnetization, one can define the magnetic polarization I = μ0M (SI) (often the symbol J is used for it, not to be confused with current density). This is by direct analogy to the electric polarization P. The magnetic polarization thus differs from the magnetization by a factor of μ0: I = μ0M (SI). Whereas magnetization is given with the unit ampere/meter, the magnetic polarization is given with the unit tesla. Magnetization current The magnetization M makes a contribution to the current density J, known as the magnetization current: J_m = ∇ × M, and for the bound surface current: K_m = M × n̂, so that the total current density that enters Maxwell's equations is given by J = J_f + ∇ × M + ∂P/∂t, where J_f is the electric current density of free charges (also called the free current), the second term is the contribution from the magnetization, and the last term is related to the electric polarization P. Magnetostatics In the absence of free electric currents and time-dependent effects, Maxwell's equations describing the magnetic quantities reduce to ∇ · H = −∇ · M and ∇ × H = 0. These equations can be solved in analogy with electrostatic problems, where ∇ · E = ρ/ε0 and ∇ × E = 0. In this sense −∇ · M plays the role of a fictitious "magnetic charge density" analogous to the electric charge density ρ (see also demagnetizing field). Dynamics The time-dependent behavior of magnetization becomes important when considering nanoscale and nanosecond-timescale magnetization. Rather than simply aligning with an applied field, the individual magnetic moments in a material begin to precess around the applied field and come into alignment through relaxation as energy is transferred into the lattice. Reversal Magnetization reversal, also known as switching, refers to the process that leads to a 180° (arc) re-orientation of the magnetization vector with respect to its initial direction, from one stable orientation to the opposite one. Technologically, this is one of the most important processes in magnetism, linked to the magnetic data storage process used in modern hard disk drives. As known today, there are only a few possible ways to reverse the magnetization of a metallic magnet: an applied magnetic field; spin injection via a beam of particles with spin; and magnetization reversal by circularly polarized light, i.e., incident electromagnetic radiation that is circularly polarized. Demagnetization Demagnetization is the reduction or elimination of magnetization. One way to do this is to heat the object above its Curie temperature, where thermal fluctuations have enough energy to overcome exchange interactions, the source of ferromagnetic order, and destroy that order. Another way is to pull it out of an electric coil with alternating current running through it, giving rise to fields that oppose the magnetization. One application of demagnetization is to eliminate unwanted magnetic fields. For example, magnetic fields can interfere with electronic devices such as cell phones or computers, and with machining, by making cuttings cling to their parent. See also Magnetometer Orbital magnetization References Electric and magnetic fields in matter
Magnetization
Physics,Chemistry,Materials_science,Engineering
1,325
39,765,053
https://en.wikipedia.org/wiki/Multi-stage%20programming
Multi-stage programming (MSP) is a variety of metaprogramming in which compilation is divided into a series of intermediate phases, allowing typesafe run-time code generation. Statically defined types are used to verify that dynamically constructed types are valid and do not violate the type system. In MSP languages, expressions are qualified by notation that specifies the phase at which they are to be evaluated. By allowing the specialization of a program at run-time, MSP can optimize the performance of programs: it can be considered as a form of partial evaluation that performs computations at compile-time as a trade-off to increase the speed of run-time processing. Multi-stage programming languages support constructs similar to the Lisp construct of quotation and eval, except that scoping rules are taken into account. References External links MetaOCaml Programming paradigms Type systems
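MetaOCaml expresses staging with bracket and escape annotations; as a rough illustration of the idea only, the specialization step can be mimicked in Python by generating source code in one stage and compiling it for the next. Unlike a true MSP language, this string-based emulation gives no static guarantee that the generated code is well-typed:

```python
# Stage 1 (generation time): unroll x**n into an explicit product,
# performing the recursion over n now rather than at run time.
def gen_power(n: int) -> str:
    body = " * ".join(["x"] * n) if n > 0 else "1"
    return f"lambda x: {body}"

source = gen_power(5)                               # "lambda x: x * x * x * x * x"
power5 = eval(compile(source, "<staged>", "eval"))  # Stage 2: compiled code

print(source)
print(power5(2))  # 32 -- the exponent loop has been specialized away
```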
Multi-stage programming
Mathematics
184
3,140,923
https://en.wikipedia.org/wiki/Linearly%20disjoint
In mathematics, algebras A, B over a field k inside some field extension Ω of k are said to be linearly disjoint over k if the following equivalent conditions are met: (i) The map A ⊗_k B → AB induced by (x, y) ↦ xy is injective. (ii) Any k-basis of A remains linearly independent over B. (iii) If {u_i}, {v_j} are k-bases for A, B, then the products {u_i v_j} are linearly independent over k. Note that, since every subalgebra of Ω is a domain, (i) implies A ⊗_k B is a domain (in particular reduced). Conversely, if A and B are fields, either A or B is an algebraic extension of k, and A ⊗_k B is a domain, then it is a field and A and B are linearly disjoint. However, there are examples where A ⊗_k B is a domain but A and B are not linearly disjoint: for example, A = B = k(t), the field of rational functions over k. One also has: A, B are linearly disjoint over k if and only if the subfields of Ω generated by A, resp. B, are linearly disjoint over k. (cf. Tensor product of fields) Suppose A, B are linearly disjoint over k. If A′ ⊆ A, B′ ⊆ B are subalgebras, then A′ and B′ are linearly disjoint over k. Conversely, if any finitely generated subalgebras of algebras A, B are linearly disjoint, then A, B are linearly disjoint (since the condition involves only finite sets of elements). See also Tensor product of fields References P.M. Cohn (2003). Basic algebra Algebra
Linearly disjoint
Mathematics
351
5,270,898
https://en.wikipedia.org/wiki/Cauchy%27s%20functional%20equation
Cauchy's functional equation is the functional equation f(x + y) = f(x) + f(y). A function f that solves this equation is called an additive function. Over the rational numbers, it can be shown using elementary algebra that there is a single family of solutions, namely f(x) = cx for any rational constant c. Over the real numbers, the family of linear maps f(x) = cx, now with c an arbitrary real constant, is likewise a family of solutions; however there can exist other solutions not of this form that are extremely complicated. However, any of a number of regularity conditions, some of them quite weak, will preclude the existence of these pathological solutions. For example, an additive function f : ℝ → ℝ is linear if: f is continuous (Cauchy, 1821); in fact, it suffices for f to be continuous at one point (Darboux, 1875). f(x) ≥ 0 or f(x) ≤ 0 for all x ≥ 0. f is monotonic on any interval. f is bounded above or below on any interval. f is Lebesgue measurable. f(xⁿ) = f(x)ⁿ for all real x and some positive integer n. The graph of f is not dense in ℝ². On the other hand, if no further conditions are imposed on f, then (assuming the axiom of choice) there are infinitely many other functions that satisfy the equation. This was proved in 1905 by Georg Hamel using Hamel bases. Such functions are sometimes called Hamel functions. The fifth problem on Hilbert's list is a generalisation of this equation. Functions for which there exists a real number c such that f(cx) ≠ cf(x) are known as Cauchy-Hamel functions and are used in Dehn-Hadwiger invariants which are used in the extension of Hilbert's third problem from 3D to higher dimensions. This equation is sometimes referred to as Cauchy's additive functional equation to distinguish it from the other functional equations introduced by Cauchy in 1821: the exponential functional equation f(x + y) = f(x)f(y), the logarithmic functional equation f(xy) = f(x) + f(y), and the multiplicative functional equation f(xy) = f(x)f(y). Solutions over the rational numbers A simple argument, involving only elementary algebra, demonstrates that the set of additive maps f : V → W, where V, W are vector spaces over an extension field of ℚ, is identical to the set of ℚ-linear maps from V to W. Theorem: Let f : ℚ → ℝ be an additive function. Then f is ℚ-linear. Proof: We want to prove that any solution f to Cauchy's functional equation satisfies f(qv) = qf(v) for any q ∈ ℚ and v ∈ ℚ. Let v ∈ ℚ. First note f(0) = f(0 + 0) = f(0) + f(0), hence f(0) = 0, and therewith 0 = f(0) = f(v + (−v)) = f(v) + f(−v), from which follows f(−v) = −f(v). Via induction, f(nv) = nf(v) is proved for any n ∈ ℕ. For any negative integer m we know −m ∈ ℕ, therefore f(mv) = f((−m)(−v)) = (−m)f(−v) = mf(v). Thus far we have proved f(zv) = zf(v) for any z ∈ ℤ. Let n ∈ ℕ; then f(v) = f(n(v/n)) = nf(v/n), and hence f(v/n) = (1/n)f(v). Finally, any q ∈ ℚ has a representation q = m/n with m ∈ ℤ and n ∈ ℕ, so, putting things together, f(qv) = f((m/n)v) = f(m(v/n)) = mf(v/n) = (m/n)f(v) = qf(v), q.e.d. Properties of nonlinear solutions over the real numbers We prove below that any other solutions must be highly pathological functions. In particular, it is shown that any other solution must have the property that its graph is dense in ℝ², that is, that any disk in the plane (however small) contains a point from the graph. From this it is easy to prove the various conditions given in the introductory paragraph. Existence of nonlinear solutions over the real numbers The linearity proof given above also applies to any set of the form αℚ = {αq : q ∈ ℚ}, a scaled copy of the rationals. This shows that only linear solutions are permitted when the domain of f is restricted to such sets. Thus, in general, we have f(qx) = qf(x) for all q ∈ ℚ and x ∈ ℝ. However, as we will demonstrate below, highly pathological solutions can be found for functions f : ℝ → ℝ based on these linear solutions, by viewing the reals as a vector space over the field of rational numbers. Note, however, that this method is nonconstructive, relying as it does on the existence of a (Hamel) basis for any vector space, a statement proved using Zorn's lemma.
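Although no such pathological f can be exhibited on all of ℝ, the construction can be made concrete on a small ℚ-subspace. A minimal sketch (purely illustrative: 1 and √2 stand in for two Hamel basis elements, and coordinates are kept as exact rationals):

```python
from fractions import Fraction

SQRT2 = 2 ** 0.5  # float value, used only for printing

# An element a + b*sqrt(2) of the Q-span of {1, sqrt(2)} is stored as the
# exact coordinate pair (a, b).  Choosing f(1) = 1 and f(sqrt(2)) = 0 and
# extending Q-linearly gives an additive map on this subspace.
def f(a: Fraction, b: Fraction) -> Fraction:
    return a  # 1*a + 0*b

x = (Fraction(1), Fraction(2))   # x = 1 + 2*sqrt(2)
y = (Fraction(3), Fraction(-1))  # y = 3 - sqrt(2)

# Additivity holds exactly: f(x + y) = f(x) + f(y).
assert f(x[0] + y[0], x[1] + y[1]) == f(*x) + f(*y)

# But f is not multiplication by a constant: f(x)/x varies.
for a, b in (x, y):
    val = float(a) + float(b) * SQRT2
    print(f"x = {val:+.4f}  f(x) = {f(a, b)}  f(x)/x = {float(f(a, b)) / val:+.4f}")
```

Restricted to this two-dimensional ℚ-subspace, f is additive and ℚ-linear yet not of the form f(x) = cx; the Hamel-basis argument below extends exactly this behaviour to all of ℝ.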
(In fact, the existence of a basis for every vector space is logically equivalent to the axiom of choice.) There exist models where all sets of reals are measurable, which are consistent with ZF + DC, and therein all solutions are linear. To show that solutions other than the ones defined by f(x) = cx exist, we first note that because every vector space has a basis, there is a basis ℬ for ℝ over the field ℚ, i.e. a set ℬ ⊂ ℝ with the property that any x ∈ ℝ can be expressed uniquely as x = Σᵢ∈I λᵢxᵢ, where {xᵢ}ᵢ∈I is a finite subset of ℬ and each λᵢ is in ℚ. We note that because no explicit basis for ℝ over ℚ can be written down, the pathological solutions defined below likewise cannot be expressed explicitly. As argued above, the restriction of f to xᵢℚ must be a linear map for each xᵢ ∈ ℬ. Moreover, because xᵢq ↦ f(xᵢ)q for q ∈ ℚ, it is clear that f(xᵢ)/xᵢ is the constant of proportionality. In other words, f : xᵢℚ → ℝ is the map y ↦ [f(xᵢ)/xᵢ]y. Since any x ∈ ℝ can be expressed as a unique (finite) linear combination of the xᵢs, and f is additive, f(x) is well-defined for all x ∈ ℝ and is given by f(x) = f(Σᵢ∈I λᵢxᵢ) = Σᵢ∈I λᵢf(xᵢ). It is easy to check that f is a solution to Cauchy's functional equation given a definition of f on the basis elements, f : ℬ → ℝ. Moreover, it is clear that every solution is of this form. In particular, the solutions of the functional equation are linear if and only if f(xᵢ)/xᵢ is constant over all xᵢ ∈ ℬ. Thus, in a sense, despite the inability to exhibit a nonlinear solution, "most" (in the sense of cardinality) solutions to the Cauchy functional equation are actually nonlinear and pathological. See also References External links Solution to the Cauchy Equation Rutgers University The Hunt for Addi(c)tive Monster Functional equations
Cauchy's functional equation
Mathematics
1,074
14,444,896
https://en.wikipedia.org/wiki/OGLE-TR-182
OGLE-TR-182 is a dim, magnitude-17 star in the constellation Carina, at a distance of approximately 12,700 light-years. Planetary system This star is home to the transiting extrasolar planet OGLE-TR-182b, discovered in October 2007. See also List of extrasolar planets Optical Gravitational Lensing Experiment OGLE References External links G-type main-sequence stars Planetary transit variables Carina (constellation) Planetary systems with one confirmed planet
OGLE-TR-182
Astronomy
95
6,392,687
https://en.wikipedia.org/wiki/Tripolium%20pannonicum
Tripolium pannonicum, called sea aster or seashore aster and often known by the synonyms Aster tripolium or Aster pannonicus, is a flowering plant, native to Eurasia and northern Africa, that is confined in its distribution to salt marshes, estuaries and occasionally to inland salt works. It is a perennial growing up to 50 cm tall with fleshy lanceolate leaves and purple ray florets flowering from July to September. The plants tend to be short-lived and populations need significant new recruitment each year from new seedlings. There are rayed as well as rayless varieties and only the former have long blue or white florets. The rayless form is yellow. The plant flowers well into autumn and hence provides a valuable source of nectar for late-flying butterflies such as painted lady and red admiral. Young leaves of this plant are edible and are collected for consumption on the floodplains of the Dutch province of Zeeland. Sea aster was celebrated as the subject of a definitive stamp issued by the Irish post office, An Post, designed by Susan Sex. References Astereae
Tripolium pannonicum
Chemistry
230
368,032
https://en.wikipedia.org/wiki/First%20Things%20First%201964%20manifesto
The First Things First manifesto was written on 29 November 1963 and published in 1964 by Ken Garland. It was signed by over 400 graphic designers and artists and received the backing of Tony Benn, the radical left-wing MP and activist, who published it in its entirety in The Guardian newspaper. Reacting against the rich and affluent Britain of the 1960s, it tried to re-radicalize a design industry which the signatories felt had become lazy and uncritical. Drawing on ideas shared by critical theory, the Frankfurt School, and the counter-culture of the time, it explicitly reaffirmed the belief that design is not a neutral, value-free process. It railed against a consumerist culture purely concerned with buying and selling things and tried to highlight a humanist dimension to graphic design theory. It was later updated and republished with a new group of signatories as the First Things First 2000 manifesto. External links Text of the First Things First manifesto on Design is History Published writing by Ken Garland Art manifestos Design history Graphic design 1964 documents Frankfurt School
First Things First 1964 manifesto
Engineering
213
563,950
https://en.wikipedia.org/wiki/Coesite
Coesite () is a form (polymorph) of silicon dioxide (SiO2) that is formed when very high pressure (2–3 gigapascals) and moderately high temperature (around 700 °C) are applied to quartz. Coesite was first synthesized by Loring Coes, Jr., a chemist at the Norton Company, in 1953. Occurrences In 1960, a natural occurrence of coesite was reported by Edward C. T. Chao, in collaboration with Eugene Shoemaker, from Barringer Crater, in Arizona, US, which was evidence that the crater must have been formed by an impact. After this report, the presence of coesite in unmetamorphosed rocks was taken as evidence of a meteorite impact event or of an atomic bomb explosion. It was not expected that coesite would survive in high-pressure metamorphic rocks. In metamorphic rocks, coesite was initially described in eclogite xenoliths from the mantle of the Earth that were carried up by ascending magmas; kimberlite is the most common host of such xenoliths. In metamorphic rocks, coesite is now recognized as one of the best mineral indicators of metamorphism at very high pressures (UHP, or ultrahigh-pressure metamorphism). Such UHP metamorphic rocks record subduction or continental collisions in which crustal rocks are carried to depths of about 70 km or more. Coesite is formed at pressures above about 2.5 GPa (25 kbar) and temperatures above about 700 °C, corresponding to a depth of about 70 km in the Earth. It can be preserved as mineral inclusions in other phases because, as it partially reverts to quartz, the quartz rim exerts pressure on the core of the grain, preserving the metastable grain as tectonic forces uplift and expose these rocks at the surface. As a result, the grains have a characteristic texture of a polycrystalline quartz rim (see infobox figure). Coesite has been identified in UHP metamorphic rocks around the world, including the western Alps of Italy at Dora Maira, the Ore Mountains of Germany, the Lanterman Range of Antarctica, the Kokchetav Massif of Kazakhstan, the Western Gneiss region of Norway, the Dabie-Shan Range in eastern China, the Himalayas of eastern Pakistan, and the Appalachian Mountains of Vermont. Crystal structure Coesite is a tectosilicate with each silicon atom surrounded by four oxygen atoms in a tetrahedron. Each oxygen atom is then bonded to two Si atoms to form a framework. There are two crystallographically distinct Si atoms and five different oxygen positions in the unit cell. Although the unit cell is close to being hexagonal in shape ("a" and "c" are nearly equal and β nearly 120°), it is inherently monoclinic and cannot be hexagonal. The crystal structure of coesite is similar to that of feldspar and consists of silicon dioxide tetrahedra arranged in four-membered Si4O8 and eight-membered Si8O16 rings. The rings are further arranged into chains. This structure is metastable within the stability field of quartz: coesite will eventually decay back into quartz with a consequent volume increase, although the metamorphic reaction is very slow at the low temperatures of the Earth's surface. The crystal symmetry is monoclinic C2/c, No. 15, Pearson symbol mS48. See also Seifertite, forming at higher pressure than stishovite Stishovite, a higher-pressure polymorph References External links Coesite page Barringer Meteor Crater science education page Impact event minerals Silica polymorphs Monoclinic minerals Minerals in space group 15 Silicon dioxide
Coesite
Materials_science
787
40,943,987
https://en.wikipedia.org/wiki/Breakthrough%20Institute
The Breakthrough Institute is an environmental research center located in Berkeley, California. Founded in 2007 by Michael Shellenberger and Ted Nordhaus, the institute is aligned with ecomodernist philosophy. It advocates an embrace of modernization and technological development (including nuclear power and carbon capture) in order to address environmental challenges, proposing urbanization, agricultural intensification, nuclear power, aquaculture, and desalination as processes with the potential to reduce human demands on the environment and allow more room for non-human species. Since its inception, environmental scientists and academics have criticized Breakthrough's environmental positions. Popular press reception of Breakthrough's environmental ideas and policy has been mixed. Organization, funding and people The Breakthrough Institute is registered as a 501(c)(3) nonprofit organization and is supported by various public institutions and individuals. Breakthrough's executive director is Ted Nordhaus. Others associated with Breakthrough include former National Review executive editor Reihan Salam, journalist Gwyneth Cravens, political scientist Roger A. Pielke Jr., sociologist Steve Fuller, and environmentalist Stewart Brand. Nordhaus and Shellenberger have written on subjects ranging from positive treatments of nuclear energy and shale gas to critiques of the planetary boundaries hypothesis. The Breakthrough Institute has argued that climate policy should focus on higher levels of public funding for technology innovation to "make clean energy cheap", and has been critical of climate policies such as cap and trade and carbon pricing. Programs and philosophy The Breakthrough Institute maintains programs in energy, conservation, and food. Its website states that the energy research is "focused on making clean energy cheap through technology innovation to deal with both global warming and energy poverty." The conservation work "seeks to offer pragmatic new frameworks and tools for navigating" the challenges of the Anthropocene, offering up nuclear energy, synthetic fertilizers, and genetically modified foods as solutions. Jonathan Symons, Senior Lecturer at Macquarie University, Australia, has written an extensive survey of the Breakthrough Institute and its philosophy. He argues that ecomodernism is best understood as a social democratic response to environmental challenges, and that the Breakthrough Institute's argument for state investment in the development and deployment of zero-carbon technologies aligns with the IPCC's position that new technologies are crucial to avoiding dangerous climate change. Criticism Scholars such as Professor of American and Environmental Studies Julie Sze and environmental humanist Michael Ziser criticize Breakthrough's philosophy as one that believes "community-based environmental justice poses a threat to the smooth operation of a highly capitalized, global-scale Environmentalism." Further, the environmental and art historian T. J. Demos has argued that Breakthrough's ideas present "nothing more than a bad utopian fantasy" that functions to support the oil and gas industry and works as "an apology for nuclear energy." Journalist Paul D. Thacker alleged that the Breakthrough Institute is an example of a think tank which lacks intellectual rigour, promoting contrarian reasoning and cherry-picking evidence.
The institute has also been criticized for promoting industrial agriculture and processed foodstuffs while also accepting donations from the Nathan Cummings Foundation, whose board members have financial ties to processed food companies that rely heavily on industrial agriculture. After an IRS complaint about potential improper use of 501(c)(3) status, the institute no longer lists the Nathan Cummings Foundation as a donor. However, as Thacker has noted, the institute's funding remains largely opaque. Climate scientist Michael E. Mann also questions the motives of the Breakthrough Institute. According to Mann, the self-declared mission of the BTI is to look for a breakthrough to solve the climate problem; however, Mann states that the BTI "appears to be opposed to anything - be it a price on carbon or incentives for renewable energy - that would have a meaningful impact." He notes that the BTI "remains curiously preoccupied with opposing advocates for meaningful climate action and is coincidentally linked to natural gas interests" and criticizes the BTI for advocating "continued exploitation of fossil fuels." Mann also observes that the BTI on the one hand seems to be "very pessimistic" about renewable energy, while on the other hand "they are extreme techno-optimists" regarding geoengineering. Publications "The Death of Environmentalism: Global Warming Politics in a Post-Environmental World" In 2004, Breakthrough founders Ted Nordhaus and Michael Shellenberger coauthored the essay "The Death of Environmentalism: Global Warming Politics in a Post-Environmental World." The paper argued that environmentalism is incapable of dealing with climate change and should "die" so that a new politics can be born. The paper was criticized by members of the mainstream environmental movement. Former Sierra Club Executive Director Carl Pope called the essay "unclear, unfair and divisive," and said it contained multiple factual errors and misinterpretations. However, former Sierra Club President Adam Werbach praised the authors' arguments. Former Greenpeace Executive Director John Passacantando said in 2005, referring to both Shellenberger and his coauthor Ted Nordhaus, "These guys laid out some fascinating data, but they put it in this over-the-top language and did it in this in-your-face way." Michel Gelobter and other environmental experts and academics wrote The Soul of Environmentalism: Rediscovering transformational politics in the 21st century in response, criticizing "Death" for demanding increased technological innovation rather than addressing the systemic concerns of people of color. Writing in The New York Times in 2008, Matthew Yglesias said that "Nordhaus and Shellenberger persuasively argue, environmentalists must stop congratulating themselves for their own willingness to confront inconvenient truths and must focus on building a politics of shared hope rather than relying on a politics of fear," adding that the paper "is more convincing in its case for a change in rhetoric." Break Through: From the Death of Environmentalism to the Politics of Possibility In 2007, Nordhaus and Shellenberger published their book Break Through: From the Death of Environmentalism to the Politics of Possibility. The book argues for a "post-environmental" politics that abandons the environmentalist focus on nature protection for a new focus on technological innovation to create a new, stronger U.S. economy.
The Wall Street Journal wrote that "If heeded, Nordhaus and Shellenberger's call for an optimistic outlook—embracing economic dynamism and creative potential—will surely do more for the environment than any U.N. report or Nobel Prize." NPR's science correspondent Richard Harris listed Break Through on his "recommended reading list" for climate change. However, Julie Sze and Michael Ziser argued that Break Through continued the trend Gelobter pointed out, namely the authors' commitment to technological innovation and economic growth instead of a focus on the systemic inequalities that create environmental injustices. Specifically, Sze and Ziser argue that Nordhaus and Shellenberger's "evident relish in their notoriety as the 'sexy' cosmopolitan 'bad boys' of environmentalism (their own words) introduces some doubt about their sincerity and reliability." The authors asserted that Shellenberger's work fails "to incorporate the aims of environmental justice while actively trading on suspect political tropes," such as blaming China and other nations as large-scale polluters so that the United States may begin and continue nationalistic, technology-based research-and-development environmentalism while continuing to emit more greenhouse gases than most other nations. In turn, Shellenberger and Nordhaus seek to move away from proven environmental justice tactics, "calling for a moratorium" on "community organizing." Such technology-based "approaches like those of Nordhaus and Shellenberger miss entirely" the "structural environmental injustice" that natural disasters like Hurricane Katrina make visible. Joseph Romm, a former US Department of Energy official now with the Center for American Progress, argued that "Pollution limits are far, far more important than R&D for what really matters -- reducing greenhouse-gas emissions and driving clean technologies into the marketplace." Environmental journalist David Roberts, writing in Grist, stated that while the BTI and its founders garner much attention, their policy is lacking, and ultimately they "receive a degree of press coverage that wildly exceeds their intellectual contributions." Reviewers for the San Francisco Chronicle, the American Prospect, and the Harvard Law Review argued that a critical reevaluation of green politics was unwarranted because global warming had become a high-profile issue and the Democratic Congress was preparing to act. "An Ecomodernist Manifesto" In April 2015, "An Ecomodernist Manifesto" was issued by John Asafu-Adjaye, Linus Blomqvist, Stewart Brand, Barry Brook, Ruth DeFries, Erle Ellis, Christopher Foreman, David Keith, Martin Lewis, Mark Lynas, Ted Nordhaus, Roger A. Pielke, Jr., Rachel Pritzker, Joyashree Roy, Mark Sagoff, Michael Shellenberger, Robert Stone, and Peter Teague. It proposed dropping the goal of "sustainable development" and replacing it with a strategy to shrink humanity's footprint by using natural resources more intensively through technological innovation. The authors argue that economic development is necessary to preserve the environment. According to The New Yorker, "most of the criticism of [the Manifesto] was more about tone than content. The manifesto's basic arguments, after all, are hardly radical. To wit: technology, thoughtfully applied, can reduce the suffering, human and otherwise, caused by climate change; ideology, stubbornly upheld, can accomplish the opposite." At The New York Times, Eduardo Porter wrote approvingly of ecomodernism's alternative approach to sustainable development.
In an article titled "Manifesto Calls for an End to 'People Are Bad' Environmentalism", Slate's Eric Holthaus wrote: "It's inclusive, it's exciting, and it gives environmentalists something to fight for, for a change." The science journal Nature published an editorial on the manifesto. The Manifesto was met with critiques similar to Gelobter's evaluation of "Death" and Sze and Ziser's analysis of Break Through. Environmental historian Jeremy Caradonna and environmental economist Richard B. Norgaard led a group of environmental scholars in a critique, arguing that ecomodernism as presented in the Manifesto "violates everything we know about ecosystems, energy, population, and natural resources," and "Far from being an ecological statement of principles, the Manifesto merely rehashes the naïve belief that technology will save us and that human ingenuity can never fail." Further, "The Manifesto suffers from factual errors and misleading statements." T. J. Demos agreed with Caradonna, writing in 2017 that "What is additionally striking about the Ecomodernist document, beyond its factual weaknesses and ecological falsehoods, is that there is no mention of social justice or democratic politics," and "no acknowledgement of the fact that big technologies like nuclear reinforce centralized power, the military-industrial complex, and the inequalities of corporate globalization." Breakthrough Journal In 2011, Breakthrough published the first issue of the Breakthrough Journal, which aims to "modernize political thought for the 21st century". The New Republic called Breakthrough Journal "among the most complete efforts to provide a fresh answer to" the question of how to modernize liberal thought, and the National Review called it "the most promising effort at self-criticism by our liberal cousins in a long time". References External links Breakthrough Institute Environmental organizations based in the San Francisco Bay Area Research institutes established in 2003 Environmental research institutes International educational organizations International research institutes
Breakthrough Institute
Environmental_science
2,395
27,956,374
https://en.wikipedia.org/wiki/Wright%20etch
The Wright etch (also Wright-Jenkins etch) is a preferential etch for revealing defects in <100>- and <111>-oriented, p- and n-type silicon wafers used for making transistors, microprocessors, memories, and other components. Revealing, identifying, and remedying such defects is essential for progress along the path predicted by Moore's law. It was developed by Margaret Wright Jenkins (1936-2018) in 1976 while working in research and development at Motorola Inc. in Phoenix, AZ. It was published in 1977. This etchant reveals clearly defined oxidation-induced stacking faults, dislocations, swirls and striations with minimal surface roughness or extraneous pitting. These defects are known causes of shorts and current leakage in finished semiconductor devices (such as transistors) should they fall across isolated junctions. A relatively low etch rate (~1 micrometre per minute) at room temperature provides etch control. The long shelf life of this etchant allows the solution to be stored in large quantities. Etch formula The composition of the Wright etch is as follows: 60 ml concentrated HF (hydrofluoric acid) 30 ml concentrated HNO3 (nitric acid) 30 ml of 5 molar CrO3 solution (mix 1 gram of chromium trioxide per 2 ml of water; this works out to almost exactly 5 mol/l because the molecular weight of chromium trioxide is very nearly 100 g/mol) 2 grams Cu(NO3)2 . 3H2O (copper(II) nitrate trihydrate) 60 ml concentrated CH3COOH (acetic acid) 60 ml H2O (deionized water) In mixing the solution, the best results are obtained by first dissolving the copper nitrate in the given amount of water; otherwise the order of mixing is not critical. Etch mechanism The Wright etch consistently produces well-defined etch figures of common defects on silicon surfaces. This effect is attributed to the interactions of the selected chemicals in the formula. Robbins and Schwartz described chemical etching of silicon in detail using an HF, HNO3 and H2O system, and an HF, HNO3, H2O and CH3COOH (acetic acid) system. Briefly, the etching of silicon is a two-step process. First, the top surface of the silicon is converted into a soluble oxide by a suitable oxidizing agent or agents. Then the resulting oxide layer is removed from the surface by dissolution in a suitable solvent, usually HF. This is a continuous process during the etch cycle. In order to delineate a crystal defect, the defect area must be oxidized at a slower or faster rate than the surrounding area, thereby forming a mound or pit during the preferential etch process. In the Wright etch, the silicon is oxidized by the HNO3, the CrO3 solution (which in this case contains the Cr2O72− dichromate ion, since the pH is low; see the phase diagram in chromic acid) and the Cu(NO3)2. The dichromate ion, a strong oxidizing agent, is considered to be the principal oxidizing agent. The ratio of HNO3 to CrO3 solution stated in the formula produces a superior etched surface; other ratios produce less desirable finishes. With the addition of a small amount of Cu(NO3)2, the definition of the defect is enhanced. Therefore, it is believed that the Cu(NO3)2 affects the localized differential oxidation rate at the defect site. The addition of the acetic acid gives the background surface of the etched silicon a smooth finish. It is theorized that this effect is attributable to the wetting action of the acetic acid, which prevents the formation of bubbles during etching. All experimental preferential etching to show defects was done on cleaned and oxidized wafers.
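As a quick check of the arithmetic behind the CrO3 solution (following the recipe's own grams-per-millilitre-of-water approximation), and as a convenience for scaling batches, a short script; the quantities are those listed in the formula above and the molecular weight is the standard value:

```python
# 1 g CrO3 per 2 ml of water  ->  ~500 g per litre  ->  ~5 mol per litre,
# because the molecular weight of CrO3 is almost exactly 100 g/mol.
MW_CRO3 = 99.99  # g/mol: Cr 52.00 + 3 * O 16.00

molarity = (1.0 / 2.0) * 1000 / MW_CRO3  # g per ml of water, scaled to litres
print(f"CrO3 solution: {molarity:.2f} M")

def scale_recipe(factor: float) -> dict:
    """Scale the Wright etch formula; volumes in ml, copper nitrate in grams."""
    base = {"HF (ml)": 60, "HNO3 (ml)": 30, "5 M CrO3 solution (ml)": 30,
            "Cu(NO3)2.3H2O (g)": 2, "CH3COOH (ml)": 60, "H2O (ml)": 60}
    return {name: amount * factor for name, amount in base.items()}

print(scale_recipe(0.5))  # a half batch
```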
All experimental preferential etching to show defects was done on cleaned and oxidized wafers. All oxidations were performed at 1200 °C in steam for 75 minutes. Figure 1 (a) shows oxidation-induced stacking faults on a <100>-oriented, 7-10 Ω-cm, boron-doped wafer after a 30-minute Wright etch (arrow A points to the shape of faults that intersect the surface, while B points to bulk faults); (b) and (c) show dislocation pits on <100>- and <111>-oriented wafers respectively after a 20-minute Wright etch. Summary This etch process is a quick and reliable method of determining the integrity of pre-processed polished silicon wafers or of revealing defects that may be induced at any point during wafer processing. It has been demonstrated that the Wright etch is superior in revealing stacking faults and dislocation etch figures when compared with the Sirtl and Secco etches, and it was often the preferred etchant for revealing defects in silicon crystals. The etch is widely used in failure analysis of electrical devices at various stages of wafer processing. Figure 2 shows a comparison of oxidation-induced stacking-fault delineation on <100>-oriented wafers after the Wright, Secco and Sirtl etches respectively. Figure 3 shows a comparison of dislocation delineation on a <100>-oriented, 10-20 Ω-cm, boron-doped wafer after oxidation and preferential etching: (a) after 20 minutes Wright etch, (b) 10 minutes Secco etch and (c) 6 minutes Sirtl etch. Figure 4 shows a comparison of dislocation delineation on a <111>-oriented, 10-20 Ω-cm, boron-doped wafer after oxidation and preferential etching: (a) after 10 minutes Wright etch, (b) 10 minutes Secco etch and (c) 3 minutes Sirtl etch; the arrows indicate slip direction. References Etching (microfabrication)
Wright etch
Materials_science
1,379
47,003,890
https://en.wikipedia.org/wiki/Penicillium%20ovatum
Penicillium ovatum is a species of fungus in the genus Penicillium which was isolated from forest soil in Kuala Lumpur in Malaysia. References ovatum Fungi described in 2011 Fungus species
Penicillium ovatum
Biology
43
46,849,118
https://en.wikipedia.org/wiki/Magnesium%20formate
Magnesium formate is the magnesium salt of formic acid. It is an inorganic compound consisting of a magnesium cation and two formate anions. It can be prepared by reacting magnesium oxide with formic acid; the dihydrate is formed on crystallization from solution. The dihydrate dehydrates at 105 °C to give the anhydrous salt, which decomposes at 500 °C to produce magnesium oxide. Magnesium formate can be used in organic syntheses.
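The preparation and thermal behaviour described above can be summarized by the following equations (balanced from the text; the gaseous by-products of the final decomposition are not specified in the source and are left unstated):

$$\mathrm{MgO} + 2\,\mathrm{HCOOH} \longrightarrow \mathrm{Mg(HCOO)_2} + \mathrm{H_2O}$$

$$\mathrm{Mg(HCOO)_2}\cdot 2\mathrm{H_2O} \xrightarrow{\;105\,^{\circ}\mathrm{C}\;} \mathrm{Mg(HCOO)_2} + 2\,\mathrm{H_2O}$$

$$\mathrm{Mg(HCOO)_2} \xrightarrow{\;500\,^{\circ}\mathrm{C}\;} \mathrm{MgO} + \text{gaseous products}$$

References Formates Magnesium compounds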
Magnesium formate
Chemistry
108
11,287,921
https://en.wikipedia.org/wiki/Otto%20van%20Veen
Otto van Veen, also known by his Latinized names Otto Venius or Octavius Vaenius (1556 – 6 May 1629), was a painter, draughtsman, and humanist active primarily in Antwerp and Brussels in the late 16th and early 17th centuries. He is known for his paintings of religious and mythological scenes, allegories and portraits, which he produced in his large workshop in Antwerp. He further designed several emblem books, and was from 1594 or 1595 to 1598 the teacher of Rubens. His role as a classically educated humanist artist (a pictor doctus) was influential on the young Rubens, who would take on that role himself. He was court painter of successive governors of the Habsburg Netherlands, including the Archdukes Albert and Isabella. Life Van Veen was born around 1556 in Leiden, as the son of Cornelis Jansz. van Veen (1519–1591), Burgomaster of Leiden, and Geertruyd Simons van Neck (born 1530). His father was a knight, Lord of Hogeveen, Desplasse, Vuerse, etc., and said to be descended from a natural son of John III, Duke of Brabant. He was also a doctor of law, legal advisor to the city of Leiden and representative of the County of Holland to the States General of the Habsburg Netherlands. Van Veen was probably a pupil of Isaac Claesz van Swanenburg until October 1572, when the capture of Leiden by the Protestant army caused the Catholic family to move to Antwerp, and then to Liège. In Liège he served for two years as a page of the Prince-Bishop of Liège. He studied there for a time under Dominicus Lampsonius and Jean Ramey. Lampsonius was a Flemish humanist, poet and painter and secretary to various Prince-Bishops of Liège. He introduced van Veen to Classicist-Humanist literature. Van Veen is documented in Rome around 1574 or 1575. He stayed there for about five years, perhaps studying with Federico Zuccari. The contemporary Flemish biographer Karel van Mander relates that van Veen subsequently worked at the courts of Rudolf II in Prague and William V of Bavaria in Munich. Prague and Munich were at the time the key centres of Northern Mannerist art and hosts to important Flemish artists such as Joris Hoefnagel and Bartholomeus Spranger. He returned to the Low Countries around 1580/81. In 1587 he became court painter to the governor of the Southern Netherlands, Alexander Farnese, Duke of Parma, at the court in Brussels, a post he held until 1592. He then moved to Antwerp, where he became a master in the Guild of St. Luke in 1593. On 19 November 1593 he bought a house in Antwerp for 1,000 Carolus guilders. Van Veen received numerous commissions for church decorations, including altarpieces for the Antwerp cathedral and a chapel in the city hall. He also set up a large workshop, at which Rubens trained from about 1594 to 1598. Van Veen maintained his connection with the Brussels court. When Archduke Ernest of Austria became governor in 1594, van Veen may have aided the archduke in acquiring important paintings by the likes of Hieronymus Bosch and Pieter Bruegel the Elder. The artist later served as dean of the Guild of St. Luke of Antwerp in 1602. He was also a member of, and in 1606 became the dean of, the Romanists. The Guild of Romanists was a society of notables and artists which was active in Antwerp from the 16th to the 18th century. It was a condition of membership that the member had visited Rome. In the 17th century, van Veen often worked for the Archdukes Albert and Isabella.
He also made paintings for the States General of the Dutch Republic, such as the series of twelve paintings dated 1613 depicting the battles of the Romans and the Batavians, based on engravings he had already published of the subject. The Archdukes Albert and Isabella appointed van Veen in 1612 as the waerdeyn ('warden') of the revived Brussels Mint. With this nomination, the Archdukes aimed to achieve two very disparate objectives. Firstly, they wanted to find a decent position for their beloved but ageing painter, and merely followed what had previously been done in 1572, when the great sculptor and medalist Jacques Jonghelinck had been made waerdeyn of the Antwerp Mint. Secondly, they needed to put at the head of the Brussels Mint a competent person, since they were then involved in launching a new series of coins as part of a general monetary reform. Van Veen appears not to have been very enthusiastic about his new appointment, as he tried to resign not long after taking up his office and applied for another position in Luxembourg. This may have been linked to the difficult relaunch of the Brussels Mint. He was also unhappy with the size of the accommodation provided to him, which was insufficient for his large family. It was 1616 before he moved with his family from Antwerp to Brussels to take up his position. He was able to make the position of waerdeyn hereditary, which allowed his descendants, starting with his son Ernest, to occupy the position for a century. Van Veen gave Jacob de Bie, an Antwerp engraver, publisher and numismatist with an interest in ancient coins, the position of maître particulier at the Brussels Mint. The maître particulier was in charge of buying the required quantity of precious metals and organizing the coin production. Van Veen died in Brussels in 1629. He had two brothers who were artists: Gijsbert van Veen (1558–1630) was a respected engraver and Pieter was an amateur painter. He was the uncle of three pastellists, Pieter's children Apollonia, Symon, and Jacobus. His daughter Gertruida, by his wife Maria Loets (Loots or Loos), also became a painter. The early artist biographer Arnold Houbraken, writing almost a century after Otto van Veen's death, considered van Veen to be the most impressive artist of his day and put his portrait on the title page of his three-volume De groote schouburgh der Nederlantsche konstschilders en schilderessen, which contained the biographies of famous Flemish and Dutch artists. Emblem books Van Veen was involved in the publication of emblem books, including Quinti Horatii Flacci emblemata (1607), Amorum emblemata (1608), the Amoris divini emblemata (1615) and the Emblemata sive symbola (1624). In these works, van Veen's skills as an artist and learned humanist are on display. These works were also influential on the further development of the genre of emblem books. Quinti Horatii Flacci Emblemata His Quinti Horatii Flacci Emblemata was first published in 1607 in Antwerp by the publisher Hieronymus Verdussen. It constituted a significant development in the design and conceptualization of emblem books as well as the optimization of their didactic impact. By transposing the works of the Roman poet Horace into innovative images through prints of a high technical quality, the work could be used for philosophical and moral meditation. Two separate editions were printed in the first year of publication.
The first edition contained only text fragments by Horace and other authors from Antiquity, primarily in Latin and mainly by Horace, each facing an allegorical engraving. In the second edition, published by the same Antwerp publisher Hieronymus Verdussen, the Latin texts were accompanied by Dutch and French quatrains, and the collection of texts and pictures started to look more like traditional emblems. In the third edition of 1612, Spanish and Italian verses were added. The book was the product of the collaboration of many artists, engravers, printers, classical scholars and van Veen. The full-page illustrations were of very high quality and positioned on the recto of every page opening, with the letterpress opposite each illustration on the verso. In the editions with vernacular verses, the Dutch and French quatrains were placed below the Latin quotes (drawn largely from Horace, but also from other sources). The Quinti Horatii Flacci Emblemata was circulated widely during the 17th and 18th centuries and was copied and pirated in France, Spain, Italy and England. The book was even used for the instruction of a future king in France, as well as a pictorial source book for the decoration of interiors. Amorum emblemata The Amorum emblemata was published in 1608 in Antwerp by Hieronymus Verdussen in three different polyglot versions: one with Latin, Dutch and French, one with Latin, Italian and French and one with Latin, English and Italian. The Amorum emblemata pictures 124 putti enacting mottoes of, and quotations by, lyricists, philosophers and ancient writers on the powers of Love. Rubens' brother Philip Rubens, who was a classical scholar, wrote the foreword to this book in Latin. Van Veen's book of love emblems followed a trend launched in Amsterdam in 1601 with the publication of Jacob de Gheyn II's Quaeris quid sit Amor, which contained 24 love emblems produced with accompanying Dutch-language verses by Daniël Heinsius. Van Veen's Amorum emblemata is wider in scope with its 124 emblems. The maxims regarding love which accompany and interpret the pictures are for the most part taken from Ovid. Aimed at young people, the emblems depict love as an overpowering drive which should be obeyed to gain happiness. The Amorum emblemata became one of the most influential emblem books of the 17th century, not only as a model for other Flemish, Dutch and foreign emblem books, but also as a source of inspiration for many artists in other fields. One of the emblems is entitled in Dutch Ghewensten Strijdt (Desired Combat) and in French Combat Heureux (Happy Combat). It depicts two putti holding bows who have shot each other with arrows. The accompanying motto in Dutch reads: d’Een lief sich gheern laet van d’ander ’t hert doorwonden/De schichten niemant wijckt, maer elck sijn borste biedt/Om eerst te zijn ghequetst, d’een d’ander niet en vliedt:Want sy met eenen wil in liefde zijn ghebonden. In English translation: "The one lover gladly lets the other one pierce its heart, neither dodges the arrows, but rather offers its chest to be the first one to be wounded, neither flees the other because they are bound in love in one desire". Further quotations by Seneca the Elder, Philostratus and Cicero, printed above the Dutch and French mottoes, address a similar theme on the same page. Amoris divini emblemata The Amoris divini emblemata was published in 1615 in Antwerp by Martinus Nutius III and Jan van Meurs.
The Amoris divini emblemata was less popular than the previous emblem books; a second impression did not appear until 1660. In his address to the reader of the book, van Veen relates how the archduchess Isabella had suggested that his earlier love emblems (Amorum emblemata, 1608) might be reworked 'in a spiritual and divine sense' since 'the effects of divine and human love are, as to the loved object, nearly equal.' The two books look very much alike: formally, the emblems share the same structure, with a Latin motto, then a group of quotations in Latin, and finally verses in vernacular languages on the left-hand page, and the picture itself on the right-hand page. The visual unity of Amorum emblemata, derived mainly from the presence of the Cupid figure in all emblems but one, is recreated in Amoris divini emblemata through the ubiquitous presence of the figures of Amor Divinus (Divine Love) and the soul. While there is some re-use of imagery from the earlier publication, the Amoris divini emblemata also contains many emblems which are not converted from Amorum emblemata. Amoris divini emblemata was the starting point of a new tradition in religious emblem books and had an important influence on Herman Hugo's Pia desideria (1624). Notes References Belkin, Kristin Lohse: Rubens. Phaidon Press, 1998. Bertini, Giuseppe: "Otto van Veen, Cosimo Masi and the Art Market in Antwerp at the End of the Sixteenth Century." Burlington Magazine, vol. 140, no. 1139 (Feb. 1998), pp. 119–120. Montone, Tina: "'Dolci ire, dolci sdegni, e dolci paci': The Role of the Italian Collaborator in the Making of Otto Vaenius's Amorum Emblemata." In Alison Adams and Marleen van der Weij, Emblems of the Low Countries: A Book Historical Perspective. Glasgow Emblem Studies, vol. 8. Glasgow: University of Glasgow, 2003, p. 47. Rijksmuseum Amsterdam, Otto van Veen's Batavians defeating the Roman (sic) Van de Velde, Carl: "Veen [Vaenius; Venius], Otto van." Grove Art Online. Oxford University Press [accessed 18 May 2007]. Veen, Otto van: Amorum Emblemata... Emblemes of Love, with verses in Latin, English, and Italian. Antwerp: [Typis Henrici Swingenii] Venalia apud Auctorem, 1608. External links Othonis Vaenii emblemata Emblem Project Utrecht – 3 editions of emblem books by Otto van Veen Amorum Emblemata 1608 on Internet Archive. Vita D. Thomae Aquinatis a manuscript by Otto van Veen (1610) 1550s births 1629 deaths Flemish Mannerist painters Flemish Renaissance humanists Artists from Leiden Dutch male painters Flemish Baroque painters Painters from Antwerp Dutch court painters Peter Paul Rubens
Otto van Veen
Engineering
2,957
23,824,756
https://en.wikipedia.org/wiki/Retrostium
Retrostium is a fungal genus in the division Ascomycota. It is a monotypic genus, containing the single species Retrostium amphiroae, which was described as new to science from Japan in 1997. Species Fungorum places it within the family Spathulosporaceae, which places it in the order Spathulosporales. References Fungi of Japan Monotypic Ascomycota genera Sordariomycetes Fungus species
Retrostium
Biology
94
18,952,167
https://en.wikipedia.org/wiki/Formulary%20%28pharmacy%29
A formulary is a list of pharmaceutical drugs, often decided upon by a group of people, for various reasons such as insurance coverage or use at a medical facility. Traditionally, a formulary contained a collection of formulas for the compounding and testing of medication (a resource closer to what would be referred to as a pharmacopoeia today). Today, the main function of a prescription formulary is to specify particular medications that are approved to be prescribed at a particular hospital, in a particular health system, or under a particular health insurance policy. The development of prescription formularies is based on evaluations of efficacy, safety, and cost-effectiveness of drugs. Depending on the individual formulary, it may also contain additional clinical information, such as side effects, contraindications, and doses. By the turn of the millennium, 156 countries had national or provincial essential medicines lists and 135 countries had national treatment guidelines. Australia In Australia, where there is a public health care system, medications are subsidised under the Pharmaceutical Benefits Scheme (PBS); the medications available under the PBS, and the indications for which they can be obtained under the scheme, are listed in at least two places, the PBS website and the Australian Medicines Handbook. Canada The Prescription Drug List is the national formulary that lists all medical ingredients for human and animal use available with a prescription, with the exception of those under the Controlled Drugs and Substances Act. The Canadian Agency for Drugs and Technologies in Health (CADTH) is the advisory body that evaluates new medical technologies and prescription medication. Based on its recommendations, the provincial and territorial governments decide whether or not to implement changes to their healthcare system and public drug formularies. Provincial and territorial governments provide partial prescription drug coverage, and the overall drug payment is a mix of public taxation, private insurance and out-of-pocket expenses. Insurance coverage differs regionally, although each public drug coverage plan must meet standards set by the federal government. Regional health authorities are in charge of regulating and providing insurance for their residents, while the federal government provides insurance for specifically eligible veterans, First Nations, Inuit, Canadian Forces, federal inmates and some refugees. United States In the US, where a system of quasi-private healthcare is in place, a formulary is a list of prescription drugs available to enrollees, and a tiered formulary provides financial incentives for patients to select lower-cost drugs. For example, under a 3-tier formulary, the first tier typically includes generic drugs with the lowest cost sharing (e.g., 10% coinsurance), the second includes preferred brand-name drugs with higher cost sharing (e.g., 25%), and the third includes non-preferred brand-name drugs with the highest cost sharing (e.g., 40%), as illustrated in the sketch below. When used appropriately, formularies can help manage drug costs imposed on the insurance policy. However, for drugs that are not on the formulary, patients must pay a larger percentage of the cost of the drug, sometimes 100%. Formularies vary between drug plans and differ in the breadth of drugs covered and costs of co-pay and premiums. Most formularies cover at least one drug in each drug class, and encourage generic substitution (also known as a preferred drug list).
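To make the tiered cost-sharing arithmetic concrete, the following sketch computes a patient's out-of-pocket cost under the hypothetical three-tier coinsurance rates quoted above (the tiers and rates mirror the example in the text; they are not any real plan's schedule):

```python
from typing import Optional

# Hypothetical 3-tier coinsurance schedule mirroring the example above.
COINSURANCE = {1: 0.10,   # tier 1: generic drugs
               2: 0.25,   # tier 2: preferred brand-name drugs
               3: 0.40}   # tier 3: non-preferred brand-name drugs

def out_of_pocket(price: float, tier: Optional[int]) -> float:
    """Patient's share of a drug's price; off-formulary drugs cost full price."""
    if tier is None:                 # drug not on the formulary
        return price
    return price * COINSURANCE[tier]

print(out_of_pocket(100.0, 1))      # 10.0  (generic)
print(out_of_pocket(100.0, 3))      # 40.0  (non-preferred brand)
print(out_of_pocket(100.0, None))   # 100.0 (not covered)
```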
Hospital formularies have been shown to cause issues at discharge when they are not aligned with patients' outpatient drug insurance plans. United Kingdom In the UK, the National Health Service (NHS) provides publicly funded universal health care, financed by national health insurance. Here, formularies exist to specify which drugs are available on the NHS. The two main reference sources providing this information are the British National Formulary (BNF) and the Drug Tariff. There is a section in the Drug Tariff, known unofficially as the "Blacklist", detailing medicines which are not to be prescribed under the NHS and must be paid for privately by the patient. Recommendations for additions to the NHS formulary are provided by the National Institute for Health and Care Excellence. In addition to this, local NHS hospital trusts and Primary Care (General Practitioners) Clinical Commissioning Groups (CCGs) produce their own lists of medicines deemed preferable for prescribing within their locality or organisation; such lists are usually a subset of the more comprehensive BNF. These formularies are not absolutely binding, and physicians may prescribe a non-formulary medicine if they consider it necessary and justifiable. Often, these local formularies are shared between a Primary Care Organisation (PCO) and hospitals within that PCO's jurisdiction, in order to facilitate the transfer of a patient from primary care to secondary care, thus causing fewer "interfacing" issues in the process. As in the United States, the NHS actively encourages prescribing of generic drugs, in order to save more of the budget allocated to them by the Department of Health. National formulary A national formulary contains a list of medicines that are approved for prescription throughout the country, indicating which products are interchangeable. It includes key information on the composition, description, selection, prescribing, dispensing and administration of medicines. Those drugs considered less suitable for prescribing are clearly identified. Examples of national formularies are: Australian Pharmaceutical Formulary (APF) Österreichisches Arzneibuch (ÖAB), the Austrian national formulary British National Formulary (BNF) and British National Formulary for Children (BNFC) Farmacotherapeutisch Kompas (FK), the Dutch national formulary Formularium Nasional (Fornas), the Indonesian national formulary Hrvatska Farmakopeja, the Croatian national formulary Japan National Health Insurance Drug Price List Pharmaceutical Schedule, New Zealand's publicly funded national formulary United States National Formulary, later bought out and merged with the United States Pharmacopeia (USP-NF) Farmaceutiska Specialiteter i Sverige (FASS), the Swedish national formulary. Usage of the database is free of charge and it has no promotional texts or advertising. FASS has been developed by the Swedish Association of the Pharmaceutical Industry (LIF) in close cooperation with Sweden's pharmaceutical industry, with additional assistance from the Medical Products Agency, the Pharmaceutical Benefits Board and the National Corporation of Pharmacies. Information on interactions is derived from a joint development between the Department of Pharmaceutical Biosciences at Uppsala University and the Swedish Association of the Pharmaceutical Industry (LIF).
References External links A National Formulary for Canada, Department of Economics, University of Calgary, 2005 (archived 6 July 2011) The Kazakhstan National Formulary (archived 27 March 2022) Pharmacy Pharmaceuticals policy Pharmacological classification systems Pharmaceutical terminology Health care management Health care quality Health economics
Formulary (pharmacy)
Chemistry
1,380
17,140,858
https://en.wikipedia.org/wiki/Lumirubin
Lumirubin is a structural isomer of bilirubin that is formed during the phototherapy used to treat neonatal jaundice. This polar isomer, produced under the blue-green light of phototherapy, binds to albumin, and its effects are considered less toxic than those of bilirubin. Lumirubin is excreted into bile or urine. ZZ, ZE, EE and EZ are the four configurational isomers of bilirubin; ZZ is the stable, least soluble form. The other forms are relatively soluble and are known as lumirubins. Phototherapy converts the ZZ form into lumirubins. Monoglucuronylated lumirubins are easily excreted. References Hepatology Tetrapyrroles
Lumirubin
Chemistry,Biology
168
15,095,266
https://en.wikipedia.org/wiki/Horst%20Sachs
Horst Sachs (27 March 1927 – 25 April 2016) was a German mathematician, an expert in graph theory, and a recipient of the Euler Medal (2000). He earned the degree of Doctor of Science (Dr. rer. nat.) from the Martin-Luther-Universität Halle-Wittenberg in 1958. Following his retirement in 1992, he was professor emeritus at the Institute of Mathematics of the Technische Universität Ilmenau. His encyclopedic book on spectral graph theory, Spectra of Graphs: Theory and Applications (with Dragoš Cvetković and Michael Doob), has gone through several editions and was translated into several languages. Two theorems in graph theory bear his name; both are sketched below. One of them relates the coefficients of the characteristic polynomial of a graph to certain structural features of the graph. The other is a simple relation between the characteristic polynomials of a graph and its line graph. Sachs subgraphs are also named after him.
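For orientation, the two results can be stated as follows (a standard formulation from spectral graph theory, paraphrased here rather than quoted from Sachs's papers). Writing the characteristic polynomial of a graph $G$ on $n$ vertices as $\phi(G,x) = \sum_{i=0}^{n} a_i x^{n-i}$ with $a_0 = 1$, the coefficient theorem states

$$a_i \;=\; \sum_{S \in \mathcal{S}_i} (-1)^{p(S)}\, 2^{c(S)},$$

where $\mathcal{S}_i$ is the set of Sachs subgraphs on $i$ vertices (subgraphs all of whose components are single edges or cycles), $p(S)$ is the number of components of $S$, and $c(S)$ is the number of cycles among them. The line-graph relation, in the special case of a $k$-regular graph $G$ with $n$ vertices and $m$ edges, takes the form

$$\phi(L(G),x) \;=\; (x+2)^{\,m-n}\,\phi(G,\; x - k + 2).$$

References 1927 births 20th-century German mathematicians Graph theorists 2016 deaths Academic staff of Technische Universität Ilmenau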
Horst Sachs
Mathematics
221
24,341,858
https://en.wikipedia.org/wiki/C18H21NO2
The molecular formula C18H21NO2 (molar mass: 283.36 g/mol) may refer to: N,O-Dimethyl-4-(2-naphthyl)piperidine-3-carboxylate HDMP-28 Methyldesorphine 6-Methylenedihydrodesoxymorphine Naranol PRL-8-53 SCHEMBL5334361
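The quoted molar mass can be reproduced from standard atomic weights (a small illustrative calculation, not part of the original entry; small differences arise from the atomic-weight values used):

```python
# Molar mass of C18H21NO2 from standard atomic weights (g/mol).
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}
FORMULA = {"C": 18, "H": 21, "N": 1, "O": 2}

molar_mass = sum(ATOMIC_WEIGHT[el] * n for el, n in FORMULA.items())
print(f"{molar_mass:.2f} g/mol")  # 283.37, close to the quoted 283.36
```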
C18H21NO2
Chemistry
105
25,782,336
https://en.wikipedia.org/wiki/Grand%20Ditch
The Grand Ditch, also known as the Grand River Ditch and originally known as the North Grand River Ditch, is a water diversion project in the Never Summer Mountains, in northern Colorado in the United States. It is long, wide, and deep on average. Streams and creeks that flow from the highest peaks of the Never Summer Mountains are diverted into the ditch, which flows over the Continental Divide at La Poudre Pass at , delivering the water into Long Draw Reservoir and the Cache La Poudre River for eastern plains farmers. The water would otherwise have gone into the Colorado River (formerly known as the "Grand River", from which the name of the ditch is derived), which flows west towards the Pacific; instead, the Cache La Poudre River flows east and, via the Mississippi River, discharges into the Gulf of Mexico. The ditch was started in 1890 and was not completed until 1936. The ditch diverts between 20 and 40% of the runoff from the Never Summer Mountains, and delivers an average of . It significantly impacts the ecology in the valley below, and the National Park Service has fought in court to reduce the amount of diverted water. History The Grand Ditch was built by the Larimer County Ditch Company, mostly by Japanese crews working with hand tools and black powder. The first diverted water flowed east across La Poudre Pass on October 15, 1890. The Water Supply and Storage Company took over ownership in 1891. The ditch was extended every year, and reached three miles in 1894. After the creation of Rocky Mountain National Park, increasing demand for water led to the creation of Long Draw Reservoir, and the required land was taken away from the park by Congress in 1923. Long Draw Reservoir opened in 1930. The Grand Ditch was then lengthened one final time, in September 1936; this also required an act of Congress, since the ditch was by then in a national park. In May 2003 a section of the ditch breached about south of La Poudre Pass, causing the water to cascade down the slopes and into the Colorado River. The flood left a visible scar on the mountainside: 20,000 trees were downed and 47,600 cubic yards of debris ended up in Lulu Creek and the headwaters of the Colorado River. The Water Supply and Storage Company reached a settlement to pay $9 million in damages to Rocky Mountain National Park. Periodic overflows of the ditch, of which the 2003 flood was but one example, have left debris all over the western side of the valley; the eastern side is not affected. The Grand River Ditch was listed on the National Register of Historic Places on September 29, 1976. It can be hiked starting from the Bowen/Baker Trailhead in Rocky Mountain National Park. See also National Register of Historic Places listings in Grand County, Colorado The Colorado-Big Thompson Project, a 1930s tunnel project for diverting Colorado River waters in the same area References External links Historic American Engineering Record (HAER) documentation, filed under Grand Lake, Grand County, CO: Grand Ditch Breach Restoration Plan, Factsheet 2010 National Register of Historic Places in Rocky Mountain National Park Infrastructure completed in 1936 Water supply infrastructure on the National Register of Historic Places Buildings and structures in Grand County, Colorado Historic American Engineering Record in Colorado Interbasin transfer 1936 establishments in Colorado
Grand Ditch
Environmental_science
660
5,638,081
https://en.wikipedia.org/wiki/Sage%20oil
Sage oils are essential oils that come in several varieties: Dalmatian sage oil Also called English, Garden, and True sage oil. Made by steam distillation of partially dried Salvia officinalis leaves. Yields range from 0.5 to 1.0%. A colorless to yellow liquid with a warm camphoraceous, thujone-like odor and a sharp, bitter taste. The main components of the oil are thujone (50%), camphor, pinene, and cineol. Clary sage oil Sometimes called muscatel. Made by steam or water distillation of Salvia sclarea flowering tops and foliage. Yields range from 0.7 to 1.5%. A pale yellow to yellow liquid with a herbaceous odor and a winelike bouquet. Produced in large quantities in France, Russia and Morocco. The oil contains linalyl acetate, linalool and other terpene alcohols (sclareol), as well as their acetates. Spanish sage oil Made by steam distillation of the leaves and twigs of S. officinalis subsp. lavandulifolia (syn. S. lavandulifolia). A colorless to pale yellow liquid with a characteristic camphoraceous odor. Unlike Dalmatian sage oil, Spanish sage oil contains no or only traces of thujone; camphor and eucalyptol are the major components. Greek sage oil Made by steam distillation of Salvia triloba leaves; the plant grows in Greece and Turkey. Yields range from 0.25% to 4%. The oil contains camphor, thujone, and pinene, the dominant component being eucalyptol. Judaean sage oil Made by steam distillation of Salvia judaica leaves. The oil contains mainly cubebene and ledol. References Essential oils Flavors
Sage oil
Chemistry
391
89,232
https://en.wikipedia.org/wiki/5-Methylcytosine
5-Methylcytosine is a methylated form of the DNA base cytosine (C) that regulates gene transcription and takes several other biological roles. When cytosine is methylated, the DNA maintains the same sequence, but the expression of methylated genes can be altered (the study of this is part of the field of epigenetics). 5-Methylcytosine is incorporated in the nucleoside 5-methylcytidine. In 5-methylcytosine, a methyl group is attached to the 5th atom in the 6-atom ring, counting counterclockwise from the NH-bonded nitrogen at the six o'clock position. This methyl group distinguishes 5-methylcytosine from cytosine. Discovery While trying to isolate the bacterial toxin responsible for tuberculosis, W.G. Ruppel isolated a novel nucleic acid named tuberculinic acid in 1898 from Tubercle bacillus. The nucleic acid was found to be unusual, in that it contained in addition to thymine, guanine and cytosine, a methylated nucleotide. In 1925, Johnson and Coghill successfully detected a minor amount of a methylated cytosine derivative as a product of hydrolysis of tuberculinic acid with sulfuric acid. This report was severely criticized because their identification was based solely on the optical properties of the crystalline picrate, and other scientists failed to reproduce the same result. But its existence was ultimately proven in 1948, when Hotchkiss separated the nucleic acids of DNA from calf thymus using paper chromatography, by which he detected a unique methylated cytosine, quite distinct from conventional cytosine and uracil. After seven decades, it turned out that it is also a common feature in different RNA molecules, although the precise role is uncertain. In vivo The function of this chemical varies significantly among species: In bacteria, 5-methylcytosine can be found at a variety of sites, and is often used as a marker to protect DNA from being cut by native methylation-sensitive restriction enzymes. In plants, 5-methylcytosine occurs at CpG, CpHpG and CpHpH sequences (where H = A, C or T). In fungi and animals, 5-methylcytosine predominantly occurs at CpG dinucleotides. Most eukaryotes methylate only a small percentage of these sites, but 70-80% of CpG cytosines are methylated in vertebrates. In mammalian cells, clusters of CpG at the 5' ends of genes are termed CpG islands. 1% of all mammalian DNA is 5mC. While spontaneous deamination of cytosine forms uracil, which is recognized and removed by DNA repair enzymes, deamination of 5-methylcytosine forms thymine. This conversion of a DNA base from cytosine (C) to thymine (T) can result in a transition mutation. In addition, active enzymatic deamination of cytosine or 5-methylcytosine by the APOBEC family of cytosine deaminases could have beneficial implications on various cellular processes as well as on organismal evolution. The implications of deamination on 5-hydroxymethylcytosine, on the other hand, remains less understood. In vitro The NH2 group can be removed (deamination) from 5-methylcytosine to form thymine with use of reagents such as nitrous acid; cytosine deaminates to uracil (U) under similar conditions. 5-methylcytosine is resistant to deamination by bisulfite treatment, which deaminates cytosine residues. This property is often exploited to analyze DNA cytosine methylation patterns with bisulfite sequencing. Addition and regulation with DNMTs (Eukaryotes) 5mC marks are placed on genomic DNA via DNA methyltransferases (DNMTs). 
There are 5 DNMTs in humans: DNMT1, DNMT2, DNMT3A, DNMT3B, and DNMT3L, and in algae and fungi 3 more are present (DNMT4, DNMT5, and DNMT6). DNMT1 contains the replication foci targeting sequence (RFTS) and the CXXC domain, which catalyze the addition of 5mC marks. RFTS directs DNMT1 to loci of DNA replication to assist in the maintenance of 5mC on daughter strands during DNA replication, whereas CXXC contains a zinc finger domain for de novo addition of methylation to the DNA. DNMT1 was found to be the predominant DNA methyltransferase in all human tissue. Primarily, DNMT3A and DNMT3B are responsible for de novo methylation, and DNMT1 maintains the 5mC mark after replication. DNMTs can interact with each other to increase methylating capability; for example, 2 DNMT3L can form a complex with 2 DNMT3A to improve interactions with the DNA, facilitating the methylation. Changes in the expression of DNMTs result in aberrant methylation: overexpression produces increased methylation, whereas disruption of the enzyme decreases levels of methylation. The mechanism of the addition is as follows: first, a cysteine residue on the DNMT's PCQ motif performs a nucleophilic attack on carbon 6 of the cytosine nucleotide that is to be methylated. S-Adenosylmethionine then donates a methyl group to carbon 5. A base in the DNMT enzyme then removes the residual hydrogen on carbon 5, restoring the double bond between carbons 5 and 6 in the ring and producing the 5-methylcytosine base. Demethylation After a cytosine is methylated to 5mC, it can be reverted to its initial state via multiple mechanisms. Passive DNA demethylation by dilution eliminates the mark gradually through replication, when maintenance by DNMT is lacking. In active DNA demethylation, a series of oxidations converts it to 5-hydroxymethylcytosine (5hmC), 5-formylcytosine (5fC), and 5-carboxylcytosine (5caC); the latter two are eventually excised by thymine DNA glycosylase (TDG), followed by base excision repair (BER) to restore the cytosine. TDG knockout produced a 2-fold increase of 5fC without any statistically significant change in levels of 5hmC, indicating that 5mC must be iteratively oxidized at least twice before its full demethylation. The oxidation occurs through the TET (ten-eleven translocation) family of dioxygenases (TET enzymes), which can convert 5mC, 5hmC, and 5fC to their oxidized forms. However, the enzymes have the greatest preference for 5mC, and the initial reaction rates for 5hmC and 5fC conversion with TET2 are 4.9-7.6-fold slower. TET requires Fe(II) as a cofactor, and oxygen and α-ketoglutarate (α-KG) as substrates; the latter substrate is generated from isocitrate by the enzyme isocitrate dehydrogenase (IDH). Cancers, however, can produce 2-hydroxyglutarate (2HG), which competes with α-KG, reducing TET activity and in turn the conversion of 5mC to 5hmC. Role in humans In cancer In cancer, DNA can become both overly methylated, termed hypermethylation, and under-methylated, termed hypomethylation. CpG islands overlapping gene promoters are de novo methylated, resulting in aberrant inactivation of genes normally associated with growth inhibition of tumors (an example of hypermethylation). Comparing tumor and normal tissue, the former had elevated levels of the methyltransferases DNMT1, DNMT3A, and especially DNMT3B, all of which are associated with the abnormal levels of 5mC in cancer.
Repeat sequences in the genome, including satellite DNA, Alu elements, and long interspersed elements (LINEs), are often seen hypomethylated in cancer, resulting in expression of these normally silenced elements, and their methylation levels are often significant markers of tumor progression. It has been hypothesized that there is a connection between the hypermethylation and hypomethylation: overactivity of the DNA methyltransferases that produce the abnormal de novo 5mC methylation may be compensated for by the removal of methylation, a type of epigenetic repair. However, the removal of methylation is inefficient, resulting in an overshoot of genome-wide hypomethylation. The reverse may also be possible: widespread hypomethylation may be countered by genome-wide hypermethylation. Cancer hallmark capabilities are likely acquired through epigenetic changes that alter the 5mC in both the cancer cells and in the surrounding tumor-associated stroma within the tumor microenvironment. The anticancer drug cisplatin has been reported to react with 5mC. As a biomarker of aging "Epigenetic age" refers to the connection between chronological age and levels of DNA methylation in the genome. Coupling the levels of DNA methylation in specific sets of CpGs, called "clock CpGs", with algorithms that regress the typical levels of collective genome-wide methylation at a given chronological age allows for epigenetic age prediction. During youth (0–20 years old), changes in DNA methylation occur at a faster rate as development and growth progress, and the changes begin to slow down at older ages. Multiple epigenetic age estimators exist. Horvath's clock measures a multi-tissue set of 353 CpGs, half of which positively correlate with age and the other half negatively, to estimate the epigenetic age. Hannum's clock utilizes adult blood samples to calculate age based on an orthogonal basis of 71 CpGs. Levine's clock, known as DNAm PhenoAge, depends on 513 CpGs and surpasses the other age estimators in predicting mortality and lifespan, yet displays bias with non-blood tissues. There are also reports of age estimators based on the methylation state of only one CpG, in the gene ELOVL2. Estimating epigenetic age allows for prediction of lifespan through expectations of the age-related conditions to which individuals may be subject, based on their 5mC methylation markers.
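At their core, the clock-style estimators described above are penalized linear regressions from CpG methylation levels to chronological age. A minimal sketch of that idea follows (illustrative only: random stand-in data, scikit-learn's ElasticNetCV as the penalized regression, and none of any published clock's actual CpG sets or coefficients):

```python
# Toy epigenetic-clock fit: regress age on CpG methylation fractions.
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(0)
n_samples, n_cpgs = 200, 500           # e.g. 500 candidate "clock CpGs"
ages = rng.uniform(0, 90, n_samples)

# Synthetic methylation matrix: a few CpGs drift linearly with age.
X = rng.uniform(0, 1, (n_samples, n_cpgs))
X[:, :10] += 0.004 * ages[:, None]     # age-correlated CpGs
X = np.clip(X, 0, 1)                   # methylation is a fraction in [0, 1]

clock = ElasticNetCV(cv=5).fit(X, ages)    # sparse penalized regression
print("CpGs with nonzero weight:", np.sum(clock.coef_ != 0))
print("predicted age of sample 0: %.1f (true %.1f)"
      % (clock.predict(X[:1])[0], ages[0]))
```

References Literature (available online at the United States National Center for Biotechnology Information) Nucleobases Pyrimidones Biomarkers Methyl compounds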
5-Methylcytosine
Biology
2,276
3,768,963
https://en.wikipedia.org/wiki/Leibniz%20Prize
The Gottfried Wilhelm Leibniz Prize, or Leibniz Prize, is awarded by the German Research Foundation to "exceptional scientists and academics for their outstanding achievements in the field of research". Since 1986, up to ten prizes have been awarded annually to individuals or research groups working at a research institution in Germany or at a German research institution abroad. It is considered the most important research award in Germany. The prize is named after the German polymath and philosopher Gottfried Wilhelm Leibniz (1646–1716). It is one of the highest endowed research prizes in Germany with a maximum of €2.5 million per award. Past prize winners include Stefan Hell (2008), Gerd Faltings (1996), Peter Gruss (1994), Svante Pääbo (1992), Theodor W. Hänsch (1989), Erwin Neher (1987), Bert Sakmann (1987), Jürgen Habermas (1986), Hartmut Michel (1986), and Christiane Nüsslein-Volhard (1986). Prizewinners 2020–2029 2025: Volker Haucke, Biochemistry and Cell Biology, Leibniz Research Institute for Molecular Pharmacology, Berlin Hannes Leitgeb, Theoretical Philosophy, LMU Munich Bettina Valeska Lotsch, Solid State and Materials Chemistry, Max Planck Institute for Solid State Research, Stuttgart Wolfram Pernice, Experimental Physics, University of Heidelberg Ana Pombo, Genome Biology, Max Delbrück Center for Molecular Medicine, Berlin Daniel Rueckert, Artificial Intelligence, Technical University of Munich Angkana Rüland, Applied Mathematics, University of Bonn , Catholic Theology, University of Münster Maria-Elena Torres-Padilla, Epigenetics, Helmholtz Zentrum München , Hemato-Oncology, University Hospital Freiburg 2024: , Experimental Solid State Physics, LMU Munich Tobias J. Erb, Synthetic Microbiology, Max Planck Institute for Terrestrial Microbiology, Marburg, and University of Marburg Jonas Grethlein, Classical Philology, University of Heidelberg Moritz Helmstaedter, Neuroscience, Max Planck Institute for Brain Research, Frankfurt am Main , Geoecology, Alfred Wegener Institute, Potsdam, and University of Potsdam , Cryptography, University of Bochum Rohini Kuner, Neuropharmacology, University of Heidelberg Jörn Leonhard, Modern and Contemporary History, University of Freiburg Peter Schreiner, Organic Molecular Chemistry, University of Giessen Eva Viehmann, Mathematics, University of Münster 2023: Lars T.
Angenent, Bioengineering, University of Tübingen Claudia Höbartner, Biological Chemistry, Universität Würzburg Achim Menges, Architecture, Universität Stuttgart Sarah O'Connor, Natural Product Biosynthesis, Max Planck Institute for Chemical Ecology, Jena , Paediatric Oncology, Deutsches Krebsforschungszentrum (DKFZ) and Universität Heidelberg Hartmut Rosa, Sociology, Universität Jena and Universität Erfurt Georg Schett, Rheumatology, Universität Erlangen-Nürnberg Catharina Stroppel, Pure Mathematics, University of Bonn , Bio- and Medical Informatics, Helmholtz Zentrum München and Technische Universität München , Romance Literary Studies, Freie Universität Berlin 2022: , Ecosystem Research, Karlsruhe Institute of Technology (KIT) , Law, Max Planck Institute for Legal History and Legal Philosophy Iain Couzin, Behavioral Biology, Konstanz Eileen Furlong, Functional Genomics, European Molecular Biology Laboratory (EMBL) , Experimental Physics, University of Erlangen-Nuremberg Stefanie Dehnen, Inorganic Molecular Chemistry, University of Marburg , Theoretical Physics, GSI Helmholtz Centre for Heavy Ion Research and Technische Universität Darmstadt , Ancient History, Eberhard Karls University of Tübingen Karen Radner, Ancient Near Eastern Studies, Ludwig-Maximilians-University of Munich Moritz Schularick, Economics, University of Bonn 2021: Asifa Akhtar, Epigenetics, Max-Planck-Institut für Immunbiologie und Epigenetik, Freiburg Elisabeth André, Computer Science, Universität Augsburg Giuseppe Caire, Theoretical Communications Engineering, Technische Universität Berlin Nico Eisenhauer, Biodiversity Research, Universität Leipzig Veronika Eyring, Earth System Modelling, Deutsches Zentrum für Luft- und Raumfahrt (Oberpfaffenhofen) and Universität Bremen Katerina Harvati, Palaeoanthropology, Universität Tübingen and Senckenberg Centre for Human Evolution and Palaeoenvironment, Tübingen Steffen Mau, Sociology, Humboldt University of Berlin Rolf Müller, Pharmaceutical Biology, Helmholtz-Institut für Pharmazeutische Forschung Saarland (HIPS) and Saarland University Jürgen Ruland, Immunology, Klinikum rechts der Isar, Technische Universität München Volker Springel, Astrophysics, Max-Planck-Institut für Astrophysik, Garching 2020: Thorsten Bach, Chemistry, Technical University of Munich Baptiste Jean Germain Gault, Materials Science, Max Planck Institute for Iron Research Johannes Grave, Art History, Friedrich-Schiller University of Jena Thomas Kaufmann, Evangelical Theology, Georg August University of Göttingen Andrea Musacchio, Cell Biology, Max Planck Institute for Molecular Physiology Thomas Neumann, Computer Science, Technical University of Munich Marco Prinz, Neuropathology, Albert Ludwig University of Freiburg Markus Reichstein, Biogeochemistry, Max Planck Institute for Biogeochemistry Dagmar Schäfer, History of Science, Max Planck Institute for the History of Science Juliane Vogel, Literature, University of Konstanz 2019–2010 2019: Sami Haddadin, Robotics, Technical University of Munich Rupert Huber, Experimental physics, University of Regensburg Andreas Reckwitz, Sociology, Viadrina European University, Frankfurt (Oder) Hans-Reimer Rodewald, Immunology, German Cancer Research Center (DKFZ), Heidelberg Melina Schuh, cell biology, Max Planck Institute for Biophysical Chemistry (Karl-Friedrich-Bonhoeffer-Institute), Göttingen Brenda Schulman, Biochemistry, Max Planck Institute of Biochemistry (MPIB), Martinsried Ayelet Shachar, Law and
Political science, Max Planck Institute for the Study of Religious and Ethnic Diversity, Göttingen Michèle Tertilt, Economics, University of Mannheim Wolfgang Wernsdorfer, Experimental solid-state physics, Karlsruhe Institute of Technology (KIT) Matthias Wessling, Chemical reaction engineering, RWTH Aachen University and Leibniz-Institut für Interaktive Materialien (DWI), Aachen 2018: Jens Beckert, Sociology, Max Planck Institute for the Study of Societies, Cologne Alessandra Buonanno, Gravitational Physics, Max Planck Institute for Gravitational Physics (Albert Einstein Institute), Potsdam Nicola Fuchs-Schündeln, Economics, Goethe University Frankfurt Veit Hornung, Immunology, Genzentrum, Ludwig Maximilian University of Munich and Eicke Latz, Immunology, Universitätsklinikum Bonn, Rheinische Friedrich-Wilhelms-Universität Bonn Heike Paul, American Studies, Friedrich-Alexander-Universität Erlangen-Nürnberg Erika L. Pearce, Immunology, Max-Planck-Institut für Immunbiologie und Epigenetik, Freiburg/Breisgau , Experimental Solid State Physics, Georg-August-Universität Göttingen Oliver G. Schmidt, Materials Science, Leibniz-Institut für Festkörper- und Werkstoffforschung Dresden and the Faculty of Electrical Engineering and Information Technology, Technische Universität Chemnitz Bernhard Schölkopf, Machine Learning, Max-Planck-Institut für Intelligente Systeme, Tübingen László Székelyhidi, Applied Mathematics, Universität Leipzig 2017: Lutz Ackermann, Organic Molecular Chemistry, University of Göttingen Beatrice Gründler, Arabic Studies, Free University Berlin Ralph Hertwig, Cognitive Psychology, Max Planck Institute for Human Development Karl-Peter Hopfner, Structural Biology, Ludwig Maximilian University of Munich Frank Jülicher, Theoretical Biophysics, Max Planck Institute for the Physics of Complex Systems Lutz Mädler, Mechanical Process Engineering, University of Bremen Britta Nestler, Materials Science, Karlsruhe Institute of Technology Joachim P.
Spatz, Biophysics, Max-Planck-Institute for Intelligent Systems and Ruprecht-Karls-University Heidelberg Anne Storch, Africanistics, University of Köln Jörg Vogel, Medical Microbiology, University of Würzburg 2016: Frank Bradke, Neuroregeneration, German Center for Neurodegenerative Diseases (DZNE), Bonn Emmanuelle Charpentier, Infection Biology, Max Planck Institute for Infection Biology, Berlin Daniel Cremers, Computer Vision, Chair of Informatics IX: Image Understanding and Knowledge-Based Systems, Technical University of Munich Daniel James Frost, Mineralogy/Experimental Petrology, University of Bayreuth Dag Nikolaus Hasse, Philosophy, Institute of Philosophy, University of Würzburg Benjamin List, Organic Molecular Chemistry, Department of Homogeneous Catalysis, Max Planck Institute for Coal Research, Mülheim an der Ruhr Christoph Möllers, Law, Chair of Public Law and Legal Philosophy, Humboldt University of Berlin Marina Rodnina, Biochemistry, Max Planck Institute for Biophysical Chemistry (Karl Friedrich Bonhoeffer Institute), Göttingen Bénédicte Savoy, History of Modern Art, Center for Metropolitan Studies, Technische Universität Berlin Peter Scholze, Arithmetic Algebraic Geometry, Mathematical Institute, University of Bonn 2015: Henry N Chapman, Biological Physics/X-Ray Physics, Deutsches Elektronen-Synchrotron (DESY), Hamburg, and University of Hamburg Hendrik Dietz, Biochemistry/Biophysics, Technical University of Munich Stefan Grimme, Theoretical Chemistry, University of Bonn Christian Hertweck, Biological Chemistry, Leibniz Institute for Natural Product Research and Infection Biology – Hans Knöll Institute (HKI), Jena, and University of Jena Friedrich Lenger, Modern and Contemporary History, University of Giessen Hartmut Leppin, Ancient History, Goethe University Frankfurt Steffen Martus, Modern German Literature, Humboldt University of Berlin , Auditory Sensing/Otolaryngology, University of Göttingen 2014: Artemis Alexiadou, Linguistics, University of Stuttgart Armin von Bogdandy, Foreign public law and international law, Max Planck Institute for Comparative Public Law and International Law, Heidelberg Andreas Dreizler, Combustion research, Technische Universität Darmstadt Christof Schulz, Combustion and gas dynamics, Universität Duisburg-Essen Nicole Dubilier, Marine ecology, Max Planck Institute for Marine Microbiology, Bremen, and Universität Bremen Leif Kobbelt, Informatics and computer graphics, RWTH Aachen Laurens Molenkamp, Experimental solid-state physics, Universität Würzburg Brigitte Röder, Biological psychology/neuro-psychology, Universität Hamburg Irmgard Sinning, Structural biology, Universität Heidelberg Rainer Waser, Nanoelectronics/Materials science, RWTH Aachen and Peter Grünberg Institute, Research Center Jülich Lars Zender, Hepatology/oncology, Universitätsklinikum Tübingen 2013: Thomas Bauer, Islamic studies, Universität Münster Ivan Đikić, Biochemistry/cell biology, Universität Frankfurt am Main Frank Glorius, Molecular chemistry, Universität Münster Onur Güntürkün, Biological psychology, Universität Bochum Peter Hegemann, Biophysics, Humboldt-Universität zu Berlin Marion Merklein, Metal forming technology/manufacturing engineering, Universität Erlangen-Nürnberg Roderich Moessner, Max Planck Institute for the Physics of Complex Systems, Dresden, together with Achim Rosch, Theoretical solid-state physics, Universität zu Köln Erika von Mutius, Paediatrics, Allergology, Epidemiology, Klinikum der Universität München Vasilis Ntziachristos, Bio-imaging with 
optical techniques, Technische Universität München Lutz Raphael, Modern and recent history, Universität Trier 2012: Michael Brecht – Neurophysiology/cellular neuroscience (Bernstein Center for Computational Neuroscience Berlin and Humboldt-Universität zu Berlin) Rainer Forst – political philosophy/theory (Goethe University Frankfurt) Gunther Hartmann – Clinical pharmacology/natural immunity (Universitätsklinikum Bonn) Christian Kurts – Immunology/Nephrology (Universitätsklinikum Bonn) Matthias Mann – Biochemistry (Max Planck Institute of Biochemistry, Martinsried) Friederike Pannewick – Islamic studies/literature, theater, history of ideas (Philipps-Universität Marburg) Nikolaus Rajewsky – System biology (Max Delbrück Center for Molecular Medicine, Berlin) Ulf Riebesell – Oceanography (Leibniz-Institut für Meereswissenschaften (IFM-Geomar) at the University of Kiel) Peter Sanders – Theoretical computer science and algorithms (Karlsruher Institut für Technologie, KIT) Barbara Wohlmuth – Numerical analysis (Technische Universität München) Jörg Wrachtrup – Experimental physics (Universität Stuttgart) 2011: Michael Brecht, Neuroscience (Bernstein Center for Computational Neuroscience Berlin) Ulla Bonas, Microbiology / Molecular phytopathology (Universität Halle-Wittenberg) , Cognitive neuroscience (Universitätsklinikum Hamburg-Eppendorf) Anja Feldmann, Computer science / Computer networks / Internet (Technische Universität Berlin, T-Labs) Kai-Uwe Hinrichs, Organic geochemistry (Universität Bremen) Anthony A. Hyman, Cell biology / Microtubules and cleavage (Max Planck Institute of Molecular Cell Biology and Genetics, Dresden) Bernhard Keimer, Experimental solid-state physics (Max Planck Institute for Solid State Research, Stuttgart) Franz Pfeiffer, X-ray physics (Technische Universität München) Joachim Friedrich Quack, Egyptology (Universität Heidelberg) Gabriele Sadowski, Thermodynamics (Technische Universität Dortmund) Christine Silberhorn, Quantum optics (Universität Paderborn) 2010: Jan Born, Neuroendocrinology / Sleep research (University of Lübeck) Peter Fratzl, Biomaterials (Max Planck Institute of Colloids and Interfaces, Potsdam) Roman Inderst, Macroeconomics (University Frankfurt/Main) Christoph Klein, Pediatrics / Oncology (Hannover Medical School) Ulman Lindenberger, Lifespan psychology (Max Planck Institute for Human Development, Berlin) Frank Neese, Theoretical chemistry (University of Bonn) Jürgen Osterhammel, Recent and modern history (University of Konstanz) Petra Schwille, Biophysics (Dresden University of Technology) Stefan Treue, Cognitive Neurosciences (German Primate Center, Göttingen) Joachim Weickert, Digital image processing (Saarland University) 2009–2000 2009: Antje Boetius, Max Planck Institute for Marine Microbiology, Bremen Holger Braunschweig, Inorganic chemistry, University of Würzburg Wolfram Burgard, Computer science, University of Freiburg Heinrich Detering, University of Göttingen Jürgen Eckert, IFW Dresden, and Dresden University of Technology Armin Falk, Economist, University of Bonn Frank Kirchhoff, University of Ulm Jürgen Rödel, Materials scientist, Technische Universität Darmstadt Karl Lenhard Rudolph, University of Ulm Burkhard Wilking, University of Münster Martin R.
Zirnbauer, University of Cologne 2008: Susanne Albers – theoretical computer science, University of Freiburg Martin Beneke – theoretical particle physics, RWTH Aachen Holger Boche – telecommunications engineering and information theory, Technische Universität Berlin Martin Carrier – philosophy, Bielefeld University Elena Conti – structural biology, Max Planck Institute of Biochemistry, Martinsried Elisa Izaurralde – cell biology, Max Planck Institute for Developmental Biology, Tübingen Holger Fleischer – economic law, University of Bonn Stefan W. Hell – biophysics, Max Planck Institute for Biophysical Chemistry, Göttingen Klaus Kern – physical solid state chemistry, Max Planck Institute for Solid State Research, Stuttgart Wolfgang Lück – algebraic topology, University of Münster Jochen Mannhart – experimental solid state physics, University of Augsburg 2007: – molecular diabetes research, endocrinology (University of Cologne) Patrick Bruno – theoretical solid-state physics (MPI of Microstructure Physics, Halle/Saale) Magdalena Götz – neurology (GSF – Forschungszentrum für Umwelt und Gesundheit and Ludwig Maximilian University of Munich) Peter Gumbsch – material science (Universität Karlsruhe (TH) and Fraunhofer-Institut für Werkstoffmechanik, Freiburg i. Br. and Halle/Saale) Gerald Haug – paleoclimatology (GeoForschungsZentrum Potsdam and University of Potsdam) Bernhard Jussen – mediaeval history (Bielefeld University) Guinevere Kauffmann – astrophysics (MPI for Astrophysics, Garching) Falko Langenhorst – mineralogy and petrology (Friedrich Schiller University of Jena) Oliver Primavesi – classical philology (Ludwig Maximilian University of Munich) Detlef Weigel – plant biology (MPI for Developmental Biology, Tübingen) 2006: Matthias Beller and Peter Wasserscheid – homogeneous catalysis (Leibniz-Institute for Organic Catalysis at the University of Rostock) and chemical processing (Friedrich-Alexander-Universität Erlangen-Nürnberg) Patrick Cramer – structural biology (Ludwig Maximilian University of Munich) Peter Jonas – neurophysiology (Albert Ludwigs University of Freiburg) Ferenc Krausz – quantum optics (Ludwig Maximilian University of Munich and Max Planck Institute for Quantum Optics, Garching) Klaus Mezger – geochemistry (University of Münster) Thomas Mussweiler – social psychology (University of Cologne) Felix Otto – analysis of partial differential equations (University of Bonn) Dominik Perler – history of philosophy/theoretical philosophy (Humboldt University of Berlin) Gyburg Radke – classical philology and philosophy (Philipps University of Marburg) Marino Zerial – cell biology (Max Planck Institute for Molecular Cell Biology and Genetics, Dresden) 2005: Peter Becker – cell biology/biochemistry (Ludwig Maximilian University of Munich) Immanuel Bloch – quantum optics (Johannes Gutenberg University of Mainz) Stefanie Dimmeler – molecular cardiology (Johann Wolfgang Goethe University Frankfurt am Main) Jürgen Gauß – theoretical chemistry (Johannes Gutenberg University of Mainz) Günther Hasinger – astrophysics (Max Planck Institute for Extraterrestrial Physics, Garching) Christian Jung – plant breeding (University of Kiel) Axel Ockenfels – experimental economics (University of Cologne) Wolfgang Peukert – mechanical process engineering (Friedrich-Alexander-University, Erlangen-Nuremberg) Barbara Stollberg-Rilinger – History of early modern Europe (University of Münster) Andreas Tünnermann – microsystems technology (Fraunhofer Institute for Applied Optics and Precision Engineering, Jena)
2004: Frank Allgöwer – control engineering (University of Stuttgart) Gabriele Brandstetter – theatre science (Free University of Berlin) Thomas Carell – organic chemistry (Ludwig Maximilian University of Munich) Karl Christoph Klauer – social and cognitive psychology (University of Bonn) Hannah Monyer – neurobiology (Ruprecht Karls University of Heidelberg) Nikolaus Pfanner and Jürgen Soll – biochemistry/molecular cell biology of plants (Albert Ludwigs University of Freiburg and Ludwig Maximilian University of Munich) Klaus Dieter Pfeffer – immunology (Heinrich Heine University) Dierk Raabe – material science (Max Planck Institute for Iron Research GmbH, Düsseldorf) Konrad Samwer – solid state physics (Georg August University of Göttingen) Manfred Strecker – structural geology (University of Potsdam) 2003: Winfried Denk – medical optics (Max Planck Institute for Medical Research, Heidelberg) Hélène Esnault and Eckart Viehweg – algebraic geometry (University of Duisburg and Essen) Gerhard Huisken – geometrical analysis (Max Planck Institute for Gravitational Physics (Albert Einstein Institute), Golm, Potsdam) Rupert Klein – computational fluid dynamics (Free University of Berlin and Potsdam Institute for Climate Impact Research) Albrecht Koschorke – Renaissance and modern German literature (University of Constance) Roland Lill – cell biology/biochemistry (Philipps University of Marburg) Christof Niehrs – molecular development biology (Deutsches Krebsforschungszentrum, Heidelberg) Ferdi Schüth – inorganic chemistry (Max Planck Institute für Kohlenforschung (Coal Research) (rechtsfähige Stiftung), Mülheim/Ruhr) Hans-Peter Seidel – graphics (Max Planck Institute for Informatics, Saarbrücken) Hubert Wolf – history of Christianity/Catholic theology (University of Münster) 2002: Carmen Birchmeier-Kohler – molecular biology (Max Delbrück Center for Molecular Medicine, Berlin-Buch) Wolfgang Dahmen – mathematics (RWTH Aachen) Wolf-Christian Dullo – paleontology (University of Kiel) Bruno Eckhardt – theoretical physics (Philipps University of Marburg) Michael Famulok – biochemistry (University of Bonn) Christian Haass – pathological biochemistry (Ludwig Maximilian University of Munich) Franz-Ulrich Hartl – cell biology (Max Planck Institute for Biochemistry, Martinsried) Thomas Hengartner – cultural anthropology (University of Hamburg) Reinhold Kliegl – general psychology (University of Potsdam) Wolfgang Kowalsky – optoelectronics (Technische Universität Braunschweig) Karl Leo – solid state physics (Technische Universität Dresden) Frank Vollertsen – forming and stretching manufacturing engineering (University of Paderborn) 2001: Jochen Feldmann – optoelectronical component (Ludwig Maximilian University of Munich) Eduard Christian Hurt – molecular biology (Ruprecht Karls University of Heidelberg) Hans Keppler – mineralogy (University of Tübingen) Arthur Konnerth – neurophysiology (Ludwig Maximilian University of Munich) Ulrich Konrad – musicology (University of Würzburg) Martin Krönke – immunology/cell biology (University of Cologne) Joachim Küpper – Romantic literary theory (Free University of Berlin) Christoph Markschies – history of Christianity (Ruprecht Karls University of Heidelberg) Wolfgang Marquardt – process systems engineering (RWTH Aachen) Helge Ritter – informatics (University of Bielefeld) Günter Ziegler – mathematics (Technische Universität Berlin) 2000: Klaus Fiedler – cognitive social psychology (Ruprecht Karls University of Heidelberg) Peter Greil – materials science 
(Friedrich-Alexander-University, Erlangen-Nuremberg) Matthias W. Hentze – molecular biology (European Molecular Biology Laboratory, Heidelberg) Peter M. Herzig – geochemistry and economic geology (Freiberg University of Mining and Technology) Reinhard Jahn – cellular biology (Max Planck Institute for Biophysical Chemistry (Karl Friedrich Bonhoeffer Institute), Göttingen) Aditi Lahiri – general linguistics (University of Constance) Gertrude Lübbe-Wolff – public law (University of Bielefeld) Dieter Lüst – theoretical physics (Humboldt University of Berlin) Stefan Müller – mathematics (Max Planck Institute for Mathematics in the Sciences, Leipzig) Manfred Pinkal – computational linguistics (Saarland University) Ilme Schlichting – biophysics (Max Planck Institute for Molecular Physiology, Dortmund) Friedrich Temps and Hans-Joachim Werner – physical chemistry (University of Kiel) and theoretical chemistry (University of Stuttgart) Martin Wegener – solid state physics (University of Karlsruhe) 1999: Ekkard Brinksmeier – manufacturing engineering (University of Bremen) Bernd Bukau – cellular biology (Albert Ludwigs University of Freiburg) Joachim Cuntz – mathematics (University of Münster) Alois Fürstner – organometallic chemistry (Max Planck Institute für Kohlenforschung (Coal Research) (rechtsfähige Stiftung), Mülheim/Ruhr) Friedrich Wilhelm Graf – Evangelical theology (University of Augsburg) Ulrich Herbert – modern and contemporary history (Albert Ludwigs University of Freiburg) Martin Johannes Lohse – pharmacology (University of Würzburg) Volker Mosbrugger – paleontology (University of Tübingen) Hans-Christian Pape – neurophysiology (Otto von Guericke University of Magdeburg) Joachim Ullrich – experimental physics (Albert Ludwigs University of Freiburg) 1998: Heinz Breer – zoology (University of Hohenheim) Nikolaus P. Ernsting and Klaus Rademann – physical chemistry (Humboldt University of Berlin) Hans-Jörg Fecht – metallic materials (University of Ulm) Ute Frevert – modern history (University of Bielefeld) Wolf-Bernd Frommer – molecular plant physiology (University of Tübingen) Christian Griesinger – organic chemistry (Johann Wolfgang Goethe University Frankfurt am Main) Regine Hengge-Aronis – microbiology (University of Constance) Onno Oncken – geology (GeoForschungsZentrum, Potsdam and Free University of Berlin) Hermann Parzinger – prehistoric and early historical Europe (German Archaeological Institute, Berlin) Ingo Rehberg – experimental physics (Otto von Guericke University of Magdeburg) Dietmar Vestweber – cellular biology/biochemistry (University of Münster) Annette Zippelius – solid state physics (Georg August University of Göttingen) 1997: Thomas Boehm – molecular development biology (Deutsches Krebsforschungszentrum, Heidelberg) Wolfgang Ertmer – experimental physics (University of Hannover) Angela D. Friederici – neuropsychology (Max Planck Institute for Neuropsychological Research, Leipzig) – microbiology (Albert Ludwigs University of Freiburg) Jean Karen Gregory – material science (Technical University of Munich) Andreas Kablitz – Romance philology/Italian studies (University of Cologne) Matthias Kleiner – sheet metal forming (Brandenburg University of Technology) Paul Knochel – organometallic chemistry (Philipps University of Marburg) Elisabeth Knust – development genetics (Heinrich Heine University Düsseldorf) Stephan W. Koch – theoretical physics (Philipps University of Marburg) Christian F. 
Lehner – molecular genetics (University of Bayreuth) Stefan M. Maul – ancient orientalism (Ruprecht Karls University of Heidelberg) Ernst Mayr – information theory (Technical University of Munich) Gerhard Wörner – mineralogy/geochemistry (Georg August University of Göttingen) 1996: Eduard Arzt – materials science (University of Stuttgart and Max Planck Institute for Metals Research, Stuttgart) Hans Werner Diehl – theoretical physics (University of Duisburg and Essen) Gerd Faltings – mathematics (Max Planck Institute for Mathematics, Bonn) Ulf-Ingo Flügge – biochemistry of plants, (University of Cologne) Wolfgang Klein – linguistics (Max Planck Institute for Psycholinguistics, Nijmegen) Dieter Langewiesche – modern history (University of Tübingen) – molecular biology (Philipps University of Marburg) Joachim Reitner – paleontology (Georg August University of Göttingen) Michael Reth – immunology (Max Planck Institute for Immunobiology, Freiburg) Wolfgang Schnick – solid state chemistry (University of Bayreuth) Winfried Schulze – history of early modern Europe (Ludwig Maximilian University of Munich) Reinhard Zimmermann – history of law and civil law (University of Regensburg) 1995: Siegfried Bethke – elementary particle physics (RWTH Aachen) Niels Birbaumer – psychophysiology (University of Tübingen) Hans-Joachim Freund – physical chemistry (Ruhr University Bochum) Martin Grötschel – applied mathematics (Technische Universität Berlin) Axel Haverich – surgery (University of Kiel) Gerhard Hirzinger – robotics (German Aerospace Center, Oberpfaffenhofen) – biochemistry (University of Hamburg) Gerd Jürgens – molecular plant development (University of Tübingen) Wolfgang Schleich – quantum optics (University of Ulm) Manfred G. Schmidt – political science (Ruprecht Karls University of Heidelberg) Thomas Schweizer (†) – cultural anthropology (University of Cologne) Elmar Weiler – plant physiology (Ruhr University Bochum) Emo Welzl – informatics (Free University of Berlin) 1994: Gisela Anton – experimental physics (University of Bonn) Manfred Broy and Ernst-Rüdiger Olderog – computer science (Technical University of Munich and University of Oldenburg) Ulrich R. Christensen – geophysics (Georg August University of Göttingen) Ulf Eysel – neurophysiology (Ruhr University Bochum) Theodor Geisel – theoretical physics (Johann Wolfgang Goethe University Frankfurt am Main) Peter Gruss – cellular biology (MPI for biophysical chemistry, Göttingen) Wolfgang Hackbusch – numerical mathematics (University of Kiel) Adrienne Héritier and Helmut Willke – sociology/politology (University of Bielefeld) Stefan Jentsch – molecular biology (Ruprecht Karls University of Heidelberg) Glenn W. 
Most – classical philology (Ruprecht Karls University of Heidelberg) Johann Mulzer – organic chemistry (Free University of Berlin) Peter Schäfer – Jewish studies (Free University of Berlin) 1993: Christian von Bar – international private law (Universität Osnabrück) Johannes Buchmann and Claus-Peter Schnorr – information theory (Saarland University and Johann Wolfgang Goethe University Frankfurt am Main) Dieter Enders – organic chemistry (RWTH Aachen) Gunter Fischer – biochemistry (Martin Luther University of Halle-Wittenberg) – neuroanatomy (Albert Ludwigs University of Freiburg) Jürgen Jost – mathematics (Universität Bochum) Regine Kahmann – molecular genetics (Ludwig Maximilian University of Munich) Wolfgang Krätschmer – nuclear physics (MPI für Kernphysik, Heidelberg) Klaus Petermann – high-frequency engineering (Technische Universität Berlin) Wolfgang Prinz – psychology (MPI für Psychologische Forschung, München) Rudolf G. Wagner – sinology (Ruprecht Karls University of Heidelberg) Jürgen Warnatz – combustion technology (University of Stuttgart) 1992: Georg W. Bornkamm – virology (GSF-Forschungszentrum für Umwelt und Gesundheit München) Christopher Deninger, Michael Rapoport, Peter Schneider and Thomas Zink – mathematics (University of Münster, Bergische University Wuppertal, University of Cologne and University of Bielefeld) Irmela Hijiya-Kirschnereit – Japanese studies (Free University of Berlin) Jürgen Kocka – history of sociology (Free University of Berlin) Joachim Menz – mine surveying (Freiberg University of Mining and Technology) Friedhelm Meyer auf der Heide and Burkhard Monien – informatics (University of Paderborn) Jürgen Mlynek – experimental physics (University of Constance) Svante Pääbo – molecular biology (Ludwig Maximilian University of Munich) Wolfgang Raible – Romance studies (Albert Ludwigs University of Freiburg) Hans-Georg Rammensee – immunology (Max Planck Institute for Developmental Biology, Tübingen) Jan Veizer – geochemistry of sediments (Ruhr University Bochum) 1991: Gerhard Ertl – physical chemistry (Fritz Haber Institute of the MPG, Berlin) Dieter Fenske and Michael Veith – inorganic chemistry (University of Karlsruhe and Saarland University) Ernst O. 
Göbel – solid state physics (University of Marburg) Dieter Häussinger – internal medicine (Albert Ludwigs University of Freiburg) Karl-Heinz Hoffmann – applied mathematics (University of Augsburg) Randolf Menzel – zoology/neurobiology (Free University of Berlin) Rolf Müller – biochemistry/molecular biology (University of Marburg) Hermann Riedel – material mechanics (Fraunhofer-Institut für Werkstoffmechanik Freiburg) Hans-Ulrich Schmincke – mineralogy/volcanology (Forschungszentrum für Marine Geowissenschaften Kiel) Michael Stolleis – history of law (Johann Wolfgang Goethe University Frankfurt am Main) Martin Warnke – history of art (University of Hamburg) 1990: Reinhard Genzel – astrophysics (Max Planck Institute for Astrophysics, Garching) Rainer Greger – physiology (Albert Ludwigs University of Freiburg) Ingrid Grummt – microbiology (University of Würzburg) Martin Jansen and Arndt Simon – inorganic chemistry (University of Bonn and Max Planck Institute for Solid State Research, Stuttgart) Bert Hölldobler – zoology (University of Würzburg) Konrad Kleinknecht – experimental physics (Johannes Gutenberg University of Mainz) Norbert Peters – combustion engineering (RWTH Aachen) Helmut Schwarz – organic chemistry (Technische Universität Berlin) Dieter Stöffler – planetology (University of Münster) Richard Wagner – material science (GKSS-Forschungszentrum Geesthacht) 1989: Heinrich Betz – neurobiology (Ruprecht Karls University of Heidelberg) Claus Wilhelm Canaris – civil law (Ludwig Maximilian University of Munich) Herbert Gleiter – material science (Saarland University) Theodor W. Hänsch – laser physics (Ludwig Maximilian University of Munich and Max Planck Institute for Quantum Optics, Garching) Joachim Milberg – production engineering (Technical University of Munich) Jürgen Mittelstraß – philosophy (University of Constance) Sigrid D. Peyerimhoff – theoretical chemistry (University of Bonn) Manfred T. Reetz – organic chemistry (Philipps University of Marburg) Michael Sarnthein and Jörn Thiede – marine geology (University of Kiel and Leibniz-Institut für Meereswissenschaften Kiel) Reinhard Stock – experimental nuclear physics (Johann Wolfgang Goethe University Frankfurt am Main) Wolfgang Stremmel – internal medicine (Heinrich Heine University of Düsseldorf) 1988: – high-frequency engineering (Technische Universität Braunschweig) Lothar Gall – modern history (Johann Wolfgang Goethe University Frankfurt am Main) Günter Harder – mathematics (University of Bonn) Walter Haug and Burghart Wachinger – older German literary studies (University of Tübingen) Werner Hildenbrand – social economics (University of Bonn) Ingo Müller – theoretical physics (Technische Universität Berlin) Herbert W. 
Roesky and George Michael Sheldrick – inorganic chemistry (Georg August University of Göttingen) Wolfram Saenger and Volker Erdmann – biochemistry (Free University of Berlin) Günther Schütz – molecular biology (German Cancer Research Center, Heidelberg) Hans Wolfgang Spiess – physical chemistry (Max Planck Institute for Polymer Research, Mainz) Karl Otto Stetter – microbiology (University of Regensburg) Thomas Weiland – high energy physics (DESY (German electron synchrotron), Hamburg) 1987: Gerhard Abstreiter – semiconductor physics (Technical University of Munich) Knut Borchardt – history of economics/social economics (Ludwig Maximilian University of Munich) Nils Claussen – ceramic materials (Hamburg University of Technology) Bernd Giese – organic chemistry (Technische Universität Darmstadt) Wolfgang A. Herrmann and Hubert Schmidbaur – inorganic chemistry (Technical University of Munich) Günter Hotz, Kurt Mehlhorn and Wolfgang Paul – Computer Science (Saarland University) Erwin Neher and Bert Sakmann – biophysical chemistry (Max Planck Institute for Biophysical Chemistry / Karl Friedrich Bonhoeffer Institute), Göttingen Friedrich A. Seifert – mineralogy (University of Bayreuth) Rudolf K. Thauer – biochemical microbiology (Philipps University of Marburg) Hans-Peter Zenner – Otolaryngology/cell biology (University of Würzburg) 1986: Géza Alföldy – ancient history (Ruprecht Karls University of Heidelberg) Dietrich Dörner – psychology (Otto-Friedrich University) Jürgen Habermas – philosophy (Johann Wolfgang Goethe University Frankfurt am Main) Otto Ludwig Lange and Ulrich Heber – ecology and biochemistry (University of Würzburg) Hartmut Michel – biology (Max Planck Institute of Biochemistry, Martinsried) Christiane Nüsslein-Volhard and Herbert Jäckle – biology (Max Planck Institute for Developmental Biology, Tübingen) Peter R. Sahm – casting (RWTH Aachen) Fritz Peter Schäfer – laser physics (MPI für biophysikalische Chemie, Göttingen) Frank Steglich – solid state physics (Technische Universität Darmstadt) Albert H. Walenta – experimental physics (University of Siegen) Julius Wess – theoretical physics (University of Karlsruhe) See also List of general science and technology awards List of physics awards References External links Official description Recipients (in German) Science and technology awards Gottfried Wilhelm Leibniz Awards established in 1985 1985 establishments in West Germany German science and technology awards
Leibniz Prize
Technology
7,970
403,165
https://en.wikipedia.org/wiki/Maximum%20flow%20problem
In optimization theory, maximum flow problems involve finding a feasible flow through a flow network that obtains the maximum possible flow rate.

The maximum flow problem can be seen as a special case of more complex network flow problems, such as the circulation problem. The maximum value of an s-t flow (i.e., flow from source s to sink t) is equal to the minimum capacity of an s-t cut (i.e., cut severing s from t) in the network, as stated in the max-flow min-cut theorem.

History
The maximum flow problem was first formulated in 1954 by T. E. Harris and F. S. Ross as a simplified model of Soviet railway traffic flow. In 1955, Lester R. Ford, Jr. and Delbert R. Fulkerson created the first known algorithm, the Ford–Fulkerson algorithm. In their 1955 paper, Ford and Fulkerson wrote that the problem of Harris and Ross is formulated as follows (see p. 5):

Consider a rail network connecting two cities by way of a number of intermediate cities, where each link of the network has a number assigned to it representing its capacity. Assuming a steady state condition, find a maximal flow from one given city to the other.

In their book Flows in Networks, in 1962, Ford and Fulkerson wrote:

It was posed to the authors in the spring of 1955 by T. E. Harris, who, in conjunction with General F. S. Ross (Ret.), had formulated a simplified model of railway traffic flow, and pinpointed this particular problem as the central one suggested by the model [11].

where [11] refers to the 1955 secret report Fundamentals of a Method for Evaluating Rail Net Capacities by Harris and Ross (see p. 5).

Over the years, various improved solutions to the maximum flow problem were discovered, notably the shortest augmenting path algorithm of Edmonds and Karp and independently Dinitz; the blocking flow algorithm of Dinitz; the push-relabel algorithm of Goldberg and Tarjan; and the binary blocking flow algorithm of Goldberg and Rao. The algorithms of Sherman and of Kelner, Lee, Orecchia and Sidford, respectively, find an approximately optimal maximum flow but only work in undirected graphs. In 2013 James B. Orlin published a paper describing an O(|V||E|) algorithm. In 2022 Li Chen, Rasmus Kyng, Yang P. Liu, Richard Peng, Maximilian Probst Gutenberg, and Sushant Sachdeva published an almost-linear time algorithm, running in |E|^(1+o(1)) time, for the minimum-cost flow problem, of which the maximum flow problem is a particular case. For the single-source shortest path (SSSP) problem with negative weights, another particular case of the minimum-cost flow problem, an algorithm in almost-linear time has also been reported. Both algorithms were deemed best papers at the 2022 Symposium on Foundations of Computer Science.

Definition
First we establish some notation:

Let N = (V, E) be a network with s, t ∈ V being the source and the sink of N respectively.
If f is a function on the edges of N, then its value on an edge (u, v) is denoted by f_uv or f(u, v).

Definition. The capacity of an edge is the maximum amount of flow that can pass through an edge. Formally it is a map c : E → R⁺.

Definition. A flow is a map f : E → R⁺ that satisfies the following:

Capacity constraint. The flow of an edge cannot exceed its capacity, in other words: f_uv ≤ c_uv for all (u, v) ∈ E.
Conservation of flows. The sum of the flows entering a node must equal the sum of the flows exiting that node, except for the source and the sink. Or: for each v ∈ V ∖ {s, t}, Σ_{u : (u,v) ∈ E} f_uv = Σ_{w : (v,w) ∈ E} f_vw.

Remark. Flows are skew symmetric: f_uv = −f_vu for all (u, v) ∈ E.

Definition. The value of flow is the amount of flow passing from the source to the sink. Formally, for a flow f it is given by: |f| = Σ_{v : (s,v) ∈ E} f_sv.
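To make the notation concrete, the following small example (an illustration written against the definitions above, not code from the cited literature; the nested-dictionary representation is an assumption of this sketch) encodes a four-node network in Python and checks the two flow constraints for one feasible – though not maximum – flow:

```python
# Capacities c_uv of a tiny network, as nested dicts: capacity['u']['v'].
capacity = {
    's': {'a': 3, 'b': 2},
    'a': {'b': 1, 't': 2},
    'b': {'t': 3},
    't': {},
}

# A feasible flow f on the same edges, with value |f| = 4 (not maximum).
flow = {
    's': {'a': 2, 'b': 2},
    'a': {'b': 0, 't': 2},
    'b': {'t': 2},
    't': {},
}

def is_feasible(capacity, flow, s, t):
    """Check the capacity and conservation constraints of a flow."""
    for u in capacity:
        for v, c in capacity[u].items():
            if not 0 <= flow[u][v] <= c:      # capacity constraint
                return False
    for v in capacity:
        if v in (s, t):                       # s and t are exempt
            continue
        inflow = sum(flow[u].get(v, 0) for u in flow)
        if inflow != sum(flow[v].values()):   # conservation of flows
            return False
    return True

print(is_feasible(capacity, flow, 's', 't'))  # True
print(sum(flow['s'].values()))                # |f| = 4
```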
Definition. The maximum flow problem is to route as much flow as possible from the source to the sink, in other words find the flow f with maximum value |f|.

Note that several maximum flows may exist, and if arbitrary real (or even arbitrary rational) values of flow are permitted (instead of just integers), there is either exactly one maximum flow, or infinitely many, since there are infinitely many linear combinations of the base maximum flows. In other words, if we send x units of flow on edge u in one maximum flow, and y > x units of flow on u in another maximum flow, then for each Δ ∈ [0, y − x] we can send x + Δ units on u and route the flow on remaining edges accordingly, to obtain another maximum flow. If flow values can be any real or rational numbers, then there are infinitely many such values Δ for each pair x, y.

Algorithms
The following table lists algorithms for solving the maximum flow problem. Here, |V| and |E| denote the number of vertices and edges of the network. The value U refers to the largest edge capacity after rescaling all capacities to integer values (if the network contains irrational capacities, U may be infinite). For additional algorithms, see the further reading.

Integral flow theorem
The integral flow theorem states that if each edge in a flow network has integral capacity, then there exists an integral maximal flow. The claim is not only that the value of the flow is an integer, which follows directly from the max-flow min-cut theorem, but that the flow on every edge is integral. This is crucial for many combinatorial applications (see below), where the flow across an edge may encode whether the item corresponding to that edge is to be included in the set sought or not.
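As a concrete illustration of the augmenting-path family of algorithms above, here is a minimal sketch of the Edmonds–Karp variant of Ford–Fulkerson (shortest augmenting paths found by breadth-first search). It is a didactic implementation written for this article, not code from the cited papers; applied to the small network of the previous example it returns 5, one more than the feasible flow shown there.

```python
from collections import deque

def edmonds_karp(capacity, s, t):
    """Maximum s-t flow via shortest augmenting paths, O(V * E^2).

    capacity: nested dict, capacity[u][v] = capacity of edge (u, v).
    """
    # Residual capacities; absent reverse edges start at 0.
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)

    value = 0
    while True:
        # Breadth-first search for an augmenting path s -> t.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return value      # no augmenting path left: flow is maximum
        # Recover the path, find its bottleneck, and augment along it.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        value += bottleneck

print(edmonds_karp(capacity, 's', 't'))  # 5, on the network defined above
```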
Application
Multi-source multi-sink maximum flow problem
Given a network N = (V, E) with a set of sources S ⊂ V and a set of sinks T ⊂ V instead of only one source and one sink, we are to find the maximum flow across N. We can transform the multi-source multi-sink problem into a maximum flow problem by adding a consolidated source connecting to each vertex in S and a consolidated sink connected by each vertex in T (also known as supersource and supersink), with infinite capacity on each new edge.

Maximum cardinality bipartite matching
Given a bipartite graph G = (X ∪ Y, E), we are to find a maximum cardinality matching in G, that is a matching that contains the largest possible number of edges. This problem can be transformed into a maximum flow problem by constructing a network N = (X ∪ Y ∪ {s, t}, E′), where E′ contains the edges in G directed from X to Y, an edge (s, x) for each x ∈ X, and an edge (y, t) for each y ∈ Y, with every edge in E′ given capacity 1. Then the value of the maximum flow in N is equal to the size of the maximum matching in G, and a maximum cardinality matching can be found by taking those edges that have flow in an integral max-flow.
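To make the matching reduction concrete, the following sketch builds the network N described above and reuses the edmonds_karp function from the earlier example. The node labels – 's', 't', and the ('X', x)/('Y', y) tuples – are naming choices of this illustration, not a standard API:

```python
def max_bipartite_matching(X, Y, edges):
    """Size of a maximum matching of a bipartite graph, via max flow.

    X, Y: the two vertex sets; edges: iterable of (x, y) pairs.
    Every constructed edge gets capacity 1, as in the reduction above.
    """
    capacity = {'s': {}, 't': {}}
    for x in X:
        capacity['s'][('X', x)] = 1        # edge (s, x)
        capacity[('X', x)] = {}
    for y in Y:
        capacity[('Y', y)] = {'t': 1}      # edge (y, t)
    for x, y in edges:
        capacity[('X', x)][('Y', y)] = 1   # edge of G, directed X -> Y
    return edmonds_karp(capacity, 's', 't')

# A 3 + 3 bipartite graph whose maximum matching has size 3.
print(max_bipartite_matching(
    [1, 2, 3], ['a', 'b', 'c'],
    [(1, 'a'), (1, 'b'), (2, 'b'), (3, 'c')]))  # 3
```

By the integral flow theorem the maximum flow can be chosen integral, so the saturated X → Y edges directly give the matched pairs.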
Minimum path cover in directed acyclic graph
Given a directed acyclic graph G = (V, E), we are to find the minimum number of vertex-disjoint paths to cover each vertex in V. We can construct a bipartite graph G′ = (A ∪ B, E′) from G, where A and B each contain a copy a_v and b_v of every vertex v of G, and (a_u, b_v) ∈ E′ whenever (u, v) ∈ E. Then it can be shown that G′ has a matching M of size m if and only if G has a vertex-disjoint path cover C containing m edges and n − m paths, where n is the number of vertices in G. Therefore, the problem can be solved by finding the maximum cardinality matching in G′ instead.

Assume we have found a matching M of G′, and constructed the cover C from it. Intuitively, if two vertices u and v are matched in M, then the edge (u, v) is contained in C. Clearly the number of edges in C is m. To see that C is vertex-disjoint, consider the following: each vertex a_v in G′ can either be non-matched in M, in which case there are no edges leaving v in C; or it can be matched, in which case there is exactly one edge leaving v in C. In either case, no more than one edge leaves any vertex v in C. Similarly for each vertex b_v in G′ – if it is matched, there is a single incoming edge into v in C; otherwise v has no incoming edges in C. Thus no vertex has two incoming or two outgoing edges in C, which means all paths in C are vertex-disjoint.

To show that the cover C has size n − m, we start with an empty cover and build it incrementally. To add a vertex u to the cover, we can either add it to an existing path, or create a new path of length zero starting at that vertex. The former case is applicable whenever either (u, v) ∈ C and some path in the cover starts at v, or (v, u) ∈ C and some path ends at v. The latter case is always applicable. In the former case, the total number of edges in the cover is increased by 1 and the number of paths stays the same; in the latter case the number of paths is increased and the number of edges stays the same. It is now clear that after covering all n vertices, the sum of the number of paths and edges in the cover is n. Therefore, if the number of edges in the cover is m, the number of paths is n − m.

Maximum flow with vertex capacities
Let N = (V, E) be a network. Suppose there is capacity at each node in addition to edge capacity, that is, a mapping c : V → R⁺, such that the flow f has to satisfy not only the capacity constraint and the conservation of flows, but also the vertex capacity constraint: Σ_{u : (u,v) ∈ E} f_uv ≤ c(v) for each v ∈ V ∖ {s, t}. In other words, the amount of flow passing through a vertex cannot exceed its capacity. To find the maximum flow across N, we can transform the problem into the maximum flow problem in the original sense by expanding N. First, each v ∈ V is replaced by v_in and v_out, where v_in is connected by edges going into v and v_out is connected to edges coming out from v; then assign capacity c(v) to the edge connecting v_in and v_out. In this expanded network, the vertex capacity constraint is removed and therefore the problem can be treated as the original maximum flow problem.

Maximum number of paths from s to t
Given a directed graph G = (V, E) and two vertices s and t, we are to find the maximum number of paths from s to t. This problem has several variants:

1. The paths must be edge-disjoint. This problem can be transformed to a maximum flow problem by constructing a network N from G, with s and t being the source and the sink of N respectively, and assigning each edge a capacity of 1. In this network, the maximum flow is k if and only if there are k edge-disjoint paths.

2. The paths must be independent, i.e., vertex-disjoint (except for s and t). We can construct a network N from G with vertex capacities, where the capacities of all vertices and all edges are 1. Then the value of the maximum flow is equal to the maximum number of independent paths from s to t.

3. In addition to the paths being edge-disjoint and/or vertex disjoint, the paths also have a length constraint: we count only paths whose length is exactly k, or at most k. Most variants of this problem are NP-complete, except for small values of k.

Closure problem
A closure of a directed graph is a set of vertices C, such that no edges leave C. The closure problem is the task of finding the maximum-weight or minimum-weight closure in a vertex-weighted directed graph. It may be solved in polynomial time using a reduction to the maximum flow problem.

Real world applications
Baseball elimination
In the baseball elimination problem there are n teams competing in a league. At a specific stage of the league season, w_i is the number of wins and r_i is the number of games left to play for team i, and r_ij is the number of games left against team j. A team is eliminated if it has no chance to finish the season in the first place. The task of the baseball elimination problem is to determine which teams are eliminated at each point during the season. Schwartz proposed a method which reduces this problem to maximum network flow. In this method a network is created to determine whether team k is eliminated.

Let G = (V, E) be a network with s, t ∈ V being the source and the sink respectively. One adds a game node {i, j} for each pair of teams with i < j to V, and connects each of them from s by an edge with capacity r_ij – which represents the number of games left between these two teams. We also add a team node for each team and connect each game node {i, j} with the two team nodes i and j, to ensure one of them wins. One does not need to restrict the flow value on these edges. Finally, edges are made from team node i to the sink node t, and the capacity of (i, t) is set to w_k + r_k − w_i, to prevent team i from winning more than w_k + r_k games. Let S be the set of all teams participating in the league and let r(S − {k}) be the total number of games left between all pairs of teams in S − {k}. In this method it is claimed team k is not eliminated if and only if a flow value of size r(S − {k}) exists in network G. In the mentioned article it is proved that this flow value is the maximum flow value from s to t.

Airline scheduling
In the airline industry a major problem is the scheduling of the flight crews. The airline scheduling problem can be considered as an application of extended maximum network flow. The input of this problem is a set of flights F which contains the information about where and when each flight departs and arrives. In one version of airline scheduling the goal is to produce a feasible schedule with at most k crews. To solve this problem one uses a variation of the circulation problem called bounded circulation, which is the generalization of network flow problems with the added constraint of a lower bound on edge flows.

Let G = (V, E) be a network with s, t ∈ V as the source and the sink nodes. For the source and destination of every flight i, one adds two nodes to V, node s_i as the source and node d_i as the destination node of flight i. One also adds the following edges to E:

An edge with capacity [0, 1] between s and each s_i.
An edge with capacity [0, 1] between each d_i and t.
An edge with capacity [1, 1] between each pair of s_i and d_i.
An edge with capacity [0, 1] between each d_i and s_j, if source s_j is reachable with a reasonable amount of time and cost from the destination of flight i.
An edge with capacity [0, ∞] between s and t.

In the mentioned method, it is claimed and proved that finding a flow value of k in G between s and t is equal to finding a feasible schedule for flight set F with at most k crews.

Another version of airline scheduling is finding the minimum needed crews to perform all the flights. To find an answer to this problem, a bipartite graph G′ = (A ∪ B, E′) is created where each flight has a copy in set A and set B. If the same plane can perform flight j after flight i, the copy of i in A is connected to the copy of j in B. A matching in G′ induces a schedule for F, and obviously a maximum bipartite matching in this graph produces an airline schedule with a minimum number of crews. As it is mentioned in the Application part of this article, the maximum cardinality bipartite matching is an application of the maximum flow problem.
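The minimum-crew version is exactly a minimum path cover computation: flights are the vertices, "can be flown next by the same crew" gives the edges, and the number of crews equals the number of paths. The sketch below reuses max_bipartite_matching from the earlier example; representing flights by departure and arrival times only, with a fixed turnaround, is a simplification assumed here in place of the "reachable with a reasonable amount of time and cost" condition.

```python
def min_crews(flights, turnaround=30):
    """Minimum number of crews needed to fly all flights.

    flights: list of (departure_minute, arrival_minute) pairs.
    Flight j can follow flight i on the same crew if it departs at
    least `turnaround` minutes after flight i arrives. The answer is
    n minus the size of a maximum matching, by the path-cover argument.
    """
    n = len(flights)
    chains = [(i, j) for i in range(n) for j in range(n)
              if i != j and flights[j][0] >= flights[i][1] + turnaround]
    return n - max_bipartite_matching(range(n), range(n), chains)

# Three flights; one crew can fly the first and then the third.
print(min_crews([(0, 60), (30, 90), (120, 180)]))  # 2
```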
Circulation–demand problem
There are some factories that produce goods and some villages where the goods have to be delivered. They are connected by a network of roads, with each road having a capacity for the maximum goods that can flow through it. The problem is to find if there is a circulation that satisfies the demand. This problem can be transformed into a maximum-flow problem.

Add a source node s and add edges from it to every factory node f_i with capacity p_i, where p_i is the production rate of factory f_i.
Add a sink node t and add edges from all villages v_j to t with capacity d_j, where d_j is the demand rate of village v_j.

Let G = (V, E) be this new network. There exists a circulation that satisfies the demand if and only if the maximum flow value of G equals the total demand Σ_j d_j. If there exists a circulation, looking at the max-flow solution would give the answer as to how much goods have to be sent on a particular road for satisfying the demands. The problem can be extended by adding a lower bound on the flow on some edges.

Image segmentation
In their book, Kleinberg and Tardos present an algorithm for segmenting an image. They present an algorithm to find the background and the foreground in an image. More precisely, the algorithm takes a bitmap as an input modelled as follows: a_i ≥ 0 is the likelihood that pixel i belongs to the foreground, b_i ≥ 0 is the likelihood that pixel i belongs to the background, and p_ij is the penalty if two adjacent pixels i and j are placed one in the foreground and the other in the background. The goal is to find a partition (A, B) of the set of pixels that maximizes the following quantity:

q(A, B) = Σ_{i ∈ A} a_i + Σ_{i ∈ B} b_i − Σ_{i, j adjacent, |A ∩ {i,j}| = 1} p_ij.

Indeed, for pixels in A (considered as the foreground), we gain a_i; for all pixels in B (considered as the background), we gain b_i. On the border, between two adjacent pixels i and j, we lose p_ij. It is equivalent to minimize the quantity

q′(A, B) = Σ_{i ∈ A} b_i + Σ_{i ∈ B} a_i + Σ_{i, j adjacent, |A ∩ {i,j}| = 1} p_ij,

because q(A, B) = Σ_i (a_i + b_i) − q′(A, B), and the sum Σ_i (a_i + b_i) does not depend on the partition.

We now construct the network whose nodes are the pixels, plus a source and a sink. We connect the source to pixel i by an edge of weight a_i. We connect the pixel i to the sink by an edge of weight b_i. We connect pixel i to pixel j with weight p_ij. Now, it remains to compute a minimum cut in that network (or equivalently a maximum flow); a minimum cut then corresponds to an optimal partition.

Extensions
1. In the minimum-cost flow problem, each edge (u, v) also has a cost-coefficient a_uv in addition to its capacity. If the flow through the edge is f_uv, then the total cost is a_uv f_uv. It is required to find a flow of a given size d, with the smallest cost. In most variants, the cost-coefficients may be either positive or negative. There are various polynomial-time algorithms for this problem.

2. The maximum-flow problem can be augmented by disjunctive constraints: a negative disjunctive constraint says that a certain pair of edges cannot simultaneously have a nonzero flow; a positive disjunctive constraint says that, in a certain pair of edges, at least one must have a nonzero flow. With negative constraints, the problem becomes strongly NP-hard even for simple networks. With positive constraints, the problem is polynomial if fractional flows are allowed, but may be strongly NP-hard when the flows must be integral.

References
Further reading
Network flow problem
Computational problems in graph theory
Maximum flow problem
Mathematics
3,704
14,107,396
https://en.wikipedia.org/wiki/Styrene-acrylonitrile%20resin
Styrene acrylonitrile resin (SAN) is a copolymer plastic consisting of styrene and acrylonitrile. It is widely used in place of polystyrene owing to its greater thermal resistance. The chains consist of between 70 and 80% by weight styrene and 20 to 30% acrylonitrile. Larger acrylonitrile content improves mechanical properties and chemical resistance, but also adds a yellow tint to the normally transparent plastic.

Properties and uses
SAN is similar in use to polystyrene. Like polystyrene itself, it is transparent and brittle. The copolymer has a glass transition temperature greater than 100 °C owing to the acrylonitrile units in the chain, thus making the material resistant to boiling water. It is structurally related to ABS plastic, in which polybutadiene is copolymerised with SAN to give a much tougher material. The rubber chains form separate phases which are 10–20 micrometers in diameter. When the product is stressed, crazing from the particles helps to increase the strength of the polymer. This method of rubber toughening has been used to strengthen other polymers such as PMMA and nylon.

Uses include food containers, water bottles, kitchenware (e.g., blenders and mixers), healthcare materials, cosmetic jars, computer products, packaging material, household equipment (e.g., shower trays), battery cases and plastic optical fibers.

References
Plastics
Household chemicals
Packaging materials
Copolymers
Styrene-acrylonitrile resin
Physics
315
50,799,929
https://en.wikipedia.org/wiki/Morchella%20fluvialis
Morchella fluvialis is a species of fungus in the family Morchellaceae. It was described as new to science in 2014 by Clowez and colleagues, following collections from riparian forests in Spain under Alnus glutinosa, Ulmus minor and Eucalyptus camaldulensis, although previous collections from Turkey under Pinus nigra have also been reported. This species, which corresponds to phylogenetic lineage Mes-18, is very close to Morchella esculenta, from which it differs in its elongated cap with oblong pits and predominantly longitudinal ridges, pronounced rufescence, as well as its Mediterranean hygrophilic distribution along rivers and streams. References External links Fungi described in 2014 Fungi of Europe fluvialis Fungus species
Morchella fluvialis
Biology
157
486,266
https://en.wikipedia.org/wiki/Aliquot%20sequence
In mathematics, an aliquot sequence is a sequence of positive integers in which each term is the sum of the proper divisors of the previous term. If the sequence reaches the number 1, it ends, since the sum of the proper divisors of 1 is 0.

Definition and overview
The aliquot sequence starting with a positive integer k can be defined formally in terms of the sum-of-divisors function σ or the aliquot sum function s in the following way: s_0 = k, and s_n = s(s_{n−1}) = σ(s_{n−1}) − s_{n−1} if s_{n−1} > 0. If the condition s(0) = 0 is added, then the terms after 0 are all 0, all aliquot sequences would be infinite, and we can conjecture that all aliquot sequences are convergent; the limit of these sequences is usually 0 or 6.

For example, the aliquot sequence of 10 is 10, 8, 7, 1, 0 because: s(10) = 5 + 2 + 1 = 8, s(8) = 4 + 2 + 1 = 7, s(7) = 1, s(1) = 0.

Many aliquot sequences terminate at zero; all such sequences necessarily end with a prime number followed by 1 (since the only proper divisor of a prime is 1), followed by 0 (since 1 has no proper divisors).

There are a variety of ways in which an aliquot sequence might not terminate:

A perfect number has a repeating aliquot sequence of period 1. The aliquot sequence of 6, for example, is 6, 6, 6, 6, ...
An amicable number has a repeating aliquot sequence of period 2. For instance, the aliquot sequence of 220 is 220, 284, 220, 284, ...
A sociable number has a repeating aliquot sequence of period 3 or greater. (Sometimes the term sociable number is used to encompass amicable numbers as well.) For instance, the aliquot sequence of 1264460 is 1264460, 1547860, 1727636, 1305184, 1264460, ...
Some numbers have an aliquot sequence which is eventually periodic, but the number itself is not perfect, amicable, or sociable. For instance, the aliquot sequence of 95 is 95, 25, 6, 6, 6, ... Numbers like 95 that are not perfect, but have an eventually repeating aliquot sequence of period 1, are called aspiring numbers.

The lengths of the aliquot sequences that start at n = 1, 2, 3, ... are 1, 2, 2, 3, 2, 1, 2, 3, 4, 4, 2, 7, 2, 5, 5, 6, 2, 4, 2, 7, 3, 6, 2, 5, 1, 7, 3, 1, 2, 15, 2, 3, 6, 8, 3, 4, 2, 7, 3, 4, 2, 14, 2, 5, 7, 8, 2, 6, 4, 3, ...

The final terms (excluding 1) of the aliquot sequences that start at n = 1, 2, 3, ... are 1, 2, 3, 3, 5, 6, 7, 7, 3, 7, 11, 3, 13, 7, 3, 3, 17, 11, 19, 7, 11, 7, 23, 17, 6, 3, 13, 28, 29, 3, 31, 31, 3, 7, 13, 17, 37, 7, 17, 43, 41, 3, 43, 43, 3, 3, 47, 41, 7, 43, ...

Numbers whose aliquot sequence terminates in 1 are 1, 2, 3, 4, 5, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 26, 27, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, ...

Numbers whose aliquot sequence is known to terminate in a perfect number, other than perfect numbers themselves (6, 28, 496, ...), are 25, 95, 119, 143, 417, 445, 565, 608, 650, 652, 675, 685, 783, 790, 909, 913, ...

Numbers whose aliquot sequence terminates in a cycle with length at least 2 are 220, 284, 562, 1064, 1184, 1188, 1210, 1308, 1336, 1380, 1420, 1490, 1604, 1690, 1692, 1772, 1816, 1898, 2008, 2122, 2152, 2172, 2362, ...

Numbers whose aliquot sequence is not known to be finite or eventually periodic are 276, 306, 396, 552, 564, 660, 696, 780, 828, 888, 966, 996, 1074, 1086, 1098, 1104, 1134, 1218, 1302, 1314, 1320, 1338, 1350, 1356, 1392, 1398, 1410, 1464, 1476, 1488, ...

A number that is never the successor in an aliquot sequence is called an untouchable number: 2, 5, 52, 88, 96, 120, 124, 146, 162, 188, 206, 210, 216, 238, 246, 248, 262, 268, 276, 288, 290, 292, 304, 306, 322, 324, 326, 336, 342, 372, 406, 408, 426, 430, 448, 472, 474, 498, ...
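A short Python sketch (an illustration written for this article, not code from the references below) that computes aliquot sums by trial division and reproduces the examples above:

```python
def aliquot_sum(n):
    """Sum of the proper divisors of n (n itself excluded)."""
    if n <= 1:
        return 0
    total = 1                       # 1 divides every n > 1
    d = 2
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:         # avoid counting a square root twice
                total += n // d
        d += 1
    return total

def aliquot_sequence(k, max_terms=50):
    """Terms of the aliquot sequence of k, stopping at 0 or at a repeat."""
    seq, seen = [k], {k}
    while seq[-1] > 0 and len(seq) < max_terms:
        nxt = aliquot_sum(seq[-1])
        seq.append(nxt)
        if nxt in seen:
            break                   # perfect, amicable or sociable cycle
        seen.add(nxt)
    return seq

print(aliquot_sequence(10))   # [10, 8, 7, 1, 0] – terminates at zero
print(aliquot_sequence(95))   # [95, 25, 6, 6] – 95 is an aspiring number
print(aliquot_sequence(220))  # [220, 284, 220] – an amicable pair
```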
Catalan–Dickson conjecture
An important conjecture due to Catalan, sometimes called the Catalan–Dickson conjecture, is that every aliquot sequence ends in one of the above ways: with a prime number, a perfect number, or a set of amicable or sociable numbers. The alternative would be that a number exists whose aliquot sequence is infinite yet never repeats. Any one of the many numbers whose aliquot sequences have not been fully determined might be such a number. The first five candidate numbers are often called the Lehmer five (named after D. H. Lehmer): 276, 552, 564, 660, and 966. However, 276 may reach a high apex in its aliquot sequence and then descend; the number 138 reaches a peak of 179931895322 before returning to 1.

Guy and Selfridge believe the Catalan–Dickson conjecture is false (that is, they conjecture that some aliquot sequences are unbounded above, i.e., diverge).

Systematically searching for aliquot sequences
The aliquot sequence can be represented as a directed graph, G_{n,s}, for a given integer n, where s(k) denotes the sum of the proper divisors of k. Cycles in G_{n,s} represent sociable numbers within the interval [1, n]. Two special cases are loops that represent perfect numbers and cycles of length two that represent amicable pairs.

See also
Arithmetic dynamics

Notes
References
Manuel Benito; Wolfgang Creyaufmüller; Juan Luis Varona; Paul Zimmermann. Aliquot Sequence 3630 Ends After Reaching 100 Digits. Experimental Mathematics, vol. 11, num. 2, Natick, MA, 2002, pp. 201–206.
W. Creyaufmüller. Primzahlfamilien – Das Catalan'sche Problem und die Familien der Primzahlen im Bereich 1 bis 3000 im Detail. Stuttgart 2000 (3rd ed.), 327 p.

External links
Current status of aliquot sequences with start term below 4 million
Tables of Aliquot Cycles (J.O.M. Pedersen)
Aliquot Page (Wolfgang Creyaufmüller)
Aliquot sequences (Christophe Clavier)
Forum on calculating aliquot sequences (MersenneForum)
Aliquot sequence summary page for sequences up to 100000 (there are similar pages for higher ranges) (Karsten Bonath)
Active research site on aliquot sequences (Jean-Luc Garambois)

Arithmetic functions
Divisor function
Arithmetic dynamics
Aliquot sequence
Mathematics
1,630
3,986,309
https://en.wikipedia.org/wiki/Strand%20jack
A strand jack (also known as a strandjack) is a jack used to lift very heavy loads (e.g., thousands of tons or more with multiple jacks) for construction and engineering purposes. Strandjacking was invented by VSL Australia's Patrick Kilkeary and Bruce Ramsay in 1969 for concrete post-tensioning systems, and is now widely used for heavy lifting, to erect bridges, offshore structures, refineries, power stations, major buildings and other structures where the use of conventional cranes is either impractical or too expensive.

Use
Strand jacks can be used horizontally for pulling objects and structures, and are widely used in the oil and gas industry for skidded loadouts. Oil rigs of 38,000 t have been moved in this way from the place of construction onto a barge. Since multiple jacks can be operated simultaneously by hydraulic controllers, they can be used in tandem to lift very large loads of thousands of tons; by comparison, tandem use of even two cranes is a very difficult operation.

How it works
A strand jack is a hollow hydraulic cylinder with a set of steel cables (the "strands") passing through the open centre, each one passing through two clamps – one mounted to either end of the cylinder. The jack operates in the manner of a caterpillar's walk: climbing (or descending) along the strands by releasing the clamp at one end, expanding the cylinder, clamping there, releasing the trailing end, contracting, and clamping the trailing end before starting over again.

The real significance of this device lies in the facility for precision control. The expansion and contraction can be done at any speed, and paused at any location. Although a lone jack may lift only 1700 tons or so, there exist computer control systems that can operate 120 jacks simultaneously, offering fingertip control over the movement of extremely massive objects.

In construction
Strand jacking is a construction process whereby large pre-fabricated building sections are carefully lifted and precisely placed. The alternative would be to do all assembly in situ, even if expensive, technically risky, or dangerous. Strand jacks used for heavy lifting and skidding operations are owned and operated by a large number of construction and heavy lifting companies around the world. They are manufactured by a small number of companies based in Europe.

Notable uses
Kursk submarine disaster

Uses outside of construction
Costa Concordia disaster, in the ship salvage phase.

References
Construction equipment
Machines
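The clamp-and-cylinder cycle described under "How it works" can be written out as a toy control loop. This is purely an illustrative abstraction with an assumed stroke length, not controller code from any manufacturer:

```python
def strand_jack_walk(strokes, stroke_m=0.45):
    """Model of the walking cycle; returns total travel in metres.

    The comments trace one stroke; note that at least one clamp
    grips the strands at every instant, so the load is never free.
    """
    position = 0.0
    for _ in range(strokes):
        # 1. Leading clamp opens; the cylinder extends along the strands.
        position += stroke_m
        # 2. Leading clamp closes, holding the load at the new position.
        # 3. Trailing clamp opens and the cylinder retracts.
        # 4. Trailing clamp closes again, ready for the next stroke.
    return position

print(strand_jack_walk(20))  # 9.0 m of travel in twenty strokes
```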
Strand jack
Physics,Technology,Engineering
489
407,374
https://en.wikipedia.org/wiki/113%20%28number%29
113 (one hundred [and] thirteen) is the natural number following 112 and preceding 114.

Mathematics
113 is the 30th prime number (following 109 and preceding 127), so it can only be divided by one and itself. 113 is the smallest prime number before a prime gap of length 14. 113 is a Sophie Germain prime (2 × 113 + 1 = 227 is also prime), an emirp, an isolated prime, a Chen prime and a Proth prime, as it is a prime number of the form k × 2^n + 1 with odd k < 2^n (113 = 7 × 2^4 + 1). 113 is also an Eisenstein prime with no imaginary part and real part of the form 3n − 1. In decimal, this prime is a primeval number and a permutable prime with 131 and 311. 113 is a highly cototient number and a centered square number. 113 is the denominator of 355/113, an accurate approximation to π.

References
Wells, D. The Penguin Dictionary of Curious and Interesting Numbers. London: Penguin Group. (1987): 134

Integers
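Several of the properties above are easy to check computationally; a small illustrative sketch using naive trial-division primality (not from the reference above):

```python
def is_prime(n):
    """Trial-division primality test, adequate for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

n = 113
print(is_prime(n))                        # True: 113 is prime
print(is_prime(2 * n + 1))                # True: Sophie Germain (227 is prime)
print(all(is_prime(p) for p in (113, 131, 311)))  # True: permutable prime
print(n == 7 * 2**4 + 1)                  # True: Proth form k*2^n + 1, k < 2^n
print(next(p for p in range(n + 1, 200) if is_prime(p)))  # 127: a gap of 14
print(abs(355 / 113 - 3.14159265358979) < 1e-6)   # True: 355/113 approximates pi
```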
113 (number)
Mathematics
187
18,691,464
https://en.wikipedia.org/wiki/Plastic%20igniter%20cord
A plastic igniter cord (PIC) is a type of fuse used to initiate an explosive device or charge. In appearance, an igniter cord is similar to a safety fuse; when ignited, an intense flame spits perpendicular to the cord at a uniform rate as it burns along its length. In the construction and demolition industry, an igniter cord is built much like a safety fuse, consisting of a pyrotechnic composition at the core, wrapped with a nylon sheath to provide shape, and finally wrapped again in an outer plastic shell to provide water resistance. Normally, igniter cord also contains a metal wire at the very center of the pyrotechnic core which runs the entire length of the cord; the pyrotechnic composition reacts with the metal wire (typically aluminum, iron or copper) to increase the energetics of the fuse. There are two types of PICs: the fast type, which has a nominal burning speed of 30 cm per second, a diameter of about 3 mm, and a brownish color; and the slow type, which has a diameter of 2 mm, is greenish in color, and has a nominal burning speed of 3 cm per second.

Trade names
Trade names for PICs include Mantitor cord, manufactured by Orica Brazil, and ICI plastic cord, formerly manufactured by Imperial Chemical Industries in Scotland.

See also
Fuse (explosives)
Detonating cord

References
Detonators
Explosives
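Given the nominal burning speeds quoted above, delay times scale linearly with cord length. A trivial arithmetic sketch (the function and its interface are assumptions of this illustration):

```python
def nominal_burn_time_s(length_cm, cord_type):
    """Nominal burn time in seconds, from the quoted burning speeds."""
    speed_cm_per_s = {'fast': 30.0, 'slow': 3.0}[cord_type]
    return length_cm / speed_cm_per_s

print(nominal_burn_time_s(900, 'slow'))  # 300.0 s for 9 m of slow cord
print(nominal_burn_time_s(900, 'fast'))  # 30.0 s for 9 m of fast cord
```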
Plastic igniter cord
Chemistry
290
38,088,957
https://en.wikipedia.org/wiki/Aspen%20Center%20for%20Physics
The Aspen Center for Physics (ACP) is a non-profit institution for physics research located in Aspen, Colorado, in the Rocky Mountains region of the United States. Since its foundation in 1962, it has hosted distinguished physicists for short-term visits during seasonal winter and summer programs, to promote collaborative research in fields including astrophysics, cosmology, condensed matter physics, string theory, quantum physics, biophysics, and more. To date, sixty-six of the center's affiliates have won Nobel Prizes in Physics and three have won Fields Medals in mathematics. Its affiliates have garnered a wide array of other national and international distinctions, among them the Abel Prize, the Dirac Medal, the Guggenheim Fellowship, the MacArthur Prize, and the Breakthrough Prize. Its visitors have included figures such as the cosmologist and gravitational theorist Stephen Hawking, the particle physicist Murray Gell-Mann, the condensed matter theorist Philip W. Anderson, and the former Prime Minister of the United Kingdom, Margaret Thatcher. In addition to serving as a locus for physics research, the ACP's mission has entailed public outreach: offering programs to educate the general public about physics and to stimulate interest in the subject among youth. History & public outreach The Aspen Center for Physics was founded in 1962 by three people: George Stranahan, Michael Cohen, and Robert W. Craig. George Stranahan, then a postdoctoral fellow at Purdue University, played a critical role in raising funds and early public support for the initiative. He later left physics to become a craft brewer, rancher, and entrepreneur, although he remained a lifelong supporter of the center. Stranahan's enterprises included the Flying Dog Brewery. Michael Cohen, a professor at the University of Pennsylvania, is a condensed matter physicist whose work has investigated the properties of real-world material systems such as ferroelectrics, liquid helium, and biological membranes. Robert W. Craig was the first director of the Aspen Institute, an international non-profit center which supports the exchange of ideas on matters relating to public policy. From its establishment, the ACP has developed a close relationship with the city of Aspen and has contributed to the cultural life of the local community. It has collaborated with other institutions such as the Aspen Institute, the Aspen Music Festival, the Wheeler Opera House, the Aspen Science Center, and the Pitkin County Library.   The center has benefitted from the generosity of public support, notably from the National Science Foundation, the US Department of Energy, NASA, and from the gifts of private donors. These funds have helped to bring hundreds of scientists to the center every year, and have enabled the ACP to host a wide array of public lectures and activities. In addition to sponsoring these public events at its campus in Aspen, the ACP has also broadcast programs on a local-access television station – the “Physics Preview” show on Grassrootstv.org – and on radio, via its ″Radio Physics″ program for high school students on the KDNK station. Supporters & donors The Aspen Center for Physics has benefitted over the years from many acts of philanthropy.  Gifts from Aspen donors as well as from George Stranahan, Martin Flug, the Smart Family Foundation, and affiliated physicists have been especially important to sustaining the center's development and operation. 
George Stranahan was an early driving force behind the establishment and funding of the ACP. After convincing the Aspen Institute to open in 1961 an independent physics division, where scientists could convene to conduct research, he began raising funds to open the Aspen Center for Physics, by collecting donations from locals and Aspen Institute participants. Stranahan raised funds for the original ACP building at a cost of $85,000, while contributing $38,000 himself. To recognize the central role that Stranahan played in establishing the center, the first building constructed on the ACP campus is named in his honor as Stranahan Hall. It was designed by Herbert Bayer, who pioneered Aspen's post-World War Two architectural revitalization. Martin Flug, an Aspen businessman who had been interested in physics since his undergraduate time at Harvard University, funded the construction of an auditorium and a lecture series to accompany it: the Flug Forum. The auditorium is named to honor Flug's father Samuel Flug, an investment banker who was born in Warsaw, Poland and who died in 1962. The Smart Family Foundation of Connecticut funded the construction of Smart Hall, a building on the ACP campus erected in 1996. The gift was arranged by A. Douglas Stone, a member of the Smart family, a physicist at Yale University, and a past ACP Scientific Secretary, Trustee, General Member, and Honorary Member. The third building on the ACP campus, Bethe Hall, is named after Hans Bethe, the German-American nuclear physicist, based at Cornell University. Bethe donated part of his prize money to the ACP after winning the Nobel Prize for Physics in 1967 for his work on stellar nucleosynthesis. Bethe was a long-standing participant at the center: he was vice president and Trustee in the 1970s, then an Honorary Trustee from the 1970s until his death in 2005. Following Bethe's example, several other physicists whose achievements merited awards went on to donate part of their prize money to the ACP.  Recognizing these scientist-donors, the ACP established the “Bethe Circle.” Luminaries ACP participants have included hundreds of post–doctoral fellows, professors, researchers and experimentalists who have come for short-term visits. Some had already achieved distinction before coming to the center; others won prizes or gained international recognition after spending time at the ACP early in their careers. Dozens of ACP physicists have received prestigious awards for their work, including the Nobel Prize in Physics. The following scientists have participated at the Aspen Center for Physics at least once. Several have attended for a number of years. References Aspen, Colorado 1962 establishments in Colorado Theoretical physics institutes Particle physics String theory Research institutes established in 1962 Research institutes in Colorado
Aspen Center for Physics
Physics,Astronomy
1,231
12,205,222
https://en.wikipedia.org/wiki/Coherence%20time%20%28communications%20systems%29
In communications systems, a communication channel may change with time. Coherence time is the time duration over which the channel impulse response is considered to be not varying. Such channel variation is much more significant in wireless communications systems, due to Doppler effects.

Simple model
In a simple model, a signal x(t₁) transmitted at time t₁ will be received as y(t₁) = h(t₁) x(t₁), where h(t₁) is the channel impulse response (CIR) at time t₁. A signal transmitted at time t₂ will be received as y(t₂) = h(t₂) x(t₂). Now, if the difference t₂ − t₁ is relatively small, the channel may be considered constant within the interval [t₁, t₂]. The coherence time (T_c) is therefore the largest such interval: T_c = max (t₂ − t₁) such that h(t₁) ≈ h(t₂).

Relation with Doppler frequency
Coherence time is the time-domain dual of Doppler spread and is used to characterize the time-varying nature of the frequency dispersiveness of the channel in the time domain. The maximum Doppler spread and coherence time are inversely proportional to one another. That is,

T_c ≈ 1 / f_m,

where f_m is the maximum Doppler spread (or maximum Doppler frequency, or maximum Doppler shift), given by

f_m = v f_c / c,

with f_c being the center frequency of the emitter, v the relative speed between transmitter and receiver, and c the speed of light.

Coherence time is actually a statistical measure of the time duration over which the channel impulse response is essentially invariant, and quantifies the similarity of the channel response at different times. In other words, coherence time is the time duration over which two received signals have a strong potential for amplitude correlation. If the reciprocal bandwidth of the baseband signal is greater than the coherence time of the channel, then the channel will change during the transmission of the baseband message, thus causing distortion at the receiver. If the coherence time is defined as the time over which the time correlation function is above 0.5, then the coherence time is approximately

T_c ≈ 9 / (16 π f_m).

In practice, the first approximation of coherence time suggests a time duration during which a Rayleigh fading signal may fluctuate wildly, and the second approximation is often too restrictive. A popular rule of thumb for modern digital communications is to define the coherence time as the geometric mean of the two approximate values, also known as Clarke's model; from the maximum Doppler frequency f_m we can obtain the 50% coherence time. Usually, we use the following relation:

T_c = sqrt(9 / (16 π f_m²)) ≈ 0.423 / f_m.

References
Wireless communication systems
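As a worked example with assumed numbers (a 2 GHz carrier and a receiver moving at 30 m/s, giving f_m = 200 Hz), the three estimates above can be computed directly:

```python
import math

def coherence_times(speed_mps, carrier_hz, c=3.0e8):
    """Return (1/f_m, 9/(16*pi*f_m), 0.423/f_m) in seconds."""
    f_m = speed_mps * carrier_hz / c       # maximum Doppler shift
    t_upper = 1.0 / f_m                    # first approximation
    t_50 = 9.0 / (16.0 * math.pi * f_m)    # 50% correlation definition
    t_rule = math.sqrt(t_upper * t_50)     # geometric-mean rule of thumb
    return t_upper, t_50, t_rule

# Roughly 5.0 ms, 0.9 ms and 2.1 ms respectively.
print(coherence_times(30.0, 2.0e9))
```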
Coherence time (communications systems)
Technology
459
27,952,270
https://en.wikipedia.org/wiki/WEPP
The Water Erosion Prediction Project (WEPP) model is a physically based erosion simulation model built on the fundamentals of hydrology, plant science, hydraulics, and erosion mechanics. The model was developed by an interagency team of scientists to replace the Universal Soil Loss Equation (USLE) and has been widely used in the United States and the world. WEPP requires four inputs, i.e., climate, topography, soil, and management (vegetation); and provides various types of outputs, including water balance (surface runoff, subsurface flow, and evapotranspiration), soil detachment and deposition at points along the slope, sediment delivery, and vegetation growth. The WEPP model has been improved continuously since its public delivery in 1995, and is applicable for a variety of areas (e.g., cropland, rangeland, forestry, fisheries, and surface coal mining). Capability and strength WEPP is applicable for a wide range of geographic and land-use and management conditions, and capable of predicting spatial and temporal distributions of soil detachment and deposition on an event or continuous basis at both small (hillslopes, roads, small parcels) and large (watershed) scales. Hillslope applications of the model can simulate a single profile having various distributions of soil, vegetation, and plant/management conditions. In WEPP watershed applications, multiple hillslopes, channels, and impoundments can be linked together, and runoff and sediment yield from the entire catchment predicted. The model has been parameterized for a large number of soils across the U.S. and model performance has been assessed under a wide variety of land-use and management conditions. In addition, WEPP can generate long-term daily climatic data with CLIGEN, an auxiliary stochastic climate generator. The CLIGEN database contains weather statistics from more than 2,600 weather stations in the United States. The WEPP climate database is supplemented by the PRISM database, which further refines the climatic data based on longitude, latitude, and elevation. WEPP can provide daily runoff, subsurface flow, and sediment output categorized into five particle-size classes: primary clay, primary silt, small aggregates, large aggregates, and primary sand, allowing calculation of selective sediment transport, and enrichment of the fine sediment sizes. Recent improvement Over the last decade, researchers have made significant improvements to the WEPP model. These include improved algorithms to simulate the effect of hydraulic structures and impoundments on runoff and sediment delivery, the addition of Penman–Monteith ET algorithms, subsurface converging lateral flow to represent variable source area runoff, improved canopy biomass routines for forested applications, and the incorporation of an alternative, energy-balance-based winter hydrologic routine. A number of modern graphical user interface programs have also been created, to assist in easier application of WEPP. The main interface for the model is a standalone Windows application (downloadable via: http://www.ars.usda.gov/Research/docs.htm?docid=10621), that allows a user to simulate hillslope profiles and small watersheds and have full control over all model inputs (Figure 1). Additionally, web-based interfaces allow rapid use of the model while accessing existing soil, climate, and management databases (Figure 2). 
A number of geospatial interfaces to WEPP (example in Figure 3) are also available:
GeoWEPP – an ArcView or ArcGIS extension that runs in conjunction with the WEPP Windows interface
On-line web-based GIS interface to WEPP using the open source MapServer GIS program
Iowa Daily Erosion Project
NetMap

Forest and rangeland applications
The U.S. Forest Service has developed a suite of internet interfaces, the Forest Service WEPP (FS WEPP) interfaces, for easier application by stakeholders in forest and rangeland management (forest engineers, rangeland scientists, federal and state regulatory personnel) and the general public. The interfaces can be readily accessed and run through the internet (http://forest.moscowfsl.wsu.edu/fswepp/), and do not require any in-depth understanding of the hydrologic, hydraulic, and erosion principles embedded in the WEPP model. The FS WEPP interfaces include:
Cross Drain – to predict sediment yield from a road segment across a buffer
Rock:Clime – to create and download a modified WEPP climate file
WEPP:Road – to predict erosion from a forest road segment
WEPP:Road Batch – to predict erosion from multiple forest road segments
Disturbed WEPP – to predict erosion from rangeland and forest disturbances (wildfire, harvest operations)
Tahoe Basin Sediment Model (under construction) – to predict runoff and erosion for the Lake Tahoe Basin
WEPP FuME (Fuel Management) – to predict erosion from fuel management practices
ERMiT (Erosion Risk Management Tool) – to predict the probability of sediment delivery with various mitigation treatments in each of five years following wildfire

See also
Erosion
Erosion prediction
Erosion control
Sediment control
Hydrology (agriculture)
Hydrological modelling
Hydrological transport model
Runoff model (reservoir)

References

External links
WEPP – Official site – National Soil Erosion Research Laboratory (NSERL), USDA Agricultural Research Service
WEPP Web Interfaces – NSERL, USDA Agricultural Research Service
Forest Service WEPP Interfaces – USDA Forest Service Rocky Mountain Research Station
GeoWEPP – SUNY Buffalo
NetMap – Earth Systems Institute

Environmental science
Environmental soil science
Water and the environment
Water pollution
Water resources management
Hydrology models
WEPP
Chemistry,Biology,Environmental_science
1,137
1,785,252
https://en.wikipedia.org/wiki/Squinch
In architecture, a squinch is a structural element used to support the base of a circular or octagonal dome that surmounts a square-plan chamber. Squinches are placed to span each upper corner diagonally where the walls meet. Constructed from masonry, they have several forms, including: a graduated series of stepped arches; a hollow, open half-cone (like a funnel) laid horizontally; or a small half-dome niche. They are designed to spread the load of a dome to the intersecting walls on which they are built. By bridging corners, they also visually transition an angular space to a round or near-circular zone. Squinches originated in the Sassanid Empire of Ancient Persia, remaining in use across Central and West Asia into modern times. From this pre-Islamic origin, the squinch developed into an influential element of Islamic architecture. Georgia and Armenia also inherited the form from the Sassanids, and squinches were widely employed there in buildings of all kinds; they are heavily featured in surviving or ruined medieval Christian churches of the region. An alternative approach to the structural problem of translating a square space to a round superstructure is the pendentive, much used in late Roman Empire and Byzantine architecture. Domes built in the Roman-influenced world utilised separately evolved construction methods.

History
Middle East
The dome chamber in the Palace of Ardashir, the Sassanid king, in Firuzabad, Iran, is the earliest surviving example of the use of the squinch. After the rise of Islam, it remained a feature of Islamic architecture, especially in Iran, and was often covered by corbelled stalactite-like structures known as muqarnas. It was used in the Middle East and in eastern Romanesque architecture, although pendentives are more common in Byzantine architecture. The Hagia Sophia features both squinches and pendentives, in combination.

Western Europe
The squinch spread to the Romanesque architecture of western Europe, one example being the Normans' 12th-century church of San Cataldo, Palermo, in Sicily. This has three domes, each supported by four doubled squinches.

Etymology
The word may originate, the Oxford English Dictionary suggests, from the French word escoinson, meaning "from an angle", which became the English word "scuncheon" and then "scunch".

See also
History of Persian domes
History of early and simple domes
History of Roman and Byzantine domes
Splayed arch, an arch with conical intrados

References

Further reading

External links

Domes
Arches and vaults
Architectural elements
Islamic architectural elements
Squinch
Technology,Engineering
523
66,513,472
https://en.wikipedia.org/wiki/Epoetin%20theta
Epoetin theta, sold under the brand name Biopoin among others, is a copy of the human hormone erythropoietin. The most common side effects include shunt thrombosis (clots that can form in blood vessels of patients on dialysis, a blood clearance technique), headache, hypertension (high blood pressure), hypertensive crisis (sudden, dangerously high blood pressure), skin reactions, arthralgia (joint pain) and influenza (flu)-like illness. Epoetin theta was approved for medical use in the European Union in October 2009. It is on the World Health Organization's List of Essential Medicines.

Medical uses
Epoetin theta is indicated for the treatment of symptomatic anemia in adults.

References
This article contains material copied from "Biopoin", European Medicines Agency (https://www.ema.europa.eu/en/medicines/human/EPAR/biopoin), 2021.

External links

Antianemic preparations
Recombinant proteins
Epoetin theta
Biology
213
14,981,066
https://en.wikipedia.org/wiki/Forest%20Ecology%20and%20Management
Forest Ecology and Management is a semimonthly peer-reviewed scientific journal covering forest ecology and the management of forest resources. The journal publishes research manuscripts that report results of original research, review articles, and book reviews. Forestry-related topics are covered that apply biological and social knowledge to address problems encountered in forest management and conservation. See also List of forestry journals References External links Forestry journals Elsevier academic journals Semi-monthly journals Academic journals established in 1977 English-language journals
Forest Ecology and Management
Environmental_science
95
14,677,455
https://en.wikipedia.org/wiki/Selectfluor
Selectfluor, a trademark of Air Products and Chemicals, is a reagent in chemistry that is used as a fluorine donor. This compound is a derivative of the nucleophilic base DABCO. It is a colourless salt that tolerates air and even water. It has been commercialized for use in electrophilic fluorination.

Preparation
Selectfluor is synthesized by the N-alkylation of diazabicyclo[2.2.2]octane (DABCO) with dichloromethane in a Menshutkin reaction, followed by ion exchange with sodium tetrafluoroborate (replacing the chloride counterion with tetrafluoroborate). The resulting salt is then treated with elemental fluorine and sodium tetrafluoroborate. The cation is often depicted with one skewed ethylene ((CH2)2) group. In fact, these pairs of CH2 groups are eclipsed, so that the cation has idealized C3h symmetry.

Mechanism of fluorination
Electrophilic fluorinating reagents could in principle operate by electron-transfer pathways or by an SN2 attack at fluorine. This distinction has not been settled in general. By using a charge-spin separated probe, it was possible to show that the electrophilic fluorination of stilbenes with Selectfluor proceeds through an SET/fluorine atom transfer mechanism. In certain cases Selectfluor can transfer fluorine to alkyl radicals.

Applications
The conventional source of "electrophilic fluorine", i.e. the equivalent to the superelectrophile F+, is gaseous fluorine, which requires specialised equipment for manipulation. Selectfluor reagent is a salt, the use of which requires only routine procedures. Like F2, the salt delivers the equivalent of F+. It is mainly used in the synthesis of organofluorine compounds.

Specialized applications
Selectfluor reagent also serves as a strong oxidant, a property that is useful in other reactions in organic chemistry, such as the oxidation of alcohols and phenols. As applied to electrophilic iodination, Selectfluor reagent activates the I–I bond in the I2 molecule.

Related reagents
Similar to Selectfluor are the N-fluorosulfonimides.

References

Patents

Reagents for organic chemistry
Tetrafluoroborates
Fluorinating agents
Quaternary ammonium compounds
Nitrogen heterocycles
Organochlorides
Substances discovered in the 1990s
Selectfluor
Chemistry
547
64,413,209
https://en.wikipedia.org/wiki/Splash%20lubrication
Splash lubrication is a rudimentary form of lubrication found in early engines. Such engines could be external combustion engines (such as stationary steam engines) or internal combustion engines (such as petrol, diesel or paraffin engines).

Description
An engine that uses splash lubrication requires neither an oil pump nor an oil filter. Splash lubrication is an antique system whereby scoops on the big-ends of the connecting rods dip into the oil sump and splash the lubricant upwards towards the cylinders, creating an oil mist which settles into droplets. The oil droplets then pass through drillings to the bearings and thereby lubricate the moving parts. Provided that the bearing is a ball bearing or a roller bearing, splash lubrication is usually sufficient; however, plain bearings typically need a pressure feed to maintain the oil film, loss of which leads to overheating and seizure. The splash lubrication system counts simplicity, reliability, and low cost among its virtues. However, splash lubrication can work only on very low-revving engines, as otherwise the sump oil would be churned into a frothy mousse. The Norwegian firm Sabb Motor produced a number of small marine diesel engines, mostly single-cylinder or twin-cylinder units, that used splash lubrication.

Modern use of splash lubrication
Splash lubrication is still used in modern engines and mechanisms. For example:
the Robinson R22 helicopter uses splash lubrication on some bevel gears.
all BMW motorcycles with shaft drive use splash lubrication in the final drive hub.
British pre-unit and unit construction motorcycles (such as Triumph, BSA and Norton) used splash lubrication in their gearboxes.
Cars and lorries invariably still use splash lubrication in their differentials.

See also
Oil pressure

References

External links
International Council for Machinery Lubrication
Machinery Lubrication magazine (archived)

Lubrication
Tribology
Lubricants
Splash lubrication
Chemistry,Materials_science,Engineering
402
75,346,266
https://en.wikipedia.org/wiki/Stavros%20Avramidis
Stavros Avramidis (born in Kavala, Greece, in 1958) is a Greek Canadian wood scientist and professor at the University of British Columbia in Canada, who is an elected fellow (FIAWS) and president of the International Academy of Wood Science for the period 2023–2026.

First years and education
Avramidis was born in Kavala, Greece, on April 6, 1958, and grew up in Thessaloniki. He attended the Department of Forestry at the Aristotle University of Thessaloniki and received his university degree in 1981. Following that, he pursued research-based postgraduate studies (1982–1983) (M.S. in the area of composite products) and doctoral studies (1983–1986) in the United States at the State University of New York College of Environmental Science and Forestry, in the area of biopolymer physics under the guidance of John F. Siau.

Academic career
Avramidis began his academic career in 1987 in Canada at the University of British Columbia as an assistant professor in the Department of Wood Science in the Faculty of Forestry. He was appointed associate professor in 1993 and full professor in 1998. Avramidis has served as Head of the UBC Department of Wood Science for two consecutive terms, from 2016 to the present. Avramidis's research team has presented research work on the physical and drying properties of wood. His applied research addresses practical issues in the Canadian wood industry related to energy optimization and upgrading production methods, using acoustic, electrical, and optical techniques, as well as radio wave methods, simulation, and artificial intelligence.

Research work and recognition
Avramidis, along with his colleagues, has authored over 250 scientific articles and more than 100 industrial studies, and his research work had received almost 3,000 citations in the Scopus database as of July 2024. In 2012, Avramidis was selected as a member of the editorial board of the journal Wood Material Science and Engineering. He has been a member of the editorial boards of Holzforschung, Drying Technology, Wood Research, European Journal of Wood and Wood Products, and Maderas. Ciencia y tecnología. In 2020, his name was included in the Mendeley Data set published in the journal PLOS Biology recognizing the international impact of his long-running research in wood drying. In 2022, Avramidis received the Ternryd Award 2022 from the Swedish Linnaeus Academy Research Foundation for his research in wood science. In June 2023, Avramidis was elected president of the International Academy of Wood Science for the years 2023–2026. In October 2023, a refereed meta-research study conducted by John Ioannidis and his team at Stanford University included Avramidis in the Elsevier Data 2022 list, placing him in the top 2% of researchers in the area of wood physics. In August 2024, Avramidis received the same international distinction for his research work in wood science (Elsevier Data 2023; career data).

References

External links
Google Scholar
ResearchGate

1958 births
Living people
Academic staff of the University of British Columbia
State University of New York College of Environmental Science and Forestry alumni
Aristotle University of Thessaloniki alumni
Greek scientists
Wood sciences
Fellows of the International Academy of Wood Science
Wood scientists
Stavros Avramidis
Materials_science,Engineering
674
11,237,206
https://en.wikipedia.org/wiki/Diethyl%20ether%20%28data%20page%29
This page provides supplementary chemical data on diethyl ether.

Material Safety Data Sheet
Handling this chemical may require notable safety precautions. It is highly recommended that you seek the Material Safety Data Sheet (MSDS) for this chemical from a reliable source such as SIRI, and follow its directions. An MSDS for diethyl ether is available from Mallinckrodt Baker.

Structure and properties

Thermodynamic properties

Vapor pressure of liquid
Table data obtained from CRC Handbook of Chemistry and Physics, 44th ed.

Distillation data

Spectral data

References

Chemical data pages
Diethyl ether (data page)
Chemistry
120
3,282,070
https://en.wikipedia.org/wiki/Land%20allocation%20decision%20support%20system
LADSS, or land allocation decision support system, is an agricultural land-use planning tool developed at The Macaulay Institute. More recently, the term LADSS has been used to refer to the research of the team behind the original planning tool.

Overview of research
The focus of the LADSS team's research has evolved over time from land-use decision support towards policy support, climate change, and the concepts of resilience and adaptive capacity.

Recent studies
The team has recently published a study which examines, from a Scottish perspective, a number of alternative scenarios for reform of CAP Pillar 1 Area Payments. It focuses on two alternative classifications, the Macaulay Land Capability for Agriculture classification and Less Favoured Area Designations, and includes analysis of the redistribution of payments from the current historical system. The study is entitled Modelling Scenarios for CAP Pillar 1 Area Payments using Macaulay Land Capability for Agriculture (& Less Favoured Area Designations) and was used to inform the Pack Inquiry.

Other projects include:
The EU FP7 SMILE (Synergies in Multi-scale Inter-Linkages of Eco-social Systems) project, which focuses on the concept of social metabolism, drawing attention to how energy, material, money and ideas are utilised by society.
The Aquarius project, which aims to find and implement sustainable, integrated land-water management by engaging with land managers.
The COP15 website, which provides a series of briefing and scoping papers produced by the United Nations Environment Programme (UNEP), with contributions from The Macaulay Institute, to raise the profile of the ecosystems approach at the UNFCCC 15th Conference of the Parties meeting in Copenhagen as a means of tackling not just climate change mitigation and adaptation, but also poverty alleviation, disaster risk reduction, biodiversity loss and many other environmental issues.

LADSS planning tool
The LADSS planning tool is implemented using the programming language G2 from Gensym alongside a Smallworld GIS application using the Magik programming language and an Oracle database. LADSS models crops using the CropSyst simulation model. LADSS also contains a livestock model plus social, environmental and economic impact assessments. LADSS has been used to address climate change issues affecting agriculture in Scotland and Italy. Part of this work has involved the use of General Circulation Models (also known as global climate models) to predict future climate scenarios. Other work has included a study into how Common Agricultural Policy reform will affect the uplands of Scotland, an assessment of agricultural sustainability, and rural development research within the AGRIGRID project.

Resources
Peer-reviewed papers produced by LADSS are available for download in PDF format.

References

External links
Official site
Original LADSS planning tool
SMILE project
Aquarius project
COP15 project

GIS software
Agriculture in Scotland
Environmental soil science
Environment of Scotland
Land use
Town and country planning in Scotland
Land reform in Scotland
Land allocation decision support system
Environmental_science
558
22,456,146
https://en.wikipedia.org/wiki/Reductio%20ad%20absurdum
In logic, reductio ad absurdum (Latin for "reduction to absurdity"), also known as argumentum ad absurdum (Latin for "argument to absurdity") or apagogical argument, is the form of argument that attempts to establish a claim by showing that the opposite scenario would lead to absurdity or contradiction. This argument form traces back to Ancient Greek philosophy and has been used throughout history in both formal mathematical and philosophical reasoning, as well as in debate. Formally, the proof technique is captured by an axiom for "Reductio ad Absurdum", normally given the abbreviation RAA, which is expressible in propositional logic. This axiom is the introduction rule for negation (see negation introduction), and it is sometimes named to make this connection clear. It is a consequence of the related mathematical proof technique called proof by contradiction.

Examples
The "absurd" conclusion of a reductio ad absurdum argument can take a range of forms, as these examples show:
The Earth cannot be flat; otherwise, since the Earth is assumed to be finite in extent, we would find people falling off the edge.
There is no smallest positive rational number $q$. If there were, then $q/2$ would also be a rational number, it would be positive, and we would have $q/2 < q$. This contradicts the hypothetical minimality of $q$ among positive rational numbers, so we conclude that there is no such smallest positive rational number.
The first example argues that denial of the premise would result in a ridiculous conclusion, against the evidence of our senses (empirical evidence). The second example is a mathematical proof by contradiction (also known as an indirect proof), which argues that the denial of the premise would result in a logical contradiction (there is a "smallest" number and yet there is a number smaller than it).

Greek philosophy
Reductio ad absurdum was used throughout Greek philosophy. The earliest example of a reductio argument can be found in a satirical poem attributed to Xenophanes of Colophon (c. 570 – c. 475 BCE). Criticizing Homer's attribution of human faults to the gods, Xenophanes states that humans also believe that the gods' bodies have human form. But if horses and oxen could draw, they would draw the gods with horse and ox bodies. The gods cannot have both forms, so this is a contradiction. Therefore, the attribution of other human characteristics to the gods, such as human faults, is also false. Greek mathematicians proved fundamental propositions using reductio ad absurdum. Euclid of Alexandria (mid-4th – mid-3rd centuries BCE) and Archimedes of Syracuse (c. 287 – c. 212 BCE) are two very early examples. The earlier dialogues of Plato (424–348 BCE), relating the discourses of Socrates, raised the use of reductio arguments to a formal dialectical method (elenchus), also called the Socratic method. Typically, Socrates' opponent would make what would seem to be an innocuous assertion. In response, Socrates, via a step-by-step train of reasoning, bringing in other background assumptions, would make the person admit that the assertion resulted in an absurd or contradictory conclusion, forcing him to abandon his assertion and adopt a position of aporia. The technique was also a focus of the work of Aristotle (384–322 BCE), particularly in his Prior Analytics, where he referred to it as demonstration to the impossible (62b). Another example of this technique is found in the sorites paradox, where it was argued that if 1,000,000 grains of sand formed a heap, and removing one grain from a heap left it a heap, then a single grain of sand (or even no grains) forms a heap.
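The formal reading of RAA mentioned in the lead can be written compactly. The following LaTeX lines are an illustrative formalization (not taken from the article), stating RAA as negation introduction and applying it to the smallest-positive-rational example above:

```latex
% RAA as negation introduction: if assuming P yields a contradiction,
% then conclude not-P.
\[ (P \rightarrow \bot) \rightarrow \neg P \]
% Applied to the example, with P = "some q is the smallest positive rational":
\[ P \;\Rightarrow\; 0 < \tfrac{q}{2} < q \;\Rightarrow\; \bot,
   \qquad \text{hence } \neg P. \]
```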
Buddhist philosophy
Much of Madhyamaka Buddhist philosophy centers on showing how various essentialist ideas have absurd conclusions through reductio ad absurdum arguments (known as prasaṅga, "consequence", in Sanskrit). In the Mūlamadhyamakakārikā, Nāgārjuna's reductio ad absurdum arguments are used to show that any theory of substance or essence is unsustainable and that therefore phenomena (dharmas) such as change, causality, and sense perception are empty (sunya) of any essential existence. Nāgārjuna's main goal is often seen by scholars as refuting the essentialism of certain Buddhist Abhidharma schools (mainly Vaibhasika) which posited theories of svabhava (essential nature), and also the Hindu Nyāya and Vaiśeṣika schools which posited a theory of ontological substances (dravyatas).

Example from Nāgārjuna's Mūlamadhyamakakārikā
In verse 13.5, Nāgārjuna wishes to demonstrate the consequences of the presumption that things essentially, or inherently, exist, pointing out that if a "young man" exists in himself then it follows he cannot grow old (because he would no longer be a "young man"). As we attempt to separate the man from his properties (youth), we find that everything is subject to momentary change, and we are left with nothing beyond the merely arbitrary convention that such entities as "young man" depend upon.

13:5
A thing itself does not change.
Something different does not change.
Because a young man does not grow old.
And because an old man does not grow old either.

Principle of non-contradiction
Aristotle clarified the connection between contradiction and falsity in his principle of non-contradiction, which states that a proposition cannot be both true and false. That is, a proposition $Q$ and its negation $\neg Q$ ("not-Q") cannot both be true. Therefore, if a proposition and its negation can both be derived logically from a premise, it can be concluded that the premise is false. This technique, known as indirect proof or proof by contradiction, has formed the basis of arguments in formal fields such as logic and mathematics.

See also
Appeal to ridicule
Argument from fallacy
Contraposition
List of Latin phrases
Mathematical proof
Prasangika
Slippery slope
Strawman

Sources
Pasti, Mary. Reductio Ad Absurdum: An Exercise in the Study of Population Change. United States, Cornell University, Jan. 1977.
Daigle, Robert W. The Reductio Ad Absurdum Argument Prior to Aristotle. San Jose State University, 1991.

References

External links

Latin logical phrases
Latin philosophical phrases
Theorems in propositional logic
Madhyamaka
Arguments
Pyrrhonism
Greek philosophy
Buddhist philosophical concepts
Reductio ad absurdum
Mathematics
1,335
65,794,256
https://en.wikipedia.org/wiki/Moto%20G%205G%20Plus
Moto G 5G Plus and Motorola One 5G are Android phablets developed by Motorola Mobility, a subsidiary of Lenovo. The Moto G 5G Plus was announced in July 2020 for Europe; the Motorola One 5G was announced in September 2020 for the United States. A version with mmWave support followed in October, exclusive to Verizon. References Android (operating system) devices Mobile phones introduced in 2020 Mobile phones with multiple rear cameras Motorola smartphones Mobile phones with 4K video recording
Moto G 5G Plus
Technology
103
956,888
https://en.wikipedia.org/wiki/Cetirizine
Cetirizine is a second-generation antihistamine used to treat allergic rhinitis (hay fever), dermatitis, and urticaria (hives). It is taken by mouth. Effects generally begin within thirty minutes and last for about a day. The degree of benefit is similar to that of other antihistamines such as diphenhydramine, which is a first-generation antihistamine.

Common side effects include sleepiness, dry mouth, headache, and abdominal pain. The degree of sleepiness that occurs is generally less than with first-generation antihistamines, because second-generation antihistamines are more selective for the H1 receptor. Among second-generation antihistamines, however, cetirizine is more likely than fexofenadine and loratadine to cause drowsiness. Use in pregnancy appears safe, but use during breastfeeding is not recommended. The medication works by blocking histamine H1 receptors, mostly outside the brain. Cetirizine can be used for paediatric patients; the main side effect to be cautious about is somnolence.

It was patented in 1983 and came into medical use in 1987. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication. In 2022, it was the 43rd most commonly prescribed medication in the United States, with more than 13 million prescriptions.

Medical uses
Allergies
Cetirizine's primary indication is for hay fever and other allergies. Because the symptoms of itching and redness in these conditions are caused by histamine acting on the H1 receptor, blocking those receptors temporarily relieves those symptoms. Cetirizine is also commonly prescribed to treat acute and (in particular cases) chronic urticaria (hives), more efficiently than any other second-generation antihistamine.

Available forms
Cetirizine is available over-the-counter in the US in the form of 5 and 10 mg tablets. A 20 mg strength is available by prescription only. It is also available as a 1 mg/mL syrup for oral administration by prescription. In the UK, up to 30 tablets of 10 mg are on the general sales list (of pharmaceuticals) and can be purchased without a prescription and pharmacist supervision. The drug can be in the form of tablets, capsules or syrup.

Adverse effects
Commonly reported side effects of cetirizine include headache, dry mouth, drowsiness, and fatigue, while more serious but rare adverse effects reported include tachycardia and edema.

Pruritus after discontinuation of cetirizine
Discontinuing cetirizine after prolonged use (typically, use beyond six months) may result in pruritus (generalized itchiness). The United States Food and Drug Administration (FDA) analyzed cases of pruritus after stopping cetirizine in the FDA Adverse Event Reporting System (FAERS) database and medical literature through 24 April 2017. Their report noted that some patients indicated the itchiness impacted their ability to work, sleep or perform normal daily activities. No specific schedule for weaning is currently provided in the drug information for cetirizine.

Pharmacology
Pharmacodynamics
Cetirizine acts as a highly selective antagonist of the histamine H1 receptor. The Ki values for the H1 receptor are approximately 6 nM for cetirizine, 3 nM for levocetirizine, and 100 nM for dextrocetirizine, indicating that the levorotatory enantiomer is the main active form.
Cetirizine has 600-fold or greater selectivity for the H1 receptor over a wide variety of other sites, including muscarinic acetylcholine, serotonin, dopamine, and α-adrenergic receptors, among many others. The drug shows 20,000-fold or greater selectivity for the H1 receptor over the five muscarinic acetylcholine receptors, and hence does not exhibit anticholinergic effects. It shows negligible inhibition of the hERG channel (IC50 > 30 μM), and no cardiotoxicity has been observed with cetirizine at doses of up to 60 mg/day, six times the normal recommended dose and the highest dose of cetirizine that has been studied in healthy subjects.

Cetirizine crosses the blood–brain barrier only slightly, and for this reason produces minimal sedation compared to many other antihistamines. A positron emission tomography (PET) study found that brain occupancy of the H1 receptor was 12.6% for 10 mg cetirizine, 25.2% for 20 mg cetirizine, and 67.6% for 30 mg hydroxyzine. (A 10 mg dose of cetirizine equals about a 30 mg dose of hydroxyzine in terms of peripheral antihistamine effect.) PET studies with antihistamines have found that brain H1 receptor occupancy of more than 50% is associated with a high prevalence of somnolence and cognitive decline, whereas brain H1 receptor occupancy of less than 20% is considered to be non-sedative. In accordance, H1 receptor occupancy correlated well with subjective sleepiness for 30 mg hydroxyzine, but there was no correlation for 10 or 20 mg cetirizine. As such, brain penetration and brain H1 receptor occupancy by cetirizine are dose-dependent, and in accordance, while cetirizine at doses of 5 to 10 mg has been reported to be non-sedating or mildly sedating, a higher dose of 20 mg has been found to induce significant drowsiness in other studies.

Cetirizine also shows anti-inflammatory properties independent of H1 receptors. The effect is exhibited through suppression of the NF-κB pathway and by regulating the release of cytokines and chemokines, thereby regulating the recruitment of inflammatory cells. It has been shown to inhibit eosinophil chemotaxis and LTB4 release. At a dosage of 20 mg, Boone et al. found that it inhibited the expression of VCAM-1 in patients with atopic dermatitis.

Pharmacokinetics
Absorption
Cetirizine is rapidly and extensively absorbed upon oral administration in tablet or syrup form. The oral bioavailability of cetirizine is at least 70% and of levocetirizine is at least 85%. The Tmax of cetirizine is approximately 1.0 hour regardless of formulation. The pharmacokinetics of cetirizine have been found to increase linearly with dose across a range of 5 to 60 mg. Its Cmax following a single dose has been found to be 257 ng/mL for 10 mg and 580 ng/mL for 20 mg. Food has no effect on the bioavailability of cetirizine but has been found to delay the Tmax by 1.7 hours (i.e., to approximately 2.7 hours) and to decrease the Cmax by 23%. Similar findings were reported for levocetirizine, which had its Tmax delayed by 1.25 hours and its Cmax decreased by about 36% when administered with a high-fat meal. Steady-state levels of cetirizine occur within 3 days and there is no accumulation of the drug with chronic administration. Following once-daily administration of 10 mg cetirizine for ten days, the mean Cmax was 311 ng/mL.

Distribution
The mean plasma protein binding of cetirizine has been found to be 93 to 96% across a range of 25 to 1,000 ng/mL, independent of concentration.
Plasma protein binding of 88 to 96% has also been reported across multiple studies. The drug is bound to albumin with high affinity, while α1-acid glycoprotein and lipoproteins contribute much less to total plasma protein binding. The unbound or free fraction of levocetirizine has been reported to be 8%. The true volume of distribution of cetirizine is unknown but is estimated to be 0.3 to 0.45 L/kg. Cetirizine poorly and slowly crosses the blood–brain barrier, which is thought to be due to its chemical properties and its activity as a P-glycoprotein substrate.

Metabolism
Cetirizine is notably not metabolized by the cytochrome P450 system. Because of this, it does not interact significantly with drugs that inhibit or induce cytochrome P450 enzymes, such as theophylline, erythromycin, clarithromycin, cimetidine, or alcohol. Studies with cetirizine synthesized with radioactive carbon-14 show that 90% of excreted cetirizine is unchanged at 2 hours, 80% at 10 hours, and 70% at 24 hours, indicating limited and slow metabolism. While cetirizine does not undergo extensive metabolism or metabolism by the cytochrome P450 enzymes, it does undergo some metabolism by other means, the metabolic pathways of which include oxidation and conjugation. The precise enzymes responsible for transformation of cetirizine have not been identified.

Elimination
Cetirizine is eliminated approximately 70 to 85% in the urine and 10 to 13% in the feces. In total, about 60% of cetirizine eliminated in the urine is unchanged. It is eliminated in the urine via an active transport mechanism. The elimination half-life of cetirizine ranges from 6.5 to 10 hours in healthy adults, with a mean across studies of approximately 8.3 hours. The elimination half-life of cetirizine is increased in the elderly (to 12 hours), in hepatic impairment (to 14 hours), and in renal impairment (to 20 hours). Concentrations of cetirizine in the skin decline much more slowly than concentrations in the blood plasma. Its duration of action is at least 24 hours.

Chemistry
Cetirizine contains L- and D-stereoisomers. Chemically, levocetirizine is the active L-enantiomer of cetirizine. The drug is a member of the diphenylmethylpiperazine group of antihistamines. Analogues include cyclizine and hydroxyzine.

Synthesis
The 1-(4-chlorophenylmethyl)-piperazine is alkylated with methyl (2-chloroethoxy)-acetate in the presence of sodium carbonate and xylene solvent to produce the SN2 substitution product in 28% yield. Saponification of the acetate ester is done by refluxing with potassium hydroxide in absolute ethanol to afford a 56% yield of the potassium salt intermediate. This is then hydrolyzed with aqueous HCl and extracted to give an 81% yield of the carboxylic acid product.

Availability
Cetirizine is available without a prescription. In some countries, it is only available over-the-counter in packages containing seven or ten 10 mg doses. Cetirizine is available as a combination medication with pseudoephedrine, a decongestant. The combination is often marketed using the same brand name as the cetirizine with a "-D" suffix (for example, Zyrtec-D). Cetirizine is marketed under the brand names Alatrol, Alerid, Allacan, Allercet, Alzene, Cerchio, Cetirin, Cetizin, Cetriz, Cetzine, Cezin, Cetgel, Cirrus, Histec, Histazine, Humex, Letizen, Okacet (Cipla), Piriteze, Reactine, Razene, Rigix, Sensahist (Oethmann, South Africa), Triz, Zetop, Zirtec, Zirtek, Zodac, Zyllergy, Zynor, Zyrlek, and Zyrtec (Johnson & Johnson), inter alia.
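The first-order elimination kinetics described in this section are easy to compute. Below is a minimal Python sketch, assuming simple exponential decay and using the roughly 8.3-hour mean adult half-life cited above (the function name and printed values are illustrative, not from the source):

```python
def fraction_remaining(t_hours: float, half_life_hours: float = 8.3) -> float:
    """First-order elimination: fraction of drug remaining after t hours."""
    return 0.5 ** (t_hours / half_life_hours)

# Using the ~8.3 h mean adult half-life cited in the Elimination section:
for t in (8.3, 24.0):
    print(f"{t:>5.1f} h: {fraction_remaining(t):.1%} remaining")
# ~50% remains after one half-life; ~13% after 24 h, consistent with
# once-daily dosing and the stated duration of action of at least 24 hours
```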
References Belgian inventions Acetic acids Ethers H1 receptor antagonists Drugs developed by Johnson & Johnson Peripherally selective drugs Drugs developed by Pfizer Chlorcyclizines World Health Organization essential medicines Piperazines 4-Chlorophenyl compounds
Cetirizine
Chemistry
2,626
3,742,568
https://en.wikipedia.org/wiki/Yeast%20display
Yeast display (or yeast surface display) is a protein engineering technique that uses the expression of recombinant proteins incorporated into the cell wall of yeast. This method can be used for several applications, such as isolating and engineering antibodies and determining host-microbe interactions.

Development
The yeast display technique was first published by the laboratory of Professor K. Dane Wittrup and Eric T. Boder. The technology was sold to Abbott Laboratories in 2001.

How it works
A protein of interest is displayed as a fusion to the Aga2p protein on the surface of yeast. The Aga2p protein is used by yeast to mediate cell–cell contacts during yeast cell mating. As such, display of a protein via Aga2p likely projects the fusion protein away from the cell surface, minimizing potential interactions with other molecules on the yeast cell wall. The use of magnetic separation and flow cytometry in conjunction with a yeast display library can be a highly effective method for isolating high-affinity protein ligands against nearly any receptor through directed evolution.

Advantages and disadvantages
Advantages of yeast display over other in vitro evolution methods include eukaryotic expression and post-translational processing, the quality control mechanisms of the eukaryotic secretory pathway, minimal avidity effects, and quantitative library screening through fluorescence-activated cell sorting (FACS). Yeast are eukaryotic organisms that allow for complex post-translational modifications to proteins that no other display libraries are able to provide. Disadvantages include smaller mutant library sizes compared to alternative methods and differential glycosylation in yeast compared to mammalian cells. Alternative methods for protein evolution in vitro are mammalian display, phage display, ribosome display, bacterial display, and mRNA display.

References

Further reading
Boder, E.T., Wittrup, K.D.; Biotechnol. Prog., 1998, 14, 55–62.
Boder, E.T., Midelfort, K.S., Wittrup, K.D.; Proc Natl Acad Sci, 2000, 97(20):10701–10705.
Graff, C.P., Chester, K., Begent, R., Wittrup, K.D.; Prot. Eng. Des. Sel., 2004, 17, 293–304.
Feldhaus, M., Siegel, R.; Methods in Molecular Biology 263:311–332 (2004).

Display techniques
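Because library size limits what a screen can find, sorts are usually planned so that sampling oversamples the library's diversity. The snippet below is a generic Poisson-coverage estimate, not a method from the cited works; the function name and example numbers are illustrative:

```python
import math

def cells_for_coverage(diversity: int, coverage: float) -> int:
    """Number of cells to screen so that any given clone in a library of the
    stated diversity is sampled at least once with the given probability,
    assuming uniform Poisson sampling: P(miss) = exp(-N / diversity)."""
    return math.ceil(-diversity * math.log(1.0 - coverage))

# e.g. a 1e7-member yeast library screened for 99% per-clone coverage
print(cells_for_coverage(10**7, 0.99))  # about 4.6e7 cells to sort
```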
Yeast display
Chemistry,Biology
505
12,687,622
https://en.wikipedia.org/wiki/Marina%20Ratner
Marina Evseevna Ratner (October 30, 1938 – July 7, 2017) was a professor of mathematics at the University of California, Berkeley, who worked in ergodic theory. Around 1990, she proved a group of major theorems concerning unipotent flows on homogeneous spaces, known as Ratner's theorems. Ratner was elected to the American Academy of Arts and Sciences in 1992, awarded the Ostrowski Prize in 1993, and elected to the National Academy of Sciences the same year. In 1994, she was awarded the John J. Carty Award from the National Academy of Sciences.

Biographical information
Ratner was born in Moscow, Russian SFSR, to a Jewish family; her father was a plant physiologist and her mother a chemist. Ratner's mother was fired from work in the 1940s for writing to her own mother in Israel, then considered an enemy of the Soviet state. Ratner gained an interest in mathematics in the fifth grade. From 1956 to 1961, she studied mathematics and physics at Moscow State University. There, she became interested in probability theory, inspired by A. N. Kolmogorov and his group. After graduation, she spent four years working in Kolmogorov's applied statistics group. Following this, she returned to Moscow State University for graduate studies under Yakov G. Sinai, also a student of Kolmogorov. She completed her PhD thesis, titled "Geodesic Flows on Unit Tangent Bundles of Compact Surfaces of Negative Curvature", in 1969. In 1971 she emigrated from the Soviet Union to Israel, and she taught at the Hebrew University from 1971 until 1975. She began to work with Rufus Bowen at Berkeley and later emigrated to the United States, becoming a professor of mathematics at Berkeley. Her work included proofs of conjectures dealing with unipotent flows on quotients of Lie groups made by S. G. Dani and M. S. Raghunathan. For this and other work, she won the John J. Carty Award for the Advancement of Science in 1994. She became only the third woman plenary speaker at the International Congress of Mathematicians in 1994. Marina Ratner died on July 7, 2017, at the age of 78.

Selected publications

References

1938 births
2017 deaths
Fellows of the American Academy of Arts and Sciences
Members of the United States National Academy of Sciences
20th-century American mathematicians
21st-century American mathematicians
American people of Russian-Jewish descent
Jewish Russian scientists
Jewish American scientists
University of California, Berkeley faculty
Dynamical systems theorists
20th-century American women mathematicians
21st-century American women mathematicians
Russian women scientists
20th-century Russian mathematicians
20th-century Russian women
21st-century American Jews
Marina Ratner
Mathematics
540
31,745,616
https://en.wikipedia.org/wiki/Robert%20H.%20Dodds%20Jr.
Robert H. Dodds Jr. is a professor in the Civil and Environmental Engineering Department at the University of Illinois Urbana-Champaign. He specializes in the field of structural engineering, with a focus on nonlinear fracture mechanics of structural materials. He has done extensive research in the fields of fracture mechanics, fatigue, and engineering software development. In 2008, Dodds was elected a member of the National Academy of Engineering for contributions to nonlinear fracture mechanics and its applications to practice in nuclear power and space systems.

Education
For his undergraduate career, Dodds attended the University of Memphis Herff College of Engineering, where he studied civil engineering. After attaining his Bachelor of Science (1973), he left Memphis and enrolled at the University of Illinois Urbana-Champaign. At Illinois, he went on to receive a Master of Science in civil engineering (1975) and eventually a Doctor of Philosophy in civil engineering (1978). Dodds has also been a loyal alumnus, donating annually to both the University of Illinois and the University of Memphis. He also contributed $50,000 to the construction of the M. T. Geoffrey Yeh Student Center, a new addition to the Newmark Civil Engineering Laboratory at the University of Illinois.

Career
After the completion of his postgraduate degrees, Dodds took an associate professor position on the faculty of the University of Kansas, teaching civil engineering. After eight years at Kansas, he returned to the University of Illinois in 1987. Back at Illinois, he served as an assistant professor and eventually received the professor title in 1992, teaching courses in structural analysis, the finite element method, fatigue, fracture mechanics and software design. Eventually, Dodds was named the Nathan M. Newmark endowed professor of civil engineering from 1996 to 2000. Then, in 2000, he became the inaugural M. T. Geoffrey Yeh endowed chair at the University of Illinois, a title he still maintains today. After continuing to serve as a professor, Dodds was named head of the civil and environmental engineering department at Illinois in 2004, only to step down later to renew his role as a mentor to students and focus on his continued research. While pursuing his career in higher education, Dodds became co-editor of Engineering Fracture Mechanics in 1996. He also serves as an associate editor for the International Journal for Engineering with Computers and Engineering Computations, as well as a contributing editor for the International Journal for Mechanics of Advanced Materials and Structures. He has previously served as an associate editor for the ASCE Journal of Structural Engineering.

Research
While educating new generations of civil engineers, Dodds has also made large contributions to the structural engineering community through his research. He has published over 100 journal papers with a focus on nonlinear fracture mechanics, material fatigue, and the associated computational methods. In addition, he has taken the lead on numerous large-scale research projects concerning fracture in structural metals for the US Navy, the US Nuclear Regulatory Commission, and NASA. The conclusions drawn from his research have wide-ranging effects on both different engineering disciplines and industrial applications.

Honors
Dodds has received the following accolades for his research and work with students:
American Society of Civil Engineers Walter L. Huber Prize (1992)
Nathan M. Newmark Medal (2001)
George R.
Irwin Medal from ASTM (2000)
American Society for Testing and Materials (ASTM) Award of Merit (2000)
Munro Prize for best paper published in the International Journal of Engineering Structures (2000) (with his Ph.D. student Carlos Matos)
Chi Epsilon Chapter Honor Member (2000)
Elected to the National Academy of Engineering (2008)

References

1950 births
American civil engineers
American bridge engineers
Structural engineers
Living people
Members of the United States National Academy of Engineering
Robert H. Dodds Jr.
Engineering
733
36,051,162
https://en.wikipedia.org/wiki/Macroemulsion
Macroemulsions are dispersed liquid-liquid, thermodynamically unstable systems with droplet sizes ranging from 1 to 100 μm, spanning two orders of magnitude, which, most often, do not form spontaneously. Macroemulsions scatter light effectively and therefore appear milky, because their droplets are greater than a wavelength of light. They are part of a larger family of emulsions, along with miniemulsions (or nanoemulsions). As with all emulsions, one phase serves as the dispersing agent. It is often called the continuous or outer phase. The remaining phase(s) are the disperse or inner phase(s), because their liquid droplets are finely distributed within the continuous phase. This type of emulsion is thermodynamically unstable, but can be stabilized for a period of time with applications of kinetic energy. Surfactants (as the main emulsifiers) are used to reduce the interfacial tension between the two phases and induce macroemulsion stability for a useful amount of time. Emulsions can otherwise be stabilized with polymers, solid particles (Pickering emulsions) or proteins.

Classification
Macroemulsions can be divided into two main categories based on whether they are a single emulsion or a double (or multiple) emulsion. Both categories will be described using a typical oil (O) and water (W) immiscible fluid pairing. Single emulsions can be subdivided into two different types. For each single emulsion, a single surfactant stabilizing layer exists as a buffer between the two phases. In (O/W), oil droplets are dispersed in water. On the other hand, (W/O) involves water droplets finely dispersed in oil. Double or multiple emulsion classification is similar to single emulsion classification, except the immiscible phases are separated by at least two surfactant thin films. In a (W/O/W) combination, an immiscible oil phase exists between two separate water phases. In contrast, in an (O/W/O) combination the immiscible water phase separates two different oil phases.

Formation
Macroemulsions are formed in a variety of ways. Since they are not thermodynamically stable, they do not form spontaneously and require energy input, usually in the form of stirring or shaking, to mechanically mix the otherwise immiscible phases. The resulting droplet size typically depends on how much energy was used to mix the phases, with higher-energy mixing methods resulting in smaller emulsion droplets. The energy required for this can be approximated using the following equation:

$E = \gamma \frac{3V}{R}$

where $E$ is the energy input, $\gamma$ is the interfacial tension between the two phases, $V$ is the volume of the dispersed phase, and $R$ is the average radius of the newly created emulsion droplets (for spherical droplets, $3V/R$ is the total interfacial area created).

This equation gives the energy requirement just to create the new interfacial area. In practice the energy cost is much higher, as most of the mechanical energy is simply converted to heat rather than mixing the phases. There are other ways to create emulsions between two liquids, such as adding one phase as droplets that are already the required size. An emulsifying agent of some sort is also generally required. This helps form emulsions by reducing the interfacial tension between the two phases, usually by acting as a surfactant and adsorbing to the interface. This works because most emulsifiers have a hydrophilic and a hydrophobic side, which means they can bond with both the oil-like phase and the water-like phase, thus reducing the number of water-oil molecular interactions at the surface.
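The interfacial-energy estimate above can be evaluated directly. Below is a minimal Python sketch, assuming the spherical-droplet relation E = 3γV/R as reconstructed above (the function name and example values are illustrative):

```python
def emulsification_energy(gamma: float, volume: float, radius: float) -> float:
    """Minimum interfacial energy (J) to disperse `volume` (m^3) of one phase
    into spherical droplets of `radius` (m): E = gamma * 3V / R,
    since 3V/R is the total droplet surface area created."""
    return gamma * 3.0 * volume / radius

# 100 mL of oil into 10-micrometre droplets at gamma = 30 mN/m:
E = emulsification_energy(gamma=0.030, volume=1e-4, radius=10e-6)
print(f"{E:.2f} J")  # ~0.9 J, far less than the mechanical energy actually spent
```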
Reducing the number of these interactions reduces the interfacial energy, thus causing the emulsions to become more stable. The concentration of surfactant needed is much higher than its critical micelle concentration (CMC). This forms a surfactant monolayer which orients itself to minimize the interfacial area per unit volume, yielding highly polydisperse spherical droplets in the range of 1 to 100 μm. The probability $P(R)$ of finding a droplet of a certain size can be estimated for inner-layer drops with a normal distribution:

$P(R) = \frac{1}{\sigma \sqrt{2\pi}} \exp\!\left(-\frac{(R - \bar{R})^2}{2\sigma^2}\right)$

where $R$ is the radius of the droplet, $\bar{R}$ is the mean radius, and $\sigma$ is the standard deviation.

Determining which phase is the continuous phase and which phase is the dispersed phase is done by using the Bancroft rule when the two phases have similar mole fractions. This rule states that the phase in which the emulsifier is the most soluble will be the continuous phase, even if it has a smaller volume fraction overall. For example, a mixture that is 60% water and 40% oil can form an emulsion where the water is the dispersed phase and the oil is the continuous phase if the emulsifier is more soluble in the oil. This is because the continuous phase is the phase that can coalesce the fastest upon mixing, which means it is the phase that can diffuse the emulsifying agent away from its own interfaces and into the bulk the fastest. This rule seems to be very well followed in the case of surfactant-stabilized emulsions, but not for Pickering emulsions. For mixtures with overwhelmingly large amounts of one phase, the largest phase will often become the continuous phase. However, highly concentrated emulsions (looking like "liquid-liquid foams") can also be obtained and stabilized.

Stability
Macroemulsions are, by definition, not thermodynamically stable. This means that from the moment they are created, they are always reverting to their original, immiscible and separate state. The reason why macroemulsions can exist, however, is that they are kinetically stable rather than thermodynamically stable. This means that, while they are continuously breaking down, it happens at such a slow pace that they are practically stable from a macroscopic perspective. The reasons why macroemulsions are stable are similar to the reasons why colloids can be stable. Based on DLVO theory, repulsive forces from the charged surfaces of the two phases repel each other enough to offset the attractive Hamaker (van der Waals) interactions. This creates a potential energy well at some distance, where the particles are in a local area of stability despite not being in direct contact and therefore not coalescing. However, since this well is a local rather than global minimum of potential energy, if any pair of particles happens to have enough thermal energy they can coalesce into an even more stable state, which is why all macroemulsions gradually coalesce over time. While it is energetically favorable for individual particles to coalesce due to the subsequent reduction of interfacial area, the adsorbed emulsifier prevents this: it is more favorable for the emulsifying agent to sit at an interface, so reducing the interfacial area requires expending energy to return the emulsifying agent to the bulk. The stability of macroemulsions depends on numerous environmental factors, including temperature, pH, and the ionic strength of the solvent.
Progression of macroemulsions
Flocculation
Flocculation occurs when the dispersed drops group together throughout the continuous phase but do not lose their individual identities. The driving force for flocculation is a weak van der Waals attraction between drops at large distances, which is known as the secondary energy minimum. An electrostatic repulsion between the surfaces prevents the drops from touching and merging, stabilizing the macroemulsion. The rate of diffusion-limited encounters sets the upper limit for the decrease in droplet concentration and can be represented by the following equation:

$-\frac{dc}{dt} = 8 \pi D R c^2$

where $D$ can be found using the Stokes–Einstein relation $D = \frac{k_B T}{6 \pi \eta R}$, $R$ is the droplet radius, and $c$ is the number of droplets per unit volume. This equation can be reduced to the following:

$-\frac{dc}{dt} = k_f c^2$

where $k_f = \frac{4 k_B T}{3 \eta}$ is the rate constant of flocculation. If the droplet radii are not all the same size and aggregation occurs, the flocculation rate constant for droplets of radii $R_1$ and $R_2$ becomes

$k_f = \frac{k_B T}{3 \eta} \left( R_1 + R_2 \right) \left( \frac{1}{R_1} + \frac{1}{R_2} \right).$

Creaming
Creaming is the accumulation of drops of the dispersed phase at the top of the container. This occurs as a result of buoyancy forces. The densities of the dispersed and continuous phases, as well as the viscosity of the continuous phase, greatly affect the creaming process. If the dispersed-phase liquid is less dense than the continuous-phase liquid, creaming is more likely to occur. Also, there is a greater chance of creaming at lower viscosities of the continuous-phase liquid. Once all of the dispersed drops are in close proximity to each other, it is easier for them to coalesce.

Coalescence
Coalescence is the merging of two dispersed drops into one. The surfaces of two drops must be in contact for coalescence to occur. This surface contact depends on both the van der Waals attraction and the surface repulsion forces between the two drops. Once in contact, the two surface films are able to fuse together, which is more likely to occur in areas where the surface film is weak. The liquid inside each drop is then in direct contact, and the two drops are able to merge into one.

Demulsification
Demulsification is the act of destabilizing an emulsion. Once all of the drops have coalesced, two continuous phases exist instead of one dispersed phase and one continuous phase. This process may be accelerated by adding a cosurfactant or salt, or by slowly stirring the liquid solution. Demulsification is beneficial for several macroemulsion applications.

Applications
Macroemulsions have nearly endless uses in scientific, industrial, and household applications. They are widely utilized today in automotive, beauty, cleaning and fabric care products, as well as in biotechnology and manufacturing techniques. Macroemulsions are often chosen over microemulsions for automotive and industrial applications because they are less expensive, easier to dispose of, and their tendency to demulsify more quickly is often desirable for lubricants. Soluble oil lubricants, usually containing fatty oil or mineral oil in water, are ideal for high-speed and low-pressure applications. They are often used for friction-reducing needs and metalworking. Many skin care products, sunscreens, and fabric softeners are made from silicone macroemulsions. Silicone is chosen because of its non-irritating and lubricating properties. Different combinations of macroemulsions and surfactants are the subject of a wide range of biological research, especially in the area of cell cultures.

References

Liquids
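The flocculation kinetics above give a quick estimate of how fast an unprotected emulsion coarsens. Below is a minimal Python sketch, assuming the rate constant $k_f = 4 k_B T / (3\eta)$ as given above (the function name and example concentration are illustrative):

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K

def rapid_flocculation_half_life(c0_per_m3: float, temp_k: float = 298.0,
                                 viscosity_pa_s: float = 1e-3) -> float:
    """Smoluchowski-type estimate: with k_f = 4 k_B T / (3 eta),
    -dc/dt = k_f c^2 gives a half-life t_1/2 = 1 / (k_f * c0)."""
    k_f = 4.0 * K_B * temp_k / (3.0 * viscosity_pa_s)  # m^3/s
    return 1.0 / (k_f * c0_per_m3)

# 1e16 droplets per m^3 in water at room temperature:
print(f"{rapid_flocculation_half_life(1e16):.1f} s")  # about 18 s
```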
Macroemulsion
Physics,Chemistry
2,170
43,469,488
https://en.wikipedia.org/wiki/Hapalopilus%20croceus
Hapalopilus croceus is a species of polypore fungus. It was originally described by Christiaan Hendrik Persoon in 1796 as Boletus croceus; Marinus Anton Donk transferred it to the genus Hapalopilus in 1933 to give it the name by which it is currently known. The species is found in Asia, Europe, Oceania, and North America, where it grows on the rotting wood of deciduous trees.

Description
It is a semicircular orange to yellow polypore mushroom which can grow up to 8 inches (20 cm) wide. The underside is a reddish-orange color. The spore print is white. It is rarely found in forests across Europe and North America.

Distribution and habitat
It is mainly found decomposing old oak and chestnut trees. This mushroom is considered rare, as other fungi usually decompose the trees first.

Edibility
The mushroom is inedible and potentially poisonous.

References

Polyporaceae
Fungi described in 1796
Fungi of Asia
Fungi of Europe
Fungi of Oceania
Fungi of North America
Taxa named by Christiaan Hendrik Persoon
Fungi without expected TNC conservation status
Fungus species
Hapalopilus croceus
Biology
225
68,396,786
https://en.wikipedia.org/wiki/3%20Body%20Problem%20%28TV%20series%29
3 Body Problem is an American science fiction television series created by David Benioff, D. B. Weiss and Alexander Woo and is the third streaming adaptation of the Chinese novel series Remembrance of Earth's Past written by former computer engineer Liu Cixin. Its name comes from the first volume, The Three-Body Problem, of the novel series and refers to a classical physics problem dealing with Newton's laws of motion and gravitation. Previous adaptations include the animated The Three-Body Problem in Minecraft (2014–2020) and the live-action Three-Body (2023), both of which were exclusively in Mandarin. The series, set primarily in the United Kingdom and China, follows a diverse cast of scientists who all come into contact with an extraterrestrial civilization, leading to various threats and humanity-wide changes. It is Benioff and Weiss' first television project since the conclusion of their series Game of Thrones (2011–2019) and modifies part of the original works' Chinese setting to include foreign characters and locations. The first season was released on Netflix on March 21, 2024, with eight episodes, and received positive reviews. In May 2024, the series was renewed for a second season; a third season was confirmed later the same month. The series received six Primetime Emmy Award nominations, including Outstanding Drama Series. Premise Ye Wenjie, a Chinese astrophysicist, runs into trouble with the authorities after witnessing her father's death during a struggle session in the Cultural Revolution. Due to her scientific background, she is sent to a secret military base racing against other countries to make first contact with aliens during the Cold War. While there, she makes a choice that impacts humanity's future. In present-day London, a series of mysterious suicides and science-averse phenomena lead a government investigator and a group of friends called the "Oxford Five" into a mystery of extraterrestrial origin. Cast and characters Main Jovan Adepo as Dr. Saul Durand, a research assistant and member of the "Oxford Five" John Bradley as Jack Rooney, an entrepreneur and member of the "Oxford Five" Rosalind Chao and Zine Tseng as Dr. Ye Wenjie, an astrophysicist whose decisions as a young scientist threaten the survival of humanity. Chao portrays the character in the 2020s, while Tseng portrays a younger version in scenes set in the past. Liam Cunningham as Thomas Wade, chief of the Secret Intelligence Service (MI6), who also heads the global Strategic Intelligence Agency Eiza González as Dr. Augustina Salazar, a nanotechnologist and member of the "Oxford Five" Jess Hong as Dr. Jin Cheng, a theoretical physicist and member of the "Oxford Five" Marlo Kelly as Tatiana Haas, a fanatic member of a group of humans working for the San-Ti, an extraterrestrial civilization Alex Sharp as Dr. 
Will Downing, a sixth form physics teacher and member of the "Oxford Five" who is diagnosed with stage 4 cancer early in the series Sea Shimooka as "Sophon", a sophisticated intelligence platform used by the San-Ti to communicate with humanity Saamer Usmani as Prithviraj Varma, a Royal Navy officer and Cheng's boyfriend Benedict Wong as Clarence "Da" Shi, an MI6 officer working for Wade and the Strategic Intelligence Agency Jonathan Pryce as Mike Evans, a friend of Ye Wenjie since the 1960s who shares her view that humanity should be eliminated, and has become a prominent member of the small group of humans working for the San-Ti Recurring Vedette Lim as Vera Ye Eve Ridley as the Follower Ben Schnetzer as young Mike Evans John Dagleish as Felix Gerard Monaco as Collins Adrian Edmondson as Denys Porlock CCH Pounder as Lillian Joseph, the Secretary-General of the United Nations Lan Xiya as Tang Hongjing, a Red Guard Josh Brener as Sebastian Kent Guest Mark Gatiss as Isaac Newton when shown in the alien San-Ti's advanced virtual reality system Reece Shearsmith as Alan Turing Conleth Hill as Pope Gregory XIII Phil Wang as Aristotle Naoko Mori as Marie Curie Kevin Eldon as Thomas More Jason Forbes as Omar Khayyam Jim Howick as Harry Nitin Ganatra as Ranjit Varma Dustin Demri-Burns as Ted Tom Wu as Count of The West Guy Burnet as Rufus Jake Tapper as himself Bobak Ferdowsi as Flight director Stacy Abalogun as Thelma Episodes Production Development It was announced in September 2020 that David Benioff and D.B. Weiss were developing a television adaptation of the novel at Netflix, with Alexander Woo co-writing alongside them. Benioff and Weiss have said they are prepared to adapt the whole trilogy, which they expect to require three or four seasons. On December 25, 2020, Lin Qi, founder of Yoozoo Games and an executive producer on 3 Body Problem, died after ingesting a poisoned beverage, with four others becoming ill. An executive at Yoozoo Games, Xu Yao, was sentenced to death for the murder in March 2024, the day after the series premiered on Netflix. Lin had purchased the rights to the book franchise and hired Xu Yao, a lawyer, in 2017, to manage the rights to Liu's novels; the poisonings were an attempt by Xu to take control of the subsidiary company that owned the rights to the series. On May 15, 2024, Netflix renewed the series for a second season. On May 31, 2024, Benioff, Weiss and Woo confirmed that the series was renewed for seasons 2–3 during the official Television Academy 3 Body Problem panel at the Netflix FYSEE space at Sunset Las Palmas in Los Angeles. According to Benioff, he had already worked with UK-based staff for Game of Thrones and wanted to use them in this production, and Benioff stated that the need to coordinate among those staff influenced the choice of the UK as a setting in 3 Body Problem. Casting In August 2021, Eiza González entered negotiations to join the cast. The same month, Derek Tsang was hired to direct the pilot episode. González would be confirmed as joining the cast by that October, with additional castings including Benedict Wong, Tsai Chin, John Bradley, Liam Cunningham and Jovan Adepo. In June 2022, Jonathan Pryce, Rosalind Chao, Ben Schnetzer and Eve Ridley were added to the cast. Filming Production on the series began on November 8, 2021, with principal photography taking place in the United Kingdom. Filming took place in London over a nine-month shoot between October 2021 and mid-2022. At Netflix's Tudum 2022 event, Alexander Woo, David Benioff, and D.B. 
Weiss announced that the production of the first season was completed. Netflix reportedly spent $20 million per episode, for a total budget of $160 million for the first season. Release 3 Body Problem was released on March 21, 2024. A companion podcast, hosted by Jason Concepcion and Maggie Aderin-Pocock, was also announced alongside it. On January 10, 2024, the SXSW Film & TV Festival announced 3 Body Problem as its Opening Night TV Premiere. Reception Critical response The review aggregator Rotten Tomatoes reported a 79% approval rating with an average rating of 6.9/10, based on 112 critic reviews. The website's critics' consensus reads, "Tackling its ambitious source material with impressive gusto, 3 Body Problem's first season proves a solid start that should leave sci-fi fans eager for more." Metacritic assigned it a score of 70 out of 100, based on 41 critics, indicating "generally favorable reviews". Cindy White of The A.V. Club gave the series a B+ and said, "It may wear the garb of prestige television, but underneath it's just a nerdy science-fiction show, with a healthy emphasis on the science." Reviewing the series for USA Today, Kelly Lawler gave a rating of 3/4 and wrote, "Benioff, Weiss and Woo took a book trilogy known more for its thought experiments in philosophy and theoretical physics than its plot and made a solid bit of hard sci-fi that is (mostly) accessible to more casual fans of the genre." Eric Deggans of NPR commented, "As the characters in 3 Body Problem lurch toward answers, we all get to bask in an ambitious narrative fueling an ultimately impressive tale. Just remember to be patient as the series sets the stage early on." Wenlei Ma of The Nightly described the series as "Ambitious, towering and crammed with big ideas about intellectual curiosity, exploration and our place in the universe while still managing to tell intimate stories about human relationships." Inkoo Kang of The New Yorker gave a positive review, writing "The Netflix adaptation of Liu Cixin's trilogy mixes heady theoretical questions with genuine spectacle and heart." Ben Travers of IndieWire gave a critical review, writing that "3 Body Problem is a sprawling drag, at turns disorienting in its use of inconsistent CGI to convey the story's momentousness and aggravating in its approach to character development and existential quandaries. The plot is easy enough to track, but the relief of realizing you can keep up with this motley crew of scientist pals — as they try to figure out why so many of their peers are dying off — is short-lived." Charles Pulliam-Moore of The Verge gave a mixed review, writing that "though David Benioff, D. B. Weiss, and Alexander Woo's 3 Body Problem is impressive, it really feels like just an introduction to Cixin Liu's deeper ideas." He opined that future seasons could explore the world of Liu's later novels. Response in China 3 Body Problem received a mixed response in China. While Netflix is blocked there, viewers can use VPNs to circumvent geo-restrictions, or view pirated versions. According to The Guardian, the 3 Body Problem hashtag had been read 2.3 billion times and discussed 1.424 million times on the Chinese social media platform Weibo. Viewers criticised the racebending and gender-swapping of several protagonists, the perceived cultural appropriation, and the "dumbing-down" of concepts to appeal to non-Chinese audiences, and compared the series unfavorably to the 2023 Chinese television adaptation, which received much critical acclaim there. 
The Chinese film review website Mtszimu praised the Netflix adaptation as "not only a new interpretation of Liu Cixin's original work but also an important contribution to global science-fiction literature". China Military Online, the official newspaper of the People's Liberation Army, criticized the series for retaining Chinese villains while doing away with portrayals of the country's modern development. In response to social media criticism about racebending, cast member Benedict Wong said that Liu had given the showrunners his blessing to move the story towards a global one. Wong also cited the presence of several Asian cast members, including himself, Jess Hong, Rosalind Chao and Zine Tseng. Hong and Chao also said that the Netflix adaptation preserved the novel's depiction of the Cultural Revolution and its legacy. Hong said that the adaptation sought to "globalize a story that was very heavily Eastern-focused into a Western perspective, a global perspective. Because, we're all from different countries, for the actors, you get to pull in all of these brilliant storylines into one emotional core, which is quite brilliant." Aja Romano of Vox suggested that the media exaggerated the nationalistic outrage against the Netflix show on Chinese social media. They found the Chinese audience "praising the show and criticizing it in equal parts" and sharing critical commentary similar to that from Western audiences, underscoring that criticism of the show was universal. The original author, Liu Cixin, commented on the series, saying, "I enjoyed the part of the series where many characters were added, and their relationships were explored. However, it was strange how all these characters seemed to know each other already. Fighting against the alien invasion should be a collective effort of all humanity, but instead, it was depicted as if a group of classmates were drafted to fight against the aliens." Depiction and interpretation 3 Body Problem contains a realistic depiction of the struggle session during the Chinese Cultural Revolution, which was met with divided opinions in China and the United States. In an interview, David Benioff told The Hollywood Reporter the show "isn't a commentary on cancel culture", but agreed the fiction has parallels with the contemporary sociopolitical landscape. Derek Tsang, the director of the first two episodes, was recruited due to his Chinese background to ensure the authenticity of the Cultural Revolution period. Tsang explained that the goal of the episode was to convince the audience to empathize with the protagonist, Ye Wenjie, and understand her motivation and position in the story. Joel Stein of The Hollywood Reporter noted the Cultural Revolution scene sparked split interpretations from American liberal and conservative critics, with the conservative side focusing on its warning against wokeness and the "far-left agenda", including gender theories, whereas liberal viewers saw the scenes as a warning against conservative-led populism and faith-based agendas. Accolades See also Three-body problem – a problem of mathematics and physics, as well as classical and quantum mechanics, regarding any three objects and the gravitational effects they may have on one another. 
References External links 2020s American drama television series 2020s American mystery television series 2020s American science fiction television series 2024 American television series debuts Television about alien invasions American fantasy drama television series American thriller television series American English-language television shows Cultural depictions of Alan Turing Cultural depictions of Isaac Newton Cultural depictions of Marie Curie Cultural depictions of Omar Khayyam Cultural depictions of Thomas More Netflix television dramas Television series about the Cultural Revolution Television series created by D. B. Weiss Television series created by David Benioff Television series set in the 1960s Television series set in the 1970s Television series set in 1985 Television series set in 2024 Television shows set in China Television shows set in Oxford Television shows set in Panama Television shows based on Chinese novels Television shows filmed in China Television shows filmed in the United Kingdom Television shows filmed at Shepperton Studios Adaptations of works by Liu Cixin
3 Body Problem (TV series)
Astronomy
2,934
677,829
https://en.wikipedia.org/wiki/Global%20digital%20divide
The global digital divide describes global disparities, primarily between developed and developing countries, with regard to access to computing and information resources such as the Internet and the opportunities derived from such access. The Internet is expanding very quickly, and not all countries—especially developing countries—can keep up with the constant changes. The term "digital divide" does not necessarily mean that someone does not have technology; it could mean that there is simply a difference in technology. These differences can refer to, for example, high-quality computers, fast Internet, technical assistance, or telephone services. Statistics There is a large inequality worldwide in terms of the distribution of installed telecommunication bandwidth. In 2014 only three countries (China, US, Japan) hosted 50% of the globally installed bandwidth potential. This concentration is not new, as historically only ten countries have hosted 70–75% of the global telecommunication capacity. The U.S. lost its global leadership in terms of installed bandwidth in 2011, being replaced by China, which in 2014 hosted more than twice as much national bandwidth potential (29% versus 13% of the global total). Versus the digital divide The global digital divide is a special case of the digital divide; the focus is set on the fact that "Internet has developed unevenly throughout the world", causing some countries to fall behind in technology, education, labor, democracy, and tourism. The concept of the digital divide was originally popularized regarding the disparity in Internet access between rural and urban areas of the United States of America; the global digital divide mirrors this disparity on an international scale. The global digital divide also contributes to the inequality of access to goods and services available through technology. Computers and the Internet provide users with improved education, which can lead to higher wages; the people living in nations with limited access are therefore disadvantaged. This global divide is often characterized as falling along what is sometimes called the North–South divide of "northern" wealthier nations and "southern" poorer ones. Obstacles to a solution Some people argue that basic necessities, such as an ample food supply and quality health care, need to be considered before achieving digital inclusion. Minimizing the global digital divide requires considering and addressing the following types of access: Physical access Involves "the distribution of ICT devices per capita…and land lines per thousands". Individuals need to obtain access to computers, landlines, and networks in order to access the Internet. This access barrier is also addressed in Article 21 of the United Nations Convention on the Rights of Persons with Disabilities. Financial access The cost of ICT devices, traffic, applications, technician and educator training, software, maintenance, and infrastructures requires ongoing financial means. Financial access and "the levels of household income play a significant role in widening the gap". Socio-demographic access Empirical tests have identified that several socio-demographic characteristics foster or limit ICT access and usage. Among different countries, educational levels and income are the most powerful explanatory variables, with age being a third one. 
While a global gender gap in access to and usage of ICTs exists, empirical evidence shows that this is due to unfavorable conditions concerning employment, education and income, and not to technophobia or lower ability. In the contexts under study, women with the prerequisites for access and usage turned out to be more active users of digital tools than men. In the US, for example, the figures for 2018 show 89% of men and 88% of women use the Internet. Cognitive access In order to use computer technology, a certain level of information literacy is needed. Further challenges include information overload and the ability to find and use reliable information. Design access Computers need to be accessible to individuals with different learning and physical abilities, including complying with Section 508 of the Rehabilitation Act as amended by the Workforce Investment Act of 1998 in the United States. Institutional access In illustrating institutional access, Wilson states "the numbers of users are greatly affected by whether access is offered only through individual homes or whether it is offered through schools, community centers, religious institutions, cybercafés, or post offices, especially in poor countries where computer access at work or home is highly limited". Political access Guillen & Suarez argue that "democratic political regimes enable faster growth of the Internet than authoritarian or totalitarian regimes." The Internet is considered a form of e-democracy, and attempting to control what citizens can or cannot view is in contradiction to this. Recently, situations in Iran and China have denied people the ability to access certain websites and disseminate information. Iran has prohibited the use of high-speed Internet in the country and has removed many satellite dishes in order to prevent the influence of Western culture, such as music and television. Cultural access Many experts claim that bridging the digital divide is not sufficient and that the content needs to be conveyed in a language and imagery that can be read across different cultural lines. A 2013 study conducted by Pew Research Center noted that participants taking the survey in Spanish were nearly twice as likely not to use the internet. Examples In the early 21st century, residents of developed countries enjoy many Internet services which are not yet widely available in developing countries, including: Mobile phones and small electronic communication devices; E-communities and social-networking; Fast broadband Internet connections, enabling advanced Internet applications; Affordable and widespread Internet access, either through personal computers at home or work, through public terminals in public libraries and Internet cafes, and through wireless access points; E-commerce enabled by efficient electronic payment networks like credit cards and reliable shipping services; Virtual globes featuring street maps searchable down to individual street addresses and detailed satellite and aerial photography; Online research systems which enable users to peruse newspaper and magazine articles that may be centuries old, without having to leave home; Electronic readers such as Kindle, Sony Reader, Samsung Papyrus and Iliad by iRex Technologies; Price engines which help consumers find the best possible online prices and similar services which find the best possible prices at local retailers; Electronic services delivery of government services, such as the ability to pay taxes, fees, and fines online. 
Further civic engagement through e-government and other sources, such as finding information about candidates regarding political situations. Proposed remedies There are four specific arguments why it is important to "bridge the gap": Economic equality – For example, the telephone is often seen as one of the most important components, because having access to a working telephone can lead to higher safety. If there were an emergency, one could easily call for help using a nearby phone. In another example, many work-related tasks are online, and people without access to the Internet may not be able to complete work up to company standards. The Internet is regarded by some as a basic component of civic life that developed countries ought to guarantee for their citizens. Additionally, welfare services, for example, are sometimes offered via the Internet. Social mobility – Computer and Internet use is regarded as being very important to development and success. However, some children are not getting as much technical education as others, because lower socioeconomic areas cannot afford to provide schools with computer facilities. For this reason, some children are being left behind and not receiving the same chance as others to be successful. Democracy – Some people believe that eliminating the digital divide would help countries become healthier democracies. They argue that communities would become much more involved in events such as elections or decision making. Economic growth – It is believed that less-developed nations could gain quick access to economic growth if the information infrastructure were to be developed and well used. By improving the latest technologies, certain countries and industries can gain a competitive advantage. While these four arguments are meant to lead to a solution to the digital divide, there are a couple of other components that need to be considered. The first one is rural living versus suburban living. Rural areas used to have very minimal access to the Internet, for example. However, nowadays, power lines and satellites are used to increase the availability in these areas. Another component to keep in mind is disabilities. Some people may have the highest quality technologies, but a disability they have may keep them from using these technologies to their fullest extent. Using previous studies (Gamos, 2003; Nsengiyuma & Stork, 2005; Harwit, 2004 as cited in James), James asserts that in developing countries, "internet use has taken place overwhelmingly among the upper-income, educated, and urban segments", largely due to the high literacy rates of this sector of the population. As such, James suggests that part of the solution requires that developing countries first build up the literacy/language skills, computer literacy, and technical competence that low-income and rural populations need in order to make use of ICT. It has also been suggested that there is a correlation between democratic regimes and the growth of the Internet. One hypothesis by Guillen is, "The more democratic the polity, the greater the Internet use...The government can try to control the Internet by monopolizing control", and Norris et al. also contend, "If there is less government control of it, the Internet flourishes, and it is associated with greater democracy and civil liberties." 
From an economic perspective, Pick and Azari state that "in developing nations…foreign direct investment (FDI), primary education, educational investment, access to education, and government prioritization of ICT as all-important". Specific remedies proposed by the study include: "invest in stimulating, attracting, and growing creative technical and scientific workforce; increase the access to education and digital literacy; reduce the gender divide and empower women to participate in the ICT workforce; emphasize investing in intensive Research and Development for selected metropolitan areas and regions within nations". There are projects worldwide that have implemented, to various degrees, the remedies outlined above. Many such projects have taken the form of Information Communications Technology Centers (ICT centers). Rahman explains that "the main role of ICT intermediaries is defined as an organization providing effective support to local communities in the use and adaptation of technology. Most commonly, an ICT intermediary will be a specialized organization from outside the community, such as a non-governmental organization, local government, or international donor. On the other hand, a social intermediary is defined as a local institution from within the community, such as a community-based organization." Other proposed remedies that the Internet promises for developing countries are the provision of efficient communications within and among developing countries, so that citizens worldwide can effectively help each other to solve their problems. Grameen Banks and Kiva loans are two microcredit systems designed to help citizens worldwide to contribute online towards entrepreneurship in developing communities. Economic opportunities range from entrepreneurs who can afford the hardware and broadband access required to maintain Internet cafés to agribusinesses having control over the seeds they plant. At the Massachusetts Institute of Technology, the IMARA organization (from the Swahili word for "power") sponsors a variety of outreach programs which bridge the global digital divide. Its aim is to find and implement long-term, sustainable remedies which will increase the availability of educational technology and resources to domestic and international communities. These projects are run under the aegis of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and staffed by MIT volunteers who give training, and install and donate computer setups in greater Boston, Massachusetts, Kenya, Indian reservations in the American Southwest such as the Navajo Nation, the Middle East, and the Fiji Islands. The CommuniTech project strives to empower underserved communities through sustainable technology and education. According to Dominik Hartmann of MIT's Media Lab, interdisciplinary approaches are needed to bridge the global digital divide. Building on the premise that any effective solution must be decentralized, allowing the local communities in developing nations to generate their own content, one scholar has posited that social media—like Facebook, YouTube, and Twitter—may be useful tools in closing the divide. As Amir Hatem Ali suggests, "the popularity and generative nature of social media empower individuals to combat some of the main obstacles to bridging the digital divide". Facebook's statistics reinforce this claim. According to Facebook, more than seventy-five percent of its users reside outside of the US. Moreover, more than seventy languages are presented on its website. 
The high number of international users is due to many of the qualities of Facebook and other social media. Among them are the ability to offer a means of interacting with others, user-friendly features, and the fact that most sites are available at no cost. The limitation of social media, however, is that it remains accessible only to those who already have physical access. Nevertheless, with its ability to encourage digital inclusion, social media can be used as a tool to bridge the global digital divide. Some cities in the world have started programs to bridge the digital divide for their residents, school children, students, parents and the elderly. One such program, founded in 1996, was sponsored by the city of Boston and called the Boston Digital Bridge Foundation. It especially concentrates on school children and their parents, helping to make both equally and similarly knowledgeable about computers, using application programs, and navigating the Internet. Free Basics Free Basics is a partnership between the social networking company Facebook and six companies (Samsung, Ericsson, MediaTek, Opera Software, Nokia and Qualcomm) that plans to bring affordable access to selected Internet services to less developed countries by increasing efficiency and facilitating the development of new business models around the provision of Internet access. In the whitepaper released by Facebook's founder and CEO Mark Zuckerberg, connectivity is asserted to be a "human right", and Internet.org was created to improve Internet access for people around the world. "Free Basics provides people with access to useful services on their mobile phones in markets where internet access may be less affordable. The websites are available for free without data charges, and include content about news, employment, health, education and local information etc. By introducing people to the benefits of the internet through these websites, we hope to bring more people online and help improve their lives." However, Free Basics has also been accused of violating net neutrality for limiting access to handpicked services. Despite a wide deployment in numerous countries, it has been met with heavy resistance, notably in India, where the Telecom Regulatory Authority of India eventually banned it in 2016. Satellite constellations Several projects to bring internet to the entire world with a satellite constellation have been devised in the last decade, one of these being Starlink by Elon Musk's company SpaceX. Unlike Free Basics, it would provide people with full internet access and would not be limited to a few selected services. In the same week Starlink was announced, serial entrepreneur Richard Branson announced his own project OneWeb, a similar constellation with approximately 700 satellites that had already procured communication frequency licenses for their broadcast spectrum and could possibly be operational by 2020. The biggest hurdle to these projects is the astronomical financial and logistical cost of launching so many satellites. After the failure of previous satellite-to-consumer space ventures, satellite industry consultant Roger Rusch said "It's highly unlikely that you can make a successful business out of this." Musk has publicly acknowledged this business reality, and indicated in mid-2015 that, while endeavoring to develop this technically complicated space-based communication system, he wants to avoid overextending the company, and stated that they are being measured in the pace of development. 
As of 2023, Starlink is being actively deployed, with the goal of clearing licensure hurdles in every country open to its services. One Laptop per Child One Laptop per Child (OLPC) was an attempt by an American non-profit to narrow the digital divide. This organization, founded in 2005, provided inexpensively produced "XO" laptops (dubbed the "$100 laptop", though actual production costs varied) to children residing in poor and isolated regions within developing countries. Each laptop belonged to an individual child and provided a gateway to digital learning and Internet access. The XO laptops were designed to withstand more abuse than higher-end machines, and they contained features suited to the unique conditions of remote villages. Each laptop was constructed to use as little power as possible, had a sunlight-readable screen, and was capable of automatically networking with other XO laptops in order to access the Internet—as many as 500 machines could share a single point of access. The project became defunct in 2014. World Summit on the Information Society Several of the 67 principles adopted at the World Summit on the Information Society convened by the United Nations in Geneva in 2003 directly address the digital divide. See also Starlink Project Kuiper References Bibliography Azam, M. (2007). "Working together toward the inclusive digital world". Digital Opportunity Forum. Unpublished manuscript. Retrieved July 17, 2009, from http://www.dof.or.kr/pdf/Bangladesh%5BPPT%5D.pdf Borland, J. (April 13, 1998). "Move Over Megamalls, Cyberspace Is the Great Retailing Equalizer". Knight Ridder/Tribune Business News. Brynjolfsson, Erik and Michael D. Smith (2000). "The great equalizer? Consumer choice behavior at Internet shopbots". Sloan Working Paper 4208–01. eBusiness@MIT Working Paper 137. July 2000. Sloan School of Management, Massachusetts Institute of Technology, Cambridge, Massachusetts. James, J. (2004). Information Technology and Development: A new paradigm for delivering the Internet to rural areas in developing countries. New York: Routledge. (print). (e-book). Southwell, B. G. (2013). Social networks and popular understanding of science and health: sharing disparities. Baltimore: Johns Hopkins University Press. (book). World Summit on the Information Society (WSIS), 2005. "What's the state of ICT access around the world?" Retrieved July 17, 2009. World Summit on the Information Society (WSIS), 2008. "ICTs in Africa: Digital Divide to Digital Opportunity". Retrieved July 17, 2009. External links E-inclusion, an initiative of the European Commission to ensure that "no one is left behind" in enjoying the benefits of Information and Communication Technologies (ICT). eEurope – An information society for all, a political initiative of the European Union. Digital Inclusion Network, an online exchange on topics related to the digital divide and digital inclusion, E-Democracy.org. "The Digital Divide Within Education Caused by the Internet", Benjamin Todd, Acadia University, Nova Scotia, Canada, Undergraduate Research Journal for the Human Sciences, Volume 11 (2012). Statistics from the International Telecommunication Union (ITU) Mobile Phones and Access is an animated video produced by TechChange and USAID which explores issues of access related to global mobile phone usage. Digital media Technology development Economic geography Cultural globalization Global inequality Rural economics Social inequality
Global digital divide
Technology
3,898
6,026,902
https://en.wikipedia.org/wiki/Waste%20treatment%20technologies
There are a number of different waste treatment technologies for the disposal, recycling, storage, or energy recovery from different waste types. Each type has its own associated methods of waste management. Landfill Municipal solid waste consists mainly of household and commercial waste which is disposed of by or on behalf of a local authority. Landfill waste is categorized as hazardous, non-hazardous, or inert. For a landfill design to be considered, it must address the following requirements: final landform profile, site capacity, settlement, waste density, materials requirements, and drainage. Incineration The advantages of incineration are the reduction of volume and mass by burning, reduction to a percentage of sterile ash, use as a source of energy, increase of income by selling bottom ash, and environmental acceptability. The disadvantages of incineration are the following: higher cost and a longer payback period due to high capital investment; since incineration is designed on the basis of a certain calorific value, removing paper and plastics for recycling lowers the overall calorific value, which may affect the incinerator's performance; and the process still produces a solid waste residue at the end which requires further treatment and management. Emissions from incinerators consist of particulates, heavy metals, pollutant gases, odor, dust and litter. Due to incomplete combustion, products such as dioxins and furans are formed. Bioremediation Human sewage and process waste from the manufacturing industries are the two major sources of waste water. In Thailand, the total volume of wastewater from industries is much greater than that of domestic sewage. As a result, an effective treatment method is needed. Microbial remediation of xenobiotics has been shown to be an effective and low-cost technology, but it still has several limitations. Consequently, genetic engineering approaches are used to create new strains of microbes (genetically engineered microorganisms, GEMs) which have better catabolic potential than the wild-type species for bioremediation. There are four major approaches to GEM development for bioremediation applications: modification of enzyme specificity and affinity; pathway construction and regulation; bioprocess development, monitoring and control; and bio-affinity bioreceptor sensor applications for chemical sensing, toxicity reduction and end-point analysis. These allow the extensive use of genetically engineered microorganisms. In the far future, genetically engineered microorganisms could possibly be used to control greenhouse gases, convert waste to value-added products, and reduce and capture carbon dioxide from the atmosphere (carbon sequestration), but much research is still required to realise this potential. There is a concern regarding the use of genetically engineered microbes for the remediation of pollutants: once the genetically engineered microorganisms have been added, they may disperse uncontrollably and be hard to remove. Pyrolysis Pyrolysis is a thermochemical conversion process in which the feed material is converted into char, oil and combustible gas in an inert atmosphere (complete absence of an oxidizing agent). References See also List of radioactive waste treatment technologies List of solid waste treatment technologies List of wastewater treatment technologies Waste-to-energy Waste management Waste treatment technology
Waste treatment technologies
Chemistry,Engineering
673
40,862,848
https://en.wikipedia.org/wiki/Linear%20equation%20over%20a%20ring
In algebra, linear equations and systems of linear equations over a field are widely studied. "Over a field" means that the coefficients of the equations and the solutions that one is looking for belong to a given field, commonly the real or the complex numbers. This article is devoted to the same problems where "field" is replaced by "commutative ring" or, typically, "Noetherian integral domain". In the case of a single equation, the problem splits into two parts. First, the ideal membership problem, which consists, given a non-homogeneous equation $a_1 x_1 + \cdots + a_k x_k = b$ with the $a_i$ and $b$ in a given ring $R$, of deciding whether it has a solution with the $x_i$ in $R$, and, if any, providing one. This amounts to deciding whether $b$ belongs to the ideal generated by the $a_i$. The simplest instance of this problem is, for $k = 1$ and $b = 1$, deciding whether $a_1$ is a unit in $R$. The syzygy problem consists, given elements $a_1, \ldots, a_k$ in $R$, of providing a system of generators of the module of the syzygies of $(a_1, \ldots, a_k)$, that is, a system of generators of the submodule of those elements $(x_1, \ldots, x_k)$ in $R^k$ that are solutions of the homogeneous equation $a_1 x_1 + \cdots + a_k x_k = 0$. The simplest case, when $k = 1$, amounts to finding a system of generators of the annihilator of $a_1$. Given a solution of the ideal membership problem, one obtains all the solutions by adding to it the elements of the module of syzygies. In other words, all the solutions are provided by the solution of these two partial problems. In the case of several equations, the same decomposition into subproblems occurs. The first problem becomes the submodule membership problem. The second one is also called the syzygy problem. A ring such that there are algorithms for the arithmetic operations (addition, subtraction, multiplication) and for the above problems may be called a computable ring, or effective ring. One may also say that linear algebra on the ring is effective. The article considers the main rings for which linear algebra is effective. Generalities To be able to solve the syzygy problem, it is necessary that the module of syzygies is finitely generated, because it is impossible to output an infinite list. Therefore, the problems considered here make sense only for a Noetherian ring, or at least a coherent ring. In fact, this article is restricted to Noetherian integral domains because of the following result. Given a Noetherian integral domain, if there are algorithms to solve the ideal membership problem and the syzygies problem for a single equation, then one may deduce from them algorithms for the similar problems concerning systems of equations. This theorem is useful to prove the existence of algorithms. However, in practice, the algorithms for the systems are designed directly. A field is an effective ring as soon as one has algorithms for addition, subtraction, multiplication, and computation of multiplicative inverses. In fact, solving the submodule membership problem is what is commonly called solving the system, and solving the syzygy problem is the computation of the null space of the matrix of a system of linear equations. The basic algorithm for both problems is Gaussian elimination. Properties of effective rings Let $R$ be an effective commutative ring. There is an algorithm for testing if an element $a$ is a zero divisor: this amounts to solving the linear equation $ax = 0$. There is an algorithm for testing if an element $a$ is a unit, and if it is, computing its inverse: this amounts to solving the linear equation $ax = 1$. 
Given an ideal $I$ generated by $a_1, \ldots, a_k$, there is an algorithm for testing if two elements of $R$ have the same image in $R/I$: testing the equality of the images of $a$ and $b$ amounts to solving the equation $a = b + a_1 z_1 + \cdots + a_k z_k$; linear algebra is effective over $R/I$: for solving a linear system over $R/I$, it suffices to write it over $R$ and to add to one side of the $i$th equation the terms $a_1 z_{i,1} + \cdots + a_k z_{i,k}$ (for $i = 1, 2, \ldots$), where the $z_{i,j}$ are new unknowns. Linear algebra is effective on the polynomial ring $K[x_1, \ldots, x_n]$ if and only if one has an algorithm that computes an upper bound of the degree of the polynomials that may occur when solving linear systems of equations: if one has solving algorithms, their outputs give the degrees. Conversely, if one knows an upper bound of the degrees occurring in a solution, one may write the unknown polynomials as polynomials with unknown coefficients. Then, as two polynomials are equal if and only if their coefficients are equal, the equations of the problem become linear equations in the coefficients, which can be solved over an effective ring. Over the integers or a principal ideal domain There are algorithms to solve all the problems addressed in this article over the integers. In other words, linear algebra is effective over the integers; see Linear Diophantine system for details. More generally, linear algebra is effective on a principal ideal domain if there are algorithms for addition, subtraction and multiplication, and for: solving equations of the form $ax = b$, that is, testing whether $a$ is a divisor of $b$, and, if this is the case, computing the quotient $b/a$; computing Bézout's identity, that is, given $a$ and $b$, computing $s$ and $t$ such that $sa + tb$ is a greatest common divisor of $a$ and $b$. It is useful to extend to the general case the notion of a unimodular matrix by calling unimodular a square matrix whose determinant is a unit. This means that the determinant is invertible and implies that the unimodular matrices are exactly the invertible matrices such that all entries of the inverse matrix belong to the domain. The above two algorithms imply that, given $a$ and $b$ in the principal ideal domain, there is an algorithm computing a unimodular matrix $M$ such that $M \begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} \gcd(a,b) \\ 0 \end{pmatrix}$. (This algorithm is obtained by taking for the first row of $M$ the coefficients $s$ and $t$ of Bézout's identity, and for the second row the quotients $-b/g$ and $a/g$ of $-b$ and $a$ by $g = \gcd(a, b)$; this choice implies that the determinant of the square matrix is $1$.) Having such an algorithm, the Smith normal form of a matrix may be computed exactly as in the integer case, and this suffices to apply the method described in Linear Diophantine system for getting an algorithm for solving every linear system. The main case where this is commonly used is the case of linear systems over the ring of univariate polynomials over a field. In this case, the extended Euclidean algorithm may be used for computing the above unimodular matrix; see the sketch below. Over polynomial rings over a field Linear algebra is effective on a polynomial ring $K[x_1, \ldots, x_n]$ over a field $K$. This was first proved in 1926 by Grete Hermann. The algorithms resulting from Hermann's results are only of historical interest, as their computational complexity is too high to allow effective computer computation. Proofs that linear algebra is effective on polynomial rings, as well as computer implementations, are presently all based on Gröbner basis theory. References External links Commutative algebra Linear algebra Equations
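As a concrete illustration of the construction just described, the Python sketch below computes Bézout's identity with the extended Euclidean algorithm and assembles the 2x2 unimodular matrix that sends (a, b) to (gcd(a, b), 0). The function names are mine, and the integers stand in for any effective principal ideal domain; over K[x] one would substitute the polynomial extended Euclidean algorithm.

def extended_gcd(a, b):
    """Extended Euclid: return (g, s, t) with s*a + t*b == g == gcd(a, b)."""
    old_r, r = a, b
    old_s, s = 1, 0
    old_t, t = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
        old_t, t = t, old_t - q * t
    return old_r, old_s, old_t

def unimodular_reduction(a, b):
    """Return (g, M) with M unimodular (det M == 1) and M @ (a, b)^T == (g, 0)^T."""
    g, s, t = extended_gcd(a, b)
    # First row: the Bezout coefficients; second row: -b/g and a/g.
    return g, [[s, t], [-b // g, a // g]]

g, M = unimodular_reduction(12, 42)                 # gcd(12, 42) == 6
assert M[0][0] * 12 + M[0][1] * 42 == g             # first row yields the gcd
assert M[1][0] * 12 + M[1][1] * 42 == 0             # second row yields 0
assert M[0][0] * M[1][1] - M[0][1] * M[1][0] == 1   # determinant is a unit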
Linear equation over a ring
Mathematics
1,360
35,805,453
https://en.wikipedia.org/wiki/%CE%9415N
{{DISPLAYTITLE:δ15N}} In geochemistry, hydrology, paleoclimatology and paleoceanography, δ15N (pronounced "delta fifteen n") or delta-N-15 is a measure of the ratio of the two stable isotopes of nitrogen, 15N:14N. Formulas Two very similar expressions for δ15N are in wide use in hydrology. Both have the form $\delta^{15}\mathrm{N} = \left( \frac{s}{a} - 1 \right) \times 1000$ ‰ (‰ = permil or parts per thousand), where s and a are the relative abundances of 15N in respectively the sample and the atmosphere. The difference is whether the relative abundance is taken with respect to all the nitrogen, i.e. 14N plus 15N, or just to 14N. Since the atmosphere is 99.6337% 14N and 0.3663% 15N, a is 0.003663 in the former case and 0.003663/0.996337 = 0.003676 in the latter. However s varies similarly; for example if in the sample 15N is 0.385% and 14N is 99.615%, s is 0.003850 in the former case and 0.00385/0.99615 = 0.003865 in the latter. The value of δ15N is then 51.05‰ in the former case and 51.25‰ in the latter, an insignificant difference in practice given the typical range of δ15N values. Applications The ratio of 15N to 14N is of relevance because, in most biological contexts, the lighter isotope 14N is preferentially taken up. As a result, samples enriched in 15N often reflect a non-biological input. One use of 15N is as a tracer to determine the path taken by fertilizers applied to anything from pots to landscapes. Fertilizer enriched in 15N to an extent significantly different from that prevailing in the soil (which may be different from the atmospheric standard a) is applied at a point, and other points are then monitored for variations in δ15N. Another application is the assessment of human waste water discharge into bodies of water. The abundance of 15N is greater in human waste water than in natural water sources. Hence δ15N in benthic sediment gives an indication of the contribution of human waste to the total nitrogen in the sediment. Sediment cores analyzed for δ15N yield a historical record of such waste, with older samples at greater depths. δ15N is also used to measure food chain length and the trophic level of a given organism; high δ15N values are positively correlated with higher trophic levels; likewise, organisms low on the food chain generally exhibit lower values. Higher values in apex predators generally indicate longer food chains. References See also Isotopic signature Isotope analysis Isotope geochemistry Bioindicators Nitrogen Isotopes of nitrogen Environmental isotopes Geochemistry
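A small Python sketch of the two conventions may make the computation explicit. It reproduces the worked example above; the helper names are mine, and the atmospheric abundances are the ones quoted in the text. The exact third decimal of the second value depends on how much the intermediate abundances are rounded.

def delta15N_fraction(s, a=0.003663):
    """delta-15N in permil, with abundances taken relative to all nitrogen."""
    return (s / a - 1.0) * 1000.0

def delta15N_ratio(n15, n14, air15=0.003663, air14=0.996337):
    """delta-15N in permil, with abundances taken relative to 14N only."""
    return ((n15 / n14) / (air15 / air14) - 1.0) * 1000.0

# Sample with 0.385% 15N and 99.615% 14N, as in the example above
print(delta15N_fraction(0.00385))          # ~51.05 permil
print(delta15N_ratio(0.00385, 0.99615))    # ~51.25 permil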
Δ15N
Chemistry,Environmental_science
583
4,078,061
https://en.wikipedia.org/wiki/Papagoite
Papagoite is a rare cyclosilicate mineral. Chemically, it is a calcium copper aluminium silicate hydroxide, found as a secondary mineral on slip surfaces and in altered granodiorite veins, either in massive form or as microscopic crystals that may form spherical aggregates. Its chemical formula is CaCuAlSi2O6(OH)3. It was discovered in 1960 in Ajo, Arizona, US, and was named after the Hia C-ed O'odham people (also known as the Sand Papago) who inhabit the area. This location is the only papagoite source within the United States, while worldwide it is also found in South Africa and Namibia. It is associated with aurichalcite, shattuckite, ajoite and baryte in Arizona, and with quartz, native copper and ajoite in South Africa. Its bright blue color is the mineral's most notable characteristic. It is used as a gemstone. References Calcium minerals Copper(II) minerals Aluminium minerals Cyclosilicates Monoclinic minerals Minerals in space group 12 Gemstones
Papagoite
Physics
232
39,308,546
https://en.wikipedia.org/wiki/Lilotomab
Lilotomab (formerly tetulomab, HH1) is a murine monoclonal antibody against CD37, a glycoprotein which is expressed on the surface of mature human B cells. It was generated at the Norwegian Radium Hospital. As of 2016 it was under development by the Norwegian company Nordic Nanovector ASA as a radioimmunotherapeutic in which lilotomab is conjugated to the beta radiation-emitting isotope lutetium-177 by means of a linker called satetraxetan, a derivative of DOTA. This compound is called 177Lu-HH1 or lutetium (177Lu) lilotomab satetraxetan (trade name Betalutin). As of 2016, a phase 1/2 clinical trial in people with non-Hodgkin lymphoma was underway. References Further reading Experimental cancer drugs Radiopharmaceuticals Lutetium complexes
Lilotomab
Chemistry
206
32,009,646
https://en.wikipedia.org/wiki/B-box%20zinc%20finger
In molecular biology, the B-box-type zinc finger domain is a short protein domain of around 40 amino acid residues in length. B-box zinc fingers can be divided into two groups, where types 1 and 2 B-box domains differ in their consensus sequence and in the spacing of their 7-8 zinc-binding residues. Several proteins contain both types 1 and 2 B-boxes, suggesting some level of cooperativity between these two domains. Occurrence B-box domains are found in over 1500 proteins from a variety of organisms. They are found in TRIM (tripartite motif) proteins, which consist of an N-terminal RING finger (originally called an A-box), followed by 1-2 B-box domains and a coiled-coil domain (also called RBCC for RING, B-box, Coiled-Coil). TRIM proteins contain a type 2 B-box domain, and may also contain a type 1 B-box. In proteins that do not contain RING or coiled-coil domains, the B-box domain is primarily type 2. Many type 2 B-box proteins are involved in ubiquitinylation. Proteins containing a B-box zinc finger domain include transcription factors, ribonucleoproteins and proto-oncoproteins; for example, MID1, MID2, TRIM9, TNL, TRIM36, TRIM63, TRIFIC, NCL1 and CONSTANS-like proteins. The microtubule-associated E3 ligase MID1 contains a type 1 B-box zinc finger domain. MID1 specifically binds Alpha-4, which in turn recruits the catalytic subunit of phosphatase 2A (PP2Ac). This complex is required for targeting of PP2Ac for proteasome-mediated degradation. The MID1 B-box coordinates two zinc ions and adopts a beta/beta/alpha cross-brace structure similar to that of ZZ, PHD, RING and FYVE zinc fingers. Homologs Prokaryotic homologs of the domain are present in several bacterial lineages and methanogenic archaea, and often show fusions to peptidase domains such as the rhomboid-like serine peptidase and Zn-dependent metallopeptidase. Other versions typically contain transmembrane helices and might also show fusions to domains such as DNAJ, FHA, SH3, WD40 and tetratricopeptide repeats. Together these associations suggest a role for the prokaryotic homologs of the B-box zinc finger domain in proteolytic processing, folding or stability of membrane-associated proteins. The domain architectural syntax is remarkably similar to that seen in prokaryotic homologs of the AN1 zinc finger and LIM domains. References External links CO-like family, DBB family at PlantTFDB: Plant Transcription Factor Database See also Zinc finger Protein domains
B-box zinc finger
Biology
612
71,920,859
https://en.wikipedia.org/wiki/Triphenyl%20phosphite%20ozonide
Triphenyl phosphite ozonide (TPPO) is a chemical compound with the formula PO3(C6H5O)3 that is used to generate singlet oxygen. When TPPO is mixed with amines, the ozonide breaks down into singlet oxygen and leaves behind triphenyl phosphite. Pyridine is the only known amine that can effectively cause the breakdown of TPPO while not quenching any of the produced oxygen. Synthesis Triphenyl phosphite ozonide is created by bubbling dry ozone through dichloromethane with triphenyl phosphite being added dropwise at -78 °C. If triphenyl phosphite is added in excess in the synthesis, TPPO can be reduced to triphenyl phosphite oxide, PO(C6H5O)3, and oxygen gas. References Ozonides Organophosphites Phenol ethers
Triphenyl phosphite ozonide
Chemistry
206
2,363,918
https://en.wikipedia.org/wiki/Lift%20chair
Lift chairs, also known as lift recliners or riser armchairs, are chairs that feature a powered lifting mechanism that pushes the entire chair up from its base and so assists the user to a standing position. In the United States, lift chairs qualify as durable medical equipment under Medicare Part B. A February 1989 report released by the Inspector General of the US Department of Health and Human Services found that lift chairs might not meet Medicare's requirements for Durable Medical Equipment (DME) and that lift chair claims needed to be re-regulated. The report was stimulated by an increase in lift chair claims between 1984 and 1985 from 200,000 to 700,000. A New York Times article stated that aggressive TV ads were pushing consumers to inquire about lift chairs and that, once consumers called in, a form was sent to them for their physicians to sign. Some companies would ship lift chairs before receiving a physician's signature, thereby pressuring physicians to sign or else leave their patients to pay for the chair. Medicare may only cover the cost of the lift mechanism rather than the entire chair. Before Medicare can be considered for covering the cost, patients need to have a visit with their physician to discuss the need for this particular equipment. The DME provider will then request a prescription and a certificate of medical necessity (CMN). The CMN typically involves five questions that the physician needs to answer. Typically, the questions are (1) Does the patient have severe arthritis, (2) Does the patient have a neuromuscular disease, (3) Is the patient incapable of getting up from a regular chair in their home, (4) Can the patient walk once standing, and (5) Have all other therapeutic measures been taken? If any of the questions is answered "NO", the claim will likely be denied. Typically, DME providers require full payment for the lift chair and will offer reimbursement upon approval from Medicare. DME providers cannot bill Medicare without first providing the equipment. Lift chairs can also come with a number of additional feature options and add-ons. These include heat and massage, adjustable head pillows, adjustable lumbar support, battery backups, and premium fabrics. See also List of chairs Massage chair Mobility aid Recliner References External links Mobility devices Chairs chair Medical equipment Accessibility
Lift chair
Physics,Technology,Engineering,Biology
480
714,163
https://en.wikipedia.org/wiki/Cross-correlation
In signal processing, cross-correlation is a measure of similarity of two series as a function of the displacement of one relative to the other. This is also known as a sliding dot product or sliding inner-product. It is commonly used for searching a long signal for a shorter, known feature. It has applications in pattern recognition, single particle analysis, electron tomography, averaging, cryptanalysis, and neurophysiology. The cross-correlation is similar in nature to the convolution of two functions. In an autocorrelation, which is the cross-correlation of a signal with itself, there will always be a peak at a lag of zero, and its size will be the signal energy. In probability and statistics, the term cross-correlations refers to the correlations between the entries of two random vectors $X$ and $Y$, while the correlations of a random vector $X$ are the correlations between the entries of $X$ itself, those forming the correlation matrix of $X$. If each of $X$ and $Y$ is a scalar random variable which is realized repeatedly in a time series, then the correlations of the various temporal instances of $X$ are known as autocorrelations of $X$, and the cross-correlations of $X$ with $Y$ across time are temporal cross-correlations. In probability and statistics, the definition of correlation always includes a standardising factor in such a way that correlations have values between −1 and +1. If $X$ and $Y$ are two independent random variables with probability density functions $f$ and $g$, respectively, then the probability density of the difference $Y - X$ is formally given by the cross-correlation (in the signal-processing sense) $f \star g$; however, this terminology is not used in probability and statistics. In contrast, the convolution $f * g$ (equivalent to the cross-correlation of $f(-t)$ and $g(t)$) gives the probability density function of the sum $X + Y$. Cross-correlation of deterministic signals For continuous functions $f$ and $g$, the cross-correlation is defined as $(f \star g)(\tau) = \int_{-\infty}^{\infty} \overline{f(t)}\, g(t + \tau)\, dt$, which is equivalent to $(f \star g)(\tau) = \int_{-\infty}^{\infty} \overline{f(t - \tau)}\, g(t)\, dt$, where $\overline{f(t)}$ denotes the complex conjugate of $f(t)$, and $\tau$ is called displacement or lag. For highly-correlated $f$ and $g$ which have a maximum cross-correlation at a particular $\tau$, a feature in $f$ at $t$ also occurs later in $g$ at $t + \tau$, hence $g$ could be described to lag $f$ by $\tau$. If $f$ and $g$ are both continuous periodic functions of period $T$, the integration from $-\infty$ to $\infty$ is replaced by integration over any interval $[t_0, t_0 + T]$ of length $T$: $(f \star g)(\tau) = \int_{t_0}^{t_0 + T} \overline{f(t)}\, g(t + \tau)\, dt$, which is equivalent to $(f \star g)(\tau) = \int_{t_0}^{t_0 + T} \overline{f(t - \tau)}\, g(t)\, dt$. Similarly, for discrete functions, the cross-correlation is defined as $(f \star g)[n] = \sum_{m = -\infty}^{\infty} \overline{f[m]}\, g[m + n]$, which is equivalent to $(f \star g)[n] = \sum_{m = -\infty}^{\infty} \overline{f[m - n]}\, g[m]$. For finite discrete functions $f, g \in \mathbb{C}^N$, the (circular) cross-correlation is defined as $(f \star g)[n] = \sum_{m = 0}^{N - 1} \overline{f[m]}\, g[(m + n) \bmod N]$, which is equivalent to $(f \star g)[n] = \sum_{m = 0}^{N - 1} \overline{f[(m - n) \bmod N]}\, g[m]$. For finite discrete functions $f$ and $g$, the kernel cross-correlation is defined as $(f \star g) = [k(g, T_1(f)), k(g, T_2(f)), \ldots, k(g, T_N(f))]$, where $k(\cdot, \cdot)$ is a vector of kernel functions and $T_i$ is an affine transform. Specifically, $T_i$ can be a circular translation transform, a rotation transform, or a scale transform, etc. The kernel cross-correlation extends cross-correlation from linear space to kernel space. Cross-correlation is equivariant to translation; kernel cross-correlation is equivariant to any affine transforms, including translation, rotation, and scale, etc. Explanation As an example, consider two real valued functions $f$ and $g$ differing only by an unknown shift along the x-axis. One can use the cross-correlation to find how much $g$ must be shifted along the x-axis to make it identical to $f$. The formula essentially slides the $g$ function along the x-axis, calculating the integral of their product at each position. When the functions match, the value of $(f \star g)$ is maximized. This is because when peaks (positive areas) are aligned, they make a large contribution to the integral. 
In econometrics, lagged cross-correlation is sometimes referred to as cross-autocorrelation.

Properties

The cross-correlation of functions $f(t)$ and $g(t)$ is equivalent to the convolution of $\overline{f(-t)}$ and $g(t)$, that is, $(f \star g)(\tau) = (\overline{f(-\cdot)} * g)(\tau)$. By the cross-correlation theorem, $\mathcal{F}\{f \star g\} = \overline{\mathcal{F}\{f\}} \cdot \mathcal{F}\{g\}$, which allows cross-correlations to be computed efficiently with the fast Fourier transform.

Cross-correlation of random vectors

Definition

For random vectors $\mathbf{X} = (X_1, \ldots, X_m)^{\mathsf T}$ and $\mathbf{Y} = (Y_1, \ldots, Y_n)^{\mathsf T}$, each containing random elements whose expected value and variance exist, the cross-correlation matrix of $\mathbf{X}$ and $\mathbf{Y}$ is defined by
$R_{\mathbf{X}\mathbf{Y}} \triangleq \operatorname{E}[\mathbf{X}\mathbf{Y}^{\mathsf T}]$
and has dimensions $m \times n$. Written component-wise:
$(R_{\mathbf{X}\mathbf{Y}})_{ij} = \operatorname{E}[X_i Y_j]$,
where $\operatorname{E}$ is the expectation value. The random vectors $\mathbf{X}$ and $\mathbf{Y}$ need not have the same dimension, and either might be a scalar value.

Example

For example, if $\mathbf{X} = (X_1, X_2, X_3)^{\mathsf T}$ and $\mathbf{Y} = (Y_1, Y_2)^{\mathsf T}$ are random vectors, then $R_{\mathbf{X}\mathbf{Y}}$ is a $3 \times 2$ matrix whose $(i,j)$-th entry is $\operatorname{E}[X_i Y_j]$.

Definition for complex random vectors

If $\mathbf{Z}$ and $\mathbf{W}$ are complex random vectors, each containing random variables whose expected value and variance exist, the cross-correlation matrix of $\mathbf{Z}$ and $\mathbf{W}$ is defined by
$R_{\mathbf{Z}\mathbf{W}} \triangleq \operatorname{E}[\mathbf{Z}\mathbf{W}^{\mathsf H}]$,
where ${}^{\mathsf H}$ denotes Hermitian transposition.

Cross-correlation of stochastic processes

In time series analysis and statistics, the cross-correlation of a pair of random processes is the correlation between values of the processes at different times, as a function of the two times. Let $(X_t, Y_t)$ be a pair of random processes, and $t$ be any point in time ($t$ may be an integer for a discrete-time process or a real number for a continuous-time process). Then $X_t$ is the value (or realization) produced by a given run of the process at time $t$.

Cross-correlation function

Suppose that the processes have means $\mu_X(t)$ and $\mu_Y(t)$ and variances $\sigma_X^2(t)$ and $\sigma_Y^2(t)$ at time $t$, for each $t$. Then the definition of the cross-correlation between times $t_1$ and $t_2$ is
$R_{XY}(t_1, t_2) \triangleq \operatorname{E}[X_{t_1} \overline{Y_{t_2}}]$,
where $\operatorname{E}$ is the expected value operator. Note that this expression may not be defined.

Cross-covariance function

Subtracting the mean before multiplication yields the cross-covariance between times $t_1$ and $t_2$:
$K_{XY}(t_1, t_2) \triangleq \operatorname{E}\left[(X_{t_1} - \mu_X(t_1))\overline{(Y_{t_2} - \mu_Y(t_2))}\right]$.
Note that this expression is not well-defined for all time series or processes, because the mean or variance may not exist.

Definition for wide-sense stationary stochastic process

Let $(X_t, Y_t)$ represent a pair of stochastic processes that are jointly wide-sense stationary. Then the cross-covariance function and the cross-correlation function are given as follows.

Cross-correlation function
$R_{XY}(\tau) \triangleq \operatorname{E}[X_t \overline{Y_{t+\tau}}]$, or equivalently $R_{XY}(\tau) = \operatorname{E}[X_{t-\tau} \overline{Y_t}]$.

Cross-covariance function
$K_{XY}(\tau) \triangleq \operatorname{E}\left[(X_t - \mu_X)\overline{(Y_{t+\tau} - \mu_Y)}\right]$, or equivalently $K_{XY}(\tau) = \operatorname{E}\left[(X_{t-\tau} - \mu_X)\overline{(Y_t - \mu_Y)}\right]$,
where $\mu_X$ and $\sigma_X$ are the mean and standard deviation of the process $(X_t)$, which are constant over time due to stationarity, and similarly for $(Y_t)$. $\operatorname{E}$ indicates the expected value. That the cross-covariance and cross-correlation are independent of $t$ is precisely the additional information (beyond being individually wide-sense stationary) conveyed by the requirement that $(X_t, Y_t)$ are jointly wide-sense stationary.

The cross-correlation of a pair of jointly wide-sense stationary stochastic processes can be estimated by averaging the product of samples measured from one process and samples measured from the other (and its time shifts). The samples included in the average can be an arbitrary subset of all the samples in the signal (e.g., samples within a finite time window or a sub-sampling of one of the signals). For a large number of samples, the average converges to the true cross-correlation.
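This estimation procedure can be sketched in a few lines of Python. In the illustrative example below (the signal construction, seed, and helper name `rho_xy` are assumptions for the demonstration, not a standard API), products of mean-removed samples are averaged to estimate a per-lag correlation coefficient between two jointly wide-sense stationary sequences, one a noisy delayed copy of the other:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two jointly wide-sense stationary sequences: y is a noisy copy of x
# delayed by 5 samples, so the estimated correlation peaks at lag 5.
n = 10_000
x = rng.standard_normal(n)
y = np.roll(x, 5) + 0.5 * rng.standard_normal(n)

def rho_xy(x, y, lag):
    """Sample estimate of the normalized cross-correlation at one lag:
    average products of mean-removed samples x[t] and y[t+lag], then
    divide by the sample standard deviations (a per-lag Pearson r)."""
    if lag >= 0:
        xs, ys = x[: len(x) - lag], y[lag:]
    else:
        xs, ys = x[-lag:], y[: len(y) + lag]
    xs = xs - xs.mean()
    ys = ys - ys.mean()
    return float(np.sum(xs * ys) / np.sqrt(np.sum(xs**2) * np.sum(ys**2)))

for lag in (0, 3, 5, 7):
    print(lag, round(rho_xy(x, y, lag), 3))   # clear peak (~0.9) at lag 5
```

With unit-variance signal and noise of standard deviation 0.5, the theoretical peak value is $1/\sqrt{1.25} \approx 0.894$, which the estimate approaches as the number of samples grows.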
Normalization

It is common practice in some disciplines (e.g. statistics and time series analysis) to normalize the cross-correlation function to get a time-dependent Pearson correlation coefficient. However, in other disciplines (e.g. engineering) the normalization is usually dropped and the terms "cross-correlation" and "cross-covariance" are used interchangeably. The definition of the normalized cross-correlation of a stochastic process is
$\rho_{XY}(t_1, t_2) = \frac{K_{XY}(t_1, t_2)}{\sigma_X(t_1)\,\sigma_Y(t_2)}$.
If the function $\rho_{XY}$ is well-defined, its value must lie in the range $[-1, 1]$, with 1 indicating perfect correlation and −1 indicating perfect anti-correlation. For jointly wide-sense stationary stochastic processes, the definition is
$\rho_{XY}(\tau) = \frac{K_{XY}(\tau)}{\sigma_X\,\sigma_Y}$.
The normalization is important both because the interpretation of the autocorrelation as a correlation provides a scale-free measure of the strength of statistical dependence, and because the normalization has an effect on the statistical properties of the estimated autocorrelations.

Properties

Symmetry property

For jointly wide-sense stationary stochastic processes, the cross-correlation function has the following symmetry property:
$R_{XY}(t_1, t_2) = \overline{R_{YX}(t_2, t_1)}$.
Respectively, for jointly WSS processes:
$R_{XY}(\tau) = \overline{R_{YX}(-\tau)}$.

Time delay analysis

Cross-correlations are useful for determining the time delay between two signals, e.g., for determining time delays for the propagation of acoustic signals across a microphone array. After calculating the cross-correlation between the two signals, the maximum (or minimum if the signals are negatively correlated) of the cross-correlation function indicates the point in time where the signals are best aligned; i.e., the time delay between the two signals is determined by the argument of the maximum, or arg max, of the cross-correlation, as in
$\tau_{\mathrm{delay}} = \underset{t \in \mathbb{R}}{\operatorname{arg\,max}}\,\bigl((f \star g)(t)\bigr)$.

Terminology in image processing

Zero-normalized cross-correlation (ZNCC)

For image-processing applications in which the brightness of the image and template can vary due to lighting and exposure conditions, the images can be first normalized. This is typically done at every step by subtracting the mean and dividing by the standard deviation. That is, the cross-correlation of a template $t(x,y)$ with a subimage $f(x,y)$ is
$\frac{1}{n} \sum_{x,y} \frac{1}{\sigma_f \sigma_t} \bigl(f(x,y) - \mu_f\bigr)\bigl(t(x,y) - \mu_t\bigr)$,
where $n$ is the number of pixels in $t(x,y)$ and $f(x,y)$, $\mu_f$ is the average of $f$, and $\sigma_f$ is the standard deviation of $f$. In functional analysis terms, this can be thought of as the dot product of two normalized vectors. That is, if
$F(x,y) = f(x,y) - \mu_f$ and $T(x,y) = t(x,y) - \mu_t$,
then the above sum is equal to
$\left\langle \frac{F}{\|F\|}, \frac{T}{\|T\|} \right\rangle$,
where $\langle \cdot, \cdot \rangle$ is the inner product and $\|\cdot\|$ is the L² norm. Cauchy–Schwarz then implies that ZNCC has a range of $[-1, 1]$. Thus, if $f$ and $t$ are real matrices, their normalized cross-correlation equals the cosine of the angle between the unit vectors $F$ and $T$, being thus $1$ if and only if $F$ equals $T$ multiplied by a positive scalar. Normalized correlation is one of the methods used for template matching, a process used for finding instances of a pattern or object within an image. It is also the 2-dimensional version of the Pearson product-moment correlation coefficient.

Normalized cross-correlation (NCC)

NCC is similar to ZNCC, with the only difference being that the local mean value of the intensities is not subtracted:
$\frac{1}{n} \sum_{x,y} \frac{1}{\sigma_f \sigma_t} f(x,y)\, t(x,y)$.

Nonlinear systems

Caution must be applied when using the cross-correlation function, which assumes Gaussian statistics, to analyse nonlinear systems. In certain circumstances, which depend on the properties of the input, cross-correlation between the input and output of a system with nonlinear dynamics can be completely blind to certain nonlinear effects. This problem arises because some quadratic moments can equal zero, and this can incorrectly suggest that there is little "correlation" (in the sense of statistical dependence) between two signals, when in fact the two signals are strongly related by nonlinear dynamics.
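To make the ZNCC formula concrete, here is a minimal brute-force Python sketch of template matching (the helper names `zncc` and `match_template` and the toy image are illustrative assumptions, not a library API). It computes, at each offset, the cosine between the mean-removed template and the mean-removed subimage, exactly as in the inner-product form above:

```python
import numpy as np

def zncc(template, window):
    """ZNCC of a template with an equally sized subimage: subtract each
    mean, then return the cosine of the angle between the two centered,
    flattened vectors (1.0 for a perfect match up to positive scaling)."""
    t = template.astype(float) - template.mean()
    w = window.astype(float) - window.mean()
    denom = np.linalg.norm(t) * np.linalg.norm(w)
    return 0.0 if denom == 0 else float((t * w).sum() / denom)

def match_template(image, template):
    """Slide the template over the image and return the map of ZNCC scores."""
    H, W = image.shape
    h, w = template.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = zncc(template, image[i:i + h, j:j + w])
    return out

image = np.zeros((8, 8))
image[3:6, 2:5] = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]       # a bright blob
template = np.array([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]])
scores = match_template(image, template)
print(np.unravel_index(np.argmax(scores), scores.shape))  # -> (3, 2)
```

Production code would use an optimized implementation (for example, OpenCV's matchTemplate offers a comparable normalized mode) rather than this quadratic loop.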
See also Autocorrelation Autocovariance Coherence Convolution Correlation Correlation function Cross-correlation matrix Cross-covariance Cross-spectrum Digital image correlation Phase correlation Scaled correlation Spectral density Wiener–Khinchin theorem References Further reading External links Cross Correlation from Mathworld http://scribblethink.org/Work/nvisionInterface/nip.html http://www.staff.ncl.ac.uk/oliver.hinton/eee305/Chapter6.pdf Bilinear maps Covariance and correlation Signal processing Time domain analysis
Cross-correlation
Technology,Engineering
2,215
11,564,906
https://en.wikipedia.org/wiki/Luopan
The luopan or geomantic compass is a Chinese magnetic compass, also known as a feng shui compass. It is used by a feng shui practitioner to determine the precise direction of a structure, place or item. Its rings carry extensive information and formulas relating to its feng shui functions. Its needle points towards the south magnetic pole.

Form and function

Like a conventional compass, a luopan is a direction finder. However, a luopan differs from a compass in several important ways. The most obvious difference is the feng shui formulas embedded in up to 40 concentric rings on its surface, a metal or wooden plate known as the heaven dial. This circular plate typically sits on a wooden base known as the earth plate, and the heaven dial rotates freely on the earth plate. A red wire or thread that crosses the earth plate and heaven dial at 90-degree angles is the Heaven Center Cross Line, or Red Cross Grid Line. This line is used to find the direction and to note its position on the rings.

A conventional compass has markings for four or eight directions, while a luopan typically contains markings for 24 directions, i.e. 15 degrees per direction. The Sun takes approximately 15.2 days to traverse a solar term, one of a series of 24 points on the ecliptic. Since there are 360 degrees on the luopan and approximately 365.25 days in a mean solar year, each degree on a luopan approximates a terrestrial day.

Unlike a typical compass, a luopan does not point to the north magnetic pole of Earth. The needle of a luopan points to the south magnetic pole (it does not point to the geographic South Pole). The Chinese word for compass, 指南針 (zhǐnánzhēn in Mandarin), translates to "south-pointing needle."

Types

Since the Ming and Qing dynasties, three types of luopan have been popular. They have some formula rings in common, such as the 24 directions and the early and later heaven arrangements.

San He
This luopan was said to have been used in the Tang dynasty. The San He contains three basic 24-direction rings. Each ring relates to a different method and formula. (The techniques grouped under the name "Three Harmonies" are the San He methods.)

San Yuan
This luopan, also known as the jiang pan (after Jiang Da Hong) or the yi pan (because of the presence of Yijing hexagrams), incorporates many formulas used in San Yuan (Three Cycles). It contains one 24-direction ring, known as the Earth Plate Correct Needle, the ring for the 64 hexagrams, and others. (The techniques grouped under the name "Flying Stars" are an example of San Yuan methods.)

Zong He
This luopan combines rings from the San He and San Yuan. It contains three 24-direction rings and the 64 hexagrams ring.

Other types
Each feng shui master may design a luopan to suit personal preference and to offer to students. Some designs incorporate the bagua (trigram) numbers, directions from the Eight Mansions methods, and English equivalents.

History and development

The luopan is an image of the cosmos (a world model) based on tortoise plastrons used in divination. At its most basic level it serves as a means to assign proper positions in time and space, like the Ming Tang (Hall of Light). The markings are similar to those on a liubo board. The oldest precursors of the luopan are the shi, meaning astrolabe or diviner's board (also sometimes called liuren astrolabes), unearthed from tombs that date between 278 BCE and 209 BCE. These astrolabes consist of a lacquered, two-sided board with astronomical sightlines.
Along with divination for Da Liu Ren, the boards were commonly used to chart the motion of Taiyi through the nine palaces. The markings are virtually unchanged from the shi to the first magnetic compasses. The schematic of earth plate, heaven plate, and grid lines is part of the "two cords and four hooks" geometrical diagram in use since at least the Warring States period. The zhinan zhen, or south-pointing needle, is the original magnetic compass, and it was developed for feng shui. It featured the two cords and four hooks diagram, direction markers, and a magnetized spoon in the center. See also Automatic writing Chu Silk Manuscript Dowsing Geomancy References Bibliography Further reading An account of the various types of luo pan, and details of 75 separate rings. The Lowdown on the Luo pan - Feng Shui for Modern Living Magazine Orientation (geometry) Chinese inventions Magnetic devices Geomancy Feng Shui
Luopan
Physics,Mathematics
982
9,022,604
https://en.wikipedia.org/wiki/Cathelicidin%20antimicrobial%20peptide
Cathelicidin antimicrobial peptide (CAMP) is an antimicrobial peptide encoded in humans by the CAMP gene. The active form is LL-37. In humans, CAMP encodes the peptide precursor CAP-18 (18 kDa), which is processed by proteinase 3-mediated extracellular cleavage into the active form LL-37. The cathelicidin family includes about 30 members, of which LL-37 is the only one present in humans. Cathelicidins are stored in the secretory granules of neutrophils and macrophages and can be released from leukocytes following activation. Cathelicidin peptides are dual-natured molecules called amphiphiles: one end of the molecule is attracted to water and repelled by fats and proteins, and the other end is attracted to fats and proteins and repelled by water. Members of this family react to pathogens by disintegrating, damaging, or puncturing cell membranes. Cathelicidins thus serve a critical role in mammalian innate immune defense against invasive bacterial infection. The cathelicidin family of peptides is classified among the antimicrobial peptides (AMPs). The AMP family also includes the defensins. While the defensins share common structural features, cathelicidin-related peptides are highly heterogeneous. Members of the cathelicidin family of antimicrobial polypeptides are characterized by a highly conserved region (the cathelin domain) and a highly variable cathelicidin peptide domain. Cathelicidin peptides have been isolated from many different species of mammals, including marsupials. Cathelicidins are mostly found in neutrophils, monocytes, mast cells, dendritic cells and macrophages after activation by bacteria, viruses, fungi, parasites, or the hormone 1,25-D, the hormonally active form of vitamin D. They have also been found in some other cells, including epithelial cells and human keratinocytes.

Etymology

The term was coined in 1995 from cathelin, due to the characteristic cathelin-like domain present in cathelicidins. The name cathelin itself was coined from "cathepsin L inhibitor" in 1989.

Mechanism of antimicrobial activity

The general mechanism triggering cathelicidin action, like that of other antimicrobial peptides, involves the disintegration (damaging and puncturing) of the cell membranes of organisms toward which the peptide is active. Cathelicidins rapidly destroy the lipoprotein membranes of microbes enveloped in phagosomes after fusion with lysosomes in macrophages. LL-37 can also inhibit the formation of bacterial biofilms.

Other activities

LL-37 plays a role in the activation of cell proliferation and migration, contributing to the wound closure process. All these mechanisms together play an essential role in tissue homeostasis and regenerative processes. Moreover, it has an agonistic effect on various pleiotropic receptors, for example, formyl peptide receptor-like 1 (FPRL-1), the purinergic receptor P2X7, and the epidermal growth factor receptor (EGFR). Furthermore, it induces angiogenesis and regulates apoptosis.

Characteristics

Cathelicidins range in size from 12 to 80 amino acid residues and have a wide range of structures. Most cathelicidins are linear peptides of 23-37 amino acid residues that fold into amphipathic α-helices. Cathelicidins may also be small molecules (12-18 residues) with beta-hairpin structures, stabilized by one or two disulphide bonds. Even larger cathelicidin peptides (39-80 amino acid residues) also exist; these display repetitive proline motifs forming extended polyproline-type structures.
In 1995, Gudmundsson et al. proposed that the active antimicrobial peptide is formed of a 39-residue C-terminal domain (termed FALL-39). However, only a year later it was reported that the mature AMP, now called LL-37, is in fact two amino acids shorter than FALL-39. The cathelicidin family shares primary sequence homology with the cystatin family of cysteine proteinase inhibitors, although the amino acid residues thought to be important in such protease inhibition are usually lacking.

Non-human orthologs

Cathelicidin peptides have been found in humans, monkeys, mice, rats, rabbits, guinea pigs, pandas, pigs, cattle, frogs, sheep, goats, chickens, horses and wallabies. Antibodies to the human LL-37/hCAP-18 have been used to find cathelicidin-like compounds in a marsupial. About 30 cathelicidin family members have been described in mammals, with only one (LL-37) found in humans. Currently identified cathelicidin peptides include the following:
Human: hCAP-18 (cleaved into LL-37)
Rhesus monkey: RL-37
Mice: CRAMP-1/2 (Cathelicidin-Related Antimicrobial Peptide)
Rats:
Rabbits: CAP-18
Guinea pig: CAP-11
Pigs: PR-39, Prophenin, PMAP-23, 36, 37
Cattle: BMAP-27, 28, 34 (Bovine Myeloid Antimicrobial Peptides); Bac5, Bac7
Frogs: cathelicidin-AL (found in Amolops loloensis)
Chickens: four cathelicidins, fowlicidins 1, 2, 3 and cathelicidin beta-1
Tasmanian devil: Saha-CATH5
Salmonids: CATH1 and CATH2

Clinical significance

Patients with rosacea have elevated levels of cathelicidin and elevated levels of stratum corneum tryptic enzymes (SCTEs). Cathelicidin is cleaved into the antimicrobial peptide LL-37 by both kallikrein 5 and kallikrein 7 serine proteases. Excessive production of LL-37 is suspected to be a contributing cause in all subtypes of rosacea. Antibiotics have been used in the past to treat rosacea, but they may only work because they inhibit some SCTEs. Lower plasma levels of human cathelicidin antimicrobial protein (hCAP18) appear to significantly increase the risk of death from infection in dialysis patients. The production of cathelicidin is up-regulated by vitamin D. SAAP-148 (a synthetic antimicrobial and antibiofilm peptide) is a modified version of LL-37 with enhanced antimicrobial activity compared to LL-37. In particular, SAAP-148 is more efficient at killing bacteria under physiological conditions. In addition, SAAP-148 synergises with the repurposed antibiotic halicin against antibiotic-resistant bacteria and biofilms. LL-37 is thought to play a role in psoriasis pathogenesis (along with other antimicrobial peptides). In psoriasis, damaged keratinocytes release LL-37, which forms complexes with self-genetic material (DNA or RNA) from other cells. These complexes stimulate dendritic cells (a type of antigen-presenting cell), which then release interferons α and β, contributing to the differentiation of T cells and continued inflammation. LL-37 has also been found to be a common auto-antigen in psoriasis; T cells specific to LL-37 were found in the blood and skin of two thirds of patients with moderate to severe psoriasis. LL-37 binds to the amyloid beta peptide (Aβ), which is associated with Alzheimer's disease. An imbalance between LL-37 and Aβ may be a factor affecting AD-associated fibrils and plaques. Chronic oral Porphyromonas gingivalis and herpesvirus (HSV-1) infections may contribute to the progression of Alzheimer's dementia.

Applications

Research into the AMP family, particularly in regard to their mechanism of action, has been ongoing for nearly 20 years.
Despite sustained interest, treatments derived from or utilizing AMPs have not been widely adopted for clinical use, for several reasons. First, drug candidates from AMPs have a narrow window of bioavailability, because peptides are quickly broken down by proteases. Second, peptide drugs are more expensive to produce than small-molecule drugs, which is problematic since peptide drugs must be given in large doses to counter rapid enzymatic breakdown. These qualities also limit the routes of administration, typically to injection, infusion, or slow-release therapy. See also Antimicrobial peptides Innate immune system Peptoid References Further reading External links Immune system Antimicrobial peptides Leukocytes Protein families
Cathelicidin antimicrobial peptide
Biology
1,860
2,526,936
https://en.wikipedia.org/wiki/Isotopes%20of%20samarium
Naturally occurring samarium (62Sm) is composed of five stable isotopes, 144Sm, 149Sm, 150Sm, 152Sm and 154Sm, and two extremely long-lived radioisotopes, 147Sm (half-life 1.066×10¹¹ y) and 148Sm (6.3×10¹⁵ y), with 152Sm being the most abundant (26.75% natural abundance). 146Sm (9.20×10⁷ y) is also fairly long-lived, but is not long-lived enough to have survived in significant quantities from the formation of the Solar System on Earth, although it remains useful in radiometric dating in the Solar System as an extinct radionuclide. It is the longest-lived nuclide that has not yet been confirmed to be primordial. Other than the naturally occurring isotopes, the longest-lived radioisotopes are 151Sm, which has a half-life of 94.6 years, and 145Sm, which has a half-life of 340 days. All of the remaining radioisotopes, which range from 129Sm to 168Sm, have half-lives that are less than two days, and the majority of these have half-lives that are less than 48 seconds. This element also has twelve known isomers, the most stable being 141mSm (t1/2 22.6 minutes), 143m1Sm (t1/2 66 seconds) and 139mSm (t1/2 10.7 seconds). The long-lived isotopes, 146Sm, 147Sm, and 148Sm, primarily decay by alpha decay to isotopes of neodymium. Lighter unstable isotopes of samarium primarily decay by electron capture to isotopes of promethium, while heavier ones decay by beta decay to isotopes of europium. A 2012 paper revising the estimated half-life of 146Sm from 10.3(5)×10⁷ y to 6.8(7)×10⁷ y was retracted in 2023. Isotopes of samarium are used in samarium-neodymium dating for determining the age relationships of rocks and meteorites. 151Sm is a medium-lived fission product and acts as a neutron poison in the nuclear fuel cycle. The stable fission product 149Sm is also a neutron poison. Samarium is theoretically the lightest element with even atomic number having no stable isotopes: all of its isotopes can in theory undergo alpha decay, beta decay, or double beta decay. The only other such elements are those with atomic numbers greater than 66, dysprosium being the heaviest element with theoretically stable isotopes.
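These half-lives explain why 147Sm and 148Sm survive as primordial nuclides while 146Sm is extinct. As a rough check, the following Python sketch (assuming a round Solar System age of 4.567×10⁹ years) computes the surviving fraction N/N₀ = 2^(−t/t½) of each nuclide since the Solar System formed:

```python
# Surviving fraction after time t: N/N0 = 2 ** (-t / t_half).
T_SOLAR = 4.567e9   # assumed age of the Solar System, years

for name, t_half in [("146Sm", 9.20e7), ("147Sm", 1.066e11), ("148Sm", 6.3e15)]:
    halves = T_SOLAR / t_half               # elapsed half-lives
    frac = 2.0 ** (-halves)
    print(f"{name}: {halves:10.2e} half-lives -> surviving fraction {frac:.3e}")
```

146Sm has gone through roughly 50 half-lives, leaving a fraction of order 10⁻¹⁵ (effectively none), whereas 147Sm has passed through only about 4% of one half-life, so about 97% of the original 147Sm remains.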
List of isotopes |-id=Samarium-129 | 129Sm | style="text-align:right" | 62 | style="text-align:right" | 67 | 128.95464(54)# | 550(100) ms | | | 5/2+# | | |-id=Samarium-130 | 130Sm | style="text-align:right" | 62 | style="text-align:right" | 68 | 129.94892(43)# | 1# s | β+ | 130Pm | 0+ | | |-id=Samarium-131 | rowspan=2|131Sm | rowspan=2 style="text-align:right" | 62 | rowspan=2 style="text-align:right" | 69 | rowspan=2|130.94611(32)# | rowspan=2|1.2(2) s | β+ | 131Pm | rowspan=2|5/2+# | rowspan=2| | rowspan=2| |- | β+, p (rare) | 130Nd |-id=Samarium-132 | rowspan=2|132Sm | rowspan=2 style="text-align:right" | 62 | rowspan=2 style="text-align:right" | 70 | rowspan=2|131.94069(32)# | rowspan=2|4.0(3) s | β+ | 132Pm | rowspan=2|0+ | rowspan=2| | rowspan=2| |- | β+, p | 131Nd |-id=Samarium-133 | rowspan=2|133Sm | rowspan=2 style="text-align:right" | 62 | rowspan=2 style="text-align:right" | 71 | rowspan=2|132.93867(21)# | rowspan=2|2.90(17) s | β+ | 133Pm | rowspan=2|(5/2+) | rowspan=2| | rowspan=2| |- | β+, p | 132Nd |-id=Samarium-134 | 134Sm | style="text-align:right" | 62 | style="text-align:right" | 72 | 133.93397(21)# | 10(1) s | β+ | 134Pm | 0+ | | |-id=Samarium-135 | rowspan=2|135Sm | rowspan=2 style="text-align:right" | 62 | rowspan=2 style="text-align:right" | 73 | rowspan=2|134.93252(17) | rowspan=2|10.3(5) s | β+ (99.98%) | 135Pm | rowspan=2|(7/2+) | rowspan=2| | rowspan=2| |- | β+, p (.02%) | 134Nd |-id=Samarium-135m | style="text-indent:1em" | 135mSm | colspan="3" style="text-indent:2em" | 0(300)# keV | 2.4(9) s | β+ | 135Pm | (3/2+, 5/2+) | | |-id=Samarium-136 | 136Sm | style="text-align:right" | 62 | style="text-align:right" | 74 | 135.928276(13) | 47(2) s | β+ | 136Pm | 0+ | | |-id=Samarium-136m | style="text-indent:1em" | 136mSm | colspan="3" style="text-indent:2em" | 2264.7(11) keV | 15(1) μs | | | (8−) | | |-id=Samarium-137 | 137Sm | style="text-align:right" | 62 | style="text-align:right" | 75 | 136.92697(5) | 45(1) s | β+ | 137Pm | (9/2−) | | |-id=Samarium-137m | style="text-indent:1em" | 137mSm | colspan="3" style="text-indent:2em" | 180(50)# keV | 20# s | β+ | 137Pm | 1/2+# | | |-id=Samarium-138 | 138Sm | style="text-align:right" | 62 | style="text-align:right" | 76 | 137.923244(13) | 3.1(2) min | β+ | 138Pm | 0+ | | |-id=Samarium-139 | 139Sm | style="text-align:right" | 62 | style="text-align:right" | 77 | 138.922297(12) | 2.57(10) min | β+ | 139Pm | 1/2+ | | |-id=Samarium-139m | rowspan=2 style="text-indent:1em" | 139mSm | rowspan=2 colspan="3" style="text-indent:2em" | 457.40(22) keV | rowspan=2|10.7(6) s | IT (93.7%) | 139Sm | rowspan=2|11/2− | rowspan=2| | rowspan=2| |- | β+ (6.3%) | 139Pm |-id=Samarium-140 | 140Sm | style="text-align:right" | 62 | style="text-align:right" | 78 | 139.918995(13) | 14.82(12) min | β+ | 140Pm | 0+ | | |-id=Samarium-141 | 141Sm | style="text-align:right" | 62 | style="text-align:right" | 79 | 140.918476(9) | 10.2(2) min | β+ | 141Pm | 1/2+ | | |-id=Samarium-141m | rowspan=2 style="text-indent:1em" | 141mSm | rowspan=2 colspan="3" style="text-indent:2em" | 176.0(3) keV | rowspan=2|22.6(2) min | β+ (99.69%) | 141Pm | rowspan=2|11/2− | rowspan=2| | rowspan=2| |- | IT (.31%) | 141Sm |-id=Samarium-142 | 142Sm | style="text-align:right" | 62 | style="text-align:right" | 80 | 141.915198(6) | 72.49(5) min | β+ | 142Pm | 0+ | | |-id=Samarium-143 | 143Sm | style="text-align:right" | 62 | style="text-align:right" | 81 | 142.914628(4) | 8.75(8) min | β+ | 143Pm | 3/2+ | | |-id=Samarium-143m1 | rowspan=2 style="text-indent:1em" | 143m1Sm | rowspan=2 
colspan="3" style="text-indent:2em" | 753.99(16) keV | rowspan=2|66(2) s | IT (99.76%) | 143Sm | rowspan=2|11/2− | rowspan=2| | rowspan=2| |- | β+ (.24%) | 143Pm |-id=Samarium-143m2 | style="text-indent:1em" | 143m2Sm | colspan="3" style="text-indent:2em" | 2793.8(13) keV | 30(3) ms | | | 23/2(−) | | |-id=Samarium-144 | 144Sm | style="text-align:right" | 62 | style="text-align:right" | 82 | 143.911999(3) | colspan=3 align=center|Observationally stable | 0+ | 0.0307(7) | |-id=Samarium-144m | style="text-indent:1em" | 144mSm | colspan="3" style="text-indent:2em" | 2323.60(8) keV | 880(25) ns | | | 6+ | | |-id=Samarium-145 | 145Sm | style="text-align:right" | 62 | style="text-align:right" | 83 | 144.913410(3) | 340(3) d | EC | 145Pm | 7/2− | | |-id=Samarium-145m | style="text-indent:1em" | 145mSm | colspan="3" style="text-indent:2em" | 8786.2(7) keV | 990(170) ns[0.96(+19−15) μs] | | | (49/2+) | | |-id=Samarium-146 | 146Sm | style="text-align:right" | 62 | style="text-align:right" | 84 | 145.913041(4) | 9.20(26) y | α | 142Nd | 0+ | Trace | |- | 147Sm | style="text-align:right" | 62 | style="text-align:right" | 85 | 146.9148979(26) | 1.066(5) y | α | 143Nd | 7/2− | 0.1499(18) | |-id=Samarium-148 | 148Sm | style="text-align:right" | 62 | style="text-align:right" | 86 | 147.9148227(26) | 6.3(13) y | α | 144Nd | 0+ | 0.1124(10) | |- | 149Sm | style="text-align:right" | 62 | style="text-align:right" | 87 | 148.9171847(26) | colspan=3 align=center|Observationally stable | 7/2− | 0.1382(7) | |-id=Samarium-150 | 150Sm | style="text-align:right" | 62 | style="text-align:right" | 88 | 149.9172755(26) | colspan=3 align=center|Observationally stable | 0+ | 0.0738(1) | |- | 151Sm | style="text-align:right" | 62 | style="text-align:right" | 89 | 150.9199324(26) | 94.6(6) y | β− | 151Eu | 5/2− | | |-id=Samarium-151m | style="text-indent:1em" | 151mSm | colspan="3" style="text-indent:2em" | 261.13(4) keV | 1.4(1) μs | | | (11/2)− | | |-id=Samarium-152 | 152Sm | style="text-align:right" | 62 | style="text-align:right" | 90 | 151.9197324(27) | colspan=3 align=center|Observationally stable | 0+ | 0.2675(16) | |- | 153Sm | style="text-align:right" | 62 | style="text-align:right" | 91 | 152.9220974(27) | 46.2846(23) h | β− | 153Eu | 3/2+ | | |-id=Samarium-153m | style="text-indent:1em" | 153mSm | colspan="3" style="text-indent:2em" | 98.37(10) keV | 10.6(3) ms | IT | 153Sm | 11/2− | | |-id=Samarium-154 | 154Sm | style="text-align:right" | 62 | style="text-align:right" | 92 | 153.9222093(27) | colspan=3 align=center|Observationally stable | 0+ | 0.2275(29) | |-id=Samarium-155 | 155Sm | style="text-align:right" | 62 | style="text-align:right" | 93 | 154.9246402(28) | 22.3(2) min | β− | 155Eu | 3/2− | | |-id=Samarium-156 | 156Sm | style="text-align:right" | 62 | style="text-align:right" | 94 | 155.925528(10) | 9.4(2) h | β− | 156Eu | 0+ | | |-id=Samarium-156m | style="text-indent:1em" | 156mSm | colspan="3" style="text-indent:2em" | 1397.55(9) keV | 185(7) ns | | | 5− | | |-id=Samarium-157 | 157Sm | style="text-align:right" | 62 | style="text-align:right" | 95 | 156.92836(5) | 8.03(7) min | β− | 157Eu | (3/2−) | | |-id=Samarium-158 | 158Sm | style="text-align:right" | 62 | style="text-align:right" | 96 | 157.92999(8) | 5.30(3) min | β− | 158Eu | 0+ | | |-id=Samarium-159 | 159Sm | style="text-align:right" | 62 | style="text-align:right" | 97 | 158.93321(11) | 11.37(15) s | β− | 159Eu | 5/2− | | |-id=Samarium-160 | 160Sm | style="text-align:right" | 62 | style="text-align:right" | 98 | 159.93514(21)# | 9.6(3) s | β− 
| 160Eu | 0+ | | |-id=Samarium-161 | 161Sm | style="text-align:right" | 62 | style="text-align:right" | 99 | 160.93883(32)# | | β− | 161Eu | 7/2+# | | |-id=Samarium-162 | 162Sm | style="text-align:right" | 62 | style="text-align:right" | 100 | 161.94122(54)# | | β− | 162Eu | 0+ | | |-id=Samarium-163 | 163Sm | style="text-align:right" | 62 | style="text-align:right" | 101 | 162.94536(75)# | | β− | 163Eu | 1/2−# | | |-id=Samarium-164 | 164Sm | style="text-align:right" | 62 | style="text-align:right" | 102 | 163.94828(86)# | | β− | 164Eu | 0+ | | |-id=Samarium-165 | rowspan=2|165Sm | rowspan=2 style="text-align:right" | 62 | rowspan=2 style="text-align:right" | 103 | rowspan=2|164.95298(97)# | rowspan=2| | β− (98.64%) | 165Eu | rowspan=2|5/2−# | rowspan=2| | rowspan=2| |- | β−, n (1.36%) | 164Eu |-id=Samarium-166 | rowspan=2|166Sm | rowspan=2 style="text-align:right" | 62 | rowspan=2 style="text-align:right" | 104 | rowspan=2| | rowspan=2| | β− (95.62%) | 166Eu | rowspan=2|0+ | rowspan=2| | rowspan=2| |- | β−, n (4.38%) | 165Eu |-id=Samarium-167 | rowspan=2|167Sm | rowspan=2 style="text-align:right" | 62 | rowspan=2 style="text-align:right" | 105 | rowspan=2| | rowspan=2| | β− | 167Eu | rowspan=2| | rowspan=2| | rowspan=2| |- | β−, n | 166Eu |-id=Samarium-168 | rowspan=2|168Sm | rowspan=2 style="text-align:right" | 62 | rowspan=2 style="text-align:right" | 106 | rowspan=2| | rowspan=2| | β− | 168Eu | rowspan=2|0+ | rowspan=2| | rowspan=2| |- | β−, n | 167Eu Samarium-149 Samarium-149 (149Sm) is an observationally stable isotope of samarium (predicted to decay, but no decays have ever been observed, giving it a half-life at least several orders of magnitude longer than the age of the universe), and a product of the decay chain from the fission product 149Nd (yield 1.0888%). 149Sm is a neutron-absorbing nuclear poison with significant effect on nuclear reactor operation, second only to 135Xe. Its neutron cross section is 40140 barns for thermal neutrons. The equilibrium concentration (and thus the poisoning effect) builds to an equilibrium value in about 500 hours (about 20 days) of reactor operation, and since 149Sm is stable, the concentration remains essentially constant during further reactor operation. This contrasts with xenon-135, which accumulates from the beta decay of iodine-135 (a short lived fission product) and has a high neutron cross section, but itself decays with a half-life of 9.2 hours (so does not remain in constant concentration long after the reactor shutdown), causing the so-called xenon pit. Samarium-151 Samarium-151 (151Sm) has a half-life of 94.6 years, undergoing low-energy beta decay, and has a fission product yield of 0.4203% for thermal neutrons and 235U, about 39% of 149Sm's yield. The yield is somewhat higher for 239Pu. Its neutron absorption cross section for thermal neutrons is high at 15200 barns, about 38% of 149Sm's absorption cross section, or about 20 times that of 235U. Since the ratios between the production and absorption rates of 151Sm and 149Sm are almost equal, the two isotopes should reach similar equilibrium concentrations. Since 149Sm reaches equilibrium in about 500 hours (20 days), 151Sm should reach equilibrium in about 50 days. Since nuclear fuel is used for several years (burnup) in a nuclear power plant, the final amount of 151Sm in the spent nuclear fuel at discharge is only a small fraction of the total 151Sm produced during the use of the fuel. 
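The equilibration times quoted above (about 20 days for 149Sm, about 50 days for 151Sm) follow from a simple scaling argument: for a stable poison produced at a constant rate and removed only by neutron capture, the approach to equilibrium has a time constant τ = 1/(σφ), so the equilibration time scales inversely with the capture cross section. A minimal Python sketch of this arithmetic (the 20-day figure for 149Sm is taken from the text; the constant-flux, capture-only removal model is a simplifying assumption):

```python
# Time to equilibrium scales as 1 / (sigma_a * phi); at fixed flux phi,
# the ratio of equilibration times is the inverse ratio of cross sections.
SIGMA_SM149 = 40_140e-24   # thermal capture cross section of 149Sm, cm^2
SIGMA_SM151 = 15_200e-24   # thermal capture cross section of 151Sm, cm^2
T_EQ_SM149_DAYS = 20       # equilibration time for 149Sm stated above

ratio = SIGMA_SM149 / SIGMA_SM151
print(f"cross-section ratio: {ratio:.2f}")
print(f"estimated 151Sm equilibration: {T_EQ_SM149_DAYS * ratio:.0f} days")
# -> about 53 days, consistent with the ~50 days quoted above
```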
According to one study, the mass fraction of 151Sm in spent fuel is about 0.0025 for heavy loading of MOX fuel and about half that for uranium fuel, which is roughly two orders of magnitude less than the mass fraction of about 0.15 for the medium-lived fission product 137Cs. The decay energy of 151Sm is also about an order of magnitude less than that of 137Cs. The low yield, low survival rate, and low decay energy mean that 151Sm has insignificant nuclear waste impact compared to the two main medium-lived fission products 137Cs and 90Sr. Samarium-153 Samarium-153 (153Sm) has a half-life of 46.3 hours, undergoing β− decay into 153Eu. As a component of samarium lexidronam, it is used in palliation of bone cancer. It is treated by the body in a similar manner to calcium, and it localizes selectively to bone. References Isotope masses from: Isotopic compositions and standard atomic masses from: Half-life, spin, and isomer data selected from the following sources. Samarium
Isotopes of samarium
Chemistry
5,190