74,858,966
https://en.wikipedia.org/wiki/Injury%20in%20plants
Injury in plants is damage to plants caused by other organisms or by the non-living (abiotic) environment. Animals that commonly cause injury to plants include insects, mites, nematodes, and herbivorous mammals; damage may also be caused by plant pathogens including fungi, bacteria, and viruses. Abiotic factors that can damage plants include heat, freezing, flooding, lightning, ozone gas, and pollutant chemicals. Plants respond to injury by signalling that damage has occurred, by secreting materials to seal off the damaged area, by producing antimicrobial chemicals, and in woody plants by regrowing over wounds. Factors Biotic Animals that commonly cause injury to plants include pests such as insects, mites, and nematodes. These variously bite or abrade plant parts such as leaves, stems, and roots, or, as is common among the true bugs, pierce the plant's surface and suck plant juices. The resulting injuries may admit plant pathogens such as bacteria and fungi, which may extend the injury. Caterpillar larvae of agricultural pests such as cabbage white butterflies (Pieridae) can completely defoliate Brassica crops. Molluscs such as snails graze on plants including grasses and forbs, abrading them with their rasp-like radula; they can inflict substantial damage on crops. Grazing mammals, including livestock such as cattle, likewise bite off or break parts of plants including grasses, forbs, and forest trees, causing injury and, again, potentially admitting pathogens. Abiotic Abiotic factors that can damage plants include heat, freezing, flooding, lightning strikes, ozone gas, and pollutant chemicals. Heat can kill any plant, given a sufficiently high temperature. Alpine plants tend to die at around 47 Celsius; temperate plants at around 51 Celsius; and tropical plants at nearly 58 Celsius, but there is some overlap depending on species. Similarly among cereal crops, temperate barley and oat die at around 49 Celsius, but tropical maize at 55 Celsius.
Freezing affects plants variously, according to each species' ability to resist frost damage. Many forbs, including many garden flowers, are tender with little tolerance to frost, and die or are seriously damaged when frozen. Many woody plants are able to supercool, with tough buds and stems containing molecules that lower the freezing point or help to prevent the nucleation of ice crystals, and cell walls that mechanically protect cells against freezing. Flooding of soil quickly kills or injures many plants. The leaves become yellow (chlorosis) and die, progressively up the stem, within about five days after the roots are flooded. The roots lose the ability to absorb water and nutrients. Lightning strikes kill or injure plants, from root crops like beet and potato, which are instantly cooked in the ground, to trees such as coconut, through effects such as sudden heat and pressure shock waves created when water inside the plant flashes to steam. This can rupture stems and scorch any plant parts. Ozone, a gas, causes injury to leaves at concentrations from as little as 0.1 part per million in the atmosphere, such as may be found in or near large cities. It is one of many pollutant chemicals that can damage plants. Plant responses Plants respond to injury by signalling that damage has occurred, by secreting materials to seal off the damaged area, by producing antimicrobials to limit the spread of pathogens, and in some woody plants by regrowing over the wound. Signalling Plants produce chemicals at the injury site that signal the presence of damage and may help to reduce further damage. The chemicals involved depend to some extent on the plant species, though several of them are shared among species; and the signals given depend on the cause of the injury. Plants injured by spider mites release volatile chemicals that attract predatory mites, serving to reduce the attack on the plants. 
As another example, maize plants damaged by the caterpillars of noctuid moths release a mixture of terpenoid substances that attracts the parasitoid wasp Cotesia marginiventris, which kills the caterpillars. Many plants give off such herbivory-induced signals. Wound occlusion Plants secrete a variety of materials to help seal off damaged areas. For example, the grape vine Vitis vinifera is able to block the xylem water-transport tubes in its stems using tyloses (outgrowths of parenchyma cells that plug the vessels) in summertime, and gels in wintertime when the plant is dormant. Tyloses help to prevent pathogens such as wood-rotting fungi and the bacterium Xylella fastidiosa from spreading through the plant: they are produced as a response both to the bacterium and to mechanical damage such as viticultural pruning. Chemical defence Many woody plants produce resins and antimicrobial chemicals to limit the spread of pathogens after an injury. Wound healing Many woody plants regrow around injuries, such as those caused by pruning. In time, such regrowth often completely covers the damaged area as the cambium growth layer produces new tissues. Well-pruned trees with undamaged branch collars often recover well, whereas poorly-pruned trees rot below the wound. See also Injury in animals References Plant physiology Herbivory Chemical ecology
Injury in plants
Chemistry,Biology
1,098
356,904
https://en.wikipedia.org/wiki/List%20of%20craters%20on%20Ganymede
Ganymede is the largest moon in the Solar System, and has a hard surface with many craters. Most of them are named after figures from Egyptian, Mesopotamian, and other ancient Middle Eastern myths. List Dropped or not approved names External links USGS: Ganymede nomenclature USGS: Ganymede Nomenclature: Craters Ganymede
List of craters on Ganymede
Astronomy
70
45,000,873
https://en.wikipedia.org/wiki/Costa%20Rican%20units%20of%20measurement
A number of units of measurement were used in Costa Rica to measure length, mass, area, capacity, and other quantities. The metric system has been adopted in Costa Rica since 1910, and has been compulsory since 1912, under a joint convention among Costa Rica, Guatemala, Honduras, Nicaragua and El Salvador. Pre-metric units Before the metric system, a number of modified Spanish (i.e., Spanish Castilian), English and local units were used. Length A number of units were used to measure length. One vara was equal to 0.8393 m. Some other units are given below: 1 cuarta = 1/4 vara 1 tercia = 1/3 vara 1 mecate = 24 varas. Mass Several units were used to measure mass in Costa Rica, Guatemala, Honduras, Nicaragua, and El Salvador. Some units are given below: 1 caja = 16 kg 1 fanega = 92 kg 1 carga = 161 kg. As a typical coffee measure, the fanega was equal to 46 kg of coffee. Area Several units were used to measure area in Costa Rica, Guatemala, Honduras, Nicaragua, and El Salvador. One manzana was equal to 10,000 square varas or 6960.5 m2. One caballeria was equal to 64 manzanas. Capacity Several units were used to measure capacity in Costa Rica, Guatemala, Honduras, Nicaragua, and El Salvador. One botella was equal to 0.63 to 0.67 L. One cajuela was equal to 16.6 L. The capacity of one cuartillo was highly variable. References Culture of Costa Rica Costa Rica
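The length conversions above are simple multiplications from the vara. A minimal sketch in Python, taking the cuarta and tercia as a quarter and a third of a vara respectively, as their Spanish names suggest (the function and dictionary names are invented for illustration):

```python
# Costa Rican pre-metric length units, expressed in varas.
# Base value from this article: 1 vara = 0.8393 m.
VARA_M = 0.8393

UNITS_IN_VARAS = {
    "vara": 1.0,
    "cuarta": 1.0 / 4.0,   # a quarter of a vara
    "tercia": 1.0 / 3.0,   # a third of a vara
    "mecate": 24.0,        # 24 varas
}

def to_metres(quantity, unit):
    """Convert a quantity in a pre-metric length unit to metres."""
    return quantity * UNITS_IN_VARAS[unit] * VARA_M

print(to_metres(1, "mecate"))  # one mecate, about 20 m
```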
Costa Rican units of measurement
Mathematics
331
25,191,227
https://en.wikipedia.org/wiki/SMA%2A
SMA* or Simplified Memory Bounded A* is a shortest path algorithm based on the A* algorithm. The main advantage of SMA* is that it uses a bounded memory, while the A* algorithm might need exponential memory. All other characteristics of SMA* are inherited from A*. Properties SMA* has the following properties:
- It works with a heuristic, just as A* does.
- It is complete if the allowed memory is high enough to store the shallowest solution.
- It is optimal if the allowed memory is high enough to store the shallowest optimal solution; otherwise, it will return the best solution that fits in the allowed memory.
- It avoids repeated states as long as the memory bound allows it.
- It will use all memory available.
- Enlarging the memory bound of the algorithm will only speed up the calculation.
- When enough memory is available to contain the entire search tree, the calculation has an optimal speed.
Implementation The implementation of SMA* is very similar to that of A*; the only difference is that nodes with the highest f-cost are pruned from the queue when there isn't any space left. Because those nodes are deleted, SMA* has to remember the f-cost of the best forgotten child of each parent node. When it seems that all explored paths are worse than such a forgotten path, the path is regenerated. 
Pseudo code:

function simple-memory-bounded-A*(problem): path
  queue: set of nodes, ordered by f-cost;
begin
  queue.insert(problem.root-node);
  while True do begin
    if queue.empty() then return failure; // there is no solution that fits in the given memory
    node := queue.begin(); // min-f-cost-node
    if problem.is-goal(node) then return success;
    s := next-successor(node)
    if !problem.is-goal(s) && depth(s) == max_depth then
      f(s) := inf;
      // there is no memory left to go past s, so the entire path is useless
    else
      f(s) := max(f(node), g(s) + h(s));
      // f-value of the successor is the maximum of
      //   f-value of the parent and
      //   heuristic of the successor + path length to the successor
    end if
    if no more successors then
      update f-cost of node and those of its ancestors if needed
    if node.successors ⊆ queue then queue.remove(node); // all children have already been added to the queue via a shorter way
    if memory is full then begin
      badNode := shallowest node with highest f-cost;
      for parent in badNode.parents do begin
        parent.successors.remove(badNode);
        if needed then queue.insert(parent);
      end for
    end if
    queue.insert(s);
  end while
end

External links Simplified Memory Bounded A Star Search Algorithm | SMA* Search | Solved Example by Mahesh Huddar References Graph algorithms Routing algorithms Search algorithms Game artificial intelligence Articles with example pseudocode
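The core pruning idea in the pseudocode can be sketched in runnable Python. This is a simplified memory-bounded best-first search, not the full SMA* (it omits backing f-values up into parents and regenerating forgotten paths); the graph, heuristic, and names are invented for the example:

```python
import heapq

def bounded_astar(graph, h, start, goal, max_nodes):
    """A*-style search whose frontier never holds more than max_nodes
    entries; when full, the entry with the highest f-cost is forgotten."""
    # Each entry: (f, g, node, path).  f uses the pathmax rule
    # f(s) = max(f(parent), g(s) + h(s)), as in the SMA* pseudocode.
    frontier = [(h[start], 0, start, [start])]
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nbr, cost in graph[node]:
            if nbr in path:                      # avoid cycles on this path
                continue
            ng = g + cost
            heapq.heappush(frontier, (max(f, ng + h[nbr]), ng, nbr, path + [nbr]))
        while len(frontier) > max_nodes:         # memory bound reached:
            worst = max(range(len(frontier)),    # forget the highest-f entry
                        key=lambda i: frontier[i][0])
            frontier.pop(worst)
            heapq.heapify(frontier)
    return None                                  # no solution fits in memory

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)],
         "C": [("D", 1)], "D": []}
h = {"A": 2, "B": 1, "C": 1, "D": 0}
print(bounded_astar(graph, h, "A", "D", max_nodes=3))
```

With a bound of three frontier entries the search still finds the shortest path A-B-C-D of cost 3 here; with too small a bound it can forget the only path to the goal, which is exactly the trade-off SMA* manages with its f-value bookkeeping.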
SMA*
Mathematics
642
27,080,011
https://en.wikipedia.org/wiki/Poul%20S.%20Jessen
Poul S. Jessen holds the position of Professor of Optical Sciences with a joint appointment in Physics at the University of Arizona. He is a founding member of the Center for Quantum Information and Control. He has done experimental research in the areas of optical lattices, quantum information, quantum chaos, and quantum optics. Education Jessen received a BSc in physics and chemistry from the University of Aarhus, Denmark, in 1987, and a PhD from Aarhus in 1993. While studying at Aarhus, Jessen travelled to the United States and worked with William Daniel Phillips at the National Institute of Standards and Technology. When his original doctoral thesis adviser at Aarhus retired, Phillips took over as his thesis adviser. Career In 1990 and 1992, he was a guest researcher at NIST; in 1993, he was a postdoctoral fellow at the University of Maryland; from 1993 to 1998, he was an assistant professor at the University of Arizona; from 1998 to 2002, he was an associate professor there; and since 2002 he has been a full professor at the University of Arizona. He has co-authored more than twenty papers. References External links Poul Jessen's profile at the University of Arizona Living people Year of birth missing (living people) 21st-century Danish physicists Quantum physicists Optical physicists Aarhus University alumni University of Arizona faculty
Poul S. Jessen
Physics
265
63,402,239
https://en.wikipedia.org/wiki/Somatochlora%20lingyinensis
Somatochlora lingyinensis is a species of dragonfly in the family Corduliidae. It was described in 1979 based on a specimen from Lingyin in Zhejiang, China. No other specimens are known. References Corduliidae Odonata of Asia Insects of China Endemic fauna of Zhejiang Insects described in 1979 Species known from a single specimen
Somatochlora lingyinensis
Biology
72
859,064
https://en.wikipedia.org/wiki/Norzoanthamine
Norzoanthamine is an alkaloid found in soft corals of the genus Zoanthus. Norzoanthamine has been shown to suppress the loss of bone weight and strength in mice. Some derivatives of norzoanthamine also suppress the development of certain leukemia cell lines and human platelet aggregation. A laboratory synthesis of this compound was developed in 2004. References Quinoline alkaloids Oxygen heterocycles Epoxides Heterocyclic compounds with 7 or more rings Triketones
Norzoanthamine
Chemistry
108
77,066,984
https://en.wikipedia.org/wiki/Spiroheptane
Spiroheptane refers to spirocyclic hydrocarbons with the formula C7H12. The parent symmetrical member of this group of compounds is spiro[3.3]heptane, which features a pair of cyclobutane rings sharing one carbon. The parent unsymmetrical member is spiro[2.4]heptane, which features cyclopropyl and cyclopentyl rings sharing one carbon. An early example of a spiro[3.3]heptane is the dicarboxylic acid , also called Fecht's acid in honor of H. Fecht, of the Strasbourg Institute of Chemistry, who first obtained this compound. His route involved alkylation of malonic esters with the tetrabromide of pentaerythritol, a method modeled after the work on spiropentane. References Dicarboxylic acids Cyclobutanes Spiro compounds Polycyclic nonaromatic hydrocarbons
Spiroheptane
Chemistry
211
78,389,788
https://en.wikipedia.org/wiki/Eranga%20Weeraratne
Eranga Udesh Weeraratne is a Sri Lankan politician, engineer, and business executive who was appointed as a National List Member of Parliament in 2024 under the National People's Power (NPP) government. Subsequently, he was appointed Deputy Minister of Digital Economy. Early life and education Eranga is the eldest of three siblings. His sister, Chamindi, works in the real estate industry, and his brother, Suranga, serves as the Head of Finance and Administration at Omobio. Weeraratne completed his primary education at Mahinda College and secondary education at Mahanama College. In 1990, he achieved 7 Distinctions and 1 Credit in the General Certificate of Education Ordinary Level (G.C.E. O/L) examination. Subsequently, in 1993, he obtained 2 A-grades and 2 B-grades in Physical Science at the General Certificate of Education Advanced Level (G.C.E. A/L). In 2000, he graduated from the University of Moratuwa with a Bachelor's degree in Computer Science and Engineering. Career After graduating with a degree in Computer Science and Engineering, Weeraratne joined Infotechs Limited as an e-Business engineer during the early growth of internet technologies. He spearheaded the development of OmniBIS, the first Sri Lankan portal offering comprehensive digital collaboration tools. These services included free email, an online calendar, instant messaging, file storage, and personal organization tools, predating similar platforms such as Gmail and Dropbox. A distinctive feature of OmniBIS was its capability to send and receive emails via SMS. Subsequently, he became a founding member of WaveNET, a software development company specializing in telecom-based products and services. Initially launched with just three members, WaveNET has since expanded to a team of over 120 professionals. Weeraratne served as the CTO at WaveNET from 2003 to 2010. In 2012, he assumed dual roles as CEO and CTO of Omobio Pvt. Ltd. 
Under his leadership, the Omobio team has been committed to creating innovative solutions with a positive global impact. The company has garnered numerous prestigious accolades, including National Best Quality ICT Awards for the Dialog Self-Care Application, Customer-Care Application, and TEAM – SMS Box platform solutions. Omobio has successfully served more than twenty telecom operators and service providers across twenty countries. Weeraratne's strategic vision has positioned Omobio as a global leader in technological innovation, particularly in artificial intelligence (AI), machine learning (ML), and Internet of Things (IoT) solutions. The company has established international offices and continues to pursue expansion into Europe, Africa, and the Americas. From 2010 to 2013, he concurrently held the position of CTO at Telfinity Systems (Pvt) Ltd. Beyond his role at Omobio, Weeraratne has diversified his professional portfolio by assuming leadership positions in various organisations: Chairman/Director at eimSky (Pvt) Ltd Director at Textware (Pvt) Ltd Chairman at Matrix Plantations Chairman/Director at Spiceyaya (Pvt) Ltd Chairman/Director at Spice Fortress (Pvt) Ltd Director at E-Lottery Solutions (Pvt) Ltd Political career In 2019, Eranga entered active politics through the National Intellectuals Organisation (NIO), an organization of which he is a founding member. NIO is affiliated with the NPP party. Weeraratne was appointed to the 17th Parliament of Sri Lanka as part of the NPP's National List in 2024. His selection reflects the party's emphasis on including experienced professionals and innovative leaders to guide policy and national development. His appointment was received positively by the IT industry in Sri Lanka. Personal life Eranga is married to Prabhashini Herath, and they have two children. 
Controversies In 2012, social media posts and a gossip article alleged that Eranga Weeraratne was involved in a legal controversy regarding the theft of a laptop belonging to the late Professor Gihan Wikramanayake, then Director of the University of Colombo’s Computer Studies Institute (UCSC). The claims also suggested that 74 CDs and DVDs, including content alleged to be pornographic, were recovered from Weeraratne's residence during a police investigation and that a case was filed in the Fort Magistrate’s Court. In December 2024, a detailed investigation by the fact-checking organization Fact Crescendo Sri Lanka concluded that these allegations were entirely fabricated. The findings revealed: No official records or mainstream media reports exist to substantiate the claims. The University of Colombo Computer Studies Institute confirmed that no such theft occurred, nor were any police complaints filed regarding the alleged incident. Dr. D.A.S. Athukorala, the current director of the institute, stated that neither Weeraratne nor the co-accused were staff members of the institute. The allegations appear to have originated from an old gossip article published in 2012, and no credible evidence supports them. Fact Crescendo further suggested that the false narrative might have been a smear campaign by rival businesses in the software industry. The Digital Deputy Minister’s office also denied the allegations, confirming that Weeraratne was not affiliated with the University of Colombo at the time. See also National People's Power Parliament of Sri Lanka References Members of the 17th Parliament of Sri Lanka National People's Power politicians Sri Lankan engineers Sri Lankan business executives Alumni of the University of Moratuwa People in information technology 1974 births Living people
Eranga Weeraratne
Technology
1,128
8,762,082
https://en.wikipedia.org/wiki/Mark%20I%20Fire%20Control%20Computer
The Mark 1, and later the Mark 1A, Fire Control Computer was a component of the Mark 37 Gun Fire Control System, deployed by the United States Navy during World War II and up to 1991, and possibly later. It was originally developed by Hannibal C. Ford of the Ford Instrument Company and William Newell. It was used on a variety of ships, ranging from destroyers (one per ship) to battleships (four per ship). The Mark 37 system used tachymetric target motion prediction to compute a fire control solution. It contained a target simulator which was updated by further target tracking until it matched the observed motion of the target. Weighing more than , the Mark 1 itself was installed in the plotting room, a watertight compartment that was located deep inside the ship's hull to provide as much protection against battle damage as possible. Essentially an electromechanical analog computer, the Mark 1 was electrically linked to the gun mounts and the Mark 37 gun director, the latter mounted as high on the superstructure as possible to afford maximum visual and radar range. The gun director was equipped with both optical and radar range finding, and was able to rotate on a small barbette-like structure. Using the range finders and telescopes for bearing and elevation, the director was able to produce a continuously varying set of outputs, referred to as line-of-sight (LOS) data, that were electrically relayed to the Mark 1 via synchro motors. The LOS data provided the target's present range, bearing, and, in the case of aerial targets, altitude. Additional inputs to the Mark 1A were continuously generated from the stable element, a gyroscopic device that reacted to the roll and pitch of the ship; the pitometer log, which measured the ship's speed through the water; and an anemometer, which provided wind speed and direction. The stable element would now be called a vertical gyro. In "Plot" (the plotting room), a team of sailors stood around the Mark 1 and continuously monitored its operation. 
They would also be responsible for calculating and entering the average muzzle velocity of the projectiles to be fired before action started. This calculation was based on the type of propellant to be used and its temperature, the projectile type and weight, and the number of rounds fired through the guns to date. Given these inputs, the Mark 1 automatically computed the lead angles to the future position of the target at the end of the projectile's time of flight, adding in corrections for gravity, relative wind, the Magnus effect of the spinning projectile, and parallax, the latter compensation necessary because the guns themselves were widely displaced along the length of the ship. Lead angles and corrections were added to the LOS data to generate the line-of-fire (LOF) data. The LOF data, bearing and elevation, as well as the projectile's fuze time, were sent to the mounts by synchro motors, whose motion actuated hydraulic servos with excellent dynamic accuracy to aim the guns. Once the system was "locked" on the target, it produced a continuous fire control solution. While these fire control systems greatly improved the long-range accuracy of ship-to-ship and ship-to-shore gunfire, especially on heavy cruisers and battleships, it was in the anti-aircraft warfare mode that the Mark 1 made the greatest contribution. However, the anti-aircraft value of analog computers such as the Mark 1 was greatly reduced with the introduction of jet aircraft, where the relative motion of the target became such that the computer's mechanism could not react quickly enough to produce accurate results. Furthermore, the target speed, originally limited to 300 knots by a mechanical stop, was twice doubled, to 600 and then 1,200 knots, by gear ratio changes. 
The design of the postwar Mark 1A may have been influenced by the Bell Labs Mark 8, which was developed as an all-electrical computer, incorporating technology from the M9 gun data computer, as a safeguard to ensure adequate supplies of fire control computers for the USN during World War II. Surviving Mark 1 computers were upgraded to the Mark 1A standard after World War II ended. Among the upgrades were removing the vector solver from the Mark 1 and redesigning the reverse coordinate conversion scheme that updated target parameters. The scheme kept the four component integrators, obscure devices not included in explanations of basic fire control mechanisms. They worked like a ball-type computer mouse, but had shaft inputs to rotate the ball and to determine the angle of its axis of rotation. The round target course indicator on the right side of the star shell computer, with the two panic buttons, is a holdover from World War II days, when early tracking data and the initial angle-output position of the vector solver caused target speed to decrease. The pushbuttons slewed the vector solver quickly. See also Ship gun fire-control system Admiralty Fire Control Table High Angle Control System Gun data computer Stabilisierter Leitstand References External links Fire Control Fundamentals Manual for the Mark 1 and Mark 1a Computer Maintenance Manual for the Mark 1 Computer Manual for the Mark 6 Stable Element Gun Fire Control System Mark 37 Operating Instructions at ibiblio.org Director section of Mark 1 Mod 1 computer operations at NavSource.org Naval Ordnance and Gunnery, Vol. 2, Chapter 25, AA Fire Control Systems Artillery operation Mechanical computers Military computers Fire-control computers of World War II Military equipment introduced in the 1930s
Mark I Fire Control Computer
Physics,Technology
1,096
21,484,770
https://en.wikipedia.org/wiki/SGR%20J1550%E2%88%925418
SGR J1550−5418 is a soft gamma repeater (SGR), the sixth to be discovered, located in the constellation Norma. Long known as an X-ray source, it was noticed to have become active on 23 October 2008, and then, after a relatively quiescent interval, became much more active on 22 January 2009. It has been observed by the Swift satellite and by the Fermi Gamma-ray Space Telescope, launched in 2008, as well as in X-ray and radio emission. It has been observed to emit intense bursts of gamma rays at a rate of up to several per minute. At its estimated distance of 30,000 light years (~10 kpc), the most intense flares equal the total energy emission of the Sun in ~20 years. The underlying object is believed to be a rotating neutron star of the type known as magnetars, which have magnetic fields up to 10^15 gauss, about 1,000 times that of more typical neutron star X-ray sources. See orders of magnitude (magnetic field) for examples of other magnetic field strengths. The rotation period, ~2.07 s, is the fastest yet observed for a magnetar. "Light echoes" from a gamma-ray source, a phenomenon long known for visible stars such as novae, were first observed from SGR J1550−5418. The location of SGR J1550−5418 (aka AXP 1E 1547.0-5408) is RA(J2000) = 15h50m54.11s, Dec(J2000) = −54°18'23.7". References Soft gamma repeaters Norma (constellation) Magnetars
SGR J1550−5418
Astronomy
357
3,396,069
https://en.wikipedia.org/wiki/Algebraic%20connectivity
The algebraic connectivity (also known as Fiedler value or Fiedler eigenvalue after Miroslav Fiedler) of a graph G is the second-smallest eigenvalue (counting multiple eigenvalues separately) of the Laplacian matrix of G. This eigenvalue is greater than 0 if and only if G is a connected graph. This is a corollary to the fact that the number of times 0 appears as an eigenvalue in the Laplacian is the number of connected components in the graph. The magnitude of this value reflects how well connected the overall graph is. It has been used in analyzing the robustness and synchronizability of networks. Properties The algebraic connectivity of undirected graphs with nonnegative weights is nonnegative, with the inequality being strict if and only if G is connected. However, the algebraic connectivity can be negative for general directed graphs, even if G is a connected graph. Furthermore, the value of the algebraic connectivity is bounded above by the traditional (vertex) connectivity of the graph, unless the graph is complete (the algebraic connectivity of a complete graph on n vertices is its order n). For an undirected connected graph with nonnegative edge weights, n vertices, and diameter D, the algebraic connectivity is also known to be bounded below by 1/(nD), and in fact (in a result due to Brendan McKay) by 4/(nD). For a 6-node example graph, these bounds can be evaluated directly from its order and diameter. Unlike the traditional form of graph connectivity, defined by local configurations whose removal would disconnect the graph, the algebraic connectivity is dependent on the global number of vertices, as well as the way in which vertices are connected. In random graphs, the algebraic connectivity decreases with the number of vertices, and increases with the average degree. The exact definition of the algebraic connectivity depends on the type of Laplacian used. 
Fan Chung has developed an extensive theory using a rescaled version of the Laplacian, eliminating the dependence on the number of vertices, so that the bounds are somewhat different. In models of synchronization on networks, such as the Kuramoto model, the Laplacian matrix arises naturally, so the algebraic connectivity gives an indication of how easily the network will synchronize. Other measures, such as the average distance (characteristic path length) can also be used, and in fact the algebraic connectivity is closely related to the (reciprocal of the) average distance. The algebraic connectivity also relates to other connectivity attributes, such as the isoperimetric number, which is bounded below by half the algebraic connectivity. Fiedler vector The original theory related to algebraic connectivity was produced by Miroslav Fiedler. In his honor the eigenvector associated with the algebraic connectivity has been named the Fiedler vector. The Fiedler vector can be used to partition a graph. Partitioning a graph using the Fiedler vector For the example graph in the introductory section, the Fiedler vector is . The negative values are associated with the poorly connected vertex 6, and the neighbouring articulation point, vertex 4; while the positive values are associated with the other vertices. The signs of the values in the Fiedler vector can therefore be used to partition this graph into two components: . Alternatively, the value of 0.069 (which is close to zero) can be placed in a class of its own, partitioning the graph into three components: or moved to the other partition , as pictured. The squared values of the components of the Fiedler vector, summing up to one since the vector is normalized, can be interpreted as probabilities of the corresponding data points to be assigned to the sign-based partition. See also Connectivity (graph theory) Graph property References Algebraic graph theory Graph connectivity Graph invariants
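The quantities above are easy to compute numerically. A minimal sketch with NumPy, using the unnormalised Laplacian (the dense eigensolver is fine for small graphs; sparse routines would be used in practice, and the helper name is invented for illustration):

```python
import numpy as np

def algebraic_connectivity(adj):
    """Return the second-smallest Laplacian eigenvalue (the algebraic
    connectivity) and its eigenvector (the Fiedler vector)."""
    A = np.asarray(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A       # unnormalised Laplacian L = D - A
    vals, vecs = np.linalg.eigh(L)       # eigenvalues in ascending order
    return vals[1], vecs[:, 1]

# Complete graph K4: the algebraic connectivity equals its order, 4.
K4 = np.ones((4, 4)) - np.eye(4)
lam, fiedler = algebraic_connectivity(K4)

# Disconnected graph (two separate edges): the algebraic connectivity is 0,
# since 0 appears as an eigenvalue once per connected component.
two_edges = np.zeros((4, 4))
two_edges[0, 1] = two_edges[1, 0] = 1
two_edges[2, 3] = two_edges[3, 2] = 1
lam0, _ = algebraic_connectivity(two_edges)
```

The signs of the returned Fiedler vector's entries give the two-way partition described above.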
Algebraic connectivity
Mathematics
773
11,127,611
https://en.wikipedia.org/wiki/Discostroma%20corticola
Discostroma corticola is a plant pathogen. References External links Index Fungorum USDA ARS Fungal Database Fungal plant pathogens and diseases Xylariales Fungi described in 1976 Fungus species
Discostroma corticola
Biology
41
6,023,946
https://en.wikipedia.org/wiki/Metamath
Metamath is a formal language and an associated computer program (a proof assistant) for archiving and verifying mathematical proofs. Several databases of proved theorems have been developed using Metamath covering standard results in logic, set theory, number theory, algebra, topology and analysis, among others. By 2023, Metamath had been used to prove 74 of the 100 theorems of the "Formalizing 100 Theorems" challenge. At least 19 proof verifiers use the Metamath format. The Metamath website provides a database of formalized theorems which can be browsed interactively. Metamath language The Metamath language is a metalanguage for formal systems. The Metamath language has no specific logic embedded in it. Instead, it can be regarded as a way to prove that inference rules (asserted as axioms or proven later) can be applied. The largest database of proved theorems follows conventional first-order logic and ZFC set theory. The Metamath language design (employed to state the definitions, axioms, inference rules and theorems) is focused on simplicity. Proofs are checked using an algorithm based on variable substitution. The algorithm also has optional provisos for what variables must remain distinct after a substitution is made. Language basics The set of symbols that can be used for constructing formulas is declared using $c (constant symbols) and $v (variable symbols) statements; for example:

$( Declare the constant symbols we will use $)
    $c 0 + = -> ( ) term wff |- $.
$( Declare the metavariables we will use $)
    $v t r s P Q $.

The grammar for formulas is specified using a combination of $f (floating (variable-type) hypotheses) and $a (axiomatic assertion) statements; for example:

$( Specify properties of the metavariables $)
    tt $f term t $.
    tr $f term r $.
    ts $f term s $.
    wp $f wff P $.
    wq $f wff Q $.
$( Define "wff" (part 1) $)
    weq $a wff t = r $.
$( Define "wff" (part 2) $)
    wim $a wff ( P -> Q ) $.
Axioms and rules of inference are specified with $a statements along with ${ and $} for block scoping and optional $e (essential hypotheses) statements; for example: $( State axiom a1 $) a1 $a |- ( t = r -> ( t = s -> r = s ) ) $. $( State axiom a2 $) a2 $a |- ( t + 0 ) = t $. ${ min $e |- P $. maj $e |- ( P -> Q ) $. $( Define the modus ponens inference rule $) mp $a |- Q $. $} Using one construct, $a statements, to capture syntactic rules, axiom schemas, and rules of inference is intended to provide a level of flexibility similar to higher order logical frameworks without a dependency on a complex type system. Proofs Theorems (and derived rules of inference) are written with $p statements; for example: $( Prove a theorem $) th1 $p |- t = t $= $( Here is its proof: $) tt tze tpl tt weq tt tt weq tt a2 tt tze tpl tt weq tt tze tpl tt weq tt tt weq wim tt a2 tt tze tpl tt tt a1 mp mp $. Note the inclusion of the proof in the $p statement. It abbreviates the following detailed proof: tt $f term t tze $a term 0 1,2 tpl $a term ( t + 0 ) 3,1 weq $a wff ( t + 0 ) = t 1,1 weq $a wff t = t 1 a2 $a |- ( t + 0 ) = t 1,2 tpl $a term ( t + 0 ) 7,1 weq $a wff ( t + 0 ) = t 1,2 tpl $a term ( t + 0 ) 9,1 weq $a wff ( t + 0 ) = t 1,1 weq $a wff t = t 10,11 wim $a wff ( ( t + 0 ) = t -> t = t ) 1 a2 $a |- ( t + 0 ) = t 1,2 tpl $a term ( t + 0 ) 14,1,1 a1 $a |- ( ( t + 0 ) = t -> ( ( t + 0 ) = t -> t = t ) ) 8,12,13,15 mp $a |- ( ( t + 0 ) = t -> t = t ) 4,5,6,16 mp $a |- t = t The "essential" form of the proof elides syntactic details, leaving a more conventional presentation: a2 $a |- ( t + 0 ) = t a2 $a |- ( t + 0 ) = t a1 $a |- ( ( t + 0 ) = t -> ( ( t + 0 ) = t -> t = t ) ) 2,3 mp $a |- ( ( t + 0 ) = t -> t = t ) 1,4 mp $a |- t = t Substitution All Metamath proof steps use a single substitution rule, which is just the simple replacement of a variable with an expression and not the proper substitution described in works on predicate calculus. 
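This single substitution rule, applied with a proof stack, is essentially the whole verification algorithm. Below is a hedged Python sketch that checks the th1 proof shown above against the toy database of this section; the exact statement layout (including the auxiliary tze and tpl declarations for 0 and +) is an assumption reconstructed for illustration, and real verifiers additionally handle $d provisos, block scoping and compressed proofs.

```python
# label -> (mandatory hypotheses, conclusion); a hypothesis is either
# ("f", typecode, variable) for a floating ($f) hypothesis or
# ("e", expression) for an essential ($e) hypothesis.
DB = {
    "tt":  ([], "term t"),
    "tr":  ([], "term r"),
    "ts":  ([], "term s"),
    "tze": ([], "term 0"),
    "tpl": ([("f", "term", "t"), ("f", "term", "r")], "term ( t + r )"),
    "weq": ([("f", "term", "t"), ("f", "term", "r")], "wff t = r"),
    "wim": ([("f", "wff", "P"), ("f", "wff", "Q")], "wff ( P -> Q )"),
    "a1":  ([("f", "term", "t"), ("f", "term", "r"), ("f", "term", "s")],
            "|- ( t = r -> ( t = s -> r = s ) )"),
    "a2":  ([("f", "term", "t")], "|- ( t + 0 ) = t"),
    "mp":  ([("f", "wff", "P"), ("f", "wff", "Q"),
             ("e", "|- P"), ("e", "|- ( P -> Q )")], "|- Q"),
}

def substitute(tokens, s):
    """Simple replacement of variables by expressions (no capture analysis)."""
    out = []
    for tok in tokens:
        out.extend(s.get(tok, [tok]))
    return out

def verify(proof):
    stack = []
    for label in proof.split():
        hyps, concl = DB[label]
        popped = stack[len(stack) - len(hyps):]
        del stack[len(stack) - len(hyps):]
        s = {}
        for hyp, expr in zip(hyps, popped):
            if hyp[0] == "f":          # floating hyp: bind variable to expr
                assert expr[0] == hyp[1], "typecode mismatch"
                s[hyp[2]] = expr[1:]
            else:                      # essential hyp: must match after subst
                assert expr == substitute(hyp[1].split(), s), "bad hypothesis"
        stack.append(substitute(concl.split(), s))
    assert len(stack) == 1, "proof must leave exactly one statement"
    return " ".join(stack[0])

th1 = ("tt tze tpl tt weq tt tt weq tt a2 tt tze tpl tt weq tt tze tpl "
       "tt weq tt tt weq wim tt a2 tt tze tpl tt tt a1 mp mp")
print(verify(th1))  # -> |- t = t
```

Each label either pushes a fresh statement (the $f hypotheses and hypothesis-free axioms) or consumes statements from the stack, so processing the label string of th1 leaves exactly |- t = t.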
Proper substitution, in Metamath databases that support it, is a derived construct instead of one built into the Metamath language itself. The substitution rule makes no assumption about the logic system in use and only requires that the substitutions of variables are correctly done. Here is a detailed example of how this algorithm works. Consider steps 1 and 2 of the theorem 2p2e4 in the Metamath Proof Explorer (set.mm). Let's explain how Metamath uses its substitution algorithm to check that step 2 is the logical consequence of step 1 when you use the theorem opreq2i. Step 2 states that ( 2 + 2 ) = ( 2 + ( 1 + 1 ) ). It is the conclusion of the theorem opreq2i. The theorem opreq2i states that if A = B, then ( C F A ) = ( C F B ). This theorem would never appear under this cryptic form in a textbook, but its literate formulation is banal: when two quantities are equal, one can replace one by the other in an operation. To check the proof, Metamath attempts to unify ( 2 + 2 ) = ( 2 + ( 1 + 1 ) ) with ( C F A ) = ( C F B ). There is only one way to do so: unifying C with 2, F with +, A with 2 and B with ( 1 + 1 ). So now Metamath uses the premise of opreq2i. This premise states that A = B. As a consequence of its previous computation, Metamath knows that A should be substituted by 2 and B by ( 1 + 1 ). The premise becomes 2 = ( 1 + 1 ), and thus step 1 is generated. In its turn step 1 is unified with df-2. df-2 is the definition of the number 2 and states that 2 = ( 1 + 1 ). Here the unification is simply a matter of constants and is straightforward (no problem of variables to substitute). So the verification is finished and these two steps of the proof of 2p2e4 are correct. When Metamath unifies ( 1 + 1 ) with B, it has to check that the syntactical rules are respected. In fact B has the type class, thus Metamath has to check that ( 1 + 1 ) is also typed class. Metamath proof checker The Metamath program is the original program created to manipulate databases written using the Metamath language. It has a text (command line) interface and is written in C. 
It can read a Metamath database into memory, verify the proofs of a database, modify the database (in particular by adding proofs), and write it back out to storage. It has a prove command that enables users to enter a proof, along with mechanisms to search for existing proofs. The Metamath program can convert statements to HTML or TeX notation; for example, it can render the modus ponens axiom from set.mm. Many other programs can process Metamath databases; in particular, there are at least 19 proof verifiers for databases that use the Metamath format. Metamath databases The Metamath website hosts several databases that store theorems derived from various axiomatic systems. Most databases (.mm files) have an associated interface, called an "Explorer", which allows one to navigate the statements and proofs interactively on the website, in a user-friendly way. Most databases use a Hilbert system of formal deduction, though this is not a requirement. Metamath Proof Explorer The Metamath Proof Explorer (recorded in set.mm) is the main database. It is based on classical first-order logic and ZFC set theory (with the addition of Tarski–Grothendieck set theory when needed, for example in category theory). The database has been maintained for over thirty years (the first proofs in set.mm are dated September 1992). The database contains developments, among other fields, of set theory (ordinals and cardinals, recursion, equivalents of the axiom of choice, the continuum hypothesis...), the construction of the real and complex number systems, order theory, graph theory, abstract algebra, linear algebra, general topology, real and complex analysis, Hilbert spaces, number theory, and elementary geometry. The Metamath Proof Explorer references many textbooks that can be used in conjunction with Metamath. Thus, people interested in studying mathematics can use Metamath in connection with these books and verify that the proved assertions match the literature. 
Intuitionistic Logic Explorer This database develops mathematics from a constructive point of view, starting with the axioms of intuitionistic logic and continuing with axiom systems of constructive set theory. New Foundations Explorer This database develops mathematics from Quine's New Foundations set theory. Higher-Order Logic Explorer This database starts with higher-order logic and derives equivalents to axioms of first-order logic and of ZFC set theory. Databases without explorers The Metamath website hosts a few other databases which are not associated with explorers but are nonetheless noteworthy. The database peano.mm, written by Robert Solovay, formalizes Peano arithmetic. The database nat.mm formalizes natural deduction. The database miu.mm formalizes the MU puzzle based on the formal system MIU presented in Gödel, Escher, Bach. Older explorers The Metamath website also hosts a few older databases which are no longer maintained, such as the "Hilbert Space Explorer", which presents theorems pertaining to Hilbert space theory that have since been merged into the Metamath Proof Explorer, and the "Quantum Logic Explorer", which develops quantum logic starting with the theory of orthomodular lattices. Natural deduction Because Metamath has a very generic concept of what a proof is (namely a tree of formulas connected by inference rules) and no specific logic is embedded in the software, Metamath can be used with logics as different as Hilbert-style logics, sequent-based logics, or even the lambda calculus. However, Metamath provides no direct support for natural deduction systems. As noted earlier, the database nat.mm formalizes natural deduction. The Metamath Proof Explorer (with its database set.mm) instead uses a set of conventions that allow the use of natural deduction approaches within a Hilbert-style logic. 
Other works connected to Metamath Proof checkers Using the design ideas implemented in Metamath, Raph Levien has implemented a very small proof checker, mmverify.py, at only 500 lines of Python code. Ghilbert is a similar though more elaborate language based on mmverify.py. Levien would like to implement a system where several people could collaborate, and his work emphasizes modularity and connections between small theories. Building on Levien's seminal work, many other implementations of the Metamath design principles have been written for a broad variety of languages. Juha Arpiainen has implemented his own proof checker in Common Lisp called Bourbaki, and Marnix Klooster has coded a proof checker in Haskell called Hmm. Although they all use the overall Metamath approach to formal system checker coding, they also implement new concepts of their own. Editors Mel O'Cat designed a system called Mmj2, which provides a graphical user interface for proof entry. The initial aim of Mel O'Cat was to allow the user to enter proofs by simply typing the formulas and letting Mmj2 find the appropriate inference rules to connect them. In Metamath, on the contrary, you may only enter theorem names; you may not enter the formulas directly. Mmj2 also allows a proof to be entered forward or backward (Metamath only allows backward proof entry). Moreover, Mmj2 has a real grammar parser, unlike Metamath. This technical difference brings more comfort to the user. In particular, Metamath sometimes hesitates between several formulas it analyzes (most of them being meaningless) and asks the user to choose; in Mmj2 this limitation no longer exists. There is also a project by William Hale to add a graphical user interface to Metamath, called Mmide. Paul Chapman, in his turn, is working on a new proof browser, which has highlighting that allows you to see the referenced theorem before and after the substitution was made. 
Milpgame is a proof assistant and checker (it shows a message only if something has gone wrong) with a graphical user interface for the Metamath language (set.mm), written by Filip Cernatescu. It is an open-source (MIT License) Java application (cross-platform: Windows, Linux, Mac OS). The user can enter the demonstration (proof) in two modes: forward and backward relative to the statement to prove. Milpgame checks whether a statement is well formed (it has a syntactic verifier). It can save unfinished proofs without the use of the dummylink theorem. The demonstration is shown as a tree, and the statements are shown using HTML definitions (defined in the typesetting chapter). Milpgame is distributed as a Java .jar (JRE version 6 update 24, written in the NetBeans IDE). See also Automated theorem proving Computer-assisted proof Proof assistant References External links Metamath: official website. What do mathematicians think of Metamath: opinions on Metamath. Free mathematics software Free theorem provers Large-scale mathematical formalization projects Proof assistants
Metamath
Mathematics
3,089
49,242,470
https://en.wikipedia.org/wiki/Sulfate%20permease
The sulfate permease (SulP) family (TC# 2.A.53) is a member of the large APC superfamily of secondary carriers. The SulP family is a large and ubiquitous family of proteins derived from archaea, bacteria, fungi, plants and animals. Many organisms including Bacillus subtilis, Synechocystis sp., Saccharomyces cerevisiae, Arabidopsis thaliana and Caenorhabditis elegans possess multiple SulP family paralogues. Many of these proteins are functionally characterized, and most are inorganic anion uptake transporters or anion:anion exchange transporters. Some transport their substrates with high affinity, while others do so with relatively low affinity. Others may catalyze SO₄²⁻:HCO₃⁻ exchange or, more generally, anion:anion antiport. For example, the mouse homologue, SLC26A6 (TC# 2.A.53.2.7), can transport sulfate, formate, oxalate, chloride and bicarbonate, exchanging any one of these anions for another. A cyanobacterial homologue can transport nitrate. Some members can function as channels. SLC26A3 (2.A.53.2.3) and SLC26A6 (2.A.53.2.7 and 2.A.53.2.8) can function as carriers or channels, depending on the transported anion. In these porters, mutating a glutamate that is also involved in transport in the ClC family (TC# 2.A.49) (E357A in SLC26A6) created a channel out of the carrier. It also changed the stoichiometry from 2Cl⁻/HCO₃⁻ to 1Cl⁻/HCO₃⁻. Structure All SulPs are homodimers in which the two subunits do not function independently. The dimeric structure probably represents the native state of SulP transporters. A low-resolution structure of a bacterial SulP transporter revealed a dimeric stoichiometry, stabilized via its transmembrane core and mobile intracellular domains. The cytoplasmic STAS domain projects away from the transmembrane domain and is not involved in dimerization. The structure suggests that large movements of the STAS domain underlie the conformational changes that occur during transport. 
The bacterial proteins vary in size from 434 residues to 573 residues with only a few exceptions. The eukaryotic proteins vary in size from 611 residues to 893 residues with a few exceptions. Thus, the eukaryotic proteins are usually larger than the prokaryotic homologues. These proteins exhibit 10–13 putative transmembrane α-helical spanners (TMSs) depending on the protein. Crystal structures Several crystal structures are available for members of the SulP family through the RCSB Protein Data Bank. Homologues One of the distant SulP homologues has been shown to be a bicarbonate:Na+ symporter (TC# 2.A.53.5.1). Bioinformatic work has identified additional homologues with fused domains. Some of these fused proteins have SulP homologues fused to carbonic anhydrase homologues (TC# 2.A.53.8.1). These are also presumed to be bicarbonate uptake permeases. Another has SulP fused to rhodanese, a sulfate:cyanide sulfotransferase (TC# 2.A.53.9.1). This SulP homologue is presumably a sulfate transporter. Homologues currently characterized in the SulP family can be found in the Transporter Classification Database. SLC26A3 in mice One member of the SulP family, SLC26A3, has been knocked out in mice. Apical membrane chloride/base exchange activity was sharply reduced, and the luminal content was more acidic in SLC26A3-null mouse colon. The epithelial cells in the colon displayed unique adaptive regulation of ion transporters; NHE3 expression was enhanced in the proximal and distal colon, whereas colonic H+/K+-ATPase and the epithelial sodium channel showed massive up-regulation in the distal colon. Plasma aldosterone was increased in SLC26A3-null mice. Thus, SLC26A3 may be the major apical chloride/base exchanger and is essential for the absorption of chloride in the colon. In addition, SLC26A3 regulates colonic crypt proliferation. Deletion of SLC26A3 results in chloride-rich diarrhea and is associated with compensatory adaptive up-regulation of ion-absorbing transporters. 
MOT1 MOT1 from Arabidopsis thaliana (TC# 2.A.53.11.1, 456 aas; 8–10 TMSs), a distant homologue of the SulP and BenE (2.A.46) families, is expressed in both roots and shoots, and is localized to plasma membranes and intracellular vesicles. MOT1 is required for efficient uptake and translocation of molybdate as well as for normal growth under conditions of limited molybdate supply. Kinetic studies in yeast revealed that the Km value of MOT1 for molybdate is approximately 20 nM. Mo uptake by MOT1 in yeast is not affected by the presence of sulfate. MOT1 did not complement a sulfate transporter-deficient yeast mutant strain. MOT1 is thus probably specific for molybdate. The high affinity of MOT1 allows plants to obtain scarce Mo from soil when its concentration is about 10 nM. SLC26 SLC26 proteins function as anion exchangers and Cl⁻ channels. Ousingsawat et al. (2012) examined the functional interaction between the CF transmembrane conductance regulator (CFTR) and SLC26A9 in polarized airway epithelial cells and in non-polarized HEK293 cells expressing CFTR and SLC26A9 (2.A.56.2.10). They found that SLC26A9 provides a constitutively active basal Cl⁻ conductance in polarized grown CFTR-expressing CFBE airway epithelial cells, but not in cells expressing F508del-CFTR. In polarized CFTR-expressing cells, SLC26A9 also contributes to both Ca2+- and CFTR-activated Cl⁻ secretion. In contrast, in non-polarized HEK293 cells co-expressing CFTR/SLC26A9, the baseline Cl⁻ conductance provided by SLC26A9 was inhibited during activation of CFTR. Thus, SLC26A9 and CFTR behave differentially in polarized and non-polarized cells, explaining earlier conflicting data. Transport Reaction The generalized transport reactions catalyzed by SulP family proteins are: (1) SO₄²⁻ (out) + nH+ (out) → SO₄²⁻ (in) + nH+ (in). (2) SO₄²⁻ (out) + nHCO₃⁻ (in) ⇌ SO₄²⁻ (in) + nHCO₃⁻ (out). (3) I⁻ and other anions (out) ⇌ I⁻ and other anions (in). (4) HCO₃⁻ (out) + nH+ (out) → HCO₃⁻ (in) + nH+ (in). 
See also Solute carrier family Transporter Classification Database Membrane transport protein References Protein families Transmembrane transporters Integral membrane proteins
Sulfate permease
Biology
1,619
50,850,022
https://en.wikipedia.org/wiki/George%20Blasse
George Blasse (28 August 1934 – 30 December 2020) was a Dutch chemist. He was a professor of solid-state chemistry at Utrecht University for most of his career. Blasse was born on 28 August 1934 in Amsterdam. He studied chemistry at the University of Amsterdam. In 1964 he obtained his PhD under E.W. Gorter at Leiden University with a dissertation titled: Crystal chemistry and some magnetic properties of mixed metal oxides with spinel structure. From 1960 to 1970 Blasse was employed by the Philips Natuurkundig Laboratorium. In 1970 he was appointed as professor of solid-state chemistry at Utrecht University. He retired in 1996. During his career he performed research into luminescent materials. He discovered the phosphor that made white light LEDs possible. Blasse was elected a member of the Royal Netherlands Academy of Arts and Sciences in 1982. In 1992 he was awarded the Academy's Gilles Holst Medal. Blasse was elected a member of the Academia Europaea in 1993. In 1996 he was made a Knight in the Order of the Netherlands Lion. After his retirement he moved to Munich, Germany. He died there on 30 December 2020, aged 86. After his death the ECS Journal of Solid State Science and Technology had a focus issue in his honor. References 1934 births 2020 deaths 20th-century Dutch chemists Academic staff of Utrecht University Knights of the Order of the Netherlands Lion Leiden University alumni Members of Academia Europaea Members of the Royal Netherlands Academy of Arts and Sciences Scientists from Amsterdam Solid state chemists University of Amsterdam alumni
George Blasse
Chemistry
327
41,591,738
https://en.wikipedia.org/wiki/Serpentine%20geometry%20plasma%20actuator
The serpentine plasma actuator represents a broad class of plasma actuator. The actuators vary from the standard type in that their electrode geometry has been modified to be periodic across its span. History This class of plasma actuators was developed at the Applied Physics Research Group (APRG) at the University of Florida in 2008 by Subrata Roy for the purpose of controlling laminar and turbulent boundary layer flows. Since then, APRG has continued to characterize and develop uses for this class of plasma actuators. Several patents resulted from the early work on serpentine geometry plasma actuators. In 2013, these actuators started to get broader attention in the scientific press, and several articles were written about these actuators, including articles in AIP's EurekAlert, Inside Science and various blogs. Current research and operating mechanisms Serpentine plasma actuators (like other Dielectric Barrier Discharge actuators, i.e. plasma actuators) are able to induce an atmospheric plasma and introduce an electrohydrodynamic body force to a fluid. This body force can be used to implement flow control, and there are a range of potential applications, including drag reduction for aircraft and flow stabilization in combustion chambers. The important distinction between serpentine plasma actuators and more traditional geometries is that the geometry of the electrodes has been modified in order to be periodic across its span. As the electrode has been made periodic, the resulting plasma and body force are also spanwise periodic. With this spanwise periodicity, three-dimensional flow effects can be induced in the flow, which cannot be done with more traditional plasma actuator geometries. 
It is thought that the introduction of three-dimensional flow effects allows the plasma actuators to apply much greater levels of control authority, as it lets them project onto a greater range of physical mechanisms (such as boundary layer streaks or secondary instabilities of the Tollmien–Schlichting wave). Recent work indicates that these plasma actuators may have a significant impact on controlling laminar and transitional flows on a flat plate. In addition, the serpentine actuator has been experimentally demonstrated to increase lift, decrease drag and generate controlling rolling moments when applied to aircraft wing geometries. Given the greater level of control authority that these plasma actuators may potentially possess, research is currently being performed at several labs in the United States and the United Kingdom looking to apply these actuators to real-world applications. Recent numerical work predicted significant turbulent drag reduction by collocating serpentine plasma actuators in a pattern to modify energetic modes of transitional flow. See also Plasma actuator Wingless Electromagnetic Air Vehicle Applied Physics Research Group University of Florida University of Florida College of Engineering References Plasma technology and applications Actuators
Serpentine geometry plasma actuator
Physics
586
22,063,462
https://en.wikipedia.org/wiki/List%20of%20solid-state%20drive%20manufacturers
This is the list of manufacturers of solid-state drives (SSDs) for computers and other electronic devices that require data storage. In the list those manufacturers that also produce hard disk drives or flash memory are identified. Additionally, the type of memory used in their solid-state drives is noted. This list does not include the manufacturers of specific components of SSDs, such as flash memory controllers. See also History of hard disk drives List of computer hardware manufacturers List of defunct hard disk manufacturers References Lists of computer hardware Lists of consumer electronics manufacturers Lists of manufacturers Manufacturers
List of solid-state drive manufacturers
Technology
113
43,410,961
https://en.wikipedia.org/wiki/Inocybe%20tahquamenonensis
Inocybe tahquamenonensis is an inedible species of agaric fungus in the family Inocybaceae. Found in the United States, it was formally described in 1954 by mycologist Daniel E. Stuntz. The fruit bodies have bell-shaped to convex to flattened caps measuring in diameter. Its color is dark purplish brown to reddish- or blackish-brown, with reddish-purple flesh. The gills are attached to the stipe and are somewhat distantly spaced. They are initially reddish brown before turning to chocolate brown, sometimes developing whitish edges. The spore print is brown; spores measure 6–8.5 by 5–6 μm. Fruit bodies grow singly, scattered, or in groups under deciduous trees. See also List of Inocybe species References External links tahquamenonensis Fungi described in 1954 Fungi of the United States Inedible fungi Fungi without expected TNC conservation status Fungus species
Inocybe tahquamenonensis
Biology
199
2,310,753
https://en.wikipedia.org/wiki/Radial%20basis%20function
In mathematics a radial basis function (RBF) is a real-valued function φ whose value depends only on the distance between the input and some fixed point: either the origin, so that φ(x) = φ(||x||), or some other fixed point c, called a center, so that φ(x) = φ(||x − c||). Any function φ that satisfies the property φ(x) = φ(||x||) is a radial function. The distance is usually Euclidean distance, although other metrics are sometimes used. They are often used as a collection which forms a basis for some function space of interest, hence the name. Sums of radial basis functions are typically used to approximate given functions. This approximation process can also be interpreted as a simple kind of neural network; this was the context in which they were originally applied to machine learning, in work by David Broomhead and David Lowe in 1988, which stemmed from Michael J. D. Powell's seminal research from 1977. RBFs are also used as a kernel in support vector classification. The technique has proven effective and flexible enough that radial basis functions are now applied in a variety of engineering applications. Definition A radial function is a function φ : [0, ∞) → R. When paired with a norm ||·|| on a vector space V, a function of the form x ↦ φ(||x − c||) is said to be a radial kernel centered at c ∈ V. A radial function and the associated radial kernels are said to be radial basis functions if, for any finite set of pairwise distinct nodes x_1, …, x_n in V, the kernels φ(||x − x_1||), …, φ(||x − x_n||) are linearly independent, so that the interpolation matrix with entries φ(||x_j − x_k||) is non-singular. Examples Commonly used types of radial basis functions include (writing r = ||x − x_i|| and using ε to indicate a shape parameter that can be used to scale the input of the radial kernel): the Gaussian φ(r) = exp(−(εr)²); the multiquadric φ(r) = √(1 + (εr)²); the inverse quadratic φ(r) = 1/(1 + (εr)²); the inverse multiquadric φ(r) = 1/√(1 + (εr)²); and the polyharmonic splines φ(r) = r^k (k odd) and φ(r) = r^k ln(r) (k even), of which the thin plate spline φ(r) = r² ln(r) is a special case. Approximation Radial basis functions are typically used to build up function approximations of the form y(x) = Σ_{i=1}^{N} w_i φ(||x − x_i||), where the approximating function y(x) is represented as a sum of N radial basis functions, each associated with a different center x_i and weighted by an appropriate coefficient w_i. The weights w_i can be estimated using the matrix methods of linear least squares, because the approximating function is linear in the weights w_i. 
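A minimal sketch of this construction in plain Python (hedged: 1-D inputs, a Gaussian kernel with an illustrative shape parameter, and the square interpolation system solved by Gaussian elimination rather than an overdetermined least-squares fit):

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for the square system A w = b."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda i: abs(M[i][c]))
        M[c], M[p] = M[p], M[c]
        for i in range(c + 1, n):
            f = M[i][c] / M[c][c]
            for j in range(c, n + 1):
                M[i][j] -= f * M[c][j]
    w = [0.0] * n
    for i in reversed(range(n)):
        w[i] = (M[i][n] - sum(M[i][j] * w[j] for j in range(i + 1, n))) / M[i][i]
    return w

def rbf_interpolant(xs, ys, eps=5.0):
    """Return y(x) = sum_i w_i phi(|x - x_i|) with Gaussian phi(r) = exp(-(eps r)^2)."""
    phi = lambda r: math.exp(-(eps * r) ** 2)
    A = [[phi(abs(xi - xj)) for xj in xs] for xi in xs]
    w = solve(A, ys)
    return lambda x: sum(wi * phi(abs(x - xi)) for wi, xi in zip(w, xs))

xs = [i / 8 for i in range(9)]                    # nine equally spaced centers
ys = [math.sin(2 * math.pi * x) for x in xs]
f = rbf_interpolant(xs, ys)
# By construction the interpolant reproduces the data at every node
# (up to the conditioning of the kernel matrix).
```

Swapping in a multiquadric or polyharmonic kernel only changes phi; an overdetermined least-squares variant would assemble a rectangular matrix with more data points than centers instead of this square system.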
Approximation schemes of this kind have been particularly used in time series prediction and control of nonlinear systems exhibiting sufficiently simple chaotic behaviour, and in 3D reconstruction in computer graphics (for example, hierarchical RBF and Pose Space Deformation). RBF Network The sum can also be interpreted as a rather simple single-layer type of artificial neural network called a radial basis function network, with the radial basis functions taking on the role of the activation functions of the network. It can be shown that any continuous function on a compact interval can in principle be interpolated with arbitrary accuracy by a sum of this form, if a sufficiently large number of radial basis functions is used. The approximant y(x) is differentiable with respect to the weights w_i. The weights could thus be learned using any of the standard iterative methods for neural networks. Using radial basis functions in this manner yields a reasonable interpolation approach provided that the fitting set has been chosen such that it covers the entire range systematically (equidistant data points are ideal). However, without a polynomial term that is orthogonal to the radial basis functions, estimates outside the fitting set tend to perform poorly. RBFs for PDEs Radial basis functions are used to approximate functions and so can be used to discretize and numerically solve partial differential equations (PDEs). This was first done in 1990 by E. J. Kansa, who developed the first RBF-based numerical method. It is called the Kansa method and was used to solve the elliptic Poisson equation and the linear advection–diffusion equation. The function values at points x in the domain are approximated by the linear combination of RBFs: u(x) ≈ Σ_{i=1}^{N} λ_i φ(||x − x_i||). The derivatives are approximated by applying the differential operator to the basis functions, for instance ∂u/∂x_j ≈ Σ_{i=1}^{N} λ_i ∂φ(||x − x_i||)/∂x_j, where N is the number of points in the discretized domain, d the dimension of the domain, and λ_i the scalar coefficients that are unchanged by the differential operator. 
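The point that the coefficients are unchanged by the differential operator can be illustrated with a toy 1-D expansion (hedged: the Gaussian kernel, centers and coefficients below are made up for illustration and are not Kansa's original setup): differentiating the expansion just differentiates each basis function.

```python
import math

eps = 4.0
centers = [0.2, 0.5, 0.8]
lam = [1.0, -2.0, 0.5]          # expansion coefficients, fixed once and for all

phi = lambda r: math.exp(-(eps * r) ** 2)
u = lambda x: sum(l * phi(abs(x - c)) for l, c in zip(lam, centers))

# Applying d/dx term by term: the coefficients lam are untouched; only the
# basis functions are differentiated (d/dx of exp(-(eps (x - c))^2)).
du = lambda x: sum(l * (-2.0 * eps ** 2) * (x - c) * phi(abs(x - c))
                   for l, c in zip(lam, centers))

h = 1e-6
fd = (u(0.37 + h) - u(0.37 - h)) / (2 * h)   # central-difference check of du
```

In a collocation method the same idea is applied at every grid point, turning the PDE into a linear system for the coefficients.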
Different numerical methods based on Radial Basis Functions were developed thereafter. Some methods are the RBF-FD method, the RBF-QR method and the RBF-PUM method. See also Matérn covariance function Radial basis function interpolation Kansa method References Further reading Sirayanone, S., 1988, Comparative studies of kriging, multiquadric-biharmonic, and other methods for solving mineral resource problems, PhD. Dissertation, Dept. of Earth Sciences, Iowa State University, Ames, Iowa. Artificial neural networks Interpolation Numerical analysis
Radial basis function
Mathematics
862
46,342,258
https://en.wikipedia.org/wiki/Darrieus%E2%80%93Landau%20instability
The Darrieus–Landau instability or density fingering refers to an instability of chemical fronts propagating into a denser medium, named after Georges Jean Marie Darrieus and Lev Landau. It is one of the key intrinsic flame instabilities that occur in premixed flames, caused by the density variation due to the thermal expansion of the gas produced by the combustion process. In simple terms, the stability analysis inquires whether a steadily propagating plane sheet with a discontinuous jump in density is stable or not. Yakov Zeldovich notes that Lev Landau generously suggested this problem to him to investigate; Zeldovich, however, made an error in the calculations, which led Landau himself to complete the work. The instability analysis behind the Darrieus–Landau instability considers a planar, premixed flame front subjected to very small perturbations. It is useful to think of this arrangement as one in which the unperturbed flame is stationary, with the reactants (fuel and oxidizer) directed towards the flame and perpendicular to it with a velocity u1, and the burnt gases leaving the flame also in a perpendicular way but with velocity u2. The analysis assumes that the flow is an incompressible flow, and that the perturbations are governed by the linearized Euler equations and, thus, are inviscid. With these considerations, the main result of this analysis is that, if the density of the burnt gases is less than that of the reactants, which is the case in practice due to the thermal expansion of the gas produced by the combustion process, the flame front is unstable to perturbations of any wavelength. Another result is that the rate of growth of the perturbations is inversely proportional to their wavelength; thus small flame wrinkles (but larger than the characteristic flame thickness) grow faster than larger ones. In practice, however, diffusive and buoyancy effects that are not taken into account by the analysis of Darrieus and Landau may have a stabilizing effect. 
Dispersion relation If the disturbances to the steady planar flame sheet are of the form exp(ik·x⊥ + σt), where x⊥ is the transverse coordinate system that lies on the undisturbed stationary flame sheet, t is the time, k is the wavevector of the disturbance and σ is the temporal growth rate of the disturbance, then the dispersion relation is given by σ = (k S_L r/(r + 1)) [√(1 + (r² − 1)/r) − 1], where k = |k|, S_L is the laminar burning velocity (or, the flow velocity far upstream of the flame in a frame that is fixed to the flame), and r = ρ_u/ρ_b is the ratio of unburnt to burnt gas density. In combustion r > 1 always, and therefore the growth rate σ > 0 for all wavenumbers. This implies that a plane sheet of flame with a burning velocity S_L is unstable for all wavenumbers. In fact, Amable Liñán and Forman A. Williams remark in their book that, in view of laboratory observations of stable, planar, laminar flames, publication of their theoretical predictions required courage on the part of Darrieus and Landau. If the buoyancy forces are taken into account (in other words, if accounts of the Rayleigh–Taylor instability are considered) for planar flames that are perpendicular to the gravity vector, then some level of stability can be anticipated for flames propagating vertically downwards (or flames held stationary by a vertically upward flow), since in these cases the denser unburnt gas lies beneath the lighter burnt gas mixture. Of course, for flames that are propagating vertically upwards or those that are held stationary by a vertically downward flow, both the Darrieus–Landau mechanism and the Rayleigh–Taylor mechanism contribute to the destabilizing effect. The dispersion relation when buoyancy forces are included becomes σ = (k S_L r/(r + 1)) {√[1 + ((r² − 1)/r)(1 − g/(r k S_L²))] − 1}, where g > 0 corresponds to gravitational acceleration for flames propagating downwards and g < 0 corresponds to gravitational acceleration for flames propagating upwards. The above dispersion relation implies that gravity introduces stability for downward propagating flames when λ > 2π r l_b, where λ = 2π/k is the wavelength and l_b = S_L²/g is a characteristic buoyancy length scale. 
For small values of $k\ell_b$, the growth rate of upward propagating flames approaches the Rayleigh–Taylor value $\sigma \approx \sqrt{gk(1-r)/(1+r)}$. Limitations Darrieus and Landau's analysis treats the flame as a plane sheet and investigates its stability with the neglect of diffusion effects, whereas in reality the flame has a definite thickness, say the laminar flame thickness $\delta_L = D_T/S_L$, where $D_T$ is the thermal diffusivity, within which diffusion effects cannot be neglected. Accounting for the flame structure, as first envisioned by George H. Markstein, is found to stabilize the flames at small wavelengths comparable to the flame thickness, except when the fuel diffusion coefficient and the thermal diffusivity differ from each other significantly, leading to the so-called (Turing) diffusive-thermal instability. The Darrieus–Landau instability therefore manifests in an intermediate range of wavelengths, above the diffusive cutoff but below the buoyancy cutoff, for downward propagating flames, and at all wavelengths above the diffusive cutoff for upward propagating flames. Dispersion relation under Darcy's law The classical dispersion relation was based on the assumption that the hydrodynamics is governed by the Euler equations. In strongly confined systems such as a Hele-Shaw cell, or in porous media, the hydrodynamics is instead governed by Darcy's law. A dispersion relation based on Darcy's law was derived by J. Daou and P. Rajamanickam. The parameters entering it are the density ratio $r$; the ratio of friction factors, which involves the viscosity and the permeability (in Hele-Shaw cells the friction factor is proportional to the viscosity divided by the square of the cell width, so this ratio is simply the viscosity ratio); and the speed of a uniform imposed flow, which, depending on its direction, either opposes or aids flame propagation. As before, one sign of the gravitational term corresponds to downward flame propagation and the opposite sign to upward flame propagation. The three terms in the resulting dispersion relation correspond, respectively, to the Darrieus–Landau instability (density fingering), the Saffman–Taylor instability (viscous fingering) and the Rayleigh–Taylor instability (gravity fingering), in the context of Darcy's law. The Saffman–Taylor instability is specific to confined flames and does not exist in unconfined flames.
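As a numerical illustration, the growth rate can be evaluated directly. This is a minimal sketch assuming the classical Euler-based result $\sigma = \frac{kS_L}{1+r}\left(\sqrt{(1+r-r^2)/r}-1\right)$, with $r$ the burnt-to-unburnt density ratio and $S_L$ the laminar burning velocity, together with its buoyancy-corrected form; the numerical values are illustrative only:

```python
import math

def dl_growth_rate(k, s_l, r, g=0.0):
    """Classical Darrieus-Landau growth rate for a perturbation of
    wavenumber k, with r = burnt/unburnt density ratio (r < 1 in
    combustion) and s_l the laminar burning velocity.  g > 0 models a
    downward-propagating flame (buoyancy stabilizes long wavelengths),
    g < 0 an upward-propagating one.  Returns the real part of sigma."""
    arg = (1 + r - r**2) / r - (1 - r**2) * g / (k * s_l**2)
    if arg < 0:  # oscillatory regime: the real part is negative (stable)
        return -k * s_l / (1 + r)
    return k * s_l / (1 + r) * (math.sqrt(arg) - 1)

r, s_l = 0.2, 0.4  # representative density ratio and burning velocity (m/s)

# Without gravity, the growth rate is positive for every wavenumber and
# proportional to k: smaller wrinkles grow faster.
rates = [dl_growth_rate(k, s_l, r) for k in (10.0, 100.0, 1000.0)]
assert all(rate > 0 for rate in rates)
assert abs(rates[2] / rates[0] - 100.0) < 1e-9  # linear in k

# With buoyancy, a downward-propagating flame (g > 0) is stabilized at
# small wavenumbers but remains Darrieus-Landau unstable at large ones.
assert dl_growth_rate(1.0, s_l, r, g=9.81) < 0
assert dl_growth_rate(1000.0, s_l, r, g=9.81) > 0
```

The stabilization threshold in this sketch, $k < rg/S_L^2$, reproduces the qualitative statement above: gravity protects only the long-wavelength end of the spectrum of a downward-propagating flame.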
See also Michelson–Sivashinsky equation Clavin–Garcia equation References Fluid dynamics Combustion Fluid dynamic instabilities Lev Landau
Darrieus–Landau instability
https://en.wikipedia.org/wiki/HD%2017156
HD 17156, named Nushagak by the IAU, is a yellow subgiant star approximately 255 light-years away in the constellation of Cassiopeia. The apparent magnitude is 8.17, which means it is not visible to the naked eye but can be seen with good binoculars. A search for a binary companion star using adaptive optics at the MMT Observatory was negative. The star is more massive and larger than the Sun, while its absolute magnitude of 3.70 and spectral type of G0 show that it is both hotter and more luminous. Based on asteroseismic density constraints and stellar isochrones, its age was found to be 3.37 billion years, making it roughly three quarters as old as the Sun. Spectral observations show that the star is metal-rich. An extrasolar planet, HD 17156 b, was discovered with the radial velocity method in 2007, and subsequently was observed to transit the star. At the time, it was the known transiting planet with the longest orbital period. Name The star was given the name Nushagak by the IAU, chosen by United States representatives for the NameExoWorlds contest, with the comment that "Nushagak is a regional river near Dillingham, Alaska, which is famous for its wild salmon that sustain local Indigenous communities." HD 17156 b was given the designation Mulchatna, as the Mulchatna is a tributary of the Nushagak river. Planetary system HD 17156 is the first star in Cassiopeia around which an orbiting planet was discovered (in 2007) using the radial velocity method. Later observations showed that this planet also transits the star. In February 2008, a second planet was proposed, in a 5:1 mean motion resonance with the inner planet HD 17156 b, though in 2017 this planet candidate was retracted.
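The quoted absolute magnitude is consistent with the distance and apparent magnitude given above, via the standard distance modulus; a quick check (using the article's figures of 255 light-years and m = 8.17):

```python
import math

LY_PER_PC = 3.2616          # light-years per parsec

distance_pc = 255 / LY_PER_PC   # ~78 pc
m_apparent = 8.17

# Distance modulus:  M = m - 5 * log10(d / 10 pc)
m_absolute = m_apparent - 5 * math.log10(distance_pc / 10)

print(round(m_absolute, 2))  # -> 3.7, matching the quoted value of 3.70
```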
See also List of stars with extrasolar planets References External links Extrasolar Planet Interactions by Rory Barnes & Richard Greenberg, Lunar and Planetary Lab, University of Arizona Cassiopeia (constellation) Planetary transit variables Planetary systems with one confirmed planet G-type subgiants 017156 013192 Durchmusterung objects Nushagak
HD 17156
https://en.wikipedia.org/wiki/Digital%20Storm
Digital Storm is a privately owned boutique computer manufacturer in the United States that primarily specializes in high-performance gaming desktop and laptop computers. Headquartered in Gilroy, California, the company also sells upgrade components and gaming peripherals, such as headsets, gaming mice, custom keyboards and high-resolution computer monitors. History Digital Storm was founded in 2002. Originally an internet retailer of computer components, the company began building custom gaming PCs after repeated requests by customers for pre-assembled systems. The first custom-built PC system the company marketed was the Digital Storm Twister. In 2012, the company began producing proprietary case designs, starting with its Aventum and Bolt models. Products Focusing heavily on the gaming market, Digital Storm's desktops and laptops emphasize high-performance custom PC configurations, though the company also produces workstation models. It specializes in customizing each machine with features such as overclocking, dual video card implementations (such as SLI), RAID arrays, liquid-cooling systems and noise-reduction modifications. Digital Storm also sells upgrade PC components such as computer memory, video cards, CPUs, motherboards, hard drives, cooling systems and computer monitors, as well as accessories aimed at gamers. Services Custom case designs In 2013, the company began offering a service called LaserMark, which allows custom images to be etched onto computer cases. Case mods Aftermarket sound-dampening foam can be added to the case interior on customer request. Some cases are designed and manufactured in-house, exclusive to Digital Storm, and never sold empty to the public. Overclocking The company offers custom overclocking of CPUs and GPUs through its “Twister Boost” technology on many of its gaming computers.
Stress-testing Before shipping an order, a Digital Storm technician performs stress testing and quality control to screen for assembly errors, faulty components and other quality issues. The PC ships with a certificate that all tests were passed and a folio of paperwork detailing the build specification. Liquid-cooling On most desktop models, Digital Storm offers its Hydro Lux and Cryo-TEC Sub-Zero liquid-cooling systems, with tubing and fittings in a wide variety of colours, materials and finishes, such as nickel, painted, gold-plated, copper or RGB fittings and PETG (glycol-modified polyethylene terephthalate) or acrylic tubing. Custom liquid-cooling components not normally offered can be requested (e.g. EKWB, Raijintek, Gigabyte WaterForce water blocks). Custom control boards Control boards designed and manufactured in-house, for fan speed, power distribution, temperature and RGB lighting control, can be optioned. Shipping Systems ship in a wooden crate, with expanding foam placed in the PC case to prevent movement of internal hardware. Accolades Digital Storm's systems are often reviewed by technology writers and gaming industry publications. Its more notable systems have received critical acclaim and awards. In 2012, the company was recognized as a Design and Engineering Award Honoree for its Cryo-TEC cooling system. The Bolt, Digital Storm's most successful gaming PC model, received Maximum PC's "Kick-Ass Award" in 2013, and also received special attention for its compact design and performance measurements. Ubergizmo called it the "thinnest gaming PC in the world." The Aventum, another of the company's more popular models, won the "2012 Best of What's New" award from the editors of Popular Science Magazine, who called it a "melt-down proof computer."
See also List of computer system manufacturers References External links DS Unlocked Tech support hub Gaming Community Forum DS Labs Press reviews Computer hardware companies Computer systems companies Computer companies of the United States Companies based in Morgan Hill, California Computer enclosure companies
Digital Storm
https://en.wikipedia.org/wiki/Estimated%20time%20of%20arrival
The estimated time of arrival (ETA) is the time when a ship, vehicle, aircraft, cargo, person, or emergency service is expected to arrive at a certain place. Overview One of the more common uses of the phrase is in public transportation, where the movements of trains, buses, airplanes and the like can be used to generate estimated times of arrival based either on a static timetable or on measurements of traffic intensity. In this respect, the phrase or its abbreviation is often paired with its complement, estimated time of departure (ETD), to indicate the expected start time of a particular journey. This information is often conveyed to a passenger information system as part of the core functionality of intelligent transportation systems. For example, a certain flight may have a calculated ETA based on the average speed with which it has covered the distance traveled so far: the remaining distance is divided by that speed to roughly estimate the arrival time. This method does not take into account any unexpected events (such as new wind directions) which may occur on the way to the flight's destination. ETA is also used metaphorically in situations where nothing actually moves physically, as in describing the time estimated for a certain task to complete (e.g. work undertaken by an individual; a computation undertaken by a computer program; or a process undertaken by an organization). The associated term is "estimated time of accomplishment", which may be a backronym. Applications Accurate and timely estimations of times of arrival are important in several application areas: In air traffic control arrival sequencing and scheduling, where scheduling aircraft arrivals according to the first-come-first-served order of ETA at the runway minimizes delays. In airport gate assignment methods, to optimize gate utilization. In elevator control, to minimize the average waiting time or journey time of passengers (destination dispatch).
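The naive estimate described above (remaining distance divided by the average speed so far) can be sketched as follows; the function name and the flight figures are illustrative, not drawn from any real system:

```python
from datetime import datetime, timedelta

def estimate_eta(departure, covered_km, total_km, now):
    """Naive ETA: assume the average speed maintained so far also holds
    for the remaining distance (ignoring wind changes, traffic and other
    unexpected events, as noted in the text)."""
    elapsed_s = (now - departure).total_seconds()
    speed = covered_km / elapsed_s            # average km per second so far
    remaining_km = total_km - covered_km
    return now + timedelta(seconds=remaining_km / speed)

# A flight departs at 12:00 and has covered 600 of 1500 km by 13:00,
# i.e. an average of 600 km/h, so the remaining 900 km take 1.5 h more.
eta = estimate_eta(datetime(2024, 1, 1, 12, 0), 600, 1500,
                   now=datetime(2024, 1, 1, 13, 0))
print(eta)  # 2024-01-01 14:30:00
```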
References Time Airline tickets Passenger rail transport
Estimated time of arrival
https://en.wikipedia.org/wiki/Dental%20torque%20wrench
A dental torque wrench or restorative torque wrench is a torque wrench used to precisely apply a specific torque to a fastener bolt for fixation of an abutment, dentures or prosthetics on a dental implant. Manual mechanical torque wrench Toggle torque wrenches (friction-style) and beam wrenches (spring-style) are the most common types of manual mechanical torque-limiting devices in dentistry. Beam-type wrenches generally hold their calibration more consistently than toggle types. Beam types with a dial indicator are the most precise for setting the tare torque (zero-point reset). Because dental torque wrenches undergo steam sterilization processes such as autoclaving, and prolonged use stresses the material, fatigue can occur. Surgical motor The surgical motor is an electronically controlled torque-limiting device that also controls the speed. It is used with a twist drill to make space in the bone for the implant, or with a screwdriver bit to fasten the screw (torque control can be provided by a torque-limiting attachment). In high-precision fields such as aerospace, motor or pneumatic torque wrenches are set to a lower torque value, after which the final torque is applied with a manual mechanical torque wrench. The wrenches are calibrated before every use; if a wrench breaks or loses calibration, every fastener tightened with it is redone. Calibration Various studies point to deviations of 10% or more from the desired torque; regular recalibration with a torque tester restores the required torque values. Re-torquing Because the settling effect (the flattening of the material's micro-surface under pressure) reduces the torque by around 10% within a relatively short time, re-torquing the fastener after 10 minutes counteracts this effect as the parts become more fully seated. Wet and dry torque Wet torques (bolts lubricated with saliva) have a higher mean torque than dry torques (unlubricated). See also Torque limiter References Wrenches Torque Dental equipment
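The 10% deviation threshold mentioned under Calibration can be expressed as a simple acceptance check; the function and the bench readings below are hypothetical, for illustration only:

```python
def needs_recalibration(target_ncm, measured_ncm, tolerance=0.10):
    """Flag a wrench whose delivered torque deviates from the set target
    by more than the tolerance (10% here, matching the deviations
    reported in the calibration studies cited above)."""
    return abs(measured_ncm - target_ncm) / target_ncm > tolerance

# Hypothetical torque-tester readings for a wrench set to 35 Ncm:
print(needs_recalibration(35, 36.2))  # False: ~3.4% deviation, acceptable
print(needs_recalibration(35, 39.5))  # True: ~12.9% deviation, recalibrate
```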
Dental torque wrench
https://en.wikipedia.org/wiki/Cathole
A cathole or cat hole or sometimes pighole is a pit for human feces. Catholes are frequently used for the purpose of disposing of bowel movements or waste water (such as the water from cleaning the kitchen dishes) by hikers and others engaging in outdoor recreation. They can also be used to dispose of menstruum from a menstrual cup. According to the Leave No Trace Center for Outdoor Ethics, catholes should be dug at least from water sources, walking trails or campsites. Additionally, the same cathole should not be used twice. Catholes should be between deep and disguised after use to prevent access by animals, some of which are coprophagous. The digging of catholes is forbidden in some regions of high elevation where the climate can hinder the decomposition of waste. See also Pit toilet Trowel Hudo (scouting) References External links Sanitation - instructions from Olympic National Park. Toilets Hiking Defecation
Cathole
https://en.wikipedia.org/wiki/International%20Congress%20on%20Fracture
International Congress on Fracture (ICF), or the International Conference on Fracture, is an international body for promoting worldwide cooperation among scientists and engineers concerned with the mechanics and mechanisms of fracture, fatigue and strength of solids. History The idea for an International Congress on Fracture dates to 1961 and a meeting at MIT, when an “Interim International Conference Committee” was established under the chairmanship of Takeo Yokobori. In November 1965, ICF1 was organised in Sendai, Japan. In April 1969, at ICF2 in Brighton, England, ICF was formally founded with statutes and by-laws, a Council and an Executive. Thereafter ICF organised a major conference every four years. ICF also organised “interquadrennial” conferences, the first of which was in Beijing, China in November 1983. ICF established national organisations in its member nations, one of the first being the “Australasian Fracture Group” in 1971. ICF became more than a conference organiser, developing into a society for the broad field of structural integrity, fracture, fatigue, creep, corrosion and reliability, covering materials from biological to geophysical: metals, alloys, ceramics, composites, electronic and natural materials. The scope evolved through ICF1–ICF12 from nano to macro scales, and from basic science, engineering and mathematics to practical technology and systems modelling for safe design. At an ICF Interquadrennial Conference in Anaheim, California, in May 2011, ICF was renamed “ICF: The World Academy of Structural Integrity”. Structure ICF-WASI is governed by a Council which comprises members from each member nation, with one nation one vote. The Council meets once every four years at each ICF-WASI Quadrennial. The Council delegates the management of ICF-WASI to a President and an Executive Committee. Since ICF 6, the Treasurer has acted as the de facto CEO, working closely with the President and Secretary-General.
The Council elects Fellows every four years, who are now termed “Academicians” (50). ICF-WASI in its widest sense also consists of the “Associates”, comprising the whole community of up to 10,000 delegates who have attended Quadrennial and Interquadrennial conferences. Past meetings ICF-1 Sendai (Japan) 1965 ICF-2 Brighton (UK) 1969 ICF-3 Munich (Germany) 1973 ICF-4 Waterloo (Canada) 1977 ICF-5 Cannes (France) 1981 ICF-6 New Delhi (India) 1984 ICF-7 Houston (USA) 1989 ICF-8 Kiev (Ukraine) 1993 ICF-9 Sydney (Australia) 1997 ICF-10 Honolulu (USA) 2001 ICF-11 Turin (Italy) 2005 ICF-12 Ottawa (Canada) 2009 ICF-13 Beijing (China) 2013 ICF-14 Rhodes (Greece) 2017 ICF-15 Atlanta (USA) 2023 The next conference (ICF-16) is scheduled for 2027. References External links www.icfweb.org International Congress on Fracture ICF15 International conferences Scientific organizations established in 1965 Materials science organizations
International Congress on Fracture
https://en.wikipedia.org/wiki/Michael%20D.%20Smith%20%28economist%29
Michael D. Smith is an American academic who is the J. Erik Jonsson Professor of Information Technology and Marketing at the Heinz College of Carnegie Mellon University, with a joint appointment at the Tepper School of Business. Education Smith earned a Bachelor of Science in electrical engineering and a Master of Science in telecommunications science from the University of Maryland, College Park. He then received his PhD in management science and information technology from the MIT Sloan School of Management in 2000. Career Smith's research uses economic and statistical techniques to analyze firm and consumer behavior in online markets, specifically markets for digital information and digital media products. His research in this area has been published in leading management science, economics, and marketing journals and in leading professional journals, including The Harvard Business Review and The Sloan Management Review. His research has also been covered by press outlets, including The Economist, The Wall Street Journal, The New York Times, Wired, and Business Week. Smith is co-author of the book Streaming, Sharing, Stealing: Big Data and the Future of Entertainment (MIT Press, 2016). Smith has received several awards for his teaching and research, including the National Science Foundation's prestigious CAREER Research Award, the 2017 Carol & Bruce Mallen Award for lifetime published scholarly contributions to motion picture industry economic studies, the 2009 and 2004 Best Teacher Awards in Carnegie Mellon's Masters of Information Systems Management program, and the 2018 Dick Wittink Award for the best paper published in the journal Quantitative Marketing and Economics. He was also recently selected as one of the top 100 “emerging engineering leaders in the United States” by the National Academy of Engineering.
Smith has served on the editorial boards of a variety of top journals, including as a senior editor at Information Systems Research and as an associate editor at Management Science and Management Information Systems Quarterly. External links Home Page Profile at the Heinz College Profile at the Tepper School of Business SSRN Page with full list of working papers Carnegie Mellon University faculty Living people MIT Sloan School of Management alumni Year of birth missing (living people) University of Maryland, College Park alumni Information systems researchers
Michael D. Smith (economist)
https://en.wikipedia.org/wiki/Palestinian%20tunnel%20warfare%20in%20the%20Gaza%20Strip
A vast network of underground tunnels used for smuggling and warfare exists in the Gaza Strip. This infrastructure runs throughout the Gaza Strip and towards Egypt and Israel, and has been developed by Hamas and other Palestinian military organizations to facilitate the storing and shielding of weapons; the gathering and moving of fighters, including for training and communication purposes; the launching of offensive attacks against Israel; and the transportation of Israeli hostages. On several occasions, Palestinian militants have also used this tunnel network, which is colloquially referred to as the Gaza metro,‌ to infiltrate Israel and Egypt while masking their presence and activities within the Gaza Strip itself. According to Iranian military officer Hassan Hassanzadeh, who commands the Islamic Revolutionary Guard Corps from Tehran, the Gaza Strip's tunnels run for more than throughout the territory. History During the Macedonian siege of Gaza in 332 BC, both the Macedonian army and the Persian army (and Persia's Arab mercenaries) engaged in tunnel warfare. The digging of such tunnels was made possible by the area's loose soil, as is the case today. It enabled Alexander III to later invade and conquer the Second Egyptian Satrapy. Size and dimensions The total size and dimensions of the Palestinian tunnel network in the Gaza Strip is unknown, with all parties involved keeping the details classified. In 2016, Ismail Haniyeh, the former Prime Minister of the Palestinian National Authority and later Chairman of the Hamas Government, indicated that the tunnel network was double the size of the Củ Chi tunnels, which were developed by the Việt Cộng during the Vietnam War. Citing a private briefing in February 2015, Daniel Rubinstein wrote that Israel discovered of tunnels during the 2014 Gaza War, one-third of which intruded upon Israeli territory; Ynet's Alex Fishman reported the same figure in 2017. 
Haaretz reporter Yaniv Kubovich reported in June 2021 that Hamas had constructed "hundreds of kilometers of tunnels the length and breadth of the Gaza Strip" after some of them were damaged during Operation Guardian of the Walls. The tunnel system runs beneath many Gazan towns and cities, such as Khan Yunis, Jabalia and the Shati refugee camp. Typically, tunnel access points are hidden inside buildings, such as private homes or mosques, or camouflaged by brush, which impedes their detection via aerial imaging or drones. According to Eyal Weizman, "most tunnels have several access points and routes, starting in several homes or in chicken coops, joining together into a main route, and then branching off again into several separate passages leading into buildings on the other side." During the 2014 Gaza War the IDF encountered "complex tunnels, with a number of entry and exit shafts", and "[t]he main tunnel was often split, and sometimes there were parallel routes." The tunnels are usually to beneath the surface. On average, each tunnel is approximately high by wide, and equipped with lights, electricity, and sometimes tracks for transporting materials. The tunnels are often booby-trapped with improvised explosive devices. An IDF engineering officer tasked with locating tunnels told Haaretz that three tunnels discovered in 2013 opened the Israelis' eyes to the proportions of the network. The engineering officer described "wide tunnels, with internal communication systems that had been dug deep beneath the surface and the sides were reinforced with layers of concrete" in which "[y]ou could walk upright in them without any difficulty." An Israeli army spokesman said that the tunnel system is "like the Underground, the Metro, or the subway." In November 2022, the UN Relief and Works Agency for Palestine Refugees (UNRWA) reported that it found a tunnel underneath an elementary school operated by the agency.
"The Agency protested strongly to the relevant authorities in Gaza to express outrage and condemnation of the presence of such a structure underneath one of its installations", which it complained was "a serious violation of the Agency's neutrality and a breach of international law" that "exposes children and Agency staff to significant security and safety risks." The UNRWA said in a statement that the agency had "cordoned off the area and swiftly took the necessary measures to render the school safe, including permanently sealing the cavity." On 24 October 2023, Hamas released the 85-year-old Yocheved Lifshitz, who had been taken hostage in Hamas's attack on Israel on 7 October 2023. Lifshitz described walking for two to three hours through damp tunnels until she and other hostages reached a large hall. Lifshitz told reporters that Hamas has a "huge network" of tunnels that resembled a "spiderweb." According to Lifshitz, Hamas had prepared clean rooms with mattresses on the ground and the hostages received regular visits from doctors in their underground positions. Largest tunnels The largest known tunnel was discovered by the IDF on 17 December 2023, during the Israel–Hamas war. The tunnel has several branches and junctions, along with plumbing, electricity and communication lines. The largest of the branches discovered had a length of approximately four kilometers and goes down to a depth of 50 meters underground in some areas. The tunnel was wide enough for vehicles to travel inside. IDF also captured footage of the tunnel's construction which was released to the internet and showed Hamas using tunnel-boring machines. The tunnel was discovered a quarter of a mile from a border crossing, and was described by Israel as designed for "moving massive assets." In one video shown to journalists, Yahya Sinwar's brother Mohammad Sinwar is seen driving a car through what Israel described as the tunnel. 
Origins and construction The tunnel network used for warfare purposes has its origins in the smuggling tunnels connecting the Gaza Strip to Egypt. Tunnels have connected the Egyptian and Gazan sides of Rafah since the early 1980s, when the Philadelphi Route artificially divided the city. These tunnels grew in size, sophistication, and importance as a result of the Egyptian and Israeli economic blockade in 2007. The implementation of the tunnel network was reportedly coordinated under the direction of Mohammed Deif, leader of Al-Qassam Brigades; and before that, Ahmed Jabari, formerly the head of operations for the Brigades before being killed by the IDF. The tunnels into Israel were constructed using the expertise of the Rafah families who have specialized in digging tunnels into Egypt for commerce and smuggling. According to Eado Hecht, an Israeli defence analyst specialising in underground warfare, "[T]hese underground complexes are fairly similar in concept to the Viet Cong tunnels dug beneath the jungles of South Vietnam, though the quality of finishing is better, with concrete walls and roofs, electricity and other required amenities for lengthy sojourn." The Israeli military has provided estimates in 2014 that Hamas spent around $30 to $90 million, and poured 600,000 tons of concrete, in order to build three dozen tunnels. Some tunnels were estimated to have cost $3 million to construct. The Mako network published a description of the working conditions on the tunnels, citing an unnamed Israeli informant who said he worked on them, including the following details: Workers spent 8–12 hours a day on construction under precarious conditions and received a monthly wage of $150–$300. Hamas used electric or pneumatic jackhammers for digging tunnels. Tunnels were dug 18–25 meters (60–82 feet) underground at the rate of 4–5 meters a day. Tunnels were usually dug through sandy soil requiring their roof to be supported by a more durable level of clay. 
Tunnels were also reinforced by concrete panels manufactured in workshops adjacent to each tunnel. As of 2014, according to Yiftah S. Shapir and Gal Perel, the cost of digging a tunnel was around $100,000 and takes about three months to build. According to reporting from Al-Monitor, individuals digging the tunnels spend long periods underground and use a device with a pedal-powered chain, similar to a bicycle, to dig through the dirt while lying on his back and pedaling with his feet. Construction and use of the tunnels is associated with mortal danger due to accidental detonation of explosives and tunnel collapses. Hamas reported that 22 members of its armed wing died in tunnel accidents in 2017; another militant was killed on 22 April 2018. Iranian involvement After the 2007 imposition of a blockade on the Gaza Strip by Israel and Egypt, the Iranian Quds Force under the longtime direction of General Qasem Soleimani has been active in supporting the further construction of tunnels under Gaza and the smuggling of weapons through these tunnels to the armed wings of Hamas and the Palestinian Islamic Jihad. In 2021 senior Hamas representative to Lebanon, Ahmad Abd al-Hadi said: The idea of [digging] tunnels... Today there are 360 kilometers of tunnels in Gaza. There are more than 360 kilometers of tunnels underground. I won't go into details on this. Two people came up with the idea of digging these tunnels: The first is the martyred commander Imad Mughniyeh, and the second is Hajj Qasem Soleimani who went to Gaza more than once and contributed to the defense plan from the moment it was first drafted. I am not divulging any secret, by the way. The enemies know all this but what the enemies do not know is way more than what they do know. 
Iranian Brigadier-General Abdolfattah Ahvazian, adviser to the Commander of the Quds Force, said in November 2023 regarding Soleimani's role in the construction and proliferation of the Gaza tunnel network: After the martyrdom of Hajj Qasem [Soleimani], the guys from Hamas showed us a movie. I watched the movie, and according to the people of Hamas there, Hajj Qasem had gone into Gaza. He said to them: 'Why are you sitting idly by?' They answered: 'Hajj, there is no way.' So he gave the order to take a Jihadi action, and dig hundreds of tunnels, crossing the [Gaza] borders. Within three years, the Palestinians have dug hundreds of tunnels, approximately 800 km-long, with pickaxes and hoes. These are not the kind of tunnels that only mice can use. These tunnels allow the passages of cars, mules with ammunition, and motorcycles. 700 kilometers with nothing but pickaxes and hoes. Retired Islamic Revolutionary Guard Corps General Ezzatollah Zarghami admitted in November 2023 of having visited and inspected the Gaza tunnels himself along with senior Hamas members, during his active service with the Quds Force: Fajr-3, which is a 240 mm rocket, was one of our products. Later, we made its warhead smaller and it had a range of 70 km. My first mission was to take this rocket... I say this with the utmost pride and with no fear of anyone. The Leader has already said that we were helping [Hamas]. We support the oppressed everywhere – Shiite Hezbollah as well as Sunni Hamas. These are what [Khamenei] has declared in the past. I traveled to the region as the production manager of those rockets, and I supplied them both to Hezbollah and the Palestinians. For some time, I was inside the very same tunnels that they are fighting from. Six or seven years ago, I posted about this and got the nickname 'yellow canary.' In the tunnels, I provided training about the usage and specification of the rockets. These training courses were highly successful. 
I saw that they had cages of singing canaries in the tunnels. I praised their commander about their acumen to have music during military work. The commander replied that the birds are not meant for singing, they are meant to be [oxygen] sensors in case the airflow is disrupted. If the airflow becomes weaker, the birds stop singing and drop dead. When the bird dies, we realize that there is a problem with airflow. In December 2023 Mansour Haghighatpour, also a retired Quds Force General, stated that the creation of the tunnels under Gaza was an effort not only by the Palestinians but by the whole "Axis of Resistance": The other thing I would like to point out is that the resistance axis, which planned with the Palestinians to build more than 400 kilometers of tunnels under an area of land that did not exceed 40 square kilometers, took various possible “scenarios” into consideration. These scenarios include [Israel] flooding the tunnels with water, pumping toxic gas into them, or blowing up parts of them. Therefore, the Palestinian side in the tunnels knows very well how to deal with all possible challenges. In January 2024 the Shi'ite cleric Sheikh Jaffer Ladak asserted that Soleimani had played a major role in influencing the strategy of the Palestinian factions, turning it away from the suicide bombing attacks widely employed at the time of the Second Intifada and towards an underground warfare strategy: Zarqawi was the one who began the idea of suicide bombings and then, he used this influence upon the Palestinians who then felt it was needful to be able to do suicide bombings in the occupied territories. Suicide bombings, of course, not only has a great problem with it, it is not with the flavor of Islamic resistance. It doesn't yield the goals, and also drew the ire of the world community on the Palestinian resistance. Enter people like martyr Qasem Soleimani. 
And, with his influence, you would actually see that the structure of the Palestinian resistance was overhauled. The tunnels that were being dug, and its relationship with the rest of the Islamic world, particularly those in Lebanon, particularly those in Iran, flourished, to such an extent that now, the so-called strongest army in West Asia still cannot defeat those people who have been starved for more than three months. During the November 2012 Israeli operation in the Gaza Strip, the Commander-in-Chief of the Islamic Revolutionary Guard Corps, Major-General Mohammad Ali Jafari, said that due to the geographical isolation of the Gaza Strip, Iran cannot directly provide weapons to Hamas but still provides the technology and parts through the tunnels, which the al-Qassam Brigades then use to manufacture a Palestinian homemade version of the Iranian Fajr-5 missile; this missile has managed to hit Israeli targets in Tel Aviv.

Strategic objectives and uses

According to Eado Hecht, an Israeli defence analyst specialising in underground warfare, "Three different kinds of tunnels existed beneath Gaza, smuggling tunnels between Gaza and Egypt; defensive tunnels inside Gaza, used for command centres and weapons storage; and—connected to the defensive tunnels—offensive tunnels used for cross-border attacks on Israel", including the capture of Israeli soldiers. The Jerusalem Center for Public Affairs, an Israeli security think tank, describes tunnel warfare as a shifting of the balance of power: "Tunnel warfare provided armies facing a technologically superior adversary with an effective means for countering its air superiority." According to the center, tunnels conceal missile launchers, facilitate attacks on strategic targets like Ben-Gurion Airport, and allow cross-border access to Israeli territory.
An editorial in The Washington Post described the tunnels as "using tons of concrete desperately needed for civilian housing" and also as endangering civilians because they were constructed under civilian homes in the "heavily populated Shijaiyah district" and underneath the al-Wafa Hospital. Working on the tunnel system provides an outlet for Hamas militants to be productively engaged in relative peacetime. In May 2024, Daphné Richemond-Barak, the author of "Underground Warfare," wrote in Foreign Policy magazine: "Never in the history of tunnel warfare has a defender been able to spend months in such confined spaces. The digging itself, the innovative ways Hamas has made use of the tunnels and the group's survival underground for this long have been unprecedented."

Defensive uses

An Al-Monitor report described tunnels within Gaza and away from the border that serve two purposes: storing and shielding weapons including rockets and launchers, and providing security and mobility to Hamas militants. The report indicated that the latter function occurs in a set of "security tunnels": "Every single leader of Hamas, from its lowest ranking bureaucrats to its most senior leaders, is intimately familiar with the route to the security tunnel assigned to him and his family." Twenty-three militants in the Qassam Brigades, the military wing of Hamas, survived Israeli shelling on 17 July 2014 and remained alive but trapped in a tunnel until the early August ceasefire. In October 2013, Ihab al-Ghussein, spokesman of the Interior Ministry of the Palestinian National Authority, described the tunnels as an exercise of Gaza's "right to protect itself." In October 2014, Hamas leader Khalid Mishal denied that the tunnels were ever to be used to attack civilians: "Have any of the tunnels been used to kill any civilian or any of the residents of such towns? No. Never! . . . [Hamas] used them either to strike beyond the back lines of the Israeli army or to raid some military sites . . .
This proves that Hamas is only defending itself." The tunnels are used to conceal and protect weapons and militants and facilitate communication, making detection from the air difficult. In 2014, Hamas leader Khalid Meshal said in an interview with Vanity Fair that the tunnel system is a defensive structure, designed to place obstacles against Israel's powerful military arsenal and engage in counter-strikes behind the lines of the IDF. He said that the tunnels are used for infiltration of Israel, but that offensive operations had never caused the death of civilians in Israel, and he denied allegations of planned mass attacks on Israeli civilians. In 1989, Hamas logistics officer and weapons smuggler Mahmoud al-Mabhouh escaped IDF forces through a smuggling tunnel into Egypt. During Operation Pillar of Defense in 2012, Palestinian militants frequently made use of tunnels and bunkers to take cover from Israeli air strikes.

Offensive uses

Palestinian military personnel in Gaza explained to news website al-Monitor that the purpose of a cross-border tunnel was to conduct operations behind enemy lines in the event of an Israeli operation against Gaza. Hamas leader Yahya Sinwar, commenting on the strategic importance of the tunnels, stated: "Today, we are the ones who invade them; they do not invade us." The tunnels have been described by former Hamas Prime Minister Ismail Haniyeh as representative of "a new strategy in confronting the occupation and in the conflict with the enemy from underground and from above the ground."
A Palestinian militia document obtained by al-Monitor and also published in The Washington Post described the objectives of the under-border tunnels: The tunnel war is one of the most important and most dangerous military tactics in the face of the Israeli army because it features a qualitative and strategic dimension, because of its human and moral effects, and because of its serious threat and unprecedented challenge to the Israeli military machine, which is heavily armed and follows security doctrines involving protection measures and preemption. ... [The tactic is] to surprise the enemy and strike it a deadly blow that doesn't allow a chance for survival or escape or allow him a chance to confront and defend itself. Israeli spokespersons have maintained that the aim of the tunnels is to harm Israeli civilians. According to Prime Minister Benjamin Netanyahu, the "sole purpose" of the cross-border tunnels from Gaza to Israel is "the destruction of our citizens and killing of our children." The Israeli government has called the tunnels "terror tunnels," stating that they have a potential to target civilians and soldiers in Israel. Prime Minister Benjamin Netanyahu has said that the aim was to abduct and kill civilians. An IDF spokesman said the goal is "to abduct or kill civilians but will make do with a soldier, too." The Israeli newspaper Ma'ariv reported that, according to unnamed Israeli security sources, the tunnels were to be utilized in a mass casualty terror attack planned to take place on the Jewish high holy day of Rosh Hashanah, 24 September 2014. The plan was described to reporter Ariel Kahane by the sources, and reportedly revealed to the Israeli security Cabinet by Prime Minister Benjamin Netanyahu. The alleged plot entailed a planned assault in which two hundred heavily armed Hamas fighters would have emerged at night from more than a dozen tunnels to infiltrate Israeli territory, killing and/or abducting Israeli citizens.
In September 2001, a Gazan tunnel was used to carry out an attack for the first time, in the context of the Second Palestinian Intifada, when "Palestinians detonated a 200-kilogram bomb inside a tunnel underneath the IDF border outpost of Termit on the Philadelphi corridor", resulting in the near-complete destruction of the outpost located in Rafah. In June 2004, Hamas used tunnel bombs to attack an IDF outpost in Gaza, killing one soldier and injuring five. In December 2004, shortly after the death of Yasser Arafat and purportedly in retaliation for it, Hamas and Fatah tunneled under a border-crossing checkpoint at Rafah and detonated a bomb in the IDF outpost bombing attack, killing five Israeli soldiers and wounding six. In June 2006, Hamas used a tunnel that exited near Kerem Shalom to conduct a cross-border raid that resulted in the death of two IDF soldiers and the kidnapping of a third, Gilad Shalit. In November 2012, one Israeli soldier conducting maintenance work on the border fence was injured when Hamas's military wing, Izz al-Din Qassam Brigades, detonated a booby-trapped tunnel, and a 13-year-old Palestinian boy was killed by Israeli machine-gun fire following the explosion. The tunnels were used in warfare on numerous occasions during the 2014 conflict. On at least four occasions during the conflict, Palestinian militants crossing the border through the tunnels engaged in combat with Israeli soldiers. Israeli officials reported four "incidents in which members of Palestinian armed groups emerged from tunnel exits located between 1.1 and 4.7 km from civilian homes." The Israeli government refers to cross-border tunnels as "attack tunnels" or "terror tunnels." According to Israel, the tunnels enabled the launch of rockets by remote control, and were intended to facilitate hostage-taking and mass-casualty attacks.
On 17 July 2014, Hamas militants carrying RPGs and assault rifles crossed the Israeli border through a tunnel about a mile away from the farming village of Sufa but were stopped by the Israel Defense Forces. The Israeli military reported that thirteen armed men had exited the tunnel, and shared video footage of them being hit by the explosion of an airstrike. Israeli authorities claimed the purpose had been to attack civilians. On 21 July 2014, two squads of armed Palestinian militants crossed the Israeli border through a tunnel near Kibbutz Nir Am. The first squad of ten was killed by an Israeli air strike. A second squad killed four Israeli soldiers using an anti-tank weapon. The Jerusalem Post reported that the attackers sought to infiltrate Kibbutz Nir Am, but a senior intelligence source told the Times of Israel that "the Hamas gunmen were not in motion or en route to a kibbutz but rather had camouflaged themselves in the field, laying an ambush for an army patrol." On 28 July 2014, Hamas and Islamic Jihad militants attacked an Israeli military outpost near Nahal Oz using a tunnel, killing five Israeli soldiers. One attacker was also killed. On 1 August 2014, Hamas militants emerging from a tunnel attacked an Israeli patrol in Rafah, violating a humanitarian ceasefire and killing two Israeli soldiers. The militants returned to Rafah through a tunnel, bringing the body of Lieutenant Hadar Goldin with them. Israel at first believed that the militants had abducted Goldin and were holding him, but later determined that he had also been killed. An unnamed senior intelligence source told The Times of Israel on 28 July 2014 that of the nine cross-border tunnels detected, none stretched into a civilian community, and that in the five infiltrations to that time Hamas had targeted soldiers rather than civilians.
On 31 July 2014 IDF Army Radio quoted an unnamed senior military official as saying that "all the tunnels were aimed at military targets and not at the Gaza-vicinity communities". A UNHRC Commission of Inquiry on the Gaza Conflict published a report in 2015 concluding that during the 2014 conflict, "the tunnels were only used to conduct attacks directed at IDF positions in Israel in the vicinity of the Green Line, which are legitimate military targets." An Israeli intelligence source that spoke to the Times of Israel indicated that none of the nine cross-border tunnels were aimed at civilian border communities. All the infiltration attempts focused on attacking military targets. The main aim of the attacks seems to have been to capture an IDF prisoner. Israeli officials condemned the UNHRC report. On 7 October 2023, Hamas launched an attack on Israel, taking 252 people hostage. On 24 October 2023, a hostage was released and told reporters that she was transported and kept in the tunnel network with a group of 25 hostages.

Psychological impact

Eitan Shamir and Hecht wrote that, during the 2014 Gaza War, one objective of conducting cross-border raids on Israeli settlements using tunnels was to inflict psychological shock on the Israeli population. A UN Commission of Inquiry found that "these tunnels and their use by Palestinian armed groups during the hostilities [of the 2014 Gaza War] caused great anxiety among Israelis that the tunnels might be used to attack civilians." According to Slesinger, the tunnels disrupt the Israelis' notion of territorial sovereignty and decrease Israeli politicians' confidence in their ability to manage external risks through ordinary border enforcement mechanisms such as patrols, fences, walls, and checkpoints–which in turn compromises the Israeli citizenry's "faith in the state's ability to provide security."
Kidnapping

Israel describes kidnapping Israeli civilians or taking Israeli soldiers hostage as one of the primary goals of tunnel construction. The Wall Street Journal described an attack tunnel inspected by one of its reporters as "designed for launching murder and kidnapping raids", noting that the "3-mile-long tunnel was reinforced with concrete, lined with telephone wires, and included cabins unnecessary for infiltration operations but useful for holding hostages." In October 2013, the newspaper Haaretz noted that "[t]he IDF's working assumption [was] that such tunnels [would] be made operative whenever there is an escalation in the area, whether initiated by Hamas or by Israel, and [would] be used for attacks and abduction attempts", adding that "[i]f Hamas initiates such an escalation while holding several Israeli citizens or soldiers, it would be in a much stronger position." According to The New York Times, one tunnel contained "a kidnapping kit of tranquilizers and plastic handcuffs".

Israel's countermeasures

Throughout the Second Intifada, the IDF launched numerous raids to counteract the Palestinian use of tunnels, and had destroyed over 100 tunnels by June 2004. In October 2006, the IDF identified 13 smuggling tunnels along the Philadelphi Route and The Jerusalem Post reported that the IDF destroyed "five more tunnels". In November 2007, the IDF identified a tunnel complex concealed in a tomato hothouse with exits near Netiv HaAsara and Erez. In November 2008, six militants were killed and a tunnel within of the border fence was destroyed by Israeli forces. In November 2012, the IDF carried out a one-week operation that targeted 140 smuggling tunnels and 66 tunnels used for attacks, of the estimated 500 tunnels thought to exist at that time. By the end of the operation, the network of attack tunnels had been largely destroyed.
A tunnel discovered in 2013 began in the Gazan village of Abasan al-Saghira with an initial depth at the entrance of , a length of approximately , a width of approximately , a height of approximately , and a final depth at the exit of , opening into a spot some from the Israeli settlement of Ein HaShlosha. According to Israel, between January and October 2013, three tunnels under the border were identified, two of which were packed with explosives. In November 2013, the IDF demolished two cross-border tunnels. Destroying the tunnels was a primary objective of Israeli forces in the July–August 2014 conflict. The IDF reported that it "neutralized" 32 tunnels, fourteen of which crossed into Israel. A column in the Wall Street Journal cited Yigal Carmon, head of the Middle East Media Research Institute, as saying that it was the tunnels, and not the 2014 Gush Etzion kidnapping and murder, that was the immediate cause of war in the summer of 2014. According to Carmon's reading of the situation, the tunnels gave Hamas the ability to stage a mass-casualty attack on the scale of the 2008 Islamist terror attack on the Taj Hotel in Mumbai that killed 164 people. On 5 July 2014, an Israeli airstrike damaged a tunnel near Kibbutz Kerem Shalom, and a group of Hamas military inspectors were killed in an explosion at the tunnel on 6 July 2014. According to Carmon, this may have persuaded Hamas that Israel was becoming aware of the scale of the capacity for militants to infiltrate Israel via tunnels, making a successful surprise mass-casualty attack less likely, and convincing the Hamas leadership to go to war immediately before more of the tunnels could be discovered and destroyed. On 6 July 2014, the IDF killed six Hamas militants in an attack on a cross-border tunnel near Rafah. This resulted in an escalation in the Israeli-Palestinian conflict, and was a key impetus to the 2014 Israel–Gaza conflict.
On 17 July 2014, the IDF foiled an attempt by 13 militants to launch an attack near Kibbutz Sufa. On 11 August 2014, the IDF announced they had successfully tested a system that could be used to detect these tunnels. This new system uses a combination of sensors and special transmitters to locate tunnels. The IDF expected development to cost up to NIS 1.5 billion and anticipated that the system could be deployed within the year. In May 2016, the IDF located a cross-border tunnel exiting near the area of Holit that had apparently been rebuilt as a bypass after initially being destroyed during Operation Protective Edge. In the summer of 2017, Israel began the construction of a border wall which stretched several meters underground to counter tunnel assaults. The structure is equipped with sensors to detect future tunnel construction. Concrete for the structure was produced using five concrete factories dedicated to the project and about 10 meters was completed daily. The structure was placed entirely on Israeli land. On 30 October 2017, Israeli forces destroyed a tunnel that crossed the Gaza border into Israeli territory. Twelve Palestinians, including ten members of Islamic Jihad Movement in Palestine and two Hamas militants, were killed in the blast and subsequent rescue efforts. The most senior person killed was Brigade Commander Arafat Marshood of Islamic Jihad's al-Quds Brigades. On 10 December 2017, Israeli forces destroyed an additional tunnel that crossed the border. In January 2018, following the destruction of an attack tunnel from Gaza that crossed into Egypt and Israel, IDF Major General Yoav Mordechai, speaking in Arabic, said, "I want to send a message to everyone who is digging or gets too close to the tunnels: As you've seen in the past two months, these tunnels bring only death," referring to Hamas tunnels that had recently been destroyed by Israel.
Major General Eyal Zamir stated that more Hamas tunnels into Israel would be destroyed as the construction of a barrier around the Gaza Strip would soon be completed. In April 2018, the Israeli military announced it destroyed a tunnel that was called the "longest ever" and stretched several kilometers from inside the Gaza Strip, near Jabalia, and reached several meters into Israel, towards Nahal Oz, though no exit had yet been built. In June 2018, for the first time, an Israeli airstrike destroyed a naval tunnel belonging to Hamas. In August 2018, the Israeli Ministry of Defense released the first pictures of an underwater barrier with Gaza designed to prevent Hamas infiltrations by sea. Construction of the barrier had started two months earlier and was expected to be completed by the end of the year, stretching two hundred meters into the Mediterranean. In October 2018, the Israeli military destroyed a tunnel in the southern Gaza Strip that was long, of which encroached upon Israeli territory. In May 2021, Israeli airstrikes destroyed over 100 kilometers of tunnel network inside Gaza during Operation Guardian of the Walls. In December 2021, the Israeli Ministry of Defense announced that a 65-kilometer underground barrier to deal with the threat of cross-border tunnels along the border with Gaza had been completed. In October 2023, the Israeli Defense Forces were reported to be considering the use of sponge bombs as a non-lethal means of sealing tunnels during their incursion into the Gaza Strip. The Egyptian government has also regarded the tunnels as a security risk. In 2013, Egypt attempted to destroy certain tunnels along its Gaza border by filling them with sewage and demolishing houses that hid their entrances, according to Joel Roskin, a geology professor at Bar-Ilan University.
See also

Anti-tunnel barrier along the Gaza–Israel border
Gaza Strip smuggling tunnels
Sinaloa Cartel#Tijuana Airport/Drug Super Tunnels
2014 Nahal Oz attack
Hezbollah tunnels
Củ Chi tunnels

External links

The Gaza Tunnel Industry, Israel Defense Forces (including photos), 2018
Peering Into Darkness Beneath the Israel-Gaza Border, The New York Times, 25 July 2014 (including a map of some of the tunnels)
IDF footage of tunnel entrance built in basement of Gaza mosque
Palestinian tunnel warfare in the Gaza Strip
https://en.wikipedia.org/wiki/Highly%20charged%20ion
Highly charged ions (HCI) are ions in very high charge states due to the loss of many or most of their bound electrons by energetic collisions or high-energy photon absorption. Examples are 13-fold ionized iron, or Fe XIV in spectroscopic notation, found in the Sun's corona, or naked uranium (U XCIII in spectroscopic notation), which is bare of all bound electrons and requires very high energy for its production. HCI are found in stellar coronae, in active galactic nuclei, in supernova remnants, and in accretion disks. Most of the visible matter in the universe consists of highly charged ions. High-temperature plasmas used for nuclear fusion energy research also contain HCI generated by the plasma-wall interaction (see Tokamak). In the laboratory, HCI are investigated by means of heavy-ion particle accelerators and electron beam ion traps. They might have applications in improving atomic clocks, advancing quantum computing, and measuring fundamental physical constants more accurately.
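The spectroscopic notation above follows a simple rule: the Roman numeral equals the ion's charge state plus one, so that I denotes the neutral atom. Fe XIV is therefore Fe13+, and bare U92+ is U XCIII. A minimal sketch of the conversion in Python (the function names are illustrative, not from any standard library):

```python
def to_roman(n: int) -> str:
    """Convert a positive integer to a Roman numeral."""
    symbols = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
               (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
               (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
    out = []
    for value, symbol in symbols:
        while n >= value:          # greedily subtract the largest value
            out.append(symbol)
            n -= value
    return "".join(out)

def spectroscopic(element: str, charge: int) -> str:
    """Spectroscopic notation: ionization stage = charge + 1 (I is neutral)."""
    return f"{element} {to_roman(charge + 1)}"

print(spectroscopic("Fe", 13))  # Fe XIV, 13-fold ionized iron
print(spectroscopic("U", 92))   # U XCIII, fully stripped uranium (Z = 92)
```

This reproduces the two examples in the text: 13-fold ionized iron maps to stage 14 (XIV), and bare uranium, having lost all 92 electrons, maps to stage 93 (XCIII).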
Highly charged ion
https://en.wikipedia.org/wiki/Karl%20Marx
Karl Marx (5 May 1818 – 14 March 1883) was a German-born philosopher, political theorist, political economist, historian, sociologist, journalist, and revolutionary socialist. His best-known works are the 1848 pamphlet The Communist Manifesto (with Friedrich Engels) and his three-volume Das Kapital (1867–1894); the latter employs his critical approach of historical materialism in an analysis of capitalism, in the culmination of his intellectual endeavours. Marx's ideas and their subsequent development, collectively known as Marxism, have had enormous influence on modern intellectual, economic and political history. Born in Trier in the Kingdom of Prussia, Marx studied at the universities of Bonn, Berlin, and Jena, and received a doctorate in philosophy from the latter in 1841. A Young Hegelian, he was influenced by the philosophy of Georg Wilhelm Friedrich Hegel, and both critiqued and developed Hegel's ideas in works such as The German Ideology (written 1846) and the Grundrisse (written 1857–1858). While in Paris in 1844, Marx wrote his Economic and Philosophic Manuscripts and met Engels, who became his closest friend and collaborator. After moving to Brussels in 1845, they were active in the Communist League, and in 1848 they wrote The Communist Manifesto, which expresses Marx's ideas and lays out a programme for revolution. Marx was expelled from Belgium and Germany, and in 1849 moved to London, where he wrote The Eighteenth Brumaire of Louis Bonaparte (1852) and Das Kapital. From 1864, Marx was involved in the International Workingmen's Association (First International), in which he fought the influence of anarchists led by Mikhail Bakunin. In his Critique of the Gotha Programme (1875), Marx wrote on revolution, the state and the transition to communism. He died stateless in 1883 and was buried in Highgate Cemetery. Marx's critiques of history, society and political economy hold that human societies develop through class conflict.
In the capitalist mode of production, this manifests itself in the conflict between the ruling classes (known as the bourgeoisie) that control the means of production and the working classes (known as the proletariat) that enable these means by selling their labour power in return for wages. Employing his historical materialist approach, Marx predicted that capitalism produced internal tensions like previous socioeconomic systems and that these tensions would lead to its self-destruction and replacement by a new system known as the socialist mode of production. For Marx, class antagonisms under capitalism—owing in part to its instability and crisis-prone nature—would eventuate the working class's development of class consciousness, leading to their conquest of political power and eventually the establishment of a classless, communist society constituted by a free association of producers. Marx actively pressed for its implementation, arguing that the working class should carry out organised proletarian revolutionary action to topple capitalism and bring about socio-economic emancipation. Marx has been described as one of the most influential figures of the modern era, and his work has been both lauded and criticised. Marxism has exerted major influence on socialist thought and political movements, with Marxist schools of thought such as Marxism–Leninism and its offshoots becoming the guiding ideologies of revolutionary governments that took power in many countries during the 20th century, known as communist states. Marx's work in economics has had a strong influence on modern heterodox theories of labour and capital, and he is often cited as one of the principal architects of modern sociology.

Biography

Childhood and early education: 1818–1836

Karl Marx was born on 5 May 1818 to Heinrich Marx and Henriette Pressburg. He was born at Brückengasse 664 in Trier, an ancient city then part of the Kingdom of Prussia's Province of the Lower Rhine.
Marx's family was originally non-religious Jewish but had converted formally to Christianity before his birth. His maternal grandfather was a Dutch rabbi, while his paternal line had supplied Trier's rabbis since 1723, a role taken by his grandfather Meier Halevi Marx. His father, as a child known as Herschel, was the first in the line to receive a secular education. He became a lawyer with a comfortably upper middle class income; in addition to his income as an attorney, the family owned a number of Moselle vineyards. Prior to his son's birth and after the abrogation of Jewish emancipation in the Rhineland, Herschel converted from Judaism to join the state Evangelical Church of Prussia, taking on the German forename Heinrich over the Yiddish Herschel. Largely non-religious, Heinrich was a man of the Enlightenment, interested in the ideas of the philosophers Immanuel Kant and Voltaire. A classical liberal, he took part in agitation for a constitution and reforms in Prussia, which was then an absolute monarchy. In 1815, Heinrich Marx began working as an attorney and in 1819 moved his family to a ten-room property near the Porta Nigra. His wife, Henriette Pressburg, was a Dutch Jew from a prosperous business family that later founded the company Philips Electronics. Her sister Sophie Pressburg (1797–1854) married Lion Philips (1794–1866) and was the grandmother of both Gerard and Anton Philips and great-grandmother to Frits Philips. Lion Philips was a wealthy Dutch tobacco manufacturer and industrialist, upon whom Karl and Jenny Marx would later often come to rely for loans while they were exiled in London. Little is known of Marx's childhood. The third of nine children, he became the eldest son when his brother Moritz died in 1819. Marx and his surviving siblings, Sophie, Hermann, Henriette, Louise, Emilie, and Caroline, were baptised into the Lutheran Church on 28 August 1824, and their mother in November 1825.
Marx was privately educated by his father until 1830, when he entered Trier High School, whose headmaster, Hugo Wyttenbach, was a friend of his father. By employing many liberal humanists as teachers, Wyttenbach incurred the anger of the local conservative government. Subsequently, police raided the school in 1832 and discovered that literature espousing political liberalism was being distributed among the students. Considering the distribution of such material a seditious act, the authorities instituted reforms and replaced several staff during Marx's attendance. In October 1835 at the age of 17, Marx travelled to the University of Bonn wishing to study philosophy and literature, but his father insisted on law as a more practical field. Due to a condition referred to as a "weak chest", Marx was excused from military duty when he turned 18. While at the University of Bonn, Marx joined the Poets' Club, a group containing political radicals that were monitored by the police. Marx also joined the Trier Tavern Club drinking society, where many ideas were discussed, and at one point he served as the club's co-president. Additionally, Marx was involved in certain disputes, some of which became serious: in August 1836 he took part in a duel with a member of the university's Borussian Korps. Although his grades in the first term were good, they soon deteriorated, leading his father to force a transfer to the more serious and academic University of Berlin.

Hegelianism and early journalism: 1836–1843

Spending summer and autumn 1836 in Trier, Marx became more serious about his studies and his life. He became engaged to Jenny von Westphalen, an educated member of the petty nobility who had known Marx since childhood.
As she had broken off her engagement with a young aristocrat to be with Marx, their relationship was socially controversial owing to the differences between their religious and class origins, but Marx befriended her father Ludwig von Westphalen (a liberal aristocrat) and later dedicated his doctoral thesis to him. Seven years after their engagement, on 19 June 1843, they married in a Protestant church in Kreuznach. In October 1836, Marx arrived in Berlin, matriculating in the university's faculty of law and renting a room in the Mittelstrasse. During the first term, Marx attended lectures of Eduard Gans (who represented the progressive Hegelian standpoint, elaborated on rational development in history by emphasising particularly its libertarian aspects, and the importance of the social question) and of Karl von Savigny (who represented the Historical School of Law). Although studying law, he was fascinated by philosophy and looked for a way to combine the two, believing that "without philosophy nothing could be accomplished". Marx became interested in the recently deceased German philosopher Georg Wilhelm Friedrich Hegel, whose ideas were then widely debated among European philosophical circles. During a convalescence in Stralau, he joined the Doctor's Club, a student group which discussed Hegelian ideas, and through them became involved with a group of radical thinkers known as the Young Hegelians in 1837. They gathered around Ludwig Feuerbach and Bruno Bauer, with Marx developing a particularly close friendship with Adolf Rutenberg. Like Marx, the Young Hegelians were critical of Hegel's metaphysical assumptions but adopted his dialectical method to criticise established society, politics and religion from a left-wing perspective. Marx's father died in May 1838, resulting in a diminished income for the family. Marx had been emotionally close to his father and treasured his memory after his death.
By 1837, Marx was writing both fiction and non-fiction, having completed a short novel, Scorpion and Felix; a drama, Oulanem; as well as a number of love poems dedicated to his wife. None of this early work was published during his lifetime. The love poems were published posthumously in the Collected Works of Karl Marx and Frederick Engels: Volume 1. Marx soon abandoned fiction for other pursuits, including the study of both English and Italian, art history and the translation of Latin classics. He began co-operating with Bruno Bauer on editing Hegel's Philosophy of Religion in 1840. Marx was also engaged in writing his doctoral thesis, The Difference Between the Democritean and Epicurean Philosophy of Nature, which he completed in 1841. It was described as "a daring and original piece of work in which Marx set out to show that theology must yield to the superior wisdom of philosophy". The essay was controversial, particularly among the conservative professors at the University of Berlin. Marx decided instead to submit his thesis to the more liberal University of Jena, whose faculty awarded him his Ph.D. in April 1841. As Marx and Bauer were both atheists, in March 1841 they began plans for a journal entitled Atheistic Archives, but it never came to fruition. In July, Marx and Bauer took a trip to Bonn from Berlin. There they scandalised their class by getting drunk, laughing in church and galloping through the streets on donkeys. Marx was considering an academic career, but this path was barred by the government's growing opposition to classical liberalism and the Young Hegelians. Marx moved to Cologne in 1842, where he became a journalist, writing for the radical newspaper Rheinische Zeitung (Rhineland News), expressing his early views on socialism and his developing interest in economics. Marx criticised right-wing European governments as well as figures in the liberal and socialist movements, whom he thought ineffective or counter-productive.
The newspaper attracted the attention of the Prussian government censors, who checked every issue for seditious material before printing, which Marx lamented: "Our newspaper has to be presented to the police to be sniffed at, and if the police nose smells anything un-Christian or un-Prussian, the newspaper is not allowed to appear". After the Rheinische Zeitung published an article strongly criticising the Russian monarchy, Tsar Nicholas I requested that it be banned and Prussia's government complied in 1843.

Paris: 1843–1845

In 1843, Marx became co-editor of a new, radical left-wing Parisian newspaper, the Deutsch-Französische Jahrbücher (German-French Annals), then being set up by the German activist Arnold Ruge to bring together German and French radicals. Therefore, Marx and his wife moved to Paris in October 1843. Initially living with Ruge and his wife communally at 23 Rue Vaneau, they found the living conditions difficult, so they moved out following the birth of their daughter Jenny in 1844. Although intended to attract writers from both France and the German states, the Jahrbücher was dominated by the latter and the only non-German writer was the exiled Russian anarchist collectivist Mikhail Bakunin. Marx contributed two essays to the paper, "Introduction to a Contribution to the Critique of Hegel's Philosophy of Right" and "On the Jewish Question", the latter introducing his belief that the proletariat were a revolutionary force and marking his embrace of communism. Only one issue was published, but it was relatively successful, largely owing to the inclusion of Heinrich Heine's satirical odes on King Ludwig of Bavaria, leading the German states to ban it and seize imported copies (Ruge nevertheless refused to fund the publication of further issues and his friendship with Marx broke down). After the paper's collapse, Marx began writing for the only uncensored German-language radical newspaper left, Vorwärts! (Forward!).
Based in Paris, the paper was connected to the League of the Just, a utopian socialist secret society of workers and artisans. Marx attended some of their meetings but did not join. In Vorwärts!, Marx refined his views on socialism based upon Hegelian and Feuerbachian ideas of dialectical materialism, at the same time criticising liberals and other socialists operating in Europe. On 28 August 1844, Marx met the German socialist Friedrich Engels at the Café de la Régence, beginning a lifelong friendship. Engels showed Marx his recently published The Condition of the Working Class in England in 1844, convincing Marx that the working class would be the agent and instrument of the final revolution in history. Soon, Marx and Engels were collaborating on a criticism of the philosophical ideas of Marx's former friend, Bruno Bauer. This work was published in 1845 as The Holy Family. Although critical of Bauer, Marx was increasingly influenced by the ideas of the Young Hegelians Max Stirner and Ludwig Feuerbach, but eventually Marx and Engels abandoned Feuerbachian materialism as well. During the time that he lived at 38 Rue Vaneau in Paris (from October 1843 until January 1845), Marx engaged in an intensive study of political economy (Adam Smith, David Ricardo, James Mill, etc.), the French socialists (especially Claude Henri de Saint-Simon and Charles Fourier) and the history of France. The study and critique of political economy is a project that Marx would pursue for the rest of his life and would result in his major economic work, the three-volume series called Das Kapital. Marxism is based in large part on three influences: Hegel's dialectics, French utopian socialism and British political economy. Together with his earlier study of Hegel's dialectics, the studying that Marx did during this time in Paris meant that all major components of "Marxism" were in place by the autumn of 1844.
Marx was constantly being pulled away from his critique of political economy, not only by the usual daily demands of the time, but additionally by editing a radical newspaper and later by organising and directing the efforts of a political party during years of potentially revolutionary popular uprisings of the citizenry. Still, Marx was always drawn back to his studies, where he sought "to understand the inner workings of capitalism". An outline of "Marxism" had definitely formed in the mind of Karl Marx by late 1844. Indeed, many features of the Marxist view of the world had been worked out in great detail, but Marx needed to write down all of the details of his world view to further clarify the new critique of political economy in his own mind. Accordingly, Marx wrote The Economic and Philosophical Manuscripts. These manuscripts covered numerous topics, detailing Marx's concept of alienated labour. By the spring of 1845, his continued study of political economy, capital and capitalism had led Marx to the belief that the new critique of political economy he was espousing – that of scientific socialism – needed to be built on the base of a thoroughly developed materialistic view of the world. The Economic and Philosophical Manuscripts of 1844 had been written between April and August 1844, but soon Marx recognised that the Manuscripts had been influenced by some inconsistent ideas of Ludwig Feuerbach. Accordingly, Marx recognised the need to break with Feuerbach's philosophy in favour of historical materialism, and thus a year later (in April 1845), after moving from Paris to Brussels, Marx wrote his eleven "Theses on Feuerbach". The "Theses on Feuerbach" are best known for Thesis 11, which states that "philosophers have only interpreted the world in various ways, the point is to change it".
This work contains Marx's criticism of materialism (for being contemplative), idealism (for reducing practice to theory), and, overall, philosophy (for putting abstract reality above the physical world). It thus introduced the first glimpse at Marx's historical materialism, an argument that the world is changed not by ideas but by actual, physical, material activity and practice. In 1845, after receiving a request from the Prussian king, the French government shut down Vorwärts!, with the interior minister, François Guizot, expelling Marx from France.

Brussels: 1845–1848

Unable either to stay in France or to move to Germany, Marx decided to emigrate to Brussels in Belgium in February 1845. However, to stay in Belgium he had to pledge not to publish anything on the subject of contemporary politics. In Brussels, Marx associated with other exiled socialists from across Europe, including Moses Hess, Karl Heinzen and Joseph Weydemeyer. In April 1845, Engels moved from Barmen in Germany to Brussels to join Marx and the growing cadre of members of the League of the Just now seeking a home in Brussels. Later, Mary Burns, Engels' long-time companion, left Manchester, England, to join Engels in Brussels. In mid-July 1845, Marx and Engels left Brussels for England to visit the leaders of the Chartists, a working-class movement in Britain. This was Marx's first trip to England and Engels was an ideal guide for the trip. Engels had already spent two years living in Manchester, from November 1842 to August 1844. Not only did Engels already know the English language, but he had also developed a close relationship with many Chartist leaders. Indeed, Engels was serving as a reporter for many Chartist and socialist English newspapers. Marx used the trip as an opportunity to examine the economic resources available for study in various libraries in London and Manchester.
In collaboration with Engels, Marx also set about writing a book which is often seen as his best treatment of the concept of historical materialism, The German Ideology. In this work, Marx broke with Ludwig Feuerbach, Bruno Bauer, Max Stirner and the rest of the Young Hegelians, while he also broke with Karl Grün and other "true socialists" whose philosophies were still based in part on "idealism". In The German Ideology, Marx and Engels finally completed their philosophy, which posited materialism as the sole motor force in history. The German Ideology is written in a humorously satirical form, but even this satirical form did not save the work from censorship. Like so many of his other early writings, The German Ideology would not be published in Marx's lifetime, appearing only in 1932. After completing The German Ideology, Marx turned to a work that was intended to clarify his own position regarding "the theory and tactics" of a truly "revolutionary proletarian movement" operating from the standpoint of a truly "scientific materialist" philosophy. This work was intended to draw a distinction between the utopian socialists and Marx's own scientific socialist philosophy. Whereas the utopians believed that people must be persuaded one person at a time to join the socialist movement, the way a person must be persuaded to adopt any different belief, Marx knew that people would tend, on most occasions, to act in accordance with their own economic interests. Thus, appealing to an entire class (the working class in this case) with a broad appeal to the class's best material interest would be the best way to mobilise the broad mass of that class to make a revolution and change society.
This was the intent of the new book that Marx was planning, but to get the manuscript past the government censors he called the book The Poverty of Philosophy (1847) and offered it as a response to the "petty-bourgeois philosophy" of the French anarchist socialist Pierre-Joseph Proudhon as expressed in his book The Philosophy of Poverty (1846). These books laid the foundation for Marx and Engels's most famous work, a political pamphlet that has since come to be commonly known as The Communist Manifesto. While residing in Brussels in 1846, Marx continued his association with the secret radical organisation League of the Just. As noted above, Marx thought the League to be just the sort of radical organisation that was needed to spur the working class of Europe toward the mass movement that would bring about a working-class revolution. However, to organise the working class into a mass movement the League had to cease its "secret" or "underground" orientation and operate in the open as a political party. Members of the League eventually became persuaded in this regard. Accordingly, in June 1847 the League was reorganised by its membership into a new open "above ground" political society that appealed directly to the working classes. This new open political society was called the Communist League. Both Marx and Engels participated in drawing up the programme and organisational principles of the new Communist League. In late 1847, Marx and Engels began writing what was to become their most famous work – a programme of action for the Communist League. Written jointly by Marx and Engels from December 1847 to January 1848, The Communist Manifesto was first published on 21 February 1848. The Communist Manifesto laid out the beliefs of the new Communist League. No longer a secret society, the Communist League wanted to make its aims and intentions clear to the general public rather than hiding its beliefs as the League of the Just had been doing.
The opening lines of the pamphlet set forth the principal basis of Marxism: "The history of all hitherto existing society is the history of class struggles". It goes on to examine the antagonisms that Marx claimed were arising in the clashes of interest between the bourgeoisie (the wealthy capitalist class) and the proletariat (the industrial working class). Proceeding on from this, the Manifesto presents the argument for why the Communist League, as opposed to other socialist and liberal political parties and groups at the time, was truly acting in the interests of the proletariat to overthrow capitalist society and to replace it with socialism. Later that year, Europe experienced a series of protests, rebellions, and often violent upheavals that became known as the Revolutions of 1848. In France, a revolution led to the overthrow of the monarchy and the establishment of the French Second Republic. Marx was supportive of such activity, and having recently received a substantial inheritance of either 6,000 or 5,000 francs from his father (withheld by his uncle Lionel Philips since his father's death in 1838), he allegedly used a third of it to arm Belgian workers who were planning revolutionary action. Although the veracity of these allegations is disputed, the Belgian Ministry of Justice accused Marx of it and arrested him, forcing him to flee back to France, where, with a new republican government in power, he believed that he would be safe.

Cologne: 1848–1849

Temporarily settling down in Paris, Marx transferred the Communist League executive headquarters to the city and also set up a German Workers' Club with various German socialists living there.
Hoping to see the revolution spread to Germany, in 1848 Marx moved back to Cologne, where he began issuing a handbill entitled the Demands of the Communist Party in Germany, in which he argued for only four of the ten points of the Communist Manifesto, believing that in Germany at that time the bourgeoisie must overthrow the feudal monarchy and aristocracy before the proletariat could overthrow the bourgeoisie. On 1 June, Marx started the publication of a daily newspaper, the Neue Rheinische Zeitung, which he helped to finance through his recent inheritance from his father. Designed to put forward news from across Europe with his own Marxist interpretation of events, the newspaper featured Marx as a primary writer and the dominant editorial influence. Despite contributions by fellow members of the Communist League, according to Friedrich Engels it remained "a simple dictatorship by Marx". Whilst editor of the paper, Marx and the other revolutionary socialists were regularly harassed by the police, and Marx was brought to trial on several occasions, facing various allegations including insulting the Chief Public Prosecutor, committing a press misdemeanour and inciting armed rebellion through tax boycotting, although each time he was acquitted. Meanwhile, the democratic parliament in Prussia collapsed and the king, Frederick William IV, introduced a new cabinet of his reactionary supporters, who implemented counter-revolutionary measures to expunge left-wing and other revolutionary elements from the country. Consequently, the Neue Rheinische Zeitung was soon suppressed, and Marx was ordered to leave the country on 16 May 1849. Marx returned to Paris, which was then under the grip of both a reactionary counter-revolution and a cholera epidemic, and was soon expelled by the city authorities, who considered him a political threat. With his wife Jenny expecting their fourth child and with Marx not able to move back to Germany or Belgium, in August 1849 he sought refuge in London.
Move to London and further writing: 1850–1860

Marx moved to London in early June 1849 and would remain based in the city for the rest of his life. The headquarters of the Communist League also moved to London. However, in the winter of 1849–1850, a split within the ranks of the Communist League occurred when a faction within it led by August Willich and Karl Schapper began agitating for an immediate uprising. Willich and Schapper believed that once the Communist League had initiated the uprising, the entire working class from across Europe would rise "spontaneously" to join it, thus creating revolution across Europe. Marx and Engels protested that such an unplanned uprising on the part of the Communist League was "adventuristic" and would be suicide for the Communist League. Such an uprising as that recommended by the Schapper/Willich group would easily be crushed by the police and the armed forces of the reactionary governments of Europe. Marx maintained that this would spell doom for the Communist League itself, arguing that changes in society are not achieved overnight through the efforts and will power of a handful of men. They are instead brought about through a scientific analysis of economic conditions of society and by moving toward revolution through different stages of social development. In the present stage of development (circa 1850), following the defeat of the uprisings across Europe in 1848, he felt that the Communist League should encourage the working class to unite with progressive elements of the rising bourgeoisie to defeat the feudal aristocracy on issues involving demands for governmental reforms, such as a constitutional republic with freely elected assemblies and universal (male) suffrage. In other words, the working class must join with bourgeois and democratic forces to bring about the successful conclusion of the bourgeois revolution before stressing the working-class agenda and a working-class revolution.
After a long struggle that threatened to ruin the Communist League, Marx's opinion prevailed and eventually the Willich/Schapper group left the Communist League. Meanwhile, Marx also became heavily involved with the socialist German Workers' Educational Society. The Society held their meetings in Great Windmill Street, Soho, central London's entertainment district. This organisation was also racked by an internal struggle between its members, some of whom followed Marx while others followed the Schapper/Willich faction. The issues in this internal split were the same issues raised in the internal split within the Communist League, but Marx lost the fight with the Schapper/Willich faction within the German Workers' Educational Society and on 17 September 1850 resigned from the Society.

New-York Daily Tribune and journalism

In the early period in London, Marx committed himself almost exclusively to his studies, such that his family endured extreme poverty. His main source of income was Engels, whose own source was his wealthy industrialist father. In Prussia, as editor of his own newspaper and a contributor to others that were ideologically aligned, Marx could reach his audience, the working classes. In London, without the finances to run a newspaper themselves, he and Engels turned to international journalism. At one stage they were being published by six newspapers from England, the United States, Prussia, Austria, and South Africa. Marx's principal earnings came from his work as European correspondent, from 1852 to 1862, for the New-York Daily Tribune, and from also producing articles for more "bourgeois" newspapers. Marx had his articles translated from German by Wilhelm Pieper until his proficiency in English had become adequate. The New-York Daily Tribune had been founded in April 1841 by Horace Greeley. Its editorial board contained progressive bourgeois journalists and publishers, among them George Ripley and the journalist Charles Dana, who was editor-in-chief.
Dana, a Fourierist and an abolitionist, was Marx's contact. The Tribune was a vehicle for Marx to reach a transatlantic public, such as for his "hidden warfare" against Henry Charles Carey. The journal had wide working-class appeal from its foundation; at two cents, it was inexpensive; and, with about 50,000 copies per issue, its circulation was the widest in the United States. Its editorial ethos was progressive and its anti-slavery stance reflected Greeley's. Marx's first article for the paper, on the British parliamentary elections, was published on 21 August 1852. On 21 March 1857, Dana informed Marx that due to the economic recession only one article a week would be paid for, published or not; the others would be paid for only if published. Marx had sent his articles on Tuesdays and Fridays, but, that October, the Tribune discharged all its correspondents in Europe except Marx and B. Taylor, and reduced Marx to a weekly article. Between September and November 1860, only five were published. After a six-month interval, Marx resumed contributions from September 1861 until March 1862, when Dana wrote to inform him that there was no longer space in the Tribune for reports from London, due to American domestic affairs. In 1868, Dana set up a rival newspaper, the New York Sun, at which he was editor-in-chief. In April 1857, Dana invited Marx to contribute articles, mainly on military history, to the New American Cyclopedia, an idea of George Ripley, Dana's friend and literary editor of the Tribune. In all, 67 Marx-Engels articles were published, of which 51 were written by Engels, although Marx did some research for them in the British Museum. By the late 1850s, American popular interest in European affairs waned, and Marx's articles turned to topics such as the "slavery crisis" and the outbreak of the American Civil War, the "War Between the States", in 1861.
Between December 1851 and March 1852, Marx worked on his theoretical work about the French Revolution of 1848, titled The Eighteenth Brumaire of Louis Napoleon. In this he explored concepts in historical materialism, class struggle, the dictatorship of the proletariat, and the victory of the proletariat over the bourgeois state. The 1850s and 1860s may be said to mark a philosophical boundary distinguishing the young Marx's Hegelian idealism and the more mature Marx's scientific ideology associated with structural Marxism. However, not all scholars accept this distinction. For Marx and Engels, their experience of the Revolutions of 1848 to 1849 was formative in the development of their theory of economics and historical progression. After the "failures" of 1848, the revolutionary impetus appeared spent and not to be renewed without an economic recession. Contention arose between Marx and his fellow communists, whom he denounced as "adventurists". Marx deemed it fanciful to propose that "will power" could be sufficient to create the revolutionary conditions when in reality the economic component was the necessary requisite. The recession in the United States' economy in 1852 gave Marx and Engels grounds for optimism for revolutionary activity, yet this economy was seen as too immature for a capitalist revolution. Open territories on America's western frontier dissipated the forces of social unrest. Moreover, any economic crisis arising in the United States would not lead to revolutionary contagion of the older economies of individual European nations, which were closed systems bounded by their national borders. When the so-called Panic of 1857 in the United States spread globally, it broke all economic theory models, and was the first truly global economic crisis.

First International and Das Kapital

Marx continued to write articles for the New-York Daily Tribune as long as he was sure that the Tribune's editorial policy was still progressive.
However, the departure of Charles Dana from the paper in late 1861 and the resultant change in the editorial board brought about a new editorial policy. No longer was the Tribune to be a strong abolitionist paper dedicated to a complete Union victory. The new editorial board supported an immediate peace between the Union and the Confederacy in the Civil War in the United States, with slavery left intact in the Confederacy. Marx strongly disagreed with this new political position and in 1863 was forced to withdraw as a writer for the Tribune. In 1864, Marx became involved in the International Workingmen's Association (also known as the First International), to whose General Council he was elected at its inception in 1864. In that organisation, Marx was involved in the struggle against the anarchist wing centred on Mikhail Bakunin (1814–1876). Although Marx won this contest, the transfer of the seat of the General Council from London to New York in 1872, which Marx supported, led to the decline of the International. The most important political event during the existence of the International was the Paris Commune of 1871, when the citizens of Paris rebelled against their government and held the city for two months. In response to the bloody suppression of this rebellion, Marx wrote one of his most famous pamphlets, "The Civil War in France", a defence of the Commune. Given the repeated failures and frustrations of workers' revolutions and movements, Marx also sought to understand and provide a critique of the capitalist mode of production, and hence spent a great deal of time studying in the reading room of the British Museum. By 1857, Marx had accumulated over 800 pages of notes and short essays on capital, landed property, wage labour, the state, foreign trade, and the world market, though this work did not appear in print until 1939, under the title Grundrisse der Kritik der Politischen Ökonomie (Outlines of the Critique of Political Economy).
In 1859, Marx published A Contribution to the Critique of Political Economy, his first serious critique of political economy. This work was intended merely as a preview of his three-volume Das Kapital (English title: Capital: Critique of Political Economy), which he intended to publish at a later date. In A Contribution to the Critique of Political Economy, Marx began to critically examine axioms and categories of economic thinking. The work was enthusiastically received, and the edition sold out quickly. The successful sales of A Contribution to the Critique of Political Economy stimulated Marx in the early 1860s to finish work on the three large volumes that would compose his major life's work – Das Kapital – and the Theories of Surplus Value, which discussed and critiqued the theoreticians of political economy, particularly Adam Smith and David Ricardo. Theories of Surplus Value is often referred to as the fourth volume of Das Kapital and constitutes one of the first comprehensive treatises on the history of economic thought. In 1867, the first volume of Das Kapital was published, a work which critically analysed capital. Das Kapital proposes an explanation of the "laws of motion" of the capitalist mode of production from its origins to its future by describing the dynamics of the accumulation of capital, with topics such as the growth of wage labour, the transformation of the workplace, capital accumulation, competition, the banking system, the tendency of the rate of profit to fall and land-rents, as well as how waged labour continually reproduces the rule of capital. Marx proposes that the driving force of capital is in the exploitation of labour, whose unpaid work is the ultimate source of surplus value. Demand for a Russian-language edition of Das Kapital soon led to the printing of 3,000 copies of the book in the Russian language, which was published on 27 March 1872. By the autumn of 1871, the entire first edition of the German-language edition of Das Kapital had been sold out and a second edition was published.
Volumes II and III of Das Kapital remained mere manuscripts upon which Marx continued to work for the rest of his life. Both volumes were published by Engels after Marx's death. Volume II of Das Kapital was prepared and published by Engels in 1885 under the name Capital II: The Process of Circulation of Capital. Volume III of Das Kapital was published in October 1894 under the name Capital III: The Process of Capitalist Production as a Whole. Theories of Surplus Value derived from the sprawling Economic Manuscripts of 1861–1863, a second draft for Das Kapital, the latter spanning volumes 30–34 of the Collected Works of Marx and Engels. Specifically, Theories of Surplus Value runs from the latter part of the Collected Works' thirtieth volume through the end of their thirty-second volume; meanwhile, the larger Economic Manuscripts of 1861–1863 run from the start of the Collected Works' thirtieth volume through the first half of their thirty-fourth volume. The latter half of the Collected Works' thirty-fourth volume consists of the surviving fragments of the Economic Manuscripts of 1863–1864, which represented a third draft for Das Kapital, and a large portion of which is included as an appendix to the Penguin edition of Das Kapital, Volume I. A German-language abridged edition of Theories of Surplus Value was published in 1905 and in 1910. This abridged edition was translated into English and published in 1951 in London, but the complete unabridged edition of Theories of Surplus Value was published as the "fourth volume" of Das Kapital in 1963 and 1971 in Moscow. During the last decade of his life, Marx's health declined, and he became incapable of the sustained effort that had characterised his previous work. He did manage to comment substantially on contemporary politics, particularly in Germany and Russia.
His Critique of the Gotha Programme opposed the tendency of his followers Wilhelm Liebknecht and August Bebel to compromise with the state socialist ideas of Ferdinand Lassalle in the interests of a united socialist party. This work is also notable for another famous Marx quote: "From each according to his ability, to each according to his need". In a letter to Vera Zasulich dated 8 March 1881, Marx contemplated the possibility of Russia's bypassing the capitalist stage of development and building communism on the basis of the common ownership of land characteristic of the village mir. While admitting that Russia's rural "commune is the fulcrum of social regeneration in Russia", Marx also warned that in order for the mir to operate as a means for moving straight to the socialist stage without a preceding capitalist stage it "would first be necessary to eliminate the deleterious influences which are assailing it [the rural commune] from all sides". Given the elimination of these pernicious influences, Marx allowed that "normal conditions of spontaneous development" of the rural commune could exist. However, in the same letter to Vera Zasulich he points out that "at the core of the capitalist system ... lies the complete separation of the producer from the means of production". In one of the drafts of this letter, Marx reveals his growing passion for anthropology, motivated by his belief that future communism would be a return on a higher level to the communism of our prehistoric past. He wrote that "the historical trend of our age is the fatal crisis which capitalist production has undergone in the European and American countries where it has reached its highest peak, a crisis that will end in its destruction, in the return of modern society to a higher form of the most archaic type – collective production and appropriation". He added that "the vitality of primitive communities was incomparably greater than that of Semitic, Greek, Roman, etc. 
societies, and, a fortiori, that of modern capitalist societies". Before he died, Marx asked Engels to write up these ideas, which were published in 1884 under the title The Origin of the Family, Private Property and the State.

Personal life

Family

Marx and von Westphalen had seven children together, but partly owing to the poor conditions in which they lived whilst in London, only three survived to adulthood. Their children were: Jenny Caroline (m. Longuet; 1844–1883); Jenny Laura (m. Lafargue; 1845–1911); Edgar (1847–1855); Henry Edward Guy ("Guido"; 1849–1850); Jenny Eveline Frances ("Franziska"; 1851–1852); Jenny Julia Eleanor (1855–1898) and one more who died before being named (July 1857). According to his son-in-law, Paul Lafargue, Marx was a loving father. In 1962, there were allegations that Marx fathered a son, Freddy, out of wedlock by his housekeeper, Helene Demuth, but the claim is disputed for lack of documented evidence. Helene Demuth was also largely entrusted as a confidante. In her obituary, penned by Friedrich Engels, her role is described thus: "Marx took counsel of Helena Demuth, not only in difficult and intricate party matters, but even in respect of his economical writings". Marx frequently used pseudonyms, often when renting a house or flat, apparently to make it harder for the authorities to track him down. While in Paris, he used that of "Monsieur Ramboz", whilst in London, he signed off his letters as "A. Williams". His friends referred to him as "Moor", owing to his dark complexion and black curly hair, while he encouraged his children to call him "Old Nick" and "Charley". He also bestowed nicknames and pseudonyms on his friends and family, referring to Friedrich Engels as "General", his housekeeper Helene as "Lenchen" or "Nym", while one of his daughters, Jennychen, was referred to as "Qui Qui, Emperor of China" and another, Laura, was known as "Kakadou" or "the Hottentot".
Health

Marx drank heavily after joining the Trier Tavern Club drinking society in the 1830s, and continued to do so until his death. Marx was afflicted by poor health, which he himself described as "the wretchedness of existence", and various authors have sought to describe and explain it. His biographer Werner Blumenberg attributed it to liver and gall problems which Marx had in 1849 and from which he was never afterward free, exacerbated by an unsuitable lifestyle. The attacks often came with headaches, eye inflammation, neuralgia in the head, and rheumatic pains. A serious nervous disorder appeared in 1877 and protracted insomnia was a consequence, which Marx fought with narcotics. The illness was aggravated by excessive nocturnal work and faulty diet. Marx was fond of highly seasoned dishes, smoked fish, caviare, pickled cucumbers, "none of which are good for liver patients", but he also liked wine and liqueurs and smoked an enormous amount "and since he had no money, it was usually bad-quality cigars". From 1863, Marx complained frequently about boils: "These are very frequent with liver patients and may be due to the same causes". The abscesses were so bad that Marx could neither sit nor work upright. According to Blumenberg, Marx's irritability is often found in liver patients: The illness emphasised certain traits in his character. He argued cuttingly, his biting satire did not shrink at insults, and his expressions could be rude and cruel. Though in general Marx had blind faith in his closest friends, nevertheless he himself complained that he was sometimes too mistrustful and unjust even to them. His verdicts, not only about enemies but even about friends, were sometimes so harsh that even less sensitive people would take offence ... There must have been few whom he did not criticize like this ... not even Engels was an exception.
According to Princeton historian Jerrold Seigel, in his late teens, Marx may have had pneumonia or pleurisy, the effects of which led to his being exempted from Prussian military service. In later life, whilst working on the book he never completed, Marx suffered from a trio of afflictions. A liver ailment, probably hereditary, was aggravated by overwork, a bad diet, and lack of sleep. Inflammation of the eyes was induced by too much work at night. A third affliction, eruption of carbuncles or boils, "was probably brought on by general physical debility to which the various features of Marx's style of life – alcohol, tobacco, poor diet, and failure to sleep – all contributed. Engels often exhorted Marx to alter this dangerous regime". In Seigel's thesis, what lay behind this punishing sacrifice of his health may have been guilt about self-involvement and egoism, originally induced in Karl Marx by his father. In 2007, a retrodiagnosis of Marx's skin disease was made by dermatologist Sam Shuster of Newcastle University; for Shuster, the most probable explanation was that Marx suffered not from liver problems, but from hidradenitis suppurativa, a recurring infective condition arising from blockage of apocrine ducts opening into hair follicles. This condition, which was not described in the English medical literature until 1933 (and hence would not have been known to Marx's physicians), can produce joint pain (which could be misdiagnosed as rheumatic disorder) and painful eye conditions. To arrive at his retrodiagnosis, Shuster considered the primary material: the Marx correspondence published in the 50 volumes of the Marx/Engels Collected Works. There, "although the skin lesions were called 'furuncles', 'boils' and 'carbuncles' by Marx, his wife, and his physicians, they were too persistent, recurrent, destructive and site-specific for that diagnosis".
The sites of the persistent 'carbuncles' were noted repeatedly in the armpits, groins, perianal, genital (penis and scrotum) and suprapubic regions and inner thighs, "favoured sites of hidradenitis suppurativa". Professor Shuster claimed the diagnosis "can now be made definitively". Shuster went on to consider the potential psychosocial effects of the disease, noting that the skin is an organ of communication and that hidradenitis suppurativa produces much psychological distress, including loathing and disgust and depression of self-image, mood, and well-being, feelings for which Shuster found "much evidence" in the Marx correspondence. Professor Shuster went on to ask himself whether the mental effects of the disease affected Marx's work and even helped him to develop his theory of alienation.

Death

Following the death of his wife Jenny in December 1881, Marx developed a catarrh that kept him in ill health for the last 15 months of his life. It eventually brought on the bronchitis and pleurisy that killed him in London on 14 March 1883, when he died a stateless person at age 64. Family and friends in London buried his body in Highgate Cemetery (East), London, on 17 March 1883 in an area reserved for agnostics and atheists; George Eliot's grave, for example, is nearby. According to Francis Wheen, there were between nine and eleven mourners at his funeral. Research from contemporary sources identifies thirteen named individuals attending the funeral: Friedrich Engels, Eleanor Marx, Edward Aveling, Paul Lafargue, Charles Longuet, Helene Demuth, Wilhelm Liebknecht, Gottlieb Lemke, Frederick Lessner, G Lochner, Sir Ray Lankester, Carl Schorlemmer and Ernest Radford. A contemporary newspaper account claims that twenty-five to thirty relatives and friends attended the funeral. A writer in The Graphic noted: 'By a strange blunder ... his death was not announced for two days, and then as having taken place at Paris.
The next day the correction came from Paris; and when his friends and followers hastened to his house in Haverstock Hill, to learn the time and place of burial, they learned that he was already in the cold ground. But for this secresy [sic] and haste, a great popular demonstration would undoubtedly have been held over his grave'. Several of his closest friends spoke at his funeral, including Wilhelm Liebknecht and Friedrich Engels. Engels' speech included the passage: Marx's surviving daughters Eleanor and Laura, as well as Charles Longuet and Paul Lafargue, Marx's two French socialist sons-in-law, were also in attendance. He had been predeceased by his wife and his eldest daughter, the latter dying a few months earlier in January 1883. Liebknecht, a founder and leader of the German Social Democratic Party, gave a speech in German, and Longuet, a prominent figure in the French working-class movement, made a short statement in French. Two telegrams from workers' parties in France and Spain were also read out. Together with Engels's speech, this constituted the entire programme of the funeral. Non-relatives attending the funeral included three communist associates of Marx: Friedrich Lessner, imprisoned for three years after the Cologne Communist Trial of 1852; G. Lochner, whom Engels described as "an old member of the Communist League"; and Carl Schorlemmer, a professor of chemistry in Manchester, a member of the Royal Society, and a communist activist involved in the 1848 Baden revolution. Another attendee of the funeral was Ray Lankester, a British zoologist who would later become a prominent academic. Marx left a personal estate valued for probate at £250, equivalent to £38,095 in 2024. Upon his own death in 1895, Engels left Marx's two surviving daughters a "significant portion" of his considerable estate, valued in 2024 at US$6.8 million. Marx and his family were reburied on a new site nearby in November 1954. 
The tomb at the new site, unveiled on 14 March 1956, bears the carved message: "Workers of All Lands Unite", the final line of The Communist Manifesto; and, from the 11th "Thesis on Feuerbach" (as edited by Engels), "The philosophers have only interpreted the world in various ways; the point however is to change it". The Communist Party of Great Britain (CPGB) erected the monument, with a portrait bust by Laurence Bradshaw; Marx's original tomb had borne only humble adornment. Black civil rights leader and CPGB activist Claudia Jones was later buried beside Karl Marx's tomb. The Marxist historian Eric Hobsbawm remarked: "One cannot say Marx died a failure." Although he had not achieved a large following of disciples in Britain, his writings had already begun to make an impact on the left-wing movements in Germany and Russia. Within twenty-five years of his death, the continental European socialist parties that acknowledged Marx's influence on their politics had each made significant gains in representative democratic elections.
Thought

Influences

Marx's thought demonstrates influence from many sources, including:

Georg Wilhelm Friedrich Hegel's philosophy
The classical political economy (economics) of Adam Smith and David Ricardo, as well as Jean Charles Léonard de Sismondi's critique of laissez-faire economics and analysis of the precarious state of the proletariat
French socialist thought, in particular the thought of Jean-Jacques Rousseau, Henri de Saint-Simon, Pierre-Joseph Proudhon and Charles Fourier
Earlier German philosophical materialism among the Young Hegelians, particularly that of Ludwig Feuerbach and Bruno Bauer, as well as the French materialism of the late 18th century, including Diderot, Claude Adrien Helvétius and d'Holbach
Friedrich Engels' analysis of the working class, as well as the early descriptions of class provided by French liberals and Saint-Simonians such as François Guizot and Augustin Thierry

Marx's Judaic legacy has been identified as formative to both his moral outlook and his materialist philosophy. Marx's view of history, which came to be called historical materialism (controversially adapted as the philosophy of dialectical materialism by Engels and Lenin), certainly shows the influence of Hegel's claim that one should view reality (and history) dialectically. However, whereas Hegel had thought in idealist terms, putting ideas in the forefront, Marx sought to conceptualise dialectics in materialist terms, arguing for the primacy of matter over idea. Where Hegel saw the "spirit" as driving history, Marx saw this as an unnecessary mystification, obscuring the reality of humanity and its physical actions shaping the world. He wrote that Hegelianism stood the movement of reality on its head, and that one needed to set it upon its feet. Despite his dislike of mystical terms, Marx used Gothic language in several of his works: in The Communist Manifesto he proclaims "A spectre is haunting Europe – the spectre of communism.
All the powers of old Europe have entered into a holy alliance to exorcise this spectre", and in Capital he refers to capital as "necromancy that surrounds the products of labour". Though inspired by French socialist and sociological thought, Marx criticised utopian socialists, arguing that their favoured small-scale socialistic communities would be bound to marginalisation and poverty and that only a large-scale change in the economic system could bring about real change. Other important contributions to Marx's revision of Hegelianism came from Engels's book, The Condition of the Working Class in England in 1844, which led Marx to conceive of the historical dialectic in terms of class conflict and to see the modern working class as the most progressive force for revolution, as well as from the social democrat Friedrich Wilhelm Schulz, who described the movement of society as "flowing from the contradiction between the forces of production and the mode of production." Marx believed that he could study history and society scientifically, discerning tendencies of history and thereby predicting the outcome of social conflicts. Some followers of Marx, therefore, concluded that a communist revolution would inevitably occur. However, Marx famously asserted in the eleventh of his "Theses on Feuerbach" that "philosophers have only interpreted the world, in various ways; the point however is to change it" and he clearly dedicated himself to trying to alter the world.
Marx's theories inspired the development of numerous later theories and disciplines, including:

Contemporary critique of political economy
Kondratiev wave and Kuznets swing
Theory of Underconsumption
Creative destruction
Crisis theory
Quantitative Economic History
World-systems theory

Philosophy and social thought

Marx has been called "the first great user of critical method in social sciences", a characterisation stemming from his frequent use of polemics throughout his work to effect critiques of other thinkers. He criticised speculative philosophy, equating metaphysics with ideology. By adopting this approach, Marx attempted to separate key findings from ideological biases. This set him apart from many contemporary philosophers.

Human nature

Like Tocqueville, who described a faceless and bureaucratic despotism with no identifiable despot, Marx also broke with classical thinkers who spoke of a single tyrant and with Montesquieu, who discussed the nature of the single despot. Instead, Marx set out to analyse "the despotism of capital". Fundamentally, Marx assumed that human history involves transforming human nature, which encompasses both human beings and material objects. Humans recognise that they possess both actual and potential selves. (See Marx K, "Critique of Hegel's dialectic and philosophy in general", in Writings of the Young Marx on Philosophy and Society, LD Easton & KH Guddat, trans., Indianapolis: Hackett Publishing Company, 1997, pp. 314–47; original work published 1844.) For both Marx and Hegel, self-development begins with an experience of internal alienation stemming from this recognition, followed by a realisation that the actual self, as a subjective agent, renders its potential counterpart an object to be apprehended. Marx further argues that by moulding nature in desired ways the subject takes the object as its own and thus permits the individual to be actualised as fully human.
For Marx, human nature – species-being – exists as a function of human labour. Fundamental to Marx's idea of meaningful labour is the proposition that for a subject to come to terms with its alienated object it must first exert influence upon literal, material objects in the subject's world. Marx acknowledges that Hegel "grasps the nature of work and comprehends objective man, authentic because actual, as the result of his ", but characterises Hegelian self-development as unduly "spiritual" and abstract. Marx thus departs from Hegel by insisting that "the fact that man is a corporeal, actual, sentient, objective being with natural capacities means that he has actual, sensuous objects for his nature as objects of his life-expression, or that he can only express his life in actual sensuous objects". Consequently, Marx revises Hegelian "work" into material "labour" and, in the context of the human capacity to transform nature, into "labour power".

Labour, class struggle and false consciousness

Marx had a special concern with how people relate to their own labour power. He wrote extensively about this in terms of the problem of alienation. As with the dialectic, Marx began with a Hegelian notion of alienation but developed a more materialist conception. Capitalism mediates social relationships of production (such as among workers or between workers and capitalists) through commodities, including labour, that are bought and sold on the market. For Marx, the possibility that one may give up ownership of one's own labour – one's capacity to transform the world – is tantamount to being alienated from one's own nature and it is a spiritual loss. Marx described this loss as commodity fetishism, in which the things that people produce, commodities, appear to have a life and movement of their own to which humans and their behaviour merely adapt.
Commodity fetishism provides an example of what Engels called "false consciousness", which relates closely to the understanding of ideology. By "ideology", Marx and Engels meant ideas that reflect the interests of a particular class at a particular time in history, but which contemporaries see as universal and eternal. Marx and Engels's point was not only that such beliefs are at best half-truths but also that they serve an important political function. Put another way, the control that one class exercises over the means of production includes not only the production of food or manufactured goods but also the production of ideas (this provides one possible explanation for why members of a subordinate class may hold ideas contrary to their own interests). An example of this sort of analysis is Marx's understanding of religion, summed up in a passage from the preface to his 1843 Contribution to the Critique of Hegel's Philosophy of Right: Whereas his Gymnasium senior thesis argued that religion had as its primary social aim the promotion of solidarity, here Marx sees the social function of religion in terms of highlighting and preserving the political and economic status quo and inequality. Marx was an outspoken opponent of child labour, saying that British industries "could but live by sucking blood, and children's blood too", and that U.S. capital was financed by the "capitalized blood of children".

Critique of political economy, history and society

Marx's thoughts on labour and its function in reproducing capital were related to the primacy he gave to social relations in determining the society's past, present and future. Critics have called this economic determinism. Labour is the precondition for the existence and accumulation of capital, which both shape the social system. For Marx, social change was driven by conflict between opposing interests, by parties situated in the historical situation of their mode of production.
This became the inspiration for the body of works known as conflict theory. In his evolutionary model of history, he argued that human history began with free, productive and creative activity that was over time coerced and dehumanised, a trend most apparent under capitalism. Marx noted that this was not an intentional process, but rather due to the immanent logic of the current mode of production, which demands more human labour (abstract labour) to reproduce the social relationships of capital. The organisation of society depends on the means of production. The means of production are all things required to produce material goods, such as land, natural resources, and technology, but not human labour. The relations of production are the social relationships people enter into as they acquire and use the means of production. Together, these compose the mode of production, and Marx distinguished historical eras in terms of modes of production. Marx differentiated between base and superstructure, where the base (or substructure) is the economic system and the superstructure is the cultural and political system. Marx regarded a mismatch between economic base and social superstructure as a major source of social disruption and conflict. Despite Marx's stress on the critique of capitalism and discussion of the new communist society that should replace it, his explicit critique is guarded, as he saw capitalism as an improvement over past systems (slavery and feudalism). Marx never clearly discusses issues of morality and justice, but scholars agree that his work contains an implicit discussion of those concepts. Marx's view of capitalism was two-sided. On one hand, in the 19th century's deepest critique of the dehumanising aspects of this system, he noted that defining features of capitalism include alienation, exploitation and recurring, cyclical depressions leading to mass unemployment.
On the other hand, he characterised capitalism as "revolutionising, industrialising and universalising qualities of development, growth and progressivity" (by which Marx meant industrialisation, urbanisation, technological progress, increased productivity and growth, rationality, and scientific revolution) that are responsible for progress, in contrast to earlier forms of society. Marx considered the capitalist class to be one of the most revolutionary in history because it constantly improved the means of production, more than any other class, and was responsible for the overthrow of feudalism. Capitalism can stimulate considerable growth because the capitalist has an incentive to reinvest profits in new technologies and capital equipment. According to Marx, capitalists take advantage of the difference between the labour market and the market for whatever commodity the capitalist can produce. Marx observed that in practically every successful industry, input unit-costs are lower than output unit-prices. Marx called the difference "surplus value" and argued that it was based on surplus labour, the difference between what it costs to keep workers alive and what they can produce. Although Marx describes capitalists as vampires sucking workers' blood, he notes that drawing profit is "by no means an injustice", since Marx, according to Allen W. Wood, "excludes any trans-epochal standpoint from which one can comment" on the morals of such particular arrangements. Marx also noted that even the capitalists themselves cannot go against the system. The problem is the "cancerous cell" of capital, understood not as property or equipment but as the social relations between workers and owners (the selling and purchasing of labour power) – the societal system, or rather mode of production, in general. At the same time, Marx stressed that capitalism was unstable and prone to periodic crises.
He suggested that over time capitalists would invest more and more in new technologies and less and less in labour. Since Marx believed that profit derived from surplus value appropriated from labour, he concluded that the rate of profit would fall as the economy grows. Marx believed that increasingly severe crises would punctuate this cycle of growth and collapse. Moreover, he believed that in the long-term, this process would enrich and empower the capitalist class and impoverish the proletariat. In section one of The Communist Manifesto, Marx describes feudalism, capitalism and the role internal social contradictions play in the historical process: Marx believed that those structural contradictions within capitalism necessitate its end, giving way to socialism, or a post-capitalistic, communist society: Thanks to various processes overseen by capitalism, such as urbanisation, the working class, the proletariat, should grow in numbers and develop class consciousness, in time realising that they can and must change the system. Marx believed that if the proletariat were to seize the means of production, they would encourage social relations that would benefit everyone equally, abolishing the exploiting class and introducing a system of production less vulnerable to cyclical crises. Marx argued in The German Ideology that capitalism will end through the organised actions of an international working class: In this new society, the alienation would end and humans would be free to act without being bound by selling their labour. It would be a democratic society, enfranchising the entire population. In such a utopian world, there would also be little need for a state, whose goal was previously to enforce the alienation. Marx theorised that between capitalism and the establishment of a socialist/communist system, would exist a period of dictatorship of the proletariat – where the working class holds political power and forcibly socialises the means of production. 
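The surplus-value and falling-rate-of-profit arguments above can be illustrated numerically. The notation below (constant capital c for machinery and materials, variable capital v for wages, surplus value s) is the standard shorthand of later Marxian economics rather than something taken from this article, and all figures are hypothetical:

```python
def rate_of_profit(c: float, v: float, s: float) -> float:
    """Marxian rate of profit: surplus value over total capital advanced."""
    return s / (c + v)

# Hold the rate of surplus value s/v fixed at 100% while capitalists
# "invest more and more in new technologies and less and less in labour",
# i.e. constant capital c grows relative to variable capital v.
for c in [100, 200, 400, 800]:
    v = 100          # wages held constant (hypothetical)
    s = v * 1.0      # surplus value proportional to labour employed
    print(c, round(rate_of_profit(c, v, s), 3))
```

Under these assumptions the printed rate falls as c grows (0.5, 0.333, 0.2, 0.111), which is the arithmetic behind the tendency Marx describes; whether s/v really stays fixed as technology changes is exactly what the surrounding debate is about.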
As he wrote in his Critique of the Gotha Program, "between capitalist and communist society there lies the period of the revolutionary transformation of the one into the other. Corresponding to this is also a political transition period in which the state can be nothing but the revolutionary dictatorship of the proletariat". While he allowed for the possibility of peaceful transition in some countries with strong democratic institutional structures (such as Britain, the United States, and the Netherlands), he suggested that in other countries in which workers cannot "attain their goal by peaceful means" the "lever of our revolution must be force".

International relations

Marx viewed Russia as the main counter-revolutionary threat to European revolutions. During the Crimean War, Marx backed the Ottoman Empire and its allies Britain and France against Russia. He was absolutely opposed to Pan-Slavism, viewing it as an instrument of Russian foreign policy. Marx considered the Slavic nations, except the Poles, to be 'counter-revolutionary'. Marx and Engels published in the Neue Rheinische Zeitung in February 1849: Marx and Engels sympathised with the Narodnik revolutionaries of the 1860s and 1870s. When the Russian revolutionaries assassinated Tsar Alexander II of Russia, Marx expressed the hope that the assassination foreshadowed 'the formation of a Russian commune'. Marx supported the Polish uprisings against tsarist Russia. He said in a speech in London in 1867: Marx supported the cause of Irish independence. In 1867, he wrote to Engels: "I used to think the separation of Ireland from England impossible. I now think it inevitable. The English working class will never accomplish anything until it has got rid of Ireland. ... English reaction in England had its roots ... in the subjugation of Ireland." Marx spent some time in French Algeria, which had been invaded and made a French colony in 1830, and had the opportunity to observe life in colonial North Africa.
He wrote about the colonial justice system, in which "a form of torture has been used (and this happens 'regularly') to extract confessions from the Arabs; naturally it is done (like the English in India) by the 'police'; the judge is supposed to know nothing at all about it." Marx was surprised by the arrogance of many European settlers in Algiers and wrote in a letter: "when a European colonist dwells among the 'lesser breeds,' either as a settler or even on business, he generally regards himself as even more inviolable than handsome William I [a Prussian king]. Still, when it comes to bare-faced arrogance and presumptuousness vis-à-vis the 'lesser breeds,' the British and Dutch outdo the French." According to the Stanford Encyclopedia of Philosophy: "Marx's analysis of colonialism as a progressive force bringing modernization to a backward feudal society sounds like a transparent rationalization for foreign domination. His account of British domination, however, reflects the same ambivalence that he shows towards capitalism in Europe. In both cases, Marx recognizes the immense suffering brought about during the transition from feudal to bourgeois society while insisting that the transition is both necessary and ultimately progressive. He argues that the penetration of foreign commerce will cause a social revolution in India." Marx discussed British colonial rule in India in the New-York Daily Tribune in June 1853:

Legacy

Marx's ideas have had a profound impact on world politics and intellectual thought, in particular in the aftermath of the 1917 Russian Revolution. Followers of Marx have often debated among themselves over how to interpret Marx's writings and apply his concepts to the modern world. The legacy of Marx's thought has become contested between numerous tendencies, each of which sees itself as Marx's most accurate interpreter.
In the political realm, these tendencies include political theories such as Leninism, Marxism–Leninism, Trotskyism, Maoism, Luxemburgism, libertarian Marxism, and Open Marxism. Various currents have also developed in academic Marxism, often under influence of other views, resulting in structuralist Marxism, historical materialism, phenomenological Marxism, analytical Marxism, and Hegelian Marxism. From an academic perspective, Marx's work contributed to the birth of modern sociology. He has been cited as one of the 19th century's three masters of the "school of suspicion", alongside Friedrich Nietzsche and Sigmund Freud, and as one of the three principal architects of modern social science along with Émile Durkheim and Max Weber. In contrast to other philosophers, Marx offered theories that could often be tested with the scientific method. Both Marx and Auguste Comte set out to develop scientifically justified ideologies in the wake of European secularisation and new developments in the philosophies of history and science. Working in the Hegelian tradition, Marx rejected Comtean sociological positivism in an attempt to develop a science of society. Karl Löwith considered Marx and Søren Kierkegaard to be the two greatest philosophical successors of Hegel. In modern sociological theory, Marxist sociology is recognised as one of the main classical perspectives. Isaiah Berlin considers Marx the true founder of modern sociology "in so far as anyone can claim the title". Beyond social science, he has also had a lasting legacy in philosophy, literature, the arts, and the humanities. Social theorists of the 20th and 21st centuries have pursued two main strategies in response to Marx. One move has been to reduce it to its analytical core, known as analytical Marxism. 
Another, more common move has been to dilute the explanatory claims of Marx's social theory and emphasise the "relative autonomy" of aspects of social and economic life not directly related to Marx's central narrative of interaction between the development of the "forces of production" and the succession of "modes of production". This has been the neo-Marxist theorising adopted by historians inspired by Marx's social theory such as E. P. Thompson and Eric Hobsbawm. It has also been a line of thinking pursued by thinkers and activists such as Antonio Gramsci who have sought to understand the opportunities and the difficulties of transformative political practice, seen in the light of Marxist social theory. (Aron, Raymond, Main Currents in Sociological Thought, Garden City, NY: Anchor Books, 1965; Hobsbawm, E. J., How to Change the World: Marx and Marxism, 1840–2011, London: Little, Brown, 2011, pp. 314–44.) Marx's ideas had a profound influence on subsequent artists and art history, shaping avant-garde movements across literature, visual art, music, film, and theatre. Politically, Marx's legacy is more complex. Throughout the 20th century, revolutions in dozens of countries labelled themselves "Marxist", most notably the Russian Revolution, which led to the founding of the Soviet Union. Major world leaders including Vladimir Lenin, Mao Zedong, Fidel Castro, Salvador Allende, Josip Broz Tito, Kwame Nkrumah, Jawaharlal Nehru, Nelson Mandela, Xi Jinping and Thomas Sankara have all cited Marx as an influence. Beyond where Marxist revolutions took place, Marx's ideas have informed political parties worldwide. In countries associated with Marxism, some events have led political opponents to blame Marx for millions of deaths, while others argue for a distinction between the legacy and influence of Marx specifically, and the legacy and influence of those who have shaped his ideas for political purposes.
Arthur Lipow describes Marx and his collaborator Friedrich Engels as "the founders of modern revolutionary democratic socialism." The cities of Marks, Russia and Karl-Marx-Stadt, Germany, now known as Chemnitz, were named after Marx. (Chemnitzer Tourismus-Broschüre, City-Management und Tourismus Chemnitz GmbH, 4. Jahrgang, Ausgabe 12, Sommer 2010; O-Ton-Nachweis im Chemnitzer Stadtarchiv.) In May 2018, to mark the bicentenary of his birth, a 4.5 m statue of him by leading Chinese sculptor Wu Weishan, donated by the Chinese government, was unveiled in his birthplace of Trier, Germany. The then-European Commission president Jean-Claude Juncker defended Marx's memory, saying that today Marx "stands for things which he is not responsible for and which he didn't cause because many of the things he wrote down were redrafted into the opposite". In 2017, a feature film, titled The Young Karl Marx, featuring Marx, his wife Jenny Marx, and Engels, among other revolutionaries and intellectuals prior to the Revolutions of 1848, received good reviews for both its historical accuracy and its brio in dealing with intellectual life.

Selected bibliography

The Difference Between the Democritean and Epicurean Philosophy of Nature (doctoral thesis), 1841
The Philosophical Manifesto of the Historical School of Law, 1842
Critique of Hegel's Philosophy of Right, 1843
On the Jewish Question, 1843
Notes on James Mill, 1844
Economic and Philosophic Manuscripts of 1844, 1844
The Holy Family, 1845
Theses on Feuerbach, written 1845, first published posthumously in 1888 by Engels
The German Ideology, 1845
The Poverty of Philosophy, 1847
Wage Labour and Capital, 1847
Manifesto of the Communist Party, 1848
The Class Struggles in France, 1850
The Eighteenth Brumaire of Louis Napoleon, 1852
Grundrisse (Foundations of a Critique of Political Economy), 1857
A Contribution to the Critique of Political Economy, 1859
Writings on the U.S.
Civil War, 1861 Theories of Surplus Value, (posthumously published by Kautsky) 3 volumes, 1862 Address of the International Working Men's Association to Abraham Lincoln, 1864 Value, Price and Profit, 1865 Capital. Volume I: A Critique of Political Economy The Process of Production of Capital (Das Kapital), 1867 The Civil War in France, 1871 Critique of the Gotha Programme, 1875 Notes on Adolph Wagner, 1883 Das Kapital, Volume II (posthumously published by Engels), 1885 Das Kapital, Volume III (posthumously published by Engels), 1894 See also 2807 Karl Marx, an asteroid Criticisms of Marxism Karl Marx in film Marxian class theory Marx Memorial Library Marx's method Marx Reloaded, a 2011 movie Mathematical manuscripts of Karl Marx Pre-Marx socialists Scientific socialism Timeline of Karl Marx Why Socialism?, an article by Albert Einstein Biographies of Karl Marx Notes References Sources Further reading Biographies Barnett, Vincent. Marx (Routledge, 2009) Berlin, Isaiah. Karl Marx: His Life and Environment (Oxford University Press, 1963) Gemkow, Heinrich. Karl Marx: A Biography. Dresden: Verlag Zeit im Bild. 1968. Liedman, Sven-Eric. A World to Win: The Life and Works of Karl Marx. [2015] Jeffrey N. Skinner, trans. London: Verso Books, 2018. McLellan, David. Karl Marx: his Life and Thought Harper & Row, 1973 Mehring, Franz. Karl Marx: The Story of His Life (Routledge, 2003) McLellan, David. Marx before Marxism (1980), Macmillan, Rubel, Maximilien. Marx Without Myth: A Chronological Study of his Life and Work (Blackwell, 1975) Segrillo, Angelo. Two Centuries of Karl Marx Biographies: An Overview (LEA Working Paper Series, nº 4, March 2019). Sperber, Jonathan. Karl Marx: A Nineteenth-Century Life. New York: W.W. Norton & Company, 2013. Stedman Jones, Gareth. Karl Marx: Greatness and Illusion (Allen Lane, 2016). . Walker, Frank Thomas. Karl Marx: a Bibliographic and Political Biography. (bj.publications), 2009. Wheen, Francis. 
Karl Marx: A Life, (Fourth Estate, 1999), Commentaries on Marx Althusser, Louis. For Marx. London: Verso, 2005. Althusser, Louis and Balibar, Étienne. Reading Capital. London: Verso, 2009. Attali, Jacques. Karl Marx or the thought of the world. 2005 Avineri, Shlomo. The Social and Political Thought of Karl Marx (Cambridge University Press, 1968) Avineri, Shlomo. Karl Marx: Philosophy and Revolution (Yale University Press, 2019) Axelos, Kostas. Alienation, Praxis, and Techne in the Thought of Karl Marx (translated by Ronald Bruzina, University of Texas Press, 1976). Blackledge, Paul. Reflections on the Marxist Theory of History (Manchester University Press, 2006) Blackledge, Paul. Marxism and Ethics (SUNY Press, 2012) Bottomore, Tom, ed. A Dictionary of Marxist Thought. Oxford: Blackwell, 1998. Cleaver, Harry. Reading Capital Politically (AK Press, 2000) G.A. Cohen. Karl Marx's Theory of History: A Defence (Princeton University Press, 1978) Collier, Andrew. Marx (Oneworld, 2004) Draper, Hal, Karl Marx's Theory of Revolution (4 volumes) Monthly Review Press Duncan, Ronald and Wilson, Colin. (editors) Marx Refuted, (Bath, UK, 1987) Eagleton, Terry. Why Marx Was Right (New Haven & London: Yale University Press, 2011). Fine, Ben. Marx's Capital. 5th ed. London: Pluto Press, 2010. Foster, John Bellamy. Marx's Ecology: Materialism and Nature. New York: Monthly Review Press, 2000. Gould, Stephen Jay. A Darwinian Gentleman at Marx's Funeral – E. Ray Lankester, p. 1, Find Articles.com (1999) Harvey, David. A Companion to Marx's Capital. London: Verso Books, 2010. Harvey, David. The Limits of Capital. London: Verso, 2006. Henry, Michel. Marx I and Marx II. 1976 Holt, Justin P. The Social Thought of Karl Marx. Sage, 2015. Iggers, Georg G. "Historiography: From Scientific Objectivity to the Postmodern Challenge."(Wesleyan University Press, 1997, 2005) Kołakowski, Leszek. Main Currents of Marxism Oxford: Clarendon Press, OUP, 1978 Kurz, Robert. 
Read Marx: The most important texts of Karl Marx for the 21st Century (2000) Little, Daniel. The Scientific Marx, (University of Minnesota Press, 1986) Mandel, Ernest. Marxist Economic Theory. New York: Monthly Review Press, 1970. Mandel, Ernest. The Formation of the Economic Thought of Karl Marx. New York: Monthly Review Press, 1977. Miller, Richard W. Analyzing Marx: Morality, Power, and History. Princeton, N.J: Princeton University Press, 1984. Rothbard, Murray. An Austrian Perspective on the History of Economic Thought Volume II: Classical Economics (Edward Elgar Publishing Ltd., 1995) Saad-Filho, Alfredo. The Value of Marx: Political Economy for Contemporary Capitalism. London: Routledge, 2002. Saito, Kohei. Karl Marx's Ecosocialism: Capital, Nature, and the Unfinished Critique of Political Economy, Monthly Review Press 2017. Schmidt, Alfred. The Concept of Nature in Marx. London: NLB, 1971. Strathern, Paul. "Marx in 90 Minutes", (Ivan R. Dee, 2001) Thomas, Paul. Karl Marx and the Anarchists. London: Routledge & Kegan Paul, 1980. Uno, Kozo. Principles of Political Economy. Theory of a Purely Capitalist Society, Brighton, Sussex: Harvester; Atlantic Highlands, N.J.: Humanities, 1980. Vianello, F. [1989], "Effective Demand and the Rate of Profits: Some Thoughts on Marx, Kalecki and Sraffa", in: Sebastiani, M. (ed.), Kalecki's Relevance Today, London, Macmillan, . Wendling, Amy. Karl Marx on Technology and Alienation (Palgrave Macmillan, 2009) Wheen, Francis. Marx's Das Kapital, (Atlantic Books, 2006) Wilson, Edmund. To the Finland Station: A Study in the Writing and Acting of History, Garden City, NY: Doubleday, 1940 Fiction works External links Archive of Karl Marx / Friedrich Engels Papers at the International Institute of Social History The Collected Works of Marx and Engels, in English translation and in 50 volumes, are published in London by Lawrence & Wishart and in New York by International Publishers. 
(These volumes were at one time put online by the Marxists Internet Archive, until the original publishers objected on copyright grounds: ) They are available online and searchable, for purchase or through subscribing libraries, in the Social Theory () collection published by Alexander Street Press in collaboration with the University of Chicago. "Marx" , BBC Radio 4 discussion with Anthony Grayling, Francis Wheen & Gareth Stedman Jones (In Our Time'', 14 July 2005) The 1887 NY Times review of Das Kapital 1818 births 1883 deaths 19th-century atheists 19th-century German historians 19th-century German philosophers Anti-consumerists Anti-globalization activists Anti-imperialists Anti-nationalists Atheist philosophers Burials at Highgate Cemetery Conflict theory Critics of Judaism Critics of political economy Critics of religions Critics of work and the work ethic Deaths from bronchitis Epistemologists Fellows of the Royal Society of Arts German anti-capitalists German anti-poverty advocates German atheism activists German writers on atheism German communist writers German emigrants to England German expatriates in Belgium German expatriates in England German expatriates in France German male journalists German journalists German Marxist historians German Marxist writers German Marxists German opinion journalists German people of Dutch-Jewish descent German political philosophers German revolutionaries German socialist feminists German socialists German sociologists German tax resisters Historians of economic thought Humboldt University of Berlin alumni Jewish communists Jewish socialists Journalists from Brussels Journalists from London Journalists from Paris Marxian economists Marxist journalists Marxist theorists Materialists Members of the International Workingmen's Association Metaphysicians Ontologists Pamphleteers People from Soho People from the Grand Duchy of the Lower Rhine People from Trier Philosophers of culture Philosophers of economics Philosophers of 
education Philosophers of history Philosophers of law Philosophers of mind Philosophers of religion Philosophers of science Philosophers of technology Philosophical anthropology Social philosophers Socialist economists Socialist feminists Stateless people Theorists on Western civilization University of Bonn alumni University of Jena alumni Writers about activism and social change Writers about globalization Writers about religion and science Writers from Cologne Writers from the City of Westminster
Karl Marx
Physics
17,346
2,505,402
https://en.wikipedia.org/wiki/Ceiling%20projector
The ceiling projector or cloud searchlight is used to measure the height of the base of clouds (called the ceiling) above the ground. It is used in conjunction with an alidade, usually positioned 1000 ft (300 m) away and wherever possible set at the same level. The projector beam is normally set at 90° to the terrain, although 71° 31' may be used. The projector consists of a 430-watt incandescent bulb set in a weatherproof housing. Inside the housing are two mirrors; the first, above the bulb, reflects the light downwards to the second mirror, which then reflects the light upwards to the cloud. Both mirrors are focused to produce a high intensity beam of light that renders a visible spot on the base of the cloud. The alidade is mounted on a post at a height of 5 ft (1.5 m) from the ground. It consists of an arm with a pointer and open sight at one end and a rubber eyepiece at the other. The arm is mounted onto a curved scale that is marked both in meters and in coded cloud height (feet). The observer looks through the eyepiece and sets the sight onto the spot projected on the cloud and reads the height from the attached scale. When the cloud is thin the beam of light may penetrate into the cloud. The observer should read the scale where the light first enters the cloud and not at the top. However, a remark may be made as to how far into the cloud the light was able to penetrate as this may be useful. In the case of fog or blizzard conditions the observer should read the scale where the beam disappears. See also Ceilometer Trigonometric function Triangle References Department of Transport (Canada) - Meteorological Branch - Ceiling Projectors and Associated Equipment - Manual 70 Meteorological instrumentation and equipment Cloud and fog physics Searchlights
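The geometry behind the alidade scale is a single right triangle: with a vertical (90°) projector beam, the cloud-base height equals the baseline times the tangent of the sighted elevation angle, which is exactly what the curved scale encodes. A minimal sketch of that calculation (the function name is ours; the vertical-beam setting described above is assumed):

```python
import math

def cloud_height(baseline_m: float, elevation_deg: float) -> float:
    """Height of the light spot above the alidade, in metres.

    The projector, the spot on the cloud base and the alidade form a
    right triangle when the beam is vertical, so:
        height = baseline * tan(elevation angle)
    """
    return baseline_m * math.tan(math.radians(elevation_deg))

# Example: 300 m baseline, spot sighted at 45 degrees elevation
# -> the cloud base is about 300 m above the alidade.
print(round(cloud_height(300.0, 45.0)))  # 300
```

A sighting of 60° over the same 300 m baseline would give roughly 520 m, which is why the scale is non-linear and reads increasingly coarsely at high angles.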
Ceiling projector
Technology,Engineering
379
520,796
https://en.wikipedia.org/wiki/List%20of%20radio%20telescopes
This is a list of radio telescopes – over one hundred – that are or have been used for radio astronomy. The list includes both single dishes and interferometric arrays. The list is sorted by region, then by name; unnamed telescopes are in reverse size order at the end of the list. The first radio telescope was invented in 1932, when Karl Jansky at Bell Telephone Laboratories observed radiation coming from the Milky Way. Africa Antarctica Asia Australia Europe North America South America Arctic Ocean Atlantic Ocean Indian Ocean Pacific Ocean Space-based Under construction or planned construction Proposed telescopes Gallery of big dishes See also Category:Radio telescopes List of astronomical observatories Lists of telescopes Radio telescope References External links List of radio telescopes outside the US List of radio telescopes in the US East Anglian Amateur Radio Observatory Website Radio telescopes
List of radio telescopes
Astronomy
161
70,038,961
https://en.wikipedia.org/wiki/Kappa%20Mensae
Kappa Mensae, Latinized from κ Mensae, is a solitary star in the southern circumpolar constellation Mensa. It is located 296 light years away, based on its parallax shift, and has an apparent magnitude of 5.45, making it faintly visible to the naked eye. It is receding from the Sun with a heliocentric radial velocity of . Kappa Mensae has a stellar classification of B9 V, indicating that it is an ordinary B-type main-sequence star. At present it has 3.44 times the mass of the Sun and a diameter of . It radiates 66 times the luminosity of the Sun from its photosphere at an effective temperature of , giving it a bluish white hue. The star is young, with an estimated age of 115 million years, having completed only 33.7% of its main sequence lifetime. Kappa Mensae has a high rate of spin, rotating with a projected rotational velocity of . References B-type main-sequence stars Mensa (constellation) Mensae, Kappa Mensa, 32 040593 27566 2129 Durchmusterung objects
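The quoted distance and brightness are related by two standard formulas: the annual parallax is the reciprocal of the distance in parsecs, and the distance modulus converts apparent to absolute magnitude. A quick consistency check on the article's figures (the parallax itself is not quoted in the text, so this only shows what the 296 ly distance implies; helper names are ours):

```python
import math

LY_PER_PARSEC = 3.26156  # light-years in one parsec

def parallax_mas(distance_ly: float) -> float:
    """Annual parallax in milliarcseconds: p[arcsec] = 1 / d[pc]."""
    d_pc = distance_ly / LY_PER_PARSEC
    return 1000.0 / d_pc

def absolute_magnitude(apparent_mag: float, distance_ly: float) -> float:
    """Distance modulus: M = m - 5 * log10(d[pc] / 10)."""
    d_pc = distance_ly / LY_PER_PARSEC
    return apparent_mag - 5.0 * math.log10(d_pc / 10.0)

# Figures quoted in the text: 296 light years, apparent magnitude 5.45
print(f"implied parallax:   {parallax_mas(296.0):.1f} mas")
print(f"absolute magnitude: {absolute_magnitude(5.45, 296.0):.2f}")
```

The implied parallax of roughly 11 milliarcseconds and an absolute magnitude near +0.7 are typical values for a late-B main-sequence star at this distance.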
Kappa Mensae
Astronomy
234
63,722,403
https://en.wikipedia.org/wiki/Latent%20period%20%28epidemiology%29
In epidemiology, particularly in the discussion of infectious disease dynamics (modeling), the latent period (also known as the latency period or the pre-infectious period) is the time interval between when an individual or host is infected by a pathogen and when that individual becomes infectious, i.e. capable of transmitting pathogens to other susceptible individuals. Relationship with related concepts in infectious disease dynamics To understand the spreading dynamics of an infectious disease or an epidemic, three important time periods should be carefully distinguished: incubation period, pre-infectious or latent period and infectious period. Two other relevant and important time period concepts are generation time and serial interval. The infection of a disease begins when a pathogenic (disease-causing) infectious agent, or a pathogen, is successfully transmitted from one host to another. Pathogens leave the body of one host through a portal of exit, are carried by some mode of transmission and after coming into contact (exposure) with a new susceptible host, they enter the host's body through an appropriate portal of entry. Upon entering the new host, they take a period of time to overcome or evade the immune response of the body and to multiply or replicate after having traveled to their favored sites within the host’s body (tissue invasion and tropism). When the pathogens become sufficiently numerous and toxic to cause damage to the body, the host begins to display symptoms of a clinical disease (i.e. the host becomes symptomatic). Incubation period The time interval from the time of invasion by an infectious pathogen to the time of onset (first appearance) of symptoms of the disease in question is called the incubation period. After the incubation period is over, the host enters the symptomatic period. Moreover, at a certain point in time after infection, the host becomes capable of transmitting pathogens to others, i.e. 
they become infectious or communicable. Depending on the disease, the host individual may or may not be infectious during the incubation period. The incubation period is important in the dynamics of disease transmission because it determines the time of case detection relative to the time of infection. This helps in the evaluation of the outcomes of control measures based on symptomatic surveillance. The incubation period is also useful to count the number of infected people. The period from the time of infection to the time of becoming infectious is called the pre-infectious period or the latent period. During the pre-infectious or latent period, a host may or may not show symptoms (i.e. the incubation period may or may not be over), but in both cases, the host is not capable of infecting other hosts i.e. transmitting pathogens to other hosts. The latent period, rather than the incubation period, has more influence on the spreading dynamics of an infectious disease or epidemic. Infectious period The time interval during which the host is infectious, i.e. the pathogens can be transmitted directly or indirectly from the infected host to another individual, is called the infectious period (or the period of communicability), defined as the period from the end of the pre-infectious period or the latent period until the time when the host can no longer transmit the infection to other individuals. During the infectious period, a host may or may not show symptoms, but they are capable of infecting other individuals. The duration of the infectious period depends on the ability of the infected host individual to mount an immune response. Latent period In some cases, the pre-infectious or latent period and the incubation period coincide and are mostly of the same duration. In this case, the infected individual becomes infectious at around the same time they start showing symptoms. 
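The latent period described above maps directly onto compartmental epidemic models: in an SEIR model, the E (exposed) compartment holds individuals who are infected but not yet infectious, and its mean dwell time 1/sigma is the mean latent period. A minimal forward-Euler sketch of this mapping (all parameter values are illustrative assumptions, not figures from the text):

```python
def seir(beta=0.5, sigma=1/3, gamma=1/7, days=160, dt=0.1,
         n=1_000_000, e0=10):
    """Forward-Euler SEIR integration. The E (exposed) compartment
    models the latent period; its mean duration is 1/sigma days."""
    s, e, i, r = float(n - e0), float(e0), 0.0, 0.0
    history = []
    for _ in range(int(days / dt)):
        new_exposed    = beta * s * i / n * dt  # S -> E: infection
        new_infectious = sigma * e * dt         # E -> I: latent period ends
        new_recovered  = gamma * i * dt         # I -> R: infectious period ends
        s -= new_exposed
        e += new_exposed - new_infectious
        i += new_infectious - new_recovered
        r += new_recovered
        history.append((s, e, i, r))
    return history

history = seir()
peak_infectious = max(i for _, _, i, _ in history)
print(f"peak infectious fraction: {peak_infectious / 1_000_000:.1%}")
```

Lengthening the latent period (a smaller sigma) delays and flattens the epidemic peak without changing the final size, which depends only on the basic reproduction number R0 = beta/gamma.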
In certain other infectious diseases such as smallpox or SARS, the host becomes infectious after the onset of symptoms. In this case, the latent period is longer than the incubation period. In these two cases, the disease can be effectively controlled using symptomatic surveillance. A related term is the duration of shedding or the shedding period, which is defined as the time duration during which a host or patient excretes pathogens through saliva, urine, feces or other bodily fluids. However, for some infectious diseases, the symptoms of the clinical disease may appear after the host becomes infectious. In this case, the pre-infectious or latent period has a shorter duration than the incubation period, the infectious period begins before the end of the incubation period and the host can infect others for some time without showing any noticeable symptoms. This early or mild stage of infection whose symptoms stay below the level of clinical detection is called subclinical infection and the individual concerned is called an asymptomatic carrier of the disease. For example, in HIV/AIDS, the incubation period lasts years longer than the latent period. So an HIV infected individual can show no symptoms and unwittingly infect other susceptible individuals for many years. In COVID-19, the infectious period begins approximately 2 days before the onset of symptoms and 44% of the secondary infections may happen during this pre-symptomatic stage. In these kinds of cases with a significant number of pre-symptomatic (asymptomatic) transmissions, symptomatic surveillance-based disease control measures (such as isolation, contact tracing, enhanced hygiene, etc.) are likely to have their effectiveness reduced, because a significant portion of the transmission may take place before the onset of symptoms and this has to be taken into account when designing control measures. The infectious period is a very important element in the infectious disease spreading dynamics. 
If the infectious period is long, then the measure of secondary infections (represented by the basic reproduction number, R0) will generally be larger, regardless of the infectiousness of the disease. For example, even though HIV/AIDS has a very low transmission potential per sexual act, its basic reproduction number is still very high because of its unusually long infectious period spanning many years. From the viewpoint of controlling an epidemic, the goal is to reduce the effective infectious period either by treatment or by isolating the patient from the community. Sometimes a treatment can paradoxically increase the effective infectious period by preventing death through supportive care and thereby increasing the probability of infection of other individuals. Generation time The generation time (or generation interval) of an infectious disease is the time interval from the beginning of infection in an individual (infector) to the time that person transmits to another individual (infectee). The generation time specifies how fast infections are spreading in the community with the passing of each generation. In contrast, the effective reproductive number determines how many new infections arise in the community with each passing generation. The latent period and the infectious period help determine the generation time of an infection. The mean generation time is equal to the sum of the mean latent period and one-half of the mean infectious period, given that infectiousness is evenly distributed across the infectious period. Since the precise moment of infection is very difficult, almost impossible, to detect, the generation time is not directly observable for two successive hosts. Generally, in infectious disease statistics, the onset of clinical symptoms for all the hosts is reported. 
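The stated relationship (mean generation time equals the mean latent period plus half the mean infectious period, assuming infectiousness is spread evenly over the infectious period) is a one-line calculation. A sketch with illustrative figures (the function name and example durations are ours):

```python
def mean_generation_time(latent_days: float, infectious_days: float) -> float:
    """T_g = T_latent + T_infectious / 2, valid when infectiousness is
    evenly distributed across the infectious period."""
    return latent_days + infectious_days / 2.0

# e.g. a 3-day latent period and a 7-day infectious period
print(mean_generation_time(3.0, 7.0))  # 6.5
```

If infectiousness is front-loaded rather than even, the true mean generation time is shorter than this estimate, which is one reason the serial interval is used as an observable proxy.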
For two successive generations (or cases or hosts) in a chain of infection, the serial interval is defined as the period of time between the onset of clinical symptoms in the first host (infector) and the onset of analogous clinical symptoms in the second host (infectee). Just like the generation time, the length of the serial interval depends on the lengths of the latent period, the infectious period and the incubation period. Therefore the serial interval is often used as a proxy measure to estimate the generation time. Usage of the term outside epidemiology Outside the confines of epidemiology, the term "latent period" may be defined in some general-purpose dictionaries (e.g. the Collins English Dictionary or Merriam-Webster Online Dictionary) as being the time interval between infection by a pathogen and the onset of symptoms, i.e., as a synonymous term for the epidemiologically different concept of "incubation period". In the discussion of cancers (a non-infectious disease), the term "latency period" is used to indicate the time that passes between being exposed to something that can cause disease (such as radiation or a virus) and having symptoms. Doctors and medical journals may speak of "latent" tumors, which are present but not active or causing symptoms. In the discussion of syphilis (a sexually transmitted infectious disease), the term "latent" refers to asymptomatic periods with different degrees of infectiousness. See also Incubation period Infectious period Viral shedding Generation time Serial interval Basic reproduction number Asymptomatic carrier References Epidemiology Infectious diseases
Latent period (epidemiology)
Environmental_science
1,842
1,424,999
https://en.wikipedia.org/wiki/Firing%20order
The firing order of an internal combustion engine is the sequence of ignition for the cylinders. In a spark ignition (e.g. gasoline/petrol) engine, the firing order corresponds to the order in which the spark plugs are operated. In a diesel engine, the firing order corresponds to the order in which fuel is injected into each cylinder. Four-stroke engines must also time the valve openings relative to the firing order, as the valves do not open and close on every stroke. Firing order affects the vibration, sound and evenness of power output from the engine and heavily influences crankshaft design. Cylinder numbering Numbering systems for car engines The numbering system for cylinders is generally based on the cylinder numbers increasing from the front to the rear of an engine (see engine orientation below). However, there are differences between manufacturers in how this is applied; some commonly used systems are as listed below. Straight engine In a straight engine the cylinders are numbered from front (#1 cylinder) to rear. V engine In a V engine the frontmost cylinder is usually #1, however there are two common approaches: Numbering the cylinders in each bank sequentially (e.g. 1-2-3-4 along the left bank and 5-6-7-8 along the right bank). This approach is typically used by V8 engines from Audi, Ford and Porsche. Numbering the cylinders based on their position along the crankshaft (e.g. 1-3-5-7 along the right bank and 2-4-6-8 along the left bank). This approach is typically used by V8 engines from General Motors and Chrysler. The selection of whether the #1 cylinder is on the left bank or right bank usually depends on which bank is closer to the front of the crankshaft. However, the Ford Flathead V8 and Pontiac V8 engines actually have the #1 cylinder behind the cylinder from the opposite bank. 
This was done so that all Ford engines would have cylinder #1 on the right bank and all Pontiac engines would have cylinder #1 on the left bank, to simplify the process of identifying the cylinders. Radial engine In a radial engine the cylinders are numbered around the circle, in a clockwise direction with the #1 cylinder at the top. Engine orientation within cars The simplest situation is a longitudinal engine located at the front of the car, which means the engine's orientation is the same as the car's. In this layout, the rear of the engine is the end that connects to the transmission, while the front end often has the drive belt for accessories (such as the alternator and water pump). The left bank of the engine is on the left side of the car (when looking from behind the car), and vice versa for the right bank of the engine. For a transverse engine located at the front of the car, whether the front of the engine is at the left-hand or right-hand side of the car is best determined based on the side of the car where the transmission is located (which corresponds to the rear of the engine). Most transverse engine front-wheel drive models have the front of the engine at the right-hand side of the car (except for many Honda cars). As a consequence, the left bank of a transversely mounted V engine is usually closest to the front of the car. For cars where the engine is installed 'backwards' (i.e. the transmission is closer to the front of the car than the engine, or under the engine), cylinder #1 is located towards the firewall, the rear of the car. This is the case for the Citroën Traction Avant, Saab 99, Saab 900 and many rear-engine cars. Numbering systems for ship engines Contrary to most car engines, a ship's engines are often numbered starting from the end of the engine with the power output. Large diesel truck and locomotive engines, particularly of European manufacture, may also be numbered this way. 
Cylinders on V engines often include a letter representing the cylinder bank. For example, a V6 engine could have cylinders A1-A2-A3-B1-B2-B3, with cylinders A1 and B1 located at the power output end of the engine. Common firing orders Common firing orders are listed below. For V engines and flat engines, the numbering system is L1 for the front cylinder of the left bank, R1 for the front cylinder of the right bank, etc. In two-cylinder engines, the cylinders can either fire simultaneously (such as in a flat-twin engine) or one after the other (such as in a straight-twin engine, where either an even-fire or an odd-fire configuration is possible for a four-stroke engine). In straight-three engines, there is no effective difference between the possible firing orders of 1-2-3 and 1-3-2. Straight-four engines typically use a firing order of 1-3-4-2, however some British engines used a firing order of 1-2-4-3. Flat-four engines typically use a firing order of R1-R2-L1-L2. Straight-five engines typically use a firing order of 1-2-4-5-3, in order to minimise the primary vibration from the rocking couple. Straight-six engines typically use a firing order of 1-5-3-6-2-4, which results in perfect primary and secondary balance. However, a firing order of 1-2-4-6-5-3 is common on medium-speed marine engines. V6 engines with an angle of 90 degrees between the cylinder banks have used firing orders of R1-L2-R2-L3-L1-R3 or R1-L3-R3-L2-R2-L1. Several V6 engines with an angle of 60 degrees have used a firing order of R1-L1-R2-L2-R3-L3. Flat-six engines have used firing orders of R1-L2-R3-L1-R2-L3 or R1-L3-R2-L1-R3-L2. V8 engines use a variety of firing orders, and even engines from the same manufacturer may use different ones. V10 engines have used firing orders of either R1-L5-R5-L2-R2-L3-R3-L4-R4-L1 or R1-L1-R5-L5-R2-L2-R3-L3-R4-L4. V12 engines likewise use a variety of firing orders. 
In a radial engine, there are always an odd number of cylinders in each bank, as this allows for a constant alternate cylinder firing order: for example, with a single bank of 7 cylinders, the order would be 1-3-5-7-2-4-6. Moreover, unless there is an odd number of cylinders, the ring cam around the nose of the engine would be unable to provide the inlet valve open - exhaust valve open sequence required by the four-stroke cycle. Firing interval To minimise vibrations, most engines use an evenly spaced firing interval. This means that the timing of the power stroke is evenly spaced between cylinders. For a four-stroke engine, this requires a firing interval of 720° divided by the number of cylinders, for example a six-cylinder engine would have a firing interval of 120°. On the other hand, a six-cylinder engine with an uneven firing interval could have intervals of 90° and 150°. Engines with an even firing interval will sound smoother, have less vibration and provide more even pressure pulses in the exhaust gas to the turbocharger. Engines with an uneven firing interval usually have a burble or a throaty, growling engine sound and more vibrations. The main application of uneven firing intervals is motorcycle engines, such as big-bang firing order engines. Examples of odd-firing engines are most four-stroke V-twin engines, 1961-1977 Buick V6 engine, 1985-present Yamaha VMAX, 1986–present Honda VFR 750/800, 1992-2017 Dodge Viper V10, 2008-present Audi/Lamborghini 5.2 V10 40v FSI and the 2009-2020 Yamaha R1 (inline four engine with a crossplane crankshaft). See also Engine configuration Four-stroke engine Two-stroke engine Wasted spark system References Engine technology
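The even-spacing rule described above (the full working cycle divided by the number of cylinders) and the resulting crank-angle schedule for a given firing order can be sketched as follows; the function names are ours, and a four-stroke 720° cycle is assumed by default:

```python
def firing_interval_deg(cylinders: int, four_stroke: bool = True) -> float:
    """Even firing interval: the full cycle (720 deg of crank rotation for
    a four-stroke engine, 360 deg for a two-stroke) divided by the
    cylinder count."""
    cycle = 720.0 if four_stroke else 360.0
    return cycle / cylinders

def firing_schedule(order):
    """Crank angle at which each cylinder in `order` fires, assuming
    an evenly spaced firing interval."""
    interval = firing_interval_deg(len(order))
    return {cyl: index * interval for index, cyl in enumerate(order)}

print(firing_interval_deg(6))               # 120.0 (the six-cylinder case)
print(firing_schedule([1, 5, 3, 6, 2, 4]))  # typical straight-six order
```

An odd-fire design simply departs from this schedule: the 90-degree V6 example in the text alternates 90° and 150° gaps, which still sum to 720° per cycle but produce the characteristic uneven exhaust pulses.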
Firing order
Technology
1,736
3,173,350
https://en.wikipedia.org/wiki/Alagebrium
Alagebrium (formerly known as ALT-711, dimethyl-3-N-phenacylthiazolium chloride) was a drug candidate developed by Alteon, Inc. It was the first drug candidate to be clinically tested for the purpose of breaking the crosslinks caused by advanced glycation endproducts (AGEs), thereby reversing one of the main mechanisms of aging. Through this effect Alagebrium is designed to reverse the stiffening of blood vessel walls that contributes to hypertension and cardiovascular disease, as well as many other forms of degradation associated with protein crosslinking. Alagebrium has proven effective in reducing systolic blood pressure and providing therapeutic benefit for patients with diastolic heart failure. Mechanism Advanced glycation end-products (AGEs) are proteins that become glycated as a result of exposure to sugars. They are a bio-marker implicated in aging and the development, or worsening, of many degenerative diseases, such as diabetes, atherosclerosis, chronic kidney disease, and Alzheimer's disease. Pharmacologic intervention with alagebrium directly targets the biochemical pathway leading to AGEs. Although alagebrium may break some important AGE crosslinks, there is no evidence that it is effective against the most prevalent crosslink: glucosepane. History Alteon said that it had selected ALT-711 as its lead AGE-breaker based on preclinical results in its annual report for the year 1997 and that it was preparing an IND filing. The INN name was proposed in 2004 and recommended in 2005. In 2006 Alteon merged with a company called HaptoGuard that had cash and a potential diagnostic test for haptoglobin; as part of the merger Genentech, which held preferred shares in Alteon, converted their shares to common ones and received the right to get milestone payments and royalties on sales of alagebrium, and option rights to license ALT-2074. In 2007, the company changed its name to Synvista Therapeutics, Inc. 
Synvista announced that it was terminating clinical trials of alagebrium in January 2009 in order to focus on the diagnostic test and another clinical candidate, SYI-2074 (formerly ALT-2074). The company appears to have discontinued operations, and its website is no longer available. See also Glycation Glycosylation Glucosepane Advanced glycation end product References Post-translational modification Thiazoles
Alagebrium
Chemistry
520
12,134,485
https://en.wikipedia.org/wiki/Nuclear%20receptor%204A2
The nuclear receptor 4A2 (NR4A2; nuclear receptor subfamily 4 group A member 2), also known as nuclear receptor related 1 protein (NURR1), is a protein that in humans is encoded by the NR4A2 gene. NR4A2 is a member of the nuclear receptor family of intracellular transcription factors. NR4A2 plays a key role in the maintenance of the dopaminergic system of the brain. Mutations in this gene have been associated with disorders related to dopaminergic dysfunction, including Parkinson's disease and schizophrenia. Misregulation of this gene may be associated with rheumatoid arthritis. Four transcript variants encoding four distinct isoforms have been identified for this gene. Additional alternate splice variants may exist, but their full-length nature has not been determined. This protein is thought to be critical to development of the dopaminergic phenotype in the midbrain, as mice without NR4A2 are lacking expression of this phenotype. This is further confirmed by studies showing that forced NR4A2 expression in naïve precursor cells leads to complete dopaminergic phenotype gene expression. While NR4A2 is a key protein in inducing this phenotype, there are other factors required, as expressing NR4A2 in isolation fails to produce it. One of these suggested factors is winged-helix transcription factor 2 (Foxa2). Studies have found these two factors to be within the same region of developing dopaminergic neurons, and both must be expressed for the dopaminergic phenotype to develop. Structure One structural investigation found that NR4A2 does not contain a ligand-binding cavity but instead a patch filled with hydrophobic side chains. Non-polar amino acid residues of NR4A2’s co-regulators, SMRT and NCoR, bind to this hydrophobic patch. Analysis of tertiary structure has shown that the binding surface of the ligand-binding domain is located on the grooves of the 11th and 12th alpha helices. 
This study also identified three amino acid residues, F574, F592, and L593, as essential structural components of this hydrophobic patch; mutation of any of these three inhibits ligand-binding domain (LBD) activity. Clinical significance Role in disease Mutations in NR4A2 have been associated with various disorders, including Parkinson's disease, schizophrenia, manic depression, and autism. De novo gene deletions that affect NR4A2 have been identified in some individuals with intellectual disability and language impairment, some of whom meet DSM-5 criteria for an autism diagnosis. Inflammation Research on NR4A2’s role in inflammation may provide important information for treating disorders caused by dopaminergic neuron disease. Inflammation in the central nervous system can result from activated microglia (macrophage analogs for the central nervous system) and other pro-inflammatory factors, such as bacterial lipopolysaccharide (LPS). LPS binds to toll-like receptors (TLR), which induces inflammatory gene expression by promoting signal-dependent transcription factors. To determine which cells are dopaminergic, experiments measured the enzyme tyrosine hydroxylase (TH), which is needed for dopamine synthesis. It has been shown that NR4A2 protects dopaminergic neurons from LPS-induced inflammation by reducing inflammatory gene expression in microglia and astrocytes. When a short hairpin RNA for NR4A2 was expressed in microglia and astrocytes, these cells produced inflammatory mediators such as TNF-alpha, nitric oxide synthase, and interleukin-1 beta (IL-1β), supporting the conclusion that reduced NR4A2 promotes inflammation and leads to cell death of dopaminergic neurons. NR4A2 interacts with the transcription factor complex NF-κB-p65 on the inflammatory gene promoters. However, NR4A2 depends on other factors to participate in these interactions. 
NR4A2 needs to be sumoylated and its co-regulating factor, glycogen synthase kinase 3, needs to be phosphorylated for these interactions to occur. Sumoylated NR4A2 recruits CoREST, a complex made of several proteins that assembles chromatin remodeling enzymes. The NR4A2/CoREST complex inhibits transcription of inflammatory genes. Applications NR4A2 induces tyrosine hydroxylase (TH) expression, which eventually leads to differentiation into dopaminergic neurons. NR4A2 has been demonstrated to induce differentiation in CNS precursor cells in vitro, but these cells require additional factors to reach full maturity and dopaminergic differentiation. Therefore, NR4A2 modulation may be promising for generation of dopaminergic neurons for Parkinson's disease research, yet implantation of these induced cells as a therapy has had limited results. NR4A2 mRNA may be a useful biomarker for Parkinson's disease in combination with inflammatory cytokines. Knockout studies Studies have shown that heterozygous knockout mice for the NR4A2 gene demonstrate reduced dopamine release. Initially this was compensated for by a decrease in the rate of dopamine reuptake; however, over time this reuptake could not make up for the reduced amount of dopamine being released. Coupled with the loss of dopamine receptor neurons, this can result in the onset of symptoms for Parkinson's disease. Interactions NR4A2 has been shown to interact with: Beta-catenin, Pituitary homeobox 3, Retinoic acid receptor alpha, and Retinoic acid receptor beta. References Further reading External links Intracellular receptors Transcription factors
Nuclear receptor 4A2
Chemistry,Biology
1,206
294,866
https://en.wikipedia.org/wiki/Correlation%20function%20%28astronomy%29
In astronomy, a correlation function describes the distribution of galaxies in the universe. By default, "correlation function" refers to the two-point autocorrelation function. The two-point autocorrelation function is a function of one variable (distance); it describes the excess probability of finding two galaxies separated by this distance (excess over and above the probability that would arise if the galaxies were simply scattered independently and with uniform probability). It can be thought of as a "clumpiness" factor - the higher the value for some distance scale, the more "clumpy" the universe is at that distance scale. The following definition (from Peebles 1980) is often cited: Given a random galaxy in a location, the correlation function describes the probability that another galaxy will be found within a given distance. However, it can only be correct in the statistical sense that it is averaged over a large number of galaxies chosen as the first, random galaxy. If just one random galaxy is chosen, then the definition is no longer correct, firstly because it is meaningless to talk of just one "random" galaxy, and secondly because the function will vary wildly depending on which galaxy is chosen, in contradiction with its definition as a function. Assuming the universe is isotropic (which observations suggest), the correlation function is a function of a scalar distance. The two-point correlation function can then be written as ξ(r) = ⟨δ(x)δ(x + r)⟩, where δ(x) = (ρ(x) − ρ̄)/ρ̄ is a unitless measure of overdensity, defined at every point. Letting r = |r|, it can also be expressed as the integral ξ(r) = (1/V) ∫ δ(x)δ(x + r) d³x. The spatial correlation function is related to the Fourier space power spectrum of the galaxy distribution, P(k), as ξ(r) = (1/2π²) ∫ P(k) (sin kr / kr) k² dk. The n-point autocorrelation functions for n greater than 2 or cross-correlation functions for particular object types are defined similarly to the two-point autocorrelation function. 
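In practice ξ(r) is estimated from pair counts: the simplest ("natural") estimator compares the normalized histogram of pair separations in the data catalog (DD) with that of a uniform random catalog (RR) filling the same volume, ξ ≈ DD/RR − 1. The Python sketch below is illustrative only (the function name, periodic test box, and random-catalog size are assumptions, not from the article):

```python
import numpy as np

def xi_natural(data, box, edges, n_random=500, seed=0):
    """Sketch of the 'natural' estimator xi(r) = DD/RR - 1 on a periodic box.

    data: (N, 3) positions; box: side length of the periodic cube;
    edges: separation-bin edges. Brute-force, so only for small catalogs.
    """
    rng = np.random.default_rng(seed)
    rand = rng.uniform(0.0, box, size=(n_random, 3))

    def norm_pair_counts(pts):
        # all pairwise separations, with periodic (minimum-image) wrapping
        d = np.abs(pts[:, None, :] - pts[None, :, :])
        d = np.minimum(d, box - d)
        r = np.sqrt((d ** 2).sum(axis=-1))
        iu = np.triu_indices(len(pts), k=1)      # count each pair once
        counts = np.histogram(r[iu], bins=edges)[0]
        return counts / (len(pts) * (len(pts) - 1) / 2)

    return norm_pair_counts(data) / norm_pair_counts(rand) - 1
```

For uniformly scattered points every bin gives ξ ≈ 0, while clustered points give ξ well above 0 at small separations - the "clumpiness" described above.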
The correlation function is important for theoretical models of physical cosmology because it provides a means of testing models which assume different things about the contents of the universe. See also Ripley's K and Besag's L function Correlation function in statistics Spatial point process References Peebles, P.J.E. 1980, The large scale structure of the universe Theuns, Physical Cosmology Extragalactic astronomy Covariance and correlation
Correlation function (astronomy)
Astronomy
458
38,176,368
https://en.wikipedia.org/wiki/Lactarius%20pallescens
Lactarius pallescens is a Western North American "milk-cap" mushroom whose milk turns violet when the flesh is damaged. The fungi generally identified as L. pallescens are part of a complex of closely related species and varieties which have a peppery taste and are difficult to delimit definitively. The gray-brown cap ranges from 3 to 10 cm in width, with a mucilaginous surface, whitish flesh and white latex. The gills are whitish and sometimes slightly decurrent. The viscid stalk ranges from 3 to 8 cm long and 1 to 2 cm wide. The spores are pale yellow to orange, elliptical, and bumpy. The flesh of the mushroom stains lilac. In age, reddish stains develop. Distribution Lactarius pallescens is found on the West Coast of the United States. In the Pacific Northwest, it can be found in conifer forests. Related species Lactarius uvidus (a close relative) and Lactarius californiensis are similar. See also List of Lactarius species References External links Lactarius pallescens at Mykoweb Distribution Map pallescens Fungi described in 1979 Fungi of North America Taxa named by Alexander H. Smith Fungus species
Lactarius pallescens
Biology
257
7,517,195
https://en.wikipedia.org/wiki/M51%20Group
The M51 Group is a group of galaxies located in Canes Venatici. The group is named after the brightest galaxy in the group, the Whirlpool Galaxy (M51A). Other notable members include the companion galaxy to the Whirlpool Galaxy (M51B) and the Sunflower Galaxy (M63). Members The table below lists galaxies that have been consistently identified as group members in the Nearby Galaxies Catalog, the survey of Fouque et al., the Lyons Groups of Galaxies (LGG) Catalog, and the three group lists created from the Nearby Optical Galaxy sample of Giuricin et al. Other probable members (galaxies listed in two or more of the lists from the above references) include IC 4263 and UGC 8320. The exact membership is somewhat uncertain. Nearby Groups The M51 Group is located to the southeast of the M101 Group and the NGC 5866 Group. The distances to these three groups (as determined from the distances to the individual member galaxies) are similar, which suggests that the M51 Group, the M101 Group, and the NGC 5866 Group are actually part of a large, loose, elongated structure. However, most group identification methods (including those used by the references cited above) identify these three groups as separate entities. See also Virgo Supercluster References External links M51 group, SEDS Messier pages Canes Venatici
M51 Group
Astronomy
293
26,684,058
https://en.wikipedia.org/wiki/Building%20transportation%20systems
Building transportation systems include: Elevator Escalator Moving walkway Paternoster elevator Other non-mechanical building transportation systems (the definition may vary) include: staircases walkways References Building engineering
Building transportation systems
Engineering
42
48,456,363
https://en.wikipedia.org/wiki/Laetiporus%20zonatus
Laetiporus zonatus is a species of polypore fungus in the family Fomitopsidaceae. It is found in southwestern China, where it grows on oak. The species was described as new to science in 2014 by Baokai Cui and Jie Song. The specific epithet zonatus refers to the concentric rings on the upper surface of the white to cream-colored fruit body. The fungus produces ellipsoid to pear-shaped (pyriform) or drop-shaped basidiospores that measure 5.8–7.2 by 4.3–5.5 μm. Molecular analysis of internal transcribed spacer DNA sequences indicates that L. zonatus is a unique lineage in the genus Laetiporus. References Fungi described in 2014 Fungi of China Fungal plant pathogens and diseases zonatus Taxa named by Bao-Kai Cui Fungus species
Laetiporus zonatus
Biology
184
9,087
https://en.wikipedia.org/wiki/Dynamical%20system
In mathematics, a dynamical system is a system in which a function describes the time dependence of a point in an ambient space, such as in a parametric curve. Examples include the mathematical models that describe the swinging of a clock pendulum, the flow of water in a pipe, the random motion of particles in the air, and the number of fish each springtime in a lake. The most general definition unifies several concepts in mathematics such as ordinary differential equations and ergodic theory by allowing different choices of the space and how time is measured. Time can be measured by integers, by real or complex numbers or can be a more general algebraic object, losing the memory of its physical origin, and the space may be a manifold or simply a set, without the need of a smooth space-time structure defined on it. At any given time, a dynamical system has a state representing a point in an appropriate state space. This state is often given by a tuple of real numbers or by a vector in a geometrical manifold. The evolution rule of the dynamical system is a function that describes what future states follow from the current state. Often the function is deterministic, that is, for a given time interval only one future state follows from the current state. However, some systems are stochastic, in that random events also affect the evolution of the state variables. In physics, a dynamical system is described as a "particle or ensemble of particles whose state varies over time and thus obeys differential equations involving time derivatives". In order to make a prediction about the system's future behavior, an analytical solution of such equations or their integration over time through computer simulation is realized. The study of dynamical systems is the focus of dynamical systems theory, which has applications to a wide variety of fields such as mathematics, physics, biology, chemistry, engineering, economics, history, and medicine. 
Dynamical systems are a fundamental part of chaos theory, logistic map dynamics, bifurcation theory, the self-assembly and self-organization processes, and the edge of chaos concept. Overview The concept of a dynamical system has its origins in Newtonian mechanics. There, as in other natural sciences and engineering disciplines, the evolution rule of dynamical systems is an implicit relation that gives the state of the system for only a short time into the future. (The relation is either a differential equation, difference equation or other time scale.) To determine the state for all future times requires iterating the relation many times—each advancing time a small step. The iteration procedure is referred to as solving the system or integrating the system. If the system can be solved, then, given an initial point, it is possible to determine all its future positions, a collection of points known as a trajectory or orbit. Before the advent of computers, finding an orbit required sophisticated mathematical techniques and could be accomplished only for a small class of dynamical systems. Numerical methods implemented on electronic computing machines have simplified the task of determining the orbits of a dynamical system. For simple dynamical systems, knowing the trajectory is often sufficient, but most dynamical systems are too complicated to be understood in terms of individual trajectories. The difficulties arise because: The systems studied may only be known approximately—the parameters of the system may not be known precisely or terms may be missing from the equations. The approximations used bring into question the validity or relevance of numerical solutions. To address these questions several notions of stability have been introduced in the study of dynamical systems, such as Lyapunov stability or structural stability. 
The stability of the dynamical system implies that there is a class of models or initial conditions for which the trajectories would be equivalent. The operation for comparing orbits to establish their equivalence changes with the different notions of stability. The type of trajectory may be more important than one particular trajectory. Some trajectories may be periodic, whereas others may wander through many different states of the system. Applications often require enumerating these classes or maintaining the system within one class. Classifying all possible trajectories has led to the qualitative study of dynamical systems, that is, properties that do not change under coordinate changes. Linear dynamical systems and systems that have two numbers describing a state are examples of dynamical systems where the possible classes of orbits are understood. The behavior of trajectories as a function of a parameter may be what is needed for an application. As a parameter is varied, the dynamical systems may have bifurcation points where the qualitative behavior of the dynamical system changes. For example, it may go from having only periodic motions to apparently erratic behavior, as in the transition to turbulence of a fluid. The trajectories of the system may appear erratic, as if random. In these cases it may be necessary to compute averages using one very long trajectory or many different trajectories. The averages are well defined for ergodic systems and a more detailed understanding has been worked out for hyperbolic systems. Understanding the probabilistic aspects of dynamical systems has helped establish the foundations of statistical mechanics and of chaos. History Many people regard French mathematician Henri Poincaré as the founder of dynamical systems. Poincaré published two now classical monographs, "New Methods of Celestial Mechanics" (1892–1899) and "Lectures on Celestial Mechanics" (1905–1910). 
In them, he successfully applied the results of his research to the problem of the motion of three bodies and studied in detail the behavior of solutions (frequency, stability, asymptotic behavior, and so on). These papers included the Poincaré recurrence theorem, which states that certain systems will, after a sufficiently long but finite time, return to a state very close to the initial state. Aleksandr Lyapunov developed many important approximation methods. His methods, which he developed in 1899, make it possible to define the stability of sets of ordinary differential equations. He created the modern theory of the stability of a dynamical system. In 1913, George David Birkhoff proved Poincaré's "Last Geometric Theorem", a special case of the three-body problem, a result that made him world-famous. In 1927, he published his Dynamical Systems. Birkhoff's most durable result has been his 1931 discovery of what is now called the ergodic theorem. Combining insights from physics on the ergodic hypothesis with measure theory, this theorem solved, at least in principle, a fundamental problem of statistical mechanics. The ergodic theorem has also had repercussions for dynamics. Stephen Smale made significant advances as well. His first contribution was the Smale horseshoe that jumpstarted significant research in dynamical systems. He also outlined a research program carried out by many others. Oleksandr Mykolaiovych Sharkovsky developed Sharkovsky's theorem on the periods of discrete dynamical systems in 1964. One of the implications of the theorem is that if a discrete dynamical system on the real line has a periodic point of period 3, then it must have periodic points of every other period. In the late 20th century the dynamical system perspective to partial differential equations started gaining popularity. Palestinian mechanical engineer Ali H. Nayfeh applied nonlinear dynamics in mechanical and engineering systems. 
His pioneering work in applied nonlinear dynamics has been influential in the construction and maintenance of machines and structures that are common in daily life, such as ships, cranes, bridges, buildings, skyscrapers, jet engines, rocket engines, aircraft and spacecraft. Formal definition In the most general sense, a dynamical system is a tuple (T, X, Φ) where T is a monoid, written additively, X is a non-empty set and Φ is a function Φ: U ⊆ (T × X) → X with proj2(U) = X (where proj2 is the 2nd projection map) and for any x in X: Φ(0, x) = x and Φ(t2, Φ(t1, x)) = Φ(t2 + t1, x) for t1, t2 + t1 ∈ I(x) and t2 ∈ I(Φ(t1, x)), where we have defined the set I(x) := {t ∈ T : (t, x) ∈ U} for any x in X. In particular, in the case that U = T × X we have for every x in X that I(x) = T and thus that Φ defines a monoid action of T on X. The function Φ(t,x) is called the evolution function of the dynamical system: it associates to every point x in the set X a unique image, depending on the variable t, called the evolution parameter. X is called phase space or state space, while the variable x represents an initial state of the system. We often write Φx(t) ≡ Φ(t, x) if we take one of the variables as constant. The function Φx: I(x) → X is called the flow through x and its graph is called the trajectory through x. The set γx ≡ {Φ(t, x) : t ∈ I(x)} is called the orbit through x. The orbit through x is the image of the flow through x. A subset S of the state space X is called Φ-invariant if for all x in S and all t in T, Φ(t, x) ∈ S. Thus, in particular, if S is Φ-invariant, I(x) = T for all x in S. That is, the flow through x must be defined for all time for every element of S. More commonly there are two classes of definitions for a dynamical system: one is motivated by ordinary differential equations and is geometrical in flavor; and the other is motivated by ergodic theory and is measure theoretical in flavor. Geometrical definition In the geometrical definition, a dynamical system is the tuple (T, M, f). T is the domain for time – there are many choices, usually the reals or the integers, possibly restricted to be non-negative. M is a manifold, i.e. locally a Banach space or Euclidean space, or in the discrete case a graph. 
f is an evolution rule t → f^t (with t ∈ T) such that f^t is a diffeomorphism of the manifold to itself. So, f is a "smooth" mapping of the time-domain into the space of diffeomorphisms of the manifold to itself. In other terms, f(t) is a diffeomorphism, for every time t in the domain T. Real dynamical system A real dynamical system, real-time dynamical system, continuous time dynamical system, or flow is a tuple (T, M, Φ) with T an open interval in the real numbers R, M a manifold locally diffeomorphic to a Banach space, and Φ a continuous function from T × M to M. If Φ is continuously differentiable we say the system is a differentiable dynamical system. If the manifold M is locally diffeomorphic to Rn, the dynamical system is finite-dimensional; if not, the dynamical system is infinite-dimensional. This does not assume a symplectic structure. When T is taken to be the reals, the dynamical system is called global or a flow; and if T is restricted to the non-negative reals, then the dynamical system is a semi-flow. Discrete dynamical system A discrete dynamical system, or discrete-time dynamical system, is a tuple (T, M, Φ), where M is a manifold locally diffeomorphic to a Banach space, and Φ is a function from T × M to M. When T is taken to be the integers, it is a cascade or a map. If T is restricted to the non-negative integers we call the system a semi-cascade. Cellular automaton A cellular automaton is a tuple (T, M, Φ), with T a lattice such as the integers or a higher-dimensional integer grid, M is a set of functions from an integer lattice (again, with one or more dimensions) to a finite set, and Φ a (locally defined) evolution function. As such cellular automata are dynamical systems. The lattice in M represents the "space" lattice, while the one in T represents the "time" lattice. Multidimensional generalization Dynamical systems are usually defined over a single independent variable, thought of as time. 
A more general class of systems is defined over multiple independent variables and is therefore called multidimensional systems. Such systems are useful for modeling, for example, image processing. Compactification of a dynamical system Given a global dynamical system (R, X, Φ) on a locally compact and Hausdorff topological space X, it is often useful to study the continuous extension Φ* of Φ to the one-point compactification X* of X. Although we lose the differential structure of the original system we can now use compactness arguments to analyze the new system (R, X*, Φ*). In compact dynamical systems the limit set of any orbit is non-empty, compact and simply connected. Measure theoretical definition A dynamical system may be defined formally as a measure-preserving transformation of a measure space, the triplet (T, (X, Σ, μ), Φ). Here, T is a monoid (usually the non-negative integers), X is a set, and (X, Σ, μ) is a probability space, meaning that Σ is a sigma-algebra on X and μ is a finite measure on (X, Σ). A map Φ: X → X is said to be Σ-measurable if and only if, for every σ in Σ, one has Φ⁻¹σ ∈ Σ. A map Φ is said to preserve the measure if and only if, for every σ in Σ, one has μ(Φ⁻¹σ) = μ(σ). Combining the above, a map Φ is said to be a measure-preserving transformation of X, if it is a map from X to itself, it is Σ-measurable, and is measure-preserving. The triplet (T, (X, Σ, μ), Φ), for such a Φ, is then defined to be a dynamical system. The map Φ embodies the time evolution of the dynamical system. Thus, for discrete dynamical systems the iterates Φⁿ for every integer n are studied. For continuous dynamical systems, the map Φ is understood to be a finite time evolution map and the construction is more complicated. Relation to geometric definition The measure theoretical definition assumes the existence of a measure-preserving transformation. Many different invariant measures can be associated to any one evolution rule. 
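The measure-preserving condition μ(Φ⁻¹σ) = μ(σ) can be illustrated numerically with the dyadic (doubling) transformation, which preserves Lebesgue measure on [0, 1): if X is uniform, then T(X) is again uniform, so the fraction of samples landing in a set σ after one step should equal μ(σ). A Monte Carlo sketch (the sample size and helper names are illustrative assumptions):

```python
import random

def doubling(x):
    """Dyadic transformation T(x) = 2x mod 1, a classic measure-preserving map."""
    return (2 * x) % 1.0

def pushforward_fraction(seg, n=200_000, seed=0):
    """Monte Carlo estimate of P(T(X) in [a, b)) for X ~ Uniform[0, 1)."""
    rng = random.Random(seed)
    a, b = seg
    hits = sum(1 for _ in range(n) if a <= doubling(rng.random()) < b)
    return hits / n
```

For the interval [0.2, 0.5), whose Lebesgue measure is 0.3, the estimate comes out close to 0.3, consistent with measure preservation.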
If the dynamical system is given by a system of differential equations the appropriate measure must be determined. This makes it difficult to develop ergodic theory starting from differential equations, so it becomes convenient to have a dynamical systems-motivated definition within ergodic theory that side-steps the choice of measure and assumes the choice has been made. A simple construction (sometimes called the Krylov–Bogolyubov theorem) shows that for a large class of systems it is always possible to construct a measure so as to make the evolution rule of the dynamical system a measure-preserving transformation. In the construction a given measure of the state space is summed for all future points of a trajectory, assuring the invariance. Some systems have a natural measure, such as the Liouville measure in Hamiltonian systems, chosen over other invariant measures, such as the measures supported on periodic orbits of the Hamiltonian system. For chaotic dissipative systems the choice of invariant measure is technically more challenging. The measure needs to be supported on the attractor, but attractors have zero Lebesgue measure and the invariant measures must be singular with respect to the Lebesgue measure. A small region of phase space shrinks under time evolution. For hyperbolic dynamical systems, the Sinai–Ruelle–Bowen measures appear to be the natural choice. They are constructed on the geometrical structure of stable and unstable manifolds of the dynamical system; they behave physically under small perturbations; and they explain many of the observed statistics of hyperbolic systems. Construction of dynamical systems The concept of evolution in time is central to the theory of dynamical systems as seen in the previous sections: the basic reason for this fact is that the starting motivation of the theory was the study of time behavior of classical mechanical systems. 
But a system of ordinary differential equations must be solved before it becomes a dynamical system. For example, consider an initial value problem such as the following: ẋ = v(t, x), x(0) = x0, where ẋ represents the velocity of the material point x, M is a finite dimensional manifold, and v: T × M → TM is a vector field in Rn or Cn that represents the change of velocity induced by the known forces acting on the given material point in the phase space M. The change is not a vector in the phase space M, but is instead in the tangent space TM. There is no need for higher order derivatives in the equation, nor for the parameter t in v(t,x), because these can be eliminated by considering systems of higher dimensions. Depending on the properties of this vector field, the mechanical system is called autonomous, when v(t, x) = v(x), and homogeneous, when v(t, 0) = 0 for all t. The solution can be found using standard ODE techniques and is denoted as the evolution function already introduced above: x(t) = Φ(t, x0). The dynamical system is then (T, M, Φ). Some formal manipulation of the system of differential equations shown above gives a more general form of equations a dynamical system must satisfy: G(x, Φ(t, x)) = 0, where G is a functional from the set of evolution functions to the field of the complex numbers. This equation is useful when modeling mechanical systems with complicated constraints. Many of the concepts in dynamical systems can be extended to infinite-dimensional manifolds—those that are locally Banach spaces—in which case the differential equations are partial differential equations. 
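Numerically, the evolution function Φ(t, x0) of such an initial value problem is obtained by integrating the vector field, for instance with the classical fourth-order Runge–Kutta scheme. The sketch below (the step count and function names are illustrative choices) also lets one check the flow property Φ(t + s, x) ≈ Φ(t, Φ(s, x)) for an autonomous field:

```python
def flow(v, t, x0, steps=1000):
    """Approximate evolution function Phi(t, x0) for x' = v(x), via classical RK4."""
    h = t / steps
    x = x0
    for _ in range(steps):
        k1 = v(x)
        k2 = v(x + h * k1 / 2)
        k3 = v(x + h * k2 / 2)
        k4 = v(x + h * k3)
        x = x + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return x
```

For v(x) = −x the exact flow is Φ(t, x0) = x0·e^(−t), which the integrator reproduces to high accuracy, and composing two unit-time flows agrees with one flow of length two.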
Examples Arnold's cat map Baker's map is an example of a chaotic piecewise linear map Billiards and outer billiards Bouncing ball dynamics Circle map Complex quadratic polynomial Double pendulum Dyadic transformation Hénon map Irrational rotation Kaplan–Yorke map List of chaotic maps Lorenz system Quadratic map simulation system Rössler map Swinging Atwood's machine Tent map Linear dynamical systems Linear dynamical systems can be solved in terms of simple functions and the behavior of all orbits classified. In a linear system the phase space is the N-dimensional Euclidean space, so any point in phase space can be represented by a vector with N numbers. The analysis of linear systems is possible because they satisfy a superposition principle: if u(t) and w(t) satisfy the differential equation for the vector field (but not necessarily the initial condition), then so will u(t) + w(t). Flows For a flow, the vector field v(x) is an affine function of the position in the phase space, that is, v(x) = A x + b, with A a matrix, b a vector of numbers and x the position vector. The solution to this system can be found by using the superposition principle (linearity). The case b ≠ 0 with A = 0 is just a straight line in the direction of b: x(t) = x0 + b t. When b is zero and A ≠ 0 the origin is an equilibrium (or singular) point of the flow, that is, if x0 = 0, then the orbit remains there. For other initial conditions, the equation of motion is given by the exponential of a matrix: for an initial point x0, x(t) = e^(At) x0. When b = 0, the eigenvalues of A determine the structure of the phase space. From the eigenvalues and the eigenvectors of A it is possible to determine if an initial point will converge to or diverge from the equilibrium point at the origin. The distance between two different initial conditions in the case A ≠ 0 will change exponentially in most cases, either converging exponentially fast towards a point, or diverging exponentially fast. 
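The matrix-exponential solution of a linear flow can be evaluated directly; for small matrices of moderate norm a truncated Taylor series suffices (a library routine such as scipy.linalg.expm would be the production choice). A sketch, with the truncation length an illustrative assumption:

```python
import numpy as np

def expm_taylor(M, terms=40):
    """Matrix exponential via truncated Taylor series; adequate for small ||M||."""
    E = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k      # M^k / k!
        E = E + term
    return E

def linear_flow(A, t, x0):
    """Solution x(t) = exp(A t) x0 of the linear flow x' = A x (case b = 0)."""
    return expm_taylor(A * t) @ x0
```

For the rotation matrix A = [[0, 1], [−1, 0]] the eigenvalues are ±i, so orbits circle the origin rather than converging or diverging: starting from (1, 0), after time t = π/2 the state is (0, −1).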
Linear systems display sensitive dependence on initial conditions in the case of divergence. For nonlinear systems this is one of the (necessary but not sufficient) conditions for chaotic behavior. Maps A discrete-time, affine dynamical system has the form of a matrix difference equation: x_{n+1} = A x_n + b, with A a matrix and b a vector. As in the continuous case, the change of coordinates x → x + (1 − A)⁻¹b removes the term b from the equation. In the new coordinate system, the origin is a fixed point of the map and the solutions are of the linear system Aⁿx0. The solutions for the map are no longer curves, but points that hop in the phase space. The orbits are organized in curves, or fibers, which are collections of points that map into themselves under the action of the map. As in the continuous case, the eigenvalues and eigenvectors of A determine the structure of phase space. For example, if u1 is an eigenvector of A, with a real eigenvalue smaller than one, then the straight line given by the points along α u1, with α ∈ R, is an invariant curve of the map. Points in this straight line run into the fixed point. There are also many other discrete dynamical systems. Local dynamics The qualitative properties of dynamical systems do not change under a smooth change of coordinates (this is sometimes taken as a definition of qualitative): a singular point of the vector field (a point where v(x) = 0) will remain a singular point under smooth transformations; a periodic orbit is a loop in phase space and smooth deformations of the phase space cannot alter it being a loop. It is in the neighborhood of singular points and periodic orbits that the structure of a phase space of a dynamical system can be well understood. In the qualitative study of dynamical systems, the approach is to show that there is a change of coordinates (usually unspecified, but computable) that makes the dynamical system as simple as possible. 
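Returning to the affine map x_{n+1} = A x_n + b: when all eigenvalues of A lie inside the unit circle, iterates converge geometrically to the fixed point (1 − A)⁻¹b, which is easy to confirm numerically. A sketch (the matrix and starting point are arbitrary illustrative choices):

```python
import numpy as np

def affine_fixed_point(A, b):
    """Fixed point x* = (I - A)^{-1} b of the map x -> A x + b."""
    return np.linalg.solve(np.eye(len(b)) - A, b)

def iterate_affine(A, b, x, n):
    """Apply the affine map x -> A x + b, n times."""
    for _ in range(n):
        x = A @ x + b
    return x
```

With A = [[0.5, 0.1], [0.0, 0.3]] (eigenvalues 0.5 and 0.3, both inside the unit circle), any starting point converges to the fixed point at the geometric rate set by the largest eigenvalue.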
Rectification A flow in most small patches of the phase space can be made very simple. If y is a point where the vector field v(y) ≠ 0, then there is a change of coordinates for a region around y where the vector field becomes a series of parallel vectors of the same magnitude. This is known as the rectification theorem. The rectification theorem says that away from singular points the dynamics of a point in a small patch is a straight line. The patch can sometimes be enlarged by stitching several patches together, and when this works out in the whole phase space M the dynamical system is integrable. In most cases the patch cannot be extended to the entire phase space. There may be singular points in the vector field (where v(x) = 0); or the patches may become smaller and smaller as some point is approached. The more subtle reason is a global constraint, where the trajectory starts out in a patch, and after visiting a series of other patches comes back to the original one. If the next time the orbit loops around phase space in a different way, then it is impossible to rectify the vector field in the whole series of patches. Near periodic orbits In general, in the neighborhood of a periodic orbit the rectification theorem cannot be used. Poincaré developed an approach that transforms the analysis near a periodic orbit to the analysis of a map. Pick a point x0 in the orbit γ and consider the points in phase space in that neighborhood that are perpendicular to v(x0). These points are a Poincaré section S(γ, x0), of the orbit. The flow now defines a map, the Poincaré map F : S → S, for points starting in S and returning to S. Not all these points will take the same amount of time to come back, but the times will be close to the time it takes x0. The intersection of the periodic orbit with the Poincaré section is a fixed point of the Poincaré map F. By a translation, the point can be assumed to be at x = 0. 
The Taylor series of the map is F(x) = J · x + O(x^2), so a change of coordinates h can only be expected to simplify F to its linear part, h ∘ F = J ∘ h. This is known as the conjugation equation. Finding conditions for this equation to hold has been one of the major tasks of research in dynamical systems. Poincaré first approached it assuming all functions to be analytic and in the process discovered the non-resonant condition. If λ_1, ..., λ_ν are the eigenvalues of J, they will be resonant if one eigenvalue is an integer linear combination of two or more of the others. As terms of the form λ_i − Σ (multiples of other eigenvalues) occur in the denominator of the terms for the function h, the non-resonant condition is also known as the small divisor problem. Conjugation results The results on the existence of a solution to the conjugation equation depend on the eigenvalues of J and the degree of smoothness required from h. As J does not need to have any special symmetries, its eigenvalues will typically be complex numbers. When the eigenvalues of J are not in the unit circle, the dynamics near the fixed point x_0 of F is called hyperbolic and when the eigenvalues are on the unit circle and complex, the dynamics is called elliptic. In the hyperbolic case, the Hartman–Grobman theorem gives the conditions for the existence of a continuous function that maps the neighborhood of the fixed point of the map to the linear map J · x. The hyperbolic case is also structurally stable. Small changes in the vector field will only produce small changes in the Poincaré map and these small changes will reflect in small changes in the position of the eigenvalues of J in the complex plane, implying that the map is still hyperbolic. The Kolmogorov–Arnold–Moser (KAM) theorem gives the behavior near an elliptic point. Bifurcation theory When the evolution map Φ^t (or the vector field it is derived from) depends on a parameter μ, the structure of the phase space will also depend on this parameter.
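The resonance condition among eigenvalues described above can be checked by brute force. The sketch below (with illustrative eigenvalue sets, not taken from the text) searches for integer combinations λ_i = Σ m_j λ_j with Σ m_j ≥ 2:

```python
from itertools import product

def is_resonant(eigs, max_order=4, tol=1e-9):
    """True if some eigenvalue equals an integer combination
    m_1*l_1 + ... + m_n*l_n with non-negative integers m_j and sum(m) >= 2."""
    n = len(eigs)
    for lam in eigs:
        for m in product(range(max_order + 1), repeat=n):
            if sum(m) >= 2 and abs(lam - sum(mj * l for mj, l in zip(m, eigs))) < tol:
                return True
    return False

print(is_resonant([2.0, 4.0]))   # True: 4 = 2 + 2
print(is_resonant([0.3, 0.7]))   # False: non-resonant up to order 4
```

When a resonance is present, the corresponding denominator λ_i − Σ m_j λ_j in the series for h vanishes, which is exactly why the conjugation to the linear part can fail.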
Small changes may produce no qualitative changes in the phase space until a special value μ0 is reached. At this point the phase space changes qualitatively and the dynamical system is said to have gone through a bifurcation. Bifurcation theory considers a structure in phase space (typically a fixed point, a periodic orbit, or an invariant torus) and studies its behavior as a function of the parameter μ. At the bifurcation point the structure may change its stability, split into new structures, or merge with other structures. By using Taylor series approximations of the maps and an understanding of the differences that may be eliminated by a change of coordinates, it is possible to catalog the bifurcations of dynamical systems. The bifurcations of a hyperbolic fixed point x0 of a system family Fμ can be characterized by the eigenvalues of the first derivative of the system DFμ(x0) computed at the bifurcation point. For a map, the bifurcation will occur when there are eigenvalues of DFμ on the unit circle. For a flow, it will occur when there are eigenvalues on the imaginary axis. For more information, see the main article on Bifurcation theory. Some bifurcations can lead to very complicated structures in phase space. For example, the Ruelle–Takens scenario describes how a periodic orbit bifurcates into a torus and the torus into a strange attractor. In another example, Feigenbaum period-doubling describes how a stable periodic orbit goes through a series of period-doubling bifurcations. Ergodic systems In many dynamical systems, it is possible to choose the coordinates of the system so that the volume (really a ν-dimensional volume) in phase space is invariant. This happens for mechanical systems derived from Newton's laws as long as the coordinates are the position and the momentum and the volume is measured in units of (position) × (momentum). 
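The Feigenbaum period-doubling mentioned above can be observed numerically in the logistic map x ↦ r·x·(1 − x), a standard one-parameter family (an illustrative choice of map, not fixed by the text). Below the first bifurcation at r = 3 the attractor is a single fixed point; just above it, a period-2 orbit appears:

```python
def attractor(r, n_transient=2000, n_keep=64):
    """Iterate the logistic map past transients, then collect the attractor points."""
    x = 0.5
    for _ in range(n_transient):
        x = r * x * (1 - x)
    points = set()
    for _ in range(n_keep):
        x = r * x * (1 - x)
        points.add(round(x, 6))
    return sorted(points)

print(len(attractor(2.8)))   # 1: a stable fixed point
print(len(attractor(3.2)))   # 2: a period-2 orbit, after the first period-doubling
```

Raising r further produces period 4, 8, 16, ... at ever closer parameter values, the cascade that accumulates at the onset of chaos.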
The flow takes points of a subset A into the points Φ^t(A), and invariance of the volume in phase space means that vol(Φ^t(A)) = vol(A). In the Hamiltonian formalism, given a coordinate it is possible to derive the appropriate (generalized) momentum such that the associated volume is preserved by the flow. The volume is said to be computed by the Liouville measure. In a Hamiltonian system, not all possible configurations of position and momentum can be reached from an initial condition. Because of energy conservation, only the states with the same energy as the initial condition are accessible. The states with the same energy form an energy shell Ω, a sub-manifold of the phase space. The volume of the energy shell, computed using the Liouville measure, is preserved under evolution. For systems where the volume is preserved by the flow, Poincaré discovered the recurrence theorem: Assume the phase space has a finite Liouville volume and let F be a phase space volume-preserving map and A a subset of the phase space. Then almost every point of A returns to A infinitely often. The Poincaré recurrence theorem was used by Zermelo to object to Boltzmann's derivation of the increase in entropy in a dynamical system of colliding atoms. One of the questions raised by Boltzmann's work was the possible equality between time averages and space averages, what he called the ergodic hypothesis. The hypothesis states that the length of time a typical trajectory spends in a region A is vol(A)/vol(Ω). The ergodic hypothesis turned out not to be the essential property needed for the development of statistical mechanics and a series of other ergodic-like properties were introduced to capture the relevant aspects of physical systems. Koopman approached the study of ergodic systems by the use of functional analysis. An observable a is a function that associates a number to each point of the phase space (say instantaneous pressure, or average height).
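The Poincaré recurrence just stated can be demonstrated with a small volume-preserving map. The sketch below uses Arnold's cat map on the torus (an assumed illustrative system, not named in the text) with exact rational arithmetic, and watches a point return exactly to its starting position:

```python
from fractions import Fraction

def cat_map(p):
    """Arnold's cat map (x, y) -> (2x + y, x + y) mod 1; area-preserving (det = 1)."""
    x, y = p
    return ((2 * x + y) % 1, (x + y) % 1)

start = (Fraction(1, 101), Fraction(2, 101))
p, steps = start, 0
while steps < 20000:
    p = cat_map(p)
    steps += 1
    if p == start:
        break
print(steps)   # finite: the orbit returns exactly to its starting point
```

Rational starting points make the recurrence exact (the map permutes the finite set of points with denominator 101), while for typical points the theorem only guarantees returns arbitrarily close to the start.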
The value of an observable can be computed at another time by using the evolution function Φ^t. This introduces an operator U^t, the transfer operator, (U^t a)(x) = a(Φ^t(x)). By studying the spectral properties of the linear operator U it becomes possible to classify the ergodic properties of Φ^t. In using the Koopman approach of considering the action of the flow on an observable function, the finite-dimensional nonlinear problem involving Φ^t gets mapped into an infinite-dimensional linear problem involving U. The Liouville measure restricted to the energy surface Ω is the basis for the averages computed in equilibrium statistical mechanics. An average in time along a trajectory is equivalent to an average in space computed with the Boltzmann factor exp(−βH). This idea has been generalized by Sinai, Bowen, and Ruelle (SRB) to a larger class of dynamical systems that includes dissipative systems. SRB measures replace the Boltzmann factor and they are defined on attractors of chaotic systems. Nonlinear dynamical systems and chaos Simple nonlinear dynamical systems, including piecewise linear systems, can exhibit strongly unpredictable behavior, which might seem to be random, despite the fact that they are fundamentally deterministic. This unpredictable behavior has been called chaos. Hyperbolic systems are precisely defined dynamical systems that exhibit the properties ascribed to chaotic systems. In hyperbolic systems the tangent spaces perpendicular to an orbit can be decomposed into a combination of two parts: one with the points that converge towards the orbit (the stable manifold) and another of the points that diverge from the orbit (the unstable manifold). This branch of mathematics deals with the long-term qualitative behavior of dynamical systems.
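The "strongly unpredictable behavior" described above is easy to exhibit. Iterating the chaotic logistic map x ↦ 4x(1 − x) (a standard example, chosen here for illustration) from two initial conditions that differ by only 10⁻¹⁰ quickly produces order-one separation:

```python
# Two nearly identical initial conditions under the chaotic logistic map.
x, y = 0.2, 0.2 + 1e-10
max_sep = 0.0
for _ in range(100):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    max_sep = max(max_sep, abs(x - y))
print(max_sep)   # grows to order one: sensitive dependence on initial conditions
```

The separation roughly doubles each step (the Lyapunov exponent is ln 2), so the initial 10⁻¹⁰ discrepancy saturates to the size of the attractor after a few dozen iterations.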
Here, the focus is not on finding precise solutions to the equations defining the dynamical system (which is often hopeless), but rather to answer questions like "Will the system settle down to a steady state in the long term, and if so, what are the possible attractors?" or "Does the long-term behavior of the system depend on its initial condition?" The chaotic behavior of complex systems is not the issue. Meteorology has been known for years to involve complex—even chaotic—behavior. Chaos theory has been so surprising because chaos can be found within almost trivial systems. The Pomeau–Manneville scenario of the logistic map and the Fermi–Pasta–Ulam–Tsingou problem arose with just second-degree polynomials; the horseshoe map is piecewise linear. Solutions of finite duration For non-linear autonomous ODEs it is possible under some conditions to develop solutions of finite duration, meaning here that in these solutions the system will reach the value zero at some time, called an ending time, and then stay there forever after. This can occur only when system trajectories are not uniquely determined forwards and backwards in time by the dynamics; thus solutions of finite duration imply a form of "backwards-in-time unpredictability" closely related to the forwards-in-time unpredictability of chaos. This behavior cannot happen for Lipschitz continuous differential equations according to the proof of the Picard–Lindelöf theorem. These solutions are non-Lipschitz functions at their ending times and cannot be analytic functions on the whole real line.
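A numerical sketch of such a finite-duration solution, using the classic non-Lipschitz equation x′ = −sign(x)·√|x| (my illustrative choice; the specific equation in the text's example is elided in this extraction). Starting from x(0) = 1, the exact solution x(t) = (1 − t/2)² reaches zero at the ending time t = 2 and stays there, and a crude Euler integration finds that ending time:

```python
import math

def rhs(x):
    # Non-Lipschitz vector field: the square root has unbounded slope at x = 0.
    return -math.copysign(math.sqrt(abs(x)), x)

x, t, dt = 1.0, 0.0, 1e-4
while x > 0 and t < 10:
    x += dt * rhs(x)
    t += dt
print(round(t, 2))   # close to 2.0, the exact ending time
```

A Lipschitz right-hand side could only approach zero asymptotically; here the solution hits zero in finite time precisely because the field is not Lipschitz at the origin.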
As an example, the equation admits a finite-duration solution that is zero from its ending time onward and is not Lipschitz continuous at that ending time. See also Behavioral modeling Cognitive modeling Complex dynamics Dynamic approach to second language development Feedback passivation Infinite compositions of analytic functions List of dynamical system topics Oscillation People in systems and control Sharkovskii's theorem Conley's fundamental theorem of dynamical systems System dynamics Systems theory Principle of maximum caliber References online version of first edition on the EMIS site. Further reading Works providing a broad coverage: (available as a reprint: ) Encyclopaedia of Mathematical Sciences () has a sub-series on dynamical systems with reviews of current research. Introductory texts with a unique perspective: Textbooks Popularizations: External links Arxiv preprint server has daily submissions of (non-refereed) manuscripts in dynamical systems. Encyclopedia of dynamical systems A part of Scholarpedia — peer-reviewed and written by invited experts. Nonlinear Dynamics. Models of bifurcation and chaos by Elmer G. Wiens Sci.Nonlinear FAQ 2.0 (Sept 2003) provides definitions, explanations and resources related to nonlinear science Online books or lecture notes Geometrical theory of dynamical systems. Nils Berglund's lecture notes for a course at ETH at the advanced undergraduate level. Dynamical systems. George D. Birkhoff's 1927 book already takes a modern approach to dynamical systems. Chaos: classical and quantum. An introduction to dynamical systems from the periodic orbit point of view. Learning Dynamical Systems. Tutorial on learning dynamical systems. Ordinary Differential Equations and Dynamical Systems. Lecture notes by Gerald Teschl Research groups Dynamical Systems Group Groningen, IWI, University of Groningen. Chaos @ UMD. Concentrates on the applications of dynamical systems. , SUNY Stony Brook. Lists of conferences, researchers, and some open problems.
Center for Dynamics and Geometry, Penn State. Control and Dynamical Systems, Caltech. Laboratory of Nonlinear Systems, Ecole Polytechnique Fédérale de Lausanne (EPFL). Center for Dynamical Systems, University of Bremen Systems Analysis, Modelling and Prediction Group, University of Oxford Non-Linear Dynamics Group, Instituto Superior Técnico, Technical University of Lisbon Dynamical Systems , IMPA, Instituto Nacional de Matemática Pura e Applicada. Nonlinear Dynamics Workgroup , Institute of Computer Science, Czech Academy of Sciences. UPC Dynamical Systems Group Barcelona, Polytechnical University of Catalonia. Center for Control, Dynamical Systems, and Computation, University of California, Santa Barbara. Systems theory Mathematical and quantitative methods (economics)
Dynamical system
Physics,Mathematics
7,304
3,501,997
https://en.wikipedia.org/wiki/Conidiation
Conidiation is a biological process in which filamentous fungi reproduce asexually from spores. Rhythmic conidiation is the most obvious output of fungal circadian rhythms. Neurospora species are most often used to study this rhythmic conidiation. Physical stimuli, such as light exposure and mechanical injury to the mycelium, trigger conidiation; however, conidiogenesis itself is a holistic response determined by the cell's metabolic state, as influenced by the environment and endogenous biological rhythms. See also Conidium References Further reading Mycology
Conidiation
Biology
117
3,854,049
https://en.wikipedia.org/wiki/Annie%20Ernaux
Annie Thérèse Blanche Ernaux (; ; born 1 September 1940) is a French writer who was awarded the 2022 Nobel Prize in Literature "for the courage and clinical acuity with which she uncovers the roots, estrangements and collective restraints of personal memory". Her literary work, mostly autobiographical, maintains close links with sociology. Early life and education Ernaux was born in Lillebonne in Normandy, France, and grew up in nearby Yvetot, where her parents, Blanche (Dumenil) and Alphonse Duchesne, ran a café and grocery in a working-class part of town. In 1960, she travelled to London, England, where she worked as an au pair, an experience she would later relate in 2016's Mémoire de fille (A Girl's Story). Upon returning to France, she studied at the universities of Rouen and then Bordeaux, qualified as a schoolteacher, and earned a higher degree in modern literature in 1971. She worked for a time on a thesis project, unfinished, on Pierre de Marivaux. In the early 1970s, Ernaux taught at a lycée in Bonneville, Haute-Savoie, at the college of Évire in Annecy-le-Vieux, then in Pontoise, before joining the National Centre for Distance Education, where she was employed for 23 years. Literary career Ernaux started her literary career in 1974 with Les Armoires vides (Cleaned Out), an autobiographical novel. In 1984, she won the Renaudot Prize for another of her works La Place (A Man's Place), an autobiographical narrative focusing on her relationship with her father and her experiences growing up in a small town in France, and her subsequent process of moving into adulthood and away from her parents' place and her class of origin. Early in her career, Ernaux turned from fiction to focus on autobiography. Her work combines historic and individual experiences. 
She charts her parents' social progression (La Place, La Honte), her teenage years (Ce qu'ils disent ou rien), her marriage (La Femme gelée), her passionate affair with an Eastern European man (Passion simple), her abortion (L'Événement), Alzheimer's disease (Je ne suis pas sortie de ma nuit), the death of her mother (Une femme), and breast cancer (L'usage de la photo). Ernaux also wrote L'écriture comme un couteau (Writing as Sharp as a Knife) with Frédéric-Yves Jeannet. A Woman's Story (Une femme), A Man's Place, and Simple Passion were recognised as The New York Times Notable Books, and A Woman's Story was a finalist for the Los Angeles Times Book Prize. Shame was named a Publishers Weekly Best Book of 1998, I Remain in Darkness a Top Memoir of 1999 by The Washington Post, and The Possession was listed as a Top Ten Book of 2008 by More magazine. Ernaux's 2008 historical memoir Les Années (The Years), well received by French critics, is considered by many to be her magnum opus. In this book, Ernaux writes about herself in the third person ('elle', or 'she' in English) for the first time, providing a vivid look at French society just after the Second World War until the early 2000s. It is the story of a woman and of the evolving society she lived in. The Years won the 2008 , the 2008 Marguerite Duras Prize, the 2008 Prix de la langue française, the 2009 Télégramme Readers Prize, and the 2016 Strega European Prize. Translated by Alison L. Strayer, The Years was a finalist for the 31st Annual French-American Foundation Translation Prize, was nominated for the International Booker Prize in 2019, and won the 2019 Warwick Prize for Women in Translation. Her popularity in anglophone countries increased sharply after The Years was shortlisted for the International Booker. 
On 6 October 2022, it was announced that Ernaux would be awarded the 2022 Nobel Prize in Literature "for the courage and clinical acuity with which she uncovers the roots, estrangements and collective restraints of personal memory". Ernaux is the 16th French writer, and the first Frenchwoman, to receive the literature prize. In congratulating her, the president of France, Emmanuel Macron, said that she was the voice "of the freedom of women and of the forgotten". Many of Ernaux's works have been translated into English and published by Fitzcarraldo Editions and Seven Stories Press. Ernaux is one of the seven founding authors from whom the latter Press takes its name. Political activism Ernaux supported Jean-Luc Mélenchon in the 2012 French presidential election. In 2018, she expressed her support for the yellow vests protests. Ernaux has repeatedly indicated her support for the BDS movement, a Palestinian-led campaign promoting boycott, divestment and sanctions against Israel. In 2018, the author signed a letter alongside about 80 other artists that opposed the holding of the Israel–France cross-cultural season by the Israeli and French governments. In 2019, Ernaux signed a letter calling on a French state-owned broadcasting network not to air the Eurovision Song Contest, which was held in Israel that year. In 2021, after Operation Guardian of the Walls, she signed another letter that called Israel an apartheid state, claiming that "To frame this as a war between two equal sides is false and misleading. Israel is the colonizing power. Palestine is colonized." In October 2024, Ernaux signed an open letter alongside several thousand authors pledging to boycott Israeli cultural institutions. Ernaux signed a letter that supported the release of Georges Abdallah, who was sentenced to life imprisonment in 1982 for the assassination of an American military attaché, Lt. Col. Charles R. Ray, and an Israeli diplomat, Yacov Barsimantov.
According to the letter, the victims were "active Mossad and CIA agents, while Abdallah fought for the Palestinian people and against colonization". Following the announcement of the award of the Nobel Prize, Ernaux showed solidarity with the people's uprising in Iran against their government. The protests that followed the death of a young woman in the custody of the Guidance Patrol (Morality Police) initially started against the compulsory hijab law in Iran but soon took a broader focus on liberty. Ernaux said in an interview she was "absolutely in favour of women revolting against this absolute constraint". Personal life Ernaux was previously married to Philippe Ernaux, with whom she has two sons, Éric (born in 1964) and David (born in 1968). The couple divorced in 1981. She has been a resident of Cergy-Pontoise, a new town in the Paris suburbs, since the mid-1970s. Works Les Armoires vides, Paris: Gallimard, 1974; Gallimard, 1984, Ce qu'ils disent ou rien, Paris: Gallimard, 1977; French & European Publications, Incorporated, 1989, La Femme gelée, Paris: Gallimard, 1981; French & European Publications, Incorporated, 1987, La Place, Paris: Gallimard, 1983; Distribooks Inc, 1992, Une Femme, Paris: Gallimard, 1988 Passion simple, Paris: Gallimard, 1991; Gallimard, 1993, Journal du dehors, Paris: Gallimard, 1993 La Honte, Paris: Gallimard, 1997 Shame, translator Tanya Leslie, Seven Stories Press, 1998, Je ne suis pas sortie de ma nuit, Paris: Gallimard, 1997 La Vie extérieure : 1993–1999, Paris: Gallimard, 2000 L'Événement, Paris: Gallimard, 2000, Se perdre, Paris: Gallimard, 2001 Getting Lost, translator Alison L.
Strayer, Seven Stories Press, 2022 L'Occupation, Paris: Gallimard, 2002 L'Usage de la photo, with Marc Marie, Paris: Gallimard, 2005 Les Années, Paris: Gallimard, 2008, L'Autre fille, Paris: Nil 2011 L'Atelier noir, Paris: éditions des Busclats, 2011 Écrire la vie, Paris: Gallimard, 2011 Retour à Yvetot, éditions du Mauconduit, 2013 Regarde les lumières mon amour, Seuil, 2014 Mémoire de fille, Gallimard, 2016 Hôtel Casanova, Gallimard Folio, 2020 Le jeune homme, Gallimard, 2022 Adaptations In addition to numerous theatrical and radio adaptations, Ernaux's novels have been adapted for the cinema on three occasions: L'Événement (2021), released in English as Happening and directed by Audrey Diwan. It received the Golden Lion at the 2021 Venice Film Festival. Passion simple (2020; English title: Simple Passion) directed by Danielle Arbid. It was selected to be shown at that year's Cannes Film Festival. L'Autre (2008), based on L'Occupation and titled The Other One in English. Awards and distinctions 1977 Prix d'Honneur for Ce qu'ils disent ou rien 1984 Prix Renaudot for La Place 2008 Prix Marguerite-Duras for Les Années 2008 Prix François-Mauriac for Les Années 2008 Prix de la langue française for the entirety of her oeuvre 2014 Doctor honoris causa of Cergy-Pontoise University 2016 Strega European Prize for The Years (translated into Italian as Gli Anni) (L'Orma) 2017 Prix Marguerite Yourcenar, awarded by the Civil Society of Multimedia Authors, for the entirety of her oeuvre 2018 Premio Hemingway per la letteratura for the entirety of her oeuvre 2019 Prix Formentor 2019 Premio Gregor von Rezzori for Una Donna (Une Femme) 2019 Shortlisted for the International Booker Prize for The Years 2021 Elected a Royal Society of Literature International Writer 2022 Nobel Prize in Literature The Prix Annie-Ernaux, of which she is the "godmother", bears her name. 
References Further reading Loraine Day, Writing Shame and Desire: The Work of Annie Ernaux, Peter Lang, 2007 Alison Fell, Ernaux: La Place and La Honte; Grant and Cutler, Critical Guides to French Studies, 2006. Alison Fell and Edward Welch, "Annie Ernaux: Socio-Ethnographer of Contemporary France", Nottingham French Studies, June 2009. Pierre-Louis Fort (ed.), Annie Ernaux, L'Herne, 2022. Elise Hugueny-Léger, Annie Ernaux, une poétique de la transgression, Peter Lang, 2009. Siobhán McIlvanney, Annie Ernaux, The Return to Origins, Liverpool University Press, 2001. Lyn Thomas, Annie Ernaux: An Introduction to the Writer and her Audience, Berg, 1999. Lyn Thomas, Annie Ernaux, à la première personne, Stock, 2005. Lyn Thomas, "Voix blanche? Annie Ernaux, French feminisms and the challenge of intersectionality", in M. Atack, A. Fell, D.Holmes and I. Long (eds) Making Waves: French Feminisms and their Legacies 1975–2015.; Liverpool University Press, 2019, p. 201–214. S. J. McIlvanney, "Gendering mimesis. Realism and feminism in the works of Annie Ernaux and Claire Etcherelli". Graduate thesis, University of Oxford 1994 Sarah Elizabeth Cant, "Self-referentiality and the works of Annie Ernaux, Patrick Modiano, and Daniel Pennac". Thesis, University of Oxford, 2000 Georges Gaillard, "Traumatisme, solitude et auto-engendrement. Annie Ernaux: L'événement". Filigrane, écoutes psychothérapiques, 15, 1. Montréal, Spring 2006 en ligne; p. 67–86. 
Patrick Autréaux, Two Annies – 3:AM Magazine, 2024 External links Critical bibliography (Auteurs.contemporain.info) 1940 births 20th-century French novelists 20th-century French women writers 21st-century French novelists 21st-century French women writers French activists for Palestinian solidarity French anti-Zionists French communist writers French feminist writers French Nobel laureates French socialist feminists French socialists French women novelists Living people Nobel laureates in Literature People from Lillebonne Prix Renaudot winners University of Bordeaux alumni University of Rouen Normandy alumni Women Nobel laureates
Annie Ernaux
Technology
2,634
23,499,524
https://en.wikipedia.org/wiki/Hansen%27s%20problem
In trigonometry, Hansen's problem is a problem in planar surveying, named after the astronomer Peter Andreas Hansen (1795–1874), who worked on the geodetic survey of Denmark. There are two known points , and two unknown points . From and an observer measures the angles made by the lines of sight to each of the other three points. The problem is to find the positions of and . See figure; the angles measured are . Since it involves observations of angles made at unknown points, the problem is an example of resection (as opposed to intersection). Solution method overview Define the following angles: As a first step we will solve for and . The sum of these two unknown angles is equal to the sum of and , yielding the equation A second equation can be found more laboriously, as follows. The law of sines yields Combining these, we get Entirely analogous reasoning on the other side yields Setting these two equal gives Using a known trigonometric identity this ratio of sines can be expressed as the tangent of an angle difference: Where This is the second equation we need. Once we solve the two equations for the two unknowns , we can use either of the two expressions above for to find since is known. We can then find all the other segments using the law of sines. Solution algorithm We are given four angles and the distance . The calculation proceeds as follows: Calculate Calculate Let and then Calculate or equivalently If one of these fractions has a denominator close to zero, use the other one. Solutions via Geometric Algebra In addition to presenting algorithms for solving the problem via Vector Geometric Algebra and Conformal Geometric Algebra, Ventura et al. review previous methods, and compare the various methods' computational speeds and sensitivity to measurement error. See also Solving triangles Snell's problem References Trigonometry Surveying Mathematical problems
Hansen's problem
Mathematics,Engineering
376
34,381,085
https://en.wikipedia.org/wiki/Kepler-35
Kepler-35 is a binary star system in the constellation of Cygnus. These stars, called Kepler-35A and Kepler-35B, have masses of 89% and 81% solar masses respectively, and both are assumed to be of spectral class G. They are separated by 0.176 AU, and complete an eccentric orbit around a common center of mass every 20.73 days. Description The Kepler-35 system consists of two stars slightly less massive than the sun in a 21-day orbit aligned edge-on to us so that the stars eclipse each other. The orbit has a semi-major axis of and a mild eccentricity of 0.16. The precise measurements made by the Kepler satellite allow doppler beaming to be detected, as well as brightness variations due to the ellipsoidal shape of the stars and reflections of one star on the other. The primary star has a mass of and a radius fractionally larger than the sun. With an effective temperature of , its luminosity is . The secondary star has a mass of , a radius of , an effective surface temperature of , and a bolometric luminosity of . Planetary system Kepler-35b is a gas giant that orbits the two stars in the Kepler-35 system. The planet is over an eighth of Jupiter's mass and has a radius of 0.728 Jupiter radii. The planet completes a somewhat eccentric orbit every 131.458 days from a semimajor axis of just over 0.6 AU, only about 3.5 times the semi-major axis between the parent stars. The proximity and eccentricity of the binary, together with the two stars' similar masses, cause the planet's orbit to deviate significantly from a Keplerian orbit. Studies have suggested that this planet must have been formed outside its current orbit and migrated inwards later. The eccentricity of the planetary orbit was acquired during the last stage of migration, through interaction with the residual debris disk. Numerical simulations of the formation of the Kepler-35 planetary system have shown that the formation of additional rocky planets in the habitable zone is highly likely, and that such orbits would be stable.
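The quoted numbers are mutually consistent: applying Kepler's third law in solar units (P in years, a in AU, total mass in solar masses) to the 0.176 AU separation and the combined 1.70 solar masses recovers the 20.73-day binary period. This is a back-of-the-envelope check, treating the quoted separation as the semi-major axis of the relative orbit:

```python
import math

M_total = 0.89 + 0.81          # combined mass in solar masses (from the text)
a = 0.176                      # binary separation in AU (from the text)

P_years = math.sqrt(a**3 / M_total)   # Kepler's third law: P^2 = a^3 / M
P_days = P_years * 365.25
print(round(P_days, 1))        # about 20.7 days, matching the quoted 20.73-day period
```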
See also Kepler-16 Kepler-34 Kepler-38 References Further reading Cygnus (constellation) Eclipsing binaries Planetary transit variables 2937 G-type main-sequence stars Circumbinary planets Planetary systems with one confirmed planet J19375927+4641231
Kepler-35
Astronomy
495
32,412,281
https://en.wikipedia.org/wiki/Hydra%20in%20Chinese%20astronomy
The modern constellation Hydra lies across two of the quadrants, symbolized by the Azure Dragon of the East (東方青龍, Dōng Fāng Qīng Lóng) and the Vermilion Bird of the South (南方朱雀, Nán Fāng Zhū Què), that divide the sky in traditional Chinese uranography. The name of the western constellation in modern Chinese is 長蛇座 (cháng shé zuò), which means "the long snake constellation". Stars The map of Chinese constellation in constellation Hydra area consists of: See also Chinese astronomy Traditional Chinese star names Chinese constellations References External links Hydra – Chinese associations 香港太空館研究資源 中國星區、星官及星名英譯表 天象文學 台灣自然科學博物館天文教育資訊網 中國古天文 中國古代的星象系統 Astronomy in China Hydra (constellation)
Hydra in Chinese astronomy
Astronomy
189
11,127,278
https://en.wikipedia.org/wiki/Pair-instability%20supernova
A pair-instability supernova is a type of supernova predicted to occur when pair production, the production of free electrons and positrons in the collision between atomic nuclei and energetic gamma rays, temporarily reduces the internal radiation pressure supporting a supermassive star's core against gravitational collapse. This pressure drop leads to a partial collapse, which in turn causes greatly accelerated burning in a runaway thermonuclear explosion, resulting in the star being blown completely apart without leaving a stellar remnant behind. Pair-instability supernovae can only happen in stars with a mass range from around 130 to 250 solar masses and low to moderate metallicity (low abundance of elements other than hydrogen and helium – a situation common in Population III stars). Physics Photon emission Photons given off by a body in thermal equilibrium have a black-body spectrum with an energy density proportional to the fourth power of the temperature, as described by the Stefan–Boltzmann law. Wien's law states that the wavelength of maximum emission from a black body is inversely proportional to its temperature. Equivalently, the frequency, and the energy, of the peak emission is directly proportional to the temperature. Photon pressure in stars In very massive, hot stars with very high interior temperatures, photons produced in the stellar core are primarily in the form of very high-energy gamma rays. The pressure from these gamma rays fleeing outward from the core helps to hold up the upper layers of the star against the inward pull of gravity. If the level of gamma rays (the energy density) is reduced, then the outer layers of the star will begin to collapse inwards. Gamma rays with sufficiently high energy can interact with nuclei, electrons, or one another.
One of those interactions is to form pairs of particles, such as electron-positron pairs, and these pairs can also meet and annihilate each other to create gamma rays again, all in accordance with Albert Einstein's mass-energy equivalence equation E = mc². At the very high density of a large stellar core, pair production and annihilation occur rapidly. Gamma rays, electrons, and positrons are overall held in thermal equilibrium, ensuring the star's core remains stable. By random fluctuation, the sudden heating and compression of the core can generate gamma rays energetic enough to be converted into an avalanche of electron-positron pairs. This reduces the pressure. When the collapse stops, the positrons find electrons and the pressure from gamma rays is driven up again. The population of positrons provides a brief reservoir of new gamma rays as the expanding supernova's core pressure drops. Pair-instability As temperatures and gamma ray energies increase, more and more gamma ray energy is absorbed in creating electron–positron pairs. This reduction in gamma ray energy density reduces the radiation pressure that resists gravitational collapse and supports the outer layers of the star. The star contracts, compressing and heating the core, thereby increasing the rate of energy production. This increases the energy of the gamma rays that are produced, making them more likely to interact, and so increases the rate at which energy is absorbed in further pair production. As a result, the stellar core loses its support in a runaway process, in which gamma rays are created at an increasing rate; but more and more of the gamma rays are absorbed to produce electron–positron pairs, and the annihilation of the electron–positron pairs is insufficient to halt further contraction of the core. Finally, the thermal runaway ignites detonation fusion of oxygen and heavier elements.
When the temperature reaches the level at which electrons and positrons carry the same energy fraction as the gamma rays, pair production cannot increase any further; it is balanced by annihilation. Contraction no longer accelerates, but the core now produces much more energy than prior to collapse, and this results in a supernova: the outer layers of the star are blown away by the sudden large increase of power production in the core. Calculations suggest that so much of the outer layers is lost that the very hot core itself is no longer under sufficient pressure to keep it intact, and it is completely disrupted too. Stellar susceptibility For a star to undergo a pair-instability supernova, the increased creation of positron/electron pairs by gamma ray collisions must reduce outward pressure enough for inward gravitational pressure to overwhelm it. High rotational speed and/or metallicity can prevent this. Stars with these characteristics still contract as their outward pressure drops, but unlike their slower or less metal-rich cousins, these stars continue to exert enough outward pressure to prevent gravitational collapse. Stars formed by collision mergers, having a metallicity Z between 0.02 and 0.001, may end their lives as pair-instability supernovae if their mass is in the appropriate range. Very large high-metallicity stars are probably unstable due to the Eddington limit, and would tend to shed mass during the formation process. Stellar behavior Several sources describe the stellar behavior for large stars in pair-instability conditions. Below 100 solar masses Gamma rays produced by stars of fewer than 100 or so solar masses are not energetic enough to produce electron-positron pairs. Some of these stars will undergo supernovae of a different type at the end of their lives, but the causative mechanisms do not involve pair-instability.
100 to 130 solar masses These stars are large enough to produce gamma rays with enough energy to create electron-positron pairs, but the resulting net reduction in counter-gravitational pressure is insufficient to cause the core-overpressure required for supernova. Instead, the contraction caused by pair-creation provokes increased thermonuclear activity within the star that offsets the inward pressure and returns the star to equilibrium. It is thought that stars of this size undergo a series of these pulses until they shed sufficient mass to drop below 100 solar masses, at which point they are no longer hot enough to support pair-creation. Pulsing of this nature may have been responsible for the variations in brightness experienced by Eta Carinae in 1843, though this explanation is not universally accepted. 130 to 250 solar masses For very high-mass stars, with masses of at least 130 and up to perhaps roughly 250 solar masses, a true pair-instability supernova can occur. In these stars, the first time that conditions support pair production instability, the process runs out of control. The collapse proceeds to efficiently compress the star's core; the overpressure is sufficient to allow runaway nuclear fusion to burn it in several seconds, creating a thermonuclear explosion. With more thermal energy released than the star's gravitational binding energy, it is completely disrupted; no black hole or other remnant is left behind. This is predicted to contribute to a "mass gap" in the mass distribution of stellar black holes. (This "upper mass gap" is to be distinguished from a suspected "lower mass gap" in the range of a few solar masses.) In addition to the immediate energy release, a large fraction of the star's core is transformed to nickel-56, a radioactive isotope which decays with a half-life of 6.1 days into cobalt-56. Cobalt-56 in turn has a half-life of 77 days and decays further to the stable isotope iron-56 (see Supernova nucleosynthesis).
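The 56Ni → 56Co → 56Fe decay chain described above can be modelled with the standard Bateman equations for a two-step chain. This is a sketch using only the half-lives quoted in the text; the function name and the unit normalization (fraction of the initial nickel mass) are illustrative choices, not from the source.

```python
import math

# Half-lives from the text: 56Ni -> 56Co (6.1 days), 56Co -> 56Fe (77 days).
T_HALF_NI = 6.1
T_HALF_CO = 77.0
L_NI = math.log(2) / T_HALF_NI   # decay constant of 56Ni, per day
L_CO = math.log(2) / T_HALF_CO   # decay constant of 56Co, per day

def abundances(t, n0=1.0):
    """Bateman solution for the chain Ni -> Co -> Fe, starting from pure 56Ni.

    Returns the (Ni, Co, Fe) fractions at time t (days)."""
    ni = n0 * math.exp(-L_NI * t)
    co = n0 * L_NI / (L_CO - L_NI) * (math.exp(-L_NI * t) - math.exp(-L_CO * t))
    fe = n0 - ni - co   # whatever has left the chain is stable iron
    return ni, co, fe
```

Because the cobalt half-life (77 days) is much longer than the nickel half-life (6.1 days), the cobalt decay dominates the energy input after the first few weeks, which is consistent with the extended, months-long light curves discussed later in the article.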
For the hypernova SN 2006gy, studies indicate that perhaps 40 solar masses of the original star were released as Ni-56, almost the entire mass of the star's core regions. Collision between the exploding star core and gas it ejected earlier, and radioactive decay, release most of the visible light. 250 solar masses or more A different reaction mechanism, photodisintegration, follows the initial pair-instability collapse in stars of at least 250 solar masses. This endothermic (energy-absorbing) reaction absorbs the excess energy from the earlier stages before the runaway fusion can cause a hypernova explosion; the star then collapses completely into a black hole. Appearance Luminosity Pair-instability supernovae are popularly thought to be highly luminous. This is only the case for the most massive progenitors, since the luminosity depends strongly on the ejected mass of radioactive 56Ni. They can have peak luminosities of over 10^37 W, brighter than type Ia supernovae, but at lower masses peak luminosities are less than 10^35 W, comparable to or less than typical type II supernovae. Spectrum The spectra of pair-instability supernovae depend on the nature of the progenitor star. Thus they can appear as type II or type Ib/c supernova spectra. Progenitors with a significant remaining hydrogen envelope will produce a type II supernova, those with no hydrogen but significant helium will produce a type Ib, and those with no hydrogen and virtually no helium will produce a type Ic. Light curves In contrast to the spectra, the light curves are quite different from the common types of supernova. The light curves are highly extended, with peak luminosity occurring months after onset. This is due to the extreme amounts of 56Ni expelled and the optically dense ejecta, as the star is entirely disrupted. Remnant Pair-instability supernovae completely destroy the progenitor star and do not leave behind a neutron star or black hole.
The entire mass of the star is ejected, so a nebular remnant is produced and many solar masses of heavy elements are ejected into interstellar space. Pair-instability supernova candidates Some supernova candidates for classification as pair-instability supernovae include: SN 2006gy, SN 2007bi, SN 2213-1745, SN 1000+0216, SN 2010mb, OGLE14-073, SN 2016aps, SN 2016iet, and SN 2018ibb. See also Pair production Pulsational pair-instability supernova Thermal runaway Type Ia supernova, "thermonuclear supernova" Intermediate-mass black hole References External links List of possible pair-instability supernovae at The Open Supernova Catalog.
https://en.wikipedia.org/wiki/Postage%20stamp
A postage stamp is a small piece of paper issued by a post office, postal administration, or other authorized vendors to customers who pay postage (the cost involved in moving, insuring, or registering mail). Then the stamp is affixed to the face or address-side of any item of mail—an envelope or other postal cover (e.g., packet, box, mailing cylinder)—which they wish to send. The item is then processed by the postal system, where a postmark or cancellation mark—in modern usage indicating date and point of origin of mailing—is applied to the stamp and its left and right sides to prevent its reuse. Next the item is delivered to its address. Always featuring the name of the issuing nation (with the exception of the United Kingdom), a denomination of its value, and often an illustration of persons, events, institutions, or natural realities that symbolize the nation's traditions and values, every stamp is printed on a piece of usually rectangular, but sometimes triangular or otherwise shaped special custom-made paper whose back is either glazed with an adhesive gum or self-adhesive. Because governments issue stamps of different denominations in unequal numbers and routinely discontinue some lines and introduce others, and because of their illustrations and association with the social and political realities of the time of their issue, they are often prized for their beauty and historical significance by stamp collectors, whose study of their history and of mailing systems is called philately. Because collectors often buy stamps from an issuing agency with no intention to use them for postage, the revenues from such purchases and payments of postage can make them a source of net profit to that agency. On 1 May 1840, the Penny Black, the first adhesive postage stamp, was issued in the United Kingdom. Within three years postage stamps were introduced in Switzerland and Brazil, a little later in the United States, and by 1860, they were in 90 countries around the world. 
The first postage stamps did not need to show the issuing country, so no country name was included on them. Thus the United Kingdom remains the only country in the world to omit its name on postage stamps; the monarch's image signifies the United Kingdom as the country of origin. Invention Throughout modern history numerous methods were used to indicate that postage had been paid on a mailed item, so several different men have received credit for inventing the postage stamp. William Dockwra In 1680, William Dockwra, an English merchant in London, and his partner Robert Murray established the London Penny Post. The LPP was a mail system that delivered letters and small parcels inside the city of London for the sum of one penny. Confirmation of paid postage was indicated by the use of a hand stamp to frank the mailed item. Though this "stamp" was applied to the letter or parcel itself, rather than to a separate piece of paper, it is considered by many historians to be the world's first postage stamp. Lovrenc Košir In 1835, the civil servant Lovrenc Košir from Ljubljana in Austria-Hungary (now Slovenia), suggested the use of "artificially affixed postal tax stamps" using "gepresste Papieroblate" ("pressed paper wafers"), but although civil bureaucrats considered the suggestion in detail, it was not adopted. The 'Papieroblate' were to produce stamps as paper decals so thin as to prevent their reuse. Rowland Hill In 1836, Robert Wallace, a Member of (British) Parliament, gave Sir Rowland Hill numerous books and documents about the postal service, which Hill described as a "half hundred weight of material". After a detailed study, on 4 January 1837 Hill submitted a pamphlet entitled Post Office Reform: Its Importance and Practicability to the Chancellor of the Exchequer, Thomas Spring Rice, which was marked "private and confidential", and not released to the general public. 
The Chancellor summoned Hill to a meeting at which he suggested improvements and changes to be presented in a supplement, which Hill duly produced and submitted on 28 January 1837. Summoned to give evidence before the Commission for Post Office Enquiry on 13 February 1837, Hill read from the letter he wrote to the Chancellor, which included a statement saying that the notation of paid postage could be created "...by using a bit of paper just large enough to bear the stamp, and covered at the back with a glutinous wash..." This would eventually become the first unambiguous description of a modern adhesive postage stamp (though the term "postage stamp" originated at a later date). Shortly afterward, Hill's revision of the booklet, dated 22 February 1837, containing some 28,000 words, incorporating the supplement given to the Chancellor and statements he made to the commission, was published and made available to the general public. Hansard records that on 15 December 1837, Benjamin Hawes asked the Chancellor of the Exchequer "whether it was the intention of the Government to give effect to the recommendation of the Commissioners of the Post-office, contained in their ninth report relating to the reduction of the rates of postage, and the issuing of penny stamps?" Hill's ideas for postage stamps and charging paid-postage based on weight soon took hold, and were adopted in many countries throughout the world. With the new policy of charging by weight, using envelopes for mailing documents became the norm. Hill's brother Edwin invented a prototype envelope-making machine that folded paper into envelopes quickly enough to match the pace of the growing demand for postage stamps. Rowland Hill and the reforms he introduced to the United Kingdom postal system appear on several of its commemorative stamps.
James Chalmers In the 1881 book The Penny Postage Scheme of 1837, Scotsman Patrick Chalmers claimed that his father, James Chalmers, published an essay in August 1834 describing and advocating a postage stamp, but submitted no evidence of the essay's existence. Nevertheless, until he died in 1891, Patrick Chalmers campaigned to have his father recognized as the inventor of the postage stamp. The first independent evidence for Chalmers' claim is an essay, dated 8 February 1838 and received by the Post Office on 17 February 1838, in which he proposed adhesive postage stamps to the General Post Office. In this approximately 800-word document concerning methods of indicating that postage had been paid on mail he states: "Therefore, of Mr Hill's plan of a uniform rate of postage... I conceive that the most simple and economical mode... would be by Slips... in the hope that Mr Hill's plan may soon be carried into operation I would suggest that sheets of Stamped Slips should be prepared... then be rubbed over on the back with a strong solution of gum...". Chalmers' original document is now in the United Kingdom's National Postal Museum. Since Chalmers used the same postage denominations that Hill had proposed in February 1837, it is clear that he was aware of Hill's proposals, but whether he obtained a copy of Hill's booklet or simply read about it in one or both of the two detailed accounts (25 March 1837 and 20 December 1837) published in The Times is unknown. Neither article mentioned "a bit of paper just large enough to bear the stamp", so Chalmers could not have known that Hill had made such a proposal. This suggests that either Chalmers had previously read Hill's booklet and was merely elaborating Hill's idea, or he had independently developed the idea of the modern postage stamp. James Chalmers organized petitions "for a low and uniform rate of postage". The first such petition was presented in the House of Commons on 4 December 1837 (from Montrose). 
Further petitions which he organized were presented on 1 May 1838 (from Dunbar and Cupar), 14 May 1838 (from the county of Forfar), and 12 June 1839. At this same time, other groups organized petitions and presented them to Parliament. All petitions for consumer-oriented, low-cost, volume-based postal rates followed publication of Hill's proposals. Other claimants Other claimants include or have included John Gray of the British Museum; Samuel Forrester, a Scottish tax official; Charles Whiting, a London stationer; Samuel Roberts of Llanbrynmair, Wales; Francis Worrell Stevens, schoolmaster at Loughton; Ferdinand Egarter of Spittal, Austria; and Curry Gabriel Treffenberg from Sweden. History The nineteenth century Postage stamps have facilitated the delivery of mail since the 1840s. Before then, ink and hand-stamps (hence the word 'stamp'), usually made from wood or cork, were often used to frank the mail and confirm the payment of postage. The first adhesive postage stamp, commonly referred to as the Penny Black, was issued in the United Kingdom in 1840. The invention of the stamp was part of an attempt to improve the postal system in the United Kingdom of Great Britain and Ireland, which, in the early 19th century, was in disarray and rife with corruption. There are varying accounts of the inventor or inventors of the stamp. Before the introduction of postage stamps, mail in the United Kingdom was paid for by the recipient, a system that was associated with an irresolvable problem: the costs of delivering mail were not recoverable by the postal service when recipients were unable or unwilling to pay for delivered items, and senders had no incentive to restrict the number, size, or weight of items sent, whether or not they would ultimately be paid for. The postage stamp resolved this issue in a simple and elegant manner, with the additional benefit of room for an element of beauty to be introduced.
Concurrently with the first stamps, the United Kingdom offered wrappers for mail. Later related inventions include postal stationery such as prepaid-postage envelopes, post cards, lettercards, aerogrammes, and postage meters. The postage stamp afforded convenience for both the mailer and postal officials, more effectively recovered costs for the postal service, and ultimately resulted in a better, faster postal system. With the conveniences stamps offered, their use resulted in greatly increased mailings during the 19th and 20th centuries. Postage stamps released during this era were the most popular way of paying for mail; however, by the end of the 20th century they were rapidly being eclipsed by the use of metered postage and bulk mailing by businesses. As postage stamps with their engraved imagery began to appear on a widespread basis, historians and collectors began to take notice. The study of postage stamps and their use is referred to as philately. Stamp collecting can be both a hobby and a form of historical study and reference, as government-issued postage stamps and their mailing systems have always been involved with the history of nations. Although a number of people laid claim to the concept of the postage stamp, it is well documented that stamps were first introduced in the United Kingdom of Great Britain and Ireland on 1 May 1840 as a part of postal reforms promoted by Sir Rowland Hill. With its introduction the postage fee was paid by the sender and not the recipient, though it was still possible to send mail without prepaying. From when the first postage stamps were used, postmarks were applied to prevent the stamps being used again. The first stamp, the "Penny black", became available for purchase on 1 May 1840, to be valid as of 6 May 1840. Two days later, on 8 May 1840, the Two penny blue was introduced. The Penny black was sufficient for a letter less than half an ounce to be sent anywhere within the United Kingdom.
Both stamps included an engraving of the young Queen Victoria, without perforations, as the first stamps were separated from their sheets by cutting them with scissors. The first stamps did not need to show the issuing country, so no country name was included on them. The United Kingdom remains the only country to omit its name on postage stamps, using the reigning monarch's head as country identification. Following the introduction of the postage stamp in the United Kingdom, prepaid postage considerably increased the number of letters mailed. Before 1839, the number of letters sent in the United Kingdom was typically 76 million. By 1850, this increased five-fold to 350 million, continuing to grow rapidly until the end of the 20th century when newer methods of indicating the payment of postage reduced the use of stamps. Other countries soon followed the United Kingdom with their own stamps. The canton of Zürich in Switzerland issued the Zürich 4 and 6 rappen on 1 March 1843. Although the Penny black could be used to send a letter less than half an ounce anywhere within the United Kingdom, the Swiss did not initially adopt that system, instead continuing to calculate mail rates based on distance to be delivered. Brazil issued the Bull's Eye stamp on 1 August 1843. Using the same printer used for the Penny black, Brazil opted for an abstract design instead of the portrait of Emperor Pedro II, so his image would not be disfigured by a postmark. In 1845, some postmasters in the United States issued their own stamps, but it was not until 1847 that the first official United States stamps were issued: 5 and 10 cent issues depicting Benjamin Franklin and George Washington. A few other countries issued stamps in the late 1840s. The famous Mauritius "Post Office" stamps were issued by Mauritius in September 1847. Many others, such as India, started their use in the 1850s, and by the 1860s most countries issued stamps. Perforation of postage stamps began in January 1854. 
The first officially perforated stamps were issued in February 1854. Stamps from Henry Archer's perforation trials were issued in the last few months of 1850; during the 1851 parliamentary session at the House of Commons of the United Kingdom; and finally in 1853/54 after the United Kingdom government paid Archer £4,000 for his machine and the patent. The Universal Postal Union, established in 1874, prescribed that member nations should issue postage stamps only in quantities reflecting genuine postal use, and that no living persons should be taken as subjects. The latter rule lost its significance after World War I. The twentieth and twenty-first century After World War II, it became customary in some countries, especially small Arab nations, to issue postage stamps en masse once it was realized how profitable that was. During the 21st century, the amount of mail – and, accordingly, the use of postage stamps – has declined worldwide because of electronic mail and other technological innovations. Iceland has already announced that it will no longer issue new stamps for collectors because sales have decreased and there are enough stamps in stock. In 2013 the Dutch postal operator PostNL introduced Postzegelcodes, nine-character alphanumeric codes that are written as a 3×3 grid on the piece of mail as an alternative to stamps. In December 2020, 590,000 people sent cards with these handwritten codes. Design When the first postage stamps were issued in the 1840s, they followed an almost identical standard in shape, size and general subject matter. They were rectangular in shape. They bore the images of queens, presidents and other political figures. They also depicted the denomination of the postage paid and, with the exception of the United Kingdom, the name of the issuing country. Nearly all early postage stamps depict images of national leaders only. Soon after the introduction of the postage stamp, other subjects and designs began to appear.
Some designs were welcome, others widely criticized. For example, in 1869, the United States Post Office broke the tradition of depicting presidents or other famous historical figures, instead using other subjects including a train and a horse (see: 1869 Pictorial Issue). The change was greeted with general disapproval, and sometimes harsh criticism, from the American public. Perforations Perforations are small holes made between individual postage stamps on a sheet of stamps, facilitating separation of a desired number of stamps. The resulting frame-like, rippled edge surrounding the separated stamp is a characteristic hallmark of the appearance of a postage stamp. In the first decade of postage stamps' existence (depending on the country), stamps were issued without perforations. Scissors or other cutting mechanisms were required to separate a desired number of stamps from a full sheet. If cutting tools were not used, individual stamps were torn off; this is evidenced by the ragged edges of surviving examples. Mechanically separating stamps from a sheet proved an inconvenience for postal clerks and businesses, both dealing with large numbers of individual stamps on a daily basis. By 1850, methods such as rouletting wheels were being devised in an effort to make stamp separation more convenient and less time-consuming. The United Kingdom was the first country to issue postage stamps with perforations. The first machine specifically designed to perforate sheets of postage stamps was invented in London by Henry Archer, an Irish landowner and railroad man from Dublin, Ireland. The 1850 Penny Red was the first stamp to be perforated, during trials of Archer's perforating machine.
After a period of trial and error and modifications of Archer's invention, new machines based on the principles pioneered by Archer were purchased, and in 1854 the United Kingdom postal authorities started continuously issuing perforated postage stamps in the Penny Red and all subsequent designs. In the United States, the use of postage stamps caught on quickly and became more widespread when on 3 March 1851, the last day of its legislative session, Congress passed the Act of March 3, 1851 (An Act to reduce and modify the Rates of Postage in the United States). Similarly introduced on the last day of the Congressional session four years later, the Act of March 3, 1855 required the prepayment of postage on all mailings. Thereafter, postage stamp use in the United States quickly doubled, and by 1861 had quadrupled. In 1856, under the direction of Postmaster General James Campbell, Toppan and Carpenter (commissioned by the United States government to print United States postage stamps through the 1850s) purchased a rotary machine designed to separate stamps, patented in England in 1854 by William and Henry Bemrose, who were printers in Derby, England. The original machine cut slits into the paper rather than punching holes, but the machine was soon modified. The first stamp issue to be officially perforated, the 3-cent George Washington, was issued by the United States Post Office on 24 February 1857. Between 1857 and 1861, all stamps originally issued between 1851 and 1856 were reissued with perforations. Initial capacity was insufficient to perforate all stamps printed, thus perforated issues used between February and July 1857 are scarce and quite valuable.
Sierra Leone and Tonga have issued stamps in the shapes of fruit. Stamps that are printed on sheets are generally separated by perforations, though, more recently, with the advent of self-adhesive stamps that do not have to be moistened prior to affixing them, designs can incorporate smooth edges (although a purely decorative perforated edge is often present). Stamps are most commonly made from paper designed specifically for them, and are printed in sheets, rolls, or small booklets. Less commonly, postage stamps are made of materials other than paper, such as embossed foil (sometimes of gold). Switzerland made a stamp that contained a bit of lace and one of wood. The United States produced one of plastic. East Germany issued a stamp of synthetic chemicals. In the Netherlands a stamp was made of silver foil. Bhutan issued one with its national anthem on a playable record. Graphic characteristics The subjects found on the face of postage stamps are generally what defines a particular stamp issue to the public and are often a reason why they are saved by collectors or history enthusiasts. Graphical subjects found on postage stamps have ranged from the early portrayals of kings, queens and presidents to later depictions of ships, birds and satellites, famous people, historical events, comics, dinosaurs, hobbies (knitting, stamp collecting), sports, holiday themes, and a plethora of other subjects too numerous to list. Artists, designers, engravers and administrative officials are involved with the choice of subject matter and the method of printing stamps. Early stamp images were almost always produced from an engraving – a design etched into a steel die, which was then hardened and whose impression was transferred to a printing plate. Using an engraved image was deemed a more secure way of printing stamps, as it was nearly impossible for anyone but a master engraver to counterfeit a finely detailed image with raised lines.
In the mid-20th century, stamp issues produced by other forms of printing began to emerge, such as lithography, photogravure, intaglio and web offset printing. These later printing methods were less expensive and typically produced images of lesser quality. Scents Occasionally, postal authorities issue novelty "scented" or "aromatic" stamps which contain a scent, more readily apparent when rubbed. The effect is achieved by using ink which contains microcapsules that provide the desired fragrance when broken. The scent usually only lasts for a limited time after production, such as a few months or years. Such stamps are usually related to aromatic subjects including coffee, roses, grapes, chocolate, vanilla, cinnamon, pine needles or freshly baked bread. The first scented stamps were issued by Bhutan in 1973. Types Airmail stamp – for payment of airmail service. The term "airmail" or an equivalent is usually printed on special airmail stamps. Airmail stamps typically depict images of airplanes and/or famous pilots and were used when airmail was a special type of mail delivery separate from mail delivered by train, ship or automobile. Aside from mail with local destinations, today almost all other mail is transported by aircraft and thus airmail is now the standard method of delivery. Scott has a separate category and listing for United States Airmail Postage. Prior to 1940, the Scott Catalogue did not have a special designation for airmail stamps. The various major stamp catalogs have different numbering systems and may not always list airmail stamps the same way. 
ATM stamp – stamps dispensed by automated machines, with their value imprinted only at the time of purchase Booklet stamp – stamps produced and issued in booklet format Carrier's stamp Certified mail stamp Cinderella stamp (see also: Poster stamp) Coil stamps – tear-off stamps issued individually in a vending machine, or purchased in a roll Commemorative stamp – a stamp which is issued for a limited time to commemorate a person or event. Anniversaries of birthdays and historical events are among the most common examples. Computer vended postage – advanced secure postage that uses information-based indicia (IBI) technology. IBI uses a two-dimensional bar code (Datamatrix or PDF417) to encode the originating address, date of mailing, postage and a digital signature to verify the stamp. Customised stamp – a stamp on which the image can be chosen by the purchaser by sending in a photograph or by use of the computer. Some are not true stamps but technically meter labels. Definitive stamps – stamps for everyday postage, usually produced to meet current postal rates. They often have less appealing designs than commemoratives, though there are notable exceptions. The same design may be used for many years. The use of the same design over an extended period may lead to unintended colour varieties, which can make them just as interesting to philatelists as commemoratives. A good example would be the US 1903 regular issues, their designs being very picturesque and ornamental. Definitive stamps are often issued in a series of stamps with different denominations. Express mail stamp / special delivery stamp Late fee stamp – issued to show payment of a fee to allow inclusion of a letter or package in the outgoing dispatch although it has been turned in after the cut-off time Local post stamps – used on mail in a local post; a postal service that operates only within a limited geographical area, typically a city or a single transportation route.
Some local posts have been operated by governments, while others, known as private local posts, have been operated by for-profit companies. Make up stamp – a stamp with a very small value, used to make up the difference when postage rates are increased Military stamp – stamp for a country's armed forces, usually using a special postal system Minisheet – a commemorative issue smaller than a regular full sheet of stamps, but with more than one stamp. Minisheets often contain a number of different stamps and often have a decorative border. See also souvenir sheets Newspaper stamp – used to pay the cost of mailing newspapers and other periodicals Official mail stamp – issued for use by the government or a government agency Occupation stamp – a stamp issued for use by an occupying army, or by occupying authorities for use by civilians Non-denominated postage – a postage stamp that remains valid even after the price has risen. It is also known as a permanent or "forever" stamp. Overprint – a regularly issued stamp, such as a commemorative or a definitive issue, that has been changed after issuance by "printing over" some part of the stamp. Denominations can be changed in this manner. Perforated stamps – While this term usually refers to perforations around a stamp to divide a sheet into individual stamps, it can also be used for stamps perforated across the middle with letters or a pattern or monogram. These are known as "perfins". The modified stamps are usually purchased by corporations to guard against theft by employees. Personalised stamps – allow the user to add their own image Pneumatic post stamps – for mail sent using pressurized air tubes, only produced in Italy Postage and revenue stamps – stamps which were equally valid for postal and fiscal use Postage currency – postage stamps used as currency rather than as postage Postage due – a stamp showing that the full postage has not been paid, and indicating the amount owed.
The United States Post Office Department has issued "parcel post postage due" stamps.
Postal tax – a stamp indicating that a tax above the postage rate required for sending letters has been paid; this is often mandatory on mail issued on a particular day or for a few days
Poster stamp (see also: Cinderella stamp)
Self-adhesive stamp – not requiring moisture to stick; self-sticking
Semi-postal / charity stamp – a stamp with an additional charge for charity. The use of semi-postal stamps is at the option of the purchaser. Countries including Belgium and Switzerland often use charitable fund-raising designs, which are desirable to collectors.
Souvenir sheet – a commemorative issue in large format, valid for postage, often containing a perforated or imperforate stamp as part of its design. See also minisheet.
Specimen stamp – sent to postmasters and postal administrations so that they are able to identify valid stamps and avoid forgeries
Test stamp – a label not valid for postage, used by postal authorities to test sorting and cancelling machines, or machines that detect a stamp on an envelope; also known as dummy or training stamps
Variable value stamps – dispensed by machines that print the cost of the postage at the time the stamp is dispensed
War tax stamp – a variation on the postal tax stamp, used to defray the cost of war
Water-activated stamp – for many years, water-activated stamps were the only type available, so this term entered use only with the advent of self-adhesive stamps. The adhesive or gum on a water-activated stamp must be moistened, usually by licking, so the stamps are also known as "lick and stick".
Apart from these, there are also revenue stamps (used to collect taxes or fees on items like documents, tobacco, alcoholic drinks, hunting licenses, and medicines) and telegraph stamps (for sending telegrams), which fall in a separate category from postage stamps.
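The information-based indicia (IBI) scheme described under "Computer vended postage" pairs human-readable postage fields with a machine-verifiable signature. The sketch below uses an HMAC as a stand-in for the real public-key digital signature, and the field names and JSON layout are illustrative assumptions, not the actual USPS indicium format:

```python
import hashlib
import hmac
import json

def make_indicium(origin_zip: str, date: str, postage_cents: int, key: bytes) -> bytes:
    """Serialize the indicium fields and append an authentication tag.

    Real IBI indicia carry a public-key digital signature; an HMAC-SHA256
    tag stands in here purely for illustration.
    """
    payload = json.dumps(
        {"origin": origin_zip, "date": date, "postage": postage_cents},
        sort_keys=True,
    ).encode()
    tag = hmac.new(key, payload, hashlib.sha256).digest()
    # These bytes would then be rendered as a Data Matrix or PDF417 barcode.
    return payload + tag

def verify_indicium(blob: bytes, key: bytes) -> bool:
    """Recompute the tag over the payload and compare in constant time."""
    payload, tag = blob[:-32], blob[-32:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

Verification fails if any payload byte changes, which is what lets a postal service detect forged or altered indicia.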
First day covers
Postage stamps are first issued on a specific date, often referred to as the first day of issue. A first day cover usually consists of an envelope, a postage stamp and a postmark with the date of the stamp's first day of issue thereon. Starting in the mid-20th century, some countries began assigning the first day of issue to a place associated with the subject of the stamp design, such as a specific town or city. There are two basic types of first day covers (FDCs) noted by collectors. The first, and often the most desirable type among advanced collectors, is a cover sent through the mail in the course of everyday usage, without the intention of the envelope and stamp ever being retrieved and collected. The second type of FDC is often referred to as "philatelic": an envelope and stamp sent by someone with the intention of retrieving and collecting the mailed item at a later time and place. The envelope used for this type of FDC often bears a printed design or cachet of its own corresponding to the stamp's subject and is usually printed well in advance of the first day of issue. The latter type of FDC is usually far more common; it is usually inexpensive and relatively easy to acquire. Covers which were sent without any secondary purpose are considered non-philatelic and are often much more challenging to find and collect.
Souvenir or miniature sheets
Postage stamps are sometimes issued in souvenir sheets or miniature sheets containing one or a small number of stamps. Souvenir sheets typically include additional artwork or information printed on the selvage, the border surrounding the stamps. Sometimes the stamps make up a greater picture. Some countries, and some issues, are produced as individual stamps as well as sheets.
Stamp collecting
Stamp collecting is a hobby. Collecting is not the same as philately, which is defined as the study of stamps.
The creation of a valuable or comprehensive collection, however, may require some philatelic knowledge. Stamp collectors are an important source of revenue for some small countries that create limited runs of elaborate stamps designed mainly to be bought by stamp collectors. The stamps produced by these countries may far exceed their postal needs. Hundreds of countries, each producing scores of different stamps each year, resulted in 400,000 different types of stamps being in existence by 2000. Annual world output averages about 10,000 types. Some countries authorize the production of postage stamps that have no postal use, but are intended instead solely for collectors. Other countries issue large numbers of low-denomination stamps that are bundled together in starter packs for new collectors. Official reprints are often printed by companies who have purchased or contracted for those rights, and such reprints see no postal use. All of these stamps are often found "canceled to order", meaning they are postmarked without ever having passed through the postal system. Most national post offices produce stamps that would not be produced if there were no collectors, some to a far more prolific degree than others. Sales of stamps to collectors who do not use them for mailing can result in large profits. Examples of excessive issues have been the stamps produced by Nicholas F. Seebeck and stamps produced for the component states of the United Arab Emirates. Seebeck operated in the 1890s as an agent of the Hamilton Bank Note Company. He approached Latin American countries with an offer to produce their entire postage stamp needs for free. In return, he would have exclusive rights to market stamps to collectors. Each year a new issue would be produced, but would expire at the end of the year. This assured Seebeck of a continuing supply of remainders. In the 1960s, printers such as the Barody Stamp Company contracted to produce stamps for the separate Emirates and other countries.
The sparse population of the desert states made it wholly unlikely that many of these stamps would ever be used for mailing purposes, and earned them the name of the "sand dune" countries.
Famous stamps
Basel Dove
British Guiana 1c magenta
Hawaiian Missionaries
Inverted Head 4 Annas
Inverted Jenny
Mauritius "Post Office"
Penny Black
Red Revenue "Small One Dollar"
Scinde Dawk
Treskilling Yellow
Uganda Cowries
See also
Artistamp
Cancellation (mail)
Errors, freaks, and oddities
List of entities that have issued postage stamps (A–E)
List of entities that have issued postage stamps (F–L)
List of entities that have issued postage stamps (M–Z)
List of most expensive philatelic items
List of stamp catalogues
Mail Art
Philatelic fakes and forgeries
Stamp catalog
Notes
References
External links
Stamp Collecting News – Provides updates on new stamp issues from around the world
History of postage stamps and collecting of stamps
A Brief History Of Stamps
Paper products
Stamp
Philatelic terminology
British inventions
Scottish inventions
19th-century inventions
Postage stamp
Technology
6,628
35,389,201
https://en.wikipedia.org/wiki/Rhododendrin
Rhododendrin (betuloside) is an arylbutanoid glycoside and a phenylpropanoid, a type of natural phenol. It can be found in the leaves of Rhododendron aureum or in Cistus salviifolius. In vitro, it shows analgesic, anti-inflammatory and diuretic properties. References Phenylpropanoids Rhododendron 4-Hydroxyphenyl compounds
Rhododendrin
Chemistry
105
11,461,974
https://en.wikipedia.org/wiki/M-Xylene%20%28data%20page%29
This page provides supplementary chemical data on m-Xylene.
Material Safety Data Sheet
The handling of this chemical may incur notable safety precautions. It is highly recommended that you seek the Material Safety Data Sheet (MSDS) for this chemical from a reliable source such as SIRI, and follow its directions.
Structure and properties
Thermodynamic properties
Vapor pressure of liquid
Table data obtained from CRC Handbook of Chemistry and Physics, 44th ed.
Distillation data
See also: p-xylene (data page), o-xylene (data page)
Spectral data
References
Xylene
Chemical data pages cleanup
M-Xylene (data page)
Chemistry
127
10,426,606
https://en.wikipedia.org/wiki/Xylulose%205-phosphate
D-Xylulose 5-phosphate (D-xylulose-5-P) is an intermediate in the pentose phosphate pathway. It is a ketose sugar formed from ribulose 5-phosphate by ribulose-5-phosphate epimerase. In the non-oxidative branch of the pentose phosphate pathway, xylulose 5-phosphate acts as a donor of two-carbon ketol groups in transketolase reactions. Xylulose 5-phosphate also plays a crucial role in the regulation of glycolysis through its interaction with the bifunctional enzyme PFK2/FBPase2. Specifically, it activates a protein phosphatase, which then dephosphorylates PFK2/FBPase2; this inactivates the FBPase2 activity of the bifunctional enzyme and activates its PFK2 activity. As a result, the production of fructose 2,6-bisphosphate increases, ultimately leading to an upregulation of glycolysis. Although previously thought of mainly as an intermediate in the pentose phosphate pathway, recent research has reported that the sugar also has a role in gene expression, mainly by promoting the ChREBP transcription factor in the well-fed state. However, a more recent study showed that D-glucose 6-phosphate, rather than D-xylulose 5-phosphate, is essential for the activation of ChREBP in response to glucose. References Monosaccharide derivatives Organophosphates Pentose phosphate pathway
Xylulose 5-phosphate
Chemistry
340
1,814,446
https://en.wikipedia.org/wiki/Telecare
Telecare is technology-based healthcare, such as the monitoring of patients' vital signs, that enables people to remain safe and independent in their own homes. Devices may include health and fitness apps, such as exercise tracking tools and digital medication reminder apps, or technologies that provide early warning and detection. The use of sensors may be part of a package which can provide support for people with illnesses such as dementia, or people at risk of falling. Most telecare mitigates harm by reacting to untoward events and raising a help response quickly. Some telecare, such as safety confirmation and lifestyle monitoring, has a preventive function, in that a deterioration in the telecare user's wellbeing can be spotted at an early stage. Telecare is specifically different from telemedicine and telehealth: telecare refers to the idea of enabling people to remain independent in their own homes by providing person-centred technologies to support the individual or their carers. Mobile telecare is an emerging service in which state-of-the-art mobile devices with roaming SIMs allow a client to go outside their home while still having a 24/7 telecare service available to support them. A typical device of this kind is the Pebbell mobile GPS tracker. The meaning and usage of the term 'telecare' have not yet settled into consistent use. In the UK it is grounded in the social care framework and focuses on the meaning described above. In other countries 'telecare' may be applied to the practice of healthcare at a distance.
Uses of Telecare
In its simplest form, it can refer to a fixed or mobile telephone with a connection to a monitoring centre through which the user can raise an alarm. Technologically more advanced systems use sensors, whereby a range of potential risks can be monitored. These may include falls, as well as environmental changes in the home such as floods, fire and gas leaks.
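A minimal sketch of how such sensor events might be routed to a response. The event names, routing table, and class names are illustrative assumptions, not drawn from any real telecare product:

```python
class MonitoringCentre:
    """Stands in for the staffed 24-hour centre; routing rules are made up."""

    ROUTING = {
        "fall": "emergency services",
        "smoke": "emergency services",
        "flood": "local key holder",
        "door_open": "carer",
    }

    def handle(self, event: str) -> str:
        # Unknown events still get a human response rather than being dropped.
        return self.ROUTING.get(event, "operator follow-up")


class HomeUnit:
    """The central unit in the user's home that relays sensor signals."""

    def __init__(self, centre: MonitoringCentre):
        self.centre = centre

    def sensor_triggered(self, event: str) -> str:
        # A real unit dials out over a phone line or GSM; here it is a
        # direct method call for simplicity.
        return self.centre.handle(event)
```

The design point the sketch captures is that the sensor itself stays simple: it only raises an event, and all decision-making happens at the monitoring centre.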
Carers of people with dementia may be alerted if the person leaves the house or other defined area. When a sensor is activated it sends a radio signal to a central unit in the user's home, which then automatically calls a 24-hour monitoring centre where trained operators can take appropriate action, whether it be contacting a local key holder, doctor or the emergency services. Telecare also comprises standalone telecare, which does not send signals to a response centre but supports carers by providing local (in-house) alerts in a person's home to let the carer know when a person requires attention. It is important to note that telecare is not just a warning system for when someone strays from home, but also a preventive measure whereby people are brought back and kept in the community through regular communication. There is now a large range of telecare services available, some of the best known being the pendant alarm, mobile carephone system, pill dispenser, telephone prompt service, movement monitoring, fall detector and more. Multilingual telecare services have now been introduced, opening the service up to a wider audience. All play a role in maintaining people's independence and allowing people to stay in their own homes.
The future of Telecare
Technological advances open up the possibility of promoting independence and of providing care from the social initiative sector, which now contemplates eCare and navigation/positioning systems, such as GPS for people with dementia or other cognitive impairments.
Telecare in the UK
In 2005 the UK's Department of Health published Building Telecare in England to coincide with the announcement of a grant to help encourage its take-up by local councils with social care responsibilities. The Department of Health's Whole System Demonstrator (WSD) launched in May 2008.
It is the largest randomised control trial of telehealth and telecare in the world, involving 6191 patients and 238 GP practices across three sites: Newham, Kent and Cornwall. The trials were evaluated by City University London, the University of Oxford, the University of Manchester, the Nuffield Trust, Imperial College London and the London School of Economics. The WSD headline findings after the telehealth trial, involving 3154 patients, included these outcomes:
45% reduction in mortality rates
20% reduction in emergency admissions
15% reduction in A&E visits
14% reduction in elective admissions
14% reduction in bed days
8% reduction in tariff costs
The telecare findings were supposed to be published at some point in the future; in fact, they have never surfaced. Some patients are still hopeful that telecare will lead to substantial improvements in the quality of services. The research showed that the telecare approach was not cost-effective, with an incremental cost per QALY, when added to usual care, of £92,000. The Government's care services minister, Paul Burstow, stated in 2012 that telehealth and telecare would be extended over the next five years (2012–2017) to reach three million people. This ambition was formally abandoned in November 2013. In September 2014 NHS England announced a replacement, but much lower-profile, "technology enabled care services" programme.
See also
Assistive technology
Friendly caller program
Wandering (dementia)
References
External links
International Society for Telemedicine & eHealth
Telecommunication services
Welfare
Telehealth
Health informatics
Telecare
Biology
1,095
10,575,288
https://en.wikipedia.org/wiki/NGC%207354
NGC 7354 is a planetary nebula located in the northern circumpolar constellation of Cepheus, at a distance of approximately from the Sun. It was discovered by German-born astronomer William Herschel on November 3, 1787. John L. E. Dreyer described it as, "a planetary nebula, bright, small, round, pretty gradually a very little brighter middle". This nebula is the result of an aging star casting off its outer atmosphere. Overall the nebula is elliptical in form, with a complex interior structure having inner and outer shells, several bright equatorial knots, and two jet-like features near the nebula poles. The rim of the inner shell is ellipsoidal with an aspect ratio of 1.6 and a major axis spanning . The outer shell is more circular, and is approximately in diameter. The faint outer shell is expanding with a higher velocity than the inner shell, and the knots are moving at the same velocity as the outer shell. The outer shell has an estimated age of 2,500 years, while the inner shell is 1,600 years old. The morphological features of the nebula may be explained by an interacting binary star system with one of the pair passing through the asymptotic giant branch phase. The jets may be generated by an accretion disk surrounding the resulting white dwarf star. Additionally, an analysis of Gaia data suggests that the central star is binary. References External links Planetary nebulae 7354 Cepheus (constellation)
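The shell ages quoted for NGC 7354 follow from simple expansion kinematics: a shell expanding at a roughly constant speed has a kinematic age of t = R/v. A sketch of that calculation; the radius and velocity below are made-up placeholder values (the article's measured figures were lost in extraction), so the resulting age is illustrative only:

```python
# Kinematic age of an expanding nebular shell, assuming constant expansion:
#   t = R / v
SEC_PER_YEAR = 3.156e7   # seconds in a Julian year (approx.)
KM_PER_PC = 3.086e13     # kilometres in a parsec

def kinematic_age_yr(radius_pc: float, v_km_s: float) -> float:
    """Age in years of a shell of physical radius `radius_pc` (parsecs)
    expanding at `v_km_s` (km/s)."""
    return radius_pc * KM_PER_PC / v_km_s / SEC_PER_YEAR

# Hypothetical example: a 0.05 pc shell expanding at 25 km/s
# comes out at roughly two thousand years old.
age = kinematic_age_yr(0.05, 25.0)
```

Note that deriving the physical radius requires the (angular size, distance) pair, which is why the faster-expanding outer shell can still be older than the inner one: it is also much larger.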
NGC 7354
Astronomy
298
23,667,987
https://en.wikipedia.org/wiki/MOCADI
MOCADI is a Monte Carlo simulation program used to calculate the transport of charged particle beams, as well as of fragmentation and fission products from nuclear reactions in target materials, through ion-optical systems described by transfer matrices (including Taylor expansion coefficients up to third order) and through layers of matter. References Monte Carlo molecular modelling software
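In ion optics, a first-order transfer matrix maps a particle's transverse phase-space coordinates (x, x') from one plane of the beamline to another; a Monte Carlo code like MOCADI samples many particles and pushes each through such maps (extended, in MOCADI's case, by Taylor terms up to third order). The sketch below shows only the linear, first-order case for a field-free drift, and is not MOCADI's actual API:

```python
# First-order ion-optical transport: a drift of length L maps the
# phase-space vector (x, x') to (x + L*x', x').
def drift_matrix(L: float):
    """2x2 first-order transfer matrix for a field-free drift of length L."""
    return [[1.0, L],
            [0.0, 1.0]]

def transport(M, state):
    """Apply a 2x2 transfer matrix M to the state (x, x')."""
    x, xp = state
    return (M[0][0] * x + M[0][1] * xp,
            M[1][0] * x + M[1][1] * xp)

# A particle 1 mm off axis with 2 mrad divergence, after a 3 m drift
# (units: metres and radians):
x, xp = transport(drift_matrix(3.0), (0.001, 0.002))
```

Composite beamlines are handled by multiplying (or successively applying) the matrices of the individual elements, and the Monte Carlo aspect enters by drawing the initial (x, x') of each particle from the beam's phase-space distribution.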
MOCADI
Chemistry
67
36,872,124
https://en.wikipedia.org/wiki/Reach%20for%20the%20Stars%20%28will.i.am%20song%29
"Reach for the Stars" (and its instrumental-driven versions, subtitled "Mars Edition" and "NASA Edition") is a song written, produced and recorded by American recording artist will.i.am in commemoration of the landing of the Curiosity rover on the planet Mars. First released on August 28, 2012 as a promotional single, the song also appears on the deluxe edition of his fourth studio album #willpower (2013). "Reach for the Stars (Mars Edition)" became the first song in history to be broadcast from another planet, completing a journey of more than 300 million miles between Mars and Earth.
Background and development
"Reach for the Stars" was written in February 2011, after NASA asked will.i.am to write and produce a song for the Curiosity rover's landing on Mars. The songwriter said that the experience of discussing with NASA administrator Charles Bolden the possibility of broadcasting a song from Mars was "surreal". The song is part of NASA's educational outreach, with will.i.am stating that it "aims to encourage youth to study science." Rather than produce the song via computer, will.i.am said that he wanted to show "human collaboration", and the recording featured a 40-piece orchestra. He added that "people in my field aren't supposed to try and execute something classical, or orchestral, so I wanted to break that stigma, [and have something] that would be timeless and translated in different cultures." NASA confirmed during the Mars Science Laboratory launch tweet-up on November 24, 2011 that it had partnered with will.i.am to deliver a song for Curiosity's landing. After being uploaded to the rover, which landed near the equator of Mars, the song was broadcast live from the planet, completing a journey of more than 300 million miles (approximately 482 million kilometers). It became the first song in history to be broadcast from another planet and the second song to be broadcast in space, after the Beatles' "Across the Universe" was beamed into space by NASA in 2008.
Track listing
Digital download
"Reach for the Stars (Mars Edition)" – 4:21
Credits and personnel
will.i.am – co-writer, producer, recording
Jordan Miller – vocals
Lil Jon – background vocals
Dante Santiago – background vocals
Dr. Luke – producer
Release history
See also
Music in space
References
2012 singles
Will.i.am songs
Interscope Records singles
2012 songs
Songs written by will.i.am
Songs written by Dr. Luke
Song recordings produced by Dr. Luke
Music in space
Reach for the Stars (will.i.am song)
Astronomy
520
22,488,173
https://en.wikipedia.org/wiki/Gautieria%20sinensis
Gautieria sinensis is a species of hypogeal fungus in the family Gomphaceae. Gautieria sinensis is typically found between paving slabs in Eastern Europe. Local tradition holds that the fungus, when ingested, can cure impotence. References Gomphaceae Fungus species
Gautieria sinensis
Biology
64
27,139,110
https://en.wikipedia.org/wiki/Institute%20of%20Chemical%20Process%20Fundamentals
Institute of Chemical Process Fundamentals, Academy of Sciences of the Czech Republic, v.v.i. () is one of the six institutes belonging to the CAS chemical sciences section and is a research centre in fields including chemistry, biochemistry, catalysis and the environment. Its research topics include multiphase reaction systems for the design of chemical syntheses and chemical processes, the development of new materials, energetics, and protection of the environment. Its national and international reputation is attested by its participation in EU-financed research projects such as EUCAARI and MULTIPRO; the MATINOES project was evaluated as one of the 20 best projects of the 6th Framework Programme.
History
The institute was founded at the Czechoslovak Academy of Sciences in 1960 and, from its beginning, was intended to be a multidisciplinary research institution. Its founder and first director, Professor Vladimír Bažant, was a chemical technologist with a broad perspective who valued modern concepts without which the development of new processes would not be possible. This led him to invite Professor George L. Standart, a chemical engineer and a US native, who paved the way for the development of chemical engineering in the former Czechoslovakia in the 1950s and 60s. Chemical engineering research could not be done without a solid base in physical chemistry. This field of research was brought into the institute by the arrival in 1964 of Professor Eduard Hála and his team of physical chemists at the newly built site in the Prague suburban area of Suchdol-Lysolaje. Gradually, new branches of chemical engineering and chemical technology research were developed, such as reaction engineering, homogeneous catalysis, studies of non-Newtonian fluids, sublimation, separation processes, and the dynamics and control of chemical systems.
Most of these new topics were introduced as necessary support for a large, long-term project to develop a complete production technology for terephthalic acid and polyesters. After 1989, several restructurings were carried out that led to a gradual 50% reduction in staff, and the research was rationalized into the institute's present structure.
Present
The institute's research activities currently include the theory of chemical processes, especially in chemical engineering, physical chemistry, chemical technology and environmental engineering.
Main research activities
molecular theory and computer simulations of fluid systems
thermodynamics of fluid systems, PVT behaviour of pure compounds and mixtures, and phase equilibria
research and development of microreactors
fundamentals of processes using supercritical fluids
advanced catalytic processes, morphology and properties of catalysts, preparation of catalysts
study and preparation of nanomaterials and nanofibers
texture of porous substances and transport phenomena in porous substances
membrane separations, pervaporation and permeation
study and application of biocatalysts, bioremediation
structure, reactivity and catalytic activity of organometallic complexes
NMR spectroscopy
fluidized bed combustion and gasification
photochemical reactions in a microwave field and microwave technology
fluid dynamics and transport phenomena in multiphase systems
rheological properties of microdispersions and liquids
aerosol chemistry and physics
laser-induced chemical reactions and aerosol processes for the preparation of new compounds and composites
Organization structure
Management
Director: Ing. Michal Šyc, Ph.D.
Chairman of Institute Board: Dr. Ing. Vladimír Ždímal
Scientific Secretary: Ing. Vladimír Církva, Dr.
Research departments
Department of Membrane Separation Processes - Head: Ing. Pavel Izák, Ph.D., DSc.
Department of Aerosols Chemistry and Physics - Head: Dr. Ing.
Vladimír Ždímal
Department of Catalysis and Reaction Engineering - Head: Ing. Olga Šolcová, CSc.
Department of Multiphase Reactors - Head: Doc. Ing. Marek Růžička, CSc.
Department of Analytical Chemistry - Head: Ing. Jan Sýkora, Ph.D.
Department of Environmental Engineering - Head: Ing. Michal Šyc, Ph.D.
Department of Molecular and Mesoscopic Modelling - Head: prof. Ing. Lísal Martin, DSc.
Department of Laser Chemistry - Head: RNDr. Radek Fajgar, CSc.
Department of Advanced Materials and Organic Synthesis - Head: Ing. Jan Storch, Ph.D.
Department of Bioorganic Compounds and Nanocomposites - Head: Ing. Tomáš Strašák, Ph.D.
Supervisory board
Prof. Ing. Vladimír Mareček, DrSc. - chairman
Institute board
Dr. Ing. Vladimír Ždímal – chairman
Postgraduate studies
Postgraduate studies are accredited by the Ministry of Education, Youth, and Sports of the Czech Republic for joint programmes of ICPF with all the faculties of ICT Prague and other faculties of Czech universities in the following fields:
Chemical engineering
Physical chemistry
Organic technology
Organic chemistry
Inorganic chemistry
Biotechnology
Chemistry and technology of environmental protection
Research projects
ICPF research teams are currently working on dozens of basic and applied scientific projects financed from both national and foreign resources.
Selected topics in the following list show the breadth and multidisciplinarity of the research carried out in the institute's laboratories:
F3 Factory - Flexible, fast and future production processes
Study of polymeric membrane swelling and use of this effect to increase permeability
Separation of volatile organic compounds from air
Optimization of supercritical fluid extraction for maximal yield of biologically active substances from plants
Determination of the phase and state behaviour of fluids and fluid mixtures for processes at superambient conditions: molecular-based theory and experiment
Computer modelling of structural, dynamical and transport properties of fluids in nanospace
Preparation of hierarchic nanomaterials
HUGE2 - Hydrogen Oriented Underground Coal Gasification for Europe - Environmental and Safety Aspects
Special catalytic processes and materials
Modern theoretical methods for the analysis of chemical bonding
Supported oxidic catalysts containing a low amount of active species as catalysts for N2O decomposition
Reactive chemical barriers for decontamination of heavily polluted waters
Removing endocrine disruptors from wastewaters and drinking water using photocatalytic and biological processes
Transport and reaction processes in complex multiphase systems
Determination of the coalescence efficiency of bubbles in liquids
Wall effect in flowing microdisperse liquids: apparent slip and electrokinetic potential
Novel inorganic-organic hybrid nanomaterials
Release of hydrogen on formation of chemical bonds catalysed by titanium complexes
Whole-cell optical sensors
Preparation of helicene-based chiral stationary phases for HPLC
FLEXGAS - Near zero advanced fluidised bed gasification
Advanced methods of fluid and burner coal and biomass co-gasification
Waste as raw material and energy source
Development and validation of thermal desorption technology using microwave radiation
EUSAAR - European Supersites for Atmospheric Aerosol Research
Influence of surface processes and electromagnetic radiation on transfer phenomena in aerosol systems with nanoparticles and in porous bodies with nanopores
Development and application of new experimental methods to measure heterogeneous particles in superheated steam
Preparation of Ti/O/Si-based photocatalysts by laser-induced CVD and sol-gel techniques
See also
Academy of Sciences of the Czech Republic
External links
ICPF Homepage
Research institutes in the Czech Republic
Czech Academy of Sciences
1960 establishments in Czechoslovakia
Research institutes established in 1960
Chemical research institutes
Institute of Chemical Process Fundamentals
Chemistry
1,469
4,709,175
https://en.wikipedia.org/wiki/LPA512
LPA512 (Serbian ЛПА512) was an industrial programmable logic controller—a small (438 x 286 x 278 mm), portable computer developed by the Ivo Lola Ribar Institute of Serbia in 1986 as an enhancement to its prior product, PA512. It was first deployed in the Maribor car factory. References Portable computers Industrial automation
LPA512
Technology,Engineering
76
38,502,047
https://en.wikipedia.org/wiki/Heinz%20Isler
Heinz Isler (July 26, 1926 – June 20, 2009) was a Swiss structural engineer. He is famous for his thin concrete shells. Early life and education Heinz Isler was born in the municipality of Zollikon. He showed talent as an artist as a student, but his father advised him to seek a career in engineering first. Isler studied thin concrete shells at the Federal Institute of Technology (ETH) in Zurich. Career Upon graduating from the ETH in 1950 with a degree in civil engineering, Isler worked as a teaching assistant with Pierre Lardy, a professor at the ETH, from 1951 to 1953. He opened his own office in 1954 in Burgdorf, Switzerland. His first project as a shellbuilder was a concert hall roof for the Hotel Kreuz in Langenthal which was completed between 1954 and 1955. The form of the shell was loosely inspired by the shape of a plumped-up pillow on his bed. Death Isler died from a stroke on June 20, 2009 at the age of 82. Bibliography See also Christian Menn Othmar H. Ammann Robert Maillart References External links Heinz Isler information at Structurae Heinz Isler and Structural art 1926 births 2009 deaths ETH Zurich alumni Swiss civil engineers Structural engineers People from Meilen District
Heinz Isler
Engineering
271
37,601,769
https://en.wikipedia.org/wiki/HD%2040307%20f
HD 40307 f is an extrasolar planet orbiting the star HD 40307. It is located 42 light-years away in the direction of the southern constellation Pictor. The planet was discovered by the radial velocity method, using the European Southern Observatory's HARPS apparatus, by a team of astronomers led by Mikko Tuomi at the University of Hertfordshire and Guillem Anglada-Escudé of the University of Göttingen, Germany. The existence of the planet was confirmed in 2015.
Planetary characteristics
This planet is the fifth from the star, at a distance of about 0.25 AU (compared to 0.39 AU for Mercury), with negligible eccentricity. HD 40307 f's minimum mass is 5.2 times that of Earth, and dynamical models suggest it cannot be much more massive (so the orbit is measured to be close to edge-on). Planets of this kind in the system have been presumed to be "super-Earths". Even though HD 40307 f is closer to its star than Mercury is to the Sun, it receives slightly less insolation than Mercury does, because the parent star is dimmer than the Sun. It still receives more heat than Venus does (like Gliese 581 c), and it has more gravitational potential than Venus. HD 40307 f is therefore more likely a super-Venus than a "super-Earth". Moreover, planets b, c, and d are presumed to have migrated in from outer orbits, and planet b is predicted to be a sub-Neptune.
References
External links
HD 40307
Pictor
Exoplanets discovered in 2012
Exoplanets detected by radial velocity
Super-Earths
Exoplanets in the Gliese Catalog
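The insolation comparison in the article follows from the inverse-square law: flux in Earth units is S/S_Earth = (L/L_Sun)/(a/AU)^2. A sketch of that arithmetic; the luminosity used for HD 40307 is an assumed round value of 0.23 L_Sun for illustration (a plausible figure for a K dwarf, but not one taken from the article):

```python
def relative_insolation(lum_solar: float, a_au: float) -> float:
    """Stellar flux at orbital distance `a_au`, in units of Earth's
    insolation: S/S_Earth = (L/L_Sun) / (a/AU)^2."""
    return lum_solar / a_au ** 2

# Mercury around the Sun, for comparison (a = 0.387 AU):
s_mercury = relative_insolation(1.0, 0.387)

# HD 40307 f at 0.25 AU, with an ASSUMED stellar luminosity of 0.23 L_Sun
# (illustrative placeholder, not a measured value from this article):
s_f = relative_insolation(0.23, 0.25)
```

With these inputs the planet sits inside Mercury's orbital distance yet receives less flux than Mercury, which is the qualitative point the article makes about the dimmer parent star.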
HD 40307 f
Astronomy
352
7,127,508
https://en.wikipedia.org/wiki/Open%20Prosthetics%20Project
The Open Prosthetics Project (OPP) is an open design effort dedicated to public domain prosthetics. By creating an online collaboration between prosthetic users and designers, the project aims to make new technology available for anyone to use and customize. On the project's website, medical product designers can post new ideas for prosthetic devices as CAD files, which are then available to the public free of charge. Prosthetic users or other designers can download the computer-aided design (CAD) data, customize or improve upon the prosthesis, and repost the modifications to the website. Users are free to take the 3D models to a fabricator and have the hardware built at lower cost than buying a manufactured limb. The project was started by Jonathon Kuniholm, a member of the United States Marine Corps Reserve who lost part of his right arm to an improvised explosive device (IED) in Iraq. Upon returning home and receiving his first myoelectric hand, he decided there must be a better solution. References Sources Public domain Prosthetics Medical and health organizations based in North Carolina Open content projects Open-source hardware
Open Prosthetics Project
Engineering,Biology
240
41,510,897
https://en.wikipedia.org/wiki/Hemichrome
A hemichrome (FeIII) is a form of low-spin methemoglobin (metHb). Hemichromes, which precede the denaturation processes of hemoglobin (Hb), are mainly produced by partially denatured hemoglobins and form histidine complexes. Hemichromes are usually associated with blood disorders. Types of hemichromes Hemichromes can be classified into two main categories: reversible and irreversible. Reversible hemichromes (Hch-1) have the ability to return to their native conformation (hemoglobin). Some hemichromes can be reduced to the high-spin state of deoxyhemoglobin, while others are first reduced to hemochromes (FeII) and then to deoxyhemoglobin through anaerobic dialysis. Photolysis of the CO–hemochrome complex in the presence of oxygen can quickly convert the hemochrome to oxyhemoglobin (HbO2). Irreversible hemichromes (Hch-2) cannot be converted to their native form. Both reversible and irreversible hemichromes degrade at a similar rate during proteolysis, and both have a lower percentage of alpha helices. Hemichrome in bloodstains Once blood exits the body, its hemoglobin transitions from bright red to dark brown, which is attributed to the oxidation of oxyhemoglobin (HbO2) to methemoglobin (met-Hb), ending up as hemichrome (HC). For forensic purposes, the fractions of HbO2, met-Hb and HC in a bloodstain, measured with reflectance spectroscopy, can be used to determine the age of the bloodstain. Hemichrome stability Hemichromes form an insoluble macromolecular aggregate by copolymerizing with the cytoplasmic domain of band 3. Covalent bonds reinforce the aggregate interactions of the hemichromes, which accumulate on the surface of the membrane. However, hemichromes are less stable than their native form. Normal formation Hemoglobin A in humans can form hemichromes even under physiological conditions, as a result of pH and temperature alterations and the autoxidation of oxyhemoglobin.
Hemichrome formation, followed by band 3 clustering and the formation of Heinz bodies, can take place during the physiological clearance of damaged red blood cells. The difference between a normal red blood cell (RBC) and one with unstable hemoglobin (as in hemolytic anaemia) is that, in a normal RBC, the formation of Heinz bodies is significantly delayed. In cells with unstable hemoglobin, hemichromes form soon after the cell has been released into the bloodstream and precipitate on the membrane's surface. Abnormal formation When hemoglobin is exposed to certain conditions, reversible or irreversible hemichromes are formed. Reversible hemichrome formation occurs in the presence of: fatty acids; aliphatic alcohols (n-butanol); dehydration; high concentrations of glycerol; polyethylene glycol. Irreversible hemichrome formation occurs in the presence of: phenylhydrazine; sodium dodecyl sulphate (SDS). References Hemoproteins Cellular respiration
Hemichrome
Chemistry,Biology
762
23,470,492
https://en.wikipedia.org/wiki/Bidomain%20model
The bidomain model is a mathematical model of the electrical activity of the heart. It is a continuum (volume-averaged) approach in which the cardiac microstructure is described in terms of muscle fibers grouped in sheets, creating a complex three-dimensional structure with anisotropic properties. The electrical activity is then described over two interpenetrating domains, the intracellular and extracellular domains, representing respectively the space inside the cells and the region between them. The bidomain model was first proposed by Schmitt in 1969 before being formulated mathematically in the late 1970s. Since it is a continuum model, rather than describing each cell individually, it represents the average properties and behaviour of groups of cells organized in a complex structure. The model is thus a complex one, and can be seen as a generalization of cable theory to higher dimensions, leading to the so-called bidomain equations. Many of the interesting properties of the bidomain model arise from the condition of unequal anisotropy ratios. The electrical conductivity in anisotropic tissues is not the same in all directions: it differs parallel and perpendicular to the fiber direction. Moreover, in tissues with unequal anisotropy ratios, the ratio of the conductivities parallel and perpendicular to the fibers is different in the intracellular and extracellular spaces. For instance, in cardiac tissue, the anisotropy ratio in the intracellular space is about 10:1, while in the extracellular space it is about 5:2. Mathematically, unequal anisotropy ratios mean that the effect of anisotropy cannot be removed by a change of the distance scale in one direction; instead, the anisotropy has a more profound influence on the electrical behavior.
Three examples of the impact of unequal anisotropy ratios are the distribution of the transmembrane potential during unipolar stimulation of a sheet of cardiac tissue, the magnetic field produced by an action potential wave front propagating through cardiac tissue, and the effect of fiber curvature on the transmembrane potential distribution during an electric shock. Formulation Bidomain domain The bidomain domain consists principally of two main regions: the cardiac cells, called the intracellular domain, and the space surrounding them, called the extracellular domain. In addition, another region is usually considered, called the extramyocardial region. The intracellular and extracellular domains, which are separated by the cellular membrane, are considered to occupy a single physical space representing the heart (Ω_H), while the extramyocardial domain is a physical space adjacent to it (Ω_T). The extramyocardial region can be treated as a fluid bath, especially when one wants to simulate experimental conditions, or as a human torso to simulate physiological conditions. The boundaries of the two principal physical domains are important for solving the bidomain model. Here the heart boundary is denoted by ∂Ω_H, while the torso domain boundary is ∂Ω_T. Unknowns and parameters The unknowns in the bidomain model are three: the intracellular potential v_i, the extracellular potential v_e, and the transmembrane potential v, which is defined as the difference of the potential across the cell membrane, v = v_i − v_e. Moreover, some important parameters need to be taken into account, especially the intracellular conductivity tensor Σ_i and the extracellular conductivity tensor Σ_e. The transmembrane current flows between the intracellular and extracellular regions, and is in part described by the corresponding ionic current over the membrane per unit area, I_ion.
Moreover, the membrane capacitance per unit area, C_m, and the surface-to-volume ratio of the cell membrane, χ, need to be considered to derive the bidomain model formulation, which is done in the following section. Standard formulation The bidomain model is defined through two partial differential equations (PDEs): the first is a reaction–diffusion equation in terms of the transmembrane potential, while the second computes the extracellular potential for a given transmembrane potential distribution. Thus, the bidomain model can be formulated as follows: ∇·(Σ_i∇v) + ∇·(Σ_i∇v_e) = χ(C_m ∂v/∂t + I_ion) − I_s1 and ∇·(Σ_i∇v) + ∇·((Σ_i + Σ_e)∇v_e) = −I_s2, where I_s1 and I_s2 can be defined as applied external stimulus currents. Ionic current equation The ionic current is usually represented by an ionic model through a system of ordinary differential equations (ODEs). Mathematically, one can write I_ion = I_ion(v, w), where w is called the ionic (gating) variable, and in general the system reads ∂w/∂t = F(v, w). Different ionic models have been proposed: phenomenological models, which are the simplest ones and are used to reproduce the macroscopic behavior of the cell; and physiological models, which take into account both macroscopic behaviour and cell physiology, with a quite detailed description of the most important ionic currents. Model of an extramyocardial region In some cases an extramyocardial region is considered, which implies adding to the bidomain model an equation describing the potential propagation inside the extramyocardial domain. Usually, this equation is a simple generalized Laplace equation of the type ∇·(Σ_0∇v_0) = 0, where v_0 is the potential in the extramyocardial region and Σ_0 is the corresponding conductivity tensor. Moreover, an isolated-domain assumption is made, which means that the boundary condition Σ_0∇v_0 · n = 0 on ∂Ω_T is added, n being the unit normal directed outside of the extramyocardial domain. If the extramyocardial region is the human torso, this model gives rise to the forward problem of electrocardiology.
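The phenomenological ionic models mentioned above can be illustrated by the FitzHugh–Nagumo model, the classic two-variable example. The sketch below integrates it with explicit Euler; the parameter values are the textbook choices, used here purely as an illustration:

```python
# FitzHugh–Nagumo: a two-variable phenomenological ionic model
#   dv/dt = v - v^3/3 - w + I_app      (fast "membrane potential" variable)
#   dw/dt = eps * (v + a - b * w)      (slow recovery/gating variable)
# Textbook parameter values, illustrative only.
a, b, eps, I_app = 0.7, 0.8, 0.08, 0.5
dt, steps = 0.05, 4000

v, w = 0.0, 0.0
peak = v
for _ in range(steps):
    dv = v - v**3 / 3.0 - w + I_app
    dw = eps * (v + a - b * w)
    v, w = v + dt * dv, w + dt * dw   # explicit Euler step
    peak = max(peak, v)

print(f"peak v = {peak:.2f}")   # sustained firing; the upstroke peaks near v ≈ 1.9
```

With this constant applied current the model sits on a limit cycle, producing the repeated firing that a phenomenological I_ion contributes to the bidomain reaction–diffusion system.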
Derivation The bidomain equations are derived from Maxwell's equations of electromagnetism, under some simplifications. The first assumption is that the intracellular current can flow only between the intracellular and extracellular regions, while the extracellular and extramyocardial regions can communicate with each other, so that current can flow into and from the extramyocardial region, but only through the extracellular space. Using Ohm's law and a quasi-static assumption, an electric field E can be described by the gradient of a scalar potential field v, i.e. E = −∇v. Then, if J denotes the current density of the electric field, two equations can be obtained: J_i = −Σ_i∇v_i and J_e = −Σ_e∇v_e, where the subscripts i and e denote the intracellular and extracellular quantities respectively. The second assumption is that the heart is isolated, so that current leaving one region must flow into the other. The current density in each of the intracellular and extracellular domains must therefore be equal in magnitude but opposite in sign, and can be defined in terms of the product of the surface-to-volume ratio of the cell membrane, χ, and the transmembrane ionic current density per unit area, I_m, which means that ∇·J_i = −∇·J_e = −χ I_m. By combining the previous assumptions, the conservation of current densities is obtained, namely ∇·(Σ_i∇v_i) = χ I_m = −∇·(Σ_e∇v_e), from which, summing the two equations, ∇·(Σ_i∇v_i) + ∇·(Σ_e∇v_e) = 0. This equation states exactly that all currents exiting one domain must enter the other. From here, the second equation of the bidomain model follows easily: knowing that the transmembrane potential is defined as v = v_i − v_e, one can substitute v_i = v + v_e and obtain ∇·(Σ_i∇v) + ∇·((Σ_i + Σ_e)∇v_e) = 0. Then, knowing the transmembrane potential, one can recover the extracellular potential. The current that flows across the cell membrane can be modelled with the cable equation, I_m = C_m ∂v/∂t + I_ion. Combining this with the conservation relation gives ∇·(Σ_i∇v_i) = χ(C_m ∂v/∂t + I_ion). Finally, substituting v_i = v + v_e on the left and rearranging, one gets the first equation of the bidomain model, ∇·(Σ_i∇v) + ∇·(Σ_i∇v_e) = χ(C_m ∂v/∂t + I_ion), which describes the evolution of the transmembrane potential in time.
The final formulation described in the standard formulation section is obtained through a generalization, considering possible external stimuli, which are given through the externally applied currents I_s1 and I_s2. Boundary conditions In order to solve the model, boundary conditions are needed. The more classical boundary conditions are the following, formulated by Tung. First of all, as stated in the derivation section, there cannot be any flow of current between the intracellular and extramyocardial domains. This can be described mathematically as Σ_i∇v_i · n = 0 on ∂Ω_H, where n is the outward unit normal to the myocardial surface of the heart. Since the intracellular potential is not explicitly present in the bidomain formulation, this condition is usually described in terms of the transmembrane and extracellular potentials, knowing that v_i = v + v_e, namely Σ_i∇v · n = −Σ_i∇v_e · n. For the extracellular potential, if the extramyocardial region is present, a balance of the flow between the extracellular and extramyocardial regions is considered: Σ_e∇v_e · n = −Σ_0∇v_0 · n_0 on ∂Ω_H. Here the normal vectors from the perspective of both domains are considered, so the negative sign is necessary. Moreover, perfect transmission of the potential on the cardiac boundary is required, which gives v_e = v_0 on ∂Ω_H. Instead, if the heart is considered isolated, meaning that no extramyocardial region is present, a possible boundary condition for the extracellular problem is Σ_e∇v_e · n = 0 on ∂Ω_H. Reduction to monodomain model By assuming equal anisotropy ratios for the intra- and extracellular domains, i.e.
Σ_e = λΣ_i for some scalar λ, the model can be reduced to a single equation, called the monodomain equation, χ(C_m ∂v/∂t + I_ion) = ∇·(Σ∇v) + I_s, where the only variable is now the transmembrane potential, and the conductivity tensor Σ is a combination of Σ_i and Σ_e, namely Σ = (λ/(1 + λ))Σ_i. Formulation with boundary conditions in an isolated domain If the heart is considered an isolated tissue, meaning that no current can flow outside of it, the final formulation with boundary conditions reads: find v and v_e such that ∇·(Σ_i∇v) + ∇·(Σ_i∇v_e) = χ(C_m ∂v/∂t + I_ion) − I_s1 and ∇·(Σ_i∇v) + ∇·((Σ_i + Σ_e)∇v_e) = −I_s2 in Ω_H, with Σ_i∇(v + v_e) · n = 0 and Σ_e∇v_e · n = 0 on ∂Ω_H. Numerical solution There are various techniques for solving the bidomain equations, among them finite difference schemes, finite element schemes, and finite volume schemes. Special considerations must be made in the numerical solution of these equations, due to the high temporal and spatial resolution needed for numerical convergence. See also Monodomain model Forward problem of electrocardiology References External links Scholarpedia article about the bidomain model Cardiac electrophysiology Electrophysiology Partial differential equations Mathematical modeling Numerical analysis
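As a minimal illustration of the finite-difference schemes mentioned above, the sketch below solves a 1D monodomain-style equation with a simple bistable (cubic) reaction term standing in for a full ionic model, using explicit Euler in time and centered differences in space. All parameter values and the simplified I_ion are illustrative assumptions, not a calibrated cardiac model:

```python
# 1D monodomain-type equation: chi * (Cm * dv/dt + I_ion(v)) = d/dx (sigma * dv/dx),
# with a bistable reaction I_ion(v) = k * v * (v - alpha) * (v - 1).
# Explicit Euler in time, centered second differences in space, no-flux ends.
# Dimensionless illustrative units; dt < dx^2 / (2 * sigma) for stability.
N, dx, dt, steps = 200, 0.05, 0.001, 10000
sigma, chi, Cm = 1.0, 1.0, 1.0
k, alpha = 8.0, 0.15

v = [1.0 if i < 10 else 0.0 for i in range(N)]   # depolarizing stimulus at the left end

for _ in range(steps):
    lap = [0.0] * N
    for i in range(1, N - 1):                    # discrete d/dx(sigma dv/dx) / sigma
        lap[i] = (v[i - 1] - 2 * v[i] + v[i + 1]) / dx**2
    lap[0] = (v[1] - v[0]) / dx**2               # zero-gradient (no-flux) boundaries
    lap[-1] = (v[-2] - v[-1]) / dx**2
    for i in range(N):
        I_ion = k * v[i] * (v[i] - alpha) * (v[i] - 1.0)
        v[i] += dt * (sigma * lap[i] / chi - I_ion) / Cm

print(f"v at cable midpoint: {v[N // 2]:.2f}")   # the front has passed: v ≈ 1
```

The stimulated region nucleates a traveling depolarization front that sweeps the cable, the 1D analogue of the wave-front propagation the bidomain and monodomain equations describe.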
Bidomain model
Mathematics
1,941
61,116,424
https://en.wikipedia.org/wiki/Oxyfluorfen
Oxyfluorfen is a chemical compound used as an herbicide. It is manufactured by Dow AgroSciences, Adama Agricultural Solutions and 4Farmers under the trade names Goal, Galigan, and Oxyfluorfen 240. Oxyfluorfen is used to control broadleaf and grassy weeds in a variety of nut, tree fruit, vine, and field crops, especially wine grapes and almonds. It is also used for residential weed control. Toxicity Oxyfluorfen has low acute oral, dermal, and inhalation toxicity in humans. The primary toxic effects are in the liver and alterations in blood parameters (anemia). It is classified as a possible human carcinogen. Its LD50 is over 5000 mg/kg. Environmental impact Oxyfluorfen is classified as an environmental hazard under the GHS due to being "very toxic to aquatic life with long lasting effects". Oxyfluorfen is toxic to plants, invertebrates, and fish. Birds and mammals may also experience subchronic and chronic effects from oxyfluorfen. It is persistent in soil and has been shown to drift from application sites to nearby areas. It can contaminate surface water through spray drift and runoff. Oxyfluorfen's waterborne LC50 for trout is less than 0.5 mg/L. Mode of action Oxyfluorfen is a diphenyl ether herbicide and acts via inhibition of protoporphyrinogen oxidase (disrupting chlorophyll production and destroying cell membranes), making its HRAC resistance class Group G (Aus), Group E (Global) and 14 (numerical). Oxyfluorfen suffers from poor translocation, despite rapid shoot and foliar uptake. Desiccation in affected weeds begins within hours, with necrosis and death following in days. Application Oxyfluorfen is used in the USA and Australia at rates of up to 1500 g/Ha.
It has been used on crops of tree fruit, nuts, onion, tobacco, vines, almonds, apples, apricots, grapevine, macadamias, peaches, pears, pecans, plums, walnuts, Duboisia, avocado, custard apple, kiwi fruit, longan, lychees, mango, passionfruit, pawpaw, rambutan, Brassica crops, broccoli, cabbages, cauliflower, pyrethrum and (before sowing) cotton or winter cereals. References Herbicides Chloroarenes Trifluoromethyl compounds Nitrobenzene derivatives Ethoxy compounds Diphenyl ethers
Oxyfluorfen
Biology
568
2,651,165
https://en.wikipedia.org/wiki/2%2C4-Dinitrotoluene
2,4-Dinitrotoluene (DNT) or dinitro is an organic compound with the formula C7H6N2O4. This pale yellow crystalline solid is well known as a precursor to trinitrotoluene (TNT) but is mainly produced as a precursor to toluene diisocyanate. Isomers of dinitrotoluene Six positional isomers are possible for dinitrotoluene. The most common one is 2,4-dinitrotoluene. The nitration of toluene gives sequentially mononitrotoluene, DNT, and finally TNT. 2,4-DNT is the principal product of dinitration, the other main product being about 30% 2,6-DNT. The nitration of 4-nitrotoluene gives 2,4-DNT. Applications Most DNT is used in the production of toluene diisocyanate, which is used to produce flexible polyurethane foams. DNT is hydrogenated to produce 2,4-toluenediamine, which in turn is phosgenated to give toluene diisocyanate. In this way, about 1.4 billion kilograms were produced annually as of the years 1999–2000. Other uses include the explosives industry. It is not used by itself as an explosive, but some of the production is converted to TNT. Dinitrotoluene is frequently used as a plasticizer, deterrent coating, and burn rate modifier in propellants (e.g., smokeless gunpowders). As it is carcinogenic and toxic, modern formulations tend to avoid its use. In this application it is often used together with dibutyl phthalate. Toxicity Dinitrotoluenes are highly toxic, with a threshold limit value (TLV) of 1.5 mg/m3. They convert hemoglobin into methemoglobin. 2,4-Dinitrotoluene is also a listed hazardous waste under 40 CFR 261.24. Its United States Environmental Protection Agency (EPA) Hazardous Waste Number is D030. The maximum concentration a waste may contain without exhibiting the toxicity characteristic is 0.13 mg/L. References External links Explosive chemicals IARC Group 2B carcinogens Nitrotoluenes Plasticizers
2,4-Dinitrotoluene
Chemistry
506
6,286,051
https://en.wikipedia.org/wiki/Blue%E2%80%93white%20screen
The blue–white screen is a screening technique that allows for the rapid and convenient detection of recombinant bacteria in vector-based molecular cloning experiments. This method of screening is usually performed using a suitable bacterial strain, but other organisms such as yeast may also be used. The DNA of interest is ligated into a vector, which is then inserted into a competent host cell viable for transformation; the cells are then grown in the presence of X-gal. Cells transformed with vectors containing recombinant DNA will produce white colonies; cells transformed with non-recombinant plasmids (i.e. only the vector) grow into blue colonies. Background Molecular cloning is one of the most commonly used procedures in molecular biology. A gene of interest may be inserted into a plasmid vector via ligation, and the plasmid is then transformed into Escherichia coli cells. However, not all the plasmids transformed into cells may contain the desired gene insert, and checking each individual colony for the presence of the insert is time-consuming. A method for detecting the insert is therefore useful for making this procedure less time- and labor-intensive. One of the early methods developed for detecting the insert is blue–white screening, which allows for identification of successful products of cloning reactions through the colour of the bacterial colony. The method is based on the principle of α-complementation of the β-galactosidase gene. This phenomenon of α-complementation was first demonstrated in work done by Agnes Ullmann in the laboratory of François Jacob and Jacques Monod, where the function of an inactive mutant β-galactosidase with a deleted sequence was shown to be rescued by a fragment of β-galactosidase in which that same sequence, the α-donor peptide, is still intact. Langley et al.
showed that the mutant non-functional β-galactosidase was lacking part of its N-terminus, with residues 11–41 deleted, but that it could be complemented by a peptide formed of residues 3–90 of β-galactosidase. An M13 filamentous phage containing the sequence coding for the first 145 amino acids was later constructed by Messing et al., and α-complementation via the use of a vector was demonstrated by the formation of blue plaques when cells containing the inactive protein were infected by the phage and then grown in plates containing X-gal. The pUC series of plasmid cloning vectors by Vieira and Messing was developed from the M13 system, and these were the first plasmids constructed to take advantage of this screening method. In this method, DNA ligated into the plasmid disrupts the α-peptide and therefore the complementation process, so no functional β-galactosidase can form. Cells transformed with a plasmid containing an insert therefore form white colonies, while cells transformed with a plasmid without an insert form blue colonies; a successful ligation can thus be easily identified by the white coloration of the colonies, which distinguishes them from the blue unsuccessful ones. Molecular mechanism β-galactosidase is a protein encoded by the lacZ gene of the lac operon, and it exists as a homotetramer in its active state. However, a mutant β-galactosidase derived from the M15 strain of E. coli has its N-terminal residues 11–41 deleted, and this mutant, the ω-peptide, is unable to form a tetramer and is inactive. This mutant form of the protein may, however, return fully to its active tetrameric state in the presence of an N-terminal fragment of the protein, the α-peptide. The rescue of function of the mutant β-galactosidase by the α-peptide is called α-complementation. In this method of screening, the host E.
coli strain carries the lacZ deletion mutant (lacZΔM15), which contains the ω-peptide, while the plasmids used carry the lacZα sequence, which encodes the first 59 residues of β-galactosidase, the α-peptide. Neither is functional by itself. However, when the two peptides are expressed together, as when a plasmid containing the lacZα sequence is transformed into lacZΔM15 cells, they form a functional β-galactosidase enzyme. The blue–white screening method works by disrupting this α-complementation process. The plasmid carries within the lacZα sequence an internal multiple cloning site (MCS). This MCS within the lacZα sequence can be cut by restriction enzymes so that foreign DNA may be inserted within the lacZα gene, thereby disrupting production of the α-peptide. Consequently, in cells containing the plasmid with an insert, no functional β-galactosidase may be formed. The presence of an active β-galactosidase can be detected by X-gal, a colourless analog of lactose that may be cleaved by β-galactosidase to form 5-bromo-4-chloro-indoxyl, which then spontaneously dimerizes and oxidizes to form the bright blue insoluble pigment 5,5'-dibromo-4,4'-dichloro-indigo. This results in a characteristic blue colour in cells containing a functional β-galactosidase. Blue colonies therefore indicate that the cells may contain a vector with an uninterrupted lacZα (and therefore no insert), while white colonies, in which X-gal is not hydrolyzed, indicate the presence of an insert in lacZα that disrupts the formation of an active β-galactosidase. The recombinant clones can be further analyzed by isolating and purifying small amounts of plasmid DNA from the transformed colonies, and restriction enzymes can be used to cut the clone and determine whether it has the fragment of interest. If the DNA needs to be sequenced, the plasmids from the colonies will have to be isolated at some point, whether for cutting with restriction enzymes or for performing other assays.
Practical considerations The correct type of vector and competent cells are important considerations when planning a blue–white screen. The plasmid must contain the lacZα sequence; examples of such plasmids are pUC19 and pBluescript. The E. coli cell should contain the mutant lacZ gene with the deleted sequence (i.e. lacZΔM15); some of the commonly used cells with such a genotype are JM109, DH5α, and XL1-Blue. It should also be understood that the lac operon is affected by the presence of glucose. The protein EIIAGlc, which is involved in glucose import, shuts down lactose permease when glucose is being transported into the cell. The medium used in the agar plate therefore should not include glucose. X-gal is light-sensitive, so its solution and plates containing X-gal should be stored in the dark. Isopropyl β-D-1-thiogalactopyranoside (IPTG), which functions as the inducer of the lac operon, may be used in the media to enhance the expression of LacZ. X-gal is an expensive material, so other methods have been developed to screen bacteria. GFP has been developed as one alternative. The concept is similar to α-complementation in that a DNA insert can disrupt the coding sequence within a vector and thus disrupt GFP production, resulting in non-fluorescing bacteria. Bacteria with recombinant vectors (vector + insert) will not express GFP and will appear white, while bacteria with non-recombinant vectors (vector only) will express GFP and fluoresce under UV light. GFP in general has been used as a reporter gene with which researchers can definitively determine whether a clone carries the gene they are analyzing. On occasion, the medium in which the colonies grow can influence the screen and introduce false-positive results.
X-gal on the medium can occasionally degrade to produce a blue color, and GFP can lose its fluorescence because of the medium; either effect can impair researchers' ability to distinguish colonies carrying the desired recombinant from those that do not possess it. Drawbacks Some white colonies may not contain the desired recombinant plasmid for a number of reasons. The ligated DNA may not be the correct one or may not be properly ligated, and it is possible for some linearized vector to be transformed, its ends "repaired" and ligated together such that no LacZα is produced and no blue colonies may be formed. Mutation can also lead to the α-fragment not being expressed. A colony with no vector at all will also appear white, and may sometimes appear as a satellite colony after the antibiotic used has been depleted. It is also possible for blue colonies to contain the insert. This occurs when the insert is "in frame" with the LacZα gene and no STOP codon is present in the insert. This can lead to the expression of a fusion protein that has a functional LacZα if its structure is not disrupted. The correct recombinant construct can sometimes give lighter blue colonies, which may complicate its identification. See also Complementation test pBLU pGreen pUC19 Recombinant DNA References Genetic engineering Genetics techniques
Blue–white screen
Chemistry,Engineering,Biology
1,977
72,628,560
https://en.wikipedia.org/wiki/F200DB-045
F200DB-045 is a candidate high-redshift galaxy, with an estimated redshift of approximately z = 20.4, corresponding to 168 million years after the Big Bang. If confirmed, it would be one of the earliest and most distant galaxies known. F200DB-045 would have a light-travel distance (lookback time) of 13.7 billion years and, due to the expansion of the universe, a present proper distance of 36.1 billion light-years. Nonetheless, the redshift value presented by the procedure in one study may differ from the values obtained in other studies using different procedures. Discovery The candidate high-redshift galaxy F200DB-045 was discovered in data from the Early Release Observations (ERO) obtained using the Near Infrared Camera of the James Webb Space Telescope (JWST) in July 2022. These data included the nearby galaxy cluster SMACS J0723.3–7327, a massive cluster known as a possible "cosmic telescope" for amplifying background galaxies, including F200DB-045. Distance Only a photometric redshift has been determined for F200DB-045; follow-up spectroscopic measurements will be required to confirm the redshift (see spectroscopic redshift). Spectroscopy could also determine the chemical composition, size and temperature of the galaxy. If confirmed, the galaxy may have existed in its star-formation phase in the early universe, when it would have been composed mostly of dust as well as young and massive Population III stars. See also CEERS-93316 Earliest galaxies GLASS-z12 HD1 (galaxy) JADES-GS-z13-0 List of the most distant astronomical objects Peekaboo Galaxy References Astronomical objects discovered in 2022 Galaxies Discoveries by the James Webb Space Telescope Volans
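The distances quoted above follow from the redshift under a standard flat ΛCDM cosmology. The sketch below integrates the Friedmann equation numerically; the parameter values (H0 = 67.7 km/s/Mpc, Ωm = 0.31, Ωr ≈ 9.1e-5) are Planck-like assumptions rather than values stated in the text, so the results only approximately reproduce the quoted 13.7 Gyr lookback time and 36.1 Gly proper distance:

```python
import math

# Flat Lambda-CDM with Planck-like parameters (assumed, not from the article)
H0 = 67.7                     # Hubble constant, km/s/Mpc
Om, Orad = 0.310, 9.1e-5      # matter and radiation densities today
OL = 1.0 - Om - Orad          # dark energy (flatness)
HUBBLE = 977.8 / H0           # 1/H0 in Gyr; numerically also c/H0 in Gly

def E(a):
    """Dimensionless Hubble rate H(a)/H0 at scale factor a."""
    return math.sqrt(Om / a**3 + Orad / a**4 + OL)

def simpson(f, lo, hi, n=20000):
    """Composite Simpson's rule (n even)."""
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    s += 4 * sum(f(lo + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(lo + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

def age_gyr(a):
    """Cosmic age at scale factor a: (1/H0) * integral of da'/(a' E(a'))."""
    return HUBBLE * simpson(lambda x: 1.0 / (x * E(x)), 1e-9, a)

def comoving_gly(z):
    """Comoving distance to z: (c/H0) * integral of da'/(a'^2 E(a'))."""
    a = 1.0 / (1.0 + z)
    return HUBBLE * simpson(lambda x: 1.0 / (x**2 * E(x)), a, 1.0)

z = 20.4
age_at_z = age_gyr(1.0 / (1.0 + z))    # cosmic age at emission, ~0.17 Gyr
lookback = age_gyr(1.0) - age_at_z     # light-travel time, ~13.6 Gyr
distance = comoving_gly(z)             # present proper distance, ~36 Gly
print(f"age at z={z}: {1000 * age_at_z:.0f} Myr, "
      f"lookback: {lookback:.2f} Gyr, comoving: {distance:.1f} Gly")
```

Small differences from the article's figures (168 Myr, 13.7 Gyr, 36.1 Gly) reflect the choice of cosmological parameters.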
F200DB-045
Astronomy
392
39,762,654
https://en.wikipedia.org/wiki/Resurs-P%20No.1
Resurs-P No.1 was a Russian commercial Earth observation satellite capable of acquiring high-resolution imagery (resolution up to 1.0 m). It was one of a series of Resurs-P spacecraft. The spacecraft was operated by Roscosmos as a replacement for the Resurs-DK No.1 satellite until it ceased operations in 2021. In 2024 the satellite broke up, releasing objects into low Earth orbit and forcing the crew of the ISS to take shelter. Mission The satellite was designed for multi-spectral remote sensing of the Earth's surface, aimed at acquiring high-quality visible images in near real-time, as well as on-line data delivery via radio link, providing a wide range of consumers with value-added processed data. In January 2022 the general director of Progress Rocket Space Centre, Dimitriy Baranov, announced that the satellite had been decommissioned in December 2021 because of "the failure of onboard equipment". Breakup Between June 26, 2024 at 13:05 UTC and June 27, 2024 at 00:51 UTC, Resurs-P1 released "a number of fragments" in an approximately 350 × 363 km low Earth orbit, according to the debris-tracking service LeoLabs. United States Space Command later confirmed that Resurs-P1 had broken up into over 100 pieces of trackable space debris at approximately 16:00 UTC on 26 June 2024; LeoLabs later that afternoon announced that it was tracking 180 pieces of debris. Although there was no immediate threat to other satellites, because this orbit was close to that of the International Space Station, its crew took shelter in the docked spacecraft for an hour as a precautionary measure. The breakup likely happened because the satellite's passivation was performed improperly or not at all. The use of an anti-satellite weapon was ruled out, since nothing of the sort was detected by any American or European assets.
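For scale, the orbital period implied by the reported debris orbit follows directly from Kepler's third law. The sketch below assumes a circular orbit at the mean of the reported 350 × 363 km altitudes, with standard values for Earth's equatorial radius and gravitational parameter:

```python
import math

MU = 398600.4418           # Earth's gravitational parameter GM, km^3/s^2
R_EARTH = 6378.137         # Earth's equatorial radius, km

alt = (350 + 363) / 2      # mean altitude of the reported debris orbit, km
a = R_EARTH + alt          # semi-major axis of an assumed circular orbit, km
T = 2 * math.pi * math.sqrt(a**3 / MU)   # Kepler's third law
print(f"orbital period ≈ {T / 60:.1f} min")   # ≈ 91–92 minutes
```

The result, roughly 92 minutes, matches a typical ISS-altitude low Earth orbit, consistent with the debris being close to the station's orbital regime.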
See also Resurs-P References External links Roscosmos official website Resurs-P remote sensing satellite - RussianSpaceWeb.com Spacecraft launched by Soyuz-2 rockets Spacecraft launched in 2013 Spacecraft decommissioned in 2021 Spacecraft that broke apart in space Resurs satellites
Resurs-P No.1
Technology
443
54,174,510
https://en.wikipedia.org/wiki/Dark%20pattern
A dark pattern (also known as a "deceptive design pattern") is a user interface that has been carefully crafted to trick users into doing things, such as buying overpriced insurance with their purchase or signing up for recurring bills. User experience designer Harry Brignull coined the neologism on 28 July 2010 with the registration of darkpatterns.org, a "pattern library with the specific goal of naming and shaming deceptive user interfaces". In 2023 he released the book Deceptive Patterns. In 2021 the Electronic Frontier Foundation and Consumer Reports created a tip line to collect information about dark patterns from the public. Patterns Privacy Zuckering "Privacy Zuckering" – named after Facebook co-founder and Meta Platforms CEO Mark Zuckerberg – is a practice that tricks users into sharing more information than they intended to. Users may give up this information unknowingly or through practices that obscure or delay the option to opt out of sharing their private information. California has approved regulations that limit this practice by businesses in the California Consumer Privacy Act. Privacy Zuckering for AI model training In mid-2024, Meta Platforms announced plans to utilize user data from Facebook and Instagram to train its AI technologies, including generative AI systems. This initiative included processing data from public and non-public posts, interactions, and even abandoned accounts. Users were given until June 26, 2024, to opt out of the data processing. However, critics noted that the process was fraught with obstacles, including misleading email notifications, redirects to login pages, and hidden opt-out forms that were difficult to locate. Even when users found the forms, they were required to provide a reason for opting out, despite Meta's policy stating that any reason would be accepted, raising questions about the necessity of this extra step. 
The European Center for Digital Rights (Noyb) responded to Meta’s controversial practices by filing complaints in 11 EU countries. Noyb alleged that Meta's use of "dark patterns" undermined user consent, violating the General Data Protection Regulation (GDPR). These complaints emphasized that Meta's obstructive opt-out process included hidden forms, redirect mechanisms, and unnecessary requirements like providing reasons for opting out—tactics exemplifying "dark patterns," deliberately designed to dissuade users from opting out. Additionally, Meta admitted it could not guarantee that opted-out data would be fully excluded from its training datasets, raising further concerns about user privacy and data protection compliance. Amid mounting regulatory and public pressure, the Irish Data Protection Commission (DPC) intervened, leading Meta to pause its plans to process EU/EEA user data for AI training. This decision, while significant, did not result in a legally binding amendment to Meta’s privacy policy, leaving questions about its long-term commitment to respecting EU data rights. Outside the EU, however, Meta proceeded with its privacy policy update as scheduled on June 26, 2024, prompting critics to warn about the broader implications of such practices globally. The incident underscored the pervasive issue of dark patterns in privacy settings and the challenges of holding large technology companies accountable for their data practices. Advocacy groups called for stronger regulatory frameworks to prevent deceptive tactics and ensure that users can exercise meaningful control over their personal information. Bait-and-switch Bait-and-switch patterns advertise a free (or at a greatly reduced price) product or service that is wholly unavailable or stocked in small quantities. After announcing the product's unavailability, the page presents similar products of higher prices or lesser quality. 
Drip pricing Drip pricing is a pattern where a headline price is advertised at the beginning of a purchase process, followed by the incremental disclosure of additional fees, taxes or charges. The objective of drip pricing is to gain a consumer's interest in a misleadingly low headline price without the true final price being disclosed until the consumer has invested time and effort in the purchase process and made a decision to purchase. Confirmshaming Confirmshaming uses shame to drive users to act, such as when websites word an option to decline an email newsletter in a way that shames visitors into accepting. Misdirection Common in software installers, misdirection presents the user with a button in the fashion of a typical continuation button. A dark pattern would show a prominent "I accept these terms" button asking the user to accept the terms of a program unrelated to the one they are trying to install. Since the user typically will accept the terms by force of habit, the unrelated program can subsequently be installed. The installer's authors do this because the authors of the unrelated program pay for each installation that they procure. The alternative route in the installer, allowing the user to skip installing the unrelated program, is much less prominently displayed, or seems counter-intuitive (such as declining the terms of service). Some websites that ask for information that is not required also use misdirection. For example, one would fill out a username and password on one page, and after clicking the "next" button, the page asks the user for their email address with another "next" button as the only option. This hides the option to press "next" without entering the information. In some cases, the page shows the method to skip the step as a small, greyed-out link instead of a button, so it does not stand out to the user. 
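The drip-pricing mechanic described above can be sketched with a few lines of Python; the fee names and amounts below are hypothetical and purely illustrative, not taken from any real checkout flow:

```python
# Illustrative only: hypothetical fees showing how drip pricing lets a
# low advertised headline price grow through incrementally disclosed charges.

def drip_total(headline_price, later_fees):
    """Sum the advertised headline price with each fee disclosed later."""
    total = headline_price
    for _label, amount in later_fees:
        total += amount
    return total

fees = [("booking fee", 12.50), ("resort fee", 25.00), ("taxes", 7.40)]
final = drip_total(49.00, fees)
print(f"advertised ${49.00:.2f}, actually charged ${final:.2f}")
```

By the time the full price is visible, the consumer has already invested time in the purchase process, which is precisely what the pattern relies on.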
Other examples include sites offering a way to invite friends by entering their email address, to upload a profile picture, or to identify interests. Confusing wording may also be used to trick users into formally accepting an option which they believe has the opposite meaning. For example, a personal data processing consent button may use a double negative such as "don't not sell my personal information". Roach motel A roach motel or a trammel net design provides an easy or straightforward path to get in but a difficult path to get out. Examples include businesses that require subscribers to print and mail their opt-out or cancellation request. For example, during the 2020 United States presidential election, Donald Trump's WinRed campaign employed a similar dark pattern, pushing users towards committing to a recurring monthly donation. Another common version of this pattern is any service which enables one to sign up for and start the service online, but which requires a phone call (often with long wait times) to terminate the service. Examples include services like cable TV and internet services, and credit monitoring. In 2021, the United States Federal Trade Commission (FTC) announced that it would ramp up enforcement against dark patterns, such as the roach motel, that trick consumers into signing up for subscriptions or make it difficult to cancel. The FTC has stated key requirements related to information transparency and clarity, express informed consent, and simple and easy cancellation. Research In 2016 and 2017, research documented social media anti-privacy practices using dark patterns. In 2018 the Norwegian Consumer Council (Forbrukerrådet) published "Deceived by Design," a report on deceptive user interface designs of Facebook, Google and Microsoft. A 2019 study investigated practices on 11,000 shopping web sites. It identified 1,818 dark patterns in total and grouped them into 15 categories.
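The double-negative consent wording quoted above ("don't not sell my personal information") reduces to a small logic sketch. The function names are hypothetical and the code is purely illustrative of the mismatch between what the user intends and what the interface records:

```python
# Purely illustrative: the label "don't not sell my personal information"
# contains two negations that cancel, so ticking the box permits the sale.

def sale_permitted(box_checked: bool) -> bool:
    # checked -> "don't (not sell)" -> do sell
    return box_checked

def user_thinks_opted_out(box_checked: bool) -> bool:
    # Many users parse the label as a plain "don't sell my information",
    # so they tick the box exactly when they intend to opt out.
    return box_checked

checked = True  # the user ticks the box, intending to refuse the sale
print(sale_permitted(checked), user_thinks_opted_out(checked))
# Both are True: the user believes they opted out while granting consent.
```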
Research from April 2022 found that dark patterns are still commonly used in the marketplace, highlighting a need for further scrutiny of such practices by the public, researchers and regulators. Under the European Union General Data Protection Regulation (GDPR), all companies must obtain unambiguous, freely-given consent from customers before they collect and use ("process") their personally identifiable information. A 2020 study found that "big tech" companies often used deceptive user interfaces in order to discourage their users from opting out. In 2022 a report by the European Commission found that "97% of the most popular websites and apps used by EU consumers deployed at least one dark pattern." Research on advertising network documentation shows that information presented to mobile app developers on these platforms is focused on complying with legal regulations, and puts the responsibility for such decisions on the developer. Also, sample code and settings often have privacy-unfriendly defaults laced with dark patterns to nudge developers’ decisions towards privacy-unfriendly options such as sharing sensitive data to increase revenue. Legality United States Bait-and-switch is a form of fraud that violates US law. On 9 April 2019, US senators Deb Fischer and Mark Warner introduced the Deceptive Experiences To Online Users Reduction (DETOUR) Act, which would make it illegal for companies with more than 100 million monthly active users to use dark patterns when seeking consent to use their personal information. In March 2021, California adopted amendments to the California Consumer Privacy Act, which prohibits the use of deceptive user interfaces that have "the substantial effect of subverting or impairing a consumer's choice to opt-out." In October 2021, the Federal Trade Commission issued an enforcement policy statement, announcing a crackdown on businesses using dark patterns that "trick or trap consumers into subscription services." 
As a result of a rising number of complaints, the agency is responding by enforcing these consumer protection laws. In 2022, New York Attorney General Letitia James fined Fareportal $2.6 million for using deceptive marketing tactics to sell airline tickets and hotel rooms, and the Federal Court of Australia fined Expedia Group's Trivago A$44.7 million for misleading consumers into paying higher prices for hotel room bookings. In March 2023, the United States Federal Trade Commission fined Fortnite developer Epic Games $245 million for use of "dark patterns to trick users into making purchases." The $245 million will be used to refund affected customers and is the largest refund amount ever issued by the FTC in a gaming case. European Union In the European Union, the GDPR requires that a user's informed consent to processing of their personal information be unambiguous, freely-given, and specific to each usage of personal information. This is intended to prevent attempts to have users unknowingly accept all data processing by default (which violates the regulation). According to the European Data Protection Board, the "principle of fair processing laid down in Article 5 (1) (a) GDPR serves as a starting point to assess whether a design pattern actually constitutes a 'dark pattern'." At the end of 2023 the final version of the Data Act was adopted. It is one of three pieces of EU legislation that deal expressly with dark patterns; the others are the Digital Services Act and the directive on financial services contracts concluded at a distance. The Public German Consumer Protection Organisation claims Big Tech uses dark patterns to violate the Digital Services Act.
United Kingdom In April 2019, the UK Information Commissioner's Office (ICO) issued a proposed "age-appropriate design code" for the operations of social networking services when used by minors, which prohibits using "nudges" to draw users into options that have low privacy settings. This code would be enforceable under the Data Protection Act 2018. It took effect 2 September 2020. See also Anti-pattern Confusopoly Gamification Growth hacking Jamba! Opt-in email Opt-out Revolving credit Shadow banning References External links Deceptive Design (formerly darkpatterns.org) Tip line to report dark patterns to the Electronic Frontier Foundation and Consumer Reports Dark patterns at the UX Pedagogy and Practice Lab at Purdue University Graphic design Web design Consumerism Computer ethics Technology neologisms 2010 neologisms
Dark pattern
Technology,Engineering
2,379
35,677,225
https://en.wikipedia.org/wiki/Progesterone%20receptor%20A
The progesterone receptor A (PR-A) is one of three known isoforms of the progesterone receptor (PR), the main biological target of the endogenous progestogen sex hormone progesterone. The other isoforms of the PR include the PR-B and PR-C. See also Membrane progesterone receptor References Intracellular receptors Progestogens Transcription factors
Progesterone receptor A
Chemistry,Biology
87
52,198,292
https://en.wikipedia.org/wiki/Caeoma%20elegans
Caeoma elegans is a species of rust fungus. References External links Caeoma elegans at Mycobank Pucciniales Fungi described in 1823 Fungus species
Caeoma elegans
Biology
38
63,474,946
https://en.wikipedia.org/wiki/NGC%203003
NGC 3003 is a nearly edge-on barred spiral galaxy in the constellation of Leo Minor, discovered by William Herschel on December 7, 1785. It has an apparent visual magnitude of 11.78, at a distance of 19.5 Mpc from the Sun. It has a recessional velocity of 1474 km/s. Supernova One supernova has been observed in NGC 3003: SN 1961F (type II, mag. 13.1) was discovered by Paul Wild on 21 February 1961. References External links Astronomical objects discovered in 1785 Discoveries by William Herschel Galaxies discovered in 1785 3003 Barred spiral galaxies Leo Minor 028186
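As a quick consistency check, the distance and recessional velocity quoted above imply a Hubble-law ratio. This sketch assumes a simple linear Hubble law, v = H0 · d, and ignores the galaxy's peculiar motion:

```python
# Sanity check: with a linear Hubble law v = H0 * d, the article's
# velocity and distance imply a value of H0 (peculiar motion ignored).

velocity_km_s = 1474.0  # recessional velocity of NGC 3003
distance_mpc = 19.5     # distance from the Sun

implied_h0 = velocity_km_s / distance_mpc
print(f"implied H0 ~ {implied_h0:.1f} km/s/Mpc")  # ~ 75.6 km/s/Mpc
```

The result is in the general neighborhood of accepted values of the Hubble constant, as expected for a galaxy at this distance.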
NGC 3003
Astronomy
134
63,568,558
https://en.wikipedia.org/wiki/Plectocomia%20pierreana
Plectocomia pierreana is a species of liana in the Arecaceae, or palm tree, family. It is a spiny climber, with either a single stem or a cluster of stems up to 35 m in length; stems are 1 to 9 cm in diameter. Its spines are up to 2 cm long. The palm is native to Thailand, Cambodia, Laos, Vietnam and China. It occurs in the dense forests and stunted forest of Cambodia, particularly in Kampot and Kampong Chhnang provinces. Growing in Bokor National Park, in Kampot, it occurs in the stunted forest community, called forêt sempervirente basse de montagne (low montane evergreen forest) by Pauline Dy Phon, that occurs around 920 m, though the plant possibly occurs up to 1014 m. It has also been reported as very common in Phnom Kulen National Park, Siem Reap province, Cambodia, growing particularly in the Evergreen Forest community. In Vietnam it has been identified in Lào Cai, Tuyên Quang and Vĩnh Phúc provinces. Present in Yunnan, Guangxi and Guangdong provinces in China, it is found in lowland to montane rainforests below 1200 m, growing rapidly and abundantly. In Cambodia, the plant is known as phdau traèhs, phdau ach moën or phdao sno, and the stalk/trunk is used to make ropes and in basketwork. Producing large rattan, between 20 and 40 mm in diameter, out of which furniture, baskets, fish-traps, etc. are made, it is commercially exploited in Laos under the local Lao name wai teleuk (wai meaning rattan). References Biota of Cambodia Flora of Cambodia Flora of China Flora of Indo-China Flora of Vietnam pierreana Trees of Cambodia Trees of China Trees of Vietnam
Plectocomia pierreana
Biology
379
74,751,456
https://en.wikipedia.org/wiki/Ship%20of%20Harkinian
Ship of Harkinian is an unofficial source port of the 1998 Nintendo 64 video game The Legend of Zelda: Ocarina of Time that runs on Microsoft Windows, Linux, macOS, Wii U, and Nintendo Switch. It was first released in March 2022 for Windows, four months after Ocarina of Time's source code was decompiled and released. Since then, Ship of Harkinian has received ports to Linux and macOS, and homebrew ports to Wii U and Nintendo Switch. Updates to Ship of Harkinian have attracted media attention, as they often integrate options and features which are not present in any official release of Ocarina of Time. The title of the project is an allusion to the philosophical thought experiment Ship of Theseus, as well as the name of the King from The Legend of Zelda CD-i games, which were infamous for the internet memes spawned from them. Development Decompilation of Ocarina of Time In November 2021, after 21 months of development, the Zelda Reverse Engineering Team (ZRET) successfully decompiled the executable to The Legend of Zelda: Ocarina of Time into human-readable code. While the decompilation project was principally carried out for the sake of documenting the game's creation and backend functionality, it also made possible the potential creation of source ports of Ocarina of Time, which would allow the game to be recompiled and run on platforms it was not originally developed for. Speaking to Ars Technica, ZRET member Rozlette stated that source ports were "outside the scope of what we do". Early development and release In June 2020, developers Jack Walker and Kenix discussed the potential of a PC port of Ocarina of Time based on the ZRET decompilation project's work; at the time, Ocarina of Time's decompilation was only 17% complete. Development on what would later become Ship of Harkinian began in November 2021, coinciding with the decompilation project reaching completion.
In January 2022, a group of community developers named Harbour Masters released footage and screenshots of Ocarina of Time running natively on Microsoft Windows, in a widescreen aspect ratio not supported by the original Nintendo 64 release. The project was titled "Ship of Harkinian", a reference to Zelda: Wand of Gamelon. Speaking to Video Games Chronicle, Kenix, now part of Harbour Masters, estimated the project was "approximately 90%" complete. Prior to Ship of Harkinian's release, Harbour Masters showcased various experimental game modifications to Ocarina of Time, such as gyroscopic aiming and 4K texture support. Ship of Harkinian launched for Windows in March 2022. Additional platform support and features In May 2022, Harbour Masters announced the release of a Linux port of Ship of Harkinian via "Ship of Harkinian Direct", an online video parody presentation of Nintendo Direct. Additional features noted in this Direct include save states, an integrated cheat menu, accessibility options, and support for running the game at 60 frames per second. Two months later, in July 2022, an additional Ship of Harkinian Direct was released, announcing the release of Ship of Harkinian for macOS and Wii U. Additional features promoted in this Direct include a graphic interface for rebinding controls, a "randomizer" which randomizes various elements of the game to enhance replayability, and the ability to set an arbitrary framerate (up to 250 FPS). Ship of Harkinian received Nintendo Switch support in the September 2022 "Zhora Alfa" update. In April 2023, a new Ship of Harkinian Direct was released, announcing custom texture and model support. Multiplayer functionality was added to the port; a second player can take control of Ivan the Fairy, described as a fairy "who likes to play tricks", whose abilities can either help or hinder the main player.
Other releases Harbour Masters have expressed intent to create a source port for The Legend of Zelda: Majora's Mask shortly after ZRET completes their decompilation of the game. In November 2023, Harbour Masters revealed that they had fully decompiled Majora's Mask and had begun work on a PC port provisionally called 2Ship2Harkinian; its first version was released on 26 May 2024. In December 2024, Harbour Masters released the first version of StarShip, a PC port of Star Fox 64. Reception Reception to Ship of Harkinian has been generally positive. Ship of Harkinian has been favorably compared against the Nintendo Switch Online version of Ocarina of Time. Nick Rodriguez of Screen Rant deemed Ship of Harkinian "significantly better than the current Switch version in almost every regard", with The Verge's Derek Hill expressing similar sentiment: "As long as Nintendo is content putting out alarmingly low-quality versions of their classic games for shockingly high prices, Ship of Harkinian is proof that the unofficial option is sometimes the best option." Some outlets expressed apprehension over Ship of Harkinian, fearing that Nintendo's perceived litigiousness could jeopardize the project. In discussing Ship of Harkinian's long-term prospects, GameSpot writer Jenny Zheng remarked that Ship of Harkinian's "odds aren't great", while characterizing Nintendo as "notoriously copyright-lawsuit-happy". Luke Plunkett of Kotaku referred to Ship of Harkinian's legality as "murky", but noted that other projects built off of reverse engineering efforts were still active as of writing. In a statement to GamesRadar+, Harbour Masters contributor Kenix defended the legality of Ship of Harkinian: "The [Ocarina of Time] assets will be ripped from a user's own ROM that they must provide and then be exported into an archive compatible with the Ship of Harkinian. None of Nintendo's own property is involved in the process."
Harbour Masters encouraged users to support official releases of Ocarina of Time, and offered a unique role on their Discord server to those who can provide proof of ownership. References 2022 video games The Legend of Zelda video games Linux games MacOS games Nintendo fan games Open-world video games Reverse engineering Single-player video games Unauthorized video games Video games with time manipulation Windows games
Ship of Harkinian
Engineering
1,327
44,223,676
https://en.wikipedia.org/wiki/African%20Wildlife%20Defence%20Force
The African Wildlife Defence Force (AWDF), Kikosi cha ulinzi ya wanyama pori barani Afrika (Swahili), is a private park ranger and anti-poaching organization based in Dungu, in the north-east of the Democratic Republic of the Congo. AWDF uses direct action tactics to protect wildlife and rainforests. The organization was founded in 2012 by Congolese-Belgian philanthropist Jean Kiala-Inkisi. It is proposed as an alternative to regular park ranger organizations who struggle with corruption, and seeks to eliminate the increasing levels of violence which poachers face. History The AWDF was founded in 2012 after founder Jean Kiala-Inkisi travelled through Africa. He did not believe the rest of the world was doing enough to help the parks in central Africa. After he conducted ground surveys on the border of the Democratic Republic of the Congo and South Sudan, and had been contracted on a private ranch in South Africa, he decided to train a team of Congolese rangers. In April 2014, the AWDF began with the selection of candidate rangers in the Democratic Republic of the Congo, South Sudan, Kenya and South Africa for training of Advanced Force (AFR) and Special Force (SFR) rangers. Kiala-Inkisi approached a former French Legionnaire with the aim of providing training. Organization The AWDF is a non-profit private ranger organization. 80% of the organization's revenue is spent on its programs and 20% on administration and fundraising. It is supported by private and corporate donations, internet advertising and grants. The group is operated by both paid rangers and volunteers. The organization chooses to operate with a few Special Operations Task Forces only. It provides services including anti-poaching, wildlife management, forestry management and agroforestry consulting. It is also involved in ranger training, close quarter training and specialist rural security services. The AWDF is open to African citizens except those from countries located north of the Sahel.
They refuse African expats from outside Africa for ranger functions. In general, foreigners from outside Africa can only work as an instructor or scientist. AWDF rangers wear their insignia on the left side of the beret, to distinguish themselves from the regular park ranger organizations. As a private ranger services contractor, the AWDF focuses on wildlife conservation and rainforest conservation. The AWDF has expertise to also intervene in mangroves, lakes and waterways but does not work at sea. Their major working field is central Africa in the parks located in the border region of Democratic Republic of the Congo, South Sudan, Uganda and Central African Republic. Departments A working group, Convention on African Trade in Endangered Species of Wild Fauna, was formed in protest against what the AWDF calls the disastrous policies of CITES. The group works on wildlife law enforcement and promotes non-conventional livestock farming including insect farming, crocodile farming and game farming. They also examine the pros and cons of rhino horn farming. Rangers are trained in Basic Military Training and Wildlife Management courses. Advanced Force Rangers consist of parachute/commando units, operating as a support-reconnaissance unit. Special Force Rangers are a special forces unit selected from the AFR units, trained in three specialities. These are free fall from high altitude HAHO/HALO, underwater fighting skills and operating in mountainous terrain. The Special Operations Affiliate Ranger Group are a special forces unit, mostly US veterans working pro-bono for a short or long-term basis. The AWDF focuses on the conservation of the rainforest from the basin of the Congo which has 70% of Africa's plant cover. The AWDF plans to start a nursery for Wild Edible Plants (WEP) & Non-Timber Forest Products (NTFP) and Tropical Hardwood. The AWDF also created a list of 150 tree species for multiplication to avoid the alienation and destruction of the rainforest.
Actions On 13 August 2014, Kiala-Inkisi flew to Burbank, California to stage a hunger strike. The goal was to get media attention in Hollywood for his cause. After sending 2500 emails without receiving any reply, he talked with workers of Animal Defenders International (ADI) in Los Angeles. He was also invited by rhino zookeeper and Anti-Poaching Ranger Mike Daniels to visit the San Diego Zoo Safari Park and to look behind the scenes. At the invitation of Matt Rossell, ADI Campaigns Director, a dinner took place with actress and animal activist Georja Umano. Later he also met the actress and editor Moon Hi Hanson. Since 2012, a team of rangers has been deployed near the border of Bengangai Game Reserve, Bire Kpatous Game Reserve and Mbarizunga Game Reserve in neighboring South Sudan to pursue the Lord's Resistance Army (LRA) of Joseph Kony. Projects Nutrecul Agroforestry Project References External links Environmental organizations established in 2012 Ecology organizations Wildlife rehabilitation and conservation centers Wildlife conservation organizations Environmental treaties Endangered species Wildlife smuggling Military units and formations established in 2012 Business services companies established in 2012 Private military contractors Security consulting firms Animal rights organizations Ranches Animal welfare organisations based in the Democratic Republic of the Congo
African Wildlife Defence Force
Biology
1,042
423,830
https://en.wikipedia.org/wiki/Konica%20Minolta
Konica Minolta, Inc. is a Japanese multinational technology company headquartered in Marunouchi, Chiyoda, Tokyo, with offices in 49 countries worldwide. The company manufactures business and industrial imaging products, including copiers, laser printers, multi-functional peripherals (MFPs) and digital print systems for the production printing market. Konica Minolta's Managed Print Service (MPS) is called Optimised Print Services. The company also makes optical devices, including lenses and LCD film; medical and graphic imaging products, such as X-ray image processing systems, colour proofing systems, and X-ray film; photometers, 3-D digitizers, and other sensing products; and textile printers. It once had camera and photo operations inherited from Konica and Minolta but they were sold in 2006 to Sony, with Sony's Alpha series being the successor SLR division brand. History Company history Konica Minolta was formed by a merger between Japanese imaging firms Konica and Minolta, announced on 7 January 2003, with the re-organization of the corporate structure completed in October 2003. Different group companies, such as the operations in the headquarters and national operating companies, began the process around the same time; however, the exact dates vary for each group company. Konica Minolta uses a "Globe Mark" logo that is similar to but slightly different from that of the former company. It also uses the same corporate slogan as the former Minolta company: "The Essentials of Imaging". On 19 January 2006 the company announced that it was quitting the camera business due to high financial losses. SLR camera service operations were handed over to Sony starting on 31 March 2006 and Sony has continued development of cameras that are compatible with Minolta autofocus lenses.
Originally, in the negotiations, Konica Minolta wanted cooperation with Sony in camera equipment production rather than a sell-out deal, but Sony vehemently refused, saying that it would either acquire everything or leave everything that had to do with the camera equipment sector of KM. Konica Minolta withdrew from the photography business on 30 September 2006. Three thousand seven hundred employees were laid off. Konica Minolta closed down its photo imaging division in March 2007. The color film, color paper, photo chemical and digital mini-lab machine divisions have ceased operations. Dai Nippon Printing purchased Konica's Odawara factory, with plans to continue to produce paper under Dai Nippon's brand. CPAC acquired the Konica chemical factory. Konica expanded its business presence and currently sells its products in the Americas, Asia Pacific, Europe, Middle East and Africa. Camera history Manual focus 35mm film SLRs Konica and Minolta have been competitors in the 35 mm SLR market since the development of the manual-focus (MF) SRT and other models in the mid-1960s. Minolta positioned most of its cameras to compete in the amateur market, though it did produce a very high quality MF SLR in the XD-11. Konica left the SLR market in 1987. Minolta's last MF SLR cameras were the X370 and X700. Shanghai Optical Co. (Seagull) purchased tools and production plant from Minolta at different times, making some X300 series for Minolta branding, and continues to release MD mount film SLRs compatible with the old system under the Seagull name. Autofocus 35mm film SLRs Until the sale of Konica Minolta's Photo Imaging unit to Sony in 2006, Konica Minolta produced the former Minolta range of 35 mm autofocus single-lens reflex cameras, variously named "Minolta Maxxum" in North America, "Minolta Dynax" in Europe, and "Minolta Alpha" in Japan and the rest of Asia. 
This range was introduced in 1985 with the Minolta Maxxum 7000, and culminated with the professional 9 (1997), later made in a titanium body (9Ti), and the technically advanced 7 (1999). The final Minolta 35 mm SLR AF cameras were the Maxxum 50 and 70 (Dynax 40 and 60), built in China. Digital cameras Konica Minolta had a line of digital point and shoot cameras to compete in the digital photography market. Their Dimage line (originally styled as Dimâge, later as DiMAGE) included digital cameras and imaging software as well as film scanners. They created a new category of "SLR-like" cameras with the introduction of the DiMAGE 7 and DiMAGE 5. These cameras mixed many of the features of a traditional SLR camera with the special abilities of a digital camera. They had a mechanical zoom ring and electronic focus ring on the lens barrel and used an electronic viewfinder (EVF) showing 100 per cent of the lens view. They added many high level features such as a histogram and made the cameras TTL-compatible with Minolta's final generation of flashes for film SLRs. The controls were designed to be used by people familiar with SLR cameras, however the manual zoom auto-focus lens was not interchangeable. The model 5 had a 1/1.8-inch sensor with 3.3 megapixels, and the fixed zoom was equal to a 35–250 mm (relative to 24×36mm format). The DiMAGE 7, later 7i, 7Hi and A1 had 5-megapixel sensors for which the same lens provided 28–200 mm equivalent coverage. The later A2 and A200 increased the sensor resolution to 8 megapixels. The DiMAGE 5 and 7 original models were more sensitive to infrared light than later models, which incorporated more aggressive IR sensor filters, so have become popular for infrared photography. The DiMAGE A1/A2/A200 integrated a sensor-based, piezoelectrically actuated anti-camera-shake system.
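The 35-mm-equivalent figures above are internally consistent: the same physical lens gives 35–250 mm equivalent on the DiMAGE 5 and 28–200 mm equivalent on the DiMAGE 7, so the ratio must match at both ends of the zoom range. A quick arithmetic check (the variable names are illustrative):

```python
# Consistency check on the 35-mm-equivalent figures quoted above: because
# the physical lens is identical, the ratio of equivalent focal lengths
# must be the same at the wide and telephoto ends of the zoom.

wide_ratio = 35 / 28    # DiMAGE 5 vs DiMAGE 7 equivalent, wide end
tele_ratio = 250 / 200  # same comparison at the telephoto end

print(wide_ratio, tele_ratio)  # both 1.25
```

Both ratios come out to 1.25, implying the DiMAGE 7's 5-megapixel sensor diagonal is 25% larger than the DiMAGE 5's.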
Before the closure of the Photo Imaging unit, the DiMAGE lineup included the long-zoom Z line, the E/G lines (the G series finally incorporating former Konica models), the thin/light X line, and the advanced A line. The DiMAGE G500 was a five-megapixel compact digital camera manufactured by Konica Minolta in 2003. It came in a stainless steel case, 3x zoom lens with a retractable barrel, and dual Secure Digital and MagicGate card slots. The camera has a 1.3-second startup time. Digital SLRs Minolta made some early forays into digital SLRs with the RD-175 in 1995 and the Minolta Dimâge RD 3000 in 1999 but were the last of the large camera manufacturers to launch a successful digital SLR camera using a current 35 mm AF mount in late 2004. The RD-175 was based on the Maxxum/Dynax 505si 35 mm film SLR and used three different ½-inch CCD image sensors—two for green and one for red and blue—supplied with images by a light splitting mechanism using prisms mounted behind the lens. The RD 3000 used Minolta V-mount APS format lenses and again used multiple CCDs—this time two 1.5 MP ½-inch sensors stitched to give a 2.7 MP output image. It was not until late 2004 (after the merger with Konica) that they launched the Dynax/Maxxum/α 7D, a digital SLR based on the very successful Dynax/Maxxum 7 35 mm SLR body. The unique feature of this camera is that it features an in-body Anti-Shake system to compensate for camera shake. However, by 2004 Canon and Nikon had a whole range of digital SLR cameras and many serious photographers had already switched, thus leading Konica Minolta to withdraw from the market and transfer assets to Sony. The only two Konica Minolta digital SLRs to reach production before the company's withdrawal were the Dynax/Maxxum 7D and the Dynax/Maxxum 5D (which is an entry-level model that shared the 7D's sensor and Anti-Shake technology). 
In early 2006 Sony announced its Sony α (Alpha) line of digital SLRs (based on Konica Minolta technology) and stated that production was scheduled to launch in the summer of 2006. The Sony Alpha 100, announced on June 6, 2006, is generally agreed to have been a Konica Minolta design based on the 5D with minimal Sony input. The range of 21 Sony lenses announced at that time likewise comprised only revisions of earlier Minolta designs, or models that had been in development, rebadged with minor cosmetic changes. The Sony Alpha DSLR range utilizing the 'A'-mount has remained compatible with all Minolta AF system lenses, and most accessories, from 1985 onwards. In 2000 Minolta announced the introduction of Super Sonic Motor (SSM) focusing to a limited number of new lenses. This dispensed with a mechanical drive between camera and lens, but only SLRs made from 1999 onwards (the Dynax/Maxxum 7 and later) were compatible, with the professional Dynax 9 requiring a factory upgrade to operate. Sony announced a program in 2008 to fit more future lenses with SSM, and these designs may, therefore, not be compatible with 1985–1999 SLR bodies. Business equipment history Multifunctional devices For some time after the merger between Konica and Minolta, both product lines continued to be sold, while research and development efforts were underway to create new products. The first Konica Minolta-badged products were almost entirely "Konica" or "Minolta" products, however, as they were the next-generation products already being developed by both companies before the merger. These products included MFPs such as the Konica Minolta bizhub C350 (a "Minolta" design, also badged as the Konica 8022 and Minolta CF2203) and the Konica Minolta 7235 (a "Konica" design). Successive models included greater integration between the two sets of technologies, and current products such as the bizhub C451 contain many technologies from both histories.
Some products, such as the bizhub 501, are more noticeably an engine design from one company rather than the other; however, the system itself, including operation, features and RIP technologies, is in the "new style" that holds little legacy from either former company. As with many MFP manufacturers, some of the market segments are not produced directly by the manufacturer. In Konica Minolta's case, many smaller SOHO MFP products (such as the bizhub 130f, wearing Minebea marks in hardware and in software drivers) are produced by third parties. By the same token, many other companies also re-badge Konica Minolta products under OEM agreements. Printers As the printer operations of the former Konica company were limited to "printer models" of MFP models, or re-badged printers from other manufacturers, while the printer operations of the former Minolta company had been strong since the purchase of QMS (completed in 2000 after increasing influence and shareholding by Minolta), printer operations were initially not affected greatly by the Konica Minolta merger. In the 1980s QMS made the KISS laser printer, the most inexpensive then available at $1,995. Due to the increased complexity of both MFP and printer devices, Konica Minolta increased technology sharing between the two lines of products. In many regions, this has led to the integration of the Printer products company into the Business equipment products company. Business companies Konica Minolta has spun off business units into separate companies. Konica Minolta Business Technologies, Inc. The Konica Minolta Business Technologies division develops multifunction printers, copiers, computer printers, facsimile machines, microfilm systems and related supplies.
The divisional head office is in Tokyo, with regional headquarters located in Germany (Konica Minolta Europe), the USA (Konica Minolta Business Solutions USA), New Zealand (Konica Minolta Business Solutions New Zealand), Australia (Konica Minolta Australia) and China (Konica Minolta China). These headquarters are responsible for sales and support of the Konica Minolta companies in each country within their region, including distributors and the dealer networks. In an effort to improve profitability in a declining printer market, Konica Minolta Business Solutions began to acquire enterprise content management (ECM) service and software solution providers. The division has approximately 19,600 employees. Multi-functional peripherals (MFPs) Pursuing advanced imaging markets Konica Minolta's digital multi-functional peripherals (MFPs), branded the "bizhub" series, are equipped with multiple functions (copying, printing, faxing, scanning) and can integrate into any corporate network environment. They allow users to consolidate the administration of office equipment connected to a network by using a series of network management software programs, and even to manage and share both scanned data and computer-generated data. Konica Minolta Printing Solutions An advanced generation of compact, lightweight and high-performance color laser printers. The market for color laser printers continues to expand, fuelled by the rapid shift of business documents from monochrome to color. Konica Minolta's color laser printers—branded the "Magicolor" series and using toner technology inherited from QMS/Qume—include what was then the world's smallest and lightest color laser printer with 2400 dpi photographic quality, the Magicolor 2430DL of 2005. This printer also offered direct output from digital cameras using PictBridge and EXIFII Print Order Management technology, via USB.
The Magicolor series covers everything from entry-level home/office models, such as the 2430's successors, to large print stations for corporate environments. As of May 2007 the Printing Solutions (Europe) business was merged with Konica Minolta Business Solutions (Europe) as part of radical reforms within the company. Konica Minolta Opto, Inc. Konica Minolta Opto, Inc. develops optical components, units, and systems. Konica Minolta Medical & Graphic, Inc. Konica Minolta Medical & Graphic, Inc. is involved in the manufacturing, sale, and related services of film and processing equipment for medical and graphic imaging. The company is located in Grand Rapids, MI, and manufactures and distributes both conventional and digital graphic arts supplies including: analog and digital films, graphic arts papers, conventional and CTP printing plates, processing chemicals, film and plate processors, imagesetters, platesetters, digital color proofers and software. The company serves the printing and publishing, corporate communications and newspaper industries. Konica Minolta Sensing, Inc. Konica Minolta Sensing offers products, software, and services utilizing light control and measurement technology within four main product areas: Color Measurement, Display Measurement, 3D Measurement and Medical Measurement. Color Measurement: Spectrophotometers and tristimulus colorimeters (Chroma Meters) for measuring the reflected and transmitted color of objects. These are used in industrial fields and other areas for color quality control, grading by color, and CCM applications on a wide variety of subjects, including automotive parts, paint, plastic, textiles, construction materials and foods, and correcting vision problems. Spectroscopy: Spectroscopy equipment for laboratory and scientific work across the UV/visible/NIR spectrum. Spectroscopy equipment can be used for restoring aged or ancient artwork, analyzing the color of food and beverages, and measuring blood alcohol content.
Display Measurement: Display color analyzers, spectral colorimeters, and spectral radiometers for testing display performance and quality, examining and adjusting white balance and contrast, and precisely measuring display chroma, brightness and balance. Subjects include various types of TVs and computer displays (plasma, LCD), as well as other displays (mobile phones, digital cameras, car navigation equipment). 3D Measurement: 3D digitizers scan three-dimensional objects and import the 3D data to computers. The data can be used for medical applications, academic research, 3D archiving, archeological studies, and computer graphics production, as well as for industrial applications such as reverse engineering, design verification, and quality inspection. Medical Measurement: Products for non-invasive measurement of physiological values. These include pulse oximeters, which determine oxygen saturation in the blood, and compact jaundice meters that can test newborn babies for jaundice without taking blood samples. Konica Minolta Healthcare Americas, Inc. Konica Minolta Healthcare Americas, Inc., formerly known as Konica Minolta Medical Imaging USA, Inc., is a business unit of Konica Minolta, Inc., and is headquartered in Wayne, NJ. The unit provides digital radiography, ultrasound imaging, healthcare IT and services to hospitals, imaging centers, clinics and private practices across the US, Canada and Latin America. In July 2017, the company acquired Aliso Viejo, California-based genetic testing firm Ambry Genetics for a reported US$1 billion. Print shops (Kinko's Japan and Kinko's Korea) In 2012, Konica Minolta bought the Japanese operations of FedEx Kinko's. The deal consisted of the sale of 61 printing offices across Japan. Subsequently, in 2013, Konica Minolta bought FedEx Kinko's operations in South Korea. The Kinko's operations in both countries were later rebranded to remove the reference to FedEx, but retained the Kinko's name.
In Japan, the Kinko's stores in the Kyushu, Chugoku and Shikoku regions continue to be operated by GA Creous, a subsidiary of General Asahi. Sponsorships Konica Minolta's sponsorships include: CNN Heroes (Oct 2014 – Dec 2014) Redlands Konica Minolta Art Prize (1996–present) Wayne Taylor Racing #10 in the IMSA SportsCar Championship (2014–present) See also Tower Hotel (Niagara Falls), previously called "Minolta Tower" References Specific references General references Dynax 4/Dynax 3/Maxxum 4 Instruction Manual Maxxum 5D Brochure Robert E. Mayer, Minolta Classic Cameras (a Magic Lantern Guide) Konica Minolta Corporate Profile 2005 External links Japanese companies established in 2003 Companies listed on the Tokyo Stock Exchange Companies in the Nikkei 225 Computer companies of Japan Computer hardware companies Computer printer companies Electronics companies of Japan Electronics companies established in 2003 Manufacturing companies established in 2003 Japanese brands Lens manufacturers Midori-kai Multinational companies headquartered in Japan Photography equipment manufacturers of Japan
Konica Minolta
Technology
3,884
52,810,462
https://en.wikipedia.org/wiki/Beneish%20M-score
The Beneish model is a statistical model that uses financial ratios calculated from a company's accounting data to check whether it is likely (high probability) that the company's reported earnings have been manipulated. How to calculate The Beneish M-score is calculated using 8 variables (financial ratios): Days Sales in Receivables Index (DSRI): DSRI = (Net Receivables_t / Sales_t) / (Net Receivables_t−1 / Sales_t−1) Gross Margin Index (GMI): GMI = [(Sales_t−1 − COGS_t−1) / Sales_t−1] / [(Sales_t − COGS_t) / Sales_t] Asset Quality Index (AQI): AQI = [1 − (Current Assets_t + PP&E_t + Securities_t) / Total Assets_t] / [1 − (Current Assets_t−1 + PP&E_t−1 + Securities_t−1) / Total Assets_t−1] Sales Growth Index (SGI): SGI = Sales_t / Sales_t−1 Depreciation Index (DEPI): DEPI = (Depreciation_t−1 / (PP&E_t−1 + Depreciation_t−1)) / (Depreciation_t / (PP&E_t + Depreciation_t)) Sales, General and Administrative Expenses Index (SGAI): SGAI = (SG&A Expense_t / Sales_t) / (SG&A Expense_t−1 / Sales_t−1) Leverage Index (LVGI): LVGI = [(Current Liabilities_t + Total Long-Term Debt_t) / Total Assets_t] / [(Current Liabilities_t−1 + Total Long-Term Debt_t−1) / Total Assets_t−1] Total Accruals to Total Assets (TATA): TATA = (Income from Continuing Operations_t − Cash Flows from Operations_t) / Total Assets_t The formula to calculate the M-score is: M-score = −4.84 + 0.92 × DSRI + 0.528 × GMI + 0.404 × AQI + 0.892 × SGI + 0.115 × DEPI − 0.172 × SGAI + 4.679 × TATA − 0.327 × LVGI How to interpret The threshold value is −1.78 for the model whose coefficients are reported above (see Beneish 1999; Beneish, Lee, and Nichols 2013; and Beneish and Vorst 2020). If the M-score is less than −1.78, the company is unlikely to be a manipulator; for example, an M-score of −2.50 suggests a low likelihood of manipulation. If the M-score is greater than −1.78, the company is likely to be a manipulator; for example, an M-score of −1.50 suggests a high likelihood of manipulation.
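The eight indices and the weighted M-score formula above can be sketched directly in Python. The dictionary field names below are illustrative placeholders, not taken from any standard accounting data source:

```python
def beneish_m_score(cur: dict, prev: dict) -> float:
    """Beneish M-score from raw accounting figures for year t (`cur`)
    and year t-1 (`prev`). Field names are illustrative."""
    dsri = (cur["receivables"] / cur["sales"]) / (prev["receivables"] / prev["sales"])
    gmi = ((prev["sales"] - prev["cogs"]) / prev["sales"]) / \
          ((cur["sales"] - cur["cogs"]) / cur["sales"])
    aqi = (1 - (cur["current_assets"] + cur["ppe"] + cur["securities"]) / cur["total_assets"]) / \
          (1 - (prev["current_assets"] + prev["ppe"] + prev["securities"]) / prev["total_assets"])
    sgi = cur["sales"] / prev["sales"]
    depi = (prev["depreciation"] / (prev["ppe"] + prev["depreciation"])) / \
           (cur["depreciation"] / (cur["ppe"] + cur["depreciation"]))
    sgai = (cur["sga"] / cur["sales"]) / (prev["sga"] / prev["sales"])
    lvgi = ((cur["current_liabilities"] + cur["lt_debt"]) / cur["total_assets"]) / \
           ((prev["current_liabilities"] + prev["lt_debt"]) / prev["total_assets"])
    tata = (cur["income_cont_ops"] - cur["cfo"]) / cur["total_assets"]
    # Weighted sum with the coefficients reported in the text.
    return (-4.84 + 0.92 * dsri + 0.528 * gmi + 0.404 * aqi + 0.892 * sgi
            + 0.115 * depi - 0.172 * sgai + 4.679 * tata - 0.327 * lvgi)

# Illustrative figures: two identical years make every index 1 and TATA 0,
# so M = -4.84 + 0.92 + 0.528 + 0.404 + 0.892 + 0.115 - 0.172 - 0.327 = -2.48.
base = {
    "sales": 1000.0, "receivables": 150.0, "cogs": 600.0,
    "current_assets": 400.0, "ppe": 300.0, "securities": 50.0,
    "total_assets": 1000.0, "depreciation": 40.0, "sga": 120.0,
    "current_liabilities": 200.0, "lt_debt": 300.0,
    "income_cont_ops": 90.0, "cfo": 90.0,
}
score = beneish_m_score(base, base)
```

With a score of −2.48, below the −1.78 threshold, such a company would be classified as an unlikely manipulator.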
Aggregate recession predictor A 2023 research paper used an aggregate score across many companies to predict recessions. It found that the score in early 2023 was the highest in some 40 years. Important notices The Beneish M-score is a probabilistic model, so it cannot detect companies that manipulate their earnings with 100% accuracy. Financial institutions were excluded from the sample in Beneish's paper when calculating the M-score, since these institutions make money through different routes: sales and receivables, two main ingredients of the Beneish formula, are not used when analyzing a financial institution. Example of successful application Enron Corporation was correctly identified in 1998 as an earnings manipulator by students from Cornell University using the M-score. Notably, Wall Street financial analysts were still recommending to buy Enron shares at that point in time. Further reading on financial statement manipulation A sequence of articles on the Alpha Architect blog. An article on Investopedia about different types of financial statement manipulation ("smoke and mirrors", "elder abuse", "fleeing town", and others). See also Data analysis techniques for fraud detection Benford's law Piotroski F-score Ohlson O-score Altman Z-score References Corporate finance Financial ratios Financial risk management Valuation (finance)
Beneish M-score
Mathematics
875
27,032,263
https://en.wikipedia.org/wiki/Andreas%20Luttge
Andreas Lüttge is Professor of Earth Science and Professor of Chemistry at Rice University in Houston, Texas (USA). He was also director of the National Corrosion Center (NCC) until 2010. The primary concerns of his research are surface chemical processes at minerals and rocks from low-temperature conditions up to the pressure and temperature conditions throughout the Earth's crust. Andreas Luttge’s degrees are a Habilitation [venia legendi] (1995) and a PhD [Dr. rer. nat.] (1990) from the University of Tübingen (Germany). In 1995 the Alexander von Humboldt Foundation awarded a Feodor Lynen fellowship to Andreas Luttge to visit Yale University and to work with Prof. A.C. Lasaga. Luttge published numerous studies about the surface dynamics of minerals, glasses and metals, including investigations of microbial activity at interfaces. He applies various experimental techniques using Vertical Scanning Interferometry, Electron and Atomic Force Microscopy and modeling techniques like Monte Carlo and ab initio methods. Resulting quantitative kinetic rate data are key prerequisites to provide a better understanding of the dynamics governing many geologic and technologic processes. References External links Luttge's research statement at Rice University Rice University faculty Living people Year of birth missing (living people) Place of birth missing (living people) Inorganic chemists 21st-century American chemists Yale University fellows
Andreas Luttge
Chemistry
289
753,962
https://en.wikipedia.org/wiki/Circulation%20%28physics%29
In physics, circulation is the line integral of a vector field around a closed curve embedded in the field. In fluid dynamics, the field is the fluid velocity field. In electrodynamics, it can be the electric or the magnetic field. In aerodynamics, circulation was first used independently by Frederick Lanchester, Ludwig Prandtl, Martin Kutta and Nikolay Zhukovsky. It is usually denoted Γ (Greek uppercase gamma). Definition and properties If V is a vector field and dl is a vector representing the differential length of a small element of a defined curve, the contribution of that differential length to circulation is dΓ: dΓ = V · dl = |V| |dl| cos θ. Here, θ is the angle between the vectors V and dl. The circulation Γ of a vector field V around a closed curve C is the line integral: Γ = ∮_C V · dl. In a conservative vector field this integral evaluates to zero for every closed curve. That means that a line integral between any two points in the field is independent of the path taken. It also implies that the vector field can be expressed as the gradient of a scalar function, which is called a potential. Relation to vorticity and curl Circulation can be related to the curl of a vector field and, more specifically, to vorticity if the field is a fluid velocity field v: ω = ∇ × v. By Stokes' theorem, the flux of the curl or vorticity vectors through a surface S is equal to the circulation around its perimeter: Γ = ∬_S (∇ × V) · dS = ∮_∂S V · dl. Here, the closed integration path ∂S is the boundary or perimeter of the open surface S, whose infinitesimal element normal is oriented according to the right-hand rule. Thus curl and vorticity are the circulation per unit area, taken around a local infinitesimal loop. In potential flow of a fluid with a region of vorticity, all closed curves that enclose the vorticity have the same value for circulation. Uses Kutta–Joukowski theorem in fluid dynamics In fluid dynamics, the lift per unit span (L') acting on a body in a two-dimensional flow field is directly proportional to the circulation, i.e.
it can be expressed as the product of the circulation Γ about the body, the fluid density ρ, and the speed of the body relative to the free-stream V∞: L' = ρ V∞ Γ. This is known as the Kutta–Joukowski theorem. This equation applies around airfoils, where the circulation is generated by airfoil action, and around spinning objects experiencing the Magnus effect, where the circulation is induced mechanically. In airfoil action, the magnitude of the circulation is determined by the Kutta condition. The circulation on every closed curve around the airfoil has the same value, and is related to the lift generated by each unit length of span. Provided the closed curve encloses the airfoil, the choice of curve is arbitrary. Circulation is often used in computational fluid dynamics as an intermediate variable to calculate forces on an airfoil or other body. Fundamental equations of electromagnetism In electrodynamics, the Maxwell–Faraday law of induction can be stated in two equivalent forms: that the curl of the electric field is equal to the negative rate of change of the magnetic field, ∇ × E = −∂B/∂t, or, by Stokes' theorem, that the circulation of the electric field around a loop is equal to the negative rate of change of the magnetic field flux through any surface spanned by the loop: ∮ E · dl = −d/dt ∬_S B · dS. The circulation of a static magnetic field is, by Ampère's law, proportional to the total current enclosed by the loop: ∮ B · dl = μ₀ I_enc. For systems with electric fields that change over time, the law must be modified to include a term known as Maxwell's correction. See also Maxwell's equations Biot–Savart law in aerodynamics Kelvin's circulation theorem References Fluid dynamics Physical quantities Electromagnetism
Circulation (physics)
Physics,Chemistry,Mathematics,Engineering
734
23,507,440
https://en.wikipedia.org/wiki/Astrapian%20sicklebill
The astrapian sicklebill, also known as the green-breasted riflebird, is a bird in the Paradisaeidae family that was proposed by Erwin Stresemann to be an intergeneric hybrid between an Arfak astrapia and black sicklebill, an identity since confirmed by DNA analysis. History Only one adult male specimen of this hybrid is known, held by the American Museum of Natural History, and presumably deriving from the Vogelkop Peninsula of north-western New Guinea. Notes References Hybrid birds of paradise Birds of the Doberai Peninsula Intergeneric hybrids
Astrapian sicklebill
Biology
124
2,829,935
https://en.wikipedia.org/wiki/Profundal%20zone
The profundal zone is the deep zone of a lake, located below the range of effective light penetration. This is typically below the thermocline, the vertical zone in the water through which temperature drops rapidly. The temperature difference may be large enough to hamper mixing with the littoral zone in some seasons, which causes a decrease in oxygen concentrations. The profundal is often defined as the deepest, vegetation-free, and muddy zone of the lacustrine benthal. The profundal zone is often part of the aphotic zone. Sediment in the profundal zone primarily comprises silt and mud. Organisms The lack of light and oxygen in the profundal zone determines the type of biological community that can live in this region, which is distinctly different from the community in the overlying waters. The profundal macrofauna is therefore characterized by physiological and behavioural adaptations to low oxygen concentration. While benthic fauna differs between lakes, Chironomidae and Oligochaeta often dominate the benthic fauna of the profundal zone because they possess hemoglobin-like molecules to extract oxygen from poorly oxygenated water. Due to the low productivity of the profundal zone, organisms rely on detritus sinking from the photic zone. Species richness in the profundal zone is often similar to that in the limnetic zone. Microbial levels in the profundal benthos are higher than those in the littoral benthos, potentially due to a smaller average sediment particle size. Benthic macroinvertebrates are believed to be regulated by top-down pressure. Nutrient cycling Nutrient fluxes in the profundal zone are primarily driven by release from the benthos. The anoxic nature of the profundal zone drives ammonia release from benthic sediment. This can drive phytoplankton production, to the point of a phytoplankton bloom, and create toxic conditions for many organisms, particularly at high pH.
Hypolimnetic anoxia can also contribute to buildups of iron, manganese, and sulfide in the profundal zone. See also Benthic zone Littoral zone Limnetic zone Lake stratification References Aquatic ecology Aquatic biomes
Profundal zone
Biology
467
60,040,145
https://en.wikipedia.org/wiki/Hachimoji%20DNA
Hachimoji DNA (from Japanese hachimoji, "eight letters") is a synthetic nucleic acid analog that uses four synthetic nucleotides in addition to the four present in the natural nucleic acids, DNA and RNA. This leads to four allowed base pairs: two unnatural base pairs formed by the synthetic nucleobases in addition to the two normal pairs. Hachimoji bases have been demonstrated in both DNA and RNA analogs, using deoxyribose and ribose respectively as the backbone sugar. Benefits of such a nucleic acid system may include an enhanced ability to store data, as well as insights into what may be possible in the search for extraterrestrial life. The hachimoji DNA system produced one type of catalytic RNA (ribozyme or aptamer) in vitro. Description Natural DNA is a molecule carrying the genetic instructions used in the growth, development, functioning, and reproduction of all known living organisms and many viruses. DNA and ribonucleic acid (RNA) are nucleic acids; alongside proteins, lipids and complex carbohydrates (polysaccharides), nucleic acids are one of the four major types of macromolecules that are essential for all known forms of life. DNA is a polynucleotide as it is composed of simpler monomeric units called nucleotides; when double-stranded, the two chains coil around each other to form a double helix. In natural DNA, each nucleotide is composed of one of four nucleobases (cytosine [C], guanine [G], adenine [A] or thymine [T]), a sugar called deoxyribose, and a phosphate group. The nucleotides are joined to one another in a chain by covalent bonds between the sugar of one nucleotide and the phosphate of the next, resulting in an alternating sugar-phosphate backbone. The nitrogenous bases of the two separate polynucleotide strands are bound to each other with hydrogen bonds, according to base pairing rules (A with T and C with G), to make double-stranded DNA. Hachimoji DNA is similar to natural DNA but differs in the number, and type, of nucleobases. 
Unnatural nucleobases, more hydrophobic than the natural bases, are used in successful hachimoji DNA. Such DNA always formed the standard double helix, no matter what sequence of bases was used. An enzyme (T7 polymerase) was adapted by the researchers to be used in vitro to transcribe hachimoji DNA into hachimoji RNA, which, in turn, produced chemical activity in the form of a glowing green fluorophore. New base pairs DNA and RNA are naturally composed of four nucleotide bases that form hydrogen bonds in order to pair. Hachimoji DNA uses an additional four synthetic nucleotides to form four types of base pairs, two of which are unnatural: P binds with Z, and B binds with S (dS in DNA, rS in RNA). The synthetic bases are: P – 2-Aminoimidazo[1,2a][1,3,5]triazin-4(1H)-one; Z – 6-Amino-5-nitropyridin-2-one; B – Isoguanine; S – Isocytosine (rS, in RNA) or 1-Methylcytosine (dS, in DNA). Background Earlier, the research group responsible for the hachimoji DNA system, headed by Harvard University chemist Steven Benner, had studied a synthetic DNA analog system, named Artificially Expanded Genetic Information System (AEGIS), that used twelve different nucleotides, including the four found in DNA. Biology Scripps Research chemist Floyd Romesberg, noted for creating the first unnatural base pair (UBP) and expanding the genetic alphabet from four letters to six in 2012, stated that the invention of the hachimoji DNA system is an example of the fact that the natural bases (G, C, A and T) "are not unique". Creating new life forms may be possible, at least theoretically, with the new DNA system. For now, however, the hachimoji DNA system is not self-sustaining; the system needs a steady supply of unique building blocks and proteins found only in the laboratory. As a result, "Hachimoji DNA can go nowhere if it escapes the laboratory."
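The pairing rules above (the two natural pairs plus P:Z and B:S) can be sketched as a simple lookup table. This is a hypothetical illustration of the eight-letter alphabet, not a published tool:

```python
# Hachimoji base-pairing rules: the four natural letters plus the
# synthetic pairs P:Z and B:S described in the text.
HACHIMOJI_PAIRS = {
    "A": "T", "T": "A", "C": "G", "G": "C",
    "P": "Z", "Z": "P", "B": "S", "S": "B",
}

def complement_strand(strand: str) -> str:
    """Return the reverse complement of a hachimoji DNA strand,
    i.e. the sequence of the paired antiparallel strand read 5'->3'."""
    return "".join(HACHIMOJI_PAIRS[base] for base in reversed(strand))
```

Because each pairing is symmetric and the strand is reversed twice, applying the function twice returns the original sequence, just as for natural four-letter DNA.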
Applications NASA funded this research to "expand[s] the scope of the structures that we might encounter as we search for life in the cosmos". According to Lori Glaze of the Planetary Science Division of NASA, "Life detection is an increasingly important goal of NASA's planetary science missions, and this new work [with hachimoji DNA] will help us to develop effective instruments and experiments that will expand the scope of what we look for." Research team leader Steven Benner notes, "By carefully analyzing the roles of shape, size and structure in hachimoji DNA, this work expands our understanding of the types of molecules that might store information in extraterrestrial life on alien worlds." According to researchers, hachimoji DNA could also be used "to develop clean diagnostics for human diseases, in DNA digital data storage, DNA barcoding, self-assembling nanostructures, and to make proteins with unusual amino acids. Parts of this hachimoji DNA are already being commercially produced by Firebird Biomolecular Sciences LLC". See also Abiogenesis Astrobiology Carbon chauvinism Carbon-based life Center for Life Detection Science Earliest known life forms Extraterrestrial life Human Nature (2019 CRISPR film documentary) Hypothetical types of biochemistry Nucleic acid analogue xDNA References Further reading Hypothesis paper. External links Astronomy FAQ - Why do we assume that other beings must be based on carbon? Why couldn't organisms be based on other substances? Astrobiology Biotechnology Biological contamination DNA Genetically modified organisms Genetic engineering Helices
Hachimoji DNA
Chemistry,Astronomy,Engineering,Biology
1,327
77,307,655
https://en.wikipedia.org/wiki/NGC%201419
NGC 1419 is an elliptical galaxy located 62 million light years away in the constellation of Eridanus. The galaxy was discovered by astronomer John Herschel on October 22, 1835, and is a member of the Fornax Cluster. NGC 1419 is a host to a supermassive black hole with an estimated mass of 25 million solar masses. 155 known globular clusters have been observed surrounding NGC 1419, along with 21 planetary nebulae. These planetary nebulae reveal that the distance to NGC 1419 is approximately 18.9 Mpc, while measurements using surface brightness fluctuations reveal that NGC 1419 is approximately 22.9 ± 0.9 Mpc away. The measurements using planetary nebulae confirm that NGC 1419 is a member of the Fornax Cluster. See also List of NGC objects (1001–2000) External links References 1419 013534 Eridanus (constellation) Astronomical objects discovered in 1865 Elliptical galaxies Fornax Cluster
NGC 1419
Astronomy
195
33,986,075
https://en.wikipedia.org/wiki/Ekman%20velocity
In oceanography, Ekman velocity – also referred to as a kind of residual ageostrophic velocity, as it deviates from geostrophy – is part of the total horizontal velocity (u) in the upper layer of water of the open ocean. This velocity, caused by winds blowing over the surface of the ocean, is such that the Coriolis force on this layer is balanced by the force of the wind. Typically, it takes about two days for the Ekman velocity to develop before it is directed at right angles to the wind. The Ekman velocity is named after the Swedish oceanographer Vagn Walfrid Ekman (1874–1954). Theory Through vertical eddy viscosity, winds act directly and frictionally on the Ekman layer, which typically is the upper 50–100 m of the ocean. The frictional surface flow (u) is at an angle to the right of the wind in the northern hemisphere and to the left in the southern hemisphere (45 degrees if viscosity is uniform in the vertical z-direction). This surface flow then modifies the flow slightly beneath it, which is then slightly more to the right, and finally the exponentially-weaker-with-depth flow vectors die down at around 50–100 meters, forming a spiral, called the Ekman spiral. The angle of each successive layer downward through the spiral depends on the strength and vertical distribution of the vertical eddy viscosity. When the contributions from all the vertical layers are added up – the integration of the velocity over depth, from the bottom to the top of the Ekman layer – the total "Ekman transport" is exactly 90 degrees to the right of the wind direction in the Northern Hemisphere and to the left in the Southern Hemisphere. Mathematical formulation Suppose geostrophic balance is achieved in the Ekman layer, modified by a wind stress applied at the water surface: f ẑ × u = −g∇η + ∂τ/∂z (1) where η is the sea surface height, τ is the applied stress divided by ρ₀ (the mean density of water in the Ekman layer), and ẑ is the unit vector in the vertical direction (opposing the direction of gravity).
The Ekman velocity is defined as the difference between the total horizontal velocity (u) and the geostrophic velocity (u_g): u_E = u − u_g (2) As the geostrophic velocity (u_g) is defined by the geostrophic balance f ẑ × u_g = −g∇η, i.e. u_g = (g/f) ẑ × ∇η, (3) subtracting this balance from (1) therefore gives f ẑ × u_E = ∂τ/∂z (4) or u_E = −(1/f) ẑ × ∂τ/∂z. (5) Next, the Ekman transport is obtained by integrating the Ekman velocity from the bottom level (z = −h) – at which the Ekman velocity vanishes – to the surface (z = 0): U_E = ∫ from −h to 0 of u_E dz = −(1/f) ẑ × τ₀, (6) where τ₀ is the value of τ at the surface. The SI unit of Ekman transport is m2·s−1, which is the horizontal velocity integrated in the vertical direction. Usage Based on Ekman theory and geostrophic dynamics, analyses of near-surface currents, e.g. tropical Pacific near-surface currents, can be generated using high-resolution wind and altimeter sea-level data. The surface velocity is defined as the motion of a standard World Ocean Circulation Experiment/Tropical Ocean-Global Atmosphere (WOCE/TOGA) 15 m drogue drifter. Near-surface Ekman velocity can be estimated with variables which best represent the ageostrophic motion of the WOCE/TOGA 15 m drogue drifters relative to the surface wind stress. Geostrophic velocities are calculated with sea level gradients derived from TOPEX/Poseidon sea surface height analyses (TOPEX/Poseidon altimeter sea level anomalies from along-track data are used here, interpolated to a 1°×1° grid spanning the domain 25°N–25°S, 90°E–290°E, during October 1992 – September 1998). Geostrophic and Ekman velocity are assumed to satisfy the lowest-order dynamics of the surface velocity, and they can be obtained independently from surface height and wind stress data. The standard f-plane approximation satisfies geostrophic balance, the lowest-order balance for quasi-steady circulation at higher latitudes. Near the equator, however, the Coriolis parameter f is close to zero and the geostrophic balance is not satisfied, as the velocity is proportional to the height gradient divided by the Coriolis parameter f.
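The depth-integrated transport of equation (6) reduces to a 90-degree rotation of the surface wind stress. A minimal Python sketch, assuming a constant mean density of 1025 kg m⁻³ (the function and constant names are illustrative):

```python
import math

RHO0 = 1025.0        # assumed mean seawater density, kg m^-3
OMEGA = 7.2921e-5    # Earth's rotation rate, rad s^-1

def coriolis(lat_deg: float) -> float:
    """Coriolis parameter f = 2 * Omega * sin(latitude)."""
    return 2.0 * OMEGA * math.sin(math.radians(lat_deg))

def ekman_transport(tau_x: float, tau_y: float, lat_deg: float):
    """Ekman volume transport (m^2 s^-1) from surface wind stress in N m^-2.
    Dividing the dynamic stress by RHO0 gives the kinematic stress tau of
    eq. (1); eq. (6), U_E = -(1/f) z-hat x tau0, then yields components
    (tau_y/(rho f), -tau_x/(rho f)): 90 deg to the right of the wind
    where f > 0 (Northern Hemisphere), to the left where f < 0."""
    f = coriolis(lat_deg)
    return tau_y / (RHO0 * f), -tau_x / (RHO0 * f)
```

For an eastward stress of 0.1 N m⁻² at 30°N, the transport is purely southward (to the right of the wind); at 30°S the same stress drives a northward transport, consistent with the hemispheric rule stated above.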
It has been shown in many studies that the beta-plane geostrophic approximation, involving the second derivative of surface height, is in good agreement with the observed velocities in the Equatorial Undercurrent; as a result, geostrophic currents near the equator are obtained with a weighted blend of the equatorial beta-plane and conventional f-plane geostrophic equations. A negative sea surface temperature (SST) anomaly prevails in the eastern equatorial Pacific from October to January. A zone of strong easterly Ekman flow propagates westward into the central Pacific basin near the date line during December–February. Relaxation of the trade winds in the eastern Pacific coincided with the eastward propagation of the geostrophic flow east of 240°E (particularly in February), while westward currents dominated in the central and western equatorial region. Their subsequent reversal in the east, together with weak local trade winds and weak upwelling along the coast, coincided with the onset of a warm SST anomaly (this anomaly first appeared off South America in March and April). A geostrophic current anomaly resembling a Kelvin wave signature, propagating eastward to South America between December and April, can easily be discerned, and its arrival at South America also coincided with the SST anomaly onset mentioned above. The geostrophic flow reversed in April to a strong eastward jet spanning the whole equatorial Pacific. As the El Niño SST anomaly developed during May and June, this eastward geostrophic flow persisted. See also Footnotes References Fu, L., E. J. Christensen, C. A. Yamarone, M. Lefebvre, Y. Menard, M. Dorrer, and P. Escudier, 1994: TOPEX/POSEIDON mission overview, J. Geophys. Res., 99, 24,369–24,382, doi:10.1029/94JC01761. Pedlosky, J., Geophysical Fluid Dynamics, 624 pp., Springer-Verlag, New York, 1979. Lukas, R., and E. Firing, 1984: The geostrophic balance of the Pacific Equatorial Undercurrent, Deep Sea Res., Part A, 31, 61–66, doi:10.1016/0198-0149(84)90072-4. Picaut, J., S. P.
Hayes, and M. J. McPhaden, 1989: Use of the geostrophic approximation to estimate time-varying zonal currents at the equator, J. Geophys. Res., 94, 3228-323. doi:10.1029/JC094iC03p03228 External links SIO 210: Introduction to Physical Oceanography – Dynamics IV: rotation and wind stress (Ekman layers) and other mixed layer topics Chapter 9 – Response of the Upper Ocean to Winds – Ekman Mass Transports Ocean Dynamics – Spring 2005 El Nino Tropical Pacific Currents Oceanography
Ekman velocity
Physics,Environmental_science
1,480
40,403,438
https://en.wikipedia.org/wiki/Streptonigrin
Streptonigrin is an aminoquinone antitumor and antibacterial antibiotic produced by Streptomyces flocculus. Streptonigrin was a successful target of total synthesis in 2011. Notes Antitumor antibiotic streptonigrin and its derivatives as inhibitors of nitric oxide-dependent activation of soluble guanylyl cyclase References Antibiotics
Streptonigrin
Biology
82
32,110,355
https://en.wikipedia.org/wiki/Precise%20Point%20Positioning
Precise Point Positioning (PPP) is a global navigation satellite system (GNSS) positioning method that calculates very precise positions, with errors as small as a few centimeters under good conditions. PPP is a combination of several relatively sophisticated GNSS position refinement techniques that can be used with near-consumer-grade hardware to yield near-survey-grade results. PPP uses a single GNSS receiver, unlike standard RTK methods, which use a temporarily fixed base receiver in the field as well as a relatively nearby mobile receiver. PPP methods overlap somewhat with DGNSS positioning methods, which use permanent reference stations to quantify systemic errors. Methods PPP relies on two general sources of information: direct observables and ephemerides. Direct observables are data that the GPS receiver can measure on its own. One direct observable for PPP is carrier phase, i.e., not only the timing message encoded in the GNSS signal, but also whether the wave of that signal is going "up" or "down" at a given moment. Loosely speaking, phase can be thought of as the digits after the decimal point in the number of waves between a given GNSS satellite and the receiver. By itself, phase measurement cannot yield even an approximate position, but once other methods have narrowed down the position estimate to within a diameter corresponding to a single wavelength (roughly 20 cm), phase information can refine the estimate. Another important direct observable is the differential delay between GNSS signals of different frequencies. This is useful because a major source of position error is variability in how GNSS signals are slowed in the ionosphere, which is affected relatively unpredictably by space weather. The ionosphere is dispersive, meaning that signals of different frequency are slowed by different amounts. 
By measuring the difference in the delays between signals of different frequencies, the receiver software (or later post-processing) can model and remove the delay at any frequency. This process is only approximate, and non-dispersive sources of delay remain (notably from water vapor moving around in the troposphere), but it improves accuracy significantly. Ephemerides are precise measurements of the GNSS satellites' orbits, made by the geodetic community (the International GNSS Service and other public and private organizations) with global networks of ground stations. Satellite navigation works on the principle that the satellites' positions at any given time are known, but in practice, micrometeoroid impacts, variation in solar radiation pressure, and so on mean that orbits are not perfectly predictable. The ephemerides that the satellites broadcast are earlier forecasts, up to a few hours old, and are less accurate (by up to a few meters) than carefully processed observations of where the satellites actually were. Therefore, if a GNSS receiver system stores raw observations, they can be processed later against a more accurate ephemeris than what was in the GNSS messages, yielding more accurate position estimates than what would be possible with standard realtime calculations. This post-processing technique has long been standard for GNSS applications that need high accuracy. More recently, projects such as APPS, the Automatic Precise Positioning Service of NASA's Jet Propulsion Laboratory (JPL), have begun publishing improved ephemerides over the internet with very low latency. PPP uses these streams to apply in near realtime the same kind of correction that used to be done in post-processing. Applications Precise positioning is increasingly used in fields including robotics, autonomous navigation, agriculture, construction, and mining.
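The dual-frequency trick described above can be sketched numerically. Because first-order ionospheric delay scales with the inverse square of the carrier frequency, the standard "ionosphere-free" combination of two pseudoranges cancels it exactly. The GPS L1/L2 frequencies below are the real values; the range and delay numbers are made up for illustration:

```python
# GPS carrier frequencies, Hz
F1 = 1575.42e6  # L1
F2 = 1227.60e6  # L2

def iono_free(p1, p2):
    """First-order ionosphere-free pseudorange combination:
    (f1^2 * P1 - f2^2 * P2) / (f1^2 - f2^2)."""
    return (F1**2 * p1 - F2**2 * p2) / (F1**2 - F2**2)

# Simulated measurement: true range plus a 1/f^2 ionospheric delay.
true_range = 21_500_000.0   # metres (illustrative satellite range)
i1 = 5.0                    # ionospheric delay at L1, metres (illustrative)
i2 = i1 * (F1 / F2)**2      # the same electron content delays L2 more
p1 = true_range + i1        # pseudorange observed on L1
p2 = true_range + i2        # pseudorange observed on L2

corrected = iono_free(p1, p2)  # recovers the true range
```

Substituting the two pseudoranges shows the ionospheric terms cancel term by term, which is why space-weather variability drops out of the combination; the non-dispersive tropospheric delay mentioned above would survive this step and must be modeled separately.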
The major weaknesses of PPP, compared with conventional consumer GNSS methods, are that it takes more processing power, it requires an outside ephemeris correction stream, and it takes some time (up to tens of minutes) to converge to full accuracy. This makes it relatively unappealing for applications such as fleet tracking, where centimeter-scale precision is generally not worth the extra complexity, and more useful in areas like robotics, where there may already be an assumption of onboard processing power and frequent data transfer. References "Precise Point Positioning and Its Challenges, Aided-GNSS and Signal Tracking" Inside GNSS Geomatics engineering Global Positioning System Surveying
Precise Point Positioning
Technology,Engineering
870
1,056,915
https://en.wikipedia.org/wiki/Concept%20art
Concept art is a form of visual art used to convey an idea for use in film, video games, animation, comic books, television shows, or other media before it is put into the final product. The term was used by the Walt Disney Animation Studios as early as the 1930s. Concept art usually refers to world-building artwork used to inspire the development of media products, and is not the same as storyboard, though they are often confused. Concept art is developed through several iterations. Multiple solutions are explored before settling on the final design. Concept art is not only used to develop the work but also to show the project's progress to directors, clients, and investors. Once the development of the work is complete, concept art may be reworked and used for advertising materials. Overview of the Industry A concept artist is an individual who generates a visual design for an item, character, or area that does not yet exist. This includes, but is not limited to, film, animation, and more recently, video game production. Being a concept artist takes commitment, vision, and a clear understanding of the role. While it is necessary to have the skills of a fine artist, a concept artist must also be able to work under strict deadlines in the capacity of a graphic designer. Some concept artists may start as fine artists, industrial designers, animators, or even special effects artists. Interpretation of ideas and how they are realized is where the concept artist's individual creativity is most evident, but subject matter is often beyond their control. Many concept artists work in a studio or from home remotely as freelancers. Working for a studio has the advantage of an established salary. In the United States, the average annual gross salary for a concept artist in video game industry was $60,000-$70,000 a year in 2017. In 2024, entry level concept art positions ranged from $60,000-$95,000, with the average salary at about $112,000. 
Digital media production, including the television and video game industries, has grown substantially in the 21st century. From 2009 to 2012, the value of the United States video game industry jumped from $19 billion to $37 billion, and from 2008 to 2016, the value of the mobile game industry in China increased from 240 million yuan to 37.48 billion yuan. The need for concept artists in this rapidly growing industry has skyrocketed; 65% of the game development industry's staff are artists. As such, there is a push for countries across the world to increase the availability of art education so that local artists have the skills to capitalize on the booming media industry. The art education community has made changes, offering new courses in art universities in order to better prepare students for the digital art field. Certain educators are pushing to move away from historic art education towards more standardized, industry-paced approaches that will better prepare students for this modern workforce. Other programs are pushing to create artist workshops in order to train artists in digital software. Materials Concept art has embraced the use of digital technology. Raster graphics editors for digital painting have become more easily available, as well as hardware such as graphics tablets, enabling more efficient working methods. Prior to this, any number of traditional mediums such as oil paints, acrylic paints, markers and pencils were used. Many modern paint packages are programmed to simulate the blending of color in the same way paint would blend on a canvas; proficiency with traditional media is often paramount to a concept artist's ability to use painting software. Popular programs for concept artists include Photoshop and Corel Painter. Others include Manga Studio, Procreate and ArtRage. Most concept artists have switched to digital media because of ease of editing and speed.
Much concept work is done under tight deadlines, where a highly polished piece is needed in a short amount of time. Themes and styles Concept art has always had to cover many subjects, being the primary medium in film poster design since the early days of Hollywood, but the two most widely covered areas are science fiction and fantasy. Since the recent rise of its use in video game production, concept art has expanded to cover genres from fantasy to realistic depending on the final product. Concept art ranges from stylized to photorealistic depending on the needs of the project. Artists working on a project often produce a large volume of work in the early 'blue sky' stage of production. This provides a broad range of interpretations, most being in the form of sketches, speed paints, and 3D overpaints. Later pieces, such as matte paintings, are produced as realistically as required. Concept artists will often have to adapt to the style of the studio they are hired for. The ability to produce multiple styles is valued in a concept artist. Specialization Concept art is a broad field, including specializations on a wide range of fictional and nonfiction subjects, such as character design, environment design, set design, and more industrial applications like retail design, architecture design, fashion design, and object design. Specialization is regarded as better for freelancers than for concept artists who want to work in-house, where flexibility is key. Knowing the foundations of art, such as anatomy, perspective, color theory, design, and lighting, is essential to all specializations. See also Key art Illustration 3D modeling Architectural rendering Artist's impression Matte painting Storyboard Concept car Digital painting References External links Illustration Industrial design
Concept art
Engineering
1,090
1,059,781
https://en.wikipedia.org/wiki/Epimer
In stereochemistry, an epimer is one of a pair of diastereomers. The two epimers have opposite configuration at only one stereogenic center out of at least two. All other stereogenic centers in the molecules are the same in each. Epimerization is the interconversion of one epimer to the other epimer. Doxorubicin and epirubicin are two epimers that are used as drugs. Examples The stereoisomers β-D-glucopyranose and β-D-mannopyranose are epimers because they differ only in the stereochemistry at the C-2 position. The hydroxy group in β-D-glucopyranose is equatorial (in the "plane" of the ring), while in β-D-mannopyranose the C-2 hydroxy group is axial (up from the "plane" of the ring). These two molecules are epimers but, because they are not mirror images of each other, are not enantiomers. (Enantiomers have the same name, but differ in D and L classification.) They are also not sugar anomers, since it is not the anomeric carbon that is involved in the stereochemistry. Similarly, β-D-glucopyranose and β-D-galactopyranose are epimers that differ at the C-4 position, with the former being equatorial and the latter being axial. In the case that the difference is in the -OH groups on C-1, the anomeric carbon, such as in the case of α-D-glucopyranose and β-D-glucopyranose, the molecules are both epimers and anomers (as indicated by the α and β designation). Other closely related compounds are epi-inositol and inositol and lipoxin and epilipoxin. Epimerization Epimerization is a chemical process where an epimer is converted to its diastereomeric counterpart. It can occur, for example, during depolymerization reactions of condensed tannins. Epimerization can be spontaneous (generally a slow process), or catalysed by enzymes, e.g. the epimerization between the sugars N-acetylglucosamine and N-acetylmannosamine, which is catalysed by renin-binding protein. The penultimate step in Zhang & Trudell's classic epibatidine synthesis is an example of epimerization.
Pharmaceutical examples include epimerization of the erythro isomers of methylphenidate to the pharmacologically preferred and lower-energy threo isomers, and undesired in vivo epimerization of tesofensine to brasofensine. References Stereochemistry
Epimer
Physics,Chemistry
609
27,449,270
https://en.wikipedia.org/wiki/GRB%20020813
GRB 020813 was a gamma-ray burst (GRB) that was detected on 13 August 2002 at 02:44 UTC. A gamma-ray burst is a highly luminous flash associated with an explosion in a distant galaxy, producing gamma rays, the most energetic form of electromagnetic radiation, and often followed by a longer-lived "afterglow" emitted at longer wavelengths (X-ray, ultraviolet, optical, infrared, and radio). Observations GRB 020813 was detected on 13 August 2002 at 02:44 UTC by multiple instruments on the High Energy Transient Explorer. The burst lasted approximately 125 seconds. The initial position was estimated to be at a right ascension of and a declination of . Less than two hours after the burst had been detected, optical observations of the region were made with the Katzman Automatic Imaging Telescope, which revealed the burst's optical afterglow. In the days following the event, observations were made by the Chandra X-ray Observatory, which detected a fading X-ray afterglow. The redshift for this event was approximately z = 1.254. Supernova relation Prior to this burst, there had not yet been any concrete evidence linking gamma-ray bursts to supernovae, though it had long been hypothesized that the two phenomena were results of the same type of event. The spectrum of GRB 011211 was reported to include emission lines associated with the chemical elements magnesium, silicon, sulphur, argon, and calcium, which supported the theory that gamma-ray bursts are preceded by highly massive stars undergoing a supernova collapse. However, these results were considered statistically insignificant and somewhat controversial due to the low resolution of the instruments used. The spectrum of GRB 020813 was also found to display emission lines of elements associated with supernovae, in this case sulphur and silicon. This evidence confirmed the connection between supernovae and gamma-ray bursts.
References External links HETE Trigger Summary 020813 20020813 August 2002 Sagittarius (constellation)
GRB 020813
Astronomy
425
1,454,728
https://en.wikipedia.org/wiki/Malthusianism
Malthusianism is a theory that population growth is potentially exponential, according to the Malthusian growth model, while the growth of the food supply or other resources is linear, which eventually reduces living standards to the point of triggering a population decline. This event, called a Malthusian catastrophe (also known as a Malthusian trap, population trap, Malthusian check, Malthusian crisis, Point of Crisis, or Malthusian crunch) has been predicted to occur if population growth outpaces agricultural production, thereby causing famine or war. According to this theory, poverty and inequality will increase as the price of assets and scarce commodities goes up due to fierce competition for these dwindling resources. This increased level of poverty eventually causes depopulation by decreasing birth rates. If asset prices keep increasing, social unrest would occur, which would likely cause a major war, revolution, or a famine. Societal collapse is an extreme but possible outcome from this process. The theory posits that such a catastrophe would force the population to "correct" back to a lower, more easily sustainable level (quite rapidly, due to the potential severity and unpredictable results of the mitigating factors involved, as compared to the relatively slow time scales and well-understood processes governing unchecked growth or growth affected by preventive checks). Malthusianism has been linked to a variety of political and social movements, but almost always refers to advocates of population control. These concepts derive from the political and economic thought of the Reverend Thomas Robert Malthus, as laid out in his 1798 writings, An Essay on the Principle of Population. 
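Malthus's geometric-versus-arithmetic argument can be illustrated with a toy simulation of the model described above: population compounds at a fixed rate while the food supply grows by a fixed increment, so food per capita must eventually fall below subsistence. All quantities and rates here are arbitrary illustrative assumptions, not historical estimates:

```python
def years_until_crisis(pop0, food0, pop_growth, food_increment, ration):
    """Return the first year in which food per capita falls below
    `ration`, under geometric (exponential) population growth and
    arithmetic (linear) growth of the food supply."""
    pop, food = pop0, food0
    year = 0
    while food / pop >= ration:
        pop *= 1 + pop_growth    # geometric growth of population
        food += food_increment   # arithmetic growth of food supply
        year += 1
    return year

# Population doubling roughly every 25 years (~2.8%/yr, the rate Malthus
# cited for the young United States); food rising by a fixed amount per year.
crisis_year = years_until_crisis(
    pop0=1_000_000, food0=2_000_000,
    pop_growth=0.028, food_increment=40_000, ration=1.0)
```

However the increment is chosen, compounding growth eventually overtakes the linear supply; a larger `food_increment` (Malthus's "improvement in agriculture") only postpones the crossing, which is the core of the trap argument.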
Malthus suggested that while technological advances could increase a society's supply of resources, such as food, and thereby improve the standard of living, the abundance of resources would enable population growth, which would eventually bring the supply of resources for each person back to its original level. Some economists contend that since the Industrial Revolution in the early 19th century, mankind has broken out of the trap. Others argue that the continuation of extreme poverty indicates that the Malthusian trap continues to operate. Others further argue that due to lack of food availability coupled with excessive pollution, developing countries show more evidence of the trap than developed countries. A similar, more modern concept is that of human overpopulation. Neo-Malthusianism is the advocacy of human population planning to ensure resources and environmental integrities for current and future human populations as well as for other species. In Britain, the term "Malthusian" can also refer more specifically to arguments made in favour of family planning, hence organizations such as the Malthusian League. Neo-Malthusians differ from Malthus's theories mainly in their support for the use of birth control. Malthus, a devout Christian, believed that "self-control" (i.e., abstinence) was preferable to artificial birth control. He also worried that the effect of contraceptive use would be too powerful in curbing growth, conflicting with the common 18th century perspective (to which Malthus himself adhered) that a steadily growing population remained a necessary factor in the continuing "progress of society", generally. Modern neo-Malthusians are generally more concerned than Malthus with environmental degradation and catastrophic famine than with poverty.
Malthusianism has attracted criticism from diverse schools of thought, including Georgists, Marxists and socialists, libertarians and free market advocates, feminists, Catholics, and human rights advocates, characterising it as excessively pessimistic, insufficiently researched, misanthropic or inhuman. Many critics believe Malthusianism has been discredited since the publication of Principle of Population, often citing advances in agricultural techniques and modern reductions in human fertility. Some modern proponents believe that the basic concept of population growth eventually outstripping resources is still fundamentally valid, and that positive checks are still likely to occur in humanity's future if no action is taken to intentionally curb population growth. In spite of the variety of criticisms against it, the Malthusian argument remains a major discourse based on which national and international environmental regulations are promoted. History Malthus' theoretical argument In 1798, Thomas Malthus proposed his hypothesis in An Essay on the Principle of Population. He argued that although human populations tend to increase, the happiness of a nation requires a like increase in food production. "The happiness of a country does not depend, absolutely, upon its poverty, or its riches, upon its youth, or its age, upon its being thinly, or fully inhabited, but upon the rapidity with which it is increasing, upon the degree in which the yearly increase of food approaches to the yearly increase of an unrestricted population." However, the propensity for population increase also leads to a natural cycle of abundance and shortages: Malthus faced opposition from economists both during his life and since. A vocal critic several decades later was Friedrich Engels. Early history Malthus was not the first to outline the problems he perceived. The original essay was part of an ongoing intellectual discussion at the end of the 18th century regarding the origins of poverty. 
Principle of Population was specifically written as a rebuttal to thinkers like William Godwin and the Marquis de Condorcet, and Malthus's own father, who believed in the perfectibility of humanity. Malthus believed humanity's ability to reproduce too rapidly doomed efforts at perfection and caused various other problems. His criticism of the working class's tendency to reproduce rapidly, and his belief that this led to their poverty, brought widespread criticism of his theory. Malthusians perceived ideas of charity to the poor, typified by Tory paternalism, as futile, since these would only result in increased numbers of the poor; these theories played into Whig economic ideas exemplified by the Poor Law Amendment Act of 1834. The Act was described by opponents as "a Malthusian bill designed to force the poor to emigrate, to work for lower wages, to live on a coarser sort of food", which initiated the construction of workhouses despite riots and arson. Malthus revised his theories in later editions of An Essay on the Principle of Population, taking a more optimistic tone, although there is some scholarly debate on the extent of his revisions. According to Dan Ritschel of the Center for History Education at the University of Maryland, Baltimore County, The great Malthusian dread was that "indiscriminate charity" would lead to exponential growth in the population in poverty, increased charges to the public purse to support this growing army of the dependent, and, eventually, the catastrophe of national bankruptcy. Though Malthusianism has since come to be identified with the issue of general over-population, the original Malthusian concern was more specifically with the fear of over-population by the dependent poor. One proponent of Malthusianism was the novelist Harriet Martineau, whose circle of acquaintances included Charles Darwin, and the ideas of Malthus were a significant influence on the inception of Darwin's theory of evolution.
Darwin was impressed by the idea that population growth would eventually lead to more organisms than could possibly survive in any given environment, leading him to theorize that organisms with a relative advantage in the struggle for survival and reproduction would be able to pass their characteristics on to further generations. Proponents of Malthusianism were in turn influenced by Darwin's ideas, both schools coming to influence the field of eugenics. Henry Fairfield Osborn Jr. advocated "humane birth selection through humane birth control" in order to avoid a Malthusian catastrophe by eliminating the "unfit". Malthusianism became a less common intellectual tradition as the 19th century advanced, mostly as a result of technological increases, the opening of new territory to agriculture, and increasing international trade. Although a "conservationist" movement in the United States concerned itself with resource depletion and natural protection in the first half of the twentieth century, Desrochers and Hoffbauer write, "It is probably fair to say ... that it was not until the publication of Osborn's and Vogt's books [1948] that a Malthusian revival took hold of a significant segment of the American population". Modern formulation The modern formulation of the Malthusian theory was developed by Quamrul Ashraf and Oded Galor. Their theoretical structure suggests that as long as higher income has a positive effect on reproductive success, and land is a limiting factor in resource production, then technological progress has only a temporary effect on income per capita (per person). While in the short run technological progress increases income per capita, resource abundance created by technological progress would enable population growth, and would eventually bring the per capita income back to its original long-run level. 
The testable prediction of the theory is that during the Malthusian epoch technologically advanced economies were characterized by higher population density, but their level of income per capita was not different from the level in societies that are technologically backward. Preventive vs. positive population controls To manage population growth with respect to food supply, Malthus proposed methods which he described as preventive or positive checks: A preventive check, according to Malthus, is a human practice that limits population growth before it occurs. Some primary examples are celibacy and chastity, but also contraception, which Malthus condemned as morally indefensible along with infanticide, abortion and adultery. In other words, preventive checks control the population by reducing fertility rates. A positive check is any event or circumstance that shortens the human life span. The primary examples of this are war, plague and famine. However, poor health and economic conditions are also considered instances of positive checks. When these lead to high rates of premature death, the result is termed a Malthusian catastrophe. The adjacent diagram depicts the abstract point at which such an event would occur, in terms of existing population and food supply: when the population reaches or exceeds the capacity of the shared supply, positive checks are forced to occur, restoring balance. (In reality the situation would be significantly more nuanced due to complex regional and individual disparities around access to food, water, and other resources.) Neo-Malthusian theory Malthusian theory is a recurrent theme in many social science venues. John Maynard Keynes, in Economic Consequences of the Peace, opens his polemic with a Malthusian portrayal of the political economy of Europe as unstable due to Malthusian population pressure on food supplies.
Many models of resource depletion and scarcity are Malthusian in character: the rate of energy consumption will outstrip the ability to find and produce new energy sources, and so lead to a crisis. In France, terms such as "politique malthusienne" ("Malthusian politics") refer to population control strategies. The concept of restriction of the population associated with Malthus morphed, in later political-economic theory, into the notion of restriction of production. In the French sense, a "Malthusian economy" is one in which protectionism and the formation of cartels is not only tolerated but encouraged. Vladimir Lenin, the leader of the Bolshevik Party and the main architect of the Soviet Union, was a critic of neo-Malthusian theory (but not of birth control and abortion in general). "Neo-Malthusianism" is a concern that overpopulation as well as overconsumption may increase resource depletion and/or environmental degradation, leading to ecological collapse or other hazards. The rapid increase in the global population of the past century exemplifies Malthus's predicted population patterns; it also appears to describe socio-demographic dynamics of complex pre-industrial societies. These findings are the basis for neo-Malthusian modern mathematical models of long-term historical dynamics. There was a general "neo-Malthusian" revival in the mid-to-late 1940s, continuing through to the 2010s after the publication of two influential books in 1948 (Fairfield Osborn's Our Plundered Planet and William Vogt's Road to Survival). During that time the population of the world rose dramatically. Many in environmental movements began to sound the alarm regarding the potential dangers of population growth. Paul R. Ehrlich has been one of the most prominent neo-Malthusians since the publication of The Population Bomb in 1968. In 1968, ecologist Garrett Hardin published an influential essay in Science that drew heavily from Malthusian theory.
His essay, "The Tragedy of the Commons", argued that "a finite world can support only a finite population" and that "freedom to breed will bring ruin to all." The Club of Rome published a book entitled The Limits to Growth in 1972. The report and the organisation soon became central to the neo-Malthusian revival. Leading ecological economist Herman Daly has acknowledged the influence of Malthus on his concept of a steady-state economy. Other prominent Malthusians include the Paddock brothers, authors of Famine 1975! America's Decision: Who Will Survive? The neo-Malthusian revival has drawn criticism from writers who claim the Malthusian warnings were overstated or premature because the green revolution has brought substantial increases in food production and will be able to keep up with continued population growth. Julian Simon, a cornucopian, has written that contrary to neo-Malthusian theory, Earth's "carrying capacity" is essentially limitless. Simon argues not that there is an infinite physical amount of, say, copper, but that for human purposes that amount should be treated as infinite, because it is not bounded or limited in any economic sense: (1) known reserves are of uncertain quantity; (2) new reserves may become available, either through discovery or via the development of new extraction techniques; (3) reserves can be recycled; (4) existing reserves can be utilized more efficiently (e.g., "It takes much less copper now to pass a given message than a hundred years ago." [The Ultimate Resource 2, 1996, footnote, p. 62]); and (5) economic equivalents can be developed, e.g., optic fibre in the case of copper for telecommunications. Responding to Simon, Al Bartlett reiterates the potential of population growth as an exponential (or as expressed by Malthus, "geometrical") curve to outstrip both natural resources and human ingenuity.
Bartlett writes and lectures particularly on energy supplies, and describes the "inability to understand the exponential function" as the "greatest shortcoming of the human race". Prominent neo-Malthusians such as Paul Ehrlich maintain that ultimately, population growth on Earth is still too high, and will eventually lead to a serious crisis. The 2007–2008 world food price crisis inspired further Malthusian arguments regarding the prospects for global food supply. From approximately 2004 to 2011, concerns about "peak oil" and other forms of resource depletion became widespread in the United States, and motivated a large if short-lived subculture of neo-Malthusian "peakists". A United Nations Food and Agriculture Organization study conducted in 2009 said that food production would have to increase by 70% over the next 40 years, and food production in the developing world would need to double to feed a projected population increase from 7.8 billion to 9.1 billion in 2050. The effects of global warming (floods, droughts, and other extreme weather events) are expected to negatively affect food production, with different impacts in different regions. The FAO also said the use of agricultural resources for biofuels may also put downward pressure on food availability. The more recent emergence of bio-energy with carbon capture (BECCS) as a prevalent "negative emissions" strategy for reaching Paris Climate Accord goals is another such pressure. Evidence in support Research indicates that technological superiority and higher land productivity had significant positive effects on population density but insignificant effects on the standard of living during the time period 1–1500 AD. In addition, scholars have reported on the lack of a significant trend of wages in various places over the world for very long stretches of time. In Babylonia during the period 1800 to 1600 BC, for example, the daily wage for a common laborer was enough to buy about 15 pounds of wheat. 
In Classical Athens in about 328 BC, the corresponding wage could buy about 24 pounds of wheat. In England in 1800 AD the wage was about 13 pounds of wheat. In spite of the technological developments across these societies, the daily wage hardly varied. In Britain between 1200 and 1800, only relatively minor fluctuations from the mean (less than a factor of two) in real wages occurred. Following depopulation by the Black Death and other epidemics, real income in Britain peaked around 1450–1500 and began declining until the British Agricultural Revolution. Historian Walter Scheidel posits that waves of plague following the initial outbreak of the Black Death throughout Europe had a leveling effect that changed the ratio of land to labor, reducing the value of the former while boosting that of the latter, which lowered economic inequality by making employers and landowners less well off while improving the economic prospects and living standards of workers. He says that "the observed improvement in living standards of the laboring population was rooted in the suffering and premature death of tens of millions over the course of several generations." This leveling effect was reversed by a "demographic recovery that resulted in renewed population pressure." Robert Fogel published a study of lifespans and nutrition from about a century before Malthus to the 19th century, examining European birth and death records as well as military and other records of height and weight; he found significantly stunted height and low body weight, indicative of chronic hunger and malnutrition. He also found short lifespans that he attributed to chronic malnourishment which left people susceptible to disease. Lifespans, height and weight began to steadily increase in the UK and France after 1750. Fogel's findings are consistent with estimates of available food supply. Evidence supporting Malthusianism today can be seen in the poorer countries of the world with booming populations. 
In East Africa specifically, experts say that this area of the world has not yet escaped the Malthusian effects of population growth. Jared Diamond in his book Collapse (2005), for example, argues that the Rwandan Genocide was brought about in part due to excessive population pressures. He argues that Rwanda "illustrates a case where Malthus's worst-case scenario does seem to have been right." Diamond explains that Rwanda's high population density, combined with lagging technological advancement, meant that its food production could not keep up with its population growth. Diamond claims that this environment is what caused the mass killings of Tutsi and even some Hutu Rwandans. The genocide, in this instance, provides a potential example of a Malthusian trap. Theory of breakout via technology Industrial Revolution Some researchers contend that a British breakout occurred due to technological improvements and structural change away from agricultural production, while coal, capital, and trade played a minor role. Economic historian Gregory Clark, building on the insights of Galor and Moav, has argued, in his book A Farewell to Alms, that a British breakout may have been caused by differences in reproduction rates among the rich and the poor (the rich were more likely to marry, tended to have more children, and, in a society where disease was rampant and childhood mortality at times approached 50%, upper-class children were more likely to survive to adulthood than poor children). This in turn led to sustained "downward mobility": the descendants of the rich becoming more populous in British society and spreading middle-class values such as hard work and literacy. 20th century After World War II, mechanized agriculture produced a dramatic increase in productivity of agriculture and the Green Revolution greatly increased crop yields, expanding the world's food supply while lowering food prices. 
In response, the growth rate of the world's population accelerated rapidly, resulting in predictions by Paul R. Ehrlich, Simon Hopkins, and many others of an imminent Malthusian catastrophe. However, populations of most developed countries grew slowly enough to be outpaced by gains in productivity. By the early 21st century, many technologically developed countries had passed through the demographic transition, a complex social development encompassing a drop in total fertility rates in response to various fertility factors, including lower infant mortality, increased urbanization, and a wider availability of effective birth control. On the assumption that the demographic transition is now spreading from the developed countries to less developed countries, the United Nations Population Fund estimates that human population may peak in the late 21st century rather than continue to grow until it has exhausted available resources. Recent empirical research corroborates this assumption for most of the less developed countries, with the exception of most of Sub-Saharan Africa. A 2004 study by a group of prominent economists and ecologists, including Kenneth Arrow and Paul Ehrlich suggests that the central concerns regarding sustainability have shifted from population growth to the consumption/savings ratio, due to shifts in population growth rates since the 1970s. Empirical estimates show that public policy (taxes or the establishment of more complete property rights) can promote more efficient consumption and investment that are sustainable in an ecological sense; that is, given the current (relatively low) population growth rate, the Malthusian catastrophe can be avoided by either a shift in consumer preferences or public policy that induces a similar shift. According to Malthus, population doubled every 25 years. Population sat at less than 17 million people in the U.S. 
in the 1850s and a century later, according to the United States Census Bureau, the population had risen to 150 million. Malthus predicted that overpopulation would lead to war, famine, and disease, and that society would eventually be unable to feed every person. Malthus's theory proved incorrect, however: by the early-to-mid 1900s, the rise of conventional food production had brought down costs while efficiency increased exponentially. More supply was being produced with less work, fewer resources, and less time. Processed foods had much to do with this, as many housewives wanted to spend less time in the kitchen and instead enter the workforce. This was the beginning of technological advancements keeping pace with food demand even in the middle of a war. Economists disregarded Malthus's population theory because he did not factor in the important roles society would play in economic growth. These factors concerned society's need to improve its quality of life and its desire for economic prosperity. Cultural shifts also had much to do with the increase in food production, and this put an end to the population theory. Criticism One of the earliest critics was David Ricardo. Malthus immediately and correctly recognised Ricardo's criticism to be an attack on his theory of wages. Ricardo and Malthus debated this in a lengthy personal correspondence. In Ireland, where, applying his thesis, Malthus proposed that "to give full effect to the natural resources of the country a great part of the population should be swept from the soil", there were early refutations. In Observations on the population and resources of Ireland (1821), Whitley Stokes, invoking the advantages mankind derives from "improved industry, improved conveyance, improvements in morals, government and religion", denied that there was a "law of nature" that procreation must outrun the means of subsistence. Ireland's problem was not her "numbers" but her indifferent government. 
In An Inquiry Concerning the Population of Nations containing a Refutation of Mr. Malthus's Essay on Population (1818), George Ensor had developed a similar broadside against Malthusian political economy, arguing that poverty was sustained not by reckless propensity to propagate, but rather by the state's indulgence of the heedless concentration of private wealth. Following the same line of argument, William Hazlitt (1819) wrote, "Mr Malthus wishes to confound the necessary limits of the produce of the earth with the arbitrary and artificial distribution of that produce by the institutions of society". Thomas Carlyle dismissed Malthusianism as pessimistic sophistry. In Chartism (1839), he denied the possibility that "twenty-four millions" of English "working people[s]", "scattered over a hundred and eighteen thousand square miles of space", could collectively "take a resolution" to diminish the supply of labourers "and act on it". Even if they could, the ongoing influx of Irish immigrants would render their efforts redundant. Associating Malthusianism with laissez-faire, he instead advocated proactive legislation. His later essay "Indian Meal" (1849) argued that maize production would remedy the failure of the potato crop as well as any prospective food shortages. Karl Marx (who had occasion to cite Ensor) referred to Malthusianism as "nothing more than a school-boyish, superficial plagiary of Defoe, Sir James Steuart, Townsend, Franklin, Wallace". Friedrich Engels argued that Malthus failed to recognise a crucial difference between humans and other species. In capitalist societies, as Engels put it, scientific and technological "progress is as unlimited and at least as rapid as that of population". Marx argued, even more broadly, that the growth of both a human population in toto and the "relative surplus population" within it, occurred in direct proportion to accumulation. 
Henry George in Progress and Poverty (1879) criticised Malthus's view that population growth was a cause of poverty, arguing that poverty was caused by the concentration of ownership of land and natural resources. George noted that humans are distinct from other species, because unlike most species humans can use their minds to leverage the reproductive forces of nature to their advantage. He wrote, "Both the hawk and the man eat chickens; but the more hawks, the fewer chickens, while the more men, the more chickens." D. E. C. Eversley observed that Malthus appeared unaware of the extent of industrialisation, and either ignored or discredited the possibility that it could improve living conditions of the poorer classes. Barry Commoner believed in The Closing Circle (1971) that technological progress will eventually reduce the demographic growth and environmental damage created by civilisation. He also opposed coercive measures postulated by neo-malthusian movements of his time arguing that their cost will fall disproportionately on the low-income population who are struggling already. Ester Boserup suggested that expanding population leads to agricultural intensification and development of more productive and less labor-intensive methods of farming. Thus, human population levels determines agricultural methods, rather than agricultural methods determining population. Environmentalist founder of Ecomodernism, Stewart Brand, summarised how the Malthusian predictions of The Population Bomb and The Limits to Growth failed to materialise due to radical changes in fertility that peaked at a growth of 2 percent per year in 1963 globally and has since rapidly declined. 
While short-term trends, even on the scale of decades or centuries, cannot prove or disprove the existence of mechanisms promoting a Malthusian catastrophe over longer periods, the prosperity of a major fraction of the human population at the beginning of the 21st century, and the debatability of the predictions for ecological collapse made by Paul R. Ehrlich in the 1960s and 1970s, have led some people, such as economist Julian L. Simon, whose book The Ultimate Resource contends that the ultimate resource is human technology, and medical statistician Hans Rosling, to question its inevitability. Joseph Tainter asserts that science has diminishing marginal returns and that scientific progress is becoming more difficult, harder to achieve, and more costly, which may reduce the efficiency of the factors that prevented the Malthusian scenarios from happening in the past. The view that a "breakout" from the Malthusian trap has led to an era of sustained economic growth is explored by "unified growth theory". One branch of unified growth theory is devoted to the interaction between human evolution and economic development. In particular, Oded Galor and Omer Moav argue that the forces of natural selection during the Malthusian epoch selected traits beneficial to the growth process, and this growth-enhancing change in the composition of human traits brought about the escape from the Malthusian trap, the demographic transition, and the take-off to modern growth. See also Demographic trap Overshoot (population) Antinatalism Ecofascism John B. Calhoun and his Behavioral sink for a more detailed perspective on social pathologies that may develop prior to population collapse. Dysgenics as the qualitative counterpart to Malthus' primarily quantitative concerns about human populations Political demography Cliodynamics Population cycle Population dynamics National Security Study Memorandum 200 - a U.S. National Security Council Study advocating population reduction in selected countries to advance U.S. 
interests Jevons paradox Food Race Resource war References Further reading See especially Chapter 2 of this book Turchin, P., et al., eds. (2007). History & Mathematics: Historical Dynamics and Development of Complex Societies. Moscow: KomKniga. A Trap At The Escape From The Trap? Demographic-Structural Factors of Political Instability in Modern Africa and West Asia. Cliodynamics 2/2 (2011): 1–28. Lueger, T. (2019). The Principle of Population and the Malthusian Trap. Darmstadt Discussion Papers in Economics 232, 2018. External links Essay on life of Thomas Malthus Malthus' Essay on the Principle of Population David Friedman's essay arguing against Malthus' conclusions United Nations Population Division World Population Trends homepage Demographic economic problems Disaster preparedness Energy economics Human geography Human overpopulation Macroeconomic theories Theories of history Schools of economic thought
Malthusianism
Environmental_science
6,084
21,247,683
https://en.wikipedia.org/wiki/Catalog%20of%205%2C268%20Standard%20Stars%20Based%20on%20the%20Normal%20System%20N30
Catalog of 5,268 Standard Stars Based on the Normal System N30 is the 1952 auxiliary star catalogue created by Herbert Rollo Morgan to address proper motion inaccuracies in 19th century observations by converting contemporary catalogues from a mean epoch around 1900 (±0.1 yr) to epoch and equinox 1950.0. However, the positions were derived from more than 70 recent catalogs with epochs of observation between 1917 and 1949. The N30 system is independent from any other astrometric system. Independent proper motions were determined by comparing the 1930 normal positions with the normal positions at the mean epoch, 30 years earlier, in the Albany General Catalogue, corrected by Morgan in 1948. Its primary use is the incorporation of 19th century astronomical data into modern research, and includes Harvard photometric magnitude, Henry Draper (HD) spectral type, and proper motion. See also B1950 J2000 References External links Catalog download page - America Alternative catalog download page - Europe Astronomical catalogues of stars
Catalog of 5,268 Standard Stars Based on the Normal System N30
Astronomy
202
2,930,391
https://en.wikipedia.org/wiki/Term%20indexing
In computer science, a term index is a data structure to facilitate fast lookup of terms and clauses in a logic program, deductive database, or automated theorem prover. Overview Many operations in automatic theorem provers require search in huge collections of terms and clauses. Such operations typically fall into the following scheme. Given a collection S of terms (clauses) and a query term (clause) t, find in S some/all terms l related to t according to a certain retrieval condition. Most interesting retrieval conditions are formulated as existence of a substitution θ that relates in a special way the query t and the retrieved objects l. Here is a list of retrieval conditions frequently used in provers: term l is unifiable with term t, i.e., there exists a substitution θ, such that lθ = tθ; term l is an instance of t, i.e., there exists a substitution θ, such that l = tθ; term l is a generalisation of t, i.e., there exists a substitution θ, such that lθ = t; clause C θ-subsumes clause D, i.e., there exists a substitution θ, such that Cθ is a subset/submultiset of D; clause C is θ-subsumed by D, i.e., there exists a substitution θ, such that Dθ is a subset/submultiset of C. More often than not, we are actually interested in finding the appropriate substitutions explicitly, together with the retrieved terms l, rather than just in establishing existence of such substitutions. Very often the sizes of term sets to be searched are large, the retrieval calls are frequent and the retrieval condition test is rather complex. In such situations linear search in S, when the retrieval condition is tested on every term from S, becomes prohibitively costly. To overcome this problem, special data structures, called indexes, are designed in order to support fast retrieval. Such data structures, together with the accompanying algorithms for index maintenance and retrieval, are called term indexing techniques. 
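As a concrete illustration of such an index, the following sketch (illustrative Python; the names and the convention that 'x', 'y', 'z' denote variables are assumptions for this example, not taken from any particular prover) stores flattened terms in a trie, in the spirit of a discrimination tree:

```python
# Illustrative sketch of a discrimination-tree term index.
# Terms are nested tuples such as ('f', ('g', 'x'), 'a').

def flatten(term):
    """Preorder symbol sequence of a term, with variables collapsed to '*'."""
    if isinstance(term, tuple):
        symbols = [term[0]]
        for argument in term[1:]:
            symbols.extend(flatten(argument))
        return symbols
    return ['*'] if term in ('x', 'y', 'z') else [term]

class DiscriminationTree:
    """A trie over flattened terms, used as an imperfect filter:
    retrieved terms are only candidates, and the actual retrieval
    condition must still be checked, e.g. by a real unification test."""

    def __init__(self):
        self.children = {}
        self.terms = []                     # terms stored at this node

    def insert(self, term):
        node = self
        for symbol in flatten(term):
            node = node.children.setdefault(symbol, DiscriminationTree())
        node.terms.append(term)

    def candidates(self):
        """Every term indexed at or below this node."""
        found = list(self.terms)
        for child in self.children.values():
            found.extend(child.candidates())
        return found
```

Real discrimination trees additionally walk the trie against a query term, skipping whole subterms where the index holds a variable; that traversal is what makes retrieval of unifiable terms, instances, and generalisations fast.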
Classic indexing techniques discrimination trees substitution trees path indexing Substitution trees outperform path indexing, discrimination tree indexing, and abstraction trees. A discrimination tree term index stores its information in a trie data structure. Indexing techniques used in logic programming First-argument indexing is the most common strategy where the first argument is used as index. It distinguishes atomic values and the principal functor of compound terms. Nonfirst argument indexing is a variation of first-argument indexing that uses the same or similar techniques as first-argument indexing on one or more alternative arguments. For instance, if a predicate call uses variables for the first argument, the system may choose to use the second argument as the index instead. Multiargument indexing creates a combined index over multiple instantiated arguments if there is not a sufficiently selective single argument index. Deep indexing is used when multiple clauses use the same principal functor for some argument. It recursively uses the same or similar indexing techniques on the arguments of the compound terms. Trie indexing uses a prefix tree to find applicable clauses. References Further reading P. Graf, Term Indexing, Lecture Notes in Computer Science 1053, 1996 (slightly outdated overview) R. Sekar and I.V. Ramakrishnan and A. Voronkov, Term Indexing, in A. Robinson and A. Voronkov, editors, Handbook of Automated Reasoning, volume 2, 2001 (recent overview) W. W. McCune, Experiments with Discrimination-Tree Indexing and Path Indexing for Term Retrieval, Journal of Automated Reasoning, 9(2), 1992 P. Graf, Substitution Tree Indexing, Proc. of RTA, Lecture Notes in Computer Science 914, 1995 M. Stickel, The Path Indexing Method for Indexing Terms, Tech. Rep. 473, Artificial Intelligence Center, SRI International, 1989 S. Schulz, Simple and Efficient Clause Subsumption with Feature Vector Indexing, Proc. of IJCAR-2004 workshop ESFOR, 2004 A. 
Riazanov and A. Voronkov, Partially Adaptive Code Trees, Proc. JELIA, Lecture Notes in Artificial Intelligence 1919, 2000 H. Ganzinger and R. Nieuwenhuis and P. Nivela, Fast Term Indexing with Coded Context Trees, Journal of Automated Reasoning, 32(2), 2004 A. Riazanov and A. Voronkov, Efficient Instance Retrieval with Standard and Relational Path Indexing, Information and Computation, 199(1–2), 2005 Data structures Logic programming Theorem proving software systems
Term indexing
Mathematics
914
58,658,778
https://en.wikipedia.org/wiki/Sharp%20sand
Sharp sand, also known as grit sand or river sand and as builders' sand, concrete sand, or ASTM C33 when medium or coarse grained, is a gritty, angular-grained sand used in concrete and potting soil mixes or to loosen clay soil, as well as for building projects. It is not cleaned or smoothed to the extent recreational play sand is, and it is useful for drainage. It was used in the production of brass and is now used in the building trade. Sand and gravel ridges known as eskers are a frequently used source. References Sand
Sharp sand
Physics
120
41,999
https://en.wikipedia.org/wiki/Franz%20Mertens
Franz Mertens (20 March 1840 – 5 March 1927) (also known as Franciszek Mertens) was a Polish mathematician. He was born in Schroda in the Grand Duchy of Posen, Kingdom of Prussia (now Środa Wielkopolska, Poland) and died in Vienna, Austria. The Mertens function M(x) is the sum function for the Möbius function, in the theory of arithmetic functions. The Mertens conjecture concerning its growth, which asserted that |M(x)| is bounded by x^(1/2) and would have implied the Riemann hypothesis, is now known to be false (Odlyzko and te Riele, 1985). The Meissel–Mertens constant is analogous to the Euler–Mascheroni constant, but the harmonic series sum in its definition is only over the primes rather than over all integers and the logarithm is taken twice, not just once. Mertens's theorems are three 1874 results related to the density of prime numbers. Erwin Schrödinger was taught calculus and algebra by Mertens. His memory is honoured by the Franciszek Mertens Scholarship granted (from 2017) to those outstanding pupils of foreign secondary schools who wish to study at the Faculty of Mathematics and Computer Science of the Jagiellonian University in Kraków and were finalists of the national-level mathematics, or computer science olympiads, or they have participated in one of the following international olympiads: in mathematics (IMO), computer science (IOI), artificial intelligence (IOAI), astronomy (IAO), astronomy and astrophysics (IOAA), physics (IPhO), linguistics (IOL), European Girls' Mathematical Olympiad (EGMO), European Girls' Olympiad in Informatics (EGOI), Romanian Masters of Mathematics (RMM), Romanian Masters of Informatics (RMI) or International Zhautykov Olympiad (IZhO). 
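For illustration, the Mertens function can be computed directly from its definition; the sketch below (plain Python, not optimized) evaluates the Möbius function by trial division and sums it:

```python
# Sketch computing the Möbius function mu(n) and the Mertens function
# M(x), the summatory function of mu(n) for n <= x.

def mobius(n):
    """mu(n): 0 if n has a squared prime factor, otherwise
    (-1) raised to the number of distinct prime factors of n."""
    result = 1
    p = 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # squared prime factor
            result = -result
        p += 1
    if n > 1:
        result = -result          # one remaining prime factor
    return result

def mertens(x):
    return sum(mobius(n) for n in range(1, x + 1))
```

For small x this also lets one check that |M(x)| stays below x^(1/2); the conjecture's failure only occurs at extremely large (and still not explicitly known) values of x.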
See also Mertens's theorems Cauchy product References External links 1840 births 1927 deaths People from Środa Wielkopolska People from the Province of Posen Mathematicians from the Kingdom of Prussia Polish mathematicians Mathematicians from Austria-Hungary Austrian mathematicians 19th-century German mathematicians 20th-century German mathematicians Humboldt University of Berlin alumni Academic staff of Jagiellonian University Academic staff of the University of Vienna Number theorists
Franz Mertens
Mathematics
479
62,923,402
https://en.wikipedia.org/wiki/Nokia%20Talkman%20520
The Nokia Talkman 520 is a discontinued portable phone. References Talkman 520
Nokia Talkman 520
Technology
19
20,949,019
https://en.wikipedia.org/wiki/Semantic%20role%20labeling
In natural language processing, semantic role labeling (also called shallow semantic parsing or slot-filling) is the process that assigns labels to words or phrases in a sentence that indicate their semantic role in the sentence, such as that of an agent, goal, or result. It serves to find the meaning of the sentence. To do this, it detects the arguments associated with the predicate or verb of a sentence and how they are classified into their specific roles. A common example is the sentence "Mary sold the book to John." The agent is "Mary," the predicate is "sold" (or rather, "to sell"), the theme is "the book," and the recipient is "John." Another example is how "the book belongs to me" would need two labels such as "possessed" and "possessor" and "the book was sold to John" would need two other labels such as theme and recipient, despite these two clauses being similar to "subject" and "object" functions. History In 1968, the first idea for semantic role labeling was proposed by Charles J. Fillmore. His proposal led to the FrameNet project which produced the first major computational lexicon that systematically described many predicates and their corresponding roles. Daniel Gildea (currently at the University of Rochester, previously University of California, Berkeley / International Computer Science Institute) and Daniel Jurafsky (currently teaching at Stanford University, but previously working at the University of Colorado and UC Berkeley) developed the first automatic semantic role labeling system based on FrameNet. The PropBank corpus added manually created semantic role annotations to the Penn Treebank corpus of Wall Street Journal texts. Many automatic semantic role labeling systems have used PropBank as a training dataset to learn how to annotate new sentences automatically. Uses Semantic role labeling is mostly used for machines to understand the roles of words within sentences. 
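The "Mary sold the book to John" example can be written down as the kind of predicate-argument structure an SRL system outputs. The representation below is purely illustrative (hand-labeled, with roles spelled out as names rather than the ARG0/ARG1-style tags a PropBank-trained labeler would emit):

```python
# Hand-labeled predicate-argument structure for the example sentence;
# the dictionary layout is an assumption for illustration, not a
# standard SRL output format.
srl_output = {
    "sentence": "Mary sold the book to John",
    "predicate": "sold",
    "arguments": {
        "agent": "Mary",         # the seller
        "theme": "the book",     # the thing sold
        "recipient": "John",     # the buyer
    },
}

def role_pairs(labeling):
    """Return the (role, phrase) pairs of a labeling, sorted by role."""
    return sorted(labeling["arguments"].items())
```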
This benefits applications similar to Natural Language Processing programs that need to understand not just the words of languages, but how they can be used in varying sentences. A better understanding of semantic role labeling could lead to advancements in question answering, information extraction, automatic text summarization, text data mining, and speech recognition. See also Named entity recognition Lexical semantics Semantic parsing Syntax tree Annotation References External links CoNLL-2005 Shared Task: Semantic Role Labeling Illinois Semantic Role Labeler state of the art semantic role labeling system Demo Preposition SRL: Identifies semantic relations expressed by prepositions Shalmaneser is another state of the art system for assigning semantic predicates and roles. Grammar Computational linguistics Tasks of natural language processing
Semantic role labeling
Technology
534
3,644,820
https://en.wikipedia.org/wiki/Construction%20paper
Construction paper, also known as sugar paper, is coloured cardstock paper. The texture is slightly rough, and the surface is unfinished. Due to the source material, mainly wood pulp, small particles are visible on the paper's surface. It is used for projects or crafts. Etymology The etymology of "sugar paper" lies in its use for making bags to contain sugar. It is related to the "blue paper" used by confectionery bakers in England from the 17th century onwards; for example, in the baking of Regency ratafia cakes (or macaroons). History The term "construction paper" was associated with the material in the early 20th century, although the general process for creating the paper began in the late 19th century when industrialized paper production and synthetic dye technology were combined. Around that time, construction paper was primarily advertised for classroom settings as an effective canvas for supporting multiple drawing media. The paper was made by a machine-oriented process that exposed it to dyes while it was still pulp, resulting in a thorough distribution and brilliance of colour. The primary dyes involved in producing construction paper were abundant until Germany, the main producer of aniline for dyes at the time, became involved in World War I and ceased its exports. The shortage marked a period in which construction paper was created using substitute colouring methods. Dyeing One of the defining features of construction paper is the radiance of its colours. Before the methodology behind construction paper's colouring was introduced, most paper was coloured by pigments and vegetable oil, which had weaker staining capabilities. Synthetic dyes were later developed, which provided a wider range of colours and stronger dyeing strength at lower cost. However, the colours given by synthetic dyes tend to fade over short periods of time, an effect often seen in construction paper, noted by greying colours and brittleness. 
In popular culture The animated television series Blue's Clues and South Park were initially made using construction paper and stop motion. See also Card stock Papermaking Bristol board References Paper Visual arts materials Office equipment
Construction paper
Physics
426