Dataset columns: id (int64, 580 to 79M), url (string, 31 to 175 chars), text (string, 9 to 245k chars), source (string, 1 to 109 chars), categories (string, 160 classes), token_count (int64, 3 to 51.8k).
2,815,642
https://en.wikipedia.org/wiki/Algebra%20i%20Logika
Algebra i Logika (English: Algebra and Logic) is a peer-reviewed Russian mathematical journal founded in 1962 by Anatoly Ivanovich Malcev and published by the Siberian Fund for Algebra and Logic at Novosibirsk State University. An English translation has been published by Springer-Verlag as Algebra and Logic since 1968. The journal publishes papers presented at meetings of the "Algebra and Logic" seminar at Novosibirsk State University and is edited by academician Yury Yershov. It is reviewed cover-to-cover in Mathematical Reviews and Zentralblatt MATH. Abstracting and indexing Algebra i Logika is indexed and abstracted in several databases. According to the Journal Citation Reports, the journal had a 2020 impact factor of 0.753. References External links Algebra i Logika website Algebra and Logic website Algebra journals Academic journals established in 1962 Novosibirsk State University Magazines published in Novosibirsk Russian-language journals Mathematical logic journals
Algebra i Logika
Mathematics
204
11,360,576
https://en.wikipedia.org/wiki/International%20Journal%20of%20Wireless%20Information%20Networks
The International Journal of Wireless Information Networks is a quarterly peer-reviewed scientific journal covering research on wireless networks, including sensor networks, mobile ad hoc networks, wireless personal area networks, wireless LANs, indoor positioning systems, wireless health, body area networking, cyber-physical systems, and RFID techniques. The journal is abstracted and indexed in Scopus. References External links Wireless networking Computer science journals Engineering journals Academic journals established in 1994 Springer Science+Business Media academic journals Quarterly journals English-language journals
International Journal of Wireless Information Networks
Technology,Engineering
101
1,101,069
https://en.wikipedia.org/wiki/List%20of%20unsolved%20problems%20in%20computer%20science
This article is a list of notable unsolved problems in computer science. A problem in computer science is considered unsolved when no solution is known or when experts in the field disagree about proposed solutions. Computational complexity P versus NP problem What is the relationship between BQP and NP? NC = P problem NP = co-NP problem P = BPP problem P = PSPACE problem L = NL problem PH = PSPACE problem L = P problem L = RL problem Unique games conjecture Is the exponential time hypothesis true? Is the strong exponential time hypothesis (SETH) true? Do one-way functions exist? Is public-key cryptography possible? Log-rank conjecture Polynomial versus nondeterministic-polynomial time for specific algorithmic problems Can integer factorization be done in polynomial time on a classical (non-quantum) computer? Can the discrete logarithm be computed in polynomial time on a classical (non-quantum) computer? Can the shortest vector of a lattice be computed in polynomial time on a classical or quantum computer? Can the graph isomorphism problem be solved in polynomial time on a classical computer? Is graph canonization polynomial-time equivalent to the graph isomorphism problem? Can leaf powers and k-leaf powers be recognized in polynomial time? Can parity games be solved in polynomial time? Can the rotation distance between two binary trees be computed in polynomial time? Can graphs of bounded clique-width be recognized in polynomial time? Can one find a simple closed quasigeodesic on a convex polyhedron in polynomial time? Can a simultaneous embedding with fixed edges for two given graphs be found in polynomial time? Can the square-root sum problem be solved in polynomial time in the Turing machine model? Other algorithmic problems The dynamic optimality conjecture: Do splay trees have a bounded competitive ratio? Can a depth-first search tree be constructed in NC? Can the fast Fourier transform be computed in o(n log n) time? What is the fastest algorithm for multiplication of two n-digit numbers? What is the lowest possible average-case time complexity of Shellsort with a deterministic fixed gap sequence? Can 3SUM be solved in strongly sub-quadratic time, that is, in O(n^(2−ε)) time for some ε > 0? (The classical quadratic baseline is sketched at the end of this article.) Can the edit distance between two strings of length n be computed in strongly sub-quadratic time? (This is only possible if the strong exponential time hypothesis is false.) Can X + Y sorting be done in o(n² log n) time? What is the fastest algorithm for matrix multiplication? Can all-pairs shortest paths be computed in strongly sub-cubic time, that is, in O(n^(3−ε)) time for some ε > 0? Can the Schwartz–Zippel lemma for polynomial identity testing be derandomized? Does linear programming admit a strongly polynomial-time algorithm? (This is problem #9 in Smale's list of problems.) How many queries are required for envy-free cake-cutting? What is the algorithmic complexity of the minimum spanning tree problem? Equivalently, what is the decision tree complexity of the MST problem? The optimal algorithm to compute MSTs is known, but it relies on decision trees, so its complexity is unknown. Gilbert–Pollak conjecture: Is the Steiner ratio of the Euclidean plane equal to √3/2? Programming language theory Barendregt–Geuvers–Klop conjecture Other problems Is the Aanderaa–Karp–Rosenberg conjecture true? Černý conjecture: If a deterministic finite automaton with n states has a synchronizing word, must it have one of length at most (n − 1)²? 
Generalized star-height problem: Can all regular languages be expressed using generalized regular expressions with a limited nesting depth of Kleene stars? Separating words problem: How many states are needed in a deterministic finite automaton that behaves differently on two given strings of length n? What is the Turing completeness status of all unique elementary cellular automata? The problem of determining all positive integers n such that the concatenation of n and n + 1 in a given base uses at most a fixed number of distinct characters, as well as many other problems in coding theory, are also unsolved problems in mathematics. References External links Open problems around exact algorithms by Gerhard J. Woeginger, Discrete Applied Mathematics 156 (2008) 397–405. The RTA list of open problems – open problems in rewriting. The TLCA List of Open Problems – open problems in the area of typed lambda calculus. Computer Science
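For context on one of the bounds above, the classical quadratic-time 3SUM baseline is sketched below (a minimal illustration of the known O(n²) algorithm, not a resolution of the open question, which asks whether O(n^(2−ε)) is achievable):

```python
def has_three_sum(nums: list[int]) -> bool:
    """Classical O(n^2) 3SUM: sort, then for each pivot use two pointers
    to search for a pair summing to the pivot's negation."""
    a = sorted(nums)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return True
            if s < 0:
                lo += 1
            else:
                hi -= 1
    return False

print(has_three_sum([-5, 1, 4, 2, 9]))   # True: -5 + 1 + 4 == 0
print(has_three_sum([1, 2, 3]))          # False
```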
List of unsolved problems in computer science
Mathematics
893
6,884,839
https://en.wikipedia.org/wiki/GQM
GQM, the initialism for goal, question, metric, is an established goal-oriented approach to software metrics used to improve and measure software quality. History GQM was promoted by Victor Basili of the University of Maryland, College Park and the Software Engineering Laboratory at the NASA Goddard Space Flight Center after he supervised a Ph.D. thesis by Dr. David M. Weiss. Dr. Weiss' work was inspired by the work of Albert Endres at IBM Germany. Method GQM defines a measurement model on three levels: 1. Conceptual level (Goal) A goal is defined for an object, for a variety of reasons, with respect to various models of quality, from various points of view, and relative to a particular environment. 2. Operational level (Question) A set of questions is used to define models of the object of study and then focuses on that object to characterize the assessment or achievement of a specific goal. 3. Quantitative level (Metric) A set of metrics, based on the models, is associated with every question in order to answer it in a measurable way. GQM stepwise Another interpretation of the procedure is: Planning Definition Data collection Interpretation Sub-steps Sub-steps are needed for each phase. To complete the definition phase, an eleven-step procedure is proposed: Define measurement goals Review or produce software process models Conduct GQM interviews Define questions and hypotheses Review questions and hypotheses Define metrics Check metrics for consistency and completeness Produce GQM plan Produce measurement plan Produce analysis plan Review plans Recent developments The GQM+Strategies approach was developed by Victor Basili and a group of researchers from the Fraunhofer Society. It is based on the Goal Question Metric paradigm and adds the capability to create measurement programs that ensure alignment between business goals and strategies, software-specific goals, and measurement goals. Novel applications of GQM to business data have also been described, and GQM is used specifically in the software engineering areas of quality assurance and testing. Further reading Victor R. Basili's contributions to software quality (IEEE Software, 2006) Solingen/Berghout: The Goal/Question/Metric Method: A Practical Guide for Quality Improvement of Software Development (PDF, 2015) See also Software quality References Software metrics Software quality
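As an illustration of the three-level model, here is a minimal, hypothetical Python sketch (the class and field names are illustrative, not part of any GQM standard) showing how a goal, its questions, and their metrics can be linked so that every metric traces back to a goal:

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str          # quantitative level: what is measured
    unit: str

@dataclass
class Question:
    text: str          # operational level: characterizes the goal
    metrics: list[Metric] = field(default_factory=list)

@dataclass
class Goal:
    purpose: str       # conceptual level: object, reason, environment
    viewpoint: str
    questions: list[Question] = field(default_factory=list)

# Hypothetical example: a reliability goal traced down to one metric.
goal = Goal(
    purpose="Improve post-release reliability of module X",
    viewpoint="project manager",
    questions=[
        Question(
            text="How many defects are reported per release?",
            metrics=[Metric(name="defects_per_release", unit="count")],
        )
    ],
)

# Every metric is reachable only through a question and a goal,
# which mirrors the GQM top-down derivation.
for q in goal.questions:
    for m in q.metrics:
        print(goal.purpose, "->", q.text, "->", m.name)
```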
GQM
Mathematics,Engineering
471
5,950,276
https://en.wikipedia.org/wiki/Aerostatics
A subfield of fluid statics, aerostatics is the study of gases that are not in motion with respect to the coordinate system in which they are considered. The corresponding study of gases in motion is called aerodynamics. Aerostatics studies density allocation, especially in air. One application of this is the barometric formula. An aerostat is a lighter-than-air craft, such as an airship or balloon, which uses the principles of aerostatics to float. Basic laws Treatment of the equations of gaseous behaviour at rest is generally taken, as in hydrostatics, to begin with a consideration of the general equations of momentum for fluid flow, which can be expressed as: $\frac{\partial(\rho u_j)}{\partial t} + \frac{\partial(\rho u_i u_j)}{\partial x_i} = -\frac{\partial \tau_{ij}}{\partial x_i} - \frac{\partial p}{\partial x_j} + \rho g_j$, where $\rho$ is the mass density of the fluid, $u$ is the instantaneous velocity, $p$ is fluid pressure, $g$ are the external body forces acting on the fluid, and $\tau$ is the momentum transport coefficient. As the fluid's static nature mandates that $u = 0$, and that $\tau = 0$, the following set of partial differential equations representing the basic equations of aerostatics is found: $\frac{\partial p}{\partial x_j} = \rho g_j$. However, the presence of a non-constant density as is found in gaseous fluid systems (due to the compressibility of gases) requires the inclusion of the ideal gas law: $p = \rho R_s T$, where $R_s$ denotes the specific gas constant (the universal gas constant divided by the molar mass) and $T$ the temperature of the gas, in order to render the valid aerostatic partial differential equations: $\frac{\partial p}{\partial x_j} = \frac{p}{R_s T} g_j$, which can be employed to compute the pressure distribution in gases whose thermodynamic states are given by the equation of state for ideal gases (a numerical illustration is given below). Fields of study Atmospheric pressure fluctuation Composition of mountain air Cross-section of the atmosphere Gas density Gas diffusion in soil Gas pressure Kinetic theory of gases Partial pressures in gas mixtures Pressure measurement See also Aeronautics References Fluid mechanics Aerodynamics
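To illustrate how these relations are used, the sketch below (a minimal example assuming an isothermal atmosphere and constant gravity; the function and constant names are illustrative) integrates dp/dz = −ρg together with p = ρ R_s T to obtain the exponential barometric formula mentioned above:

```python
import math

R_S = 287.05   # specific gas constant for dry air, J/(kg*K)
G = 9.81       # gravitational acceleration, m/s^2

def pressure_at_altitude(p0: float, z: float, temp_k: float) -> float:
    """Isothermal barometric formula: dp/dz = -rho*g with p = rho*R_s*T
    integrates to p(z) = p0 * exp(-g*z / (R_s*T))."""
    return p0 * math.exp(-G * z / (R_S * temp_k))

# Hypothetical check: sea-level pressure 101325 Pa, 15 degrees C, altitude 5 km.
print(pressure_at_altitude(101325.0, 5000.0, 288.15))  # ~ 56 kPa
```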
Aerostatics
Chemistry,Engineering
350
7,361,848
https://en.wikipedia.org/wiki/Black%20swan%20theory
The black swan theory or theory of black swan events is a metaphor that describes an event that comes as a surprise, has a major effect, and is often inappropriately rationalized after the fact with the benefit of hindsight. The term is based on a Latin expression which presumed that black swans did not exist. The expression was used until around 1697 when Dutch mariners saw black swans living in Australia. After this, the term was reinterpreted to mean an unforeseen and consequential event. The reinterpreted theory was developed by Nassim Nicholas Taleb, starting in 2001, to explain: The disproportionate role of high-profile, hard-to-predict, and rare events that are beyond the realm of normal expectations in history, science, finance, and technology. The non-computability of the probability of consequential rare events using scientific methods (owing to the very nature of small probabilities). The psychological biases that blind people, both individually and collectively, to uncertainty and to the substantial role of rare events in historical affairs. Taleb's "black swan theory" (which differs from the earlier philosophical versions of the problem) refers only to statistically unexpected events of large magnitude and consequence and their dominant role in history. Such events, considered extreme outliers, collectively play vastly larger roles than regular occurrences. More technically, in the scientific monograph "Silent Risk", Taleb mathematically defines the black swan problem as "stemming from the use of degenerate metaprobability". Background The phrase "black swan" derives from a Latin expression; its oldest known occurrence is from the 2nd-century Roman poet Juvenal's characterization in his Satire VI of something being "rara avis in terris nigroque simillima cygno" ("a bird as rare upon the earth as a black swan"). When the phrase was coined, the black swan was presumed by Romans not to exist. The importance of the metaphor lies in its analogy to the fragility of any system of thought. A set of conclusions is potentially undone once any of its fundamental postulates is disproved. In this case, the observation of a single black swan would be the undoing of the logic of any system of thought, as well as any reasoning that followed from that underlying logic. Juvenal's phrase was a common expression in 16th-century London as a statement of impossibility. The London expression derives from the Old World presumption that all swans must be white because all historical records of swans reported that they had white feathers. In that context, a black swan was impossible or at least nonexistent. However, in 1697, Dutch explorers led by Willem de Vlamingh became the first Europeans to see black swans, in Western Australia. The term subsequently metamorphosed to connote the idea that a perceived impossibility might later be disproved. Taleb notes that in the 19th century, John Stuart Mill used the black swan logical fallacy as a new term to identify falsification. Black swan events were discussed by Taleb in his 2001 book Fooled by Randomness, which concerned financial events. His 2007 book The Black Swan extended the metaphor to events outside financial markets. Taleb regards almost all major scientific discoveries, historical events, and artistic accomplishments as "black swans": undirected and unpredicted. He gives the rise of the Internet, the personal computer, World War I, the dissolution of the Soviet Union, and the September 11, 2001 attacks as examples of black swan events. 
Taleb asserts:What we call here a Black Swan (and capitalize it) is an event with the following three attributes. First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Second, it carries an extreme 'impact'. Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable. I stop and summarize the triplet: rarity, extreme 'impact', and retrospective (though not prospective) predictability. A small number of Black Swans explains almost everything in our world, from the success of ideas and religions, to the dynamics of historical events, to elements of our own personal lives. Identifying Based on the author's criteria: The event is a surprise (to the observer). The event has a major effect. After the first recorded instance of the event, it is rationalized by hindsight, as if it could have been expected; that is, the relevant data were available but unaccounted for in risk mitigation programs. The same is true for the personal perception by individuals. According to Taleb, the COVID-19 pandemic was not a black swan, as it was expected with great certainty that a global pandemic would eventually take place. Instead, it is considered a white swan—such an event has a major effect, but is compatible with statistical properties. Coping with black swans The practical aim of Taleb's book is not to attempt to predict events which are unpredictable, but to build robustness against negative events while still exploiting positive events. Taleb contends that banks and trading firms are very vulnerable to hazardous black swan events and are exposed to unpredictable losses. On the subject of business, and quantitative finance in particular, Taleb critiques the widespread use of the normal distribution model employed in financial engineering, calling it a Great Intellectual Fraud. Taleb elaborates the robustness concept as a central topic of his later book, Antifragile: Things That Gain From Disorder. In the second edition of The Black Swan, Taleb provides "Ten Principles for a Black-Swan-Robust Society". Taleb states that a black swan event depends on the observer. For example, what may be a Black Swan surprise for a turkey is not a Black Swan surprise to its butcher; hence the objective should be to "avoid being the turkey" by identifying areas of vulnerability to "turn the Black Swans white". Epistemological approach Taleb claims that his black swan is different from the earlier philosophical versions of the problem, specifically in epistemology (as associated with David Hume, John Stuart Mill, Karl Popper, and others), as it concerns a phenomenon with specific statistical properties which he calls, "the fourth quadrant". Taleb's problem is about epistemic limitations in some parts of the areas covered in decision making. These limitations are twofold: philosophical (mathematical) and empirical (human-known) epistemic biases. The philosophical problem is about the decrease in knowledge when it comes to rare events because these are not visible in past samples and therefore require a strong a priori (extrapolating) theory; accordingly, predictions of events depend more and more on theories when their probability is small. In the "fourth quadrant", knowledge is uncertain and consequences are large, requiring more robustness. 
According to Taleb, thinkers who came before him who dealt with the notion of the improbable (such as Hume, Mill, and Popper) focused on the problem of induction in logic, specifically, that of drawing general conclusions from specific observations. The central and unique attribute of Taleb's black swan event is that it is high-impact. His claim is that almost all consequential events in history come from the unexpected, yet humans later convince themselves that these events are explainable in hindsight. One problem, labeled the ludic fallacy by Taleb, is the belief that the unstructured randomness found in life resembles the structured randomness found in games. This stems from the assumption that the unexpected may be predicted by extrapolating from variations in statistics based on past observations, especially when these statistics are presumed to represent samples from a normal distribution. These concerns are often highly relevant in financial markets, where major players sometimes assume normal distributions when using value at risk models, although market returns typically have fat tail distributions. Taleb said: I don't particularly care about the usual. If you want to get an idea of a friend's temperament, ethics, and personal elegance, you need to look at him under the tests of severe circumstances, not under the regular rosy glow of daily life. Can you assess the danger a criminal poses by examining only what he does on an ordinary day? Can we understand health without considering wild diseases and epidemics? Indeed the normal is often irrelevant. Almost everything in social life is produced by rare but consequential shocks and jumps; all the while almost everything studied about social life focuses on the 'normal,' particularly with 'bell curve' methods of inference that tell you close to nothing. Why? Because the bell curve ignores large deviations, cannot handle them, yet makes us confident that we have tamed uncertainty. Its nickname in this book is GIF, Great Intellectual Fraud. More generally, decision theory, which is based on a fixed universe or a model of possible outcomes, ignores and minimizes the effect of events that are "outside the model". For instance, a simple model of daily stock market returns may include extreme moves such as Black Monday (1987), but might not model the breakdown of markets following the September 11, 2001 attacks. Consequently, the New York Stock Exchange and Nasdaq exchange remained closed until September 17, 2001, the most protracted shutdown since the Great Depression. A fixed model considers the "known unknowns", but ignores the "unknown unknowns", made famous by a statement of Donald Rumsfeld. The term "unknown unknowns" appeared in a 1982 New Yorker article on the aerospace industry, which cites the example of metal fatigue, the cause of crashes in Comet airliners in the 1950s. Deterministic chaotic dynamics reproducing black swan events have been studied in economics. This agrees with Taleb's comment that some distributions, such as fractal, power law, or scalable distributions, cannot be used with precision but are more descriptive, and that awareness of them might help to temper expectations. Beyond this, Taleb emphasizes that many events simply are without precedent, undercutting the basis of this type of reasoning altogether. Taleb also argues for the use of counterfactual reasoning when considering risk. See also Subjective probability References Bibliography The U.S. 
response to NEOs – avoiding a black swan event External links Finance Epistemological theories Metatheory of science Theory Metaphors referring to birds Nassim Nicholas Taleb
Black swan theory
Biology
2,185
34,484,092
https://en.wikipedia.org/wiki/Skew%20gradient
In mathematics, a skew gradient of a harmonic function over a simply connected domain with two real dimensions is a vector field that is everywhere orthogonal to the gradient of the function and that has the same magnitude as the gradient. Definition The skew gradient can be defined using complex analysis and the Cauchy–Riemann equations. Let $f(x + iy) = u(x,y) + i v(x,y)$ be a complex-valued analytic function, where u, v are real-valued scalar functions of the real variables x, y. A skew gradient is defined as: $\nabla^\perp u = \nabla v$, and from the Cauchy–Riemann equations, it is derived that $\nabla^\perp u = \left(-\frac{\partial u}{\partial y}, \frac{\partial u}{\partial x}\right)$. Properties The skew gradient has two interesting properties. It is everywhere orthogonal to the gradient of u, and of the same length: $\nabla u \cdot \nabla^\perp u = 0$ and $|\nabla u| = |\nabla^\perp u|$. References Peter Olver, Introduction to Partial Differential Equations, ch. 7, p. 232 Differential calculus Generalizations of the derivative Linear operators in calculus Vector calculus
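A quick numeric check of these two properties (a minimal sketch; the example u = x² − y², v = 2xy, i.e. f(z) = z², is chosen for illustration):

```python
import numpy as np

def grad_u(x, y):
    # u(x, y) = x^2 - y^2, so grad u = (2x, -2y)
    return np.array([2 * x, -2 * y])

def skew_grad_u(x, y):
    # Rotate grad u by 90 degrees: (-u_y, u_x), which equals grad v = (2y, 2x)
    gx, gy = grad_u(x, y)
    return np.array([-gy, gx])

g = grad_u(1.5, 0.5)
s = skew_grad_u(1.5, 0.5)
print(np.dot(g, s))                          # 0.0: everywhere orthogonal
print(np.linalg.norm(g), np.linalg.norm(s))  # equal magnitudes
```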
Skew gradient
Mathematics
175
26,993,190
https://en.wikipedia.org/wiki/Emotional%20or%20behavioral%20disability
An emotional or behavioral disability is a disability that impacts a person's ability to effectively recognize, interpret, control, and express fundamental emotions. The Individuals with Disabilities Education Act of 2004 characterizes this group of disabilities as Emotional Disturbance (ED). This term is controversial, as it is seen by some as excluding, or even discriminating against, students with behavioral issues while focusing solely on emotional aspects. Characteristics This group of disabilities is particularly difficult to classify, as generalizations may lead to some students who do not fit specific diagnostic criteria, but are still disabled, not being determined eligible for special education services. Broadly, the group can be broken down into internal behaviors, external behaviors and low incidence behaviors. Internal behaviors are observed in students who are depressed, withdrawn and anxious. External behaviors are seen in students who are aggressive and act out. Such behavior would be classified as Disruptive Behavioral Disorder (DBD). Low incidence behaviors are behaviors that occur only in response to particular environmental triggers, such as a specific person or phrase. Note that some students exhibit behaviors from only one category, while others exhibit a mix. Services Students with an ED often have an early diagnosis among school districts. This is because teachers initiate the referral process over concerns about behavior in class. Often, the DSM-IV is used by a school psychologist, who may conduct interviews and distribute surveys as part of the social-emotional evaluation. When determined to have an ED, the student will receive an Individualized Education Plan. Students can also receive certain supports under the Rehabilitation Act of 1973, referred to as a 504 plan. This often includes goals towards appropriate behavior, productive coping strategies and academic skills. Effective services should focus on these, and can mandate an educational assistant for support in regular education classes, access to a resource room for individualized instruction, medication management provided by a mental health professional, as well as individual counseling. Students with ED are often considered at risk for dropping out of school, suicide and criminal activity, as well as for also being diagnosed with a learning disability. Nonetheless, with the appropriate supports in place, students with ED have been shown to have enormous potential to succeed. See also Bipolar disorder Learning disability Resource room Special education References External links Emotional Disturbances via the IDEA. Students with Emotional Disturbance: Eligibility and Characteristics Intellectual disability Emotion Emotional issues
Emotional or behavioral disability
Biology
459
22,036,768
https://en.wikipedia.org/wiki/Process%20duct%20work
Process duct work conveys large volumes of hot, dusty air from processing equipment to mills, baghouses, and other process equipment. Process duct work may be round or rectangular. Although round duct work costs more to fabricate than rectangular duct work, it requires fewer stiffeners and is favored in many applications over rectangular ductwork. The air in process duct work may be at ambient conditions or may operate at up to about 1000 °F. Process ductwork varies in size from 2 ft diameter to 20 ft diameter, or to perhaps 20 ft by 40 ft rectangular. Large process ductwork may fill with dust, depending on slope, to up to 30% of cross section, which can weigh 2 to 4 tons per linear foot. Round ductwork is subject to duct suction collapse and requires stiffeners to minimize this, but is more efficient in material than rectangular duct work. There are no comprehensive design references for process duct work design. The ASCE reference for power plant duct design gives some general guidance, but does not give designers sufficient information to design process duct work. Structural process ductwork Structural process ductwork carries large volumes of high temperature, dusty air between pieces of process equipment. The design of this ductwork requires an understanding of the interaction of heat softening of metals, the potential effects of dust buildup in large ductwork, and structural design principles. There are two basic shapes for structural process ductwork: rectangular and round. Rectangular ductwork is covered by the ASCE "The Structural Design of Air & Gas Ducts for Process Power Stations and Industrial Applications". In the practical design of primarily round structural process ductwork in the cement, lime and lead industries, the duct size involved ranges from 18 inches (45 cm) to 30 feet (10 m). The air temperature may vary from ambient to 1000 °F (538 °C). Process ductwork is subject to large loads due to dust buildup, fan suction pressure, wind, and earthquake forces. 30 ft diameter process ductwork may cost $7,000 per ton. Failure to properly integrate design forces may lead to catastrophic duct collapse. Overdesign of ductwork is expensive. Round and rectangular duct structural design The structural design of ductwork plate is based on buckling of the plate element. Round ductwork plate design is based on diameter-to-plate-thickness ratios, and the allowable stresses are contained in multiple references such as US Steel Plate, ASME/ANSI STS-1, SMACNA, Tubular Steel Structures, and other references. In actuality, round ductwork in bending is approximately 30% stronger than a similar shape in compression; however, one uses the same allowable stresses in bending as for compression. Round ducts require typical stiffeners at roughly 3-diameter spacing, or roughly 20 ft on center, for wind ovaling, fabrication, and truck shipping requirements. Larger round ducts (1/4" plate) require support ring stiffeners. Smaller-diameter ducts may not require support ring stiffeners, but may be designed with saddle supports. When stiffener rings are required they are traditionally designed based on "Roark", although this reference is quite conservative. Round duct elbow allowable stresses are lower than the allowable stresses for straight duct by a K factor = 1.65 / h^(2/3), where h = t(duct) × R(elbow) / r(duct)². This equation, or similar equations, is found in Tubular Steel Structures, section 9.9. 
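To make the elbow softening factor concrete, here is a minimal Python sketch (function and argument names are assumptions for illustration; the 1.65 / h^(2/3) expression is the one cited above from Tubular Steel Structures):

```python
def elbow_flexibility_factor(t_duct: float, r_elbow: float, r_duct: float) -> float:
    """Beskin-type reduction factor K = 1.65 / h^(2/3),
    with h = t * R_elbow / r_duct^2 (all lengths in consistent units)."""
    h = t_duct * r_elbow / r_duct**2
    return 1.65 / h ** (2.0 / 3.0)

# Hypothetical 10 ft diameter duct: r = 60 in, 1/4" plate, elbow radius 1.5 * D = 180 in.
k = elbow_flexibility_factor(t_duct=0.25, r_elbow=180.0, r_duct=60.0)
print(round(k, 1))  # ~ 31: effective elbow stiffness I/K is far below the straight duct's
```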
Rectangular ductwork design properties are based on width-to-thickness ratios. This is simplified, normally to an effective plate width of 16t from the corner elements or corner angle stiffeners, although in reality the entire duct top and side plate does participate somewhat in duct section properties. Duct logic Duct logic is the process of planning for duct thermal movement, combined with planning to minimize duct dust dropout. Ducts move with changes in internal temperature. Ducts are assumed to have the same temperature as their internal gases, which may be up to 900 °F. If the internal duct temperature exceeds 1000 °F, refractory lining is used to minimize the duct surface temperature. At 1000 °F, ducts may grow approximately 5/8 inch per 10 feet of length. This movement must be carefully planned for, with cloth (or metal) expansion joints at each equipment flange, and one joint per each straight section of ductwork. Sloping ductwork at or above the dust angle of repose will minimize dust buildup. Therefore, many ducts carrying high dust loads slope at 30 degrees or steeper. Duct elbow geometry To minimize pressure loss in duct elbows, the typical elbow radius is 1 1/2 times the duct diameter. In cases where this elbow radius is not feasible, turning vanes are added to the duct. Duct transition and elbow layout Process ductwork is often large (6-foot diameter to 18-foot diameter), carrying large volumes of hot, dirty gases at velocities of 3000 to 4500 feet per minute. The fans used to move these gases are also large, 250 to 4000 horsepower. Therefore, minimizing duct pressure drop by minimizing turbulence at elbows and transitions is important. Duct elbow radius is usually 1 1/2 to 2 times the duct size. The side slopes of transitions are typically 10 to 30 degrees. Note: the duct gas velocity is chosen to minimize duct dust dropout. Cement and lime plant duct velocity at normal operations is 3000 to 3200 feet per minute; lead plant velocities are 4000 to 4500 feet per minute, as the dust is heavier. Other industries, such as grain, have lower gas velocities. Higher duct gas velocity may require more powerful fans than lower duct velocities. Duct support types Fixed supports typically are designed to resist lateral movement of the duct. Depending on the support geometry, fixed supports may also resist rotation of the duct at the support. Sliding supports are typically supported on Teflon (or other material) pads, isolated from the duct so that temperature and dust do not damage the sliding surface. Link supports are often "bents", or braced frames, down from the duct support ring (frame) to a foundation or support plane. If the bent is long enough, hinges are not required to allow for duct thermal growth. Rod or hanger supports are similar to link supports, but due to the flexibility of the rods they are easier to design and detail. Guide supports are often rings inside a structural frame, with angle guides, that allow the duct to grow vertically while restraining the duct laterally for wind loads. Unusual "support" conditions (details): hinges at expansion joints; tension ties across dual fixed supports; designs that allow duct elbows to flex under unusual support conditions; other unusual design models. 
Duct design loads For cement plant and lime plant process ductwork, duct loads are a combination of: 1) Duct dead loads: often simplified (in cement plant usage) by using the duct plate weight multiplied by 1.15 as a stiffener allowance, as duct stiffeners usually weigh less than 15% of the duct plate weight. The duct stiffener allowance for rectangular power plant ductwork may be 50% to 100% of the duct plate weight. 2) Duct internal dust loads (bottom of duct): these vary considerably with duct slope. These loads need to be approved by the client, but are often used as follows: For ducts sloping 0 degrees to 30 degrees, duct internal dust is 25% of duct cross section. For ducts sloping 30 degrees to 45 degrees, duct dust loads are reduced to 15% of cross section, plus internal duct coating loads. For ducts sloping 45 degrees to 85 degrees, duct internal dust is 5% of duct cross section, plus internal duct coating loads. For ducts sloping over 85 degrees, only an internal coating of dust is assumed. Because of the potential for high dust loading, most process ductwork is run at a 30 to 45 degree slope. 2a) Dust loading in non-process ducts (2-foot diameter and smaller), such as conveyor venting ducts, which are sometimes run horizontally: these can be filled to 100% of cross section. 2b) Power plant internal duct dust loads are coordinated with the client, and are sometimes used at 1 to 2 feet of internal ash loading. 3) Duct internal coating dust loads, which are sometimes used as a 2" (50 mm) coating of dust on the internal perimeter. 4) Duct suction pressure loads. Most process ducts have design pressures of 25 inches (600 mm) to 40 inches (1000 mm) of water pressure. This suction pressure operates to cause suction pressure collapse of the duct side walls. Also, this pressure operates perpendicular to the duct expansion joints to create an additional load on the duct supports that adds to dead and live loads. Please note: duct pressure loads vary with temperature, as the gas density varies with temperature. A duct pressure of 25 inches of water at room temperature may become 12 inches to 6 inches at duct operating temperatures. 5) Duct wind loads. 6) Duct seismic loads. 7) Duct snow loads, normally inconsequential, as snow will melt quickly unless the plant is in shutdown mode. 8) Top-of-duct dust loads, often used as zero, since plant dust generation is much less now than in the past. 9) Duct suction pressure loads act perpendicular to the end of the duct cross section, and can be significant. For a duct designed for 25" of water at a startup temperature of 70 degrees F, on an 8-foot diameter duct, this is equal to 8000 pounds at each end of the duct. Round ductwork The majority of cement plant process ductwork is round. This is because the round duct shape does not bend between circumferential stiffeners. Therefore, bending stiffeners are not required, and round ductwork requires fewer and lighter intermediate stiffeners than rectangular ductwork. Round cement plant duct stiffeners are sometimes about 5% of duct plate weight. Rectangular cement plant duct stiffeners are 15 to 20% of duct plate weight. Power plant ductwork is often larger. Power plant ductwork is usually rectangular, with stiffener weights of 50% (or more) of duct plate weight (this is based on personal experience, and may vary with loads, duct size, and industry standards). Large, round process ductwork is usually fabricated from 1/4-inch (6 mm) mild steel plate, with ovaling stiffening rings at 15 to 20 ft (5 to 6 m) on center, regardless of diameter. 
These lengths allow for resistance to wind ovaling and resistance to out-of-round distortion when shipping by truck. This also works well with fabricator equipment. The typical intermediate rings are designed for wind bending stresses, reduced as required by the yield stress reduction at working temperatures. The typical rings are fabricated from rolled steel plate, angles or tees welded together to create the ring cross section required. Rings are fabricated from any combination of plate, tee or W shape that the shop can roll. Rings are usually mild carbon steel, ASTM A36 plate, or equivalent. The location of ring butt welds should preferably be offset roughly 15 degrees from the point of maximum stress to minimize the effect of weld porosity on weld allowable stress. See US Steel Plate, volume II, for empirical ring spacing and wind bending stress: Spacing = Ls = 60 sqrt[Do (ft) × t plate (in) / wind pressure (psf)]; Section = p × L (spacing, ft) × Do (ft) × Do (ft) / Fb (20,000 psi at ambient temperature). (A numeric sketch of the spacing formula is given below.) This reference is older, but a good starting point for duct design. SMACNA (2nd edition), chapter 4, has many useful formulas for round ducts, allowable stresses, ring spacing, and the effects of dust, ice, and live loads. The basic factor of safety in SMACNA, 3, is larger than the 1.6 typically used on structural engineering projects. Under SMACNA the critical ring spacing is L = 1.25 × D (ft) sqrt(D (ft) / t (inches)), which is similar to Tubular Steel Structures, L = 3.13 × R sqrt(R / t). In effect, using Spacing = 60 sqrt[Do (ft) × t plate (in) / wind pressure (psf)] is conservative. Allowable bending and compression stresses in ducts can come from several sources. See API 560 for the design of wind ovaling stiffeners. See Tubular Steel Structures, chapters 2, 9 and 12, for the allowable stresses for thin round ducts, elbows, elbow softening coefficients, and some procedures for the design of duct support rings. These allowable stresses can be verified with selected review of chapters of US Steel Plate, Blodgett Design of Plate Structures, Roark & Young, or API 650. Round duct support rings are spaced often at three diameters, or as required at up to about 50 ft (15 m) centers. At this spacing the main support rings are designed for the sum of suction pressure stresses and support bending moments. Round ductwork allowable compressive stress = 662 / (d/t) + 339 × Fy (Tubular Steel Structures, chapter 2); other references use similar equations. Typical cement plant ductwork pressure drops are as follows: 60% to 80% of high temperature process duct work pressure drop occurs in the process equipment: baghouses, mills and cyclones. Since one motor horsepower costs roughly $1,000/year (US$, 2005), duct efficiency is important. Minimizing duct pressure drop can reduce plant operating costs. Most non-equipment ductwork pressure drop occurs at transitions and changes of direction (elbows). The best way to minimize duct pressure drop, and so to minimize plant operating costs, is to use elbows with an elbow radius to duct diameter ratio exceeding 1.5. (For a 15-foot duct, the elbow radius would therefore equal or exceed 22.5 ft.) Process duct pressure drops (US practice) are usually measured in inches of water. A typical duct operates at about −25 inches (160 psf) total suction pressure, with roughly 75% of the pressure loss in the baghouse, 10% lost in duct friction, and 15% (nominal) lost in elbow turbulence. 
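A numeric sketch of the empirical ring-spacing formula quoted above (assumed function name; units as in the formula: diameter in feet, plate thickness in inches, wind pressure in psf):

```python
import math

def ring_spacing_ft(diameter_ft: float, plate_in: float, wind_psf: float) -> float:
    """Empirical stiffener ring spacing, Ls = 60 * sqrt(Do * t / wind pressure),
    per the US Steel Plate reference cited above."""
    return 60.0 * math.sqrt(diameter_ft * plate_in / wind_psf)

# Hypothetical 12 ft diameter duct, 1/4" plate, 30 psf design wind:
print(round(ring_spacing_ft(12.0, 0.25, 30.0), 1))  # ~ 19 ft, consistent with the
# 15 to 20 ft centers quoted for large round process ductwork
```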
A major consideration of duct design is to minimize duct pressure losses and turbulence, as poor duct geometry increases turbulence and increases plant electrical usage. Round duct work suction pressure collapse, in ducts over 6 feet in diameter, is prevented with rings at supports and at roughly 3-diameter centers. Round duct support rings are traditionally designed from the formulas found in Roark & Young. However, this reference is based on point loads on rings, while actual duct ring loads are based on almost uniform bottom dust. Therefore, these formulas can be shown, with RAM or other analysis methods, to have a conservatism factor of roughly 2 above the stresses given in Roark. The duct ring dead, live and dust forces need to be combined with suction pressure stresses. Suction pressure forces concentrate on the rings, as they are the stiffest element present. Round ductwork elbow allowable stresses are reduced due to the elbow curvature. Various references give similar results for this reduction. Tubular Steel Structures, section 9.9, gives the (Beskin) reduction factor K = 1.65 / h^(2/3), where h = t(plate) × R(elbow) / r(duct)² (where suction pressures are smaller). This K reduces the effective moment of inertia of the duct: I effective = I / K. Round duct rings are fabricated from rolled tees, angles, or plates, welded into the shape required. Typically these are designed with ASTM A36 properties. Factors of safety The typical round duct plate factor of safety (traditional factor of safety) should be 1.6, because duct plate bending and buckling are mostly controlled by typical intermediate ring design. The typical intermediate ring factor of safety should be 1.6, because there is ample evidence in various codes (API 560, etc.) that intermediate rings designed for wind ovaling and suction pressure combinations are safe. The typical main support ring factor of safety, if designed by "Roark" formulas, should be 1.6 (if constructed to the Roark normal 1% out-of-round standard tolerance), because it can be shown by various methods that these formulas are at least a factor of two above 3D duct ring analysis results. The typical duct elbow factor of safety should be above 1.6, because it can be difficult to show that shipping out-of-round for elbows corresponds to the normal 1% out-of-round standard tolerance (various code and reference notes). Round structural conveyor tubes Round structural tubes are sometimes used to support and contain conveyors transporting coal, lead concentrate, or other dusty material over county roads, plant access roads, or river barge loading facilities. When tubes are used for these purposes they may be 10'-6" to 12 ft in diameter and up to 250 ft long, using up to 1/2" plate and ovaling ring stiffeners at 8 ft to 20 ft centers. On one such project, my firm added an L8x8x3/4 at the top 45-degree location to stiffen the plate near the point of maximum stress for tubes (per Timoshenko and others). Some vendors provide conveyor galleries for the same purpose. Rectangular ductwork Rectangular cement plant ductwork is often 1/4" (6 mm) duct plate, with stiffeners spaced at about 2'-6", depending on suction pressure and temperature. Thinner plate requires a closer stiffener spacing. The stiffeners are usually considered pinned-end. Power plant ductwork can be 5/16" thick duct plate, with "fixed end" W stiffeners at roughly 2'-5" spacing. Because rectangular duct plate bends, stiffeners are required at reasonably close spacing. 
Duct plate 3/16" or thinner may dishpan or make noise, and should be avoided. Rectangular duct section properties are calculated from the distance between the upper and lower duct corners of the ductwork. The flange areas are based on the size of the corner angles plus a duct plate width based on the plate thickness ratio of 16t (see AISC structural duct design below). For section properties the "web" plate is ignored. The typical stiffener spacing for cement plant duct work is usually based on duct plate bending, M = w × L² / 8. This is because using a fixed-fixed condition requires difficult-to-design plate attachments. Power plant and other larger ductwork usually goes through the expense of creating "fixed end" corner moments. All stiffeners for rectangular ductwork require consideration of lateral-torsional bracing. Effect of temperature on duct yield stress Ducts are usually designed as if the duct plate and stiffener temperatures match the internal duct gas temperatures. For mild carbon steels (ASTM A36), the design yield stress at 300 °F is 84% of the room temperature value. At 500 °F, the design yield stress ratio is 77% of room temperature stress. At 700 °F, the design yield stress ratio is about 71% of room temperature stress. Temperatures above 800 °F may cause mild carbon steel to warp. This is because the crystal lattice structure of mild carbon steel changes at temperatures above about 800 degrees F (reference: US Steel Plate, elevated temperature steels). For ductwork operating above 800 degrees F, the duct plate material should resist warping. Either Cor-Ten or ASTM A304 stainless steel may be used for duct plate between 800 °F and 1200 °F; Cor-Ten plate is less expensive than stainless steel. Cor-Ten steels have essentially the same yield stress ratios as mild carbon steels through 700 °F. At 900 °F, the yield stress ratio is 63%. At 1100 °F, the yield stress ratio is 58% (AISC tables). Cor-Ten steels should not be used above 1100 °F. Unless the duct and its stiffeners are insulated, the stiffeners can be designed in ASTM A36 steels, even at a duct temperature of 1000 °F. This is because the stiffener temperature is cooler than the duct gas temperature by several hundred degrees (F). Duct stiffener temperatures are assumed to drop about 100 °F per inch of depth (when uninsulated) (no reference available). Corrosion and wear resistance Corrosion As the approach to reducing heat loss at plants has changed over the years, ductwork now connects more pieces of equipment than ever before. Care needs to be taken to avoid condensation of moisture in plant ductwork. Once condensation occurs, it may absorb components of the gas stream and become corrosive to low carbon steel. Methods to avoid this problem may include duct insulation; specialty steels, such as Cor-Ten steels, A304 SS, or A316L SS; and duct internal coatings. Duct internal coatings are expensive, and may cost more than the stack plate they protect. Uncoated cement plant stacks, with condensation, have been noted to last less than two years. Sulfuric acid attack may require stainless steel ducts, fiberglass ducts, etc. Wear resistance Many plant exhaust gases contain dusts with high wear potential. Typically, wear resistant steels are not useful in resisting duct wear, particularly at higher temperatures. Wear resistant steel ducts are hard to fabricate, and refractory coatings are usually less expensive than wear resistant steel ductwork. 
Each industry may have a different approach to resisting duct wear. Cement plant clinker dust is more abrasive than sand. In high temperature ducts, or ducts with wear potential, refractory is often anchored to the duct plate with V anchors at roughly 6" on center to resist (a) temperature, (b) wear at elbows, or a combination of these effects. Occasionally, ceramic tiles or ceramic mortars are anchored to ductwork to resist temperature and wear. Grain plant hulls are also very abrasive. Sometimes plastic liners are used to resist wear in grain facilities, where temperatures are lower than in mineral processing facilities. Expansion joint types Duct segments are typically separated with metal or fabric expansion joints. These joints are designed and detailed for the duct suction pressure, temperatures, and movements between duct segments. Fabric joints are often chosen to separate the duct segments because they usually cost 40% less than metal joints. Metal joints also place additional loads onto duct segments: metal joints prefer axial movements and impose significant lateral loads onto duct segments. Fabric joints cost $100 to $200 per square foot of joint (2010); metal joints can cost twice this amount. Fabric expansion joint duct forces are assumed to be 0 lb/inch. Metal expansion joint forces for a 24-inch diameter duct are on the order of 850 lb/inch of movement for the axial spring rate, and 32,500 lb/inch for lateral movement. These coefficients will vary with duct size and joint thickness, and become larger for rectangular ducts (based on one recent job). Fabric expansion joint life is about 5 years under field conditions. Many plants prefer access platforms near the joints for replacing the joint fabric. Finite element software Software is currently available to model ductwork in 3D. This software needs to be used with care, as the design rules for width-to-thickness ratios, elbow softening coefficients, etc., may not be built into the design program. Drawing presentation and dimensioning It is easy to draw ducts in 3D without correct dimensioning. Drawings should be laid out with: work points, with elevations and plan dimensioning; elbow radius, duct diameters, or width and thickness dimensions, and elbow tangent dimensions (true view and plan and elevation views); column grids and dimensions between supports, showing work points. Lack of dimensions in 3D-generated drawings makes drawings hard to follow. Supports need to be coordinated with elevations. Special duct loading conditions Special duct loading conditions may occur outside of dead, live, dust and temperature conditions. Ductwork associated with coal mills, coke grinding facilities, and to some extent grain processing facilities may be subject to explosive dusts. Ductwork designed for explosive dust is typically designed for 50 psi internal pressure, and will typically have one explosion relief vent per duct section. The likelihood of a dust explosion on an indirect coal mill system is 100% over time. Such an explosion can generate a plume of fire 5 ft to 15 ft in diameter and 20 ft to 30 ft long. Therefore, access to areas surrounding explosion vents should be limited with locked access. Duct details Ducts are shipped from the fabricating facility to job sites on trucks, by rail, or on barges, in lengths accommodating the mode of transport, often in 20-foot sections. These sections are connected with flanges or weld straps. Flanges are provided at expansion joints, or to join low stress duct sections. 
Flanges may be difficult to design for the duct plate forces. Flange gaskets add flexibility to the flanges that makes their ability to carry forces problematic. Therefore, weld straps (short steel straps) are commonly used for higher stress duct plate connections. Various duct photos A close look at the fixed duct support photo shows several properties of round ring supports. There are stiffeners at roughly 60 degrees on center. This duct ring is fabricated from two rolled WTs, welded at the center. This is a smaller duct, with light loads, so the bottom flange was slightly modified by support clearance requirements. A small gap is shown for placing the duct PTFE slide bearing, although a fixed support could also be inserted in this gap. In the background of this photo is a duct flange. The duct flange normally has 3/4" bolts at 6" nominal spacing. Duct flange angle thickness needs to be designed for duct plate tensile stresses, as flanges will bend; 5/16" or 3/8" angle thicknesses are common. See the above photo of round duct elbows, transitions, and stiffeners. The duct elbow radius is from 1 1/2 to 2 times the duct diameter. The round duct has ovaling and shipping rings at 20-foot nominal spacing, and larger support rings at supports. The Y split has suction stiffeners at the duct intersection. Note the 3000 HP fan inlet transition and stack inlet transition also shown in this photo. The adjacent photo also shows several principles of process ductwork. It shows large baghouse inlet ductwork. The inlet duct is tapered to minimize dust dropout. A shallow taper such as this also reduces pressure losses when changing duct diameters. Note the rectangular duct ring spacing is roughly 2'-6" on center. The round duct is stiffened near each branch duct. Resources There are several references for process duct work. These references are used together to review duct design processes. Other references are often used for duct design, but they give similar results. Finite element design of process duct work is possible, but an understanding of design theory and allowable stresses is required to properly interpret the finite element model. ASCE - Structural Design of Air & Gas Ducts for Power Stations and Industrial Boiler Applications Roark & Young, Formulas for Stress & Strain, various editions US Steel Plate, Plate Structures, Volumes I & II US Steel Plate, Steels for Elevated Service Temperatures, 1974 AISC, online steel temperature versus yield and steel temperature versus Young's modulus charts Lincoln Arc Welding, Design of Welded Structures, Omar Blodgett, chapter 6, section 6.6 Lincoln Arc Welding, Tubular Steel Structures, by Troitsky Cold-Formed Steel Structures ASHRAE, for the design of pressure drop, elbows and fans API 560, contains references to minimize wind ovaling SMACNA can also be used as a reference Process Vendor, 2005, Process Ducting Loads Similar design reference, out of print: Gaylord & Gaylord, Design of Bins Cement, lime and lead industry accepted dust loads (for structural loading) are: Process ductwork is intended to convey large volumes of dust. Some of this dust will settle to the bottom of the duct during power outages and normal operation. The percentage of duct cross section filled with dust is often assumed to be as follows: Duct slopes level to 30 degrees: 25% of cross section. Duct slopes 30 degrees to 45 degrees: 15% of cross section. Duct slopes 45 degrees to 85 degrees: 5% of cross section. Above 85 degrees: a 2 in (50 mm) interior coating of dust. 
These loads are always confirmed with the client before use, but the above is US common practice. To minimize the buildup of dust, each material has a minimum carrying velocity: lime about 2800 fpm, cement about 3200 fpm, and lead dust about 4200 fpm. Dust density depends on the industry; normally: cement dust density = 94 pcf, lime industry = 50 pcf, lead oxide dust = 200 pcf. Duct wear: High temperature ductwork often carries large volumes of hot abrasive dust. Often the design temperature of the duct, or the abrasiveness of the dust, prevents the use of abrasion-resisting steels. In these cases refractory can be anchored inside the duct, or abrasion-resisting tiles, with weld nuts, are welded to the inside of the ductwork. Duct thermal movement: Duct steels expand with temperature. Each type of steel may have a different coefficient of thermal expansion; typical mild carbon steels expand with a coefficient of 0.0000065 per °F (see AISC). References Building engineering
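As a closing illustration, the dust-load fractions and thermal expansion coefficient quoted in this article combine into simple per-foot weight and growth estimates; a minimal Python sketch (function names are assumptions; figures are the US-unit values quoted above):

```python
import math

def dust_load_plf(diameter_ft: float, fill_fraction: float, dust_pcf: float) -> float:
    """Dust weight per linear foot: filled fraction of the duct cross section
    times dust density (pcf)."""
    area_sf = math.pi * diameter_ft**2 / 4.0
    return area_sf * fill_fraction * dust_pcf

def thermal_growth_in(length_ft: float, dt_f: float, alpha: float = 0.0000065) -> float:
    """Axial growth (inches) for a temperature rise dt_f (deg F), using the
    mild carbon steel coefficient of 0.0000065 per deg F quoted above."""
    return length_ft * 12.0 * alpha * dt_f

# Hypothetical 20 ft cement duct at a 30-degree slope (25% fill, 94 pcf dust):
print(round(dust_load_plf(20.0, 0.25, 94.0)))    # ~ 7383 lb/ft, i.e. 3 to 4 tons per foot
# Growth of a 10 ft section heated from 70 F to 900 F:
print(round(thermal_growth_in(10.0, 830.0), 2))  # ~ 0.65 in, matching the
# "5/8 inch per 10 feet" rule of thumb
```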
Process duct work
Engineering
6,176
5,882,353
https://en.wikipedia.org/wiki/Uwe%20Storch
Uwe Storch (born 12 July 1940 in Leopoldshall; died 17 September 2017 in Lanzarote) was a German mathematician. His fields of research were commutative algebra and analytic and algebraic geometry, in particular derivations, divisor class groups, and resultants. Storch studied mathematics, physics and mathematical logic in Münster and in Heidelberg. He received his PhD in 1966 under the supervision of Heinrich Behnke with a thesis on almost (or Q-) factorial rings. He completed his habilitation in Bochum in 1972, became a professor in Osnabrück in 1974, and from 1981 was professor for algebra and geometry in Bochum; he became emeritus in 2005. Uwe Storch was married and had four sons. Theorem of Eisenbud–Evans–Storch The theorem of Eisenbud–Evans–Storch states that every algebraic variety in n-dimensional affine space is given geometrically (i.e. up to radical) by n polynomials. Selected publications Günther Scheja and Uwe Storch, Lehrbuch der Algebra, 2 volumes, Stuttgart 1980 (1st edition was in 3 volumes), 1988. Uwe Storch and Hartmut Wiebe, Lehrbuch der Mathematik, 4 volumes. References External links 1940 births 2017 deaths 20th-century German mathematicians 21st-century German mathematicians Algebraists People from Staßfurt
Uwe Storch
Mathematics
284
23,574,024
https://en.wikipedia.org/wiki/Eurocarbdb
EuroCarbDB was an EU-funded initiative for the creation of software and standards for the systematic collection of carbohydrate structures and their experimental data; it was discontinued in 2010 due to lack of funding. The project included a database of known carbohydrate structures and experimental data, specifically mass spectrometry, HPLC and NMR data, accessed via a web interface that provided for browsing, searching and contribution of structures and data to the database. The project also produced a number of associated bioinformatics tools for carbohydrate researchers: GlycanBuilder, a Java applet for drawing glycan structures GlycoWorkbench, a standalone Java application for semi-automated analysis and annotation of glycan mass spectra GlycoPeakfinder, a webapp for calculating glycan compositions from mass data The canonical online version of EuroCarbDB was hosted by the European Bioinformatics Institute at www.ebi.ac.uk up to 2012, and then at relax.organ.su.se. EuroCarb code has since been incorporated into and extended by UniCarb-DB, which also includes the work of the defunct GlycoSuite database. References External links an online version of EuroCarbDB Eurocarbdb googlecode project initial publication of the EuroCarb project Official site for eurocarbdb reports and recommendations (no longer active) Bioinformatics software Biological databases Carbohydrates Science and technology in Cambridgeshire South Cambridgeshire District
Eurocarbdb
Chemistry,Biology
317
21,904,568
https://en.wikipedia.org/wiki/Fate%20of%20Chris%20Lively%20and%20Wife
Fate of Chris Lively and Wife is an American folk song recorded by Blind Alfred Reed. Written in ballad form and performed at a slow and somber tempo, the song tells of the death of Christopher Columbus Lively and his wife Mary Elizabeth Fisher Lively, who were killed on September 2, 1927, when a train collided with their horse and wagon at a railroad crossing near Pax, West Virginia. Lively was born on February 7, 1849, in Town Creek, Fayette County, West Virginia, and was 78 years old at the time of the accident. The song concludes with an important message: Now good people, I hope you take warning, As you journey along through this life, Every time when you see “Railroad Crossing,” Just remember Chris Lively and wife. The song was recorded on December 19, 1927, in Camden, New Jersey. Blind Alfred Reed sang and played violin, and Orville Reed played guitar. It was released as a 10-inch record, Victor 21533. It can also be heard on the album "Complete Recorded Works, 1927-29" by Blind Alfred Reed. References American folk songs Blind Alfred Reed songs Train wreck ballads Songs about West Virginia Songs based on actual events 1927 songs
Fate of Chris Lively and Wife
Technology
239
78,626,621
https://en.wikipedia.org/wiki/Intrinsic%20DNA%20fluorescence
The term intrinsic DNA fluorescence refers to the fluorescence emitted directly by DNA when it absorbs ultraviolet (UV) radiation. It contrasts with the fluorescence stemming from fluorescent labels, widely used in biological applications, that are either simply bound to DNA or covalently attached to it; such labels may be chemically modified nucleobases that do not occur naturally. The intrinsic DNA fluorescence was discovered in the 1960s by studying nucleic acids in low-temperature glasses. Since the beginning of the 21st century, the much weaker emission of nucleic acids in fluid solutions has been studied at room temperature by means of sophisticated spectroscopic techniques, using femtosecond laser pulses as the UV source and following the evolution of the emitted light from femtoseconds to nanoseconds. The development of specific experimental protocols has been crucial for obtaining reliable results. Fluorescence studies, combined with theoretical computations, bring information about the relaxation of the electronic excited states and thus contribute to understanding the very first steps of a complex series of events triggered by UV radiation, ultimately leading to DNA damage. The principles governing the behavior of the intrinsic RNA fluorescence, to which only a few studies have been dedicated, are the same as those described for DNA. The knowledge of the fundamental processes underlying the DNA fluorescence paves the way for the development of label-free biosensors. Such optoelectronic devices would, for certain applications, have the advantage of bypassing a step of chemical synthesis or avoiding the uncertainties due to non-covalent binding of fluorescent dyes to nucleic acids. Conditions for measuring the intrinsic DNA fluorescence Due to the weak intensity of the intrinsic DNA fluorescence, specific precautions are necessary in order to perform correct measurements and obtain reliable results. A first requirement concerns the purity of the DNA samples and of the chemicals and water used to prepare the buffered solutions. The buffer emission must be systematically recorded and, in certain cases, subtracted in an appropriate way. A second requirement is associated with the DNA damage provoked by the exciting UV light, which alters the fluorescence. In order to overcome these difficulties, continuous stirring of the solution is needed. For measurements using laser excitation, circulation of the DNA solution by means of a peristaltic pump is recommended; the reproducibility of successive fluorescence signals needs to be checked. Spectral shapes and quantum yields The fluorescence spectra of the DNA monomeric chromophores (nucleobases, nucleosides or nucleotides) in neutral aqueous solution, obtained with excitation around 260 nm, peak in the near ultraviolet (300–400 nm), and a long tail extending over the whole visible domain is present in their emission spectra. The spectra of the DNA multimers (composed of more than one nucleobase) are not the sum of the spectra of their monomeric constituents. In some cases, in addition to a main peak located in the UV, a second band is present at longer wavelengths; it is attributed to excimer or exciplex formation. The duplex spectra are affected by their size and the viscosity of the solution, while those of G-quadruplexes are affected by the metal cations present in their central cavity.
Due to the dependence of the fluorescence on the secondary structure, it is possible to follow the formation and melting of G-quadruplexes by monitoring their emission, and also to detect the occurrence of hairpin loops in these systems. The fluorescence quantum yields Φ, that is, the ratio of the number of emitted photons to the number of absorbed photons, are typically in the range of 10−4–10−3. The highest values are encountered for G-quadruplexes. The DNA nucleoside thymidine (dT) was proposed as a reference for the determination of small fluorescence quantum yields. A limited number of measurements were also performed with UVA excitation (330 nm), where DNA single and double strands, but not their monomeric units, absorb weakly. The UVA-induced fluorescence peaks between 415 and 430 nm; the corresponding Φ values are at least one order of magnitude higher than those determined with excitation around 260 nm. The fluorescence of some minor, naturally occurring nucleobases, such as 5-methylcytosine, N7-methylated guanosine or N6-methyladenine, has been studied both in monomeric form and in multimers. The emission spectra of these systems are red-shifted compared to those of the major nucleobases and give rise to exciplexes. Time-resolved techniques A peculiarity of the intrinsic DNA fluorescence is that, contrary to most fluorescent molecules, its time evolution cannot be described by a constant decay rate (corresponding to a mono-exponential decay). For the monomeric units, the fluorescence lasts at most a few picoseconds, and the decay rate is not constant. In the case of multimers, the fluorescence continues over much longer times, lasting in some cases for several tens of nanoseconds. The time constants derived from fittings with multi-exponential functions depend on the probed time window. In order to obtain a complete picture of this complex time evolution, a femtosecond laser is needed as the excitation source. Time-resolved techniques employed to this end are fluorescence upconversion, Kerr-gated fluorescence spectroscopy and time-correlated single photon counting. In addition to the changes in the fluorescence intensity, all of them allow the recording of time-resolved fluorescence spectra and fluorescence anisotropies, which provide information about the relaxation of the excited electronic states and the type of the emitting excited states. The early studies were performed using time-correlated single photon counting combined with nanosecond sources (synchrotron radiation or lasers). Although they discovered the key difference between the behavior of monomeric and multimeric nucleic acids, they failed to obtain a full picture of the fluorescence dynamics. Emitting excited states and their lifetimes Monomeric chromophores Emission from the monomeric DNA chromophores arises from their lowest-energy electronic excited states, that is, the ππ* states of the nucleobases. These are bright states, in the sense that they are also responsible for photon absorption. Their lifetimes are extremely short: they fully decay within, at most, a few ps. Such ultrafast decays are due to the existence of conical intersections connecting the excited state with the ground state. Therefore, the dominant deactivation pathway is non-radiative, leading to very low fluorescence quantum yields. The evolution toward the conical intersection is accompanied by conformational movements.
A significant fraction of the photons is emitted while the system is moving along the potential energy surface of the excited state, before reaching a point of minimum energy. As motions on such surfaces do not follow exponential patterns, the fluorescence decays are not characterized by constant decay rates. Multichromophoric systems Due to their close proximity, nucleobases in DNA multimers may be electronically coupled. This leads to delocalization of the excited states responsible for photon absorption (Franck–Condon states) over more than one nucleobase (collective states). The electronic coupling depends on the geometrical arrangement of the chromophores. Therefore, the properties of the collective states are affected by factors that determine the relative position of the nucleobases. Among others, the conformational disorder characterizing the nucleic acids modulates the coupling values, giving rise to a large number of Franck–Condon states. Each one of them evolves along a specific energy surface. One can distinguish two limiting types of emitting states in DNA: on the one hand, ππ* states, localized on single nucleobases or delocalized over several of them; on the other, excited charge-transfer states, in which a substantial fraction of an atomic charge has been transferred from one nucleobase to another. The latter are weakly emissive. Between these two limits lies a multitude of emitting states, more or less delocalized, with different amounts of charge transfer. The properties of the emitting states may be modified during their lifetime under the effect of conformational motions of the nucleic acid, occurring on the same time-scale. Because of this complexity, the description of the fluorescence decays by multi-exponential functions is only phenomenological. Experimentally, the different types of emitting states can be differentiated through their fluorescence anisotropy. The charge-transfer character of an excited state lowers the fluorescence anisotropy. The decrease of fluorescence anisotropy observed for all the DNA multimers on the femtosecond time-scale was explained by an ultrafast transfer of the excitation energy among the nucleobases. A particular class of emitting states with mixed ππ*/charge-transfer character was detected in all types of duplexes, including genomic DNA. Their distinguishing feature is that their emission appears at short wavelengths (λ < 330 nm) and represents the longest-lived components of the overall duplex fluorescence, decaying on the nanosecond time-scale. This contrasts with the excimer/exciplex emission, characterized by a pronounced charge-transfer character, appearing at long wavelengths and decaying on the sub-nanosecond time-scale. The contribution of the high-energy emitting states to the total fluorescence increases with the local rigidity of the duplex, which depends on the number of Watson–Crick hydrogen bonds or the size of the system. Applications The utilization of the intrinsic fluorescence of nucleic acids for sensing purposes began to be explored only in 2019. The possibility of detecting target DNA, Pb2+ ions, or aptamer binding, the screening of a large number of sequences and the authentication of COVID-19 vaccines have been investigated. Moreover, the possibility of detecting DNA damage by probing the fluorescence at short wavelengths (300 nm) has been discussed. Due to their tunable structure, G-quadruplexes are particularly promising for the development of label-free and dye-free biosensors.
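For reference, the phenomenological decay law and the anisotropy discussed above take their standard textbook forms; the number of components n and the amplitudes a_i are fit parameters, and I∥ and I⊥ denote the emission intensities polarized parallel and perpendicular to the excitation:

    I(t) = \sum_{i=1}^{n} a_i \, e^{-t/\tau_i}, \qquad r(t) = \frac{I_\parallel(t) - I_\perp(t)}{I_\parallel(t) + 2\, I_\perp(t)}

Energy transfer between differently oriented nucleobases randomizes the emission dipole and thus lowers r, which is why the femtosecond anisotropy decay described above is interpreted as ultrafast excitation transfer.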
References Further reading Biochemistry Chemistry Fluorescence DNA
Intrinsic DNA fluorescence
Chemistry,Biology
2,157
19,635,086
https://en.wikipedia.org/wiki/Old%20Man%20River%27s%20City%20project
The Old Man River's City project was an architectural design created by Buckminster Fuller in 1971. Fuller was asked to design the structure by the city of East St. Louis. Old Man River's City would have been a truly massive housing project for the city's 70,000 residents. The total capacity of the building, a circular multi-terraced dome, would be 125,000 occupants. Each family would have approximately of living space. References External links A Community Dwelling Machine Buckminster Fuller Planned communities in the United States Buildings and structures in St. Clair County, Illinois East St. Louis, Illinois 1971 establishments in Illinois Proposed arcologies
Old Man River's City project
Technology
132
18,742,021
https://en.wikipedia.org/wiki/B%26G
B&G, formerly known as Brookes and Gatehouse, is a developer and manufacturer of advanced instrumentation, autopilot and navigation systems for racing and cruising sailing yachts. Its equipment can be found on many of the world's leading sail racing yachts, including the majority of competitors for events such as the America's Cup, Vendee Globe and the Volvo Ocean Race. History The company was founded in 1956 by Major R.N. Gatehouse and Ronald Brookes, who had formed a partnership the previous year to develop and manufacture a new radio direction finder (RDF) for use by private sailing boats. In 1956 the 'Homer' receiver was produced, said to be the first transistorised RDF to be made available to the world's leisure marine market. Over the course of the 1950s, B&G, then based in Lymington on the south coast of England, extended its activities into echo sounders, and in 1960 it produced its first speedometer. In 1966 the ketch Gipsy Moth IV, the yacht which earned Sir Francis Chichester his single-handed circumnavigation record, was equipped with a full suite of B&G instruments. In the 1970s and 1980s the company continued to innovate rapidly, receiving a Design Council Award in 1981 for its Hercules electronic data system, which effectively introduced computers to leisure boating. The company continues to innovate to this day, remaining the market leader in advanced instrumentation systems for grand prix racing yachts and sailing superyachts, as well as for club racers and blue-water cruising yachts. The company has changed hands a number of times during its lifetime, and is now a brand of Navico, the Norwegian-based leader in the marine electronics sector. B&G, however, remains headquartered just a few miles from its birthplace, in Hampshire, UK. Products B&G is primarily known for producing integrated sailing instrumentation systems that collect and analyse a wide range of data relating both to a yacht's performance and to the external conditions in which it is sailing. This information can then be displayed to the helmsman, navigator and crew and/or analysed by integral data-processing functions to produce additional information flows that enable the yacht's crew to make informed decisions with regard to performance optimisation, navigation strategy, safety and tactics. Data that is collected may include apparent wind speed, apparent wind angle, boat speed, heading, position (latitude/longitude) and depth. By combining and analysing these inputs a B&G system can calculate and display in real time information such as true wind speed, true wind direction, course over ground, velocity made good, time and distance to the next waypoint, and much more. This data can additionally be fed into tactical navigation software such as B&G's proprietary Deckman system and analysed against tidal and current databases, electronic charting and weather prediction feeds to enable the navigator to devise the fastest and most efficient route between two or more points. This information can also be fed into autopilot systems.
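The conversion from apparent to true wind mentioned above is, at its core, a simple vector calculation. The sketch below is a generic textbook version — ignoring heel, leeway and masthead motion, and using assumed example inputs — not B&G's proprietary algorithm:

    #include <cmath>
    #include <cstdio>

    int main() {
        const double DEG = 3.14159265358979 / 180.0;
        double aws = 14.0; // apparent wind speed (knots), assumed example value
        double awa = 35.0; // apparent wind angle off the bow (degrees)
        double bsp = 7.0;  // boat speed through the water (knots)

        // Resolve the apparent wind into boat-axis components, then subtract
        // the boat's own motion to obtain the true wind vector.
        double x = aws * std::cos(awa * DEG) - bsp; // along-boat component
        double y = aws * std::sin(awa * DEG);       // athwartships component

        double tws = std::sqrt(x * x + y * y);      // true wind speed
        double twa = std::atan2(y, x) / DEG;        // true wind angle
        std::printf("TWS = %.1f kn, TWA = %.1f deg\n", tws, twa); // ~9.2 kn, ~60.9 deg
        return 0;
    }

In a full instrument system, heading and tide inputs would further convert the true wind angle into true wind direction, one of the derived quantities listed above.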
Alongside instrumentation, B&G also designs and manufactures autopilots for sailing yachts. Their autopilots have been used by many short-handed racers, including Dame Ellen MacArthur (during her single-handed circumnavigation race on Kingfisher in the 2000-01 Vendée Globe and her outright single-handed circumnavigation record on B&Q/Castorama), François Gabart (winner of the 2012 Vendée Globe on the IMOCA 60 Macif), Armel Le Cléac'h (1st Vendée Globe 2016, 2nd Vendée Globe 2012 - IMOCA 60, Banque Populaire) and Alex Thomson (2nd Vendée Globe 2016, 3rd Vendée Globe 2012 - IMOCA 60, Hugo Boss). From 2010 onwards B&G also re-entered the navigation market with their Zeus range of multi-function displays, alongside the first recreational FMCW "Broadband" radar. The distinguishing feature of the B&G navigation devices was their dedicated sailing functionality, including the patented SailSteer composite gauge, SailingTime - providing realistic time-to-waypoint estimates for sailing yachts - Laylines, a Start Line function for racing, and WindPlot for tracking trends in wind strength and direction. Product categories developed and marketed by B&G include: Sailing Instruments and sensors Autopilots Navigation (Chartplotters / Multi-Function Displays) Radar Communications (VHF) Products include: Hydra 330/2/2000 Instruments Helmstar autopilot Hercules 190/290/390 Instruments Hercules 690/790/2000 Instruments and Pilots Network Instruments and Pilots H1000 Instruments and Pilots H3000 Instruments and Pilots H5000 Instruments and Pilots WTP, WTP2, WTP3 Grand Prix instrument systems 608, 213, WS300, WS700 wind sensors Deckman Tactical Navigation software Zeus, Zeus Touch, Zeus 2, Zeus 3 ranges of multi-function navigation systems Vulcan FS chartplotters Triton (T41), Triton 2 Instruments and Pilots Broadband Radar (FMCW) HALO Radar (Pulse Compression) V20, V50, V60, V90, V100 VHF radios See also Navico, parent company of B&G Simrad Yachting, another Navico brand Lowrance Electronics, another Navico brand C-MAP, another Navico brand References http://www.bandg.com/ http://www.navico.com/ External links Electronics companies established in 1956 Navigation system companies 1956 establishments in England Companies based in Hampshire Navigational equipment manufacturers Marine electronics Manufacturing companies of the United Kingdom British brands
B&G
Engineering
1,171
37,203,904
https://en.wikipedia.org/wiki/Middleware%20for%20Robotic%20Applications
Middleware for Robotic Applications (MIRA) is a cross-platform, open-source software framework written in C++ that provides a middleware, several base functionalities and numerous tools for developing and testing distributed software modules. It also focuses on the easy creation of complex, dynamic applications, while reusing these modules as plugins. The main purpose of MIRA is the development of robotic applications, but as it is designed to allow type-safe data exchange between software modules using intra- and interprocess communication, it is not limited to these kinds of applications. MIRA is developed in a cooperation between the MetraLabs GmbH and the Ilmenau University of Technology/Neuroinformatics and Cognitive Robotics Lab. Therefore, MIRA was designed to fulfill the requirements of both commercial and educational purposes. Features General: adds introspection/reflection and serialization to C++ using C++ language constructs only (a meta-language or metacompilers are not necessary); efficient data exchange between software modules; the communication technique used, based on "channels", always allows the user non-blocking access to the transferred data; the communication is fully transparent no matter whether the software modules are located within the same process, in different processes or on different machines – the underlying transport layer automatically chooses the fastest method of data transport; besides data exchange via "channels", MIRA supports Remote Procedure Calls (RPC) and Remote Method Invocation; MIRA is fully decentralized – there is no central server or central communication hub – making its communication more robust and allowing its use in multi-robot applications. Robotic-application specific: easy configuration of software modules via configuration files; parameters of algorithms can be modified live at runtime to speed up the debugging and development process; huge amounts of robot sensor data can be recorded in Tapes for later playback, where different codecs can be used to compress the data. Platforms MIRA supports and was successfully tested on the following platforms: Linux – Ubuntu and derivatives, OpenSuse, CentOS, Red Hat and Fedora; Windows – Microsoft Windows XP, Windows Vista, Windows 7 (32-bit and 64-bit). Applications using MIRA MIRA is used within the following applications: Konrad and Suse – guide robots that guide visitors within the Zuse Building of the Ilmenau University of Technology; monitoring the air quality within clean rooms at Infineon Technologies using several SCITOS G5 robots; and the projects CompanionAble – Integrated Cognitive Assistive & Domotic Companion Robotic System for Ability & Security – and Robot-Era – Implementation and integration of advanced robotic systems and intelligent environments in real scenarios for the ageing population. Usability Reflection/Serialization

class Data {
    int value;
    std::map<std::string, std::list<int>> complex;
    Foo* ptr; // Foo: another user-defined, reflectable type

    template <typename Reflector>
    void reflect(Reflector& r) {
        // Each member is registered with a name and a description,
        // making the type serializable and introspectable.
        r.member("Value", value, "an int member");
        r.member("Complex", complex, "a complex member");
        r.member("Pointer", ptr, "a pointer member");
    }
};

Arbitrarily complex data types can be serialized by adding a simple reflect method to the class, as shown above. After these minor changes, objects of the class can be transported via inter-process communication, used as parameters in configuration files for software modules, recorded in "Tape" files, etc.
Remote Procedure Calls

class MyClass {
    int compute(const std::list<float>& values);

    template <typename Reflector>
    void reflect(Reflector& r) {
        // Registering the method in reflect() exposes it as an RPC method.
        r.method("compute", &MyClass::compute, this, "comment");
    }
};

Arbitrary methods can be turned into RPC methods by adding one line of code within the reflect() method. There is no need to write wrappers around the methods or to use meta-description languages. References External links MIRA Website MIRA Documentation MIRA Questions & Answers Robotics software Robotics suites 2012 software 2012 in robotics
Middleware for Robotic Applications
Engineering
832
9,358,077
https://en.wikipedia.org/wiki/Core%20router
A core router is a router designed to operate in the Internet backbone, or core, or in the core networks of Internet service providers. To fulfill this role, a router must be able to support multiple telecommunications interfaces of the highest speed in use in the core Internet and must be able to forward IP packets at full speed on all of them. It must also support the routing protocols being used in the core. A core router is distinct from an edge router: edge routers sit at the edge of a backbone network and connect to core routers. History Like the term "supercomputer", the term "core router" refers to the largest and most capable routers of the then-current generation. A router that was a core router when introduced would likely not be a core router ten years later. Although the local-area NPL network was using line speeds of 768 kbit/s from 1967, at the inception of the ARPANET (the Internet's predecessor) in 1969, the fastest links were 56 kbit/s. A given routing node had at most six links. The "core router" was a dedicated minicomputer called an IMP (Interface Message Processor). Link speeds increased steadily, requiring progressively more powerful routers, until the mid-1990s, when the typical core link speed reached 155 Mbit/s. At that time, several breakthroughs in fiber-optic telecommunications technologies (notably DWDM and EDFA) combined to lower bandwidth costs, which in turn drove a sudden, dramatic increase in core link speeds: by 2000, a core link operated at 2.5 Gbit/s and core Internet companies were planning for 10 Gbit/s speeds. The largest provider of core routers in the 1990s was Cisco Systems, who provided core routers as part of a broad product line. Juniper Networks entered the business in 1996, focusing primarily on core routers and addressing the need for a radical increase in routing capability that was driven by the increased link speed. In addition, several new companies attempted to develop new core routers in the late 1990s. It was during this period that the term "core router" came into wide use. The required forwarding rate of these routers became so high that it could not be met with a single processor or a single memory, so these systems all employed some form of distributed architecture based on an internal switching fabric. The Internet was historically supply-limited, and core Internet providers struggled to expand the Internet to meet the demand. During the late 1990s, they expected a radical increase in demand, driven by the Dot-com bubble. By 2001, it became apparent that the sudden expansion in core link capacity had outstripped the actual demand for Internet bandwidth in the core. The core Internet providers were able to defer purchases of new core routers for a time, and most of the new companies went out of business. As of 2012, the typical Internet core link speed is 40 Gbit/s, with many links at higher speeds, reaching or exceeding 100 Gbit/s (out of a theoretical current maximum of 111 Gbit/s, provided by Nippon Telegraph and Telephone), serving the explosion in demand for bandwidth from the current generation of cloud computing and other bandwidth-intensive (and often latency-sensitive) applications such as high-definition video streaming (see IPTV) and Voice over IP.
This, along with newer technologies – such as DOCSIS 3, channel bonding, and VDSL2 (the latter of which can wring more than 100 Mbit/s out of plain, unshielded twisted-pair copper under normal conditions, out of a theoretical maximum of 250 Mbit/s at zero distance from the VRAD) – and more sophisticated provisioning systems – such as FTTN (fiber [optic cable] to the node) and FTTP (fiber to the premises, either to the home or provisioned with Cat 5e cable) – can provide downstream speeds to the mass-market residential consumer in excess of 300 Mbit/s and upload speeds in excess of 100 Mbit/s with no specialized equipment or modification (e.g. Verizon FiOS). Current core router manufacturers (core router models in parentheses) Nokia (7950 Extensible Routing System [XRS] Series, 7750 series) Ciena (Ciena 5430 15T, Ciena 6500) Cisco Systems (8000 series, CRS (former), Network Convergence System 6000) Extreme Networks (Black Diamond 20808) Ericsson (SSR series) Huawei Technologies Ltd. (NetEngine 9000 (NE9000), NetEngine 5000E, NetEngine 80E, NetEngine 80) Juniper Networks (Juniper T-Series and PTX Series) ZTE (ZXR10 Series: T8000, M6000) Previous core router manufacturers Alcatel-Lucent (acquired by Nokia in 2016) Allegro Networks Axiowave Networks Avici Systems (changed name to Soapstone Networks in 2008 and no longer makes core routers) Brocade Communications Systems (NetIron XMR Series) Caspian Networks (closed in 2006, makers of core routers A120 and A50) Charlotte's Web Networks Chiaro Networks (closed in 2005, maker of Chiaro Enstara core routers) Foundry Networks (acquired by Brocade in 2008) Hyperchip IPOptical Ironbridge Marconi (telecom business acquired by Ericsson in 2006) Nortel Networks (bankrupt) Osphere Net Systems Pluris Procket Networks (acquired by Cisco Systems in 2004) See also Cisco Systems acquisitions Edge router Network topology References Internet architecture Hardware routers
Core router
Technology
1,175
8,947,095
https://en.wikipedia.org/wiki/Solar%20energetic%20particles
Solar energetic particles (SEP), formerly known as solar cosmic rays, are high-energy, charged particles originating in the solar atmosphere and solar wind. They consist of protons, electrons and heavy ions with energies ranging from a few tens of keV to many GeV. The exact processes involved in transferring energy to SEPs are a subject of ongoing study. SEPs are relevant to the field of space weather, as they are responsible for SEP events and ground level enhancements. History SEPs were first detected indirectly by Scott Forbush in February and March 1942, as ground level enhancements. Solar particle events SEPs are accelerated during solar particle events. These can originate either at a solar flare site or in shock waves associated with coronal mass ejections (CMEs). However, only about 1% of CMEs produce strong SEP events. Two main mechanisms of acceleration are possible: diffusive shock acceleration (DSA, an example of first-order Fermi acceleration) or the shock-drift mechanism. SEPs can be accelerated to energies of several tens of MeV within 5–10 solar radii (5% of the Sun–Earth distance) and can reach Earth in a few minutes in extreme cases. This makes prediction and warning of SEP events quite challenging. In March 2021, NASA reported that scientists had located the source of several SEP events, potentially leading to improved predictions in the future. Research SEPs are of interest to scientists because they provide a good sample of solar material. Despite the nuclear fusion occurring in the core, the majority of solar material is representative of the material that formed the solar system. By studying the isotopic composition of SEPs, scientists can indirectly measure the material that formed the solar system. See also Solar wind References Reames, D.V., Solar Energetic Particles, Springer, Berlin (2017), doi:10.1007/978-3-319-50871-9. External links Solar Energetic Particles (Rainer Schwenn) NASA The Isotopic Composition of Solar Energetic Particles Solar phenomena
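To make the arrival-time figure above concrete, a relativistic proton's minimum Sun-to-Earth travel time follows from its Lorentz factor; the 1 GeV kinetic energy used below is an assumed example value:

    \gamma = 1 + \frac{E_k}{m_p c^2} = 1 + \frac{1\ \mathrm{GeV}}{0.938\ \mathrm{GeV}} \approx 2.07, \qquad
    \beta = \sqrt{1 - \gamma^{-2}} \approx 0.875, \qquad
    t = \frac{1\ \mathrm{AU}}{\beta c} \approx \frac{8.3\ \mathrm{min}}{0.875} \approx 9.5\ \mathrm{min}

This is the straight-line minimum; real particles spiral along the interplanetary magnetic field and travel a longer path, which is why only the most energetic SEPs arrive within minutes of the parent event.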
Solar energetic particles
Physics
422
58,712,802
https://en.wikipedia.org/wiki/Peter%20Thonemann
Peter Clive Thonemann (3 June 1917 – 10 February 2018) was an Australian-born British physicist who was a pioneer in the field of fusion power while working in the United Kingdom. Thonemann was born in Melbourne and moved to Oxford University in 1944, becoming one of the earliest researchers on the topic of controlled fusion. He led the fusion research at Oxford in its early years, before moving to the Atomic Energy Research Establishment (Harwell) in 1950. He led the ZETA reactor development at Harwell and announced its apparent success in 1958. Thonemann was deputy director of the new Culham Laboratory in 1965–66. In 1968 he left Culham to become Professor of Physics at today's Swansea University, where he worked on applying his physics knowledge to biological research. He retired from Swansea in 1984, living out his life in the city. Early life and education Julius Emil Thonemann moved to Australia from Germany in 1854 and was consul to Victoria for the Austro-Hungarian Empire from 1866 to 1879. His son, Frederick Emil Thonemann, was born in 1860 in Melbourne. Frederick established a wool trading business, Thonemann and Lange, which later became a stock brokerage, F. Thonemann and Sons, among other businesses. Peter was the second of four children, born to Frederick's second wife, Mabel Jessie Fyfe. Peter grew up in the family's large house, "Rathgawn", and attended Melbourne Grammar School. In 1936 he began a physics degree at the University of Melbourne, completing his bachelor's degree in 1939. When the war started that year, he was sent to work at the Munitions Supply Laboratories in Melbourne, where he worked until 1942, when he moved to Amalgamated Wireless in Sydney. There, he met his future wife, Jean, with whom he had two children, Helena, in 1946, and Philip, in 1949. In 1944 he began his master's degree at the University of Sydney, where he wrote his thesis on the study of high-frequency fields in an ionized gas. Australian universities were not offering PhDs at that time, so he took a position at Oxford University later that year. Fusion work Immediately after the war, Jim Tuck returned to Oxford from his time on the Manhattan Project. He met Thonemann, whose experience in electric discharges in gas made him familiar with the pinch effect, a possible route to controlled fusion. The two wrote a proposal to build a small machine, but before it was approved, Tuck returned to the US. In 1947, Cousins and Ware began experiments using pinch in toroidal tubes. Thonemann was able to arrange a small amount of funding, and in 1948 began basic experiments with electrical discharges in a linear tube containing mercury gas to study the pinch effect. By the next year he had moved to a larger copper torus and was able to demonstrate the pinch to Frederick Lindemann and John Cockcroft. Thonemann became the Atomic Energy Research Establishment (AERE) Head of Research on Controlled Thermonuclear Reactions in 1949, a position he held until 1960. As a result of the exposure of Klaus Fuchs as a Soviet spy, by 1952 the fusion research at Oxford was moved to Harwell, while Cousins and Ware's work moved to the Atomic Weapons Establishment at Aldermaston. At Harwell, Cockcroft had successfully argued for the construction of a much larger machine, ZETA. By 1957, early indications were that ZETA had successfully produced tiny amounts of fusion, and the story began to leak to the press. This led to considerable coverage of Thonemann's role in the Australian press.
In January 1958 it was announced that ZETA had succeeded. After further work, it became clear that the signals of fusion were false, and the story had to be withdrawn, causing great embarrassment. After some arguments within the UK scientific establishment, the decision was made to move the fusion-related work to a new location at Culham. Thonemann moved to Culham and became the deputy director during 1965–66. Biological studies In 1968, Thonemann moved to University College of Swansea, now Swansea University, to become a professor of physics. He was not able to raise funds to begin a fusion program at Swansea, and instead began applying the mathematics of the dynamics of particles in plasma to the movement of E. coli in response to nutrient gradients. Thonemann retired from Swansea in 1984 but continued to live in the town until his death on 10 February 2018, aged 100. He had two children, a son and a daughter, with his wife Jean. Notes References Citations Bibliography 1917 births 2018 deaths Australian men centenarians British men centenarians Nuclear physicists Plasma physicists Scientists from Melbourne Australian people of German descent
Peter Thonemann
Physics
968
50,608,039
https://en.wikipedia.org/wiki/Cortinarius%20taylorianus
Cortinarius taylorianus is a basidiomycete fungus of the genus Cortinarius native to New Zealand, where it grows under Nothofagus and produces an imposing purple mushroom. This species is named in honour of Grace Marie Taylor, a New Zealand expert on fungi. See also List of Cortinarius species References External links taylorianus Fungi of New Zealand Fungi described in 1990 Taxa named by Egon Horak Fungus species
Cortinarius taylorianus
Biology
93
56,730,121
https://en.wikipedia.org/wiki/WASP-31b
WASP-31b is a low-density (puffy) "hot Jupiter" extrasolar planet orbiting the metal-poor (63% of solar metallicity) dwarf star WASP-31. The exoplanet was discovered in 2010 by the WASP project. WASP-31b is in the constellation of Crater, and is about 1305 light-years (light travel distance) from Earth. Characteristics WASP-31b is a low-density (puffy) "hot Jupiter" exoplanet with a mass about 0.48 times that of Jupiter and a radius about 1.55 times that of Jupiter. As of 2021, its atmosphere has the largest scale height, about 1,150 km, among exoplanets with measurable atmospheres. The exoplanet orbits WASP-31, its host star, every 3.4 days. In 2012, it was found from the Rossiter–McLaughlin effect that WASP-31b is orbiting the parent star in a prograde direction, with the rotational axis of WASP-31 inclined by 2.8 degrees to the planetary orbit. A spectroscopic study in 2014 revealed that WASP-31b has a dense cloud deck overlaid by a hazy atmosphere. WASP-31b was also reported to have significant amounts of potassium in its upper atmosphere, but the detection of potassium was refuted in 2015. The potassium detection discrepancy was resolved in 2020 with an improved cloud-deck model, the best fit being a very small amount of water above the clouds and no potassium at all. A reanalysis of planetary spectroscopic data in 2020 revealed the presence of chromium monohydride in addition to water. See also WASP-6b WASP-12b WASP-17b WASP-19b WASP-39b WASP-121b References External links Exoplanets discovered by WASP Exoplanets discovered in 2010 Giant planets Hot Jupiters Crater (constellation)
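For context, the scale height quoted above follows the standard barometric relation; the mean molecular weight below is an assumed hydrogen/helium value, so the derived temperature is only an order-of-magnitude consistency check, not a measured quantity:

    H = \frac{k_B T}{\mu g}, \qquad g = \frac{G M_p}{R_p^2} \approx \frac{G\,(0.48\, M_J)}{(1.55\, R_J)^2} \approx 5\ \mathrm{m\,s^{-2}}

With \mu \approx 2.3\ \mathrm{u} for a hydrogen/helium envelope, H \approx 1150\ \mathrm{km} implies T = \mu g H / k_B \approx 1.6 \times 10^{3}\ \mathrm{K}, a temperature typical of a hot Jupiter and consistent with the planet's 3.4-day orbit.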
WASP-31b
Astronomy
393
39,699,737
https://en.wikipedia.org/wiki/Eco-socialism
Eco-socialism (also known as green socialism, socialist ecology, ecological materialism, or revolutionary ecology) is an ideology merging aspects of socialism with those of green politics, ecology and alter-globalization or anti-globalization. Eco-socialists generally believe that the expansion of the capitalist system is the cause of social exclusion, poverty, war and environmental degradation through globalization and imperialism, under the supervision of repressive states and transnational structures. Eco-socialism asserts that the capitalist economic system is fundamentally incompatible with the ecological and social requirements of sustainability. Thus, according to this analysis, giving economic priority to the fulfillment of human needs while staying within ecological limits, as sustainable development demands, is in conflict with the structural workings of capitalism. By this logic, market-based solutions to ecological crises (such as environmental economics and green economy) are rejected as technical tweaks that do not confront capitalism's structural failures. Eco-socialists advocate for the succession of capitalism by eco-socialism—an egalitarian economic/political/social structure designed to harmonize human society with non-human ecology and to fulfill human needs—as the only sufficient solution to the present-day ecological crisis, and hence the only path towards sustainability. Eco-socialists advocate dismantling capitalism, focusing on social ownership of the means of production by freely associated producers, and restoring the commons. Ideology Eco-socialists are critical of many past and existing forms of both green politics and socialism. They are often described as "Red Greens" – adherents to Green politics with clear anti-capitalist views, often inspired by Marxism (red greens are in contrast to eco-capitalists and green anarchists). The term "watermelon" is commonly applied, often pejoratively, to Greens who seem to put "social justice" goals above ecological ones, implying they are "green on the outside but red on the inside". The term is common in Australia and New Zealand, and usually attributed to either Petr Beckmann or, more frequently, Warren T. Brookes, both critics of environmentalism. The term is also found in non-English-speaking political discourse. The Watermelon, a New Zealand website, uses the term proudly, stating that it is "green on the outside and liberal on the inside", while also citing "socialist political leanings", reflecting the use of the term "liberal" to describe the political left in many English-speaking countries. Red Greens are often considered "fundies" or "fundamentalist greens", a term usually associated with deep ecology, even though the German Green Party "fundi" faction included eco-socialists, and eco-socialists in other Green Parties, like Derek Wall, have been described in the press as fundies. Eco-socialists also criticise bureaucratic and elite theories of self-described socialism such as Maoism, Stalinism and what other critics have termed bureaucratic collectivism or state capitalism. Instead, eco-socialists focus on imbuing socialism with ecology while keeping the emancipatory goals of "first-epoch" socialism. Eco-socialists aim for communal ownership of the means of production by "freely associated producers", with all forms of domination eclipsed, especially gender inequality and racism.
This often includes the restoration of commons land in opposition to private property, in which local control of resources valorizes the Marxist concept of use value above exchange value. Practically, eco-socialists have generated various strategies to mobilise action on an internationalist basis, developing networks of grassroots individuals and groups that can radically transform society through nonviolent "prefigurative projects" for a post-capitalist, post-statist world. History 1880s–1930s Contrary to the depiction of Karl Marx by some environmentalists, social ecologists and fellow socialists as a productivist who favoured the domination of nature, eco-socialists have revisited Marx's writings and believe that he "was a main originator of the ecological world-view". Eco-socialist authors, like John Bellamy Foster and Paul Burkett, point to Marx's discussion of a "metabolic rift" between man and nature, his statement that "private ownership of the globe by single individuals will appear quite absurd as private ownership of one man by another" and his observation that a society must "hand it [the planet] down to succeeding generations in an improved condition". Nonetheless, other eco-socialists feel that Marx overlooked a "recognition of nature in and for itself", ignoring its "receptivity" and treating nature as "subjected to labor from the start" in an "essentially active relationship". William Morris, the English novelist, poet and designer, is largely credited with developing key principles of what was later called eco-socialism. During the 1880s and 1890s, Morris promoted his eco-socialist ideas within the Social Democratic Federation and the Socialist League. Following the Russian Revolution, some environmentalists and environmental scientists attempted to integrate ecological consciousness into Bolshevism, although many such people were later purged from the Communist Party of the Soviet Union. The "pre-revolutionary environmental movement", encouraged by the revolutionary scientist Aleksandr Bogdanov and the Proletkul't organisation, made efforts to "integrate production with natural laws and limits" in the first decade of Soviet rule, before Joseph Stalin attacked ecologists and the science of ecology, and the Soviet Union fell into the pseudo-science of the state biologist Trofim Lysenko, who "set about to rearrange the Russian map" in ignorance of environmental limits. 1950s–1960s Social ecology is closely related to the work and ideas of Murray Bookchin and influenced by the anarchist Peter Kropotkin. Social ecologists assert that the present ecological crisis has its roots in human social problems, and that the domination of human-over-nature stems from the domination of human-over-human. In 1958, Murray Bookchin defined himself as an anarchist, seeing parallels between anarchism and ecology. His first book, Our Synthetic Environment, was published under the pseudonym Lewis Herber in 1962, a few months before Rachel Carson's Silent Spring. The book described a broad range of environmental ills but received little attention because of its political radicalism. His groundbreaking essay "Ecology and Revolutionary Thought" introduced ecology as a concept in radical politics. In 1968 he founded a group that published the influential Anarchos magazine, which carried that essay and other innovative essays on post-scarcity and on ecological technologies such as solar and wind energy, and on decentralization and miniaturization.
Lecturing throughout the United States, he helped popularize the concept of ecology to the counterculture. Post-Scarcity Anarchism is a collection of essays written by Murray Bookchin and first published in 1971 by Ramparts Press. It outlines the possible form anarchism might take under conditions of post-scarcity. It is one of Bookchin's major works, and its radical thesis provoked controversy for being utopian and messianic in its faith in the liberatory potential of technology. Bookchin argues that post-industrial societies are also post-scarcity societies, and can thus imagine "the fulfillment of the social and cultural potentialities latent in a technology of abundance". The self-administration of society is now made possible by technological advancement and, when technology is used in an ecologically sensitive manner, the revolutionary potential of society will be much changed. In 1982, his book The Ecology of Freedom had a profound impact on the emerging ecology movement, both in the United States and abroad. He was a principal figure in the Burlington Greens in 1986–1990, an ecology group that ran candidates for city council on a program to create neighborhood democracy. Bookchin later developed a political philosophy to complement social ecology which he called "Communalism" (spelled with a capital "C" to differentiate it from other forms of communalism). While Communalism was originally conceived as a form of social anarchism, Bookchin later developed it into a separate ideology that incorporates what he saw as the most beneficial elements of anarchism, Marxism, syndicalism, and radical ecology. Politically, Communalists advocate a network of directly democratic citizens' assemblies in individual communities/cities organized in a confederal fashion. The method used to achieve this is called libertarian municipalism, which involves the establishment of face-to-face democratic institutions that are to grow and expand confederally with the goal of eventually replacing the nation-state. 1970s–1990s In the 1970s, Barry Commoner, suggesting a left-wing response to The Limits to Growth model that predicted catastrophic resource depletion and spurred environmentalism, postulated that capitalist technologies were chiefly responsible for environmental degradation, as opposed to population pressures. East German dissident writer and activist Rudolf Bahro published two books addressing the relationship between socialism and ecology – The Alternative in Eastern Europe and Socialism and Survival – which promoted a 'new party' and led to his arrest, for which he gained international notoriety. At around the same time, Alan Roberts, an Australian Marxist, posited that people's unfulfilled needs fuelled consumerism. Fellow Australian Ted Trainer further called upon socialists to develop a system that met human needs, in contrast to the capitalist system of created wants. A key development in the 1980s was the creation of the journal Capitalism, Nature, Socialism (CNS), with James O'Connor as founding editor and the first issue in 1988. The debates that ensued led to a host of theoretical works by O'Connor, Carolyn Merchant, Paul Burkett and others. The Australian Democratic Socialist Party launched the Green Left Weekly newspaper in 1991, following a period of working within Green Alliance and Green Party groups in formation. This ceased when the Australian Greens adopted a policy of proscription of other political groups in August 1991.
The DSP also published a comprehensive policy resolution, "Socialism and Human Survival", in book form in 1990, with an expanded second edition in 1999 entitled "Environment, Capitalism & Socialism". 1990s onwards The 1990s saw the socialist feminists Mary Mellor and Ariel Salleh address environmental issues within an eco-socialist paradigm. With the rising profile of the anti-globalization movement in the Global South, an environmentalism of the poor, combining ecological awareness and social justice, has also become prominent. David Pepper also released his important work, Ecosocialism: From Deep Ecology to Social Justice, in 1994, which critiques the current approach of many within Green politics, particularly deep ecologists. In 2001, Joel Kovel, a social scientist, psychiatrist and former candidate for the Green Party of the United States (GPUS) presidential nomination in 2000, and Michael Löwy, an anthropologist and member of the Reunified Fourth International, released "An Ecosocialist Manifesto", which has been adopted by some organisations and suggests possible routes for the growth of eco-socialist consciousness. Kovel's 2002 work, The Enemy of Nature: The End of Capitalism or the End of the World?, is considered by many to be the most up-to-date exposition of eco-socialist thought. In October 2007, the International Ecosocialist Network was founded in Paris. Influence on current green and socialist movements Currently, many Green Parties around the world, such as the Dutch Green Left Party (GroenLinks), contain strong eco-socialist elements. Radical red-green alliances have been formed in many countries by eco-socialists, radical Greens and other radical left groups. In Denmark, the Red-Green Alliance was formed as a coalition of numerous radical parties. Within the European Parliament, a number of far-left parties from Northern Europe have organized themselves into the Nordic Green Left Alliance. Red Greens feature heavily in the Green Party of Saskatchewan (in Canada, but not necessarily affiliated to the Green Party of Canada). In 2016, GPUS officially adopted eco-socialist ideology within the party. The Green Party of England and Wales has an eco-socialist group, Green Left, founded in June 2006. Members of the Green Party holding a number of influential positions, including former Principal Speakers Siân Berry and Derek Wall, as well as prominent Green Party candidate and human rights activist Peter Tatchell, have been associated with the grouping. Many Marxist organisations also contain eco-socialists, as evidenced by Löwy's involvement in the reunified Fourth International and Socialist Resistance, a British Marxist newspaper that reports on eco-socialist issues and has published two collections of essays on eco-socialist thought: Ecosocialism or Barbarism?, edited by Jane Kelly and Sheila Malone, and The Global Fight for Climate Justice, edited by Ian Angus with a foreword by Derek Wall. Influence on existing socialist regimes Eco-socialism has had a minor influence over developments in the environmental policies of what can be called "existing socialist" regimes, notably the People's Republic of China. Pan Yue, deputy director of the PRC's State Environmental Protection Administration, has acknowledged the influence of eco-socialist theory on his championing of environmentalism within China, which has gained him international acclaim (including being nominated for the Person of the Year Award 2006 by The New Statesman, a British current affairs magazine).
Yue stated in an interview that, while he often finds eco-socialist theory "too idealistic" and lacking "ways of solving actual problems", he believes that it provides "political reference for China's scientific view of development", "gives socialist ideology room to expand" and offers "a theoretical basis for the establishment of fair international rules" on the environment. He echoes much of eco-socialist thought, attacking international "environmental inequality", refusing to focus on technological fixes and arguing for the construction of "a harmonious, resource-saving and environmentally-friendly society". He also shows a knowledge of eco-socialist history, from the convergence of radical green politics and socialism and their political "red-green alliances" in the post-Soviet era. This focus on eco-socialism informed the essay On Socialist Ecological Civilisation, published in September 2006, which according to China Dialogue "sparked debate" in China. The current Constitution of Bolivia, promulgated in 2009, is the first constitution in the world that is both ecological and pro-socialist, making the Bolivian state officially ecosocialist. International organizations In 2007, the biologist David Schwartzman identified the necessity of building a transnational ecosocialist movement as one of the critical challenges facing ecosocialists. Later in 2007, it was announced that attempts to form an Ecosocialist International Network (EIN) would be made, and an inaugural meeting of the International occurred on 7 October 2007 in Paris. The meeting attracted "more than 60 activists from Argentina, Australia, Belgium, Brazil, Canada, Cyprus, Denmark, France, Greece, Italy, Switzerland, United Kingdom, and the United States" and elected a steering committee featuring representatives from Britain, the United States, Canada, France, Greece, Argentina, Brazil and Australia, including Joel Kovel, Michael Löwy, Derek Wall, Ian Angus (editor of Climate and Capitalism in Canada) and Ariel Salleh. The Committee states that it wants "to incorporate members from China, India, Africa, Oceania and Eastern Europe". EIN held its second international conference in January 2009, in association with the next World Social Forum in Brazil. The conference released The Belem Ecosocialist Declaration. International networking by eco-socialists has already been seen in the Praxis Research and Education Center, a group of international researchers and activists. Based in Moscow and established in 1997, Praxis – as well as publishing books "by libertarian socialists, Marxist humanists, anarchists, [and] syndicalists", running the Victor Serge Library and opposing war in Chechnya – states that it believes "that capitalism has brought life on the planet near to the brink of catastrophe, and that a form of ecosocialism needs to emerge to replace capitalism before it is too late". Critique of capitalist expansion and globalization Merging aspects of Marxism, socialism, environmentalism and ecology, eco-socialists generally believe that the capitalist system is the cause of social exclusion, inequality and environmental degradation through globalization and imperialism under the supervision of repressive states and transnational structures. In the "Ecosocialist Manifesto" (2001), Joel Kovel and Michael Löwy suggest that capitalist expansion causes "crises of ecology" through the "rampant industrialization" and "societal breakdown" that springs "from the form of imperialism known as globalization".
They believe that capitalism's expansion "exposes ecosystems" to pollutants, habitat destruction and resource depletion, "reducing the sensuous vitality of nature to the cold exchangeability required for the accumulation of capital", while submerging "the majority of the world's people to a mere reservoir of labor power" as it penetrates communities through "consumerism and depoliticization". Other eco-socialists, like Derek Wall, highlight how, in the Global South, free-market capitalism structures economies to produce export-geared crops that take water from traditional subsistence farms, increasing hunger and the likelihood of famine; furthermore, forests are increasingly cleared and enclosed to produce cash crops that separate people from their local means of production and aggravate poverty. Wall shows that many of the world's poor have access to the means of production through "non-monetised communal means of production", such as subsistence farming, but, despite providing for need and a level of prosperity, these are not included in conventional economic measures, like GNP. Wall therefore views neo-liberal globalization as "part of the long struggle of the state and commercial interests to steal from those who subsist" by removing "access to the resources that sustain ordinary people across the globe". Furthermore, Kovel sees neoliberalism as "a return to the pure logic of capital" that "has effectively swept away measures which had inhibited capital's aggressivity, replacing them with naked exploitation of humanity and nature." For Kovel, this "tearing down of boundaries and limits to accumulation is known as globalization", which was "a deliberate response to a serious accumulation crisis (in the 1970s) that had convinced the leaders of the global economy to install what we know as neoliberalism." Furthermore, Ramachandra Guha and Joan Martinez Alier blame globalization for creating increased levels of waste and pollution, and then dumping the waste on the most vulnerable in society, particularly those in the Global South. Others have also noted that capitalism disproportionately affects the poorest in the Global North as well, leading to examples of resistance such as the environmental justice movement in the United States, consisting of working-class people and ethnic minorities who highlight the tendency for waste dumps, major road projects and incinerators to be constructed around socially excluded areas. However, as Wall highlights, such campaigns are often ignored or persecuted precisely because they originate among the most marginalized in society: the African-American radical green religious group MOVE, campaigning for ecological revolution and animal rights from Philadelphia, had many members imprisoned or even killed by US authorities from the 1970s onwards. Eco-socialism disagrees with the elite theories of capitalism, which tend to label a specific class or social group as conspirators who construct a system that satisfies their greed and personal desires. Instead, eco-socialists suggest that the very system itself is self-perpetuating, fuelled by "extra-human" or "impersonal" forces. Kovel uses the Bhopal industrial disaster as an example. Many anti-corporate observers would blame the avarice of those at the top of many multi-national corporations, such as the Union Carbide Corporation in Bhopal, for seemingly isolated industrial accidents.
Conversely, Kovel suggests that Union Carbide was experiencing a decrease in sales that led to falling profits, which, due to stock market conditions, translated into a drop in share values. The depreciation of share value made many shareholders sell their stock, weakening the company and leading to cost-cutting measures that eroded the safety procedures and mechanisms at the Bhopal site. Though this did not, in Kovel's mind, make the Bhopal disaster inevitable, he believes that it illustrates the effect market forces can have on increasing the likelihood of ecological and social problems.
Use and exchange value
Eco-socialism focuses closely on Marx's theories about the contradiction between use values and exchange values. Kovel posits that, within a market, goods are not produced to meet needs but are produced to be exchanged for money that we then use to acquire other goods; as we have to keep selling in order to keep buying, we must persuade others to buy our goods just to ensure our survival, which leads to the production of goods with no previous use that can be sold to sustain our ability to buy other goods. Such goods, in an eco-socialist analysis, produce exchange values but have no use value. Eco-socialists like Kovel stress that this contradiction has reached a destructive extent, where certain essential activities such as caring for relatives full-time and basic subsistence are unrewarded, while unnecessary commodities earn individuals huge fortunes and fuel consumerism and resource depletion.
"Second contradiction" of capitalism
James O'Connor argues for a "second contradiction" of underproduction, to complement Marx's "first" contradiction of capital and labor. While the second contradiction is often considered a theory of environmental degradation, O'Connor's theory in fact goes much further. Building on the work of Karl Polanyi, along with Marx, O'Connor argues that capitalism necessarily undermines the "conditions of production" necessary to sustain the endless accumulation of capital. These conditions of production include soil, water, energy, and so forth. But they also include an adequate public education system, transportation infrastructures, and other services that are not produced directly by capital, but which capital needs in order to accumulate effectively. As the conditions of production are exhausted, the costs of production for capital increase. For this reason, the second contradiction generates an underproduction crisis tendency, with the rising cost of inputs and labor, to complement the overproduction tendency of too many commodities for too few customers. Like Marx's contradiction of capital and labor, the second contradiction therefore threatens the system's existence. In addition, O'Connor believes that, in order to remedy environmental contradictions, the capitalist system innovates new technologies that overcome existing problems but introduce new ones. O'Connor cites nuclear power as an example, which he sees as a form of producing energy that is advertised as an alternative to carbon-intensive, non-renewable fossil fuels, but creates long-term radioactive waste and other dangers to health and security. While O'Connor believes that capitalism is capable of spreading out its economic supports so widely that it can afford to destroy one ecosystem before moving onto another, he and many other eco-socialists now fear that, with the onset of globalization, the system is running out of new ecosystems.
Kovel adds that capitalist firms have to continue to extract profit through a combination of intensive or extensive exploitation and selling to new markets, meaning that capitalism must grow indefinitely to exist, which he thinks is impossible on a planet of finite resources.
Role of the state and transnational organizations
Capitalist expansion is seen by eco-socialists as being "hand in glove" with "corrupt and subservient client states" that repress dissent against the system, governed by international organisations "under the overall supervision of the western powers and the superpower United States", which subordinate peripheral nations economically and militarily. Kovel further claims that capitalism itself spurs conflict and, ultimately, war. Kovel states that the 'War on Terror', between Islamist extremists and the United States, is caused by "oil imperialism", whereby the capitalist nations require control over sources of energy, especially oil, which are necessary to continue intensive industrial growth; in the quest for control of such resources, Kovel argues that the capitalist nations, specifically the United States, have come into conflict with the predominantly Muslim nations where oil is often found.
Eco-socialists believe that state or self-regulation of markets does not solve the crisis "because to do so requires setting limits upon accumulation", which is "unacceptable" for a growth-orientated system; they believe that terrorism and revolutionary impulses cannot be tackled properly "because to do so would mean abandoning the logic of empire". Instead, eco-socialists feel that increasingly repressive counter-terrorism increases alienation and causes further terrorism, and they believe that state counter-terrorist methods are, in Kovel and Löwy's words, "evolving into a new and malignant variation of fascism". They echo Rosa Luxemburg's "stark choice" between "socialism or barbarism", which was believed to be a prediction of the coming of fascism and further forms of destructive capitalism at the beginning of the twentieth century (Luxemburg was in fact murdered by the proto-fascist Freikorps in the revolutionary atmosphere of Germany in 1919), with some now declaring that the choice is "ecosocialism or ecofascism".
Tensions within the eco-socialist discourse
Reflecting tensions within the environmental and socialist movements, there is some conflict of ideas. However, in practice a synthesis is emerging which calls for democratic regulation of industry in the interests of people and the environment, nationalisation of some key environmental industries, local democracy and an extension of co-ops and the library principle. For example, the Scottish Green Peter McColl argues that elected governments should abolish poverty through a citizens' income scheme, regulate against social and environmental malpractice and encourage environmental good practice through state procurement. At the same time, economic and political power should be devolved as far as is possible through co-operatives and increased local decision making. By putting political and economic power into the hands of the people most likely to be affected by environmental injustice, it is less likely that the injustice will take place.
Critique of other forms of green politics
Eco-socialists criticise many within the Green movement for not being overtly anti-capitalist, for working within the existing capitalist, statist system, for voluntarism, or for reliance on technological fixes.
The eco-socialist ideology is based on a critique of other forms of Green politics, including various forms of green economics, localism, deep ecology, bioregionalism and even some manifestations of radical green ideologies such as eco-feminism and social ecology. As Kovel puts it, eco-socialism differs from Green politics at the most fundamental level because the 'Four Pillars' of Green politics (and the 'Ten Key Values' of the US Green Party) do not include the demand for the emancipation of labour and the end of the separation between producers and the means of production. Many eco-socialists also oppose Malthusianism and are alarmed by the gulf between Green politics in the Global North and the Global South.
Opposition to reformism and technologism
Eco-socialists are highly critical of those Greens who favour "working within the system". While eco-socialists like Kovel recognise that within-system approaches can raise awareness, and while Kovel himself believes that "the struggle for an ecologically rational world must include a struggle for the state", he argues that the mainstream Green movement is too easily co-opted by the current powerful socio-political forces as it "passes from citizen-based activism to ponderous bureaucracies scuffling for 'a seat at the table'". For Kovel, capitalism is "happy to enlist" the Green movement for "convenience", "control over popular dissent" and "rationalization". He further attacks within-system green initiatives like carbon trading, which he sees as a "capitalist shell game" that turns pollution "into a fresh source of profit". Brian Tokar has further criticised carbon trading in this way, suggesting that it augments existing class inequality and gives the "largest 'players' ... substantial control over the whole 'game'".
In addition, Kovel criticises the "defeatism" of voluntarism in some local forms of environmentalism that do not connect: he suggests that they can be "drawn off into individualism" or co-opted to the demands of capitalism, as in the case of certain recycling projects, where citizens are "induced to provide free labor" to waste management industries who are involved in the "capitalization of nature". He labels the notion of voluntarism "ecopolitics without struggle".
Technological fixes to ecological problems are also rejected by eco-socialists. Saral Sarkar has updated the 1970s 'limits to growth' thesis to exemplify the limits of new capitalist technologies such as hydrogen fuel cells, which require large amounts of energy to split molecules to obtain hydrogen. Furthermore, Kovel notes that "events in nature are reciprocal and multi-determined" and can therefore not be predictably "fixed"; socially, technologies cannot solve social problems because they are not "mechanical". He posits an eco-socialist analysis, developed from Marx, that patterns of production and social organisation are more important than the forms of technology used within a given configuration of society. Under capitalism, he suggests that technology "has been the sine qua non of growth"; thus he believes that even in a world with hypothetical "free energy" the effect would be to lower the cost of automobile production, leading to the massive overproduction of vehicles, "collapsing infrastructure", chronic resource depletion and the "paving over" of the "remainder of nature".
In the modern world, Kovel considers the supposed efficiency of new post-industrial commodities to be a "plain illusion", as miniaturized components involve many substances and are therefore non-recyclable (and, theoretically, only simple substances could be retrieved by burning out-of-date equipment, releasing more pollutants). He is quick to warn "environmental liberals" against over-selling the virtues of renewable energies that cannot meet the mass energy consumption of the era; although he would still support renewable energy projects, he believes it is more important to restructure societies to reduce energy use before relying on renewable energy technologies alone.
Critique of green economics
Eco-socialists have based their ideas for political strategy on a critique of several different trends in green economics. At the most fundamental level, eco-socialists reject what Kovel calls "ecological economics" or the "ecological wing of mainstream economics" for being "uninterested in social transformation". He further rejects the Neo-Smithian school, who believe in Adam Smith's vision of "a capitalism of small producers, freely exchanging with each other", which is self-regulating and competitive. The school is represented by thinkers like David Korten, who believe in "regulated markets" checked by government and civil society; for Kovel, however, they do not provide a critique of the expansive nature of capitalism, which moves away from localised production, and they ignore "questions of class, gender or any other category of domination". Kovel also criticises their "fairy-tale" view of history, which refers to the abuse of "natural capital" by the materialism of the Scientific Revolution, an assumption that, in Kovel's eyes, seems to suggest that "nature had toiled to put the gift of capital into human hands", rather than capitalism being a product of social relations in human history.
Other forms of community-based economics are also rejected by eco-socialists such as Kovel, including followers of E. F. Schumacher and some members of the cooperative movement, for advocating "no more than a very halting and isolated first step". He thinks that their principles are "only partially realizable within the institutions of cooperatives in capitalist society" because "the internal cooperation" of cooperatives is "forever hemmed in and compromised" by the need to expand value and compete within the market. Marx also believed that cooperatives within capitalism make workers into "their own capitalist ... by enabling them to use the means of production for the employment of their own labour". For Kovel and other eco-socialists, community-based economics and Green localism are "a fantasy" because "strict localism belongs to the aboriginal stages of society" and would be an "ecological nightmare at present population levels" due to "heat losses from a multitude of dispersed sites, the squandering of scarce resources, the needless reproduction of effort, and cultural impoverishment". While he feels that small-scale production units are "an essential part of the path towards an ecological society", he sees them not as "an end in itself"; in his view, small enterprises can be either capitalist or socialist in their configuration and therefore must be "consistently anti-capitalist", through recognition and support of the emancipation of labour, and exist "in a dialectic with the whole of things", as human society will need large-scale projects, such as transport infrastructures.
He highlights the work of the steady-state theorist Herman Daly, who exemplifies what eco-socialists see as the good and bad points of ecological economics: while Daly offers a critique of capitalism and a desire for "workers' ownership", he only believes in workers' ownership "kept firmly within a capitalist market", ignoring the eco-socialist desire for struggle in the emancipation of labour and hoping that the interests of labour and management today can be improved so that they are "in harmony".
Critique of deep ecology
Despite the inclusion of both in political factions like the fundies of the German Green Party, eco-socialists and deep ecologists hold markedly opposite views. Eco-socialists like Kovel have attacked deep ecology because, like other forms of Green politics and green economics, it features "virtuous souls" who have "no internal connection with the critique of capitalism and the emancipation of labor". Kovel is particularly scathing about deep ecology and its "fatuous pronouncement" that Green politics is "neither left nor right, but ahead", which for him ignores the notion that "that which does not confront the system becomes its instrument". Even more pointedly, Kovel suggests that in "its effort to decentre humanity within nature", deep ecologists can "go too far" and argue for the "splitting away of unwanted people", as evidenced by their desire to preserve wilderness by removing the groups that have lived there "from time immemorial". Kovel thinks that this lends legitimacy to "capitalist elites", like the United States State Department and the World Bank, who can make preservation of wilderness a part of their projects that "have added value as sites for ecotourism" but remove people from their land. Kovel notes that over three million people were displaced by "conservation projects" between 1986 and 1996; in the making of the national parks of the United States, three hundred Shoshone Indians were killed in the development of Yosemite. Kovel believes that deep ecology has affected the rest of the Green movement and led to calls for restrictions on immigration, "often allying with reactionaries in a ... cryptically racist quest". Indeed, he finds traces of deep ecology in the "biological reduction" of Nazism, an ideology many "organicist thinkers" have found appealing, including Herbert Gruhl, a founder of the German Green Party (who subsequently left when it became more left-wing) and the originator of the phrase "neither left nor right, but ahead". Kovel warns that, while 'ecofascism' is confined to a narrow band of far-right intellectuals and "disaffected white power skinheads" who involved themselves alongside far-left groups in the anti-globalization movement, it may be "imposed as a revolution from above to install an authoritarian regime in order to preserve the main workings of the system" in times of crisis.
Critique of bioregionalism
Bioregionalism, a philosophy developed by writers like Kirkpatrick Sale who believe in the self-sufficiency of "appropriate bioregional boundaries" drawn up by inhabitants of "an area", has been thoroughly critiqued by Kovel, who fears that the "vagueness" of the area will lead to conflict and further boundaries between communities.
While Sale cites the bioregional living of Native Americans, Kovel notes that such ideas are impossible to translate to populations of modern proportions, and points out that Native Americans held land in common rather than as private property – thus, for eco-socialists, bioregionalism provides no understanding of what is needed to transform society, nor of what the inevitable "response of the capitalist state" would be to people constructing bioregionalism. Kovel also attacks the problems of self-sufficiency. Where Sale believes in self-sufficient regions "each developing the energy of its peculiar ecology", such as "wood in the northwest [US]", Kovel asks "how on earth" these can be made sufficient for regional needs, and notes the environmental damage of converting Seattle into a "forest-destroying and smoke-spewing wood-burning" city. Kovel also questions Sale's insistence that bioregions should "not require connections with the outside, but within strict limits", and whether this precludes journeys to visit family members and other forms of travel.
Critique of variants of eco-feminism
Like many variants of socialism and Green politics, eco-socialists recognise the importance of "the gendered bifurcation of nature" and support the emancipation of gender as it "is at the root of patriarchy and class". Nevertheless, while Kovel believes that "any path out of capitalism must also be eco-feminist", he criticises types of ecofeminism that are not anti-capitalist and that can "essentialize women's closeness to nature and build from there, submerging history into nature", becoming more at home in the "comforts of the New Age Growth Centre". These limitations, for Kovel, "keep ecofeminism from becoming a coherent social movement".
Critique of social ecology
While having much in common with the radical tradition of social ecology, eco-socialists still see themselves as distinct. Kovel believes this is because social ecologists see hierarchy "in-itself" as the cause of ecological destruction, whereas eco-socialists focus on the gender and class domination embodied in capitalism and recognise that forms of authority that are not "an expropriation of human power for ... self-aggrandizement", such as a student-teacher relationship that is "reciprocal and mutual", are beneficial. In practice, Kovel describes social ecology as continuing the anarchist tradition of non-violent direct action, which is "necessary" but "not sufficient" because "it leaves unspoken the question of building an ecological society beyond capital". Furthermore, social ecologists and anarchists tend to focus on the state alone, rather than on the class relations behind state domination (in the view of Marxists). Kovel attributes this partly to politics, springing from anarchists' historical hostility to Marxism, and partly to sectarianism, which he identifies as a fault of the "brilliant" but "dogmatic" founder of social ecology, Murray Bookchin.
Opposition to Malthusianism and neo-Malthusianism
While Malthusianism and eco-socialism overlap within the Green movement because both address over-industrialism, and despite the fact that eco-socialists, like many within the Green movement, are described as neo-Malthusian because of their criticism of economic growth, eco-socialists are opposed to Malthusianism.
This divergence stems from the difference between Marxist and Malthusian examinations of social injustice – whereas Marx blamed inequality on class injustice, Malthus argued that the working class remained poor because of their greater fertility and birth rates. Neo-Malthusians have slightly modified this analysis by increasing their focus on overconsumption; nonetheless, eco-socialists find this attention inadequate. They point to the fact that Malthus did not thoroughly examine ecology and that Garrett Hardin, a key neo-Malthusian, suggested that further enclosed and privatised land, as opposed to commons, would solve the chief environmental problem, which Hardin labeled the 'tragedy of the commons'.
"Two varieties of environmentalism"
Joan Martinez-Alier and Ramachandra Guha attack the gulf between what they see as the two "varieties of environmentalism" – the environmentalism of the North, an aesthetic environmentalism that is the privilege of wealthy people who no longer have basic material concerns, and the environmentalism of the South, where people's local environment is a source of communal wealth and such issues are a question of survival. Nonetheless, other eco-socialists, such as Wall, have also pointed out that capitalism disproportionately affects the poorest in the Global North as well, leading to examples of resistance such as the environmental justice movement in the US and groups like MOVE.
Critique of other forms of socialism
Eco-socialists choose to use the term "socialist", despite "the failings of its twentieth century interpretations", because it "still stands for the supersession of capital" and thus "the name, and the reality" must "become adequate for this time". Eco-socialists have nonetheless often diverged from other Marxist movements. Eco-socialism has also been partly influenced by and associated with agrarian socialism as well as some forms of Christian socialism, especially in the United States.
Critique of socialist states
While many see socialism as necessary to respond to the environmental challenges brought about by capitalism, and once saw hope in the Soviet Union and other such socialist states as providing an environmental path forward, others have critiqued the history and policies of such states for their lack of environmental planning and policy. For Kovel and Michael Löwy, eco-socialism is "the realization of the 'first-epoch' socialisms" by resurrecting the notion of "free development of all producers", distancing itself from "the attenuated, reformist aims of social democracy and the productivist structures of the bureaucratic variations of socialism", such as forms of Leninism and Stalinism. They ground the failure of past socialist movements in "underdevelopment in the context of hostility by existing capitalist powers", which led to "the denial of internal democracy" and "emulation of capitalist productivism". Kovel believes that the forms of 'actually existing socialism' consisted of "public ownership of the means of production", rather than meeting "the true definition" of socialism as "a free association of producers", with the Party-State bureaucracy acting as the alienating substitute "public". In analysing the Russian Revolution, Kovel feels that "conspiratorial" revolutionary movements "cut off from the development of society" will "find society an inert mass requiring leadership from above".
From this, he notes that the anti-democratic Tsarist heritage meant that the Bolsheviks, who were aided into power by World War One, were a minority who, when faced with a counter-revolution and invading Western powers, continued "the extraordinary needs of 'war communism'", which "put the seal of authoritarianism" on the revolution; thus, for Kovel, Lenin and Trotsky "resorted to terror", shut down the Soviets (workers' councils) and emulated "capitalist efficiency and productivism as a means of survival", setting the stage for Stalinism. In Kovel's eyes, Lenin came to oppose the nascent Bolshevik environmentalism and its champion Aleksandr Bogdanov, who was later attacked for "idealism"; Kovel describes Lenin's philosophy as "a sharply dualistic materialism, rather similar to the Cartesian separation of matter and consciousness, and perfectly tooled ... to the active working over of the dead, dull matter by the human hand", which led him to want to overcome Russian backwardness through rapid industrialization. This tendency was, according to Kovel, augmented by a desire to catch up with the West and by the "severe crisis" of the revolution's first years. Furthermore, Kovel quotes Trotsky, who believed in a Communist "superman" who would "learn how to move rivers and mountains". Kovel believes that, in Stalin's "revolution from above" and the mass terror in response to the early 1930s economic crisis, Trotsky's writings "were given official imprimatur", despite the fact that Trotsky himself was eventually purged, as Stalinism attacked "the very notion of ecology... in addition to ecologies". Kovel adds that Stalin "would win the gold medal for enmity to nature", and that, in the face of massive environmental degradation, the inflexible Soviet bureaucracy became increasingly inefficient and unable to emulate capitalist accumulation, leading to a "vicious cycle" that led to its collapse.
Critique of the wider socialist movement
Beyond the forms of "actually existing socialism", Kovel criticises socialists in general for treating ecology "as an afterthought" and holding "a naive faith in the ecological capacities of a working-class defined by generations of capitalist production". He cites David McNally, who advocates increasing consumption levels under socialism, which, for Kovel, contradicts any notion of natural limits. He also criticises McNally's belief in releasing the "positive side of capital's self-expansion" after the emancipation of labor; instead, Kovel argues that a socialist society would "seek not to become larger" but would rather become "more realized", choosing sufficiency and eschewing economic growth. Kovel further adds that the socialist movement was historically conditioned by its origins in the era of industrialization, so that, when modern socialists like McNally advocate a socialism that "cannot be at the expense of the range of human satisfaction", they fail "to recognize that these satisfactions can be problematic with respect to nature when they have been historically shaped by the domination of nature".
Eco-socialist strategy
Eco-socialists generally advocate the non-violent dismantling of capitalism and the state, focusing on collective ownership of the means of production by freely associated producers and the restoration of the commons.
To get to an eco-socialist society, eco-socialists advocate working-class anti-capitalist resistance, but they also believe that there is potential for agency in autonomous, grassroots individuals and groups across the world who can build "prefigurative" projects for non-violent radical social change. These prefigurative steps go "beyond the market and the state" and base production on the enhancement of use values, leading to the internationalization of resistance communities in an 'Eco-socialist Party' or network of grassroots groups focused on non-violent, radical social transformation. An 'Eco-socialist revolution' is then carried out.
Agency
Many eco-socialists, like Alan Roberts, have encouraged working-class action and resistance, such as the 'green ban' movement in which workers refuse to participate in projects that are ecologically harmful. Similarly, Kovel and Hans A. Baer focus on working-class involvement in the formation of new eco-socialist parties or increased working-class involvement in existing Green Parties; however, Kovel believes that, unlike in many other forms of socialist analysis, "there is no privileged agent" or revolutionary class, and that there is potential for agency in numerous autonomous, grassroots individuals and groups who can build "prefigurative" projects for non-violent radical social change. He defines "prefiguration" as "the potential for the given to contain the lineaments of what is to be", meaning that "a moment toward the future exists embedded in every point of the social organism where a need arises". If "everything has prefigurative potential", Kovel notes that forms of potential ecological production will be "scattered", and thus suggests that "the task is to free them and connect them". While all "human ecosystems" have "ecosocialist potential", Kovel points out that some, such as the World Bank, have low potential, whereas internally democratic anti-globalization "affinity groups" have a high potential through a dialectic that involves the "active bringing and holding together of negations", such as a group acting as an alternative institution ("production of an ecological/socialist alternative") while trying to shut down a G8 summit meeting ("resistance to capital"). Therefore, "practices that in the same motion enhance use-values and diminish exchange-values are the ideal" for eco-socialists.
Prefiguration
For Kovel, the main prefigurative steps "are that people ruthlessly criticize the capitalist system... and that they include in this a consistent attack on the widespread belief that there can be no alternative to it", which will then "delegitimate the system and release people into struggle". Kovel justifies this by stating that "radical criticism of the given... can be a material force", even without an alternative, "because it can seize the mind of the masses of people", leading to "dynamic" and "exponential", rather than "incremental" and "linear", victories that spread rapidly. Following this, he advocates expanding the dialectical eco-socialist potential of groups by sustaining the confrontation and internal cohesion of human ecosystems, leading to an "activation" of potentials in others that will "spread across the whole social field" as "a new set of orienting principles" that define an ideology or 'party-life' formation. In the short term, eco-socialists like Kovel advocate activities that have the "promise of breaking down the commodity form".
This includes organizing labor, which is a "reconfiguring of the use-value of labor power"; forming cooperatives, allowing "a relatively free association of labor"; forming localised currencies, which he sees as "undercutting the value-basis of money"; and supporting "radical media" that, in his eyes, involve an "undoing of the fetishism of commodities". Arran Gare, Wall and Kovel have advocated economic localisation in the same vein as many in the Green movement, although they stress that it must be a prefigurative step rather than an end in itself. Kovel also advises political parties attempting to "democratize the state" that there should be "dialogue but no compromise" with established political parties, and that there must be "a continual association of electoral work with movement work" to avoid "being sucked back into the system". Such parties, he believes, should focus on "the local rungs of the political system" first, before running national campaigns that "challenge the existing system by the elementary means of exposing its broken promises". These views on party action have been supported by other eco-socialists.
Kovel believes in building prefigurations around forms of production based on use values, which will provide a practical vision of a post-capitalist, post-statist system. Such projects include Indymedia ("a democratic rendering of the use-values of new technologies such as the Internet, and a continual involvement in wider struggle"), open-source software, Wikipedia, public libraries and many other initiatives, especially those developed within the anti-globalization movement. These strategies, in Wall's words, "go beyond the market and the state" by rejecting the supposed dichotomy between private enterprise and state-owned production, while also rejecting any combination of the two through a mixed economy. He states that these present forms of "amphibious politics", which are "half in the dirty water of the present but seeking to move on to a new, unexplored territory". Löwy also emphasises acting with a post-statist outlook, saying that eco-socialists should take inspiration from Marx's commentary on the Paris Commune.
Wall suggests that open-source software, for example, opens up "a new form of commons regime in cyberspace", which he praises as production "for the pleasure of invention" that gives "access to resources without exchange". He believes that open source has "bypassed" both the market and the state, and could provide "developing countries with free access to vital computer software". Furthermore, he suggests that an "open source economy" means that "the barrier between user and provider is eroded", allowing for "cooperative creativity". He links this to Marxism and the notion of usufruct, asserting that "Marx would have been a Firefox user".
Internationalization of prefiguration and the eco-socialist party
Many eco-socialists have noted that building such projects is easier for media workers than for those in heavy industry because of the decline in trade unionism and the globalized division of labor, which divides workers. Kovel posits that class struggle is "internationalized in the face of globalization", as evidenced by a wave of strikes across the Global South in the first half of the year 2000; indeed, he says that "labor's most cherished values are already immanently ecocentric".
Kovel therefore thinks that these universalizing tendencies must lead to the formation of a consciously 'Ecosocialist Party' that is neither a parliamentary nor a vanguardist party. Instead, Kovel advocates a form of political party "grounded in communities of resistance", where delegates from these communities form the core of the party's activists, and these delegates and the "open and transparent" assembly they form are subject to recall and regular rotation of members. He holds up the Zapatista Army of National Liberation (EZLN) and the Gaviotas movement as examples of such communities, which "are produced outside capitalist circuits" and show that "there can be no single way valid for all peoples". Nonetheless, he also firmly believes in connecting these movements, stating that "ecosocialism will be international or it will be nothing" and hoping that the Ecosocialist Party can retain the autonomy of local communities while supporting them materially. With an ever-expanding party, Kovel hopes that "defections" by capitalists will occur, leading eventually to the armed forces and police who, in joining the revolution, will signify that "the turning point is reached".
Two principles
The economist Pat Devine highlights the necessity of abolishing the social division of labor as one of two principles necessary for building an eco-socialist future, the other being the abolition of the metabolic rift, as detailed by John Bellamy Foster.
Revolution and transition to eco-socialism
The revolution as envisaged by eco-socialists involves an immediate socio-political transition. Internationally, eco-socialists believe in a reform of the nature of money and the formation of a World People's Trade Organisation (WPTO) that democratizes and improves world trade through the calculation of an Ecological Price (EP) for goods. This would then be followed by a transformation of socioeconomic conditions towards ecological production, commons land and notions of usufruct (which seek to improve the common property possessed by society) to end private property. Eco-socialists assert that this must be carried out with adherence to non-violence.
Immediate aftermath of the revolution
Eco-socialists like Kovel use the term "Eco-socialist revolution" to describe the transition to an eco-socialist world society. In the immediate socio-political transition, he believes that four groups will emerge from the revolution, namely revolutionaries, those "whose productive activity is directly compatible with ecological production" (such as nurses, schoolteachers, librarians, independent farmers and many other examples), those "whose pre-revolutionary practice was given over to capital" (including the bourgeoisie, advertising executives and more) and "the workers whose activity added surplus value to capitalist commodities". In terms of political organisation, he advocates an "interim assembly" made up of the revolutionaries that can "devise incentives to make sure that vital functions are maintained" (such as the short-term continuation of "differential remuneration" for labor), "handle the redistribution of social roles and assets", convene "in widespread locations", and send delegates to regional, state, national and international organisations, where every level has an "executive council" that is rotated and can be recalled. From there, he asserts that "productive communities" will "form the political as well as economic unit of society" and "organize others" to make a transition to eco-socialist production.
He adds that people will be allowed to be members of any community they choose, with "associate membership" of others, such as a doctor having main membership of healthcare communities as a doctor and associate membership of child-rearing communities as a father. Each locality would, in Kovel's eyes, require one community that administered the areas of jurisdiction through an elected assembly. Higher-level assemblies would have additional "supervisory" roles over localities to monitor the development of ecosystemic integrity, and would administer "society-wide services" like transport in "state-like functions", before the interim assembly could transfer responsibilities to "the level of the society as a whole through appropriate and democratically responsive committees".
Transnational trade and capital reform
In Kovel's eyes, part of the eco-socialist transition is the reform of money to retain its use in "enabling exchanges" while reducing its functions as "a commodity in its own right" and a "repository of value". He argues for directing money to the "enhancement of use-values" through a "subsidization of use-values" that "preserves the functioning core of the economy while gaining time and space for rebuilding it". Internationally, he believes in the immediate cessation of speculation in currencies ("breaking down the function of money as commodity, and redirecting funds on use-values"), the cancellation of the debt of the Global South ("breaking the back of the value function" of money) and the redirection of the "vast reservoir of mainly phony value" to reparations and "ecologically sound development". He suggests that ending military aid and other forms of support to "comprador elites in the South" will eventually "lead to their collapse".
In terms of trade, Kovel advocates a World People's Trade Organization (WPTO), "responsible to a confederation of popular bodies", in which "the degree of control over trade is ... proportional to involvement with production", meaning that "farmers would have a special say over food trade" and so on. He posits that the WPTO should have an elected council that will oversee a reform of prices in favour of an Ecological Price (EP) "determined by the difference between actual use-values and fully realized ones", thus having low tariffs for forms of ecological production like organic agriculture; he also envisages the high tariffs on non-ecological production providing subsidies to ecological production units. The EP would also internalize the costs of current externalities (like pollution) and "would be set as a function of the distance traded", reducing the effects of long-distance transport like carbon emissions and increased packaging of goods. He thinks that this will provide a "standard of transformation" for non-ecological industries, like the automobile industry, thus spurring changes towards ecological production.
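Kovel describes the EP only qualitatively; no formula appears in the source. A minimal sketch of how such a price could be assembled from the pieces he names (internalized externalities, a tariff tied to the gap between actual and "fully realized" use-values, and a distance-based component) is given below. The function name, weights and numbers are all hypothetical assumptions, not part of the WPTO proposal itself.

```python
# Toy model of an "Ecological Price" (EP). Purely illustrative: every name
# and number here is an assumption made for the sketch, not Kovel's method.

def ecological_price(base_price: float,
                     externality_cost: float,
                     use_value_gap: float,
                     distance_km: float,
                     distance_rate: float = 0.0001) -> float:
    """Return a toy EP for one traded good.

    base_price       -- conventional price of the good
    externality_cost -- internalized costs such as pollution (assumed known)
    use_value_gap    -- 0.0 for fully ecological production, rising towards
                        1.0 as production falls short of realized use-value
    distance_km      -- distance the good travels to market
    distance_rate    -- assumed surcharge per km, as a fraction of base price
    """
    tariff = base_price * use_value_gap                    # penalizes non-ecological production
    transport = base_price * distance_rate * distance_km   # penalizes long-distance trade
    return base_price + externality_cost + tariff + transport

# Locally grown organic produce: no externality charge, no tariff, short haul.
print(ecological_price(10.0, 0.0, 0.0, 50))     # 10.05
# Long-haul industrial produce: externalities priced in, a high tariff.
print(ecological_price(10.0, 2.0, 0.8, 5000))   # 25.0
```

On this reading, the tariff surplus collected on high-gap goods could fund the subsidies to ecological production units that Kovel envisages, though, again, the source specifies no such mechanism in detail.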
Ecological production
Eco-socialists pursue "ecological production" that, according to Kovel, goes beyond the socialist vision of the emancipation of labor to "the realization of use-values and the appropriation of intrinsic value". He envisions a form of production in which "the making of a thing becomes part of the thing made" so that, using a high-quality meal as an analogy, "pleasure would obtain for the cooking of the meal"; thus activities "reserved as hobbies under capitalism" would "compose the fabric of everyday life" under eco-socialism. This, for Kovel, is achieved if labor is "freely chosen and developed... with a fully realized use-value" achieved by a "negation" of exchange-value, and he points to the Food Not Bombs project as an example of this approach. He believes that the notion of "mutual recognition ... for the process as well as the product" will avoid exploitation and hierarchy. With production allowing humanity to "live more directly and receptively embedded in nature", Kovel predicts that "a reorientation of human need" will occur that recognises ecological limits and sees technology as "fully participant in the life of eco-systems", thus removing it from profit-making exercises.
In the course of an Eco-socialist revolution, writers like Kovel and Baer advocate a "rapid conversion to ecosocialist production" for all enterprises, followed by "restoring ecosystemic integrity to the workplace" through steps like workers' ownership. Kovel then believes that the new enterprises can build "socially developed plans" of production for societal needs, such as efficient light-rail transport components. At the same time, he argues for the transformation of essential but, under capitalism, non-productive labour, such as child care, into productive labour, "thereby giving reproductive labour a status equivalent to productive labour". During such a transition, he believes that income should be guaranteed and that money will still be used under "new conditions of value... according to use and to the degree to which ecosystem integrity is developed and advanced by any particular production". Within this structure, Kovel asserts that markets will become unnecessary – although "market phenomena" in personal exchanges and other small instances might be adopted – and communities and elected assemblies will democratically decide on the allocation of resources. Istvan Meszaros believes that such "genuinely planned and self-managed (as opposed to bureaucratically planned from above) productive activities" are essential if eco-socialism is to meet its "fundamental objectives".
Eco-socialists are quick to assert that their focus on "production" does not mean that there will be an increase in production and labor under eco-socialism. Kovel thinks that the emancipation of labor and the realization of use-value will allow "the spheres of work and culture to be reintegrated". He cites the example of Paraguayan Indian communities (organised by Jesuits) in the eighteenth century, which made sure that all community members learned musical instruments and had labourers take musical instruments to the fields, taking turns playing music or harvesting.
Commons, property and usufruct
Most eco-socialists, including Alier and Guha, echo subsistence eco-feminists like Vandana Shiva in arguing for the restoration of commons land over private property. They blame ecological degradation on the inclination to short-term, profit-inspired decisions inherent within a market system. For them, the privatization of land strips people of their local communal resources in the name of creating markets for neo-liberal globalization, which benefits a minority. In their view, successful commons systems have been set up around the world throughout history to manage areas cooperatively, based on long-term needs and sustainability instead of short-term profit. Many eco-socialists focus on a modified version of the notion of 'usufruct' to replace capitalist private property arrangements.
As a legal term, usufruct refers to the legal right to use and derive profit or benefit from property that belongs to another person, as long as the property is not damaged. According to eco-socialists like Kovel, a modern interpretation of the idea is "where one uses, enjoys – and through that, improves – another's property", as its Latin etymology "condenses the two meanings of use – as in use-value, and enjoyment – and as in the gratification expressed in freely associated labour". The idea, according to Kovel, has roots in the Code of Hammurabi and was first mentioned in Roman law, "where it applied to ambiguities between masters and slaves with respect to property"; it also features in Islamic Sharia law, Aztec law and the Napoleonic Code. Crucially for eco-socialists, Marx mentioned the idea when he stated that human beings are no more than the planet's "usufructuaries, and, like boni patres familias, they must hand it down to succeeding generations in an improved condition". Kovel and others have taken up this reading, asserting that, in an eco-socialist society, "everyone will have ... rights of use and ownership over those means of production necessary to express the creativity of human nature", namely "a place of one's own" to decorate to personal taste, some personal possessions, the body and its attendant sexual and reproductive rights.
However, Kovel sees property as "self-contradictory", because individuals emerge "in a tissue of social relations" and "nested circles", with the self at the centre and extended circles where "issues of sharing arise from early childhood on". He believes that "the full self is enhanced more by giving than by taking" and that eco-socialism is realized when material possessions weigh "lightly" upon the self – thus the restoration of use-value allows things to be taken "concretely and sensuously" but "lightly, since things are enjoyed for themselves and not as buttresses for a shaky ego". This, for Kovel, reverses what Marxists see as the commodity fetishism and atomization of individuals (through the "unappeasable craving" for "having and excluding others from having") under capitalism. Under eco-socialism, he therefore believes that the enhancement of use-value will lead to differentiated ownership between the individual and the collective, where there are "distinct limits on the amount of property individuals control" and no one can take control of resources that "would permit the alienation of means of production from another". He then hopes that the "hubris" of the notion of "ownership of the planet" will be replaced with usufruct.
Non-violence
Most eco-socialists are involved in peace and anti-war movements, and eco-socialist writers like Kovel generally believe that "violence is the rupturing of ecosystems" and is therefore "deeply contrary to ecosocialist values". Kovel believes that revolutionary movements must prepare for post-revolutionary violence from counter-revolutionary sources through the "prior development of the democratic sphere" within the movement, because "to the degree that people are capable of self-government, so will they turn away from violence and retribution", for "a self-governed people cannot be pushed around by any alien government". In Kovel's view, it is essential that the revolution "takes place in" or spreads quickly to the United States, which "is capital's gendarme and will crush any serious threat", and that revolutionaries reject the death penalty and retribution against former opponents or counter-revolutionaries.
Although the movement has traditionally been non-violent, there is growing scepticism about relying solely on non-violent tactics as a strategy in the eco-socialist agenda and as a way of dismantling harmful systems. Although progress has been made in the climate movement with non-violent tactics (as demonstrated by Extinction Rebellion (XR), which pushed the UK government to declare a climate emergency), the movement is still failing to bring about radical decarbonisation. As the eco-socialist activist Andreas Malm states in his book How to Blow Up a Pipeline, "If non-violence is not to be treated as a holy covenant or rite, then one must adopt the explicitly anti-Gandhian position of Mandela: 'I called for non-violent protest for as long as it was effective', as 'a tactic that should be abandoned when it no longer worked'." Malm argues that there is another phase beyond peaceful protest.
Criticism
While in many ways the criticisms of eco-socialism combine the traditional criticisms of both socialism and Green politics, there are unique critiques of eco-socialism, which come largely from within the traditional socialist or Green movements themselves, along with conservative criticism.
Some socialists are critical of the term "eco-socialism". David Reilly, who questions whether his argument is improved by the use of an "exotic word", argues instead that the "real socialism" is "also a green or 'eco'" one that you get to "by dint of struggle". Other socialists, like Paul Hampton of the Alliance for Workers' Liberty (a British third camp socialist party), see eco-socialism as "classless ecology", wherein eco-socialists have "given up on the working class" as the privileged agent of struggle by "borrowing bits from Marx but missing the locus of Marxist politics". Writing in Capitalism Nature Socialism, Doug Boucher, Peter Caplan, David Schwartzman and Jane Zara criticise eco-socialists in general, and Joel Kovel in particular, for a deterministic "catastrophism" that overlooks "the countervailing tendencies of both popular struggles and the efforts of capitalist governments to rationalize the system" and the "accomplishments of the labor movement" that "demonstrate that despite the interests and desires of capitalists, progress toward social justice is possible". They argue that an ecological socialism must be "built on hope, not fear".
Conservatives have criticised the perceived opportunism of left-wing groups who have increased their focus on green issues since the fall of communism. Fred L. Smith Jr., president of the Competitive Enterprise Institute think-tank, exemplifies the conservative critique of left Greens, attacking the "pantheism" of the Green movement and conflating "eco-paganism" with eco-socialism. Like many conservative critics, Smith uses the term 'eco-socialism' to attack non-socialist environmentalists for advocating restrictions on market-based solutions to ecological problems. He nevertheless wrongly claims that eco-socialists endorse "the Malthusian view of the relationship between man and nature", and states that Al Gore, a former Democratic Party Vice President of the United States and now a climate change campaigner, is an eco-socialist, despite the fact that Gore has never used this term and is not recognised as such by other followers of either Green politics or socialism.
Some environmentalists and conservationists have criticised eco-socialism from within the Green movement. In a review of Joel Kovel's The Enemy of Nature, David M.
Johns criticises eco-socialism for not offering "suggestions about near term conservation policy" and focusing exclusively on long-term societal transformation. Johns believes that species extinction "started much earlier" than capitalism and suggests that eco-socialism neglects the fact that an ecological society will need to transcend the destructiveness found in "all large-scale societies", the very tendency that Kovel himself attacks among capitalists and traditional leftists who attempt to reduce nature to "linear" human models. Johns questions whether non-hierarchical social systems can provide for billions of people, and criticises eco-socialists for neglecting issues of population pressure. Furthermore, Johns describes Kovel's argument that human hierarchy is founded on raiding to steal women as "archaic".
List of eco-socialists
See also
Critique of political economy
Diggers movement
Eco-communalism
Eco-social market economy
Ecological democracy
Ecological economics
Green left
Green libertarianism
Green politics and parties
Green New Deal
Marxist philosophy of nature
Radical environmentalism
Red socialism
Social-ecology
Veganarchism
Yellow socialism
References
Bibliography
External links
Another Green World: Derek Wall's Ecosocialist Blog
Ecosocialist Horizons
The official site of "Ecosocialists Greece" political organization
Anti-globalization movement
Economic ideologies
Environmentalism
Green politics
History of environmentalism
Marxism
Political ideologies
Political movements
Political theories
Socialism
Political ecology
Eco-socialism
Environmental_science
14,651
40,979,596
https://en.wikipedia.org/wiki/Social%20reserves
Social reserves refer to the intangible ties that bind a country together. As a resource, they may be compared to a country's financial reserves. The term bears some similarity to the Bhutanese concept of Gross national happiness in that it attempts to value quality of life in a way that goes beyond traditional economic indicators.
The term was coined in November 2013 by Singapore's President, Dr Tony Tan, at an event organised by St. Joseph's Institution, Singapore. Speaking in a lecture series on leadership, President Tan said: "The social reserves of a nation are the intangible ties that bind us to one another, and make a nation greater than the sum of individual citizens. [They] are the goodwill that makes us look out for one another even during difficult times, the resilience to overcome challenges and constraints, and the tenacity to progress as individuals and as a nation."
Singapore maintains large financial reserves, primarily through two sovereign wealth funds: the Government of Singapore Investment Corporation (GIC), which manages Singapore's foreign reserves, and Temasek Holdings, an investment company owned by the Government of Singapore. Prior to running for Singapore's elected presidency, Tony Tan was executive director of GIC.
An example of an effort to build up social reserves, he said, was the way that he had expanded Singapore's President's Challenge charity event to go beyond fund-raising to promote volunteerism and social entrepreneurship.
References
Index numbers
National accounts
Social ethics
Social philosophy
Social reserves
Mathematics
299
11,973,439
https://en.wikipedia.org/wiki/Auburn%20Dam
Auburn Dam was a proposed concrete arch dam on the North Fork of the American River east of the town of Auburn, California, in the United States, on the border of Placer and El Dorado Counties. Slated to be completed in the 1970s by the U.S. Bureau of Reclamation, it would have been the tallest concrete dam in California and one of the tallest in the United States, at a height of and storing of water. Straddling a gorge downstream of the confluence of the North and Middle Forks of the American River and upstream of Folsom Lake, it would have regulated water flow and provided flood control in the American River basin as part of Reclamation's immense Central Valley Project.
The dam was first proposed in the 1950s; construction work commenced in 1968, involving the diversion of the North Fork American River through a tunnel and the construction of a massive earthen cofferdam. Following a nearby earthquake and the discovery of an unrelated seismic fault that underlay the dam site, work on the project was halted for fear that the dam's design would not allow it to survive a major quake on the same fault zone. Although the dam was redesigned and a new proposal submitted by 1980, spiraling costs and limited economic justification put an end to the project until severe flooding in 1986 briefly renewed interest in Auburn's flood control potential. The California State Water Resources Control Board denied water rights for the dam project in 2008 due to lack of construction progress. Although new proposals surfaced from time to time after the 1980s, the dam was never built for a number of reasons, including limited water storage capacity, geologic hazards, and potential harm to recreation and the local environment.
Much of the original groundwork at the Auburn Dam site still exists, and up to 2007, the North Fork American River still flowed through the diversion tunnel that had been constructed in preparation for the dam. Reclamation and the Placer County Water Agency completed a pump station project that year which blocked the tunnel, returned the river to its original channel, and diverted a small amount of water through another tunnel under Auburn to meet local needs. However, some groups continue to support construction of the dam, which they state would provide important water regulation and flood protection.
Background
Starting in the 1850s during the California Gold Rush, the city of Sacramento rapidly grew around the confluence of the Sacramento River and its tributary the American River, near the middle of the Central Valley of California. The city's increasing population necessitated the construction of an extensive system of levees on the two rivers to prevent flooding. These early flood control works were insufficient; in 1862, the city was inundated so completely that the state government was temporarily moved to San Francisco. In 1955, the U.S. Army Corps of Engineers built the Folsom Dam at the confluence of the North and South Forks of the American River to provide flood control for the Sacramento metropolitan area. However, the Folsom Dam, with a capacity of just 1 million acre feet (1.2 km3) compared to the annual American River flow of 2.7 million acre feet (3.3 km3), proved inadequate. A flood in 1955 filled the Folsom Reservoir to capacity, before the dam was even completed; it has also filled many times since. However, increased water uses and diversions, requirements for 200-year flood control, and joint system operations have increased seasonal flood capacity in Folsom Lake.
The demand for irrigation water in the Sacramento area and other parts of the Central Valley was also growing. In 1854, a diversion dam was constructed on the North Fork American River at the site of Auburn Dam, to divert water into ditches that supplied downstream farms. Irrigation with dam and canal systems was favored because the seasonal nature of the American River caused floods in some years and droughts in others. A large dam at the Auburn site was thus considered for both flood control and water supply. In the 1950s, the Bureau of Reclamation created the first plans for a high dam at Auburn. Several designs, ranging from earth-fill to concrete gravity dams, were considered. Before the dam could be built, the Auburn-Foresthill Road – which crosses the river just upstream of the dam site – had to be relocated. Even before the project was authorized, contracts were let for the construction of a high bridge to carry the road over the proposed reservoir, as well as preliminary excavations at the dam site. The eventual design of Auburn Dam called for the creation of a reservoir with of capacity, more than twice that of Folsom Lake. The extra storage would greatly reduce the flood risk to Sacramento. The dam was to be the principal feature of the Auburn-Folsom South Unit of the Central Valley Project, with the purpose to "provide new and supplemental water for irrigation, municipal and industrial use, and to replenish severely depleted ground water in the Folsom South region". Congress authorized the project in 1965; the targeted completion date was 1973. As the Auburn Dam proposal evolved, the project transformed from a primary flood-control structure to a multipurpose high dam that would serve various other purposes including long-term water storage, hydroelectricity generation, and recreation. One of the first ideas, publicized in the late 1950s, called for a embankment dam impounding of water. In 1963, a earthfill dam holding back of water was proposed. The pre-construction design was finalized in 1967, for a concrete thin-arch gravity structure over high. This dam would be long, thick at the base, and equipped with five 150 megawatt generators at its base for a total generating capacity of 700 megawatts. Two concrete-lined flip bucket spillways would abut both sides of the dam. With the initial plans set and the project authorized, construction work for the dam started in late 1968.
Construction
Site preparation
Official groundbreaking of the Auburn Dam started on October 19, 1968, with preparatory excavations and test shafts drilled into the sides of the North Fork American River gorge. The contract for the diversion tunnel through the mountainside on river left, in diameter, long, and equipped to handle a flow of (a roughly 35-year flood), was let to Walsh Western for about $5.1 million in 1968. The actual construction of the tunnel itself did not begin until mid-1971, and it was completed in late November 1972. One worker was killed during the excavation of the tunnel. In 1975, the earthen cofferdam for the Auburn project, high, was completed, diverting the river into the tunnel. The diversion tunnel bypassed a roughly section of the riverbed to allow construction of the main dam. Upstream of the dam site, Auburn-Foresthill Road – one of the only all-weather thoroughfares of the region – would be inundated by the proposed reservoir. In preparation for the reservoir's filling, it was rerouted over a three-span, -long truss bridge rising above the river. 
Even though Auburn Dam would never be completed, the bridge was still required because the pool behind the cofferdam would flood the original river crossing. It also improved safety and reduced travel time by eliminating a steep, narrow and winding grade into the canyon on either side of the river, as comparisons to maps showing the old road alignment will attest. The contracts for various projects pertaining to the relocation of the roadway were given to O.K. Mittry and Sons, Hensel Phelps Construction Company, and Willamette-Western Corporation, the latter for the construction of the actual bridge. The Foresthill Bridge, the fourth highest bridge in the United States, was completed in 1973.
Earthquake and redesigning
In 1975, a magnitude 5.7 earthquake shook the Sierra Nevada near Oroville Dam, about north of the Auburn Dam construction site. This quake concerned geologists and engineers working on the project so much that the Auburn Dam construction was halted while the site was resurveyed and investigations were conducted into the origins of the earthquake. It was discovered that the quake might have been caused by reservoir-induced seismicity, i.e. the weight of the water from Lake Oroville, whose dam had been completed in 1968, was pressing down on the fault zone enough to cause geologic stress, during which the fault might slip and cause an earthquake. As the concrete thin-arch design of the Auburn Dam could be vulnerable to such a quake, the project had to be drastically redesigned. Over the next few years, while all construction was stayed, Reclamation conducted evaluations of the seismic potential of the dam site, even though these delays caused the cost of the project to rise with every passing year. The studies concluded that a major fault system underlay the vicinity of the Auburn Dam site, with many folds of metamorphic rock formed by the contact of the foothill rocks and the granite batholith of the Sierra Nevada. Reclamation predicted that the Auburn Reservoir could induce an earthquake of up to magnitude 6.5, while the U.S. Geological Survey projected a higher magnitude of 7.0. Nevertheless, Reclamation redesigned the Auburn Dam based on their 6.5 figure, even though a 7.0 would be roughly three times stronger. The design for the Auburn Dam was changed to a concrete thick-arch gravity dam, to provide better protection against a possible earthquake induced by its own reservoir. Through the rest of the 1970s, other possible designs were looked at but never implemented, while preliminary work on the construction site resumed. On April 29, 1979, the foundations for the Auburn Dam were completed. However, debates continued over whether to build an arched or straight-axis gravity dam. Some favored the latter design because it would have greater mass, allowing it to better withstand earthquakes.
Cofferdam failure
In early February 1986, ten inches (254 mm) of rain fell on the Sacramento region in 11 days, melting the Sierra Nevada snowpack and causing a huge flood to pour down the American River. The 1986 floods were some of the most severe recorded in the 20th century; Placer County was quickly designated a Federal Disaster Area. Rampaging streams and rivers caused some $7.5 million in damage within the county. The rating for Sacramento's levees, supposedly designed to prevent a 125-year flood, was dropped to a 78-year flood in studies conducted after the 1986 event, which suggested that such weather occurred more frequently than previously believed. 
The floods tore out levees along the Sacramento and Feather Rivers through the Sacramento Valley, and the city of Sacramento was spared by a close margin. Folsom Lake filled to dangerously high levels with runoff from the North, Middle and South Forks of the American River. The flood rapidly filled the pool behind the Auburn cofferdam to capacity, as the diversion tunnel could not handle all the water pouring into the reservoir. At about 6:00 A.M. on February 18, the rising water overtopped the cofferdam near the right abutment, creating a waterfall that quickly eroded into the structure. Although the cofferdam was designed with a soft earthen plug to fail in a controlled manner if any such event were to occur, the structure eroded more quickly than expected. The outflow reached by noon; several hours later the maximum discharge was reached at , completely inundating the construction site and destroying almost half of the cofferdam. When the high cofferdam collapsed, its backed-up water surged into already-spilling Folsom Lake, less than a mile downstream, depositing the dam debris and suddenly raising the lake level. Folsom Dam outflow reached , which exceeded the design capacity of levees through Sacramento, but the levees were not overtopped and severe flooding in the city was averted by a close margin. The flood events made it clear that the American River flood control system was inadequate for the flood potential of the watershed. This spurred renewed interest in the Auburn Dam, since a permanent dam would have helped store extra floodwater and also prevented the failure of the cofferdam.
Stopping the project
Economic cost
Following the floods of the 1980s, public opinion began to turn against the Auburn Dam because of the massive estimated cost to finish the project, which was then already rising into the billions of dollars, and the fairly small amount of water it would capture relative to that cost. The best dam sites require a relatively small dam that can store massive amounts of water, and most of those sites in the U.S. have already been utilized. A comparison with Hoover Dam, for example, reveals that the Auburn would store very little water compared to its structural size. Lake Mead, the reservoir behind Hoover, stores about . The proposed Auburn Reservoir, with a mere 8% of that capacity, would require the construction of a dam as tall as Hoover and over three times as wide. As early as 1980, the cost of building the Auburn Dam was estimated at $1 billion. As of 2007, the cost to build the dam would be about $10 billion. Other projects to improve safety margins and spillway capacity of Folsom Dam, and to increase the capacity of levees in the Sacramento area, were projected to cost significantly less while also providing similar levels of flood protection. Also, the United States National Research Council believes that existing stream-flow records, which only date back about 150 years, are insufficient to justify the construction of a dam as large as Auburn. The amount of water supply that Auburn Dam would make available was also in question, because while the American River floods in some years, in other years it barely discharges enough water to fill existing reservoirs. This cast doubts that Auburn could deliver enough water to justify its cost, or the completion of Folsom South Canal, the other major feature of the Auburn-Folsom South Unit Project. 
Failure risk
The Auburn Dam would also be at risk for failure from an earthquake, due to the risk of the reservoir inducing a quake on one of the many fault lines that cross the area, known as the Bear Mountain fault zone. Surface displacement of the ground might range from a few inches/centimeters to in each direction, depending on the magnitude of the earthquake. Although a new concrete-gravity design by Reclamation was modeled to survive a magnitude 6.5 earthquake, it performed poorly under the 7.0 that the USGS had originally estimated. A Bureau of Reclamation study released in 1980 projected that a failure of Auburn Dam would result in a giant wave reaching Folsom Lake within five minutes; depending on reservoir levels, it would cause a cascading failure of Folsom and Nimbus Dams downstream within an hour, unleashing millions of acre-feet of water, which would cause far greater damage downstream than any natural flood. Most of the greater Sacramento area would be inundated; Nimbus Dam would be overtopped by of water and the California State Capitol would be under of water. An earlier study in 1975 predicted that a failure of Folsom Dam alone would result in over 250,000 deaths. If Auburn were to fail at full capacity, the resulting flood would be over three times larger, and cause even greater damage, inundating land for miles on either side of the American and Sacramento rivers.
Impact on recreation
Filling the Auburn Reservoir would result in a two-pronged lake which would inundate numerous canyons and rapids of the North and Middle Forks of the American River. In 1981, the American River was acknowledged as the most popular recreational river in California. Over one million people visit the canyons of the North and Middle Forks of the American River each year to engage in various recreational activities, including kayaking, rafting, hiking, hunting, biking, horseback riding, gold mining, off-roading, and rock climbing. About 900,000 of these visitors go to the Auburn State Recreation Area, which includes the former dam site. The reservoir would inundate most of the Auburn recreation area, although some new recreational opportunities such as boating, water-skiing and deep water fishing would be created as a result of the new lake. Many trails, including those used by the Tevis Cup and Western States Endurance Run, would be submerged. The Auburn Reservoir would also result in the destruction of thousands of acres of riverine habitat, and the inundation of historic and archaeological sites.
Fate of the project
In the end, the Auburn Dam project, once referred to as "the dam that wouldn't die" and "with more lives than an alley cat", was defeated by the intervention of environmentalists, conservationists, and cost-conscious economists. Although four bills to revive the dam project were introduced in Congress over the next twenty years, all were turned down. Representative Norman D. Shumway introduced the Auburn Dam Revival Act of 1987, which was rejected because of the phenomenally high costs. A flood control bill in 1988 involving the Auburn Dam was also defeated. In 1992 and 1996, plans for restarting the Auburn project appeared in various water projects bills. However, even though the project was now leaning towards purely flood control instead of the original expensive multipurpose design that environmental groups had opposed, both were denied. 
As the years dragged on, the cost of the project grew, and it officially ended with the revoking of USBR water rights to the site by the state on November 11, 2008.
Proposals for resurrecting the Auburn Dam
Although the Auburn Dam is now mostly considered history, there are still proponents and groups devoted to restarting the long-inactive project. Advocates argue that the construction of Auburn would be the only solution for providing much-needed flood protection to the Sacramento area; that millions of dollars have already been spent making preparations; that it would provide an abundant supply of reliable water and hydroelectricity; and also that the recreational areas lost under the reservoir could be rebuilt around it. A major supporter of the revival of the dam was the Sacramento County Taxpayer's League, which reported in 2011 that two-thirds of Sacramento citizens support construction of the Auburn Dam. The League also argued that the dam would only cost $2.6 billion instead of $6–10 billion, and that it is the cheapest alternative to provide flood control for the American River. Area Congressman John Doolittle was one of the largest proponents of the Auburn Dam, and he appropriated several million dollars in funds to conduct feasibility studies for the dam. About $3 million went into the main feasibility report, and the remaining $1 million was used for a study concerning the relocation of California State Route 49, which runs through the site. After the Hurricane Katrina disaster in 2005, Doolittle drew public attention to the flood vulnerability of the Sacramento region. He also used the flood-protection "incompetence" of the Folsom Dam to his advantage, saying that "without an Auburn Dam we could soon be in the unenviable position of suffering from both severe drought and severe flooding in the very same year." He led all 18 Republican members of the United States House of Representatives from California in a protest in 2008, trying to convince Governor Arnold Schwarzenegger to revoke the water-rights decision that California had made against Reclamation. Doolittle is sometimes known as the Auburn Dam's "chief sponsor". In response to public outcry, most pro-Auburn Dam groups now recommend the construction of a dry dam, or one that purely supports the purpose of flood control. Such a dam would stand empty most of the year, but during a flood the excess flow would pool temporarily behind the dam instead of flowing straight through, and therefore the dam could still provide flood control while leaving the American River canyons dry for most of the year (hence "dry"). Water would be impounded for only a few days or weeks each year instead of all year long, minimizing damage to the local environment. The dam would be built to protect against a 500-year flood. Also, with the construction of a "dry" Auburn Dam, Folsom Lake could be kept at a higher level throughout the year because of reduced flood-control pressure, therefore facilitating recreational access to the reservoir. Finally, regulation of flows could help groundwater recharge efforts; the lower Sacramento Valley aquifer is acknowledged as severely depleted.
Legacy
Since its inception, hundreds of millions of dollars have been poured into the Auburn Dam project, but no further work has been done since the 1980s. However, the Bureau of Reclamation continues to list the Auburn as a considered alternative for the future of its Auburn-Folsom South Unit project. 
As of now, massive evidence of the dam's construction still remains in the North Fork American River canyon, specifically the excavations for the abutments and spillway, with the consequences of increased erosion. In recent decades, California has been struck with a series of severe droughts. In order to facilitate continued deliveries of water to the thirsty southern half of the state, the Central Valley and State Water Projects have been forced to cut water supplies for agriculture in much of the San Joaquin Valley. Annual deficits of water in the state are projected to rise from in 1998 to an estimated by 2025. The state has proposed several solutions to the shortfall. One, the Peripheral Canal, would facilitate water flow from the water-rich north to the dry south, but has never been built due to environmental concerns. The raising of Shasta Dam on the Sacramento or New Melones Dam on the Stanislaus, or the building of Sites Reservoir, has also been proposed. Lastly, the Auburn Dam has also been revived in light of this. According to supporters, it would cause the least environmental destruction of the multitude of choices, and would give the most reliable water yield, regardless of its skyrocketing costs. In part as an alternative to the Auburn Dam project, flood control for the lower American River is being improved through the US$1 billion Joint Federal Project (a collaboration of the US Bureau of Reclamation and the US Army Corps of Engineers) at Folsom Dam, which adds a new lower spillway and strengthens the eight dikes that serve as part of the dam. Additional work proposed includes a possible raise of Folsom Dam by several feet to improve its flood control and storage capacity. Key levees downstream have also been improved for flood control in the Sacramento area by the US Army Corps of Engineers and the Sacramento Area Flood Control Agency. Sugar Pine Reservoir, an auxiliary component of the Auburn-Folsom South Project upstream in the watershed, was transferred in title by the Bureau of Reclamation to Foresthill Public Utility District in 2003. As a result of a court decision in 1990 (Hodge Decision), the uses of Reclamation's Folsom South Canal changed further when the Freeport Project came online in 2011 to redivert water supplies for East Bay Municipal Utility District and Sacramento County Water Agency from the Sacramento River instead of from the canal via the lower American River, thereby reducing the need for additional supplies from Auburn Dam to the American River. Anticipated diversions from the Folsom South Canal had previously been reduced when the Sacramento Municipal Utility District decommissioned its Rancho Seco nuclear facility in 1989 and no longer required large quantities of cooling water from the canal. A pumping station to supply water to the Placer County Water Agency was built in 2006 on the Middle Fork American River, supplying to a northwest-running pipeline, eliminating the need for Auburn Dam for this supply. The capacity of the station is eventually expected to be upgraded to . By 2006, the Bureau of Reclamation itself began to restore the dam site, which then had been untouched for more than a decade. The river diversion tunnel was sealed but not filled in, and the remnants of the construction site in the riverbed, as well as the remains of the cofferdam, were excavated from the canyon. After the riverbed was leveled and graded, an artificial riverbed with manmade Class III rapids was constructed to channel the river through the site. 
The restoration project also included the construction of other recreational amenities at the Auburn site. This act was seen as the final step of decommissioning the Auburn project and shelving it forever.
References
Works cited
U.S. Army Corps of Engineers. American River Watershed Common Features General Reevaluation Report. Final Environmental Impact Statement/Environmental Impact Report. December 2015. https://www.spk.usace.army.mil/Portals/12/documents/civil_works/CommonFeatures/ARCF_GRR_Final_EIS-EIR_Jan2016.pdf
External links
Auburn Dam Council
Sacramento County Taxpayers League – Auburn Dam
Auburn Dam Watch
Dams on the American River
Central Valley Project
Proposed buildings and structures in California
United States Bureau of Reclamation proposed dams
History of El Dorado County, California
History of Placer County, California
1970s in California
2008 in California
Auburn Dam
Engineering
5,001
11,421,370
https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20snoR31/Z110/Z27
In molecular biology, small nucleolar RNA Z110 (homologous to Z27 and R31) is a non-coding RNA (ncRNA) molecule which functions in the modification of other small nuclear RNAs (snRNAs). This type of modifying RNA is usually located in the nucleolus of the eukaryotic cell, which is a major site of snRNA biogenesis. It is known as a small nucleolar RNA (snoRNA) and also often referred to as a guide RNA. snoRNA Z110 belongs to the C/D box class of snoRNAs, which contain the conserved sequence motifs known as the C box (UGAUGA) and the D box (CUGA). Most members of the box C/D family function in directing site-specific 2'-O-methylation of substrate RNAs. Plant snoRNA Z110 was identified in screens of Arabidopsis thaliana and Oryza sativa.
References
External links
Small nuclear RNA
Small nucleolar RNA snoR31/Z110/Z27
Chemistry
213
14,026,177
https://en.wikipedia.org/wiki/Bequest%20value
Bequest value, in economics, is the value of satisfaction from preserving a natural environment or a historic environment, in other words natural heritage or cultural heritage, for future generations. It is often used when estimating the value of an environmental service or good. Together with the existence value, it makes up the non-use value of such an environmental service or good.
References
Environmental economics
Bequest value
Environmental_science
77
25,738,537
https://en.wikipedia.org/wiki/Simplicial%20map
A simplicial map (also called simplicial mapping) is a function between two simplicial complexes, with the property that the images of the vertices of a simplex always span a simplex. Simplicial maps can be used to approximate continuous functions between topological spaces that can be triangulated; this is formalized by the simplicial approximation theorem. A simplicial isomorphism is a bijective simplicial map such that both it and its inverse are simplicial.
Definitions
A simplicial map is defined in slightly different ways in different contexts.
Abstract simplicial complexes
Let K and L be two abstract simplicial complexes (ASC). A simplicial map of K into L is a function f from the vertices of K to the vertices of L that maps every simplex in K to a simplex in L. That is, for any simplex σ in K, the image f(σ) = {f(v) : v in σ} is a simplex in L. As an example, let K be the ASC containing the sets {1,2},{2,3},{3,1} and their subsets, and let L be the ASC containing the set {4,5,6} and its subsets. Define a mapping f by: f(1)=f(2)=4, f(3)=5. Then f is a simplicial mapping, since f({1,2})={4} which is a simplex in L, f({2,3})=f({3,1})={4,5} which is also a simplex in L, etc.
If f is not bijective, it may map k-dimensional simplices in K to l-dimensional simplices in L, for any l ≤ k. In the above example, f maps the one-dimensional simplex {1,2} to the zero-dimensional simplex {4}.
If f is bijective, and its inverse is a simplicial map of L into K, then f is called a simplicial isomorphism. Isomorphic simplicial complexes are essentially "the same", up to a renaming of the vertices. The existence of an isomorphism between L and K is usually denoted by K ≅ L. The function f defined above is not an isomorphism since it is not bijective. If we modify the definition to f(1)=4, f(2)=5, f(3)=6, then f is bijective but it is still not an isomorphism, since its inverse is not simplicial: the inverse maps {4,5,6}, which is a simplex in L, to {1,2,3}, which is not a simplex in K. If we modify L by removing {4,5,6}, that is, L is the ASC containing only the sets {4,5},{5,6},{6,4} and their subsets, then f is an isomorphism.
Geometric simplicial complexes
Let K and L be two geometric simplicial complexes (GSC). A simplicial map of K into L is a function f from the vertices of K to the vertices of L such that the images of the vertices of any simplex in K span a simplex in L. That is, for any simplex σ in K, the images of the vertices of σ span a simplex in L. Note that this implies that vertices of K are mapped to vertices of L.
Equivalently, one can define a simplicial map as a function f from the underlying space of K (the union of simplices in K) to the underlying space of L that maps every simplex in K linearly to a simplex in L. That is, for any simplex σ in K, f(σ) is contained in a simplex of L, and in addition, the restriction of f to σ is a linear function. Every simplicial map is continuous.
Simplicial maps are determined by their effects on vertices. In particular, there are a finite number of simplicial maps between two given finite simplicial complexes.
A simplicial map between two ASCs induces a simplicial map between their geometric realizations (their underlying polyhedra) using barycentric coordinates. This can be defined precisely. Let K, L be two ASCs, and let f be a simplicial map of K into L. The affine extension of f is a mapping |f| from |K| to |L| defined as follows. For any point x in |K|, let σ be its support (the unique simplex containing x in its interior), and denote the vertices of σ by v1,...,vk. The point x has a unique representation as a convex combination of the vertices, x = t1·v1 + ... + tk·vk, with t1,...,tk > 0 and t1 + ... + tk = 1 (the ti are the barycentric coordinates of x). We define |f|(x) := t1·f(v1) + ... + tk·f(vk). This |f| is a simplicial map of |K| into |L|; it is a continuous function.
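The abstract definition is easy to verify mechanically. The following is a minimal Python sketch (not from the article; the helper names closure and is_simplicial are invented for illustration) that checks whether a vertex map is simplicial, using the running example above.

```python
from itertools import combinations

def closure(facets):
    """All nonempty subsets of the given facets: the ASC they generate."""
    simplices = set()
    for facet in facets:
        verts = tuple(facet)
        simplices.update(frozenset(c) for r in range(1, len(verts) + 1)
                         for c in combinations(verts, r))
    return simplices

def is_simplicial(K, L, f):
    """Return True if the vertex map f (a dict) sends every simplex of K
    to a simplex of L, i.e. f is a simplicial map of K into L."""
    return all(frozenset(f[v] for v in simplex) in L for simplex in K)

K = closure([{1, 2}, {2, 3}, {3, 1}])   # the triangle boundary from the text
L = closure([{4, 5, 6}])                # the full triangle from the text

print(is_simplicial(K, L, {1: 4, 2: 4, 3: 5}))   # True, as argued above
# The bijective map f(1)=4, f(2)=5, f(3)=6 is simplicial, but its inverse
# is not: {4,5,6} in L pulls back to {1,2,3}, which is not a simplex in K.
print(is_simplicial(L, K, {4: 1, 5: 2, 6: 3}))   # False
```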
If f is injective, then |f| is injective; if f is an isomorphism between K and L, then |f| is a homeomorphism between |K| and |L|.
Simplicial approximation
Let f be a continuous map between the underlying polyhedra |K| and |L| of simplicial complexes, and let us write St(v) for the star of a vertex v. A simplicial map g from K to L such that f(St(v)) is contained in St(g(v)) for every vertex v of K is called a simplicial approximation to f. A simplicial approximation is homotopic to the map it approximates. See simplicial approximation theorem for more details.
Piecewise-linear maps
Let K and L be two GSCs. A function f from |K| to |L| is called piecewise-linear (PL) if there exist a subdivision K' of K, and a subdivision L' of L, such that f is a simplicial map of K' into L'. Every simplicial map is PL, but the opposite is not true. For example, suppose |K| and |L| are two triangles, and let f be a non-linear function that maps the leftmost half of |K| linearly into the leftmost half of |L|, and maps the rightmost half of |K| linearly into the rightmost half of |L|. Then f is PL, since it is a simplicial map between a subdivision of |K| into two triangles and a subdivision of |L| into two triangles. This notion is an adaptation of the general notion of a piecewise-linear function to simplicial complexes.
A PL homeomorphism between two polyhedra |K| and |L| is a PL mapping f such that the induced simplicial mapping between the subdivisions K' and L' is a homeomorphism.
References
Algebraic topology
Simplicial homology
Simplicial sets
Simplicial map
Mathematics
1,356
21,513
https://en.wikipedia.org/wiki/North%20Atlantic%20Deep%20Water
North Atlantic Deep Water (NADW) is a deep water mass formed in the North Atlantic Ocean. Thermohaline circulation (properly described as meridional overturning circulation) of the world's oceans involves the flow of warm surface waters from the southern hemisphere into the North Atlantic. Water flowing northward becomes modified through evaporation and mixing with other water masses, leading to increased salinity. When this water reaches the North Atlantic, it cools and sinks through convection, due to its decreased temperature and increased salinity, which result in increased density. NADW is the outflow of this thick deep layer, which can be detected by its high salinity, high oxygen content, nutrient minima, high 14C/12C, and chlorofluorocarbons (CFCs). CFCs are anthropogenic substances that enter the surface of the ocean from gas exchange with the atmosphere. This distinct composition allows its path to be traced as it mixes with Circumpolar Deep Water (CDW), which in turn fills the deep Indian Ocean and part of the South Pacific. NADW and its formation are essential to the Atlantic meridional overturning circulation (AMOC), which is responsible for transporting large amounts of water, heat, salt, carbon, nutrients and other substances from the Tropical Atlantic to the Mid and High Latitude Atlantic. In the conveyor belt model of thermohaline circulation of the world's oceans, the sinking of NADW pulls the waters of the North Atlantic drift northward. However, this is almost certainly an oversimplification of the actual relationship between NADW formation and the strength of the Gulf Stream/North Atlantic drift. NADW has a temperature of 2.0–3.5 °C with a practical salinity of SP = 34.9–35.0, found at a depth between 1500 and 4000 m.
Formation and sources
The NADW is a complex of several water masses formed by deep convection and overflow of dense water across the Greenland-Iceland-Scotland Ridge. The upper layers are formed by deep open ocean convection during winter. Labrador Sea Water (LSW), formed in the Labrador Sea, can reach depths of 2000 m as dense water sinks downward. Classical Labrador Sea Water (CLSW) production is dependent on preconditioning of water in the Labrador Sea from the previous year and the strength of the North Atlantic oscillation (NAO). During a positive NAO phase, conditions exist for strong winter storms to develop. These storms freshen the surface water, and their winds increase cyclonic flow, which allows denser waters to sink. As a result, the temperature, salinity, and density vary yearly. In some years these conditions do not exist and CLSW is not formed. CLSW has a characteristic potential temperature of 3 °C, salinity of 34.88 psu, and density of 34.66. Another component of LSW is the Upper Labrador Sea Water (ULSW). ULSW forms at a density lower than CLSW and has a CFC maximum between 1200 and 1500 m in the subtropical North Atlantic. Eddies of cold, less saline ULSW have densities similar to those of warmer, saltier water and flow along the DWBC, but maintain their high CFCs. The ULSW eddies erode rapidly as they mix laterally with this warmer, saltier water. The lower water masses of NADW form from overflow across the Greenland-Iceland-Scotland Ridge. They are Iceland-Scotland Overflow Water (ISOW) and Denmark Strait Overflow Water (DSOW). 
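The density contrast that drives this sinking can be illustrated with a toy linear equation of state. The coefficients and the specific water properties in the Python sketch below are rough assumed values for illustration only, not figures from this article.

```python
# Toy linear equation of state: rho = rho0 * (1 - alpha*(T - T0) + beta*(S - S0)).
# The reference state and coefficients are assumed, order-of-magnitude values.
RHO0, T0, S0 = 1027.0, 10.0, 35.0   # kg/m^3, deg C, practical salinity
ALPHA, BETA = 1.7e-4, 7.6e-4        # thermal expansion, haline contraction

def density(T, S):
    """Approximate near-surface seawater density in kg/m^3."""
    return RHO0 * (1 - ALPHA * (T - T0) + BETA * (S - S0))

# Warm, salty subtropical surface water carried north by the circulation...
print(density(T=15.0, S=36.0))   # ~1026.9 kg/m^3: relatively light
# ...after strong winter cooling in the subpolar North Atlantic:
print(density(T=3.0, S=34.9))    # ~1028.1 kg/m^3: denser, so it convects and sinks
```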
The overflows are a combination of dense Arctic Ocean water (18%), modified Atlantic water (32%), and intermediate water from the Nordic seas (20%), that entrain and mix with other water masses (contributing 30%) as they flow over the Greenland-Iceland-Scotland Ridge. The formation of both of these waters involves the conversion of warm, salty, northward-flowing surface waters to cold, dense, deep waters behind the Greenland-Iceland-Scotland Ridge. Water flow from the North Atlantic current enters the Arctic Ocean through the Norwegian Current, which splits into the Fram Strait and Barents Sea Branch. Water from the Fram Strait recirculates, reaching a density of DSOW, sinks, and flows towards the Denmark Strait. Water flowing into the Barents Sea feeds ISOW. ISOW enters the eastern North Atlantic over the Iceland-Scotland Ridge through the Faeroe Bank Channel at a depth of 850 m, with some water flowing over the shallower Iceland-Faeroe Rise. ISOW has low CFC concentrations, and it has been estimated from these concentrations that ISOW resides behind the ridge for 45 years. As the water flows southward at the bottom of the channel, it entrains surrounding water of the eastern North Atlantic, and flows to the western North Atlantic through the Charlie–Gibbs fracture zone, entraining with LSW. This water is less dense than DSOW and lies above it as it flows cyclonically in the Irminger Basin. DSOW is the coldest, densest, and freshest water mass of NADW. DSOW formed behind the ridge flows over the Denmark Strait at a depth of 600 m. The most significant water mass contributing to DSOW is Arctic Intermediate Water (AIW). Winter cooling and convection allow AIW to sink and pool behind the Denmark Strait. Upper AIW has a high amount of anthropogenic tracers due to its exposure to the atmosphere. AIW's tritium and CFC signature is observed in DSOW at the base of the Greenland continental slope. This also showed that the DSOW flowing 450 km to the south was no older than 2 years. Both the DSOW and ISOW flow around the Irminger Basin and Labrador Sea in a deep boundary current. Leaving the Greenland Sea with 2.5 Sv, the flow increases to 10 Sv south of Greenland. It is cold and relatively fresh, flowing below 3500 m in the DWBC and spreading into the deep Atlantic basins.
Spreading pathways
The southward spread of NADW along the Deep Western Boundary Current (DWBC) can be traced by its high oxygen content, high CFCs, and density. ULSW is the major source of upper NADW. ULSW advects southward from the Labrador Sea in small eddies that mix into the DWBC. A CFC maximum associated with ULSW has been observed along 24°N in the DWBC at 1500 m. Some of the upper ULSW recirculates into the Gulf Stream, while some remains in the DWBC. High CFCs in the subtropics indicate recirculation in the subtropics. ULSW that remains in the DWBC dilutes as it moves equatorward. Deep convection in the Labrador Sea during the late 1980s and early 1990s resulted in CLSW with a lower CFC concentration due to downward mixing. Convection allowed the CFCs to penetrate further downward to 2000 m. These minima could be tracked, and were first observed in the subtropics in the early 1990s. ISOW and DSOW flow around the Irminger Basin, with DSOW entering the DWBC. These are the two lower portions of the NADW. Another CFC maximum is seen at 3500 m in the subtropics from the DSOW contribution to NADW. Some of the NADW recirculates with the northern gyre. 
To the south of the gyre, NADW flows under the Gulf Stream, where it continues along the DWBC until it reaches another gyre in the subtropics. Lower North Atlantic Deep Water (LNADW), originating in the Greenland and Norwegian seas, brings high salinity, oxygen, and freon concentrations towards the Romanche Trench, an equatorial fracture zone in the Mid-Atlantic Ridge (MAR). Found at depths around , LNADW flows east through the trench over Antarctic Bottom Water—the trench is the only opening in the MAR where inter-basin exchange is possible for these two water masses.
Variability
It is believed that North Atlantic Deep Water formation has been dramatically reduced at times during the past (such as during the Younger Dryas or during Heinrich events), and that this might correlate with a decrease in the strength of the Gulf Stream and the North Atlantic drift, in turn cooling the climate of northwestern Europe. There is concern that global warming might cause this to happen again. It is also hypothesized that during the Last Glacial Maximum, NADW was replaced with an analogous water mass that occupied a shallower depth, known as Glacial North Atlantic Intermediate Water.
See also
Ekman transport
Irminger Current
Sargasso Sea
References
Further reading
External links
Glossary of Physical Oceanography and Related Disciplines
North Atlantic Deep Water (NADW)
Water masses
Atlantic Ocean
North Atlantic Deep Water
Chemistry
1,824
25,711,256
https://en.wikipedia.org/wiki/Kepler-6
Kepler-6 is a G-type star situated in the constellation Cygnus. The star lies within the field of view of the Kepler Mission, which discovered it as part of a NASA-led mission to discover Earth-like planets. The star, which is slightly larger, more metal-rich, slightly cooler, and more massive than the Sun, is orbited by at least one extrasolar planet, a Jupiter-sized planet named Kepler-6b that orbits closely to its star.
Nomenclature and history
Kepler-6 was named for the Kepler Mission, a NASA project launched in 2009 that aims to discover Earth-like planets that transit, or cross in front of, their home stars with respect to Earth. Unlike stars like the Sun or Sirius, Kepler-6 does not have a common and colloquial name. The discovery of Kepler-6b was announced by the Kepler team on January 4, 2010 at the 215th meeting of the American Astronomical Society along with planets around Kepler-4, Kepler-5, Kepler-7, and Kepler-8. It was the third planet to be discovered by the Kepler spacecraft; the first three planets to be verified by data from Kepler had been previously discovered. These three planets were used to test the accuracy of Kepler's measurements. The discovery of Kepler-6 was confirmed by follow-up observations made using the Hobby–Eberly and Smith telescopes in Texas; the Keck 1 telescope in Hawaii; the Hale and Shane telescopes in southern California; the WIYN, MMT, and Tillinghast telescopes in Arizona; and the Nordic Optical Telescope in the Canary Islands.
Characteristics
Kepler-6 is a star of approximately 1.209 Msun, about six-fifths the mass of the Sun. It is also wider than the Sun, with a radius of 1.391 Rsun, or roughly seven-fifths that of the Sun. The star is approximately 3.8 billion years old, and has an effective temperature of 5647 K (9,705 °F). In comparison, the Sun has a slightly warmer temperature of 5778 K. Kepler-6 has a metallicity of [Fe/H] = +0.34, making it 2.2 times more metallic than the Sun. On average, metal-rich stars tend to be more likely to have planets and planetary systems. The star, as seen from Earth, has an apparent magnitude of 13.8. It is not visible with the naked eye. In comparison, Pluto's apparent magnitude at its brightest is slightly brighter, at 13.65.
Planetary system
Kepler-6 has one confirmed extrasolar planet, a gas giant named Kepler-6b. The planet is approximately 0.669 MJ, or some two-thirds the mass of the planet Jupiter. It is also slightly more diffuse than Jupiter, with a radius of approximately 1.323 RJ. Kepler-6b orbits at an average distance of 0.0456 AU from its star, and completes an orbit every 3.234 days. The eccentricity of the planet's orbit is assumed to be 0, which is that of a circular orbit.
See also
List of extrasolar planets
Kepler Mission
References
Planetary systems with one confirmed planet
Cygnus (constellation)
Planetary transit variables
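As an illustrative aside (not part of the original article), the quoted orbital distance of Kepler-6b follows from the stellar mass and orbital period via Kepler's third law; the short Python sketch below checks the arithmetic.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
AU = 1.496e11          # astronomical unit, m

M_star = 1.209 * M_SUN             # mass of Kepler-6, from the text above
P = 3.234 * 86400.0                # orbital period of Kepler-6b, in seconds

# Kepler's third law: a^3 = G * M * P^2 / (4 * pi^2)
a = (G * M_star * P**2 / (4 * math.pi**2)) ** (1.0 / 3.0)
print(a / AU)   # ~0.0456 AU, matching the orbital distance quoted above
```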
Kepler-6
Astronomy
670
3,742
https://en.wikipedia.org/wiki/Bluetooth
Bluetooth is a short-range wireless technology standard that is used for exchanging data between fixed and mobile devices over short distances and building personal area networks (PANs). In the most widely used mode, transmission power is limited to 2.5 milliwatts, giving it a very short range of up to . It employs UHF radio waves in the ISM bands, from 2.402GHz to 2.48GHz. It is mainly used as an alternative to wired connections, to exchange files between nearby portable devices, to connect cell phones and music players with wireless headphones, wireless speakers, hi-fi systems and car audio, and for wireless transmission between TVs and soundbars. Bluetooth is managed by the Bluetooth Special Interest Group (SIG), which has more than 35,000 member companies in the areas of telecommunication, computing, networking, and consumer electronics. The IEEE standardized Bluetooth as IEEE 802.15.1 but no longer maintains the standard. The Bluetooth SIG oversees the development of the specification, manages the qualification program, and protects the trademarks. A manufacturer must meet Bluetooth SIG standards to market a product as a Bluetooth device. A network of patents applies to the technology, which is licensed to individual qualifying devices. 4.7 billion Bluetooth integrated circuit chips are shipped annually. Bluetooth was first demonstrated in space in 2024, an early test envisioned to enhance IoT capabilities.
Etymology
The name "Bluetooth" was proposed in 1997 by Jim Kardach of Intel, one of the founders of the Bluetooth SIG. The name was inspired by a conversation with Sven Mattisson, who related Scandinavian history through tales from Frans G. Bengtsson's The Long Ships, a historical novel about Vikings and the 10th-century Danish king Harald Bluetooth. Upon discovering a picture of the runestone of Harald Bluetooth in the book A History of the Vikings by Gwyn Jones, Kardach proposed Bluetooth as the codename for the short-range wireless program which is now called Bluetooth. According to Bluetooth's official website, Bluetooth is the Anglicised version of the Scandinavian Blåtand/Blåtann (or in Old Norse blátǫnn). It was the epithet of King Harald Bluetooth, who united the disparate Danish tribes into a single kingdom; Kardach chose the name to imply that Bluetooth similarly unites communication protocols. The Bluetooth logo is a bind rune merging the Younger Futhark runes ᚼ (Hagall) and ᛒ (Bjarkan), Harald's initials.
History
The development of the "short-link" radio technology, later named Bluetooth, was initiated in 1989 by Nils Rydbeck, CTO at Ericsson Mobile in Lund, Sweden. The purpose was to develop wireless headsets, according to two inventions by Johan Ullman, and . Nils Rydbeck tasked Tord Wingren with specifying and Dutchman Jaap Haartsen and Sven Mattisson with developing. Both were working for Ericsson in Lund. Principal design and development began in 1994 and by 1997 the team had a workable solution. From 1997 Örjan Johansson became the project leader and propelled the technology and standardization. In 1997, Adalio Sanchez, then head of IBM ThinkPad product R&D, approached Nils Rydbeck about collaborating on integrating a mobile phone into a ThinkPad notebook. The two assigned engineers from Ericsson and IBM studied the idea. The conclusion was that power consumption on cellphone technology at that time was too high to allow viable integration into a notebook and still achieve adequate battery life. 
Instead, the two companies agreed to integrate Ericsson's short-link technology on both a ThinkPad notebook and an Ericsson phone to accomplish the goal. Since neither IBM ThinkPad notebooks nor Ericsson phones were the market share leaders in their respective markets at that time, Adalio Sanchez and Nils Rydbeck agreed to make the short-link technology an open industry standard to permit each player maximum market access. Ericsson contributed the short-link radio technology, and IBM contributed patents around the logical layer. Adalio Sanchez of IBM then recruited Stephen Nachtsheim of Intel to join, and then Intel also recruited Toshiba and Nokia. In May 1998, the Bluetooth SIG was launched with IBM and Ericsson as the founding signatories and a total of five members: Ericsson, Intel, Nokia, Toshiba, and IBM. The first Bluetooth device was revealed in 1999. It was a hands-free mobile headset that earned the "Best of show Technology Award" at COMDEX. The first Bluetooth mobile phone was the unreleased prototype Ericsson T36, though it was the revised Ericsson model T39 that actually made it to store shelves in June 2001. However, Ericsson released the R520m in Quarter 1 of 2001, making the R520m the first ever commercially available Bluetooth phone. In parallel, IBM introduced the IBM ThinkPad A30 in October 2001, which was the first notebook with integrated Bluetooth. Bluetooth's early incorporation into consumer electronics products continued at Vosi Technologies in Costa Mesa, California, initially overseen by founding members Bejan Amini and Tom Davidson. Vosi Technologies had been created by real estate developer Ivano Stegmenga, with United States Patent 608507, for communication between a cellular phone and a vehicle's audio system. At the time, Sony/Ericsson had only a minor market share in the cellular phone market, which was dominated in the US by Nokia and Motorola. Due to ongoing negotiations for an intended licensing agreement with Motorola beginning in the late 1990s, Vosi could not publicly disclose the intention, integration, and initial development of other enabled devices which were to be the first "Smart Home" internet connected devices. Vosi needed a means for the system to communicate without a wired connection from the vehicle to the other devices in the network. Bluetooth was chosen, since Wi-Fi was not yet readily available or supported in the public market. Vosi had begun to develop the Vosi Cello integrated vehicular system and some other internet connected devices, one of which was intended to be a table-top device named the Vosi Symphony, networked with Bluetooth. Through the negotiations with Motorola, Vosi introduced and disclosed its intent to integrate Bluetooth in its devices. In the early 2000s, a legal battle ensued between Vosi and Motorola, which indefinitely suspended release of the devices. Later, Motorola implemented it in their devices, which initiated the significant propagation of Bluetooth in the public market due to its large market share at the time. In 2012, Jaap Haartsen was nominated by the European Patent Office for the European Inventor Award.
Implementation
Bluetooth operates at frequencies between 2.402 and 2.480GHz, or 2.400 and 2.4835GHz, including guard bands 2MHz wide at the bottom end and 3.5MHz wide at the top. This is in the globally unlicensed (but not unregulated) industrial, scientific and medical (ISM) 2.4GHz short-range radio frequency band. Bluetooth uses a radio technology called frequency-hopping spread spectrum. 
Bluetooth divides transmitted data into packets, and transmits each packet on one of 79 designated Bluetooth channels. Each channel has a bandwidth of 1MHz. It usually performs 1,600 hops per second, with adaptive frequency-hopping (AFH) enabled. Bluetooth Low Energy uses 2MHz spacing, which accommodates 40 channels. Originally, Gaussian frequency-shift keying (GFSK) modulation was the only modulation scheme available. Since the introduction of Bluetooth 2.0+EDR, π/4-DQPSK (differential quadrature phase-shift keying) and 8-DPSK modulation may also be used between compatible devices. Devices functioning with GFSK are said to be operating in basic rate (BR) mode, where an instantaneous bit rate of 1Mbit/s is possible. The term Enhanced Data Rate (EDR) is used to describe π/4-DQPSK (EDR2) and 8-DPSK (EDR3) schemes, transferring 2 and 3Mbit/s respectively. In 2019, Apple published an extension called HDR which supports data rates of 4 (HDR4) and 8 (HDR8) Mbit/s using π/4-DQPSK modulation on 4 MHz channels with forward error correction (FEC). Bluetooth is a packet-based protocol with a master/slave architecture. One master may communicate with up to seven slaves in a piconet. All devices within a given piconet use the clock provided by the master as the base for packet exchange. The master clock ticks with a period of 312.5μs, two clock ticks then make up a slot of 625μs, and two slots make up a slot pair of 1250μs. In the simple case of single-slot packets, the master transmits in even slots and receives in odd slots. The slave, conversely, receives in even slots and transmits in odd slots. Packets may be 1, 3, or 5 slots long, but in all cases, the master's transmission begins in even slots and the slave's in odd slots. The above excludes Bluetooth Low Energy, introduced in the 4.0 specification, which uses the same spectrum but somewhat differently.
Communication and connection
A master BR/EDR Bluetooth device can communicate with a maximum of seven devices in a piconet (an ad hoc computer network using Bluetooth technology), though not all devices reach this maximum. The devices can switch roles, by agreement, and the slave can become the master (for example, a headset initiating a connection to a phone necessarily begins as master—as an initiator of the connection—but may subsequently operate as the slave). The Bluetooth Core Specification provides for the connection of two or more piconets to form a scatternet, in which certain devices simultaneously play the master/leader role in one piconet and the slave role in another. At any given time, data can be transferred between the master and one other device (except for the little-used broadcast mode). The master chooses which slave device to address; typically, it switches rapidly from one device to another in a round-robin fashion. Since it is the master that chooses which slave to address, whereas a slave is (in theory) supposed to listen in each receive slot, being a master is a lighter burden than being a slave. Being a master of seven slaves is possible; being a slave of more than one master is possible. The specification is vague as to required behavior in scatternets.
Uses
Bluetooth is a standard wire-replacement communications protocol primarily designed for low power consumption, with a short range based on low-cost transceiver microchips in each device. Because the devices use a radio (broadcast) communications system, they do not have to be in visual line of sight of each other; however, a quasi optical wireless path must be viable. 
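The channel plan and slot timing described under Implementation above can be checked with a short Python sketch. This is an illustrative calculation only; the actual pseudo-random hop-selection algorithm is considerably more involved and is not shown here.

```python
# BR/EDR: 79 channels of 1 MHz bandwidth, centered at 2402 + k MHz, k = 0..78.
br_edr_channels_mhz = [2402 + k for k in range(79)]
assert br_edr_channels_mhz[0] == 2402 and br_edr_channels_mhz[-1] == 2480

# Bluetooth Low Energy: 40 channels at 2 MHz spacing in the same band.
ble_channels_mhz = [2402 + 2 * k for k in range(40)]

# Basic-rate timing: a 312.5 us master clock tick; two ticks per 625 us slot.
TICK_US = 312.5
SLOT_US = 2 * TICK_US          # 625 us per slot
SLOT_PAIR_US = 2 * SLOT_US     # 1250 us per slot pair
print(1_000_000 / SLOT_US)     # 1600 slots per second -> up to 1,600 hops/s

def transmitter(slot_index):
    """For single-slot packets: the master transmits in even slots,
    the slave in odd slots."""
    return "master" if slot_index % 2 == 0 else "slave"

print([transmitter(i) for i in range(4)])  # ['master', 'slave', 'master', 'slave']
```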
Bluetooth classes and power use
Historically, the Bluetooth range was defined by the radio class, with a lower class (and higher output power) having larger range. The actual range of a given link depends on several qualities of both communicating devices and the air and obstacles in between. The primary attributes affecting range are the data rate, protocol (Bluetooth Classic or Bluetooth Low Energy), transmission power, and receiver sensitivity, and the relative orientations and gains of both antennas. The effective range varies depending on propagation conditions, material coverage, production sample variations, antenna configurations and battery conditions. Most Bluetooth applications are for indoor conditions, where attenuation of walls and signal fading due to signal reflections make the range far lower than the specified line-of-sight ranges of the Bluetooth products. Most Bluetooth applications are battery-powered Class 2 devices, with little difference in range whether the other end of the link is a Class 1 or Class 2 device, as the lower-powered device tends to set the range limit. In some cases the effective range of the data link can be extended when a Class 2 device is connecting to a Class 1 transceiver with both higher sensitivity and transmission power than a typical Class 2 device. In general, however, Class 1 devices have sensitivities similar to those of Class 2 devices. Connecting two Class 1 devices with both high sensitivity and high power can allow ranges far in excess of the typical 100 m, depending on the throughput required by the application. Some such devices allow open field ranges of up to 1 km and beyond between two similar devices without exceeding legal emission limits.
Bluetooth profile
To use Bluetooth wireless technology, a device must be able to interpret certain Bluetooth profiles. For example, the Headset Profile (HSP) connects headphones and earbuds to a cell phone or laptop; the Health Device Profile (HDP) can connect a cell phone to a digital thermometer or heart rate detector; and the Video Distribution Profile (VDP) sends a video stream from a video camera to a TV screen or a recording device. Profiles are definitions of possible applications and specify general behaviors that Bluetooth-enabled devices use to communicate with other Bluetooth devices. These profiles include settings to parameterize and to control the communication from the start. Adherence to profiles saves the time of transmitting the parameters anew before the bi-directional link becomes effective. There is a wide range of Bluetooth profiles that describe many different types of applications or use cases for devices.
List of applications
Wireless control and communication between a mobile phone and a handsfree headset. This was one of the earliest applications to become popular.
Wireless control of audio and communication functions between a mobile phone and a Bluetooth compatible car stereo system (and sometimes between the SIM card and the car phone).
Wireless communication between a smartphone and a smart lock for unlocking doors.
Wireless control of and communication with iOS and Android device phones, tablets and portable wireless speakers.
Wireless Bluetooth headset and intercom. Idiomatically, a headset is sometimes called "a Bluetooth".
Wireless streaming of audio to headphones with or without communication capabilities.
Wireless streaming of data collected by Bluetooth-enabled fitness devices to phone or PC. 
Wireless networking between PCs in a confined space and where little bandwidth is required.
Wireless communication with PC input and output devices, the most common being the mouse, keyboard and printer.
Transfer of files, contact details, calendar appointments, and reminders between devices with OBEX and sharing directories via FTP.
Triggering the camera shutter of a smartphone using a Bluetooth controlled selfie stick.
Replacement of previous wired RS-232 serial communications in test equipment, GPS receivers, medical equipment, bar code scanners, and traffic control devices.
For controls where infrared was often used.
For low bandwidth applications where higher USB bandwidth is not required and a cable-free connection is desired.
Sending small advertisements from Bluetooth-enabled advertising hoardings to other, discoverable, Bluetooth devices.
Wireless bridge between two Industrial Ethernet (e.g., PROFINET) networks.
Game consoles have been using Bluetooth as a wireless communications protocol for peripherals since the seventh generation, including Nintendo's Wii and Sony's PlayStation 3, which use Bluetooth for their respective controllers.
Dial-up internet access on personal computers or PDAs using a data-capable mobile phone as a wireless modem.
Short-range transmission of health sensor data from medical devices to mobile phone, set-top box or dedicated telehealth devices.
Allowing a DECT phone to ring and answer calls on behalf of a nearby mobile phone.
Real-time location systems (RTLS) are used to track and identify the location of objects in real time using "Nodes" or "tags" attached to, or embedded in, the objects tracked, and "Readers" that receive and process the wireless signals from these tags to determine their locations.
Personal security application on mobile phones for prevention of theft or loss of items. The protected item has a Bluetooth marker (e.g., a tag) that is in constant communication with the phone. If the connection is broken (the marker is out of range of the phone) then an alarm is raised. This can also be used as a man overboard alarm.
Calgary, Alberta, Canada's Roads Traffic division uses data collected from travelers' Bluetooth devices to predict travel times and road congestion for motorists.
Wireless transmission of audio (a more reliable alternative to FM transmitters).
Live video streaming to the visual cortical implant device by Nabeel Fattah at Newcastle University in 2017.
Connection of motion controllers to a PC when using VR headsets.
Wireless connection between TVs and soundbars.
Devices
Bluetooth exists in numerous products such as telephones, speakers, tablets, media players, robotics systems, laptops, and game console equipment as well as some high definition headsets, modems, hearing aids and even watches. Bluetooth is useful when transferring information between two or more devices that are near each other in low-bandwidth situations. Bluetooth is commonly used to transfer sound data with telephones (i.e., with a Bluetooth headset) or byte data with hand-held computers (transferring files). Bluetooth protocols simplify the discovery and setup of services between devices. Bluetooth devices can advertise all of the services they provide. This makes using services easier, because more of the security, network address and permission configuration can be automated than with many other network types. 
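As one concrete example of device-to-device communication, CPython on Linux (with BlueZ) exposes Bluetooth through the standard socket module. The sketch below is a minimal RFCOMM client, not a full application; the peer address and channel are placeholders to substitute with real values.

```python
import socket

# Placeholder values: substitute a real device address and RFCOMM channel.
PEER_ADDR = "00:11:22:33:44:55"   # Bluetooth device address (hypothetical)
CHANNEL = 1                       # RFCOMM channel the peer listens on

# AF_BLUETOOTH and BTPROTO_RFCOMM are available in CPython on Linux with BlueZ.
sock = socket.socket(socket.AF_BLUETOOTH, socket.SOCK_STREAM,
                     socket.BTPROTO_RFCOMM)
try:
    sock.connect((PEER_ADDR, CHANNEL))   # (address string, channel number)
    sock.sendall(b"hello over Bluetooth\n")
    reply = sock.recv(1024)              # read up to 1024 bytes of the reply
    print(reply)
finally:
    sock.close()
```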
Computer requirements A personal computer that does not have embedded Bluetooth can use a Bluetooth adapter that enables the PC to communicate with Bluetooth devices. While some desktop computers and most recent laptops come with a built-in Bluetooth radio, others require an external adapter, typically in the form of a small USB "dongle". Unlike its predecessor, IrDA, which requires a separate adapter for each device, Bluetooth lets multiple devices communicate with a computer over a single adapter. Operating system implementation For Microsoft platforms, Windows XP Service Pack 2 and SP3 releases work natively with Bluetooth v1.1, v2.0 and v2.0+EDR. Previous versions required users to install their Bluetooth adapter's own drivers, which were not directly supported by Microsoft. Microsoft's own Bluetooth dongles (packaged with their Bluetooth computer devices) have no external drivers and thus require at least Windows XP Service Pack 2. Windows Vista RTM/SP1 with the Feature Pack for Wireless or Windows Vista SP2 work with Bluetooth v2.1+EDR. Windows 7 works with Bluetooth v2.1+EDR and Extended Inquiry Response (EIR). The Windows XP and Windows Vista/Windows 7 Bluetooth stacks support the following Bluetooth profiles natively: PAN, SPP, DUN, HID, HCRP. The Windows XP stack can be replaced by a third party stack that supports more profiles or newer Bluetooth versions. The Windows Vista/Windows 7 Bluetooth stack supports vendor-supplied additional profiles without requiring that the Microsoft stack be replaced. Windows 8 and later support Bluetooth Low Energy (BLE). It is generally recommended to install the latest vendor driver and its associated stack to be able to use the Bluetooth device at its fullest extent. Apple products have worked with Bluetooth since Mac OS X v10.2, which was released in 2002. Linux has two popular Bluetooth stacks, BlueZ and Fluoride. The BlueZ stack is included with most Linux kernels and was originally developed by Qualcomm. Fluoride, earlier known as Bluedroid, is included in Android OS and was originally developed by Broadcom. There is also the Affix stack, developed by Nokia. It was once popular, but has not been updated since 2005. FreeBSD has included Bluetooth since its v5.0 release, implemented through netgraph. NetBSD has included Bluetooth since its v4.0 release. Its Bluetooth stack was ported to OpenBSD as well; however, OpenBSD later removed it as unmaintained. DragonFly BSD has had NetBSD's Bluetooth implementation since 1.11 (2008). A netgraph-based implementation from FreeBSD has also been available in the tree, possibly disabled until 2014-11-15, and may require more work. Specifications and features The specifications were formalized by the Bluetooth Special Interest Group (SIG) and formally announced on 20 May 1998. In 2014 it had a membership of over 30,000 companies worldwide. It was established by Ericsson, IBM, Intel, Nokia and Toshiba, and later joined by many other companies. All versions of the Bluetooth standards are backward-compatible with all earlier versions. The Bluetooth Core Specification Working Group (CSWG) produces mainly four kinds of specifications: the Bluetooth Core Specification, typically released every few years; the Core Specification Addendum (CSA); Core Specification Supplements (CSS), which can be released more frequently than Addenda; and Errata, available with a Bluetooth SIG account. Bluetooth 1.0 and 1.0B Products were not interoperable. Anonymity was not possible, preventing certain services from using Bluetooth environments.
Bluetooth 1.1 Ratified as IEEE Standard 802.15.1–2002 Many errors found in the v1.0B specifications were fixed. Added possibility of non-encrypted channels. Received signal strength indicator (RSSI) Bluetooth 1.2 Major enhancements include: Faster connection and discovery Adaptive frequency-hopping spread spectrum (AFH), which improves resistance to radio frequency interference by avoiding the use of crowded frequencies in the hopping sequence Higher transmission speeds in practice than in v1.1, up to 721 kbit/s Extended Synchronous Connections (eSCO), which improve voice quality of audio links by allowing retransmissions of corrupted packets, and may optionally increase audio latency to provide better concurrent data transfer Host Controller Interface (HCI) operation with three-wire UART Ratified as IEEE Standard 802.15.1–2005 Introduced flow control and retransmission modes for L2CAP. Bluetooth 2.0 + EDR This version of the Bluetooth Core Specification was released before 2005. The main difference is the introduction of an Enhanced Data Rate (EDR) for faster data transfer. The data rate of EDR is 3 Mbit/s, although the maximum data transfer rate (allowing for inter-packet time and acknowledgements) is 2.1 Mbit/s. EDR uses a combination of GFSK and phase-shift keying modulation (PSK) with two variants, π/4-DQPSK and 8-DPSK. EDR can provide a lower power consumption through a reduced duty cycle. The specification is published as Bluetooth v2.0 + EDR, which implies that EDR is an optional feature. Aside from EDR, the v2.0 specification contains other minor improvements, and products may claim compliance to "Bluetooth v2.0" without supporting the higher data rate. At least one commercial device states "Bluetooth v2.0 without EDR" on its data sheet. Bluetooth 2.1 + EDR Bluetooth Core Specification version 2.1 + EDR was adopted by the Bluetooth SIG on 26 July 2007. The headline feature of v2.1 is secure simple pairing (SSP): this improves the pairing experience for Bluetooth devices, while increasing the use and strength of security. Version 2.1 allows various other improvements, including extended inquiry response (EIR), which provides more information during the inquiry procedure to allow better filtering of devices before connection; and sniff subrating, which reduces the power consumption in low-power mode. Bluetooth 3.0 + HS Version 3.0 + HS of the Bluetooth Core Specification was adopted by the Bluetooth SIG on 21 April 2009. Bluetooth v3.0 + HS provides theoretical data transfer speeds of up to 24 Mbit/s, though not over the Bluetooth link itself. Instead, the Bluetooth link is used for negotiation and establishment, and the high data rate traffic is carried over a colocated 802.11 link. The main new feature is AMP (Alternative MAC/PHY), the addition of 802.11 as a high-speed transport. The high-speed part of the specification is not mandatory, and hence only devices that display the "+HS" logo actually support Bluetooth over 802.11 high-speed data transfer. A Bluetooth v3.0 device without the "+HS" suffix is only required to support features introduced in Core Specification version 3.0 or earlier Core Specification Addendum 1. L2CAP Enhanced modes Enhanced Retransmission Mode (ERTM) implements a reliable L2CAP channel, while Streaming Mode (SM) implements an unreliable channel with no retransmission or flow control. Introduced in Core Specification Addendum 1. Alternative MAC/PHY Enables the use of alternative MAC and PHYs for transporting Bluetooth profile data.
The Bluetooth radio is still used for device discovery, initial connection and profile configuration. However, when large quantities of data must be sent, the high-speed alternative MAC/PHY, 802.11 (typically associated with Wi-Fi), transports the data. This means that Bluetooth uses proven low power connection models when the system is idle, and the faster radio when it must send large quantities of data. AMP links require enhanced L2CAP modes. Unicast Connectionless Data Permits sending service data without establishing an explicit L2CAP channel. It is intended for use by applications that require low latency between user action and reconnection/transmission of data. This is only appropriate for small amounts of data. Enhanced Power Control Updates the power control feature to remove the open loop power control, and also to clarify ambiguities in power control introduced by the new modulation schemes added for EDR. Enhanced power control removes the ambiguities by specifying the behavior that is expected. The feature also adds closed loop power control, meaning RSSI filtering can start as the response is received. Additionally, a "go straight to maximum power" request has been introduced. This is expected to deal with the headset link loss issue typically observed when a user puts their phone into a pocket on the opposite side to the headset. Ultra-wideband The high-speed (AMP) feature of Bluetooth v3.0 was originally intended for UWB, but the WiMedia Alliance, the body responsible for the flavor of UWB intended for Bluetooth, announced in March 2009 that it was disbanding, and ultimately UWB was omitted from the Core v3.0 specification. On 16 March 2009, the WiMedia Alliance announced it was entering into technology transfer agreements for the WiMedia Ultra-wideband (UWB) specifications. WiMedia has transferred all current and future specifications, including work on future high-speed and power-optimized implementations, to the Bluetooth Special Interest Group (SIG), Wireless USB Promoter Group and the USB Implementers Forum. After successful completion of the technology transfer, marketing, and related administrative items, the WiMedia Alliance ceased operations. In October 2009, the Bluetooth Special Interest Group suspended development of UWB as part of the alternative MAC/PHY, Bluetooth v3.0 + HS solution. A small, but significant, number of former WiMedia members had not and would not sign up to the necessary agreements for the IP transfer. As of 2009, the Bluetooth SIG was in the process of evaluating other options for its longer-term roadmap. Bluetooth 4.0 The Bluetooth SIG completed the Bluetooth Core Specification version 4.0 (called Bluetooth Smart), which has since been adopted. It includes Classic Bluetooth, Bluetooth high speed and Bluetooth Low Energy (BLE) protocols. Bluetooth high speed is based on Wi-Fi, and Classic Bluetooth consists of legacy Bluetooth protocols. Bluetooth Low Energy, previously known as Wibree, is a subset of Bluetooth v4.0 with an entirely new protocol stack for rapid build-up of simple links. As an alternative to the Bluetooth standard protocols that were introduced in Bluetooth v1.0 to v3.0, it is aimed at very low power applications powered by a coin cell. Chip designs allow for two types of implementation, dual-mode and single-mode, as well as enhanced versions of earlier implementations. The provisional names Wibree and Bluetooth ULP (Ultra Low Power) were abandoned and the BLE name was used for a while.
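The coin-cell claim can be made concrete with a rough duty-cycle estimate. All numbers below are illustrative assumptions (a nominal CR2032 capacity and plausible sleep and radio currents), not values taken from the specification:

```python
# Back-of-envelope battery-life estimate for a BLE sensor on a coin cell.
# All numbers are illustrative assumptions, not values from the Bluetooth spec.
battery_mah = 225.0   # nominal CR2032 capacity (assumed)
sleep_ua = 1.0        # sleep current in microamps (assumed)
event_ma = 8.0        # average current while the radio is on, in mA (assumed)
event_ms = 2.0        # radio-on time per advertising event, in ms (assumed)
interval_s = 1.0      # one advertising event per second (assumed)

duty = (event_ms / 1000.0) / interval_s          # fraction of time radio is on
avg_ma = event_ma * duty + sleep_ua / 1000.0     # duty-cycled average current
hours = battery_mah / avg_ma
print(f"average current: {avg_ma * 1000:.1f} uA -> roughly {hours / 24 / 365:.1f} years")
```

Under these assumptions the average drain is about 17 µA, on the order of a year and a half from a single cell; Classic Bluetooth's connection model keeps the radio active far more often and cannot approach such averages.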
In late 2011, new logos "Bluetooth Smart Ready" for hosts and "Bluetooth Smart" for sensors were introduced as the general-public face of BLE. Compared to Classic Bluetooth, Bluetooth Low Energy is intended to provide considerably reduced power consumption and cost while maintaining a similar communication range. In terms of lengthening the battery life of Bluetooth devices, BLE represents a significant progression. In a single-mode implementation, only the low energy protocol stack is implemented. Dialog Semiconductor, STMicroelectronics, AMICCOM, CSR, Nordic Semiconductor and Texas Instruments have released single mode Bluetooth Low Energy solutions. In a dual-mode implementation, Bluetooth Smart functionality is integrated into an existing Classic Bluetooth controller. The following semiconductor companies have announced the availability of chips meeting the standard: Qualcomm Atheros, CSR, Broadcom and Texas Instruments. The compliant architecture shares all of Classic Bluetooth's existing radio and functionality, resulting in a negligible cost increase compared to Classic Bluetooth. Cost-reduced single-mode chips, which enable highly integrated and compact devices, feature a lightweight Link Layer providing ultra-low power idle mode operation, simple device discovery, and reliable point-to-multipoint data transfer with advanced power-save and secure encrypted connections at the lowest possible cost. General improvements in version 4.0 include the changes necessary to facilitate BLE modes, as well as the Generic Attribute Profile (GATT) and Security Manager (SM) services with AES encryption. Core Specification Addendum 2 was unveiled in December 2011; it contains improvements to the audio Host Controller Interface and to the High Speed (802.11) Protocol Adaptation Layer. Core Specification Addendum 3 revision 2 has an adoption date of 24 July 2012. Core Specification Addendum 4 has an adoption date of 12 February 2013. Bluetooth 4.1 The Bluetooth SIG announced formal adoption of the Bluetooth v4.1 specification on 4 December 2013. This specification is an incremental software update to Bluetooth Specification v4.0, and not a hardware update. The update incorporates Bluetooth Core Specification Addenda (CSA 1, 2, 3 & 4) and adds new features that improve consumer usability. These include increased co-existence support for LTE and bulk data exchange rates, and they aid developer innovation by allowing devices to support multiple roles simultaneously. New features of this specification include: Mobile wireless service coexistence signaling Train nudging and generalized interlaced scanning Low Duty Cycle Directed Advertising L2CAP connection-oriented and dedicated channels with credit-based flow control Dual Mode and Topology LE Link Layer Topology 802.11n PAL Audio architecture updates for Wide Band Speech Fast data advertising interval Limited discovery time Some features were already available in a Core Specification Addendum (CSA) before the release of v4.1. Bluetooth 4.2 Released on 2 December 2014, it introduces features for the Internet of things.
The major areas of improvement are: Bluetooth Low Energy Secure Connection with Data Packet Length Extension to improve the cryptographic protocol Link Layer Privacy with Extended Scanner Filter Policies to improve data security Internet Protocol Support Profile (IPSP) version 6 ready for Bluetooth smart devices to support the Internet of things and home automation Older Bluetooth hardware may receive 4.2 features such as Data Packet Length Extension and improved privacy via firmware updates. Bluetooth 5 The Bluetooth SIG released Bluetooth 5 on 6 December 2016. Its new features are mainly focused on new Internet of Things technology. Sony was the first to announce Bluetooth 5.0 support with its Xperia XZ Premium in February 2017 during Mobile World Congress 2017. The Samsung Galaxy S8 launched with Bluetooth 5 support in April 2017. In September 2017, the iPhone 8, 8 Plus and iPhone X launched with Bluetooth 5 support as well. Apple also integrated Bluetooth 5 in its new HomePod offering released on 9 February 2018. Marketing drops the point number, so that it is just "Bluetooth 5" (unlike Bluetooth 4.0); the change is for the sake of "Simplifying our marketing, communicating user benefits more effectively and making it easier to signal significant technology updates to the market." Bluetooth 5 provides, for BLE, options that can double the data rate (2 Mbit/s burst) at the expense of range, or provide up to four times the range at the expense of data rate. The increase in transmissions could be important for Internet of Things devices, where many nodes connect throughout a whole house. Bluetooth 5 increases the capacity of connectionless services such as location-relevant navigation over low-energy Bluetooth connections. The major areas of improvement are: Slot Availability Mask (SAM) 2 Mbit/s PHY for LE Long Range High Duty Cycle Non-Connectable Advertising LE Advertising Extensions LE Channel Selection Algorithm #2 Features added in CSA5 – integrated in v5.0: Higher Output Power The following features were removed in this version of the specification: Park State Bluetooth 5.1 The Bluetooth SIG presented Bluetooth 5.1 on 21 January 2019. The major areas of improvement are: Angle of Arrival (AoA) and Angle of Departure (AoD), which are used for locating and tracking of devices Advertising Channel Index GATT caching Minor Enhancements batch 1: HCI support for debug keys in LE Secure Connections Sleep clock accuracy update mechanism ADI field in scan response data Interaction between QoS and Flow Specification Block Host channel classification for secondary advertising Allow the SID to appear in scan response reports Specify the behavior when rules are violated Periodic Advertising Sync Transfer Features added in Core Specification Addendum (CSA) 6 – integrated in v5.1: Models Mesh-based model hierarchy The following features were removed in this version of the specification: Unit keys Bluetooth 5.2 On 31 December 2019, the Bluetooth SIG published the Bluetooth Core Specification version 5.2. The new specification adds new features: Enhanced Attribute Protocol (EATT), an improved version of the Attribute Protocol (ATT) LE Power Control LE Isochronous Channels LE Audio, which is built on top of the new 5.2 features. BT LE Audio was announced in January 2020 at CES by the Bluetooth SIG. Compared to regular Bluetooth Audio, Bluetooth Low Energy Audio makes lower battery consumption possible and creates a standardized way of transmitting audio over BT LE.
Bluetooth LE Audio also allows one-to-many and many-to-one transmission, allowing multiple receivers from one source or one receiver for multiple sources, known as Auracast. It uses a new LC3 codec. BLE Audio will also add support for hearing aids. On 12 July 2022, the Bluetooth SIG announced the completion of Bluetooth LE Audio. The standard has a lower minimum latency claim of 20–30 ms, compared with 100–200 ms for Bluetooth Classic audio. At IFA in August 2023, Samsung announced support for Auracast through a software update for their Galaxy Buds2 Pro and two of their TVs. In October, users started receiving updates for the earbuds. Bluetooth 5.3 The Bluetooth SIG published the Bluetooth Core Specification version 5.3 on 13 July 2021. The feature enhancements of Bluetooth 5.3 are: Connection Subrating Periodic Advertisement Interval Channel Classification Enhancement Encryption key size control enhancements The following features were removed in this version of the specification: Alternate MAC and PHY (AMP) Extension Bluetooth 5.4 The Bluetooth SIG released the Bluetooth Core Specification version 5.4 on 7 February 2023. This new version adds the following features: Periodic Advertising with Responses (PAwR) Encrypted Advertising Data LE Security Levels Characteristic Advertising Coding Selection Bluetooth 6.0 The Bluetooth SIG released the Bluetooth Core Specification version 6.0 on 27 August 2024. This version adds the following features: Bluetooth Channel Sounding Decision-based advertising filtering Monitoring advertisers enhancement LL extended feature set Frame space update Technical information Architecture Software To extend the compatibility of Bluetooth devices, devices that adhere to the standard use an interface called HCI (Host Controller Interface) between the host and the controller. High-level protocols such as SDP (the protocol used to find other Bluetooth devices within the communication range, also responsible for detecting the function of devices in range), RFCOMM (the protocol used to emulate serial port connections) and TCS (the telephony control protocol) interact with the baseband controller through the L2CAP (Logical Link Control and Adaptation Protocol). The L2CAP protocol is responsible for the segmentation and reassembly of packets. Hardware The hardware that makes up the Bluetooth device consists, logically, of two parts, which may or may not be physically separate: a radio device, responsible for modulating and transmitting the signal, and a digital controller. The digital controller is typically a CPU, one of whose functions is to run a Link Controller and to interface with the host device; some functions may be delegated to hardware. The Link Controller is responsible for the processing of the baseband and the management of ARQ and physical layer FEC protocols. In addition, it handles the transfer functions (both asynchronous and synchronous), audio coding (e.g., the SBC codec) and data encryption. The CPU of the device is responsible for handling the host device's Bluetooth-related instructions, in order to simplify its operation. To do this, the CPU runs software called the Link Manager, which communicates with other devices through the LMP protocol. A Bluetooth device is a short-range wireless device. Bluetooth devices are fabricated on RF CMOS integrated circuit (RF circuit) chips.
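Of the link-controller duties listed above, the easiest to illustrate is the baseband's 1/3-rate FEC, which transmits each bit three times so the receiver can take a majority vote. The toy model below shows only that repetition principle (real packets also use a 2/3-rate shortened Hamming code and ARQ, which are not modeled here):

```python
# Toy model of Bluetooth's 1/3-rate baseband FEC: each bit is sent three
# times and the receiver majority-votes over each triple. Real packets add
# ARQ and a 2/3-rate Hamming code; this sketch shows only the repetition idea.

def fec13_encode(bits):
    return [b for b in bits for _ in range(3)]

def fec13_decode(coded):
    # Majority vote over each transmitted triple of bits.
    return [1 if sum(coded[i:i + 3]) >= 2 else 0 for i in range(0, len(coded), 3)]

payload = [1, 0, 1, 1, 0]
coded = fec13_encode(payload)
coded[4] ^= 1  # flip one bit in flight to simulate interference
assert fec13_decode(coded) == payload  # one error per triple is corrected
print("recovered:", fec13_decode(coded))
```

The design trade-off is visible in miniature: the repetition code triples the airtime per payload bit, which is why it is reserved for the most critical fields while bulk data relies on lighter coding plus retransmission.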
Bluetooth protocol stack Bluetooth is defined as a layered protocol architecture consisting of core protocols, cable replacement protocols, telephony control protocols, and adopted protocols. Mandatory protocols for all Bluetooth stacks are LMP, L2CAP and SDP. In addition, devices that communicate with Bluetooth almost universally can use these protocols: HCI and RFCOMM. Link Manager The Link Manager (LM) is the system that manages establishing the connection between devices. It is responsible for the establishment, authentication and configuration of the link. The Link Manager locates other managers and communicates with them via the management protocol of the LMP link. To perform its function as a service provider, the LM uses the services included in the Link Controller (LC). The Link Manager Protocol basically consists of several PDUs (Protocol Data Units) that are sent from one device to another. The following is a list of supported services: Transmission and reception of data. Name request. Request of the link addresses. Establishment of the connection. Authentication. Negotiation of link mode and connection establishment. Host Controller Interface The Host Controller Interface provides a command interface between the controller and the host. Logical Link Control and Adaptation Protocol The Logical Link Control and Adaptation Protocol (L2CAP) is used to multiplex multiple logical connections between two devices using different higher level protocols. It provides segmentation and reassembly of on-air packets. In Basic mode, L2CAP provides packets with a payload configurable up to 64 kB, with 672 bytes as the default MTU, and 48 bytes as the minimum mandatory supported MTU. In Retransmission and Flow Control modes, L2CAP can be configured either for isochronous data or reliable data per channel by performing retransmissions and CRC checks. Bluetooth Core Specification Addendum 1 adds two additional L2CAP modes to the core specification. These modes effectively deprecate the original Retransmission and Flow Control modes: Enhanced Retransmission Mode (ERTM) This mode is an improved version of the original retransmission mode. This mode provides a reliable L2CAP channel. Streaming Mode (SM) This is a very simple mode, with no retransmission or flow control. This mode provides an unreliable L2CAP channel. Reliability in any of these modes is optionally and/or additionally guaranteed by the lower layer Bluetooth BR/EDR air interface by configuring the number of retransmissions and flush timeout (time after which the radio flushes packets). In-order sequencing is guaranteed by the lower layer. Only L2CAP channels configured in ERTM or SM may be operated over AMP logical links. Service Discovery Protocol The Service Discovery Protocol (SDP) allows a device to discover services offered by other devices, and their associated parameters. For example, when you use a mobile phone with a Bluetooth headset, the phone uses SDP to determine which Bluetooth profiles the headset can use (Headset Profile, Hands Free Profile (HFP), Advanced Audio Distribution Profile (A2DP) etc.) and the protocol multiplexer settings needed for the phone to connect to the headset using each of them. Each service is identified by a universally unique identifier (UUID), with official services (Bluetooth profiles) assigned a short-form UUID (16 bits rather than the full 128). Radio Frequency Communications Radio Frequency Communications (RFCOMM) is a cable replacement protocol used for generating a virtual serial data stream.
RFCOMM provides for binary data transport and emulates EIA-232 (formerly RS-232) control signals over the Bluetooth baseband layer, i.e., it is a serial port emulation. RFCOMM provides a simple, reliable data stream to the user, similar to TCP. It is used directly by many telephony related profiles as a carrier for AT commands, as well as being a transport layer for OBEX over Bluetooth. Many Bluetooth applications use RFCOMM because of its widespread support and publicly available API on most operating systems. Additionally, applications that used a serial port to communicate can be quickly ported to use RFCOMM. Bluetooth Network Encapsulation Protocol The Bluetooth Network Encapsulation Protocol (BNEP) is used for transferring another protocol stack's data via an L2CAP channel. Its main purpose is the transmission of IP packets in the Personal Area Networking Profile. BNEP performs a similar function to SNAP in Wireless LAN. Audio/Video Control Transport Protocol The Audio/Video Control Transport Protocol (AVCTP) is used by the remote control profile to transfer AV/C commands over an L2CAP channel. The music control buttons on a stereo headset use this protocol to control the music player. Audio/Video Distribution Transport Protocol The Audio/Video Distribution Transport Protocol (AVDTP) is used by the Advanced Audio Distribution Profile (A2DP) to stream music to stereo headsets over an L2CAP channel; it is also intended for the Video Distribution Profile in Bluetooth transmission. Telephony Control Protocol The Telephony Control Protocol – Binary (TCS BIN) is the bit-oriented protocol that defines the call control signaling for the establishment of voice and data calls between Bluetooth devices. Additionally, "TCS BIN defines mobility management procedures for handling groups of Bluetooth TCS devices." TCS-BIN is only used by the cordless telephony profile, which failed to attract implementers. As such it is only of historical interest. Adopted protocols Adopted protocols are defined by other standards-making organizations and incorporated into Bluetooth's protocol stack, allowing Bluetooth to define protocols only when necessary. The adopted protocols include: Point-to-Point Protocol (PPP) Internet standard protocol for transporting IP datagrams over a point-to-point link. TCP/IP/UDP Foundation protocols for the TCP/IP protocol suite Object Exchange Protocol (OBEX) Session-layer protocol for the exchange of objects, providing a model for object and operation representation Wireless Application Environment/Wireless Application Protocol (WAE/WAP) WAE specifies an application framework for wireless devices and WAP is an open standard to provide mobile users access to telephony and information services. Baseband error correction Depending on packet type, individual packets may be protected by error correction, either 1/3 rate forward error correction (FEC) or 2/3 rate. In addition, packets with CRC will be retransmitted until acknowledged by automatic repeat request (ARQ). Setting up connections Any Bluetooth device in discoverable mode transmits the following information on demand: Device name Device class List of services Technical information (for example: device features, manufacturer, Bluetooth specification used, clock offset) Any device may perform an inquiry to find other devices to connect to, and any device can be configured to respond to such inquiries.
However, if the device trying to connect knows the address of the device, it always responds to direct connection requests and transmits the information shown in the list above if requested. Use of a device's services may require pairing or acceptance by its owner, but the connection itself can be initiated by any device and held until it goes out of range. Some devices can be connected to only one device at a time, and connecting to them prevents them from connecting to other devices and appearing in inquiries until they disconnect from the other device. Every device has a unique 48-bit address. However, these addresses are generally not shown in inquiries. Instead, friendly Bluetooth names are used, which can be set by the user. This name appears when another user scans for devices and in lists of paired devices. Most cellular phones have the Bluetooth name set to the manufacturer and model of the phone by default. Most cellular phones and laptops show only the Bluetooth names and special programs are required to get additional information about remote devices. This can be confusing as, for example, there could be several cellular phones in range named T610 (see Bluejacking). Pairing and bonding Motivation Many services offered over Bluetooth can expose private data or let a connecting party control the Bluetooth device. Security reasons make it necessary to recognize specific devices, and thus enable control over which devices can connect to a given Bluetooth device. At the same time, it is useful for Bluetooth devices to be able to establish a connection without user intervention (for example, as soon as in range). To resolve this conflict, Bluetooth uses a process called bonding, and a bond is generated through a process called pairing. The pairing process is triggered either by a specific request from a user to generate a bond (for example, the user explicitly requests to "Add a Bluetooth device"), or it is triggered automatically when connecting to a service where (for the first time) the identity of a device is required for security purposes. These two cases are referred to as dedicated bonding and general bonding respectively. Pairing often involves some level of user interaction. This user interaction confirms the identity of the devices. When pairing completes, a bond forms between the two devices, enabling those two devices to connect in the future without repeating the pairing process to confirm device identities. When desired, the user can remove the bonding relationship. Implementation During pairing, the two devices establish a relationship by creating a shared secret known as a link key. If both devices store the same link key, they are said to be paired or bonded. A device that wants to communicate only with a bonded device can cryptographically authenticate the identity of the other device, ensuring it is the same device it previously paired with. Once a link key is generated, an authenticated ACL link between the devices may be encrypted to protect exchanged data against eavesdropping. Users can delete link keys from either device, which removes the bond between the devices—so it is possible for one device to have a stored link key for a device it is no longer paired with. Bluetooth services generally require either encryption or authentication and as such require pairing before they let a remote device connect. 
Some services, such as the Object Push Profile, elect not to explicitly require authentication or encryption so that pairing does not interfere with the user experience associated with the service use-cases. Pairing mechanisms Pairing mechanisms changed significantly with the introduction of Secure Simple Pairing in Bluetooth v2.1. The following summarizes the pairing mechanisms: Legacy pairing: This is the only method available in Bluetooth v2.0 and before. Each device must enter a PIN code; pairing is only successful if both devices enter the same PIN code. Any 16-byte UTF-8 string may be used as a PIN code; however, not all devices may be capable of entering all possible PIN codes. Limited input devices: The obvious example of this class of device is a Bluetooth hands-free headset, which generally has few inputs. These devices usually have a fixed PIN, for example "0000" or "1234", that is hard-coded into the device. Numeric input devices: Mobile phones are classic examples of these devices. They allow a user to enter a numeric value up to 16 digits in length. Alpha-numeric input devices: PCs and smartphones are examples of these devices. They allow a user to enter full UTF-8 text as a PIN code. When pairing with a less capable device, the user must be aware of the input limitations on the other device; there is no mechanism available for a capable device to determine how it should limit the available input a user may use. Secure Simple Pairing (SSP): This is required by Bluetooth v2.1, although a Bluetooth v2.1 device may only use legacy pairing to interoperate with a v2.0 or earlier device. Secure Simple Pairing uses a form of public-key cryptography, and some types can help protect against man-in-the-middle (MITM) attacks. SSP has the following authentication mechanisms: Just works: As the name implies, this method just works, with no user interaction. However, a device may prompt the user to confirm the pairing process. This method is typically used by headsets with minimal I/O capabilities, and is more secure than the fixed PIN mechanism this limited set of devices uses for legacy pairing. This method provides no man-in-the-middle (MITM) protection. Numeric comparison: If both devices have a display, and at least one can accept a binary yes/no user input, they may use Numeric Comparison. This method displays a 6-digit numeric code on each device. The user should compare the numbers to ensure they are identical. If the comparison succeeds, the user(s) should confirm pairing on the device(s) that can accept an input. This method provides MITM protection, assuming the user confirms on both devices and actually performs the comparison properly. Passkey Entry: This method may be used between a device with a display and a device with numeric keypad entry (such as a keyboard), or two devices with numeric keypad entry. In the first case, the display presents a 6-digit numeric code to the user, who then enters the code on the keypad. In the second case, the user of each device enters the same 6-digit number. Both of these cases provide MITM protection. Out of band (OOB): This method uses an external means of communication, such as near-field communication (NFC), to exchange some information used in the pairing process. Pairing is completed using the Bluetooth radio, but requires information from the OOB mechanism. This provides only the level of MITM protection that is present in the OOB mechanism.
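The numeric-comparison flow can be sketched in miniature. This is a toy model only: real SSP performs elliptic-curve Diffie–Hellman on the P-192 or P-256 curves and uses the f-functions defined in the Core Specification, whereas the sketch below substitutes classic modular Diffie–Hellman parameters and SHA-256 purely to show the shape of the user-visible 6-digit check.

```python
# Toy sketch of the numeric-comparison idea in Secure Simple Pairing.
# NOT the real SSP algorithm: real SSP uses ECDH (P-192/P-256) and the
# f-functions from the Core Specification; this substitutes modular DH
# and SHA-256 only to illustrate the user-visible 6-digit check.
import hashlib
import secrets

P = 2**127 - 1  # toy prime modulus (assumed for illustration)
G = 5           # toy generator (assumed for illustration)

a = secrets.randbelow(P)                      # device A's private key
b = secrets.randbelow(P)                      # device B's private key
pub_a, pub_b = pow(G, a, P), pow(G, b, P)     # public keys, sent in the clear
na, nb = secrets.token_bytes(16), secrets.token_bytes(16)  # exchanged nonces

def confirm_value(p1, p2, n1, n2):
    # Both devices hash the same public inputs and show 6 digits to the user.
    h = hashlib.sha256(p1.to_bytes(16, "big") + p2.to_bytes(16, "big") + n1 + n2)
    return int.from_bytes(h.digest(), "big") % 1_000_000

# A man-in-the-middle who substituted its own public key on either side would,
# with overwhelming probability, cause the two displays to disagree.
print(f"both displays should show: {confirm_value(pub_a, pub_b, na, nb):06d}")
```

The user's comparison is the security step: an attacker tampering with the exchanged public keys cannot force both independently computed values to match.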
SSP is considered simple for the following reasons: In most cases, it does not require a user to generate a passkey. For use cases not requiring MITM protection, user interaction can be eliminated. For numeric comparison, MITM protection can be achieved with a simple equality comparison by the user. Using OOB with NFC enables pairing when devices simply get close, rather than requiring a lengthy discovery process. Security concerns Prior to Bluetooth v2.1, encryption is not required and can be turned off at any time. Moreover, the encryption key is only good for approximately 23.5 hours; using a single encryption key longer than this time allows simple XOR attacks to retrieve the encryption key. Turning off encryption is required for several normal operations, so it is problematic to detect if encryption is disabled for a valid reason or because of a security attack. Bluetooth v2.1 addresses this in the following ways: Encryption is required for all non-SDP (Service Discovery Protocol) connections A new Encryption Pause and Resume feature is used for all normal operations that require that encryption be disabled. This enables easy distinction of normal operation from security attacks. The encryption key must be refreshed before it expires. Link keys may be stored on the device file system, not on the Bluetooth chip itself. Many Bluetooth chip manufacturers let link keys be stored on the device; however, if the device is removable, this means that the link key moves with the device. Security Overview Bluetooth implements confidentiality, authentication and key derivation with custom algorithms based on the SAFER+ block cipher. Bluetooth key generation is generally based on a Bluetooth PIN, which must be entered into both devices. This procedure might be modified if one of the devices has a fixed PIN (e.g., for headsets or similar devices with a restricted user interface). During pairing, an initialization key or master key is generated, using the E22 algorithm. The E0 stream cipher is used for encrypting packets, granting confidentiality, and is based on a shared cryptographic secret, namely a previously generated link key or master key. Those keys, used for subsequent encryption of data sent via the air interface, rely on the Bluetooth PIN, which has been entered into one or both devices. An overview of Bluetooth vulnerability exploits was published in 2007 by Andreas Becker. In September 2008, the National Institute of Standards and Technology (NIST) published a Guide to Bluetooth Security as a reference for organizations. It describes Bluetooth security capabilities and how to secure Bluetooth technologies effectively. While Bluetooth has its benefits, it is susceptible to denial-of-service attacks, eavesdropping, man-in-the-middle attacks, message modification, and resource misappropriation. Users and organizations must evaluate their acceptable level of risk and incorporate security into the lifecycle of Bluetooth devices. To help mitigate risks, included in the NIST document are security checklists with guidelines and recommendations for creating and maintaining secure Bluetooth piconets, headsets, and smart card readers. Bluetooth v2.1 – finalized in 2007 with consumer devices first appearing in 2009 – makes significant changes to Bluetooth's security, including pairing. See the pairing mechanisms section for more about these changes. Bluejacking Bluejacking is the sending of either a picture or a message from one user to an unsuspecting user through Bluetooth wireless technology.
Common applications include short messages, e.g., "You've just been bluejacked!" Bluejacking does not involve the removal or alteration of any data from the device. Some form of DoS is also possible, even in modern devices, by sending unsolicited pairing requests in rapid succession; this becomes disruptive because most systems display a full-screen notification for every connection request, interrupting every other activity, especially on less powerful devices. History of security concerns 2001–2004 In 2001, Jakobsson and Wetzel from Bell Laboratories discovered flaws in the Bluetooth pairing protocol and also pointed to vulnerabilities in the encryption scheme. In 2003, Ben and Adam Laurie from A.L. Digital Ltd. discovered that serious flaws in some poor implementations of Bluetooth security may lead to disclosure of personal data. In a subsequent experiment, Martin Herfurt from the trifinite.group was able to perform a field trial at the CeBIT fairgrounds, showing the importance of the problem to the world. A new attack called BlueBug was used for this experiment. In 2004 the first purported virus using Bluetooth to spread itself among mobile phones appeared on the Symbian OS. The virus was first described by Kaspersky Lab and requires users to confirm the installation of unknown software before it can propagate. The virus was written as a proof-of-concept by a group of virus writers known as "29A" and sent to anti-virus groups. Thus, it should be regarded as a potential (but not real) security threat to Bluetooth technology or Symbian OS since the virus has never spread outside of this system. In August 2004, a world-record-setting experiment (see also Bluetooth sniping) showed that the range of Class 2 Bluetooth radios could be greatly extended with directional antennas and signal amplifiers. This poses a potential security threat because it enables attackers to access vulnerable Bluetooth devices from a distance beyond expectation. The attacker must also be able to receive information from the victim to set up a connection. No attack can be made against a Bluetooth device unless the attacker knows its Bluetooth address and which channels to transmit on, although these can be deduced within a few minutes if the device is in use. 2005 In January 2005, a mobile malware worm known as Lasco surfaced. The worm began targeting mobile phones using Symbian OS (Series 60 platform) using Bluetooth-enabled devices to replicate itself and spread to other devices. The worm is self-installing and begins once the mobile user approves the transfer of the file (Velasco.sis) from another device. Once installed, the worm begins looking for other Bluetooth-enabled devices to infect. Additionally, the worm infects other .SIS files on the device, allowing replication to another device through the use of removable media (Secure Digital, CompactFlash, etc.). The worm can render the mobile device unstable. In April 2005, University of Cambridge security researchers published results of their actual implementation of passive attacks against the PIN-based pairing between commercial Bluetooth devices. They confirmed that attacks are practicably fast, and the Bluetooth symmetric key establishment method is vulnerable. To rectify this vulnerability, they designed an implementation that showed that stronger, asymmetric key establishment is feasible for certain classes of devices, such as mobile phones.
In June 2005, Yaniv Shaked and Avishai Wool published a paper describing both passive and active methods for obtaining the PIN for a Bluetooth link. The passive attack allows a suitably equipped attacker to eavesdrop on communications and spoof them if the attacker was present at the time of initial pairing. The active method makes use of a specially constructed message that must be inserted at a specific point in the protocol, to make the master and slave repeat the pairing process. After that, the first method can be used to crack the PIN. This attack's major weakness is that it requires the user of the devices under attack to re-enter the PIN during the attack when the device prompts them to. Also, this active attack probably requires custom hardware, since most commercially available Bluetooth devices are not capable of the timing necessary. In August 2005, police in Cambridgeshire, England, issued warnings about thieves using Bluetooth-enabled phones to track other devices left in cars. Police are advising users to ensure that any mobile networking connections are de-activated if laptops and other devices are left in this way. 2006 In April 2006, researchers from Secure Network and F-Secure published a report that warns of the large number of devices left in a visible state, and issued statistics on the spread of various Bluetooth services and the ease of spread of an eventual Bluetooth worm. In October 2006, at the Luxembourgish Hack.lu Security Conference, Kevin Finistere and Thierry Zoller demonstrated and released a remote root shell via Bluetooth on Mac OS X v10.3.9 and v10.4. They also demonstrated the first Bluetooth PIN and link-key cracker, which is based on the research of Wool and Shaked. 2017 In April 2017, security researchers at Armis discovered multiple exploits in the Bluetooth software in various platforms, including Microsoft Windows, Linux, Apple iOS, and Google Android. These vulnerabilities are collectively called "BlueBorne". The exploits allow an attacker to connect to devices or systems without authentication and can give them "virtually full control over the device". Armis contacted Google, Microsoft, Apple, Samsung and Linux developers allowing them to patch their software before the coordinated announcement of the vulnerabilities on 12 September 2017. 2018 In July 2018, Lior Neumann and Eli Biham, researchers at the Technion – Israel Institute of Technology, identified a security vulnerability in the latest Bluetooth pairing procedures: Secure Simple Pairing and LE Secure Connections. Also, in October 2018, Karim Lounis, a network security researcher at Queen's University, identified a security vulnerability, called CDV (Connection Dumping Vulnerability), on various Bluetooth devices that allows an attacker to tear down an existing Bluetooth connection and cause the deauthentication and disconnection of the involved devices. The researcher demonstrated the attack on various devices of different categories and from different manufacturers. 2019 In August 2019, security researchers at the Singapore University of Technology and Design, Helmholtz Center for Information Security, and University of Oxford discovered a vulnerability, called KNOB (Key Negotiation of Bluetooth), in the key negotiation that would "brute force the negotiated encryption keys, decrypt the eavesdropped ciphertext, and inject valid encrypted messages (in real-time)". Google released an Android security patch on 5 August 2019, which removed this vulnerability.
2023 In November 2023, researchers from Eurecom revealed a new class of attacks known as BLUFFS (Bluetooth Low Energy Forward and Future Secrecy Attacks). These 6 new attacks expand on and work in conjunction with the previously known KNOB and BIAS (Bluetooth Impersonation AttackS) attacks. While the previous KNOB and BIAS attacks allowed an attacker to decrypt and spoof Bluetooth packets within a session, BLUFFS extends this capability to all sessions generated by a device (including past, present, and future). All devices running Bluetooth versions 4.2 up to and including 5.4 are affected. Health concerns Bluetooth uses the radio frequency spectrum in the 2.402 GHz to 2.480 GHz range, which is non-ionizing radiation, of similar bandwidth to that used by wireless and mobile phones. No specific harm has been demonstrated, even though wireless transmission has been included by IARC in the possible carcinogen list. Maximum power output from a Bluetooth radio is 100 mW for Class 1, 2.5 mW for Class 2, and 1 mW for Class 3 devices. Even the maximum power output of Class 1 is a lower level than the lowest-powered mobile phones. UMTS and W-CDMA output 250 mW, GSM1800/1900 outputs 1000 mW, and GSM850/900 outputs 2000 mW. Award programs The Bluetooth Innovation World Cup, a marketing initiative of the Bluetooth Special Interest Group (SIG), was an international competition that encouraged the development of innovations for applications leveraging Bluetooth technology in sports, fitness and health care products. The competition aimed to stimulate new markets. The Bluetooth Innovation World Cup morphed into the Bluetooth Breakthrough Awards in 2013. Bluetooth SIG subsequently launched the Imagine Blue Award in 2016 at Bluetooth World. The Bluetooth Breakthrough Awards program highlights the most innovative products and applications available today, prototypes coming soon, and student-led projects in the making. See also ANT+ Bluetooth stack – building blocks that make up the various implementations of the Bluetooth protocol List of Bluetooth profiles – features used within the Bluetooth stack Bluesniping BlueSoleil – proprietary Bluetooth driver Bluetooth Low Energy beacons (AltBeacon, iBeacon, Eddystone) Bluetooth mesh networking Continua Health Alliance DASH7 Audio headset Wi-Fi hotspot Java APIs for Bluetooth Key finder Li-Fi List of Bluetooth protocols MyriaNed Near-field communication NearLink RuBee – secure wireless protocol alternative Tethering Thread (network protocol) Wi-Fi HaLow Zigbee – low-power lightweight wireless protocol in the ISM band based on IEEE 802.15.4 Notes References External links Specifications at Bluetooth SIG Bluetooth Mobile computers Networking standards Wireless communication systems Telecommunications-related introductions in 1989 Swedish inventions Dutch inventions
Bluetooth
Technology,Engineering
13,375
25,776,647
https://en.wikipedia.org/wiki/D-Deprenyl
D-Deprenyl, also known as dextro-N-propargyl-N-methylamphetamine, is an MAO-B inhibitor that metabolizes into D-amphetamine and D-methamphetamine and is therefore also a norepinephrine–dopamine releasing agent. It is one of the two enantiomers of deprenyl and is the opposite enantiomer of L-deprenyl (selegiline). L-Deprenyl, also an MAO-B inhibitor, metabolizes to L-amphetamine and L-methamphetamine, which are both norepinephrine releasing agents. In contrast, D-deprenyl additionally has dopaminergic effects and has been found to be reinforcing in scientific research, whereas L-deprenyl is not known to have any appreciable psychological reinforcement. In addition to its actions as an MAO-B inhibitor and NDRA, D-deprenyl has been found to bind with high affinity to the σ1 receptor (Ki = 79 nM), similarly to various other amphetamine derivatives. Its L-isomer, selegiline, binds with 3.5-fold lower affinity in comparison. See also Clorgiline Tranylcypromine References Abandoned drugs Enantiopure drugs Methamphetamines Monoamine oxidase inhibitors Norepinephrine-dopamine releasing agents Phenethylamines Prodrugs Propargyl compounds Sigma receptor ligands
D-Deprenyl
Chemistry
337
57,638,831
https://en.wikipedia.org/wiki/NGC%203315
NGC 3315 is a lenticular galaxy located about 185 million light-years away in the constellation Hydra. It was discovered by astronomer Edward Austin on March 24, 1870. It is a member of the Hydra Cluster. See also List of NGC objects (3001–4000) NGC 3305 References External links Hydra Cluster Hydra (constellation) Lenticular galaxies 3315 31540 Astronomical objects discovered in 1870 Discoveries by Edward Austin
NGC 3315
Astronomy
85
25,255,717
https://en.wikipedia.org/wiki/Apocholate%20citrate%20agar
Apocholate citrate agar (ACA) is a selective growth medium used to isolate Shigella and Salmonella bacteria. The name derives from its apocholate and citrate content in an agar base. References Cell culture media Microbiological media
Apocholate citrate agar
Biology
52
51,552,849
https://en.wikipedia.org/wiki/Mieczys%C5%82aw%20Warmus
Mieczysław Warmus (born 1 June 1918 in Dobrowlany; died 20 September 2007 in Australia) was a Polish mathematician, a pioneer of computer science in Poland, professor, university lecturer, and author of over a hundred scientific papers. References Homepage Biography (Prof. Mieczysław Warmus (1918–2007)) Biography (Jadwiga Dutkiewicz, Mieczysław Warmus: Życie i praca naukowa [Life and Scientific Work]) 20th-century Polish mathematicians 1918 births 2007 deaths Recipients of the Medal of the 10th Anniversary of the People's Republic of Poland
Mieczysław Warmus
Technology
121
47,885,299
https://en.wikipedia.org/wiki/HD%2082514
HD 82514, also known as HR 3790, is a solitary, orange-hued star located in the southern constellation Antlia. It has an apparent magnitude of 5.86, allowing it to be faintly seen with the naked eye. Based on parallax measurements from the Gaia spacecraft, it is estimated to be 279 light years away from the Solar System, and it is receding from the Sun, as indicated by its heliocentric radial velocity. HD 82514 has a stellar classification of K3 III, indicating that it is an evolved red giant. It has a comparable mass to the Sun, but as a result of its evolved state, its radius has become enlarged. It radiates at 65 times the luminosity of the Sun from its photosphere, at the relatively cool effective temperature characteristic of a K-type giant. It spins slowly, with a low projected rotational velocity, which is common for most giant stars. HD 82514 has an iron abundance 44% above solar levels, making it metal enriched. The star is believed to be a member of the thick disk. There is a 13th magnitude companion, designated CD −35°5751BC, which is itself a double star consisting of two low-mass stars separated by 2.3″ from each other. However, that system is not related to HD 82514, having a smaller parallax. HD 82514 is located within the boundaries of the open cluster Turner 5; however, it is only a field star. References Antlia K-type giants 082514 3790 046736 CD-35 05751 Double stars Antliae, 11
HD 82514
Astronomy
342
32,424,092
https://en.wikipedia.org/wiki/Neoxanthin
Neoxanthin is a carotenoid and xanthophyll. In plants, it is an intermediate in the biosynthesis of the plant hormone abscisic acid. It is often present in two forms: the all-trans and 9-cis isomers. It is produced from violaxanthin, but a suspected neoxanthin synthase has yet to be confirmed. Two different genes have been confirmed to be involved in the conversion of violaxanthin to neoxanthin in Arabidopsis and tomato. It has a specific role in protection against photooxidative stress. It is a major xanthophyll found in green leafy vegetables such as spinach. References Carotenoids Epoxides Dienes
Neoxanthin
Biology
153
22,190
https://en.wikipedia.org/wiki/Organic%20electronics
Organic electronics is a field of materials science concerning the design, synthesis, characterization, and application of organic molecules or polymers that show desirable electronic properties such as conductivity. Unlike conventional inorganic conductors and semiconductors, organic electronic materials are constructed from organic (carbon-based) molecules or polymers using synthetic strategies developed in the context of organic chemistry and polymer chemistry. One of the promised benefits of organic electronics is their potential low cost compared to traditional electronics. Attractive properties of polymeric conductors include their electrical conductivity (which can be varied by the concentrations of dopants) and comparatively high mechanical flexibility. Challenges to the implementation of organic electronic materials are their inferior thermal stability, high cost, and diverse fabrication issues. History Electrically conductive polymers Traditional conductive materials are inorganic, especially metals such as copper and aluminum as well as many alloys. In 1862, Henry Letheby described polyaniline, which was subsequently shown to be electrically conductive. Work on other polymeric organic materials began in earnest in the 1960s. For example, in 1963 a derivative of tetraiodopyrrole was shown to exhibit conductivity of 1 S/cm (S = siemens). In 1977, it was discovered that oxidation enhanced the conductivity of polyacetylene. The 2000 Nobel Prize in Chemistry was awarded to Alan J. Heeger, Alan G. MacDiarmid, and Hideki Shirakawa jointly for their work on polyacetylene and related conductive polymers. Many families of electrically conducting polymers have been identified, including polythiophene, polyphenylene sulfide, and others. J.E. Lilienfeld first proposed the field-effect transistor in 1930, but the first OFET was not reported until 1987, when Koezuka et al. constructed one using polythiophene, which shows extremely high conductivity. Other conductive polymers have been shown to act as semiconductors, and newly synthesized and characterized compounds are reported weekly in prominent research journals. Many review articles exist documenting the development of these materials. In 1987, the first organic diode was produced at Eastman Kodak by Ching W. Tang and Steven Van Slyke. Electrically conductive charge transfer salts In the 1950s, organic molecules were shown to exhibit electrical conductivity. Specifically, the organic compound pyrene was shown to form semiconducting charge-transfer complex salts with halogens. In 1972, researchers found metallic conductivity (conductivity comparable to a metal) in the charge-transfer complex TTF-TCNQ. Light and electrical conductivity André Bernanose was the first person to observe electroluminescence in organic materials. Ching W. Tang and Steven Van Slyke reported fabrication of the first practical OLED device in 1987. The OLED device incorporated a double-layer structure motif composed of copper phthalocyanine and a derivative of perylenetetracarboxylic dianhydride. In 1990, a polymer light-emitting diode was demonstrated by Bradley, Burroughes, and Friend. Moving from molecular to macromolecular materials solved the problems previously encountered with the long-term stability of the organic films and enabled high-quality films to be easily made.
In the late 1990s, highly efficient electroluminescent dopants were shown to dramatically increase the light-emitting efficiency of OLEDs. These results suggested that electroluminescent materials could displace traditional hot-filament lighting. Subsequent research developed multilayer polymers, and the new field of plastic electronics and organic light-emitting diode (OLED) research and device production grew rapidly. Conductive organic materials Organic conductive materials can be grouped into two main classes: polymers and conductive molecular solids and salts. Polycyclic aromatic compounds such as pentacene and rubrene often form semiconducting materials when partially oxidized. Conductive polymers are typically intrinsically conductive or at least semiconducting. They sometimes show mechanical properties comparable to those of conventional organic polymers. Both organic synthesis and advanced dispersion techniques can be used to tune the electrical properties of conductive polymers, unlike typical inorganic conductors. Well-studied classes of conductive polymers include polyacetylene, polypyrrole, polythiophenes, and polyaniline. Poly(p-phenylene vinylene) and its derivatives are electroluminescent semiconducting polymers. Poly(3-alkylthiophenes) have been incorporated into prototypes of solar cells and transistors. Organic light-emitting diode An OLED (organic light-emitting diode) consists of a thin film of organic material that emits light under stimulation by an electric current. A typical OLED consists of an anode, a cathode, OLED organic material and a conductive layer. OLED organic materials can be divided into two major families: small-molecule-based and polymer-based. Small-molecule OLEDs (SM-OLEDs) include tris(8-hydroxyquinolinato)aluminium, fluorescent and phosphorescent dyes, and conjugated dendrimers. Fluorescent dyes can be selected according to the desired range of emission wavelengths; compounds like perylene and rubrene are often used. Devices based on small molecules are usually fabricated by thermal evaporation under vacuum. While this method enables the formation of well-controlled, homogeneous films, it is hampered by high cost and limited scalability. Polymer light-emitting diodes (PLEDs) are generally more efficient than SM-OLEDs. Common polymers used in PLEDs include derivatives of poly(p-phenylene vinylene) and polyfluorene. The emitted color is determined by the structure of the polymer. Compared to thermal evaporation, solution-based methods are more suited to creating films with large dimensions. Organic field-effect transistor An organic field-effect transistor (OFET) is a field-effect transistor utilizing organic molecules or polymers as the active semiconducting layer. A field-effect transistor (FET) is any semiconductor device that utilizes an electric field to control the shape of a channel of one type of charge carrier, thereby changing its conductivity. The two major classes of FET are n-type and p-type, classified according to the type of charge carried. In the case of organic FETs (OFETs), p-type OFET compounds are generally more stable than n-type due to the susceptibility of the latter to oxidative damage. As with OLEDs, some OFETs are molecular and some are polymer-based systems. Rubrene-based OFETs show high carrier mobility of 20–40 cm2/(V·s). Another popular OFET material is pentacene.
Due to its low solubility in most organic solvents, it is difficult to fabricate thin film transistors (TFTs) from pentacene itself using conventional spin-coating or dip-coating methods, but this obstacle can be overcome by using the derivative TIPS-pentacene. Organic electronic devices Organic solar cells could cut the cost of solar power compared with conventional solar-cell manufacturing. Silicon thin-film solar cells on flexible substrates allow a significant cost reduction of large-area photovoltaics for several reasons: The so-called 'roll-to-roll' deposition on flexible sheets is much easier to realize in terms of technological effort than deposition on fragile and heavy glass sheets. Transport and installation of lightweight flexible solar cells also saves cost as compared to cells on glass. Inexpensive polymeric substrates like polyethylene terephthalate (PET) or polycarbonate (PC) have the potential for further cost reduction in photovoltaics. Protomorphous solar cells prove to be a promising concept for efficient and low-cost photovoltaics on cheap and flexible substrates for large-area production as well as small and mobile applications. One advantage of printed electronics is that different electrical and electronic components can be printed on top of each other, saving space and increasing reliability, and sometimes they are all transparent. One ink must not damage another, and low-temperature annealing is vital if low-cost flexible materials such as paper and plastic film are to be used. There is much sophisticated engineering and chemistry involved here, with iTi, Pixdro, Asahi Kasei, Merck, BASF, HC Starck, Sunew, Hitachi Chemical, and Frontier Carbon Corporation among the leaders. Electronic devices based on organic compounds are now widely used, with many new products under development. Sony reported the first full-color, video-rate, flexible, plastic display made purely of organic materials; television screens based on OLED materials, biodegradable electronics based on organic compounds, and low-cost organic solar cells are also available. Fabrication methods Small molecule semiconductors are often insoluble, necessitating deposition via vacuum sublimation. Devices based on conductive polymers can be prepared by solution processing methods. Both solution processing and vacuum-based methods produce amorphous and polycrystalline films with a variable degree of disorder. "Wet" coating techniques require polymers to be dissolved in a volatile solvent, filtered and deposited onto a substrate. Common examples of solvent-based coating techniques include drop casting, spin-coating, doctor-blading, inkjet printing and screen printing. Spin-coating is a widely used technique for small-area thin film production. It may result in a high degree of material loss. The doctor-blade technique results in minimal material loss and was primarily developed for large-area thin film production. Vacuum-based thermal deposition of small molecules requires evaporation of molecules from a hot source. The molecules are then transported through vacuum onto a substrate. The process of condensing these molecules on the substrate surface results in thin film formation. Wet coating techniques can in some cases be applied to small molecules depending on their solubility. Organic solar cells Organic semiconductor diodes convert light into electricity. Five organic photovoltaic materials are in common use.
Electrons in these organic molecules can be delocalized in a π orbital with a corresponding π* antibonding orbital. The difference in energy between the π orbital, or highest occupied molecular orbital (HOMO), and the π* orbital, or lowest unoccupied molecular orbital (LUMO), is called the band gap of organic photovoltaic materials. Typically, the band gap lies in the range of 1–4 eV. Differences in band gap, which reflect differences in chemical structure, lead to different forms of organic solar cells. Different forms of solar cells include single-layer organic photovoltaic cells, bilayer organic photovoltaic cells and heterojunction photovoltaic cells. However, all three of these types of solar cells share the approach of sandwiching the organic electronic layer between two metallic conductors, typically indium tin oxide. Organic field-effect transistors An organic field-effect transistor is a three-terminal device (source, drain and gate). The charge carriers move between source and drain, and the gate serves to control the path's conductivity. There are mainly two types of organic field-effect transistor, based on the semiconducting layer's charge transport, namely p-type (such as dinaphtho[2,3-b:2′,3′-f]thieno[3,2-b]thiophene, DNTT) and n-type (such as phenyl-C61-butyric acid methyl ester, PCBM). Certain organic semiconductors can also present both p-type and n-type (i.e., ambipolar) characteristics. Such technology allows for the fabrication of large-area, flexible, low-cost electronics. One of the main advantages is that, being mainly a low-temperature process compared to CMOS, different types of materials can be utilized. This in turn makes them good candidates for sensing. Features Conductive polymers are lighter, more flexible, and less expensive than inorganic conductors. This makes them a desirable alternative in many applications. It also creates the possibility of new applications that would be impossible using copper or silicon. Organic electronics includes not only organic semiconductors, but also organic dielectrics, conductors and light emitters. New applications include smart windows and electronic paper. Conductive polymers are expected to play an important role in the emerging science of molecular computers. See also Annealing Bioplastic Carbon nanotube Circuit deposition Conductive ink Flexible electronics Laminar Melanin Organic field-effect transistor (OFET) Organic semiconductor Organic light-emitting diode Photodetector Printed electronics Radio frequency identification Radio tag Schön scandal Spin coating References Further reading Grasser, Tibor; Meller, Gregor; Baldo, Marc (eds.) (2010) Organic Electronics, Springer, Heidelberg. (Print) 978-3-642-04538-7 (Online) Electronic Processes in Organic Crystals and Polymers, 2nd ed. by Martin Pope and Charles E. Swenberg, Oxford University Press (1999). Handbook of Organic Electronics and Photonics (3-Volume Set) by Hari Singh Nalwa, American Scientific Publishers (2008). External links orgworld – Organic Semiconductor World homepage. Artificial materials
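The HOMO–LUMO gap quoted above sets the longest wavelength of light a material can absorb, through E = hc/λ, or λ(nm) ≈ 1240 / E(eV). The sketch below is purely illustrative; the gap values fed to it are assumptions chosen to span the 1–4 eV range cited in the text, not measured data for any particular material.

```python
# Illustrative conversion of a HOMO-LUMO gap (eV) to the absorption
# onset wavelength, using lambda(nm) = h*c / E = 1239.84 / E(eV).
H_C_EV_NM = 1239.84  # Planck constant times speed of light, in eV*nm

def absorption_onset_nm(band_gap_ev: float) -> float:
    """Longest absorbable wavelength (nm) for a given band gap (eV)."""
    if band_gap_ev <= 0:
        raise ValueError("band gap must be positive")
    return H_C_EV_NM / band_gap_ev

# Hypothetical gaps spanning the 1-4 eV range mentioned above.
for gap_ev in (1.0, 2.0, 3.0, 4.0):
    print(f"{gap_ev:.1f} eV -> onset ~{absorption_onset_nm(gap_ev):.0f} nm")
```

A gap near 2 eV gives an onset around 620 nm, in the visible range, which is consistent with the use of such materials in photovoltaics.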
Organic electronics
Physics
2,756
36,802,569
https://en.wikipedia.org/wiki/Andrew%20Bruce%20Holmes
Andrew Bruce Holmes (born 5 September 1943) is an Australian and British senior research chemist and professor at the Bio21 Institute, Melbourne, Australia, and the past President of the Australian Academy of Science. His research interests lie in the synthesis of biologically active natural products (spanning therapeutic materials to new biotechnological probes) and optoelectronic polymers (with applications to electroluminescent flexible displays and organic solar cells). Education Holmes' undergraduate studies and master's research were conducted at the University of Melbourne, where he was resident at Ormond College. Travelling to the UK on a Shell Overseas Science Scholarship, he performed his PhD work at University College London under the supervision of Franz Sondheimer. Career and research As a postdoctoral researcher, Holmes worked on the total synthesis of Vitamin B12 with Albert Eschenmoser. In 1972 he was appointed as a demonstrator to the University of Cambridge, where he stayed for 32 years, ultimately as Professor of Organic and Polymer Chemistry and Director of the Melville Laboratory for Polymer Synthesis, overseeing the laboratory's founding and initial decade. Holmes' early work at Cambridge expanded his interest in new techniques for synthesising small molecules that are biologically active and practically useful, including natural products (such as alkaloids) and peptidomimetics. In 1989, during systematic characterisation of a newly synthesised conductive polymer, Chloe Jennings, working in Holmes' research group, observed that the polymer emitted light when a current was passed through it. An intensive period of research in Holmes' group, and other polymer chemistry groups, led to the discovery of differently coloured light-emitting polymers that spanned the visible colour spectrum. A subsequent collaboration with physicist Richard Friend and co-workers at Cambridge's Cavendish Laboratory revealed the potential of these conjugated polymers for applications such as organic LEDs and rollable displays. Friend and Holmes co-founded the company Cambridge Display Technology for commercial exploitation of these materials – an early success story of Silicon Fen. In 2004 Holmes returned to his native Australia on a Federation Fellowship, to lead a group at the newly established Bio21 Institute. He has pursued the application of photovoltaic polymers to solar energy, and was instrumental in forming the Victorian Organic Solar Cell Consortium. He has also continued to develop new syntheses of novel, biologically useful materials. An example is his group's synthesis of phosphoinositides, amphiphilic phospholipids situated in the cell membrane, which collaborators at the Ludwig Institute for Cancer Research have used to probe the dynamics of signal transduction (intercellular signalling being an important component of many aspects of cell biology, including that of tumors). Holmes has served on the editorial or advisory boards of numerous learned scientific journals, including Organic Letters, Chemical Communications and Angewandte Chemie. In 2006, his 1998 paper on electroluminescent polymers was the most highly cited paper in Angewandte Chemie's 120-year history. By August 2012 he had authored over 490 scientific papers and 52 patent applications. In 2014 he was appointed as President of the Australian Academy of Science. Awards and honours Holmes was elected Fellow of the Royal Society (FRS) in 2000, and Fellow of the Australian Academy of Science in 2006.
In 2003, he received the Descartes Prize and in 2012 the Royal Medal of the Royal Society. His formal titles include Chemistry alumnus, Laureate Professor of Chemistry, University of Melbourne; CSIRO Fellow, CSIRO Division of Materials Science and Engineering; Emeritus Professor and Distinguished Research Fellow, Imperial College London; Fellow of the Royal Society; Fellow of the Australian Academy of Science; Fellow of the Australian Academy of Technological Sciences and Engineering; and Foreign Secretary and (as of 2014) President of the Australian Academy of Science. In 2011, he received the Royal Society of Chemistry's John B Goodenough Award. In 2004 he was appointed a Member of the Order of Australia "for service to science through research and development, particularly in the fields of organic synthesis and polymer chemistry"; and in 2017 was appointed Companion of the Order of Australia for eminent service to science through developments in the field of organic and polymer chemistry as a researcher, editor and academic, and through the governance of nationally recognised, leading scientific organisations. He was awarded the 2021 Matthew Flinders Medal and Lecture. Personal life Holmes is a keen hillwalker and an enthusiastic aficionado of classical music, from baroque to romantic opera. During his time in Cambridge he was a member and regular volunteer at St Columba's United Reformed Church. He lives in Melbourne and Lorne, Victoria with his wife Jennifer. References External links List of Royal Society Medalists, 2012 (see also the Royal Medal) Andrew Holmes' group website at the University of Melbourne Andrew Holmes' biographical page at the University of Melbourne Andrew Holmes' biographical sketch at the Royal Society website Victorian Organic Solar Cells Consortium CSIRO press release: "Australian scientist awarded a Royal Medal from the Royal Society London", July 10, 2012 Robyn Williams interviews Andrew Holmes for ABC Radio's The Science Show: "The value of international scientific collaborations", May 5, 2012 (transcript and audio file available for download at link) Living people English chemists Australian chemists Academics of Imperial College London Academics of University College London Academics of the University of Cambridge Fellows of the Royal Society Companions of the Order of Australia 1943 births Organic chemists Fellows of the Australian Academy of Technological Sciences and Engineering University of Melbourne alumni Fellows of the Institute of Physics Alumni of University College London Fellows of Clare College, Cambridge Fellows of the Australian Academy of Science Presidents of the Australian Academy of Science
Andrew Bruce Holmes
Chemistry
1,142
21,093,469
https://en.wikipedia.org/wiki/Slatwall
Slatwall (also known as slat wall and slotwall) is a building material used in shopfitting and interior design for wall coverings or display fixtures. It is made from a wide range of materials, depending on usage and cost. In the past, slatwall was known only as a shopfitting product. The shopfitting products are made with horizontal grooves that are configured to accept a variety of merchandising accessories with ease. References External links Retail store elements
Slatwall
Physics,Technology
102
57,223,553
https://en.wikipedia.org/wiki/Nanomaterials%20%28journal%29
Nanomaterials is an interdisciplinary scientific journal that covers all aspects of nanomaterials. The journal publishes theoretical and experimental research articles and studies about the synthesis and use of nanomaterials. It was founded in 2010. The journal is published by MDPI; as of 2022, the editor-in-chief is Shirley Chiang, an American microscopist in the Department of Physics and Astronomy at the University of California, Davis. Abstracting and indexing The journal is abstracted and indexed in: DOAJ EBSCO Scopus Science Citation Index Expanded According to the Journal Citation Reports, the journal has a 2022 impact factor of 5.3. References External links English-language journals MDPI academic journals Nanotechnology journals
Nanomaterials (journal)
Materials_science
146
15,211,685
https://en.wikipedia.org/wiki/Glucose-regulated%20protein
Glucose-regulated protein is a protein in the endoplasmic reticulum of the cell. It occurs in several forms with different molecular masses, including: Grp78 (78 kDa) Grp94 (94 kDa) Grp170 (170 kDa), which is a human chaperone protein References Endoplasmic reticulum resident proteins
Glucose-regulated protein
Chemistry
78
30,385,383
https://en.wikipedia.org/wiki/Red-suffusion%20rosy-faced%20lovebird%20mutation
The red-suffusion rosy-faced lovebird (Agapornis roseicollis), also known as the red-pied lovebird, is not a true colour mutation of the species. Many breeders believe it is due to a health issue, most likely involving the bird's liver. Some think the red-pied has some genetic relation to the Lutino rosy-faced lovebird mutation, as many cases of red spots appear in Lutino lovebirds. Although many breeders of parrots have claimed that this is a genetic mutation, no one has been able to successfully reproduce it through a series of generations. See also Rosy-faced lovebird Rosy-faced lovebird colour genetics References Aviculture Genetics Lovebirds Rosy-faced lovebird colour mutations
Red-suffusion rosy-faced lovebird mutation
Biology
160
310,378
https://en.wikipedia.org/wiki/Clark%20Y%20airfoil
Clark Y is the name of a particular airfoil profile, widely used in general purpose aircraft designs, and much studied in aerodynamics over the years. The profile was designed in 1922 by Virginius E. Clark using the thickness distribution of the German-developed Goettingen 398 airfoil. The airfoil has a thickness of 11.7 percent and is flat on the lower surface aft of 30 percent of chord. The flat bottom simplifies angle measurements on propellers, and makes for easy construction of wings. For many applications the Clark Y has been an adequate airfoil section; it gives reasonable overall performance in respect of its lift-to-drag ratio, and has gentle and relatively benign stall characteristics. The flat lower surface is not optimal from an aerodynamic perspective, and it is rarely used in modern designs. The Clark YH airfoil is similar but has a reflexed (turned-up) trailing edge, producing a more positive pitching moment and reducing the horizontal tail load required to trim an aircraft. Applications Aircraft The Lockheed Vega and Spirit of St. Louis are two of the better known aircraft using the Clark Y profile, while the Ilyushin Il-2 and Hawker Hurricane are examples of mass-produced users of the Clark YH. The Northrop Tacit Blue stealth technology demonstrator aircraft also used a Clark Y. The Clark Y was chosen as its flat bottom worked well with the design goal of a low radar cross-section. Model aircraft The Clark Y has found favor for the construction of model aircraft, thanks to the flight performance that the section offers at medium Reynolds number airflows. Applications range from free-flight gliders through to multi-engined radio control scale models. The Clark Y is appealing for its near-flat lower surface, which aids in the construction of wings on plans mounted on a flat construction board. Inexperienced modellers are more readily able to build model aircraft which provide a good flight performance with benign stalling characteristics. Cars An inverted Clark Y airfoil was used on the spoilers of the Dodge Charger Daytona and Plymouth Superbird. Aircraft Some of the better-known aircraft that use the Clark Y and YH: Clark Y Aeronca 50 Chief Avia B.122 Consolidated PT-1 to Fleet Fawn (all intermediate designs used the same section) Curtiss P-6 Hawk (most of the Curtiss Hawks used the same section) Heath Parasol Lockheed Vega to Orion (all intermediate designs used the same section) Polikarpov R-5 Ryan Brougham and related types including the Spirit of St. Louis Stinson Reliant Vultee V-11 Waco Standard and Custom Cabin series Clark YH Currie Wot Hawker Hurricane Ilyushin Il-2 and Il-10 Mikoyan-Gurevich MiG-1 and MiG-3 Miles Magister Nanchang CJ-6 Polikarpov I-153 Potez 39 Stolp SA-900 V-Star Yakovlev Yak-1, 3 and 9 Yakovlev Yak-50 References External links Clark Y airfoil at airfoiltools.com including coordinate data Clark YH airfoil at airfoiltools.com Aerodynamics Aircraft wing design
Clark Y airfoil
Chemistry,Engineering
645
2,833,962
https://en.wikipedia.org/wiki/Kosmotropic
Co-solvents (in aqueous solution) are defined as kosmotropic (order-making) if they contribute to the stability and structure of water–water interactions. In contrast, chaotropic (disorder-making) agents have the opposite effect, disrupting water structure, increasing the solubility of nonpolar solutes, and destabilizing solute aggregates. Kosmotropes cause water molecules to interact favorably, which in effect stabilizes intramolecular interactions in macromolecules such as proteins. Ionic kosmotropes Ionic kosmotropes tend to be small or have high charge density. Large ions or ions with low charge density instead act as chaotropes. Kosmotropic anions are more polarizable and hydrate more strongly than kosmotropic cations of the same charge density. A scale can be established if one refers to the Hofmeister series or looks up the free energy of hydrogen bonding (ΔGHB) of the salts, which quantifies the extent of hydrogen bonding of an ion in water. For example, typical kosmotropic anions have a ΔGHB between 0.1 and 0.4 J/mol, whereas a typical chaotrope has a ΔGHB between −1.1 and −0.9. Recent simulation studies have shown that the variation in solvation energy between the ions and the surrounding water molecules underlies the mechanism of the Hofmeister series. Thus, ionic kosmotropes are characterized by strong solvation energy, leading to an increase in the overall cohesiveness of the solution, which is also reflected in increased viscosity and density. Applications Ammonium sulfate is the traditional kosmotropic salt for the salting out of protein from an aqueous solution. Kosmotropes are used to induce protein aggregation in pharmaceutical preparation and at various stages of protein extraction and purification. Nonionic kosmotropes Nonionic kosmotropes have no net charge but are very soluble and become strongly hydrated. Carbohydrates such as trehalose and glucose, as well as proline and tert-butanol, are kosmotropes. See also Chaotropic agent and guanidinium chloride Protein precipitation, on ammonium sulfate "salting out" References External links Chemical properties
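The sign convention above (positive ΔGHB for kosmotropes, negative for chaotropes) amounts to a one-line classification rule. The sketch below is a toy illustration of that rule only; the solute names and ΔGHB values are hypothetical placeholders chosen to fall inside the ranges quoted in the text, not measured data.

```python
# Toy classifier based on the sign of the free energy of hydrogen
# bonding (Delta-G_HB): positive -> kosmotrope, negative -> chaotrope.
# Names and values below are hypothetical placeholders, not real data.

def classify(dg_hb: float) -> str:
    """Classify a co-solute from its Delta-G_HB (source's units)."""
    if dg_hb > 0:
        return "kosmotrope (order-making)"
    if dg_hb < 0:
        return "chaotrope (disorder-making)"
    return "borderline"

hypothetical = {"solute A": 0.25, "solute B": -1.0}
for name, dg_hb in hypothetical.items():
    print(f"{name}: Delta-G_HB = {dg_hb:+.2f} -> {classify(dg_hb)}")
```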
Kosmotropic
Chemistry
499
31,979,549
https://en.wikipedia.org/wiki/Veblen%E2%80%93Young%20theorem
In mathematics, the Veblen–Young theorem, proved by Oswald Veblen and John Wesley Young, states that a projective space of dimension at least 3 can be constructed as the projective space associated to a vector space over a division ring. Non-Desarguesian planes give examples of 2-dimensional projective spaces that do not arise from vector spaces over division rings, showing that the restriction to dimension at least 3 is necessary. Jacques Tits generalized the Veblen–Young theorem to Tits buildings, showing that those of rank at least 3 arise from algebraic groups. John von Neumann generalized the Veblen–Young theorem to continuous geometry, showing that a complemented modular lattice of order at least 4 is isomorphic to the principal right ideals of a von Neumann regular ring. Statement A projective space S can be defined abstractly as a set P (the set of points), together with a set L of subsets of P (the set of lines), satisfying these axioms: Each two distinct points p and q are in exactly one line. Veblen's axiom: If a, b, c, d are distinct points and the lines through ab and cd meet, then so do the lines through ac and bd. Any line has at least 3 points on it. The Veblen–Young theorem states that if the dimension of a projective space is at least 3 (meaning that there are two non-intersecting lines) then the projective space is isomorphic with the projective space of lines in a vector space over some division ring K. References Theorems in projective geometry Theorems in algebraic geometry
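Writing ℓ(x, y) for the unique line through distinct points x and y (which exists by the first axiom), Veblen's axiom can be put in symbols; this is just a restatement of the prose above, not an addition to it.

```latex
% Veblen's axiom, with \ell(x,y) the unique line through x and y:
\[
  \forall\, a,b,c,d \text{ distinct}:\quad
  \ell(a,b) \cap \ell(c,d) \neq \emptyset
  \;\Longrightarrow\;
  \ell(a,c) \cap \ell(b,d) \neq \emptyset .
\]
```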
Veblen–Young theorem
Mathematics
317
6,148,575
https://en.wikipedia.org/wiki/Diebold%2010xx
The Diebold 10xx (or Modular Delivery System, MDS) series is a third and fourth generation family of automated teller machines manufactured by Diebold. History Introduced in 1985 as a successor to the TABS 9000 series, the 10xx family of ATMs was re-styled to the "i Series" variant in 1991, the "ix Series" variant in 1994, and finally replaced by the Diebold Opteva series of ATMs in 2003. The 10xx series of ATMs was also marketed under the InterBold brand, a joint venture between IBM and Diebold. IBM machines were marketed as the IBM 478x series. Not all of the 10xx series of ATMs were offered by IBM. Diebold stopped producing the 1000-series ATMs around 2008. 10xx Series Models Members of the 10xx Series included: MDS Series - Used a De La Rue cash dispensing mechanism 1060 - Mono-function, indoor counter-top unit with single cash cartridge cash dispenser 1062 - Multi-function, indoor lobby unit 1072 - Multi-function, exterior "through-the-wall" unit i Series - Used an ExpressBus Multi Media Dispenser (MMD) cash dispensing mechanism 1060i - Mono-function, indoor counter-top unit with single cash cartridge cash dispenser 1061i - Mono-function, indoor counter-top unit with single cash cartridge cash dispenser 1062i - Multi-function, indoor lobby unit 1064i - Mono-function, indoor cash dispenser 1070i - Multi-function, exterior "through-the-wall" unit with a longer "top-hat throat" 1072i - Multi-function, exterior "through-the-wall" unit 1073i - Multi-function, exterior "through-the-wall" unit, modified for use while sitting in a car 1074i - Multi-function, exterior unit, designed as a stand-alone unit for use in a drive-up lane. ix Series - Used an ExpressBus Multi Media Dispenser (MMD) cash dispensing mechanism 1062ix - Multi-function, indoor lobby unit 1063ix - Mono-function, indoor cash dispenser with a smaller screen than the 1064ix 1064ix - Mono-function, indoor cash dispenser 1070ix - Multi-function, exterior "through-the-wall" unit 1071ix - Mono-function, exterior cash dispenser 1072ix - Multi-function, exterior "through-the-wall" unit 1073ix - Multi-function, exterior "through-the-wall" unit, modified for use while sitting in a car 1074ix - Multi-function, exterior unit, designed as a stand-alone unit for use in a drive-up lane. 1075ix - Multi-function, exterior unit 1077ix - Mono-function, exterior "through-the-wall" unit See also List of Diebold products References External links Diebold: A Field Guide to Diebold ATMs How frequently is the ATM database updated? Diebold Automated teller machines Embedded systems
Diebold 10xx
Technology,Engineering
657
28,399,390
https://en.wikipedia.org/wiki/Sagamihara%20Campus
Sagamihara Campus is a facility of the Japan Aerospace Exploration Agency (JAXA) in Sagamihara, Kanagawa Prefecture. References External links Sagamihara Campus JAXA Sagamihara Campus ISAS Space technology research institutes JAXA facilities
Sagamihara Campus
Astronomy
51
12,835,538
https://en.wikipedia.org/wiki/Arieh%20Warshel
Arieh Warshel (born November 20, 1940) is an Israeli-American biochemist and biophysicist. He is a pioneer in computational studies of the functional properties of biological molecules, Distinguished Professor of Chemistry and Biochemistry, and holds the Dana and David Dornsife Chair in Chemistry at the University of Southern California. He received the 2013 Nobel Prize in Chemistry, together with Michael Levitt and Martin Karplus, for "the development of multiscale models for complex chemical systems". Biography Warshel was born to a Jewish family in 1940 in kibbutz Sde Nahum, Mandatory Palestine. Warshel served in the Israeli Armored Corps. After serving in the Israeli Army (final rank: captain), Warshel attended the Technion, Haifa, where he received his BSc degree in chemistry, summa cum laude, in 1966. Subsequently, he earned both MSc and PhD degrees in Chemical Physics (in 1967 and 1969, respectively), with Shneior Lifson at the Weizmann Institute of Science, Israel. After his PhD, he did postdoctoral work at Harvard University until 1972; from 1972 to 1976 he returned to the Weizmann Institute and also worked at the Laboratory of Molecular Biology, Cambridge, England. After being denied tenure by the Weizmann Institute in 1976, he joined the faculty of the department of chemistry at USC. He was awarded the 2013 Nobel Prize in Chemistry. As a soldier, he fought in both the 1967 Six-Day War and the 1973 Yom Kippur War, attaining the rank of captain in the IDF. As part of Shenzhen's 13th Five-Year Plan funding research in emerging technologies and opening "Nobel laureate research labs", in April 2017 he opened the Warshel Institute for Computational Biology at the Chinese University of Hong Kong, Shenzhen campus. Honors Warshel is known for his work on computational biochemistry and biophysics, in particular for pioneering computer simulations of the functions of biological systems, and for developing what is known today as computational enzymology. He is a member of many scientific organisations, most importantly: Elected member of the United States National Academy of Sciences (2009) Fellow of the Royal Society of Chemistry (2008) Fellow of the Biophysical Society (2000) Fellow of the American Association for the Advancement of Science (2012) Honorary fellow of the Royal Society of Chemistry (2014) Honorary doctorate at Bar-Ilan University (2014) Honorary doctorate of the Faculty of Science and Technology at Uppsala University (2015) Awards Annual Award of the International Society of Quantum Biology and Pharmacology (1993) Tolman Medal (2003) President's award for computational biology from the ISQBP (2006) RSC Soft Matter and Biophysical Chemistry Award (2012) Nobel Prize in Chemistry (2013) together with Martin Karplus and Michael Levitt for "the development of multiscale models for complex chemical systems".
Golden Plate Award of the American Academy of Achievement (2014) The Founders Award of the Biophysical Society (2014) The 2013 Israel Chemical Society Gold Medal (2014) Major research achievements Arieh Warshel made major contributions in introducing computational methods for structure–function correlation of biological molecules, pioneering and co-pioneering programs, methods and key concepts for detailed computational studies of functional properties of biological molecules using Cartesian-based force field programs, the combined Quantum Chemistry/Molecular mechanics (i.e., QM/MM) method for simulating enzymatic reactions, the first molecular dynamics simulation of a biological process, microscopic electrostatic models for proteins, free energy perturbation in proteins and other key advances. It was for the development of these methods that Warshel shared the 2013 Nobel Prize in Chemistry. Books Arieh Warshel. From Kibbutz Fishponds to The Nobel Prize: Taking Molecular Functions into Cyberspace, World Scientific Publishing, 2021. See also List of Israeli Nobel laureates List of Jewish Nobel laureates Coarse-grained modeling Empirical valence bond References External links Faculty profile, USC Dornsife Warshel research group at the University of Southern California 1940 births Living people Nobel laureates in Chemistry American Nobel laureates Israeli Nobel laureates American biochemists American biophysicists Foreign members of the Russian Academy of Sciences Israeli biochemists Israeli biophysicists Israeli emigrants to the United States Israeli Jews Israeli officers Jewish chemists Jewish American physicists Jewish military personnel Members of the United States National Academy of Sciences Theoretical chemists University of Southern California faculty Academic staff of the Chinese University of Hong Kong, Shenzhen Weizmann Institute of Science alumni Computational chemists
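A defining feature of the QM/MM method mentioned above is that the total energy is partitioned between a quantum region and a classically modelled environment. The expression below is the generic additive textbook form of that partition, given here only as an illustration; it is not quoted from Warshel's papers.

```latex
% Generic additive QM/MM energy partition (textbook form):
\[
  E_{\mathrm{total}} \;=\;
  E_{\mathrm{QM}} \;+\; E_{\mathrm{MM}} \;+\; E_{\mathrm{QM\text{-}MM}},
\]
% where E_QM covers the reactive region treated quantum mechanically,
% E_MM the environment treated with a classical force field, and
% E_QM-MM the coupling (electrostatic, van der Waals, bonded terms)
% across the boundary.
```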
Arieh Warshel
Chemistry
922
3,954
https://en.wikipedia.org/wiki/Biochemistry
Biochemistry, or biological chemistry, is the study of chemical processes within and relating to living organisms. A sub-discipline of both chemistry and biology, biochemistry may be divided into three fields: structural biology, enzymology, and metabolism. Over the last decades of the 20th century, biochemistry has become successful at explaining living processes through these three disciplines. Almost all areas of the life sciences are being uncovered and developed through biochemical methodology and research. Biochemistry focuses on understanding the chemical basis that allows biological molecules to give rise to the processes that occur within living cells and between cells, in turn relating greatly to the understanding of tissues and organs as well as organism structure and function. Biochemistry is closely related to molecular biology, the study of the molecular mechanisms of biological phenomena. Much of biochemistry deals with the structures, functions, and interactions of biological macromolecules such as proteins, nucleic acids, carbohydrates, and lipids. They provide the structure of cells and perform many of the functions associated with life. The chemistry of the cell also depends upon the reactions of small molecules and ions. These can be inorganic (for example, water and metal ions) or organic (for example, the amino acids, which are used to synthesize proteins). The mechanisms used by cells to harness energy from their environment via chemical reactions are known as metabolism. The findings of biochemistry are applied primarily in medicine, nutrition, and agriculture. In medicine, biochemists investigate the causes and cures of diseases. Nutrition studies how to maintain health and wellness and also the effects of nutritional deficiencies. In agriculture, biochemists investigate soil and fertilizers with the goal of improving crop cultivation, crop storage, and pest control. In recent decades, biochemical principles and methods have been combined with problem-solving approaches from engineering to manipulate living systems in order to produce useful tools for research, industrial processes, and diagnosis and control of disease: the discipline of biotechnology. History At its most comprehensive definition, biochemistry can be seen as a study of the components and composition of living things and how they come together to become life. In this sense, the history of biochemistry may therefore go back as far as the ancient Greeks. However, biochemistry as a specific scientific discipline began sometime in the 19th century, or a little earlier, depending on which aspect of biochemistry is being focused on. Some argued that the beginning of biochemistry may have been the discovery of the first enzyme, diastase (now called amylase), in 1833 by Anselme Payen, while others considered Eduard Buchner's first demonstration of a complex biochemical process, alcoholic fermentation, in cell-free extracts in 1897 to be the birth of biochemistry. Some might also point as its beginning to the influential 1842 work by Justus von Liebig, Animal chemistry, or, Organic chemistry in its applications to physiology and pathology, which presented a chemical theory of metabolism, or even earlier to the 18th century studies on fermentation and respiration by Antoine Lavoisier. Many other pioneers in the field who helped to uncover the layers of complexity of biochemistry have been proclaimed founders of modern biochemistry. Emil Fischer, who studied the chemistry of proteins, and F.
Gowland Hopkins, who studied enzymes and the dynamic nature of biochemistry, represent two examples of early biochemists. The term "biochemistry" was first used when Vinzenz Kletzinsky (1826–1882) had his "Compendium der Biochemie" printed in Vienna in 1858; it derived from a combination of biology and chemistry. In 1877, Felix Hoppe-Seyler used the term (Biochemie in German) as a synonym for physiological chemistry in the foreword to the first issue of Zeitschrift für Physiologische Chemie (Journal of Physiological Chemistry) where he argued for the setting up of institutes dedicated to this field of study. The German chemist Carl Neuberg however is often cited to have coined the word in 1903, while some credited it to Franz Hofmeister. It was once generally believed that life and its materials had some essential property or substance (often referred to as the "vital principle") distinct from any found in non-living matter, and it was thought that only living beings could produce the molecules of life. In 1828, Friedrich Wöhler published a paper on his serendipitous urea synthesis from potassium cyanate and ammonium sulfate; some regarded that as a direct overthrow of vitalism and the establishment of organic chemistry. However, the Wöhler synthesis has sparked controversy as some reject the death of vitalism at his hands. Since then, biochemistry has advanced, especially since the mid-20th century, with the development of new techniques such as chromatography, X-ray diffraction, dual polarisation interferometry, NMR spectroscopy, radioisotopic labeling, electron microscopy and molecular dynamics simulations. These techniques allowed for the discovery and detailed analysis of many molecules and metabolic pathways of the cell, such as glycolysis and the Krebs cycle (citric acid cycle), and led to an understanding of biochemistry on a molecular level. Another significant historic event in biochemistry is the discovery of the gene, and its role in the transfer of information in the cell. In the 1950s, James D. Watson, Francis Crick, Rosalind Franklin and Maurice Wilkins were instrumental in solving DNA structure and suggesting its relationship with the genetic transfer of information. In 1958, George Beadle and Edward Tatum received the Nobel Prize for work in fungi showing that one gene produces one enzyme. In 1988, Colin Pitchfork was the first person convicted of murder with DNA evidence, which led to the growth of forensic science. More recently, Andrew Z. Fire and Craig C. Mello received the 2006 Nobel Prize for discovering the role of RNA interference (RNAi) in the silencing of gene expression. Starting materials: the chemical elements of life Around two dozen chemical elements are essential to various kinds of biological life. Most rare elements on Earth are not needed by life (exceptions being selenium and iodine), while a few common ones (aluminum and titanium) are not used. Most organisms share element needs, but there are a few differences between plants and animals. For example, ocean algae use bromine, but land plants and animals do not seem to need any. All animals require sodium, but it is not an essential element for plants. Plants need boron and silicon, but animals may not (or may need ultra-small amounts). Just six elements—carbon, hydrogen, nitrogen, oxygen, calcium and phosphorus—make up almost 99% of the mass of living cells, including those in the human body (see composition of the human body for a complete list).
In addition to the six major elements that compose most of the human body, humans require smaller amounts of possibly 18 more. Biomolecules The 4 main classes of molecules in biochemistry (often called biomolecules) are carbohydrates, lipids, proteins, and nucleic acids. Many biological molecules are polymers: in this terminology, monomers are relatively small molecules that are linked together to create large macromolecules known as polymers. When monomers are linked together to synthesize a biological polymer, they undergo a process called dehydration synthesis. Different macromolecules can assemble in larger complexes, often needed for biological activity. Carbohydrates Two of the main functions of carbohydrates are energy storage and providing structure. One of the common sugars known as glucose is a carbohydrate, but not all carbohydrates are sugars. There are more carbohydrates on Earth than any other known type of biomolecule; they are used to store energy and genetic information, as well as play important roles in cell to cell interactions and communications. The simplest type of carbohydrate is a monosaccharide, which among other properties contains carbon, hydrogen, and oxygen, mostly in a ratio of 1:2:1 (generalized formula CnH2nOn, where n is at least 3). Glucose (C6H12O6) is one of the most important carbohydrates; others include fructose (C6H12O6), the sugar commonly associated with the sweet taste of fruits, and deoxyribose (C5H10O4), a component of DNA. A monosaccharide can switch between an acyclic (open-chain) form and a cyclic form. The open-chain form can be turned into a ring of carbon atoms bridged by an oxygen atom created from the carbonyl group of one end and the hydroxyl group of another. The cyclic molecule has a hemiacetal or hemiketal group, depending on whether the linear form was an aldose or a ketose. In these cyclic forms, the ring usually has 5 or 6 atoms. These forms are called furanoses and pyranoses, respectively—by analogy with furan and pyran, the simplest compounds with the same carbon-oxygen ring (although they lack the carbon-carbon double bonds of these two molecules). For example, the aldohexose glucose may form a hemiacetal linkage between the hydroxyl on carbon 1 and the oxygen on carbon 4, yielding a molecule with a 5-membered ring, called glucofuranose. The same reaction can take place between carbons 1 and 5 to form a molecule with a 6-membered ring, called glucopyranose. Cyclic forms with a 7-atom ring are rare. Two monosaccharides can be joined by a glycosidic or ester bond into a disaccharide through a dehydration reaction during which a molecule of water is released. The reverse reaction in which the glycosidic bond of a disaccharide is broken into two monosaccharides is termed hydrolysis. The best-known disaccharide is sucrose, or ordinary sugar, which consists of a glucose molecule and a fructose molecule joined. Another important disaccharide is lactose found in milk, consisting of a glucose molecule and a galactose molecule. Lactose may be hydrolysed by lactase, and deficiency in this enzyme results in lactose intolerance. When a few (around three to six) monosaccharides are joined, it is called an oligosaccharide (oligo- meaning "few"). These molecules tend to be used as markers and signals, as well as having some other uses. Many monosaccharides joined together form a polysaccharide. They can be joined in one long linear chain, or they may be branched.
Two of the most common polysaccharides are cellulose and glycogen, both consisting of repeating glucose monomers. Cellulose is an important structural component of plant cell walls and glycogen is used as a form of energy storage in animals. Sugar can be characterized by having reducing or non-reducing ends. A reducing end of a carbohydrate is a carbon atom that can be in equilibrium with the open-chain aldehyde (aldose) or keto form (ketose). If the joining of monomers takes place at such a carbon atom, the free hydroxy group of the pyranose or furanose form is exchanged with an OH-side-chain of another sugar, yielding a full acetal. This prevents opening of the chain to the aldehyde or keto form and renders the modified residue non-reducing. Lactose contains a reducing end at its glucose moiety, whereas the galactose moiety forms a full acetal with the C4-OH group of glucose. Saccharose does not have a reducing end because of full acetal formation between the aldehyde carbon of glucose (C1) and the keto carbon of fructose (C2). Lipids Lipids comprise a diverse range of molecules and to some extent are a catchall for relatively water-insoluble or nonpolar compounds of biological origin, including waxes, fatty acids, fatty-acid derived phospholipids, sphingolipids, glycolipids, and terpenoids (e.g., retinoids and steroids). Some lipids are linear, open-chain aliphatic molecules, while others have ring structures. Some are aromatic (with a cyclic [ring] and planar [flat] structure) while others are not. Some are flexible, while others are rigid. Lipids are usually made from one molecule of glycerol combined with other molecules. In triglycerides, the main group of bulk lipids, there is one molecule of glycerol and three fatty acids. Fatty acids are considered the monomer in that case, and may be saturated (no double bonds in the carbon chain) or unsaturated (one or more double bonds in the carbon chain). Most lipids have some polar character but are largely nonpolar. In general, the bulk of their structure is nonpolar or hydrophobic ("water-fearing"), meaning that it does not interact well with polar solvents like water. Another part of their structure is polar or hydrophilic ("water-loving") and will tend to associate with polar solvents like water. This makes them amphiphilic molecules (having both hydrophobic and hydrophilic portions). In the case of cholesterol, the polar group is a mere –OH (hydroxyl or alcohol). In the case of phospholipids, the polar groups are considerably larger and more polar, as described below. Lipids are an integral part of our daily diet. Most oils and milk products that we use for cooking and eating, like butter, cheese, and ghee, are composed of fats. Vegetable oils are rich in various polyunsaturated fatty acids (PUFA). Lipid-containing foods undergo digestion within the body and are broken into fatty acids and glycerol, the final degradation products of fats and lipids. Lipids, especially phospholipids, are also used in various pharmaceutical products, either as co-solubilizers (e.g. in parenteral infusions) or else as drug carrier components (e.g. in a liposome or transfersome). Proteins Proteins are very large molecules—macro-biopolymers—made from monomers called amino acids. An amino acid consists of an alpha carbon atom attached to an amino group, –NH2, a carboxylic acid group, –COOH (although these exist as –NH3+ and –COO− under physiologic conditions), a simple hydrogen atom, and a side chain commonly denoted as "–R".
The side chain "R" is different for each amino acid of which there are 20 standard ones. It is this "R" group that makes each amino acid different, and the properties of the side chains greatly influence the overall three-dimensional conformation of a protein. Some amino acids have functions by themselves or in a modified form; for instance, glutamate functions as an important neurotransmitter. Amino acids can be joined via a peptide bond. In this dehydration synthesis, a water molecule is removed and the peptide bond connects the nitrogen of one amino acid's amino group to the carbon of the other's carboxylic acid group. The resulting molecule is called a dipeptide, and short stretches of amino acids (usually, fewer than thirty) are called peptides or polypeptides. Longer stretches merit the title proteins. As an example, the important blood serum protein albumin contains 585 amino acid residues. Proteins can have structural and/or functional roles. For instance, movements of the proteins actin and myosin ultimately are responsible for the contraction of skeletal muscle. One property many proteins have is that they specifically bind to a certain molecule or class of molecules—they may be extremely selective in what they bind. Antibodies are an example of proteins that attach to one specific type of molecule. Antibodies are composed of heavy and light chains. Two heavy chains would be linked to two light chains through disulfide linkages between their amino acids. Antibodies are specific through variation based on differences in the N-terminal domain. The enzyme-linked immunosorbent assay (ELISA), which uses antibodies, is one of the most sensitive tests modern medicine uses to detect various biomolecules. Probably the most important proteins, however, are the enzymes. Virtually every reaction in a living cell requires an enzyme to lower the activation energy of the reaction. These molecules recognize specific reactant molecules called substrates; they then catalyze the reaction between them. By lowering the activation energy, the enzyme speeds up that reaction by a rate of 1011 or more; a reaction that would normally take over 3,000 years to complete spontaneously might take less than a second with an enzyme. The enzyme itself is not used up in the process and is free to catalyze the same reaction with a new set of substrates. Using various modifiers, the activity of the enzyme can be regulated, enabling control of the biochemistry of the cell as a whole. The structure of proteins is traditionally described in a hierarchy of four levels. The primary structure of a protein consists of its linear sequence of amino acids; for instance, "alanine-glycine-tryptophan-serine-glutamate-asparagine-glycine-lysine-...". Secondary structure is concerned with local morphology (morphology being the study of structure). Some combinations of amino acids will tend to curl up in a coil called an α-helix or into a sheet called a β-sheet; some α-helixes can be seen in the hemoglobin schematic above. Tertiary structure is the entire three-dimensional shape of the protein. This shape is determined by the sequence of amino acids. In fact, a single change can change the entire structure. The alpha chain of hemoglobin contains 146 amino acid residues; substitution of the glutamate residue at position 6 with a valine residue changes the behavior of hemoglobin so much that it results in sickle-cell disease. 
Finally, quaternary structure is concerned with the structure of a protein with multiple peptide subunits, like hemoglobin with its four subunits. Not all proteins have more than one subunit. Ingested proteins are usually broken up into single amino acids or dipeptides in the small intestine and then absorbed. They can then be joined to form new proteins. Intermediate products of glycolysis, the citric acid cycle, and the pentose phosphate pathway can be used to form all twenty amino acids, and most bacteria and plants possess all the necessary enzymes to synthesize them. Humans and other mammals, however, can synthesize only half of them. They cannot synthesize isoleucine, leucine, lysine, methionine, phenylalanine, threonine, tryptophan, and valine. Because they must be ingested, these are the essential amino acids. Mammals do possess the enzymes to synthesize alanine, asparagine, aspartate, cysteine, glutamate, glutamine, glycine, proline, serine, and tyrosine, the nonessential amino acids. While they can synthesize arginine and histidine, they cannot produce them in sufficient amounts for young, growing animals, and so these are often considered essential amino acids. If the amino group is removed from an amino acid, it leaves behind a carbon skeleton called an α-keto acid. Enzymes called transaminases can easily transfer the amino group from one amino acid (making it an α-keto acid) to another α-keto acid (making it an amino acid). This is important in the biosynthesis of amino acids, as for many of the pathways, intermediates from other biochemical pathways are converted to the α-keto acid skeleton, and then an amino group is added, often via transamination. The amino acids may then be linked together to form a protein. A similar process is used to break down proteins: the protein is first hydrolyzed into its component amino acids. Free ammonia (NH3), existing as the ammonium ion (NH4+) in blood, is toxic to life forms. A suitable method for excreting it must therefore exist. Different tactics have evolved in different animals, depending on the animals' needs. Unicellular organisms release the ammonia into the environment. Likewise, bony fish can release ammonia into the water where it is quickly diluted. In general, mammals convert ammonia into urea, via the urea cycle. In order to determine whether two proteins are related, or in other words to decide whether they are homologous or not, scientists use sequence-comparison methods. Methods like sequence alignments and structural alignments are powerful tools that help scientists identify homologies between related molecules. The relevance of finding homologies among proteins goes beyond forming an evolutionary pattern of protein families. By finding how similar two protein sequences are, we acquire knowledge about their structure and therefore their function. Nucleic acids Nucleic acid, so called because of its prevalence in cellular nuclei, is the generic name for a family of biopolymers. Nucleic acids are complex, high-molecular-weight biochemical macromolecules that can convey genetic information in all living cells and viruses. The monomers are called nucleotides, and each consists of three components: a nitrogenous heterocyclic base (either a purine or a pyrimidine), a pentose sugar, and a phosphate group. The most common nucleic acids are deoxyribonucleic acid (DNA) and ribonucleic acid (RNA).
The phosphate group and the sugar of each nucleotide bond with each other to form the backbone of the nucleic acid, while the sequence of nitrogenous bases stores the information. The most common nitrogenous bases are adenine, cytosine, guanine, thymine, and uracil. The nitrogenous bases of each strand of a nucleic acid will form hydrogen bonds with certain other nitrogenous bases in a complementary strand of nucleic acid. Adenine binds with thymine and uracil, thymine binds only with adenine, and cytosine and guanine can bind only with one another. Adenine–thymine and adenine–uracil pairs form two hydrogen bonds, while cytosine–guanine pairs form three. Aside from the genetic material of the cell, nucleic acids often play a role as second messengers, as well as forming the base molecule for adenosine triphosphate (ATP), the primary energy-carrier molecule found in all living organisms. Also, the nitrogenous bases possible in the two nucleic acids are different: adenine, cytosine, and guanine occur in both RNA and DNA, while thymine occurs only in DNA and uracil occurs in RNA. Metabolism Carbohydrates as energy source Glucose is an energy source in most life forms. For instance, polysaccharides are broken down into their monomers by enzymes (glycogen phosphorylase removes glucose residues from glycogen, a polysaccharide). Disaccharides like lactose or sucrose are cleaved into their two component monosaccharides. Glycolysis (anaerobic) Glucose is mainly metabolized by a very important ten-step pathway called glycolysis, the net result of which is to break down one molecule of glucose into two molecules of pyruvate. This also produces a net two molecules of ATP, the energy currency of cells, along with two reducing equivalents of converting NAD+ (nicotinamide adenine dinucleotide: oxidized form) to NADH (nicotinamide adenine dinucleotide: reduced form). This does not require oxygen; if no oxygen is available (or the cell cannot use oxygen), the NAD+ is restored by converting the pyruvate to lactate (lactic acid) (e.g. in humans) or to ethanol plus carbon dioxide (e.g. in yeast). Other monosaccharides like galactose and fructose can be converted into intermediates of the glycolytic pathway. Aerobic In aerobic cells with sufficient oxygen, as in most human cells, the pyruvate is further metabolized. It is irreversibly converted to acetyl-CoA, giving off one carbon atom as the waste product carbon dioxide, generating another reducing equivalent as NADH. The two molecules of acetyl-CoA (from one molecule of glucose) then enter the citric acid cycle, producing two molecules of ATP, six more NADH molecules and two reduced (ubi)quinones (via FADH2 as enzyme-bound cofactor), and releasing the remaining carbon atoms as carbon dioxide. The produced NADH and quinol molecules then feed into the enzyme complexes of the respiratory chain, an electron transport system transferring the electrons ultimately to oxygen and conserving the released energy in the form of a proton gradient over a membrane (the inner mitochondrial membrane in eukaryotes). Thus, oxygen is reduced to water and the original electron acceptors NAD+ and quinone are regenerated. This is why humans breathe in oxygen and breathe out carbon dioxide. The energy released from transferring the electrons from high-energy states in NADH and quinol is conserved first as a proton gradient and converted to ATP via ATP synthase.
This generates an additional 28 molecules of ATP (24 from the 8 NADH + 4 from the 2 quinols), totaling 32 molecules of ATP conserved per degraded glucose (two from glycolysis + two from the citrate cycle). It is clear that using oxygen to completely oxidize glucose provides an organism with far more energy than any oxygen-independent metabolic feature, and this is thought to be the reason why complex life appeared only after Earth's atmosphere accumulated large amounts of oxygen. Gluconeogenesis In vertebrates, vigorously contracting skeletal muscles (during weightlifting or sprinting, for example) do not receive enough oxygen to meet the energy demand, and so they shift to anaerobic metabolism, converting glucose to lactate. Glucose can also be generated from non-carbohydrate origins, such as fat and proteins; this only happens when glycogen supplies in the liver are worn out. The pathway is a crucial reversal of glycolysis from pyruvate to glucose and can use many sources, such as amino acids, glycerol and Krebs cycle intermediates. Large-scale protein and fat catabolism usually occurs during starvation or certain endocrine disorders. The liver regenerates the glucose, using a process called gluconeogenesis. This process is not quite the opposite of glycolysis, and actually requires three times the amount of energy gained from glycolysis (six molecules of ATP are used, compared to the two gained in glycolysis). Analogous to the above reactions, the glucose produced can then undergo glycolysis in tissues that need energy, be stored as glycogen (or starch in plants), or be converted to other monosaccharides or joined into di- or oligosaccharides. The combined pathways of glycolysis during exercise, lactate's crossing via the bloodstream to the liver, subsequent gluconeogenesis and release of glucose into the bloodstream is called the Cori cycle. Relationship to other "molecular-scale" biological sciences Researchers in biochemistry use specific techniques native to biochemistry, but increasingly combine these with techniques and ideas developed in the fields of genetics, molecular biology, and biophysics. There is not a defined line between these disciplines. Biochemistry studies the chemistry required for biological activity of molecules; molecular biology studies their biological activity; genetics studies their heredity, which happens to be carried by their genome. This is shown in the following schematic that depicts one possible view of the relationships between the fields: Biochemistry is the study of the chemical substances and vital processes occurring in live organisms. Biochemists focus heavily on the role, function, and structure of biomolecules. The study of the chemistry behind biological processes and the synthesis of biologically active molecules are applications of biochemistry. Biochemistry studies life at the atomic and molecular level. Genetics is the study of the effect of genetic differences in organisms. This can often be inferred by the absence of a normal component (e.g. one gene), through the study of "mutants" – organisms that lack one or more functional components with respect to the so-called "wild type" or normal phenotype. Genetic interactions (epistasis) can often confound simple interpretations of such "knockout" studies. Molecular biology is the study of molecular underpinnings of the biological phenomena, focusing on molecular synthesis, modification, mechanisms and interactions.
The central dogma of molecular biology, where genetic material is transcribed into RNA and then translated into protein, despite being oversimplified, still provides a good starting point for understanding the field. This concept has been revised in light of emerging novel roles for RNA. Chemical biology seeks to develop new tools based on small molecules that allow minimal perturbation of biological systems while providing detailed information about their function. Further, chemical biology employs biological systems to create non-natural hybrids between biomolecules and synthetic devices (for example emptied viral capsids that can deliver gene therapy or drug molecules). See also Lists Important publications in biochemistry (chemistry) List of biochemistry topics List of biochemists List of biomolecules Astrobiology Biochemistry (journal) Biological Chemistry (journal) Biophysics Chemical ecology Computational biomodeling Dedicated bio-based chemical EC number Hypothetical types of biochemistry International Union of Biochemistry and Molecular Biology Metabolome Metabolomics Molecular biology Molecular medicine Plant biochemistry Proteolysis Small molecule Structural biology TCA cycle Notes References Cited literature Further reading Fruton, Joseph S. Proteins, Enzymes, Genes: The Interplay of Chemistry and Biology. Yale University Press: New Haven, 1999. Keith Roberts, Martin Raff, Bruce Alberts, Peter Walter, Julian Lewis and Alexander Johnson, Molecular Biology of the Cell, 4th Edition, Routledge, March 2002, hardcover, 1616 pp.; 3rd Edition, Garland, 1994; 2nd Edition, Garland, 1989. Kohler, Robert. From Medical Chemistry to Biochemistry: The Making of a Biomedical Discipline. Cambridge University Press, 1982. External links The Virtual Library of Biochemistry, Molecular Biology and Cell Biology Biochemistry, 5th ed. Full text of Berg, Tymoczko, and Stryer, courtesy of NCBI. SystemsX.ch – The Swiss Initiative in Systems Biology Full text of Biochemistry by Kevin and Indira, an introductory biochemistry textbook. Biotechnology Molecular biology
Biochemistry
Chemistry,Biology
6,497
38,743,125
https://en.wikipedia.org/wiki/Michigan%20Spin%20Physics%20Center
The University of Michigan Spin Physics Center focuses on studies of spin effects in high-energy polarized proton-proton elastic and inelastic scattering. These polarized scattering experiments use the world-class solid and jet polarized proton targets, which are developed, upgraded and tested at the center. The Center obtained a record density of about 10¹² spin-polarized hydrogen atoms per cm³. The center also led the development of the world's first accelerated polarized beams at the 12 GeV Argonne ZGS (in 1973) and then at the 28 GeV Brookhaven AGS. The Center led pioneering experiments at the IUCF Cooler Ring from 1988 until its 2003 shutdown, which developed and tested Siberian snakes and spin-flippers, which are now used to accelerate, store and use high-energy polarized proton beams. The center also leads the International SPIN Collaboration, and its proton polarization know-how is used in many experiments worldwide. Main discoveries In 1978 the Center found that protons with parallel spins interact much more strongly than protons with anti-parallel spins. According to quantum chromodynamics, the interaction between parallel- and anti-parallel-spinning proton beams should be the same. Sheldon Glashow called this effect "the thorn in the side of QCD". The effect remains unexplained to this day. In 2005 Stanley Brodsky called it "one of the unsolved mysteries in hadronic physics". References External links Michigan Spin Physics Center Quantum chromodynamics Particle experiments Unsolved problems in physics University of Michigan
Michigan Spin Physics Center
Physics
317
262,043
https://en.wikipedia.org/wiki/Superplasticity
In materials science, superplasticity is a state in which solid crystalline material is deformed well beyond its usual breaking point, usually by over about 400% during tensile deformation. Such a state is usually achieved at high homologous temperature. Examples of superplastic materials are some fine-grained metals and ceramics. Other, non-crystalline (amorphous) materials such as silica glass ("molten glass") and polymers also deform similarly, but are not called superplastic because they are not crystalline; rather, their deformation is often described as Newtonian fluid flow. Superplastically deformed material gets thinner in a very uniform manner, rather than forming a "neck" (a local narrowing) that leads to fracture. Also, the formation of microvoids, which is another cause of early fracture, is inhibited. Superplasticity must not be confused with superelasticity. Historical developments of superplasticity Some evidence of superplastic-like flow in metals has been found in artifacts such as the Wootz steels of ancient India, but superplasticity first received scientific recognition in the twentieth century, in Bengough's 1912 report of 163% elongation in brass. Jenkins later reported higher elongations of 300% in Cd–Zn and Pb–Sn alloys in 1928. However, those works did not go further and establish a new class of mechanical behaviour in materials. It was not until Pearson's work, published in 1934, that a dramatic elongation of 1950% was reported, in a Pb–Sn eutectic alloy; this was by far the most extensive elongation reported in a scientific investigation up to that time. There was no further interest in superplasticity in the Western world for more than 25 years after Pearson's effort. Bochvar and Sviderskaya later continued superplasticity research in the Soviet Union, with many publications on Zn–Al alloys. A research institute focused on superplasticity, the Institute of Metals Superplasticity Problems, was established in 1985 in Ufa, Russia. It remains the only institute in the world working exclusively on superplasticity research. Interest in superplasticity rose in 1982, when the first major international conference on 'Superplasticity in Structural Materials', edited by Paton and Hamilton, was held in San Diego. Since then, numerous investigations have been published with considerable results. Superplasticity now underpins superplastic deformation forming, an essential aerospace fabrication technique. Conditions In metals and ceramics, the requirements for superplasticity include a fine grain size (less than approximately 10 micrometres) and an operating temperature often above half the absolute melting point. Several studies have found superplasticity in coarse-grained materials; nevertheless, the scientific community generally treats a grain size below about 10 micrometres as the precondition for activating superplasticity. Because grains grow at high temperature, maintaining the fine-grained structure at high homologous temperature is the main challenge in superplasticity research. The typical microstructural strategy uses a fine dispersion of thermally stable particles, which pin the grain boundaries and maintain the fine grain structure at the high temperatures, and the existence of multiple phases, required for superplastic deformation. The most typical alloy microstructure for superplasticity is a eutectic or eutectoid structure, as found in Sn–Pb or Zn–Al alloys.
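The strain-rate sensitivity requirement discussed next is commonly estimated from tensile tests run at two different strain rates. The sketch below shows the usual two-point estimate of m; the flow-stress numbers are hypothetical, chosen only to illustrate the calculation.

import math

# Strain-rate sensitivity m = d(ln sigma) / d(ln strain_rate),
# estimated from flow stresses measured at two strain rates.
# All numbers below are hypothetical, for illustration only.
def strain_rate_sensitivity(stress1, rate1, stress2, rate2):
    return math.log(stress2 / stress1) / math.log(rate2 / rate1)

# Hypothetical flow stresses (MPa) at two strain rates (1/s):
m = strain_rate_sensitivity(stress1=10.0, rate1=1e-4,
                            stress2=32.0, rate2=1e-3)
print(f"m = {m:.2f}")  # ~0.51, within the superplastic regime (m > 0.3)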
Those materials that meet these parameters must still have a strain rate sensitivity (a measure of the way the stress on a material reacts to changes in strain rate) of more than 0.3 to be considered superplastic. The ideal strain rate sensitivity is 0.5, typically found in micro-duplex alloys. Mechanism The dominant mechanism of superplasticity in metals is grain boundary sliding (GBS). However, GBS can lead to stress concentrations at triple junctions or at the boundaries of hard phases. Therefore, GBS in polycrystalline materials must be accompanied by other accommodation processes such as diffusion or dislocation motion. The diffusion model proposed by Ashby and Verrall explains a gradual change in grain shape that maintains compatibility between the grains during deformation; the changes in grain shape are accomplished by diffusion, and the grain boundaries migrate to form equiaxed grains with new orientations relative to the original grains. In the dislocation model, the stress concentration produced by GBS is relaxed by dislocation motion in the blocking grains: dislocations pile up, and climb allows further dislocations to be emitted. The details of the dislocation model are still under debate, with several variants proposed, including the models of Crossman and Ashby, Langdon, and Gifkins. High strain rate superplasticity In general, superplasticity occurs at slow strain rates, on the order of 10−4 s−1, which can be energy-consuming; in addition, prolonged exposure to high operating temperatures degrades the mechanical properties of materials. There is therefore a strong demand to increase the strain rate of superplastic deformation to the order of 10−2 s−1, called high strain rate superplasticity (HSRS). An increase of strain rate in superplastic deformation is generally achieved by reducing the grain size into the ultrafine range, from 100 nm to less than 500 nm. Further grain refinement to a nanocrystalline structure, with grain sizes of less than 100 nm, is ineffective in raising the deformation rate or improving ductility. The most common grain refinement process for HSRS research is severe plastic deformation (SPD), which can achieve exceptional grain refinement to the sub-micrometre or even the nanometre range. Among the many SPD techniques, the two most widely used are equal-channel angular pressing (ECAP) and high-pressure torsion (HPT). Besides producing an ultrafine grain size, these techniques also produce a high fraction of high-angle grain boundaries, which are particularly beneficial for increasing the strain rate of deformation. Given the importance of grain refinement processing to superplasticity research, ECAP and HPT have become mainstream techniques in superplasticity studies of metals. Advantages of superplastic forming The process offers a range of important benefits, from both the design and production aspects. To begin with, there is the ability to form components with double curvature and smooth contours from a single sheet in one operation, with exceptional dimensional accuracy and surface finish, and none of the "spring back" associated with cold forming techniques. Because only single-surface tools are employed, lead times are short, and prototyping is both rapid and easy, since a range of sheet alloy thicknesses can be tested on the same tool. Forming techniques There are three forming techniques currently in use to exploit these advantages.
The method chosen depends upon design and performance criteria such as size, shape, and alloy characteristics. Cavity forming A graphite-coated blank is put into a heated hydraulic press. Air pressure is then used to force the sheet into close contact with the mould. At the beginning of the process, the blank is brought into contact with the die cavity, hindering the forming process through friction at the blank/die interface. Thus, the contact areas divide the single bulge into a number of bulges, which undergo a free bulging process. The procedure allows the production of parts with relatively exact outer contours. This forming process is suitable for the manufacture of parts with smooth, convex surfaces. Bubble forming A graphite-coated blank is clamped over a 'tray' containing a heated male mould. Air pressure forces the metal into close contact with the mould. The difference between this and the female forming process is that the mould is, as stated, male and the metal is forced over the protruding form; in female forming, the mould is female and the metal is forced into the cavity. The tooling consists of two pressure chambers and a counter punch, which is linearly displaceable. As in cavity forming, at the beginning of the process the firmly clamped blank is bulged by gas pressure. In the second phase of the process, the material is formed over the punch surface by applying pressure against the previous forming direction. Because the process conditions make better use of the material, blanks with a smaller initial thickness than in cavity forming can be used. The bubble forming technique is thus particularly suitable for parts with large forming depths. Diaphragm forming A graphite-coated blank is placed into a heated press. Air pressure is used to force the metal into a bubble shape before the male mould is pushed into the underside of the bubble to make an initial impression. Air pressure is then applied from the other direction to finally form the metal around the male mould. This process has long cycle times because the superplastic strain rates are low. The product also suffers from poor creep performance, due to the small grain sizes, and there can be cavitation porosity in some alloys. Surface texture is generally good, however. With dedicated tooling, dies and machines are costly. The main advantage of the process is that it can be used to produce large, complex components in one operation. This can be useful for keeping the mass down and avoiding the need for assembly work, a particular advantage for aerospace products. For example, the diaphragm-forming method (DFM) can be used to reduce the tensile flow stress generated in a specific alloy matrix composite during deformation. Aluminium and aluminium-based alloys Superplastically formed (SPF) aluminium alloys have the ability to be stretched to several times their original size without failure when heated to between 470 and 520 °C. These dilute alloys containing zirconium, later known by the trade name SUPRAL, were heavily cold worked to sheet and dynamically recrystallized to a fine, stable grain size, typically 4–5 μm, during the initial stages of hot deformation. Superplastic forming is also a net-shape processing technology that dramatically decreases fabrication and assembly costs by reducing the number of parts and the assembly requirements.
Using SPF technology, it was anticipated that a 50% manufacturing cost reduction could be achieved for many aircraft assemblies, such as the nose cone and nose barrel assemblies. Other spin-offs include weight reduction, elimination of thousands of fasteners, elimination of complex features and a significant reduction in the number of parts. The breakthrough for superplastic Al-Cu alloys was made by Stowell, Watts and Grimes in 1969 when the first of several dilute aluminium alloys (Al-6% Cu-0.5% Zr) was rendered superplastic with the introduction of relatively high levels of zirconium in solution, using specialized casting techniques and subsequent electrical treatment to create extremely fine precipitates. Commercial alloys Some commercial alloys have been thermo-mechanically processed to develop superplasticity. The main effort has been on the Al 7000 series alloys, Al-Li alloys, Al-based metal-matrix composites, and mechanically alloyed materials. Aluminium alloy composites Aluminium alloys and their composites have wide applications in the automotive industry. At room temperature, composites usually have higher strength than their component alloys. At high temperature, aluminium alloys reinforced by ceramic particles or whiskers, such as SiC, can have tensile elongations of more than 700%. The composites are often fabricated by powder metallurgy to ensure fine grain sizes and good dispersion of the reinforcements. The grain size that allows optimal superplastic deformation is usually 0.5–1 μm, less than the requirement of conventional superplasticity. Just like other superplastic materials, the strain rate sensitivity m is larger than 0.3, indicating good resistance against the local necking phenomenon. A few aluminium alloy composites, such as the 6061 series and the 2024 series, have shown high strain rate superplasticity, which happens in a much higher strain rate regime than in other superplastic materials. This property makes aluminium alloy composites potentially suitable for superplastic forming, because the whole process can be done in a short time, saving time and energy. Deformation mechanism for aluminium alloy composites The most common deformation mechanism in aluminium alloy composites is grain boundary sliding (GBS), which is often accompanied by atom/dislocation diffusion to accommodate deformation. The GBS mechanism model predicts a strain rate sensitivity of 0.3, which agrees with most of the superplastic aluminium alloy composites. Grain boundary sliding requires the rotation or migration of very fine grains at relatively high temperature. Therefore, the refinement of grain size and the prevention of grain growth at high temperature are important. The very high temperature (close to the melting point) is also said to be related to another mechanism, interfacial sliding, because at high temperatures partial liquids appear in the matrix. The viscosity of the liquid plays the main role in accommodating the sliding of adjacent grain boundaries. The cavitation and stress concentration caused by the addition of second-phase reinforcements are inhibited by the flow of the liquid phase. However, too much liquid leads to voids, thus deteriorating the stability of the material, so a temperature close to, but not far above, the initial melting point is often optimal. Partial melting can lead to the formation of filaments at the fracture surface, which can be observed under a scanning electron microscope.
The morphology and chemistry of the reinforcements also influence the superplasticity of some composites, but no single criterion has yet been proposed to predict their influence. Methods to improve superplasticity A few ways have been suggested to optimize the superplastic deformation of aluminium alloy composites, which are also indicative for other materials: Good dispersion of reinforcements. This is also important for room-temperature performance. Refining the grain size of the matrix. The refinement creates more grains that can slide over each other at high temperature, facilitating the grain boundary sliding mechanism. This also implies a higher optimal strain rate; a trend of increasing strain rate has been observed in materials with finer grain sizes. Severe plastic deformation, such as equal-channel angular pressing, has been reported to achieve ultrafine-grained materials. Appropriately choosing the temperature and the strain rate. Some composites have to be heated close to melting, which might have the opposite effect on other composites. Titanium and titanium-based alloys In the aerospace industry, titanium alloys such as Ti–6Al–4V find extensive use, not only because of their specific high-temperature strength, but also because a large number of these alloys exhibit superplastic behaviour. Superplastic sheet thermoforming has been identified as a standard processing route for the production of complex shapes, especially in alloys that are amenable to superplastic forming (SPF). However, the vanadium additions in these alloys make them considerably expensive, so there is a need to develop superplastic titanium alloys with cheaper alloying additions. The Ti-Al-Mn alloy could be such a candidate material. This alloy shows significant post-uniform deformation at ambient and near-ambient temperatures. Ti-Al-Mn (OT4-1) alloy The Ti-Al-Mn (OT4-1) alloy is currently used for aero-engine components as well as other aerospace applications, formed through a conventional route that is typically cost-, labour- and equipment-intensive. The Ti-Al-Mn alloy is a candidate material for aerospace applications; however, there is little or no information available on its superplastic forming behaviour. In one study, the high-temperature superplastic bulge forming of the alloy was investigated and its superplastic forming capability demonstrated. The bulging process The gas pressure bulging of metal sheets has become an important forming method. As the bulging process progresses, significant thinning of the sheet material becomes obvious. Many studies have been made to relate the dome height to the forming time, which is useful to the process designer for selecting the initial blank thickness, as well as to characterize the non-uniform thinning in the dome after forming. Case study The Ti-Al-Mn (OT4-1) alloy was available in the form of a 1 mm thick cold-rolled sheet. A 35-ton hydraulic press was used for the superplastic bulge forming of a hemisphere. A die set-up was fabricated and assembled with a piping system enabling not only inert gas flushing of the die assembly prior to forming, but also the forming of components under reverse pressure, if needed.
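As a rough cross-check on the stress levels involved in gas-pressure bulge forming, the dome can be idealized as a thin-walled spherical shell. The sketch below uses the bottom-die radius and the forming pressures quoted in this case study; the instantaneous sheet thicknesses are hypothetical illustrations (the initial thickness and one partly thinned value), and this textbook approximation is not the analysis used in the study itself.

# First-order estimate of the biaxial membrane stress in the bulged
# sheet, treating the dome as a thin-walled spherical shell:
#     sigma = p * r / (2 * t)
def membrane_stress_mpa(pressure_mpa: float, radius_mm: float,
                        thickness_mm: float) -> float:
    return pressure_mpa * radius_mm / (2.0 * thickness_mm)

radius = 45.0  # mm, radius of the bottom die quoted in the text
for p in (0.2, 0.4, 0.6, 0.8):        # forming pressures, MPa
    for t in (1.0, 0.5):              # initial and a partly thinned sheet
        s = membrane_stress_mpa(p, radius, t)
        print(f"p = {p} MPa, t = {t} mm -> sigma ~ {s:.1f} MPa")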
A circular sheet (blank) of 118 mm diameter was cut from the alloy sheet and the cut surfaces polished to remove burrs. The blank was placed on the die and the top chamber brought into contact. The furnace was switched on to the set temperature. Once the set temperature was reached, the top chamber was brought down further to apply the required blank-holder pressure. About 10 minutes were allowed for thermal equilibration. The argon gas cylinder was opened gradually to the set pressure. Simultaneously, the linear variable differential transformer (LVDT), fitted at the bottom of the die, was set to record the sheet bulge. Once the LVDT reached 45 mm (the radius of the bottom die), the gas pressure was stopped and the furnace switched off. The formed components were taken out when the temperature of the die set had dropped to 600 °C; easy removal of the component was possible at this stage. Superplastic bulge forming of hemispheres was carried out at temperatures of 1098, 1123, 1148, 1173, 1198 and 1223 K (825, 850, 875, 900, 925 and 950 °C) at forming pressures of 0.2, 0.4, 0.6 and 0.8 MPa. As the bulge forming process progresses, significant thinning of the sheet material becomes obvious. An ultrasonic technique was used to measure the thickness distribution over the profile of the formed component. The components were analyzed in terms of thickness distribution, thickness strain and thinning factor. Post-deformation microstructural studies were conducted on the formed components in order to analyze the microstructure in terms of grain growth, grain elongation, cavitation, etc. Results and discussion The microstructure of the as-received material, with a two-dimensional grain size of 14 μm, is shown in Fig. 8. The grain size was determined using the linear intercept method in both the longitudinal and transverse directions of the rolled sheet. Successful superplastic forming of hemispheres was carried out at temperatures of 1098, 1123, 1148, 1173, 1198 and 1223 K and argon gas forming pressures of 0.2, 0.4, 0.6 and 0.8 MPa. A maximum time limit of 250 minutes was given for the complete forming of the hemispheres; this cut-off time was chosen for practical reasons. Fig. 9 shows a photograph of the blank (specimen) and a bulge-formed component (formed at a temperature of 1123 K and a forming gas pressure of 0.6 MPa). The forming times of successfully formed components were recorded at the different forming temperatures and pressures. From the travel of the LVDT fitted at the bottom of the die (which measured the bulge height/depth), an estimate of the rate of forming was obtained. It was seen that the rate of forming was rapid initially and decreased gradually for all the temperature and pressure ranges, as reported in Table 2. At a particular temperature, the forming time reduced as the forming pressure was increased. Similarly, at a given forming pressure, the forming time decreased with an increase in temperature. The thickness of the bulge profile was measured at 7 points, including the periphery (base) and the pole. These points were selected by taking the line between the centre of the hemisphere and the base point as reference and offsetting by 15° until the pole point was reached. Hence points 1, 2, 3, 4 and 5 subtend angles of 15°, 30°, 45°, 60° and 75°, respectively, with the base of the hemisphere, as shown in Fig. 10. The thickness was measured at each of these points on the bulge profile using an ultrasonic technique, and the thickness values were recorded for each of the successfully formed hemispherical components. Fig.
11 shows the pole thickness of fully formed hemispheres as a function of forming pressure at different temperatures. At a particular temperature, the pole thickness reduced as the forming pressure was increased. For all the cases studied, the pole thickness lay in the range of about 0.3 to 0.4 mm, from the original blank thickness of 1 mm. The thickness strain, ln(S/S0), where S is the local thickness and S0 is the initial thickness, was calculated at different locations for all the successfully formed components. For a particular pressure, the thickness strain reduced as the forming temperature was increased. Fig. 12 shows the thickness strain as a function of position along the dome cross-section for a component formed at 1123 K at a forming pressure of 0.6 MPa. The post-formed microstructure revealed that there was no significant change in grain size. Fig. 13 shows the microstructure of the bulge-formed component at the base and at the pole for a component formed at a temperature of 1148 K and a forming pressure of 0.6 MPa. These microstructures show no significant change in grain size. Conclusion The high-temperature deformation behaviour and superplastic forming capability of a Ti-Al-Mn alloy were studied. Successful forming of 90 mm diameter hemispheres by the superplastic route was carried out in the temperature range 1098 to 1223 K and the forming pressure range 0.2 to 0.8 MPa. The following conclusions could be drawn: The forming time decreased steeply when the gas pressure or temperature was increased. The rate of forming was initially high, but reduced progressively with time. At a particular temperature, the pole thickness reduced as the forming pressure was increased. For all the cases studied, the pole thickness lay in the range of about 0.3 to 0.4 mm, from the original blank thickness of 1.0 mm. The thinning factor and thickness strain increased as one moved from the periphery to the pole. The post-formed microstructures showed no significant change in grain size. Iron and steel Superplasticity has also been investigated in less conventional materials, such as an austenitic steel of the Fe-Mn-Al system, for which some of the specific material parameters are closely related to microstructural mechanisms. These parameters are used as indicators of a material's superplastic potential. The material was submitted to hot tensile testing within a temperature range from 600 °C to 1000 °C and at strain rates varying from 10−6 to 1 s−1. The strain rate sensitivity parameter (m) and the observed maximum elongation to rupture (εr) could be determined from the hot tensile tests. Fe with Mn and Al alloys The experiments indicated the possibility of superplastic behaviour in a Fe-Mn-Al alloy within a temperature range from 700 °C to 900 °C, with a grain size around 3 μm (ASTM grain size 12) and an average strain rate sensitivity of m ~ 0.54, as well as a maximum elongation at rupture of around 600%. Fe with Al and Ti alloys The superplastic behaviour of Fe-28Al, Fe-28Al-2Ti and Fe-28Al-4Ti alloys has been investigated by tensile testing, optical microscopy and transmission electron microscopy. Tensile tests were performed at 700–900 °C under a strain rate range of about 10−5 to 10−2/s. The maximum strain rate sensitivity index m was found to be 0.5, and the largest elongation reached was 620%. Fe3Al and FeAl alloys with grain sizes of 100 to 600 μm exhibit all the deformation characteristics of conventional fine-grained superplastic alloys.
However, superplastic behaviour was found in large-grained iron aluminides without the usual requisites for superplasticity of a fine grain size and grain boundary sliding. Metallographic examinations have shown that the average grain size of large-grained iron aluminides decreased during superplastic deformation. Ceramics The properties of ceramics The properties of ceramic materials, like those of all materials, are dictated by the types of atoms present, the types of bonding between the atoms, and the way the atoms are packed together. This is known as the atomic-scale structure. Most ceramics are made up of two or more elements; this is called a compound. For example, alumina (Al2O3) is a compound made up of aluminium atoms and oxygen atoms. The atoms in ceramic materials are held together by chemical bonds. The two most common chemical bonds for ceramic materials are covalent and ionic; for metals, the chemical bond is called the metallic bond. The bonding of atoms together is much stronger in covalent and ionic bonding than in metallic bonding. That is why, generally speaking, metals are ductile and ceramics are brittle. Due to ceramic materials' wide range of properties, they are used for a multitude of applications. In general, most ceramics are hard, wear-resistant, brittle, refractory, thermally insulating, electrically insulating, nonmagnetic, oxidation-resistant, prone to thermal shock, and chemically stable. High-strain-rate superplasticity has been observed in aluminium-based and magnesium-based alloys, but in ceramic materials superplastic deformation has been restricted to low strain rates for most oxides and nitrides, with the presence of cavities leading to premature failure. It has been shown, however, that a composite ceramic material consisting of tetragonal zirconium oxide, magnesium aluminate spinel and an alpha-alumina phase exhibits superplasticity at strain rates up to 1.0 s−1. The composite also exhibits a large tensile elongation, exceeding 1050% at a strain rate of 0.4 s−1. Superplastic metals and ceramics have the ability to deform by over 100% without fracturing, permitting net-shape forming at high temperatures. These intriguing materials deform primarily by grain boundary sliding, a process accelerated by a fine grain size. However, most ceramics that start with a fine grain size experience rapid grain growth during high-temperature deformation, rendering them unsuitable for extended superplastic forming. One can limit grain growth using a minor second phase (Zener pinning) or by making a ceramic with three phases, in which grain-to-grain contact of the same phase is minimized. Research on fine-grained, three-phase alumina-mullite (3Al2O3·2SiO2)-zirconia, with approximately equal volume fractions of the three phases, demonstrates that superplastic strain rates as high as 10−2/s at 1500 °C can be reached. These high strain rates put ceramic superplastic forming into the realm of commercial feasibility. Cavitation Superplastic forming will only work if cavitation does not occur during grain boundary sliding, leaving either diffusion accommodation or dislocation generation as the mechanisms for accommodating grain boundary sliding. The applied stresses during ceramic superplastic forming are moderate, usually 20–50 MPa, generally not high enough to generate dislocations in single crystals, which should rule out dislocation accommodation.
Some unusual and unique features of these three-phase superplastic ceramics have been revealed, however, indicating that superplastic ceramics may have much more in common with metals than previously thought. Yttria-stabilized tetragonal zirconia polycrystal Yttrium oxide is used as the stabilizer. This material is predominantly tetragonal in structure. Y-TZP has the highest flexural strength of all the zirconia-based materials. The fine grain size of Y-TZP lends itself to use in cutting tools, where a very sharp edge can be achieved and maintained due to its high wear resistance. 3-mol% Y-TZP (3Y-TZP) is considered to be the first true polycrystalline ceramic shown to be superplastic, and it is now considered the model ceramic system. The fine grain size leads to a very dense, non-porous ceramic with excellent mechanical strength, corrosion resistance, impact toughness, thermal shock resistance and very low thermal conductivity. Because of these characteristics, Y-TZP is used in wear parts, cutting tools and thermal barrier coatings. Grain size The superplastic properties of 3Y-TZP are greatly affected by grain size: as shown in Fig. 3, elongation to failure decreases and flow strength increases as grain size increases. A study of the dependence of flow stress on grain size showed, in summary, that the flow stress depends approximately on the square of the grain size: σ ∝ d², where σ is the flow stress and d is the instantaneous grain size. Alumina (Al2O3) Alumina is probably one of the most widely used structural ceramics, but superplasticity is difficult to obtain in alumina as a result of rapid anisotropic grain growth during high-temperature deformation. Regardless, several studies have been performed on superplasticity in doped, fine-grained Al2O3. It has been demonstrated that the grain size of Al2O3 containing 500 ppm MgO can be further refined by adding various dopants. A grain size of about 0.66 μm was obtained in Al2O3 with a 500 ppm dopant addition. As a result of this fine grain size, the Al2O3 exhibits a rupture elongation of 65% at 1450 °C under an applied stress of 20 MPa. See also Superplastic forming High-explosive anti-tank References Bibliography Superplasticity: Dr R. H. Johnson, Metallurgical Review No. 146, Sept 1970. Institute of Metals, London, UK. Materials science
Superplasticity
Physics,Materials_science,Engineering
6,198
18,867,116
https://en.wikipedia.org/wiki/Random%20positioning%20machine
A random positioning machine, or RPM, rotates biological samples along two independent axes to change their orientation in space in complex ways and so eliminate the effect of gravity. Description The RPM is a more sophisticated development of the single-axis clinostat. RPMs usually consist of two independently rotating frames. One frame is positioned inside the other, giving a very complex net change of orientation to a biological sample mounted in the middle. The RPM is sometimes wrongly referred to as the "3-D clinostat" (which rotates both axes in the same direction, i.e. both clockwise). It is a microweight ('micro-gravity') simulator based on the principle of 'gravity-vector-averaging'. The RPM provides a functional volume which is 'exposed' to simulated microweight. Simulated micro-, partial, and hyper-gravity The concept of 'random' positioning has been used to simulate a micro-gravity environment through the nullification of gravity. This is accomplished by continuously disorienting the target model, also described as "vector-averaging". Through the use of a centrifuge, 'hyper-gravity' can be simulated, as the model is exposed to a continuous accelerative force. Under hyper-gravity within a micro-gravity environment, a partial 'Earth' gravity is created. Hyper-gravity simulation is also achieved through the use of larger centrifuges, such as the Large Diameter Centrifuge (LDC) at the European Space Agency; the LDC can simulate up to twenty times the Earth's gravitational strength. A system developed by Airbus uses an algorithm to simulate partial gravity through vector-averaging that is not fully random: instead of averaging the gravity vector out to null, the algorithm averages it to a percentage representing the simulated partial gravity. Scientific research As the human body undergoes physiological changes once subjected to weightlessness or microgravity, space-related changes in physiology have come into the focus of scientific research. Experimenting aboard rockets during sub-orbital flights or in ground-based facilities such as drop towers is not always feasible. However, technological advances have made it possible to simulate microgravity in random positioning machines, which find wide application in modern research. Disadvantages The simulated microgravity environment attained inside the RPM is not perfect. One secondary effect is the shear forces created by the fluid dynamics of the cell culture medium. These have been mathematically modelled by Wüest and, according to the research by Hauslage, they are of a magnitude sufficient to have biological implications. Cortés-Sánchez also showed these effects in mammalian cells cultured in the RPM. See also Gravitropism Sub-orbital flights Weightlessness Drop Tubes References External links ETH Space Biology Random Positioning Machine DESC VU Amsterdam Standard and desktop Random Positioning Machines Manufacturer's Website: yuri GmbH Laboratory equipment Gravitational instruments Positioning instruments
Random positioning machine
Technology,Engineering
591
26,849,676
https://en.wikipedia.org/wiki/Tessellated%20roof
A tessellated roof is a frame and a self-supporting structural system in architecture. The inside of a simple ridged roof may be a tessellated system. The interlinking shapes are replicated across the moulded surface using curvilinear coordinates, a specific technique using rigid interlinking beams that gives the frame characteristics similar to woven fabric. A tessellated roof is one of the most flexible framed systems to design. The measurements and precision involved are complex and commonly part of a computer-aided design and production process. It is used in honeycomb geometry form in the biomes of the Eden Project. It can be fabricated to fit a wide range of situations. The size of the repeated geometric shape can be customised, with a multitude of the same shape throughout the structure. An even and equal load is shared through the interlocking structural integrity of the frame as a whole. The use of a tessellated roof over public areas is an increasingly implemented architectural feature of modern public buildings, covering walkways and retail centres. A transparent roof, while providing shelter from the weather, has the advantage of admitting daylight, whereas buildings with solid roofs incur the financial cost of electricity for artificial lighting. A modern tessellated roof for covering public areas is a variation of a greenhouse or glass roof, in different shapes and sizes. The roof can be held aloft with columns that may have branches to support and connect to the roof latticework, which stabilise the roof to create a strong structure. The material in between or covering the tessellated frame may be a light composite, toughened glass or insulated glazing. There are roofed boulevards with columns that can form a colonnade. Some tessellated roof shapes connect to the ground in place of conventional rain gutters, for example the FieraMilano, or the roof can be supported entirely by the surrounding buildings. A tessellated roof can convert previously outdoor space into a dry public area; some examples of this method are the Galleria Vittorio Emanuele II and many other shopping complexes, or the Queen Elizabeth II Great Court at the British Museum in London by Norman Foster. See also Reciprocal frame Space frame Thin-shell structure References Roofs Structural engineering Structural system Tensile architecture
Tessellated roof
Technology,Engineering
457
7,083,018
https://en.wikipedia.org/wiki/Inca%20technology
Inca technology includes devices, technologies and construction methods used by the Inca people of western South America (between the 1100s and their conquest by Spain in the 1500s), including the methods Inca engineers used to construct the cities and road network of the Inca Empire. Hydraulic engineering The builders of the empire planned and built impressive waterworks in their city centers, including canals, fountains, drainage systems and expansive irrigation. The Inca's infrastructure and water supply system have been hailed as "the pinnacle of the architectural and engineering works of the Inca civilization". Major Inca centers were chosen by experts who decided the site, its apportionment, and the basic layout of the city. Many cities contain great feats of hydraulic engineering. For example, in the city of Tipon, three irrigation canals diverted water for Tipon's terraces from the Rio Pukara, about 1.35 km to the north. Tipon also had natural springs, for which fountains were built that supplied noble residents with water for non-agricultural purposes. Machu Picchu Machu Picchu was constructed around 1450, a date based on carbon-14 test results. The famous lost Inca city is an architectural remnant of a society with an advanced understanding of civil and hydraulic engineering. Today, it is famously known for its remarkable preservation as well as the beauty of the buildings' architecture. The site is located 120 km northwest of Cuzco in the Urubamba river valley, Peru. At 2560 m above sea level, sitting atop a mountain, the city planners had to consider the steep slopes of the site as well as the humid and rainy climate. The Inca people built this site atop a hill which was terraced (most likely for agricultural purposes). In addition to terraces, Machu Picchu is composed of two additional basic architectural elements: elite residential compounds and religious structures. The site is full of staircases and sculpted rock, which were also important to Inca architecture and engineering practices. Making models out of clay before beginning to build, the city planners remained consistent with Inca architecture and laid out a city that separated the agricultural and urban areas. Before construction began, the engineers had to assess the spring and whether it could provide for all of the city's anticipated citizens. After evaluating the water supply, the civil engineers designed a canal running to what would become the city's center. The canal descends the mountain slope, enters the city walls, passes through the agricultural sector, then crosses the inner wall into the urban sector, where it feeds a series of fountains. The fountains are publicly accessible and partially enclosed by walls that are typically about 1.2 m high, except for the lowest fountain, which is a private fountain for the Temple of the Condor and has higher walls. At the head of each fountain, a cut stone conduit carries the water to a rectangular spout, which is shaped to create a jet of water suitable for filling an aryballo, a typical Inca clay water jug. The water collects in a stone basin in the floor of the fountain, then enters a circular drain that delivers it to the approach channel for the next fountain. The Incas built the canals on steady grades, using cut stones as the water channels. Most citizens worked on the construction and maintenance of the canal and irrigation systems, using bronze and stone tools to complete the water-tight stone canals.
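The carrying capacity of a small cut-stone canal on a steady grade can be estimated with Manning's equation, a modern open-channel formula (the Incas, of course, built empirically). All of the dimensions, slope, and roughness in the sketch below are hypothetical illustrations, not measurements of the Machu Picchu canal.

# Rough open-channel estimate (Manning's equation, SI units) of the flow
# a small rectangular stone canal could carry. Hypothetical numbers only.
def manning_flow(width_m, depth_m, slope, n):
    area = width_m * depth_m                    # flow cross-section, m^2
    wetted_perimeter = width_m + 2 * depth_m    # m
    hydraulic_radius = area / wetted_perimeter  # m
    return (1.0 / n) * area * hydraulic_radius ** (2 / 3) * slope ** 0.5

# Hypothetical: 12 cm wide, 8 cm flow depth, 1% grade,
# n ~ 0.015 for smooth dressed stone.
q = manning_flow(width_m=0.12, depth_m=0.08, slope=0.01, n=0.015)
print(f"~{q * 1000:.1f} L/s (~{q * 60000:.0f} L/min)")  # ~6.8 L/s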
The water then traveled through the channels into sixteen fountains known as the "stairway of fountains", with the first water source reserved for the Emperor. This incredible feat supplied the population of Machu Picchu, which varied between 300 and 1000 people when the emperor was present, and also helped irrigate the farming terraces. The fountains and canal system were built so well that, after a few minor repairs, they would still work today. To go along with the Incas' advanced water supply system, an equally impressive drainage system was built as well. Machu Picchu contains nearly 130 outlets in the center that moved water out of the city through walls and other structures. The agricultural terraces are a feature of the complicated drainage system; the terraces helped avoid erosion and were built on a slope to direct excess water into channels that ran alongside the stairways. These channels carried the runoff into the main drain, avoiding the main water supply. This carefully planned drainage system shows the Incas' concern for and appreciation of clean water. Water engineer Ken Wright and his archaeological team found the emperor's bathing room, complete with a separate drain that carried off his used bath water so it would never re-enter Machu Picchu's water supply. Terraces Terrace function and structure The Inca faced many problems living in areas of steep terrain. Two large issues were soil erosion and the area available to grow crops. The solution to these problems was the development of terraces, called andenes. These terraces allowed the Inca to farm land they never could have in the past. How a terrace functions and looks, including its geometric alignment, depends on the slope of the land. The different layering of materials is part of what makes the terraces so successful. They start with a base layer of large rocks, followed by a second layer of smaller rocks, then a layer of sand-like material, and finally the topsoil. The most impressive part of the terraces was their drainage systems. Drain outlets were placed in the numerous stone retaining walls. The larger rocks at the base of each terrace level allowed the water to flow more easily through the larger spaces between the rocks, eventually coming out at the "Main Drain". The Inca even constructed different types of drainage channels, used for different purposes throughout the city. How they were built and why they were effective Studies have indicated that when terraces like the ones in the Colca Valley were constructed, the first step was excavating into the slope, followed by a subsequent infilling of the slope. A retaining wall was built to hold the fill material. This wall had many uses, including absorbing heat from the sun during the day and radiating it back out at night, often keeping crops from freezing in the chilling nighttime temperatures, and holding back the different layers of sediment. After the wall was built, the larger rocks were placed on the bottom, then smaller rocks, then sand, then soil. Since the soil was now level, water did not rush down the side of the mountain, which is what causes erosion. Previously, this erosion was so powerful that it had the potential to wipe out major sections of the Inca road, as well as wash away all of the nutrients and fertile soil. Since the soil never washed away, nutrients would always be added from previously grown crops year after year.
The Inca even grew specific crops together to balance out the optimal amount of nutrients for all the plants. For example, a planting method known as the "three sisters" incorporated the growth of corn, beans, and squash in the same terrace. The nitrogen fixed by the beans helped the corn grow, while the squash acted as mulch, keeping the soil moist and also repelling weeds. Freeze-drying Purpose All food grown or killed by the Inca could be freeze-dried, and freeze-drying is still very popular today. One of the biggest benefits of freeze-drying is that it removes all of the water and moisture but leaves all of the nutritional value. The water in meats and vegetables is what gives them much of their weight. This made freeze-drying very popular for transportation and storage purposes, because dried meats lasted twice as long as non-freeze-dried foods. Vegetables The Inca diet was largely vegetarian, because large wild game was often reserved for special occasions. A very common and well-known freeze-dried item was the potato, which when freeze-dried was called chuño. Meats Common meats to freeze-dry included llama, alpaca, duck, and guinea pig. Jerky (ch'arki in Quechua) was much easier to transport and lasted longer than undried meats. All of these had the potential to be freeze-dried. Process Both meats and vegetables went through a similar freezing process. They would start by laying all the different foods on rocks; during the cold nights at high altitude, with dry air, the foods would freeze. The next morning, a combination of the thin dry air and the heat from the sun would melt the ice and evaporate the moisture. They would also trample over the food in the morning to press out any extra moisture. The process of freeze-drying was important for transportation and storage. The high elevation (low atmospheric pressure) and low temperatures of the Andes mountains are what allowed the Inca to take advantage of this process. Burning mirror The chronicler Inca Garcilaso de la Vega described the use of a burning mirror as part of the annual "Inti Raymi" (sun festival): "The fire for that sacrifice had to be new, given by the hand of the sun, as they said. For which they took a large bracelet, which they call Chipana (similar to others that the Incas commonly wore on the left wrist) which the high priest had; it was large, larger than the common ones, it had for a medallion a concave vessel, the shape of a half orange and brightly polished, they put it against the sun, and at a certain point where the rays that came out of the vessel hit each other, they put a bit of finely unravelled cotton (they did not know how to make tinder), which caught fire naturally in a short space of time. With this fire, thus given by the hand of the sun, the sacrifice was burned and all the meat of that day was roasted." Pathway systems The vast size of the Inca empire made it essential that efficient and effective transportation systems were created and built to assist in the exchange of goods, services, people, etc. At one point, "their (the Inca) empire eventually extended across western South America from Quito in the north to Santiago in the south, making it the largest empire ever seen in the Americas and the largest in the world at that time (between c. 1400 and 1533 CE)." It is known to have "extended some 3500-4000 km along the mountainous backbone of South America."
The trails, roads, and bridges were designed not only to link the empire physically; these structures also helped the empire to maintain communication. Rope bridges Rope bridges were an integral part of the Inca road system. "Five centuries ago, the Andes were strung with suspension bridges. By some estimates there were as many as 200 of them." As pictured to the right, these structures were used to connect two land masses, allowing for the flow of ideas, goods, people, animals, etc. across the Incan empire. "The Inca suspension bridges achieved clear spans of at least 150 feet, probably much greater. This was a longer span than any European masonry bridges at the time." Since the Incan people did not use wheeled vehicles, most traveled by foot and/or used animals to help transport goods. Construction Although these bridges were assembled using twisted mountain grass, other vegetation, and saplings, they were dependable. These structures were able both to support the weight of traveling people and animals and to withstand weather conditions over time. Since grass rots away over time, the bridges had to be rebuilt every year. When the Inca people began building a grass suspension bridge, they would first gather natural materials of grass and other vegetation. They would then braid these elements together into rope; this contribution was made by the Inca women. Vast amounts of thin-looking rope were produced, and the villagers would then deliver their quota of rope to the builders. The rope was then divided into sections, each consisting of a quantity of thin rope laid out together in preparation for creating a thicker rope cord. Once the sections were laid out, the strands of rope made earlier were twisted together tightly and evenly, producing the larger and thicker rope cord. These larger ropes were then braided together to create cables, some as thick as a human torso. Depending on the dimensions of the cable, each could weigh up to 200 pounds. These cables were then delivered to the bridge site. It was considered bad luck for women to be anywhere near the construction of the bridge, so the Inca men were in charge of the on-site construction. At the bridge site, builders would travel to the opposite land mass that they were working to connect. Once they were positioned on the opposite side, one of the thin, lightweight ropes would be thrown over to them. This rope would then be used to pull the main cables over the gorge. Stone beams were built on either side of the gorge and were used to help position and secure the cables. The cables were wrapped around these stone beams and tightened inch by inch to decrease any slack in the bridge. Once this was finished, the riggers carefully made their way across the hanging cables, tying the foot-ropes together and connecting the handrails and the foot-ropes with the remainder of the thin grass ropes. Not all rope bridges were exactly alike in terms of design and build; some riggers also wove pieces of wood into the foot-ropes. Modern-day rope bridge builders in Huinchiri, Peru make offerings to Pacha Mama, otherwise known as "Mother Earth", throughout their building process to ensure that the bridge will be strong and safe. This may have been a practice used by the Inca people, since they too were religious. If all went smoothly and tasks were performed in a timely fashion, a bridge could be constructed in three days.
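A back-of-envelope estimate gives a feel for the loads such grass cables carried. The sketch below uses a parabolic-cable approximation under self-weight; the ~150 ft span and ~200 lb cable weight come from the text, while the sag is a hypothetical illustration, so the result is indicative only.

import math

# Parabolic-cable approximation of the maximum tension in one suspended
# cable under its own weight. Sag and per-metre weight are hypothetical.
def cable_tension(span_m, sag_m, weight_n_per_m):
    horizontal = weight_n_per_m * span_m ** 2 / (8 * sag_m)
    vertical = weight_n_per_m * span_m / 2        # at each support
    return math.hypot(horizontal, vertical)       # max tension, N

# ~45 m span (about 150 ft), 3 m assumed sag, a 90 kg (200 lb)
# cable spread over the span -> ~2 kg/m -> ~19.6 N/m.
t = cable_tension(span_m=45.0, sag_m=3.0, weight_n_per_m=19.6)
print(f"max tension ~ {t:.0f} N (~{t / 9.81:.0f} kgf) per cable")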
Modern rope bridges People today continue to honor Incan traditions and expand their knowledge in the building of rope bridges. "Each June in Huinchiri, Peru, four Quechua communities on two sides of a gorge join together to build a bridge out of grass, creating a form of ancient infrastructure that dates back at least five centuries to the Inca Empire." The previous Q'eswachaka Bridge is cut down and swept away by the Apurímac River current, and a new bridge is built in its place. This tradition links the Quechua communities of Huinchiri, Chaupibanda, Choccayhua, and Ccollana Quehue to their past ancestors. "According to our grandfathers, this bridge was built during the time of the Inkas 600 years ago, and on it they walked their llamas and alpacas carrying their produce." - Eleuterio Ccallo Tapia "A small portion of a 60-foot replica built by Quechua weavers is on view in The Great Inka Road: Engineering an Empire at the Smithsonian's National Museum of the American Indian in Washington, DC." This exhibit was on display at the museum through June 27, 2021; visitors are also encouraged to experience the exhibit online. Either way, museums like the Smithsonian are working to preserve and display examples and knowledge of the Inca-inspired rope bridges today. John Wilford shared in the New York Times that students at the Massachusetts Institute of Technology are learning much more than how objects are made; they are being taught to observe and test how archeology entwines with culture. Wilford's article was written in 2007. At that time, students involved in a course called "materials in human experience" were busy making a 60-foot-long fiber bridge in the Peruvian style. Through this project, they were introduced to the Inca people's way of thinking and building. After creating their ropes and cables, they planned to stretch the bridge across a dry basin between two campus buildings. Roads According to author Mark Cartwright, "Inca roads covered over 40,000 km (25,000 miles), principally in two main highways running north to south across the Inca Empire, which eventually spread over ancient Peru, Ecuador, Chile and Bolivia." Several sources challenge Cartwright's claim, stating that the Inca roads covered either more or less area than he describes. This number is difficult to pin down, since some of the pathways of the Inca may still remain unaccounted for, having been washed away or covered by natural forces. "Inca engineers were also undaunted by geographical difficulties and built roads across ravines, rivers, deserts, and mountain passes up to 5,000 meters high." Many of the constructed roads are not uniform in design. Most of the uncovered roads are about one to four meters wide, although some roads, such as the highway in Huanuco Pampa province, can be much larger. As mentioned in the Pathway systems section, the Inca people mainly traveled on foot. Knowing this, the roads created were most likely built and paved for both humans and animals to walk and/or run along. Several roads were paved with stones or cobbles, and some were "edged and protected with the use of small stone walls, stone markers, wooden or cane posts, or piles of stones." Drainage was of particular interest and importance to the Inca people. Drains and culverts were built to ensure that rainwater would effectively run off the road's surface, directing the accumulating water either along or under the road.
Uses
As mentioned in the section Pathway systems, there were several uses for the Inca roads. The most obvious way in which the Inca people used the road and trail systems was to transport goods, which they did on foot and sometimes with the help of animals (llamas and alpacas). Not only were goods transported throughout the vast empire, but so were ideas and messages. The Inca needed a system of communication, so they relied on chasquis, otherwise known as messengers. The chasquis were chosen from among the strongest and fittest young males, and they ran several miles per day to deliver messages. These messengers resided in cabins called "tambos," structures positioned along the roads and built by the Inca people to provide the chasquis with a place to rest. These places of rest could also be used to house the Inca army in a situation of rebellion or war.

Modern Inca roads
Today, many people travel to South America to hike the Inca trail. Walking and climbing the trail not only allows visitors to experience the historic pathways of the Inca people, but also lets tourists and locals see the Inca ruins, mountains, and exotic vegetation and animals.

References

Bibliography
"Inka Hydraulic Engineering", University of Colorado at Denver, 19 September 2006.
Brown, Jeff L. "Water Supply and Drainage Systems at Machu Picchu", 19 September 2006.
Wright, Kenneth R. "Machu Picchu: Prehistoric Public Works." American Public Works Association, APWA Reporter, 17 November 2003.
D’Altroy, Terence N., and Christine A. Hastorf. Empire and Domestic Economy. New York: Kluwer Academic/Plenum Publishers, 2001.
Wright, Kenneth, Jonathan M. Kelly, and Alfredo Valencia Zegarra. "Machu Picchu: Ancient Hydraulic Engineering." Journal of Hydraulic Engineering, October 1997.
Bauer, Brian. The Development of the Inca State. University of Texas Press, Austin, 1992.
Hyslop, John. Inka Settlement Planning. University of Texas Press, Austin, 1990.

Inca
Civil engineering
History of engineering
Technology by period
Inca technology
Engineering
3,981
60,099,218
https://en.wikipedia.org/wiki/C18H14O8
The molecular formula C18H14O8 (molar mass: 358.302 g/mol) may refer to:
Psoromic acid
MCPO

Molecular formulas
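As a quick sanity check, the quoted molar mass can be reproduced from the standard rounded atomic weights (C 12.011, H 1.008, O 15.999 g/mol). The short Python sketch below is an illustrative addition, not part of the original entry:

# Recompute the molar mass of C18H14O8 from standard atomic weights.
atomic_weight = {"C": 12.011, "H": 1.008, "O": 15.999}  # g/mol, rounded
formula = {"C": 18, "H": 14, "O": 8}

molar_mass = sum(count * atomic_weight[element]
                 for element, count in formula.items())
print(f"{molar_mass:.3f} g/mol")  # -> 358.302 g/mol, matching the value above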
C18H14O8
Physics,Chemistry
51
650,995
https://en.wikipedia.org/wiki/Reactivity%20series
In chemistry, a reactivity series (or reactivity series of elements) is an empirical, calculated, and structurally analytical progression of a series of metals, arranged by their "reactivity" from highest to lowest. It is used to summarize information about the reactions of metals with acids and water, single displacement reactions, and the extraction of metals from their ores.

Table
Going from the bottom to the top of the table, the metals:
increase in reactivity;
lose electrons (oxidize) more readily to form positive ions;
corrode or tarnish more readily;
require more energy (and different methods) to be isolated from their compounds;
become stronger reducing agents (electron donors).

Defining reactions
There is no unique and fully consistent way to define the reactivity series, but it is common to use the three types of reaction listed below, many of which can be performed in a high-school laboratory (at least as demonstrations).

Reaction with water and acids
The most reactive metals, such as sodium, will react with cold water to produce hydrogen and the metal hydroxide:
2 Na (s) + 2 H2O (l) → 2 NaOH (aq) + H2 (g)
Metals in the middle of the reactivity series, such as iron, will react with acids such as sulfuric acid (but not water at normal temperatures) to give hydrogen and a metal salt, such as iron(II) sulfate:
Fe (s) + H2SO4 (aq) → FeSO4 (aq) + H2 (g)
There is some ambiguity at the borderlines between the groups. Magnesium, aluminium and zinc can react with water, but the reaction is usually very slow unless the metal samples are specially prepared to remove the surface passivation layer of oxide which protects the rest of the metal. Copper and silver will react with nitric acid; but because nitric acid is an oxidizing acid, the oxidizing agent is not the H+ ion as in normal acids, but the NO3− ion.

Comparison with standard electrode potentials
The reactivity series is sometimes quoted in the strict reverse order of standard electrode potentials, when it is also known as the "electrochemical series". The following list includes the metallic elements of the first six periods. It is mostly based on tables provided by NIST. However, not all sources give the same values: there are some differences between the precise values given by NIST and the CRC Handbook of Chemistry and Physics. In the first six periods this does not make a difference to the relative order, but in the seventh period it does, so the seventh-period elements have been excluded. (In any case, the typical oxidation states for the most accessible seventh-period elements thorium and uranium are too high to allow a direct comparison.) Hydrogen has been included as a benchmark, although it is not a metal. Borderline germanium, antimony, and astatine have been included. Some other elements in the middle of the 4d and 5d rows have been omitted (Zr–Tc, Hf–Os) when their simple cations are too highly charged or of rather doubtful existence. Greyed-out rows indicate values based on estimation rather than experiment. The positions of lithium and sodium are changed on such a series. Standard electrode potentials offer a quantitative measure of the power of a reducing agent, rather than the qualitative considerations of other reactivity series. However, they are only valid for standard conditions: in particular, they only apply to reactions in aqueous solution. Even with this proviso, the electrode potentials of lithium and sodium – and hence their positions in the electrochemical series – appear anomalous.
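The reverse-E° ordering just described is easy to mechanize. The Python sketch below is illustrative only: the rounded textbook standard reduction potentials are supplied here for the example, not drawn from the article's table. It sorts a few metals into an electrochemical series and applies the usual single-displacement rule; note that lithium outranks potassium and sodium by E° even though its reaction with water is less vigorous, which is exactly the anomaly discussed above.

# Illustrative, rounded standard reduction potentials E° (V) for M^n+/M.
E0 = {
    "Li": -3.04, "K": -2.93, "Ca": -2.87, "Na": -2.71, "Mg": -2.37,
    "Al": -1.66, "Zn": -0.76, "Fe": -0.44, "H2": 0.00, "Cu": 0.34, "Ag": 0.80,
}

# Electrochemical series: most negative E° (strongest reducing agent) first.
series = sorted(E0, key=E0.get)
print(" > ".join(series))  # Li > K > Ca > Na > Mg > Al > Zn > Fe > H2 > Cu > Ag

def displaces(metal, ion):
    # Single displacement: a metal reduces the cation of any metal
    # with a less negative standard potential.
    return E0[metal] < E0[ion]

print(displaces("Zn", "Cu"))  # True: Zn + Cu2+ -> Zn2+ + Cu
print(displaces("Cu", "Zn"))  # False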
The order of reactivity, as shown by the vigour of the reaction with water or the speed at which the metal surface tarnishes in air, appears to be Cs > K > Na > Li > alkaline earth metals, i.e., alkali metals > alkaline earth metals, the same as the reverse order of the (gas-phase) ionization energies. This is borne out by the extraction of metallic lithium by the electrolysis of a eutectic mixture of lithium chloride and potassium chloride: lithium metal is formed at the cathode, not potassium.

Comparison with electronegativity values
Metals can also be grouped by their electronegativity values. Wulfsberg distinguishes:
very electropositive metals with electronegativity values below 1.4;
electropositive metals with values between 1.4 and 1.9; and
electronegative metals with values between 1.9 and 2.54.
On this scale, the group 1–2 metals and the lanthanides and actinides are very electropositive to electropositive; the transition metals in groups 3 to 12 are very electropositive to electronegative; and the post-transition metals are electropositive to electronegative. The noble metals (a subset of the transition metals) are very electronegative.

See also
Reactivity (chemistry), which discusses the inconsistent way that the term 'reactivity' is used in chemistry.

References

External links
Science Line

Chemistry
Inorganic chemistry
Reactivity series
Chemistry
1,081
6,176,811
https://en.wikipedia.org/wiki/Newman%E2%80%93Penrose%20formalism
The Newman–Penrose (NP) formalism is a set of notation developed by Ezra T. Newman and Roger Penrose for general relativity (GR). Their notation is an effort to treat general relativity in terms of spinor notation, which introduces complex forms of the usual variables used in GR. The NP formalism is itself a special case of the tetrad formalism, where the tensors of the theory are projected onto a complete vector basis at each point in spacetime. Usually this vector basis is chosen to reflect some symmetry of the spacetime, leading to simplified expressions for physical observables. In the case of the NP formalism, the vector basis chosen is a null tetrad: a set of four null vectors—two real, and a complex-conjugate pair. The two real members often asymptotically point radially inward and radially outward, and the formalism is well adapted to treatment of the propagation of radiation in curved spacetime. The Weyl scalars, derived from the Weyl tensor, are often used. In particular, it can be shown that one of these scalars—$\Psi_4$ in the appropriate frame—encodes the outgoing gravitational radiation of an asymptotically flat system.

Newman and Penrose introduced the following functions as primary quantities using this tetrad:
Twelve complex spin coefficients (in three groups) which describe the change in the tetrad from point to point: $(\kappa, \tau, \sigma, \rho)$, $(\pi, \nu, \mu, \lambda)$, $(\varepsilon, \gamma, \beta, \alpha)$.
Five complex functions encoding Weyl tensors in the tetrad basis: $\Psi_0, \Psi_1, \Psi_2, \Psi_3, \Psi_4$.
Ten functions encoding Ricci tensors in the tetrad basis: $\Phi_{00}, \Phi_{11}, \Phi_{22}, \Lambda$ (real); $\Phi_{01}, \Phi_{02}, \Phi_{12}$ (complex).

In many situations—especially algebraically special spacetimes or vacuum spacetimes—the Newman–Penrose formalism simplifies dramatically, as many of the functions go to zero. This simplification allows for various theorems to be proven more easily than using the standard form of Einstein's equations.

In this article, we will only employ the tensorial rather than spinorial version of NP formalism, because the former is easier to understand and more popular in relevant papers. One can refer to the references for a unified formulation of these two versions.

Null tetrad and sign convention
The formalism is developed for four-dimensional spacetime, with a Lorentzian-signature metric. At each point, a tetrad (set of four vectors) is introduced. The first two vectors, $l^a$ and $n^a$, are just a pair of standard (real) null vectors, so that $l_a l^a = n_a n^a = 0$. For example, we can think in terms of spherical coordinates, and take $l^a$ to be the outgoing null vector, and $n^a$ to be the ingoing null vector. A complex null vector is then constructed by combining a pair of real, orthogonal unit space-like vectors. In the case of spherical coordinates, the standard choice is

$m^a = \tfrac{1}{\sqrt{2}} \left( \hat{\theta} + i \hat{\phi} \right)^a .$

The complex conjugate of this vector then forms the fourth element of the tetrad.

Two sets of signature and normalization conventions are in use for NP formalism: $\{(+,-,-,-);\ l^a n_a = +1,\ m^a \bar{m}_a = -1\}$ and $\{(-,+,+,+);\ l^a n_a = -1,\ m^a \bar{m}_a = +1\}$. The former is the original one that was adopted when NP formalism was developed and has been widely used in black-hole physics, gravitational waves and various other areas in general relativity. However, it is the latter convention that is usually employed in contemporary study of black holes from quasilocal perspectives (such as isolated horizons and dynamical horizons). In this article, we will utilize $\{(-,+,+,+);\ l^a n_a = -1,\ m^a \bar{m}_a = +1\}$ for a systematic review of the NP formalism (see also the references). It's important to note that, when switching from $\{(+,-,-,-)\}$ to $\{(-,+,+,+)\}$, definitions of the spin coefficients, Weyl-NP scalars and Ricci-NP scalars need to change their signs; this way, the Einstein-Maxwell equations can be left unchanged.
In NP formalism, the complex null tetrad contains two real null (co)vectors $\{l, n\}$ and two complex null (co)vectors $\{m, \bar{m}\}$. Being null (co)vectors, the self-normalization of $\{l, n, m, \bar{m}\}$ naturally vanishes,

$l_a l^a = n_a n^a = m_a m^a = \bar{m}_a \bar{m}^a = 0,$

so the following two pairs of cross-normalization are adopted,

$l_a n^a = -1 = l^a n_a, \qquad m_a \bar{m}^a = 1 = m^a \bar{m}_a,$

while contractions between the two pairs are also vanishing,

$l_a m^a = l_a \bar{m}^a = n_a m^a = n_a \bar{m}^a = 0.$

Here the indices can be raised and lowered by the global metric $g_{ab}$, which in turn can be obtained via

$g_{ab} = -l_a n_b - n_a l_b + m_a \bar{m}_b + \bar{m}_a m_b, \qquad g^{ab} = -l^a n^b - n^a l^b + m^a \bar{m}^b + \bar{m}^a m^b.$

Four covariant derivative operators
In keeping with the formalism's practice of using distinct unindexed symbols for each component of an object, the covariant derivative operator $\nabla_a$ is expressed using four separate symbols ($D$, $\Delta$, $\delta$, $\bar{\delta}$) which name a directional covariant derivative operator for each tetrad direction. Given a linear combination of tetrad vectors, $X^a = a\,l^a + b\,n^a + c\,m^a + d\,\bar{m}^a$, the covariant derivative operator in the direction $X^a$ is $X^a \nabla_a = a D + b \Delta + c \delta + d \bar{\delta}$. The operators are defined as

$D = l^a \nabla_a, \qquad \Delta = n^a \nabla_a, \qquad \delta = m^a \nabla_a, \qquad \bar{\delta} = \bar{m}^a \nabla_a,$

which reduce to $D = l^a \partial_a$, $\Delta = n^a \partial_a$, $\delta = m^a \partial_a$, $\bar{\delta} = \bar{m}^a \partial_a$ when acting on scalar functions.

Twelve spin coefficients
In NP formalism, instead of using index notations as in orthogonal tetrads, each Ricci rotation coefficient in the null tetrad is assigned a lower-case Greek letter; these constitute the 12 complex spin coefficients, in three groups: $\{\kappa, \tau, \sigma, \rho\}$, $\{\pi, \nu, \mu, \lambda\}$ and $\{\varepsilon, \gamma, \beta, \alpha\}$. Spin coefficients are the primary quantities in NP formalism, with which all other NP quantities (as defined below) can be calculated indirectly using the NP field equations. Thus, NP formalism is sometimes referred to as spin-coefficient formalism as well.

Transportation equations: covariant derivatives of tetrad vectors
The sixteen directional covariant derivatives of tetrad vectors ($D l^a$, $\Delta l^a$, $\delta l^a$, $\bar{\delta} l^a$, and similarly for $n^a$, $m^a$ and $\bar{m}^a$) are sometimes called the transportation/propagation equations, perhaps because the derivatives are zero when the tetrad vector is parallel propagated or transported in the direction of the derivative operator. These results in this exact notation are given by O'Donnell.

Interpretation of κ, ε, ν, γ from Dl^a and Δn^a
The two equations for the covariant derivative of a real null tetrad vector in its own direction indicate whether or not the vector is tangent to a geodesic and, if so, whether the geodesic has an affine parameter. A null tangent vector is tangent to an affinely parameterized null geodesic if $l^b \nabla_b l^a = 0$, which is to say if the vector is unchanged by parallel propagation or transportation in its own direction. The expression for $D l^a$ shows that $l^a$ is tangent to a geodesic if and only if $\kappa = 0$, and is tangent to an affinely parameterized geodesic if in addition $\varepsilon + \bar{\varepsilon} = 0$. Similarly, the expression for $\Delta n^a$ shows that $n^a$ is geodesic if and only if $\nu = 0$, and has affine parameterization when $\gamma + \bar{\gamma} = 0$. (The complex null tetrad vectors $m^a$ and $\bar{m}^a$ would have to be separated into the real spacelike basis vectors $\hat{\theta}^a$ and $\hat{\phi}^a$ before asking if either or both of those are tangent to spacelike geodesics.)

Commutators
The metric-compatibility or torsion-freeness of the covariant derivative is recast into the commutators of the directional derivatives; acting with these commutators on the tetrad vectors yields a further set of implied equations. Note: (i) the implied equations can be regarded either as implications of the commutators or as combinations of the transportation equations; (ii) in these implied equations, the vectors can be replaced by the covectors and the equations still hold.

Weyl–NP and Ricci–NP scalars
The 10 independent components of the Weyl tensor can be encoded into 5 complex Weyl-NP scalars,

$\Psi_0 = C_{abcd}\, l^a m^b l^c m^d, \quad \Psi_1 = C_{abcd}\, l^a n^b l^c m^d, \quad \Psi_2 = C_{abcd}\, l^a m^b \bar{m}^c n^d,$
$\Psi_3 = C_{abcd}\, l^a n^b \bar{m}^c n^d, \quad \Psi_4 = C_{abcd}\, n^a \bar{m}^b n^c \bar{m}^d.$

The 10 independent components of the Ricci tensor are encoded into 4 real scalars $\Phi_{00}$, $\Phi_{11}$, $\Phi_{22}$, $\Lambda$ and 3 complex scalars $\Phi_{01}$, $\Phi_{02}$, $\Phi_{12}$ (with their complex conjugates),

$\Phi_{00} = \tfrac{1}{2} R_{ab} l^a l^b, \quad \Phi_{11} = \tfrac{1}{4} R_{ab} (l^a n^b + m^a \bar{m}^b), \quad \Phi_{22} = \tfrac{1}{2} R_{ab} n^a n^b, \quad \Lambda = \tfrac{R}{24},$
$\Phi_{01} = \tfrac{1}{2} R_{ab} l^a m^b, \quad \Phi_{02} = \tfrac{1}{2} R_{ab} m^a m^b, \quad \Phi_{12} = \tfrac{1}{2} R_{ab} n^a m^b.$

In these definitions, $R_{ab}$ could be replaced by its trace-free part or by the Einstein tensor $G_{ab}$ because of the normalization relations. Also, $\Lambda$ is reduced to $\Lambda = 0$ for electrovacuum ($R = 0$).
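As a concrete illustration of these normalization conditions, the sketch below (an editorial addition, not part of the original article) builds the standard null tetrad for flat Minkowski space in the $\{(-,+,+,+)\}$ convention adopted above and verifies the cross-normalizations and the reconstruction of $g_{ab}$ numerically with NumPy:

import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])          # Minkowski metric, signature (-,+,+,+)
s = 1.0 / np.sqrt(2.0)
l = s * np.array([1, 1, 0, 0], dtype=complex)  # outgoing null vector (t-x plane)
n = s * np.array([1, -1, 0, 0], dtype=complex) # ingoing null vector
m = s * np.array([0, 0, 1, 1j])                # complex combination of two spacelike axes

def dot(u, v):
    return u @ eta @ v                         # inner product u^a eta_ab v^b

# Self-normalizations vanish; cross-normalizations follow the convention above.
assert np.isclose(dot(l, l), 0) and np.isclose(dot(n, n), 0) and np.isclose(dot(m, m), 0)
assert np.isclose(dot(l, n), -1) and np.isclose(dot(m, m.conj()), 1)
assert np.isclose(dot(l, m), 0) and np.isclose(dot(n, m), 0)

# Reconstruct the metric: g_ab = -l_a n_b - n_a l_b + m_a mbar_b + mbar_a m_b
l_, n_, m_ = eta @ l, eta @ n, eta @ m         # lowered indices
g = (-np.outer(l_, n_) - np.outer(n_, l_)
     + np.outer(m_, m_.conj()) + np.outer(m_.conj(), m_))
assert np.allclose(g, eta)                     # recovers the Minkowski metric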
Einstein–Maxwell–NP equations

NP field equations
In a complex null tetrad, the Ricci identities give rise to the NP field equations, which connect the spin coefficients, the Weyl-NP scalars and the Ricci-NP scalars (recall that, in an orthogonal tetrad, the Ricci rotation coefficients would respect Cartan's first and second structure equations). These equations in various notations can be found in several texts. The notation in Frolov and Novikov is identical. Also, the Weyl-NP scalars and the Ricci-NP scalars can be calculated indirectly from the NP field equations after obtaining the spin coefficients, rather than directly using their definitions.

Maxwell–NP scalars, Maxwell equations in NP formalism
The six independent components of the Faraday-Maxwell 2-form (i.e. the electromagnetic field strength tensor $F_{ab}$) can be encoded into three complex Maxwell-NP scalars,

$\phi_0 = F_{ab}\, l^a m^b, \qquad \phi_1 = \tfrac{1}{2} F_{ab} \left( l^a n^b + \bar{m}^a m^b \right), \qquad \phi_2 = F_{ab}\, \bar{m}^a n^b,$

and therefore the eight real Maxwell equations $\mathrm{d}F = 0$ and $\mathrm{d}{\star}F = 0$ (as $F = \mathrm{d}A$) can be transformed into four complex equations, with the Ricci-NP scalars related to the Maxwell scalars by

$\Phi_{ij} = 2\, \phi_i\, \bar{\phi}_j, \qquad i, j \in \{0, 1, 2\}.$

It is worthwhile to point out that the supplementary equation $\Phi_{ij} = 2 \phi_i \bar{\phi}_j$ is only valid for electromagnetic fields; for example, in the case of Yang-Mills fields there will be $\Phi_{ij} = \operatorname{Tr}(\digamma_i \bar{\digamma}_j)$, where $\digamma_i$ are Yang-Mills-NP scalars.

To sum up, the aforementioned transportation equations, NP field equations and Maxwell-NP equations together constitute the Einstein-Maxwell equations in Newman–Penrose formalism.

Applications of the NP formalism to gravitational radiation field
The Weyl scalar $\Psi_4$ was defined by Newman & Penrose as

$\Psi_4 = -C_{\alpha\beta\gamma\delta}\, n^\alpha \bar{m}^\beta n^\gamma \bar{m}^\delta$

(note, however, that the overall sign is arbitrary, and that Newman & Penrose worked with a "timelike" metric signature of $(+,-,-,-)$). In empty space, the Einstein Field Equations reduce to $R_{\alpha\beta} = 0$. From the definition of the Weyl tensor, we see that this means that it equals the Riemann tensor, $C_{\alpha\beta\gamma\delta} = R_{\alpha\beta\gamma\delta}$. We can make the standard choice for the tetrad at infinity:

$l^\mu = \tfrac{1}{\sqrt{2}} \left( \hat{t} + \hat{r} \right), \qquad n^\mu = \tfrac{1}{\sqrt{2}} \left( \hat{t} - \hat{r} \right), \qquad m^\mu = \tfrac{1}{\sqrt{2}} \left( \hat{\theta} + i \hat{\phi} \right).$

In transverse-traceless gauge, a simple calculation shows that linearized gravitational waves are related to components of the Riemann tensor as

$\tfrac{1}{4} \left( \ddot{h}_{\hat{\theta}\hat{\theta}} - \ddot{h}_{\hat{\phi}\hat{\phi}} \right) = -R_{\hat{t}\hat{\theta}\hat{t}\hat{\theta}}, \qquad \tfrac{1}{2} \ddot{h}_{\hat{\theta}\hat{\phi}} = -R_{\hat{t}\hat{\theta}\hat{t}\hat{\phi}},$

assuming propagation in the $\hat{r}$ direction. Combining these, and using the definition of $\Psi_4$ above, we can write

$\Psi_4 = \tfrac{1}{2} \left( \ddot{h}_{\hat{\theta}\hat{\theta}} - \ddot{h}_{\hat{\phi}\hat{\phi}} \right) - i\, \ddot{h}_{\hat{\theta}\hat{\phi}} = \ddot{h}_{+} - i\, \ddot{h}_{\times}.$

Far from a source, in nearly flat space, the fields $h_+$ and $h_\times$ encode everything about gravitational radiation propagating in a given direction. Thus, we see that $\Psi_4$ encodes in a single complex field everything about (outgoing) gravitational waves.

Radiation from a finite source
Using the wave-generation formalism summarised by Thorne, we can write the radiation field quite compactly in terms of the mass multipoles, the current multipoles, and spin-weighted spherical harmonics; schematically,

$\Psi_4 \propto \frac{1}{r} \sum_{l,m} \left[ {}^{(l+2)}I^{lm}(t-r) - i\, {}^{(l+2)}S^{lm}(t-r) \right] {}_{-2}Y_{lm}(\theta, \phi).$

Here, prefixed superscripts indicate time derivatives. That is, we define

$^{(k)}G(t) = \left( \frac{d}{dt} \right)^{k} G(t).$

The components $I^{lm}$ and $S^{lm}$ are the mass and current multipoles, respectively. $_{-2}Y_{lm}$ is the spin-weight −2 spherical harmonic.

See also
Light-cone coordinates
GHP formalism
Tetrad formalism
Goldberg–Sachs theorem

References
Wald treats the more succinct version of the Newman–Penrose formalism in terms of more modern spinor notation.
Hawking and Ellis use the formalism in their discussion of the final state of a collapsing star.

External links
Newman–Penrose formalism on Scholarpedia

Theory of relativity
Mathematical notation
General relativity
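The relation $\Psi_4 = \ddot{h}_{+} - i \ddot{h}_{\times}$ reconstructed above is straightforward to check numerically. The sketch below is illustrative only: the damped-chirp waveform is invented for the demonstration, and the sign convention follows the text above (conventions vary between references).

import numpy as np

# Toy plus/cross polarizations: a made-up chirping, damped sinusoid.
t = np.linspace(0.0, 10.0, 4001)
phase = 2*np.pi*(1.0*t + 0.05*t**2)   # slowly increasing frequency
amp = np.exp(-0.1*t)
h_plus = amp*np.cos(phase)
h_cross = amp*np.sin(phase)

# Second time derivatives via central finite differences.
ddh_plus = np.gradient(np.gradient(h_plus, t), t)
ddh_cross = np.gradient(np.gradient(h_cross, t), t)

# Psi_4 as a single complex field encoding both outgoing polarizations.
psi4 = ddh_plus - 1j*ddh_cross

# Both polarizations are recoverable from psi4 alone.
assert np.allclose(psi4.real, ddh_plus) and np.allclose(-psi4.imag, ddh_cross)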
Newman–Penrose formalism
Physics,Mathematics
2,170
586,931
https://en.wikipedia.org/wiki/Catherine%20Wolfe%20Bruce
Catherine Wolfe Bruce (January 22, 1816, New York – March 13, 1900, New York) was a noted American philanthropist and patron of astronomy.

Early life
Bruce was born on January 22, 1816. She was the daughter of George Bruce (1781–1866), a famous type founder who was born in Edinburgh, and Catherine Wolfe (1785–1861), the daughter of David Wolfe (1748–1836) of New York City. One of five children, her brother was David Wolfe Bruce (1824–1895), who, along with David Wolfe Bishop, inherited the fortune of their cousin, Catharine Lorillard Wolfe.

Career
She studied painting; learned Latin, German, French and Italian; and was familiar with the literature of those languages. In 1890, she wrote and published a translation of the "Dies Irae."

Personal life
Due to an ever-increasing illness, she was confined to her home, and she died on March 13, 1900, at 810 Fifth Avenue in New York City.

Philanthropy
In 1877, she donated $50,000 for the construction of a library building and the purchase of books in memory of her father. The library, known as "The George Bruce Library," was completed in 1888; it was located at 226 West 42nd Street and designed by G. E. Harney. The building was sold in 1913, and the proceeds were used to build the current George Bruce library, located on 125th Street in Harlem and designed by Carrère & Hastings.

An amateur astronomer, she turned to philanthropy in this field at the age of 73, only after reading an article by Simon Newcomb claiming that all the major discoveries in astronomy had already been made. Bruce turned to telescope maker Alvan Graham Clark to see how she could support research in astronomy. She made over 54 gifts to astronomy, totaling over $275,000, between 1889 and 1899. She donated funds to the Harvard College Observatory (U.S.A.), Yerkes Observatory (U.S.A.) and Landessternwarte Heidelberg-Königstuhl (Germany), run by Max Wolf at the time, to buy new telescopes at each of those institutes.

Bruce established the Bruce Medal of the Astronomical Society of the Pacific in recognition of lifetime achievements and contributions to astrophysics; it is one of the most prestigious awards in the field.

Honors
The asteroid 323 Brucia, discovered by Max Wolf, is named after her, as is the crater Bruce on the Moon. She was awarded a gold medal by the Grand Duke of Baden. Astronomer Johann Palisa gave her the honor of naming 313 Chaldaea as a token of the gratitude of astronomers.

References

1816 births
1900 deaths
People associated with astronomy
American people of Scottish descent
Harvard College Observatory people
Catherine Wolfe Bruce
Astronomy
564
563,456
https://en.wikipedia.org/wiki/Methanogen
Methanogens are anaerobic archaea that produce methane as a byproduct of their energy metabolism, i.e., catabolism. Methane production, or methanogenesis, is the only biochemical pathway for ATP generation in methanogens. All known methanogens belong exclusively to the domain Archaea, although some bacteria, plants, and animal cells are also known to produce methane. However, the biochemical pathway for methane production in these organisms differs from that in methanogens and does not contribute to ATP formation. Methanogens belong to various phyla within the domain Archaea. Previous studies placed all known methanogens into the superphylum Euryarchaeota. However, recent phylogenomic data have led to their reclassification into several different phyla. Methanogens are common in various anoxic environments, such as marine and freshwater sediments, wetlands, the digestive tracts of animals, wastewater treatment plants, rice paddy soil, and landfills. While some methanogens are extremophiles, such as Methanopyrus kandleri, which grows between 84 and 110 °C, or Methanonatronarchaeum thermophilum, which grows at a pH range of 8.2 to 10.2 and at salt (Na+) concentrations of 3 to 4.8 M, most of the isolates are mesophilic and grow around neutral pH.

Physical description
Methanogens are usually cocci (spherical) or rods (cylindrical) in shape, but long filaments (Methanobrevibacter filiformis, Methanospirillum hungatei) and curved forms (Methanobrevibacter curvatus, Methanobrevibacter cuticularis) also occur. There are over 150 described species of methanogens, which do not form a monophyletic group (see Taxonomy). They are exclusively anaerobic organisms that cannot function under aerobic conditions, owing to the extreme oxygen sensitivity of the methanogenesis enzymes and of the FeS clusters involved in ATP production. However, the degree of oxygen sensitivity varies, as methanogenesis has often been detected in temporarily oxygenated environments such as rice paddy soil, and various molecular mechanisms potentially involved in oxygen and reactive oxygen species (ROS) detoxification have been proposed. For instance, the recently identified species Candidatus Methanothrix paradoxum, common in wetlands and soil, can function in anoxic microsites within aerobic environments, but it is sensitive to the presence of oxygen even at trace levels and cannot usually sustain oxygen stress for a prolonged time. However, Methanosarcina barkeri, from the sister family Methanosarcinaceae, is exceptional in possessing a superoxide dismutase (SOD) enzyme and may survive longer than the others in the presence of O2. As is the case for other archaea, methanogens lack peptidoglycan, a polymer that is found in the cell walls of bacteria. Instead, some methanogens have a cell wall formed by pseudopeptidoglycan (also known as pseudomurein). Other methanogens have a paracrystalline protein array (S-layer) that fits together like a jigsaw puzzle. In some lineages there are less common types of cell envelope, such as the proteinaceous sheath of Methanospirillum or the methanochondroitin of aggregated Methanosarcina cells.

Ecology
In anaerobic environments, methanogens play a vital ecological role, removing excess hydrogen and fermentation products that have been produced by other forms of anaerobic respiration. Methanogens typically thrive in environments in which all electron acceptors other than CO2 (such as oxygen, nitrate, ferric iron (Fe(III)), and sulfate) have been depleted.
Such environments include wetlands and rice paddy soil, the digestive tracts of various animals (ruminants, arthropods, humans), wastewater treatment plants and landfills, deep-water oceanic sediments, and hydrothermal vents. Most of these environments are not categorized as extreme, and thus the methanogens inhabiting them are also not considered extremophiles. However, many well-studied methanogens are thermophiles, such as Methanopyrus kandleri, Methanothermobacter marburgensis, and Methanocaldococcus jannaschii. On the other hand, gut methanogens such as Methanobrevibacter smithii, common in humans, or Methanobrevibacter ruminantium, omnipresent in ruminants, are mesophiles.

Methanogens in extreme environments
In deep basaltic rocks near the mid-ocean ridges, methanogens can obtain their hydrogen from the serpentinization reaction of olivine, as observed in the hydrothermal field of Lost City. The thermal breakdown of water and water radiolysis are other possible sources of hydrogen. Methanogens are key agents of remineralization of organic carbon in continental margin sediments and other aquatic sediments with high rates of sedimentation and high sediment organic matter. Under the correct conditions of pressure and temperature, biogenic methane can accumulate in massive deposits of methane clathrates that account for a significant fraction of organic carbon in continental margin sediments and represent a key reservoir of a potent greenhouse gas.

Methanogens have been found in several extreme environments on Earth – buried under kilometres of ice in Greenland and living in hot, dry desert soil. They are known to be the most common archaea in deep subterranean habitats. Live microbes making methane were found in a glacial ice core sample retrieved from about three kilometres under Greenland by researchers from the University of California, Berkeley. They also found a constant metabolism able to repair macromolecular damage, at temperatures of 145 to –40 °C. Another study also discovered methanogens in a harsh environment on Earth: researchers studied dozens of soil and vapour samples from five different desert environments in Utah, Idaho and California in the United States, and in Canada and Chile. Of these, five soil samples and three vapour samples from the vicinity of the Mars Desert Research Station in Utah were found to have signs of viable methanogens.

Some scientists have proposed that the presence of methane in the Martian atmosphere may be indicative of native methanogens on that planet. In June 2019, NASA's Curiosity rover detected methane, a gas commonly generated by underground microbes such as methanogens, which signals the possibility of life on Mars.

Closely related to the methanogens are the anaerobic methane oxidizers, which utilize methane as a substrate in conjunction with the reduction of sulfate and nitrate. Most methanogens are autotrophic producers, but those that oxidize CH3COO− are classed as chemotrophs instead.

Methanogens in the digestive tract of animals
The digestive tract of animals is characterized by a nutrient-rich and predominantly anaerobic environment, making it an ideal habitat for many microbes, including methanogens. Despite this, methanogens and archaea in general were largely overlooked as part of the gut microbiota until recently. However, they play a crucial role in maintaining gut balance by utilizing end products of bacterial fermentation, such as H2, acetate, methanol, and methylamines.
Recent extensive surveys of archaeal presence in the animal gut, based on 16S rRNA analysis, have provided a comprehensive view of archaeal diversity and abundance. These studies revealed that only a few archaeal lineages are present, with the majority being methanogens, while non-methanogenic archaea are rare and not abundant. Taxonomic classification of archaeal diversity identified that representatives of only three phyla are present in the digestive tracts of animals: Methanobacteriota (order Methanobacteriales), Thermoplasmatota (order Methanomassiliicoccales), and Halobacteriota (orders Methanomicrobiales and Methanosarcinales). However, not all families and genera within these orders were detected in animal guts, but only a few genera, suggesting their specific adaptations to the gut environment.

Comparative genomics and molecular signatures
Comparative proteomic analysis has led to the identification of 31 signature proteins which are specific to methanogens (also known as Methanoarchaeota). Most of these proteins are related to methanogenesis, and they could serve as potential molecular markers for methanogens. Additionally, ten proteins found in all methanogens and shared by Archaeoglobus suggest that these two groups are related. In phylogenetic trees, methanogens are not monophyletic, and they are generally split into three clades. Hence, the unique shared presence of large numbers of proteins by all methanogens could be due to lateral gene transfers. Additionally, more recently discovered novel proteins associated with sulfide trafficking have been linked to methanogen archaea. More proteomic analysis is needed to further differentiate specific genera within the methanogens and reveal novel pathways for methanogenic metabolism.

Modern DNA and RNA sequencing approaches have elucidated several genomic markers specific to particular groups of methanogens. One such study isolated nine methanogens from the genus Methanoculleus and found that all nine genomes contained at least two trehalose synthase genes. Thus far, these genes have been observed only in this genus, so they can be used as a marker to identify the genus Methanoculleus. As sequencing techniques progress and databases become populated with an abundance of genomic data, a greater number of strains and traits can be identified, but many genera have remained understudied. For example, halophilic methanogens are potentially important microbes for carbon cycling in coastal wetland ecosystems but seem to be greatly understudied. One recent publication isolated a novel strain from the genus Methanohalophilus that resides in sulfide-rich seawater. Interestingly, several portions of this strain's genome differ from those of other isolated strains of this genus (Methanohalophilus mahii, Methanohalophilus halophilus, Methanohalophilus portucalensis, Methanohalophilus euhalbius). Some differences include a highly conserved genome, sulfur and glycogen metabolisms, and viral resistance.

Genomic markers consistent with the microbes' environment have been observed in many other cases. One such study found that methane-producing archaea found in hydraulic fracturing zones had genomes which varied with vertical depth. Subsurface and surface genomes varied along with the constraints found in individual depth zones, though fine-scale diversity was also found in this study. Genomic markers pointing at environmentally relevant factors are often non-exclusive.
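The marker-gene logic described above (genes present in every genome of a target group and absent elsewhere) amounts to simple set operations. The sketch below is a hypothetical illustration: the genome contents and gene-family names are invented for the example, not taken from the studies cited.

# Hypothetical gene inventories: each genome is modeled as a set of gene-family IDs.
target_group = {                       # e.g. isolates of one methanogen genus
    "isolate_A": {"mcrA", "treS_1", "treS_2", "fwdB"},
    "isolate_B": {"mcrA", "treS_1", "treS_2", "mtrA"},
}
outgroup = {                           # genomes from outside the group
    "other_1": {"mcrA", "fwdB"},
    "other_2": {"mcrA", "mtrA"},
}

# Candidate signature genes: shared by all target genomes, absent from all others.
core = set.intersection(*target_group.values())
elsewhere = set.union(*outgroup.values())
signature = core - elsewhere
print(sorted(signature))               # ['treS_1', 'treS_2']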
A survey of methanogenic Thermoplasmata found these organisms in human and animal intestinal tracts. This novel lineage was also found in other methanogenic environments, such as wetland soils, though the group isolated from wetlands tended to have a larger number of genes encoding anti-oxidation enzymes that were not present in the same group isolated from human and animal intestinal tracts. A common issue with identifying and discovering novel species of methanogens is that the genomic differences can sometimes be quite small, yet the research group decides they are different enough to separate into individual species. One study took a group of Methanocellales and ran a comparative genomic study. The three strains were originally considered identical, but a detailed approach to genomic isolation showed differences among their previously considered identical genomes. Differences were seen in gene copy number, and there was also metabolic diversity associated with the genomic information.

Genomic signatures not only allow one to mark unique methanogens and genes relevant to environmental conditions; they have also led to a better understanding of the evolution of these archaea. Some methanogens must actively mitigate against oxic environments. Functional genes involved with the production of antioxidants have been found in methanogens, and some specific groups tend to have an enrichment of this genomic feature. Methanogens containing a genome with enriched antioxidant properties may provide evidence that this genomic addition occurred during the Great Oxygenation Event. In another study, three strains from the lineage Thermoplasmatales isolated from animal gastrointestinal tracts revealed evolutionary differences. The eukaryotic-like histone gene, which is present in most methanogen genomes, was not present, suggesting that an ancestral branch was lost within Thermoplasmatales and related lineages. Furthermore, the group Methanomassiliicoccus has a genome which appears to have lost many common genes coding for the first several steps of methanogenesis. These genes appear to have been replaced by genes coding for a novel methyl-reducing methanogenic pathway. This pathway has been reported in several types of environments, pointing to non-environment-specific evolution, and may point to an ancestral deviation.

Metabolism

Methane production
Methanogens are known to produce methane from substrates such as H2/CO2, acetate, formate, methanol and methylamines in a process called methanogenesis. Different methanogenic reactions are catalyzed by unique sets of enzymes and coenzymes. While reaction mechanism and energetics vary between one reaction and another, all of these reactions contribute to net positive energy production by creating ion concentration gradients that are used to drive ATP synthesis. The overall reaction for H2/CO2 methanogenesis is:

CO2 + 4 H2 -> CH4 + 2 H2O (∆G˚’ = −134 kJ/mol CH4)

Well-studied organisms that produce methane via H2/CO2 methanogenesis include Methanosarcina barkeri, Methanobacterium thermoautotrophicum, and Methanobacterium wolfei. These organisms are typically found in anaerobic environments. In the earliest stage of H2/CO2 methanogenesis, CO2 binds to methanofuran (MF) and is reduced to formyl-MF. This endergonic reductive process (∆G˚’ = +16 kJ/mol) is dependent on the availability of H2 and is catalyzed by the enzyme formyl-MF dehydrogenase.
CO2 + H2 + MF -> HCO-MF + H2O

The formyl constituent of formyl-MF is then transferred to the coenzyme tetrahydromethanopterin (H4MPT) in a reaction catalyzed by a soluble enzyme known as formyltransferase. This results in the formation of formyl-H4MPT.

HCO-MF + H4MPT -> HCO-H4MPT + MF

Formyl-H4MPT is subsequently reduced to methenyl-H4MPT. Methenyl-H4MPT then undergoes a one-step hydrolysis followed by a two-step reduction to methyl-H4MPT. The two-step reversible reduction is assisted by coenzyme F420, whose hydride acceptor spontaneously oxidizes. Once oxidized, F420's electron supply is replenished by accepting electrons from H2. This step is catalyzed by methylene-H4MPT dehydrogenase.

HCO-H4MPT + H+ -> CH-H4MPT+ + H2O (formyl-H4MPT reduction)
CH-H4MPT+ + F420H2 -> CH2=H4MPT + F420 + H+ (methenyl-H4MPT hydrolysis)
CH2=H4MPT + H2 -> CH3-H4MPT + H+ (H4MPT reduction)

Next, the methyl group of methyl-H4MPT is transferred to coenzyme M via a methyltransferase-catalyzed reaction.

CH3-H4MPT + HS-CoM -> CH3-S-CoM + H4MPT

The final step of H2/CO2 methanogenesis involves methyl-coenzyme M reductase and two coenzymes: N-7 mercaptoheptanoylthreonine phosphate (HS-HTP) and coenzyme F430. HS-HTP donates electrons to methyl-coenzyme M, allowing the formation of methane and the mixed disulfide of HS-CoM. F430, on the other hand, serves as a prosthetic group to the reductase. H2 donates electrons to the mixed disulfide of HS-CoM and regenerates coenzyme M.

CH3-S-CoM + HS-HTP -> CH4 + CoM-S-S-HTP (formation of methane)
CoM-S-S-HTP + H2 -> HS-CoM + HS-HTP (regeneration of coenzyme M)

Biotechnological application

Wastewater treatment
Methanogens are widely used in anaerobic digestors to treat wastewater as well as aqueous organic pollutants. Industries have selected methanogens for their ability to perform biomethanation during wastewater decomposition, thereby rendering the process sustainable and cost-effective.

Bio-decomposition in the anaerobic digester involves a four-staged cooperative action performed by different microorganisms. The first stage is the hydrolysis of insoluble polymerized organic matter by anaerobes such as Streptococcus and Enterobacterium. In the second stage, acidogens break down dissolved organic pollutants in wastewater to fatty acids. In the third stage, acetogens convert fatty acids to acetates. In the final stage, methanogens metabolize acetates to gaseous methane. The byproduct methane leaves the aqueous layer and serves as an energy source to power wastewater processing within the digestor, thus generating a self-sustaining mechanism.

Methanogens also effectively decrease the concentration of organic matter in wastewater run-off. For instance, agricultural wastewater, highly rich in organic material, has been a major cause of aquatic ecosystem degradation. The chemical imbalances can lead to severe ramifications such as eutrophication. Through anaerobic digestion, the purification of wastewater can prevent unexpected blooms in water systems as well as trap methanogenesis within digesters. This allocates biomethane for energy production and prevents a potent greenhouse gas, methane, from being released into the atmosphere.

The organic components of wastewater vary vastly. Chemical structures of the organic matter select for specific methanogens to perform anaerobic digestion. For example, members of the genus Methanosaeta dominate the digestion of palm oil mill effluent (POME) and brewery waste.
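Whether the overall H2/CO2 reaction given above can support growth in a digester depends on ambient conditions, since the actual free-energy yield shifts with reactant levels according to ΔG = ΔG°′ + RT ln Q. The sketch below is illustrative only: the partial pressures are invented round numbers of the kind reported for anaerobic digesters, and gas partial pressures (in atm) are used as crude stand-ins for activities.

import math

# CO2 + 4 H2 -> CH4 + 2 H2O, with dG0' = -134 kJ/mol CH4 (from the text above)
R = 8.314        # gas constant, J/(mol*K)
T = 298.15       # temperature, K
dG0 = -134e3     # standard free energy, J/mol

# Illustrative, made-up conditions: H2 is kept very low by methanogen consumption.
p_CH4, p_CO2, p_H2 = 0.5, 0.3, 1e-5   # atm; water activity taken as 1

Q = p_CH4 / (p_CO2 * p_H2**4)         # reaction quotient
dG = dG0 + R * T * math.log(Q)
print(f"dG = {dG/1e3:.1f} kJ/mol CH4")  # ~ -18 kJ/mol: still favorable, but barely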
Modernizing wastewater treatment systems to incorporate a higher diversity of microorganisms, and so decrease the organic content of the treated water, is under active research in the fields of microbiological and chemical engineering. Current new generations of staged multi-phase anaerobic reactors and upflow sludge bed reactor systems are designed with innovative features to counter high-loading wastewater input, extreme temperature conditions, and possible inhibitory compounds.

Taxonomy
Initially, methanogens were considered to be bacteria, as it was not possible to distinguish archaea from bacteria before the introduction of molecular techniques such as DNA sequencing and PCR. Since the introduction of the domain Archaea by Carl Woese in 1977, methanogens were for a prolonged period considered a monophyletic group, later named the Euryarchaeota (super)phylum. However, intensive studies of various environments have shown more and more non-methanogenic lineages among the methanogenic ones. The development of genome sequencing directly from environmental samples (metagenomics) allowed the discovery of the first methanogens outside the Euryarchaeota superphylum. The first such putative methanogenic lineage was Bathyarchaeia, a class within the Thermoproteota phylum. Later, it was shown that this lineage is not methanogenic but alkane-oxidizing, utilizing the highly divergent enzyme Acr, similar to the hallmark gene of methanogenesis, methyl-CoM reductase (McrABG). The first isolate, Bathyarchaeum tardum, from the sediment of a coastal lake in Russia, showed that it metabolizes aromatic compounds and proteins, as previously predicted on the basis of metagenomic studies. However, more new putative methanogens outside of Euryarchaeota were discovered based on the presence of McrABG. For instance, methanogens were found in the phyla Thermoproteota (orders Methanomethyliales, Korarchaeales, Methanohydrogenales, Nezhaarchaeales) and Methanobacteriota_B (order Methanofastidiosales). Additionally, some new lineages of methanogens were isolated in pure culture, which allowed the discovery of a new type of methanogenesis: H2-dependent methyl-reducing methanogenesis, which is independent of the Wood-Ljungdahl pathway. For example, in 2012, the order Methanoplasmatales from the phylum Thermoplasmatota was described as a seventh order of methanogens. Later, the order was renamed Methanomassiliicoccales, based on Methanomassiliicoccus luminyensis, isolated from the human gut. Another new lineage in the Halobacteriota phylum, the order Methanonatronarchaeales, was discovered in alkaline saline lakes in Siberia in 2017. It also employs H2-dependent methyl-reducing methanogenesis but, intriguingly, harbors almost the full Wood-Ljungdahl pathway; however, the pathway is disconnected from McrABG, as no MtrA-H complex was detected.

The taxonomy of methanogens reflects the evolution of these archaea, with some studies suggesting that the Last Archaeal Common Ancestor was methanogenic. If correct, this suggests that many archaeal lineages lost the ability to produce methane and switched to other types of metabolism. Currently, most of the isolated methanogens belong to one of three archaeal phyla (classification GTDB release 220): Halobacteriota, Methanobacteriota, and Thermoplasmatota. Under the International Code of Nomenclature for Prokaryotes, all three phyla belong to the same kingdom, Methanobacteriati. In total, more than 150 methanogen species are known in culture, with some represented by more than one strain.
Phylum Halobacteriota
Class Methanocellia
Order Methanocellales
Family Methanocellaceae
Genus Methanocella Sakai et al. 2008: Methanocella paludicola Sakai et al. 2008 (type species); Methanocella arvoryzae Sakai et al. 2010; Methanocella conradii Lü and Lu 2012
Class Methanomicrobia
Order Methanomicrobiales
Family Methanocalculaceae Zhilina et al. 2014
Family Methanocorpusculaceae Zellner et al. 1989
Genus Methanocorpusculum Zellner et al. 1988: Methanocorpusculum parvum Zellner et al. 1988 (type species); Methanocorpusculum bavaricum Zellner et al. 1989; Methanocorpusculum labreanum; Methanocorpusculum sinense Zellner et al. 1989
Family Methanomicrobiaceae Balch and Wolfe 1981
Genus Methanomicrobium Balch and Wolfe 1981: Methanomicrobium mobile (Paynter and Hungate 1968) Balch and Wolfe 1981 (type species); Methanomicrobium antiquum Mochimaru et al. 2016
Genus Methanoculleus Maestrojuán et al. 1990: Methanoculleus bourgensis corrig. (Ollivier et al. 1986) Maestrojuán et al. 1990 (type species); Methanoculleus chikugoensis Dianou et al. 2001; Methanoculleus horonobensis Shimizu et al. 2013; Methanoculleus hydrogenitrophicus Tian et al. 2010; Methanoculleus marisnigri; Methanoculleus palmolei Zellner et al. 1998; Methanoculleus receptaculi Cheng et al. 2008; Methanoculleus sediminis Chen et al. 2015; Methanoculleus submarinus Mikucki et al. 2003; Methanoculleus taiwanensis Weng et al. 2015; Methanoculleus thermophilus corrig. (Rivard and Smith 1982) Maestrojuán et al. 1990
Genus Methanogenium Romesser et al. 1981: Methanogenium cariaci Romesser et al. 1981 (type species); Methanogenium frigidum; Methanogenium marinum Chong et al. 2003; Methanogenium organophilum
Genus Methanofollis Zellner et al. 1999: Methanofollis tationis (Zabel et al. 1986) Zellner et al. 1999 (type species); Methanofollis aquaemaris Lai and Chen 2001; Methanofollis ethanolicus Imachi et al. 2009; Methanofollis fontis Chen et al. 2020; Methanofollis formosanus Wu et al. 2005; Methanofollis liminatans (Zellner et al. 1990) Zellner et al. 1999
Family Methanoregulaceae Sakai et al. 2012
Genus Methanoregula Bräuer et al. 2011: Methanoregula boonei Bräuer et al. 2011 (type species); Methanoregula formicica Yashiro et al. 2011
Family Methanospirillaceae Boone et al. 2002
Genus Methanospirillum Ferry et al. 1974: Methanospirillum hungatei corrig. Ferry et al. 1974 (type species); Methanospirillum lacunae Iino et al. 2010; Methanospirillum psychrodurum Zhou et al. 2014; Methanospirillum stamsii Parshina et al. 2014
Class Methanonatronarchaeia
Order Methanonatronarchaeales
Family Methanonatronarchaeaceae Sorokin et al. 2018
Genus Methanonatronarchaeum Sorokin et al. 2018: Methanonatronarchaeum thermophilum Sorokin et al. 2018 (type species)
Class Methanosarcinia
Order Methanosarcinales
Family Methanosarcinaceae
Genus Methanosarcina Kluyver and van Niel 1936: Methanosarcina barkeri Schnellen 1947 (type species); Methanosarcina acetivorans; Methanosarcina baltica von Klein et al. 2002; Methanosarcina flavescens Kern et al. 2016; Methanosarcina horonobensis Shimizu et al. 2011; Methanosarcina lacustris Simankova et al. 2002; Methanosarcina mazei (Barker 1936) Mah and Kuhn 1984; Methanosarcina semesiae Lyimo et al. 2000; Methanosarcina siciliae (Stetter and König 1989) Ni et al. 1994; Methanosarcina soligelidi Wagner et al. 2013; Methanosarcina spelaei Ganzert et al. 2014; Methanosarcina subterranea Shimizu et al. 2015; Methanosarcina thermophila Zinder et al. 1985; Methanosarcina vacuolata Zhilina and Zavarzin 1987
Genus Methanimicrococcus corrig. Sprenger et al. 2000: Methanimicrococcus blatticola corrig. Sprenger et al. 2000
Genus Methanococcoides Sowers and Ferry 1985: Methanococcoides methylutens Sowers and Ferry 1985 (type species); Methanococcoides alaskense Singh et al. 2005; Methanococcoides burtonii Franzmann et al. 1993; Methanococcoides orientis Liang et al. 2022; Methanococcoides vulcani L'Haridon et al. 2014
Genus Methanohalobium Zhilina and Zavarzin 1988: Methanohalobium evestigatum corrig. Zhilina and Zavarzin 1988 (type species)
Genus Methanohalophilus Paterek and Smith 1988: Methanohalophilus mahii Paterek and Smith 1988 (type species); Methanohalophilus halophilus (Zhilina 1984) Wilharm et al. 1991; Methanohalophilus levihalophilus Katayama et al. 2014; Methanohalophilus portucalensis Boone et al. 1993; Methanohalophilus profundi L'Haridon et al. 2021
Genus Methanolobus König and Stetter 1983: Methanolobus tindarius König and Stetter 1983 (type species); Methanolobus bombayensis Kadam et al. 1994; Methanolobus chelungpuianus Wu and Lai 2015; Methanolobus halotolerans Shen et al. 2020; Methanolobus mangrovi Zhou et al. 2023; Methanolobus oregonensis (Liu et al. 1990) Boone 2002; Methanolobus profundi Mochimaru et al. 2009; Methanolobus psychrotolerans Chen et al. 2018; Methanolobus sediminis Zhou et al. 2023; Methanolobus taylorii Oremland and Boone 1994; Methanolobus vulcani Stetter et al. 1989; Methanolobus zinderi Doerfert et al. 2009
Genus Methanomethylovorans Lomans et al. 2004: Methanomethylovorans hollandica Lomans et al. 2004 (type species); Methanomethylovorans thermophila Jiang et al. 2005; Methanomethylovorans uponensis Cha et al. 2014
Genus Methanosalsum Boone and Baker 2002: Methanosalsum zhilinae (Mathrani et al. 1988) Boone and Baker 2002 (type species); Methanosalsum natronophilum Sorokin et al. 2015
Family Methanotrichaceae
Genus Methanothrix Huser et al. 1983: Methanothrix soehngenii Huser et al. 1983 (type species); Methanothrix harundinacea (Ma et al. 2006) Akinyemi et al. 2021; Methanothrix thermoacetophila corrig. Nozhevnikova and Chudina 1988; "Candidatus Methanothrix paradoxa" corrig. Angle et al. 2017
Family Methermicoccaceae
Genus Methermicoccus Cheng et al. 2007: Methermicoccus shengliensis Cheng et al. 2007 (type species)
Phylum Methanobacteriota
Class Methanobacteria
Order Methanobacteriales
Family Methanobacteriaceae
Genus Methanobacterium Kluyver and van Niel 1936: Methanobacterium formicicum Schnellen 1947 (type species); Methanobacterium bryantii
Genus Methanobrevibacter Balch and Wolfe 1981: Methanobrevibacter ruminantium (Smith and Hungate 1958) Balch and Wolfe 1981 (type species); Methanobrevibacter acididurans Savant et al. 2002; Methanobrevibacter arboriphilus corrig. (Zeikus and Henning 1975) Balch and Wolfe 1981; Methanobrevibacter boviskoreani Lee et al. 2013; Methanobrevibacter curvatus Leadbetter and Breznak 1997; Methanobrevibacter cuticularis Leadbetter and Breznak 1997; Methanobrevibacter filiformis Leadbetter et al. 1998; Methanobrevibacter gottschalkii Miller and Lin 2002; Methanobrevibacter millerae Rea et al. 2007; Methanobrevibacter olleyae Rea et al. 2007; Methanobrevibacter oralis Ferrari et al. 1995; Methanobrevibacter smithii Balch and Wolfe 1981; Methanobrevibacter thaueri Miller and Lin 2002; Methanobrevibacter woesei Miller and Lin 2002; Methanobrevibacter wolinii Miller and Lin 2002; "Methanobrevibacter massiliense" Huynh et al. 2015; "Candidatus Methanobrevibacter intestini" Chibani et al. 2022
Genus Methanosphaera Miller and Wolin 1985: Methanosphaera stadtmanae corrig. Miller and Wolin 1985 (type species); Methanosphaera cuniculi Biavati et al. 1990
Genus Methanothermobacter Wasserfallen et al. 2000: Methanothermobacter thermautotrophicus corrig. (Zeikus and Wolfe 1972) Wasserfallen et al. 2000 (type species); Methanothermobacter crinale Cheng et al. 2012; Methanothermobacter defluvii (Kotelnikova et al. 1994) Boone 2002; Methanothermobacter marburgensis Wasserfallen et al. 2000; Methanothermobacter tenebrarum Nakamura et al. 2013; Methanothermobacter thermoflexus (Kotelnikova et al. 1994) Boone 2002; Methanothermobacter thermophilus (Laurinavichus et al. 1990) Boone 2002; Methanothermobacter wolfei corrig. (Winter et al. 1985) Wasserfallen et al. 2000
Family Methanothermaceae
Genus Methanothermus Stetter 1982: Methanothermus fervidus Stetter 1982 (type species)
Class Methanopyri
Order Methanopyrales
Family Methanopyraceae
Genus Methanopyrus Kurr et al. 1992: Methanopyrus kandleri Kurr et al. 1992 (type species)
Class Methanococci
Order Methanococcales
Family Methanococcaceae Balch and Wolfe 1981
Genus Methanococcus Kluyver and van Niel 1936: Methanococcus vannielii Stadtman and Barker 1951 (type species); Methanococcus aeolicus; Methanococcus burtonii; Methanococcus chunghsingensis; Methanococcus deltae; Methanococcus jannaschii; Methanococcus maripaludis
Genus Methanofervidicoccus: Methanofervidicoccus abyssi Sakai et al. 2019 (type species)
Genus Methanothermococcus: Methanothermococcus thermolithotrophicus (Huber et al. 1984) Whitman 2002 (type species)
Family Methanocaldococcaceae
Genus Methanocaldococcus: Methanocaldococcus jannaschii (Jones et al. 1984) Whitman 2002 (type species)
Genus Methanotorris: Methanotorris igneus (Burggraf et al. 1990) Whitman 2002 (type species)
Phylum Thermoplasmatota
Class Thermoplasmata
Order Methanomassiliicoccales
Family Methanomassiliicoccaceae
Genus Methanomassiliicoccus Dridi et al. 2012: Methanomassiliicoccus luminyensis Dridi et al. 2012 (type species)
Family Methanomethylophilaceae
Genus Methanomethylophilus Borrel et al. 2024: Methanomethylophilus alvi Borrel et al. 2024 (type species)

See also
Extremophile
Hydrogen cycle
Kraken Mare
List of Archaea genera
Methane clathrate
Methanogens in digestive tract of ruminants
Methanopyrus
Methanotroph

References

Anaerobic digestion
Archaea biology
Environmental microbiology
Methanogen
Chemistry,Engineering,Biology,Environmental_science
7,731
76,788,845
https://en.wikipedia.org/wiki/Obligate%20mutualism
Obligate mutualism is a special case of mutualism where an ecological interaction between species mutually benefits each other, and one or all species are unable to survive without the other. In some obligate relationships, only one species is dependent on the relationship. For example, a parasite may require a host in order to reproduce and survive, while the host does not depend at all on the parasite. Figs and fig wasps are an example of a co-obligate relationship, where both species are totally dependent on the relationship: the fig plant is entirely dependent on the fig wasp for pollination, and the fig wasp requires the fig plant for reproductive purposes. Many insect-fungi relationships are also co-obligate: the insect disperses, and in some cases protects, the fungi, while the fungi provide nutrients for the insects. This interaction allows insects and fungi to, as a group, inhabit previously inhospitable or unreachable environments. Though obligate relationships need not be limited to two species, they are often discussed as such, with the relationship being made up of a host and a symbiont, though the terms are often attributed arbitrarily.

Evolution of obligate mutualism
Obligate mutualistic relationships, where species are entirely dependent on each other for survival, can evolve through different pathways. In some cases, a free-living symbiont may be engulfed by a host organism and subsequently passed down through vertical transmission, resulting in an obligate dependency. However, it is more common for facultative mutualisms, where the mutualist can exist independently or in association with a host, to act as an intermediary step toward the evolution of obligate or co-obligate mutualism. In this second case, the evolution of obligate mutualism can be divided into three steps: formation, maintenance, and transformation.

Formation
The formation of the facultative mutualism requires that the species involved all benefit from their mutual cooperation. This mutualism, though it is to the benefit of the species involved, is best understood as co-exploitation: facultative mutualism occurs when species' interests align, so that each may reciprocally exploit the other to the benefit of both.

Maintenance
In order for facultative relationships to turn into obligate relationships, the facultative mutualism must be maintained and continued across generations. There are two methods for the relationship to be carried through generations: vertical and horizontal transmission. Vertical transmission involves the passage of symbionts from parent to offspring hosts. Horizontal transmission involves the passage of symbionts between unrelated hosts. It is proposed that vertical transmission makes for a more stable relationship, because in vertical transmission a host is paired with the same symbiont in every generation, and thus the host and symbiont have more chance to co-adapt. In vertical transmission, the hosts and symbionts also share reproductive fate, and therefore both suffer from cheating. A cheater is a mutualist who takes more from the relationship than it gives. An extreme example would be an organism that gains from a relationship without giving anything, such as an insect that feeds on nectar without contributing to pollination. Cheaters are thought to destabilize mutualistic relationships, both when they arrive as a third exploitative party and when they result as mutants within pre-existing mutualistic relationships.
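The destabilizing effect of cheaters can be illustrated with a deliberately simple replicator-style model. The sketch below is hypothetical: the functional form and parameter values are invented for illustration and are not drawn from the literature. Because cheaters save the cost of provisioning the host, under purely horizontal transmission their frequency rises and the service delivered to hosts collapses.

# Toy model: q = frequency of cheaters in the symbiont pool.
# Mutualists pay a provisioning cost c; cheaters do not, giving them
# a within-generation relative fitness of (1 + c) versus 1.
c = 0.05          # cost saved by cheating (invented value)
q = 0.01          # initial cheater frequency (invented value)
benefit = 1.0     # host benefit per unit of symbiont service (invented value)

for generation in range(1, 201):
    q = q * (1 + c) / (1 + c * q)        # replicator update
    if generation % 50 == 0:
        host_benefit = benefit * (1 - q)  # service scales with mutualist share
        print(f"gen {generation:3d}: cheater freq {q:.2f}, host benefit {host_benefit:.2f}")

# Vertical transmission couples symbiont success to host success, which can
# penalize cheating; this is one proposed route to stable obligate mutualism.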
Horizontal transmission, where there can be multiple symbionts, can result in competition between symbionts and exploitation of the host. There are nevertheless many obligate relationships involving horizontal transmission, and it has also been found that mutualist/exploiter co-existence is not uncommon. Cheaters often exist alongside mutualistic relationships, and in obligate mutualism the presence of third-party exploiters early in the formation of the relationship may protect the host-symbiont relationship from further exploitation later on.

Transformation
Once a mutualistic group has reached a point of stability, where both species are benefiting and there is not a destabilizing problem with cheaters, the third stage, transformation, can occur. In this stage, the mutualists lose the ability to survive independently of one another and thus form a new superorganism. In this case, each symbiont has become so specialized within the mutualistic group that it is now fully dependent on the relationship. Physiological and behavioral changes can evolve as consequences of obligate dependency. In insect-fungi mutualistic groups, for example, fungal spore-carrying organs in insects and the production of increasingly nutrient-rich, asexually reproductive spores in fungi appear as part of the co-obligate relationship. In the fig and wasp co-obligate relationship, female wasps have developed morphological traits, such as elongated heads and easily detachable antennae and wings, that allow them to enter the fig ostiole, lay eggs and collect pollen; likewise, as the fig matures it produces nourishment for the wasp larvae.

Evolutionary consequences
Obligate dependency links the evolutionary fates of the organisms involved, and this coupling has the potential to result in both negative and positive consequences. The coupling can enhance the ability of the organisms to evolve, because natural selection can influence two genomes at once, meaning there are more opportunities for a mutation to positively impact both species. It also has the potential to negatively affect species evolution by limiting the ability of one species to react to environmental selective pressures, tying the organism with the higher fitness to an organism with now lower fitness; this is called the weakest link hypothesis.

Studying obligate mutualism
Understanding how obligate dependency affects the evolution of the involved species, as well as being able to properly identify and understand obligate relationships, is important for predicting, and perhaps guarding against, the impacts of climate change on ecological communities. It is not easy to identify obligate species or the number of species involved in obligate relationships, as hosts and symbionts lose and gain traits in their relationship, making it hard to determine their taxonomic relationships with other species. Studying obligate relationships is also difficult, as they do not respond well to experimental interference.

References

Mutualism (biology)
Biological interactions
Ethology
Obligate mutualism
Biology
1,261
8,015,680
https://en.wikipedia.org/wiki/Monadic%20predicate%20calculus
In logic, the monadic predicate calculus (also called monadic first-order logic) is the fragment of first-order logic in which all relation symbols in the signature are monadic (that is, they take only one argument), and there are no function symbols. All atomic formulas are thus of the form P(x), where P is a relation symbol and x is a variable. Monadic predicate calculus can be contrasted with polyadic predicate calculus, which allows relation symbols that take two or more arguments. Expressiveness The absence of polyadic relation symbols severely restricts what can be expressed in the monadic predicate calculus. It is so weak that, unlike the full predicate calculus, it is decidable—there is a decision procedure that determines whether a given formula of monadic predicate calculus is logically valid (true for all nonempty domains). Adding a single binary relation symbol to monadic logic, however, results in an undecidable logic. Relationship with term logic The need to go beyond monadic logic was not appreciated until the work on the logic of relations, by Augustus De Morgan and Charles Sanders Peirce in the nineteenth century, and by Frege in his 1879 Begriffsschrift. Prior to the work of these three, term logic (syllogistic logic) was widely considered adequate for formal deductive reasoning. Inferences in term logic can all be represented in the monadic predicate calculus. For example, the argument All dogs are mammals. No mammal is a bird. Thus, no dog is a bird. can be notated in the language of monadic predicate calculus as (∀x (D(x) → M(x)) ∧ ∀x (M(x) → ¬B(x))) → ∀x (D(x) → ¬B(x)), where D, M, and B denote the predicates of being, respectively, a dog, a mammal, and a bird. Conversely, monadic predicate calculus is not significantly more expressive than term logic. Each formula in the monadic predicate calculus is equivalent to a formula in which quantifiers appear only in closed subformulas of the form ∀x φ(x) or ∃x φ(x), where φ contains no quantifiers. These formulas slightly generalize the basic judgements considered in term logic. For example, this form allows statements such as "Every mammal is either a herbivore or a carnivore (or both)", ∀x (M(x) → (H(x) ∨ C(x))). Reasoning about such statements can, however, still be handled within the framework of term logic, although not by the 19 classical Aristotelian syllogisms alone. Taking propositional logic as given, every formula in the monadic predicate calculus expresses something that can likewise be formulated in term logic. On the other hand, a modern view of the problem of multiple generality in traditional logic concludes that quantifiers cannot nest usefully if there are no polyadic predicates to relate the bound variables. Variants The formal system described above is sometimes called the pure monadic predicate calculus, where "pure" signifies the absence of function symbols. Allowing monadic function symbols changes the logic only superficially, whereas admitting even a single binary function symbol results in an undecidable logic. Monadic second-order logic allows predicates of higher arity in formulas, but restricts second-order quantification to unary predicates, i.e. the only second-order variables allowed are subset variables. Footnotes Predicate logic Logical calculi
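The decidability claim above lends itself to a brute-force check. The sketch below is illustrative and not from the article: it relies on the fact that, in monadic logic without equality, a structure is determined up to elementary equivalence by which combinations of predicates ("types") are inhabited, so validity can be tested by enumerating all nonempty sets of types. The formula encoding and function names are ad hoc choices for this sketch.

```python
from itertools import chain, combinations

def powerset(items):
    items = list(items)
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

def holds(f, model, env):
    """Evaluate formula f in `model` (a set of inhabited types, each type being
    a frozenset of the predicate names it satisfies) under variable binding env."""
    op = f[0]
    if op == 'pred':        # ('pred', 'P', 'x'): does x's type satisfy P?
        return f[1] in env[f[2]]
    if op == 'not':
        return not holds(f[1], model, env)
    if op == 'and':
        return holds(f[1], model, env) and holds(f[2], model, env)
    if op == 'or':
        return holds(f[1], model, env) or holds(f[2], model, env)
    if op == 'implies':
        return (not holds(f[1], model, env)) or holds(f[2], model, env)
    if op == 'forall':      # ('forall', 'x', body): quantify over inhabited types
        return all(holds(f[2], model, {**env, f[1]: t}) for t in model)
    if op == 'exists':
        return any(holds(f[2], model, {**env, f[1]: t}) for t in model)
    raise ValueError(f"unknown connective {op!r}")

def valid(f, predicates):
    """True iff f holds in every nonempty model over the given monadic predicates.
    Without equality, only the set of inhabited types matters, so enumerating
    nonempty sets of types covers all models up to elementary equivalence."""
    types = [frozenset(t) for t in powerset(predicates)]
    return all(holds(f, set(m), {}) for m in powerset(types) if m)

# The dog/mammal/bird syllogism from the text, checked mechanically.
D, M, B = (('pred', p, 'x') for p in 'DMB')
syllogism = ('implies',
             ('and', ('forall', 'x', ('implies', D, M)),
                     ('forall', 'x', ('implies', M, ('not', B)))),
             ('forall', 'x', ('implies', D, ('not', B))))
print(valid(syllogism, ['D', 'M', 'B']))   # True
```

With three predicates this enumerates all 255 nonempty ways of populating the 8 possible types; the doubly exponential blow-up in the number of predicates is why this demonstrates decidability rather than serving as a practical prover.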
Monadic predicate calculus
Mathematics
668
6,625,288
https://en.wikipedia.org/wiki/Cradle-to-cradle%20design
Cradle-to-cradle design (also referred to as 2CC2, C2C, cradle 2 cradle, or regenerative design) is a biomimetic approach to the design of products and systems that models human industry on nature's processes, where materials are viewed as nutrients circulating in healthy, safe metabolisms. The term itself is a play on the popular corporate phrase "cradle to grave", implying that the C2C model is sustainable and considerate of life and future generations—from the birth, or "cradle", of one generation to the next generation, versus from birth to death, or "grave", within the same generation. C2C suggests that industry must protect and enrich ecosystems and nature's biological metabolism while also maintaining a safe, productive technical metabolism for the high-quality use and circulation of organic and technical nutrients. It is a holistic, economic, industrial and social framework that seeks to create systems that are not only efficient but also essentially waste free. Building off the whole systems approach of John T. Lyle's regenerative design, the model in its broadest sense is not limited to industrial design and manufacturing; it can be applied to many aspects of human civilization such as urban environments, buildings, economics and social systems. The term "Cradle to Cradle" is a registered trademark of McDonough Braungart Design Chemistry (MBDC) consultants. The Cradle to Cradle Certified Products Program began as a proprietary system; however, in 2012 MBDC turned the certification over to an independent non-profit called the Cradle to Cradle Products Innovation Institute. Independence, openness, and transparency are the Institute's first objectives for the certification protocols. The phrase "cradle to cradle" itself was coined by Walter R. Stahel in the 1970s. The current model is based on a system of "lifecycle development" initiated by Michael Braungart and colleagues at the Environmental Protection Encouragement Agency (EPEA) in the 1990s and explored through the publication A Technical Framework for Life-Cycle Assessment. In 2002, Braungart and William McDonough published a book called Cradle to Cradle: Remaking the Way We Make Things, a manifesto for cradle-to-cradle design that gives specific details of how to achieve the model. The model has been implemented by many companies, organizations and governments around the world. Cradle-to-cradle design has also been the subject of many documentary films such as Waste = Food. Introduction In the cradle-to-cradle model, all materials used in industrial or commercial processes—such as metals, fibers, dyes—fall into one of two categories: "technical" or "biological" nutrients. Technical nutrients are strictly limited to non-toxic, non-harmful synthetic materials that have no negative effects on the natural environment; they can be used in continuous cycles as the same product without losing their integrity or quality. In this manner these materials can be used over and over again instead of being "downcycled" into lesser products, ultimately becoming waste. Biological nutrients are organic materials that, once used, can be disposed of in any natural environment and decompose into the soil, providing food for small life forms without affecting the natural environment. This is dependent on the ecology of the region; for example, organic material from one country or landmass may be harmful to the ecology of another country or landmass. 
The two types of materials each follow their own cycle in the regenerative economy envisioned by McDonough and Braungart. Structure Initially defined by McDonough and Braungart, the Cradle to Cradle Products Innovation Institute's five certification criteria are: Material health, which involves identifying the chemical composition of the materials that make up the product. Particularly hazardous materials (e.g. heavy metals, pigments, halogen compounds etc.) have to be reported whatever the concentration, and other materials reported where they exceed 100 ppm. For wood, the forest source is required. The risk for each material is assessed against criteria and eventually ranked on a scale, with green for materials of low risk, yellow for those with moderate risk that are acceptable to continue using, red for materials with high risk that need to be phased out, and grey for materials with incomplete data. The method uses the term 'risk' in the sense of hazard (as opposed to consequence and likelihood). Material reutilization, which is about recovery and recycling at the end of product life. Assessment of energy required for production, which for the highest level of certification needs to be based on at least 40% renewable energy for all parts and subassemblies. Water, particularly usage and discharge quality. Social responsibility, which assesses fair labor practices. Health Currently, many human beings come into contact with or consume, directly or indirectly, many harmful materials and chemicals daily. In addition, countless other forms of plant and animal life are also exposed. C2C seeks to remove dangerous technical nutrients (synthetic materials such as mutagenic materials, heavy metals and other dangerous chemicals) from current life cycles. If the materials we come into contact with and are exposed to on a daily basis are not toxic and do not have long-term health effects, then the health of the overall system can be better maintained. For example, a fabric factory can eliminate all harmful technical nutrients by carefully reconsidering which chemicals it uses in its dyes to achieve the colours it needs, attempting to do so with fewer base chemicals. Economics The C2C model shows high potential for reducing the financial cost of industrial systems. For example, in the redesign of the Ford River Rouge Complex, the planting of Sedum (stonecrop) vegetation on assembly plant roofs retains and cleanses rain water. It also moderates the internal temperature of the building in order to save energy. The roof is part of an $18 million rainwater treatment system designed to clean rainwater annually. This saved Ford $30 million that would otherwise have been spent on mechanical treatment facilities. Following C2C design principles, product manufacture can be designed to cost less for the producer and consumer. Theoretically, such designs can eliminate the need for waste disposal such as landfills. Definitions Cradle to cradle is a play on the phrase "cradle to grave", implying that the C2C model is sustainable and considerate of life and future generations. Technical nutrients are basically inorganic or synthetic materials manufactured by humans—such as plastics and metals—that can be used many times over without any loss in quality, staying in a continuous cycle. Biological nutrients and materials are organic materials that can decompose into the natural environment, soil, water, etc. without affecting it in a negative way, providing food for bacteria and microbiological life.
Materials are usually referred to as the building blocks of other materials, such as the dyes used in colouring fibers or the rubbers used in the sole of a shoe. Downcycling is the reuse of materials in lesser products. For example, a plastic computer case could be downcycled into a plastic cup, which then becomes a park bench, etc.; this eventually leads to plastic waste. In conventional understanding, this is no different from recycling that produces a supply of the same product or material. Waste = Food is a basic concept of organic waste materials becoming food for bugs, insects and other small forms of life that can feed on it, decompose it and return it to the natural environment, which we then indirectly use for food ourselves. Existing synthetic materials C2C design also addresses the question of how to deal with the countless existing technical nutrients (synthetic materials) that cannot be recycled or reintroduced to the natural environment. The materials that can be reused and retain their quality can be used within the technical nutrient cycles, while other materials are far more difficult to deal with, such as the plastics in the Pacific Ocean. Hypothetical examples One potential example is a shoe that is designed and mass-produced using the C2C model. The sole might be made of "biological nutrients" while the upper parts might be made of "technical nutrients". The shoe is mass-produced at a manufacturing plant that utilizes its waste material by putting it back into the cycle, potentially by using off-cuts from the rubber soles to make more soles instead of merely disposing of them; this is dependent on the technical materials not losing their quality as they are reused. Once the shoes have been manufactured, they are distributed to retail outlets where the customer buys the shoe at a reduced price, because the customer is only paying for the use of the materials in the shoe for the period of time that they will be wearing them. When they outgrow the shoe or it is damaged, they return it to the manufacturer. When the manufacturer separates the sole from the upper parts (separating the technical and biological nutrients), the biological nutrients are returned to the natural environment while the technical nutrients can be used to create the sole of another shoe. Another example of C2C design is a disposable cup, bottle, or wrapper made entirely out of biological materials. When the user is finished with the item, it can be disposed of and returned to the natural environment; the cost of disposal of waste such as landfill and recycling is greatly reduced. The user could also potentially return the item for a refund so it can be used again. Finished products Rohner Textile AG Climatex-textile Biofoam, a cradle-to-cradle alternative to expanded polystyrene Sewage sludge treatment plants are facilities that may create fertiliser from sewage sludge. This approach is a green retrofit for the current (inefficient) system of organic waste disposal; composting toilets are a better approach in the long run. Aquion Energy large scale batteries Ecovative Design packaging and insulation made from waste by binding it together with mycelium Implementation The C2C model can be applied to almost any system in modern society: urban environments, buildings, manufacturing, social systems, etc.
Five steps are outlined in Cradle to Cradle: Remaking the Way We Make Things: Get "free of" known culprits Follow informed personal preferences Create "passive positive" lists—lists of materials used categorised according to their safety level The X list—substances that must be phased out, such as teratogenic, mutagenic, or carcinogenic substances The gray list—problematic substances that are not so urgently in need of phasing out The P list—the "positive" list, substances actively defined as safe for use Activate the positive list Reinvent—the redesign of the former system Products that adhere to all steps may be eligible to receive C2C certification. Other certifications such as Leadership in Energy and Environmental Design (LEED) and Building Research Establishment Environmental Assessment Method (BREEAM) can be used to qualify for certification, and vice versa in the case of BREEAM. C2C principles were first applied to systems in the early 1990s by Braungart's Hamburger Umweltinstitut (HUI) and The Environmental Institute in Brazil for biomass nutrient recycling of effluent to produce agricultural products and clean water as a byproduct. In 2007, MBDC and the EPEA formed a strategic partnership with global materials consultancy Material ConneXion to help promote and disseminate C2C design principles by providing greater global access to C2C material information, certification and product development. As of January 2008, Material ConneXion's Materials Libraries in New York, Milan, Cologne, Bangkok and Daegu, Korea, started to feature C2C-assessed and certified materials and, in collaboration with MBDC and EPEA, the company now offers C2C certification and C2C product development. While the C2C model has influenced the construction or redevelopment of smaller sites, several large organizations and governments have also implemented the C2C model and its ideas and concepts: Major implementations The Lyle Center for Regenerative Studies incorporates holistic and cyclic systems throughout the center. Regenerative design is arguably the foundation for the trademarked C2C. The Government of China contributed to the construction of the city of Huangbaiyu based on C2C principles, utilising the rooftops for agriculture. This project is largely criticized as a failure to meet the desires and constraints of the local people. The Ford River Rouge Complex redevelopment, cleaning rainwater annually. The Netherlands Institute of Ecology (NIOO-KNAW) planned to make its laboratory and office complex completely cradle-to-cradle compliant. Several private houses and communal buildings in the Netherlands. Fashion Positive, an initiative to assist the fashion world in implementing the cradle-to-cradle model in five areas: material health, material reuse, renewable energy, water stewardship and social fairness. Coordination with other models The cradle-to-cradle model can be viewed as a framework that considers systems as a whole or holistically. It can be applied to many aspects of human society, and is related to life-cycle assessment. See for instance the LCA-based model of the eco-costs, which has been designed to cope with analyses of recycling systems. The cradle-to-cradle model in some implementations is closely linked with the car-free movement, such as in the case of large-scale building projects or the construction or redevelopment of urban environments. It is closely linked with passive solar design in the building industry and with permaculture in agriculture within or near urban environments.
An earthship is a good example in which different re-use models, including cradle-to-cradle design and permaculture, are combined. Constraints A major constraint on the optimal recycling of materials is that at civic amenity sites, products are not disassembled by hand with each individual part sorted into its own bin; instead, the entire product is sorted into a single bin. This makes the extraction of rare-earth elements and other materials uneconomical (at recycling sites, products typically get crushed, after which the materials are extracted by means of magnets, chemicals and special sorting methods), and thus optimal recycling of, for example, metals is impossible (an optimal recycling method for metals would require sorting all similar alloys together rather than mixing plain iron with alloys). Disassembling products is clearly not feasible at civic amenity sites as currently designed; a better method would be to send broken products back to the manufacturer, so that the manufacturer can disassemble them. These disassembled products can then be used for making new products, or at least their components can be sent separately to recycling sites (for proper recycling, by the exact type of material). At present, though, few countries have laws in place obliging manufacturers to take back their products for disassembly, nor are there even such obligations for manufacturers of cradle-to-cradle products. One process where this is happening is in the EU with the Waste Electrical and Electronic Equipment Directive. Also, the European Training Network for the Design and Recycling of Rare-Earth Permanent Magnet Motors and Generators in Hybrid and Full Electric Vehicles (ETN-Demeter) makes designs of electric motors from which the magnets can be easily removed for recycling the rare-earth metals. Criticism and response Criticism has been advanced on the fact that McDonough and Braungart previously kept C2C consultancy and certification in their inner circle. Critics argued that this lack of competition prevented the model from fulfilling its potential. Many critics pleaded for a public-private partnership overseeing the C2C concept, thus enabling competition and growth of practical applications and services. McDonough and Braungart responded to this criticism by giving control of the certification protocol to a non-profit, independent institute called the Cradle to Cradle Products Innovation Institute. McDonough said the new institute "will enable our protocol to become a public certification program and global standard". The new institute announced the creation of a Certification Standards Board in June 2012. The new board, under the auspices of the institute, will oversee the certification moving forward. Experts in the field of environment protection have questioned the practicability of the concept. Friedrich Schmidt-Bleek, head of the German Wuppertal Institute, called Braungart's assertion that the "old" environmental movement had hindered innovation with its pessimistic approach "pseudo-psychological humbug". Schmidt-Bleek said of the cradle-to-cradle seat cushions Braungart developed for the Airbus 380: "I can feel very nice on Michael's seat covers in the airplane. Nevertheless I am still waiting for a detailed proposal for a design of the other 99.99 percent of the Airbus 380 after his principles." In 2009 Schmidt-Bleek stated that it is out of the question that the concept could be realized on a bigger scale.
Some claim that C2C certification may not be entirely sufficient in all eco-design approaches. Quantitative methodologies (LCAs) and tools better adapted to the product type being considered could be used in tandem. The C2C concept ignores the use phase of a product. According to variants of life-cycle assessment, the entire life cycle of a product or service has to be evaluated, not only the material itself. For many goods, e.g. in transport, the use phase has the most influence on the environmental footprint. For example, the lighter a car or a plane is, the less fuel it consumes and consequently the less impact it has. Braungart fully ignores the use phase. It is safe to say that every production step or resource-transformation step needs a certain amount of energy. The C2C concept foresees its own certification of its analysis and is therefore in contradiction with international publishing standards for life-cycle assessment (ISO 14040 and ISO 14044), under which an independent external review is needed in order to obtain comparative and resilient results. See also Appropriate technology Ellen MacArthur Foundation List of environment topics Modular construction systems Planned obsolescence (the opposite of durable, no-waste design) The Blue Economy Upcycling References External links Sustainable design Environmental design Industrial ecology Sustainable building
Cradle-to-cradle design
Chemistry,Engineering
3,659
3,377,359
https://en.wikipedia.org/wiki/Nitroso
In organic chemistry, nitroso refers to a functional group in which the nitric oxide (N=O) group is attached to an organic moiety. As such, various nitroso groups can be categorized as C-nitroso compounds (e.g., nitrosoalkanes; R−N=O), S-nitroso compounds (nitrosothiols; RS−N=O), N-nitroso compounds (e.g., nitrosamines, R2N−N=O), and O-nitroso compounds (alkyl nitrites; RO−N=O). Synthesis Nitroso compounds can be prepared by the reduction of nitro compounds or by the oxidation of hydroxylamines. Ortho-nitrosophenols may be produced by the Baudisch reaction. In the Fischer–Hepp rearrangement, aromatic 4-nitrosoanilines are prepared from the corresponding nitrosamines. Properties Nitrosoarenes typically participate in a monomer–dimer equilibrium. The azobenzene N,N'-dioxide (Ar(O−)N+=N+(O−)Ar) dimers, which are often pale yellow, are generally favored in the solid state, whereas the deep-green monomers are favored in dilute solution or at higher temperatures. They exist as cis and trans isomers. The central "double bond" in the dimer in fact has a bond order of about 1.5. When stored in protic media, primary and secondary nitrosoalkanes isomerize to oximes. Some tertiary nitrosoalkanes also isomerize to oximes through C−C bond fission, particularly if the bond is electron-poor. Nitrosophenols and naphthols isomerize to the oxime quinone in solution, but reversibly; nitrosophenol ethers typically dealkylate to facilitate the isomerization. Nitroso tertiary anilines generally do not dealkylate in that way. Due to the stability of the nitric oxide free radical, nitroso organyls tend to have very low C–N bond dissociation energies: nitrosoalkanes have BDEs on the order of , while nitrosoarenes have BDEs on the order of . As a consequence, they are generally heat- and light-sensitive. Compounds containing O–(NO) or N–(NO) bonds generally have even lower bond dissociation energies. For instance, N-nitrosodiphenylamine, Ph2N–N=O, has a N–N bond dissociation energy of only . Organonitroso compounds serve as ligands, giving transition metal nitroso complexes. Reactions Many reactions make use of an intermediate nitroso compound, such as the Barton reaction and Davis–Beirut reaction, as well as syntheses of indoles, for example: Baeyer–Emmerling indole synthesis, Bartoli indole synthesis. In the Saville reaction, mercury is used to replace a nitrosyl from a thiol group. C-nitroso compounds are used in organic synthesis as synthons in some well-documented chemical reactions such as hetero-Diels–Alder (HDA), nitroso-ene and nitroso-aldol reactions. Nitrosyl in inorganic chemistry Nitrosyls are inorganic compounds containing the NO group, for example bound directly to a metal via the N atom, giving a metal–NO moiety. Alternatively, a nonmetal example is the common reagent nitrosyl chloride (NOCl). Nitric oxide is a stable radical, having an unpaired electron. Reduction of nitric oxide gives the nitrosyl anion, NO−: NO + e− → NO− Oxidation of NO yields the nitrosonium cation, NO+: NO → NO+ + e− Nitric oxide can serve as a ligand forming metal nitrosyl complexes or just metal nitrosyls. These complexes can be viewed as adducts of NO+, NO−, or some intermediate case. In human health See also Nitrosamine, the functional group with the NO attached to an amine, such as R2N–NO Nitrosobenzene Nitric oxide Nitroxyl References Functional groups Nitrosyl compounds
Nitroso
Chemistry,Biology
887
30,483,900
https://en.wikipedia.org/wiki/RegTransBase
RegTransBase is a database of regulatory interactions and transcription factor binding sites in prokaryotes. See also Transcription factors References External links http://regtransbase.lbl.gov. Biological databases Transcription factors DNA Biophysics
RegTransBase
Physics,Chemistry,Biology
49
4,302,157
https://en.wikipedia.org/wiki/M74%20Group
The M74 Group (also known as the NGC 628 Group) is a small group of galaxies in the constellation Pisces. The face-on spiral galaxy M74 (NGC 628) is the brightest galaxy within the group. Other members include the peculiar spiral galaxy NGC 660 and several smaller irregular galaxies. The M74 Group is one of many galaxy groups that lie within the Virgo Supercluster. Members The table below lists galaxies that have been consistently identified as group members in the Nearby Galaxies Catalog, the Lyons Groups of Galaxies (LGG) Catalog, and the three group lists created from the Nearby Optical Galaxy sample of Giuricin et al. Other possible member galaxies (galaxies listed in only one or two of the lists from the above references) include the irregular galaxies UGC 891, UGC 1104, UGC 1171, UGC 1175, and UGCA 20. References Virgo Supercluster Pisces (constellation) Galaxy clusters
M74 Group
Astronomy
204
49,639,708
https://en.wikipedia.org/wiki/MIST%20%28satellite%29
The MIST satellite (Miniature Student Satellite) is a satellite currently under development at the KTH Royal Institute of Technology in Stockholm, Sweden. It is expected to be launched in 2024. The satellite is a 3U CubeSat, primarily built by students working in small teams. The project was defined in 2014, and the work started on 28 January 2015. The project is led by Sven Grahn, an experienced satellite project manager. Seven technical and scientific experiments are included in the KTH student satellite MIST. The experiments have been proposed from within KTH, from two Swedish companies and from the Swedish Institute of Space Physics in Kiruna, Sweden. References Proposed satellites
MIST (satellite)
Astronomy
160
41,248
https://en.wikipedia.org/wiki/Hydroxyl%20ion%20absorption
Hydroxyl ion absorption is the absorption in optical fibers of electromagnetic radiation, including the near-infrared, due to the presence of trapped hydroxyl ions remaining from water as a contaminant. The hydroxyl (OH−) ion can penetrate glass during or after product fabrication, resulting in significant attenuation of discrete optical wavelengths, e.g., centred at 1.383 μm, used for communications via optical fibres. See also Electromagnetic absorption by water References Fiber optics Glass engineering and science
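As a rough illustration of why this water peak matters, attenuation in decibels accumulates linearly with fiber length, so the remaining optical power falls off exponentially. The sketch below assumes an illustrative excess attenuation of 0.5 dB/km near 1.383 μm; that figure is a made-up example value, not a datasheet number for any real fiber.

```python
def remaining_fraction(alpha_db_per_km: float, length_km: float) -> float:
    """Fraction of launched optical power left after length_km of fiber
    with attenuation alpha_db_per_km (total loss in dB = alpha * length)."""
    return 10 ** (-alpha_db_per_km * length_km / 10)

# Assumed 0.5 dB/km of hydroxyl-related excess loss at the 1.383 um water peak:
print(remaining_fraction(0.5, 40))   # 0.01 -> only 1% of the power survives 40 km
```

A 20 dB penalty over a 40 km span is why channels near the water peak were historically avoided, and why "low water peak" fibers were developed.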
Hydroxyl ion absorption
Chemistry,Materials_science,Engineering
107
5,593,864
https://en.wikipedia.org/wiki/Jonathan%20B.%20Tucker
Jonathan B. Tucker (August 2, 1954 – July 31, 2011) was an American political scientist and expert on chemical and biological weapons. Early life and education Tucker was born on August 2, 1954, in Boston, Massachusetts, to Deborah Tucker. Tucker earned a B.S. in biology from Yale University and a Ph.D. in political science (focusing on defense and arms control studies) from MIT. Career After finishing his studies, Tucker worked as an arms control specialist for the Congressional Office of Technology Assessment, the U.S. Arms Control & Disarmament Agency, and the U.S. State Department. He was an editor at High Technology and Scientific American magazines and wrote about military technologies, biotechnology, and biomedical research. Tucker was a UN biological weapons inspector in Iraq in February 1995. From 1996, he served as founding director of the Chemical and Biological Weapons Nonproliferation Program at the James Martin Center for Nonproliferation Studies of the Monterey Institute of International Studies, and then served as a senior fellow in its Washington office. He was a professional staff member for the bipartisan Commission on the Prevention of WMD Proliferation and Terrorism, which published World at Risk, a volume critical of US prevention strategies for post-9/11 terrorism. In 2010, Tucker spent a semester teaching and researching at the TU Darmstadt in Germany as an endowed professor of peace and security studies, and most recently was a senior fellow at the Federation of American Scientists in Washington, D.C. Death On July 31, 2011, Tucker was found dead in his home in Washington, D.C. He was 56. Published works Articles Books (editor) References External links 1954 births 2011 deaths Yale College alumni MIT School of Humanities, Arts, and Social Sciences alumni People related to biological warfare Academic staff of Technische Universität Darmstadt
Jonathan B. Tucker
Biology
370
931,087
https://en.wikipedia.org/wiki/Silicon%20compiler
A silicon compiler is an electronic design automation software tool used for high-level synthesis of integrated circuits. Such a tool takes a user's specification of an IC design as input and automatically generates integrated circuit (IC) design files as output, for further fabrication by a semiconductor fabrication plant or manually from discrete components. The process is sometimes referred to as hardware compilation. The silicon compiler may use a vendor's process design kit for the production. Overview Silicon compilation takes place in several major steps: Use a high-level C-to-HDL converter Convert a hardware-description language such as Verilog or VHDL into logic (typically in the form of a "netlist"). Place equivalent logic gates on the IC. Silicon compilers typically use standard-cell libraries provided by manufacturers so that they do not have to worry about the actual integrated-circuit layout and can focus on the placement. Routing the standard cells together to form the desired logic. Silicon compilation was first described in 1979 by David L. Johannsen, under the guidance of his thesis adviser, Carver Mead. Johannsen, Mead, and Edmund K. Cheng subsequently founded Silicon Compilers Inc. (SCI) in 1981. Edmund Cheng designed an Ethernet Data Link Controller chip in 1981–82 using structured design methodology, in order to drive the software and circuit-library development at SCI. The project went from concept to chip specification in 3 months, and from chip specification to tape-out in 5 months. Fabricated using a 3-micron NMOS process, the chip measured 50,600 square mils in die area, and was being marketed and manufactured in volume production by 1983 under license from SCI. John Wawrzynek at Caltech used some of the earliest silicon compilers in 1982 as part of the "Yet Another Processor Project" (YAPP), akin to YACC. In 1983–84, the SCI team designed and implemented the data-path chip used in the MicroVAX in seven months. MicroVAX's data-path chip contains the entire 32-bit processor, except its microcode store and control-store sequencer, and contains 37,000 transistors. At the time, chips with similar levels of complexity required about 3 years to design and implement. Including those seven months, Digital Equipment Corporation completed the design and implementation of the MicroVAX within one year. See also Electric (software) FpgaC References External links Definition from PC Magazine Computer Aids for VLSI Design by Steven M. Rubin Hardware compilation information Hardware compilation mailing list Electronic design automation Computing terminology
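To make the logic-to-netlist step concrete, here is a toy sketch, not any real tool's format or algorithm, that flattens a nested Boolean expression into a list of named gate instances, the kind of intermediate representation a silicon compiler would then map to standard cells, place, and route. The expression encoding and net-naming scheme are ad hoc choices for this illustration.

```python
import itertools

_net_ids = itertools.count()

def synthesize(expr, netlist):
    """Flatten a nested expression such as ('and', 'a', ('or', 'b', ('not', 'c')))
    into gate instances, returning the name of the net carrying the result."""
    if isinstance(expr, str):                  # a primary input signal
        return expr
    op, *args = expr
    input_nets = [synthesize(a, netlist) for a in args]
    out = f"n{next(_net_ids)}"                 # fresh internal net name
    netlist.append((op.upper(), input_nets, out))
    return out

netlist = []
result = synthesize(('and', 'a', ('or', 'b', ('not', 'c'))), netlist)
for gate, ins, out in netlist:
    print(f"{gate}({', '.join(ins)}) -> {out}")
# NOT(c) -> n0
# OR(b, n0) -> n1
# AND(a, n1) -> n2
```

A real silicon compiler would then map these generic gates onto a manufacturer's standard-cell library before placement and routing, which is where most of the engineering effort lies.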
Silicon compiler
Technology
521
73,974,319
https://en.wikipedia.org/wiki/Phaffia
Phaffia is a genus of fungi in the order Cystofilobasidiales. The genus comprises orange-red yeasts that form basidia directly from yeast cells, lack hyphae throughout their life cycle, and produce astaxanthin, a carotenoid used as an additive in animal feed to enhance colour in shrimp, salmon, and poultry eggs and also as an antioxidant in dietary supplements. The genus was named after the Dutch specialist Herman Phaff who first isolated the type species from slime fluxes of Japanese and North American trees in the 1960s. The genus Xanthophyllomyces was proposed for the teleomorphic (basidia-bearing) state of Phaffia. Following changes to the International Code of Nomenclature for algae, fungi, and plants, however, the practice of giving different names to teleomorph and anamorph forms of the same fungus was discontinued, meaning that Xanthophyllomyces became a synonym of the earlier name Phaffia. References Tremellomycetes Taxa described in 1976 Basidiomycota genera Yeasts
Phaffia
Biology
232
12,403,972
https://en.wikipedia.org/wiki/Simple%20programmable%20logic%20device
A simple programmable logic device (SPLD) is a programmable logic device with complexity below that of a complex programmable logic device (CPLD). The term commonly refers to devices such as ROMs, PALs, PLAs and GALs. Basic description Simple programmable logic devices (SPLDs) are the simplest, smallest and least-expensive forms of programmable logic devices. SPLDs can be used on boards to replace standard logic components (AND, OR, and NOT gates), such as 7400-series TTL. They typically comprise 4 to 22 fully connected macrocells. These macrocells typically consist of some combinational logic (such as AND and OR gates) and a flip-flop. In other words, a small Boolean logic equation can be built within each macrocell. This equation combines the states of some number of binary inputs into a binary output and, if necessary, stores that output in the flip-flop until the next clock edge. The particulars of the available logic gates and flip-flops are specific to each manufacturer and product family, but the general idea is always the same. Most SPLDs use either fuses or non-volatile memory cells (EPROM, EEPROM, Flash, and others) to define the functionality. These devices are also known as: Programmable array logic (PAL) Generic array logic (GAL) Programmable logic arrays (PLA) Field-programmable logic arrays (FPLA) Programmable logic devices (PLD) Advantages PLDs are often used for address decoding, where they have several clear advantages over the 7400-series TTL parts that they replaced: One chip requires less board area, power, and wiring than several do. The design inside the chip is flexible, so a change in the logic does not require any rewiring of the board. Rather, simply replacing one PLD with another part that has been programmed with the new design can alter the decoding logic. References Gate arrays
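As an illustration of the macrocell described above, the following toy model (the class name and structure are my own, not any vendor's architecture) implements a registered sum-of-products macrocell: a programmable AND array feeding a fixed OR gate, whose result is latched by a D flip-flop on each clock edge.

```python
class Macrocell:
    """Toy registered PAL/GAL-style macrocell. Each product term is a dict
    mapping an input name to the level (True/False) it must have; the cell
    ORs the product terms together and latches the result in a flip-flop."""
    def __init__(self, product_terms):
        self.product_terms = product_terms
        self.q = False                                    # flip-flop output

    def combinational(self, inputs):
        # Programmable AND array (each term) feeding the fixed OR gate.
        return any(all(inputs[name] == level for name, level in term.items())
                   for term in self.product_terms)

    def clock(self, inputs):
        """Model a rising clock edge: latch the OR-of-ANDs result."""
        self.q = self.combinational(inputs)
        return self.q

# A toggle flip-flop, q_next = t XOR q, expressed as two product terms:
cell = Macrocell([{'t': True, 'q': False}, {'t': False, 'q': True}])
for t in (True, True, False, True):
    print(cell.clock({'t': t, 'q': cell.q}))   # True, False, False, True
```

Programming a real SPLD amounts to choosing which fuse connections survive in the AND array, which is exactly what the product-term dictionaries stand in for here.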
Simple programmable logic device
Technology,Engineering
415
53,727,010
https://en.wikipedia.org/wiki/PiggyBac%20Transposable%20Element%20Derived%205
PiggyBac Transposable Element Derived 5 is an enzyme that in humans is encoded by the PGBD5 gene. PGBD5 is a DNA transposase related to the ancient PiggyBac transposase first identified in the cabbage looper moth, Trichoplusia ni. The gene is believed to have been domesticated over 500 million years ago in the common ancestor of cephalochordates and vertebrates. The putative catalytic triad of the protein, composed of three aspartic acid residues, is conserved among PGBD5-like genes through evolution, and is distinct from that of other PiggyBac-like genes. PGBD5 has been shown to be able to transpose DNA in a sequence-specific, cut-and-paste fashion. PGBD5 has also been proposed to mediate site-specific DNA rearrangements in human tumors. Human PGBD5 can mobilize insect PiggyBac transposons in human cell culture. Expression in the brain In mature mouse brain tissue, PGBD5 is found primarily in regions of the olfactory bulb, hippocampus, and cerebellum. In embryonic mouse brain tissue, PGBD5 is found not only in the medial pallium and prepontine isthmus, embryonic brain areas that give rise to the hippocampus and cerebellum, but also in areas of the embryonic brain that give rise to the hypothalamus and medulla. Disease associations PGBD5 is expressed in the majority of human pediatric solid tumors. It is upregulated in sporadic Creutzfeldt-Jakob disease. PGBD5 is associated with frontotemporal dementia, where it is most highly expressed in neurons, followed by oligodendrocytes, mature astrocytes, fetal astrocytes, endothelial cells and then microglia/macrophages. References Genes on human chromosome 1 Mobile genetic elements Enzymes
PiggyBac Transposable Element Derived 5
Biology
415
27,053,099
https://en.wikipedia.org/wiki/Cryogenic%20storage%20dewar
A cryogenic storage dewar (or simply dewar) is a specialised type of vacuum flask used for storing cryogens (such as liquid nitrogen or liquid helium), whose boiling points are much lower than room temperature. It is named after inventor James Dewar, who developed it for his own work. They are commonly used in low-temperature physics and chemistry. Cryogenic storage dewars can range widely in size and may take several different forms, including open buckets, flasks with loose-fitting stoppers, and self-pressurising tanks. All dewars have walls constructed from two or more layers, with a high vacuum maintained between the layers. This provides very good thermal insulation between the interior and exterior of the dewar, which reduces the rate at which the contents boil away. Precautions are taken in the design of dewars to safely manage the gas which is released as the liquid slowly boils. Design The simplest dewars allow the gas to escape either through an open top or past a loose-fitting stopper. More sophisticated dewars trap the gas above the liquid, and hold it at high pressure. This increases the boiling point of the liquid, allowing it to be stored for extended periods. Excessive vapour pressure is released automatically through safety valves. Dewars are also designed to be resistant to any sort of puncture to preserve the contents, as cryogens are costly to produce, and some (like helium) are in limited global supply. The method of decanting liquid from a dewar depends upon its design. Simple dewars may be tilted, to pour liquid from the neck. Self-pressurising designs use the pressure of the gas in the top of the dewar to force the liquid upward through a pipe leading to the neck. Safety Cryogens present several safety hazards, and their storage vessels are designed to reduce the associated risk. Firstly, no dewar can provide perfect thermal insulation and the cryogenic liquid slowly boils away, which yields an enormous quantity of gas. This is known as the liquid nitrogen evaporation rate. In dewars with an open top, the gas simply escapes into the surrounding area. However, very high pressures can build up inside sealed dewars, and precautions are taken to minimise the risk of explosion. One or more pressure-relief valves allow gas to vent away from the dewar whenever the pressure becomes excessive. In an incident in 2006 at Texas A&M University, the pressure-relief devices of a tank of liquid nitrogen were sealed with brass plugs. As a result, the tank failed catastrophically and exploded. Secondly, if a dewar is left open to the air for extended periods, atmospheric chemicals can condense or freeze on contact with the cryogenic material. This can introduce contaminants. If these materials freeze, for example, water vapor becoming ice, they can block the openings, leading to pressure buildup and the risk of an explosion. Thirdly, the gas escaping from a dewar can gradually displace the oxygen from the air in the surrounding area, which presents an asphyxiation hazard. Users are trained to store dewars only in a well-ventilated area, and before transporting dewars in an elevator, the excess gas pressure is vented away and the dewars are sent unaccompanied to their destination. References Vacuum flasks Cryogenics Industrial gases Laboratory equipment
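The boil-off discussed above follows directly from the heat leaking through the insulation: every joule that reaches the liquid vaporizes a fixed mass of cryogen. The sketch below uses standard property values for liquid nitrogen, but the 1 W heat leak is an assumed illustrative figure, not the specification of any particular dewar.

```python
# Rough boil-off estimate for a liquid-nitrogen dewar with a steady heat leak.
HEAT_LEAK_W = 1.0                 # assumed heat leak into the liquid [W] (illustrative)
LATENT_HEAT_J_PER_KG = 199_000    # latent heat of vaporization of LN2 [J/kg]
LN2_DENSITY_KG_PER_L = 0.807      # density of liquid nitrogen [kg/L]

# Energy in per day divided by latent heat gives the mass vaporized per day.
boiloff_kg_per_day = HEAT_LEAK_W * 86_400 / LATENT_HEAT_J_PER_KG
boiloff_l_per_day = boiloff_kg_per_day / LN2_DENSITY_KG_PER_L
print(f"{boiloff_l_per_day:.2f} L of LN2 lost per day")   # ~0.54 L/day
```

The same arithmetic, run in reverse, shows why a blocked vent is dangerous: the vaporized mass has nowhere to go, and nitrogen expands by a factor of roughly 700 on going from liquid to gas at room temperature, so pressure builds rapidly in a sealed vessel.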
Cryogenic storage dewar
Physics,Chemistry
694
11,420,714
https://en.wikipedia.org/wiki/C0465%20RNA
The C0465 RNA is a bacterial non-coding RNA of 78 nucleotides in length that is found between the tar and cheW genes in the genomes of Escherichia coli and Shigella flexneri. This ncRNA was originally identified in E. coli using high-density oligonucleotide probe arrays (microarrays). The function of this ncRNA is unknown. See also C0299 RNA C0343 RNA C0719 RNA References External links Non-coding RNA
C0465 RNA
Chemistry
110
4,571,479
https://en.wikipedia.org/wiki/Salvage%20enzyme
Salvage enzymes are enzymes, nucleoside kinases, required during cell division to "salvage" nucleotides present in body fluids for the manufacture of DNA. They catalyze the phosphorylation of nucleosides to nucleoside 5'-phosphates, which are further phosphorylated to triphosphates that can be built into the growing DNA chain. The salvage enzymes are synthesized during the G1 phase in anticipation of DNA synthesis. After cell division has been completed, the salvage enzymes, no longer required, are degraded. During interphase the cell derives its requirement for nucleoside 5'-phosphates by de novo synthesis, which leads directly to the 5'-monophosphate nucleotides. Cell cycle Enzymes
Salvage enzyme
Biology
169
32,555,013
https://en.wikipedia.org/wiki/Aanval
Aanval is a commercial SIEM product designed specifically for use with Snort, Suricata, and Syslog data. Aanval has been in active development since 2003 and remains one of the longest-running Snort-capable SIEM products in the industry. Aanval is Dutch for "attack". History Aanval was created by Loyal Moses in 2003 but was not publicly made available until March 2004, when it was released under the private commercial license C1-RA1008. Throughout the lifecycle of the software it has also been referred to as OpenAanval or ComAanval in addition to Aanval. Aanval provided AJAX-style security event monitoring and reporting from a web browser. Since Aanval's creation, it has developed into an intrusion detection, correlation and threat management console with a specific focus on normalizing Snort, Suricata, and Syslog data. Several information security related books have been published that include details and references to Aanval, including "Linux Server Security, Second Edition" by O'Reilly Media, "Security Log Management" by O'Reilly Media, "Snort: IDS and IPS Toolkit" by O'Reilly Media and, in 2010, "Unix and Linux System Administration Handbook, Fourth Edition" by O'Reilly Media. See also Snort Intrusion detection system (IDS) Intrusion prevention system (IPS) Network intrusion detection system (NIDS) Sguil References External links Aanval wiki Snort homepage OISF homepage Computer security software
Aanval
Engineering
313
59,558,412
https://en.wikipedia.org/wiki/Karen%20Sp%C3%A4rck%20Jones%20Award
To commemorate the achievements of Karen Spärck Jones, the Karen Spärck Jones Award was created in 2008 by the British Computer Society (BCS) and its Information Retrieval Specialist Group (BCS IRSG). Since 2024, the award has been sponsored by Bloomberg. Prior to 2024, it was sponsored by Microsoft Research. The winner of the award is invited to present a keynote talk the following year alternately at the European Conference on Information Retrieval (ECIR) or the Conference of the European Chapter of the Association for Computational Linguistics (EACL). Chronological recipients and keynote talks 2009: Mirella Lapata : “Image and Natural Language Processing for Multimedia Information Retrieval” 2010: Evgeniy Gabrilovich : “Ad Retrieval Systems in vitro and in vivo: Knowledge-Based Approaches to Computational Advertising” 2011: No award was made 2012: Diane Kelly : “Contours and Convergence” 2013: Eugene Agichtein : “Inferring Searcher Attention and Intention by Mining Behavior Data” 2014: Ryen White : “Mining and Modeling Online Health Search” 2015: Jordan Boyd-Graber : “Opening up the Black Box: Interactive Machine Learning for Understanding Large Document Collections, Characterizing Social Science, and Language-Based Games”, Emine Yilmaz : “A Task-Based Perspective to Information Retrieval” 2016: Jaime Teevan : “Search, Re-Search.” 2017: Fernando Diaz (computer scientist) : “The Harsh Reality of Production Information Access Systems” 2018: Krisztian Balog : “On Entities and Evaluation” 2019: Chirag Shah : “Task-Based Intelligent Retrieval and Recommendation” 2020: Ahmed H. Awadallah : “Learning with Limited Labeled Data: The Role of User Interactions” 2021: Ivan Vulić : “Towards Language Technology for a Truly Multilingual World?” 2022: William Yang Wang "Large Language Models for Question Answering: Challenges and Opportunities" 2023: Hongning Wang "Human vs. Generative AI in Content Creation Competition: Symbiosis or Conflict?" References Computer science awards
Karen Spärck Jones Award
Technology
422
44,611,132
https://en.wikipedia.org/wiki/Deconica%20aequatoriae
Deconica aequatoriae is a species of mushroom in the family Strophariaceae found in Ecuador. References Strophariaceae Fungi described in 1978 Fungi of Ecuador Taxa named by Rolf Singer Fungus species
Deconica aequatoriae
Biology
43
6,248,865
https://en.wikipedia.org/wiki/International%20Commission%20on%20Occupational%20Health
The International Commission on Occupational Health (ICOH) is an international non-governmental professional society, founded in Milan during the Expo 1906 as the Permanent Commission on Occupational Health. ICOH aims to foster scientific progress, knowledge and the development of occupational health and safety in all its aspects. Today, ICOH is the world's leading international scientific society in the field of occupational health, with a membership of 2,000 professionals from over 100 countries, and is recognised by the United Nations as a non-governmental organisation (NGO) with close working relationships with the ILO, WHO, UNEP and ISSA. Activities The most visible activities of ICOH are the triennial World Congresses on Occupational Health, which are usually attended by some 3,000 participants. The 2000 Congress was held in Singapore, the 2003 Congress in Iguassu Falls (Brazil), the 2006 Centennial Congress in Milan (Italy), the 2009 Congress in Cape Town (South Africa), the ICOH 2012 Congress in Cancun (Mexico) and the ICOH 2015 Congress in Seoul (Rep. of Korea), while ICOH 2018 was held in Dublin (Ireland). The next ICOH Congress will be held in 2022 in a digital format. ICOH 2024 will be held in Marrakesh (Morocco) and ICOH 2027 in Mumbai (India). At the ICOH 2006 General Assembly, the President highlighted the overriding importance of permanent training and education of experts in order to face the rapidly changing world of work, the need to develop occupational health services throughout the world (including the development and dissemination of Basic Occupational Health Services – BOHS), the importance of creating BOHS guidelines, tools, training, and pilot projects and the intention of making a global survey of the OHS situation in ICOH member countries. Cooperation with the World Health Organization, the International Labour Organization and other NGO partners is among the priorities of the current ICOH Strategy (ICOH Centennial Declaration). Presidents and Secretaries General Presidents: M. De Cristoforis 1906 – 1915, L. Devoto 1915 – 1936, D. Glibert 1936 – 1940, T. Stowell 1948 – 1951, P. Mazell 1951 – 1954, S. Forssman 1954 – 1969, L. Noro 1969 – 1975, E. Vigliani 1975 – 1981, R. Murray 1981 – 1987, S. Hernberg 1987 – 1993, J. F. Caillard 1993 – 2000, B. Knave 2000 – 2003, J. Rantanen 2003 – 2009, K. Kogi 2009 – 2015, J. Takala 2015 – 2022, S.K. Kang 2022–present. Secretaries General: L. Carozzi 1906 – 1957, E. Vigliani 1957 – 1975, R. Murray 1975 – 1981, L. Parmeggiani 1981 – 1988, J. Jeyaratnam 1989 – 2000, K. S. Chia 2000 – 2003, S. Iavicoli 2003 – 2022, D. Gagliardi 2022–present. Current Officers and Board Members Officers President – Seong-Kyu Kang (Rep of Korea) Vice President – Claudina Nogueira (South Africa) Vice President – Martin Hogan (Ireland) Secretary General – Diana Gagliardi (Italy) Past President – Jukka Takala (Finland) Past Secretary General – Sergio Iavicoli (Italy) Board Members Alexis Descatha (France), Maureen Dollard (Australia), Frida Marina Fischer (Brazil), Sunil Kumar Joshi (Nepal), Eun-A Kim (Rep. of Korea), Kirsi Lappalainen (Finland), Stavroula Leka (Ireland), Olivier Lo (Singapore), Dingani Moyo (Zimbabwe), Shyam Pingle (India), Riitta Sauni (Finland), Paul A. Schulte (US), Sandeep Sharma (India), Akizumi Tsutsumi (Japan), Francesco S. Violante (Italy). Scientific Committees ICOH has 38 Scientific Committees.
They cover a broad scope of challenges and problems in work life, including traditional risks of occupational injuries and diseases, and the risks of "new work life". The Scientific Committees focus specifically on their areas of competence and promote and carry out research in their respective fields. Most of these committees hold regular symposia, produce scientific monographs and review the abstracts submitted to the International Congresses. Accident Prevention Aging and Work Allergy and Immunotoxicology Biohazards and Occupational Health Cardiology in OH Education and Training in OH Emergency Preparedness and Response in OH Epidemiology in OH Effectiveness in OH Services History of Prevention of Occupational and Environmental Diseases Indoor Air Quality and Health Industrial Hygiene Mining Occupational Safety and Health Musculoskeletal Disorders Nanomaterial Workers Health Neurotoxicology and Psychophysiology Occupational and Environmental Dermatoses Occupational Health Nursing Occupational Medicine Occupational Toxicology OH and Development OH for Health Workers OH in the Chemical Industry OH in the Construction Industry Radiation and Work Reproductive Hazards in the Workplace Respiratory Disorders Rural Health: Agriculture, Pesticides and Organic Dusts Shiftwork and Working Time Small-Scale Enterprises and the Informal Sector Thermal Factors Toxicology of Metals Unemployment, Job Insecurity and Health Vibration and Noise Women Health and Work Work and Vision Work Disability Prevention and Integration Work Organization and Psychosocial Factors Asbestos ban ICOH's relations with international organizations (WHO, ILO) have drawn criticism, notably on the issue of asbestos: "Part of the explanation for this bland acceptance of the asbestos cancer epidemic is that the WHO and the ILO have allowed organizations such as the International Commission on occupational Health (ICOH) and other asbestos industry consultants and experts to manipulate them and to distort the scientific evidence. The WHO and the ILO were lulled into inaction by conflicting scientific reports of the epidemic." ICOH has set among its main priorities the asbestos ban, taking a position in favour of a global ban on asbestos. Through its official bodies and individual members, ICOH has taken action at the global, national and workplace levels: After the Call for an International Ban for Asbestos produced by the Collegium Ramazzini, the ICOH Officers Meeting in Paris, 30–31 August 1999, chaired by the ICOH President at that time, Professor Jean-Francois Caillard, decided to endorse it. Furthermore, the endorsement of the "Call for an International Ban for Asbestos" was approved by the ICOH 2nd General Assembly on 1 September 2000, in connection with the ICOH 2000 Congress. The need for continuous follow-up was recognised during the ICOH 2000–2002 triennium, and the ICOH President, Prof. Bengt Knave, decided to establish a Task Group on Asbestos (including members of the Board), which presented an article by Benedetto Terracini, "World Asbestos Congress: Past, Present and Future, Osasco (Brazil) 17–20 September 2000", as a report at the ICOH Board Meeting of 1–2 March 2002. The article was endorsed by the Board. The European Conference on Asbestos 2003, held on 3–6 September 2003, drafted and adopted the "Dresden Declaration on Protection of Workers against Asbestos".
The Declaration was drafted with strong input from the ICOH President and the Secretary of the Scientific Committee on Industrial Hygiene, and it summarizes the contemporary effort of ICOH, which has the scientific role to "provide guidance and support for a well-governed process to eliminate the use of asbestos". For this aim, ICOH Past President Jorma Rantanen, during the 13th Session of the Joint ILO/WHO Committee on Occupational Health, made the proposal to make the elimination of asbestos-related diseases a priority for ILO/WHO collaboration. The Committee unanimously approved the proposal. ICOH's commitment in this field was also attested by its full support of the Asian Asbestos Conference 2006, organized in Thailand on 26–27 July 2006 by the Ministry of Public Health and co-sponsored by the International Labour Office (ILO), the World Health Organization (WHO), the International Ban Asbestos Secretariat (IBAS) and the International Commission on Occupational Health (ICOH). During the Conference, Jorma Rantanen declared ICOH's unequivocal support for a global asbestos ban; this position is rooted in the experiences of ICOH members who have observed the dire consequences of hazardous asbestos exposures on their patients in industrialized countries. Rantanen urged that concerted action be taken by international agencies, national governments, trade unions and NGOs to raise awareness of the asbestos hazard and to highlight the long-term economic benefits of transferring to non-asbestos technologies. The "Bangkok Declaration", recalling the ILO resolution on Asbestos, the ILO Conventions on Occupational Cancer (No. 139) and Safety in the Use of Asbestos, the WHO Global Strategy on Occupational Health for All and the WHA Resolution 58.22 on Cancer Prevention and Control, and considering the ICOH International Code of Ethics for Occupational Health Professionals, declared the support of its signatories for a global asbestos ban and was widely disseminated through many networks. For the triennium 2009–2012, a new ICOH Working Group on the Elimination of Asbestos-related Diseases was set up. The Working Group mainly focused on examining the existing regulations and bans in order to develop specific recommendations for actions and guidelines. ICOH continued its efforts on this specific issue through the ICOH Statement: Global Asbestos Ban and the Elimination of Asbestos-Related Diseases. To accomplish such elimination, ICOH urges each and every individual country to implement a total ban on the production and use of asbestos. ICOH also urges complementary efforts aimed at primary, secondary and tertiary prevention of asbestos-related diseases through country-specific "National Programmes for Elimination of Asbestos-Related Diseases" in line with ILO and WHO guidelines. At the national level, the expert input of ICOH members into decisions concerning asbestos bans can be found, for example, in Finland, Sweden, Germany, Japan and Norway. ICOH members were also instrumental in the production of "Asbestos, asbestosis, and cancer: the Helsinki criteria for diagnosis and attribution", a document that has been taken into everyday use in the diagnosis, recognition and compensation of asbestos-related diseases and has also been used in courts in some countries in defence of the victims. ICOH members also train experts in occupational medicine and safety, using the research and criteria documents as support for education.
Core documents ICOH Constitution ICOH Bye-Laws ICOH Code of Ethics ICOH Good Association Practice See also Occupational health psychology Occupational medicine Occupational safety and health Total Worker Health References External links ICOH Homepage ICOH Heritage Repository ICOH on Twitter WHO, Statement by the International Commission on Occupational Health 29 June 2021 FAO, International Congress for Occupational Health: Supporting a breakthrough against child labour in agriculture, 2 March 2022 ILO, ICOH declaration on the World Day for Safety and Health at Work, 27 April 2014 INAIL, Partnership description, 16 February 2016 Occupational safety and health organizations Asbestos International professional associations Occupational health psychology
International Commission on Occupational Health
Environmental_science
2,267
10,804,088
https://en.wikipedia.org/wiki/Natural%20magic
Natural magic in the context of Renaissance magic is that part of the occult which deals with natural forces directly, as opposed to ceremonial magic which deals with the summoning of spirits. Natural magic sometimes makes use of physical substances from the natural world such as stones or herbs. Natural magic so defined includes astrology, alchemy, and disciplines that we would today consider fields of natural science, such as astronomy and chemistry (which developed and diverged from astrology and alchemy, respectively, into the modern sciences they are today) or botany (from herbology). The Jesuit scholar Athanasius Kircher wrote that "there are as many types of natural magic as there are subjects of applied sciences". Heinrich Cornelius Agrippa discusses natural magic in his Three Books of Occult Philosophy (1533), where he calls it "nothing else but the highest power of natural sciences". The Italian Renaissance philosopher Giovanni Pico della Mirandola, who founded the tradition of Christian Kabbalah, argued that natural magic was "the practical part of natural science" and was lawful rather than heretical. See also References Further reading External links History of science Renaissance Magic (supernatural)
Natural magic
Technology
240
769,604
https://en.wikipedia.org/wiki/Barco%20%28manufacturer%29
Barco NV is a Belgian technology company that specializes in digital projection and imaging technology, focusing on three core markets: entertainment, enterprise, and healthcare. It has employees located in 90 countries. The company has 400 granted patents. Barco is headquartered in Kortrijk, Belgium, and has its own facilities for Sales & Marketing, Customer Support, R&D and Manufacturing in Europe, North America and Asia-Pacific. Shares of Barco are listed on Euronext Brussels. It has a market cap of around €900 million (December 2024). Barco sells its ClickShare products to enable wireless projection from sender devices to receiver displays. History Barco is an acronym that originally stood for Belgian American Radio Corporation. Barco was founded in 1934 in the town of Poperinge, in the Flemish-speaking region of Belgium. Founder Lucien de Puydt's initial business was to assemble radios from parts imported from the United States – hence the name of his company, the Belgium American Radio Corporation, or "Barco". Radio pioneer Camiel Descamps gave the company a new start in 1941 in Kortrijk after founder Lucien de Puydt died. His wife Maria-Anna Reyntjens and his brother-in-law Joseph Versavel assisted him. Later, Elie Timmerman also joined them. Starting from their office in Kortrijk, the company grew and spread to some 90 countries across the globe. In 1949, Barco started developing a multi-standard television that accepted different signal standards, becoming a leader in that field. A jukebox called Barc-O-Matic was sold from 1951. In 1967, it was one of the first European companies to introduce color TV. Building on this, it then entered the professional broadcast market in the late 1960s, supplying TV monitors to broadcasters. From the 1960s onwards, Barco branched out into numerous other activities, which included mechanical components for industrial use, and quality control monitoring for the textile and plastics industries. In 1967, Barco became the first European manufacturer to produce transistor-based portable televisions. Barco first entered projection technology in 1979, when it pioneered the development of cathode ray tube (CRT) projection aboard airplanes. Over the following years, it gradually focused solely on professional markets. In the mid-1980s, Barco became a main projection technology supplier for computer giants IBM, Apple and Hewlett-Packard. In the late 1980s, it entered the Brussels stock market. By 1991, Barco's market share in the graphics projection market alone reached 75%, and the company had established offices across the world, including regional headquarters in the United States and East Asia. Through the 1990s and the first decade of the new millennium, Barco developed and marketed new display technologies such as liquid crystal display (LCD), light-emitting diodes (LED), Texas Instruments' Digital Light Processing (DLP), and later, liquid crystal on silicon (LCoS). It now covers markets that include media and entertainment, security and monitoring, medical imaging, avionics, 3D and virtual reality, digital cinema, traffic control, broadcast and training and simulation. In 2018, Barco entered into a joint venture with China Film Group Corporation (CFG), Appotronics and CITICPE to commercialize each company's products and services for the global cinema market excluding mainland China: Cinionic. In Barco's case, this involved the company's cinema projectors.
Sustainability Barco's Corporate Sustainability Committee, consisting of 13 members under the leadership of Filip Pintelon, devises the overall sustainability strategy. Under the label "Barco 2020", Barco is currently developing a sustainability plan encompassing three pillars: their people, their community and the planet. A few achievements:
Supplier Sustainability program
Health and Safety Arrangements
Compliance
Acquisitions
In 1989, Barco acquired EMT, a manufacturer of phonograph turntables and professional audio equipment, which became Barco-EMT.
Barco Graphics was the graphics division of the Belgian Barco Group. It was the result of the 1989 merger of Digitized Information Systems Corporation (D.I.S.C.), Aesthedes and Barco's own "Creative Systems" group.
In 1995, Barco acquired Elbicon, a manufacturer of inspection systems for the food processing industry.
In 1997, Barco acquired Electronic Image Systems (EIS), a manufacturer of CRT projectors for the flight simulation market.
In 1998, Barco acquired Dr. Seufert, a manufacturer of visual sub-systems for integration in process control rooms.
In 1998, Barco ETS (now Ucamco) acquired Gerber Systems Corp., a manufacturer of plotters and automatic optical inspection (AOI) systems for printed circuit boards.
In 1999, it acquired Metheus Corporation, a Tektronix spin-off and manufacturer of professional graphics controllers.
In 2000, it acquired Texen, a subsidiary of Thales (ex-Thomson).
In 2004, it acquired Voxar, a 3D medical imaging software company.
In 2004, it acquired Folsom Research, Inc., whose product lines cover image processing, image communication and image functionality & interactivity.
In 2008, it acquired High End Systems, an automated luminaires, digital lighting and lighting controls company.
In 2010, it acquired Fimi S.r.l., a company specialized in medical image displays, from Philips (no relationship with Fimi, Federazione Industria Musicale Italiana).
In 2010, it acquired all intellectual property of Element Labs, a manufacturer of LED equipment.
In 2010, it acquired dZine, a Belgium-based company specialized in digital signage.
In 2012, it acquired IP Video Systems, a company specialized in networked visualization.
In 2012, it acquired JAOtech, a manufacturer of patient entertainment and point-of-care terminals for hospitals.
In 2013, it acquired AWIND, a manufacturer of hardware and software for wireless presentation systems.
In 2013, it acquired projectiondesign, a manufacturer of projection technology.
In 2014, it acquired X20 Media Inc., an enterprise communication specialist.
In 2014, it acquired IOSONO GmbH, a 3D audio expert.
In 2015, it acquired Advan Int'l Corp., a manufacturer of LCD displays.
In 2016, it acquired Medialon Inc., a US-based company.
In 2016, it acquired MTT Innovation Inc., a Canadian developer of next-generation projection technology (HDR).
In 2019, it acquired a 5% stake in Unilumin group, a China-based LED manufacturer.
Divestments
In 2000, Barco split into Barco New and BarcoNet; BarcoNet was taken over by Scientific-Atlanta in 2001, which became its sole owner after a buy-out of the remaining shareholders in 2002.
In 2001, Barco Graphics was acquired by Danish Purup-Eskofot and renamed Esko-Graphics, which was again renamed Esko in 2006.
In 2003, Barco sold Barco-EMT, including trademarks, to Walter Derrer. The company is continued as EMT Studiotechnik GmbH.
In 2004, Barco sold its Dotrix activity to Agfa–Gevaert.
In 2007, BarcoVision was acquired by Itema Group from Bergamo, Italy.
In 2008, Barco sold its Advanced Visualisation (AVIS) group (the previously acquired Voxar) to Toshiba Medical (TMSC).
In 2014, Barco divested Barco Orthogon, a wholly owned subsidiary of Barco NV based in Bremen, Germany, to Exelis.
In 2015, Barco's Defense & Aerospace division was sold to US-based Esterline Technologies Corporation.
In 2017, Barco sold High End Systems to Electronic Theatre Controls.
In 2018, Barco's X20 Media was sold to Stratacache.
In 2018, Barco divested the wholly owned subsidiary Barco Silex, which became Silex Inside.
Management
CEO 2024 An Steegen
CEO 2021-2024 Charles Beauduin & An Steegen
CEO 2016-2021 Jan De Witte
CEO 2008–2016 Eric Van Zele
CEO 2002–2008 Martin De Prycker
CEO 2002 Hugo Vandamme
Active markets
Entertainment: cinema, venues and hospitality
Enterprise: control rooms, education and meeting rooms
Healthcare: diagnostic, surgical and clinical imaging
See also Barco Creator Barco Escape References External links Belgian companies established in 1934 Manufacturing companies established in 1934 Technology companies established in 1934 Information technology companies of Belgium Display technology companies Film and video technology Medical imaging Companies listed on Euronext Brussels Companies based in West Flanders Radio manufacturers Belgian brands
Barco (manufacturer)
Engineering
1,802
71,739,731
https://en.wikipedia.org/wiki/Countable%20Borel%20relation
In descriptive set theory, specifically invariant descriptive set theory, countable Borel relations are a class of relations between standard Borel spaces which are particularly well behaved. This concept encapsulates various more specific concepts, such as that of a hyperfinite equivalence relation, but is of interest in and of itself. Motivation A main area of study in invariant descriptive set theory is the relative complexity of equivalence relations. An equivalence relation $E$ on a set $X$ is considered more complex than an equivalence relation $F$ on a set $Y$ if one can "compute $F$ using $E$": formally, if there is a function $f\colon Y \to X$ which is well behaved in some sense (for example, one often requires that $f$ is Borel measurable) such that $y_1 \mathrel{F} y_2$ if and only if $f(y_1) \mathrel{E} f(y_2)$ for all $y_1, y_2 \in Y$. Such a function is called a reduction of $F$ to $E$. If this holds in both directions, so that one can both "compute $E$ using $F$" and "compute $F$ using $E$", then $E$ and $F$ have a similar level of complexity. When one talks about Borel equivalence relations and requires the reduction $f$ to be Borel measurable, the reducibility of $F$ to $E$ is often denoted by $F \leq_B E$. Countable Borel equivalence relations, and relations of similar complexity in the sense described above, appear in various places in mathematics (see the examples below). In particular, the Feldman–Moore theorem described below proved useful in the study of certain von Neumann algebras. Definition Let $X$ and $Y$ be standard Borel spaces. A countable Borel relation between $X$ and $Y$ is a subset $R$ of the cartesian product $X \times Y$ which is a Borel set (as a subset in the product topology) and satisfies that for any $x \in X$, the section $R_x = \{ y \in Y : (x, y) \in R \}$ is countable. Note that this definition is not symmetric in $X$ and $Y$, and thus it is possible that a relation $R$ is a countable Borel relation between $X$ and $Y$ but the converse relation $R^{-1} = \{ (y, x) : (x, y) \in R \}$ is not a countable Borel relation between $Y$ and $X$. Examples
A countable union of countable Borel relations is also a countable Borel relation.
The intersection of a countable Borel relation with any Borel subset of $X \times Y$ is a countable Borel relation.
If $f\colon X \to Y$ is a function between standard Borel spaces, the graph of the function is a countable Borel relation between $X$ and $Y$ if and only if $f$ is Borel measurable (this is a consequence of the Luzin–Suslin theorem and the fact that each section of the graph is a single point). The converse relation of the graph, $\{ (y, x) : f(x) = y \}$, is a countable Borel relation if and only if $f$ is Borel measurable and has countable fibers.
If $E$ is an equivalence relation on a standard Borel space, it is a countable Borel relation if and only if it is a Borel set and all equivalence classes are countable. In particular, hyperfinite equivalence relations are countable Borel relations.
The equivalence relation induced by the continuous action of a countable group is a countable Borel relation. As a concrete example, let $X$ be the set of subgroups of $F_2$, the free group of rank 2, with the topology generated by basic open sets of the form $\{ H \leq F_2 : g \in H \}$ and $\{ H \leq F_2 : g \notin H \}$ for some $g \in F_2$ (this is the product topology on $2^{F_2}$). The equivalence relation of conjugacy, under which $H$ and $K$ are equivalent when $K = g H g^{-1}$ for some $g \in F_2$, is then a countable Borel relation.
Let $X = 2^{\mathbb{N}}$ be the space of subsets of the naturals, again with the product topology (a basic open set is of the form $\{ A \subseteq \mathbb{N} : n \in A \}$ or $\{ A \subseteq \mathbb{N} : n \notin A \}$ for some $n$); this is known as the Cantor space. The equivalence relation of Turing equivalence is a countable Borel equivalence relation.
The isomorphism equivalence relations between various classes of models, while not being countable Borel equivalence relations, are of similar complexity to a countable Borel equivalence relation in the sense described above. Examples include the class of countable graphs where the degree of each vertex is finite, and the class of field extensions of finite transcendence degree over the rationals.
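For reference, the two central notions above can also be written in display form. The following LaTeX restatement uses standard notation reconstructed from the surrounding text:

\[
F \leq_B E \iff \exists\, f\colon Y \to X \ \text{Borel such that}\ y_1 \mathrel{F} y_2 \Leftrightarrow f(y_1) \mathrel{E} f(y_2) \ \text{for all}\ y_1, y_2 \in Y,
\]
\[
R \subseteq X \times Y \ \text{is a countable Borel relation} \iff R \ \text{is Borel and}\ R_x = \{\, y \in Y : (x, y) \in R \,\} \ \text{is countable for every}\ x \in X.
\]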
The Luzin–Novikov theorem This theorem, named after Nikolai Luzin and his doctoral student Pyotr Novikov, is an important result used in many proofs about countable Borel relations. Theorem. Suppose $X$ and $Y$ are standard Borel spaces and $R$ is a countable Borel relation between $X$ and $Y$. Then the projection $\operatorname{proj}_X(R) = \{ x \in X : (x, y) \in R \text{ for some } y \in Y \}$ is a Borel subset of $X$. Furthermore, there is a Borel function $f\colon \operatorname{proj}_X(R) \to Y$ (known as a Borel uniformization) such that the graph of $f$ is a subset of $R$. Finally, there exist Borel subsets $X_n$ of $X$ and Borel functions $f_n\colon X_n \to Y$ such that $R$ is the union of the graphs of the $f_n$, that is, $R = \bigcup_n \operatorname{graph}(f_n)$. This has a couple of easy consequences:
If $f\colon X \to Y$ is a Borel measurable function with countable fibers, the image of $f$ is a Borel subset of $Y$ (since the image is exactly $\operatorname{proj}_Y(R)$, where $R = \{ (y, x) : f(x) = y \}$ is the converse relation of the graph of $f$).
Assume $E$ is a Borel equivalence relation on a standard Borel space $X$ which has countable equivalence classes. Assume $A$ is a Borel subset of $X$. Then the saturation $[A]_E = \{ x \in X : x \mathrel{E} a \text{ for some } a \in A \}$ is also a Borel subset of $X$ (since this is precisely $\operatorname{proj}_X(R)$, where $R = E \cap (X \times A)$, and $R$ is a countable Borel relation).
Below are two more results which can be proven using the Luzin–Novikov theorem, concerning countable Borel equivalence relations: Feldman–Moore theorem The Feldman–Moore theorem, named after Jacob Feldman and Calvin C. Moore, states: Theorem. Suppose $E$ is a Borel equivalence relation on a standard Borel space $X$ which has countable equivalence classes. Then there exists a countable group $G$ and an action of $G$ on $X$ such that for every $g \in G$ the function $x \mapsto g \cdot x$ is Borel measurable, and for any $x \in X$, the equivalence class of $x$ with respect to $E$ is exactly the orbit of $x$ under the action. That is to say, countable Borel equivalence relations are exactly those generated by Borel actions of countable groups. Marker lemma This lemma is due to Theodore Slaman and John R. Steel, and can be proven using the Feldman–Moore theorem: Lemma. Suppose $E$ is a Borel equivalence relation on a standard Borel space $X$ which has countable equivalence classes. Let $B = \{ x \in X : [x]_E \text{ is infinite} \}$. Then there is a decreasing sequence of Borel sets $S_0 \supseteq S_1 \supseteq \cdots$ such that every $S_n$ meets each equivalence class contained in $B$ and $\bigcap_n S_n = \varnothing$. Less formally, the lemma says that the infinite equivalence classes can be approximated by "arbitrarily small" sets (for instance, if we have a Borel probability measure $\mu$ on $X$, the lemma implies that $\mu(S_n) \to 0$ by the continuity of the measure). References Descriptive set theory Binary relations
Countable Borel relation
Mathematics
1,207
25,034,689
https://en.wikipedia.org/wiki/Ulrike%20and%20Eamon%20Compliant
Ulrike and Eamon Compliant is a work by Blast Theory that premiered at the 53rd Venice Biennale in June 2009, commissioned by the De La Warr Pavilion and supported by Arts Council England. The work is based on the lives of Ulrike Meinhof (Red Army Faction) and Eamon Collins (Irish Republican Army). Having chosen to be Eamon or Ulrike, participants walk through the city receiving mobile phone calls. Exploring subjectivities and political obligations, the work culminates in an interview with the artists in a hidden room. Mixing locative media, live performance, and interactivity, the work counterpoints the context of Venice with Berlin in the 1970s and Northern Ireland in the 1980s. A book with a foreword by Alan Haydon, Director of the De La Warr Pavilion, featuring an essay by the artist Richard Grayson, was first published in November 2009. Ulrike and Eamon Compliant won "Best Real-World Game" at the 2010 International Mobile Gaming Awards on 15 February at Mobile World Congress in Barcelona. References External links Ulrike and Eamon Compliant on Vimeo DLWP Blast Theory Arts Council England Digital media Computer art English contemporary works of art 2009 works 2009 in Italy
Ulrike and Eamon Compliant
Technology
251
4,746,146
https://en.wikipedia.org/wiki/Pentetic%20acid
Pentetic acid or diethylenetriaminepentaacetic acid (DTPA) is an aminopolycarboxylic acid consisting of a diethylenetriamine backbone with five carboxymethyl groups. The molecule can be viewed as an expanded version of EDTA and is used similarly. It is a white solid with limited solubility in water. Coordination properties The conjugate base of DTPA has a high affinity for metal cations. Thus, the penta-anion DTPA5− is potentially an octadentate ligand, assuming that each nitrogen centre and each –COO− group counts as a centre for coordination. The formation constants for its complexes are about 100 times greater than those for EDTA. As a chelating agent, DTPA wraps around a metal ion by forming up to eight bonds. Its complexes can also have an extra water molecule that coordinates the metal ion. Transition metals, however, usually form fewer than eight coordination bonds. So, after forming a complex with a metal, DTPA still has the ability to bind to other reagents, as is shown by its derivative pendetide. For example, in its complex with copper(II), DTPA binds in a hexadentate manner utilizing the three amine centres and three of the five carboxylates. Chelating applications Like the more common EDTA, DTPA is predominantly used as a chelating agent for complexing and sequestering metal ions. DTPA has been considered for treatment of radioactive materials such as plutonium, americium, and other actinides. In theory, these complexes are more apt to be eliminated in urine. It is normally administered as the calcium or zinc salt (Ca- or Zn-DTPA), since these ions are readily displaced by more highly charged cations, and mainly to avoid depleting them in the organism. DTPA forms complexes with thorium(IV), uranium(IV), neptunium(IV), and cerium(III/IV). In August 2004, the US Food and Drug Administration (USFDA) determined zinc-DTPA and calcium-DTPA to be safe and effective for treatment of those who have breathed in or otherwise been contaminated internally by plutonium, americium, or curium. The recommended treatment is an initial dose of calcium-DTPA, as this salt of DTPA has been shown to be more effective in the first 24 hours after internal contamination by plutonium, americium, or curium. After that time has elapsed, both calcium-DTPA and zinc-DTPA are similarly effective in reducing internal contamination with plutonium, americium or curium, and zinc-DTPA is less likely to deplete the body's normal levels of zinc and other metals essential to health. Each drug can be administered by nebulizer for those who have breathed in contamination, and by intravenous injection for those contaminated by other routes. DTPA is also used as an MRI contrast agent. DTPA improves the resolution of magnetic resonance imaging (MRI) by forming a soluble complex with a gadolinium (Gd3+) ion, which alters the magnetic resonance behavior of the protons of the nearby water molecules and increases the image contrast. DTPA in the form of an iron(II) chelate (Fe-DTPA, 10–11 wt.%) is also used as a fertilizer for aquarium plants. The more soluble form of iron, Fe(II), is a micronutrient needed by aquatic plants. By binding Fe2+ ions, DTPA prevents their precipitation as poorly soluble oxy-hydroxides (Fe(OH)3 or Fe2O3 · n H2O) after their oxidation by dissolved oxygen. It increases the solubility of Fe2+ and Fe3+ ions in water, and therefore the bioavailability of iron for aquatic plants. It thus helps maintain iron in dissolved form (probably a mix of Fe(II) and Fe(III) DTPA complexes) in the water column.
It is unclear to what extent DTPA really contributes to protecting dissolved Fe2+ against air oxidation, and whether the Fe(III)-DTPA complex might also be directly assimilated by aquatic plants simply because of its enhanced solubility. Under natural conditions, i.e., in the absence of complexing DTPA, Fe2+ is more easily assimilated by most organisms, because of its 100-fold higher solubility than that of Fe3+. In pulp and paper mills, DTPA is also used to remove dissolved ferrous and ferric ions (and other redox-active metal ions, such as Mn or Cu) that otherwise would accelerate the catalytic decomposition of hydrogen peroxide (H2O2 reduction by Fe2+ ions according to the Fenton reaction mechanism). This helps preserve the oxidation capacity of the hydrogen peroxide stock, which is used as an oxidizing agent to bleach pulp in the chlorine-free process of paper making. Several thousand tons of DTPA are produced annually for this purpose in order to limit the non-negligible losses of H2O2 by this mechanism. DTPA's chelating properties are also useful for deactivating calcium and magnesium ions in hair products. DTPA is used in over 150 cosmetic products. Biochemistry DTPA is more effective than EDTA at deactivating redox-active metal ions such as Fe(II)/(III), Mn(II)/(IV) and Cu(I)/(II), which perpetuate oxidative damage induced in cells by superoxide and hydrogen peroxide. DTPA is also used in bioassays involving redox-active metal ions. Environmental impact An unexpected negative environmental impact of chelating agents such as DTPA is their toxicity for the activated sludges in the treatment of Kraft pulping effluents. Most of the worldwide DTPA production (several thousand tons) is intended to avoid hydrogen peroxide decomposition by redox-active iron and manganese ions in the chlorine-free Kraft pulping processes (total chlorine free (TCF) and elemental chlorine free (ECF) processes). DTPA decreases the biological oxygen demand (BOD) of activated sludges and therefore their microbial activity. Related compounds Compounds that are structurally related to DTPA are used in medicine, taking advantage of the high affinity of the triaminopentacarboxylate scaffold for metal ions. In ibritumomab tiuxetan, the chelator tiuxetan is a modified version of DTPA whose carbon backbone contains an isothiocyanatobenzyl and a methyl group. In capromab pendetide and satumomab pendetide, the chelator pendetide (GYK-DTPA) is a modified DTPA containing a peptide linker used to connect the chelate to an antibody. Pentetreotide is a modified DTPA attached to a peptide segment. DTPA and derivatives are used to chelate gadolinium to form an MRI contrast agent, such as Magnevist. Technetium-99m is chelated with DTPA for ventilation perfusion (V/Q) scans and radioisotope renography nuclear medicine scans. See also Nuclear medicine Radiopharmaceutical Hydrogen peroxide decomposition DTPA in chlorine-free Kraft pulping References This article incorporates material from Facts about DTPA, a fact sheet produced by the United States Centers for Disease Control and Prevention. Chelating agents Acetic acids Tertiary amines Nuclear medicine Magnetic resonance imaging
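As a rough numerical illustration of what relative formation constants mean in practice, the uncomplexed metal fraction for a 1:1 complex M + L ⇌ ML can be compared for two ligands. The factor of about 100 quoted under "Coordination properties" is taken at face value here, and the absolute constant and ligand concentration below are arbitrary assumptions for illustration, not measured values:

# Toy 1:1 complexation model: M + L <=> ML, with K = [ML] / ([M][L]).
# With the ligand in large excess at free concentration L_free,
# the uncomplexed fraction of metal is 1 / (1 + K * L_free).

def free_metal_fraction(K, L_free):
    """Fraction of metal left uncomplexed for a 1:1 complex."""
    return 1.0 / (1.0 + K * L_free)

K_edta = 1e18          # hypothetical formation constant (illustrative only)
K_dtpa = 100 * K_edta  # "about 100 times greater", as stated above
L_free = 1e-6          # assumed free ligand concentration, mol/L

for name, K in [("EDTA-like", K_edta), ("DTPA-like", K_dtpa)]:
    print(name, free_metal_fraction(K, L_free))
# The 100-fold larger constant leaves about 100 times less free metal ion.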
Pentetic acid
Chemistry
1,594
23,696,223
https://en.wikipedia.org/wiki/A-77636
A-77636 is a synthetic drug which acts as a selective D1 receptor full agonist. It has nootropic, anorectic, rewarding and antiparkinsonian effects in animal studies, but its high potency and long duration of action cause D1 receptor downregulation and tachyphylaxis, and unlike other D1 full agonists such as SKF-82,958, it does not produce place preference in animals. A-77636 partially substituted for cocaine in animal studies, and has been suggested for use as a possible substitute drug in treating addiction, but it is better known for its use in studying the role of D1 receptors in the brain. References D1-receptor agonists Adamantanes Isochromenes Catechols
A-77636
Chemistry
166
25,056,220
https://en.wikipedia.org/wiki/Promoter%20based%20genetic%20algorithm
The promoter based genetic algorithm (PBGA) is a genetic algorithm for neuroevolution developed by F. Bellas and R.J. Duro in the Integrated Group for Engineering Research (GII) at the University of Coruña, in Spain. It evolves variable-size feedforward artificial neural networks (ANN) that are encoded into sequences of genes for constructing a basic ANN unit. Each of these blocks is preceded by a gene promoter acting as an on/off switch that determines whether that particular unit will be expressed or not. PBGA basics The basic unit in the PBGA is a neuron with all of its inbound connections. The genotype of a basic unit is a set of real-valued weights followed by the parameters of the neuron, preceded by an integer-valued field that determines the promoter gene value and, consequently, the expression of the unit. By concatenating units of this type we can construct the whole network. This encoding ensures that information that is not expressed is still carried by the genotype during evolution but is shielded from direct selective pressure, thereby maintaining diversity in the population, which has been a design premise for this algorithm. Therefore, a clear difference is established between the search space and the solution space, permitting information learned and encoded into the genotypic representation to be preserved by disabling promoter genes. Results The PBGA was originally presented within the field of autonomous robotics, in particular for the real-time learning of environment models by a robot. It has been used inside the Multilevel Darwinist Brain (MDB) cognitive mechanism developed in the GII for on-line learning in real robots. Another paper shows how the application of the PBGA, together with an external memory that stores the successfully obtained world models, is an optimal strategy for adaptation in dynamic environments. Recently, the PBGA has provided results that outperform other neuroevolutionary algorithms in non-stationary problems, where the fitness function varies in time. References External links Grupo Integrado de Ingeniería Francisco Bellas’ website Richard J. Duro’s website Artificial neural networks Genetic algorithms
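The cited papers describe the encoding at the level above without giving source code, so the following Python sketch is only an illustration of the idea: a genotype of units, each carrying a promoter bit, a neuron parameter and inbound weights, where silenced units are carried along but do not affect the phenotype. The tanh activation, the single hidden layer and all parameter ranges are assumptions for the sketch, not details of the published PBGA:

import math, random

# One basic unit: a promoter on/off gene, the neuron's parameter (here a
# bias), and the weights of all inbound connections of that neuron.
def random_unit(n_inputs):
    return {
        "promoter": random.randint(0, 1),            # 1 = unit is expressed
        "bias": random.uniform(-1.0, 1.0),           # neuron parameter
        "weights": [random.uniform(-1.0, 1.0) for _ in range(n_inputs)],
    }

# A genotype is a concatenation of such units; silenced units are carried
# along (and can mutate) but do not contribute to the network's output.
def decode(genotype, inputs):
    """Evaluate the expressed single-hidden-layer feedforward network."""
    hidden = [
        math.tanh(u["bias"] + sum(w * x for w, x in zip(u["weights"], inputs)))
        for u in genotype if u["promoter"] == 1
    ]
    return sum(hidden)  # assumed linear output neuron

# Mutating a promoter toggles expression without destroying the unit's
# weights, which is how unexpressed information survives selection.
def toggle_promoter(genotype, i):
    genotype[i]["promoter"] ^= 1

genotype = [random_unit(n_inputs=3) for _ in range(5)]
print(decode(genotype, [0.5, -0.2, 0.9]))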
Promoter based genetic algorithm
Biology
454
987,546
https://en.wikipedia.org/wiki/Harmattan
The Harmattan is a season in West Africa that occurs between the end of November and the middle of March. It is characterized by the dry and dusty northeasterly trade wind, of the same name, which blows from the Sahara over West Africa into the Gulf of Guinea. The name is related to a word in the Twi language. The temperature is cold mostly at night in some places but can be very hot in certain places during daytime. Generally, temperature differences can also depend on local circumstances. The Harmattan blows during the dry season, which occurs during the months with the lowest sun. In this season, the subtropical ridge of high pressure stays over the central Sahara and the low-pressure Intertropical Convergence Zone (ITCZ) stays over the Gulf of Guinea. On its passage over the Sahara, the Harmattan picks up fine dust and sand particles (between 0.5 and 10 microns). It is also known as the "doctor wind", because of its invigorating dryness compared with humid tropical air. Effects This season differs from winter because it is characterized by cold, dry, dust-laden wind, and also wide fluctuations in the ambient temperatures of the day and night. Temperatures can easily be as low as all day, but sometimes in the afternoon the temperature can also soar to as high as , while the relative humidity drops under 5%. It can also be hot in some regions, like in the Sahara. The air is particularly dry and desiccating when the Harmattan blows over the region. The Harmattan brings desert-like weather conditions: it lowers the humidity, dissipates cloud cover, prevents rainfall formation and sometimes creates big clouds of dust which can result in dust storms or sandstorms. The wind can increase fire risk and cause severe crop damage. The interaction of the Harmattan with monsoon winds can cause tornadoes. Harmattan haze In some countries in West Africa, the heavy amount of dust in the air can severely limit visibility and block the sun for several days, comparable to a heavy fog. This effect is known as the Harmattan haze. It costs airlines millions of dollars in cancelled and diverted flights each year. When the haze is weak, the skies are clear. The extreme dryness of the air may cause branches of trees to die. Health A 2024 study found that dust carried by the Harmattan increases infant and child mortality and has persistent adverse health impacts on surviving children. Humidity can drop lower than 15%, which can result in spontaneous nosebleeds for some people. Other health effects on humans may include dryness of the skin, dried or chapped lips, and irritation of the eyes and respiratory system, including aggravation of asthma. See also Khamsin References External links Cold Geography of Ghana Geography of Nigeria Geography of West Africa Seasons Winds
Harmattan
Physics
583
20,684,184
https://en.wikipedia.org/wiki/Davicil
Davicil is a chlorinated pyridine derivative with antimicrobial properties, which is used as a fungicide. It can be allergenic in humans and produce contact dermatitis. References Pyridines Fungicides Sulfones Chloroarenes
Davicil
Chemistry,Biology
61
12,231,320
https://en.wikipedia.org/wiki/Sarcospan
Sarcospan is a protein that in humans is encoded by the SSPN gene. Originally identified as Kirsten ras associated gene (KRAG), sarcospan is a 25-kDa transmembrane protein located in the dystrophin-associated protein complex of skeletal muscle cells, where it is most abundant. It contains four transmembrane spanning helices with both N- and C-terminal domains located intracellularly. Loss of SSPN expression occurs in patients with Duchenne muscular dystrophy. Dystrophin is required for proper localization of SSPN. SSPN is also an essential regulator of Akt signaling pathways. Without SSPN, Akt signaling pathways will be hindered and muscle regeneration will not occur. Function Sarcospan is a protein that plays a crucial role in muscle health and function. It is part of the dystrophin-associated glycoprotein complex (DGC), which is a protein complex found in muscle cells that helps to maintain the structural integrity of muscle fibers. Sarcospan interacts with other proteins in the DGC, and mutations in the gene that encodes sarcospan can lead to muscular dystrophy, a group of genetic disorders characterized by progressive muscle weakness and degeneration. Sarcospan has multiple functions within the DGC that contribute to its role in muscle health. The DGC is a complex of proteins that spans the cell membrane of muscle cells and links the extracellular matrix to the intracellular cytoskeleton, providing stability and integrity to the muscle fiber. Sarcospan is one of the components of the DGC and interacts with other proteins in the complex, including dystrophin, syntrophins, and dystroglycans. One of the key functions of sarcospan is to help stabilize the DGC and promote its proper localization at the muscle cell membrane. Sarcospan interacts with dystroglycans, which are transmembrane proteins that connect the DGC to the extracellular matrix. This interaction helps to anchor the DGC to the muscle cell membrane and contributes to the overall stability of the muscle fiber. Additionally, sarcospan interacts with syntrophins, which are adapter proteins that link the DGC to the actin cytoskeleton inside the muscle cell. This interaction helps to maintain the structural integrity of the muscle fiber and is important for muscle contraction and force generation. Cell signaling Sarcospan also plays a role in signaling pathways that are involved in muscle growth and regeneration. Studies have shown that sarcospan can regulate the activity of certain signaling molecules, such as focal adhesion kinase (FAK), which is involved in cell adhesion and migration. Sarcospan has been implicated in the regulation of muscle stem cells, known as satellite cells, which are responsible for muscle regeneration after injury or damage. Sarcospan has been shown to modulate satellite cell activation and migration, suggesting that it may have a role in muscle repair and regeneration processes. Sarcospan is primarily localized to the muscle cell membrane, specifically at the neuromuscular junction (NMJ) and the sarcolemma, which is the plasma membrane of muscle cells. The NMJ is the specialized synapse between the motor neuron and the muscle fiber, where nerve impulses are transmitted to the muscle to initiate contraction. The DGC, including sarcospan, is enriched at the NMJ, where it plays a critical role in maintaining the integrity of the muscle membrane and ensuring proper neuromuscular signaling. 
In addition to the NMJ, sarcospan is also localized along the sarcolemma, which is the continuous plasma membrane that surrounds the entire muscle fiber. Sarcospan is distributed in a striated pattern along the sarcolemma, suggesting that it may have specific roles in different regions of the muscle fiber. The precise localization of sarcospan to the NMJ and the sarcolemma is important for its function in stabilizing the DGC and promoting muscle integrity. Mutations and diseases Mutations in the gene that encodes sarcospan have been implicated in the development of muscular dystrophy, which is a group of genetic disorders characterized by progressive muscle weakness and degeneration. Muscular dystrophy is caused by mutations in various genes that are involved in the structure and function of muscle, including dystrophin, which is a key component of the DGC that interacts with sarcospan. The loss of dystrophin results in muscular dystrophy. SSPN upregulates the levels of Utrophin-glycoprotein complex (UGC) to make up for the loss of dystrophin in the neuromuscular junction. Sarcoglycans bind to SSPN and form the SG-SSPN complex, which interacts with dystroglycans (DG) and Utrophin, leading to the formation of the UGC. SSPN regulates the amount of Utrophin produced by the UGC to restore laminin binding due to the absence of dystrophin. If laminin binding is not restored by SSPN, contraction of the membrane is present. In dystrophic mdx mice, SSPN increases levels of Utrophin and restores the levels of laminin binding, reducing the symptoms of muscular dystrophy. Research applications The study of sarcospan has important research applications that may contribute to the development of therapeutic interventions for muscular dystrophy and other muscle-related disorders. Therapeutic strategies The elucidation of the role of sarcospan in muscular dystrophy has led to the exploration of potential therapeutic strategies that target sarcospan or the DGC. For example, approaches aimed at restoring sarcospan expression or function have been investigated as potential therapeutic interventions for muscular dystrophy. Gene therapy techniques, such as viral-mediated gene delivery, have been explored to restore sarcospan expression in muscle cells, with promising results in preclinical studies. Additionally, gene editing technologies, such as CRISPR-Cas9, have been used to correct sarcospan mutations in muscle cells, offering potential gene-based therapeutic approaches for muscular dystrophy. Drug development Sarcospan has been considered as a potential target for drug development in the treatment of muscular dystrophy. Small molecule compounds that can modulate sarcospan function or stabilize the DGC have been explored as potential therapeutic agents. For example, studies have shown that targeting specific signaling pathways, such as the FAK pathway, which is regulated by sarcospan, can improve muscle function in animal models of muscular dystrophy.
Additionally, compounds that can enhance the stability or localization of the DGC, including sarcospan, have been investigated for their potential to ameliorate muscle membrane fragility and reduce muscle damage in muscular dystrophy. Biomarker development Sarcospan has been proposed as a potential biomarker for muscular dystrophy and other muscle-related disorders. Biomarkers are measurable indicators that can provide information about disease status, progression, and response to treatment. Sarcospan levels in blood or other biological samples may reflect the integrity of the DGC and muscle membrane, and changes in sarcospan levels may be indicative of disease progression or response to therapeutic interventions. Development of sarcospan as a biomarker may aid in diagnosis, prognosis, and monitoring of muscular dystrophy and other muscle-related disorders. Mechanistic studies Research on sarcospan has provided insights into the molecular mechanisms underlying muscle development, regeneration, and disease. Studies using animal models or cell culture systems have helped to elucidate the role of sarcospan in the stability and function of the DGC, its involvement in signaling pathways, and its contribution References Proteins
Sarcospan
Chemistry
1,712
36,722,513
https://en.wikipedia.org/wiki/Signatures%20with%20efficient%20protocols
Signatures with efficient protocols are a form of digital signature invented by Jan Camenisch and Anna Lysyanskaya in 2001. In addition to being secure digital signatures, they need to allow for the efficient implementation of two protocols: A protocol for computing a digital signature in a secure two-party computation protocol. A protocol for proving knowledge of a digital signature in a zero-knowledge protocol. In applications, the first protocol allows a signer who possesses the signing key to issue a signature to a user (the signature owner) without learning all the messages being signed or the complete signature. The second protocol allows the signature owner to prove that he has a signature on many messages, without revealing the signature itself and while revealing only a (possibly empty) subset of the messages. The combination of these two protocols allows for the implementation of digital credential and ecash protocols. See also Topics in cryptography References Further reading Jan Camenisch, Anna Lysyanskaya: A Signature Scheme with Efficient Protocols. SCN 2002: 268-289 Cryptography
Signatures with efficient protocols
Mathematics,Engineering
207
1,447,255
https://en.wikipedia.org/wiki/Widmanst%C3%A4tten%20pattern
Widmanstätten patterns, also known as Thomson structures, are figures of long phases of nickel–iron, found in the octahedrite shapes of iron meteorite crystals and some pallasites. Iron meteorites are very often formed from a single crystal of iron-nickel alloy, or sometimes a number of large crystals that may be many meters in size, and often lack any discernible crystal boundary on the surface. Large crystals are extremely rare in metals, and in meteorites they result from extremely slow cooling from a molten state in the vacuum of space when the solar system first formed. Once in the solid state, the slow cooling then allows the solid solution to precipitate a separate phase that grows within the crystal lattice, forming at very specific angles that are determined by the lattice. In meteorites, these interstitial defects can grow large enough to fill the entire crystal with needle- or ribbon-like structures easily visible to the naked eye, almost entirely consuming the original lattice. They consist of a fine interleaving of kamacite and taenite bands or ribbons called lamellae. Commonly, in gaps between the lamellae, a fine-grained mixture of kamacite and taenite called plessite can be found. Widmanstätten structures describe analogous features in modern steels, titanium, and zirconium alloys, but are usually microscopic in size. Discovery In 1808, these figures were observed by Count Alois von Beckh Widmanstätten, the director of the Imperial Porcelain works in Vienna. While flame heating iron meteorites, Widmanstätten noticed color and luster zone differentiation as the various iron alloys oxidized at different rates. He did not publish his findings, communicating them only orally to his colleagues. The discovery was acknowledged by Carl von Schreibers, director of the Vienna Mineral and Zoology Cabinet, who named the structure after Widmanstätten. However, it is now believed that the discovery of the metal crystal pattern should be assigned to the English mineralogist William (Guglielmo) Thomson, as he published the same findings four years earlier. Working in Naples in 1804, Thomson treated a Krasnojarsk meteorite with nitric acid to remove the dull patina caused by oxidation. Shortly after the acid made contact with the metal, strange figures appeared on the surface, which he detailed as described above. Civil wars and political instability in southern Italy made it difficult for Thomson to maintain contact with his colleagues in England. This was demonstrated in his loss of important correspondence when its carrier was murdered. As a result, in 1804, his findings were only published in French in the Bibliothèque Britannique. At the beginning of 1806, Napoleon invaded the Kingdom of Naples and Thomson was forced to flee to Sicily, and in November of that year he died in Palermo at the age of 46. In 1808, Thomson's work was again published posthumously in Italian (translated from the original English manuscript) in Atti dell'Accademia Delle Scienze di Siena. The Napoleonic wars obstructed Thomson's contacts with the scientific community and his travels across Europe; this, in addition to his early death, obscured his contributions for many years. Name The most common names for these figures are Widmanstätten pattern and Widmanstätten structure; however, there are some spelling variations:
Widmanstetter (proposed by Frederick C. Leonard)
Widmannstätten (used for example for the Widmannstätten lunar crater)
Widmanstatten (Anglicized)
Due to the discovery priority of G.
Thomson, several authors suggested calling these figures the Thomson structure or Thomson–Widmanstätten structure. Lamellae formation mechanism Iron and nickel form homogeneous alloys at temperatures below the melting point; these alloys are taenite. At temperatures below 900 to 600 °C (depending on the Ni content), two alloys with different nickel content are stable: kamacite with lower Ni content (5 to 15% Ni) and taenite with high Ni (up to 50%). Octahedrite meteorites have a nickel content intermediate between the norm for kamacite and taenite; under slow cooling conditions this leads to the precipitation of kamacite and growth of kamacite plates along certain crystallographic planes in the taenite crystal lattice. The formation of Ni-poor kamacite proceeds by diffusion of Ni in the solid alloy at temperatures between 450 and 700 °C, and can only take place during very slow cooling, about 100 to 10,000 °C/Myr, with total cooling times of 10 Myr or less. This explains why this structure cannot be reproduced in the laboratory. The crystalline patterns become visible when the meteorites are cut, polished, and acid-etched, because taenite is more resistant to the acid. The dimension of the kamacite lamellae ranges from coarsest to finest as the nickel content increases; this classification, based upon lamella size, is called the structural classification. Usage Since nickel-iron crystals grow to lengths of some centimeters only when the solid metal cools down at an exceptionally slow rate (over several million years), the presence of these patterns is strongly suggestive of extraterrestrial origin of the material, and can be used to indicate if a piece of iron may come from a meteorite. Preparation The methods used to reveal the Widmanstätten pattern on iron meteorites vary. Most commonly, the slice is ground and polished, cleaned, etched with a chemical such as nitric acid or ferric chloride, washed, and dried. Shape and orientation Cutting the meteorite along different planes affects the shape and direction of Widmanstätten figures because kamacite lamellae in octahedrites are precisely arranged. Octahedrites derive their name from the crystal structure paralleling an octahedron. Opposite faces are parallel so, although an octahedron has 8 faces, there are only 4 sets of kamacite plates. Iron and nickel-iron form crystals with an external octahedral structure only very rarely, but these orientations are still plainly detectable crystallographically without the external habit. Cutting an octahedrite meteorite along different planes (or any other material with octahedral symmetry, which is a sub-class of cubic symmetry) will result in one of these cases (a short numerical check follows this list):
perpendicular cut to one of the three (cubic) axes: two sets of bands at right angles to each other
parallel cut to one of the octahedron faces (cutting all 3 cubic axes at the same distance from the crystallographic center): three sets of bands running at 60° to each other
any other angle: four sets of bands with different angles of intersection
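The band angles listed above follow from the fact that the kamacite lamellae lie along the four {111} plane orientations of a cubic lattice, and each visible band runs along the intersection of its lamella plane with the cut plane. A minimal Python check (an illustration of the geometry only, not metallographic software):

import itertools
import numpy as np

# The four {111} lamella plane normals of a cubic (octahedral) lattice.
lamella_normals = [np.array(v, dtype=float) for v in
                   [(1, 1, 1), (1, 1, -1), (1, -1, 1), (-1, 1, 1)]]

def band_angles(cut_normal):
    """Angles (degrees) between the {111} plane traces on a given cut plane."""
    cut = np.asarray(cut_normal, dtype=float)
    # Each band direction is the line of intersection of a lamella plane
    # with the cut plane, i.e. the cross product of the two plane normals.
    dirs = [np.cross(n, cut) for n in lamella_normals]
    dirs = [d / np.linalg.norm(d) for d in dirs if np.linalg.norm(d) > 1e-9]
    angles = set()
    for a, b in itertools.combinations(dirs, 2):
        ang = np.degrees(np.arccos(np.clip(abs(np.dot(a, b)), 0.0, 1.0)))
        angles.add(round(ang, 1))
    return sorted(angles)

# Cut perpendicular to a cubic axis: the four planes project pairwise onto
# the same trace, leaving two band directions at right angles -> [0.0, 90.0]
print(band_angles((0, 0, 1)))
# Cut parallel to an octahedron face: three band directions at 60° -> [60.0]
print(band_angles((1, 1, 1)))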
Structures in non-meteoritic materials The term is also used on non-meteoritic material to indicate a structure with a geometrical pattern resulting from the formation of a new phase along certain crystallographic planes of the parent phase, such as the basketweave structure in some zirconium alloys. The Widmanstätten structures form due to the growth of new phases within the grain boundaries of the parent metals, generally increasing the hardness and brittleness of the metal. The structures form due to the precipitation of a single crystal phase into two separate phases. In this way, the Widmanstätten transformation differs from other transformations, such as a martensite or ferrite transformation. The structures form at very precise angles, which may vary depending on the arrangement of the crystal lattices. These are usually very small structures that must be viewed through a microscope, because a very slow cooling rate is generally needed to produce structures visible to the naked eye. However, they usually have a great and often undesirable effect on the properties of the alloy. Widmanstätten structures tend to form within a certain temperature range, growing larger over time. In carbon steel, for example, Widmanstätten structures form during tempering if the steel is held within a range around for long periods of time. These structures form as needle- or plate-like growths of cementite within the crystal boundaries of the martensite. This increases the brittleness of the steel in a way that can only be relieved by recrystallizing. Widmanstätten structures made from ferrite sometimes occur in carbon steel, if the carbon content is below but near the eutectoid composition (~ 0.8% carbon). This occurs as long needles of ferrite within the pearlite. Widmanstätten structures form in many other metals as well. They will form in brass, especially if the alloy has a very high zinc content, becoming needles of zinc in the copper matrix. The needles will usually form when the brass cools from the recrystallization temperature, and will become very coarse if the brass is annealed to for long periods. Telluric iron, which is an iron-nickel alloy very similar to meteorites, also displays very coarse Widmanstätten structures. Telluric iron is metallic iron, rather than an ore (in which iron is usually found), and it originated from the Earth rather than from space. Telluric iron is an extremely rare metal, found only in a few places in the world. Like meteorites, the very coarse Widmanstätten structures most likely develop through very slow cooling, except that the cooling occurred in the Earth's mantle and crust rather than in the vacuum and microgravity of space. Such patterns have also been seen in mulberry, a ternary uranium alloy, after aging at or below for periods of minutes to hours, which produces a monoclinic α phase. However, the appearance, the composition, and the formation process of these terrestrial Widmanstätten structures are different from the characteristic structure of iron meteorites. When an iron meteorite is forged into a tool or weapon, the Widmanstätten patterns remain but become stretched and distorted. The patterns usually cannot be fully eliminated by blacksmithing, even through extensive working. When a knife or tool is forged from meteoric iron and then polished, the patterns appear on the surface of the metal, albeit distorted, but they tend to retain some of the original octahedral shapes and the appearance of thin lamellae crisscrossing each other. See also Acicular ferrite Count Alois von Beckh Widmanstätten Glossary of meteoritics Meteorite References External links Widmannstätten figures on the Gibeon Iron-Meteorite Meteorite mineralogy and petrology Patterns Ferrous alloys Nickel alloys
Widmanstätten pattern
Chemistry
2,167
8,525,013
https://en.wikipedia.org/wiki/Zairja
A zairja (; also transcribed as zairjah, zairajah, zairdja, zairadja, and zayirga) was a device used by medieval Arab astrologers to generate ideas by mechanical means. The name may derive from a mixture of the Persian words zāycha ( "horoscope; astronomical table") and dāyra ( "circle"). Ibn Khaldun described zairja as: "a branch of the science of letter magic, practiced among the authorities on letter magic, is the technique of finding out answers from questions by means of connections existing between the letters of the expressions used in the question. They imagine that these connections can form the basis for knowing the future happenings they want to know." He suggests that rather than being supernatural it works "from an agreement in the wording of question and answer ... with the help of the technique called the technique of 'breaking down'" (i.e. algebra). By combining number values associated with the letters and categories, new paths of insight and thought were created. According to Ibn Khaldun, the most detailed treatment of it is a pseudepigraphical work, the Za'irajah of the World, attributed to as-Sabti, which contains operating instructions in hundreds of lines of verse. A manuscript in Rabat recounts Ibn Khaldun's introduction to the machine by Al-Marjānī in 1370 (772 AH), and claims that it was a traditional and ancient science. When Ibn Khaldun expressed skepticism, the pair asked the instrument how old it was, and the machine told them it was invented by the prophet Idris (identified with the Biblical Enoch). It has been suggested that the Catalan-Majorcan mystic Ramon Llull became familiar with the zairja in his travels and studies of Arab culture, and used it as a prototype for his invention of the Ars Magna. In "Scrambling T-R-U-T-H: Rotating Letters as a Material Form of Thought", David Link provides a clear description and a full history of the device with a representation of the Arabic letters involved. References External links An extract from Ibn Khaldun featuring illustrations of the dial of the zairja Arabic script Language and mysticism Objects used for divination History of astrology
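No reproducible algorithm for the device survives in the account above, so the following Python toy is purely illustrative of the stated principle of combining fixed number values attached to letters. The letter values are the standard abjad numerals (transliterated); the final reduction step is an invented placeholder, not a reconstruction of the historical procedure:

# Standard abjad numerical values of the 28 Arabic letters (transliterated).
ABJAD = {
    "alif": 1, "ba": 2, "jim": 3, "dal": 4, "ha": 5, "waw": 6, "zay": 7,
    "hha": 8, "tta": 9, "ya": 10, "kaf": 20, "lam": 30, "mim": 40,
    "nun": 50, "sin": 60, "ayn": 70, "fa": 80, "sad": 90, "qaf": 100,
    "ra": 200, "shin": 300, "ta": 400, "tha": 500, "kha": 600,
    "dhal": 700, "dad": 800, "zza": 900, "ghayn": 1000,
}
LETTERS = list(ABJAD)

def letter_value_sum(letters):
    """Sum the abjad values of a sequence of (transliterated) letters."""
    return sum(ABJAD[l] for l in letters)

def toy_zairja(question_letters):
    # Invented placeholder: fold the numeric total back into the 28-letter
    # alphabet, the kind of "breaking down" Ibn Khaldun alludes to.
    total = letter_value_sum(question_letters)
    return LETTERS[total % len(LETTERS)]

print(toy_zairja(["kaf", "ta", "ba"]))  # maps a "question" to a letter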
Zairja
Astronomy
476
41,676,644
https://en.wikipedia.org/wiki/Enduring%20Quests%20and%20Daring%20Visions
Enduring Quests and Daring Visions is a vision for astrophysics programs chartered by then-Director of NASA's Astrophysics Division, Paul Hertz, and released in late 2013. It lays out plans over 30 years, framed as long-term goals and missions. Goals include mapping the Cosmic Microwave Background, finding Earth-like exoplanets, going deeper into space-time by studying the large-scale structure of the Universe and extreme physics, and looking back farther in time. The panel that produced the vision included many notable American astrophysicists, including: Chryssa Kouveliotou, Eric Agol, Natalie Batalha, Misty Bentz, Alan Dressler, Scott Gaudi, Olivier Guyon, Enectali Figueroa-Feliciano, Feryal Ozel, Aki Roberge, Amber Straughn, and Joan Centrella. Examples of discussed missions include:
Astro-H (Hitomi)
Black Hole Mapper
CMB Polarization Surveyor
Cosmic Dawn
Euclid
ExoEarth Mapper
Gaia
Gravitational Wave Surveyor/Mapper
Habitable Exoplanet Imaging Mission (HabEx)
Far-Infrared Surveyor (later renamed the Origins Space Telescope)
JEM-EUSO
James Webb Space Telescope (JWST)
Large UV Optical Infrared Surveyor (LUVOIR)
Nancy Grace Roman Space Telescope
Neutron Star Interior Composition Explorer (NICER)
Transiting Exoplanet Survey Satellite (TESS)
X-Ray Surveyor (later renamed the Lynx X-ray Observatory)
References External links Enduring Quests and Daring Visions (NASA) (.pdf) Astrophysics 2013 in outer space
Enduring Quests and Daring Visions
Physics,Astronomy
326
887,461
https://en.wikipedia.org/wiki/Marianna%20Cs%C3%B6rnyei
Marianna Csörnyei (born October 8, 1975 in Budapest) is a Hungarian mathematician who works as a professor at the University of Chicago. She does research in real analysis, geometric measure theory, and geometric nonlinear functional analysis. She proved the equivalence of the zero-measure notions on infinite-dimensional Banach spaces. Education and career Csörnyei received her doctorate from Eötvös Loránd University in 1999, supervised by György Petruska. She was a professor in the Mathematics Department of University College London between 1999 and 2011, and spent the 2009–2010 academic year at Yale University as a visiting professor. Currently, she is at the University of Chicago. She is a contributing editor of the mathematical journal Real Analysis Exchange. Awards and honors Csörnyei won a 2002 Whitehead Prize from the London Mathematical Society and a Royal Society Wolfson Research Merit Award that same year. She was also awarded the Philip Leverhulme Prize for Mathematics and Statistics in 2008 for her work in geometric measure theory. She was an invited sectional speaker at the International Congress of Mathematicians in 2010. Csörnyei was selected to deliver the AWM-AMS Noether Lecture at the 2022 Joint Mathematics Meetings in Seattle, Washington. The title of her talk was The Kakeya needle problem for rectifiable sets. She is included in a deck of playing cards featuring notable women mathematicians published by the Association for Women in Mathematics. External links Csörnyei's faculty page at the University of Chicago References 1975 births Living people 21st-century Hungarian mathematicians 21st-century women mathematicians Mathematical analysts Academics of University College London Royal Society Wolfson Research Merit Award holders Whitehead Prize winners University of Chicago faculty Philip Leverhulme Prize winners
Marianna Csörnyei
Mathematics
347
13,851,221
https://en.wikipedia.org/wiki/Pin%20and%20hanger%20assembly
A pin and hanger assembly is used to connect two plate girders of a bridge. These assemblies are used to provide an expansion joint in the bridge. One beam (the anchor span) is set on a pier with a short section cantilevered out toward the next pier. The other (the suspended span) begins underneath the anchor span, and has its far end resting on the next pier. The beams have holes directly above each other. The two holes are connected using hangers, a pair of connecting plates sandwiching the bridge girders. A pair of large steel pins through the plates and girder webbing provide the hinges, holding up the suspended span while allowing it to move longitudinally. Large washers are bolted to each end of the pin to retain the hangers. Exceptionally long spans may have two sets of girders cantilevered from opposite bridge piers with a third set of girders suspended by pin and hanger assemblies from both cantilevers. Safety concerns Pin and hanger assemblies are considered fracture critical bridge components, meaning that the assemblies are non-redundant and failure of these systems could cause part or all of the bridge to collapse. The collapse of the Mianus River Bridge in Connecticut exposed potential flaws with pin and hanger bridges that could lead to catastrophic failures, if left unchecked. Because of this, state departments of transportation incur costly expenses on bridges with pin and hanger assemblies, as they require constant inspection and maintenance. As a result of these safety concerns, and advances in bridge design to allow longer spans, pin and hanger assemblies are no longer used on new bridges in the United States. Retrofitting Attempts have been made to increase the safety of bridges with pin and hanger assemblies by adding some form of redundancy to the assembly. Retrofits that add redundancy to pin and hanger assemblies include adding a "catcher's mitt", a short steel beam attached to the bottom of the cantilevered girder that extends out beneath the suspended girder to "catch" the suspended girder should the pin and hanger assembly fail. Another redundancy is connecting the cantilevered and suspended girders at the pin and hanger assembly with welded blocks and tie rods. Replacement options include bolted splices (gusset plates) plus shear connectors; a link slab; a ship-lap joint; and a new pin and hanger assembly with improved materials. References Structural connectors
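As a rough illustration of why a single hanger pin is fracture critical, the average double-shear stress in one pin can be sketched in a few lines. All numbers below are invented for illustration; real assessments follow bridge design codes and account for section loss, corrosion packing and many other effects:

import math

# Each hanger plate pair loads its pin in double shear, so the pin
# cross-section resists the load on two shear planes.
def pin_shear_stress(load_newtons, pin_diameter_m):
    """Average shear stress in a pin loaded in double shear (Pa)."""
    area = math.pi * (pin_diameter_m / 2) ** 2
    return load_newtons / (2 * area)

load = 1.2e6       # assumed reaction carried by one hanger assembly, N
diameter = 0.15    # assumed pin diameter, m

tau = pin_shear_stress(load, diameter)
print(f"average pin shear stress: {tau / 1e6:.1f} MPa")
# There is no backup load path if this one pin cracks or seizes, which
# is exactly what "fracture critical" means for these assemblies.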
Pin and hanger assembly
Engineering
501
25,208,226
https://en.wikipedia.org/wiki/Solar%20eclipse%20of%20September%203%2C%202081
A total solar eclipse will occur at the Moon's descending node of orbit on Wednesday, September 3, 2081, with a magnitude of 1.072. A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth. A total solar eclipse occurs when the Moon's apparent diameter is larger than the Sun's, blocking all direct sunlight and turning day into darkness. Totality occurs in a narrow path across Earth's surface, with the partial solar eclipse visible over a surrounding region thousands of kilometres wide. Because the eclipse occurs about 5 hours before perigee (on September 3, 2081, at 14:05 UTC), the Moon's apparent diameter will be larger than average. The path of totality will be visible from parts of France, Germany, Switzerland, Liechtenstein, Austria, Italy, Slovenia, Croatia, Hungary, Bosnia and Herzegovina, Serbia, Romania, Bulgaria, Turkey, Syria, Iraq, Kuwait, far western Iran, Bahrain, Qatar, the United Arab Emirates, eastern Saudi Arabia, Oman, the Maldives, and southern Indonesia. A partial solar eclipse will also be visible for parts of Greenland, Europe, North Africa, Northeast Africa, the Middle East, Central Asia, South Asia, and Southeast Asia. Major cities Paris Basel Zurich Innsbruck Ljubljana Zagreb Istanbul Ankara Baghdad Basra Kuwait City Manama Doha Abu Dhabi Malé Eclipse details Two tables detail this eclipse: the first lists the times at which the Moon's penumbra or umbra attains specific parameters, and the second describes various other parameters pertaining to the eclipse. Eclipse season This eclipse is part of an eclipse season, a period, roughly every six months, when eclipses occur. Each season lasts about 35 days and repeats just short of six months (173 days) later, so two full eclipse seasons (occasionally three) occur each year, with either two or three eclipses in each season. In the sequence below, each eclipse is separated by a fortnight. Related eclipses Eclipses in 2081 An annular solar eclipse on March 10. A partial lunar eclipse on March 25. A total solar eclipse on September 3. A penumbral lunar eclipse on September 18. Metonic Preceded by: Solar eclipse of November 15, 2077 Followed by: Solar eclipse of June 22, 2085 Tzolkinex Preceded by: Solar eclipse of July 24, 2074 Followed by: Solar eclipse of October 14, 2088 Half-Saros Preceded by: Lunar eclipse of August 28, 2072 Followed by: Lunar eclipse of September 8, 2090 Tritos Preceded by: Solar eclipse of October 4, 2070 Followed by: Solar eclipse of August 3, 2092 Solar Saros 136 Preceded by: Solar eclipse of August 24, 2063 Followed by: Solar eclipse of September 14, 2099 Inex Preceded by: Solar eclipse of September 22, 2052 Followed by: Solar eclipse of August 15, 2110 Triad Preceded by: Solar eclipse of November 3, 1994 Followed by: Solar eclipse of July 5, 2168 Solar eclipses of 2080–2083 Saros 136 Metonic series Tritos series Inex series Notes References 2081 in science
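The saros relations listed above lend themselves to a quick date-arithmetic check: one saros cycle is about 6585.32 days, so adding that interval to this eclipse should land near the next member of Saros 136. A minimal Python sketch (the 11:00 UTC time of day is an assumed illustrative value, not taken from the article):

```python
from datetime import datetime, timedelta

SAROS_DAYS = 6585.32  # one saros cycle: roughly 18 years and 11.3 days

# Approximate moment of the September 3, 2081 eclipse (time of day assumed)
eclipse = datetime(2081, 9, 3, 11, 0)

# One saros later the geometry nearly repeats; the result falls on
# September 14, 2099, matching the Saros 136 successor listed above.
print(eclipse + timedelta(days=SAROS_DAYS))  # -> 2099-09-14 18:40:48
```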
Solar eclipse of September 3, 2081
Astronomy
685
66,562,107
https://en.wikipedia.org/wiki/2-Chloromethylpyridine
2-Chloromethylpyridine is an organohalide that consists of a pyridine core bearing a chloromethyl group. It is one of three isomeric chloromethylpyridines, along with 3- and 4-chloromethylpyridine. It is an alkylating agent. 2-Chloromethylpyridine is a precursor to pyridine-containing ligands. Safety 2-Chloromethylpyridine is an analogue of nitrogen mustards, and has been investigated for its mutagenicity. References Alkylating agents Organochlorides 2-Pyridyl compounds
2-Chloromethylpyridine
Chemistry
145
47,524,110
https://en.wikipedia.org/wiki/51%20Eridani%20b
51 Eridani b is a "Jupiter-like" planet that orbits the young F0 V star 51 Eridani, in the constellation Eridanus. It is 96 light years away from the Solar System, and it is approximately 20 million years old. Discovery 51 Eridani b was announced in August 2015, having been discovered in December 2014 using the Gemini Planet Imager, an international project led by the Kavli Institute for Particle Astrophysics and Cosmology. 51 Eridani b is the first exoplanet discovered by the Gemini Planet Imager. The Gemini Planet Imager was specifically created to discern and evaluate dim, newer planets orbiting bright stars through "direct imaging." Direct imaging allows astronomers to use adaptive optics to sharpen the resolution of the image of a target star, then obstruct its starlight. Any residual incoming light is then scrutinized, and the brightest spots suggest a possible planet. Prior to the discovery of 51 Eridani b, each of the directly imaged worlds previously discovered had been a gas giant many times the mass of Jupiter. Physical characteristics The planet has a mass of at least 2.6, but not more than 11, times that of Jupiter. Its radius is about 1.11 times the radius of Jupiter. It orbits 11.1 AU from its host star, and has an orbital period of roughly 10,000 days. The average temperature is 807 K, which is substantially hotter than the 130 K average temperature of Jupiter, the Solar System planet closest to it in size. Atmosphere 51 Eridani b has a relatively low C/O molar ratio of 0.38. The planet has the second strongest methane signature of any exoplanet, after GJ 504 b. This methane signature, along with the low luminosity of the object, should provide additional clues as to how 51 Eridani b was formed. Astronomers have also detected the presence of water and ammonia in the planet's spectrum. Atmospheric modeling favors a low surface gravity and a partly cloudy atmosphere. References Eridanus (constellation) Exoplanets discovered in 2014 Exoplanets detected by direct imaging Exoplanets detected by astrometry
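As a rough consistency check on these figures, Kepler's third law links the quoted 11.1 AU semi-major axis to the roughly 10,000-day period. Assuming a stellar mass of about 1.75 solar masses (a typical value for an F0 V star; the article does not state the mass):

$$P = \sqrt{\frac{a^{3}}{M_{\ast}}}\;\text{yr} = \sqrt{\frac{11.1^{3}}{1.75}}\;\text{yr} \approx 27.9\;\text{yr} \approx 10{,}200\;\text{days},$$

with $a$ in astronomical units and $M_{\ast}$ in solar masses, in good agreement with the period quoted above.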
51 Eridani b
Astronomy
443
12,932,629
https://en.wikipedia.org/wiki/Morphobank
MorphoBank is a web application for collaborative evolutionary research, specifically phylogenetic systematics or cladistics, on the phenotype. Historically, scientists conducting research on phylogenetic systematics have worked individually or in small groups employing traditional single-user software applications such as MacClade, Mesquite and Nexus Data Editor. As the hypotheses under study have grown more complex, large research teams have assembled to tackle the problem of discovering the Tree of Life for the estimated 4–100 million living species and the many thousands more extinct species known from fossils. Because the phenotype is fundamentally visual, and as phenotype-based phylogenetic studies have continued to increase in size, it becomes important that observations be backed up by labeled images. Traditional desktop software applications currently in wide use do not provide robust support for team-based research or for image manipulation and storage. MorphoBank is a particularly important tool for the growing scientific field of phenomics. The development of MorphoBank, which began in 2001, has been funded by the National Science Foundation's Directorates for Geosciences, Biological Sciences and Computer and Information Science and Engineering. The significance of the scientific work on MorphoBank has been featured in The New York Times, among other publications. Advantages Teams of scientists studying phylogenetics to build the Tree of Life assemble large spreadsheets of observations about species (referred to as "matrices"). These teams require simultaneous access by each team member to a single and secure copy of the team's data during a scientific research project. This single copy of the data also changes with great frequency during the data collection phase. Images that can be very helpful for documenting homology statements must be displayed, labeled and shared as homology statements develop. This cannot be accomplished elegantly with a desktop software package alone, because in a desktop environment each collaborator works on their own private copy of project data. Changes made by one participant cannot automatically propagate to others, preventing collaborators from seeing each other's data edits until they are manually (and, due to the effort involved, often only periodically) merged into a single "true" dataset. In all but the smallest and most disciplined of teams, file version control and the reconciliation of changes made on multiple copies of the data quickly emerge as significant drags on productivity. MorphoBank is an attempt to address these issues by leveraging the ubiquity of the web and modern web-based application techniques, including Ajax, web service layers, and rich web applications, to provide a full-featured, net-accessible collaborative workspace for phylogenetic research. In particular, MorphoBank makes it easy to: Share all kinds of data with geographically separated team members, including taxonomy, character and specimen data, media (including images, video and audio), phylogenetic matrices (including data in the widely used NEXUS and TNT formats) and other data such as documents and genetic sequences. Label high-resolution images using a web-based image annotation application. Collaboratively edit project data such as phylogenetic matrices using a built-in web-based matrix editor. The editor allows the linking of labeled images to individual cells of a matrix. Manage access to project data.
Access ranges from full access for team members to anonymous read-only access for potential reviewers. Publish completed project data on the web in support of a published paper, with a persistent URL. Search the Encyclopedia of Life for taxon exemplar images. Store high-resolution CT data. Create ontologies for updating and populating matrix cells. These tasks are difficult or impossible in most existing software applications. History In 2001 the National Science Foundation (NSF) sponsored a workshop at the American Museum of Natural History in New York to develop the outlines of a web-based system for a collaborative, media-rich research tool for morphological phylogenetics. An application prototype presented at the workshop was later refined with feedback from the workshop and became MorphoBank version 1.0. A grant from the US National Oceanic and Atmospheric Administration funded further revisions resulting in version 2.0, released in 2005. Continuing support from the NSF funds ongoing feature enhancements to MorphoBank. MorphoBank was hosted by Stony Brook University until late October 2021 and received backup support from the American Museum of Natural History. The current version is 3.0. The rationale for the software was described in the journal Cladistics. MorphoBank has also received support from NESCent and the San Diego Supercomputer Center. Since 2018, MorphoBank has been supported in part by Phoenix Bioinformatics, a non-profit company founded to sustain databases for the basic sciences. A permanent move of MorphoBank from Stony Brook University to Phoenix Bioinformatics was completed in late October 2021. The San Diego Supercomputer Center has previously provided technical and hosting resources to the MorphoBank project. Usage MorphoBank hosts the products of peer-reviewed scientific research on phenotypes. An increasing volume of systematics data is "born digital", and MorphoBank is well suited to handle this type of material. On August 24, 2007, 62 active research projects were hosted by MorphoBank, as well as 6 completed (and published) projects. By 2017, over 2,000 scientists and their students were registered content builders (users are not required to register and are even more numerous), and the site hosted more than 500 publicly available projects with approximately 80,000 images that are the products of scientific research. Over 1,500 active research projects are hosted by MorphoBank. The software has been used to assemble phylogenetic research on such groups as mammals, from bats to whales, bivalve molluscs, arachnids, fossil plants, and living and extinct amniotes. It has also been used more broadly in evolutionary and paleontological research to host curated images associated with published research on lacewing insects, geckos, raptor birds, dinosaurs, frogs and nematodes. MorphoBank is increasingly used in conjunction with the Paleobiology Database. Example published projects: Project 1097: Blank CE, 2013 Origin and early evolution of photosynthetic eukaryotes in freshwater environments – reinterpreting proterozoic paleobiology and biogeochemical processes in light of trait evolution Project 2520: Carvalho, T. P., R. E. Reis, and J. P. Friel, 2017 A new species of Hoplomyzon (Siluriformes: Aspredinidae) from Maracaibo Basin, Venezuela: osteological description using high-resolution Project 2651: Baron, M. G., Norman, D. B., Barrett, P.
M., 2017 A new hypothesis of dinosaur relationships and early dinosaur evolution MorphoBank has been particularly important to the Assembling the Tree of Life initiative sponsored by the National Science Foundation. MorphoBank is well suited to such projects because of its tools for merging taxonomic, character and matrix-based data, as well as its collaborative features. Highlights of this research include a collaborative matrix on mammal evolution published in Science that included over 4,000 phenomic characters scored for over 80 species, a matrix on extant baleen whales featuring nearly 600 images, and more. References External links MorphoBank home page Modernizing the Tree of Life, Science, 10 June 2003, 300: 1692–1697. Article discussing efforts of projects including MorphoBank to simplify and speed up the assessment of biodiversity. "Morphology: The Shape of Things to Come", Paul D. Thacker, BioScience, June 2003, Vol. 53 No. 6, 544. A report on contemporary initiatives in morphological research, including MorphoBank. Web applications Bioinformatics software Paleontology Evolution Morphology (biology) Taxonomy Species Fossils
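The NEXUS format mentioned above is a plain-text standard for exchanging character matrices of the kind MorphoBank's editor manages. As a minimal sketch only (the taxa, character scores, and file name are hypothetical, not drawn from any MorphoBank project), a few lines of Python that write a tiny standard morphological matrix:

```python
# Write a minimal NEXUS file with a 3-taxon, 4-character morphological matrix.
# Taxa and scores are hypothetical; '?' marks a missing observation.
nexus = """#NEXUS
BEGIN DATA;
  DIMENSIONS NTAX=3 NCHAR=4;
  FORMAT DATATYPE=STANDARD MISSING=? GAP=- SYMBOLS="01";
  MATRIX
    Taxon_A  0101
    Taxon_B  01?1
    Taxon_C  1100
  ;
END;
"""

with open("example_matrix.nex", "w") as f:
    f.write(nexus)
```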
Morphobank
Biology
1,645
13,860,772
https://en.wikipedia.org/wiki/Delaunay%20tessellation%20field%20estimator
The Delaunay tessellation field estimator (DTFE), or Delone tessellation field estimator, is a mathematical tool for reconstructing a volume-covering and continuous density or intensity field from a discrete point set. The DTFE has various astrophysical applications, such as the analysis of numerical simulations of cosmic structure formation, the mapping of the large-scale structure of the universe, and the improvement of computer simulation programs of cosmic structure formation. It was developed by Willem Schaap and Rien van de Weygaert. The main advantage of the DTFE is that it automatically adapts to (strong) variations in density and geometry. It is therefore very well suited for studies of the large-scale galaxy distribution. Method The DTFE consists of three main steps: Step 1 The starting point is a given discrete point distribution. In the upper left-hand frame of the figure, a point distribution is plotted in which an object, whose density diminishes radially outwards, is located at the center of the frame. In the first step of the DTFE, the Delaunay tessellation of the point distribution is constructed. This is a volume-covering division of space into triangles (tetrahedra in three dimensions) whose vertices are formed by the point distribution (see figure, upper right-hand frame). The Delaunay tessellation is defined such that no other points from the defining point distribution lie inside the circumcircle of any Delaunay triangle. Step 2 The Delaunay tessellation forms the heart of the DTFE. In the figure it is clearly visible that the tessellation automatically adapts to both the local density and the geometry of the point distribution: where the density is high, the triangles are small, and vice versa. The size of the triangles is therefore a measure of the local density of the point distribution. This property of the Delaunay tessellation is exploited in step 2 of the DTFE, in which the local density is estimated at the locations of the sampling points. For this purpose, the density at the location of each sampling point is defined as the inverse of the area of its surrounding Delaunay triangles, times a normalization constant (see figure, lower right-hand frame). Step 3 In step 3, these density estimates are interpolated to any other point by assuming that the density field varies linearly inside each Delaunay triangle (see figure, lower left-hand frame). Applications An atlas of the nearby universe One of the main applications of the DTFE is the rendering of our cosmic neighborhood. Below, the DTFE reconstruction of the 2dF Galaxy Redshift Survey is shown, revealing an impressive view of the cosmic structures in the nearby universe. Several superclusters stand out, such as the Sloan Great Wall, one of the largest structures in the universe. Numerical simulations of structure formation Most algorithms for simulating cosmic structure formation are particle hydrodynamics codes. At the core of these codes is the smoothed particle hydrodynamics (SPH) density estimation procedure. Replacing it with the DTFE density estimate would yield a major improvement for simulations incorporating feedback processes, which play a major role in galaxy and star formation. Cosmic velocity field The DTFE has been designed for reconstructing density or intensity fields from a discrete set of irregularly distributed points sampling this field.
However, it can also be used to reconstruct other continuous fields which have been sampled at the locations of these points, for example the cosmic velocity field. The use of the DTFE for this purpose has the same advantages as it has for reconstructing density fields. The fields are reconstructed locally without the application of an artificial or user-dependent smoothing procedure, resulting in an optimal resolution and the suppression of shot noise effects. The estimated quantities are volume-covering and allow for a direct comparison with theoretical predictions. Evolution and dynamics of the cosmic web The DTFE has been specifically designed for describing the complex properties of the cosmic web. It can therefore be used to study the evolution of voids and superclusters in the large scale matter galaxy distribution. External links DTFE: the Delaunay Tessellation Field Estimator, Willem Schaap, 2007, PhD Thesis, Rijksuniversiteit Groningen, The Netherlands Probing cosmic velocity flows in the local universe, Emilio Romano-Diaz, 2004, PhD Thesis, Rijksuniversiteit Groningen, The Netherlands The cosmic web: geometric analysis, Rien van de Weygaert and Willem Schaap, 2004 Large-scale structure of the cosmos Cosmological simulation Astronomical surveys Geometric algorithms
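The three steps above map almost directly onto standard computational-geometry tools. A minimal two-dimensional sketch using NumPy and SciPy (an illustration under simplifying assumptions: unit-mass points, no boundary or periodic-volume corrections; the function and variable names are our own, not from the DTFE codes):

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

def dtfe_2d(points):
    """Delaunay tessellation field estimator for a 2D point set."""
    # Step 1: construct the Delaunay tessellation of the point distribution
    tri = Delaunay(points)

    # Area of each triangle from the cross product of two edge vectors
    p = points[tri.simplices]                     # shape (ntri, 3, 2)
    d1, d2 = p[:, 1] - p[:, 0], p[:, 2] - p[:, 0]
    areas = 0.5 * np.abs(d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0])

    # Step 2: density at each sampling point = inverse of the total area of
    # its surrounding triangles, times the normalization constant 1 + D = 3
    star_area = np.zeros(len(points))
    np.add.at(star_area, tri.simplices.ravel(), np.repeat(areas, 3))
    density = 3.0 / star_area

    # Step 3: interpolate linearly inside each Delaunay triangle
    return LinearNDInterpolator(tri, density)

# Usage: reconstruct a continuous field from 1000 random sample points
pts = np.random.rand(1000, 2)
field = dtfe_2d(pts)
print(field(0.5, 0.5))  # interpolated density at the point (0.5, 0.5)
```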
Delaunay tessellation field estimator
Physics,Astronomy
953