Dataset columns: id (int64, values 39 to 79M), url (string, 31–227 characters), text (string, 6–334k characters), source (string, 1–150 characters), categories (list, 1–6 items), token_count (int64, 3–71.8k), subcategories (list, 0–30 items).
76,149,855
https://en.wikipedia.org/wiki/Capillaritron
A capillaritron is a device for creating ion and atom beams. Mechanism The capillaritron, the basic concept of which was published in 1981, consists of a fine metal capillary serving as the anode, through which gas flows, and a concentric extraction cathode with an outlet opening. When a high voltage (usually a few kilovolts) is applied, the gas flowing through the capillary is ionised by free electrons and by secondary electrons accelerated towards the anode (see also impact ionisation). The positively charged ions are accelerated in the electric field and form an ion beam behind the opening of the extraction cathode. Due to recombination and charge exchange processes in the plasma, the beam also partly consists of uncharged atoms. The capillary usually consists of resistant materials, such as tungsten. A further development from 1992 is the quartz capillaritron. Here the capillary consists of quartz, an electrically insulating material, into which a metal wire is inserted in order to apply the anode potential. The advantage lies in the simpler, more flexible and cheaper production of quartz capillaries with a predetermined inner diameter, which, unlike metal capillaries, do not have to be drilled but can be electrochemically etched or manufactured by a glassblower. As a rule, inert gas is used as the operating gas, as it reacts only slightly with the other materials involved. However, a capillaritron also works with hydrogen, with nitrogen or even with air. Capillaritron ion beams achieve current densities of up to 10 kiloamperes per square millimetre and beam currents of several milliamperes. Focused with ion optics, beams of high power density can be generated in high vacuum and used to process surfaces selectively. Applications Capillaritrons are commercially available. Ion and atom beams can be used to sputter surfaces over large areas, and the sputtered material can be used for thin film deposition.
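As a rough sketch of the acceleration described above, a singly charged ion accelerated from rest through a potential V gains kinetic energy qV = ½mv². The gas species, charge state and voltage below are illustrative assumptions, not values from the source:

```python
import math

# Physical constants
E_CHARGE = 1.602176634e-19   # elementary charge, C
AMU = 1.66053906660e-27      # atomic mass unit, kg

def ion_speed(voltage_v, mass_amu, charge_states=1):
    """Final speed of an ion accelerated from rest: q*V = 1/2 * m * v**2."""
    energy = charge_states * E_CHARGE * voltage_v   # kinetic energy gained, J
    return math.sqrt(2 * energy / (mass_amu * AMU))

# Illustrative example: a singly charged argon ion at a 3 kV extraction potential.
v = ion_speed(3000, 39.948)
print(f"{v / 1000:.0f} km/s")  # on the order of 100 km/s
```

The square-root dependence means quadrupling the extraction voltage only doubles the ion speed.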
Atomic beams can also be used to process insulating surfaces. When using ion beams, such surfaces would become increasingly electrostatically charged, which slows down the ions before they hit the surface. Furthermore, the capillaritron as an atom source can be used for mass spectrometry. Capillaritrons are also suited for accelerator applications. Further reading John F. Mahoney, Julius Perel, A. Theodore Forrester: Capillaritron: A New, Versatile Ion Source. In: Appl. Phys. Lett. 38, 1981, pp. 320–322. Julius Perel, John F. Mahoney, Bernard Kalensher: Investigation of the Capillaritron Ion Source for Electric Propulsion. AIAA, 15th International Electric Propulsion Conference, 1981, Las Vegas, U.S.A., published online on 17 Aug 2012. Julius Perel: Ion Source for Rocket Payload. 6th Quarterly Status Report, Air Force Geophysics Laboratory, Pasadena, U.S.A., August 1983. Roland Hanke, Helmut Knapp, Detlef Rübesame, Stephan Wege, Heinz Niedrig: A Capillaritron Ion Source as Triode System Coupled with an Einzel Lens. In: Nuclear Instruments and Methods in Physics Research, Section B: Beam Interactions with Materials and Atoms, Volumes 59–60, Part 1, 1 July 1991, pp. 135–138. Markus Bautsch, Patrik Varadinek, Stephan Wege, Heinz Niedrig: A Compact and Inexpensive Quartz Capillaritron Source. In: J. Vac. Sci. Tech. A. 12, No. 2, 1994, pp. 591–593. References Ion source Accelerator physics Surface science
Capillaritron
[ "Physics", "Chemistry", "Materials_science" ]
791
[ "Applied and interdisciplinary physics", "Spectrum (physical sciences)", "Ion source", "Surface science", "Experimental physics", "Condensed matter physics", "Mass spectrometry", "Accelerator physics" ]
76,152,094
https://en.wikipedia.org/wiki/C10H18O5
The molecular formula C10H18O5 (molar mass: 218.249 g/mol) may refer to: Di-tert-butyl dicarbonate Diethylene glycol diglycidyl ether Molecular formulas
C10H18O5
[ "Physics", "Chemistry" ]
66
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
76,152,796
https://en.wikipedia.org/wiki/Connorstictic%20acid
Connorstictic acid is an organic compound in the structural class of chemicals known as depsidones. It occurs as a secondary metabolite in many lichen species in several genera. History Connorstictic acid was first identified and named in 1971 by Chicita Culberson and William Culberson, from chemical analysis of Diploschistes lichens. They described it as "probably a β-orcinol depsidone", and noted that it commonly co-occurred in lichens with norstictic acid. Its structure was published in 1980 following spectral and elemental analysis of the compound purified from the lichen Pertusaria pseudocorallina. The following year, John Elix and Labunmi Lajide corroborated the structure by synthesising it in several steps from the precursor norstictic acid. They also showed that connorstictic acid could be obtained by the direct reduction of norstictic acid with sodium triacetoxyborohydride, or by catalytic reduction. In 1981, Chicita Culberson and colleagues reported on the difficulties of isolating connorstictic acid using standard thin-layer chromatography protocols, due to its co-eluting with related substances such as constictic acid and cryptostictic acid, depending on the solvent system used. They suggested that connorstictic acid could be a common or even constant satellite compound in chemistries with stictic and norstictic acids, and that many prior reports of connorstictic acid may actually have been misidentifications of cryptostictic acid. Properties Connorstictic acid is a member of the class of chemical compounds called depsidones. Its IUPAC name is 5,13,17-trihydroxy-4-(hydroxymethyl)-7,12-dimethyl-2,10,16-trioxatetracyclo[9.7.0.03,8.014,18]octadeca-1(11),3(8),4,6,12,14(18)-hexaene-9,15-dione. The absorption maxima in the infrared spectrum occur at 1250, 1292, 1445, 1610, 1710, 1745, and 3400 cm−1. Connorstictic acid's molecular formula is C18H14O9; it has a molecular mass of 374.29 grams per mole.
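As a quick arithmetic check (not a statement from the source), the quoted molar mass of about 374.3 g/mol corresponds to the formula C18H14O9 under standard atomic weights:

```python
import re

# Standard atomic weights for the elements involved (IUPAC values)
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "O": 15.999}

def molar_mass(formula: str) -> float:
    """Sum atomic weights for a simple Hill-notation formula like 'C18H14O9'."""
    total = 0.0
    for element, count in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        if element:  # skip the regex's empty trailing match
            total += ATOMIC_WEIGHT[element] * (int(count) if count else 1)
    return total

print(round(molar_mass("C18H14O9"), 2))  # about 374.3 g/mol
```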
In its purified crystalline form, its predicted melting point is . Occurrence Lichen genera from which connorstictic acid has been isolated include Bryoria, Buellia, Cladonia, Cratiria, Diorygma, Graphis, Paraparmelia, Parmotrema, Pertusaria, Usnea, and Xanthoparmelia. References Benzaldehydes Heterocyclic compounds with 4 rings Lactones Lichen products Methoxy compounds Hydroxyarenes Dioxepines
Connorstictic acid
[ "Chemistry" ]
624
[ "Natural products", "Lichen products" ]
76,152,941
https://en.wikipedia.org/wiki/Alfa%20Romeo%20690T%20engine
The Alfa Romeo 690T is a twin-turbocharged, direct-injection, 90° V6 petrol engine designed and produced by Alfa Romeo since 2015. It is used in the high-performance Giulia Quadrifoglio and Stelvio Quadrifoglio models and is manufactured at the Alfa Romeo Termoli engine plant. Description The 690T is often considered to be the Ferrari F154 engine with two fewer cylinders, but it is in fact a completely new engine, developed by Gianluca Pivetti, the same engineer behind the F154; it shares some features Alfa knew worked well, which also reduced development time. This 2.9-litre V6 uses single-scroll rather than twin-scroll turbos, which produce of boost pressure. Alfa also added mechanical cylinder deactivation to the right bank for increased highway fuel efficiency. The 90-degree V6 engine's crankshaft has three crankpins 120 degrees apart, each with two connecting rods mounted side by side. This configuration results in uneven firing intervals, alternating between 90 and 150 degrees of crankshaft rotation, but within each cylinder bank it produces even pulses every 240 degrees, providing evenly spaced exhaust pulses to each turbocharger and allowing one bank to be deactivated. Additionally, from 2020 onward, Alfa added port injection, doubling the number of injectors to 12. The Maserati 3.0-litre V6 Nettuno engine, introduced in the Maserati MC20, shares many of its characteristics with the Ferrari F154 and the Alfa Romeo 690T engines. In 2023 Alfa Romeo presented the 33 Stradale model, which features a larger-displacement 690T engine, now at 3.0 litres and producing . Applications Alfa Romeo Giulia Quadrifoglio Alfa Romeo Stelvio Quadrifoglio Alfa Romeo Giulia GTA and GTAm 2023 Alfa Romeo Giulia SWB Zagato Alfa Romeo References Alfa Romeo V6 engines Alfa Romeo engines Gasoline engines by model Engines by model Piston engines Internal combustion engine
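The firing geometry described above can be verified with a little arithmetic. The sketch below (cylinder and bank labels are illustrative, not Alfa's) lists the firing events of a 90-degree V6 with shared crankpins over one 720-degree four-stroke cycle:

```python
# A four-stroke cycle spans 720 degrees of crankshaft rotation.
CYCLE = 720

# The three shared crankpins sit 120 degrees apart; over the 720-degree
# cycle, one bank fires evenly at 240-degree intervals on those pins.
left_bank = [0, 240, 480]
# The other bank's rods share the same pins, offset by the 90-degree bank angle.
right_bank = [angle + 90 for angle in left_bank]

events = sorted(left_bank + right_bank)
# Interval from each firing event to the next (wrapping around the cycle).
intervals = [(b - a) % CYCLE
             for a, b in zip(events, events[1:] + [events[0] + CYCLE])]

print(intervals)                          # alternating 90/150-degree intervals
print(left_bank[1] - left_bank[0])        # each bank alone fires every 240 degrees
```

The alternating 90/150 pattern is the "uneven firing" in the text; the even 240-degree spacing per bank is what gives each turbocharger evenly spaced exhaust pulses.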
Alfa Romeo 690T engine
[ "Technology", "Engineering" ]
400
[ "Internal combustion engine", "Engines", "Engines by model", "Piston engines", "Combustion engineering" ]
76,153,284
https://en.wikipedia.org/wiki/Creighton%20Michael
Creighton Michael (born 1949) is an American abstract artist. He earned his B.F.A. in painting from the University of Tennessee, his M.A. in art history from Vanderbilt University, and later received an M.F.A. in painting and multimedia from Washington University in St. Louis. Career Michael has had solo exhibitions at the High Museum of Art, Katonah Museum of Art, Queens Museum of Art, Neuberger Museum of Art, and University of Richmond Museums. His international presence is marked by solo exhibitions in Copenhagen, Montreal, and Reykjavík. Michael has received a Pollock-Krasner Foundation grant, a New York Foundation for the Arts fellowship in sculpture, a Sam & Adele Golden Foundation for the Arts award in painting, and an Edward F. Albee Foundation fellowship. Michael has been on the faculty at Rhode Island School of Design, the Pennsylvania Academy of the Fine Arts, and Hunter College in New York City. He served as a visiting lecturer at Princeton University and conducted workshops at various institutions, including the Anderson Ranch Arts Center and Virginia Commonwealth University. A member of American Abstract Artists, Michael has served on the Board of Directors of the International Sculpture Center and the Board of Overseers of the Katonah Museum of Art. He worked as curator/producer of the show The Art of Rube Goldberg, toured by International Arts & Artists, Washington, D.C., 2017–2020, and of the show Pencil Pushed: Exploring Process and Boundaries in Drawing, at The Ewing Gallery of Art and Architecture and Downtown Gallery, University of Tennessee, Knoxville, Tennessee, 2012. Collections Michael's work is held in public and private collections, including the National Gallery of Art, The Phillips Collection, the Frances Mulhall Achilles Library Collection at the Whitney Museum, the Brooklyn Museum, and the Metropolitan Museum of Art in New York. Noteworthy additions include the Embassy of the United States, Nairobi, U.S. Department of State, Permanent Collection; The Hafnarfjördur Centre of Culture and Fine Art in Iceland; the High Museum of Art in Atlanta; the McNay Art Museum in San Antonio, Texas; the Mint Museum in Charlotte, North Carolina; the Denver Art Museum; and the Yale University Art Gallery in New Haven, Connecticut, among others. References External links Creighton Michael website Creighton Michael, Artspace American abstract artists Museum collections 1949 births Living people University of Tennessee alumni Multimedia artists 20th-century American male artists Vanderbilt University alumni Sam Fox School of Design & Visual Arts alumni Rhode Island School of Design faculty Pennsylvania Academy of the Fine Arts faculty Hunter College faculty Princeton University faculty
Creighton Michael
[ "Technology" ]
531
[ "Multimedia", "Multimedia artists" ]
76,153,838
https://en.wikipedia.org/wiki/Filoboletus%20mycenoides
Filoboletus mycenoides is a species of agaric fungus in the family Mycenaceae native to Java, first described by Paul Christoph Hennings as the type species of Filoboletus. Morphology Pileus membranous, minute, convex, smooth, glabrous, incarnate, 1–1.25 mm in diameter. Stipe central, thin and filiform, frosty white, glabrous, with a discoid base; 15 mm long and barely 200 μm thick. Tubular hymenium indistinguishable from the hymenophore. Pores rounded. Spores cylindrical, hyaline, 3.5–4 × 0.5 μm. References External links mycenoides Fungi described in 1899 Fungus species
Filoboletus mycenoides
[ "Biology" ]
161
[ "Fungi", "Fungus species" ]
76,153,929
https://en.wikipedia.org/wiki/Random%20two-sided%20matching
A random two-sided matching is a process by which members of two groups are matched to each other in a random way. It is often used in sports in order to match teams in knock-out tournaments. In this context, it is often called a draw, as it is implemented by drawing balls at random from a bowl, each ball representing the name of a team. Examples The UEFA Champions League and UEFA Europa League draw A random two-sided matching occurs in the UEFA Champions League Round of 16 and UEFA Europa League Round of 32. After the group stage, played in eight groups, each group winner and each group runner-up proceed to the knockout phase. The UEFA rules say that each winner should be paired with a runner-up. Without further constraints, this problem could easily be solved by finding a random permutation of the winners. But UEFA rules impose two additional constraints: two teams from the same group cannot be paired, and two teams from the same association cannot be paired. Thus, the goal is to choose a random matching in an incomplete bipartite graph. The UEFA mechanism makes several draws from different bowls. At the beginning, there are: Bowl 1, containing identical balls, each of which represents one group runner-up; Bowl 2, initially empty, to be filled and refilled later; Bowls A to H, each of which represents a group winner and contains seven balls bearing that winner's name. The draw proceeds as follows: A ball is drawn from bowl 1, and the runner-up's name is displayed; A computer program shows all winners that can — according to the UEFA rules — be paired with the drawn runner-up, taking into account not only the current constraints but also the constraints for future runners-up; From each of the bowls A to H that represents such an eligible winner, a single ball is taken and put in bowl 2; The balls in bowl 2 are shuffled, one ball is drawn, and it represents the winner matched to the previously drawn runner-up; Bowl 2 is emptied, and the process repeats for 8 rounds.
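The sequential procedure above can be reproduced exactly on a toy instance: three winners, three runners-up, and a hypothetical forbidden-pair set standing in for the group and association constraints (not real UEFA groups). Enumerating every draw order and every feasible choice with exact fractions shows that the resulting matching distribution is not uniform:

```python
from fractions import Fraction
from itertools import permutations

# Toy instance: runners-up 0..2 are matched to winners 0..2,
# and pairs (r, w) in FORBIDDEN may not meet.
N = 3
FORBIDDEN = {(0, 0), (1, 1)}

def ok(r, w):
    return (r, w) not in FORBIDDEN

def valid_matchings():
    """All perfect matchings respecting the constraints (runner r -> winner p[r])."""
    return [p for p in permutations(range(N))
            if all(ok(r, p[r]) for r in range(N))]

def completable(assigned):
    """Can the partial assignment be extended to a full valid matching?
    This is the 'computer program' feasibility check of the draw."""
    rest_r = [r for r in range(N) if r not in assigned]
    rest_w = [w for w in range(N) if w not in assigned.values()]
    return any(all(ok(r, w) for r, w in zip(rest_r, p))
               for p in permutations(rest_w))

def draw_distribution():
    """Exact distribution of the sequential draw: the runner-up order is uniform,
    and each runner-up is then paired uniformly among winners that keep the
    remaining draw completable."""
    dist = {}
    def go(order, assigned, prob):
        if not order:
            m = tuple(assigned[r] for r in range(N))
            dist[m] = dist.get(m, Fraction(0)) + prob
            return
        r, rest = order[0], order[1:]
        opts = [w for w in range(N)
                if w not in assigned.values() and ok(r, w)
                and completable({**assigned, r: w})]
        for w in opts:
            go(rest, {**assigned, r: w}, prob / len(opts))
    orders = list(permutations(range(N)))
    for order in orders:
        go(list(order), {}, Fraction(1, len(orders)))
    return dist

dist = draw_distribution()
uniform = Fraction(1, len(valid_matchings()))        # 1/3 per valid matching
print({m: str(p) for m, p in sorted(dist.items())})  # 5/18, 13/36, 13/36
print(all(p == uniform for p in dist.values()))      # False: the draw is biased
```

On this instance the matching (1, 0, 2) occurs with probability 5/18 instead of the uniform 1/3, illustrating the distortion discussed next.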
This procedure yields probabilities that differ from those of choosing a matching uniformly at random; this creates a distortion in the matching probabilities of different groups, which has raised suspicion and conspiracy theories. The FIFA draw Another two-sided matching occurs in the FIFA World Cup. First, the runners-up are drawn in a random order. Then, each winner in turn is drawn, and it is matched to the first runner-up in that order to which it can be matched according to the constraints. This draw, too, produces distorted probabilities relative to the uniform-random matching. See also Fair random assignment - one-sided matching, allocating items to agents with different preferences. References Matching (graph theory) Randomness
Random two-sided matching
[ "Mathematics" ]
569
[ "Matching (graph theory)", "Mathematical relations", "Graph theory" ]
76,153,976
https://en.wikipedia.org/wiki/Filoboletus%20hanedae
Filoboletus hanedae is a species of agaric fungus in the family Mycenaceae native to South-East Asia, first described by George S. Kobayashi. Its fruiting bodies display bioluminescence. Morphology Pileus (cap) The shape of the pileus in Filoboletus hanedae varies considerably: convex, conico-campanulate, umbonate, plane and hygrophanous caps have been observed. The margin is rather strongly incurved at first. The underside of the pileus has pores, rather than gills, where spores are produced and dispersed. Size The pileus ranges from about half a centimeter to about three and a half centimeters in diameter. The stipe (stalk) ranges from 0.4 to 6 centimeters long. Coloration The coloration of Filoboletus hanedae changes depending on its maturation state. At maturity, the fruiting bodies have white or beige coloration. During maturation, however, the fruiting body (basidioma) can also have brown or pink coloration. The visibility of any brown or pink coloration decreases as the fruiting body matures, giving way to the better-known white, yellowish and beige appearance. Bioluminescence The flesh is greenish phosphorescent, especially in the lower part of the stem. Ecology Filoboletus hanedae grows on trunks, stipes and sticks in the forest, and has been noted in Sri Lanka, Peninsular Malaysia, Sumatra, Borneo, the Krakatoa archipelago, Karimunjawa, the Philippine Islands, Pohnpei, New Guinea, New Caledonia, Australia, Madagascar, Venezuela and Japan. References External links hanedae Fungi described in 1955 Bioluminescent fungi Fungi of Oceania Fungi of Asia Fungus species
Filoboletus hanedae
[ "Biology" ]
379
[ "Fungi", "Fungus species" ]
76,154,132
https://en.wikipedia.org/wiki/Expert%20review%20%28method%29
Expert review or expert evaluation is a method to evaluate survey questions from the perspective of one or more experts. An expert review has two primary goals: Identify potential problems related to data quality and data collection so they can be mitigated. Group survey items by how likely they are to result in measurement errors, such as interviewer effects. An individual expert or a panel of experts, such as survey methodologists, subject matter experts (such as sociologists) or other people familiar with questionnaire design, reviews the survey, and they may be asked to suggest revisions. Because variation is found in the problems identified by each expert, a panel rather than a single expert is recommended. The experts document their assessment and typically report the results in open-ended comments. They can also conduct a systematic review using a coding tool such as the Questionnaire Appraisal System, which is supported by an item taxonomy of the cognitive demands of a question and a detailed set of potential problem codes documenting the features that may lead to response error. When applied to translated surveys, the expert panel also includes translators, who usually produce individual reports and then convene a resolution meeting that generates an improved version of the survey translation. Compared to empirical pretesting methods, an expert review is more time-efficient and less costly to complete. An expert review can also be combined with other methods to enable multimode pretesting. See also Heuristic evaluation References Survey methodology Sociology
Expert review (method)
[ "Biology" ]
292
[ "Behavioural sciences", "Behavior", "Sociology" ]
76,154,410
https://en.wikipedia.org/wiki/SpaceX%20Starshield
Starshield is a business unit of SpaceX creating purpose-built low-Earth-orbit satellites designed to provide new military space capabilities to U.S. and allied governments. Starshield was adapted from the global communications network Starlink but brings additional capabilities such as target tracking, optical and radio reconnaissance, and early missile warning. Primary customers include the Space Development Agency, National Reconnaissance Office and the United States Space Force. As of 2024, at least 98 Starshield satellites have been launched, with the latest batch of 17 satellites being launched in October 2024 as part of NROL-167. While SpaceX president and COO Gwynne Shotwell has indicated that there is little information she is allowed to disclose about Starshield, she has noted "very good collaboration" between the intelligence community and SpaceX on the program. The U.S. Congressional Research Service reports that future satellites in Starshield's participating SDA program may wield interceptor missiles, hypersonic projectiles, or directed energy weapons, with the program's founder adding "since Reagan’s day, technology has advanced enough that putting both sensors and shooters in space is not only possible but relatively easy." According to SDA Director Derek Tournear, later satellites will take on the “extremely difficult” task of maintaining contact with missiles in flight. The former four-star general Terrence O'Shaughnessy, who previously ran U.S. Northern Command, is the vice president for SpaceX's Special Programs Group who is thought to be involved with Starshield. The Wall Street Journal reported that Starshield's online job postings required people with top-secret clearances, as well as experience working with the Defense Department and intelligence community — such as representing Starshield to Pentagon combatant commands. 
The first satellites were designed for the Space Development Agency and outfitted with advanced infrared sensors meant to detect and track ballistic and hypersonic missiles. In 2021, SpaceX entered a $1.8 billion classified contract with the U.S. government, revealed in 2023, to construct hundreds of spy satellites for continuous real-time monitoring of targets around the globe. These began operations from May 2024, starting with NROL-146. These satellites are made in cooperation with Northrop Grumman. History The Starshield name was publicly announced in December 2022; however, in 2021 SpaceX had already entered the $1.8 billion classified contract with the U.S. government that was revealed in 2023. In the contract documents, SpaceX says that funds from the contract were expected to become an important part of the company's revenue mix after 2021. Reuters revealed in 2024 that this contract was between the National Reconnaissance Office and SpaceX, and was for a spy satellite network consisting of hundreds of satellites functioning as a swarm. The satellites will have imaging capabilities, and the satellite network will enable the US government to have continuous surveillance of nearly anywhere around the globe. The Starshield network is also planned to be more resilient to attack from other powers. Starshield's imaging capabilities are designed to have superior resolution over most existing U.S. government spying systems. Northrop Grumman was selected to partner with SpaceX, with insiders noting that "it is in the government's interest to not be totally invested in one company run by one person". As early as 2020, SpaceX was designing, building, and launching customized satellites based on variants of the Starlink satellite bus for the National Reconnaissance Office (NRO). In October 2020, SDA awarded SpaceX an initial $150 million dual-use contract to develop 4 satellites to detect and track ballistic and hypersonic missiles.
The first batch of satellites was originally scheduled to launch in September 2022 to form part of the Tracking Layer Tranche 0 of the Space Force's National Defense Space Architecture. The launch was rescheduled multiple times but eventually took place in April 2023. In 2020, SpaceX hired retired four-star general Terrence J. O'Shaughnessy, who according to some sources is associated with Starlink's military satellite development and according to one source is listed as a "chief operating officer" at SpaceX. While still on active duty, O'Shaughnessy advocated before the United States Senate Committee on Armed Services for a layered capability with lethal follow-on that incorporates machine learning and artificial intelligence to gather and act upon sensor data quickly. As of 2024, Terrence O'Shaughnessy reportedly has had a high-level role at Starshield, though there is no indication that SpaceX is working on anything related to lethal weapons. SpaceX was not awarded a contract for the larger Tranche 1, with awards going to York Space Systems, Lockheed Martin Space, and Northrop Grumman Space Systems. As Starlink was being relied on in the Russo-Ukrainian war, expert on battlefield communications Thomas Wellington argued that Starlink signals, because they use narrow focused beams, are less vulnerable to interference and jamming by the enemy in wartime than satellites flying in higher orbits. Although no lethal weapons are being developed, the technology is being used by the military and "can be integrated onto partner satellites to enable incorporation into the Starshield network." Should the military need the use of SpaceX satellites through the Starshield program, SpaceX "currently has over 3,000 satellites in low Earth orbit that beam the signal back to users' receiver dishes". Another Starshield contract was announced in September 2023, involving communications-focused services for U.S. Space Systems Command.
Under this contract with the US Space Force, SpaceX plans to provide customized satellite communications for the military. This falls under the Space Force's new "Proliferated Low Earth Orbit" program for LEO satellites, in which Space Force will allocate up to $900 million worth of contracts over the next 10 years. Although 16 vendors are competing for awards, the SpaceX contract is the only one to have been issued to date. The one-year Starshield contract was awarded on September 1, 2023. The contract is expected to support 54 mission partners across the Army, Navy, Air Force, and Coast Guard. In February 2024, the United States House Select Committee on Strategic Competition between the United States and the Chinese Communist Party sent a letter to Elon Musk stating that the Starshield program was potentially in breach of contract for not providing access to U.S. troops stationed in Taiwan when "global access" was "possibly" required by the contract. SpaceX responded that they were in full compliance with their U.S. government contracts. SpaceX had notified the Select Committee a week earlier that they were misinformed, but the Select Committee "chose to contact media before seeking additional information [regarding Starshield military use in Taiwan]". In the context of military communication satellites, Col. Eric Felt, director of space architecture at the office of the assistant secretary of the Air Force for space acquisition and integration, said that there are plans to acquire at least 100 Starshield-branded satellites for this purpose by 2029. He said that while the military is an active user of SpaceX's commercial Starlink service, they also want to take advantage of the company's dedicated Starshield product line.
Clare Hopper, head of the Space Force's Commercial Satellite Communications Office (CSCO), stated that demand for Starlink's commercial service is "off the charts" and that currently all of their supported users are still using the commercial Starlink satellite constellation, but that the DoD has "unique service plans that contain privileged capabilities and features that are not available commercially". Launches Between 2020 and March 2024, a dozen Starshield prototypes and operational satellites were launched on Falcon 9. Reuters reported that these satellites have never been acknowledged by SpaceX or the US government and remain classified. Images were posted online of the two SpaceX-built Space Development Agency Tranche 0 Flight 1 Tracking Layer infrared imaging satellites that launched on 2 April 2023. After the launch of Starlink Group 7-16, only 20 of a batch of 22 Starlink satellites were catalogued, and the remaining two were later designated as USA-350 and USA-351. See also Defense Satellite Communications System Militarisation of space Rocket Cargo, another U.S. Space Force program involving SpaceX Starlink in the Russo-Ukrainian War References SpaceX satellites Space technology Communications satellite constellations Communications satellites in low Earth orbit Communications satellites of the United States High throughput satellites Wireless networking Military space program of the United States Military satellites of the United States SpaceX military contracts
SpaceX Starshield
[ "Astronomy", "Technology", "Engineering" ]
1,763
[ "Space technology", "Wireless networking", "Computer networks engineering", "Outer space" ]
76,155,114
https://en.wikipedia.org/wiki/Splayed%20opening
In architecture, a splayed opening is a wall opening that is narrower on one side of the wall and wider on the other. When used for a splayed window, it allows more light to enter the room. In fortifications, a splayed opening is used to broaden the arc of fire (cf. embrasure, loophole). Splayed arch A splayed arch (also sluing arch) is an arch whose springings are not parallel ("splayed"), causing the opening on the exterior side of the arch to be different (usually wider) from the interior one. The intrados of a splayed arch is generally not cylindrical, as it is for a typical (round) arch, but has a conical shape. José Calvo-López, a Spanish scholar of architecture, subdivides splayed arches into symmetrical arches, where both springers form the same angles with the faces of the wall, and "ox horn" arches, where one springer is orthogonal to the wall and the other is not, creating a "warped" intrados (the term "ox horn" should not be confused with the "cow's horn", a design technique that was used for skew arch profiles). Double-splayed window Double-splayed windows, widening towards both wall faces with the narrowest part in the middle of the wall, are considered common in Anglo-Saxon architecture, although the use of this trait for dating is questionable, as English church buildings of the 12th century have such windows too. See also Hagioscope, a splayed opening for observation Squinch, a conical-shaped vault spanning the inner corner between two walls. References Sources Arches and vaults
Splayed opening
[ "Engineering" ]
361
[ "Architecture stubs", "Architecture" ]
76,155,257
https://en.wikipedia.org/wiki/Bayes%20correlated%20equilibrium
In game theory, a Bayes correlated equilibrium is a solution concept for static games of incomplete information. It is both a generalization of the correlated equilibrium perfect information solution concept to Bayesian games, and also a broader solution concept than the usual Bayesian Nash equilibrium thereof. Additionally, it can be seen as a generalized multi-player solution of the Bayesian persuasion information design problem. Intuitively, a Bayes correlated equilibrium allows for players to correlate their actions in a way such that no player has an incentive to deviate for every possible type they may have. It was first proposed by Dirk Bergemann and Stephen Morris. Formal definition Preliminaries Let $N = \{1, \dots, n\}$ be a set of players, and $\Theta$ a set of possible states of the world. A game is defined as a tuple $G = \langle (A_i, u_i)_{i=1}^{n}, \psi \rangle$, where $A_i$ is the set of possible actions (with $A = A_1 \times \cdots \times A_n$) and $u_i : A \times \Theta \to \mathbb{R}$ is the utility function for each player $i$, and $\psi \in \Delta(\Theta)$ is a full support common prior over the states of the world. An information structure is defined as a tuple $S = \langle (T_i)_{i=1}^{n}, \pi \rangle$, where $T_i$ is a set of possible signals (or types) each player can receive (with $T = T_1 \times \cdots \times T_n$), and $\pi : \Theta \to \Delta(T)$ is a signal distribution function, informing the probability $\pi(t \mid \theta)$ of observing the joint signal $t \in T$ when the state of the world is $\theta \in \Theta$. By joining those two definitions, one can define $(G, S)$ as an incomplete information game. A decision rule for the incomplete information game $(G, S)$ is a mapping $\sigma : T \times \Theta \to \Delta(A)$. Intuitively, the value $\sigma(a \mid t, \theta)$ of the decision rule can be thought of as a joint recommendation for players to play the joint action $a \in A$ when the joint signal received is $t \in T$ and the state of the world is $\theta \in \Theta$. Definition A Bayes correlated equilibrium (BCE) is defined to be a decision rule $\sigma$ which is obedient: that is, one where no player has an incentive to unilaterally deviate from the recommended joint strategy, for any possible type they may be. Formally, decision rule $\sigma$ is obedient (and a Bayes correlated equilibrium) for game $(G, S)$ if, for every player $i$, every signal $t_i \in T_i$ and every action $a_i \in A_i$, we have $$\sum_{a_{-i}, t_{-i}, \theta} \psi(\theta)\, \pi((t_i, t_{-i}) \mid \theta)\, \sigma((a_i, a_{-i}) \mid (t_i, t_{-i}), \theta)\, u_i((a_i, a_{-i}), \theta) \geq \sum_{a_{-i}, t_{-i}, \theta} \psi(\theta)\, \pi((t_i, t_{-i}) \mid \theta)\, \sigma((a_i, a_{-i}) \mid (t_i, t_{-i}), \theta)\, u_i((a_i', a_{-i}), \theta)$$ for all $a_i' \in A_i$.
That is, every player obtains a higher expected payoff by following the recommendation from the decision rule than by deviating to any other possible action. Relation to other concepts Bayesian Nash equilibrium Every Bayesian Nash equilibrium (BNE) of an incomplete information game can be thought of as a BCE, where the recommended joint strategy is simply the equilibrium joint strategy. Formally, let $(G, S)$ be an incomplete information game, and let $\beta$ be an equilibrium joint strategy, with each player $i$ playing $\beta_i(a_i \mid t_i)$. Therefore, the definition of BNE implies that, for every $i$, $t_i \in T_i$ and $a_i \in A_i$ such that $\beta_i(a_i \mid t_i) > 0$, we have $$\sum_{a_{-i}, t_{-i}, \theta} \psi(\theta)\, \pi((t_i, t_{-i}) \mid \theta) \Big( \prod_{j \neq i} \beta_j(a_j \mid t_j) \Big)\, u_i((a_i, a_{-i}), \theta) \geq \sum_{a_{-i}, t_{-i}, \theta} \psi(\theta)\, \pi((t_i, t_{-i}) \mid \theta) \Big( \prod_{j \neq i} \beta_j(a_j \mid t_j) \Big)\, u_i((a_i', a_{-i}), \theta)$$ for every $a_i' \in A_i$. If we define the decision rule $\sigma$ on $(G, S)$ as $\sigma(a \mid t, \theta) = \prod_{i=1}^{n} \beta_i(a_i \mid t_i)$ for all $t \in T$ and $\theta \in \Theta$, we directly get a BCE. Correlated equilibrium If there is no uncertainty about the state of the world (e.g., if $\Theta$ is a singleton), then the definition collapses to Aumann's correlated equilibrium solution. In this case, $\sigma \in \Delta(A)$ is a BCE if, for every player $i$ and action $a_i \in A_i$, we have $$\sum_{a_{-i}} \sigma(a_i, a_{-i})\, u_i(a_i, a_{-i}) \geq \sum_{a_{-i}} \sigma(a_i, a_{-i})\, u_i(a_i', a_{-i})$$ for every $a_i' \in A_i$, which is equivalent to the definition of a correlated equilibrium for such a setting. Bayesian persuasion Additionally, the problem of designing a BCE can be thought of as a multi-player generalization of the Bayesian persuasion problem from Emir Kamenica and Matthew Gentzkow. More specifically, let $v : A \times \Theta \to \mathbb{R}$ be the information designer's objective function. Then her ex-ante expected utility from a BCE decision rule $\sigma$ is given by: $$V(\sigma) = \sum_{a, t, \theta} \psi(\theta)\, \pi(t \mid \theta)\, \sigma(a \mid t, \theta)\, v(a, \theta)$$ If the set of players is a singleton, then choosing an information structure $S$ to maximize $V(\sigma)$ is equivalent to a Bayesian persuasion problem, where the information designer is called a Sender and the player is called a Receiver. References Game theory equilibrium concepts
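For small finite games, the obedience condition can be checked by direct enumeration. The sketch below uses a singleton state space, uninformative signals, and a hypothetical game-of-chicken payoff matrix (all chosen for illustration, none from the source), so the BCE condition reduces to Aumann's correlated equilibrium; it verifies that the classic 1/3-1/3-1/3 recommendation is obedient while always recommending mutual defection is not:

```python
from itertools import product

def is_obedient(actions, states, signals, psi, pi, sigma, u, tol=1e-9):
    """Brute-force check of the BCE obedience condition: for every player i,
    signal t_i and recommended action a_i, no deviation a_i' may raise
    player i's expected payoff."""
    n = len(actions)
    for i in range(n):
        for ti in signals[i]:
            for ai in actions[i]:
                for ai_dev in actions[i]:
                    gain = 0.0
                    for theta in states:
                        for t in product(*signals):
                            if t[i] != ti:
                                continue
                            for a in product(*actions):
                                if a[i] != ai:
                                    continue
                                # weight psi(theta) * pi(t|theta) * sigma(a|t,theta)
                                w = psi[theta] * pi[theta][t] * sigma[(t, theta)][a]
                                a_dev = a[:i] + (ai_dev,) + a[i + 1:]
                                gain += w * (u[i][(a_dev, theta)] - u[i][(a, theta)])
                    if gain > tol:
                        return False
    return True

# Hypothetical game of chicken: one state "s", one trivial signal per player.
ACTS = (("C", "D"), ("C", "D"))
payoffs = {("C", "C"): (6, 6), ("C", "D"): (2, 7),
           ("D", "C"): (7, 2), ("D", "D"): (0, 0)}
u = [{(a, "s"): payoffs[a][i] for a in product(*ACTS)} for i in range(2)]
psi = {"s": 1.0}
signals = ((0,), (0,))
pi = {"s": {(0, 0): 1.0}}

# Classic correlated equilibrium: 1/3 each on (C,C), (C,D), (D,C).
ce = {((0, 0), "s"): {("C", "C"): 1 / 3, ("C", "D"): 1 / 3,
                      ("D", "C"): 1 / 3, ("D", "D"): 0.0}}
print(is_obedient(ACTS, ("s",), signals, psi, pi, ce, u))   # True

# Always recommending (D, D) is not obedient: either player gains by playing C.
dd = {((0, 0), "s"): {("C", "C"): 0.0, ("C", "D"): 0.0,
                      ("D", "C"): 0.0, ("D", "D"): 1.0}}
print(is_obedient(ACTS, ("s",), signals, psi, pi, dd, u))   # False
```

With more states and informative signals, the same checker enumerates the full sums over $a_{-i}$, $t_{-i}$ and $\theta$ in the obedience inequality.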
Bayes correlated equilibrium
[ "Mathematics" ]
717
[ "Game theory", "Game theory equilibrium concepts" ]
76,155,268
https://en.wikipedia.org/wiki/Chemical%20dumps%20in%20ocean%20off%20Southern%20California
During the 20th century, a large amount of chemical waste was dumped into the Pacific Ocean within the Southern California Bight off the West Coast of the United States. Dumped materials include DDT, WW II munitions, radioactive waste, PCBs, petroleum products, and sulfuric acid. The chemical waste was dumped in at least 14 offshore locations, ranging from the Channel Islands in the north, to the shores off Ensenada, Mexico in the south. The Environmental Protection Agency has designated one of the offshore sites as a subunit of the Montrose Chemical Superfund site. After studying the offshore site, the EPA is planning to leave the waste in place, and cover it with a layer of sediment. History Dumping in the mid-20th century From the 1930s until the early 1970s, multiple government agencies (including the California Regional Water Quality Control Board and the U.S. Army Corps of Engineers) approved ocean disposal of domestic, industrial, and military waste at 14 deep-water sites off the coast of Southern California. Waste disposed included refinery wastes, filter cakes and oil drilling wastes, chemical wastes, refuse and garbage, military explosives and radioactive wastes. From 1946 to 1970, over 56,000 barrels of radioactive waste were dumped into the eastern Pacific Ocean, according to a 1999 report by the International Atomic Energy Agency. The barrels were dumped at sites ranging from Alaska to Southern California. Montrose Chemical Corporation manufactured DDT during the years 1947 to 1983 at its plant near Torrance, California. The plant discharged wastewater containing the now-banned pesticide into Los Angeles sewers that emptied into the Pacific Ocean off White Point on the Palos Verdes Shelf. The manufacturing process resulted in groundwater and surface soil contamination on and near the Montrose plant property. Estimates of discharged DDT range from 800 to 1000 tons, between the late 1950s and the early 1970s.
Montrose, in addition to dumping DDT, also dumped sulfuric acid, which was a byproduct of the DDT manufacturing process. The acid was transported to the dump sites on barges operated by California Salvage Company. The Montrose Corporation site, consisting of , is now an EPA Superfund site. Other industries also discharged PCBs into the Los Angeles sewer system that ended up on the Palos Verdes Shelf. The Palos Verdes Shelf is located off the coast of Palos Verdes (between Point Fermin and Point Vicente) and covers 43 square kilometers (17 square miles). Not all DDT waste was in the form of barrels. California Salvage, a company that provided waste disposal services during the 1960s, transported DDT on barges to "dumping site 2" (about halfway between Palos Verdes and Santa Catalina island) and dumped it directly into the ocean, as a liquid. Analysis by the EPA suggests that most of the DDT measured in the Southern California waters is from the barge disposal, rather than the barrels. Military munitions, including Hedgehogs, Mark 9 depth charges, anti-submarine weapons and smoke devices, were found on the ocean floor. These WW II munitions were commonly disposed of in the ocean before the 1970s. The EPA concluded that over 3 million tons of petroleum products were dumped in the Southern California waste sites, including refinery wastes, filter cakes and oil drilling wastes. Research and remediation In 1973, the Southern California Coastal Water Research Project (SCCWRP) published a report that identified 14 waste dump sites in the Southern California Bight. Starting in 1975, contaminated waste disposal in the San Pedro Channel was prohibited. Thereafter, uncontaminated dredge materials continued to be disposed of at approved EPA sites in the San Pedro Channel. Since 1985, fish consumption advisories and health warnings have been posted in Southern California because of elevated DDT and PCB levels. 
Bottom-feeding fish are particularly at risk for high contamination levels. Consumption of white croaker, which has the highest contamination levels, should be avoided. Other bottom-feeding fish, including kelp bass, rockfish, and sculpin, are also highly contaminated. As a part of the Superfund project, the EPA is looking to reinforce the commercial and recreational fishing ban on white croaker. In October 1989, the former Montrose Chemical facility in Torrance was added to the EPA's Superfund National Priorities List. The offshore Palos Verdes Shelf dumping site is an "Operable Unit" of that Montrose Chemical Superfund Site. In 1990, the United States and California filed lawsuits against several companies that had industrial facilities near the Palos Verdes peninsula, citing damages to the nearby marine environment. The defendants included Montrose Chemical, Imperial Chemical Industries, Rhône-Poulenc, and Westinghouse Electric Corporation. In December 2000, Montrose Chemical and three other corporations settled their lawsuits for a total between $73 and $77 million. When combined with prior lawsuits, this brought the total up to $140 million to fund the restoration of the Palos Verdes Shelf marine environment. Until as recently as 2007, bald eagles on Santa Catalina Island were unable to reproduce because the DDT caused their eggshells to become too thin and to break open before the eagle was fully developed. California sea lions have high levels of DDT and a high rate of cancer, which is rare in wild animals. In 2017, after studying various approaches to remediation for the Palos Verdes Shelf, the EPA decided to leave the waste in place, and cover it with a layer of sediment. In 2020, the US Army Corps of Engineers published a study outlining a plan to dredge sediment from Queens Gate Channel (a deep water passage leading into the Port of Long Beach) and deposit it over the Palos Verdes Shelf.
In early 2021, a sonar survey of the ocean floor uncovered more than 25,000 barrel-like objects that possibly contained DDT and other toxic chemicals. The mission included a team of 31 scientists and engineers, led by the Scripps Institution of Oceanography and the National Oceanic and Atmospheric Administration. In 2023, an expedition led by the Scripps Institution of Oceanography re-surveyed the area, and used high-resolution photography. They confirmed the large number of barrels, and the photography revealed a large number of munitions on the ocean floor. In 2024, a team of scientists from the University of California at Santa Barbara discovered evidence that low-level radioactive waste had been dumped in the ocean during the 1960s. The material was probably dumped by California Salvage, a now-defunct company that also dumped DDT in the ocean during the 1960s. Future plans As of 2024, the EPA and the Army Corps of Engineers are actively working on remediation of the Montrose site (in Torrance) and the Palos Verdes Shelf. None of the 14 numbered offshore locations in the 1973 SCCWRP map (see map below) have been designated as operable units of the EPA's Montrose Superfund site, and hence are not subject to remediation efforts as of 2024. However, the EPA is performing initial studies on "site 2" from the 1973 SCCWRP map. Whether additional waste will be discovered in the ocean is an open question. According to environmental scientist Mark Gold from the Natural Resources Defense Council, “[t]he more we look, the more we find, and every new bit of information seems to be scarier than the last... This has shown just how egregious and harmful the dumping has been off our nation’s coasts, and that we have no idea how big of an issue and how big of a problem this is nationally.” Locations The Southern California Coastal Water Research Project (SCCWRP) published a report in March 1973 that identified 14 waste dump sites in the Southern California Bight.
The SCCWRP map does not include the Palos Verdes Shelf site. The map below shows 13 of the 14 dump sites from the 1973 SCCWRP map; the missing fourteenth site is off the southern edge of the map, located at 31.847120N 118.57122W, off the coast of Mexico. The map below uses the same site numbering as the 1973 SCCWRP map. The map below includes two sites which are not included in the 1973 map: the Palos Verdes Shelf site, and the Montrose Chemical facility in Torrance. List of dumped chemicals and waste Chemicals and other waste that have been documented in waste sites off the Southern California shore include: DDT Munitions, including depth charges and smoke devices. Radioactive waste PCBs Petroleum products, including refinery wastes, filter cakes and oil drilling wastes. Sulfuric acid See also Marine pollution Toxic waste Superfund sites References External links Report from EPA including map of 14 dump sites in ocean 1973 report from SCCWRP with detailed maps of disposal sites 1985 report on dumping, published by California Regional Water Quality Control Board Environment and health Disasters in California Environmental issues in California Environmental controversies Environmental disasters in the United States Superfund sites in California Waste disposal incidents in the United States 1973 in the environment 2021 in the environment Environment of California
Chemical dumps in ocean off Southern California
[ "Chemistry", "Technology", "Environmental_science" ]
1,850
[ "Ocean pollution", "Hazardous waste", "Water pollution" ]
76,155,381
https://en.wikipedia.org/wiki/Leiden%20algorithm
The Leiden algorithm is a community detection algorithm developed by Traag et al. at Leiden University. It was developed as a modification of the Louvain method. Like the Louvain method, the Leiden algorithm attempts to optimize modularity in extracting communities from networks; however, it addresses key issues present in the Louvain method, namely poorly connected communities and the resolution limit of modularity. Improvement over Louvain method Broadly, the Leiden algorithm uses the same two primary phases as the Louvain algorithm: a local node moving step (though the method by which nodes are considered in Leiden is more efficient) and a graph aggregation step. However, to address the issues with poorly connected communities and the merging of smaller communities into larger communities (the resolution limit of modularity), the Leiden algorithm employs an intermediate refinement phase in which communities may be split to guarantee that all communities are well-connected. Consider, for example, the following graph: Three communities are present in this graph (each color represents a community). Additionally, the center "bridge" node (represented with an extra circle) is a member of the community represented by blue nodes. Now consider the result of a node-moving step which merges the communities denoted by red and green nodes into a single community (as the two communities are highly connected): Notably, the center "bridge" node is now a member of the larger red community after node moving occurs (due to the greedy nature of the local node moving algorithm). In the Louvain method, such a merging would be followed immediately by the graph aggregation phase. However, this causes a disconnection between two different sections of the community represented by blue nodes.
In the Leiden algorithm, the graph is instead refined: The Leiden algorithm's refinement step ensures that the center "bridge" node is kept in the blue community to ensure that it remains intact and connected, despite the potential improvement in modularity from adding the center "bridge" node to the red community.

Graph components

Before defining the Leiden algorithm, it will be helpful to define some of the components of a graph.

Vertices and edges

A graph is composed of vertices (nodes) and edges. Each edge is connected to two vertices, and each vertex may be connected to zero or more edges. Edges are typically represented by straight lines, while nodes are represented by circles or points. In set notation, let $V$ be the set of vertices, and $E$ be the set of edges:

$G = (V, E)$

where $e_{ij} \in E$ is the directed edge from vertex $v_i$ to vertex $v_j$. We can also write this as an ordered pair:

$e_{ij} = (v_i, v_j)$

Community

A community is a unique set of nodes:

$C_i \subseteq V$

and the union of all communities must be the total set of vertices:

$\bigcup_i C_i = V$

Partition

A partition is the set of all communities:

$\mathcal{P} = \{C_1, C_2, \ldots, C_n\}$

Partition Quality

How communities are partitioned is an integral part of the Leiden algorithm. How partitions are decided can depend on how their quality is measured. Additionally, many of these metrics contain parameters of their own that can change the outcome of their communities.

Modularity

Modularity is a widely used quality metric for assessing how well a set of communities partitions a graph. The equation for this metric is defined for an adjacency matrix, A, as:

$Q = \frac{1}{2m} \sum_{ij} \left[ A_{ij} - \frac{k_i k_j}{2m} \right] \delta(c_i, c_j)$

where: $A_{ij}$ represents the edge weight between nodes $i$ and $j$ (see Adjacency matrix); $k_i$ and $k_j$ are the sums of the weights of the edges attached to nodes $i$ and $j$, respectively; $m$ is the sum of all of the edge weights in the graph; $c_i$ and $c_j$ are the communities to which nodes $i$ and $j$ belong; and $\delta$ is the Kronecker delta function:

$\delta(c_i, c_j) = 1$ if $c_i = c_j$, and $0$ otherwise.

Reichardt Bornholdt Potts Model (RB)

One of the most widely used metrics for the Leiden algorithm is the Reichardt Bornholdt Potts Model (RB).
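As an illustration (not part of the original article), the modularity metric can be evaluated for an unweighted, undirected graph in a few lines of Python, using the equivalent per-community form: the fraction of edges inside each community minus the squared fraction of total degree belonging to that community.

```python
from collections import defaultdict

def modularity(edges, community):
    """Modularity of a partition of an undirected, unweighted graph.

    edges: list of (u, v) pairs; community: dict mapping node -> label.
    Uses Q = sum_c [ e_c/m - (D_c / 2m)^2 ], which is algebraically
    equivalent to the adjacency-matrix form of the definition.
    """
    m = len(edges)
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    # fraction of edges falling inside communities
    intra = sum(1 for u, v in edges if community[u] == community[v])
    # total degree per community (D_c)
    comm_degree = defaultdict(int)
    for node, d in degree.items():
        comm_degree[community[node]] += d
    return intra / m - sum(d * d for d in comm_degree.values()) / (4 * m * m)

# Two triangles joined by a single bridge edge (2-3).
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
community = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"}
q = modularity(edges, community)  # 6/7 - 1/2 ≈ 0.357
```

Grouping each triangle as a community gives Q ≈ 0.357, reflecting that far more edges fall inside the communities than a random (configuration-model) graph would predict.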
This model is used by default in most mainstream Leiden algorithm libraries under the name RBConfigurationVertexPartition. This model introduces a resolution parameter $\gamma$ and is highly similar to the equation for modularity. This model is defined by the following quality function for an adjacency matrix, A, as:

$Q_{RB} = \frac{1}{2m} \sum_{ij} \left[ A_{ij} - \gamma \frac{k_i k_j}{2m} \right] \delta(c_i, c_j)$

where $\gamma$ represents a linear resolution parameter.

Constant Potts Model (CPM)

Another metric similar to RB is the Constant Potts Model (CPM). This metric also relies on a resolution parameter $\gamma$. The quality function is defined as:

$Q_{CPM} = \sum_{c} \left[ e_c - \gamma \binom{n_c}{2} \right]$

where $e_c$ is the number of edges inside community $c$ and $n_c$ is the number of nodes in community $c$.

Understanding Potts Model resolution parameters/Resolution limit

Typically, Potts models such as RB or CPM include a resolution parameter in their calculation. Potts models were introduced as a response to the resolution limit problem that is present in modularity maximization based community detection. The resolution limit problem is that, for some graphs, maximizing modularity may cause substructures of a graph to merge and become a single community, and thus smaller structures are lost. These resolution parameters allow modularity adjacent methods to be modified to suit the requirements of the user applying the Leiden algorithm to account for small substructures at a certain granularity. The figure on the right illustrates why resolution can be a helpful parameter when using modularity based quality metrics. In the first graph, modularity only captures the large scale structures of the graph; however, in the second example, a more granular quality metric could potentially detect all substructures in a graph.

Algorithm

The Leiden algorithm starts with a graph of disorganized nodes (a) and sorts it by partitioning them to maximize modularity (the difference in quality between the generated partition and a hypothetical randomized partition of communities). The method it uses is similar to the Louvain algorithm, except that after moving each node it also considers that node's neighbors that are not already in the community it was placed in.
This process results in our first partition (b), also referred to as $P$. Then the algorithm refines this partition by first placing each node into its own individual community and then moving nodes from one community to another to maximize modularity. It does this iteratively until each node has been visited and moved, and each community has been refined - this creates partition (c), the refined partition $P_{refined}$. Then an aggregate network (d) is created by turning each community into a node. $P_{refined}$ is used as the basis for the aggregate network, while $P$ is used to create its initial partition. Because we use the original partition in this step, we must retain it so that it can be used in future iterations. These steps together form the first iteration of the algorithm. In subsequent iterations, the nodes of the aggregate network (which each represent a community) are once again placed into their own individual communities and then sorted according to modularity to form a new $P$, forming (e) in the above graphic. In the case depicted by the graph, the nodes were already sorted optimally, so no change took place, resulting in partition (f). Then the nodes of partition (f) would once again be aggregated using the same method as before, with the original partition still being retained. This portion of the algorithm repeats until each aggregate node is in its own individual community; this means that no further improvements can be made. The Leiden algorithm consists of three main steps: local moving of nodes, refinement of the partition, and aggregation of the network based on the refined partition. The Fast Louvain method is borrowed by the authors of Leiden from "A Simple Acceleration Method for the Louvain Algorithm". All of the functions in the following steps are called using our main function Leiden, depicted below:
function Leiden_community_detection(Graph G, Partition P)
    do
        P = fast_louvain_move_nodes(G, P)
            /* Call the function to move the nodes to communities (more details in function below). */
        done = (|P| == |V(G)|)
            /* If the number of communities in P equals the number of nodes in G, set the done flag to True to end the do-while loop, as this means that each node has been aggregated into its own community. */
        if not done
            P_refined = get_p_refined(G, P)
                /* This is a crucial part of what separates Leiden from Louvain, as this refinement of the partition enforces that only nodes that are well connected within their community are considered to be moved out of the community (more detail in function refine_partition_subset below). */
            G = aggregate_graph(G, P_refined)
                /* Aggregates communities into single nodes for the next iteration (details in function below). */
            P = {{v | v ⊆ C, v ∈ V(G)} | C ∈ P}
                /* This line takes nodes from the communities in P and breaks them down so that each node is treated as its own singleton community (a community made up of one node). */
        end if
    while not done
    return flattened(P)
        /* Return the final partition, where all nodes of G are listed in one community each. */
end function

Step 1: Local Moving of Nodes

First, we move the nodes of the graph into neighboring communities to maximize modularity (the difference in quality between the generated partition and a hypothetical randomized partition of communities). In the above image, our initial collection of unsorted nodes is represented by the graph on the left, with each node's unique color representing that they do not belong to a community yet. The graph on the right is a representation of this step's result, the sorted graph; note how the nodes have all been moved into one of three communities, as represented by the nodes' colors (red, blue, and green).
function fast_louvain_move_nodes(Graph G, Partition P)
    Q = queue(V(G))
        /* Place all of the nodes of G into a queue to ensure that they are all visited. */
    while Q not empty
        v = Q.pop_front()
            /* Select the first node from the queue to visit. */
        C_prime = arg max_{C ∈ P ∪ ∅} ∆H_P(v → C)
            /* Set C_prime to be the community in P, or the empty set (no community), that provides the maximum increase in the quality function H when node v is moved into that community. */
        if ∆H_P(v → C_prime) > 0
            /* Only consider moves that result in a positive change in the quality function. */
            v → C_prime
                /* Move node v to community C_prime. */
            N = {u | (u, v) ∈ E(G), u ∉ C_prime}
                /* Create a set N of nodes that are direct neighbors of v but are not in the community C_prime. */
            Q.add(N - Q)
                /* Add all of the nodes from N to the queue, unless they are already in Q. */
        end if
    return P
        /* Return the updated partition. */
end function

Step 2: Refinement of the Partition

Next, each node in the network is assigned to its own individual community, and nodes are then moved from one community to another to maximize modularity. This occurs iteratively until each node has been visited and moved, and is very similar to the creation of $P$, except that each community is refined after a node is moved. The result is our refined partition $P_{refined}$, as shown on the right. Note that we are also keeping track of the communities from $P$, which are represented by the colored backgrounds behind the nodes.

function get_p_refined(Graph G, Partition P)
    P_refined = get_singleton_partition(G)
        /* Assign each node in G to a singleton community (a community by itself). */
    for C ∈ P
        P_refined = refine_partition_subset(G, P_refined, C)
            /* Refine the partition within each of the communities of P. */
    end for
    return P_refined
        /* Return the newly refined partition. */
end function

function refine_partition_subset(Graph G, Partition P, Subset S)
    R = {v | v ∈ S, E(v, S − v) ≥ γ * degree(v) * (degree(S) − degree(v))}
        /* For node v, which is a member of subset S, check whether E(v, S − v) (the edges of v connected to other members of the community S, excluding v itself) is above a certain scaling factor. degree(v) is the degree of node v and degree(S) is the total degree of the nodes in the subset S. This statement essentially requires that if v is removed from the subset, the community will remain intact. */
    for v ∈ R
        if v in singleton_community
            /* If node v is in a singleton community, meaning it is the only node. */
            T = {C | C ∈ P, C ⊆ S, E(C, S − C) ≥ γ * degree(C) * (degree(S) − degree(C))}
                /* Create a set T of communities where E(C, S − C) (the edges between community C and the rest of subset S) is greater than the threshold γ * degree(C) * (degree(S) − degree(C)). */
            Pr(C_prime = C) ∝ exp((1/θ) * ∆H_P(v → C))   if ∆H_P(v → C) ≥ 0
                              0                           otherwise,   for C ∈ T
                /* If moving node v to community C changes the quality function in the positive direction, set the probability that C_prime = C proportional to exp((1/θ) * ∆H_P(v → C)); otherwise set it to 0, for all of the communities in T. */
            v → C_prime
                /* Move node v into a community C_prime drawn at random with these probabilities. */
        end if
    end for
    return P
        /* Return the refined partition. */
end function

Step 3: Aggregation of the Network

We then convert each community in $P_{refined}$ into a single node. Note how, as is depicted in the above image, the communities of $P$ are used to sort these aggregate nodes after their creation.

function aggregate_graph(Graph G, Partition P)
    V = P
        /* Set the communities of P as the individual nodes of the new graph. */
    E = {(C, D) | (u, v) ∈ E(G), u ∈ C ∈ P, v ∈ D ∈ P}
        /* If u is a member of subset C of P, and v is a member of subset D of P, and u and v share an edge in E(G), then we add a connection between C and D in the new graph. */
    return Graph(V, E)
        /* Return the new graph's nodes and edges. */
end function

function get_singleton_partition(Graph G)
    return {{v} | v ∈ V(G)}
        /* This is the function where we assign each node in G to a singleton community (a community by itself). */
end function

We repeat these steps until each community contains only one node, with each of these nodes representing an aggregate of nodes from the original network that are strongly connected with each other.

Limitations

The Leiden algorithm produces a high-quality partition which places nodes into distinct communities. However, Leiden creates a hard partition, meaning nodes can belong to only one community. In many networks, such as social networks, nodes may belong to multiple communities, and in this case other methods may be preferred. Leiden is more efficient than Louvain, but in the case of massive graphs it may still result in extended processing times. Recent advancements have boosted the speed using a "parallel multicore implementation of the Leiden algorithm". The Leiden algorithm does much to overcome the resolution limit problem. However, there is still the possibility that small substructures can be missed in certain cases. The selection of the gamma parameter is crucial to ensure that these structures are not missed, as it can vary significantly from one graph to the next.

References

Algorithms Network theory
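To make the local moving phase concrete, here is a simplified Python sketch (illustrative only, not the reference implementation; function names are invented). It performs queue-based greedy node moving on an unweighted, undirected graph, moving each node to the adjacent community with the largest modularity gain and re-queueing neighbors of moved nodes, in the spirit of the fast_louvain_move_nodes pseudocode above:

```python
from collections import defaultdict, deque

def local_move(adj, community):
    """Queue-based greedy local moving on an undirected, unweighted graph.

    adj: dict mapping each node to a list of its neighbors.
    community: dict mapping each node to its current community label
               (modified in place and returned).
    """
    m = sum(len(nbrs) for nbrs in adj.values()) / 2  # number of edges
    degree = {v: len(nbrs) for v, nbrs in adj.items()}
    comm_degree = defaultdict(float)  # total degree per community
    for v in adj:
        comm_degree[community[v]] += degree[v]
    queue = deque(adj)
    in_queue = set(adj)
    while queue:
        v = queue.popleft()
        in_queue.discard(v)
        old = community[v]
        comm_degree[old] -= degree[v]  # temporarily remove v
        links = defaultdict(int)  # edges from v into each adjacent community
        for u in adj[v]:
            links[community[u]] += 1
        # modularity change from placing v into community c
        def gain(c):
            return links.get(c, 0) / m - degree[v] * comm_degree[c] / (2 * m * m)
        best, best_gain = old, gain(old)
        for c in links:
            g = gain(c)
            if g > best_gain:
                best, best_gain = c, g
        community[v] = best
        comm_degree[best] += degree[v]
        if best != old:
            # re-visit neighbors that did not end up in v's new community
            for u in adj[v]:
                if community[u] != best and u not in in_queue:
                    queue.append(u)
                    in_queue.add(u)
    return community

# Two triangles joined by a bridge edge (2-3), each node starting alone.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
community = local_move(adj, {v: v for v in adj})
```

Starting from singleton communities, the sketch recovers the two triangles as separate communities. A full Leiden implementation would follow this step with the refinement and aggregation phases described above, and would use a quality function with a resolution parameter rather than plain modularity.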
Leiden algorithm
[ "Mathematics" ]
3,304
[ "Algorithms", "Mathematical logic", "Applied mathematics", "Graph theory", "Network theory", "Mathematical relations" ]
76,155,969
https://en.wikipedia.org/wiki/Voie%20verte
A voie verte or greenway is an independent route reserved for non-motorized traffic, such as pedestrians and cyclists. Voies vertes are developed with a view to integrated development that enhances the environment, heritage, quality of life, and user-friendliness. In Europe, they have been organized since October 1997 within the framework of the European Green Network to coordinate and regulate uses that are often prohibited in certain countries or that compete with motorized practices. Context In this regard, towpaths, old rural paths, and disused railway tracks are favored routes for the development of voies vertes. If managed appropriately (through sustainable gardening and restoration ecology, and without the use of pesticides in the surroundings, which can then potentially play a role in the green infrastructure and blue network), voies vertes are one of the elements of sustainable development policies in the relevant areas. For English speakers, greenways refers to voies vertes, but also more generally to "a road that is good from an environmental point of view" (Turner, 1995) or - in England, according to a survey cited by Turner in 2006 - "a linear space containing elements planned, designed, and managed for multiple purposes, including ecological, recreational, cultural, aesthetic, and others compatible with the concept of sustainable land use", or a wide range of landscape and urban planning strategies including, to varying degrees, an environmental concern associated with transportation infrastructure, the edges of which have often acquired special value and are sometimes associated with the concept of a biological corridor in Europe. History and evolution From 1975 to 1995, voies vertes proliferated significantly in the urban landscapes of so-called developed countries. For example, by 1995, more than 500 communities were building them in North America alone.
They address new human needs while also extending some of the functions of ancient rural roads. More than simple facilities or landscaping, they increasingly aim to provide a counterbalance to the loss of natural landscape in the context of increasing urbanization and agricultural industrialization. As times changed, the notion of chemins verts (green paths) or corridors verts (green corridors) evolved to meet new needs and challenges. Three distinct stages (or "generations") of voies vertes can be identified as forms of urban and peri-urban landscape: The first generation consisted of wooded paths, bordered by grassy and flowered embankments, or ancestral walking paths, complementary to road networks; Recreational and discovery trails, or routes away from traffic zones, providing access to rivers, streams, ridges, and urban fabric, allotment gardens, etc., followed. Generally, automobiles were excluded (reserved lanes); The latest generation is often more multifunctional, primarily reserved for soft travel and leisure, sometimes also for landscape enhancement, while also seeking to address certain vital needs of fauna and flora (and sometimes fungi, with the conservation of deadwood). Ditches, swales, and flood-prone areas can also play a role in water and flood management (in urban or rural areas). Path edges are designed and managed to act as wildlife corridors with a potential buffer strip. Like grassy strips or other types of buffer zones, some voies vertes also contribute to improving water quality (with, for example, ditches and swales serving as natural wetlands). They also provide resources for outdoor education, landscape discovery, and interpretation. Planners must therefore adopt multidisciplinary approaches, sometimes merging formerly opposing disciplines such as civil engineering, architecture, landscape ecology, sustainable gardening, or wetland ecology. In France, the term voies vertes tends to overlap with that of the véloroutes (cycle routes) and voies vertes network.
Network status In Belgium, a network of of voies vertes was already defined in 2003, of which were developed. In the Walloon Region, they form the RAVeL network. In Flanders, there is a network of towpaths, railway trails, and other independent cycle paths. Most are integrated into the numbered-node cycle networks of the provinces, or belong to LF-routes (Dutch: lange-afstandsfietsroute, long-distance tourist cycle routes) or the bicycle highway network (Dutch: fietssnelweg, utilitarian voies vertes providing direct routes between and around cities). In the Netherlands, the situation and terminology are comparable to Flanders, with the difference that there are few rail trails and many other independent cycle paths. In France, a decree of 16 September 2004 introduced voies vertes into the Highway Code: voies vertes are defined as roads "." In Switzerland, there is a cross-border voie verte from Geneva to Annemasse. A voie verte through Lausanne (along the railroad tracks) is scheduled for completion in 2018. Features and Benefits They are most often developed on old railway lines, towpaths, roads closed to automobile traffic, and cultural routes (Roman roads, pilgrimage routes). They have certain characteristics: Ease of access: their low or nonexistent slopes allow for use by all types of users, including those with reduced mobility; Safety due to their physical separation from roadways and appropriate intersection design; Continuity of routes with alternative solutions in case of obstacles; Environmental respect along the paths and encouragement for users to respect it. Voies vertes also offer services, located in preserved old facilities such as former railway stations and lockkeeper's houses. These services can be of various types: accommodation, museums, bike rental, equestrian accommodation, community centers, etc. They cater to both local users and tourists. Voies vertes are provided with information (maps, brochures, etc.)
about the route itself and nearby sites. For example, several tens of kilometers of the former coastal railway of the Chemins de Fer de Provence have been converted into a cycle path between Toulon and Pramousquier (in the municipality of Le Lavandou). This example illustrates the main criticism of voies vertes, namely that they sometimes contribute to downgrading, and therefore definitively condemning, railway lines that could potentially be reopened for collective transport and the decarbonization of travel in peri-urban or rural areas, rather than that traffic taking up space on roads. This competition between two complementary modes can therefore seem ironic in an era of energy transition and increasing decarbonization of travel. Photographs Notes and references See also Aménagement cyclable Bibliography Related articles Greenway (landscape) RAVeL network Green infrastructure Long-distance cycling route Rail trail Otago Central Rail Trail External links Team VéloTousTerriens Evasion Rouen Association Européenne des Voies Vertes Association Française des Véloroutes et Voies Vertes France Vélo Tourisme Carte de France des voies vertes Le portail touristique national des parcours à vélo et de voies vertes Observatoire Européen des Voies Vertes 4e Conférence européenne sur les voies vertes, Actes du colloque sur les Voies vertes urbaines et périurbaines (6,7,8 novembre 2003), Liège - Belgique Traffic signs Sustainable development Ecology
Voie verte
[ "Biology" ]
1,491
[ "Ecology" ]
76,156,650
https://en.wikipedia.org/wiki/Steptoean%20positive%20carbon%20isotope%20excursion
The Steptoean positive carbon isotope excursion (SPICE) is a global chemostratigraphic event which occurred during the upper Cambrian period between 497 and 494 million years ago. This event corresponds with the ICS Guzhangian-Paibian Stage boundary and the Marjuman-Steptoean stage boundary in North America. The general signature of the SPICE event is a positive δ13C excursion, characterized by a 4 to 6 ‰ (per mille) shift in δ13C values within carbonate successions around the world. SPICE was first described in 1993, and then named later in 1998. In both these studies, the SPICE excursion was identified and trends were observed within Cambrian formations of the Great Basin of the western United States. Age The age of the SPICE is dated to between 497 and 494 Ma, where it has primarily been identified through the use of relative dating and biostratigraphy. The onset of SPICE is generally accepted to correspond with the second wave of the End-Marjuman Biomere Extinction, and its termination corresponds to the End-Steptoean Biomere Extinction. Using trilobite and brachiopod index fossils linked to these extinctions, the upper and lower boundaries of the event can be defined. The beginning of the SPICE is identified by the extinction of shallow water polymerid trilobites, later replaced by deep water olenimorph trilobites following the observed peak δ13C value of the SPICE event. The age of SPICE can also be determined based on its correlation with the well-known Sauk II-Sauk III Sequence boundary in North America. Furthermore, in addition to biostratigraphic markers, the 3-million-year time frame of the SPICE event has also been determined using calculated deposition rates and the length of some of the more extensively studied SPICE sequences.
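For reference (an illustrative aside, not from the article): δ13C values are reported in delta notation, the per-mille deviation of a sample's 13C/12C ratio from a reference standard. A minimal Python sketch, assuming the conventional VPDB standard ratio of about 0.0112372:

```python
def delta13C(r_sample, r_standard=0.0112372):
    """Delta notation: per-mille (‰) deviation of a sample's 13C/12C
    ratio from a reference standard (default: approximate VPDB ratio)."""
    return (r_sample / r_standard - 1) * 1000

# A +4 ‰ excursion, at the low end of the SPICE shift, corresponds to a
# 13C/12C ratio only 0.4% above the standard.
shift = delta13C(0.0112372 * 1.004)
```

A 4 to 6 ‰ positive shift, as in the SPICE event, thus reflects a fractional enrichment of well under one percent in the heavy isotope, which is why high-precision mass spectrometry is needed to trace such excursions.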
Localities, geology and δ13C characteristics Localities The SPICE event is expressed globally, with known formations in 11 countries: the United States, China, Australia, South Korea, Argentina, Canada, France, Kazakhstan, Russia (Siberia), Scotland and Sweden (ordered by greatest to least number of localities). These locations span five modern continents (North America, Asia, Australia, Europe and South America) and represent five upper Cambrian paleocontinents: Laurentia, Gondwana, Kazakhstania, Siberia, and Baltica. All formations containing SPICE intervals formed between the paleolatitudes of 30°N and 60°S. For a full list of SPICE localities and formations, see the following maps and table. [Interactive map data omitted: 13 GeoJSON point localities in North America, South America, Europe, Asia and Australia.] Geology Formations containing SPICE excursions are highly variable, with geologic characteristics differing greatly among localities. Stratigraphic thickness in particular has a very large range between locations, from the Wangliangyu section of China, which is less than 3 m thick, to the Kulyumbe section of Siberia, which is greater than 800 m. This variability of stratigraphic thickness suggests that regional deposition rates during the 497 Ma to 494 Ma SPICE period were not globally uniform but regionally dependent. Furthermore, formations containing the SPICE excursion represent a wide variety of lithologies, facies and water depths. In terms of lithology, all SPICE intervals are contained within carbonate units of mixed carbonate-siliciclastic sequences. The most common lithologies for SPICE intervals are micritic limestones and carbonate shales, generally interbedded with thin layers of calcareous mudstone. SPICE intervals have also been observed in dolostone units, although these are less common than the limestones. SPICE intervals are also highly variable when it comes to facies, with examples from shallow, intermediate and deep water settings (see map in the localities section). Considering the two most prominent areas of study, Laurentian formations (USA) tend to have stronger representation of shallow and intermediate facies (shallow/near-shore, shelf, intrashelf basin), while Gondwanan sections (China & Australia) have better representation of deep water facies (slope and basin), along with shallow and intermediate facies. 
Stages of SPICE Although the magnitude of the SPICE interval is highly variable from location to location, with maximum excursion values ranging from 0.64 ‰ to 8.03 ‰, the interval can be identified by a similar pattern observed in each sequence. This pattern is divided into 6 distinct stages: pre-SPICE, early SPICE, rising SPICE, plateau, falling SPICE, and post-SPICE (see figure for a visual representation of each stage). Stage 1: Pre-SPICE All areas of the section prior to the onset of the SPICE interval. δ13C values remain near 0 ‰, similar to modern marine dissolved inorganic carbon. Stage 2: Early SPICE Onset of SPICE, characterized by a slow increase in δ13C from 0 to approximately 1 ‰, suggesting a gradual increase in organic carbon burial and decrease in oceanic 12C. Stage 3: Rising SPICE Rapid increase in δ13C from the early-SPICE value to the maximum value. This shift is generally between 3 ‰ and 6 ‰, suggesting a rapid increase in organic carbon burial. The onset of the rising SPICE also generally corresponds to fossil indicators of the second stage of the end-Marjuman biomere extinction. Stage 4: Plateau δ13C values fluctuate but remain near the maximum value for a period of time. This stage is not observed in all SPICE intervals; after reaching the maximum value, most intervals proceed immediately into stage 5, the falling SPICE. Stage 5: Falling SPICE Rapid decrease from the maximum δ13C value to near the standard ocean water value (0 ‰). The rate of decrease in the falling SPICE is generally more rapid than the rate of increase in the rising SPICE, and this stage is generally interpreted as ocean water returning to standard δ13C levels. Stage 6: Post-SPICE All areas of the section immediately following the termination of SPICE. 
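The six-stage pattern described above can be sketched as a simple threshold-based labeller over a measured δ13C series. The thresholds (0.2 ‰ for background, 1 ‰ separating early from rising SPICE, 90% of peak for the plateau) and the series itself are illustrative assumptions, not values from any published section:

```python
# Illustrative sketch: label points of a stratigraphic delta-13C series
# (in per mille, ordered bottom to top) with the six SPICE stages.
def label_spice(values):
    peak_i = max(range(len(values)), key=lambda i: values[i])
    peak = values[peak_i]
    labels = []
    for i, v in enumerate(values):
        if v < 0.2:  # near standard ocean water (0 per mille)
            labels.append("pre-SPICE" if i < peak_i else "post-SPICE")
        elif v >= 0.9 * peak:  # fluctuating near the maximum
            labels.append("plateau")
        elif i < peak_i:  # before the peak: slow rise, then rapid rise
            labels.append("early SPICE" if v <= 1.0 else "rising SPICE")
        else:  # after the peak: rapid fall back toward 0
            labels.append("falling SPICE")
    return labels

series = [0.1, 0.5, 1.0, 3.0, 4.5, 4.4, 2.0, 0.1]
print(label_spice(series))
# → ['pre-SPICE', 'early SPICE', 'early SPICE', 'rising SPICE',
#    'plateau', 'plateau', 'falling SPICE', 'post-SPICE']
```

Real sections would additionally need smoothing and handling of data gaps; the sketch only captures the qualitative shape of the excursion.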
Factors affecting the magnitude of the δ13C anomaly Despite being a global event, the magnitude of δ13C values observed within a SPICE interval appears to be strongly affected by a variety of local conditions. A few common trends that have been determined are as follows: Higher paleolatitude formations (greater than 30°S) tend to have lower δ13C values throughout the sequence. Shallower facies have lower values than deeper facies. Limestone tends to have marginally higher δ13C than dolostone. Proposed mechanism Regional sea level changes, cooling of upper sea water from the deep ocean, ocean anoxia/euxinia, and trilobite and brachiopod extinctions are all associated with the SPICE event. This combination of factors creates the conditions for the primary mechanism of formation of the SPICE: an increase in the burial of organic carbon, caused by increased primary productivity (e.g. photosynthesis). The spread of deep ocean anoxia or euxinia, indicated by a positively correlated δ34SCAS excursion and increased pyrite burial, created conditions encouraging the preservation of deposited organic material and stressful conditions for marine organisms. Initially, these conditions would have spread slowly, limited to deep environments and having small impacts on the global carbon system. This slow initial change is represented by the gradual and small δ13C changes of the early SPICE. As time passed, however, anoxic/euxinic conditions intensified and moved up the shelf into shallower facies. Combined with other factors, such as sea level regression (e.g. the Sauk II–Sauk III boundary in North America) and global cooling of the atmosphere and oceans, this imposed increasing pressure on ocean ecosystems. This pressure likely triggered the second wave of the End-Marjuman Biomere Extinction, resulting in the disappearance of many shallow-water trilobite and brachiopod species from the fossil record at this time. 
With the extinction of these trilobites and brachiopods, photosynthetic primary producers likely flourished as a result of decreased predation. This, combined with increased burial under expanding anoxic conditions and reduced bioturbation from now-extinct ocean-floor-dwelling organisms, would likely cause δ13C values to rise sharply. This sharp rise is captured in the rising SPICE stage, in which δ13C values reflect the rapid change in primary productivity and burial following the extinction. Finally, moving out of the End-Marjuman Biomere Extinction and into the falling SPICE, the oceans likely experienced a significant recovery in biodiversity. Following the extinction of shallow water taxa, the better-adapted deep water olenimorph trilobite fauna began to diversify, filling the shallow water environments left vacant by the End-Marjuman Biomere Extinction. This return of secondary producers, along with reductions in anoxic conditions caused by changes in climate and stabilizing ocean levels, reduced primary productivity and organic carbon burial, rapidly lowering δ13C values and stabilizing them at the more standard ocean δ13C values observed in the post-SPICE stage. Controversies One question still being researched in relation to the SPICE is the possibility of an undescribed negative δ13C excursion directly before the early SPICE stage. This negative excursion does not appear at all localities; it is theorized that it may have remained undetected as a result of sampling discrepancies, or that it only represents a local event. Another key controversy of the SPICE is its exact timing in relation to the end-Marjuman biomere extinction and the end-Steptoean biomere extinction. Current research can only link the onset of the SPICE to the second wave of the end-Marjuman extinction; more research is required to determine if and how the first wave of the extinction relates to SPICE. 
Furthermore, many questions remain about the SPICE and its implications for large biodiversity events that occurred after it, such as the end-Steptoean biomere extinction and the Great Ordovician Biodiversification Event (GOBE). Some research suggests that the turnover in trilobite and brachiopod species that occurred during the SPICE may have a direct correlation with these subsequent events. Comparison to other anomalies Hirnantian Isotopic Carbon Excursion (HICE) Similar to the SPICE, the HICE event is linked to changes in climate and falls in global sea level, resulting in anoxic conditions and an increase in organic carbon burial. δ13C values for the HICE have a similar positive magnitude, ranging from ~+2‰ to ~+7‰. Furthermore, like the SPICE, the HICE event likely occurred over a short time period, taking place in the upper Ordovician and lasting less than 1.3 Ma. One difference, though, is that the HICE is generally restricted to shallow water carbonate facies. Silurian Ireviken event This early Silurian (431 Ma) δ13C excursion also shows similarities to the SPICE event, with positive maximum δ13C values around 4.5‰. As with the SPICE, research suggests this event is linked to falling ocean levels and a faunal turnover. The Ireviken event also has a positive δ34SCAS excursion correlated with the δ13C excursion, suggesting the influence of anoxic conditions and increased organic carbon burial. Furthermore, the Ireviken event also has global occurrences, but their expression is highly influenced by local facies characteristics, similar to SPICE. References Cambrian events Stratigraphy Events that forced the climate Isotope excursions
Steptoean positive carbon isotope excursion
[ "Chemistry" ]
3,030
[ "Isotope excursions", "Isotopes" ]
76,156,847
https://en.wikipedia.org/wiki/C14H18FNO
The molecular formula C14H18FNO (molar mass: 235.302 g/mol) may refer to: Fluorexetamine 2F-NENDCK Molecular formulas
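The quoted molar mass follows directly from the formula and standard atomic weights; a quick check in Python (atomic weights are IUPAC standard values, rounded to three decimals):

```python
# Verify the molar mass of C14H18FNO from standard atomic weights (g/mol).
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "F": 18.998, "N": 14.007, "O": 15.999}
FORMULA = {"C": 14, "H": 18, "F": 1, "N": 1, "O": 1}  # C14H18FNO

molar_mass = sum(ATOMIC_WEIGHT[el] * n for el, n in FORMULA.items())
print(round(molar_mass, 3))  # → 235.302
```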
C14H18FNO
[ "Physics", "Chemistry" ]
55
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
76,158,248
https://en.wikipedia.org/wiki/WAY-261240
WAY-261240 is a drug which acts as a potent and selective 5-HT2C receptor agonist, though its affinity at other serotonin receptors has not been disclosed. It produces anorectic effects in animal studies. A large family of related derivatives is known. See also Lorcaserin WAY-163909 WAY-470 References Serotonin receptor agonists Chloroarenes Chromanes Amines
WAY-261240
[ "Chemistry" ]
93
[ "Amines", "Bases (chemistry)", "Functional groups" ]
76,158,967
https://en.wikipedia.org/wiki/Sanfordiacaulis
Sanfordiacaulis is an enigmatic genus of early Carboniferous plant from New Brunswick, Canada, described in 2024, distinguished by its unusual crown morphology and known from five specimens. It was discovered in 2017 near Norton, now part of Valley Waters. Description Sanfordiacaulis is an indeterminate vascular plant, roughly in height, with a non-woody stem wide and a crown width of . Its leaves are arranged in a tightly packed, non-Fibonacci spiral, with the portion of the trunk bearing leaves estimated to have had over 200 laterals based on petiole distribution. Etymology Sanfordiacaulis's genus name is derived from the quarry containing the specimens and its owner, Laurie Sanford, whereas its specific name, densifolia, is derived from the dense arrangement of leaves. References Carboniferous Plants described in 2024 Fossil taxa described in 2024 Monotypic plant genera Prehistoric plant genera
Sanfordiacaulis
[ "Biology" ]
189
[ "Enigmatic plant taxa", "Plants" ]
76,159,639
https://en.wikipedia.org/wiki/North%E2%80%93South%20Railway%20%28Brazil%29
North-South Railway (Portuguese: Ferrovia Norte-Sul), also known as EF-151, is a Brazilian longitudinal broad-gauge railroad. It was designed to connect the lines that provide access to Brazil's main ports and producing areas, which had been regionally isolated. Once completed, it will cover 4,155 kilometers and cross the states of Pará, Maranhão, Tocantins, Goiás, Minas Gerais, São Paulo, Mato Grosso do Sul, Paraná, Santa Catarina and Rio Grande do Sul. Its current route stretches from Açailândia (MA) to Estrela d'Oeste (SP). The section between Açailândia (MA) and Porto Nacional (TO) belongs to the VLI concessionaire, and the one between Rio Verde (GO) and Estrela d'Oeste (SP) belongs to the Rumo Logística concessionaire. The stretch between Açailândia (MA) and Anápolis (GO) was completed in 2014, while the stretch between Ouro Verde de Goiás (GO) and Estrela d'Oeste (SP) was completed in 2023. The northern extension has two projects filed with the Ministry of Infrastructure, one by Minerva Participações e Investimentos and one by VALE. The southern extension continues under development with no date for execution. Features The North-South Railway connects with the Carajás Railway in Açailândia (MA), which leads to the port of Itaqui. Departing from Porto Franco (MA), it will connect with the Transnordestina Railway, currently under construction by Transnordestina Logística S.A., after implementation of the line to Eliseu Martins (PI), which will provide alternative access to the ports of Suape (PE) and Pecém (CE). It will also connect with the West-East Integration Railway (FIOL) in Figueirópolis (TO) and with the Trans-Amazonian Railway in Mara Rosa (GO). In Anápolis (GO), north–south connects with the Central-Atlantic Railway, which operates in metre-gauge and requires transshipment. At Estrela d'Oeste (SP), the railroad connects with Rumo Logística's Malha Paulista, which provides access to the port of Santos and the economic and industrial hub of São Paulo. 
From the Malha Paulista, it is also possible to access Ferronorte, which continues to Mato Grosso. The North-South Railway has a minimum curve radius of 343 meters and a maximum ramp of 0.6%, which allows a maximum speed of 83 kilometers per hour. Throughout a portion of its route in Goiás, Tocantins and Maranhão, the railroad runs parallel to the Belém-Brasília highway (BR-153; BR-226 and BR-010) and the Tocantins River. On the border between the states of Goiás and Minas Gerais, it crosses the Paranaíba River, which is part of the Tietê-Paraná Waterway. History The North-South Railway was first discussed in 1985 during the José Sarney government. The initial route envisaged a length of approximately 1,550 kilometers between Açailândia (MA) and Anápolis (GO). Work on the first stretch of 215 kilometers between Açailândia (MA) and Porto Franco (MA) began in 1987 and ended in 1996 under the Fernando Henrique Cardoso administration. During the first term of President Luiz Inácio Lula da Silva, the construction of the railroad to Porto Nacional (TO) and Anápolis (GO) restarted. The first stretch of 146 kilometers between Porto Franco (MA) and Araguaína (TO) was inaugurated in 2007. In October 2007, the operation of the stretch of the North-South Railway between Açailândia (MA) and Porto Nacional (TO) was granted to Vale S.A. for a 30-year period. The company offered the minimum amount of R$1.478 billion, with half being paid on December 21, 2007, and the other half divided into two installments. The stretch granted covered 722 kilometers, but only 361 kilometers between Açailândia (MA) and Araguaína (TO) had been completed by October 2007. The money provided by the concession enabled the construction of the 359 kilometer portion between Araguaína (TO) and Porto Nacional (TO). In December 2008, a 490 kilometer stretch between Açailândia (MA) and Colinas (TO) was completed. In March 2010, the 133 kilometer stretch between Colinas (TO) and Guaraí (TO) was inaugurated. 
The stretch between Colinas (TO) and Porto Nacional (TO), originally scheduled to be inaugurated in September 2010 by President Luiz Inácio Lula da Silva, began running at the end of 2012. In 2011, Vale S.A. spun off the north–south operation, merged it with the Central-Atlantic Railway and established a company dedicated to logistics called VLI Multimodal S.A., which began to operate and manage the Açailândia (MA) to Porto Nacional (TO) stretch. In January 2011, work began on the 684 kilometer segment between Ouro Verde de Goiás (GO) and Estrela d'Oeste (SP). It was the first route outside of the initial project, which had been designed to reach Anápolis (GO) and was modified to extend to the port of Rio Grande (RS). The 855 kilometer section between Porto Nacional (TO) and Anápolis (GO), scheduled for 2010, was delivered in 2014 by President Dilma Rousseff without any operational terminal, which required investments of R$700 million in intermodal freight yards, viaducts and signage by the new operator. The segment between Ouro Verde de Goiás (GO) and Estrela d'Oeste (SP), scheduled for conclusion in the second half of 2018, was delivered 95% complete. On July 20, 2018, President Michel Temer signed Provisional Measure 845, which established the National Railway Development Fund (Fundo Nacional de Desenvolvimento Ferroviário - FNDF). It guaranteed that the entire amount paid for the concession of the stretch of the North-South Railway between Porto Nacional (TO) and Estrela d'Oeste (SP) would be allocated to the construction of the northern extension, connecting Açailândia (MA) to the port of Vila do Conde, in Barcarena (PA). On March 28, 2019, the Jair Bolsonaro government auctioned off the 1,537 kilometer central stretch of the North-South Railway between Porto Nacional (TO) and Estrela d'Oeste (SP). Rumo Logística acquired the railroad for R$2,719,530,000 for a 30-year non-extendable concession contract, signed in Goiás on July 31, 2019. 
On March 4, 2021, Rumo launched operations on the stretch between São Simão (GO) and Estrela d'Oeste (SP) after investments of R$711 million. A terminal was built in São Simão with a storage capacity of 42,000 tons and a transport capacity of 5.5 million tons of soybeans, corn and soybean meal per year. On May 29, 2021, the first train loaded with soybeans left the Multimodal Terminal in Rio Verde (GO) for the port of Santos. On June 9, 2022, the Iturama sugar terminal in the Triângulo Mineiro started operating in partnership with Usina Coruripe. On May 25, 2023, Rumo Logistica concluded the 50 kilometers of track remaining for the connection of the railroad between the towns of Goianira and Ouro Verde (GO). Route Southern Extension The segment under study linking Panorama (SP) and Rio Grande (RS) is currently called the Southern Extension (Prolongamento Sul). It was divided into two sections: Panorama (SP) to Chapecó (SC), 951 kilometers; Chapecó (SC) to Rio Grande (RS), 833 kilometers; Originally, the stretch linking Panorama (SP) and Porto Murtinho (MS), labeled EF-267, was called the Southern Extension of the North-South Railway. On May 8, 2008, it was included in the National Road Plan, not as part of the north–south, but as a railroad that will connect to it, called the Pantanal Railway. North Extension The stretch linking the port of Vila do Conde in Barcarena (PA) and Açailândia (MA) is called the Northern Extension (Prolongamento Norte) of the North-South Railway. On December 23, 2021, Minerva Participações e Investimentos S.A. signed the contract for the construction and operation of the stretch. On February 3, 2022, the Ministry of Infrastructure signed a contract with 3G Empreendimentos e Consultoria Ltda. for the construction of the section between Barcarena and Santana do Araguaia (PA). The companies' projects coincide in the municipalities of Barcarena and Rondon do Pará (PA). 
Branch lines Currently, the railroad includes two branch lines: A private section approximately 25 kilometers long owned by Suzano Papel e Celulose that connects its factory in Imperatriz (MA) to the North-South Railway in João Lisboa (MA); A section in Anápolis approximately 50 kilometers long that connects the North-South Railway in Ouro Verde de Goiás (GO) to the Agroindustrial District of Anápolis (GO), where it joins the metre-gauge network of the Central-Atlantic Railway. Operational stretches Porto Nacional (TO) to Açailândia (MA) The 720-kilometer stretch between Porto Nacional (TO) and Açailândia (MA) is operated by VLI Multimodal S.A., owner of the Ferrovia Norte-Sul S.A. concessionaire. Compared to the use of trucks, the route saves 8% on grain transportation to the port of Itaqui. It mainly transports soybeans (55%), cellulose (25%) and fuels (10%). The trip between Porto Nacional and the Port of Itaqui lasts 3.5 days on average. The main product is soybeans, which are shipped from Porto Franco (MA) and Colinas do Tocantins (TO). It also transports pulp for Suzano Maranhão, which built its own 28 kilometer branch line from the factory in Imperatriz (MA) to the railroad destined for the port of Itaqui. It runs fuel transportation from São Luís to the terminal in Porto Nacional and plans to ship alcohol from Tocantins to the port of Itaqui. The volume handled by the terminal rose from 110 million to 270 million liters of fuel from 2014 to 2016. Between 2012 and 2015, grain flow increased from 2.6 to 4.2 million tons, 55% of the total volume transported on the stretch. The Porto Nacional Integrator Terminal has a storage capacity of 60,000 tons of grain and can process 2.6 million tons a year. The Palmeirante Terminal (TO) holds a 90,000 ton warehouse and can dispatch up to 3.4 million tons a year. The stretch is able to transport up to 9 million tons of grain a year, but suffers from a shortage of access roads to the tracks. 
From 2014 to 2017, 11.7 million tons of grain were shipped between Porto Nacional (TO) and the port of Itaqui. From January to October 2017, 5.5 million tons of grain were transported, 30% more than in 2015. In April 2017, a ferry crossing of about 4 kilometers over the Araguaia River, from Santana do Araguaia (PA) to Caseara (TO), began operating for trucks bound for Porto Nacional, which reduced travel times and increased by 7% the volume received from eastern and northeastern Mato Grosso and southern Pará. Between January and October 2017, the port of Itaqui transported 1.184 million tons of pulp and paper produced by the Suzano Papel e Celulose unit in Imperatriz (MA), delivered by the north–south and Carajás railroads. In 2018, it transported 6.3 million tons of soybeans, corn and soybean meal to the port of Itaqui. In 2019, the volume was 7.9 million tons of grain from eastern and northeastern Mato Grosso, southern Pará and MATOPIBA (Maranhão, Tocantins, Piauí and Bahia), the new agricultural hotspot, an increase of 25% on the previous year. Porto Nacional (TO) to Anápolis (GO) From its inauguration until the concession, the stretch between Porto Nacional and Anápolis, under VALEC's management, transported 18 locomotives (February 2015), 26,000 tons of soybean meal (December 2015), 13,000 tons of shredded wood (December 2016 to March 2017), 8,000 tons of manganese ore (October 2017) and 62 bars of rail measuring 240 meters each (December 2017). The capacity to transport new cargo through this section was limited by the lack of terminals and the inability to get the cargo to a port, constrained by the capacity of the Carajás Railway, which could only absorb new trains after its duplication was completed in 2018. Ouro Verde de Goiás (GO) to Estrela d'Oeste (SP) In the first half of 2021, Rumo Logística launched operations on the Central-South extension of the North-South Railway and built two terminals. 
In São Simão (GO), a terminal with a static capacity of 42,000 tons and a transport capacity of 5.5 million tons of soybeans, corn and soybean meal per year was constructed with investments of R$711 million. In Rio Verde (GO), a large Multimodal Terminal for soybeans and fertilizers with the capacity to handle 11 million tons per year was built. On June 9, 2022, Rumo Logística and Usina Coruripe inaugurated a sugar terminal in Iturama (MG) capable of handling 2 million tons of export sugar per year. The Santa Helena de Goiás Multimodal Terminal is under construction by Infra S.A. The entire stretch between Porto Nacional (TO) and Estrela d'Oeste (SP) became fully operational in June 2023. See also Rail transport in Brazil References Rail transport in Brazil Rail transport Trains Railway lines in Brazil
North–South Railway (Brazil)
[ "Technology" ]
3,103
[ "Trains", "Transport systems" ]
63,129,086
https://en.wikipedia.org/wiki/Edith%20M.%20Taylor
Edith M. Taylor (1899-1993) was a Canadian biochemist known primarily for her work in producing novel techniques in vaccine production, especially her work on the production of diphtheria toxoid, while employed as a researcher by Connaught Laboratories in Toronto, Canada. Early life and education Taylor was born in 1899 in Toronto to a family of 10 children. She attended the University of Toronto and graduated with a PhD in Chemistry in 1924. Career In 1925, Taylor began work with Connaught Laboratories, a public medical research group associated with the University of Toronto. One of her first projects at Connaught involved major contributions to the culturing process of diphtheria toxoid, a non-toxic form of diphtheria toxin safe for vaccination. Connaught had been producing diphtheria toxoid since 1927 and, though their product was effective, it also produced unwanted side effects. Taylor led a research team dedicated to streamlining and improving the production of the toxoid. Taylor's cultures were grown in a broth consisting of veal infusion and hog stomach treated with calcium chloride and nicotinic acid. The toxin cultures produced through Taylor's method were more potent than those produced using commercially available broths. Taylor also contributed to the development of a stabilized version of the Schick toxin made using borate-gelatin-saline. This stabilized toxin did not need to be diluted as heavily as the destabilized variant, allowing a more effective administration of the toxin during Schick diphtheria tests. Taylor collaborated with Leone Farrell and Robert J. Wilson to develop an improved large-scale pertussis vaccine production technique using a liquid medium. Taylor and Farrell published a paper in the Canadian Journal of Public Health suggesting that constant agitation of the samples could promote continuous growth, and that the introduction of a small amount of formalin could reduce clumping. 
Taylor also conducted a study examining the effectiveness of variants of the pertussis vaccine. She compared a concentrated, a heated, and a control version of the vaccine using several tests. She was not, however, able to find consistency among the results and reached no conclusion as to which vaccine was the most effective. Taylor applied in 1948 to patent a novel strategy for producing heparin, an anticoagulant. The patent was granted to the University of Toronto in 1952; patents were also granted in both Canada and Germany. The method involves mincing animal tissue (the lungs, intestines, and pancreases of sheep and cows) and heating the minced samples with water to allow the proteins to coagulate. The coagulated sample is then digested using a proteolytic enzyme to yield an extract from which pure heparin can be extracted. This mechanism proved more effective than the old method, which applied the digestive enzyme to the sample before separating the proteins, increasing the yield of heparin from each sample. In 1949, Taylor developed an apparatus and technique for using formalin vapor to sterilize plastic syringes that would melt in a steam-based sterilization system. Taylor also contributed to Connaught Laboratories' research on the polio vaccine. In 1957, she developed a variant of the Nash colorimetric method for estimating the formaldehyde content of polio vaccines. Taylor was awarded the title of Officer of the Most Excellent Order of the British Empire in 1946 for her development of a mass-produced tetanus vaccine distributed to soldiers during World War II. After many more years of work at Connaught Laboratories at the University of Toronto, she retired in 1962. 
References 1899 births 1993 deaths Canadian biochemists Canadian women biologists Canadian women chemists 20th-century Canadian women scientists Scientists from Toronto Women biochemists Canadian Officers of the Order of the British Empire
Edith M. Taylor
[ "Chemistry" ]
799
[ "Biochemists", "Women biochemists" ]
63,130,267
https://en.wikipedia.org/wiki/A%20Guide%20to%20the%20Classification%20Theorem%20for%20Compact%20Surfaces
A Guide to the Classification Theorem for Compact Surfaces is a textbook in topology, on the classification of two-dimensional surfaces. It was written by Jean Gallier and Dianna Xu, and published in 2013 by Springer-Verlag as volume 9 of their Geometry and Computing series. The Basic Library List Committee of the Mathematical Association of America has recommended its inclusion in undergraduate mathematics libraries. Topics The classification of surfaces (more formally, compact two-dimensional manifolds without boundary) can be stated very simply, as it depends only on the Euler characteristic and orientability of the surface. An orientable surface of this type must be topologically equivalent (homeomorphic) to a sphere, torus, or more general handlebody, classified by its number of handles. A non-orientable surface must be equivalent to a projective plane, Klein bottle, or more general surface characterized by an analogous number, its number of cross-caps. For compact surfaces with boundary, the only extra information needed is the number of boundary components. This result is presented informally at the start of the book, as the first of its six chapters. The rest of the book presents a more rigorous formulation of the problem, a presentation of the topological tools needed to prove the result, and a formal proof of the classification. Other topics in topology discussed as part of this presentation include simplicial complexes, fundamental groups, simplicial homology and singular homology, and the Poincaré conjecture. Appendices include additional material on embeddings and self-intersecting mappings of surfaces into three-dimensional space such as the Roman surface, the structure of finitely generated abelian groups, general topology, the history of the classification theorem, and the Hauptvermutung (the theorem that every surface can be triangulated). 
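The theorem at the book's core admits a compact statement; the following is a standard formulation (a paraphrase, not quoted from the book itself):

```latex
% Classification theorem for compact surfaces (standard statement).
% Every compact connected surface without boundary is homeomorphic to
% exactly one of:
%   - an orientable surface \Sigma_g with g >= 0 handles
%     (g = 0: sphere, g = 1: torus), or
%   - a non-orientable surface N_k with k >= 1 cross-caps
%     (k = 1: projective plane, k = 2: Klein bottle),
% and the Euler characteristic determines the count of handles or cross-caps:
\[
  \chi(\Sigma_g) = 2 - 2g, \qquad \chi(N_k) = 2 - k .
\]
% Hence orientability together with \chi determines the surface up to
% homeomorphism.
```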
Audience and reception This is a textbook aimed at the level of advanced undergraduates or beginning graduate students in mathematics, perhaps after having already completed a first course in topology. Readers of the book are expected to already be familiar with general topology, linear algebra, and group theory. However, as a textbook, it lacks exercises, and reviewer Bill Wood suggests its use for a student project rather than for a formal course. Many other graduate algebraic topology textbooks include coverage of the same topic. However, by focusing on a single topic, the classification theorem, the book is able to prove the result rigorously while remaining at a lower overall level, provide a greater amount of intuition and history, and serve as "a motivating tour of the discipline's fundamental techniques". One reviewer complains that parts of the book are redundant, in particular that the classification theorem can be proven either with the fundamental group or with homology (not needing both); that, on the other hand, several important tools from topology, including the Jordan–Schoenflies theorem, are not proven; and that several related classification results are omitted. Nevertheless, reviewer D. V. Feldman highly recommends the book, Wood writes "This is a book I wish I'd had in graduate school", and reviewer Werner Kleinert calls it "an introductory text of remarkable didactic value". References External links Author's web site for A Guide to the Classification Theorem for Compact Surfaces including a PDF version of Chapter 1 Low-dimensional topology Manifolds Mathematics textbooks 2013 non-fiction books Springer Science+Business Media books
A Guide to the Classification Theorem for Compact Surfaces
[ "Mathematics" ]
683
[ "Low-dimensional topology", "Space (mathematics)", "Topological spaces", "Topology", "Manifolds" ]
63,132,576
https://en.wikipedia.org/wiki/Electrical%20demonstration
Electrical demonstrations during the eighteenth century were performances by experimental philosophers before an audience to entertain with and teach about electricity. Such displays took place in British America as well as across Europe. Their form varied from something similar to modern-day carnival shows to grand displays in exhibition halls and theatres. Amid concerns about the safety of electrical power, these displays sometimes met with pushback. History Ebenezer Kinnersley Ebenezer Kinnersley was one of the first showmen of electricity in British America, touring his electrical displays from 1749 to 1774. His lectures on electrical phenomena did not merely show natural phenomena to the audience; they were interactive demonstrations that required the audience's active participation. In Kinnersley's shows, audience members were able to have some embodied experience of electricity. One of Kinnersley's two touring electrical demonstrations focused on "the newly discovered electrical fire." An audience member of this show would have the opportunity to interact directly with electricity in several different types of demonstrations. In one part of the show they could witness the attraction between positive and negative charges, using a Leyden jar. In another, a charged coin would be placed in someone's mouth; then, using an electrical discharge, Kinnersley would propel the coin across the room. The demonstration of the electrical fire allowed sparks to seemingly fly from participants' fingers, lips or eyes. Kinnersley believed this method of electrical display, relying on the physical senses, would better allow his audience to understand electrical phenomena. These demonstrations did not come without a price: admission typically cost about five shillings a person, well above a day's worth of work for most laborers of the mid-eighteenth century. 
Still, persons of all socioeconomic classes were drawn to such curious displays, Kinnersley's advertisements touting exhibitions of wonder and spectacular displays. Kinnersley toured as an itinerant across British America, taking his displays to colleges, courthouses and coffee houses. These spectacles of electricity were intended not only to teach, but also to entertain Kinnersley's audience. In this way, Kinnersley's shows served as a model for similar scientists seeking to promote their work and further public understanding of their practices, such as Archibald Spencer, Henry Moyes and Samuel Domjen, who took such electrical demonstrations throughout Europe. Other types of electrical displays Exhibitions Electricity was on display at the International Exposition of Electricity in Paris in 1881. The Times reported a few electrical accidents that resulted in fires. Though noting that such incidents aroused some alarm at the exhibition, the editorial sought to play down the events in an effort to preserve confidence in the event. After at least five electrical fires, however, it was no longer possible to avoid directly discussing the danger of electricity when not properly demonstrated or attended to. In 1882, following the electrical accidents of the International Exhibition, there was a desire to use the electrical exhibition at the Crystal Palace to redeem the public's conception of electricity as safe, reliable and economical. This was not without some initial fear that the dangers of electricity would follow, the exhibition opening nearly a month behind schedule to ensure that safety precautions were taken. The British Edison Company took a special interest in the safe display of electricity. The Edison Company displayed a miniature version of its entire distribution system within the Crystal Palace. Additionally, it promoted the use of low-voltage incandescent lights as a low-hazard option. 
During the exhibition there were no reported electrical accidents, to the satisfaction of the technology's proponents. Further, the Times featured Edison's novel displays as a marvelous part of the Palace exhibition, and two of the eight images it included from the Palace exhibits showed these displays. Theatrical displays In theatrical performances, decorative electricity was used in the costumes of female performers. Richard D'Oyly Carte's Savoy Theatre in London was one of the first to use electric ornamentation during performances. See also Experiments and Observations on Electricity References History of electrical engineering
Electrical demonstration
[ "Engineering" ]
792
[ "Electrical engineering", "History of electrical engineering" ]
63,133,048
https://en.wikipedia.org/wiki/Dapagliflozin/saxagliptin/metformin
Dapagliflozin/saxagliptin/metformin, sold under the brand name Qternmet XR among others, is a fixed-dose combination anti-diabetic medication used as an adjunct to diet and exercise to improve glycemic control in adults with type 2 diabetes. It is a combination of dapagliflozin, saxagliptin, and metformin. It is taken by mouth. The drug is marketed by AstraZeneca. The most common side effects include infections of the nose and throat, hypoglycaemia (low blood sugar) when used with a sulphonylurea, and effects on the gut such as nausea (feeling sick), vomiting, diarrhoea, abdominal (tummy) pain and loss of appetite. Dapagliflozin/saxagliptin/metformin was approved for medical use in the United States in May 2019, and in the European Union in November 2019. Its marketing authorisation was withdrawn in the European Union in August 2020, and its approval was withdrawn in the US in April 2021, in both cases at the request of AstraZeneca. Medical uses In the United States, dapagliflozin/saxagliptin/metformin is indicated as an adjunct to diet and exercise to improve glycemic control in adults with type 2 diabetes. In the European Union it is indicated in adults aged 18 years and older with type 2 diabetes: to improve glycemic control when metformin with or without sulphonylurea (SU) and either saxagliptin or dapagliflozin does not provide adequate glycemic control. when already being treated with metformin and saxagliptin and dapagliflozin. References Drugs developed by AstraZeneca Biguanides Chloroarenes Combination diabetes drugs Glucosides Guanidines Phenol ethers SGLT2 inhibitors
Dapagliflozin/saxagliptin/metformin
[ "Chemistry" ]
408
[ "Guanidines", "Functional groups" ]
63,133,062
https://en.wikipedia.org/wiki/Dapagliflozin/metformin
Dapagliflozin/metformin, sold under the brand name Xigduo XR among others, is a fixed-dose combination anti-diabetic medication used as an adjunct to diet and exercise to improve glycemic control in adults with type 2 diabetes. It is a combination of dapagliflozin and metformin and is taken by mouth. Dapagliflozin/metformin was approved for use in the European Union in January 2014, in the United States in February 2014, and in Australia in July 2014. Adverse effects To lessen the risk of developing ketoacidosis (a serious condition in which the body produces high levels of blood acids called ketones) after surgery, the FDA has approved changes to the prescribing information for SGLT2 inhibitor diabetes medicines to recommend they be stopped temporarily before scheduled surgery. Canagliflozin, dapagliflozin, and empagliflozin should each be stopped at least three days before, and ertugliflozin at least four days before, scheduled surgery. Symptoms of ketoacidosis include nausea, vomiting, abdominal pain, tiredness, and trouble breathing. A potential interaction between dapagliflozin and concomitant lithium, causing a reduction in serum lithium levels, was bulletined in 2022. References Drugs developed by AstraZeneca Biguanides Chloroarenes Combination diabetes drugs Glucosides Guanidines Phenol ethers SGLT2 inhibitors
Dapagliflozin/metformin
[ "Chemistry" ]
320
[ "Guanidines", "Functional groups" ]
63,133,081
https://en.wikipedia.org/wiki/Dapagliflozin/saxagliptin
Dapagliflozin/saxagliptin, sold under the brand name Qtern, is a fixed-dose combination anti-diabetic medication used as an adjunct to diet and exercise to improve glycemic control in adults with type 2 diabetes. It is a combination of dapagliflozin and saxagliptin. It is taken by mouth. The most common side effects include upper respiratory tract infection (such as nose and throat infections) and, when used with a sulphonylurea, hypoglycaemia (low blood glucose levels). Dapagliflozin/saxagliptin was approved for medical use in the European Union in July 2016, and in the United States in February 2017. Medical uses In the United States, dapagliflozin/saxagliptin is indicated as an adjunct to diet and exercise to improve glycemic control in adults with type 2 diabetes. In the European Union, it is indicated in adults aged 18 years and older with type 2 diabetes: to improve glycemic control when metformin with or without sulphonylurea (SU) and either saxagliptin or dapagliflozin does not provide adequate glycemic control. when already being treated with saxagliptin and dapagliflozin. References Adamantanes Drugs developed by AstraZeneca Carboxamides Chloroarenes Combination diabetes drugs Dipeptidyl peptidase-4 inhibitors Glucosides Nitriles Nitrogen heterocycles Phenol ethers SGLT2 inhibitors Tertiary alcohols
Dapagliflozin/saxagliptin
[ "Chemistry" ]
336
[ "Nitriles", "Functional groups" ]
63,135,851
https://en.wikipedia.org/wiki/Immunometabolism
Immunometabolism is a branch of biology that studies the interplay between metabolism and immunology in all organisms. In particular, immunometabolism is the study of the molecular and biochemical underpinnings for i) the metabolic regulation of immune function, and ii) the regulation of metabolism by molecules and cells of the immune system. Further categorization includes i) systemic immunometabolism and ii) cellular immunometabolism. Immunometabolism includes metabolic inflammation: a chronic, systemic, low-grade inflammation, orchestrated by metabolic deregulation caused by obesity or aging. The term immunometabolism first appears in the academic literature in 2011, where it is defined as "an emerging field of investigation at the interface between the historically distinct disciplines of immunology and metabolism." A later article defines immunometabolism as describing "the changes that occur in intracellular metabolic pathways in immune cells during activation". Broadly, immunometabolic research records the physiological functioning of the immune system in the context of different metabolic conditions in health and disease. These studies can cover molecular and cellular aspects of immune system function in vitro, in situ, and in vivo, under different metabolic conditions. For example, highly proliferative cells such as cancer cells and activated T cells undergo metabolic reprogramming, increasing glucose uptake to shift towards aerobic glycolysis during normoxia. While aerobic glycolysis is an inefficient pathway for ATP production in quiescent cells, this so-called "Warburg effect" supports the bioenergetic and biosynthetic needs of rapidly proliferating cells. Signalling and metabolic network There are many indispensable signalling molecules connected to metabolic processes, which play an important role both in immune system homeostasis and in the immune response. 
Of these, the most significant are mammalian target of rapamycin (mTOR), liver kinase B1 (LKB1), 5' AMP-activated protein kinase (AMPK), phosphoinositide 3-kinase (PI3K) and protein kinase B (Akt). All of the aforementioned molecules together control the most important metabolic pathways in cells, such as glycolysis, the Krebs cycle and oxidative phosphorylation. To fully understand how all of these molecules and pathways affect immune cells, one must first examine the delicate interplay between them. mTOR mTOR is a serine/threonine protein kinase, which is found in two complexes in cells: mTOR complex 1 and 2 (mTORC1 and mTORC2). mTORC1 is activated through engagement of the T cell receptor (TCR) and the costimulatory molecule cluster of differentiation 28 (CD28). However, it can also be activated by growth factors like IL-7 or IL-2 and by metabolites like glucose or amino acids (leucine, arginine or glutamine). In contrast, less is known about how the mTORC2 pathway functions, but its activation is also achieved through growth factors, as exemplified by IL-2. When activated, mTORC1 negatively regulates autophagy (through inhibiting the ULK complex), shifts the cell towards aerobic glycolysis and glutaminolysis (through activation of c-Myc), and promotes lipid synthesis and mitochondrial remodelling. mTORC2 enhances glycolysis as well, but in contrast to mTORC1, it activates Akt, which in turn promotes glucose transporter 1 (GLUT1) membrane deposition. Through other kinases, it also further promotes cell proliferation and survival. PI3K-Akt PI3K mediates the phosphorylation of phosphatidylinositol-(4,5)-bisphosphate (PIP2) into phosphatidylinositol-(3,4,5)-trisphosphate (PIP3). PIP3 then serves as a scaffold for other proteins which contain a pleckstrin homology (PH) domain. PI3K can be activated, just like mTOR, through the TCR and CD28 and, unlike mTOR, through another costimulatory molecule: the Inducible T-cell COStimulator (ICOS). 
The presence of PIP3 in the membrane recruits many proteins, including phosphoinositide-dependent protein kinase 1 (PDK1), which after its phosphorylation, together with mTORC2, activates Akt, a serine/threonine kinase. As a result, Akt promotes GLUT1 membrane deposition and inhibits the transcription factor forkhead box O (FoxO), whose inactivation acts in synergy with the mTORC2-driven changes mentioned above. LKB1-AMPK Both LKB1 and AMPK are serine/threonine kinases acting predominantly in opposition to the aforementioned molecules. Of the two, LKB1's activation is less well understood, as it depends mainly on cellular localization and on many posttranslational modifications. For instance, the above-mentioned Akt can promote LKB1 inhibition by promoting its nuclear retention. When activated, LKB1 can activate, among other targets, AMPK, whose activation leads to mTORC1 destabilization. Furthermore, AMPK activates the ULK complex and phosphorylates p53 and acetyl-CoA carboxylase (ACC), which promote autophagy, cell cycle arrest and fatty acid oxidation, respectively. Since AMPK can also be activated by adenosine monophosphate (AMP) or by glucose insufficiency, it acts as a sensor of starvation and therefore activates many of the already mentioned catabolic processes, in direct contrast with mTOR, which activates a myriad of anabolic processes. Immune cells Generally speaking, cells whose primary objective is long-term survival or control of inflammation tend, in terms of energy, to rely on the Krebs cycle and lipid oxidation, both coupled with functional oxidative phosphorylation. Among these cells we can include naive T cells, memory T cells, regulatory T cells (Tregs), unstimulated innate immune cells like macrophages, and M2 macrophages. On the contrary, cells whose main function is proliferation, synthesis of different molecules or propagation of inflammation often prefer glycolysis as a source of energy and metabolites. 
These include, for instance, effector T cells and M1 macrophages. T cells Naive T cells have to be kept in a permanent state of quiescence until they encounter their cognate antigen. The quiescent state is sustained by tonic TCR signalling and by IL-7. Tonic TCR signalling is necessary to keep the FoxO transcription factor active, which in turn allows for IL-7R transcription. This enables the T cell to survive and proliferate at a low rate. However, during this tonic TCR signalling, the proteins that control metabolism have to be strictly regulated, because their activation could lead to spontaneous exit from quiescence and differentiation into various T cell subsets, as exemplified by uncontrolled activation of PI3K, which causes the development of Th1 or Th2 cells. Both of the aforementioned signals should lead to mTOR and Akt activation, but in quiescent T cells the tuberous sclerosis complex (TSC) and phosphatase and tensin homolog (PTEN) act against their activation. Therefore, a naive T cell depends predominantly on oxidative phosphorylation and has much lower glucose uptake and ATP production than its activated counterparts (effector T cells). Quiescence exit begins when a T cell encounters its cognate antigen, usually during an infection. The TCR signal together with the costimulation signal leads to downregulation of PTEN and TSC. This allows the phosphorylation cascades of mTOR, Akt and many more kinases to be fully activated. The activity of these cascades results in glucose and glutamine uptake coupled with higher glycolysis and glutaminolysis, which not only supports rapid cell growth but also further promotes mTOR activation. Furthermore, mTOR stimulates lipid synthesis and mitochondrial remodelling, exemplified by increased expression of sterol regulatory element-binding protein (SREBP) and by mitochondria undergoing fission, which causes them to function predominantly as biosynthetic hubs rather than energy production hubs. 
After their activation and metabolic reprogramming, T cells compete with one another, and consequently it is very likely that during the effector phase T cells reach a point where they suffer from a lack of nutrients. In such cases AMPK is activated to balance mTOR signalling and to prevent apoptosis. The described scheme of quiescence exit holds true for inflammatory T cell subsets like Th1, Th2, Th17 and cytotoxic T cells. However, mTOR activity can be detrimental in the case of Tregs. This is shown by the fact that in Tregs high activation of mTORC1, coupled with a higher level of glycolysis, leads to failure of Treg lineage commitment. Therefore, in contrast to inflammatory cell subsets, Tregs rely on oxidative phosphorylation fuelled by lipid oxidation. It is important to note, however, that complete suppression of glycolysis leads to enolase (a glycolytic enzyme) binding to a splice variant of Foxp3, which effectively compromises peripheral Tregs' ability to act as immunosuppressive cells. After the infection is cleared, most of the activated T cells succumb to apoptosis. However, a few of them survive and develop into the memory T cell subsets. For this development the engagement of costimulatory molecules like CD28 appears to be crucial, as co-stimulation manifests in mitochondrial morphology, allowing for higher oxidative phosphorylation while retaining the potential to quickly revert to glycolysis. Moreover, T cell activation causes an overall increase in acetyl-CoA, which is a substrate for histone acetylation. As a result, many genes are acetylated and therefore accessible to transcription even after differentiation into memory subsets, allowing memory T cells to rapidly re-express some effector-related genes. 
The aforementioned changes allow T cells to become memory cells, but what exactly drives memory cell differentiation is still under debate, even though IL-15 seems to be necessary for T cell memory induction. Recently, asymmetric division of mTORC1 during the first divisions after TCR activation has been shown to drive memory cell differentiation in those cells which receive the lower amount of mTORC1. Macrophages Immunometabolism of macrophages is mostly studied in two opposing populations of macrophages: M1 and M2. M1 macrophages are a pro-inflammatory population induced by LPS or IFNγ. This activation leads, as in the case of T cells, to an increase in glucose uptake and glycolysis. What is strikingly different is the Krebs cycle, as in M1 macrophages the cycle is broken in two places. The first break is at the conversion of isocitrate to α-ketoglutarate, owing to the downregulation of isocitrate dehydrogenase. Accumulated citrate is subsequently used for lipid and itaconate synthesis, which are both indispensable for M1 macrophage function. The second break, at the succinate-to-fumarate transition, occurs probably due to itaconate production and causes a build-up of succinate. This triggers ROS production, which stabilizes HIF-1α. This transcription factor further promotes glycolysis and is essential for the activation of inflammatory macrophages. M2 macrophages are anti-inflammatory cells which require IL-4 for their induction. M2 macrophage metabolism is markedly distinct from that of M1 macrophages due to their unbroken Krebs cycle, which after activation is fuelled by upregulated glycolysis, glutaminolysis and fatty acid oxidation. How the fully operational Krebs cycle translates into M2 macrophage function is still poorly understood, but the upregulated pathways allow for production of intermediates (mainly acetyl-CoA and S-adenosyl methionine), which are needed for histone modifications of genes targeted by IL-4 signalling. 
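The energetic trade-off behind the glycolytic shift running through this article (the "Warburg effect" noted in the introduction) can be sketched with back-of-the-envelope numbers. The ATP yields below are approximate textbook values and the flux figures are hypothetical, chosen only to illustrate the point:

```python
# Rough textbook ATP yields per molecule of glucose; exact figures vary by
# source, so treat these as illustrative assumptions.
ATP_GLYCOLYSIS = 2     # net ATP from glycolysis alone
ATP_OXPHOS = 30        # full oxidation (Krebs cycle + oxidative phosphorylation)

# Per glucose, glycolysis alone is ~15x less efficient.
efficiency_ratio = ATP_OXPHOS // ATP_GLYCOLYSIS

# Hypothetical relative glucose-uptake rates: a proliferating cell that raises
# its uptake ~15-fold matches the quiescent cell's ATP output per unit time,
# while glycolytic intermediates are diverted into biosynthesis.
quiescent_atp_rate = 1.0 * ATP_OXPHOS
proliferating_atp_rate = 15.0 * ATP_GLYCOLYSIS

print(efficiency_ratio, quiescent_atp_rate, proliferating_atp_rate)
```

The point of the sketch is that "inefficient" refers only to ATP per glucose; with sufficiently increased flux, aerobic glycolysis can still meet energy demand while feeding carbon into biosynthetic pathways.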
Drug discovery Immunometabolism is an area of growing drug-discovery research investment in numerous areas of medicine, for example in lessening the impact of age-related metabolic dysfunction and obesity on the incidence of type 2 diabetes and cardiovascular disease, cancer, and infectious diseases. In recent years, evidence suggests that immunometabolism is implicated in autoimmune disorders. The metabolic alterations in immune system regulation have provided unique insights into disease pathogenesis and development, as well as potential therapeutic targets. Immunometabolism - from inflammation to sepsis Sepsis-Related Immunometabolic Paralysis Sepsis pathophysiology now includes immunometabolic paralysis, a condition marked by severe abnormalities in cellular energy metabolism. This phenomenon affects both the acute and late stages of the disease, playing a critical role in the immune response during sepsis. Summary Sepsis is a potentially fatal illness brought on by the body's overreaction to an infection. Although there is a strong inflammatory response during the early phase of sepsis, immunometabolic paralysis may appear later on and is linked to a poor prognosis for the patient. Recent research by Shih Chin Cheng and colleagues explores the complex interplay between cellular metabolism and the immune response in sepsis. Important Results • 1. Transition from Oxidative Phosphorylation to Aerobic Glycolysis: During the acute stage of sepsis there is a shift from oxidative phosphorylation to aerobic glycolysis, known as the Warburg effect. This metabolic change is one of the key mechanisms in the initial activation of the host defense against infections. • 2. Impaired Energy Metabolism in Leukocytes: Patients experiencing acute sepsis exhibited extensive impairments in cellular energy metabolism, affecting both leukocyte glycolysis and oxidative metabolism. 
This condition, known as immunometabolic paralysis, is associated with a compromised capacity to react to secondary stimuli. • 3. IFN-γ's Role in Restoring Glycolysis: Interferon-gamma (IFN-γ) is being explored as a possible treatment option. In in vitro tests, IFN-γ therapy partially restored glycolysis in tolerant monocytes, demonstrating its ability to mitigate the metabolic abnormalities linked to immunotolerance. Therapeutic Implications The work emphasizes how cellular metabolism in sepsis might be targeted therapeutically. Although few medicines possessing metabolic-regulatory properties have been investigated, the study emphasizes how important it is to understand and treat immunometabolic paralysis in order to improve outcomes for individuals suffering from sepsis. Conclusion To sum up, the research conducted by Cheng and colleagues provides significant understanding of the intricate relationship between immune response and cellular metabolism in sepsis. It reveals a crucial role for immunometabolic paralysis, a condition marked by impaired energy metabolism, in the course and treatment of sepsis. Further investigation and testing of therapeutic approaches aimed at cellular metabolism should help to improve the management of sepsis. References External links Metabolism Immunology
Immunometabolism
[ "Chemistry", "Biology" ]
3,315
[ "Biochemistry", "Immunology", "Metabolism", "Cellular processes" ]
63,136,947
https://en.wikipedia.org/wiki/Edward%20Teller%20Award
The Edward Teller Award (or the Edward Teller Medal) is an award presented every two years by the American Nuclear Society for "pioneering research and leadership in the use of laser and ion-particle beams to produce unique high-temperature and high-density matter for scientific research and for controlled thermonuclear fusion". It was established in 1991 and is named after Edward Teller. The award carries a $2000 cash prize and an engraved silver medal. Recipients See also List of physics awards List of prizes named after people Publications References Awards established in 1991 Physics awards
Edward Teller Award
[ "Technology" ]
114
[ "Science and technology awards", "Physics awards" ]
63,137,166
https://en.wikipedia.org/wiki/James%20F.%20Drake
For other persons named James Drake, see James Drake (disambiguation). James F. Drake (born June 26, 1947) is an American theoretical physicist who specializes in plasma physics. He is known for his studies on plasma instabilities and magnetic reconnection, for which he was awarded the 2010 James Clerk Maxwell Prize for Plasma Physics by the American Physical Society. Early life and career Drake studied at the University of California, Los Angeles (UCLA), where he received his bachelor's degree in 1969 and his doctorate in 1975. In 1977, he went to the University of Maryland, where he has been a professor since 1987. He has dealt with laser-plasma interaction and plasma turbulence. He is known for explaining the mechanisms of the rapid reconnection of magnetic field lines and the resulting particle acceleration in astrophysical plasmas (for example, on the Sun). In computer simulations with Amitava Bhattacharjee and Michael Hesse (NASA), he was able to demonstrate the explosive nature of the dynamics of the magnetic fields, for example in solar flares. Honors and awards In 1986, Drake was elected a fellow of the American Physical Society. He was subsequently awarded the 2010 James Clerk Maxwell Prize for Plasma Physics for "pioneering investigations of plasma instabilities in magnetically-confined, astrophysical and laser-driven plasmas; in particular, explication of the fundamental mechanism of fast reconnection of magnetic fields in plasmas; and leadership in promoting plasma science". He also received a Senior US Research Scientist Award from the Alexander von Humboldt Foundation. References Fellows of the American Physical Society 1947 births American plasma physicists Plasma physicists Living people
James F. Drake
[ "Physics" ]
339
[ "Plasma physicists", "Plasma physics" ]
63,138,024
https://en.wikipedia.org/wiki/Shahzeen%20Attari
Shahzeen Attari is a professor at the O'Neill School of Public and Environmental Affairs at Indiana University Bloomington. She studies how and why people make the judgements and decisions they do with regard to resource use, and how to motivate climate action. In 2018, Attari was selected as an Andrew Carnegie Fellow in recognition of her work addressing climate change. She was also a fellow at the Center for Advanced Study in the Behavioral Sciences (CASBS) from 2017 to 2018, and received a Bellagio Writing Fellowship in 2022. Early life and education Shahzeen Attari was born in Mumbai, India and grew up in Dubai, United Arab Emirates. As she grew up, she witnessed first-hand how the desert transformed into a metropolis over a short span of time. In coming to understand the massive impacts humans can have on nature, Attari was drawn to work on the environment and human behavior. Attari studied physics and math at the University of Illinois Urbana-Champaign Grainger College of Engineering, earning her B.S. in Engineering Physics in 2004. Drawn to interdisciplinary research, she then went on to earn her M.S. in Civil and Environmental Engineering from Carnegie Mellon College of Engineering in 2005, and her Ph.D. in Civil and Environmental Engineering & Engineering and Public Policy, also from Carnegie Mellon. Her dissertation assessed how demand-side management methods can mitigate carbon emissions. She completed her doctorate in 2009. Research and career Attari is a professor at the O'Neill School of Public and Environmental Affairs at Indiana University Bloomington. Previously, she was a postdoctoral fellow at the Earth Institute's Center for Research on Environmental Decisions (CRED) at Columbia University from 2009 to 2011. Perceptions of energy and water During her Ph.D., Attari conducted a study on how people perceive how much energy different appliances use. 
In this work, Attari and colleagues found that for a sample of 15 activities, participants underestimated energy use and savings by a factor of 2.8 on average, with small overestimates for low-energy activities and large underestimates for high-energy activities. This study, published in the Proceedings of the National Academy of Sciences, highlighted the need for communication campaigns to correct these skewed perceptions and inform individuals of ways in which they can most successfully reduce their energy use. This study has been summarized by The Economist, The New York Times, and the BBC. Later, Attari independently investigated how participants think about water use. In another study published in the Proceedings of the National Academy of Sciences, Attari showed that participants still favor curtailment (doing the same behavior but less of it) over efficiency (switching to more effective technologies that use less energy for the work needed to be done). For a sample of 17 activities, participants underestimated water use by a factor of 2 on average, with large underestimates for high water-use activities. Combining her work on energy and water, Attari showed that perceptions of energy use are far worse than those of water use. Overall, her work has found that participants consistently underestimate their water and energy use and know surprisingly little about which curtailment efforts will have the greatest impact on the environment. She presented these results at TEDx Bloomington, answering the question: why don't people conserve energy and water? Credibility and climate communication Another line of research that Attari and collaborators have worked on is understanding the relationship between a climate communicator's carbon footprint and the effect of their advocacy on participants. 
They find that a communicator's carbon footprint strongly affects their credibility and their audience's intentions to conserve energy, and also affects audience support for public policies advocated by the communicator. They also show that the negative effects of a large carbon footprint on credibility are greatly reduced if the communicator reforms their behavior by reducing their personal carbon footprint. The implications of these results are clear: effective communication of climate science and advocacy of both individual behavior change and public policy interventions are greatly helped when advocates lead the way by reducing their own carbon footprints. With funding from the Andrew Carnegie Fellowship, Attari is conducting a research project titled "Motivating climate change solutions by fusing facts and feelings". Attari has assumed the role of both scientist and activist, using her research to inspire greater change. She regularly gives public lectures and academic talks to communicate her research results and to advocate for solutions. Awards and grants Her awards and honors: Andrew Carnegie Fellow Indiana University Bicentennial Professorship Center for Advanced Study in the Behavioral Sciences Fellowship SN10 – Among top ten scientists to watch under the age of 40, Science News Outstanding Junior Faculty Award, Indiana University Excellence in Teaching, Campus Catalyst Award, Office of Sustainability, Indiana University Attari has received research grants from the following: Carnegie Corporation, Andrew Carnegie Fellowship National Science Foundation – Decision, Risk, and Management Science Environmental Resilience Institute, Indiana University's Prepared for Environmental Change Grand Challenge Initiative Selected publications Her publications include: Shahzeen Z. Attari, David H. Krantz, & Elke U. Weber (2019). Climate change communicators' carbon footprints affect their audience's policy support. Climatic Change, 154(3–4), 529–545. 
Shahzeen Z. Attari, David H. Krantz, & Elke U. Weber (2016). Statements about climate researchers' carbon footprints affect their credibility and the impact of their advice. Climatic Change, 138(1–2), 325–338. Benjamin D. Inskeep & Shahzeen Z. Attari (2014). The Water Shortlist. Environment: Science and Policy for Sustainable Development. Shahzeen Z. Attari (2014). Perceptions of Water Use. Proceedings of the National Academy of Sciences. Jonathan E. Cook & Shahzeen Z. Attari (2012). Paying for What Was Free: Lessons from the New York Times Paywall. Cyberpsychology, Behavior, and Social Networking. DOI: 10.1089/cyber.2012.0251. Shahzeen Z. Attari, Michael L. DeKay, Cliff I. Davidson, and Wändi Bruine de Bruin (2010). Public perceptions of energy consumption and savings. Proceedings of the National Academy of Sciences. Personal life Attari enjoys hiking with her dog, spicy food, and reading science fiction novels. She believes that science fiction books inspire us to reimagine the world we live in. References Year of birth missing (living people) Living people American environmentalists 21st-century American women scientists Carnegie Mellon University College of Engineering alumni Indiana University Bloomington faculty Grainger College of Engineering alumni Scientists from Mumbai 21st-century American engineers American women engineers People from Dubai American climate activists Indian climate activists Energy use comparisons Indian emigrants to the United States Indian environmental scientists
Shahzeen Attari
[ "Environmental_science" ]
1,432
[ "Indian environmental scientists", "Environmental scientists" ]
63,138,033
https://en.wikipedia.org/wiki/Fusion%20Industry%20Association
The Fusion Industry Association is a US-registered non-profit independent trade association for the international nuclear fusion industry. It is headquartered in Washington, D.C. It was founded in 2018 to advocate for policies to accelerate the arrival of fusion power. Its CEO is Andrew Holland, former Chief Operating Officer of the American Security Project. The Fusion Industry Association has 28 members and 35 affiliate members, including nuclear reactor designers, engineering firms, suppliers, academic institutions, and various professional services with business in the nuclear fusion industry, such as research consultancies. The emergence of the Fusion Industry Association can be traced back to the 2013 publication of a white paper on fusion energy by the American Security Project. The Fusion Industry Association's stated advocacy objectives are to encourage private sector fusion companies' partnering with the public sector for applied fusion research, to increase financial support for the industry, and to ensure regulatory certainty. It is seen as one of the main drivers behind the development of Fusion Pilot Plants and supported the fusion energy public-private partnership amendment in H.R.133 - Consolidated Appropriations Act, 2021, which authorized $325 million over 5 years for the partnership program to build fusion demonstration facilities. The Fusion Industry Association has also played a role in the formation of the Congressional Fusion Caucus. Challenges facing the Fusion Industry Association include attracting the billions of dollars of funding necessary to create a commercial fusion power industry; improving the private sector's relationship with the public sector, including the world's largest fusion power science experiment, ITER; internationalizing a Global North-dominated energy development sector by bridging the North–South divide; and addressing the credibility of some of its members. 
See also Canadian Nuclear Association Nuclear Energy Institute Nuclear Industry Association World Nuclear Association References External links Nuclear industry organizations
Fusion Industry Association
[ "Engineering" ]
352
[ "Nuclear industry organizations", "Nuclear organizations" ]
63,138,221
https://en.wikipedia.org/wiki/Runtime%20predictive%20analysis
Runtime predictive analysis (or predictive analysis) is a runtime verification technique in computer science for detecting property violations in program executions inferred from an observed execution. An important class of predictive analysis methods has been developed for detecting concurrency errors (such as data races) in concurrent programs, where a runtime monitor is used to predict errors which did not happen in the observed run, but can happen in an alternative execution of the same program. The predictive capability comes from the fact that the analysis is performed on an abstract model extracted online from the observed execution, which admits a class of executions beyond the observed one. Overview Informally, given an execution trace t, predictive analysis checks for errors in a reordered trace t' of t. The trace t' is called feasible from t (alternatively, a correct reordering of t) if any program that can generate t can also generate t'. In the context of concurrent programs, a predictive technique is sound if it only predicts concurrency errors in feasible executions of the causal model of the observed trace. Assuming the analysis has no knowledge about the source code of the program, the analysis is complete (also called maximal) if the inferred class of executions contains all executions that have the same program order and communication order prefix as the observed trace. Applications Predictive analysis has been applied to detect a wide class of concurrency errors, including: Data races Deadlocks Atomicity violations Order violations, e.g., use-after-free errors Implementation As is typical with dynamic program analysis, predictive analysis first instruments the source program. At runtime, the analysis can be performed online, in order to detect errors on the fly. Alternatively, the instrumentation can simply dump the execution trace for offline analysis. 
The latter approach is preferred for expensive refined predictive analyses that require random access to the execution trace or take more than linear time. Incorporating data and control-flow analysis Static analysis can be first conducted to gather data and control-flow dependence information about the source program, which can help construct the causal model during online executions. This allows predictive analysis to infer a larger class of executions based on the observed execution. Intuitively, a feasible reordering can change the last writer of a memory read (data dependence) if the read, in turn, cannot affect whether any accesses execute (control dependence). Approaches Partial order based techniques Partial order based techniques are most often employed for online race detection. At runtime, a partial order over the events in the trace is constructed, and any unordered pairs of critical events are reported as races. Many predictive techniques for race detection are based on the happens-before relation or a weakened version of it. Such techniques can typically be implemented efficiently with vector clock algorithms, allowing only one pass of the whole input trace as it is being generated, and are thus suitable for online deployment. SMT-based techniques SMT encodings allow the analysis to extract a refined causal model from an execution trace, as a (possibly very large) mathematical formula. Furthermore, control-flow information can be incorporated into the model. SMT-based techniques can achieve soundness and completeness (also called maximal causality), but have exponential-time complexity with respect to the trace size. In practice, the analysis is typically deployed to bounded segments of an execution trace, thus trading completeness for scalability. Lockset-based approaches In the context of data race detection for programs using lock-based synchronization, lockset-based techniques provide an unsound, yet lightweight mechanism for detecting data races. 
These techniques primarily detect violations of the lockset principle, which says that all accesses of a given memory location must be protected by a common lock. Such techniques are also used to filter out candidate race reports in more expensive analyses. Graph-based techniques In the context of data race detection, sound polynomial-time predictive analyses have been developed, with good, close-to-maximal predictive capability, based on graphs. Computational complexity Given an input trace of size n executed by k threads, general race prediction is NP-complete and even W[1]-hard parameterized by k, but admits a polynomial-time algorithm when the communication topology is acyclic. Happens-before races are detected in time, and this bound is optimal. Lockset races over variables are detected in time, and this bound is also optimal. Tools Here is a partial list of tools that use predictive analyses to detect concurrency errors, sorted alphabetically. : a lightweight framework for implementing dynamic race detection engines. : a dynamic analysis framework designed to facilitate rapid prototyping and experimentation with dynamic analyses for concurrent Java programs. : SMT-based predictive race detection. : SMT-based predictive use-after-free detection. See also Model checking Dynamic program analysis Runtime verification References Software testing
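As a concrete illustration of the partial-order approach described above, the following is a minimal sketch of vector-clock happens-before race detection. It is illustrative only: the event encoding, function names, and trace format are assumptions for this sketch, not taken from any particular tool mentioned in the article.

```python
# Minimal sketch of vector-clock happens-before (HB) race detection.
# Events are (op, thread, target) with op in {"rd", "wr", "acq", "rel"}.
from collections import defaultdict

def join(a, b):
    """Pointwise maximum of two vector clocks."""
    return [max(x, y) for x, y in zip(a, b)]

def leq(a, b):
    """True if clock a happens-before-or-equals clock b."""
    return all(x <= y for x, y in zip(a, b))

def detect_hb_races(trace, n_threads):
    """Report (variable, thread_a, thread_b) for unordered conflicting accesses."""
    clocks = [[0] * n_threads for _ in range(n_threads)]
    for t in range(n_threads):
        clocks[t][t] = 1
    lock_clocks = defaultdict(lambda: [0] * n_threads)
    last_write = {}            # var -> (thread, clock snapshot at the write)
    reads = defaultdict(list)  # var -> [(thread, clock snapshot at the read)]
    races = []
    for op, t, x in trace:
        if op == "acq":        # acquire: inherit the last releaser's knowledge
            clocks[t] = join(clocks[t], lock_clocks[x])
        elif op == "rel":      # release: publish this thread's clock on the lock
            lock_clocks[x] = list(clocks[t])
            clocks[t][t] += 1
        elif op == "wr":       # a write conflicts with prior reads and writes
            prior = reads[x] + ([last_write[x]] if x in last_write else [])
            for u, vc in prior:
                if u != t and not leq(vc, clocks[t]):
                    races.append((x, u, t))
            last_write[x] = (t, list(clocks[t]))
            reads[x] = []
            clocks[t][t] += 1
        elif op == "rd":       # a read conflicts with a prior write
            if x in last_write:
                u, vc = last_write[x]
                if u != t and not leq(vc, clocks[t]):
                    races.append((x, u, t))
            reads[x].append((t, list(clocks[t])))
            clocks[t][t] += 1
    return races
```

Two unsynchronized writes to the same variable by different threads are reported as a race, while the same writes separated by a release and acquire of a common lock are ordered by happens-before and pass silently. Note that plain HB monitoring like this only flags races witnessed by the observed order; the predictive analyses discussed above strengthen it by also considering feasible reorderings of the trace.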
Runtime predictive analysis
[ "Engineering" ]
972
[ "Software engineering", "Software testing" ]
63,138,616
https://en.wikipedia.org/wiki/Success
Success is the state or condition of meeting a defined range of expectations. It may be viewed as the opposite of failure. The criteria for success depend on context, and may be relative to a particular observer or belief system. One person might consider a success what another person considers a failure, particularly in cases of direct competition or a zero-sum game. Similarly, the degree of success or failure in a situation may be differently viewed by distinct observers or participants, such that a situation that one considers to be a success, another might consider to be a failure, a qualified success or a neutral situation. For example, a film that is a commercial failure or even a box-office bomb can go on to receive a cult following, with the initial lack of commercial success even lending a cachet of subcultural coolness. It may also be difficult or impossible to ascertain whether a situation meets criteria for success or failure due to an ambiguous or ill-defined definition of those criteria. Finding useful and effective criteria, or heuristics, to judge the failure or success of a situation may itself be a significant task. In American culture DeVitis and Rich link success to the notion of the American Dream. They observe that "[t]he ideal of success is found in the American Dream which is probably the most potent ideology in American life" and suggest that "Americans generally believe in achievement, success, and materialism." Weiss, in his study of success in the American psyche, compares the American view of success with Max Weber's concept of the Protestant work ethic. A private opinion survey by the think tank Populace found that Americans now emphasize secure retirement, financial independence, parenthood and work fulfillment as their American Dream. In biology Natural selection is the variation in successful survival and reproduction of individuals due to differences in phenotype. 
It is a key mechanism of evolution, the change in the heritable traits characteristic of a population over generations. Charles Darwin popularized the term "natural selection", contrasting it with artificial selection, which in his view is intentional, whereas natural selection is not. As Darwin phrased it in 1859, natural selection is the "principle by which each slight variation [of a trait], if useful, is preserved". The concept was simple but powerful: individuals best adapted to their environments are more likely to survive and reproduce. As long as there is some variation between them and that variation is heritable, there will be an inevitable selection of individuals with the most advantageous variations. If the variations are heritable, then differential reproductive success leads to a progressive evolution of particular populations of a species, and populations that evolve to be sufficiently different eventually become different species. In education A student's success within an educational system is often expressed by way of grading. Grades may be given as numbers, letters or other symbols. By the year 1884, Mount Holyoke College was evaluating students' performance on a 100-point or percentage scale and then summarizing those numerical grades by assigning letter grades to numerical ranges. Mount Holyoke assigned letter grades A through E, with E indicating lower than 75% performance. The A–E system spread to Harvard University by 1890. In 1898, Mount Holyoke adjusted the grading system, adding an F grade for failing (and adjusting the ranges corresponding to the other letters). The practice of letter grades spread more broadly in the first decades of the 20th century. By the 1930s, the letter E was dropped from the system, for unclear reasons. Educational systems themselves can be evaluated on how successfully they impart knowledge and skills. 
For example, the Programme for International Student Assessment (PISA) is a worldwide study by the Organisation for Economic Co-operation and Development (OECD) intended to evaluate educational systems by measuring 15-year-old school pupils' scholastic performance on mathematics, science, and reading. It was first performed in 2000 and then repeated every three years. Carol Dweck, a Stanford University psychologist, primarily researches motivation, personality, and development as related to implicit theories of intelligence; her key contribution to education is the 2006 book Mindset: The New Psychology of Success. Dweck's work presents mindset as lying on a continuum between a fixed mindset (intelligence is static) and a growth mindset (intelligence can be developed). A growth mindset is a learning focus that embraces challenge and supports persistence in the face of setbacks. With a growth mindset, individuals have a greater sense of free will and are more likely to continue working toward their idea of success despite setbacks. In business and leadership Malcolm Gladwell's 2008 book Outliers: The Story of Success suggests that the notion of the self-made man is a myth. Gladwell argues that the success of entrepreneurs such as Bill Gates is due to their circumstances, as opposed to their inborn talent. Andrew Likierman, former Dean of London Business School, argues that success is a relative rather than an absolute term: success needs to be measured against stated objectives and against the achievements of relevant peers: he suggests Jeff Bezos (Amazon) and Jack Ma (Alibaba) have been successful in business "because at the time they started there were many companies aspiring to the dominance these two have achieved". Likierman puts forward four propositions regarding company success and its measurement. 
There is no single definition of "a successful company" and no single measure of "company success". Profit and share value cannot be taken directly as measures of company success and require careful interpretation. Judgement is required when interpreting past and present performance. "Company success" reflects an interpretation of key factors: it is not a "fact". In philosophy of science Scientific theories are often deemed successful when they make predictions that are confirmed by experiment. For example, calculations regarding the Big Bang predicted the cosmic microwave background and the relative abundances of chemical elements in deep space (see Big Bang nucleosynthesis), and observations have borne out these predictions. Scientific theories can also achieve success more indirectly, by suggesting other ideas that turn out correct. For example, Johannes Kepler conceived a model of the Solar System based on the Platonic solids. Although this idea was itself incorrect, it motivated him to pursue the work that led to the discoveries now known as Kepler's laws, which were pivotal in the development of astronomy and physics. In probability The fields of probability and statistics often study situations where events are labeled as "successes" or "failures". For example, a Bernoulli trial is a random experiment with exactly two possible outcomes, "success" and "failure", in which the probability of success is the same every time the experiment is conducted. The concept is named after Jacob Bernoulli, a 17th-century Swiss mathematician, who analyzed such trials in his Ars Conjectandi (1713). The term "success" in this sense consists in the result meeting specified conditions, not in any moral judgement. For example, the experiment could be the act of rolling a single die, with the result of rolling a six being declared a "success" and all other outcomes grouped together under the designation "failure". Assuming a fair die, the probability of success would then be 1/6. 
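The die-rolling convention above extends directly to repeated trials: n independent Bernoulli trials with the same success probability p form a binomial distribution. The following short sketch (in Python, illustrative only; the function name is ours, not from the literature cited here) computes the probability of exactly k "successes":

```python
# Probability of exactly k successes in n independent Bernoulli(p) trials
# (the binomial distribution built from Bernoulli trials).
from math import comb

def binom_pmf(k, n, p):
    """Binomial probability mass function."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Die-rolling example from the text: "success" = rolling a six, so p = 1/6.
p_single_success = binom_pmf(1, 1, 1/6)   # one trial: exactly 1/6
p_no_six_in_ten = binom_pmf(0, 10, 1/6)   # no sixes in ten rolls: (5/6)**10
```

As in the text, "success" here is purely a labeling of outcomes, with no moral connotation.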
Dissatisfaction with success Although fame and success are widely sought by many people, successful people are often displeased by their status. Overall, there is a general correlation between success and unhappiness. A study done in 2008 notes that CEOs are depressed at more than double the rate of the public at large, suggesting that this is not a phenomenon exclusive to celebrities. Research suggests that people tend to focus more on objective success (i.e., status, wealth, reputation) as benchmarks for success, rather than subjective success (i.e., self-worth, relationships, moral introspection), and as a result become disillusioned with the success they do have. Celebrities in particular face specific circumstances that cause them to be displeased by their success. See also Critical success factor Customer success Probability of success Propaganda of success Success trap Survivorship bias Victory References Sources Further reading Social concepts Sociological terminology Neuroscience Management
Success
[ "Biology" ]
1,640
[ "Neuroscience" ]
63,139,408
https://en.wikipedia.org/wiki/Jill%20Bubier
Jill L. Bubier is a professor emerita of environmental science at Mount Holyoke College (MHC). Her research examines how Northern ecosystems respond to climate change. Education Bubier graduated from Bowdoin College in 1974 with a bachelor's degree in government and history. She then studied at the University of Maine School of Law, earning her Juris Doctor (J.D.) degree in 1978. She earned a Master of Science (M.S.) in botany at the University of Vermont in 1989. She then earned her PhD in physical geography at McGill University in 1994. Bubier's PhD thesis, "Methane flux and plant distribution in northern peatlands", examined how floristic patterns relate to methane emissions in the mid-boreal clay belt region of Canada and the sub-arctic region of Quebec. Career and research Bubier worked briefly as a staff attorney at the Maine Law Institute. At the Coastal Zone 85 Conference in Baltimore, she presented a paper on "The Atlantic Striped Bass Conservation Act" in 1985. Bubier is a professor emerita in the Environmental Science department of Mount Holyoke College, where she began working in 1998. Her research focused on peatland and wetland ecology, plant ecology, greenhouse gas exchange, and the related feedbacks connected to climate change. Throughout her time as a professor, she remained active as a field scientist, working in the peatland systems of boreal, sub-arctic, and Arctic regions in Canada, Alaska, and Scandinavia. One of her most cited papers, "Spatial and Temporal Variability in Growing-Season Net Ecosystem Carbon Dioxide Exchange at a Large Peatland in Ontario, Canada", addressed the net ecosystem exchange (NEE) of carbon dioxide (CO2) across a large peatland in Ontario to better understand and predict the ecosystem response to climate change. Bubier also has received funding awards from multiple organizations including the NSF and NASA. In 1999, she received a grant of $350,000 to study boreal systems' atmospheric exchange. 
She received ~$500K from NSF for her research titled "Strategies for Understanding the Effects of Global Climate and Environmental Change on Northern Peatlands" in 2004. She also received another grant of ~$885K for her research titled "Ecosystem responses to atmospheric N deposition in an ombrotrophic bog: vegetation and microclimate feedbacks lead to stronger C sink or source?" in 2014. Some of her notable publications include: Spatial and Temporal Variability in Growing-Season Net Ecosystem Carbon Dioxide Exchange at a Large Peatland in Ontario, Canada. Ecological controls on methane emissions from a Northern Peatland Complex in the zone of discontinuous permafrost, Manitoba, Canada. The Relationship of Vegetation to Methane Emission and Hydrochemical Gradients in Northern Peatlands. Methane Emissions from Wetlands in the Midboreal Region of Northern Ontario, Canada. Seasonal patterns and controls on net ecosystem CO2 exchange in a boreal peatland complex. Awards and honors Bubier is a renowned environmental scientist. From 2007 to 2009, she was a member of the Advisory Committee for Environmental Research and Education in the National Science Foundation (NSF), on which she helped set guidelines for the priorities of environmental science research. She received an Editors' Citation for Excellence in Refereeing in 2003 for her reviewing for Global Biogeochemical Cycles. References Environmental scientists Living people Year of birth missing (living people) Mount Holyoke College faculty Bowdoin College alumni University of Maine School of Law alumni University of Vermont alumni McGill University Faculty of Science alumni
Jill Bubier
[ "Environmental_science" ]
726
[ "American environmental scientists", "Environmental scientists" ]
63,140,623
https://en.wikipedia.org/wiki/Ingrid%20Burke
Ingrid C. "Indy" Burke is the Carl W. Knobloch, Jr. Dean at the Yale School of Forestry & Environmental Studies. She is the first female dean in the school's 116-year history. Her area of research is ecosystem ecology, with a primary focus on carbon cycling and nitrogen cycling in semi-arid rangeland ecosystems. She teaches subjects relating to ecosystem ecology and biogeochemistry. Early life and education Burke received her B.S. in biology from Middlebury College and her Ph.D. in botany from the University of Wyoming. At Middlebury College, Burke was planning on becoming an English major, but after taking a science class where they examined the role of photosynthesis in aquatic environments, she became fascinated by the topic of environmental science. Soon after taking this class, Burke decided to switch her major to biology, realizing that she could spend her life working outside and solve scientific mysteries as a profession. After her time at Middlebury College she started a Ph.D. track at Dartmouth College. There she planned on studying a phenomenon known as "fir waves", where rows of balsam fir trees die collectively, forming arresting patterns across the landscape, but after her advisor moved to the University of Wyoming, Burke decided to move as well. After finishing her Ph.D., she moved to Colorado State University, where she started her professional career. Career and research Burke's career as an environmental scientist began with a job teaching at Colorado State University in 1987 in the Natural Resource Ecology Laboratory. She became an associate professor in the Department of Forest Sciences at Colorado State University in 1994. In 2008 she began teaching at the University of Wyoming, where she became the director of the Haub School of Environment and Natural Resources. She worked there until 2016, when she became the Carl W. Knobloch, Jr. Dean at the Yale School of Forestry & Environmental Studies. 
Burke is also on the board of directors at The Conservation Fund. Burke has published over 150 peer-reviewed articles, chapters, books, and reports, including work from a significant project titled "A Regional Assessment of Land Use Effects on Ecosystem Structure and Function in the Central Grasslands" from 1996 to 1999. This project had major implications for understanding and managing ecosystems in the central United States. Selected publications The Importance of Land-Use Legacies to Ecology and Conservation (2003) BioScience, Vol 53, Issue 1, 77–88 Texture, Climate, and Cultivation Effects on Soil Organic Matter Content in U.S. Grassland Soils (1989) Soil Science Society of America Journal, Vol. 53 No. 3, 800–805 Global-Scale Similarities in Nitrogen Release Patterns During Long-Term Decomposition (2007) Science, Vol. 315, Issue 5810, 361–364 ANPP Estimates From NDVI for the Central Grasslands Region of The United States (1997) Ecology, Vol. 78, No 3, 953–958 Interactions Between Individual Plant Species and Soil Nutrient Status in Shortgrass Steppe (1995) Ecology, Vol. 76, No 4, 45–52 Additional publications can be found on her Google Scholar profile. Notable awards and honors Her awards and honors include: 2019 Fellow, Ecological Society of America, for advancing our understanding of ecosystem processes, in particular nitrogen and carbon cycling in grasslands. 
2018 Fellow, Connecticut Academy of Science and Engineering 2012 Promoting Intellectual Engagement Award, University of Wyoming 2010 Fellow, American Association for the Advancement of Sciences 2008 USDA Agricultural Research Service, Rangeland Resources Unit: Award for Enhancing Collaborative Research Partnerships 2005 Colorado State University Honors Professor 2004–2005 National Academy of Sciences Education Fellow in the Life Science 2001-2008 University Distinguished Teaching Scholar, Colorado State University 2000 Mortar Board Rose Award, Colorado State University 1993–‘98 National Science Foundation Presidential Faculty Fellow Award References Living people Year of birth missing (living people) Middlebury College alumni University of Wyoming alumni Yale University faculty Colorado State University faculty University of Wyoming faculty American botanists Biogeochemists Fellows of the American Association for the Advancement of Science
Ingrid Burke
[ "Chemistry" ]
815
[ "Geochemists", "Biogeochemistry", "Biogeochemists" ]
59,556,850
https://en.wikipedia.org/wiki/Conny%20Aerts
Conny Clara Aerts, born 26 January 1966, is a Belgian (Flemish) professor in astrophysics. She specialises in asteroseismology. She is associated with KU Leuven and Radboud University, where she leads the asteroseismology group. In 2012, she became the first woman to be awarded the Francqui Prize in the category of Science & Technology. In 2022, she became the third woman to be awarded the Kavli Prize in Astrophysics for her work in asteroseismology. Biography Aerts was born in Brasschaat, Belgium. She received her bachelor's and master's degrees in mathematics from the University of Antwerp. She participated in the International Astronomical Youth Camp in 1987 and 1988. She then went on to complete her PhD in 1993 at KU Leuven. After completing her PhD, she spent several months doing research at the University of Delaware. She was a postdoctoral fellow with the Fund for Scientific Research from 1993 to 2001, when she was appointed lecturer at KU Leuven. She became an associate professor in 2004, and then a full professor in 2007, at KU Leuven. Research In her research, she uses stellar oscillations to determine the internal rotation profile of stars. The oscillations are observed with both ground- and space-based telescopes. In her PROSPERITY project, she used data obtained from the CoRoT satellite and the NASA Kepler satellite. She is currently the Belgian principal investigator on the PLATO mission. Aerts developed methodology using Gaussian mixture classification to analyse the data. She uses this to determine stellar structure and inform stellar models within stellar evolution theory. With these techniques she has made a number of discoveries, including that of non-rigid rotation in giant stars. The theoretical models she develops based on these oscillations also allow her to determine the ages of stars with high accuracy. 
Aerts has twice been awarded an Advanced Grant by the European Research Council (ERC): in 2008 for PROSPERITY, and again in 2015 for a project entitled MAMSIE (Mixing and Angular Momentum Transport of Massive Stars). Outreach Aerts is Vice-Dean of Communication & Outreach at the Faculty of Science at KU Leuven. She is outspoken about the need to increase gender equality in the sciences, and is a member of the International Astronomical Union Women in Astronomy Working Group. Prizes and recognition In 2010 she was elected Honorary Fellow of the Royal Astronomical Society. In 2011 she was elected Member of the Royal Flemish Academy for Sciences and Arts. In 2012 she won the Francqui Prize. In 2016 she received the title of Commander of the Order of Leopold. In 2017 she was awarded the Hintze Lecture at Oxford University, also delivering a public lecture entitled "Starquakes expose stellar heartbeats". In 2018 she was awarded the ESA Lodewijk Woltjer Lecture for her work in the field of asteroseismology. In 2019, asteroid 413033 Aerts was named in her honor. The official naming citation was published by the Minor Planet Center on 18 May 2019. In 2020 she was awarded the Research Foundation – Flanders (FWO) Excellence Prize in Exact Sciences. In 2022 she was awarded the Kavli Prize in Astrophysics. In 2024 she received the Crafoord Prize in Astronomy. References 21st-century Belgian astronomers Women astronomers KU Leuven alumni Academic staff of KU Leuven Academic staff of Radboud University Nijmegen 1966 births Living people Kavli Prize laureates in Astrophysics
Conny Aerts
[ "Astronomy" ]
710
[ "Women astronomers", "Astronomers" ]
59,557,900
https://en.wikipedia.org/wiki/NGC%206951
NGC 6951 (also catalogued as NGC 6952) is a barred spiral galaxy located in the constellation Cepheus. It is located at a distance of about 75 million light-years from Earth, which, given its apparent dimensions, means that NGC 6951 is about 100,000 light-years across. It was discovered by Jérôme Eugène Coggia in 1877 and independently by Lewis Swift in 1878. Characteristics NGC 6951 has a large stellar bar with dust lanes running across it. These lanes come in contact with the circumnuclear ring at its north and south points. Gas is channeled inwards, towards the ring, through the bar. Observations in CO also revealed the presence of molecular gas with inflow motion towards the galactic nucleus. Since the isolated NGC 6951 shows no signs of interaction with another galaxy over the last billion years, the gas is believed to be of internal origin. Gas kinematics have also been observed for the rest of the galaxy, where the gravitational torques caused by the bar play a dominant role. Active nucleus The nucleus of NGC 6951 is active. It has been classified both as a type 2 Seyfert galaxy and as a LINER, and it has been suggested that it is a transitional form between a Seyfert galaxy and a very-high-excitation LINER, with very strong [N II] and [S II] lines. A supermassive black hole which accretes material in the centre of the galaxy is believed to be the cause of the nuclear activity. The upper mass limit of the supermassive black hole at the centre of NGC 6951 is estimated to be between 6 and 14 million solar masses, based on velocity dispersion. Molecular gas, most probably a circumnuclear dust disk or torus less than 50 parsecs in radius, has been detected around the nucleus. Circumnuclear ring A star formation ring with a radius of 5 arcseconds has been observed around the nucleus of NGC 6951. It also emits radio waves. The total gas mass at and inside the ring is estimated to be . 
A spiral-like structure with two spiral arms, extending up to 0.5 arcseconds from the nucleus, has been detected inside the ring, while no inner bar was detected in the images obtained by the Hubble Space Telescope. The central part of the nuclear area contains red supergiant stars. The ring is complete and features H II regions. It is characterised by a gradient in stellar population ages, with the younger stars being a few million years old while the older are more than a hundred million years old. The stars in the ring form star clusters with masses between and . Although clusters with ages as low as 4 million years or over one billion years have been observed, the star clusters predominantly have intermediate ages, with average ages of 200–300 million years, and are massive. Based on the ages of the clusters, it is suggested that the most intense star formation in the ring took place 800 million years ago, then declined, only to increase again 400 million years ago. Supernovae Five supernovae have been observed in NGC 6951: SN 1999el (type IIn, mag 15.4) was discovered by the Beijing Astronomical Observatory on 20 October 1999. SN 2000E (type Ia, mag 14.3) was discovered by the Collurania-Teramo Observatory on 26 January 2000. This supernova was visible at the same time as SN 1999el. SN 2015G (type Ibn, mag 15.5) was discovered by Kunihiro Shima on 23 March 2015. SN 2020dpw (type IIP, mag. 17) was discovered by Patrick Wiggins on 26 February 2020. SN 2021sjt (type IIb, mag. 18.58) was discovered by the Zwicky Transient Facility on 7 July 2021. See also NGC 5135 - a similar active galaxy List of NGC objects (6001–7000) References External links NGC 6951 on SIMBAD Barred spiral galaxies Seyfert galaxies Cepheus (constellation) 6951 11604 065086 +11-25-002 J20371406+6606201 Astronomical objects discovered in 1877 Discoveries by Jérôme Coggia
NGC 6951
[ "Astronomy" ]
882
[ "Constellations", "Cepheus (constellation)" ]
59,558,412
https://en.wikipedia.org/wiki/Karen%20Sp%C3%A4rck%20Jones%20Award
To commemorate the achievements of Karen Spärck Jones, the Karen Spärck Jones Award was created in 2008 by the British Computer Society (BCS) and its Information Retrieval Specialist Group (BCS IRSG). Since 2024, the award has been sponsored by Bloomberg. Prior to 2024, it was sponsored by Microsoft Research. The winner of the award is invited to present a keynote talk the following year, alternately at the European Conference on Information Retrieval (ECIR) or the Conference of the European Chapter of the Association for Computational Linguistics (EACL). Chronological recipients and keynote talks 2009: Mirella Lapata: “Image and Natural Language Processing for Multimedia Information Retrieval” 2010: Evgeniy Gabrilovich: “Ad Retrieval Systems in vitro and in vivo: Knowledge-Based Approaches to Computational Advertising” 2011: No award was made 2012: Diane Kelly: “Contours and Convergence” 2013: Eugene Agichtein: “Inferring Searcher Attention and Intention by Mining Behavior Data” 2014: Ryen White: “Mining and Modeling Online Health Search” 2015: Jordan Boyd-Graber: “Opening up the Black Box: Interactive Machine Learning for Understanding Large Document Collections, Characterizing Social Science, and Language-Based Games”, Emine Yilmaz: “A Task-Based Perspective to Information Retrieval” 2016: Jaime Teevan: “Search, Re-Search.” 2017: Fernando Diaz: “The Harsh Reality of Production Information Access Systems” 2018: Krisztian Balog: “On Entities and Evaluation” 2019: Chirag Shah: “Task-Based Intelligent Retrieval and Recommendation” 2020: Ahmed H. Awadallah: “Learning with Limited Labeled Data: The Role of User Interactions” 2021: Ivan Vulić: “Towards Language Technology for a Truly Multilingual World?” 2022: William Yang Wang: “Large Language Models for Question Answering: Challenges and Opportunities” 2023: Hongning Wang: “Human vs. Generative AI in Content Creation Competition: Symbiosis or Conflict?” References Computer science awards
Karen Spärck Jones Award
[ "Technology" ]
422
[ "Science and technology awards", "Computer science", "Computer science awards" ]
59,558,857
https://en.wikipedia.org/wiki/Stabilization%20hypothesis
In mathematics, specifically in category theory and algebraic topology, the Baez–Dolan stabilization hypothesis, proposed in , states that the suspension of a weak n-category has no further essential effect after n + 2 applications. More precisely, it states that the suspension functor is an equivalence for . References Sources External links https://ncatlab.org/nlab/show/stabilization+hypothesis Algebraic topology Higher category theory
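In the usual formulation of the hypothesis (the indexing convention below follows the standard statement rather than the text above, and is supplied here as an assumption):

```latex
\Sigma : n\mathrm{Cat}_{k} \longrightarrow n\mathrm{Cat}_{k+1}
\qquad \text{is an equivalence for } k \ge n + 2,
```

where nCat_k denotes the (n + k)-categories that are trivial in the bottom k levels, i.e. k-tuply monoidal weak n-categories.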
Stabilization hypothesis
[ "Mathematics" ]
86
[ "Mathematical structures", "Category theory stubs", "Algebraic topology", "Higher category theory", "Topology", "Category theory", "Fields of abstract algebra" ]
59,561,324
https://en.wikipedia.org/wiki/Heartbeat%20star
Heartbeat stars are pulsating variable binary star systems in eccentric orbits with vibrations caused by tidal forces. The name "heartbeat" comes from the resemblance of the system's light curve, when its brightness is mapped over time, to the trace of a heartbeat on an electrocardiogram. Many heartbeat stars have been discovered with the Kepler Space Telescope. Orbital information Heartbeat stars are binary star systems where each star travels in a highly elliptical orbit around the common center of mass, and the distance between the two stars varies drastically as they orbit each other. Heartbeat stars can get as close as a few stellar radii to each other and as far as 100 times that distance during one orbit. As the star with the more elliptical orbit swings closer to its companion, gravity stretches the star into a non-spherical shape, changing its apparent light output. At the point of their closest encounter, the mutual gravitational pull between the two stars causes their shapes to fluctuate rapidly, becoming slightly ellipsoidal, which is one of the reasons for their observed brightness being so variable. Discoveries Heartbeat stars were studied for the first time on the basis of OGLE project observations. The Kepler Space Telescope, with its long monitoring of the brightness of hundreds of thousands of stars, enabled the discovery of many heartbeat stars. One of the first binary systems discovered to show this behaviour, KOI-54, has been shown to increase in brightness every 41.8 days. A subsequent study in 2012 characterized 17 additional objects from the Kepler data and united them as a class of binary stars. A study which measured the rotation rate of star spots on the surface of heartbeat stars showed that most heartbeat stars rotate slower than expected.
A study which measured the orbits of 19 heartbeat star systems found that the surveyed heartbeat stars tend to be both bigger and hotter than the Sun. The star HD 74423, discovered using NASA's Transiting Exoplanet Survey Satellite, was found to be unusually teardrop-shaped, which causes the star to pulsate only on one side; it is the first known heartbeat star to do so. References Further reading Kepler space telescope Variable stars
Heartbeat star
[ "Astronomy" ]
449
[ "Space telescopes", "Kepler space telescope" ]
59,562,660
https://en.wikipedia.org/wiki/Bota%C5%9F%20D%C3%B6rtyol%20LNG%20Storage%20Facility
Botaş Dörtyol LNG Storage Facility () is a floating storage and regasification unit (FSRU) for liquefied natural gas (LNG) in Hatay Province, southern Turkey. It is the country's second floating LNG storage facility after the Egegaz Aliağa LNG Storage Facility. The floating LNG storage facility is the world's largest FSRU vessel, MT MOL FSRU Challenger, which was chartered by the Turkish state-owned crude oil and natural gas pipelines and trading company BOTAŞ. The FSRU was delivered to its owner, Mitsui O.S.K. Lines (MOL) LNG Transport (Europe) Ltd., in October 2017, and then sailed to Turkey, arriving at its Mediterranean seaport Dörtyol in November the same year. The FSRU terminal went into service on 7 February 2018. The special vessel has an LNG storage capacity of 2, and features a regas discharge capacity of . The use of the FSRU as an import terminal aims to minimize the investment costs for transmission and distribution lines as well as transportation costs. The chartered MT MOL FSRU Challenger was replaced by the new Turkish FSRU MT Botaş FSRU Ertuğrul Gazi, commissioned on 25 June 2021. See also Egegaz Aliağa LNG Storage Facility, Lake Tuz Natural Gas Storage, Northern Marmara and Değirmenköy (Silivri) Depleted Gas Reservoir, Marmara Ereğlisi LNG Storage Facility. References Natural gas storage Floating production storage and offloading vessels Energy infrastructure in Turkey Natural gas in Turkey 2018 establishments in Turkey Energy infrastructure completed in 2018 Buildings and structures in Hatay Province Dörtyol District Botaş 21st-century architecture in Turkey
Botaş Dörtyol LNG Storage Facility
[ "Chemistry" ]
376
[ "Natural gas storage", "Petroleum stubs", "Petroleum technology", "Petroleum", "Floating production storage and offloading vessels", "Natural gas technology" ]
59,564,498
https://en.wikipedia.org/wiki/Fusobacteriaceae
The Fusobacteriaceae are a family of the bacterial order Fusobacteriales. Phylogeny The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and National Center for Biotechnology Information (NCBI). See also List of bacteria genera List of bacterial orders References Fusobacteriota Taxa described in 2012
Fusobacteriaceae
[ "Biology" ]
82
[ "Bacteria stubs", "Bacteria" ]
59,564,887
https://en.wikipedia.org/wiki/Photoactive%20yellow%20protein
In molecular biology, the PYP domain (photoactive yellow protein) is a p-coumaric acid-binding protein domain. It is present in various proteins in bacteria. PYP is a highly soluble globular protein with an alpha/beta fold structure. It is a member of the PAS domain superfamily, which also contains a variety of other kinds of photosensory proteins. PYP was first discovered in 1985. A recently (2016) developed chemogenetic system named FAST (Fluorescence-Activating and absorption Shifting Tag) was engineered from PYP to specifically and reversibly bind a series of hydroxybenzylidene rhodanine (HBR) derivatives for their fluorogenic properties. Upon interaction with FAST, the fluorogen is locked into a fluorescent conformation, unlike when in solution. This new protein labelling system is used in a variety of microscopy and cytometry setups. p-Coumaric acid p-Coumaric acid is a cofactor of photoactive yellow proteins. Adducts of p-coumaric acid bound to PYP form crystals that diffract well for X-ray crystallography experiments. These structural studies have provided insight into photosensitive proteins, e.g. the role of hydrogen bonding, molecular isomerization and photoactivity. Photochemical transitions Because its light emissions resemble those of retinal-bound rhodopsin, it was originally believed that the photosensor molecule bound to PYP should resemble the structure of retinal. Scientists were therefore surprised when the light-sensitive prosthetic group, bound to PYP Cys 69 by a thiol ester linkage, turned out to be p-coumaric acid. During the photoreactive mechanism: the native protein absorbs maximally at a wavelength of 446 nm, ε = 45500 M−1 cm−1. Within a nanosecond the absorption maximum is shifted to 465 nm. Then, on a sub-millisecond timescale, the protein is excited to a 355 nm state.
These observed phenomena are due to the trans–cis isomerization of the vinyl trans double bond in the p-coumaric acid. By examining the crystal structure of p-coumaric acid bound by PYP, scientists noted that the hydroxyl group connected to the C4 carbon of the phenyl ring appeared to be deprotonated – effectively a phenolate functional group. This was inferred from the abnormally short hydrogen bonding lengths observed in the protein crystal structure. Role of hydrogen bonding Hydrogen bonds in proteins such as PYP take part in interrelated networks; at the center of p-coumaric acid's phenolate O4 atom, there is an oxyanion hole that is crucial for photosensory function. Oxyanion holes exist in enzymes to stabilize the transition states of reaction intermediates, thus stabilizing the trans–cis isomerization of p-coumaric acid. During the transition state it is believed that the p-coumaric acid phenolate O4 takes part in a hydrogen bond network between Glu46, Tyr42 and Thr50 of PYP. These interactions act in addition to the thiol ester linkage to Cys 69 in keeping p-coumaric acid in the ligand binding site. Upon transitioning to the cis-isomeric form of p-coumaric acid, the favorable hydrogen bonds are no longer in close interaction. References Further reading External links Protein domains
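The extinction coefficient quoted in the photocycle description (ε = 45500 M−1 cm−1 at 446 nm) can be combined with the Beer–Lambert law, A = εlc, to estimate the absorbance of a PYP sample. A minimal sketch; the 20 µM concentration and 1 cm path length are hypothetical example values, not from the article:

```python
EPSILON_446 = 45_500  # M^-1 cm^-1, molar extinction coefficient of native PYP at 446 nm (from the text)

def absorbance(epsilon: float, path_cm: float, conc_molar: float) -> float:
    """Beer-Lambert law: A = epsilon * l * c."""
    return epsilon * path_cm * conc_molar

# Hypothetical example: a 20 uM PYP solution in a standard 1 cm cuvette
a = absorbance(EPSILON_446, 1.0, 20e-6)
print(f"A(446 nm) = {a:.2f}")  # 0.91
```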
Photoactive yellow protein
[ "Biology" ]
748
[ "Protein domains", "Protein classification" ]
59,565,425
https://en.wikipedia.org/wiki/Marc%20J.%20Assael
Marc J. Assael (born 5 August 1954) is a Greek chemical engineer and a professor of thermophysical properties. Career From 1995 to 1997 he was the Head of the Department of Chemical Engineering at Aristotle University of Thessaloniki, Greece, where he currently holds the position of Professor of Thermophysical Properties. In 1998 he held the TEPCO Visiting Chair at Keio University, Tokyo, Japan, and during 2007–2011 he was adjunct professor at Xi'an Jiaotong University, P.R. China. He is currently the Secretary of the International Association for Transport Properties, the Secretary of the International Organization Committee of the European Conference on Thermophysical Properties, and a Fellow of the International Thermal Conductivity Conferences (FITCc). In 2023, during the 22nd European Conference on Thermophysical Properties, Assael was honored with the Lifetime Achievements Award for his longstanding and distinguished contributions to the field of thermophysics. He is Editor-in-Chief of the Springer-Nature International Journal of Thermophysics, Editor of the Old City Publishing journal High Temperatures - High Pressures, Guest Editor of Elsevier's Education for Chemical Engineers and of the Praise Worthy Prize International Review of Chemical Engineering journal. Education He received his BSc, MSc and PhD degrees in Chemical Engineering from Imperial College London. Published work Assael has authored and co-authored about 200 scientific publications in the field of thermophysical properties. Most cited books: Assael M.J., Trusler J.M.P., and Tsolakis Th.F., "Thermophysical Properties of Fluids. An Introduction to their Prediction", Imperial College Press, London, U.K. (1996). Assael M.J. and Kakosimos K.E., “Fires, Explosions and Toxic Gas Dispersions: Effects Calculation and Risk Analysis”, CRC Press, Boca Raton, U.S.A. (2010).
Assael M.J., Wakeham W.A., Goodwin A.R.H., Will S., Stamatoudis M., "Commonly Asked Questions in Thermodynamics", CRC Press, Boca Raton, U.S.A. (2011). Assael M.J., Goodwin A.R.H., Vesovic V. and Wakeham W.A. Eds., “Experimental Thermodynamics Volume IX: Advances in Transport Properties of Fluids”, RSC Press, London, U.K. (2014). References External links 20th-century Greek scientists 21st-century Greek scientists Living people Chemical engineering academics Engineers from Thessaloniki Academic staff of the Aristotle University of Thessaloniki Fellows of the Institution of Chemical Engineers 1954 births
Marc J. Assael
[ "Chemistry" ]
591
[ "Chemical engineering academics", "Chemical engineers" ]
59,566,268
https://en.wikipedia.org/wiki/NGC%203489
NGC 3489 is a lenticular galaxy located in the constellation Leo. It is located at a distance of about 30 million light years from Earth, which, given its apparent dimensions, means that NGC 3489 is about 30,000 light years across. It was discovered by William Herschel on April 8, 1784. NGC 3489 is a member of the Leo Group. NGC 3489 has a weak bar, seen along the minor axis, and a small bulge. The age of the stellar population in NGC 3489 shows a gradient, with the younger stars lying closer to the core. When observed in H-beta, the central arcsecond of NGC 3489 shows a peak, indicating the presence of younger stars at the core, whose age is estimated to be about 1.7 Gyr. Although NGC 3489 is currently considered a post-starburst galaxy, there is still molecular gas in the nucleus that can lead to star formation, although its mass is less than what is found in galaxies with active star formation. Dust with an open spiral pattern has been observed in the nuclear region of NGC 3489. The galaxy has an outer ring structure, with a diameter of 1.54 arcminutes along the major axis. NGC 3489 has an active galactic nucleus, which has been categorised based on its spectrum as a type 2 Seyfert galaxy or, because its nuclear [O I] emission strength lies between that of H II nuclei and LINERs, as a transition object. This kind of transition emission could be attributed to post-AGB stars located in the core. A supermassive black hole which accretes material in the centre of the galaxy is believed to be the cause of the nuclear activity. In the centre of NGC 3489 lies a black hole with a mass estimated based on velocity dispersion. Gallery References External links NGC 3489 on SIMBAD Lenticular galaxies Leo (constellation) M96 Group 3489 6082 33160 Astronomical objects discovered in 1784 Discoveries by William Herschel
NGC 3489
[ "Astronomy" ]
423
[ "Leo (constellation)", "Constellations" ]
59,568,204
https://en.wikipedia.org/wiki/NGC%203585
NGC 3585 is an elliptical or a lenticular galaxy located in the constellation Hydra. It is located at a distance of circa 60 million light-years from Earth, which, given its apparent dimensions, means that NGC 3585 is about 80,000 light years across. It was discovered by William Herschel on December 9, 1784. NGC 3585 features a red discy region in the core with a semi-major axis of circa 45 arcseconds, probably associated with diffuse dust. There are nearly 130 globular cluster candidates in the galaxy, with the total number of globular clusters estimated to be nearly 550. This number is quite low, but it is typical for field elliptical galaxies. Based on the luminosity turnover of the globular clusters, it is suspected that there is a subpopulation of younger clusters. The outer isophotes of the galaxy are asymmetrical, maybe due to a tidal disruption. In the centre of NGC 3585 lies a supermassive black hole whose mass is estimated to be based on the tidal disruption rate, or 10^8.53 ± 0.122 based on the observation of the circumnuclear ring with very-long-baseline interferometry. Based on observations by the Hubble Space Telescope to determine the stellar velocity dispersion at the core, the mass of the hole was estimated to be between 280 and 490 million by using the M–sigma relation. NGC 3585 is the most prominent member of a loose galaxy group known as the NGC 3585 group. Other members of the group are the spiral galaxies UGCA 226, ESO 502- G 016, and UGCA 230. References External links NGC 3585 on SIMBAD Elliptical galaxies Lenticular galaxies Hydra (constellation) 3585 34160 Astronomical objects discovered in 1784 Discoveries by William Herschel
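The M–sigma relation used for the velocity-dispersion estimate above is commonly quoted in a power-law form; the normalization M0 and slope β shown here are typical literature values supplied for illustration, not taken from the article:

```latex
M_{\mathrm{BH}} \approx M_0 \left(\frac{\sigma_{*}}{200\,\mathrm{km\,s^{-1}}}\right)^{\beta},
\qquad M_0 \sim 10^{8}\text{--}10^{8.5}\,M_{\odot},\quad \beta \approx 4\text{--}5,
```

so a measured stellar velocity dispersion σ* translates directly into a black-hole mass estimate.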
NGC 3585
[ "Astronomy" ]
376
[ "Hydra (constellation)", "Constellations" ]
59,568,402
https://en.wikipedia.org/wiki/Lanthanum%20decahydride
Lanthanum decahydride is a polyhydride or superhydride compound of lanthanum and hydrogen (LaH10) that has shown evidence of being a high-temperature superconductor. It was the first metal superhydride to be theoretically predicted, synthesized, and experimentally confirmed to superconduct at near room-temperatures. It has a superconducting transition temperature TC around at a pressure of , and its synthesis required pressures above approximately . Synopsis Since its discovery in 2019, the superconducting properties of LaH10 and other lanthanum-based superhydrides have been experimentally confirmed in multiple independent experiments. The compound exhibits a Meissner effect below the superconducting transition temperature. A cubic form can be synthesized at , and a hexagonal crystal structure can be formed at room temperature. Further reports indicate Tc is increased with nitrogen doping, and decreased with the introduction of magnetic impurities. The cubic form has each lanthanum atom surrounded by 32 hydrogen atoms, which form the vertices of an 18-faced shape called a chamfered cube. A similar compound, lanthanum boron octahydride, was computationally predicted to be a superconductor at and pressure . References Lanthanum compounds High-temperature superconductors Metal hydrides
Lanthanum decahydride
[ "Chemistry" ]
279
[ "Metal hydrides", "Inorganic compounds", "Reducing agents" ]
51,805,327
https://en.wikipedia.org/wiki/Outline%20of%20galaxies
The following outline is provided as an overview of and topical guide to galaxies: Galaxies – gravitationally bound systems of stars, stellar remnants, interstellar gas, dust, and dark matter. The word galaxy is derived from the Greek galaxias (γαλαξίας), literally "milky", a reference to the Milky Way. Galaxies range in size from dwarfs with just a few billion (10^9) stars to giants with one hundred trillion (10^14) stars, each orbiting its galaxy's center of mass. Galaxies are categorized according to their visual morphology as elliptical, spiral and irregular. Many galaxies are thought to have black holes at their active centers. The Milky Way's central black hole, known as Sagittarius A*, has a mass four million times greater than that of the Sun. As of March 2016, GN-z11 is the oldest and most distant observed galaxy, with a comoving distance of 32 billion light-years from Earth, observed as it existed just 400 million years after the Big Bang. Previously, as of July 2015, EGSY8p7 was the most distant known galaxy, estimated to have a light travel distance of 13.2 billion light-years away.
Types of galaxies List of galaxies Lists of galaxies By morphological classification Galaxy morphological classification Disc galaxy Lenticular galaxy Barred lenticular galaxy Unbarred lenticular galaxy Spiral galaxy   (list) Anemic galaxy Barred spiral galaxy Flocculent spiral galaxy Grand design spiral galaxy Intermediate spiral galaxy Magellanic spiral Unbarred spiral galaxy Dwarf galaxy Dwarf elliptical galaxy Dwarf spheroidal galaxy Dwarf spiral galaxy Elliptical galaxy Type-cD galaxy Irregular galaxy Barred irregular galaxy Peculiar galaxy Ring galaxy   (list) Polar-ring galaxy   (list) By nucleus Active galactic nucleus Blazar Low-ionization nuclear emission-line region Markarian galaxies Quasar Radio galaxy X-shaped radio galaxy Relativistic jet Seyfert galaxy By emissions Energetic galaxies Lyman-alpha emitter Luminous infrared galaxy Starburst galaxy Pea galaxy Hot, dust-obscured galaxies (Hot DOGs) Low activity Low-surface-brightness galaxy Ultra diffuse galaxy By interaction Field galaxy Galactic tide Galaxy cloud Interacting galaxy Galaxy merger Jellyfish galaxy Satellite galaxy Superclusters Galaxy filament Void galaxy By other aspect Galaxies named after people Largest galaxies Nearest galaxies Nature of galaxies Galactic phenomena Galactic year – duration of time required for the Sun to orbit once around the center of the Milky Way Galaxy. 
Galaxy formation and evolution Galaxy merger Hubble's law Galaxy components Components of galaxies in general Active galactic nucleus Galactic bulge Galactic disc Galactic habitable zone Galactic halo Dark matter halo Galactic corona Galactic magnetic fields Galactic plane Galactic spheroid Interstellar medium Spiral arms Supermassive black hole Structure of specific galaxies Milky Way components Galactic Center Galactic quadrant Spiral arms of the Milky Way Carina–Sagittarius Arm Norma Arm Orion Arm Perseus Arm Scutum–Centaurus Arm Galactic ridge Galactic cartography Galactic coordinate system Galactic longitude Galactic latitude Galaxy rotation curve Larger constructs composed of galaxies Galaxy groups and clusters   (list) Local Group – galaxy group that includes the Milky Way Galaxy group Galaxy cluster Supercluster   (list) Brightest cluster galaxy Fossil galaxy group Galaxy filament Intergalactic phenomena Galactic orientation Galaxy merger Andromeda–Milky Way collision Hypothetical intergalactic phenomena Intergalactic travel Intergalactic dust Intergalactic stars Void   (list) Fields that study galaxies Astronomy Galactic astronomy – studies the Milky Way galaxy. Extragalactic astronomy – studies everything outside the Milky Way galaxy, including other galaxies. Astrophysics Cosmology Physical cosmology Galaxy-related publications Galaxy catalogs Atlas of Peculiar Galaxies Catalogue of Galaxies and Clusters of Galaxies David Dunlap Observatory Catalogue Lyon-Meudon Extragalactic Database Morphological Catalogue of Galaxies Multiwavelength Atlas of Galaxies Principal Galaxies Catalogue Shapley-Ames Catalog Uppsala General Catalogue Vorontsov-Vel'yaminov Interacting Galaxies Persons influential in the study of galaxies Galileo Galilei – discovered that the Milky Way is composed of a huge number of faint stars. 
Edwin Hubble See also Barred spiral galaxy Galaxy color–magnitude diagram Dark galaxy Faint blue galaxy Illustris project Protogalaxy Cosmos Redshift 7 List of quasars References External links Galaxies, SEDS Messier pages An Atlas of The Universe Galaxies — Information and amateur observations The Oldest Galaxy Yet Found Galaxy classification project, harnessing the power of the internet and the human brain How many galaxies are in our Universe? 3-D Video (01:46) – Over a Million Galaxies of Billions of Stars each – BerkeleyLab/animated. Galaxies
Outline of galaxies
[ "Astronomy" ]
939
[ "Galaxies", "Astronomical objects" ]
51,805,757
https://en.wikipedia.org/wiki/HR%204887
HR 4887 (HD 111904) is a suspected variable star in the open cluster NGC 4755, which is also known as the Kappa Crucis Cluster or Jewel Box Cluster. Location HR 4887 is one of the brightest members of the NGC 4755 open cluster, better known as the Jewel Box Cluster. It forms the apex of the prominent letter "A" asterism at the centre of the cluster. The cluster is part of the larger Centaurus OB1 association and lies about 8,500 light years away. The cluster, and HR 4887 itself, is just to the south-east of β Crucis, the left-hand star of the famous Southern Cross. Properties HR 4887 is a B9 bright supergiant (luminosity class Ia). It is over 100,000 times as luminous as the Sun, partly due to its temperature of over 12,680 K, and partly due to being seventy times larger than the Sun. The κ Crucis cluster has a calculated age of 11.2 million years, and HR 4887 itself eight million years. In 1958, HR 4887 was reported to be at visual magnitude 6.80 and on this basis is included in the New Catalogue of Suspected Variable Stars with a range of variation of 5.70 - 6.80. No other observer has measured it far from magnitude 5.75, but measurements of known variable class B stars found that HR 4887 is variable by about a tenth of a magnitude. It is thought likely to be an α Cygni-type variable. References External links Crux 111904 B-type supergiants 062894 4887 Durchmusterung objects J12532189-6019424 Suspected variables Alpha Cygni variables
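The luminosity figure in the Properties section is consistent with the Stefan–Boltzmann relation, L/L☉ = (R/R☉)² (T/T☉)⁴. A minimal sketch, assuming a solar effective temperature of about 5772 K (a standard value not given in the article):

```python
# Sanity-check the quoted luminosity using the Stefan-Boltzmann relation:
#   L / L_sun = (R / R_sun)**2 * (T / T_sun)**4
T_SUN_K = 5772.0  # solar effective temperature (assumed standard value)

def luminosity_ratio(radius_solar: float, temp_k: float) -> float:
    return radius_solar**2 * (temp_k / T_SUN_K)**4

# Article values for HR 4887: about 70 solar radii and a temperature over 12,680 K
print(f"{luminosity_ratio(70.0, 12_680.0):,.0f} L_sun")  # ~114,000
```

The result of roughly 114,000 L☉ matches the article's "over 100,000 times the luminosity of the sun".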
HR 4887
[ "Astronomy" ]
370
[ "Crux", "Constellations" ]
51,806,224
https://en.wikipedia.org/wiki/DS%20Crucis
DS Crucis (HR 4876, HD 111613) is a variable star near the open cluster NGC 4755, which is also known as the Kappa Crucis Cluster or Jewel Box Cluster. It is in the constellation Crux. Location DS Crucis is one of the brightest stars in the region of the NGC 4755 open cluster, better known as the Jewel Box Cluster, but its membership of the cluster is in doubt. The cluster is part of the larger Centaurus OB1 association and lies about 8,500 light years away. DS Crucis and NGC 4755 lie just to the south-east of β Crucis, the left-hand star of the famous Southern Cross. Variability DS Crucis is a variable star with an amplitude of about 0.05 magnitudes. It was found to be variable from the photometry performed by the Hipparcos satellite. The variability type is unclear but it is assumed to be an α Cygni variable. Properties DS Crucis is an A1 bright supergiant (luminosity class Ia), although it has also been classified as A2 Iabe. It is nearly 80,000 times as luminous as the Sun, partly due to its temperature of 9,000 K, and partly due to being over a hundred times larger than the Sun. The κ Crucis cluster has a calculated age of 11.2 million years, and DS Crucis an age of seven million years. References Crux 111613 B-type supergiants 062732 4876 CD-59 4432 J12511794-6019473 Alpha Cygni variables Crucis, DS
DS Crucis
[ "Astronomy" ]
344
[ "Crux", "Constellations" ]
51,806,253
https://en.wikipedia.org/wiki/Exhaust%20gas%20analyzer
An exhaust gas analyser or exhaust carbon monoxide (CO) analyser is an instrument for measuring carbon monoxide, among other gases, in the exhaust produced by incorrect combustion; the Lambda coefficient measurement is the most common. The principles used for CO sensors (and sensors for other types of gas) are infrared gas sensors and chemical gas sensors. Carbon monoxide sensors are used to assess the CO amount during a Ministry of Transport (MOT) test. In order to be used for such a test, an analyser must be approved as suitable for use in the scheme. In the UK, a list of exhaust gas analysers acceptable for use within the MOT test is available via the Driver and Vehicle Standards Agency website. Lambda coefficient measurement The presence of oxygen in the exhaust gases indicates that the combustion of the mixture was not perfect, resulting in contaminant gases. Measuring the proportion of oxygen in the exhaust gases of these engines therefore makes it possible to monitor and measure these emissions. This measurement is performed in the MOT test through the Lambda coefficient measurement. The Lambda coefficient (λ) is obtained from the relationship between the air and gasoline involved in combustion of the mixture. It is a measure of the efficiency of the gasoline engine, obtained by measuring the percentage of oxygen in the exhaust. When gasoline engines operate with a stoichiometric mixture of 14.7:1 the value of lambda (λ) is "1". Mixing ratio = weight of air / weight of fuel - Expressed as mass ratio: 14.7 kg of air per 1 kg of fuel. - Expressed as volume ratio: 10,000 liters of air per 1 liter of fuel. With this relationship a theoretically complete combustion of gasoline is achieved and greenhouse gas emissions would be minimal. The Lambda coefficient is defined as the ratio of the actual air–fuel ratio to the stoichiometric air–fuel ratio. If Lambda > 1 = lean mixture, excess of air. If Lambda < 1 = rich mixture, excess of gasoline. A lean mixture contains an excess of oxygen.
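The definition of λ above can be sketched in code. This is a minimal illustration, assuming only the stoichiometric ratio of 14.7:1 given in the text; the function names and the tolerance band are not from the article:

```python
STOICH_AFR_GASOLINE = 14.7  # kg of air per kg of fuel, as stated in the text

def lambda_coefficient(air_kg: float, fuel_kg: float) -> float:
    """Lambda = actual air-fuel ratio / stoichiometric air-fuel ratio."""
    return (air_kg / fuel_kg) / STOICH_AFR_GASOLINE

def classify(lam: float, tol: float = 0.01) -> str:
    """Classify a mixture; tol is an illustrative tolerance band around lambda = 1."""
    if lam > 1 + tol:
        return "lean (excess air)"
    if lam < 1 - tol:
        return "rich (excess fuel)"
    return "stoichiometric"

print(classify(lambda_coefficient(14.7, 1.0)))  # stoichiometric
print(classify(lambda_coefficient(16.0, 1.0)))  # lean (excess air)
print(classify(lambda_coefficient(13.0, 1.0)))  # rich (excess fuel)
```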
The surplus oxygen will react with nitrogen to form oxides of nitrogen, if the temperature is high enough (around 1600 °C) for a long enough time to permit it. A rich mixture contains a deficit of oxygen. This makes it impossible for all the fuel to combust completely to carbon dioxide and water vapour. Hence, some fuel will remain as hydrocarbons, or it will react only as far as carbon monoxide (CO). The carbon monoxide concentration in exhaust gases is closely related, and almost proportional, to the air–fuel ratio in the rich region. It is, therefore, of great value when tuning an engine. The carbon dioxide emitted is theoretically directly proportional to the fuel consumed at a given and constant air–fuel ratio. Less carbon dioxide will be emitted per litre of fuel if λ < 1, since some fuel won't be able to combust completely. Types of sensors Chemical CO sensors Chemical CO gas sensors with sensitive layers based on polymer or heteropolysiloxane have the principal advantage of a very low energy consumption, and can be reduced in size to fit into microelectronic-based systems. On the downside, short- and long-term drift effects as well as a rather low overall lifetime are major obstacles when compared with the nondispersive infrared sensor measurement principle. Another method (Henry's law) can also be used to measure the amount of dissolved CO in a liquid, if the amount of foreign gases is insignificant. Nondispersive infrared CO sensors Nondispersive infrared sensors are spectroscopic sensors that detect CO in a gaseous environment by its characteristic absorption. The key components are an infrared source, a light tube, an interference (wavelength) filter, and an infrared detector. The gas is pumped or diffuses into the light tube, and the electronics measure the absorption of the characteristic wavelength of light. These sensors are most often used for measuring carbon monoxide. The best of these have sensitivities of 20–50 ppm.
Most CO sensors are fully calibrated prior to shipping from the factory. Over time, the zero point of the sensor needs to be recalibrated to maintain the long-term stability of the sensor. New developments include using microelectromechanical systems to bring down the costs of this sensor and to create smaller devices. Typical sensors cost in the US$100 to $1,000 range. Cambridge indicator Used by older aircraft, the Cambridge Mixture Indicator displayed the air–fuel ratio by measuring the thermal conductivity of exhaust gas. It was manufactured by the Cambridge Instrument Company. This device was installed on airplanes in the 1930s, including the Lockheed Model 10 Electra flown by Amelia Earhart on her last flight. See also AFR sensor Oxygen sensor Auto mechanic Automobile repair shop Car ramp, a means of accessing the underside of a vehicle Engine tuning Italian tuneup Mechanical engineering Service (motor vehicle) References Sensors Gas sensors
Exhaust gas analyzer
[ "Technology", "Engineering" ]
969
[ "Sensors", "Measuring instruments" ]
51,806,669
https://en.wikipedia.org/wiki/Marketing%20engineering
Marketing engineering is currently defined as "a systematic approach to harness data and knowledge to drive effective marketing decision making and implementation through a technology-enabled and model-supported decision process". History The term marketing engineering can be traced back to Lilien et al. in "The Age of Marketing Engineering" published in 1998; in this article the authors define marketing engineering as the use of computer decision models for making marketing decisions. Marketing managers typically use "conceptual marketing", that is, they develop a mental model of the decision situation based on past experience, intuition and reasoning. That approach has its limitations though: experience is unique to every individual, there is no objective way of choosing between the best judgments of multiple individuals in such a situation, and furthermore judgment can be influenced by the person's position in the firm's hierarchy. In the same year Lilien G. L. and A. Rangaswamy published Marketing Engineering: Computer-Assisted Marketing Analysis and Planning; Fildes and Ventura praised the book in their review, while noting that a fuller discussion of market share models and econometric models would have made the book better for teaching and that "conceptual marketing" should not be discarded in the presence of marketing engineering, but that both approaches should be used together. Leeflang and Wittink (2000) have identified five eras of model building in marketing: (1950-1965) The first era of application of operations research and management science to marketing (1965-1970) Adaptation of models to fit marketing problems (1970-1985) Emphasis on models that are an acceptable representation of reality and are easy to use (1985-2000) Increase interest in marketing decision support systems, meta-analyses and studies of the generalizability of results (2000–) Growth of new exchange systems (e.g. e-commerce) and the need for new modeling approaches How to build market models and how to develop a structured approach to marketing questions has been an issue of active discussion among researchers. G. L. Lilien and A. Rangaswamy (2001) observed that while having data gives a competitive advantage, having too much data without the models and systems for working with it may turn out to be as bad as not having the data. Lodish (2001) observed that the most complicated and elegant model will not necessarily be the one adopted in the firm: good models are the ones that capture the trade-offs of decision making; subjective estimates may be necessary to complete the model; risk needs to be taken into account; model complexity must be balanced against ease of understanding; and models should integrate tactical with strategic aspects. Midgley (2002) identifies four purposes in codifying marketing knowledge: To facilitate the progress of marketing as a science To promote the discipline within its institutional and professional environments To better educate and credential the potential manager To provide competitive advantage to the firm Lilien et al. (2002) define marketing engineering as "the systematic process of putting marketing data and knowledge to practical use through the planning, design, and construction of decision aids and marketing management support systems (MMSSs)". Among the driving factors behind the development of marketing engineering are the use of high-powered personal computers connected to LANs and WANs, the exponential growth in the volume of data, and the reengineering of marketing functions. The effectiveness of the implementation of marketing engineering and MMSSs in the firm depends on the decision situation characteristics (demand), the nature of the MMSS (supply), the match between supply and demand, the design characteristics of the MMSS, and the characteristics of the implementation process. 
Wider adoption depends on the difference between end-user systems and high-end systems, user training, and the growth of the Internet. Market response models All market response models include: Inputs: price, advertising, selling effort, product design, market size, competitive environment Response Model: links inputs to outputs such as product perceptions, sales, profits Objectives: used to evaluate actions such as sales Models In marketing engineering, methods and models can be classified in several categories: Customer value assessment Objective measures: internal engineering assessment, indirect survey questions, field value-in-use assessment Perceptual measures: focus groups, direct survey questions, importance ratings, conjoint analysis, benchmarking Behavioral measures: choice models, data mining Segmentation and targeting Reducing data: factor analysis Association measures: cluster analysis Outlier detection and removal Forming Segments: cluster analysis Profiling Segments: discriminant analysis Positioning Perceptual maps: similarity-based methods, attribute-based methods Preference maps: ideal-point model, vector model Joint-space maps: averaged ideal-point model, averaged vector model, external analysis Forecasting Judgmental methods: sales force composite estimates, jury of executive opinion, Delphi method, scenario analysis Market and Survey Analysis: buyer intentions, product testing, chain ratio Time Series: naive methods, moving averages, exponential smoothing, Box–Jenkins method, decompositional methods Causal analyses: regression analysis, econometric models, input-output models, multivariate ARMA, neural networks New product forecasting models: Bass Model, ASSESSOR model New product and service design Creativity software: idea generation, idea evaluation, GE/McKinsey portfolio model, conjoint analysis Marketing mix Pricing: classic approach, cost-oriented pricing, demand-oriented pricing, competition-oriented pricing Promotion: affordable method, 
percentage-of-sales method, competitive parity method, objective-and-task method Sales force decisions: intuitive methods, market-response methods, response functions References Marketing analytics Engineering disciplines
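The market response models described above link inputs (such as advertising effort) to outputs (such as sales). A classic example of such a response function is Little's ADBUDG model; the following sketch is illustrative and not taken from the article, and the parameter values are assumptions:

```python
def adbudg(effort, base, ceiling, c, d):
    """Little's ADBUDG sales-response curve: sales rise from `base` (no
    marketing effort) toward `ceiling` (saturation) as effort increases;
    c > 1 gives an S-shaped curve, c <= 1 a concave one."""
    xc = effort ** c
    return base + (ceiling - base) * xc / (d + xc)

# No effort yields base sales; large effort saturates toward the ceiling:
print(adbudg(0.0, 100, 500, 2.0, 4.0))   # 100.0
print(adbudg(10.0, 100, 500, 2.0, 4.0))  # ≈ 484.6
```

Fitting the four parameters to managerial judgment ("what would sales be with no advertising? with saturation advertising?") is the kind of subjective calibration Lodish describes above.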
Marketing engineering
[ "Engineering" ]
1,121
[ "nan" ]
51,806,873
https://en.wikipedia.org/wiki/BU%20Crucis
BU Crucis (HD 111934) is a variable star in the open cluster NGC 4755, which is also known as the Kappa Crucis Cluster or Jewel Box Cluster. Location BU Cru is one of the brightest members of the NGC 4755 open cluster, better known as the Jewel Box Cluster. It forms the right end of the bar of the prominent letter "A" asterism at the centre of the cluster. The cluster is part of the larger Centaurus OB1 association and lies about 8,500 light years away. The cluster, and BU Crucis itself, is just to the south-east of β Crucis, the left-hand star of the famous Southern Cross. Properties BU Crucis is a B2 bright supergiant (luminosity class Ia). It is 275,000 times the luminosity of the Sun, partly due to its temperature of over 20,000 K, and partly to its being forty times larger than the Sun. The κ Crucis cluster has a calculated age of 11.2 million years, and BU Crucis itself around five million years. Variability BU Crucis is a variable star with a brightness range of about 0.1 magnitudes. It is listed as a probable eclipsing binary in the General Catalogue of Variable Stars, but the International Variable Star Index classifies it as an α Cygni variable with a visual magnitude range of 6.82–7.01. References External links Crux 111934 B-type supergiants 062913 CD-59 04458 J12533761-6021254 Suspected variables Alpha Cygni variables Eclipsing binaries Crucis, BU
BU Crucis
[ "Astronomy" ]
350
[ "Crux", "Constellations" ]
51,807,728
https://en.wikipedia.org/wiki/Western%20blot%20normalization
Normalization of Western blot data is an analytical step that is performed to compare the relative abundance of a specific protein across the lanes of a blot or gel under diverse experimental treatments, or across tissues or developmental stages. The overall goal of normalization is to minimize effects arising from variations in experimental errors, such as inconsistent sample preparation, unequal sample loading across gel lanes, or uneven protein transfer, which can compromise the conclusions that can be obtained from Western blot data. Currently, there are two methods for normalizing Western blot data: (i) housekeeping protein normalization and (ii) total protein normalization. Procedure Normalization occurs directly on either the gel or the blotting membrane. First, the stained gel or blot is imaged, a rectangle is drawn around the target protein in each lane, and the signal intensity inside the rectangle is measured. The signal intensity obtained can then be normalized with respect to the signal intensity of the loading internal control detected on the same gel or blot. When using protein stains, the membrane may be incubated with the chosen stain before or after immunodetection, depending on the type of stain. Housekeeping protein controls Housekeeping genes and proteins, including β-Actin, GAPDH, HPRT1, and RPLP1, are often used as internal controls in western blots because they are thought to be expressed constitutively, at the same levels, across experiments. However, recent studies have shown that expression of housekeeping proteins (HKPs) can change across different cell types and biological conditions. Therefore, scientific publishers and funding agencies now require that normalization controls be previously validated for each experiment to ensure reproducibility and accuracy of the results. 
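The normalization step described in the procedure above reduces to a per-lane ratio of target signal to loading-control signal. A minimal sketch with hypothetical densitometry values (the function name and all numbers are illustrative, not from the article):

```python
def normalize_lanes(target, control):
    """Relative abundance per lane: target band intensity divided by the
    loading-control intensity measured on the same gel or blot."""
    if len(target) != len(control):
        raise ValueError("one control measurement is needed per lane")
    return [t / c for t, c in zip(target, control)]

# Hypothetical intensities for three lanes; the control corrects for
# unequal loading (lane 3 received more total protein than lanes 1-2):
target_signal = [1200.0, 800.0, 1500.0]
control_signal = [1000.0, 1000.0, 1250.0]
print(normalize_lanes(target_signal, control_signal))  # [1.2, 0.8, 1.2]
```

The same ratio applies to total protein normalization, with the per-lane total protein signal in place of the housekeeping-protein band.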
Fluorescent antibodies When using fluorescent antibodies to image proteins in western blots, normalization requires that the user define the upper and lower limits of quantitation and characterize the linear relationship between signal intensity and the sample mass loaded for each antigen. Both the target protein and the normalization control need to fluoresce within the dynamic range of detection. Many HKPs are expressed at high levels and are preferred for use with highly-expressed target proteins. Proteins expressed at lower levels are difficult to detect on the same blot. Fluorescent antibodies are commercially available, and fully characterized antibodies are recommended to ensure consistency of results. When fluorescent detection is not utilized, the loading control protein and the protein of interest must differ considerably in molecular weight so they are adequately separated by gel electrophoresis for accurate analysis. Membrane stripping Membranes need to be stripped and re-probed using a new set of detection antibodies when detecting multiple protein targets on the same blot. Ineffective stripping could result in a weak signal from the target protein. To prevent loss of the antigen, only three stripping incubations are recommended per membrane. It could be difficult to completely eliminate signal from highly-abundant proteins, so it is recommended that low-abundance proteins be detected first. Exogenous spike-in controls Since HKP levels can be inconsistent between tissues, scientists can control for the protein of interest by spiking in a pure, exogenous protein of a known concentration within the linear range of the antibody. Compared to HKPs, a wider variety of proteins are available for spike-in controls. Total protein normalization In total protein normalization (TPN), the abundance of the target protein is normalized to the total amount of protein in each lane. 
Because TPN is not dependent on a single loading control, validation of controls and stripping/reprobing of blots for detection of HKPs is not necessary. This can improve precision (down to 0.1 μg of total protein per lane), cost-effectiveness, and data reliability. Fluorescent stains and stain-free gels require special equipment to visualize the proteins on the gel/blot. Stains may not cover the blot evenly; more stain might collect towards the edges of the blot than in the center. Non-uniformity in the image can result in inaccurate normalization. Pre-antibody stains Anionic dyes such as Ponceau S and Coomassie brilliant blue, and fluorescent dyes like Sypro Ruby and Deep Purple, are used before antibodies are added because they do not affect downstream immunodetection. Ponceau S is a negatively charged reversible dye that stains proteins a reddish pink color and is removed easily by washing in water. The intensity of Ponceau S staining decreases quickly over time, so documentation should be conducted rapidly. A linear range of up to 140 μg is reported for Ponceau S with poor reproducibility due to its highly time-dependent staining intensity and low signal-to-noise ratio. Fluorescent dyes like Sypro Ruby have a broad linear range and are more sensitive than anionic dyes. They are permanent, photostable stains that can be visualized with a standard UV or blue-light transilluminator or a laser scan. Membranes can then be documented either on film or digitally using a charge-coupled device camera. Sypro Ruby blot staining is time-intensive and tends to saturate above 50 μg of protein per lane. Post-antibody stains Amido black is a commonly used permanent post-antibody anionic stain that is more sensitive than Ponceau S. This stain is applied after immunodetection. Stain-free technology Stain-free technology employs an in-gel chemistry for imaging. This chemical reaction does not affect protein transfer or downstream antibody binding. 
Also, it does not involve staining/destaining steps, and the intensity of the bands remains constant over time. Stain-free technology cannot detect proteins that do not contain tryptophan residues. A minimum of two tryptophans is needed to enable detection. The linear range for stain-free normalization is up to 80 μg of protein per lane for 18-well and up to 100 μg per lane for 12-well Criterion mid-sized gels. This range is compatible with typical protein loads in quantitative western blots and enables loading control calculations over a wide protein-loading range. A more efficient stain-free method has also recently become available. When using high protein loads, stain-free technology has demonstrated greater success than stains. References External links V3 Stain-free Workflow for a Practical, Convenient, and Reliable Total Protein Loading Control in Western Blotting Molecular biology techniques
Western blot normalization
[ "Chemistry", "Biology" ]
1,322
[ "Molecular biology techniques", "Molecular biology" ]
51,809,009
https://en.wikipedia.org/wiki/17%CE%B2-Dihydroequilin
17β-Dihydroequilin is a naturally occurring estrogen sex hormone found in horses as well as a medication. As the C3 sulfate ester sodium salt, it is a minor constituent (1.7%) of conjugated estrogens (CEEs; brand name Premarin). However, because equilin (whose sulfate ester is a major component of CEEs) is transformed into 17β-dihydroequilin in the body, analogously to the conversion of estrone into estradiol, 17β-dihydroequilin is, along with estradiol, the most important estrogen responsible for the effects of CEEs. Pharmacology Pharmacodynamics 17β-Dihydroequilin is an estrogen, or an agonist of the estrogen receptors (ERs), the ERα and ERβ. In terms of relative binding affinity for the ERs, 17β-dihydroequilin has about 113% and 108% of that of estradiol for the ERα and ERβ, respectively. 17β-Dihydroequilin has about 83% of the relative potency of CEEs in the vagina and 200% of the relative potency of CEEs in the uterus. Of the equine estrogens, it shows the highest estrogenic activity and greatest estrogenic potency. Like CEEs as a whole, 17β-dihydroequilin has disproportionate effects in certain tissues such as the liver and uterus. Equilin, the second major component of conjugated estrogens after estrone, is reversibly transformed into 17β-dihydroequilin analogously to the transformation of estrone into estradiol. However, whereas the balance of the mutual interconversion of estrone and estradiol is largely shifted in the direction of estrone, it is nearly equal in the case of equilin and 17β-dihydroequilin. As such, although 17β-dihydroequilin is only a minor constituent of CEEs, it is, along with estradiol, the most important estrogen relevant to the estrogenic activity of the medication. Pharmacokinetics 17β-Dihydroequilin has about 30% of the relative binding affinity of testosterone for sex hormone-binding globulin (SHBG), relative to 50% for estradiol. 
The metabolic clearance rate of 17β-dihydroequilin is 1,250 L/day/m2, relative to 580 L/day/m2 for estradiol. Chemistry 17β-Dihydroequilin, or simply β-dihydroequilin, also known as δ7-17β-estradiol or as 7-dehydro-17β-estradiol, as well as estra-1,3,5(10),7-tetraen-3,17β-diol, is a naturally occurring estrane steroid and an analogue of estradiol. In terms of chemical structure and pharmacology, equilin (δ7-estrone) is to 17β-dihydroequilin as estrone is to estradiol. References Secondary alcohols Estranes Estrogens Human drug metabolites
17β-Dihydroequilin
[ "Chemistry" ]
715
[ "Chemicals in medicine", "Human drug metabolites" ]
51,809,242
https://en.wikipedia.org/wiki/8%2C9-Dehydroestrone
8,9-Dehydroestrone, or Δ8-estrone, also known as estra-1,3,5(10),8-tetraen-3-ol-17-one, is a naturally occurring estrogen found in horses which is closely related to equilin, equilenin, and estrone, and, as the 3-sulfate ester sodium salt, is a minor constituent (3.5%) of conjugated estrogens (Premarin). It produces 8,9-dehydro-17β-estradiol as an important active metabolite, analogously to conversion of estrone or estrone sulfate into estradiol. The compound was first described in 1997. In addition to 8,9-dehydroestrone and 8,9-dehydro-17β-estradiol, 8,9-dehydro-17α-estradiol is likely also to be present in conjugated estrogens, but has not been identified at this time. See also List of estrogens § Equine estrogens References Hydroxyarenes Estranes Estrogens Ketones
8,9-Dehydroestrone
[ "Chemistry" ]
250
[ "Ketones", "Functional groups" ]
51,810,734
https://en.wikipedia.org/wiki/S%20Cassiopeiae
S Cassiopeiae (S Cas, HD 7769) is a Mira variable and S-type star in the constellation Cassiopeia. It is an unusually cool star, rapidly losing mass and surrounded by dense gas and dust producing masers. Distance In the absence of a measurement of its parallax by the Hipparcos satellite, its distance from the Solar System was estimated between 1,860 and 2,770 light-years. Gaia Data Release 2 published a parallax of , indicating a distance around , but the observations have a very high noise level and are considered unreliable. A distance of is preferred. Spectral type With a spectral type of S3,4e-S5,8e, S Cassiopeiae is an S-type star similar to χ Cygni; these are asymptotic giant branch (AGB) stars similar to those of class M except that the dominant spectral bands of metal oxides are formed by metals of the fifth period of the periodic table, such as zirconium or yttrium. Another feature of this class of stars is the high mass loss; in the case of S Cassiopeiae it is estimated at per year. Characteristics S Cassiopeiae has a radius of 934 solar radii; if placed at the center of the Solar System, it would extend past the orbit of Mars and the Asteroid Belt. Its effective temperature is 1,800 K. It is possibly a late-thermal-pulse asymptotic giant branch red giant near the tip of its evolution; after this stage, it may shed its outer layers and enter its white dwarf phase, or shrink and grow hotter, possibly becoming an orange giant. Its surface temperature is exceptionally cool for any star other than the brown dwarfs, and its bolometric luminosity is 5,210 times that of the Sun. S Cassiopeiae is a Mira variable, a pulsating variable star whose visual brightness varies over several magnitudes with a somewhat regular period and amplitude. Its visual magnitude varies between +7.9 and +16.1 over an average period of 612.43 days. 
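The size comparison above can be checked with a short calculation, using the standard values for the solar radius and the astronomical unit:

```python
R_SUN_M = 6.957e8   # nominal solar radius in metres
AU_M = 1.496e11     # astronomical unit in metres

# Radius of S Cassiopeiae (934 solar radii) expressed in AU:
radius_au = 934 * R_SUN_M / AU_M
print(round(radius_au, 2))  # ≈ 4.34 AU, beyond Mars (1.52 AU) and the asteroid belt
```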
Mira variables are stars in the last stages of evolution whose instability comes from pulsations of their surfaces, causing changes in color and brightness. Some of them, including S Cassiopeiae, show SiO maser emission. See also TZ Cassiopeiae PZ Cassiopeiae List of largest known stars T Cephei References S-type stars Mira variables Cassiopeia (constellation) Cassiopeiae, S J01194198+7236407 BD+71 66 IRAS catalogue objects 007769 Emission-line stars
S Cassiopeiae
[ "Astronomy" ]
555
[ "Cassiopeia (constellation)", "Constellations" ]
51,811,236
https://en.wikipedia.org/wiki/NGC%20263
NGC 263 is a spiral galaxy located in the constellation Cetus. It was discovered in 1886 by Francis Leavenworth. References External links 0263 Cetus Spiral galaxies 002856
NGC 263
[ "Astronomy" ]
39
[ "Cetus", "Constellations" ]
51,811,965
https://en.wikipedia.org/wiki/1-Keto-1%2C2%2C3%2C4-tetrahydrophenanthrene
1-Keto-1,2,3,4-tetrahydrophenanthrene (THP-1), or 1,2,3,4-tetrahydrophenanthren-1-one, is a synthetic steroid-like compound which was reported to be the first synthetic estrogen, or the first synthetic compound identified with estrogenic activity. It was first synthesized in 1933 by Cook et al. and was tested due to its similarity to the presumed chemical structure of estrone. Upon reassessment many decades later, the compound was found to bind only weakly to the estrogen receptors, and, unexpectedly, did not actually have functional activity as an estrogen or antiestrogen in vitro or in vivo. It did, however, show some androgenic and antiandrogenic activity in vitro. See also Estrin References Ketones Phenanthrenes Synthetic estrogens Substances discovered in the 1930s
1-Keto-1,2,3,4-tetrahydrophenanthrene
[ "Chemistry" ]
197
[ "Ketones", "Functional groups" ]
51,812,024
https://en.wikipedia.org/wiki/Estriol%20triacetate
Estriol triacetate is an estrogen medication and an estrogen ester – specifically, the triacetate ester of estriol – which was never marketed. It has been said to be 10 times as physiologically active as estriol. See also List of estrogen esters § Estriol esters References Abandoned drugs Acetate esters Estriol esters Synthetic estrogens
Estriol triacetate
[ "Chemistry" ]
85
[ "Drug safety", "Abandoned drugs" ]
51,812,133
https://en.wikipedia.org/wiki/Estradiol%20mustard
Estradiol mustard, also known as estradiol 3,17β-bis(4-(bis(2-chloroethyl)amino)phenyl)acetate, is a semisynthetic, steroidal estrogen and cytostatic antineoplastic agent and a phenylacetic acid nitrogen mustard-coupled estrogen ester that was never marketed. It is selectively distributed into estrogen receptor (ER)-positive tissues such as ER-expressing tumors like those seen in breast and prostate cancers. For this reason, estradiol mustard and other cytostatic-linked estrogens like estramustine phosphate have reduced toxicity relative to non-linked nitrogen mustard cytostatic antineoplastic agents. However, they may stimulate breast tumor growth due to their inherent estrogenic activity and are said to be devoid of major therapeutic efficacy in breast cancer, although estramustine phosphate has been approved for and is used (almost exclusively) in the treatment of prostate cancer. See also List of hormonal cytostatic antineoplastic agents List of estrogen esters § Estradiol esters References Abandoned drugs Antineoplastic drugs Chloroethyl compounds Estradiol esters Nitrogen mustards Synthetic estrogens
Estradiol mustard
[ "Chemistry" ]
268
[ "Drug safety", "Abandoned drugs" ]
51,813,998
https://en.wikipedia.org/wiki/Nozzle%20and%20flapper
The nozzle and flapper mechanism is a displacement-type detector which converts mechanical movement into a pressure signal by covering the opening of a nozzle with a flat plate called the flapper. This restricts fluid flow through the nozzle and generates a pressure signal. It is a widely used mechanical means of creating a high-gain fluid amplifier. In industrial control systems, it played an important part in the development of pneumatic PID controllers and is still widely used today in pneumatic and hydraulic control and instrumentation systems. Operating principle The operating principle makes use of the high-gain effect when a flapper plate is placed a small distance from a small pressurized nozzle emitting a fluid. The example shown is pneumatic. At sub-millimeter distances, a small movement of the flapper plate results in a large change in flow. The nozzle is fed from a chamber which is in turn fed by a restriction, so changes of flow result in changes of chamber pressure. The nozzle diameter must be larger than the restriction orifice in order to work. The high gain of the open-loop mechanism can be made linear using a pressure feedback bellows on the flapper to create a force balance system with a linear output. The "live" zero of 0.2 bar or 3 psi is set by the bias spring, which ensures that the device is working in its linear region. The industry-standard ranges of 3–15 psi (USA) or 0.2–1.0 bar (metric) are normally used in pneumatic PID controllers, valve positioning servomechanisms and force balance transducers. Application The nozzle and flapper in pneumatic controls is a simple, low-maintenance device which operates well in a harsh industrial environment, and does not present an explosion risk in hazardous atmospheres. Such devices were the standard industrial controller amplifiers for many decades until the advent of practical and reliable electronic high-gain amplifiers. 
However they are still used extensively for field devices such as control valve positioners, and I to P and P to I converters. A proportional controller schematic is shown here. The set point is transmitted through the flapper plate via the fulcrum to close the orifice and increase the chamber pressure. The feedback bellows resists and the output signal goes to the control valve which opens with increasing actuator pressure. As the flow increases, the process value bellows counteracts the set point bellows until equilibrium is reached. This will be a value below the set point, as there must always be an error to generate an output. The addition of an integral or "reset" bellows would remove this error. The principle is also used in hydraulic systems controls. References Control engineering Control devices
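The proportional behaviour described above, including the live zero and the residual error needed to generate an output, can be sketched numerically. This is an illustrative model only, not a description of any particular instrument; the function and parameter names are assumptions:

```python
def pneumatic_p_output(setpoint, process_value, gain, zero=0.2, span=0.8):
    """Proportional-only output on the metric 0.2-1.0 bar signal range.
    The 'live' zero keeps the mechanism in its linear region, and the
    output saturates at the ends of the signal range."""
    out = zero + gain * (setpoint - process_value)
    return min(zero + span, max(zero, out))

# A residual error is needed to hold the output above the live zero:
print(round(pneumatic_p_output(0.6, 0.5, gain=4.0), 3))  # 0.6 bar
print(round(pneumatic_p_output(0.6, 0.6, gain=4.0), 3))  # 0.2 bar (zero error -> live zero)
```

The second line illustrates why a proportional-only controller settles below the set point: with zero error the output falls back to the live zero, so an integral ("reset") bellows is needed to remove the offset, as the article notes.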
Nozzle and flapper
[ "Engineering" ]
556
[ "Control devices", "Control engineering" ]
51,814,011
https://en.wikipedia.org/wiki/Bounded%20arithmetic
Bounded arithmetic is a collective name for a family of weak subtheories of Peano arithmetic. Such theories are typically obtained by requiring that quantifiers be bounded in the induction axiom or equivalent postulates (a bounded quantifier is of the form ∀x ≤ t or ∃x ≤ t, where t is a term not containing x). The main purpose is to characterize one or another class of computational complexity in the sense that a function is provably total if and only if it belongs to a given complexity class. Further, theories of bounded arithmetic present uniform counterparts to standard propositional proof systems such as the Frege system and are, in particular, useful for constructing polynomial-size proofs in these systems. The characterization of standard complexity classes and the correspondence to propositional proof systems make it possible to interpret theories of bounded arithmetic as formal systems capturing various levels of feasible reasoning (see below). The approach was initiated by Rohit Jivanlal Parikh in 1971, and later developed by Samuel R. Buss and a number of other logicians. Theories Cook's equational theory Stephen Cook introduced an equational theory (for Polynomially Verifiable) formalizing feasibly constructive proofs (resp. polynomial-time reasoning). The language of consists of function symbols for all polynomial-time algorithms introduced inductively using Cobham's characterization of polynomial-time functions. Axioms and derivations of the theory are introduced simultaneously with the symbols from the language. The theory is equational, i.e. its statements assert only that two terms are equal. A popular extension of is a theory , an ordinary first-order theory. Axioms of are universal sentences and contain all equations provable in . In addition, contains axioms replacing the induction axioms for open formulas. Buss's first-order theories Samuel Buss introduced first-order theories of bounded arithmetic . 
are first-order theories with equality in the language , where the function is intended to designate (the number of digits in the binary representation of ) and is . (Note that , i.e. allows to express polynomial bounds in the bit-length of the input.) Bounded quantifiers are expressions of the form , , where is a term without an occurrence of . A bounded quantifier is sharply bounded if has the form of for a term . A formula is sharply bounded if all quantifiers in the formula are sharply bounded. The hierarchy of and formulas is defined inductively: is the set of sharply bounded formulas. is the closure of under bounded existential and sharply bounded universal quantifiers, and is the closure of under bounded universal and sharply bounded existential quantifiers. Bounded formulas capture the polynomial-time hierarchy: for any , the class coincides with the set of natural numbers definable by in (the standard model of arithmetic) and dually . In particular, . The theory consists of a finite list of open axioms denoted BASIC and the polynomial induction schema where . Buss's witnessing theorem Buss (1986) proved that theorems of are witnessed by polynomial-time functions. Theorem (Buss 1986) Assume that , with . Then, there exists a -function symbol such that . Moreover, can -define all polynomial-time functions. That is, -definable functions in are precisely the functions computable in polynomial time. The characterization can be generalized to higher levels of the polynomial hierarchy. Correspondence to propositional proof systems Theories of bounded arithmetic are often studied in connection to propositional proof systems. Similarly as Turing machines are uniform equivalents of nonuniform models of computation such as Boolean circuits, theories of bounded arithmetic can be seen as uniform equivalents of propositional proof systems. The connection is particularly useful for constructions of short propositional proofs. 
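The polynomial induction schema referred to above is, in the standard formulation from the literature (for a formula φ in the relevant class of bounded formulas):

```latex
\Big[\,\varphi(0) \;\land\; \forall x\,\big(\varphi(\lfloor x/2 \rfloor) \rightarrow \varphi(x)\big)\,\Big] \;\rightarrow\; \forall x\,\varphi(x)
```

Since halving x deletes one bit of its binary representation, this induction runs over the length |x| rather than the value of x, which is what ties the theory to polynomial-time reasoning in Buss's witnessing theorem above.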
It is often easier to prove a theorem in a theory of bounded arithmetic and translate the first-order proof into a sequence of short proofs in a propositional proof system than to design short propositional proofs directly in the propositional proof system. The correspondence was introduced by S. Cook. Informally, a statement can be equivalently expressed as a sequence of formulas . Since is a coNP predicate, each can be in turn formulated as a propositional tautology (possibly containing new variables needed to encode the computation of the predicate ). Theorem (Cook 1975) Assume that , where . Then tautologies have polynomial-size Extended Frege proofs. Moreover, the proofs are constructible by a polynomial-time function and proves this fact. Further, proves the so called reflection principle for Extended Frege system, which implies that Extended Frege system is the weakest proof system with the property from the theorem above: each proof system satisfying the implication simulates Extended Frege. An alternative translation between second-order statements and propositional formulas given by Jeff Paris and Alex Wilkie (1985) has been more practical for capturing subsystems of Extended Frege such as Frege or constant-depth Frege. See also Proof complexity Computational complexity Mathematical logic Proof theory Complexity classes NP (complexity) coNP References Further reading (draft from 2008) Krajíček, Jan, Proof Complexity, Cambridge University Press, 2019. External links Proof complexity mailing list. Formal theories of arithmetic
Bounded arithmetic
[ "Mathematics" ]
1,079
[ "Mathematical logic", "Formal theories of arithmetic", "Arithmetic" ]
51,815,005
https://en.wikipedia.org/wiki/Mine%20survey
Mine surveying is the practice of determining the relative positions of points on or beneath the surface of the earth by direct or indirect measurements of distance, direction, and elevation. International and National Institutions International Society for Mine Surveying (ISM) Australian Institute of Mine Surveyors (AIMS) Czech Society of Mine Surveyors and Geologists (SDMG) German Mine Surveying Association (DMV e.V.) Polish Mine Surveying Committee (PK-ISM) Institute of Mine Surveyors of South Africa (IMSSA) See also Surveying Land subsidence Geological survey Spatial sciences References Mining engineering Civil engineering
Mine survey
[ "Engineering" ]
121
[ "Construction", "Civil engineering", "Mining engineering" ]
56,173,597
https://en.wikipedia.org/wiki/Microporellus%20obovatus
Microporellus obovatus is a species of poroid fungus in the family Polyporaceae. It has been found in French Guiana, Guadeloupe, and various locales in Africa, India, and the United States. References obovatus Fungi described in 1838 Fungi of Africa Fungi of India Fungi of the United States Fungi without expected TNC conservation status Fungus species
Microporellus obovatus
[ "Biology" ]
77
[ "Fungi", "Fungus species" ]
56,173,936
https://en.wikipedia.org/wiki/NGC%204551
NGC 4551 is an elliptical galaxy located about 70 million light-years away in the constellation Virgo. It was discovered by astronomer William Herschel on April 17, 1784. NGC 4551 appears to lie close to the lenticular galaxy NGC 4550. However, both galaxies show no sign of interaction and have different red shifts. Both galaxies are also members of the Virgo Cluster. See also List of NGC objects (4001–5000) Dwarf elliptical galaxy References External links Virgo (constellation) Elliptical galaxies 4551 41963 7759 Astronomical objects discovered in 1784 Virgo Cluster Discoveries by William Herschel
NGC 4551
[ "Astronomy" ]
125
[ "Virgo (constellation)", "Constellations" ]
56,176,182
https://en.wikipedia.org/wiki/PomB
PomB is a protein that is part of the stator in Na+-driven bacterial flagella. The stator couples Na+ influx to torque generation in the polar flagellar motor of Vibrio alginolyticus. The stator complex is anchored around the rotor through a putative peptidoglycan-binding (PGB) domain in the periplasmic region of PomB. See also MotB - MotA and MotB make the stator MotA - MotA and MotB make the stator PomA - protein that is part of the stator in Na+-driven flagella Integral membrane protein - a type of membrane protein Archaellum Cilium Ciliopathy Rotating locomotion in living systems Undulipodium References Motor proteins Bacterial proteins
PomB
[ "Chemistry" ]
157
[ "Molecular machines", "Protein stubs", "Motor proteins", "Biochemistry stubs" ]
56,178,504
https://en.wikipedia.org/wiki/External%20support
In cardiac surgery and vascular surgery, external support (or external stent) is a type of scaffold made of metal or plastic material that is inserted over the outside of the vein graft in order to decrease intermediate and late vein graft failure after bypass surgery (e.g. CABG). An external support (external stent) should be differentiated from a stent. An external support is placed on the outside of the vessel, whereas a stent is inserted into the lumen of a vessel. Background Veins are adapted to an environment of low pressure and low flow. In order to bypass the coronary obstruction and restore blood flow, veins are transferred and integrated into the arterial circulation, where they are exposed to high pressure and flow. These new hemodynamic conditions cause intimal hyperplasia and atherosclerosis, which lead to intermediate and late vein graft failure. The idea of placing an external support on the vein graft was first suggested in 1963. The rationale was that it would diminish the circumferential strain of the graft wall, thereby inhibiting intimal hyperplasia and later superimposed atherosclerosis, and aiding the adaptation of the vein to the arterial environment. Method The external scaffold provides mechanical support for the vein graft, absorbs the high arterial pressure, constrains vein graft dilatation, reduces lumen irregularities, and mitigates intimal hyperplasia formation, as shown both in human tissue cultures and in experimental models. However, until recently there were only a limited number of clinical studies, and these showed less positive results. It was hypothesized that graft patency rates were lower with external support because of aggressive over-constriction of the vein graft, unsuitable device materials, and the use of fibrin glue, which has been shown to cause tissue damage, fibrosis and intimal hyperplasia.
Lately, more promising results with second-generation devices have shown that external support can mitigate intimal hyperplasia development, improve vein graft lumen uniformity and improve the flow pattern inside the graft. These benefits were shown to persist for up to five years of follow-up. In addition to improving vein graft failure rates in bypass surgeries, other research studies showed that external support might allow the use of conduits previously considered unsuitable for surgery. To date the technique is practiced in several cardiac centers in Europe, Israel and South Africa. Further clinical studies on a larger number of patients are ongoing in Europe and the US. References Biomedical engineering Cardiac surgery Implants (medicine) Vascular surgery
External support
[ "Engineering", "Biology" ]
542
[ "Biological engineering", "Medical technology", "Biomedical engineering" ]
56,180,154
https://en.wikipedia.org/wiki/3-Chlorobenzoic%20acid
3-Chlorobenzoic acid is an organic compound with the molecular formula ClC6H4CO2H. It is a white solid that is soluble in some organic solvents and in aqueous base. Synthesis and occurrence 3-Chlorobenzoic acid is prepared by oxidation of 3-chlorotoluene. It is a metabolic byproduct of the drug bupropion. References Benzoic acids 3-Chlorophenyl compounds Human drug metabolites
3-Chlorobenzoic acid
[ "Chemistry" ]
104
[ "Chemicals in medicine", "Human drug metabolites" ]
56,181,508
https://en.wikipedia.org/wiki/Chaim%20Goodman-Strauss
Chaim Goodman-Strauss (born June 22, 1967 in Austin, Texas) is an American mathematician who works in convex geometry, especially aperiodic tiling. He retired from the faculty of the University of Arkansas and currently serves as outreach mathematician for the National Museum of Mathematics. He is co-author with John H. Conway and Heidi Burgiel of The Symmetries of Things, a comprehensive book surveying the mathematical theory of patterns. Education and career Goodman-Strauss received both his B.S. (1988) and Ph.D. (1994) in mathematics from the University of Texas at Austin. His doctoral advisor was John Edwin Luecke. He joined the faculty at the University of Arkansas, Fayetteville (UA) in 1994 and served as departmental chair from 2008 to 2015. He held visiting positions at the National Autonomous University of Mexico and Princeton University. During 1995 he did research at The Geometry Center, a mathematics research and education center at the University of Minnesota, where he investigated aperiodic tilings of the plane. Goodman-Strauss has been fascinated by patterns and mathematical paradoxes for as long as he can remember. He attended a lecture about the mathematician Georg Cantor when he was 17 and says, "I was already doomed to be a mathematician, but that lecture sealed my fate." He became a mathematics writer and popularizer. From 2004 to 2012, in conjunction with KUAF 91.3 FM, the University of Arkansas NPR affiliate, he presented "The Math Factor," a podcast website dealing with recreational mathematics. He is an admirer of Martin Gardner and is on the advisory council of Gathering 4 Gardner, an organization that celebrates the legacy of the famed mathematics popularizer and Scientific American columnist, and is active in the associated Celebration of Mind events. In 2022 Goodman-Strauss was awarded the National Museum of Mathematics' Rosenthal Prize, which recognizes innovation and inspiration in math teaching. 
Aperiodic monotiles On March 20, 2023, Goodman-Strauss, together with David Smith, Joseph Samuel Myers and Craig S. Kaplan, announced the proof that the tile discovered by David Smith is an aperiodic monotile, i.e., a solution to a longstanding open einstein problem. The team continues to refine this work. Mathematical artist In 2008 Goodman-Strauss teamed up with J. H. Conway and Heidi Burgiel to write The Symmetries of Things, an exhaustive and reader-accessible overview of the mathematical theory of patterns. He produced hundreds of full-color images for this book using software that he developed for the purpose. The Mathematical Association of America said, "The first thing one notices when one picks up a copy … is that it is a beautiful book … filled with gorgeous color pictures … many of which were generated by Goodman-Strauss. Unlike some books which add in illustrations to keep the reader's attention, the pictures are genuinely essential to the topic of this book." He also creates large-scale sculptures inspired by mathematics, and some of these have been featured at Gathering 4 Gardner conferences. Books 2008 The Symmetries of Things (with John H. Conway and Heidi Burgiel). A. K. Peters, Wellesley, MA, 2008, Papers "Matching Rules and Substitution Tilings", Annals of Mathematics, Second Series, Vol 147, Issue 1 (January 1998), pp. 181–223 "A Small Aperiodic Set of Planar Tiles" European Journal of Combinatorics, Vol 20, Issue 5, (July 1999) pp. 375–384 "Compass and Straightedge in the Poincaré Disk" American Mathematical Monthly Vol. 108 (January 2001), pp. 38–49 "Can’t Decide? Undecide!" Notices of the American Mathematical Society Vol. 57 (March 2010), pp. 343–356 "A strongly aperiodic set of tiles in the hyperbolic plane" Inventiones Mathematicae, Vol 159, Issue 1 (2005), pp. 119–132 "Lots of Aperiodic Sets of Tiles", Journal of Combinatorial Theory, Series A, Vol 160 (November 2018), pp.
409–445 References External links Personal web page "Shaping Surfaces" [Video] Address to National Museum of Mathematics (MoMath) on December 3, 2014 Mathematics popularizers Recreational mathematicians 20th-century American mathematicians 21st-century American mathematicians Academics from Austin, Texas University of Texas at Austin College of Liberal Arts alumni University of Arkansas faculty American geometers Mathematical artists Scientific illustrators Combinatorialists American topologists 1967 births Living people
Chaim Goodman-Strauss
[ "Mathematics" ]
929
[ "Recreational mathematics", "Combinatorialists", "Recreational mathematicians", "Combinatorics" ]
56,182,351
https://en.wikipedia.org/wiki/Managed%20access%20%28corrections%29
Managed access is managing cellular network access from contraband phones within a corrections facility. Managed access differs from cellular jamming technologies, which are outlawed in the United States. A managed access system functions like a femtocell or low-power cell tower which passes calls to cellular carriers; however, only communications from approved devices and emergency calling are allowed. The managed access signal appears as an extension of nearby commercial cellular signals; once a phone connects to the network its identifying information is compared with approved devices and communications are accepted or denied. Managed access networks work with commercial cellular signals including 2G, 3G, 4G/LTE, and WiMAX. In 2010, the Mississippi Department of Corrections tested the first managed access system at Parchman Mississippi State Penitentiary; during one month the system blocked more than 216,000 texts and 600 phone calls. In 2013, the FCC recommended that prisons be allowed to manage their own network access without having to seek approval from the agency, saying that the process of inspecting the systems is "time-consuming and complex" and "discourages their use". In a 2016 op-ed, FCC Chairman Ajit Pai requested that the reforms proposed in 2013 aimed at loosening regulations on managed access and other solutions used to prevent the use of contraband cell phones should be enacted. As of 2016, only California, Maryland, Mississippi, South Carolina, and Texas had tested managed access systems. Drawbacks Managed access systems are unable to stop the use of contraband devices using Wi-Fi to connect to the internet. Deployment of managed access systems requires FCC approval and may require consent from cellular network carriers. The devices can also cause interference outside of the prison if they are not properly implemented. References External links Research on managed access funded by the U.S. Department of Justice Mobile technology
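The admission decision described above reduces to an allowlist check at connection time. The sketch below is a schematic illustration only; the IMSI values and function names are hypothetical and do not reflect any vendor's API:

```python
# Schematic managed-access admission check: connections from approved
# devices and emergency calls are passed to the carrier; all other
# communications are denied. IMSI values here are hypothetical.
APPROVED_IMSIS = {"310150123456789", "310150987654321"}
EMERGENCY_NUMBERS = {"911", "112"}

def admit(imsi, dialed=None):
    """Return True if the connection should be passed to the carrier."""
    if dialed in EMERGENCY_NUMBERS:
        return True  # emergency calling is always allowed
    return imsi in APPROVED_IMSIS

assert admit("310150123456789")          # approved device: passed to carrier
assert admit("310150000000000", "911")   # contraband phone, but emergency call
assert not admit("310150000000000")      # contraband phone: denied
```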
Managed access (corrections)
[ "Technology" ]
365
[ "nan" ]
56,182,553
https://en.wikipedia.org/wiki/Wheeler%20incremental%20inductance%20rule
The incremental inductance rule, attributed to Harold Alden Wheeler by Gupta and others, is a formula used to compute skin effect resistance and internal inductance in parallel transmission lines when the frequency is high enough that the skin effect is fully developed. Wheeler's concept is that the internal inductance of a conductor is the difference between the computed external inductance and the external inductance computed with all the conductive surfaces receded by one half of the skin depth. Linternal = Lexternal(conductors receded) − Lexternal(conductors not receded). Skin effect resistance is assumed to be equal to the reactance of the internal inductance: Rskin = ωLinternal. Gupta gives a general equation with partial derivatives replacing the difference of inductance,

$R_{\text{skin}} = \sum_m \frac{R_{sm}}{\mu_m} \frac{\partial L}{\partial n_m},$

where $\frac{\partial L}{\partial n_m}$ is taken to mean the differential change in inductance as surface m is receded in the $n_m$ direction, $R_{sm}$ is the surface resistivity of surface m, $\mu_m$ is the magnetic permeability of the conductive material at surface m, $\delta_m$ is the skin depth of the conductive material at surface m, and $\hat{n}_m$ is the unit normal vector at surface m. Wadell and Gupta state that the thickness and corner radius of the conductors should be large with respect to the skin depth. Garg further states that the thickness of the conductors must be at least four times the skin depth. Garg states that the calculation is unchanged if the dielectric is taken to be air, and that $L = Z_0/c$, where $Z_0$ is the characteristic impedance and $c$ the velocity of propagation, i.e. the speed of light. Paul, 2007, disputes the accuracy of the rule at very high frequency for rectangular conductors such as stripline and microstrip due to a non-uniform distribution of current on the conductor. At very high frequency, the current crowds into the corners of the conductor.
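As a sanity check on the rule (an illustrative calculation, not drawn from the cited references), the half-skin-depth recession can be applied to a coaxial line, whose external inductance per metre is $L = \mu_0 \ln(b/a)/(2\pi)$, and the result compared with the standard closed-form skin-effect resistance $R = R_s(1/a + 1/b)/(2\pi)$:

```python
import math

MU0 = 4e-7 * math.pi           # permeability of free space, H/m

def coax_inductance(a, b):
    """External inductance per metre of a coaxial line,
    inner conductor radius a, shield inner radius b."""
    return MU0 / (2 * math.pi) * math.log(b / a)

f = 1e9                        # 1 GHz: skin effect fully developed
omega = 2 * math.pi * f
sigma = 5.8e7                  # copper conductivity, S/m
delta = math.sqrt(2 / (omega * MU0 * sigma))   # skin depth

a, b = 1e-3, 3e-3              # conductor radii, m

# Wheeler: recede each conductive surface by delta/2 into the metal
# (the inner conductor shrinks, the shield bore grows).
L1 = coax_inductance(a, b)
L2 = coax_inductance(a - delta / 2, b + delta / 2)
L_internal = L2 - L1
R_wheeler = omega * L_internal            # ohms per metre

# Textbook skin-effect resistance of a coaxial line for comparison.
Rs = 1 / (sigma * delta)                  # surface resistivity
R_exact = Rs / (2 * math.pi) * (1 / a + 1 / b)

assert abs(R_wheeler - R_exact) / R_exact < 1e-2
```

The two results agree to well under a percent at this geometry, since the first-order error of the recession approximation is on the order of the skin depth divided by the conductor radius.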
Example In the top figure, if $L_1$ is the inductance and $Z_1$ is the characteristic impedance computed from the original conductor dimensions, and $L_2$ is the inductance and $Z_2$ is the characteristic impedance computed with every conducting surface receded by half a skin depth, then the internal inductance is $L_{\text{internal}} = L_2 - L_1 = (Z_2 - Z_1)/v$, where $v$ is the velocity of propagation in the dielectric, and the skin effect resistance is $R_{\text{skin}} = \omega L_{\text{internal}}$. Notes References Signal cables Telecommunications engineering Transmission lines Distributed element circuits
Wheeler incremental inductance rule
[ "Engineering" ]
440
[ "Electrical engineering", "Electronic engineering", "Telecommunications engineering", "Distributed element circuits" ]
56,183,086
https://en.wikipedia.org/wiki/Ivan%20Marusic
Ivan Marusic (born 1965) is an Australian engineer and physicist. He is known for his work on turbulence at high Reynolds number, using both theoretical and experimental approaches. Marusic was born to Croatian parents in Široki Brijeg in Bosnia and Herzegovina. He emigrated to Australia when he was three years old along with his parents and older sister. He grew up in Melbourne. He received his PhD in 1992 and a bachelor's degree in mechanical engineering in 1987 from the University of Melbourne. From 1998 to 2002 he was a faculty member at the University of Minnesota, USA, where he was a recipient of an NSF Career Award, Packard Fellowship in Science and Engineering and Taylor Career Development Award. He received an ARC Federation Fellowship in 2006, ARC Laureate Fellowship in 2012 and since 2014 is an elected Fellow of the Australian Academy of Science. In 2010 Marusic was elected a Fellow of the American Physical Society. He was awarded a 2016 APS Stanley Corrsin Award for fluid dynamics research. He was elected a Fellow of the Australian Academy of Technology and Engineering in 2021 and of the Royal Society in 2024. References Living people 1965 births Australian physicists Australian engineers Fellows of the Australian Academy of Science Fellows of the American Physical Society Fellows of the Australian Academy of Technological Sciences and Engineering Fellows of the Royal Society Fluid dynamicists
Ivan Marusic
[ "Chemistry" ]
266
[ "Fluid dynamicists", "Fluid dynamics" ]
56,183,235
https://en.wikipedia.org/wiki/Estradiol%20phosphate
Estradiol phosphate, or estradiol 17β-phosphate, also known as estra-1,3,5(10)-triene-3,17β-diol 17β-(dihydrogen phosphate), is an estrogen which was never marketed. It is an estrogen ester, specifically an ester of estradiol with phosphoric acid, and acts as a prodrug of estradiol in the body. It is rapidly cleaved by phosphatase enzymes into estradiol upon administration. Estradiol phosphate is contained within the chemical structures of two other estradiol esters, polyestradiol phosphate (a polymer of estradiol phosphate) and estramustine phosphate (estradiol 3-normustine 17β-phosphate), both of which have been marketed for the treatment of prostate cancer. See also List of estrogen esters § Estradiol esters References Abandoned drugs Estradiol esters Phosphate esters Synthetic estrogens Steroids
Estradiol phosphate
[ "Chemistry" ]
216
[ "Drug safety", "Abandoned drugs" ]
54,624,511
https://en.wikipedia.org/wiki/Sol%C3%A8r%27s%20theorem
In mathematics, Solèr's theorem is a result concerning certain infinite-dimensional vector spaces. It states that any orthomodular form that has an infinite orthonormal set is a Hilbert space over the real numbers, complex numbers or quaternions. Originally proved by Maria Pia Solèr, the result is significant for quantum logic and the foundations of quantum mechanics. In particular, Solèr's theorem helps to fill a gap in the effort to use Gleason's theorem to rederive quantum mechanics from information-theoretic postulates. It is also an important step in the Heunen–Kornell axiomatisation of the category of Hilbert spaces. Physicist John C. Baez notes, "Nothing in the assumptions mentions the continuum: the hypotheses are purely algebraic. It therefore seems quite magical that [the division ring over which the Hilbert space is defined] is forced to be the real numbers, complex numbers or quaternions." Writing a decade after Solèr's original publication, Pitowsky calls her theorem "celebrated". Statement Let $\mathbb{K}$ be a division ring. That means it is a ring in which one can add, subtract, multiply, and divide but in which the multiplication need not be commutative. Suppose this ring has a conjugation, i.e. an operation $x \mapsto x^*$ for which

$(x + y)^* = x^* + y^*, \quad (xy)^* = y^* x^*, \quad (x^*)^* = x.$

Consider a vector space V with scalars in $\mathbb{K}$, and a mapping

$(u, v) \mapsto \langle u, v \rangle \in \mathbb{K}$

which is $\mathbb{K}$-linear in the left (or in the right) entry, satisfying the identity

$\langle u, v \rangle = \langle v, u \rangle^*.$

This is called a Hermitian form. Suppose this form is non-degenerate in the sense that

$\langle u, v \rangle = 0 \text{ for all } v \text{ implies } u = 0.$

For any subspace S let $S^\perp$ be the orthogonal complement of S. Call the subspace "closed" if $S = S^{\perp\perp}$. Call this whole vector space, and the Hermitian form, "orthomodular" if for every closed subspace S we have that $S + S^\perp$ is the entire space. (The term "orthomodular" derives from the study of quantum logic.
In quantum logic, the distributive law is taken to fail due to the uncertainty principle, and it is replaced with the "modular law," or in the case of infinite-dimensional Hilbert spaces, the "orthomodular law.") A set of vectors $(e_i)$ is called "orthonormal" if

$\langle e_i, e_j \rangle = 0 \text{ for } i \neq j, \quad \langle e_i, e_i \rangle = 1.$

The result is this: If this space has an infinite orthonormal set, then the division ring of scalars is either the field of real numbers, the field of complex numbers, or the ring of quaternions. References Hilbert spaces Mathematical logic Theorems in quantum mechanics
Solèr's theorem
[ "Physics", "Mathematics" ]
532
[ "Theorems in quantum mechanics", "Equations of physics", "Mathematical logic", "Quantum mechanics", "Theorems in mathematical physics", "Hilbert spaces", "Physics theorems" ]
54,625,341
https://en.wikipedia.org/wiki/Autonomous%20Rail%20Rapid%20Transit
Autonomous Rail Rapid Transit (ART) is a lidar (light detection and ranging) guided bi-articulated bus system for urban passenger transport. Developed and manufactured by CRRC through CRRC Zhuzhou Institute Co Ltd, it was unveiled in Zhuzhou in the Hunan province on June 2, 2017. Its manufacturer specifically refers to ART as a train or rapid transit, branding it "Digital-rail Rapid Transit" and an electric road; the public, however, describes it as a bus or trolleybus and as bus rapid transit. Its exterior is composed of individual fixed sections joined by articulated gangways, resembling a rubber-tyred tram and Translohr. The system is labelled "autonomous" in English; however, the models in operation are optically guided and feature a driver on board. Despite "rail" in the name, the system does not use rails. Automated Rapid Transit systems (ARTs) can operate independently without the need for a guiding sensor and as a result, they fall under the classification of buses. Consequently, vehicles deployed on these routes are mandated to display license plates. Background Before the announcement by CRRC, optically guided buses had been in use in a number of cities in Europe and North America, including in Rouen as part of Transport Est-Ouest Rouennais, in Las Vegas as a segment of Metropolitan Area Express BRT service (now discontinued), and in Castellón de la Plana. The guidance system technology used on these systems was called Visée under their original developer Matra, and is now named Optiguide after being acquired by Siemens. Description An ART vehicle with three carriages is approximately long. It can travel at a speed of and can carry up to 300 passengers. A five-carriage ART vehicle provides space for 500 passengers. A four-carriage model was introduced in 2021 which can carry 400 passengers. Two vehicles can closely follow each other without being mechanically connected, similarly to multiple unit train control.
The entire ART has a low-floor design, built from a space frame with bolted-on panels to support the weight of passengers. It is built as a bi-directional vehicle, with driver's cabs at either end, allowing it to travel in either direction at full speed. The long ART lane was built through downtown Zhuzhou and inaugurated in 2018. Sensors and batteries The ART is equipped with various optical and other types of sensors to allow the vehicle to automatically follow a route defined by a virtual track of markings on the roadway. A steering wheel also allows the driver to manually guide the vehicle, including around detours. A Lane Departure Warning System helps to keep the vehicle in its lane and automatically warns the driver if it drifts away from the lane. A Collision Warning System supports the driver in keeping a safe distance from other vehicles on the road, and if the proximity falls below a given level, it alerts the driver with a warning sign. The Route Change Authorization is a navigation device which analyzes the traffic conditions on the chosen route and can recommend a detour to avoid traffic congestion. The Electronic Rearview Mirrors work with remotely adjustable cameras and provide a clearer view than conventional mirrors, including an auto-dimming device to reduce glare. The ART is powered by lithium–titanate batteries and can travel a distance of per full charge. The batteries can be recharged via current collectors at stations. The recharging time for a trip is 30 seconds and for a trip, 10 minutes. Benefits and limitations A 2018 article by a sustainability academic argued trackless trams could replace both light rail and bus rapid transit due to low cost, quick installation and low emissions. Others have disputed the claims about cost and quick installation, and argued that ART is a proprietary technology with little deployment worldwide.
Other experts have argued the technology is overhyped, that optical guidance technology is not new, and that current proposals largely represent a repackaging of the bus as a rail-replacement technology. As of 2022 there are no systems outside of China and few proposals. That may be because: The system is not fully autonomous The system is not rail-based and so has the ride qualities of a bus The vehicles can get stuck in road traffic when not operated in dedicated rights of way The required vehicles cannot be bought through competitive tender Proponents have argued the lack of rails means cheaper construction costs. Multi-axle hydraulic steering technology and bogie-like wheel arrangement could allow lower swept path in turns, thus requiring less side clearance. The minimum turning radius of is similar to buses. However, because the ART is a guided system, ruts and depressions could be worn into the road by the alignment of the large number of wheels, so reinforcement of the roadway to prevent those problems may be as disruptive as the installation of rails in a light rail system. Researchers in 2021 found evidence of significant road wear due to trackless tram vehicles, which undermined claims of quick construction, with the researchers finding significant road strengthening was required by the technology. The suitability of the system for winter climates with ice and snow has not yet been proven. The higher rolling resistance of rubber tires requires more energy for propulsion than the steel wheels of a light rail vehicle. A few abandoned proposals for light-rail lines have been revived as ART proposals because of the lower projected costs. However, a different report, by the Australian Railways Association, which supports light rail, said there were reliability questions with ART installations, implying the initial suggested capital cost savings were illusory. 
A November 2020 proposal for a trackless tram system in the City of Wyndham, near Melbourne, posited a cost of $AU23.53M per km for roadworks, vehicles, recharge point and depots. Recently completed light rail systems in Australia have had costs of between $AU80M and $AU150M per km. The Government of New South Wales considered the system as an alternative to light rail for a line to connect Sydney Olympic Park to Parramatta. However, concerns were raised that there was only one supplier of the technology, and that the development of "long articulated buses" was "too much in its preliminary phase" to meet the project deadlines. Instead, the plan was to build a light-rail line which would connect to another light-rail route already under construction, so passengers would not have to change vehicles. The Auckland Light Rail Group, in its studies of trackless trams for the City Centre to Māngere line, found that trackless trams would have a lower capacity than claimed. The official specifications for the ARRT assume a standing density of eight passengers per square meter, whereas many transit systems have more typical standing densities of four passengers per square meter. Based on that, the long ARRT would more realistically have a capacity of 170 passengers, rather than the claimed 307. This would be only a slight increase over the typical capacity of conventional bi-articulated buses at the same passenger density (~150 passengers), and less than a typical long LRV (~210-225 passengers). List of commercially operating lines Proposed systems Proposals, including vehicle testing, have been made in several countries. China, Changsha. Changsha Meixi Lake to Changsha Municipal Government line, reported to start construction in 2021 for completion in 2022 China, Harbin. In May 2021 testing of a vehicle was underway with plans for an route with 11 stations. 
There are reports that stations were constructed in January 2021 and that trial operations would commence in August 2021. China, Tongli. Testing was underway with the service expected to open to passengers by the end of 2021. China, Xi'an. Two routes. One with 18 stations over and a second with 9 stations over . Malaysia. Iskandar Malaysia Bus Rapid Transit in Johor. ART is one technology under consideration for the corridor. A three-month test of an ART vehicle, along with eight other bus types, began in April 2021. In May 2024, the planned three-line IMBRT was shelved because it would be unable to handle the traffic flow and would affect the efficiency of the service. Traffic conditions were projected to be much worse once the under-construction RTS Link train line is completed by end 2026. The Johor government then proposed the construction of an Elevated Autonomous Rapid Transit (E-ART) system, a hybrid system utilising LRT infrastructure (without the LRT track) and the ART system, to replace the now-cancelled IMBRT. Malaysia. Kuching Urban Transportation System in Kuching. The three-line Kuching LRT project was proposed as light rail in 2018, but shelved due to costs. In 2019, the government of Sarawak announced that the ART technology had been selected instead, due to its lower costs for similar levels of service. The project has commenced construction and is under testing. Mexico. Metrorrey Line 5 in Monterrey. This new line of the Metrorrey system is currently being built by the government of the state of Nuevo León. The public tender was awarded in 2022 to a consortium formed by the Portuguese firm Mota-Engil and the Chinese CRRC. In October 2023 Governor Samuel García presented the ART vehicles that would be used for Line 5. It is expected to open in 2027. Qatar. The system was considered for use during the 2022 FIFA World Cup, but was not pursued. In July 2019 a two-week test with one vehicle was undertaken in Doha, the first trial outside China.
Australia. Perth In March 2021, the Australian Government provided $2 million to produce a business case to investigate a trackless tram on Scarborough Beach Road between the Stirling City Centre and the Perth CBD. In September 2023, an ART vehicle was delivered to the City of Stirling to begin trials for a proposed route between Glendalough railway station and Scarborough Beach. Indonesia. The system is considered for use in Nusantara, the future capital city. The bus has been delivered in July 2024, will be showcased in August 2024 at the time of the Independence Day, and tested in October-December 2024. New Zealand. In June 2024, Auckland Transport indicated it was interested in trialling a trackless tram on the Northern Busway. See also Automatic train operation Automated guideway transit Articulated bus Bi-articulated bus Battery electric bus Capacitor electric vehicle#Capabus Charging station Electric road Fuel cell bus Gadgetbahn Guided bus Personal rapid transit Rubber-tyred metro Rubber-tyred tram Translohr Trolleybus Trackless train Transit Elevated Bus (TEB) Trolleybus#Off-wire power developments (In Motion Charging) Wright StreetCar References Guided bus Self-driving cars Bus rapid transit in China
Autonomous Rail Rapid Transit
[ "Engineering" ]
2,148
[ "Automotive engineering", "Self-driving cars" ]
54,625,345
https://en.wikipedia.org/wiki/Right%20to%20explanation
In the regulation of algorithms, particularly artificial intelligence and its subfield of machine learning, a right to explanation (or right to an explanation) is a right to be given an explanation for an output of the algorithm. Such rights primarily refer to individual rights to be given an explanation for decisions that significantly affect an individual, particularly legally or financially. For example, a person who applies for a loan and is denied may ask for an explanation, which could be "Credit bureau X reports that you declared bankruptcy last year; this is the main factor in considering you too likely to default, and thus we will not give you the loan you applied for." Some such legal rights already exist, while the scope of a general "right to explanation" is a matter of ongoing debate. There have been arguments made that a "social right to explanation" is a crucial foundation for an information society, particularly as the institutions of that society will need to use digital technologies, artificial intelligence, and machine learning. In other words, automated decision-making systems that provide explanations would be more trustworthy and transparent. Without this right, which could be constituted both legally and through professional standards, the public will be left without much recourse to challenge the decisions of automated systems. Examples Credit scoring in the United States Under the Equal Credit Opportunity Act (Regulation B of the Code of Federal Regulations), Title 12, Chapter X, Part 1002, §1002.9, creditors are required to notify applicants who are denied credit with the specific reasons for the denial. As detailed in §1002.9(b)(2): The official interpretation of this section details what types of statements are acceptable.
Creditors comply with this regulation by providing a list of reasons (generally at most four, per the interpretation of the regulations), each consisting of a numeric identifier and an associated explanation, identifying the main factors affecting a credit score. An example might be: 32: Balances on bankcard or revolving accounts too high compared to credit limits European Union The European Union General Data Protection Regulation (enacted 2016, taking effect 2018) extends the automated decision-making rights in the 1995 Data Protection Directive to provide a legally disputed form of a right to an explanation, stated as such in Recital 71: "[the data subject should have] the right ... to obtain an explanation of the decision reached". In full: However, the extent to which the regulations themselves provide a "right to explanation" is heavily debated. There are two main strands of criticism. There are significant legal issues with the right as found in Article 22 — as recitals are not binding, and the right to an explanation is not mentioned in the binding articles of the text, having been removed during the legislative process. In addition, there are significant restrictions on the types of automated decisions that are covered — which must be both "solely" based on automated processing, and have legal or similarly significant effects — which significantly limits the range of automated systems and decisions to which the right would apply. In particular, the right is unlikely to apply in many of the cases of algorithmic controversy that have been picked up in the media. A second potential source of such a right has been pointed to in Article 15, the "right of access by the data subject". This restates a similar provision from the 1995 Data Protection Directive, allowing the data subject access to "meaningful information about the logic involved" in the same significant, solely automated decision-making, found in Article 22.
Yet this too suffers from alleged challenges that relate to the timing of when this right can be drawn upon, as well as practical challenges that mean it may not be binding in many cases of public concern. Other EU legislative instruments contain explanation rights. The European Union's Artificial Intelligence Act provides in Article 86 a "[r]ight to explanation of individual decision-making" for certain high-risk systems which produce significant, adverse effects on an individual's health, safety or fundamental rights. The right provides for "clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken", although it only applies to the extent other law does not provide such a right. The Digital Services Act in Article 27, and the Platform to Business Regulation in Article 5, both contain rights to have the main parameters of certain recommender systems made clear, although these provisions have been criticised as not matching the way that such systems work. The Platform Work Directive, which provides for regulation of automation in gig economy work as an extension of data protection law, further contains explanation provisions in Article 11, using the specific language of "explanation" in a binding article rather than a recital as is the case in the GDPR. Scholars note that uncertainty remains as to whether these provisions imply sufficiently tailored explanations in practice, a question that will need to be resolved by the courts. France In France the 2016 Loi pour une République numérique (Digital Republic Act or loi numérique) amends the country's administrative code to introduce a new provision for the explanation of decisions made by public sector bodies about individuals.
It notes that where there is "a decision taken on the basis of an algorithmic treatment", the rules that define that treatment and its “principal characteristics” must be communicated to the citizen upon request, where there is not an exclusion (e.g. for national security or defence). These should include the following: the degree and the mode of contribution of the algorithmic processing to the decision-making; the data processed and its source; the treatment parameters, and where appropriate, their weighting, applied to the situation of the person concerned; the operations carried out by the treatment. Scholars have noted that this right, while limited to administrative decisions, goes beyond the GDPR right to explicitly apply to decision support rather than decisions "solely" based on automated processing, and provides a framework for explaining specific decisions. Indeed, the GDPR automated decision-making rights in the European Union, one of the places a "right to an explanation" has been sought within, find their origins in French law in the late 1970s. Criticism Some argue that a "right to explanation" is at best unnecessary, at worst harmful, and threatens to stifle innovation. Specific criticisms include: favoring human decisions over machine decisions, being redundant with existing laws, and focusing on process over outcome. Lilian Edwards and Michael Veale, authors of the study “Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For”, argue that a right to explanation is not the solution to harms caused to stakeholders by algorithmic decisions. They also state that the right of explanation in the GDPR is narrowly defined, and is not compatible with how modern machine learning technologies are being developed. With these limitations, defining transparency within the context of algorithmic accountability remains a problem.
For example, providing the source code of algorithms may not be sufficient and may create other problems in terms of privacy disclosures and the gaming of technical systems. To mitigate this issue, Edwards and Veale argue that an auditing system could be more effective, allowing auditors to look at the inputs and outputs of a decision process from an external shell — in other words, “explaining black boxes without opening them.” Similarly, Oxford scholars Bryce Goodman and Seth Flaxman assert that the GDPR creates a ‘right to explanation’, but do not elaborate much beyond that point, noting the limitations in the current GDPR. Regarding this debate, scholars Andrew D Selbst and Julia Powles state that rather than fixating on whether one uses the phrase ‘right to explanation’ or not, more attention must be paid to the GDPR's express requirements and how they relate to its background goals, and more thought must be given to determining what the legislative text actually means. More fundamentally, many algorithms used in machine learning are not easily explainable. For example, the output of a deep neural network depends on many layers of computations, connected in a complex way, and no one input or computation may be a dominant factor. The field of Explainable AI seeks to provide better explanations from existing algorithms, and algorithms that are more easily explainable, but it is a young and active field. Others argue that the difficulties with explainability are due to its overly narrow focus on technical solutions rather than connecting the issue to the wider questions raised by a "social right to explanation." Suggestions Edwards and Veale see the right to explanation as providing some grounds for explanations about specific decisions. They discuss two types of algorithmic explanations, model centric explanations and subject-centric explanations (SCEs), which are broadly aligned with explanations about systems or decisions.
SCEs are seen as the best way to provide for some remedy, although with some severe constraints if the data is just too complex. Their proposal is to break down the full model and focus on particular issues through pedagogical explanations to a particular query, “which could be real or could be fictitious or exploratory”. These explanations will necessarily involve trade-offs with accuracy to reduce complexity. With growing interest in explanation of technical decision-making systems in the field of human-computer interaction design, researchers and designers have put effort into opening the black box through mathematically interpretable models, an approach often removed from cognitive science and the actual needs of people. Alternative approaches would be to allow users to explore the system's behavior freely through interactive explanations. One of Edwards and Veale's proposals is to partially remove transparency as a necessary key step towards accountability and redress. They argue that people trying to tackle data protection issues have a desire for an action, not for an explanation. The actual value of an explanation will not be to relieve or redress the emotional or economic damage suffered, but to understand why something happened and to help ensure a mistake doesn't happen again. On a broader scale, in the study Explainable machine learning in deployment, the authors recommend building an explainable framework that clearly establishes the desiderata by identifying stakeholders, engaging with them, and understanding the purpose of the explanation. Alongside this, explainability concerns such as causality, privacy, and performance improvement must be considered in the system. See also Algorithmic transparency Automated decision-making Explainable artificial intelligence Regulation of algorithms References Accountability Algorithms Human rights Machine learning Regulation of artificial intelligence
Right to explanation
[ "Mathematics", "Technology", "Engineering" ]
2,118
[ "Regulation of artificial intelligence", "Machine learning", "Applied mathematics", "Algorithms", "Mathematical logic", "Computing and society", "Artificial intelligence engineering" ]
54,625,930
https://en.wikipedia.org/wiki/Tyromyces%20pulcherrimus
Tyromyces pulcherrimus, commonly known as the strawberry bracket, is a species of poroid fungus in the family Polyporaceae. It is readily recognisable by its reddish fruit bodies with pores on the cap underside. The fungus is found natively in Australia and New Zealand, where it causes a white rot in living and dead logs of southern beech and eucalyptus. In southern Brazil, it is an introduced species that is associated with imported eucalypts. Taxonomy The fungus was first described in 1922 by English-born Australian dentist and botanist Leonard Rodway, who called it Polyporus pulcherrimus. Curtis Gates Lloyd, an American mycologist to whom Rodway had sent a specimen for examination, suggested a similarity to Albatrellus confluens. Gordon Herriot Cunningham transferred it to the genus Tyromyces in 1965 to give it the name by which it is known today. Some sources refer to the species as Aurantiporus pulcherrimus, after Buchanan and Hood's proposed 1992 transfer to Aurantiporus. The specific epithet pulcherrimus is derived from the Latin word for "very beautiful". One common name used for the fungus is strawberry bracket. Description The fruit bodies of Tyromyces pulcherrimus are bracket-shaped caps that measure in diameter. They are sessile, lacking a stipe, and are instead attached directly to the substrate. The cap colour when fresh is cherry red or salmon, but it dries to become brownish. The cap surface can be hairy, particularly near the point of attachment. Pores on the cap underside are red, and number about 1–3 per millimetre. The flesh is soft and thick, red, and watery. It does not have any distinct odour. Tyromyces pulcherrimus is inedible. With a monomitic hyphal system, Tyromyces pulcherrimus contains only generative hyphae. These hyphae are clamped, and are sometimes covered with granules, or an orange substance that appears oily. The hyphae in the context are arranged in a parallel fashion, and strongly agglutinated to form a densely packed tissue.
Cystidia are absent from the hymenium. The basidia are club shaped with typically four sterigmata, and measure 15–23 by 6.5–7.5 μm. Spores are ellipsoid to more or less spherical, hyaline, and measure 5–7 by 3.5–4.5 μm. Habitat and distribution Tyromyces pulcherrimus is a white rot fungus that grows on the exposed heartwood of several tree species. It has been recorded on southern beech (Nothofagus cunninghamii) in Victoria and Tasmania and on Antarctic beech (Nothofagus moorei) in Queensland and New South Wales. In Tasmania, evidence suggests that it prefers wet forests, including rainforest and wet sclerophyll forest. In Brazil, it is an introduced species that has been recorded on imported eucalypts. It has been found there in Rio Grande do Sul State. In New Zealand, the fungus has been recorded on red beech (Fuscospora fusca) and silver beech (Lophozonia menziesii). References Fungi native to Australia Fungi of New Zealand Fungi of Brazil Fungi described in 1922 pulcherrimus Fungus species
Tyromyces pulcherrimus
[ "Biology" ]
717
[ "Fungi", "Fungus species" ]
54,626,053
https://en.wikipedia.org/wiki/UGC%206614
UGC 6614 is a giant spiral galaxy located about 330 million light-years away in the constellation Leo. It has an estimated diameter of nearly 300,000 light-years. Physical characteristics UGC 6614 is classified as a low surface brightness (LSB) galaxy. The galaxy is nearly face-on and has a ring-like feature around its bulge, with distinctive extended spiral arms. The bulge of UGC 6614 is found to be red, similar to those of S0 and other elliptical galaxies, hinting at the existence of an old star population. In its center, globular clusters are present. It is hypothesised UGC 6614 might be a giant elliptical galaxy, but because of repeated mergers with other disk galaxies, it shows a stellar disk structure, causing its spiral-like appearance. UGC 6614 possibly shows the highest metallicity known for an LSB galaxy, with an estimated log(O/H) of roughly −3 to −2.84. Its nucleus shows AGN activity at optical wavelengths and appears as a bright core in X-ray emission, according to XMM-Newton archival data. Black hole UGC 6614 contains a supermassive black hole in its center with an estimated mass of 3.8 × 10^6 solar masses. Unconfirmed Supernova AT 2020ojw, an astronomical transient, was discovered in UGC 6614 in July 2020 by ATLAS (Asteroid Terrestrial-impact Last Alert System). It had a magnitude of 18.4 and is a candidate supernova. Group Membership UGC 6614 is a member of a small group of three galaxies known as [T2015] nest 100958. [T2015] nest 100958 has a velocity dispersion of 244 km/s and an estimated mass of 1.38 × 10^13 M☉. Other members of the group include its brightest member, NGC 3767, and CGCG 097-024. The group is part of the Coma Supercluster. See also NGC 45 Low-surface-brightness galaxy References External links Unbarred spiral galaxies Leo (constellation) 06614 036122 +03-30-029 Ring galaxies Low surface brightness galaxies T2015 nest 100958
UGC 6614
[ "Astronomy" ]
460
[ "Leo (constellation)", "Constellations" ]
54,626,105
https://en.wikipedia.org/wiki/Photochlorination
Photochlorination is a chlorination reaction that is initiated by light. Usually a C-H bond is converted to a C-Cl bond. Photochlorination is carried out on an industrial scale. The process is exothermic and proceeds as a chain reaction initiated by the homolytic cleavage of molecular chlorine into chlorine radicals by ultraviolet radiation. Many chlorinated solvents are produced in this way. History Chlorination is one of the oldest known substitution reactions in chemistry. The French chemist Jean-Baptiste Dumas investigated the substitution of hydrogen by chlorine in acetic acid and candle wax as early as 1830. He showed that for each mole of chlorine introduced into a hydrocarbon, one mole of hydrogen chloride is also formed, and noted the light-sensitivity of this reaction. The idea that these reactions might be chain reactions is attributed to Max Bodenstein (1913). He assumed that in the reaction of two molecules not only the end product of the reaction can be formed, but also unstable, reactive intermediates which can continue the chain reaction. Photochlorination garnered commercial attention with the availability of cheap chlorine from chloralkali electrolysis. Chlorinated alkanes found an initial application in pharyngeal sprays. These contained chlorinated alkanes in relatively large quantities as solvents for chloramine T from 1914 to 1918. The Sharpless Solvents Corporation commissioned the first industrial photochlorination plant for the chlorination of pentane in 1929. The commercial production of chlorinated paraffins for use as high-pressure additives in lubricants began around 1930. Around 1935 the process was technically stable and commercially successful. However, it was only in the years after World War II that a greater build-up of photochlorination capacity began. In 1950, the United States produced more than 800,000 tons of chlorinated paraffin hydrocarbons. The major products were ethyl chloride, carbon tetrachloride and dichloromethane.
Because of concerns about health and environmental problems, such as the ozone-depleting behavior of volatile chlorine compounds, the chemical industry developed alternative procedures that did not require chlorinated compounds. As a result of the subsequent replacement of chlorinated by non-chlorinated products, worldwide production volumes have declined considerably over the years. Reactions Photochlorinations are usually effected in the liquid phase, usually employing chemically inert solvents. Alkane substrates The photochlorination of hydrocarbons is unselective, although the reactivity of the C-H bonds is tertiary>secondary>primary. At 30 °C the relative reaction rates of primary, secondary and tertiary hydrogen atoms are in a ratio of approximately 1 to 3.25 to 4.43. The C-C bonds remain unaffected. Upon irradiation the reaction involves alkyl and chlorine radicals following a chain reaction according to the given scheme: Chain termination occurs by recombination of chlorine atoms. Impurities such as oxygen (present in electrochemically obtained chlorine) also cause chain termination. The selectivity of photochlorination (with regard to substitution of primary, secondary or tertiary hydrogens) can be controlled by the interaction of the chlorine radical with the solvent, such as benzene, tert-butylbenzene or carbon disulfide. Selectivity increases in aromatic solvents. By varying the solvent, the ratio of primary to secondary hydrogen substitution can be tailored to ratios between 1:3 and 1:31. At higher temperatures, the reaction rates of primary, secondary and tertiary hydrogen atoms equalize. Therefore, photochlorination is usually carried out at lower temperatures. Aromatic substrates The photochlorination of benzene also proceeds via a radical chain reaction: […] In some applications, the reaction is carried out at 15 to 20 °C. At a conversion of 12 to 15% the reaction is stopped and the reaction mixture is worked up.
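The per-hydrogen relative rates quoted above (roughly 1 : 3.25 : 4.43 for primary, secondary and tertiary C-H bonds at 30 °C) allow a simple statistical estimate of the monochlorination product distribution of a given alkane. The sketch below applies them to 2-methylbutane (9 primary, 2 secondary, 1 tertiary hydrogen); the substrate choice is illustrative, not taken from the text.

```python
# Estimate monochlorination product fractions from hydrogen counts and the
# per-hydrogen relative rates given in the text (values for 30 °C).
RATES = {"primary": 1.0, "secondary": 3.25, "tertiary": 4.43}

def product_distribution(h_counts):
    """h_counts: mapping of C-H type -> number of such hydrogens.

    Each product's weight is (number of equivalent hydrogens) x (relative rate);
    fractions are weights normalized to 1.
    """
    weights = {site: n * RATES[site] for site, n in h_counts.items()}
    total = sum(weights.values())
    return {site: w / total for site, w in weights.items()}

# 2-methylbutane: 9 primary, 2 secondary, 1 tertiary hydrogen
dist = product_distribution({"primary": 9, "secondary": 2, "tertiary": 1})
for site, frac in dist.items():
    print(f"{site}: {frac:.1%}")
```

The result illustrates the "unselective" character described above: even though a tertiary C-H reacts over four times faster per hydrogen, the nine primary hydrogens still make the primary chloride the statistically dominant product.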
Products Chloromethanes An example of photochlorination at low temperatures and under ambient pressure is the chlorination of chloromethane to dichloromethane. The liquefied chloromethane (boiling point -24 °C) is mixed with chlorine in the dark and then irradiated with a mercury-vapor lamp. The resulting dichloromethane has a boiling point of 41 °C and is later separated by distillation from methyl chloride. The photochlorination of methane has a lower quantum yield than the chlorination of dichloromethane. Due to the high light intensity required, the intermediate products are directly chlorinated, so that mainly tetrachloromethane is formed. Chlorinated paraffins A major application of photochlorination is the production of chloroparaffins. Mixtures of complex composition consisting of several chlorinated paraffins are formed. Chlorinated paraffins have the general formula CxH(2x−y+2)Cly and are categorized into three groups: Low molecular weight chlorinated paraffins are short chain chloroparaffins (SCCP) with 10 to 13 carbon atoms, followed by medium chain chloroparaffins (MCCP) with carbon chain lengths of 14 to 17 carbon atoms and long chain chlorinated paraffins (LCCP), having carbon chains with more than 17 carbon atoms. Approximately 70% of the chloroparaffins produced are MCCPs with a degree of chlorination from 45 to 52%. The remaining 30% are divided equally between SCCP and LCCP. Short chain chloroparaffins have high toxicity and easily accumulate in the environment. The European Union has classified SCCP as a category III carcinogen and restricted its use. In 1985 the world production was 300,000 tonnes; since then production volumes have been falling in Europe and North America. In China, on the other hand, production rose sharply. China produced more than 600,000 tonnes of chlorinated paraffins in 2007, while in 2004 it was less than 100,000 tonnes. The quantum yield for the photochlorination of n-heptane is about 7000, for example.
In photochlorination plants, the quantum yield is about 100. In contrast to thermal chlorination, which can utilize the reaction energy formed, the energy required to maintain the photochemical reaction must be constantly delivered. The presence of inhibitors, such as oxygen or nitrogen oxides, must be avoided. Excessive chlorine concentrations lead to high absorption near the light source and have a disadvantageous effect. Benzyl chloride, benzal chloride and benzotrichloride The photochlorination of toluene is selective for the methyl group. Mono- to trichlorinated products are obtained, the most important of which is the mono-substituted benzyl chloride, which is hydrolyzed to benzyl alcohol. Benzyl chloride can also be converted via benzyl cyanide with subsequent hydrolysis into phenylacetic acid. The disubstituted benzal chloride is converted to benzaldehyde, a popular flavorant and intermediate for the production of malachite green and other dyes. The trisubstituted benzotrichloride is hydrolyzed in the synthesis of benzoyl chloride: By reaction with alcohols, benzoyl chloride can be converted into the corresponding esters. With sodium peroxide it turns into dibenzoyl peroxide, a radical initiator for polymerizations. However, the atom economy of these syntheses is poor, since stoichiometric amounts of salts are obtained. Process variants Sulfochlorination The sulfochlorination first described by Cortes F. Reed in 1936 proceeds under almost the same conditions as conventional photochlorination. In addition to chlorine, sulfur dioxide is also introduced into the reaction mixture. The products formed are alkylsulfonyl chlorides, which are further processed into surfactants. Hydrochloric acid is formed as a coupling product, as is the case with photochlorination. Since direct sulfonation of the alkanes is hardly possible, this reaction has proven to be useful.
Due to chlorine, which is bound directly to the sulfur, the resulting products are highly reactive. As secondary products there are alkyl chlorides formed by pure photochlorination, as well as several sulfochlorinated products in the reaction mixture. Photobromination Photobromination with elemental bromine proceeds analogous to photochlorination also via a radical mechanism. In the presence of oxygen, the hydrogen bromide formed is partly oxidised back to bromine, resulting in an increased yield. Because of the easier dosage of the elemental bromine and the higher selectivity of the reaction, photobromination is preferred over photochlorination at laboratory scale. For industrial applications, bromine is usually too expensive (as it is present in sea water in small quantities only and produced from oxidation with chlorine). Instead of elemental bromine, N-bromosuccinimide is also suitable as a brominating agent. The quantum yield of photobromination is usually much lower than that of photochlorination. Further reading H. Hartig: Einfache Dimensionierung, photochemischer Reaktoren. In: Chemie Ingenieur Technik – CIT. 42, 1970, p. 1241–1245, . Dieter Wöhrle, Michael W. Tausch, Wolf-Dieter Stohrer: Photochemie: Konzepte, Methoden, Experimente. Wiley & Sons, 1998, , p. 271–275. David A. Mixon, Michael P. Bohrer, Patricia A. O’Hara: Ultrapurification of SiCl4 by photochlorination in a bubble column reactor. In: AIChE Journal. 36, 1990, p. 216–226, doi:10.1002/aic.690360207. Hans Von Halban: Die Lichtabsorption des Chlors. In: Zeitschrift für Elektrochemie und angewandte physikalische Chemie. 28, 1922, p. 496–499, doi:10.1002/bbpc.19220282304. Arthur John Allmand: Part I.—Einstein’s law of photochemical equivalence. Introductory address to Part I. In: Trans. Faraday Soc. 21, 1926, p. 438, doi:10.1039/TF9262100438. Marion L. Sharrah, Geo. C. Feighner: Synthesis of Dodecylbenzen – Synthetic Detergent Intermediate. In: Industrial & Engineering Chemistry. 46, 1954, p. 
248–254, doi:10.1021/ie50530a020. Directive 2003/53/EC of the European Parliament and of the Council of 18 June 2003 amending for the 26th time Council Directive 76/769/EEC relating to restrictions on the marketing and use of certain dangerous substances and preparations (nonylphenol, nonylphenol ethoxylate and cement) C. Decker, M. Balandier, J. Faure: Photochlorination of Poly(vinyl Chloride). I. Kinetics and Quantum Yield. In: Journal of Macromolecular Science: Part A – Chemistry. 16, 2006, p. 1463–1472, doi:10.1080/00222338108063248. Theodor Weyl (Begr.), Josef Houben (Hrsg.), Eugen Müller (Hrsg.), Otto Bayer, Hans Meerwein, Karl Ziegler: Methoden der organischen Chemie. V/3 Fluorine and Chlorine Compounds . Thieme Verlag, Stuttgart 1962, , p. 524. R. Newe, P. Schmidt, K. Friese, B. Hösselbarth: Das Verfahren der strahlenchemischen Chlorierung von Polyvinylchlorid. In: Chemische Technik, 41(4), 1989, p. 141–144. Rolf C. Schulz, Rainer Wolf: Copolymerisation zwischen Vinylencarbonat und Isobutylvinyläther. In: Kolloid-Zeitschrift & Zeitschrift für Polymere. 220, 1967, p. 148–151, doi:10.1007/BF02085908. M. Le Blanc, K. Andrich: Photobromierung des Toluols. In: Zeitschrift für Elektrochemie und angewandte physikalische Chemie. 20.18‐19, 1914, p. 543–547, doi:10.1002/bbpc.19140201804. References Chemical processes Photochemical reactions
Photochlorination
[ "Chemistry" ]
2,733
[ "Chemical process engineering", "Chemical processes", "Photochemical reactions", "nan" ]
54,626,167
https://en.wikipedia.org/wiki/FRD-903
FRD-903 (also known as hexafluoropropylene oxide dimer acid, HFPO-DA, and 2,3,3,3-tetrafluoro-2-(heptafluoropropoxy)propanoic acid) is a chemical compound that is among the class of per- and polyfluoroalkyl substances (PFASs). More specifically, this synthetic petrochemical is also described as a perfluoroalkyl ether carboxylic acid (PFECA) and a fluorointermediate. It is not biodegradable and is not hydrolyzed by water. Production The production process couples two molecules of hexafluoropropylene oxide (HFPO) to give hexafluoropropylene oxide dimer acid fluoride, which is converted to the dimer acid (FRD-903). The ammonium salt of FRD-903 is FRD-902 (ammonium (2,3,3,3-tetrafluoro-2-(heptafluoropropoxy)propanoate)), which is the specific chemical which Chemours has trademarked as part of the GenX process. FRD-903 is used as an aid within the manufacturing process for fluoropolymers by reducing the surface tension in the process, allowing polymer particles to increase in size. The process is completed with chemical treatment or heating to remove the FRD-903 from the final polymer product. It can then be recovered for re-use within the process. Drinking water In 2020, Michigan adopted drinking water standards for five previously unregulated PFAS compounds including HFPO-DA, which has a maximum contaminant level (MCL) of 370 parts per trillion (ppt). In March 2023, the EPA proposed drinking water standards for several PFAS chemicals, which included FRD-903. References Perfluorocarboxylic acids Ethers
FRD-903
[ "Chemistry" ]
424
[ "Organic compounds", "Functional groups", "Ethers" ]
54,626,282
https://en.wikipedia.org/wiki/GenX
GenX is a Chemours trademark name for a synthetic, short-chain organofluorine chemical compound, the ammonium salt of hexafluoropropylene oxide dimer acid (HFPO-DA). It can also be used more informally to refer to the group of related fluorochemicals that are used to produce GenX. DuPont began the commercial development of GenX in 2009 as a replacement for perfluorooctanoic acid (PFOA, also known as C8), in response to legal action due to the health effects and ecotoxicity of PFOA. Although GenX was designed to be less persistent in the environment compared to PFOA, it has proven to be a "regrettable substitute". Its effects may be as harmful as, or even more detrimental than, those of the chemical it was meant to replace. GenX is one of many synthetic organofluorine compounds collectively known as per- and polyfluoroalkyl substances (PFASs). Uses The chemicals are used in products such as food packaging, paints, cleaning products, non-stick coatings, outdoor fabrics and firefighting foam. The chemicals are manufactured by Chemours, a corporate spin-off of DuPont, in Fayetteville, North Carolina. GenX chemicals are used as replacements for PFOA in manufacturing fluoropolymers such as Teflon; they serve as surfactants and processing aids in the fluoropolymer production process, lowering the surface tension and allowing the polymer particles to grow larger. The GenX chemicals are then removed from the final polymer by chemical treatment and heating. Chemistry The manufacturing process combines two molecules of hexafluoropropylene oxide (HFPO) to form HFPO-DA. HFPO-DA is converted into its ammonium salt, which is the official GenX compound. The chemical process uses 2,3,3,3-tetrafluoro-2-(heptafluoropropoxy)propanoic acid (FRD-903) to generate ammonium 2,3,3,3-tetrafluoro-2-(heptafluoropropoxy)propanoate (FRD-902) and heptafluoropropyl 1,2,2,2-tetrafluoroethyl ether (E1). When GenX contacts water, it releases the ammonium group to become HFPO-DA.
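The acid-salt relationship described here can be checked with simple formula arithmetic. From the systematic name in the text, HFPO-DA is C6HF11O3 and its ammonium salt replaces the acidic proton with NH4+ (giving C6H4F11NO3); the formula dictionaries and helper below are our own sketch, using standard atomic masses.

```python
# Molecular-weight sanity check for HFPO-DA (the free acid) and its ammonium
# salt (the GenX compound). Formulas derived from the systematic name in the
# text; atomic masses are standard values rounded to three decimals.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "F": 18.998, "N": 14.007, "O": 15.999}

def molecular_weight(formula):
    """formula: mapping of element symbol -> atom count."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

hfpo_da = {"C": 6, "H": 1, "F": 11, "O": 3}            # free acid, C6HF11O3
genx_salt = {"C": 6, "H": 4, "F": 11, "N": 1, "O": 3}  # ammonium salt, C6H4F11NO3

print(f"HFPO-DA:       {molecular_weight(hfpo_da):.2f} g/mol")
print(f"ammonium salt: {molecular_weight(genx_salt):.2f} g/mol")
```

The ~17 g/mol difference between the two values is exactly the NH4-for-H swap, mirroring the release of the ammonium group in water described above.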
Because HFPO-DA is a strong acid, it deprotonates into its conjugate base, which can then be detected in the water. Pollution In North Carolina, the Chemours Fayetteville plant released GenX compounds into the Cape Fear River, which is a drinking water source for the Wilmington area. A documentary film, The Devil We Know; a fictional dramatization, Dark Waters; and a nonfiction memoir, Exposure: Poisoned Water, Corporate Greed, and One Lawyer's Twenty-Year Battle Against DuPont by Robert Bilott, subsequently publicized the discharges, leading to controversy over possible health effects. HFPO-DA was first reported to be in the Cape Fear River in 2012, and an additional eleven polyfluoroalkyl substances (PFAS) were reported in 2014. These results were published as a formal paper in 2015. The following year, North Carolina State University and the EPA jointly published a study demonstrating HFPO-DA and other PFAS were present in the Wilmington-area drinking water sourced from the Cape Fear River. In September 2017, the North Carolina Department of Environmental Quality (NCDEQ) ordered Chemours to halt discharges of all fluorinated compounds into the river. Following a chemical spill one month later, NCDEQ cited Chemours for violating provisions in its National Pollutant Discharge Elimination System wastewater discharge permit. In November 2017, the Brunswick County Government filed a federal lawsuit alleging that DuPont failed to disclose research regarding potential risks from the chemical. In spring 2018, Cape Fear River Watch sued Chemours for Clean Water Act violations and sued the NCDEQ for inaction. After Cape Fear River Watch's suits were filed, NCDEQ filed a suit against Chemours; all three lawsuits culminated in a consent order. The order, signed by all three parties, requires Chemours to drastically reduce PFAS-containing water discharges and air emissions, and to provide sampling and filtration for well owners with contaminated wells, among other requirements.
All materials relating to the status of the consent order requirements must be published to a public website, https://www.chemours.com/en/about-chemours/global-reach/fayetteville-works/compliance-testing. One requirement under the order was for non-targeted analysis, which found 257 "unknown" PFAS being released from Fayetteville Works, aside from the 100 "known" PFAS which can be quantified. Cape Fear River Watch published that its research of the NC DEQ permit file indicates that the first PFAS byproducts were likely released from Fayetteville Works in 1976 with the production of Nafion, which uses HFPO (otherwise known as GenX) in its production and creates byproducts termed Nafion Byproducts 1 through 5, some of which have been found in the blood of Cape Fear area residents. In 2020 Michigan adopted drinking water standards for five previously unregulated PFAS compounds, including HFPO-DA, which has a maximum contaminant level (MCL) of 370 parts per trillion (ppt). Two previously regulated PFAS compounds, PFOA and PFOS, had their acceptable limits lowered to 8 ppt and 16 ppt, respectively. In 2022 Virginia's Roanoke River was found to be contaminated by GenX at levels reported to be 1.3 million parts per trillion. Health effects GenX has been shown to cause a variety of adverse health effects. While it was originally marketed as a safer alternative to legacy PFAS, research suggests that GenX poses significant health risks similar to those associated with its predecessor. Liver and kidney toxicity Studies have demonstrated that the liver is especially vulnerable to GenX exposure. Animal research has shown that even low doses of GenX can cause liver enlargement and damage. Similarly, the kidneys are also sensitive to GenX, with chronic exposure leading to renal toxicity. These effects highlight the potential dangers of prolonged exposure to even small amounts of the chemical. Cancer risk There is increasing concern about the carcinogenic potential of GenX.
Research in animal models has linked exposure to various cancers, including liver, pancreatic, and testicular cancers. Although data on humans are limited, the results from these studies have prompted further investigation into the possible cancer risks posed by GenX. Neurotoxicity and developmental effects Two 2023 studies have identified potential neurotoxic effects of GenX, particularly during critical developmental windows. Pre-differentiation exposure of human dopaminergic-like neurons (SH-SY5Y cells) to low-dose GenX (0.4 and 4 μg/L) resulted in persistent alterations in neuronal characteristics. The study reported significant changes in nuclear morphology, chromatin arrangement, and increased expression of the repressive marker H3K27me3, which is associated with neurodegeneration. These changes were accompanied by disruptions in mitochondrial function and an increase in intracellular calcium levels, which are critical markers of neuronal health. Notably, GenX exposure led to altered expression of α-synuclein, a protein closely linked to the development of Parkinson's disease. The findings suggest that developmental exposure to GenX may pose a long-term risk for neurodegenerative disorders, particularly Parkinson's disease, due to its impact on key neuronal processes. Recent research has also underscored the potential for GenX to disrupt glucose and lipid metabolism during critical developmental periods. A 2021 study published in Environment International investigated the effects of prenatal exposure to GenX in Sprague-Dawley rats, revealing significant maternal and neonatal adverse outcomes, such as increased maternal liver weight, altered lipid profiles, and reduced glycogen accumulation in neonatal livers, resulting in hypoglycemia. Additionally, neonatal mortality and lower birth weights were observed at higher doses of GenX.
A 2024 study in Science of the Total Environment expanded upon these findings in mice, demonstrating that gestational exposure to GenX led to increased liver weight, elevated liver enzyme levels (e.g., ALT and AST), and decreased glycogen storage capacity in the liver. Disruptions in gut flora and the intestinal mucosal barrier were also noted, further linking GenX exposure to hepatotoxicity. Both studies revealed significant alterations in gene expression, particularly in pathways regulating glucose and lipid metabolism. Genes such as CYP4A14, Sult2a1, and Igfbp1 were upregulated, which may have long-term implications for metabolic health. These findings suggest that gestational GenX exposure could trigger metabolic disorders and liver toxicity, posing potential health risks for populations exposed to GenX through contaminated water sources. Immune system and metabolic effects Studies have demonstrated that exposure to GenX, a replacement for long-chain PFAS chemicals, can lead to complex health effects. GenX has been linked to alterations in immune responses and metabolic processes, as observed in both human and animal studies. For instance, in a study using Monodelphis domestica, GenX exposure upregulated genes associated with inflammation and fatty acid transport. Another study on mice showed that GenX suppressed innate immune responses to inhaled carbon black nanoparticles, while simultaneously promoting lung cell proliferation, including macrophages and epithelial cells. These findings suggest that GenX may have immunosuppressive effects, potentially increasing susceptibility to respiratory agents while encouraging cellular growth in the lungs, raising concerns about respiratory health risks. This research highlights the potential health implications of GenX exposure, particularly its impact on immune system function and cell proliferation, which may contribute to both immune suppression and adverse health outcomes like inflammation or respiratory diseases.
These findings raise concerns about the long-term impact on human health, especially in vulnerable populations. Drinking water health advisories In June 2022 the U.S. Environmental Protection Agency (EPA) published drinking water health advisories, which are non-regulatory technical documents, for GenX and PFBS. The lifetime health advisories and health effects support documents assist federal, state, tribal, and local officials and managers of drinking water systems in protecting public health when these chemicals are present in drinking water. EPA has listed recommended steps that consumers may take to reduce possible exposure to GenX and other PFAS chemicals. See also Perfluorinated alkylated substances (PFAS) Timeline of events related to per- and polyfluoroalkyl substances (PFAS) References Chemical processes Chemours DuPont products Pollutants
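The unit arithmetic behind the advisory levels above can be sketched briefly. This is an illustrative calculation only, not part of any agency guidance, and it assumes the usual mass-based convention that 1 ppt in dilute water corresponds to about 1 ng/L:

```python
# Illustrative only: comparing a measured GenX (HFPO-DA) concentration against
# the Michigan MCL of 370 ppt cited above. Assumes the mass-based convention
# that 1 part per trillion (ppt) in dilute water ~ 1 ng/L.

MICHIGAN_HFPO_DA_MCL_PPT = 370.0  # maximum contaminant level, in ppt

def ppt_to_ng_per_liter(ppt: float) -> float:
    """Convert a mass-based ppt concentration in dilute water to ng/L."""
    return ppt  # 1 ppt ~ 1 ng of solute per litre of water

def exceedance_factor(measured_ppt: float,
                      mcl_ppt: float = MICHIGAN_HFPO_DA_MCL_PPT) -> float:
    """How many times a measurement exceeds the MCL (values > 1 exceed it)."""
    return measured_ppt / mcl_ppt

# The Roanoke River level reported above: 1.3 million ppt.
print(f"Exceeds Michigan MCL by ~{exceedance_factor(1_300_000):.0f}x")
```

On these assumptions, the reported Roanoke River level would exceed the Michigan HFPO-DA limit by roughly 3,500-fold.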
GenX
[ "Chemistry" ]
2,301
[ "Chemical process engineering", "Chemical processes", "nan" ]
54,626,843
https://en.wikipedia.org/wiki/Hinodeyama%20Tile%20Kiln%20Site
The Hinodeyama Tile Kiln Site is an archaeological site consisting of the remains of seven Nara period kilns located in what is now the town of Shikama, Miyagi Prefecture in the Tōhoku region of northern Japan. It has been protected by the central government as a National Historic Site since 1976. Overview As the imperial government extended control over Mutsu Province in the 8th century AD, a number of fortified administrative centers and Buddhist temples were built in the area centered on Taga Castle. One feature of the buildings at these sites was the use of tiled roofs, a symbol of continental culture and of the advanced state of the central administration. The Hinodeyama kilns are among several that have been found within what is now Miyagi Prefecture. These kilns were located in hilly land, near sources of clay and fuel; however, from the design patterns on shards found at the site, it can be determined that the tiles from the Hinodeyama kilns were used at Taga Castle, 40 kilometers to the south, among other areas. The site is contemporary with the Daikichiyama tile kiln ruins, and both kilns supplied the various government administrative complexes (such as at the Higashiyama Kanga ruins) located around the province. There are seven kilns at the site, six of which were used for roof tile production and one of which was used to make Sue pottery. The kilns are built underground into the slope of a hill and are a stepless form of the traditional anagama kiln. Each has a length of approximately five meters and a width and height of one meter. Shards found at the location indicate that cylindrical, curved and flat tiles were all produced. The site was backfilled after excavation and is now a grassy slope with a stone monument marking the location. The site is about 20 minutes by car from Nishi-Furukawa Station on the JR East Rikuu East Line.
See also List of Historic Sites of Japan (Miyagi) References External links Miyagi Prefecture official site Shikama, Miyagi Nara period Japanese pottery kiln sites History of Miyagi Prefecture Historic Sites of Japan Mutsu Province
Hinodeyama Tile Kiln Site
[ "Chemistry", "Engineering" ]
455
[ "Kilns", "Japanese pottery kiln sites" ]
54,627,441
https://en.wikipedia.org/wiki/Buglossoporus%20eucalypticola
Buglossoporus eucalypticola is a species of poroid fungus in the family Fomitopsidaceae. It was described as a new species in 2016 by mycologists Mei-Ling Han, Bao-Kai Cui, and Yu-Cheng Dai. The type specimen was collected in the Danzhou Tropical Botanical Garden, in Danzhou, China. It was growing on a dead Eucalyptus tree. The fruit body has a fan-shaped or semicircular cap that projects up to , wide, and thick at its base. The surface colour when fresh is peach to brownish orange, but when dry becomes clay-pink to cinnamon. The pore surface on the cap underside is initially white before becoming pinkish buff or clay-buff to dark brown. B. eucalypticola causes a brown rot in its host. References Fungi of China Fomitopsidaceae Fungi described in 2016 Taxa named by Yu-Cheng Dai Taxa named by Bao-Kai Cui Fungus species
Buglossoporus eucalypticola
[ "Biology" ]
203
[ "Fungi", "Fungus species" ]
54,629,028
https://en.wikipedia.org/wiki/HD%20131399
HD 131399 is a star system in the constellation of Centaurus. Based on the system's electromagnetic spectrum, it is located around 350 light-years (107.9 parsecs) away. The total apparent magnitude is 7.07, but because of interstellar dust between it and the Earth, it appears 0.22 ± 0.09 magnitudes dimmer than it should be. The brightest star is a young A-type main-sequence star, and further out are two lower-mass stars. A Jupiter-mass planet or a low-mass brown dwarf was once thought to be orbiting the central star, but this has been ruled out. Stellar system The brightest star in the HD 131399 system is designated HD 131399 A. Its spectral type is A1V, and it is 2.08 times as massive as the Sun. The two lower-mass stars are designated HD 131399 B and C, respectively. B is a G-type main-sequence star, while HD 131399 C is a K-type main-sequence star. Both stars are less massive than the Sun. HD 131399 B and C are located very close to each other, and the two orbit each other at about 10 au. In turn, the B-C pair orbits the central star A at a distance of 349 astronomical units (au). This orbit takes about 3,600 years to complete, and it has an eccentricity of about 0.13. The entire system is about 21.9 million years old. One paper has reported that HD 131399 A has a companion in an inclined 10-day orbit with a semi-major axis of . HD 131399 A has been described as a "nascent Am star"; although it has a very slow projected rotation rate and would be expected to show chemical peculiarities, its spectrum is relatively normal, possibly due to its young age. Claims of a planetary system The claimed discovery of a massive planet, named HD 131399 Ab, was announced in a July 2016 paper in the journal Science. The object was imaged using the SPHERE imager of the Very Large Telescope at the European Southern Observatory, located in the Atacama Desert of Chile.
It was thought to be a T-type object with a mass of , but its orbit would have been unstable, causing it to be ejected between the primary's red giant phase and white dwarf phase. This was the first exoplanet candidate to be discovered by SPHERE. The image was created from two separate SPHERE observations: one to image the three stars and one to detect the faint planet. After its discovery, the team unofficially named the system "Scorpion-1" and the planet "Scorpion-1b", after the survey that prompted its discovery, the Scorpion Planet Survey (principal investigator: Daniel Apai). In May 2017, observations made by the Gemini Planet Imager and including a reanalysis of the SPHERE data suggest that this target is, in fact, a background star. This object's spectrum seems to be like that of a K-type or M-type dwarf, not a T-type object as first thought. It also initially appeared to be associated with HD 131399, but this was because of its unusually high proper motion (in the top 4% fastest-moving stars). After subsequent data published in 2022 confirmed that the object is a background star, the paper announcing the putative discovery was retracted. References Notes Centaurus Hypothetical planetary systems Triple star systems A-type main-sequence stars G-type main-sequence stars K-type main-sequence stars Durchmusterung objects 131399 072940
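As a quick check on the distance quoted above, the light-year/parsec conversion can be sketched as follows. This is a generic unit conversion, not taken from the article's sources:

```python
# Generic unit-conversion sketch for the HD 131399 distance quoted above.
LY_PER_PARSEC = 3.2616  # light-years per parsec (approximate)

def ly_to_pc(light_years: float) -> float:
    """Convert light-years to parsecs."""
    return light_years / LY_PER_PARSEC

def pc_to_ly(parsecs: float) -> float:
    """Convert parsecs to light-years."""
    return parsecs * LY_PER_PARSEC

print(f"350 ly ~ {ly_to_pc(350):.0f} pc")
print(f"107.9 pc ~ {pc_to_ly(107.9):.0f} ly")
```

Note that 107.9 pc works out to about 352 light-years, consistent with the rounded 350 ly figure in the text.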
HD 131399
[ "Astronomy" ]
761
[ "Centaurus", "Constellations" ]
54,629,293
https://en.wikipedia.org/wiki/Single%20cell%20epigenomics
Single cell epigenomics is the study of epigenomics (the complete set of epigenetic modifications on the genetic material of a cell) in individual cells by single cell sequencing. Since 2013, methods have been developed including whole-genome single-cell bisulfite sequencing to measure DNA methylation, whole-genome ChIP-sequencing to measure histone modifications, whole-genome ATAC-seq to measure chromatin accessibility, and chromosome conformation capture. Single-cell DNA methylome sequencing Single-cell DNA methylome sequencing quantifies DNA methylation. This is similar to single cell genome sequencing, but with the addition of a bisulfite treatment before sequencing. Forms include whole genome bisulfite sequencing and reduced representation bisulfite sequencing. Single-cell ATAC-seq ATAC-seq stands for Assay for Transposase-Accessible Chromatin with high throughput sequencing. It is a technique used in molecular biology to identify accessible DNA regions, equivalent to DNase I hypersensitive sites. Single cell ATAC-seq has been performed since 2015, using methods ranging from FACS sorting and microfluidic isolation of single cells to combinatorial indexing. In initial studies, the method was able to reliably separate cells based on their cell types, uncover sources of cell-to-cell variability, and show a link between chromatin organization and cell-to-cell variation. Single-cell ChIP-seq ChIP-sequencing, also known as ChIP-seq, is a method used to analyze protein interactions with DNA. ChIP-seq combines chromatin immunoprecipitation (ChIP) with massively parallel DNA sequencing to identify the binding sites of DNA-associated proteins. In epigenomics, this is often used to assess histone modifications (such as methylation). ChIP-seq is also often used to determine transcription factor binding sites. Single-cell ChIP-seq is extremely challenging due to background noise caused by nonspecific antibody pull-down, and only one study so far has performed it successfully.
This study used a droplet-based microfluidics approach, and the low coverage required thousands of cells to be sequenced in order to assess cellular heterogeneity. Single-cell Hi-C Chromosome conformation capture techniques (often abbreviated to 3C technologies or 3C-based methods) are a set of molecular biology methods used to analyze the spatial organization of chromatin in a cell. These methods quantify the number of interactions between genomic loci that are nearby in three-dimensional space, even if the loci are separated by many kilobases in the linear genome. Currently, 3C methods start with a similar set of steps, performed on a sample of cells. First, the cells are cross-linked, which introduces bonds between proteins, and between proteins and nucleic acids, that effectively "freeze" interactions between genomic loci. The genome is then digested into fragments through the use of restriction enzymes. Next, proximity-based ligation is performed, creating long regions of hybrid DNA. Lastly, the hybrid DNA is sequenced to determine genomic loci that are in close proximity to each other. Single-cell Hi-C is a modification of the original Hi-C protocol, which is an adaptation of the 3C method, that allows the proximity of different genomic regions to be determined in a single cell. This method was made possible by performing the digestion and ligation steps in individual nuclei, as opposed to the original Hi-C protocol, where ligation was performed after cell lysis in a pool containing crosslinked chromatin complexes. In single cell Hi-C, after ligation, single cells are isolated and the remaining steps are performed in separate compartments, and hybrid DNA is tagged with a compartment-specific barcode. High-throughput sequencing is then performed on the pool of the hybrid DNA from the single cells.
Although the recovery rate of sequenced interactions (hybrid DNA) can be as low as 2.5% of potential interactions, it has been possible to generate three-dimensional maps of entire genomes using this method. Additionally, advances have been made in the analysis of Hi-C data, allowing for the enhancement of Hi-C datasets to generate even more accurate and detailed contact maps and 3D models. See also Single cell sequencing Epigenomics Chromosome conformation capture References Epigenetics DNA sequencing Genomics Cell biology
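The contact maps described above are, at heart, binned matrices of ligation-pair counts. A toy sketch (single chromosome, made-up coordinates and bin size, not any published pipeline) of that aggregation step:

```python
# Toy sketch: aggregate Hi-C ligation pairs (pos1, pos2) on one chromosome
# into a symmetric, binned contact matrix. Coordinates and bin size here are
# hypothetical; real pipelines also handle read mapping, filtering,
# and normalization.

def contact_matrix(pairs, chrom_length, bin_size):
    """Count ligation pairs per pair of genomic bins (symmetric matrix)."""
    n_bins = chrom_length // bin_size + 1
    matrix = [[0] * n_bins for _ in range(n_bins)]
    for pos1, pos2 in pairs:
        i, j = pos1 // bin_size, pos2 // bin_size
        matrix[i][j] += 1
        if i != j:
            matrix[j][i] += 1  # contacts are undirected
    return matrix

# Three hypothetical pairs on a 1 Mb chromosome, binned at 100 kb.
pairs = [(50_000, 250_000), (60_000, 240_000), (900_000, 910_000)]
m = contact_matrix(pairs, chrom_length=1_000_000, bin_size=100_000)
print(m[0][2])  # 2 contacts between the first and third 100 kb bins
```

Downstream tools then normalize such matrices and fit 3D models to them.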
Single cell epigenomics
[ "Chemistry", "Biology" ]
916
[ "Cell biology", "Molecular biology techniques", "DNA sequencing" ]
54,630,202
https://en.wikipedia.org/wiki/NGC%204699
NGC 4699 is an intermediate spiral galaxy located in the constellation Virgo. It is located at a distance of about 65 million light years from Earth, which, given its apparent dimensions, means that NGC 4699 is about 85,000 light years across. It was discovered by William Herschel in 1786. It is a member of the NGC 4699 Group of galaxies, which is a member of the Virgo II Groups, a series of galaxies and galaxy clusters strung out from the southern edge of the Virgo Supercluster. Characteristics NGC 4699 is a Seyfert-like galaxy with very weak nuclear emission. The galaxy features a bar that is 0.41 arcminutes long and a ring with a diameter of 1.95 arcminutes. It also features a large bulge, which accounts for 11.3% of the stellar mass of the galaxy, and a large disky pseudobulge, which is larger than the strong bar. The disk within the bulge features tightly wrapped spiral arms and contains numerous HII regions. The galaxy has an extended type-III outer disk with low central surface brightness, which is thicker than the inner disk. Supernovae Three supernovae have been observed in NGC 4699: SN 1948A (type unknown, mag. 17) was discovered by Edwin Hubble on 5 March 1948. SN 1983K (type II-P, mag. 17) was discovered by Marina Wischnjewsky on 6 June 1983. The supernova brightened from magnitude 17 on discovery to magnitude 14 on 10 June 1983. It had a plateau-shaped light curve, and its spectra featured a progressive violet shift, which was explained by the presence of a preexisting outer shell of material around the progenitor of the supernova. SN 2024muv (type Ia, mag. 14.5) was discovered by the Zwicky Transient Facility on 26 June 2024. Nearby galaxies NGC 4699 belongs to the NGC 4697 group according to Makarov and Karachentsev. Other members of the group include NGC 4697, NGC 4674, NGC 4700, NGC 4731, NGC 4742, NGC 4775, NGC 4781, NGC 4784, NGC 4790, NGC 4813, NGC 4948 and NGC 4958.
It belongs to the Virgo II groups, an extension of the Virgo Cluster. Gallery References External links Intermediate spiral galaxies Virgo (constellation) 4699 UGCA objects 43321
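The size figure above follows from the small-angle relation between distance and apparent (angular) size. A back-of-the-envelope sketch, where the roughly 4.5 arcminute angular diameter is inferred from the article's own quoted numbers rather than taken from a catalogue:

```python
import math

# Small-angle relation: physical size = distance * angle (in radians).
# The 4.5 arcmin angular diameter is inferred from the quoted distance and
# size, not a catalogue value.

def physical_size_ly(distance_ly: float, angular_size_arcmin: float) -> float:
    """Physical extent implied by a distance and an angular size."""
    angle_rad = math.radians(angular_size_arcmin / 60.0)
    return distance_ly * angle_rad

size = physical_size_ly(65_000_000, 4.5)
print(f"~{size:,.0f} light-years across")  # on the order of 85,000 ly
```

At 65 million light-years, each arcminute of apparent size corresponds to roughly 19,000 light-years.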
NGC 4699
[ "Astronomy" ]
528
[ "Virgo (constellation)", "Constellations" ]
54,631,398
https://en.wikipedia.org/wiki/NGC%207209
NGC 7209 is an open cluster in the constellation Lacerta. It was discovered by William Herschel on 19 October 1787. The cluster lies 3,810 light years away from Earth. It has been suggested, based on the reddening of the cluster, that there is another cluster at a distance of 2,100 light years projected in front of one lying 3,800 light years away; however, further photometric studies of the cluster did not support that claim. The cluster is made up of 150 stars with magnitudes from 9 to 15 within a tidal radius of 9 parsecs (about 30 light years). Three of its members are probably Delta Scuti variables. Another member of the cluster is the variable SS Lacertae, a binary star with a 14.4-day period whose magnitude stopped varying in the middle of the 20th century. This has been attributed to the presence of a third star with a period of 679 days, whose perturbations change the line of sight. The nodal cycle is found to be about 600 years, within which occur two eclipsing phases, each lasting about 100 years. References External links 7209 Lacerta Open clusters
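The magnitude 9 to 15 spread quoted above corresponds to a large range in brightness. The standard Pogson relation (generic astronomy, not taken from the article's references) makes this concrete:

```python
# Generic Pogson relation: each 5 magnitudes is a factor of 100 in flux.

def flux_ratio(mag_faint: float, mag_bright: float) -> float:
    """Brightness ratio implied by a magnitude difference."""
    return 10 ** (0.4 * (mag_faint - mag_bright))

# Brightest (mag 9) vs faintest (mag 15) cluster members quoted above.
print(f"~{flux_ratio(15, 9):.0f}x in brightness")
```

A 6-magnitude spread thus means the brightest cluster members are roughly 250 times brighter than the faintest counted ones.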
NGC 7209
[ "Astronomy" ]
240
[ "Lacerta", "Constellations" ]
54,631,939
https://en.wikipedia.org/wiki/Svetlana%20%28company%29
PJSC Svetlana () is a company based in Saint Petersburg, Russia. It is primarily involved in the research, design, and manufacturing of electronic and microelectronic instruments. Svetlana is part of Ruselectronics. The name of the company is said to originate from the words for 'light of an incandescent lamp' (СВЕТ ЛАмпы НАкаливания). History The company was established in 1889 as the Ya. M. Aivaz () Factory. Svetlana was a major producer of vacuum tubes. In 1937, the Soviet Union purchased a tube assembly line from RCA, including production licenses and initial staff training, and installed it at the St Petersburg plant. US-licensed tubes have been produced there since then. Since 2001, New Sensor Corp. has held the rights to the Svetlana vacuum tube brand in the US and Canada. The New Sensor tubes are actually manufactured at the Expo-pul factory (former Reflektor plant) in Saratov. Tubes manufactured by Svetlana in Saint Petersburg still bear the "winged С" (Cyrillic S) logo but no longer the name Svetlana. In 2017 the company announced a 3-billion-ruble modernization plan. Products The Svetlana Association produces a variety of electronic and microelectronic instruments, including transmitting and modulator tubes for all frequency ranges; X-band broadband passive TR limiters; Ku-band broadband TR tubes; klystron amplifiers; X-ray tubes; portable X-ray units for medicine and industry; high-frequency fast-response thyristors; transistors; integrated microcircuits; microcomputers; microcontrollers; microcalculators; ultrasonic delay lines; receiving tubes; and process equipment for the manufacture of electronic engineering items. Vacuum tubes currently in production include the 6550, 6L6, EL34, and KT88. Directors 1961-1969 — Kaminsky I. I. 1969-1988 — Filatov O. V. 1988-1991 — Khizha G. S. 1991-1993 — Shchukin Gennady Anatolyevich 1993-1994 — Bashkatov V. E. 1994-2014 — Popov V. V. since 2014 — Gladkov N. Y.
Awards 1931 – Order of Lenin (№8) for the implementation of the production plan of the first five-year plan in two and a half years. 1937 – diploma and "Grand Prix" at the International Exhibition of Art and Technology in Paris for the powerful generator lamps GDO-15 and GKO-10 manufactured by Svetlana. See also 6P1P vacuum tube Russian tube designations 7400 series – Second sources in Europe and the Eastern Bloc Soviet integrated circuit designation References External links Official website Electronics companies of Russia Manufacturing companies of Russia Companies based in Saint Petersburg Ruselectronics Vacuum tubes Manufacturing companies established in 1889 Electronics companies of the Soviet Union Companies nationalised by the Soviet Union Ministry of the Electronics Industry (Soviet Union) Russian brands
Svetlana (company)
[ "Physics" ]
642
[ "Vacuum tubes", "Vacuum", "Matter" ]
54,632,107
https://en.wikipedia.org/wiki/Haloferacaceae
Haloferacaceae is a family of halophilic, chemoorganotrophic or heterotrophic archaea within the order Haloferacales. The type genus of this family is Haloferax. Its biochemical characteristics are the same as those of the order Haloferacales. The name Haloferacaceae is derived from the Latin term Haloferax, referring to the type genus of the family, and the suffix "-ceae", an ending used to denote a family. Together, Haloferacaceae refers to a family whose nomenclatural type is the genus Haloferax. Taxonomy and molecular signatures As of 2021, Haloferacaceae contains 10 validly published genera. This family can be molecularly distinguished from other Halobacteria by the presence of five conserved signature proteins (CSPs) and four conserved signature indels (CSIs) present in the following proteins: thermosome, ribonuclease BN and hypothetical proteins. Phylogeny The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and National Center for Biotechnology Information (NCBI). See also List of Archaea genera References Halobacteria
Haloferacaceae
[ "Biology" ]
258
[ "Archaea", "Archaea stubs" ]
54,632,356
https://en.wikipedia.org/wiki/Halarchaeum
Halarchaeum (common abbreviation Hla.) is a genus of halophilic archaea in the family of Halobacteriaceae. Phylogeny The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and National Center for Biotechnology Information (NCBI). See also List of Archaea genera References Archaea genera Taxa described in 2010 Euryarchaeota
Halarchaeum
[ "Biology" ]
90
[ "Archaea", "Archaea stubs" ]
54,632,360
https://en.wikipedia.org/wiki/Halomarina
Halomarina (common abbreviation Hmr.) is a genus of halophilic archaea in the family of Halobacteriaceae. Phylogeny The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and National Center for Biotechnology Information (NCBI). See also List of Archaea genera References External links Archaea genera Taxa described in 2011 Halobacteria
Halomarina
[ "Biology" ]
90
[ "Archaea", "Archaea stubs" ]
54,632,363
https://en.wikipedia.org/wiki/Halonotius
Halonotius (common abbreviation Hns.) is a genus of halophilic archaea in the family of Halorubraceae. Phylogeny The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and National Center for Biotechnology Information (NCBI). See also List of Archaea genera References Archaea genera Taxa described in 2010 Euryarchaeota
Halonotius
[ "Biology" ]
89
[ "Archaea", "Archaea stubs" ]
54,632,364
https://en.wikipedia.org/wiki/Halogranum
Halogranum (common abbreviation Hgn.) is a genus of halophilic archaea in the family of Haloferacaceae. Phylogeny The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and National Center for Biotechnology Information (NCBI). See also List of Archaea genera References Archaea genera Taxa described in 2010 Euryarchaeota
Halogranum
[ "Biology" ]
89
[ "Archaea", "Archaea stubs" ]
54,632,366
https://en.wikipedia.org/wiki/Halorientalis
Halorientalis is a genus of archaea in the family of Haloarculaceae. Phylogeny The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and National Center for Biotechnology Information (NCBI). See also List of Archaea genera References Archaea genera Taxa described in 2011
Halorientalis
[ "Biology" ]
75
[ "Archaea", "Archaea stubs" ]
54,632,397
https://en.wikipedia.org/wiki/Haloarcula%20marismortui
Haloarcula marismortui is a halophilic archaeon isolated from the Dead Sea. Morphology Haloarcula marismortui is a Gram-negative archaeon with a cell size of 1.0–2.0 × 2.0–3.0 μm (diameter × length). Cells are pleomorphic, appearing as short rods to rectangles. H. marismortui is motile via archaella and possesses a cell membrane that consists of triglycosyl diether lipids and glycoproteins. Metabolism H. marismortui is an aerobic chemoorganotroph that utilizes glycolysis and a modified Entner–Doudoroff pathway for the breakdown of nutrients. H. marismortui utilizes energy sources such as glucose, sucrose, fructose, glycerol, malate, acetate, and succinate while producing nitrogen, metabolic carbon, and acid as byproducts. It can also grow anaerobically by using nitrate as an electron acceptor. Genomic properties The genome of H. marismortui is organized into nine circular replicons, in which individual G+C content varies from 54 to 62%. H. marismortui contains 4,366 genes and 4,274,642 base pairs (strain ATCC 43049). H. marismortui has one of the only two prokaryotic large ribosomal subunits which have so far been crystallized; the other is that of Deinococcus radiodurans. Ecology Habitat Haloarcula marismortui is considered an extreme halophile and has been isolated from the Dead Sea. H. marismortui has a temperature optimum between 40 and 50 °C and a pH range of 5.5–8.0. Growth can occur at a wide range of NaCl concentrations spanning 5-35%, with optimal growth between 15 and 25%. The unusually large number of environmental regulatory genes found within the H. marismortui genome suggests higher fitness in extreme environments compared to other species of Halobacterium. Adaptability H. marismortui encodes a large family of multi-domain proteins (49) that act as sensors and regulators, including the opsin proteins SopI, SopII, Hop, and Bop. These proteins help maintain physiological ion concentrations, facilitate phototaxis, and generate chemical energy via a proton gradient. H.
marismortui is believed to possess over 100 ecoparalogs, genes that perform the same function under different environmental conditions, which help maintain its environmental adaptability. Multiple genes were found to play a role in temperature tolerance (rrnA/B/C) and cell motility (FlaA2 and FlaB). This large family of environmental sensors and regulators allows H. marismortui to survive in highly variable environmental conditions. High environmental adaptability makes H. marismortui an ideal candidate for future bioremediation research, with the potential of utilizing its environmental sensory genes in environmental cleanup. References Archaea genera Archaea described in 1940
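The G+C figures quoted above are simple base-composition statistics. A minimal sketch of the calculation, run here on a hypothetical toy sequence rather than H. marismortui data:

```python
# Minimal G+C content calculation, shown on a hypothetical toy sequence.

def gc_content(seq: str) -> float:
    """Percentage of G and C bases in a DNA sequence."""
    seq = seq.upper()
    gc = sum(1 for base in seq if base in "GC")
    return 100.0 * gc / len(seq)

print(gc_content("ATGCGCGCAT"))  # 60.0
```

Applied per replicon, this is how the reported 54 to 62% spread across the nine replicons would be computed.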
Haloarcula marismortui
[ "Biology" ]
666
[ "Archaea", "Archaea stubs" ]
54,632,425
https://en.wikipedia.org/wiki/Haloferax%20lucentense
Haloferax lucentense is a halophilic archaeon in the family of Haloferacaceae. References Archaea genera Archaea described in 2004
Haloferax lucentense
[ "Biology" ]
34
[ "Archaea", "Archaea stubs" ]
54,632,447
https://en.wikipedia.org/wiki/Haloplanus%20natans
Haloplanus natans is a halophilic archaeon in the family of Halobacteriaceae and the type species of the genus Haloplanus. It was isolated from controlled mesocosms with a mixture of water from the Dead Sea and the Red Sea. References External links Type strain of Haloplanus natans at BacDive - the Bacterial Diversity Metadatabase Euryarchaeota Archaea described in 2007
Haloplanus natans
[ "Biology" ]
88
[ "Archaea", "Archaea stubs" ]
54,632,449
https://en.wikipedia.org/wiki/Halorhabdus%20utahensis
Halorhabdus utahensis is a halophilic archaeon isolated from the Great Salt Lake in Utah. Cell structure and metabolism Halorhabdus utahensis (salt-loving rod) is a motile, Gram-negative, extremely halophilic archaeon that forms red, circular colonies. It grows at temperatures between 17 and 55 °C, with optimal growth occurring at 50 °C. It can also grow over a pH range of 5.5–8.5, with the optimal pH value between 6.7 and 7.1. Further, with its extremely high salinity optimum of 27% NaCl, Halorhabdus has one of the highest reported salinity optima of any living organism. The cells of H. utahensis are extremely pleomorphic, exhibiting any shape from irregular coccoid or ellipsoid to triangular, club-shaped or rod-shaped forms. The rod-shaped and ellipsoid cells are 2-10 by 0.5-1 μm and 1-2 by 1 μm in size, respectively, and the spherical cells have a diameter of approximately 1 μm. The archaeon uses only a limited range of substrates, such as glucose, xylose, and fructose, for growth, and is unique in its inability to utilize yeast extract or peptone. Other substances that did not stimulate the organism's growth include organic acids, amino acids, alcohols, glycogen, and starch. References Further reading Scientific journals Scientific books Euryarchaeota Archaea described in 2000
Halorhabdus utahensis
[ "Biology" ]
325
[ "Archaea", "Archaea stubs" ]
54,632,450
https://en.wikipedia.org/wiki/Halorhabdus%20tiamatea
Halorhabdus tiamatea is a halophilic archaeon isolated from the Red Sea. With its extremely high salinity optimum of 27% NaCl, Halorhabdus has one of the highest reported salinity optima of any living organism. Genome structure The genome of H. tiamatea was sequenced in August 2014. The G + C content of its DNA is estimated to be 64%. References Further reading Archaea genera Archaea described in 2008
Halorhabdus tiamatea
[ "Biology" ]
100
[ "Archaea", "Archaea stubs" ]