Dataset columns:
id: int64 (39 to 79M)
url: string (lengths 31 to 227)
text: string (lengths 6 to 334k)
source: string (lengths 1 to 150)
categories: list (lengths 1 to 6)
token_count: int64 (3 to 71.8k)
subcategories: list (lengths 0 to 30)
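As a minimal sketch of how one row of this dump could be represented (assuming Python; the field names come from the column list above, the dataclass layout is an assumed representation, and the example values are copied from the first record below, with the text truncated):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Row:
    # Field names follow the column summary above; types are inferred.
    id: int                   # int64 identifier
    url: str                  # source URL (31-227 chars in this dump)
    text: str                 # article text (6-334k chars)
    source: str               # article title / source name
    categories: List[str]     # 1-6 top-level category labels
    token_count: int          # 3-71.8k tokens
    subcategories: List[str]  # 0-30 finer-grained labels

# Example mirroring the first record below (text truncated).
example = Row(
    id=78_117_246,
    url="https://en.wikipedia.org/wiki/Tactical%20deception%20in%20animals",
    text="Tactical deception in animals, also called functional deception, ...",
    source="Tactical deception in animals",
    categories=["Biology"],
    token_count=1828,
    subcategories=["Behavioural sciences", "Ethology", "Behavior"],
)
```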
78,117,246
https://en.wikipedia.org/wiki/Tactical%20deception%20in%20animals
Tactical deception in animals, also called functional deception, is the use by an animal of signals or displays from its normal repertoire to mislead or deceive another individual. Definition Tactical or functional deception is the use of signals or displays from an animal's normal repertoire to mislead or deceive another individual. Some researchers limit this term to intraspecific behaviour, meaning that it occurs between members of the same species. Relation to cognitive ability Tactical deception has been used as a measure of advanced social cognition, as it relates to brain function. Primates have larger brains, relative to body size, than those of any other mammals except dolphins, and this size difference is mainly due to an enlarged neocortex. Research has suggested that the expansion of the primate brain has been favoured by selection in highly social species. One study used 18 species with varying brain volumes (three strepsirrhines, four New World monkeys, seven Old World monkeys, and four ape species). The study used the frequency of tactical deception as a measure of social cognition, and it found a strong correlation between the use of social deception and the size of the neocortex. Taxonomic range Cephalopods Among cephalopods, some colour changes in cuttlefish might be called tactical deception, as these animals sometimes present entirely different displays to two different observers. When a male cuttlefish courts a female in the presence of other males, he displays a male pattern facing the female (courtship), and a female pattern facing away, to deceive other males. Birds In an anecdotal account, Simmons reported that a female marsh harrier courted a male to obtain access to food he had stored. She then took this food and fed it to chicks that had been fathered by another male. More extensive studies focused on possibly deceitful behaviour in the pied flycatcher, a species in which males may possess more than one territory. Females gain from mating with a male that has no other mates, and males may try to deceive females about their mating status (mated or unmated). Females frequently visit the male, and if he is always alone on his territory he is probably unmated. Thus, by repeated sampling of male behaviour, females are usually able to avoid mating with previously mated males. Group-foraging common ravens hoard their food in a number of places, and also raid the caches made by others. Cachers withdraw from conspecifics when hiding their food and usually place their caches behind structures, out of sight of potential observers. Raiders remain inconspicuous, keeping at a distance from cachers near their cache sites, but within sight. In response, cachers often interrupt caching, change cache sites, or empty their caches. These behaviours suggest that ravens can withhold information about their intentions, which may qualify as tactical deception. Similarly, if a Eurasian jay (Garrulus glandarius) is being watched by another jay, it tends to cache food behind an opaque barrier rather than a transparent barrier, apparently to reduce the likelihood of other jays pilfering its caches. Mammals In domestic pigs, in a setting where the behaviour of a trained animal could reveal the source of food to another animal, the trained animal spent longer at the food source before other pigs arrived. Intentional tactical deception has been proposed for mice. In particular, d'Isa et al. 
have observed that free-living black-striped mice (Apodemus agrarius) perform a peculiar deceptive dodging maneuver to escape from a chaser mouse. The chased black-striped mouse enters a single-entrance chamber, hides inside the chamber next to the entrance, waits until the chaser has entered and then, exploiting the distraction of the back-turned chaser, takes the exit to escape in the opposite direction. Primates Observations on great apes have been widely reported as evidence of tactical deception. Several great apes have been trained to use sign language, and in some instances these animals seem to have used language in an attempt to deceive human observers. Koko, a female gorilla, was trained to use a form of American Sign Language. It has been claimed that she once tore a steel sink out of its moorings and, when her handlers confronted her, Koko signed "cat did it" and pointed at her innocent pet kitten. Nim Chimpsky was a common chimpanzee trained in American Sign Language. Trainers claimed that when Nim grew bored of learning to sign words, he would sign 'dirty', indicating he wanted to go to the toilet, which caused the trainer to stop the lesson. Another example involves a chimpanzee approached from behind by a loud aggressive rival. Here, the chimpanzee moved his lips until he lost his fear grin, thereby concealing his fear. Only then did he turn around to face the challenger. Deceit in great apes has been studied under experimental conditions, one of which is summarised by Kirkpatrick: "...food was hidden and only one individual, named Belle, in a group of chimpanzees was informed of the location. Belle was eager to lead the group to the food but when one chimpanzee, named Rock, began to refuse to share the food, Belle changed her behaviour. She began to sit on the food until Rock was far away, then she would uncover it quickly and eat it. Rock figured this out though and began to push her out of the way and take the food from under her. Belle then sat farther and farther away waiting for Rock to look away before she moved towards the food. In an attempt to speed the process up, Rock looked away until Belle began to run for the food. On several occasions he would even walk away, acting disinterested, and then suddenly spin around and run towards Belle just as she uncovered the food." Deceptive behaviour has been observed in Old World monkeys including baboons (Papio ursinus). In one of their articles, Byrne and Whiten recorded observations of "intimate tactical deception" within a group of baboons, and documented examples that they classified as follows: a juvenile using warning screams to gain access to underground food stores which otherwise would have been inaccessible; an exaggerated "looking" gesture (which in an honest context would mean detection of a predator) produced by a juvenile to avoid attack by an adult male; recruitment of a "fall-guy" (a third party used by the deceiver to draw attention or aggression); and using one's own movement pattern to draw group-mates away from food caches. Byrne and Whiten also broke these categories into subcategories denoting the modality of the action (e.g. vocalization) and what the action would have signified if observed in an honest context. They noted whether the individual that had been manipulated was in turn used to manipulate others, what the costs had been to the manipulated individual, and whether or not there were additional costs to third parties. 
Byrne and Whiten expressed concern that these observations might be exceptions, and that such deceptive behaviours might not be common to the species. Among New World monkeys, subordinate tufted capuchin monkeys (Cebus apella) have been found to employ a vocal form of tactical deception when competing with dominant monkeys over valuable food resources. They use alarm calls normally reserved for predator sightings (barks, which are used specifically for aerial stimuli, as well as peeps and hiccups) to elicit a response in fellow group members and then take advantage of the distraction to pilfer food. In a series of experiments directed by Brandon Wheeler, a group of tufted capuchin monkeys was provided with bananas on feeding platforms. Here, subordinate monkeys made nearly all of the alarm calls that could be classified as false, and in many of the false alarms the caller was on, or within two meters of, the feeding platform. The calls made dominant monkeys leave the platform, while the subordinate caller stayed behind to eat. Costs Withholding information, a form of tactical deception, can be costly to the deceiver. For example, rhesus monkeys discovering food announce their discoveries by calling on 45% of occasions. Discoverers who fail to call, but are detected with food by other group members, receive significantly more aggression than vocal discoverers. Moreover, silent female discoverers eat significantly less food than vocal females. Presumably because of such costs to deceivers, tactical deception occurs rather rarely. It is thought to be more common in situations and species where the cost to the receiver of ignoring a possibly deceptive act is even higher than the cost of believing it. For example, tufted capuchin monkeys sometimes emit false alarm calls. The cost of ignoring one of these calls could be death, which may lead to a "better safe than sorry" philosophy even when the caller is a known deceiver. References Animal communication Ethology Deception
Tactical deception in animals
[ "Biology" ]
1,828
[ "Behavioural sciences", "Ethology", "Behavior" ]
78,117,494
https://en.wikipedia.org/wiki/Disaster%20restoration
Disaster restoration refers to the process of repairing and restoring property damaged by natural disasters such as floods, hurricanes, wildfires, or earthquakes. It typically involves services such as structural repairs, water damage restoration, fire damage restoration, mold remediation, and content restoration. The industry The disaster restoration industry, encompassing services such as fire damage repair and mold remediation, has experienced significant growth in recent decades due to a confluence of factors. Severe natural disasters, coupled with increasing development in disaster-prone areas, have created a steady demand for restoration services. While historically dominated by local family-owned businesses, the industry has witnessed a notable consolidation trend driven by private equity firms seeking to capitalize on its recession-proof nature. Market size The global post-storm remediation market is projected to expand from $70 billion in 2024 to $92 billion by 2029, reflecting the enduring demand for restoration services in the face of climate change and other environmental challenges. References Companies Business services companies Cleaning industry
Disaster restoration
[ "Chemistry" ]
202
[ "Cleaning", "Surface science" ]
78,118,609
https://en.wikipedia.org/wiki/Cybersecurity%20engineering
Cybersecurity engineering is a tech discipline focused on the protection of systems, networks, and data from unauthorized access, cyberattacks, and other malicious activities. It applies engineering principles to the design, implementation, maintenance, and evaluation of secure systems, ensuring the integrity, confidentiality, and availability of information. Given the rising costs of cybercrimes, which now amount to trillions of dollars in global economic losses each year, organizations are seeking cybersecurity engineers to safeguard their data, reduce potential damages, and strengthen their defensive security systems. History Cybersecurity engineering began to take shape as a distinct field in the 1970s, coinciding with the growth of computer networks and the Internet. Initially, security efforts focused on physical protection, such as safeguarding mainframes and limiting access to sensitive areas. However, as systems became more interconnected, digital security gained prominence. In the 1970s, the introduction of the first public-key cryptosystems, such as the RSA algorithm, was a significant milestone, enabling secure communications between parties that did not share a previously established secret. During the 1980s, the expansion of local area networks (LANs) and the emergence of multi-user operating systems, such as UNIX, highlighted the need for more sophisticated access controls and system audits. The Internet and the consolidation of security practices In the 1990s, the rise of the Internet alongside the advent of the World Wide Web (WWW) brought new challenges to cybersecurity. The emergence of viruses, worms, and distributed denial-of-service (DDoS) attacks required the development of new defensive techniques, such as firewalls and antivirus software. This period marked the solidification of the information security concept, which began to include not only technical protections but also organizational policies and practices for risk mitigation. Modern era and technological advances In the 21st century, the field of cybersecurity engineering expanded to tackle sophisticated threats, including state-sponsored attacks, ransomware, and phishing. Concepts like layered security architecture and the use of artificial intelligence for threat detection became critical. The integration of frameworks such as the NIST Cybersecurity Framework emphasized the need for a comprehensive approach that includes technical defense, prevention, response, and incident recovery. Cybersecurity engineering has since expanded to encompass technical, legal, and ethical aspects, reflecting the increasing complexity of the threat landscape. Core principles Cybersecurity engineering is underpinned by several essential principles that are integral to creating resilient systems capable of withstanding and responding to cyber threats. Risk management: involves identifying, assessing, and prioritizing potential risks to inform security decisions. By understanding the likelihood and impact of various threats, organizations can allocate resources effectively, focusing on the most critical vulnerabilities. Defense in depth: advocates for a layered security approach, where multiple security measures are implemented at different levels of an organization. By using overlapping controls—such as firewalls, intrusion detection systems, and access controls—an organization can better protect itself against diverse threats. Secure coding practices: emphasizes the importance of developing software with security in mind. 
Techniques such as input validation, proper error handling, and the use of secure libraries help minimize vulnerabilities, thereby reducing the risk of exploitation in production environments. Incident response and recovery: effective incident response planning is crucial for managing potential security breaches. Organizations should establish predefined response protocols and recovery strategies to minimize damage, restore systems quickly, and learn from incidents to improve future security measures. Key areas of focus Cybersecurity engineering works on several key areas. They start with secure architecture, designing systems and networks that integrate robust security features from the ground up. This proactive approach helps mitigate risks associated with cyber threats. During the design phase, engineers engage in threat modeling to identify potential vulnerabilities and threats, allowing them to develop effective countermeasures tailored to the specific environment. This forward-thinking strategy ensures that security is embedded within the infrastructure rather than bolted on as an afterthought. Penetration testing is another essential component of their work. By simulating cyber attacks, engineers can rigorously evaluate the effectiveness of existing security measures and uncover weaknesses before malicious actors exploit them. This hands-on testing approach not only identifies vulnerabilities but also helps organizations understand their risk landscape more comprehensively. Moreover, cybersecurity engineers ensure that systems comply with regulatory and industry standards, such as ISO 27001 and NIST guidelines. Compliance is vital not only for legal adherence but also for establishing a framework of best practices that enhance the overall security posture. Technologies and tools Firewalls and IDS/IPS Firewalls, whether hardware or software-based, are vital components of a cybersecurity infrastructure, acting as barriers that control incoming and outgoing network traffic according to established security rules. By preventing unauthorized access, firewalls protect networks from potential threats. Complementing this, Intrusion Detection Systems (IDS) continuously monitor network traffic to detect suspicious activities, alerting administrators to potential breaches. Intrusion Prevention Systems (IPS) enhance these measures by not only detecting threats but also actively blocking them in real-time, creating a more proactive security posture. Encryption Encryption is a cornerstone of data protection, employing sophisticated cryptographic techniques to secure sensitive information. This process ensures that data is rendered unreadable to unauthorized users, safeguarding both data at rest—such as files stored on servers—and data in transit—like information sent over the internet. By implementing encryption protocols, organizations can maintain confidentiality and integrity, protecting critical assets from cyber threats and data breaches. Security Information and Event Management (SIEM) SIEM systems play a crucial role in modern cybersecurity engineering by aggregating and analyzing data from various sources across an organization's IT environment. They provide a comprehensive overview of security alerts and events, enabling cybersecurity engineers to detect anomalies and respond to incidents swiftly. By correlating information from different devices and applications, SIEM tools enhance situational awareness and support compliance with regulatory requirements. 
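As an illustration of the kind of event correlation described in the SIEM paragraph above, here is a minimal, hypothetical sketch in Python; it is not tied to any particular SIEM product, and the event format, field names, and threshold are invented for this example.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical normalized events, as a SIEM might aggregate them from many sources.
events = [
    {"time": datetime(2024, 1, 1, 10, 0, 5), "source_ip": "10.0.0.7", "type": "login_failure"},
    {"time": datetime(2024, 1, 1, 10, 0, 9), "source_ip": "10.0.0.7", "type": "login_failure"},
    {"time": datetime(2024, 1, 1, 10, 0, 14), "source_ip": "10.0.0.7", "type": "login_failure"},
    {"time": datetime(2024, 1, 1, 10, 0, 20), "source_ip": "10.0.0.7", "type": "login_success"},
    {"time": datetime(2024, 1, 1, 11, 0, 0), "source_ip": "10.0.0.9", "type": "login_failure"},
]

def correlate_brute_force(events, max_failures=3, window=timedelta(minutes=5)):
    """Flag source IPs that accumulate too many login failures within a time window."""
    failures = defaultdict(list)
    alerts = []
    for ev in sorted(events, key=lambda e: e["time"]):
        if ev["type"] != "login_failure":
            continue
        ip = ev["source_ip"]
        # Keep only failures still inside the sliding window, then add this one.
        failures[ip] = [t for t in failures[ip] if ev["time"] - t <= window]
        failures[ip].append(ev["time"])
        if len(failures[ip]) >= max_failures:
            alerts.append((ip, ev["time"]))
    return alerts

print(correlate_brute_force(events))  # flags 10.0.0.7 at the third failure
```

A real SIEM correlates far richer data (many log sources, asset context, threat intelligence), but the core idea of windowed correlation rules is the same as in this toy example.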
Vulnerability assessment tools Vulnerability assessment tools are essential for identifying and evaluating security weaknesses within systems and applications. These tools conduct thorough scans to detect vulnerabilities, categorizing them based on severity. This prioritization allows cybersecurity engineers to focus on addressing the most critical vulnerabilities first, thus reducing the organization's risk exposure and enhancing overall security effectiveness. Threat Detection and Response (TDR) TDR solutions utilize advanced analytics to sift through vast amounts of data, identifying patterns that may indicate potential threats. Tools like Security Information and Event Management (SIEM) and User and Entity Behavior Analytics (UEBA) provide real-time insights into security incidents, enabling organizations to respond effectively to threats before they escalate. Traffic control and Quality of Service (QoS) Traffic control measures in cybersecurity engineering are designed to optimize the flow of data within networks, mitigating risks such as Distributed Denial of Service (DDoS) attacks. By utilizing technologies like Web Application Firewalls (WAF) and load balancers, organizations can ensure secure and efficient traffic distribution. Additionally, implementing Quality of Service (QoS) protocols prioritizes critical applications and services, ensuring they maintain operational integrity even in the face of potential security incidents or resource contention. Endpoint detection and response (EDR) and extended detection and response (XDR) EDR tools focus on monitoring and analyzing endpoint activities, such as those on laptops and mobile devices, to detect threats in real time. XDR expands on EDR by integrating multiple security products, such as network analysis tools, providing a more holistic view of an organization's security posture. This comprehensive insight aids in the early detection and mitigation of threats across various points in the network. Standards and regulations Various countries establish legislative frameworks that define requirements for the protection of personal data and information security across different sectors. In the United States, specific regulations play a critical role in safeguarding sensitive information. The Health Insurance Portability and Accountability Act (HIPAA) outlines stringent standards for protecting health information, ensuring that healthcare organizations maintain the confidentiality and integrity of patient data. The Sarbanes-Oxley Act (SOX) sets forth compliance requirements aimed at enhancing the accuracy and reliability of financial reporting and corporate governance, thereby securing corporate data. Additionally, the Federal Information Security Management Act (FISMA) mandates comprehensive security standards for federal agencies and their contractors, ensuring a unified approach to information security across the government sector. Globally, numerous other regulations also address data protection, such as the General Data Protection Regulation (GDPR) in the European Union, which sets a high standard for data privacy and empowers individuals with greater control over their personal information. These frameworks collectively contribute to establishing robust cybersecurity measures and promote best practices across various industries. Education A career in cybersecurity engineering typically requires a strong educational foundation in information technology or a related field. 
Many professionals pursue a bachelor's degree in cybersecurity or computer engineering which covers essential topics such as network security, cryptography, and risk management. For those seeking advanced knowledge, a master's degree in cybersecurity engineering can provide deeper insights into specialized areas like ethical hacking, secure software development, and incident response strategies. Additionally, hands-on training through internships or lab experiences is highly valuable, as it equips students with practical skills essential for addressing real-world security challenges. Continuous education is crucial in this field, with many engineers opting for certifications to stay current with industry trends and technologies. Security certifications are important credentials for professionals looking to demonstrate their expertise in cybersecurity practices. Key certifications include: Certified Information Systems Security Professional (CISSP): Globally recognized for security professionals. Certified Information Security Manager (CISM): Focuses on security management. Certified Ethical Hacker (CEH): Validates skills in penetration testing and ethical hacking. References Computer engineering Computer networks engineering Cybersecurity engineering Computer security Engineering disciplines
Cybersecurity engineering
[ "Technology", "Engineering" ]
2,058
[ "Cybersecurity engineering", "Computer engineering", "Computer networks engineering", "nan", "Electrical engineering" ]
78,119,260
https://en.wikipedia.org/wiki/Hyperpositive%20nonlinear%20effect
A hyperpositive nonlinear effect is a very specific case of a nonlinear effect. A nonlinear effect in asymmetric catalysis is a phenomenon in which the enantiopurity of the catalyst (or chiral auxiliary) is not proportional to the enantiopurity of the product obtained. These phenomena were rationalized in the mid-1980s by Henri B. Kagan, who proposed simple mechanistic models, supported by mathematical treatments, to fit the experimental curves. In 1994, H. B. Kagan and collaborators proposed more elaborate models that more closely matched the experimental results observed at the time. Using these models, the authors were able to make theoretical predictions about situations that had not been encountered experimentally. An example is a case “where the enantiomeric excess could take on much larger values for a partially resolved ligand than for an enantiomerically pure ligand”. The authors proposed the term “hyperpositive nonlinear effect” to characterize this situation. This prediction may seem implausible at first glance, but the possibility was observed experimentally 26 years later: the first experimental example of a hyperpositive nonlinear effect was described in 2020 by S. Bellemin-Laponnaz and colleagues, although the mechanism of the phenomenon turned out to be different from that originally proposed. The mechanism proposed to explain this hyperpositive nonlinear effect has also been validated as an explanation for cases of enantiodivergence. References Catalysis
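For orientation, the following relations are not stated in the article; they are simply the standard baseline against which nonlinear effects are defined. In the absence of any nonlinear effect the product enantiomeric excess scales linearly with the catalyst enantiomeric excess, and the hyperpositive case is the one in which a partially resolved catalyst gives a higher product ee than the enantiopure catalyst:

```latex
% Linear reference in the absence of a nonlinear effect:
ee_{\mathrm{prod}} = ee_{\max}\; ee_{\mathrm{cat}}
% Positive nonlinear effect:
%   ee_{\mathrm{prod}} > ee_{\max}\; ee_{\mathrm{cat}}
% Hyperpositive nonlinear effect:
%   ee_{\mathrm{prod}}(ee_{\mathrm{cat}} < 1) > ee_{\mathrm{prod}}(ee_{\mathrm{cat}} = 1)
```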
Hyperpositive nonlinear effect
[ "Chemistry" ]
304
[ "Catalysis", "Chemical kinetics" ]
78,121,829
https://en.wikipedia.org/wiki/Saratoga%20Water
Saratoga Water, also known as Saratoga Spring Water and Saratoga, is a bottled-water company founded in 1872 in Saratoga Springs, New York. Saratoga Spring Water is sold in sparkling and still versions in a cobalt-blue bottle. Saratoga Water is a brand of BlueTriton Brands. History In 1872, a group of Saratoga-based businessmen looking to take advantage of Saratoga’s famous healing waters began bottling a newly discovered spring under the name “Saratoga Vichy,” named after the mineral springs of Vichy, France. In 1903, the French Republic sued the Saratoga company in French Republic v. Saratoga Vichy Spring Co. over the use of the Vichy name. The court ruled in favor of the American company. In the mid-1980s, Saratoga Water was bought by Anheuser-Busch. Evian later bought the company from Anheuser-Busch and shut down the local bottling plant. In 2001, Adam Madkour Sr. and a group of local investors purchased the company and re-opened the plant. In 2021, the brand was sold to BlueTriton Brands. References Bottled water brands Mineral water Food and drink companies established in 1872 Saratoga Springs, New York
Saratoga Water
[ "Chemistry" ]
241
[ "Mineral water" ]
78,122,692
https://en.wikipedia.org/wiki/Mu%20%28land%29
The mu () in Mandarin, mau or mou in Cantonese, or bo in Taiwanese, also called the Chinese acre, is a traditional Chinese unit of measurement for land area. One mu equals 666.67 square meters in mainland China, 761.4 square meters in Hong Kong and Macau, and 99.17 square meters in Taiwan and Japan. Mu is the only traditional Chinese area unit legally retained by the PRC. Mainland On 7 January 1915, the Beiyang government promulgated a measurement law to use not only the metric system as the standard but also a set of Chinese measurement units based directly on the Qing dynasty definitions (), in which mu is the basic unit of area measurement. On 16 February 1929, the Nationalist government promulgated The Weights and Measures Act to adopt the metric system as the official standard and, in Article 11, to limit the newer Chinese units of measurement to private sales and trade, effective on 1 January 1930. These newer "market" units are based on rounded metric numbers, and mu remains the base unit of area. In mainland China, mu is the only area unit retained after the traditional Chinese measurement system was discontinued in the "Decree of the State Council Concerning the Use of Uniform Legal Measures in the Country" promulgated in 1959. The current Chinese measurement system stipulates that 1 mu is equal to 60 square zhang, which is approximately equal to 666.67 square meters; 15 mu is equal to 1 hectare; and 1 square kilometer is equal to 1,500 mu. Macau In Macau, mu is also the basic area unit of Chinese measurement. One mu is defined as 761.4 square meters. On 24 August 1992, Macau published Law No. 14/92/M, under which Chinese units of measurement similar to those used in Hong Kong, Imperial units, and United States customary units would be permissible for five years from the effective date of the law, 1 January 1993, on the condition that the corresponding International System of Units (SI) values were indicated; for three more years thereafter, Chinese, Imperial, and US units would remain permissible as secondary to the SI. Hong Kong The Chinese units of measurement used in Hong Kong are similar to those used in Macau. In 1976, the Hong Kong Metrication Ordinance allowed a gradual replacement of the system in favor of the SI metric system. The Weights and Measures Ordinance defines the metric, Imperial, and Chinese units. As of 2012, all three systems are legal for trade and are in widespread use. The standard commercial measure of real estate area is the square foot of the Imperial system, and apartment or office sizes are generally still given in square feet. However, square metres are used for official purposes. The traditional units of agricultural land area are the mau or mou (Cantonese for mu, a unit used throughout China) and the local dau chung (). Notionally the two units are defined differently, with the dau chung being the amount of land which could be planted with one dau () of rice; in practice the area of one dau chung is roughly equal to one mau. Taiwan In Taiwan, the principal unit for measuring the floor space of an office or apartment is the ping (Taiwanese Hokkien: pêⁿ, Hakka: phiàng, Mandarin: píng). The unit derives from the Japanese tsubo, the base unit of Japanese area measurement. The principal unit of land measure is the jia (Taiwanese Hokkien: kah, Hakka: kap, Mandarin: jiǎ). The unit is derived from the obsolete Dutch morgen, which was introduced during Taiwan's Dutch era. The Taiwanese mu is derived from the Japanese se: one Taiwanese mu equals one Japanese se, or 30 ping. Officially, land area is measured in square metres. 
"Mu", "acre" and "are" There are three area units whose Chinese names include character . Their meanings and conversions are as follows: (Chinese mu; character-by-character translation: "market mu"): Or simply called mu, is a traditional Chinese unit of measure, roughly equals 667 square meters in Mainland China. (acre, "British mu"): A British Imperial unit, about 4,047 square meters or 0.405 hectares. (are, "common mu"): Part of the metric system, equivalent to 100 square meters. 1 Chinese mu =6.667 ares = 0.164 acre. Idioms One mu and three fen of land, or 1.3 mu of land () is a Chinese idiom that figuratively refers to someone's small personal domain or limited territory, often implying a narrow scope of influence or control. It is also the name of a Chinese website 1Point3Acres. See also Chinese units of measurement Taiwanese units of measurement Hong Kong units of measurement :zh:亩 (the Chinese Wiki article of Mu) Notes References Units of area Customary units of measurement External links https://www.britannica.com/science/mou (Mou: Chinese unit of measurement)
Mu (land)
[ "Mathematics" ]
1,017
[ "Quantity", "Units of area", "Customary units of measurement", "Units of measurement" ]
78,122,911
https://en.wikipedia.org/wiki/Veronte%20Autopilot
Veronte Autopilot is a family of autopilot systems developed by Embention, a Spanish company specializing in safety-critical avionics for unmanned aerial vehicles (UAVs) and electric vertical take-off and landing (eVTOL) aircraft. Known for its advanced control capabilities, the Veronte Autopilot systems are designed to meet stringent reliability and certification requirements, allowing their integration into both manned and unmanned aircraft. Overview Veronte Autopilot is used in various autonomous flight systems for both civil and military applications. It is a fully user-programmable flight controller that can be adapted to different aircraft through model-based design, enabling it to meet specific operational needs. The autopilot supports features such as obstacle avoidance, geofencing, satellite communications, and real-time telemetry, with built-in remote identification (Remote ID) and Automatic Dependent Surveillance–Broadcast (ADS-B) functionalities. The product family includes configurations for single core, redundant, and distributed redundancy setups, enhancing reliability for critical operations. It is equipped with advanced safety measures, making it suitable for use in applications requiring high safety standards, such as urban air mobility (UAM) and certified drone operations. Products Veronte Autopilot 1x: A miniaturized flight control system, optimized for UAVs and autonomous vehicles. Veronte Autopilot 4x: A redundant system designed for critical operations, particularly for drones and eVTOL vehicles. It features a fail-operational architecture to prevent single points of failure. Veronte Autopilot DRx: Developed to meet eVTOL certification requirements, this model supports fly-by-wire and autonomous control systems, making it suitable for UAM and other high-stakes applications. Certification Veronte Autopilot is developed in compliance with key aviation standards, including DO-178C, DO-254, and DO-160. The company behind the product, Embention, is certified under ISO9001 and EN9100, ensuring a robust quality management system. In 2024, the European Union Aviation Safety Agency (EASA) approved the certification basis for Veronte Autopilot under the ETSO-C198 framework, making it the first flight control system for UAS and eVTOL to undergo this process. This certification paves the way for Veronte Autopilot to be used in both manned and unmanned aircraft that require formal certification. Applications Veronte Autopilot is utilized in a variety of sectors, including defense, emergency response, and UAM. Its systems have been integrated into both drones and eVTOL aircraft used for air taxis and cargo transport. By achieving certifications that meet manned aviation standards, Veronte Autopilot enables seamless integration into broader aerospace operations, promoting the use of autonomous systems in regulated airspaces. See also Unmanned aerial vehicle (UAV) Electric vertical takeoff and landing (eVTOL) Urban air mobility (UAM) External links Veronte official website Reference links Aircraft Technologies aircraft Electric aircraft Safety engineering Avionics Unmanned aerial vehicle manufacturers of Spain
Veronte Autopilot
[ "Technology", "Engineering" ]
638
[ "Safety engineering", "Systems engineering", "Avionics", "Aircraft instruments" ]
78,125,030
https://en.wikipedia.org/wiki/1H%200323%2B342
1H 0323+342, also known as 2MASX J032441.19+341045.9, is a galaxy located in the constellation of Perseus. It is located 831 million light years from Earth. It is classified as a gamma-ray-emitting narrow-line Seyfert galaxy, the nearest known example of this subtype. Observational history 1H 0323+342 was first discovered by Wood as an astrophysical X-ray source during the HEAO-1 X-ray survey in 1984. At the time of the observation, the source was of unknown origin. In 1993, the source was confirmed as a Seyfert type 1 galaxy by Remillard and colleagues, who identified several emission-line AGNs from a further HEAO-1 X-ray survey. This galaxy has since been detected by both the Fermi Gamma-ray Space Telescope and INTEGRAL. Characteristics The nucleus of 1H 0323+342 is found to be active. The most likely explanation for this energy source in all active galactic nuclei is the presence of an accretion disk around a supermassive black hole. The mass of the black hole at the center of 1H 0323+342 is estimated to be 10⁷ M☉ based on the width and luminosity of the Hβ line and empirical scaling relations, or ~2 × 10⁷ M☉ according to multi-wavelength observations. Later studies, however, have produced different estimates. In 2016, a similar mass of 3.4 (+0.9/−0.6) × 10⁷ M☉ was found from a reverberation study. In 2024, a mass of 10^(7.24±0.01) M☉ was derived from the galaxy's total flux spectrum. Additionally, the nucleus shows a quasi-stationary feature similar to the HST-1 structure inside the jet of Messier 87. 1H 0323+342 shows some characteristics of blazars, including variable fluxes in the optical, radio and X-ray bands as well as a compact bright core. Moreover, the core is revealed to have a two-sided structure measuring ~15 kiloparsecs. The galaxy also contains a flat-spectrum radio source, with a radio loudness of R = 246 at 5 GHz or R = 318 at 1.4 GHz. A relativistic jet is present in 1H 0323+342, although its jet power of 1.0 × 10⁴⁵ erg s⁻¹ is half the luminosity of its accretion disk. The nature of the host galaxy of 1H 0323+342 remains uncertain, but it has an irregular morphology. It contains a peculiar structure that has been interpreted either as a one-armed spiral, based on an optical image taken by the Hubble Space Telescope, or as a ring-like structure suggesting a recent galaxy merger. References External links 1H 0323+342 on SIMBAD 2MASS objects Perseus (constellation) 2045127 Blazars Seyfert galaxies Astronomical objects discovered in 1984
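The Hβ-based mass quoted above is presumably a single-epoch virial estimate; the following is only a schematic sketch of that general method (the article itself gives no formula, and the exact calibration factors differ between studies):

```latex
% Single-epoch virial black-hole mass estimate (schematic):
M_{\mathrm{BH}} \simeq f \, \frac{R_{\mathrm{BLR}} \, (\mathrm{FWHM}_{\mathrm{H}\beta})^{2}}{G}
% R_BLR, the broad-line-region radius, comes from an empirical
% radius-luminosity relation, roughly R_BLR \propto L^{1/2}, and f is an
% order-unity virial factor calibrated against reverberation-mapped AGN.
```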
1H 0323+342
[ "Astronomy" ]
627
[ "Perseus (constellation)", "Constellations" ]
78,125,660
https://en.wikipedia.org/wiki/%CE%91-Zeacarotene
α-Zeacarotene (alpha-zeacarotene) is a form of carotene with a β-ionone ring at one end and a ζ-ionone ring at the opposite end. It is an intermediate in the biosynthesis of various carotenoids and plays a crucial role in the metabolic pathway leading to the production of lycopene and other important carotenoids. Chemical structure and properties The molecular formula of α-zeacarotene is C40H58, with an average molecular weight of 538.89 g/mol. Its IUPAC name is 6-[(1E,3Z,5E,7E,9E,11Z,13E,15E,19Z)-3,7,12,16,20,24-hexamethylpentacosa-1,3,5,7,9,11,13,15,19,23-decaen-1-yl]-1,5,5-trimethylcyclohex-1-ene. The compound is an isomer of β-zeacarotene and exists in both (6R)-isomer and (trans)-isomer forms. α-Zeacarotene is characterized by a predicted boiling point of 637.98 °C at 760 mm Hg and an estimated water solubility of 8.7e-14 mg/L at 25°C, indicating very low solubility in water. Its predicted logP values range from 9.66 to 15.27, highlighting its lipophilic nature. Biological role and function In biological systems, α-zeacarotene functions as an intermediate in the biosynthesis of other carotenoids, including lycopene and β-carotene. It is primarily located in the cytoplasm and cell membranes. The compound also plays a role in cell signaling and lipid metabolism, particularly within the lipid peroxidation and fatty acid metabolism pathways. α-Zeacarotene has been detected in various plant sources, particularly cereals such as corn and breakfast cereals, and is considered both an endogenous (naturally occurring within organisms) and exogenous (obtained through diet) nutrient. Antioxidant activity and health implications Like many carotenoids, α-zeacarotene is recognized for its antioxidant properties, which play a crucial role in neutralizing reactive oxygen species (ROS) within biological systems. ROS are highly reactive molecules that can damage cells, leading to oxidative stress and contributing to the development of chronic diseases such as cardiovascular disease, cancer, and neurodegenerative disorders. While α-zeacarotene's antioxidant activity has not been studied as extensively as other carotenoids like β-carotene or lycopene, preliminary research suggests it may offer similar protective effects. Diets rich in carotenoids, including α-zeacarotene, are associated with a reduced risk of these conditions due to their ability to support cellular health and mitigate oxidative damage. Role in agriculture and biofortification α-Zeacarotene has also gained interest in agricultural research, particularly in the context of biofortification. Biofortification refers to the process of increasing the nutrient content of crops through conventional breeding or genetic engineering. Because carotenoids are important precursors to vitamin A, biofortifying staple crops like maize, rice, and wheat with α-zeacarotene and related carotenoids could help combat vitamin A deficiency in regions where access to diverse diets is limited. This deficiency is a significant public health issue, particularly in developing countries, where it can lead to visual impairment and increased susceptibility to infections. The ability to enhance carotenoid content in crops offers a sustainable way to improve nutritional outcomes in vulnerable populations. Environmental factors and stability in plants The concentration of α-zeacarotene in plants can be influenced by a variety of environmental factors, including light, temperature, and soil quality. 
Studies have shown that increased light exposure, particularly in the blue light spectrum, can enhance carotenoid production, including α-zeacarotene, in plant tissues. However, the compound is also prone to degradation when exposed to excessive sunlight, particularly ultraviolet (UV) radiation, which can break down the carotenoid structure and reduce its biological effectiveness. This sensitivity to environmental factors underscores the importance of optimal storage and handling conditions for α-zeacarotene-rich foods and products, both in agriculture and in post-harvest processes. Research on biosynthetic pathways Recent advances in plant molecular biology have allowed researchers to explore the specific enzymes involved in the biosynthesis of α-zeacarotene. Enzymes such as phytoene synthase and lycopene β-cyclase play key roles in converting precursor molecules into α-zeacarotene, which in turn can be further processed into other carotenoids. Genetic manipulation of these enzymes in model plants has demonstrated the potential to alter the levels of α-zeacarotene and related carotenoids, offering new insights into plant metabolism and the regulation of carotenoid synthesis. Understanding these pathways not only contributes to agricultural innovations but also offers opportunities for improving the nutritional content of foods and developing novel carotenoid-based supplements. Potential industrial uses in cosmetics and pharmaceuticals Beyond its applications in the food and agricultural industries, α-zeacarotene holds potential in the cosmetics and pharmaceutical sectors. Due to its lipophilic nature and antioxidant properties, it may be incorporated into skincare products aimed at protecting the skin from oxidative stress and environmental damage. Additionally, its potential role in reducing inflammation and supporting cell regeneration makes it a candidate for anti-aging formulations. In the pharmaceutical industry, research into carotenoid derivatives is exploring their use in preventing or treating diseases related to oxidative stress, such as age-related macular degeneration (AMD) and certain types of cancer. Mechanisms of action in the body The mechanisms by which α-zeacarotene exerts its biological effects are still under investigation. However, it is believed that its antioxidant properties primarily stem from its ability to scavenge free radicals and inhibit lipid peroxidation. This capability not only protects cellular components from oxidative damage but also helps maintain the integrity of cellular membranes. Additionally, α-zeacarotene may influence gene expression related to antioxidant enzymes, enhancing the body's overall antioxidant defense system. Research has indicated that carotenoids can modulate cell signaling pathways involved in inflammation and cell survival, potentially contributing to the prevention of various diseases. Impact on vision and eye health Carotenoids, including α-zeacarotene, have been linked to eye health due to their role in protecting retinal cells from oxidative damage and blue light exposure. The presence of carotenoids in the macula—a small area in the retina responsible for central vision—is essential for visual function. Some studies suggest that a diet rich in carotenoids may reduce the risk of age-related macular degeneration (AMD), a leading cause of vision loss in older adults. 
While α-zeacarotene's specific contribution to eye health requires further research, its antioxidant properties and presence in plant-based diets make it a candidate for supporting visual health. Potential synergistic effects with other nutrients The health benefits of α-zeacarotene may be enhanced when consumed in combination with other carotenoids and nutrients. For instance, the presence of dietary fats can improve the absorption of carotenoids, leading to greater bioavailability and effectiveness. Additionally, carotenoids often work synergistically, meaning that the combined effect of multiple carotenoids may be greater than the sum of their individual effects. This synergy is particularly relevant in the context of a balanced diet rich in fruits and vegetables, where various carotenoids, vitamins, and minerals coexist and contribute to overall health. Innovations in extraction and utilization Advancements in extraction techniques have opened new avenues for utilizing α-zeacarotene in various industries. Techniques such as supercritical fluid extraction (SFE) and cold pressing are being employed to obtain high-purity carotenoid extracts from plant sources. These innovations not only improve the yield of carotenoids but also preserve their bioactivity, making them more effective in dietary supplements, functional foods, and cosmetic formulations. Furthermore, research into nanoemulsions and delivery systems is enhancing the stability and absorption of α-zeacarotene, allowing for more effective applications in health and wellness products. Future research directions Future research on α-zeacarotene should focus on elucidating its specific biological roles and potential health benefits. Investigating its effects in clinical settings could provide insights into its efficacy in preventing or managing chronic diseases. Additionally, studies exploring the interactions of α-zeacarotene with other dietary components, including fatty acids and phytochemicals, could enhance our understanding of its health-promoting properties. Furthermore, research into genetically modified organisms (GMOs) that produce higher levels of α-zeacarotene may lead to more nutrient-dense crops, addressing nutritional deficiencies in vulnerable populations worldwide. Industrial applications In addition to its biological roles, α-zeacarotene has applications in the manufacturing industry, particularly as a fluid processing agent and surfactant. It also functions as an emulsifier, playing a role in stabilizing mixtures in industrial processes. Synonyms and identification Synonyms: α-Zeacarotene is also known by several other names, including 7',8'-dihydro-epsilon,Psi-carotene, 7',8'-dihydro-e,Y-carotene, and Zeacarotene. References Carotenoids Cyclohexenes Surfactants Phytochemicals
Α-Zeacarotene
[ "Biology" ]
2,097
[ "Biomarkers", "Carotenoids" ]
78,126,667
https://en.wikipedia.org/wiki/Cm28
Cm28, a scorpion toxin from Centruroides margaritatus, selectively blocks the voltage-gated potassium channels KV1.2 and KV1.3 with high affinity. It also suppresses the activation of human CD4+ effector memory T cells, suggesting its potential as a therapeutic agent for autoimmune diseases. Phylogenetic analysis reveals that Cm28 belongs to a new α-KTx subfamily, highlighting its unique structural and functional properties for potential drug development. Etymology The peptide name "Cm28" is derived from the scorpion species Centruroides margaritatus and its molecular mass, which is estimated to be 2820 Daltons. Sources Cm28 was isolated from the venom of the Centruroides margaritatus scorpion. The venom was obtained by milking the animal using electrical stimulation. Chemistry Structure Cm28 is a short peptide of the α-KTx family composed of 27 amino acid residues, with six cysteines forming three disulfide bridges, a characteristic feature of proteins in this family. Defensins and venom toxins such as Cm28 share a structural similarity: both are cysteine-rich proteins with multiple disulfide bonds that help maintain their shape. This structural feature, often referred to as the CSα/β fold, is characterized by alternating alpha-helices and beta-sheets stabilized by disulfide bridges. This fold is essential for their ability to interact with and block ion channels, a function crucial for both immune defense (defensins) and venom toxicity (neurotoxins). Amino acid sequence KCRECGNTSPSCYFSGNCVNGKCVCPA Family Phylogenetic analysis comparing the amino acid sequence of Cm28 with 75 other reported scorpion toxins suggests that Cm28 belongs to the α-KTx family. It has been given the systematic number α-KTx 32.1. Cm28 lacks the typical lysine-tyrosine functional dyad required for blocking KV channels. A 3D model is available from the SWISS-MODEL repository. Target and mode of action Cm28 is a potent inhibitor of the voltage-gated potassium channels KV1.2 and KV1.3, with dissociation constants (Kd) of 0.96 nM and 1.3 nM, respectively. KV1.3 channels are essential for the activation and proliferation of TH17 cells, a T helper cell subset critical for immune responses, especially in autoimmune diseases. These channels regulate T cell proliferation and other signaling pathways necessary for T cell functioning. The binding of Cm28 to both KV1.2 and KV1.3 is reversible, allowing dynamic regulation of channel activity during immune responses. It operates by physically blocking the pores of these channels, preventing potassium ions from passing through. Rather than altering the voltage-sensing domain, Cm28 interacts with the selectivity filter region, effectively disrupting ion flow without shifting the activation thresholds. This specific interaction highlights Cm28's precise targeting of the pore region, making it a highly selective blocker of KV1.2 and KV1.3 channels. The exact residues involved in blocking the selectivity filter are unknown. Toxicity In toxicity assays, Cm28 did not compromise the viability of human CD4+ T cells, even at concentrations much higher than its binding affinity for KV1.3 channels. Specifically, after a 24-hour incubation period with 1.5 μM Cm28, the cytotoxicity of the peptide on quiescent and TCR-activated CD4+ T cells was less than 1%. This finding was confirmed by both lactate dehydrogenase (LDH) assays and flow cytometry using Zombie NIR dye to evaluate cell viability. Cm28 therefore demonstrates minimal cytotoxicity in vitro under the experimental conditions. 
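To put the dissociation constants quoted above in context, the fraction of channels blocked at equilibrium by a blocker at free concentration [B], assuming simple one-to-one binding, follows the standard occupancy relation (this is general pharmacology, not a formula taken from the article):

```latex
% Fractional block for simple bimolecular binding:
\theta = \frac{[B]}{[B] + K_d}
% With K_d \approx 1\,\mathrm{nM}, a 10 nM concentration of blocker would
% occupy roughly 10/(10+1) \approx 91\% of channels at equilibrium.
```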
References External links https://swissmodel.expasy.org/repository/uniprot/C0HM22?csm=619126AE0255A0CA Ion channel toxins Neurotoxins Scorpion toxins Peptides
Cm28
[ "Chemistry" ]
848
[ "Biomolecules by chemical classification", "Molecular biology", "Neurochemistry", "Neurotoxins", "Peptides" ]
63,644,161
https://en.wikipedia.org/wiki/Free%20Air%20Humidity%20Manipulation
The Free Air Humidity Manipulation (FAHM) experiment is a large-scale field experiment in Estonia. It was established by plant biologists (ecophysiologists and applied ecologists) at the University of Tartu to investigate the long-term effects of increasing air humidity on tree performance and on the functioning of the deciduous forest ecosystem. The design of the FAHM experiment is based on Free-Air Carbon dioxide Enrichment (FACE) technology. FAHM (58°14′N, 27°18′E) is located within the Järvselja Training and Experimental Forest District in the village of Rõka, Tartu County. The FAHM infrastructure makes it possible to raise relative air humidity by up to 18 percentage units above the ambient level (long-term mean increase of 7%). References Ecology Tartu County Botany
Free Air Humidity Manipulation
[ "Biology" ]
161
[ "Ecology", "Plants", "Botany" ]
63,644,269
https://en.wikipedia.org/wiki/Samarium%28II%29%20fluoride
Samarium(II) fluoride is one of the fluorides of samarium, with the chemical formula SmF2. The compound crystallizes in the fluorite structure and is significantly nonstoichiometric. Along with europium(II) fluoride and ytterbium(II) fluoride, it is one of the three known rare-earth difluorides; the other rare-earth difluorides are unstable. Preparation Samarium(II) fluoride can be prepared by reducing samarium(III) fluoride with samarium metal or with hydrogen gas: Sm + 2 SmF3 → 3 SmF2, or 2 SmF3 + H2 → 2 SmF2 + 2 HF. Properties Samarium(II) fluoride is a purple to black solid. It adopts the cubic calcium fluoride (fluorite) structure type (space group Fm-3m, No. 225, with a = 587.7 pm). References Samarium(II) compounds Fluorides Lanthanide halides Fluorite crystal structure
Samarium(II) fluoride
[ "Chemistry" ]
188
[ "Fluorides", "Salts" ]
63,644,325
https://en.wikipedia.org/wiki/Ceramic%20engine
A ceramic engine is an internal combustion engine made from specially engineered ceramic materials. Ceramic engines allow for the compression and expansion of gases at extremely high temperatures without loss of heat or engine damage. Proof-of-concept ceramic engines were popularized by successful studies in the early 1980s and 1990s. Under controlled laboratory conditions, ceramic engines outperformed traditional metal engines in terms of weight, efficiency, and performance. All-ceramic engines were seen as the next advancement in engine technology, but have not yet entered the automobile market because of manufacturing and economic problems. History Research into more efficient diesel engines followed the 1970s energy crisis, which created a new market for fuel-efficient vehicles. A newly developed gas turbine engine design promised high thermal efficiency, but needed a material that could withstand very high temperatures. The high heat ruled out readily available materials such as metals, superalloys, and carbon composites. As a result, government-funded research facilities in the United States, Japan, Germany, and the United Kingdom experimented with replacing metal with ceramics. Ceramics' high resistance to heat helped pave the way towards the first commercial use of gas turbine engines, the successes of which led to the idea of an all-ceramic engine. Between 1985 and 1989, Nissan, in collaboration with NGK, produced the world's first ceramic turbocharger, debuting it on the 1985 Fairlady Z 200ZR. Isuzu developed a ceramic diesel engine that used ceramic for the pistons, piston rings, and turbocharger wheels, as well as an engine with cylinder liners made of ceramic materials such as silicon nitride. The company also used ceramics for the intake and exhaust valves, exhaust manifold, turbocharger housing, camshafts, heat insulation, and rocker arms. Predictions for an adiabatic turbo-compound engine (a theoretical heat-efficient engine) were seen as plausible with the use of technical ceramic materials. A 1987 technical paper by Roy Kamo predicted the mass production of such engines to occur in the year 2000. However, these predictions were made with the belief that ceramics would overcome "the design methodology, manufacturing process, machining cost, and mass production quality control needed for high volume production." Currently, ceramic engines are not viable for mass production. Large parts, like the engine block, can be challenging to manufacture out of ceramics due to their brittleness and stiffness. Applications In 1982, Isuzu tested a car with an all-ceramic engine near Kinko Bay. In 1988, Toyota introduced a ceramic engine into its Crown, as well as its GTV (Gas Turbine Vehicle) concept car. Notes Engines Ceramics
Ceramic engine
[ "Physics", "Technology" ]
540
[ "Physical systems", "Machines", "Engines" ]
63,645,035
https://en.wikipedia.org/wiki/1%2C1%2C3%2C3-Tetramethyl-1%2C3-divinyldisiloxane
1,1,3,3-Tetramethyl-1,3-divinyldisiloxane (also referred to as tetramethyldivinyldisiloxane) is the organosilicon compound with the formula O(SiMe2CH=CH2)2. Tetramethyldivinyldisiloxane is a colorless liquid that is employed as a ligand in organometallic chemistry and in homogeneous catalysis; it is the ligand component of Karstedt's catalyst. It was first prepared by hydrolysis of vinyldimethylmethoxysilane, (CH2=CH)Me2SiOMe. References Homogeneous catalysis Dienes Siloxanes Vinyl compounds
1,1,3,3-Tetramethyl-1,3-divinyldisiloxane
[ "Chemistry" ]
156
[ "Catalysis", "Homogeneous catalysis" ]
63,645,037
https://en.wikipedia.org/wiki/NGC%20608
NGC 608 is a lenticular galaxy in the constellation Triangulum. It is estimated to be about 230 million light-years from the Milky Way. It has a diameter of approximately 130,000 light-years. NGC 608 was discovered on November 22, 1827, by astronomer John Herschel. See also List of NGC objects (1–1000) References External links Lenticular galaxies Triangulum 0608 005913
NGC 608
[ "Astronomy" ]
89
[ "Triangulum", "Constellations" ]
63,645,100
https://en.wikipedia.org/wiki/NGC%20713
NGC 713 is a spiral galaxy located in the constellation of Cetus about 234 million light years from the Milky Way. It was discovered by the American astronomer Francis Leavenworth in 1886. See also List of NGC objects (1–1000) References External links Spiral galaxies Cetus 0713 007161
NGC 713
[ "Astronomy" ]
64
[ "Cetus", "Constellations" ]
63,645,173
https://en.wikipedia.org/wiki/Medical%20gown
Medical gowns are hospital gowns worn by medical professionals as personal protective equipment (PPE) in order to provide a barrier between patient and professional. Whereas patient gowns are flimsy, often with exposed backs and arms, PPE gowns, as seen below in the cardiac surgeon photograph, cover most of the exposed skin surfaces of the medical professional. In several countries, PPE gowns for use in the COVID-19 pandemic came to resemble cleanroom suits as knowledge of best practices filtered up through the national bureaucracies. For example, on 30 March 2020 the European norm-setting bodies CEN and CENELEC, in collaboration with the European Commissioner for the Internal Market, made the relevant standards documents freely available in order "to tackle the severe shortage of protective masks, gloves and other products currently faced by many European countries. Providing free access to the standards will facilitate the work of the many companies wishing to reconvert their production lines in order to manufacture the equipment that is so urgently needed." History The concept of PPE for medical professionals can be seen as early as the 17th-century plague doctor's outfit. During the Ebola crisis of 2014, the WHO published a rapid advice guideline on PPE coveralls. Types Gown types are categorized into different levels of barrier protection. Local variants United States In the United States, medical gowns are medical devices regulated by the Food and Drug Administration. The FDA divides medical gowns into three categories. A surgical gown is intended to be worn by health care personnel during surgical procedures. Surgical isolation gowns are used when there is a medium to high risk of contamination and a need for larger critical zones of protection. Non-surgical gowns are worn in low or minimal risk situations. Surgical and surgical isolation gowns are regulated by the FDA as Class II medical devices that require a 510(k) premarket notification, but non-surgical gowns are Class I devices exempt from premarket review. Surgical gowns only require protection of the front of the body, due to the controlled nature of surgical procedures, while surgical isolation gowns and non-surgical gowns require protection over nearly the entire gown. In 2004, the FDA recognized the ANSI/AAMI PB70:2003 standard on protective apparel and drapes for use in health care facilities. Surgical gowns must also conform to the ASTM F2407 standard for tear resistance, seam strength, lint generation, evaporative resistance, and water vapor transmission. Because surgical gowns are considered surface-contacting devices in contact with intact skin, the FDA recommends that cytotoxicity, sensitization, and irritation or intracutaneous reactivity be evaluated. China The First Affiliated Hospital of the Zhejiang University School of Medicine in Hangzhou, Zhejiang Province, People's Republic of China developed its own protocol and equipment during the early months of the COVID-19 pandemic. A screenshot of the cover of the Handbook of COVID-19 Prevention and Treatment shows a picture of two rows of medical personnel, each wearing PPE gowns, masks, hoods, and goggles. During the COVID-19 pandemic in Wuhan, doctors were provided with full PPE gown suits as early as January 2020. 
European Union During the COVID-19 pandemic, the European Commissioner for the Internal Market on 30 March 2020 listed the applicable norms to help manufacturers re-convert their production lines: Protective masks EN 149:2009-08: Respiratory protective devices – Filtering half masks to protect against particles – Requirements, testing, marking EN 14683:2019-10: Medical face masks – Requirements and test methods Eye protection EN 166:2002-04: Personal eye-protection – Specifications Protective clothing EN 14126:2004-01: Protective clothing – Performance requirements and test methods for protective clothing against infective agents EN 14605:2009-08: Protective clothing against liquid chemicals – Performance requirements for clothing with liquid-tight (Type 3) or spray-tight (Type 4) connections, including items providing protection to parts of the body only (Types PB [3] and PB [4]) EN ISO 13688:2013-12: Protective clothing – General requirements (ISO 13688:2013) EN 13795-1:2019-06: Surgical clothing and drapes – Requirements and test methods – Part 1: Surgical drapes and gowns EN 13795-2:2019-06: Surgical clothing and drapes – Requirements and test methods – Part 2: Clean air suits Gloves EN 455-1:2001-01: Medical gloves for single use – Part 1: Requirements and testing for freedom from holes EN 455-2:2015-07: Medical gloves for single use – Part 2: Requirements and testing for physical properties EN 455-3:2015-07: Medical gloves for single use – Part 3: Requirements and testing for biological evaluation EN 455-4:2009-10: Medical gloves for single use – Part 4: Requirements and testing for shelf life determination EN 420:2010-03: Protective gloves – General requirements and test methods EN ISO 374-1:2018-10: Protective gloves against dangerous chemicals and micro-organisms – Part 1: Terminology and performance requirements for chemical risks EN ISO 374-5:2017-03: Protective gloves against dangerous chemicals and micro-organisms – Part 5: Terminology and performance requirements for micro-organisms risks (ISO 374-5:2016) Israel As seen in the accompanying gallery figure, at least one Israeli hospital had access to full Tyvek PPE gowns as early as 17 March 2020 during the COVID-19 pandemic. Italy In an early April article, 20 doctors from across Italy described their experience with coronavirus patient care. Their findings are set out in a table entitled "Necessary personal protection equipment": FFP2 facial mask or (in case of maneuvers at high risk of generating aerosolized particles) FFP3 facial mask Disposable long sleeve waterproof coats, gowns, or Tyvek suits Disposable double pair of nitrile gloves Protective goggles or visors Disposable head caps Disposable long shoe covers Alcoholic hand hygiene solution Criticisms In a May 2017 research article, several French scientists complained that there was little harmonization across Europe for the names of pathogens, and went on to describe the PPE norms and regulations in France for infectious diseases under BSL-3. See also Plague doctor costume, historical equivalent Hazmat suit Workplace hazard controls for COVID-19 References Gowns Medical equipment Headgear Occupational safety and health Risk management in business Industrial hygiene Environmental social science Working conditions Personal protective equipment
Medical gown
[ "Engineering", "Biology", "Environmental_science" ]
1,376
[ "Personal protective equipment", "Safety engineering", "Medical equipment", "Environmental social science", "Medical technology" ]
63,645,205
https://en.wikipedia.org/wiki/Serbian%20barrel
A Serbian barrel is a sterilization device used for sterilizing clothes. It consists of a wooden or metal barrel or other container which is then heated to disinfect items hung inside it by moist heat sterilization. The Serbian barrel was pioneered by the British surgeon William Hunter during the 1915 typhus and relapsing fever epidemic in Serbia. References Medical equipment Sterilization (microbiology) 20th-century inventions
Serbian barrel
[ "Chemistry", "Biology" ]
88
[ "Microbiology techniques", "Sterilization (microbiology)", "Medical equipment", "Medical technology" ]
63,645,212
https://en.wikipedia.org/wiki/Oxalyl%20dicyanide
Oxalyl dicyanide is a chemical compound with the formula C4N2O2. Formation Oxalyl dicyanide can be formed by the hydrolysis of diiminosuccinonitrile. Reactions Oxalyl dicyanide can condense with diaminomaleonitrile to make pyrazinetetracarbonitrile and also 5,6-dihydroxypyrazine-2,3-dicarbonitrile, both derivatives of pyrazine. See also Cyanogen References Acyl cyanides Inorganic carbon compounds Inorganic nitrogen compounds
Oxalyl dicyanide
[ "Chemistry" ]
127
[ "Inorganic compounds", "Inorganic nitrogen compounds", "Organic compounds", "Inorganic carbon compounds", "Organic compound stubs", "Organic chemistry stubs" ]
63,645,762
https://en.wikipedia.org/wiki/Mushrooms%20in%20art
Mushrooms have been found in art traditions around the world, including in western and non-western works. Across those cultures, works of art depicting mushrooms can be found from ancient to contemporary times. Often, symbolic associations can also be given to the mushrooms depicted in the works of art. For instance, in Mayan culture, mushroom stones have been found that depict faces with a dreamlike or trance-like expression, which could signify the importance of mushrooms in inducing hallucinations or trances. Another example of mushrooms in Mayan culture deals with their codices, some of which might have depicted hallucinogenic mushrooms. Other examples of mushroom usage in art from various cultures include the Pegtymel petroglyphs of Russia and Japanese Netsuke figurines. Mushrooms are also prevalent in contemporary art. For example, a contemporary Japanese piece depicts baskets of matsutake mushrooms laid atop bank notes, signifying the association of mushrooms and prosperity. Other examples of contemporary art depicting fungi include Anselm Kiefer's Über Deutschland and Sonja Bäumel's Objects not static and silent but alive and talking. These contemporary works often address themes that run as strong undercurrents in modern times, such as sustainable living, new materials, and the ethical considerations associated with the science of fungi and biotechnologies. In fact, working with fungi allows contemporary artists to create art that is interactive and performative. Mushroom symbolism has also appeared in Christian paintings. The panel painting by Hieronymus Bosch, The Haywain Triptych, is considered the first depiction of mushrooms in modern art. Another triptych by Hieronymus Bosch, The Garden of Earthly Delights, depicts scenes very similar to those experienced under the effects of psychoactive mushrooms. In the case of Amanita muscaria, artistic representations throughout the ages reflect its association with psychotropic properties, depicting its use for social, religious, and therapeutic purposes. Registry of Mushrooms in Works of Art The Registry of Mushrooms in Works of Art is maintained by the North American Mycological Association and its stated goal is "to contribute to the understanding of the relationship between mushrooms and people as reflected in works of art from different historical periods, and to provide enjoyment to anyone interested in the subject." Started by Elio Schaechter, author of In the Company of Mushrooms, the project is ongoing. Art periods and artists are categorized as follows in the registry: 1300-1500 - Gothic and Early Renaissance 1500-1600 - High Renaissance Dutch Baroque 1600-1750 Flemish Baroque 1600-1750 Germanic Baroque 1600-1750 Italian Baroque 1600-1750 Miscellaneous Baroque 1600-1750 1750-1850 - Romanticism and Neoclassicism 1850-1950 - Modern Victorian Fairy Paintings Post 1950 - Contemporary Post 1999 - Contemporary Karl Hamilton Paolo Porpora Pseudo Fardella, Painter of Carlo Torre Van Schrieck, Otto Marseus Alexander Viazmensky References External links The Registry of Mushrooms in Works of Art from the North American Mycological Association Biology and culture Fungi and humans
Mushrooms in art
[ "Biology" ]
634
[ "Fungi and humans", "Fungi", "Humans and other species" ]
63,646,103
https://en.wikipedia.org/wiki/SN%202016aps
SN 2016aps (also known as PS16aqy and AT2016aps) is the brightest and most energetic supernova explosion ever recorded. It released more energy than ASASSN-15lh. In addition to the sheer amount of energy released, an unusually large fraction of the energy was released in the form of radiation, probably due to the interaction of the supernova ejecta with a previously expelled gas shell. Overview The event was discovered on 22 February 2016 by the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS) in Hawaii, with follow-up observations by the Hubble Space Telescope. The supernova occurred at a high redshift, indicating a distance of 3.6 billion light-years, and is located in the constellation Draco. The maximum apparent magnitude was 18.11, the corresponding absolute magnitude −22.35. The progenitor star is estimated to have had at least 50 to 100 solar masses. The spectrum of SN 2016aps revealed significant amounts of hydrogen, which is unexpected for supernovae of this type, which usually occur after nuclear fusion has consumed most of the star's hydrogen and the star has shed its remaining hydrogen atmosphere. This led researchers to the theory that the progenitor star formed only shortly before the event from the merger of two very large stars, creating a "pulsational pair instability" supernova or possibly a full pair instability supernova. See also References External links SN 2016aps entry in the Open Supernova Catalog Supernovae Astronomical objects discovered in 2016 Draco (constellation)
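A quick way to sanity-check the quoted magnitudes is the distance modulus, M = m − 5·log10(d / 10 pc). The short sketch below is only an approximation: it treats the quoted 3.6 billion light-years as a luminosity distance and ignores K-corrections, extinction, and cosmological subtleties at this redshift, so it recovers roughly, not exactly, the −22.35 given above.

```python
import math

LY_PER_PC = 3.26156  # light-years per parsec

def absolute_magnitude(apparent_mag: float, distance_ly: float) -> float:
    """Distance-modulus relation: M = m - 5 * log10(d_pc / 10)."""
    d_pc = distance_ly / LY_PER_PC
    return apparent_mag - 5.0 * math.log10(d_pc / 10.0)

# Values quoted in the article text above.
m_peak = 18.11   # maximum apparent magnitude
d_ly = 3.6e9     # distance in light-years (treated here as a luminosity distance)

print(f"M ~ {absolute_magnitude(m_peak, d_ly):.2f}")  # prints about -22.1, close to the quoted -22.35
```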
SN 2016aps
[ "Chemistry", "Astronomy" ]
327
[ "Supernovae", "Astronomical events", "Constellations", "Draco (constellation)", "Explosions" ]
63,646,811
https://en.wikipedia.org/wiki/Vasily%20Omelianski
Vasily Leonidovich Omelianski (Vasilij Leonidovič Omeljanskij, Russian: Василий Леонидович Омелянский; 10 March 1867 – 21 April 1928) was a Russian microbiologist and author of the first original Russian textbook on microbiology. He was the only student of Sergei Winogradsky and succeeded him as head of the department of General Microbiology at the Institute of Experimental Medicine in Saint Petersburg. Early life and education Omelianski was the youngest son of a college teacher in Zhytomyr. In 1885 or 1886, Omelianski enrolled in the natural history division of the physico-mathematical faculty of the University of Saint Petersburg. During his studies he attended the lectures of D. I. Mendeleev and N. A. Menshutkin. Career After finishing his studies with distinction in 1889 or 1890, he worked in the chemical laboratory of Menshutkin for a further two years and published for the first time. In 1891, financial difficulties forced Omelianski to work as a laboratory chemist in a metallurgical factory in Southern Russia. However, two years later he became the assistant of S. N. Winogradsky, who hired him on the recommendation of Menshutkin, at the newly founded Imperial Institute of Experimental Medicine. Omelianski supported Winogradsky's work on nitrification. Later on he studied the fermentation of cellulose and did research on nitrogen fixation on his own. In 1909, he published the textbook "Principles of Microbiology" (Основы микробиологии), which was the first original Russian textbook on microbiology and remained a standard work at Soviet universities until the 1950s. Omelianski had developed this text from the lectures he had been giving at a women's college since 1906 or 1909. In 1922, he published his second textbook, "Practical Manual of Microbiology" (Практическое руководство по микробиологии), in which he spread in Russia the methodology of Winogradsky (using enrichment cultures) and of the so-called "Delft school of microbiology" (founded by M. Beijerinck). From 1912 until his death he led the department of General Microbiology at the Institute of Experimental Medicine, succeeding Winogradsky. As head of the department he edited the "Archive of Biological Sciences" (Архив биологических наук), the first biological journal published in Russian. In 1924, Omelianski became editor of the popular journal "Progress of Biological Chemistry" (Успехи биологической химии). The last textbook he was able to finish, in 1927, was "Short Course in General and Soil Microbiology" (Краткий курс общей и почвенной микробиологии). In 1916, Omelianski became a corresponding member of the Russian Academy of Sciences, and in 1917 he was awarded the degree of Doctor botanicus h. c. without examination. In 1923, he became a full member of the Russian Academy of Sciences. In 1926, he became affiliated with the Society of American Bacteriologists and the Lombardic Academical Society. Personal life Omelianski was married and had a daughter, Maria Vasilevna Stepanova (1901–1946), an ethnographer. During World War I, the Russian Revolution and the Russian Civil War, Omelianski was able to stay in Saint Petersburg, while Winogradsky (as a rich landowner) had to escape. He was possibly protected by his modest bourgeois background, his interest in the starving poor, his popular engagement through publishing Russian textbooks and journals and lecturing at a women's college, or by the Bolsheviks' favourable stance toward scientific progress. In the spring of 1927, Omelianski travelled to the Pasteur Institute in Paris to visit his mentor Winogradsky. There he suffered his first heart attack.
Omelianski had a second heart attack in December 1927 but recovered. During a vacation in Gagra (Abkhazia) he died on April 21, 1928. Omelianski was also a gifted chess player who entered competitions as a student, a reportedly gifted portraitist, and the author of several short stories, four of which have since been stored in the archive at the Russian Academy of Sciences in St. Petersburg. Impact on methanogenesis research Omelianski published only once in English, on "aroma-producing microorganisms", in the American Journal of Bacteriology in 1923. However, today his international reputation is connected to microbial methanogenesis in syntrophic co-cultures. This is based on his French publication of 1916 on "methane fermentation of ethanol". Following this research, the microbiologist Horace Barker isolated an ethanol-degrading microbe called Methanobacterium omelianskii. Barker had used the methodological approach of the "Delft school of microbiology" developed by Barker's mentors Albert Kluyver and Cornelis van Niel. In 1967, Methanobacterium omelianskii was shown to be a co-culture of the ethanol-oxidizing S organism and a methanogen, which uses hydrogen produced by its bacterial partner to reduce carbon dioxide to methane. Certainly, Omelianski was one of the founding fathers of methanogenesis research and the first scientist to investigate the methanogenic fermentation of cellulose and ethanol systematically. He even discovered hydrogen as a product of cellulose fermentation around 1900 but, of course, did not discover the concept of syntrophic electron transfer. Further reading Ackert Jr., L. T., The role of microbes in agriculture: Sergei Vinogradskii's discovery and investigation of chemosynthesis, 1880–1910. In: Journal of the History of Biology 39, pp. 373–406. Ackert Jr., L. T., The "cycle of life" in ecology: Sergei Vinogradskii's soil microbiology, 1885–1940. In: Journal of the History of Biology 40, pp. 109–145. Russian Academy of Sciences: Biographical note on the 150th birthday of Vasilij Leonidovič Omeljanskij Zavarzin, G. A., Winogradsky and modern microbiology. In: Microbiology 75(5), pp. 501–511 References 1928 deaths Biochemists from the Russian Empire Microbiologists from the Russian Empire Ecologists from the Russian Empire Academic staff of Saint Petersburg State University 1867 births Environmental microbiology Soil scientists from the Russian Empire Anaerobic digestion Soviet microbiologists Soviet soil scientists Chess players from the Russian Empire
Vasily Omelianski
[ "Chemistry", "Engineering", "Environmental_science" ]
1,479
[ "Water technology", "Environmental microbiology", "Anaerobic digestion", "Environmental engineering" ]
63,648,275
https://en.wikipedia.org/wiki/Maria%20Teohari
Maria Teohari (Giurgiu, Romania, 22 April 1885 - Bucharest, 1975) is credited as the first female astronomer of Romania. She lost part of her eyesight from viewing the Sun through telescopes without adequate eye protection. Biography Teohari was born in 1885 as the eldest of three daughters of the physician Christu Teohari and his wife Alexandrei. Teohari started her formal schooling in Giurgiu, but after her father's sudden death from an infection, she and her mother assumed responsibility for supporting the family. According to Emil Păunescu, deputy director of the Giurgiu County Museum, "Doctor Teohari died from a finger bite that occurred at the hospital in Giurgiu during an operation. The wound became infected, and Dr. Teohari passed away leaving behind a widow with three children. As it was then, women were not working. They were housewives, so you realize how hard it was for a woman to grow up alone with three girls." The family moved to Bucharest, where Maria Teohari continued her high school studies at the "Regina Elena" School (in Romanian, "Elena Doamna") and then at the Central School. The training she received there gave her substantial credentials in drawing, literature and languages, but her passions went in a completely different direction: astronomy. Astronomer On the initiative of Professor Nicolae Coculescu, founder of the Bucharest Astronomical Observatory, Teohari obtained a scholarship for student astronomers. Moving abroad for one year, she began her studies at the Faculty of Sciences with specializations at astronomical observatories in Paris and Nice (three months at the Paris Observatory and nine months at the Nice Observatory). There, she carried out observations of the Sun, small planets and asteroids. In 1914, at the start of World War I, Teohari returned to Romania to continue her studies and observations, conducting the first solar activity observations at the Astronomical Observatory of Bucharest (now known as the Astronomical Institute of the Romanian Academy), thus becoming the first female astronomer in Romania. According to Păunescu: "At that time there was no female astronomer in Romania, and those abroad were very few. Women with aptitudes for science were marginalized at the time, but through perseverance and value they managed to be accepted." She published several specialized papers on planets, sunspots, protrusions of Halley's comet and other celestial phenomena in the Observatory's yearbook and in the journal Nature, with the aim of making the field popular. However, her observations of the Sun and sunspots with inadequate protection led to impaired eyesight, which caused her to give up her hands-on work at the Observatory. Teacher For her next career, Teohari became an astronomy and mathematics teacher at the Princess Ileana high school (in Romanian, "Domnița Ileana") in Bucharest. To help students learn her new coursework, she published a series of textbooks focusing on two disciplines, mathematics and astronomy. While still a teacher, she kept in touch with the Astronomical Observatory and became a mentor to researchers there. "We can say without being mistaken that she was the unofficial professor of Romanian astronomers in the 20th century," said Păunescu. She was listed as a member of the Society of Science and Mathematics in Romania in 1929. Later years She was very familiar with German, English and French, the languages from which she had made translations since her childhood. She also enjoyed playing the piano. Teohari died in Bucharest soon after her 90th birthday.
References 1885 births 1975 deaths People from Giurgiu Romanian astronomers Romanian schoolteachers Women astronomers 20th-century Romanian women 20th-century astronomers
Maria Teohari
[ "Astronomy" ]
746
[ "Women astronomers", "Astronomers" ]
63,649,139
https://en.wikipedia.org/wiki/Sum%20of%20residues%20formula
In mathematics, the residue formula says that the sum of the residues of a meromorphic differential form on a smooth proper algebraic curve vanishes. Statement In this article, X denotes a proper smooth algebraic curve over a field k. A meromorphic (algebraic) differential form ω has, at each closed point x in X, a residue, which is denoted Res_x(ω). Since ω has poles only at finitely many points, the residue vanishes for all but finitely many points. The residue formula states that the sum of the residues over all closed points of X is zero: $\sum_{x \in X} \operatorname{Res}_x(\omega) = 0$. Proofs A geometric way of proving the theorem is by reducing it to the case where X is the projective line and proving it by explicit computations in this case. John Tate proved the theorem using a notion of traces for certain endomorphisms of infinite-dimensional vector spaces. The residue of a differential form can be expressed in terms of traces of endomorphisms on the fraction fields of the completed local rings, which leads to a conceptual proof of the formula. A more recent exposition along similar lines uses more explicitly the notion of Tate vector spaces. References Algebraic geometry Algebraic curves Differential forms
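For reference, the statement reconstructed above can be written out explicitly, together with the standard example on the projective line; this is a generic sketch of the usual formulation rather than a quotation from the sources discussed in the article.

```latex
% Sum-of-residues formula: X a smooth proper curve over a field k,
% \omega a meromorphic (rational) 1-form on X, the sum running over the closed points x of X.
\sum_{x \in X} \operatorname{Res}_x(\omega) = 0

% Basic example on the projective line X = \mathbb{P}^1_k with \omega = \frac{dz}{z}:
% \operatorname{Res}_0(\omega) = 1; substituting w = 1/z gives \omega = -\frac{dw}{w},
% so \operatorname{Res}_\infty(\omega) = -1. All other residues vanish, and the sum is 0.
```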
Sum of residues formula
[ "Mathematics", "Engineering" ]
228
[ "Fields of abstract algebra", "Tensors", "Differential forms", "Algebraic geometry" ]
63,649,291
https://en.wikipedia.org/wiki/Fuel%20bunker
Fuel bunkers, commonly simply known as bunkers, are containers for the storage of fuel on steam-powered boats or steam tank engines, or rooms for the storage of fuel for furnaces. The term "bunker" or "fuel bunker" is typically only used for storage areas for solid fuels, especially coal; the term "fuel tank" is typically used for liquid fuels (such as gasoline or petrol) or gaseous fuels (such as natural gas). History Usage Steam railway locomotives Steamships For example, on the Titanic the propulsion boilers were heated by burning coal. 6,611 tons of coal were carried in its official bunkers, with a further 1,092 tons carried in Hold 3. The furnaces required over 600 tons of coal a day to be shoveled into them by hand, requiring the services of 176 firemen working around the clock. Furnaces Fuel oil depots were built in reinforced concrete and heated with steam to maintain a minimum temperature of 140 °F, so that the oil could be pumped to heat exchangers in the boiler building. See also Coal bunker Bunkering Bunker fuel References Fuel containers Locomotive parts Steamships Furnaces Coal
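A back-of-the-envelope check of the Titanic figures quoted above. This is a sketch only: it assumes each fireman worked a single eight-hour stoking shift per day, which the text does not state, and it treats the tons loosely as metric for the kilogram conversion.

```python
coal_per_day_tons = 600   # from the text: "over 600 tons of coal a day"
firemen = 176             # from the text: firemen working around the clock
shift_hours = 8           # assumed length of each fireman's daily stoking shift

tons_per_fireman_per_day = coal_per_day_tons / firemen
kg_per_fireman_per_hour = tons_per_fireman_per_day * 1000 / shift_hours

print(f"about {tons_per_fireman_per_day:.1f} tons shovelled per fireman per day")   # ~3.4 tons
print(f"about {kg_per_fireman_per_hour:.0f} kg per fireman per hour on shift")      # ~430 kg
```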
Fuel bunker
[ "Engineering" ]
232
[ "Furnaces", "Combustion engineering" ]
63,650,360
https://en.wikipedia.org/wiki/Institute%20of%20Soil%20Science%20and%20Agrochemistry
Institute of Soil Science and Agrochemistry is a research institute in Akademgorodok, Novosibirsk, Russia. It was founded in 1968. History The institute was organized in 1968. It was created to study the soils of Siberia and the Russian Far East. Scientific activity The institute's research includes the creation of new methods of soil and plant diagnostics, the development of soil reclamation technologies, and related topics. Laboratories Laboratory of Agrochemistry Laboratory of Soil Biogeochemistry Laboratory of Biogeocenology Laboratory of Geography and Genesis of Soils Laboratory of Soil Physical Processes Laboratory of Soil Reclamation References External links Institute of Soil Science and Agrochemistry of the Siberian Branch of the RAS. SB RAS Organizations and Employees. Agrochemistry Biochemistry research institutes Soil and crop science organizations Research institutes established in 1968 Research institutes in the Soviet Union Agriculture in the Soviet Union 1968 establishments in the Soviet Union
Institute of Soil Science and Agrochemistry
[ "Chemistry" ]
180
[ "Biochemistry research institutes", "Biochemistry organizations" ]
63,651,657
https://en.wikipedia.org/wiki/Mechanoreceptors%20%28in%20plants%29
A mechanoreceptor is a sensory organ or cell that responds to mechanical stimulation such as touch, pressure, vibration, and sound from both the internal and external environment. Mechanoreceptors are well documented in animals and are integrated into the nervous system as sensory neurons. While plants do not have nerves or a nervous system like animals, they also contain mechanoreceptors that perform a similar function. Mechanoreceptors detect mechanical stimuli originating from within the plant (intrinsic) and from the surrounding environment (extrinsic). The ability to sense vibrations, touch, or other disturbance is an adaptive response to herbivory and attack, allowing the plant to defend itself appropriately against harm. Mechanoreceptors can be organized into three levels: molecular, cellular, and organ-level. Mechanism of sensation Signal There is a growing body of knowledge about how mechanoreceptors in plant cells receive information about mechanical stimulation, but there are many gaps in the current understanding. While a complete model cannot yet be formed, much is known about what happens at the plasma membrane. The plasma membrane is full of membrane proteins and ion channels. One type of ion channel is the mechanosensitive (MS) ion channel. MS channels differ from other membrane proteins in that their primary gating stimulus is force, such that they open conduits for ions to pass through the membrane in response to mechanical stimuli. This system allows physical force to create an ion flux, which then results in signal integration and response (as detailed below). MS channels are hypothesized to be the working mechanism in the perception of gravity, vibration, touch, hyper-osmotic and hypo-osmotic stress, pathogenic invasion, and interaction with commensal microbes. MS channels have been discovered across a diverse array of genera as well as in different plant organs, like leaves and stems, and localize to diverse cellular membranes. Not only can mechanoreceptors be present within the plasma membrane of cells, but they can also exist as whole cells whose primary purpose is to detect mechanical stimuli. A well-known example is the trigger hairs on the Venus flytrap. When repeatedly touched within a certain time span, the plant will snap shut, entrapping and digesting its prey. Integration and response Once the plant perceives a mechanical stimulus via mechanoreceptor cells or mechanoreceptor proteins within the plasma membrane of a cell, the resulting ion flux is integrated through signaling pathways, resulting in a response. The signaling cascade (integration) and response depend on the type of stimulus and the particular species. For instance, the response can manifest as a change in turgor pressure resulting in movement, secretion of defense chemicals, or the closing of stomata. Examples Venus flytrap Dionaea muscipula (Venus flytrap) is known to rapidly close its lobes when touched to capture and digest its prey. The unique carnivorous plant has extremely sensitive mechanosensory hairs located on the surface of its trap. When one hair is touched by its prey, anion channels will open and depolarize the plasma membrane, thus firing an action potential (AP) through the phloem. The AP results in the accumulation of Ca2+ ions. If the hairs are then left alone, the Ca2+ will dissipate. If another hair is stimulated within 30 seconds of the first hair, however, another AP will fire and the [Ca2+] will reach a threshold, triggering changes in cell turgor in the petiole.
This will cause the trap to swiftly snap shut, trapping the prey inside its lobes. As the prey moves around within the trap, it bumps the mechanosensory hairs more, thus inducing repetitive firing of APs. Just three APs (including the initial two) initiate the production of jasmonic acid hormone signaling pathways, creating an airtight seal, beginning the secretion of digestive enzymes and up-regulating the production of transporters for nutrient uptake. Arabidopsis thaliana When caterpillars chew on leaves, they create a very specific vibrational pattern. Arabidopsis thaliana plants have adapted to elicit chemical defenses when they detect these mechanical vibration patterns, to protect themselves from continued herbivory. While the signal perception, integration, and response for this system have not yet been thoroughly researched, the general guidelines for mechanosensory stimulation are thought to hold true. Mechanoreception is thought to start with the triggering of mechanosensors in the cell wall and/or plasma membrane of the leaf cells, causing fluxes of Ca2+, reactive oxygen species (ROS), and H+. These fluxes initiate signaling pathways which involve many plant hormones and the rapid expression of genes that respond early to many plant stresses. These genes up-regulate the production of chemical defense molecules like glucosinolates, polyphenol anthocyanins and a suite of volatile compounds. The plant not only secretes these chemicals in the leaf that is being attacked, but also in other leaves on the plant. It is hypothesized that while there are other signals that inform the plant of herbivory, it is the mechanical vibrations that elicit the whole-plant response. References Plant physiology
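The trigger-hair logic described above (a lone action potential decays after about 30 seconds, a second one within that window closes the trap, and a third begins jasmonate signalling and digestion) can be summarised as a small counter with a time window. The sketch below is a toy model of that counting behaviour, not a physiological simulation; the thresholds and the 30-second window are taken from the description above.

```python
class FlytrapTrigger:
    """Toy counter for Venus flytrap action potentials (APs) fired by trigger-hair touches."""

    CLOSE_THRESHOLD = 2    # a second AP within the window snaps the trap shut
    DIGEST_THRESHOLD = 3   # a third AP starts jasmonate signalling and digestion (per the text)
    WINDOW_S = 30.0        # seconds before the Ca2+ signal from a lone AP dissipates

    def __init__(self) -> None:
        self.last_ap_time = None
        self.ap_count = 0
        self.closed = False
        self.digesting = False

    def touch_hair(self, t: float) -> None:
        """Register a touch of a trigger hair at time t (seconds); each touch fires one AP."""
        if (self.last_ap_time is not None and not self.closed
                and t - self.last_ap_time > self.WINDOW_S):
            self.ap_count = 0  # the Ca2+ signal has dissipated, so the count starts over
        self.ap_count += 1
        self.last_ap_time = t
        self.closed = self.closed or self.ap_count >= self.CLOSE_THRESHOLD
        self.digesting = self.digesting or self.ap_count >= self.DIGEST_THRESHOLD


trap = FlytrapTrigger()
trap.touch_hair(0.0)    # first touch: one AP, trap stays open
trap.touch_hair(12.0)   # second touch within 30 s: trap closes
trap.touch_hair(20.0)   # struggling prey fires a third AP: digestion signalling begins
print(trap.closed, trap.digesting)  # True True
```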
Mechanoreceptors (in plants)
[ "Biology" ]
1,105
[ "Plant physiology", "Plants" ]
63,652,843
https://en.wikipedia.org/wiki/Palestinabuch
The Palestinabuch (Book of Palestine), also Palestina Buch, Palästinabuch or Palästina Buch, is an allegedly lost manuscript by the German-Dutch historian and antisemite Herman Wirth (1885–1981), founder of the German Ahnenerbe, which its adherents claim would have changed the world if it had not disappeared. The myth plays a part in antisemitic theories maintaining that the Ashkenazi Jews descend largely or entirely from the Khazars or from Central Asia. Herman Wirth Wirth, who was one of the main spokesmen of pre-war Ariosophy, first referred to the Palestinabuch in his commentary on the Ura Linda Chronicle (1933), a 19th-century literary forgery that he thought to be genuine. The Chronicle tells the story of a lost civilization in the Polar Region or the Northern Atlantic, from which the Aryan, Hyperborean or Nordic races descended. The Frisian people, as the direct offspring of the first Nordic settlers, had been able to preserve the cultural traditions of their forebears, which he considered a key to the understanding of primeval monotheism. According to Wirth, Christ descended from a lost tribe of the original Aryans, who had left their traces in the Near East in the form of Megalithic monuments. Christianity, however, was corrupted by oriental despotism and local superstition, and came back to Northern Europe in distorted form. As Judaism was largely responsible for this distortion, Christianity had to be cleansed of Judaic elements and revert to its original fertility cults, in which the Mother Goddess played a central part. The Palestinabuch may have been conceived as a counterpart to the Ura Linda Chronicle. It was officially titled The Riddle of Palestine's Megalithic Graves: From JAU to Jesus. By 1969 the proposed title had changed to Between the North Sea and the Sea of Genezareth, implicitly referring to an influential book by the 19th-century reactionary Julius Langbehn, Rembrandt as Educator (1890). The subtitle shifted between The Savior Myth of the Megalithic Religion's Crucified God and The Oriental-Occidental Community of the Megalithic Era. According to Miguel Serrano, Wirth considered the book his main opus, on which he had worked for many years. The two men probably first met in the late 1960s, when Serrano served as a diplomat in Vienna. Wirth's biographers do not mention the topic, though it is conceivable that the manuscripts were hidden or destroyed by his collaborators because of their antisemitic tendencies, or that, for the same reason, references to the supposed text have been deliberately ignored by the administrators of his estate. Even Wirth's loyal publisher, the neo-Nazi Wilhelm Landig, left several manuscripts Wirth wrote for him unprinted. Miguel Serrano Herman Wirth died in February 1981. Some time before, on 3 September 1979, he was interviewed by the Chilean former diplomat and Nazi propagandist Miguel Serrano (1917–2009), who asked him about the history of the Jewish people. The story is told in Serrano's Adolf Hitler: The Ultimate Avatar (1984): It is difficult to know the true origin of this people. In the visit I made to professor Herman Wirth, founder of the Ahnenerbe, high specialized organism of investigation of the SS, and one of the most extraordinary students of Nordic pre-antiquity, I asked him about the Jews. He gave me a strange unexpected answer: "Nomadic people, from slaves, who lived on the periphery of the great civilization of the Gobi..." I deeply regret not having asked more about this. [...]
When I knew him he was 94 years old and remained agile and alert. Even then, not long before dying, the manuscripts of his work were stolen from him, it is believed by his own collaborators. Marxist infiltrators, or perhaps even Catholics, caused this most valuable work to disappear. The world will never know of it. It is a tragedy as great as the destruction of the Library of Alexandria. At least for me. The identical hand will have committed the same crime to cover up evidence. In subsequent publications Serrano blamed "the Great Conspiracy" for the loss. Wirth's book would have "definitively clarified the true history of the Jews", but "the manuscript may now be found in some synagogue or in the subterranean vaults beneath the Vatican". According to Serrano, Wirth told him that the Frisian Sea-Kings, survivors of the catastrophe of Polar Hyperborea, first met the Jews in Northern Africa, where they were known as Golen (Gauls) or Golem, but the Frisians nicknamed them Triuweden (druids), meaning 'those who have no truth'. Moreover, the Jews then emigrated as parasites on the Hyperborean Aryans after the destruction of the post-Hyperborean civilization of the Gobi Desert (Shambhala). The story plays an important part in Serrano's Esoteric Hitlerism, in which the world is heading towards an ultimate combat between the Aryan forces of Light and the forces of Darkness, embodied in the false ideologies of Judeo-Christianity. Aleksandr Dugin During the early 1990s the Russian political theorist and philosopher Aleksandr Dugin (born 1962) spent two years studying Wirth's books. He devoted a whole volume, Hyperborean Theory: The Experience of Ariosophic Research (1993), to Wirth's geopolitical and religio-historical views. Apparently, this is "one of the most extensive summaries and treatments of Wirth in any language". Dugin probably first referred to the Palestinabuch in one of his controversial 1993 TV shows with the journalist Yury Vorobyevsky [Юрий Воробьевский], in which he claimed to have had access to the secret KGB archives on the Ahnenerbe, captured by the Red Army in 1945. He subsequently elaborated on the theme, suggesting that Wirth's superior knowledge was based on the Ahnenerbe's "vast archaeological material obtained during excavations in Palestine", in his judgement the most experienced organization of the time. As soon as Wirth would apply his symbolic historical methods, [...] there is no line, not a word in the Old Testament that would not succumb to such Hyperborean deconstruction. It is not a question of criticizing the text [...]. What Wirth did was resacralize, reveal the original, Hyperborean gnosis - the true foundation of the Old Testament tradition, free it from biased interpretative models. [...] Unfortunately, now we can only guess about its contents. [...] Already in the 70s, when Wirth almost finished writing it, the only completed edition disappeared without a trace. In the scientist's absence, unknown people entered the house, turned everything upside down, but only took the Palestina Buch. Wirth turned to his students (there were two or three unfinished copies), but the mysterious strangers had also visited them. Dugin, who does not cite any sources, claims that the manuscript comprised several thousand pages. According to Dugin's former associate Yury Vorobyevsky, the manuscript had already been stolen in the 1950s, probably by the Israeli Secret Service.
Scholarly literature on the Ahnenerbe, on the other hand, does not present any evidence that the organisation ever conducted archaeological excavations in Palestine. The only relevant (but non-archaeological) expedition, in 1938, went to Lebanon, Syria and Iraq. Since Dugin's and Vorobyevsky's first publications on the topic, the idea that a Jewish-American conspiracy was responsible for the loss of Wirth's world-explaining encyclopaedia has gained a foothold in Russian nationalist circles. It is constitutive of an aggressive nationalist approach in which the values of the Eurasian civilisation are contrasted with the Jewish-imbued worldview of the Anglo-Saxon maritime world. Recently, the idea of a lost Palestinabuch has also taken hold among right-wing radicals in the United States. References Antisemitic forgeries Antisemitism in Russia Historical negationism Hoaxes in science Occultism in Nazism Nazism Political forgery Pseudoarchaeology Pseudohistory Religious hoaxes Scientific racism
Palestinabuch
[ "Biology" ]
1,779
[ "Biology theories", "Obsolete biology theories", "Scientific racism" ]
63,653,068
https://en.wikipedia.org/wiki/HKUST-1
HKUST-1 (HKUST ⇒ Hong Kong University of Science and Technology), which is also called MOF-199, is a material in the class of metal-organic frameworks (MOFs). Metal-organic frameworks are crystalline materials in which metals are linked by ligands (so-called linker molecules) to form repeating coordination motifs extending in three dimensions. The HKUST-1 framework is built up of dimeric metal units, which are connected by benzene-1,3,5-tricarboxylate linker molecules. The paddlewheel unit is the structural motif commonly used to describe the coordination environment of the metal centers and is also called the secondary building unit (SBU) of the HKUST-1 structure. The paddlewheel is built up of four benzene-1,3,5-tricarboxylate linker molecules, which bridge two metal centers. One water molecule is coordinated to each of the two metal centers at the axial position of the paddlewheel unit in the hydrated state, which is usually found if the material is handled in air. After an activation process (heating, vacuum), these water molecules can be removed (dehydrated state) and the coordination site at the metal atoms is left unoccupied. This unoccupied coordination site is called a coordinatively unsaturated site (CUS) and can be accessed by other molecules. Structural analogs Monometallic HKUST-1 analogs Cu2+ was used as the metal center in the first synthesized HKUST-1 material, but the HKUST-1 structure has also been obtained with other metals. The oxidation state of most metals used is +2, which results in a neutral overall framework. In the case of trivalent metals (oxidation state +3), the overall framework is positively charged and requires anions to compensate the charge and guarantee charge neutrality. Mixed-metal HKUST-1 analogs In addition to monometallic HKUST-1 analogs, several mixed-metal HKUST-1 materials have been synthesized, in which two metals are incorporated into the framework structure at crystallographically equivalent positions. The incorporation of two metals can be achieved by using both metals for the synthesis (direct synthesis) or by post-synthetic metal exchange. For post-synthetic metal exchange, a monometallic HKUST-1 material is synthesized in the first step. Subsequently, this monometallic HKUST-1 is suspended in a solution containing the second metal, which results in an exchange of metal centers in the framework, leading to a mixed-metal HKUST-1. Theoretically calculated HKUST-1 analogs Several HKUST-1 analogs have already been synthesized, but several research groups have also investigated the properties of the HKUST-1 structure by means of theoretical calculations. For this purpose, additional metal centers that have not been used in synthesis (e.g. Sc, V, Ti, W, Cd) were incorporated into the framework at the theoretical level. Theoretical studies on mixed-metal HKUST-1 containing Cu in combination with various other metals (e.g. W, Re, Os, Ir, Pt, Au) have also been reported, of which several metal combinations have not been synthesized. References Metal-organic frameworks Copper(II) compounds
HKUST-1
[ "Chemistry", "Materials_science" ]
683
[ "Porous polymers", "Metal-organic frameworks" ]
63,653,312
https://en.wikipedia.org/wiki/Akita%20Port%20Tower%20Selion
The Akita Port Tower Selion is one of the landmarks of the city of Akita, Japan. The sightseeing tower, clad with 6,272 panes of tempered glass, was completed in 1994. It is located in the Tsuchizaki District, Akita, Akita Prefecture, Japan. The steel tower is the tallest structure in the three northern Tohoku prefectures, with its observation deck at 100 metres (328 ft) and its spire at 143.6 metres (471 ft). The viewing platform provides a 360-degree panorama of the city, and the mountains of the Oga Peninsula, Taiheizan, and Mt. Chokai are visible. In the past, Cable Networks Akita received the TV-U Yamagata broadcast from Takadateyama, Tsuruoka, at this landmark. Events Cue sports at the 2001 World Games Gallery See also Port of Akita List of tallest towers References External links Official Homepage 2001 World Games Towers completed in 1994 Buildings and structures in Akita (city) Glass architecture High-tech architecture Modernist architecture in Japan Observation towers in Japan Tourist attractions in Akita Prefecture 1994 establishments in Japan
Akita Port Tower Selion
[ "Materials_science", "Engineering" ]
230
[ "Glass architecture", "Glass engineering and science" ]
63,654,792
https://en.wikipedia.org/wiki/Medical%20Image%20Analysis%20%28journal%29
Medical Image Analysis (MedIA) is a peer-reviewed academic journal which focuses on medical and biological image analysis. The journal publishes papers which contribute to the basic science of analyzing and processing biomedical images acquired through means such as magnetic resonance imaging, ultrasound, computed tomography, nuclear medicine, x-ray, optical and confocal microscopy, among others. Common topics covered in the journal include feature extraction, image segmentation, image registration, and other image processing methods with applications to diagnosis, prognosis, and computer-assisted interventions. Alongside The International Journal of Computer Assisted Radiology and Surgery, Medical Image Analysis is an official publication of The Medical Image Computing and Computer Assisted Interventions Society and is published by Elsevier. See also Medical imaging Medical image computing Computer-assisted interventions The MICCAI Society References External links Journal homepage Elsevier academic journals Computer science journals Biomedical informatics journals Surgery journals English-language journals
Medical Image Analysis (journal)
[ "Biology" ]
184
[ "Bioinformatics", "Biomedical informatics journals" ]
63,655,804
https://en.wikipedia.org/wiki/Driver-controlled%20operation
Driver-controlled operation is the operation of a train in which the driver carries out all the essential roles needed to operate the train itself. It differs from driver-only operation (DOO, also called one-person operation) in that other members of staff also work on board—for example, revenue collectors. Currently, only around 30% of Britain's train journeys are either DCO or DOO, meaning the remainder require a guard to operate; thus, if no guard is available, the service must be cancelled. With DCO, only the unavailability of a driver would lead to the cancellation of a train. Railways using DCO A deal agreed between the train operating company Greater Anglia and the RMT union meant that all of its intercity and regional services would change to DCO. However, unlike other DCO arrangements in place in the UK, a guard could still operate the doors in exceptional circumstances and must still be present in order for the service to run. The only exceptions are on intercity services between Liverpool Street and Ipswich, and regional services between Ely and Stansted Airport, where trains were already cleared to run without guards. Arriva Rail North had also hoped to agree a similar deal; however, this was not achieved, and on 1 March 2020 the Department for Transport took over operations as Northern Trains, which is also looking to implement DCO, so it could be introduced in the future. Other examples of DCO within the UK include Abellio ScotRail and the longer-distance routes and services of Southern and Southeastern. Merseyrail have implemented DCO on board their British Rail Class 777 fleet, requiring a signal from the Train Manager before the driver begins the door-closing process, following a deal reached with the RMT after disputes over Merseyrail's earlier plan to run the trains with DOO. South Western Railway is planning to implement DCO on London suburban services when its new fleet of British Rail Class 701 trains arrives. The RMT has opposed these changes and has held strikes on many occasions, including 27 days of action in December 2019. In April 2021, a deal was agreed between South Western Railway and the RMT. While South Western Railway claimed to have implemented a DCO method for its inner suburban routes, unlike on Southern, the guard would still be an essential crew member and would be required to be on board. References Transport operations Rail transport operations
Driver-controlled operation
[ "Physics" ]
474
[ "Physical systems", "Transport", "Transport operations" ]
72,343,391
https://en.wikipedia.org/wiki/TH-12
The TH-12 (, lit. Sky Fire 12) is an oxidizer-rich gas-generator cycle rocket engine burning LOX and kerosene under development by Space Pioneer. The TH-12 utilizes 3D printing and has the highest target thrust among all commercial rocket engines in China. The engine features deep throttling for reusability, re-ignition, thrust vectoring, and multi-mode starters. History Space Pioneer proposed the TH-12 engine for its Tianlong-3 launch vehicle. Engine development was underway in December 2020, with the first gas generator test performed in September 2022. In November 2022, a full-stage developmental TH-12 engine successfully completed its first static fire test. On July 24, 2023, the TH-12 engine, in the flight configuration of the first Tianlong-3 rocket, successfully completed a full-duration hot fire test at rated conditions for a single burn duration of 100 seconds, accumulating a total test duration of 200 seconds. This test demonstrated that the engine met the flight requirements for the Tianlong-3 rocket. In early January 2024, the TH-12 engine completed a calibration hot fire test for the first flight batch, subjecting the engine to a 50-second process verification test at rated conditions, demonstrating rapid startup, smooth operation, and normal shutdown. Later that month, the TH-12 engine underwent a spot check hot fire test for the first flight batch, fully simulating the flight conditions of the inaugural Tianlong-3 launch. The test involved 6 consecutive ignitions of the engine without removal from the test stand, accumulating a total test duration exceeding 1,000 seconds, with the single engine operating time surpassing the planned flight duration by a factor of 6. References Rocket engines of China Rocket engines using kerosene propellant
TH-12
[ "Astronomy" ]
373
[ "Rocketry stubs", "Astronomy stubs" ]
72,343,604
https://en.wikipedia.org/wiki/List%20of%20lichenicolous%20fungi%20of%20Iceland
This list of lichenicolous fungi of Iceland is based on a compiled checklist from 2009, with the taxonomy of the fungi revised in 2022 using the Global Biodiversity Information Facility online database. Abrothallus parmeliarum Arthonia epiphyscia Arthonia fuscopurpurea Arthonia gelidae Arthonia intexta Arthonia stereocaulina Arthonia varians Arthophacopsis parmeliarum Bachmanniomyces punctum (listed as Phaeopyxis punctum) Bachmanniomyces uncialicola Buellia adjuncta Carbonea supersparsa Carbonea vitellinaria Cecidonia umbonella Cecidonia xenophana Cercidospora epipolytropa Cercidospora macrospora Cercidospora punctillata Cercidospora stereocaulorum Cercidospora thamnoliicola Cercidospora trypetheliza Cercidospora verrucosaria Clypeococcum placopsiphilum Collemopsidium cephalodiorum (listed as Cercidispora cephalodiorum) Corticifraga peltigerae Didymellopsis pulposi Endococcus fusiger Endococcus propinquus Endococcus rugulosus (also listed as Endococcus perpusillus, which is a synonym of E. rugulosus) Epibryon conductrix Geltingia associata Heterocephalacria bachmannii (listed as Syzygospora bachmannii) Homostegia piggotii Intralichen christiansenii Lasiosphaeriopsis christiansenii Lasiosphaeriopsis stereocaulicola Lichenochora lepidiotae (listed as Sphaerulina lepidiotae) Lichenodiplis lecanorae Lichenopeltella cetrariicola Lichenopeltella cladoniarum Lichenosticta alcicornaria Merismatium nigritellum Muellerella erratica (listed as Muellerella pygmaea var. athallina) Muellerella pygmaea Muellerella pygmaea var. pygmaea Muellerella ventosicola (listed as Muellerella pygmaea var. ventosicola) Niesslia peltigericola (listed as Raciborskiomyces peltigericola) Opegrapha pulvinata (synonym of O. pulvinata) Opegrapha stereocaulicola Phaeocalicium populneum Polycoccum amygdalariae Polycoccum deformans Polycoccum pulvinatum Polycoccum trypethelioides Polycoccum vermicularium Pronectria erythrinella Punctelia oxyspora (listed as Phacopsis oxyspora) Pronectria robergei Pronectria solorinae Protothelenella croceae Pseudopyrenidium tartaricola (listed as Weddellomyces tartaricola) Pyrenidium actinellum Rhagadostoma brevisporum Rhagadostoma lichenicola Roselliniopsis gelidaria (listed as Polycoccum gelidarium) Sclerococcum amygdalariae (listed as Dactylospora amygdalariae) Sclerococcum athallinum (listed as Dactylospora athallina) Sclerococcum attendendum (listed as Dactylospora attendenda) Sclerococcum deminuta (listed as Dactylospora deminuta) Sclerococcum frigidum (listed as Dactylospora frigida) Sclerococcum gelidarium Sclerococcum glaucomarioides (listed as Dactylospora glaucomarioides) Sclerococcum parasiticum (listed as Dactylospora parasitica) Sclerococcum parellarium (listed as Dactylospora parellaria) Sclerococcum purpurescens (listed as Dactylospora purpurascens) Sclerococcum sphaerale Scutula krempelhuberi Scutula stereocaulorum Scutula tuberculosa Sphaerellothecium araneosum Sphaeropezia santessonii (listed as Odontotrema santessonii) Stigmidium allogenum (listed as Stigmidium psorae) Stigmidium marinum Stigmidium peltideae Stigmidium stygnospilum Tetramelas pulverulentus Tetramelas phaeophysciae Thamnogalla crombiei Xenonectriella ornamentata (listed as Pronectria ornamentata) Zwackhiomacromyces hyalosporus (listed as Pyrenidium hyalosporum) Zwackhiomyces dispersus (listed as Stigmidium conspurcans) References Lichenology Lichenicolous fungi Lists of fungi
List of lichenicolous fungi of Iceland
[ "Biology" ]
1,127
[ "Lichenology", "Fungi", "Lists of fungi" ]
72,343,739
https://en.wikipedia.org/wiki/List%20of%20flower%20bulbs
This list of flower bulbs covers flowering plants grown from ornamental bulbs. Most flower bulbs produce perennial flowers, and in cold zones the bulbs are left in the ground year-round. Bulb planting Flowering plant bulbs are planted beneath the surface of the earth. The bulbs need some exposure to cold temperatures for 12 to 14 weeks in order to bloom. Flower bulbs are generally planted in the fall in colder climates. The bulbs go dormant in the winter, but they continue to absorb water and nutrients from the soil and they develop roots. Most bulbs produce perennial flowers. Occasionally certain bulbs become crowded in the ground and must be removed and separated. These include amaryllis (Hippeastrum spp.) and cyclamen (Cyclamen persicum). Warm weather Some flower bulbs do well in hot climates: lilies, caladiums, dahlias, gladioli, and narcissus (daffodils). To grow cold-weather flower bulbs like tulips and crocuses in hot climates, gardeners must dig up the bulbs and store them in the cold for 3-4 months before replanting. A Allium siculum Agapanthus (Blue White and Dwarf) Albuca nelsonii Allium tuberosum Alocasia (Elephants Ears) Alstroemeria (Peruvian Lily) Amarcrinum (Amaryllis x Crinum) Amaryllis belladonna Amorphophallus Anemone (Wood Anemone) Anomatheca Anthericum (St Bernards Lily) Arisaema Arum Asarum B Babiana (Baboon Flower) Begonia Bletilla (Chinese Ground Orchid) Boophone Brunsvigia Bulbinella C Caladium Calochortus Cardiocrinum giganteum (Giant Himalayan Lily) Chlidanthus Scilla luciliae Colocasia (Elephants Ears) Convallaria (Lily of the Valley) Crinum Crocosmia Crocus Cyclamen Curtonus D Dahlia Tubers Daffodils Disporopsis Dracunculus vulgaris Dodecatheon (Shooting Stars) E Eremurus (Foxtail Lily) Eucomis (Pineapple Lily) F Ferraria Freesia Fritillaria G Galanthus (Snowdrops) Galtonia (summer hyacinth) Gladiolus Gloriosa (Glory lily) Gloxinia H Habenaria (Egret orchid or Bog Orchid) Hedychium (Ginger Lily) Hesperantha (Evening Flower) Hippeastrum Hyacinth Hymenocallis (Spider Lily) I Incarvillea (Garden Gloxinia) Iris Ixia J Jonquils L Leucojum (Snowflakes) Liatris (Blazing Star or Gay Feather) Lilium (Asiatic) Littonia Lycoris M Mirabilis (Four o'clock flower) Muscari N Narcissus Nerine O Ornithogalum Oxalis P Pasithea Pleione (Rockery Orchid) Polianthes (Tuberosa) Puschkinia R Ranunculus Rhodohypoxis S Sandersonia Sauromatum Scadoxus Schizostylis (Kaffir Lilies) Scilla Sparaxis (Harlequin Flower) Sprekelia (Jacobean Lily) Sternbergia (Winter Daffodil) T Tacca (Bat Plant) Tigridia Trillium Triteleia Tropaeolum Tulbaghia Tulip U Uvularia Z Zantedeschia Zephyranthes Other In addition to flowers, some vegetables grow from bulbs; they include garlic, onions and shallots. Some other plant structures which bear a similarity to bulbs include corms, tubers, tuberous roots and rhizomes. See also Ornamental bulbous plant References External links New York Botanical Garden: "Bulb Care and Selection" Flowers Plant morphology Horticulture Lists of plants
List of flower bulbs
[ "Biology" ]
842
[ "Lists of biota", "Lists of plants", "Plant morphology", "Plants" ]
72,344,061
https://en.wikipedia.org/wiki/Jorge%20Medina
Jorge Medina Barra (24 April 1968 – 23 November 2022) was a Bolivian civil rights activist and politician who served as a member of the Chamber of Deputies from La Paz, representing its special indigenous circumscription from 2010 to 2015. Raised in the Afro-Bolivian community of the tropical Yungas region, Medina became active in the Afro civil rights movement after moving to the city of La Paz. He was a founding member of the Afro-Bolivian Saya Cultural Movement and co-founded the Afro-Bolivian Center for Integral and Community Development, two organizations dedicated to promoting public and state recognition of Afro cultural identity. Having succeeded in securing the inclusion of Afro-Bolivians in the 2009 Constitution, Medina was later elected to represent La Paz's minority indigenous peoples in the Chamber of Deputies, becoming the first Afro-Bolivian to serve in either chamber of the Bolivian legislature. In parliament, Medina spearheaded Bolivia's flagship Law Against Racism and promoted other pro-Afro pieces of legislation. He was not nominated for reelection. Early life and career Jorge Medina was born on 24 April 1968 to Paulino Medina and Sergia Barra, an Afro-Bolivian family from the rural community of Chijchipa in the La Paz Department's agricultural Nor Yungas Province. Medina completed his primary schooling in the nearby town of Tocaña before moving with his parents to Caranavi in Alto Beni, where he attended the city's Martín Cárdenas School. After graduating in 1988, Medina moved to the city of La Paz to pursue a college education; he studied business administration at the Higher University of San Andrés and took courses in systems engineering at the University of Aquinas. In the ensuing years, Medina worked a number of odd jobs, spending six years as a chauffeur for the Golden Eagle Mining Company before being employed as a mechanic at a local workshop, and later as a laborer for a nearby paper company. He also spent short stints as an employee at YPFB, the state-owned petroleum enterprise, and the Ministry of Labor. Activism and organizational leadership Medina's entry into the Afro-Bolivian civil rights movement was precipitated by his early experiences residing in La Paz, a city "devoid of Afro-Bolivians... [where] it was not uncommon for other Bolivians to be oblivious to the existence of black people". Recognition of the Afro-Bolivian population was niche, limited in academia to Western scholars studying the African diaspora. Their presence in public often prompted racial discrimination, including physical harassment, due to the superstitious belief that pinching a black person would bring good luck. "It was 'lucky negro'; they surrounded us... and fought among themselves over who saw us first", Medina recalled. Starting in the late 1980s, Medina became active in promoting the saya, a style of Afro-Bolivian folk songs mixed with drums, which the Afro movement had begun using to generate cultural visibility. During this time, Medina also distinguished himself as a popular saya composer and performer in his own right, writing the songs "Flor de Alelí", "Ser Líder de un Grupo", and "Guarachera de Cuba", among others. Along with other activists, Medina founded the Afro-Bolivian Saya Cultural Movement (MOCUSABOL) in 1988, which became one of the country's leading Afro-Bolivian civil rights and awareness organizations. He served as vice president and later president of the body for five years between 1999 and 2004.
Despite MOCUSABOL's successes in spreading the saya's popularity, Medina soon grew frustrated with the growing perception that Afro-Bolivians were just "'the negros who dance.' That made me angry because we are not only good at dancing, we can do other things", he stated. In 2006, together with Marfa Inofuentes, Medina founded the Afro-Bolivian Center for Integral and Community Development (CADIC), of which he served as executive director. The organization actively worked to advance Afro-Bolivian civil and political rights, taking a leading role in attaining state recognition of the Afro community during the 2006–2007 Constituent Assembly, which was then redrafting the Bolivian constitution. By the end of the process, CADIC was successful in securing the same minority rights for Afro-Bolivians as those granted to the country's indigenous peoples. Chamber of Deputies Election With minority groups increasingly encouraged to participate in politics, Medina was nominated to contest a seat in the Chamber of Deputies on behalf of the Movement for Socialism. Given the offer, Medina recalled stating, "Barack Obama is president of the United States; why should an Afro not be able to be in parliament here in Bolivia". He ran to represent the La Paz Department's newly created special rural native indigenous circumscription (an innovation of the 2009 Constitution), a district with constituents comprising the department's Afro-Bolivian, Araona, Kallawaya, Leco, Mosetén, and Tacana peoples. He received one of the highest vote shares of the entire election cycle, winning nearly ninety-two percent of the popular vote, and became the first Afro-Bolivian in history to serve in either chamber of the Legislative Assembly. Tenure As a parliamentarian, Medina was instrumental in drafting what became the Law Against Racism and All Forms of Discrimination. Promulgated by President Evo Morales in October 2010, the new legislation imposed varying penalties for perpetrators of racism and discrimination, identifying Afro-Bolivians as a particularly vulnerable ethnic minority group. In the years following its enactment, the law faced numerous challenges and shortfalls, including difficulty of enforcement and lack of knowledge among both the general public and the judiciary regarding what exactly constitutes a prosecutable offense. According to analyst Henry Stobart, "newspaper reports from 2014 characterized the law as 'not worth the paper it was written on', arguing that there had been no prosecutions, despite the hundreds of complaints received". On the other hand, Professor Sara Busdiecker pointed out that "despite its flaws and limitations to prevent racism outright, [the law] has given people pause to think before acting and speaking in some circumstances", producing a "social 'hesitation' of sorts" because "even if [offenders] do not understand all the details, 'they know there is a law'". For his part, Medina echoed that sentiment, stating that the legislation's intention was never to "fill the prisons with those who discriminate" but rather to promote conciliation. For the duration of his tenure, Medina continuously worked to promote the recognition of the Afro-Bolivian population. He facilitated the passage of numerous symbolic laws recognizing Afro culture, including one declaring 23 September to be the National Day of the Afro-Bolivian People and another establishing the saya as part of the country's cultural heritage.
The 2012 census was held midway through Medina's term, a notable event, as it was the first time the government had tabulated the Afro-Bolivian population. Using this data, Medina put forward the concept of creating a majority Afro-Bolivian municipality to give greater electoral representation to its inhabitants, though the proposal never materialized. Upon the conclusion of his term in 2015, Medina retired from politics and returned to the leadership of CADIC. Commission assignments Rural Native Indigenous Peoples and Nations, Cultures, and Interculturality Commission (President; 2011–2012) Rural Native Indigenous Peoples and Nations Committee (2010–2011) Plural Economy, Production, and Industry Commission Industry, Commerce, Transport, and Tourism Committee (2012–2013) Social Policy Commission Social Security and Protection Committee (2013–2014) Planning, Economic Policy, and Finance Commission Financial, Monetary, and Insurance Policy Committee (Secretary; 2014–2015) Personal life and death Medina married Miriam Iriondo, an ethnic Afro-Bolivian saya performer from Chulumani in the Sud Yungas Province. The couple had two children: Adin Dube and Malaika, Swahili names meaning "strength and passion" and "angel", respectively. His relative, Tomasa Medina, also gained prominence in the Afro-Bolivian community, though as an opponent of the MAS and its efforts to expand its influence over the Yungas-based coca market. Outside of his work at MOCUSABOL and CADIC, Medina was also active in radio, hosting the talk show Raíces Africanas, which began airing in 2001. In film, he starred as a supporting actor in 2005's American Visa. Medina died on 23 November 2022, aged 54. His death was commemorated by various Afro-Bolivian organizations and by the Chamber of Deputies. Electoral history References Notes Footnotes Bibliography External links Parliamentary profile Office of the Vice President. 1968 births 2022 deaths 21st-century Bolivian male actors 21st-century Bolivian politicians Afro-Bolivian people Bolivian composers Bolivian engineers Bolivian folk musicians Bolivian male film actors Bolivian radio presenters Chauffeurs Civil rights activists Higher University of San Andrés alumni Members of the Bolivian Chamber of Deputies from La Paz Movimiento al Socialismo politicians People from Nor Yungas Province Systems engineers
Jorge Medina
[ "Engineering" ]
1,835
[ "Systems engineers", "Systems engineering" ]
72,344,861
https://en.wikipedia.org/wiki/Source%20attribution
In the field of epidemiology, source attribution refers to a category of methods with the objective of reconstructing the transmission of an infectious disease from a specific source, such as a population, individual, or location. For example, source attribution methods may be used to trace the origin of a new pathogen that recently crossed from another host species into humans, or from one geographic region to another. It may be used to determine the common source of an outbreak of a foodborne or waterborne infectious disease, such as a contaminated water supply. Finally, source attribution may be used to estimate the probability that an infection was transmitted from one specific individual to another, i.e., "who infected whom". Source attribution can play an important role in public health surveillance and management of infectious disease outbreaks. In practice, it tends to be a problem of statistical inference, because transmission events are seldom observed directly and may have occurred in the distant past. Thus, there is an unavoidable level of uncertainty when reconstructing transmission events from residual evidence, such as the spatial distribution of the disease. As a result, source attribution models often employ Bayesian methods that can accommodate substantial uncertainty in model parameters. Molecular source attribution is a subfield of source attribution that uses the molecular characteristics of the pathogen — most often its nucleic acid genome — to reconstruct transmission events. Many infectious diseases are routinely detected or characterized through genetic sequencing, which can be faster than culturing isolates in a reference laboratory and can identify specific strains of the pathogen at substantially higher precision than laboratory assays, such as antibody-based assays or drug susceptibility tests. On the other hand, analyzing the genetic (or whole genome) sequence data requires specialized computational methods to fit models of transmission. Consequently, molecular source attribution is a highly interdisciplinary area of molecular epidemiology that incorporates concepts and skills from mathematical statistics and modeling, microbiology, public health and computational biology. There are generally two ways that molecular data are used for source attribution. First, infections can be categorized into different "subtypes", each of which corresponds to a unique molecular variety, or a cluster of similar varieties. Source attribution can then be inferred from the similarity of subtypes. Individual infections that belong to the same subtype are more likely to be related epidemiologically, including direct source-recipient transmission, because they have not substantially evolved away from their common ancestor. Similarly, we assume the true source population will have frequencies of subtypes that are more similar to the recipient population, relative to other potential sources. Second, molecular (genetic) sequences from different infections can be directly compared to reconstruct a phylogenetic tree, which represents how they are related by common ancestors. The resulting phylogeny can approximate the transmission history, and a variety of methods have been developed to adjust for confounding factors.
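As a toy illustration of the first (subtype-based) strategy, the following sketch ranks hypothetical candidate source populations by how closely their subtype frequency distributions match a recipient population; all labels, counts, and the distance measure are illustrative assumptions rather than part of any published method.

from collections import Counter

def subtype_frequencies(samples):
    """Convert a list of subtype labels into a frequency distribution."""
    counts = Counter(samples)
    total = sum(counts.values())
    return {subtype: n / total for subtype, n in counts.items()}

def total_variation_distance(p, q):
    """Half the L1 distance between two discrete distributions."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(s, 0.0) - q.get(s, 0.0)) for s in support)

# Hypothetical surveillance data: subtype labels observed in each population
recipient = ["A", "A", "B", "A", "C", "A", "B"]
sources = {
    "poultry": ["A", "A", "A", "B", "A", "C"],
    "cattle":  ["C", "C", "B", "C", "C"],
}

p_recipient = subtype_frequencies(recipient)
ranked = sorted(
    sources,
    key=lambda name: total_variation_distance(p_recipient, subtype_frequencies(sources[name])),
)
print("Candidate sources, most to least similar:", ranked)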
Due to the associated stigma and the criminalization of transmission for specific infectious diseases, molecular source attribution at the level of individuals can be a controversial use of data that was originally collected in a healthcare setting, with potentially severe legal consequences for individuals who become identified as putative sources. In these contexts, the development and application of molecular source attribution techniques may involve trade-offs between public health responsibilities and individual rights to data privacy. Microbial subtyping Microbial subtyping or strain typing is the use of laboratory methods to assign microbial samples to subtypes, which are predefined classifications based on distinct characteristics. The assignment of specimens to subtypes can provide a basis for source attribution, since we assume that a pathogen undergoes minimal change when transmitted to an uninfected host. Therefore, infections of the same subtype are implied to be epidemiologically related, i.e., linked by one or more recent transmission events. The assumption that the pathogen is unchanged when transmitted is generally reasonable if the rate of evolution for the pathogen is slower than the rate of transmission, such that few mutations are observed on an epidemiological time scale. For example, suppose host A is infected by a pathogen that we have categorized as subtype 1. Host A is more likely to have been infected by host B, who also carries the subtype 1 pathogen, than by host C, who carries the subtype 2 pathogen (Figure 1). In other words, transmission from host B is a more parsimonious explanation if there is a relatively small probability that the pathogen population in host C evolved from subtype 1 to subtype 2 after transmission to host A. Today it is more common to use genetic sequencing to characterize the microbial sample at the level of its nucleotide sequence by sequencing the whole genome or portions thereof. However, other molecular methods such as restriction fragment length polymorphism have historically played an important role in microbial subtyping before genetic sequencing became an affordable and ubiquitous technology in reference laboratories. Sequence-based typing methods confer an advantage over other laboratory methods (such as serotyping or pulsed-field gel electrophoresis) because there is an enormous number of potential subtypes that can be resolved at the level of the genetic sequence. Consider the above example again; however, this time host A carries the same infection subtype as many other hosts. In this case we would have no information to differentiate between these hosts as the potential source of host A's infection. Our ability to identify potential sources, therefore, depends on having a sufficient number of different subtypes. However, defining too many subtypes in the population makes it likely that every individual carries a unique subtype, especially for rapidly evolving pathogens that can accumulate high levels of genetic diversity in a relatively short period of time. Hence, there exists an intermediate level of subtype resolution that confers the greatest amount of information for source attribution. When source attribution is considered for a pathogen with high diversity, such that most specimens have unique genetic sequences, it is useful to group multiple unique sequences with a clustering method.
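A minimal sketch of such a clustering step is shown below: sequences are grouped into composite subtypes by single-linkage clustering on pairwise Hamming distances. Real analyses use model-based genetic distances and carefully chosen thresholds; the sequences and threshold here are hypothetical.

def hamming(a, b):
    """Number of positions at which two equal-length sequences differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

def cluster(sequences, threshold):
    """Single-linkage clustering: sequences within `threshold` differences
    of any cluster member join that cluster (union-find bookkeeping)."""
    parent = list(range(len(sequences)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(len(sequences)):
        for j in range(i + 1, len(sequences)):
            if hamming(sequences[i], sequences[j]) <= threshold:
                parent[find(i)] = find(j)

    clusters = {}
    for i in range(len(sequences)):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

seqs = ["ACGTACGT", "ACGTACGA", "TTGTACGA", "ACCTACGT"]   # hypothetical sequences
print(cluster(seqs, threshold=1))                         # e.g. [[0, 1, 3], [2]]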
Single and multi-locus typing Before whole-genome sequencing was cost-effective, targeting a specific part of the pathogen genome (a.k.a. single-locus typing) was an important step to facilitate microbial subtyping. For example, the 16S ribosomal RNA gene is a standard target for identifying bacteria, in part because it is present across all known species and contains a mixture of conserved and variable regions. Within a pathogen species, sequencing targets tended to be selected on the basis of their length, ubiquity and exposure to diversifying selection, which may be dictated by the function of the gene product for expressed regions. For example, so-called "housekeeping" or core genes have indispensable biological functions, such as copying genetic material or building proteins. These genes are often preferred candidates for microbial subtyping because they are less likely to be absent from a given genome. Gene presence/absence is particularly relevant for bacteria where genetic material is frequently exchanged through horizontal gene transfer. Targeting multiple regions (loci) of the pathogen genome confers greater precision in distinguishing between lineages, since the chance of observing informative genetic differences between infections is increased. This approach is referred to as multi-locus sequence typing (MLST). Similar to single-locus typing, MLST requires the selection of specific loci to target for sequencing. Moreover, for subtyping to be consistent across laboratories, a reference database must be maintained that maps sequences from single or multiple loci to a fixed notation of allele numbers or designations. Whole genome sequencing Although single- and multiple-locus subtyping is still predominantly used for molecular epidemiology, ongoing improvements in sequencing technologies and computing power continue to lower the barrier to whole-genome sequencing. Next-generation sequencing (NGS) technologies provide cost-effective methods to generate whole genome sequences from a given sample by individually amplifying and sequencing templates in parallel using customized technologies such as sequencing-by-synthesis. Shotgun sequencing applications of NGS generate full-length genome sequences by shearing the nucleic acid extracted from the sample into small fragments that are converted into a sequencing library; the genome sequence is then reconstituted from the resulting sequence fragments (short reads) by a de novo sequence assembler program. Alternatively, short reads can be mapped to a reference genome sequence that has been converted into an index for efficient lookup of exact substring matches. This approach can be faster than de novo assembly, but relies on having a reference genome that is sufficiently similar to the genome sequence of the sample. While NGS makes it feasible to simultaneously generate full-length genome sequences from hundreds of pathogen samples in a single run, it introduces a number of other challenges. For instance, NGS platforms tend to have higher sequencing error rates than conventional sequencing, and regions of the genome with long stretches of repetitive sequence can be difficult to reassemble. Whole genome sequencing (WGS) can confer a significant advantage for source attribution over single- or multiple-locus subtyping. Sequencing the entire genome is the maximal extent of multi-locus typing, in that all possible loci are covered.
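Returning to the reference-based mapping step described above, the following toy sketch builds an exact-match k-mer index over a made-up reference sequence and looks up candidate positions for short reads; real read mappers additionally handle mismatches, reverse complements, and ambiguous placements, so this is illustrative only.

from collections import defaultdict

def build_index(reference, k):
    """Record every position at which each k-mer occurs in the reference."""
    index = defaultdict(list)
    for i in range(len(reference) - k + 1):
        index[reference[i:i + k]].append(i)
    return index

def map_read(read, index, k):
    """Return candidate alignment start positions implied by the read's first k-mer."""
    seed = read[:k]
    return list(index.get(seed, []))

reference = "ACGTTGCATGACGTTACG"   # hypothetical reference genome fragment
k = 5
index = build_index(reference, k)
for read in ["TGCAT", "CGTTA"]:    # hypothetical short reads
    print(read, "->", map_read(read, index, k))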
Having whole genome sequences will tend to make one-to-one subtyping (Figure 1) less useful, since most genomes will be unique by at least one mutation for rapidly evolving pathogens. Consequently, applications of WGS for source attribution at a population level will likely have to cluster similar genomes together. The breadth of coverage offered by WGS is more advantageous for the epidemiology of bacterial pathogens than for viruses. Bacterial genomes tend to be longer, ranging from about 10⁶ to 10⁷ base pairs, whereas virus genomes seldom exceed 10⁶ base pairs. In addition, bacteria tend to evolve at a slower rate than viruses, so mutations tend to be distributed more sparsely throughout a bacterial genome. For example, WGS data revealed differences between isolates of Burkholderia pseudomallei from Australia and Cambodia that had otherwise appeared to be identical by multi-locus subtyping due to convergent evolution. WGS has also been utilized in several recent studies to resolve transmission networks of Mycobacterium tuberculosis in greater detail, because isolates with identical multi-locus subtypes (e.g., MIRU-VNTR profiles targeting 24 loci) were frequently separated by large numbers of nucleotide differences in the full genome sequence, comprising roughly 4.3 million nucleotides encoding over 4,000 genes. Genetic clustering When applied to genetic sequences, a clustering method is a set of rules for assigning the sequences to a smaller number of clusters such that members of the same cluster are more genetically similar to each other than to sequences in other clusters. Put another way, a clustering method defines a partition on the set of genetic sequences using some similarity measure. Clustering is inherently subjective and there are usually no formal guidelines for setting the clustering criteria. Consequently, cluster definitions can vary substantially from one study to the next. In addition, clustering is an intuitive process that can be accomplished by a wide variety of approaches; because of this flexibility, numerous different methods of genetic clustering have been described in the literature. Genetic clustering provides a way of dealing with sequences from rapidly evolving pathogens, or whole genome sequences from pathogens with less divergence. In either case, there can be an enormous number of distinct genetic sequences in the data set. If each subtype must correspond to a unique sequence variant, then one could potentially have to track an unwieldy number of microbial subtypes for these pathogens when subtypes are defined on a one-to-one basis (Figure 1). The number of subtypes can be greatly reduced by expanding the definition of microbial subtypes from individually unique sequence variants to clusters of similar sequences. For example, pairwise distance clustering is a nonparametric approach in which clusters are assembled from pairs of sequences that fall within a threshold distance of each other. The distance between sequences is computed by a genetic distance measure (a mathematical formula that maps two sequences to a non-negative real number) that quantifies the evolutionary divergence between the sequences under some model of molecular evolution. Frequency-based attribution When the potential sources are populations, not individuals, then we are comparing the frequencies of subtypes in the respective populations. The most likely source population should have a subtype frequency distribution that is most similar to that of the recipient population.
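The following minimal sketch illustrates this frequency-matching idea in code; it anticipates the "Dutch" model formalized in the next paragraphs, and the subtype frequencies and case total are the same hypothetical numbers used in the worked example given there.

def attribute_cases(subtype_freq_by_source, n_human_cases):
    """Split the human cases of one subtype across sources in proportion
    to the subtype's frequency in each source population."""
    total = sum(subtype_freq_by_source.values())
    return {
        source: n_human_cases * freq / total
        for source, freq in subtype_freq_by_source.items()
    }

# Hypothetical frequencies of one subtype in three candidate source populations
freqs = {"source_1": 0.8, "source_2": 0.5, "source_3": 0.1}
print(attribute_cases(freqs, n_human_cases=100))
# source_2 is attributed 100 * 0.5 / 1.4, i.e. roughly 35.7 expected cases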
Methods that employ this approach have been referred to as "frequency-based" or "frequency-matching" models. These subtypes are not necessarily derived from molecular data; for instance, these methods were originally applied to microbial strains defined by non-genetic antigenic or resistance profiling. For example, the "Dutch model" was originally developed to estimate the most likely source of a number of foodborne illnesses due to Salmonella by comparing the relative frequencies of bacterial subtypes (based on phage typing) in different commercial livestock populations (including poultry, swine and cattle) through routine surveillance programs. For a given subtype, the expected number of human cases attributed to each source is proportional to the relative frequency of that subtype among sources: λ_ij = n_i × p_ij / Σ_k p_ik, where p_ij is the proportion of (non-human) cases in the j-th source population associated with subtype i, and n_i is the number of cases of subtype i in the recipient (human) population. For instance, if the frequencies of subtype X among three potential sources were 0.8, 0.5 and 0.1, respectively, then the expected number of cases (out of a total of 100) attributed to the second source is 100 × 0.5 / (0.8 + 0.5 + 0.1) ≈ 35.7. This simple formula is a maximum likelihood estimator when the total force of infection from each source into the human population is uniform, e.g., the sources have equal population sizes. Subsequently, this model was extended by Hald and colleagues to account for variation among sources and subtypes using Bayesian inference methods. This extension, typically referred to as the Hald model, has become a standard model in source attribution for food-borne illnesses. The observed number of cases of each subtype in the human population was assumed to be a Poisson-distributed outcome with a mean λ_i for the i-th subtype, after adjusting for cases related to travel and outbreaks: λ_i = Σ_j q_i m_j a_j p_ij, where q_i is the marginal effect of the i-th subtype (e.g., elevated infectiousness of a bacterial variant), m_j is the observed total amount (mass) of the j-th food source, a_j is the marginal effect of the j-th food source, and p_ij is the same observed case proportion as in the original "Dutch" model. This model is visualized in Figure 2. Bayesian inference The addition of a large number of parameters to the "Dutch" model by Hald and colleagues yielded a more realistic model. However, it was too complex to solve for exact maximum likelihood estimates, in contrast to the original model. Many of the parameters could not be directly measured, such as the relative transmission risk associated with a specific food source. Consequently, Hald and colleagues adopted a Bayesian approach to estimate the model parameters. A similar approach has also been used to reconstruct the contribution of different environmental and livestock reservoirs of the bacterium Campylobacter jejuni to an outbreak of food poisoning in England, where the migration of different subtypes among reservoirs was jointly estimated by Bayesian methods. Although Bayesian inference is discussed extensively elsewhere, it plays an important role in computationally intensive methods of source attribution, so we provide a brief description here. In the context of Bayesian inference, every parameter is described by a probability distribution that represents our belief about its true value.
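To make this updating of belief concrete before Bayes' rule is stated formally below, the following sketch computes a posterior distribution for a single Poisson transmission rate from made-up case counts, using a uniform prior and a grid approximation in place of MCMC. It is a hypothetical, minimal illustration and not the Hald model itself.

import math

observed_counts = [3, 5, 2, 4]                 # hypothetical cases per period
grid = [0.1 * i for i in range(1, 101)]        # candidate rates 0.1 .. 10.0

def poisson_loglik(rate, counts):
    """Log-likelihood of independent Poisson counts at the given rate."""
    return sum(k * math.log(rate) - rate - math.lgamma(k + 1) for k in counts)

prior = [1.0 / len(grid)] * len(grid)          # uniform prior over the grid
unnorm = [p * math.exp(poisson_loglik(r, observed_counts))
          for p, r in zip(prior, grid)]
total = sum(unnorm)
posterior = [u / total for u in unnorm]        # posterior = prior x likelihood, normalized

best = max(range(len(grid)), key=lambda i: posterior[i])
print("posterior mode near rate =", round(grid[best], 1))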
Thus, the statistical principle that underlies Bayesian inference (i.e., Bayes' rule) can be expressed in terms of the model parameters (θ) and the data (D): P(θ | D) ∝ P(D | θ) P(θ), where P(θ | D), P(D | θ), and P(θ) are known as the posterior, sampling (likelihood), and prior distributions, respectively. A simple way to think about Bayesian inference is that our prior belief about the parameters is "updated" once we have seen the data. As a result, our posterior belief becomes a compromise between our prior belief and the data. To update our belief, we need to have a sampling distribution or model that describes the probability of different outcomes of an experiment. We also require a prior distribution that represents our belief in a statistical form. While modern computation allows almost any probability distribution to be used, the uniform distribution is commonly used because it assigns the same probability to every value within some range. After incorporating new information from the data, our updated belief about the model parameters is represented by the posterior distribution. This use of distributions to represent our belief distinguishes Bayesian inference from maximum likelihood, which results in a single combination of parameter values as a point estimate. Hald and colleagues used uniform prior distributions for many of their parameters to express the prior belief that the true value fell within a continuous range with specific upper and lower limits. They constrained some parameters to take the same numerical value as others. For example, the effects of domestic and imported supplies of the same food source were linked in this manner. This assumption expressed a strong belief that a given food source carried the same transmission risk irrespective of its origin, and simplified the model so that it was more feasible to fit the model to the data. Other parameters were set to a fixed reference value to further simplify the model. Hald and colleagues employed a Poisson model, P(k | λ) = λ^k e^(−λ) / k!, to describe the probability of observing the number (k) of rare transmission events that occur at a rate λ. As described above, the rate of cases due to a specific bacterial subtype was the sum of transmission rates across all potential sources. The Hald model was more realistic than the "Dutch" model because it allowed transmission rates to vary between subtypes and food sources. However, it was not feasible to directly measure these different rates — these parameters needed to be estimated from the data. Comparative methods Instead of comparing the frequencies of subtypes to reconstruct the transmission of pathogens between populations, many source attribution methods compare the pathogen sequences at the level of individual hosts. One way of comparing sequences is to calculate some measure of genetic distance or similarity, a concept that we introduced earlier on the topic of pooling sequences into composite subtypes. For example, infections that are grouped into clusters are assumed to be related through one or more recent and rapid transmission events. Short genetic distances imply that limited time has passed for mutations to accumulate in lineages descending from their common ancestor. Consequently, these clusters are often referred to as "transmission clusters". Other studies have used genetic distances that exceed some threshold to rule out host individuals as potential sources of transmission.
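As a toy illustration of these distance-based comparisons, the sketch below computes a Jukes-Cantor corrected distance between aligned sequences and applies an arbitrary threshold to rule a candidate host in or out; the sequences, host labels, and threshold are all hypothetical assumptions for the example.

import math

def p_distance(a, b):
    """Proportion of sites that differ between two aligned sequences."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

def jukes_cantor(a, b):
    """Jukes-Cantor corrected distance: d = -3/4 * ln(1 - 4p/3)."""
    p = p_distance(a, b)
    return -0.75 * math.log(1.0 - 4.0 * p / 3.0)

recipient = "ACGTACGTACGTACGTACGT"
candidates = {"host_B": "ACGTACGTACGAACGTACGT",   # one difference
              "host_C": "ACTTACGAACGAACGTTCGT"}   # several differences

threshold = 0.10
for host, seq in candidates.items():
    d = jukes_cantor(recipient, seq)
    verdict = "cannot be ruled out" if d <= threshold else "ruled out as a direct source"
    print(f"{host}: d = {d:.3f} -> {verdict}")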
Although this application of clustering is related to source attribution, it is not possible to infer the direction of transmission solely from the genetic distance between infections. Furthermore, the genetic distance separating infections is not solely determined by the rate of transmission; for example, it is strongly influenced by how infections are sampled from the population. Sequences can also be compared in the context of their shared evolutionary history. A phylogenetic tree or phylogeny is a hypothesis about the common ancestry of species or populations. In the context of molecular epidemiology, phylogenies are used to relate infections in different hosts and are usually reconstructed from genetic sequences of each pathogen population. To reconstruct the phylogeny, the sequences must cover the same parts of the pathogen genome; for example, sequences that represent multiple copies of the same gene from different infections. It is this residual similarity (homology) between diverging populations that implies recent common ancestry. A molecular phylogeny comprises "tips" or "leaves" that represent different genetic sequences that are connected by branches to a series of common ancestors that eventually converge to a "root". The composition of the ancestral sequence at the root, the order of branching events, and the relative amount of change along each branch are all quantities that must be extrapolated from the observed sequences at the tips. There are multiple approaches to reconstruct a phylogenetic tree from genetic sequence variation. For example, distance-based methods use a hierarchical clustering method to build up a tree based on the observed genetic distances. Phylogenetic uncertainty A common simplifying assumption in phylogenetic investigations is that the phylogenetic tree reconstructed from the data is the "true" tree — that is, an accurate representation of the common ancestry relating the sampled infections. For instance, a single tree is often used as the input for comparative methods to detect the signature of natural selection in protein-coding sequences. On the other hand, if the phylogeny is handled as an uncertain estimate derived from the data (including the sequence alignment), then the analysis becomes a hierarchical model in which the problem of phylogenetic reconstruction is nested within the problem of estimating the other model parameters that are conditional on the phylogeny (Figure 3). Sampling both the phylogeny and other model parameters from their joint posterior distribution using methods such as Markov chain Monte Carlo (MCMC) should confer more accurate parameter estimates. However, the greatly expanded model space also makes it more difficult for MCMC samples to converge to the posterior distribution. Such hierarchical methods are often implemented in the software package BEAST2 (Bayesian Evolutionary Analysis by Sampling Trees), which provides generic routines for MCMC sampling from tree space, and calculates the likelihood of a time-scaled phylogenetic tree given sequence data and sample collection dates. There are a number of sources of phylogenetic uncertainty. For instance, the common ancestry of lineages can be difficult to reconstruct if there has been little to no evolution along the respective branches.
This can occur when the rate of evolution is slow relative to the time scale of transmission, such that mutations are unlikely to accumulate between the start of one infection and its transmission to the next host (i.e., the generation time). It can also arise when existing divergence is not captured due to incomplete sequencing of the respective genomes. Furthermore, reconstructing the common ancestry of lineages is progressively more uncertain as we move deeper into the tree, forcing us to extrapolate the ancestral states at greater distances from the observed data. Alignment Reconstructing phylogenies from molecular sequences generally requires a multiple sequence alignment, a table in which homologous residues in different sequences occupy the same position. Although alignments are often treated as observed data known without ambiguity, the process of aligning sequences is also uncertain and can become more difficult with the rapid accumulation of sequence insertions and deletions among diverging pathogen lineages. While there are Bayesian methods that address uncertainty in alignment by joint sampling of the alignment along with the phylogeny, this approach is computationally complex and is seldom used in the context of source attribution. Furthermore, sequences are themselves uncertain estimates of the genetic composition of individual pathogens or infecting populations: next-generation sequencing technologies tend to have substantially higher error rates than conventional Sanger sequencing, and analysis pipelines must be carefully validated to reduce the effects of sample cross-contamination and adapter contamination. Recombination Genetic recombination is the exchange of genetic material between individual genomes. For pathogens, recombination can occur when a cell is infected by multiple copies of the pathogen. If some hosts were infected multiple times by two or more divergent variants from different sources (i.e., superinfection), then recombination can produce mosaic genomes that complicate the reconstruction of an accurate phylogeny. In other words, different segments of a recombinant genome may be related to other genomes through discordant phylogenies in a way that cannot be accurately represented by a single tree. In practice, it is common to screen for recombinant sequences and discard them before reconstructing a phylogeny from an alignment that is assumed to be free of recombination. Inferring transmission history from the phylogeny The basic premise in applying phylogenetics to source attribution is that the shape of the phylogenetic tree approximates the transmission history, which can also be represented by a tree where each split into two branches represents the transmission of an infection from one host to another. In conjunction with reconstructing the transmission tree from other sources of information, such as contact tracing, reconstructing a phylogenetic tree can serve as a useful additional source of information, especially when genetic sequences are already available. Because of the visual and conceptual similarity between phylogenetic and transmission trees, it is a common assumption that the branching points (splits) of the phylogeny represent transmission events. However, this assumption will often be inaccurate. A transmission event may have occurred at any point along the two branches that separate one sampled infection from the other in the virus phylogeny (Figure 3A).
The transmission tree only constrains the shape of the phylogenetic tree. Thus, even if we can reconstruct the phylogenetic tree without error, there are several reasons why it will not be an accurate representation of the transmission tree, including incomplete sampling, pathogen evolution within hosts, and secondary infection of the same host. Incomplete sampling Equating the phylogenetic tree with the transmission history implicitly assumes that genetic sequences have been obtained from every infected host in the epidemic. In practice, only a fraction of infected hosts are represented in the sequence data. The existence of an unknown and inevitably substantial number of unsampled infected hosts is a major challenge for source attribution. Even if the phylogenetic tree indicates that two infections are more closely related to each other than to any other sampled infection, one cannot rule out the existence of one or more unsampled hosts who are intermediate links in the "transmission chain" separating the known hosts (Figure 3B). Similarly, an unsampled infection may have been the source population for both observed infections at the tips of the tree (Figure 3C). By itself, the phylogenetic tree does not explicitly discriminate among these alternative transmission scenarios. Evolution within hosts The shape of the phylogenetic tree may diverge from the underlying transmission history because of the evolution of diverse populations of the pathogen within each host. Individual copies of the pathogen genome that are transmitted to the next host are, by definition, no longer in the source population. A split exists in the phylogenetic tree that represents the common ancestor of the transmitted lineages and the other lineages that have remained and persisted in the source population. If we follow both sets of lineages back in time, the time of the transmission event is the most recent possible time that they could converge to a common ancestor. Put another way, this event represents one extreme of a continuous range where the common ancestor is located further back in time. This process is often modelled by Kingman's coalescent, which describes the number of generations we expect to follow randomly selected lineages back in time until we encounter a common ancestor. The expected time until two lineages converge to a common ancestor, known as a coalescence event, is proportional to the effective population size, which determines the number of possible ancestors. Put another way, two randomly selected people in a large city are less likely to have a great-grandparent in common than two people in a small rural community. Longer coalescence times in large, diverse within-host pathogen populations are a significant challenge for source attribution, because they uncouple the virus phylogeny from the transmission tree. For example, if a host has transmitted their infection to two others, then there can be as many as three sets of lineages whose ancestry can be traced in the source population in that host (Figure 3D). As a result, there is some chance that the branching order in the virus phylogeny implies a different order of transmission events if we interpret the phylogeny as equivalent to a transmission tree. For example, in Figure 3D hosts 1 and 3 are more closely related in the transmission history, but not in the phylogeny. Clearance and secondary infection Many infections can be spontaneously cleared by the host's immune system.
If a host that has cleared a previously diagnosed infection becomes re-infected from another source, then it is possible for the same host to be represented by different infections in the phylogenetic and transmission trees, respectively. In addition, some individuals may become infected from multiple different sources. For example, roughly one-third of infections by hepatitis C virus are spontaneously cleared within the first six months of infection. This previous exposure, however, does not confer immunity to re-infection by the same virus. In addition, co-infection by multiple strains of hepatitis C virus that persist simultaneously within the same host can occur relatively frequently in populations with a high rate of transmission, such as people who inject drugs using shared equipment (with estimates ranging from 14% to 39%). The persistence of strains from additional exposures may be missed by conventional genetic sequencing techniques if they are present at low frequencies within the host, necessitating the use of "next-generation" sequencing technologies. For these reasons, the epidemiological linkage of hepatitis C virus infections through genetic similarity may be a transient phenomenon, leading some investigators to recommend using multiple virus sequences sampled from different time points of each infection for molecular epidemiology applications. Ancestral host-state reconstruction Ancestral reconstruction is the application of a model of evolution to a phylogenetic tree to reconstruct character states, such as nucleotide sequences or phenotypes, at the different ancestral nodes of the tree down to the root. In the context of source attribution, ancestral reconstruction is frequently used to estimate the geographic location of pathogen lineages as they are carried from one region to another by their hosts. Drawing this analogy between character evolution and the spatial migration of individuals or populations is known as phylogeography, where the geographic location of an ancestral population is reconstructed from the current locations of its sampled descendants under some model of migration. Migration models generally fall into two categories of discrete-state and continuous-state models. Discrete-state or island migration models assume that a given lineage is in one of a finite number of locations, and that it changes location at a constant rate over time according to a continuous-time Markov process, analogous to the models used for molecular evolution. Ancestral reconstruction with a discrete-state migration model has also been utilized to reconstruct the early spread of HIV-1 in association with the development of transport networks and increasing population density in central Africa. Discrete models can also be applied to the population-level source attribution of zoonotic transmissions by reconstructing different host species as ancestral character states. For example, a discrete trait model of evolution was used to reconstruct the ancestral host species in a phylogeny relating Staphylococcus aureus specimens from humans and domesticated animals. Similarly, Faria and colleagues analyzed the cross-species transmission of rabies virus as a discrete diffusion process along the virus phylogeny, with rates influenced by the evolutionary relatedness and geographic range overlap of the respective host species. Continuous-state migration models are more similar to models of Brownian motion in that a lineage may occupy any point within a defined space.
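The following toy simulation illustrates the continuous-state (Brownian motion) view: a lineage's location drifts in two dimensions along each branch of a small, made-up tree, so the sampled tip locations scatter around an unobserved ancestral location. The tree, branch lengths, and diffusion rate are hypothetical, and the sketch does not implement any specific published phylogeographic model.

import random

def brownian_step(location, branch_length, sigma):
    """Displace a 2-D location by Gaussian noise scaled to the branch length."""
    x, y = location
    sd = sigma * branch_length ** 0.5
    return (x + random.gauss(0.0, sd), y + random.gauss(0.0, sd))

random.seed(1)
root_location = (0.0, 0.0)            # hypothetical ancestral location
sigma = 1.0                           # hypothetical diffusion rate

# A three-tip tree: root -> internal node -> (tip1, tip2), and root -> tip3
internal = brownian_step(root_location, branch_length=1.0, sigma=sigma)
tips = {
    "tip1": brownian_step(internal, 0.5, sigma),
    "tip2": brownian_step(internal, 0.5, sigma),
    "tip3": brownian_step(root_location, 1.5, sigma),
}
for name, (x, y) in tips.items():
    print(f"{name}: ({x:.2f}, {y:.2f})")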
Although continuous models can be more realistic than discrete migration models, they may also be more challenging to fit to data. Taken literally, a continuous model requires precise geolocation data for every infection sampled from the population. In many applications, however, these metadata are not available; for example, some studies approximate the true spatial distribution of sampled infections by the centroids of their respective regions. This can become problematic if the regions vary substantially in area, and host populations are seldom uniformly distributed within regions. Paraphyly Paraphyly is a term that originates from the study of cladistics, an evolutionary approach to systematics that groups organisms on the basis of their common ancestry. A group of infections is paraphyletic if the group includes the most recent common ancestor, but does not include all its descendants. In other words, one group is nested within an ancestral group. For example, birds are descended from a common ancestor that in turn shares a common ancestor with all reptiles; thus, birds are nested within the phylogeny of reptiles, making the latter a paraphyletic group. Thus, paraphyly is evidence of evolutionary precedence: the ancestor of all birds was a reptile. In the context of source attribution, paraphyly can be used as evidence that one infection preceded another. It does not provide evidence that the infection was directly transmitted from one individual to another, in part because of incomplete sampling. The application of paraphyly for source attribution requires that the phylogenetic tree relates multiple copies of the pathogen from both the putative source and recipient hosts. To elaborate, phylogenetic trees relating different infections are often reconstructed from population-based sequences (direct sequencing of the PCR amplification product), where each sequence represents the consensus of the individual pathogen genomes sampled from the infected host. If copies of the pathogen genome are sequenced individually by limiting dilution protocols or next-generation sequencing, then one can reconstruct a tree that represents the genealogy of individual pathogen lineages, rather than the phylogeny of pathogen populations. If sequences from host B form a monophyletic clade (in which the members are the complete set of descendants of a common ancestor) that is nested within a paraphyletic group of sequences from host A, then the tree is consistent with the direction of transmission having originated from host A. Directionality does not imply that host A directly transmitted their infection to host B, because the pathogen may have been transmitted through an unknown number of intermediate unsampled hosts before establishing an infection in host B. Node support The statistical confidence in directionality of transmission from a given tree is usually quantified by the support value associated with the node that is ancestral to the nested monophyletic clade. The support of node X is the estimated probability that if we repeated the phylogenetic reconstruction on an equivalent data set, the new tree would contain exactly the same clade consisting exclusively of all descendants of node X in the original tree. In other words, it quantifies the reproducibility of that node given the data. It should not be interpreted as the probability that the clade below node X appears in the "true" tree. There are generally three approaches to estimating node support: 1. Bootstrapping.
Felsenstein adapted the concept of nonparametric bootstrapping to the problem of phylogenetic reconstruction by maximum likelihood. Bootstrapping provides a way to characterize the sampling variation associated with the data without having to collect additional, equivalent samples. To start, one generates a new data set by sampling an equivalent number of nucleotide or amino acid positions at random with replacement from the multiple sequence alignment – this new data set is referred to as a "bootstrap sample". A tree is reconstructed from the bootstrap sample using the same method as the original tree. Since we are sampling sets of homologous characters (columns) from the alignment, the information on the evolutionary history contained at each sampled position is intact. We record the presence or absence of clades from the original tree in the new tree, and then repeat the entire process until a target number of replicate trees have been processed. The frequency at which a given clade is observed in the bootstrap sample of trees quantifies the reproducibility of that node in the original tree. Non-parametric bootstrapping is a time-consuming process that scales linearly with the number of replicates, since every bootstrap sample is processed by the same method as the original tree, and post-processing steps are required to enumerate clades. The precision of estimating the node support values increases with the number of bootstrap replicates. For instance, it is not possible to obtain a node support of 99% if fewer than 100 bootstrap samples have been processed. Consequently, it is now more common to use faster approximate methods to estimate the support values associated with different nodes of the tree (for instance, see approximate likelihood-ratio testing below). 2. Bayesian sampling. Instead of using bootstrapping to resample the data, one can quantify node support by examining the uncertainty in reconstructing the phylogeny from the given data. Bayesian sampling methods such as Markov chain Monte Carlo (see Hald model) are designed to generate a random sample of parameters from the posterior distribution given the model and data. In this case, the tree is a collection of parameters. A Bayesian estimate of node support can be extracted from this sample of trees by counting the number of trees in which the monophyletic clade that descends from that specific node appears. Bayesian sampling is computationally demanding because the space of all possible trees is enormous, making convergence difficult or not feasible to attain for large data sets. 3. Approximate likelihood-ratio testing. Unlike Bayesian sampling, this method is performed on a single estimate of the tree based on maximum likelihood, where the likelihood is the probability of the observed data given the tree and model of evolution. The likelihood ratio test (LRT) is a method for selecting between two models or hypotheses, where the ratio of their likelihoods is a test statistic that is mapped to a null distribution to assess statistical significance. In this application, the null hypothesis is that a branch in the reconstructed tree has a length of zero, which would imply that the descendant clade cannot be distinguished from its background. This makes the LRT a localized analysis: it evaluates the support of a node when the rest of the tree is assumed to be true. On the other hand, this narrow scope makes the approximate LRT method computationally efficient in comparison to Bayesian sampling and bootstrap sampling.
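To make the resampling step at the heart of approach (1) concrete, the sketch below draws alignment columns at random with replacement to form one bootstrap replicate; the alignment is hypothetical, and tree reconstruction itself is left as a placeholder because in practice it would be delegated to standard phylogenetics software.

import random

def bootstrap_alignment(alignment):
    """Resample columns of a list-of-strings alignment with replacement."""
    n_sites = len(alignment[0])
    cols = [random.randrange(n_sites) for _ in range(n_sites)]
    return ["".join(seq[c] for c in cols) for seq in alignment]

def reconstruct_tree(alignment):
    """Placeholder for a real tree-building step (e.g., maximum likelihood)."""
    raise NotImplementedError

alignment = ["ACGTACGT",
             "ACGTACGA",
             "ACCTACGA"]   # hypothetical aligned sequences, one per host

replicate = bootstrap_alignment(alignment)
print(replicate)           # same dimensions as the original, columns resampled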
In addition to the LRT method, there are several other methods for fast approximation of bootstrap support, and this remains an active area of research. Background sequences The interpretation of monophyletic and paraphyletic clades is contingent on whether a sufficient number of infections have been sampled from the host population. Sequences from one host can only become paraphyletic relative to sequences from a second host if the tree contains additional sequences from at least one other host in the population. As noted above, there may be unsampled host individuals in a "transmission chain" connecting the putative source to the recipient host (Figure 3B). The incorporation of background sequences from additional hosts in the population is similar to the problem of rooting a phylogeny using an outgroup, where the root represents the earliest point in time in the tree. The location of this "root" in the section of the tree relating the sequences from the two hosts determines which host is interpreted to be the potential source. There are no formal guidelines for selecting background sequences. Typically, one incorporates sequences that were collected in the same geographic region as the two hosts under investigation. These local sequences are sometimes augmented with additional sequences that are retrieved from public databases based on their genetic similarity (e.g., BLAST), which were not necessarily collected from the same region. Generally, the background data comprise consensus (bulk) sequences where each host is represented by a single sequence, unlike the putative source and recipient hosts from whom multiple clonal sequences have been sampled. Because clonal sequencing is more labor-intensive, such data are usually not available to use as background sequences. The incorporation of different types of sequences (clonal and bulk) into the same phylogeny may bias the interpretation of results, because it is not possible for sequences to be nested within the consensus sequence from a single background host. Phylodynamic methods In general, phylodynamics is a subdiscipline of molecular epidemiology and phylogenetics that concerns the reconstruction of epidemiological processes, such as the rapid expansion of an epidemic or the emergence of herd immunity in the host population, from the shape of the phylogenetic tree relating infections sampled from the population. A phylodynamic method uses tree shape as the primary data source to parameterize models representing the biological processes that influenced the evolutionary relationships among the observed infections. This process should not be confused with fitting models of evolution (such as a nucleotide substitution model or molecular clock model) to reconstruct the shape of the tree from the observed characteristics of related populations (infections), which originates from the field of phylogenetics. The relatively rapid evolution of viruses and bacteria makes it feasible to reconstruct the recent dynamics of an epidemic from the shape of the phylogeny reconstructed from infections sampled in the present. The use of phylodynamic methods for source attribution involves reconstruction of the transmission tree, which cannot be directly observed, from its residual effect on the shape of the phylogenetic tree.
Although there are established methods for reconstructing phylogenetic trees from the genetic divergence among pathogen populations sampled from different host individuals, there are several reasons why the phylogeny may be a poor approximation of the transmission tree (Figure 3). In this context, phylodynamic methods attempt to reconcile the discordance between the phylogeny and the transmission tree by modeling one or more of the processes responsible for this discordance, and fitting these models to the data (Figure 4). Given the complexity of phylodynamic models, these methods predominantly use Bayesian inference to sample transmission trees from the posterior distribution, where the transmission tree is an explicit model of "who infected whom". Although these methods can estimate the probability of a direct transmission from one individual to another, this probability is conditional on how well the model (selected from a number of possible models) approximates reality. Below we describe models that have been implemented to incorporate, but not eliminate, the additional uncertainty caused by the various assumptions required when using the phylogenetic tree as an approximation of the transmission history. Demographic and transmission models A basic simplifying assumption is that every infection in the epidemic is represented by at least one genetic sequence in the data set (complete sampling). Although complete sampling may be feasible in circumstances such as an outbreak of disease transmission among farms in a defined geographic region, it is generally not possible to rule out unsampled sources in other contexts. This is especially true for infectious diseases that are stigmatized and/or associated with marginalized populations, for diseases with a long asymptomatic period, or in the context of a generalized epidemic where disease prevalence may substantially exceed the local capacity for sample collection and genetic sequencing. Several methods attempt to address the presence of unsampled hosts by modeling the growth of the epidemic over time, which predicts the total number of infected hosts at any given time. Put another way, the probability that an infection was transmitted from an unsampled source is determined in part by the total size of the infected population at the time of transmission. These models of epidemic growth are sometimes referred to as demographic models because some are derived from population growth models such as the exponential and logistic growth models. Alternatively, the number of infections can be modeled by a compartmental model that describes the rate at which individual hosts switch from susceptible to infected states, and can be extended to incorporate additional states such as recovery from infection or different stages of infection. An important distinction between population growth and compartmental models is that the number of uninfected susceptible hosts is tracked explicitly in the latter. A phylodynamic analysis attempts to parameterize the growth model either by using the phylogeny as a direct proxy of the transmission tree, or by accounting for the discordance between these trees due to within-host diversity using a population genetic model, such as the coalescent (Figure 4). Bayesian methods make it feasible to supplement this task with other data sources, such as the reported case incidence and/or prevalence over time.
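As a minimal sketch of the compartmental growth models mentioned above, the following code integrates a simple SIR (susceptible-infected-removed) model with a forward Euler step; all parameter values are hypothetical, and the example is illustrative rather than a fitted epidemic model.

def simulate_sir(beta, gamma, s0, i0, r0, days, steps_per_day=10):
    """Forward-Euler integration of a closed-population SIR model."""
    s, i, r = float(s0), float(i0), float(r0)
    n = s0 + i0 + r0
    dt = 1.0 / steps_per_day
    trajectory = [(0, i)]
    for day in range(1, days + 1):
        for _ in range(steps_per_day):
            new_infections = beta * s * i / n * dt
            new_removals = gamma * i * dt
            s -= new_infections
            i += new_infections - new_removals
            r += new_removals
        trajectory.append((day, i))
    return trajectory

# Hypothetical outbreak in a population of 1,000 with one initial case
for day, infected in simulate_sir(beta=0.4, gamma=0.2, s0=999, i0=1, r0=0, days=60)[::10]:
    print(f"day {day:2d}: ~{infected:.0f} currently infected")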
The transmission process can be mapped to the size of the infected population using either a coalescent (reverse-time) model or a forward-time model such as birth-death or branching processes. Thus, the coalescent model has two different applications in phylodynamics. First, it can be used to address the confounding effect of diverse pathogen populations within hosts, by explicitly modeling the common ancestry of individual pathogens. Second, the coalescent can be adapted to model the spread of infections back in time, drawing an analogy between the common ancestry of individuals within hosts and the transmission of infections among hosts. This parallel has also been explored by phylodynamic models based on the structured coalescent, where the population can be partitioned into two or more subpopulations (demes). Each deme represents an infected host individual. Due to limited migration of pathogen lineages between demes, two pathogen lineages sampled at random are more likely to share a recent common ancestor if they belong to the same deme. Birth-death models describe the proliferation of infections forward in time, where a "birth" event represents the transmission of an infection to an uninfected susceptible host, and a "death" event can represent either the diagnosis and treatment of an infection, or its spontaneous clearance by the host. This class of models was originally formulated to describe the proliferation of species through speciation and extinction. Similarly, branching processes model the growth of an epidemic forward in time where the number of transmissions from each infected host ("offspring") is described by a discrete probability distribution over non-negative integers, such as the negative binomial distribution. Branching process models tend to use the simplifying assumption that this offspring distribution remains constant over time, making this class of models more appropriate for the initial stage of an epidemic where most of the population is uninfected. Within-host diversity As noted above, the diversification of pathogen populations within each host results in a discordance between the shapes of the pathogen phylogeny and the transmission tree. Phylodynamic methods that treat the phylogeny as equivalent to the transmission tree assume implicitly that the population within each host is small enough to be approximated by a single lineage. If the within-host population is diverse, then the time of a transmission event will tend to underestimate the time since two lineages split from their common ancestor (Figure 3A); this phenomenon is analogous to incomplete lineage sorting, which affects gene trees relative to the species tree. The resulting discordance between the phylogenetic and transmission trees makes it more difficult to reconstruct the latter from the observed data. Moreover, the effect of within-host diversity becomes even greater if there are incomplete transmission bottlenecks — where a new infection is established by more than one lineage transmitted from the source population — because the common ancestor of pathogen lineages may be located in previous hosts further back in time. Controversies Source attribution is an inherently controversial application of molecular epidemiology because it identifies a specific population or individual as being responsible for the onward transmission of an infectious disease.
Because molecular source attribution increasingly requires the specialized and computationally intensive analysis of complex data, the underlying model assumptions and level of uncertainty in these analyses are often not made accessible to principal stakeholders, including the key affected populations and community advocates. Molecular forensics and HIV-1 transmission Outside of a public health context, the concept of source attribution has significant legal and ethical implications for people living with HIV, who may be prosecuted for transmitting their infection to another person. The transmission of HIV-1 without disclosing one's infection status is a criminally prosecutable offense in many countries, including the United States. For example, defendants in HIV transmission cases in Canada have been charged with aggravated sexual assault, with a "maximum penalty of life imprisonment and mandatory lifetime registration as a sex offender". Molecular source attribution methods have been utilized as forensic evidence in such criminal cases. Forensic applications of phylogenetic clustering One of the earliest and best-known examples of an HIV-1 transmission case was the investigation of the so-called "Florida dentist", where an HIV-positive dentist was accused of transmitting his infection to a patient. Although genetic clustering — specifically, clustering in the context of a phylogeny — was applied to these data to demonstrate that HIV-1 particles sampled from the dentist were genetically similar to those sampled from the patient, clustering alone is not sufficient for source attribution. Clusters can only provide evidence that infections are unlikely to be epidemiologically linked because they are too dissimilar relative to other infections in the population. For example, similar phylogenetic methods were used in a subsequent case to demonstrate that the HIV-1 sequence obtained from the patient was far more similar to the sequence from their sexual partner than the sequence from a third party under investigation. Clustering provides no information on the directionality of transmission (e.g., whether the infection was transmitted from individual A to individual B, or from B to A; Figure 3), nor can it rule out the possibility that one or more other unknown persons (from whom no virus sequences have been obtained) were involved in the transmission history. Despite these known limitations of clustering, statements on the genetic similarity of infections continue to appear in court cases. On the other hand, clustering can have population-level benefits by enabling public health agencies to rapidly detect elevated rates of transmission in a population, and thereby optimize the allocation of prevention efforts. The expansion of public health applications of clustering has raised concerns among people living with HIV that this use of personal health data might also expose them to a greater risk of criminal prosecution for transmission. Forensic applications of paraphyly methods Source attribution methods based on paraphyly have been used in the prosecution of individuals for HIV-1 transmission. One of the earliest examples was published in 2002, where a physician was accused of intentionally injecting blood from one patient (P) who was HIV-1 positive into another patient (V) who had previously been in a relationship with the physician. This study used maximum likelihood methods to reconstruct a phylogenetic tree relating HIV-1 sequences from both patients. 
Paraphyly of sequences from P implying either direct or indirect transmission to V was reported for the phylogeny reconstructed from RT sequences (Figure 5). However, a second tree reconstructed from the more diverse HIV-1 envelope (env) sequences from the same group was inconclusive about the direction of transmission, indicating only that the env sequences from patients P and V clustered respectively into two monophyletic groups that were jointly distinct from the background. The use of paraphyly for source attribution was stimulated by the advent of next-generation sequencing, which made it more cost-effective to rapidly sequence large numbers of individual viruses from multiple host individuals. More recent work has also developed a formalized framework for interpreting the distribution of sequences in the phylogeny as being consistent with a direction of transmission. Several studies have since applied this framework to re-analyze or develop forensic evidence for HIV transmission cases in Serbia, Taiwan, China, and Portugal. The growing number of such studies has led to controversy over the ethical and legal implications of this type of phylogenetic analysis for HIV-1. The accuracy of classifying a group of sequences in a phylogeny into monophyletic or paraphyletic groups is highly contingent on the accuracy of tree reconstruction. As described above (see Paraphyly), our statistical confidence in a specific clade in the tree is quantified by the estimated probability that the same clade would be obtained if the tree reconstruction were repeated on an equivalent data set. This support value is not the probability that the clade appears in the "true" tree because this quantity is conditional on the data at hand - however, it is often misinterpreted this way. If the branch separating a nested monophyletic clade of sequences from host A from the paraphyletic group of sequences from host B has a low support value, then the conventional procedure would be to remove that branch from the tree. This would have the result of collapsing the monophyletic and paraphyletic clades so that the tree is inconclusive about either direction of transmission. However, this procedure has not been consistently used in source attribution investigations. For example, the trees displayed in the 2020 study in Taiwan do not support transmission from the defendant to the plaintiff when branches with low support (<80%) are collapsed. Moreover, the result can vary with the region of the virus genome targeted for sequencing. The use of paraphyly to infer the direction of transmission was recently evaluated on a prospective cohort of HIV serodiscordant couples (where one partner was HIV positive at the start of the study). Applying the paraphyly method to next-generation sequence data generated from samples obtained from 33 pairs where the HIV-negative partner became infected over the course of the study, the authors found that the direction of transmission was incorrectly reconstructed in about 13% to 21% of cases, depending on which sequences were analyzed. However, a follow-up study involving many of the same authors used a more comprehensive sequencing method to cover the full virus genome in depth from all host individuals, lowering the percentage of misclassified cases to 3.1%. 
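The effect of collapsing poorly supported branches on this kind of inference can be illustrated with a small, self-contained sketch. This is a toy example only, not the formal framework used in the studies cited above; the tree, host labels, support values and the 80% cutoff are all hypothetical.

# Toy illustration: collapsing low-support branches can turn an apparently
# "paraphyletic donor / nested monophyletic recipient" pattern into a polytomy
# that is uninformative about the direction of transmission.
# Trees are nested dicts; all labels and support values are hypothetical.

def leaves(node):
    if "host" in node:
        return [node["host"]]
    return [h for child in node["children"] for h in leaves(child)]

def clades(node, is_root=True):
    """Yield the host labels of every non-root internal clade."""
    if "host" in node:
        return
    if not is_root:
        yield leaves(node)
    for child in node["children"]:
        yield from clades(child, is_root=False)

def collapse(node, threshold=80):
    """Dissolve internal nodes whose support is below the threshold."""
    if "host" in node:
        return dict(node)
    kids = []
    for child in node["children"]:
        c = collapse(child, threshold)
        if "host" not in c and c["support"] < threshold:
            kids.extend(c["children"])
        else:
            kids.append(c)
    return {"support": node["support"], "children": kids}

def nested_recipient(tree, donor="B", recipient="A"):
    """True if some non-root clade contains every recipient sequence plus some,
    but not all, donor sequences: the nesting pattern read as consistent with
    donor-to-recipient transmission."""
    n_recip = leaves(tree).count(recipient)
    n_donor = leaves(tree).count(donor)
    for c in clades(tree):
        if c.count(recipient) == n_recip and 0 < c.count(donor) < n_donor:
            return True
    return False

# Hypothetical tree: sequence B1 is sister to a weakly supported (60%) clade
# that contains B2 together with a strongly supported (95%) clade of A1 and A2.
tree = {"support": 99, "children": [
    {"host": "B"},
    {"support": 60, "children": [
        {"host": "B"},
        {"support": 95, "children": [{"host": "A"}, {"host": "A"}]},
    ]},
]}

print(nested_recipient(tree))                 # True: pattern suggests B -> A
print(nested_recipient(collapse(tree, 80)))   # False: inconclusive once the
                                              # 60%-support branch is collapsed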
Forensic applications of phylodynamics A common feature of both clustering and paraphyly methods is that neither approach explicitly tests the hypothesis that an infection was directly transmitted from a specific source population or individual to the recipient. Phylodynamic methods attempt to overcome the discordance between the pathogen phylogeny and the underlying transmission history by modeling the processes that contribute to this discordance, such as the evolution of pathogen populations within each host. The development of phylodynamic methods for source attribution has been a rapidly expanding area, with a large number of published studies and associated software released since 2014 (see Software). Because these methods have tended to be applied to other infectious diseases including influenza A virus, foot-and-mouth disease virus and Mycobacterium tuberculosis, they have so far avoided the ethical issues of stigma and criminalization associated with HIV-1. However, applications of phylodynamic source attribution to HIV-1 have begun to appear in the literature. For example, in a study based in Alberta, Canada, the investigators used a phylodynamic method (TransPhylo ) to reconstruct transmission events among patients receiving treatment at their clinic from HIV-1 sequence data. Although the program TransPhylo attempts, by default, to estimate the proportion of infections that are unsampled, the investigators fixed this proportion to 1%. By so doing, their analysis carried the unrealistic assumption that nearly every person living with HIV-1 in their regional epidemic (comprising at least 1,800 people) was represented in their data set of 139 sequences. 2010 cholera outbreak in Haiti In the aftermath of a magnitude 7.0 earthquake that struck Haiti in 2010, there was a large-scale outbreak of cholera, a gastrointestinal infection caused by the bacterium Vibrio cholerae. Nearly 800,000 Haitians became infected and nearly 10,000 died in one of the most significant outbreaks of cholera in modern history. Initial microbial subtyping using pulsed-field gel electrophoresis indicated that the outbreak was most genetically similar to cholera strains sampled in South Asia. In order to more comprehensively map the plausible source of infection, cholera strains from Southern Asia and South America were compared to the strains sampled from the Haitian outbreak. Whole genome sequences taken from cases in Haiti shared more sites in common with the sequences taken from South Asia (i.e., Nepal and Bangladesh) than those in geographic areas more immediate to Haiti. Direct comparisons were also made between the cholera strains taken from three Nepalese soldiers and three Haitian locals, which were nearly identical in genome sequence, forming a phylogenetic cluster. Based on the evidence gathered by phylogenetic source attribution studies, the role of Nepalese soldiers who were part of the United Nations Stabilization Mission to Haiti (MINUSTAH) in this outbreak was officially recognized by the United Nations in 2016. 2019/2020 novel coronavirus outbreak In December 2019, an outbreak of 27 cases of viral pneumonia was reported in association with a seafood market in Wuhan, China. Known respiratory viruses including influenza A virus, respiratory syncytial virus and SARS coronavirus were soon ruled out by laboratory testing. On January 10, 2020, the genome sequence of the novel coronavirus, most closely related to bat SARS-coronaviruses, was released into the public domain. 
Despite unprecedented quarantine measures, the virus (eventually named SARS-CoV-2) spread to other countries including the United States, with global prevalence exceeding 556 million confirmed cases as of July 15, 2022. This outbreak spurred an unprecedented level of epidemiological and genomic data sharing and real-time analysis, which was often communicated by social media prior to peer review. Much of this knowledge translation was mediated through the open-source project Nextstrain that performs phylogenetic analyses on pathogen sequence data as they become available on public and access-restricted databases, and uses the results to update web documents in real time. On March 4, 2020, Nextstrain developers released a phylogeny in which a SARS-CoV-2 genome that was isolated from a German patient occupied an ancestral position relative to a monophyletic clade of sequences sampled from Europe and Mexico. Users of the Twitter social media platform soon commented on the related post from Nextstrain that onward transmission from the German individual seemed to have "led directly to some fraction of the widespread outbreak circulating in Europe today". These comments were soon followed by criticism from other users that attributing the outbreak in Europe to the German patient as the source individual was drawing conclusions about the directionality of transmission from an incompletely sampled tree. In other words, the tree was reconstructed from a highly incomplete sample of cases from the ongoing outbreak, and the addition of other sequences had a substantial probability of modifying the inferred relationship between the German sequence and the clade in question. Nevertheless, the interpretation attributing the European outbreak to a German source propagated through social media, causing some users to call on Germany to apologize. Software There are numerous computational tools for source attribution that have been published, particularly for phylodynamic methods. Table 1 provides a non-exhaustive listing of some of the software in the public domain. Several of these programs are implemented within the Bayesian software package BEAST, including SCOTTI, BadTrIP, and beastlier. This listing does not include clustering methods, which are not designed for the purpose of source attribution, but may be used to develop microbial subtype definitions — clustering methods have previously been reviewed in molecular epidemiology literature Sources References Epidemiology Computational biology
Source attribution
[ "Biology", "Environmental_science" ]
12,064
[ "Epidemiology", "Environmental social science", "Computational biology" ]
72,346,236
https://en.wikipedia.org/wiki/From%20Zero%20to%20Infinity
From Zero to Infinity: What Makes Numbers Interesting is a book in popular mathematics and number theory by Constance Reid. It was originally published in 1955 by the Thomas Y. Crowell Company. The fourth edition was published in 1992 by the Mathematical Association of America in their MAA Spectrum series. A K Peters published a fifth "Fiftieth anniversary edition" in 2006. Background Reid was not herself a professional mathematician, but came from a mathematical family that included her sister Julia Robinson and brother-in-law Raphael M. Robinson. She had worked as a schoolteacher, but by the time of the publication of From Zero to Infinity she was a "housewife and free-lance writer". She became known for her many books about mathematics and mathematicians, aimed at a popular audience, of which this was the first. Reid's interest in number theory was sparked by her sister's use of computers to discover Mersenne primes. She published an article on a closely related topic, perfect numbers, in Scientific American in 1953, and wrote this book soon afterward. Her intended title was What Makes Numbers Interesting; the title From Zero to Infinity was a change made by the publisher. Topics The twelve chapters of From Zero to Infinity are numbered by the ten decimal digits, e (Euler's number, approximately 2.71828), and ℵ₀ (aleph-null), the smallest infinite cardinal number. Each chapter's topic is in some way related to its chapter number, with a generally increasing level of sophistication as the book progresses: Chapter 0 discusses the history of number systems, the development of positional notation and its need for a placeholder symbol for zero, and the much later understanding of zero as being a number itself. It discusses the special properties held by zero among all other numbers, and the concept of indeterminate forms arising from division by zero. Chapter 1 concerns the use of numbers to count things, arithmetic, and the concepts of prime numbers and integer factorization. The topics of Chapter 2 include binary representation, its ancient use in peasant multiplication and in modern computer arithmetic, and its formalization as a number system by Gottfried Leibniz. More generally, it discusses the idea of number systems with different bases, and specific bases including hexadecimal. Chapter 3 returns to prime numbers, including the sieve of Eratosthenes for generating them as well as more modern primality tests. Chapter 4 concerns square numbers, the observation by Galileo that squares are equinumerous with the counting numbers, the Pythagorean theorem, Fermat's Last Theorem, and Diophantine equations more generally. Chapter 5 discusses figurate numbers, integer partitions, and the generating functions and pentagonal number theorem that connect these two concepts. In Chapter 6, Reid brings in the material from her earlier article on perfect numbers (of which 6 is the smallest nontrivial example), their connection to Mersenne primes, the search for large prime numbers, and Reid's relatives' discovery of new Mersenne primes. Mersenne primes are the primes one unit less than a power of two. Chapter 7 instead concerns the primes that are one more than a power of two, the Fermat primes, and their close connection to constructible polygons. The heptagon, with seven sides, is the smallest regular polygon that is not constructible, because seven is neither a Fermat prime nor a product of a power of two and distinct Fermat primes. Chapter 8 concerns the cubes and Waring's problem on representing integers as sums of cubes or other powers. 
The topic of Chapter 9 is modular arithmetic, divisibility, and their connections to positional notation, including the use of casting out nines to determine divisibility by nine. In Chapter e, From Zero to Infinity shifts from the integers to irrational numbers, complex numbers, logarithms, and Euler's formula. It connects these topics back to the integers through the theory of continued fractions and the prime number theorem. The final chapter, Chapter ℵ₀, provides a basic introduction to aleph numbers and the theory of infinite sets, including Cantor's diagonal argument for the existence of uncountable infinite sets. The first edition included only chapters 0 through 9. The chapter on infinite sets was added in the second edition, replacing a section on the interesting number paradox. Later editions of the book were "thoroughly updated" by Reid; in particular, the fifth edition includes updates on the search for Mersenne primes and the proof of Fermat's Last Theorem, and restores an index that had been dropped from earlier editions. Audience and reception From Zero to Infinity has been written to be accessible both to students and non-mathematical adults, requiring only high-school level mathematics as background. Short sets of "quiz questions" at the end of each chapter could be helpful in sparking classroom discussions, making this useful as supplementary material for secondary-school mathematics courses. In reviewing the fourth edition, mathematician David Singmaster describes it as "one of the classic works of mathematical popularisation since its initial appearance", and "a delightful introduction to what mathematics is about". Reviewer Lynn Godshall calls it "a highly-readable history of numbers", "easily understood by both educators and their students alike". Murray Siegel describes it as a must-have for "the library of every mathematics teacher, and university faculty who prepare students to teach mathematics". Singmaster complains only about two pieces of mathematics in the book: the assertion in chapter 4 that the Egyptians were familiar with the 3-4-5 right triangle (still the subject of considerable scholarly debate) and the omission from chapter 7 of any discussion of why classifying constructible polygons can be reduced to the case of prime numbers of sides. Siegel points out another small error, on algebraic factorization, but suggests that finding it could make another useful exercise for students. References Popular mathematics books 1955 non-fiction books Elementary number theory
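As a small illustration of the connection the book draws between Mersenne primes and perfect numbers (the Euclid-Euler correspondence), the following sketch generates the first few even perfect numbers. It illustrates the mathematics discussed in Chapter 6, and is not material taken from the book itself.

# Every Mersenne prime 2**p - 1 yields an even perfect number 2**(p-1) * (2**p - 1),
# and every even perfect number arises this way (Euclid-Euler theorem).

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def divisor_sum(n):
    return sum(d for d in range(1, n) if n % d == 0)

for p in range(2, 8):
    m = 2**p - 1
    if is_prime(m):                              # 2**p - 1 is a Mersenne prime
        perfect = 2**(p - 1) * m
        assert divisor_sum(perfect) == perfect   # check perfection directly
        print(p, m, perfect)                     # prints 6, 28, 496, 8128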
From Zero to Infinity
[ "Mathematics" ]
1,217
[ "Elementary number theory", "Elementary mathematics", "Number theory" ]
72,348,357
https://en.wikipedia.org/wiki/Pandoravirus%20yedoma
Pandoravirus yedoma is a virus, estimated to be 48,500 years old, that was discovered in the deep Siberian permafrost in 2022. The scientists who discovered it also revived 13 new pathogens, characterizing them as 'zombie viruses'. It has been shown to infect amoeba cells (particularly A. castellanii), killing them in the process. References Bamfordvirae Unaccepted virus taxa
Pandoravirus yedoma
[ "Biology" ]
85
[ "Viruses", "Controversial taxa", "Virus stubs", "Unaccepted virus taxa", "Biological hypotheses" ]
72,348,380
https://en.wikipedia.org/wiki/Microbiology%20Outreach%20Prize
The Microbiology Outreach Prize is awarded annually by the Microbiology Society to those who have demonstrated outstanding innovation in microbiology outreach. It was introduced in 2009 and is awarded to individuals or teams. All members can nominate anyone they consider appropriate for this award. The award consists of £500 and an invitation to give a demonstration or talk at the society's Annual Society Showcase in September. The following have been awarded this prize: 2009 Jo Heaton 2010 Gemma Walton 2011 Nicola Stanley-Wall 2012 Marieke Hoeve 2013 James Redfern and Helen Brown 2014 Joana Alves Moscoso 2015 Adam Roberts 2016 Laura Piddock 2017 No award made 2018 Senga Robertson-Albertyn 2019 Matt Hutchings 2020 Sreyashi Basu and Sanjib Bhakta for Project Joi Hok!, a community tuberculosis awareness programme in the UK 2021 Edward Hutchinson 2022 Kalai Mathee and Jonathan Tyrrell References Microbiology awards Outreach Prize 2009 establishments in the United Kingdom Awards established in 2009 Science communication awards British science and technology awards
Microbiology Outreach Prize
[ "Technology" ]
207
[ "Science and technology awards", "Science communication awards" ]
72,349,445
https://en.wikipedia.org/wiki/Ver%C3%B3nica%20Mart%C3%ADnez%20de%20la%20Vega
Verónica Martínez de la Vega y Mansilla is a Mexican mathematician whose research involves topology and hypertopology. She is a researcher in the Institute of Mathematics at the National Autonomous University of Mexico (UNAM). Education and career Martínez de la Vega was born in Mexico City, on January 5, 1971. Her family worked as lawyers, and discouraged her from going into science, but nevertheless she ended up studying mathematics at UNAM, and wrote an undergraduate thesis in topology that she published as a journal paper in Topology and its Applications. Continuing to graduate study in topology at UNAM, she completed her PhD in 2002 with the dissertation Estudio sobre dendroides y compactaciones supervised by Polish topologist Janusz J. Charatonik, becoming his only female doctoral student. After postgraduate research at UAM Iztapalapa and California State University, Sacramento, she joined the Institute of Mathematics as a researcher in 2005. Recognition Martínez de la Vega is a member of the Mexican Academy of Sciences. In 2017 UNAM gave her their "Reconocimiento Sor Juana Inés de la Cruz" award. References 1971 births Living people Mexican mathematicians Mexican women mathematicians Topologists National Autonomous University of Mexico alumni Members of the Mexican Academy of Sciences People from Mexico City
Verónica Martínez de la Vega
[ "Mathematics" ]
258
[ "Topologists", "Topology" ]
72,351,291
https://en.wikipedia.org/wiki/Jongensland
Jongensland (Dutch: "Boys' land") was a playground for Dutch children, built in the aftermath of World War II as an attempt by urban planners and child psychologists to undo fascist ideas of child development. It was established in 1948 on an island in eastern Amsterdam, and allowed boys and girls to build simple structures and play without adult supervision. The island was only accessible by boat. References Further reading Ursula Schulz-Dornburg, Huts, Temples, Castles, Mack, 2022 Parks in Amsterdam Child development Developmental psychology Childhood Playgrounds Play (activity) Outdoor recreation
Jongensland
[ "Biology" ]
118
[ "Behavior", "Developmental psychology", "Behavioural sciences", "Play (activity)", "Human behavior" ]
72,351,880
https://en.wikipedia.org/wiki/HD%20198716
HD 198716, also known as HR 7987 or 33 G. Microscopii, is a solitary star located in the southern constellation Microscopium. Eggen (1993) lists it as a member of the Milky Way's old disk population. The object has an apparent magnitude of 5.33, making it faintly visible to the naked eye under ideal conditions. Based on parallax measurements from the Gaia satellite, it is estimated to be 396 light years away from the Solar System. However, it is drifting closer with a somewhat constrained heliocentric radial velocity of . At its current distance, HD 198716's brightness is diminished by 0.1 magnitude due to interstellar dust. This is an evolved red giant star with a stellar classification of K2 III. It has 2.45 times the mass of the Sun but at an age of 622 million years, it has expanded to 23.9 times the Sun's radius. It radiates 160 times the luminosity of the Sun from its photosphere at an effective temperature of , giving it an orange hue. HD 198716 is slightly metal deficient and spins moderately with a projected rotational velocity of . References K-type giants Microscopium Microscopii, 33 CD-40 14078 198716 103127 7987
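As an illustration of how the quoted radius and luminosity constrain the effective temperature through the Stefan-Boltzmann relation (L = 4πR²σT⁴), the sketch below computes the temperature implied by 23.9 solar radii and 160 solar luminosities. The result, roughly 4,200 K, is a back-of-the-envelope figure consistent with a K-type giant, not the catalogued value for this star.

# Effective temperature implied by L = 4*pi*R**2 * sigma * T**4,
# using the luminosity and radius quoted above (in solar units).
T_SUN = 5772.0          # K, IAU nominal solar effective temperature
radius = 23.9           # solar radii
luminosity = 160.0      # solar luminosities

# In solar units: L = R**2 * (T / T_SUN)**4  =>  T = T_SUN * (L / R**2)**0.25
t_eff = T_SUN * (luminosity / radius**2) ** 0.25
print(f"implied effective temperature ~ {t_eff:.0f} K")   # about 4200 K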
HD 198716
[ "Astronomy" ]
275
[ "Microscopium", "Constellations" ]
72,353,274
https://en.wikipedia.org/wiki/Simplicial%20complex%20recognition%20problem
The simplicial complex recognition problem is a computational problem in algebraic topology. Given a simplicial complex, the problem is to decide whether it is homeomorphic to another fixed simplicial complex. The problem is undecidable for complexes of dimension 5 or more. Background An abstract simplicial complex (ASC) is a family of sets that is closed under taking subsets (every subset of a set in the family is also in the family). Every abstract simplicial complex has a unique geometric realization in a Euclidean space as a geometric simplicial complex (GSC), where each set with k elements in the ASC is mapped to a (k-1)-dimensional simplex in the GSC. Thus, an ASC provides a finite representation of a geometric object. Given an ASC, one can ask several questions regarding the topology of the GSC it represents. Homeomorphism problem The homeomorphism problem is: given two finite simplicial complexes representing smooth manifolds, decide if they are homeomorphic. If the complexes are of dimension at most 3, then the problem is decidable. This follows from the proof of the geometrization conjecture. For every d ≥ 4, the homeomorphism problem for d-dimensional simplicial complexes is undecidable. The same is true if "homeomorphic" is replaced with "piecewise-linear homeomorphic". Recognition problem The recognition problem is a sub-problem of the homeomorphism problem, in which one simplicial complex is given as a fixed parameter. Given another simplicial complex as an input, the problem is to decide whether it is homeomorphic to the given fixed complex. The recognition problem is decidable for the 3-dimensional sphere. That is, there is an algorithm that can decide whether any given simplicial complex is homeomorphic to the boundary of a 4-dimensional ball. The recognition problem is undecidable for the d-dimensional sphere for any d ≥ 5. The proof is by reduction from the word problem for groups. From this, it can be proved that the recognition problem is undecidable for any fixed compact d-dimensional manifold with d ≥ 5. As of 2014, it is open whether the recognition problem is decidable for the 4-dimensional sphere. Manifold problem The manifold problem is: given a finite simplicial complex, is it homeomorphic to a manifold? The problem is undecidable; the proof is by reduction from the word problem for groups. References Undecidable problems Simplicial sets
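A minimal sketch of the combinatorial definition above: an abstract simplicial complex can be represented as a set of frozensets, downward closure can be checked directly, and each k-element face corresponds to a (k-1)-dimensional simplex in the geometric realization. The particular complex used here is a hypothetical example, not one from the literature.

from itertools import combinations

# A hypothetical ASC: the edges and vertices of triangle {1,2,3} plus an extra edge {3,4}.
faces = {
    frozenset(s) for s in
    [(1,), (2,), (3,), (4,), (1, 2), (1, 3), (2, 3), (3, 4)]
}

def is_downward_closed(complex_):
    """Check the defining property: every nonempty subset of a face is a face."""
    for face in complex_:
        for k in range(1, len(face)):
            for sub in combinations(face, k):
                if frozenset(sub) not in complex_:
                    return False
    return True

# Each face with k elements realizes a (k-1)-dimensional simplex.
dimension = max(len(f) for f in faces) - 1
f_vector = [sum(1 for f in faces if len(f) == k + 1) for k in range(dimension + 1)]

print(is_downward_closed(faces))   # True
print(dimension, f_vector)         # 1 [4, 4]  (four vertices, four edges)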
Simplicial complex recognition problem
[ "Mathematics" ]
546
[ "Computational problems", "Basic concepts in set theory", "Families of sets", "Undecidable problems", "Simplicial sets", "Mathematical problems" ]
72,358,762
https://en.wikipedia.org/wiki/Adherent%20culture
Adherent cell cultures are a type of cell culture that requires cells to be attached to a surface in order for growth to occur. Most vertebrate-derived cells (with the exception of hematopoietic cells) can be cultured and require a two-dimensional monolayer to facilitate cell adhesion and spreading. Cell samples can be taken from tissue explants or cell suspension cultures. Adherent cell cultures with an excess of nutrient-containing growth medium will continue to grow until they cover the available surface area. Proteases like trypsin are most commonly used to break the adhesion of the cells to the flask. Alternatively, cell scrapers can be used to mechanically break the adhesion if introducing proteases could damage the cell cultures. Unlike suspension cultures, the other main type of cell culture, adherent cultures require regular passaging performed using mechanical or enzymatic dissociation. The culture can be visualized using an inverted microscope; however, the growth of adherent cultures is dependent on the available surface area. For this reason, adherent cell cultures are not commonly used to obtain a high yield of cells; instead, suspension cultures are preferred. Methods and Maintenance Isolating Cells Primary cells used for adherent cultures must be isolated from a subject and treated, or may be transferred from pre-existing cell lines. Adherent cells must first be transferred to a monolayer attached to a surface, and are categorized by their morphological differences. Fibroblast-like adherent cells have a linear and stretched shape, and migrate when attached to the monolayer. Epithelial-like adherent cells have a wider and polygonal shape, and do not migrate when attached to the monolayer. Once cells are properly isolated from their source and are transferred to the media, cell passaging can be conducted. Adherent Cell Culture Maintenance for Laboratories While passaging adherent cell cultures, spent media must be repeatedly pipetted out and replaced with fresh media. The culture vessel can also be repeatedly tapped, which should be combined with either mechanical or enzymatic methods to facilitate cell detachment. The culture vessel can also be centrifuged, forming a supernatant that can be extracted using a pipette. Cells must be fed 2 to 3 times per week, and must be cultured at an appropriate temperature, humidity, light, and pH in order to ensure optimal cell proliferation. Passaging (subculturing) Cells While adherent cultures share similarities with suspension cultures, there are many key differences in how they are cultured and passaged. For adherent culture passaging, the spent media is first pipetted out of the flask containing cells as a waste product. The cells remain adhered to the culture vessel in the medium that was not removed, and a series of wash and incubation steps are then necessary to detach them. For the wash steps, a balanced salt solution is poured to the side opposite the cell culture, and the culture vessel is then shaken before draining the balanced salt solution. Heat is applied to the culture vessel for the incubation steps, causing protein denaturation and the gradual separation of the cells from the media. Similarly to suspension cultures, the total number of cells can be calculated using a hemocytometer and trypan blue. Commercial Applications and Limitations Adherent cultures are most commonly used for cytology and for harvesting cellular products on a small scale. 
Since their growth is limited to 2D, it is difficult to use adherent cultures to study in-vivo cell structure and function. Research is being done to grow adherent cell cultures using 3D microcarriers in order to avoid this limitation and to use adherent cell cultures for drug testing. Commercial applications of adherent cultures include: Producing adherent cells that create proteins of interest used for vaccine development. Adherent cells used in conjunction with viral vectors for cell and gene therapy. Delivering micro and nanotechnology to adherent cells in vitro. Adjusting adherent cell morphology for cancer cell screening. References Cell culture
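The cell count mentioned in the passaging section above follows standard hemocytometer arithmetic: the concentration in cells/mL is the mean count per large square multiplied by the dilution factor and by 10^4, because each large square of a standard hemocytometer holds 0.1 µL. The sketch below shows the calculation with made-up counts; it is a generic worked example, not data from any particular protocol.

# Worked example of hemocytometer counting with trypan blue exclusion.
# Counts below are hypothetical; each large square corresponds to 0.1 microlitre,
# hence the factor of 1e4 to convert counts per square into cells per mL.

viable_counts = [52, 48, 55, 50]    # unstained (viable) cells per large square
dead_counts = [5, 4, 6, 5]          # blue-stained (dead) cells per large square
dilution_factor = 2                 # e.g. 1:1 dilution of sample in trypan blue

mean_viable = sum(viable_counts) / len(viable_counts)
mean_dead = sum(dead_counts) / len(dead_counts)

viable_per_ml = mean_viable * dilution_factor * 1e4
viability = mean_viable / (mean_viable + mean_dead)

print(f"viable cells/mL: {viable_per_ml:.2e}")   # ~1.0e6 cells/mL
print(f"viability: {viability:.1%}")             # ~91%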
Adherent culture
[ "Biology" ]
814
[ "Model organisms", "Cell culture" ]
72,359,343
https://en.wikipedia.org/wiki/Bias%20in%20the%20introduction%20of%20variation
Bias in the introduction of variation ("arrival bias") is a theory in the domain of evolutionary biology that asserts biases in the introduction of heritable variation are reflected in the outcome of evolution. It is relevant to topics in molecular evolution, evo-devo, and self-organization. In the context of this theory, "introduction" ("origination") is a technical term for events that shift an allele frequency upward from zero (mutation is the genetic process that converts one allele to another, whereas introduction is the population genetic process that adds to the set of alleles in a population with non-zero frequencies). Formal models demonstrate that when an evolutionary process depends on introduction events, mutational and developmental biases in the generation of variation may influence the course of evolution by a first come, first served effect, so that evolution reflects the arrival of the likelier, not just the survival of the fitter. Whereas mutational explanations for evolutionary patterns are typically assumed to imply or require neutral evolution, the theory of arrival biases distinctively predicts the possibility of mutation-biased adaptation. Direct evidence for the theory comes from laboratory studies showing that adaptive changes are systematically enriched for mutationally likely types of changes. Retrospective analyses of natural cases of adaptation also provide support for the theory. This theory is notable as an example of contemporary structuralist thinking, contrasting with a classical functionalist view in which the course of evolution is determined by natural selection (see ). History The theory of biases in the introduction process as a cause of orientation or direction in evolution has been explained as the convergence of two threads. The first, from theoretical population genetics, is the explicit recognition by theoreticians (toward the end of the 20th century) that a correct treatment of evolutionary dynamics requires a rate-dependent process of introduction (origination) missing from classical treatments of evolution as a process of shifting frequencies of available alleles. This recognition is evident in the emergence of origin-fixation models that depict evolution as a 2-step process of origination and fixation (by drift or selection), with a rate specified by multiplying a rate of introduction (based on the mutation rate) with a probability of fixation (based on the fitness effect). Origin-fixation models appeared in the midst of the molecular revolution, a half-century after the origins of theoretical population genetics: they were soon widely applied in neutral models for rates and patterns of molecular evolution; their use in models of molecular adaptation was popularized in the 1990s; by 2014 they were described as a major branch of formal theory. The second thread is a long history of attempts to establish the thesis that mutation and development exert a dispositional influence on evolution by presenting options for subsequent functional evaluation, i.e., acting in a manner that is logically prior to selection. Many evolutionary thinkers have proposed some form of this idea. In the early 20th-century, authors such as Eimer or Cope held that development constrains or channels evolution so strongly that the effect of selection is of secondary importance. Early geneticists such as Morgan and Punnett proposed that common parallelisms (e.g., involving melanism or albinism) may reflect mutationally likely changes. 
Expanding on Vavilov's (1922) exploration of this theme, Spurway (1949) wrote that "the mutation spectrum of a group may be more important than many of its morphological or physiological features." Similar thinking featured in the emergence of evo-devo, e.g., Alberch (1980) suggests that "in evolution, selection may decide the winner of a given game but development non-randomly defines the players" (p. 665) (see also ). Thomson (1985), reviewing multiple volumes addressing the new developmentalist thinking— a book by Raff and Kaufman (1983) and conference volumes edited by Bonner (1982)   and Goodwin, et al (1983) — wrote that "The whole thrust of the developmentalist approach to evolution is to explore the possibility that asymmetries in the introduction of variation at the focal level of individual phenotypes, arising from the inherent properties of developing systems, constitutes a powerful source of causation in evolutionary change" (p. 222). Likewise, the paleontologists Elisabeth Vrba and Niles Eldredge summarized this new developmentalist thinking by saying that "bias in the introduction of phenotypic variation may be more important to directional phenotypic evolution than sorting by selection." However, the notion of a developmental influence on evolution was rejected by Mayr and others such as Maynard Smith ("If we are to understand evolution, we must remember that it is a process which occurs in populations, not in individuals.") and Bruce Wallace ("problems concerned with the orderly development of the individual are unrelated to those of the evolution of organisms through time"), as being inconsistent with accepted concepts of causation. This conflict between evo-devo and neo-Darwinism is the focus of a book-length treatment by philosopher Ron Amundson (see also Scholl and Pigliucci, 2015 ). In the theory of evolution as shifting gene frequencies that prevailed at the time, evolutionary causes are "forces" that act as mass pressures (i.e., the aggregate effects of countless individual events) shifting allele frequencies (see Ch. 4 of ), thus development did not qualify as an evolutionary cause. A widely cited 1985 commentary on "developmental constraints" advocated the importance of developmental influences, but did not anchor this claim with a theory of causation, a deficiency noted by critics, e.g., Reeve and Sherman (1993) defended the adaptationist program (against the developmentalists and the famous critique of adaptationism by Gould and Lewontin), arguing that the "developmental constraints" argument simply restates the idea that development shapes variation, without explaining how such preferences prevail against the pressure of selection. Mayr (1994) insisted that developmentalist thinking was "hopelessly mixed up" because development is a proximate cause and not an evolutionary one. In this way, developmentalist thinking was received in the 1980s and 1990s as speculation without a rigorous grounding in causal theories, an attitude that persists (e.g., Lynch, 2007 ). In response to these rebukes, developmentalists concluded that population genetics cannot provide a complete account of evolutionary causation: instead, a dry statistical account of changes in gene frequencies from population genetics must be supplemented with a wet biological account of changes in developmental-genetic organization (called "lineage explanation" in ). 
The beliefs that (1) developmental biology was never integrated into the Modern Synthesis and (2) population genetics must be supplemented with alternative narratives of developmental causation, are now widely repeated in the evo-devo literature and are given explicitly as motivations for reform via an Extended Evolutionary Synthesis. The proposal to recognize the introduction process formally as an evolutionary cause provides a different resolution to this conflict. Under this proposal, the key to understanding the structuralist thesis of the developmental biologists was a previously missing population-genetic theory for the consequences of biases in introduction. The authors criticized classical reasoning for framing the efficacy of variational tendencies as a question of evolution by mutation pressure, i.e., the transformation of populations by recurrent mutation. They argued that, if generative biases are important, this cannot be because they out-compete selection as forces under the shifting-gene-frequencies theory, but because they act prior to selection, via introduction. Thus the theory of arrival biases proposes that the generative dispositions of a developmental-genetic system (i.e., its tendencies to respond to genetic perturbation in preferential ways) shape evolution by mediating biases in introduction. The theory, which applies to both mutational and developmental biases, addresses how such preferences can be effective in shaping the course of evolution even while strong selection is at work. Systematic evidence for predicted effects of introduction biases first began to appear from experimental studies of adaptation in bacteria and viruses. Since 2017, this support has widened to include systematic quantitative results from laboratory adaptation, and similar but less extensive results from the retrospective analysis of natural adaptations traced to the molecular level (see below). The empirical case that biases in mutation shape adaptation is considered to be established for practical purposes such as evolutionary forecasting (e.g., ). However, the implications of the theory have not been tested critically in regard to morphological and behavioral traits in animals and plants that are the traditional targets of evolutionary theorizing (see Ch. 9 of ). Thus, the relevance of the theory to molecular adaptation has been established, but the significance for evo-devo remains unclear. The theory sometimes appears associated with calls for reform from advocates of evo-devo (e.g.,), though it has not yet appeared in textbooks or in broad treatments of challenges in evolutionary biology (e.g.,). Simple model The kind of dual causation proposed by the theory has been explained with the analogy of "Climbing Mount Probable." Imagine a robot on a rugged mountain landscape, climbing by a stochastic 2-step process of proposal and acceptance. In the proposal step, the robot reaches out with its limbs to sample various hand-holds, and in the acceptance step, the robot commits and shifts its position. If the acceptance step is biased to favor higher hand-holds, the climber will ascend. But one also may imagine a bias in the proposal step, e.g., the robot may sample more hand-holds on the left than on the right. Then the dual proposal-acceptance process will show both an upward bias due to a bias in acceptance, and a leftward bias due to a bias in proposal. 
If the landscape is rugged, the ascent will end on a local peak that (due to the proposal bias) will tend to be to the left of the starting point. On a perfectly smooth landscape, the climber will simply spiral to the left until the single global peak is reached. In either case, the trajectory of the climber is subject to a dual bias. These two biases are not pressures competing to determine an allele frequency: they act at different steps, along non-identical dimensions. The dual effect predicted by the theory was demonstrated originally with a population-genetic model of a 1-step adaptive walk with 2 options, i.e., the climber faces two upward choices, one with a higher selection coefficient and the other with a higher mutation rate. A key feature of the model is that neither of the alternatives is present in the initial population: they must be introduced. In simulated adaptation under this model, the population frequently reaches fixation for the mutationally favored allele, even though it is not the most fit option. The form of the model is agnostic with respect to whether the biases are mutational or developmental. Subsequent theoretical work (below) has generalized the theory of one-step walks, and also considered longer-term adaptive walks on complex fitness landscapes. The general implication for parallel evolution is that biases in introduction may contribute strongly to parallelism. The general implication for the directionality and repeatability of adaptive walks is simply that some paths are more evolutionarily favorable due to being mutationally favorable. The general implication for the long-term predictability of outcomes, e.g., particular phenotypes, is that some phenotypes are more findable than others due to mutational effects, and such effects may strongly shape the distribution of evolved phenotypes. The application of the theory to problems in evo-devo and self-organization relies formally on the concept of a genotype-phenotype (GP) map. The genetic code, for example, is a GP map that induces asymmetries in mutationally accessible phenotypes. Consider evolution from the Met (amino acid) phenotype encoded by the ATG (codon) genotype. A phenotypic shift from Met to Val requires an ATG to GTG mutation; a shift from Met to Leu can occur by 2 different mutations (ATG to CTG or TTG); a shift from Met to Ile can occur by 3 different mutations (to ATT, ATC, or ATA). If each type of genetic mutation has the same rate, i.e., with no mutation bias per se, the GP map induces 3 different rates of introduction of the alternative phenotypes Val, Leu and Ile. Due to this bias in introduction, evolution from Met to Ile is favored, and this is not due to a mutational bias (in the sense of a bias reflecting the mechanisms of mutagenesis), but rather an asymmetric mapping of phenotypes to mutationally accessible genotypes. Results of theoretical modeling One-step adaptive walks As noted above, in the simplest case of the "Climbing Mount Probable" effect, one may consider a climber facing just two fixed choices: up and to the left, or up and to the right. This case was originally modeled using simulations and has since been given a more complete analytical treatment. In general, the limiting behavior of evolution as the supply of new mutations becomes arbitrarily small is called "origin-fixation" dynamics. 
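Before turning to the origin-fixation formalism just introduced, the codon example above can be made concrete with a short sketch. It enumerates the single-nucleotide neighbours of ATG under the standard genetic code and counts how many encode each alternative amino acid; assuming equal rates for all single-nucleotide changes, these counts are proportional to the rates of introduction of the alternative phenotypes.

# Count, for each amino acid, how many single-nucleotide mutations of ATG (Met)
# produce it under the standard genetic code. Equal mutation rates per change
# are assumed, so the counts are proportional to rates of introduction.
from collections import Counter

bases = "TCAG"
codons = [a + b + c for a in bases for b in bases for c in bases]
amino_acids = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
code = dict(zip(codons, amino_acids))   # standard translation table ('*' = stop)

start = "ATG"
neighbour_counts = Counter()
for pos in range(3):
    for b in bases:
        if b != start[pos]:
            mutant = start[:pos] + b + start[pos + 1:]
            neighbour_counts[code[mutant]] += 1

print(neighbour_counts)
# Ile is reachable by 3 of the 9 single-nucleotide changes, Leu by 2, Val by 1
# (Thr, Lys and Arg are each reachable by 1), reproducing the asymmetry above.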
The origin-fixation approximation for choosing between the left and right options in the Yampolsky-Stoltzfus model is given by the ratio P_left / P_right = (μ_left · s_left) / (μ_right · s_right), where μ_left (or μ_right) and s_left (or s_right) are the mutation rate and selection coefficient for the left (or right) alternative, and assuming that the probability of fixation of a new beneficial mutation is approximately proportional to its selection coefficient. In the Yampolsky-Stoltzfus model, this approximation is good when the mutation supply is small. For 1-step walks under origin-fixation conditions, the behavior given by this ratio generalizes from 2 to many alternatives. For instance, Cano, et al. (2022) consider a model gene with many different beneficial mutations, and under low mutation supply, mutation bias has a proportional effect on the spectrum of adaptive changes. When the mutation supply is not very small, different beneficial alleles may be present simultaneously, competing and slowing down adaptation, an effect known as clonal interference. Clonal interference reduces the effect of mutation bias in models of evolution in finite genetic spaces: alleles favored by mutation still tend to arrive sooner, but before they reach fixation, later-arising alleles that are more beneficial can out-compete them, enhancing the effect of fitness differences. Under the most extreme condition when all possible beneficial alleles are reliably present in a large population, the most fit allele wins deterministically and there is no room for an effect of mutation bias. Stated differently, when all the beneficial alleles are present and selection determines the winner, the chance of success is 1 for the most fit allele, and 0 for all other alleles. Thus, in a gene model with a finite set of beneficial mutations, the influence of mutation bias is expected to be strongest when the mutation supply is small, but to fall off as the mutation supply becomes large. The influence of mutation under varying degrees of clonal interference can be quantified precisely using the regression method of Cano, et al. (2022). Suppose that the expected number of changes of a given class of mutational changes defined by starting and ending states is directly proportional to the product of (1) the frequency of the starting state and (2) the mutation rate raised to the power of β, that is, E[count] ∝ f · μ^β. Taking the logarithm of this equation gives log(E[count]) = β·log(μ) + log(f) + b, where b is the logarithm of the constant of proportionality. Thus, when β is unknown, it may be estimated as the coefficient for the regression of log(counts) on log(expected counts). Simulations of a gene model show a range from β ≈ 0 under high mutation supply to β ≈ 1 when the mutation supply is low. While this approach was developed to assess how the mutation spectrum influenced adaptive missense changes (defined by a starting codon and an ending amino acid), the equation reflects a generic framework applicable to any mutationally defined classes of change. Note that these considerations apply to finite genetic spaces. In an infinite genetic space, clonal interference still slows down the rate of adaptation due to competition, but it does not prevent an effect of mutation bias because there are always mutationally favored alternative alleles among the most-fit class of alleles. Contribution of mutation to parallelism In general, if there is some set of n possible steps each with a probability p_i, then the chance of parallelism (the probability that two independent instances of evolution take the same step) is given by summing the squares, Σ p_i². It follows from the definition of the variance, or of the coefficient of variation C_v, that Σ p_i² = (1 + C_v²) / n. That is, parallelism is increased by anything that decreases the number of choices n or increases the heterogeneity in their chances (as measured by the variance or C_v). 
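A short numerical sketch of the sum-of-squares formula just given (the rate and selection values are arbitrary illustrations, not measured quantities): with the same number of options, making the origin-fixation probabilities more heterogeneous, for example through one mutationally favored path, raises the chance that two independent replicates take the same step.

# Chance of parallelism = sum of squared step probabilities.
# Under origin-fixation dynamics each probability is proportional to mu_i * s_i.
# All mutation rates and selection coefficients below are arbitrary examples.

def step_probabilities(mus, sels):
    weights = [m * s for m, s in zip(mus, sels)]
    total = sum(weights)
    return [w / total for w in weights]

def chance_of_parallelism(probs):
    return sum(p * p for p in probs)

sels = [0.010, 0.012, 0.009, 0.011]          # similar selection coefficients

uniform_mu = [1e-8] * 4                      # no mutational heterogeneity
hotspot_mu = [1e-8, 1e-8, 1e-8, 2e-7]        # one mutationally favored path

for label, mus in [("uniform mutation rates", uniform_mu),
                   ("one mutational hotspot", hotspot_mu)]:
    p = step_probabilities(mus, sels)
    print(label, round(chance_of_parallelism(p), 3))
# With 4 nearly equiprobable options the chance is ~0.25; the hotspot pushes it
# toward 1 (about 0.77 with these numbers).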
This result validates the intuition of Shull that "It strains one's faith in the laws of chance to imagine that identical changes should crop out again and again if the possibilities are endless and the probabilities equal" (p. 448). To the extent that heterogeneity in p_i reflects heterogeneity in mutational chances, mutation contributes to parallelism. In particular, for the case of origin-fixation dynamics, each value of p_i is a product of a mutational origin term and a fixation term, so that heterogeneity in either contributes similarly to the chances of parallelism, and it is possible to partition effects of mutation and selection in accounting for the repeatability of evolution. Under origin-fixation conditions, and assuming that the probability of fixation is proportional to the selection coefficient and that mutation rates and selection coefficients are uncorrelated across the alternatives, it follows that Σ p_i² = (1 + C_μ²)(1 + C_s²) / n, where C_s and C_μ are coefficients of variation for the vectors of selection coefficients and mutation rates, respectively. Numeric examples in the cited work suggest that mutation sometimes contributes more to parallelism than selection, although the authors note that the n in the denominator above confounds effects of mutation and selection in a hidden way (because, in practice, n reflects the set of paths that are sufficiently favored by selection and sufficiently mutationally likely to be observed). Longer-term effects: trends, navigability, and findability For a systematic view of long-term effects of evolution in discrete genotypic space, consider the 4 perspectives below, focusing on the influence of a mutation spectrum (characteristic for some evolving system) on various ways of defining the chances of evolution: A point in genotype space with access to other nearby points. From such a point, there are 0 or more upward steps or paths that differ in mutational favorability (as a function of the mutation spectrum) and in fitness benefits. The evolvability from that point is a function of this set of steps. Under simple conditions, each step has a probability and the repeatability of evolution is derived by squaring these values. The non-empty set of steps in a path or traverse of increasing fitness, i.e., an adaptive path (one could also consider a neutral path or a path of non-decreasing fitness). Each path has a length and a composition in terms of the fitness benefits of steps, and the mutational favorability of steps. The likelihood that evolution follows a given path must depend in some way on these properties, in relation to other possible paths. The aggregated set of paths (the "basin of attraction") that lead to a given destination such as a peak or plateau of fitness, or the set of steps into a phenotypic network. Any destination is discoverable via 0 or more upward paths that connect it with lower points. The points in this collection may also have paths to other destinations. For a given destination, the evolvability-to or findability depends on this collection of paths relative to competing paths. Each collection has some total size, i.e., there may be many or few paths leading to a destination. A fitness landscape that may include many peaks and paths. Depending on its collection of peaks and paths, a landscape may be more or less navigable, in the sense of having a high chance of finding a peak of high fitness from a randomly chosen starting point. The navigability of a landscape will depend on the mutation spectrum in relation to the composition of paths in the landscape. Theoretical results relating to each of these perspectives are available. 
For instance, in a simulation of adaptive walks of protein-coding genes in the context of an abstract NK landscape, the effect of a GC-AT mutation bias is to alter the protein sequence composition in a manner qualitatively consistent with the analogy of Climbing Mount Probable (above). Each adaptive walk begins with a random sequence and ends on some local peak; the direction of the walk and the final peak depend on the mutation bias. For instance, adaptive walks under a mutation bias toward GC result in proteins that have more of the amino acids with GC-rich codons (Gly, Ala, Arg, Pro), and likewise, adaptive walks under AT bias result in proteins with more of the amino acids with AT-rich codons (Phe, Tyr, Met, Ile, Asn, Lys). On a rough landscape, the initial effect is similar, but the adaptive walks are shorter. That is, the mutation bias imposes a preference (on the adaptive walks) for steps, paths, and local peaks that are enriched in outcomes favored by the mutation bias. This illustrates the concept of a directional trend in which the system moves cumulatively in a particular direction along an axis of composition. The influence of transition-transversion bias has been explored using empirical fitness landscapes for transcription factor binding sites. Each landscape is based on generating thousands of different 8-nucleotide fragments and measuring how well they bind to a particular transcription factor. Each peak on each landscape is accessible by some set of paths made of steps that are nucleotide changes, each one being either a transition or a transversion. Among all possible genetic changes, the ratio of transitions to transversions is 1:2. However, the collection of paths leading to a given peak (on a given empirical landscape) has a specific transition-transversion composition that may differ from 1:2. Likewise, any evolving system has a particular transition-transversion bias in mutation. The more closely the mutation bias (of the evolving system) matches the composition bias (of the landscape), the more likely that the evolving system will find the peak. Thus, for a given evolving system with its characteristic transition-transversion bias, some landscapes are more navigable than others. Navigability is maximized when the mutation bias of the evolving system matches the composition bias of the landscape. Finally, rather than organizing genotypes by fitness (in terms of peaks, upward paths, and collections of paths leading to a peak), we can organize genotypes by phenotype using a genotype–phenotype map. A given phenotype identifies a network in genotype-space including all of the genotypes with that phenotype. An evolving system may diffuse neutrally within the network of genotypes with the same phenotype, but conversions between phenotypes are assumed to be non-neutral. Each phenotypically defined network has a findability that is, as a first approximation, a function of the number of genotypes in the network. For instance, using the canonical genetic code as a genotype-phenotype map, the phenotype Leucine has 6 codons whereas Tryptophan has 1: Leucine is more findable because there are more mutational paths from non-Leucine genotypes. This idea can be applied to the way that RNA folds (considered as phenotypes) map to RNA sequences. For instance, evolutionary simulations show that the RNA folds with more sequences are more findable, and this is due to the way that they are over-sampled by mutation. 
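The findability point can also be illustrated with the genetic code itself; the following is a sketch of the general idea, not a reanalysis of the RNA-fold or regulatory-network studies cited above. Amino acids with larger codon sets, such as leucine, are reachable by many more single-nucleotide mutations from the rest of the code than amino acids with a single codon, such as tryptophan.

# Count single-nucleotide mutational paths leading into each amino acid's codon
# set from codons of other amino acids, as a crude index of "findability".
from collections import Counter

bases = "TCAG"
codons = [a + b + c for a in bases for b in bases for c in bases]
amino_acids = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
code = dict(zip(codons, amino_acids))   # standard translation table ('*' = stop)

def neighbours(codon):
    for pos in range(3):
        for b in bases:
            if b != codon[pos]:
                yield codon[:pos] + b + codon[pos + 1:]

codon_count = Counter(code.values())
paths_in = Counter()
for codon, aa in code.items():
    for nb in neighbours(codon):
        target = code[nb]
        if target != aa:                 # only count changes of phenotype
            paths_in[target] += 1

for aa in ("L", "W"):
    print(aa, "codons:", codon_count[aa], "mutational paths in:", paths_in[aa])
# Leucine: 6 codons and 36 incoming paths; tryptophan: 1 codon and 9 incoming paths.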
A similar point has been made in regard to substructures of regulatory networks. The above results apply, as before, to finite spaces. In infinite spaces, the set of remaining beneficial mutations to be explored is infinite and includes an infinite supply of mutationally favored and mutationally disfavored options. Therefore evolution in infinite spaces can continue forever in the mutationally favored direction with no diminution of the mutational effect that applies in the short term; the origin-fixation ratio given above, for instance, applies unchanged in such an infinite space. The model of Gomez, et al. (2020) allows unlimited adaptation via two traits, one with a higher rate of beneficial mutation, and the other with larger selective benefits. In this model, mutation bias continues to be important in long-term evolution even when mutation supply is very high. Distinctive implications The theory of biases in the introduction process as a cause of orientation or direction in evolution may be contrasted with other theories that have been used by evolutionary biologists to reason about the role of variation in evolution: Organisms respond adaptively to conditions of life, and these responses are inherited (Lamarckism). This theory is generally considered to lack a mechanistic basis. Variation supplies raw material shaped into adaptations by selection (neo-Darwinism). In this theory, "selection is the only direction-giving factor in evolution", while variation is a material cause, merely a passive source of substance, not a source of form, initiative, or direction (these being supplied by selection), so that the laws of variation "bear no relation" to the structures built by selection. Development imposes prior constraints on form. In this folk theory, "selection may decide the winner of a given game but development non-randomly defines the players." This theory appeared in classic arguments from authors such as Eimer and Cope; it re-emerged in developmentalist claims of the 1980s. Mass conversion by mutation pressure transforms a population. The implications of this mode of causation were worked out mainly by Haldane and Kimura, who found it implausible due to requiring high mutation rates unopposed by selection. The amount of standing variation enhances or retards selection-driven shifts in quantitative characters. In evolutionary quantitative genetics, the G matrix (standing variation) is a source of dimensional but not directional asymmetry, depending on the amount of variation available along any given dimension in trait-space. Relative to these theories, the theory of arrival biases has distinctive implications, some of which are supported empirically as described below, e.g., the most frequent outcome of an adaptive process such as the emergence of antibiotic resistance is not necessarily the most beneficial, but is often a moderately beneficial outcome favored by a high rate of mutational origin. Likewise, the theory implies that evolution can have directions that are not adaptive, or tendencies that are not optimal, an implication one commentator on Arthur's book found "disturbing". This theory is defined, not by any particular problem, taxon, level of organization, or field of study, but by a mechanism defined at the level of population genetics, namely the ability of biases in introduction to impose biases on evolution. Some implications are as follows. Effects do not require neutrality or high mutation rates. 
In contrast to the theory of evolution by mutation pressure explored (and rejected) by Haldane and others, variational dispositions under the theory of arrival biases do not depend on neutral evolution and do not require high mutation rates. Graduated biases can have graduated effects. In contrast to what is implied by the language of "constraints" or "limits" employed in historic appeals to internal sources of direction in evolution, the theory of arrival biases is not deterministic and does not require an absolute distinction between possible and impossible forms. Instead, the theory is probabilistic, and graduated biases can have graduated effects. Regime-dependency with regard to population genetics. Under the theory, variational biases do not have a guaranteed effect independent of the details of population genetics. The influence of mutation biases reaches a maximum (proportional influence) under origin-fixation conditions and can disappear almost entirely under high levels of mutation supply. Parity in fixation biases and origination biases (under limiting conditions). In classical neo-Darwinian thinking, selection governs and shapes evolution, whereas variation plays a passive role of supplying materials. By contrast, under limiting origin-fixation conditions, the theory of arrival biases establishes a condition of parity such that (for instance) a 2-fold bias in fixation and a 2-fold bias in introduction both have the same 2-fold effect on the chances of evolution. Generality with regard to sources of variational bias. In the evolutionary literature, mutation biases, developmental "constraints", and self-organization in the sense of findability are all treated as separate topics. Under the theory of arrival biases, these are all manifestations of the same kind of population-genetic mechanism, in which biases in the introduction of variants impose biases on evolution. Any short-term bias is either a mutational bias in the sense of a difference in rates for two fully specified genotypic conversions, or it can be treated as a scheme of differential phenotypic aggregation over genotypes. In addition to these direct implications, some more sophisticated or indirect implications have emerged in the literature. Non-causal associations induced by mutation and selection. Due to a dual dependence on mutation and selection, the distribution of adaptive changes may show non-causal associations of mutation rates and selection coefficients, somewhat akin to Berkson's paradox, as suggested in Ch. 8 of Mutation, Randomness and Evolution and developed in more detail by Gitschlag et al. (2023). Conditions for composition and decomposition of causes. Under limiting origin-fixation conditions, the chances of evolution reflect two factors multiplied together, representing biases in introduction and biases in fixation, as in Eqn (). Thus, conditions exist under which it is possible to quantify and directly compare the dispositional influences of mutation and selection. This approach has already been used in a few empirical arguments addressed below. Biased depletion of the spectrum of beneficial mutations. In any case of a system adapting via mutation and selection, there is some set of possible beneficial mutations, characterized by a distribution of selection coefficients and mutation rates. As adaptation occurs in a mutation-biased manner, this spectrum of possible beneficial mutations is depleted in a biased way.
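A small numerical sketch can illustrate biased depletion (this is an illustration under assumed numbers, not code or data from the studies cited below): beneficial options are fixed in proportion to mutation rate times selection coefficient, so the mutationally favored class, here transitions, is consumed faster than the disfavored class.

import random

rng = random.Random(1)
# hypothetical pool of beneficial options: (class, mutation rate, selection coefficient)
pool = ([('transition', 1e-8, rng.uniform(0.01, 0.05)) for _ in range(20)] +
        [('transversion', 2e-9, rng.uniform(0.01, 0.05)) for _ in range(40)])

def take_step(pool):
    # origin-fixation sampling: probability proportional to (rate) x (benefit)
    weights = [u * s for _, u, s in pool]
    i = rng.choices(range(len(pool)), weights=weights)[0]
    return pool.pop(i)

fixed = [take_step(pool)[0] for _ in range(15)]
print('fixed:', {c: fixed.count(c) for c in ('transition', 'transversion')})
print('remaining:', {c: sum(p[0] == c for p in pool) for c in ('transition', 'transversion')})
# transitions dominate the fixed changes and are preferentially used up, so a
# later shift of the mutation spectrum toward transversions would open access
# to a relatively undepleted set of beneficial options.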
The theory for this depletion is relevant to experimental work showing that "shifts in mutation spectra enhance access to beneficial mutations". That is, the experimentally observed favorability of shifts in mutation spectra depends on a pattern of biased depletion of beneficial mutations that is itself a sign of mutation-biased adaptation. Evidence Evidence for the theory has been summarized recently, e.g., Gomez et al. (2020) present a table listing 8 different studies providing evidence of an effect of mutation bias on adaptation, and Ch. 9 of Mutation, Randomness and Evolution is devoted to empirical support for the theory. Biases in introduction are expected to influence evolution whether neutral or adaptive, but an effect on neutral evolution is not considered intuitively surprising or controversial, and so is not given much attention. Instead, accounts of evidence focus on mutation-biased adaptation, because this highlights how predictions of the theory clash with the classical conception of mutation as a weak pressure easily overcome by selection, per the "opposing pressures" argument of Fisher and Haldane. Direct evidence of causation under controlled conditions Direct evidence that the spectrum of mutation shapes the spectrum of adaptive changes comes from studies that manipulate the mutation spectrum directly. In one study, resistance to cefotaxime was evolved repeatedly, using 3 strains of E. coli with different mutation spectra: wild-type, mutH and mutT. The spectrum of resistance mutations among the evolved strains showed the same patterns as the spontaneous mutation spectra of the parental strains. Specifically, the transversions favored by mutT are highly enriched among resistant isolates from mutT parents, and likewise, the resistant strains from mutH parents tend to have the nucleotide transition mutations favored by mutH. Thus, changing the mutation spectrum changes the spectrum of adaptive changes in a corresponding manner. Another study showed that the AR2 strain of P. fluorescens adapted to the loss of motility overwhelmingly (> 95% of the time) by one specific change, an A289C change in the ntrB gene, while the Pf0-2x strain adapted via diverse changes in several genes. The pattern in AR2 derivatives was traced to a mutational hotspot. Because the hotspot behavior was associated mainly with synonymous differences between the two strains, the experimenters were able to use genetic engineering to remove the hotspot from AR2 and add it to Pf0-2x, without changing the encoded amino acid sequence. This reversed the qualitative pattern of outcomes, so that the modified AR2 (engineered to remove the hotspot) adapted via diverse changes, while the modified Pf0-2x with the engineered hotspot adapted via the A289C change 80% of the time. Graduated effects A different use of available evidence is to focus on the idea of graduated effects, which distinguishes the theory of arrival biases from the intuitive notion of "constraints" or "limits" on possible forms. In particular, one may set aside the dramatic effects associated with hotspots and mutator alleles, and consider the effects of ordinary quantitative biases in nucleotide mutations. A number of studies have established that modest several-fold biases in mutation can have a several-fold effect on evolution, and some studies indicate a roughly proportional relation between mutation rates and the chances of an adaptive change.
For instance, Sackman et al. (2017) studied parallel evolution in 4 related bacteriophages. In each case, they adapted 20 cultures in parallel, then sequenced a sample of the adapted culture to identify causative mutations. The results showed a strong preference for nucleotide transitions: 29:5 for paths and 74:6 for events. In a study of resistance to rifampicin in Pseudomonas aeruginosa, MacLean et al. (2010) measured selection coefficients and frequency of evolution for 35 resistance mutations in the rpoB (RNA polymerase) gene, and reported mutation rates for 11 of these. The mutation rates vary over a 30-fold range. The frequency with which a resistant variant appears in the set of 284 replicate cultures correlates strongly and roughly linearly with the mutation rate. This is not explained by a correlation between selection coefficients and mutation rates, which are not correlated (see Ch. 9 of Mutation, Randomness and Evolution). As explained above, the influence of the mutation spectrum on the spectrum of adaptive changes can be captured in a single parameter, defined as the coefficient of a binomial regression of observed counts of adaptive changes on the counts expected from a mutational model. Based on theoretical considerations, expected values of this coefficient range from 0 (no influence) to 1 (proportional influence). This method was applied by Cano et al. to 3 large data sets of adaptive changes, comparing a model based on independent measures of the mutation spectrum with adaptive changes previously identified in studies of (1) clinical antibiotic resistance of Mycobacterium tuberculosis, (2) laboratory adaptation in E. coli, and (3) laboratory adaptation of the yeast Saccharomyces cerevisiae to environmental stress. In each case, the estimated coefficient was close to 1, indicating a roughly proportional influence of mutation bias. The authors report that this is not just due to the influence of transition-transversion bias, because the proportionality applies both to transition-transversion bias and to other aspects of the nucleotide mutation spectrum. Scope of applicability A final use of available evidence is to consider the range of natural conditions under which the theory may be relevant. Whereas laboratory studies can be used to establish causation and assess effect-sizes, they do not provide direct guidance to where the theory applies in nature. Most studies of adaptation do not include a genetic analysis that identifies specific mutations, and in the rare cases in which an attempt is made to identify causative mutations, the results typically implicate only very small numbers of changes that are subject to questions of interpretation. Therefore, the strategy followed in key studies has been to focus on trusted cases of adaptation in which the proposed functional effects of putative adaptive mutations have been verified using techniques of genetics. Payne et al. looked for an effect of transition bias among causative mutations for antibiotic resistance in clinically identified strains of Mycobacterium tuberculosis, which exhibits a strong mutation bias toward nucleotide transitions. They compared the observed transition-transversion ratio to the 1:2 null expectation under the absence of mutation bias. Using two different curated databases, they found transition:transversion ratios of 1755:1020 and 1771:900, i.e., enrichments 3.4-fold and 3.9-fold over the null, respectively.
They also took advantage of the special case of Met-to-Ile replacements, which can take place by 1 transition (ATG to ATA) or 2 different transversions (ATG to ATT or ATC). This 1:2 ratio of possibilities again represents a null expectation for effects independent of mutation bias. In fact, the mutations in resistant isolates have transition-transversion ratios of 88:49 and 96:39 (for the 2 datasets), i.e., 3.6-fold and 4.9-fold above null expectations. This result cannot be due to selection at the amino acid level, because the changes are all Met to Ile. The significance of this result is not that mutation bias only works when the options are selectively indistinguishable: instead, the lesson is that the bias toward nucleotide transitions is roughly 4-fold both for the Met-to-Ile case, and for amino-acid-changing substitutions generally. A much broader taxonomic scope is implicated in a meta-analysis of published studies of parallel adaptation in nature. In this study, the authors curated a data set covering 10 published cases of parallel adaptation traced to the molecular level, including well known cases involving spectral tuning, resistance to natural toxins such as cardiac glycosides and tetrodotoxin, foregut fermentation, and so on. The results indicate a transition-transversion ratio of 132:99, a 2.7-fold enrichment relative to the null expectation of 1:2 (the ratio for paths, which is less sensitive to extreme values, is 27:28, a 2-fold enrichment). Thus, this study shows that a bias toward transitions is observed in well known cases of parallel adaptation in diverse taxa, including animals and plants. Finally, Storz et al. analyzed changes in hemoglobin affinity associated with altitude adaptation in birds. Specifically, they studied the effect of CpG bias, an enhanced mutation rate at CpG sites due to effects of cytosine methylation on damage and repair, found widely in mammals and birds. They assembled a data set consisting of 35 matched pairs of high- and low-altitude bird species. In each case, hemoglobins were evaluated for functional differences resulting in a higher oxygen affinity in the high-altitude species. The changes in affinity plausibly linked to adaptation implicated 10 different paths, found a total of 22 times. Six of the 10 paths involved CpG mutations, whereas only 1 would be expected by chance; and 10 out of 22 events involved CpG mutations, whereas only 2 would be expected by chance (both differences were significant). This enrichment of mutationally likely genetic changes supports the theory of arrival biases and provides further evidence that predictable effects of mutation bias are important for understanding adaptation in nature.
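The enrichment factors quoted in this section follow from simple arithmetic against the 1:2 null: with one transition and two transversions possible per nucleotide site, the expected transition:transversion ratio is 0.5, and the enrichment is the observed ratio divided by 0.5. The snippet below is only a check of that arithmetic, using the counts quoted above.

def ti_tv_enrichment(transitions, transversions):
    # enrichment of transitions relative to the 1:2 null expectation
    return (transitions / transversions) / (1 / 2)

for label, ti, tv in [('M. tuberculosis database 1', 1755, 1020),
                      ('M. tuberculosis database 2', 1771, 900),
                      ('Met-to-Ile dataset 1', 88, 49),
                      ('Met-to-Ile dataset 2', 96, 39),
                      ('parallel adaptation meta-analysis (events)', 132, 99)]:
    print(f'{label}: {ti_tv_enrichment(ti, tv):.1f}-fold')
# prints roughly 3.4, 3.9, 3.6, 4.9 and 2.7-fold, matching the values in the text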
Context in evolutionary thinking The theory of arrival bias has been described as a cross-cutting theory because it proposes a causal grounding (in population genetics) for diverse kinds of pre-existing claims for which a causal grounding is either unknown or mis-specified: the developmentalist thesis (above) that evolutionary dispositions may emerge from the way development shapes variation, acting prior to selection; a variety of claims in the molecular evolution literature for biased mutational effects (e.g., on codon usage) that are ascribed to unequal or directional "mutation pressure" but which are not plausibly explained as evolution by mutation pressure, suggesting instead the need for a theory of mutational biases in introduction; the suggestion emerging from the paleobiology debate of the 1980s that, in the hierarchical expansion of evolutionary causation from the population level to multiple levels (i.e., populations, species, higher taxa), speciation is an important source of introduction biases at the level of higher taxa; and claims to the effect that evolution tends to find phenotypes over-represented in genotype space, i.e., a "findability" or "arrival of the frequent" effect that can be recognized both in certain arguments from the molecular evolution literature, e.g., King's (1971) explanation for amino acid frequencies, and in the evolutionary self-organization literature, e.g., the arguments of Kauffman. The context for applying the theory can be pictured as a schematic mapping. On one side are details of mutation and development that are responsible for tendencies in the generation of variation (varigenesis), i.e., tendencies prior to selection or drift. On the other side are observable evolutionary patterns that might possibly be explained by these tendencies. Connecting the two is some theory, either the theory of arrival biases or some alternative, that specifies the conditions of a cause-effect relationship linking variational tendencies to evolutionary tendencies. To apply a theory in this context is to generate evolutionary hypotheses or explanations that appeal to the internal details of mutation and development to account for evolutionary patterns via the conditions of causation specified by the theory. For instance, Darwin's comment that the laws of variation "bear no relation" to the structures built by selection would suggest that there are no conditions under which the internal details account for the observed patterns. The other theories all suggest that variational tendencies may influence evolution under some conditions. For instance, the theory of mutation pressure applies when mutation rates are high and unopposed by selection, and thus it has a limited range of applications. The theory of evolutionary quantitative genetics can be applied very broadly to the evolution of quantitative characters, but the theory (as developed so far) does not suggest that mutation biases will have much impact. By contrast, the theory of arrival bias might apply broadly, and allows for a strong role for variational tendencies in shaping evolutionary tendencies. Late arrival and non-obviousness Though it seems intuitively obvious today, the theory did not emerge formally until 2001; e.g., as noted above, population geneticists did not propose the theory in the 1980s to answer an evo-devo challenge that literally called for recognizing biases in the introduction of variation.
This late emergence has been attributed to a "blind spot" due to multiple factors, including a tradition of verbal arguments that minimize the role of mutation, a tendency to associate causation with processes that shift frequencies of variants rather than processes that create variants, and a formal argument from population genetics that does not extend to evolution from new mutations. Specifically, when Haldane and Fisher asked if tendencies of mutation could influence evolution, they framed this as a matter of the efficacy of mutation pressure (below), concluding that, because mutation rates are low, mutation is a weak force, only important in the special case of abnormally high mutation rates unopposed by selection. Their conception of evolutionary causation was modeled on selection, which operates by shifting frequencies of available alleles, and so they treated recurrent mutation in the same way. Their conclusion is correct for the case of evolution from standing variation. More generally, in Modern Synthesis thinking, evolution was assumed to follow from a short-term process of shifting frequencies of available alleles. In this process, mutation is typically unimportant except when the focus is on low-frequency alleles maintained by deleterious mutation pressure (see Population genetics: Mutation); e.g., Edwards (1977) addressed theoretical population genetics without considering mutation at all, and Lewontin (1974) stated that "There is virtually no qualitative or gross quantitative conclusion about the genetic structure of populations in deterministic theory that is sensitive to small values of migration, or any that depends on mutation rates" (p. 267). The Haldane-Fisher "opposing pressures" argument was used repeatedly by leading thinkers to reject structuralist or internalist thinking. For example, Fisher (1930) stated that "The whole group of theories which ascribe to hypothetical physiological mechanisms, controlling the occurrence of mutations, a power of directing the course of evolution, must be set aside, once the blending theory of inheritance is abandoned." Seventy years later, Gould (2002), citing Fisher (1930), wrote that "Since orthogenesis can only operate when mutation pressure becomes high enough to act as an agent of evolutionary change, empirical data on low mutation rates sound the death-knell of internalism." (p. 510) In this way, arguments from population genetics were used to reject, rather than support, speculative claims about the role of variational tendencies. The flaw in the Haldane-Fisher argument, pointed out by Yampolsky and Stoltzfus (2001), is that it treats mutation only as a pressure on frequencies of existing alleles, not as a cause of the origin of new alleles. When alleles relevant to the outcome of evolution are absent initially, biases in introduction can impose strong biases on the outcome. Thus, the late appearance of this theory sheds light on how closely Modern Synthesis thinking was tied to the assumption of standing variation, and to the forces theory. These commitments continue to echo in contemporary sources; e.g., in a US white paper endorsed by SSE, SMBE, ASN, ESA and other relevant professional societies, Futuyma et al. (2001) state, as fact, that evolution is shifting gene frequencies, identifying the main causes of "evolution" (so defined) as selection and drift. However, toward the end of the 20th century, theoreticians began to note that long-term dynamics depend on events of mutational introduction not covered in classical theory.
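The point about evolution from new mutations can be stated quantitatively with a minimal origin-fixation sketch (an illustration under standard textbook approximations, not code from the sources): when the relevant alleles are initially absent, the chance of each outcome is proportional to its rate of introduction times its probability of fixation, so a bias in introduction is not simply overwhelmed by selection.

def outcome_probabilities(u, s):
    # u[i]: rate of mutational introduction of option i; s[i]: selection coefficient
    # weak-selection approximation: fixation probability of a new mutant ~ 2s
    weights = [ui * 2 * si for ui, si in zip(u, s)]
    total = sum(weights)
    return [w / total for w in weights]

# a 2-fold bias in introduction and a 2-fold bias in fitness benefit offset exactly
print(outcome_probabilities(u=[2e-6, 1e-6], s=[0.01, 0.02]))   # [0.5, 0.5]
# with equal benefits, a 2-fold introduction bias doubles the chance of that outcome
print(outcome_probabilities(u=[2e-6, 1e-6], s=[0.01, 0.01]))   # [0.667, 0.333]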
In the recent literature, the assumption of evolution from standing variation is only rarely made explicit. More commonly, evolution from standing variation is presented as an option to be considered together with evolution from new mutations. Relevance to contemporary issues Parallelism and predictability. The application of the theory to parallelism is addressed above. The tendency for particular outcomes to recur in evolution is not merely a function of selection, but also reflects biases in introduction due to differential accessibility by mutation (or, for the case of phenotypes, by mutation and altered development). Recent reviews on prediction apply the theory to the role of mutation biases in contributing to the repeatability of evolution. Partitioning causal responsibility for patterns to mutation and selection. In the case of origin-fixation dynamics, evolutionary dispositions can be attributed to a combination of mutation and selection, and it is possible, in principle, to untangle these contributions, as in an analysis of regulatory vs. structural effects in evolution or patterns of amino acid replacement in protein evolution. Evo-devo, GP maps, and findability. The application of the theory of arrival biases to development and phenotypes is mediated by the concept of a genotype-phenotype map. A simple example of a bias induced by a GP map is the following (see the sketch below). An evolving system diffuses neutrally within the genotypic network for its phenotype, and may occasionally jump to another phenotype. From the starting network of genotypes encoding phenotype P0, there are mutations leading to genotypic networks for P1 and P2. However, the number of mutations leading from the starting network to P2 is 4 times higher, illustrating the idea that, for a given developmental-genetic system, some phenotypes are more mutationally accessible than others. This is not the same thing as a mutation bias per se (an asymmetry caused by the details of mutagenesis), but it can have the same effect in a population-genetic model. In this case, if all mutations happen at the same rate, the total rate of mutational introduction of P2 is 4 times higher than for P1: this bias can be mapped to the Yampolsky-Stoltzfus model and would have the same implications as a 4-fold mutation bias. For short-term evolution, what matters is the distribution of immediately mutationally accessible phenotypes. In long-term evolution, however, one may expect two different effects, which can be explained with a scheme of three phenotypes after Fig. 4 of Fontana (2002). In that scheme, the networks show the genotypes that map to 3 different phenotypes, P0, P1 and P2. Over time, a system may diffuse neutrally among different genotypes with the same phenotype. Rarely, a jump from one phenotype to another may occur. In the short term, evolution depends only on what is immediately accessible from a given point in genotype space. In the medium term, evolution depends on the accessibility of alternative phenotype networks, relative to the starting network, e.g., starting with P0, P2 is twice as accessible as P1, even though P1 and P2 have the same number of genotypes. In the long term, what matters is the total findability of a phenotype from all other phenotypes, which (as a first approximation) is a matter of the number of genotypes (and more precisely is a matter of the total surface area of the network accessible to other high-fitness phenotypes). In this case, P0 is more findable than P1 and P2 because it has twice as many genotypes.
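A toy version of the 4-fold accessibility example above can be written in a few lines (illustrative only; the phenotype labels and the assumption of equal per-mutation rates and equal benefits are hypothetical): because four of the mutational exits from the P0 network yield P2 and only one yields P1, P2 is introduced, and therefore tends to evolve, about four times as often.

import random

rng = random.Random(0)
exit_phenotypes = ['P1'] + ['P2'] * 4   # phenotypes one mutation away from the P0 network

wins = {'P1': 0, 'P2': 0}
for _ in range(10_000):
    # the first alternative phenotype to be introduced and fixed, assuming equal
    # per-mutation rates and equal fixation probabilities for P1 and P2
    wins[rng.choice(exit_phenotypes)] += 1
print(wins)   # roughly 2,000 vs 8,000: the same outcome bias as a 4-fold mutation bias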
The variational bias toward more numerous phenotypes is called "phenotype bias" by Dingle et al. (2022). This effect of findability is the formal basis for empirical and theoretical arguments in studies of the findability of regulatory network motifs or RNA fold families. Dingle et al. (2022) present evidence of a striking tendency for the most common RNA folds in nature to match the folds most widely distributed in sequence space. Role in broad appeals for reform The theory of arrival biases, proposed in 2001, appears in several subsequent appeals for reform, relative to the neo-Darwinian view of the Modern Synthesis. Per Arthur, it is part of a developmentalist approach to evolution that emphasizes the internal organizing effects of "developmental reprogramming" on variation. In a different framing, the efficacy of arrival biases undermines the historic commitment of theoreticians to viewing evolution as a process of shifting gene frequencies in an abundant gene pool, dominated by mass-action forces, and is part of a larger movement (beginning during the molecular revolution) away from the neo-Darwinism of the Modern Synthesis and towards a version of mutationism grounded in population genetics. The theory also has been invoked in the literature of the Extended Evolutionary Synthesis under the heading of Developmental bias. Distinction from other theories of mutational effects The theory of arrival biases focuses on a kind of population-genetic causation linking intrinsic generative biases acting prior to selection with predictable evolutionary tendencies. It is distinct from other ideas that lack the same focus on causation, on intrinsic biases, or on the introduction process. Evolution by mutation pressure In classic sources, evolution by "mutation pressure" means the mass transformation of a population by mutational conversion, as in Wilson and Bossert (1971, p. 42). The general assessment of this theory, following Haldane (1932) and Fisher (1930), is that evolution by mutation pressure is implausible because it requires high mutation rates unopposed by selection. Kimura argued even more pessimistically that transformation by mutation pressure would take so long that it can be ignored for practical purposes. Nevertheless, later empirical and theoretical work showed that the theory can be valuable in cases such as the loss of a complex trait encoded by many loci, e.g., loss of sporulation in experimental populations of B. subtilis, a case in which the mutation rate for loss of the trait was estimated as an unusually high value. Thus, the theory of mutation pressure and the theory of arrival biases both depict ways for the process of mutation to be an important influence, but they focus on different modes of causation: influencing either the fixation process (mutation pressure) or the introduction process (arrival bias). The effectiveness of mutational tendencies via these two modes is completely different, e.g., only the mutation pressure theory relies on high mutation rates unopposed by selection. Evolution along genetic lines of least resistance Evolutionary quantitative genetics, the body of theory that focuses on highly polygenic quantitative traits, makes a particular prediction about mutational effects that has some empirical support.
In the standard theory for a set of quantitative traits, the standing variation is represented by a matrix of variances and covariances (the G matrix), which depends (in a complex way) on mutational input represented by an M matrix. Phenotypic divergence will tend to be aligned (in phenotype space) with the dimension of greatest standing variation, and this predicted effect of standing variation has been seen repeatedly. This effect (explained more fully in Developmental bias) is called adaptation "along genetic lines of least resistance" and could be re-stated (with variation in a positive role) as adaptation along lines of maximal variational fuel. When divergence also aligns with the mutational matrix M, this suggests that mutational variability shapes divergence, but this circumstantial correlation has other interpretations and is not taken as dispositive evidence. The use of mutation bias in the sense of an asymmetric effect on trait means is not part of the standard framework. When mutation bias is included in models of a single quantitative trait under stabilizing selection, the result is a small displacement from the optimal value. Thus, models in evolutionary quantitative genetics are focusing on a different kind of problem, so that there is no simple translation between (for instance) the effects of standing variation and the effects of biases in introduction. Mutational contingency Evolutionary explanations have often relied on a paradigm of "equilibrium explanation" in which outcomes are explained by appeal to what is selectively optimal, without regard to history or details of process. However, attention has focused in recent decades on the idea of "contingency", i.e., the idea that the outcome of evolution cannot be explained as the predictable or predefined endpoint of a deterministic process, but takes some path that cannot be predicted easily, or can only be predicted by knowing details of the starting conditions and the subsequent dynamics. "Mutational" contingency refers to cases in which an event of evolution is associated distinctively with a particular mutation or a mutational hotspot, in the sense that the evolutionary change would not have happened in the observed manner if the distinctive mutation had not occurred in the manner inferred. This notion differs from the theory of biases in the introduction process because it is an explanatory concept (rather than a mechanism), applied in idiographic explanations, i.e., explaining one-off events (token events). The theory of biases in the introduction process is a theory of general causation: the result of successfully applying the theory is to assign, not a token explanation, but a general explanation of the form that a pattern in which one outcome happens more often than another is caused by a bias in introduction, due to the higher chance of the corresponding mutational-developmental conversion. Developmental constraints (developmental bias) The concept of "constraint" is fraught. Green and Jones (2016) argue that evolutionary biologists use it as a flexible explanatory concept rather than as a way to refer to a specific causal theory, i.e., a constraint is a factor with some limiting influence that makes it predictive, even if the causal basis of this influence is unclear. A simple notion of developmental constraint is that some phenotypic forms are not observed, due to being impossible (or at least very difficult) to generate developmentally, e.g., centipedes with an even number of leg-bearing segments.
That is, constraint is an explanation for the non-existence of phenotypes based on a variational effect (absence), within a paradigm focused on accounting for patterns of phenotype existence. Other references to "constraint" imply graduated differences rather than an absolute difference between possible and impossible forms. Whereas the effectiveness of absolute biases does not require a special causal theory (because a developmentally impossible form is an evolutionarily impossible form), the idea of graduated biases prompts questions of causation, due to the conflict with the classic Haldane-Fisher "opposing pressures" argument, which holds that mere variational tendencies are ineffectual because mutation rates are small. The seminal "developmental constraints" paper by Maynard Smith et al. (1985) noted this issue without providing a solution. Advocates of "constraint" were criticized for failing to provide a mechanism. This is the issue that Yampolsky and Stoltzfus sought to remedy. Nevertheless, the theory of arrival biases cannot be easily mapped to the concept of "constraint" due to the latter being used widely as a synonym for "factor". In the evo-devo literature, the term "constraint" is increasingly replaced with references to developmental bias. However, the concept of developmental bias is often associated with some idea of facilitated variation or evolvability, whereas the theory of arrival biases is only about the population-genetic consequences of arbitrary biases in the generation of variation. Facilitated variation, evolvability, and directed mutation The theory of arrival biases does not require or imply facilitated variation or directed mutation and is not by itself a theory of the evolution of evolvability. The population-genetic models used to illustrate the theory, and the empirical cases invoked in support of the theory, focus on the effects of different forms of mutation bias, where the bias is always relative to some dimension other than fitness, e.g., transition-transversion bias, CpG bias, or the asymmetry of two traits with different mutabilities. That is, the theory does not assume that biases are beneficial with respect to fitness, and it does not propose that mutation somehow contributes to adaptedness separately from the effect of selection. In fact, many models illustrate the efficacy of arrival biases by focusing on a case where the most mutationally favored outcomes are not the most fit options, as in the original Yampolsky-Stoltzfus model, where one choice has a higher mutation rate but a smaller fitness benefit, and the other has a higher fitness benefit but a smaller mutation rate. The theory assumes neither that mutationally favored outcomes are more fit, nor that they are less fit. See also Developmental bias Evolvability Extended evolutionary synthesis Mutation bias Population genetics References Evolutionary biology
Bias in the introduction of variation
[ "Biology" ]
12,344
[ "Evolutionary biology" ]
66,436,172
https://en.wikipedia.org/wiki/Massless%20free%20scalar%20bosons%20in%20two%20dimensions
Massless free scalar bosons are a family of two-dimensional conformal field theories, whose symmetry is described by an abelian affine Lie algebra. Since they are free, i.e. non-interacting, free bosonic CFTs are easily solved exactly. Via the Coulomb gas formalism, they lead to exact results in interacting CFTs such as minimal models. Moreover, they play an important role in the worldsheet approach to string theory. In a free bosonic CFT, the Virasoro algebra's central charge can take any complex value. However, the value c = 1 is sometimes implicitly assumed. For c = 1, there exist compactified free bosonic CFTs with arbitrary values of the compactification radius. Lagrangian formulation The action of a free bosonic theory in two dimensions is a functional of the free boson , where is the metric of the two-dimensional space on which the theory is formulated, and is the Ricci scalar of that space. The parameter is called the background charge. What is special to two dimensions is that the scaling dimension of the free boson vanishes. This permits the presence of a non-vanishing background charge, and is at the origin of the theory's conformal symmetry. In probability theory, the free boson can be constructed as a Gaussian free field. This provides realizations of correlation functions as expected values of random variables. Symmetries Abelian affine Lie algebra The symmetry algebra is generated by two chiral conserved currents: a left-moving current and a right-moving current, respectively, which obey . Each current generates an abelian affine Lie algebra . The structure of the left-moving affine Lie algebra is encoded in the left-moving current's self-OPE, Equivalently, if the current is written as a Laurent series about the point , the abelian affine Lie algebra is characterized by the Lie bracket The center of the algebra is generated by , and the algebra is a direct sum of mutually commuting subalgebras of dimension 1 or 2: Conformal symmetry For any value of , the abelian affine Lie algebra's universal enveloping algebra has a Virasoro subalgebra with the generators The central charge of this Virasoro subalgebra is and the commutation relations of the Virasoro generators with the affine Lie algebra generators are If the parameter coincides with the free boson's background charge, then the field coincides with the free boson's energy-momentum tensor. The corresponding Virasoro algebra therefore has a geometrical interpretation as the algebra of infinitesimal conformal maps, and encodes the theory's local conformal symmetry. Extra symmetries For special values of the central charge and/or of the radius of compactification, free bosonic theories can have not only their symmetry, but also additional symmetries. In particular, at c = 1, for special values of the radius of compactification, there may appear non-abelian affine Lie algebras, supersymmetry, etc. Affine primary fields In a free bosonic CFT, all fields are either affine primary fields or affine descendants thereof. Thanks to the affine symmetry, correlation functions of affine descendant fields can in principle be deduced from correlation functions of affine primary fields. Definition An affine primary field with the left and right -charges is defined by its OPEs with the currents, These OPEs are equivalent to the relations The charges are also called the left- and right-moving momenta. If they coincide, the affine primary field is called diagonal and written as . Normal-ordered exponentials of the free boson are affine primary fields.
In particular, the field is a diagonal affine primary field with momentum . This field, and affine primary fields in general, are sometimes called vertex operators. An affine primary field is also a Virasoro primary field with the conformal dimension The two fields and have the same left and right conformal dimensions, although their momenta are different. OPEs and momentum conservation Due to the affine symmetry, momentum is conserved in free bosonic CFTs. At the level of fusion rules, this means that only one affine primary field can appear in the fusion of any two affine primary fields, Operator product expansions of affine primary fields therefore take the form where is the OPE coefficient, and the term is the contribution of affine descendant fields. OPEs have no manifest dependence on the background charge. Correlation functions According to the affine Ward identities for -point functions on the sphere, Moreover, the affine symmetry completely determines the dependence of sphere -point functions on the positions, Single-valuedness of correlation functions leads to constraints on momenta, Models Non-compact free bosons A free bosonic CFT is called non-compact if the momentum can take continuous values. Non-compact free bosonic CFTs with are used for describing non-critical string theory. In this context, a non-compact free bosonic CFT is called a linear dilaton theory. A free bosonic CFT with vanishing background charge, i.e. c = 1, is a sigma model with a one-dimensional target space. If the target space is the Euclidean real line, then the momentum is imaginary , and the conformal dimension is positive . If the target space is the Minkowskian real line, then the momentum is real , and the conformal dimension is negative . If the target space is a circle, then the momentum takes discrete values, and we have a compactified free boson. Compactified free bosons The compactified free boson with radius is the free bosonic CFT where the left and right momenta take the values The integers are then called the momentum and winding number. The allowed values of the compactification radius are if and otherwise. If , free bosons with radii and describe the same CFT. From a sigma model point of view, this equivalence is called T-duality. If , the compactified free boson CFT exists on any Riemann surface. Its partition function on the torus is where , and is the Dedekind eta-function. This partition function is the sum of characters of the Virasoro algebra over the theory's spectrum of conformal dimensions. As in all free bosonic CFTs, correlation functions of affine primary fields have a dependence on the fields' positions that is determined by the affine symmetry. The remaining constant factors are signs that depend on the fields' momenta and winding numbers. Boundary conditions in the case c=1 Neumann and Dirichlet boundary conditions Due to the automorphism of the abelian affine Lie algebra , there are two types of boundary conditions that preserve the affine symmetry, namely If the boundary is the line , these conditions correspond respectively to the Neumann boundary condition and the Dirichlet boundary condition for the free boson . Boundary states In the case of a compactified free boson, each type of boundary condition leads to a family of boundary states, parametrized by . The corresponding one-point functions on the upper half-plane are In the case of a non-compact free boson, there is only one Neumann boundary state, while Dirichlet boundary states are parametrized by a real parameter.
The corresponding one-point functions are where and for a Euclidean boson. Conformal boundary conditions Neumann and Dirichlet boundaries are the only boundaries that preserve the free boson's affine symmetry. However, there exist additional boundaries that preserve only the conformal symmetry. If the radius is irrational, the additional boundary states are parametrized by a number . The one-point functions of affine primary fields with vanish. However, the Virasoro primary fields that are affine descendants of the affine primary field with have nontrivial one-point functions. If the radius is rational , the additional boundary states are parametrized by the manifold . Conformal boundary conditions at arbitrary were also studied under the misnomer "boundary Liouville theory". Related theories and generalizations Multiple bosons and orbifolds From massless free scalar bosons, it is possible to build a product CFT with the symmetry algebra . Some or all of the bosons can be compactified. In particular, compactifying bosons without background charge on an -dimensional torus (with Neveu–Schwarz B-field) gives rise to a family of CFTs called Narain compactifications. These CFTs exist on any Riemann surface, and play an important role in perturbative string theory. Due to the existence of the automorphism of the affine Lie algebra , and of more general automorphisms of , there exist orbifolds of free bosonic CFTs. For example, the orbifold of the compactified free boson with is the critical two-dimensional Ashkin–Teller model. Coulomb gas formalism The Coulomb gas formalism is a technique for building interacting CFTs, or some of their correlation functions, from free bosonic CFTs. The idea is to perturb the free CFT using screening operators of the form , where is an affine primary field of conformal dimensions . In spite of its perturbative definition, the technique leads to exact results, thanks to momentum conservation. In the case of a single free boson with background charge , there exist two diagonal screening operators , where . Correlation functions in minimal models can be computed using these screening operators, giving rise to Dotsenko–Fateev integrals. Residues of correlation functions in Liouville theory can also be computed, and this led to the original derivation of the DOZZ formula for the three-point structure constant. In the case of free bosons, the introduction of screening charges can be used for defining nontrivial CFTs including conformal Toda theory. The symmetries of these nontrivial CFTs are described by subalgebras of the abelian affine Lie algebra. Depending on the screenings, these subalgebras may or may not be W-algebras. The Coulomb gas formalism can also be used in two-dimensional CFTs such as the q-state Potts model and the model. Various generalizations In arbitrary dimensions, there exist conformal field theories called generalized free theories. These are however not generalizations of the free bosonic CFTs in two dimensions. In the former, it is the conformal dimension which is conserved (modulo integers). In the latter, it is the momentum. In two dimensions, generalizations include: Massless free fermions. Ghost CFTs. Supersymmetric free CFTs. References Conformal field theory String theory
Massless free scalar bosons in two dimensions
[ "Astronomy" ]
2,197
[ "String theory", "Astronomical hypotheses" ]
66,436,237
https://en.wikipedia.org/wiki/M1-67
M1-67 is an ejecta nebula that surrounds the Wolf–Rayet star WR 124, which is about 6.4 kpc from Earth in the constellation of Sagitta. It contains dust which is caught up in WR 124's stellar wind and which absorbs much of the star's light. It was discovered by American astronomer Paul W. Merrill in 1938, at the same time that he discovered the star it surrounds. It is approximately 6 light-years across, making it about 20,000 years old. Distance and characteristics A 2010 study focused on M1-67, measuring its angular expansion rate by using Hubble Space Telescope photographs taken 11 years apart. The angular expansion rate was then compared to the expansion velocity calculated from the Doppler shift of its nebular emission lines, resulting in a geometric distance of d = 3.35 ± 0.67 kpc (the arithmetic of this method is sketched below). NASA has reported that the ejected gas is traveling at up to 100,000 miles per hour, causing turbulence, and carrying along glowing blobs of gas, each approximately 100 billion miles wide and around 30 times the mass of the Earth. The ejection took place around 10,000 years ago. An infrared study of the nebula showed that it consists of mildly processed material with number ratios of N/O = 1.0 ± 0.5 and C/O = 0.46 ± 0.27. The mass of the nebula's dust has been confirmed to be , and the mass of its ionised gas is estimated at . The morphology of M1-67 is complex and knotted, unlike that of other Wolf–Rayet nebulae. Studying the dynamics of the nebula has suggested that it has interacted with the surrounding ISM, causing a bow shock which travels at a high velocity of about 180 km/s. WR 124 is determined to be about 1.3 parsecs away from the bow shock. The wind collided with the bow shock shortly after the outburst, oriented along its main axis, as evidenced by the lack of emission found within the radial velocities in the centre of the nebula as seen from telescopes on Earth. Higher radial velocities were found in the centre and lower velocities near the edge, giving an estimated expansion rate of 150 km/s and dynamical timescales of 8 to 20 kyr. However, there are other explanations for its shape that do not require a bow shock. An alternative model suggests that M1-67 is a bipolar nebula with its axis pointing northwest, surrounded by an equatorial torus, as well as jets expanding in the eastern direction. References Planetary nebulae Sagitta Astronomical objects discovered in 1938
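For readers interested in the arithmetic behind such a geometric (expansion-parallax) distance, the sketch below divides an assumed line-of-sight expansion velocity by an assumed angular expansion rate; both numbers are illustrative placeholders rather than the values reported in the 2010 study, chosen only so the result lands near the quoted distance.

KM_PER_PC = 3.0857e13
SECONDS_PER_YEAR = 3.156e7
MAS_TO_RAD = 4.8481e-9

v_expansion_km_s = 46.0   # assumed line-of-sight expansion velocity (placeholder)
mu_mas_per_yr = 2.9       # assumed angular expansion rate of the shell (placeholder)

distance_km = v_expansion_km_s * SECONDS_PER_YEAR / (mu_mas_per_yr * MAS_TO_RAD)
print(f'{distance_km / KM_PER_PC / 1000:.2f} kpc')   # ~3.3 kpc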
M1-67
[ "Astronomy" ]
544
[ "Sagitta", "Constellations" ]
66,436,668
https://en.wikipedia.org/wiki/Copper%20cycle
The copper cycle is the biogeochemical cycle of natural and anthropogenic exchanges of copper between reservoirs in the hydrosphere, atmosphere, biosphere, and lithosphere. Human mining and extraction activities have exerted a large influence on the copper cycle. Overview The diagram immediately below shows the global copper reservoirs labeled with sizes in μg/g inside parentheses. The largest copper reservoirs are metal use (production, fabrication, use, discard), the core, and the crust. Fluxes between reservoirs are shown as arrows with units of Gg Cu/yr. The thickness of the arrows represents the flux size. The anthropogenic fluxes are in red and the natural fluxes are in navy blue. The largest fluxes are from copper metal use and soil, between the crust and mantle, and from freshwater to the oceans. The flux of copper from micrometeorites to the atmosphere is difficult to measure, but is relatively constant over time. It is assumed that the rate of cosmic flux is uniform everywhere. The median of all the fluxes post 1980 was used in the figure. Copper reservoirs Natural copper reservoirs include the Earth's core, mantle, and crust. Crustal rocks contain an average copper abundance of a hundred parts per million. Other natural reservoirs are terrestrial biomass, sediments, freshwater, oceans, and the atmosphere. Anthropogenic reservoirs include copper's manufacturing life cycle (production, fabrication, usage, and discard), fossil fuels, and agricultural biomass. In outer space, there is also copper on the Moon and in micrometeorites. Copper fluxes Natural fluxes Copper is exchanged between the mantle and the crust through volcanoes, hydrothermal vents, and subduction zones. Volcanoes and hydrothermal vents degas material from the mantle, which condenses into particulate form. Copper is recycled back to the mantle through subducting ocean crust. As the Earth's crust weathers, soil and sediment are formed, and some copper is mobilized from freshwater to the ocean. Copper in the soil is taken up by plants and then released back into the soil when the plants decompose. Wildfires and other burning of natural biomass release copper into the atmosphere. Anthropogenic fluxes Copper is present in coal and unrefined crude oil. When fossil fuels are combusted, copper is released into the atmosphere and soils. Copper cycles through agricultural biomass when animals eat plants containing trace amounts of copper. The copper is then returned to the soil when manure is applied as fertilizer. Agricultural burning also releases copper into the atmosphere. Copper mining contributes significantly to copper emissions into fresh waters. Copper is also introduced into freshwater during corrosion, degradation, and abrasion of copper metal. Scrap copper metal is commonly recycled, but at the end of its manufacturing life cycle, it is discarded to landfills, which can leach significant copper into fresh waters. References Biogeochemical cycle Copper
Copper cycle
[ "Chemistry" ]
596
[ "Biogeochemical cycle", "Biogeochemistry" ]
66,436,869
https://en.wikipedia.org/wiki/Lithium%20cycle
The lithium cycle (Li) is the biogeochemical cycle of lithium through the lithosphere and hydrosphere. Overview In the diagram above, lithium sinks are described in concentrations (ppm) and displayed as boxes. Fluxes are shown as arrows and are in units of moles per year. Continental rocks containing lithium are dissolved, transferring lithium to rivers or secondary minerals. Dissolved lithium in run-off travels to the ocean. Fluid release from hydrothermal vents contributes to oceanic lithium reserves while lithium is removed from the ocean by secondary mineral formation. Sinks and fluxes Lithium is widely distributed in the lithosphere and mantle as a trace element in silicate minerals. Lithium concentrations are highest in the upper continental and oceanic crusts. Chemical weathering at Earth’s surface dissolves lithium in primary minerals and releases it to rivers and ground waters. Lithium can be removed from solution by formation of secondary minerals like clays, oxides, or zeolites. Rivers eventually feed into the ocean, providing approximately 50% of marine inputs. The remainder of lithium inputs come from hydrothermal venting at mid-ocean ridges, where lithium is released from the mantle. Secondary clay formation removes dissolved lithium from seawater to the authigenic clays and to the altered oceanic crust. Geochemical tracers Lithium isotopes have potential as viable geochemical tracers for processes such as silicate rock weathering and crust/mantle recycling due to significant lithium isotope fractionation during these processes. References Biogeochemical cycle Lithium
Lithium cycle
[ "Chemistry" ]
311
[ "Biogeochemical cycle", "Biogeochemistry" ]
66,436,887
https://en.wikipedia.org/wiki/Boron%20cycle
The boron cycle is the biogeochemical cycle of boron through the atmosphere, lithosphere, biosphere, and hydrosphere. Atmospheric and terrestrial fluxes Boron in the atmosphere is derived from soil dusts, volcanic emissions, forest fires, evaporation of boric acid from seawater, biomass emissions, and sea spray. Sea salt aerosols are the largest flux to the atmosphere. On land, boron cycles through the biosphere by rock weathering, and wet and dry deposition from the atmosphere. Ocean fluxes The marine biosphere circulates a large reservoir of boron. Dissolved boron is delivered to the ocean by river transport, wet deposition, submarine groundwater discharge, and hydrothermal vents. Boron is lost from the oceans in emissions from the ocean surface, deposition of organic materials and sediments (mostly carbonates), and the subduction of ocean sediment. Anthropogenic impacts The boron cycle has been significantly impacted by human activity. Major anthropogenic fluxes are coal mining and combustion, oil production, emissions from industrial factories, biofuels, landfills, and mining and processing of boron ores. Anthropogenic boron fluxes to the hydrosphere and atmosphere have increased and anthropogenic fluxes now exceed the natural boron fluxes. Notes References Biogeochemical cycle Boron
Boron cycle
[ "Chemistry" ]
283
[ "Biogeochemical cycle", "Biogeochemistry" ]
66,436,966
https://en.wikipedia.org/wiki/Arsenic%20cycle
The arsenic (As) cycle is the biogeochemical cycle of natural and anthropogenic exchanges of arsenic through the atmosphere, lithosphere, pedosphere, hydrosphere, and biosphere. Although arsenic is naturally abundant in the Earth's crust, long-term exposure and high concentrations of arsenic can be detrimental to human health. Reservoirs and fluxes Lithosphere Arsenic's largest reservoir on Earth is the lithosphere. Earth's crust contains more than 200 mineral types containing As, including many sulfide minerals. Arsenic is abundant in ore deposits containing arsenopyrite (FeAsS) and tennantite. Sedimentary rocks bearing coal and shale may also contain high As. Major fluxes of As from the lithosphere to the atmosphere are volcanic emissions. Soil is the second largest global reservoir of As. Under oxic conditions, As is present in soils as arsenate (As(V)), which can bind to Fe(III) hydroxides. The speciation of As in soil depends on soil pH and other factors. Acidic soils may contain arsenate bound to aluminium and iron, while basic soils may contain calcium-bound arsenate. The residence time for As in soils depends on the climate type, ranging from 1,000 to 3,000 years for moderate climates. Hydrosphere Freshwater and groundwaters commonly contain <1 ppb of As. The concentration of As is pH dependent; acidic conditions mobilize As at pH <5. Oxic seawater contains As mainly as arsenate (As(V)) (average of 1.7 ppb). Major sinks include sedimentation and subduction. Biosphere Arsenic is naturally present in the biosphere, with the highest concentrations in plant roots. Terrestrial plants can contain up to 200 ppm (parts per million) As. Marine organisms (e.g. Annelida and Echinodermata) contain 6-8 ppm. The human body also contains trace As, with the highest concentrations in the kidneys and liver (up to ~1.5 ppm). Anthropogenic emissions Humans use arsenic in pesticides, wood preservatives, metal treatment, paint, and coal-based power plants. Anthropogenic residues and discharges from coal-based power plants, mining, and smelting can contaminate rivers, lakes, streams and soil. Anthropogenic As emissions also originate from steel and glass production, and from forest and grassland burning. In the atmosphere, As is mainly present in particulates such as dust, with a residence time of 7 to 10 days. Arsenic toxicity Arsenic is a metalloid with an atomic number of 33, and its common oxidation states are +3 and +5, as arsenite (As(III)) and arsenate (As(V)). Arsenic is primarily found as organic arsenic compounds, inorganic arsenic compounds, and arsine gas. Arsenic toxicity is dependent on its oxidation state; As(III) is more toxic than As(V) because of its ability to bind to thiol groups on proteins and enzymes, and its slower excretion rate from the body. The World Health Organization recognizes that inorganic arsenic is extremely toxic for humans (the EPA maximum contaminant level is 10 ppb in drinking water) and detrimental to aquatic life. See also Arsenic Arsenic poisoning References Biogeochemical cycle Arsenic
Arsenic cycle
[ "Chemistry" ]
684
[ "Biogeochemical cycle", "Biogeochemistry" ]
66,437,092
https://en.wikipedia.org/wiki/Char%20Miller
Franklin Lubbock "Char" Miller IV (born November 23, 1951) is an American historian and environmental analysis scholar. He is the W.M. Keck Professor of Environmental Analysis and History at Pomona College and the director of the Claremont Colleges' environmental analysis program. Early life and education Miller was born on November 23, 1951. He attended the Pomfret School and then Pitzer College, graduating in 1975, and subsequently received his master's degree and doctorate from Johns Hopkins University. Career Miller began his teaching career at the University of Miami in 1980. He moved to Trinity University in 1981, where he ultimately served as chair of the History Department and Director of Urban Studies. After nearly 30 years as a professor at Trinity, Miller began teaching at Pomona College in 2007. He is a Senior Fellow at the Pinchot Institute for Conservation and a Fellow of the Forest History Society. Works Natural Consequences: Intimate Essays for a Planet in Peril (2022) West Side Rising: How San Antonio's 1921 Flood Devastated a City and Sparked a Latino Environmental Justice Movement (2021) Hetch Hetchy: A History in Documents (2020) San Antonio: A Tricentennial History (2018) Ogallala: Water for a Dry Land (2018) Not So Golden State: Sustainability vs. the California Dream (2016) America's Great National Forests, Wildernesses, and Grasslands (2016) Seeking the Greatest Good: The Conservation Legacy of Gifford Pinchot (2013) Death Valley National Park: A History (2013) On the Edge: Water, Immigration, and Politics in the Southwest (2013) Public Lands, Public Debates: A Century of Controversy (2012) References External links Pomona College faculty page Biographical interview on the Pomona College Sagecast Pitzer College alumni Johns Hopkins University alumni Pomona College faculty Living people 1951 births American environmentalists American environmental scientists Historians from California
Char Miller
[ "Environmental_science" ]
383
[ "American environmental scientists", "Environmental scientists" ]
66,437,177
https://en.wikipedia.org/wiki/QSO%20J0313%E2%88%921806
QSO J0313−1806 was the most distant, and hence the earliest, known quasar at z = 7.64 at the time of its discovery. In January 2021, it was identified as the most redshifted (highest z) known quasar, hosting the oldest known supermassive black hole (SMBH), with a mass of about 1.6 billion solar masses. The 2021 announcement paper described it as "the most massive SMBH at z > 7". This quasar surpassed the previous record-setting quasar, ULAS J1342+0928. In 2023, UHZ1 was discovered, setting a new record for the most distant quasar and eclipsing that of QSO J0313−1806. One of the 2021 paper authors, Feige Wang, said that the existence of a supermassive black hole so early in the existence of the Universe posed problems for current theories of black hole formation, since "black holes created by the very first massive stars could not have grown this large in only a few hundred million years". The redshift z = 7.642 corresponds to a time about 670 million years after the Big Bang. See also Direct collapse black hole, a process by which black holes may form less than a few hundred million years after the Big Bang List of the most distant astronomical objects List of quasars PSO J172.3556+18.7734 ULAS J1342+0928 References Sources Further reading Astronomical objects discovered in 2021 Supermassive black holes Quasars Eridanus (constellation)
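As a rough check on the quoted numbers, the age of the Universe at a given redshift can be computed from a standard cosmological model. The sketch below uses the astropy library with its built-in Planck 2018 parameters; the exact value shifts slightly with the chosen cosmology, so this is an illustration rather than the figure from the discovery paper.

from astropy.cosmology import Planck18

z = 7.642
age_at_z = Planck18.age(z)              # age of the Universe when the light was emitted
lookback = Planck18.lookback_time(z)    # how long the light has been travelling

print(f"Age of Universe at z={z}: {age_at_z.to('Myr'):.0f}")  # roughly 670 Myr
print(f"Lookback time: {lookback.to('Gyr'):.2f}")             # roughly 13.1 Gyr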
QSO J0313−1806
[ "Physics", "Astronomy" ]
322
[ "Black holes", "Galaxy stubs", "Unsolved problems in physics", "Supermassive black holes", "Constellations", "Astronomy stubs", "Eridanus (constellation)" ]
66,439,372
https://en.wikipedia.org/wiki/Number%20of%20parliamentarians%20in%20the%20Fifth%20French%20Republic
The number of MPs and senators is prescribed in the French Constitution of 4 October 1958. History References See also Number of Westminster MPs Legal history of France National Assembly (France) Parliamentary history of France Government of France Numbering in politics
Number of parliamentarians in the Fifth French Republic
[ "Mathematics" ]
46
[ "Mathematical objects", "Numbers", "Numbering in politics" ]
66,439,467
https://en.wikipedia.org/wiki/Chlorine%20cycle
The chlorine cycle (Cl) is the biogeochemical cycling of chlorine through the atmosphere, hydrosphere, biosphere, and lithosphere. Chlorine is most commonly found as inorganic chloride ions, or in a number of chlorinated organic forms. Over 5,000 biologically produced chlorinated organics have been identified. The cycling of chlorine into the atmosphere and the creation of chlorine compounds by anthropogenic sources have major impacts on climate change and depletion of the ozone layer. Chlorine plays essential roles in many biological processes, including numerous roles in the human body. It also acts as an essential co-factor in enzymes involved in plant photosynthesis. Troposphere Chlorine plays a large role in atmospheric cycling and climate, including, but not limited to, chlorofluorocarbons (CFCs). The major flux of chlorine into the troposphere comes from sea salt aerosol spray. Both organic and inorganic chlorine are transferred into the troposphere from the oceans. Biomass combustion is another source of both organic and inorganic forms of chlorine to the troposphere from the terrestrial reservoir. Typically, organic chlorine forms are highly unreactive and are transferred from the troposphere to the stratosphere. The major flux of chlorine out of the troposphere is via surface deposition into water systems. Hydrosphere Oceans are the largest source of chlorine in the Earth's hydrosphere. In the hydrosphere, chlorine exists primarily as chloride due to the high solubility of the Cl− ion. The majority of chlorine fluxes occur within the hydrosphere because of chloride ions' solubility and reactivity within water systems. The cryosphere is able to retain some chlorine deposited by rainfall and snow, but the majority is eluted into oceans. Lithosphere The largest reservoir of chlorine resides in the lithosphere, with the bulk of global chlorine held in Earth's mantle. Volcanic eruptions sporadically release high levels of chlorine as HCl into the troposphere, but the majority of the terrestrial chlorine flux comes from seawater sources mixing with the mantle. Organically bound chlorine is as abundant as chloride ions in terrestrial soil systems, or the pedosphere. The discovery of multiple Cl-mediating genes in microorganisms and plants indicates that numerous biotic processes, as well as many abiotic processes, use chloride and produce organic chlorinated compounds. These chlorinated compounds can then be volatilized or leached out of soils, which makes the overall soil environment a global sink of chlorine. Multiple anaerobic prokaryotes have been found to contain genes for, and show activity in, the volatilization of chlorinated organics. Biological processes Chlorine's ability to dissociate completely in water is also why it is an essential electrolyte in many biological processes. Chlorine, along with phosphorus, is the sixth most common element in organic matter. Cells utilize chloride to balance pH and maintain turgor pressure at equilibrium. The high electrical conductivity of Cl− ions is essential for neuron signalling in the brain and regulates many other essential functions in biology. Anthropogenic chlorinated compounds The depleting effect of chlorofluorocarbons (CFCs) on ozone over Antarctica has been studied extensively since the 1980s. The low reactivity of CFCs allows them to reach the upper stratosphere, where they interact with UV-C radiation and form highly reactive chlorine radicals that react with methane.
These highly reactive chlorine radicals also interact with volatile organic compounds to form other ozone-depleting acids. Chlorine-36 is a radioactive isotope produced in many nuclear facilities as byproduct waste. Its long half-life (roughly 300,000 years), mobility in the pedosphere, and ability to be taken up by organisms have made it an isotope of high concern among researchers. The high solubility and low reactivity of 36Cl have also made it useful for research on the biogeochemical cycling of chlorine, where it is most often employed as an isotope tracer. References Biogeochemical cycle Chlorine
Chlorine cycle
[ "Chemistry" ]
880
[ "Biogeochemical cycle", "Biogeochemistry" ]
66,439,508
https://en.wikipedia.org/wiki/Chromium%20cycle
The chromium cycle is the biogeochemical cycle of chromium through the atmosphere, hydrosphere, biosphere and lithosphere. Biogeochemical cycle Terrestrial weathering and river transport Chromium has two common oxidation states relevant for environmental conditions: trivalent chromium, Cr(III) (reduced form), and hexavalent chromium, Cr(VI) (most oxidized form). The poorly soluble trivalent chromium cation (Cr3+) strongly adsorbs onto clay particles and particulate organic matter, whereas the highly toxic and carcinogenic hexavalent chromate anion (CrO42−) is soluble and non-sorbed, making it a toxic contaminant in environmental systems. Chromium commonly exists in soil and rocks as highly insoluble trivalent chromium, such as chromite (FeCr2O4, or FeO·Cr2O3), a mixed oxide mineral of the spinel group resembling magnetite (Fe3O4, or FeO·Fe2O3). Terrestrial weathering can cause trivalent chromium to be oxidized by manganese oxides to hexavalent chromium, which is then solubilized and cycled to the ocean through rivers. Estuaries release particulate chromium from rivers to the sea, increasing the dissolved fluxes of chromium to the ocean. Oceanic cycling Soluble hexavalent chromium is the most common form of chromium in the oceans, where over 70% of dissolved chromium is found in oxyanions such as chromate (CrO42−). Soluble trivalent chromium is also found in the oceans, where complexation with organic ligands occurs. Chromium is estimated to have a residence time of 6,300 years in the oceans. Hexavalent chromium is reduced to trivalent chromium in oxygen minimum zones or at the surface of the ocean by divalent iron and organic ligands. There are four sinks of chromium from the oceans: (1) oxic sediments in pelagic zones, (2) hypoxic sediments in continental margins, (3) anoxic or sulfidic sediments in basins or fjords with permanently anoxic or sulfidic (euxinic) bottom waters, and (4) marine carbonates. Influence from other biogeochemical cycles Manganese(III) can oxidize Cr(III) to Cr(VI) when complexed with organic ligands. This causes contaminant mobilization of Cr(VI), and also reduces Mn(III) to Mn(II), which can then be oxidized back to Mn(III) by oxygen. Methods for chromium tracking Isotopic fractionation of chromium has become a valuable tool for monitoring environmental chromium contamination through recent advancements in mass spectrometry. Isotope fractionation during river transport is determined by local redox conditions based on dissolved organic matter in rivers. References Biogeochemical cycle Chromium
Chromium cycle
[ "Chemistry" ]
628
[ "Biogeochemical cycle", "Biogeochemistry" ]
66,439,538
https://en.wikipedia.org/wiki/Gold%20cycle
The gold cycle is the biogeochemical cycling of gold through the lithosphere, hydrosphere, atmosphere, and biosphere. Gold is a noble transition metal that is highly mobile in the environment and subject to biogeochemical cycling, driven largely by microorganisms. Gold undergoes processes of solubilization, stabilization, bioreduction, biomineralization, aggregation, and ligand utilization throughout its cycle. These processes are influenced by various microbial populations and cycling of other elements such as carbon, nitrogen, and sulfur. Gold exists in several forms in the Earth's surface environment including Au(I/III)-complexes, nanoparticles, and placer gold particles (nuggets and grains). The gold biogeochemical cycle is highly complex and strongly intertwined with cycling of other metals including silver, copper, iron, manganese, arsenic, and mercury. Gold is important in the biotech field for applications such as mineral exploration, processing and remediation, development of biosensors and drug delivery systems, industrial catalysts, and for recovery of gold from electronic waste. Lithosphere The lithosphere is the dominant reservoir of gold, containing an estimated 2.6 × 10^13 Mg. Today, gold exists primarily as electrum, in hard rock deposits like tellurides, and as particles in placers in Earth's crust. Gold cycling starts with the microbial weathering of gold-bearing rocks and minerals which mobilizes gold in the environment via release of elemental gold and solubilization. The Witwatersrand gold deposits host approximately 30% of the world's gold resources, a large proportion of which is directly associated with organic carbon derived from microbial mats. Gold ore has been mined in many countries, including Japan, India, Spain, Yugoslavia, South Africa, Australia, the United States of America, Canada, Colombia, Mexico, and Brazil. Ocean The ocean reservoir contains an estimated 5.6 × 10^9 Mg of gold and oceanic gold concentration is about 4 ng Au/L with higher values in some coastal waters. Au(I/III)-ions and Au(0)-colloids are unstable under surface conditions in aqueous solutions and commonly form ligand complexes with substances excreted by microorganisms. Similar to silver and mercury, these mobile Au(I/III)-complexes are toxic in nature. Some bacteria that live in biofilms on placer gold particle surfaces deal with this toxicity by precipitating Au(I/III)-complexes which leads to the biomineralization of gold. Other archaea, iron-reducing bacteria, and some sulfate-reducing bacteria have developed methods to regulate and detoxify their immediate environment when Au(III)-ions are present at toxic levels. Iron- and sulfur-oxidizing litho-autotrophic bacteria break down gold-hosting sulfide minerals, releasing gold as alloy particles or Au(I)-thiosulfate complexes. Eventually, gold nanoparticles released by these processes undergo transformation, are dispersed in oceans, or accumulate in sediments. Atmosphere The atmosphere is the smallest reservoir of gold, containing an estimated 370 Mg. The most volatile gold compounds are Au2Cl6, which may occur in volcanic gases, and AuF3. Influences and interactions of other biogeochemical cycles The biogeochemical cycle of gold is affected by the carbon, nitrogen, sulfur, and iron cycles. Decomposition of organic carbon under anoxic conditions creates a wide range of organic intermediates, e.g., organic acids, that are important determinants of gold mobility.
Key microbial processes in the nitrogen cycle can be influenced by gold and vice versa; for example, autotrophic denitrifying bacteria can destabilize Au-complexes and may play a role in gold cycling. Overall, it is likely that gold mobility, biomineralization, and ore-forming processes are impacted by reactive nitrogen-containing compounds. Gold is commonly incorporated in iron-sulfides and adsorbed by Fe(III)-oxyhydroxide precipitates; oxidation of gold-bearing pyrite can lead to the mobilization of soluble gold complexes. Ancient Earth Throughout Earth's history, the interplay of gold, microorganisms, and physicochemical conditions such as pH and redox potential has led to the aggregation of gold particles to form grains and nuggets. Cyanobacteria in shallow surface waters on the early anoxic Earth accumulated gold complexes dissolved in the water, and geochemical modeling indicates that gold solubility in ancient waterbodies was much higher than today. Experimental evidence suggests that on the early Earth, Fe(III)-reducing extremophiles and sulfate-reducing bacteria may have contributed to the formation of gold-bearing deposits. See also California Gold Rush Biogeochemical cycle References Gold Biogeochemical cycle
Gold cycle
[ "Chemistry" ]
1,021
[ "Biogeochemical cycle", "Biogeochemistry" ]
66,439,551
https://en.wikipedia.org/wiki/Iodine%20cycle
The iodine cycle is a biogeochemical cycle that primarily consists of natural and biological processes that exchange iodine through the lithosphere, hydrosphere, and atmosphere. Iodine exists in many forms, but in the environment, it generally has an oxidation state of -1, 0, or +5. Oceanic cycling Iodine in the ocean exists mostly in oceanic sediments and seawater. During subduction of oceanic crust and seawater, most of the iodine cycles into seawater through brine, while a minor amount is cycled into the mantle. Marine biota, including seaweed and fish, accumulate iodine from the seawater and return it during decomposition. Sedimentation of oceanic iodine replenishes the ocean sediment sink. The losses of iodine from the oceanic sink are to the atmospheric sink. Sea spray aerosolization accounts for a portion of this loss. However, the majority of the iodine cycled into the atmosphere occurs through biological conversion of iodide and iodate to methyl forms, primarily methyl iodide. Algae, phytoplankton, and bacteria are involved in reducing the stable iodate ion to iodide, and different species produce volatile methyl iodide, which leaves the oceans and forms aerosols in the atmosphere. Terrestrial cycling Iodine rarely occurs naturally in mineral form, so it comprises a very small portion of rocks by mass. Sedimentary rocks have higher concentrations of iodine compared to metamorphic and igneous rocks. Due to the low concentration of iodine in rocks, weathering is a minor flux of iodine to soils and the freshwater hydrosphere. Soils contain a much higher concentration of iodine compared to their parent rock, though most of it is bound to organic and inorganic matter, potentially due to microbial activity. The major source of iodine to soils is dry and wet deposition of aerosolized iodine from the atmosphere. Due to the high production of atmospheric iodine from the oceans, both the concentration of iodine and the flux of iodine to soils are greatest near coastal regions. Plants take up iodine from the soil through their roots and return the iodine when they decompose. Fauna that consume plants may take up this iodine but similarly return it to soils upon decomposition. Some iodine may also be cycled into the freshwater hydrosphere through leaching and runoff, where it may return to the oceans. Similar to oceanic iodine, the majority of iodine cycled out of soil is volatilized through conversion to methyl forms of iodine by bacteria. Unlike ocean volatilization, however, bacteria are thought to be the only organisms responsible for volatilization in soils. Anthropogenic influences Iodine is a necessary trace nutrient for human health and is used as a product in various industries. Iodine intended for human use and consumption is taken from brines, which accounts for a minor perturbation to the global iodine cycle. A much larger anthropogenic impact is through the burning of fossil fuels, which releases iodine into the atmosphere. Iodine-129, a radioisotope of iodine, is a waste product of nuclear power generation and weapons testing. Unless present in high concentrations, I-129 likely does not present a danger to human health. Early research has attempted to use the I-129/I-127 ratio as a tracer for the iodine cycle. References Biogeochemical cycle Iodine
Iodine cycle
[ "Chemistry" ]
702
[ "Biogeochemical cycle", "Biogeochemistry" ]
66,439,570
https://en.wikipedia.org/wiki/Lead%20cycle
The lead cycle is the biogeochemical cycle of lead through the atmosphere, lithosphere, biosphere, and hydrosphere, which has been influenced by anthropogenic activities. Natural lead sources Lead (Pb) is a heavy trace element and is formed by the radioactive decay of uranium and thorium. In crustal rocks, it is present as the lead sulfide mineral galena. Natural sources of lead in the lead cycle include wind-borne dust, volcanic outgassing, and forest fires. Natural weathering of rocks by physical and chemical agents can mobilize lead in soils. Mobilized lead can react to form oxides or carbonates. It can also co-precipitate with other minerals by being occluded through surface adsorption and complexation. Anthropogenic lead cycle Anthropogenic activities have accelerated lead mobilization to the environment. The majority of anthropogenic lead comes from non-ferrous metal manufacturing plants, mining and smelting of ores, stationary and mobile fossil fuel combustion platforms, and lead batteries. These activities produce very fine micron-sized Pb particles that can be transported as aerosols. Anthropogenic lead fluxes decreased from the 1980s to the 2000s as a result of global regulation and the outlawing of leaded gasoline. However, global lead production has seen a steady rise in the 21st century. Lead accumulation in the ocean Wet deposition removes lead from the atmosphere to the surface ocean. Precipitation leads to solubilization of aerosols and washout of particulates. Pb concentrations in the oceans depend on wet deposition and the concentration of Pb present in the atmosphere. The main sink for lead is burial in marine sediments. Lead in drinking water Lead is highly regulated in drinking water because it affects the developing brain and the nervous system. Children are more prone to lead exposure because they absorb more of the ingested Pb from the gastrointestinal tract. The U.S. Environmental Protection Agency established the Lead and Copper Rule (LCR) in 1991, which states that lead and copper concentrations should not exceed 15 ppb and 1.3 ppm, respectively, in more than 10% of customer taps sampled. In spite of such regulations, lead was found at high concentrations exceeding the LCR threshold in the drinking water of Flint, Michigan. The problem was exacerbated when the drinking water supply was switched to the Flint River rather than the treated water from Lake Huron and the Detroit River. The water was corrosive, which caused the dissolution of lead from water pipes. The preventive measure in such cases is to add phosphate to control the mobilization of lead by the formation of protective scales. References Biogeochemical cycle Lead
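The LCR compliance check described above amounts to comparing the 90th-percentile concentration of the sampled taps against the action level: if more than 10% of taps exceed 15 ppb, the action level is exceeded. The snippet below is a simplified illustration of that logic using made-up sample values; it is not an implementation of the regulatory sampling procedure.

import numpy as np

# Hypothetical lead measurements (ppb) from sampled customer taps
samples_ppb = np.array([2.0, 3.5, 1.2, 18.0, 4.4, 0.9, 7.3, 2.8, 16.5, 5.1])

ACTION_LEVEL_PPB = 15.0  # LCR action level for lead

# 90th percentile of the sampled taps; exceeding the action level means
# more than 10% of the taps are above 15 ppb.
p90 = np.percentile(samples_ppb, 90)
print(f"90th percentile: {p90:.1f} ppb")
print("Action level exceeded" if p90 > ACTION_LEVEL_PPB else "Within action level")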
Lead cycle
[ "Chemistry" ]
545
[ "Biogeochemical cycle", "Biogeochemistry" ]
66,439,584
https://en.wikipedia.org/wiki/Potassium%20cycle
The potassium (K) cycle is the biogeochemical cycle that describes the movement of potassium throughout the Earth's lithosphere, biosphere, atmosphere, and hydrosphere. Functions Along with nitrogen and phosphorus, potassium is one of the three major nutrients that plants require in large quantities. Potassium is essential to stomatal control in plants and to muscle contraction in humans. Lithosphere and Soil By weight, K makes up 2.6% of the Earth's crust. Potassium is stored in primary minerals (feldspar, biotite, and muscovite), and chemical weathering releases it into the soil, accounting for up to 11% of plant demand. Some plants and bacteria also release organic acids into the soil that make K accessible for their use. Potassium exists in its highest concentrations in the uppermost layers of soil, stored in three pools: fixed K, exchangeable K, and solution K. Fixed K accounts for 96-99% of soil K and is stored in feldspar, mica, and illite minerals. Exchangeable K is potassium adsorbed onto clay particles and organic matter and accounts for 1-2% of total soil K. Potassium in soil solution is the most readily available form of K for plants to absorb, but only amounts to 0.1-0.2% of total soil K. Reserves of potassium exist in ores and evaporites of potassium chloride (KCl) found in Germany, France, Canada, the United States, and Dead Sea brine. An estimated 32 × 10^6 tonnes (32 Tg) of potassium are mined from the Earth each year, of which 28 × 10^6 tonnes (28 Tg) are applied to crop fields annually. Potassium is most commonly applied as potassium chloride (KCl), also referred to as potash, with application rates often expressed as K2O. Application of potassium is necessary in agriculture because the removal of potassium from the soil through plant uptake and crop removal occurs at a faster rate than its replacement through rock weathering. At the current consumption rate, K2O reserves are expected to last 100 years. Potassium depletion in soils can be minimized by leaving crop residues on soils, allowing the plant matter to decay and release its stored potassium back into the soil. Biosphere The most abundant ion in plant cells is the potassium ion. Plants take up potassium for plant growth and function. A portion of potassium uptake in plants can be attributed to weathering of primary minerals, but plants can also ‘pump’ potassium from deeper soil layers to increase levels of surface K. Potassium stored in plant matter can be returned to the soil during decomposition, especially in areas of higher rainfall that experience higher leaching rates. Potassium leaching occurs at higher rates than nitrogen and phosphorus leaching, likely because potassium exists in the plant only as the soluble ion (K+). Nitrogen and phosphorus are typically incorporated into large, complex molecules that are more difficult to leach through cell membranes than the small K+ ion. Deciduous plants that lose their leaves will relocate 10-32% of their potassium for use in other areas of the plant before abscission. Atmosphere Some potassium is exchanged between plants and the atmosphere through organic aerosols released from plant leaves. Atmospheric potassium deposition varies from 0.7 to greater than 100 kg ha^−1 yr^−1 depending on geographic location and climate. Additionally, marine aerosols can evaporate into the atmosphere and return via precipitation. Hydrosphere The hydrosphere is the largest reservoir for potassium, holding an estimated 552.7 × 10^12 tonnes (552.7 × 10^6 Tg).
Leaching and erosion carry 1.4 × 10^9 tonnes (1,400 Tg) of potassium per year in soil solution into groundwater, rivers, and oceans. Some potassium in the atmosphere also enters the hydrosphere through precipitation. Potassium in sediment pore fluids is removed from solution by the authigenic formation of clay, which is then subducted, along with potassium deposits and ocean basalt, to return to the lithosphere. See also Potassium Potash References Biogeochemical cycle Potassium Soil science
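Taken at face value, the reservoir and flux figures above imply an oceanic residence time for potassium of a few hundred thousand years. The division below is an order-of-magnitude illustration using only the numbers quoted in this article (the flux is the river input and the reservoir is the whole hydrosphere, so this is a rough estimate rather than a cited value):

$$\tau \approx \frac{552.7 \times 10^{12}\ \text{t}}{1.4 \times 10^{9}\ \text{t yr}^{-1}} \approx 3.9 \times 10^{5}\ \text{years}$$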
Potassium cycle
[ "Chemistry" ]
836
[ "Biogeochemical cycle", "Biogeochemistry" ]
66,439,766
https://en.wikipedia.org/wiki/Neodymium%28III%29%20sulfate
Neodymium(III) sulfate is a salt of the rare-earth metal neodymium with the formula Nd2(SO4)3. It forms multiple hydrates, the octa-, penta-, and dihydrate, of which the octahydrate is the most common. The compound shows retrograde solubility: unlike most compounds, its solubility decreases with increasing temperature. It is used in glass for extremely powerful lasers. Preparation Neodymium sulfate is produced by dissolving neodymium(III) oxide in sulfuric acid. It can also be prepared by the reaction of neodymium(III) perchlorate and sodium sulfate. Properties Neodymium sulfate octahydrate decomposes at 40 °C to the pentahydrate, which in turn decomposes to the dihydrate at 145 °C. The dihydrate dehydrates to the anhydrous form at 290 °C. References Sulfates Neodymium(III) compounds
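The acid-dissolution route mentioned above is a straightforward neutralization of the oxide. The balanced equation below is the standard textbook formulation of that reaction rather than one reproduced from a cited source:

Nd2O3 + 3 H2SO4 → Nd2(SO4)3 + 3 H2O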
Neodymium(III) sulfate
[ "Chemistry" ]
215
[ "Sulfates", "Salts" ]
66,440,212
https://en.wikipedia.org/wiki/HD%20207832
HD 207832 is a G-type main-sequence star. Its surface temperature is 5764 K. HD 207832 is slightly enriched compared to the Sun in its concentration of heavy elements, with a metallicity Fe/H index of 0.17, and is much younger, with an age of 0.74 billion years. Kinematically, it belongs to the thin disk of the Milky Way. A multiplicity study in 2014 detected a candidate comoving stellar companion, a red dwarf star or brown dwarf with a spectral class of M6.5, at a very wide projected separation of 38.57′ (2.0 light years). Planetary system In 2012, two planets, named HD 207832 b and HD 207832 c, were discovered by the radial velocity method on wide, eccentric orbits. The planetary system would remain stable even if the planetary orbits are coplanar. Although the discovery of the inner planet was confirmed in 2018, the discovery of both planets was suspected to be a false positive in 2020, as newer radial velocity data do not support the existence of the planets. References Piscis Austrinus G-type main-sequence stars Hypothetical planetary systems J21523626-2601352 107985 207832 CD-26 15858
HD 207832
[ "Astronomy" ]
258
[ "Piscis Austrinus", "Constellations" ]
66,440,882
https://en.wikipedia.org/wiki/Achroonema
Achroonema is a genus of bacteria with uncertain systematics. The genus was described in 1948 by Heinrich Leonhards Skuja. Species: Achroonema angustatum (Koppe) Skuja Achroonema articulatum Skuja Achroonema gotlandicum Skuja Achroonema inaequale Skuja Achroonema lentum Skuja Achroonema macromeres Skuja Achroonema proteiforme Skuja Achroonema simplex Skuja Achroonema spiroideum Skuja Achroonema splendens Skuja Achroonema sporogenum Skuja Achroonema subsalsum Behre References Bacteria genera Enigmatic bacteria taxa
Achroonema
[ "Biology" ]
167
[ "Bacteria stubs", "Bacteria" ]
66,440,883
https://en.wikipedia.org/wiki/Vericiguat
Vericiguat, sold under the brand name Verquvo, is a medication used to reduce the risk of cardiovascular death and hospitalization in certain patients with heart failure after a recent acute decompensation event. It is taken by mouth. Vericiguat is a soluble guanylate cyclase (sGC) stimulator. Common side effects include low blood pressure and low red cell count (anemia). It was approved for medical use in the United States in January 2021, and for use in the European Union in July 2021. The U.S. Food and Drug Administration considers it to be a first-in-class medication. Medical uses Vericiguat is indicated to reduce the risk of cardiovascular death and hospitalization for heart failure following a prior hospitalization for heart failure or need for outpatient intravenous diuretics, in adults with symptomatic chronic heart failure and an ejection fraction of less than 45%. Vericiguat is usually given orally once every day with food. No dose adjustments are required in the elderly, in people with mild-to-moderate liver failure, or in those with impaired kidney function. As of 2024, no studies have provided information for patients with severely impaired kidney function or severe liver failure, or for those on dialysis. Vericiguat is contraindicated in pregnancy. While there are no studies on its safety when used by pregnant women, animal studies suggest higher rates of birth defects, as well as increased numbers of abortions and resorptions. It may also pass into breast milk, but the effects on breastfed infants are unknown. The manufacturer advises that patients of child-bearing age should be on contraception and assessed for pregnancy before starting treatment. Adverse effects The most common side effects of vericiguat include symptomatic low blood pressure and anemia. Patients taking other soluble guanylate cyclase stimulators should not take vericiguat. Pharmacology Vericiguat is a direct stimulator of soluble guanylate cyclase, an important enzyme in vascular smooth muscle cells. Specifically, vericiguat binds to the beta-subunit of the target site on the soluble guanylate cyclase enzyme. Soluble guanylate cyclase catalyzes the formation of cyclic GMP upon interaction with nitric oxide to activate a number of downstream signaling cascades, which can compensate for defects in this pathway and the resulting losses in regulatory myocardial and vascular cellular processes due to cardiovascular complications. Pharmacokinetics After vericiguat is administered (10 mg by mouth once daily), the average steady-state Cmax and AUC for patients with cardiovascular failure are 350 mcg/L and 6,680 mcg·h/L, with a Tmax of one hour. Vericiguat has a positive food effect, and therefore patients are advised to consume food with the drug, giving an oral bioavailability of 93%. Vericiguat is extensively protein bound in plasma. Vericiguat is primarily metabolized via phase 2 conjugation reactions, with a minor CYP-mediated oxidative metabolite. The major metabolite is glucuronidated and inactive. The typical half-life in patients with heart failure is 30 hours. Vericiguat has decreased clearance in patients with systolic heart failure. History The U.S. Food and Drug Administration (FDA) approved vericiguat based on evidence from a clinical trial (NCT02861534) which consisted of 5,050 participants aged 23 to 98 years old with worsening heart failure. The trial was conducted at 694 sites in 42 countries in Europe, Asia, and North and South America. The trial enrolled participants with symptoms of worsening heart failure.
Participants were randomly assigned to receive vericiguat or a placebo pill once a day. Neither the participants nor the health care professionals knew if the participants were given vericiguat or placebo pills until after the trial was complete. It was awarded a fast track designation on 19 January 2021. Society and culture Legal status On 20 May 2021, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) adopted a positive opinion, recommending the granting of a marketing authorization for vericiguat, intended for the treatment of symptomatic chronic heart failure in adults with reduced ejection fraction. The applicant for this medicinal product is Bayer AG. Vericiguat was approved for medical use in the European Union in July 2021. References Further reading External links Soluble guanylate cyclase stimulators Pyrazolopyridines Fluoroarenes Pyrimidines Carbamates Amines Drugs developed by Merck & Co. 2-Fluorophenyl compounds
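From the 30-hour half-life reported in the pharmacokinetics section, the first-order elimination rate constant and the approximate time to reach steady state (conventionally four to five half-lives) can be estimated. The snippet below is a generic pharmacokinetic calculation for illustration only; the values are derived from the half-life quoted above, not taken from the drug label.

import math

half_life_h = 30.0                        # reported elimination half-life in hours
k_el = math.log(2) / half_life_h          # first-order elimination rate constant
t_steady_state_h = 5 * half_life_h        # ~5 half-lives as a common rule of thumb

print(f"Elimination rate constant: {k_el:.4f} per hour")        # about 0.0231 h^-1
print(f"Approximate time to steady state: {t_steady_state_h:.0f} hours "
      f"({t_steady_state_h / 24:.1f} days)")                    # about 150 hours (~6 days)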
Vericiguat
[ "Chemistry" ]
1,016
[ "Amines", "Bases (chemistry)", "Functional groups" ]
66,441,421
https://en.wikipedia.org/wiki/Actinoptychus
Actinoptychus is a genus of diatoms belonging to the family Heliopeltaceae. The genus was described in 1843 by Christian Gottfried Ehrenberg. Species: Actinoptychus octodenarius Ehrenberg Actinoptychus senarius (Ehrenberg) Ehrenberg, 1843 References Diatoms Diatom genera
Actinoptychus
[ "Biology" ]
76
[ "Diatoms", "Algae" ]
66,441,710
https://en.wikipedia.org/wiki/Magnetic%20pulsations
Magnetic pulsations are extremely low frequency disturbances in the Earth's magnetosphere driven by its interactions with the solar wind. These variations in the planet's magnetic field can oscillate for multiple hours when a solar wind driving force strikes a resonance. This is a form of Kelvin–Helmholtz instability. The intensity, frequency, and orientation of these variations are measured by Intermagnet. In 1964, the International Association of Geomagnetism and Aeronomy (IAGA) proposed a classification of magnetic pulsations into continuous pulsations (Pc) and irregular pulsations (Pi). References Magnetospheres Magnetism in astronomy Earth Solar phenomena
Magnetic pulsations
[ "Physics", "Astronomy" ]
139
[ "Physical phenomena", "Magnetospheres", "Astronomy stubs", "Solar phenomena", "Magnetism in astronomy", "Stellar phenomena" ]
66,442,955
https://en.wikipedia.org/wiki/Michel%20Bierlaire
Michel Bierlaire (born 1967 in Namur, Belgium) is a Belgian-Swiss applied mathematician specialized in transportation modeling and optimization. He is a professor at EPFL (École Polytechnique Fédérale de Lausanne) and the head of the Transport and Mobility Laboratory. Career Bierlaire received a PhD in mathematics from the University of Namur in 1996 for his thesis on "Mathematical models for transportation demand analysis", which was supervised by Philippe Toint. He then joined the Intelligent Transportation Systems Program at the Massachusetts Institute of Technology as a research associate, where he worked on the design and development of DynaMIT, a real-time software simulation tool designed to "effectively support the operation of Advanced Traveler Information Systems (ATIS) and Advanced Traffic Management Systems (ATMS)." In 1998, he joined EPFL, first as a senior scientist (Maître d'enseignement et de recherche) in the Operations Research Group at the Institute of Mathematics. In 2006, he was made associate professor at EPFL's School of Architecture, Civil and Environmental Engineering and became the founding director of the Transport and Mobility Laboratory. Since 2012, he has been a full professor at EPFL. At EPFL, he created the Doctoral Program in Civil and Environmental Engineering in 2010, which he chaired until 2017. In 2012, Bierlaire founded hEART, the European Association for Research in Transportation, which he chaired from 2012 to 2015. Research Bierlaire's research aims at developing mathematical models that replicate the complexity of the mobility behavior of individuals and goods for all modes of transportation. He aims to develop solutions to transportation problems that also account for the implications of mobility on land use, economics, and the environment, among others. His work focuses on modelling travel behaviours by employing choice and activity-based models; on developing operations research models based on vehicle routing, scheduling, and timetabling; and on the fusion of those models. His further interests encompass intelligent transportation systems and the reproduction of pedestrian flow patterns. He creates and tests mathematical models and algorithms for applications in operations research that include continuous and discrete optimization, queuing theory, graphs, and simulation. Apart from implementations in transportation demand analysis, his work also finds active use in other domains such as marketing and image analysis. In addition to mathematics, his multidisciplinary research draws on computer vision, image analysis, hospital management, and marketing. Biogeme Bierlaire is the lead developer of Biogeme, an open source project that performs maximum likelihood estimation of parametric discrete choice models. It works within the framework of Pandas, a Python data analysis library. Teaching Bierlaire has developed several online courses, one on discrete choice models and three on optimization. Together with Moshe Ben-Akiva at MIT and Daniel McFadden and Joan Walker, both at the University of California, Berkeley, he offers a course on "Discrete Choice Analysis: Predicting Individual Behaviour and Market Demand" that is designed for professionals from academia and industry. Distinctions On the invitation of the Association of European Operational Research Societies, Bierlaire initiated the EURO Journal on Transportation and Logistics, of which he was editor-in-chief between 2011 and 2019.
Since 2012, he has been an associate editor of the journal Operations Research. He was an associate editor of the Journal of Choice Modelling from its inception in 2007 until 2017. Selected works S. Sharif Azadeh, B. Atasoy, M. E. Ben-Akiva, M. Bierlaire, and M. Y. Maknoon. "Choice-driven dial-a-ride problem for demand responsive mobility service." Transportation Research Part B: Methodological 161 (2022): 128-149. Paneque, M. P., Bierlaire, M., Gendron, B., & Sharif Azadeh, S. (2021). Integrating advanced discrete choice models in mixed integer linear optimization. Transportation Research Part B: Methodological, 146, 26-49. References External links Website of the Transport and Mobility Laboratory Website of the European Association for Research in Transportation 1967 births Living people Belgian mathematicians Swiss mathematicians Applied mathematicians Université de Namur alumni Academic staff of the École Polytechnique Fédérale de Lausanne People from Namur (city)
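To make the Biogeme reference above concrete, the sketch below shows roughly how a simple binary logit model is specified and estimated with the package. The data, variable names, and parameters are invented for illustration, and exact API details (module layout, method names) vary between Biogeme versions, so treat this as an indicative sketch rather than authoritative usage.

import pandas as pd
import biogeme.database as db
import biogeme.biogeme as bio
from biogeme import models
from biogeme.expressions import Beta, Variable

# Hypothetical mode-choice observations: travel times in minutes, chosen mode (1 = car, 2 = train)
df = pd.DataFrame({
    'TT_CAR':   [30, 45, 25, 60, 40, 35, 50, 20],
    'TT_TRAIN': [40, 35, 50, 45, 55, 30, 40, 45],
    'CHOICE':   [1, 2, 1, 2, 1, 2, 2, 1],
})
database = db.Database('example', df)

TT_CAR = Variable('TT_CAR')
TT_TRAIN = Variable('TT_TRAIN')
CHOICE = Variable('CHOICE')

# Parameters to be estimated by maximum likelihood
ASC_CAR = Beta('ASC_CAR', 0, None, None, 0)
B_TIME = Beta('B_TIME', 0, None, None, 0)

# Utility functions and availability conditions for the two alternatives
V = {1: ASC_CAR + B_TIME * TT_CAR, 2: B_TIME * TT_TRAIN}
av = {1: 1, 2: 1}

logprob = models.loglogit(V, av, CHOICE)   # log-likelihood of the observed choices
estimator = bio.BIOGEME(database, logprob)
results = estimator.estimate()
print(results.get_estimated_parameters())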
Michel Bierlaire
[ "Mathematics" ]
871
[ "Applied mathematics", "Applied mathematicians" ]
66,443,279
https://en.wikipedia.org/wiki/H3R26me2
H3R26me2 is an epigenetic modification to the DNA packaging protein histone H3. It is a mark that indicates the di-methylation at the 26th arginine residue of the histone H3 protein. In epigenetics, arginine methylation of histones H3 and H4 is associated with a more accessible chromatin structure and thus higher levels of transcription. The existence of arginine demethylases that could reverse arginine methylation is controversial. Nomenclature The name of this modification indicates dimethylation of arginine 26 on the histone H3 protein subunit. Arginine Arginine can be methylated once (monomethylated arginine) or twice (dimethylated arginine). Methylation of arginine residues is catalyzed by three different classes of protein arginine methyltransferases. Arginine methylation affects the interactions between proteins and has been implicated in a variety of cellular processes, including protein trafficking, signal transduction, and transcriptional regulation. Arginine methylation plays a major role in gene regulation because of the ability of the PRMTs to deposit key activating (histone H4R3me2, H3R2me2, H3R17me2a, H3R26me2) or repressive (H3R2me2, H3R8me2, H4R3me2) histone marks. Histone modifications The genomic DNA of eukaryotic cells is wrapped around special protein molecules known as histones. The complexes formed by the looping of the DNA are known as chromatin. Mechanism and function of modification Methylation of H3R26 is mediated by CARM1, which is recruited to promoters upon gene activation along with acetyltransferases and activates transcription. When CARM1 is recruited to transcriptional promoters, histone H3 is methylated (H3R17me2 & H3R26me2). H3R26 lies close to H3K27, which is a repressive mark when methylated. There are several ways that H3R26 could change gene expression. Epigenetic implications The post-translational modification of histone tails by either histone-modifying complexes or chromatin remodeling complexes is interpreted by the cell and leads to complex, combinatorial transcriptional output. It is thought that a histone code dictates the expression of genes by a complex interaction between the histones in a particular region. The current understanding and interpretation of histones come from two large-scale projects: ENCODE and the Epigenomic Roadmap. The purpose of the epigenomic study was to investigate epigenetic changes across the entire genome. This led to chromatin states, which define genomic regions by grouping different proteins and/or histone modifications together. Chromatin states were investigated in Drosophila cells by looking at the binding locations of proteins in the genome. Use of ChIP-sequencing revealed regions in the genome characterized by different banding. Different developmental stages were profiled in Drosophila as well, and an emphasis was placed on the relevance of histone modifications. A look into the data obtained led to the definition of chromatin states based on histone modifications. Certain modifications were mapped, and enrichment was seen to localize in certain genomic regions. The human genome is annotated with chromatin states. These annotated states can be used as new ways to annotate a genome independently of the underlying genome sequence. This independence from the DNA sequence enforces the epigenetic nature of histone modifications. Chromatin states are also useful in identifying regulatory elements that have no defined sequence, such as enhancers.
This additional level of annotation allows for a deeper understanding of cell-specific gene regulation. Clinical significance CARM1 knockout mice are smaller and die shortly after birth. CARM1 is required for the epigenetic maintenance of pluripotency and self-renewal, as it methylates H3R17 and H3R26 at core pluripotency genes such as Oct4, SOX2, and Nanog. It is possible that H3R26me2 levels change during the pre-implantation development of bovine embryos. Methods The histone mark H3R26me2 can be detected in a variety of ways: 1. Chromatin Immunoprecipitation Sequencing (ChIP-sequencing) measures the amount of DNA enrichment once bound to a targeted protein and immunoprecipitated. It results in good optimization and is used in vivo to reveal DNA-protein binding occurring in cells. ChIP-Seq can be used to identify and quantify various DNA fragments for different histone modifications along a genomic region. 2. Micrococcal Nuclease sequencing (MNase-seq) is used to investigate regions that are bound by well-positioned nucleosomes. The micrococcal nuclease enzyme is employed to identify nucleosome positioning. Well-positioned nucleosomes are seen to have enrichment of sequences. 3. Assay for transposase-accessible chromatin sequencing (ATAC-seq) is used to look into regions that are nucleosome-free (open chromatin). It uses hyperactive Tn5 transposase to highlight nucleosome localisation. See also Histone methylation Histone methyltransferase References Epigenetics Post-translational modification
H3R26me2
[ "Chemistry" ]
1,158
[ "Post-translational modification", "Gene expression", "Biochemical reactions" ]
66,444,644
https://en.wikipedia.org/wiki/Rockfall%20barrier
A rockfall barrier is a structure built to intercept rockfall, most often made from metallic components and consisting of an interception structure hung on post-supported cables. Barriers are passive rockfall mitigation structures adapted for rock block kinetic energies up to 8 megajoules. Alternatively, these structures are also referred to as fences, catch fences, rock mesh, or net fences. History In the 1960s, the Washington State Department of Transportation conducted the very first experiments for evaluating the efficiency of barriers in arresting rock blocks. A so-called 'chain link fence attenuator' was exposed to impacts by blocks freely rolling down a slope in order to evaluate its efficiency. These experiments were followed by others until the end of the 1990s. Progressively, the testing technique was improved, using zip-lines to convey the rock block to the barrier. Testing real-scale structures is now very common and part of the design process. The very first use of rockfall barriers dates back to this period. It progressively became widespread. Nowadays, barriers are the most widely used type of rockfall mitigation structure, and their variety has considerably increased since the 1970s, in particular over the last two decades. A commonly used type of net is made from metallic rings. In such nets, each ring is interlaced with either 4 or 6 adjoining rings. These nets were first used after a French company bought a stock of nets used in the USSR for protecting harbours against submarine intrusion. These nets are referred to as ASM (anti-submarine). Other mesh shapes are also observed in rockfall barriers (see below). From the 2000s, these barriers were progressively adapted to be used as protection structures against various types of geophysical flows such as small landslides, mud flows, debris flows, and snow avalanches. Types of barriers Barriers are mainly made from metallic components: nets, cables, posts, shackles, and brakes. Barriers are connected to the ground by anchors. Depending on the rock block kinetic energy and the manufacturer, various structure types and designs exist, combining these different components. This variety in barrier design results in particular from differences in: post cross-sectional shapes (circular, square...) mesh size and shape: made from hexagonal wire mesh, circular rings or cables, the latter forming either rectangular, square, rhombus or water-drop mesh shapes distance between supporting posts (i.e. length of the mesh panels) number and layout of the cables and brakes (if any) brakes (if any): various technologies and activation force levels number and layout of the brakes (if any) post position with respect to the interception structure. Static barriers When the rock block kinetic energy is less than 500 kJ, a static barrier is often appropriate. In general, it consists of static posts, cables and an interception net. As a result of this design, the deformation of the structure when impacted is limited. Flexible barriers Flexible barriers are used when the rock block kinetic energy is larger than 500 kJ and up to 8000 kJ. The structure is given flexibility by using brakes, placed along the cables connected to the interception net. When the rock block impacts the net, forces develop in these cables. Once the force in the cables reaches a given value, the brake is activated, allowing for a larger barrier deformation and dissipating energy.
The way this component dissipates energy varies from one brake technology to another: pure friction, partial failure, plastic deformation, or mixed friction/plastic deformation. Brakes also prevent large forces from developing in the barrier anchorages and are thus key components. Design principles The two main design characteristics of rockfall barriers are their height and their impact strength. As for other passive rockfall protection structures (e.g. embankments), the required barrier height is defined based on the passing heights of rock fragments obtained from trajectory simulations. These simulations also provide the kinetic energy to consider for the barrier selection and design. The appropriate barrier choice is based on these two parameters. The impact strength of a specific rockfall barrier is mainly determined from real-scale impact experiments. For instance, the design of flexible barriers is often based on results from the conformance tests prescribed in a specific European guideline. These tests consist of normal-to-the-barrier impacts at the center of a three-panel barrier by a projectile with a translational velocity of at least 25 m/s and no rotational velocity. The response of a barrier may also be evaluated with specific numerical models, developed using the finite element method or the discrete element method. These simulation tools may also be used to improve the barrier design, for example by accounting for site-specific impact conditions. See also Rockfall Flexible debris-resisting barrier Landslide mitigation Rockfall protection embankment References Landslide analysis, prevention and mitigation
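For a sense of scale, the kinetic energy classes quoted above follow directly from the kinetic energy formula E = ½mv². As an illustrative calculation (the block mass is arbitrary, not taken from a test standard), a 2,000 kg block moving at the 25 m/s test velocity carries

$$E = \tfrac{1}{2} m v^{2} = \tfrac{1}{2} \times 2000\ \text{kg} \times (25\ \text{m s}^{-1})^{2} \approx 0.63\ \text{MJ},$$

while reaching the 8 MJ upper design limit at the same speed would require a block of roughly 25 tonnes.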
Rockfall barrier
[ "Environmental_science" ]
978
[ "Environmental soil science", " prevention and mitigation", "Landslide analysis" ]
66,445,175
https://en.wikipedia.org/wiki/NGC%20788
NGC 788 is a lenticular galaxy located in the constellation Cetus. Its velocity with respect to the cosmic microwave background is 3938 ± 30 km/s, which corresponds to a Hubble distance of roughly 55–58 Mpc (about 180–190 million light-years), depending on the adopted value of the Hubble constant. It was discovered in a sky survey by William Herschel on September 10, 1785. Studies of NGC 788 indicate that, while it is classified as a Seyfert 2, it contains an obscured Seyfert 1 nucleus, following the detection of a broad Hα emission line in the polarized flux spectrum. The observations also indicated one of the lowest radio luminosities observed in an obscured Seyfert 1. Supernova One supernova has been observed in NGC 788: SN 1998dj (type Ia, mag. 16) was discovered by the Lick Observatory Supernova Search (LOSS) on 8 August 1998. NGC 788 Group NGC 788 is the largest and brightest galaxy in a group of at least five galaxies that bears its name. The other four galaxies in the NGC 788 group (also known as LGG 44) are IC 183, NGC 829, NGC 830 and NGC 842. Image gallery See also List of NGC objects (1–1000) References Lenticular galaxies Cetus 0788 007656 -01-06-025 F01586-0703 Seyfert galaxies
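The Hubble distance quoted above follows from Hubble's law, d = v/H0. Assuming, for illustration, a Hubble constant of about 70 km/s/Mpc (the published distance depends on the adopted value), the recession velocity gives

$$d = \frac{v}{H_0} = \frac{3938\ \text{km s}^{-1}}{70\ \text{km s}^{-1}\,\text{Mpc}^{-1}} \approx 56\ \text{Mpc} \approx 1.8 \times 10^{8}\ \text{light-years}.$$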
NGC 788
[ "Astronomy" ]
279
[ "Cetus", "Constellations" ]
66,445,316
https://en.wikipedia.org/wiki/Conflict-free%20coloring
Conflict-free coloring is a generalization of the notion of graph coloring to hypergraphs. Definition A hypergraph H has a vertex-set V and an edge-set E. Each edge is a subset of vertices (in a graph, each edge contains at most two vertices, but in a hypergraph, it may contain more than two). A coloring is an assignment of a color to each vertex of V. A coloring is conflict-free if every edge contains at least one vertex whose color is not shared by any other vertex of that edge. If H is a graph, then this condition becomes the standard condition for a legal coloring of a graph: the two vertices adjacent to every edge should have different colors. Applications Conflict-free colorings arise in the context of assigning frequency bands to cellular antennae, in battery consumption aspects of sensor networks and in RFID protocols. Special cases A common special case is when the vertices are points in the plane, and the edges are subsets of points contained in the same disk. In this setting, a coloring of the points is called conflict-free if, for every closed disk D containing at least one point from the set, there is a color that occurs precisely once among the points in D. Every conflict-free coloring of a set of n points in the plane uses at least c log n colors, for an absolute constant c > 0. The same is true not only for disks but also for homothetic copies of any convex body. Another special case is when the vertices are vertices of a graph, and the edges are sets of neighbors. In this setting, a coloring of the vertices is called conflict-free if, for every vertex v, there is a color that is assigned to exactly one vertex among v and its neighbors. In this setting, the conflict-free variant of the Hadwiger conjecture holds: if a graph G does not contain Kk+1 as a minor, then it has a conflict-free coloring with at most k colors. For planar graphs, three colors are sometimes necessary and always sufficient for a conflict-free coloring. It is NP-complete to decide whether a planar graph has a conflict-free coloring with one color, and whether a planar graph has a conflict-free coloring with two colors. External links References Graph coloring Hypergraphs
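The definition above can be stated operationally: given a hypergraph as a collection of edges (each a set of vertices) and a color assigned to each vertex, check that every edge contains some color that occurs exactly once within that edge. The helper below is a naive illustrative sketch, not an implementation from any cited source.

from collections import Counter

def is_conflict_free(edges, coloring):
    # True if every hyperedge contains a color used by exactly one of its vertices.
    for edge in edges:
        color_counts = Counter(coloring[v] for v in edge)
        if not any(count == 1 for count in color_counts.values()):
            return False
    return True

# Example: four vertices, two hyperedges, two colors.
edges = [{1, 2, 3}, {2, 3, 4}]
coloring = {1: 'red', 2: 'blue', 3: 'blue', 4: 'red'}
print(is_conflict_free(edges, coloring))  # True: 'red' occurs exactly once in each edge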
Conflict-free coloring
[ "Mathematics" ]
459
[ "Graph theory stubs", "Graph coloring", "Mathematical relations", "Graph theory" ]
66,445,586
https://en.wikipedia.org/wiki/Young%20Ladies%20Don%27t%20Play%20Fighting%20Games
Young Ladies Don't Play Fighting Games is a Japanese manga series by Eri Ejima. It has been serialized in Media Factory's seinen manga magazine Monthly Comic Flapper since January 2020 and has been collected in eight tankōbon volumes. The manga is licensed in North America by Seven Seas Entertainment. A live-action web drama adaptation aired from May to July 2023. An anime adaptation has been announced. Plot At a prestigious all-girls academy where video games are banned, young schoolgirls share a love of fighting games. Despite the ban, they enter Japan's biggest fighting game tournament. Characters Media Manga The manga series is written and illustrated by Eri Ejima and has been serialized in Media Factory's seinen manga magazine Monthly Comic Flapper since January 4, 2020. Eight tankōbon volumes had been released as of October 2024. Seven Seas Entertainment licensed the manga for a North American release. Drama A live-action web drama adaptation was announced on October 21, 2022. It is directed by Ryoma Ouchida and written by Anna Kawahara. It aired on the Lemino streaming service from May 19 to July 7, 2023 and ran for eight episodes. Anime An anime adaptation was announced on January 21, 2021. References External links 2023 Japanese television series debuts 2023 Japanese television series endings 2020s LGBTQ literature Anime series based on manga Comedy-drama anime and manga Japanese girls' love television series Media Factory manga Anime and manga set in schools Seinen manga Seven Seas Entertainment titles Works about video games Yuri (genre) anime and manga
Young Ladies Don't Play Fighting Games
[ "Technology" ]
311
[ "Works about video games", "Works about computing" ]
66,446,555
https://en.wikipedia.org/wiki/Fluorine%20cycle
The fluorine cycle is the series of biogeochemical processes through which fluorine moves through the lithosphere, hydrosphere, atmosphere, and biosphere. Fluorine originates from the Earth’s crust, and its cycling between various sources and sinks is modulated by a variety of natural and anthropogenic processes. Overview Fluorine is the thirteenth most abundant element on Earth and the 24th most abundant element in the universe. It is the most electronegative element and it is highly reactive. Thus, it is rarely found in its elemental state, although elemental fluorine has been identified in certain geochemical contexts. Instead, it is most frequently found in compounds such as HF and CaF2. The major mechanisms that mobilize fluorine are chemical and mechanical weathering of rocks. Major anthropogenic sources include industrial chemicals and fertilizers, brick manufacturing, and groundwater extraction. Fluorine is primarily carried by rivers to the oceans, where it has a residence time of about 500,000 years. Fluorine can be removed from the ocean by deposition of terrigenous or authigenic sediments, or by subduction of the oceanic lithosphere. Lithosphere The vast majority of the Earth's fluorine is found in the crust, where it is primarily found in hydroxysilicate minerals. Levels of fluorine in igneous rocks vary greatly, and are influenced by the fluorine contents of magma. Likewise, altered oceanic crust exhibits large variability in fluorine; serpentinization zones contain elevated levels of fluorine. Many details concerning the exact mineralogy and distribution of fluorine in the crust are poorly understood, particularly fluorine's abundance in metamorphic rocks, in the mantle, and in the core. Fluorine can be liberated from its crustal reservoirs via natural processes (such as weathering, erosion, and volcanic activity) or anthropogenic processes, such as phosphate rock processing, coal combustion, and brick-making. Anthropogenic contributions to the fluorine cycle are significant, with anthropogenic emissions contributing about 55% of global fluorine inputs. Hydrosphere Fluorine can dissolve into waters as the anion fluoride, whose abundance depends on the composition of the surrounding rocks. This is in contrast to other halogen abundances, which tend to reflect the abundance of other local halogens rather than the local rock composition. Dissolved fluoride is found in low abundances in rainwater, surface runoff, and rivers, and at higher concentrations (74 micromolar) in seawater. Fluorine can also enter surface waters via volcanic plumes. Atmosphere Fluorine can enter the atmosphere via volcanic activity and other geothermal emissions, as well as via biomass burning and wind-blown dust plumes. Additionally, it can come from a wide variety of anthropogenic sources, including coal combustion, brick-making, uranium processing, chemical manufacturing, aluminum production, glass etching, and the microelectronics/semiconductor industry. Fluorine can also enter the atmosphere as a product of reactions between anthropogenically generated atmospheric chemicals (for example, uranium fluoride). Furthermore, fluorine is a component of chlorofluorocarbon gases (CFCs), which were mass-produced throughout the 20th century until the detrimental effects associated with their breakdown into highly reactive chlorine and chlorine oxide species were better understood.
The majority of contemporary studies on atmospheric fluorine focus on hydrogen fluoride (HF) in the troposphere, due to HF gas’s toxicity and high reactivity. Fluorine can be removed from the atmosphere via “wet” deposition, by precipitating out of rain, dew, fog, or cloud droplets, or via “dry” deposition, which refers to any processes that do not involve liquid water, such as adherence to surface materials as driven by atmospheric turbulence. HF can also be removed from the atmosphere via photochemical reactions in the stratosphere. Biosphere Fluorine is an important element for biological systems. From a mammalian health perspective, it is notable as a component of fluorapatite, a key mineral in the teeth of humans that have been exposed to fluorine, as well as shark and fish teeth. In soil, fluorine can act as a source for biological systems and a sink for atmospheric processes, as atmospheric fluorine can leach to considerable depths. References Fluorine Biogeochemical cycle
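The roughly 500,000-year oceanic residence time quoted in the Overview above follows from the usual steady-state definition of residence time; as a minimal sketch (the symbols below are generic and are not values given in the article):

\[
\tau_{\mathrm{F}} \;\approx\; \frac{M_{\mathrm{ocean}}}{F_{\mathrm{in}}} \;\approx\; \frac{M_{\mathrm{ocean}}}{F_{\mathrm{out}}},
\]

where M_ocean is the dissolved fluoride inventory of the ocean and F_in (or F_out) is the riverine input (or removal) flux, taken to be equal at steady state.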
Fluorine cycle
[ "Chemistry" ]
945
[ "Biogeochemical cycle", "Biogeochemistry" ]
61,448,894
https://en.wikipedia.org/wiki/C9H11Cl2N
{{DISPLAYTITLE:C9H11Cl2N}} The molecular formula C9H11Cl2N may refer to: 2,4-Dichloroamphetamine 3,4-Dichloroamphetamine
C9H11Cl2N
[ "Chemistry" ]
53
[ "Isomerism", "Set index articles on molecular formulas" ]
61,448,922
https://en.wikipedia.org/wiki/C4H16Cl3CoN4
{{DISPLAYTITLE:C4H16Cl3CoN4}} The molecular formula C4H16Cl3CoN4 (molar mass: 285.48 g/mol, exact mass: 283.9773 u) may refer to: Cis-Dichlorobis(ethylenediamine)cobalt(III) chloride Trans-Dichlorobis(ethylenediamine)cobalt(III) chloride
C4H16Cl3CoN4
[ "Chemistry" ]
96
[ "Isomerism", "Set index articles on molecular formulas" ]
61,449,452
https://en.wikipedia.org/wiki/Jewel%20of%20Vicenza
The Jewel of Vicenza () was a silver model of the city of Vicenza made as an ex-voto in the 16th century and attributed to the architect Andrea Palladio. The Jewel was stolen by the Napoleonic army during the Italian Campaign in the French Revolutionary Wars and subsequently destroyed. A copy was created between 2012 and 2013. History The precious Jewel was made of silver plates on a wood frame. It was completed in 1578. It is not certain that Andrea Palladio created the model, but the bond between two Bishops of Vicenza (Niccolò Ridolfi and his successor Matteo Priuli) suggests that Palladio's role as director of city life was more important than his role as architect. For him, the Jewel might have represented his mental conception of the city of Vicenza. It can possibly be attributed to a goldsmith of the Capobianco family, as is supported by a document found in 2012 in the "Sanctuary of the Madonna" of Monte Berico. The citizens offered the Jewel as an ex-voto to the "Madonna of Mount Berico" in order to avoid the Plague of Saint Charles Borromeo that had spread two years before in the Duchy of Milan with some infection cases in the western cities of the Republic of Venice and up to Verona. Despite poor conditions, the citizenry united and provided a modest gift from each family. Vicenza was spared (until the great Italian plague of 1629–1631), so the Jewel was initially displayed in the church of Monte Berico next to the "Santuario della Madonna di Monte Berico". Between the 17th and the 18th century, six oil paintings were made that represented the first patron saint of the city, Saint Vincent, holding the silver Jewel in his hands. The paintings that portray Saint Vincent holding the precious model are the main evidence we have of the model's appearance. The paintings show the Jewel from different points of view, which provides information about its three-dimensional form. The model is a main element in each of the pictures; Vicenza is seen from the front, offered by Saint Vincent, and enclosed in its medieval walls. In that period, the distinction between the borghi (boroughs) beyond the walls and the inner part of the city, nowadays the Old Town of Vicenza, already existed. Under the Napoleonic government, the French troops looted Vicenza of cultural artifacts in 1797, as they did throughout most of Italy. The armée française, having seized the Sanctuary, brought the Jewel of Vicenza back to France because they thought it was completely made out of silver. They attempted to melt the model down, but it burned instead, as it was made of wood and only covered by a silver coating. With its destruction, Vicenza lost an important artifact from its centuries-long history of goldsmithing. Reconstruction In May 2010, a Committee for the Jewel of Vicenza was founded with the support of the Office of Cultural Heritage and other local institutions. The committee held a competition for a virtual restoration of the Jewel. The competition was won by the architect Romano Concato from Trissino, who compensated for the absence of original drawings by studying two paintings by Francesco Maffei and two others by Alessandro Maganza. The reconstruction was also developed by referencing medieval planimetrics of the city, the Pianta Angelica, designed by Giovanni Pittoni in 1580. In 2011, the Committee began the second part of its project—a collection of silver donations in order to recreate the model for Vicenza, as happened for the ancient ex-voto. The collection had 66 lb (30 kg) as its minimum target. 
In 2012, more than 110 lb (50 kg) of silver were collected (enough for the project), but ten of the most important goldsmiths of the city went on collecting until 25 December, to create additional funds for the reconstruction and its display. Features The new Jewel of Vicenza is a large round silver tray with a diameter of 58 cm (23 in) that supports more than 300 models of Vicenza buildings. Sixty-one of the models represent buildings of historical importance, such as the Basilica Palladiana, the Cathedral, the Torre Bissara, and dozens of churches. In the center of Piazza dei Signori, a gold model of the Rua was added, unannounced. The reconstruction of the Jewel was designed by proportioning the models using the golden ratio and studying the size of buildings from the time of its original construction so the three-dimensional reconstruction could be as close to the original as possible. The reconstruction started with small sculptures in modelled wax that served as a model for subsequent casting. The cast elements were then finished and embellished with chiseling and engraving. The model is made of 925/1000 silver. The final weight was 33 lb (15 kg); and the total effort took around 2,000 hours of work. Presentation On 15 June 2012 in Piazza dei Signori an official presentation on the reconstruction was held. Afterwards, the work began, combining the craftsmanship of silversmith Carlo Rossi and the sophisticated laser technology offered by a company in Bressanvido. In September 2012, at the Gallerie di Palazzo Leoni Montanari the finished tray was presented with its first complete building, the Church of San Lorenzo. From 6 April to 9 June 2013, halfway through the reconstruction, the Jewel was exhibited at the Diocesan Museum with the support of FAI. In the summer of 2013 the Jewel was completed; it was returned to the citizens for the town's patronal feast on 7 September. The reconstruction was included in the usual procession to the Basilica of St. Mary of Mount Berico in an official ceremony that had 30,000 participants. The Jewel is now permanently housed at the Diocesan Museum and placed next to the painting by Maffei, San Vincenzo with the model of the city of Vicenza. In 2015, the Jewel was exhibited at Expo 2015 in Milan. At the exhibition, silversmith Carlo Rossi was awarded the Confartigianato Design Award 2015 for his creation of the reconstruction. References Bibliography Further reading External links 3D Project Video of Model Reconstruction Andrea Palladio Vicenza Silver sculptures 3D imaging Architectural history Lost sculptures 1578 works 1500s sculptures Stolen works of art Destroyed sculptures Renaissance sculptures
Jewel of Vicenza
[ "Engineering" ]
1,279
[ "Architectural history", "Architecture" ]
61,449,511
https://en.wikipedia.org/wiki/ZTF%20J153932.16%2B502738.8
ZTF J153932.16+502738.8 is a double white dwarf binary with an orbital period of just 6.91 minutes. Its period has been observed to be decreasing, due to the emission of gravitational waves. It is both an eclipsing binary and a double-lined spectroscopic binary. The hotter white dwarf is , and the other one is significantly cooler (<10,000 K). The stars may merge into one in 130,000 years, or if mass transfers between them, they may separate again. Their distance from Earth is estimated at . Stars The brighter star has an effective temperature of , a logarithm of surface gravity of 7.75, and a mass 0.6 times the Sun. Its radius is 0.0156 that of the Sun. The dimmer star is cooler, with a temperature of under , and has a mass 0.21 that of the Sun. It is physically larger than the brighter star at 0.0314 the radius of the Sun. Name ZTF stands for Zwicky Transient Facility. This is a survey of the whole northern sky recording light curves that uses the Samuel Oschin Telescope at Palomar Observatory. Eclipse The light curve shows eclipses. One dip in the light curve is 15%, and the other is close to 100%. This means that one star is much brighter than the other. The light curve is not flat between eclipses, as the bright star is lighting up the face of the dim star. Orbital decay The orbital period is decreasing at seconds per second, giving a characteristic timescale of 210,000 years. This decay is mostly due to the emission of gravitational waves; however, 7% of the decay could be due to tidal losses. The decay is predicted to go on for 130,000 years, when the orbital period should reach 5 minutes. Then the dimmer star is predicted to expand and lose mass to the more massive star. It could then become an AM CVn system or merge to make an R Coronae Borealis star. The orbit compares with V407 Vulpeculae with a 9.5 minute orbit, and HM Cancri with a 5.4 minute orbit. Star composition The hot star is a hydrogen-rich white dwarf of type DA. It has wide and shallow absorption lines of hydrogen. The dim star has narrow hydrogen emission lines, showing it is cooler. There are also helium absorption and emission lines. The two kinds of lines vary over the period, so that they can be identified with the two components. The emission lines are likely due to excess heating of the dim star by the bright one. References Spectroscopic binaries White dwarfs Gravitational-wave astronomy Boötes Eclipsing binaries
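The order-of-magnitude decay figures quoted above (a characteristic timescale of roughly 210,000 years for a 6.91-minute double white dwarf) can be roughly reproduced with the standard Peters (1964) gravitational-wave inspiral formula. The following sketch assumes circular point-mass orbits and the approximate component masses given in the article; it is an illustrative estimate, not the calculation performed in the discovery paper.

```python
import math

# Physical constants (SI units, approximate)
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg
YEAR = 365.25 * 24 * 3600

def gw_inspiral_time_years(m1_msun, m2_msun, orbital_period_s):
    """Peters (1964) coalescence time for a circular binary, in years."""
    m1, m2 = m1_msun * M_SUN, m2_msun * M_SUN
    m_total = m1 + m2
    # Orbital separation from Kepler's third law
    a = (G * m_total * orbital_period_s**2 / (4 * math.pi**2)) ** (1 / 3)
    t = (5 / 256) * C**5 * a**4 / (G**3 * m1 * m2 * m_total)
    return t / YEAR

# Approximate values from the article: 0.6 and 0.21 solar masses, 6.91-minute period
print(f"{gw_inspiral_time_years(0.6, 0.21, 6.91 * 60):.2g} years")  # roughly 2e5 years
```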
ZTF J153932.16+502738.8
[ "Physics", "Astronomy" ]
553
[ "Boötes", "Astrophysics", "Constellations", "Gravitational-wave astronomy", "Astronomical sub-disciplines" ]
61,450,906
https://en.wikipedia.org/wiki/Jamf
Jamf Holding Corp. is a software company best known for developing Jamf Pro (formerly The Casper Suite), a mobile device management system. History Minneapolis-based Jamf Software was founded in Eau Claire, Wisconsin, by Zach Halmstad, Christopher Thon and Chip Pearson and in 2002 released The Casper Suite. The company name came from Laszlo Jamf, a character in Thomas Pynchon's novel Gravity's Rainbow. Apple's growth in larger environments continued and Jamf developed tools to make Apple devices work in corporate environments. Jamf received a $30 million investment from Summit Partners in 2008. In 2015 Dean Hager was hired as CEO to replace Halmstad and Pearson, who had previously shared those duties. Over a decade after its inception, The Casper Suite was rebranded as Jamf Pro in 2017. IBM selected Jamf Pro to manage their Macs in 2015. Vista Equity Partners acquired a majority of shares in Jamf in December 2017. Jamf acquired three companies in 2018 and 2019 – Orchard and Grove, ZuluDesk and Digita Security – expanding its product portfolio to include identity and authentication management, an education-specific MDM, and endpoint security built for Mac using user behavior analytics. Jamf had a successful IPO on the Nasdaq stock market in July 2020, raising $468 million and valuing the company at around $4.6 billion. In May 2021 Jamf acquired zero-trust software vendor Wandera for $400M. As of December 31, 2022, the company reported servicing approximately 71,000 active customers globally. Furthermore, Jamf's technology is deployed on roughly 30 million Apple devices worldwide. Products Unless otherwise noted, all Jamf products support macOS, iOS, iPadOS, and tvOS. Jamf Pro (mobile device management) Jamf Now (mobile device management) Jamf School (mobile device management): Previously ZuluDesk. Jamf Teacher Jamf Connect (identity management) Jamf Protect (endpoint security) Jamf Safe Internet (internet filter and endpoint security for schools) Jamf Private Access (zero trust security model) Jamf Data Policy (internet filter and data cap) Integration with Microsoft Intune Jamf has a partnership with Microsoft that allows Jamf Pro to communicate with Intune. This partnership extended Microsoft Azure Active Directory and Microsoft Intune to macOS. In 2020, the partnership expanded again to include iOS device compliance. See also List of Mobile Device Management software Unified Endpoint Management Enterprise Mobility Management Bring Your Own Device Mobile Application Management References External links Repository Mobile device management software Proprietary software Software distribution System administration Remote administration software Software companies established in 2002 Software companies based in Minneapolis 2020 initial public offerings Companies listed on the Nasdaq 2002 establishments in Wisconsin Companies based in Minneapolis Software companies of the United States
Jamf
[ "Technology" ]
571
[ "Information systems", "System administration" ]
61,452,352
https://en.wikipedia.org/wiki/Qanats%20of%20Baladeh%20Ferdows
The Qanats of Baladeh Ferdows date from the Sasanian Empire and are located in Ferdows. References Persian developed underground aqueducts Water wells Infrastructure in Iran World Heritage Sites in Iran Buildings and structures in South Khorasan province
Qanats of Baladeh Ferdows
[ "Chemistry", "Engineering", "Environmental_science" ]
49
[ "Hydrology", "Water wells", "Environmental engineering" ]
61,452,780
https://en.wikipedia.org/wiki/C17H17Cl2NO
{{DISPLAYTITLE:C17H17Cl2NO}} The molecular formula C17H17Cl2NO (molar mass: 322.229 g/mol, exact mass: 321.0687 u) may refer to: Diclofensine Fengabine (SL-79,229) Molecular formulas
C17H17Cl2NO
[ "Physics", "Chemistry" ]
71
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
61,453,053
https://en.wikipedia.org/wiki/Ewald%20Prize
In 1986 the International Union of Crystallography (IUCr) established the Ewald Prize for outstanding contributions to the science of crystallography. The Ewald Prize is considered the highest prize available to crystallographers apart from the Nobel Prize. The Ewald Prize has been described as prestigious, acclaimed and coveted. The prize is named after Paul Peter Ewald for his contributions to the founding and leadership of the IUCr. The prize consists of a medal, a certificate and a financial award (US$ 20,000 in 1987). It is presented once every three years during the triennial International Congresses of Crystallography. The first prize was presented during the XIV Congress at Perth, Australia, in 1987. The prize is open to any scientist who has made contributions of exceptional distinction to the science of crystallography, irrespective of nationality, age or experience. The prize may be shared by several contributors to the same scientific achievement. Prize Winners References Crystallography awards
Ewald Prize
[ "Chemistry", "Materials_science" ]
202
[ "Crystallography awards", "Crystallography" ]
61,453,404
https://en.wikipedia.org/wiki/C18H17FN2O
{{DISPLAYTITLE:C18H17FN2O}} The molecular formula C18H17FN2O (molar mass: 296.339 g/mol, exact mass: 296.1325 u) may refer to: Didesmethylcitalopram Fluproquazone Molecular formulas
C18H17FN2O
[ "Physics", "Chemistry" ]
69
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
61,455,123
https://en.wikipedia.org/wiki/R%C3%B6ssler%20Prize
The Rössler Prize, offered by the ETH Zurich Foundation, is a monetary prize that has been awarded annually since 2009 to a promising young tenured professor of the ETH Zurich in the middle of an accelerating career. The prize of 200,000 Swiss Francs is financed by the returns from an endowment made by Max Rössler, an alumnus of the ETH. The prize money has to be used for the research of the laureate. Laureates 2009: Nenad Ban, Microbiology 2010: Gerald Haug, Geology of Climate 2011: Andreas Wallraff, Solid State Physics 2012: Nicola Spaldin, Material Science 2013: Olivier Voinnet, RNA Biology 2014: , Health Sciences and Technology 2015: David J. Norris, Mechanical and Process Engineering 2016: Christophe Copéret, Chemistry and Applied Biosciences 2017: Olga Sorkine-Hornung, Computer Science 2018: Philippe Block, Architecture 2019: Maksym Kovalenko, Inorganic chemistry/Nanotechnology 2020: Paola Picotti, Biology 2021: , Machine Learning 2022: Tanja Stadler, Mathematics and Computational evolutionary biology 2023: , Mathematics 2024: , Robotics See also Science and technology in Switzerland Prizes named after people References External links Academic awards Science and technology awards Swiss awards Awards established in 2009
Rössler Prize
[ "Technology" ]
263
[ "Science and technology awards" ]
61,457,253
https://en.wikipedia.org/wiki/White%20Heat%20Cold%20Logic
White Heat Cold Logic (2008), edited by Paul Brown, Charlie Gere, Nicholas Lambert, and Catherine Mason, is a book about the history of British computer art during 1960–1980. Overview The book includes 29 contributed chapters by a variety of authors. The book was published in 2008 by MIT Press, in hardcover format. It also includes a series foreword by Sean Cubitt, the editor-in-chief of the Leonardo Book Series. Contributors The following authors contributed chapters to the book: Roy Ascott Stephen Bell Paul Brown Stephen Bury Harold Cohen Ernest Edmonds Maria Fernández Simon Ford John Hamilton Frazer Jeremy Gardiner Charlie Gere Adrian Glew Beryl Graham Stan Hayward Graham Howard Richard Ihnatowicz Malcolm Le Grice Tony Longson Brent MacGregor George Mallen Catherine Mason Jasia Reichardt Stephen A. R. Scrivener Brian Reffin Smith Alan Sutcliffe Doron D. Swade John Vince Richard Wright Aleksandar Zivanovic Reviews The book has been reviewed in a number of publications and online, including: Amazon.co.uk. BCS. Furtherfield. Leonardo. Realtime. Wired. See also Event One computer art exhibition (1969) References External links Amazon USA information Amazon UK information 2008 non-fiction books 21st-century history books Art history books Case studies Computer books MIT Press books History of computing Computer art
White Heat Cold Logic
[ "Technology" ]
280
[ "Works about computing", "Computers", "Computer books", "History of computing" ]
61,459,433
https://en.wikipedia.org/wiki/Maltenes
Maltenes are the n-alkane (pentane or heptane)-soluble molecular components of asphalt, which is the residue remaining after petroleum refiners remove other useful derivatives such as gasoline and kerosene from crude oil. Asphaltene compounds are the other primary component of asphalt. Composition As viscous liquids, maltenes consist of heavy, dark-colored asphaltic resins, first acidaffins, second acidaffins, and saturates, combined with lighter colored oils. The resins provide the adhesive qualities in asphalts; the oils are the carrier medium for both the maltene resins and the asphaltene compounds. Maltenes are characterized by their lower molecular weight and their solubility, in comparison with asphaltenes. Using adsorption chromatography in the presence of an acid reagent, maltenes can be separated into four fractions: The polar compounds are highly reactive petroleum resins that act as a colloidal dispersion stabilizer for the asphaltene substrate. First acidaffins are aromatic hydrocarbons, with or without oxygen, nitrogen and sulfur. They provide a chemically compatible dispersing agent for peptized asphaltene. Second acidaffins are straight chain or cyclic unsaturated petro-hydrocarbons (aka olefins). They are somewhat oily and somewhat resinous. Saturates (aka paraffins) are either straight or branch chain saturated hydrocarbons. They are the true oil component of asphalt binder, and function as the gelling agent for the asphalt compounds. Analysis It had long been suspected that asphalt pavement deterioration resulted from chemical reactions of specific asphalt components. In 1959, Fritz Rostler observed: “It is generally recognized that failures of asphalt pavements caused by embrittlement and other changes in physical properties during the aging process are due to chemical reactions of all or some of the asphalt components.” It was Rostler who undertook the necessary research to identify the asphalt components and chemical processes contributing to the aging process. His methodology was to separate the asphalt components by first using sulfuric acid to separate the soluble components, then using an n-pentane solvent to separate the insoluble components. Rostler’s work in the rubber industry led to the development of ASTM Test D-2006-70, which accurately identifies the relationships between the light fraction maltenes, acidaffins and saturates. Although this test has not been updated since 1970, it remains an accurate standard for defining the desirable maltene content distribution in asphalt pavement. In the Rostler Analysis (ASTM Test D-2006-70), where PC represents Polar Compounds, A1 represents First Acidaffins, A2 represents Second Acidaffins, and S represents Saturated Hydrocarbons, the maltene distribution ratio is defined as (PC + A1) / (S + A2), with a recommended minimum of 0.3 and a maximum of 0.6. Rostler determined that the loss of the low-molecular-weight maltene components in asphalt is largely responsible for the cracking and hardening seen in aging pavement. This discovery led to the development of commercial rejuvenators that combine maltene fractions of asphalt with a carrier capable of penetrating asphalt pavements, in order to restore the proper balance of asphalt components. Geochemistry The geochemical composition of maltenes varies according to the crude oil source, with any given maltene fraction representing a wide variety of base elements of different concentrations, which may include, for example, cobalt, chromium, copper, iron, molybdenum, manganese, nickel, strontium, vanadium, or zinc. 
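A minimal illustration of the Rostler maltene distribution ratio described above; the function name and the sample fractions are made up for the example, and the 0.3–0.6 acceptance window is the range quoted in the article.

```python
def maltene_distribution_ratio(pc: float, a1: float, a2: float, s: float) -> float:
    """Rostler-style maltene distribution ratio (PC + A1) / (S + A2).

    pc: polar compounds, a1: first acidaffins, a2: second acidaffins, s: saturates,
    all expressed as percentages (or mass fractions) of the maltene phase.
    """
    return (pc + a1) / (s + a2)

# Illustrative (made-up) fractions; 0.3-0.6 is the range cited for durable binders.
ratio = maltene_distribution_ratio(pc=20.0, a1=15.0, a2=35.0, s=30.0)
print(round(ratio, 3), 0.3 <= ratio <= 0.6)  # 0.538 True
```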
References Asphalt
Maltenes
[ "Physics", "Chemistry" ]
768
[ "Amorphous solids", "Asphalt", "Unsolved problems in physics", "Chemical mixtures" ]
61,459,884
https://en.wikipedia.org/wiki/HERACLES%20%28spacecraft%29
HERACLES (Human-Enhanced Robotic Architecture and Capability for Lunar Exploration and Science) is a planned robotic transport system to and from the Moon by Europe (ESA), Japan (JAXA) and Canada (CSA) that will feature a lander called the European Large Logistic Lander (EL3, or Argonaut), a Lunar Ascent Element, and a rover. The lander can be configured for different operations such as up to 1.5 tons of cargo delivery, sample-returns, or prospecting resources found on the Moon. The system is planned to support the Artemis program and perform lunar exploration using the Lunar Gateway space station as a staging point. As of 2023, the HERACLES project has been superseded by the European Large Logistics Lander (EL3) project, and is no longer active. Project overview The HERACLES architecture was outlined by 2015. ESA approved the HERACLES project in November 2019. Its first mission is expected to launch in 2030. The project will be the next phase of ESA's exploration program Terrae Novae (known as European Exploration Envelope Programme (E3P) before 2021). The HERACLES transport system will leverage the Lunar Gateway as a staging point. The architecture involves dispatching the EL3 lunar lander from Earth aboard an Ariane 64 which would land on the Moon with a disposable descent module. The EL3 lander will have a landing mass of approximately and will be capable of transporting a Canadian robotic rover to explore, prospect potential resources, and load samples up to on the ascent module. The rover would then traverse several kilometers across the Schrödinger basin on the far side of the Moon to explore and collect more samples to load on the next EL3 lander. The ascent module would return each time to the Lunar Gateway, where it would be captured by the Canadian robotic arm and samples transferred to an Orion spacecraft for transport to Earth with the returning astronauts. The ascent module would then be refueled and paired with a new descent module dispatched from Earth. The second and third landings would each have payload available for alternate uses such as testing new hardware, demonstrating technology and gaining experience in operations. The 4th or 5th lander mission will provide a sample return. The project will require the development of a reusable lunar ascent engine, four of which could be clustered to power a reusable crewed or robotic lander in the future. Later missions will include a pressurised rover driven by astronauts and an ascent module for the crew to return to Earth. Key objectives The key objectives of HERACLES include: Preparing for human lunar missions by implementing, demonstrating, and certifying technology elements for human lunar landing, surface operations, and return. Create opportunities for science, particularly sample return. Gain scientific and exploration knowledge, particularly on potential resources. Create opportunities to demonstrate and test technologies and operational procedures for future Mars missions. System elements The HERACLES EL3 lander concept will consist of the Lunar Descent Element (LDE), which will be provided by Japan's JAXA, the ESA-built Interface Element that will house the rover, and the European Lunar Ascent Element (LAE) that will return the samples to the Lunar Gateway. The rover, to be developed by the Canadian Space Agency (CSA), will have a mass of and will feature a "radioisotope power system" that will permit operations during the long and frigid lunar nights. 
The total spacecraft mass will be ≈ including fuel, with a payload of ≈. Reusable ascent engine development Nammo have been awarded a contract to evaluate engine performance requirements and 'find' the best engine design. The engine may be fed by electrically driven pumps, from low pressure propellant tanks, which may enable in-space refueling. See also References External links Short video of the HERACLES transport system at YouTube. 2020s in spaceflight Artemis program Exploration of the Moon Projects established in 2019 European Space Agency European space programmes European Space Agency space probes Japanese space probes Space program of Japan
HERACLES (spacecraft)
[ "Engineering" ]
826
[ "Space programs", "European space programmes" ]
61,460,635
https://en.wikipedia.org/wiki/NGC%201387
NGC 1387 is a lenticular galaxy in the constellation Fornax, in the Fornax Cluster. It was discovered by John Herschel on December 25, 1835. At a distance of 53 million light-years, it is one of the closer members of the Fornax Cluster. It has a magnitude of 10.8, which makes NGC 1387 one of the brighter galaxies in the Fornax Cluster, and it is 60,000 light-years across. It is only 12 arcminutes from the central galaxy NGC 1399, which makes it one of the closest galaxies to NGC 1399. NGC 1387 is an early-type galaxy with a Hubble classification of (R')SAB(s)0. It has a clear, normal-looking non-ansae type bar embedded in a very extensive envelope, which is structureless except for the bar. Observations in 2006 discovered a large nuclear ring around NGC 1387, in a bulge-subtracted 2.2 micron image. Despite their name, early-type galaxies are much older than spiral galaxies, and mostly comprise old, red-colored stars. Very little star formation occurs in these galaxies; the lack of star formation in elliptical galaxies appears to start at the center and then slowly propagates outward. This is an early-type lenticular galaxy, with similar nature to early-type elliptical galaxies. NGC 1387 is rich with globular clusters, with an estimated number of clusters of 406 ± 81. However, unlike similar galaxies NGC 1374 and NGC 1379, which have an almost equal number of blue and red globular clusters, NGC 1387's globular cluster system is mostly composed of red globular clusters, with only a small fraction of blue globular clusters. This may be caused by gravitational interactions with the massive central galaxy NGC 1399, which probably stripped off most of the globular clusters from NGC 1387. The globular clusters of NGC 1387, like globulars in NGC 1379 and NGC 1374, did not show any evidence of multiple populations. References External links Elliptical galaxies Lenticular galaxies Fornax Cluster 1387 13344 Fornax
NGC 1387
[ "Astronomy" ]
450
[ "Fornax", "Constellations" ]
61,460,637
https://en.wikipedia.org/wiki/Labelcode
Labelcode, also known as Label Code, is a unique 4- to 6-digit music label identification code that is assigned by the Gesellschaft zur Verwertung von Leistungsschutzrechten (GVL) in Germany. Since 2017, Labelcode is no longer mandatory. Labelcode is still used on some occasions, for example in CD publishing. Ways to get a Labelcode A Labelcode is only issued by GVL after a first publication has taken place. However, there are several ways to get an LC. The simplest is the way described below: Takeover of the LC of the press shop (keyword: assignment of ancillary rights). After pressing a CD, for example, an application for an LC including a copy of the recordings is sent to GVL. After processing and issuing the LC, the GVL sends the sticker with the LC to the applicant. Only from the second production onwards can the LC be used permanently. In consultation with the GVL, members of the Association of German Musicians (VDM) can receive their own label code via the VDM. This is also possible before pressing a CD for the first time. If you don't want to create your own label, but need an LC to publish a song and you are a member of the "German Rock and Pop Musicians' Association" (www.drmv.de), you can immediately use LC 08248 from the in-house Rockwerk Records. In return, Rockwerk Records retains the GVL royalties recorded upon publication and forwards them to the DRMV, which uses them for its statutory activities. The DRMV member must forego GVL income in this regard. Source: DRMV license agreement for the label code. If the first publication of a label should have a circulation of at least 3,000 physical media, GVL will issue a provisional label code. Prerequisite is the submission of the order confirmation from the press shop. The publication then printed with the provisional LC is subsequently submitted to GVL as a specimen copy. Usage Labelcode was created by GVL on May 1, 1976, and introduced by IFPI in 1977 in order to unmistakably identify the different record labels. The number of countries using the Labelcode is limited (it is mostly used in Germany), and the code given on the item is not always accurate to the label on which the album or single was actually released. As of 2017, the GVL has adopted the internationally recognised ISRC form of sound recording identification which enables the remuneration to be allocated much more precisely than before. Code format Labelcodes should not be confused with catalog numbers. A Labelcode takes the form LC-12345 or LC-100405. Labelcodes were originally 4 digits long, but when 5-digit codes were introduced, a zero was prepended to the old codes. LC-2345 and LC-02345 are therefore the same code. For the newer 6-digit codes, a zero was not prefixed to the existing codes. LC-01303 and LC-101303 are both valid, but reference different labels. A full list of Labelcodes can be found on the GVL Label Recherche website. References Unique identifiers Character encoding Music production
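As a concrete illustration of the numbering rules just described (legacy 4-digit codes are padded to 5 digits with a leading zero, while 6-digit codes are distinct and never padded), the sketch below normalizes and compares label codes. The function name and parsing details are hypothetical and not part of any GVL specification.

```python
import re

def normalize_labelcode(code: str) -> str:
    """Normalize strings such as 'LC 2345' or 'lc-02345' to the canonical 'LC-NNNNN' form.

    Per the rules described above: 4-digit codes are the old form and are padded
    to 5 digits (so LC-2345 == LC-02345), while 5- and 6-digit codes are kept as-is,
    since LC-01303 and LC-101303 refer to different labels.
    """
    match = re.fullmatch(r"(?:LC[-\s]?)?(\d{4,6})", code.strip(), flags=re.IGNORECASE)
    if not match:
        raise ValueError(f"not a valid label code: {code!r}")
    digits = match.group(1)
    if len(digits) == 4:        # legacy 4-digit code: pad with a leading zero
        digits = "0" + digits
    return f"LC-{digits}"

assert normalize_labelcode("LC-2345") == normalize_labelcode("LC 02345")    # same label
assert normalize_labelcode("LC-01303") != normalize_labelcode("LC-101303")  # different labels
```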
Labelcode
[ "Technology" ]
677
[ "Natural language and computing", "Character encoding" ]
61,461,651
https://en.wikipedia.org/wiki/Robert%20Penner
Robert Clark Penner is an American mathematician whose work in geometry and combinatorics has found applications in high-energy physics and more recently in theoretical biology. He is the son of Sol Penner, an aerospace engineer. Biography Robert Clark Penner received his B.S. degree from Cornell University in 1977 and his Ph.D. from the Massachusetts Institute of Technology in 1981, the latter under the direction of James Munkres and David Gabai. In his doctoral studies, he solved a 50 year old problem posed by Max Dehn on the action of the mapping class group on curves and arcs in surfaces, developed combinatorial aspects of Thurston's theory of train tracks and generalized Thurston's construction of pseudo-Anosov maps. After postdoctoral positions at Princeton University and at the Mittag-Leffler Institute, Penner spent most of the period of 1985–2003 at the University of Southern California. From 2004 until 2012, he worked at Aarhus University, where he co-founded with Jørgen Ellegaard Andersen the Center for the Quantum Geometry of Moduli Spaces. Since 2013 Penner has held the position of the René Thom Chair in Mathematical Biology at the Institut des Hautes Etudes Scientifiques. Throughout his career Penner held various visiting positions around the world including Harvard University, Stanford University, Max-Planck-Institut für Mathematik at Bonn, University of Tokyo, Mittag-Leffler Institute, Caltech, UCLA, Fields Institute, University of Chicago, ETH Zurich, University of Bern, University of Helsinki, University of Strasbourg, University of Grenoble, Nonlinear Institute of Nice-Sophia Antipolis. Contributions to mathematics, physics, and biology Penner's research began in the theory of train tracks including a generalization of Thurston's original construction of pseudo-Anosov maps to the so-called Penner-Thurston construction, which he used to give estimates on least dilatations. He then co-discovered the so-called Epstein-Penner decomposition of non-compact complete hyperbolic manifolds with David Epstein, in dimension 3 a central tool in knot theory. Over several years he developed the decorated Teichmüller theory of punctured surfaces including the so-called Penner matrix model, the basic partition function for Riemann's moduli space. Extending the foregoing to orientation-preserving homeomorphisms of the circle, Penner developed his model of universal Teichmüller theory together with its Lie algebra. He discovered combinatorial cocycles with Shigeyuki Morita for the first and with Nariya Kawazumi for the higher Johnson homomorphisms. Penner has also contributed to theoretical biology in joint work with Jørgen E. Andersen et al. discovering a priori geometric constraints on protein geometry, and with Michael S. Waterman, Piotr Sulkowski, Christian Reidys et al. introducing and solving the matrix model for RNA topology. Main journal publications Books with the assistance of J. L. Harer: Combinatorics of Train Tracks, Annals of Mathematical Studies 125, Princeton University Press (1992); second printing (2001). Perspectives in Mathematical Physics, International Press, edited by R. C. Penner and Shing-Tung Yau (1994). Discrete Mathematics--proof techniques and mathematical structures, World Scientific Publishing Company (1999); second printing (2001). Woods Hole Mathematics: perspectives in math and physics, edited by N. Tongring and R. C. Penner, foreword by Raul Bott, World Scientific Publishing Company (2004). 
Groups of Diffeomorphisms-in honor of Shigeyuki Morita on the occasion of his 60th birthday, Advanced Studies in Pure Mathematics 52 (2008), Mathematical Society of Japan, edited by R. C. Penner, D. Kotschick, T. Tsuboi, N. Kawazumi, T. Kitano, Y. Mitsumatsu. Decorated Teichmüller theory, (with a foreword by Yuri I. Manin), QGM Master Class Series, European Mathematical Society, Zürich, 2012, xviii+360 pp. . Topology and K-theory: Lectures by Daniel Quillen, Notes by Robert Penner, Springer-Verlag Lecture Notes in Mathematics (2020) Patents Methods of Digital Filtering and Multi-Dimensional Data Compression Using the Farey Quadrature and Arithmetic, Fan, and Modular Wavelets, US Patent 7,158,569 (granted 2Jan07) Philanthropy In 2018 Penner endowed the Alexzandria Figueroa and Robert Penner Chair at the IHES in memoriam of Alexzandria Figueroa. References 21st-century American mathematicians Combinatorialists American geometers Cornell University alumni Massachusetts Institute of Technology alumni University of Southern California faculty Academic staff of Aarhus University Living people Year of birth missing (living people)
Robert Penner
[ "Mathematics" ]
1,008
[ "Combinatorialists", "Combinatorics" ]
73,781,308
https://en.wikipedia.org/wiki/Wood%E2%80%93Anderson%20seismometer
The Wood–Anderson seismometer (also known as the Wood–Anderson seismograph) is a torsion seismometer developed in the United States by Harry O. Wood and John August Anderson in the 1920s to record local earthquakes in southern California. It photographically records the horizontal motion. The seismometer uses a pendulum mass of 0.8 g, its period is 0.8 seconds, its magnification is 2,800 times, and its damping constant is 0.8. Charles Francis Richter developed the Richter magnitude scale using the Wood–Anderson seismometer. Overview In 1908, geologist Grove K. Gilbert paid Harry Wood $1,000 to draft a map of potentially active faults in northern California and several years later Andrew Lawson, a professor at the University of California, Berkeley, assigned Wood to oversee the University's seismometers, where attention was focused on local earthquakes as well as the distant events that were used (especially by European scientists like Beno Gutenberg) to study the attributes of the Earth's interior. Seismometers that were in use up until that time had been developed and optimized for detecting the long-period seismic waves from distant earthquakes and did not detect local events well. Wood left Berkeley in 1912 and spent several years researching volcano seismology in Hawaii and made contact with Arthur L. Day, the director of the Carnegie Institution's geophysical laboratory, while Day also conducted volcanological research there. Day would serve as Wood's mentor; on his advice, Wood went to work at the Bureau of Standards in Washington, D.C., where he developed a relationship with George Ellery Hale, the director of Carnegie's Mount Wilson Observatory in Pasadena. In March 1921, the Carnegie Institution accepted a proposal from Wood to provide financing for a long-duration program of seismological research in Southern California. As a researcher for the Institute, Wood worked in a partnership with John A. Anderson (an instrument designer and astrophysicist from the Mount Wilson Observatory) to pursue the development of a seismometer that could record the short-period waves from local earthquakes. Their instrument would require the ability to measure the seismic waves with periods from 0.5 to 2.0 seconds, which were considerably shorter than what the existing units were able to detect. In September 1923, with the successful completion of what became known as the Wood-Anderson torsion seismometer, the focus became establishing a network of the instruments throughout the region that would be able to pinpoint earthquake epicenters and eventually allow mapping of the corresponding fault zones. Wood suggested that the Carnegie Institute establish a small network of the units at five locations throughout the region (Pasadena, Mount Wilson, Riverside, Santa Catalina Island, and Fallbrook) and the Institute agreed to move forward with the proposal. Richter magnitude scale Prior to the development of the magnitude scale, the only measure of an earthquake's strength or "size" was a subjective assessment of the intensity of shaking observed near the epicenter of the earthquake, categorized by various seismic intensity scales such as the Rossi-Forel scale. ("Size" is used in the sense of the quantity of energy released, not the size of the area affected by shaking, though higher-energy earthquakes do tend to affect a wider area, depending on the local geology.) In 1883 John Milne surmised that the shaking of large earthquakes might generate waves detectable around the globe, and in 1889 E. 
von Rebeur-Paschwitz observed in Germany seismic waves attributable to an earthquake in Tokyo. In the 1920s Harry Wood and John Anderson developed the Wood–Anderson Seismograph, one of the first practical instruments for recording seismic waves. Wood then built, under the auspices of the California Institute of Technology and the Carnegie Institute, a network of seismographs stretching across Southern California. He also recruited the young and unknown Charles Richter to measure the seismograms and locate the earthquakes generating the seismic waves. In 1931 Kiyoo Wadati showed how he had measured, for several strong earthquakes in Japan, the amplitude of the shaking observed at various distances from the epicenter. He then plotted the logarithm of the amplitude against the distance and found a series of curves that showed a rough correlation with the estimated magnitudes of the earthquakes. Richter resolved some difficulties with this method and then, using data collected by his colleague Beno Gutenberg, he produced similar curves, confirming that they could be used to compare the relative magnitudes of different earthquakes. To produce a practical method of assigning an absolute measure of magnitude required additional developments. First, to span the wide range of possible values, Richter adopted Gutenberg's suggestion of a logarithmic scale, where each step represents a tenfold increase of magnitude, similar to the magnitude scale used by astronomers for star brightness. Second, he wanted a magnitude of zero to be around the limit of human perceptibility. Third, he specified the Wood–Anderson seismograph as the standard instrument for producing seismograms. Magnitude was then defined as "the logarithm of the maximum trace amplitude, expressed in microns", measured at a distance of 100 km. The scale was calibrated by defining a magnitude 0 shock as one that produces (at a distance of 100 km) a maximum amplitude of 1 micron (1 μm, or 0.001 millimeters) on a seismogram recorded by a Wood-Anderson torsion seismometer. Finally, Richter calculated a table of distance corrections, in that for distances less than 200 kilometers the attenuation is strongly affected by the structure and properties of the regional geology. When Richter presented the resulting scale in 1935, he called it (at the suggestion of Harry Wood) simply a "magnitude" scale. "Richter magnitude" appears to have originated when Perry Byerly told the press that the scale was Richter's and "should be referred to as such." In 1956, Gutenberg and Richter, while still referring to "magnitude scale", labelled it "local magnitude", with the symbol ML, to distinguish it from two other scales they had developed, the surface wave magnitude (MS) and body wave magnitude (MB) scales. The Richter magnitude of an earthquake is determined from the logarithm of the amplitude of waves recorded by seismographs (adjustments are included to compensate for the variation in the distance between the various seismographs and the epicenter of the earthquake). The original formula is ML = log10 A − log10 A0(δ), where A is the maximum excursion of the Wood–Anderson seismograph and the empirical function A0 depends only on the epicentral distance of the station, δ. In practice, readings from all observing stations are averaged after adjustment with station-specific corrections to obtain the ML value. References Sources NUREG/CR-1457. Seismology instruments American inventions 1923 introductions 1923 in science
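Under the definition reconstructed above, the local magnitude of a shock recorded on a Wood–Anderson instrument exactly 100 km from the epicenter reduces to a simple logarithm, because −log10 A0 = 3 at that distance by construction (the defined magnitude-0 shock gives a 0.001 mm trace there). The sketch below illustrates only this special case; real determinations apply Richter's full distance-correction table and station corrections, which are omitted here.

```python
import math

def local_magnitude_at_100km(trace_amplitude_mm: float) -> float:
    """Richter local magnitude for a Wood-Anderson trace amplitude measured at 100 km.

    M_L = log10(A) - log10(A0), and by definition -log10(A0) = 3 at 100 km,
    so a 1 mm trace at that distance corresponds to M_L = 3.
    """
    return math.log10(trace_amplitude_mm) + 3.0

print(local_magnitude_at_100km(1.0))    # 3.0
print(local_magnitude_at_100km(0.001))  # 0.0, the defined magnitude-0 shock
```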
Wood–Anderson seismometer
[ "Technology", "Engineering" ]
1,409
[ "Seismology instruments", "Measuring instruments" ]
73,781,432
https://en.wikipedia.org/wiki/Hadronyche%20pulvinator
Hadronyche pulvinator, also known as the Cascade funnel-web spider, is a species of funnel-web spider in the Atracidae family. It is endemic to Australia. It was described in 1927 by Australian arachnologist Vernon Victor Hickman. Distribution and habitat The species occurred in south-east Tasmania; it is now presumed to be extinct under Tasmania's Threatened Species Protection Act 1995. It is known only from the female holotype specimen collected in 1926 from the type locality of Cascades, now a western suburb of Hobart, in the foothills of Mount Wellington. References pulvinator Spiders of Australia Endemic fauna of Australia Arthropods of Tasmania Spiders described in 1927 Taxa named by Vernon Victor Hickman Extinct arachnids Extinct animals of Australia Species known from a single specimen
Hadronyche pulvinator
[ "Biology" ]
167
[ "Individual organisms", "Species known from a single specimen" ]
73,782,713
https://en.wikipedia.org/wiki/Lithium%20hypofluorite
Lithium hypofluorite is an inorganic compound with the chemical formula LiOF. It is a compound of lithium, fluorine, and oxygen. This is a lithium salt of hypofluorous acid, and contains lithium cations (Li+) and hypofluorite anions (OF−). Synthesis The salt theoretically results from the neutralization of hypofluorous acid (HOF) and lithium hydroxide (LiOH). It can also be formed by the action of fluorine on lithium hydroxide. Chemical properties The compound is quite unstable, since it contains oxygen in the oxidation state of 0. It therefore tends to decompose to lithium fluoride and oxygen gas: 2 LiOF → 2 LiF + O2. References Lithium salts Hypofluorites Oxidizing agents
Lithium hypofluorite
[ "Chemistry" ]
156
[ "Lithium salts", "Inorganic compounds", "Redox", "Oxidizing agents", "Salts", "Inorganic compound stubs" ]
73,782,857
https://en.wikipedia.org/wiki/Receptivity%20%28NMR%29
In NMR spectroscopy, receptivity refers to the relative detectability of a particular element. Some elements are easily detected, some less so. The receptivity is a function of the abundance of the element's NMR-responsive isotope and that isotope's gyromagnetic ratio (or equivalently, the nuclear magnetic moment). Some isotopes, tritium for example, have large gyromagnetic ratios but low abundance. Other isotopes, for example 103Rh, are highly abundant but have low gyromagnetic ratios. Widely used NMR spectroscopies often focus on highly receptive elements: 1H, 19F, and 31P. References Nuclear magnetic resonance
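The dependence described above is commonly summarized (in standard NMR reference texts, not in this article itself) by a relative receptivity that scales with the natural abundance and the cube of the gyromagnetic ratio; as a hedged sketch of that convention:

\[
D_X \;\propto\; N_X \,\gamma_X^{3}\, I_X\,(I_X + 1),
\qquad
D_X^{\mathrm{rel}\,(^{1}\mathrm{H})} \;=\; \frac{N_X \,\gamma_X^{3}\, I_X (I_X + 1)}{N_{^{1}\mathrm{H}} \,\gamma_{^{1}\mathrm{H}}^{3}\, \tfrac{1}{2}\left(\tfrac{1}{2}+1\right)},
\]

where N_X is the natural abundance of isotope X, γ_X its gyromagnetic ratio, and I_X its nuclear spin. This is the standard textbook relation rather than a formula stated in the article.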
Receptivity (NMR)
[ "Physics", "Chemistry" ]
146
[ "Nuclear chemistry stubs", "Nuclear magnetic resonance", "Nuclear magnetic resonance stubs", "Nuclear physics" ]
73,784,146
https://en.wikipedia.org/wiki/Charles%20Wilson%20Killam
Charles Wilson Killam (July 20, 1871 – May 12, 1961) was an American architect, engineer, and professor at Harvard University. He was widely recognized for his technical knowledge, architectural theory, educational views, and publications. He was also known for his consulting work for the Harvard Business School and Baker Library as well as his extensive restoration work at Mount Vernon. He was a key contributor to the development of Harvard's School of Architecture and to collegiate architectural education throughout the United States. Killam also took an active role in the planning and development of Cambridge, Massachusetts and served on numerous boards and committees. Additionally, he was an advocate for low-cost and public housing as well as an early advocate for architectural education for women. Early life and education Charles Wilson Killam was born in Charlestown, Massachusetts on July 20, 1871, and grew up in the Hyde Park neighborhood of Boston. He was the son of Horace Wilson Killam from Wilton, New Hampshire and Georgianna Gage from Watertown, Massachusetts. Killam had three sisters and two brothers. Killam attended Hyde Park Grammar Schools at the Henry Grew School, where he completed the school's course of study and graduated in 1885. After graduating from the Grew School, he attended Hyde Park High School. In 1887, during his second year at the high school and at the age of 16, he dropped out to work. Killam's interest in architecture began at an early age and he pursued his studies at home and while traveling extensively through Europe. His father was a practical draftsman during this period and taught evening classes in elementary, mechanical, and architectural drawing at Hyde Park High School. After leaving high school, Killam furthered his architectural education by taking evening classes, but never graduated from high school. Peabody and Stearns After leaving high school in 1887, Killam went to work at the architectural firm of Peabody & Stearns in Boston where he became a draftsman. Robert Swain Peabody, the co-founder of the firm, was an encouraging mentor to Killam and his architectural career. During his 21 years with Peabody & Stearns, Killam advanced his architectural knowledge and furthered his technical expertise in the field. He eventually became the Chief Architectural Engineer for the firm. To further develop his skills, Killam noted how he visited numerous job sites because that was then "the only way to find out, for instance, how to support a terracotta cornice or how to do flashing." Since he was not on the payroll of these jobs, he was able to spend as much or as little time on various aspects of the construction as he wanted. He valued this experience and spent countless hours examining plans in architectural and engineering offices, copying details and specification provisions. Killam stated that his interest covered the whole field of architecture: In 1900, Killam was awarded second prize in the Boston Society of Architects Rotch Travelling Scholarship, and traveled throughout Europe studying architecture. While at the firm, Killam also entered various design competitions such as for the new Young Men's Christian Association (YMCA) building in Hyde Park. Harvard University In 1908, Killam left Peabody & Stearns to begin his academic career as an instructor in architectural construction and engineering at Harvard University. 
He was appointed to strengthen a recognized weakness in architectural engineering at Harvard and first taught a course in the resistance of materials and elementary structural design to address this weakness. Within a year, Killam was appointed assistant professor of architectural construction and taught at Harvard's new School of Architecture when it was founded by Herbert Langford Warren in 1912. Killam became associate professor in 1915, associate professor of architecture in 1921, and professor in 1924. Killam taught his students the adaptation of modern construction techniques to the older styles of design. He was critical of designers of the time who misrepresented the structure of their buildings and gave too much power to engineers. He recognized the importance of integrating the teaching of design and construction and was one of the first to advocate for closer collaboration between the two fields. Killam continuously improved Harvard's department of architecture until it became one of the strongest in the United States. His well-known courses in fundamentals of engineering and construction were extremely thorough, complete, and well arranged to meet the needs of architecture students. In 1917, following the death of Warren, Killam was appointed Acting Dean of the School of Architecture. Despite the challenges of the ongoing war and dwindling enrollment, Killam sought to carry forward Warren's principles while placing greater emphasis on construction. Although the curricula in architecture and landscape architecture remained largely unchanged with Killam as Acting Dean, there was a significant shift where landscape architecture students no longer studied the rudiments of architectural design in the same studios with architecture students. Killam held this position until 1922 when George Harold Edgell was appointed as new Dean of the school. One of Killam's students, Edward Durell Stone, had failed Killam's "Theory of Building Construction" course as a freshman at Harvard. Stone's classmate Walter Harrington Kilham Jr. recalled that Stone "couldn’t take it any longer and had decided to quit the school and go over to the rival MIT." Stone had asked Dean Edgell to be exempt from retaking Killam's course but was denied, and, in response, Stone transferred to MIT. John McAndrew, another classmate of Stone, commented that Killam's course was "a very 'tough and rough' course, the only one in which anyone learned anything at all." At the time of Killam's retirement from Harvard, Dean Joseph Hudnut stated that "Professor Killam has conducted the work in his field with great distinction. He has greatly augmented the efficiency of the instruction in architecture and his methods have been widely copied in other American schools of architecture." Educational views Killam held views on education and the field of architecture that were pioneering for the time. He “welcomed the new styles especially where unusual construction called for applying basic principles of engineering.” He also strongly believed that modern materials and methods of construction should be integrated into styles from the past, particularly the classic and Renaissance forms. While serving as Acting Dean, Killam described Harvard's position on the necessity of courses in history and the fine arts, that the architects of the country should have a broad cultural training before they begin their technical studies. Killam had a curiosity for learning which sustained throughout his life. 
Whenever there was a new and interesting building or design, he made sure to visit it in person. In the early days of commercial flight, he flew to distant locations to examine various structures. He instilled this curiosity in his teaching by actively encouraging his students to explore their architectural interests, and he supported these interests with his own research and materials from outside the classroom. He demanded the same thoroughness of his students that he gave himself and never returned a student's unfinished problem "without his professional correction to the last detail, sharply noted in red ink and colored pencil so that the solution would be clear and direct." He defined the principal function of the architect of the time as "to plan and direct the execution of building projects so as to produce convenient, safe, economical and durable enclosures for our manifold activities." Killam was determined that each student be thoroughly grounded in all methods of building construction, both old and new. In his "Resistance of Materials and Elementary Structural Design" course, Killam demanded that his students gain a sound knowledge of construction by learning how to derive formulae from theory and how to create their own tables and handbooks. His architectural experience convinced him that "a student should not run errands, keep time, or check materials, and that a student does not have any possible time to waste in actual manual labor at the innumerable trades dealing with innumerable materials." Killam was also an advocate and supporter of women's education, particularly in the field of architecture and construction. As early as 1916, Killam lectured at the Cambridge School of Architectural and Landscape Design for Women, which his colleagues Henry Atherton Frost and Bremer Whidden Pond had founded less than a year earlier. He lectured in architectural construction, landscape construction, and criticized graduate theses at the school from 1916 through the 1924 academic year. Killam was dedicated to achieving honest and effective methods of building in architecture. His work helped to combine construction techniques with the art of design in architectural education. Although Killam retired from Harvard before modern architecture was introduced, his goals were eventually recognized within this new approach to teaching architecture. Even after his death, Killam's courses at Harvard continued to be taught without alteration. His methods were fundamental in the work of the school and were considered one of the most persistent and valuable factors in Harvard's educational system. 
He challenged the economic viability of teaching modern design and firmly rejected the expanded role of the architect that Gropius promoted. With the school's faculty overwhelmingly in favor of Gropius, and despite Killam's objections, Conant proceeded to offer Gropius the position in December 1936, and Gropius began work the following spring. Killam remained adamantly opposed to the appointment of Gropius as the school's new chairman and professor of design and disliked the prospect of Gropius bringing a new Bauhaus to Harvard. In protest of this appointment, Killam decided to resign his professorship at Harvard University. In January 1937, after 29 years on the Harvard faculty, Killam retired and became professor emeritus. Too active to accept full-time retirement, Killam continued to serve the School of Design as an advisor while actively participating in the faculty councils. After his resignation in 1937, Killam returned to lecture at the Cambridge School of Architecture and Landscape Architecture, which had formed a partnership with Smith College during his absence. He held this position until the school closed in 1942 and was absorbed by Harvard's Graduate School of Design, at which point he retired for a second time. Throughout his tenure as professor emeritus, Killam continued to work as a consultant on architecture and played a key role in the drafting of building and zoning codes. His span as professor of architecture emeritus from 1937 to 1961 was, at the time, the longest in the history of Harvard's School of Architecture and Graduate School of Design. Cambridge planning and development In addition to his academic career, Killam was an active member of his community, taking on numerous responsibilities and roles within the city of Cambridge, Massachusetts. He was a resident of Cambridge for nearly 50 years, having moved there at the beginning of his academic career at Harvard. He resided at 20 Walker Street in Cambridge before settling at 51 Avon Hill Street, where he lived for over 40 years. Killam was actively involved in matters of building and zoning codes, tenement-house legislation, city planning, unemployment relief, and low-cost housing. He was also a significant figure in bringing the Plan E Charter to Cambridge, which provided for a city council-manager form of government. Killam held various leadership positions in the Cambridge community. He served on the first board of directors for the Cambridge Housing Association when it was formed in 1911. He was elected director of the Cambridge Chamber of Commerce and served as chairman of the Cambridge Housing Authority. Additionally, he served as a member, secretary, president, and chairman of the Cambridge Planning Board, where he contributed to the development of the city and played a crucial role in shaping its growth. His leadership roles in these positions demonstrated his commitment to civic engagement and to the betterment of the city of Cambridge. Cambridge planning board In 1924, Killam was appointed to the Cambridge Planning Board by Mayor Edward W. Quinn and served as president and chairman of the board. The board, while headed by Killam, was responsible for work including the widening of streets to improve traffic and assisting with the Charles River betterment project to improve the Charles River Basin. Killam "[knew] more about Cambridge streets and how to improve traffic conditions than any salaried official in the city."
He also took an active part in drafting the city's new zoning ordinance and was adamantly opposed to the construction of a bridge at Dartmouth Street crossing the Charles River. In 1929, despite being "one of the city’s most efficient commissions," the board resigned as a body. The primary reasons were that the board was often ignored on important city planning issues, its recommendations were given little consideration, and it received minimal cooperation and support from city officials. A year later, in 1930, Richard M. Russell was elected mayor of Cambridge and Killam was appointed to Russell's new Planning Board. This board was responsible for work including improving traffic and parking conditions in the city as well as city planning and economic development. Mayor Russell also appointed Killam as first chairman of the newly formed Cambridge Housing Authority in 1935. However, Killam resigned from the Housing Authority in 1936 because of a difference of opinion with other members of the authority over plans for the local slum clearance project, believing that too much money was being spent on land rather than on economic development. Plan E charter Killam also played a key role in developing a new council-manager form of charter for the city of Cambridge, known there as Plan E. This charter includes a weak mayor elected by the City Council from among its members in addition to an appointed city manager who handles day-to-day city operations. In 1938, Killam traveled throughout the Midwestern United States to research the advantages and disadvantages of this form of charter. He visited cities such as Milwaukee, Cincinnati, and Cleveland, which had recently adopted this form of government. He "visited twenty-one cities and interviewed five mayors, ten city managers, twelve editors, twenty past or present city officials, three labor men, and thirteen officers of citizens’ organizations." During his trip, Killam interviewed notable city leaders such as Harold Hitz Burton, Daniel Hoan, and Charles Phelps Taft II. Upon returning, he strongly recommended that Cambridge adopt this form of council-manager city charter and became a key contributor to its development and implementation by Cambridge in 1940. Over 80 years later, Cambridge still operates under this Plan E charter. Later, in 1946, Killam's views and foresight on traffic congestion led him to oppose the construction of a parking garage under the Boston Common, explaining that it would cater to drivers and greatly increase congestion within the city. He suggested that instead of investing in underground parking areas or highway developments, it would be more beneficial and cost-effective to focus on expanding the city's rapid transit facilities. Massachusetts state housing and building laws Killam was a member of the committee which drafted the Massachusetts town housing law known as the Tenement House Act for Towns (Chapter 635 of 1912), which was passed in amended form into law by the 133rd Massachusetts General Court and adopted and enforced by towns throughout the state. The same committee, with some changes and additions, drafted a law for Massachusetts cities the following year, called the Tenement House Act for Cities (Chapter 786 of 1913). It was passed into law by the 134th Massachusetts General Court and adopted and enforced by cities throughout the state.
In 1913, Killam was appointed by Massachusetts Governor Eugene Foss to a commission to investigate the regulations throughout the Commonwealth relative to the construction, alteration, and maintenance of buildings and to develop a State building law. This commission also worked to investigate building laws and fire conditions in the State of Massachusetts. In 1915, this commission submitted a report which laid out a new state-wide building code relating to fireproofing districts to be adopted and enforced throughout Massachusetts. Despite the extensive work by the commission, this state building code failed legislative approval by the 136th Massachusetts General Court. In 1930, Killam was appointed to the advisory committee which helped the New England Building Officials Conference write a model code for New England. This model code resulted in a new code for Boston. Public and low-cost housing Killam was also an advocate for public and low-cost housing within the city of Cambridge. He believed that such housing projects should prioritize the improvement of living conditions for many people in the future, rather than providing extravagant accommodations for a select few. He argued that eliminating middlemen's profits was crucial in achieving truly low-cost housing. Additionally, Killam believed that housing progress should not be hindered by the inability to immediately provide for the lowest levels of the low-wage group, as this was a relief problem rather than a housing problem. According to Killam, large-scale rental projects were the way forward for successful housing policy. However, he acknowledged that managing such projects would require specialized training and expertise beyond that commonly found in the country. In particular, a manager of a large-scale low-cost housing project must possess skills in dealing with diverse races and social problems, as well as the ability to guide without dictation, and manage a complex team of employees with varied duties. In 1940, Killam wrote a letter to Massachusetts Senator Henry Cabot Lodge regarding the creation of the United States Housing Administration and low-cost housing projects. Killam argued that the government should pay for the amortization and interest of loans for low-cost housing projects instead of relying on income generated by the projects. He also contended that land should not be overly restricted for development to facilitate slum clearances, and subsidies for low-cost housing projects should be economically feasible. He also stated that technical information and practical experience should inform housing policy, and localities should be provided with information to make their own decisions. Lodge read this letter to the 76th U.S. Congress during its third session. Consulting, design, and restoration work Killam's consulting services, structural design, and restoration work were sought by many due to his knowledge and thoroughness in the field. In the early 20th century, Killam designed several residential houses around New England with architects Henry Atherton Frost and Bremer Whidden Pond. Together they designed houses such as the Quincy W. Wales house at 21 Sylvan Avenue in Newton, Massachusetts and the Georgia H. Emery house at 12 Blackberry Lane in the Jaffrey Center Historic District of Jaffrey, New Hampshire. Both houses were featured in House Beautiful. 
Harvard Business School During the 1920s, Killam became the consulting architect and professional advisor for the numerous new buildings being constructed during the expansion of the Harvard Business School. Most notably, Killam was professional advisor for the design competition for the school's new library, and the consulting architect for the school's new Baker Library, which had been designed by McKim, Mead & White and completed in 1927. His contributions to the planning and design of the new buildings at the school made him "one of the most devoted workers behind the scenes" for this project. Killam additionally served as supervising architect along with Wallace Brett Donham for the construction of many of the school's other new buildings. Case method classroom In 1925, in preparation for the Harvard Business School's expansion, Killam and architecture student Harry J. Korslund designed a 177-seat, horseshoe-shaped classroom with 6-inch tiers that would support the case method of teaching. The case method was a new approach to business education that involved a more interactive and participatory format compared to the traditional lecture format. The Harvard Business School played a central role in developing this method and refining the corresponding classroom design. In 1927, when the school moved to Allston, the case method classroom designed by Killam and Korslund was built in the basement of the Baker Library. Although primitive, with poor acoustics and lighting and wooden tablet-arm chairs, this case method classroom was the first deliberate design of a space for business education in the country. Mount Vernon On several occasions between 1932 and 1935, Killam was contracted to advise and perform extensive restoration and structural strengthening work at George Washington's Mount Vernon estate in Virginia. In correspondence with the Mount Vernon Ladies' Association, which considered him a "renowned structural expert," Killam noted that "too much emphasis has been placed upon keeping the externals looking like a prosperous modern estate and too little care and money have been spent in thorough repairs and strengthening." Killam expressed his devotion to the preservation and restoration of the estate through his exchanges with the estate's resident superintendent Harrison Howell Dodge. Killam's work included examining the mansion's structural stress and installing necessary reinforcements, termite-proofing the outer walls with copper, and placing steel beams in the mansion's basement to reinforce its structure; these beams "remain strong and reliable today." Upon completion of his work at the main mansion, Killam claimed the building was "thrice as strong as when originally constructed." In addition to the main mansion, Killam also performed restoration and strengthening work on the other structures on the estate including the barn, quarters, spinning house, banquet hall, gardener's and butler's houses, and the office building. Dorchester Heights Monument In 1934–1935, Killam altered and performed structural rehabilitation of the Dorchester Heights Monument. Under the supervision of the Boston Art Commission, Killam undertook the "first documented program to repair the monument" since its completion in 1902. This monument was originally designed by Peabody & Stearns in 1899, while Killam was working there.
His work on the monument included constructing a new steel and concrete floor below the tower chamber, reinforcing the monument with tie rods and structural framing, and strengthening badly rusted steel beams. In addition, he weatherproofed the structure by adding flashing, protective coatings, and weatherstripping, as well as installing windows and doors in the originally open arches. Other works In 1930, Killam and architect Eleanor Raymond performed a complete renovation of the Little Theatre at the Gloucester School on Rocky Neck in Gloucester, Massachusetts. Together, they expanded the stage, extended the gallery, and added promenades and porches to the facility. In 1935, Robert E. Greenwood, mayor of Fitchburg, Massachusetts, hired Killam as consulting architect for a new high school. Killam was recommended to the school and planning boards by Professor Henry Vincent Hubbard, who was serving as advisor to the school committee at the time. This new building was to replace the old high school, which had burned down in 1934. The new Fitchburg High School, designed by Coolidge, Shepley, Bulfinch and Abbott, was completed in 1937. Marriage and children On August 6, 1894, at the First Baptist Church in Hyde Park, Killam married Amy Edna Whittemore (1871–1942), a classmate from his early education in Hyde Park. Whittemore was born in 1871 in Londonderry, New Hampshire, but grew up and went to school in Hyde Park with Killam. She was the youngest daughter of Henry Joshua Whittemore, a music teacher at Hyde Park High School, and Esther Miranda Goodwin. Together, Killam and Whittemore had four children: Muriel Esther Killam (1895–1988) Horace Goodwin Killam (1896–1989) Roger Wilson Killam (1898–1987) Mary Whittemore Killam (1903–1993) While he devoted much of his time to academic pursuits and professional endeavors, he remained a committed family man, having great affection for his wife and four children, and later, his grandchildren and great-grandchildren. Death Charles Wilson Killam died in a Providence, Rhode Island, hospital on May 19, 1961, at the age of 89. He was living in Rumford, Rhode Island, at the time of his death. He was buried at Shawsheen Cemetery in Bedford, Massachusetts, alongside his wife, who predeceased him. He was survived by two sisters, his four children, and several grandchildren and great-grandchildren. Published works Killam was a prolific and assiduous writer of numerous articles published in professional journals, academic magazines, and periodicals, and authored several texts on architectural construction. These were pioneering in the field of architecture and architectural construction. Despite publication, Killam never regarded his works as being in final form. He would not permit them to be published as hardcover books, believing that this would limit the potential for further development of their content. Killam's 1937 textbook, Notes on Architectural Construction, was widely used in architectural schools throughout the United States and became a core part of their curricula, lectures, and instruction. Some of his notable published articles, works, and reports include: "Bridge Design from the Architect’s Standpoint" – Harvard Engineering Journal. (1909) "The Charles River Bridges" – Harvard Engineering Journal. (1910) "The Relation of a State-Wide Building Code to Housing and Town Planning" – Architectural quarterly of Harvard University.
(1913) "Report Relative to the Construction, Alteration and Maintenance of Buildings" (1915) "Study of Construction in Architectural Education" – The Architectural Forum. (1922) Harvard University's Baker Library Architectural Competition Program (1924) "Apartments and Automobiles" – The Cambridge Tribune. (1928) "Modern Construction and its Possible Determination of Style Forms" – American Institute of Architects: Journal of Proceedings. (1930) "Modern Design as Influenced by Modern Materials" – The Architectural Forum. (1930) "Why Architects Tend to Specify Substitutes for Lumber in Buildings of Today" – American Lumberman. (1930) "Design in its Relation to Construction" – The Journal of the American Institute of Architects. (1935) "Plea for Beauty" – Architect & Engineer. (1935) "Low-Cost Housing In The United States" – Harvard Business Review. (1936) "Architectural Construction Part One: Notes on Architectural Construction" (1937) "School Training for Architecture: Some Pertinent Thoughts on Education" – Pencil Points. (1937) "Appropriations for the United States Housing Administration" – United States of America Senate Congressional Record. (1940) "Are Planners Prepared to Build Our Cities?" – Pencil Points. (1942) "City Planning And Blighted Areas" – Michigan Society of Architects. (1943) "The Education of Practicing Architects" – Journal of The American Institute of Architects. (1949) "Architectural Construction Part Two: Design of Masonry and Foundations" (1950) Accomplishments and positions held Throughout his academic and professional career, Killam held various positions of leadership and served on numerous boards and committees. He was also a member of several clubs and institutions, and collaborated closely with many notable and influential architects and academics of his time. The Massachusetts State Association of Architects awarded Killam with their Certificate of Honor in 1946 and wrote the following about him: Memberships Throughout his life, Killam was member of numerous clubs, associations, societies, and institutes both academic and professional in nature. Some of which include: Became member of the American Institute of Architects (AIA) in 1913. Elected a Fellow of the American Institute of Architects (FAIA) in 1926. American Society of Civil Engineers Boston Architectural Club Boston Society of Architects National Fire Protection Association American Concrete Institute Active member of President Hoover's Conference on Home Building and Home Ownership and the Correlating Committee on Legislation and Administration (1931). Cambridge Club—Elected director of the club in 1928, vice-president in 1934, and president in 1935. Director and President of the Cambridge Taxpayers’ League (1932). Harvard Faculty Club Elected to a fellowship in the American Academy of Art and Sciences. Chairmanships Killam chaired many committees, commissions, and bodies throughout his career. Some notable positions he was chairman for include: Cambridge Public School Association committee on school plant (1910–1911). AIA Basic Building Code Committee (1916). Special commission to revise the building ordinance of Cambridge (1917). Chairman of the Faculty of Architecture at Harvard (1917). Chairman of the Council of the School of Architecture at Harvard (1918). Boston Society of Architects committee of materials and methods (1930). Served as both chairman and director Cambridge Industrial Association Municipal Affairs Committee (1932). AIA committee on structural service (1940). 
Vice-chairman of the AIA committee on building costs and committee on cost of materials (1940). AIA committee on the technical services of the American Institute of Architects (1941). Committees Some of the other notable committees Killam was a member of include: Council and executive committee of the Harvard University School of Engineering (1912–1913). Cambridge Unemployment Relief Committee (1933). American Standards Association committee on methods of testing wood (1940). Representative Killam also acted as a representative for the AIA and other groups on various committees, some of which include: One of fourteen delegates of the Boston AIA Chapter—Joseph Everett Chandler, Ralph Adams Cram, Henry H. Kendall, and Arthur W. Rice were other notable delegates of the chapter. Represented the AIA on the following committees: U.S. Forest Service and American Society for Testing Materials committee on standardization of methods of testing wood (called the American Engineering Standards Committee) (1922). Committee of technical groups and government agencies engaged in the preparation and promulgation of codes and standards relating to the design and construction of buildings (1933). Joint Committee on Standard Specifications for Concrete and Reinforced Concrete (1940). Central Agency Committee, cooperating with The Producers' Council Inc., and the Federal Home Loan Bank Board (1940). Appointments and other positions held Killam was appointed to many positions by various academic and political individuals and held numerous other positions at the city, state, and national level. Some of these appointments and other positions include: Appointed associate of the Harvard University Engineering Journal Board (1912–1913). Appointed to the jury for the national "Better Homes in America" design competition sponsored by General Electric and The Architectural Forum (1935). Ralph T. Walker, Franklin O. Adams, and Eliel Saarinen were also jurors. Judge for the Jordan Marsh Company Architects’ Contest along with Helen Storrow and William Emerson (1935). Director of the Program of Cooperation between the Federal Home Loan Bank Board and AIA to construct well-designed, well-built, well-equipped, low-cost housing (1940). Notes References External links 2022 FAIA Directory of Fellows (Charles W. Killam found in the Chronological Directory on p. 117 and the Alphabetical Directory on p. 394). Collections and Records of Charles W. Killam at the Fred W. Smith National Library for the Study of George Washington. Charles Wilson Killam works at WorldCat library catalog. 1871 births 1961 deaths 20th-century American academics Academics from Massachusetts American civil engineers American Society of Civil Engineers Architects from Boston Architects from Cambridge, Massachusetts Architectural theoreticians Architecture academics Architecture educators Engineering educators Fellows of the American Academy of Arts and Sciences Fellows of the American Institute of Architects Harvard University staff People from Boston People from Cambridge, Massachusetts People from Charlestown, Boston People in building engineering Harvard Graduate School of Design faculty Smith College faculty Peabody and Stearns people Burials at Shawsheen Cemetery
Charles Wilson Killam
[ "Engineering" ]
6,453
[ "Building engineering", "People in building engineering", "American Society of Civil Engineers", "Civil engineering organizations" ]
73,785,837
https://en.wikipedia.org/wiki/Bandonia%20marina
Bandonia marina is a species of fungus in the family Tetragoniomycetaceae, first described as Candida marina in 1962. It is currently the only species in the monotypic genus Bandonia. The species is a marine yeast and is one of the microorganisms that feed on and decompose tarballs in the ocean. References External links Tremellomycetes Fungi described in 1962 Fungus species
Bandonia marina
[ "Biology" ]
86
[ "Fungi", "Fungus species" ]
73,785,999
https://en.wikipedia.org/wiki/Phacidiopycnis%20washingtonensis
Phacidiopycnis washingtonensis is a species of fungus in the family Phacidiaceae, first described by C.L. Xiao & J.D. Rogers in 2005. It is a weak orchard pathogen and a cause of rubbery rot, also known as speck rot, in postharvest apples. First described in northern Germany, the rot affects several apple varieties, including the commercially important Jonagold and Elstar. Losses caused by P. washingtonensis during storage are usually below 1% but can reach 5–10% of apples. P. washingtonensis is a weak canker pathogen of apple trees, but while commercial trees in orchards do not appear to be at risk, crabapple pollinators can be susceptible. The fungus causes small black dots (fruiting bodies) to form on infected twigs and tree branches. Fruiting bodies contain millions of spores, which serve as the source of fruit infection. Speck rot in postharvest apples is characterized by an initial light brown skin discoloration that progresses to a more blackish skin discoloration, with a firm rubbery texture. References External links Fungal plant pathogens and diseases Apple tree diseases Leotiomycetes Fungi described in 2005 Fungus species
Phacidiopycnis washingtonensis
[ "Biology" ]
250
[ "Fungi", "Fungus species" ]
73,786,655
https://en.wikipedia.org/wiki/Truncatella%20hartigii
Truncatella hartigii is a species of parasitic fungus in the family Bartaliniaceae, first described by Karl von Tubeuf in 1888 and given its current name in 1949. It is a parasite of pine needles. It is morphologically similar to Pestalotiopsis funerea, with differences in their conidia. It shows significant antibacterial activity, especially against Enterococcus faecalis. References External links Amphisphaeriales Fungal conifer pathogens and diseases Fungi described in 1949 Fungus species
Truncatella hartigii
[ "Biology" ]
109
[ "Fungi", "Fungus species" ]
73,786,829
https://en.wikipedia.org/wiki/Chaenothecopsis%20penningtonensis
Chaenothecopsis penningtonensis is a resinicolous fungus found on Picea mariana bark flakes. Found in Minnesota and Wisconsin, Chaenothecopsis penningtonensis was newly described in 2020 by ecologists Otto Gockman and Steven Selva. As of 2022, this species has also been observed in Alberta, Canada, by ecologist Jose Maloles. Description Chaenothecopsis penningtonensis sits atop resin on the lower surface of Picea mariana bark flakes. It is dark brown to black in color, lacks a thallus, and has very short apothecia. C. penningtonensis and C. resinicola are the only resinicolous species of Chaenothecopsis found in North America with non-septate spores and short apothecia and asci. Habitat and Geography Chaenothecopsis penningtonensis resides in temperate peatlands within boreal forests, where long, cold and dry winters and short, warm and moist summers occur. Etymology The species epithet, penningtonensis, is derived from the location at which this species was discovered, the Pennington Bog Scientific and Natural Area in Pennington, Minnesota. References Eurotiomycetes Fungi described in 2020 Fungi of the United States Fungi of Canada Fungus species
Chaenothecopsis penningtonensis
[ "Biology" ]
263
[ "Fungi", "Fungus species" ]
73,787,498
https://en.wikipedia.org/wiki/Gibbs%20rotational%20ensemble
The Gibbs rotational ensemble represents the possible states of a mechanical system in thermal and rotational equilibrium at temperature $T$ and angular velocity $\vec{\omega}$. The Jaynes procedure can be used to obtain this ensemble. An ensemble is the set of microstates corresponding to a given macrostate. The Gibbs rotational ensemble assigns a probability $p_i$ to a given microstate characterized by energy $E_i$ and angular momentum $\vec{J}_i$ for a given temperature $T$ and rotational velocity $\vec{\omega}$: $p_i = \frac{e^{-\beta(E_i - \vec{\omega} \cdot \vec{J}_i)}}{Z}$, where $Z$ is the partition function $Z = \sum_i e^{-\beta(E_i - \vec{\omega} \cdot \vec{J}_i)}$. Derivation The Gibbs rotational ensemble can be derived using the same general method as to derive any ensemble, as given by E.T. Jaynes in his 1956 paper Information Theory and Statistical Mechanics. Let $f(x)$ be a function with expectation value $\langle f(x) \rangle = \sum_i p_i f(x_i)$, where $p_i$ is the probability of $x_i$, which is not known a priori. The probabilities obey normalization $\sum_i p_i = 1$. To find $p_i$, the Shannon entropy is maximized, where the Shannon entropy goes as $H = -\sum_i p_i \ln p_i$. The method of Lagrange multipliers is used to maximize $H$ under the expectation-value constraint and the normalization condition, using Lagrange multipliers $\lambda$ and $\mu$, to find $p_i = e^{-\lambda - \mu f(x_i)}$. $\lambda$ is found via normalization and $p_i$ can be written as $p_i = \frac{e^{-\mu f(x_i)}}{Z}$, where $Z$ is the partition function $Z = e^{\lambda} = \sum_i e^{-\mu f(x_i)}$. This is easily generalized to any number of equations via the incorporation of more Lagrange multipliers. Now investigating the Gibbs rotational ensemble, the method of Lagrange multipliers is again used to maximize the Shannon entropy $H$, but this time under the constraints of energy expectation value $\langle E \rangle = \sum_i p_i E_i$ and angular momentum expectation value $\langle \vec{J} \rangle = \sum_i p_i \vec{J}_i$, which gives $p_i$ as $p_i = e^{-\lambda - \mu_1 E_i - \vec{\mu}_2 \cdot \vec{J}_i}$. Via normalization, $p_i$ is found to be $p_i = \frac{e^{-\mu_1 E_i - \vec{\mu}_2 \cdot \vec{J}_i}}{Z}$. Like before, $Z$ and $\lambda$ are given by $Z = e^{\lambda} = \sum_i e^{-\mu_1 E_i - \vec{\mu}_2 \cdot \vec{J}_i}$. The entropy of the system is given by $S = k_B H$, such that $S = k_B \left( \mu_1 \langle E \rangle + \vec{\mu}_2 \cdot \langle \vec{J} \rangle + \ln Z \right)$, where $k_B$ is the Boltzmann constant. The system is assumed to be in equilibrium, follow the laws of thermodynamics, and have fixed uniform temperature $T$ and angular velocity $\vec{\omega}$. The first law of thermodynamics as applied to this system is $d\langle E \rangle = T\,dS + \vec{\omega} \cdot d\langle \vec{J} \rangle$, which gives the entropy differential $dS = \frac{1}{T}\, d\langle E \rangle - \frac{\vec{\omega}}{T} \cdot d\langle \vec{J} \rangle$. Comparing this result with the entropy differential given by entropy maximization, $dS = k_B \left( \mu_1\, d\langle E \rangle + \vec{\mu}_2 \cdot d\langle \vec{J} \rangle \right)$, allows determination of $\mu_1$ and $\vec{\mu}_2$: $\mu_1 = \frac{1}{k_B T} = \beta$ and $\vec{\mu}_2 = -\beta \vec{\omega}$. The probability of a given state can then be written as $p_i = \frac{e^{-\beta(E_i - \vec{\omega} \cdot \vec{J}_i)}}{Z}$, which is recognized as the probability of some microstate given a prescribed macrostate using the Gibbs rotational ensemble. The term $E_i - \vec{\omega} \cdot \vec{J}_i$ can be recognized as the effective Hamiltonian $\mathcal{H}' = \mathcal{H} - \vec{\omega} \cdot \vec{J}$ for the system, which then simplifies the Gibbs rotational partition function to that of a normal canonical system, $Z = \sum_i e^{-\beta \mathcal{H}'_i}$. Applicability The Gibbs rotational ensemble is useful for calculations regarding rotating systems. It is commonly used for describing particle distribution in centrifuges. For example, take a rotating cylinder (height $L$, radius $R$) with fixed particle number $N$, fixed volume $V$, fixed average energy $\langle E \rangle$, and average angular momentum $\langle \vec{J} \rangle$. The expectation value of the number density of particles at radius $r$ can be written as $\langle n(r) \rangle = \sum_i p_i\, n_i(r)$, where $n_i(r)$ is the number density at radius $r$ in microstate $i$. The density of a particle at a given point can be thought of as unity divided by an infinitesimal volume, which can be represented as a delta function, $n_i(\vec{r}) = \sum_{j=1}^{N} \delta^3(\vec{r} - \vec{r}_j)$. Using the Gibbs rotational partition function, the weight of a classical particle of mass $m$ with momentum $\vec{p}$ at position $\vec{r}$ is proportional to $e^{-\beta\left(\frac{p^2}{2m} - \vec{\omega} \cdot (\vec{r} \times \vec{p})\right)}$, and integrating out the momenta (by completing the square) leaves a spatial factor proportional to $e^{\beta m \omega^2 r^2 / 2}$, which finally gives $\langle n(r) \rangle$ as $\langle n(r) \rangle = \frac{N \beta m \omega^2}{2 \pi L \left( e^{\beta m \omega^2 R^2 / 2} - 1 \right)}\, e^{\beta m \omega^2 r^2 / 2}$, which is the expected result: the particle density increases toward the outer wall of the rotating cylinder. Difference between Grand canonical ensemble and Gibbs canonical ensemble The Grand canonical ensemble and the Gibbs canonical ensemble are two different statistical ensembles used in statistical mechanics to describe systems with different constraints. The grand canonical ensemble describes a system that can exchange both energy and particles with a reservoir.
It is characterized by three variables: the temperature (T), chemical potential (μ), and volume (V) of the system. The chemical potential determines the average particle number in this ensemble, which allows for some variation in the number of particles. The grand canonical ensemble is commonly used to study systems with a fixed temperature and chemical potential, but a variable particle number, such as gases in contact with a particle reservoir. On the other hand, the Gibbs canonical ensemble describes a system that can exchange energy but has a fixed number of particles. It is characterized by two variables: the temperature (T) and volume (V) of the system. In this ensemble, the energy of the system can fluctuate, but the number of particles remains fixed. The Gibbs canonical ensemble is commonly used to study systems with a fixed temperature and particle number, but variable energy, such as systems in thermal equilibrium. References Statistical mechanics
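For comparison, the statistical weights of the two ensembles discussed above can be summarized side by side. The following is a minimal LaTeX sketch in standard textbook notation (the symbols $\mathcal{Z}$, $Z$, $\mu$, and $N_i$ are conventions assumed here rather than taken from the article), with $\beta = 1/k_B T$:

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Grand canonical ensemble: the system exchanges both energy and particles with a
% reservoir characterized by temperature T and chemical potential mu.
\begin{align*}
p_i^{\text{grand}} &= \frac{e^{-\beta (E_i - \mu N_i)}}{\mathcal{Z}}, &
\mathcal{Z}(T, V, \mu) &= \sum_i e^{-\beta (E_i - \mu N_i)} \\
% Gibbs (ordinary) canonical ensemble: energy is exchanged with the reservoir,
% but the particle number N is held fixed.
p_i^{\text{can}} &= \frac{e^{-\beta E_i}}{Z}, &
Z(T, V, N) &= \sum_i e^{-\beta E_i}
\end{align*}
% The Gibbs rotational ensemble treated above has the same canonical form, with the
% effective Hamiltonian E_i - omega . J_i in place of E_i.
\end{document}

In the grand canonical weight the chemical potential controls the average particle number, whereas in the canonical weight the particle number enters only as a fixed parameter of the partition function.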
Gibbs rotational ensemble
[ "Physics" ]
818
[ "Statistical mechanics" ]