id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
73,238,226 | https://en.wikipedia.org/wiki/List%20of%20countries%20by%20air%20pollution | The following list of countries by air pollution sorts the countries of the world according to their average measured concentration of particulate matter (PM2.5) in micrograms per cubic meter (μg/m3). The World Health Organization's recommended limit is 5 micrograms per cubic meter, although various national guideline values also exist and are often much higher. Air pollution is among the biggest health problems of modern industrial society and is responsible for more than 10 percent of all deaths worldwide (nearly 4.5 million premature deaths in 2019), according to The Lancet. Air pollution can affect nearly every organ and system of the body and harms nature and humans alike. It is a particularly big problem in emerging and developing countries, where global environmental standards often cannot be met. The data in this list refer only to outdoor air quality and not indoor air quality, which caused an additional two million premature deaths in 2019.
2022 list (UChicago AQLI 2022)
All data are valid for the year 2022 and are taken from the Air Quality Life Index (AQLI) of the University of Chicago. In addition to the particulate matter concentration, the modeled potential loss of life expectancy of the population due to particulate matter pollution is given.
List (2018−2023)
All data are valid for the years 2018–2023 and are taken from the IQAir 2023 World Air Quality Ranking.
References
External links
IQAir World Air Quality Ranking
Pollution
Countries | List of countries by air pollution | [
"Physics",
"Chemistry",
"Mathematics"
] | 307 | [
"Visibility",
"Physical quantities",
"Quantity",
"Particulates",
"Particle technology",
"Wikipedia categories named after physical quantities"
] |
73,240,709 | https://en.wikipedia.org/wiki/C4-FN | C4-FN (C4-fluoronitrile, C4FN) is a perfluorinated compound developed as a high-dielectric gas for high-voltage switchgear. It has the structure (CF3)2CFC≡N, which can be described as perfluoroisobutyronitrile, falling under the category of PFAS, or per- and polyfluoroalkyl substances.
It is promoted as an alternative to sulfur hexafluoride (SF6) for interruption and insulation applications, as its insulation properties are roughly twice those of SF6 and its global warming potential (GWP) is relatively low compared with SF6, the most potent known greenhouse gas. The compound was introduced to the market by 3M under the name Novec 4710 and has been commercialized in high-voltage equipment by General Electric since 2016. The European Commission regards it as a credible alternative to SF6, offering the capability to replace SF6 while keeping the same benefits of dimensional footprint and performance. Several other companies have started using C4-FN mixtures for high-voltage applications, including LS Electric, Hitachi Energy, Hyosung, and Hyundai Electric.
The term C4-FN mixtures refers to the gas mixtures typically used in high-voltage equipment, in which C4-FN is combined with gases of natural origin (O2, CO2, N2).
No applications other than electrical insulation have been reported for C4-FN mixtures. Apart from typical distribution and transmission high-voltage equipment, research has been done on applications within the Large Hadron Collider.
Application to high-voltage equipment
Current applications
C4-FN is usually not used alone as a single gas in high-voltage equipment because its high boiling point would force either the filling pressure or the minimum application temperature to unacceptable levels; the relevant standards usually require operation between -30 °C and +55 °C. It is mostly used mixed with carbon dioxide (CO2), nitrogen (N2), and oxygen (O2), in proportions that vary widely depending on applications and products but typically range from 3.5% to 6% of C4-FN (percentages given in mole fraction).
C4-FN is usually used as a dielectric additive whose content is a compromise between:
The filling pressure of the equipment, which is necessary to ensure current interruption by the circuit breaker.
The required minimum operating temperature of the equipment.
The other gases used (CO2, O2, N2) and their chemical interactions.
Unlike the C5-FK (fluoroketone) technology, C4-FN mixtures are able to cover the needs of network operators. Hitachi Energy announced that it was moving away from C5-FK to focus purely on C4-FN and natural-origin solutions for high-voltage equipment.
C4-FN technology is recent but developing quickly, especially under public pressure to reduce the carbon footprint of the equipment. The use of SF6 in the electrical industry is not well known to the public, and its exemption status is eroding, as shown by the new proposal for the European F-Gas regulation.
Compared with the development of air-blast, SF6, and vacuum technologies, the development of C4-FN has been relatively fast. This is made possible by the reuse of concepts similar to those of other puffer and self-blast technologies:
2014: First publication of the use of C4-FN for high-voltage applications.
2016: First equipment using a C4-FN/CO2 mixture energized in Sellindge. The equipment is rated 420 kV and operates on a 400 kV network.
2017: First equipment using a C4-FN/O2/CO2 mixture energized in Etzel, Switzerland. The equipment is rated 145 kV, 40 kA and operates on a 50 kV network.
2021: First live-tank equipment using a C4-FN/O2/CO2 mixture energized.
2022: Announced date for the availability of a 420 kV, 63 kA GIS.
Higher C4-FN contents have been reported in specific retrofilling applications, i.e., applications in which the gas within commissioned equipment, usually SF6, is replaced by a gas mixture containing C4-FN. Retrofilling designates a form of retrofitting with limited changes to the equipment.
Alternative technologies
In recent years, C4-FN has taken the lead over other gaseous alternatives such as HFO-1234ze and C5F10O (fluoroketone, C5-FK), as several manufacturers have started adopting it. The most significant endorsement was the commitment to the technology by Hitachi Energy in April 2021.
C4-FN mixture technology is today in competition with mixtures of natural-origin gases such as nitrogen (N2), oxygen (O2) and carbon dioxide (CO2), which have a GWP below 1 and lower boiling points, but also lower dielectric and thermal properties that negatively impact the overall performance of the equipment and usually result in bigger apparatuses and greater use of material. Mixtures of natural-origin gases are often used only for insulation, while the interruption function is performed by vacuum interrupters. The scalability of such vacuum interrupters is still subject to discussion and controversy, as the announced portfolios and products above 145 kV have yet to be released.
Regulations
F-Gas
C4-FN is a fluorinated gas and its use can be locally regulated because of its greenhouse effect. As the molecule was only recently introduced to the market for high-voltage switchgear, it has not yet been specifically regulated.
The latest proposal for the European F-gas regulation represents a severe setback for C4-FN solutions in high-voltage equipment, as it introduces a hierarchy between three categories of solutions: GWP<10, GWP<2000 and GWP≥2000. The effect of such a regulation would be to relegate C4-FN products to transition solutions until GWP<10 alternatives appear.
Supporters
ENTSO-E has officially stated its support for C4-FN mixtures for high-voltage equipment, as they represent the best solution to quickly remove SF6. The European Distribution System Operators (E.DSO) association also supported the removal of the GWP<10 threshold in the latest F-gas proposal.
The main reasons mentioned in the position papers are that:
C4-FN mixtures appear to be the fastest solution for replacing SF6 in the coming years. Making the technology non-viable in the medium term would likely slow or stop all development and result in new substations being installed with SF6.
The carbon footprint of a C4-FN substation is much lower than that of other alternative solutions such as vacuum or natural-origin gases, according to several life-cycle assessments. The rationale of greenhouse-gas reduction is therefore achieved by the technology, and the regulation's scope should not be limited to the gas alone but should cover the whole product.
There is currently only one supplier of vacuum interrupters able to reach 145 kV: Siemens. Preventing the use of C4-FN gases for high-voltage equipment would therefore create a risk of monopoly in the short and medium term, strongly impacting competition on the market. Additionally, there is a risk that Siemens could not meet the demand resulting from the transfer of production from other manufacturers.
Several European grants supported the development of C4-FN solutions through the LIFE programme.
Criticisms
C4-FN is still criticized for relying mostly on manufacturers' data. Two topics are discussed: reliability and environment, health and safety (EHS).
The reliability discussion mostly concerned whether the existing IEC and IEEE standards are applicable to fully qualify the performance of C4-FN mixtures. These concerns have been partially addressed by CIGRE, which published several technical brochures from working groups that investigated the phenomena, reliability and testing procedures for C4-FN mixtures. In the meantime, the IEC and IEEE organizations started working on new or revised standards. ETH Zurich has also contributed by investigating key properties of the gas and its mixtures.
Regarding the EHS aspects, the main point of discussion is the toxicology of C4-FN, which has been studied almost exclusively by the producer (3M) and OEMs (GE Grid, Hitachi Energy, etc.). The molecule is now registered under REACH with the CAS no. 42532-60-5. It should not be confused with its isomer, heptafluorobutyronitrile, which is toxic (CAS no. 375-00-8). The study of pure, mixed, and arced C4-FN should certainly continue in order to consolidate knowledge about the risks; several parties, mainly research teams, have started such work.
Siemens Energy has regularly criticized C4-FN technology, which is in direct competition with its own vacuum and synthetic-air technology.
Physical and other properties
Impact on the environment
C4-FN is a greenhouse gas with a global warming potential (GWP) estimated at 2750 over 100 years, although various studies give other values: 1490, 2100 and 3646. Its atmospheric lifetime is estimated to be on the order of 30 years. It is therefore much better than SF6, whose GWP100 is 24300, but also much higher than CO2 (GWP 1) or air (GWP 0).
C4-FN mixtures have a much lower GWP than pure C4-FN because the molar fraction of the fluorinated gas is relatively low. When mixed with O2, CO2, or N2 in the range of reported applications, the GWP100 of the complete mixture is usually in the range of 300-500. Additionally, the GWP is a CO2 equivalent per unit of mass, and since C4-FN mixtures are typically about 50% lighter than SF6, the CO2-equivalent reduction is in the range of 98.7-99.3% compared to SF6 at identical volume.
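The 98.7-99.3% figure follows directly from the mass basis of the GWP metric. The short calculation below is a minimal sketch of that arithmetic: the GWP values are the ones quoted in this article, while the 50% density ratio is the rough "50% lighter" figure stated above, an assumption rather than a measurement for any specific product.

```python
# Rough check of the CO2-equivalent reduction of a C4-FN mixture versus SF6
# when both fill the same volume. GWP values are those quoted in the article;
# the density ratio is an assumed round figure, not a product-specific value.

GWP_SF6 = 24300                  # GWP100 of SF6 (CO2-eq per kg)
GWP_MIXTURE_RANGE = (300, 500)   # typical GWP100 range of C4-FN mixtures
DENSITY_RATIO = 0.5              # mixture mass / SF6 mass at identical volume

for gwp_mix in GWP_MIXTURE_RANGE:
    relative_co2 = DENSITY_RATIO * gwp_mix / GWP_SF6  # CO2-eq relative to SF6
    print(f"GWP_mix = {gwp_mix}: reduction = {1.0 - relative_co2:.1%}")

# Prints roughly 99.4% and 99.0%, consistent with the 98.7-99.3% range once
# product-specific densities and filling pressures are taken into account.
```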
Nevertheless, C4-FN, pure or mixed, is a potent greenhouse gas whose emissions must be carefully minimized. This gas is not foreseen in other applications than high-voltage insulation where it provides advantageous GWP reduction in comparison to SF6.
As a fluorinated gas with greenhouse effect, C4-FN could be targeted by regulations like the European F-Gas regulation through the mention of GWP limits.
Thermodynamic and dielectric properties
Pure C4-FN can be described using a Peng-Robinson equation of state. Relatively accurate results have been obtained using the critical point (385.996 K, 2501.524 kPa, 2.6302 mol/L) and an acentric factor of 0.356. Mixtures of C4-FN/O2/CO2 have been described in the literature and recently updated by two equipment manufacturers.
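The critical parameters and acentric factor quoted above are sufficient to write down the Peng-Robinson equation of state for pure C4-FN. The snippet below is a minimal textbook implementation using those values; it is an illustrative sketch only, not a substitute for the fitted models published by equipment manufacturers.

```python
import numpy as np

R = 8.314462618   # universal gas constant, J/(mol*K)

# Critical parameters and acentric factor for pure C4-FN as quoted above
TC = 385.996      # critical temperature, K
PC = 2501.524e3   # critical pressure, Pa
OMEGA = 0.356     # acentric factor

# Standard Peng-Robinson parameters derived from the critical point
A_C = 0.45724 * R**2 * TC**2 / PC
B = 0.07780 * R * TC / PC
KAPPA = 0.37464 + 1.54226 * OMEGA - 0.26992 * OMEGA**2

def pr_pressure(temperature, molar_volume):
    """Pressure (Pa) of pure C4-FN from the Peng-Robinson equation of state."""
    alpha = (1.0 + KAPPA * (1.0 - np.sqrt(temperature / TC))) ** 2
    return (R * temperature / (molar_volume - B)
            - A_C * alpha / (molar_volume**2 + 2.0 * B * molar_volume - B**2))

# Example: a gas-like state at 20 degC and 5 L/mol gives a few hundred kPa
print(pr_pressure(293.15, 5e-3))
```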
The dielectric properties have been investigated in several laboratories under the supervision of ETH Zurich and as part of the CIGRE D1.67 working group.
The measurements show that a C4-FN/CO2 gas mixture containing 20% C4-FN (mole fraction) has a dielectric strength similar to that of SF6 (values at 100 kPa based on AC breakdowns in a uniform field arrangement). Additionally, minor synergies were observed between 0 and 7% C4-FN, with breakdown values higher than a purely linear mixing rule would predict. The AC breakdown voltages also increased linearly with pressure, ensuring good scalability. Detailed values and additional results in weakly non-uniform and strongly non-uniform field arrangements are available in the datasets.
Conclusions in the CIGRE technical brochure mention that the obtained results confirmed the applicability of the existing tests methods (including waveform ratios) and design rules from SF6.
References
Nitriles
Perfluorinated compounds | C4-FN | [
"Chemistry"
] | 2,495 | [
"Nitriles",
"Functional groups"
] |
51,792,164 | https://en.wikipedia.org/wiki/Partnership%20on%20AI | Partnership on Artificial Intelligence to Benefit People and Society, otherwise known as Partnership on AI, is a nonprofit coalition committed to the responsible use of artificial intelligence. Founded in September 2016, PAI (Partnership on AI) brings together members from over 90 companies and non-profits in order to explore best-practice recommendations for the tech community.
History
The Partnership on AI was publicly announced on September 28, 2016 with founding members Amazon, Facebook, Google, DeepMind, Microsoft, and IBM, and with interim co-chairs Eric Horvitz of Microsoft Research and Mustafa Suleyman of DeepMind. By 2019, more than 100 partners from academia, civil society, industry, and nonprofits were member organizations.
In January 2017, Apple head of advanced development for Siri, Tom Gruber, joined the Partnership on AI's board. In October 2017, Terah Lyons joined the Partnership on AI as the organization's founding executive director. Lyons brought to the organization her expertise in technology governance, with a specific focus in machine intelligence, AI, and robotics policy, having formerly served as Policy Advisor to the United States Chief Technology Officer Megan Smith. Lyons was succeeded by Partnership on AI board member Rebecca Finlay as interim executive director. Finlay was named CEO of Partnership on AI on October 26, 2021.
In October 2018, Baidu became the first Chinese firm to join the Partnership.
In November 2020 the Partnership on AI announced the AI Incident Database (AIID), which is a tool to identify, assess, manage, and communicate AI risk and harm.
In August 2021, the Partnership on AI submitted a response to the National Institute of Standards and Technology (NIST). The response provided examples of PAI’s work related to AI risk management, such as the Safety Critical AI report on responsible publication of AI research, the ABOUT ML project on documentation and transparency in machine learning lifecycles, and the AI Incident Database. The response also highlighted how the AI Incident Database involves some of the minimum attributes in NIST’s AI RMF, such as being consensus-driven, risk-based, adaptable, and consistent with other approaches to managing AI risk.
In February 2023, the Partnership on AI (PAI) launched a novel framework aimed at guiding the ethical development and use of synthetic media. This initiative was backed by a variety of initial partners, including notable entities such as Adobe, BBC, CBC/Radio-Canada, Bumble, OpenAI, TikTok, WITNESS, and synthetic media startups Synthesia, D-ID, and Respeecher. The framework, which emphasizes transparency, creativity, and safety, was the result of a year-long collaborative process involving contributions from a wide range of stakeholders, including synthetic media startups, social media platforms, news organizations, advocacy groups, academic institutions, policy professionals, and public commenters.
Mission and Principles
Partnership on AI has a multi-pronged approach to achieving impact. Its initiatives are separated into five programs: AI and media integrity; AI, work, and the economy; justice, transparency, and accountability; inclusive research and design; and safety-critical AI. These programs aim to produce value through specific outputs, methodological tools, and articles.
Through the program on AI & Media Integrity, PAI actively endeavors to establish best practices that ensure AI's positive influence on the global information ecosystem. Recognizing the potential for AI to facilitate harmful online content and amplify existing negative narratives, PAI is committed to mitigating these risks and fostering a responsible AI presence.
The AI, Labor, and the Economy program serves as a collaborative platform, uniting economists, worker representative organizations, and PAI's partners to formulate a cohesive response on how AI can contribute to an inclusive economic future. The recent release of PAI's "Guidelines for AI and Shared Prosperity" on June 7, 2023, outlines a blueprint for the judicious use of AI across various stages, guiding organizations, policymakers, and labor entities.
The Fairness, Transparency, and Accountability program, in conjunction with the Inclusive Research & Design program, strives to reshape the AI landscape towards justice and fairness. By exploring the intersections between AI and fundamental human values, the former establishes guidelines for algorithmic equity, explainability, and responsibility. Simultaneously, the latter empowers communities by providing guidelines on co-creating AI solutions, fostering inclusivity throughout the research and design process.
The Safety Critical AI program addresses the growing deployment of AI systems in pivotal sectors like medicine, finance, transportation, and social media. With a focus on anticipating and mitigating potential risks, the program brings together partners and stakeholders to develop best practices that span the entire AI research and development lifecycle. Notable initiatives include the establishment of the AI incident Database, formulation of norms for responsible publication, and the creation of the innovative AI learning environment SafeLife.
The association is also built on thematic foundations that drive Partnership on AI's focus. On top of the programs mentioned above, Partnership on AI looks to expand the social impact of AI, encouraging positive social utility. The organization has highlighted potential benefits of AI within public welfare, education, sustainability, etc. With these specific use cases, Partnership on AI is developing an ethical framework in which to analyze and measure AI's ethical efficacy. The ethical framework places an emphasis on inclusive participatory practices that enhance equity in AI.
Programs and initiatives
The Partnership on AI has been involved in several initiatives aimed at promoting the responsible use of AI. One of their key initiatives is the development of a framework for the safe deployment of AI models. This framework guides model providers in developing and deploying AI models in a manner that ensures safety for society and can adapt to evolving capabilities and uses.
In collaboration with DeepMind, the Partnership on AI has also launched a study to investigate the high attrition rates among women and minoritized individuals in tech.
Recognizing the importance of explainability in AI, the Partnership on AI hosted a one-day, in-person workshop focused on the deployment of “explainable artificial intelligence” (XAI). This event brought together experts from various industries to discuss and explore the concept of XAI.
In an effort to support information integrity, the Partnership on AI collaborated with First Draft to investigate effective strategies for addressing deceptive content online. This initiative reflects the organization’s methodical approach to identifying and promoting best practices in AI.
The Partnership on AI is also creating resources to facilitate effective engagement between AI practitioners and impacted communities.
In November 2020, the Partnership on AI announced the AI Incident Database (AIID), a project dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. The AIID, which shifted to a new special-purpose independent non-profit in 2022, serves as a valuable resource for understanding and mitigating the potential risks associated with AI.
Most recently, PAI held its 2023 Policy Forum. This event, held in London, gathered diverse stakeholders to explore recent trends in AI policy globally and strategies for ensuring AI safety. During the event, the Partnership on AI (PAI) unveiled its "Guidance for Safe Foundation Model Deployment" for public feedback. This guidance, shaped by the Safety Critical AI Steering Committee and contributions from PAI's worldwide network, offers flexible principles for managing risks linked to large-scale AI implementation. Participants included policymakers, AI professionals, philanthropy and civil society members, and academic experts.
Partners and members
The Board of Directors of the Partnership on AI (PAI) as of 2023 includes:
Jatin Aythora, Vice-Chair of the Board, representing BBC Research & Development.
Ben Coppin from DeepMind.
William Covington, Board Secretary, affiliated with the University of Washington School of Law.
Jerremy Holland, Chair of the Board, from Apple.
Eric Horvitz, Board Chair Emeritus, representing Microsoft.
Angela Kane, Board Treasurer, associated with the Vienna Center for Disarmament and Non-Proliferation.
Lama Nachman from Intel Labs.
Joelle Pineau, Vice-Chair of the Board, representing Meta.
Francesca Rossi, Chair of the Audit Committee, from IBM.
Eric Sears representing the John D. and Catherine T. MacArthur Foundation.
Brittany Smith from OpenAI.
Martin Tisné from AI Collaborative.
Nicol Turner Lee representing the Brookings Institution.
Criticisms
In October 2020, Access Now announced its official resignation from PAI in a letter. Access Now stated that it had found an increasingly smaller role for civil society to play within PAI, and that PAI had not influenced or changed the attitude of member companies or encouraged them to respond to or consult with civil society on a systematic basis. Access Now also expressed its disagreement with PAI's approach to AI ethics and risk assessment, and advocated for an outright ban on technologies that are fundamentally incompatible with human rights, such as facial recognition or other biometric technologies that enable mass surveillance.
References
External links
The AI Incident Database
Artificial intelligence associations
Organizations established in 2016
Existential risk from artificial general intelligence | Partnership on AI | [
"Technology"
] | 1,960 | [
"Existential risk from artificial general intelligence"
] |
51,792,686 | https://en.wikipedia.org/wiki/Lawvere%E2%80%93Tierney%20topology | In mathematics, a Lawvere–Tierney topology is an analog of a Grothendieck topology for an arbitrary topos, used to construct a topos of sheaves. A Lawvere–Tierney topology is also sometimes called a local operator or coverage or topology or geometric modality. They were introduced by F. William Lawvere and Myles Tierney.
Definition
If E is a topos, then a topology on E is a morphism j from the subobject classifier Ω to Ω such that j preserves truth (j ∘ true = true), preserves intersections (j ∘ ∧ = ∧ ∘ (j × j)), and is idempotent (j ∘ j = j).
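In symbols, and using the conventional notation of the topos-theory literature, the three conditions can be displayed as:

```latex
% Axioms for a Lawvere-Tierney topology j : \Omega \to \Omega
\begin{align*}
  j \circ \mathrm{true} &= \mathrm{true}              && \text{(preserves truth)} \\
  j \circ \wedge        &= \wedge \circ (j \times j)  && \text{(preserves intersections)} \\
  j \circ j             &= j                          && \text{(idempotent)}
\end{align*}
```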
j-closure
Given a subobject s: S ↪ A of an object A with classifying map χ_s: A → Ω, the composition j ∘ χ_s classifies another subobject cl(s) of A, of which s is a subobject; cl(s) is said to be the j-closure of s.
Some theorems related to j-closure are (for some subobjects s and w of A):
inflationary property: s ≤ cl(s)
idempotence: cl(cl(s)) = cl(s)
preservation of intersections: cl(s ∧ w) = cl(s) ∧ cl(w)
preservation of order: if s ≤ w then cl(s) ≤ cl(w)
stability under pullback: cl(f*(s)) = f*(cl(s)) for any morphism f: B → A.
Examples
Grothendieck topologies on a small category C are essentially the same as Lawvere–Tierney topologies on the topos of presheaves of sets over C.
References
Topos theory
Closure operators | Lawvere–Tierney topology | [
"Mathematics"
] | 258 | [
"Mathematical structures",
"Closure operators",
"Category theory",
"Order theory",
"Topos theory"
] |
54,616,030 | https://en.wikipedia.org/wiki/Line%20sampling | Line sampling is a method used in reliability engineering to compute small (i.e., rare event) failure probabilities encountered in engineering systems. The method is particularly suitable for high-dimensional reliability problems in which the performance function exhibits moderate non-linearity with respect to the uncertain parameters. The method is suitable for analyzing black-box systems and, unlike the importance sampling method of variance reduction, does not require detailed knowledge of the system.
The basic idea behind line sampling is to refine estimates obtained from the first-order reliability method (FORM), which may be incorrect due to the non-linearity of the limit state function. Conceptually, this is achieved by averaging the result of different FORM simulations. In practice, this is made possible by identifying the importance direction in the input parameter space, which points towards the region which most strongly contributes to the overall failure probability. The importance direction can be closely related to the center of mass of the failure region, or to the failure point with the highest probability density, which often falls at the closest point to the origin of the limit state function, when the random variables of the problem have been transformed into the standard normal space. Once the importance direction has been set to point towards the failure region, samples are randomly generated from the standard normal space and lines are drawn parallel to the importance direction in order to compute the distance to the limit state function, which enables the probability of failure to be estimated for each sample. These failure probabilities can then be averaged to obtain an improved estimate.
Mathematical approach
Firstly the importance direction must be determined. This can be achieved by finding the design point, or the gradient of the limit state function.
A set of samples is generated using Monte Carlo simulation in the standard normal space. For each sample x(i), the probability of failure along the line through x(i) parallel to the importance direction α is defined as:
pf(i) = ∫ I(x(i) + β α) φ(β) dβ
where the indicator function I(·) is equal to one for points contributing to failure and is zero otherwise, α is the importance direction, φ is the probability density function of the standard Gaussian distribution, and β is a real number. In practice the roots of a nonlinear function must be found to estimate the partial probabilities of failure along each line. This is either done by interpolation of a few samples along the line, or by using the Newton–Raphson method.
The global probability of failure is the mean of the probabilities of failure on the lines:
pf = (1/NL) Σ pf(i), with the sum running over i = 1, …, NL,
where NL is the total number of lines used in the analysis and the pf(i) are the partial probabilities of failure estimated along the lines.
For problems in which the dependence of the performance function is only moderately non-linear with respect to the parameters modeled as random variables, setting the importance direction as the gradient vector of the performance function in the underlying standard normal space leads to highly efficient Line Sampling. In general it can be shown that the variance obtained by line sampling is always smaller than that obtained by conventional Monte Carlo simulation, and hence the line sampling algorithm converges more quickly. The rate of convergence is made quicker still by recent advancements which allow the importance direction to be repeatedly updated throughout the simulation, and this is known as adaptive line sampling.
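The procedure described above can be illustrated with a short simulation. The sketch below assumes a simple, mildly non-linear performance function g in standard normal space; the function itself, the sample size, and the use of a bracketing root finder are illustrative choices, not part of any particular published implementation.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

rng = np.random.default_rng(0)

# Illustrative performance function in standard normal space (assumed):
# g(x) <= 0 denotes failure of the system.
def g(x):
    return 3.0 - x[0] - 0.5 * x[1] + 0.05 * x[1] ** 2

dim, n_lines = 2, 100

# Importance direction: negative normalized gradient of g at the origin
eps = 1e-6
grad = np.array([(g(eps * np.eye(dim)[k]) - g(np.zeros(dim))) / eps
                 for k in range(dim)])
alpha = -grad / np.linalg.norm(grad)

partial_pf = []
for _ in range(n_lines):
    x = rng.standard_normal(dim)
    x_perp = x - np.dot(x, alpha) * alpha   # component orthogonal to alpha

    # Distance beta* along the line x_perp + beta*alpha at which g crosses zero
    line = lambda beta: g(x_perp + beta * alpha)
    try:
        beta_star = brentq(line, 0.0, 15.0)       # root assumed to lie in [0, 15]
        partial_pf.append(norm.cdf(-beta_star))   # partial failure probability
    except ValueError:
        partial_pf.append(0.0)                    # no crossing found on this line

print(f"Line sampling estimate of the failure probability: {np.mean(partial_pf):.3e}")
```

For a nearly linear g such as this one, the estimate stays close to the first-order reliability method value Φ(−β), which is exactly the quantity line sampling is designed to refine when the limit state is not linear.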
Industrial application
The algorithm is particularly useful for performing reliability analysis on computationally expensive industrial black box models, since the limit state function can be non-linear and the number of samples required is lower than for other reliability analysis techniques such as subset simulation. The algorithm can also be used to efficiently propagate epistemic uncertainty in the form of probability boxes, or random sets. A numerical implementation of the method is available in the open source software OpenCOSSAN.
See also
Rare event sampling
Curse of dimensionality
Quantitative risk assessment
References
Reliability analysis
Variance reduction | Line sampling | [
"Engineering"
] | 741 | [
"Reliability analysis",
"Reliability engineering"
] |
54,620,481 | https://en.wikipedia.org/wiki/FIRST%20Global%20Challenge | The FIRST Global Challenge is a yearly robotics competition organized by the International First Committee Association. It promotes STEM education and careers for youth and was created by Dean Kamen in 2016 as an expansion of FIRST, an organization with similar objectives.
History
FIRST Global is a trade name for the International First Committee Association, a nonprofit corporation based in Manchester, New Hampshire, with a 501(c)(3) designation from the IRS.
The nonprofit was founded by the co-founder of FIRST, Dean Kamen, with the objective of promoting STEM education and careers in the developing world through Olympics-style robotics competitions. Former US Congressman Joe Sestak was the organization's president in 2017 but left after the 2017 Challenge.
Each year, the FIRST Global Challenge is held in a different city. For example, Mexico City was selected to host the 2018 Challenge after the United States hosted the 2017 edition in Washington, DC. This is a change from FIRST's system of championships, where one city hosts for several years at a time.
In May 2020, it was announced that FIRST Global would not host a traditional challenge in 2020 due to the COVID-19 pandemic and shifted to a remote model.
In 2022, FIRST Global returned to in-person events with the 2022 Challenge in Geneva, Switzerland.
Editions
Washington, D.C. 2017
The 2017 FIRST Global Challenge was held in Washington, D.C., from July 16–18, and the challenge was the use of robots to separate different colored balls, representing clean water and impurities in water, symbolizing the Engineering Grand Challenge (based on the Millennium Development Goal) of improving access to clean water in the developing world. Around 160 teams composed of 15- to 18-year-olds from 157 countries participated, and around 60% of teams were created or led by young women. Six continental teams also participated.
Mexico City 2018
The 2018 FIRST Global Challenge was held in Mexico City from August 15–18. The 2018 Challenge was called Energy Impact and explored the impact of various types of energy on the world and how they can be made more sustainable. In the challenge, robots worked together in teams of three to give cubes to human players, turn a crank, and score cubes in goals in order to generate electrical power. The challenge was based on three Engineering Grand Challenges; making solar energy affordable, making fusion energy a reality, and creating carbon sequestration methods.
Dubai 2019
The 2019 challenge, called Ocean Opportunities, was held in Dubai from October 24–27 and was the first challenge hosted outside of North America. The challenge was themed around clearing the ocean of pollutants, and had two alliances of three teams each attempting to score large and small balls representing pollutants into processing areas and a processing barge. The processing barge had multiple levels, with higher levels worth more points. At the end of the match, robots "docked" with the barge by driving onto or climbing up it, with climbing worth more points. The event was opened by Sheikh Hamdan bin Mohammed Al Maktoum, Crown Prince of Dubai.
Geneva 2022
The 2022 challenge, called Carbon Capture, was held in Geneva from October 13–16. The challenge was themed around removing carbon dioxide (CO2) emissions from the atmosphere. In the Carbon Capture game, six different countries worked together to capture and store black balls representing carbon particles. The storage tower had multiple cantilevered bars that the robots mounted to, with the higher bars worth a greater multiplier. At the end of a match, robots "docked" on the storage tower's base or climbed the bars with their alliance indicator ball. Each match started with a "global alliance" of six countries, then divided into two "regional alliances" each consisting of three countries. The event was opened by Dr. Martina Hirayama, Switzerland State Secretary for Education, Research and Innovation (SERI).
Singapore 2023
The 2023 challenge, called Hydrogen Horizons, was held in Singapore from October 7–10. The challenge is themed around renewable energy with a focus on hydrogen technologies.
Subordinate programs
Global STEM Corps
The Global STEM Corps is a FIRST Global initiative that connects qualified volunteer mentors with students in developing countries to prepare them for competitions.
New Technology Experience
The New Technology Experience (NTE) is an annual component of the FIRST Global Challenge that was added to the organization's offerings in 2021. It was established as a means for the student community to stay current with cutting-edge technology and is integrated with each year's theme. The 2021 NTE was the CubeSat Prototype Challenge. The 2022 NTE, Carbon Countermeasures, was presented in partnership with XPRIZE.
References
External links
Educational organizations based in the United States
Organizations established in 2016
Robotics
Robotics organizations
Technology organizations | FIRST Global Challenge | [
"Engineering"
] | 963 | [
"Robotics",
"Automation"
] |
54,622,321 | https://en.wikipedia.org/wiki/Borylene | A borylene is the boron analogue of a carbene. The general structure is R-B: with R an organic moiety and B a boron atom with two unshared electrons. Borylenes are of academic interest in organoboron chemistry. A singlet ground state is predominant, with boron having two vacant sp2 orbitals and one doubly occupied one. With only one substituent, the boron atom is more electron-deficient than the carbon atom in a carbene. For this reason, stable borylenes are more uncommon than stable carbenes. Some borylenes, such as boron monofluoride (BF) and boron monohydride (BH), the parent compound also known simply as borylene, have been detected by microwave spectroscopy and may exist in stars. Other borylenes exist as reactive intermediates and can only be inferred by chemical trapping.
The first stable terminal borylene complex [(OC)5WBN(SiMe3)2] was reported by Holger Braunschweig et al. in 1998. In this compound a borylene is coordinated to a transition metal. Borylenes are also stabilized as Lewis base adducts, e.g., with an NHC carbene. Other strategies are the use of cyclic alkyl amino carbenes (CAACs) and other Lewis bases, and their use as bis-adducts.
Free borylenes
As discussed above, free borylenes have yet to be isolated, but they have been the subject of a number of computational studies and have been investigated spectroscopically and experimentally. Borylenes B-R (R = H, F, Cl, Br, I, NH2, C2H, Ph) have been observed via microwave or IR spectroscopy at low temperature via elaborate procedures. When generated as reactive intermediates, borylenes have been shown to activate strong C-C single bonds, yielding products analogous to those of an organometallic oxidative addition reaction. Most commonly, these are generated via reduction of an organoborane dichloride, but photolysis of other boranes can also afford short-lived borylene species.
As might be expected, calculations have demonstrated that the HOMO is composed of the nonbonding electrons on boron (nσ-type, sp character). The LUMO and LUMO+1 are empty, orthogonal pπ-type orbitals and are degenerate in energy except in the case where R breaks the symmetry of the molecule, thus lifting the degeneracy. Unlike carbenes, which can exist in either singlet or triplet ground states, calculations have indicated that all yet-studied borylenes have a singlet ground spin state. The smallest singlet-triplet gap was calculated to be 8.2 kcal/mol for Me3Si-B. Aminoborylene (H2NB) is a slight exception to the above paradigm, as the nitrogen lone pair donates into an unoccupied boron p orbital. Thus, there is formally a double bond between boron and nitrogen; the π* combination of this interaction serves as the LUMO+1.
Mono-Lewis base-stabilized borylenes
The first example of a borylene stabilized by a single Lewis base was reported in 2007 and exists as a dimer—a diborene. An (NHC)BBr3 adduct was reduced to generate a probable (NHC)B-H intermediate that subsequently dimerized to form the diborene. A similar species with a boron–boron single bond was also observed. The diborene has an incredibly short boron–boron bond length of 1.560(18) Å, further supporting the assignment of a double bond. DFT and NBO calculations were performed on a model system (with Dipp moieties replaced by H). Although some differences between the calculated and crystal structures were evident, they could primarily be ascribed to distortions from planarity caused by the bulky Dipp groups. The HOMO was calculated to be a B-B π-bonding orbital and the HOMO-1 is of mixed B-H and B-B σ-bonding character. NBO calculations supported the above assessments, as populations for the B-B σ- and π-bonding orbitals were calculated to be 1.943 and 1.382 respectively.
A number of similar compounds have been generated and isolated, and several studies involving putative mono-Lewis base-stabilized borylene intermediates have been reported. However, an isolable example remained elusive until 2014. Bertrand et al. argued that due to boron's electropositivity, and thus its preference to be electron-poor, CAAC (cyclic (alkyl)(amino)carbene) might serve as a better Lewis base than the more commonplace NHC. The (NHC)borane adduct was prepared then reduced with Co(Cp*)2. One equivalent of reductant yielded an aminoboryl radical and a second reduction event led to the desired (CAAC)borylene. Another group followed a similar synthetic strategy using DAC (diamidocarbene); the reduction of a (DAC)borane derivative afforded an analogous (DAC)borylene. Although the C=B=NR2 structure is similar in nature to aminoboraalkenes, an exploration of molecular orbitals gives an entirely different picture: as expected, the HOMO is a bond of π symmetry derived from the donation of boron's lone pair into the empty orbital on carbon. As previously discussed, a nitrogen lone pair donates into an empty boron p-orbital to form a π bond; the out-of-phase combination serves as a high-energy LUMO+2.
The first example of dinitrogen fixation at a p-block element was published in 2018 by Holger Braunschweig et al., whereby one molecule of dinitrogen is bound by two transient mono-Lewis base-stabilized borylene species. The resulting dianion was subsequently oxidized to a neutral compound, and reduced using water.
Bis-Lewis base-stabilized borylenes
Taking inspiration from Robinson's diborene synthesis described above, Bertrand et al. swapped NHC for CAAC and successfully isolated the first bis-Lewis base-stabilized borylene in 2011. Reduction of (CAAC)BBr3 with KC8 in the presence of excess CAAC afforded the bis(CAAC)BH. A labeling study indicated that the H-atom was abstracted from an aryl group associated with the CAAC. Reduction of (CAAC)BBr3 yields the same terminal borylene even in the absence of additional Lewis base, via a mechanism that remains poorly understood. This procedure has also been exploited to form mixed bis-Lewis base-stabilized borylenes. Several other routes have also been proposed. A more novel one employs methyl triflate to abstract a hydride from (CAAC)BH3. Treatment with a Lewis base, followed by triflic acid and KC8, affords the desired (CAAC)(Lewis base)BH. Although the reported case uses only specific Lewis bases, the approach is argued to be highly generalizable. A number of other compounds in this class have been generated using borylene-transition metal complexes as precursors. Treatment of (OC)5M=B-Tp with carbon monoxide or acetonitrile yields the corresponding adducts: (CO)2B-Tp and (MeNC)2B-Tp.
Bonding in these complexes is quite similar to that in mono-Lewis base compounds. At least one π-acceptor ligand is present in all known examples of these compounds, and the B-L bond strength tends to scale with the π-acidity of the Lewis base. Low-energy σ-donation orbitals from the base to boron are present in these compounds, and the π-interaction from boron's lone pair to the Lewis base serves as the HOMO. Calculated electronic structures for a number of borylene complexes were compared with those of their isoelectronic homologues: carbone complexes (CL2) and nitrogen cation complexes ((N+)L2).
Borylene-transition metal complexes
The first transition metal borylene complex, reported by Braunschweig et al., featured a borylene ligand bridging two manganese centers: [μ-BX{(η5-C5H4R)Mn(CO)2}2] (R = H, Me; X = NMe2). The first terminal borylene complex, [(CO)5MBN(SiMe3)2], was prepared by the same group several years later. Two previous structures – [(CO)4Fe(BNMe2)] and [(CO)4Fe{BN(SiMe3)2}] – had been proposed by other groups but were disqualified due to inconsistent 11B-NMR data. A number of diborylene complexes have also been described. The first of these, [(η5-C5Me5)Ir{BN(SiMe3)2}2], was prepared by the photochemical reaction of [(η5-C5Me5)Ir(CO)2] with [(OC)5Cr{BN(SiMe3)2}]. One unusual reaction exhibited by these complexes is coupling of borylene and carbon monoxide ligands. Catenation of an iron borylene complex has generated an iron complex of a tetraboron (B4) chain.
Orbitally, the interactions between transition metals and borylenes tend to be similar to the above Lewis acids and borylenes. A number of computational studies have been performed on these systems. A sample paper from 2000 employed NBO to analyze a series of related complexes. Taking [(CO)4Fe{BN(SiH3)2}] as an example, it was calculated that, as expected, the boron moiety is relatively electron-poor (+0.59 charge). The Fe-B π-bonding orbitals were found to have populations of 0.39 and 0.48, whereas the σ-bonding orbital had 0.61. Thus, the Wiberg bond index of the Fe-B bond was a relatively strong 0.65 (compare: Fe-CO was 0.62 in the same complex). The analogous tungsten complex had a bond index value of 0.82. Overall, the paper concludes that transition metal-borylene bonds are very strong. However, the bonding has strong ionic contributions. Orbital attractions are primarily σ-interactions accompanied by weaker π-interactions. Unlike corresponding metal-carbyne complexes, the bond order in all studied cases was less than 1.
References
Boron compounds
Reactive intermediates | Borylene | [
"Chemistry"
] | 2,274 | [
"Functional groups",
"Octet-deficient functional groups",
"Organic compounds",
"Physical organic chemistry",
"Reactive intermediates"
] |
65,980,009 | https://en.wikipedia.org/wiki/Gene%20regulatory%20circuit | Genetic regulatory circuits (also referred to as transcriptional regulatory circuits) are a concept that evolved from the operon model discovered by François Jacob and Jacques Monod. They are functional clusters of genes that impact each other's expression through inducible transcription factors and cis-regulatory elements.
Genetic regulatory circuits are analogous in many ways to electronic circuits in how they use signal inputs and outputs to determine gene regulation. As with electronic circuits, their organization determines their efficiency; circuits working in series, for example, have been shown to have greater sensitivity of gene regulation. They also use inputs, such as trans- and cis-acting sequence regulators of genes, and outputs, such as gene expression level. Depending on the type of circuit, they respond constantly to outside signals, such as sugar and hormone levels, that determine how the circuit will return to its fixed-point or periodic equilibrium state. Genetic regulatory circuits can also be evolutionarily rewired without loss of the original transcriptional output level. Such rewiring is defined by changes in regulatory-target gene interactions while the regulatory factors and target genes themselves are conserved.
In-silico application
These circuits can be modelled in silico to predict the dynamics of a genetic system. Having constructed a computational model of the natural circuit of interest, one can use the model to make testable predictions about circuit performance. When designing a synthetic circuit for a specific engineering task, a model is useful for identifying the necessary connections and the parameter operating regimes that give rise to a desired functional output. Similarly, when studying a natural circuit, one can use the model to identify the parts or parameter values necessary for a desired biological outcome. In other words, computational modelling and experimental synthetic perturbations can be used to probe biological circuits. However, the structure of a circuit has been shown not to be a reliable indicator of the function that the regulatory circuit provides within the larger cellular regulatory network.
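As a concrete illustration of such in-silico modelling, the sketch below simulates a minimal, hypothetical two-gene circuit in which each gene represses the other through a Hill function. The model form and all parameter values are assumptions chosen for illustration, not measurements of any real circuit.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical mutual-repression ("toggle switch"-like) circuit:
# protein X represses gene Y and protein Y represses gene X.
ALPHA = 10.0   # maximal production rate (arbitrary units, assumed)
GAMMA = 1.0    # first-order degradation rate (assumed)
K = 1.0        # repression threshold (assumed)
N = 2.0        # Hill coefficient, i.e. cooperativity (assumed)

def circuit(t, state):
    x, y = state
    dx = ALPHA / (1.0 + (y / K) ** N) - GAMMA * x   # production of X repressed by Y
    dy = ALPHA / (1.0 + (x / K) ** N) - GAMMA * y   # production of Y repressed by X
    return [dx, dy]

# Two nearly identical initial conditions settle into opposite stable states,
# showing how the circuit's wiring, not just its parts, determines behaviour.
for x0, y0 in [(1.1, 1.0), (1.0, 1.1)]:
    sol = solve_ivp(circuit, (0.0, 50.0), [x0, y0])
    x_end, y_end = sol.y[:, -1]
    print(f"start ({x0}, {y0}) -> steady state ({x_end:.2f}, {y_end:.2f})")
```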
Engineering and synthetic biology
An understanding of genetic regulatory circuits is key in the field of synthetic biology, where disparate genetic elements are combined to produce novel biological functions. These biological gene circuits can be used synthetically to act as physical models for studying regulatory function.
By engineering genetic regulatory circuits, cells can be modified to take information from their environment, such as nutrient availability and developmental signals, and react in accordance with changes in their surroundings. In plant synthetic biology, genetic regulatory circuits can be used to program traits that increase crop plant efficiency by increasing robustness to environmental stressors. Additionally, they are used to produce biopharmaceuticals for medical intervention.
References
Genetics
Gene expression
Systems biology | Gene regulatory circuit | [
"Chemistry",
"Biology"
] | 524 | [
"Genetics",
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry",
"Systems biology"
] |
68,831,440 | https://en.wikipedia.org/wiki/Oncometabolism | Oncometabolism is the field of study that focuses on the metabolic changes that occur in cells that make up the tumor microenvironment (TME) and accompany oncogenesis and tumor progression toward a neoplastic state.
Cells with increased growth and survivability differ from non-tumorigenic cells in terms of metabolism. The Warburg effect describes how cancer cells change their metabolism to become more oncogenic in order to proliferate and eventually invade other tissues in a process known as metastasis.
The chemical reactions associated with oncometabolism are triggered by the alteration of oncogenes, which are genes that have the potential to cause cancer. These genes can be functional and active during physiological conditions, producing normal amounts of metabolites. Their upregulation as a result of DNA damage can result in an overabundance of these metabolites, and lead to tumorigenesis. These metabolites are known as oncometabolites, and can act as biomarkers.
History
In the 1920s, Otto Heinrich Warburg discovered an intriguing bioenergetic phenotype shared by most tumor cells: a higher-than-normal reliance on lactic acid fermentation for energy generation. He is known as the "Father of Oncometabolism". Although the roots of this research field trace back to the 1920s, it was only recently recognized as a field in its own right. Over the last decade, research on cancer progression has focused on the role of shifting metabolic pathways for both cancer and immune cells, leading to an increased interest in characterizing the metabolic alterations that cells undergo in the TME.
Warburg Effect
In the absence of hypoxic conditions (i.e., at physiological levels of oxygen), cancer cells preferentially convert glucose to lactate; Otto H. Warburg believed that this aerobic glycolysis was the key metabolic change in cancer cell malignancy. The term "Warburg effect" was later coined to describe this metabolic shift. Warburg thought the change in metabolism was due to mitochondrial "respiration injury", but this interpretation was questioned by other researchers in 1956, who showed that the intact and functional cytochromes detected in most tumor cells argue clearly against a general mitochondrial dysfunction. Furthermore, Potter et al. and several other authors provided significant evidence that oxidative phosphorylation and a normal Krebs cycle persist in the vast majority of malignant tumors, adding to the growing body of evidence that most cancers exhibit the Warburg effect while maintaining proper mitochondrial respiration. Dang et al. in 2008 provided evidence that the tumor tissue sections used in Warburg's experiments should have been thinner for the oxygen diffusion constants employed, implying that the tissue slices studied were partially hypoxic; the calculated critical diffusion distance was 470 micrometers. As a result, endless debates and discussions about Warburg's discovery took place and have piqued the interest of scientists all over the world, which has helped bring attention to cell metabolism in cancer and immune cells and to the use of modern technology to discover what these pathways are, how they are modified, and which of them are potential therapeutic targets.
Metabolic reprogramming
Carcinogenic cells undergo a metabolic rewiring during oncogenesis, and oncometabolites play an important role. In cancer, there are several reprogrammed metabolic pathways that help cells survive when nutrients are scarce: Aerobic glycolysis, an increase in glycolytic flux, also known as the Warburg effect, allows glycolytic intermediates to supply subsidiary pathways to meet the metabolic demands of proliferating tumorigenic cells. Another studied reprogrammed pathway is gain of function of the oncogene MYC. This gene encodes a transcription factor that boosts the expression of a number of genes involved in anabolic growth via mitochondrial metabolism. Oncometabolite production is another example of metabolic deregulation.
Oncometabolites
Oncometabolites are metabolites whose abundance increases markedly in cancer cells through loss-of-function or gain-of-function mutations in specific enzymes involved in their production; the accumulation of these endogenous metabolites initiates or sustains tumor growth and metastasis. Cancer cells rely on aerobic glycolysis, which is reached through defects in enzymes involved in normal cell metabolism; this allows the cancer cells to meet their energy needs and to divert acetyl-CoA from the TCA cycle to build essential biomolecules such as amino acids and lipids. These defects cause an overabundance of endogenous metabolites, which are frequently involved in critical epigenetic changes and signaling pathways that have a direct impact on cancer cell metabolism.
Epigenetics
Oncometabolite dysregulation and cancer progression are linked to epigenetic changes in cancer cells. Through several mechanisms, D-2-hydroxyglutarate, succinate, and fumarate have been linked to the inhibition of α-KG–dependent dioxygenases; this causes epigenetic changes that affect the expression of genes involved in cell differentiation and the development of malignant characteristics. The group of Timothy A. Chan described a mechanism by which abnormal accumulation of the oncometabolite D-2-hydroxyglutarate in brain tumor samples increased DNA methylation, a process that has been shown to play a key role in oncogenesis. On the other hand, in paraganglioma cells, succinate and fumarate were found to promote methylation of histones, effectively silencing the genes PNMT and KRT19, which are involved in neuroendocrine differentiation and epithelial-mesenchymal transition, respectively.
Biomarkers for cancer detection
The discovery of oncometabolites has ushered in a new era in cancer biology, one that has the potential to improve patient care. New therapeutic targets and reliable markers that exploit vulnerabilities of cancer cells are being used to target either upstream or downstream effectors of these pathways. Oncometabolites can be used as diagnostic biomarkers and may be able to assist oncologists in making more precise decisions in the early stages of tumorigenesis, particularly in predicting more aggressive tumor behavior.
Isocitrate dehydrogenase
The detection of D-2-hydroxyglutarate in glioma patients using proton magnetic resonance spectroscopy (MRS) has been shown to be a noninvasive procedure. The presence of IDH1 or IDH2 mutations was linked to the detection of this oncometabolite 100 percent of the time. IDH2/R140Q is a specific mutation that has shown promising results after its inhibition by the small molecule AGI-6780. Therefore, limiting the supply of D-2-hydroxyglutarate by inhibiting the detected mutant IDH enzymes could be a good therapeutic approach for IDH-mutant cancers.
Succinate dehydrogenase
IHC staining has been shown to be a useful diagnostic tool for prioritizing patients for SDH mutation testing in early stages of cancer; an absence of SDHB in IHC staining is associated with the presence of SDH oncogene mutations. The already commercialized drug decitabine (Dacogen®) could be an effective therapy to repress the migration capacities of SDHB-mutant cells.
Fumarate hydratase
IHC staining for FH is used to detect lack of this protein in patients with papillary renal cell carcinoma type 2. The lack of FH in renal carcinoma cells induces pro-survival metabolic adaptations where several cascades are affected.
Glycine-N-methyltransferase
Downregulation of glycine-N-methyltransferase has been linked to hepatocellular carcinoma and pancreatic cancer, making it a potentially reliable marker for oncogenesis. Patients with early-stage pancreatic cancer and no deletions in GNMT had twice the median overall survival, in months, of patients with GNMT deletions.
Applications
Oncometabolomics
Metabolomics can be applied to oncometabolism, since changes in a cancer's genomic, transcriptomic, and proteomic profiles can result in changes in downstream metabolic pathways. With this information, the pathways and oncometabolites responsible for various diseases can be elucidated. Through the use of this technique, the dysregulation of the pyruvate kinase enzyme in glucose metabolism was discovered in cancer cells. Another commonly used technique is labeling glucose or glutamine with 13C, which has shown that the TCA cycle is used to generate large amounts of fatty acids (phospholipids) and to replenish TCA cycle intermediates. Oncometabolomics need not be applied only to cancer cells; it can also be applied to the cells immediately surrounding them in the TME.
Metabolomics applied to cancer has the potential to significantly improve current oncological treatments and has great diagnostic value, since metabolic changes precede phenotypic changes in cells (and thus tissues and organs), making it suitable for early detection of difficult-to-detect cancers. It also enables more personalized medicine, in which an individual's cancer treatment is customized according to their specific oncometabolite profile, allowing better therapy customization and informed adjustments.
Software and libraries
Ingenuity Pathway Analysis (IPA)
Ingenuity Pathway Analysis (IPA) is a metabolic pathway analysis software package that helps researchers model, analyze, and comprehend complex biological systems by associating specific metabolites with potential metabolic pathways for data analysis. This software has been used by researchers to elucidate regulatory networks on oncometabolites like hydroxyglutarate.
Metabolights
MetaboLights is an open-access database for metabolomics research that collects experimental data from metabolic experiments published in leading journals. Since its initial release in 2012, the MetaboLights repository has seen consistent year-on-year growth. It is a resource that emerged in response to the scientific community's need for easy access to metabolite data.
Research
Cancer research has been ongoing for centuries, trying to elucidate the causes of the disease. As cancer research evolves, the scientific community pays increasing attention to cell metabolism and to how the metabolic needs and changes that cells undergo during carcinogenesis might be targeted. There is growing evidence that metabolic dependencies in cancer are influenced by the tissue environment, making it important to account for the TME in the in vitro and in vivo models used to study oncometabolism in different cancer scenarios.
There is extensive research on the modulation of BET proteins in breast cancer models. These proteins appear to be involved in oncometabolism, and targeting and uncoupling BRD4 activity in carcinogenic cells can stop pro-migratory signals and change cytokine metabolism, particularly that of IL-6. The same group has reported on the importance of exosomes in the TME and how these vesicles, shed by adipocytes, can carry a specific molecular cargo that causes metabolic changes, leading to pro-metastatic changes in the recipient breast cancer cells.
References
Wikipedia Student Program
Oncology
Metabolism | Oncometabolism | [
"Chemistry",
"Biology"
] | 2,349 | [
"Cellular processes",
"Biochemistry",
"Metabolism"
] |
68,834,254 | https://en.wikipedia.org/wiki/Silicification | In geology, silicification is a petrification process in which silica-rich fluids seep into the voids of Earth materials (e.g., rocks, wood, bones, and shells) and replace the original materials with silica (SiO2). Silica is a naturally existing and abundant compound found in organic and inorganic materials, including Earth's crust and mantle. There are a variety of silicification mechanisms. In silicification of wood, silica permeates into and occupies cracks and voids in wood such as vessels and cell walls. The original organic matter is retained throughout the process and will gradually decay through time. In the silicification of carbonates, silica replaces carbonates by the same volume. Replacement is accomplished through the dissolution of original rock minerals and the precipitation of silica. This leads to the removal of original materials from the system. Depending on the structures and composition of the original rock, silica might replace only specific mineral components of the rock. Silicic acid (H4SiO4) in the silica-enriched fluids forms lenticular, nodular, fibrous, or aggregated quartz, opal, or chalcedony that grows within the rock. Silicification happens when rocks or organic materials are in contact with silica-rich surface water, buried under sediments and susceptible to groundwater flow, or buried under volcanic ashes. Silicification is often associated with hydrothermal processes. Temperatures for silicification vary with conditions: in burial or surface-water conditions, silicification can occur at around 25–50 °C, whereas temperatures for siliceous fluid inclusions can be up to 150–190 °C. Silicification could occur during a syn-depositional or a post-depositional stage, commonly along layers marking changes in sedimentation such as unconformities or bedding planes.
Sources of silica
The sources of silica can be divided into two categories: silica in organic materials and silica in inorganic materials. The former is also known as biogenic silica, a ubiquitous material in animals and plants. The latter is abundant because silicon is the second most abundant element in Earth's crust. Silicate minerals are the major components of 95% of presently identified rocks.
Biology
Biogenic silica is the major source of silica for diagenesis. One prominent example is the presence of silica in phytoliths in the leaves of plants, e.g. grasses and Equisetaceae. Some have suggested that the silica present in phytoliths can serve as a defense mechanism against herbivores, since silica in leaves makes them more difficult to digest and so harms the fitness of herbivores. However, evidence on the effects of silica on the wellbeing of animals and plants is still insufficient.
Sponges are another biogenic source of naturally occurring silica in animals. They belong to the phylum Porifera. Siliceous sponges are commonly found within silicified sedimentary layers, for example in the Yanjiahe Formation in South China. Some occur as sponge spicules and are associated with microcrystalline quartz or other carbonates after silicification. Sponges could also be the main source of precipitative beds such as chert beds, or of chert in petrified wood.
Diatoms, an important group of microalgae living in marine environments, contribute significantly to the supply of diagenetic silica. They have cell walls made of silica, known as diatom frustules. Fossils of diatoms are unearthed in some silicified sedimentary rocks, suggesting that diatom frustules were sources of silica for silicification. Examples include silicified limestones of the Miocene Astoria Formation in Washington, silicified ignimbrite in the El Tatio Geyser Field in Chile, and Tertiary siliceous sedimentary rocks in western Pacific deep-sea drill cores. The presence of biogenic silica in various species creates a large-scale marine silica cycle that circulates silica through the ocean. Silica content is therefore high in deep-marine sediments beneath areas of active silica upwelling. In addition, carbonate shells deposited in shallow marine environments enrich the silica content of continental shelf areas.
Geology
The major component of the Earth's upper mantle is silica (SiO2), which makes it the primary source of silica in hydrothermal fluids. SiO2 is a stable component and often appears as quartz in volcanic rocks. Some quartz derived from pre-existing rocks appears as sand and detrital quartz that interact with seawater to produce siliceous fluids. In some cases, silica in siliceous rocks is subjected to hydrothermal alteration and reacts with seawater at certain temperatures, forming an acidic solution for silicification of nearby materials. In the rock cycle, chemical weathering of rocks also releases silica as silicic acid, a by-product. Silica from weathered rocks is washed into waters and deposited in shallow-marine environments.
Mechanisms of silicification
The presence of hydrothermal fluids is essential as a medium for geochemical reactions during silicification. Different mechanisms are involved in the silicification of different materials: in rock materials such as carbonates, replacement of minerals through hydrothermal alteration is common, while the silicification of organic materials such as wood is solely a process of permeation.
Replacement
The replacement of silica involves two processes:
1) Dissolution of rock minerals
2) Precipitation of silica
This can be illustrated by carbonate-silica replacement. Hydrothermal fluids are undersaturated with respect to carbonate and supersaturated with respect to silica. When carbonate rocks come into contact with hydrothermal fluids, these concentration gradients cause carbonate from the original rock to dissolve into the fluid while silica precipitates out of it. The dissolved carbonate is removed from the system, while the precipitated silica recrystallizes into various silica minerals, depending on the silica phase. The solubility of silica depends strongly on the temperature and pH of the environment, with pH 9 as the controlling value: below pH 9, silica precipitates out of the fluid; above pH 9, silica becomes highly soluble.
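As a toy illustration of the pH threshold just described (a simplification; real silica solubility varies continuously with temperature and fluid chemistry), the rule can be expressed as:

# Simplified rule of thumb for silica behaviour in a hydrothermal fluid,
# following the pH 9 threshold described above. Real solubility curves
# are continuous and temperature dependent; this is only illustrative.
def silica_behaviour(ph):
    if ph < 9:
        return "silica tends to precipitate out of the fluid"
    return "silica is highly soluble and remains in solution"

print(silica_behaviour(7.5))   # near-neutral groundwater: precipitation favoured
print(silica_behaviour(10.0))  # strongly alkaline fluid: dissolution favoured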
Permeation
In the silicification of wood, silica dissolves in the hydrothermal fluid and seeps into the lignin of cell walls. Precipitation of silica out of the fluid deposits silica within the voids, especially in the cell walls. Cell materials are broken down by the fluid, yet the structure remains stable due to the development of minerals, and cell structures are slowly replaced by silica. Continuous penetration of siliceous fluids results in different stages of silicification, i.e. primary and secondary. The loss of fluids over time leads to the cementation of silicified wood through late silica addition.
The rate of silicification depends on a few factors:
1) Rate of breakage of original cells
2) Availability of silica sources and silica content in the fluid
3) Temperature and pH of silicification environment
4) Interference of other diagenetic processes
These factors affect the silicification process in several ways. The rate of breakage of the original cells controls the development of the mineral framework, and hence the replacement by silica. The availability of silica directly determines the silica content of the fluid; the higher the silica content, the faster silicification can take place. The same applies to the availability of hydrothermal fluids. The temperature and pH of the environment determine whether conditions allow silicification to occur, and are closely connected to burial depth or association with volcanic events. Other diagenetic processes can sometimes interfere with silicification. The timing of silicification relative to other geological processes can serve as a reference for further geological interpretation.
Examples
Volcanic rocks
In Conception Bay, Newfoundland, on the southeastern coast of Canada, a series of Precambrian to Cambrian volcanic rocks were silicified. The rocks mainly consist of rhyolitic and basaltic flows, with interbedded crystal tuffs and breccia. Regional silicification took place as a preliminary alteration process before other geochemical processes occurred. The silica was sourced from hot siliceous fluids derived from the rhyolitic flows under static conditions. A significant portion of the silica appears in the form of white chalcedonic quartz, quartz veins, and granular quartz crystals. Owing to differences in rock structure, silica replaced different materials in rocks at closely spaced localities.
Metamorphic rocks
In the Semail Nappe of Oman, in the United Arab Emirates, silicified serpentinite has been found. The occurrence of such a geological feature is rather unusual: it is a pseudomorphic alteration in which the serpentinite protolith was silicified. Due to tectonic events, the basal serpentinite was fractured and groundwater permeated along the faults, forming a large-scale circulation of groundwater within the strata. Through hydrothermal dissolution, silica precipitated and crystallized around the voids of the serpentinite; silicification can therefore be seen only along groundwater paths. The silicification of the serpentinite formed under conditions of low groundwater flow and low carbon dioxide concentration.
Carbonates
Silicified carbonates can appear as silicified carbonate rock layers or as silicified karsts. The Paleogene Madrid Basin in central Spain, a foreland basin that resulted from the Alpine uplift, is an example of silicified carbonates in rock layers. The lithology consists of carbonate and detrital units that were formed in a lacustrine environment. The rock units are silicified, with chert, quartz, and opaline minerals found in the layers. They are conformable with the underlying evaporitic beds, which date from a similar age. Two stages of silicification are recognized within the rock strata; the earlier stage provided a better condition and site for the precipitation of silica. The source of the silica is still uncertain: no biogenic silica has been detected in the carbonates, but microbial films are found in them, which could suggest the presence of diatoms.
Karsts are carbonate caves formed by the dissolution of carbonate rocks such as limestones and dolomites. They are usually susceptible to groundwater and are dissolved by this drainage. Silicified karsts and cave deposits are formed when siliceous fluids enter karsts through faults and cracks. The Mid-Proterozoic Mescal Limestone of the Apache Group in central Arizona is a classic example of silicified karst. A portion of the carbonates was replaced by chert in early diagenesis, and the remaining portion was completely silicified in later stages. The source of silica in carbonates is usually associated with the presence of biogenic silica; however, the source of silica in the Mescal Limestone is the weathering of overlying basalts, which are extrusive igneous rocks.
Silicified woods
Silicification of wood usually occurs under terrestrial conditions, but it can also take place in aquatic environments. Surface-water silicification can occur through the precipitation of silica in silica-enriched hot springs. On the northern coast of central Japan, the Tateyama hot spring has a high silica content that contributes to the silicification of nearby fallen wood and organic materials. Silica precipitates rapidly out of the fluids, and opal is the main form of silica. With a temperature of around 70 °C and a pH of around 3, the opal deposited is composed of silica spheres of different sizes arranged randomly.
Early silicification
Mafic magma dominated the seafloor at around 3.9 Ga during the Hadean-Archean transition. Due to rapid silicification, the felsic continental crust began to form. In the Archean, the continental crust was composed of tonalite–trondhjemite–granodiorite (TTG) as well as granite–monzonite–syenite suites.
Mount Goldsworthy in the Pilbara Craton of Western Australia holds one of the earliest examples of silicification, in an Archean clastic meta-sedimentary rock sequence, revealing the surface environment of the early Earth through evidence of silicification and hydrothermal alteration. The rocks are SiO2-dominant in mineral composition. The succession was subjected to a high degree of silicification due to hydrothermal interaction with seawater at low temperatures. Lithic fragments were replaced with microcrystalline quartz and protoliths were altered during silicification. The conditions of silicification and the elements present suggest that surface temperature and carbon dioxide content were high during syn-deposition, post-deposition, or both.
The Barberton Greenstone Belt in South Africa, specifically the Eswatini Supergroup of around 3.5–3.2 Ga, is a suite of well-preserved silicified volcanic-sedimentary rocks. With compositions ranging from ultramafic to felsic, the silicified volcanic rocks lie directly beneath the bedded chert layer. Rocks are more silicified near the bedded chert contact, suggesting a relationship between chert deposition and silicification. The silica-altered zones reveal that hydrothermal activity, in the form of circulating seawater, moved through the rock layers via fractures and faults during the deposition of the bedded chert. The seawater was heated and picked up siliceous material from the underlying volcanic rocks. The silica-enriched fluids brought about silicification of the rocks by seeping into porous materials during the syn-depositional stage under low-temperature conditions.
See also
Metasomatism
Permineralization
Pseudomorph
Silica cycle
References
Sedimentary rocks
Geochemical processes
Silicate minerals | Silicification | [
"Chemistry"
] | 2,955 | [
"Geochemical processes"
] |
68,838,431 | https://en.wikipedia.org/wiki/Amazon%20Astro | Amazon Astro is a home robot developed by Amazon.com, Inc. It was designed for home security monitoring, remote care of elderly relatives, and as a virtual assistant that can follow a person from room to room.
Features
Tom's Guide called the device "Alexa on wheels", noting that everything available on the Amazon Echo Show 10 is also available on this new device. The Astro has visual ID and should be able to recognize different family members and send an alert if the device sees someone it does not recognize in the home.
In 2022, Amazon announced a pilot program connecting Astro to the Ring security system, allowing workers in a remote call centre to control Astro to investigate security alerts.
Hardware
Reception
Mark Gurman of Bloomberg News says that, six months after its release, hardly anyone was talking about Astro online, and that Amazon had shipped only a few hundred units, at most.
David Priest of CNET observes that "For now, this robot remains a luxury item, for people with a lot of money to try out a cutting-edge technology that still lacks a compelling use case."
Lauren Goode of Wired magazine labels Astro as "a robot for the sake of a robot" and "a robot without a cause, at least for now".
The announcement in September 2022 that Astro would function as a security guard connected to Ring security devices for homes and small businesses led Gizmodo to comment on the increasing "creepiness" of Astro.
See also
Smart speaker
References
Astro
Products introduced in 2021
Robots
2021 robots | Amazon Astro | [
"Physics",
"Technology"
] | 305 | [
"Physical systems",
"Machines",
"Robots"
] |
56,164,961 | https://en.wikipedia.org/wiki/BC%20Energy%20Step%20Code | The BC Energy Step Code is a provincial regulation that local governments in British Columbia, Canada, may use, if they wish, to incentivize or require a level of energy efficiency in new construction that goes above and beyond the requirements of the base building code. It is an example of a "stretch code," or "reach code," in that it is an appendix to a mandatory minimum energy code that allows communities to voluntarily adopt a uniform approach to achieving more ambitious levels of energy efficiency in new construction.
The BC Energy Step Code consists of a series of specific measurable efficiency targets, and groups them into "steps" that represent increasing levels of energy-efficiency performance. By gradually adopting one or more steps, a local government can increase the building performance requirements in its community. The regulation is designed as a technical roadmap to help the province reach its target that all new buildings will attain a net zero energy ready level of performance by 2032.
The Government of British Columbia enacted the BC Energy Step Code as regulation on April 6, 2017. It entered into legal force on December 15, 2017.
How it works
The BC Energy Step Code establishes a series of measurable energy-efficiency requirements that builders must meet in communities that reference it in their building and development bylaws. The regulation groups these performance targets into a series of "steps" of increasing energy efficiency. Step 1 simply requires confirmation that new buildings meet the existing energy-efficiency requirements of the existing BC Building Code. Meanwhile, at the opposite end of the scale, Step 5 for homes represents a home that is net-zero energy ready. A Step 5 home is effectively the most energy-efficient home that can be built today, roughly equivalent to the rigorous Passive house standard.
The BC Building Code separates all buildings into two basic categories – Part 9 and Part 3, as follows:
Part 9 buildings refer to houses and small buildings three storeys or less, that have a building area or "footprint" no more than 600 square metres. This category includes single-family homes, duplexes, townhomes, small apartment buildings, and small stores, offices, and industrial shops.
Part 3 buildings are larger and more complex. They are four storeys and taller, and have a footprint greater than 600 square metres. This category includes larger apartment buildings, condos, shopping malls, office buildings, hospitals, care facilities, schools, churches, theatres, and restaurants.
For Part 9 buildings, there are five steps of the BC Energy Step Code; Part 3 buildings have four steps, while commercial buildings have three. Each step represents a more stringent set of energy-efficiency requirements. As communities climb the steps, they gradually increase the level of energy efficiency in their new buildings. The BC Energy Step Code applies to new construction only.
For small buildings, Steps 1 to 3 (collectively, the "Lower Steps") can be achieved using construction techniques and products readily understood and available in today's market; homes built to Steps 4 and 5 (the "Upper Steps") are more ambitious and may require more training and incentives to achieve.
The regulation is performance-based, not prescriptive, in that it does not specify the specific materials and strategies a builder must use. Instead, it sets measurable performance targets that the proposed building must meet.
To ensure that builders have the skills and capacity they need to cost-effectively produce higher performance buildings, until 2020, governments that wish to use the BC Energy Step Code may incentivize all steps, but may only require Lower Steps.
What it measures
The BC Energy Step Code measures a building's energy performance via a variety of metrics. The Building Envelope Metrics and the Equipment and Systems Metrics are demonstrated through a whole-building performance simulation, while the Airtightness Metric is demonstrated through an on-site blower door test of the building before occupancy.
Building envelope metrics
Thermal Energy Demand Intensity (TEDI): The amount of annual heating energy needed to maintain a stable interior temperature, taking into account heat loss through the envelope and passive gains (i.e., the amount of heat gained from solar energy passing through the envelope, or from activities in the home such as cooking and lighting, and that provided by body heat). It is calculated per unit of area of the conditioned space over the course of a year, and expressed in kWh/(m2·year).
Equipment and systems metrics
Percent Lower than EnerGuide Reference House: An EnerGuide reference house establishes how much energy a home would use if it was built to base building code standards. This metric identifies how much less energy, stated as a percentage, the new home will require compared to the reference house.
Mechanical Energy Use Intensity: The modelled amount of energy used by space heating and cooling, ventilation, and domestic hot water systems, per unit of area, over the course of a year, expressed in kWh/(m2·year).
Total Energy Use Intensity: The modelled amount of total energy used by a building, per unit of area, over the course of a year, expressed in kWh/(m2·year).
Airtightness metrics
Air Changes per Hour at a 50 Pa Pressure differential, as measured by a blower door test.
Air Leakage Rate: A measure of the rate that air leaks through the building envelope per unit area of the building envelope, as recorded in L/(s·m2) at a 75 Pa pressure differential.
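These metrics are, at heart, simple intensity and leakage ratios. The sketch below illustrates how they might be computed from modelled energy use, floor area, volume, and a blower-door result; all numerical values are hypothetical and are not taken from the regulation or the BC Building Code.

# Illustrative calculation of BC Energy Step Code style metrics.
# All input values below are hypothetical; real values come from
# whole-building energy modelling and an on-site blower door test.
floor_area_m2 = 180.0              # conditioned floor area
annual_heating_kwh = 5400.0        # modelled annual space-heating demand
annual_mechanical_kwh = 9000.0     # heating/cooling, ventilation, hot water
annual_total_kwh = 14400.0         # all modelled end uses
building_volume_m3 = 450.0
leakage_m3_per_h_at_50pa = 1125.0  # blower door result

# Intensity metrics, expressed per square metre per year
tedi = annual_heating_kwh / floor_area_m2     # Thermal Energy Demand Intensity
meui = annual_mechanical_kwh / floor_area_m2  # Mechanical Energy Use Intensity
teui = annual_total_kwh / floor_area_m2       # Total Energy Use Intensity

# Airtightness: air changes per hour at a 50 Pa pressure differential
ach50 = leakage_m3_per_h_at_50pa / building_volume_m3

print(f"TEDI  = {tedi:.1f} kWh/(m2*year)")
print(f"MEUI  = {meui:.1f} kWh/(m2*year)")
print(f"TEUI  = {teui:.1f} kWh/(m2*year)")
print(f"ACH50 = {ach50:.2f} air changes per hour")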
Requirements
To meet the requirements of the BC Energy Step Code, builders will work with an energy advisor to check that their plans will meet the energy-performance requirements of a given step. An energy advisor uses software to analyze construction plans and determine the energy efficiency of a building. The builder then begins construction, paying special attention to the building envelope—the walls, windows, doors, and insulation. The energy advisor also tests a building once it is built to see how well it performs.
To achieve the Lower Steps, building and design professionals and trades can rely on conventional building designs with careful air-sealing practices, and incrementally incorporate some key elements in the design, building envelope, and equipment and systems. Builders and designers will collaborate with the energy advisor to select the most cost effective way to meet the standard's requirements. These Lower Steps give builders new flexibility in how to achieve modest gains in efficiency through improved envelopes and/or upgraded systems.
To achieve the Upper Steps, builders and designers will need to adopt an integrated design approach to building design and may need to incorporate more substantial changes in building design, layout, framing techniques, system selection, and materials. These techniques and materials will be more costly and challenging without additional training and experience.
Origins
In September 2015, the province's Building Safety and Standards branch established an Energy Efficiency Working Group (EEWG) to review policies and regulations that apply to energy efficiency in BC, to seek stakeholder input and offer guidance on how to best implement an Energy Step Code to achieve consistent building energy performance beyond the BC Building Code. The consultations engaged with the building and development sectors, and the trades and professions that support them, as well as local governments, utilities, and other stakeholders, to identify a consistent approach to increasing energy-efficiency standards.
In August 2016, the group renamed itself the Stretch Code Implementation Working Group and published its final report and recommendations, including adoption of a Step Code into a voluntary provincial regulation.
The Energy Step Code Council
In mid 2017, the province renamed the group the Energy Step Code Council, and mandated it "to support local governments and industry towards smooth uptake of the BC Energy Step Code and help guide market transformation towards higher-performance buildings within B.C." The Energy Step Code Council meets quarterly to support training and capacity building opportunities for local governments, industry, and other stakeholder, communicate what the BC Energy Step Code is and how it may be implemented across the province, and provide advice and clarification on technical aspects of the standard.
Cost implications of adoption
In September 2017, BC Housing, the province's housing authority, and the Energy Step Code Council published the BC Energy Step Code 2017 Metrics Research Study as a comprehensive exploration of the standard's energy, emissions and economic impacts. The research is based on data generated by builders from all across British Columbia, and bills itself as "one of the most extensive energy analyses of buildings in Canada."
The researchers conclude that meeting the requirements of the Lower Steps of the BC Energy Step Code involve only very modest construction premiums. In most situations, builders can achieve the Lower Steps for less than a 2% construction cost premium above that of a home built to the requirements of the BC Building Code. The construction cost premiums associated with meeting the requirements of Step 1 amounts to just a small fraction of a percent, the report states. In exchange, owners, occupants, and others would enjoy the benefits detailed in the "Benefits of adoption" section below.
In an effort to illustrate how the BC Energy Step Code would impact construction costs in the "real world," the study's authors produced a series of hypothetical scenarios for various building types in various cities.
For an apartment in a six-storey building
The Metrics Research report offers an example of the anticipated capital construction cost premium for a hypothetical 730 square foot unit in a six-storey apartment building in Surrey, British Columbia. Units in this hypothetical new building would sell for between CAD$270,000 and CAD$730,000.
For this building, the report says meeting the requirements of Step 1 would involve a construction cost premium of CAD$100 per unit above the cost of building to the standard modelling requirements of the BC Building Code. Meeting the requirements of Step 2 would incur a 0.5 percent construction cost premium, about CAD$790 per unit. Meeting the requirements of Step 3 adds about CAD$970 to the per-unit build cost. Finally, the researchers found that building to the very high-performance levels of Step 4 may entail a per-unit construction cost premium of CAD$4,215.
For a home in a six-unit row house
The Metrics Research report also models an example of the anticipated capital construction cost premium for a hypothetical 1,720 square foot unit built into a six-unit row house project in Surrey, B.C. Units in this hypothetical new building would sell for between CAD$550,000 and CAD$800,000.
For this building, the researchers conclude that meeting the requirements of Step 1 would involve a construction cost premium of $560 per unit above the cost of building to the BC Building Code. Meeting the requirements of Step 2 would incur a 0.4% construction cost premium, about CAD$1,250 per unit. Meeting the requirements of Step 3 adds about CAD$2,950 to the per-unit build cost. Finally, the report states that building to the highest performance levels may require non-conventional building practices; this would increase construction costs between $5,500 (Step 4) and $9,400 (Step 5) per unit, the study suggests.
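The premiums quoted above come from comparing modelled construction costs. As a rough illustration of the underlying arithmetic, a dollar premium can be restated as a percentage of an assumed base construction cost; the base cost used below is an assumption for illustration only and is not a figure from the study.

# Hypothetical illustration of the premium-percentage arithmetic.
# The base_cost value is an assumption for illustration only.
def premium_percent(premium_cad, base_cost_cad):
    # Construction cost premium expressed as a percentage of the base build cost.
    return 100.0 * premium_cad / base_cost_cad

# A CAD$1,250 per-unit premium on an assumed CAD$310,000 base construction
# cost works out to roughly a 0.4% premium.
print(round(premium_percent(1250, 310_000), 1))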
Benefits of adoption
Buildings built to higher energy efficiency standard have been shown to provide multiple co-benefits – to home and building owners and occupants, to industry, to the environment, and to the community.
For building owners and occupants
Owners and tenants often prefer high-performance buildings as they require less energy, reducing utility bills. Occupants also prefer them because they better manage:
Temperature, improving comfort.
Fresh air throughout the building, improving health.
Soundproofing, reducing exterior noise.
For industry
The BC Energy Step Code provides industry with a clear sense of where the province is heading on energy efficiency, while giving builders a welcome level of consistency via standardized performance metrics.
For climate change mitigation
If a given community's new homes are likely to be heated with natural gas, the BC Energy Step Code will reduce the amount of that fuel they need to burn to stay comfortable. A well-insulated and well-sealed Step 3 home heated with natural gas will consume much less of the fuel when compared with one built to the minimum code requirements. This will result in fewer carbon emissions.
For economic development
The global green-building market doubles every three years and the value of the green building materials market is expected to reach $234 billion by 2019. British Columbia is already a green building design and construction leader, boasting some of the highest-performing buildings in North America. Almost 12,000 people work in green architecture and related construction services in BC, while close to 9,000 work in clean energy services. The BC Energy Step Code could open up new local economic development opportunities, and helps unlock a significant export opportunity. At a November 2017 conference, an assistant deputy minister with the Province of British Columbia's Office of Housing and Construction Standards called the BC Energy Step Code "a driver of the clean economy."
Geographic availability
The BC Energy Step Code is available to communities in all climate zones across the province for Part 9 buildings, and only to Climate Zone 4 (Lower Mainland and South Vancouver Island) for Part 3 buildings. Future iterations of the standard will increase coverage to all types and all areas.
All British Columbia local governments except the City of Vancouver may reference and enforce the BC Energy Step Code in their policies and bylaws. The City of Vancouver has its own building code, and its own high-performance buildings strategy, the Zero Emissions Building Plan.
B.C. local governments referencing the standard
As of a March 2019 survey of 76 local governments, 14 local governments reported that they had implemented the BC Energy Step Code, and 17 local governments reported they were in the process of implementing at the time of the survey.
Related Canadian Policies
In August 2017, British Columbia joined Canada's federal government, represented by Natural Resources Canada, and other provinces and territories in endorsing the Build Smart: Canada's Buildings Strategy, which is a "key driver" of the Pan-Canadian Framework on Clean Growth and Climate Change. The strategy commits signatories to develop and adopt increasingly stringent model building codes, starting in 2020, with the goal that provinces and territories adopt a "net-zero energy ready" model building code by 2030. In British Columbia, the BC Energy Step Code serves as a technical policy pathway for British Columbia to deliver on that goal.
As of mid-2018, the only other tiered building standard in Canada is the Toronto Green Standard, which establishes sustainable design requirements for new private and public developments in that city. The Toronto Green Standard consists of stepped levels of performance measures with supporting guidelines that promote sustainable site and building design.
Similar regulations
New Buildings Institute, a U.S. nonprofit organization advocating for improved energy performance in commercial buildings, describes a stretch code as "a locally mandated or incentivized code or alternative compliance path that is more ambitious than the base code, resulting in buildings that achieve higher energy savings." The institute says the codes provide an opportunity to train building and development communities in advanced practices before the underlying energy code is improved. They help accelerate market acceptance and adoption of more stringent energy efficiency codes in the future. Stretch codes can work in tandem with utility incentive programs.
In November 2017, New Buildings Institute released a set of model stretch building code strategies that target 20% better efficiency than current U.S. national building energy codes. The new 20% Stretch Code Provisions address design aspects such as envelope, mechanical, water heating, lighting and plug loads.
Other stretch codes are in place in the United States, in Massachusetts, Vermont, Oregon, New York, and California.
See also
Energy policy of Canada
Efficient energy use
Passive house
California Green Building Standards Code
List of low-energy building techniques
References
External links
BC Energy Step Code (Building Safety and Standards Branch, Province of British Columbia)
Canada's Energy Code (National Energy Code of Canada for Buildings 2015)
BC Energy Step Code Design Guide & Supplemental (Research Library, BC Housing Management Commission)
Building codes
Standards of Canada | BC Energy Step Code | [
"Engineering"
] | 3,234 | [
"Building engineering",
"Building codes"
] |
56,168,286 | https://en.wikipedia.org/wiki/1958%20Mailuu-Suu%20tailings%20dam%20failure | The 1958 Mailuu-Suu tailings dam failure in the industrial town of Mailuu-Suu (Kyrgyz: Майлуу-Суу), Jalal-Abad Region, southern Kyrgyzstan, caused the uncontrolled release of radioactive waste.
The event caused several direct casualties and widespread environmental damage. It was the single worst incident in a region of arid, mountainous western Kyrgyzstan, with a collection of shuttered Soviet-era uranium mining and processing sites, a legacy of extensive radioactive waste dumps, and a history of flooding and mudslides.
As of 2017, despite recent remediations funded by the World Bank and others, the treatment of radioactive waste at Mailuu-Suu still poses serious health and safety risks for local residents.
Background
Oil was discovered here in the early 1900s. Deposits of radium-bearing barites had been discovered by Alexander Fersman in 1929, during his national mineralogical resources survey for the new Soviet government. Uranium mining began in 1946, organized by the "Zapadnyi Mining and Chemical Combine". In addition to mining, two uranium plants processed uranium ore, by ion exchange and alkaline leach, to produce uranium oxide for Soviet atomic bomb projects. The processed ore was both mined locally and imported from elsewhere in the Eastern Bloc.
The town was classified as one of the Soviet government's secret cities, officially known only as "Mailbox 200".
Uranium mining was halted in 1968. Operations left behind some 23 separate uranium tailings dams and 13 waste rock dumps, poorly designed on unstable hillsides above a town of 20,000 people in an area prone to both landslides and earthquakes, and holding material containing radionuclides and heavy metals. No attempt was made to stabilize or seal the material when Soviet mining ceased.
Dam failure
On April 16, 1958, with mining and processing plants still operational, a combination of poor design, neglect, heavy rainfall and a reported earthquake caused the #7 tailings dam at Mailuu-Suu to fail. About 50% of the entire volume of the dam flowed into the swift Mailuu-Suu River just downhill from the breach. The waste then spread downstream, across the national border into Uzbekistan and into the heavily populated Fergana Valley. The Mailuu-Suu River is a tributary of the Kara Darya, used for agricultural irrigation in the valley.
Some fatalities, building destruction, and contamination of the flood plain were reported as the direct result of the mudflow. Lack of any public response by officials makes it difficult to identify fatalities from the April 1958 event, especially as distinguished from everyday exposure.
Aftermath
Long-term health effects are more measurable. Grave threats to long-term residents persist, with residents experiencing far higher rates of cancer, goiter, anemia, and other illnesses related to radiological exposure.
Mailuu-Suu was found to be one of the 10 most polluted sites in the world in a study published in 2006 by the Blacksmith Institute.
Annual spring flooding and the lack of maintenance pose a continued threat of further releases of radioactive material. In 1994, a new landslide temporarily dammed the Mailuu-Suu River. In 2002 a flood caused by a mudslide nearly submerged a tailings pit.
The World Bank approved a US$5 million grant to reclaim the tailings pits in 2004, and approved an additional $1 million grant for the project in 2011. The United Nations Development Programme, and the European Bank for Reconstruction and Development have also funded programs.
References
Tailings dam failures
Radioactively contaminated areas
Radiation accidents and incidents
1958 mining disasters
Mining in Kyrgyzstan
1958 in the Soviet Union
Disasters in the Soviet Union
1958 disasters in the Soviet Union
Dam failures in Asia
April 1958 events in Asia | 1958 Mailuu-Suu tailings dam failure | [
"Chemistry",
"Technology"
] | 779 | [
"Radioactively contaminated areas",
"Soil contamination",
"Radioactive contamination"
] |
56,168,643 | https://en.wikipedia.org/wiki/Aspergillus%20flavescens | Aspergillus flavescens is a rare species of fungus in the genus Aspergillus. Aspergillus flavescens can cause myringomycosis.
References
Further reading
flavescens
Fungi described in 1867
Fungus species | Aspergillus flavescens | [
"Biology"
] | 50 | [
"Fungi",
"Fungus species"
] |
77,597,039 | https://en.wikipedia.org/wiki/Tetrachloroiodic%20acid | Tetrachloroiodic acid is an inorganic compound, a polyhalide acid with the formula HICl4. In addition to an anhydrous form, an orange crystalline tetrahydrate is known. It is unstable in air.
Synthesis
Tetrachloroiodic acid may be formed by the dissolution of iodine trichloride in concentrated hydrochloric acid:
ICl3 + HCl → HICl4
Tetrachloroiodic acid may also be made by passing chlorine through a solution of iodine in concentrated hydrochloric acid:
I2 + 3Cl2 + 2HCl → 2HICl4
Physical properties
Tetrachloroiodic acid forms a crystalline hydrate with orange crystals that are unstable in air and melt at 19 °C by dissolving in their own water of crystallization.
See also
Iodine trichloride
References
Iodine compounds
Chlorine compounds | Tetrachloroiodic acid | [
"Chemistry"
] | 173 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
77,599,492 | https://en.wikipedia.org/wiki/NGC%205314 | NGC 5314 is a spiral galaxy in the constellation of Ursa Minor. Its velocity with respect to the cosmic microwave background is 9636 ± 100 km/s, which corresponds to a Hubble distance of 142.13 ± 10.17 Mpc (∼463 million light-years). It was discovered by American astronomer Lewis Swift on 8 April 1886.
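The quoted Hubble distance follows directly from dividing the recession velocity by the Hubble constant. The sketch below assumes H0 ≈ 67.8 km/s/Mpc, which is roughly the value implied by the figures quoted above; it is not stated in the article itself.

# Recover the Hubble distance of NGC 5314 from its recession velocity.
# H0 is an assumption here (~67.8 km/s/Mpc reproduces the quoted distance).
v_cmb_km_s = 9636.0          # velocity relative to the CMB, km/s
H0 = 67.8                    # assumed Hubble constant, km/s per Mpc
MLY_PER_MPC = 3.2616         # million light-years per megaparsec

distance_mpc = v_cmb_km_s / H0             # ~142 Mpc
distance_mly = distance_mpc * MLY_PER_MPC  # ~463 million light-years

print(f"{distance_mpc:.1f} Mpc, about {distance_mly:.0f} million light-years")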
One supernova has been observed in NGC 5314: SN 2023eyz (type Ia, mag. 20.4) was discovered by the Zwicky Transient Facility on 8 April 2023.
See also
List of NGC objects (5001–6000)
References
External links
5314
048810
+12-13-009
13450+7035
Ursa Minor
18860408
Discoveries by Lewis Swift
Spiral galaxies | NGC 5314 | [
"Astronomy"
] | 161 | [
"Ursa Minor",
"Constellations"
] |
77,602,408 | https://en.wikipedia.org/wiki/Isoproturon | Isoproturon (IPU) is a urea-class selective herbicide that has been used to control annual grasses and many broad-leaved weeds in wheat, barley, rye and triticale.
Isoproturon was introduced in 1971 by Hoechst AG (now AgrEvo GmbH), Rhône-Poulenc and Ciba-Geigy AG. It was once one of the most widely used herbicides in the world; however, it has since been banned or never approved in several jurisdictions, including the USA, and until 2016 it was sold in 22 European countries.
Regulation
IPU is used in India.
Australia
Isoproturon was never registered in Australia. Agronomist Bill Crabtree estimates potential savings of A$47 billion had IPU been available since 1980. 4Farmers' attempt to register IPU is ongoing.
United Kingdom
Isoproturon has been used in the UK.
Isoproturon was banned in March 2007, with the ban taking effect in July 2009, due to its effects on the aquatic environment.
By 2014 the ban had been reversed. Lower-concentration formulations, notably Blutron, with 250 g/L IPU and 50 g/L diflufenican, were for sale. Greater solubility allows lower concentrations of IPU and greater plant uptake, lessening the residue left in the environment.
European Union
Isoproturon's registration in the European Union has expired; under EC Regulation 1107/2009 it is approved in the Netherlands and in no other EU member nation. The EU's ban took effect on 30 September 2016.
The EU Commission, which also banned amitrole, based the decision only partly on endocrine-disruption concerns, with other grounds left unclear. Had the decision rested on endocrine disruption alone, exemptions (for 'serious danger to plant health' or 'negligible exposure') would likely have been available under EU law. The European Court of Justice ruled in December 2015 that the Commission had illegally breached its "clear, precise and unconditional obligation" to publish scientific criteria.
United States
Isoproturon is not registered in the United States. Presumably, it has never been registered.
Ecodegradation
Isoproturon is non-persistent in soil. It is very photochemically stable and stable to acids and alkalis, but under sustained ultraviolet light it can degrade into some eleven products, and it can be hydrolytically cleaved by strong bases on heating.
Degradation proceeds mainly by N-demethylation and oxidation of the ring-isopropyl group. Variations in the order of these reactions give rise to a few possible pathways, and the balance of demethylation and oxidation can allow selective activity of the herbicide. Both reactions may occur, giving a typical degradation product of 2-(4-aminophenyl)propan-2-ol (also called dimethyl-p-aminobenzyl alcohol), which is an irritant and may be harmful if swallowed.
Most isoproturon is expected to have degraded in soil after 6-28 days; the rate is temperature sensitive as the process is driven by enzymes and microbes. In water, the DT50 is 40 days, and in water sediments 149 days.
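Assuming simple first-order kinetics (an idealization; as noted above, the real degradation rate is temperature sensitive and microbially driven), a DT50 value can be converted into the fraction of isoproturon remaining after a given time:

import math

def fraction_remaining(t_days, dt50_days):
    # First-order decay: k is the rate constant implied by the half-life (DT50).
    k = math.log(2) / dt50_days
    return math.exp(-k * t_days)

# After 60 days, using the DT50 values quoted above:
print(round(fraction_remaining(60, 40), 2))    # water, DT50 = 40 days  -> ~0.35 remaining
print(round(fraction_remaining(60, 149), 2))   # sediment, DT50 = 149 days -> ~0.76 remaining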
Metabolism in plants usually follows the path beginning with isopropyl side chain oxidation.
Toxicology
Isoproturon is in the WHO's toxicity class III: Slightly Hazardous. The oral LD50 is 3350 mg/kg (mice), and the percutaneous LD50 for rats is >2000 mg/kg. It is non-irritating to skin and eyes, as tested on rabbits. The dietary NOEL over 90 days is 80 mg/kg for rats and 50 mg/kg for dogs. IPU is an endocrine disruptor.
Isoproturon is not toxic to bees and birds but can harm fish, with LC50 values of 191 mg/L for carp, 91 mg/L for guppies, and just 9 mg/L for catfish. The UK Environment Agency set a non-statutory acceptable water limit of 2 μg/L as an average, or 20 μg/L in a single measurement. In rats, the half-life of ingested isoproturon is about 8 hours, with 86% of excretion occurring through urination.
Application
Technical-grade isoproturon is >97% pure, and is sold as the active ingredient in commercial formulations, usually as a suspension concentrate (SC) or a wettable powder (WP).
Diflufenican is a Class C2 (or Group 7) resistance class herbicide.
Synthesis
IPU is synthesised from cumene, which is nitrated with HNO3 to form p-nitrocumene. The nitro group is then reduced with hydrogen, replacing its two oxygen atoms and giving p-cumidine. Phosgene is reacted with p-cumidine, which replaces one of phosgene's chlorine atoms, and dimethylamine then completes the molecule by replacing the other chlorine atom. An alternative route, involving the more direct combination of p-cumidine, urea and dimethylamine, exists.
Lists
Weeds controlled by IPU include annual grasses such as black-twitch, common windgrass, common wild oat, and annual meadow grass. It is used in spring and winter to control many annual broad-leaved weeds.
Isoproturon is used on crops such as wheat, rye, barley, triticale, sugarcane, citrus, cotton, asparagus, oilseed rape, peas, spring field beans, sugar beet, potatoes, carrots, brassicas and onions. It is not used on durum wheat because of isoproturon's phytotoxicity to it, however it is nonphytotoxic to other cereals.
Isoproturon's herbicide resistance class is class C2 (HRAC) or class 7 (WSSA). Black-twitch and lesser canary grass have shown resistant examples.
Tradenames
Isoproturon
Alon (AgrEvo)
Arelon (AgrEvo)
Avanon (Gharda)
Blutron (Agform)
Isoguard (Gharda)
Graminon (Ciba-Geigy AG)
Phytosanitaire (Rhone-Poulenc)
Tolkan (Rhone-Poulenc)
References
Links
Ureas
Herbicides
Endocrine disruptors
Isopropyl compounds
Dimethylamino compounds | Isoproturon | [
"Chemistry",
"Biology"
] | 1,324 | [
"Herbicides",
"Endocrine disruptors",
"Organic compounds",
"Biocides",
"Ureas"
] |
77,606,905 | https://en.wikipedia.org/wiki/Lajos%20Balogh%20%28scientist%29 | Lajos Peter Balogh (born January 15, 1950, Hungary), mainly referred to as Lou Balogh, is a Hungarian-American scientist known for his research on polymers, dendrimer nanocomposites, and nanomedicine. Balogh is the editor-in-chief of Precision Nanomedicine (PRNANO). Based on his career-long citation numbers, he belongs to the World's Top 2% Scientists.
Early life and education
Balogh was born on January 15, 1950, in Komádi, Hajdú-Bihar County, Hungary. He studied chemistry at the Debreceni Vegyipari Technikum then at the Kossuth Lajos University from 1969 to 1974, earning his Ph.D. in 1983. In 1991, Balogh received an invitation from the University of Massachusetts Lowell and moved to the United States.
Career
Balogh joined UMass Lowell as a visiting professor in 1991. In 1996, he left for the Michigan Molecular Institute to research dendrimers, where he was a senior associate scientist in Donald Tomalia's group. From 1998 to 2018, Balogh worked as a professor at the University of Michigan Ann Arbor, Center for Biologic Nanotechnology, the University at Buffalo, and co-directed Nano-Biotechnology Center at the Roswell Park Comprehensive Cancer Center. As a visiting professor, he also taught at the Chinese Academy of Sciences, the Seoul National University, and the Semmelweis University.
Balogh is one of the co-founders of the American Society for Nanomedicine (2008). He has been a board member of several expert organizations, e.g., the U.S. Technical Advisory Group to ISO TC229 on Nanotechnology (since 2005), the Scientific Committee of the CLINAM Summits (since 2011), and numerous other National and International Steering Committees.
Between 2008 and 2016, Balogh, as editor-in-chief, took an upstart scientific journal (Nanomedicine: Nanotechnology, Biology, and Medicine, Elsevier) from 5 editors and no journal impact factor to 20 editors and a 2014 journal impact factor of 6.9 (5-year JIF = 7.5). He increased the journal's readership to over 480,000 downloads per year. In 2017, Balogh initiated Manuscript Clinic, a platform that helped scientists and students publish their research results in nanomedicine and nanotechnology and promoted both nanoscience and scientific writing. In 2018, he founded Andover House, Inc., a not-for-profit online publishing company, and launched Precision Nanomedicine (PRNANO). He serves as the editor-in-chief of this scientists-owned, fully open-access and peer-reviewed international journal. PRNANO has been designated the official journal of the International Society for Nanomedicine and CLINAM, the European Society for Clinical Nanomedicine (Basel, Switzerland).
Personal
Balogh is married to Éva Kovács Balogh, a Hungarian American linguist. He has three children, Andrea, Péter, and Áki. Peter Balogh was the crew chief of the University of Michigan Solar car team's MomentUM, which won first place at the North American Solar Challenge in 2005 and now works in the maritime industry in Asia. Aki Balogh is the Co-founder and President of MarketMuse and Co-founder and CEO of DLC.link.
Research work
Balogh discovered and pioneered dendrimer nanocomposites, drug delivery platforms, and co-invented new cancer treatments. He is considered an international expert in nanomedicine and scholarly publications. He published 228 scientific papers in chemistry, physics, nanotechnology, and nanomedicine, gave over 230 invited lectures, and was awarded 12 patents. His publications have been cited over 10000 times (22 papers with more than 100 citations, 11 with more than 200 citations, and 2 cited over 1000 times; h-index=42). Balogh has been listed as belonging to the World's top 2% of Scientists.
Achievements
Balogh is one of the five founders of the American Society for Nanomedicine. He serves on the U.S. Technical Advisory Group to ISO TC 229 Nanotechnology and on the Board of several international and U.S. national organizations. Some recent awards include a visiting professorship for Senior International Scientists at the Chinese Academy of Sciences, Beijing, the Korean Federation of Science and Technology Societies Brain Pool Program Award for Renowned Foreign Scientists to teach at Seoul National University, Seoul, Korea, and a Fulbright Teaching/Research Scholarship at Semmelweis University. Balogh is a member of the External Body of the Hungarian Academy of Sciences (since 2011).
Selected publications
Peer-reviewed publications
Wolfgang Parak, Beatriz Pelaz, Christoph Alexiou, Ramon A. Alvarez-Puebla, Frauke Alves, Anne M. Andrews, Sumaira Ashraf, Lajos P. Balogh, et al., Diverse Applications of Nanomedicine, ACS Nano. ACS Nano, 2017, 11 (3), pp 2313–2381, DOI: 10.1021/acsnano.6b0604
Kukowska-Latallo, Jolanta F. Kimberly A. Candido, Zhengyi Cao, Shraddha S. Nigavekar, Istvan J Majoros, Thommey P. Thomas, Lajos P. Balogh, Mohamed K. Khan and James R. Baker, Jr., Nanoparticle Targeting of Anticancer Drug Improves Therapeutic Response in Animal Model of Human Epithelial Cancer, Cancer Research 2005, 65, 5317-5324
L. Balogh and Donald A. Tomalia: Poly(Amidoamine) Dendrimer-Templated Nanocomposites I. Synthesis of Zero-Valent Copper Nanoclusters; Journal of Am. Chem. Soc., 1998, 120, 7355-7356
L Balogh, R Valluzzi, KS Laverdure, SP Gido, GL Hagnauer, DA Tomalia: Formation of silver and gold dendrimer nanocomposites, Journal of Nanoparticle Research 1, 353-368
Lajos P. Balogh, Shawn M. Redmond, Peter Balogh, Houxiang Tang, David C. Martin, Stephen C. Rand: "Self-Assembly and Optical Properties of Dendrimer Nanocomposite Multilayers," Macromolecular Bioscience 2007, 7, 1032–1046
Yuliang Zhao, Lajos Balogh, Caging Cancer, Nanomedicine: Nanotechnology, Biology, and Medicine, 11 (2015) 867–869
Hong, S. A. U. Bielinska, A. Mecke, B. Keszler, J. L. Beals, X. Shi, L. Balogh, B. G. Orr, J. R. Baker Jr., and M. M. Banaszak Holl, Interaction of Poly(amidoamine) Dendrimers with Supported Lipid Bilayers and Cells: Hole Formation and the Relation to Transport, Bioconj. Chem. 2004, 15, 774-782
L Balogh, DR Swanson, DA Tomalia, GL Hagnauer, AT McManus, Dendrimer−silver complexes and nanocomposites as antimicrobial agents, Nano letters 1 (1), 18-21
Books
Lajos P. Balogh (Ed), Nanomedicine's Most Cited Series, Vol.2, Nano-Enabled Medical Applications (Taylor & Francis, 2020)
Lajos P. Balogh (Ed), Nanomedicine's Most Cited Series, Vol.1, Nanomedicine in Cancer (Taylor & Francis, 2017)
Book chapters
Lajos P. Balogh, "Introduction to Nanomedicine" Chapter 1 in Nanomedicine in Health And Disease, Ed: Victor R. Preedy, Science Publishers, 2011, p.3.
Lajos P. Balogh, Donald A. Mager and Mohamed K. Khan, "Synthesis and Biodisposition of Dendrimer Composite Nanoparticles," Chapter 6 in: Materials for Nanomedicine, Eds: V. Torchilin and M. Amiji, World Scientific Publishing, 2010
Lajos P. Balogh, Teyeb Ould Ely, and Wojciech G. Lesniak "Composite Nanoparticles for Cancer Imaging and Therapy: Engineering Surface, Composition, and Shape," Chapter 4 in Nanomedicine Design of Particles, Sensors, Motors, Implants, Robots, and Devices, Eds: M. Schulz, V.N. Shanov, ISBN 978-1-59693-279-1, Artech House 2009
Lajos P. Balogh, "Dendrimer 101" Chapter 11 (p 136-155) in: "Biological and Biomedical Applications of Engineered Nanostructures" Ed: Warren P. Chan – U. Toronto, Eurekah, 2007.
Lajos P Balogh and Mohamed K Khan: Dendrimer Nanocomposites for Cancer Therapy" Chapter 28 (p. 551–592, in Nanotechnology in Cancer Therapy (Ed: Mansour Amiji), CRC Press (Taylor and Francis Group), Boca Raton, London, New York, 2006
Selected patents
J. Y. Ye, T. B. Norris, L. P. Balogh, J. R. Baker, Jr., Laser-based Method and System for Enhancing Optical Breakdown, US 7,474,919 B2, January 6, 2009
D. A. Tomalia and L. Balogh: "Nanocomposites of Dendritic Polymers" US 6,664,315 B2, December 16, 2003
D. A. Tomalia and L. Balogh: "Method and Articles for Transfection of Genetic Material," US 6,475,994, November 5, 2002,
L. Balogh, D. R. Swanson, D. A. Tomalia, G. L. Hagnauer, A. T. McManus: "Antimicrobial Dendrimer Nanocomposites and a Method of Treating Wounds," US 6,224,898 B1, May 1, 2001.
R. Faust and L. Balogh: U.S. 5,665,837 (1997); Initiation via Haloboration in Living Cationic Polymerization
References
1950 births
Living people
Hungarian scientists
Nanomedicine
Nanomedicine journals
Polymer chemistry
Dendrimers
Medical journal editors | Lajos Balogh (scientist) | [
"Chemistry",
"Materials_science",
"Engineering"
] | 2,290 | [
"Materials science",
"Nanomedicine",
"Dendrimers",
"Polymer chemistry",
"Nanotechnology"
] |
77,609,190 | https://en.wikipedia.org/wiki/Chlodantane | Chlodantane (developmental code name ADK-910) is a drug described as an adaptogen or actoprotector "of the estrogen activity type" that was developed in Russia and was never marketed. It is an adamantane derivative and is closely related to bromantane (N-(2-adamantyl)-N-(para-bromophenyl)amine) and other adamantanes. It has been said to improve physical performance. However, only animal or cell culture research has been conducted and it has not been studied in humans. The drug is described as having a broader spectrum of activity than bromantane. It also has immunostimulant effects that are said to be more pronounced than those of bromantane.
See also
List of Russian drugs
References
Abandoned drugs
Adamantanes
Amines
4-Chlorophenyl compounds
Drugs in the Soviet Union
Drugs with unknown mechanisms of action
Ketones
Russian drugs
Russian inventions | Chlodantane | [
"Chemistry"
] | 202 | [
"Ketones",
"Drug safety",
"Functional groups",
"Amines",
"Bases (chemistry)",
"Abandoned drugs"
] |
67,470,005 | https://en.wikipedia.org/wiki/Human%20Systems%20Integration | Human Systems Integration (HSI) is an interdisciplinary managerial and technical approach to developing and sustaining systems that focuses on the interfaces between humans and modern technical systems. The objective of HSI is to provide equal weight to human, hardware, and software elements of system design throughout systems engineering and lifecycle logistics management activities across the lifecycle of a system. The end goal of HSI is to optimize total system performance and minimize total ownership costs. The field of HSI integrates work from multiple human-centered domains of study, including training, manpower (the number of people), personnel (the qualifications of people), human factors engineering, safety, occupational health, survivability and habitability.
HSI is a total systems approach that focuses on the comprehensive integration across the HSI domains, and across systems engineering and logistics support processes. The domains of HSI are interrelated: a focus on integration allows tradeoffs between domains, resulting in improved manpower utilization, reduced training costs, reduced maintenance time, improved user acceptance, decreased overall lifecycle costs, and a decreased need for redesigns and retrofits. An example of a tradeoff is the increased training costs that might result from reducing manpower or increasing the necessary skills for a specific maintenance task. HSI is most effective when it is initiated early in the acquisition process, when the need for a new or modified capability is identified. Application of HSI should continue throughout the lifecycle of the system, integrating HSI processes alongside the evolution of the system.
HSI is an important part of systems engineering projects.
History
Military origins
The US Navy initiated the Military Manpower versus Hardware (HARDMAN) Methodology in 1977 to address problems with manpower, personnel and training in the service. In 1980, The National Academies of Sciences, Engineering, and Medicine established the Committee on Human Factors, which was later renamed the Committee on Human Systems Integration. The modern concept of Human Systems Integration in the United States originated in 1986 as a US Army program called the Manpower and Personnel Integration (MANPRINT) program. With ties to the academic fields of industrial engineering and experimental psychology, MANPRINT incorporated human factors engineering with manpower, personnel and training domains into an integrated discipline. MANPRINT focused on the needs and capabilities of the soldier during the development of military systems, and MANPRINT framed a human-centered focus in six domains: human factors engineering, manpower, personnel, training, health hazards and system safety. The US Marine Corps, a component of the Navy, implemented aspects of both HARDMAN and MANPRINT programs to achieve HSI objectives, issuing a formal HSI policy in Marine Corps Order 5000.22 in 1994. The US Air Force began an HSI program in 1982 as "IMPACTS". Modern HSI programs abandoned early acronyms such as HARDMAN, MANPRINT and IMPACTS over the course of the development of their HSI programs. For example, the Air Force currently manages HSI through the Air Force Office of Human Systems Integration (AFHSIO). The US Coast Guard implemented an HSI program in 2000 in the strategy and HR capability division (CG-1B) of the human resources directorate. The US Department of Homeland Security initiated an HSI program under the Science and Technology Directorate in 2007, and the Transportation Security Administration (TSA) initiated a focused HSI effort under the umbrella of DHS S&T in 2018. The Federal Rail Administration (under the National Transportation Safety Board) and NASA Ames Research Center also address HSI. The United Kingdom, Canada, Australia and New Zealand have HSI programs similarly rooted in human factors and modeled after the Army MANPRINT program. In Europe HSI is known as Human Factors Integration.
Policy
DoD acquisition policy to formalize manpower, personnel, training and safety processes started in 1988. HSI as a distinct focus area was first addressed in the Operation of the Defense Acquisition System (DODINST 5000.02) issued in 2003. Updated in 2008, this policy expanded the six domains in the MANPRINT program to seven, re-focusing systems safety as safety and occupational health, and adding habitability and survivability to the list. In 2010, the National Academy of Sciences committee on Human Systems Integration was transitioned to a board under the Division of Behavioral and Social Sciences and Education. The Board on Human Systems Integration (BOHSI) issues consensus studies, reports and proceedings on HSI research and application. A 2013 update of the DODINST 5000.02 added force protection to the survivability domain. In 2020, the DODINST 5000.02 title and content shifted to the "Operation of the Adaptive Acquisition framework", which describes HSI activities tailored to each acquisition pathway, according to the unique characteristics of the capability being required.
The Defense Acquisition Guidebook, first published in 2002, devotes an entire chapter to manpower planning and HSI. In addition to focused discussion on each domain, the DAG emphasizes viewing HSI from a total system perspective, viewing the human components of a system as integral to the total system as any other component or subsystem. The DAG emphasizes the importance of representing HSI in all aspects of programmatic Integrated Product and Process Development, strategic planning and risk management.
The Standard Practice for Human Systems Integration (SAE 6906) was issued in 2019, and defines standard practices for procurement activities related to HSI. The standard is provided for industry to apply HSI during system design, through disposal and all related activities. This standard includes an overview of HSI and the domains, the domain relationships and tradeoffs, systems development process requirements, and a number of technical standard references
Technical Standards and requirements
ASTM F1337-10 Standard Practice for Human Systems Integration Program Requirements for Ships and Marine Systems, Equipment and Facilities
DI-HFAC 81743 Human Systems Integration Program Plan
HSI and Systems Engineering
The INCOSE Systems Engineering Handbook provides an authoritative reference to understand the discipline of Systems Engineering for students and practicing professionals. The human part of the system is associated with systems engineering activities from start to finish: from requirements development, to architectural design processes, verification, validation and operation. HSI is integral to the systems engineering process, and must be addressed in all program level integrated development product teams at program, technical, design, and decision reviews throughout the lifecycle of the system. The guidebook focuses on the integration of HSI into SE processes, and notes that an intuitive understanding of the important role of the human as an element of a system is not enough to achieve HSI related cost and performance objectives. HSI assists engineers through the addition of human-centered domain specialists and integrators who ensure that human considerations such as usability, safety and health, maintainability and trainability are accounted for using systematic methodologies grounded in each human-centered domain.
HSI trade studies and analyses are key methods of HSI that often result in insights not otherwise realized in systems engineering. The INCOSE Systems Engineering Guidebook recommends a number of steps to effectively incorporate HSI into systems engineering processes:
Initiate HSI early and effectively
Identify HSI issues and plan analyses
Document HSI requirements
Make HSI a factor in source selection for contracted development
Execute Integrated Technical Processes (including HSI domain integration)
Conduct Proactive Tradeoffs
Conduct HSI Assessments
HSI interacts with a number of SE activities:
HSI domain experts collaborate with each other to achieve HSI objectives
The contractor and the customer may each have an HSI lead integrator and domain experts, each role collaborating with their counterparts
HSI domain experts may participate in program management roles such as Integrated Product Teams, design teams, logistics management teams, and other systems engineering and program management collaborations
HSI interacts with reliability, availability and maintainability activities.
HSI is important to successful test and evaluation and should be integrated to all stages of test and evaluation activities
HSI interacts with logistics and supportability activities.
HSI and Logistics Support
Planning and management for cost and performance across the lifecycle of a system are accomplished through lifecycle logistics and integrated product support. These activities ensure that the system will meet sustainment objectives and satisfy user sustainment objectives. Product Support management covers three focus areas: lifecycle management, technical management and infrastructure management. The HSI domains of training, manpower and personnel fall under infrastructure management and are among the twelve elements of logistics / product support. Design Interface, one of the twelve elements of logistics / product support, is a subcategory of technical management and includes multiple domains of HSI, including human factors, personnel, habitability, training, safety and occupational health.
Design Interface (including HSI) is the integration of quantitative systems design characteristics with functional integrated product support elements. In this element of logistics, the systems design parameters drive product support resource requirements. Product support requirements are derived to ensure the system meets availability goals, balancing design and support costs. Design interface is a leading activity that impacts all other logistics / product support elements. Reliability and maintainability are aspects of design interface that have ties to manpower, personnel and training. Maintainability is a measure of the ease and speed in which a piece of equipment or system can be restored to full functionality after a failure; it is a function of design, personnel availability and skill levels, maintenance procedures, training and test equipment. Low maintainability may increase manpower, personnel and training costs over the lifecycle of the system. Human factors engineering and usability play an important role in requirements development, definition, design development and evaluation of system support for reliability and maintainability in the operational environment. Safety and occupational health are important aspects of product support: injury, accidental equipment damage, chronic injuries and long-term health problems reduce supportability, reliability and availability
Domains
Human Factors Engineering
Human Factors Engineering (HFE) is an engineering discipline that ensures human capabilities and limitations in areas such as perception, cognition, sensory and physical attributes are incorporated into requirements and design. Effective HFE ensures that systems design capitalizes on, and does not exceed, the abilities of the human user population. HFE can reduce the scope of manpower and training requirements, and ensure the system can be operated, maintained and supported by users in a habitable, safe and survivable manner. HFE is concerned with designing human-systems interfaces such as:
Functional interfaces: functions, tasks, and allocation of functions to human or automation
Informational interfaces: information and characteristics of information that support understanding and awareness of the environment and system
Environmental interfaces: natural and artificial environments, environmental controls, and facility design
Cooperational interfaces: provisions for team performance, cooperation and collaboration
Organizational interfaces: job design, management structure, policies and regulations that impact behavior
Cognitive interfaces: decision rules, decision support systems, provisioning for situational awareness and mental models.
Physical interfaces: hardware and software elements such as controls, displays, workstations, worksites, accesses, labels and markings, structures, steps and ladders, handholds, maintenance provisions, and more.
Technical standards and requirements:
ASTM F1166-07 Standard Practice for Human Engineering Design for Marine Systems, Equipment and Facilities
HFES-200 Human Factors Engineering of Software User Interfaces
MIL-STD 46855 Human Engineering Requirements for Military Systems, Equipment and Facilities
MIL-STD 1472 DoD Design Criteria Standard for Human Engineering
FAA Human Factors Design Standards (HFDS) HF-STD-001B
HFE Data Information Descriptions:
Human Engineering Program Plan (HEPP) DI-HFAC- 81742
Human Engineering Systems Analysis Report (HESAR) DI-HFAC-80745
Human Engineering Design Approach Document (HEDAD-M) DI-HFAC-80747
Human Engineering Design Approach Document (HEDAD-O) DI-HFAC-80746
Human Engineering Test Plan (HETP) DI-HFAC-80743
Human Engineering Test Reports (HETR) DI-HFAC-80744
Manpower
Manpower focuses on evaluating and defining the right mix of personnel (sometimes referred to as "spaces") for people to operate, maintain and support a system. Manpower requirements should be based on task analysis and consider workload, fatigue, physical and sensory overload, environmental conditions (heat/cold) and reduced visibility. Manpower requirements are the highest cost driver for a system, and can account for up to 70% of the total lifecycle cost. Requirements are based on the full range of operations from a low operational tempo, peacetime scenario to continuous sustained operations, and should include consideration for surge operations capacity. In the manpower analysis process, labor-intensive "high driver tasks" should be examined, and targeted for engineering design changes to reduce the manpower requirement through automation, or improved usability in design. A top down functional analysis can be the basis for determinations of which functions can be eliminated, consolidated, or simplified to control manpower costs.
DoD manpower policy comes from DoD Directive 1100.4, Guidance for Manpower Management
Personnel
The personnel domain is concerned with the human performance characteristics of the user population (cognitive, sensory and physical skills, knowledge, experience and abilities) of operators, maintainers and support staff required for a system. Cost effective engineering designs minimize personnel requirements, and keep them consistent with the user population. Systems that require new or advance personnel requirements will experience cost increases in other domains, such as training. The user group identified for a system may be referred to as the "target audience". The target audience is situated within a larger organizational structure, and recruitment, retention and personnel policies that may impact or be impacted by the new system should be considered. HSI and the personnel domain may impact policy, or policy may impact HSI. For example, the system may require additional recruitment to sustain the organizational workforce while employing the new system. An example of policy impacting HSI is increased diversity in the user population that may alter anthropometric requirements for the system and impact requirements in the HFE domain.
Manpower and personnel standards include:
Standard Practice for Manpower and Personnel SAE1010
Training
The training domain is concerned with giving the target audience the opportunity to acquire, gain or enhance the knowledge, skills and abilities needed to operate, maintain and support a system. The target audience may be individuals or groups; training in a systems engineering / acquisition setting is focused on job-relevant knowledge, skills and abilities aimed at satisfying performance levels specific to the system being designed. Training the operators, maintainers and support personnel to conduct their respective tasks is a component of the total system and a part of delivering the intended capability of the system. This includes the integration of training concepts and strategies with elements of logistics support, including technical manuals and procedures, interactive electronic technical manuals, job performance aids, computer based interactive courseware, simulators, and actual equipment, including embedded training capabilities on actual equipment. Training is an important aspect of configuration management: it is critical that training impacts of any and all changes to the system are evaluated. The objective of training is to develop and sustain ready, well trained personnel while reducing lifecycle costs, contributing to a positive readiness outcome. The industry standard practice to develop cost effective training is instructional systems design.
Training standards include:
USA:
Guidance for the Acquisition of Training Data Products and Services (Part 1 of 5) MIL-HDBK 29612/1
Instructional Systems Development/Systems Approach to Training and Education (Part 2 of 5) MIL-HDBK 29612/2
UK
JSP 882 Defence Direction and Guidance for Training and Education
Environment, Safety and Occupational Health
The environment, safety and occupational health domain is focused on determining system design characteristics that minimize risks to human health and physical wellbeing such as acute or chronic illness, disability, death, or injury. In a physical system design, systems safety works closely with systems engineers to identify, document, design out, or mitigate system hazards and reduce residual risk from those hazards. The three areas that must be considered are:
environment, or the natural and manmade conditions in and around the system and the operational context of the system
safety factors in systems design that minimize the potential for mishaps, such as walking surfaces, work at heights, pressure extremes, confined spaces, control of hazardous energy releases, fire and explosions
occupational health: system design features that minimize the risk of injury, acute or chronic illness, disability, or reduced long-term job performance from hazards such as noise, chemicals, atmospheric hazards (such as confined spaces), vibration, radiation and repetitive motion injuries.
A health hazard analysis should be performed periodically during the system lifecycle to identify risks, initiating the risk management process. In DoD programs, program managers must prepare a Programmatic Environmental, Safety and Occupational Health Evaluation (PESHE) which is an overall evaluation of ESOH risks for the program, and documents the progress of HHA program monitoring.
Systems safety is grounded in a risk management process but Safety risk management has a unique set of processes and procedures. For example, identified hazards should be designed out of the system whenever possible, either through selecting a different design, or altering the design to eliminate the hazard. If a design change isn't feasible, engineered features or devices should be added to interrupt the hazard and prevent a mishap. Warnings (devices, signs or signals) are the next best mitigation, but are considered to be far less beneficial to preventing mishaps. The last resort is personal protective equipment to protect people from the hazard, and training (knowledge skills and abilities to protect against the hazard and prevent a mishap). HFE review and involvement with design interventions introduced to address hazards is an important connection between the systems safety and HFE domain specialists. Design interventions may have manpower and personnel implications, and training mitigations for hazards must be incorporated into continued operator and maintainer training in order to sustain the training intervention.
Systems safety standards include:
USA:
MIL-STD 882 System Safety
UK:
Defence Policy for Health, Safety and Environmental Protection (DSA 01.1)
Force Protection and Survivability
Survivability refers to design features that reduce the risk of fratricide, detection and the probability of an attack, and enable the crew to continue the mission and avoid acute or chronic illness, severe injury, disability or death in hostile environments. Elements of survivability include reducing susceptibility to a mishap or attack (protection against detection, for example) and minimizing potential wounds or injury to personnel operating and maintaining the system. Survivability also includes protection from chemical, biological, radiological and nuclear (CBRN) threats, and should include requirements to preserve the integrity of the crew compartment, rapid egress in case of system destruction, and emergency systems for contingency management, escape, survival and rescue.
Survivability is often categorized in the following topics:
Reduce Fratricide
Reduce detectability
Reduce probability of attack
Minimize damage if attacked
Minimize injury
Minimize mental and physical fatigue
Survive extreme environments
Habitability
Habitability is the application of human-centered design to the physical environment (living areas, personal hygiene facilities, working areas, and personnel support areas) to sustain and optimize morale, safety, health, comfort and quality of life of personnel. Design aspects such as lighting; space; ventilation and sanitation; noise and temperature control; religious, medical and food services availability; berthing, bathing and personal hygiene are all aspects of habitability, and directly contribute to personnel effectiveness and mission accomplishment.
Habitability Standards Include:
Color Coordination Manual for Habitability DI-MISC 81123
Design Criteria Limits Noise Standards MIL-STD 1474
Further reading
Boehm-Davis, D., Durso, F. T., & Lee, J. D. (2015). APA handbook of human systems integration. Washington, DC: American Psychological Association.
Booher, H. R. (1990). Manprint: An approach to systems integration. New York, NY: Reinhold.
Hardman, N. S. (2009). An empirical methodology for engineering human systems integration.
Pew, R. W., & Mavor, A. S. (2007). Human-system integration in the system development process: A new look. Washington: National Academies Press.
Rouse, W. B. (2010). The economics of human systems integration valuation of investments in peoples training and education, safety and health, and work productivity. Hoboken, NJ: Wiley.
Human Systems Integration in the System Acquisition Process Army Regulation (AR) 602-2
United States Air Force Human Systems Integration Handbook
NASA Human Systems Integration Practitioners Guide
Defense Innovation Marketplace Human Systems Community of Interest
IEEE Human Systems Integration Technical Committee
NDIA Human Systems Division
References
Systems theory
Systems engineering
Systems psychology
Management education | Human Systems Integration | [
"Engineering"
] | 4,182 | [
"Systems engineering"
] |
67,471,804 | https://en.wikipedia.org/wiki/Rayleigh%E2%80%93Kuo%20criterion | The Rayleigh–Kuo criterion (sometimes called the Kuo criterion) is a stability condition for a fluid. This criterion determines whether or not a barotropic instability can occur, leading to the presence of vortices (like eddies and storms). The Kuo criterion states that for barotropic instability to occur, the gradient of the absolute vorticity must change its sign at some point within the boundaries of the current. Note that this criterion is a necessary condition, so if it does not hold it is not possible for a barotropic instability to form. But it is not a sufficient condition, meaning that if the criterion is met, this does not automatically mean that the fluid is unstable. If the criterion is not met, it is certain that the flow is stable.
This criterion was formulated by Hsiao-Lan Kuo and is based on Rayleigh's equation named after the Lord Rayleigh who first introduced this equation in fluid dynamics.
Barotropic instability
Vortices like eddies are created by instabilities in a flow. When there are instabilities within the mean flow, energy can be transferred from the mean flow to the small perturbations which can then grow. In a barotropic fluid the density is a function of only the pressure and not the temperature (in contrast to a baroclinic fluid, where the density is a function of both the pressure and temperature). This means that surfaces of constant density (isopycnals) are also surfaces of constant pressure (isobars). Barotropic instability can form in different ways. Two examples are: an interaction between the fluid flow and the bathymetry or topography of the domain, and frontal instabilities (which may also lead to baroclinic instabilities). These instabilities are not dependent on the density and might even occur when the density of the fluid is constant. Instead, most of the instabilities are caused by a shear on the flow, as can be seen in Figure 1. This shear in the velocity field induces a vertical and horizontal vorticity within the flow. As a result, there is upwelling on the right of the flow and downwelling on the left. This situation might lead to a barotropically unstable flow. The eddies that form alternatingly on both sides of the flow are part of this instability.
Another way to achieve this instability is to displace the Rossby waves in the horizontal direction (see Figure 2). This leads to a transfer of kinetic energy (not potential energy) from the mean flow towards the small perturbations (the eddies). The Rayleigh–Kuo criterion states that the gradient of the absolute vorticity should change sign within the domain. In the example of the shear induced eddies on the right, this means that the second derivative of the flow in the cross-flow direction, should be zero somewhere. This happens in the centre of the eddies, where the acceleration of the flow perpendicular to the flow changes direction.
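Stated compactly (a summary consistent with the derivation below, in which $U(y)$ denotes the mean zonal flow and $\beta$ the meridional gradient of the planetary vorticity), the necessary condition is that the quantity

$$\beta - \frac{\partial^2 U}{\partial y^2}$$

— the meridional gradient of the mean absolute vorticity — changes sign somewhere within the flow domain.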
Examples
The presence of these instabilities in a rotating fluid have been observed in laboratory experiments. The settings of the experiment were based on the conditions in the Gulf Stream and showed that within the ocean currents such as the Gulf Stream, it is possible for barotropic instabilities to occur. But barotropic instabilities were also observed in other Western Boundary Currents (WBC). In the Agulhas current, the barotropic instability leads to ring shedding. The Agulhas current retroflects (turns back) near the coast of South Africa. At this same location, some anti-cyclonic rings of warm water escape from the mean current and travel along the coast of Africa. The formation of these rings is a manifestation of a barotropic instability.
Derivation
The derivation of the Rayleigh–Kuo criterion was first written down by Hsiao-Lan Kuo in his 1949 paper 'Dynamic instability of two-dimensional nondivergent flow in a barotropic atmosphere'. This derivation is repeated and simplified below.
First, the assumptions made by Hsiao-Lan Kuo are discussed. Second, the Rayleigh equation is derived as the starting point for the Rayleigh–Kuo criterion. By integrating this equation and filling in the boundary conditions, the Kuo criterion can be obtained.
Assumptions
In order to derive the Rayleigh–Kuo criterion, some assumptions are made on the fluid's properties. We consider a nondivergent, two-dimensional barotropic fluid. The fluid has a mean zonal flow $U(y)$ which can vary in the meridional direction. On this mean flow, some small perturbations are imposed in both the zonal and meridional direction: $u'$ and $v'$. The perturbations need to be small in order to linearize the vorticity equation. Vertical motion and divergence and convergence of the fluid are neglected. When taking these factors into account, a similar result would have been obtained with only a small shift in the position of the criterion within the velocity profile.
The derivation of the Kuo criterion will be done within the meridionally bounded domain $y_1 \le y \le y_2$. On the northern and southern boundaries of this domain, the meridional flow velocity is zero.
Rayleigh Equation
Barotropic vorticity equation
To derive the Rayleigh equation for a barotropic fluid, the barotropic vorticity equation is used. This equation assumes that the absolute vorticity is conserved: $\frac{D\eta}{Dt} = 0$, where $\frac{D}{Dt}$ is the material derivative. The absolute vorticity is the relative vorticity plus the planetary vorticity: $\eta = \zeta + f$. The relative vorticity, $\zeta$, is the rotation of the fluid with respect to the Earth. The planetary vorticity (also called the Coriolis frequency), $f$, is the vorticity of a parcel induced by the rotation of the Earth. When applying the beta-plane approximation $f = f_0 + \beta y$ for the planetary vorticity, the conservation of absolute vorticity looks like:

$$\frac{D}{Dt}\left(\zeta + f_0 + \beta y\right) = 0.$$
The relative vorticity is defined as $\zeta = \frac{\partial v}{\partial x} - \frac{\partial u}{\partial y}$. Since the flow field consists of a mean flow with small perturbations, it can be written as $u = U(y) + u'$ and $v = v'$, with perturbation vorticity $\zeta' = \frac{\partial v'}{\partial x} - \frac{\partial u'}{\partial y}$. This formulation is used in the vorticity equation:

$$\frac{\partial \zeta'}{\partial t} + \left(U + u'\right)\frac{\partial \zeta'}{\partial x} + v'\frac{\partial \zeta'}{\partial y} + v'\left(\beta - \frac{\partial^2 U}{\partial y^2}\right) = 0.$$
Here, $u$ and $v$ are the zonal and meridional components of the flow and $\zeta'$ is the relative vorticity induced by the perturbations on the flow ($u'$ and $v'$). $U$ is the mean zonal flow and $\beta$ is the derivative of the planetary vorticity with respect to $y$.
Linearization
A zonal mean flow with small perturbations was assumed, $u = U(y) + u'$, and a meridional flow with a zero mean, $v = v'$. Since it was assumed that the perturbations are small, a linearization can be performed on the barotropic vorticity equation above, ignoring all the non-linear terms (terms where two or more small variables, i.e. $u'$, $v'$ and $\zeta'$, are multiplied with one another). Also the derivative of $U$ in the zonal direction, the time derivative of the mean flow and the time derivative of the planetary vorticity are zero. This results in a simplified equation:

$$\frac{\partial \zeta'}{\partial t} + U\frac{\partial \zeta'}{\partial x} + v'\left(\beta - \frac{\partial^2 U}{\partial y^2}\right) = 0.$$
With $\zeta' = \frac{\partial v'}{\partial x} - \frac{\partial u'}{\partial y}$ as defined above, and $u'$ and $v'$ the small perturbations in the zonal and meridional components of the flow.
Stream function
To find the solution to the linearized equation, a stream function was introduced by Lord Rayleigh for the perturbations of the flow velocity:

$$u' = -\frac{\partial \psi}{\partial y}, \qquad v' = \frac{\partial \psi}{\partial x}.$$
These new definitions of the stream function are used to rewrite the linearized barotropic vorticity equation:

$$\left(\frac{\partial}{\partial t} + U\frac{\partial}{\partial x}\right)\nabla^2\psi + \left(\beta - \frac{\partial^2 U}{\partial y^2}\right)\frac{\partial \psi}{\partial x} = 0.$$
Here, $\frac{\partial^2 U}{\partial y^2}$ is the second derivative of $U$ with respect to $y$. To solve this equation for the stream function, a wave-like solution was proposed by Rayleigh which reads $\psi = \phi(y)\, e^{ik(x - ct)}$. The amplitude $\phi$ may be a complex number, $k$ is the wave number which is a real number, and $c$ is the phase velocity which may be complex as well. Inserting this proposed solution leads us to the equation which is known as Rayleigh's equation:

$$\left(U - c\right)\left(\frac{\partial^2 \phi}{\partial y^2} - k^2\phi\right) + \left(\beta - \frac{\partial^2 U}{\partial y^2}\right)\phi = 0.$$
To get to this equation, in the last step it was used that $k$ cannot be zero and neither can the exponential, which means that the term in the square brackets needs to be zero. The symbol $\frac{\partial^2 \phi}{\partial y^2}$ denotes the second derivative of the amplitude of the stream function with respect to $y$. This last equation, known as Rayleigh's equation, is a linear ordinary differential equation that is very difficult to solve explicitly. This is why Hsiao-Lan Kuo came up with a stability criterion for this problem without actually solving it.
Kuo Criterion
Instead of solving Rayleigh's equation, Hsiao-Lan Kuo came up with a necessary stability condition which had to be met in order for the fluid to be able to get unstable. To get to this criterion, Rayleigh's equation was rewritten and the boundary conditions of the flow field are used.
The first step is to divide Rayleigh's equation by $(U - c)$ and to multiply the equation by the complex conjugate of the amplitude, $\phi^*$:

$$\left(\frac{\partial^2 \phi}{\partial y^2} - k^2\phi\right)\phi^* + \left(\beta - \frac{\partial^2 U}{\partial y^2}\right)\frac{\left(U - c_r\right) + i c_i}{\left|U - c\right|^2}\,|\phi|^2 = 0.$$
In the last step, the denominator $(U - c)$ is multiplied with its complex conjugate, so that the following equality is used: $\frac{1}{U - c} = \frac{(U - c_r) + i c_i}{|U - c|^2}$, where $c = c_r + i c_i$. For the solution of Rayleigh's equation to exist, both the real and imaginary part of the equation above need to be equal to zero.
Boundary conditions
To get to the Kuo criterion, the imaginary part is integrated over the domain ($y_1 \le y \le y_2$):

$$\int_{y_1}^{y_2} \mathrm{Im}\!\left[\frac{\partial^2 \phi}{\partial y^2}\,\phi^*\right] dy \;+\; c_i \int_{y_1}^{y_2} \left(\beta - \frac{\partial^2 U}{\partial y^2}\right)\frac{|\phi|^2}{|U - c|^2}\, dy = 0.$$

The stream function at the boundaries of the domain is zero, $\phi(y_1) = \phi(y_2) = 0$, as already stated in the assumptions. The meridional flow must vanish at the boundaries of the domain, which leads to a constant stream function along each boundary; this constant is set to zero for convenience.
The first integral can be solved by integration by parts:

$$\int_{y_1}^{y_2} \mathrm{Im}\!\left[\frac{\partial^2 \phi}{\partial y^2}\,\phi^*\right] dy = \mathrm{Im}\!\left[\left[\frac{\partial \phi}{\partial y}\,\phi^*\right]_{y_1}^{y_2} - \int_{y_1}^{y_2} \left|\frac{\partial \phi}{\partial y}\right|^2 dy\right] = 0,$$

since the boundary term vanishes ($\phi = 0$ at the boundaries) and the remaining integral is purely real.
So the first integral is equal to zero. This means that the second integral should also be zero, making it possible to solve this integral numerically.
When $c_i$ is zero, we are dealing with a stable amplitude of the solution, meaning that the solution is stable. We are looking for an unstable situation, so $c_i$ should be non-zero and the second integral itself should be zero. Since the fraction $\frac{|\phi|^2}{|U - c|^2}$ in front of $\left(\beta - \frac{\partial^2 U}{\partial y^2}\right)$ is non-negative, this leads to the conclusion that $\beta - \frac{\partial^2 U}{\partial y^2}$ should be zero, and change sign, somewhere within the domain. This leads to the final formulation, the Kuo criterion:

$$\beta - \frac{\partial^2 U}{\partial y^2} = 0 \quad \text{for some } y_1 < y < y_2.$$
Here, $U$ is the mean zonal flow and $\beta$ is the derivative of the planetary vorticity with respect to $y$.
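As a quick numerical illustration of how the criterion can be checked (a minimal sketch; the jet profile, its amplitude and width, and the value of $\beta$ below are assumptions, not taken from the article), one can evaluate $\beta - \partial^2 U/\partial y^2$ for a given profile $U(y)$ and test whether it changes sign:

```python
import numpy as np

# Minimal sketch: test the necessary Rayleigh-Kuo condition for an idealised
# zonal jet U(y). All numerical values are illustrative assumptions.
beta = 2e-11                    # planetary vorticity gradient [1/(m s)], mid-latitude magnitude
L = 1e5                         # jet half-width [m]
U0 = 1.0                        # jet maximum speed [m/s]

y = np.linspace(-5 * L, 5 * L, 2001)
U = U0 / np.cosh(y / L) ** 2    # smooth jet profile

d2U_dy2 = np.gradient(np.gradient(U, y), y)   # second derivative of U with respect to y
q_y = beta - d2U_dy2                          # meridional gradient of absolute vorticity

# Necessary condition for barotropic instability: q_y changes sign in the domain.
print("Kuo criterion satisfied:", bool(np.any(q_y > 0) and np.any(q_y < 0)))
```

For this narrow, fast jet the curvature term dominates $\beta$ near the jet core, so the gradient changes sign and the necessary condition is met; a broad, weak jet would leave $\beta$ dominant everywhere and the flow would be stable.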
References
Fluid dynamics | Rayleigh–Kuo criterion | [
"Chemistry",
"Engineering"
] | 2,090 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
57,901,226 | https://en.wikipedia.org/wiki/Harold%20P.%20Eubank | Harold Porter Eubank (23 October 1924 – 23 March 2006, in Kilmarnock, Virginia) was an American physicist, specializing in magnetic fusion energy research.
Eubank grew up in rural Virginia and then in WW II served in the U.S. Army, receiving a Bronze Star. He received in 1948 a B.S. in physics from the College of William and Mary, in 1950 an M.S. from Syracuse University, and in 1953 a Ph.D. from Brown University. He was an assistant professor at Brown University until 1959. From 1959 to 1985 Eubank was a research physicist at the Princeton Plasma Physics Laboratory (PPPL). He headed neutral beam research at PPPL and was one of the world's leading experts on high temperature plasmas heated by neutral beams. In 1977 he was the chair of the Division of Plasma Physics at the American Physical Society.
Eubank published more than 100 papers and spoke frequently at scientific meetings in the U.S. and internationally. Upon his death he was survived by his widow, two sons, a daughter, two granddaughters, two step-children, and his first wife.
Awards and honors
1975 — elected a Fellow of the American Physical Society
1981 — Distinguished Associate Award from the United States Department of Energy
1982 — Elliott Cresson Medal and a Life Fellow Membership from the Franklin Institute in Philadelphia
References
1924 births
2006 deaths
20th-century American physicists
College of William & Mary alumni
Syracuse University alumni
Brown University alumni
United States Department of Energy National Laboratories personnel
Fellows of the American Physical Society
Plasma physicists
United States Army personnel of World War II | Harold P. Eubank | [
"Physics"
] | 325 | [
"Plasma physicists",
"Plasma physics"
] |
57,908,292 | https://en.wikipedia.org/wiki/Frances%20Pleasonton | Frances Pleasonton (1912–1990) was a Particle Physicist at the Oak Ridge National Laboratory. She was an active teacher and researcher, and a member of the team who first demonstrated neutron decay in 1951.
Early life and education
Pleasonton earned her bachelor's degree at Bryn Mawr College. She was an editor of the Bryn Mawr College yearbook. She went on to teach at Winsor School, Girls Latin School of Chicago and Brearley School. She returned to Bryn Mawr College for her master's degree, working as a warden at Pembroke East, and graduated in 1943. She was demonstrator-elect in physics and took a leave of absence for government service in 1942. During her master's degree she identified the crystal structure of Rochelle salt.
Research
Pleasonton was an active researcher in neutron decay. There were several attempts to measure neutron half-life before the second world war, all of which failed due to the lack of availability of intense neutron sources. Arthur Snell and Leonard Miller built the Oak Ridge Graphite Reactor, which could focus beams of neutrons and allow scientists to observe their decay. They measured the half-life of a neutron in 1951. Pleasonton was supported by the United States Atomic Energy Commission and published broadly. In 1958 they examined the decay of helium-6, Pleasonton and Snell monitoring the directions of neutrinos and electrons. This result confirmed the electron-neutrino theory of beta decay. In 1973 she authored several sections of the report for the Nuclear Regulatory Commission. At Oak Ridge National Laboratory, Pleasonton's laboratory was visited by the Queen of Greece and the King of Jordan. Pleasonton went on to study the ionisation of xenon x-rays.
Pleasonton remained in Tennessee after her retirement from Oak Ridge National Laboratory and was involved in citizens groups to protect the environment.
References
1912 births
1990 deaths
Particle physicists
20th-century American physicists
Bryn Mawr College alumni
Bryn Mawr College faculty
American women physicists
20th-century American women scientists
Oak Ridge National Laboratory people
American women academics | Frances Pleasonton | [
"Physics"
] | 426 | [
"Particle physicists",
"Particle physics"
] |
70,355,338 | https://en.wikipedia.org/wiki/List%20of%20resistors | A resistor is a passive two-terminal electrical component that implements electrical resistance as a circuit element. In electronic circuits, resistors are used to reduce current flow, adjust signal levels, to divide voltages, bias active elements, and terminate transmission lines, among other uses. High-power resistors that can dissipate many watts of electrical power as heat may be used as part of motor controls, in power distribution systems, or as test loads for generators.
Fixed resistors have resistances that only change slightly with temperature, time or operating voltage. Variable resistors can be used to adjust circuit elements (such as a volume control or a lamp dimmer), or as sensing devices for heat, light, humidity, force, or chemical activity.
Resistors are common elements of electrical networks and electronic circuits and are ubiquitous in electronic equipment. Practical resistors as discrete components can be composed of various compounds and forms. Resistors are also implemented within integrated circuits.
Lead arrangements
Through-hole components typically have "leads" (pronounced /leedz/) leaving the body "axially", that is, on a line parallel with the part's longest axis. Others have leads coming off their body "radially" instead. Other components may be SMT (surface mount technology), while high power resistors may have one of their leads designed into the heat sink.
Fixed resistors
Carbon composition
Carbon composition resistors (CCR) consist of a solid cylindrical resistive element with embedded wire leads or metal end caps to which the lead wires are attached. The body of the resistor is protected with paint or plastic. Early 20th-century carbon composition resistors had uninsulated bodies; the lead wires were wrapped around the ends of the resistance element rod and soldered. The completed resistor was painted for color-coding of its value.
The resistive element in carbon composition resistors is made from a mixture of finely powdered carbon and an insulating material, usually ceramic. A resin holds the mixture together. The resistance is determined by the ratio of the fill material (the powdered ceramic) to the carbon. Higher concentrations of carbon, which is a good conductor, result in lower resistances. Carbon composition resistors were commonly used in the 1960s and earlier, but are not popular for general use now as other types have better specifications, such as tolerance, voltage dependence, and stress. Carbon composition resistors change value when stressed with over-voltages. Moreover, if internal moisture content, such as from exposure for some length of time to a humid environment, is significant, soldering heat creates a non-reversible change in resistance value. Carbon composition resistors have poor stability with time and were consequently factory sorted to, at best, only 5% tolerance. These resistors are non-inductive, which provides benefits when used in voltage pulse reduction and surge protection applications. Carbon composition resistors have higher capability to withstand overload relative to the component's size.
Carbon composition resistors are still available, but relatively expensive. Values ranged from fractions of an ohm to 22 megohms. Due to their high price, these resistors are no longer used in most applications. However, they are used in power supplies and welding controls. They are also in demand for repair of vintage electronic equipment where authenticity is a factor.
Carbon pile
A carbon pile resistor is made of a stack of carbon disks compressed between two metal contact plates. Adjusting the clamping pressure changes the resistance between the plates. These resistors are used when an adjustable load is required, such as in testing automotive batteries or radio transmitters. A carbon pile resistor can also be used as a speed control for small motors in household appliances (sewing machines, hand-held mixers) with ratings up to a few hundred watts. A carbon pile resistor can be incorporated in automatic voltage regulators for generators, where the carbon pile controls the field current to maintain relatively constant voltage. This principle is also applied in the carbon microphone.
Carbon film
In manufacturing carbon film resistors, a carbon film is deposited on an insulating substrate, and a helix is cut in it to create a long, narrow resistive path. Varying shapes, coupled with the resistivity of amorphous carbon (ranging from 500 to 800 μΩ·m), can provide a wide range of resistance values. Carbon film resistors feature lower noise compared to carbon composition resistors because of the precise distribution of the pure graphite without binding. Carbon film resistors feature a power rating range of 0.125 W to 5 W at 70 °C. Resistances available range from 1 ohm to 10 megohms. The carbon film resistor has an operating temperature range of −55 °C to 155 °C. It has a maximum working voltage range of 200 to 600 volts. Special carbon film resistors are used in applications requiring high pulse stability.
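The resistance obtained from the helical path follows the usual $R = \rho L / A$ relation. The sketch below illustrates the order of magnitude; the film dimensions are assumptions chosen only for illustration, not figures from any datasheet:

```python
# Illustrative only: resistance of the helical path cut into a carbon film, R = rho * L / A.
rho = 650e-6        # resistivity of amorphous carbon [ohm*m], within the quoted 500-800 uOhm*m range
length = 0.05       # unwound length of the helical path [m] (assumed)
thickness = 10e-6   # film thickness [m] (assumed)
width = 0.5e-3      # width of the path between the helix cuts [m] (assumed)

area = thickness * width            # cross-sectional area of the conducting path [m^2]
resistance = rho * length / area    # [ohm]
print(f"R = {resistance:.0f} ohm")  # about 6.5 kohm for these assumed dimensions
```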
Printed carbon resistors
Carbon composition resistors can be printed directly onto printed circuit board (PCB) substrates as part of the PCB manufacturing process. Although this technique is more common on hybrid PCB modules, it can also be used on standard fibreglass PCBs. Tolerances are typically quite large and can be in the order of 30%. A typical application would be non-critical pull-up resistors.
Thick and thin film
Thick film resistors became popular during the 1970s, and most SMD (surface mount device) resistors today are of this type. The resistive element of thick films is 1000 times thicker than thin films, but the principal difference is how the film is applied to the cylinder (axial resistors) or the surface (SMD resistors).
Thin film resistors are made by sputtering (a method of vacuum deposition) the resistive material onto an insulating substrate. The film is then etched in a similar manner to the old (subtractive) process for making printed circuit boards; that is, the surface is coated with a photo-sensitive material, covered by a pattern film, irradiated with ultraviolet light, and then the exposed photo-sensitive coating is developed, and underlying thin film is etched away.
Thick film resistors are manufactured using screen and stencil printing processes.
Because the time during which the sputtering is performed can be controlled, the thickness of the thin film can be accurately controlled. The type of material also varies, consisting of one or more ceramic (cermet) conductors such as tantalum nitride (TaN), ruthenium oxide (RuO₂), lead oxide (PbO), bismuth ruthenate (Bi₂Ru₂O₇), nickel chromium (NiCr), or bismuth iridate (Bi₂Ir₂O₇).
The resistance of both thin and thick film resistors after manufacture is not highly accurate; they are usually trimmed to an accurate value by abrasive or laser trimming. Thin film resistors are usually specified with tolerances of 1% and 5%, and with temperature coefficients of 5 to 50 ppm/K. They also have much lower noise levels, on the level of 10–100 times less than thick film resistors. Thick film resistors may use the same conductive ceramics, but they are mixed with sintered (powdered) glass and a carrier liquid so that the composite can be screen-printed. This composite of glass and conductive ceramic (cermet) material is then fused (baked) in an oven at about 850 °C.
When first manufactured, thick film resistors had tolerances of 5%, but standard tolerances have improved to 2% or 1% in the last few decades. Temperature coefficients of thick film resistors are typically ±200 or ±250 ppm/K; a 40-kelvin (70 °F) temperature change can change the resistance by 1%.
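The quoted ~1% shift follows directly from the definition of the temperature coefficient, $\Delta R / R = \mathrm{TCR} \times \Delta T$. A quick check with the values given above:

```python
# Quick check of the figure quoted above: 250 ppm/K over a 40 K change gives about 1%.
tcr = 250e-6            # temperature coefficient of resistance [1/K]
delta_t = 40            # temperature change [K]
relative_change = tcr * delta_t
print(f"delta_R/R = {relative_change:.1%}")   # 1.0%
```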
Thin film resistors are usually far more expensive than thick film resistors. For example, SMD thin film resistors, with 0.5% tolerances and with 25 ppm/K temperature coefficients, when bought in full size reel quantities, are about twice the cost of 1%, 250 ppm/K thick film resistors.
Metal film
A common type of axial-leaded resistor today is the metal-film resistor. Metal Electrode Leadless Face (MELF) resistors often use the same technology.
Metal film resistors are usually coated with nickel chromium (NiCr), but might be coated with any of the cermet materials listed above for thin film resistors. Unlike thin film resistors, the material may be applied using different techniques than sputtering (though this is one technique used). The resistance value is determined by cutting a helix through the coating rather than by etching, similar to the way carbon resistors are made. The result is a reasonable tolerance (0.5%, 1%, or 2%) and a temperature coefficient that is generally between 50 and 100 ppm/K. Metal film resistors possess good noise characteristics and low non-linearity due to a low voltage coefficient. They are also beneficial due to long-term stability.
Metal oxide film
Metal-oxide film resistors are made of metal oxides which results in a higher operating temperature and greater stability and reliability than metal film. They are used in applications with high endurance demands.
Wire wound
Wirewound resistors are commonly made by winding a metal wire, usually nichrome, around a ceramic, plastic, or fiberglass core. The ends of the wire are soldered or welded to two caps or rings, attached to the ends of the core. The assembly is protected with a layer of paint, molded plastic, or an enamel coating baked at high temperature. These resistors are designed to withstand unusually high temperatures of up to 450 °C. Wire leads in low power wirewound resistors are usually between 0.6 and 0.8 mm in diameter and tinned for ease of soldering. For higher power wire-wound resistors, either a ceramic outer case or an aluminum outer case on top of an insulating layer is used. If the outer case is ceramic, such resistors are sometimes described as "cement" resistors, though they do not actually contain any traditional cement. The aluminum-cased types are designed to be attached to a heat sink to dissipate the heat; the rated power is dependent on being used with a suitable heat sink, e.g., a 50 W power rated resistor overheats at a fraction of the power dissipation if not used with a heat sink. Large wirewound resistors may be rated for 1,000 watts or more.
Because wirewound resistors are coils they have more undesirable inductance than other types of resistor. However, winding the wire in sections with alternately reversed direction can minimize inductance. Other techniques employ bifilar winding, or a flat thin former (to reduce cross-section area of the coil). For the most demanding circuits, resistors with Ayrton–Perry winding are used.
Applications of wirewound resistors are similar to those of composition resistors with the exception of high frequency applications. The high frequency response of wirewound resistors is substantially worse than that of a composition resistor.
Foil resistor
In 1960, Felix Zandman and Sidney J. Stein presented a development of resistor film of very high stability.
The primary resistance element of a foil resistor is a chromium nickel alloy foil several micrometers thick. Chromium nickel alloys are characterized by having a large electrical resistivity (about 58 times that of copper), a small temperature coefficient and high resistance to oxidation. Examples are Chromel A and Nichrome V, whose typical composition is 80 Ni and 20 Cr, with a melting point of 1420 °C. When iron is added, the chromium nickel alloy becomes more ductile. Nichrome and Chromel C are examples of alloys containing iron. The typical composition of Nichrome is 60 Ni, 12 Cr, 26 Fe, 2 Mn, and of Chromel C, 64 Ni, 11 Cr, 25 Fe. The melting temperatures of these alloys are 1350 °C and 1390 °C, respectively.
Since their introduction in the 1960s, foil resistors have had the best precision and stability of any resistor available. One of the important parameters of stability is the temperature coefficient of resistance (TCR). The TCR of foil resistors is extremely low, and has been further improved over the years. One range of ultra-precision foil resistors offers a TCR of 0.14 ppm/°C, tolerance ±0.005%, long-term stability (1 year) 25 ppm, (3 years) 50 ppm (further improved 5-fold by hermetic sealing), stability under load (2000 hours) 0.03%, thermal EMF 0.1 μV/°C, noise −42 dB, voltage coefficient 0.1 ppm/V, inductance 0.08 μH, capacitance 0.5 pF.
The thermal stability of this type of resistor also has to do with the opposing effects of the metal's electrical resistance increasing with temperature, and being reduced by thermal expansion leading to an increase in thickness of the foil, whose other dimensions are constrained by a ceramic substrate.
Ammeter shunts
An ammeter shunt is a special type of current-sensing resistor, having four terminals and a value in milliohms or even micro-ohms. Current-measuring instruments, by themselves, can usually accept only limited currents. To measure high currents, the current passes through the shunt across which the voltage drop is measured and interpreted as current. A typical shunt consists of two solid metal blocks, sometimes brass, mounted on an insulating base. Between the blocks, and soldered or brazed to them, are one or more strips of low temperature coefficient of resistance (TCR) manganin alloy. Large bolts threaded into the blocks make the current connections, while much smaller screws provide volt meter connections. Shunts are rated by full-scale current, and often have a voltage drop of 50 mV at rated current. Such meters are adapted to the shunt full current rating by using an appropriately marked dial face; no change need to be made to the other parts of the meter.
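Interpreting a shunt reading is a simple application of Ohm's law: the rated voltage drop at rated current fixes the shunt resistance, and the measured drop is scaled back to a current. A minimal sketch, using assumed example ratings and readings:

```python
# Illustrative shunt calculation (ratings and readings are assumed example values).
rated_current = 100.0                      # [A]
rated_drop = 0.050                         # [V] drop at rated current (the common 50 mV class)
r_shunt = rated_drop / rated_current       # 0.0005 ohm

measured_drop = 0.032                      # [V] measured across the shunt
current = measured_drop / r_shunt          # [A]
print(f"Shunt: {r_shunt * 1e3:.2f} mohm, measured current: {current:.1f} A")  # 0.50 mohm, 64.0 A
```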
Grid resistor
In heavy-duty industrial high-current applications, a grid resistor is a large convection-cooled lattice of stamped metal alloy strips connected in rows between two electrodes. Such industrial grade resistors can be as large as a refrigerator; some designs can handle over 500 amperes of current, with a range of resistances extending lower than 0.04 ohms. They are used in applications such as dynamic braking and load banking for locomotives and trams, neutral grounding for industrial AC distribution, control loads for cranes and heavy equipment, load testing of generators and harmonic filtering for electric substations.
The term grid resistor is sometimes used to describe a resistor of any type connected to the control grid of a vacuum tube. This is not a resistor technology; it is an electronic circuit topology.
Cermet oxide resistors
Fusible resistors
Special varieties
Cermet
Phenolic
Tantalum
Water resistor
Variable resistors
Adjustable resistors
A resistor may have one or more fixed tapping points so that the resistance can be changed by moving the connecting wires to different terminals. Some wirewound power resistors have a tapping point that can slide along the resistance element, allowing a larger or smaller part of the resistance to be used.
Where continuous adjustment of the resistance value during operation of equipment is required, the sliding resistance tap can be connected to a knob accessible to an operator. Such a device is called a rheostat and has two terminals.
Potentiometers
A potentiometer (colloquially, pot) is a three-terminal resistor with a continuously adjustable tapping point controlled by rotation of a shaft or knob or by a linear slider. The name potentiometer comes from its function as an adjustable voltage divider to provide a variable potential at the terminal connected to the tapping point. Volume control in an audio device is a common application of a potentiometer. A typical low power potentiometer (see drawing) is constructed of a flat resistance element (B) of carbon composition, metal film, or conductive plastic, with a springy phosphor bronze wiper contact (C) which moves along the surface. An alternate construction is resistance wire wound on a form, with the wiper sliding axially along the coil. These have lower resolution, since as the wiper moves the resistance changes in steps equal to the resistance of a single turn.
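The voltage-divider behaviour described above can be written as $V_\text{out} = V_\text{in} \times x$ for an ideal, unloaded wiper at fraction $x$ of the track. A minimal sketch with assumed values:

```python
# Illustrative only: an ideal, unloaded potentiometer used as an adjustable voltage divider.
v_in = 9.0      # voltage across the full resistance track [V] (assumed)
x = 0.25        # wiper position as a fraction of full travel, 0.0 .. 1.0 (assumed)
v_out = v_in * x
print(f"Wiper voltage: {v_out:.2f} V")   # 2.25 V
```

A real load connected to the wiper draws current and pulls the output below this ideal value, which is why low-impedance loads are usually buffered.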
High-resolution multiturn potentiometers are used in precision applications. These have wire-wound resistance elements typically wound on a helical mandrel, with the wiper moving on a helical track as the control is turned, making continuous contact with the wire. Some include a conductive-plastic resistance coating over the wire to improve resolution. These typically offer ten turns of their shafts to cover their full range. They are usually set with dials that include a simple turns counter and a graduated dial, and can typically achieve three-digit resolution. Electronic analog computers used them in quantity for setting coefficients, and delayed-sweep oscilloscopes of recent decades included one on their panels.
Resistance decade boxes
A resistance decade box or resistor substitution box is a unit containing resistors of many values, with one or more mechanical switches which allow any one of various discrete resistances offered by the box to be dialed in. Usually the resistance is accurate to high precision, ranging from laboratory/calibration grade accuracy of 20 parts per million, to field grade at 1%. Inexpensive boxes with lesser accuracy are also available. All types offer a convenient way of selecting and quickly changing a resistance in laboratory, experimental and development work without needing to attach resistors one by one, or even stock each value. The range of resistance provided, the maximum resolution, and the accuracy characterize the box. For example, one box offers resistances from 0 to 100 megohms, maximum resolution 0.1 ohm, accuracy 0.1%.
Photo resistors
Thermistors
Varistors
Water resistors
Special devices
There are various devices whose resistance changes with various quantities. The resistance of NTC thermistors exhibits a strong negative temperature coefficient, making them useful for measuring temperatures. Since their resistance can be large until they are allowed to heat up due to the passage of current, they are also commonly used to prevent excessive current surges when equipment is powered on. Similarly, the resistance of a humistor varies with humidity. One sort of photodetector, the photoresistor, has a resistance which varies with illumination.
The strain gauge, invented by Edward E. Simmons and Arthur C. Ruge in 1938, is a type of resistor that changes value with applied strain. A single resistor may be used, or a pair (half bridge), or four resistors connected in a Wheatstone bridge configuration. The strain resistor is bonded with adhesive to an object that is subjected to mechanical strain. With the strain gauge and a filter, amplifier, and analog/digital converter, the strain on an object can be measured.
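As an illustration of the measurement chain described above, a quarter-bridge arrangement converts the gauge's fractional resistance change into a small voltage; for small strains $V_\text{out} \approx V_\text{ex} \cdot GF \cdot \varepsilon / 4$. The gauge factor and voltages below are assumed example values, not figures from the article:

```python
# Illustrative quarter-bridge strain calculation (all numbers are assumed example values).
v_ex = 5.0           # bridge excitation voltage [V]
gauge_factor = 2.0   # typical order of magnitude for metal-foil gauges
v_out = 1.25e-3      # measured bridge output [V]

strain = 4 * v_out / (v_ex * gauge_factor)            # small-strain approximation
print(f"strain = {strain * 1e6:.0f} microstrain")     # 500 microstrain
```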
A related but more recent invention uses a Quantum Tunnelling Composite to sense mechanical stress. It passes a current whose magnitude can vary by a factor of 10¹² in response to changes in applied pressure.
References
Electrical components | List of resistors | [
"Technology",
"Engineering"
] | 3,982 | [
"Electrical engineering",
"Electrical components",
"Components"
] |
70,356,927 | https://en.wikipedia.org/wiki/Sumykhimprom%20ammonia%20leak | On 21 March 2022, during the battle of Sumy, a Russian airstrike damaged one of the ammonia tanks at the Sumykhimprom plant, contaminating the surrounding land, including the villages of Novoselytsia and Verkhnya Syrovatka. Due to the direction of the wind, the city of Sumy was largely unaffected despite its proximity to the leak.
Background
Two days prior to the leak Mikhail Mizintsev, the Chief of Russia's National Defense Management Center claimed that Ukrainian nationalists were plotting a false flag chemical attack in Sumy. Mizintsev alleged on 19 March that mines had been placed in chemical storage facilities at the plant to poison residents in case of Russian troops advancement into the city. He also alleged that a secondary school was similarly sabotaged in Kotlyarovo, Mykolaiv Oblast.
Leak
The leak was first reported at about 4:30 am local time on 21 March 2022 at the Sumykhimprom chemical plant, located in the suburbs of Sumy.
References
Ammonia
March 2022 events in Ukraine
Battle of Sumy
Northern front of the Russian invasion of Ukraine
Environmental disasters in Ukraine
Chemical disasters
2022 disasters in Ukraine
Pollution events in 2022
Russian airstrikes during the Russian invasion of Ukraine
2022 airstrikes
2022 industrial disasters
Industrial accidents and incidents in Ukraine | Sumykhimprom ammonia leak | [
"Chemistry"
] | 274 | [
"Chemical accident",
"Chemical disasters"
] |
70,358,540 | https://en.wikipedia.org/wiki/Angkor%20Wat%20Equinox | The Angkor Wat equinox is a solar phenomenon considered as a hierophany that happens twice a year with spring and autumn equinox, as part of the many astronomical alignments indicative of a "fairly elaborate system of astronomy" and of the Hindu influence in the construction of the vast temple complex of Angkor Wat, in Cambodia.
Description
The sunrise on Angkor Wat during the equinox is such that someone standing in front of the western entrance on the equinox is able to see the sun rising directly over the central lotus tower. In fact, it would be more correct to describe the phenomenon as the exact match of the shadow formed by the sunrise on Angkor Wat's central prang and the western entrance bridge.
Architecture
Like most celestial cities, Angkor Wat contains many astronomically inspired symbols and alignments. Angkor Wat was built by Suryavarman II, literally the Sun-King, during his reign from 1113 to 1150 with "astronomical and cosmic rhythm". It was dedicated as a tribute to Vishnu, a solar deity according to the Rigveda.
In fact, it appears that most of the vast complex of Angkor Wat was determined by the equinox. In the bas-relief at Angkor Wat, the position of the churning pivot would correspond to the position of the spring equinox. The 91 asuras in the south represent the 91 days from equinox to winter solstice, and the 88 northern devas represent the 88 days from equinox to summer solstice. In fact, there are either 88 or 89 devas in the scene, 89 if the deva atop Mount Mandara is counted with the others. There are 88 or 89 days from the spring equinox, counted from the first day of the new year, to the summer solstice.
In fact, the solar alignment is not limited to Angkor Wat, but includes many other temples of the Khmer civilization, as it connects Angkor Wat with other temples on the Ancient Khmer Highway from the West Mebon to the Preah Khan of Kompong Svay.
Interpretation
An eternal reminder of Suryavarman II's ascension to the throne
Scholars theorize that Suryavarman II was crowned sovereign in Angkor Wat during the equinox. The temples' calibrated use of equinox sunrises to highlight the central tower and the bas-relief of the churning of the ocean of milk would have served as an eternal reminder of this king's "ringing in a new golden age."
A solar city
While Angkor is also known as an hydraulic city since Bernard Philippe Groslier, the Angkor Wat equinox manifests how Angkor was also a solar city. According to Eleanor Manikka, "measurements of the temple recorded data, fixed solar and lunar alignments, defined pathways into and out of sanctuaries, and put segments of the temple in precise association with rays of sunlight during the equinox and solstice days". Accordingly, the gigantic representation of the churning of the sea actually works as a calendar: it positions the two solstice days at the extreme north and south, counts the days between them, and measures 54 units for the north- and southbound arcs of the sun and moon, emulating the symbolism on the bridge or in the western entrances, which repeat the 54/54-unit pairs several times.
An ancient Khmer New Year
The spring equinox, which receives such a special treatment at Angkor Wat, evidently marked the onset of the calendar year. However, during the thirteenth century, many years after the reign of Suryavarman II, the Khmer New Year was moved to the fifth lunar month, Chate, which corresponds to mid-April, in order for farmers to have more time to celebrate once the dry season was over. The astrological New Year that was celebrated before then occurred when the constellation of Aries or Ram appeared. This phenomenon once occurred on the vernal equinox on March 21, but because of the precession of the equinoxes, the sun at the vernal equinox is now seen in the constellation of Pisces and enters Aries around April 13 or 14.
Tourism
The solar alignment of equinox at Angkor Wat is attracting a growing number of tourists, in a new trend of tourism connected to solar phenomena, also seen in such places such as Luxor and Vezelay Abbey.
In 2022, Angkor Wat ranked No. 1 as the best place in the world to watch sunrise and sunset, in part because of the Angkor Wat equinox phenomenon.
See also
Orientation of churches
Spring equinox in Teotihuacán
References
Bibliography
Astronomical events of the Solar System
Geography of Cambodia
Khmer folklore
March
September
Solar alignment
Solar phenomena
Spring equinox
Autumn equinox
Angkor Wat | Angkor Wat Equinox | [
"Physics",
"Astronomy"
] | 994 | [
"Physical phenomena",
"Astronomical events",
"Solar phenomena",
"Astronomical events of the Solar System",
"Stellar phenomena",
"Solar System"
] |
70,359,815 | https://en.wikipedia.org/wiki/Gamma%20helix | Gamma helix (or γ-helix) is a type of secondary structure in proteins that has been predicted by Pauling, Corey, and Branson, but has never been observed in natural proteins. The hydrogen bond in this type of helix was predicted to be between the N-H group of one amino acid and the C=O group of the amino acid six residues earlier (or, as described by Pauling, Corey, and Branson, "to the fifth amide group beyond it"). This can also be described as an i + 6 → i bond and would be a continuation of the series (3₁₀ helix, alpha helix, pi helix and gamma helix). This theoretical helix contains 5.1 residues per turn. However, a fully developed gamma helix has characteristics of a structure that has 2.2 amino acid residues per turn, a rise of 2.75 Å per residue, and a pseudo-cyclic (C7) structure closed by an intramolecular H-bond. Depending on the amino acid's side chain (R) involved in this main-chain reversal motif, two stereoisomers can occur with their Cα-substituent located either in the axial or in the equatorial position relative to the H-bonded pseudo-cycle. | Gamma helix | [
References
Protein structural motifs
Helices | Gamma helix | [
"Chemistry",
"Biology"
] | 261 | [
"Biotechnology stubs",
"Protein classification",
"Biochemistry stubs",
"Protein structural motifs",
"Biochemistry"
] |
70,361,275 | https://en.wikipedia.org/wiki/Phosphorus%20dioxide | Phosphorus dioxide (PO2) is a gaseous oxide of phosphorus. It is a free radical that plays a role in the chemiluminescence of phosphorus and phosphine. It is produced when phosphates are heated to high temperatures.
In the ground state the molecule is bent, like nitrogen dioxide, but there is an excited state that is linear.
References
Phosphorus oxides
Free radicals | Phosphorus dioxide | [
"Chemistry",
"Biology"
] | 78 | [
"Inorganic compounds",
"Free radicals",
"Inorganic compound stubs",
"Senescence",
"Biomolecules"
] |
70,363,079 | https://en.wikipedia.org/wiki/3%2C3-Bis%28azidomethyl%29oxetane | 3,3-Bis(azidomethyl)oxetane (BAMO) is an oxetane monomer used in energetic propellant binders and plasticizers. It is frequently used as a copolymer to improve the physical properties of more commonly used polymers and to give them energetic properties.
Preparation
BAMO is made by reacting 3,3-bis(chloromethyl)oxetane (BCMO) with sodium azide in an alkaline solution. Tetrabutylammonium bromide is used as a phase-transfer catalyst in the reaction.
PolyBAMO can be made by mixing boron trifluoride diethyl etherate and BAMO in trimethylolpropane. The polymerization of BAMO destroys the oxetane ring, but the azide groups remain intact.
References
Oxetanes
Monomers
Plasticizers
Organoazides | 3,3-Bis(azidomethyl)oxetane | [
"Chemistry",
"Materials_science"
] | 170 | [
"Polymer stubs",
"Organic chemistry stubs",
"Monomers",
"Polymer chemistry"
] |
70,367,723 | https://en.wikipedia.org/wiki/Fractal%20physiology | Fractal physiology refers to the study of physiological systems using complexity science methods, such as chaos measure, entropy, and fractal dimensions. The underlying assumption is that biological systems are complex and exhibit non-linear patterns of activity, and that characterizing that complexity (using dedicated mathematical approaches) is useful to understand, and make inferences and predictions about the system.
Main Findings
Neurophysiology
Quantification of the complexity of brain activity is used in the characterization of neuropsychiatric diseases and mental states, such as schizophrenia, affective disorders, or neurodegenerative disorders. In particular, diminished EEG complexity is typically associated with increased symptomatology.
Cardiovascular systems
The complexity of Heart Rate Variability is a useful predictor of cardiovascular health.
Software
In Python, NeuroKit provides a comprehensive set of functions for complexity analysis of physiological data. AntroPy implements several measures to quantify the complexity of time-series.
In R, TSEntropies provides methods to quantify the entropy. casnet implements a collection of analytic tools for studying signals recorded from complex adaptive systems.
In MATLAB, The Neurophysiological Biomarker Toolbox (NBT) allows the computation of Detrended fluctuation analysis. EZ Entropy implements the entropy analysis of physiological time-series.
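As an illustration of one of the measures mentioned above, the following is a minimal from-scratch Python sketch of detrended fluctuation analysis (DFA); it is not the NeuroKit, AntroPy, or NBT implementation, just the basic algorithm for a single channel.

```python
import numpy as np

def dfa(signal, window_sizes):
    """Basic detrended fluctuation analysis (DFA) of a 1-D signal.

    Returns the fluctuation F(n) for each window size n; the slope of
    log F(n) versus log n estimates the scaling exponent alpha.
    """
    x = np.asarray(signal, dtype=float)
    y = np.cumsum(x - x.mean())              # integrated (profile) series
    fluctuations = []
    for n in window_sizes:
        n_windows = len(y) // n
        segments = y[:n_windows * n].reshape(n_windows, n)
        t = np.arange(n)
        residuals = []
        for seg in segments:
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear detrending
            residuals.append(np.mean((seg - trend) ** 2))
        fluctuations.append(np.sqrt(np.mean(residuals)))
    return np.array(fluctuations)

# White noise should give an exponent alpha close to 0.5
rng = np.random.default_rng(0)
sizes = np.array([16, 32, 64, 128, 256])
f = dfa(rng.standard_normal(4096), sizes)
alpha = np.polyfit(np.log(sizes), np.log(f), 1)[0]
print(f"estimated scaling exponent alpha = {alpha:.2f}")
```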
See also
Fractal dimension
Entropy
Complex system
References
Fractals
Physiology | Fractal physiology | [
"Mathematics",
"Biology"
] | 285 | [
"Mathematical analysis",
"Functions and mappings",
"Physiology",
"Mathematical objects",
"Fractals",
"Mathematical relations"
] |
70,370,785 | https://en.wikipedia.org/wiki/Flickering%20spectroscopy | Flickering analysis of cellular or membranous structures is a widespread technique for measuring the bending modulus and other properties from the power spectrum of thermal fluctuations.
First demonstrated theoretically by Brochard and Lennon in 1975, flickering spectroscopy has become a widespread technique due to its simplicity and lack of specialised equipment beyond a brightfield microscope. It is used in structures such as red blood cells, giant unilamellar vesicles and other cell-like structures.
Theoretical overview
Considering a quasi-spherical shell subject to thermal undulations according to Langevin dynamics, one can express the time-averaged mean square amplitudes of the fluctuation modes as
where l and m index the fluctuation mode corresponding to the spherical harmonic Y_l^m, σ̄ is the reduced membrane tension, c₀ is the spontaneous curvature and κ is the bending modulus, as defined by the Helfrich Hamiltonian.
Experimental procedure and analysis
The equatorial plane of a cell-like structure can be imaged using phase contrast microscopy to obtain a video showing the fluctuations of the membrane.
On the video, the contours can be found using image analysis algorithms, which can then be used to determine the power spectrum of the fluctuation modes in real space amplitude. This can be used, following the steps above, to obtain relevant parameters such as the bending modulus, which is useful for a number of applications in membrane structure research.
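A schematic Python sketch of the mode analysis described above is given below; it assumes the contours have already been extracted as radius-versus-angle arrays and ignores details (arc-length parametrization, noise correction) that a real flickering-spectroscopy pipeline would need.

```python
import numpy as np

def fluctuation_spectrum(contours):
    """Time-averaged mean square Fourier amplitudes of equatorial contours.

    contours: array of shape (n_frames, n_angles) with the membrane radius
    r(theta) sampled at equally spaced angles for each video frame.
    Returns <|u_q|^2> for each angular mode q, where u = (r - <r>_frame) / R
    is the relative deviation of each contour from its own mean radius.
    """
    r = np.asarray(contours, dtype=float)
    mean_radius = r.mean()
    u = (r - r.mean(axis=1, keepdims=True)) / mean_radius    # relative fluctuations
    modes = np.fft.rfft(u, axis=1) / u.shape[1]               # per-frame Fourier amplitudes
    return (np.abs(modes) ** 2).mean(axis=0)                  # time average per mode

# Synthetic example: a unit circle with a small, randomly modulated mode-3 ripple
rng = np.random.default_rng(1)
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
contours = 1.0 + 0.01 * rng.standard_normal((200, 1)) * np.cos(3 * theta)
spectrum = fluctuation_spectrum(contours)
print("mean square amplitude of mode 3:", spectrum[3])
```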
References
Spectroscopy | Flickering spectroscopy | [
"Physics",
"Chemistry"
] | 277 | [
"Instrumental analysis",
"Molecular physics",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
70,371,363 | https://en.wikipedia.org/wiki/Glass%20formation | A glass is an amorphous solid, completely lacking long-range periodic atomic structure, that exhibits a region of glass transformation. This broad definition means that any material, be it organic, inorganic, or metallic in nature, may form a glass if it exhibits glass transformation behavior. However, prior to 1900 very few non-silicate glasses were known, and the theories developed were consequently heavily influenced by existing observations of silicate melts (compounds containing silicon and oxygen). These theories are grouped under the heading of structural theories of glass formation. In later years many non-silicate glasses were discovered, and it is recognized today that almost any material is capable of forming a glass given the right experimental conditions; the focus has therefore shifted from which materials will form a glass to the conditions under which a particular material will form one. More recent theories focus on the kinetics behind the formation of glass, and these kinetic theories of glass formation have largely replaced the earlier structural theories.
Structural theories of glass formation
Among the first structural theories of glass formation was that developed by Goldschmidt, who stated that oxides of the general formula RnOm will form glasses when the ratio of the ionic radius of the cation to that of the oxygen ion is in the range of 0.2 to 0.4. When this condition is met the cation tends to be bonded to 4 oxygen atoms in tetrahedral coordination. Goldschmidt therefore concluded that only cations with tetrahedral coordination would form glasses on cooling. The conclusion was empirical, and Goldschmidt made no attempt to explain the observation.
The ideas of Goldschmidt were extended by Zachariasen, who attempted to explain why certain coordination numbers would favor glass formation. He noted that silicates which formed glasses, rather than recrystallizing after melting and cooling, formed network structures consisting of tetrahedra joined at all four corners in a non-periodic, non-symmetric manner (unlike crystals, which are periodic and symmetrical). These networks extend in all three dimensions in such a manner that the average behavior of the glass is identical in every direction: the properties of the glass are isotropic. Using this as his basis, Zachariasen concluded that the ability to form a glass depends on the ability to form these networks. He then went on to explain the necessary conditions for forming such a network, which he defined as follows:
An oxygen atom cannot be connected to more than two cations, otherwise the variation in bond angles required to form a non-periodic network cannot be achieved.
The number of oxygen atoms surrounding the cation must be small, either 3 or 4, which was an empirically determined condition based on the fact that the only glasses known at the time were formed from either triangular or tetrahedrally coordinated cations.
At least 3 corners of each polyhedron must be shared in order to yield a three-dimensional structure, and the polyhedra may only be joined at the corners (they do not share edges or faces).
He also stated that the melt must be cooled under appropriate conditions in order for glass formation to occur, anticipating later kinetic theories of glass formation. Other statements of Zachariasen were used as the basis for a class of glass formation models known as random network theory. However, in his original work Zachariasen did not use the term random network, preferring instead "vitreous network", as the structure is not truly random and is constrained by minimum distances between atoms. As a consequence not all inter-nuclear distances are equally probable, and the observed X-ray patterns for glasses are a consequence of a vitreous network.
Other structural theories of glass formation focused on the nature of the bonds between cations and anions. For example, Smekal suggested that only bonds of intermediate character, lying between purely ionic and purely covalent, would allow a melt to form a glass. He suggested this on the basis that ionic bonds lack the directionality required to form a network, while covalent bonds would enforce strict bond angles, preventing the variation required for the formation of a non-periodic network. Stanworth attempted to better quantify this mixed-bond concept and divided oxides into three groups on the basis of the electronegativity of the cation. The groups were as follows:
network formers: cations which form bonds to oxygen with near 50% ionic character; these produce good glasses
intermediates: cations which form slightly more ionic bonds with oxygen; though they cannot form glasses themselves, they can partially replace cations of the network-former class.
modifiers: cations with very low electronegativities which form highly ionic bonds with oxygen; these never act as network formers and can only modify structures created by network formers.
Bond strength was also suggested as an important factor in the formation of glasses. Sun argued that strong bonds were important for the formation of glasses as they prevent the reorganization of the material into a crystal structure during cooling, thereby facilitating glass formation. The bond strength he referred to could be given by the energy required to dissociate an oxide structure in the gaseous phase divided by the number of bonds. Although this model yielded results that were compatible with previous observations, it yielded no new insights into the formation of glass. Rawson argued that Sun ignored the importance of temperature in his model, suggesting that a higher melting point provides more energy for bond disruption whilst a lower melting point affords less. He argued that a material with a low melting temperature and high bond strength would be a better glass former than one with a similar bond strength but a much higher melting point. Although the application of this model to single-cation oxides does little to improve on the results of the Sun model, it does predict the excellent glass-forming ability of boric oxide, and its extension to binary and ternary systems yields the prediction that the ease of glass formation should be improved for compositions near eutectics. This observation has often been made and is dubbed the "liquidus temperature effect". An example of this is the glass formation of the CaO-Al2O3 binary in a region near a eutectic.
Kinetic theories of glass formation
The structural theories of glass formation only consider the relative ease of glass formation. Materials which form glasses under a moderate cooling rate are called good glass formers, those that require a rapid cooling rate are called poor glass formers, and those that require extreme cooling rates are referred to as non-glass formers. As it is now recognized that nearly any material is capable of forming a glass given the correct experimental conditions, the focus of kinetic theories of glass formation is to identify how fast a system must be cooled to form a glass and avoid detectable crystallization, rather than whether or not a system will form a glass.
References
Glass | Glass formation | [
"Physics",
"Chemistry"
] | 1,341 | [
"Homogeneous chemical mixtures",
"Amorphous solids",
"Unsolved problems in physics",
"Glass"
] |
60,996,879 | https://en.wikipedia.org/wiki/MIQE | The Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) guidelines are a set of protocols for conducting and reporting quantitative real-time PCR experiments and data, as devised by Bustin et al. in 2009. They were devised after a paper was published in 2002 that claimed to detect measles virus in children with autism through the use of RT-qPCR, but the results proved to be completely unreproducible by other scientists. The authors themselves also did not try to reproduce the results, and the raw data was found to contain a large number of errors and basic mistakes in analysis. This incident prompted Stephen Bustin to create the MIQE guidelines to provide a baseline level of quality for qPCR data published in scientific literature.
Purpose
The MIQE guidelines were created due to the low quality of qPCR data submitted to academic journals at the time, which was only becoming more common as Next Generation Sequencing machinery allowed such experiments to be run at a lower cost. Because the technique is utilized across many fields of science, the instruments, methods, and designs of how qPCR is used differ greatly. To help improve overall quality, the MIQE guidelines were made as generalized suggestions on basic experimental procedures and the forms of data that should be collected as a minimum level of reported information for other researchers to understand and use when reading the published material. Setting up a recognized and largely agreed-upon set of guidelines such as these was deemed important by the scientific community, especially due to the ever-increasing amount of scientific work coming from developing countries with many different languages and protocols.
History
Original version developments
In 2009, Stephen Bustin led an international group of scientists including Mikael Kubista to put together a set of guidelines on how to perform qPCR and what forms of data should be collected and published in the process. This also allowed editors and reviewers of scientific journals to employ the guidelines when looking over a submitted paper that included qPCR data. Thus, the guidelines were set up as a sort of checklist for each step of the procedure with certain items being marked as essential (E) when submitting data for publication and others marked as just desirable (D).
An additional version of the guidelines was published in September 2010 for use with fluorescence-based quantitative real-time PCR. It also acted as a précis for the broader form of the guidelines. Other researchers have been creating further versions for specific forms of qPCR that may require a supplementary or different set of items to check, including single-cell qPCR and digital PCR (dPCR). Appropriate adherence to the existing MIQE guidelines has also been overviewed in other scientific areas, including photobiomodulation and clinical biomarkers.
It was noted by Bustin in 2014 (and again by him in 2017) that there was some amount of uptake and usage of the MIQE guidelines within the scientific community, but there were still far too many published papers with qPCR experiments that lacked even the most basic of data presentation and proper confirmation of effectiveness for said data. These studies retained major reproducibility issues, where the conclusions of their evidence could not be replicated by other researchers, throwing the initial results into doubt. All of this was despite many papers directly citing Bustin's original MIQE publication, but not following through on the guideline checklist of material in their own experiments. However, some researchers have pointed out at least some success, with a number of papers being rejected by academic journals for publication due to failing to pass MIQE checklists. Other studies have been retracted after the fact once their lack of proper data to pass the MIQE guidelines was noted and publicly pointed out to the journal editors.
Tightening of guidelines
When setting up their new comparative qPCR systems titled "Dots in Boxes" in 2017, New England Biolabs stated that they had designed the data collection portion around the MIQE guidelines so that the data fit all the minimum parameter checklists in the protocols. Other scientific instrument companies have assisted in guideline compliance by purposefully tailoring their devices for them, including Bio-Rad creating a mobile app that allows for active marking of the MIQE checklist as each step is completed.
An overview of the 10th anniversary since the publication of the MIQE guidelines was conducted in June 2020 and discussed the scientific studies that had produced better and more organized results when following the guidelines. In August 2020, an updated version of the guidelines for the digital PCR method was published to account for improvement in machinery, technologies, and techniques since the original 2013 release. Additional guideline steps were added for data analysis, while also providing a more simplified checklist table for researchers to use. An RT-qPCR targeting assay was developed alongside Stephen Bustin using the MIQE guidelines for clinical biomarkers in December 2020 in order to identify the clinical presence of COVID-19 viral particles during the COVID-19 pandemic.
Guidelines overview
The MIQE guidelines are split up into 9 different sections that make up the checklist. These include not only considerations for doing the qPCR itself, but also how the resulting data is collected, analyzed, and presented. An important part of the latter is including information relating to the analysis software used and also submitting the raw data to the relevant databases.
Experimental design
Large portions of the guidelines include basic actions that would normally be included in experiments and publications regardless, such as an item for describing the experimental and control group differences. Other such information includes how many individual units are used in each group in the experiment. These two pieces are defined as essential for any study. This section also includes two desirable points, which are pointing out whether the author's laboratory itself or a core laboratory of the university or organization conducted the qPCR assay and an acknowledgement of any other individuals that contributed to the work.
Sample
The essential requirements that samples and sample material must meet includes a description of the sample, what form of dissection was used, what processing method was done, whether the samples were frozen or fixed and how long did it take, and what sample conditions were used. It is also desirable to know the volume or mass of the sample that was processed for the qPCR.
Nucleic acid extraction
For the process of extracting the DNA/RNA, there are a number of essential guidelines. This includes a description of the extraction process done, a statement on what DNA extraction kit was used and any changes made to the directions, details on whether any DNase or RNase treatment was used, a statement on whether any contamination was assessed, a quantification of the amount of genetic material extracted, a description of the instruments used for the extraction, the methods used to retain RNA integrity, a statement on the RNA integrity number and quality indicator and the quantification cycle (Cq) reached, and lastly what testing was done to determine the presence or absence of inhibitors. Four desired pieces of information are where the reagents used were obtained from, what level of genetic purity was obtained, what yield was obtained, and an electrophoresis gel image for confirmation.
Reverse transcription
The primary essential parts for this phase include detailing the reaction conditions in full, giving both the amount of RNA used and the total volume of the reaction, give information on the oligonucleotide used as a primer and its concentration, the concentration and type of reverse transcriptase used, and lastly the temperature and amount of time done for the reaction. It is also desirable to have the catalog numbers of reagents used and their manufacturers, the standard deviation for the Cq with and without the transcriptase being involved, and how the cDNA was stored.
qPCR target information
All of the basic information regarding the target is necessary here, including the gene symbol, the accession database number for the sequence in question, the length of the sequence being amplified, information about the specificity screen used such as BLAST, what splicing variants exist for the sequence, and where the exon or intron for each primer is. There are several desired, but not required information pieces for this section, such as the location of the amplicon, whether any pseudogenes or homologs exist, whether a sequence alignment was done and the data obtained from it, and any data on the secondary structure of the amplified sequence.
qPCR oligonucleotides
Creation of the oligonucleotides requires only two pieces of essential information: the primer sequences used and the location and details of any modifications made to the sequence. But there are several desirable pieces of data, including the identification number from the RTPrimerDB database, the sequences from the probes, the manufacturer used to make the oligos, and how they were purified.
qPCR protocol
As one of the primary segments of the guidelines, there are several essential parts on the checklist for the qPCR process itself. This includes the full set of conditions used for the reaction, the volume of both the reaction and the cDNA, the concentrations for the probes, magnesium ions, and dNTPs, what kind of polymerase was used and its concentration, what kit was used and its manufacturer, what additives to the reaction were used, who manufactured the qPCR machine, and what parameters were set for the thermocycling process. The only additional desired pieces of information are the chemical composition of the buffer used, who manufactured the plates and tubes used and what their catalog number is, and whether the reaction was set up manually or by a machine.
qPCR validation
In order to confirm the effectiveness and quality of the qPCR process that was performed, there are several actions and subsequent data that must be presented. This includes explaining the specific method of checking that the process functioned, such as using a gel, direct sequencing of the genetic material, showing a melt profile, or from digestion by restriction enzyme. If SYBR Green I was used, then the Cq of the control group with no template DNA must be given. Further essential data includes the calibration of the machine curves with the slope and y intercept noted, the efficiency of the PCR process as determined from the aforementioned slope, the correlation coefficients (r squared) for the calibration curves, the dynamic range of the linear curves, the Cq found at the lowest concentration where 95% of the results were still positive (LOD) along with the evidence for the LOD itself, and lastly if a multiplex is used, then the efficiency and LOD must be given for each assay done.
The extra desired information includes evidence given that qPCR optimization occurred by the use of gradients, the confidence intervals to show efficiency of the qPCR, and the confidence intervals for the entire range tested.
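The efficiency calculation mentioned above is conventionally taken from the slope of the calibration (standard) curve as E = 10^(−1/slope) − 1. A minimal Python sketch with illustrative, made-up dilution-series numbers (not data from any study discussed here):

```python
import numpy as np

def calibration_curve(log10_concentration, cq):
    """Slope, intercept, efficiency and r^2 of a qPCR calibration curve.

    Fits Cq against log10(template amount); with the usual convention the
    amplification efficiency is E = 10**(-1/slope) - 1, so a slope near
    -3.32 corresponds to ~100% efficiency (perfect doubling every cycle).
    """
    slope, intercept = np.polyfit(log10_concentration, cq, 1)
    efficiency = 10.0 ** (-1.0 / slope) - 1.0
    r_squared = np.corrcoef(log10_concentration, cq)[0, 1] ** 2
    return slope, intercept, efficiency, r_squared

# Illustrative 10-fold dilution series
logs = np.array([5.0, 4.0, 3.0, 2.0, 1.0])
cqs = np.array([15.1, 18.4, 21.8, 25.1, 28.5])
slope, intercept, eff, r2 = calibration_curve(logs, cqs)
print(f"slope = {slope:.2f}, efficiency = {eff:.1%}, r^2 = {r2:.3f}")
```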
Data analysis
The final section of the guidelines involves information on how the analysis of the qPCR data was done. The essential parts of that include the program and program version used for the analysis, the method for how the Cq was determined, figuring out the outlier points in the data and how they are used or excluded and why, what results were found for the controls with no template genetic material, an explanation for why the reference genes used were chosen and why the number of them was chosen, the method used to normalize the data, how many technical replicates were included, how repeatable was the data within the assays, what methods were used to determine significance of the results, and what software was used for this part of the qualitative analysis.
It is also desired to include information on the number of biological replicates and whether they matched the results from the technical replicates, the reproducibility data for the concentration variants, data on the power analysis, and lastly for the researchers to submit the raw data in the RDML file format.
References
Further reading
Molecular biology
Laboratory techniques
Polymerase chain reaction | MIQE | [
"Chemistry",
"Biology"
] | 2,441 | [
"Biochemistry methods",
"Genetics techniques",
"Polymerase chain reaction",
"nan",
"Molecular biology",
"Biochemistry"
] |
60,997,393 | https://en.wikipedia.org/wiki/Julie%20Elizabeth%20Gough | Julie Elizabeth Gough is a Professor of Biomaterials and Tissue Engineering at The University of Manchester. She specializes in controlling cellular responses at the cell-biomaterial interface by engineering defined surfaces for mechanically sensitive connective tissues.
Early life and education
Gough is a cell biologist. She studied cell- and immunobiology, and molecular pathology and toxicology at the University of Leicester, graduating with a BSc in 1993 and an MSc in 1994, respectively. She continued her doctoral studies at the University of Nottingham, earning her PhD in Biomaterials in 1998. Between 1998 and 2002, she furthered her studies at both Nottingham and Imperial College London as a postdoctoral fellow working on novel composites and bioactive glasses for bone repair.
Research and career
Gough joined the School of Materials, Faculty of Science and Engineering at The University of Manchester, as a lecturer in 2002. She was quickly promoted to Senior lecturer and Reader in 2006 and 2010, respectively.
From 2012 to 2013 she was a Royal Academy of Engineering/Leverhulme Trust Senior Research Fellow. Gough was made full Professor in 2014.
Since then, she has continued her research in tissue engineering of mechanically sensitive connective tissues such as bone, cartilage, skeletal muscle and the intervertebral disc. This includes analysis and control of cells such as osteoblasts, chondrocytes, fibroblasts, keratinocytes, myoblasts and macrophages on a variety of materials and scaffolds. Her research also involves the development of scaffolds for tissue repair using novel hydrogels and magnesium alloys as various porous and fibrous materials. Gough has worked on the advisory board of the journal Biomaterials Science, and as part of the local organising committee for the World Biomaterials Congress.
References
External links
Biomaterials
Tissue engineering
Year of birth missing (living people)
Living people
Women molecular biologists
Professorships at the University of Manchester
Alumni of the University of Leicester
Alumni of the University of Nottingham | Julie Elizabeth Gough | [
"Physics",
"Chemistry",
"Engineering",
"Biology"
] | 411 | [
"Biomaterials",
"Biological engineering",
"Cloning",
"Chemical engineering",
"Materials",
"Tissue engineering",
"Matter",
"Medical technology"
] |
61,000,461 | https://en.wikipedia.org/wiki/Independent%20water%20and%20power%20plant | An independent water and power plant (IWPP) or an integrated water and power project is a combined facility which serves as both a desalination plant and a power plant. IWPPs are more common in the Middle East, where demand for both electricity and seawater desalination is high.
Independent water and power producers negotiate both a feed-in power tariff and a water tariff in the same deal with the utility company, which also purchases both products. IWPPs tend to have an installed capacity of over 1 gigawatt (1,000 megawatts) and generate power in a typical thermal power station setup. Seawater is purified by integrating MSF, MED, TVC, or RO water desalination technologies with the power plant, thus increasing overall efficiency.
See also
Independent Power Producer
References
External links
Cogeneration
Power station technology
Energy conversion
Chemical process engineering
Water desalination
Water treatment | Independent water and power plant | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 187 | [
"Water desalination",
"Water treatment",
"Chemical engineering",
"Water pollution",
"Water technology",
"Environmental engineering",
"Chemical process engineering"
] |
61,001,891 | https://en.wikipedia.org/wiki/Epididymis%20evolution%20from%20reptiles%20to%20mammals | The epididymis, which is a tube that connects a testicle to a vas deferens in the male reproductive system, evolved by retention of the mesonephric duct during regression and replacement of the mesonephros with the metanephric kidney. Similarly, during embryological involution of the paired mesonephric kidneys, each mesonephric duct is retained to become the epididymis, vas deferens, seminal vesicle and ejaculatory duct (Wolffian duct). In reptiles and birds both the testes and excurrent ducts (efferent ducts, epididymis, vas deferens) occur in an intra-abdominal location (testicond). Primitive mammals, such as the monotremes (prototheria), also are testicond. Marsupial (metatheria) and placental (eutheria) mammals exhibit differing degrees of testicular descent into an extra-abdominal scrotum. In scrotal mammals the epididymis is attached to the testes in an extra-abdominal position where the cauda epididymis extends beyond the lowest extremity of the testis. Hence, the cauda epididymis is exposed to the coolest of temperatures compared to all other reproductive structures.
Whereas testicond reptiles contain an excurrent duct system, they lack male reproductive glands (absent seminal vesicles, prostate, bulbourethral glands). Monotreme mammals are also testicond (like reptiles) and contain some, but not all (absent seminal vesicles) of the male reproductive glands observed in most metatherian and eutherian mammals. This combination of reptilian and mammalian structures within the monotreme reproductive tract has informed the evolution of the male reproductive tract in mammals. For example, the intra-abdominal low sperm storage capacity of the echidna (Tachyglossus aculeatus) epididymis informed the role of the epididymis as the prime mover in the evolution of descended testes in mammals as it relates to lower extra-gonadal temperatures enhancing epididymal sperm storage in scrotal mammals. Furthermore, the structure of the monotreme reproductive tract also informed prostate evolution in monotreme mammals.
Structural differentiation of the epididymis in reptiles
The reptilian testis and epididymis typically undergo seasonal recrudescence coupled to the breeding season. All reptiles retain their testes and excurrent ducts within the abdomen (testicond). Generally, the reptilian epididymis does not exhibit the same degree of anatomical regionalization compared to scrotal mammals (Figure 1). Indeed, the anatomical appearance of the epididymis of many reptiles appears much more similar to the epididymis of monotremes than scrotal mammals (Figure 1). Anatomically, the gross morphologic features of the reptilian epididymis can vary between species, with some species of reptiles exhibiting just two anatomical regions whereas others (snakes) may exhibit no observable regionalization of the epididymis.
A reptilian histologic initial segment of the epididymis has been extensively documented in several species homologous to the initial segment of mammals. The initial segment of the epididymis, first described in the guinea pig epididymis, is a histologically distinct region of tall pseudostratified columnar epithelium that receives spermatozoa from the ductuli efferentes (Figure 1).
The epididymis is the primary sperm storage organ in male reptiles. In all reptiles and mammals the sperm storage region of the epididymis can objectively be identified as that distal extremity of the epididymis that exhibits a widened diameter of duct which contains additional layers of circumferential smooth muscle capable of contraction during ejaculation in direct continuity with the vas deferens (Figure 1). This sperm storage region has been described as the anatomical cauda epididymis or the histologic terminal segment of the epididymis. The caudal region of the reptilian epididymis, where sperm are stored, is an anatomical extension that narrows into a conical shape before forming the vas deferens. The coiled epididymal duct within the cauda epididymis does not appear to be particularly long, and so may be limited in its capacity to store sperm in comparison to scrotal mammals. Limited sperm storage in the reptilian epididymis may be circumvented by the ability of female reptiles to store viable spermatozoa within their reproductive tract for utilization months or years after insemination. A competing reproductive strategy to long-term sperm storage that explains the production of offspring after prolonged periods in the absence of males is facultative parthenogenesis. In either case, these female reproductive strategies may have evolved to counter limited sperm storage in the reptilian male epididymis.
Structural differentiation of the epididymis in monotreme mammals
The monotremes (short beaked echidna, long beaked echidna, platypus) are testicond seasonal breeding mammals that exhibit some characteristics of the reproductive tract found in reptiles (e.g. testicond, presence of a cloaca). The fully developed monotreme epididymis exhibits two anatomical regions, similar to some reptiles. The two anatomical regions of the monotreme epididymis closely correspond to just two histologic regions (Figure 1B), an initial segment and a terminal segment. Structural differentiation of the epididymis into just an initial segment and terminal segment, with no intervening middle segment, has also subsequently been observed as far back as the epididymis of sharks. In the monotreme echidna, the initial segment, where sperm undergo maturation, is much larger than the terminal segment (Figure 1B), the later segment being the sperm storage region of the epididymis. In the monotreme echidna, the proportion (26% of total) of mature sperm stored intra-abdominally in the terminal segment of the epididymis is considerably less than the proportion of mature sperm stored in the epididymis of many eutherian mammals (50-75% of total) with descended testes. Hence, both reptiles and the monotreme echidna appear to have relatively limited sperm storage capacity in the testicond epididymis compared to mammals with the epididymis located in an extra-abdominal scrotum. This reduced sperm storage capacity of the monotreme testicond epididymis is further supported by observations that the sperm storage region of the epididymis of a testicond mammal (echidna) and a scrotal mammal (rat) are respectively 4% and 8% of the total length of the duct. Significantly, the low intra-abdominal sperm storage capacity of the echidna epididymis helped inform the role of the epididymis as a prime mover in the evolution of descended testes in mammals whereby lower extra-gonadal temperatures within the scrotal cauda epididymis reduces oxidative respiration of sperm, which enhances oxygen availability, thereby allowing greater epididymal sperm storage in the cooler scrotum of mammals.
Structural differentiation of the epididymis in marsupials and placental mammals
Most species of marsupial (metatherian) and placental (eutherian) mammals have evolved extra-gonadal testes, although a limited number of these mammals remain testicond or exhibit differing degrees of testicular descent. As a result of the epididymis being attached to the testis, and the cauda epididymis extending below the lower extremity of the testis (Figure 1C), it was proposed that the epididymis was the prime mover in the evolution of testicular decent, whereby the cauda epididymis preceded the testis into a scrotal location.
The epididymis of marsupials (metatherians) and placental mammals (eutherians) has undergone further structural differentiation compared to that observed in prototherian mammals (Figure 1). In scrotal mammals, an initial segment is nearly always observed, however, additional histologically distinct regions have developed between the initial segment and the distal sperm storage region (terminal segment). These intervening histologic regions have been referred to as the middle segment. The histologic regions of the middle segment (Figure 1C) can vary in number in metatherian and eutherian species of mammals. Beyond the histologic regions of the middle segment, the sperm storage region (anatomical cauda, histologic terminal segment) of scrotal mammals has enlarged to accommodate enhanced storage of sperm (Figure 1C). The storage of sperm in the scrotal epididymis is enhanced by cooler extra-abdominal temperatures. Indeed, experimental reflection of one epididymis into the warmer temperature of an abdominal location reduced sperm storage capacity by 75% compared to the contralateral epididymis that remained in the scrotum. Significantly, cooler scrotal temperatures reduces oxidative respiration of sperm, thereby increasing oxygen availability to store more sperm per unit volume of duct, which has informed the evolution of descended testes in mammals.
Trends in the evolution of the epididymis from testicond reptiles and monotremes to scrotal mammals
A histologically distinct initial segment of the epididymis is widely observed in many species of reptiles and even as far back as sharks. A large initial segment is also present in the epididymis of the testicond monotreme echidna. Furthermore, the scrotal epididymis of metatherian and eutherian mammals nearly all exhibit an initial segment which may contain histologically distinct sub-zones therein. Hence, the initial segment of the epididymis is well conserved in testicond vertebrates (reptiles, monotremes) and in scrotal metatherian and eutherian mammals (Figure 1).
The status of the histologic middle segment of the epididymis in reptiles is incompletely defined (Figure 1A). Considering the wide variation in the anatomical structures of the four orders (Crocodilia, Sphenodontia, Squamata, Testudines) of reptilian epididymides and the paucity of histologic studies that correlate anatomical structure to histology, the evolution of the middle segment in reptiles, if present, remains to be delineated. In contrast, extensive studies of the echidna epididymis show that the monotreme epididymis lacks a middle segment. It is only in metatherian and eutherian mammals that a middle segment has been extensively documented. Whereas the initial segment of the epididymis often contains histologically distinct sub-zones therein, the downstream zones that collectively constitute the middle segment most likely evolved from the upstream sub-zones of the initial segment.
The histologic terminal segment is the sperm storage region of the epididymis in reptiles, monotremes and both metatherian and eutherian mammals (Figure 1). The testicond epididymis (reptiles and monotremes) has a limited sperm storage capacity compared with the scrotal epididymis (metatherian and eutherian mammals), which has a much larger terminal segment to accommodate increased sperm storage. It is the cooler temperature of the scrotal epididymis that reduces oxidative respiration of sperm in the terminal segment, thereby increasing oxygen availability to store more sperm per unit volume of duct, thus informing the evolution of descended testes in mammals.
Whereas different and multiple histologic sub-regions may or may not occur within any segment of the epididymis, the histologic description of the epididymis consisting of an initial segment, middle segment and terminal segment provides a harmonized characterization that allows direct comparisons of homologous segments across species.
Summary and conclusion
The evolution of the epididymis from reptiles to mammals (Figure 1) entailed:
Retention of the histologic initial segment.
To varying degrees, elaboration of a histologic middle segment.
An increase in the length, volume and size of the histologic terminal segment of scrotal mammals, whereby the lower extra-abdominal (scrotal) temperature increased oxygen availability to sustain and store more sperm, thus providing a physiologic mechanism for the evolution of descended testes in mammals.
References
Evolution of tetrapods
Scrotum | Epididymis evolution from reptiles to mammals | [
"Biology"
] | 2,679 | [
"Phylogenetics",
"Evolution of tetrapods"
] |
71,805,117 | https://en.wikipedia.org/wiki/Permutation%20codes | Permutation codes are a family of error correction codes that were first introduced by Slepian in 1965 and have been widely studied in both combinatorics and information theory due to their applications related to flash memory and power-line communication.
Definition and properties
A permutation code is defined as a subset of the symmetric group S_n endowed with the usual Hamming distance between strings of length n. More precisely, if σ and τ are permutations in S_n, then their distance is the number of positions in which they differ, d(σ, τ) = |{i : σ(i) ≠ τ(i)}|.
The minimum distance of a permutation code C is defined to be the smallest positive integer d_min such that there exist distinct σ, τ in C with d(σ, τ) = d_min.
One of the reasons why permutation codes are suitable for certain channels is that each alphabet symbol appears only once in each codeword, which for example makes the errors occurring in the context of power-line communication less impactful on codewords.
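A small Python illustration of these definitions, using a toy code of length 3 (the values shown hold for this example only):

```python
from itertools import combinations, permutations

def hamming(p, q):
    """Hamming distance between two permutations given as tuples."""
    return sum(1 for a, b in zip(p, q) if a != b)

def minimum_distance(code):
    """Minimum Hamming distance over all distinct pairs of codewords."""
    return min(hamming(p, q) for p, q in combinations(code, 2))

# A permutation code of length 3: the three cyclic rotations of (0, 1, 2).
code = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]
print(minimum_distance(code))        # 3: distinct rotations disagree in every position

# Brute-force check that no 4 permutations of S_3 are pairwise at distance 3,
# i.e. the toy value M(3, 3) = 3.
print(any(minimum_distance(c) >= 3
          for c in combinations(permutations(range(3)), 4)))   # False
```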
Gilbert-Varshamov bound
A main problem in permutation codes is to determine the value of M(n, d), where M(n, d) is defined to be the maximum number of codewords in a permutation code of length n and minimum distance d. There has been little progress made except for small lengths. We can define D(n, k), with 0 ≤ k ≤ n, to denote the set of all permutations in S_n which have distance exactly k from the identity.
Let with , where is the number of derangements of order .
The Gilbert-Varshamov bound is a very well known lower bound, and so far outperforms other bounds for small parameter values.
Theorem 1:
There have been improvements on it for certain cases, as the next theorem shows.
Theorem 2: If for some integer , then
.
For small values of n and d, researchers have developed various computer search strategies to directly look for permutation codes with some prescribed automorphisms.
Other Bounds
There are numerous bounds on permutation codes; we list two here.
Gilbert-Varshamov Bound Improvement
An Improvement is done to the Gilbert-Varshamov bound already discussed above. Using the connection between permutation codes and independent sets in certain graphs one can improve the Gilbert–Varshamov bound asymptotically by a factor , when the code length goes to infinity.
Let denote the subgraph induced by the neighbourhood of identity in , the Cayley graph and .
Let denotes the maximum degree in
Theorem 3: Let and
Then,
where .
The Gilbert-Varshamov bound is,
Theorem 4: when d is fixed and n goes to infinity, we have
Lower bounds using linear codes
Using a linear block code, one can prove that there exists a permutation code in the symmetric group of degree , having minimum distance at least and large cardinality. A lower bound for permutation codes that provides asymptotic improvements in certain regimes of length and distance of the permutation code is discussed below. For a given subset of the symmetric group , we denote by the maximum cardinality of a permutation code of minimum distance at least entirely contained in , i.e.
.
Theorem 5: Let be integers such that and . Moreover let be a prime power and be positive integers such that and . If there exists an code such that has a codeword of Hamming weight , then
where
Corollary 1: for every prime power , for every ,
.
Corollary 2: for every prime power , for every ,
.
References
Error detection and correction | Permutation codes | [
"Engineering"
] | 662 | [
"Error detection and correction",
"Reliability engineering"
] |
71,822,245 | https://en.wikipedia.org/wiki/Shingo%20Futamura | Shingo Futamura (April 3, 1938 -) is a rubber industry materials scientist noted for his concept of the deformation index.
Education
Futamura completed his undergraduate Bachelor of Science degree at Waseda University in Japan. He earned a master's degree from the University of Michigan in 1968. He received his doctorate in polymer science from the University of Akron in 1975 under advisor Eberhard Meinecke.
Career
By 1974, Futamura was appointed as a group leader of polymer physics at Firestone Central Research in Akron, Ohio. During a career spanning over 40 years, Futamura authored 25 scientific papers and 50 US patents. He worked for Nippon Zeon Co., Firestone Tire & Rubber Company, and Goodyear Tire & Rubber Company.
He is best known for proposing the concept of a deformation index to relate viscoelastic properties to real-world tire performance. The concept is used to select rubber compounds that minimize tire rolling resistance, and it is used in finite element analysis to simplify the calculation of energy loss and temperature distribution.
Awards and recognition
1973 - Honorable Mention award for paper entitled "Solution SBR-Study in Copolymerization Dynamics", ACS Rubber Division Spring meeting
2014 - Melvin Mooney Distinguished Technology Award from the ACS Rubber Division
References
1938 births
Polymer scientists and engineers
20th-century Japanese engineers
Living people
University of Akron people
Goodyear Tire and Rubber Company people
Bridgestone people
Waseda University alumni | Shingo Futamura | [
"Chemistry",
"Materials_science"
] | 294 | [
"Polymer scientists and engineers",
"Physical chemists",
"Polymer chemistry"
] |
74,620,623 | https://en.wikipedia.org/wiki/Kerr%E2%80%93Newman%E2%80%93de%E2%80%93Sitter%20metric | The Kerr–Newman–de–Sitter metric (KNdS) is one of the most general stationary solutions of the Einstein–Maxwell equations in general relativity that describes the spacetime geometry in the region surrounding an electrically charged, rotating mass embedded in an expanding universe. It generalizes the Kerr–Newman metric by taking into account the cosmological constant Λ.
Boyer–Lindquist coordinates
In signature and in natural units of the KNdS metric is
with all the other metric tensor components , where is the black hole's spin parameter, its electric charge, and the cosmological constant with as the time-independent Hubble parameter. The electromagnetic 4-potential is
The frame-dragging angular velocity is
and the local frame-dragging velocity relative to constant positions (the speed of light at the ergosphere)
The escape velocity (the speed of light at the horizons) relative to the local corotating zero-angular momentum observer is
The conserved quantities in the equations of motion
where is the four velocity, is the test particle's specific charge and the Maxwell–Faraday tensor
are the total energy
and the covariant axial angular momentum
The overdot stands for differentiation by the testparticle's proper time or the photon's affine parameter, so .
Null coordinates
To get coordinates we apply the transformation
and get the metric coefficients
and all the other , with the electromagnetic vector potential
Defining ingoing lightlike worldlines give a light cone on a spacetime diagram.
Horizons and ergospheres
The horizons are at and the ergospheres at .
This can be solved numerically or analytically. Like in the Kerr and Kerr–Newman metrics, the horizons have constant Boyer–Lindquist radial coordinate r, while the ergospheres' radii also depend on the polar angle θ.
This gives 3 positive solutions each (including the black hole's inner and outer horizons and ergospheres as well as the cosmic ones) and a negative solution for the space at in the antiverse behind the ring singularity, which is part of the probably unphysical extended solution of the metric.
With a negative Λ (the Anti–de–Sitter variant with an attractive cosmological constant), there is no cosmic horizon or ergosphere, only the black hole-related ones.
In the Nariai limit the black hole's outer horizon and ergosphere coincide with the cosmic ones (in the Schwarzschild–de–Sitter metric, to which the KNdS reduces with a = Q = 0, that would be the case when 9ΛM² = 1).
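As noted above, the horizon radii can be found numerically. The Python sketch below assumes the commonly quoted form of the radial horizon function, Δ_r = (r² + a²)(1 − Λr²/3) − 2Mr + Q² in natural units, which is not spelled out in the surviving text above; the parameter values are purely illustrative.

```python
import numpy as np

def delta_r(r, m=1.0, a=0.6, q=0.3, lam=0.02):
    """Assumed radial horizon function Delta_r of the KNdS metric (G = c = 1)."""
    return (r**2 + a**2) * (1.0 - lam * r**2 / 3.0) - 2.0 * m * r + q**2

def horizons(m=1.0, a=0.6, q=0.3, lam=0.02):
    """Positive real roots of Delta_r = 0: inner, outer and cosmological horizons."""
    # Delta_r expanded as a quartic in r, coefficients in descending powers of r
    coeffs = [-lam / 3.0, 0.0, 1.0 - lam * a**2 / 3.0, -2.0 * m, a**2 + q**2]
    roots = np.roots(coeffs)
    return sorted(float(z.real) for z in roots if abs(z.imag) < 1e-9 and z.real > 0)

radii = horizons()
print("horizon radii:", radii)
print("residuals:", [round(delta_r(r), 12) for r in radii])
```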
Invariants
The Ricci scalar for the KNdS metric is , and the Kretschmann scalar is
See also
Kerr–Newman metric
De Sitter–Schwarzschild metric
de Sitter space
de Sitter universe
Anti-de Sitter space
AdS/CFT correspondence
References
Exact solutions in general relativity
Equations
Metric tensors | Kerr–Newman–de–Sitter metric | [
"Mathematics",
"Engineering"
] | 587 | [
"Exact solutions in general relativity",
"Tensors",
"Mathematical objects",
"Equations",
"Metric tensors"
] |
74,628,152 | https://en.wikipedia.org/wiki/Ammann%20A1%20tilings | In geometry, an Ammann A1 tiling is a tiling from the 6-piece prototile set shown on the right. They were found in 1977 by Robert Ammann. Ammann was inspired by the Robinson tilings, which were found by Robinson in 1971. The A1 tiles are one of five sets of tiles discovered by Ammann and described in Tilings and Patterns.
The A1 tile set is aperiodic, i.e. they tile the whole Euclidean plane, but only without ever creating a periodic tiling.
Generation through matching
The prototiles are squares with indentations and protrusions on the sides and corners that force the tiling to form a pattern of a perfect binary tree that is continued indefinitely. The markings on the tiles in the pictures emphasize this hierarchical structure, however, they have only illustrative character and do not represent additional matching rules as this is already taken care of by the indentations and protrusions.
However, the tiling produced in this way is not unique, not even up to isometries of the Euclidean group, e.g. translations and rotations. When going to the next generation, one has choices. In the picture to the left, the initial patch in the left upper corner highlighted in blue can be prolonged by either a green or a red tile, which are mirror images of each other and instances of the prototile labeled b. Then there are two more choices in the same spirit but with prototile e. The remainder of the next generation is then fixed. If one deviated from the pattern for this next generation, one would run into configurations that do not match up globally, at least at some later stage.
The choices are encoded by infinite words over the alphabet {g, r}, where g indicates the green choice while r indicates the red choice. These are in bijection with a Cantor set and thus their cardinality is the continuum. Not all choices lead to a tiling of the plane. E.g. if one only sticks to the green choice one would only fill a lower right corner of the plane. If there are however infinitely many, sufficiently generic alternations between g and r, the tiles will cover the whole plane. This still leaves uncountably many different A1 tilings, all of them necessarily nonperiodic. Since there are only countably many possible Euclidean isometries that respect the squares underlying the tiles to relate these different tilings, there are uncountably many A1 tilings even up to isometries.
Additionally, an A1 tiling may have faults (also called corridors) going off to infinity in arms. This further increases the number of possible A1 tilings, but the cardinality remains that of the continuum. Note that the corridors allow some parts with binary-tree hierarchy to be rotated relative to the other such parts.
Further pictures
See also
Robinson's tilings
References | Ammann A1 tilings | [
"Physics"
] | 581 | [
"Tessellation",
"Aperiodic tilings",
"Symmetry"
] |
47,666,123 | https://en.wikipedia.org/wiki/Journal%20of%20Integer%20Sequences | The Journal of Integer Sequences is a peer-reviewed open-access academic journal in mathematics, specializing in research papers about integer sequences.
It was founded in 1998 by Neil Sloane. Sloane had previously published two books on integer sequences, and in 1996 he founded the On-Line Encyclopedia of Integer Sequences (OEIS). Needing an outlet for research papers concerning the sequences he was collecting in the OEIS, he founded the journal. Since 2002 the journal has been hosted by the David R. Cheriton School of Computer Science at the University of Waterloo, with Waterloo professor Jeffrey Shallit as its editor-in-chief. There are no page charges for authors, and all papers are free to all readers. The journal publishes approximately 50–75 papers annually.
In most years from 1999 to 2014, SCImago Journal Rank has ranked the Journal of Integer Sequences as a third-quartile journal in discrete mathematics and combinatorics. It is indexed by Mathematical Reviews and Zentralblatt MATH.
References
External links
Mathematics journals
Open access journals
Academic journals established in 1998
English-language journals
Irregular journals | Journal of Integer Sequences | [
"Mathematics"
] | 221 | [
"Sequences and series",
"Integer sequences",
"Mathematical structures",
"Recreational mathematics",
"Mathematical objects",
"Combinatorics",
"Numbers",
"Number theory"
] |
47,666,505 | https://en.wikipedia.org/wiki/Prostaglandin%20D2%20receptor | {{DISPLAYTITLE:Prostaglandin D2 receptor}}
The prostaglandin D2 (PGD2) receptors are G protein-coupled receptors that bind and are activated by prostaglandin D2. Also known as PTGDR or DP receptors, they are important for various functions of the nervous system and inflammation. They include the following proteins:
Prostaglandin D2 receptor 1 (DP1) -
Prostaglandin D2 receptor 2 (DP2) -
Structure
The PTGDR gene that encodes the prostaglandin D2 receptor in humans is found on the long arm of chromosome 14 at 14q22.1 and consists of four exons. A 1995 molecular cloning study of the prostaglandin D2 receptor derived from humans found that the corresponding cDNA encoded for a protein with 359 amino acids and molecular mass of 40,276 daltons. The receptor is a heterotrimeric G protein-coupled receptor, containing seven rhodopsin-like transmembrane domains, an extracellular NH2 terminus, and an intracellular COOH terminus.
The receptor contains a few structural sites at which it can interact with other molecules. For instance, there are three possible sites for N-glycosylation at the Asn-10, Asn-90, and Asn-297 residues. Protein kinase C can also phosphorylate the prostaglandin D2 receptor at two sites in the first and second cytoplasmic loops as well as at six sites in the COOH terminus.
Signal transduction pathway
A 2014 journal article described that the PGD2 receptor signaling pathway begins with the binding of prostaglandin D2. After PGD2 binds to the extracellular ligand site on the receptor, the Gs alpha subunit is activated. Activation of the Gs alpha subunit prompts activation of the enzyme adenylate cyclase, which is located on the cell membrane. Adenylate cyclase then catalyzes the conversion of ATP to cyclic AMP, or cAMP. The result of the PGD2 receptor signaling pathway is a rise in levels of the second messenger cAMP, which can proceed to perform other tasks depending on the activated cell.
However, several other researchers make distinctions between the two prostaglandin D2 receptor subtypes and their G protein-coupled receptor pathways. They describe that the binding of PGD2 to PTGDR1 activates the Gs alpha subunit, resulting in the subsequent increase of cAMP. This stimulation of cAMP also involves activation of Protein Kinase A and influx of calcium ions through membrane channels. In contrast, the binding of PGD2 to PTGDR2 instead activates the Gi alpha subunit, decreasing cAMP levels and increasing intracellular calcium ion levels through inositol phosphate. These distinctions in signal transduction pathways mediate the different effects of these PGD2 receptor subtypes.
Disease relevance
Inflammation: PTGDR1 signaling results in many non-inflammatory effects, such as inhibition of dendritic cell and Langerhans cell migration and eosinophil apoptosis. PTGDR2 mediates several pro-inflammatory effects, including the stimulation of TH2 cells, ILC2, and eosinophils.
Asthma: Activation of PTGDR2 amplifies an inflammation cascade by upregulating the expression and release of type 2 cytokines through TH2 cells, ILC2 cells, and eosinophils. These type 2 cytokines lead to symptoms like airway inflammation, increased mucus production, and mucus metaplasia, which are found in asthma conditions. Increase in PTGDR1 signal transduction results in vasodilation, which can promote the migration and likelihood of survival for inflammatory cell types.
Neurodegeneration: A 2018 study induced the prostaglandin D2 signaling pathway in mice via PTGDR2 to determine the impact on Parkinson's Disease-like pathology. The researchers observed that the mice with PG treatment developed loss of dopamine neurons in the substantia nigra pars compacta, motor deficits, and other progressive disease-like symptoms. They also discovered PGD2 receptors on dopaminergic cells but not on microglia.
See also
Eicosanoid receptor
Prostaglandin E2 receptor
References
External links
Eicosanoids
G protein-coupled receptors | Prostaglandin D2 receptor | [
"Chemistry"
] | 918 | [
"G protein-coupled receptors",
"Signal transduction"
] |
47,669,007 | https://en.wikipedia.org/wiki/Grote%E2%80%93Hynes%20theory | Grote–Hynes theory is a theory of reaction rate in a solution phase. This rate theory was developed by James T. Hynes with his graduate student Richard F. Grote in 1980.
The theory is based on the generalized Langevin equation (GLE). It introduced the concept of frequency-dependent friction for chemical rate processes in the solution phase. Because it includes frequency-dependent friction instead of constant friction, the theory successfully predicts the rate constant even in cases where the reaction barrier is high and of high frequency, where diffusion over the barrier starts to decouple from the viscosity of the medium. This was a weakness of Kramers' rate theory, which underestimated the reaction rate for large, high-frequency barriers.
References
Physical chemistry | Grote–Hynes theory | [
"Physics",
"Chemistry"
] | 155 | [
"Physical chemistry",
"Applied and interdisciplinary physics",
"Physical chemistry stubs",
"nan"
] |
76,149,757 | https://en.wikipedia.org/wiki/Vibrational%20solvatochromism | Vibrational solvatochromism refers to changes in the vibrational frequencies of molecules due to variations in the solvent environment. Solvatochromism is a broader term that describes changes in the electronic or vibrational properties of a molecule in response to changes in the solvent polarity or composition. In the context of vibrational solvatochromism, researchers study how the vibrational spectra of a molecule, which represent the different vibrational modes of its chemical bonds, are influenced by the properties of the solvent.
Understanding vibrational solvatochromism helps researchers to characterize molecular environments and study molecular dynamics in different solvents and biological environments.
Dielectric continuum model
By considering the intermolecular interaction of the solute molecule with a dielectric continuum solvent, one can obtain a general relationship between the vibrational frequency and the intermolecular interaction potential. This relationship is given by the sum of three contributions: (1) a Coulombic term describing the interaction between the permanent dipole moment of the molecule and the electric field, (2) an induction term describing the interaction with the induced dipole moment, and (3) an electric field-correction term which arises from the change of the electric field along the normal coordinate of the vibration. When we consider only the linear terms with respect to the Onsager reaction field, , the frequency shift for the jth normal mode can be given as
where and are the effective gas-phase and solvent-induced vibrational dipole moment, respectively. Despite the limited validity due to the approximate nature of the dielectric continuum solvent model, researchers still often use this theory for vibrational solvatochromism, especially when a more refined model is challenging to implement.
Electrostatic Effect: Distributed Multipole Analysis
The solvent electric field experienced by a given solute molecule in solution is highly nonuniform in space. For a realistic description of vibrational solvatochromism, one should consider the local electric potential created by surrounding solvent molecules. Assuming that the solute-solvent intermolecular interaction potential can be fully described by the distributed charges, dipoles, and high-order multipoles interacting with solvent electric potential and its gradients, it was shown that the vibrational solvatochromic frequency shift is given as
Here, the vibrational solvatochromic charge (), dipole (), quadrupole (), and octupole () terms can be determined using any distributed multipole expansion method.[5] The above Equation can be interpreted as a type of vibrational spectroscopic map.
Quantum chemistry calculations conducted for various IR probes have revealed that terms up to vibrational solvatochromic quadrupoles are essential for adequately describing the vibrational frequency shift.
Electrostatic Effect: Semiempirical Approaches
The vibrational frequency shift, denoted as , for the jth normal mode is defined as the difference between the actual vibrational frequency of the mode in a solution and the frequency in the gas phase.
An early approach aimed to express the solvation-induced vibrational frequency shift in terms of the solvent electric potentials evaluated at distributed atomic sites on the target solute molecule. This method involves calculating the solvent electric potentials at these specific solute sites through the utilization of atomic partial charges from surrounding solvent molecules. The vibrational frequency shift of the solute molecule, denoted as , for the jth vibrational mode with an atomic configuration of the solvent molecules can be represented as
Here, represents the vibrational frequency of the jth normal mode in solution, signifies the vibrational frequency in the gas phase, denotes the number of distributed sites on the solute molecule, denotes the solvent electric potential at the kth site of the solute molecule, and are the parameters to be determined through least-square fitting to a training database comprising clusters containing a solute and multiple solvent molecules. This method provides a means to quantify the impact of solvation on the vibrational frequencies of the solute molecule.
Another widely used model for characterizing vibrational solvatochromic frequency shifts involves expressing the frequency shift in terms of solvent electric fields evaluated at distributed sites on the target solute molecule. This model is represented by the equation:
Here, is the mth Cartesian component of the solvent electric field at the kth site on the solute molecule, and represent parameters to be determined through least-square fitting to a training database of clusters containing a solute and multiple solvent molecules. This approach provides a framework for quantifying the influence of solvent electric fields on the vibrational frequencies of the solute molecule.
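A minimal numerical sketch of such a field map follows; the gas-phase frequency, the two interaction sites, the field components and the fitted parameters are hypothetical placeholders rather than literature values.
import numpy as np

omega_gas = 2050.0                        # gas-phase frequency in cm^-1 (placeholder)
# l[k, m]: fitted map parameters; E[k, m]: solvent electric-field components
# (m = x, y, z) evaluated at interaction site k, both in arbitrary units.
l = np.array([[ 800.0, -120.0,  40.0],
              [-350.0,   60.0,  15.0]])
E = np.array([[ 0.012, -0.003,  0.001],
              [ 0.005,  0.002, -0.004]])
delta_omega = np.sum(l * E)               # frequency shift: sum over sites k and components m
print(delta_omega, omega_gas + delta_omega)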
General solute-solvent interaction effects
Buckingham developed the general theory describing the vibrational frequency shifts of a spatially localized normal mode in solution based on the intermolecular interaction potential. Cho later generalized this theory to any arbitrary normal mode. Solvation-induced vibrational frequencies and the resulting new set of normal modes of the solute molecule in solution can be directly obtained by diagonalizing the Hessian matrix derived from an effective Hamiltonian for the solute in the presence of a molecular environment. In the limiting case that the vibrational couplings of the normal mode of interest with other vibrational modes are relatively weak, the vibrational frequency shift is given by
where and are the electric anharmonicity (EA) and mechanical anharmonicity (MA), respectively, defined as
and
where is the cubic anharmonic constant. There exist cases in which the weak coupling approximation cannot be acceptable, for example, when normal modes are coupled and delocalized. In those cases, an additional term describing the mode coupling contribution to the frequency shift should be included.
See also
Vibrational spectroscopic map
References
Infrared spectroscopy | Vibrational solvatochromism | [
"Physics",
"Chemistry"
] | 1,160 | [
"Infrared spectroscopy",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
76,149,855 | https://en.wikipedia.org/wiki/Capillaritron | A capillaritron is a device for creating ion and atom beams.
Mechanism
The capillaritron, the basic concept of which was published in 1981, consists of a fine metal capillary through which gas flows, serving as the anode, and a concentric extraction cathode with an outlet opening. When a high voltage (usually a few kilovolts) is applied, the gas flowing through the capillary is ionised by free electrons and secondary electrons, which are accelerated towards the anode (see also impact ionisation). The positively charged ions are accelerated in the electric field and form an ion beam behind the opening of the extraction cathode. Due to recombination and charge exchange processes in the plasma, the beam also partly consists of uncharged atoms.
The capillary usually consists of resistant materials, such as tungsten. A further development from 1992 is the quartz capillaritron. Here the capillary consists of quartz, an electrically insulating material, into which a metal wire is inserted in order to generate the anode potential. The advantage lies in the simpler, more flexible and cheaper production of quartz capillaries with a predetermined inner diameter, which, unlike metal capillaries, do not have to be drilled but can be electrochemically etched or manufactured by a glassblower.
As a rule, inert gas is used as operating gas, as this only undergoes a minor chemical reaction with the other materials involved. However, a capillaritron also works with hydrogen, with nitrogen or even with air.
With ion beams of capillaritrons, current densities of up to 10 kiloamperes per square millimetre and beam currents of several milliamperes are achieved.
Through focusing with ion optics, beams with high power density can be generated in high vacuum, which can also be used to process surfaces selectively.
Applications
Capillaritrons are commercially available.
Ion and atom beams can be used to sputter surfaces over large areas, and the sputtered material can be used for thin film deposition. Atomic beams can also be used to process insulating surfaces. When using ion beams, such surfaces would become more electrostatically charged, which slows down the ions before they hit the surface.
Furthermore, the capillaritron as an atom source can be used for mass spectrometry.
Capillaritrons are also suited for accelerator applications.
Further reading
John F. Mahoney, Julius Perel, A. Theodore Forrester: Capillaritron: A New, Versatile Ion Source. In: Appl. Phys. Lett. 38, 1981, S. 320–322 ().
Julius Perel, John F. Mahoney, Bernard Kalensher: Investigation of the Capillaritron ion source for electric propulsion, AIAA, 15th International Electric Propulsion 1981, Las Vegas, U.S.A., published online on 17 Aug 2012 ().
Julius Perel: Ion Source for Rocket Payload, 6th Quarterly Status Report, Air Force Geophysics Laboratory, Pasadena, U.S.A., August 1983
Roland Hanke, Helmut Knapp, Detlef Rübesame, Stephan Wege, Heinz Niedrig: A capillaritron ion source as triode system coupled with an einzel lens., In: Nuclear Instruments and Methods in Physics Research, Section B: Beam Interactions with Materials and Atoms, Volumes 59–60, Part 1, 1 July 1991, Pages 135-138 ().
Markus Bautsch, Patrik Varadinek, Stephan Wege, Heinz Niedrig: A Compact and Inexpensive Quartz Capillaritron Source. In: J. Vac. Sci. Tech. A. 12, Nr. 2, 1994, S. 591–593 ().
References
Ion source
Accelerator physics
Surface science | Capillaritron | [
"Physics",
"Chemistry",
"Materials_science"
] | 791 | [
"Applied and interdisciplinary physics",
"Spectrum (physical sciences)",
"Ion source",
"Surface science",
"Experimental physics",
"Condensed matter physics",
"Mass spectrometry",
"Accelerator physics"
] |
76,152,094 | https://en.wikipedia.org/wiki/C10H18O5 | {{DISPLAYTITLE:C10H18O5}}
The molecular formula C10H18O5 (molar mass: 218.249 g/mol) may refer to:
Di-tert-butyl dicarbonate
Diethylene glycol diglycidyl ether
Molecular formulas | C10H18O5 | [
"Physics",
"Chemistry"
] | 66 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
76,152,941 | https://en.wikipedia.org/wiki/Alfa%20Romeo%20690T%20engine | The Alfa Romeo 690T is a twin-turbocharged, direct injected, 90° V6 petrol engine designed and produced by Alfa Romeo since 2015. It is used in the high-performance Giulia Quadrifoglio and Stelvio Quadrifoglio models and is manufactured at the Alfa Romeo Termoli engine plant.
Description
The 690T is often considered to be the Ferrari F154 engine with two fewer cylinders, but in fact it is a completely new engine developed by Gianluca Pivetti, the same engineer behind the F154; it shares some features Alfa knew worked well, which also reduced development time.
This 2.9-litre V6 uses single-scroll rather than twin-scroll turbos, which produce of boost pressure. Alfa also added mechanical cylinder deactivation to the right bank for increased highway fuel efficiency. The 90-degree V6 engine's crankshaft has three crankpins 120 degrees apart, each with two connecting rods mounted side by side. This configuration results in uneven firing at 90 and 150 degrees of each rotation, but for each cylinder bank results in even pulses every 240 degrees, providing evenly-spaced exhaust pulses to each turbocharger and allows one bank to deactivate. Additionally, from 2020 onward, Alfa added port injection, doubling the number of injectors to 12.
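A small arithmetic sketch of this firing-interval claim follows; the assignment of firing events to crank angles over the 720-degree four-stroke cycle is assumed for illustration.
# Each bank fires once per crank pin over the 720-degree cycle, i.e. every 240 degrees;
# the two banks are offset from each other by the 90-degree vee angle.
pin_events = [0, 240, 480]                 # firing angles contributed by one bank
bank_offset = 90
events = sorted(pin_events + [a + bank_offset for a in pin_events])
intervals = [(b - a) % 720 for a, b in zip(events, events[1:] + [events[0] + 720])]
print(events)      # [0, 90, 240, 330, 480, 570]
print(intervals)   # alternating 90- and 150-degree intervals; each bank alone fires every 240 degrees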
The Maserati 3.0-litre V6 Nettuno engine, introduced in the Maserati MC20, shares many of its characteristics with the Ferrari F154 and the Alfa Romeo 690T engines.
In 2023 Alfa Romeo presented the 33 Stradale model, which features a larger-displacement 690T engine, now at 3.0 litres and producing .
Applications
Alfa Romeo Giulia Quadrifoglio
Alfa Romeo Stelvio Quadrifoglio
Alfa Romeo Giulia GTA and GTAm
2023 Alfa Romeo Giulia SWB Zagato
Alfa Romeo
References
Alfa Romeo
V6 engines
Alfa Romeo engines
Gasoline engines by model
Engines by model
Piston engines
Internal combustion engine | Alfa Romeo 690T engine | [
"Technology",
"Engineering"
] | 400 | [
"Internal combustion engine",
"Engines",
"Engines by model",
"Piston engines",
"Combustion engineering"
] |
76,153,929 | https://en.wikipedia.org/wiki/Random%20two-sided%20matching | A random two-sided matching is a process by which members of two groups are matched to each other in a random way. It is often used in sports in order to match teams in knock-out tournaments. In this context, it is often called a draw, as it is implemented by drawing balls at random from a bowl, each ball representing the name of a team.
Examples
The UEFA Champions League and UEFA Europa League draw
A random two-sided matching occurs in the UEFA Champions League Round of 16 and UEFA Europa League Round of 32. After the group stage is played in 8 groups, the group winner and the group runner-up proceed to the knockout stage. The UEFA rules say that each winner should be paired with a runner-up. Without further constraints, this problem could easily be solved by finding a random permutation of the winners. But UEFA rules impose two additional constraints: two teams from the same group cannot be paired, and two teams from the same association cannot be paired. Thus, the goal is to choose a random matching in an incomplete bipartite graph.
The UEFA mechanism makes several draws from different bowls. At the beginning, there are:
Bowl 1, containing identical balls each of which represents one group runner-up;
Bowl 2, initially empty, to be filled and refilled later.
Bowls A to H, each of which represents a group winner and contains 7 balls with the winner's name on it.
The draw proceeds as follows:
A ball is drawn from bowl 1, and the runner-up's name is displayed;
A computer program shows all winners that can — according to the UEFA rules — be paired with the drawn runner-up. This takes into account not only current constraints, but also constraints for future runners-up.
From some of the bowls A to H, representing the potential winners, a single ball is taken and put in bowl 2;
The balls in bowl 2 are shuffled. One ball is drawn, and it represents the winner matched to the previously drawn runner-up.
Bowl 2 is emptied, and the process repeats for 8 rounds.
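A rough simulation sketch of this bowl procedure follows, with hypothetical group and association labels; the computer check is reduced to the two stated constraints plus a look-ahead that a complete pairing must remain possible.
import random

winners    = {f"W{g}": (g, f"A{g % 5}") for g in range(8)}        # hypothetical (group, association) labels
runners_up = {f"R{g}": (g, f"A{(g + 2) % 5}") for g in range(8)}

def allowed(w, r):
    # Same-group and same-association pairings are forbidden.
    return winners[w][0] != runners_up[r][0] and winners[w][1] != runners_up[r][1]

def completable(rem_w, rem_r):
    # Brute-force check that the remaining teams still admit a perfect matching.
    if not rem_r:
        return True
    r = rem_r[0]
    return any(completable([x for x in rem_w if x != w], rem_r[1:])
               for w in rem_w if allowed(w, r))

def uefa_draw():
    rem_w, rem_r, pairs = list(winners), list(runners_up), []
    while rem_r:
        r = random.choice(rem_r)          # ball drawn from bowl 1
        candidates = [w for w in rem_w    # winners the computer would place in bowl 2
                      if allowed(w, r) and completable([x for x in rem_w if x != w],
                                                       [x for x in rem_r if x != r])]
        w = random.choice(candidates)     # ball drawn from bowl 2
        pairs.append((r, w))
        rem_w.remove(w)
        rem_r.remove(r)
    return pairs

print(uefa_draw())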
This bowl procedure yields probabilities that differ from those of choosing a matching uniformly at random; this creates a distortion in the matching probabilities of different groups, which raises suspicion and conspiracy theories.
The FIFA draw
Another two-sided matching occurs in the FIFA World Cup. First, the runners-up are drawn in a random order. Then, each winner in turn is drawn, and it is matched to the first runner-up in the order, to which it can be matched according to the constraints.
This draw, too, produces distorted probabilities relative to the uniform-random matching.
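A corresponding sketch of this FIFA-style procedure follows, again with hypothetical labels; since the description does not specify what happens if the sequential pairing reaches a dead end, the sketch simply restarts the draw in that case.
import random

winners    = {f"W{g}": (g, f"A{g % 5}") for g in range(8)}        # hypothetical (group, association) labels
runners_up = {f"R{g}": (g, f"A{(g + 2) % 5}") for g in range(8)}

def allowed(w, r):
    return winners[w][0] != runners_up[r][0] and winners[w][1] != runners_up[r][1]

def fifa_draw():
    while True:                                   # restart if the greedy pairing gets stuck
        order = list(runners_up)
        random.shuffle(order)                     # runners-up are drawn in a random order first
        rem_w, pairs = list(winners), []
        try:
            while rem_w:
                w = random.choice(rem_w)          # winners are then drawn one by one
                r = next(x for x in order if allowed(w, x))   # first compatible runner-up in the order
                pairs.append((w, r))
                rem_w.remove(w)
                order.remove(r)
            return pairs
        except StopIteration:
            continue

print(fifa_draw())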
See also
Fair random assignment - one-sided matching - allocating items to agents with different preferences.
References
Matching (graph theory)
Randomness | Random two-sided matching | [
"Mathematics"
] | 569 | [
"Matching (graph theory)",
"Mathematical relations",
"Graph theory"
] |
76,155,114 | https://en.wikipedia.org/wiki/Splayed%20opening | In architecture, a splayed opening is a wall opening that is narrower on one side of the wall and wider on another. When used for a splayed window, it allows more light to enter the room. In fortifications, a splayed opening is used to broaden the arc of fire (cf. embrasure, loophole).
Splayed arch
A splayed arch (also sluing arch) is an arch where the springings are not parallel ("splayed"), causing an opening on the exterior side of an arch to be different (usually wider) than the interior one. The intrados of a splayed arch is not generally cylindrical as it is for typical (round) arch, but has a conical shape.
José Calvo-López, a Spanish scholar of architecture, subdivides splayed arches into symmetrical ones (where both springers form the same angles with the faces of the wall) and ox horn arches, where one springer is orthogonal to the wall and the other is not, creating a "warped" intrados (the term "ox horn" should not be confused with the "cow's horn", a design technique that was used for skew arch profiles).
Double-splayed window
Double-splayed windows, widening towards both wall faces, with the narrowest part in the middle of a wall, are considered common in Anglo-Saxon architecture, although the use of this trait for dating is questionable, and English church buildings of the 12th century have such windows too.
See also
Hagioscope, a splayed opening for observation
Squinch, a conical-shaped vault spanning the inner corner between two walls.
References
Sources
Arches and vaults | Splayed opening | [
"Engineering"
] | 361 | [
"Architecture stubs",
"Architecture"
] |
76,155,257 | https://en.wikipedia.org/wiki/Bayes%20correlated%20equilibrium | In game theory, a Bayes correlated equilibrium is a solution concept for static games of incomplete information. It is both a generalization of the perfect-information correlated equilibrium solution concept to Bayesian games, and a broader solution concept than the usual Bayesian Nash equilibrium of such games. Additionally, it can be seen as a generalized multi-player solution of the Bayesian persuasion information design problem.
Intuitively, a Bayes correlated equilibrium allows players to correlate their actions in such a way that no player, of any possible type, has an incentive to deviate. It was first proposed by Dirk Bergemann and Stephen Morris.
Formal definition
Preliminaries
Let be a set of players, and a set of possible states of the world. A game is defined as a tuple , where is the set of possible actions (with ) and is the utility function for each player, and is a full support common prior over the states of the world.
An information structure is defined as a tuple , where is a set of possible signals (or types) each player can receive (with ), and is a signal distribution function, informing the probability of observing the joint signal when the state of the world is .
By joining those two definitions, one can define as an incomplete information game. A decision rule for the incomplete information game is a mapping . Intuitively, the value of decision rule can be thought of as a joint recommendation for players to play the joint mixed strategy when the joint signal received is and the state of the world is .
Definition
A Bayes correlated equilibrium (BCE) is defined to be a decision rule which is obedient: that is, one where no player has an incentive to unilaterally deviate from the recommended joint strategy, for any possible type they may be. Formally, decision rule is obedient (and a Bayes correlated equilibrium) for game if, for every player , every signal and every action , we have
for all .
That is, every player obtains a higher expected payoff by following the recommendation from the decision rule than by deviating to any other possible action.
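A minimal sketch of checking obedience in a small finite setting follows, assuming two players, two actions, two states, a null information structure (so the decision rule depends only on the state), and hypothetical payoffs; it mirrors the inequality above.
import itertools
import numpy as np

prior = np.array([0.5, 0.5])                               # full-support common prior over two states
u = np.random.default_rng(0).uniform(size=(2, 2, 2, 2))    # u[i, theta, a0, a1]: payoff of player i
sigma = np.full((2, 2, 2), 0.25)                           # sigma[theta, a0, a1]: recommendation probabilities

def is_obedient(u, sigma, prior):
    for i, a_i, dev in itertools.product(range(2), range(2), range(2)):
        if dev == a_i:
            continue
        gain = 0.0
        for theta, a_other in itertools.product(range(2), range(2)):
            rec = (a_i, a_other) if i == 0 else (a_other, a_i)   # recommended joint action
            alt = (dev, a_other) if i == 0 else (a_other, dev)   # unilateral deviation by player i
            gain += prior[theta] * sigma[theta][rec] * (u[i, theta][rec] - u[i, theta][alt])
        if gain < -1e-12:   # following the recommendation must weakly beat every deviation
            return False
    return True

print(is_obedient(u, sigma, prior))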
Relation to other concepts
Bayesian Nash equilibrium
Every Bayesian Nash equilibrium (BNE) of an incomplete information game can be thought of as a BCE, where the recommended joint strategy is simply the equilibrium joint strategy.
Formally, let be an incomplete information game, and let be an equilibrium joint strategy, with each player playing . Therefore, the definition of BNE implies that, for every , and such that , we have
for every .
If we define the decision rule on as for all and , we directly get a BCE.
Correlated equilibrium
If there is no uncertainty about the state of the world (e.g., if is a singleton), then the definition collapses to Aumann's correlated equilibrium solution. In this case, is a BCE if, for every , we have
for every , which is equivalent to the definition of a correlated equilibrium for such a setting.
Bayesian persuasion
Additionally, the problem of designing a BCE can be thought of as a multi-player generalization of the Bayesian persuasion problem from Emir Kamenica and Matthew Gentzkow. More specifically, let be the information designer's objective function. Then her ex-ante expected utility from a BCE decision rule is given by:
If the set of players is a singleton, then choosing an information structure to maximize is equivalent to a Bayesian persuasion problem, where the information designer is called a Sender and the player is called a Receiver.
References
Game theory equilibrium concepts | Bayes correlated equilibrium | [
"Mathematics"
] | 717 | [
"Game theory",
"Game theory equilibrium concepts"
] |
76,155,381 | https://en.wikipedia.org/wiki/Leiden%20algorithm | The Leiden algorithm is a community detection algorithm developed by Traag et al. at Leiden University. It was developed as a modification of the Louvain method. Like the Louvain method, the Leiden algorithm attempts to optimize modularity in extracting communities from networks; however, it addresses key issues present in the Louvain method, namely poorly connected communities and the resolution limit of modularity.
Improvement over Louvain method
Broadly, the Leiden algorithm uses the same two primary phases as the Louvain algorithm: a local node moving step (though, the method by which nodes are considered in Leiden is more efficient) and a graph aggregation step. However, to address the issues with poorly-connected communities and the merging of smaller communities into larger communities (the resolution limit of modularity), the Leiden algorithm employs an intermediate refinement phase in which communities may be split to guarantee that all communities are well-connected.
Consider, for example, the following graph:
Three communities are present in this graph (each color represents a community). Additionally, the center "bridge" node (represented with an extra circle) is a member of the community represented by blue nodes. Now consider the result of a node-moving step which merges the communities denoted by red and green nodes into a single community (as the two communities are highly connected):
Notably, the center "bridge" node is now a member of the larger red community after node moving occurs (due to the greedy nature of the local node moving algorithm). In the Louvain method, such a merging would be followed immediately by the graph aggregation phase. However, this causes a disconnection between two different sections of the community represented by blue nodes. In the Leiden algorithm, the graph is instead refined:
The Leiden algorithm's refinement step ensures that the center "bridge" node is kept in the blue community to ensure that it remains intact and connected, despite the potential improvement in modularity from adding the center "bridge" node to the red community.
Graph components
Before defining the Leiden algorithm, it will be helpful to define some of the components of a graph.
Vertices and edges
A graph is composed of vertices (nodes) and edges. Each edge is connected to two vertices, and each vertex may be connected to zero or more edges. Edges are typically represented by straight lines, while nodes are represented by circles or points. In set notation, let be the set of vertices, and be the set of edges:
where is the directed edge from vertex to vertex . We can also write this as an ordered pair:
Community
A community is a unique set of nodes:
and the union of all communities must be the total set of vertices:
Partition
A partition is the set of all communities:
Partition Quality
How communities are partitioned is an integral part on the Leiden algorithm. How partitions are decided can depend on how their quality is measured. Additionally, many of these metrics contain parameters of their own that can change the outcome of their communities.
Modularity
Modularity is a widely used quality metric for assessing how well a set of communities partitions a graph. The equation for this metric is defined for an adjacency matrix, A, as:
where:
represents the edge weight between nodes and ; see Adjacency matrix;
and are the sum of the weights of the edges attached to nodes and , respectively;
is the sum of all of the edge weights in the graph;
and are the communities to which the nodes and belong; and
is the Kronecker delta function:
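To make these quantities concrete, here is a small sketch (assuming an undirected graph given as a symmetric weighted adjacency matrix and one community label per node) that evaluates the modularity sum directly.
import numpy as np

def modularity(A, communities):
    k = A.sum(axis=1)                  # weighted degree of each node
    two_m = A.sum()                    # twice the total edge weight
    Q = 0.0
    for i in range(len(A)):
        for j in range(len(A)):
            if communities[i] == communities[j]:        # Kronecker delta
                Q += A[i, j] - k[i] * k[j] / two_m
    return Q / two_m

# Two triangles joined by a single edge, each triangle labelled as one community.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
print(modularity(A, [0, 0, 0, 1, 1, 1]))   # about 0.357 for this toy graph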
Reichardt Bornholdt Potts Model (RB)
One of the most widely used metrics for the Leiden algorithm is the Reichardt Bornholdt Potts Model (RB). This model is used by default in most mainstream Leiden algorithm libraries under the name RBConfigurationVertexPartition. This model introduces a resolution parameter and is highly similar to the equation for modularity. This model is defined by the following quality function for an adjacency matrix, A, as:
where:
represents a linear resolution parameter
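A short usage sketch, assuming the python-igraph and leidenalg packages are available; the example graph and resolution value are only illustrative.
import igraph as ig
import leidenalg

g = ig.Graph.Famous("Zachary")                    # Zachary's karate club graph
partition = leidenalg.find_partition(
    g,
    leidenalg.RBConfigurationVertexPartition,     # the RB quality function described above
    resolution_parameter=1.0,                     # gamma; larger values favour smaller communities
)
print(len(partition))            # number of communities found
print(partition.membership)      # community label assigned to each vertex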
Constant Potts Model (CPM)
Another metric similar to RB is the Constant Potts Model (CPM). This metric also relies on a resolution parameter. The quality function is defined as:
Understanding Potts Model resolution parameters/Resolution limit
Typically, Potts models such as RB or CPM include a resolution parameter in their calculation. Potts models are introduced as a response to the resolution limit problem that is present in modularity maximization based community detection. The resolution limit problem is that, for some graphs, maximizing modularity may cause substructures of a graph to merge into a single community, and thus smaller structures are lost. These resolution parameters allow modularity-adjacent methods to be modified to suit the requirements of the user applying the Leiden algorithm, accounting for small substructures at a certain granularity.
The figure on the right illustrates why resolution can be a helpful parameter when using modularity based quality metrics. In the first graph, modularity only captures the large scale structures of the graph; however, in the second example, a more granular quality metric could potentially detect all substructures in a graph.
Algorithm
The Leiden algorithm starts with a graph of disorganized nodes (a) and sorts it by partitioning them to maximize modularity (the difference in quality between the generated partition and a hypothetical randomized partition of communities). The method it uses is similar to the Louvain algorithm, except that after moving each node it also considers that node's neighbors that are not already in the community it was placed in. This process results in our first partition (b), also referred to as . Then the algorithm refines this partition by first placing each node into its own individual community and then moving them from one community to another to maximize modularity. It does this iteratively until each node has been visited and moved, and each community has been refined - this creates partition (c), which is the initial partition of . Then an aggregate network (d) is created by turning each community into a node. is used as the basis for the aggregate network while is used to create its initial partition. Because we use the original partition in this step, we must retain it so that it can be used in future iterations. These steps together form the first iteration of the algorithm.
In subsequent iterations, the nodes of the aggregate network (which each represent a community) are once again placed into their own individual communities and then sorted according to modularity to form a new , forming (e) in the above graphic. In the case depicted by the graph, the nodes were already sorted optimally, so no change took place, resulting in partition (f). Then the nodes of partition (f) would once again be aggregated using the same method as before, with the original partition still being retained. This portion of the algorithm repeats until each aggregate node is in its own individual network; this means that no further improvements can be made.
The Leiden algorithm consists of three main steps: local moving of nodes, refinement of the partition, and aggregation of the network based on the refined partition. All of the functions in the following steps are called from the main function Leiden, depicted below. The fast Louvain node-moving step is borrowed by the authors of Leiden from "A Simple Acceleration Method for the Louvain Algorithm".
function Leiden_community_detection(Graph G, Partition P)
do
P = fast_louvain_move_nodes(G, P)
/* Call the function to move the nodes to communities.(more details in function below). */
done = (|P| == |V(G)|)
/* If the number of partitions in P equals the number of nodes in G, then set done flag to True to end do-while loop, as this will mean that each node has been aggregated into its own community. */
if not done
P_refined = get_p_refined(G, P)
/* This is a crucial part of what separates Leiden from Louvain, as this refinement of the partition enforces that only nodes that are well connected within their community are considered to be moved out of the community. (more detail in function refine_partition_subset below). */
G = aggregate_graph(G, P_refined)
/* Aggregates communities into single nodes for next iteration (details in function below). */
P = {{v | v ⊆ C, v ∈ V (G)} | C ∈ P}
/* This line essentially takes nodes from the communities in P and breaks them down so that each node is treated as its own singleton community (community made up of one node). */
end if
while not done
return flattened(P) /* Return final partition where all nodes of G are listed in one community each. */
end function
Step 1: Local Moving of Nodes
First, we move the nodes from into neighboring communities to maximize modularity (the difference in quality between the generated partition and a hypothetical randomized partition of communities). In the above image, our initial collection of unsorted nodes is represented by the graph on the left, with each node's unique color representing that they do not belong to a community yet. The graph on the right is a representation of this step's result, the sorted graph ; note how the nodes have all been moved into one of three communities, as represented by the nodes' colors (red, blue, and green).
function fast_louvain_move_nodes(Graph G, Partition P)
Q = queue(V(G)) /* Place all of the nodes of G into a queue to ensure that they are all visited. */
while Q not empty
v = Q.pop_front() /* Select the first node from the queue to visit. */
C_prime = arg maxC∈P∪∅ ∆HP(v → C)
/* Set C_prime to be the community in P or the empty set (no community) that provides the maximum increase in the Quality function H when node v is moved into that community. */
if ∆HP(v → C_prime) > 0 /* Only look at moving nodes that will result in a positive change in the quality function. */
v → C_prime /* Move node v to community C_prime */
N = {u | (u, v) ∈ E(G), u !∈ C_prime} /* Create a set N of nodes that are direct neighbors of v but are not in the community C_prime. */
Q.add(N - Q) /* Add all of the nodes from N to the queue, unless they are already in Q. */
end if
return P /* Return the updated partition. */
end function
Step 2: Refinement of the Partition
Next, each node in the network is assigned to its own individual community and then moved them from one community to another to maximize modularity. This occurs iteratively until each node has been visited and moved, and is very similar to the creation of except that each community is refined after a node is moved. The result is our initial partition for , as shown on the right. Note that we're also keeping track of the communities from , which are represented by the colored backgrounds behind the nodes.
function get_p_refined(Graph G, Partition P)
P_refined = get_singleton_partition(G) /* Assign each node in G to a singleton community (a community by itself). */
for C ∈ P
P_refined = refine_partition_subset(G, P_refined, C)
/* Refine partition for each of the communities in P_refined. */
end for
return P_refined /* return newly refined partition. */
function refine_partition_subset(Graph G, Partition P, Subset S)
R = {v | v ∈ S, E(v, S − v) ≥ γ * degree(v) * (degree(S) − degree(v))}
/* For node v, which is a member of subset S, check if E(v, S-v) (the edges of v connected to other members of the community S, excluding v itself) are above a certain scaling factor. degree(v) is the degree of node v and degree(S) is the total degree of the nodes in the subset S. This statement essentially requires that if v is removed from the subset, the community will remain in tact. */
for v ∈ R
if v in singleton_community /* If node v is in a singleton community, meaning it is the only node. */
T = {C | C ∈ P, C ⊆ S, E(C, S − C) ≥ γ * degree(C) · (degree(S) − degree(C)}
/* Create a set T of communities where E(C, S - C) (the edges between community C and subset S, excluding edges between community C and itself) is greater than the threshold. The threshold here is γ * degree(C) · (degree(S) − degree(C). */
Pr(C_prime = C) ∼ exp((1/θ) * ∆HP(v → C)) if ∆HP(v → C) ≥ 0, and 0 otherwise, for C ∈ T
/* If moving the node v to C_prime changes the quality function in the positive direction, set the probability that the community of v to exp(1/θ * ∆HP(v → C)) else set it to 0 for all of the communities in T. */
v → C_prime /* Move node v into a random C_prime community with a positive probability. */
end if
end for
return P /* return refined partition */
end function
Step 3: Aggregation of the Network
We then convert each community in into a single node. Note how, as is depicted in the above image, the communities of are used to sort these aggregate nodes after their creation.
function aggregate_graph(Graph G, Partition P)
V = P /* Set communities of P as individual nodes of the graph. */
E = {(C, D) | (u, v) ∈ E(G), u ∈ C ∈ P, v ∈ D ∈ P} /* If u is a member of subset C of P, and v is a member subset D of P and u and v share an edge in E(G), then we add a connection between C and D in the new graph. */
return Graph(V, E) /* Return the new graph's nodes and edges. */
end function
function get_singleton_partition(Graph G)
return {{v} | v ∈ V (G)} /* This is the function where we assign each node in G to a singleton community (a community by itself). */
end function
We repeat these steps until each community contains only one node, with each of these nodes representing an aggregate of nodes from the original network that are strongly connected with each other.
Limitations
The Leiden algorithm does a great job of creating a quality partition which places nodes into distinct communities. However, Leiden creates a hard partition, meaning nodes can belong to only one community. In many networks such as social networks, nodes may belong to multiple communities and in this case other methods may be preferred.
Leiden is more efficient than Louvain, but in the case of massive graphs may result in extended processing times. Recent advancements have boosted the speed using a "parallel multicore implementation of the Leiden algorithm".
The Leiden algorithm does much to overcome the resolution limit problem. However, there is still the possibility that small substructures can be missed in certain cases. The selection of the gamma parameter is crucial to ensure that these structures are not missed, as it can vary significantly from one graph to the next.
References
Algorithms
Network theory | Leiden algorithm | [
"Mathematics"
] | 3,304 | [
"Algorithms",
"Mathematical logic",
"Applied mathematics",
"Graph theory",
"Network theory",
"Mathematical relations"
] |
76,158,248 | https://en.wikipedia.org/wiki/WAY-261240 | WAY-261240 is a drug which acts as a potent and selective 5-HT2C receptor agonist, though its affinity at other serotonin receptors has not been disclosed. It produces anorectic effects in animal studies. A large family of related derivatives is known.
See also
Lorcaserin
WAY-163909
WAY-470
References
Serotonin receptor agonists
Chloroarenes
Chromanes
Amines | WAY-261240 | [
"Chemistry"
] | 93 | [
"Amines",
"Bases (chemistry)",
"Functional groups"
] |
63,130,267 | https://en.wikipedia.org/wiki/A%20Guide%20to%20the%20Classification%20Theorem%20for%20Compact%20Surfaces | A Guide to the Classification Theorem for Compact Surfaces is a textbook in topology, on the classification of two-dimensional surfaces. It was written by Jean Gallier and Dianna Xu, and published in 2013 by Springer-Verlag as volume 9 of their Geometry and Computing series (, ). The Basic Library List Committee of the Mathematical Association of America has recommended its inclusion in undergraduate mathematics libraries.
Topics
The classification of surfaces (more formally, compact two-dimensional manifolds without boundary) can be stated very simply, as it depends only on the Euler characteristic and orientability of the surface. An orientable surface of this type must be topologically equivalent (homeomorphic) to a sphere, torus, or more general handlebody, classified by its number of handles. A non-orientable surface must be equivalent to a projective plane, Klein bottle, or more general surface characterized by an analogous number, its number of cross-caps. For compact surfaces with boundary, the only extra information needed is the number of boundary components. This result is presented informally at the start of the book, as the first of its six chapters. The rest of the book presents a more rigorous formulation of the problem, a presentation of the topological tools needed to prove the result, and a formal proof of the classification.
Other topics in topology discussed as part of this presentation include simplicial complexes, fundamental groups, simplicial homology and singular homology, and the Poincaré conjecture. Appendices include additional material on embeddings and self-intersecting mappings of surfaces into three-dimensional space such as the Roman surface, the structure of finitely generated abelian groups, general topology, the history of the classification theorem, and the Hauptvermutung (the theorem that every surface can be triangulated).
Audience and reception
This is a textbook aimed at the level of advanced undergraduates or beginning graduate students in mathematics, perhaps after having already completed a first course in topology. Readers of the book are expected to already be familiar with general topology, linear algebra, and group theory. However, as a textbook, it lacks exercises, and reviewer Bill Wood suggests its use for a student project rather than for a formal course.
Many other graduate algebraic topology textbooks include coverage of the same topic.
However, by focusing on a single topic, the classification theorem, the book is able to prove the result rigorously while remaining at a lower overall level, provide a greater amount of intuition and history, and serve as "a motivating tour of the discipline’s fundamental techniques".
Reviewer complains that parts of the book are redundant, and in particular that the classification theorem can be proven either with the fundamental group or with homology (not needing both), that on the other hand several important tools from topology including the Jordan–Schoenflies theorem are not proven, and that several related classification results are omitted. Nevertheless, reviewer D. V. Feldman highly recommends the book, Wood writes "This is a book I wish I’d had in graduate school", and reviewer Werner Kleinert calls it "an introductory text of remarkable didactic value".
References
External links
Author's web site for A Guide to the Classification Theorem for Compact Surfaces including a PDF version of Chapter 1
Low-dimensional topology
Manifolds
Mathematics textbooks
2013 non-fiction books
Springer Science+Business Media books | A Guide to the Classification Theorem for Compact Surfaces | [
"Mathematics"
] | 683 | [
"Low-dimensional topology",
"Space (mathematics)",
"Topological spaces",
"Topology",
"Manifolds"
] |
63,135,851 | https://en.wikipedia.org/wiki/Immunometabolism | Immunometabolism is a branch of biology that studies the interplay between metabolism and immunology in all organisms. In particular, immunometabolism is the study of the molecular and biochemical underpinnings for i) the metabolic regulation of immune function, and ii) the regulation of metabolism by molecules and cells of the immune system. Further categorization includes i) systemic immunometabolism and ii) cellular immunometabolism.
Immunometabolism includes metabolic inflammation: a chronic, systemic, low-grade inflammation, orchestrated by metabolic deregulation caused by obesity or aging.
Immunometabolism first appears in academic literature in 2011, where it is defined as "an emerging field of investigation at the interface between the historically distinct disciplines of immunology and metabolism." A later article defines immunometabolism as describing "the changes that occur in intracellular metabolic pathways in immune cells during activation". Broadly, immunometabolic research records the physiological functioning of the immune system in the context of different metabolic conditions in health and disease. These studies can cover molecular and cellular aspects of immune system function in vitro, in situ, and in vivo, under different metabolic conditions. For example, highly proliferative cells such as cancer cells and activating T cells undergo metabolic reprogramming, increasing glucose uptake to shift towards aerobic glycolysis during normoxia. While aerobic glycolysis is an inefficient pathway for ATP production in quiescent cells, this so-called “Warburg effect” supports the bioenergetic and biosynthetic needs of rapidly proliferating cells.
Signalling and metabolic network
There are many indispensable signalling molecules connected to metabolic processes, which play an important role both in immune system homeostasis and in the immune response. Of these, the most significant are mammalian target of rapamycin (mTOR), liver kinase B1 (LKB1), 5' AMP-activated protein kinase (AMPK), phosphoinositide 3 kinase (PI3K) and protein kinase B (akt). Together, these molecules control the most important metabolic pathways in cells, such as glycolysis, the Krebs cycle and oxidative phosphorylation. To fully understand how all of these molecules and pathways affect immune cells, one first needs to examine their delicate interplay.
mTOR
mTOR is a serine/threonine protein kinase, which is found in two complexes in cells: mTOR complex 1 and 2 (mTORC1 and mTORC2). mTORC1 is activated through engagement of the T cell receptor (TCR) and the costimulatory molecule cluster of differentiation 28 (CD28). However, it can also be activated by growth factors like IL-7 or IL-2 and by metabolites like glucose or amino acids (leucine, arginine or glutamine). In contrast, there are more gaps in how the mTORC2 pathway functions, but its activation is also achieved through growth factors, as exemplified by IL-2.
When activated, mTORC1 negatively regulates autophagy (by inhibiting the ULK complex), shifts the cell towards aerobic glycolysis and glutaminolysis (through activation of c-Myc), and promotes lipid synthesis and mitochondrial remodelling. mTORC2 enhances glycolysis as well, but in contrast to mTORC1, it activates akt, which in turn promotes glucose transporter 1 (GLUT1) membrane deposition. It also further promotes, through other kinases, cell proliferation and survival.
PI3K-akt
PI3K mediates the phosphorylation of phosphatidylinositol-(4,5)-bisphosphate (PIP2) into phosphatidylinositol-(3,4,5)-trisphosphate (PIP3). PIP3 then serves as a scaffold for other proteins, which contain a pleckstrin homology (PH) domain. It can be activated, just like mTOR, through TCR, CD28 and, unlike mTOR, through another costimulatory molecule: Inducible T-cell COStimulator (ICOS).
The presence of PIP3 on a membrane recruits many proteins, including phosphoinositide-dependent protein kinase 1 (PDK1), which after its phosphorylation, together with mTORC2, activates akt, a serine/threonine kinase. As a result, akt promotes GLUT1 membrane deposition and also inhibits the transcription factor forkhead box O (FoxO), whose inactivation acts in synergy with the mTORC2-driven changes mentioned above.
LKB1-AMPK
Both LKB1 and AMPK are serine/threonine kinases acting predominantly in opposition to the aforementioned molecules. Of the two, LKB1's activation is less well understood, as it depends mainly on cellular localization and on many posttranslational modifications. For instance, the above-mentioned akt can promote LKB1 inhibition by promoting its nuclear retention. When activated, LKB1 can activate, among other targets, AMPK, whose activation leads to mTORC1 destabilization. Furthermore, AMPK activates the ULK complex and phosphorylates p53 and acetyl-CoA carboxylase (ACC), which promote autophagy, cell cycle arrest and fatty acid oxidation, respectively. Since AMPK can also be activated by adenosine monophosphate (AMP) or by glucose insufficiency, it acts as a sensor of starvation and therefore activates many of the catabolic processes already mentioned, in direct contrast with mTOR, which activates a myriad of anabolic processes.
Immune cells
Generally speaking, cells whose primary objective is long-term survival or control of inflammation tend, in terms of energy, to rely on the Krebs cycle and lipid oxidation, both of which are coupled with functional oxidative phosphorylation. These cells include naive T cells, memory T cells, regulatory T cells (Tregs), unstimulated innate immune cells like macrophages, and M2 macrophages. In contrast, cells whose main function is proliferation, synthesis of different molecules or propagation of inflammation often prefer glycolysis as a source of energy and metabolites. These include, for instance, effector T cells and M1 macrophages.
T cells
Naive T cells have to be kept in a permanent state of quiescence until they encounter their cognate antigen. The quiescent state is sustained by tonic TCR signalling and by IL-7. Tonic TCR signalling is necessary to keep the FoxO transcription factor active, which in turn allows for IL-7R transcription. This enables the T cell to survive and proliferate at a low rate. However, during this tonic TCR signalling, the proteins that control metabolism have to be strictly regulated, because their activation could lead to spontaneous exit from quiescence and differentiation into various T cell subsets, as exemplified by the uncontrolled activation of PI3K, which causes the development of Th1 or Th2 cells.
Both of the aforementioned signals should lead to mTOR and akt activation, but in quiescent T cells the tuberous sclerosis complex (TSC) and phosphatase and tensin homolog (PTEN) act against their activation. Therefore, a naive T cell depends predominantly on oxidative phosphorylation and has much lower glucose uptake and ATP production than its activated counterparts (effector T cells).
Quiescence exit begins when a T cell encounters its cognate antigen, usually during an infection. The TCR signal together with the costimulation signal leads to downregulation of PTEN and TSC. This allows the phosphorylation cascades of mTOR, akt and many more kinases to become fully activated. The activity of these cascades results in glucose and glutamine uptake coupled with increased glycolysis and glutaminolysis, which not only supports rapid cell growth but also further promotes mTOR activation. Furthermore, mTOR stimulates lipid synthesis and mitochondrial remodelling, exemplified by increased expression of sterol regulatory element-binding protein (SREBP) and by mitochondria undergoing fission, which causes them to function predominantly as biosynthetic hubs rather than energy-production hubs. After their activation and metabolic reprogramming, T cells compete with one another, and consequently it is very likely that during the effector phase T cells reach a point where they suffer from a lack of nutrients. In such cases AMPK is activated to balance the mTOR signalling and to prevent apoptosis.
The described scheme of quiescence exit holds true for inflammatory T cell subsets like Th1, Th2, Th17 and cytotoxic T cells. However, mTOR activity can be detrimental in the case of Tregs: in Tregs, high activation of mTORC1 coupled with a higher level of glycolysis leads to failure of Treg lineage commitment. Therefore, in contrast to inflammatory cell subsets, Tregs rely on oxidative phosphorylation fuelled by lipid oxidation. It is important to note, though, that complete suppression of glycolysis leads to enolase (a glycolytic enzyme) binding to a splice variant of Foxp3, which effectively compromises the ability of peripheral Tregs to act as immunosuppressive cells.
After the infection is cleared, most of the activated T cells succumb to apoptosis. However, a few of them survive and develop into the memory T cell subsets. For this development the engagement of costimulatory molecules, like CD28, appears to be crucial, as the co-stimulation manifests in mitochondrial morphology, allowing for higher oxidative phosphorylation while retaining the potential to quickly revert to glycolysis. Moreover, T cell activation causes an overall increase in acetyl-CoA, which is a substrate for histone acetylation. As a result, many genes are acetylated and therefore accessible to transcription even after differentiation into memory subsets, allowing memory T cells to rapidly re-express some effector-related genes. The aforementioned changes allow T cells to become memory cells, but what exactly drives memory cell differentiation is still under debate, even though IL-15 seems to be necessary for T cell memory induction. Recently, asymmetric distribution of mTORC1 during the first divisions after TCR activation has been shown to drive memory cell differentiation in those cells which receive the lower amount of mTORC1.
Macrophages
Immunometabolism of macrophages is mostly studied in the two opposing populations of macrophages: M1 and M2. M1 macrophages are a pro-inflammatory population induced by LPS or IFNγ. This activation leads, as in the case of T cells, to an increase in glucose uptake and glycolysis. What is strikingly different is the Krebs cycle, which in M1 macrophages is broken at two places. The first break is at the conversion of isocitrate to α-ketoglutarate, owing to the downregulation of isocitrate dehydrogenase. Accumulated citrate is subsequently used for lipid and itaconate synthesis, which are both indispensable for M1 macrophage function. The second break, at the succinate to fumarate transition, occurs probably due to the itaconate production and causes a build-up of succinate. This triggers ROS production, which stabilizes HIF-1α. This transcription factor further promotes glycolysis and is essential for the activation of inflammatory macrophages.
M2 macrophages are anti-inflammatory cells whose induction requires IL-4. The metabolism of M2 macrophages is markedly distinct from that of M1 macrophages due to their unbroken Krebs cycle, which after activation is fuelled by upregulated glycolysis, glutaminolysis and fatty acid oxidation. How the fully operational Krebs cycle translates into M2 macrophage function is still poorly understood, but the upregulated pathways allow for production of intermediates (mainly acetyl-CoA and S-adenosyl methionine), which are needed for histone modifications of genes targeted by IL-4 signalling.
Drug discovery
Immunometabolism is an area of growing drug discovery research investment across numerous areas of medicine, for example in lessening the impact of age-related metabolic dysfunction and obesity on the incidence of type 2 diabetes, cardiovascular disease, cancer, and infectious diseases. In recent years, evidence has emerged that immunometabolism is also implicated in autoimmune disorders. The effects of metabolic alterations on immune system regulation have provided unique insights into disease pathogenesis and development, as well as potential therapeutic targets.
Immunometabolism - from inflammation to sepsis
Sepsis-related immunometabolic paralysis
Sepsis pathophysiology now includes immunometabolic paralysis, a condition marked by severe abnormalities in cellular energy metabolism. This phenomenon affects both the acute and late stages of the disease and plays a critical role in the immune response during sepsis.
Sepsis is a potentially fatal illness brought on by the body's overreaction to an infection. Although there is a strong inflammatory response during the early phase of sepsis, immunometabolic paralysis may appear later on and is associated with a poor patient prognosis. Research by Shih Chin Cheng and colleagues has explored the complex interplay between cellular metabolism and the immune response in sepsis, with three key findings:
Transition from oxidative phosphorylation to aerobic glycolysis: during the acute stage of sepsis, cells shift from oxidative phosphorylation to aerobic glycolysis (the Warburg effect). This metabolic change is one of the key mechanisms in the initial activation of the host defence against infection.
Impaired energy metabolism in leukocytes: patients with acute sepsis exhibit extensive impairments in cellular energy metabolism, affecting both leukocyte glycolysis and oxidative metabolism. This condition, termed immunometabolic paralysis, is associated with a compromised capacity to react to secondary stimuli.
The role of IFN-γ in restoring glycolysis: interferon-gamma (IFN-γ) is being explored as a possible treatment; in vitro experiments showed that IFN-γ therapy partially restored glycolysis in tolerant monocytes, demonstrating its ability to mitigate the metabolic abnormalities linked to immunotolerance.
These findings emphasize that cellular metabolism in sepsis might be targeted therapeutically. Although few medicines with metabolic-regulatory properties have been investigated so far, understanding and treating immunometabolic paralysis may help improve outcomes for individuals suffering from sepsis, and further investigation of therapeutic approaches aimed at cellular metabolism is likely to improve its management.
References
External links
Metabolism
Immunology | Immunometabolism | [
"Chemistry",
"Biology"
] | 3,315 | [
"Biochemistry",
"Immunology",
"Metabolism",
"Cellular processes"
] |
63,138,024 | https://en.wikipedia.org/wiki/Shahzeen%20Attari | Shahzeen Attari is a professor at the O'Neill School of Public and Environmental Affairs at Indiana University Bloomington. She studies how and why people make the judgements and decisions they do with regards to resource use and how to motivate climate action. In 2018, Attari was selected as an Andrew Carnegie Fellow in recognition of her work addressing climate change. She was also a fellow at the Center for Advanced Study in the Behavioral Sciences (CASBS) from 2017 to 2018, and received a Bellagio Writing Fellowship in 2022.
Early life and education
Shahzeen Attari was born in Mumbai, India and grew up in Dubai, United Arab Emirates. As she grew up, she witnessed first-hand how the desert transformed into a metropolis over a short span of time. Coming to understand the massive impacts humans can have on nature drew Attari to work on the environment and human behavior.
Attari studied physics and math at the University of Illinois Urbana-Champaign Grainger College of Engineering, earning her B.S. in Engineering Physics in 2004. Drawn to interdisciplinary research, she then went on to earn her M.S. in Civil and Environmental Engineering from Carnegie Mellon College of Engineering in 2005, and her Ph.D. in Civil and Environmental Engineering & Engineering and Public Policy, also from Carnegie Mellon. Her dissertation assessed how demand-side management methods can mitigate carbon emissions. She completed her doctorate in 2009.
Research and career
Attari is a professor at the O’Neill School of Public and Environmental Affairs at Indiana University Bloomington. Previously, she was a postdoctoral fellow at the Earth Institute at the Center for Research on Environmental Decisions (CRED) at Columbia University from 2009 to 2011.
Perceptions of energy and water
During her Ph.D., Attari conducted a study on how people perceive how much energy different appliances use. In this work Attari and colleagues found that for a sample of 15 activities, participants underestimated energy use and savings by a factor of 2.8 on average, with small overestimates for low-energy activities and large underestimates for high-energy activities. This study, published in the Proceedings of the National Academy of Sciences, highlighted the need for communication campaigns to correct these skewed perceptions and inform individuals of ways in which they can most successfully reduce their energy use. This study has been summarized by The Economist, The New York Times, and BBC.
Later, Attari independently investigated how participants think about water use. In another study published in the Proceedings of the National Academy of Sciences, Attari showed that participants still favor curtailment (doing the same behavior but less of it) over efficiency (switching to more efficient technologies that use less water or energy for the same task). For a sample of 17 activities, participants underestimated water use by a factor of 2 on average, with large underestimates for high water-use activities. Combining her work on energy and water, Attari showed that perceptions of energy use are far less accurate than perceptions of water use.
Overall, her work has found that participants consistently underestimate their water and energy use and know surprisingly little about which curtailment efforts will have the greatest impact on the environment. She presented these results at TEDx Bloomington, answering the question: why don’t people conserve energy and water?
Credibility and climate communication
Another line of research that Attari and collaborators have pursued is understanding the relationship between a climate communicator's carbon footprint and the effect of their advocacy on participants. They find that the communicator's carbon footprint strongly affects their credibility and their audience's intentions to conserve energy, and also affects audience support for public policies advocated by the communicator. They also show that the negative effects of a large carbon footprint on credibility are greatly reduced if the communicator reforms their behavior by reducing their personal carbon footprint. The implications of these results are stark: effective communication of climate science, and advocacy of both individual behavior change and public policy interventions, are greatly helped when advocates lead the way by reducing their own carbon footprints.
With funding from the Andrew Carnegie Fellowship, Attari is conducting the following research project: Motivating climate change solutions by fusing facts and feelings.
Attari has assumed the role of both a scientist and activist, using her research to inspire greater change. She consistently gives public lectures and academic talks to communicate her research results and to advocate for solutions.
Awards and grants
Her awards and honors:
Andrew Carnegie Fellow
Indiana University Bicentennial Professorship
Center for Advanced Study in the Behavioral Sciences Fellowship
SN10 – Among top ten scientists to watch under the age of 40, Science News
Outstanding Junior Faculty Award, Indiana University
Excellence in Teaching, Campus Catalyst Award, Office of Sustainability, Indiana University
Attari has received research grants from the following:
Carnegie Corporation, Andrew Carnegie Fellowship
National Science Foundation - Decision, Risk, and Management Science
Environmental Resilience Institute, Indiana University's Prepared for Environmental Change Grand Challenge Initiative
Selected publications
Her publications include:
Shahzeen Z. Attari, David H. Krantz, & Elke U. Weber. (2019). Climate change communicators’ carbon footprints affect their audience's policy support. Climatic Change, 154(3–4), 529–545.
Shahzeen Z. Attari, David H. Krantz, & Elke U. Weber (2016). Statements about climate researchers’ carbon footprints affect their credibility and the impact of their advice. Climatic Change, 138(1–2), 325–338.
Benjamin D. Inskeep & Shahzeen Z. Attari (2014) The Water Shortlist, Environment: Science and Policy for Sustainable Development
Shahzeen Z. Attari (2014) Perceptions of Water Use, Proceedings of the National Academy of Sciences
Jonathan E. Cook & Shahzeen Z. Attari (2012) Paying for What Was free: Lessons from the New York Times Paywall, Cyberpsychology, Behavior, and Social Networking [DOI: http://doi.org/10.1089/cyber.2012.0251]
Shahzeen Z. Attari, Michael L. DeKay, Cliff I. Davidson, and Wändi Bruine de Bruin (2010) Public perceptions of energy consumption and savings, Proceedings of the National Academy of Sciences
Personal life
Attari enjoys hiking with her dog, spicy food, and reading science fiction novels. She believes that science fiction books inspire us to reimagine the world we live in.
References
Year of birth missing (living people)
Living people
American environmentalists
21st-century American women scientists
Carnegie Mellon University College of Engineering alumni
Indiana University Bloomington faculty
Grainger College of Engineering alumni
Scientists from Mumbai
21st-century American engineers
American women engineers
People from Dubai
American climate activists
Indian climate activists
Energy use comparisons
Indian emigrants to the United States
Indian environmental scientists | Shahzeen Attari | [
"Environmental_science"
] | 1,432 | [
"Indian environmental scientists",
"Environmental scientists"
] |
63,138,221 | https://en.wikipedia.org/wiki/Runtime%20predictive%20analysis | Runtime predictive analysis (or predictive analysis) is a runtime verification technique in computer science for detecting property violations in program executions inferred from an observed execution. An important class of predictive analysis methods has been developed for detecting concurrency errors (such as data races) in concurrent programs, where a runtime monitor is used to predict errors which did not happen in the observed run, but can happen in an alternative execution of the same program. The predictive capability comes from the fact that the analysis is performed on an abstract model extracted online from the observed execution, which admits a class of executions beyond the observed one.
Overview
Informally, given an execution t, predictive analysis checks errors in a reordered trace t' of t. The trace t' is called feasible from t (alternatively, a correct reordering of t) if any program that can generate t can also generate t'.
In the context of concurrent programs, a predictive technique is sound if it only predicts concurrency errors in feasible executions of the causal model of the observed trace. Assuming the analysis has no knowledge about the source code of the program, the analysis is complete (also called maximal) if the inferred class of executions contains all executions that have the same program order and communication order prefix of the observed trace.
Applications
Predictive analysis has been applied to detect a wide class of concurrency errors, including:
Data races
Deadlocks
Atomicity violations
Order violations, e.g., use-after-free errors
Implementation
As is typical with dynamic program analysis, predictive analysis first instruments the source program. At runtime, the analysis can be performed online, in order to detect errors on the fly. Alternatively, the instrumentation can simply dump the execution trace for offline analysis. The latter approach is preferred for expensive refined predictive analyses that require random access to the execution trace or take more than linear time.
Incorporating data and control-flow analysis
Static analysis can be first conducted to gather data and control-flow dependence information about the source program, which can help construct the causal model during online executions. This allows predictive analysis to infer a larger class of executions based on the observed execution. Intuitively, a feasible reordering can change the last writer of a memory read (data dependence) if the read, in turn, cannot affect whether any accesses execute (control dependence).
Approaches
Partial order based techniques
Partial order based techniques are most often employed for online race detection. At runtime, a partial order over the events in the trace is constructed, and any unordered pairs of critical events are reported as races. Many predictive techniques for race detection are based on the happens-before relation or a weakened version of it. Such techniques can typically be implemented efficiently with vector clock algorithms, allowing only one pass of the whole input trace as it is being generated, and are thus suitable for online deployment.
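As an illustration of how such a detector can work, the following Python sketch is a simplified, hypothetical vector-clock race detector in the spirit of algorithms such as DJIT+; it is not the code of any particular tool, and the (thread, operation, target) event format is an assumption made only for this example.

```python
from collections import defaultdict

def hb_race_detect(trace, num_threads):
    """One-pass happens-before race detection with vector clocks.

    `trace` is a list of (thread, op, target) tuples where op is one of
    'acq', 'rel', 'rd', 'wr'.  Returns (event_index, variable) pairs
    whose access is unordered with a conflicting earlier access."""
    C = [[0] * num_threads for _ in range(num_threads)]   # per-thread vector clocks
    for t in range(num_threads):
        C[t][t] = 1                                        # distinguish threads from the start
    L = defaultdict(lambda: [0] * num_threads)             # lock clocks
    W = defaultdict(lambda: [0] * num_threads)             # clock of last write per variable
    R = defaultdict(lambda: [0] * num_threads)             # join of read clocks per variable

    def leq(a, b):                                          # does a happen-before b?
        return all(x <= y for x, y in zip(a, b))

    def join(a, b):
        return [max(x, y) for x, y in zip(a, b)]

    races = []
    for i, (t, op, x) in enumerate(trace):
        if op == 'acq':
            C[t] = join(C[t], L[x])                         # learn what the releaser knew
        elif op == 'rel':
            L[x] = list(C[t])
            C[t][t] += 1                                    # later events of t are not ordered via this release
        elif op == 'rd':
            if not leq(W[x], C[t]):                         # read unordered with the last write
                races.append((i, x))
            R[x] = join(R[x], C[t])
        elif op == 'wr':
            if not leq(W[x], C[t]) or not leq(R[x], C[t]):  # write unordered with a prior access
                races.append((i, x))
            W[x] = list(C[t])
    return races
```

For example, on the trace [(0, 'wr', 'x'), (1, 'wr', 'x')] with two threads the second write is flagged, whereas wrapping both writes in acquire/release of a common lock removes the report.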
SMT-based techniques
SMT encodings allow the analysis to extract a refined causal model from an execution trace, as a (possibly very large) mathematical formula. Furthermore, control-flow information can be incorporated into the model. SMT-based techniques can achieve soundness and completeness (also called maximal causality), but have exponential-time complexity with respect to the trace size. In practice, the analysis is typically deployed to bounded segments of an execution trace, thus trading completeness for scalability.
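As a highly simplified illustration of the encoding idea (using the Z3 Python bindings; the toy trace and the constraints below are far coarser than real maximal-causality encodings):

```python
from z3 import Ints, Solver, Or, Distinct, sat

# Toy trace: thread 1 does  acq(l) w(x) rel(l);  thread 2 does  acq(l) w(x) rel(l).
# Each integer variable is the position of one event in a candidate reordering.
a1, w1, r1, a2, w2, r2 = Ints('a1 w1 r1 a2 w2 r2')

s = Solver()
s.add(Distinct(a1, w1, r1, a2, w2, r2))          # events form a total order
s.add(a1 < w1, w1 < r1, a2 < w2, w2 < r2)        # program order within each thread
s.add(Or(r1 < a2, r2 < a1))                      # critical sections on lock l cannot overlap
# Race query: can the two writes to x be scheduled next to each other?
s.add(Or(w2 == w1 + 1, w1 == w2 + 1))

print("potential race" if s.check() == sat else "no race in this model")
```

Because both writes are protected by the same lock, the constraints are unsatisfiable and the sketch prints "no race in this model"; removing the lock constraint makes the query satisfiable.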
Lockset-based approaches
In the context of data race detection for programs using lock-based synchronization, lockset-based techniques provide an unsound, yet lightweight mechanism for detecting data races. These techniques primarily detect violations of the lockset principle, which says that all accesses of a given memory location must be protected by a common lock. Such techniques are also used to filter out candidate race reports in more expensive analyses.
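A minimal sketch of the lockset (Eraser-style) principle, assuming the same hypothetical trace format as above: the candidate lockset of each variable is intersected with the locks held at each access, and an empty lockset flags a potential race.

```python
def lockset_detect(trace):
    """Eraser-style lockset analysis over (thread, op, target) events
    with op in {'acq', 'rel', 'rd', 'wr'}."""
    held = {}        # thread -> set of locks currently held
    candidate = {}   # variable -> candidate lockset (intersection so far)
    warnings = set()
    for t, op, x in trace:
        locks = held.setdefault(t, set())
        if op == 'acq':
            locks.add(x)
        elif op == 'rel':
            locks.discard(x)
        else:  # 'rd' or 'wr' access to shared variable x
            if x not in candidate:
                candidate[x] = set(locks)
            else:
                candidate[x] &= locks
            if not candidate[x]:
                warnings.add(x)   # no common lock protects x
    return warnings
```

Accesses to a variable made while always holding a common lock keep that lock in the candidate set; a single unprotected access empties the set and the variable is reported.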
Graph-based techniques
In the context of data race detection, sound polynomial-time predictive analyses have been developed, with good, close to maximal predictive capability, based on graphs.
Computational complexity
Given an input trace of size n executed by k threads, general race prediction is NP-complete and even W[1]-hard parameterized by k, but admits a polynomial-time algorithm when the communication topology is acyclic.
Happens-before races are detected in time, and this bound is optimal.
Lockset races over variables are detected in time, and this bound is also optimal.
Tools
Here is a partial list of tools that use predictive analyses to detect concurrency errors, sorted alphabetically.
: a lightweight framework for implementing dynamic race detection engines.
: a dynamic analysis framework designed to facilitate rapid prototyping and experimentation with dynamic analyses for concurrent Java programs.
: SMT-based predictive race detection.
: SMT-based predictive use-after-free detection.
See also
Model checking
Dynamic program analysis
Runtime verification
References
Software testing | Runtime predictive analysis | [
"Engineering"
] | 972 | [
"Software engineering",
"Software testing"
] |
63,140,623 | https://en.wikipedia.org/wiki/Ingrid%20Burke | Ingrid C. "Indy" Burke is the Carl W. Knobloch, Jr. Dean at the Yale School of Forestry & Environmental Studies. She is the first female dean in the school's 116 year history. Her area of research is ecosystem ecology with a primary focus on carbon cycling and nitrogen cycling in semi-arid rangeland ecosystems. She teaches on subjects relating to ecosystem ecology, and biogeochemistry.
Early life and education
Burke received her B.S. in biology from Middlebury College and her Ph.D. in botany from the University of Wyoming. At Middlebury College, Burke was planning on becoming an English major, but after taking a science class in which she examined the role of photosynthesis in aquatic environments she became fascinated by environmental science. Soon after taking this class, Burke decided to switch her major to biology, realizing that she could spend her life working outside and solving scientific mysteries as a profession. After her time at Middlebury College she started a Ph.D. track at Dartmouth College, where she planned to study a phenomenon known as "fir waves," in which rows of balsam fir trees die collectively, forming arresting patterns across the landscape; after her advisor moved to the University of Wyoming, Burke decided to move as well. After finishing her Ph.D., she moved to Colorado State University, where she started her professional career.
Career and research
Burke's career as an environmental scientist began with a job teaching at Colorado State University in 1987 in the Natural Resource Ecology Laboratory. She became an associate professor in the Department of Forest Sciences at Colorado State University in 1994. In 2008 she began teaching at the University of Wyoming where she earned a spot as the director of the Haub School of Environment and Natural Resources. She worked there until 2016 when she became the Carl W. Knobloch, Jr. Dean at the Yale School of Forestry & Environmental Studies.
Burke is also on the board of directors at The Conservation Fund.
Burke has published over 150 peer-reviewed articles, chapters, books and reports, including the investigation of a significant project titled "A Regional Assessment of Land Use Effects on Ecosystem Structure and Function in the Central Grasslands" from 1996 to 1999. This project had major implications for understanding and managing ecosystems in the central United States.
Selected publications
The Importance of Land-Use Legacies to Ecology and Conservation (2003) BioScience, Vol 53, Issue 1, 77–88
Texture, Climate, and Cultivation Effects on Soil Organic Matter Content in U.S. Grassland Soils (1989) Soil Science Society of America Journal, Vol. 53 No. 3, 800-805
Global-Scale Similarities in Nitrogen Release Patterns During Long-Term Decomposition (2007) Science, Vol. 315, Issue 5810, 361-364
ANPP Estimates From NDVI for the Central Grasslands Region of The United States (1997) Ecology, Vol. 78, No 3, 953-958
Interactions Between Individual Plant Species and Soil Nutrient Status in Shortgrass Steppe (1995) Ecology, Vol. 76, No 4, 45-52
Additional publications can be found on her Google Scholar profile.
Notable awards and honors
Her awards and honors include:
2019 Fellow, Ecological Society of America, for advancing our understanding of ecosystem processes, in particular nitrogen and carbon cycling in grasslands.
2018 Fellow, Connecticut Academy of Science and Engineering
2012 Promoting Intellectual Engagement Award, University of Wyoming
2010 Fellow, American Association for the Advancement of Sciences
2008 USDA Agricultural Research Service, Rangeland Resources Unit: Award for Enhancing Collaborative Research Partnerships
2005 Colorado State University Honors Professor
2004–2005 National Academy of Sciences Education Fellow in the Life Science
2001-2008 University Distinguished Teaching Scholar, Colorado State University
2000 Mortar Board Rose Award, Colorado State University
1993–‘98 National Science Foundation Presidential Faculty Fellow Award
References
Living people
Year of birth missing (living people)
Middlebury College alumni
University of Wyoming alumni
Yale University faculty
Colorado State University faculty
University of Wyoming faculty
American botanists
Biogeochemists
Fellows of the American Association for the Advancement of Science | Ingrid Burke | [
"Chemistry"
] | 815 | [
"Geochemists",
"Biogeochemistry",
"Biogeochemists"
] |
59,558,412 | https://en.wikipedia.org/wiki/Karen%20Sp%C3%A4rck%20Jones%20Award | To commemorate the achievements of Karen Spärck Jones, the Karen Spärck Jones Award was created in 2008 by the British Computer Society (BCS) and its Information Retrieval Specialist Group (BCS IRSG). Since 2024, the award has been sponsored by Bloomberg. Prior to 2024, it was sponsored by Microsoft Research.
The winner of the award is invited to present a keynote talk the following year alternately at the European Conference on Information Retrieval (ECIR) or the Conference of the European Chapter of the Association for Computational Linguistics (EACL).
Chronological recipients and keynote talks
2009: Mirella Lapata : “Image and Natural Language Processing for Multimedia Information Retrieval”
2010: Evgeniy Gabrilovich : “Ad Retrieval Systems in vitro and in vivo: Knowledge-Based Approaches to Computational Advertising”
2011: No award was made
2012: Diane Kelly : “Contours and Convergence”
2013: Eugene Agichtein : “Inferring Searcher Attention and Intention by Mining Behavior Data”
2014: Ryen White : “Mining and Modeling Online Health Search”
2015: Jordan Boyd-Graber : “Opening up the Black Box: Interactive Machine Learning for Understanding Large Document Collections, Characterizing Social Science, and Language-Based Games”, Emine Yilmaz : “A Task-Based Perspective to Information Retrieval”
2016: Jaime Teevan : “Search, Re-Search.”
2017: Fernando Diaz (computer scientist) : “The Harsh Reality of Production Information Access Systems”
2018: Krisztian Balog : “On Entities and Evaluation”
2019: Chirag Shah : “Task-Based Intelligent Retrieval and Recommendation”
2020: Ahmed H. Awadallah : “Learning with Limited Labeled Data: The Role of User Interactions”
2021: Ivan Vulić : “Towards Language Technology for a Truly Multilingual World?”
2022: William Yang Wang "Large Language Models for Question Answering: Challenges and Opportunities"
2023: Hongning Wang "Human vs. Generative AI in Content Creation Competition: Symbiosis or Conflict?"
References
Computer science awards | Karen Spärck Jones Award | [
"Technology"
] | 422 | [
"Science and technology awards",
"Computer science",
"Computer science awards"
] |
51,806,224 | https://en.wikipedia.org/wiki/DS%20Crucis | DS Crucis (HR 4876, HD 111613) is a variable star near the open cluster NGC 4755, which is also known as the Kappa Crucis Cluster or Jewel Box Cluster. It is in the constellation Crux.
Location
DS Crucis is one of the brightest stars in the region of the NGC 4755 open cluster, better known as the Jewel Box Cluster, but its membership of the cluster is in doubt. The cluster is part of the larger Centaurus OB1 association and lies about 8,500 light years away.
DS Crucis and NGC 4755 lie just to the south-east of β Crucis, the lefthand star of the famous Southern Cross.
Variability
DS Crucis is a variable star with an amplitude of about 0.05 magnitudes. It was found to be variable from the photometry performed by the Hipparcos satellite. The variability type is unclear but it is assumed to be an α Cygni variable.
Properties
DS Crucis is an A1 bright supergiant (luminosity class Ia), although it has also been classified as A2 Iabe. It is nearly 80,000 times the luminosity of the sun, partly due to its higher temperature of 9,000 K, and partly to being over a hundred times larger than the sun. The κ Crucis cluster has a calculated age of 11.2 million years, and DS Crucis an age of seven million years.
References
Crux
111613
B-type supergiants
062732
4876
CD-59 4432
J12511794-6019473
Alpha Cygni variables
Crucis, DS | DS Crucis | [
"Astronomy"
] | 344 | [
"Crux",
"Constellations"
] |
56,182,553 | https://en.wikipedia.org/wiki/Wheeler%20incremental%20inductance%20rule | The incremental inductance rule, attributed to Harold Alden Wheeler by Gupta and others is a formula used to compute skin effect resistance and internal inductance in parallel transmission lines when the frequency is high enough that the skin effect is fully developed. Wheeler's concept is that the internal inductance of a conductor is the difference between the computed external inductance and the external inductance computed with all the conductive surfaces receded by one half of the skin depth.
Linternal = Lexternal(conductors receded) − Lexternal(conductors not receded).
Skin effect resistance is assumed to be equal to the reactance of the internal inductance.
Rskin = ωLinternal.
Gupta gives a general equation with partial derivatives replacing the difference of inductance.
where
∂L/∂n_m is taken to mean the differential change in inductance as surface m is receded in the n_m direction.
R_sm is the surface resistivity of surface m.
μ_m is the magnetic permeability of conductive material at surface m.
δ_m is the skin depth of conductive material at surface m.
n_m is the unit normal vector at surface m.
Wadell and Gupta state that the thickness and corner radius of the conductors should be large with respect to the skin depth. Garg further states that the thickness of the conductors must be at least four times the skin depth. Garg states that the calculation is unchanged if the dielectric is taken to be air and that L = Z0/c, where Z0 is the characteristic impedance and c the velocity of propagation, i.e. the speed of light. Paul, 2007, disputes the accuracy of the rule at very high frequency for rectangular conductors such as stripline and microstrip due to a non-uniform distribution of current on the conductor. At very high frequency, the current crowds into the corners of the conductor.
Example
If L1 is the inductance and Z1 is the characteristic impedance computed from the original conductor dimensions, and L2 is the inductance and Z2 is the characteristic impedance computed with every conductive surface receded by one half of the skin depth, then the internal inductance is
Linternal = L2 − L1 = (Z2 − Z1)/v,
where v is the velocity of propagation in the dielectric, and the skin effect resistance is
Rskin = ωLinternal = ω(Z2 − Z1)/v.
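As a numerical illustration (not taken from the cited references), the sketch below applies the rule in Python to a two-wire line in air, whose external inductance per unit length has the closed form L = (μ0/π)·arccosh(D/d); receding each wire surface by half a skin depth shrinks the wire diameter by one skin depth. The conductor material, dimensions and frequency are arbitrary example values.

```python
import math

MU0 = 4e-7 * math.pi          # permeability of free space, H/m

def L_twowire(D, d):
    """External inductance per metre of a two-wire line,
    centre spacing D, wire diameter d (both in metres)."""
    return MU0 / math.pi * math.acosh(D / d)

def skin_effect_R(D, d, f, sigma=5.8e7, mu=MU0):
    """Wheeler incremental-inductance estimate of the series
    resistance per metre at frequency f (copper assumed by default)."""
    delta = math.sqrt(2.0 / (2 * math.pi * f * mu * sigma))   # skin depth
    L_receded = L_twowire(D, d - delta)    # each wire radius reduced by delta/2, i.e. diameter by delta
    L_internal = L_receded - L_twowire(D, d)
    return 2 * math.pi * f * L_internal    # R_skin = omega * L_internal

# Example: 1 mm copper wires spaced 10 mm apart at 100 MHz (about 1.7 ohm/m)
print(skin_effect_R(10e-3, 1e-3, 100e6))
```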
Notes
References
Signal cables
Telecommunications engineering
Transmission lines
Distributed element circuits | Wheeler incremental inductance rule | [
"Engineering"
] | 440 | [
"Electrical engineering",
"Electronic engineering",
"Telecommunications engineering",
"Distributed element circuits"
] |
54,624,511 | https://en.wikipedia.org/wiki/Sol%C3%A8r%27s%20theorem | In mathematics, Solèr's theorem is a result concerning certain infinite-dimensional vector spaces. It states that any orthomodular form that has an infinite orthonormal set is a Hilbert space over the real numbers, complex numbers or quaternions. Originally proved by Maria Pia Solèr, the result is significant for quantum logic and the foundations of quantum mechanics. In particular, Solèr's theorem helps to fill a gap in the effort to use Gleason's theorem to rederive quantum mechanics from information-theoretic postulates. It is also an important step in the Heunen–Kornell axiomatisation of the category of Hilbert spaces.
Physicist John C. Baez notes, "Nothing in the assumptions mentions the continuum: the hypotheses are purely algebraic. It therefore seems quite magical that [the division ring over which the Hilbert space is defined] is forced to be the real numbers, complex numbers or quaternions." Writing a decade after Solèr's original publication, Pitowsky calls her theorem "celebrated".
Statement
Let K be a division ring. That means it is a ring in which one can add, subtract, multiply, and divide but in which the multiplication need not be commutative. Suppose this ring has a conjugation, i.e. an operation a ↦ a* for which (a + b)* = a* + b*, (ab)* = b*a*, and (a*)* = a for all a and b in K.
Consider a vector space V with scalars in K, and a mapping ⟨·,·⟩ : V × V → K
which is K-linear in the left (or in the right) entry, satisfying the identity ⟨x, y⟩* = ⟨y, x⟩ for all x, y in V.
This is called a Hermitian form. Suppose this form is non-degenerate in the sense that ⟨x, y⟩ = 0 for all y in V only if x = 0.
For any subspace S let S⊥ be the orthogonal complement of S. Call the subspace "closed" if (S⊥)⊥ = S.
Call this whole vector space, and the Hermitian form, "orthomodular" if for every closed subspace S we have that S + S⊥ is the entire space. (The term "orthomodular" derives from the study of quantum logic. In quantum logic, the distributive law is taken to fail due to the uncertainty principle, and it is replaced with the "modular law," or in the case of infinite-dimensional Hilbert spaces, the "orthomodular law.")
A set of vectors is called "orthonormal" if ⟨x, x⟩ = 1 for every x in the set and ⟨x, y⟩ = 0 for any two distinct members x and y. The result is this:
If this space has an infinite orthonormal set, then the division ring of scalars is either the field of real numbers, the field of complex numbers, or the ring of quaternions.
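A compact restatement of the hypotheses and conclusion in LaTeX form (a paraphrase of the above, not a quotation from Solèr's paper):

```latex
% Compact restatement of Solèr's theorem
\textbf{Theorem (Sol\`er).}
Let $K$ be a division ring with involution $*$, and let
$\langle\cdot,\cdot\rangle : V \times V \to K$ be a non-degenerate,
orthomodular Hermitian form on a $K$-vector space $V$,
i.e.\ $S + S^{\perp} = V$ for every closed subspace $S$.
If $V$ contains an infinite sequence $(e_i)_{i \ge 1}$ with
$\langle e_i, e_j\rangle = \delta_{ij}$, then $K$ is
$\mathbb{R}$, $\mathbb{C}$ or the quaternions $\mathbb{H}$,
and $(V, \langle\cdot,\cdot\rangle)$ is a Hilbert space over $K$.
```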
References
Hilbert spaces
Mathematical logic
Theorems in quantum mechanics | Solèr's theorem | [
"Physics",
"Mathematics"
] | 532 | [
"Theorems in quantum mechanics",
"Equations of physics",
"Mathematical logic",
"Quantum mechanics",
"Theorems in mathematical physics",
"Hilbert spaces",
"Physics theorems"
] |
54,625,341 | https://en.wikipedia.org/wiki/Autonomous%20Rail%20Rapid%20Transit | Autonomous Rail Rapid Transit (ART) is a lidar (light detection and ranging) guided bi-articulated bus system for urban passenger transport. Developed and manufactured by CRRC through CRRC Zhuzhou Institute Co Ltd, it was unveiled in Zhuzhou in the Hunan province on June 2, 2017. ART is specifically referred to as a train or rapid transit as Digital-rail Rapid Transit and electric road by its manufacturer, however the public describes it as a bus or trolleybus and bus rapid transit. Its exterior is composed of individual fixed sections joined by articulated gangways, resembling a rubber-tyred tram and translohr.
The system is labelled as "autonomous" in English; however, the models in operation are optically guided and feature a driver on board. Despite "rail" in the name, the system does not use rails.
Automated Rapid Transit systems (ARTs) can operate independently without the need for a guiding sensor and as a result, they fall under the classification of buses. Consequently, vehicles deployed on these routes are mandated to display license plates.
Background
Before the announcement by CRRC, optical guided buses have been in use in a number of cities in Europe and North America, including in Rouen as part of Transport Est-Ouest Rouennais, in Las Vegas as a segment of Metropolitan Area Express BRT service (now discontinued), and in Castellón de la Plana as . The guidance system technology used on these systems was called Visée under their original developer Matra, and is now named Optiguide after being acquired by Siemens.
Description
An ART vehicle with three carriages is approximately long. It can travel at a speed of and can carry up to 300 passengers. A five-carriage ART vehicle provides space for 500 passengers. A four carriage model was introduced in 2021 which can carry 400 passengers. Two vehicles can closely follow each other without being mechanically connected, similarly to multiple unit train control. The entire ART has a low-floor design from a space frame with bolted-on panels to support the weight of passengers. It is built as a bi-directional vehicle, with driver's cabs at either end, allowing it to travel in either direction at full speed.
The long ART lane was built through downtown Zhuzhou and inaugurated in 2018.
Sensors and batteries
The ART is equipped with various optical and other types of sensors to allow the vehicle to automatically follow a route defined by a virtual track of markings on the roadway. A steering wheel also allows the driver to manually guide the vehicle, including around detours. A Lane Departure Warning System helps to keep the vehicle in its lane and automatically warns the driver if it drifts out of the lane. A Collision Warning System helps the driver keep a safe distance from other vehicles on the road and, if the proximity falls below a given level, alerts the driver with a warning signal. The Route Change Authorization is a navigation device which analyzes the traffic conditions on the chosen route and can recommend a detour to avoid traffic congestion. The Electronic Rearview Mirrors work with remotely adjustable cameras and provide a clearer view than conventional mirrors, including an auto-dimming device to reduce glare.
The ART is powered by lithium–titanate batteries and can travel a distance of per full charge. The batteries can be recharged via current collectors at stations. The recharging time for a trip is 30 seconds and for a trip, 10 minutes.
Benefits and limitations
A 2018 article by a sustainability academic argued trackless trams could replace both light-rail and bus rapid transit due to low cost, quick installation and low emissions. Others have disputed the claims about cost and quick installations, and argued that ART is a proprietary technology with little deployment worldwide. Other experts have argued the technology is overhyped, that optical guidance technology is not new, and that current proposals largely represent a repackaging of the bus as a rail-replacement technology. As of 2022 there are no systems outside of China and few proposals. That may be because:
The system is not fully autonomous
The system is not rail-based and so has the ride qualities of a bus
The vehicles can get stuck in road traffic when not operated in dedicated rights of way
The required vehicles cannot be bought through competitive tender
Proponents have argued the lack of rails means cheaper construction costs. Multi-axle hydraulic steering technology and bogie-like wheel arrangement could allow lower swept path in turns, thus requiring less side clearance. The minimum turning radius of is similar to buses.
However, because the ART is a guided system, ruts and depressions could be worn into the road by the alignment of the large number of wheels, so reinforcement of the roadway to prevent those problems may be as disruptive as the installation of rails in a light rail system. Researchers in 2021 found evidence of significant road wear due to trackless tram vehicles, which undermined claims of quick construction, with the researchers finding significant road strengthening was required by the technology. The suitability of the system for winter climates with ice and snow has not yet been proven. The higher rolling resistance of rubber tires requires more energy for propulsion than the steel wheels of a light rail vehicle.
A few abandoned proposals for light-rail lines have been revived as ART proposals because of the lower projected costs. However, a different report, by the Australian Railways Association, which supports light rail, said there were reliability questions with ART installations, implying the initial suggested capital cost savings were illusory. A November 2020 proposal for a trackless tram system in the City of Wyndham, near Melbourne, posited a cost of $AU23.53M per km for roadworks, vehicles, recharge point and depots. Recently completed light rail systems in Australia have had costs of between $AU80M and $AU150M per km.
The Government of New South Wales considered the system as an alternative to light rail for a line to connect Sydney Olympic Park to Parramatta. However, concerns were raised that there was only one supplier of the technology, and that the development of "long articulated buses" was "too much in its preliminary phase" to meet the project deadlines. Instead, the plan was to build a light-rail line which would connect to another light-rail route already under construction, so passengers would not have to change vehicles.
The Auckland Light Rail Group, in its studies of trackless trams for the City Centre to Māngere line, found that trackless trams would have a lower capacity than claimed. The official specifications for the ARRT assume a standing density of eight passengers per square meter, whereas many transit systems have more typical standing densities of four passengers per square meter. Based on that, the long ARRT would more realistically have a capacity of 170 passengers, rather than the claimed 307. This would be only a slight increase over the typical capacity of conventional bi-articulated buses at the same passenger density (~150 passengers), and less than a typical long LRV (~210-225 passengers).
List of commercially operating lines
Proposed systems
Proposals, including vehicle testing, have been made in several countries.
China, Changsha. Changsha Meixi Lake to Changsha Municipal Government line, reported to start construction in 2021 for completion in 2022
China, Harbin. In May 2021 testing of a vehicle was underway with plans for an route with 11 stations. There are reports that stations have been constructed in January 2021 and trial operations will commence in August 2021.
China, Tongli. testing was underway with the service expected to open to passengers by the end of 2021.
China, Xi'an. Two routes. One with 18 stations over and second with 9 stations over .
Malaysia. Iskandar Malaysia Bus Rapid Transit in Johor. ART is one technology under consideration for the corridor. A three-month test of an ART vehicle, along with eight other bus types, began in April 2021. In May 2024, the planned three-line IMBRT was shelved over concerns that it would be unable to handle the traffic flow and would affect the efficiency of the service. Traffic conditions were projected to be much worse once the now under-construction RTS Link train line is completed by the end of 2026. The Johor government then proposed the construction of an Elevated Autonomous Rapid Transit (E-ART) system, a hybrid system utilising LRT infrastructure (without the LRT track) and the ART system, to replace the now-cancelled IMBRT.
Malaysia. Kuching Urban Transportation System in Kuching. The three-line Kuching LRT project was proposed as a light rail system in 2018, but shelved due to costs. In 2019, the government of Sarawak announced that the ART technology had been selected instead, due to its lower costs for similar levels of service. The project has since commenced construction and is under testing.
Mexico. Metrorrey Line 5 in Monterrey. This new line of the Metrorrey system is currently being built by the government of the state of Nuevo León. The public tender was awarded in 2022 to a consortium formed by the Portuguese firm Mota-Engil and the Chinese CRRC. In October 2023 Governor Samuel García presented the ART vehicles that would be used for the Line 5. It is expected to open on 2027.
Qatar. The system was considered for use during the 2022 FIFA World Cup, but was not pursued. In July 2019 a two-week test with one vehicle was undertaken in Doha, the first trial outside China.
Australia. Perth. In March 2021, the Australian Government provided $2 million to produce a business case to investigate a trackless tram on Scarborough Beach Road between the Stirling City Centre and the Perth CBD. In September 2023, an ART vehicle was delivered to the City of Stirling to begin trials for a proposed route between Glendalough railway station and Scarborough Beach.
Indonesia. The system is considered for use in Nusantara, the future capital city. The bus was delivered in July 2024, to be showcased in August 2024 at the time of the Independence Day celebrations and tested in October-December 2024.
New Zealand. In June 2024, Auckland Transport indicated it was interested in trialling a trackless tram on the Northern Busway.
See also
Automatic train operation
Automated guideway transit
Articulated bus
Bi-articulated bus
Battery electric bus
Capacitor electric vehicle#Capabus
Charging station
Electric road
Fuel cell bus
Gadgetbahn
Guided bus
Personal rapid transit
Rubber-tyred metro
Rubber-tyred tram
Translohr
Trolleybus
Trackless train
Transit Elevated Bus (TEB)
Trolleybus#Off-wire power developments (In Motion Charging)
Wright StreetCar
References
Guided bus
Self-driving cars
Bus rapid transit in China | Autonomous Rail Rapid Transit | [
"Engineering"
] | 2,148 | [
"Automotive engineering",
"Self-driving cars"
] |
54,625,930 | https://en.wikipedia.org/wiki/Tyromyces%20pulcherrimus | Tyromyces pulcherrimus, commonly known as the strawberry bracket, is a species of poroid fungus in the family Polyporaceae. It is readily recognisable by its reddish fruit bodies with pores on the cap underside. The fungus is found natively in Australia and New Zealand, where it causes a white rot in living and dead logs of southern beech and eucalyptus. In southern Brazil, it is an introduced species that is associated with imported eucalypts.
Taxonomy
The fungus was first described in 1922 by English-born Australian dentist and botanist Leonard Rodway, who called it Polyporus pulcherrimus. Curtis Gates Lloyd, an American mycologist to whom Rodway had sent a specimen for examination, suggested a similarity to Albatrellus confluens. Gordon Herriot Cunningham transferred it to the genus Tyromyces, giving it the name by which it is known today. Some sources refer to the species as Aurantiporus pulcherrimus, after Buchanan and Hood's proposed 1992 transfer to Aurantiporus.
The specific epithet pulcherrimus is derived from the Latin word for "very beautiful". One common name used for the fungus is strawberry bracket.
Description
The fruit bodies of Tyromyces pulcherrimus are bracket-shaped caps that measure in diameter. They are sessile, lacking a stipe, and are instead attached directly to the substrate. The cap colour when fresh is cherry red or salmon, but it dries to become brownish. The cap surface can be hairy, particularly near the point of attachment. Pores on the cap underside are red, and number about 1–3 per millimetre. The flesh is soft and thick, red, and watery. It does not have any distinct odour. Tyromyces pulcherrimus is inedible.
With a monomitic hyphal system, Tyromyces pulcherrimus contains only generative hyphae. These hyphae are clamped, and are sometimes covered with granules, or with an orange substance that appears oily. The hyphae in the context are arranged in a parallel fashion, and strongly agglutinated to form a densely packed tissue. Cystidia are absent from the hymenium. The basidia are club shaped with typically four sterigmata, and measure 15–23 by 6.5–7.5 μm. Spores are ellipsoid to more or less spherical, hyaline, and measure 5–7 by 3.5–4.5 μm.
Habitat and distribution
Tyromyces pulcherrimus is a white rot fungus that grows on the exposed heartwood of several tree species. It has been recorded on southern beech (Nothofagus cunninghamii) in Victoria and Tasmania and on Antarctic beech (Nothofagus moorei) in Queensland and New South Wales. In Tasmania, evidence suggests that it prefers wet forests, including rainforest and wet sclerophyll forest. In Brazil, it is an introduced species that has been recorded on imported eucalypts. It has been found there in Rio Grande do Sul State. In New Zealand, the fungus has been recorded on red beech (Fuscospora fusca) and silver beech (Lophozonia menziesii).
References
Fungi native to Australia
Fungi of New Zealand
Fungi of Brazil
Fungi described in 1922
pulcherrimus
Fungus species | Tyromyces pulcherrimus | [
"Biology"
] | 717 | [
"Fungi",
"Fungus species"
] |
54,629,028 | https://en.wikipedia.org/wiki/HD%20131399 | HD 131399 is a star system in the constellation of Centaurus. Based on the system's electromagnetic spectrum, it is located around 350 light-years (107.9 parsecs) away. The total apparent magnitude is 7.07, but because of interstellar dust between it and the Earth, it appears 0.22 ± 0.09 magnitudes dimmer than it should be.
The brightest star, is a young A-type main-sequence star, and further out are two lower-mass stars. A Jupiter-mass planet or a low-mass brown dwarf was once thought to be orbiting the central star, but this has been ruled out.
Stellar system
The brightest star in the HD 131399 system is designated HD 131399 A. Its spectral type is A1V, and it is 2.08 times as massive as the Sun. The two lower-mass stars are designated HD 131399 B and C, respectively. B is a G-type main-sequence star, while HD 131399 C is a K-type main-sequence star. Both stars are less massive than the Sun.
HD 131399 B and C are located very close to each other, and the two orbit each other at about 10 AU. In turn, the B-C pair orbits the central star A at a distance of 349 astronomical units (au). This orbit takes about 3,600 years to complete, and it has an eccentricity of about 0.13. The entire system is about 21.9 million years old.
One paper has reported that HD 131399 A has a companion in an inclined 10-day orbit with a semi-major axis of . HD 131399 A has been described as a "nascent Am star"; although it has a very slow projected rotation rate and would be expected to show chemical peculiarities, its spectrum is relatively normal, possibly due to its young age.
Claims of a planetary system
The claimed discovery of a massive planet, named HD 131399 Ab, was announced in a July 2016 paper published in the journal Science. The object was imaged using the SPHERE imager of the Very Large Telescope at the European Southern Observatory, located in the Atacama Desert of Chile. It was thought to be a T-type object with a mass of , but its orbit would have been unstable, causing it to be ejected between the primary's red giant phase and white dwarf phase. This was the first exoplanet candidate to be discovered by SPHERE. The image was created from two separate SPHERE observations: one to image the three stars and one to detect the faint planet. After its discovery, the team unofficially named the system "Scorpion-1" and the planet "Scorpion-1b", after the survey that prompted its discovery, the Scorpion Planet Survey (principal investigator: Daniel Apai).
In May 2017, observations made by the Gemini Planet Imager and including a reanalysis of the SPHERE data suggest that this target is, in fact, a background star. This object's spectrum seems to be like that of a K-type or M-type dwarf, not a T-type object as first thought. It also initially appeared to be associated with HD 131399, but this was because of its unusually high proper motion (in the top 4% fastest-moving stars). After subsequent data published in 2022 confirmed that the object is a background star, the paper announcing the putative discovery was retracted.
References
Notes
Centaurus
Hypothetical planetary systems
Triple star systems
A-type main-sequence stars
G-type main-sequence stars
K-type main-sequence stars
Durchmusterung objects
131399
072940 | HD 131399 | [
"Astronomy"
] | 761 | [
"Centaurus",
"Constellations"
] |
54,629,293 | https://en.wikipedia.org/wiki/Single%20cell%20epigenomics | Single cell epigenomics is the study of epigenomics (the complete set of epigenetic modifications on the genetic material of a cell) in individual cells by single cell sequencing. Since 2013, methods have been created including whole-genome single-cell bisulfite sequencing to measure DNA methylation, whole-genome ChIP-sequencing to measure histone modifications, whole-genome ATAC-seq to measure chromatin accessibility and chromosome conformation capture.
Single-cell DNA methylome sequencing
Single-cell DNA methylome sequencing quantifies DNA methylation. It is similar to single-cell genome sequencing, but with the addition of a bisulfite treatment before sequencing. Forms include whole-genome bisulfite sequencing and reduced representation bisulfite sequencing.
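As a toy illustration of how methylation levels are quantified from bisulfite-converted reads (the input format is hypothetical; real pipelines additionally handle strand, sequence context and quality filtering):

```python
def methylation_level(read_bases):
    """Per-CpG methylation fraction from bisulfite-converted reads.

    Unmethylated cytosines are converted and read as T, while methylated
    cytosines remain C.  `read_bases` is the list of bases observed at one
    CpG cytosine across the reads covering it in a single cell."""
    c = read_bases.count('C')          # methylated
    t = read_bases.count('T')          # converted, i.e. unmethylated
    return c / (c + t) if (c + t) else None

print(methylation_level(['C', 'C', 'T', 'C']))   # 0.75
```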
Single-cell ATAC-seq
ATAC-seq stands for Assay for Transposase-Accessible Chromatin with high throughput sequencing. It is a technique used in molecular biology to identify accessible DNA regions, equivalent to DNase I hypersensitive sites. Single cell ATAC-seq has been performed since 2015, using methods ranging from FACS sorting, microfluidic isolation of single cells, to combinatorial indexing. In initial studies, the method was able to reliably separate cells based on their cell types, uncover sources of cell-to-cell variability, and show a link between chromatin organization and cell-to-cell variation.
Single-cell ChIP-seq
ChIP-sequencing, also known as ChIP-seq, is a method used to analyze protein interactions with DNA. ChIP-seq combines chromatin immunoprecipitation (ChIP) with massively parallel DNA sequencing to identify the binding sites of DNA-associated proteins. In epigenomics, this is often used to assess histone modifications (such as methylation). ChIP-seq is also often used to determine transcription factor binding sites.
Single-cell ChIP-seq is extremely challenging due to background noise caused by nonspecific antibody pull-down, and only one study so far has performed it successfully. This study used a droplet-based microfluidics approach, and the low coverage required thousands of cells to be sequenced in order to assess cellular heterogeneity.
Single-cell Hi-C
Chromosome conformation capture techniques (often abbreviated to 3C technologies or 3C-based methods) are a set of molecular biology methods used to analyze the spatial organization of chromatin in a cell. These methods quantify the number of interactions between genomic loci that are nearby in three dimensional space, even if the loci are separated by many kilobases in the linear genome.
Currently, 3C methods start with a similar set of steps, performed on a sample of cells. First, the cells are cross-linked, which introduces bonds between proteins, and between proteins and nucleic acids, that effectively "freeze" interactions between genomic loci. The genome is then digested into fragments through the use of restriction enzymes. Next, proximity based ligation is performed, creating long regions of hybrid DNA. Lastly, the hybrid DNA is sequenced to determine genomic loci that are in close proximity to each other.
Single-cell Hi-C is a modification of the original Hi-C protocol, which is an adaptation of the 3C method, that allows you to determine proximity of different regions of the genome in a single cell. This method was made possible by performing the digestion and ligation steps in individual nuclei, as opposed to the original Hi-C protocol, where ligation was performed after cell lysis in a pool containing crosslinked chromatin complexes. In single cell Hi-C, after ligation, single cells are isolated and the remaining steps are performed in separate compartments, and hybrid DNA is tagged with a compartment specific barcode. High-throughput sequencing is then performed on the pool of the hybrid DNA from the single cells. Although the recovery rate of sequenced interactions (hybrid DNA) can be as low as 2.5% of potential interactions, it has been possible to generate three dimensional maps of entire genomes using this method. Additionally, advances have been made in the analysis of Hi-C data, allowing for the enhancement of HiC datasets to generate even more accurate and detailed contact maps and 3D models.
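As a simple illustration of the downstream analysis, contact pairs recovered from a single cell can be binned into a contact matrix; the input format and bin size below are arbitrary example choices, not a published pipeline.

```python
import numpy as np

def contact_matrix(pairs, chrom_length, bin_size=1_000_000):
    """Bin ligation-derived contact pairs (pos1, pos2) on one chromosome
    into a symmetric contact-count matrix."""
    n_bins = chrom_length // bin_size + 1
    m = np.zeros((n_bins, n_bins), dtype=int)
    for pos1, pos2 in pairs:
        i, j = pos1 // bin_size, pos2 // bin_size
        m[i, j] += 1
        if i != j:
            m[j, i] += 1               # keep the matrix symmetric
    return m

# e.g. three contacts observed on a 5 Mb chromosome
print(contact_matrix([(100_000, 2_300_000), (2_400_000, 2_500_000), (150_000, 4_900_000)], 5_000_000))
```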
See also
Single cell sequencing
Epigenomics
Chromosome conformation capture
References
Epigenetics
DNA sequencing
Genomics
Cell biology | Single cell epigenomics | [
"Chemistry",
"Biology"
] | 916 | [
"Cell biology",
"Molecular biology techniques",
"DNA sequencing"
] |
54,631,939 | https://en.wikipedia.org/wiki/Svetlana%20%28company%29 | PJSC Svetlana () is a company based in Saint Petersburg, Russia. It is primarily involved in the research, design, and manufacturing of electronic and microelectronic instruments. Svetlana is part of Ruselectronics. The name of the company is said to originate from the words for 'light of an incandescent lamp' (СВЕТ ЛАмпы НАкаливания).
History
The company was established in 1889 as the Ya. M. Aivaz Factory. Svetlana was a major producer of vacuum tubes. In 1937, the Soviet Union purchased a tube assembly line from RCA, including production licenses and initial staff training, and installed it at the St Petersburg plant. US-licensed tubes have been produced there since then.
Since 2001, New Sensor Corp. has been holding the rights for the Svetlana vacuum tube brand for the US and Canada. The New Sensor tubes are actually manufactured at the Expo-pul factory (former Reflektor plant) in Saratov. Tubes manufactured by Svetlana in Saint Petersburg still bear the "winged С" (cyrillic S) logo (see the image below) but no longer the name Svetlana.
In 2017 the company announced a 3-billion-ruble modernization plan.
Products
The Svetlana Association produces a variety of electronic and microelectronic instruments, including transmitting and modulator tubes for all frequency ranges; X-band broadband passive TR limiter; KU-band broadband TR tube; klystron amplifiers; X-ray tubes; portable X-ray units for medicine and industry; high-frequency fast response thyristors; transistors; integrated microcircuits; microcomputers; microcontrollers; microcalculators; ultrasonic delay lines; receiving tubes; process equipment for the manufacture of electronic engineering items. Vacuum tubes currently in production include the 6550, 6L6, EL34, and KT88.
Directors
1961-1969 — Kaminsky I. I.
1969-1988 — Filatov O. V.
1988-1991 — Khizha G. S.
1991-1993 — Shchukin Gennady Anatolyevich
1993-1994 — Bashkatov V. E.
1994-2014 — Popov V. V.
since 2014 — Gladkov N. Y.
Awards
1931 – Order of Lenin (№8) for the implementation of the production plan of the first five-year plan in two and a half years.
1937 – diploma and "Grand Prix" at the International Exhibition of Art and Technology in Paris for powerful generator lamps GDO-15, GKO-10 manufactured by the Svetlana.
See also
6P1P vacuum tube
Russian tube designations
7400 series – Second sources in Europe and the Eastern Bloc
Soviet integrated circuit designation
References
External links
Official website
Electronics companies of Russia
Manufacturing companies of Russia
Companies based in Saint Petersburg
Ruselectronics
Vacuum tubes
Manufacturing companies established in 1889
Electronics companies of the Soviet Union
Companies nationalised by the Soviet Union
Ministry of the Electronics Industry (Soviet Union)
Russian brands | Svetlana (company) | [
"Physics"
] | 642 | [
"Vacuum tubes",
"Vacuum",
"Matter"
] |
54,632,635 | https://en.wikipedia.org/wiki/Cholamine%20chloride%20hydrochloride | Cholamine chloride hydrochloride is one of Good's buffers with a pH in the physiological range. Its pKa at 20°C is 7.10, making it useful in cell culture work. Its ΔpKa/°C is -0.027 and it has a solubility in water at 0°C of 4.2M.
References
Buffer solutions
Quaternary ammonium compounds
Amines
Hydrochlorides | Cholamine chloride hydrochloride | [
"Chemistry",
"Biology"
] | 92 | [
"Buffer solutions",
"Biotechnology stubs",
"Functional groups",
"Biochemistry stubs",
"Biochemistry",
"Amines",
"Bases (chemistry)"
] |
54,632,788 | https://en.wikipedia.org/wiki/Poppy-seed%20bagel%20theorem | In physics, the poppy-seed bagel theorem concerns interacting particles (e.g., electrons) confined to a bounded surface (or body) when the particles repel each other pairwise with a magnitude that is proportional to the inverse distance between them raised to some positive power . In particular, this includes the Coulomb law observed in Electrostatics and Riesz potentials extensively studied in Potential theory. Other classes of potentials, which not necessarily involve the Riesz kernel, for example nearest neighbor interactions, are also described by this theorem in the macroscopic regime.
For such particles, a stable equilibrium state, which depends on the parameter s, is attained when the associated potential energy of the system is minimal (the so-called generalized Thomson problem). For large numbers of points, these equilibrium configurations provide a discretization of A which may or may not be nearly uniform with respect to the surface area (or volume) of A. The poppy-seed bagel theorem asserts that for a large class of sets A, the uniformity property holds when the parameter s is larger than or equal to the dimension of the set A. For example, when the points ("poppy seeds") are confined to the 2-dimensional surface of a torus embedded in 3 dimensions (or "surface of a bagel"), one can create a large number of points that are nearly uniformly spread on the surface by imposing a repulsion proportional to the inverse square distance between the points, or any stronger repulsion (s ≥ 2). From a culinary perspective, to create the nearly perfect poppy-seed bagel where bites of equal size anywhere on the bagel would contain essentially the same number of poppy seeds, impose at least an inverse-square-distance repelling force on the seeds.
Formal definitions
For a parameter s > 0 and an N-point set ω_N = {x_1, …, x_N} ⊂ R^p, the s-energy of ω_N is defined as follows:
E_s(ω_N) = Σ_{i ≠ j} 1 / |x_i − x_j|^s.
For a compact set A ⊂ R^p we define its minimal N-point s-energy as
E_s(A, N) = min E_s(ω_N),
where the minimum is taken over all N-point subsets ω_N of A.
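A minimal numerical sketch of the s-energy above (illustrative only, not drawn from the cited literature), evaluated for points placed on a torus with arbitrarily chosen radii:

import numpy as np

def riesz_energy(points, s):
    """Riesz s-energy: sum over all ordered pairs i != j of 1 / |x_i - x_j|**s."""
    diff = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    iu = np.triu_indices(len(points), k=1)      # upper triangle, i < j
    return 2.0 * np.sum(dist[iu] ** (-s))       # factor 2 restores ordered pairs

def torus_points(u, v, R=2.0, r=1.0):
    """Map angle pairs (u, v) to a torus with major radius R and minor radius r in R^3."""
    x = (R + r * np.cos(v)) * np.cos(u)
    y = (R + r * np.cos(v)) * np.sin(u)
    z = r * np.sin(v)
    return np.stack([x, y, z], axis=-1)

rng = np.random.default_rng(0)
u, v = rng.uniform(0, 2 * np.pi, size=(2, 500))  # random angles (not surface-uniform)
config = torus_points(u, v)
print(riesz_energy(config, s=2.0))               # energy of a random 500-point configuration

Minimizing this energy over the point positions on the surface, for example with a general-purpose optimizer, yields the nearly uniform configurations described by the theorem when s is at least the dimension of the surface.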
Poppy-seed bagel theorem for bodies
We consider compact sets with the Lebesgue measure and . For every fix an -point -equilibrium configuration . Set
where is a unit point mass at point . Under these assumptions, in the sense of weak convergence of measures,
where is the Lebesgue measure restricted to ; i.e., .
Furthermore, it is true that
where the constant does not depend on the set and, therefore,
where is the unit cube in .
Poppy-seed bagel theorem for manifolds
Consider a smooth -dimensional manifold embedded in and denote its surface measure by . We assume . Assume
As before, for every fix an -point -equilibrium configuration and set
Then, in the sense of weak convergence of measures,
where . If is the -dimensional Hausdorff measure normalized so that , then
where is the volume of a d-ball.
The constant Cs,p
For p = 1, it is known that C_{s,1} = 2ζ(s), where ζ is the Riemann zeta function. Using a modular form approach to linear programming, Viazovska together with coauthors established in a 2022 paper that in dimensions 8 and 24, the values of C_{s,p}, s > p, are given by the Epstein zeta function associated with the E8 lattice and the Leech lattice, respectively.
It is conjectured that for p = 2, the value of C_{s,2} is similarly determined as the value of the Epstein zeta function for the hexagonal lattice. Finally, in every dimension it is known that when s = p, the scaling of the minimal energy becomes N² log N rather than N^(1 + s/p), and the value of the corresponding constant can be computed explicitly as the volume of the unit p-dimensional ball, π^(p/2) / Γ(p/2 + 1).
The following connection between the constant and the problem of sphere packing is known:
where is the volume of a p-ball and
where the supremum is taken over all families of non-overlapping unit balls such that the limit
exists.
See also
Hausdorff dimension
Geometric measure theory
Sphere packing
Riemann zeta function
References
Physics theorems
Potentials
Dimension
Bagels
bagel theorem | Poppy-seed bagel theorem | [
"Physics"
] | 807 | [
"Geometric measurement",
"Physical quantities",
"Equations of physics",
"Theory of relativity",
"Dimension",
"Physics theorems"
] |
54,634,552 | https://en.wikipedia.org/wiki/HEPBS | HEPBS (N-(2-Hydroxyethyl)piperazine-N'-(4-butanesulfonic acid)) is a zwitterionic organic chemical buffering agent; one of Good's buffers. HEPBS and HEPES have very similar structures and properties, with HEPBS also having an acidity (pKa) in the physiological range (useful range 7.6–9.0). This makes it suitable for cell culture work.
References
Zwitterions
Piperazines
Ethanolamines
Sulfonic acids
Buffer solutions | HEPBS | [
"Physics",
"Chemistry"
] | 125 | [
"Buffer solutions",
"Matter",
"Functional groups",
"Zwitterions",
"Sulfonic acids",
"Ions"
] |
64,488,973 | https://en.wikipedia.org/wiki/Thermal%20pressure | In thermodynamics, thermal pressure (also known as the thermal pressure coefficient) is a measure of the relative pressure change of a fluid or a solid as a response to a temperature change at constant volume. The concept is related to the Pressure-Temperature Law, also known as Amontons's law or Gay-Lussac's law.
In general, the pressure P(V, T) can be written as the following sum: P(V, T) = P(V, T_0) + ΔP_th(V, T).
P(V, T_0) is the pressure required to compress the material from its initial volume V_0 to volume V at a constant reference temperature T_0. The second term expresses the change in thermal pressure ΔP_th. This is the pressure change at constant volume due to the temperature difference between T_0 and T. Thus, it is the pressure change along an isochore of the material.
The thermal pressure is customarily expressed in its simple form as ΔP_th = α K_T ΔT, where α is the volume thermal expansion coefficient and K_T the isothermal bulk modulus.
Thermodynamic definition
Because of the equivalences between many properties and derivatives within thermodynamics (e.g., see Maxwell Relations), there are many formulations of the thermal pressure coefficient, which are equally valid, leading to distinct yet correct interpretations of its meaning.
Some formulations for the thermal pressure coefficient include:
γ_v = (∂P/∂T)_V = α K_T = α / β_T = γ C_V / V,
where α is the volume thermal expansion, K_T the isothermal bulk modulus, γ the Grüneisen parameter, β_T the compressibility and C_V the constant-volume heat capacity.
Details of the calculation:
The utility of the thermal pressure
The thermal pressure coefficient can be considered as a fundamental property; it is closely related to various properties such as internal pressure, sonic velocity, the entropy of melting, isothermal compressibility, isobaric expansibility, phase transition, etc. Thus, the study of the thermal pressure coefficient provides a useful basis for understanding the nature of liquid and solid. Since it is normally difficult to obtain the properties by thermodynamic and statistical mechanics methods due to complex interactions among molecules, experimental methods attract much attention.
The thermal pressure coefficient is used to calculate results that are applied widely in industry, and they would further accelerate the development of thermodynamic theory.
Commonly the thermal pressure coefficient may be expressed as functions of temperature and volume. There are two main types of calculation of the thermal pressure coefficient: one is the Virial theorem and its derivatives; the other is the Van der Waals type and its derivatives.
Thermal pressure at high temperature
As mentioned above, γ_v = α K_T is one of the most common formulations for the thermal pressure coefficient.
Both α and K_T are affected by temperature changes, but their values for a solid are much less sensitive to temperature change above its Debye temperature. Thus, the thermal pressure of a solid due to a moderate temperature change above the Debye temperature can be approximated by assuming constant values of α and K_T.
However, it has been demonstrated that, at ambient pressure, the thermal pressure of Au and MgO predicted from a constant value of αK_T deviates from the experimental data, and the deviation grows with temperature. The authors of that study suggested a thermal expansion model to replace the thermal pressure model.
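As a numerical illustration of the constant-αK_T approximation discussed above, the sketch below estimates the thermal pressure of MgO for a moderate temperature step; the values of α and K_T are rough, assumed figures chosen only for illustration.

# Thermal pressure under the constant-alpha*K_T approximation:
#   dP_th ≈ alpha * K_T * dT
# The numbers below are rough, illustrative values for MgO, not reference data.
alpha = 3.0e-5      # volumetric thermal expansion coefficient, 1/K (assumed)
K_T = 160.0e9       # isothermal bulk modulus, Pa (assumed)
dT = 300.0          # temperature increase above the reference temperature, K

dP_th = alpha * K_T * dT
print(f"Thermal pressure: {dP_th / 1e9:.2f} GPa")   # ~1.44 GPa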
Thermal pressure in a crystal
The thermal pressure of a crystal defines how the unit-cell parameters change as a function of pressure and temperature. Therefore, it also controls how the cell parameters change along an isochore, namely as a function of temperature at constant volume. Usually, Mie-Grüneisen-Debye and other quasi-harmonic approximation (QHA) based state functions are used to estimate volumes and densities of mineral phases in diverse applications such as thermodynamic and deep-Earth geophysical models and other planetary bodies. In the case of isotropic (or approximately isotropic) thermal pressure, the unit-cell parameters remain constant along the isochore and the QHA is valid. But when the thermal pressure is anisotropic, the unit-cell parameters change, so the frequencies of vibrational modes also change even at constant volume and the QHA is no longer valid.
The combined effect of a change in pressure and temperature is described by the strain tensor :
Where is the volume thermal expansion tensor and is the compressibility tensor. The line in the P-T space which indicates that the strain is constant in a particular direction within the crystal is defined as:
Which is an equivalent definition of the isotropic degree of thermal pressure.
See also
Isochoric process
Pressure
Hydrostatic equilibrium
References
Thermodynamics
Fluid mechanics
Pressure | Thermal pressure | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 880 | [
"Scalar physical quantities",
"Mechanical quantities",
"Physical quantities",
"Pressure",
"Civil engineering",
"Thermodynamics",
"Wikipedia categories named after physical quantities",
"Fluid mechanics",
"Dynamical systems"
] |
64,496,462 | https://en.wikipedia.org/wiki/Time%27s%20Arrow%20and%20Archimedes%27%20Point | Time's Arrow and Archimedes' Point: New Directions for the Physics of Time is a 1996 book by Huw Price, on the physics and philosophy of the arrow of time. It explores the problem of the direction of time, looking at issues in thermodynamics, cosmology, electromagnetism, and quantum mechanics. Price argues that it is fruitful to think about time from a hypothetical Archimedean point – a viewpoint outside of time. In later chapters, Price argues that retrocausality can resolve many of the philosophical issues facing quantum mechanics and along these lines proposes an interpretation involving what he calls 'advanced action'.
Summary
Chapter 1 – The View From Nowhen
Price briefly introduces the stock philosophical questions about time, starting with Saint Augustine's observations in Confessions, highlighting the questions 'What is the difference between past and future?', 'Could the future affect the past?' and 'What gives time its direction?'.
He then introduces the block universe view where the 'present' is regarded as a subjective notion, which changes from observer to observer, in the same way that the concept of 'here' changes depending on where the observer is. The block universe view rejects the notion that there exists an objective present and grants that the past, present and future are all equally real. He then surveys reasons to favour this view and common objections to it. Price then introduces the idea of viewing the block universe from an Archimedean point from outside of time, which is the view that is taken in the rest of the book.
Finally, Price introduces two problems regarding the arrow of time, which he calls the taxonomy problem and the genealogy problem. The taxonomy problem is the problem characterizing and finding the relationship between different arrows of time (e.g. the thermodynamic and cosmological arrows of time). The genealogy problem is to explain why asymmetries (i.e. arrows) exist in time, given that the laws of physics seem to be reversible (i.e. symmetric) in time.
Chapter 2 – "More apt to be Lost than Got": The Lessons of the Second Law
Covers the thermodynamic arrow of time, arising from the second law of thermodynamics. Discusses Ludwig Boltzmann and his development of the second law as a statistical law. The chapter also discusses Boltzmann's H-theorem and Loschmidt's paradox. Price takes a time-symmetric view and comes to the conclusion that the mystery of the second law is not the question of why entropy increases, but of why entropy was low at the beginning of the universe. Taking a time-symmetric view, he then speculates that entropy may decrease again, reaching a minimum at the end of the universe.
Chapter 3 – New Light on the Arrow of Radiation
This chapter discusses the apparent asymmetry of radiation. Namely, radiation is often observed spreading outwards from a source, but coherent radiation is not observed converging in a sink. Price criticizes explanations of this phenomenon from Karl Popper and Paul Davies and Dieter Zeh. The Wheeler–Feynman absorber theory is discussed and Price concludes that the arrow of time from radiation is a more general case of the thermodynamic arrow of time.
Chapter 4 – Arrows and Errors in Contemporary Cosmology
In this chapter Price tackles the problem of why entropy was low at the big bang and whether or not we should expect entropy to be low at the other temporal extreme of the universe. He introduces the Gold Universe model, which suggests that the universe will begin and end in a low entropy state. Explanations from Stephen Hawking and Paul Davies of the low entropy big bang are scrutinized. Price concludes that both Hawking and Davies apply a 'temporal double standard' with different standards being applied towards the past and the future. Thus, Price concludes that the arguments are flawed. The Gold Universe view is defended and some of its implications are explored.
Chapter 5 – Innocence and Symmetry in Microphysics
Price explores what he calls 'The Principle of Independence of Incoming Influences', which is the idea that systems are uncorrelated before they interact, but become correlated after interaction. He distinguishes two versions of this claim. The first is the macroscopic version which Price claims is associated with the low entropy past. The second is the microscopic version, which Price terms μInnocence. Price argues that, while the low entropy past gives us some reason to accept the macroscopic version, there is less reason to accept μInnocence. It is argued that, while μInnocence is intuitively plausible, it arises from a temporal double standard with respect to causality.
Chapter 6 – In Search of the Third Arrow
This chapter explores the idea of causation. Price argues that ideas about causation exert greater influence on physicists than is generally acknowledged. He explores the argument that the temporal asymmetry of causation comes from physical asymmetry, but ultimately finds this argument unconvincing, especially on the microscopic level. He concludes the chapter by claiming that the most plausible explanation is that the apparent asymmetry of causation is anthropocentric. That is: causation is not asymmetric in time, but we view it as being so because we (human beings) are ourselves thermodynamically asymmetric in time.
Chapter 7 – Convention Objectified and the Past Unlocked
The chapter introduces the 'conventionalist view' of causation: that the direction of causation is an anthropocentric convention and addresses some common criticisms of the view. The 'bilking argument' against retrocausality is introduced, and Michael Dummett's strategy for avoiding paradoxes in a world with retrocausality is examined.
Chapter 8 – Einstein's Issue: The Puzzle of Contemporary Quantum Theory
This chapter is a self-contained introduction to quantum mechanics. It introduces the EPR paradox and the measurement problem. Bell's theorem and the GHZ experiment are then introduced in the context of hidden-variables interpretations of quantum mechanics. De Broglie–Bohm theory, the many-worlds interpretation, the many-minds interpretation and the quantum decoherence approach are all examined, though Price finds them all ultimately unconvincing. He points out that Bell's theorem relies on the assumption that, when a measurement basis is chosen, this choice is independent of the state of the quantum system being measured. This, he points out, would not necessarily be the case in a world with advanced action.
Chapter 9 – The Case for Advanced Action
Price notes that the independence assumption in Bell's theorem can be relaxed in two ways: the first being that the measurement basis and the state of the quantum system are correlated through a common cause in the past, and the second being what Price calls 'advanced action' – a 'common cause' in the future. He argues against superdeterminism, the idea that a quantum system and measurement apparatus are correlated due to a common cause in the past. In contrast, he suggests that the 'advanced action' interpretation is elegant and appealing and fits in better with his 'Archimedean viewpoint'. He briefly discusses the relationship between advanced action and free will.
Release
The book was published by Oxford University Press on 9 October 1997. It was initially released in hardback, but is now available in hardback, paperback and ebook formats.
Reception
Time's Arrow and Archimedes' Point was generally well received. Many reviewers found Price's arguments stimulating and praised his explanations of the issues. However, many took issues with some of his specific arguments.
Joel Lebowitz gave the book a mixed review for Physics Today where he called Price's arguments regarding backward causation "unconvincing", but praised the section on quantum mechanics, writing "his discussion ... of the Bohr-Einstein 'debate' about the completeness of the quantum description of reality is better than much of the physics literature".
Peter Coveney gave the book a mixed review for the New Scientist, criticizing Price's treatment of non-equilibrium statistical mechanics, but concluding by saying "[a]lthough I didn't find many of the arguments convincing, Price's book is a useful addition to the literature on time, particularly as it reveals the influence of modern science on the way a philosopher thinks. But given its restricted and idiosyncratic character, this book should be read only in conjunction with more broadly based works."
John D. Barrow reviewed the book in Nature, strongly criticizing the chapter on the cosmological arrow of time but writing "the author has done physicists a great service in laying out so clearly and critically the nature of the various time-asymmetry problems of physics".
Craig Callender gave the book a detailed, positive review for The British Journal for the Philosophy of Science, calling it "exceptionally readable and entertaining" as well as "a highly original and important contribution to the philosophy and physics of time".
Gordon Belot reviewed the book for The Philosophical Review, writing "[t]his is a fertile and fascinating area, and Price's book provides an exciting entree, even if it does not provide all the answers".
Carlo Rovelli chose the book as one of his favourite books on the subject of time, calling Huw Price "one of the best living philosophers" and saying that it "teaches us an important lesson: we are so used to think time as naturally oriented that we instinctively think that the future is determined by the past even if we try not to".
References
Philosophy of time
Philosophy of physics
1996 non-fiction books | Time's Arrow and Archimedes' Point | [
"Physics"
] | 1,986 | [
"Philosophy of physics",
"Applied and interdisciplinary physics",
"Physical quantities",
"Time",
"Philosophy of time",
"Spacetime"
] |
62,220,759 | https://en.wikipedia.org/wiki/QuTiP | QuTiP, short for the Quantum Toolbox in Python, is an open-source computational physics software library for simulating quantum systems, particularly open quantum systems. QuTiP allows simulation of Hamiltonians with arbitrary time dependence, covering situations of interest in quantum optics, ion trapping, superconducting circuits and quantum nanomechanical resonators. The library includes extensive facilities for visualizing simulation results.
QuTiP's API provides a Python interface and uses Cython to allow run-time compilation and extensions via C and C++. QuTiP is built to work well with popular Python packages NumPy, SciPy, Matplotlib and IPython.
History
The idea for the QuTiP project was conceived in 2010 by PhD student Paul Nation, who was using the quantum optics toolbox (qotoolbox) for MATLAB in his research. According to Nation, he wanted to create a Python package similar to qotoolbox because he "was not a big fan of MATLAB" and decided to "just write it [him]self". As a postdoctoral fellow at the RIKEN Institute in Japan, he met Robert Johansson and the two worked together on the package.
In contrast to its predecessor qotoolbox, which relies on the proprietary MATLAB environment, it was published in 2012 under an open source license.
The version created by Nation and Johansson already contained the most important features of the package, but QuTiP's scope and features are constantly being extended by a large community of contributors. It has grown in popularity amongst physicists, with over 250,000 downloads in 2021.
Examples
Creating quantum objects
>>> import qutip
>>> import numpy as np
>>> psi = qutip.Qobj([[0.6], [0.8]]) # create quantum state from a list
>>> psi
Quantum object: dims = [[2], [1]], shape = (2, 1), type = ket
Qobj data =
[[0.6]
[0.8]]
>>> phi=qutip.Qobj(np.array([0.8, -0.6])) # create quantum state from a numpy-array
>>> phi
Quantum object: dims = [[2], [1]], shape = (2, 1), type = ket
Qobj data =
[[ 0.8]
[-0.6]]
>>> e0=qutip.basis(2, 0) # create a basis vector
>>> e0
Quantum object: dims = [[2], [1]], shape = (2, 1), type = ket
Qobj data =
[[1.]
[0.]]
>>> A=qutip.Qobj(np.array([[1,2j], [-2j,1]])) # create quantum operator from numpy array
>>> A
Quantum object: dims = [[2], [2]], shape = (2, 2), type = oper, isherm = True
Qobj data =
[[1.+0.j 0.+2.j]
[0.-2.j 1.+0.j]]
>>> qutip.sigmay() # some common quantum objects, like pauli matrices, are predefined in the qutip package
Quantum object: dims = [[2], [2]], shape = (2, 2), type = oper, isherm = True
Qobj data =
[[0.+0.j 0.-1.j]
[0.+1.j 0.+0.j]]
Basic operations
>>> A*qutip.sigmax()+qutip.sigmay() # we can add and multiply quantum objects of compatible shape and dimension
Quantum object: dims = [[2], [2]], shape = (2, 2), type = oper, isherm = False
Qobj data =
[[0.+2.j 1.-1.j]
[1.+1.j 0.-2.j]]
>>> psi.dag() # hermitian conjugate
Quantum object: dims = [[1], [2]], shape = (1, 2), type = bra
Qobj data =
[[0.6 0.8]]
>>> psi.proj() # projector onto a quantum state
Quantum object: dims = [[2], [2]], shape = (2, 2), type = oper, isherm = True
Qobj data =
[[0.36 0.48]
[0.48 0.64]]
>>> A.tr() # trace of operator
2.0
>>> A.eigenstates() # diagonalize an operator
(array([-1., 3.]), array([Quantum object: dims = [[2], [1]], shape = (2, 1), type = ket
Qobj data =
[[-0.70710678+0.j ]
[ 0. -0.70710678j]] ,
Quantum object: dims = [[2], [1]], shape = (2, 1), type = ket
Qobj data =
[[-0.70710678+0.j ]
[ 0. +0.70710678j]] ],
dtype=object))
>>> (1j * A).expm() # matrix exponential of an operator
Quantum object: dims = [[2], [2]], shape = (2, 2), type = oper, isherm = False
Qobj data =
[[-0.2248451-0.35017549j -0.4912955-0.7651474j ]
[ 0.4912955+0.7651474j -0.2248451-0.35017549j]]
>>> qutip.tensor(qutip.sigmaz(), qutip.sigmay()) # tensor product
Quantum object: dims = [[2, 2], [2, 2]], shape = (4, 4), type = oper, isherm = True
Qobj data =
[[0.+0.j 0.-1.j 0.+0.j 0.+0.j]
[0.+1.j 0.+0.j 0.+0.j 0.+0.j]
[0.+0.j 0.+0.j 0.+0.j 0.+1.j]
[0.+0.j 0.+0.j 0.-1.j 0.+0.j]]
Time evolution
>>> Hamiltonian = qutip.sigmay()
>>> times = np.linspace(0, 2, 10)
>>> result = qutip.sesolve(Hamiltonian, psi, times, [psi.proj(), phi.proj()]) # unitary time evolution of a system according to schroedinger equation
>>> expectpsi, expectphi = result.expect # expectation values of projectors onto psi and phi
>>> import matplotlib.pyplot as plt
>>> plt.figure(dpi=200)
>>> plt.plot(times, expectpsi)
>>> plt.plot(times, expectphi)
>>> plt.legend([r"$\psi$",r"$\phi$"])
>>> plt.show()
Simulating a non-unitary time evolution according to the Lindblad Master Equation is possible with the qutip.mesolve function
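A minimal sketch of such a call, reusing the Hamiltonian, states and times defined above and assuming a single, hypothetical decay channel with rate kappa, could look like this:

>>> kappa = 0.5                                   # assumed decay rate
>>> c_ops = [np.sqrt(kappa) * qutip.sigmam()]     # collapse (Lindblad) operator
>>> result = qutip.mesolve(Hamiltonian, psi, times, c_ops, [psi.proj(), phi.proj()])
>>> expectpsi, expectphi = result.expect          # expectation values, as before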
References
External links
Articles with example Python (programming language) code
Computational physics
Free software programmed in Python
Simulation software
Software using the BSD license
Quantum Monte Carlo | QuTiP | [
"Physics",
"Chemistry"
] | 1,759 | [
"Quantum Monte Carlo",
"Quantum chemistry",
"Computational physics"
] |
62,223,541 | https://en.wikipedia.org/wiki/Sachdev%E2%80%93Ye%E2%80%93Kitaev%20model | In condensed matter physics and black hole physics, the Sachdev–Ye–Kitaev (SYK) model is an exactly solvable model initially proposed by Subir Sachdev and Jinwu Ye, and later modified by Alexei Kitaev to the present commonly used form. The model is believed to bring insights into the understanding of strongly correlated materials and it also has a close relation with the discrete model of AdS/CFT. Many condensed matter systems, such as quantum dot coupled to topological superconducting wires, graphene flake with irregular boundary, and kagome optical lattice with impurities, are proposed to be modeled by it. Some variants of the model are amenable to digital quantum simulation, with pioneering experiments implemented in nuclear magnetic resonance.
Model
Let N be an integer and q an even integer with q ≤ N, and consider a set of N Majorana fermions χ_1, …, χ_N, which are fermion operators satisfying the conditions:
Hermitian ;
Clifford relation .
Let be random variables whose expectations satisfy:
;
.
Then the SYK model is defined as
.
Note that sometimes an extra normalization factor is included.
The most famous model is when q = 4:
,
where the factor is included to coincide with the most popular form.
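Small instances of the model are often studied numerically by exact diagonalization. The sketch below builds a dense q = 4 SYK Hamiltonian for a handful of Majorana modes via a Jordan–Wigner construction; the coupling variance (q-1)!·J²/N^(q-1) and the normalization {χ_a, χ_b} = 2δ_ab are assumptions corresponding to one common convention, and the code is illustrative rather than a reference implementation.

import itertools
from math import factorial

import numpy as np

def majorana_operators(n_majorana):
    """Majorana operators from a Jordan-Wigner construction, normalized so
    that {chi_a, chi_b} = 2 * delta_ab (each chi squares to the identity)."""
    n_qubits = n_majorana // 2
    I = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)

    def chain(ops):
        out = np.array([[1.0 + 0j]])
        for op in ops:
            out = np.kron(out, op)
        return out

    chis = []
    for k in range(n_qubits):
        prefix, suffix = [Z] * k, [I] * (n_qubits - k - 1)
        chis.append(chain(prefix + [X] + suffix))
        chis.append(chain(prefix + [Y] + suffix))
    return chis

def syk_hamiltonian(n_majorana, J=1.0, q=4, seed=0):
    """Dense SYK Hamiltonian with Gaussian couplings of variance
    (q-1)! * J**2 / N**(q-1). Hermitian as written for q = 4 (the i**(q/2)
    prefactor needed for q = 2 mod 4 is omitted)."""
    rng = np.random.default_rng(seed)
    chis = majorana_operators(n_majorana)
    dim = chis[0].shape[0]
    H = np.zeros((dim, dim), dtype=complex)
    sigma = np.sqrt(factorial(q - 1) * J**2 / n_majorana**(q - 1))
    for idx in itertools.combinations(range(n_majorana), q):
        coupling = rng.normal(0.0, sigma)
        H += coupling * np.linalg.multi_dot([chis[i] for i in idx])
    return H

# Example: 8 Majorana modes give a 16 x 16 matrix; inspect its spectrum.
spectrum = np.linalg.eigvalsh(syk_hamiltonian(8))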
See also
Non-Fermi liquid
References
Lattice models | Sachdev–Ye–Kitaev model | [
"Physics",
"Materials_science"
] | 256 | [
"Statistical mechanics",
"Condensed matter physics",
"Lattice models",
"Computational physics"
] |
53,340,420 | https://en.wikipedia.org/wiki/Single-shot%20multi-contrast%20X-ray%20imaging | Single-shot multi-contrast x-ray imaging is an efficient and robust x-ray imaging technique used to obtain three different and complementary types of information (absorption, scattering, and phase contrast) from a single x-ray exposure on a detector, which is subsequently processed by Fourier analysis. Absorption is mainly due to attenuation and Compton scattering from the object, while phase contrast corresponds to the phase shift of the x-rays.
The technique obtains images of both biological and non-biological objects. Research applications include radiography, scattering imaging, differential phase contrast, and diffraction imaging. It is also possible to adjust and modify the experiment based on which information is of most importance. Almost every application of the technique shares the same approach, mathematics and science, such as the experimental setup, the complementary information extracted and the Fourier analysis.
Single-shot multi-contrast x-ray imaging has recently gained importance relative to the Talbot–Lau interferometer because it uses fewer optical elements, such as diffraction gratings, and obtains all of the information digitally.
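To illustrate the Fourier-based extraction in schematic form, a spatial-harmonic-style separation could look like the sketch below; the grid frequency, window size and the mapping of harmonics to the three contrasts are assumptions for illustration, not taken from the cited work.

import numpy as np

def split_harmonics(image, grid_period_px, window=16):
    """Separate the 0th and one 1st-order harmonic of a mesh-modulated image.
    Returns simplified (absorption, scattering, phase) maps."""
    F = np.fft.fftshift(np.fft.fft2(image))
    cy, cx = np.array(F.shape) // 2
    dx = int(round(image.shape[1] / grid_period_px))   # harmonic offset in frequency pixels
    eps = 1e-12

    def crop(center_y, center_x):
        patch = F[center_y - window:center_y + window,
                  center_x - window:center_x + window]
        return np.fft.ifft2(np.fft.ifftshift(patch))

    h0 = crop(cy, cx)          # 0th harmonic: mainly absorption
    h1 = crop(cy, cx + dx)     # 1st harmonic along x
    absorption = -np.log(np.abs(h0) / (np.abs(h0).max() + eps) + eps)
    scattering = np.abs(h1) / (np.abs(h0) + eps)   # dark-field-like contrast
    phase = np.angle(h1)                            # related to differential phase
    return absorption, scattering, phase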
References
Spatial harmonic method
Interferometry-based setups
Hybrid detectors and coded apertures
Fourier analysis
Imaging
Interferometry
X-ray instrumentation | Single-shot multi-contrast X-ray imaging | [
"Technology",
"Engineering"
] | 251 | [
"X-ray instrumentation",
"Measuring instruments"
] |
53,341,083 | https://en.wikipedia.org/wiki/Automated%20efficiency%20model | An automated efficiency model (AEM) is a mathematical model that estimates a real estate property’s efficiency (in terms of energy, commuting, etc) by using details specific to the property which are available publicly and/or housing characteristics which are aggregated over a given area such as a zip code. AEMs have some similarities to an automated valuation model (AVM) in terms of concept, advantages and disadvantages.
AEMs calculate specific efficiencies such as location, water, energy or solar efficiency. The Council of Multiple Listing Services defines an AEM as, “any algorithm or scoring model that estimates the [efficiency] of a home without an on-site inspection. They are similar to Automated Valuation Models (AVMs), but are more reliant on public data such as square footage...and estimated energy usage.”
Most AEMs calculate a property’s selected efficiency by analyzing available public information and may also apply proprietary data or formulas, and allow for a user such as a home owner to make additional inputs. Housing characteristics such as age of the home or square footage may be obtained by data providers such as those on this list of online real estate databases or a similar offerings. Estimates of energy usage may be available from published sources such as through the Residential Energy Consumption Survey by the Energy Information Administration.
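As a purely hypothetical illustration of such a scoring model (not any vendor's actual algorithm), the sketch below combines public-record fields with an area-level energy estimate into a simple efficiency score normalized against a zip-code average.

def toy_energy_efficiency_score(sqft, year_built, zip_avg_kwh_per_sqft,
                                est_annual_kwh):
    """Hypothetical AEM-style score: 100 means the property's estimated energy
    use per square foot matches its zip-code average; higher is better."""
    kwh_per_sqft = est_annual_kwh / sqft
    relative_use = kwh_per_sqft / zip_avg_kwh_per_sqft
    age_penalty = max(0, (2024 - year_built) - 20) * 0.1   # arbitrary aging term
    return round(100 / relative_use - age_penalty, 1)

# Example with made-up inputs: a 2,000 sq ft home built in 1995.
print(toy_energy_efficiency_score(2000, 1995, zip_avg_kwh_per_sqft=6.0,
                                  est_annual_kwh=11000))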
Examples of use
By design, the AEM score output is provided as a preliminary comparison tool so the score of one property may be compared to other homes, against an average score for the area, etc. Primary users may vary from buyers and sellers to real estate agents and appraisers as they complete relevant comparisons. For example, REColorado, the multiple listing service covering the Denver metro area, presents a UtilityScore widget on homes for sale. Zillow publishes a Sun Number score on the home fact sheet so website visitors can compare the solar energy potential of prospective properties. Trulia has published a report using automated estimates from UtilityScore to present water, natural gas and electric rates into a single price per square foot by zip code.
Beyond usage for consumer preliminary comparisons, usage of AEMs varies by industry. AEMs may also be used by solar installers, home improvement contractors, efficiency inspectors, and mortgage lenders.
In the photovoltaics industry, installers use Sun Number to reduce the soft costs associated with motivating consumers to invest in solar systems and in recording property specifications to create quotes. The U.S. Department of Energy has found that Sun Number eliminates 7–10 days from the quotation process when solar suitability is determined digitally and eliminates the need for an onsite inspection.
AEMs have been used in the mortgage industry to support a niche loan product called a Location Efficient Mortgage (LEM). During underwriting, an AEM such as the H+T Affordability Index is used to calculate the location efficient value.
According to National Mortgage Professional Magazine AEMs may one day be incorporated into loan underwriting as well, “Since utilities are as big or bigger part of home expenses than even real estate taxes, we may see [estimated utility usage] begin to be factored into underwriting.”
Methodology
AEMs generate a score for a specific property based on both publicly available housing characteristics about the subject property as well as mathematical modeling. AEMs are technology-driven scores without an onsite inspection or human assessment. For more accurate information unique to a specific property an onsite inspection such as an energy audit is required.
Detailed information on the data accessed to calculate an AEM, the modeling formulas and algorithms are generally not published. A summary of general information is listed in the table below:
Advantages
As shown in the section above, AEMs tend to rely on public information rather than information which is private to the resident such as actual utility bills. Utility bills can vary based on the occupancy and personal property within a structure. The public information used in AEMs is relatively static as it is focused on details of the structure, location and/or mechanical systems and therefore tends to reflect the real property transferred during a real estate transaction.
According to the Council of Multiple Listing Services advantages are, “AEMs provide consumers with a quick comparison of all properties across a specified market. Since most focus on the attached systems and structure, they are only meant to reflect the efficiency of the real property.”
Disadvantages
According to the Council of Multiple Listing Services, disadvantages are: "AEMs are dependent on data used, the assumptions made, and the model methodology. Since models and methodologies differ and no on-site inspections are performed, accuracy may vary among scoring systems."
References
Mathematical modeling | Automated efficiency model | [
"Mathematics"
] | 948 | [
"Applied mathematics",
"Mathematical modeling"
] |
53,343,992 | https://en.wikipedia.org/wiki/IPOP | IPOP (IP-Over-P2P) is an open-source user-centric software virtual network allowing end users to define and create their own virtual private networks (VPNs). IPOP virtual networks provide end-to-end tunneling of IP or Ethernet over “TinCan” links setup and managed through a control API to create various software-defined VPN overlays.
History
IPOP started as a research project at the University of Florida in 2006. In its first-generation design and implementation, IPOP was built atop structured P2P links managed by the C# Brunet library. In its first design, IPOP relied on Brunet’s structured P2P overlay network for peer-to-peer messaging, notifications, NAT traversal, and IP tunneling. The Brunet-based IPOP is still available as open-source code; however, IPOP’s architecture and implementation have evolved.
Starting September 2013, the project has been funded by the National Science Foundation under the SI2 (Software Infrastructure for Sustained Innovation) program to enable it as open-source “scientific software element” for research in cloud computing. The second-generation design of IPOP incorporates standards (XMPP, STUN, TURN) and libraries (libjingle) that have evolved since the project’s beginning to create P2P tunnels – which we refer to as TinCan links. The current TinCan-based IPOP implementation is based on modules written in C/C++ that leverage libjingle to create TinCan links, and exposing a set of APIs to controller modules that manage the setup, creation and management of TinCan links. For enhanced modularity, the controller module runs as a separate process from the C/C++ module that implements TinCan links and communicate through a JSON-based RPC system; thus the controller can be written in other languages such as Python.
See also
OpenConnect, implements a TLS and DTLS-based VPN
OpenSSH, which also implements a layer-2/3 "tun"-based VPN
OpenVPN, SSL/TLS based user-space VPN
Point-to-Point Tunneling Protocol (PPTP) Microsoft method for implementing VPN
Secure Socket Tunneling Protocol (SSTP) Microsoft method for implementing PPP over SSL VPN
Social VPN, an open-source VPN based on relationships
SoftEther VPN, an open-source VPN server program which supports OpenVPN protocol
stunnel encrypt any TCP connection (single port service) over SSL
UDP hole punching, a technique for establishing UDP "connections" between firewalled/NATed network nodes
References
External links
Peer-to-peer-based VPN Alternatives - Linux Magazine
Tutorial: Deploying Your Own P2P Overlay for IPOP VPNs - FutureGrid
Install package network:vpn:ipop / ipop
Google Summer of Code > 2015 > IP-over-P2P Project
Free security software
Tunneling protocols
Unix network-related software
Virtual private networks | IPOP | [
"Engineering"
] | 646 | [
"Computer networks engineering",
"Tunneling protocols"
] |
53,353,992 | https://en.wikipedia.org/wiki/Perturb-seq | Perturb-seq (also known as CRISP-seq and CROP-seq) refers to a high-throughput method of performing single cell RNA sequencing (scRNA-seq) on pooled genetic perturbation screens. Perturb-seq combines multiplexed CRISPR mediated gene inactivations with single cell RNA sequencing to assess comprehensive gene expression phenotypes for each perturbation. Inferring a gene’s function by applying genetic perturbations to knock down or knock out a gene and studying the resulting phenotype is known as reverse genetics. Perturb-seq is a reverse genetics approach that allows for the investigation of phenotypes at the level of the transcriptome, to elucidate gene functions in many cells, in a massively parallel fashion.
The Perturb-seq protocol uses CRISPR technology to inactivate specific genes and DNA barcoding of each guide RNA to allow for all perturbations to be pooled together and later deconvoluted, with assignment of each phenotype to a specific guide RNA. Droplet-based microfluidics platforms (or other cell sorting and separating techniques) are used to isolate individual cells, and then scRNA-seq is performed to generate gene expression profiles for each cell. Upon completion of the protocol, bioinformatics analyses are conducted to associate each specific cell and perturbation with a transcriptomic profile that characterizes the consequences of inactivating each gene.
History
In the December 2016 issue of the Cell journal, two companion papers were published that each introduced and described this technique. A third paper describing a conceptually similar approach (termed CRISP-seq) was also published in the same issue. In October 2016, the CROP-seq method for single-cell CRISPR screening was presented in a preprint on bioRxiv and later published in the Nature Methods journal. While each paper shared the core principles of combining CRISPR mediated perturbation with scRNA-seq, their experimental, technological and analytical approaches differed in several aspects, to explore distinct biological questions, demonstrating the broad utility of this methodology. For example, the CRISP-seq paper demonstrated the feasibility of in vivo studies using this technology, and the CROP-seq protocol facilitates large screens by providing a vector that makes the guide RNA itself readable (rather than relying on expressed barcodes), which allows for single-step guide RNA cloning. A June 2022 paper in Cell published results from one of the first genome-scale Perturb-seq screens, which uncovered new perturbations that promote chromosomal instability as well as variations in the expression of mitochondrially encoded transcripts in response to different forms of mitochondrial stress.
Experimental workflow
CRISPR Single Guide RNA Library design and selection
Pooled CRISPR libraries that enable gene inactivation can come in the form of either knockout or interference. Knockout libraries perturb genes through double stranded breaks that prompt the error prone non-homologous end joining repair pathway to introduce disruptive insertions or deletions. CRISPR interference (CRISPRi) on the other hand utilizes a catalytically inactive nuclease to physically block RNA polymerase, effectively preventing or halting transcription. Perturb-seq has been utilized with both the knockout and CRISPRi approaches in the Dixit et al. paper and the Adamson et al. paper, respectively.
Pooling all guide RNAs into a single screen relies on DNA barcodes that act as identifiers for each unique guide RNA. There are several commercially available pooled CRISPR libraries including the guide barcode library used in the study by Adamson et al. CRISPR libraries can also be custom made using tools for sgRNA design, many of which are listed on the CRISPR/cas9 tools Wikipedia page.
Lentiviral vectors
The sgRNA expression vector design will depend largely on the experiment performed but requires the following central components:
Promoter
Restriction sites
Primer Binding Sites
sgRNA
Guide Barcode
Reporter gene:
Fluorescent gene: vectors are often constructed to include a gene encoding a fluorescent protein, such that successfully transduced cells can be visually and quantitatively assessed by their expression.
Antibiotic resistance gene: similar to fluorescent markers, antibiotic resistance genes are often incorporated into vectors to allow for selection of successfully transduced cells.
CRISPR-associated endonuclease: Cas9 or other CRISPR-associated endonucleases such as Cpf1 must be introduced to cells that do not endogenously express them. Due to the large size of these genes, a two-vector system can be used to express the endonuclease separately from the sgRNA expression vector.
Transduction and selection
Cells are typically transduced with a Multiplicity of Infection (MOI) of 0.4 to 0.6 lentiviral particles per cell to maximize the fraction of transduced cells that contain a single guide RNA. If the effects of simultaneous perturbations are of interest, a higher MOI may be applied to increase the number of transduced cells carrying more than one guide RNA. Selection for successfully transduced cells is then performed using a fluorescence assay or an antibiotic assay, depending on the reporter gene used in the expression vector.
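Under the common assumption that lentiviral integrations per cell follow a Poisson distribution, the MOI directly sets the fraction of transduced cells that carry exactly one guide; the short, illustrative calculation below applies this to the MOI range quoted above.

from math import exp

def single_guide_fraction(moi):
    """Fraction of transduced cells (>= 1 integration) carrying exactly one
    guide, assuming Poisson-distributed integrations with mean `moi`."""
    p0 = exp(-moi)              # untransduced cells
    p1 = moi * exp(-moi)        # exactly one integration
    return p1 / (1.0 - p0)

for moi in (0.4, 0.6, 2.0):
    print(f"MOI {moi}: {single_guide_fraction(moi):.0%} of transduced cells carry one guide")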
Single-cell library preparation
After successfully transduced cells have been selected for, isolation of single cells is needed to conduct scRNA-seq. Perturb-seq and CROP-seq have been performed using droplet-based technology for single cell isolation, while the closely related CRISP-seq was performed with a microwell-based approach. Once cells have been isolated at the single cell level, reverse transcription, amplification and sequencing takes place to produce gene expression profiles for each cell. Many scRNA-seq approaches incorporate unique molecular identifiers (UMIs) and cell barcodes during the reverse transcription step to index individual RNA molecules and cells, respectively. These additional barcodes serve to help quantify RNA transcripts and to associate each of the sequences with their cell of origin.
Bioinformatics analysis
Read alignment and processing are performed to map quality reads to a reference genome. Deconvolution of cell barcodes, guide barcodes and UMIs enables the association of guide RNAs with the cells that contain them, thus allowing the gene expression profile of each cell to be affiliated with a particular perturbation. Further downstream analyses on the transcriptional profiles will depend entirely on the biological question of interest. T-distributed Stochastic Neighbor Embedding (t-SNE) is a commonly used machine learning algorithm to visualize the high-dimensional data that results from scRNA-seq in a 2-dimensional scatterplot. The authors who first performed Perturb-seq developed an in-house computational framework called MIMOSCA that predicts the effects of each perturbation using a linear model and is available on an open software repository.
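As a schematic of the deconvolution and modeling steps described above (the toy tables and column names are hypothetical, and the least-squares fit is only a crude stand-in for MIMOSCA-style models):

import numpy as np
import pandas as pd

# Toy tables standing in for processed sequencing output (hypothetical columns).
cells = pd.DataFrame({"cell_barcode": ["AAAC", "AAAG", "AACT"],
                      "guide_barcode": ["g_TP53", "g_NTC", "g_TP53"]})
expr = pd.DataFrame(np.random.default_rng(0).poisson(2, size=(3, 4)),
                    index=["AAAC", "AAAG", "AACT"],
                    columns=["GeneA", "GeneB", "GeneC", "GeneD"])

# Deconvolution: attach the guide identity to each cell's expression profile.
labeled = expr.join(cells.set_index("cell_barcode"))

# Simple linear model: one indicator column per guide, least-squares fit of
# expression on perturbation identity.
design = pd.get_dummies(labeled["guide_barcode"]).astype(float)
coeffs, *_ = np.linalg.lstsq(design.to_numpy(),
                             labeled[expr.columns].to_numpy(), rcond=None)
effects = pd.DataFrame(coeffs, index=design.columns, columns=expr.columns)
print(effects)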
Advantages and limitations
Perturb-seq makes use of current technologies in molecular biology to integrate a multi-step workflow that couples high-throughput screening with complex phenotypic outputs. When compared to alternative methods used for gene knockdowns or knockouts, such as RNAi, zinc finger nucleases or transcription activator-like effector nucleases (TALENs), the application of CRISPR-based perturbations enables more specificity, efficiency and ease of use. Another advantage of this protocol is that while most screening approaches can only assay for simple phenotypes, such as cellular viability, scRNA-seq allows for a much richer phenotypic readout, with quantitative measurements of gene expression in many cells simultaneously. Perturb-seq can therefore combine the high throughput of forward genetics, in terms of the number of genetic perturbations, with the rich phenotype dimension of reverse genetics.
However, while a large and comprehensive amount of data can be a benefit, it can also present a major challenge. Single cell RNA expression readouts are known to produce ‘noisy’ data, with a significant number of false positives. Both the large size and noise that is associated with scRNA-seq will likely require new and powerful computational methods and bioinformatics pipelines to better make sense of the resulting data. Another challenge associated with this protocol is the creation of large scale CRISPR libraries. The preparation of these extensive libraries depends upon a comparative increase in the resources required to culture the massive numbers of cells that are needed to achieve a successful screen of many perturbations.
In parallel to these single-cell methods, other approaches have been developed to reconstruct genetic pathways using whole-organism RNA-sequencing. These methods use a single aggregate statistic, called the transcriptome-wide epistasis coefficient, to guide pathway reconstruction. In contrast with the statistical framework of the methods described above, this coefficient may be more robust to noise and is intuitively interpretable in terms of Batesonian epistasis. This approach was used to identify a new state in the life cycle of the nematode C. elegans.
Applications
Perturb-seq or other conceptually similar protocols can be used to address a broad scope of biological questions and the applications of this technology will likely grow over time. Three papers on this topic, published in the December 2016 issue of the Journal Cell, demonstrated the utility of this method by applying it to the investigation of several distinct biological functions. In the paper, “Perturb-Seq: Dissecting Molecular Circuits with Scalable Single-Cell RNA Profiling of Pooled Genetic Screens”, the authors used Perturb-seq to conduct knockouts of transcription factors related to the immune response in hundreds of thousands of cells to investigate the cellular consequences of their inactivation. They also explored the effects of transcription factors on cell states in the context of the cell cycle. In the study led by UCSF, “A Multiplexed Single-Cell CRISPR Screening Platform Enables Systematic Dissection of the Unfolded Protein Response” the researchers suppressed multiple genes in each cell to study the unfolded protein response (UPR) pathway. With a similar methodology, but using the term CRISP-seq instead of Perturb-seq, the paper "Dissecting Immune Circuits by Linking CRISPR-Pooled Screens with Single-Cell RNA-Seq" performed a proof of concept experiment by using the technique to probe regulatory pathways related to innate immunity in mice. Lethality of each perturbation and epistasis analyses in cells with multiple perturbations was also investigated in these papers. Perturb-seq has so far been used with very few perturbations per experiment, but it can theoretically be scaled up to address the whole genome. Finally, the October 2016 preprint and subsequent paper demonstrate the bioinformatic reconstruction of the T cell receptor signaling pathway in Jurkat cells based on CROP-seq data.
Recently, the Perturb-seq (CROP-seq) workflow has been adapted to enable genome-scale CRISPRi (CRISPR interference) screens in Jurkat cells at single-cell resolution. The first-of-its-kind genome-scale CRISPRi screen was conducted to verify factors involved in TCR signaling pathways. In more detail, a guide RNA library targeting 18,595 human genes was utilized for CRISPR-based gene knockdowns in Jurkat cells expressing the dCas9-KRAB fusion endonuclease. In total, one million Jurkat cells were processed for single-cell RNA sequencing allowing transcriptomic readouts of a final list of 374 marker genes involved in TCR signaling. The bioinformatic analysis confirmed more than 70 known activators and repressors of TCR signaling cascades, hence showcasing the potential of Perturb-seq (CROP-seq) screens to support translational research.
While these publications used these protocols for answering complex biological questions, this technology can also be used as a validation assay to ensure the specificity of any CRISPR based knockdown or knockout; the expression levels of the target genes as well as others can be measured with single cell resolution in parallel, to detect whether the perturbation was successful and to assess the experiment for off target effects. Furthermore, these protocols make it possible to perform perturbation screens in heterogeneous tissues, while obtaining cell type specific gene expression responses.
References
RNA sequencing
Genomics
Bioinformatics
Molecular biology techniques | Perturb-seq | [
"Chemistry",
"Engineering",
"Biology"
] | 2,585 | [
"Genetics techniques",
"Biological engineering",
"Bioinformatics",
"RNA sequencing",
"Molecular biology techniques",
"Molecular biology"
] |
65,997,053 | https://en.wikipedia.org/wiki/James%20Mannin | James Mannin (died June 1779) was an artist, painter and draughtsman who lived in Ireland.
Life
There are no known details of James Mannin's early life. Some early sources state that he may have been French, but the surname Mannin is most commonly found in northern Italy. The first records of Mannin in Dublin date from 1753, when he is recorded as a designer of ornamental patterns. It was this work that brought him to the attention of the Dublin Society, beginning an association that lasted the rest of his career. He supplied the Society with designs for items including carpets and picture frames during the 1750s, and in 1767 he designed the president's chair carved by Richard Cranfield (1731–1809). On 18 October 1769 he married Mary Maguire in St Andrew's Church, Dublin. He lived in Lazer's Hill from 1770 to 1775, before moving to King Street.
Career
From 1753, Mannin worked as a private drawing teacher. In May 1754, Mannin took on a number of young Irish artists as apprentices with the Society, to teach them ornamental drawing and design. This first group included Hugh Douglas Hamilton. This was the first time in Ireland that design was formally taught and reflected the Society's mission to promote high quality design in Ireland. This was further cemented when Mannin became a salaried employee of the Society in May 1756 as the master of the school of ornament, a post he would hold until just before his death. During his tenure, he taught many Irish artists such as John James Barralet, George Mullins, and Thomas Roberts.
There are no surviving drawings attributed to Mannin, but given his influence it is believed he looked to French taste and in particular Rococo. The Society was interested in the development of art education in France, and purchased prints after works of French artists to be used as teaching aids. The Society had a strong role in shaping Mannin's teaching, to ensure that the teaching was of a standard commensurate with the Society's fees. He was instructed to teach his students in pattern drawing based on Hamburg damasks in March 1765, which reflected the promotion of damask weaving in the Irish linen industry. Mannin also taught drawing for engraving.
Mannin continued to work as a painter of landscapes, still lifes, and flowers in a private capacity throughout this time. In 1765 and 1766 he exhibited with the Society of Artists in Hawkins Street. The Dublin Society awarded him premiums for landscape three times, in 1763, 1769, and 1770. He also produced his own ornamental designs, including a staircase for the Society of Artists in 1765, and carriage designs for coachbuilders in 1770. He also taught art privately, and even complained in an address in June 1766 that the Dublin Society's teaching demands encroached on his ability to pursue this work.
He became ill in early 1779, leading him to suggest Barralet to be appointed master of the school of ornamental drawing in his place. His death was announced by the Dublin Society on 24 June 1779.
References
1779 deaths
18th-century textile artists
18th-century Irish painters
18th-century Irish male artists
Irish male painters
Draughtsmen
Artists from Dublin (city) | James Mannin | [
"Engineering"
] | 661 | [
"Design engineering",
"Draughtsmen"
] |
65,997,474 | https://en.wikipedia.org/wiki/Ashcroft%20and%20Mermin | Solid State Physics, better known by its colloquial name Ashcroft and Mermin, is an introductory condensed matter physics textbook written by Neil Ashcroft and N. David Mermin. Published in 1976 by Saunders College Publishing and designed by Scott Olelius, the book has been translated into over half a dozen languages and it and its competitor, Introduction to Solid State Physics (often shortened to Kittel), are considered the standard introductory textbooks of condensed matter physics.
Content
The Drude Theory of Metals
The Sommerfeld Theory of Metals
Failures of the Free Electron Model
Crystal Lattices
The Reciprocal lattice
Determination of Crystal Structures by X-Ray Diffraction
Classification of Bravais Lattices and Crystal Structures
Electron Levels in a Periodic Potential: General Properties
Electrons in a Weak Periodic Potential
The Tight-Binding Method
Other Methods for Calculating Band Structure
The Semiclassical Model of Electron Dynamics
The Semiclassical Theory of Conduction in Metals
Measuring the Fermi Surface
Band Structure of Selected Metals
Beyond the Relaxation-Time Approximation
Beyond the Independent Electron Approximation
Surface Effects
Classification of Solids
Cohesive Energy
Failures of the Static Lattice Model
Classical Theory of the Harmonic Crystal
Quantum Theory of the Harmonic Crystal
Measuring Phonon Dispersion Relations
Anharmonic Effects in Crystals
Phonons in Metals
Dielectric Properties of Insulators
Homogeneous Semiconductors
Inhomogeneous Semiconductors
Defects in Crystals
Diamagnetism and Paramagnetism
Electron Interactions and Magnetic Structure
Magnetic Ordering
Superconductivity
Reception
The book has been reviewed several times and has been recommended in many other works. In a review of another work by the MRS Bulletin in 2011, the book was said to be "the indispensable work on electronic systems for experimental condensed matter physicists", due largely to the book's "lucidity and panache". The book is also recommended in other textbooks on condensed matter physics, including The Solid State by Harold Max Rosenberg in 1979, where it is called a "detailed, higher-level, modern treatment." The textbook Solid-State Physics for Electronics by Andre Moliton states in the foreword that the book aims to prepare students to "use by him- or herself the classic works of taught solid state physics, for example, those of Kittel and Ashcroft and Mermin." Along with Kittel, the textbook Introduction to Solid State Physics and Crystalline Nanostructures by Giuseppe Iadonisi, Giovanni Cantele, and Maria Luisa Chiofalo included the book in the "Acknowledgements" section as "special mentions". It is also called one of the standard textbooks of solid state physics in the textbook Polarized Electrons In Surface Physics. In a 2003 article detailing Mermin's contributions to solid state physics, the book was said to be "an extraordinarily readable textbook of the subject, which introduced a whole generation of solid state specialists to a subtle and elegant way of doing theoretical physics." The book, along with Kittel is also used as a benchmark for other books on solid-state physics; the publisher's description for the book Advanced Solid State Physics by Philip Phillips that was supplied to the Library of Congress for its bibliography entry states: "This is a modern book in solid state physics that should be accessible to anyone who has a working level of solid state physics at the Kittel or Ashcroft/Mermin level."
Reviews
The book received several reviews, including published articles in Science, Physics Today, and Physics Bulletin in 1977. It was also reviewed in German.
Impressionism, Realism, and the aging of Ashcroft and Mermin
In July 2013, José Menéndez, a physics professor at the Arizona State University Tempe campus published an article titled "Impressionism, Realism, and the aging of Ashcroft and Mermin" in Physics Today that stated: "It is undoubtedly one of the best physics books ever written, but it is not aging well". Both Ashcroft and Mermin wrote separate responses that were published in the same issue, addressing Menéndez's concerns. In his reply, Ashcroft wrote: "Over the years many readers have remarked that the initial edition of our book should 'not be touched'; it is just right in its treatments of the fundamentals." He then went on to say that writing a sequel "encompassing the many advances in condensed-matter physics that have occurred over the past 38 years" could be an option, but pointed to the fact that the book was translated into French, German, and Portuguese in the previous ten years as evidence that others agree it should be left as is.
Release details
References
External links
1976 non-fiction books
Physics textbooks
Condensed matter physics
Harcourt (publisher) books
Henry Holt and Company books | Ashcroft and Mermin | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 950 | [
"Phases of matter",
"Condensed matter physics",
"Matter",
"Materials science"
] |
66,001,579 | https://en.wikipedia.org/wiki/Even%E2%80%93even%20nucleus | In atomic physics, even–even (EE) nuclei are nuclei with an even number of neutrons and an even number of protons. Even-mass-number nuclei, which comprise 151/251 = ~60% of all stable nuclei, are bosons, i.e. they have integer spin. The vast majority of them, 146 out of 151, belong to the EE class; they have spin 0 because of pairing effects.
See also
Even and odd atomic nuclei
Nuclear shell model
References
Bosons
Atomic physics
Subatomic particles with spin 0 | Even–even nucleus | [
"Physics",
"Chemistry"
] | 112 | [
" and optical physics stubs",
"Quantum mechanics",
"Bosons",
"Subatomic particles",
" molecular",
"Atomic physics",
"Atomic",
"Physical chemistry stubs",
"Matter",
" and optical physics"
] |
66,005,728 | https://en.wikipedia.org/wiki/Safe%20listening | Safe listening is a framework for health promotion actions to ensure that sound-related recreational activities (such as concerts, nightclubs, and listening to music, broadcasts, or podcasts) do not pose a risk to hearing.
While research shows that repeated exposures to any loud sounds can cause hearing disorders and other health effects, safe listening applies specifically to voluntary listening through personal listening systems, personal sound amplification products (PSAPs), or at entertainment venues and events. Safe listening promotes strategies to prevent negative effects, including hearing loss, tinnitus, and hyperacusis. While safe listening does not address exposure to unwanted sounds (which are termed noise) – for example, at work or from other noisy hobbies – it is an essential part of a comprehensive approach to total hearing health.
The risk of negative health effects from sound exposures (be it noise or music) is primarily determined by the intensity of the sound (loudness), duration of the event, and frequency of that exposure. These three factors characterize the overall sound energy level that reaches a person's ears and can be used to calculate a noise dose. They have been used to determine the limits of noise exposure in the workplace.
Both regulatory and recommended limits for noise exposure were developed from hearing and noise data obtained in occupational settings, where exposure to loud sounds is frequent and can last for decades. Although specific regulations vary across the world, most workplace best practices consider 85 decibels (dB A-weighted) averaged over eight hours per day as the highest safe exposure level for a 40-year lifetime. Using an exchange rate, typically 3 dB, allowable listening time is halved as the sound level increases by the selected rate. For example, a sound level as high as 100 dBA can be safely listened to for only 15 minutes each day.
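The halving rule described above can be illustrated with a short calculation; the sketch below (plain Python with illustrative names, not from any cited guideline) assumes the 85 dBA, eight-hour reference and the 3 dB exchange rate quoted in this section.

def allowable_daily_minutes(level_dba, reference_level=85.0, reference_minutes=480.0, exchange_rate=3.0):
    # Every `exchange_rate` dB above the reference halves the allowable listening time.
    halvings = (level_dba - reference_level) / exchange_rate
    return reference_minutes / (2.0 ** halvings)

print(allowable_daily_minutes(100.0))  # 15.0 minutes per day, matching the example above
print(allowable_daily_minutes(85.0))   # 480.0 minutes, i.e. the full eight hours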
Because of their availability, occupational data have been adapted to determine damage-risk criteria for sound exposures outside of work. In 1974, the US Environmental Protection Agency recommended a 24-hour exposure limit of 70 dBA, taking into account the lack of a "rest period" for the ears when exposures are averaged over 24 hours and can occur every day of the year (workplace exposure limits assume 16 hours of quiet between shifts and two days a week off). In 1995, the World Health Organization (WHO) similarly concluded that 24-hour average exposures at or below 70 dBA pose a negligible risk for hearing loss over a lifetime. Following reports on hearing disorders from listening to music, additional recommendations and interventions to prevent adverse effects from sound-related recreational activities appear necessary.
Public health and community interventions
Several organizations have developed initiatives to promote safe listening habits. The U.S. National Institute on Deafness and Other Communication Disorders (NIDCD) has guidelines for safely listening to personal music players geared toward the "tween" population (children aged 9–13 years). The Dangerous Decibels program promotes the use of "Jolene" mannequins to measure output of PLSs as an educational tool to raise awareness of overexposure to sound through personal listening. This type of mannequin is simple and inexpensive to construct and is often an attention-grabber at schools, health fairs, clinic waiting rooms, etc.
The National Acoustic Laboratories (NAL), the research division of Hearing Australia, developed the Know Your Noise initiative, funded by the Australian Government Department of Health. The Know Your Noise website has a Noise Risk Calculator that makes it possible and easy for users to identify and understand their levels of noise exposure (at work and play), and possible risks for hearing damage. Users can also take an online hearing test to see how well they hear in a noisy background.
The WHO launched the Make Listening Safe initiative as part of the celebration of World Hearing Day on 3 March 2015. The initiative's main goal is to ensure that people of all ages can enjoy listening to music and other audio media in a manner that does not create a hearing risk. Noise-induced hearing loss, hyperacusis, and tinnitus have been associated with the frequent use at high volume of devices such as headphones, headsets, earpieces, earbuds, and True Wireless Stereo technologies of any type.
Make Listening Safe aims to:
raise awareness about safe listening practices, especially among the younger population;
highlight the benefits of safe listening to policy-makers, health professionals, manufacturers, parents, and others;
foster the development and implementation of standards applicable to personal audio devices and recreational venues to cover safe listening features
become a depository of open-access resources and information on safe listening practices in at least six languages (Arabic, Chinese, English, French, Russian, and Spanish).
In 2019 the World Health Organization published a toolkit for safe listening devices and systems that provides the rationale for the proposed strategies, and identifies actions that governments, industry partners and the civil society can take.
On 1 November 2023, the WHO launched a Make Listening Safe Campaign (MLSC) in the United Kingdom as a pilot of a strategy to encourage the adoption of safe listening practices amongst those between the ages of ten and forty. The MLSC UK will run a sequence of short campaigns focused on different themes, starting with avoidable risks amongst headphone users. It will include an ePetition requesting the government to adopt higher hearing safeguarding standards/regulations in line with the WHO/International Telecommunication Union (ITU) recommendations. The plan is to evaluate the effort and later roll it out to the WHO's other 193 member states. The campaign includes an in-person launch event, public education campaigns, policy advocacy, and collaboration with various stakeholders, including governmental bodies, industry players, and healthcare professionals.
Make Listening Safe is promoting the development of features in PLS to raise the users' awareness of risky listening practices. In this context, the WHO partnered with the International Telecommunication Union (ITU) to develop suitable exposure limits for inclusion in the voluntary H.870 safety standards on "Guidelines for safe listening devices/systems." Experts in the fields of audiology, otology, public health, epidemiology, acoustics, and sound engineering, as well as professional organizations, standardization organizations, manufacturers, and users are collaborating on this effort.
The Make Listening Safe initiative also covers entertainment venues. Average sound pressure levels (SPL) in nightclubs, discotheques, bars, gyms and live sports venues can be as high as 112 dB (A-weighted); sound levels at pop concerts may be even higher. Frequent exposure or even a short exposure to very high sound pressure levels such as these can be harmful. WHO reviewed existing noise regulations for various entertainment sites – including clubs, bars, concert venues, and sporting arenas – in countries around the world, and released a global Standard for Safe Listening Venues and Events as part of World Hearing Day 2022. Also released in 2022 were:
an mSafeListening handbook, on how to create an mHealth safe listening program.
and a media toolkit for journalists containing key information and how to talk about safe listening.
Sound source interventions
Personal listening systems (PLS)
Personal listening systems are portable devices – usually an electronic player attached to headphones or earphones – which are designed for listening to various media, such as music or gaming. The output of such systems varies widely. Maximum output levels vary depending upon the specific devices and regional regulatory requirements. Typically, PLS users can choose to limit the volume between 75 and 105 dB SPL. The ITU and the WHO recommend that PLS be programmed with a monitoring function that sets a weekly sound exposure limit and provides alerts as users reach 100% of their weekly sound allowance. If users acknowledge the alert, they can choose whether or not to reduce the volume. But if the user does not acknowledge the alert, the device will automatically reduce the volume to a predetermined level (based on the mode selected, i.e. 80 or 75 dBA). By conveying exposure information in a way that can be easily understood by end-users, this recommendation aims to make it easier for listeners to manage their exposures and avoid any negative effects. The health app on iPhones, Apple Watches, and iPads incorporated this approach starting in 2019. These devices feature the opt-in Apple Hearing Study, part of the Research app, which is being conducted in collaboration with the University of Michigan School of Public Health. Data is being shared with the WHO's Make Listening Safe initiative. Preliminary results released in March 2021, one year into the study, indicated that 25% of participants experienced ringing in their ears a few times a week or more, 20% of participants have hearing loss, and 10% have characteristics that are typical in cases of noise-induced hearing loss. Nearly 50% of participants reported that they had not had their hearing tested in at least 10 years. In terms of exposure levels, 25% of the participants experienced high environmental sound exposures.
The International Electrotechnical Commission (IEC) published the first European standard IEC 62368–1 on personal audio systems in 2010. It defined safe output levels for PLSs as 85 dB or less, while allowing users to increase the volume to a maximum of 100 dBA. However, when users raise the volume to the maximum level, the standard specifies that an alert should pop up to warn the listener of the potential for hearing problems.
The 2018 ITU and WHO standard H.870 "Guidelines for safe listening devices/systems" focuses on the management of weekly sound-dose exposure. This standard was based on the EN 50332-3 standard "Sound system equipment: headphones and earphones associated with personal music players – maximum sound pressure level measurement methodology – Part 3: measurement method for sound dose management." The standard defines a safe listening limit as a weekly sound dose equivalent to 80 dBA for 40 hours per week.
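As an illustration only, the weekly dose bookkeeping implied by the 80 dBA / 40-hour reference can be sketched as follows; the function and variable names are hypothetical, and the 3 dB (equal-energy) exchange rate is an assumption consistent with the limits quoted above rather than a quotation from the standard.

def weekly_dose_percent(exposures, ref_level=80.0, ref_hours=40.0, exchange_rate=3.0):
    # `exposures` is a list of (level_dBA, hours) pairs accumulated over one week.
    # Each entry is converted into a fraction of the 80 dBA / 40 h weekly allowance.
    dose = 0.0
    for level, hours in exposures:
        dose += (hours / ref_hours) * 2.0 ** ((level - ref_level) / exchange_rate)
    return 100.0 * dose

print(weekly_dose_percent([(80.0, 40.0)]))  # 100.0: exactly the weekly allowance
print(weekly_dose_percent([(89.0, 10.0)]))  # 200.0: ten hours at 89 dBA is twice the allowance

A monitoring function of the kind recommended in ITU-T H.870 would alert the user once this percentage reaches 100.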
Potential differences in children
The frequent use of PLS among children has raised concerns about the potential risks that might be associated with such exposure. A systematic review and meta-analysis published in 2022 recorded an increased prevalence of risk of hearing loss compared to 2015 estimates among young people between 12 and 34 years of age who are exposed to high sound pressure levels (SPL) due to use of headphones and entertainment soundscapes. The authors included articles published between 2000 and 2021 that reported unsafe listening practices. The number of young people who may be at risk of hearing loss worldwide has been estimated from the total global estimates of the population aged 12 to 34 years. Thirty-three studies (corresponding to data from 35 records and 19,046 individuals) were included; 17 and 18 records focused on the use of PLSs and noisy entertainment venues, respectively. The pooled prevalence estimate of exposure to unsafe listening through PLSs was 23.81% (95% CI 18.99% to 29.42%). The model was adjusted according to the intensity and duration of exposure to identify an estimated prevalence of 48.2%. The estimated global number of young people who may be at risk of hearing loss due to exposure to unsafe listening practices ranged from 0.67 to 1.35 billion. The authors concluded that unsafe listening practices are highly prevalent worldwide and may put over 1 billion young people at risk of hearing loss.
There is no agreement on the acceptable risk of noise-induced hearing loss in children, and adult damage-risk criteria may not be suitable for establishing safe listening levels for children due to differences in physiology and the more serious developmental impact of hearing loss early in life. One attempt to identify safe levels assumed that the most appropriate exposure limit for recreational noise exposure in children would aim to protect 99% of children from a shift in hearing exceeding 5 dB at 4 kHz after 18 years of noise exposure. Using estimates from the International Organization for Standardization (ISO 1999:2013), the authors calculated that 99% of children who are exposed from birth until the age of 18 years to 8-hour average sound levels (LEX) of 82 dBA would have a hearing threshold shift of no more than about 4.2 dB. By including a 2 dB margin of safety, which reduces the 8-hour exposure allowance to 80 dBA, the study estimated a hearing change of 2.1 dB or less in 99% of children. To preserve hearing from birth until the age of 18 years, it was recommended that noise exposures be limited to 75 dBA over a 24-hour period. Other researchers recommended that the weekly sound dose be limited to the equivalent of 75 dBA for 40 hours/week for children and users who are sensitive to intense sound stimulation.
Personal sound amplification products (PSAPs)
Personal sound amplification products are ear-level amplification devices intended for use by persons with normal hearing. The output levels of 27 PSAPs that were commercially available in Europe were analyzed in 2014. All of them had a maximum output level that exceeded 120 dB SPL; 23 (85%) exceeded 125 dB SPL, while 8 (30%) exceeded 130 dB SPL. None of the analyzed products had a level limiting option.
The report triggered the development of a few standards for these devices. The ANSI/CTA standard 2051 on "Personal Sound Amplification Performance Criteria" followed in 2017. It specified a maximum output sound pressure level of 120 dB SPL. In 2019, the ITU published standard ITU-T H.871 called "Safe listening guidelines for personal sound amplifiers". This standard recommends that PSAPs measure the weekly sound dose and adhere to a weekly maximum of less than 80 dBA for 40 hours. PSAPs that cannot measure weekly sound dose should limit the maximum output of the device to 95 dBA. It also recommends that PSAPs provide clear alerts in their user guides, packaging, and ads mentioning the risks of ear damage that can result from using the device and providing information on how to avoid these risks. A technical paper describing how to test the compliance of various personal audio systems/devices to the essential/mandatory and optional features of Recommendation ITU-T H.870 was published in 2021.
Entertainment venues
Both those working in the music industry and those enjoying recreational music at venues and events can be at risk of experiencing hearing disorders. In 2019, the WHO published a report summarizing regulations for control of sound exposure in entertainment venues in Belgium, France, and Switzerland. The case studies were published as an initial step towards the development of a WHO regulatory framework for control of sound exposure in entertainment venues. In 2020, two reports described exposure scenarios and procedures in use during entertainment events. These took into account the safety of those attending an event, of those exposed occupationally to high-intensity music, and of those in surrounding neighborhoods. The reports presented technical solutions, monitoring practices, and on-stage sound management, as well as the problems of enforcing environmental noise regulations in an urban environment, with country-specific examples.
Several different regulatory approaches have been implemented to manage sound levels and minimize the risk of hearing damage for those attending music venues. A report published in 2020 identified 18 regulations regarding sound levels in entertainment venues – 12 from Europe and the remainder from cities or states in North and South America. Legislative approaches include: sound level limitations, real-time sound exposure monitoring, mandatory supply of hearing protection devices, signage and warning requirements, loudspeaker placement restrictions, and ensuring patrons can access quiet zones or rest areas. The effectiveness of these measures in reducing the risk of hearing damage has not been evaluated, but the adaptation of the approaches described above is consistent with the general principles of the hierarchy of controls used to manage exposure to noise in workplaces.
Patrons of music venues have indicated their preference for lower sound levels and can be receptive when earplugs are provided or made accessible. This finding may be region or country-specific. In 2018, the U.S. Centers for Disease Control and Prevention published the results of a survey of U.S. adults related to the use of a hearing protection device during exposure to loud sounds at recreational events. Overall, more than four of five reported never or seldom wearing hearing protection devices when attending a loud athletic or entertainment event. Adults aged 35 years and older were significantly more likely to not wear hearing protection than were young adults aged 18–24 years. Among adults who frequently enjoy attending sporting events, women were twice as likely as men to seldom or never wear hearing protection. Adults who were more likely to wear protection had at least some college education or had higher household incomes. Adults with hearing impairment or with a deaf or hard-of-hearing household member were significantly more likely to wear their protective devices.
The challenges in implementing measures to reduce risks to hearing in a wide range of entertainment venues – whether through mandatory or voluntary guidelines, with or without enforcement – are significant. Implementation requires involvement from many different professional groups and buy-in from both venue managers and users. The WHO and ITU Global Standard for Venues and Events released on World Hearing Day 2022 offers resources to facilitate action. The standard details six features recommended for safe listening venues and events. It can be used by governments to implement legislation, by owners and managers of venues and events to protect their clientele, and by audio engineers and other staff.
A 2023 survey showed that U.S. adults acknowledge the risks posed by high sound exposures at concerts and other events. Results indicated an interest in protective actions, such as limiting sound levels, posting warning signs, and wearing hearing protection. Fifty-four percent of the study participants agreed that sound levels at concert venues should be limited to reduce the risk of hearing disorders, 75 percent agreed that warning signs should be posted when sound levels are likely to exceed safe levels, and 61 percent of respondents stated that they would wear hearing protection if it was provided when sound levels were likely to exceed safe levels.
Personal interventions
While establishing effective public and community health interventions, enacting appropriate legislation and regulations, and developing pertinent standards for listening and audio systems are all important in establishing a societal infrastructure for safe listening, individuals can take steps to ensure that their personal listening habits minimize their risk of hearing problems. Personal safe listening strategies include:
Listening to PLSs at safe levels, such as 60% of the volume range. Noise-cancelling headphones and sound-isolating earphones can help one avoid turning the volume up to overcome loud background noise.
Sound measurement apps can help one find out how loud sounds are. If not measuring the sound levels, a good rule of thumb is that sounds are potentially hazardous if it is necessary to speak in a raised voice to be heard by someone an arm's length away. Moving away from the sound or using hearing protection are approaches to reduce exposure levels.
Monitoring the amount of time spent in loud activities helps one manage risk. Whenever possible, take a break between exposures so the ears can rest and recover.
Watching for warning signs of hearing loss. Tinnitus, difficulty hearing high pitched sounds (such as birds singing or cell phone notifications), and trouble understanding speech in background noise can be indicators of hearing loss.
Getting a hearing test regularly. The American Speech Language Hearing Association recommends that school-aged children be screened for hearing loss annually from kindergarten through the third grade, then again in 7th and 11th grade. Adults should have their hearing tested every ten years until they reach age 50, and every three years after that. Hearing should be tested sooner if any warning signs develop.
Teaching children and young adults about the hazards of overexposure to loud sounds and how to practice safe listening habits could help protect their hearing. Adults who model good listening habits themselves can also encourage healthy listening habits in others. Health care professionals have the opportunity to educate patients about relevant hearing risks and promote safe listening habits. As part of their health promotion activities, hearing professionals can recommend appropriate hearing protection when necessary and provide information, training and fit-testing to ensure individuals are adequately but not overly protected. Wearing earplugs to concerts has been shown to be an effective way to reduce post-concert temporary hearing changes.
See also
Sound
Sound power level
Noise-induced hearing loss
Noise regulation
Loud music
Global Audiology
Health problems of musicians
Hearing
Electronic Music Foundation
Tinnitus
Diplacusis
Hyperacusis
World Hearing Day
Safe-in-Sound Award
International Society of Audiology
Acoustic trauma
List of films featuring the deaf and hard of hearing
References
External links
American Academy of Audiology, Audiological Services for Musicians and Music Industry Personnel , 2020.
Apple Hearing Study, University of Michigan.
Global Audiology, International Society of Audiology
World Health Organization (WHO) Childhood hearing loss: act now, here's how infographic.
Introduction to the World Health Organization program on hearing and its initiative to Make Listening Safe, Dr. Shelly Chadha, March 2015.
World Health Organization (WHO) and International Telecommunication Union (ITU) Consultation on Make Listening Safe initiative, March 2015.
World Health Organization (WHO), 2019. Toolkit for safe listening devices and systems.
Safe listening devices and systems: a WHO-ITU standard. 2019.
World Health Organization, Hearing loss due to recreational exposure to loud sounds: A review.
World Health Organization, Regulation for control of sound exposure in entertainment venues. Case studies from Belgium, France and Switzerland. December 2019.
World Health Organization, Make Listening Safe, Activities 2019.
World Health Organization, Tips for safe listening 2019. Available in several languages.
World Health Organization, Consultation on Make Listening Safe Initiative 2020.
World Health Organization, World Report on Hearing, 2021.
European Association of Hearing Aid Professionals (AEA). Make Listening Safe resources.
Standards for Safe Listening – how they align and how some differ, ENT News, May 2020.
National Acoustics Laboratories, Know your Noise. Information about noise or music exposure and its impact on your hearing health.
Hearing Australia, Tips for safe listening using headphones and earbuds.
National Center for Environmental Health, Centers for Disease Control and Prevention, Statistics about the Public Health Burden of Noise-Induced Hearing Loss.
National Center for Environmental Health, May is Better Hearing and Speech Month (cdc.gov) 2021.
National Center for Environmental Health, Centers for Disease Control and Prevention, Loud noise can cause hearing loss. Resources.
Centers for Disease Control and Prevention, Vital Signs: hearing loss.
National Institute for Occupational Safety and Health (NIOSH), Centers for Disease Control and Prevention, Noise and hearing loss prevention.
National Institute for Occupational Safety and Health (NIOSH), Centers for Disease Control and Prevention, Reducing the Risk of Hearing Disorders among Musicians.
National Institute for Occupational Safety and Health, Centers for Disease Control and Prevention, NIOSH Sound Level Meter app.
Safe-in-Sound Excellence in Hearing Loss Prevention Award winners.
World Health Organization- Short videos on World Hearing Day materials, available in six languages.
Listening
Acoustics
Audiology
Audio engineering
Consumer electronics
Health communication
Loudspeakers
World Health Organization
Health campaigns
Hearing | Safe listening | [
"Physics",
"Engineering"
] | 4,655 | [
"Electrical engineering",
"Audio engineering",
"Classical mechanics",
"Acoustics"
] |
77,611,211 | https://en.wikipedia.org/wiki/The%20Steam%20Mill%2C%20Chester | The Steam Mill is a Grade II listed building located on Steam Mill Street in Chester, Cheshire, England. The mill was originally built in 1786, during the Georgian era.
Location
Sitting on the banks of the Shropshire Union Canal, Steam Mill lies within close proximity to both Chester Railway Station and Chester town centre.
Steam Mill is located approximately a seven-minute walk from Chester Railway Station, and around 10 minutes from Chester town centre.
Architecture and fittings
While the building has been refurbished by James Brotherhood and Associates architects, it retains many of its original features, including exposed brickwork and wooden beams.
Steam Mill has a large, five-story atrium, shower and changing facilities, secure bike storage, lift access, on-site car parking, and private meeting rooms.
History
Steam Mill was one of the first canal-side steam-powered mills, and was built on disused meadowland in 1786 for the Chester corn and flour merchants Samuel Walker, George Walker, and Hugh Ley.
In 1819, the mill was sold to Frost & Sons, who are responsible for building the present structure that remains today.
Frost & Sons had formerly owned the Dee Mills, which they acquired shortly after moving to Chester in 1818. Unfortunately, Dee Mills were ruined by a fire in 1819, which is thought to be the reason behind the move to Steam Mill
In 1827, Frost & Sons replaced the original steam engine.
The ownership of the mill was handed to seed merchant David Miln in 1938.
The building remained in use as a mill until 1986.
Today, Steam Mill is owned by Threadneedle Pensions Ltd, and managed by joint agents Legat Owen and Mason Owen, and serves as a hub for several offices and businesses, including thimbl.
References
Grade II listed buildings in Chester
Industrial buildings completed in 1786
1786 establishments in England
Flour mills in the United Kingdom
Steam power | The Steam Mill, Chester | [
"Physics"
] | 370 | [
"Power (physics)",
"Steam power",
"Physical quantities"
] |
77,614,035 | https://en.wikipedia.org/wiki/List%20of%20Human%20Powered%20Vehicle%20Challenge%20results | This is a list of Human Powered Vehicle Challenge winners.
2002–2024
2002
East
West
2003
East
*Indicates a tie
West
*Indicates a tie
2004
East
West
2005
East
West
2006
East
West
2007
East
West
2008
East
West
2009
East
* Indicates Unknown
West
2010
East
West
2011
East
West
2012
East
West
2013
East
West
2014
East
West
2015
East
West
2016
East
West
2017
East
West
2018
East
West
2019
North
West
2020
North
South
2021
2022
*Indicates a tie
2023
2024
East
West
Overall totals
References
Lists of sports champions by sport
Mechanical engineering competitions | List of Human Powered Vehicle Challenge results | [
"Engineering"
] | 110 | [
"Mechanical engineering competitions",
"Mechanical engineering"
] |
77,614,057 | https://en.wikipedia.org/wiki/Human%20Powered%20Vehicle%20Challenge | The Human Powered Vehicle Challenge (HPVC) is a student design competition organized by ASME (American Society of Mechanical Engineers). The competition was started in 1983 at the University of California, Davis.
Concept
The HPVC is an engineering design and innovation competition that gives students the opportunity to network and apply engineering principles through the design, fabrication, and racing of human powered vehicles. ASME's international Human Powered Vehicle Challenge (HPVC) provides an opportunity for students to demonstrate the application of sound engineering design principles in the development of sustainable and practical transportation alternatives. In the HPVC, students work in teams to design and build efficient, highly engineered vehicles for everyday use—from commuting to work, to carrying goods to market.
While the competition format has evolved throughout the years, it is typically made up of three main parts. The first is the design and engineering of the vehicle, the second is the speed of the vehicle, and the third is the practicality of the vehicle tested through an endurance event.
Design
The most important segment of the challenge is design. Contestants must submit a detailed design report with sections including analysis, design, and testing. The design report also includes references to prior work if the vehicle uses elements from a prior year, as well as a section for future work and the goals of the vehicle. The design report is paired with a Critical Design Review (CDR). The CDR consists of each team presenting their vehicle in a set amount of time to a panel of judges. The judges are allowed and encouraged to ask challenging questions to test the knowledge of the presenters. These two sections are scored and are combined for the design segment of the challenge.
Innovation
From 2012 to 2018 an innovation segment was added. Scored separately from design, it was based on both the design report and design review, and judged contestants on how innovative their vehicles were.
Speed
The foundation of the challenge is based in speed and has often been associated with the World Speed Challenge held at Battle Mountain, Nevada. Speed events have been divided into two categories: sprints and drag races.
Sprint
Top speed is recorded in the sprint event. Set within a defined overall distance, the vehicle has a set distance to accelerate, a set distance to reach and record its top speed, and a set distance to stop.
Drag race
The drag race is a head-to-head event in which two vehicles race to a predefined distance. The winner moves on in a double elimination-style tournament.
Endurance
The endurance event is a timed 2.5-hour race where the objective is to complete as many laps as possible. Laps are typically in length. Each lap has multiple obstacles including, hairpin turns, stop signs, quick turns, rumble strips, a slalom section, and a parcel delivery task. After 2.5 hours, each team's total laps are recorded and any penalties, such as missed stops or knocked-over obstacles, are assigned. The team with the greatest distance covered wins.
History
1983–2001
The first competition took place in 1983 at the University of California, Davis. The original objective was to reach the highest speed possible. The inaugural event was won by California State University, Chico. In 1989, Portland State University won the 7th annual competition hosted by California State University, Northridge. It was not until 1993 that the University of California, Davis won the challenge that it had started.
2002–2009
Beginning in 2002, each year's competition was held in more than one location, with designations of "east" and "west". The competition consisted of a three-class system, with single-rider, multi-rider, and utility vehicles scored separately. Single- and multi-rider vehicles were scored based on design, a 2.5-hour endurance event, and a sprint event. The utility vehicles were scored on design and a utility event.
In 2004, University of Missouri, Rolla (now Missouri University of Science and Technology) won their 2nd challenge in three years. This would start a run where Missouri S&T would place in the top three overall for eleven years straight.
2010–2011
Vehicle classes were reduced to two: a speed class and an unrestricted class. Vehicles were scored on design, the endurance race, the men's sprint event, and a now-separate women's sprint event. Missouri S&T swept both the East and West competitions in 2010 for the speed class. In 2011, the University of Toronto won first overall in the unrestricted class.
2012–2019
The vehicle class was reduced to a single designation, with an added Innovation category to be scored separately. In 2014, the first Human Powered Vehicle Challenge to take place in India was held at the Indian Institute of Technology, Delhi. More than 400 students and 36 teams from more than 30 universities turned out for the competition.
In 2016, the University of Akron won first overall at the East competition which took place in Athens, Ohio, after placing second overall at the West Competition a few weeks earlier.
2020–present
The COVID-19 pandemic forced the competition to be held online and as a design event only. The decade's first in-person competition was hosted by Liberty University in 2023 in Lynchburg, Virginia.
Results
The results start in 2002, the first year of the "modern" format of the competition. This is also the first year for which records are easily accessible.
References
Mechanical engineering competitions
Student events
Awards established in 1983
Awards of the American Society of Mechanical Engineers | Human Powered Vehicle Challenge | [
"Engineering"
] | 1,097 | [
"Mechanical engineering competitions",
"Mechanical engineering"
] |
77,615,848 | https://en.wikipedia.org/wiki/Michela%20Procesi | Michela Procesi (born 1973) is an Italian mathematician specializing in Hamiltonian partial differential equations such as the nonlinear Schrödinger equation or wave equation. The Degasperis–Procesi equation is named for her. She is a professor of mathematics at Roma Tre University.
Education and career
Procesi was born in 1973 in Rome, the daughter of mathematician Claudio Procesi. She earned a laurea in physics at the Sapienza University of Rome in 1998, and continued at la Sapienza for a PhD in mathematics in 2002. Her dissertation, Estimates on Hamiltonian splittings: tree techniques in the theory of homoclinic splitting and Arnold diffusion for a-priori stable systems, was supervised by Luigi Chierchia.
She became a postdoctoral researcher at the International School for Advanced Studies in Trieste and, with the support of the Istituto Nazionale di Alta Matematica Francesco Severi, at Roma Tre University. After continued work as a researcher at the University of Naples Federico II and la Sapienza, she obtained a position as an associate professor at Roma Tre University in 2015. She has been a full professor there since 2019.
Recognition
Procesi was an invited speaker at the 2022 (virtual) International Congress of Mathematicians.
References
External links
Home page
1973 births
Living people
Italian mathematicians
Italian women mathematicians
Mathematical analysts
Sapienza University of Rome alumni
Academic staff of Roma Tre University | Michela Procesi | [
"Mathematics"
] | 290 | [
"Mathematical analysis",
"Mathematical analysts"
] |
77,618,701 | https://en.wikipedia.org/wiki/8176%20aluminium%20alloy | 8176 aluminium alloy is produced using iron, zinc and silicon as additives. It is used in power lines due to its high electrical conductivity.
Chemical composition
Applications
Aluminium 8176 is used in building wiring and cables.
References
External links
Material Properties
Aluminium alloys | 8176 aluminium alloy | [
"Chemistry"
] | 55 | [
"Alloys",
"Alloy stubs",
"Aluminium alloys"
] |
77,627,375 | https://en.wikipedia.org/wiki/Berzins-Delahay%20equation | In electrochemistry, the Berzins-Delahay equation is analogous to the Randles–Sevcik equation, except that it predicts the peak height () of a linear potential scan when the reaction is electrochemically reversible, the reactants are soluble, and the products are deposited on the electrode with a thermodynamic activity of one.
A = electrode surface area in cm2
C = concentration of the reactant in mol/cm3
n = stoichiometric number of electrons exchanged in equivalents/mol
F = Faraday constant in C/equivalent
D = diffusion coefficient of the reactant in cm2/s
ν = scan rate in V/s
R = gas constant in J/(mol·K)
T = temperature in K
Although this equation is derived under very simplistic assumptions, considering the complex phenomenon of nucleation, the Berzins-Delahay equation often makes good predictions of the peak current. This is likely because nucleation processes have already been resolved by the time the peak is reached, meaning that the fundamental assumptions of the derivation match the physical phenomena well. Corrections for these errant assumptions are available.
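A numerical sketch can make the use of the relation concrete. The snippet below (illustrative Python, not taken from the source) evaluates the standard form of the Berzins-Delahay expression, i_p = 0.6105 (nF)^(3/2) A C (Dν)^(1/2) / (RT)^(1/2); the 0.6105 prefactor is an assumption taken from standard electrochemistry texts rather than from the article above, and the units follow the symbol list given earlier.

import math

F = 96485.33212  # Faraday constant, C/equivalent
R = 8.314462618  # gas constant, J/(mol K)

def berzins_delahay_peak_current(A, C, n, D, v, T=298.15):
    # Assumed standard form: i_p = 0.6105 * (n*F)**1.5 * A * C * sqrt(D*v) / sqrt(R*T)
    # A in cm2, C in mol/cm3, n in equivalents/mol, D in cm2/s, v in V/s, T in K.
    return 0.6105 * (n * F) ** 1.5 * A * C * math.sqrt(D * v) / math.sqrt(R * T)

# Example: 1 mM reactant (1e-6 mol/cm3), 0.1 cm2 electrode, n = 1, D = 1e-5 cm2/s, 0.1 V/s
print(berzins_delahay_peak_current(0.1, 1e-6, 1, 1e-5, 0.1))  # roughly 3.7e-5 A (about 37 microamperes)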
Derivation
This equation is derived using the following governing equations and initial/boundary conditions:
t = time in s
x = distance from planar electrode in cm
E = the potential of the electrode in V
E_i = the initial potential of the electrode in V
E°′ = the formal potential for the reaction in V
C° = a reference concentration of 1 mol/L or 1 mmol/cm3
Uses
The Berzins-Delahay equation is primarily used to measure the concentration or the diffusion coefficient of an analyte that participates in a reversible electrochemical deposition reaction. To validate the application of this equation, one typically checks for a linear relationship between the peak current and the square root of the scan rate, and for peak potentials that are independent of the scan rate. The characteristic shape of a deposition voltammogram, with a sharp reduction peak (negative current) followed by a decaying tail and a large oxidation peak that quickly decays to zero current, is also required to verify that the reaction has soluble reactants and deposited products.
References
Electrochemical equations | Berzins-Delahay equation | [
"Chemistry",
"Mathematics"
] | 423 | [
"Electrochemistry stubs",
"Mathematical objects",
"Equations",
"Electrochemistry",
"Analytical chemistry stubs",
"Physical chemistry stubs",
"Electrochemical equations"
] |
67,487,128 | https://en.wikipedia.org/wiki/Electrochemical%20skin%20conductance | Electrochemical skin conductance (ESC) is an objective, non-invasive and quantitative electrophysiological measure of skin conductance through the application of a pulsating direct current on the skin. It is based on reverse iontophoresis and steady chronoamperometry (more specifically chronovoltametry). ESC is intended to provide insight into and assess sudomotor (or sweat gland) function and small fiber peripheral neuropathy. The measure was principally developed by Impeto Medical to diagnose cystic fibrosis from historical research at the Mayo Clinic and then tested on others diseases with peripheral neuropathic alterations in general. It was later integrated into health connected scales by Withings.
Biology
Anatomy: the eccrine sweat gland
See also sweat gland, eccrine sweat gland and Autonomic nervous system.
The ESC measurement relies on the particularities of the outermost layer of the human skin, the stratum corneum (SC), which consists of a lipid corneocyte matrix crossed by skin appendages (sweat glands and their follicles), as described in Electrical properties of skin at moderate voltages: contribution of appendageal macropores. According to the authors, the stratum corneum is electrically insulating against DC voltages under 10 V, and only its appendageal pathways are conductive.
In the hairless skin in contact with the electrodes, such as the palms of the hands and the soles of the feet, the eccrine sweat glands are the principal conductive pathways; this is why ESC measurement technologies focus only on these skin areas.
These sweat glands are innervated by the sympathetic autonomic peripheral nervous system. According to Sato, both adrenergic and cholinergic-muscarinic neurons participate, in the following physiological proportions: adrenergic 2/7 and cholinergic 5/7.
The autonomic sympathetic nerve fibers that innervate the sweat glands are long (the postganglionic nerves start at the spinal cord and may end at the palm or sole), thin, unmyelinated or thinly myelinated C fibers. Because of these characteristics, they are prone to damage early in many neuropathic processes; assessing sweat gland nerve function, or dysfunction, can therefore be used as a surrogate for the damage imparted to small-caliber sensory nerves in neuropathy.
Physiology: Stimulation of sweat function
See Sudomotor function.
During normal physiological function, activation of eccrine sweat glands starts with a “chemical” stimulus. For instance, in the cholinergic pathway (the dominant pathway), this leads to the following sequence, or activation cascade:
The neurotransmitter acetylcholine binds to its corresponding muscarinic cholinergic receptor on the membrane cells of the sweat gland wall;
This activates the G proteins coupled to the neuroreceptor;
The G proteins, or their intracellular messengers, then modulate ion channels, creating an ion flux through the membrane;
This polarizes the gland, creating an electrical potential difference of around 10 mV (and always less than 100 mV) between the two sides of the gland wall.
Technology
Impeto medical: Sudoscan
Summary
For the purposes of measuring electrochemical skin conductance, Sudoscan technology activates the sweat gland with an "electrical" stimulus. The applied voltage directly polarizes the gland with voltages between 100 mV and 1000 mV. This induces ion fluxes across the gland wall, depending on the electrochemical gradient of the ions. Because the current applied is high compared to the physiological current, the test could be compared to a "stress test" for sweat glands.
In fact, firm application of the hands and feet against the electrodes blocks physiological sweating, and the active measure extracts electro-active ions (i. e., chloride near the anode, proton near the cathode) and pulls them towards the electrodes.
The resulting conductance is then given for each foot and hand in μS (micro-Siemens).
Details
Currently, ESC measurement can be obtained with the use of a medical device, called Sudoscan. No specific patient preparation or medical personnel training is required. The measure lasts less than 3 minutes, and is innocuous and non-invasive.
The apparatus consists of stainless-steel electrodes for the hands and the feet which are connected to a computer for recording and data management purposes. To conduct an ESC test, the patients place their hands and feet on the electrodes. Sweat glands are most numerous on the palms of the hands and soles of the feet, and thus well suited for sudomotor function evaluation.
The electrodes are used alternatively as anode or cathode. A direct current (DC) incremental voltage under 4 volts is applied on the anode. This DC, through reverse iontophoresis, induces a voltage on the cathode and generates a current (of an intensity less than 0.3 mA) between the anode and the cathode, related to electro-active ions from sweat reacting with the electrodes. The electrochemical phenomena are measured by the two active electrodes (the anode and the cathode) successively in the two active limbs (either hands or feet), whilst the two passive electrodes allow retrieval of the body potential.
During the test, 4 combinations of 15 different low DC voltages are applied. The resulting Electrochemical Skin Conductances (ESC) for each hand and foot are expressed in μS (micro-Siemens). The test also evaluates the percentage of asymmetry between the left and right side, for both hands and feet ESC, providing an assessment of whether one side is more affected than the other.
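The reported quantities follow from Ohm's law; the short sketch below (illustrative Python) shows the conversion to micro-Siemens and one plausible left/right asymmetry convention. The asymmetry formula is an assumption made for illustration, not a specification taken from the device documentation.

def conductance_uS(current_mA, voltage_V):
    # Conductance G = I / U, converted from mA and V into micro-Siemens.
    return (current_mA * 1e-3) / voltage_V * 1e6

def asymmetry_percent(esc_left_uS, esc_right_uS):
    # Assumed convention: absolute left/right difference relative to the larger side.
    return 100.0 * abs(esc_left_uS - esc_right_uS) / max(esc_left_uS, esc_right_uS)

print(conductance_uS(0.15, 2.0))      # 75.0 uS for 0.15 mA measured at 2 V
print(asymmetry_percent(72.0, 64.0))  # about 11.1 percent asymmetry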
Withings: scales
Summary
Withings integrated Sudoscan technology into its scales (with FDA clearance) in order to broaden adoption of the measurement and allow at-home follow-up of patients with neuropathies.
Details
The Withings technology is based on the same principle but only measures the ESC of the feet, from its BodyComp and BodyScan scales. A clinical trial (agreement study) demonstrated the correlation between the BodyScan scale and Sudoscan measurements. More generally, the adoption of a technology that moves from hospital-only measurements to home measurements allows the building of real-world evidence (RWE) time-series profiles for patients.
Alternative methods and technologies
There are several other clinical tests available to assess sudomotor and/or small fiber function and/or peripheral or cardiac neuropathy. These may employ a measurement target other than the sweat glands, and/or alternate methodologies.
Specific clinical assessments for sudomotor testing include:
Sympathetic Skin Response (SSR), defined as the variation in electrical potential of the skin due to sympathetic sudomotor outflow,
Quantitative Sudomotor Axon Reflex Testing (QSART)
Applications
From a physiological standpoint, the pattern of innervation of the sweat gland—namely, the postganglionic sympathetic nerve fibers—allows clinicians and researchers to use sudomotor function testing to assess dysfunction of the autonomic nervous system (ANS).
To ensure optimal use and interpretation of the ESC, normative values were defined in adults and children. In addition, reproducibility of the method was assessed under clinical conditions, including both healthy controls and patients with common chronic conditions.
ESC has clinical utility in the evaluation and follow-up of dysautonomia and small fiber peripheral neuropathy which may occur in diseases such as:
Diabetes
General
See diabetes
Diabetes and two of its main complications: diabetic neuropathy and autonomic neuropathy. Sensorimotor polyneuropathy (DSPN) is the most common type of polyneuropathy in community-dwelling patients with diabetes, affecting about 25% of them. The course of DSPN is insidious, though, and up to 50% of patients with neuropathy may be asymptomatic, often resulting in delayed diagnosis. Advanced or painful DSPN may result not only in reduced quality of life, but has been statistically associated with retinopathy and nephropathy, and leads to considerable morbidity and mortality. The autonomic nervous system (ANS), of which sudomotor nerves are an integral part, is the primary extrinsic control mechanism regulating heart rate, blood pressure, and myocardial contractility. Cardiac autonomic neuropathy (CAN) describes a dysfunction of the ANS and its regulation of the cardiovascular system. CAN is the strongest predictor for mortality in diabetes. Because early symptoms of CAN tend to be nonspecific, its diagnosis is frequently delayed and screening for CAN should be routinely considered in diabetic patients. Assessment of sudomotor function provides a measure of sympathetic cholinergic function in the workup of CAN.
Diabetic foot ulcer
See Diabetic foot ulcer (DFU).
In diabetic wounds, issues such as tissue ischemia, hypoxia, a high-glucose microenvironment, and skin dryness disrupt the healing process, leading to delayed or nonhealing wounds and clinical complications. In some cases this leads to amputations and, in the worst cases, to death. In this context, earlier detection of diabetic neuropathy and skin dryness using electrochemical skin conductance has been proposed for DFU management, to help avoid complications.
Amyloidosis
Amyloidosis such as familial amyloid neuropathy, AL amyloidosis, and AA amyloidosis [publication pending]. During the course of AL amyloidosis, peripheral neuropathy occurs in 10–35% of patients; dysautonomia itself is an independent prognostic factor, and assessment of sweat disturbances is routine in the evaluation of amyloidosis. ESC may provide a measure of subclinical autonomic involvement, which is not systematically assessed with more sophisticated equipment.
Cystic fibrosis
The effects of cystic fibrosis on sweat glands were described by Quinton. The performance and potential utility of ESC were assessed in this disease.
Parkinson's disease
Assessment of dysautonomia is important for patient follow-up and assessment of sudomotor function can be helpful in daily practice.
Chemotherapy-induced peripheral neuropathy (CIPN)
Chemotherapy-induced peripheral neuropathy is a common, potentially severe and dose-limiting adverse effect of multiple chemotherapeutic agents. CIPN can persist long after the completion of chemotherapy and imposes a significant quality-of-life and economic burden on cancer survivors. ESC allows for an objective quantification of small fiber impairment and is easy to implement in the clinic.
Sjögren syndrome
ESC may help in the diagnosis process.
Neuropathic pain
Neuropathic pain usually manifests in the setting of small fiber neuropathy. Small fiber neuropathy is common and may arise from a number of conditions such as diabetes, metabolic syndrome, infectious diseases, toxins, and autoimmune disorders. The gold standard for diagnosing small fiber neuropathy as the etiology of neuropathic pain is skin biopsy. Sudomotor assessment, an accurate objective technique, could be considered as a good screening tool to limit skin biopsy in patients in whom it is not suitable.
ESC has been evaluated for both early diagnosis of small fiber neuropathy and follow-up of treatment efficacy in each of these conditions.
References
Electrophysiology
Electrodiagnosis
Medical assessment and evaluation instruments
Medical procedures
Electrophoresis
Electroanalytical methods
Skin anatomy
Skin physiology
Peripheral nervous system disorders | Electrochemical skin conductance | [
"Chemistry",
"Biology"
] | 2,413 | [
"Electroanalytical chemistry",
"Instrumental analysis",
"Biochemical separation processes",
"Molecular biology techniques",
"Electroanalytical methods",
"Electrophoresis"
] |
47,670,056 | https://en.wikipedia.org/wiki/Plasma%20Science%20and%20Technology | Plasma Science and Technology is a scientific journal published by the Institute of Plasma Physics, Chinese Academy of Sciences (CAS) and the Chinese Society of Theoretical and Applied Mechanics, hosted by IOP Publishing. It publishes novel experimental and theoretical findings in all fields related to plasma physics. The current editor-in-chief is Yunfeng Liang of the Forschungszentrum Jülich Institute of Energy and Climate Research, Germany.
See also
Hefei Institutes of Physical Science
References
External links
Journal home page
Institute of Plasma Physics, Chinese Academy of Sciences
Hefei Institutes of Physical Science
Chinese Academy of Sciences
IOP Publishing academic journals
Academic journals established in 1999
English-language journals
Monthly journals
Plasma science journals
Academic journals associated with learned and professional societies | Plasma Science and Technology | [
"Physics"
] | 148 | [
"Plasma science journals",
"Plasma physics stubs",
"Plasma physics"
] |
47,679,400 | https://en.wikipedia.org/wiki/Luebering%E2%80%93Rapoport%20pathway | In biochemistry, the Luebering–Rapoport pathway (also called the Luebering–Rapoport shunt) is a metabolic pathway in mature erythrocytes involving the formation of 2,3-bisphosphoglycerate (2,3-BPG), which regulates oxygen release from hemoglobin and delivery to tissues. 2,3-BPG, the reaction product of the Luebering–Rapoport pathway was first described and isolated in 1925 by the Austrian biochemist Samuel Mitja Rapoport and his technical assistant Jane Luebering.
Through the Luebering–Rapoport pathway, bisphosphoglycerate mutase catalyzes the transfer of a phosphoryl group from C1 to C2 of 1,3-BPG, giving 2,3-BPG. 2,3-Bisphosphoglycerate, the most concentrated organophosphate in the erythrocyte, is converted to 3-PG by the action of bisphosphoglycerate phosphatase. The concentration of 2,3-BPG varies proportionately with the pH, since it is inhibitory to the catalytic action of bisphosphoglycerate mutase.
References
External links
UniProt: Bisphosphoglycerate mutase - Homo sapiens (Human) UniProt-Information about bisphosphoglycerate mutase
A live model of the effect of changing 2,3-bisphosphoglycerate on the oxyhaemoglobin saturation curve
Biochemical reactions
Metabolic pathways
Organophosphates
Physiology
Respiratory physiology | Luebering–Rapoport pathway | [
"Chemistry",
"Biology"
] | 347 | [
"Biochemistry",
"Physiology",
"Biochemical reactions",
"Metabolic pathways",
"Metabolism"
] |
57,920,221 | https://en.wikipedia.org/wiki/In%20vivo%20supersaturation | In vivo supersaturation is the behavior of orally administered compounds that undergo supersaturation as they pass through the gastrointestinal (GI) tract. Typically these compounds have a weakly basic nature (pKa in the range of 5 to 8) and a relatively low solubility in aqueous solutions. In vivo supersaturation is a recent phenomenon that was first observed by Yamashita et al. in 2003.
References
Pharmacodynamics | In vivo supersaturation | [
"Chemistry"
] | 96 | [
"Pharmacology",
"Pharmacology stubs",
"Pharmacodynamics",
"Medicinal chemistry stubs"
] |
57,923,221 | https://en.wikipedia.org/wiki/Hexaphenylcarbodiphosphorane | Hexaphenylcarbodiphosphorane is the organophosphorus compound with the formula C(PPh3)2 (where Ph = C6H5). It is a yellow, moisture-sensitive solid. The compound is classified as an ylide and as such carries significant negative charge on carbon. It is isoelectronic with bis(triphenylphosphine)iminium. The P-C-P angle is 131°. The compound has attracted attention as an unusual ligand in organometallic chemistry.
The pure compound has two crystalline phases: a metastable monoclinic C2 phase that is triboluminescent, and an orthorhombic P222 form that is not. Both polymorphs are photoluminescent, with respective peak wavelengths at 540 and 575 nm.
Preparation
The compound was originally prepared by deprotonation of the phosphonium salt [HC(PPh3)2]Br using potassium.
An improved procedure entails production of the same double phosphonium salt from methylene bromide. The double deprotonation is effected with potassium amide.
Related compounds
Methylenetriphenylphosphorane (CH2=PPh3), the parent Wittig reagent
References
Organophosphorus compounds
Ligands | Hexaphenylcarbodiphosphorane | [
"Chemistry"
] | 284 | [
"Ligands",
"Coordination chemistry",
"Functional groups",
"Organic compounds",
"Organophosphorus compounds"
] |
63,151,790 | https://en.wikipedia.org/wiki/Newman%E2%80%93Janis%20algorithm | In general relativity, the Newman–Janis algorithm (NJA) is a complexification technique for finding exact solutions to the Einstein field equations. In 1964, Newman and Janis showed that the Kerr metric could be obtained from the Schwarzschild metric by means of a coordinate transformation and allowing the radial coordinate to take on complex values. Originally, no clear reason for why the algorithm works was known.
In 1998, Drake and Szekeres gave a detailed explanation of the success of the algorithm and proved the uniqueness of certain solutions. In particular, the only perfect fluid solution generated by NJA is the Kerr metric and the only Petrov type D solution is the Kerr–Newman metric.
The algorithm works well on f(R) and Einstein–Maxwell–dilaton theories, but does not return the expected results for braneworld and Born–Infeld theories.
See also
Birkhoff's theorem (relativity)
References
Algorithms
Exact solutions in general relativity | Newman–Janis algorithm | [
"Physics",
"Mathematics"
] | 196 | [
"Exact solutions in general relativity",
"Applied mathematics",
"Algorithms",
"Mathematical logic",
"Mathematical objects",
"Equations",
"Relativity stubs",
"Theory of relativity"
] |
70,374,569 | https://en.wikipedia.org/wiki/List%20of%20border%20control%20organisations | Border control is generally the responsibility of specialised government organisations which oversee various aspects their jurisdiction's border control policies, including customs, immigration policy, border security, biosecurity measures. Official designations, division of responsibilities, and command structures of these organisations vary considerably and some countries split border control functions across multiple agencies.
Canada
Immigration, Refugees and Citizenship Canada: Immigration, Refugees and Citizenship Canada (IRCC) is the department of the Government of Canada with responsibility for matters dealing with immigration to Canada, refugees, and Canadian citizenship. IRCC's mandate emanates from the Department of Citizenship and Immigration Act. The Minister of IRCC is the key person responsible for upholding and administering the Citizenship Act of 1977 and its subsequent amendments. The minister works closely with the Minister of Public Safety in relation to the administration of the Immigration and Refugee Protection Act.
Canada Border Services Agency: The Canada Border Services Agency (CBSA) is the primary organisation tasked with maintaining Canada's border controls. The Agency was created on 12 December 2003, though its creation was formalised by the Canada Border Services Agency Act, which received Royal Assent on 3 November 2005, amalgamating Canada Customs (from the now-defunct Canada Customs and Revenue Agency) with border and enforcement personnel from the Department of the CIC and the Canadian Food Inspection Agency (CFIA).
Canadian Air Transport Security Authority: The Canadian Air Transport Security Authority (CATSA) is the Canadian Crown corporation responsible for security screening of people and baggage and the administration of identity cards at the 89 designated airports in Canada. CATSA is answerable to Transport Canada and reports to the Government of Canada through the Minister of Transport.
China
Border control in China is the responsibility of a variety of entities in each of the country's four distinct immigration areas. In the Special Administrative Regions of Hong Kong and Macau, agencies tracing their lineage to British and Portuguese colonial authorities, respectively, perform border control functions based on the policies and practices in force before those territories' return to the People's Republic of China. Areas administered by the Republic of China are subject to border controls distinct from those in the People's Republic of China.
People's Republic of China:
Hong Kong: The Immigration Department of Hong Kong is responsible for border controls of the Hong Kong Special Administrative Region, including internal controls with the rest of China. After the People's Republic of China resumed sovereignty of the territory in July 1997, Hong Kong's immigration system remained largely unchanged from its British predecessor model. In addition, visa-free entry acceptance regulations into Hong Kong for passport holders of some 170 countries remain unchanged before and after 1997.
Macau: The Immigration Department of Macau, under the Public Security Police Force, is the government agency responsible for immigration matters, whilst the Public Security Police Force itself is responsible for enforcing immigration laws in Macau.
Mainland China: Border control in Mainland China is the responsibility of the National Immigration Administration (NIA), a unit of the Ministry of Public Security (MPS). Customs-related border controls are largely within the purview of the General Administration of Customs of the People's Republic of China.
India
Border control in India is performed by a variety of organisations, each focusing on a distinct section of its external borders.
Border Security Force: The Border Security Force, or BSF, is the primary border defence organisation of India. One of the five Central Armed Police Forces of the Union of India, it was raised in the wake of the 1965 War on 1 December 1965, "for ensuring the security of the borders of India and for matters connected there with". From independence in 1947 to 1965, the protection of India's international boundaries was the responsibility of local police belonging to each border state, with little inter-state coordination. BSF was created as a Central government-controlled security force to guard all of India's borders, thus bringing greater cohesion in border security. BSF is charged with guarding India's land border during peacetime and preventing transnational crime. It is a Union Government Agency under the administrative control of the Ministry of Home Affairs. It currently stands as the world's largest border guarding force.
Assam Rifles: The Assam Rifles, one of India's oldest continuously existent paramilitary units, has been responsible for physical controls on the border between India and Myanmar since 2002. The border area between India, Myanmar, and China is largely made up of minority groups, many of which are transboundary communities. Consequently, enforcing border controls is a challenge for all three countries, and porous sections of the border between India and Myanmar have historically been common since Myanmar was formerly a part of the British Indian Empire.
Indo–Tibetan Border Police: The Indo-Tibetan Border Police (ITBP) is charged with maintaining border controls on India's side of the extensive border between minority regions of India and China. In September 1996, the Parliament of India enacted the "Indo-Tibetan Border Police Force Act, 1992" to "provide for the constitution and regulation" of the ITBP "for ensuring the security of the borders of India and for matters connected therewith". The first head of the ITBP, designated Inspector General, was Balbir Singh, a police officer previously belonging to the Intelligence Bureau. The ITBP, which started with 4 battalions, has since restructuring in 1978, undergone expansion to a force of 56 battalions as of 2017 with a sanctioned strength of 89,432.
Indonesia
The Directorate General of Immigration (Indonesian: Direktorat Jenderal Imigrasi) is the primary agency tasked with border control in Indonesia.
Ireland
Border control for the Republic of Ireland is managed at major ports and airports by the Border Management Unit, directed by the Department of Justice's Irish Naturalisation and Immigration Service. The Garda National Immigration Bureau manages visa and residency requirements. The Revenue Commissioners control customs and excise. As the Republic maintains a common travel area with the United Kingdom, there is no formal border control on the Northern Irish border.
Iran
Iranian Immigration & Passport Police: The Immigration & Passport Police Office is a subdivision of the Law Enforcement Force of the Islamic Republic of Iran with the authority to issue Iranian passports; it also deals with immigrants to Iran. The agency is a member of ICAO's Public Key Directory (PKD).
Islamic Republic of Iran Border Guard Command: The Islamic Republic of Iran Border Guard Command, commonly known as NAJA Border Guard, is a subdivision of the Law Enforcement Force of the Islamic Republic of Iran (NAJA) and Iran's sole agency performing border guard and control duties on land borders and coast guard duties on maritime borders. The unit was founded in 2000; from 1991 to 2000, its duties were carried out by the Security Deputy of NAJA. Before 1991, border control was the Gendarmerie's responsibility.
Malaysia
Immigration Department of Malaysia is responsible for regulating the entry and exit of people into and out of Malaysia. The department manages and maintains the country's immigration policies, including issuing visas, permits, and passes for visitors, students, and workers. It also enforces immigration laws, including detaining and deporting illegal immigrants and those who violate the terms of their visas or passes.
Royal Malaysian Customs serves as the primary border control organization in Malaysia. Its main responsibility is to enforce customs laws and regulations at ports of entry, including airports, seaports, and land borders. The department is tasked with preventing the smuggling of contraband and other illegal goods into the country while facilitating legitimate trade and travel.
México
In México, two separate institutions are responsible for regulating migration affairs, working in continuous collaboration:
The Secretariat of Foreign Affairs is responsible for providing passports to Mexican citizens, and visas or permits to foreigners when the applications are made abroad.
The National Institute of Migration is the authority responsible for regulating the entry of people into México at entry points such as airports, freeways, and seaports. It is also responsible for issuing visas to foreigners when the applications or permits are made in Mexican territory, and for following up on foreigners after their entry into México.
North Korea
Border Security Command and Coastal Security Bureau are collectively responsible for restricting unauthorised cross-border (land and sea) entries and exits. In the early 1990s, the bureaux responsible for border security and coastal security were transferred from the State Security Department to the Ministry of People's Armed Forces. Sometime thereafter, the Border Security Bureau was enlarged to corps level and renamed the Border Security Command. Previously headquartered in Chagang Province, the Border Security Command was relocated to Pyongyang in 2002.
Pakistan
Physical controls on Pakistan's international borders are managed by dedicated paramilitary units: the Pakistan Rangers on the border with India, the Frontier Corps with Afghanistan and Iran and the Gilgit−Baltistan Scouts with China and the Pakistan-administered side of the Line of Control.
The Pakistan Rangers are two paramilitary law enforcement organisations whose primary mission is border defence on the border with India as well as internal security operations, and providing assistance to the police in maintaining law and order. Rangers is an umbrella term for:
the Punjab Rangers, headquartered in Lahore, responsible for guarding Punjab Province's 1,300 km long border with India;
the Sindh Rangers, headquartered in Karachi, defending Sindh Province's ~912 km long border with India.
The Frontier Corps are four western provincial forces, part of the Civil Armed Forces. They operate along the external borders of the western provinces of Balochistan and Khyber Pakhtunkhwa and are the direct counterparts to the Rangers of the eastern provinces (Sindh and Punjab). The Frontier Corps comprises four separate organisations:
Frontier Corps Khyber Pakhtunkhwa (North) and Frontier Corps Khyber Pakhtunkhwa (South) stationed in Khyber Pakhtunkhwa province;
Frontier Corps Balochistan (North) and Frontier Corps Balochistan (South) stationed in Balochistan province.
Each force is headed by a seconded inspector general, who is a Pakistan Army officer of at least major-general rank, although the force itself is under the jurisdiction of the Interior Ministry. With a total manpower of approximately 80,000, the task of the Frontier Corps is to help local law enforcement in the maintenance of law and order, and to carry out border patrol and anti-smuggling operations. Some of the FC's constituent units such as the Chitral Scouts, the Khyber Rifles, Swat Levies, the Kurram Militia, the Tochi Scouts, the South Waziristan Scouts, and the Zhob Militia have regimental histories dating back to British colonial times and many, e.g. the Khyber Rifles, have distinguished combat records before and after 1947.
The Gilgit−Baltistan Scouts are part of the Civil Armed Forces, under the direct control of the Ministry of the Interior of the Government of Pakistan. The Scouts are an internal and border security force with the prime objective to protect the China–Pakistan border and support Civil Administration in ensuring maintenance of law and order in Gilgit-Baltistan and anywhere else in Pakistan. The force was formerly known as the Northern Areas Scouts but was renamed to the Gilgit−Baltistan Scouts in 2011.
The Maritime Security Agency is responsible for guarding the southern maritime border.
Pakistan Customs is responsible for customs-related border security measures.
Schengen Area
Border control in the Schengen Area is primarily performed by the national authorities of individual member states. Consequently, there are many distinct organisations involved with border control along the area's external frontiers and at sea and air ports of entry within its members states.
European Border and Coast Guard Agency (Frontex): Frontex is the Schengen Area's multilateral border control organisation. It is headquartered in Warsaw and operates in coordination with the border and coast guards of individual Schengen Area member states. According to the European Council on Refugees and Exiles (ECRE) and the British Refugee Council, in written evidence submitted to the UK House of Lords inquiry, Frontex fails to demonstrate adequate consideration of international and European asylum and human rights law including the 1951 Convention relating to the Status of Refugees and EU law in respect of access to asylum and the prohibition of refoulement. In September 2009, a Turkish military radar issued a warning to a Latvian helicopter conducting an anti-migrant and anti-refugee patrol in the eastern Aegean Sea to leave the area as it is in Turkish airspace. The Turkish General Staff reported that the Latvian Frontex aircraft had violated Turkish airspace west of Didim. According to a Hellenic Air Force announcement, the incident occurred as the Frontex helicopter —identified as an Italian-made Agusta A109— was patrolling a common route used by people smugglers near the small isle of Farmakonisi. Another incident took place in October 2009 in the airspace above the eastern Aegean sea, off the island of Lesbos. On 20 November 2009, the Turkish General Staff issued a press note alleging that an Estonian Border Guard aircraft Let L-410 UVP taking off from Kos on a Frontex mission had violated Turkish airspace west of Söke. As part of the Border and Coast Guard a Return Office was established with the capacity to repatriate immigrants residing illegally in the union by deploying Return Intervention Teams composed of escorts, monitors, and specialists dealing with related technical aspects. For this repatriation, a uniform European travel document would ensure wider acceptance by third countries. In emergency situations such Intervention Teams will be sent to problem areas to bolster security, either at the request of a member state or at the agency's own initiative. It is this latter proposed capability, to be able to deploy specialists to member states borders without the approval of the national government in question that is proving the most controversial aspect of this European Commission plan.
Direction centrale de la police aux frontières: The Direction centrale de la police aux frontières (DCPF) is a directorate of the French National Police that is responsible for border control at certain border crossing points and border surveillance in some areas in France. They work alongside their British counterparts at Calais, and along the Channel Tunnel Rail Link with the British Transport Police. The DCPF is consequently largely responsible for Schengen Area border controls with the United Kingdom.
Direction générale des douanes et droits indirects: Direction générale des douanes et droits indirects (DGDDI), commonly known as les douanes, is a French law enforcement agency responsible for levying indirect taxes, preventing smuggling, surveilling borders and investigating counterfeit money. The agency acts as a coast guard, border guard, sea rescue organisation and a customs service. In addition, since 1995, the agency has replaced the Border Police in carrying out immigration control at smaller border checkpoints, in particular at maritime borders and regional airports.
Finnish Border Guard: The Finnish Border Guard, including the coast guard, is the agency responsible for border control related to persons, including passport control and border patrol. The Border Guard is a paramilitary organisation, subordinate to the Ministry of the Interior in administrative issues and to the President of the Republic in issues pertaining to the president's authority as Commander-in-Chief (e.g. officer promotions). The Finland-Russia border is a controlled external border of the Schengen Area, routinely patrolled and protected by a border zone enforced by the Border Guard. Finland's borders with Norway and Sweden are internal Schengen borders with no routine border controls, but the Border Guard maintains personnel in the area owing to its search and rescue (SAR) duties. There are two coast guard districts for patrolling maritime borders. In peacetime, the Border Guard trains special forces and light infantry and can be incorporated fully or in part into the Finnish Defence Forces when required by defence readiness. The Border Guard has police and investigative powers in immigration matters and can independently investigate immigration violations. The Border Guard has search and rescue (SAR) duties, both maritime and inland. The Guard operates SAR helicopters that are often used in inland SAR, in assistance of a local fire and rescue department or other authorities. The Border Guard shares border control duties with Finnish Customs, which inspects arriving goods, and the Finnish Police, which enforces immigration decisions such as removal.
Koninklijke Marechaussee: The Koninklijke Marechaussee (English: Royal Military Constabulary) is a branch of the Dutch Armed Forces and are responsible for border control functions as well as guarding national borders and ports of entry, notably Amsterdam Schiphol Airport and Eurostar terminals at Amsterdam Centraal and Rotterdam Centraal. At Schiphol Airport, the Koninklijke Marechaussee operates a criminal investigations department and combats drugs trafficking in cooperation with FIOD for both passenger and air freight.
Swedish border police: Border control duties in Sweden are handled by a special group in the police force. Sweden has natural land borders only to Norway and Finland, where there are no border controls, so border surveillance is not done there apart from customs control. Therefore, border control is focused on some fixed control points, during the border control-less Schengen period until 2015 mainly airports. The introduction of full border control from Denmark and the continent in 2015 put a heavy load on the border police, who had to check 8,000 cars and 50 trains per day coming over the Öresund Bridge, and 3,000 cars in Helsingborg and more in other ferry ports. The police quickly trained several hundred semi-authorised border control guards who had to ask the real officers to take over any doubtful case. The customs office and the coast guard cannot carry out formal border controls, but can stop people in doubtful cases and ask the police to take over.
Singapore
The Immigration and Checkpoints Authority, or ICA, is the border control agency of Singapore under the Ministry of Home Affairs.
The ICA is responsible for border control, border customs services, and immigration enforcement in Singapore. ICA is accountable to Parliament through the Minister for Home Affairs. The agency is in charge of maintaining all border checkpoints in Singapore. In addition, ICA handles anti-terrorism operations and is responsible for many visa and residence related aspects of border control.
South Korea
Korea Immigration Service is a part of the Ministry of Justice, responsible for immigration and border control enforcement. The Korea Immigration Service issues visas and controls the movement of people at ports of entry.
Korea Customs Service is a part of the Ministry of Economy and Finance, responsible for enforcing customs matters such as tariffs and the movement of goods at ports of entry.
South Africa
The Joint Operations Division is a component of the South African National Defence Force that patrols the land borders and oceanic territory. The National Border Control Unit of the South African Police Service works in ports and airports. Since 2020, the Border Management Agency (BMA), a branch of the Department of Home Affairs, has overseen border controls at ports and airports.
Taiwan
In areas controlled by the Republic of China, the National Immigration Agency (NIA), a subsidiary organisation of the Ministry of the Interior, is responsible for border control. The agency is headed by a Director-General; the current Director-General is Chiu Feng-kuang. The agency was established in early 2007 and its job includes the care and guidance of new immigrants, exit and entry control, the deportation of undocumented migrants, and the prevention of human trafficking. The agency also deals with persons from Mainland China, Hong Kong and Macau who do not hold household registration in the areas controlled by the ROC.
United Kingdom
HM Revenue and Customs: Customs administration related to border controls in the United Kingdom largely fall within the jurisdiction of HM Revenue and Customs.
UK Visas and Immigration (UKVI): UKVI operates the visa aspect of the United Kingdom's border controls, managing applications from foreign nationals seeking to visit or work in the UK, and also considers applications from businesses and educational institutions seeking to become sponsors for foreign nationals. It also considers applications from foreign nationals seeking British citizenship.
Border Force: The Border Force is in charge of physical controls and checkpoints at airports, land borders, and ports. Since 1 March 2012, Border Force has been a law-enforcement command within the Home Office, accountable directly to ministers. Border Force is responsible for immigration and customs at 140 rail, air and sea ports in the UK and western Europe, as well as thousands of smaller airstrips, ports and marinas. The work of the Border Force is monitored by the Independent Chief Inspector of Borders and Immigration.
Immigration Enforcement: Immigration Enforcement is the organisation responsible for enforcing border control policies within the United Kingdom, including pursuing and removing undocumented migrants.
United States
Most aspects of American border control are handled by various divisions of the Department of Homeland Security (DHS).
US Customs and Border Protection: U.S. Customs and Border Protection (CBP), a division of the DHS, is the country's primary border control organisation, charged with regulating and facilitating international trade, collecting import duties, and enforcing American trade, customs and immigration regulations. It has a workforce of more than 58,000 employees. Every individual entering America is subject to inspection by Customs and Border Protection (CBP) officers for compliance with immigration, customs and agriculture regulations. Travellers are screened for a variety of prohibited items ranging from gold, silver, and precious metals to alcoholic beverages, firearms, and narcotics.
Transportation Security Administration: The Transportation Security Administration, or TSA, is a division of the DHS responsible for conducting security checks at American airports and other transport hubs, including overseas preclearance facilities (with the notable exception of those in Canada, where CATSA conducts security checks prior to CBP immigration screening). For passengers departing by air from America, TSA screening is the only physical check conducted upon departure.
Immigration and Customs Enforcement: Immigration and Customs Enforcement, or ICE, is the organisation responsible for enforcing immigration laws within America, focusing largely on deporting undocumented migrants. ICE operates detention centres throughout the country and approximately 34,000 undocumented migrants are imprisoned by ICE on any given day, in over 500 detention centres, jails, and prisons nationwide.
United States Citizenship and Immigration Services: United States Citizenship and Immigration Services is responsible for various aspects of border control relating to immigration, including reviewing visa petitions and applications as well as processing asylum claims.
State and local law enforcement agencies: Officers from police forces established by state, county, and municipal governments across America are deputised by ICE to detain undocumented migrants pursuant to Immigration and Nationality Act Section 287(g). Under section 287(g), ICE trains and authorises state and local law enforcement officers to identify, process, and detain undocumented migrants they encounter during their daily law-enforcement activity. The 287(g) programme has been criticised for increasing racial profiling by police and undermining community safety, as the fear of deportation discourages undocumented migrants from reporting crimes or talking to law enforcement officers.
Notes
References
Control
Export and import control
Control list
International law
Travel
Visas
Lists of law enforcement agencies | List of border control organisations | [
"Physics"
] | 4,661 | [
"Physical systems",
"Transport",
"Travel"
] |
70,378,375 | https://en.wikipedia.org/wiki/Maria%20V.%20Chekhova | Maria V. Chekhova (born 1963) is a Russian-German physicist known for her research on quantum optics and in particular on the quantum entanglement of pairs of photons. She is a researcher at the Max Planck Institute for the Science of Light in Erlangen, Germany, where she heads an independent research group on quantum radiation, and a professor at the University of Erlangen–Nuremberg, in the chair of experimental physics (optics).
Education and career
Chekhova was born on 8 June 1963 in Moscow, and educated in physics at Moscow State University, where she earned a master's degree in 1986, completed a Ph.D. in 1989, and earned a habilitation in 2004. She was a full-time researcher at Moscow State University from 1989 to 2010, continuing on a part-time basis until 2020. In the meantime, she took her present position at the Max Planck Institute for the Science of Light in 2010. In 2020, she added a part-time affiliation as professor at the University of Erlangen–Nuremberg.
In November 2021, Chekhova was elected a 2022 Fellow of Optica for "pioneering contributions to the science and applications of photon pairs and twin beams".
Book
Chekhova is the coauthor, with Peter Banzer, of the textbook Polarization of Light: In Classical, Quantum and Nonlinear Optics (De Gruyter, 2021).
References
External links
Chekhova Research Group
1963 births
Living people
Russian physicists
Russian women physicists
German physicists
German women physicists
Quantum physicists
Optical physicists
Moscow State University alumni
Max Planck Society people
Academic staff of the University of Erlangen-Nuremberg | Maria V. Chekhova | [
"Physics"
] | 342 | [
"Quantum physicists",
"Quantum mechanics"
] |
70,379,087 | https://en.wikipedia.org/wiki/Cordyceps%20locustiphila | Cordyceps locustiphila is the basionym and teleomorph of the fungus Beauveria locustiphila, a species of fungus in the family Cordycipitaceae, and is a species within the genus Cordyceps. It was originally described by Henn in 1904. C. locustiphila is an entomopathogen and obligate parasite of grasshopper species within the genus Colpolopha or Tropidacris, and as such is endemic to South America. The scientific name is derived from its close relationship with its host, being named after locusts. The fungus was renamed Beauveria locustiphila in 2017 following research into the family Cordycipitaceae. Following the loss of the species type specimen, new studies were conducted that now recommend that the fungus be divided into three species: C. locustiphila, C. diapheromeriphila, and C. acridophila.
Description
Macroscopic characteristics
The fruiting bodies of Cordyceps locustiphila form gregariously as clubs through breaks and joints in the chitinous shell of their host locusts. The stipe of the club is a fleshy greyish yellow with a length of 1–4 mm and a diameter of 1–2 mm. The stromata formed on the ends of the club are bright yellow and have a simple, claviform body plan, ranging from 3–5 mm in length and 2–4 mm in width. Ovoid perithecia are semi-immersed within the walls of the stroma, and have a wall less than 50 micrometers thick.
Microscopic characteristics
The anamorph of this fungus forms as an ocher yellow hyphal network which turns white at the external margins. The hyphae have a diameter of 1.5–2.5 micrometers. The fungus produces conidiophores with acremonium-like phialides that are simple and erect from the hyphal mat. Conidia are cylindrical in appearance, and are produced solitarily, or via the slime drop method. This asexual phase is what spurred the reclassification as a Beauveria species.
Ecology and dispersal
C. locustiphila has evolved to be closely dependent on its host species, grasshoppers in the genus Colpolopha. As such, it is limited to the range of this grasshopper, and is endemic to South America, including regions of north-central Argentina, northern Chile, southern Brazil, and southeastern Peru.
The fungus colonizes the bodies of its host when ascospores become trapped on the chitin exoskeleton by the grasshopper's fine hairs and begin to germinate. As the mycelium develops, it breaks through the exoskeleton to invade the interior cavities of the insect's body for protection during growth. The fungus then uses the insect as a source of nutrients and shelter for its lifespan.
When it is time to reproduce and disperse spores, the mycelium produces stromata that emerge through gaps and joints in the exoskeleton. Ascospores are then released by semi-embedded perithecia in the stroma's wall to be dispersed by the wind. Similar to other Cordyceps species, C. locustiphila has been shown to be able to influence the neurological processes of its host to "brainwash" the locust into positioning itself where the wind currents and environments are most beneficial for spore dispersion.
Human uses
C. locustiphila is a species of specific scientific interest due to its abilities as an entomopathogen. C. locustiphila poses no threat to human beings, but the locusts it targets can pose severe threats to human agriculture and lead to famines in South America. As such, C. locustiphila has been the subject of research both for its mechanism of breaching chitin defenses in general, as well as for possible use as a biological alternative to pesticides in order to maintain agricultural security while reducing pollutants.
Unlike other Cordyceps species, which have been used in traditional medicine across Asia, C. locustiphila has not been recorded as being used as a medicine or nutrient source at this time.
Classification uncertainty
C. locustiphila was originally classified in 1904, but in 2017 was determined to be the teleomorph of Beauveria locustiphila, a species within Beauveria, a Cordyceps clade of asexually reproducing entomopathogens. This confusion derives from the variation between the sexual and asexual stages of the species' lifecycle, and is part of the larger changes taking place in fungal taxonomy due to the increase in DNA testing and research. Due to taxonomic standard changes implemented as part of the "One Fungus One Name" initiative, B. locustiphila will not unseat the name C. locustiphila, the more widespread teleomorph name of the species, until such time as it is ratified by the International Botanical Congress.
Following further research of C. locustiphila interactions with its host species, as well as genetic testing of the SSU, LSU, TEF, RPB1 and RPB2 nuclear loci, it has been recommended that the species be further divided into 3 species, Cordyceps locustiphila, Cordyceps diapheromeriphila, and Cordyceps acridophila and/or Beauveria locustiphila, Beauveria diapheromeriphila and Beauveria acridophila.
These taxonomical complexities have been marked as a possible obstacle to C. locustiphila's use as a biological pesticide, due to the further interactions and hybridizations the species would undergo should it be propagated widely.
References
Cordyceps
Fungi described in 1904
Fungus species | Cordyceps locustiphila | [
"Biology"
] | 1,375 | [
"Fungi",
"Fungus species"
] |
70,381,482 | https://en.wikipedia.org/wiki/Lutetium%20%28177Lu%29%20vipivotide%20tetraxetan | {{DISPLAYTITLE:Lutetium (177Lu) vipivotide tetraxetan}}
Lutetium (177Lu) vipivotide tetraxetan, sold under the brand name Pluvicto, is a radiopharmaceutical medication used for the treatment of prostate-specific membrane antigen (PSMA)-positive metastatic castration-resistant prostate cancer (mCRPC). Lutetium (177Lu) vipivotide tetraxetan is a targeted radioligand therapy.
The most common adverse reactions include fatigue, dry mouth, nausea, anemia, decreased appetite, and constipation.
Lutetium (177Lu) vipivotide tetraxetan is a radioconjugate composed of PSMA-617, a human prostate-specific membrane antigen (PSMA)-targeting ligand, conjugated to the beta-emitting radioisotope lutetium-177, with potential antineoplastic activity against PSMA-expressing tumor cells. Upon intravenous administration of lutetium (177Lu) vipivotide tetraxetan, it targets and binds to PSMA-expressing tumor cells. Upon binding, PSMA-expressing tumor cells are destroyed by 177Lu through the specific delivery of beta particle radiation. PSMA, a tumor-associated antigen and type II transmembrane protein, is expressed on the membrane of prostatic epithelial cells and overexpressed on prostate tumor cells.
Lutetium (177Lu) vipivotide tetraxetan was approved for medical use in the United States in March 2022, and in the European Union in December 2022. The US Food and Drug Administration (FDA) considers it to be a first-in-class medication.
History
In 2006, scientists from Purdue University designed a targeting ligand that bound with high affinity and specificity to PSMA on prostate cancer cells and patented its ability to target attached radionuclides such as 177Lu, 99mTc, 68Ga, etc. to prostate cancers. The patents were licensed to Endocyte in 2007. In 2012, scientists at the German Cancer Research Center and University Hospital Heidelberg improved the drug's affinity, patented it, and licensed it to ABX advanced biomedical compounds, a small German pharmaceutical company, for early clinical development. In 2017, the ABX patent was also acquired by Endocyte, and Endocyte, together with the above two sets of patents, was acquired by Novartis in 2018.
Efficacy and safety were initially investigated as a compassionate access treatment in Germany, with high tumor targeting and low doses to normal organs. Physician-scientists from the Peter MacCallum Cancer Centre conducted a phase 2 trial demonstrating high response rates, low toxicity and reduction in pain in men with metastatic castration-resistant cancer who progressed after conventional treatments. The ANZUP co-operative trials conducted the first randomized, multicentre trial comparing lutetium vipivotide tetraxetan to cabazitaxel chemotherapy. This trial demonstrated higher PSA response and fewer adverse effects with lutetium vipivotide tetraxetan.
Efficacy was evaluated in VISION, a randomized (2:1), multicenter, open-label trial that evaluated lutetium (177Lu) vipivotide tetraxetan plus best standard of care (BSoC) (n=551) or BSoC alone (n=280) in men with progressive, prostate-specific membrane antigen (PSMA)-positive metastatic castration-resistant prostate cancer (mCRPC). All participants received a GnRH analog or had prior bilateral orchiectomy. Participants were required to have received at least one androgen receptor pathway inhibitor, and 1 or 2 prior taxane-based chemotherapy regimens. Participants received lutetium (177Lu) vipivotide tetraxetan 7.4 GBq (200 mCi) every 6 weeks for up to a total of 6 doses plus BSoC or BSoC alone.
The U.S. Food and Drug Administration (FDA) granted the application for lutetium (177Lu) vipivotide tetraxetan priority review and breakthrough therapy designations.
Society and culture
Regulatory status
On 13 October 2022, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Pluvicto, intended for the treatment of prostate cancer. The applicant for this medicinal product was Novartis Europharm Limited. Lutetium (177Lu) vipivotide tetraxetan was approved for medical use in the European Union in December 2022.
References
Lutetium complexes
Drugs developed by Novartis
Radiopharmaceuticals | Lutetium (177Lu) vipivotide tetraxetan | [
"Chemistry"
] | 1,003 | [
"Chemicals in medicine",
"Radiopharmaceuticals",
"Medicinal radiochemistry"
] |
70,383,921 | https://en.wikipedia.org/wiki/Oceanic%20freshwater%20flux | Oceanic freshwater fluxes are defined as the transport of non saline water between the oceans and the other components of the Earth's system (the lands, the atmosphere and the cryosphere). These fluxes have an impact on the local ocean properties (on sea surface salinity, temperature and elevation), as well as on the large scale circulation patterns (such as the thermohaline circulation).
Introduction
Freshwater fluxes in general describe how freshwater is transported between and stored in the Earth's systems: oceans, land, the atmosphere and the cryosphere. While the total amount of water on Earth has remained virtually constant over human timescales, the relative distribution of that total mass between the four reservoirs has been influenced by past climate states, such as glacial cycles. Since the oceans account for 71% of the Earth's surface area, and 86% of evaporation (E) and 78% of precipitation (P) occur over the ocean, oceanic freshwater fluxes represent a large part of the world's freshwater fluxes.
There are five major freshwater fluxes into and out of the ocean, namely:
Precipitation
Evaporation
Riverine discharge
Ice freezing or melting (Sea ice freezing or melting, ice shelf melting, iceberg melting)
Groundwater discharge
whereby 1, 3 and 5 are all inputs, adding freshwater to the ocean; 2 is an output, i.e. a negative freshwater flux; and 4 can be either a freshwater loss (freezing) or gain (melting).
The quantity and the spatial distribution of those fluxes determine the ocean salinity (the salt concentration of the ocean water). A positive freshwater flux leads to mixing of water with low to zero salinity with the salty ocean water, resulting in a decrease of the water salinity. This is, for example, the case in regions where precipitation is greater than evaporation. On the contrary, if evaporation exceeds precipitation, the ocean salinity increases, since only water (H2O) evaporates, but not the ions (e.g. Na+, Cl−) which make up salt.
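To make the sign convention concrete, a minimal mixed-layer sketch (the layer depth, salinity and flux values are illustrative assumptions, not observations) estimates the salinity tendency as dS/dt ≈ S·(E−P)/h:

```python
# Minimal mixed-layer salinity tendency driven by net evaporation (E - P).
# Illustrative assumption: a 50 m deep, well-mixed layer at 35 g/kg under
# 1 m/yr net evaporation; dS/dt ~ S * (E - P) / h.

def salinity_tendency(salinity_gkg, e_minus_p_m_per_yr, mixed_layer_depth_m):
    """Return the salinity tendency (g/kg per year) of a well-mixed surface layer."""
    return salinity_gkg * e_minus_p_m_per_yr / mixed_layer_depth_m

if __name__ == "__main__":
    S = 35.0    # g/kg, typical open-ocean surface salinity
    emp = 1.0   # m/yr net evaporation (positive: net freshwater loss)
    h = 50.0    # m, assumed mixed-layer depth
    print(f"dS/dt = {salinity_tendency(S, emp, h):+.2f} g/kg per year")
    # A region of net precipitation (emp < 0) gives a negative tendency, i.e. freshening.
```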
Estimates of the annual mean freshwater fluxes into the ocean are for precipitation (88% of total freshwater input), for riverine discharge from land (9%), for ice discharge from land (<1%) and for saline and fresh groundwater discharge respectively (<1%). The annual mean freshwater flux out of the ocean via evaporation is estimated to be .
The salinity, along with temperature and pressure, determines the density of the water. Higher salinity and cooler water result in a higher water density (see also spiciness of ocean water). Since differences in water density drive large-scale ocean circulation, freshwater fluxes are most important for ocean circulation patterns like the Thermohaline Circulation (THC).
Freshwater fluxes into the ocean
Evaporation and precipitation
There are large spatial and temporal variations in precipitation and evaporation patterns. The dominant reason for precipitation is adiabatic cooling when moist air rises, whose water vapor then becomes supersaturated above a certain altitude and condenses out. Areas of large precipitation are therefore areas of convection, which is most prominent in the Intertropical Convergence Zone (ITCZ), a band of latitudes around the equator.
Evaporation describes the process by which surface water changes its phase from liquid to gaseous. This process requires a high amount of energy, due to the strong hydrogen bonds between the water molecules. This results in a global evaporation pattern where high evaporation rates are observed mostly in warm tropical and subtropical regions, where the surface is heated by solar radiation, which provides the necessary amount of energy. At higher latitudes the evaporation rate decreases. Additionally, the evaporation rate is influenced by the relative humidity of the air overlying the water surface. Approaching the saturation of the air with water vapour, the evaporation rate decreases, i.e. a lower air–sea humidity gradient decreases evaporation.
The actual freshwater flux that the ocean experiences in a certain timeframe is the net amount of precipitation and evaporation in this time interval. This means, if evaporation minus precipitation (E-P) is positive, the ocean experiences a net loss of freshwater, while the opposite is true for a negative value for E-P. On a global scale, the subtropical gyres and western boundary currents of the Atlantic, Pacific and Indian Oceans are regions where evaporation exceeds precipitation. In contrast, the ITCZ as well as high latitudes (> 40° N/S) are regions of net precipitation, although the ITCZ exceeds the high latitudes in terms of quantity of rainfall. The equatorial region of net precipitation is centered north of the equator in the Atlantic and Pacific Oceans but is broader and extends further south in the Indian Ocean. An additional center of strong net precipitation is located over the western Pacific-Indonesian region.
Both Atlantic subtropical gyres are net evaporative, as well as the Pacific subtropical gyres, although they show an east–west transition with increased evaporation near the eastern boundaries. This spatial pattern can be attributed to the fact that the overlying air becomes saturated in humidity, subsequently leading to decreasing evaporation rates as the air is driven westward by the trade winds.
An estimation of the annual mean freshwater flux into the ocean is for precipitation, while annual mean freshwater flux out of the ocean via evaporation is estimated to be .
When considering all the ocean basins, the only ocean basin which experiences net precipitation averaged over the year is the North Pacific. The other ocean basins, namely the South Pacific, the North and South Atlantic and the Indian Ocean, are areas of net evaporation, albeit with varying strength. The net evaporation over the South Pacific Ocean is distinctly smaller than over the other ocean basins, although the South Pacific Ocean covers an area as large as the whole Atlantic Ocean and one third larger than the Indian Ocean.
It is very likely that the energy increase (heat flux) observed in the upper 700 m of the global oceans can be attributed to anthropogenic climate change and increased radiative forcing due to greenhouse gas emissions. Although observed trends in evaporation minus precipitation suggest that the Atlantic Ocean will become saltier, while the Indian Ocean will become fresher in the coming decades, it is easier to project global patterns of air-sea flux based on changes in heat content and salinity while regional trends are rarely robust.
Seasonal cycle
The amount and even the sign of the net total freshwater flux E-P from an ocean basin can change throughout the year.
The net evaporation over much of the subtropics is most pronounced during the winter season due to the increased strength of the easterly trades in winter. This applies to both hemispheres. The wind impacts evaporation in two ways. Firstly directly, whereby a greater wind speed carries water vapour away from the evaporating surface faster, leading to a faster re-establishment of the air–sea humidity gradient, which was reduced by the preceding evaporation and is necessary for high evaporation rates. Secondly indirectly, since enhanced surface wind strengthens the wind-driven subtropical gyre. Since the subtropical gyres drive a northward heat transport via the western boundary currents, the sea surface temperatures warm up along the paths of the currents and cause more evaporation by providing more energy and enlarging the air–sea humidity gradients. In the extratropics the net precipitation is not explainable by a simple seasonal cycle. In the Atlantic and Pacific Oceans the net mid-latitude precipitation reaches its peak during June–August synchronously in the northern and the southern hemisphere, i.e. in different seasons.
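The two wind effects just described are commonly summarised with a bulk aerodynamic formula; the coefficient value below is a typical assumed magnitude rather than one given in the text:

E \;\approx\; \rho_a\, C_E\, U\,\big(q_s(\mathrm{SST}) - q_a\big),

where E is the evaporative mass flux, \rho_a the air density, C_E \approx 1.2\times10^{-3} a dimensionless transfer coefficient, U the near-surface wind speed, q_s the saturation specific humidity at the sea surface temperature and q_a the specific humidity of the overlying air; a stronger wind U or a larger air–sea humidity gradient (q_s - q_a) both increase evaporation.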
In the North and South Atlantic Oceans and in the North Pacific Ocean, evaporation exceeds precipitation in winter and spring. During summer and autumn the sign of E-P changes for all ocean basins but the South Atlantic Ocean, which is always net evaporative. When considering the Atlantic as a whole, the constant net loss of freshwater in the South Atlantic Ocean determines the sign of the total freshwater flux and cancels out the net precipitation from the North Atlantic in summer. This means the Atlantic as a whole is net evaporative throughout the year, due to the prominent influence of the South Atlantic Ocean. The opposite can be stated about the Pacific Ocean as a whole, which shows an excess of precipitation over evaporation in every season. This pattern of evaporation minus precipitation is consistent with the observed higher salinity in the Atlantic compared to the Pacific Ocean. In the Indian Ocean, net evaporation prevails for most of the year, except during December–February.
Changes due to climate change
Past
The report from Working Group 1 in the IPCC 2021 AR6 concluded that patterns of evaporation minus precipitation (E-P) over the ocean have enhanced the present mean pattern of wetting and drying. In general, saline surface waters had become saltier (especially in the Atlantic Ocean) while relatively fresh surface waters had become fresher (especially in the Indian Ocean). However, AR6 assessed only low confidence in globally averaged trends in E-P over the 20th century due to observational uncertainty, with a spatial pattern dominated by evaporation increases over the ocean. Even coarse-resolution models show that mean SST and variability in SST are sensitive to changes in flux forcing.
Future
Based on the assessment of Coupled Model Intercomparison Project 6 (CMIP6) models, AR6 concluded that it is very likely that, in the long term, global mean ocean precipitation will increase with increasing Global Surface Air Temperature. Annual mean and global mean precipitation will very likely increase by 1–3% per °C warming. Hereby, the precipitation patterns will also change and exhibit substantial regional and seasonal differences. Following the general trend ‘wet-gets-wetter-dry-gets-drier’, precipitation will very likely increase over high latitudes and the tropical ocean and likely increase in large parts of the monsoon regions, but likely decrease over the subtropics, including the Mediterranean, southern Africa and southwest Australia, in response to greenhouse-gas induced warming. Although these are the expected general trends there can be distinct deviations from those pattern changes on a local scale. One possible impact of the corresponding trend in ocean salinity is an altering of the Thermohaline Circulation, which is explained below.
Continental discharge
Another source of freshwater discharge into the ocean is runoff from continents, through river estuaries. The average yearly freshwater discharge from continents is estimated around .
Compared to other ocean basins, the discharge is relatively high into the western tropical Atlantic, led by the Amazon and the Orinoco river estuaries. This causes some local effects as well adjustment to the large scale thermohaline circulation, as discussed in the "Influence on the Thermohaline Circulation" chapter.
Seasonal cycles
Most rivers exhibit some sort of seasonal cycle in their discharge, often (but not always) related to seasonal variation in the precipitation. The figure on the right shows the seasonal cycle of the runoff from the 10 largest rivers (Amazon, Mississippi, Congo, Yenisey, Paraná, Orinoco, Lena, Changjiang, Mekong, Brahmaputra/Ganges), compared with the local precipitation cycle and two different P-E estimates.
In several rivers, the runoff peak follows the precipitation peak, with different delays reflecting the time needed for the surface runoff to travel to the river mouth. For shorter rivers such as the Changjiang, Mekong and Brahmaputra/Ganges, the lag between the precipitation and the runoff peaks is about a month or less, while in the Amazon and the Orinoco rivers, the lag is 2 or more months.
Other larger rivers at higher latitude, such as the Yenisey, Lena and Mississippi, seem to experience a runoff cycle decoupled from the precipitation cycle. The sharp June peak of the Lena and Yenisey cycle is likely due to snowmelt, as is the less prominent peak between March and May in the Mississippi river.
The Paraná and the Congo rivers do not experience significant seasonal runoff cycles despite their precipitation cycles; this is probably due to human intervention through river damming.
Multi-annual cycles and climate change impacts
River runoff is also affected by other meteorological cycles that span over several years.
In particular, a significant correlation with El Niño-Southern Oscillation (ENSO) phase and strength has been observed for several major rivers, as well as a correlation with Interdecadal Pacific Oscillation (IPO).
These irregular cycles, and other possible factors of internal variation which are yet to be researched fully, make it difficult to identify the changes in river runoff that can be ascribed to human induced climate change.
However, climate simulations under a moderate emission scenario (RCP4.5) show significant changes in river runoff by the end of the century, with decreased runoff in Central America, Mexico, the Mediterranean Basin, Southern Africa and much of South America, and increased runoff in the rest of Eurasia and North America.
These changes are consistent with the expected precipitation changes, but a component of earlier snowmelt and permafrost thawing will also have to be considered.
A 2018 study has recorded the variation in river runoff into each oceanic basin from 1986 to 2016, showing an increased discharge into the Arctic Ocean and a decreased discharge into the Indian Ocean over the last decade.
Local impacts of river runoff
The input of freshwater from river runoff may seem negligible compared to that of precipitation, but several studies have shown that its impact cannot be neglected.
The largest annual changes in surface salinity have been observed on the western tropical Atlantic, peaking between spring and summer, when the precipitation peak in the ITCZ coincides with the peak in the Amazon discharge. The low-salinity water influx has also been shown to follow the seasonal variation of currents on the Brazilian coast (northwestward in spring, eastward in summer).
As a direct consequence of the freshwater discharge, rivers have an impact on the local Sea Surface Temperature (SST). This effect is theoretically present at all river mouths, but it has been possible to measure it only for very large rivers.
The freshwater placed on top of the saline water serves to stabilize the stratification, restricting the vertical mixing of colder water from greater depths, hence increasing the local SST. Simulations have shown a large SST anomaly especially close to the mouth of the Congo river between July and April, up to +1 °C; a similar anomaly has also been simulated near the mouth of the Amazon river between May and October.
River discharge also has an impact on the local sea level, through two different processes. Those are roughly represented by the first two terms of the following equation:

\eta = \frac{p_b}{\rho_0 g} - \frac{1}{\rho_0}\int_{-H}^{0} \rho'\, dz - \frac{p_a}{\rho_0 g}

in which \eta is the sea surface variation from the mean, p_b is the variation of bottom sea pressure, \rho' is the variation of sea water density from the mean (\rho_0), g is the gravitational acceleration, H is the water depth, and p_a is the atmospheric pressure at sea level.
The first term represents the simple increase of the ocean mass: the importance of this contribution can be established through a "hosing experiment", which entails simulating the same water input of the river but with the same salinity as ocean water. While it has been shown that the sea level increase caused by this contribution is carried away by barotropic waves on the timescale of days, it can still have an impact when the water basin is semi-enclosed (as in the Arctic) or the water input is particularly large. Durand et al. simulated a "hosing experiment" with seasonally variable input in the Bay of Bengal, which showed sea level oscillations of the order of 0.1 m.
Further, since the freshwater from the river runoff has a lower density than the seawater, the density anomaly term is negative across the first water layer, which results in a positive contribution to the sea level from the second term. This phenomenon is called the halosteric effect. The contribution of the halosteric effect generally has a longer-lasting effect compared to the ocean mass contribution, while still being of the same order of magnitude (0.1–0.2 m).
The third term of the equation represents the dependency on atmospheric pressure, which is unaffected by river runoff.
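A minimal numerical sketch of the halosteric (second) term; the layer values and the linear haline contraction coefficient are assumptions made only for illustration:

```python
# Halosteric sea-level contribution from a freshened surface layer, using a
# linear equation of state and the steric term delta_eta = -(1/rho0) * sum(rho' * dz).
# All numbers below are illustrative assumptions.

RHO0 = 1025.0   # kg/m^3, reference sea water density
BETA = 7.6e-4   # (g/kg)^-1, assumed haline contraction coefficient

def halosteric_sea_level(salinity_anomalies_gkg, layer_thicknesses_m):
    """Steric sea-level change (m) from salinity anomalies in stacked layers."""
    delta_eta = 0.0
    for d_s, d_z in zip(salinity_anomalies_gkg, layer_thicknesses_m):
        rho_prime = RHO0 * BETA * d_s          # density anomaly from salinity alone
        delta_eta += -rho_prime * d_z / RHO0   # freshening (d_s < 0) raises sea level
    return delta_eta

if __name__ == "__main__":
    # A 10 m thick surface layer freshened by 5 g/kg near a river mouth.
    print(f"halosteric rise = {halosteric_sea_level([-5.0], [10.0]):.3f} m")
```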
Other oceanic freshwater fluxes
Groundwater
The total flux of groundwater to the ocean can be divided into three different fluxes: fresh submarine groundwater discharge, near-shore terrestrial groundwater discharge and recirculated sea water. The contribution of fresh groundwater accounts for less than 1% of the total freshwater input into the ocean and is therefore negligible on a global scale. However, due to a high variability of groundwater discharge there can be an important contribution to coastal ecosystems on a local scale.
Ice freezing and melting
Two categories of ice have to be considered in the context of oceanic freshwater fluxes: sea ice and (recently) grounded ice such as ice shelves and icebergs.
Sea ice is considered part of the oceanic water budget; therefore, its melting or freezing does not constitute an input or output of water in general. However, at a regional scale and on intra-annual timescales, it can be an important determinant of ocean salinity, by adding freshwater during the melting process or by rejecting salt during the freezing process. For example, over the Arctic Ocean evaporation and precipitation rates are quite low, respectively, about 5±10 cm/yr and 20±30 cm/yr in liquid water equivalent. The freshwater cycle in the Arctic Ocean is, therefore, significantly determined by freezing and melting of sea ice, for which characteristic rates are about 100 and 50 cm/yr, respectively. If the ice drifts during the long intervals between the phase changes (frozen and liquid), the result is a net local distillation where the sea ice was formed and a net local freshening of water where the sea ice melts. This freezing and melting of sea ice, with their accompanying salinity changes, supply local buoyancy forcing that influences ocean circulation.
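A back-of-the-envelope sketch of the brine-rejection effect during freezing; the ice thickness, salinities and density ratio are assumed values for illustration only:

```python
# Mixed-layer salinity increase when sea ice forms and rejects salt.
# Assumed values: 1 m of ice (salinity 5 g/kg) grown from a 30 m mixed layer
# of 32 g/kg water, with an ice/water density ratio of about 0.9.

def freezing_salinity_change(ice_thickness_m, ice_salinity, water_salinity,
                             mixed_layer_depth_m, density_ratio=0.9):
    """Approximate mixed-layer salinity change (g/kg) caused by ice growth."""
    water_equivalent = ice_thickness_m * density_ratio          # m of water locked in ice
    rejected_salt = water_equivalent * (water_salinity - ice_salinity)
    return rejected_salt / (mixed_layer_depth_m - water_equivalent)

if __name__ == "__main__":
    d_s = freezing_salinity_change(1.0, 5.0, 32.0, 30.0)
    print(f"mixed-layer salinification = +{d_s:.2f} g/kg")  # roughly +0.8 g/kg
```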
The calving of a previously grounded ice sheet into the ocean as icebergs, as well as the melting of ice shelves by warm ocean water, constitutes a net freshwater influx, not only on a local but on a whole-ocean scale. Although the total input from the cryosphere is small compared to the total input of precipitation and riverine discharge (less than 1% on a global scale), on a local scale it can be an important contributor of freshwater and can influence ocean circulation.
Influence on thermohaline circulation (THC)
The Thermohaline Circulation is part of the global ocean circulation. Although this phenomenon is not fully understood yet, it is known that its driving processes are thermohaline forcing and turbulent mixing. Thermohaline forcing refers to density-gradient driven motions, whereby density is determined by the temperature (‘thermo’) and salt concentration (‘haline’) of the water. Heat and freshwater fluxes at the ocean's surface therefore play a key role in forming ocean currents. Those currents exert a major effect on regional and global climate.
The Atlantic Meridional Overturning Circulation (AMOC) is the Atlantic branch of the THC. Hereby, northward-moving surface water releases heat and water to the atmosphere and therefore becomes colder, more saline and consequently denser. This leads to the formation of cold deep water in the North Atlantic. This cold deep water flows back to the south at a depth of 2–3 km until it joins the Antarctic Circumpolar Current.
The described differences in net precipitation-evaporation patterns between the Atlantic and the Pacific, with the Atlantic being net evaporative and the Pacific experiencing net precipitation, leads to a distinct difference in salinity contrast, with the Atlantic being more saline than the Pacific. This freshwater flux driven salinity contrast is the main reason that the Atlantic supports a meridional overturning circulation and the Pacific does not. The lower surface salinity of the North Pacific, due to high precipitation rates, inhibits deep convection in the Pacific. AR6 concluded from model simulations from the Climate Model Intercomparison Project 6 (CMIP6) that the AMOC will very likely weaken in the 21st century, but there is low confidence in the models’ projected timing and magnitude of AMOC decline. The projected AMOC weakening can be explained by the CMIP6 projection of an increase in high-latitude temperature and precipitation, along with freshwater input from increased melting of the Greenland Ice Sheet, which cause high-latitude North Atlantic surface waters to become less dense and more stable, preventing overturning and weakening AMOC.
Impacts of river runoff on the large-scale thermohaline circulation
While evaporation and precipitation are the main cause of the salinity anomalies that drive the THC, large rivers appear to have a non-negligible impact as well. In particular, a 2017 study simulated a shutdown of Amazon runoff and measured its impact on the AMOC. It found that the shutdown could strengthen the AMOC and increase upwelling, lowering sea surface temperatures (SST) at the equator and in the southern tropics. The cooler equatorial SST could in turn reduce rainfall in the Intertropical Convergence Zone (ITCZ) and weaken the meridional atmospheric cells and the westerly winds in the extratropics. North America and the Arctic would then experience warmer winters (with anomalies up to 1.3 °C), while northern Eurasia would have cooler and drier conditions. In the southern hemisphere, the Amazonia region could also experience drier conditions, possibly creating a positive feedback. The paper concluded by advising caution in the building of dams on the Amazon River (more than a hundred new dams are being considered for construction in the coming decades).
In what could be seen as a small-scale case study, the damming of the Nile River in 1964 (Aswan High Dam) has been shown to have had an impact on the THC of the Mediterranean Sea. A steady increase in the salinity of surface and intermediate waters has been recorded in the western Mediterranean over the last 40 years, connected to increased activity at the deep water formation sites in the southern Adriatic. The damming of the Nile has been found to be responsible for about 40% of this salinity increase (and hence of the increase in deep water formation).
References
Oceanography | Oceanic freshwater flux | [
"Physics",
"Environmental_science"
] | 4,613 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics"
] |
70,386,320 | https://en.wikipedia.org/wiki/Codon%20reassignment | Codon reassignment is the biological process by which the genetic code of a cell is changed in response to the environment. It may be caused by alternative tRNA aminoacylation, in which the cell changes the amino acid attached to a particular type of transfer RNA. This process has been identified in bacteria, yeast and human cancer cells.
In human cancer cells, codon reassignment can be triggered by tryptophan depletion, resulting in proteins in which tryptophan is replaced by phenylalanine.
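A toy sketch of the idea is shown below. It is purely illustrative and not taken from the cited studies: it translates a short mRNA with the standard genetic code and with a hypothetical reassigned table in which the tryptophan codon UGG is decoded as phenylalanine, mimicking the W-to-F substitution described above. Only the handful of codons used in the example are listed.

```python
# Illustrative sketch: translation under a standard vs. a reassigned codon table.
STANDARD = {"AUG": "M", "UGG": "W", "UUC": "F", "GGC": "G", "UAA": "*"}
REASSIGNED = dict(STANDARD, UGG="F")  # tryptophan codon reassigned to phenylalanine

def translate(mrna, code):
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        aa = code[mrna[i:i + 3]]
        if aa == "*":          # stop codon
            break
        protein.append(aa)
    return "".join(protein)

mrna = "AUGUGGGGCUUCUAA"
print(translate(mrna, STANDARD))    # MWGF  (standard decoding)
print(translate(mrna, REASSIGNED))  # MFGF  (Trp codon decoded as Phe)
```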
See also
Expanded genetic code
References
Genetics
Amino acids
Biological processes
Bacteria
Yeasts
Cancer | Codon reassignment | [
"Chemistry",
"Biology"
] | 132 | [
"Biomolecules by chemical classification",
"Fungi",
"Genetics",
"Yeasts",
"Prokaryotes",
"Amino acids",
"Bacteria",
"nan",
"Microorganisms"
] |
61,006,900 | https://en.wikipedia.org/wiki/Tip%20dating | Tip dating is a technique used in molecular dating that allows the inference of time-calibrated phylogenetic trees. Its defining feature is that it uses the ages of the samples to provide time information for the analysis, in contrast with traditional 'node dating' methods that require age constraints to be applied to the internal nodes of the evolutionary tree.
In tip dating, morphological data and molecular data are typically analysed together to estimate the evolutionary relationships (tree topology) and the divergence times among lineages (node times); this approach is also known as 'total-evidence dating'. However, tip dating can also be used to analyse data sets that only comprise morphological characters or that only comprise molecular characters (e.g., data sets that include samples of ancient DNA or of serially sampled viruses).
Tip dating has been implemented in Bayesian phylogenetic software and typically draws on the fossilised birth-death model, a model of diversification that allows for speciation, extinction, and the sampling of fossil and extant taxa.
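The core intuition, that sample ages carry temporal signal, can be illustrated with a much simpler, non-Bayesian calculation: a root-to-tip regression of genetic divergence against sampling date, of the kind used in exploratory clock diagnostics. The sketch below is not the fossilised birth-death or total-evidence machinery itself, and the numbers are invented for illustration.

```python
# Root-to-tip regression: tip sampling dates vs. divergence from a fixed root.
# Data are made up for illustration only.
from statistics import mean

sampling_years = [2000.0, 2005.0, 2010.0, 2015.0, 2020.0]   # tip dates
root_to_tip_div = [0.010, 0.016, 0.021, 0.025, 0.031]       # substitutions/site

def least_squares(x, y):
    xbar, ybar = mean(x), mean(y)
    slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
            sum((xi - xbar) ** 2 for xi in x)
    return slope, ybar - slope * xbar

rate, intercept = least_squares(sampling_years, root_to_tip_div)
root_age = -intercept / rate   # date at which expected divergence is zero
print(f"clock rate ~ {rate:.2e} substitutions/site/year")
print(f"estimated root age ~ {root_age:.0f}")
```

Older and younger tips anchor the slope (the clock rate), and extrapolating back to zero divergence gives a rough root age; Bayesian tip dating formalises this signal within a full model of the tree, the clock and the sampling process.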
This promising method is not yet fully mature, and a number of possible biases and undesirable behaviours must be taken into account when interpreting its results.
References
Phylogenetics | Tip dating | [
"Biology"
] | 249 | [
"Bioinformatics",
"Phylogenetics",
"Taxonomy (biology)"
] |