Dataset schema:
id: int64 (39 to 79M)
url: string (lengths 31 to 227)
text: string (lengths 6 to 334k)
source: string (lengths 1 to 150)
categories: list (lengths 1 to 6)
token_count: int64 (3 to 71.8k)
subcategories: list (lengths 0 to 30)
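For illustration, records following this schema could be streamed and filtered as in the minimal sketch below; the dataset path "example/wikipedia-stem" is a hypothetical placeholder, since the actual dataset identifier is not given in this excerpt.

# Minimal sketch: streaming and filtering records that follow the schema above.
# "example/wikipedia-stem" is a hypothetical placeholder path, not the actual
# dataset identifier (which this excerpt does not give).
from datasets import load_dataset

ds = load_dataset("example/wikipedia-stem", split="train", streaming=True)
for record in ds:
    # keep engineering articles above a minimum length
    if "Engineering" in record["categories"] and record["token_count"] > 300:
        print(record["id"], record["url"], record["token_count"])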
73,772,168
https://en.wikipedia.org/wiki/David%20R.%20Morrow
David R. Morrow is an American philosopher and the Director of Research for the Institute for Carbon Removal Law and Policy and the Forum for Climate Engineering Assessment at American University. He is also a Research Fellow in the Institute for Philosophy & Public Policy at George Mason University. Morrow is known for his works on climate policy and ethics. Early life and education Morrow received a master of arts in public policy from the Harris School of Public Policy at the University of Chicago and a Ph.D. in philosophy from the Graduate Center of the City University of New York. Books Morrow, D.R. 2019. Values in Climate Policy. London: Rowman & Littlefield International. Morrow, D.R. & A. Weston. 2019. A Workbook for Arguments: A Complete Course in Critical Thinking, 3rd edition. Indianapolis: Hackett. (First edition, 2012) Melchert, N. & D.R. Morrow. 2018. The Great Conversation: A Historical Introduction to Philosophy, 8th edition. New York: Oxford University Press. Morrow, D.R. 2018. Moral Reasoning: A Text and Reader on Ethics and Contemporary Moral Issues. New York: Oxford University Press. Morrow, D.R. 2017. Giving Reasons: An Extremely Short Introduction to Critical Thinking. Hackett. References External links Morrow's personal website 21st-century American philosophers Year of birth missing (living people) Living people American philosophy academics American ethicists George Mason University faculty American University faculty CUNY Graduate Center alumni Johns Hopkins University alumni University of Chicago Harris School of Public Policy alumni Climate change mitigation researchers American political scientists Environmental ethicists
David R. Morrow
[ "Engineering", "Environmental_science" ]
325
[ "Geoengineering", "Environmental ethicists", "Climate change mitigation researchers", "Environmental ethics" ]
73,772,754
https://en.wikipedia.org/wiki/Barbara%20De%20Salvo
Barbara De Salvo is an electronics engineer whose work involves the development of advanced computer memory technology and neuromorphic computing architecture. Educated in Italy, France, and the US, she has worked in France and the US. She is Research Director and Silicon Technology Strategist for Facebook Reality Labs. Education and career After earning an engineering degree in 1996 from the University of Parma, De Salvo studied microelectronics at the Grenoble Institute of Technology, completing a Ph.D. in 1999. Her dissertation, Étude du transport électrique et de la fiabilité dans les isolants des mémoires non volatiles à grille flottante (a study of electrical transport and reliability in the insulators of floating-gate non-volatile memories), was jointly directed by Gérard Ghibaudo and Georges Pananakakis. She earned a habilitation through Joseph Fourier University in 2007, and has also studied at the MIT Sloan School of Management. She worked in Grenoble, France, at CEA-Leti: Laboratoire d'électronique des technologies de l'information, beginning in 1999, including two years from 2013 to 2015 in Albany, New York, working as part of an international collaboration with IBM. She became chief scientist and deputy director of CEA-Leti before moving in 2019 to Meta Platforms and its Reality Labs in Menlo Park, California, as Research Director and Silicon Technology Strategist. Book De Salvo is the author of the book Silicon Non-Volatile Memories: Paths of Innovation (Wiley, 2009). Recognition De Salvo was named an IEEE Fellow in the 2020 class of fellows, "for contributions to device physics of nonvolatile embedded and stand-alone memories". References External links Year of birth missing (living people) Living people University of Parma alumni Electronics engineers Women electrical engineers Fellows of the IEEE
Barbara De Salvo
[ "Engineering" ]
363
[ "Electronics engineers", "Electronic engineering" ]
73,772,856
https://en.wikipedia.org/wiki/Tontoia
Tontoia is a dubious genus of arthropod from the Cambrian Burgess Shale, known from a fossil proposed to be the external mould of an arthropod exoskeleton. In its original description by Charles D. Walcott, it was initially suggested that Tontoia might be a trilobite, but it is currently considered a nomen dubium, and it is unclear whether it even represents an arthropod. References Cambrian arthropods Burgess Shale fossils Taxa named by Charles Doolittle Walcott Fossil taxa described in 1912 Cambrian genus extinctions Nomina dubia
Tontoia
[ "Biology" ]
128
[ "Biological hypotheses", "Nomina dubia", "Controversial taxa" ]
73,775,288
https://en.wikipedia.org/wiki/Azerbaijan%27s%20space%20program
Azerbaijan's space program is the program of the Azerbaijani government to develop Azerbaijan's space capabilities. Azercosmos, the first satellite operator in the South Caucasus, was established by Decree No. 885 of the President of the Republic of Azerbaijan, Ilham Aliyev, dated May 3, 2010, with the aim of ensuring the development, launch, management and operation of telecommunication satellites. On February 8, 2013, Azerbaijan launched its first artificial satellite, Azerspace-1, into space. Background Azerbaijan has had a scientific interest in space since the early twentieth century. In 1927, a decree on conducting astronomical expeditions in the Azerbaijan SSR was issued for the purpose of selecting a site for the southern regional observatory of the Leningrad Institute of Astronomy. To this end, in July and August 1930, the institute's employees A.V. Markov and V.B. Nikonov, together with I.A. Benashvili, an employee of the Azerbaijan scientific society, studied the astroclimate of the Khankendi, Shusha and Lachin regions, but the scarcity of cloudless nights ruled these sites out. It was then decided to continue observations in other mountainous regions of Azerbaijan in order to select an astronomical observation point, but the lack of local staff prevented long-term observation work from being organized. Habibulla Mammadbeyli, who graduated in 1938 from the Faculty of Physics and Mathematics of Leningrad State University, taught astronomy in his native language at Baku State University, prepared textbooks in the field and widely promoted astronomy; the expeditions conducted under his leadership in 1946-1949 in different regions of Azerbaijan (Kalbajar, Khizi, Dashkasan, Shamakhi and other areas) helped to choose the site of the republic's future observatory and trained experts in the field of astronomy. To determine the location of the future observatory, the last uninterrupted expeditions were organized under the leadership of Hajibey Sultanov at the Institute of Physics and Mathematics of the Academy of Sciences (AS) of the Azerbaijan SSR in March 1953, and the final decision on a suitable location in Shamakhi was made in June of the same year. In 1959, the Shamakhi Astrophysics Observatory was established by decision of the Council of Ministers of the Azerbaijan SSR on the basis of the Astrophysics Sector and its Pirgulu Astronomy Station. Since 1960, this observatory has been part of the Academy of Sciences of the Azerbaijan SSR as an independent research institute. Karim Karimov, one of the founders of the Soviet space program, played a major role in the creation of space vehicles and of the program that sent the first man into space. After the successful completion of the historic flight of the first cosmonaut, Yuri Gagarin, Karimov was awarded the highest order of the USSR, the Order of Lenin. Karimov, who had started working as a senior engineer in 1946, was also given the rank of major-general. In 1965, he was appointed head of the Department of Space Vehicles, and from 1966 he chaired the USSR State Commission for Test Flights of Piloted Ships. Since 1974, the Azerbaijan National Aerospace Agency has operated as the "Caspian" Scientific Center within the Azerbaijan National Academy of Sciences, and in 1981 the Space Research Scientific-Production Union (KTEIB) was established on its base. By Decree No.
580 of the President of the Republic of Azerbaijan dated February 21, 1992, AMAKA was established on the basis of KTEIB. By Decree No. 463 of the President of the Republic of Azerbaijan dated September 27, 2006, AMAKA (later MAKA) was subordinated to the Ministry of Defense Industry of the Republic of Azerbaijan. In 1977, Azerbaijani mugham was sent into space aboard NASA's Voyager 1 probe: the Voyager Golden Record, compiled under the direction of the American astronomer Carl Sagan, contained a composition called "Mugham" (sometimes called the "Chahargah melody") played by Kamil Jalilov on the balaban. The Azerbaijani astronomer and astrophysicist Nadir Ibrahimov made great achievements in the study of the planets (Mars, Venus and Jupiter's satellite Io) through his observations with the 2-meter telescope of the Shamakhi Astrophysical Observatory. He took multiple large-scale images of Mars during its great opposition (when the distance between Earth and Mars is shortest) and mapped the planet. A crater on the surface of Mars was named in his honour at the General Assembly of the International Astronomical Union in Patras, Greece, in August 1982. The crater, which has a diameter of 87 km, is located east of the Thaumasia Plateau. Fuzuli Farajov dedicated part of his life to the design and production of the flying machines of the 21st century. He was one of the participants in the assembly of the fuselage of the Buran spacecraft and did much work on its creation. General Karim Karimov, academician Tofig Ismayilov, the engineers Vafadar Babayev, Izzateli Agayev and Ferdowsi Karimov, the head of a department of the "Molniya" design office Nazim Guliyev, and Shakir Asgarov, who looked after security issues at the Baikonur cosmodrome, also took part in the creation of the spacecraft. The first Azerbaijani-born cosmonaut to enter space was Musa Manarov. Manarov made his first flight on December 21, 1987, as a flight engineer aboard the Soyuz TM-4 spacecraft to the Mir orbital station, under the command of cosmonaut Vladimir Titov, and returned to Earth exactly one year (365 days and 23 hours) later, at that point the longest time spent in space; he and Titov were the only two people to have flown so long. For this heroism, he was given the title Hero of the Soviet Union. Manarov was in space for a second time from December 2, 1990, to May 26, 1991, orbiting the Earth for 175 days and 2 hours as a flight engineer on the Soyuz TM-11 spacecraft of the Mir orbital complex. During his two flights, Manarov performed seven spacewalks, spending a total of 34 hours and 23 minutes in open space. In 2014, Allen Mirgadirov and five others were selected by a NASA commission for an experiment simulating a flight to Mars. During the experiment, Mirgadirov was responsible for the technical and operational condition of the equipment, including the computers, and for the flight route. In addition to these general duties, he had his own experiment: he was involved in the development of a computer program that optimized the flight trajectory. History Satellite programs of Azerbaijan In 2008, President Ilham Aliyev's decrees approving the State Program on the creation and development of the space industry in Azerbaijan and the launch of telecommunication satellites into orbit showed the state's strong interest in developing this field.
After that, in 2010, the president signed the decree establishing the "Azercosmos" Open Joint-Stock Company, and that institution was given the specific task of producing and launching a telecommunication satellite and of creating the main and backup Earth satellite control centers. Azercosmos, the first satellite operator in Azerbaijan and the South Caucasus, worked during the first two years of its existence to provide television and radio broadcasting and telecommunication services in the country, as well as highly reliable communication platforms meeting the requirements of government and corporate clients. On February 8, 2013, Azerbaijan launched its first artificial satellite, Azerspace-1. The launch from the Kourou Cosmodrome in French Guiana took place between 01:36 and 02:20 at night. The satellite is named Azerspace-1/Africasat-1a because its coverage includes Central Asia, Europe and Africa. 230 million Azerbaijani manats were spent to create the satellite; 15 percent of this amount was paid from the state budget, the rest with loans. Loans of 98 million US dollars were taken from France's COFACE export credit agency and of 116.6 million US dollars from the bank BNP Paribas with the guarantee of the US Export-Import Bank (US Ex-Im). Azerbaijan launched the Azersky observation satellite into orbit on June 30, 2014. The satellite, worth 157 million euros, mainly serves the defence and security of Azerbaijan. Azerbaijan's third satellite, Azerspace-2, was launched from the Kourou Cosmodrome in French Guiana, going into orbit at 2:38 on the night of September 26, 2018. The cost of the satellite is 190 million US dollars. List of projects Azerbaijan's space program has implemented a number of projects: Azersky (observation satellite) Azerspace-1 (communication satellite) Azerspace-2 (communication satellite) See also Azercosmos Azerbaijan National Aerospace Agency References Space program of Azerbaijan Science and technology in Azerbaijan Azerbaijan
Azerbaijan's space program
[ "Engineering" ]
1,861
[ "Space programs", "Space programs by country" ]
73,775,366
https://en.wikipedia.org/wiki/DORIS%20%28particle%20accelerator%29
The Double-Ring Storage Facility (DORIS) was an electron–positron storage ring at the German national laboratory DESY. It was DESY's second circular accelerator and its first storage ring, with a circumference of nearly 300 m. After construction was completed in 1974, DORIS provided collision experiments with electrons and their antiparticles at energies of 3.5 GeV per beam. In 1978, the energy of the beams was raised to 5 GeV each. By providing evidence of excited charmonium states, DORIS made an important contribution to proving the existence of heavy quarks. In the same year, the first tests of X-ray lithography were performed at DESY. In 1987, the ARGUS detector at the DORIS storage ring was the first experiment to observe the conversion of a B meson into its antiparticle, the anti-B meson. The Hamburg Synchrotron Radiation Laboratory HASYLAB was commissioned in 1980 to use synchrotron radiation, which was generated at DORIS as a byproduct, for research. While DORIS was initially used as a synchrotron radiation source for only roughly a third of its running time, the provision of synchrotron radiation became its sole purpose from 1993 onwards under the name DORIS III. In order to achieve more intense and controllable radiation, DORIS was upgraded in 1984 with wigglers and undulators. By means of a special array of permanent magnets, the accelerated positrons could now be brought onto a slalom course, increasing the intensity of the emitted synchrotron radiation by a factor of 100 in comparison to conventional storage ring systems. Among the many studies carried out with the synchrotron radiation generated by DORIS, from 1986 to 2004, the Israeli biochemist Ada Yonath (Nobel Prize in Chemistry 2009) conducted experiments that led to her deciphering of the structure of the ribosome. DORIS III served 36 photon beamlines, where 45 instruments were operated in rotation. The overall beam time per year amounted to 8 to 10 months. It was finally shut down at the end of 2012. OLYMPUS The former site of the ARGUS detector at DORIS became the location of the OLYMPUS experiment in 2010. OLYMPUS used the toroidal magnet and pair of drift chambers from the MIT-Bates BLAST experiment along with refurbished time-of-flight detectors and multiple luminosity monitoring systems. OLYMPUS measured the positron–proton to electron–proton cross section ratio to precisely determine the size of two-photon exchange in elastic electron–proton scattering. Two-photon exchange may resolve the proton form factor discrepancy between recent measurements made using polarization techniques and ones using the Rosenbluth separation method. OLYMPUS took data in 2012 and 2013, and first results were published in 2017. References External links DESY website Particle physics facilities Synchrotron radiation facilities
DORIS (particle accelerator)
[ "Materials_science" ]
586
[ "Materials testing", "Synchrotron radiation facilities" ]
73,775,543
https://en.wikipedia.org/wiki/Bromine%20monoxide%20radical
Bromine monoxide is a binary inorganic compound of bromine and oxygen with the chemical formula BrO. A free radical, this compound is the simplest of the many bromine oxides. The compound is capable of influencing atmospheric chemical processes. Naturally, BrO can be found in volcanic plumes. BrO is similar to the oxygen monofluoride, chlorine monoxide and iodine monoxide radicals. Chemical properties The compound is very effective as a catalyst of ozone destruction. The reaction of BrO with chlorine monoxide (ClO), one channel of which yields chlorine dioxide (OClO), contributes to ozone depletion in the stratosphere. References Bromine compounds Diatomic molecules Oxides Free radicals
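As a brief illustration, one standard textbook form of the coupled BrO-ClO catalytic cycle (one of several known reaction channels) can be written as:

Br + O3 → BrO + O2
Cl + O3 → ClO + O2
BrO + ClO → Br + Cl + O2
net: 2 O3 → 3 O2

Because Br and Cl are regenerated rather than consumed, a single halogen atom can destroy many ozone molecules before it is removed from the cycle, which is what makes BrO such an effective catalyst.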
Bromine monoxide radical
[ "Physics", "Chemistry", "Biology" ]
136
[ "Molecules", "Free radicals", "Oxides", "Salts", "Senescence", "Biomolecules", "Diatomic molecules", "Matter" ]
73,775,707
https://en.wikipedia.org/wiki/Semicircular%20bund
A semi-circular bund (also known as a demi-lune or half-moon) is a rainwater harvesting technique that consists of digging semicircular holes in the ground with the opening perpendicular to the flow of water. Background These holes are oriented against the slope of the ground, and the soil from each hole forms a small dike along the curved edge, so they capture the rainwater running downhill. These structures allow water to seep into the soil, retaining a greater amount of moisture in the subsoil, and they also prevent the loss of fertile soil. Semi-circular bunds are used to reforest arid zones with irregular rain patterns, allowing the growth of plants and trees, such as in the Sahel. See also Zaï Infiltration basin Contour plowing Great Green Wall Rainwater harvesting in the Sahel References Appropriate technology Soil science Hydrology and urban planning Rainwater harvesting Water conservation Flood control projects Irrigation Desert greening Forestry initiatives Environmental engineering
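As a rough illustration of the water-harvesting arithmetic, here is a minimal sketch in Python; the radius, catchment area and runoff coefficient are hypothetical values, and the function name is an illustrative assumption rather than an established formula from the literature.

import math

def harvested_volume_liters(radius_m, catchment_area_m2, rainfall_mm, runoff_coeff=0.3):
    # Estimate water captured by one semicircular bund for a single rain event.
    # radius_m: radius of the semicircular bund
    # catchment_area_m2: upslope area draining toward the bund (hypothetical)
    # rainfall_mm: rainfall depth for the event
    # runoff_coeff: assumed fraction of upslope rain that arrives as runoff
    direct = (math.pi * radius_m**2 / 2) * rainfall_mm       # rain on the half-disc itself (L)
    runoff = catchment_area_m2 * rainfall_mm * runoff_coeff  # runoff intercepted by the dike (L)
    return direct + runoff  # 1 mm of rain over 1 m^2 equals 1 litre

# Example: a 2 m radius demi-lune with a 20 m^2 catchment in a 15 mm shower
print(round(harvested_volume_liters(2.0, 20.0, 15.0)))  # ~184 litres

The dike retains both the rain falling on the half-disc and the runoff intercepted from the upslope catchment, which is why even a modest shower can concentrate enough moisture for a seedling.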
Semicircular bund
[ "Chemistry", "Engineering", "Environmental_science" ]
201
[ "Hydrology", "Chemical engineering", "Hydrology and urban planning", "Civil engineering", "Environmental engineering" ]
73,776,279
https://en.wikipedia.org/wiki/School-choice%20mechanism
A school-choice mechanism is an algorithm that aims to match pupils to schools in a way that respects both the pupils' preferences and the schools' priorities. It is used to automate the process of school choice. The most common school-choice mechanisms are variants of the deferred-acceptance algorithm and random serial dictatorship. Relation to other matching mechanisms School choice is a kind of two-sided matching market, like the stable marriage problem or residency matching. The main difference is that, in school choice, one side of the market (namely, the schools) is not strategic. Schools' priorities do not represent subjective preferences but are determined by legal requirements, for example: a priority for relatives of previous students, minority quotas, minimum income quotas, etc. Strategic considerations A major concern in designing a school-choice mechanism is that it should be strategyproof for the pupils (as they are considered to be strategic), so that they reveal their true preferences for schools. Therefore, the mechanism most commonly used in practice is the deferred-acceptance algorithm with pupils as the proposers. However, this mechanism may yield outcomes that are not Pareto-efficient for the pupils. This loss of efficiency might be substantial: a recent survey showed that around 2% of the pupils could be assigned a school they prefer more without harming any other student. Moreover, in some cases, DA might assign each pupil to their second-worst or worst school. Efficiency-adjusted deferred acceptance Onur Kesten suggested amending DA by removing "interrupters", that is, (student, school) pairs in which the student proposes to the school, causes the school to reject another student, and is later rejected themselves. This "Efficiency-Adjusted Deferred Acceptance" algorithm (EADA) is Pareto-efficient. Although it is neither stable nor strategyproof for the pupils, it satisfies weaker versions of these two properties; for example, it is regret-free truth-telling. Interestingly, in lab experiments, more pupils report their true preferences under EADA than under DA (70% vs 35%). EADA is about to be used in Flanders. References Mechanism design Education economics
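To make the student-proposing deferred-acceptance mechanism concrete, the following is a minimal sketch in Python; the data structures and names (student_prefs, school_priorities, capacities) are illustrative assumptions, not taken from any deployed system.

def deferred_acceptance(student_prefs, school_priorities, capacities):
    # Match pupils to schools; pupils propose down their preference lists,
    # schools tentatively hold the highest-priority applicants up to capacity.
    rank = {s: {p: i for i, p in enumerate(order)}
            for s, order in school_priorities.items()}
    next_choice = {p: 0 for p in student_prefs}     # next school each pupil will try
    tentative = {s: [] for s in school_priorities}  # pupils tentatively held per school
    unmatched = list(student_prefs)

    while unmatched:
        pupil = unmatched.pop()
        prefs = student_prefs[pupil]
        if next_choice[pupil] >= len(prefs):
            continue  # pupil has exhausted their list; stays unassigned
        school = prefs[next_choice[pupil]]
        next_choice[pupil] += 1
        held = tentative[school]
        held.append(pupil)
        held.sort(key=lambda p: rank[school][p])  # best-priority pupils first
        while len(held) > capacities[school]:
            unmatched.append(held.pop())  # lowest-priority pupil is rejected

    return {p: s for s, pupils in tentative.items() for p in pupils}

# Example: two schools with one seat each; both pupils prefer "north".
students = {"ana": ["north", "south"], "ben": ["north", "south"]}
priorities = {"north": ["ben", "ana"], "south": ["ana", "ben"]}
print(deferred_acceptance(students, priorities, {"north": 1, "south": 1}))
# -> {'north': 'ben', 'south': 'ana'}

In the example, both pupils propose to north, which holds its higher-priority applicant (ben) and rejects ana, who is then accepted at south; no pupil can gain by misreporting their list, which is the strategyproofness property discussed above.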
School-choice mechanism
[ "Mathematics" ]
449
[ "Game theory", "Mechanism design" ]
73,776,850
https://en.wikipedia.org/wiki/REDD%20and%20REDD%2B
REDD+ (or REDD-plus) is a framework to encourage developing countries to reduce emissions and enhance removals of greenhouse gases through a variety of forest management options, and to provide technical and financial support for these efforts. The acronym refers to "reducing emissions from deforestation and forest degradation in developing countries, and the role of conservation, sustainable management of forests, and enhancement of forest carbon stocks in developing countries". REDD+ is a voluntary climate change mitigation framework developed by the United Nations Framework Convention on Climate Change (UNFCCC). REDD originally referred to "reducing emissions from deforestation in developing countries", which was the title of the original document on REDD. It was superseded by REDD+ in the Warsaw Framework on REDD-plus negotiations. Since 2000, various studies estimate that land use change, including deforestation and forest degradation, accounts for 12–29% of global greenhouse gas emissions. For this reason, the inclusion of reducing emissions from land use change is considered essential to achieve the objectives of the UNFCCC. Main elements As with other approaches under the UNFCCC, there are few prescriptions that specifically mandate how to implement the mechanism at the national level; the principles of national sovereignty and subsidiarity imply that the UNFCCC can only provide guidelines for implementation, and require that reports are submitted in a certain format and open for review by the convention. There are certain aspects that go beyond this basic philosophy – such as the 'safeguards', explained in more detail below – but in essence, REDD+ is no more than a set of guidelines on how to report on forest resources and forest management strategies and their results in terms of reducing emissions and enhancing removals of greenhouse gases. However, a set of requirements has been elaborated to ensure that REDD+ programs contain key elements, that reports from Parties are consistent and comparable, and that their content is open to review and serves the objectives of the convention. Decision 1/CP.16 requests all developing countries aiming to undertake REDD+ to develop the following elements: (a) A national strategy or action plan; (b) A national forest reference emission level and/or forest reference level or, if appropriate, as an interim measure, subnational forest reference emission levels and/or forest reference levels; (c) A robust and transparent national forest monitoring system for the monitoring and reporting on REDD+ activities (see below), with, if appropriate, subnational monitoring and reporting as an interim measure; (d) A system for providing information on how the social and environmental safeguards (included in an appendix to the decision) are being addressed and respected throughout the implementation of REDD+. It further requests developing countries to address the drivers of deforestation and forest degradation, land tenure issues, forest governance issues, gender considerations and social and environmental safeguards, ensuring the full and effective participation of stakeholders, inter alia Indigenous peoples and local communities. Eligible activities The decisions on REDD+ enumerate five "eligible activities" that developing countries may implement to reduce emissions and enhance removals of greenhouse gases: (a) Reducing emissions from deforestation. (b) Reducing emissions from forest degradation. (c) Conservation of forest carbon stocks.
(d) Sustainable management of forests. (e) Enhancement of forest carbon stocks. The first two activities reduce emissions of greenhouse gases and they are the two activities listed in the original submission on REDD in 2005 by the Coalition for Rainforest Nations. The three remaining activities constitute the "+" in REDD+. The last one enhances the removal of greenhouse gases, while the effect of the other two on emissions or removals is indeterminate but expected to be minimal. Policies and measures In the text of the convention, repeated reference is made to national "policies and measures", the set of legal, regulatory and administrative instruments that parties develop and implement to achieve the objective of the convention. These policies can be specific to climate change mitigation or adaptation, or of a more generic nature but with an impact on greenhouse gas emissions. Many of the signatory parties to the UNFCCC have by now established climate change strategies and response measures. The REDD+ approach has a similar, more focused set of policies and measures. Forest sector laws and procedures are typically in place in most countries. In addition, countries have to develop specific national strategies and/or action plans for REDD+. Of specific interest to REDD+ are the drivers of deforestation and forest degradation. The UNFCCC decisions call on countries to make an assessment of these drivers and to base the policies and measures on this assessment, such that the policies and measures can be directed to where the impact is greatest. Some of the drivers will be generic – in the sense that they are prevalent in many countries, such as increasing population pressure – while others will be very specific to countries or regions within countries. Countries are encouraged to identify "national circumstances" that impact the drivers: specific conditions within the country that impact the forest resources. Hints for typical national circumstances can be found in preambles to various COP decisions, such as "Reaffirming that economic and social development and poverty eradication are global priorities" in the Bali Action Plan, enabling developing countries to prioritize policies like poverty eradication through agricultural expansion or hydropower development over forest protection. Reference levels Reference levels are a key component for any national REDD+ program. They serve as a baseline for measuring the success of REDD+ programs in reducing greenhouse gas emissions from forests. They are available for examination by the international community to assess the reported emission reductions or enhanced removals. This establishes the confidence of the international community in the national REDD+ program. The results measured against these baselines may be eligible for results-based payments. Setting the reference levels too laxly will erode confidence in the national REDD+ program, while setting them too strictly will erode the potential to earn the benefits with which to operate it. Careful consideration of all relevant information is therefore of crucial importance. The requirements and characteristics of reference levels are under the purview of the UNFCCC. Given the wide variety of ecological conditions and country-specific circumstances, these requirements are rather general and every country will have a range of options in its definition of reference levels within its territory.
A reference level (RL) is expressed as an amount, derived by differencing a sequence of amounts over a period of time. For REDD+ purposes the amount is expressed in CO2-equivalents (CO2e) (see article on global warming potential) of emissions or removals per year. If the amounts are emissions, the reference level becomes a reference emission level (REL); however, these RELs are seen by some as incomplete as they do not take into account removals. Reference levels are based on a scope (what is included?), a scale (the geographical area from which the level is derived or to which it is applied), and a period over which the reference level is calculated. The scope, the scale and the period can be modified in reference to national circumstances: specific conditions in the country that would call for an adjustment of the basis from which the reference levels are constructed. A reference level can be based on observations or measurements of amounts in the past, in which case it is retrospective, or it can be an expectation or projection of amounts into the future, in which case it is prospective. Reference levels eventually have to have national coverage, but they may be composed of a number of sub-national reference levels. As an example, forest degradation may have a reference emission level for commercial selective logging and one for extraction of minor timber and firewood for subsistence use by rural communities. Effectively, every identified driver of deforestation or forest degradation has to be represented in one or more reference emission level(s). Similarly for reference levels for enhancement of carbon stocks, there may be a reference level for plantation timber species and one for natural regeneration, possibly stratified by ecological region or forest type. Details on the reporting and technical assessment of reference levels are given in Decision 13/CP.19. Monitoring: measurement, reporting and verification In Decision 2/CP.15 of the UNFCCC, countries are requested to develop national forest monitoring systems (NFMS) that support the functions of measurement, reporting and verification (MRV) of actions and achievements of the implementation of REDD+ activities. NFMS is the key component in the management of information for national REDD+ programs. A fully functional monitoring system can go beyond the requirements posted by the UNFCCC to include issues such as a registry of projects and participants, and evaluation of program achievements and policy effectiveness. It may be purpose-built, but it may also be integrated into existing forest monitoring tools. Measurements are suggested to be made using a combination of remote sensing and ground-based observations. Remote sensing is particularly suited to the assessment of areas of forest and stratification of different forest types. Ground-based observations involve forest surveys to measure the carbon pools used by the Intergovernmental Panel on Climate Change (IPCC), the United Nations body for assessing the science related to climate change, as well as other parameters of interest such as those related to safeguards and eligible activity implementation. The reporting has to follow the IPCC guidance, in particular the "Good Practice Guidance for Land use, land-use change, and forestry". This provides reporting templates to be included in National Communications of Parties to the UNFCCC.
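As a simple illustration of how measured emissions and results relate to a reference level, here is a minimal sketch following the generic "activity data times emission factor" logic of the IPCC guidance; all numbers and names are hypothetical, not drawn from any country's submission.

# Minimal sketch of the "activity data x emission factor" calculation and of
# measuring REDD+ results against a reference level. All values below are
# hypothetical illustrations, not any country's data.

EMISSION_FACTOR_TCO2E_PER_HA = 450  # assumed emissions per hectare deforested

def annual_emissions(deforested_ha):
    # gross emissions (tCO2e) = activity data (ha) x emission factor (tCO2e/ha)
    return deforested_ha * EMISSION_FACTOR_TCO2E_PER_HA

def results_against_reference(reference_level_tco2e, measured_emissions):
    # REDD+ results: reference level minus measured emissions, summed per year
    return sum(reference_level_tco2e - e for e in measured_emissions)

# Hypothetical example: a reference level of 90,000 tCO2e per year, with
# 150 ha and 170 ha deforested in two consecutive years
measured = [annual_emissions(150), annual_emissions(170)]  # 67,500 and 76,500 tCO2e
print(results_against_reference(90_000, measured))         # 36,000 tCO2e of reductions

Emission reductions computed in this way against the reference level are the quantity that the technical assessment examines and that results-based payments may reward.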
Also included in the guidance are standard measurement protocols and analysis procedures that greatly impact the measurement systems that countries need to establish. The actual reporting of REDD+ results goes through the Biennial Update Reports (BURs), instead of the National Communications of Parties. The technical assessment of these results is an independent, external process that is managed by the Secretariat to the UNFCCC; countries need to facilitate the requirements of this assessment. The technical assessment is included within the broader process of International Consultation and Analysis (ICA), which is effectively a peer review by a team composed of an expert from an Annex I Party and an expert from a non-Annex I Party which "will be conducted in a manner that is nonintrusive, non-punitive and respectful of national sovereignty". This "technical team of experts shall analyse the extent to which: (a) There is consistency in methodologies, definitions, comprehensiveness and the information provided between the assessed reference level and the results of the implementation of the [REDD+] activities (...); (b) The data and information provided in the technical annex is transparent, consistent, complete and accurate; (c) The data and information provided in the technical annex is consistent with the [UNFCCC] guidelines (...); (d) The results are accurate, to the extent possible." However, unlike a true verification, the technical assessment cannot "approve" or "reject" the reference level, or the reported results measured against this reference level. It does provide clarity on potential areas for improvement. Financing entities that seek to provide results-based payments (payments per tonne of mitigation achieved) typically seek a true verification of results by external experts, to provide assurance that the results for which they are paying are credible. Safeguards In response to concerns over the potential for negative consequences resulting from the implementation of REDD+, the UNFCCC established a list of safeguards that countries need to "address and respect" and "promote and support" in order to guarantee the correct and lasting generation of results from the REDD+ mechanism. These safeguards are: "(a) That actions complement or are consistent with the objectives of national forest programmes and relevant international conventions and agreements; (b) Transparent and effective national forest governance structures, taking into account national legislation and sovereignty; (c) Respect for the knowledge and rights of indigenous peoples and members of local communities, by taking into account relevant international obligations, national circumstances and laws, and noting that the United Nations General Assembly has adopted the United Nations Declaration on the Rights of Indigenous Peoples; (d) The full and effective participation of relevant stakeholders, in particular indigenous peoples and local communities; (e) That actions are consistent with the conservation of natural forests and biological diversity, ensuring that the actions are not used for the conversion of natural forests, but are instead used to incentivize the protection and conservation of natural forests and their ecosystem services, and to enhance other social and environmental benefits; (f) Actions to address the risks of reversals; (g) Actions to reduce displacement of emissions". Countries have to regularly provide a summary of information on how these safeguards are addressed and respected.
This could come in the form, for instance, of explaining the legal and regulatory environment with regard to the recognition, inclusion and engagement of Indigenous Peoples, and information on how these requirements have been implemented. Decision 12/CP.19 established that the "summary of information" on the safeguards will be provided in the National Communications to the UNFCCC, which for developing country Parties will be once every four years. Additionally, and on a voluntary basis, the summary of information may be posted on the UNFCCC REDD+ web platform. Additional issues All pertinent issues that comprise REDD+ are exclusively those that are included in the decisions of the COP, as indicated in the above sections. There is, however, a large variety of concepts and approaches that are labelled (as being part of) REDD+ by their proponents, either being a substitute for UNFCCC decisions or complementary to those decisions. Below follows a – no doubt incomplete – list of such concepts and approaches. Project-based REDD+, voluntary market REDD+. As the concept of REDD+ was being defined, many organizations began promoting REDD+ projects at the scale of a forest area (e.g. large concession, National Park), analogous to AR-CDM projects under the Kyoto Protocol, with reduction of emissions or enhancement of removals vetted by an external organization using a standard established by some party (e.g. CCBA, VCS) and with carbon credits traded on the international voluntary carbon market. However, under the UNFCCC, REDD+ is defined as national (Decisions 4/CP.15 and 1/CP.16 consistently refer to national strategies and action plans and national monitoring, with sub-national coverage allowed as an interim measure only). Benefit distribution. The UNFCCC decisions on REDD+ are silent on the issue of rewarding countries and participants for their verified net emission reductions or enhanced removals of greenhouse gases. It is not very likely that specific requirements for sub-national implementation of the distribution of benefits will be adopted, as this will be perceived to be an issue of national sovereignty. Generic guidance may be provided, using language similar to that of the safeguards, such as "result-based finance has to accrue to local stakeholders", without being specific on the percentage retained for management, identification of stakeholders, type of benefit or means of distribution. Countries may decide to channel any benefits through an existing program on rural development, for instance, provide additional services (e.g. extension, better market access, training, seedlings) or pay local stakeholders directly. Many financial entities do have specific requirements on the design of a system for using the funds received and for reporting on the use of these funds. FPIC. Free, prior and informed consent is included in the U.N. Declaration on the Rights of Indigenous Peoples. The REDD+ decisions under the UNFCCC do not have this as an explicit requirement; however, the safeguard on respect for the knowledge and rights of Indigenous peoples and members of local communities notes "that the United Nations General Assembly has adopted the United Nations Declaration on the Rights of Indigenous Peoples" (UNDRIP).
Article 19 of UNDRIP requires that "States shall consult and cooperate in good faith with the Indigenous peoples concerned through their own representative institutions in order to obtain their free, prior and informed consent before adopting and implementing legislative or administrative measures that may affect them". This article is interpreted by many organizations engaged in REDD+, for example in the UN-REDD "Guidelines on Free, Prior and Informed Consent," to mean that every community, or at least many communities, need to provide their consent before any REDD+ activities can take place. Leakage refers to detrimental effects outside of the project area attributable to project activities. Leakage is less of an issue when REDD+ is implemented at a national or subnational level, as there can be no domestic leakage once full national coverage is achieved. However, there can still be international leakage if activities are displaced across international borders, or "displacement of emissions" between sectors, such as replacing wood fires with kerosene stoves (AFOLU to energy) or replacing construction with wood by construction with concrete, cement and bricks (AFOLU to industry). Many initiatives require that leakage be taken into account in program design, so that potential leakage of emissions, including across borders, can be minimized. As a climate change mitigation measure Deforestation and forest degradation account for 17–29% of global greenhouse gas emissions, the reduction of which is estimated to be one of the most cost-efficient climate change mitigation strategies. Regeneration of forest on degraded or deforested lands can remove CO₂ from the atmosphere through the build-up of biomass, making forest lands a sink of greenhouse gases. The REDD+ mechanism addresses both issues of emission reduction and enhanced removal of greenhouse gases. Reducing emissions Emissions of greenhouse gases from forest land can be reduced by slowing down the rates of deforestation and forest degradation, covered by REDD+ eligible activities. Another option would be some form of reduced impact logging in commercial logging, under the REDD+ eligible activity of sustainable management of forests. Enhancing removals Removals of greenhouse gases (specifically CO₂) from the atmosphere can be achieved through various forest management options, such as replanting degraded or deforested areas or enrichment planting, but also by letting forest land regenerate naturally. Care must be taken to differentiate between what is a purely ecological process of regrowth and what is induced or enhanced through some management intervention. REDD+ and the carbon market In 2009, at COP 15 in Copenhagen, the Copenhagen Accord was reached, noting in section 6 the recognition of the crucial role of REDD and REDD+ and the need to provide positive incentives for such actions by enabling the mobilization of financial resources from developed countries. The Accord goes on to note in section 8 that the collective commitment by developed countries for new and additional resources, including forestry and investments through international institutions, will approach US$30 billion for the period 2010–2012. The Green Climate Fund (GCF) was established at COP 17 to function as the financial mechanism for the UNFCCC, thereby including REDD+ finance. The Warsaw Framework on REDD-plus makes various references to the GCF, instructing developing country Parties to apply to the GCF for result-based finance.
The GCF currently finances REDD+ programs in phase 1 (design of national strategies or action plans, capacity building) and phase 2 (implementation of national strategies or action plans, demonstration programs). It is currently finalizing an approach to REDD+ results-based payments. REDD+ is also eligible for inclusion under CORSIA, the International Civil Aviation Organization (ICAO)'s market-based greenhouse gas offset mechanism. Implementing REDD+ Decision 1/CP.16, paragraph 73, suggests that national capacity for implementing REDD+ is built up in phases, "beginning with the development of national strategies or action plans, policies and measures, and capacity-building, followed by the implementation of national policies and measures and national strategies or action plans that could involve further capacity-building, technology development and transfer and results-based demonstration activities, and evolving into results-based actions that should be fully measured, reported and verified". The initial phase of the development of national strategies and action plans and capacity building is typically referred to as the "Readiness phase" (a term like Reddiness is also encountered). There is a very substantial number of REDD+ projects globally and this section lists only a selection. One of the more comprehensive online tools with up-to-date information on REDD+ projects is the Voluntary REDD+ Database. Readiness activities Most REDD+ activities or projects implemented since the call for demonstration activities in Decision 2/CP.13 of December 2007 are focused on readiness, which is not surprising given that REDD+ and its requirements were completely new to all developing countries. UN-REDD Programme UNDP, UNEP and FAO jointly established the UN-REDD Programme (see the UN-REDD Programme section below) in 2007, a partnership aimed at assisting developing countries in addressing certain measures needed in order to effectively participate in the REDD+ mechanism. These measures include capacity development, governance, engagement of Indigenous Peoples and technical needs. The initial set of supported countries were Bolivia, Democratic Republic of Congo, Indonesia, Panama, Papua New Guinea, Paraguay, Tanzania, Vietnam, and Zambia. By March 2014 the Programme counted 49 participants, 18 of which are receiving financial support to kick-start or complement a variety of national REDD+ readiness activities. The other 31 partner countries may receive targeted support and knowledge sharing, be invited to attend meetings and training workshops, have observer status at the Policy Board meetings, and "may be invited to submit a request to receive funding for a National Programme in the future, if selected through a set of criteria to prioritize funding for new countries approved by the Policy Board". The Programme operates in six work areas: MRV and Monitoring (led by FAO) National REDD+ Governance (UNDP) Engagement of Indigenous Peoples, Local Communities and Other Relevant Stakeholders (UNDP) Ensuring multiple benefits of forests and REDD+ (UNEP) Transparent, Equitable and Accountable Management of REDD+ Payments (UNDP) REDD+ as a Catalyst for Transformations to a Green Economy (UNEP) Forest Carbon Partnership Facility The World Bank has played an important role in the development of REDD+ activities since their inception. The Forest Carbon Partnership Facility (FCPF) was presented to the international community at COP 13 in Bali, December 2007.
Recipient countries can apply US$3.6 million towards: the development of national strategies; stakeholder consultation; capacity building; development of reference levels; development of a national forest monitoring system; and social and environmental safeguards analysis. Those countries that successfully achieve a state of readiness can apply to the related Carbon Fund for support towards national implementation of REDD+. Norwegian International Climate and Forest Initiative At the 2007 Bali Conference, the Norwegian government announced its International Climate and Forests Initiative (NICFI), which provided US$1 billion towards the Brazilian REDD scheme and US$500 million towards the creation and implementation of nationally based REDD+ activities in Tanzania. In addition, together with the United Kingdom, it contributed $200 million towards the Congo Basin Forest Fund to aid forest conservation activities in Central Africa. In 2010, Norway signed a Letter of Intent with Indonesia to provide the latter country with up to US$1 billion "assuming that Indonesia achieves good results". United States The United States has provided more than $1.5 billion in support for REDD+ and other sustainable landscape activities since 2010. It supports several multilateral partnerships including the FCPF, as well as flagship global programs such as SilvaCarbon, which provides support to REDD+ countries in measuring and monitoring forests and forest-related emissions. The United States also provides significant regional and bilateral support to numerous countries implementing REDD+. ITTO The International Tropical Timber Organization (ITTO) has launched a thematic program on REDD+ and environmental services with initial funding of US$3.5 million from Norway. In addition, the 45th session of the ITTO Council, held in November 2009, recommended that efforts relating to REDD+ should focus on promoting "sustainable forest management". Finland In 2009, the Government of Finland and the Food and Agriculture Organization of the United Nations signed a US$17 million partnership agreement to provide tools and methods for multi-purpose forest inventories, REDD+ monitoring and climate change adaptation in five pilot countries: Ecuador, Peru, Tanzania, Viet Nam and Zambia. As part of this programme, the Government of Tanzania will soon complete the country's first comprehensive forest inventory to assess its forest resources, including the size of the carbon stock stored within its forests. A forest soil carbon monitoring program to estimate soil carbon stock, using both survey and modelling-based methods, has also been undertaken. Australia Australia established an A$200 million International Forest Carbon Initiative, focused on developing REDD+ activities in its vicinity, i.e., in countries like Indonesia and Papua New Guinea. Interim REDD+ Partnership In 2010, national governments of developing and developed countries joined efforts to create the Interim REDD+ Partnership as a means to enhance implementation of early action and foster fast-start finance for REDD+ actions. Implementation phase Some countries are already implementing aspects of a national forest monitoring system and activities aimed at reducing emissions and enhancing removals that go beyond REDD+ readiness. For example, the Forest Carbon Partnership Facility has 19 countries in the pipeline of the Carbon Fund, which will provide payments to these countries based on verified REDD+ emissions reductions achieved under national or subnational programs.
Results-based actions Following the Warsaw Framework on REDD-plus, Brazil became the first country to submit a Biennial Update Report with a Technical Annex containing the details on emission reductions from REDD+ eligible activities, on 31 December 2014. The Technical Annex covers the Amazon biome within Brazil's territory, a little under half of the national territory, reporting emission reductions against Brazil's previously submitted reference emission level of 2,971.02 MtCO2e from a reduction in deforestation. This Technical Annex was reviewed through the International Consultation and Analysis process and on 22 September 2015 a technical report was issued by the UNFCCC which states that "the LULUCF experts consider that the data and information provided in the technical annex are transparent, consistent, complete and accurate" (paragraph 38). The report also identified areas for future improvement: (a) Continuation in updating and improving the carbon density map, including through the use of improved ground data from Brazil's first national forest inventory, possibly prioritizing geographic areas where deforestation is more likely to occur; (b) Expansion of the coverage of carbon pools, including improving the understanding of soil carbon dynamics after the conversion of forests to non-forests; (c) Consideration of the treatment of non-CO2 gases to maintain consistency with the GHG inventory; (d) Continuation of the improvements related to monitoring of forest degradation; (e) Expansion of the forest monitoring system to cover additional biomes. Criticisms Since the first discussion on REDD+ in 2005, and particularly at COP 13 in 2007 and COP 15 in 2009, many concerns have been voiced on aspects of REDD+. Though it is widely understood that REDD+ will need to undergo full-scale implementation in all non-Annex I countries to meet the objectives of the Paris Agreement, many challenges need resolving before this can happen. One of the largest issues is how reduction of emissions and the removal of greenhouse gases will be monitored consistently on a large scale, across a number of countries, each with separate environmental agencies and laws. Other issues relate to the conflict between the REDD+ approach and existing national development strategies, the participation of forest communities and Indigenous peoples in the design and maintenance of REDD+, funding for the countries implementing REDD+, and the consistent monitoring of forest resources to detect permanence of the forest resources that have been reported by countries under the REDD+ mechanism. Natural forests vs. high-density plantations Safeguard (e): That actions are consistent with the conservation of natural forests and biological diversity, ensuring that the [REDD+] actions … are not used for the conversion of natural forests, but are instead used to incentivize the protection and conservation of natural forests and their ecosystem services, and to enhance other social and environmental benefits. Footnote to this safeguard: Taking into account the need for sustainable livelihoods of indigenous peoples and local communities and their interdependence on forests in most countries, reflected in the United Nations Declaration on the Rights of Indigenous Peoples, as well as the International Mother Earth Day. The UNFCCC does not define what constitutes a forest; it only requires that Parties communicate how they define a forest.
The UNFCCC does suggest using a definition in terms of minimal area, minimal crown coverage and minimal height at maturity of perennial vegetation. While there is a safeguard against the conversion of natural forest, developing country Parties are free to include plantations of commercial tree species (including exotics like Eucalyptus spp., Pinus spp., Acacia spp.), agricultural tree crops (e.g. rubber, mango, cocoa, citrus), or even non-tree species such as palms (oil palm, coconut, dates) and bamboo (a grass). Some opponents of REDD+ argue that this lack of a clear distinction is no accident. FAO forest definitions date from 1948 and define forest only by the number, height, and canopy cover of trees in an area. Similarly, there is a lack of a consistent definition for forest degradation. A national REDD+ strategy need not refer solely to the establishment of national parks or protected areas; by the careful design of rules and guidelines, REDD+ could include land use practices such as shifting cultivation by Indigenous communities and reduced-impact logging, provided sustainable rotation and harvesting cycles can be demonstrated. Some argue that this is opening the door to logging operations in primary forests, displacement of local populations for "conservation", and an increase in tree plantations. Achieving multiple benefits, for example the conservation of biodiversity and ecosystem services (such as drainage basins), and social benefits (for example income and improved forest governance), is currently not addressed beyond their inclusion in the safeguard. Land tenure, carbon rights and benefit distribution According to some critics, REDD+ is another extension of green capitalism, subjecting forests and their inhabitants to new ways of expropriation and enclosure at the hands of polluting companies and market speculators. So-called "carbon cowboys" – unscrupulous entrepreneurs who attempt to acquire rights to carbon in rainforest for small-scale projects – have signed indigenous communities up to unfair contracts, often with a view to on-selling the rights to investors for a quick profit. In 2012 an Australian businessman operating in Peru was revealed to have signed 200-year contracts with an Amazon tribe, the Yagua, many members of which are illiterate, giving him a 50 per cent share in their carbon resources. The contracts allow him to establish and control timber projects and palm oil plantations in Yagua rainforest. This risk is largely negated by the focus on national and subnational REDD+ programs, and by government ownership of these initiatives. There are risks that the local inhabitants and the communities that live in the forests will be bypassed and that they will not be consulted and so they will not actually receive any revenues. Fair distribution of REDD+ benefits will not be achieved without a prior reform in forest governance and more secure tenure systems in many countries. The UNFCCC has repeatedly called for full and effective participation of Indigenous Peoples and local communities without becoming any more specific. The ability of local communities to effectively contribute to REDD+ field activities and the measurement of forest properties for estimating reduced emissions and enhanced removals of greenhouse gases has been clearly demonstrated in various countries. In some project-based REDD+ schemes, disreputable companies have taken advantage of weak governance.
Indigenous peoples Safeguard (c): Respect for the knowledge and rights of indigenous peoples and members of local communities, by taking into account relevant international obligations, national circumstances and laws, and noting that the United Nations General Assembly has adopted the United Nations Declaration on the Rights of Indigenous Peoples; Safeguard (d): The full and effective participation of relevant stakeholders, in particular indigenous peoples and local communities, in the [REDD+] actions … [and when developing and implementing national strategies or action plans]; Indigenous peoples are important stakeholders in REDD+ as they typically live inside forest areas or have their livelihoods (partially) based on exploitation of forest resources. The International Indigenous Peoples Forum on Climate Change (IIPFCC) was explicit at the Bali climate negotiations in 2007: REDD/REDD+ will not benefit Indigenous Peoples, but in fact will result in more violations of Indigenous Peoples' rights. It will increase the violation of our human rights, our rights to our lands, territories and resources, steal our land, cause forced evictions, prevent access and threaten indigenous agricultural practices, destroy biodiversity and cultural diversity and cause social conflicts. Under REDD/REDD+, states and carbon traders will take more control over our forests. Some claim that putting a commercial value on forests neglects the spiritual value they hold for Indigenous Peoples and local communities. Indigenous Peoples protested in 2008 against the United Nations Permanent Forum on Indigenous Issues' final report on climate change and a paragraph that endorsed REDD+; this was captured in a video entitled "the 2nd May Revolt". However, these protests have largely disappeared in recent years. Indigenous people sit as permanent representatives on many multinational and national REDD+ bodies. Indigenous Peoples' groups in Panama broke off their collaboration with the national UN-REDD Programme in 2012 over allegations of a failure of the government to properly respect the rights of the Indigenous groups. Some grassroots organizations are working to develop REDD+ activities with communities and developing benefit-sharing mechanisms to ensure REDD+ funds reach rural communities as well as governments. Examples of these include Plan Vivo projects in Mexico, Mozambique and Cameroon; and Carbonfund.org Foundation's VCS and CCBS projects in the state of Acre, Brazil. In the carbon market When REDD+ was first discussed by the UNFCCC, no indication was given of the positive incentives that would support developing countries in their efforts to implement REDD+ to reduce emissions and enhance removals of greenhouse gases from forests. In the absence of guidance from the COP, two options were debated by the international community at large: a market-based approach, and a fund-based approach in which Annex I countries would deposit substantial amounts of money into a fund administered by some multilateral entity. Under the market-based approach, REDD+ would act as an "offset scheme" in which verified results-based actions translate into some form of carbon credits, more or less analogous to the market for Certified Emission Reductions (CER) under the Clean Development Mechanism (CDM) of the Kyoto Protocol. Such carbon credits could then offset emissions in the country or company of the buyer of the carbon credits.
This would require Annex I countries to agree to deeper cuts in emissions of greenhouse gases in order to create a market for the carbon credits from REDD+, which is unlikely to happen soon given the current state of negotiations in the COP. Even then, there is a fear that the market would be flooded with carbon credits, depressing the price to levels where REDD+ is no longer an economically viable option. Some developing countries, such as Brazil and China, maintain that developed countries must commit to real emissions reductions, independent of any offset mechanism. Since COP 17, however, it has become clear that REDD+ may be financed by a variety of sources, market and non-market. The newly established Green Climate Fund is already supporting phase 1 and 2 REDD+ programs, and is finalizing rules to allow disbursement of results-based finance to developing countries that submit verified reports of emission reductions and enhanced removals of greenhouse gases. Top-down design by large international institutions vs. bottom-up grassroots coalitions While the COP decisions emphasize national ownership and stakeholder consultation, there are concerns that some of the larger institutional organizations are driving the process, in particular outside of the one Party, one vote realm of multilateral negotiations under the UNFCCC. For example, the World Bank and the UN-REDD Programme, the two largest sources of funding and technical assistance for readiness activities and therefore unavoidable for most developing countries, place requirements upon recipient countries that are arguably not mandated or required by the COP decisions. A body of research suggests that, at least as of 2016, REDD+ as a global architecture has had only a limited effect on local political realities, as pre-existing entrenched power dynamics and incentives that promote deforestation are not easily changed by the relatively small sums of money that REDD+ has delivered to date. In addition, issues like land tenure that fundamentally determine who makes decisions about land use and deforestation have not been adequately addressed by REDD+, and there is no clear consensus on how complex political issues like land tenure can be resolved to favor standing forests over cleared forests through a relatively top-down mechanism like REDD+. While a single, harmonized, global system that accounts for and rewards emissions reductions from forests and land use has been elusive, diverse context-specific projects have emerged that support a variety of activities including community-based forest management, enforcement of protected areas, sustainable charcoal production, and agroforestry. Although it is not clear whether these diverse projects are genuinely different from older integrated conservation and development initiatives that pre-date REDD+, there is evidence that REDD+ has altered global policy conversations, possibly elevating issues like Indigenous peoples' land rights to higher levels, or conversely threatening to bypass safeguards for Indigenous rights. Debate surrounding these issues is ongoing. Although the World Bank declares its commitment to fight climate change, many civil society organisations and grassroots movements around the world view with scepticism the processes being developed under the various carbon funds.
Among the most worrying concerns are the weak (or nonexistent) consultation processes with local communities; the lack of criteria to determine when a country is ready to implement REDD+ projects (readiness); negative impacts such as deforestation and loss of biodiversity (due to hasty agreements and a lack of planning); the lack of safeguards to protect Indigenous Peoples' rights; and the lack of regional policies to stop deforestation. A growing coalition of civil society organizations, social movements, and other actors critical of REDD+ emerged between 2008 and 2011, criticizing the mechanism on climate justice grounds. During the UN climate negotiations in Copenhagen (2009) and Cancun (2010), coalitions of civil society groups and social movements formed a strong front seeking to push the World Bank out of climate finance. However, this concern has largely died down as the World Bank initiatives have been more fully developed, and some of these same actors are now participating in the implementation of REDD+. ITTO has been criticized for appearing to support, above all, the inclusion of forest extraction within REDD+ under the guise of "sustainable management", in order to benefit from carbon markets while maintaining business as usual. The UN-REDD Programme The United Nations Programme on Reducing Emissions from Deforestation and Forest Degradation (or UN-REDD Programme) is a multilateral body that partners with countries to help them establish the technical capacities to implement REDD+ (see Difference between REDD+ and the UN-REDD Programme below). The overall development goal of the Programme is "to reduce forest emissions and enhance carbon stocks in forests while contributing to national sustainable development". The UN-REDD Programme supports nationally led REDD+ processes and promotes the informed and meaningful involvement of all stakeholders, including indigenous peoples and other forest-dependent communities, in national and international REDD+ implementation. The programme is a collaboration between FAO, UNDP and UNEP, under which a trust fund established in July 2008 allows donors to pool resources and generate the requisite flow of resources to significantly reduce global emissions from deforestation and forest degradation. The Programme has expanded steadily since its establishment and now has over 60 official Partner Countries spanning Africa, Asia-Pacific and Latin America-Caribbean. In addition to the UN-REDD Programme, other initiatives assisting countries engaged in REDD+ include the World Bank's Forest Carbon Partnership Facility, Norway's International Climate and Forest Initiative, the Global Environment Facility, Australia's International Forest Carbon Initiative, the Collaborative Partnership on Forests, and the Green Climate Fund. The UN-REDD Programme publicly releases an Annual Programme Progress Report and a Semi-Annual Report each year. Support to Partner Countries The UN-REDD Programme supports its Partner Countries through: Direct funding and technical support for the design and implementation of National REDD+ Programmes; Complementary tailored funding and technical support to national REDD+ actions; and Technical support that enhances country capacity through the sharing of expertise, common approaches, analyses, methodologies, tools, data, best practices and facilitated South-South knowledge sharing.
Governance The UN-REDD Programme is a collaborative programme of the Food and Agriculture Organization of the United Nations (FAO), the United Nations Development Programme (UNDP) and the United Nations Environment Programme (UNEP), created in 2008 in response to the UNFCCC decisions on the Bali Action Plan and REDD at COP 13. The UN-REDD Programme's 2016-2020 governance arrangements allow for the full and effective participation of all UN-REDD Programme stakeholders – partner countries, donors, Indigenous peoples, civil society organizations, participating UN agencies – while ensuring streamlined decision-making processes and clear lines of accountability. The governance arrangements are built on and informed by five principles: inclusiveness, transparency, accountability, consensus-based decisions and participation. UN-REDD Programme 2016-2020 governance arrangements include: Executive Board The UN-REDD Programme Executive Board has general oversight of the Programme, taking decisions on the allocation of UN-REDD Programme fund resources. It meets twice a year, or more frequently as required to carry out its roles and responsibilities efficiently. Assembly The UN-REDD Programme Assembly is a broad multi-stakeholder forum whose role is to foster consultation, dialogue and knowledge exchange among UN-REDD Programme stakeholders. National Steering Committees National Steering Committees facilitate strong country ownership and shared decision-making for National REDD+ Programmes, and include representatives of civil society and indigenous peoples. Each National Steering Committee provides oversight for National Programmes, addressing any delays, changes or reorientation of a programme and ensuring alignment with and delivery of results as expected and approved by the Executive Board. Multi-Party Trust Fund Office The Multi-Party Trust Fund Office provides real-time funding administration for the UN-REDD Programme. 2016-2020 Strategic Framework The work of the UN-REDD Programme is guided by its 2016-2020 Strategic Framework, with the goal to: Reduce forest emissions and enhance carbon stocks in forests while contributing to national sustainable development. In order to realize its goal and target impacts, the Programme has set three outcomes and supporting outputs for its 2016-2020 work programme: Contributions of REDD+ to the mitigation of climate change, as well as to the provision of additional benefits, have been designed. Country contributions to the mitigation of climate change through REDD+ are measured, reported and verified, and necessary institutional arrangements are in place. REDD+ contributions to the mitigation of climate change are implemented and safeguarded with policies and measures that constitute results-based actions, including the development of appropriate and effective institutional arrangements. Additionally, the Programme has identified four important cross-cutting themes as being particularly significant in ensuring that the outcomes and outputs of the Programme achieve the desired results: Stakeholder Engagement, Forest Governance, Tenure Security and Gender Equality. Donors The UN-REDD Programme depends entirely on voluntary funds. Donors to the UN-REDD Programme have included the European Commission and the governments of Denmark, Japan, Luxembourg, Norway, Spain and Switzerland, with Norway providing a significant portion of the funds.
Transparency The UN-REDD Programme adheres to the belief that information is fundamental to the effective participation of all stakeholders, including the public, in the advancement of REDD+ efforts around the world. Information sharing promotes transparency and accountability and enables public participation in REDD+ activities. The collaborating UN agencies of the UN-REDD Programme – FAO, UNEP and UNDP – are committed to making information about the Programme and its operations available to the public in the interest of transparency. As part of this commitment, the Programme publishes annual and semi-annual programme progress reports and provides online public access to real-time funding administration. Difference between REDD+ and the UN-REDD Programme REDD+ is a voluntary climate change mitigation approach that has been developed by Parties to the UNFCCC. It aims to incentivize developing countries to reduce emissions from deforestation and forest degradation, conserve forest carbon stocks, sustainably manage forests and enhance forest carbon stocks. The United Nations Collaborative Programme on Reducing Emissions from Deforestation and Forest Degradation in Developing Countries – or UN-REDD Programme – is a multilateral body. It partners with developing countries to support them in establishing the technical capacities needed to implement REDD+ and meet UNFCCC requirements for REDD+ results-based payments. It does so through a country-based approach that provides advisory and technical support services tailored to national circumstances and needs. The UN-REDD Programme is a collaborative programme of the Food and Agriculture Organization of the United Nations (FAO), the United Nations Development Programme (UNDP) and the United Nations Environment Programme (UNEP), and harnesses the technical expertise of these UN agencies. Other examples of REDD+ multilaterals include the Forest Carbon Partnership Facility and the Forest Investment Program, hosted by the World Bank. History Terminology The approach detailed under the UNFCCC is commonly referred to as "reducing emissions from deforestation and forest degradation", abbreviated as REDD+. This title and the acronyms, however, are not used by the COP itself. The original submission by Papua New Guinea and Costa Rica, on behalf of the Coalition for Rainforest Nations, dated 28 July 2005, was entitled "Reducing Emissions from Deforestation in Developing Countries: Approaches to Stimulate Action". COP 11 entered the request to consider the document as agenda item 6: "Reducing emissions from deforestation in developing countries: approaches to stimulate action", again written here exactly as in the official text. The name for the agenda item was also used at COP 13 in Bali, December 2007. By COP 15 in Copenhagen, December 2009, the scope of the agenda item was broadened to "Methodological guidance for activities relating to reducing emissions from deforestation and forest degradation and the role of conservation, sustainable management of forests and enhancement of forest carbon stocks in developing countries", moving to "Policy approaches and positive incentives on issues relating to reducing emissions from deforestation and forest degradation in developing countries; and the role of conservation, sustainable management of forests and enhancement of forest carbon stocks in developing countries" by COP 16.
At COP 17 the title of the decision simply referred back to an earlier decision: "Guidance on systems for providing information on how safeguards are addressed and respected and modalities relating to forest reference emission levels and forest reference levels as referred to in decision 1/CP.16". At COP 19 the titles of decisions 9 and 12 refer back to decision 1/CP.16, paragraph 70 and appendix I respectively, while the other decisions only mention the topic under consideration. None of these decisions use an acronym for the title of the agenda item; the acronym was not coined by the COP of the UNFCCC. The set of decisions on REDD+ that were adopted at COP 19 in Warsaw, December 2013, was dubbed the Warsaw Framework on REDD-plus in a footnote to the title of each of the decisions, which established the acronyms: REDD originally referred to "reducing emissions from deforestation in developing countries", the title of the original document on REDD. It was superseded in the negotiations by REDD+. REDD+ (or REDD-plus) refers to "reducing emissions from deforestation and forest degradation in developing countries, and the role of conservation, sustainable management of forests, and enhancement of forest carbon stocks in developing countries" (emphasis added); the most recent, elaborated terminology used by the COP. Most of the key REDD+ decisions were completed by 2013, with the final pieces of the rulebook finished in 2015. REDD REDD was first discussed in 2005 by the UNFCCC at its 11th session of the Conference of the Parties to the convention (COP) at the request of Costa Rica and Papua New Guinea, on behalf of the Coalition for Rainforest Nations, when they submitted the document "Reducing Emissions from Deforestation in Developing Countries: Approaches to Stimulate Action", with a request to create an agenda item to discuss the consideration of reducing emissions from deforestation and forest degradation in natural forests as a mitigation measure. COP 11 entered the request to consider the document as agenda item 6: Reducing emissions from deforestation in developing countries: approaches to stimulate action. In December 2007, after a two-year debate on a proposal from Papua New Guinea and Costa Rica, state parties to the United Nations Framework Convention on Climate Change (UNFCCC) agreed to explore ways of reducing emissions from deforestation and to enhance forest carbon stocks in developing nations. The underlying idea is that developing nations should be financially compensated if they succeed in reducing their levels of deforestation (through valuing the carbon that is stored in forests); a concept termed 'avoided deforestation' (AD), or REDD if broadened to include reducing forest degradation. Under the free market model advocated by the countries who have formed the Coalition of Rainforest Nations, developing nations with rainforests would sell carbon sink credits under a free market system to Kyoto Protocol Annex I states who have exceeded their emissions allowance. Brazil (the state with the largest area of tropical rainforest), however, opposes including avoided deforestation in a carbon trading mechanism and instead favors the creation of a multilateral development assistance fund financed by donations from developed states. For REDD to be successful, science and regulatory infrastructure related to forests will need to improve so that nations can inventory all their forest carbon, show that they can control land use at the local level, and prove that their emissions are declining.
REDD+ Subsequent to the initial donor nation response, the UN established REDD Plus, or REDD+, expanding the original program's scope to include increasing forest cover through reforestation and the planting of new forests, as well as promoting sustainable forest resource management. Bali Action Plan REDD received substantial attention from the UNFCCC – and the attending community – at COP 13, December 2007, where the first substantial decision on REDD+ was adopted, Decision 2/CP.13: "Reducing emissions from deforestation in developing countries: approaches to stimulate action", calling for demonstration activities to be reported upon two years later and for an assessment of the drivers of deforestation. REDD+ was also referenced in decision 1/CP.13, the "Bali Action Plan", with reference to all five eligible activities for REDD+ (with sustainable management of forests, conservation of forest carbon stocks and enhancement of forest carbon stocks constituting the "+" in REDD+). The call for demonstration activities in decision 2/CP.13 led to a very large number of programs and projects, including the Forest Carbon Partnership Facility (FCPF) of the World Bank, the UN-REDD Programme, and a number of smaller projects financed by the Norwegian International Climate and Forest Initiative (NICFI), the United States, the United Kingdom, and Germany, among many others. All of these were based on substantive guidance from the UNFCCC. Definition of main elements In 2009 at COP 15, decision 4/CP.15: "Methodological guidance for activities relating to reducing emissions from deforestation and forest degradation and the role of conservation, sustainable management of forests and enhancement of forest carbon stocks in developing countries" provided more substantive information on requirements for REDD+. Specifically, the national forest monitoring system was introduced, with elements of measurement, reporting and verification (MRV). Countries were encouraged to develop national strategies, develop domestic capacity, establish reference levels, and establish a participatory approach with "full and effective engagement of Indigenous peoples and local communities in (…) monitoring and reporting". A year later, at COP 16, decision 1/CP.16 was adopted. In section C: "Policy approaches and positive incentives on issues relating to reducing emissions from deforestation and forest degradation in developing countries; and the role of conservation, sustainable management of forests and enhancement of forest carbon stocks in developing countries" environmental and social safeguards were introduced, with a reiteration of requirements for the national forest monitoring system. These safeguards were introduced to ensure that implementation of REDD+ at the national level would not lead to detrimental effects for the environment or the local population. Countries are required to provide summaries of information on how these safeguards are implemented throughout the three "phases" of REDD+. In 2011 decision 12/CP.17 was adopted at COP 17: "Guidance on systems for providing information on how safeguards are addressed and respected and modalities relating to forest reference emission levels and forest reference levels as referred to in decision 1/CP.16". Details are provided on the preparation and submission of reference levels and guidance on providing information on safeguards.
Warsaw Framework on REDD-plus In December 2013, COP 19 produced no fewer than seven decisions on REDD+, which are jointly known as the "Warsaw Framework on REDD-plus". These decisions address a work program on results-based finance; coordination of support for implementation; modalities for national forest monitoring systems; presenting information on safeguards; technical assessment of reference (emission) levels; modalities for measuring, reporting and verifying (MRV); and information on addressing the drivers of deforestation and forest degradation. Requirements for becoming eligible to access "results-based finance" have been specified: the submission of reports, whose contents have been specified, and technical assessment through International Consultation and Analysis (ICA), for which procedures have been specified. With these decisions the overall framework for REDD+ implementation was completed, although many details still needed to be provided. COP 20 in December 2014 did not produce any new decisions on REDD+. A reference was made to REDD+ in decision 8/CP.20 "Report of the Green Climate Fund to the Conference of the Parties and guidance to the Green Climate Fund", where in paragraph 18 the COP "requests the Board of the Green Climate Fund (...) (b) to consider decisions relevant to REDD-plus", referring back to earlier COP decisions on REDD+. The remaining outstanding decisions on REDD+ were completed at COP 21 in 2015. With the conclusion of decisions on reporting on the safeguards, non-market approaches, and non-carbon benefits, the UNFCCC rulebook on REDD+ was completed. All countries were also encouraged to implement and support REDD+ in Article 5 of the Paris Agreement. This was part of a broader Article that specified that all countries should take action to protect and enhance their greenhouse gas sinks and reservoirs (stores of sequestered carbon). See also Deforestation and climate change Deforestation by region Emissions trading Illegal logging CDM excluding Forest Conservation Natural Forest Standard Tree credits Tree planting United Nations Forum on Forests References Further reading External links Official UN-REDD Programme Website Official UN-REDD Programme Online Collaborative Workspace Official UNFCCC Website UN-REDD Programme Multi-Partner Trust Fund Factsheet UNFCCC REDD Web Platform REDD+ Partnership, including financing database Forest Carbon Partnership Facility, hosted by the World Bank REDD+ profile on database of Market Governance Mechanisms UN-REDD Programme Code REDD: A campaign to promote REDD+ projects and the corporations who have pledged support REDD-Monitor - Critical analysis and news about REDD Partners: Forest Carbon Partnership Facility (FCPF) Global Environment Facility (GEF) Forest Investment Program (FIP) Emissions reduction Carbon finance Reforestation Forest governance Deforestation World forestry Forest certification Forest conservation Environmental controversies Sustainable forest management Food and Agriculture Organization International forestry organizations United Nations Development Programme United Nations Environment Programme United Nations Framework Convention on Climate Change
REDD and REDD+
[ "Chemistry" ]
11,497
[ "Greenhouse gases", "Emissions reduction" ]
73,776,991
https://en.wikipedia.org/wiki/Tribromine%20octoxide
Tribromine octoxide is a binary inorganic compound of bromine and oxygen with the chemical formula Br3O8. It is a free radical and one of the most complex bromine oxides. Synthesis It can be prepared by the reaction of bromine with ozone at 273 K and low pressure. Physical properties The compound forms a white solid. It exists in two forms, both of which are soluble in water. It is unstable above -73 °C. References Bromine compounds Oxides Free radicals
Tribromine octoxide
[ "Chemistry", "Biology" ]
87
[ "Free radicals", "Oxides", "Salts", "Senescence", "Biomolecules" ]
73,777,642
https://en.wikipedia.org/wiki/Anchor%20channel
Anchor channels, invented by Anders Jordahl in 1913, are steel channels cast flush in reinforced concrete elements to allow the installation of channel bolts for the fastening of components. Anchor channels consist of steel C-shaped channels and anchors (mostly headed studs) which are connected to the channel by welding or riveting/forging. The channels are supplied with foam filler to prevent concrete from leaking into the channel when the concrete is poured. The system includes T-shaped bolts, which are called T-bolts or channel bolts in the regulations. They are also called hammer-head bolts, serrated bolts or hook-headed bolts based on their shape and function. The T-bolts are inserted into the anchor channel after the foam filler has been removed. The T-bolts can be moved along the channel length to allow for adjustment. The adjustment of the T-bolt location is required to compensate for construction tolerances or for a change in use for a particular application. The T-bolt is then tightened to the required torque to fasten a component. Types Hot-rolled and cold-formed anchor channels are available, depending on the manufacturing method of the open profiles. To provide corrosion resistance, anchor channels and T-bolts can be provided in hot-dip galvanized or stainless steel. Anchor channels and T-bolts can be either smooth or serrated. Serrated channels and T-bolts can resist loads in the longitudinal direction of the channel by means of a mechanical interlock. This is particularly useful in the case of seismic and dynamic loads. History Concrete and iron-reinforced concrete construction at the beginning of the 20th century required new methods of attaching various components to reinforced concrete elements. For smaller loads, trapezoidal wooden battens with bent nails were cast in. Larger loads could only be attached with special claws when steel girders were embedded in concrete. Another development was S-shaped steel beams into which hook bolts were hung. To enable fastening in reinforced concrete elements that is independent of cast-in steel girders, Anders Jordahl developed a tied-back, C-shaped "slotted rebar" with T-shaped hook-headed bolts in 1913. At the time, Anders Jordahl was a German representative for the Kahn System, a product of Albert Kahn and Julius Kahn's Truscon company. The new system by Anders Jordahl was patented in several countries in the following years. Since 1925 the term "anchor channel" has been in use and the system has been industrially produced. The various manufacturers have continued to develop the system since then. Regulations Anchor channels and channel bolts are qualified in Europe according to EAD 330008-XX-0601 and designed according to EN 1992-4. In the US, the system is qualified according to Acceptance Criteria AC232 and designed according to AC232 and ACI 318. The qualification is certified in Europe by a European Technical Assessment (ETA) and in the US by an Evaluation Service Report (ESR). These specifications also contain the product-specific parameters required for design, which are determined in the course of the approval process. The regulations in Europe and the USA are almost identical. In countries that follow European regulations on fastenings in concrete (e.g. Malaysia, Singapore, Australia), ETAs are accepted and referenced; in countries that follow US regulations on fastenings in concrete (e.g. Canada, Mexico, Indonesia, South Korea, Taiwan, New Zealand), ESRs are accepted and referenced.
Applications Anchor channels are used, for example, to fasten glass curtain wall panels, precast concrete parts, utility lines, brick facades, reinforced concrete facades, brackets, elevator guide rails, canopies or other components, e.g. in plant construction. Curved channels are used in tunnel construction, e.g. in tubbing segments, for the fastening of utility lines or overhead wires. Depending on the manufacturer's product range, suitable anchor channels can be used for a wide variety of construction applications with different requirements for load-bearing capacity, fire protection and corrosion protection. Literature R. Eligehausen, R. Mallée, J. Silva: Anchorage in concrete construction. Beton-Kalender, Ernst & Sohn, Berlin 2006. References External links Design of anchor channels (retrieved 29 May 2020) Fasteners Reinforced concrete
Anchor channel
[ "Engineering" ]
886
[ "Construction", "Fasteners" ]
73,779,438
https://en.wikipedia.org/wiki/Somali%20Jet
The Somali Jet, also known as the Findlater Jet, is a cross-equatorial wind system which forms off the eastern coast of Africa in the Indian Ocean. It is recognised as an important component of the Indian Monsoon and is a factor in the relatively low rainfall in East Africa. It contributes to the existence of the Somali Current, the only major upwelling system that occurs on a western boundary of an ocean. History The Somali Jet was documented scientifically for the first time by Findlater in 1969, based on upper-air data from the Maldives and Nairobi. In practice, the existence of the strong winds had long been known due to their effect on maritime trade. Piracy off the Somali coast is thought to be limited by the strong winds, with most instances of piracy occurring when the Somali Jet is weak. Structure The Somali Jet is a wind maximum in the lowest 1.5 km of the atmosphere, capped above by a maritime temperature inversion. It forms across a relatively narrow band of longitudes. In the northern hemisphere summer (July-September), the jet presents as a southeasterly wind in the southeastern Indian Ocean, before recurving anticyclonically to the northeast on crossing the equator, parallel to the East African coast. At this time of year, the strongest jet winds are the southwesterlies in the Arabian Sea. There is also a local maximum in the southeasterly winds off the northern tip of Madagascar. In response to the annual cycle in insolation, the jet reverses direction in the southern hemisphere summer (December to February). References Winds Atmospheric dynamics
Somali Jet
[ "Chemistry" ]
322
[ "Atmospheric dynamics", "Fluid dynamics" ]
73,779,699
https://en.wikipedia.org/wiki/Echinodontium%20ballouii
Echinodontium ballouii is a basidiomycete native to the northeastern United States. It is a polypore and an important decomposer of the tree Chamaecyparis thyoides. It was declared an endangered species in 2015 due to the scarcity of this tree, which is threatened by the logging industry. It is probable that around 250 individuals exist today. Taxonomy Echinodontium ballouii was initially thought to be in the genus Steccherinum, as it had spines on its hymenium and no stipe. It was placed into the genus Echinodontium in 1964 by Henry Louis Gross, which was later confirmed by Manfred Binder through gene sequencing. Morphology The fungus's fruiting bodies are irregularly-shaped shelf-like formations. Their diameters can span 5-50 centimeters, and they can grow up to 20 centimeters tall. They are brown in color, and the body is tough and woody. They are commonly seen with "teeth" or "spines" protruding from the cap's underside. As a polypore, the fungus's spores are released from an underside covered in small pores, which is lighter in color. These pores are conchate or bell-shaped and release the fungus's basidiospores. The fungus's spores measure 5-7 × 2-3 μm and are strongly amyloid (turn blue-black under Melzer's reagent). As a polypore, Echinodontium ballouii is perennial, releasing spores once a year and forming a new layered hymenium directly on top of that of the previous year. The cystidia are 25-45 × 5-9 μm and club-shaped, becoming more thick-walled and dark with age. The basidia measure 20-25 × 6-8 μm and have 4 sterigma each. The context (flesh) is made of skeletal and generative hyphae. The skeletal hyphae have a diameter of 3.5-4.5 μm, and are brown and smooth, with thick walls. The generative hyphae have a similar diameter and texture, but are transparent, thin-walled and nodose-septate. Ecology Echinodontium ballouii is a hemi-biotrophic wood decomposer, feeding on only a single tree species: the Atlantic white cedar, Chamaecyparis thyoides. The fungus forms complex mycelial networks inside the tree's trunk, slowly digesting cellulose and lignin, and emerging as fruiting bodies in order to reproduce. They are often found relatively high up on trees, below branching points. Habitat Because this fungus only inhabits Chamaecyparis thyoides, its habitat is limited to this tree's ecological environment: swampy, coniferous forests within 150 miles of the Eastern coast of the United States. The production of fruiting bodies can take up to forty years, which means that the fungus is typically found on old-growth trees. Geographic distribution The fungus's limited host species results in a very confined geographical distribution. Only about twenty visibly occupied trees have been documented to date, all on the East Coast of the United States, with the first sighted in New Jersey. Unique aspects The small number of recorded Echinodontium ballouii has resulted in its classification as endangered. This is in part due to the formerly high demand for the fungus's host, the Atlantic white cedar, as shipbuilding lumber, especially given its coastal proximity. Today, demand for lumber still puts this important decomposer at risk. The fungus is named for William Hosea Ballou, one of its earliest discoverers. He mistakenly claimed that "there [was] no fungus more beautiful – or more deadly" to the Atlantic white cedar. In reality, the suffering trees that Ballou witnessed were likely due to logging development and changing hydrology.
The fungus was generally thought to be extinct, as no additional sightings occurred for the majority of the 20th century. However, it was rediscovered in the early 2000s by mycologists Larry Millman and Bill Neill. References Russulales Fungi of the United States Fungi described in 1909 Fungus species
Echinodontium ballouii
[ "Biology" ]
873
[ "Fungi", "Fungus species" ]
73,779,785
https://en.wikipedia.org/wiki/Momfluencer
A mom influencer or momfluencer is a mother who shares the early moments of motherhood on social media, often utilizing sites such as Instagram. The term "dadfluencer" is less common, and refers to a father instead. The term can carry connotations of stress or obligation, as the new mother may feel the need to take a high volume of pictures of a new child and share them on such social media sites. Some momfluencers claim to use the new motherhood position in tandem with social media as a means to earn additional income, while others assert that "the influencer scene fully believes that nobody is actually making any money". History The term first came into use in the early 2000s, along with the rise of social media and mobile phone technology, which facilitated the widespread sharing of personal baby photos by new mothers with a digital audience. Meaning and use The term is a portmanteau of the words Mom and Influencer. A 'momfluencer' may refer to a new mother who may have "...social media followings in the tens of thousands or even millions..." and who may share "...tips and inspiration to their fellow moms..." about various duties of a new mother, such as installing a car seat. Some mothers associated with minority racial or ethnic groups are alleged to be paid less than their peers in racial and ethnic majority groups, due in part to "...limited financial transparency." Criticism Some have criticized "mom influencer" culture for being overly focused on materialistic pursuits, or for building a form of rat race between competing parents to one-up others over who might be deemed the best mother according to external sources such as fans, followers, or the general public. Author Sara Petersen in Time said: Viewing beautifully shot and lit photos of a momfluencer's bespoke laundry room in her Nantucket mansion through the informed lens of entertainment can be fun and soothing. But we can't all afford Nantucket mansions, and the more we believe (or fool ourselves into believing) that aspirational wicker hampers can make our experiences of motherhood any less frustrating, exhausting, or confounding, the less mental space we have to focus on the broken systems and institutions making motherhood so hard for so many of us. Petersen stated in an interview with Vox regarding a book on the subject from 2023 called Momfluenced that, "The momfluencer, obviously, is not a real person. She's a construct, created by real mothers in the mid-aughts, in concert with tech companies and consumer brands, as a way of making a living on social media." Petersen also criticized momfluencers, or "momfluencer culture", for being predominantly white and upper middle class. She also criticizes the image put forth by some so-called momfluencer women as presenting a generally misleading lifestyle on social media in terms of affordability. An example includes renting an expensive car and taking pictures of the vehicle while on a holiday, and then pretending that the vehicle is not rented, but is owned. Petersen's book was also reviewed by Rolling Stone in 2023, wherein Petersen said that momfluencer culture included themes of "race, class, capitalism, consumerism, domesticity, ideals for femininity. There was a lot there." See also Mommyblogs References American slang Slang terms for women Stereotypes of middle class women
Momfluencer
[ "Technology" ]
751
[ "Computing and society", "Social media" ]
73,779,808
https://en.wikipedia.org/wiki/Isotope%20analysis%20in%20archaeology
Isotope analysis has many applications in archaeology, from dating sites and artefacts to determining past diets, migration patterns, and environmental conditions. Information is determined by assessing the ratio of different isotopes of a particular element in a sample. The most widely studied and used isotopes in archaeology are carbon, oxygen, nitrogen, strontium and calcium. Isotopes are atoms of the same element with different numbers of neutrons, and hence different atomic masses. Isotopes can be subdivided into stable and unstable (radioactive). Unstable isotopes decay at a predictable rate over time. The first stable isotope was discovered in 1913, and most were identified by the 1930s. Archaeology was relatively slow to adopt the study of isotopes. Whereas chemistry, biology and physics saw a rapid uptake in applications of isotope analysis in the 1950s and 1960s, following the commercialisation of the mass spectrometer, it wasn't until the 1970s, with the publication of works by Vogel and Van Der Merwe (1977) and DeNiro and Epstein (1978; 1981), that isotopic analysis became a mainstay of archaeological study. Isotopes Carbon Carbon is present in all biological material, including skeletal remains, charcoal and food residues, and plays an integral role in the dating of materials through radiocarbon dating. The ratio of different carbon isotopes naturally fluctuates over time, and, by analysing the composition of carbon dioxide (CO2) in ancient air bubbles trapped in ice cores, a chronological record of these fluctuations can be constructed. Primary producers (such as grasses) absorb and sequester CO2 during photosynthesis; these plants are then eaten by consumers (such as cows, and later humans) which inherit this same CO2 signature. Therefore, by matching the carbon isotope ratios from a sample to ratios from the ice core record, the sample can be assigned to a broad period. After death, an organism no longer absorbs CO2, and the instability of 14C causes its concentration to decrease over time. The predictable rate at which this occurs is known as the element's decay rate. Oxygen and nitrogen Oxygen and nitrogen occur in the form of different isotopes which vary in their proportions geospatially and climatically. Oxygen is absorbed into the body in the form of H2O and is used in the growth of tissues. As with carbon, oxygen isotopic ratio variances can be attributed to specific locations, and the proportion of O isotopes can therefore contribute to the reconstruction of past climates, the understanding of diets and water consumption, seasonality, mobility patterns, life history and elements of culture. Strontium Strontium is naturally deposited in hydroxyapatite, the mineral component of bones and teeth, following its consumption in food and water. Each locale has a unique Sr isotope ratio; therefore, the ratio found in a bone or enamel sample can be cross-referenced against a record of environmental Sr ratios and assigned to a region. Dental enamel forms in childhood; therefore, Sr extracted from dental enamel reflects the environment in which an individual lived during infancy and childhood. Bone, however, is constantly being renewed and can therefore be used to infer the adult diet and location of the individual. As such, if the Sr ratios are analogous in the bones and teeth, it can be inferred that an individual remained in the same general region throughout their life.
If the ratios differ, the individual's birthplace and death place can be mapped, allowing inference of their movements. This has been applied to determine the functionality and significance of Stonehenge, finding that both the visitors and the cattle used in feasting travelled great distances, with Sr ratios attributed to both Scotland and Wales. Calcium Alongside strontium, dietary calcium is deposited in bones and teeth; however, Ca is more readily deposited than Sr in humans and animals who consume primarily or exclusively plants. Therefore, the greater the Ca:Sr ratio in a sample, the more herbivorous the animal was likely to be. Methodology Isolation Before the isotopes can be separated and a ratio can be determined, the desired component of the tissue must be isolated. Such components include collagen, carbonate and apatite. Each component requires different means of isolation, and methods must be further specialised to account for the varied levels of decay and contamination which may occur as a result of taphonomy. In the case of collagen, there are three main modes of isolation: Decalcification of small bone chunks in a 1-5% hydrochloric acid solution. If further decayed organic matter remains, a soak in 0.1 molar sodium hydroxide may be required. The isolated collagen is then freeze-dried. Demineralisation of small bone chunks in sodium salt to separate collagen, which is then freeze-dried. Demineralisation of powdered bone in 8% hydrochloric acid, with slow hydrolysis at pH 3. If required, a further soak in 0.1 molar sodium hydroxide. The latter is most effective in the instance of very poorly preserved bone, although it also faces an increased risk of contamination by other organic matter. Consequently, the supposedly isolated sample should be checked, and only tested if the readings fall within an acceptable range; most mass spectrometers now include a gas analyser as well as a combustion chamber to streamline this process. Mass spectrometry Mass spectrometry is used to separate and measure distinct isotopes present in a sample. Archaeologists typically employ isotope ratio mass spectrometers, or IRMSs, consisting of an inlet system, ion source, mass analyser and multiple ion detectors. The sample is usually introduced into the mass spectrometer as a gas, with oxygen and carbon being introduced as carbon dioxide. Strontium is not easily handled in gas form; instead, it is evaporated and ionised in a vacuum. This use of a solid source is referred to as thermal ionisation mass spectrometry, or TIMS. More recently, strontium isotopes have been at the centre of discussion and investigation into the use of laser ablation inductively coupled plasma mass spectrometry (ICP-MS), which is also of interest due to its less invasive nature. Electron bombardment ionises the gas, allowing the molecules to be focused into a beam which is then split by mass into smaller beams, forming a "mass spectrum". The relative intensities of the different beams are then measured in the ion collector and relayed as isotope ratios. Application and examples Paranthropus dietary reconstruction Plants can be characterised by the ratio of carbon isotopes they sequester, due to alterations in the evolution of photosynthetic biochemical pathways. So-called C3 plants fix CO2 into a 3-carbon molecule and have a greater proportion of 12C, whereas C4 plants fix it into a 4-carbon molecule, and have a carbon isotope signature with higher 13C.
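Because the delta notation and the radiocarbon decay law summarized above amount to simple arithmetic, they can be stated compactly in code. The following Python sketch is illustrative only: the VPDB standard ratio and the conventional 14C half-life are standard published constants, but the sample inputs are invented for demonstration and do not come from any study cited in this article.

```python
import math

# Two core calculations in archaeological isotope work:
# (1) the delta notation used for stable isotopes, and
# (2) the decay-law age estimate behind radiocarbon dating.

R_VPDB = 0.011180        # 13C/12C ratio of the VPDB standard (a commonly cited value)
C14_HALF_LIFE = 5730.0   # conventional 14C half-life, in years

def delta_13c(r_sample: float) -> float:
    """Delta-13C in per mil: the deviation of a sample's 13C/12C
    ratio from the VPDB standard, scaled by 1000."""
    return (r_sample / R_VPDB - 1.0) * 1000.0

def radiocarbon_age(fraction_remaining: float) -> float:
    """Years elapsed, from the surviving fraction of 14C, using
    N(t) = N0 * exp(-lambda * t) with lambda = ln(2) / half-life."""
    decay_constant = math.log(2.0) / C14_HALF_LIFE
    return -math.log(fraction_remaining) / decay_constant

# C4 grasses sit near -12 per mil; C3 plants near -27 per mil.
print(f"delta-13C: {delta_13c(0.011046):+.1f} per mil")  # about -12.0, a C4-like signature
# A sample retaining half its original 14C is one half-life old.
print(f"age: {radiocarbon_age(0.5):,.0f} years")         # about 5,730 years
```

In practice, raw ages computed this way are further adjusted against calibration curves that track past fluctuations in atmospheric 14C, which is one reason dates such as those from White Sands, discussed below, remain debated.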
This C3/C4 signature translates across trophic levels and can be used to determine the diets of people and animals. Isotopic analysis has been used to illuminate the diets of the different species of the Paranthropus genus. It was determined that P. boisei had a reduced ratio of C3:C4, meaning they likely consumed a greater proportion of grasses and sedges than trees, shrubs and temperate grasses. P. aethiopicus showed a similar trend, whereas P. robustus was a generalist, with a broader dietary niche. Furthermore, carbon isotope analysis shows that around 2.37 million years ago, hominins displayed a widespread shift to favour C4 plants. "Ötzi the Iceman" and reconstructing Neolithic lifeways Ötzi is a Neolithic man who, in 1991, was found in an Alpine glacier between Austria and Italy. Ötzi is exceptionally well preserved since his body was dehydrated and encapsulated in glacial ice. Radiocarbon dating gave an age of approximately 5,200 years. TIMS, ICP-MS and gas mass spectrometry have all been applied to the strontium, lead, and oxygen isotopes in Ötzi's bones and teeth. His teeth indicated a likely birth and early childhood near the confluence of the Eisack and Rienz rivers. In his adulthood, however, Ötzi's bones suggest that he moved to the lower Vinschgau and Etsch valley. More recent isotopic data, gathered from his gut contents, provides yet another timescale and hints that Ötzi's movement could be attributable to seasonal migration. White Sands trackway and peopling of North America The earliest compelling evidence for human habitation of the Americas comes from the Clovis complex, between 11,050 and 10,800 14C yr B.P. However, a series of human tracks were identified at White Sands National Park, New Mexico, which have been contentiously dated to between 23,000 and 21,000 years ago, during the Last Glacial Maximum. Alongside anatomically modern humans, the trackway shows impressions created by a Columbian mammoth and a giant ground sloth. The upper biostratigraphic limit for when the impressions were made could therefore be determined by consideration of the extinction dates of mammoths and ground sloths. More precise dates were able to be gained via radiocarbon dating of ditch grass (Ruppia cirrhosa) embedded in the prints. These seeds produced a date of 23,000-21,000 years ago. However, 14C dates are not infallible, and this remains a topic of debate. A recent counterproposal posits that the trackways were, in fact, created by the Clovis culture and that the pre-existing proposed dates of first habitation should not be moved. False dates may have been produced because older strata containing the seeds could have been eroded and displaced onto the damp clay, before being pressed in by footsteps. Alternatively, aquatic plants like ditch grass reflect the 14C levels in their environment when living; if 14C was deficient in the habitat, this could imply a false date. References Archaeological science Methods in archaeology Isotopes
Isotope analysis in archaeology
[ "Physics", "Chemistry" ]
2,076
[ "Isotopes", "Nuclear physics" ]
73,779,949
https://en.wikipedia.org/wiki/Haven-1
Haven-1 is a planned space station in low Earth orbit that is currently in development by the American aerospace company Vast. The station is expected to launch no earlier than August 2025 atop a SpaceX Falcon 9. The first mission to Haven-1, Vast-1, is expected to launch a crew of four astronauts on board a Crew Dragon spacecraft to the space station for thirty days. More launches are expected to occur using Crew Dragon to shuttle astronauts to and from Haven-1 over the course of its lifespan. The station will be unable to sustain itself over a long period of time and will rely on the docked Crew Dragon's life support systems for long-term missions. Using Dragon, the station will be capable of sustaining 4-crew missions with 24/7 communication facilities, up to 1,000 watts of power, up to 150 kg of preloaded cargo mass, and science, research, and in-space manufacturing opportunities for up to 30 days. The crews aboard the station will also conduct experiments in an attempt to mimic lunar gravity. Components Haven-1's propulsion system is being built and provided by Impulse Space. The propulsion system will consist of a storable propellant combination, nitrous oxide and ethane, propellant tanks, fluid lines, valves, sensors, control electronics and software, and Saiph thrusters as reaction control thrusters. The station will also contain a dome for photography and viewing of the Earth for tourists, in addition to always-on internet through onboard Wi-Fi and resting rooms. In mid-February 2024, Vast announced that it was partnering with El Segundo-based company AnySignal, Irvine-based company TRL11, and Singaporean company Addvalue to provide radio frequency and Inter-satellite Data Relay System (IDRS) connectivity, as well as advanced onboard video solutions for use in Haven-1. In August 2024, Vast revealed that Haven-1 will house a microgravity research facility called the Haven-1 Lab, which will serve as the station's microgravity research, development and manufacturing platform. It will have 10 slots, each capable of accommodating payloads weighing up to 30 kilograms and consuming up to 100 watts of power, with payloads provided by private companies and governments. The first companies to agree to place payloads on Haven-1 have been announced as Redwire and Yuri Gravity. See also Vast (company) Vast-1 References Proposed space stations Space tourism
Haven-1
[ "Astronomy" ]
502
[ "Outer space stubs", "Outer space", "Astronomy stubs" ]
73,780,027
https://en.wikipedia.org/wiki/Lorraine%20Maltby
Lorraine Lucy Maltby (born 1960, née Ward) is a British biologist and professor of environmental biology at the University of Sheffield. She serves as Deputy Vice-President for research and innovation and as chair of the board of trustees of the Freshwater Habitats Trust. Her research investigates interactions in the riparian zone and the environmental impacts of agri-plastics. Early life and education Maltby became interested in freshwater ecology during her A-Levels, where she completed a project on urban ecology. She moved to Newcastle University for an undergraduate degree in zoology. She then moved to the University of Glasgow for graduate studies, where she studied the life history of freshwater Erpobdella leeches. Research and career Maltby was awarded a Natural Environment Research Council (NERC) postdoctoral research grant, and moved to the University of Sheffield in 1984. Maltby joined the faculty at the University of Sheffield in 1988, was appointed professor in 2004, and served as head of department from 2008. In 2017 she was appointed Deputy Vice President of Research. Her research investigates aquatic-riparian interactions and the environmental impacts of plasticulture. She has been part of the UK Research and Innovation (UKRI) activity around sustainable plastics in agriculture. She has studied chemical pollution in Yorkshire rivers. Maltby is chair of the Board of Trustees of the Freshwater Habitats Trust. Awards and honours 2019 Elected Fellow of the Freshwater Biological Association 2020 Appointed Officer of the Most Excellent Order of the British Empire (OBE) in the 2021 New Year Honours for services to Environmental Biology, Animal and Plant Sciences. Selected publications References Living people Alumni of Newcastle University Alumni of the University of Glasgow Fellows of the Royal Society of Biology Officers of the Order of the British Empire 20th-century British biologists 21st-century British biologists Academics of the University of Sheffield 20th-century British women scientists 21st-century British women scientists British women biologists 1960 births Environmental scientists
Lorraine Maltby
[ "Environmental_science" ]
388
[ "Environmental scientists", "British environmental scientists" ]
73,780,161
https://en.wikipedia.org/wiki/Sonja%20Glava%C5%A1ki
Sonja Glavaški is an electrical engineer. Her research initially focused on nonlinear control and robust control; her interests have since shifted to include computational challenges in the control of electrical grids and their integration with building-scale energy systems. Educated in Serbia and the US, she works at the Pacific Northwest National Laboratory as Chief Energy Digitalization Scientist and Principal Technology Strategy advisor for the Energy & Environment Directorate. Education and career Glavaški earned an engineering degree and a master's degree in electrical engineering from the University of Belgrade. She continued her education at the California Institute of Technology, earning a second master's degree and completing her Ph.D. there. Her 1998 doctoral dissertation, Robust system analysis and nonlinear system model reduction, was supervised by John Doyle. She worked in industry as a principal scientist for Honeywell at Honeywell Labs in Minneapolis, for the Eaton Corporation at the Eaton Innovation Center in Wisconsin, and then for United Technologies at the United Technologies Research Center in Connecticut, where she led the Control Systems Group. Next, she moved to ARPA-E, the US Advanced Research Projects Agency–Energy, as a program director in charge of projects including the Network Optimized Distributed Energy Systems (NODES) program, focusing on the integration of small-scale renewable energy sources into the grid and the use of building-scale energy systems for grid energy storage. She moved from there to her present position at the Pacific Northwest National Laboratory. Recognition Glavaški was named an IEEE Fellow, in the 2020 class of fellows, "for leadership in energy systems". References Year of birth missing (living people) Living people Electrical engineers Women electrical engineers Control theorists University of Belgrade alumni California Institute of Technology alumni United States Department of Energy National Laboratories personnel Fellows of the IEEE
Sonja Glavaški
[ "Engineering" ]
357
[ "Electrical engineering", "Control engineering", "Control theorists", "Electrical engineers" ]
73,780,505
https://en.wikipedia.org/wiki/Paurocotylis%20pila
Paurocotylis pila, commonly known as the scarlet berry truffle, is an ascomycete fungus in the genus Paurocotylis. It was first described by Miles Joseph Berkeley in 1855. This species is native to New Zealand and Australia and is naturalized in the United Kingdom. It often appears in forests under podocarp trees such as totara; however, it also occurs in gardens, forest tracks, and parks. Taxonomy First described in 1855 by Miles Joseph Berkeley in Joseph Dalton Hooker's The Botany of the Antarctic Voyage II, Flora Novae-Zealandiae, the type specimen was found 'on the ground' and was collected by William Colenso in Te Hāwera, South Taranaki in the North Island of New Zealand. Paurocotylis pila is the only species from the genus Paurocotylis found in New Zealand. Etymology Greek, pauro means few and cotylis means cavity, possibly referring to the observed interior of the type specimen. Latin, pila means sphere, presumably referring to the shape of the fruit body. Description This truffle-like fungus produces a spherical to tuber-shaped fruit body (ascoma) with a smooth surface, which can be lobed or wrinkled. Paurocotylis pila's fruiting body is ball-shaped, with a thin, matte red-orange outer rind, and has no stalk. Often the rind is creased, but occasionally it is smooth. Varying in size, it ranges from 10-30 mm across, and is found half buried in soil, or under leaf litter. The fruit body is made of yellow-brown tissue, with multiple hollow chambers. Inside the chambers, the asci break up to leave round, cream or yellow ascospores. Once collected and dried, the rind's colour changes to a dull red-brown. P. pila fruit bodies usually measure 10-30 mm in diameter, although some in the UK are up to 60 mm. The fruit body does not have a stipe. There is no noted odour, and it is regarded as inedible. Range DNA barcode (internal transcribed spacer) sequences in the National Center for Biotechnology Information database indicate a distribution in New Zealand, Australia and the United Kingdom. Natural global range This species is native to New Zealand; however, it has been introduced to England. In England, it has spread to Nottingham, Yorkshire, Sheffield, and elsewhere. Paurocotylis pila is also native to Tasmania, and has been found in Australia. New Zealand range Paurocotylis pila is found all across New Zealand, often appearing in forests under podocarp trees such as totara. However, it also occurs in gardens, forest tracks, and parks. Habitat This species is found in leaf litter and soil in forests, parks and gardens. Paurocotylis pila prefers disturbed forests, and is often found in soil near tracks. It has even been found in abandoned gravel pits. In England, it has been found fruiting in garden soil. Paurocotylis pila has been found near tracks in forest parks, under Podocarpus. Disturbed soil may make the fruit bodies easier to spot, or they may simply be recorded more often in such areas because that is where observers are. It is thought that due to their berry-like shape and striking colour, birds play a role in their dispersal. Experts have suggested that some members of this genus and related genera of fungi may change between being saprobic and endophytic throughout their lives. This is unlikely for this species since it is found under various tree species. Ecology Life cycle/Phenology Paurocotylis pila is a saprobic species that grows underground. The fruiting bodies emerge after warm rain, mainly in autumn.
After emerging from underground, Paurocotylis pila often remains partially covered by soil or leaf litter. From there, it is presumed to be dispersed by ground-foraging birds looking for fallen fruit. Its autumn fruiting coincides with that of podocarp trees in the forest, and because its colour resembles their fruit, it attracts birds. Bird dispersal has likely assisted its spread throughout England, where specimens have been found with damage from bird pecking. Predators, Parasites, and Diseases Birds eat this species, which likely aids in its dispersal; peck marks, often seen on Paurocotylis pila, provide supporting evidence for bird dispersal. It is unknown whether any other predators, diseases, or parasites live on this species. Evidence of Ascomycota fungi being eaten by moa has been found in moa coprolites, suggesting that this species may once have been eaten and dispersed by moa; it is unknown which bird species continue to spread it today. Given that the species is spreading in the UK, introduced birds may be dispersing it alongside native species. References Fungi described in 1855 Pyronemataceae Fungus species Fungi of New Zealand Fungi of Australia Fungi of the United Kingdom Inedible fungi
Paurocotylis pila
[ "Biology" ]
1,047
[ "Fungi", "Fungus species" ]
75,128,987
https://en.wikipedia.org/wiki/Rainer%20Marutzky
Rainer Marutzky (Halle, 1947) is a German wood scientist, who is emeritus professor of wood chemistry at the Technical University of Braunschweig and former director of the Fraunhofer Institute for Wood Research, Wilhelm Klauditz Institute (WKI) in Braunschweig, Germany. Biography He was born on 11 August 1947 in Halle, Germany. Following his military service, he studied chemistry at the Technical University of Braunschweig from 1968 to 1973. Under the mentorship of Professor Karl Wagner, he earned his doctoral degree and subsequently served as a post-doctoral fellow at the Society for Biotechnology in Braunschweig-Stöckheim, specializing in enzyme chemistry. In 1976, he joined the Fraunhofer Institute for Wood Research as a research associate. He completed his habilitation at the Institute of Natural Sciences at the Technische Universität Braunschweig in 1991 and was appointed university professor in 1996. His pioneering research dealt predominantly with harmful emissions from wood-based products and the industrial environment. He was also actively engaged in European standardization initiatives. Marutzky was director of the Fraunhofer WKI from 1989 until his retirement in December 2009. His years of work are documented in numerous publications in German and international scientific journals, along with his participation as a keynote speaker and expert at international scientific symposia. International recognition In 1988, Marutzky, together with Edmone Roffael and Lutz Mehlhorn, received an award from the International Association iVTH for their research on the topic "Investigations on the formaldehyde emissions from wood-based materials and other materials, and the development of methods to reduce formaldehyde emission potential." He has also received several other awards in the field of wood science and technology. He is currently a technical advisor to the International Association for Technical Wood Matters (iVTH). References People from Braunschweig German chemists Wood sciences Wood scientists 1947 births Living people
Rainer Marutzky
[ "Materials_science", "Engineering" ]
422
[ "Wood sciences", "Wood scientists", "Materials science" ]
75,129,233
https://en.wikipedia.org/wiki/278%20%28number%29
278 (two hundred [and] seventy-eight) is the natural number following 277 and preceding 279. In mathematics 278 is an even composite number with 2 prime factors. 278 is equal to Φ(30), the value of the summatory totient function at 30; that is, it is the sum of φ(k) for k from 1 to 30. 278 is a nontotient number: there is no integer n for which Euler's totient function gives φ(n) = 278. 278 is the smallest semiprime whose digits can be rearranged into a different semiprime; the other number is 287. References Integers
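A quick way to sanity-check these claims is with a short script; the following sketch uses the third-party sympy library (an assumed dependency; any totient and factorization routines would do):

```python
# Sketch verifying the claims about 278; assumes sympy is installed.
from sympy import totient, factorint

m = 278

# Summatory totient: phi(1) + phi(2) + ... + phi(30) = 278.
assert sum(totient(k) for k in range(1, 31)) == m

# Nontotient: since phi(n) >= sqrt(n/2), any n with phi(n) = m
# satisfies n <= 2*m**2, so a finite search suffices.
assert all(totient(n) != m for n in range(1, 2 * m * m + 1))

# Semiprime anagram: both 278 and 287 have exactly two prime
# factors counted with multiplicity.
def is_semiprime(n: int) -> bool:
    return sum(factorint(n).values()) == 2

assert is_semiprime(278) and is_semiprime(287)
print("all three claims verified")
```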
278 (number)
[ "Mathematics" ]
114
[ "Elementary mathematics", "Integers", "Mathematical objects", "Numbers" ]
75,129,370
https://en.wikipedia.org/wiki/Charles%20Hamilton%20Mitchell
Charles Hamilton Mitchell, CB, CMG, DSO (1872–1941) was a Canadian civil engineer and an intelligence officer in the Canadian Expeditionary Force in World War I, with the rank of brigadier-general. He served in France, Italy, and England during the war as an intelligence officer, earning several honours and becoming the most decorated intelligence officer in Canadian military intelligence history. After the war, he returned to Canada to serve as the Dean of Engineering at the University of Toronto Faculty of Applied Science and Engineering. He helped greatly expand and improve the faculty during his tenure and served in that role until 1941. Early life and education Charles Hamilton Mitchell was born to George Mitchell and Agnes Becket in 1872 at Petrolia, Ontario. His father, George Mitchell, was a clergyman and a graduate of Upper Canada College and the University of Toronto in mathematics. He was the great-grandson of a United Empire Loyalist. He attended the School of Practical Science at the University of Toronto, where he studied civil engineering. He received his SPS diploma in 1892 and his B.A.Sc. in 1894. After graduating from the University of Toronto, he worked as a civil engineer (officially qualifying as a C.E. in 1898), specializing in hydraulic and hydro-electric power development. He took employment as the Assistant City Engineer in Niagara Falls and later served as the City Engineer. After leaving that post, he set up a Toronto-based consulting firm in 1906 in partnership with his brother Percival, working largely in hydroelectric power plant construction. He was responsible for the design and construction of several plants in the Maritimes, Ontario, and Western Canada. In 1901 he married Myra Ethlyn Stanton, daughter of John N. Stanton and Martha Hubbs of St Catharines. They had one son, Donald Russell Mitchell, born in 1902, who survived for only three weeks. Military service Mitchell joined the Militia in 1899. Prior to World War I, he served in the 44th Lincoln and Welland Regiment and the Corps of Guides. Upon the outbreak of war in 1914, Mitchell attested to serve in the Canadian Expeditionary Force (CEF) and was appointed General Staff Officer (3rd grade) on the staff of the Headquarters, 1st Canadian Division. When the Canadian Corps was formed in August 1915, Mitchell was sent to its Headquarters as G.S.O.2 (Int), the senior intelligence appointment in the CEF. He was therefore involved from the very beginning in establishing a Corps intelligence organization. He had no Canadian precedent to guide him, although he could call on his experience in the First Division and on the considerable help he received from his British counterparts. In September 1916 he became head of the Intelligence Branch of the Second Army as a colonel, and in October 1918 he was promoted to brigadier-general and served as a senior intelligence officer in the War Office in London, following posts in France and Italy. He returned to the Canadian Army in June 1919, having won numerous honours and decorations, including French, Belgian and Italian awards. He was appointed to the Order of the Bath on June 3, 1918, while serving on the Headquarters of the General Staff of the British Army in Italy with the CEF. He remains the most decorated intelligence officer in Canadian military intelligence history. Post War work Prior to his service in the war, Mitchell had represented engineering graduates of the University of Toronto on the Senate from 1901 to 1913. 
From 1913 to 1919, he served on the Board of Governors of the university, until his appointment as Dean. After the war, he officially took up the position of Dean of the Faculty of Applied Science, beginning his term in 1919. Mitchell had no prior experience in academic administration, but he quickly grew into the role. His exemplary service record also lent him authority early in his term, when fully half the student population consisted of returning veterans. Mitchell oversaw the faculty during the entirety of the interwar period. In this time, the faculty grew from a student body of 772 in 1919 to 961 in 1940, despite rising academic standards and the effects of the Great Depression. Programs in Engineering Physics and Mining Geology were also added during this period. In addition to his academic role, Mitchell was involved in various public duties. In 1924 he served alongside American representatives on the Joint Board of Engineers studying the feasibility of a St. Lawrence waterway. He also served on the Board of Trade while he was Dean. Mitchell retired in 1941, shortly before his death. He was succeeded by C.R. Young. Death General Mitchell died on 26 August 1941; his wife died on 1 May 1958. Honours and awards Companion of the Order of the Bath (1918) Companion of the Order of St Michael and St George (1917) Distinguished Service Order (1916) Officer of the Order of Leopold (Belgium) (1917) Croix de Guerre (Belgium) (1918) Officer of the Legion of Honour (France) (1916) Officer of the Order of the Crown (Italy) (1918) War Merit Cross (Italy) (1919) Several Mentions in Dispatches (1916, 1916, 1917, 1917, 1918, 1919) References External links Canadian Military Intelligence: Honours and Awards Page Canadian Great War Project: Brigadier-General Charles Hamilton Mitchell Library and Archives Canada, Attestation Paper: Mitchell, Charles Hamilton Winning the Trench Warfare: Battlefield Intelligence in the Canadian Corps 1914-1918. PhD Thesis by Dan Richard Jenkins (1999). Biography of Charles Hamilton Mitchell European hydro-electric power development, Charles H Mitchell, 1908. Aerial Navigation in Warfare, Charles H Mitchell, 1911. Civil engineering Intelligence assessment Canadian Militia officers 1872 births 1941 deaths Canadian Expeditionary Force officers
Charles Hamilton Mitchell
[ "Engineering" ]
1,130
[ "Construction", "Civil engineering" ]
75,129,597
https://en.wikipedia.org/wiki/283%20%28number%29
283 is the natural number following 282 and preceding 284. In mathematics 283 is an odd prime number, a twin prime with 281, and a super-prime: it is the 61st prime, and 61 is itself prime. 283 is a strictly non-palindromic number, meaning that its representation is not a palindrome in any base b from 2 up to 283 − 2 = 281. 283 is a number n such that 4^n − 3^n is prime; that is, 4^283 − 3^283 is prime. 283 can be written as 2^5 + 8 + 3^5 (that is, 32 + 8 + 243). References Integers
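The strictly non-palindromic property is easy to confirm by brute force; a minimal sketch using only the standard library:

```python
# Check that 283 is strictly non-palindromic: not a palindrome
# in any base b with 2 <= b <= n - 2.
def digits(n: int, base: int) -> list[int]:
    """Digits of n in the given base, least significant first."""
    out = []
    while n:
        n, r = divmod(n, base)
        out.append(r)
    return out

n = 283
assert all(digits(n, b) != digits(n, b)[::-1] for b in range(2, n - 1))
print("283 is strictly non-palindromic")
```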
283 (number)
[ "Mathematics" ]
109
[ "Elementary mathematics", "Integers", "Mathematical objects", "Numbers" ]
75,130,631
https://en.wikipedia.org/wiki/Calixcoca
Calixcoca is an experimental vaccine to treat cocaine and crack cocaine addiction. It has been in development since 2015 by the Federal University of Minas Gerais (UFMG) in Brazil. Development The vaccine was created by a team led by Frederico Garcia, professor in the Department of Mental Health at the UFMG Faculty of Medicine. He says that the motivation for the work came from seeing the suffering of pregnant women addicted to cocaine who arrived at the university's outpatient clinic. The active ingredient of the vaccine (V4N2) was designed and synthesized by the synthesis group headed by professor Ângelo de Fátima, from the Department of Chemistry at UFMG. During the pre-clinical phase, funding came from the National Council for Scientific and Technological Development (CNPq) and the Minas Gerais Research Support Foundation (Fapemig). On June 1, 2023, the city of São Paulo announced an initial investment of R$4 million to accelerate the vaccine research. Vaccine Calixcoca, unlike other anti-cocaine vaccines, is not protein-based. The material that forms the basis of the vaccine is the V4N2 molecule. This molecule stimulates the immune system to produce antibodies that bind to cocaine molecules in the blood. Cocaine molecules bound to these antibodies are too large to cross the blood–brain barrier, and thus cannot reach the brain or produce psychoactive effects in the user. Preclinical studies Pre-clinical studies carried out with mice showed the production of anti-cocaine antibodies in the animals. In addition to making the effects of the drug imperceptible to the mice, the vaccine also reduced the number of spontaneous abortions, and the pups were born healthier and with greater resistance to the drug. Awards Calixcoca was selected as one of the finalists in the Euro Health Innovation Award (2023), winning the award in October of that year. References Vaccines Brazilian inventions Vaccines against drugs Federal University of Minas Gerais
Calixcoca
[ "Biology" ]
406
[ "Vaccination", "Vaccines" ]
75,131,048
https://en.wikipedia.org/wiki/HD%204222
HD 4222, also known as HR 196, is the primary of a binary star system located in the northern constellation Cassiopeia. It is faintly visible to the naked eye as a white-hued point of light with an apparent magnitude of 5.41. Gaia DR3 parallax measurements imply a distance of 353 light-years and it is drifting closer with a heliocentric radial velocity of . At its current distance, HD 4222's brightness is diminished by an interstellar extinction of 0.13 magnitudes and it has an absolute magnitude of +0.44. HD 4222 has a stellar classification of A2 Vs or A1 V, both classes indicating that it is an A-type main-sequence star that is generating energy via hydrogen fusion at its core. The former class also indicates the presence of "sharp" or narrow absorption lines due to slow rotation. Consistent with the class, HD 4222 spins modestly with a projected rotational velocity of approximately . It has 2.59 times the mass of the Sun and 3.47 times the radius of the Sun. It radiates 69.1 times the luminosity of the Sun from its photosphere with an effective temperature of . HD 4222 is metal-deficient, with an iron abundance of [Fe/H] = −0.26, or 55% of the Sun's. At the age of 407 million years, HD 4222 has completed 81.5% of its main-sequence lifetime. HD 4222 and BU 492B make up the binary system BU 492. The companion is a red dwarf with a stellar classification of M2-5V; it is located 1.5" away from the primary along a position angle of 173°. BU 492B was first noticed by astronomer S. W. Burnham in 1878. HD 4222 also has one optical companion: an 11th-magnitude star located 88.6" away, which is itself also a double star. An X-ray emission with a luminosity of was detected around the star. A-type stars are not expected to emit X-rays, so the emission might instead be coming from the companion. HD 4222 is considered to be a probable member of the Sirius supercluster, a group of stars that move with the bright star Sirius and share a common origin with it. References Cassiopeia (constellation) A-type main-sequence stars Binary stars Multiple stars BD+54 00143 004222 003544 0196 00445136120
HD 4222
[ "Astronomy" ]
515
[ "Cassiopeia (constellation)", "Sky regions", "Multiple stars", "Constellations" ]
75,131,230
https://en.wikipedia.org/wiki/47%20Cassiopeiae
47 Cassiopeiae (also designated 47 Cas, HR 581, TYC 4499-2252-1, HD 12230, and HIP 9727) is an F-type main-sequence star located 109.45 light years away in the constellation of Cassiopeia. 47 Cassiopeiae is visible to the naked eye in dark skies but is almost never visible in areas with light pollution. The star forms a binary with an unseen companion, 47 Cassiopeiae B, detected only in the radio spectrum. Although poorly studied, the star has been observed to emit X-rays and microwaves in large flares. It was historically catalogued as an A7V star, but was later reclassified as F0V. Based on kinematics, this star is likely part of the Pleiades moving group. Despite being much more luminous and massive than the Sun, this star has been used as a solar analog. The star was a bright member of Custos Messium, a constellation in occasional use from 1775 into the 19th century, typically drawn as a depiction of Charles Messier standing on top of the giraffe (Camelopardalis), between Cepheus and Cassiopeia. References F-type main-sequence stars Cassiopeia (constellation) Binary stars
47 Cassiopeiae
[ "Astronomy" ]
270
[ "Cassiopeia (constellation)", "Constellations" ]
75,131,642
https://en.wikipedia.org/wiki/Cystotheca%20lanestris
Cystotheca lanestris, the live oak witch's broom fungus, is a species of mildew that infects buds and induces stem galls called witch's brooms on oak trees in California, Arizona, and Mexico in North America. Witch's brooms are "abnormal clusters of shoots that are thickened, elongated, and highly branched." This fungus infects coast live oaks, interior live oaks, canyon live oaks, valley oaks, and tanoaks, and is most commonly found along the coast. Research published in 2023 also describes this fungus growing in association with Quercus laceyi and Q. toumeyi. According to the U.S. Forest Service's Treesearch, while this fungus is technically considered a plant disease, most infections are harmless "except possibly in small seedlings". Cystotheca lanestris is considered widespread in California. References External links Oak galls Erysiphales Fungi of California Gall-inducing fungi Fungus species Fungi described in 1884 Taxa named by H. W. Harkness
Cystotheca lanestris
[ "Biology" ]
223
[ "Gall-inducing fungi", "Fungi", "Fungus species" ]
75,134,015
https://en.wikipedia.org/wiki/PGC%2016389
LEDA/PGC 16389 is a dwarf irregular galaxy (Hubble type dIrr) in the constellation Caelum in the southern sky. It is estimated to be 22 million light-years from the Milky Way and forms an optical galaxy pair with APMBGC 252+125-117. References External links CDS Portal SIMBAD Astronomical Database A cosmic optical illusion 16389 Dwarf irregular galaxies Caelum J02232199+3211492
PGC 16389
[ "Astronomy" ]
95
[ "Caelum", "Galaxy stubs", "Astronomy stubs", "Constellations" ]
75,134,131
https://en.wikipedia.org/wiki/HAL%20HLFT-42
The HAL HLFT-42 (Hindustan Lead-in Fighter Trainer – 42) is a design for an Indian lead-in fighter trainer proposed by Hindustan Aeronautics Limited (HAL). It is designed as a next-generation supersonic trainer jet, serving as an advanced trainer for the upcoming HAL Tejas Mk2 and HAL AMCA fighter jets. Notably, the HLFT-42 is also intended to operate as a fully fledged fighter jet performing combat missions. HAL unveiled a scale model of the HLFT-42 design at the 14th edition of Aero India (2023), which was held in Bangalore. The Indian Air Force has expressed its intent to use the HLFT-42 in the future to replace the existing BAE Hawk 132 jet trainers. Development The concept for the HLFT-42 was initiated in 2017, and officially unveiled at Aero India 2023 by Hindustan Aeronautics Limited (HAL). As of February 14, 2023, HAL reported that the HLFT-42 development project had reached an advanced stage of development and was expected to progress to the final design stage within the next four to five years. Its primary role is to replace the Indian Air Force's current BAE Hawk jet trainers and serve as the trainer for future fighter jets, including the HAL Tejas Mk2 and HAL AMCA. Additionally, it will be capable of performing combat missions. Design The HLFT-42 is designed as a single-engine, conventionally swept-wing aircraft with a bubble canopy. It is anticipated that the maximum take-off weight of the HLFT-42 will be around 16,500 kilograms. The aircraft will feature advanced avionics, including an active electronically scanned array radar, an infrared search and track sensor, and an electronic warfare (EW) suite, all complemented by a fly-by-wire (FBW) system. The HLFT-42 mock-up displayed at Aero India (2023) showcased three hardpoints under each wing, three under the fuselage, and one on each wing tip, totaling 11 hardpoints for integrating weapons. These weapons may include close-combat missiles (like the ASRAAM) and beyond-visual-range missiles (such as the Astra), which would effectively transform the HLFT-42 into a fully fledged fighter jet. Specifications (Projected) See also References External links HAL Official Website for details of Media Releases about HLFT-42 Indian military aircraft procurement programs Proposed military aircraft HLTF-42 Single-engined jet aircraft Low-wing aircraft Aircraft with retractable tricycle landing gear
HAL HLFT-42
[ "Engineering" ]
517
[ "Proposed military aircraft", "Military projects" ]
75,134,428
https://en.wikipedia.org/wiki/Jankov%E2%80%93von%20Neumann%20uniformization%20theorem
In descriptive set theory the Jankov–von Neumann uniformization theorem is a result saying that every relation on a pair of standard Borel spaces that is measurable with respect to the σ-algebra generated by the analytic sets admits a measurable section. It is named after V. A. Jankov and John von Neumann. While the axiom of choice guarantees that every relation has a section, this is a stronger conclusion in that it asserts that the section is measurable, and thus "definable" in some sense without using the axiom of choice. Statement Let X and Y be standard Borel spaces and let A ⊆ X × Y be a subset that is measurable with respect to the σ-algebra generated by the analytic sets. Then there exists a measurable function f : X → Y such that, for all x ∈ X, (x, f(x)) ∈ A if and only if there exists some y ∈ Y with (x, y) ∈ A. An application of the theorem is that, given any measurable function g : X → Y, there exists a universally measurable function h such that g(h(y)) = y for all y in the image g(X). References Descriptive set theory Inverse functions Measure theory
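For readers who prefer the boldface pointclass notation, the statement can be restated compactly; this is only a transcription of the reconstruction above, with σ(Σ¹₁) denoting the σ-algebra generated by the analytic sets:

```latex
% Uniformization: f selects a point from each nonempty section of A.
\text{There is a } \sigma(\boldsymbol{\Sigma}^1_1)\text{-measurable } f \colon X \to Y
\text{ such that, for all } x \in X,
\[
  \bigl(x, f(x)\bigr) \in A \iff \exists y \in Y \; (x, y) \in A .
\]
```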
Jankov–von Neumann uniformization theorem
[ "Mathematics" ]
187
[ "Theorems in mathematical analysis", "Theorems in measure theory" ]
75,134,813
https://en.wikipedia.org/wiki/BrMT
BrMT (6-bromo-2-mercaptotryptamine) is a neurotoxin found in the hypobranchial gland of the marine snail species Calliostoma canaliculatum. The disulfide-linked dimer of BrMT possesses inhibitory effects on the Kv1 and Kv4 families of voltage-gated potassium channels. Source BrMT was first isolated from the mucus of Calliostoma canaliculatum, a top snail found in the temperate coastal waters of the eastern Pacific. BrMT is the first compound found in the hypobranchial gland mucus to produce a biological response. Chemistry BrMT is a brominated tryptamine. It has a thiol group, allowing dimerization via a disulfide linkage. BrMT is light-sensitive and unstable in a reducing environment. Its first total synthesis was reported in 2013. Target Calliostoma canaliculatum deters predators by covering its shell with BrMT-containing mucus, in particular when exposed to a predator such as the sea star Pycnopodia helianthoides or Pisaster giganteus. The BrMT dimer is known to affect voltage-gated potassium channels in the central nervous system. It strongly inhibits ShBΔ potassium channels, and to a lesser degree also isoforms found in humans (hKv1.1) and squid (sqKv1.A). It also affects members of the Kv4 family (Kv4.1 and Kv4.2) and Drosophila ether-à-go-go channels. Mode of action The mode of action of BrMT involves inhibition of specific voltage-gated potassium channels present in the nervous system. By stabilizing the voltage sensor of the ion channel, it inhibits the opening of ShBΔ channels and other Kv1 family members. BrMT has a half-maximal inhibitory concentration (IC50) of 1.1 ± 0.1 μM on ShBΔ channels, a member of the Kv1 family. The binding of BrMT to ShBΔ has been found to be allosteric in nature, acting through a change of conformation in the K+ channel subunits rather than by blocking the entrance of the channel. References Ion channel toxins Neurotoxins Snail toxins Amines Indoles Bromoarenes Thiols
BrMT
[ "Chemistry" ]
484
[ "Thiols", "Functional groups", "Amines", "Organic compounds", "Neurochemistry", "Neurotoxins", "Bases (chemistry)" ]
75,135,199
https://en.wikipedia.org/wiki/Edward%20Karavakis
Edward Karavakis (; born October 29, 1983) is a Greek computer scientist working as a Senior Applications Engineer at Brookhaven National Laboratory (BNL), stationed at CERN, the European Organization for Nuclear Research, in Geneva, Switzerland. Early life and education Karavakis completed his bachelor's degree in Computer Science in 2005 at the University of East London and his master's degree in Data Communication Systems in 2006 at Brunel University. He was then awarded three years of funding for his PhD research from the Engineering and Physical Sciences Research Council (EPSRC) in the UK. In 2008, Karavakis started collaborating with the IT department of CERN for his doctoral research. He completed his doctorate in 2010 at Brunel University. Career Upon completing his PhD, Karavakis was hired as a post-doctoral researcher in the IT department of CERN in 2010 and became a CERN staff member in 2013. In May 2022, he joined Brookhaven National Laboratory. Based at CERN in Geneva, Switzerland, he is a core member of the PanDA (Production and Distributed Analysis) Workload Management System project team. PanDA is a high-performance, scalable system designed to manage and distribute computational tasks across large-scale, distributed computing environments. The project plays an important role in supporting the ATLAS experiment at CERN, the Vera C. Rubin Observatory in Chile, and sPHENIX at BNL. Before joining Brookhaven National Laboratory, Karavakis was the project leader of the File Transfer Service (FTS), the grid data transfer service used at the Worldwide LHC Computing Grid (WLCG). The WLCG is a global computing infrastructure of more than 180 data centres in 42 countries, with about 1,000,000 CPU cores, providing more than 10,000 physicists around the world with near real-time access to LHC data and the power to process it. Other projects that Karavakis was involved with include the Experiment Dashboard Monitoring System for the WLCG and the MONIT monitoring project at CERN. Karavakis has been a supporter of the CERN & Society Foundation, the charitable branch of CERN, since 2019. In an internal campaign aimed at raising awareness among CERN personnel, he was invited to share his insights about the Foundation through a video interview; the initiative sought to highlight the Foundation's role in extending the reach and impact of CERN's activities beyond its core research, benefiting the wider public. References External links Edward Karavakis's publications indexed by INSPIRE HEP Living people Greek computer scientists Computer scientists 21st-century Greek scientists Brookhaven National Laboratory staff Alumni of Brunel University London Scientists from Thessaloniki 1983 births People associated with CERN
Edward Karavakis
[ "Technology" ]
582
[ "Computer science", "Computer scientists" ]
75,135,830
https://en.wikipedia.org/wiki/BBM%20Enterprise
BBM Enterprise (abbreviated as BBMe) is a centralized instant messaging client provided by Canadian company BlackBerry Limited. BBMe is marketed as a secure messenger with end-to-end encryption. BBMe was launched in 2014, originally as BBM Protected, based on a revamped version of BBM (BlackBerry Messenger), the company's consumer-oriented instant messenger. Initially offered only to enterprise customers, BBMe was opened up to all customers in 2019 after the shutdown of the older consumer BBM service. Between client and server, messages in BBMe are encrypted using TLS. Each message is additionally encrypted with its own randomly generated public and private keys, generated using a FIPS 140-2 certified cryptographic library. According to BlackBerry Ltd., BBMe complies with the following standards: Digital signature FIPS 186-4 AES symmetric encryption standard FIPS 197 HMAC standard FIPS 198-1 based on SHA2-256 Cryptographic key generation standard NIST SP 800-133 Secure Hash standard FIPS 180-4 In addition, it makes use of EC-SPEKE, KDF and One-Pass DH (all National Institute of Standards and Technology algorithm standards) with "256-bit equivalent security". The service consists of group chats and voice and video calls. Unlike its predecessor, BBMe is not entirely free: it is free for the first year, after which it costs $2.49 for six months. On 1 May 2024, BBMe for Personal Use users were notified of the service's discontinuation effective 1 November 2024. References BlackBerry Instant messaging Instant messaging clients BlackBerry Limited BlackBerry software Cryptographic software Secure communication Internet privacy software
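BlackBerry does not publish BBMe's implementation, but the general pattern the listed standards point to (a fresh key pair per message, one-pass Diffie–Hellman, and a key-derivation step) can be sketched generically. The following is a hypothetical illustration using the Python cryptography library; the curve choice, HKDF parameters, and function names are assumptions made for the sketch, not BlackBerry's actual design:

```python
# Hypothetical sketch of one-pass ECDH with a fresh per-message key pair.
# Not BBMe's actual implementation; parameters here are illustrative only.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_message_key(recipient_pub: ec.EllipticCurvePublicKey):
    """Generate an ephemeral key pair for one message, run one-pass DH
    against the recipient's static public key, and derive a 256-bit key."""
    ephemeral = ec.generate_private_key(ec.SECP521R1())   # fresh per message
    shared = ephemeral.exchange(ec.ECDH(), recipient_pub)
    key = HKDF(algorithm=hashes.SHA256(), length=32,
               salt=None, info=b"per-message key").derive(shared)
    # The ephemeral public key travels with the ciphertext so the
    # recipient can repeat the agreement using their private key.
    return key, ephemeral.public_key()

# Usage: the recipient's static key would normally come from a key directory.
recipient = ec.generate_private_key(ec.SECP521R1())
key, eph_pub = derive_message_key(recipient.public_key())
recipient_key = HKDF(algorithm=hashes.SHA256(), length=32,
                     salt=None, info=b"per-message key").derive(
                         recipient.exchange(ec.ECDH(), eph_pub))
assert key == recipient_key  # both sides derive the same symmetric key
```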
BBM Enterprise
[ "Mathematics", "Technology" ]
345
[ "Instant messaging", "Cryptographic software", "Instant messaging clients", "Mathematical software" ]
75,136,868
https://en.wikipedia.org/wiki/Raygrantite
Raygrantite is a mineral first discovered in the Big Horn Mountains, Maricopa County, Arizona, US; more specifically, it was found in the Evening Star mine, a Cu, V, Pb, Ag, Au, and W mine. Raygrantite is a member of the iranite mineral group, which consists of hemihedrite, iranite, and raygrantite. The mineral received its name in honor of Raymond W. Grant, a retired professor who primarily focused on the minerals of Arizona. Raygrantite typically forms bladed crystals with striations parallel to the c axis. Its ideal chemical formula is Pb10Zn(SO4)6(SiO4)2(OH)2. The IMA (International Mineralogical Association) approved raygrantite in 2013, and the first publication describing the mineral appeared in 2017. Occurrence Raygrantite is associated with cerussite, galena, mattheddleite, lanarkite, leadhillite, anglesite, alamosite, hydrocerussite, diaboleite, and caledonite. Crystals were found in pockets encased in masses of galena. Raygrantite is a secondary mineral resulting from the alteration of pyrite-galena-chalcopyrite veins. In this district, intrusions can date back to the late Cretaceous period. Physical properties Raygrantite is a colorless, transparent mineral that occurs as bladed crystals with striations parallel to the c axis. Its luster is vitreous, meaning it looks similar to glass. Raygrantite has a hardness of 3 on the Mohs scale, 0.5 softer than a copper penny (hardness about 3.5). It exhibits brittle tenacity and has good cleavage along the {120} plane. The mineral also shows characteristic fishtail twinning on {12}, in addition to a twin axis along {010}. Its recorded density is 6.374 g/cm3. Optical properties Raygrantite is transparent with a vitreous luster. It is biaxial positive, meaning it refracts light along two axes. The measured 2V angle is 76(2)° and the calculated 2V is 85°. The refractive indices are: nα = 1.915(7), nβ = 1.981(7), nγ = 2.068(9). Dispersion is strong, v < r. Raygrantite also exhibits an absorption pattern of Z > Y > X. Chemical structure Raygrantite is isotypic with iranite and hemihedrite. In the chemical structure of the iranite mineral group, there are 10 symmetrically independent non-H cation sites. Of these sites, five are filled by lead, Pb2+ (Pb1, Pb2, Pb3, Pb4, and Pb5); three are filled by S6+ (S1, S2, and S3); one is filled by Si4+; and the last is filled by Zn2+. Raygrantite is composed of layers of tetrahedra and octahedra joined together by lead ions. Chemical composition X-ray crystallography To collect the diffraction data, a Bruker X8 APEX2 CCD X-ray diffractometer equipped with graphite-monochromatized Mo Kα radiation was used. These analyses show that raygrantite is a member of the triclinic crystal system; its point group is 1̄ (pinacoidal). The unit cell dimensions from the X-ray diffraction data are: a = 9.3175(4) Å, b = 11.1973(5) Å, c = 10.08318(5) Å. See also List of minerals References Natural materials Lead minerals Triclinic minerals Minerals in space group 2 Zinc minerals Sulfate minerals Wikipedia Student Program
Raygrantite
[ "Physics" ]
857
[ "Natural materials", "Materials", "Matter" ]
75,136,869
https://en.wikipedia.org/wiki/Takedaite
Takedaite is a borate mineral that was found in a mine in Fuka, Okayama Prefecture, Japan, during a mineralogical survey in 1994. During the survey, Kusachi and Henmi reported the occurrence of an unidentified anhydrous borate mineral closely associated with nifontovite, olshanskyite, and calcite. By 1994, two other minerals in the borate group M3B2O6 had been identified in nature: Mg3B2O6, known as kotoite, and Mn3B2O6, known as jimboite. Takedaite has the ideal chemical formula Ca3B2O6. The mineral was approved by the Commission on New Minerals and Mineral Names, IMA, to be named takedaite after Hiroshi Takeda, a professor at the Mineralogical Institute, University of Tokyo, Japan. Occurrence Takedaite is found in association with gehlenite, spurrite, bicchulite, rankinite, kilchoanite, oyelite, and fukalite. It occurs in a vein consisting of borate minerals that developed along the boundary between crystalline limestone and the skarns. The vein in which it was discovered was approximately 10 cm in thickness, and the mineral is closely associated with frolovite and calcite. At the circumference of the expanded area, hydrous borates such as nifontovite, olshanskyite, sibirskite, and pentahydroborite occurred in thicknesses of 20 cm to 50 cm. Physical properties Takedaite is a white or pale gray mineral with a vitreous luster, colorless in thin sections. It exhibits a hardness of 4.5 on the Mohs hardness scale. The density measured by heavy liquids was 3.10(2) g•cm−3, the calculated density being 3.11 g•cm−3. Optical properties Takedaite is optically uniaxial negative. The refractive indices are ω = 1.726 and ε = 1.630, and the Vickers microhardness was 478 (429–503) kg mm−2 (25 g load). The infrared spectrum of takedaite was measured by the KBr method over the region 4000 to 250 cm−1. The absorption bands at 907, 795, 710, and 618 cm−1 were in close agreement with those of the synthetic 3CaO·B2O3 reported by Wier and Schroeder (1964). The absorption bands at 1275 and 1230 cm−1 for takedaite were sharper. Chemical properties Takedaite is a borate containing calcium, boron, and oxygen. Chemical analysis gave CaO 71.13% and B2O3 28.41%; the H2O content, determined by ignition loss at 900 °C, was 0.14%, totaling 99.68%. The empirical formula calculated on the basis of O = 6 is therefore Ca3.053B1.965O6, or more ideally Ca3B2O6. Takedaite is also easily soluble in dilute hydrochloric acid. Chemical composition X-ray crystallography The X-ray powder data for takedaite were obtained with an X-ray diffractometer using Ni-filtered Cu Kα radiation. Single crystals were also studied using the precession and Weissenberg methods. Takedaite is in the trigonal crystal system. The space group is either R3̄c or R3c. The unit cell dimensions, refined by least squares from the X-ray powder diffraction data of takedaite, were: a = 8.638(1) Å, c = 11.850(2) Å. See also List of Minerals References Natural materials Borate minerals Calcium minerals Trigonal minerals Minerals described in 1994 Wikipedia Student Program
Takedaite
[ "Physics" ]
796
[ "Natural materials", "Materials", "Matter" ]
75,136,880
https://en.wikipedia.org/wiki/Aleutite
Aleutite is both a vanadate and an arsenate mineral, and it can also be considered a natural salt-inclusion phase. It was first discovered in the summer of 2015 at the Second scoria cone of the Great Tolbachik Fissure Eruption in Kamchatka, Russia. Aleutite is a fumarolic mineral found with many other newly discovered minerals at this location. It takes its name from the Aleuts, the ethnic group who are the original inhabitants of the Commander Islands, Aleutsky District, Kamchatka Krai. The mineral is very brittle and has a dark red color. Aleutite represents a new structure type; the structure was refined as a two-component twin with a twin ratio of 0.955:0.045. Occurrence Aleutite occurs as a product of fumarolic activity. It was found in the summer of 2015 in the Yadovitaya fumarole at the Second Scoria Cone of the Northern Breakthrough of the Great Tolbachik Fissure Eruption in Kamchatka, Russia. The Second Scoria Cone is located approximately 18 km SSW of the active shield volcano Ploskiy Tolbachik. The temperature of gases at the sampling location was about 300 °C. Aleutite could be deposited directly from the gas phase as a volcanic sublimate. All the recovered samples were immediately packed and isolated to avoid any contact with the atmosphere. Aleutite is very rare and is closely associated with anhydrite. Other associated minerals are euchlorine, kamchatkite, langbeinite, lyonsite, pseudolyonsite, tenorite, and hematite. Physical properties Aleutite occurs as individual crystals in masses of polycrystalline anhydrite. Aleutite is dark red, with a reddish-black streak, and has an adamantine luster. It is brittle, with no visible cleavage observed. Parting was not observed, and its fracture is uneven. The density could not be measured due to the lack of sufficient material; the calculated specific gravity is 4.887 g/cm3. Optical properties The optical properties of aleutite were measured in reflected light. The mineral has high refractive indices, which is typical of arsenates and vanadates. Reflectance measurements were made in air against a SiC standard over the range 400–700 nm. Aleutite is grey with a yellowish tint in reflected light; it is non-pleochroic, with abundant brown-red internal reflections and weak bireflectance. Chemical properties Aleutite is both a vanadate and an arsenate that may be compared to averievite, which has the formula Cu6(VO4)2O2Cl2, and to piypite, with the formula K2Cu2O(SO4)2. The empirical formula of aleutite, calculated on the basis of (As+V+Mo+Fe3+) = 2 apfu, is Сu5.40Zn0.05Ca0.01As1.09V0.84Mo0.04Fe0.03K0.05Pb0.02Rb0.01Cs0.01O9.97Cl1.07 or (Сu4.94Zn0.05Ca0.01)Σ5.00O2.11[(As2.11V0.42Mo0.02Fe0.02)Σ1.00OΣ3.93]2 ∙ (Cu0.46K0.05Pb0.02Rb0.01Cs0.01)Σ0.55Cl1.07. Taking into account structural data, the simplified formula is [Cu5O2](AsO4)(VO4)·(Cu0.5□0.5)Cl. Aleutite is soluble in hot H2O. Chemical composition X-ray crystallography Aleutite is in the monoclinic crystal system and has the space group C2/m. Its unit cell dimensions are as follows: a = 18.0788(9) Å, b = 6.2270(5) Å, c = 8.2445(3) Å, β = 90.56(4)º, V = 928.09(7) Å3, Z = 4. Aleutite has the point group 2/m. The [Cu5O2]6+ band in aleutite can be considered part of a kagome network. See also List of Minerals References Natural materials Monoclinic minerals Minerals in space group 12 Wikipedia Student Program Arsenate minerals Vanadate minerals Chloride minerals Copper minerals
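Since the cell is monoclinic, the reported volume can be cross-checked directly from the other parameters via V = a·b·c·sin β; a minimal sketch using only the standard library:

```python
# Cross-check of aleutite's monoclinic unit-cell volume: V = a*b*c*sin(beta).
import math

a, b, c = 18.0788, 6.2270, 8.2445  # cell edges in angstroms
beta = math.radians(90.56)         # monoclinic angle, degrees -> radians

volume = a * b * c * math.sin(beta)
print(f"V = {volume:.2f} cubic angstroms")  # ~928.1, matching 928.09(7)
```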
Aleutite
[ "Physics" ]
982
[ "Natural materials", "Materials", "Matter" ]
75,137,734
https://en.wikipedia.org/wiki/Function%20analysis%20diagram
A function analysis diagram (FAD) is a method used in engineering design to model and visualize the functions and interactions between components of a system or product. It represents the functional relationships through a diagram consisting of blocks, which represent physical components, and labelled relations/arrows between them, which represent useful or harmful functional interactions. Overview The FAD method was first proposed in a 1997 patent by the company Invention Machine Corporation as part of their TRIZ-based software tools. It has been further developed through research collaborations between academia and industry. FAD modeling is considered more intuitive than traditional function analysis methods like function trees and function structures because it incorporates the physical structure of the product. It allows capturing a richer network of functional relationships compared to the linear representations from other methods. The layout of the diagram can also mirror the spatial arrangement of components, conveying additional meaning. By explicitly mapping out functional interactions between components, FAD diagrams help capture the rationale of why a product is designed the way it is. Modeling harmful or undesired functions provides a starting point for generating design improvements. Modeling FAD diagrams consist of labelled blocks representing the physical components, users, or environmental resources related to the product. The relations between blocks are shown as labelled arrows that describe useful or harmful functional interactions. For example, a piston block can have a "compresses air" relation to a cylinder block. More complex FAD models can be created hierarchically by linking diagrams that focus on different system states or levels of detail. Research has developed techniques for providing overview visualizations of the network of linked FAD diagrams. While natural language terms are often used for labelling functional interactions in FAD, conventions and shorthands can be defined for recurring relation types to approach a modeling language. Examples include shorthand notation for effort and flow transformations in power systems. Intended benefits Intended benefits of FAD modeling include: Simple and intuitive notation Presence of product structure makes it easy to use Captures richer network of functional relationships Layout can reflect spatial arrangement of components Captures design rationale and reasoning Identifies areas for design improvement Can represent hierarchical/complex systems Applications FAD has been used to model and analyze engineering systems in domains including aerospace, manufacturing, and power systems. It provides an intuitive representation for sharing and discussing functional knowledge of product designs. Potential applications include: Capturing design rationale to reflect on decisions Supporting adaptive and variant design tasks Enabling reuse of functional knowledge Revealing areas for design improvement Improving manufacturing quality and reliability Tools While FAD diagrams can be created with general drawing and mapping tools, some engineering design software packages provide specific support for building FAD models. These include: Decision Rationale editor (DRed) TechOptimizer (Invention Machine Corporation) DesignVUE (Imperial College London) References Information systems Problem structuring methods
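The block-and-labelled-arrow structure described above maps naturally onto a directed graph, which makes small FAD models easy to prototype in software. A hypothetical sketch using the networkx library (the piston/cylinder relation is the example from the text; the other components and labels are invented for illustration):

```python
# A minimal FAD model as a directed graph: blocks are nodes and
# functional interactions are labelled edges marked useful or harmful.
import networkx as nx

fad = nx.DiGraph()

# Blocks: physical components (could also be users or environmental resources).
fad.add_nodes_from(["piston", "cylinder", "crankshaft", "exhaust gas"])

# Relations: labelled arrows between blocks.
fad.add_edge("piston", "cylinder", function="compresses air", effect="useful")
fad.add_edge("crankshaft", "piston", function="drives", effect="useful")
fad.add_edge("exhaust gas", "cylinder", function="overheats", effect="harmful")

# Harmful interactions are the starting point for design improvements.
harmful = [(u, v, d["function"])
           for u, v, d in fad.edges(data=True) if d["effect"] == "harmful"]
print(harmful)  # [('exhaust gas', 'cylinder', 'overheats')]
```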
Function analysis diagram
[ "Technology" ]
560
[ "Information systems", "Information technology" ]
75,138,286
https://en.wikipedia.org/wiki/Pictet%27s%20experiment
Pictet's experiment is the demonstration of the reflection of heat and the apparent reflection of cold in a series of experiments performed in 1790 (reported in English in 1791 in An Essay on Fire) by Marc-Auguste Pictet, ten years before the discovery of infrared radiation. The apparatus for most of the experiments used two concave mirrors facing one another at a distance. Radiation from an object placed at the focus of one mirror is reflected by that mirror into a beam, collected by the counterpart mirror, and concentrated at its focus. With a hot object at one focus and a thermometer at the other, the thermometer registers an increase in temperature. This was sometimes demonstrated with the explosion of a flammable mix of gases in a blackened balloon, as described and depicted by John Tyndall in 1863. After "demonstrating that radiant heat, even when it was not accompanied by any light, could be reflected and focused like light", Pictet used the same apparatus to demonstrate the apparent reflection of cold in a similar manner. This demonstration was important to Benjamin Thompson, Count Rumford, who argued for the existence of "frigorific rays" conveying cold. Rumford's continuation of the experiments and promotion of the topic caused Pictet's name to become attached to the experiment. The apparent reflection of cold when a cold object is placed at one focus surprised Pictet, and two scholars writing about the experiment in 1985 noted that "most physicists, on seeing it demonstrated for the first time, find it surprising and even puzzling." The confusion may be resolved by understanding that all objects in the system, including the thermometer, are constantly radiating heat. Pictet described this as "the thermometer acts the same part relatively to the snow as the bullet [heat source] in relation to the thermometer." Adding a very cold object introduces an effective heat sink, whereas a room-temperature object would, on net, neither cool nor warm a thermometer at the other focus. Modern replications and demonstrations There are relatively few published examples of demonstrations or recreation of the experiment. Two physicists in the University of Washington system reported on demonstrations to students and colleagues and produced directions for re-creating the experiment in 1985 as part of an investigation into the role of the experiment in the history of physics. Physicists at Sofia University in Bulgaria reported on reproducing the experiment for high school students in 2017. References External links The Pictet Cabinet: The art of teaching science through experiment, a 2011 pamphlet from the Musée d'histoire des sciences de la Ville de Genève (Museum of the History of Science of the City of Geneva) "Are There Rays of Cold?", an undated video demonstration in Russian from the Moscow Engineering Physics Institute 1790 in science 1791 in science Physics experiments Thermodynamics History of science
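The net-exchange explanation can be made quantitative with the Stefan–Boltzmann law. A rough illustration under idealized assumptions (blackbody surfaces, unit area, perfect geometric coupling between the foci, none of which the real mirror apparatus fully achieves):

```python
# Net radiative flux toward a room-temperature thermometer from an object
# at the opposite focus, idealized as blackbody exchange per unit area:
# q_net = sigma * (T_object**4 - T_thermometer**4).
SIGMA = 5.670e-8       # Stefan-Boltzmann constant, W m^-2 K^-4
T_THERMOMETER = 293.0  # room-temperature thermometer, in kelvin

def net_flux(t_object: float) -> float:
    """Positive values warm the thermometer; negative values cool it."""
    return SIGMA * (t_object**4 - T_THERMOMETER**4)

print(net_flux(373.0))  # hot object (boiling water): positive, warming
print(net_flux(293.0))  # room-temperature object: zero net exchange
print(net_flux(263.0))  # snow at -10 C: negative, the thermometer cools
```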
Pictet's experiment
[ "Physics", "Chemistry", "Mathematics", "Technology" ]
587
[ "Physics experiments", "History of science", "Experimental physics", "Thermodynamics", "History of science and technology", "Dynamical systems" ]
75,138,662
https://en.wikipedia.org/wiki/Canadian%20Chemical%20Workers%27%20Union
The Canadian Chemical Workers' Union (CCWU) was a trade union in Canada. The union was established in 1975 by 30 Canadian locals, formerly affiliated to the International Chemical Workers' Union. On formation, it had 2,825 members. In 1976, women working as clerks in the Bank of Nova Scotia and Canadian Imperial Bank of Commerce wished to unionize. One was married to a CCWU steward, and the recent formation of the CCWU gave it a high profile. As a result, the union took them into membership, but the following year instead assisted them in forming the Canadian Union of Bank Employees. The union grew steadily, and by 1980, it had 7,214 members in 56 locals, led by Kenneth V. Rogers. That year, it merged with the Canadian district of the Oil, Chemical and Atomic Workers International Union and some directly chartered local unions in Quebec, to form the Energy and Chemical Workers Union. References 1975 establishments in Canada 1980 disestablishments in Canada Defunct trade unions in Canada Chemical industry trade unions Trade unions disestablished in 1980 Trade unions established in 1975
Canadian Chemical Workers' Union
[ "Chemistry" ]
223
[ "Chemical industry trade unions" ]
75,138,932
https://en.wikipedia.org/wiki/C4H7NO2S
{{DISPLAYTITLE:C4H7NO2S}} The molecular formula C4H7NO2S may refer to: Dapansutrile Thioproline
C4H7NO2S
[ "Chemistry" ]
40
[ "Isomerism", "Set index articles on molecular formulas" ]
75,139,616
https://en.wikipedia.org/wiki/284%20%28number%29
284 is the natural number following 283 and preceding 285. In mathematics 284 is an even composite number with two distinct prime factors. 284 forms the first pair of amicable numbers together with 220: the proper divisors of 220 sum to 284, and the proper divisors of 284 sum to 220. 284 can be written as a sum of exactly 4 nonzero perfect squares. 284 is a nontotient number, meaning that the equation φ(x) = 284 has no solution. 284 is a number that is the nth prime plus n: it is the 51st prime number (233) plus 51. References Integers
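The amicable property is straightforward to confirm; a minimal sketch using only the standard library:

```python
# Check that 220 and 284 form an amicable pair: the sum of the proper
# divisors of each number equals the other.
def aliquot_sum(n: int) -> int:
    """Sum of the proper divisors of n (all divisors except n itself)."""
    return sum(d for d in range(1, n // 2 + 1) if n % d == 0)

assert aliquot_sum(220) == 284
assert aliquot_sum(284) == 220
print("220 and 284 are amicable")
```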
284 (number)
[ "Mathematics" ]
125
[ "Elementary mathematics", "Integers", "Mathematical objects", "Numbers" ]
75,140,051
https://en.wikipedia.org/wiki/Chloryl%20tetraperchloratoaurate
Chloryl tetraperchloratoaurate is an inorganic chemical compound with the formula ClO2Au(ClO4)4, consisting of the chloryl cation and a tetraperchloratoaurate anion. It is an orange solid that readily hydrolyzes in air. Production and reactions Chloryl tetraperchloratoaurate is produced by the oxidation of gold metal, gold(III) chloride, or chloroauric acid by dichlorine hexoxide: 2 AuCl3 + 8 Cl2O6 → 2 ClO2Au(ClO4)4 + 6 ClO2 + 3 Cl2 Attempts to produce gold(III) perchlorate by heating this compound instead yield an oxy-perchlorato derivative. References Gold(III) compounds Perchlorates chloryl compounds
Chloryl tetraperchloratoaurate
[ "Chemistry" ]
184
[ "Perchlorates", "Salts" ]
75,140,490
https://en.wikipedia.org/wiki/286%20%28number%29
286 is the natural number following 285 and preceding 287. In mathematics 286 is an even composite number with 3 distinct prime factors. 286 and 268 form the smallest pair of nontotients that are anagrams of each other. 286 is a tetrahedral number: it is the number of objects in a tetrahedron with 11 layers, since 11 × 12 × 13 / 6 = 286. 286 is a sphenic number, meaning that it is the product of exactly 3 distinct primes (286 = 2 × 11 × 13). 286 is the first even pseudoprime to base 3: although composite, it satisfies 3^285 ≡ 1 (mod 286). References Integers
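The pseudoprime claim is quick to verify with modular exponentiation; a minimal sketch using only the standard library:

```python
# Check that 286 is an even Fermat pseudoprime to base 3.
n = 286
assert n == 2 * 11 * 13          # composite, and sphenic
assert pow(3, n - 1, n) == 1     # passes the base-3 Fermat test anyway
assert n == 11 * 12 * 13 // 6    # also the 11th tetrahedral number
print("286 passes the base-3 Fermat test despite being composite")
```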
286 (number)
[ "Mathematics" ]
89
[ "Elementary mathematics", "Integers", "Mathematical objects", "Numbers" ]
75,142,598
https://en.wikipedia.org/wiki/DMT%20%28company%29
DMT GmbH & Co. KG is an engineering and consulting company based in Essen, Germany. Founded in 1990, DMT has been a subsidiary of TÜV NORD AG since 2007. It operates within the TÜV NORD GROUP, providing engineering and consulting services. The DMT Group consists of 16 engineering and consulting firms with global locations and around 1,100 employees. The group's annual revenue is approximately €130 million. DMT collaborates on research projects with industry, research institutions, and universities internationally. History DMT GmbH & Co. KG was formed in 1990, but it evolved from a series of corporate mergers dating back to 1737, when the Märkische Gewerkschaftskasse was founded. In the 1990s, the business lines were integrated into the newly created Deutsche Montan Technologie für Rohstoff, Energie, Umwelt e.V., from which two companies emerged: DMT-Gesellschaft für Forschung und Prüfung mbH DMT-Gesellschaft für Lehre und Bildung mbH Organization DMT GmbH & Co. KG operates 19 testing and specialist centers for safety, 17 of which are accredited or officially recognized. The company employs approximately 100 recognized experts. Service Areas DMT provides engineering services in areas such as plant construction, process engineering, civil engineering, mining, and oil and gas. Its core activities include engineering, consulting, geotechnics, and exploration. The company also develops measurement and monitoring systems for various industries. Geo Engineering and Exploration In this sector, DMT offers services for the development, planning, and monitoring of infrastructure projects, including geotechnical and site investigation, route engineering, and geomonitoring. Mining Consulting and Engineering DMT assists investors, governments, and mining operators throughout the entire lifecycle of a mine, offering services such as resource prospecting, feasibility studies, and mine planning. Industrial Engineering DMT designs and constructs process engineering plants for coking, the chemical industry, and material handling systems for mining. Plant and Product Safety In this domain, DMT addresses safety aspects in various industries, including fire and explosion protection, building systems safety, and product safety and quality. Exploration Seismics DMT provides solutions for reflection and refraction seismic studies. Products DMT develops monitoring systems and testing equipment, including geodetic measurement systems, monitoring systems, and devices for flow measurement. The company also offers services for optimization in steel production. References Companies established in 1990 Engineering consulting firms Mining companies Geotechnical engineering
DMT (company)
[ "Engineering" ]
506
[ "Engineering consulting firms", "Geotechnical engineering", "Civil engineering", "Engineering companies" ]
75,143,688
https://en.wikipedia.org/wiki/Phlotoxin%201
Phlotoxin (PhlTx1, μ-TRTX-Pspp-1) is a neurotoxin from the venom of tarantulas of the genus Phlogiellus that mostly targets voltage-gated sodium channels, mainly Nav1.7. The only non-sodium voltage-sensitive channel inhibited by phlotoxin is Kv3.4. Nav1.4 and Nav1.6 seem to be phlotoxin-1-sensitive to some extent as well. Etymology Another name for phlotoxin is μ-TRTX-Pspp-1: μ denotes NaV channel inhibition, and TRTX stands for theraphotoxin, a group of toxins found in the family Theraphosidae. Sources Phlotoxin was first purified, characterized and sequenced from Phlogiellus sp. Phlogiellus is a genus of tarantulas. Its venom, which includes several neurotoxic peptides like phlotoxin, targets diverse ion channels and chemical receptors. Chemistry Structure Phlotoxin-1 (PhlTx1), with a mass of around 4058.83 Da, is a 34-amino-acid peptide featuring three disulfide bridges and organized in the Inhibitor Cystine Knot (ICK) architectural motif, which stabilizes its structure: two of the disulfide bonds combine to create a circular arrangement, through which the third disulfide bond passes. It is classified as a member of the NaV channel spider toxin (NaSpTx) family 1. The structure of phlotoxin comprises six cysteine residues forming an ICK fold, with amidation at the C-terminus; the disulfide bridges are organized as Cys2–Cys17, Cys9–Cys22, and Cys16–Cys29. The proximity of Cys16 and Cys17 makes the peptide challenging to synthesize, even though phlotoxin is commercially available. Homology The sequence similarity of PhlTx1 with other peptides does not exceed 59%. Within the NaSpTx family, the closest matches to PhlTx1 in terms of inhibitory IC50 are HnTx-III and HwTx-I. PhlTx1 is categorized within the NaSpTx-1 family primarily because of its disulfide bridges. Notably, the inclusion of three proline residues (Pro11, Pro18, and Pro27) introduces the potential for cis–trans isomerism. This dynamic property can influence the precise formation of secondary structures and the correct alignment of disulfide bridges, thereby impacting the overall structural integrity of the toxin. Target In its effects on the sodium channel Nav1.7/β1, PhlTx1 appears to share similarities with TTX (tetrodotoxin): both block the channel pore, resulting in a noticeable decrease in sodium currents. Moreover, the behavior of the channel, as reflected in gating parameters, remains largely unaffected by the presence of PhlTx1. This observation suggests a comparable behavior between PhlTx1 and TTX in modulating the function of Nav1.7/β1 channels. The IC50 for PhlTx1 inhibition of Nav1.7 is 39 ± 2 nM. PhlTx1 affects all hNav (human voltage-gated Na) channels to a different degree, except hNav1.8. PhlTx1 shows poor selectivity towards hNav1.1 and 1.3, and has shown a high affinity towards hNav1.7. Mode of action The amino acids critical for binding of the hNaV1.7 subtype were identified by substituting them with alanine. When tryptophan at position 24, lysine at position 25 and tyrosine at position 26 are replaced with alanine, there is a complete loss of affinity, highlighting the critical role of these amino acids in the binding process to Nav1.7. 
Other substitutions, such as at alanine 1, serine 8, or lysine 12 or 15, result in only a slight change (less than 2.8-fold) in variant affinity, whereas substituting aspartate at position 7 leads to an increase in variant affinity (IC50 = 47.0 ± 40.9). Therapeutic use Phlotoxin-1 (PhlTx1) has demonstrated selectivity in inhibiting the voltage-gated sodium channel NaV1.7. The potential of NaV1.7 inhibitors as antinociceptive agents became apparent when loss-of-function mutations in the NaV1.7 gene were found to result in a congenital inability to perceive pain. Notably, these peptides do not independently exhibit antinociceptive effects; however, when co-administered with exogenous opioids, they bring about an analgesic effect, allowing for a significant reduction in opioid dosage. The mechanism underlying the synergistic effect of opioid receptor agonists with selective NaV1.7 inhibitors remains unknown, but this discovery presents a novel approach to pain management. The primary method for evaluating this property is the formalin test. However, the peptide's poor selectivity against the hNav1.5 and 1.6 subtypes may be associated with cardiac and neuromuscular side effects in vivo, respectively, which could limit its potential use as an analgesic molecule. References External links Peptides Spider toxins Sodium channel blockers Ion channel toxins
Phlotoxin 1
[ "Chemistry" ]
1,196
[ "Biomolecules by chemical classification", "Peptides", "Molecular biology" ]
75,144,170
https://en.wikipedia.org/wiki/Graphitization
Graphitization is a process of transforming a carbonaceous material, such as coal or the carbon in certain forms of iron alloys, into graphite. Process The graphitization process involves a restructuring of the molecular structure of the carbon material. In the initial state, these materials can have an amorphous structure or a crystalline structure different from that of graphite. Graphitization generally occurs at high temperatures and can be accelerated by catalysts such as iron or nickel. When carbonaceous material is exposed to high temperatures for an extended period of time, the carbon atoms begin to rearrange and form layered crystal planes. In the structure of graphite, carbon atoms are arranged in flat hexagonal sheets that are stacked on top of each other. These crystal planes give graphite its characteristic flake structure and its specific properties, such as good electrical and thermal conductivity, low friction and excellent lubrication. Interest Graphitization can be observed in various contexts. For example, it occurs naturally during the formation of certain types of coal or graphite in the Earth's crust. It can also be artificially induced during the manufacture of specific carbon materials, such as the graphite electrodes used in fuel cells, nuclear reactors or metallurgical applications. Graphitization is of particular interest in the field of metallurgy. Some iron alloys, such as cast iron, can undergo graphitization heat treatment to improve their mechanical properties and machinability. During this process, the carbon dissolved in the iron alloy matrix separates and restructures as graphite, which gives the cast iron specific characteristics such as improved ductility and wear resistance. Notes and references Molecular physics Metallurgy Materials science
Graphitization
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
349
[ "Applied and interdisciplinary physics", "Molecular physics", "Metallurgy", "Materials science", " molecular", "nan", "Atomic", " and optical physics" ]
75,144,896
https://en.wikipedia.org/wiki/Soara%20and%20the%20House%20of%20Monsters
Soara and the House of Monsters is a Japanese fantasy adventure manga series written and illustrated by Hidenori Yamaji. Shogakukan has published it since 2021 in its magazine Shōnen Sunday S and on its webcomic platform Sunday Webry, and also releases it in collected tankōbon volumes. Since 2023, these have been released in English by Seven Seas Entertainment. The story follows the human woman Soara, who after the end of a war between monsters and humans joins a group of traveling dwarf architects who build houses for various types of monsters. It was nominated for the Next Manga Award for best web manga in 2022 and 2023, and was well received for its artwork of monster houses and its themes of prejudice and understanding. Plot Soara and the House of Monsters is a fantasy adventure story following Soara, an orphan who has been trained to fight in the war between humans and monsters, but is left without a purpose or a home when peace is declared just as she becomes old enough to be a knight. Wandering aimlessly, she encounters a group of traveling dwarf architects who renovate and build dream homes for monsters, such as goblins, slimes, and dragons. Soara is distrustful of monsters, but is moved by the architects' work, and joins them in their travels. Production and release Soara and the House of Monsters is written and drawn by Hidenori Yamaji, who originally created the series from a desire to combine the fantasy genre with something relatable to readers. He considered housing as a theme to be simultaneously unusual in fantasy manga and very familiar to readers, so he chose to write a story about architecture with fantasy elements. When drawing the artwork, Yamaji focused on drawing houses that were not only entertaining and exciting to look at, but also structured in a believable way that would have made sense for the monsters if they were real. To help readers better visualize the houses, he chose to draw detailed cross-sections of them. The manga has been serialized by Shogakukan since November 25, 2021, both in its monthly shōnen manga magazine Shōnen Sunday S and on its webcomic platform Sunday Webry. Shogakukan has also published the series in collected tankōbon volumes since May 12, 2022, under its Sunday Webry Comics imprint; since 2023, these have been released in English by Seven Seas Entertainment and in Chinese by Tong Li Publishing. Volumes Chapters not released in collected volumes The following chapters have not been released in collected tankōbon volumes as of the release of volume 4: Reception Soara and the House of Monsters was nominated for the Next Manga Award in the Best Web Manga category in 2022 and 2023, and for the Japan Society and Anime NYC's first American Manga Awards in the Best New Manga category in 2024. The manga has been well received by critics. Christopher Farris and Rebecca Silverman, both writing for Anime News Network, appreciated its themes of prejudice and of understanding how all living things share basic needs and comforts, without feeling too heavy-handed in delivery. Farris found it refreshing to see a new take on the fantasy genre, with appealing and "hog-wild" house designs and fun solutions to their problems, although he felt a disconnect in the "before-and-after" visuals because the monsters' homes are often demolished and rebuilt rather than renovated; Silverman also liked the artwork, calling it a "triumph of fantasy building".
The Japanese entertainment news site Magmix also praised the art of the houses, calling them "fantastic and powerful" and effective at immersing the reader in the setting. See also Marry Grave, another manga series by Hidenori Yamaji References External links 2021 webcomic debuts Adventure anime and manga Fantasy anime and manga Japanese webcomics Seven Seas Entertainment titles Shogakukan manga Webcomics in print Works about architecture
Soara and the House of Monsters
[ "Engineering" ]
788
[ "Works about architecture", "Architecture" ]
75,145,233
https://en.wikipedia.org/wiki/Ant%20communication
Ant communication in most species involves pheromones: chemical trails that other ants can detect and follow. However, ants of some species can communicate without using pheromones or chemical trails at all. In particular, red wood ants are able to pass information about a distant food source using antennal code alone. Communication using chemical trails Ants have many different pheromones, depending on the species. When an ant finds something of interest, whether food or an enemy, it excretes a chemical substance and deposits it along the ground on its way back to the colony. When a different worker touches the trail with its antennae, it senses the trail and changes its own behavior depending on the specific pheromone. If it is a food trail, the worker follows the trail to find the food; if it finds the food, it returns to the colony and reinforces the trail, recruiting more and more workers to follow it. Defense works similarly: when an enemy is detected, workers attack it inside a circle of pheromone rather than along a trail. Communication without using chemical trails Ants of some species, such as red wood ants (Formica s.str.), are able to communicate information about distant food sources to each other using antennal code alone, in a manner distantly similar to the dance language of bees. In these species, there exist teams of constant composition. Each team has one leader, called a scout, and about ten followers (foragers). The scout finds the food source and communicates its location to the followers, who are then able to find the food source without the scout. This has been established by experiments using various artificial maze designs, including binary trees. While the scout was communicating with its team, the experimenters changed the maze to make sure the team could not use any chemical trails, and subsequently isolated the scout. The language these ants use is rather sophisticated: the ants adapt their communication, using shorter messages for frequently visited locations and compressing some of the more regular messages. Using a method based on measuring the time it takes the ants to communicate various messages, it has been shown that they can use simple arithmetic operations. References Communication Animal communication Pheromones
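The positive feedback loop described above (deposit, follow, reinforce) can be illustrated with a toy simulation. The sketch below is not from the literature cited in this article; it is a minimal model, with arbitrary parameter values, of two equal paths where each ant chooses a path with probability weighted by its pheromone level and then deposits more pheromone on it, so that one path eventually dominates.

```python
import random

def simulate_trail_choice(n_ants=1000, evaporation=0.01, deposit=1.0, alpha=2.0):
    """Toy model of trail reinforcement: two equal paths; each ant
    picks a path with probability proportional to pheromone**alpha,
    then deposits pheromone on the path it chose."""
    pheromone = [1.0, 1.0]          # both trails start equally (weakly) marked
    counts = [0, 0]
    for _ in range(n_ants):
        weights = [p ** alpha for p in pheromone]
        path = random.choices([0, 1], weights=weights)[0]
        counts[path] += 1
        pheromone[path] += deposit  # a returning worker strengthens its trail
        pheromone = [p * (1 - evaporation) for p in pheromone]  # trails fade
    return pheromone, counts

random.seed(1)
final_pheromone, ants_per_path = simulate_trail_choice()
print(ants_per_path)  # random symmetry breaking: one path attracts most ants
```

Even though both paths are identical, small random fluctuations are amplified by the reinforcement loop, which is why a single strong trail emerges in real colonies.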
Ant communication
[ "Chemistry" ]
482
[ "Pheromones", "Chemical ecology" ]
75,145,251
https://en.wikipedia.org/wiki/Crematogaster%20aurora
Crematogaster aurora is a valid, extinct species of myrmicine ant that lived in Baltic Europe about 46 million to 43 million years ago, during the Eocene epoch of the Cenozoic era. C. aurora resembles the ant genus Acanthomyrmex and shares some similarities with the ant genus Pristomyrmex. The known fossil of C. aurora is a queen ant, brown in coloration. It probably died by drowning in a lake approximately 46 million years ago. References Insects described in 2015 aurora Eocene insects of Europe Baltic amber Species known from a single specimen Fossil ant taxa
Crematogaster aurora
[ "Biology" ]
126
[ "Individual organisms", "Species known from a single specimen" ]
75,145,287
https://en.wikipedia.org/wiki/Outgroup%20favoritism
Outgroup favoritism is a social psychological construct intended to capture why some socially disadvantaged groups will express favorable attitudes (and even preferences) toward social, cultural, or ethnic groups other than their own. Considered by many psychologists to be one of a variety of system-justifying motives, outgroup favoritism has been widely researched as a potential explanation for why groups—particularly those disadvantaged by the normative social hierarchy—are motivated to support, maintain, and preserve the status quo. Specifically, outgroup favoritism contrasts with ingroup favoritism, the idea that individuals exhibit a preference for members of their own group over members of the outgroup. Outgroup favoritism and system justification In a 1994 review of the existing literature on the ideas people employ to legitimize and support structures and behaviors, psychologists John T. Jost and Mahzarin Banaji observed that the existing theories of ego-justification (i.e., the utilization of stereotypes as a means to protect the self) and group justification (i.e., the utilization of stereotypes to protect the status of a given social group) could not adequately explain why members of a given ingroup would express negative stereotypes about themselves, oftentimes leveraging these in contexts that disadvantaged their own group. Thus, it was arguably the attempt to explain the phenomenon of outgroup favoritism that led to the development of what would later become system justification theory. According to Jost and Banaji, system justification theory is constructed around the notion that people have three basic needs: 1) a need for certainty and meaning, 2) a need for safety and security, and 3) a need for a shared reality (i.e., epistemic, existential, and relational needs). Taking inspiration from the immense body of work already examining how people justify their experiences to themselves on the individual and group levels, Jost and Banaji additionally proposed that people meet these three needs on a systemic level. Contrary to the long-standing idea that strong identification with a group motivates individuals to preserve a positive image of that group, system justification theory is founded upon the idea that people meet their epistemic, existential, and relational needs on a systemic level, sometimes above and beyond the individual and group levels. Thus, conceptualized within a system justification theory framework, outgroup favoritism is best understood as an expression of how people are motivated to defend and preserve the status quo of a given system even when its normative ideologies and practices run counter to their own interests. Proof of concept The current research on this phenomenon tends to fall into three dominant streams. The first of these examines assessments of outgroup favoritism at the level of group attitudes. Work in this area commonly involves asking members of socially disadvantaged groups the extent to which they would support policies or structures that favor socially advantaged groups. Scholars have examined group-level expressions of outgroup favoritism along dimensions ranging from political ideology to economic status to gender.
For example, in one of the classic (albeit somewhat debated) studies, Mark Hoffarth and John Jost analyzed two different samples of sexual minority participants to examine the relationship between implicit stereotypic attitudes about sexual minorities, political orientation, and support for same-sex marriage. Across the two samples, the authors found a three-way interaction among the implicit association of sexual minorities with negative stereotypes, conservative political ideology, and support for same-sex marriage. Specifically, they found support for their original hypotheses that political conservatism is strongly associated with the endorsement of negative stereotypes on the implicit level and with opposition to same-sex marriage, even amongst sexual minority groups. While the exact interpretation of these findings is still a topic of debate within the system justification literature, this study is one of the most widely cited within the academic community for demonstrating that even groups disadvantaged by the (in this case, legal) structures of the existing status quo will express and employ negative stereotypes about their own group and oppose policies in ways that appear to contradict their own interests. Proposed mechanisms and correlates The second predominant stream within the literature investigates the potential mechanisms and correlated constructs that might fuel the behaviors characteristic of outgroup-favoritism-based motivations. In this area, scholars have struggled to isolate the mechanisms behind outgroup favoritism specifically from those of system-justifying motives more broadly. Consequently, much of the literature in this area tends to focus on how outgroup favoritism interacts with other components of system justification theory, such as negative self-stereotyping, depressed entitlement, and the role of individual beliefs. Implicit associations According to the American Psychological Association's dictionary, implicit association captures the subconscious attitudinal associations people express toward various object/evaluative pairings. The most common method of capturing these underlying attitudes is the implicit association test (IAT), a task in which participants are asked to sort members of specific categories (e.g., race) into specific evaluative categories (e.g., good/bad). One common method for capturing outgroup favoritism is thus via the implicit association test, the idea being that minority group members exist within a societal context that repeatedly reinforces their minority (and, commonly, inferior) status. Scholars argue that this repeated exposure embeds the rationalization of social inequality at an automatic level, such that outgroup preference expresses itself most saliently in implicit measures. As an example of how this operates, Ashburn-Nardo and Johnson (2008) recruited 110 African-American undergraduate students and asked them to categorize faces across two categories: Black/White and pleasant/unpleasant. After completing the IAT task, participants were presented with a task and told that their partner would be either Black or White. Participants were then asked to rate their partner in terms of performance expectations and likability. The authors found that for stereotypically "White" tasks, African Americans implicitly favored Whites, giving them higher performance evaluations and likability ratings—the implication being that, in strongly racially stereotyped contexts, individuals from minority groups will implicitly express outgroup favoritism.
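For readers unfamiliar with how IAT responses are turned into a single bias index, the following sketch shows a simplified version of the D-score computation: the difference in mean response latencies between block types, scaled by the pooled standard deviation. This is an illustrative simplification with made-up latencies, not the exact scoring procedure used in the studies cited above (the full algorithm of Greenwald and colleagues also filters extreme latencies and penalizes errors).

```python
from statistics import mean, stdev

def iat_d_score(compatible_rts, incompatible_rts):
    """Simplified IAT D-score: (mean incompatible latency - mean
    compatible latency) divided by the standard deviation of all
    trials from both block types."""
    pooled_sd = stdev(compatible_rts + incompatible_rts)
    return (mean(incompatible_rts) - mean(compatible_rts)) / pooled_sd

# Hypothetical response times in milliseconds for one participant.
compatible = [612, 587, 645, 603, 578, 630]       # e.g., outgroup + "pleasant" pairing
incompatible = [742, 801, 765, 728, 790, 756]     # e.g., ingroup + "pleasant" pairing
print(round(iat_d_score(compatible, incompatible), 2))
```

A positive score here indicates faster responding on the "compatible" pairings; larger magnitudes are read as stronger implicit preference.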
Negative self-stereotyping Negative self-stereotyping refers to the idea that members of various groups will express or endorse unflattering, and even outright harmful, group stereotypes about members of their own group. While much of this work concentrates on gender, scholars have demonstrated that negative self-stereotyping occurs across a variety of social identities, including race and sexuality. As an example of how this works (and of its proposed connection to outgroup favoritism), Burkley and Blanton conducted a 2008 study in which they asked men and women to complete a math test. All participants received failure feedback and were additionally asked to complete a stereotype endorsement measure, with the order of these two components varying across conditions. The authors found that women were far more likely to embrace a negative stereotype about gendered math ability after receiving failure feedback, which they interpret as supporting the notion that individuals will palliatively leverage negative stereotypes against their own group. Extending this work, other scholars in this area have conducted studies on how women will negatively self-stereotype themselves as lacking a wide range of "masculine" traits or competencies after being exposed to information that threatens a gendered status quo. However, similar to Jost and Hoffarth's analyses of conservative sexual minority members, scholars continue to critique how negative self-stereotyping interacts with outgroup favoritism. Though many agree that the two share close links, there is an ongoing debate as to whether negative self-stereotyping is an expression of outgroup favoritism or whether it should be operationalized and studied as an independent but related concept. On the one hand, several authors argue that, because outgroup favoritism is operationalized as a motive rather than a behavior or attitude, negative self-stereotyping is a clear behavioral and attitudinal expression of an underlying outgroup preference motive that is itself the product of internalized inferiority (essentially, that the stereotyping behavior cannot occur without a motive, and the motive itself cannot be measured independently of its behavioral correlate). Jost explicitly states that it is "not that people have a special motivation to favor the outgroup merely because it is an outgroup. Rather, outgroup favoritism is seen as one manifestation of the tendency to internalize and thus perpetuate the system of inequality." Furthermore, given that system justification theory is motivation-based, some scholars propose that behavioral and attitudinal constructs like negative self-stereotyping should not be considered independently of their motives in a purely motive-based understanding of system justification. On the other hand, those who consider negative self-stereotyping a separate construct under the system justification umbrella note that negative self-stereotyping mediates similar outcomes to outgroup favoritism regardless of whether outgroup favoritism is considered as a variable. The amorphous nature of this debate is not helped by research indicating that negative self-stereotyping and outgroup favoritism engender similar beneficial and detrimental outcomes.
For example, many scholars' findings support that both negative self-stereotyping and outgroup favoritism have similarly palliative effects by allowing individuals within unjust systems to rationalize the status quo as fair and valid (in line with system justification theory). Specifically, this work finds that both constructs provide the positive effect of buffering one's self-image against personal and social threats. Additionally, in line with the "as sub-components" argument, research has demonstrated that the rationalization produced by both negative self-stereotyping and outgroup favoritism allows individuals to justify existing inequality. Scholars have found that, for both constructs, the perception that preservation of the status quo is the most important goal within a society has the detrimental side effect of reducing the drive to challenge or change existing discriminatory systems by relieving individuals of their personal responsibility to engage in such efforts. Due to the similarities in outcomes for both constructs, research has trended toward looking at negative self-stereotyping and outgroup favoritism as interactive system justification components, but this is an area still under discussion. Depressed entitlement Within the psychological literature, entitlement is defined as the judgments people make about their deservingness of specific outcomes based upon their identity or their actions. In 1997, as part of the evolving solidification of system justification theory, Jost and Banaji proposed that one of the important cognitive mechanisms for reconciling outgroup favoritism is a depressed sense of what a given individual deserves. Essentially, in order to hold the idea that the outgroup is more favorable and therefore more deserving of specific outcomes that preserve the status quo, the oppressed ingroup must rationalize these beliefs with a depressed sense of entitlement to various cognitive, social, and psychological resources within a system. The classic study in this area was conducted by Jost in 1997. Jost recruited 132 undergraduate students (68 men and 64 women) from Yale College. The participants were asked to generate "open-ended thought-lists" in response to a prompt and later to evaluate the quality and deservingness of their own efforts. The thought-lists were then rated by two independent judges (one woman and one man) who were unaware of the hypotheses and the gender of the participants. The judges evaluated the thought-lists on seven dimensions: meaningfulness, logicality, sophistication, vividness, persuasiveness, originality, and insightfulness. The purpose of this rating procedure was to ensure that there were no differences in the objective quality of thought-lists generated by men and women. Jost found that the independent judges perceived no differences in quality between thought-lists written by men and those written by women on any of the seven dimensions, indicating that the objective quality of the thought-lists did not differ by the gender of the author. However, when participants evaluated and paid themselves for their thought-list contributions, women's self-ratings were significantly lower than men's on the dimensions of self-payment and insight.
According to Jost, the finding that the independent judges did not perceive any differences in the quality of thought-lists generated by men and women, but women evaluated and paid themselves differently by rating their own contributions lower than men demonstrated the "depressed-entitlement effect" observed in previous research. Specifically, that depressed entitlement may be the cognitive mechanism that leads to the expression of outgroup preference (though, like most dimensions of system justification theory, this is a matter of academic debate). Role of individual belief systems Just-world fallacy Originally proposed by Melvin J. Lerner in 1980, the just-world fallacy proposes that individuals have a need to believe that their environment is a just and orderly place where people usually get what they deserve. In confirming the existence of this cognitive bias, Lerner and Simmons conducted what has now become the classic study in the just world fallacy literature. Incorporating heavy influence from Stanley Milgram, the researchers asked participants to observe a confederate receiving electric shocks. The severity of the shocks and the innocence of the victim were manipulated. The researchers found that participants tended to derogate the victim more when the shocks were severe, suggesting that people are more likely to blame innocent victims when the outcomes are more negative. Given the advances in ethics in the social sciences that constrain such methodologies, but still inspired by Lerner and Simmons' original work, current research in this area commonly involves presenting participants with scenarios or vignettes that involve innocent victims experiencing negative outcomes. Participants are then asked to evaluate the victims and assign responsibility or blame for their situation. These studies often manipulate the severity of the outcome or the perceived innocence of the victim to examine how these factors influence participants' reactions. Extensions of this work typically involve manipulating factors such as the attractiveness or likability of the victim, the presence of empathy instructions, or the level of personal involvement in the situation. These studies consistently show that people are more likely to derogate innocent victims when they perceive the world as just and orderly. In terms of outgroup favoritism, researchers have proposed that just world beliefs potentially contribute to the expression of favorable attitudes toward advantaged groups. Specifically, some researchers propose that just world beliefs serve as an ideological foundation for outgroup favoritism, the logic being that in a just and fair hierarchical system, a position of advantage is internally attributable to the members of the advantaged group (i.e., advantaged group members must deserve what they have because the world is a fair place). Meritocracy In a similar vein to just world beliefs, the American Psychological Association's dictionary defines meritocracy as a system that rewards individuals based on what they accomplish within said system. Specifically, the term was first credited to the sociologist Michael Young in his book The Rise of the Meritocracy. Given the complexity of meritocracy as a concept, researchers have historically focused on the role of meritocratic beliefs in informing prejudices and biases. 
For example, several sociological and psychological scholars have found that meritocratic beliefs are correlated with increased prejudice and discrimination on the basis of aspects of social identity such as gender or educational status. In terms of outgroup favoritism, researchers have proposed that meritocratic beliefs serve a role similar to that of just-world beliefs; that is, meritocratic beliefs may serve as a form of ideological foundation leading to an increase in outgroup preference. Negative consequences The third stream of literature on outgroup favoritism is dedicated to examining the consequences minority group members might bear as a result of holding implicit preferences for outgroup members. Numerous studies examining members of minority groups have found that expressions of outgroup favoritism correlate with a number of detrimental psychological outcomes under specific conditions. Specifically, while outgroup favoritism and other system justification motives serve palliative functions, there is a point at which incongruence between perception and reality inhibits this palliative effect. Outgroup favoritism appears to be beneficial to psychological well-being depending on the individual's level of internalization of dominant ideologies and their awareness of system rigidity. To exemplify how this works, a 2007 study examining the psychological health of 316 Black undergraduates found that implicit outgroup favoritism (i.e., African American students implicitly favoring Whites) is correlated with increased depression and lower overall psychological functioning. However, since this study, other scholars have examined the relationships between implicit "anti-Black" bias, the centrality of social identity, and psychological health. These studies found that while Black participants with higher levels of anti-Black bias were at higher risk for depression, this outcome varied as a function of the amount of racial discrimination they perceived to begin with. Such findings support the dual-outcome model of outgroup favoritism (particularly for minority groups). On the one hand, outgroup favoritism can be beneficial by allowing individuals to justify systems of inequality. Yet once the evidence that inequality exists becomes salient enough, such tendencies actually lead to decreases in psychological well-being, as individuals begin to attribute perceived discrimination internally (i.e., to themselves) rather than externally. Critiques As somewhat alluded to in the previous sections, academics are continuing to debate the nature of system justification theory (and, by extension, outgroup favoritism). Considering outgroup favoritism as part of the broader ecosystem of system justification theory means accepting the basic premise that the need to justify the systemic status quo is sufficiently powerful that people will endorse ideologies and practices supportive of "the norm" even when these ideologies and practices run counter to their own interests. Yet, in a debate that continues to the present day, outgroup favoritism has been critiqued as contradicting the long-standing idea that strong identification with a group on an individual level will generate the opposite (i.e., that individuals are motivated to preserve a positive image of their own group).
Specifically, critics argue that the instances of outgroup favoritism thus far observed within the literature are best attributed to demand characteristics or the internalization of social norms (which inherently elevate the status of the dominant group). Interactions with social identity theory This perspective is echoed in some of the broader critiques of system justification theory—particularly those emphasizing that a need for "social accuracy" and a "positively distinct social identity" are sufficient to explain the expressions of outgroup favoritism observed among members of low-status groups. In 2023, Rubin and colleagues posited a new model for understanding outgroup favoritism within the context of social identity theory (of which ingroup favoritism is a core component). They termed this new model the Social Identity Model of System Attitudes (SIMSA). Within SIMSA, the authors propose that outgroup favoritism is instead best understood as a functional adaptation that fulfills a social-identity-based need to perceive the social world in an accurate way. In a published rejoinder in the British Journal of Social Psychology, Jost and colleagues refuted this idea as incorrectly equating outgroup favoritism with the accurate perception of an unjust reality. Their main argument is that outgroup favoritism goes beyond simply acknowledging that a system is unjust or unfair, and instead demonstrates a motivated preference for the prioritization of a group outside of one's own. Citing the work on implicit associations, negative self-stereotyping, and depressed entitlement, Jost and his colleagues emphasize that if outgroup favoritism were merely an expression of accurate social perception, scholars would not have observed the cognitive mechanisms people employ while expressing it; these mechanisms indicate a system-justifying function above and beyond a social-identity one. Rubin and colleagues have since responded by clarifying their position, arguing that they were equating outgroup favoritism not with acceptance of an unjust social reality but with accurate perception of it. Jost and his colleagues have yet to respond. References Cognitive biases Group processes Barriers to critical thinking Sociological terminology Error Prejudice and discrimination Appeals to emotion Narcissism Corruption
Outgroup favoritism
[ "Biology" ]
4,215
[ "Behavior", "Narcissism", "Human behavior" ]
75,145,344
https://en.wikipedia.org/wiki/Pati%20%28rest%20house%29
Pati (Nepali: पति), also called sattal or phalcha, is a type of public rest house in the Kathmandu Valley in Nepal. Patis are public rest houses built in towns and villages for practical purposes, giving shelter to pilgrims, travelers and traders. They are also used by locals as a gathering space. Patis were usually built from donations by private individuals, religious groups or families. The first references to public rest houses in Nepal date back to the Licchavi period (400 to 750 CE), but no building from this period has survived. Surviving patis today mostly date to the late Malla period and the Gorkha Kingdom. See also Ambalama Kalithattu References Building types
Pati (rest house)
[ "Engineering" ]
149
[ "Architecture stubs", "Architecture" ]
75,145,394
https://en.wikipedia.org/wiki/Allocordyceps
Allocordyceps is an extinct genus of parasitic fungus in the order Hypocreales that parasitized carpenter ants. The fossil of Allocordyceps baltica, from Baltic amber, represents the oldest known fossil of an ant-parasitizing fungus, predating Ophiocordyceps. Description Allocordyceps is characterized by its orange, stalked, cusp-shaped ascoma. It also has a pair of partially immersed perithecia that emerge from the rectum of the host. Hosts parasitized by Allocordyceps have separate stromata, with separate mycelia emerging from the neck and abdomen. It might have altered its host's behavior, much like the extant Ophiocordyceps unilateralis. References Clavicipitaceae Hypocreales genera Monotypic Ascomycota genera Paleogene fungi Parasitic fungi Taxa described in 2021 Taxa named by George Poinar Jr.
Allocordyceps
[ "Biology" ]
198
[ "Fungus stubs", "Fungi" ]
61,437,976
https://en.wikipedia.org/wiki/Flight%20of%20Five%20Locks
The Flight of Five Locks on the Erie Canal in Lockport, New York is a staircase lock constructed to lift or lower a canal boat over the Niagara Escarpment in five stages. The locks are part of the Erie Canalway National Heritage Corridor. In Stairway to Empire: Lockport, the Erie Canal, and the Shaping of America (SUNY Press, 2009), historian Patrick McGreevy details the construction of the locks. The "Stairway" of McGreevy's title is the Flight of Five Locks. History To carry the canal across the Niagara Escarpment, the engineers built a five-step staircase lock. Restoration The restored locks reopened in . References Staircase locks Erie Canal Historic Civil Engineering Landmarks
Flight of Five Locks
[ "Engineering" ]
147
[ "Civil engineering", "Historic Civil Engineering Landmarks" ]
61,438,505
https://en.wikipedia.org/wiki/Near-Earth%20Object%20Confirmation%20Page
The Near-Earth Object Confirmation Page (NEOCP) is a web service listing recently submitted observations of objects that may be near-Earth objects (NEOs). It is a service of the Minor Planet Center (MPC), which is the official international archive for astrometric observations of minor planets. The NEOCP was established by the MPC on the World Wide Web in March 1996. Astrometric observations of new NEO candidates are submitted by observers either through email or cURL, after which they are placed on the NEOCP for a period of time until they are confirmed to be a new object, confirmed to be an already-known object, or left unconfirmed for lack of sufficient follow-up observations. If the object is confirmed as a new NEO, it is given a provisional designation and its observations are immediately published in a Minor Planet Electronic Circular (MPEC). If the object is a recovery of an already-designated NEO at a new opposition, it will also be immediately published in an MPEC. Otherwise, if the object is confirmed as a minor planet that is not a NEO, it will be published in a Daily Orbit Update MPEC on the following day. Any object that is not confirmed, due to an insufficient observation arc or a false-positive detection, will have its observations archived in the MPC's Isolated Tracklet File of unconfirmed minor planet candidates. The page is updated throughout the day to facilitate follow-up observations as quickly as possible, before an object is lost and no longer observable. A number of other services make use of the NEOCP and further process the data to make independent predictions of the likelihood of an object being a NEO and of the likely risk of Earth impact; some of these are listed below. See also Scout: NEOCP Hazard Assessment Near-Earth object Asteroid impact prediction NEODyS References External links Near-Earth Object Confirmation Page at MPC CNEOS / JPL Scout NEOCP Hazard Assessment Tool SpaceDYS NEOScan Tool Project Pluto Summary of NEOCP Separate comet confirmation page at the Minor Planet Center Interactive graphic showing time of most recent submitted observations for NEOCP objects Previous NEOCP objects listing disposition/status/outcome upon being removed from the list Astronomy data and publications
Near-Earth Object Confirmation Page
[ "Astronomy" ]
448
[ "Works about astronomy", "Astronomy data and publications" ]
61,439,771
https://en.wikipedia.org/wiki/Joseph%20H.%20Simons
Joseph H. Simons (10 May 1897 – 30 December 1983) was a U.S. chemist who became famous for discovering one of the first practical ways to mass-produce fluorocarbons in the 1930s while a professor of chemical engineering at Pennsylvania State University. In 1950, he and other employees of 3M received a patent for the procedure of electrochemical fluorination. Early life and education Joseph H. Simons was born on 10 May 1897 in Chicago. He studied chemistry at the University of Illinois. After graduating in 1919 he continued studying chemistry and mathematics at the University of California, where he received a master's degree in 1922 and a doctorate in 1923. Career He returned to the Midwest and became a professor of chemical engineering at Pennsylvania State College, now Pennsylvania State University. In the late 1930s he passed fluorine gas through a carbon arc, creating fluorocarbons. The results were published in 1938. In 1940, he was recruited to the Manhattan Project by Harold Urey, a physical chemist with expertise in isotope separation, to help with the uranium enrichment necessary to build a nuclear bomb. Simons's fluorocarbons turned out to be inert enough to withstand the corrosive effects of uranium hexafluoride and thus could be used in plant components such as sealants, gaskets and pipes. Simons invented a method for the large-scale, industrial production of fluorocarbons. In September 1948, he presented a series of papers at a Fluorocarbon Symposium of the American Chemical Society. On 29 November 1948 he filed an application, together with two others from the Minnesota Mining & Manufacturing Company, St. Paul, Minnesota, to patent the electrochemical process of making fluorine-containing carbon compounds, and the patent was granted on 22 August 1950. On 4 September 1951 he also received a patent for "Fluorocarbon acids and derivatives". In 1950 he left for the University of Florida, Gainesville, and by 1952 he was affiliated with the 3M-sponsored "Fluorine Laboratories, State College, Pennsylvania" and the "Fluorine Research Center" at the University of Florida, Gainesville. It was there, in 1954, that the first unclassified technical report on the preparation of fluorine-containing compounds was written for the Office of Naval Research. He retired in 1967. Personal life Simons was married to Eleanor May Simons and at the time of his death was survived by a daughter, Dorothy Lanning of Spencerville, Md., and a son, Robert W. Simons, of Gainesville. Work and legacy Simons is credited with discovering electrochemical fluorination, and the procedure was named the "Simons process" after him. Publications Joseph H. Simons and L.P. Block. 1937. Journal of the American Chemical Society 59: 1407. Simons, J. H. and Block, L. P. (1939) Fluorocarbons. J. Am. Chem. Soc. 61, 2962–2966, doi:10.1021/ja01265a111 Joseph H. Simons. 1972. A Pioneering Trip in Fluorine Chemistry. The Chemist, February 1972. Joseph H. Simons. 1986. The Seven Ages of Fluorine Chemistry. Address presented 19 July 1973, Santa Cruz, CA, on receipt of the award for "Creative Work in Fluorine Chemistry". Journal of Fluorine Chemistry, 32(1): 7–24. He also published under the pen name Paul P. Plexus: Paul P. Plexus. 1957. Realism. New York: Vantage Press. Paul P. Plexus. 1960. A Structure of Science. New York: Philosophical Library. Paul P. Plexus. 1971. Gebo: Successor to Man. New York: Manyland Books.
Awards Chemical Pioneer Award, American Institute of Chemists, 1971 American Chemical Society Award for Creative Work in Fluorine Chemistry, 1973 Further reading Neil MacKay. 1991. A Chemical History of 3M: 1933–1990. Published by 3M. Chapter 1: "Joe Simons's Stuff". ASIN: B0006QEP5O, 210 pp. Harold Goldwhite. 1986. The Manhattan Project. In: Fluorine: The First One Hundred Years. R.E. Banks, D.W.A. Sharp and J.C. Tatlow, editors. New York: Elsevier Sequoia. Pp. 109–132. See also Ralph Landau, chemist, designed equipment to produce fluorine at Oak Ridge National Laboratory Roy J. Plunkett, chemist, discoverer of Teflon at DuPont References 1897 births 1983 deaths Scientists from Chicago Pennsylvania State University faculty 20th-century American chemists American organic chemists Fellows of the American Physical Society
Joseph H. Simons
[ "Chemistry" ]
985
[ "Organic chemists", "American organic chemists" ]
61,440,632
https://en.wikipedia.org/wiki/Civic%20statistics
Civic statistics is a sub-discipline of statistics focused on the analysis of evidence relevant to understanding and addressing issues of public concern such as human migration, poverty and inequality. It lies at the intersection of politics, social science, statistics, and education. Background In an increasingly complex world, the engagement of individuals and civil society groups is a fundamental resource in public decision-making at international, national and local levels. Drawing on the terms "statistical literacy", "critical statistical literacy" and "data literacy", the focus of civic statistics is on making sense of statistical information about social processes, social and economic well-being, and the exercise of rights as citizens. Sustainable skills in the field of civic statistics are necessary for informed participation in democratic societies. Data on important societal issues are increasingly accessible to the general public and to individual citizens or social action groups (e.g. on topics such as immigration, the labor market, social (in)equality, climate change, and health). Understanding such topics is essential for civic engagement in modern societies. History There is a long history of advocacy for the use of evidence to promote social progress. The Marquis de Condorcet's notion of knowledge that empowers people to liberate themselves from social oppression provides an example; William Playfair's Political and Commercial Atlas (1786) is an early publication designed to make complex evidence accessible to a wide audience; John Snow and Florence Nightingale both used powerful graphical displays to address problems associated with disease; and Otto Neurath's work on Isotype (picture language) set out to establish a universal graphical language for communicating about social issues. Humans have always lived in uncertain times and have made decisions in the light of imperfect data. Recently, there has been a dramatic increase in the volume and nature of available data, in the tools available for display and analysis, and in the ways information is communicated. In parallel, there has been a growing disdain for robust evidence, illustrated by notions of a "post-truth" era and descriptions of journalists as enemies of the people. Applications Data that can be used to inform social policy are complex. Data are often multivariate; aggregated data and indicator systems are common; variables interact; data may be time critical. There is a core set of ideas associated with literacy and numeracy that are essential to functioning in society, such as reading skills, understanding arguments, reading graphs and handling percentages. Beyond this, citizens need to do more than understand data: they need to see the implications for society and policy. This requires knowledge about the processes of knowledge generation and ways to represent and model situations, along with ideas commonly taught in statistics courses such as sample bias and inference (a simple example is sketched below). It also requires contextual knowledge about the state of society (e.g. the percentage of tax revenue spent on healthcare). Civic statistics aims to empower citizens and policy makers by addressing the central issues associated with evidence-informed decision making.
These include: developing a sophisticated approach to questions of data provenance and data quality; understanding the uses and abuses of a wide range of methods for presenting and analysing social data from a variety of sources; ways to represent and model situations; understanding risk; and appropriate advocacy. References External links ProCivicStat - Promoting civic engagement via explorations of evidence Book: Data in Society - Challenging Statistics in an Age of Globalization Book: Statistics for Empowerment and Social Engagement Civic
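As a concrete instance of the "sample bias and inference" ideas mentioned above, the sketch below computes a normal-approximation 95% confidence interval for a poll percentage. The poll numbers are hypothetical, chosen only to show that a "52% support" headline from 1,000 respondents is still statistically compatible with there being no majority.

```python
import math

def proportion_ci(p_hat: float, n: int, z: float = 1.96):
    """95% normal-approximation confidence interval for a survey
    proportion: p_hat +/- z * sqrt(p_hat * (1 - p_hat) / n)."""
    margin = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - margin, p_hat + margin

# Hypothetical poll: 52% of 1,000 respondents support a policy.
low, high = proportion_ci(0.52, 1000)
print(f"{low:.3f} to {high:.3f}")  # roughly 0.489 to 0.551
```

Because the interval dips below 0.5, the poll alone cannot establish majority support, which is exactly the kind of reading-beyond-the-headline skill civic statistics advocates.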
Civic statistics
[ "Mathematics" ]
691
[ "Applied mathematics", "Applied statistics" ]
61,441,616
https://en.wikipedia.org/wiki/C16H14F3N3O2S
{{DISPLAYTITLE:C16H14F3N3O2S}} The molecular formula C16H14F3N3O2S (molar mass: 369.363 g/mol) may refer to: Dexlansoprazole Lansoprazole Molecular formulas
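As a quick illustration of where the quoted molar mass comes from, the sketch below parses a simple Hill-notation formula and sums standard atomic weights. The atomic-weight table is abbreviated to the elements in this formula, and the result (about 369.361 g/mol) differs from the quoted 369.363 g/mol only through rounding of the atomic weights.

```python
import re

ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "F": 18.998,
                  "N": 14.007, "O": 15.999, "S": 32.06}

def molar_mass(formula: str) -> float:
    """Parse a simple Hill-notation formula (no parentheses) and sum
    element weights times their counts."""
    total = 0.0
    for element, count in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        if element:  # skip the empty match at the end of the string
            total += ATOMIC_WEIGHTS[element] * (int(count) if count else 1)
    return total

print(f"{molar_mass('C16H14F3N3O2S'):.3f} g/mol")  # ~369.361
```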
C16H14F3N3O2S
[ "Physics", "Chemistry" ]
65
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
61,443,482
https://en.wikipedia.org/wiki/Climate%20spiral
A climate spiral (sometimes referred to as a temperature spiral) is an animated data visualization graphic designed as a "simple and effective demonstration of the progression of global warming", especially for general audiences. The original climate spiral was published on 9 May 2016 by British climate scientist Ed Hawkins to portray global average temperature anomaly (change) since 1850. The visualization graphic has since been expanded to represent other time-varying quantities such as atmospheric CO2 concentration, carbon budget, and arctic sea ice volume. Background Hawkins credited a "Friday afternoon" email from Norwegian climatologist Jan Fuglestvedt for the idea of converting a conventional coloured line chart into a spiral, and thanked Fuglestvedt's wife, Taran Fæhn, for having suggested it to Fuglestvedt. Fæhn, a researcher for Statistics Norway and the Oslo Centre for Research on Environmentally Friendly Energy, had suggested that connecting December to the following January would show temperature evolution in a more dynamic way. Ensuing email discussions refined the design of the climate spiral, which Hawkins published on Monday 9 May 2016. Dissemination Expecting "only some vague interest", Hawkins later wrote that his tweet of the new graphic had been viewed 3.4 million times in its first year. The tweeted graphic is widely described as having gone viral. Within a day, Climate Central writer Andrea Thompson remarked in The Guardian that the "metaphoric spiral" of the planet spiraling toward catastrophic consequences "has become a literal one". Initially, Hawkins posited that the graphic resonated because it "doesn’t require any complex interpretation". In a 2019 paper, Hawkins et al. further surmised that the design and communication aspects of the graphic resonated with viewers for a variety of reasons, including: its selection to graph temperature (a quantity that the public feels is relevant and understandable), its production by scientists (who tend to be viewed as "trusted messengers"), its being intuitive and eye-catching (not a "boring" scientific graph), its similarity to a clock (which is normally regular and predictable but which provides a "visual surprise" at the end, portraying the "fortuitous" large temperature increases encountered very recently), its animated nature (not a static graph), and its short duration (holding viewers' attention). The paper further noted that sharing the graphic on social media allowed it to be "consumed within the social media bubble rather than requiring a journey to another website", allowing it to be "subsequently amplified by journalists, the media, and highly popular accounts". Content Climate spirals use changing distance from a center point to represent change of a dependent variable (archetypically, global average temperature). The independent variable, time, is broken down into months (represented by constantly changing rotational angle about the center point) and years (line colour that evolves as years pass). Hawkins explained that in his implementation, colours represent time: purple for early years, through blue, green to yellow for most recent years. He made the graphics in MATLAB and used the "viridis" colour scale, conscious of choosing a colour scale that makes the graphics legible to colour blind viewers. 
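To make the construction just described concrete (months as angle around a clock face, temperature anomaly as radius, year as colour), here is a rough Python/matplotlib sketch in the spirit of the graphic. It is not Hawkins' MATLAB code: the anomaly data are synthetic placeholders, the radial offset is an arbitrary choice to keep negative anomalies plottable, and each year is drawn as its own open arc rather than connecting December to the following January as the original does.

```python
import numpy as np
import matplotlib.pyplot as plt

years = np.arange(1850, 2017)
theta = 2 * np.pi * np.arange(12) / 12          # one angle per month
rng = np.random.default_rng(0)
# Synthetic anomalies (degC vs 1850-1900): slow quadratic trend plus noise.
anomalies = np.array([[1.2 * ((y - 1850) / 166) ** 2 + rng.normal(0, 0.1)
                       for _ in range(12)] for y in years])

fig = plt.figure(figsize=(6, 6))
ax = fig.add_subplot(projection="polar")
ax.set_theta_zero_location("N")                 # January at the top
ax.set_theta_direction(-1)                      # months run clockwise
cmap = plt.cm.viridis                           # the colour scale Hawkins used

offset = 1.5                                    # arbitrary shift so r > 0
for i, y in enumerate(years):
    ax.plot(theta, anomalies[i] + offset,
            color=cmap(i / len(years)), linewidth=0.5)

for limit in (1.5, 2.0):                        # red reference circles
    ax.plot(np.linspace(0, 2 * np.pi, 200), np.full(200, limit + offset),
            color="red", linewidth=1)

ax.set_xticks(theta)
ax.set_xticklabels(["Jan", "Feb", "Mar", "Apr", "May", "Jun",
                    "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"])
ax.set_yticklabels([])
plt.show()
```

Colouring each year by its position in the series is what produces the purple-to-yellow progression the article describes, and animating the loop year by year (e.g., with matplotlib's FuncAnimation) would reproduce the expanding-spiral effect.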
As the graphic includes red concentric circles denoting temperature changes of 1.5°C and 2.0°C, Hawkins explained that "the relationship between current global temperatures and the internationally discussed target limits is also clear without much complex interpretation needed". Ria Misra wrote in Gizmodo that the graphic "lets the noise of tiny variations fade into the background while still showcasing, very simply, the undeniable trend". Describing how global warming "appears to burst outward toward the end of the animation", Hawkins explained that sulfate aerosols no longer counteracted the warming effect of greenhouse gases after the 1970s, and noted how strong El Niño events in 1998 and 2016 led to higher temperature plateaus more recently. The first climate spiral portrayed data from HadCRUT4.4 from January 1850 – March 2016, graphed relative to the 1850–1900 mean temperature, the same pre-industrial global average used in the IPCC Fifth Assessment Report. Applications and influence Three days after the first climate spiral was published, it was the subject of an article on the U.S. Department of State's ShareAmerica website, noting that the "compelling" graphic "shows just how fast" "world temperatures are spiraling upward". A climate spiral was featured in the opening ceremony of the August 2016 Summer Olympics. The design was on the shortlist for the Kantar Information is Beautiful Awards 2016. In January 2017, "Spiraling Global Temperatures" was nominated for the Shorty Awards GIF of the Year. In January 2017, the spiral was tweeted by Bernie Sanders and the U.S. National Park Service, both conveying how almost all recorded warmest years have been recent years. Extensions of the climate spiral concept In May 2016, United States Geological Survey scientist Jay Alder extended Hawkins' historical spiral to the year 2100, creating a predictive spiral graphic showing a possible future trajectory of global warming given the then-current carbon emission trend. Hawkins extended his two-dimensional spiral design to a three-dimensional version in which the graphic appears as an expanding cone-shaped structure. Hawkins' original climate spiral application (global average temperature change) has been expanded to represent other time-varying quantities such as atmospheric CO2 concentration, carbon budget, and arctic sea ice volume.
On 11 May, Chris Mooney wrote in The Washington Post that, with his "startling animation", Hawkins had "hit a grand slam—and not through some clever turn of phrase or some new metaphor or framing, but rather, through viral data visualization". Sarah Rense, writing in Esquire, first called climate science "unpalatable" and "depressing data", then characterized the new graphic as being "as mesmerizing as it is depressing", "a cool GIF with pretty colours (to which) people will pay attention". In late May 2016, Brian Kahn, communications coordinator at the International Research Institute for Climate and Society, wrote that the spiral was a "revolutionary new way to look at global temperatures" and posited that the graphic's popularity "can be attributed in part to its hypnotic nature and the visceral way it shows the present predicament of climate change". In July 2016, freelance journalist Chelsea Harvey wrote in The Washington Post that, "at a time when climate science communication efforts are often viewed as dense or difficult for general audiences to understand, these types of striking graphics may help climate scientists connect with the public in a way that is both clear and attention-grabbing". Two years later, in May 2018, Jason Samenow commented that, though many Hawkins visualizations had "resonated among science communicators", climate spirals were the scientist's best-known visualizations. After his 2016 development of the climate spiral, Hawkins received the Royal Meteorological Society's 2017 Climate Science Communication Prize. After developing the warming stripes graphic in May 2018, Hawkins, a lead author for the IPCC 6th Assessment Report, received the Royal Society's 2018 Kavli Medal "for significant contributions to understanding and quantifying natural climate variability and long-term climate change, and for actively communicating climate science and its various implications with broad audiences". In a 2019 paper, Hawkins et al. acknowledged that a "possible scientific criticism" of the design was that uncertainty in the temperature data is not visualized, and that viewers might interpret the area within the 1.5°C and 2.0°C circles, rather than the radius of the graphed line itself, as representing the temperature change, though the temperature limit circles are clearly labeled. In May 2017, Hawkins had already responded to the criticism that the human eye might incorrectly read the change in area within the spiral rather than the change in radius by noting that the radii are clearly labeled. Informally, climate spirals have been referred to as spirographs. See also Notes References Further reading — Survey of climate change visualizations (List of available spirals)— Synchronized side-by-side graphics of the progression of these quantities Climate change in art Climate communication Climate change Climate and weather statistics Scientific visualization Data and information visualization Spirals
Climate spiral
[ "Physics" ]
1,935
[ "Weather", "Physical phenomena", "Climate and weather statistics" ]
61,444,882
https://en.wikipedia.org/wiki/Graphitizing%20and%20non-graphitizing%20carbons
Graphitizing and non-graphitizing carbons (alternatively graphitizable and non-graphitizable carbons) are the two categories of carbon produced by pyrolysis of organic materials. Rosalind Franklin first identified them in a 1951 paper in Proceedings of the Royal Society. In this paper, she defined graphitizing carbons as those that can transform into crystalline graphite when heated to a sufficiently high temperature, while non-graphitizing carbons do not transform into graphite at any temperature. Precursors that produce graphitizing carbon include polyvinyl chloride (PVC) and petroleum coke. Polyvinylidene chloride (PVDC) and sucrose produce non-graphitizing carbon. The physical properties of the two classes of carbon are quite different. Graphitizing carbons are soft and non-porous, while non-graphitizing carbons are hard, low-density materials. Non-graphitizing carbons are otherwise known as chars, hard carbons or, more colloquially, charcoal. Glassy carbon is also an example of a non-graphitizing carbon material. The precursors of graphitizing carbons pass through a fluid stage during pyrolysis (carbonization). This fluidity facilitates the molecular mobility of the aromatic molecules, resulting in intermolecular dehydrogenative polymerization reactions that create aromatic, lamellar (disc-like) molecules. These "associate" to create a new liquid crystal phase, the so-called mesophase. A fluid phase is the dominant requirement for the production of graphitizable carbons. Non-graphitizing carbons generally do not pass through a fluid stage during carbonization. Since the time of Rosalind Franklin, researchers have put forward a number of models for their structure. Oberlin and colleagues emphasised the role of basic structural units (BSUs), made of planar aromatic structures consisting of fewer than 10–20 rings, with four layers or fewer. Cross-linking between the BSUs in non-graphitizing carbons prevents graphitization. More recently, some have put forward models that incorporate pentagons and other non-six-membered carbon rings. See also Acheson process Carbonization Graphite References Allotropes of carbon
Graphitizing and non-graphitizing carbons
[ "Chemistry" ]
466
[ "Allotropes of carbon", "Allotropes" ]
61,444,924
https://en.wikipedia.org/wiki/C18H21N3O
The molecular formula C18H21N3O (molar mass: 295.379 g/mol) may refer to: Dibenzepin Dimethyllysergamide (DAM-57) LAE-32, or D-Lysergic acid ethylamide Molecular formulas
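A molar mass like the one quoted can be reproduced by summing standard atomic weights over the formula. The following Python sketch is illustrative and not part of the article; the small difference from the quoted 295.379 g/mol reflects the atomic-weight table used.

```python
# Illustrative sketch: computing the molar mass of C18H21N3O from its formula.
# Atomic weights are rounded standard IUPAC values.
import re

ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def molar_mass(formula: str) -> float:
    """Sum atomic weights for a simple Hill-notation formula like 'C18H21N3O'."""
    total = 0.0
    for element, count in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        total += ATOMIC_WEIGHT[element] * (int(count) if count else 1)
    return total

print(f"{molar_mass('C18H21N3O'):.3f} g/mol")  # ~295.386 with these weights
```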
C18H21N3O
[ "Physics", "Chemistry" ]
79
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
61,444,925
https://en.wikipedia.org/wiki/Axinite-%28Mg%29
Axinite-(Mg) is a borosilicate mineral of aluminum, calcium and magnesium of the axinite group, with magnesium as the dominant cation in the structural site that can also be occupied by iron and manganese. It was discovered in gem material from Merelani Hills, Lelatema Mts, Manyara Region, Tanzania, which is consequently its type locality. It was initially called magnesioaxinite, referring to its membership in the axinite group and the role of magnesium as the dominant cation. The International Mineralogical Association (IMA) later changed its name to axinite-(Mg). It has occasionally been cut as a collector's gem. Physical and chemical properties Like the rest of the minerals in the axinite group, axinite-(Mg) belongs to the triclinic system, appearing in the form of crystals with the characteristic ax-shaped morphology. Its structure can be described as a sequence of alternating layers of cations coordinated tetrahedrally and octahedrally. Deposits The axinite group minerals are found in boron-containing environments of medium- to low-grade contact, regional or metasomatic metamorphism. Axinite-(Mg) appears most frequently in areas of contact metamorphism. It is a relatively rare mineral, known from about a dozen locations in the world. In addition to the type locality, already indicated, where transparent crystals of various colors up to 3 cm in size have been found, it occurs in the area of Lunning, Mineral Co., Nevada (USA), as violet-brown crystals. In Spain, axinite-(Mg) associated with crystalline calcite has been found in the diabase of a quarry located in El Zurcido, Adamuz (Córdoba). References Magnesium minerals Silicate minerals Gemstones Triclinic minerals
Axinite-(Mg)
[ "Physics" ]
391
[ "Materials", "Gemstones", "Matter" ]
61,447,404
https://en.wikipedia.org/wiki/Nautilus%20Deep%20Space%20Observatory
Nautilus Deep Space Observatory (NDSO) (also known as Nautilus array, Nautilus mission, Nautilus program, Nautilus telescope array and Project Nautilus) is a proposed deep space fleet of space telescopes designed to search for biosignatures of life in the atmospheres of exoplanets. Daniel Apai, lead astronomer of NDSO from the University of Arizona, and associated with the Steward Observatory and the Lunar and Planetary Laboratory, commented "[With this new space telescope technology], we will be able to vastly increase the light-collecting power of telescopes, and among other science, study the atmospheres of 1,000 potentially Earth-like planets for signs of life." Overview The NDSO mission is based on the development of very lightweight telescope mirrors that enhance the power of space telescopes, while substantially lowering manufacturing and launch costs. The concept is based not on traditional reflective optics but on diffractive optics, employing a single diffractive lens made of a multiorder diffractive engineered (MODE) material. A MODE lens is ten times lighter and 100 times less susceptible to misalignments than conventional lightweight large telescope mirrors. The NDSO mission proposes to launch a fleet of 35 space telescopes, each one a wide spherical telescope, and each featuring an diameter lens. Each of these space telescopes would be more powerful than the mirror of the James Webb Space Telescope, the wide mirror of the Hubble Space Telescope, and the mirror of the Ariel space telescope combined. The NDSO telescope array of 35 spacecraft, when used all together, would have the resolving power equivalent to a diameter telescope. With such telescopic power, the NDSO would be able to analyze the atmospheres of 1,000 exoplanets up to 1,000 light years away. In January 2019, the NDSO research team, which includes lead astronomer Daniel Apai, as well as Tom Milster, Dae Wook Kim and Ronguang Liang from the University of Arizona College of Optical Sciences, and Jonathan Arenberg from Northrop Grumman Aerospace Systems, received a $1.1 million support grant from the Moore Foundation in order to construct a prototype of a single telescope, and test it on the Kuiper Telescope before December 2020. Spacecraft Each individual Nautilus unit has a single solid MODE lens and would be packed in stackable form for a shared rocket launch, and once deployed, each unit would inflate into a diameter Mylar balloon with the instrument payload in the center. See also Astrobiology Biosignature Carl Sagan Institute SISTINE - another way to search for life on exoplanets References External links Deep Space Workshop (2018) Northrop Grumman spacecraft Space telescopes Proposed satellites
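Figures like those quoted for the array follow from two standard scaling relations: the light-collecting area of N identical apertures equals that of one aperture of diameter d·√N, and a single aperture's diffraction-limited resolution is θ ≈ 1.22 λ/D. The sketch below is purely illustrative; the diameter used is a placeholder (the article's lens size is not reproduced here), and the fleet is treated as an incoherent "light bucket" rather than an interferometer.

```python
# Illustrative sketch of the aperture arithmetic behind a telescope array.
# The diameter below is an EXAMPLE value, not the mission's actual lens size;
# only the scaling relations are the point.
import math

def equivalent_collecting_diameter(d_single_m: float, n_units: int) -> float:
    """Diameter of one aperture with the same total collecting area as n units."""
    return d_single_m * math.sqrt(n_units)

def diffraction_limit_arcsec(d_m: float, wavelength_m: float = 550e-9) -> float:
    """Rayleigh criterion: theta ~ 1.22 * lambda / D, converted to arcseconds."""
    theta_rad = 1.22 * wavelength_m / d_m
    return math.degrees(theta_rad) * 3600

d = 5.0   # hypothetical single-lens diameter in meters (placeholder)
n = 35    # number of units in the proposed fleet
print(equivalent_collecting_diameter(d, n))  # ~29.6 m for these example inputs
print(diffraction_limit_arcsec(d))           # ~0.028 arcsec for a 5 m aperture
```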
Nautilus Deep Space Observatory
[ "Astronomy" ]
567
[ "Space telescopes" ]
61,447,902
https://en.wikipedia.org/wiki/C3H6Br2
The molecular formula C3H6Br2 (molar mass: 201.889 g/mol, exact mass: 199.8836 u) may refer to: 1,2-Dibromopropane, also known as propylene dibromide 1,3-Dibromopropane
C3H6Br2
[ "Chemistry" ]
82
[ "Isomerism", "Set index articles on molecular formulas" ]
70,834,267
https://en.wikipedia.org/wiki/Aurenin
Aurenin is an antibiotic with the molecular formula C33H54O11. References antibiotics Heterocyclic compounds with 1 ring Lactones Polyols
Aurenin
[ "Chemistry", "Biology" ]
35
[ "Biotechnology products", "Organic compounds", "Antibiotics", "Biocides", "Organic compound stubs", "Organic chemistry stubs" ]
70,834,523
https://en.wikipedia.org/wiki/Sibiromycin
Sibiromycin is an antitumor antibiotic with the molecular formula C24H33N3O7 which is produced by the bacterium Streptosporangium sibiricum. Sibiromycin is a pyrrolobenzodiazepine. References Further reading antibiotics Sibiromycin
Sibiromycin
[ "Chemistry", "Biology" ]
65
[ "Biotechnology products", "Organic compounds", "Antibiotics", "Biocides", "Organic compound stubs", "Organic chemistry stubs" ]
70,835,587
https://en.wikipedia.org/wiki/Yvette%20Swan
Yvette Victoria Angela Swan (20 August 1945 – 18 April 2021) was a Bermudian senator for the United Bermuda Party, optician, and pastor, as well as president of the International Federation of Business and Professional Women and Project 5-0. Biography Born in Saint Thomas Parish, Jamaica, to two schoolteachers, she studied optometry at Paddington Technical College in London and Aston University in Birmingham. Swan then moved to Bermuda with her husband Malcolm, working there as an optometrist. She was appointed as a senator for the United Bermuda Party in 1993, having been a campaigner for the party in Warwick West, and was made Minister of Community and Cultural Affairs in 1995. She also served as Minister for Women's Issues, and addressed the United Nations Commission on the Status of Women. She made two bids for the House of Assembly, one in 1998, running in Warwick West with Quinton Edness, the other in 2003, in Warwick North Central; both bids were unsuccessful. She was also president of the International Federation of Business and Professional Women, and Project 5-0. Swan later moved to Canada, studying theology and earning her Master of Divinity at the Atlantic School of Theology. She went on to become a minister at St. Paul's United Church, in Riverview, New Brunswick, and in 2015 was ordained by the Maritime Conference. Swan spent the last seven years of her life serving the Nashwaak Pastoral Charge. Bermudian Leader of the Opposition Cole Simons, a member of the One Bermuda Alliance, which formed as a merger between the United Bermuda Party and the Bermuda Democratic Alliance, described Swan as "a woman of great passion and dedication, who served tirelessly at various levels of society". References 1945 births 2021 deaths Members of the Senate of Bermuda People from Saint Thomas Parish, Jamaica United Bermuda Party politicians Opticians Bermudian women in politics
Yvette Swan
[ "Astronomy" ]
393
[ "Opticians", "History of astronomy" ]
70,836,283
https://en.wikipedia.org/wiki/Mimikatz
Mimikatz is both an exploit on Microsoft Windows that extracts passwords stored in memory and software that performs that exploit. It was created by French programmer Benjamin Delpy; the name is French slang for "cute cats". History Benjamin Delpy discovered a flaw in Microsoft Windows that holds both an encrypted copy of a password and a key that can be used to decipher it in memory at the same time. He contacted Microsoft in 2011 to point out the flaw, but Microsoft replied that exploiting it would require the machine to be already compromised. Delpy realised that the flaw could be used to gain access to non-compromised machines on a network from a compromised machine. He released the first version of the software in May 2011 as closed source software. In September 2011, the exploit was used in the DigiNotar hack. Russian conference Delpy spoke about the software at a conference in 2012. Once during the conference, he returned to his room to find a stranger sitting at his laptop. The stranger apologised, saying he was in the wrong room, and left. A second man approached him during the conference and demanded he give him copies of his presentation and software on a USB flash drive. Delpy gave him copies. Delpy felt shaken by his experiences, and before he left Russia he released the source code on GitHub. He felt that those defending against cyberattacks should learn from the code in order to defend against the attack. Windows updates In 2013 Microsoft added a feature to Windows 8.1 that would allow turning off the feature that could be exploited. In Windows 10 the feature is turned off by default, but Jake Williams of Rendition Infosec says that the attack remains effective, either because the target runs an outdated version of Windows or because an attacker can use privilege escalation to gain enough control to turn the exploitable feature back on. Benjamin Delpy has updated the software to cover further exploits beyond the original. Use in malware The Carbanak attack and the cyberattack on the Bundestag used the exploit. The NotPetya and BadRabbit malware used versions of the attack combined with the EternalBlue and EternalRomance exploits. In popular culture In episode 9 of season 2 of Mr. Robot, Angela Moss uses mimikatz to get her boss's Windows domain password. References External links Implementation by Benjamin Delpy on github Implementation of exploit on github Hacking in the 2010s Hacking in the 2020s Microsoft Windows security technology Computer security exploits
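The switchable credential-caching feature mentioned above corresponds to the WDigest UseLogonCredential registry value documented in Microsoft's KB2871997 guidance. The sketch below is an illustrative, defensive-only check of that setting in Python, not part of Mimikatz itself; it assumes a Windows host and the documented registry path.

```python
# Illustrative, defensive-only sketch (Windows): check whether WDigest
# credential caching -- the behaviour Mimikatz's sekurlsa module reads out
# of LSASS -- is enabled. Registry path per Microsoft advisory KB2871997.
import winreg

WDIGEST_KEY = r"SYSTEM\CurrentControlSet\Control\SecurityProviders\WDigest"

def use_logon_credential() -> int | None:
    """Return the UseLogonCredential value, or None if it is not set."""
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, WDIGEST_KEY) as key:
            value, _ = winreg.QueryValueEx(key, "UseLogonCredential")
            return value
    except FileNotFoundError:
        return None

setting = use_logon_credential()
# 1 = plaintext-equivalent credentials kept in memory (exploitable).
# 0 = caching disabled. None = OS default: off on Windows 8.1/10,
#     but still on for older systems even after the KB2871997 patch.
print("UseLogonCredential:", setting)
```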
Mimikatz
[ "Technology" ]
507
[ "Computer security exploits" ]
70,836,387
https://en.wikipedia.org/wiki/Zwackhiomyces%20polischukii
Zwackhiomyces polischukii is a species of lichenicolous (lichen-dwelling) fungus in the family Xanthopyreniaceae. It occurs in Ukraine, where it parasitises the crustose lichens Bacidia fraxinea and B. rubella. Taxonomy The fungus was formally described as a new species in 2017 by Valeriy Darmostuk and Alexander Khodosovtsev. The type specimen was collected from a western slope of Mount Castel (Autonomous Republic of Crimea) at an altitude of ; there it was found growing on thalli of Bacidia rubella, which itself was growing on Carpinus betulus. The species epithet polischukii honours Ukrainian virologist, professor Valeriy Polischuk. Description Zwackhiomyces polischukii has perithecioid ascomata that are initially immersed in the host thallus, but are only partially immersed in maturity. These small (typically 170–190 μm in diameter) black structures are more or less spherical, and are either scattered throughout the thallus surface or roughly arranged in groups of three to five. The asci are club-shaped (clavate) and measure 65–75 by 12–15 μm. They usually have eight spores (some have four), with the spores either lined up in a single row (uniseriate) or in two rows (biseriate). Ascospores are hyaline, ellipsoid with a marked constriction at the single septum, and typically measure 18.0–21.6 by 6.0–7.6 μm. Habitat and distribution Bacidia fraxinea and B. rubella are the two known hosts for Zwackhiomyces polischukii. The authors suggest that the association between these species ranges from parasymbiotic (i.e., the lichen supports a fungal species growing in close association with it, without apparent disadvantage) to weakly parasitic, because the fungal presence does not seem to greatly affect the host, causing only slight deformations in the thallus. In one of the recorded cases, a second lichenicolous fungus, Muellerella hospitans, was co-infecting the host. In addition to Crimea, Zwackhiomyces polischukii has also been recorded in Bakhchysarai, where the host lichen was growing on oak, and from the Podilskyi District, where the host lichen was growing on maple. As of 2019, Zwackhiomyces polischukii is one of 13 species of Zwackhiomyces known to occur in Ukraine. References Xanthopyreniaceae Fungi described in 2017 Fungi of Europe Lichenicolous fungi Fungus species
Zwackhiomyces polischukii
[ "Biology" ]
586
[ "Fungi", "Fungus species" ]
70,836,460
https://en.wikipedia.org/wiki/Donetsk%20Metallurgical%20Plant
Donetsk Metallurgical Plant, also called Donetsk Iron and Steel Works, is an enterprise of Donetsk, Ukraine. It is a ferrous metallurgy enterprise that is located in the Leninskyi district of Donetsk. History 1869–1917 In 1866, the government of the Russian Empire concluded an agreement with Prince Sergey Victorovich Kochubey, according to which the prince undertook to build a plant for the manufacture of iron rails in the south of the Russian Empire. In 1869, Kochubey sold the enterprise to John Hughes for £24,000. In 1869, Hughes (or Yuz) began the construction of a metallurgical plant with a working settlement near the village of Aleksandrovka. On April 24, 1871, the first blast furnace was built and on January 24, 1872, the plant of the Novorossiysk Society for Coal, Iron and Rail Production produced the first pig iron. In the "Mining Journal" for 1889, the oldest plan of the NRO plant was published. In addition to the location of industrial facilities, this plan also shows the first, one-story house of John Hughes, built by the Hughes family in 1874. Now this place is located on the territory of the metallurgical plant, near the administrative building of the electric steel smelting and swaging shops, next to the monument "In honor of the smelting of the 100 millionth ton of steel on December 24, 1967." In 1901, a social-democratic circle was created at the plant. The factory workers took an active part in the First Russian Revolution of 1905. The plant operated on a full metallurgical cycle. For the first time in the Russian Empire, 8 coke ovens were launched and hot blast was mastered. The plant became one of the industrial centers of the Russian Empire. In 1908–1913 and 1916–1917, the outstanding metallurgist Mikhail Konstantinovich Kurako worked at the plant. In 1917, 25 thousand people worked at the plant and mines of the society. On March 5 (18), 1917, the Council of Workers' Deputies was created in Yuzovka, which included factory workers, after which an 8-hour working day was established at the enterprise. In the autumn of 1917, workers' control was established at the enterprise, attempts by the plant administration to stop the enterprise were suppressed, and coal mining and production continued. A detachment of factory workers was sent to fight Kaledin. The workers of the plant later took an active part in the establishment of Soviet power in the Donbas. 1918–1941 In April 1918, after the attack of German-Austrian troops, most of the equipment and materials of the Yuzovsky plant were evacuated to Tsaritsyn (Volgograd), and detachments of workers retreated with the Red Army. On May 24, 1918, the plant was nationalized. In May 1918, during the German occupation, an underground organization of the Russian Communist Party (bolsheviks), or RCP(b), led by E. Severyanov, began operating at the plant, and in July 1918 an underground regional committee of six members began operating at the plant. In December 1919, Soviet power was restored in Yuzovka; on January 30, 1920, a workers' board of all enterprises of the former "Novorossiysk Society of Coal, Iron and Rail Production" (headed by M. S. Titov) was created in Yuzovka, and the restoration of the plant began. On July 6, 1921, the first blast furnace was launched again, and by the end of 1921, production at the plant was restored. In 1924, the plant was named after Joseph Vissarionovich Stalin. During the industrialization of the Soviet Union, the plant was rebuilt on a new technical basis.
During the First Five-Year Plan, mechanization of the mill and a bunker system for blast furnace charging were introduced and technical renovation was initiated; in 1936 the reconstruction of the mill and the construction of the open-hearth shop were completed, the plate- and profile-rolling mills were mechanized, and a new, powerful blooming mill replaced the old rail-rolling mill. Under German occupation During the Eastern Front (World War II), or the Great Patriotic War, the plant was almost completely destroyed. Immediately before the start of the battles for the liberation of the city, on September 6–7, 1943, underground workers operating at the plant, led by I. I. Kholoshin, together with prisoners of war and helpers among the factory guards, disarmed the factory guards and occupied and saved warehouses, a garage, a telephone exchange and a special factory workshop from destruction. Already in the fall of 1943, the restoration of the plant began. After the end of the war the plant was reconstructed and expanded. Restoration The restoration of the plant began immediately after the end of the battles for the city, and on February 14, 1944, the plant carried out the first steel smelting from an open-hearth furnace; in March 1944, the rolling mill and the first blast furnace were put into operation; between 1944 and 1946, the enterprise was awarded the Red Banner ten times by the State Defense Committee for the successful restoration of the facility. Dear Joseph Vissarionovich! The workers of the Stalin Metallurgical Plant in Donbass, inspired by the victorious offensive of the Red Army, collected 1,400,000 rubles from their personal funds for the defense fund of our beloved Motherland. We ask you to spend these funds on the construction of the Metallurg of Donbass air squadron and transfer it to the 4th Ukrainian Front, whose troops liberated our native Donbass, the city of Stalino and the plant from the fascist yoke. Director of the plant ANDREEV. District Committee Secretary of the CP(b) LEBEDEV. Party Organizer of the Central Committee of the CPSU(b) at the plant VANSYATSKY. Chairman of the factory committee KIRYUSHIN. I ask you to convey to the workers of the Stalin Metallurgical Plant, who raised 1,400,000 rubles for the construction of the Metallurg of Donbass air squadron, my brotherly greetings and gratitude to the Red Army. May the desires of the workers of the Stalin Metallurgical Plant be fulfilled. J. STALIN. "Pravda" Newspaper, March 19, 1944. In 1950, the plant returned to prewar levels of steel production and rolled metal production. In 1952, the plant was the first in the world to implement a steam evaporation system for cooling open-hearth furnace elements, the development and implementation of which earned two employees of the plant the Stalin Prize. The plant also specialized in the production of strip valve profiles and spring strips for Pobeda cars in the Fourth Five-Year Plan. In 1955, the Museum of the History of the DMZ (DMP) was opened. In 1960, a four-jet continuous casting machine for steel blanks was put into operation at the plant. Also in 1960, the plant, among the first enterprises of the USSR, mastered the smelting of iron on natural gas (for this achievement, the director of the plant I. M. Ektov and the head of the blast furnace shop of the plant G. A. Panev were awarded the Lenin Prize in 1960). On February 7, 1966, the plant was awarded the Order of Lenin. In 1967, the plant received a new name: “named after V. I.
Lenin.” In 1970, the plant produced 5.9 times more iron, 5.3 times more steel, and 5.5 times more rolled metal than in 1913. Also in 1970, the "Living Memorial" was built in honor of the workers of the DMZ who died in the Great Patriotic War. In 1972, the plant was awarded the Order of the October Revolution. In 1974, a 950/900 swaging mill was built at the plant. From the early 1980s, DMZ's main products were rolled bars and rolled sheets of cast iron, steel, and high-grade and alloyed steel. By early 1986, the plant was one of the largest industrial enterprises in Donetsk, whose main products were steel, cast iron, sheet metal and long products. 1991–2013 In August 1997, the plant was included in the list of enterprises of strategic importance for the economy and security of Ukraine. In 1998, for the first time, the plant was certified as a manufacturer of steel and semi-finished products from it (slabs and open-hearth ingots from carbon and carbon-manganese steel grades of normal and increased strength) according to the rules of the English Lloyd's Register. In August 2002, on the basis of the blast-furnace and open-hearth shops of the Donetsk Metallurgical Plant, the Donetskstal Metallurgical Plant PJSC was established. The company specializes in the production of: foundry and pig iron; more than 100 grades of carbon, structural, low-alloyed and alloyed steel of ordinary, quality and high-quality classes; church bells made of high-quality alloy steel; electric-welded longitudinal pipes and metal furniture frames; and slag-forming mixtures, granulated slag and building materials. Slabs made of normal-strength marine structural steel grades GL-A and GL-B are certified according to the rules of Germanischer Lloyd. In 2003, the church of St. Ignatius of Mariupol was built on the territory of the plant. The plant completed 2013 with a net profit of UAH 83.322 million. Since 2014 The outbreak of hostilities in eastern Ukraine in 2014 complicated the activities of the enterprise. As a result, the plant ended 2014 with a net loss of UAH 4,871.533 million. In the first nine months of 2015, the plant produced 426 thousand tons of pig iron and 1.664 million tons of grade K coal concentrate, but losses continued to increase. In June 2016, the leadership of the unrecognized DPR introduced external management at the plant, by which time its communications and a significant part of the equipment had become unusable due to repeated shutdowns and long downtime. Also in June 2016, on the basis of the Donetsk Electrometallurgical Plant (DEMP), the state enterprise Yuzovsky Metallurgical Plant (YuMZ) was opened. The YuMZ industrial complex is located on the same territory as the Donetsk Metallurgical Plant, which, in turn, spans three districts of the DPR capital at once: Voroshilovsky, Budenovsky and Leninskyi. It was re-launched on October 5, 2017. In 2018 YuMZ began to supply products to Turkey, Iran and Syria. Since May 1, 2019, blast-furnace production has been stopped, and the company does not manufacture products. During the heating period of 2019–2020, only the factory CHPP-PVS and related power plants worked, in order to supply heat to part of the above three districts of the city of Donetsk. From March to August 2020, the plant suspended work due to a shortage of raw materials. As of November 2020, the YuMZ complex, separated from the main part of the plant, continues to produce steel, specializing in the production of continuously cast square billets.
The workforce consists of 858 people. The DMZ has energy workshops that supply factory consumers and sub-consumers with drinking and industrial water, electricity, natural gas, steam and hot water, and provide transit of these utilities. Museum The Museum of the History of the Donetsk Metallurgical Plant was created in 1955, and is located in the Technology House. The idea for its creation came from the director of the DMZ, Pavel Vasilyevich Andreev. The museum holds more than 3000 exhibits. In 1971 the museum was awarded the title of national museum. Among the exhibits are certificates for products manufactured by the plant in 1900, original photographs of the Nizhny Novgorod industrial exhibition of 1896, and others. On February 16, 2012, a branch of the Museum of the History of the Donetsk Metallurgical Plant was opened on the lower floor of the St. Ignatius Church, which is dedicated to Ignatius of Mariupol. "City of Smiles" On June 17, 2004, a children's playground, the "City of Smiles", was opened in the park area of the plant. In the "City of Smiles" a sports area and playground, a children's railway, and a zoo were arranged; the zoo contained a mouflon, a Bactrian camel, a pony, a donkey, a collared peccary, a raccoon, a red-headed duck, a crested duck, a mandarin duck, a wood duck, a common shelduck, a Muscovy duck, a common pochard, a coypu, a rhea, a bush wallaby, a macaque monkey, a rhesus monkey, a Romanian pheasant, a tragopan pheasant, chickens, a golden pheasant, a Reeves's pheasant, a common peacock, a guinea fowl, cockatiels, rose-ringed parakeets, a demoiselle crane, King and Strasser pigeons, a pectoral sandpiper, trumpeters, a porcupine, a Cameroon goat, a llama, a spotted deer, a European fallow deer, a Barbary sheep, a savannah zebra, and an American bison. Awards Order of Lenin Order of the October Revolution Gallery References Further reading Volodin G. G. In the Footsteps of History: Essays on the History of the Donetsk Order of Lenin Metallurgical Plant named after V. I. Lenin. Donetsk: Donbass, 1967. 352 pp. 20,000 copies. Overview of the Work of the Stalin Metallurgical Plant, compiled by I. M. Ektov. Moscow: TsIINChM, 1960. 24 pp. (Series 10, "Technical and Economic Reviews of the Work of Leading Ferrous Metallurgy Enterprises", Central Institute of Information on Ferrous Metallurgy). Steel companies of Ukraine Companies established in 1872 Companies based in Donetsk Metallurgical facilities Heavy industry Industry in Ukraine Recipients of the Order of Lenin
Donetsk Metallurgical Plant
[ "Chemistry", "Materials_science" ]
3,143
[ "Metallurgy", "Metallurgical facilities" ]
70,836,594
https://en.wikipedia.org/wiki/11%20Trianguli
11 Trianguli is a solitary star located in the northern constellation Triangulum, with an apparent magnitude of 5.55. The star is situated 281 light years away but is approaching with a heliocentric radial velocity of . It is probably on the horizontal branch fusing helium in its core, and is calculated to be about old. It has a stellar classification of K1 III. It has 2.446 times the mass of the Sun and 12.055 times the radius of the Sun. It shines at 54.6 times the luminosity of the Sun from its photosphere at an effective temperature of . References K-type giants Triangulum Trianguli, 11 015176 011432 0712 Durchmusterung objects
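The quoted radius and luminosity can be cross-checked with the Stefan–Boltzmann relation, L/L☉ = (R/R☉)²(T/T☉)⁴. The sketch below is illustrative only: it back-computes the effective temperature implied by the article's own figures (the article's temperature value itself is not reproduced here).

```python
# Illustrative consistency check of the quoted parameters of 11 Trianguli,
# using the Stefan-Boltzmann relation L = 4*pi*R^2*sigma*T^4 in solar units:
# L/Lsun = (R/Rsun)^2 * (T/Tsun)^4, with Tsun ~ 5772 K.
luminosity = 54.6   # L/Lsun, from the article
radius = 12.055     # R/Rsun, from the article
T_SUN = 5772.0      # K (IAU nominal solar effective temperature)

t_eff = T_SUN * (luminosity / radius**2) ** 0.25
print(f"implied effective temperature: {t_eff:.0f} K")  # ~4520 K, typical of a K1 giant
```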
11 Trianguli
[ "Astronomy" ]
158
[ "Triangulum", "Constellations" ]
70,836,690
https://en.wikipedia.org/wiki/Lishui%20%28sea-waves%29
Lishui () or shuijiao () is a set of parallel diagonal (either straight or wavy), multicoloured sea-wave/line patterns. It originated in China, where it was used by the Qing dynasty court prior to the mid-18th century. Lishui represents the deep sea under which the ocean surges and waves; it is therefore typically topped with "still water" (woshui (), also called pingshui ()), which is represented by concentric semicircle patterns which run horizontally. Lishui was used to decorate garments, including the bottom hem and cuffs of some of the court clothing of the Qing dynasty. It could also be used on wedding dress items. It is also used to decorate Chinese opera costumes, typically on the bottom hem of the costumes. It was also adopted in some court clothing of the Nguyen dynasty in Vietnam under the influence of the Qing dynasty. Cultural significance and symbolism In ancient China, embroideries on clothing were not only used as a means to embellish clothing but also held symbolic meanings. When used on dragon robes, lishui could be combined with turbulent waves and a rock in the middle of the clothing. Lishui represents the deep water; the rock represents the sacred mountain (山, shān), which is the representation of the Universe or the Earth. The turbulent waves were Buddhist elements. Wave patterns (usually shaped in semicircles, as found in woshui (卧水) patterns) are often used to represent tides (潮, cháo), a homophone of and symbol for the court "audience" (朝, cháo). Therefore, when worn together, these motifs mean that the wearer of the clothing is the "centre of the symbolic universe", being the ruler over the waters, the Earth and the Heavens. Lishui could be found in the five colours. Sometimes, within the wave patterns, other religious (Taoist and Buddhist) and auspicious Chinese symbols were added, such as clouds. History In the Ming dynasty, patterns of sea "waves breaking against rocks" were already in use on the Emperor's dragon robe in the early 16th century in order to create a cosmic landscape for the imperial dragons. Other forms of Ming dynasty court robes worn by nobles, officials and their wives (such as the bufu, i.e. the robe with a mandarin square) also used ocean-wave patterns in the form of concentric semicircles (woshui) as clothing ornaments. After the conquest of the Ming dynasty by the Manchu and the establishment of the Qing dynasty, the Manchu rulers inherited the dragon robes of the Ming dynasty, which they would refit and modify by adding their own Manchu-style features, similarly to how the Manchu (and their predecessors, the Jurchens) used to modify the dragon robes bestowed by the Ming dynasty prior to the conquest. Some early designs of the Qing dynasty jifu (dragon/python robe) showed patterns of woshui at the bottom hem of the robe but lacked lishui; this form of dragon robe eventually disappeared in the mid-18th century, possibly having fallen out of fashion. By the end of the 17th century, the Manchu rulers wanted to re-imagine the imagery of their dragon robes to emphasize the centrality of the emperor within the cosmos and within the imperial court. The Manchu rulers thus decided to redesign and regulate the decorative patterns found on their court robes by introducing and applying elaborate pattern designs.
Designs and construction of the Qing dynasty court robes were enacted and regulated through imperial edicts; the dress code was a mix of Manchu traditions (i.e., the cut and style of the clothing) and Ming dynasty Chinese traditions in terms of prescribed designs. The decorative patterns and visual motifs used by the Manchu rulers were adopted from the Han Chinese's adornment designs, decorations, and symbols rooted in Taoism and Buddhism; they were then adapted into a new set of designs. Lishui was thus added on the bottom hem and/or the sleeves of the court robes. Lishui found on the bottom of robes was initially short but gradually increased in length until the end of the dynasty. Usage Court clothing and wedding clothing The lishui was used to decorate the hem of the Qing dynasty court robes, such as the dragon robes, the python robes, the surcoats (e.g. gunfu/jifu gua/longgua), the jifupao (e.g. the Manchu wedding dress), the chaogua, and the mandarin square. It could also be used to decorate the wedding attire and xiapei of Han Chinese women in the Qing dynasty. In the early years of the Republic of China, lishui also appeared in the official clothing regulations promulgated by Yuan Shikai for the officials who participated in the 1914 Sacrifice at the Temple of Heaven ceremony when he proclaimed the beginning of a new dynasty. Chinese opera Lishui is used to decorate the hem of Chinese opera costumes, such as the mang, nümang, long jianyi and long magua. Influences and derivatives Vietnam Lishui, among many other decorative patterns, was also adopted in some court clothing of the Nguyen dynasty under the influence of the Qing dynasty. It was sometimes used to decorate the áo nhật bình. See also Hanfu - Traditional Han Chinese costume Qizhuang - Traditional Manchu clothing Twelve Ornaments Chinese ornamental gold silk Notes References Chinese traditional clothing Chinese art Chinese folk art Visual motifs Ornaments
Lishui (sea-waves)
[ "Mathematics" ]
1,135
[ "Symbols", "Visual motifs" ]
70,838,041
https://en.wikipedia.org/wiki/Mycena%20nebula
Mycena nebula is a species of fungus belonging to the Mycena genus. It was discovered in Veracruz in Mexico growing on moss-covered bark on living trees. It was documented in 2019 by A. Cortés-Pérez, Desjardin, and A. Rockefeller. Description The cap is 3–9 mm (0.1–0.35 in) in diameter and initially a broad conical shape, expanding to become convex or umbonate. The cap is moist and glabrous, and its color ranges from pale pink to red. When cut or bruised, a dark red latex is released. The gills are adnate to adnate with a decurrent tooth, distant, and white to pale pink. The stipe is central, cylindrical, hollow, and has a slightly swollen base. The stipe color ranges from red to translucent pink, and it releases a dark red latex when cut. The basidiome is bioluminescent and gives off a bright green light. The odor and edibility are unknown. References Bioluminescent fungi nebula Fungi described in 2019 Fungi of Mexico Fungi without expected TNC conservation status Fungus species
Mycena nebula
[ "Biology" ]
231
[ "Fungi", "Fungus species" ]
70,839,377
https://en.wikipedia.org/wiki/AZ12216052
AZ-12216052 is a drug which acts as a potent and selective positive allosteric modulator of the metabotropic glutamate receptor 8, and is used for research into the role of this receptor subtype in various processes including anxiety and neuropathic pain. References MGlu8 receptor agonists 4-Bromophenyl compounds Thioethers Amides
AZ12216052
[ "Chemistry" ]
83
[ "Amides", "Functional groups" ]
70,840,589
https://en.wikipedia.org/wiki/Giancarlo%20Fortino
Giancarlo Fortino is an Italian computer scientist who is currently a full professor of computer engineering at the Department of Informatics, Modeling, Electronics and Systems (DIMES) of the University of Calabria. Education Giancarlo Fortino was born in Italy in 1971. He graduated from the University of Calabria in 1995 with a laurea (5-year master's degree) in computer engineering. In 2000, he received a PhD in computer engineering from the University of Calabria. Career Giancarlo Fortino is currently a full professor of computer engineering at the Department of Informatics, Modeling, Electronics and Systems (DIMES) of the University of Calabria, where he is the director of the SPEME (Smart, Pervasive and Mobile systems Engineering) lab. He has supervised or co-supervised more than 15 doctoral students. In 1997 and 1999, he was a research scholar at the International Computer Science Institute in Berkeley, US. In 2009, he was a visiting professor at Queensland University of Technology, Australia. From 2001 to 2006, he was an assistant professor, and from 2006 to 2018 an associate professor, at the University of Calabria. In 2012, he was nominated Guest Professor of Computer Engineering at the Wuhan University of Technology. In 2015, he was nominated adjunct full professor of computer engineering in the framework of High-End Foreign Experts in China. Since 2015, he has been an adjunct senior research fellow at the Institute of High-Performance Computing and Networks of the National Research Council (Italy). In 2017, he was nominated high-end expert at Huazhong University of Science and Technology, China. In 2019, he served as a visiting scientist at the Shenzhen Institute of Information Technology after being awarded a Chinese Academy of Sciences President's International Fellowship Initiative (PIFI). In the same year, he was also nominated distinguished professor at Huazhong Agricultural University, China. Editorial activities Giancarlo Fortino is the founding editor of the IEEE Press Book Series on "Human-Machine Systems". He is the founding editor-in-chief of the Springer Book Series on "Internet of Things: Technology, Communications and Computing". He currently serves on the editorial board of several IEEE journals, including IEEE Sensors Journal and IEEE Access. He has edited several books, including one published by Wiley-IEEE Press entitled "Wearable Computing: From Modeling to Implementation of Wearable Systems based on Body Sensor Networks". Publications He has published more than 500 articles in international conferences, journals, and book chapters. He is listed among the Clarivate Web of Science Highly Cited Researchers in the field of computer science. He is the only Italian highly cited researcher in the computer science area. Honors and awards He is an Institute of Electrical and Electronics Engineers (IEEE) fellow. He is also a fellow of the Asia-Pacific Artificial Intelligence Association. He has been the recipient of three best paper awards, including the 2014 Andrew P. Sage Best IEEE SMC Transactions Paper Award, and of an Outstanding Chapter Award as the Chair of the IEEE SMC Italy Chapter. He has given more than 100 invited talks, keynotes, tutorials, and panels at international conferences and symposia, and he is a Distinguished Lecturer of the IEEE Sensors Council. References Italian computer scientists Electrical engineers Fellows of the IEEE Living people Year of birth missing (living people)
Giancarlo Fortino
[ "Engineering" ]
678
[ "Electrical engineering", "Electrical engineers" ]
70,841,705
https://en.wikipedia.org/wiki/Mercedes-Benz%20M291%20engine
The Mercedes-Benz M291 engine is a 3.5-liter flat-12 racing engine, designed, developed and produced by Mercedes-Benz for their Group C racing program. It was introduced in 1991, along with their new Mercedes-Benz C291 prototype race car chassis. Background The 1991 season marked the introduction of the FIA's new, and controversial, 3.5-liter formula, which replaced the highly successful Group C category that had been used in the World Sportscar Championship since 1982. Due to the small number of entries for the 3.5-liter formula, older Group C cars were allowed to participate in the season's C2 category, subject to weight penalties and starting behind the new-style C1 entries on the grid. Engine The primary feature of the new regulations was the use of a 3.5-liter naturally aspirated engine. This made it impossible for Mercedes-Benz to use the engines from its previous Group C cars. Also, to produce similar power to the Group C cars, a 3.5-liter naturally aspirated engine had to be very high-revving and be constructed from different materials in order to sustain such engine speeds. Unlike Jaguar's XJR-14, which had the readily available and proven Ford HB V8 engine from the Benetton B190B Formula One car (the engine regulations for the new 3.5-liter formula were identical to Formula One), Mercedes-Benz had to design an all-new purpose-built racing engine, and its M291 3.5L flat-12 unit was the result. The engine only produced about , compared to over produced by the 5.0-litre twin-turbocharged V8 found in the C291's predecessor, the Sauber-Mercedes C11. Applications Mercedes-Benz C291 Mercedes-Benz C292 (stillborn concept) References Mercedes-Benz engines V12 engines Engines by model Gasoline engines by model
Mercedes-Benz M291 engine
[ "Technology" ]
398
[ "Engines", "Engines by model" ]
70,841,912
https://en.wikipedia.org/wiki/Mercedes-Benz%20M106%20engine
The Mercedes-Benz M106 engine is a high-revving, prototype, four-stroke, 2.5-liter, naturally aspirated, V-6 racing engine, designed, developed and produced by Mercedes-Benz for the DTM and later ITC, between 1994 and 1996. History The new M106 six-cylinder replaced the previous four-cylinder engine used in the Mercedes-Benz 190E during the past seasons. It is a brand-new V6 with a displacement of just under 2.5 liters. Very loosely based on the 4.2 liter V8 used in the E 420 and S 420 models, the new engine uses a cylinder bank V-angle of 90 degrees. Equipped with twin overhead camshafts and four valves per cylinder, the compact unit nevertheless only weighed due to extensive use of alloys. Initially producing between , it drives the rear wheels via a six-speed sequential gearbox that was fitted at the rear of the car to improve the weight balance. While Alfa Romeo's model featured four driven wheels, Mercedes-Benz was restricted to only a rear-wheel drive setup for their new DTM racer as none of the road-going C-Class models used four-wheel drive. The 1996 iteration of the engine developed over , and revved over 11,500 rpm. Applications Mercedes-Benz C-Class DTM (W202) References Mercedes-Benz V6 engines Mercedes-Benz engines Gasoline engines by model Engines by model Piston engines Internal combustion engine
Mercedes-Benz M106 engine
[ "Technology", "Engineering" ]
303
[ "Internal combustion engine", "Engines", "Engines by model", "Piston engines", "Combustion engineering" ]
70,847,281
https://en.wikipedia.org/wiki/Capsulimonas
Capsulimonas is a Gram-negative, non-spore-forming, aerobic and non-motile genus of bacteria from the family Capsulimonadaceae with one known species (Capsulimonas corticalis). Capsulimonas corticalis has been isolated from the surface of a beech (Fagus crenata). See also List of bacterial orders List of bacteria genera References Bacteria Bacteria genera Monotypic bacteria genera Taxa described in 2019
Capsulimonas
[ "Biology" ]
99
[ "Microorganisms", "Bacteria stubs", "Prokaryotes", "Bacteria" ]
70,847,519
https://en.wikipedia.org/wiki/Import%20One-Stop%20Shop
Import One-Stop Shop (IOSS or Import OSS) is an electronic one-stop shop (OSS) portal in the European Union (EU) which serves as a point of contact for the import of goods from third countries into the European Union. The scheme aims to simplify the declaration and payment of value-added tax when importing goods into the European Union. IOSS became available from 1 July 2021, and applies to distance sales of items imported from third territories or third countries with a value from 0 to 150 euros. Participation in the IOSS portal is voluntary. History A system change in the VAT procedure was proposed by the European Commission in two stages. The first stage came into effect on 1 January 2015 under the name Mini One-Stop Shop (MOSS), and related to telecommunications, radio and television services as well as electronically provided services to end customers. The second package of measures was adopted by the European Council in December 2017, and extended the VAT system change to distance sales and any types of cross-border service provided to a final customer in the EU. Goals Changes in trade patterns of the world economy and the creation of new technologies have opened up new trading opportunities, and it is expected that e-commerce through distance sales will continue to grow, in particular via electronic marketplaces or platforms (electronic interfaces). To keep up with these e-commerce changes, EU VAT regulations have also changed. Some aims of the EU VAT packages for e-commerce are: VAT is paid when the consumption of goods and services takes place Create a uniform VAT regulation for cross-border deliveries and services, thus simplifying cross-border trade Fight against VAT fraud Ensure fair conditions of competition for EU entrepreneurs and e-commerce traders from third countries, as well as between e-commerce and traditional shops Higher revenues for the EU member states as a result of fairer taxation Until the introduction of the IOSS system, there was a VAT exemption on goods imported to the EU with a value from 0 to 22 euros, which meant that sellers in the EU were disadvantaged because they had to charge end customers VAT, while sellers from a third country did not have to add value added tax (import value added tax) to the purchase price for long-distance purchases. However, for imported products from 22 to 150 euros the customer had to pay the VAT themselves, which resulted in sellers from third countries being disadvantaged because the customer often would have to pay high customs clearance fees when VAT was collected. Between 22 and 150 €, the goods could be inspected at customs, with the VAT and the delivery company's customs clearance fee added. This is still the case when the seller isn't registered in the IOSS or when the value is above 150 €. The IOSS lets the user know the total cost at checkout. The VAT is included and the delivery company doesn't charge a customs clearance fee. The customs clearance is also much faster. By abolishing the 22 euro VAT exemption for deliveries from third countries, it is estimated that more than 7 billion euros in additional taxes will be collected by the EU member states yearly. Implementation with IOSS The IOSS allows suppliers selling imported goods to buyers in the EU (distance selling of goods) to collect the VAT applicable in the country of destination and pay it to the relevant tax authorities.
Thus, under the IOSS the buyer is no longer obliged to pay VAT themselves at the time of importing the goods into the EU, as was the case before (for products over 22 euros). IOSS thus facilitates the collection, declaration and payment of VAT for third-country sellers making distance sales of imported goods to buyers residing in the EU. As a result, the buyer is no longer surprised by hidden fees (taxes), such as high customs clearance fees for collecting VAT. However, if the third-country seller is not registered with the IOSS (which is voluntary), the buyer will still have to pay VAT, and possibly a fee charged by the transport company (e.g. the post office), at the time of importing the goods into the EU. Registration Registration for companies has been possible since 1 April 2021 on the IOSS portal of any EU member state, and registration in a single union member state is sufficient. The company receives an IOSS identification number (also simply called an IOSS number). If a company is not based in the EU, it must also use an EU-based intermediary (a fiscal representative) to meet and guarantee its VAT obligations under the IOSS. The IOSS VAT number is issued by a tax authority in a union member state and is made available electronically to all other customs authorities in the EU. However, this database of IOSS VAT registration numbers is not public. When making a customs declaration, the customs authorities check the IOSS VAT identification number of the package against the database of IOSS VAT identification numbers. If the IOSS number is valid and the real value of the shipment does not exceed 150 euros, customs authorities do not require immediate payment of VAT on low-value goods registered through the IOSS. Non-participation in IOSS If a company does not participate in IOSS, the customer must pay the VAT themselves when importing the goods into the EU. Postal operators or courier services can also charge the customer a handling fee to cover the costs of the customs formalities required when importing goods. Customers in the EU will only receive the ordered goods after paying the VAT. This can result in the customer refusing to accept the package in question because of the additional costs. A seller that does not participate in IOSS must fulfill any customs and tax obligations separately in each EU member state to which it delivers, which may include registering there. Exceptions If several goods are sold to the same buyer and together exceed a value of 150 euros per package, the goods are taxed when imported into the EU member state (import VAT). In the case of distance selling of goods through an electronic interface such as a marketplace or platform, the electronic interface is liable for the VAT due. Goods that are subject to excise duties (e.g. alcohol or tobacco products) cannot be processed through the IOSS. Even if these excise goods are the subject of a distance sale from third countries, they are not covered by the IOSS regulations.
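The decision rules described above reduce to a small amount of branching logic. The following Python sketch is illustrative only: the rate is a placeholder, and excise goods and the electronic-interface liability rules are deliberately not modelled.

```python
# Illustrative sketch of the IOSS decision logic described above.
# Simplified: excise goods and marketplace (electronic interface) liability
# are not modelled; the VAT rate is a placeholder, not legal advice.
def vat_due_at_checkout(intrinsic_value_eur: float,
                        seller_ioss_registered: bool,
                        destination_vat_rate: float) -> float | None:
    """Return VAT to collect at checkout under IOSS, or None if VAT is
    instead collected at import (possibly plus a carrier handling fee)."""
    if intrinsic_value_eur > 150:
        return None  # outside IOSS: taxed on importation into the EU
    if not seller_ioss_registered:
        return None  # buyer pays VAT (and often a handling fee) at import
    return intrinsic_value_eur * destination_vat_rate

print(vat_due_at_checkout(80.0, True, 0.21))   # 16.8 -> collected by the seller
print(vat_due_at_checkout(80.0, False, 0.21))  # None -> collected at import
```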
See also VOEC, a similar, but unrelated scheme implemented in Norway from 2020 References Further reading Council Implementing Regulation (EU) No 282/2011 Council Regulation (EU) No 904/2010 Council Directive (EU) 2017/2455 Regulation (EU) 2017/2454, Implementing Regulation (EU) 2017/2459 Council Directive (EU) 2019/1995 Council Implementing Regulation (EU) 2019/2026 Implementing Regulation (EU) 2020/194 Council Decision (EU) 2020/1109 Council Regulation (EU) 2020/1108 Council Implementing Regulation (EU) 2020/1112 Commission Implementing Regulation (EU) 2020/1318 E-commerce
Import One-Stop Shop
[ "Technology" ]
1,455
[ "Information technology", "E-commerce" ]
70,848,253
https://en.wikipedia.org/wiki/Manchester%20Steam%20Users%27%20Association
By the 1840s, thousands of high-pressure boilers were used in the United Kingdom. However, many boilers were poorly constructed and not well-managed or maintained once installed. Boilers could explode suddenly and powerfully, sending debris flying into nearby streets or fields. In the 1860s, nearly 500 explosions were recorded, causing over 700 deaths and 900 injuries. In 1854, engineers and mill owners met in Manchester to form an organisation to deal with the growing number of boiler explosions. Those present included William Fairbairn and Joseph Whitworth. In 1855 they officially founded the Association for the Prevention of Steam Boiler Explosions and for Effecting Economy in the Raising and Use of Steam. This later became known as the Manchester Steam Users Association or MSUA. The primary purpose of the MSUA was to provide 'the increased security against explosions, which a periodical inspection by an experienced engineer affords, and the saving of fuel which may be expected from the inspection of an intelligent officer well acquainted with the principle on which perfect combustion depends'. References Organizations established in 1855 Organisations based in Manchester Safety organizations Boilers
Manchester Steam Users' Association
[ "Chemistry" ]
217
[ "Boilers", "Pressure vessels" ]
70,848,363
https://en.wikipedia.org/wiki/The%20Lichenologist
The Lichenologist is a peer-reviewed scientific journal specializing in lichenology. It is published bimonthly by the British Lichen Society. According to the Journal Citation Reports, the 2020 impact factor of The Lichenologist is 1.514, ranking it 149 out of 235 in plant sciences and 26 of 29 in mycology. More than 51,000 lichen-related articles were published up to 2019, about 4.7% (over 2400) of which were published in The Lichenologist; about half of these were published under the senior editorship of Peter Crittenden, who had a 20-year tenure at the journal, from 2000 to 2020. History The first issue of The Lichenologist was published in November 1958. It followed the establishment of the British Lichen Society on 1 February 1958. In its first editorial, the primary objectives of the journal were outlined, which focussed on both the enhancement of lichenological study and the importance of nature conservation. The journal sought to address the scarcity of contemporary literature on British lichen taxonomy by providing detailed articles to assist botanists in identifying local species. Additionally, it aimed to foster contributions on the distribution and ecology of lichens in Britain, areas that were then underexplored. Emphasising the balance between research and the ecological impact of specimen collection, The Lichenologist advocated for careful, responsible study practices to avoid harming these slow-growing organisms. In its early years, the journal experimented with different cover designs before settling on a mint green cover in 1959, which remained in use until 2000. The journal also transitioned from irregularly published volumes to annual volumes, with volume 6 in 1974 marking the start of consecutively numbered volumes synchronised with calendar years. Editorial leadership The journal has had several distinguished senior editors throughout its history: Peter W. James (1958–1977) David L. Hawksworth (1978–1988) Dennis H. Brown (1989–2000) Peter Crittenden (2000–2019) Christopher J. Ellis and Leena Myllys (2020–present) Journal development During Crittenden's tenure as senior editor from 2000 to 2019, The Lichenologist underwent several significant changes that modernised and enhanced the journal's impact. In 2001, Crittenden initiated a comprehensive overhaul of the journal's layout and printing, introducing a larger page size and a new cover design that departed from the long-standing mint green cover used since 1959. This visual refresh coincided with efforts to broaden the journal's content and appeal. Crittenden introduced thematic issues focusing on specific topics within lichenology, which helped to consolidate research in particular areas and increase reader engagement. He also encouraged the submission of longer, more comprehensive papers, allowing for more in-depth treatments of complex subjects. This shift towards more substantial contributions was reflected in an increase in the average number of pages per paper over the years. The journal also adapted to changes in scientific publishing practices under Crittenden's leadership. The Lichenologist implemented effective electronic publication, keeping pace with the digital transformation of academic publishing. In response to evolving nomenclatural requirements, the journal introduced the obligate registration of new fungal names, ensuring that taxonomic contributions met the latest standards in the field. 
Perhaps one of the most notable changes came in 2016 when Crittenden implemented a policy to reject "single naked species descriptions". This decision encouraged authors to contextualise new species descriptions within broader taxonomic or ecological frameworks, thereby increasing the overall impact and usefulness of such contributions. Despite initial concerns, this policy change did not decrease the number of new species described in the journal; instead, it led to more comprehensive and valuable taxonomic papers. Impact and output The Lichenologist received its first impact factor in 1999. Under Crittenden's editorship, the impact factor increased from around 0.8–1.0 in the early 2000s to over 1.5 in recent years. The journal's output has been substantial. A total of 1256 papers were published by the journal between 1958 and 1999, and 1197 papers were published between 2000 and 2019. More than 2000 new lichen species were described between 2000 and 2019, representing 69% of all new species described in the journal since its inception. Content and scope The Lichenologist covers a wide range of topics in lichenology, including taxonomy, systematics, ecology, biogeography, and conservation. The journal has a global scope, with contributions from authors worldwide and studies covering diverse geographical regions. In recent years, the journal has collaborated with The Bryologist to increase the quality of both publications and enhance their impact in the field of lichenology. References Cited literature Botany journals Academic journals established in 1958 Bimonthly journals English-language journals Mycology
The Lichenologist
[ "Biology" ]
976
[ "Mycology" ]
70,849,237
https://en.wikipedia.org/wiki/Entoloma%20griseocyaneum
Entoloma griseocyaneum is a species of agaric (gilled mushroom) in the family Entolomataceae. It has been given the recommended English name of Felted Pinkgill. The species has a European distribution, occurring mainly in agriculturally unimproved grassland. Threats to its habitat have resulted in the Felted Pinkgill being assessed as globally "vulnerable" on the IUCN Red List of Threatened Species. Taxonomy The species was first described by Swedish mycologist Elias Magnus Fries in 1821 as Agaricus griseocyaneus. German mycologist Paul Kummer transferred it to the genus Entoloma in 1871. Description Basidiocarps are agaricoid, up to 120 mm (4.75 in) tall, the cap conical to convex then flat to broadly umbonate, up to 50 mm (2 in) across. The cap surface is finely fibrillose, yellow-brown to sepia. The lamellae (gills) are white becoming pink from the spores. The stipe (stem) is smooth, finely fibrillose, typically pale grey-blue, lacking a ring. The spore print is pink, the spores (under a microscope) multi-angled, inamyloid, measuring about 9 to 13.5 by 6.5 to 8 μm. Similar species Entoloma isborscanum is superficially similar, but can be distinguished microscopically by having a sterile lamella edge with abundant cheilocystidia. Entoloma scabropellis, which lacks blue colours on the stipe, is said to be a synonym of Entoloma griseocyaneum based on DNA analysis. Distribution and habitat The Felted Pinkgill is rare but widespread in Europe. Like many other European pinkgills, it occurs in old, agriculturally unimproved, short-sward grassland (pastures and lawns). Conservation Entoloma griseocyaneum is typical of waxcap grasslands, a declining habitat due to changing agricultural practices. As a result, the species is of global conservation concern and is listed as "vulnerable" on the IUCN Red List of Threatened Species. References Entolomataceae Fungi of Europe Taxa named by Elias Magnus Fries Fungi described in 1821 Fungus species
Entoloma griseocyaneum
[ "Biology" ]
476
[ "Fungi", "Fungus species" ]
70,849,383
https://en.wikipedia.org/wiki/Taiwania%203
Taiwania 3 (Traditional Chinese (Taiwan): 台灣杉三號) is a supercomputer built by Taiwan and, as of August 2021, the newest of the series. It is housed in the National Center for High-performance Computing (NCHC) of NARLabs. It has 50,400 cores in total across 900 nodes, using Intel Xeon Platinum 8280 2.4 GHz CPUs (28 cores per CPU) and running CentOS as its operating system. It is an open-access public supercomputer: scientists and other researchers can currently use it for specific research after obtaining permission from Taiwan's National Center for High-performance Computing. This is the third supercomputer of the Taiwania series. It runs CentOS x86_64 7.8 as its operating system and uses the Slurm Workload Manager as its workload manager. Taiwania 3 uses an InfiniBand HDR100 100 Gbit/s high-speed interconnect to ensure good performance. Each node has 192 GB of main memory and currently contains two Intel Xeon Platinum 8280 2.4 GHz CPUs (28 cores per CPU). The full calculation capability is 2.7 PFLOPS. It was launched into operation in November 2020, ahead of schedule, because of computing needs related to COVID-19. It is currently ranked number 227 on the TOP500 list of June 2021 and number 80 on the Green500 list. It was manufactured by Quanta Computer, Taiwan Fixed Network, and ASUS Cloud. Capability and specifications This supercomputer's Rmax is 2297.6 TFLOPS, with an Rpeak of 4354.6 TFLOPS and an Nmax of 4,354,560, consuming 563.85 kW of power. The housing was mainly designed and manufactured by ASUS Cloud, which has experience in constructing supercomputer and storage-device housings; the system itself is owned by the Taiwanese government. The hardware is provided by Quanta Computer, which mainly manufactures servers. Software Software details are listed below (all data according to Top 500 and NCHC): Operating system : CentOS x86_64 7.8 Workload manager : Slurm Workload Manager Compiler : Intel Parallel Studio XE Composer Edition for Fortran and C++ Linux 2020 Update 4 Math library : Intel Math Kernel Library for Linux 2020 Update 4 MPI : Intel MPI Library for Linux 2019 Update 9 Hardware Hardware details are listed below (all data according to Top 500 and NCHC): CPU : Intel Xeon Platinum 8280 2.4 GHz CPU (28 Cores/CPU) Main memory : 192 GB/node (172800 GB total) Interconnection : NVIDIA Mellanox InfiniBand HDR100 The hardware is based on the QuantaPlex T42D-2U (4-node) dense-memory multi-node compute server manufactured by Quanta Computer. Operating temperature: 5 °C to 35 °C (41 °F to 95 °F); operating relative humidity: 20% to 85% RH. It has two processors per node; the Intel Xeon CPUs and memory mentioned above are installed in each node. Speed Speed details are listed below. Rpeak : 4354.6 TeraFLOPS Rmax : 2297.6 TeraFLOPS Note that this machine relies on CPUs for its calculations. Accessibility The Taiwania series has always been available for public access through iService, with users paying according to their requested time, CPUs, and GPUs. Films involved The movie Seqalu was filmed in collaboration with TWCC, a national service provided by Taiwan. TWCC includes the enormous calculation resources provided by the Taiwania supercomputer series, and Taiwania 3 is one of the resource providers for the film. Programmers at the National Center for High-performance Computing of Taiwan designed the algorithms used in simulations such as arrow flights and gunfire. Taiwanese Environment Evaluation Contributions The system operates under NCHC, NARLabs, which means it is part of the Taiwanese government. 
It contributes to national analytics by helping to combine the information generated by miscellaneous equipment all around Taiwan. It also helps by combining LIDAR, visual camera, DSM, and other data to form maps during disasters. Moreover, together with other supercomputers, it constructs 3D visualizations for the Taiwanese government to assist in rescue, research, training, decision-making, mapping, and more. Biology and medicine Taiwania 3 is a supercomputer intended to support biomedical development; in particular, it was meant to help scientists find a solution to the COVID-19 pandemic. It has also been connected to Taiwanese biological laboratories and their databases; laboratories must meet certain requirements in order to connect to the system. History 2019 In 2019, the NCHC started the Taiwania 3 construction project. 2020 November 2020: the Taiwania 3 supercomputer was launched officially by NCHC. November 2020: the Taiwania 3 supercomputer joined COVID-19 research. 2021 May 2021: an outbreak of COVID-19 started in Taiwan. June 2021: Taiwania 3 was officially inaugurated. July 3, 2021: last registration date for the Tech V2.0 Coronavirus project (note that the deadline was later moved to August 31). September 2021: the collaboration on the film Seqalu was unveiled. Architecture The Taiwania 3 supercomputer is a CPU-based supercomputer, with about fifty thousand Intel Xeon CPU cores. Like the other TOP500 supercomputers, it uses Linux as its operating system. Comparison with other Taiwania supercomputers The Taiwania series is a family of supercomputers built by Taiwan in the 21st century. The Taiwania 2 supercomputer, a GPU machine also made by the NCHC of Taiwan, has a capability of 9 PetaFLOPS, nearly 4 times greater than the 2.2–2.7 PetaFLOPS of Taiwania 3 (which mainly uses CPUs, just like Taiwania 1). The main differences between Taiwania 2 and 3 are their main processors and their objectives: Taiwania 2 is a GPU machine-learning supercomputer, whereas Taiwania 3 is a CPU computing device for general scientific research. Compared with Taiwania 2, Taiwania 3 is more similar to the Taiwania 1 supercomputer, also of Taiwan: both use a CPU architecture with Intel CPUs, and both are more openly accessible to the public (as of August 2021). Their main difference is capacity. These three supercomputers are all currently part of the iService system and partially of the TWCC computing system, though Taiwania 3 was to replace Taiwania 1; the next replacement will be Taiwania 4, a CPU HPC that will replace the retiring Taiwania 1 when finished. Taiwania 3 has a total capacity of 2.2 PetaFLOPS according to the NCHC of Taiwan. It is a 21st-century HPC, meaning that it uses multitasking to perform high-performance computations. Rather than using NVIDIA GPUs to boost capacity, this system uses Intel Xeon CPUs for its calculations, making it closer to everyday programming (though still somewhat different). Because it uses CPUs, its overall machine-learning capacity is not as good as that of NVIDIA GPU machine-learning systems. See also Taiwania (supercomputer) Supercomputers Computer cluster Xeon References Supercomputers Science and technology in Taiwan 2020 establishments in Taiwan Computer-related introductions in 2020
Taiwania 3
[ "Technology" ]
1,606
[ "Supercomputers", "Supercomputing" ]
70,849,408
https://en.wikipedia.org/wiki/Buelliella%20lecanorae
Buelliella lecanorae is a species of lichenicolous (lichen-eating) fungus in the class Dothideomycetes. It is found in a few locations in Estonia and in Crimea, where it grows parasitically on members of the Lecanora subfusca species group. Taxonomy Buelliella lecanorae was formally described as a new species in 2004 by lichenologists Ave Suija and Vagn Alstrup. The type specimen was collected from a churchyard in Suure-Jaani (Viljandi County); there, the fungus was found growing parasitically on the crustose lichen Lecanora chlarotera, which itself was growing as an epiphyte on Norway maple. The species epithet lecanorae refers to the genus of the host lichen. Description The ascomata of Buelliella lecanorae are in the form of small, rounded to irregularly shaped black apothecia (up to 0.2 mm in diameter), which are scattered over the thallus of the host. The asci are broadly club-shaped (clavate), measure about 50–57 by 18–20 μm, and usually contain eight spores (some have six). The ascospores, which are divided into two cells by a single septum, are smooth, ellipsoid, and measure 16–20.5 by 6.5–9.5 μm. They are initially colourless before turning brown. Habitat and distribution Buelliella lecanorae was originally known to occur only in a few locations in Estonia. In 2014, it was reported from Novyi Svit, a nature reserve in Crimea. It parasitizes members of the Lecanora subfusca species group, which also includes Lecanora chlarotera, L. pulicaris, and L. argentata. The relationship between the fungus and lichen appears to be commensalistic, as the fungus does not appear to cause visible damage to the host. References Dothideomycetes Fungi described in 2004 Fungi of Europe Lichenicolous fungi Taxa named by Ave Suija Fungus species
Buelliella lecanorae
[ "Biology" ]
443
[ "Fungi", "Fungus species" ]
70,849,723
https://en.wikipedia.org/wiki/Molly%20Gibson
Molly Gibson is an American girl widely known for being born from an embryo that had been frozen for 28 years; she was born on October 26, 2020. She holds the world record for the longest-frozen embryo ever to come to birth. Background After being donated by a couple in 1992, Molly's embryo was frozen and placed in a cryogenic freezer. The embryo was thawed and transferred to the uterus of 28-year-old Tina Gibson in February 2020. Tina, born in April 1991, was under 2 years old when the original couple donated Molly's embryo to a clinic in the Midwest. The Gibsons had earlier adopted the embryo of Emma, which had been frozen for 24 years and was the oldest human embryo in history to have been born, until Molly. References Living people People from Tennessee 2020 births 21st-century American people American children Assisted reproductive technology
Molly Gibson
[ "Biology" ]
172
[ "Assisted reproductive technology", "Medical technology" ]
70,850,869
https://en.wikipedia.org/wiki/Power%20system%20operations%20and%20control
Power system operations is a term used in electricity generation to describe the process of decision-making on the timescale from one day (day-ahead operation) to minutes prior to the power delivery. The term power system control describes actions taken in response to unplanned disturbances (e.g., changes in demand or equipment failures) in order to provide reliable electric supply of acceptable quality. The corresponding engineering branch is called Power System Operations and Control. Electricity is hard to store, so at any moment the supply (generation) must be balanced with demand ("grid balancing"). In an electrical grid the task of real-time balancing is performed by a regional-based control center, run by an electric utility in the traditional (vertically integrated) electricity market. In the restructured North American power transmission grid, these centers belong to balancing authorities (74 of them in 2016); the entities responsible for operations are also called independent system operators or transmission system operators. Another form of balancing the resources of multiple power plants is a power pool. The balancing authorities are overseen by reliability coordinators. Day-ahead operation Day-ahead operation schedules the generation units that can be called upon to provide the electricity on the next day (unit commitment). The dispatchable generation units can produce electricity on demand and thus can be scheduled with accuracy. The production of the weather-dependent variable renewable energy for the next day is not certain; its sources are thus non-dispatchable. This variability, coupled with uncertain future power demand and the need to accommodate possible generation and transmission failures, requires scheduling of operating reserves that are not expected to produce electricity, but can be dispatched on very short notice. Some units have unique features that require their commitment much earlier: for example, nuclear power stations take a very long time to start, while hydroelectric plants require planning of water resource usage well in advance, so commitment decisions for these are made weeks or even months prior to delivery. For a "traditional" vertically integrated electric utility, the main goal of the unit commitment is to minimize both the marginal cost of producing the electricity and the start-up costs (quite significant for fossil fuel generation). In a "restructured" electricity market a market-clearing algorithm is utilized, frequently in the form of an auction; the merit order is sometimes defined not just by the monetary costs, but also by environmental concerns. Unit commitment is more complex than the shorter-time-frame operations, since unit availability is subject to multiple constraints: the demand-supply balance needs to be maintained, including sufficient spinning reserves for contingency; the balance needs to reflect the transmission constraints; thermal units might have limits on minimum uptime (once switched on, they cannot be turned off quickly) and downtime (once stopped, they cannot be quickly restarted); "must-run" units have to run due to technical constraints (for example, combined heat and power plants must run if their heat is needed); and there is usually a single crew at the plant that needs to be present during a thermal unit start-up, so only one unit can be started at a time. 
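To make these commitment constraints concrete, the following is a minimal illustrative sketch in Python (a toy under stated assumptions, not any operator's actual software): it brute-forces the on/off choices for a hypothetical three-unit fleet, enforces a must-run constraint and a reserve requirement, and keeps the cheapest feasible commitment. All unit names, sizes, and costs are invented for illustration.

from itertools import product

# Hypothetical units: (name, max_mw, marginal_cost_per_mwh, startup_cost, must_run)
UNITS = [
    ("coal",  120, 30.0, 5000.0, True),    # must-run baseload unit
    ("ccgt",   80, 60.0, 1500.0, False),   # mid-merit combined cycle unit
    ("peaker", 50, 120.0,  300.0, False),  # fast-start peaking unit
]

def commit(demand_mw, reserve_mw):
    """Brute-force the cheapest on/off commitment whose capacity covers
    demand plus spinning reserve (a toy version of unit commitment)."""
    best = None
    for on in product([False, True], repeat=len(UNITS)):
        # Must-run units cannot be left offline.
        if any(u[4] and not flag for u, flag in zip(UNITS, on)):
            continue
        capacity = sum(u[1] for u, flag in zip(UNITS, on) if flag)
        if capacity < demand_mw + reserve_mw:
            continue  # committed capacity must cover demand + reserve
        # Dispatch the committed units in merit order (cheapest first).
        cost, remaining = 0.0, demand_mw
        for u, flag in sorted(zip(UNITS, on), key=lambda p: p[0][2]):
            if not flag:
                continue
            mw = min(u[1], remaining)
            cost += u[3] + u[2] * mw  # startup cost + one hour of energy
            remaining -= mw
        if best is None or cost < best[0]:
            best = (cost, on)
    return best

cost, on = commit(demand_mw=150, reserve_mw=40)
print([u[0] for u, flag in zip(UNITS, on) if flag], cost)  # ['coal', 'ccgt'] 11900.0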
Hours-ahead operation In the hours prior to the delivery, a system operator might need to deploy additional supplemental reserves or even commit more generation units, primarily to ensure the reliability of the supply while still trying to minimize the costs. At the same time, the operator must ensure that enough reactive power reserves are available to prevent voltage collapse. Dispatch curve The decisions ("economic dispatch") are based on the dispatch curve, where the X-axis constitutes the system power; intervals for the generation units are placed on this axis in the merit order, with the interval length corresponding to the maximum power of the unit, and the Y-axis values represent the marginal cost (per MWh of electricity, ignoring the startup costs). For cost-based decisions, the units in the merit order are sorted by increasing marginal cost. The graph on the right describes an extremely simplified system, with three committed generator units (fully dispatchable, with constant per-MWh cost): unit A can deliver up to 120 MW at the cost of $30 per MWh (from 0 to 120 MW of system power); unit B can deliver up to 80 MW at $60/MWh (from 120 to 200 MW of system power); unit C is capable of 50 MW at $120/MWh (from 200 to 250 MW of system power). If the expected demand is 150 MW (a vertical line on the graph), unit A will be engaged at full 120 MW power, unit B will run at a dispatch level of 30 MW, and unit C will be kept in reserve. The area under the dispatch curve to the left of this line represents the cost per hour of operation (ignoring the startup costs, $30 × 120 + $60 × 30 = $5,400 per hour); the incremental cost of the next MWh of electricity ($60 in the example, represented by a horizontal line on the graph) is called the system lambda (thus another name for the curve, the system lambda curve); a code sketch of this calculation follows the redispatch discussion below. In real systems the cost per MWh usually is not constant, and the lines of the dispatch curve are therefore not horizontal (typically the marginal cost of power increases with the dispatch level, although for combined cycle power plants there are multiple cost curves depending on the mode of operation, so the power-cost relationship is not necessarily monotonic). If the minimum level of demand in the example stays above 120 MW, unit A will constantly run at full power, providing baseload power; unit B will operate at variable power; and unit C will need to be turned on and off, providing the "intermediate" or "cycling" capacity. If the demand goes above 200 MW only occasionally, unit C will be idle most of the time and will be considered a peaking power plant (a "peaker"). Since a peaker might run for just tens of hours per year, the cost of peaker-produced electricity can be very high in order to recover the capital investment and fixed costs (see the right side of a hypothetical full-scale dispatch curve). Redispatch Sometimes the grid constraints change unpredictably and a need arises to change the previously set unit commitments. This change, called redispatch, is controlled in real time by the central operator, which issues directives to market participants that submit bids in advance for increases or decreases in their power levels. Due to the centralized nature of redispatch, there is no delay to negotiate the terms of contracts; the costs incurred are allocated either to the participants responsible for the disruption, based on pre-established tariffs, or in equal shares. 
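The dispatch-curve arithmetic from the example above can be reproduced with the following short Python sketch (an illustration only, not utility software; the three units and the 150 MW demand are taken directly from the text): it walks the merit order, accumulating the dispatch level of each unit and the hourly cost, and reports the price of the marginal unit as the system lambda.

# Merit order from the example: (max_mw, marginal_cost_per_mwh), cheapest first.
MERIT_ORDER = [(120, 30.0), (80, 60.0), (50, 120.0)]  # units A, B, C

def dispatch(demand_mw):
    """Walk the dispatch curve: fill each unit in merit order until demand
    is met; return the per-unit dispatch, hourly cost and system lambda."""
    levels, cost, lam, remaining = [], 0.0, None, demand_mw
    for max_mw, price in MERIT_ORDER:
        mw = min(max_mw, remaining)
        levels.append(mw)
        cost += price * mw
        if mw > 0:
            lam = price  # marginal cost of the last engaged unit
        remaining -= mw
    if remaining > 0:
        raise ValueError("demand exceeds committed capacity")
    return levels, cost, lam

levels, cost, lam = dispatch(150)
print(levels)     # [120, 30, 0]: A at full power, B at 30 MW, C in reserve
print(cost, lam)  # 5400.0 $/h of operation, system lambda of 60.0 $/MWh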
Minutes-ahead operation In the minutes prior to the delivery, a system operator is using the power-flow study algorithms in order to find the optimal power flow. At this stage the goal is reliability ("security") of the supply. The practical electric networks are too complex to perform the calculations by hand, so from the 1920s the calculations were automated, at first in the form of specially built analog computers, so-called network analyzers, replaced by digital computers in the 1960s. Control after disturbance Small mismatches between the total supply and the total load are typical and initially are taken care of by the kinetic energy of the rotating machinery (mostly synchronous generators): when there is too much supply, the devices absorb the excess, and the frequency goes above the scheduled rate; conversely, too much demand causes the generators to deliver extra electricity by slowing down, with the frequency slightly decreasing, not requiring an intervention from the operator. There are obvious limits to this "immediate control", so a control continuum is built into a typical power grid, spanning reaction intervals from seconds ("primary control") to hours ("time control"). Seconds-after control The primary control is engaged automatically within seconds after the frequency disturbance. Primary control stabilizes the situation, but does not return the conditions to normal, and is applied both to the generation side (where the governor adjusts the power of the prime mover) and to the load, where: induction motors self-adjust (lower frequency reduces the energy use); under-frequency relays disconnect interruptible loads; ancillary services are engaged (load is reduced as procured via reliability services contracts). Another term commonly used for the primary control is frequency response (or "beta"). Frequency response also includes the inertial response of the generators. This is the parameter that is approximated by the frequency bias coefficient of the area control error (ACE) calculation used for automatic generation control. Minutes-after control The secondary control is used to restore the system frequency after a disturbance, with adjustments made by the balancing authority control computer (this is typically referred to as load-frequency control or automatic generation control) and manual actions taken by the balancing authority staff. Secondary control uses both the spinning and non-spinning reserves, with balancing services deployed within minutes after the disturbance (hydropower plants are capable of an even faster reaction). Tertiary control The tertiary control involves reserve deployment and restoration to handle the current and future contingencies. Time control The goal of the time control is to maintain the long-term frequency at the specified value within a wide area synchronous grid. Due to the disturbances, the average frequency drifts, and a time error accumulates between the official time and the time measured in the AC cycles. In the US, the average 60 Hz frequency is maintained within each interconnection by a designated entity, the time monitor, that periodically changes the frequency target of the grid (the scheduled frequency) to bring the overall time offset within the predefined limits. For example, in the Eastern Interconnection the action (temporarily setting the frequency to 60.02 Hz or 59.98 Hz) is initiated when the time offset reaches 10 seconds and ceases once the offset reaches 6 seconds. 
Time control is performed either by a computer (Automatic Time Error Correction), or by the monitor requesting balancing authorities to adjust their settings. References Sources Electric power generation Power engineering Electric power infrastructure Power station technology
Power system operations and control
[ "Engineering" ]
2,055
[ "Power engineering", "Electrical engineering", "Energy engineering" ]
70,851,089
https://en.wikipedia.org/wiki/Nano%20Energy
Nano Energy is a monthly peer-reviewed scientific journal covering nanotechnology and energy. It was established in 2012 and is published by Elsevier. The editor-in-chief is Zhong Lin Wang (Georgia Institute of Technology). Abstracting and indexing According to the Journal Citation Reports, the journal has a 2021 impact factor of 19.069. References External links Materials science journals Academic journals established in 2012 English-language journals Monthly journals Elsevier academic journals Nanotechnology journals
Nano Energy
[ "Materials_science", "Engineering" ]
108
[ "Nanotechnology journals", "Materials science journals", "Materials science" ]
67,939,372
https://en.wikipedia.org/wiki/Tradeston%20Flour%20Mills%20explosion
On 9 July 1872 the Tradeston Flour Mills, in Glasgow, Scotland, exploded. Eighteen people died, and at least sixteen were injured. An investigation suggested that the explosion was caused by the grain feed to a pair of millstones stopping, causing them to rub against each other, resulting in a spark or fire igniting the grain dust in the air. That fire was then drawn by a fan into an "exhaust box" designed to collect grain dust, which then ignited, causing a second explosion which destroyed the building. At the time, there were general concerns about similar incidents worldwide, so the incident and investigation were widely reported across the world. Background The mill was owned by Matthew Muir & Sons, had been in operation for thirty years, and consisted of a five-storey grain store on King Street (now Kingston Street), another grain store that occupied most of a four-storey building on Clyde Place, and a four-storey grain mill building between the two, with three boilers and an engine shed attached. This occupied the majority of the block surrounded by Clyde Place, Commerce Street, King Street and Centre Street, with Gorbals Free Church, the Bute Hotel, some shops and some dwelling houses taking up the rest of the block. Explosion At 4 pm on 9 July, just as the day shift was about to finish, a large explosion blew out the front and back of the mill building. Survivors of the explosion described a small initial explosion that filled the building with flour, and then a large explosion that blew out the walls. The buildings were then engulfed in fire. Employees of neighbouring businesses were also injured and killed in the explosion. Six people were taken to the Royal Infirmary with serious injuries, while another ten with less serious injuries were sent home to recover. Firefighters were dispatched from all but one of the city's fire stations, with firefighters from Bridgeton station being held in reserve in case of another fire. An off-duty firefighter from the Central Fire Brigade had actually witnessed the explosion and flames while working on a roof across the river. On arriving at the scene, an immediate concern was preventing the fire from spreading to nearby buildings such as the riverside sheds or Bridge Street railway station. The windows of the station that faced onto Commerce Street had been shattered by the explosion, as well as parts of the glass roof, but firefighters were particularly concerned about the fire reaching the large spirit stores in the basement of the station. Ships like Anchor Line's Sidonian were moved away from the quayside for fear of the fire spreading. After a few hours, the roof of the mill building collapsed and the remains of the wall facing onto Commerce Street collapsed into the street, but by 11 pm the fire was considered under control. The following day, while the fire was contained but continued to burn within the ruins, work started to remove insecure pieces of the remaining buildings that faced onto the surrounding streets. While engaged in this work, men discovered two bodies in a tenement at the corner of Clyde Place and Commerce Street; one of them was Catherine Drennan, a young widowed mother of five. Three of her children were not home at the time of the explosion, but while one girl survived the explosion and escaped the subsequent fire, a nine-month-old daughter died. 
Efforts were made to look for survivors and recover bodies throughout the day, but only two bodies were recovered: Jane Mulholland from County Londonderry, Ireland, an employee of the Bute Hotel who had been retrieving clothes drying behind the hotel when the explosion occurred, and 14-year-old James Tanner from Donaghadee, Ireland, who had been working in the mill building. During the day the site was visited by the city's Lord Provost Sir James Lumsden, Master of Works John Carrick, Dean of Guild Alexander Ewing and procurators fiscal John Lang and James Neil Hart. Work to recover bodies continued until at least 8 August, with the final recovery being the body of 29-year-old Arthur Ferns, who had been employed in the mill. This brought the total of deaths to eighteen (fourteen employees of the mill, three residents of Clyde Place and one employee of the Bute Hotel), with at least sixteen injured. Victims Investigation Macquorn Rankine, Professor of Civil Engineering and Mechanics at Glasgow University, and Dr. Stevenson Macadam, who lectured in Chemistry at the Royal College of Surgeons of Edinburgh, were asked by an insurance company to investigate the cause of the explosion. They interviewed survivors, visited operating mills, and studied similar incidents, and published their report on 9 August. They theorised that the explosion was caused by a spark or fire from a pair of millstones igniting the finely ground flour dust in the air. Flour mills like the one at Tradeston had exhaust fans that drew flour dust from the millstones into an "exhaust box" and from there into a stive room. Rankine and Macadam stated that the grain feed to a pair of millstones had stopped while the stones kept turning, causing them to overheat. They suggested that the stones started a fire that was drawn by the fan into the "exhaust box", which then exploded, distributing dust throughout the building; this dust then ignited, causing the second, larger explosion reported by survivors. Their primary recommendation was that exhaust boxes and stive rooms should be housed outside mill buildings and designed to "be readily blown to pieces" so that, when similar fires happened, they would be drawn out of the buildings themselves and the force of any explosion expended externally. Their conclusions were reported around the world, from the Belfast News-Letter and London's The Pall Mall Gazette to Fort Wayne's Daily Sentinel and The Brooklyn Daily Eagle. See also Great Mill Disaster – A similar dust explosion at a flour mill in Minneapolis in 1878 References 1870s in Glasgow 19th-century fires in the United Kingdom 1870s disasters in the United Kingdom 1872 disasters in Europe 1872 fires 1872 in Scotland Building and structure fires in Scotland Building and structure collapses caused by fire Building and structure collapses in the United Kingdom Commercial building fires Disasters in Glasgow Dust explosions Explosions in 1872 Explosions in Scotland Fire and rescue in Scotland Food processing disasters Food processing industry in the United Kingdom Gorbals Industrial fires and explosions in the United Kingdom
Tradeston Flour Mills explosion
[ "Chemistry" ]
1,269
[ "Dust explosions", "Explosions" ]
67,940,054
https://en.wikipedia.org/wiki/C2orf72
C2orf72 (Chromosome 2, Open Reading Frame 72) is a gene in humans (Homo sapiens) that encodes a protein currently named after its gene, C2orf72. It is also designated LOC257407 and can be found under GenBank accession code NM_001144994.2. The protein can be found under UniProt accession code A6NCS6. This gene is primarily expressed in the liver, brain, placenta, and small intestine. C2orf72 is an intracellular protein that has been predicted to reside within the nucleus, cytosol, and plasma membrane of cells. The function of C2orf72 is unknown, but it is predicted to be involved in very-low-density lipoprotein particle assembly and in the regulation of cholesterol esterification. This prediction is consistent with the fact that both estradiol and testosterone have been reported to upregulate the expression of C2orf72. Gene Locus C2orf72 is a protein-coding gene found on the forward (+) strand of chromosome 2 at the locus 2q37.1, on the long arm of the chromosome. mRNA C2orf72's mRNA transcript is reported to be about 3,629 base pairs long. It appears to have two polyadenylation sites near the 3′ end of the mRNA transcript, each preceded by their respective regulatory sequences, such as ATTAAA or AATAAA. There are three predicted exons reported for human C2orf72. Expression pattern C2orf72 is preferentially expressed in brain, liver, placenta, colon, small intestine, gallbladder, stomach, and prostate, and to a lesser extent in adrenal gland, appendix, pancreas, lung, kidney, testis, and urinary bladder. Predicted Biological Functions It is predicted via Archs4 (July 16, 2022) that the function of this gene may be related to very-low-density lipoprotein particle assembly and to the regulation of cholesterol esterification. Regulation Gene-level regulation Gene perturbation data In a study of embryonic liver samples lacking hepatocyte nuclear factor 4 alpha (HNF4α), the expression of C2orf72 was downregulated. Both estradiol and testosterone upregulate the expression of C2orf72. Expression pattern C2orf72 mRNA and protein products are found preferentially in the liver, kidney, and placenta. The protein is localized to the cell membrane and cytoplasm in liver, brain, and placental tissues. Transcript-level regulation miR-1271-5p is a microRNA that could bind to the 3′ untranslated region of the C2orf72 mRNA transcript at 5′-...GUGCCAA...-3′. Protein-level regulation Predicted phosphorylation sites There are at least two predicted phosphorylation sites for the human C2orf72 protein, one at threonine-286 and the other at serine-294. Protein Human protein The predicted molecular weight of C2orf72 is 30.5 kDa, and it has a predicted isoelectric point (pI) of pH 8.7. There are eight cysteine residues, allowing for up to four disulfide bonds. Most of the cysteine residues are positioned next to a polar amino acid (uncharged, positively charged, or negatively charged). At physiological pH, there are 33 positively charged amino acid residues, including histidine, most of which are arginines. Likewise, there are 33 negatively charged amino acid residues, most of which are glutamates. There are 14 hydroxyl-containing residues (tyrosine, threonine or serine) that could serve as typical phosphorylation sites; most of these are serines. Interacting proteins These proteins have been reported to interact with human C2orf72: RASN (GTPase NRas), RASK (GTPase KRas), and CD81. Homology There are at least 203 organisms with an ortholog of C2orf72. 
The most evolutionarily distant reported ortholog of C2orf72 is in the Australian ghost shark (Callorhinchus milii), and the gene is broadly conserved from Actinopterygii (bony fish) to Mammalia. References Genes Lipids Cholesterol and steroid metabolism disorders
C2orf72
[ "Chemistry" ]
948
[ "Organic compounds", "Biomolecules by chemical classification", "Lipids" ]
67,942,902
https://en.wikipedia.org/wiki/Seismic%20oceanography
Seismic oceanography is a form of acoustic oceanography, in which sound waves are used to study the physical properties and dynamics of the ocean. It provides images of changes in the temperature and salinity of seawater. Unlike most oceanographic acoustic imaging methods, which use sound waves with frequencies greater than 10,000 Hz, seismic oceanography uses sound waves with frequencies lower than 500 Hz. Use of low-frequency sound means that seismic oceanography is unique in its ability to provide highly detailed images of oceanographic structure that span horizontal distances of hundreds of kilometres and which extend from the sea surface to the seabed. Since its inception in 2003, seismic oceanography has been used to image a wide variety of oceanographic phenomena, including fronts, eddies, thermohaline staircases, turbid layers and cold methane seeps. In addition to providing spectacular images, seismic oceanographic data have given quantitative insight into processes such as movement of internal waves and turbulent mixing of seawater. Method Data acquisition Seismic oceanography is based on marine seismic reflection profiling, in which a ship tows specialised equipment for generating underwater sound. This equipment is known as the acoustic source. The ship also tows one or more cables along which are arranged hundreds of hydrophones, which are instruments for recording underwater sound. These cables are referred to as streamers, and are between a few hundred metres and 10 km in length. Both the acoustic source and the streamers lie a few metres beneath the sea surface. The acoustic source generates sound waves once every few seconds by releasing either compressed air or electrical charge into the sea. Most of these sound waves travel downwards towards the seabed, and a small fraction of the sound is reflected from boundaries at which the temperature or salinity of seawater changes (these boundaries are known as thermohaline boundaries). The hydrophones detect these reflected sound waves. As the ship moves forwards, the positions of the acoustic source and hydrophones change with respect to the reflecting boundaries. Over a period of 30 minutes or less, multiple different configurations of acoustic source and hydrophones sample the same point on a boundary. Image creation Idealised case Seismic data record how the intensity of sound at each hydrophone changes with time. The time at which reflected sound arrives at a particular hydrophone depends on the horizontal distance between the hydrophone and the acoustic source, on the depth and shape of the reflecting boundary, and on the speed of sound in seawater. The depth and shape of the boundary and the local speed of sound, which can vary between approximately 1450 m/s and 1540 m/s, are initially unknown. By analysing records from multiple different configurations of acoustic source and hydrophones, the speed of sound can be estimated. Using this estimated speed, the boundary depth is determined under the assumption that the boundary is horizontal. The effects of reflection from boundaries that are not horizontal can be accounted for using methods which are collectively known as seismic migration. After migration, different records that sample the same point on a boundary are added together to increase the signal-to-noise ratio (this process is known as stacking). Migration and stacking are carried out at every depth and at every horizontal position to make a spatially accurate seismic image. 
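The geometry described above can be illustrated with a short Python sketch (a toy under simplifying assumptions, not production seismic-processing code). For a horizontal boundary, the two-way travel time recorded at a source-hydrophone offset x follows the hyperbolic moveout relation t(x) = sqrt(t0^2 + (x/v)^2), where t0 = 2d/v is the vertical two-way time to a boundary at depth d and v is the speed of sound; scanning candidate speeds for the one that best flattens the corrected times is a crude stand-in for the velocity analysis performed before stacking. The depth, speed, and offsets below are invented.

import math

def moveout_time(offset_m, depth_m, v_ms):
    """Two-way travel time for a horizontal reflector:
    t(x) = sqrt(t0^2 + (x/v)^2), with t0 = 2*depth/v."""
    t0 = 2.0 * depth_m / v_ms
    return math.sqrt(t0 ** 2 + (offset_m / v_ms) ** 2)

# Synthetic "recorded" times for a boundary at 500 m and sound speed 1490 m/s.
offsets = [100.0 * i for i in range(1, 31)]  # offsets of 100 m to 3000 m
recorded = [moveout_time(x, 500.0, 1490.0) for x in offsets]

def best_speed(offsets, times, candidates):
    """Pick the candidate speed whose corrected t0 estimates vary least
    across offsets (a crude substitute for semblance-based analysis)."""
    def spread(v):
        t0s = [math.sqrt(max(t ** 2 - (x / v) ** 2, 0.0))
               for x, t in zip(offsets, times)]
        mean = sum(t0s) / len(t0s)
        return sum((t - mean) ** 2 for t in t0s)
    return min(candidates, key=spread)

# Scan the plausible range of sound speeds in seawater (1450-1540 m/s).
v_est = best_speed(offsets, recorded, [1450 + 5 * k for k in range(19)])
t0 = math.sqrt(recorded[0] ** 2 - (offsets[0] / v_est) ** 2)
print(v_est, round(v_est * t0 / 2.0, 1))  # recovered speed (m/s) and depth (m)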
Complications The intensity of sound recorded by hydrophones can change due to causes other than reflection of sound from thermohaline boundaries. For instance, the acoustic source produces some sound waves that travel horizontally along the streamer, rather than downwards towards the seabed. Aside from sound produced by the acoustic source, the hydrophones record background noise caused by natural processes such as the breaking of wind waves at the ocean surface. These other, unwanted sounds are often much louder than sound reflected from thermohaline boundaries. Use of signal-processing filters suppresses unwanted sounds and increases the signal-to-noise ratio of reflections from thermohaline boundaries. Analysis The key advantage of seismic oceanography is that it provides high-resolution (up to 10 m) images of oceanic structure that can be combined with quantitative information about the ocean. The imagery can be used to identify the length, width, and height of oceanic structures across a range of scales. If the seismic data are also 3D, then the evolution of the structures over time can be analyzed too. Inverting for temperature and salinity Combined with its imagery, processed seismic data can be used to extract other quantitative information about the ocean. So far, seismic oceanography has been used to extract distributions of temperature and salinity, and therefore density and other important properties. There is a range of approaches that can be used to extract this information. For example, Paramo and Holbrook (2005) extracted temperature gradients in the Norwegian Sea using Amplitude Versus Offset methods; the distributions of physical properties were, however, limited to one dimension. More recently, there has been a move toward two-dimensional techniques. Cord Papenberg et al. (2010) presented high-resolution two-dimensional temperature and salinity distributions. These fields were derived using an iterative inversion that combines seismic and physical oceanographic data. Since then, more complex inversions have been presented that are based on Monte Carlo inversion techniques, amongst others. Spectral analysis for vertical mixing rates Aside from temperature and salinity distributions, seismic data of the ocean can also be used to extract mixing rates through spectral analysis. This process is based on the assumption that reflections, which show undulations at a number of scales, track the internal wave field. Therefore, the vertical displacement of these undulations can give a measure of the vertical mixing rates of the ocean. This technique was first developed using data from the Norwegian Sea and showed the enhancement of internal wave energy close to the continental slope. Since 2005, the techniques have been further developed, adapted, and automated so that any seismic section may be converted into a two-dimensional distribution of mixing rates. References Seismic Oceanography Earth sciences
Seismic oceanography
[ "Physics", "Environmental_science" ]
1,206
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics" ]
67,944,136
https://en.wikipedia.org/wiki/Dipyanone
Dipyanone is an opioid analgesic which has been sold as a designer drug, first identified in Germany in 2021. It is closely related to medically used drugs such as methadone, dipipanone and phenadoxone, but is slightly less potent. See also Desmethylmoramide IC-26 Nufenoxole Pyrrolidinylthiambutene R-4066 References Opioids 1-Pyrrolidinyl compounds Mu-opioid receptor agonists Ketones Designer drugs
Dipyanone
[ "Chemistry" ]
115
[ "Ketones", "Functional groups" ]
67,944,283
https://en.wikipedia.org/wiki/4%27-Fluoro-4-methylaminorex
4'-Fluoro-4-methylaminorex (4F-MAR, 4-FPO) is a recreational designer drug from the substituted aminorex family, with stimulant effects. It was first detected in Slovenia in 2018. It was made illegal in Italy in March 2020. See also 2C-B-aminorex 2F-MAR 3-Fluorophenmetrazine 4C-MAR 4-Fluoroamphetamine 4,4'-DMAR Fluminorex MDMAR List of aminorex analogues References Aminorexes Designer drugs 4-Fluorophenyl compounds
4'-Fluoro-4-methylaminorex
[ "Chemistry" ]
132
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
67,944,445
https://en.wikipedia.org/wiki/MDMAR
3',4'-Methylenedioxy-4-methylaminorex (MDMAR) is a recreational designer drug from the substituted aminorex family, with monoamine-releasing effects. It is a potent serotonin–norepinephrine–dopamine releasing agent (SNDRA). See also 2C-B-aminorex 4B-MAR 4C-MAR 4,4'-DMAR 4'-Fluoro-4-methylaminorex 5-MAPB MDMA Methylenedioxyphenmetrazine List of aminorex analogues References Aminorexes Designer drugs Benzodioxoles
MDMAR
[ "Chemistry" ]
137
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
67,944,487
https://en.wikipedia.org/wiki/Knowledge%20graph%20embedding
In representation learning, knowledge graph embedding (KGE), also referred to as knowledge representation learning (KRL), or multi-relation learning, is a machine learning task of learning a low-dimensional representation of a knowledge graph's entities and relations while preserving their semantic meaning. Leveraging their embedded representation, knowledge graphs (KGs) can be used for various applications such as link prediction, triple classification, entity recognition, clustering, and relation extraction. Definition A knowledge graph is a collection of entities E, relations R, and facts F. A fact is a triple (h, r, t) ∈ F that denotes a link r ∈ R between the head h ∈ E and the tail t ∈ E of the triple. Another notation that is often used in the literature to represent a triple (or fact) is <head, relation, tail>. This notation is called the resource description framework (RDF). A knowledge graph represents the knowledge related to a specific domain; leveraging this structured representation, it is possible to infer a piece of new knowledge from it after some refinement steps. However, in practice one has to deal with data sparsity and computational inefficiency when using knowledge graphs in real-world applications. The embedding of a knowledge graph translates each entity and relation of a knowledge graph into a vector of a given dimension d, called the embedding dimension. In the general case, we can have different embedding dimensions for the entities (d) and for the relations (k). The collection of embedding vectors for all the entities and relations in the knowledge graph can then be used for downstream tasks. A knowledge graph embedding is characterized by four different aspects: Representation space: The low-dimensional space in which the entities and relations are represented. Scoring function: A measure of the goodness of a triple's embedded representation. Encoding models: The modality in which the embedded representation of the entities and relations interact with each other. Additional information: Any additional information coming from the knowledge graph that can enrich the embedded representation. Usually, an ad hoc scoring function is integrated into the general scoring function for each type of additional information. Embedding procedure All the different knowledge graph embedding models follow roughly the same procedure to learn the semantic meaning of the facts. First of all, to learn an embedded representation of a knowledge graph, the embedding vectors of the entities and relations are initialized to random values. Then, starting from a training set, the algorithm continuously optimizes the embeddings until a stop condition is reached. Usually, the stop condition is given by the overfitting over the training set. For each iteration, a batch of size b is sampled from the training set, and for each triple of the batch a random corrupted fact is sampled, i.e., a triple that does not represent a true fact in the knowledge graph. The corruption of a triple involves substituting the head or the tail (or both) of the triple with another entity that makes the fact false. The original triple and the corrupted triple are added to the training batch, and then the embeddings are updated, optimizing a scoring function. At the end of the algorithm, the learned embeddings should have extracted the semantic meaning from the triples and should correctly predict unseen true facts in the knowledge graph. Pseudocode The following is the pseudocode for the general embedding procedure. 
algorithm Compute entity and relation embeddings is
    input: The training set S = {(h, r, t)}, entity set E, relation set R, embedding dimension k
    output: Entity and relation embeddings
    initialization: the entity and relation embeddings (vectors) are randomly initialized
    while stop condition do
        S_batch ← sample(S, b)  // from the training set, randomly sample a batch of size b
        for each (h, r, t) in S_batch do
            (h′, r, t′) ← sample a corrupted fact of triple (h, r, t)
            T_batch ← T_batch ∪ {((h, r, t), (h′, r, t′))}
        end for
        Update embeddings by minimizing the loss function over T_batch
    end while
Performance indicators These indexes are often used to measure the embedding quality of a model. The simplicity of the indexes makes them very suitable for evaluating the performance of an embedding algorithm even on a large scale. Given Q as the set of all ranked predictions of a model, it is possible to define three different performance indexes: Hits@K, MR, and MRR. Hits@K Hits@K, or in short H@K, is a performance index that measures the probability of finding the correct prediction among the top K model predictions. Usually K = 10 is used. Hits@K reflects the accuracy of an embedding model in correctly predicting the relation between two given entities: Hits@K = |{q ∈ Q : q ≤ K}| / |Q| ∈ [0, 1]. Larger values mean better predictive performance. Mean rank (MR) Mean rank is the average ranking position of the items predicted by the model among all the possible items: MR = (1/|Q|) Σ_{q ∈ Q} q. The smaller the value, the better the model. Mean reciprocal rank (MRR) Mean reciprocal rank measures the number of triples predicted correctly. If the first predicted triple is correct, then 1 is added; if the second is correct, 1/2 is summed; and so on: MRR = (1/|Q|) Σ_{q ∈ Q} 1/q ∈ [0, 1]. Mean reciprocal rank is generally used to quantify the effect of search algorithms. The larger the index, the better the model. Applications Machine learning tasks Knowledge graph completion (KGC) is a collection of techniques to infer knowledge from an embedded knowledge graph representation. In particular, this technique completes a triple by inferring the missing entity or relation. The corresponding sub-tasks are named link or entity prediction (i.e., guessing an entity from the embedding given the other entity of the triple and the relation), and relation prediction (i.e., forecasting the most plausible relation that connects two entities). Triple classification is a binary classification problem: given a triple, the trained model evaluates the plausibility of the triple using the embedding to determine whether the triple is true or false. The decision is made with the model score function and a given threshold. Clustering is another application, which leverages the embedded representation of a sparse knowledge graph to place semantically similar entities close together in a 2D space. Real world applications The use of knowledge graph embedding is increasingly pervasive in many applications. In the case of recommender systems, the use of knowledge graph embedding can overcome the limitations of the usual reinforcement learning. Training this kind of recommender system requires a huge amount of information from the users; however, knowledge graph techniques can address this issue by using a graph already constructed over prior knowledge of the item correlations and using the embedding to infer recommendations from it. Drug repurposing is the use of an already approved drug, but for a therapeutic purpose different from the one for which it was initially designed. 
It is possible to use the task of link prediction to infer a new connection between an already existing drug and a disease by using a biomedical knowledge graph built by leveraging the availability of massive literature and biomedical databases. Knowledge graph embedding can also be used in the domain of social politics. Models Given a collection of triples (or facts) F, the knowledge graph embedding model produces, for each entity and relation present in the knowledge graph, a continuous vector representation. The corresponding embedding of a triple is (h, r, t), with h, t ∈ ℝ^d and r ∈ ℝ^k, where d is the embedding dimension for the entities and k for the relations. The score function of a given model is denoted by f_r(h, t) and measures the distance of the embedding of the head from the embedding of the tail given the embedding of the relation; in other words, it quantifies the plausibility of the embedded representation of a given fact. Rossi et al. propose a taxonomy of the embedding models and identify three main families of models: tensor decomposition models, geometric models, and deep learning models. Tensor decomposition model The tensor decomposition is a family of knowledge graph embedding models that use a multi-dimensional matrix to represent a knowledge graph, which is only partially knowable due to gaps in the knowledge graph's thorough description of a particular domain. In particular, these models use a three-way (3D) tensor, which is then factorized into low-dimensional vectors that are the entity and relation embeddings. The third-order tensor is a suitable methodology to represent a knowledge graph because it records only the existence or the absence of a relation between entities, and for this reason it is simple, and there is no need to know a priori the network structure, making this class of embedding models light and easy to train, even though they suffer from high dimensionality and sparsity of the data. Bilinear models This family of models uses a linear equation to embed the connection between the entities through a relation. In particular, the embedded representation of the relations is a bidimensional matrix. These models, during the embedding procedure, only use the single facts to compute the embedded representation and ignore the other associations to the same entity or relation. DistMult: Since the embedding matrix of the relation is a diagonal matrix, the scoring function cannot distinguish asymmetric facts. ComplEx: Like DistMult, it uses a diagonal matrix to represent the relation embeddings, but it adds a representation in the complex vector space and the Hermitian product, so it can distinguish symmetric and asymmetric facts. This approach is scalable to a large knowledge graph in terms of time and space cost. ANALOGY: This model encodes in the embedding the analogical structure of the knowledge graph to simulate inductive reasoning. Using a differentiable objective function, ANALOGY has good theoretical generality and computational scalability. It is proven that the embedding produced by ANALOGY fully recovers the embeddings of DistMult, ComplEx, and HolE. SimplE: This model is an improvement of canonical polyadic decomposition (CP), in which an embedding vector for the relation and two independent embedding vectors for each entity are learned, depending on whether it is a head or a tail in the knowledge graph fact. SimplE resolves the problem of independent learning of the two entity embeddings by using an inverse relation and averaging the CP scores of (h, r, t) and (t, r⁻¹, h). 
In this way, SimplE captures the relation between entities while they appear in the role of subject or object inside a fact, and it is able to embed asymmetric relations. Non-bilinear models HolE: HolE uses circular correlation to create an embedded representation of the knowledge graph, which can be seen as a compression of the matrix product, but is more computationally efficient and scalable while keeping the capability to express asymmetric relations, since the circular correlation is not commutative. HolE links holographic and complex embeddings since, if used together with the Fourier transform, it can be seen as a special case of ComplEx. TuckER: TuckER sees the knowledge graph as a tensor that could be decomposed using the Tucker decomposition into a collection of vectors, i.e., the embeddings of entities and relations, with a shared core. The weights of the core tensor are learned together with the embeddings and represent the level of interaction of the entries. Each entity and relation has its own embedding dimension, and the size of the core tensor is determined by the shape of the entities and relations that interact. The embeddings of the subject and object of a fact are summed in the same way, making TuckER fully expressive, and other embedding models such as RESCAL, DistMult, ComplEx, and SimplE can be expressed as special formulations of TuckER. MEI: MEI introduces the multi-partition embedding interaction technique with the block term tensor format, which is a generalization of CP decomposition and Tucker decomposition. It divides the embedding vector into multiple partitions and learns the local interaction patterns from data instead of using fixed special patterns as in ComplEx or SimplE models. This enables MEI to achieve an optimal efficiency–expressiveness trade-off, not just being fully expressive. Previous models such as TuckER, RESCAL, DistMult, ComplEx, and SimplE are suboptimal restricted special cases of MEI. MEIM: MEIM goes beyond the block term tensor format to introduce the independent core tensor for ensemble boosting effects and the soft orthogonality for max-rank relational mapping, in addition to multi-partition embedding interaction. MEIM generalizes several previous models such as MEI and its subsumed models, RotatE, and QuatE. MEIM improves expressiveness while still being highly efficient in practice, helping it achieve good results using fairly small model sizes. Geometric models The geometric space defined by this family of models encodes the relation as a geometric transformation between the head and tail of a fact. For this reason, to compute the embedding of the tail, it is necessary to apply a transformation to the head embedding, and a distance function is used to measure the goodness of the embedding or to score the reliability of a fact. Geometric models are similar to the tensor decomposition models, but the main difference between the two is that they have to preserve the applicability of the transformation in the geometric space in which it is defined. Pure translational models This class of models is inspired by the idea of translation invariance introduced in word2vec. A pure translational model relies on the fact that the embedding vectors of the entities are close to each other after applying a proper relational translation in the geometric space in which they are defined. In other words, given a fact, when the embedding of the head is added to the embedding of the relation, the expected result should be the embedding of the tail. 
The closeness of the entity embeddings is given by some distance measure and quantifies the reliability of a fact. TransE: This model uses a scoring function that forces the embeddings to satisfy a simple vector sum equation in each fact in which they appear: h + r = t. The embedding will be exact if each entity and relation appears in only one fact; for this reason, in practice it does not represent well one-to-many, many-to-one, and symmetric relations. TransH: It is an evolution of TransE introducing a hyperplane as the geometric space to solve the problem of representing correctly the types of relations. In TransH, each relation has a different embedded representation, on a different hyperplane, based on which entities it interacts with. Therefore, to compute, for example, the score function of a fact, the embedded representations of the head and tail need to be projected using a relational projection matrix on the correct hyperplane of the relation. TransR: TransR is an evolution of TransH because it uses two different spaces to represent the embedded representations of the entities and the relations, and separates completely the semantic space of entities and relations. TransR also uses a relational projection matrix to translate the embedding of the entities to the relation space. TransD: In TransR, the head and the tail of a fact could belong to two different types of entities; for example, in the fact (Obama, president_of, USA), Obama and USA are two entities, but one is a person and the other is a country. The matrix multiplication is also an expensive procedure in TransR for computing the projection. In this context, TransD employs two vectors for each entity-relation pair to compute a dynamic mapping that substitutes the projection matrix while reducing the dimensional complexity. The first vector is used to represent the semantic meaning of the entities and relations, the second one to compute the mapping matrix. TransA: All the translational models define a score function in their representation space, but they oversimplify this metric loss. Since the vector representation of the entities and relations is not perfect, a pure translation h + r could be distant from t, and a spherical equipotential Euclidean distance makes it hard to distinguish which is the closest entity. TransA, instead, introduces an adaptive Mahalanobis distance to weight the embedding dimensions, together with elliptical surfaces to remove the ambiguity. Translational models with additional embeddings It is possible to associate additional information to each element in the knowledge graph and their common representation facts. Each entity and relation can be enriched with text descriptions, weights, constraints, and others in order to improve the overall description of the domain with a knowledge graph. During the embedding of the knowledge graph, this information can be used to learn specialized embeddings for these characteristics together with the usual embedded representation of entities and relations, with the cost of learning a more significant number of vectors. STransE: This model is the result of the combination of TransE and of the structure embedding in such a way that it is able to better represent the one-to-many, many-to-one, and many-to-many relations. To do so, the model involves two additional independent matrices W_r^h and W_r^t for each embedded relation r in the KG. Each additional matrix is used based on whether the specific relation interacts with the head or the tail of the fact. 
In other words, given a fact (h, r, t), before applying the vector translation, the head is multiplied by the relation's head matrix and the tail by the relation's tail matrix. CrossE: Crossover interactions can be used for related information selection, and could be very useful for the embedding procedure. Crossover interactions provide two distinct contributions to the information selection: interactions from relations to entities and interactions from entities to relations. This means that a relation, e.g., 'president_of', automatically selects the types of entities that connect the subject to the object of a fact. In a similar way, the entity of a fact indirectly determines which inference path has to be chosen to predict the object of a related triple. To do so, CrossE learns an additional interaction matrix and uses the element-wise product to compute the interaction between the embeddings of the entity and the relation. Even though CrossE does not rely on a neural network architecture, it has been shown that this methodology can be encoded in such an architecture. Roto-translational models This family of models employs, in addition to or in substitution of a translation, a rotation-like transformation. TorusE: The regularization term of TransE forces the entity embeddings to build a spherical space, and consequently loses the translation properties of the geometric space. To address this problem, TorusE leverages the use of a compact Lie group, which in this specific case is the n-dimensional torus space, and avoids the use of regularization. TorusE defines distance functions that substitute the L1 and L2 norms of TransE. RotatE: RotatE is inspired by Euler's identity and involves the use of the Hadamard product to represent a relation as a rotation from the head to the tail in the complex space. For each element of the triple, the complex part of the embedding describes a counterclockwise rotation with respect to an axis, which can be described with Euler's identity, whereas the modulus of the relation vector is 1. It has been shown that the model is capable of embedding symmetric, asymmetric, inversion, and composition relations from the knowledge graph. Deep learning models This group of embedding models uses deep neural networks to learn patterns from the knowledge graph, which is the input data. These models have the generality to distinguish the type of entity and relation, temporal information, path information, and underlying structural information, and resolve the limitations of distance-based and semantic-matching-based models in representing all the features of a knowledge graph. The use of deep learning for knowledge graph embedding has shown good predictive performance, even if these models are more expensive in the training phase, data-hungry, and often require a pre-trained embedding representation of the knowledge graph coming from a different embedding model. Convolutional neural networks This family of models, instead of using fully connected layers, employs one or more convolutional layers that convolve the input data applying a low-dimensional filter capable of embedding complex structures with few parameters by learning nonlinear features. ConvE: ConvE is an embedding model that represents a good trade-off between the expressiveness of deep learning models and their computational cost; in fact, it has been shown that ConvE uses 8x fewer parameters when compared to DistMult. ConvE uses one-dimensional fixed-size embeddings to represent the entities and relations of a knowledge graph.
To compute the score function of a triple, ConvE applies a simple procedure: first it concatenates and reshapes the embeddings of the head of the triple and the relation into a single input [h; r]; then this matrix is used as input for the 2D convolutional layer. The result is then passed through a dense layer that applies a linear transformation parameterized by a weight matrix and, at the end, is matched to the embedding of the tail of the triple through an inner product. ConvE is also particularly efficient in the evaluation procedure: using 1-N scoring, the model matches, given a head and a relation, all the tails at the same time, saving a lot of evaluation time when compared to the 1-1 evaluation procedure of the other models. ConvR: ConvR is an adaptive convolutional network aimed to deeply represent all the possible interactions between the entities and the relations. For this task, ConvR computes convolutional filters for each relation and, when required, applies these filters to the entity of interest to extract convolved features. The procedure to compute the score of a triple is the same as in ConvE. ConvKB: To compute the score function of a given triple (h, r, t), ConvKB produces an input [h; r; t] without reshaping and passes it to a series of convolutional filters. The result feeds a dense layer with only one neuron that produces the final score. The single final neuron makes this architecture act as a binary classifier in which the fact can be true or false. A difference with ConvE is that the dimensionality of the entities is not changed. Capsule neural networks This family of models uses capsule neural networks to create a more stable representation that is able to recognize a feature in the input without losing spatial information. The network is composed of convolutional layers, but they are organized in capsules, and the overall result of a capsule is sent to a higher-level capsule chosen by a dynamic routing process. CapsE: CapsE implements a capsule network to model a fact (h, r, t). As in ConvKB, each triple element is concatenated to build a matrix [h; r; t], which is used to feed a convolutional layer to extract the convolutional features. These features are then redirected to a capsule to produce a continuous vector: the longer the vector, the more likely the fact is to be true. Recurrent neural networks This class of models leverages the use of recurrent neural networks. The advantage of this architecture is to memorize a sequence of facts rather than just elaborating single events. RSN: During the embedding procedure, it is commonly assumed that similar entities have similar relations. In practice, this type of information is not leveraged, because the embedding is computed just on the fact at hand rather than on a history of facts. Recurrent skipping networks (RSNs) use a recurrent neural network to learn relational paths using random walk sampling. Model performance The machine learning task for knowledge graph embedding that is most often used to evaluate the embedding accuracy of the models is link prediction. Rossi et al. produced an extensive benchmark of the models, and other surveys produce similar results. The benchmark involves five datasets: FB15k, WN18, FB15k-237, WN18RR, and YAGO3-10. More recently, it has been discussed that these datasets are far from real-world applications, and other datasets should be integrated as a standard benchmark.
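As an illustration of how link prediction is typically scored on these benchmarks, the sketch below computes filtered mean reciprocal rank (MRR) and Hits@10 for a generic scoring function; the function signature and the data layout are assumptions made for the example, not the interface of any particular benchmark suite:

```python
import numpy as np

def evaluate(score_fn, test_triples, all_entities, known_triples):
    """Filtered link prediction metrics: for every test fact (h, r, t),
    rank the true tail t against all candidates, masking out other tails
    already known to be true (the 'filtered' setting)."""
    ranks = []
    for h, r, t in test_triples:
        scores = {e: score_fn(h, r, e) for e in all_entities}
        for e in all_entities:  # mask other correct answers
            if e != t and (h, r, e) in known_triples:
                scores[e] = -np.inf
        rank = 1 + sum(1 for e in all_entities if scores[e] > scores[t])
        ranks.append(rank)
    ranks = np.asarray(ranks, dtype=float)
    return {"MRR": float(np.mean(1.0 / ranks)),
            "Hits@10": float(np.mean(ranks <= 10))}
```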
Libraries See also Knowledge graph Embedding Machine learning Knowledge base Knowledge extraction Statistical relational learning Representation learning Graph embedding References External links Open Graph Benchmark - Stanford WordNet - Princeton Knowledge graphs Machine learning Graph algorithms Information science
Knowledge graph embedding
[ "Engineering" ]
4,945
[ "Artificial intelligence engineering", "Machine learning" ]
67,944,516
https://en.wikipedia.org/wiki/Physics-informed%20neural%20networks
Physics-informed neural networks (PINNs), also referred to as Theory-Trained Neural Networks (TTNs), are a type of universal function approximators that can embed into the learning process the knowledge of any physical laws that govern a given data-set and that can be described by partial differential equations (PDEs). Low data availability for some biological and engineering problems limits the robustness of conventional machine learning models used for these applications. The prior knowledge of general physical laws acts in the training of neural networks (NNs) as a regularization agent that limits the space of admissible solutions, increasing the generalizability of the function approximation. This way, embedding this prior information into a neural network results in enhancing the information content of the available data, helping the learning algorithm capture the right solution and generalize well even with a low amount of training examples. Function approximation Most of the physical laws that govern the dynamics of a system can be described by partial differential equations. For example, the Navier–Stokes equations are a set of partial differential equations derived from the conservation laws (i.e., conservation of mass, momentum, and energy) that govern fluid mechanics. The solution of the Navier–Stokes equations with appropriate initial and boundary conditions allows the quantification of flow dynamics in a precisely defined geometry. However, these equations cannot be solved exactly and therefore numerical methods must be used (such as finite differences, finite elements and finite volumes). In this setting, these governing equations must be solved while accounting for prior assumptions, linearization, and adequate time and space discretization. Recently, solving the governing partial differential equations of physical phenomena using deep learning has emerged as a new field of scientific machine learning (SciML), leveraging the universal approximation theorem and high expressivity of neural networks. In general, deep neural networks could approximate any high-dimensional function given that sufficient training data are supplied. However, such networks do not consider the physical characteristics underlying the problem, and the level of approximation accuracy provided by them is still heavily dependent on careful specifications of the problem geometry as well as the initial and boundary conditions. Without this preliminary information, the solution is not unique and may lose physical correctness. On the other hand, physics-informed neural networks (PINNs) leverage governing physical equations in neural network training. Namely, PINNs are designed to be trained to satisfy the given training data as well as the imposed governing equations. In this fashion, a neural network can be guided with training data that do not necessarily need to be large and complete. Potentially, an accurate solution of partial differential equations can be found without knowing the boundary conditions. Therefore, with some knowledge about the physical characteristics of the problem and some form of training data (even sparse and incomplete), PINN may be used for finding an optimal solution with high fidelity. PINNs allow for addressing a wide range of problems in computational science and represent a pioneering technology leading to the development of new classes of numerical solvers for PDEs.
PINNs can be thought of as a meshfree alternative to traditional approaches (e.g., CFD for fluid dynamics), and as new data-driven approaches for model inversion and system identification. Notably, the trained PINN network can be used for predicting the values on simulation grids of different resolutions without the need to be retrained. In addition, they allow for exploiting automatic differentiation (AD) to compute the required derivatives in the partial differential equations, a class of differentiation techniques widely used to differentiate neural networks and assessed to be superior to numerical or symbolic differentiation. Modeling and computation A general nonlinear partial differential equation can be written as: u_t + N[u; λ] = 0, x ∈ Ω, t ∈ [0, T], where u(t, x) denotes the solution, N[·; λ] is a nonlinear operator parameterized by λ, and Ω is a subset of R^D. This general form of governing equations summarizes a wide range of problems in mathematical physics, such as conservation laws, diffusion processes, advection-diffusion systems, and kinetic equations. Given noisy measurements of a generic dynamic system described by the equation above, PINNs can be designed to solve two classes of problems: data-driven solution data-driven discovery of partial differential equations. Data-driven solution of partial differential equations The data-driven solution of PDE computes the hidden state u(t, x) of the system given boundary data and/or measurements z, and fixed model parameters λ. We solve: u_t + N[u] = 0, x ∈ Ω, t ∈ [0, T]. By defining the residual as f := u_t + N[u], and approximating u by a deep neural network, f results in a physics-informed neural network. This network can be differentiated using automatic differentiation. The parameters of u and f can then be learned by minimizing the following loss function L_tot: L_tot = L_u + L_f. Here L_u = ||u − z||_Γ is the error between the PINN u(t, x) and the set of boundary conditions and measured data on the set of points Γ where the boundary conditions and data are defined, and L_f = ||f||_Γ is the mean-squared error of the residual function. This second term encourages the PINN to learn the structural information expressed by the partial differential equation during the training process. This approach has been used to yield computationally efficient physics-informed surrogate models with applications in the forecasting of physical processes, model predictive control, multi-physics and multi-scale modeling, and simulation. It has been shown to converge to the solution of the PDE. Data-driven discovery of partial differential equations Given noisy and incomplete measurements z of the state of the system, the data-driven discovery of PDE results in computing the unknown state u(t, x) and learning the model parameters λ that best describe the observed data; it reads as follows: u_t + N[u; λ] = 0, x ∈ Ω, t ∈ [0, T]. By defining f as f := u_t + N[u; λ], and approximating u by a deep neural network, f results in a PINN. This network can be derived using automatic differentiation. The parameters of u and f, together with the parameter λ of the differential operator, can then be learned by minimizing the following loss function L_tot: L_tot = L_u + L_f. Here L_u = ||u − z||_Γ, with u and z state solutions and measurements at the sparse locations Γ, respectively, and L_f = ||f||_Γ is the residual function. This second term requires the structured information represented by the partial differential equations to be satisfied in the training process. This strategy allows for discovering dynamic models described by nonlinear PDEs, assembling computationally efficient and fully differentiable surrogate models that may find application in predictive forecasting, control, and data assimilation.
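The two-term loss described above can be made concrete with a short sketch. The code below is an illustrative PyTorch implementation for the 1D viscous Burgers equation u_t + u u_x − ν u_xx = 0; the network size, the value of ν, and the variable names are assumptions made for the example, not part of any reference implementation:

```python
import torch

# Small fully connected network u(t, x); the architecture is an arbitrary choice.
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
nu = 0.01  # viscosity (assumed)

def residual(t, x):
    """PDE residual f = u_t + u*u_x - nu*u_xx, obtained via automatic
    differentiation; t and x must be tensors with requires_grad=True."""
    u = net(torch.cat([t, x], dim=1))
    ones = torch.ones_like(u)
    u_t = torch.autograd.grad(u, t, ones, create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, ones, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_t + u * u_x - nu * u_xx

def pinn_loss(t_data, x_data, u_data, t_col, x_col):
    """L_tot = L_u + L_f: data/boundary misfit plus mean-squared PDE
    residual evaluated at the collocation points."""
    u_pred = net(torch.cat([t_data, x_data], dim=1))
    loss_u = torch.mean((u_pred - u_data) ** 2)
    loss_f = torch.mean(residual(t_col, x_col) ** 2)
    return loss_u + loss_f
```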
Physics-informed neural networks for piece-wise function approximation PINN is unable to approximate PDEs that have strong non-linearity or sharp gradients, which commonly occur in practical fluid flow problems. Piece-wise approximation has been an old practice in the field of numerical approximation. With the capability of approximating strong non-linearity, extremely lightweight PINNs are used to solve PDEs in much larger discrete subdomains, which increases accuracy substantially and decreases computational load as well. DPINN (Distributed physics-informed neural networks) and DPIELM (Distributed physics-informed extreme learning machines) are generalizable space-time domain discretizations for better approximation. DPIELM is an extremely fast and lightweight approximator with competitive accuracy. On top of this, domain scaling has a special effect. Another school of thought is discretization for parallel computation to leverage the available computational resources. XPINN is a generalized space-time domain decomposition approach for physics-informed neural networks (PINNs) to solve nonlinear partial differential equations on arbitrary complex-geometry domains. XPINN further pushes the boundaries of both PINNs and conservative PINNs (cPINNs), the latter being a spatial domain decomposition approach in the PINN framework tailored to conservation laws. Compared to PINN, the XPINN method has large representation and parallelization capacity due to the inherent property of deploying multiple neural networks in the smaller subdomains. Unlike cPINN, XPINN can be extended to any type of PDE. Moreover, the domain can be decomposed in any arbitrary way (in space and time), which is not possible in cPINN. Thus, XPINN offers both space and time parallelization, thereby reducing the training cost more effectively. XPINN is particularly effective for large-scale problems (involving large data sets) as well as for high-dimensional problems where single-network-based PINN is not adequate. Rigorous bounds on the errors resulting from the approximation of nonlinear PDEs (the incompressible Navier–Stokes equations) with PINNs and XPINNs have been proved. However, DPINN questions the use of residual (flux) matching at the domain interfaces, as it hardly seems to improve the optimization. Physics-informed neural networks and theory of functional connections In the PINN framework, initial and boundary conditions are not analytically satisfied; thus they need to be included in the loss function of the network to be simultaneously learned with the differential equation (DE) unknown functions. Having competing objectives during the network's training can lead to unbalanced gradients while using gradient-based techniques, which causes PINNs to often struggle to accurately learn the underlying DE solution. This drawback is overcome by using functional interpolation techniques such as the constrained expression of the Theory of Functional Connections (TFC), in the Deep-TFC framework, which reduces the solution search space of constrained problems to the subspace of neural networks that analytically satisfy the constraints. A further improvement of the PINN and functional interpolation approach is given by the Extreme Theory of Functional Connections (X-TFC) framework, where a single-layer neural network and the extreme learning machine training algorithm are employed.
X-TFC improves the accuracy and performance of regular PINNs, and its robustness and reliability have been demonstrated for stiff problems, optimal control, aerospace, and rarefied gas dynamics applications. Physics-informed PointNet (PIPN) for multiple sets of irregular geometries Regular PINNs are only able to obtain the solution of a forward or inverse problem on a single geometry. This means that for any new geometry (computational domain), one must retrain a PINN. This limitation of regular PINNs imposes high computational costs, specifically for a comprehensive investigation of geometric parameters in industrial designs. Physics-informed PointNet (PIPN) is fundamentally the result of a combination of PINN's loss function with PointNet. In fact, instead of using a simple fully connected neural network, PIPN uses PointNet as the core of its neural network. PointNet was primarily designed for deep learning of 3D object classification and segmentation by the research group of Leonidas J. Guibas. PointNet extracts geometric features of the input computational domains in PIPN. Thus, PIPN is able to solve governing equations on multiple computational domains (rather than only a single domain) with irregular geometries, simultaneously. The effectiveness of PIPN has been shown for incompressible flow, heat transfer and linear elasticity. Physics-informed neural networks (PINNs) for inverse computations Physics-informed neural networks (PINNs) have proven particularly effective in solving inverse problems within differential equations, demonstrating their applicability across science, engineering, and economics. They have proven useful for solving inverse problems in a variety of fields, including nano-optics, topology optimization/characterization, multiphase flow in porous media, and high-speed fluid flow. PINNs have demonstrated flexibility when dealing with noisy and uncertain observation datasets. They have also demonstrated clear advantages in the inverse calculation of parameters for multi-fidelity datasets, meaning datasets with different quality, quantity, and types of observations. Uncertainties in the calculations can be evaluated using ensemble-based or Bayesian-based approaches. Physics-informed neural networks for elasticity problems Surrogate networks are intended for the unknown functions, namely the components of the strain and stress tensors as well as the unknown displacement field. The residual network provides the residuals of the partial differential equations (PDEs) and of the boundary conditions. The computational approach is based on principles of artificial intelligence. Physics-informed neural networks (PINNs) with backward stochastic differential equations The deep backward stochastic differential equation method is a numerical method that combines deep learning with backward stochastic differential equations (BSDEs) to solve high-dimensional problems in financial mathematics. By leveraging the powerful function approximation capabilities of deep neural networks, deep BSDE addresses the computational challenges faced by traditional numerical methods like finite difference methods or Monte Carlo simulations, which struggle with the curse of dimensionality. Deep BSDE methods use neural networks to approximate solutions of high-dimensional partial differential equations (PDEs), effectively reducing the computational burden.
Additionally, integrating physics-informed neural networks (PINNs) into the deep BSDE framework enhances its capability by embedding the underlying physical laws into the neural network architecture, ensuring that solutions adhere to the governing stochastic differential equations and resulting in more accurate and reliable solutions. Physics-informed neural networks for biology An extension or adaptation of PINNs are biologically-informed neural networks (BINNs). BINNs introduce two key adaptations to the typical PINN framework: (i) the mechanistic terms of the governing PDE are replaced by neural networks, and (ii) the loss function is modified to include an additional constraint term used to incorporate domain-specific knowledge that helps enforce biological applicability. For (i), this adaptation has the advantage of relaxing the need to specify the governing differential equation a priori, either explicitly or by using a library of candidate terms. Additionally, this approach circumvents the potential issue of misspecifying regularization terms in stricter theory-informed cases. A natural example of BINNs can be found in cell dynamics, where the cell density u is governed by a reaction-diffusion equation with diffusion and growth functions D(u) and G(u), respectively: ∂u/∂t = ∇·(D(u)∇u) + G(u)u. In this case, a component of the constraint term could be a penalty on the diffusion function, penalizing values of D(u) that fall outside a biologically relevant diffusion range. Furthermore, the BINN architecture, when utilizing multilayer perceptrons (MLPs), would function as follows: an MLP is used to construct a surrogate model of the cell density u from the model inputs; this surrogate is then fed into two additional MLPs, which model the diffusion and growth functions. Automatic differentiation can then be applied to compute the necessary derivatives of these networks' outputs to form the governing reaction-diffusion equation. Note that since the surrogate is only an approximation of the cell density, it may contain errors, particularly in regions where the PDE is not fully satisfied. Therefore, the reaction-diffusion equation may be solved numerically, for instance using a method-of-lines approach. Limitations Translation and discontinuous behavior are hard to approximate using PINNs. They fail when solving differential equations with even slight advective dominance, whose asymptotic behaviour causes the method to fail. Such PDEs could be solved by scaling variables. This difficulty in training PINNs on advection-dominated PDEs can be explained by the Kolmogorov n-width of the solution. They also fail to solve systems of dynamical systems and hence have not been successful in solving chaotic equations. One of the reasons behind the failure of regular PINNs is the soft-constraining of Dirichlet and Neumann boundary conditions, which poses a multi-objective optimization problem that requires manually weighting the loss terms to be able to optimize. More generally, posing the solution of a PDE as an optimization problem brings with it all the problems that are faced in the world of optimization, the major one being getting stuck in local optima. References External links Physics Informed Neural Network PINN – repository to implement physics-informed neural network in Python XPINN – repository to implement extended physics-informed neural network (XPINN) in Python PIPN – repository to implement physics-informed PointNet (PIPN) in Python Differential equations Deep learning
Physics-informed neural networks
[ "Mathematics" ]
3,172
[ "Mathematical objects", "Differential equations", "Equations" ]
67,944,517
https://en.wikipedia.org/wiki/Speculative%20design
Speculative design is a design practice concerned with future design proposals of a critical nature. The term was popularised by Anthony Dunne and Fiona Raby as a subsidiary of critical design. The aim is not to present commercially driven design proposals but design proposals that identify and debate crucial issues that might happen in the future. Speculative design is concerned with future consequences and implications of the relationship between science, technology, and humans. It problematizes this relation by proposing provocative future design scenarios where technology and design implications are accentuated. These design proposals are meant to trigger debates about the future rather than to market products. Definition Dunne and Raby, the researchers who coined the term speculative design, describe it as a practice used to challenge preconceptions, raise questions, and provoke debate. It opens the door for designers to imagine possible futures. James Auger claims speculative design "combines informed, hypothetical extrapolations of an emerging technology's development with a deep consideration of the cultural landscape into which it might be deployed, to speculate on future products, systems and services". Speculative designers develop alternative presents to ask why things are the way they are so that they can project the future. James Auger explains that these alternative presents can make radical interventions to the current practices and evolving technologies by applying different ideologies and practices. Speculative design emphasizes the "philosophical inquiry into technological application"; it tends to take the discussion on technology beyond experts to a broader audience. The resulting artifacts often appear subversive and irreverent in nature; they look different to the public, and this is the key behind triggering discussions and stimulating questions. Speculative design can be distinguished from design that operates within commercial borders, where the aim of designing is profitability. Speculative design is an exploratory design genre and a Research through Design (RtD) approach. Origins and early attempts Anti-design and Italian radical design could be considered ancestors of speculative design. However, the format of speculative design as we know it today is derived from the critical design practice. Both are connected and use similar approaches. Dunne and Raby described critical design as a practice that "uses speculative design proposals to challenge narrow assumptions, preconceptions, and givens about the role products play in everyday life". Critical design is a form of design that uses design tools and processes not to solve a problem but to rethink the borders and parameters of a problem from a critical point of view. Dunne and Raby explained the term further in their book 'Design Noir: The Secret Life of Electronic Objects': "Instead of thinking about appearance, user-friendliness or corporate identity, industrial designers could develop design proposals that challenge conventional values". The relationship between speculative design and critical design can be seen in Matt Malpass's classification of contemporary design practice into three categories: the first is associative design, the second is speculative design, and the third is critical design. Speculative design is a form of critical design that is concerned with future proposals. It examines future scenarios to ask the question "what if?".
Some attempts of Italian radical design can be considered speculative design. For instance, Ettore Sottsass worked on "The planet as a festival" in 1973. Speculative design is inspired by the attitude and position of Italian radical design, yet does not necessarily imitate its format and motivations. Motivation Speculative design aims to defy capitalist-driven design directions and showcase their negative impacts on design practice. Dunne and Raby note that the hyper-commercialization of design during the 1980s drove this practice. Designers struggled to find a social model to align with outside of the capitalist economy. However, after the financial crash of 2008, interest in finding alternatives to the current design models was triggered. In this sense, the role of design is to be a catalyst in producing alternative visions rather than being the source of vision itself. Speculative designers' motivation is to take a position or an attitude towards the current design practice and propose alternatives. Designers might have different points of view about how they would present a design idea or focal issue. Bruce and Stephanie Tharp identify the different positions designers could take towards their projects; these could be declarative, suggestive, inquisitive, facilitative, and disruptive. Auger extends this discussion by explaining what speculative design should do, namely: "Arrange emerging (not yet available) technological 'elements' to hypothesise future products and artefacts, or apply alternative plans, motivations, or ideologies to those currently driving technological development in order to facilitate new arrangements of existing elements, and develop new perspectives on big systems", aiming at: "Asking 'what is a better future (or present)?' Generating a better understanding of the potential implications of a specific (disruptive) technology in various contexts and on multiple scales – with a particular focus on everyday life. Moving design 'upstream' – to not simply package technology at the end of the technological journey but to impact and influence that journey from its genesis." In theory Speculative design relies on speculation and proposition; its value comes from speculating about future scenarios where design is used in a particular context to showcase a notion or an idea of debate. The most significant aim of speculative design is to enact change rather than conform to the status quo. According to Johannessen, Keitsch and Pettersen, the change aspects can be segmented into three elements: political and social change; product value and user experience change; and aesthetics. Speculative designers do not suggest what a preferable future is; they let society decide what is a preferable future for itself, whereas affirmative design, government, and industries actually decide on their preferable future and create it. It encourages the audience to suggest a preferable future that has no direct relevance to today's perspective of how the future should be, and this raises society's awareness of how it could influence its choices for the future; the logic of the 'laws' of the future implies that if we strive for something, we can eventually turn it into reality, even if it seems incredible now. Speculative design triggers the debate about the actions we take today (in the present) that build future events. It encourages users to be the change of today.
It questions technology at early stages; it is concerned with the domestication of technology and upstream engagement. It raises societal and ethical implications in order to interrogate them. It questions the role of industrial and product design in delivering new science and technology. Speculative design, as a subsidiary of critical design, is built on the fundamentals of the Frankfurt School's critical tradition. Therefore, critical thinking is an essential aspect of speculative design. Critiquing norms, values, and why we design is what motivates speculative designers. Design is a future-oriented practice by nature. However, the issue lies in the fact that the vast majority of designers tend to abide by technological advancements without interrogating them or questioning the implications of such technology. An example of this is the wide adoption of social media and how this affected society (for example, the social dilemma). Designers, in this case, do not attempt to change the future, but rather tend to adapt their design towards what they can see as a probable future. In this sense, they see it as something that they cannot change. In this context, speculative design aims to influence change by raising questions and provoking debates through designed objects. Speculative design uses objects or prototypes that carry implicit meanings about complex social and technological issues. To highlight the differences between affirmative design and speculative design, Dunne and Raby introduced the A/B Manifesto to contrast their meanings and to highlight what it means to be critical or speculative in design. In practice Speculative design can be seen as an attitude, stance, or position instead of a process or methodology. Tactics, methods, and strategies for speculative design have wide variation. They depend on the designer's intention and the careful management of the outcome of the design project. Speculative design needs a "perceptual bridge" between what the audience identifies as their reality and the fictional elements in the speculative concept. Tactics and strategies of speculative design include reductio ad absurdum, counterfactuals, ambiguity, and satire, and the outcome of speculative design can be a project in the form of para-functional prototypes or post-optimal prototypes. Adjacent practices Speculative design has many adjacent practices, including critical design, discursive design, and design fiction. They share similar motivations but different purposes or target areas. Criticism The most significant criticism of critical and speculative design is based on the understanding that such design is not functional or useful, so it cannot be considered design. The grounds for this criticism are built on the basic understanding of design as a problem-solving activity. In contrast, speculative design is concerned with problem finding. It does not create functional objects at the end but rather problematizes an issue or social implication. Other criticism is directed towards speculative design because it sometimes presents dystopian futures that resemble the actual lives of people in other parts of the world. It can sometimes be considered a niche practice that is only presented in highly intellectual venues such as MoMA and the V&A Museum, as pointed out by Prado & Oliveira in 2014. Another criticism of speculative design concerns dissemination and reflection. The format and venues of presenting speculative design proposals do not imply a methodological approach for engaging with the audience and broader society.
This is what Bruce and Stephanie Tharp call "a message in a bottle". See also Critical design Critical making Design fiction Dunne & Raby References Critical design Futures techniques Industrial design
Speculative design
[ "Technology", "Engineering" ]
1,948
[ "Industrial design", "Critical design", "Design", "Design engineering" ]
67,944,523
https://en.wikipedia.org/wiki/Quantum%20Markov%20semigroup
In quantum mechanics, a quantum Markov semigroup describes the dynamics in a Markovian open quantum system. The axiomatic definition of the prototype of quantum Markov semigroups was first introduced by A. M. Kossakowski in 1972, and then developed by V. Gorini, A. M. Kossakowski, E. C. G. Sudarshan and Göran Lindblad in 1976. Motivation An ideal quantum system is not realistic because it should be completely isolated while, in practice, it is influenced by the coupling to an environment, which typically has a large number of degrees of freedom (for example an atom interacting with the surrounding radiation field). A complete microscopic description of the degrees of freedom of the environment is typically too complicated. Hence, one looks for simpler descriptions of the dynamics of the open system. In principle, one should investigate the unitary dynamics of the total system, i.e. the system and the environment, to obtain information about the reduced system of interest by averaging the appropriate observables over the degrees of freedom of the environment. To model the dissipative effects due to the interaction with the environment, the Schrödinger equation is replaced by a suitable master equation, such as a Lindblad equation or a stochastic Schrödinger equation in which the infinite degrees of freedom of the environment are "synthesized" as a few quantum noises. Mathematically, time evolution in a Markovian open quantum system is no longer described by means of one-parameter groups of unitary maps, but one needs to introduce quantum Markov semigroups. Definitions Quantum dynamical semigroup (QDS) In general, quantum dynamical semigroups can be defined on von Neumann algebras, so the dimensionality of the system could be infinite. Let A be a von Neumann algebra acting on a Hilbert space H; a quantum dynamical semigroup on A is a collection of bounded operators on A, denoted by T = (T_t)_{t ≥ 0}, with the following properties: T_0 = 1 (the identity map); T_{t+s} = T_t ∘ T_s for all s, t ≥ 0; T_t is completely positive for all t ≥ 0; T_t is a σ-weakly continuous operator on A for all t ≥ 0; and, for all a in A, the map t ↦ T_t(a) is continuous with respect to the σ-weak topology on A. Under the condition of complete positivity, the operators T_t are σ-weakly continuous if and only if the T_t are normal. Recall that, letting A_+ denote the convex cone of positive elements in A, a positive operator T is said to be normal if for every increasing net (x_α) in A_+ with least upper bound x in A_+ one has ⟨u, T(x_α)u⟩ → ⟨u, T(x)u⟩ for each u in a norm-dense linear sub-manifold of H. Quantum Markov semigroup (QMS) A quantum dynamical semigroup is said to be identity-preserving (or conservative, or Markovian) if T_t(1) = 1 for all t ≥ 0, where 1 is the identity element of A. For simplicity, such a semigroup is called a quantum Markov semigroup. Notice that the identity-preserving property and the positivity of T_t imply ||T_t|| = 1 for all t ≥ 0, and then (T_t) is a contraction semigroup. The Markovian condition plays an important role not only in the proof of uniqueness and unitarity of the solution of a Hudson–Parthasarathy quantum stochastic differential equation, but also in deducing regularity conditions for paths of classical Markov processes in view of operator theory. Infinitesimal generator of QDS The infinitesimal generator of a quantum dynamical semigroup (T_t) is the operator L with domain D(L), where D(L) = { a in A : the limit lim_{t→0+} (T_t(a) − a)/t exists } and L(a) = lim_{t→0+} (T_t(a) − a)/t. Characterization of generators of uniformly continuous QMSs If the quantum Markov semigroup is in addition uniformly continuous, which means lim_{t→0+} ||T_t − T_0|| = 0, then the infinitesimal generator L will be a bounded operator on the von Neumann algebra A with domain D(L) = A, the map t ↦ T_t(a) will automatically be continuous for every a in A, and the infinitesimal generator will also be σ-weakly continuous.
Under such assumptions, the infinitesimal generator has the characterization L(a) = i[H, a] + Σ_j ( V_j* a V_j − (1/2){V_j* V_j, a} ), where the V_j, the sum Σ_j V_j* V_j, and H are elements of A, and H is self-adjoint. Moreover, [·,·] above denotes the commutator, and {·,·} the anti-commutator. Selected recent publications See also References Quantum mechanics Semigroup theory
Quantum Markov semigroup
[ "Physics", "Mathematics" ]
780
[ "Mathematical structures", "Theoretical physics", "Quantum mechanics", "Fields of abstract algebra", "Algebraic structures", "Semigroup theory" ]
67,944,537
https://en.wikipedia.org/wiki/Reinforcement%20in%20concrete%203D%20printing
The reinforcement of 3D printed concrete is a mechanism where the ductility and tensile strength of printed concrete are improved using various reinforcing techniques, including reinforcing bars, meshes, fibers, or cables. The reinforcement of 3D printed concrete is important for the large-scale use of the new technology, like in the case of ordinary concrete. With a multitude of additive manufacturing applications in the concrete construction industry (specifically the use of additively constructed concrete in the manufacture of structural concrete elements) the reinforcement and anchorage technologies vary significantly. Even for non-structural elements, the use of non-structural reinforcement such as fiber reinforcement is not uncommon. The lack of formwork in most 3D printed concrete makes the installation of reinforcement complicated. Early phases of research in concrete 3D printing primarily focused on developing the material technologies of the cementitious/concrete mixes. These causes, combined with the non-existence of codal provisions on reinforcement and anchorage for printed elements, explain the limited awareness and usage of the various reinforcement techniques in additive manufacturing. The material extrusion-based printing of concrete is currently favorable both in terms of availability of technology and of cost-effectiveness. Therefore, most of the reinforcement techniques developed or currently under development are suited to the extrusion-based 3D printing technology. Types of reinforcement The reinforcement in concrete 3D printing, much like that in conventional concrete, can be classified based either on the method of placement or on the method of action. The methods of placement of reinforcement are preinstallation, co-installation, and post-installation. The examples of each are pre-installed meshes, fibers mixed with concrete, and post-tensioning cables, respectively. The classification based on the structural action is once again the same as that in conventional concrete. Examples of passive and active reinforcement in 3D printed concrete are reinforcement bars and post-tensioning cables used to prestress segmental elements, respectively. The majority of the reinforcement in concrete has conventionally been steel and continues to be even in 3D printed concrete. Alternative composite materials, such as FRPs and fibers of glass, basalt, etc., in the mix have gained considerable prominence. Some common reinforcements in 3D printing Reinforcing steel bars The high availability and popularity of deformed bars or rebars as a passive structural reinforcement in conventional concrete systems make them sought after in printed concrete. They are welded together to form trusses laid between layers, a very effective co-installed reinforcement strategy without the use of formworks. They are erected as reinforcement cages around which concrete is printed to form wall and beam elements, making rebars an effective pre-installation strategy. The rebar-based formative skeletal structure can also act as a core onto which printable concrete is shotcreted, in a new method developed at TU Braunschweig. The rebar cages can also be installed inside printed concrete formworks in non-structural members, and the holes are filled with grout. This method of post-installed reinforcement has proven to be cost-effective; however, it requires attention to the interface between steel and the printed concrete.
The use of printed concrete as formwork requires higher tensile hoop strength of the concrete, which could be provided by the use of fibers in the mix. Smart Dynamic Casting Smart Dynamic Casting (SDC), a new printing technology being developed at ETH Zurich, combines slipforming and printing material technologies to produce varied cross-sections and complex geometries using very little formwork. Reinforcement bars are pre-installed, just like in the case of conventionally cast concrete, and the rheology of the concrete is adapted to retain the shape of the slipforming formwork before the concrete hydrates enough to sustain its self-weight. Concrete facade mullions of varying cross-sections were produced for a DFAB house in Switzerland. Reinforcement meshes Similar to the use of rebars, reinforcement meshes are also popular as a passive reinforcement technique. The welded wire meshes are laid in-between printed layers of slabs without requiring any formwork. They can also be used to print wall elements that are fabricated laterally and erected in place. In a method not applicable to rebars, spools of mesh are unwound simultaneously ahead of the printer nozzle to provide both horizontal and vertical reinforcement to the printed elements. This method not only acts as reinforcement in the hardened state of concrete but also compensates for the lack of formwork in the fresh state of concrete. Cables High-strength galvanised steel cables provide effective reinforcement in printed concrete elements where sufficient cover concrete cannot be provided owing to the complexity of the shape. The cables can either be laid in-between layers or extruded simultaneously like the meshes. The bond between high-strength steel cables and concrete needs special attention. Continuous yarn or flow-based pultrusion Continuous yarn in glass, basalt, high-performance polymer, or carbon can also effectively be used as reinforcement for 3D-printed concrete without needing additional motors. The technique takes advantage of the extruded concrete consistency to passively pultrude numerous continuous yarns. The obtained material is a unidirectional cementitious composite with an increase in strength and ductility in the extrusion direction depending on the proportion of fiber. Thanks to the small diameter of the yarns used, their bond with the matrix is usually strong. Furthermore, the process takes advantage of the small bending stiffness of the yarn to ensure the same geometric freedom, with extended buildability thanks to the early tensile strength provided by the yarn during printing. This feature comes with a more complex extrusion nozzle and the use of a specific device for handling the numerous yarns. Post-tensioning cables The automated fabrication of elements realises its true potential when printed segmental elements are fitted in place using post-tensioning. The concrete segments are printed, leaving holes for the post-tensioning cables, which not only act as an active reinforcement but also help in connecting the segmental elements to form a load-bearing structure. The holes left behind for the cables are filled with grout after the tensioning of the cables. A bicycle bridge has been constructed at TU Eindhoven by printing segments that are post-tensioned using high-strength cables running perpendicular to the printing direction. The post-tensioning technology has a lot of potential as a reinforcement strategy in additively manufactured concrete systems.
Fiber reinforcement The use of fibers in the mix has several advantages, as in the case of conventional concrete. The higher cement content and faster hydration rate requirements of printed concrete make it susceptible to shrinkage cracking and thermal stresses. The use of fibers (structural or non-structural) can counter these significantly. Fiber reinforcements are also useful in printing shell structures, as the tensile membrane action required to convert bending moment into axial force is possible only with tough, high-stiffness concrete. Fibers, when aligned, can provide this required higher toughness and stiffness. The flexural tensile strength is also improved with the addition of structural steel or PVA fibers. These properties make fiber-reinforced concrete a suitable material for printing formworks. The cohesiveness of concrete in the fresh state, which is crucial for printing, can be improved by using non-structural fibers such as polypropylene or basalt. The use of fiber reinforcement in 3D printing creates a much-needed segue into the fields of ultra-high performance concretes with enhanced strengths and durability, crucial in aesthetic slender elements. External anchor connectors Anchor connectors are installed in truss elements with the aim of connecting them to similar units using exposed threaded bars. This reinforcing technique has the advantage of faster fabrication of lightweight units that can be arranged in a free-form manner on-site, depending on the requirement. The exposed reinforcement might face corrosion issues when installed in outdoor environments. Topologically optimised truss shapes, where force follows form, can be created and used to save material and, in turn, construction costs. The anchors can be connected by both in-plane and out-of-plane threaded rebars to create elements beyond simple beams and arches. Bamboo reinforcement Bamboo reinforcement, including bamboo wrapped in steel wires, has been proposed as reinforcement for traditional concrete elements since as early as 2005, with recent studies suggesting possible applications in 3D-printed concrete. This technique has the advantage of producing potentially 50 times lower carbon emissions than traditional steel reinforcement techniques. One drawback of this method is potential durability issues, as the organic nature of bamboo makes it vulnerable to pests and decomposition. Proper treatment of the material can circumvent this issue and preserve the bamboo reinforcement for as long as 15 years. Other less common reinforcement techniques Interface ties and staples are sometimes used to improve the bonding between printed layers. Ladder wire is used to reinforce printed elements to improve their horizontal bending behaviour. Print stabilisers are used to prevent the elastic buckling of printed layers during the printing process. Welded/printed reinforcement is a technology being developed at TU Braunschweig where the steel reinforcements are simultaneously printed using gas metal arc welding. Hybrid solutions Each reinforcement technology is usually more effective when used in conjunction with another reinforcing technology, leaving a lot of scope for research and development. The mesh mould technology can be combined with SDC to produce highly automated elements faster. The printable Fiber Reinforced Concrete (FRC) technology can be combined with most other reinforcement techniques seamlessly to produce a highly durable concrete structure.
Fiber-reinforced concrete, when used to print formwork, has a higher resistance to hoop stresses owing to higher filament strengths. Meshes and bar cages are almost always combined in large-scale construction projects. See also Construction 3D printing Applications of 3D printing Reinforced concrete Types of concrete 3D printing Automation References 3D printing Building technology Construction Building materials
Reinforcement in concrete 3D printing
[ "Physics", "Engineering" ]
1,991
[ "Building engineering", "Construction", "Materials", "Building materials", "Matter", "Architecture" ]
67,944,539
https://en.wikipedia.org/wiki/Groundwater%20contamination%20by%20pharmaceuticals
Groundwater contamination by pharmaceuticals, which belong to the category of contaminants of emerging concern (CEC) or emerging organic pollutants (EOP), has been receiving increasing attention in the fields of environmental engineering, hydrology and hydrogeochemistry since the last decades of the twentieth century. Pharmaceuticals are suspected to provoke long-term effects in aquatic ecosystems even at low concentration ranges (trace concentrations) because of their bioactive and chemically stable nature, which leads to recalcitrant behaviours in the aqueous compartments, a feature that is typically associated with the difficulty in degrading these compounds to innocuous molecules, similarly to the behaviour exhibited by persistent organic pollutants. Furthermore, the continuous release of medical products into the water cycle poses concerns about bioaccumulation and biomagnification phenomena. As the vulnerability of groundwater systems is increasingly recognized even by the regulating authority (the European Medicines Agency, EMA), environmental risk assessment (ERA) procedures are required for pharmaceuticals applying for marketing authorization, and preventive actions are urged to preserve these environments. In the last decades of the twentieth century, scientific research efforts have been fostered towards a deeper understanding of the interactions of groundwater transport and attenuation mechanisms with the chemical nature of polluting agents. Amongst the multiple mechanisms governing solute mobility in groundwater, biotransformation and biodegradation play a crucial role in determining the evolution of the system (as identified by developing concentration fields) in the presence of organic compounds, such as pharmaceuticals. Other processes that might impact pharmaceuticals' fate in groundwater include classical advective-dispersive mass transfer, as well as geochemical reactions, such as adsorption onto soils and dissolution / precipitation. One major goal in the field of environmental protection and risk mitigation is the development of mathematical formulations yielding reliable predictions of the fate of pharmaceuticals in aquifer systems, eventually followed by an appropriate quantification of predictive uncertainty and estimation of the risks associated with this kind of contamination. General problem Pharmaceuticals represent a serious threat to aquifer systems because of their bioactive nature, which makes them capable of interacting directly with therein residing living microorganisms and yielding bioaccumulation and biomagnification phenomena. The occurrence of xenobiotics in groundwater has been proven to harm the delicate equilibria of aquatic ecosystems in several ways, such as promoting the growth of antibiotic-resistant bacteria or causing hormone-related sexual disruption in living organisms in surface waters. Considering then the role of groundwater systems as main worldwide drinking water resources, the capability of pharmaceuticals to interact with human tissues poses serious concerns also in terms of human health. Indeed, the majority of pharmaceuticals do not degrade in groundwater, where they accumulate due to their continuous release into the environment. These compounds reach subsurface systems through different sources, such as hospital effluents, wastewaters and landfill leachates, and clearly risk contaminating drinking water.
Most detected pharmaceutical classes The main pharmaceutical classes detected in worldwide groundwater systems are listed below. The following categorisation is based on a medical perspective and is often referred to as therapeutic classification. Antibiotics Estrogens and hormones Anti-inflammatories and analgesics Antiepileptics Lipid regulators Antihypertensives Contrast media Antidepressants Antiulcer drugs and Antihistamines Chemical aspects relevant to aquifer systems dynamics The chemical structure of pharmaceuticals affects the type of hydro-geochemical processes that mainly impact their fate in groundwater, and it is strictly associated with their chemical properties. Therefore, a classification of pharmaceuticals based on chemical classes is a valid alternative for the purpose of understanding the role of molecular structures in determining the kind of physical and geochemical processes affecting their mobility in porous media. With regard to the occurrence of medical drugs in subsurface aquatic systems, the following chemical properties are of major interest: Solubility in the aqueous phase Pharmaceuticals' solubility in water affects the mobility of these compounds within aquifers. This feature depends on the pharmaceuticals' polarity, as polar substances are typically hydrophilic, thereby showing a marked tendency to dissolve in the aqueous phase, where they become solutes. This aspect impacts the dissolution / precipitation equilibrium, a phenomenon that is mathematically described in terms of the substance's solubility product (addressed in many books with the notation K_sp). Lipophilicity, often measured through the so-called octanol-water partition coefficient (typically addressed as K_ow) Large K_ow values outline the non-polar character of the chemical species, which instead shows particular affinity to dissolve into organic solvents. Therefore, lipophilic pharmaceuticals are markedly subjected to the risk of bioaccumulating and biomagnifying in the environment, consistent with their preferential partition into the organic tissues of living organisms. Pharmaceuticals with sufficiently large K_ow are in fact subjected to specific tiers in the environmental risk assessment (ERA) procedure (to be supplied for the marketing authorisation application) and are highlighted as potential sources of bioaccumulation and biomagnification according to the EMA guidelines. Lipophilic compounds are then insoluble in water, where they persist as a phase separated from the aqueous one. This renders their mobility in groundwater basically decoupled from dissolution / precipitation mechanisms and attributed to the mean flow transport (advection and dispersion) and soil-mediated mechanisms of reaction (adsorption). Affinity of sorption onto the soils This feature is expressed in terms of the so-called organic carbon-water partition coefficient, which is usually referred to as K_oc and is an intrinsic property of the molecule. Acidic character Molecules' behaviour in relation to aqueous dissociation reactions is typically related to their acid dissociation constants, which are typically outlined in terms of their pK_a coefficients. Affinity to redox reactions, even in the context of bacterially-mediated metabolic pathways The molecular structure of xenobiotics typically outlines the existence of several possible reaction pathways, which are embedded in complex reaction networks and are typically referred to as transformation processes.
With reference to organic compounds such as pharmaceuticals, innumerable kinds of chemical reactions exist, most of them involving common chemical mechanisms, such as functional-group elimination, addition and substitution. These processes often involve further redox reactions accomplished on the substrates, which are here represented by pharmaceutical solutes and, possibly, their transformation products and metabolites. These processes can be classified as either biotic or abiotic, depending on the presence or absence of bacterial communities acting as reaction mediators. In the former case, these transformation pathways are typically referred to as biodegradation or biotransformation in the hydrogeochemical literature, depending on the extent of cleavage of the parent molecule into highly oxidized, innocuous species.

Transport and attenuation processes
The fate of pharmaceuticals in groundwater is governed by different processes. The reference theoretical framework is that of reactive solute transport in porous media at the continuum scale, typically interpreted through the advective-dispersive-reactive equation (ADRE). With reference to the saturated region of the aquifer, the ADRE can be written as:
∂(ϕC)/∂t = −∇·(ϕvC) + ∇·(ϕD∇C) + r
where ϕ represents the effective porosity of the medium, and x and t represent, respectively, the spatial coordinates vector and the time coordinate. ∇· represents the divergence operator, except when it applies to C, where the nabla symbol stands for the gradient of C. The term C = C(x, t) denotes the concentration field of the pharmaceutical solute in the water phase (for unsaturated regions of the aquifer, the ADRE has a similar form, but includes additional terms accounting for volumetric contents and contaminant concentrations in phases other than water), while v denotes the velocity field. D is the hydrodynamic dispersion tensor and is typically a function of the sole variable v. Lastly, the source/sink term r includes the accumulation or removal contribution due to all possible reactive processes in the system, i.e., adsorption, dissolution / precipitation, acid dissociation and other transformation reactions, such as biodegradation.
The main hydrological transport processes driving the migration of pharmaceuticals and organic contaminants in aquifer systems are:
Advection
Hydrodynamic dispersion
The most influential geochemical processes, also referred to as reactive processes and whose effect is embedded in the term r of the ADRE, include:
Adsorption onto soil
Dissolution and precipitation
Acid dissociation and aqueous complexation
Biodegradation, biotransformation and other transformation pathways

Advection
Advective transport accounts for the contribution to solute mass transfer across the system that originates from bulk flow motion. At the continuum scale of analysis, the system is interpreted as a continuous medium rather than a collection of solid particles (grains) and empty spaces (pores) through which the fluid can flow. In this context, an average flow velocity can typically be estimated, which arises from upscaling the pore-scale velocities. Here, the fluid flow conditions ensure the validity of Darcy's law, which governs the system evolution in terms of the average fluid velocity, typically referred to as the seepage or advective velocity. Dissolved pharmaceuticals in groundwater are transferred within the domain along with the mean fluid flow, in agreement with the physical principles governing the migration of any other solute across the system.
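To make the interplay of the terms in the ADRE concrete, the following sketch integrates a one-dimensional, constant-porosity form of the equation with upwind advection, Fickian dispersion and a first-order decay closure for the reactive term r; it is an illustrative toy model added here, and every parameter value in it is assumed rather than taken from this article.

```python
import numpy as np

# Minimal sketch: explicit finite differences for the 1D ADRE with
# constant effective porosity (which then cancels out of the equation):
#   dC/dt = -v dC/dx + D d2C/dx2 - lam C
# All parameter values are illustrative assumptions.

L, nx = 100.0, 201                 # domain length [m], grid points
dx = L / (nx - 1)
v = 0.5                            # seepage velocity [m/day] (assumed)
alpha, Dm = 1.0, 1e-4              # dispersivity [m], molecular diffusion [m2/day]
D = Dm + alpha * v                 # hydrodynamic dispersion coefficient
lam = 1e-3                         # first-order decay rate [1/day] (assumed)
dt = 0.4 * min(dx / v, dx**2 / (2 * D))  # stability-limited time step

C = np.zeros(nx)
C[0] = 1.0                         # constant-concentration source at the inlet

for _ in range(int(365.0 / dt)):   # simulate one year
    adv = -v * (C[1:-1] - C[:-2]) / dx               # upwind advection (v > 0)
    disp = D * (C[2:] - 2.0 * C[1:-1] + C[:-2]) / dx**2
    C[1:-1] += dt * (adv + disp - lam * C[1:-1])
    C[-1] = C[-2]                  # free-outflow (zero-gradient) outlet

print(np.round(C[::20], 3))        # coarse concentration profile after 1 year
```

Field-scale codes such as the MODFLOW-based transport simulators mentioned later solve the same balance in three dimensions with heterogeneous coefficients; the toy model only shows how advection moves the plume, dispersion spreads it and decay attenuates it.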
Hydrodynamic dispersion
Hydrodynamic dispersion identifies a process that arises as the sum of two separate effects. First, it is associated with molecular diffusion, a phenomenon appreciated at the macroscale as a consequence of microscale Brownian motion. Secondly, it includes a contribution (called mechanical dispersion) arising as an effect of upscaling the fluid-dynamic transport problem from the pore to the continuum scale of investigation, due to the upscaling of locally inhomogeneous velocities. The latter contribution is therefore not related to the occurrence of any physical process at the pore scale, but is only a fictitious consequence of the choice of modelling scale. Hydrodynamic dispersion is embedded in the advective-dispersive-reactive equation (ADRE) by assuming a Fickian closure model. Dispersion manifests at the macroscale as a spreading of the contaminant plume around its centre of mass.

Adsorption onto soil
Sorption identifies a heterogeneous reaction that is often driven by instantaneous thermochemical equilibrium. It describes the process by which a certain mass of solute dissolved in the aqueous phase adheres to a solid phase (such as the organic fraction of the soil in the case of organic compounds), thereby being removed from the liquid phase. In hydrogeochemistry, this phenomenon has been proven to delay solute mobility with respect to the case in which only advection and dispersion occur in the aquifer. For pharmaceuticals, it can typically be interpreted using a linear equilibrium adsorption model, which is fully applicable at low concentration ranges. The latter model relies upon the assessment of a linear partition coefficient, usually denoted as Kd, which depends, for organic compounds, on both the organic carbon-water partition coefficient Koc and the organic carbon fraction in the soil, foc. While the former term is an intrinsic chemical property of the molecule, the latter depends on the site-specific composition of the soils of the analyzed aquifer. Sorption of trace compounds such as pharmaceuticals in groundwater is interpreted through the following linear isotherm model:
s = Kd·C, with Kd = Koc·foc
where s identifies the adsorbed concentration on the solid phase and C the dissolved concentration. The neutral form of the organic molecules dissolved in water is typically the sole contributor to sorptive mechanisms, which become more important the richer the soils are in organic carbon. Anionic forms are instead insensitive to sorptive mechanisms, while cations can undergo adsorption only under very particular conditions.
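The delaying effect of the linear isotherm on transport is commonly condensed into a retardation factor, R = 1 + (ρb/ϕ)·Kd, where ρb is the bulk density of the porous medium and ϕ its effective porosity; this is the standard result for linear equilibrium sorption. The short sketch below evaluates Kd and R for a hypothetical compound, with all input values assumed for illustration.

```python
# Minimal sketch: linear-sorption bookkeeping for a hypothetical compound.
# Kd = Koc * foc, and the retardation factor R = 1 + (rho_b / phi) * Kd
# measures how much slower the sorbing solute travels than the water.

def retardation_factor(Koc, foc, rho_b=1.6, phi=0.3):
    """Koc [L/kg], foc [-], rho_b bulk density [kg/L], phi porosity [-]."""
    Kd = Koc * foc                 # linear partition coefficient [L/kg]
    return Kd, 1.0 + (rho_b / phi) * Kd

# Hypothetical moderately sorbing pharmaceutical (values assumed)
Kd, R = retardation_factor(Koc=250.0, foc=0.01)
print(f"Kd = {Kd:.2f} L/kg, retardation factor R = {R:.1f}")  # R ~ 14.3
```

A retardation factor of about 14 means the solute front advances roughly fourteen times more slowly than the mean groundwater flow, which is why organic-carbon-rich soils attenuate lipophilic pharmaceuticals so effectively.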
Dissolution and precipitation
Dissolution represents the heterogeneous reaction during which a solid compound, such as an organic salt in the case of pharmaceuticals, dissolves into the aqueous phase. Here, the original salt appears in the form of both aqueous cations and anions, depending on the stoichiometry of the dissolution reaction. Precipitation represents the reverse reaction. This process typically occurs at thermochemical equilibrium, but in some applications of hydrogeochemical modelling it may be necessary to consider its kinetics. As an example for the case of pharmaceuticals, the non-steroidal anti-inflammatory drug diclofenac, which is commercialised as sodium diclofenac, undergoes this process in groundwater environments.

Acid dissociation and aqueous complexation
Acid dissociation is a homogeneous reaction that yields dissociation of a dissolved acid (in the water phase) into cationic and anionic forms, while aqueous complexation denotes the reverse process. The aqueous speciation of a solution is determined on the basis of the pKa coefficient, which typically ranges between 3 and 50 (approximately) for organic compounds such as pharmaceuticals. Since the latter are weak acids, and considering that this process always proceeds under instantaneous achievement of thermochemical equilibrium conditions, it is reasonable to assume that the undissociated form of the original contaminant is predominant in the water speciation for most practical cases in the field of hydrogeochemistry.

Biodegradation, biotransformation and other transformation pathways
Pharmaceuticals can undergo biotransformation or transformation processes in groundwater systems. Aquifers are indeed rich reserves of minerals and other dissolved chemical species, such as organic matter, dissolved oxygen, nitrates, ferrous and manganese compounds and sulfates, as well as dissolved cations such as calcium, magnesium and sodium. All of these compounds interact through complex reaction networks embedding reactive processes of different natures, such as carbonate precipitation / dissolution, acid-base reactions, sorption and redox reactions. With reference to the latter kind of processes, several pathways are typically possible in aquifers because the environment is often rich in both reducing agents (like organic matter) and oxidizing agents (like dissolved oxygen, nitrates, ferric and manganese oxides, sulfates, etc.). Pharmaceuticals can act as substrates as well in this scenario, i.e., they can represent either the reducing or the oxidizing agent in the context of redox processes. In fact, most chemical reactions involving organic molecules proceed upon gain or loss of electrons, so that the oxidation state of the molecule changes along the reactive pathway. In this context, the aquifer acts as a "chemical reactor". There are innumerable kinds of chemical reactions that pharmaceuticals can undergo in this environment, depending on the availability of other reactants, pH and other environmental conditions, but all of these processes typically share common mechanisms, the main ones involving addition, elimination or substitution of functional groups. The mechanism of reaction is important in the field of hydrogeochemical modelling of aquifer systems because all of these reactions are typically governed by kinetic laws. Therefore, recognizing the correct molecular mechanisms through which a chemical reaction progresses is fundamental for modelling the reaction rates correctly (for example, it is often possible to identify a rate-limiting step within multistep reactions and relate the rate of reaction progress to that particular step). Modelling these reactions typically follows the classic kinetic laws, except for the case in which the reactions involving the contaminant take place in the context of bacterial metabolism. While in the former case the ensemble of reactions is referred to as a transformation pathway, in the latter the terms biodegradation or biotransformation are used, depending on the extent to which the chemical reactions effectively degrade the original organic molecule to innocuous compounds in their maximum oxidation state (i.e., carbon dioxide, methane and water).
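As a sketch of how the choice of kinetic closure affects the predicted attenuation, the example below compares simple first-order decay with a saturable Monod / Michaelis-Menten-type rate of the kind discussed next for biologically mediated pathways; all rate parameters are assumed for illustration and do not describe any specific compound.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal sketch: first-order decay, dC/dt = -lam * C, versus a
# Michaelis-Menten / Monod-type saturable rate,
# dC/dt = -k_max * C / (K_half + C). All parameters are hypothetical.

lam, k_max, K_half = 0.05, 0.08, 0.5     # [1/day], [mg/L/day], [mg/L]
C0 = 2.0                                  # initial concentration [mg/L]
t = np.linspace(0.0, 60.0, 7)             # output times [days]

first_order = solve_ivp(lambda t, C: -lam * C, (0.0, 60.0), [C0], t_eval=t)
monod = solve_ivp(lambda t, C: -k_max * C / (K_half + C), (0.0, 60.0), [C0],
                  t_eval=t)

for ti, c1, c2 in zip(t, first_order.y[0], monod.y[0]):
    print(f"t = {ti:5.1f} d   first-order: {c1:.3f}   Monod-type: {c2:.3f}")
```

At concentrations well above K_half, the Monod-type rate is nearly constant (zero-order) rather than proportional to C, which is one reason linear decay models can misestimate the persistence of a plume.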
In the case of biologically mediated reaction pathways, which are relevant in the study of groundwater contamination by pharmaceuticals, appropriate kinetic laws can be employed to model these processes in hydrogeochemical contexts. For example, the Monod and Michaelis-Menten equations are suitable options for biotic transformation processes involving organic compounds (such as pharmaceuticals) as substrates. Although most of the hydrogeochemical literature addresses these processes through linear biodegradation models, several studies have been carried out since the second decade of the twenty-first century because such linear models are typically too simplified to ensure reliable predictions of the fate of pharmaceuticals in groundwater and may bias risk estimates in the context of environmental risk mitigation applications.

Hydrologic and geochemical modelling approaches
Groundwater contamination by pharmaceuticals is a topic of great interest in the fields of environmental and hydraulic engineering, where substantial research effort has been directed at this kind of contaminant since the beginning of the twenty-first century. The general goal of these disciplines is to develop interpretive models capable of predicting the behaviour of aquifer systems in relation to the occurrence of various types of contaminants, including medical drugs. This goal is motivated by the need to provide mathematical tools to predict, for example, how contaminant concentration fields develop across the aquifer over time. Such predictions may provide useful information to support decision-making processes in the context of environmental risk assessment procedures. To this purpose, several interdisciplinary strategies and tools are typically employed, the most fundamental being listed below:
Numerical modelling strategies are employed to simulate hydrogeochemical transport models. Examples of commonly used software are MODFLOW and PHREEQC, although many other packages are available.
Statistical inference tools are used to calibrate available hydrogeochemical models against raw data. A widely employed tool is, for example, PEST.
Knowledge of organic chemistry is a fundamental prerequisite for developing the geochemical models to be fitted against data.
Laboratory- or field-scale experiments are designed to obtain the raw data necessary to study the behaviour of aquifer systems under exposure to compounds of concern.
All of these interdisciplinary tools and strategies are employed in combination to analyse the fate of pharmaceuticals in groundwater.

See also
Groundwater pollution
Environmental impact of pharmaceuticals and personal care products
Reactive transport modeling in porous media
Computer simulation

References

Natural resources
Aquifers
Environmental science
Water chemistry
Water pollution
Environmental issues with water
Drug manufacturing
Groundwater contamination by pharmaceuticals
[ "Chemistry", "Environmental_science" ]
3,780
[ "Hydrology", "Aquifers", "nan", "Water pollution" ]
67,944,543
https://en.wikipedia.org/wiki/Design%20prototyping
Design prototyping in its broader definition comprises the actions of making, testing and analysing a prototype, model or mockup, for one or more purposes, at different stages of the design process. Other definitions consider prototyping as the methods or techniques for making a prototype (e.g., rapid prototyping techniques), or as a stage in the design process (prototype development, prototype or prototyping). The concept of prototyping in the design disciplines' literature is also related to the concepts of experimentation (i.e., an iterative problem-solving process of trying, failing and improving) and Research through Design (RtD) (i.e., designers make a prototype with the purpose of conducting research and generating knowledge while trying it, rather than aiming to improve it into a final product).

Background
Initial references to the concept of prototyping in design can be traced to the proceedings of the Conference on Design Methods in 1962. In 1968, Bruce Archer, a leading figure in the "Design Methods Movement", described the design process. One of the stages of the process is called "Prototype development", and it indicates activities to build and test a prototype. Thus, it is possible to say that from a design-methods perspective, prototyping recalls a process in which a prototype is built, tried out and tested. Along the same lines, additional references to prototyping can be found in later editions of the Design Research Society's conferences. For example, they refer to building models and using them to consult people outside the design team, reviewing the model and making decisions on how to modify the design proposal; or they describe modelling (creating a model) and model simulation. However, one of the first documented uses of the term prototyping linked to a design process appears in 1983 in A systematic look at prototyping, in the field of information systems and software development. The work of Floyd was inspired by the discussions among the scholars who were preparing the Working Conference on Prototyping. It focuses on prototyping as a process, rather than on the artefact, and on how prototyping could be applied to the full solution (or product) or to parts of it, seeking to improve the final output. Although this work was not developed within the design discipline, it provides a comprehensive characterisation of prototyping by defining its steps, purposes and strategies. Moreover, it serves as a reference for further studies of design prototyping. Later, around 1990, the availability of methods for rapidly manufacturing models and prototypes stimulated the publication of a great body of literature dedicated to rapid prototyping techniques and technologies (e.g., 3D printing). Technologies for additive manufacturing (i.e., adding material) or subtractive manufacturing (i.e., removing material), together with the use of software for computer-aided design (CAD), facilitated not only prototype building but also the fabrication of products in limited numbers. Over the years, further efforts have been dedicated to characterising prototyping in design disciplines in the fields of interaction design, experience design, product design and service design, as well as in product-design-related fields such as engineering/mechanical design. In 2000, designers from IDEO described experience prototyping, introducing types of design representations and methods that allow designers to simulate aspects of an interaction that people experience for themselves.
Experience prototyping can combine various types of prototypes, such as spaces, products and interfaces, to resemble what the real experience could be like. Around 2010, studies examined the prototyping of services, theorising from the growing practice of service design; in 2018 these studies were also used as a reference for service design practitioners.

Prototyping cycle
Prototyping develops in an iterative cycle of making, testing and analysing, which allows dimensions of a solution to be examined before its future implementation, anticipating possible issues and improving on them earlier in the process. This cycle can be portrayed in the following steps:
Preparation: deciding the aims of prototyping, defining the questions and assumptions to be examined, and identifying the participants of the prototyping sessions and the dimensions of the prototype to be tested.
Making: some or various dimensions are represented in a prototype (e.g., material, form or function), employing an appropriate technique depending on the purpose. The relevance of making in design has been increasing in recent years and transforming as new design disciplines emerge. For instance, whilst sketches were previously another category of visual design representations, today they could also be considered prototypes in service design.
Testing: the prototyping session develops in a defined setup with certain characteristics of space and environment, and follows a method to gather feedback.
Analysing: the results of the testing are integrated into the solution and updated in the following prototype versions.
One example of this cycle could be the design of a digital interface in the early stages of the process applying paper prototyping. In this case, prototyping may seek to explore and evaluate multiple alternative ideas with users as quickly and cheaply as possible, before investing time in programming the interface. Thus, the prototypes will represent the structure of the interface by using simple forms and text to indicate the elements (1). A common technique for creating prototypes of digital interfaces is to sketch wireframes on paper (2). The team will meet with a potential user and the wireframes will be presented by the design researcher. The user will simulate clicking the elements and explain the actions they intend to take while moving on to other sheets that represent other screens in the navigation flow (3). The feedback gathered will be used to make decisions on the aspects that need to be modified, and the layout of the interface will be updated (4).

Characteristics of prototyping
To prepare for prototyping, some aspects need to be decided. For this purpose, it is useful to single out and consider various characteristics that allow identifying how prototyping should be developed according to the design needs. In this regard, the prototyping framework proposed by Blomkvist and Holmlid can provide some guidelines. As a result of a literature review, they identify a set of characteristics, which are:

Position in the process
Whilst some scholars located prototyping in a particular stage of the design process, prototyping has been gaining relevance as a continuous activity carried out from the early stages of the process onwards. Considering at which moment of the process prototyping is going to be developed will guide decisions on its purpose and on the further characteristics of prototyping.
Purpose
Prototyping can be developed according to different aims of the design process, which influence decisions such as which variables of the prototype are going to be examined and who is going to be involved in the testing session. For example, in the early stages of the process the need could be to explore various ideas within the design team, and prototypes may be created quickly and with few resources, while at the end of the process the functionality of the solution may be evaluated with future users, so the prototype would largely resemble its final version. Different authors have identified a variety of purposes for prototyping.

Stakeholder
A prototyping session can involve a variety of people related to the solution. Internal to the organisation, the participants could range from the members of the design team to colleagues from other departments and managers. External to the organisation, prototyping could involve future users and clients, and representatives from other organisations. The selection of the participants depends on the purposes of prototyping. For instance, a prototyping session for exploration could be developed internally with colleagues in order to get quick feedback about initial design proposals. Another example would be to involve users in co-design prototyping sessions in order to explore proposals directly with future users.

Activity
The activity refers to the method that is used for testing a prototype, the context in which it occurs, and the strategies for testing in relation to what the real conditions of use of the solution would be.

Prototype
Prototypes can represent one component of a future solution, such as "(inter)actions, service processes, experiences, physical objects, environments, spaces, architecture, digital artifacts and software, ecosystems, [or] (business) value", or comprise several of these components. Moreover, a prototype can reflect one or multiple dimensions of the future solution, and a variety of aspects can be considered. A simple approach is to consider fidelity, meaning how closely the prototype resembles the final solution. More comprehensive approaches consider multiple dimensions. For instance, Houde and Hill describe the "role" (i.e., functionality for the user), the "look and feel" (i.e., sensory and experiential aspects) and the "implementation" (i.e., performance of the solution). Lim, Stolterman and Tenenberg propose a classification of prototypes according to "filtering dimensions" (functionality, interactivity and spatial structure) and "manifestation dimensions" (materials, resolution and scope). They suggest these dimensions can be weighed in order to decide how the prototype should be made.

See also
Prototype
Model
Mockup
Rapid prototyping
Design methods
Interaction design
User experience design
Product design
Service design
Software prototyping
Participatory design - co-design

References

Design
Human–computer interaction
Industrial design
Product development
Design prototyping
[ "Engineering" ]
1,915
[ "Industrial design", "Design engineering", "Human–machine interaction", "Design", "Human–computer interaction" ]
67,944,546
https://en.wikipedia.org/wiki/Tantalum%20diselenide
Tantalum diselenide is a compound made of tantalum and selenium atoms, with chemical formula TaSe2, which belongs to the family of transition metal dichalcogenides (TMDs). In contrast to molybdenum disulfide (MoS2) or rhenium disulfide (ReS2), tantalum diselenide does not occur spontaneously in nature, but it can be synthesized. Depending on the growth parameters, different types of crystal structures can be stabilized. In the 2010s, interest in this compound rose due to its ability to host a charge density wave (CDW), which depends on the crystal structure, up to about 600 K, while other transition metal dichalcogenides normally need to be cooled down to hundreds of kelvins or below to observe the same behaviour.

Structure
As other TMDs, TaSe2 is a layered compound, with a central hexagonal tantalum lattice sandwiched between two layers of selenium atoms, also arranged in a hexagonal structure. Unlike other 2D materials such as graphene, which is atomically thin, TMDs are composed of trilayers of strongly bonded atoms, stacked above other trilayers and held together through van der Waals forces. TMDs can therefore be easily exfoliated. The most studied crystal structures of TaSe2 are the 1T and 2H phases, which feature octahedral and trigonal prismatic symmetries, respectively. However, it is also possible to synthesize the 3R phase or the 1H phase.

1T phase
In the 1T phase, selenium atoms show an octahedral symmetry and the relative orientation of the selenium atoms in the topmost and bottommost layers is opposed. On a macroscopic scale, the sample shows a gold colour. The lattice parameters are a = b = 3.48 Å and c = 6.27 Å. Depending on the temperature, it shows different types of charge density waves (CDW): an incommensurate CDW (ICDW) between about 473 K and 600 K, and a commensurate CDW (CCDW) below about 473 K. In the commensurate CDW, the resulting superlattice shows a √13 × √13 reconstruction, often referred to as the star of David (SOD), with respect to the lattice parameter (a = b) of the undistorted TaSe2. Film thickness also influences the CDW transition temperature: the thinner the film, the lower the transition temperature from ICDW to CCDW. In the 1T phase, the single trilayers are always stacked in the same geometry.

2H phase
The 2H phase is based on a configuration of selenium atoms characterized by a trigonal prismatic symmetry and an equal relative orientation in the topmost and bottommost layers. The lattice parameters are a = b = 3.43 Å and c = 12.7 Å. Depending on the temperature, it shows different types of charge density wave: an incommensurate CDW (ICDW) between about 90 K and 122 K, and a commensurate CDW (CCDW) below about 90 K. The lattice distortion below this temperature gives rise to a CCDW with a 3 × 3 reconstruction with respect to the lattice parameter (a = b) of the undistorted 2H-TaSe2. In the 2H phase, successive trilayers are stacked with opposed relative orientation. Through molecular beam epitaxy it is possible to grow a single trilayer of 2H-TaSe2, also known as the 1H phase. The 2H phase can essentially be seen as a stacking of 1H layers with opposed relative orientation with respect to each other. In the 1H phase, the ICDW transition temperature is raised with respect to the bulk value.
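Because the commensurate superstructure is fully determined by the underlying hexagonal lattice, its basic geometry can be verified with a few lines of code. The following sketch is an illustration added here (the (3,1) choice of supercell vector is the standard construction for a √13 × √13 superlattice); it computes the star-of-David superlattice constant and its rotation with respect to the atomic lattice.

```python
import numpy as np

# Minimal sketch: geometry of the sqrt(13) x sqrt(13) star-of-David
# supercell of the commensurate CDW in 1T-TaSe2, built on the undistorted
# hexagonal lattice with a = 3.48 angstrom (value quoted above).

a = 3.48                                    # in-plane lattice constant [Å]
a1 = a * np.array([1.0, 0.0])               # hexagonal basis vectors
a2 = a * np.array([0.5, np.sqrt(3.0) / 2.0])

S = 3 * a1 + 1 * a2                         # supercell vector: |S|^2 = 13 a^2
a_cdw = np.linalg.norm(S)
theta = np.degrees(np.arctan2(S[1], S[0]))  # rotation relative to a1

print(f"a_CDW = {a_cdw:.3f} Å  (sqrt(13)*a = {np.sqrt(13.0) * a:.3f} Å)")
print(f"superlattice rotation: {theta:.1f} deg")   # ~13.9 deg
```

Each star of David groups 13 tantalum atoms (one central atom plus two surrounding rings of six), consistent with the √13 periodicity of the reconstruction.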
Properties
Electric and Magnetic
TaSe2 exhibits different properties according to the polytype (2H or 1T), even though the chemical composition remains unchanged.

1T phase
The resistivity at low temperature is similar to that of a metal, but it starts decreasing at higher temperatures, with a peak, reminiscent of semiconducting behaviour, at intermediate temperatures. The 1T phase has almost two orders of magnitude higher resistivity than the 2H phase. The magnetic susceptibility of the 1T phase has no peaks at low temperature and remains nearly constant until the ICDW transition temperature is reached, where it jumps to slightly higher values. The 1T phase is diamagnetic.

2H phase
Above the CDW transitions, the resistivity depends linearly on the temperature, whereas below this threshold it shows a non-linear behaviour. This abrupt variation of R(T) might be related to the formation of some kind of magnetic ordering in TaSe2: ordered spins scatter electrons less efficiently, which increases the electron mobility and yields a faster drop in resistivity than that corresponding to a linear trend. The magnetic susceptibility of the 2H polytype depends only slightly on the temperature and peaks near the CCDW formation temperature, with a linearly ascending or descending trend below and above the peak, respectively. This maximum in the 2H phase is related to the formation of the CCDW. The 2H phase is Pauli paramagnetic. The Hall coefficient RH is almost independent of the temperature at high temperature, below which it starts to drop and eventually reaches zero; in an intermediate range the coefficient RH is negative, passing through a minimum.

Electronic
1T phase
Bulk 1T-TaSe2 is metallic, while the single monolayer (a Se–Ta–Se trilayer in octahedral symmetry) is observed to be insulating with a band gap of 0.2 eV, in contrast with theoretical calculations, which expected it to be metallic like the bulk.

2H phase
Bulk 2H-TaSe2 is metallic, and so is the single monolayer (a Se–Ta–Se trilayer in trigonal prismatic symmetry), which is also known as the 1H phase.

Optical
The non-linear refractive index of tantalum diselenide can be investigated by preparing atomically thin flakes of TaSe2 with the liquid phase exfoliation method. Since this technique requires using alcohol, the refractive index of tantalum diselenide can be retrieved through Kerr's law, n = n0 + n2·I, where n0 = 1.37 represents the linear refractive index of ethanol, n2 is the non-linear refractive index of TaSe2 and I is the incident intensity of the laser beam. Using different light wavelengths, in particular λ = 532 nm and λ = 671 nm, it is possible to measure both n2 and χ(3), the third-order nonlinear susceptibility. Both of these quantities depend on I because the higher the intensity of the laser, the more the samples heat up, which results in a variation of the refractive index; wavelength-dependent values of n2 and of χ(3) (in e.s.u.) have been measured at both wavelengths.
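To illustrate the Kerr relation quoted above, the short sketch below evaluates the intensity-dependent refractive index n = n0 + n2·I; since no numerical value of n2 is reported here, the magnitude used in the code is a placeholder assumption, as are the intensities.

```python
# Minimal sketch: intensity-dependent refractive index from the optical
# Kerr relation n = n0 + n2 * I. The n2 magnitude and the intensities
# below are placeholder assumptions, not measured values for TaSe2.

n0 = 1.37                      # linear refractive index of ethanol (from text)
n2 = 1.0e-7                    # nonlinear refractive index [cm^2/W] (assumed)

for I in (1.0e3, 1.0e4, 1.0e5):    # incident intensities [W/cm^2] (assumed)
    print(f"I = {I:8.0e} W/cm^2  ->  n = {n0 + n2 * I:.4f}")
```

Measuring n at several intensities and fitting the slope is, in essence, how n2 (and from it χ(3)) is extracted in such experiments.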
Superconductivity
Bulk 2H-TaSe2 has been demonstrated to be superconductive below a critical temperature of about 0.14 K. However, the single monolayer (1H phase) is associated with a considerably increased critical temperature. Although the 1T phase typically does not show any superconductive behaviour, the compound TaSe2−xTex can be formed through doping with tellurium atoms. The superconductive character of this compound depends on the fraction of tellurium: the superconductive state arises at intermediate Te fractions, with the optimal configuration achieved at x = 0.6, in correspondence with the maximum critical temperature. In the optimal configuration, the CDW is totally suppressed by the presence of tellurium.

Lubricant
Unlike MoS2, which is widely employed as a lubricant in many mechanical applications, TaSe2 has not shown the same properties, with an average friction coefficient of 0.15. Under friction tests, like the Barker pendulum, it shows an initial friction coefficient of 0.2 to 0.3, which quickly increases to larger values as the number of oscillations of the pendulum increases (while for MoS2 it remains almost constant during all the oscillations).

Synthesis
There are different methods to synthesize tantalum diselenide: depending on the growth parameters, different polytypes can be stabilized.

Chemical Vapor Transport
In general, TMDs can be synthesized through a chemical vapor transport technique according to the following scheme:
M + MCl5 + 2 X → MX2 + Cl2
where M is the chosen transition metal (Ta, Mo, etc.) and X represents the chosen chalcogen element (Se, Te, S). The parameter n, which governs the crystal growth, can vary between 3 and 50 and can be selected so that the crystal growth is optimized. During the growth, which may last from 2 to 7 days, the temperature is initially raised to a hot-zone value Th; the crystals are then brought to a lower growth temperature Tc. After the growth is complete, the crystals are cooled down to room temperature. Depending on the value of Tc, either the 2H or the 1T phase can be stabilized: for tantalum and selenium, below a threshold value of Tc only the 2H phase is stabilized, whereas the 1T phase requires a larger Tc. This allows the desired phase of the chosen TMD to be grown selectively.

Chemical vapor deposition
Using powder of TaCl5 and selenium as precursors, and a gold substrate, the 2H phase can be stabilized. The gold substrate is heated to the growth temperature, while the TaCl5 and Se precursors are heated separately to their respective evaporation temperatures. Argon and hydrogen gases are used as carriers. Once the growth is complete, the sample is cooled down to room temperature.

Mechanical exfoliation
Since the single trilayers are held together only by weak van der Waals forces, atomically thin layers of tantalum diselenide can easily be separated by using adhesive (scotch/carbon) tape on bulk TaSe2 crystals. With this method it is possible to isolate a few layers (or even a single layer) of TaSe2. The isolated layers can then be deposited onto other substrates, such as SiO2, for further characterization.

Molecular Beam Epitaxy
Pure tantalum is directly sublimated onto a bilayer of graphene in a selenium atmosphere. Depending on the temperature of the substrate Ts (the graphene bilayer), either the 1T or the 2H phase can be stabilized. This growth method is suitable only for atomically thin films of one or a few layers, not for bulk crystals.

Liquid Phase Exfoliation
Bulk crystals of TaSe2 (or any other TMD) are put in a solution of pure ethanol. The mixture is then sonicated in an ultrasonic device with a power of at least 450 W for 15 hours. In this way it is possible to overcome the van der Waals forces that keep the single monolayers of TaSe2 together, resulting in the formation of atomically thin flakes of tantalum diselenide.
Research
Optoelectronics
Since 2H-TaSe2 has been found to feature very large optical absorption and emission of light at approximately 532 nm, it might be used for the development of new devices. In particular, the possibility of transferring energy between TaSe2 and other TMDs, especially MoS2, has been demonstrated. This process can be accomplished in a non-radiative resonant way by exploiting the large coupling between the TaSe2 emission and the excitonic absorption of TMDs. Moreover, it is a promising material for the injection of hot carriers into semiconducting materials and other non-metallic TMDs, due to the long lifetime of the generated photoelectrons.

All-optical switch and transfer of information
Exploiting the dependence of the non-linear effects of TaSe2 on the intensity of the incident laser beam, it is possible to build an all-optical switch by means of two lasers operating at different wavelengths and intensities. In particular, a high-intensity laser at λ1 = 671 nm is used to modulate a low-intensity signal at λ2 = 532 nm. Since a minimum intensity is required to trigger the non-linear effects, the low-intensity signal cannot trigger them alone. On the contrary, when the high-intensity beam (λ1) is coupled with the low-intensity signal (λ2), non-linear effects at both λ1 and λ2 arise. It is thus possible to trigger the non-linear effects on the low-intensity signal (λ2) by operating on the high-intensity one (λ1). Exploiting the coupling between λ1 and λ2 enables transferring information from the high-intensity beam to the low-intensity one. With this method, the delay time for transferring the information from λ1 to λ2 is around 0.6 seconds.

Spin-orbit torque devices
Usually, spin-orbit torque and spin-to-charge conversion devices are built by interfacing a ferromagnetic layer with a bulk heavy transition metal, such as platinum. However, these effects take place mainly at the interface rather than in the platinum bulk, which introduces heat dissipation due to ohmic losses. Theoretical and DFT simulations suggest that interfacing a 1T-TaSe2 monolayer with cobalt might lead to higher performance with respect to the usual platinum-based devices. Recent experiments showed that the spin-orbit scattering length of TaSe2 is around Lso = 17 nm, which is comparable with that of platinum, Lso = 12 nm. This suggests the possible implementation of tantalum diselenide in the development of new 2D spintronic devices based on the spin Hall effect.

Hydrogen evolution reaction (HER)
DFT and AIMD simulations suggest that the stacking of flakes of both TaSe2 and TaS2 in a disordered way could be used for the development of a new, efficient and cheaper cathode for the extraction of H2 from other chemical compounds.

See also
Molybdenum disulfide
Molybdenum diselenide
Molybdenum ditelluride
Rhenium diselenide
Rhenium disulfide
Tungsten diselenide

References

Transition metal dichalcogenides
Selenides
Monolayers
Tantalum compounds
Nanomaterials
Tantalum diselenide
[ "Physics", "Materials_science" ]
3,087
[ "Monolayers", "Nanotechnology", "Nanomaterials", "Atoms", "Matter" ]