id: int64 (39 to 79M)
url: string (lengths 32 to 168)
text: string (lengths 7 to 145k)
source: string (lengths 2 to 105)
categories: list (lengths 1 to 6)
token_count: int64 (3 to 32.2k)
subcategories: list (lengths 0 to 27)
66,160,845
https://en.wikipedia.org/wiki/ARMF-I
Advanced Reactivity Measurement Facility I (ARMF-I) was a research reactor located at the Idaho National Laboratory, a United States Department of Energy national laboratory facility in the high desert of southeastern Idaho between Idaho Falls, Idaho and Arco, Idaho. ARMF-I was nearly identical to ARMF-II. History The ARMF-I, a reactor located in a small pool in a building east of the MTR in the Test Reactor Area, was used to determine the nuclear characteristics of reactor fuels and other materials subject to testing in the MTR. Together with the MTR, the reactor helped improve the performance, reliability, and quality of reactor core components. Until the next-generation reactor, the ARMF-II, was built, this was considered the most sensitive device for reactivity determinations then in existence. The Advanced Reactivity Measurement Facilities, ARMF-I and ARMF-II, were nearly identical critical facilities and were used almost exclusively for measuring reactor physics parameters, such as reactor-spectrum cross sections and resonance integral cross sections. They were designed to (a) have large statistical weights for fuels and poisons, (b) be mechanically stable enough to produce reproducible reactivity measurements, and (c) have sensitive instrumentation capable of measuring very small reactivities. ARMF-I operated from October 10, 1960, until 1974. Design ARMF-I and ARMF-II were nearly identical pool reactors. They were housed in a 40-foot by 60-foot cinder block building located just east of the MTR canal tunnel; the building was designed from the initial concept to house two reactors. These facilities were swimming-pool-type reactors having light-water-moderated cores made up of plate-type fuel elements containing highly enriched uranium (93% U-235). The facilities were located at the National Reactor Testing Station near the Materials Testing Reactor (MTR). Being located near the MTR permitted measurements on short-lived fission products and transmuted isotopes. By use of the capsule 'transfer tube' connecting the MTR and ARMF canals, a capsule could be transferred from the MTR hydraulic rabbit to the ARMF and prepared for reactivity measurements in 15 minutes. Bibliography Stacy, Susan M. “Proving the Principle.” Idaho Operations Office of the Department of Energy. Idaho Falls, Idaho. DOE/ID-10799. 2000. Retrieved from: https://factsheets.inl.gov/SitePages/Publications.aspx This article incorporates text from the public domain (prepared by or on behalf of the US government) work “Proving the Principle” (2000) which may be found at: https://factsheets.inl.gov/SitePages/Publications.aspx. See also Argonne National Laboratory Idaho National Laboratory References Citations Nuclear research reactors
ARMF-I
[ "Physics" ]
592
[ "Nuclear and atomic physics stubs", "Nuclear physics" ]
66,168,924
https://en.wikipedia.org/wiki/Late-stage%20functionalization
Late-stage functionalization (LSF) is a desired chemoselective chemical or biochemical transformation on a complex molecule that provides at least one analog in sufficient quantity and purity for a given purpose, without needing the addition of a functional group that exclusively serves to enable said transformation. Molecular complexity is an intrinsic property of each molecule and frequently determines the synthetic effort to make it. LSF can significantly diminish this synthetic effort, and thus enables access to molecules that would otherwise be unavailable or too difficult to access. The requirements for LSF can be met by both C–H functionalization reactions and functional group manipulations. LSF reactions are particularly relevant and often used in the fields of drug discovery and materials chemistry, although no LSF has been implemented in a commercial process. Chemoselectivity All LSF reactions are chemoselective, but not every chemoselective reaction fulfills the requirements of the definition for LSF. High chemoselectivity is required for a useful LSF with a predictable reaction outcome because complex molecules typically feature several distinct functional groups that need to be tolerated. In this sense, chemoselectivity is sometimes referred to as functional group tolerance. Furthermore, high chemoselectivity avoids often-undesired over-functionalization of the valuable substrate, which is used as a limiting reagent in LSF reactions. Every C–H bond functionalization on a complex molecule classifies as LSF, except when a directing or activating group must be installed in a previous step of the synthesis to accomplish the transformation. For functional group manipulations, the distinction between LSF and functional-group-tolerant reactions is more subtle. For example, peptide bioconjugation reactions make use of the native functionality in amino acid side chains, and thus classify as LSF. In contrast, bioorthogonal 1,3-dipolar cycloadditions (see also copper-free click chemistry and Huisgen cycloaddition) generally require prior introduction of azide or cycloalkyne functionalities to biomolecules. Hence, such transformations do not classify as LSF despite their excellent functional group tolerance. Site-selectivity Site-selectivity, also called positional selectivity or regioselectivity, is generally desired but is not a requirement for LSF reactions, because site-unselective LSF reactions can also be useful for special purposes. For example, site-unselective late-stage C–H functionalization reactions can provide quick access to several constitutional isomers of complex molecules relevant for biological testing in drug discovery. Site-selective reactions to access each possible constitutional isomer independently are scarce but highly desirable because cumbersome purification procedures are avoided, and other isomers are not produced as waste. Some LSF reactions provide one constitutional isomer in high selectivity, based on innate substrate selectivity for a given reaction or based on catalyst control. The discovery of site-selective LSF reactions constitutes an important research objective in the field of synthetic methodology development. References Molecules Transformation (function)
Late-stage functionalization
[ "Physics", "Chemistry", "Mathematics" ]
626
[ "Molecular physics", "Transformation (function)", "Molecules", "Physical objects", "nan", "Geometry", "Atoms", "Matter" ]
69,057,786
https://en.wikipedia.org/wiki/Electron%20quadruplets
The condensate of electron quadruplets is a proposed state of matter in which Cooper pairs are formed but do not exhibit long-range order, while electron quadruplets do. Such states emerge in systems with multiple broken symmetries due to the partial melting of the underlying low-temperature order, which destroys the condensates of Cooper pairs but preserves the condensates formed by pairs of preformed fermion pairs. One example of the proposed electron quadruplet condensates is charge-4e superconductivity, first proposed by Berg, Fradkin and Kivelson. Another example is the "quartic metal" phase, which is related to but distinct from superconductors explained by the standard BCS theory; rather than expelling magnetic field lines as in the Meissner effect, it generates them, exhibiting a spontaneous Nernst effect that indicates the breaking of time-reversal symmetry. Related states can form in pair-density-wave systems. In systems with a greater number of broken symmetries, theoretical studies have demonstrated the existence of charge-6e and more complex orders. After the theoretical possibility was raised, observations consistent with electron quadrupling were published using hole-doped Ba1-xKxFe2As2 in 2021, with evidence of vestigial quadrupling reported in CsV3Sb5 soon after, in early 2022. See also List of states of matter References Phases of matter
Electron quadruplets
[ "Physics", "Chemistry" ]
298
[ "Phases of matter", "Matter" ]
69,062,088
https://en.wikipedia.org/wiki/Configuration%20linear%20program
The configuration linear program (configuration-LP) is a linear programming technique used for solving combinatorial optimization problems. It was introduced in the context of the cutting stock problem. Later, it was applied to the bin packing and job scheduling problems. In the configuration-LP, there is a variable for each possible configuration - each possible multiset of items that can fit in a single bin (these configurations are also known as patterns). Usually, the number of configurations is exponential in the problem size, but in some cases it is possible to attain approximate solutions using only a polynomial number of configurations. In bin packing The integral LP In the bin packing problem, there are n items with different sizes. The goal is to pack the items into a minimum number of bins, where each bin can contain items of total size at most B. A feasible configuration is a set of sizes with a sum of at most B. Example: suppose the item sizes are 3,3,3,3,3,4,4,4,4,4, and B=12. Then the possible configurations are: 3333, 333, 33, 334, 3, 34, 344, 4, 44, 444. If we had only three items of size 3, then we could not use the 3333 configuration. Denote by S the set of different sizes (and their number). Denote by C the set of different configurations (and their number). For each size s in S and configuration c in C, denote: n_s - the number of items of size s; a_{s,c} - the number of occurrences of size s in configuration c; x_c - a variable denoting the number of bins with configuration c. Then, the configuration LP of bin-packing is: minimize Σ_{c in C} x_c subject to Σ_{c in C} a_{s,c}·x_c ≥ n_s for all s in S (all n_s items of size s are packed), and x_c in {0, 1, ..., n} for all c in C (there are at most n bins overall, so at most n of each individual configuration). The configuration LP is an integer linear program, so in general it is NP-hard. Moreover, even the problem itself is generally very large: it has C variables and S constraints. If the smallest item size is eB (for some fraction e in (0,1)), then there can be up to 1/e items in each bin, so the number of configurations C ~ S^{1/e}, which can be very large if e is small (if e is considered a constant, then the integer LP can be solved by exhaustive search: there are at most S^{1/e} configurations, and for each configuration there are at most n possible values, so there are about n^{S^{1/e}} combinations to check; for each combination, we have to check S constraints, so the run-time is about S·n^{S^{1/e}}, which is polynomial in n when S, e are constant). However, this ILP serves as a basis for several approximation algorithms. The main idea of these algorithms is to reduce the original instance into a new instance in which S is small and e is large, so C is relatively small. Then, the ILP can be solved either by complete search (if S, C are sufficiently small), or by relaxing it into a fractional LP. The fractional LP The fractional configuration LP of bin-packing is the linear programming relaxation of the above ILP. It replaces the last constraint with the constraint x_c ≥ 0 for all c in C. In other words, each configuration can be used a fractional number of times. The relaxation was first presented by Gilmore and Gomory, and it is often called the Gilmore-Gomory linear program. Example: suppose there are 31 items of size 3 and 7 items of size 4, and the bin-size is 10. The configurations are: 4, 44, 34, 334, 3, 33, 333. The constraints are [0,0,1,2,1,2,3]*x=31 and [1,2,1,1,0,0,0]*x=7. An optimal solution to the fractional LP is [0,0,0,7,0,0,17/3]. That is: there are 7 bins of configuration 334 and 17/3 bins of configuration 333.
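As an illustration of the worked example above, the fractional configuration LP can be solved numerically; the short sketch below is not part of the article and assumes NumPy and SciPy are available, with the seven configurations encoded in the order listed (4, 44, 34, 334, 3, 33, 333).

```python
# Minimal sketch: solve the Gilmore-Gomory fractional LP for the example with
# 31 items of size 3, 7 items of size 4, and bin size 10.
import numpy as np
from scipy.optimize import linprog

A_eq = np.array([
    [0, 0, 1, 2, 1, 2, 3],   # copies of size 3 in each configuration
    [1, 2, 1, 1, 0, 0, 0],   # copies of size 4 in each configuration
])
b_eq = np.array([31, 7])     # all items must be packed
c = np.ones(7)               # minimize the total (fractional) number of bins

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(np.round(res.x, 4))    # expected: 7 bins of "334" and 17/3 bins of "333"
print(res.fun)               # expected optimum: 38/3 fractional bins
```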
Note that only two different configurations are needed. In short, the fractional LP can be written as follows: minimize 1·x subject to A·x ≥ n and x ≥ 0, where 1 is the vector (1,...,1) of size C, A is an S-by-C matrix in which each column represents a single configuration, and n is the vector (n_1,...,n_S). Solving the fractional LP A linear program with no integrality constraints can be solved in time polynomial in the number of variables and constraints. The problem is that the number of variables in the fractional configuration LP is equal to the number of possible configurations, which might be huge. Karmarkar and Karp present an algorithm that overcomes this problem. First, they construct the dual linear program of the fractional LP: maximize n·y subject to A^T·y ≤ 1 and y ≥ 0. It has S variables y_1,...,y_S, and C constraints: for each configuration c, there is a constraint A_c·y ≤ 1, where A_c is the column of A representing the configuration c. It has the following economic interpretation. For each size s, we should determine a nonnegative price y_s. Our profit is the total price of all items. We want to maximize the profit n·y subject to the constraints that the total price of items in each configuration is at most 1. Second, they apply a variant of the ellipsoid method, which does not need to list all the constraints - it just needs a separation oracle. A separation oracle is an algorithm that, given a vector y, either asserts that it is feasible, or finds a constraint that it violates. The separation oracle for the dual LP can be implemented by solving the knapsack problem with sizes s and values y: if the optimal solution of the knapsack problem has a total value of at most 1, then y is feasible; if it is larger than 1, then y is not feasible, and the optimal solution of the knapsack problem identifies a configuration for which the constraint is violated. Third, they show that, with an approximate solution to the knapsack problem, one can get an approximate solution to the dual LP, and from this, an approximate solution to the primal LP; see Karmarkar-Karp bin packing algorithms. All in all, for any tolerance factor h, the algorithm finds a basic feasible solution of cost at most LOPT(I) + h, and runs in time polynomial in n, S, and 1/h, where S is the number of different sizes, n is the number of different items, and the size of the smallest item is eB. In particular, if e ≥ 1/n and h=1, the algorithm finds a solution with at most LOPT+1 bins in time polynomial in n. A randomized variant of this algorithm achieves a better expected running time. Rounding the fractional LP Karmarkar and Karp further developed a way to round the fractional LP into an approximate solution to the integral LP; see Karmarkar-Karp bin packing algorithms. Their proof shows that the additive integrality gap of this LP is in O(log^2(n)). Later, Hoberg and Rothvoss improved their result and proved that the integrality gap is in O(log(n)). The best known lower bound on the integrality gap is a constant Ω(1). Finding the exact integrality gap is an open problem. In bin covering In the bin covering problem, there are n items with different sizes. The goal is to pack the items into a maximum number of bins, where each bin should contain items of total size at least B. A natural configuration LP for this problem could be: maximize 1·x subject to A·x ≤ n and x ≥ 0, where A represents all configurations of items with sum at least B (one can take only the inclusion-minimal configurations). The problem with this LP is that, in the bin-covering problem, handling small items is problematic, since small items may be essential for the optimal solution.
With small items allowed, the number of configurations may be too large even for the technique of Karmarkar and Karp. Csirik, Johnson and Kenyon present an alternative LP. First, they define a set of items that are called small. Let T be the total size of all small items. Then, they construct a matrix A representing all configurations with sum < 2. Then, they consider the above LP with one additional constraint. The additional constraint guarantees that the "vacant space" in the bins can be filled by the small items. The dual of this LP is more complex and cannot be solved by a simple knapsack-problem separation oracle. Csirik, Johnson and Kenyon present a different method to solve it approximately in time exponential in 1/epsilon. Jansen and Solis-Oba present an improved method to solve it approximately in time exponential in 1/epsilon. In machine scheduling In the problem of unrelated-machines scheduling, there are m different machines that should process n different jobs. When machine i processes job j, it takes time p_{i,j}. The goal is to partition the jobs among the machines such that the maximum completion time of a machine is as small as possible. The decision version of this problem is: given time T, is there a partition in which the completion time of all machines is at most T? For each machine i, there are finitely many subsets of jobs that can be processed by machine i in time at most T. Each such subset is called a configuration for machine i. Denote by Ci(T) the set of all configurations for machine i, given time T. For each machine i and configuration c in Ci(T), define a variable x_{i,c} which equals 1 iff the actual configuration used on machine i is c, and 0 otherwise. Then, the LP constraints are: Σ_{c in Ci(T)} x_{i,c} = 1 for every machine i in 1,...,m (each machine uses exactly one configuration); Σ_i Σ_{c in Ci(T): j in c} x_{i,c} = 1 for every job j in 1,...,n (each job is processed on exactly one machine); and x_{i,c} in {0,1} for every i and c. Properties The integrality gap of the configuration-LP for unrelated-machines scheduling is 2. See also High-multiplicity bin packing References Bin packing Number partitioning Job scheduling Linear programming
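Returning to the bin-packing dual LP described above, the separation step reduces to a knapsack computation: given candidate prices y, find the configuration of maximum total price; if that maximum exceeds 1, the corresponding dual constraint is violated. The sketch below is illustrative only (not taken from the article) and uses a simple unbounded-knapsack dynamic program over integer sizes.

```python
# Minimal sketch of a separation oracle for the dual of the fractional
# configuration LP: return the most valuable configuration (multiset of sizes
# with total size <= B) under prices y; if its value > 1, y is infeasible.
def most_valuable_configuration(sizes, prices, B):
    best = [0.0] * (B + 1)       # best[v]: max total price of a configuration of total size at most v
    choice = [None] * (B + 1)    # last size added to achieve best[v]
    for v in range(1, B + 1):
        for s, p in zip(sizes, prices):
            if s <= v and best[v - s] + p > best[v]:
                best[v] = best[v - s] + p
                choice[v] = s
    v = max(range(B + 1), key=lambda u: best[u])   # capacity achieving the maximum price
    config, value = [], best[v]
    while choice[v] is not None:                    # rebuild the configuration
        config.append(choice[v])
        v -= choice[v]
    return value, config

# With prices y = (1/3, 1/3) for sizes 3 and 4 and B = 10, the best
# configuration (e.g. 333) has value exactly 1, so y is feasible for the dual.
value, config = most_valuable_configuration([3, 4], [1/3, 1/3], 10)
```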
Configuration linear program
[ "Mathematics" ]
2,084
[ "Bin packing", "Mathematical problems", "Packing problems" ]
69,066,355
https://en.wikipedia.org/wiki/Project%20Vartak
Vartak, also known as Project VARTAK, is a project of the Border Roads Organisation under the Ministry of Defence of India. It was formed on 7 May 1960, as a provision of the 2nd Border Roads Development Board Meeting with the then Prime Minister of India, Jawaharlal Nehru, as Project Tusker; it was later renamed Project Vartak in 1963. The initial task of this project was to construct and maintain roads between Bhalukpong and Tenga. It was the first established project of the Border Roads Organisation. Its task was later expanded to construct and maintain roads in Arunachal Pradesh and adjoining districts of Assam. Major General O. M. Mani was the first Chief Engineer of the project. Vartak successfully completed its initial task in October 1962, connecting Bhalukpong to Bomdila via Tenga and thus bringing motorable connectivity to these far-flung regions for the first time. Works and involvement Over the years, Vartak has become a major contributor to infrastructure development in western Arunachal Pradesh. It has successfully completed massive projects, improving connectivity all around Arunachal Pradesh. It has constructed major bridges and roads connecting the far-flung border areas of Arunachal Pradesh. Some major bridges include the Yasong and Sarti bridges, Karteso Kong and Kangdang Sila bridges, Tanchen Panga bridge, Ungu bridge, Siang bridge, Sigit bridge, and Sisseri Bridge. Major roads like the Balipara-Charduar-Tawang Axis and the Guwahati-Tawang Axis are important networks for improving connectivity in border areas. Vartak is also constructing numerous tunnels to facilitate all-weather travel in regions where fog is common and to shorten travelling time between isolated places in western Arunachal Pradesh. Major tunnels include the Nechiphu Tunnel and the Sela Tunnel. Many roads connecting Bhalukpong to adjoining districts of Assam have also been built. Vartak had earlier constructed infrastructure for universities. The major works include the construction of residential accommodation and the development of internal roads at Tezpur University; the construction of the Degree & Diploma academic blocks and residential accommodation for the North Eastern Regional Institute of Science and Technology (NERIST), Itanagar; and the construction of a residential school with boys' and girls' hostels for Jawahar Navodaya Vidyalaya at Gorponding in Tawang, as part of the Rural Education Development Programme. Deposit works from various agencies like the North East Council (NEC), North Eastern Electrical Power Corporation (NEEPCO) and Oil India were also undertaken by this project. The headquarters of Vartak in Tezpur also hosts a primary school offering primary education to the children of the personnel as well as civilians. Vartak specializes in constructing motorable roads and bridges in mountainous regions, bringing connectivity to many isolated towns in western Arunachal Pradesh. Command structure Vartak initially started with four Task Forces. These Task Forces were spread across various regions, each specializing in different works. Between 1967 and 1971, there was a major reorganization of Task Forces. 7 Border Roads Task Force was disbanded in April 1967. 4 Border Roads Task Force was re-organised as 39 Maintenance Task Force, moved in January 1971 to Dimwe in Lohit District, and was subsequently renamed 48 Border Roads Task Force. 3 Border Roads Task Force was renamed 44 Maintenance Task Force in September 1970 and re-organised as 44 Border Roads Task Force in May 1972.
1310 Fractional Task Force was raised in April 1984, re-organised in December 1986 as 756 Border Roads Task Force, and was located at Ziro. 756 Task Force Headquarters subsequently moved to Naharlagun in February 1998. To cope with the increased workload, two new Task Forces were introduced: 763 Border Roads Task Force and 42 Border Roads Task Force. 48 Border Roads Task Force was later merged with Project Udayak. Subsequently, 756 Border Roads Task Force and 44 Border Roads Task Force were merged with Project Arunak to ease the workload. Bridge construction In 1986, the Border Roads Organisation decided to take up the construction of permanent bridges departmentally. This resulted in the creation of a Bridge Construction Company, 1441 BCC, which was allotted to Vartak. The first bridge constructed by 1441 BCC was a pre-stressed concrete bridge, Kamla II, on the Balipara – Charduar – Tawang road. Subsequently, six more permanent bridges were completed by 1993. 1441 BCC continues to construct high-quality permanent bridges, even at altitudes of over 10,000 feet and across swift mountain rivers in Arunachal Pradesh and Assam. Major works References Infrastructure Border Roads Organisation
Project Vartak
[ "Engineering" ]
943
[ "Construction", "Infrastructure" ]
64,667,394
https://en.wikipedia.org/wiki/Raffaele%20Mezzenga
Raffaele Mezzenga is a soft condensed matter scientist, currently heading the Laboratory of Food and Soft Materials at the Swiss Federal Institute of Technology in Zurich. He is among the 0.1% most cited scientists according to the Clarivate 2023 Highly Cited Researchers list in the cross-field discipline. Education Prof. Mezzenga received his M.S. in Materials Science (1997) from Perugia University in Italy, while actively working for the Alpha Magnetic Spectrometer experiment at the European Organization for Nuclear Research (CERN) and NASA (Space Shuttle Discovery mission STS-91), followed by a PhD in the field of Polymer Physics at the Swiss Federal Institute of Technology, Lausanne, Switzerland (2001). Research and career Mezzenga did postdoctoral research on semiconductive polymer colloids at the University of California Santa Barbara (UCSB) and then moved to the Nestlé Research Center in Lausanne as a research scientist, working on the self-assembly of surfactants, natural amphiphiles and lyotropic liquid crystals. In 2005 he was hired as Associate Professor in the Physics Department of the University of Fribourg, and he then joined ETH Zurich in 2009 as Full Professor. His research mainly focuses on the fundamental understanding of self-assembly processes in polymers, lyotropic liquid crystals, biological and food colloidal systems. His work has led to over 400 scientific publications and about 20 patents. He has made seminal contributions to several fields of soft condensed matter, such as protein aggregation, biopolymer and surfactant self-organisation. He has pioneered the use of protein-based materials in the establishment of new technologies for environmental remediation, health and advanced materials design. Awards and honours Prof. Mezzenga was the recipient of the 2011 John H. Dillon Medal by the American Physical Society. He was elected Fellow of the American Physical Society in 2017. Other awards include the 2011 Young Scientist Research Award of the American Oil Chemists' Society, the 2013 Biomacromolecules/Macromolecules Young Investigator Award of the American Chemical Society and the Spark Award for the most promising ETH Zurich invention in 2019. Mezzenga served as an Executive, Associate and Guest Editor for various journals including Food Biophysics, Food Hydrocolloids, Polymer International and Trends in Food Science, and has been a board member of the Swiss Chemical Society for over 15 years. External links References Year of birth missing (living people) Living people Condensed matter physicists 21st-century Swiss scientists Fellows of the American Physical Society People associated with CERN University of Perugia alumni École Polytechnique Fédérale de Lausanne alumni Academic staff of the University of Fribourg Academic staff of ETH Zurich
Raffaele Mezzenga
[ "Physics", "Materials_science" ]
561
[ "Condensed matter physicists", "Condensed matter physics" ]
64,668,653
https://en.wikipedia.org/wiki/OpenCell
OpenCell is a laboratory in London. Laboratories OpenCell is primarily used for work related to biochemical and biomolecular activities such as DNA sequencing. It opened to the public in June 2018. The space uses shipping containers to house biotechnology laboratories. The laboratories contain biotechnology equipment including real-time PCR instruments, plate readers, Opentrons liquid handling robots, flow hoods, non-ducted fume cupboards, -80 °C, -20 °C and 4 °C storage, incubators (static/shaking), centrifuges (1 ml-50 ml, refrigerated), and bench space. COVID-19 testing In August 2020, a shipping container laboratory for COVID-19 diagnostics was delivered to the Bailiwick of Jersey. The laboratory began processing tests on Tuesday, September 15, with 170 samples, collected from arriving airport passengers, processed within an average of 12 hours. Deputy Medical Officer of Health Dr Ivan Muscat said: “The opening of the covid-19 laboratory is a significant milestone in managing Jersey’s testing requirements.” References Companies based in the London Borough of Hammersmith and Fulham Public laboratories Shipping containers 2018 establishments in England COVID-19 pandemic in England
OpenCell
[ "Biology" ]
250
[ "Biotechnology stubs" ]
64,668,694
https://en.wikipedia.org/wiki/Opentrons
Opentrons Labworks, Inc. (or Opentrons) is a biotechnology company that manufactures liquid handling robots that use open-source software; the company at one point used open-source hardware but no longer does. Their robots can be used by scientists to manipulate small volumes of liquids for the purpose of undertaking biochemical or chemical reactions. Currently, they offer the OT-2 and Flex robots. These robots are used primarily by researchers and scientists interested in DIY biology, but they are increasingly being used by other biologists. Products Current: OT-2 – The OT-2 was released in 2018 and has been used as one of the tools that researchers are leveraging in the fight against COVID-19. The OT-2 and later products, including its electronic micropipettes and hardware modules, are closed source (proprietary) hardware. Only coarse CAD files for the enclosure have been released, with no details on the internals, such that it no longer complies with current open hardware standards. The software remains open source. Flex – Successor to the OT-2, the Flex was released in 2023, "measures two feet by two feet by two feet", and is purchased with a one-time cost rather than a robot as a service (RaaS) subscription. Its open-source and accessible API allows it to interact with potential AI tools. Flex Prep – Similar to the Flex, the Flex Prep was released in 2024 and provides no-code software for setting up pipetting tasks and executing that workflow through the Flex Prep touchscreen. Discontinued: OT-1 – The OpenTrons OT-1 was the result of a crowdfunding campaign on the Kickstarter platform and was released in 2015 for $2,000. This robot employed adapters to actuate handheld micropipettes. The release of the OT-1 marked the first commercial open source liquid handling robot in the life science industry. It was also the last in the series to adhere to open hardware standards; however, editable CAD files were not released. It is no longer commercially available, though at least one replication was attempted. History The company originated from Genspace, a community biology laboratory in Brooklyn, New York. Will Canine, a biohacker and former Occupy Wall Street organizer, partnered with Nicholas Wagner and Chiu Chau as his eventual co-founders, whom he found through a DIY-bio listserv. In 2014, the startup officially launched with financial backing from HAXLR8TR, a hardware accelerator in Shenzhen, China. In late 2014, they launched a Kickstarter campaign. After the campaign was successfully funded, they showed their machine inserting DNA into E. coli. Jonathan Brennan-Badal, who was VP of strategy at ComiXology and a board member of Genspace, joined Opentrons in 2014 and is the current CEO. In 2016, Opentrons was part of Y Combinator's Winter cohort of startups. Impact Opentrons robots have had a variety of uses in the scientific and DIY community. Scientists at UCSD modified an existing OT-1 robot to automate adding reagents and imaging their cell signaling experiments. Scientists at Carnegie Mellon University used the OT-2, the Opentrons Python API, and OpenAI's GPT-4 to autonomously design, plan, and perform experiments. During the COVID-19 pandemic, Opentrons helped set up the Pandemic Response Lab (PRL), a sequencing facility located in Queens, New York. Opentrons' robots at the PRL helped speed up turnaround time for COVID-19 testing, cutting it from 7 to 14 days down to 12 hours, and reducing costs from $2,000 to under $28.
Institutions that made use of Opentrons' robots for COVID-19 testing include: Mayo Clinic, Harvard, Stanford, Caltech, MIT, and BioNTech. Subsidiaries As a company, Opentrons has a number of subsidiaries. Opentrons Robotics – business unit for user-friendly lab automation Pandemic Response Lab (PRL) – in partnership with NYU Langone Health, provided diagnostic lab services to health systems across the US; it was shut down as of December 31, 2022 Neochromosome (Neo) – acquired in March 2021, Neo creates genome-scale cell engineering solutions for therapeutics Zenith AI – acquired in June 2021, Zenith AI brings no-code AI and modern machine learning to the platform See also Laboratory automation Liquid handling robot List of biotech and pharmaceutical companies in the New York metropolitan area References External links Opentrons' Y Combinator profile Opentrons' GitHub organization page Laboratory robots Open-source robots Biotechnology companies Companies based in Queens, New York Y Combinator companies
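The article above mentions the Opentrons Python API; as a purely illustrative aside, a simple OT-2 protocol written against the Python Protocol API (version 2) can look roughly like the sketch below. The specific labware and pipette names are assumptions chosen for the example, not details taken from the article.

```python
# Hypothetical minimal Opentrons Python Protocol API (v2) sketch: load a plate,
# a tip rack and a pipette, then transfer 100 uL between two wells.
from opentrons import protocol_api

metadata = {"protocolName": "Example transfer", "apiLevel": "2.13"}

def run(protocol: protocol_api.ProtocolContext):
    plate = protocol.load_labware("corning_96_wellplate_360ul_flat", location=1)
    tips = protocol.load_labware("opentrons_96_tiprack_300ul", location=2)
    p300 = protocol.load_instrument("p300_single_gen2", mount="right", tip_racks=[tips])
    # pick up a tip, move 100 uL from well A1 to well B1, then drop the tip
    p300.transfer(100, plate["A1"], plate["B1"])
```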
Opentrons
[ "Engineering", "Biology" ]
976
[ "Biotechnology organizations", "Biotechnology companies" ]
64,668,822
https://en.wikipedia.org/wiki/Rainbow-independent%20set
In graph theory, a rainbow-independent set (ISR) is an independent set in a graph, in which each vertex has a different color. Formally, let G = (V, E) be a graph, and suppose the vertex set V is partitioned into k subsets V_1, ..., V_k, called "colors". A set U of vertices is called a rainbow-independent set if it satisfies both the following conditions: It is an independent set – every two vertices in U are not adjacent (there is no edge between them); It is a rainbow set – U contains at most a single vertex from each color V_i. Other terms used in the literature are independent set of representatives, independent transversal, and independent system of representatives. As an example application, consider a faculty with k departments, where some faculty members dislike each other. The dean wants to construct a committee with k members, one member per department, but without any pair of members who dislike each other. This problem can be presented as finding an ISR in a graph in which the nodes are the faculty members, the edges describe the "dislike" relations, and the subsets V_1, ..., V_k are the departments. Variants It is assumed for convenience that the sets V_1, ..., V_k are pairwise-disjoint. In general the sets may intersect, but this case can be easily reduced to the case of disjoint sets: for every vertex x, form a copy of x for each i such that V_i contains x. In the resulting graph, connect all copies of x to each other. In the new graph, the V_i are disjoint, and each ISR corresponds to an ISR in the original graph. ISR generalizes the concept of a system of distinct representatives (SDR, also known as transversal). Every transversal is an ISR where, in the underlying graph, all and only copies of the same vertex from different sets are connected. Existence of rainbow-independent sets There are various sufficient conditions for the existence of an ISR. Condition based on vertex degree Intuitively, when the departments are larger, and there is less conflict between faculty members, an ISR should be more likely to exist. The "less conflict" condition is represented by the vertex degree of the graph. This is formalized by the following theorem: If the degree of every vertex in G is at most d, and the size of each color-set is at least 2d, then G has an ISR. The 2d is best possible: there are graphs with vertex degree d and colors of size 2d-1 without an ISR. But there is a more precise version of this bound. Condition based on dominating sets Below, given a subset J of colors (a subset of {1, ..., k}), we denote by V_J the union of all subsets V_i for i in J (all vertices whose color is one of the colors in J), and by G_J the subgraph of G induced by V_J. The following theorem describes the structure of graphs that have no ISR but are edge-minimal, in the sense that whenever any edge is removed from them, the remaining graph has an ISR. If G has no ISR, but for every edge e in G, G-e has an ISR, then for every edge e in G, there exists a subset J of the colors and a set Z of edges of G_J, such that: The two endpoints of e are both in V_J; The edge e is in Z; The set of vertices adjacent to Z dominates G_J; ; Z is a matching – no two edges of it are adjacent to the same vertex. Hall-type condition Below, given a subset J of colors (a subset of {1, ..., k}), an independent set I_J of G_J is called special for J if for every independent subset K of vertices of G_J of size at most |J| - 1, there exists some vertex v in I_J such that K together with v is also independent. Figuratively, I_J is a team of "neutral members" for the set J of departments, that can augment any sufficiently small set of non-conflicting members, to create a larger such set.
The following theorem is analogous to Hall's marriage theorem:If, for every subset S of colors, the graph contains an independent set that is special for , then has an ISR.Proof idea. The theorem is proved using Sperner's lemma. The standard simplex with endpoints is assigned a triangulation with some special properties. Each endpoint of the simplex is associated with the color-set , each face of the simplex is associated with a set of colors. Each point of the triangulation is labeled with a vertex of such that: (a) For each point on a face , is an element of – the special independent set of . (b) If points and are adjacent in the 1-skeleton of the triangulation, then and are not adjacent in . By Sperner's lemma, there exists a sub-simplex in which, for each point , belongs to a different color-set; the set of these is an ISR. The above theorem implies Hall's marriage condition. To see this, it is useful to state the theorem for the special case in which is the line graph of some other graph ; this means that every vertex of is an edge of , and every independent set of is a matching in . The vertex-coloring of corresponds to an edge-coloring of , and a rainbow-independent-set in corresponds to a rainbow-matching in . A matching in is special for , if for every matching in of size at most , there is an edge in such that is still a matching in .Let be a graph with an edge-coloring. If, for every subset of colors, the graph contains a matching that is special for , then has a rainbow-matching. Let be a bipartite graph satisfying Hall's condition. For each vertex of , assign a unique color to all edges of adjacent to . For every subset of colors, Hall's condition implies that has at least neighbors in , and therefore there are at least edges of adjacent to distinct vertices of . Let be a set of such edges. For any matching of size at most in , some element of has a different endpoint in than all elements of , and thus is also a matching, so is special for . The above theorem implies that has a rainbow matching . By definition of the colors, is a perfect matching in . Another corollary of the above theorem is the following condition, which involves both vertex degree and cycle length:If the degree of every vertex in is at most 2, and the length of each cycle of is divisible by 3, and the size of each color-set is at least 3, then has an ISR.Proof. For every subset of colors, the graph contains at least vertices, and it is a union of cycles of length divisible by 3 and paths. Let be an independent set in containing every third vertex in each cycle and each path. So contains at least vertices. Let be an independent set in of size at most . Since the distance between each two vertices of is at least 3, every vertex of is adjacent to at most one vertex of . Therefore, there is at least one vertex of which is not adjacent to any vertex of . Therefore is special for . By the previous theorem, has an ISR. Condition based on homological connectivity One family of conditions is based on the homological connectivity of the independence complex of subgraphs. To state the conditions, the following notation is used: denotes the independence complex of a graph (that is, the abstract simplicial complex whose faces are the independent sets in ). denotes the homological connectivity of a simplicial complex (i.e., the largest integer such that the first homology groups of are trivial), plus 2. is the set of indices of colors, For any subset of , is the union of colors for in . 
is the subgraph of induced by the vertices in . The following condition is implicit in and proved explicitly in. If, for all subsets of : then the partition admits an ISR.As an example, suppose is a bipartite graph, and its parts are exactly and . In this case so there are four options for : then and and the connectivity is infinite, so the condition holds trivially. then is a graph with vertices and no edges. Here all vertex sets are independent, so is the power set of , i.e., it has a single -simplex (and all its subsets). It is known that a single simplex is -connected for all integers , since all its reduced homology groups are trivial (see simplicial homology). Hence the condition holds. this case is analogous to the previous one. then , and contains two simplices and (and all their subsets). The condition is equivalent to the condition that the homological connectivity of is at least 0, which is equivalent to the condition that is the trivial group. This holds if-and-only-if the complex contains a connection between its two simplices and . Such a connection is equivalent to an independent set in which one vertex is from and one is from . Thus, in this case, the condition of the theorem is not only sufficient but also necessary. Other conditions Every properly coloured triangle-free graph of chromatic number contains a rainbow-independent set of size at least . Several authors have studied conditions for existence of large rainbow-independent sets in various classes of graphs. Computation The ISR decision problem is the problem of deciding whether a given graph and a given partition of into colors admits a rainbow-independent set. This problem is NP-complete. The proof is by reduction from the 3-dimensional matching problem (3DM). The input to 3DM is a tripartite hypergraph , where , , are vertex-sets of size , and is a set of triplets, each of which contains a single vertex of each of , , . An input to 3DM can be converted into an input to ISR as follows: For each edge in , there is a vertex in ; For each vertex in , let For each , , , , , there is an edge in ; For each , , , , , there is an edge in ; In the resulting graph , an ISR corresponds to a set of triplets such that: Each triplet has a different value (since each triplet belongs to a different color-set ); Each triplet has a different value and a different value (since the vertices are independent). Therefore, the resulting graph admits an ISR if and only if the original hypergraph admits a 3DM. An alternative proof is by reduction from SAT. Related concepts If is the line graph of some other graph , then the independent sets in are the matchings in . Hence, a rainbow-independent set in is a rainbow matching in . See also matching in hypergraphs. Another related concept is a rainbow cycle, which is a cycle in which each vertex has a different color. When an ISR exists, a natural question is whether there exist other ISRs, such that the entire set of vertices is partitioned into disjoint ISRs (assuming the number of vertices in each color is the same). Such a partition is called strong coloring. Using the faculty metaphor: A system of distinct representatives is a committee of distinct members, with or without conflicts. An independent set is a committee with no conflict. An independent transversal is a committee with no conflict, with exactly one member from each department. A graph coloring is a partitioning of the faculty members into committees with no conflict. 
A strong coloring is a partitioning of the faculty members into committees with no conflict and with exactly one member from each department. Thus this problem is sometimes called the happy dean problem. A rainbow clique or a colorful clique is a clique in which every vertex has a different color. Every clique in a graph corresponds to an independent set in its complement graph. Therefore, every rainbow clique in a graph corresponds to a rainbow-independent set in its complement graph. See also Graph coloring List coloring Rainbow coloring Rainbow-colorable hypergraph Independence complex References Graph theory Rainbow problems NP-complete problems
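Because the decision problem is NP-complete, small instances are often easiest to explore by brute force. The following sketch (illustrative only, not from the article) searches for an ISR that takes one vertex from every color class, i.e. an independent transversal, matching the faculty-committee example above.

```python
# Brute-force search for a rainbow-independent set containing one vertex per
# color class. Exponential in the number of color classes; for tiny examples only.
from itertools import product

def find_isr(edges, color_classes):
    adj = {frozenset(e) for e in edges}
    def independent(S):
        return all(frozenset((u, v)) not in adj
                   for i, u in enumerate(S) for v in S[i + 1:])
    for combo in product(*color_classes):          # one representative per class
        if len(set(combo)) == len(combo) and independent(combo):
            return list(combo)
    return None                                    # no full-size ISR exists

# Faculty example: two departments {1, 2} and {3, 4}; members 1 and 3 dislike each other.
print(find_isr(edges=[(1, 3)], color_classes=[[1, 2], [3, 4]]))   # -> [1, 4]
```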
Rainbow-independent set
[ "Mathematics" ]
2,436
[ "Discrete mathematics", "Graph theory", "Computational problems", "Combinatorics", "Mathematical relations", "Mathematical problems", "NP-complete problems" ]
74,805,735
https://en.wikipedia.org/wiki/Consumer-resource%20model
In theoretical ecology and nonlinear dynamics, consumer-resource models (CRMs) are a class of ecological models in which a community of consumer species compete for a common pool of resources. Instead of species interacting directly, all species-species interactions are mediated through resource dynamics. Consumer-resource models have served as fundamental tools in the quantitative development of theories of niche construction, coexistence, and biological diversity. These models can be interpreted as a quantitative description of a single trophic level. A general consumer-resource model consists of resources whose abundances are and consumer species whose populations are . A general consumer-resource model is described by the system of coupled ordinary differential equations, where , depending only on resource abundances, is the per-capita growth rate of species , and is the growth rate of resource . An essential feature of CRMs is that species growth rates and populations are mediated through resources and there are no explicit species-species interactions. Through resource interactions, there are emergent inter-species interactions. Originally introduced by Robert H. MacArthur and Richard Levins, consumer-resource models have found success in formalizing ecological principles and modeling experiments involving microbial ecosystems. Models Niche models Niche models are a notable class of CRMs which are described by the system of coupled ordinary differential equations, where is a vector abbreviation for resource abundances, is the per-capita growth rate of species , is the growth rate of species in the absence of consumption, and is the rate per unit species population that species depletes the abundance of resource through consumption. In this class of CRMs, consumer species' impacts on resources are not explicitly coordinated; however, there are implicit interactions. MacArthur consumer-resource model (MCRM) The MacArthur consumer-resource model (MCRM), named after Robert H. MacArthur, is a foundational CRM for the development of niche and coexistence theories. The MCRM is given by the following set of coupled ordinary differential equations:where is the relative preference of species for resource and also the relative amount by which resource is depleted by the consumption of consumer species ; is the steady-state carrying capacity of resource in absence of consumption (i.e., when is zero); and are time-scales for species and resource dynamics, respectively; is the quality of resource ; and is the natural mortality rate of species . This model is said to have self-replenishing resource dynamics because when , each resource exhibits independent logistic growth. Given positive parameters and initial conditions, this model approaches a unique uninvadable steady state (i.e., a steady state in which the re-introduction of a species which has been driven to extinction or a resource which has been depleted leads to the re-introduced species or resource dying out again). Steady states of the MCRM satisfy the competitive exclusion principle: the number of coexisting species is less than or equal to the number of non-depleted resources. In other words, the number of simultaneously occupiable ecological niches is equal to the number of non-depleted resources. Externally supplied resources model The externally supplied resource model is similar to the MCRM except the resources are provided at a constant rate from an external source instead of being self-replenished. 
This model is also sometimes called the linear resource dynamics model. It is described by the following set of coupled ordinary differential equations:where all the parameters shared with the MCRM are the same, and is the rate at which resource is supplied to the ecosystem. In the eCRM, in the absence of consumption, decays to exponentially with timescale . This model is also known as a chemostat model. Tilman consumer-resource model (TCRM) The Tilman consumer-resource model (TCRM), named after G. David Tilman, is similar to the externally supplied resources model except the rate at which a species depletes a resource is no longer proportional to the present abundance of the resource. The TCRM is the foundational model for Tilman's R* rule. It is described by the following set of coupled ordinary differential equations:where all parameters are shared with the MCRM. In the TCRM, resource abundances can become nonphysically negative. Microbial consumer-resource model (MiCRM) The microbial consumer resource model describes a microbial ecosystem with externally supplied resources where consumption can produce metabolic byproducts, leading to potential cross-feeding. It is described by the following set of coupled ODEs:where all parameters shared with the MCRM have similar interpretations; is the fraction of the byproducts due to consumption of resource which are converted to resource and is the "leakage fraction" of resource governing how much of the resource is released into the environment as metabolic byproducts. Symmetric interactions and optimization MacArthur's Minimization Principle For the MacArthur consumer resource model (MCRM), MacArthur introduced an optimization principle to identify the uninvadable steady state of the model (i.e., the steady state so that if any species with zero population is re-introduced, it will fail to invade, meaning the ecosystem will return to said steady state). To derive the optimization principle, one assumes resource dynamics become sufficiently fast (i.e., ) that they become entrained to species dynamics and are constantly at steady state (i.e., ) so that is expressed as a function of . With this assumption, one can express species dynamics as, where denotes a sum over resource abundances which satisfy . The above expression can be written as , where, At un-invadable steady state for all surviving species and for all extinct species . Minimum Environmental Perturbation Principle (MEPP) MacArthur's Minimization Principle has been extended to the more general Minimum Environmental Perturbation Principle (MEPP) which maps certain niche CRM models to constrained optimization problems. When the population growth conferred upon a species by consuming a resource is related to the impact the species' consumption has on the resource's abundance through the equation, species-resource interactions are said to be symmetric. In the above equation and are arbitrary functions of resource abundances. When this symmetry condition is satisfied, it can be shown that there exists a function such that:After determining this function , the steady-state uninvadable resource abundances and species populations are the solution to the constrained optimization problem:The species populations are the Lagrange multipliers for the constraints on the second line. 
This can be seen by looking at the KKT conditions, taking to be the Lagrange multipliers: Lines 1, 3, and 4 are the statements of feasibility and uninvadability: if , then must be zero, otherwise the system would not be at steady state, and if , then must be non-positive, otherwise species would be able to invade. Line 2 is the stationarity condition and the steady-state condition for the resources in niche CRMs. The function can be interpreted as a distance by defining the point in the state space of resource abundances at which it is zero, , to be its minimum. The Lagrangian for the dual problem which leads to the above KKT conditions is, In this picture, the unconstrained value of that minimizes (i.e., the steady-state resource abundances in the absence of any consumers) is known as the resource supply vector. Geometric perspectives The steady states of consumer resource models can be analyzed using geometric means in the space of resource abundances. Zero net-growth isoclines (ZNGIs) For a community to satisfy the uninvadability and steady-state conditions, the steady-state resource abundances (denoted ) must satisfy, for all species . The inequality is saturated if and only if species survives. Each of these conditions specifies a region in the space of possible steady-state resource abundances, and the realized steady-state resource abundance is restricted to the intersection of these regions. The boundaries of these regions, specified by , are known as the zero net-growth isoclines (ZNGIs). If species survive, then the steady-state resource abundances must satisfy, . The structure and locations of the intersections of the ZNGIs thus determine what species can feasibly coexist; the realized steady-state community is dependent on the supply of resources and can be analyzed by examining coexistence cones. Coexistence cones The structure of ZNGI intersections determines what species can feasibly coexist but does not determine what set of coexisting species will be realized. Coexistence cones determine what species will survive in an ecosystem given a resource supply vector. A coexistence cone generated by a set of species is defined to be the set of possible resource supply vectors which will lead to a community containing precisely the species . To see the cone structure, consider that in the MacArthur or Tilman models, the steady-state non-depleted resource abundances must satisfy, where is a vector containing the carrying capacities/supply rates, and is the th row of the consumption matrix , considered as a vector. As the surviving species are exactly those with positive abundances, the sum term becomes a sum only over surviving species, and the right-hand side resembles the expression for a convex cone with apex and whose generating vectors are the for the surviving species . Complex ecosystems In an ecosystem with many species and resources, the behavior of consumer-resource models can be analyzed using tools from statistical physics, particularly mean-field theory and the cavity method. In the large ecosystem limit, there is an explosion of the number of parameters. For example, in the MacArthur model, parameters are needed. In this limit, parameters may be considered to be drawn from some distribution which leads to a distribution of steady-state abundances.
These distributions of steady-state abundances can then be determined by deriving mean-field equations for random variables representing the steady-state abundances of a randomly selected species and resource. MacArthur consumer resource model cavity solution In the MCRM, the model parameters can be taken to be random variables with means and variances: With this parameterization, in the thermodynamic limit (i.e., with ), the steady-state resource and species abundances are modeled as a random variable, , which satisfy the self-consistent mean-field equations, where are all moments which are determined self-consistently, are independent standard normal random variables, and and are average susceptibilities which are also determined self-consistently. This mean-field framework can determine the moments and exact form of the abundance distribution, the average susceptibilities, and the fraction of species and resources that survive at a steady state. Similar mean-field analyses have been performed for the externally supplied resources model, the Tilman model, and the microbial consumer-resource model. These techniques were first developed to analyze the random generalized Lotka–Volterra model. See also Theoretical ecology Community (ecology) Competition (biology) Lotka–Volterra equations Competitive Lotka–Volterra equations Generalized Lotka–Volterra equation Random generalized Lotka–Volterra model References Further reading Stefano Allesina's Community Ecology course lecture notes: https://stefanoallesina.github.io/Theoretical_Community_Ecology/ Ecology Ordinary differential equations Mathematical modeling Biophysics Community ecology Ecological niche Population ecology Dynamical systems Random dynamical systems Theoretical ecology
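As a concrete illustration of the MacArthur-type dynamics discussed above, the sketch below numerically integrates a small random community. It is not taken from the article; the notation (consumption preferences c, resource qualities w, mortalities m, carrying capacities K, regeneration rates r) follows a common convention and is an assumption here.

```python
# Minimal simulation sketch of MacArthur-style consumer-resource dynamics:
#   dN_i/dt = N_i * ( sum_a c[i,a] * w[a] * R[a] - m[i] )
#   dR_a/dt = (r[a]/K[a]) * R[a] * (K[a] - R[a]) - R[a] * sum_i N[i] * c[i,a]
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
S, M = 5, 4                          # number of consumer species and resources
c = rng.uniform(0.1, 1.0, (S, M))    # consumption preferences c[i, a]
w = np.ones(M)                       # resource qualities
m = 0.5 * np.ones(S)                 # species mortality rates
K = np.ones(M)                       # resource carrying capacities
r = np.ones(M)                       # resource regeneration rates

def rhs(t, y):
    N, R = y[:S], y[S:]
    dN = N * (c @ (w * R) - m)
    dR = (r / K) * R * (K - R) - R * (N @ c)
    return np.concatenate([dN, dR])

y0 = np.concatenate([0.1 * np.ones(S), K.copy()])
sol = solve_ivp(rhs, (0.0, 500.0), y0, rtol=1e-8, atol=1e-10)
print("species abundances at the end of the run:", np.round(sol.y[:S, -1], 3))
```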
Consumer-resource model
[ "Physics", "Mathematics", "Biology" ]
2,345
[ "Mathematical modeling", "Applied and interdisciplinary physics", "Applied mathematics", "Random dynamical systems", "Ecology", "Biophysics", "Mechanics", "Dynamical systems" ]
74,810,557
https://en.wikipedia.org/wiki/Ovine%20forestomach%20matrix
Ovine forestomach matrix (OFM), marketed as AROA ECM, is a layer of decellularized extracellular matrix (ECM) biomaterial isolated from the propria submucosa of the rumen of sheep. OFM is used in tissue engineering and as a tissue scaffold for wound healing and surgical applications. History OFM was developed and is manufactured by Aroa Biosurgery Limited (New Zealand; formerly Mesynthes Limited) and was first patented in 2008 and described in the scientific literature in 2010. OFM is manufactured from sheep rumen tissue, using a process of decellularization to selectively remove the unwanted sheep cells and cell components to leave an intact and functional extracellular matrix. OFM comprises a special layer of tissue found in the rumen, the propria submucosa, which is structurally and functionally distinct from the submucosa of other gastrointestinal tissues. OFM was first cleared by the FDA in 2009 for the treatment of wounds. Since 2008 there have been >70 publications describing OFM and its clinical applications, and over 6 million clinical applications of OFM-based devices. Composition OFM comprises more than 24 collagens (most notably types I and III), but also contains many growth factors, polysaccharides and proteoglycans that naturally exist as part of the extracellular matrix and play important roles in wound healing and soft tissue repair. The composition includes more than 150 different proteins, including elastin, fibronectin, glycosaminoglycans, basement membrane components, and various growth factors, such as vascular endothelial growth factor (VEGF), fibroblast growth factor (FGF) and platelet-derived growth factor (PDGF). OFM has been shown to recruit mesenchymal stem cells, stimulate cell proliferation, angiogenesis and vasculogenesis, and modulate matrix metalloproteinases and neutrophil elastase. The porous structure of OFM has been characterized by differential scanning calorimetry (DSC), scanning electron microscopy (SEM), atomic force microscopy (AFM), histology, Sirius Red staining, small-angle x-ray scattering (SAXS), and micro-computed tomography (micro-CT). OFM has been shown to contain residual vascular channels that facilitate blood vessel formation through angioconduction. Tissue engineering OFM can be fabricated into a range of different product presentations for tissue engineering applications, and can be functionalized with therapeutic agents including silver, doxycycline and hyaluronic acid. OFM has been commercialized as single and multi-layered sheets, reinforced biologics and powders. When placed in the body, OFM does not elicit a negative inflammatory response and is absorbed into the regenerating tissues via a process called tissue remodeling. Clinical significance Wound healing Aroa Biosurgery Limited first distributed OFM commercially in 2012 as Endoform™ Dermal Template (later Endoform™ Natural) through a distribution partnership with Hollister Incorporated (IL, USA). Endoform™ Natural and Endoform™ Antimicrobial (0.3% ionic silver w/w) are single layers of OFM used in the treatment of acute and chronic wounds, including diabetic foot ulcers (DFU) and venous leg ulcers (VLU). Endoform™ Natural has been shown to accelerate wound healing of DFU.
The wound product Symphony™ combines OFM and hyaluronic acid and is designed to support healing during the proliferative phase, particularly in patients whose healing is severely impaired or compromised due to disease. Complex plastics and reconstructive surgery OFM was cleared by the FDA in 2016 and 2021 for surgical applications in plastics and reconstructive surgery as a multi-layered product (Myriad Matrix™) and powdered format (Myriad Morcells™). OFM-based surgical devices are routinely used in complex lower extremity reconstruction, pilonidal sinus reconstruction, hidradenitis suppurativa and complex traumatic wounds. OFM-based surgical devices are routinely used in plastics and reconstructive surgery for the regeneration of soft tissues when used as an artificial skin. Hernia repair Multi-layered OFM devices, reinforced with synthetic polymer, were first described in 2008 and appeared in the scientific literature in 2010. These devices, termed ‘reinforced biologics’, have been designed for applications in the surgical repair of hernias as an alternative to synthetic surgical mesh (a mesh prosthesis). OFM reinforced biologics are distributed in the US by Tela Bio Inc. Clinical studies have shown that OFM reinforced biologics have lower hernia recurrence rates versus synthetic hernia meshes or biologics such as acellular dermis. References Tissue engineering
Ovine forestomach matrix
[ "Chemistry", "Engineering", "Biology" ]
1,021
[ "Biological engineering", "Cloning", "Chemical engineering", "Tissue engineering", "Medical technology" ]
74,814,591
https://en.wikipedia.org/wiki/Development%20of%20the%20respiratory%20system
Development of the respiratory system begins early in the fetus. It is a complex process that includes many structures, most of which arise from the endoderm. Towards the end of development, the fetus can be observed making breathing movements. Until birth, however, the mother provides all of the oxygen to the fetus as well as removes all of the fetal carbon dioxide via the placenta. Timeline The development of the respiratory system begins at about week 4 of gestation. By week 28, enough alveoli have matured that a baby born prematurely at this time can usually breathe on its own. The respiratory system, however, is not fully developed until early childhood, when a full complement of mature alveoli is present. Weeks 4-7 Respiratory development in the embryo begins around week 4. Ectodermal tissue from the anterior head region invaginates posteriorly to form olfactory pits, which fuse with endodermal tissue of the developing pharynx. An olfactory pit is one of a pair of structures that will enlarge to become the nasal cavity. At about this same time, the lung bud forms. The lung bud is a dome-shaped structure composed of tissue that bulges from the foregut. The foregut is endoderm just inferior to the pharyngeal pouches. The laryngotracheal bud is a structure that forms from the longitudinal extension of the lung bud as development progresses. The portion of this structure nearest the pharynx becomes the trachea, whereas the distal end becomes more bulbous, forming bronchial buds. A bronchial bud is one of a pair of structures that will eventually become the bronchi and all other lower respiratory structures. Weeks 7-16 Bronchial buds continue to branch as development progresses until all of the segmental bronchi have been formed. Beginning around week 13, the lumens of the bronchi begin to expand in diameter. By week 16, respiratory bronchioles form. The fetus now has all major lung structures involved in the airway. Weeks 16-24 Once the respiratory bronchioles form, further development includes extensive vascularization, or the development of the blood vessels, as well as the formation of alveolar ducts and alveolar precursors. At about week 19, the respiratory bronchioles have formed. In addition, cells lining the respiratory structures begin to differentiate to form type I and type II pneumocytes. Once type II cells have differentiated, they begin to secrete small amounts of pulmonary surfactant. Around week 20, fetal breathing movements may begin. Weeks 24-term Major growth and maturation of the respiratory system occurs from week 24 until term. More alveolar precursors develop, and larger amounts of pulmonary surfactant are produced. Surfactant levels are not generally adequate to create effective lung compliance until about the eighth month of pregnancy. The respiratory system continues to expand, and the surfaces that will form the respiratory membrane develop further. At this point, pulmonary capillaries have formed and continue to expand, creating a large surface area for gas exchange. The major milestone of respiratory development occurs at around week 28, when sufficient alveolar precursors have matured so that a baby born prematurely at this time can usually breathe on its own. However, alveoli continue to develop and mature into childhood. A full complement of functional alveoli does not appear until around 8 years of age. Fetal breathing Although the function of fetal breathing movements is not entirely clear, they can be observed starting at 20–21 weeks of development. 
Fetal breathing movements involve muscle contractions that cause the inhalation of amniotic fluid and exhalation of the same fluid, with pulmonary surfactant and mucus. Fetal breathing movements are not continuous and may include periods of frequent movements and periods of no movements. Maternal factors can influence the frequency of breathing movements. For example, high blood glucose levels, called hyperglycemia, can boost the number of breathing movements. Conversely, hypoglycemia can reduce the number of fetal breathing movements. Tobacco use is also known to lower fetal breathing rates. Fetal breathing may help tone the muscles in preparation for breathing movements once the fetus is born. It may also help the alveoli to form and mature. Fetal breathing movements are considered a sign of robust health. Birth Prior to birth, the lungs are filled with amniotic fluid, mucus, and surfactant. As the fetus is squeezed through the birth canal, the fetal thoracic cavity is compressed, expelling much of this fluid. Some fluid remains, however, but is rapidly absorbed by the body shortly after birth. The first inhalation occurs within 10 seconds after birth and not only serves as the first inspiration, but also acts to inflate the lungs. Pulmonary surfactant is critical for inflation to occur, as it reduces the surface tension of the alveoli. Preterm birth around 26 weeks frequently results in severe respiratory distress, although with current medical advancements, some babies may survive. Prior to 26 weeks, sufficient pulmonary surfactant is not produced, and the surfaces for gas exchange have not formed adequately; therefore, survival is low. Sources Respiratory system Human development
Development of the respiratory system
[ "Biology" ]
1,079
[ "Behavior", "Respiratory system", "Human development", "Behavioural sciences", "Organ systems" ]
74,815,191
https://en.wikipedia.org/wiki/Butoxyacetic%20acid
Butoxyacetic acid is an aliphatic organic chemical. It is a liquid. It has the formula C6H12O3 and CAS Registry Number of 2516-93-0. It is REACH registered with the EC number 677-344-8. n-Butyl glycidyl ether is metabolized renally to this compound as is 2-butoxyethanol. Methods have been developed and papers published to detect the compound in urine and blood. Uses It is used as a biocide. References Organic acids Ethers
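The molecular formula quoted above allows a quick arithmetic check of the compound's molar mass. The following Python sketch is illustrative only: the atomic masses are standard values rounded to three decimals, and the small formula parser is a hypothetical helper, not part of any registry database.

```python
import re

# Standard atomic masses (g/mol), rounded; adequate for an illustrative estimate.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}

def molar_mass(formula: str) -> float:
    """Sum atomic masses for a simple formula such as 'C6H12O3'."""
    total = 0.0
    for element, count in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        if element:  # skip the empty match the regex yields at the end of the string
            total += ATOMIC_MASS[element] * int(count or 1)
    return total

print(f"Butoxyacetic acid (C6H12O3): {molar_mass('C6H12O3'):.2f} g/mol")
# Prints roughly 132.16 g/mol.
```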
Butoxyacetic acid
[ "Chemistry" ]
117
[ "Organic acids", "Acids", "Functional groups", "Organic compounds", "Ethers" ]
74,817,490
https://en.wikipedia.org/wiki/Battery%20leakage
Battery leakage is the escape of chemicals, such as electrolytes, within an electric battery due to generation of pathways to the outside environment caused by factory or design defects, excessive gas generation, or physical damage to the battery. The leakage of battery chemical often causes destructive corrosion to the associated equipment and may pose a health hazard. Leakage by type Primary Zinc–carbon Zinc–carbon batteries were the first commercially available battery type and are still somewhat frequently used, although they have largely been replaced by the similarly composed alkaline battery. Like the alkaline battery, the zinc–carbon battery contains manganese dioxide and zinc electrodes. Unlike the alkaline battery, the zinc–carbon battery uses ammonium chloride as the electrolyte (zinc chloride in the case of "heavy-duty" zinc–carbon batteries), which is acidic. Either when it has been completely consumed or after three to five years from its manufacture (its shelf life), a zinc–carbon battery is prone to leaking. The byproducts of the leakage may include manganese hydroxide, zinc ammonium chloride, ammonia, zinc chloride, zinc oxide, water and starch. This combination of materials is corrosive to metals, such as those of the battery contacts and surrounding circuitry. Anecdotal evidence suggests that zinc–carbon battery leakage can be effectively cleaned with sodium bicarbonate (baking soda). Alkaline Alkaline batteries use manganese dioxide and zinc electrodes with an electrolyte of potassium hydroxide. The alkaline battery gets its name from the replacement of the acidic ammonium chloride of zinc–carbon batteries with potassium hydroxide, which is an alkaline. Alkaline batteries are considerably more efficient, more environmentally friendly, and more shelf-stable than zinc–carbon batteries—five to ten years, when stored room temperature. Alkaline batteries largely replaced zinc–carbon batteries in regular use by 1990. After an alkaline battery has been spent, or as it reaches the ends of its shelf life, the chemistry of its cells change, and hydrogen gas is generated as a byproduct. When enough pressure has been built up internally, the casing splits at the bases or side (or both), releasing manganese oxide, zinc oxide, potassium hydroxide, zinc hydroxide, and manganese hydroxide. Alkaline battery leakage can be effectively neutralized with lemon juice or distilled white vinegar. Eye protection and rubber gloves should be worn, as the potassium hydroxide electrolyte is caustic. Rechargeable Nickel–cadmium (Ni-Cd) Nickel–cadmium batteries (Ni-Cd) use nickel oxide hydroxide and metallic cadmium electrodes with an electrolyte of potassium hydroxide. Sealed Ni-Cd batteries were widely used in photography equipment, handheld power tools, and radio-controlled toys from the early 1940s until the early 1990s, when nickel–metal hydride batteries supplanted them (like how alkaline batteries replaced zinc–carbon batteries). In personal computers, Ni-Cd batteries first saw use in the mid-1980s as a cheaper alternative to lithium batteries for powering real-time clocks and preserving BIOS settings. Nickel–cadmium batteries were also briefly used in laptop battery packs, until the advent of commercially viable nickel–metal hydride batteries in the early 1990s. Ni-Cd batteries are still used in some uninterruptible power supplies and emergency lighting setups. 
Except in aeronautical or other high-risk applications, Ni-Cd batteries are intentionally not hermetically sealed and include pressure vents for safety if the batteries are charged improperly. With age and sufficient thermal cycles the seal will degrade and allow electrolyte to leak through. The leakage usually travels down the positive and/or negative terminals onto any surrounding circuitry (see the top image). Like with alkaline battery leakage, Ni-Cd leakage can be effectively neutralized with lemon juice or distilled white vinegar. Nickel–metal hydride (Ni-MH) Nickel–metal hydride batteries (Ni-MH) largely replaced Ni-Cd batteries in the early 1990s. They replaced the metallic cadmium electrode with a hydrogen-absorbing alloy, allowing it to have over two times the capacity of Ni-Cd batteries while being easier to recycle. Their heyday in computer equipment was in the early- to mid-1990s. By 1995, most motherboard manufacturers switched to non-rechargeable lithium button cells to keep the BIOS chip powered. Lithium-based battery packs replaced Ni-MH packs in all but the lowest-end laptops by the early 2000s. The practical shelf life of a Ni-MH is roughly five years. Cylindrical jelly-roll Ni-MH cells, like the ones used in 1990s laptop battery packs, discharge at a rate of up to 2% per day, while button cells like the ones used in motherboard batteries discharge at a rate of less than 20% per month. They are said to leak less frequently than alkaline batteries but have a similar failure mode. Ni-MH leakage can be effectively neutralized with lemon juice or distilled white vinegar. History In the United States in 1964, the Federal Trade Commission proscribed the use of the word leakproof or the phrase "guaranteed leakproof" in advertisements for or on the packages of dry-cell batteries, as they had determined that no manufacturer had yet developed a battery that was truly impervious to leaking. The FTC repealed this ban in 1997. References Leakage Corrosion Technological failures
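The self-discharge figures quoted above (up to 2% per day for cylindrical jelly-roll Ni-MH cells, under 20% per month for button cells) can be turned into a rough estimate of remaining charge over a storage period. The sketch below is illustrative only: it assumes a constant fractional loss per day compounded over time, which is a modeling convenience rather than a statement about the cell chemistry, and it spreads the monthly button-cell figure evenly over 30 days.

```python
def remaining_charge(initial_pct: float, daily_rate: float, days: int) -> float:
    """Charge left after `days`, assuming a constant fractional loss per day."""
    return initial_pct * (1.0 - daily_rate) ** days

# Rates taken from the passage above; the compounding model is an assumption.
rates = {
    "cylindrical jelly-roll Ni-MH": 0.02,        # up to 2% per day
    "Ni-MH button cell": 0.20 / 30,              # under 20% per month, ~0.67% per day
}

for label, rate in rates.items():
    left = remaining_charge(100.0, rate, days=30)
    print(f"{label}: about {left:.0f}% of charge left after 30 days of storage")
```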
Battery leakage
[ "Chemistry", "Materials_science", "Technology" ]
1,156
[ "Metallurgy", "Technological failures", "Corrosion", "Electrochemistry", "Materials degradation" ]
74,820,340
https://en.wikipedia.org/wiki/Random%20generalized%20Lotka%E2%80%93Volterra%20model
The random generalized Lotka–Volterra model (rGLV) is an ecological model and random set of coupled ordinary differential equations where the parameters of the generalized Lotka–Volterra equation are sampled from a probability distribution, analogously to quenched disorder. The rGLV models the dynamics of a community of species in which each species' abundance grows towards a carrying capacity but is depleted due to competition from the presence of other species. It is often analyzed in the many-species limit using tools from statistical physics, in particular from spin glass theory. The rGLV has been used as a tool to analyze emergent macroscopic behavior in microbial communities with dense, strong interspecies interactions. The model has served as a context for theoretical investigations studying diversity-stability relations in community ecology and properties of static and dynamic coexistence. Dynamical behavior in the rGLV has been mapped experimentally in community microcosms. The rGLV model has also served as an object of interest for the spin glass and disordered systems physics community to develop new techniques and numerical methods. Definition The random generalized Lotka–Volterra model is written as the system of coupled ordinary differential equations, dN_i/dt = (r_i N_i / K_i) (K_i − N_i − Σ_{j≠i} α_{ij} N_j), where N_i is the abundance of species i, S is the number of species, K_i is the carrying capacity of species i in the absence of interactions, r_i sets a timescale, and α is a random matrix whose entries α_{ij} are random variables with mean μ/S, variance σ²/S, and correlations corr(α_{ij}, α_{ji}) = γ for i ≠ j, where −1 ≤ γ ≤ 1. The interaction matrix, α, may be parameterized as, α_{ij} = μ/S + (σ/√S) a_{ij}, where a_{ij} are standard random variables (i.e., zero mean and unit variance) with corr(a_{ij}, a_{ji}) = γ for i ≠ j. The matrix entries may have any distribution with common finite first and second moments and will yield identical results in the large S limit due to the central limit theorem. The carrying capacities may also be treated as random variables with mean K and variance σ_K². Analyses by statistical physics-inspired methods have revealed phase transitions between different qualitative behaviors of the model in the many-species limit. In some cases, this may include transitions between the existence of a unique globally-attractive fixed point and chaotic, persistent fluctuations. Steady-state abundances in the thermodynamic limit In the thermodynamic limit (i.e., the community has a very large number of species) where a unique globally-attractive fixed point exists, the distribution of species abundances can be computed using the cavity method while assuming the system is self-averaging. The self-averaging assumption means that the distribution of any one species' abundance between samplings of model parameters matches the distribution of species abundances within a single sampling of model parameters. In the cavity method, an additional mean-field species is introduced and the response of the system is approximated linearly. The cavity calculation yields a self-consistent equation describing the distribution of species abundances as a mean-field random variable, N. In this limit, the mean-field description states that N satisfies a quadratic self-consistent relation involving a standard normal random variable together with the mean, second moment, and susceptibility of N. Only ecologically uninvadable solutions are taken (i.e., the largest solution for N in the quadratic equation is selected). The relevant susceptibility and moments of N, which has a truncated normal distribution, are determined self-consistently. 
Dynamical phases In the thermodynamic limit where there is an asymptotically large number of species (i.e., S → ∞), there are three distinct phases: one in which there is a unique fixed point (UFP), another with multiple attractors (MA), and a third with unbounded growth. In the MA phase, depending on whether species abundances are replenished at a small rate, are allowed to approach arbitrarily small population sizes, or are removed from the community when the population falls below some cutoff, the resulting dynamics may be chaotic with persistent fluctuations or approach an initial-conditions-dependent steady state. The transition from the UFP to MA phase is signaled by the cavity solution becoming unstable to disordered perturbations. When σ_K = 0, the phase transition boundary occurs when the parameters satisfy σ = √2/(1 + γ). In the σ_K > 0 case, the phase boundary can still be calculated analytically, but no closed-form solution has been found; numerical methods are necessary to solve the self-consistent equations determining the phase boundary. The transition to the unbounded growth phase is signaled by the divergence of the mean abundance as computed in the cavity calculation. Dynamical mean-field theory The cavity method can also be used to derive a dynamical mean-field theory model for the dynamics. The cavity calculation yields a self-consistent equation describing the dynamics as a Gaussian process, in which the interactions enter through a zero-mean Gaussian noise whose autocorrelation is set by the autocorrelation of the abundances, and through a dynamical susceptibility defined in terms of a functional derivative of the dynamics with respect to a time-dependent perturbation of the carrying capacity. Using dynamical mean-field theory, it has been shown that at long times, the dynamics exhibit aging in which the characteristic time scale defining the decay of correlations increases linearly in the duration of the dynamics. That is, at long times the autocorrelation function of the dynamics depends on its two time arguments only through their ratio, collapsing onto a common scaling function. When a small immigration rate is added (i.e., a small constant is added to the right-hand side of the equations of motion) the dynamics reach a time-translationally invariant state. In this case, the dynamics exhibit jumps between low abundances, of the order set by the immigration rate, and high abundances of order unity. Related articles Generalized Lotka–Volterra equation Competitive Lotka–Volterra equations Lotka–Volterra equations Consumer-resource model Theoretical ecology Random dynamical system Spin glass Cavity method Dynamical mean-field theory Quenched disorder Community (ecology) Ecological stability References Further reading Stefano Allesina's Community Ecology course lecture notes: https://stefanoallesina.github.io/Theoretical_Community_Ecology/ Bunin, Guy (2017-04-28). "Ecological communities with Lotka-Volterra dynamics". Physical Review E. 95 (4): 042414. Bibcode:2017PhRvE..95d2414B. doi:10.1103/PhysRevE.95.042414. PMID 28505745. Community ecology Complex systems theory Theoretical ecology Random dynamical systems Dynamical systems Mathematical modeling Biophysics Ordinary differential equations Population ecology Ecology
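The model definition lends itself to a direct numerical sketch. The Python/NumPy code below is illustrative only: it integrates the rGLV equations in the standard form discussed above (as in Bunin 2017) with a simple forward-Euler scheme; the parameter values (S = 200, μ = 4, σ = 0.5, γ = 0), the construction of the correlated Gaussian pairs, and the integration step are arbitrary implementation choices, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameter values only.
S, mu, sigma, gamma = 200, 4.0, 0.5, 0.0
r = np.ones(S)                      # growth rates, setting the timescale
K = np.ones(S)                      # carrying capacities

# Gaussian entries a_ij with corr(a_ij, a_ji) = gamma for i != j.
b = rng.standard_normal((S, S))
c = rng.standard_normal((S, S))
iu, ju = np.triu_indices(S, k=1)
a = np.zeros((S, S))
a[iu, ju] = b[iu, ju]
a[ju, iu] = gamma * b[iu, ju] + np.sqrt(1.0 - gamma**2) * c[iu, ju]

alpha = mu / S + sigma * a / np.sqrt(S)
np.fill_diagonal(alpha, 0.0)        # the interaction sum runs over j != i

def rhs(N):
    # dN_i/dt = (r_i N_i / K_i) * (K_i - N_i - sum_{j != i} alpha_ij N_j)
    return r * N / K * (K - N - alpha @ N)

N = rng.uniform(0.1, 1.0, S)
dt = 0.01
for _ in range(20_000):             # forward-Euler integration up to t = 200
    N = np.clip(N + dt * rhs(N), 0.0, None)

surviving = N > 1e-6
print(f"fraction of surviving species: {surviving.mean():.2f}")
print(f"mean abundance of survivors  : {N[surviving].mean():.3f}")
```

With these parameters the system sits in the unique-fixed-point regime and the abundances settle to a steady distribution; raising σ or γ pushes the dynamics toward the fluctuating multiple-attractor behavior described in the phase discussion.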
Random generalized Lotka–Volterra model
[ "Physics", "Mathematics", "Biology" ]
1,304
[ "Mathematical modeling", "Applied and interdisciplinary physics", "Random dynamical systems", "Applied mathematics", "Ecology", "Biophysics", "Mechanics", "Dynamical systems" ]
62,421,802
https://en.wikipedia.org/wiki/Perturbed%20angular%20correlation
The perturbed γ-γ angular correlation, PAC for short or PAC-Spectroscopy, is a method of nuclear solid-state physics with which magnetic and electric fields in crystal structures can be measured. In doing so, electrical field gradients and the Larmor frequency in magnetic fields as well as dynamic effects are determined. With this very sensitive method, which requires only about 10–1000 billion atoms of a radioactive isotope per measurement, material properties in the local structure, phase transitions, magnetism and diffusion can be investigated. The PAC method is related to nuclear magnetic resonance and the Mössbauer effect, but shows no signal attenuation at very high temperatures. Today only the time-differential perturbed angular correlation (TDPAC) is used. History and development PAC goes back to a theoretical work by Donald R. Hamilton from 1940. The first successful experiment was carried out by Brady and Deutsch in 1947. Essentially spin and parity of nuclear spins were investigated in these first PAC experiments. However, it was recognized early on that electric and magnetic fields interact with the nuclear moment, providing the basis for a new form of material investigation: nuclear solid-state spectroscopy. Step by step the theory was developed. After Abragam and Pound published their work on the theory of PAC in 1953 including extra nuclear fields, many studies with PAC were carried out afterwards. In the 1960s and 1970s, interest in PAC experiments sharply increased, focusing mainly on magnetic and electric fields in crystals into which the probe nuclei were introduced. In the mid-1960s, ion implantation was discovered, providing new opportunities for sample preparation. The rapid electronic development of the 1970s brought significant improvements in signal processing. From the 1980s to the present, PAC has emerged as an important method for the study and characterization of materials, e.g. for the study of semiconductor materials, intermetallic compounds, surfaces and interfaces, and a number of applications have also appeared in biochemistry. While until about 2008 PAC instruments used conventional high-frequency electronics of the 1970s, in 2008 Christian Herden and Jens Röder et al. developed the first fully digitized PAC instrument that enables extensive data analysis and parallel use of multiple probes. Replicas and further developments followed. Measuring principle PAC uses radioactive probes, which have an intermediate state with decay times of 2 ns to approx. 10 μs, see example 111In in the picture on the right. After electron capture (EC), indium transmutates to cadmium. Immediately thereafter, the 111cadmium nucleus is predominantly in the excited 7/2+ nuclear spin and only to a very small extent in the 11/2- nuclear spin, the latter should not be considered further. The 7/2+ excited state transitions to the 5/2+ intermediate state by emitting a 171 keV γ-quantum. The intermediate state has a lifetime of 84.5 ns and is the sensitive state for the PAC. This state in turn decays into the 1/2+ ground state by emitting a γ-quantum with 245 keV. PAC now detects both γ-quanta and evaluates the first as a start signal, the second as a stop signal. Now one measures the time between start and stop for each event. This is called coincidence when a start and stop pair has been found. 
Since the intermediate state decays according to the laws of radioactive decay, one obtains an exponential curve with the lifetime of this intermediate state after plotting the frequency over time. Due to the non-spherically symmetric radiation of the second γ-quantum, the so-called anisotropy, which is an intrinsic property of the nucleus in this transition, it comes with the surrounding electrical and/or magnetic fields to a periodic disorder (hyperfine interaction). The illustration of the individual spectra on the right shows the effect of this disturbance as a wave pattern on the exponential decay of two detectors, one pair at 90° and one at 180° to each other. The waveforms to both detector pairs are shifted from each other. Very simply, one can imagine a fixed observer looking at a lighthouse whose light intensity periodically becomes lighter and darker. Correspondingly, a detector arrangement, usually four detectors in a planar 90 ° arrangement or six detectors in an octahedral arrangement, "sees" the rotation of the core on the order of magnitude of MHz to GHz. According to the number n of detectors, the number of individual spectra (z) results after z=n²-n, for n=4 therefore 12 and for n=6 thus 30. In order to obtain a PAC spectrum, the 90° and 180° single spectra are calculated in such a way that the exponential functions cancel each other out and, in addition, the different detector properties shorten themselves. The pure perturbation function remains, as shown in the example of a complex PAC spectrum. Its Fourier transform gives the transition frequencies as peaks. , the count rate ratio, is obtained from the single spectra by using: Depending on the spin of the intermediate state, a different number of transition frequencies show up. For 5/2 spin, 3 transition frequencies can be observed with the ratio ω1+ω2=ω3. As a rule, a different combination of 3 frequencies can be observed for each associated site in the unit cell. PAC is a statistical method: Each radioactive probe atom sits in its own environment. In crystals, due to the high regularity of the arrangement of the atoms or ions, the environments are identical or very similar, so that probes on identical lattice sites experience the same hyperfine field or magnetic field, which then becomes measurable in a PAC spectrum. On the other hand, for probes in very different environments, such as in amorphous materials, a broad frequency distribution or no is usually observed and the PAC spectrum appears flat, without frequency response. With single crystals, depending on the orientation of the crystal to the detectors, certain transition frequencies can be reduced or extinct, as can be seen in the example of the PAC spectrum of zinc oxide (ZnO). Instrumental setup In the typical PAC spectrometer, a setup of four 90° and 180° planar arrayed detectors or six octahedral arrayed detectors are placed around the radioactive source sample. The detectors used are scintillation crystals of BaF2 or NaI. For modern instruments today mainly LaBr3:Ce or CeBr3 are used. Photomultipliers convert the weak flashes of light into electrical signals generated in the scintillator by gamma radiation. In classical instruments these signals are amplified and processed in logical AND/OR circuits in combination with time windows the different detector combinations (for 4 detectors: 12, 13, 14, 21, 23, 24, 31, 32, 34, 41, 42, 43) assigned and counted. 
Modern digital spectrometers use digitizer cards that digitize the signal directly, convert it into energy and time values, and store them on hard drives. These are then searched by software for coincidences. Whereas in classical instruments, "windows" limiting the respective γ-energies must be set before processing, this is not necessary for the digital PAC during the recording of the measurement. The analysis only takes place in the second step. In the case of probes with complex cascades, this makes it possible to perform data optimization or to evaluate several cascades in parallel, as well as to measure different probes simultaneously. The resulting data volumes can be between 60 and 300 GB per measurement. Sample materials In principle, all solid and liquid materials can be investigated as samples. Depending on the question and the purpose of the investigation, certain framework conditions arise. For the observation of clear perturbation frequencies it is necessary, due to the statistical method, that a certain proportion of the probe atoms are in a similar environment and, for example, experience the same electric field gradient. Furthermore, during the time window between the start and stop, or approximately 5 half-lives of the intermediate state, the direction of the electric field gradient must not change. In liquids, therefore, no perturbation frequency can be measured as a result of the frequent collisions, unless the probe is complexed in large molecules, such as in proteins. The samples with proteins or peptides are usually frozen to improve the measurement. The most studied materials with PAC are solids such as semiconductors, metals, insulators, and various types of functional materials. For the investigations, these are usually crystalline. Amorphous materials do not have highly ordered structures. However, they have short-range order, which can be seen in PAC spectroscopy as a broad distribution of frequencies. Nano-materials have a crystalline core and a shell that has a rather amorphous structure. This is called the core-shell model. The smaller the nanoparticle becomes, the larger the volume fraction of this amorphous portion becomes. In PAC measurements, this shows up as a decrease of the crystalline frequency component and a reduction of the amplitude (attenuation). Sample preparation The amount of suitable PAC isotopes required for a measurement is between about 10 and 1000 billion atoms (10^10–10^12). The right amount depends on the particular properties of the isotope. 10 billion atoms are a very small amount of substance. For comparison, one mole contains about 6.022x10^23 particles. 10^12 atoms in one cubic centimeter of beryllium give a concentration of about 8 nmol/L (nanomole = 10^−9 mol). The radioactive samples each have an activity of 0.1–5 MBq, which is on the order of the exemption limit for the respective isotope. How the PAC isotopes are brought into the sample to be examined is up to the experimenter and the technical possibilities. The following methods are usual: Implantation During implantation, a radioactive ion beam is generated, which is directed onto the sample material. Due to the kinetic energy of the ions (1–500 keV), these fly into the crystal lattice and are slowed down by impacts. They either come to a stop at interstitial sites or push a lattice atom out of its place and replace it. This leads to a disruption of the crystal structure. These disorders can be investigated with PAC. By tempering, these disturbances can be healed. 
If, on the other hand, radiation defects in the crystal and their healing are to be examined, unperseived samples are measured, which are then annealed step by step. The implantation is usually the method of choice, because it can be used to produce very well-defined samples. Evaporation In a vacuum, the PAC probe can be evaporated onto the sample. The radioactive probe is applied to a hot plate or filament, where it is brought to the evaporation temperature and condensed on the opposite sample material. With this method, e.g. surfaces are examined. Furthermore, by vapor deposition of other materials, interfaces can be produced. They can be studied during tempering with PAC and their changes can be observed. Similarly, the PAC probe can be transferred to sputtering using a plasma. Diffusion In the diffusion method, the radioactive probe is usually diluted in a solvent applied to the sample, dried and it is diffused into the material by tempering it. The solution with the radioactive probe should be as pure as possible, since all other substances can diffuse into the sample and affect thereby the measurement results. The sample should be sufficiently diluted in the sample. Therefore, the diffusion process should be planned so that a uniform distribution or sufficient penetration depth is achieved. Added during synthesis PAC probes may also be added during the synthesis of sample materials to achieve the most uniform distribution in the sample. This method is particularly well suited if, for example, the PAC probe diffuses only poorly in the material and a higher concentration in grain boundaries is to be expected. Since only very small samples are necessary with PAC (about 5 mm), micro-reactors can be used. Ideally, the probe is added to the liquid phase of the sol-gel process or one of the later precursor phases. Neutron activation In neutron activation, the probe is prepared directly from the sample material by converting very small part of one of the elements of the sample material into the desired PAC probe or its parent isotope by neutron capture. As with implantation, radiation damage must be healed. This method is limited to sample materials containing elements from which neutron capture PAC probes can be made. Furthermore, samples can be intentionally contaminated with those elements that are to be activated. For example, hafnium is excellently suited for activation because of its large capture cross section for neutrons. Nuclear reaction Rarely used are direct nuclear reactions in which nuclei are converted into PAC probes by bombardment by high-energy elementary particles or protons. This causes major radiation damage, which must be healed. This method is used with PAD, which belongs to the PAC methods. Laboratories The currently largest PAC laboratory in the world is located at ISOLDE in CERN with about 10 PAC instruments, that receives its major funding form BMBF. Radioactive ion beams are produced at the ISOLDE by bombarding protons from the booster onto target materials (uranium carbide, liquid tin, etc.) and evaporating the spallation products at high temperatures (up to 2000 °C), then ionizing them and then accelerating them. With the subsequent mass separation usually very pure isotope beams can be produced, which can be implanted in PAC samples. Of particular interest to the PAC are short-lived isomeric probes such as: 111mCd, 199mHg, 204mPb, and various rare earth probes. Theory The first -quantum () will be emitted isotropically. 
Detecting this quantum in a detector selects a subset with an orientation of the many possible directions that has a given. The second -quantum () has an anisotropic emission and shows the effect of the angle correlation. The goal is to measure the relative probability with the detection of at the fixed angle in relation to . The probability is given with the angle correlation (perturbation theory): For a --cascade, is due to the preservation of parity: Where is the spin of the intermediate state and with the multipolarity of the two transitions. For pure multipole transitions, is . is the anisotropy coefficient that depends on the angular momentum of the intermediate state and the multipolarities of the transition. The radioactive nucleus is built into the sample material and emits two -quanta upon decay. During the lifetime of the intermediate state, i.e. the time between and , the core experiences a disturbance due to the hyperfine interaction through its electrical and magnetic environment. This disturbance changes the angular correlation to: is the perturbation factor. Due to the electrical and magnetic interaction, the angular momentum of the intermediate state experiences a torque about its axis of symmetry. Quantum-mechanically, this means that the interaction leads to transitions between the M states. The second -quantum () is then sent from the intermediate level. This population change is the reason for the attenuation of the correlation. The interaction occurs between the magnetic core dipole moment and the intermediate state or/and an external magnetic field . The interaction also takes place between nuclear quadrupole moment and the off-core electric field gradient . Magnetic dipole interaction For the magnetic dipole interaction, the frequency of the precession of the nuclear spin around the axis of the magnetic field is given by: is the Landé g-factor und is the nuclear magneton. With follows: From the general theory we get: For the magnetic interaction follows: Static electric quadrupole interaction The energy of the hyperfine electrical interaction between the charge distribution of the core and the extranuclear static electric field can be extended to multipoles. The monopole term only causes an energy shift and the dipole term disappears, so that the first relevant expansion term is the quadrupole term:     ij=1;2;3 This can be written as a product of the quadrupole moment and the electric field gradient . Both [tensor]s are of second order. Higher orders have too small effect to be measured with PAC. The electric field gradient is the second derivative of the electric potential at the core: becomes diagonalized, that: The matrix is free of traces in the main axis system (Laplace equation) Typically, the electric field gradient is defined with the largest proportion and : ,         In cubic crystals, the axis parameters of the unit cell x, y, z are of the same length. Therefore: and In axisymmetric systems is . For axially symmetric electric field gradients, the energy of the substates has the values: The energy difference between two substates, and , is given by: The quadrupole frequency is introduced. The formulas in the colored frames are important for the evaluation: The publications mostly list . as elementary charge and as Planck constant are well known or well defined. The nuclear quadrupole moment is often determined only very inaccurately (often only with 2-3 digits). 
Because can be determined much more accurately than , it is not useful to specify only because of the error propagation. In addition, is independent of spin! This means that measurements of two different isotopes of the same element can be compared, such as 199mHg(5/2−), 197mHg(5/2−) and 201mHg(9/2−). Further, can be used as finger print method. For the energy difference then follows: If , then: with: For integer spins applies:          und          For half integer spins applies:          und          The perturbation factor is given by: With the factor for the probabilities of the observed frequencies: As far as the magnetic dipole interaction is concerned, the electrical quadrupole interaction also induces a precision of the angular correlation in time and this modulates the quadrupole interaction frequency. This frequency is an overlap of the different transition frequencies . The relative amplitudes of the various components depend on the orientation of the electric field gradient relative to the detectors (symmetry axis) and the asymmetry parameter . For a probe with different probe nuclei, one needs a parameter that allows a direct comparison: Therefore, the quadrupole coupling constant independent of the nuclear spin is introduced. Combined interactions If there is a magnetic and electrical interaction at the same time on the radioactive nucleus as described above, combined interactions result. This leads to the splitting of the respectively observed frequencies. The analysis may not be trivial due to the higher number of frequencies that must be allocated. These then depend in each case on the direction of the electric and magnetic field to each other in the crystal. PAC is one of the few ways in which these directions can be determined. Dynamic interactions If the hyperfine field fluctuates during the lifetime of the intermediate level due to jumps of the probe into another lattice position or from jumps of a near atom into another lattice position, the correlation is lost. For the simple case with an undistorted lattice of cubic symmetry, for a jump rate of for equivalent places , an exponential damping of the static -terms is observed:             Here is a constant to be determined, which should not be confused with the decay constant . For large values of , only pure exponential decay can be observed: The boundary case after Abragam-Pound is , if , then: After effects Cores that transmute beforehand of the --cascade usually cause a charge change in ionic crystals (In3+) to Cd2+). As a result, the lattice must respond to these changes. Defects or neighboring ions can also migrate. Likewise, the high-energy transition process may cause the Auger effect, that can bring the core into higher ionization states. The normalization of the state of charge then depends on the conductivity of the material. In metals, the process takes place very quickly. This takes considerably longer in semiconductors and insulators. In all these processes, the hyperfine field changes. If this change falls within the --cascade, it may be observed as an after effect. 
The number of nuclei in state (a) in the image on the right is depopulated both by the decay to state (b) and to state (c): with: From this one obtains the exponential case: For the total number of nuclei in the static state (c) it follows: The initial occupation probabilities are for static and dynamic environments: General theory In the general theory for a transition is given: Minimum of with: References Nuclear physics Atomic physics Electromagnetism Spectroscopy Scientific techniques Laboratory techniques in condensed matter physics Solid-state chemistry Materials science
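As a worked illustration of the quadrupole splitting discussed above, the sketch below diagonalizes a static quadrupole Hamiltonian for a spin-5/2 intermediate state (as in 111Cd) and prints the distinct transition frequencies. It is a minimal sketch that assumes one common textbook convention, H_Q ∝ 3I_z² − I(I+1) + η(I_x² − I_y²), expressed in units of a quadrupole frequency ω_Q = eQV_zz/(4I(2I−1)ħ); the code is not taken from any PAC analysis package. For an axially symmetric field gradient (η = 0) it reproduces the 1:2:3 pattern with ω1 + ω2 = ω3 mentioned in the text.

```python
import numpy as np

def spin_matrices(I: float):
    """Angular momentum matrices Ix, Iy, Iz for spin quantum number I."""
    m = np.arange(I, -I - 1, -1)                 # magnetic quantum numbers I..-I
    dim = len(m)
    Iz = np.diag(m)
    Ip = np.zeros((dim, dim))                    # raising operator I+
    for k in range(1, dim):
        Ip[k - 1, k] = np.sqrt(I * (I + 1) - m[k] * (m[k] + 1))
    Im = Ip.T
    return 0.5 * (Ip + Im), -0.5j * (Ip - Im), Iz

I = 2.5                                          # intermediate-state spin of 111Cd
eta = 0.0                                        # axially symmetric field gradient
Ix, Iy, Iz = spin_matrices(I)

# Quadrupole Hamiltonian in units of omega_Q (convention stated in the lead-in).
H = 3 * Iz @ Iz - I * (I + 1) * np.eye(int(2 * I + 1)) + eta * (Ix @ Ix - Iy @ Iy)
E = np.linalg.eigvalsh(H)

# Distinct level splittings give the observable transition frequencies.
freqs = sorted({round(abs(a - b), 6) for a in E for b in E if abs(a - b) > 1e-9})
print("transition frequencies (units of omega_Q):", freqs)
# For eta = 0 this prints [6.0, 12.0, 18.0], i.e. the 1:2:3 ratio with w1 + w2 = w3.
```

Setting η to a nonzero value shifts the three frequencies away from the 1:2:3 ratio, which is how the asymmetry parameter is read off from a measured spectrum.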
Perturbed angular correlation
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
4,222
[ "Physical phenomena", "Quantum mechanics", "Laboratory techniques in condensed matter physics", "Fundamental interactions", "Spectroscopy", "Solid-state chemistry", "Electromagnetism", "Instrumental analysis", "Materials science", " molecular", "Nuclear physics", " and optical physics", "Mol...
62,428,696
https://en.wikipedia.org/wiki/Centrifugal%20pump%20selection%20and%20characteristics
The basic function of a pump is to do work on a liquid. It can be used to transport and compress a liquid. In industries heavy-duty pumps are used to move water, chemicals, slurry, food, oil and so on. Depending on their action, pumps are classified into two types — Centrifugal Pumps and Positive Displacement Pumps. While centrifugal pumps impart momentum to the fluid by motion of blades, positive displacement pumps transfer fluid by variation in the size of the pump’s chamber. Centrifugal pumps can be of rotor or propeller types, whereas positive displacement pumps may be gear-based, piston-based, diaphragm-based, etc. As a general rule, centrifugal pumps are used with low viscosity fluids and positive displacement pumps are used with high viscosity fluids. Parameters and Definitions Volume flow rate (Q), specifies the volume of fluid flowing through the pump per unit time. Thus, it gives the rate at which fluid travels through the pump. Given the density of the operating fluid, mass flow rate (ṁ) can also be used to obtain the volume flow rate. The relationship between the mass flow rate and volume flow rate (also known as the capacity) is given by: Where ρ is the operating fluid density. One of the most important considerations, as a consequence, is to match the rated capacity of the pump with the required flow rate in the system that we are designing. Discharge Head, is the net head obtained at the outlet of a pump. For a centrifugal pump, the discharge pressure depends on the suction or inlet pressure as well, along with the fluid’s density. Thus, for the same flow rate of the fluid, we may have different values of discharge pressure depending on the inlet pressure. Thus, discharge head (the height which the fluid can reach after getting pumped) varies according to its operating conditions. Total Head is the difference between the height to which the fluid can rise at the outlet and the height to which it can rise at the inlet for a centrifugal pump. This is a crucial parameter for pump selection and is a popularly used parameter for ascertaining industrial requirements. By eliminating the inlet head, we remove the effect of the supplied pressure to the pump and are left with only the pump’s energy (head) contribution to the fluid flow. Factors Affecting Pump Selection Flow Rate – The flow rate is necessary to select a pump because the head characteristics of a pump will be affected by the flow rate of the system. It is necessary to importantly measure or ascertain this parameter, since the flow rate is critical in many industrial processes, especially in chemical industries. Static Head – The difference between the inlet tank fluid surface elevation and the discharge tank fluid surface elevation. Friction Head – The friction head accounts for the frictional losses in the pumping system. The value of the friction head can be found from available data-tables depending on the flow parameters such as fluid viscosity, pipe dimensions, flow rate, etc. Total Head – It is obtained by adding the friction and static heads. It gives a measure of the amount of energy imparted by the pump to the fluid. Using the total head and the flow rate, the appropriate dynamic pump (centrifugal pump) can be selected. Selection Using Pump Characteristics Whenever there is a need to select a pump for any industrial or personal requirement, it is important to determine the required total head for the operation and the required flow rate. 
All this data is important because each pump which is manufactured by manufacturer has a characteristic value of head and flow at which it leads to maximum efficiency operation. For example, in a process industry if there is a need to transport chemical liquids at a specific flow rate for a particular chemical reaction to take place then there is a need to ascertain both the dynamic head (which is related to the flow rate) and static head. After calculating both the head and the flow rate, the pump curves given by the manufacturer are referred and the pump giving the maximum efficiency at the operational condition is selected. It should however be noted that the best efficiency point is not the best operating point in practice, because the pump curve describes how a centrifugal pump performs in isolation from plant equipment. How it operates in practice is determined by the resistance of the system it is installed in. Characteristic Pump Curves Pump curves are quite useful in the pump selection, testing, operation and maintenance. Pump performance curve is a graph of differential head against the operating flow rate. They specify performance and efficiency characteristics. Performance tests are done on the pumps to verify the claims made by the pump maker. It is quite possible that with time in the plant, requirements of the process along with the infrastructure and conditions may change considerably. In that case pump curves are used to verify whether the pumps would still be the best fit for modified requirements. Selecting Using Pump Curves Pump performance curves are important indicators of pump characteristics provided by the manufacturer. These curves are fundamental in predicting the variation in the differential head across the pump, as the flow changes. However, such curves are not limited to the head, and variation in other parameters such as power, efficiency or NPSH with flow can also be shown on similar plots by the manufacturer. Due to mechanical and power constraints head provided by the pump drops as it pushes more quantity of fluid. In other words, when there is an increase the flow rate (for the same impeller diameter), there is a drop in differential head that the pump is capable of providing. The two are related as follows: Here and depend on the geometric parameters and the rotational speed of the pump and are assumed to be constant for the purpose of comparison. However, this simple linear relationship undergoes modification on account of various losses and a non-linear, decreasing relationship is seen in the pump characteristic curve. From the curve, it is observed that even when the differential head drops off, the output obtained increases because the product of flow rate and head increases (recall that the net pump output is given by and the efficiency is ). This is due to the increase in flow rate. However, the reduction in the discharge head means that the pump consumes more power to push the additional fluid that we need (on account of the increased flow rate). After a specific point, known as the best efficiency point, the effect of reduction in the obtained head outweighs the increase in the flow rate. As a consequence, the power starts reducing hereafter, and the efficiency starts falling. Mathematically, the effect of flow rate on the efficiency is given by: where is called the capacity constant, and and are constants that depend on the pump design and rotation speed. 
Because of these competing effects, a point of optimal efficiency exists for the pump. The target should be to select a pump that operates close to its maximum efficiency point under the required operating conditions. This is the best efficiency point of the pump and is plotted on the pump efficiency curve. References Pumps
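The interplay between the pump curve, the system curve, and the best efficiency point described above can be made concrete with a small numerical sketch. The curves and coefficients below are invented for illustration (a real selection would use the manufacturer's published curves): the example locates the operating point as the intersection of the pump and system head curves and compares it with the best efficiency point of an assumed parabolic efficiency curve.

```python
import numpy as np

# Illustrative curves only; real curves come from the manufacturer's data sheet.
def pump_head(Q):            # head in m vs flow in m^3/h; droops as flow rises
    return 40.0 - 0.008 * Q**2

def system_head(Q):          # static head plus friction losses growing ~ Q^2
    return 10.0 + 0.004 * Q**2

def efficiency(Q):           # simple e = c1*Q - c2*Q^2 shape with a single maximum
    return 0.03 * Q - 0.00033 * Q**2

# Operating point: where the pump curve meets the system curve.
Q = np.linspace(0.0, 70.0, 7001)
Q_op = Q[np.argmin(np.abs(pump_head(Q) - system_head(Q)))]

Q_bep = 0.03 / (2 * 0.00033)          # maximum of the efficiency parabola
print(f"operating point : Q = {Q_op:.1f} m^3/h, H = {pump_head(Q_op):.1f} m, "
      f"eff = {efficiency(Q_op):.2f}")
print(f"best efficiency : Q = {Q_bep:.1f} m^3/h, eff = {efficiency(Q_bep):.2f}")
```

In this made-up example the operating point (about 50 m^3/h) does not coincide with the best efficiency point (about 45 m^3/h), which illustrates the caveat above: the system resistance, not the pump alone, fixes where the pump actually runs.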
Centrifugal pump selection and characteristics
[ "Physics", "Chemistry" ]
1,419
[ "Pumps", "Hydraulics", "Physical systems", "Turbomachinery" ]
62,429,844
https://en.wikipedia.org/wiki/Out-flow%20radial%20turbine
Radial means that the fluid is flowing in radial direction that is either from inward to outward or from outward to inward, with respect to the runner shaft axis. If the fluid is flowing from inward to outward then it is called outflow radial turbine. In this turbine, the working fluid enters around the axis of the wheel and then flows outwards (i.e., towards the outer periphery of the wheel). The guide vane mechanism is typically surrounded by the runner/turbine. In this turbine, the inner diameter of the runner is the inlet and outer diameter is an outlet. Most practical radial outflow turbines are Reaction-type turbines, whereas the converse, radial inflow turbines can be either reaction type, impulse type (in the case of a typical turbo-supercharger), or intermediate (in the case of Francis turbines for example.) Components of Out-flow Turbine The Main Components of Reaction Turbine are : Casing/ Involute: Typically the Runner shaft bearings, rotating seals, guide vane assembly and inlet tube are mounted to the casing Guide Vanes: In liquid turbines these are also sometimes referred to as Wicket gates. These convert some of the pressure energy into momentum energy, but their main functions are to control the flow rate and impart an average tangential velocity on the fluid greater than or equal to the tangential velocity of the runner inlets. In an OFRT these are typically mounted concentrically, in the same plane as the turbine. However the guide vanes can also be designed in an axial or diagonal/mixed configuration. Runner/Turbine: The passage between the blades has a converging-diverging profile. The majority of the head loss or pressure drop occurs as the working fluid passes through the turbine in radial outflow design. The runner is connected to the shaft which rotates along with it and thus this can be used for power production. Depending on the design, the flow through the turbine may be strictly planar, or it may enter the turbine axially and undergo a 90° turn therein. Draft Tube: It is connected to outlet of the turbine which assists fluid exiting the spiral casing. It is used because the exit pressure may becomes less than the stagnation pressure within the tail race and thus it may become difficult for the fluid to proceed downstream causing choked-flow. To make it exit from the tail race/involute it's necessary to provide diverging cross section so that the pressure can increase while the linear velocity greatly decreases. Comparison between inward and outward radial flow reaction turbine Advantages Some of the advantages of radial outflow turbine are: The configuration of radial flow turbine is simple, similar to a centrifugal compressor. Radial flow turbines are mechanically robust compared to axial turbines and they are easy to configure. As a result of that they were considered for the application before axial turbine. They are more tolerant of overspeed and temporary temperature extremes. Radial flow turbines have higher energy extraction capability in one single stage. Because the high pressure side is near the rotational axis (at low radius), it is possible to keep leakage losses lower than with other reaction turbines (Ljungström, axial or in-flow radial). This is more important in small turbines where complex rotating seal systems aren't cost effective. Radial flow turbines are generally more preferred in small turbines because of simpler construction. 
Radial flow turbine rotor does not use aerofoil sections, as a result of which the rotor of radial flow turbine has a shape very similar to a centrifugal compressor and it uses 3D shape for energy extraction. They are more conducive to being produced from a single casting or round billet as a Bladed-disk or "blisk." References Al Jubori, A. M., Al-Dadah, R. K., Mahmoud, S., & Daabo, A. (2017). Modelling and parametric analysis of small-scale axial and radial-outflow turbines for Organic Rankine Cycle applications. Applied energy, 190, 981-996. Erwin, J. R. (1969). U.S. Patent No. 3,465,518. Washington, DC: U.S. Patent and Trademark Office. Turnquist, N. A., Willey, L. D., & Wolfe, C. E. (2002). U.S. Patent No. 6,439,844. Washington, DC: U.S. Patent and Trademark Office. Pini, M., Persico, G., Casati, E., & Dossena, V. (2013). Preliminary design of a centrifugal turbine for organic rankine cycle applications. Journal of Engineering for Gas turbines and power, 135(4), 042312. Childs, D. (1993). Turbomachinery rotordynamics: phenomena, modeling, and analysis. John Wiley & Sons. Hiett, G. F., & Johnston, I. H. (1963, June). Paper 7: Experiments concerning the aerodynamic performance of inward flow radial turbines. In Proceedings of the Institution of Mechanical Engineers, Conference Proceedings (Vol. 178, No. 9, pp. 28-42). Sage UK: London, England: SAGE Publications. Rodgers, C., & Geiser, R. (1987). Performance of a high-efficiency radial/axial turbine. Turbines
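The energy extraction ascribed to the runner above can be illustrated with the Euler turbomachine relation. The numbers in the sketch below are invented for demonstration and are not taken from the article or its references; the only assumption is the standard Euler expression for specific work, w = U_in·Cw_in − U_out·Cw_out, with station "in" at the smaller radius, as is the case for a radial outflow runner.

```python
import math

# Illustrative numbers only; not from any specific machine.
rpm = 3000.0
omega = 2 * math.pi * rpm / 60.0        # shaft speed, rad/s

r_in, r_out = 0.10, 0.20                # inlet (inner) and outlet (outer) radii, m
Cw_in, Cw_out = 45.0, 5.0               # tangential (whirl) velocity components, m/s
m_dot = 8.0                             # mass flow rate, kg/s

U_in, U_out = omega * r_in, omega * r_out   # blade speeds at inlet and outlet

# Euler turbomachine equation: specific work extracted by the runner.
w_specific = U_in * Cw_in - U_out * Cw_out  # J/kg
power = m_dot * w_specific                  # W

print(f"blade speeds : U_in = {U_in:.1f} m/s, U_out = {U_out:.1f} m/s")
print(f"specific work: {w_specific:.0f} J/kg")
print(f"shaft power  : {power / 1000:.1f} kW")
```

Because the blade speed is higher at the outer (exit) radius in an outward-flow machine, the exit whirl component must be kept small for the extracted work to stay positive, which is one reason the configuration is used mainly in small, single-stage units.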
Out-flow radial turbine
[ "Chemistry" ]
1,116
[ "Turbines", "Turbomachinery" ]
72,024,289
https://en.wikipedia.org/wiki/Gadopiclenol
Gadopiclenol, sold under the brand name Elucirem among others, is a contrast agent used with magnetic resonance imaging (MRI) to detect and visualize lesions with abnormal vascularity in the central nervous system and in the body. Gadopiclenol is a paramagnetic macrocyclic non-ionic complex of gadolinium. Gadopiclenol was approved for medical use in the United States in September 2022, and in the European Union in December 2023. Pharmacology Gadopiclenol has a higher relaxivity compared with standard gadolinium-based contrast agents (GBCAs). The higher relaxivity allows for a lower dose of gadopiclenol, reducing the total amount of gadolinium administered to the patient while preserving imaging quality. Gadopiclenol was approved by the FDA with a recommended dose of 0.05 mmol/kg for adults and pediatric patients aged 2 years and older. This is half the dose of standard macrocyclic GBCAs, which have a recommended dose of 0.1 mmol/kg. Society and culture Legal status Gadopiclenol was approved for medical use in the United States in September 2022 by the Food and Drug Administration. In October 2023, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Elucirem, intended for contrast-enhanced magnetic resonance imaging (MRI) to improve detection and, visualization of pathologies when diagnostic information is essential and not available with unenhanced MRI. The applicant for this medicinal product is Guerbet. In October 2023, the CHMP adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Vueway, intended for contrast-enhanced magnetic resonance imaging (MRI) to improve detection and, visualization of pathologies when diagnostic information is essential and not available with unenhanced MRI. The applicant for this medicinal product is Bracco Imaging S.p.A. Gadopiclenol was approved for medical use in the European Union in December 2023. Brand names Gadopiclenol is the international nonproprietary name. Gadopiclenol is sold under the brand names Elucirem and Vueway. References External links MRI contrast agents Pyridines Heterocyclic compounds with 2 rings Carboxylic acids Polyols Amides Gadolinium compounds
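The dose comparison in the pharmacology section reduces to simple weight-based arithmetic. The sketch below only computes total gadolinium in millimoles from the 0.05 mmol/kg and 0.1 mmol/kg figures quoted above; the 70 kg body weight is a hypothetical example, and no product-specific concentration or injection volume is assumed.

```python
def gadolinium_dose_mmol(weight_kg: float, dose_mmol_per_kg: float) -> float:
    """Total gadolinium administered for a weight-based dose."""
    return weight_kg * dose_mmol_per_kg

weight = 70.0   # hypothetical adult patient, kg
doses = [
    ("gadopiclenol, 0.05 mmol/kg", 0.05),
    ("standard macrocyclic GBCA, 0.1 mmol/kg", 0.10),
]

for label, dose in doses:
    total = gadolinium_dose_mmol(weight, dose)
    print(f"{label}: {total:.1f} mmol Gd for a {weight:.0f} kg patient")
# Prints 3.5 mmol versus 7.0 mmol, i.e. half the administered gadolinium.
```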
Gadopiclenol
[ "Chemistry" ]
526
[ "Carboxylic acids", "Amides", "Functional groups" ]
72,031,127
https://en.wikipedia.org/wiki/Lauren%20B.%20Hitchcock
Lauren Blakely Hitchcock (March 18, 1900 – October 15, 1972) was a chemical engineer and early opponent of air pollution. Hitchcock was born in Paris to Frank Lauren Hitchcock, a mathematician and physicist, and Margaret Johnson Blakely, and was raised in Belmont, Massachusetts. He received his undergraduate (1920), master's (1927), and doctorate degree (1933) from Massachusetts Institute of Technology. He taught at the University of Virginia from 1928 to 1935 and then moved into private industry. Hitchcock became president of the Southern California Air Pollution Foundation (APF) in 1954, which had been formed to fight smog. Hitchcock identified automobile exhaust and backyard incinerators as the cause and advised that significant steps would be needed--comparable to wartime efforts--to fight the problem in a meaningful way. In 1963, Hitchcock was appointed to the faculty at University at Buffalo, where his work papers are now archived. References External links Hitchcock (Lauren B.) Papers, 1923-1966, at University at Buffalo Archives 1972 deaths 1900 births Chemical engineers Massachusetts Institute of Technology alumni People from Belmont, Massachusetts
Lauren B. Hitchcock
[ "Chemistry", "Engineering" ]
223
[ "Chemical engineering", "Chemical engineers" ]
72,039,476
https://en.wikipedia.org/wiki/UPt3
{{DISPLAYTITLE:UPt3}} UPt3 is an inorganic binary intermetallic crystalline compound of platinum and uranium. Production It can be synthesised in the following ways: as an intermetallic compound, by direct fusion of the pure components according to stoichiometric calculations: U + 3Pt → UPt3; or by reduction of uranium dioxide with hydrogen in the presence of platinum: UO2 + 2H2 + 3Pt → UPt3 + 2H2O. Physical properties UPt3 forms crystals of hexagonal symmetry (some studies hypothesize a trigonal structure instead), space group P63/mmc, with cell parameters a = 0.5766 nm and c = 0.4898 nm (c being the distance between lattice planes), and a structure similar to nisnite (Ni3Sn) and MgCd3. The compound melts congruently at 1700 °C. The enthalpy of formation of the compound is -111 kJ/mol. At temperatures below 1 K it becomes superconducting, thought to be due to the presence of heavy fermions (the uranium atoms). References Platinum compounds Uranium compounds Intermetallics
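To make the quoted lattice parameters concrete, the sketch below computes the hexagonal unit-cell volume from a and c using the standard crystallographic relation V = (√3/2)·a²·c; the relation is textbook geometry rather than something stated in this entry.

```python
import math

# Unit-cell volume of a hexagonal lattice: V = (sqrt(3)/2) * a^2 * c,
# using the UPt3 lattice parameters quoted above.
a = 0.5766  # nm
c = 0.4898  # nm

volume_nm3 = (math.sqrt(3) / 2) * a**2 * c
print(f"Unit-cell volume ≈ {volume_nm3:.4f} nm^3")  # ≈ 0.141 nm^3
```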
UPt3
[ "Physics", "Chemistry", "Materials_science" ]
230
[ "Inorganic compounds", "Metallurgy", "Intermetallics", "Condensed matter physics", "Alloys" ]
58,083,234
https://en.wikipedia.org/wiki/Exterior%20calculus%20identities
This article summarizes several identities in exterior calculus, a mathematical notation used in differential geometry. Notation The following summarizes short definitions and notations that are used in this article. Manifold , are -dimensional smooth manifolds, where . That is, differentiable manifolds that can be differentiated enough times for the purposes on this page. , denote one point on each of the manifolds. The boundary of a manifold is a manifold , which has dimension . An orientation on induces an orientation on . We usually denote a submanifold by . Tangent and cotangent bundles , denote the tangent bundle and cotangent bundle, respectively, of the smooth manifold . , denote the tangent spaces of , at the points , , respectively. denotes the cotangent space of at the point . Sections of the tangent bundles, also known as vector fields, are typically denoted as such that at a point we have . Sections of the cotangent bundle, also known as differential 1-forms (or covector fields), are typically denoted as such that at a point we have . An alternative notation for is . Differential k-forms Differential -forms, which we refer to simply as -forms here, are differential forms defined on . We denote the set of all -forms as . For we usually write , , . -forms are just scalar functions on . denotes the constant -form equal to everywhere. Omitted elements of a sequence When we are given inputs and a -form we denote omission of the th entry by writing Exterior product The exterior product is also known as the wedge product. It is denoted by . The exterior product of a -form and an -form produce a -form . It can be written using the set of all permutations of such that as Directional derivative The directional derivative of a 0-form along a section is a 0-form denoted Exterior derivative The exterior derivative is defined for all . We generally omit the subscript when it is clear from the context. For a -form we have as the -form that gives the directional derivative, i.e., for the section we have , the directional derivative of along . For , Lie bracket The Lie bracket of sections is defined as the unique section that satisfies Tangent maps If is a smooth map, then defines a tangent map from to . It is defined through curves on with derivative such that Note that is a -form with values in . Pull-back If is a smooth map, then the pull-back of a -form is defined such that for any -dimensional submanifold The pull-back can also be expressed as Interior product Also known as the interior derivative, the interior product given a section is a map that effectively substitutes the first input of a -form with . If and then Metric tensor Given a nondegenerate bilinear form on each that is continuous on , the manifold becomes a pseudo-Riemannian manifold. We denote the metric tensor , defined pointwise by . We call the signature of the metric. A Riemannian manifold has , whereas Minkowski space has . Musical isomorphisms The metric tensor induces duality mappings between vector fields and one-forms: these are the musical isomorphisms flat and sharp . A section corresponds to the unique one-form such that for all sections , we have: A one-form corresponds to the unique vector field such that for all , we have: These mappings extend via multilinearity to mappings from -vector fields to -forms and -forms to -vector fields through Hodge star For an n-manifold M, the Hodge star operator is a duality mapping taking a -form to an -form . 
It can be defined in terms of an oriented frame for , orthonormal with respect to the given metric tensor : Co-differential operator The co-differential operator on an dimensional manifold is defined by The Hodge–Dirac operator, , is a Dirac operator studied in Clifford analysis. Oriented manifold An -dimensional orientable manifold is a manifold that can be equipped with a choice of an -form that is continuous and nonzero everywhere on . Volume form On an orientable manifold the canonical choice of a volume form given a metric tensor and an orientation is for any basis ordered to match the orientation. Area form Given a volume form and a unit normal vector we can also define an area form on the Bilinear form on k-forms A generalization of the metric tensor, the symmetric bilinear form between two -forms , is defined pointwise on by The -bilinear form for the space of -forms is defined by In the case of a Riemannian manifold, each is an inner product (i.e. is positive-definite). Lie derivative We define the Lie derivative through Cartan's magic formula for a given section as It describes the change of a -form along a flow associated to the section . Laplace–Beltrami operator The Laplacian is defined as . Important definitions Definitions on Ωk(M) is called... closed if exact if for some coclosed if coexact if for some harmonic if closed and coclosed Cohomology The -th cohomology of a manifold and its exterior derivative operators is given by Two closed -forms are in the same cohomology class if their difference is an exact form i.e. A closed surface of genus will have generators which are harmonic. Dirichlet energy Given , its Dirichlet energy is Properties Exterior derivative properties ( Stokes' theorem ) ( cochain complex ) for ( Leibniz rule ) for ( directional derivative ) for Exterior product properties for ( alternating ) ( associativity ) for ( compatibility of scalar multiplication ) ( distributivity over addition ) for when is odd or . The rank of a -form means the minimum number of monomial terms (exterior products of one-forms) that must be summed to produce . Pull-back properties ( commutative with ) ( distributes over ) ( contravariant ) for ( function composition ) Musical isomorphism properties Interior product properties ( nilpotent ) for ( Leibniz rule ) for for for Hodge star properties for ( linearity ) for , , and the sign of the metric ( inversion ) for ( commutative with -forms ) for ( Hodge star preserves -form norm ) ( Hodge dual of constant function 1 is the volume form ) Co-differential operator properties ( nilpotent ) and ( Hodge adjoint to ) if ( adjoint to ) In general, for Lie derivative properties ( commutative with ) ( commutative with ) ( Leibniz rule ) Exterior calculus identities if ( bilinear form ) ( Jacobi identity ) Dimensions If for for If is a basis, then a basis of is Exterior products Let and be vector fields. Projection and rejection ( interior product dual to wedge ) for If , then is the projection of onto the orthogonal complement of . is the rejection of , the remainder of the projection. thus ( projection–rejection decomposition ) Given the boundary with unit normal vector extracts the tangential component of the boundary. extracts the normal component of the boundary. Sum expressions given a positively oriented orthonormal frame . Hodge decomposition If , such that Poincaré lemma If a boundaryless manifold has trivial cohomology , then any closed is exact. This is the case if M is contractible. 
Relations to vector calculus Identities in Euclidean 3-space Let Euclidean metric . We use differential operator for . ( scalar triple product ) ( cross product ) if ( scalar product ) ( gradient ) ( directional derivative ) ( divergence ) ( curl ) where is the unit normal vector of and is the area form on . ( divergence theorem ) Lie derivatives ( -forms ) ( -forms ) if ( -forms on -manifolds ) if ( -forms ) References Calculus Mathematical identities Mathematics-related lists Differential forms Differential operators Generalizations of the derivative
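For reference, the block below restates in standard textbook form a few of the identities named above: the cochain property and Leibniz rule of the exterior derivative, Cartan's magic formula for the Lie derivative, the co-differential and Laplace–Beltrami operator, and Stokes' theorem. Sign conventions (for example, for the co-differential) vary between references, so these are conventional forms rather than a verbatim restatement of this article's notation.

```latex
\begin{align*}
  & d \circ d = 0, \qquad
    d(\alpha \wedge \beta) = d\alpha \wedge \beta + (-1)^{k}\,\alpha \wedge d\beta ,
    \quad \alpha \in \Omega^{k}(M)
    && \text{(cochain complex, Leibniz rule)} \\
  & \mathcal{L}_{X}\omega = d\,(i_{X}\omega) + i_{X}(d\omega)
    && \text{(Cartan's magic formula)} \\
  & \delta = (-1)^{k}\, {\star}^{-1}\, d\, {\star}
    \ \text{ on } \Omega^{k}(M), \qquad
    \Delta = d\delta + \delta d
    && \text{(co-differential, Laplace--Beltrami)} \\
  & \int_{M} d\omega = \int_{\partial M} \omega
    && \text{(Stokes' theorem)}
\end{align*}
```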
Exterior calculus identities
[ "Mathematics", "Engineering" ]
1,627
[ "Mathematical analysis", "Mathematical theorems", "Tensors", "Calculus", "Differential forms", "Mathematical identities", "Mathematical problems", "Differential operators", "Algebra" ]
58,084,624
https://en.wikipedia.org/wiki/Electromagnetic%20radio%20frequency%20convergence
Electromagnetic radio frequency (RF) convergence is a signal-processing paradigm that is utilized when several RF systems have to share a finite amount of resources among each other. RF convergence indicates the ideal operating point for the entire network of RF systems sharing resources such that the systems can efficiently share resources in a manner that's mutually beneficial. With communications spectral congestion recently becoming an increasingly important issue for the telecommunications sector, researchers have begun studying methods of achieving RF convergence for cooperative spectrum sharing between remote sensing systems (such as radar) and communications systems. Consequentially, RF convergence is commonly referred to as the operating point of a remote sensing and communications network at which spectral resources are jointly shared by all nodes (or systems) of the network in a mutually beneficial manner. Remote sensing and communications have conflicting requirements and functionality. Furthermore, spectrum sharing approaches between remote sensing and communications have traditionally been to separate or isolate both systems (temporally, spectrally or spatially). This results in stove pipe designs that lack back compatibility. Future of hybrid RF systems demand co-existence and cooperation between sensibilities with flexible system design and implementation. Hence, achieving RF convergence can be an incredibly complex and difficult problem to solve. Even for a simple network consisting of one remote sensing and communications system each, there are several independent factors in the time, space, and frequency domains that have to be taken into consideration in order to determine the optimal method to share spectral resources. For a given spectrum-space-time resource manifold, a practical network will incorporate numerous remote sensing modalities and communications systems, making the problem of achieving RF convergence intangible. Motivation Spectral congestion is caused by too many RF communications users concurrently accessing the electromagnetic spectrum. This congestion may degrade communications performance and decrease or even restrict access to spectral resources. Spectrum sharing between radar and communications applications was proposed as a way to alleviate the issues caused by spectral congestion. This has led to a greater emphasis being placed by researchers into investigating methods of radar-communications cooperation and co-design. Government agencies such as The Defense Advanced Research Projects Agency (DARPA) and others have begun funding research that investigates methods of coexistence for military radar systems, such that their performance will not be affected when sharing spectrum with communications systems. These agencies are also interested in fundamental research investigating the limits of cooperation between military radar and communications systems that in the long run will lead to better co-design methods that improve performance. However, the problems caused by spectrum sharing do not affect just military systems. There are a wide variety of remote sensing and communications applications that will be adversely affected by sharing spectrum with communications systems such as automotive radars, medical devices, 5G etc. Furthermore, applications like autonomous automobiles and smart home networks can stand to benefit substantially by cooperative remote sensing and communications. 
Consequently, researchers have started investigating fundamental approaches to joint remote sensing and communications. Remote sensing and communications fundamentally tend to conflict with one another. Remote sensing typically transmits known information into the environment (or channel) and measures a reflected response, which is then used to extract unknown information about the environment. For example, in the case of a radar system, the known information is the transmitted signal and the unknown information is the target channel that is desired to be estimated. On the other hand, a communications system basically sends unknown information into a known environment. Although a communications system does not know what the environment (also called a propagation channel) is beforehand, every system operates under the assumption that it is either previously estimated or its underlying probability distribution is known. Due to both systems’ conflicting nature, it is clear that when it comes to designing systems that can jointly sense and communicate, the solution is non-trivial. Due to difficulties in jointly sensing and communicating, both systems are often designed to be isolated in time, space, and/or frequency. Often, the only time legacy systems consider the other user in their mode of operation is through regulations, which are defined by agencies such as the FCC (United States), that constrain the other user's functionality. As spectral congestion continues to force both remote sensing and communications system to share spectral resources, achieving RF convergence is the solution to optimally function in an increasingly crowded wireless spectrum. Applications of joint sensing-communications systems Several applications can benefit from RF convergence research such as autonomous driving, cloud-based medical devices, light based applications etc. Each application may have different goals, requirements, and regulations which present different challenges to achieving RF convergence. A few examples of joint sensing-communications applications are listed below. Intelligent Transport Systems (Vehicle-to-vehicle Communications) Commercial Flight Control Communications & Military Radar Remote Medical Monitoring and Wearable Medical Sensors High Frequency Imaging and Communications Li-Fi and Lidar RFID & Asset Tracking Capable Wireless Sensor Networks Joint sensing-communications system design and integration Joint sensing-communications systems can be designed based on four different types of system integration. These different levels range from complete isolation, to complete co-design of systems. Some levels of integration, such as non-integration (or isolation) and coexistence, are not complex in nature and do not require an overhaul of how either sensing or communications systems operate. However, this lack of complexity also implies that joint systems employing such methods of system integration will not see significant performance benefits on achieving RF convergence. As such, non-integration and coexistence methods are more short-term solutions to the spectral congestion problem. In the long term, systems will have to be co-designed together to see significant improvements in joint system performance. Non-integration Systems employing non-integration methods are forced to operate in isolated regions of spectrum-space-time. 
However, in the real world, perfect isolation is not realizable and as a result, isolated systems will leak out and occupy segments of spectrum-space-time occupied by other systems. This is why systems that employ non-integration methods end up interfering with each other, and due to the philosophy of isolation being employed, each system makes no attempt at interference mitigation. Consequentially, each user's performance is degraded. Non-integration is one of the common and traditional solutions, and as highlighted here, is a part of the problem. Coexistence Remote sensing and communications systems that implement coexistence methods are forced to coexist with each other and treat each other as sources of interference. This means that unlike non-integration methods, each system tries to perform interference mitigation. However, since both systems are not cooperative and have no knowledge about the other system, any information required to perform such interference mitigation is not shared or known and has to be estimated. As a result, interference mitigation performance is limited since it is dependent on the estimated information. Cooperation Cooperative techniques, unlike coexistence methods, do not require that both sensing and communications systems treat each other as sources of interference and both systems share some knowledge or information. Cooperative methods exploit this joint knowledge to enable both systems to effectively perform interference mitigation and subsequently improve their performance. Systems willingly share necessary information with each other in order to facilitate mutual interference mitigation. Cooperative methods are the first step toward designing joint systems and achieving RF convergence as an effective solution to the spectral congestion problem.. Co-design Co-design methods consist of jointly considering radar and communications systems when designing new systems to optimally share spectral resources. Such systems are jointly designed from scratch to efficiently utilize the spectrum and can potentially result in performance benefits when compared to an isolated approach to system design. Co-designed systems are not necessarily physically co-located. When operating from the same platform, co-design includes the cases where radar beams and waveforms are modulated to convey communications messages, an approach which is typically referred to as dual function radar communications systems. For example, some recent experimentally demonstrated co-design approaches include: Tandem hopped radar and communications (THoRaCs), where undistorted orthogonal frequency-division multiplexing (OFDM) sub-carriers are embedded into a frequency modulation (FM) radar waveform Phase-attached radar/communication (PARC), where FM and continuous phase modulation (CPM) are merged into a single waveform Far-field radiated emission design (FFRED), where FM multiple-input and multiple-output (MIMO) waveforms produce separate radar and communication beams in different spatial directions See also Radar Communications Systems Co-channel interference Spectrum Management Radio resource management References Radio frequency propagation Electromagnetic components Electromagnetism
Electromagnetic radio frequency convergence
[ "Physics" ]
1,717
[ "Physical phenomena", "Electromagnetism", "Spectrum (physical sciences)", "Radio frequency propagation", "Electromagnetic spectrum", "Waves", "Fundamental interactions" ]
58,086,456
https://en.wikipedia.org/wiki/Nuclear%20Physics%20and%20Atomic%20Energy%20%28journal%29
Nuclear Physics and Atomic Energy is a quarterly peer-reviewed open-access scientific journal published by Institute for Nuclear Research of the National Academy of Sciences of Ukraine. It was established in 2000 and covers all aspects of nuclear physics, particle physics, atomic energy, radiation physics, plasma physics, radiobiology, radioecology, technique and experimental methods. The editor-in-chief is V.I. Slisenko (Institute for Nuclear Research). Articles are published in English, Ukrainian or Russian with titles and abstracts in all three languages. Abstracting and indexing The journal is abstracted and indexed in Scopus. References External links Quarterly journals Multilingual journals Nuclear physics journals Particle physics journals Plasma science journals Academic journals established in 2002
Nuclear Physics and Atomic Energy (journal)
[ "Physics" ]
148
[ "Plasma science journals", "Nuclear physics journals", "Plasma physics", "Nuclear and atomic physics stubs", "Particle physics", "Plasma physics stubs", "Nuclear physics", "Particle physics stubs", "Particle physics journals" ]
58,090,951
https://en.wikipedia.org/wiki/C17H14N2O2
{{DISPLAYTITLE:C17H14N2O2}} The molecular formula C17H14N2O2 (molar mass: 278.305 g/mol, exact mass: 278.1055 u) may refer to: Bimakalim Sudan Red G Molecular formulas
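The quoted molar mass follows directly from conventional atomic weights; the sketch below reproduces the arithmetic. The atomic-weight values are standard reference figures, not taken from this entry.

```python
# Molar mass of C17H14N2O2 from conventional atomic weights (g/mol).
atomic_weight = {"C": 12.0107, "H": 1.00794, "N": 14.0067, "O": 15.9994}
formula = {"C": 17, "H": 14, "N": 2, "O": 2}

molar_mass = sum(atomic_weight[el] * n for el, n in formula.items())
print(f"{molar_mass:.3f} g/mol")  # ≈ 278.305 g/mol, matching the value quoted above
```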
C17H14N2O2
[ "Physics", "Chemistry" ]
64
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
58,091,424
https://en.wikipedia.org/wiki/Pool%20fire
A pool fire is a type of diffusion flame where a layer of volatile liquid fuel is evaporating and burning. The fuel layer can be either on a horizontal solid substrate or floating on a higher-density liquid, usually water. Pool fires are an important scenario in fire process safety and combustion science, as large amounts of liquid fuels are stored and transported by different industries. Physical properties The most important physical parameter describing a pool fire is the heat release rate, which determines the minimum safe distance needed to avoid burns from thermal radiation. The heat release rate is limited by the rate of evaporation of the fuel, as the combustion reaction takes place in the gas phase. The evaporation rate, in turn, is determined by other physical parameters, such as the depth, surface area and shape of the pool, as well as the fuel boiling point, heat of vaporization, heat of combustion, thermal conductivity and others. A feedback loop exists between the heat release rate and evaporation rate, as a significant part of the energy released in the combustion reaction will be transmitted from the gas phase to the liquid fuel, and can supply the needed heat of vaporization. In the case of large pool fires, most of the heat transfer happens in the form of thermal radiation. Typical fuels in accidental pool fires, or experiments simulating them, include aliphatic hydrocarbons (n-heptane, liquefied propane gas), aromatic hydrocarbons (toluene, xylene), alcohols (methanol, ethanol) or mixtures thereof (kerosene). It is important that a pool fire involving a water-insoluble fuel is not attempted to be extinguished with water, as this can trigger explosive boiling and spattering of the burning material. Open-top tank fires are pool fires of industrial scale that occur when the roof of an atmospheric tank fails due to internal tank blast, followed by the contents of the tank catching fire. If a layer of water is present underneath the fuel and the fuel is a mixture of chemical species with several different boiling points, a boilover may eventually occur, greatly aggravating the fire. The boilover onset occurs as soon as a hot zone propagates down through the fuel, reaching the water and making it boil. See also Radiative transfer Fire safety References Types of fire Process safety
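Since the heat release rate is described above as the key hazard parameter and is set by the evaporation (mass burning) rate, the sketch below estimates it with the common engineering relation Q̇ = ṁ″·ΔHc·A. The burning-rate and heat-of-combustion numbers are illustrative order-of-magnitude assumptions for a heptane-like fuel, not values taken from this article.

```python
import math

# Rough heat-release-rate estimate for a circular pool fire: Q = m_dot_pp * dH_c * A,
# where m_dot_pp is the mass burning rate per unit area (kg/(m^2 s)),
# dH_c the heat of combustion (kJ/kg) and A the pool surface area (m^2).
# The numbers below are illustrative assumptions for a heptane-like fuel.

diameter = 2.0                  # m, example pool diameter
m_dot_pp = 0.08                 # kg/(m^2 s), assumed burning rate per unit area
heat_of_combustion = 44_600.0   # kJ/kg, assumed

area = math.pi * (diameter / 2) ** 2
heat_release_rate_kw = m_dot_pp * heat_of_combustion * area
print(f"Estimated heat release rate ≈ {heat_release_rate_kw / 1000:.1f} MW")  # ≈ 11 MW here
```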
Pool fire
[ "Chemistry", "Engineering" ]
481
[ "Chemical process engineering", "Safety engineering", "Process safety" ]
58,094,662
https://en.wikipedia.org/wiki/Optically%20detected%20magnetic%20resonance
In physics, optically detected magnetic resonance (ODMR) is a double resonance technique by which the electron spin state of a crystal defect may be optically pumped for spin initialisation and readout. Like electron paramagnetic resonance (EPR), ODMR makes use of the Zeeman effect in unpaired electrons. The negatively charged nitrogen vacancy centre (NV−) has been the target of considerable interest with regard to performing experiments using ODMR. ODMR of NV−s in diamond has applications in magnetometry and sensing, biomedical imaging, quantum information and the exploration of fundamental physics. NV ODMR The nitrogen vacancy defect in diamond consists of a single substitutional nitrogen atom (replacing one carbon atom) and an adjacent gap, or vacancy, in the lattice where normally a carbon atom would be located. The nitrogen vacancy occurs in three possible charge states: positive (NV+), neutral (NV0) and negative (NV−). As NV− is the only one of these charge states which has been shown to be ODMR active, it is often referred to simply as the NV. The energy level structure of the NV− consists of a triplet ground state, a triplet excited state and two singlet states. Under resonant optical excitation, the NV may be raised from the triplet ground state to the triplet excited state. The centre may then return to the ground state via two routes: by the emission of a photon at 637 nm in the zero phonon line (ZPL) (or at longer wavelength from the phonon sideband), or alternatively via the aforementioned singlet states through intersystem crossing and the emission of a 1042 nm photon. A return to the ground state via the latter route preferentially ends in the ms = 0 spin sublevel and necessarily results in a decrease in visible-wavelength fluorescence (as the emitted photon is in the infrared range). Microwave pumping at the zero-field resonance frequency of about 2.87 GHz transfers the centre from the ms = 0 sublevel into the degenerate ms = ±1 sublevels. The application of a magnetic field lifts this degeneracy, causing Zeeman splitting and a decrease of fluorescence at two resonant frequencies, given by ν± = D ± gμBB/h, where D is the zero-field splitting, h is the Planck constant, g is the electron g-factor, μB is the Bohr magneton and B is the component of the magnetic field along the NV axis. Sweeping the microwave field through these frequencies results in two characteristic dips in the observed fluorescence, the separation between which, Δν = 2gμBB/h, enables determination of the strength of the magnetic field B. Hyperfine splitting Further splitting in the fluorescence spectrum may occur due to the hyperfine interaction, which leads to further resonance conditions and corresponding spectral lines. In NV ODMR, this detailed structure usually originates from nitrogen and carbon-13 nuclei near the defect. These nuclei have small magnetic moments which couple to the NV electron spin, causing further splitting of its spectral lines. Hyperfine interactions in nitrogen-vacancy (NV) centres arise from nearby nuclear spins, primarily due to nitrogen (14N or 15N) and, in some cases, 13C atoms near the defect. These interactions are significant because they further split the energy levels of the NV center, resulting in additional resonances in the ODMR spectrum. The nitrogen atom in the NV centre can exist as either 14N (with nuclear spin I = 1) or 15N (with nuclear spin I = 1/2). The most common isotope, 14N, couples with the electron spin of the NV center, leading to a hyperfine splitting of the electron spin states into three sub-levels.
The interaction of the NV electron spin with the 14N nuclear spin can be described by a hyperfine Hamiltonian of the form H = D Sz² + A∥ Sz Iz + A⊥ (Sx Ix + Sy Iy), where S represents the NV electron spin and I represents the nitrogen nuclear spin. The size of the splitting is set by the axial and transverse hyperfine coupling constants A∥ and A⊥, which are of the order of a few MHz. The splitting can be observed as three peaks in the hyperfine-resolved ODMR spectrum. In NV centres, hyperfine splitting arises from the interaction between the magnetic moment of the NV electron spin and the nuclear spin magnetic moments. The hyperfine-split resonance frequencies also depend upon the magnitude and orientation of the external magnetic field. To perform hyperfine-resolved ODMR, a single-NV ODMR experiment is generally preferable. If 15N is present instead of 14N, each resonance splits into two sublevels instead. Nearby 13C atoms (with nuclear spin I = 1/2) can also interact with the NV centre. 13C atoms are randomly distributed in diamond and have a natural abundance of about 1.1%. When located near the NV center, they induce additional fine structure in the ODMR signal. The coupling strength varies with the position of the 13C nuclei relative to the NV center. References Bibliography Quantum mechanics Materials science Scientific techniques Spectroscopy
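As a numerical illustration of the Zeeman relation above, the following sketch computes the two ODMR dip frequencies for a magnetic field aligned with the NV axis. The zero-field splitting and g-factor are the commonly quoted approximate values, and the simple D ± gμBB/h form ignores hyperfine structure and any transverse field component.

```python
# ODMR resonance frequencies of the NV ground state for a field along the NV axis:
#   nu_± = D ± g * mu_B * B / h
# Ignores hyperfine splitting and transverse field components.

PLANCK = 6.62607015e-34           # J*s
BOHR_MAGNETON = 9.2740100783e-24  # J/T
G_FACTOR = 2.003                  # NV electron g-factor (approximate)
D_ZFS = 2.87e9                    # Hz, zero-field splitting (approximate)

def odmr_resonances(b_field_tesla: float) -> tuple[float, float]:
    zeeman = G_FACTOR * BOHR_MAGNETON * b_field_tesla / PLANCK
    return D_ZFS - zeeman, D_ZFS + zeeman

lo, hi = odmr_resonances(1e-3)  # 1 mT along the NV axis
print(f"{lo/1e9:.4f} GHz, {hi/1e9:.4f} GHz")  # ≈ 2.8420 GHz, 2.8980 GHz
```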
Optically detected magnetic resonance
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
975
[ "Applied and interdisciplinary physics", "Spectrum (physical sciences)", "Molecular physics", "Instrumental analysis", "Theoretical physics", "Materials science", "Quantum mechanics", "nan", "Spectroscopy" ]
59,730,114
https://en.wikipedia.org/wiki/Parallel%20external%20memory
In computer science, a parallel external memory (PEM) model is a cache-aware, external-memory abstract machine. It is the parallel-computing analogy to the single-processor external memory (EM) model. In a similar way, it is the cache-aware analogy to the parallel random-access machine (PRAM). The PEM model consists of a number of processors, together with their respective private caches and a shared main memory. Model Definition The PEM model is a combination of the EM model and the PRAM model. The PEM model is a computation model which consists of processors and a two-level memory hierarchy. This memory hierarchy consists of a large external memory (main memory) of size and small internal memories (caches). The processors share the main memory. Each cache is exclusive to a single processor. A processor can't access another’s cache. The caches have a size which is partitioned in blocks of size . The processors can only perform operations on data which are in their cache. The data can be transferred between the main memory and the cache in blocks of size . I/O complexity The complexity measure of the PEM model is the I/O complexity, which determines the number of parallel blocks transfers between the main memory and the cache. During a parallel block transfer each processor can transfer a block. So if processors load parallelly a data block of size form the main memory into their caches, it is considered as an I/O complexity of not . A program in the PEM model should minimize the data transfer between main memory and caches and operate as much as possible on the data in the caches. Read/write conflicts In the PEM model, there is no direct communication network between the P processors. The processors have to communicate indirectly over the main memory. If multiple processors try to access the same block in main memory concurrently read/write conflicts occur. Like in the PRAM model, three different variations of this problem are considered: Concurrent Read Concurrent Write (CRCW): The same block in main memory can be read and written by multiple processors concurrently. Concurrent Read Exclusive Write (CREW): The same block in main memory can be read by multiple processors concurrently. Only one processor can write to a block at a time. Exclusive Read Exclusive Write (EREW): The same block in main memory cannot be read or written by multiple processors concurrently. Only one processor can access a block at a time. The following two algorithms solve the CREW and EREW problem if processors write to the same block simultaneously. A first approach is to serialize the write operations. Only one processor after the other writes to the block. This results in a total of parallel block transfers. A second approach needs parallel block transfers and an additional block for each processor. The main idea is to schedule the write operations in a binary tree fashion and gradually combine the data into a single block. In the first round processors combine their blocks into blocks. Then processors combine the blocks into . This procedure is continued until all the data is combined in one block. Comparison to other models Examples Multiway partitioning Let be a vector of d-1 pivots sorted in increasing order. Let be an unordered set of N elements. A d-way partition of is a set , where and for . is called the i-th bucket. The number of elements in is greater than and smaller than . In the following algorithm the input is partitioned into N/P-sized contiguous segments in main memory. 
The processor i primarily works on the segment . The multiway partitioning algorithm (PEM_DIST_SORT) uses a PEM prefix sum algorithm to calculate the prefix sum with the optimal I/O complexity. This algorithm simulates an optimal PRAM prefix sum algorithm. // Compute parallelly a d-way partition on the data segments for each processor i in parallel do Read the vector of pivots into the cache. Partition into d buckets and let vector be the number of items in each bucket. end for Run PEM prefix sum on the set of vectors simultaneously. // Use the prefix sum vector to compute the final partition for each processor i in parallel do Write elements into memory locations offset appropriately by and . end for Using the prefix sums stored in the last processor P calculates the vector of bucket sizes and returns it. If the vector of pivots M and the input set A are located in contiguous memory, then the d-way partitioning problem can be solved in the PEM model with I/O complexity. The content of the final buckets have to be located in contiguous memory. Selection The selection problem is about finding the k-th smallest item in an unordered list of size . The following code makes use of PRAMSORT which is a PRAM optimal sorting algorithm which runs in , and SELECT, which is a cache optimal single-processor selection algorithm. if then return end if //Find median of each for each processor in parallel do end for // Sort medians // Partition around median of medians if then return else return end if Under the assumption that the input is stored in contiguous memory, PEMSELECT has an I/O complexity of: Distribution sort Distribution sort partitions an input list of size into disjoint buckets of similar size. Every bucket is then sorted recursively and the results are combined into a fully sorted list. If the task is delegated to a cache-optimal single-processor sorting algorithm. Otherwise the following algorithm is used: // Sample elements from for each processor in parallel do if then Load in -sized pages and sort pages individually else Load and sort as single page end if Pick every 'th element from each sorted memory page into contiguous vector of samples end for in parallel do Combine vectors into a single contiguous vector Make copies of : end do // Find pivots for to in parallel do end for Pack pivots in contiguous array // Partition around pivots into buckets // Recursively sort buckets for to in parallel do recursively call on bucket of size using processors responsible for elements in bucket end for The I/O complexity of PEMDISTSORT is: where If the number of processors is chosen that and the I/O complexity is then: Other PEM algorithms Where is the time it takes to sort items with processors in the PEM model. See also Parallel random-access machine (PRAM) Random-access machine (RAM) External memory (EM) References Algorithms Models of computation Analysis of parallel algorithms External memory algorithms Cache (computing)
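To make the multiway-partitioning scheme concrete, here is a plain Python sketch of the same idea run sequentially on a single machine: each of P simulated "processors" counts bucket sizes on its contiguous segment, a prefix sum over the per-processor counts yields the write offsets, and each processor then scatters its elements into the output array. The function names and the sequential simulation of parallel steps are illustrative, and the sketch does not model the PEM I/O-cost accounting.

```python
from bisect import bisect_right

def multiway_partition(A, pivots, P):
    """Partition list A into d = len(pivots) + 1 buckets using P simulated
    processors, mirroring the count / prefix-sum / scatter structure of the
    PEM partitioning algorithm described above (sequential illustration)."""
    n, d = len(A), len(pivots) + 1
    seg = (n + P - 1) // P  # size of each processor's contiguous segment

    # Step 1: each processor counts how many of its items fall in each bucket.
    counts = [[0] * d for _ in range(P)]
    for i in range(P):
        for x in A[i * seg:(i + 1) * seg]:
            counts[i][bisect_right(pivots, x)] += 1

    # Step 2: prefix sums over the per-processor counts give each processor's
    # write offset inside every bucket, plus the total bucket sizes.
    bucket_sizes = [sum(counts[i][j] for i in range(P)) for j in range(d)]
    bucket_starts = [0] * d
    for j in range(1, d):
        bucket_starts[j] = bucket_starts[j - 1] + bucket_sizes[j - 1]

    offsets = [[0] * d for _ in range(P)]
    for j in range(d):
        running = bucket_starts[j]
        for i in range(P):
            offsets[i][j] = running
            running += counts[i][j]

    # Step 3: each processor scatters its elements to the computed offsets.
    out = [None] * n
    cursor = [row[:] for row in offsets]
    for i in range(P):
        for x in A[i * seg:(i + 1) * seg]:
            j = bisect_right(pivots, x)
            out[cursor[i][j]] = x
            cursor[i][j] += 1
    return out, bucket_sizes

data = [9, 1, 7, 3, 8, 2, 6, 4, 5, 0]
partitioned, sizes = multiway_partition(data, pivots=[3, 7], P=2)
print(partitioned, sizes)  # three buckets: x < 3, 3 <= x < 7, x >= 7
```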
Parallel external memory
[ "Mathematics" ]
1,344
[ "Applied mathematics", "Algorithms", "Mathematical logic" ]
59,730,509
https://en.wikipedia.org/wiki/Microprotein
A microprotein (miP) is a small protein encoded from a small open reading frame (sORF), also known as sORF-encoded protein (SEP). They are a class of protein with a single protein domain that are related to multidomain proteins. Microproteins regulate larger multidomain proteins at the post-translational level. Microproteins are analogous to microRNAs (miRNAs) and heterodimerize with their targets causing dominant and negative effects. In animals and plants, microproteins have been found to greatly influence biological processes. Because of microproteins' dominant effects on their targets, microproteins are currently being studied for potential applications in biotechnology. History The first microprotein (miP) discovered was during a research in the early 1990s on genes for basic helix–loop–helix (bHLH) transcription factors from a murine erythroleukaemia cell cDNA library. The protein was found to be an inhibitor of DNA binding (ID protein), and it negatively regulated the transcription factor complex. The ID protein was 16 kDa and consisted of a helix-loop-helix (HLH) domain. The microprotein formed bHLH/HLH heterodimers which disrupted the functional basic helix–loop–helix (bHLH) homodimers. The first microprotein discovered in plants was the LITTLE ZIPPER (ZPR) protein. The LITTLE ZIPPER protein contains a leucine zipper domain but does not have the domains required for DNA binding and transcription activation. Thus, LITTLE ZIPPER protein is analogous to the ID protein. Despite not all proteins being small, in 2011, this class of protein was given the name microproteins because their negative regulatory actions are similar to those of miRNAs. Evolutionarily, the ID protein or proteins similar to ID found in all animals. In plants, microproteins are only found in higher order. However, the homeodomain transcription factors that belong to the three-amino-acid loop-extension (TALE) family are targets of microproteins, and these homeodomain proteins are conserved in animals, plants, and fungi. Structure Microproteins are generally small proteins with a single protein domain. The active form of microproteins are translated from smORF. The smORF codons which microproteins are translated from can be less than 100 codons. However, not all microproteins are small, and the name was given because their actions are analogous to miRNAs. Function The function of microproteins is post-translational regulators. Microproteins disrupt the formation of heterodimeric, homodimeric, or multimeric complexes. Furthermore, microproteins can interact with any protein that require functional dimers to function normally. The primary targets of microproteins are transcription factors that bind to DNA as dimers. Microproteins regulate these complexes by creating homotypic dimers with the targets and inhibit protein complex function. There are two types of miP inhibitions: homotypic miP inhibition and heterotypic miP inhibition. In homotypic miP inhibition, microproteins interact with proteins with similar protein-protein interaction (PPI) domain. In heterotypic miP inhibition, microproteins interact with proteins with different but compatible PPI domain. In both types of inhibition, microproteins interfere and prevent the PPI domains from interacting with their normal proteins. References Protein classification Post-translational modification
Microprotein
[ "Chemistry", "Biology" ]
731
[ "Post-translational modification", "Gene expression", "Protein classification", "Biochemical reactions" ]
63,331,699
https://en.wikipedia.org/wiki/Recursive%20islands%20and%20lakes
A recursive island or lake, also known as a nested island or lake, is an island or a lake that lies within a lake or an island. For the purposes of defining recursion, small continental land masses such as Madagascar and New Zealand count as islands, while large continental land masses do not. Islands found within lakes in these countries are often recursive islands because the lake itself is located on an island. Recursive islands Islands in lakes Islands in lakes on islands There are nearly 1,000 islands in lakes on islands in Finland alone. Islands in lakes on islands in lakes Islands in lakes on islands in lakes on islands Islands in lakes on islands in lakes on islands in lakes Moose Boulder was claimed to exist in the seasonal pond of Moose Flats on Ryan Island in Siskiwit Lake on Isle Royale in Lake Superior in the United States. In 2020, an expedition to the island found that the boulder, along with the aforementioned seasonal pond, is potentially a hoax. Recursive lakes Lakes on islands Lakes on islands in lakes Lakes on islands in lakes on islands Lakes on islands in lakes on islands in lakes. Fourth-order recursive lakes are extremely rare; only two are currently known. See also List of endorheic basins Volcanic crater lake List of islands by area List of lakes by area List of islands by population Notes References Coastal and oceanic landforms Coastal geography Lake islands Lakes Lists of islands Recursion
Recursive islands and lakes
[ "Mathematics" ]
294
[ "Mathematical logic", "Recursion" ]
63,337,149
https://en.wikipedia.org/wiki/International%20Linear%20Algebra%20Society
The International Linear Algebra Society (ILAS) is a professional mathematical society organized to promote research and education in linear algebra, matrix theory and matrix computation. It serves the international community through conferences, publications, prizes and lectures. Membership in ILAS is open to all mathematicians and scientists interested in furthering its aims and participating in its activities. History ILAS was founded in 1989. Its genesis occurred at the Combinatorial Matrix Analysis Conference held at the University of Victoria in British Columbia, Canada, May 20–23, 1987, hosted by Dale Olesky and Pauline van den Driessche. ILAS was initially known as the International Matrix Group, founded in 1987. The founding officers of ILAS were Hans Schneider, President; Robert C. Thompson, Vice President; Daniel Hershkowitz, Secretary; and James R. Weaver, Treasurer. ILAS Conferences The inaugural meeting of ILAS took place at Brigham Young University (including one day at the Sundance Mountain Resort) in Provo, Utah, USA, from August 12–15, 1989. The organizing committee consisted of Wayne Barrett, Daniel Hershkowitz, Charles Johnson, Hans Schneider, and Robert C. Thompson. Much additional support came from Don Robinson, Chair of the BYU Mathematics Department, and James R. Weaver, ILAS Treasurer. The conference received support from Brigham Young University, the National Security Agency, and the National Science Foundation. There were 85 in attendance at the conference from 15 countries including Olga Taussky-Todd, a renowned mathematician in Matrix Theory. The proceedings of the Conference appeared in volume 150 of the journal Linear Algebra and Its Applications. The 2nd ILAS conference was held in Lisbon, Portugal, August 3–7, 1992. The chair of the organizing committee was José Dias da Silva. There were 150 participants from 27 countries and the conference was supported by 11 different organizations. The proceedings of the conference can be found in volumes 197-198 of Linear Algebra and Its Applications. ILAS conferences were held the next 4 years, alternating between the United States and Europe, before beginning the standard pattern of holding the Conference two of every three years (with a few exceptions). The number of participants at each ILAS conference has grown steadily through the years. The first ILAS conference outside of the United States and Europe was held in Haifa, Israel in 2001. The first in the Far East was in Shanghai in 2007 and the first in Latin America was in Cancun, Mexico in 2008. The complete list of locations hosting ILAS conferences follows: 1. Provo, Utah, USA (1989) 2. Lisbon, Portugal (1992) 3. Pensacola, Florida, USA (1993) 4. Rotterdam, The Netherlands (1994) 5. Atlanta, Georgia, USA (1995) 6. Chemnitz, Germany (1996) 7. Madison, Wisconsin, USA (1998) 8. Barcelona, Spain (1999) 9. Haifa, Israel (2001) 10. Auburn, Alabama, USA (2002) 11. Coimbra, Portugal (2004) 12. Regina, Saskatchewan, Canada (2005) 13. Amsterdam, the Netherlands (2006) 14. Shanghai, China (2007) 15. Cancun, Mexico (2008) 16. Pisa, Italy (2010) 17. Braunschweig, Germany (2011) 18. Providence, Rhode Island, USA (2013) 19. Seoul, Korea (2014) 20. Leuven, Belgium (2016) 21. Ames, Iowa, USA (2017) 22. Rio de Janeiro, Brazil (2019) 23. Virtual (originally planned for New Orleans, Louisiana, USA) (2021) 24. Galway, Ireland (2022) 25. Madrid, Spain (2023) 26. Kaohsiung, Taiwan (2025) Prizes and Special Lectures ILAS has three prizes named after giants in Linear Algebra. 
The Hans Schneider Prize. A distinctive feature of the 3rd ILAS meeting held at the University of West Florida in Pensacola, Florida, March 17–20, 1993, was the institution of the Hans Schneider Prize. This prize was initiated thanks to a donation to ILAS from Hans Schneider, the first president of ILAS and a founding editor of the journal Linear Algebra and Its Applications. Typically, the prize is awarded every 3 years and has evolved as a prize to recognize a person's career. The ILAS Taussky–Todd Prize. Olga Taussky-Todd and John Todd have had a decisive impact on the development of theoretical and numerical linear algebra for over half a century. The ILAS Taussky–Todd Prize honors them for their many and varied mathematical achievements and for their efforts in promoting linear algebra and matrix theory. The prize is awarded once every three to four years recognizing a linear algebra researcher in their mid career. The ILAS Taussky–Todd Prize was originally referred to as the Taussky–Todd lecture, and was instituted at the 3rd ILAS meeting held at the University of West Florida in Pensacola, Florida, March 17–20, 1993. The ILAS Richard A. Brualdi Early Career Prize. The prize is named for Richard A. Brualdi, who has had a major impact on the field, especially in combinatorial matrix theory. In addition, he has been instrumental to the success of ILAS since its inception. The ILAS Richard A. Brualdi Early Career Prize was instituted in 2021 and is awarded every three years to an outstanding early career researcher in the field of linear algebra, for distinguished contributions to the field. In addition ILAS awards Special Lectures at ILAS conferences as well as conferences of collaborating mathematics organizations. Publications ILAS publishes an electronic journal - the Electronic Journal of Linear Algebra (ELA), founded in 1996. The first Editors-in-Chief were Volker Mehrmann and Daniel Hershkowitz. ELA is a platinum open access journal, meaning that it is free to all: no subscription and no article processing fee or page charges. ELA is an all-electronic journal that welcomes high quality mathematical articles that contribute new insights to matrix analysis and the various aspects of linear algebra and its applications. ELA sets high standards for refereeing while using conventional refereeing of articles that is carried out electronically. ILAS also produces and distributes IMAGE, a semiannual electronic bulletin founded in 1988 with Robert C. Thompson as its first Editor. IMAGE contains: essays related to linear algebra activities; feature articles; interviews of linear algebra experts; book reviews; brief reports on conferences; ILAS business notices; announcements of upcoming workshops and conferences; problems and solutions; and news about individual members. Presidents Hans Schneider, 1987–1996 Richard A. Brualdi, 1996–2002 Daniel Hershkowitz, 2002–2008 Stephen Kirkland, 2008–2014 Peter Šemrl, 2014–2020 Daniel B. Szyld, 2020–present Collaborations with other mathematics organizations ILAS collaborates with the Society for Industrial and Applied Mathematics (SIAM), the American Mathematical Society (AMS) and the International Workshop on Operator Theory and its Applications (IWOTA). The collaboration with SIAM started in 1999. The SIAM Activity Group on Linear Algebra (SIAG/LA) holds a conference every three years (when the year minus 2000 is divisible by 3). 
As part of the agreement, and to encourage interaction between ILAS and SIAG/LA members, the two societies do not hold conferences in the same year. As a result, ILAS holds conferences two out of every three years. In addition, the two societies exchange speakers with ILAS sponsoring two ILAS speakers at every triennial SIAM Applied Linear Algebra (SIAM ALA) meeting (organized by SIAG/LA) and with SIAM sponsoring a SIAM speaker at every ILAS conference. The first ILAS speakers at a SIAM ALA meeting were Hans Schneider and Hugo Woerdeman in 2000, and the first SIAM speakers at an ILAS conference were Michele Benzi and Misha Kilmer in 2002. The collaboration with AMS started in late 2020 with the establishment of ILAS as a partner in the Joint Mathematics Meetings (JMM). In this capacity ILAS will support a speaker for the "ILAS Lecture" at the JMM to be selected by ILAS. In addition, at least four special sessions at the JMM will be identified as ILAS special sessions, the contents of which will be determined by ILAS. The partnership took effect starting with the JMM 2022 held virtually. The collaboration with IWOTA started in 2017 with the establishment of the Israel Gohberg ILAS-IWOTA Lecture, which is funded by donations. This lecture series consists of biennial lectures either at an ILAS conference or at an IWOTA meeting. Israel Gohberg was the founding president of IWOTA and an active member of ILAS. The first Israel Gohberg ILAS-IWOTA Lecturer was Vern Paulsen at the 2021 IWOTA Lancaster UK meeting. References External links International Linear Algebra Society (ILAS) home page Electronic Journal of Linear Algebra (ELA) home page Linear algebra Matrix theory Mathematical societies Mathematics conferences Organizations established in 1989
International Linear Algebra Society
[ "Mathematics" ]
1,858
[ "Linear algebra", "Algebra" ]
73,452,946
https://en.wikipedia.org/wiki/Proselenos
Proselenos () is the concept referring to the belief that the ancient Arcadians were a group of people older than the Moon (Selene in Greek) itself. This aspect of the Arcadian identity was in opposition to the other groups inhabiting the Peloponnese, who claimed to be descended from the Dorians. There were some other exceptions, however, such as the Eleans (who were thought to descend from Aetolia), the Cynurians (who adapted Dorian elements into their local identity), and the Achaeans (who were thought to have relocated to the northern Peloponnese following the Dorian invasion of the peninsula). The antiquity of the Arcadians was also shown in their mythical ancestry, claiming that they descended from the hero Pelasgus, who sprung from the earth to become their ancestor and whose son was Lycaon, the grandfather of the region's eponymous hero, Arcas. Concept origin The oldest reference to the story of the Arcadians pre-dating the moon is attested in the Classical period (479 - 323 BCE) of Greek history from the fifth century BCE historian, Hippys of Rhegium. The fragment from Hippys is preserved in the later (sixth century CE) work of Stephanus of Byzantium. The term is also applied by the fourth century BCE philosopher Aristotle, as well as Eudoxos of Cnidus, a fourth century BCE astronomer and mathematician. An unknown fifth century poet also mentioned proselenaios as an epithet of Pelasgus, the ancestor of the Arcadians; Borgeaud and Nielsen proposes that this may have been the fifth century Theban lyric poet, Pindar. Later sources The idea that the Arcadians were older than the moon is also referenced in the work of later writers, such as Apollonius of Rhodes, Statius and Lucian. It is also worth noting that Plutarch mentions that the Arcadians shared a kinship with oak trees, as they were believed to be the first men who sprung from the earth, already when the first oak was planted, further illustrating the great antiquity of the Arcadians as the people who preceded the moon. Notes References Bibliography Arcadian mythology Moon myths
Proselenos
[ "Astronomy" ]
451
[ "Astronomical myths", "Moon myths" ]
73,453,650
https://en.wikipedia.org/wiki/Institute%20of%20Theoretical%20Physics%2C%20Saclay
The Institute of Theoretical Physics ("Institut de physique théorique") (IPhT) is a research institute of the Direction of Fundamental Research (DRF) of the French Alternative Energies and Atomic Energy Commission (CEA). The Institute is also a joint research unit of the Institute of Physics (INP), a subsidiary of the French National Center for Scientific Research (CNRS). It is associated with Paris-Saclay University. IPhT is situated on the Saclay Plateau, south of Paris. History The IPhT was created in 1963 as the "Service de Physique Théorique" (SPhT), as the successor of the "Service de Physique Mathématique" (SPM) of CEA. It became an Institute (and took the name IPhT) in 2008. It was initially devoted to nuclear physics and superconductivity. Particle physics quickly became an important theme. After its move in 1968 from the main CEA-Saclay site to the present site of Orme des Merisiers, quantum field theory became a major research topic, together with statistical physics. Subsequently, new topics such as conformal theories and matrix models, cosmology and string theory, condensed matter physics and out-of-equilibrium statistical physics, and quantum information found their place there. IPhT is usually considered one of the top theoretical physics research institutes in Europe. Present research themes Research at IPhT covers most areas of theoretical physics: Cosmology and astroparticle physics Particle physics: quantum chromodynamics, hadron physics, collider physics, scattering amplitudes, physics beyond the standard model Quantum gravity, string theory Mathematical physics: quantum field theory, conformal field theory, integrable systems, topological recursion, combinatorics, random geometries Condensed matter physics Statistical physics: out-of-equilibrium systems, complex systems, network theory, biophysics Quantum information science IPhT organizes the "Itzykson Conference" each spring, an international meeting centered on a theme that changes every year. Its name is a tribute to Claude Itzykson, a former IPhT researcher. Teaching IPhT is not part of a teaching department, but graduate and postgraduate courses in theoretical physics are organized at IPhT. They are aimed at graduate students and researchers of the Paris area. The lecturers are researchers from IPhT or other Paris-area labs, and senior visitors of IPhT. Most courses are part of the courses of the Ecole Doctorale Physique en Ile de France (EDPIF). IPhT hosts numerous master's and graduate students, as well as postdoctoral researchers. Research dissemination and outreach Talks and conferences of IPhT are usually available by live streaming and are available for replay on the IPhT YouTube channel. Outreach talks and presentations for the general public are also available there. Researchers from IPhT have published many scientific books, aimed at students and researchers as well as at the general public.
Researchers of IPhT Some researchers who held permanent positions at SPM/SPhT/IPhT: Claude Bloch, Édouard Brézin, Gilles Cohen-Tannoudji, Cirano de Dominicis, Bernard Derrida, Claude Itzykson, Stanislas Leibler, Madan Lal Mehta, Albert Messiah, Stéphane Nonnenmacher, Yves Pomeau, Volker Schomerus, Raymond Stora, Lenka Zdeborová, Jean Zinn-Justin, Jean-Bernard Zuber Some researchers who are presently members of IPhT:(2023) Roger Balian, Jean-Paul Blaizot, François David, Philippe Di Francesco, Michel Gaudin, David Kosower, Vincent Pasquier, Mannque Rho, Hubert Saleur, Pierre Vanhove, André Voros Directors of IPhT Claude Bloch: 1963–1971 Cirano de Dominicis: 1971–1979 Roger Balian: 1979–1987 André Morel: 1987–1992 Jean Zinn–Justin: 1993–1998 Jean–Paul Blaizot: 1998–2004 Henri Orland: 2004–2011 Michel Bauer: 2011–2016 François David: 2017–2021 Catherine Pépin: 2022– Campus The IPhT is located on the Plateau de Saclay, about 20 km southwest of Paris, on the Orme des Merisiers site, which is an annex of the main CEA-Saclay center. References External links IPhT lectures web site IPhT YouTube channel (seminars, talks, outreach presentations) French Alternative Energies and Atomic Energy Commission Theoretical physics French National Centre for Scientific Research Paris-Saclay University 1963 establishments in France
Institute of Theoretical Physics, Saclay
[ "Physics" ]
941
[ "Theoretical physics" ]
73,464,124
https://en.wikipedia.org/wiki/Capronia%20cogtii
Capronia cogtii is a rare species of lichenicolous (lichen-dwelling) fungus in the family Herpotrichiellaceae. Found in northern Mongolia, it was described as a new species in 2019. Taxonomy Capronia cogtii belongs to the fungal family Herpotrichiellaceae and is characterized by its hyaline ascospores, which distinguish it from most other Capronia species that have pigmented ascospores. The new species is most similar to C. amylacea, C. hypotrachynae, C. normandinae, and C. pseudonormandinae, but can be distinguished by its smaller ascomata, longer hyaline ascospores, and different host genus, Vahliella (Vahliellaceae), compared to Peltigera (Peltigeraceae). The species epithet cogtii was given in honor of the late Professor Ulzii Cogt, who was a prominent figure in Mongolian lichenology. Description The vegetative hyphae of Capronia cogtii are pale brown, 2–3.5 μm wide, septate, and ramify from the lower parts of the . The ascomata are , blackish, more or less glossy, roughly spherical to ovoid, and occasionally shortly at the apex. They are above, ostiolate, 90–150 μm in diameter, and have a rough surface. The setae are dark brown, straight, not branched, 15–60 μm tall, 4–5 μm wide at base, and arise from a discrete dark foot-cell. The exciple is made of medium to dark brown cells outwardly, and somewhat hyaline, strongly elongated, radially compressed cells inwardly. The are hyaline, measure 10–20 by 2–3 μm, septate, and are not branched. The ascospores are hyaline, to very narrowly , and typically have 3 transverse septa (sometimes as few as 1 or as many or 5). They are usually constricted at the septa, smooth-walled, and overlappingly crowded in the ascus. Habitat and distribution Capronia cogtii is known only from the holotype, which was collected on the thallus of Vahliella leucophaea and occasionally on adjacent decaying mosses in sparse Larix sibirica mountain forest in northern Mongolia. The host lichen, Vahliella leucophaea, is morphologically similar to some species of Pannariaceae and has long been placed in this family. This is the first species of Capronia known to grow on members of Vahliellaceae. Two Capronia species are known to grow on Pannariaceae hosts, C. magellanica, growing on species of Fuscopannaria, and C. paranectrioides, growing on species of Erioderma. Capronia cogtii is also similar to C. andina and C. solitaria but can be distinguished by its hyaline ascospores and larger ascomata, respectively. References Eurotiomycetes Lichenicolous fungi Fungi described in 2019 Fungi of Asia Taxa named by Mikhail Petrovich Zhurbenko Fungus species
Capronia cogtii
[ "Biology" ]
677
[ "Fungi", "Fungus species" ]
76,395,086
https://en.wikipedia.org/wiki/NGC%202523B
NGC 2523B is a spiral galaxy located around 186 million light-years away in the constellation Camelopardalis. The discovery of this galaxy is credited to Philip C. Keenan, in his paper Studies of Extra-Galactic Nebulae. Part I: Determination of Magnitudes, published in The Astrophysical Journal in 1935. According to A.M. Garcia, NGC 2523B is a member of the five member UGC 4057 galaxy group (also known as LGG 149). The other galaxies in the group are NGC 2523, UGC 4014, UGC 4028, and UGC 4057. See also List of NGC objects (2001–3000) References External links 2523B spiral galaxies Camelopardalis 023025 +12-08-030 04259 08072+7342 Astronomical objects discovered in 1935
NGC 2523B
[ "Astronomy" ]
176
[ "Camelopardalis", "Constellations" ]
76,401,012
https://en.wikipedia.org/wiki/Topological%20deep%20learning
Topological deep learning (TDL) is a research field that extends deep learning to handle complex, non-Euclidean data structures. Traditional deep learning models, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), excel in processing data on regular grids and sequences. However, scientific and real-world data often exhibit more intricate data domains encountered in scientific computations, including point clouds, meshes, time series, scalar fields, graphs, or general topological spaces like simplicial complexes and CW complexes. TDL addresses this by incorporating topological concepts to process data with higher-order relationships, such as interactions among multiple entities and complex hierarchies. This approach leverages structures like simplicial complexes and hypergraphs to capture global dependencies and qualitative spatial properties, offering a more nuanced representation of data. TDL also encompasses methods from computational and algebraic topology that permit studying properties of neural networks and their training process, such as their predictive performance or generalization properties. The mathematical foundations of TDL are algebraic topology, differential topology, and geometric topology. Therefore, TDL can be generalized for data on differentiable manifolds, knots, links, tangles, curves, etc. History and motivation The term "topological deep learning", including multichannel TDL and multitask TDL, was first introduced in 2017. Traditional techniques from deep learning often operate under the assumption that a dataset resides in a highly-structured space (like images, where convolutional neural networks exhibit outstanding performance over alternative methods) or a Euclidean space. The prevalence of new types of data, in particular graphs, meshes, and molecules, resulted in the development of new techniques, culminating in the field of geometric deep learning, which originally proposed a signal-processing perspective for treating such data types. While originally confined to graphs, where connectivity is defined based on nodes and edges, follow-up work extended concepts to a larger variety of data types, including simplicial complexes and CW complexes, with recent work proposing a unified perspective of message-passing on general combinatorial complexes. An independent perspective on different types of data originated from topological data analysis, which proposed a new framework for describing structural information of data, i.e., their "shape," that is inherently aware of multiple scales in data, ranging from local information to global information. While at first restricted to smaller datasets, subsequent work developed new descriptors that efficiently summarized topological information of datasets to make them available for traditional machine-learning techniques, such as support vector machines or random forests. Such descriptors ranged from new techniques for feature engineering and new ways of providing suitable coordinates for topological descriptors to the creation of more efficient dissimilarity measures. Contemporary research in this field is largely concerned with either integrating information about the underlying data topology into existing deep-learning models or obtaining novel ways of training on topological domains. 
Learning on topological spaces Focusing on topology in the sense of point set topology, an active branch of TDL is concerned with learning on topological spaces, that is, on different topological domains. An introduction to topological domains One of the core concepts in topological deep learning is the domain upon which the data is defined and supported. In the case of Euclidean data, such as images, this domain is a grid, upon which the pixel value of the image is supported. In a more general setting this domain might be a topological domain. Next, we introduce the most common topological domains that are encountered in a deep learning setting. These domains include, but are not limited to, graphs, simplicial complexes, cell complexes, combinatorial complexes and hypergraphs. Given a finite set S of abstract entities, a neighborhood function on S is an assignment that attaches to every point in S a subset of S or a relation. Such a function can be induced by equipping S with an auxiliary structure. Edges provide one way of defining relations among the entities of S. More specifically, edges in a graph allow one to define the notion of neighborhood using, for instance, the one hop neighborhood notion. Edges, however, are limited in their modeling capacity as they can only be used to model binary relations among entities of S, since every edge typically connects only two entities. In many applications, it is desirable to permit relations that incorporate more than two entities. The idea of using relations that involve more than two entities is central to topological domains. Such higher-order relations allow for a broader range of neighborhood functions to be defined on S to capture multi-way interactions among entities of S (a minimal sketch of such neighborhood functions on a small simplicial complex is given after the comparisons below). Next we review the main properties, advantages, and disadvantages of some commonly studied topological domains in the context of deep learning, including (abstract) simplicial complexes, regular cell complexes, hypergraphs, and combinatorial complexes. Comparisons among topological domains Each of the enumerated topological domains has its own characteristics, advantages, and limitations: Simplicial complexes Simplest form of higher-order domains. Extensions of graph-based models. Admit hierarchical structures, making them suitable for various applications. Hodge theory can be naturally defined on simplicial complexes. Require relations to be subsets of larger relations, imposing constraints on the structure. Cell Complexes Generalize simplicial complexes. Provide more flexibility in defining higher-order relations. Each cell in a cell complex is homeomorphic to an open ball, attached together via attaching maps. Boundary cells of each cell in a cell complex are also cells in the complex. Represented combinatorially via incidence matrices. Hypergraphs Allow arbitrary set-type relations among entities. Relations are not imposed by other relations, providing more flexibility. Do not explicitly encode the dimension of cells or relations. Useful when relations in the data do not adhere to constraints imposed by other models like simplicial and cell complexes. Combinatorial Complexes: Generalize and bridge the gaps between simplicial complexes, cell complexes, and hypergraphs. Allow for hierarchical structures and set-type relations. Combine features of other complexes while providing more flexibility in modeling relations. Can be represented combinatorially, similar to cell complexes. 
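As a concrete, minimal illustration of how a neighborhood function can be induced by an auxiliary structure (the cell names and data here are hypothetical and not taken from the article), the following Python sketch builds a small abstract simplicial complex and derives two standard neighborhood functions, the boundary ("down") and coboundary ("up") neighborhoods, from the face relation:

```python
# A minimal illustration: a small abstract simplicial complex stored as a set
# of frozensets, with two neighborhood functions derived from the face relation.
from itertools import combinations

def simplicial_closure(maximal_simplices):
    """Return the complex as a set of frozensets: all non-empty faces."""
    complex_ = set()
    for s in maximal_simplices:
        s = frozenset(s)
        for k in range(1, len(s) + 1):
            complex_.update(frozenset(c) for c in combinations(s, k))
    return complex_

def boundary_neighborhood(cell, complex_):
    """Faces of codimension one (the 'down' neighborhood of a cell)."""
    return {f for f in complex_ if f < cell and len(f) == len(cell) - 1}

def coboundary_neighborhood(cell, complex_):
    """Cofaces of codimension one (the 'up' neighborhood of a cell)."""
    return {c for c in complex_ if cell < c and len(c) == len(cell) + 1}

# Two triangles sharing the edge {1, 2}.
K = simplicial_closure([{0, 1, 2}, {1, 2, 3}])

edge = frozenset({1, 2})
print(boundary_neighborhood(edge, K))    # the vertices {1} and {2}
print(coboundary_neighborhood(edge, K))  # the triangles {0,1,2} and {1,2,3}
```

The same idea carries over to cell and combinatorial complexes, where the neighborhood functions are usually read off incidence matrices rather than the subset relation.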
Hierarchical structure and set-type relations The properties of simplicial complexes, cell complexes, and hypergraphs give rise to two main features of relations on higher-order domains, namely hierarchies of relations and set-type relations. Rank function A rank function on a higher-order domain X is an order-preserving function rk: X → Z, where rk(x) attaches a non-negative integer value to each relation x in X, preserving set inclusion in X. Cell and simplicial complexes are common examples of higher-order domains equipped with rank functions and therefore with hierarchies of relations. Set-type relations Relations in a higher-order domain are called set-type relations if the existence of a relation is not implied by another relation in the domain. Hypergraphs constitute examples of higher-order domains equipped with set-type relations. Given the modeling limitations of simplicial complexes, cell complexes, and hypergraphs, the combinatorial complex was introduced as a higher-order domain that features both hierarchies of relations and set-type relations. The learning tasks in TDL can be broadly classified into three categories: Cell classification: Predict targets for each cell in a complex. Examples include triangular mesh segmentation, where the task is to predict the class of each face or edge in a given mesh. Complex classification: Predict targets for an entire complex. For example, predict the class of each input mesh. Cell prediction: Predict properties of cell-cell interactions in a complex, and in some cases, predict whether a cell exists in the complex. An example is the prediction of linkages among entities in hyperedges of a hypergraph. In practice, to perform the aforementioned tasks, deep learning models designed for specific topological spaces must be constructed and implemented. These models, known as topological neural networks, are tailored to operate effectively within these spaces. Topological neural networks Central to TDL are topological neural networks (TNNs), specialized architectures designed to operate on data structured in topological domains. Unlike traditional neural networks tailored for grid-like structures, TNNs are adept at handling more intricate data representations, such as graphs, simplicial complexes, and cell complexes. By harnessing the inherent topology of the data, TNNs can capture both local and global relationships, enabling nuanced analysis and interpretation. Message passing topological neural networks In a general topological domain, higher-order message passing involves exchanging messages among entities and cells using a set of neighborhood functions. Definition: Higher-Order Message Passing on a General Topological Domain Let $\mathcal{X}$ be a topological domain. We define a set of neighborhood functions $\mathcal{N} = \{\mathcal{N}_1, \ldots, \mathcal{N}_n\}$ on $\mathcal{X}$. Consider a cell $x$ and let $y \in \mathcal{N}_k(x)$ for some $\mathcal{N}_k \in \mathcal{N}$. A message $m_{x,y}$ between cells $x$ and $y$ is a computation dependent on these two cells or the data supported on them. Denote $\mathcal{N}(x)$ as the multi-set $\{\mathcal{N}_1(x), \ldots, \mathcal{N}_n(x)\}$, and let $h_x^{(l)}$ represent some data supported on cell $x$ at layer $l$. Higher-order message passing on $\mathcal{X}$, induced by $\mathcal{N}$, is defined by the following four update rules: (1) $m_{x,y} = \alpha_{\mathcal{N}_k}(h_x^{(l)}, h_y^{(l)})$, for some $\mathcal{N}_k \in \mathcal{N}$ with $y \in \mathcal{N}_k(x)$. (2) $m_x^{k} = \bigoplus_{y \in \mathcal{N}_k(x)} m_{x,y}$, where $\bigoplus$ is the intra-neighborhood aggregation function. (3) $m_x = \bigotimes_{\mathcal{N}_k \in \mathcal{N}} m_x^{k}$, where $\bigotimes$ is the inter-neighborhood aggregation function. (4) $h_x^{(l+1)} = \beta(h_x^{(l)}, m_x)$, where $\alpha_{\mathcal{N}_k}$ and $\beta$ are differentiable functions. Some remarks on the Definition above are as follows. First, Equation 1 describes how messages are computed between cells $x$ and $y$. The message $m_{x,y}$ is influenced by both the data $h_x^{(l)}$ and $h_y^{(l)}$ associated with cells $x$ and $y$, respectively. 
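A minimal, illustrative rendering of these four update rules in Python is sketched below. The toy complex, the choice of a shared linear map for the message function $\alpha$, sums for both aggregations $\bigoplus$ and $\bigotimes$, and a tanh update for $\beta$ are assumptions made for the example, not prescriptions of the definition above:

```python
# Illustrative toy implementation of the four higher-order message passing
# rules (assumptions: features are NumPy vectors, alpha is a shared linear map,
# both aggregations are sums, beta is a tanh update).
import numpy as np

rng = np.random.default_rng(0)
dim = 4

# Cells of a toy complex and two hypothetical neighborhood functions N1, N2.
cells = ["v0", "v1", "v2", "e01", "e12"]
neighborhoods = {
    "N1": {"e01": ["v0", "v1"], "e12": ["v1", "v2"]},          # boundary
    "N2": {"v0": ["e01"], "v1": ["e01", "e12"], "v2": ["e12"]}, # coboundary
}

h = {c: rng.normal(size=dim) for c in cells}        # h_x at layer l
W = rng.normal(size=(dim, 2 * dim)) / np.sqrt(dim)  # weights of alpha

def alpha(h_x, h_y):
    """Rule 1: message m_{x,y}, a differentiable function of the two cells' data."""
    return W @ np.concatenate([h_x, h_y])

def layer(h):
    h_next = {}
    for x in cells:
        per_neighborhood = []
        for nbhd in neighborhoods.values():
            ys = nbhd.get(x, [])
            if ys:  # Rule 2: intra-neighborhood aggregation (here a sum)
                per_neighborhood.append(sum(alpha(h[x], h[y]) for y in ys))
        # Rule 3: inter-neighborhood aggregation (sum); Rule 4: update
        m_x = sum(per_neighborhood) if per_neighborhood else np.zeros(dim)
        h_next[x] = np.tanh(h[x] + m_x)
    return h_next

h = layer(h)
print(h["e01"])
```

In practice the aggregation and update functions are learned or chosen per neighborhood, and the neighborhood functions themselves are read off the incidence structure of the underlying complex.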
The message in Equation 1 additionally incorporates characteristics specific to the cells themselves, such as orientation in the case of cell complexes. This allows for a richer representation of spatial relationships compared to traditional graph-based message passing frameworks. Second, Equation 2 defines how messages from neighboring cells are aggregated within each neighborhood. The intra-neighborhood aggregation function $\bigoplus$ aggregates these messages, allowing information to be exchanged effectively between adjacent cells within the same neighborhood. Third, Equation 3 outlines the process of combining messages from different neighborhoods. The inter-neighborhood aggregation function $\bigotimes$ aggregates messages across various neighborhoods, facilitating communication between cells that may not be directly connected but share common neighborhood relationships. Fourth, Equation 4 specifies how the aggregated messages influence the state of a cell in the next layer. Here, the update function $\beta$ updates the state of cell $x$ based on its current state $h_x^{(l)}$ and the aggregated message $m_x$ obtained from neighboring cells. Non-message passing topological neural networks While the majority of TNNs follow the message passing paradigm from graph learning, several models have been suggested that do not follow this approach. For instance, Maggs et al. leverage geometric information from embedded simplicial complexes, i.e., simplicial complexes with high-dimensional features attached to their vertices. This offers interpretability and geometric consistency without relying on message passing. Furthermore, a contrastive loss-based method has been suggested for learning simplicial representations. Learning on topological descriptors Motivated by the modular nature of deep neural networks, initial work in TDL drew inspiration from topological data analysis, and aimed to make the resulting descriptors amenable to integration into deep-learning models. This led to work defining new layers for deep neural networks. Pioneering work by Hofer et al., for instance, introduced a layer that permitted topological descriptors like persistence diagrams or persistence barcodes to be integrated into a deep neural network. This was achieved by means of end-to-end-trainable projection functions, permitting topological features to be used to solve shape classification tasks, for instance. Follow-up work expanded more on the theoretical properties of such descriptors and integrated them into the field of representation learning. Other such topological layers include layers based on extended persistent homology descriptors, persistence landscapes, or coordinate functions. In parallel, persistent homology also found applications in graph-learning tasks. Noteworthy examples include new algorithms for learning task-specific filtration functions for graph classification or node classification tasks. Learning through alternative formulations Most existing TDL techniques are rooted in homology. However, alternative mathematical approaches, such as topological Laplacians and topological Dirac operators, also provide valuable insights into the topological properties of TDL. Topological Laplacians, including Hodge Laplacians on differentiable manifolds, combinatorial Laplacians for point clouds, and Khovanov Laplacians for knots and links, serve as powerful tools for extracting topological features from their respective data formats. Despite being defined in distinct contexts, these Laplacians share a common algebraic foundation. They are constructed using the (co-)boundary operator and its adjoint, with their kernels isomorphic to homology groups. 
Consequently, the number of zero eigenvalues corresponds to the Betti numbers of the associated (co-)homology groups. Moreover, the nonzero eigenvalues provide richer insights into the data structure, particularly when analyzed through the perspective of spectral theory. Hodge Laplacians, combinatorial Laplacians, and Khovanov Laplacians draw upon the mathematical fields of differential geometry, graph theory, and geometric topology, respectively, to extend the classical theory of homology beyond the domain of algebraic topology. Each functions as a bridge, linking algebraic topology to its associated mathematical discipline. Persistent Hodge Laplacians were first introduced in 2019 to analyze data on differentiable manifolds with boundary. Additionally, persistent combinatorial Laplacians, also known as persistent Laplacians, were developed for point cloud data. These approaches extend classical persistent homology and have stimulated research interest, fueling advancements in both theory and applications. Persistent Laplacians outperform persistent homology in extensive protein engineering tasks and the prediction of mutation induced protein-protein binding affinity changes. Persistent topological Laplacians have been constructed on various mathematical objects, including simplicial complex, directed flag complex, path complex, cellular sheaves, hypergraph, hyperdigraph, and differentiable manifolds. Persistent Dirac was constructed on various topological spaces, including simplicial complex, path complex, and hypergraph. These new approaches extend the scope of TDL to manifold topological learning and curve data learning. Applications TDL is rapidly finding new applications across different domains, including data compression, enhancing the expressivity and predictive performance of graph neural networks, action recognition, and trajectory prediction. Topology inherently simplifies data, which implies the irreversible loss of certain information. Therefore, competitive performance from TDL mostly involves intrinsically complex data, such as those arising in biological sciences. Perhaps some of the most compelling examples of applications in which TDL consistently demonstrates its advantages over other competing methods are the victories of TDL in the D3R Grand Challenges, the discovery of SARS-CoV-2 evolution mechanisms, and the successful forecasting of SARS-CoV-2 variants BA.2, BA.4 and BA.5, about two months in advance. See also Topological data analysis Deep learning References Deep learning Topology
Topological deep learning
[ "Physics", "Mathematics" ]
3,042
[ "Spacetime", "Topology", "Space", "Geometry" ]
76,403,602
https://en.wikipedia.org/wiki/Kristallografija
Kristallografija, also transliterated as Kristallografiya or Kristallografiia, () is a bimonthly, peer-reviewed, Russian crystallography journal currently published by MAIC "Science/Interperiodica". An English translation Crystallography Reports is published by Pleiades Publishing, Inc. History The journal was founded in 1956 by Alexei Vasilievich Shubnikov and was initially dedicated to the publication of research from the Institute of Crystallography of the Russian Academy of Sciences. The journal is also available in English translation as Soviet Physics Crystallography (ISSN 0038-5638) 1956–1992 (volumes 1–37) continued as Crystallography Reports (ISSN 1063-7745) 1993–present (volumes 38–present). The journal is available in online format (ISSN 1562-689X) from 2000–present. The current publisher of the translated journal is Pleiades Publishing, Inc., and the distributor is Springer Nature. The journal was the first to publish papers in the new areas of antisymmetry, polychromatic symmetry, and generalized symmetry. Scope The journal publishes original articles, short communications, and reviews on various aspects of crystallography: crystallographic symmetry; theory of crystalline structures; diffraction and scattering of X-rays, electrons, and neutrons, determination of crystal structure of inorganic and organic substances, including proteins and other biological substances; UV–Vis and IR spectroscopy; growth, imperfect structure and physical properties of crystals; thin films, liquid crystals, nanomaterials and ceramics, partially disordered systems, crystallographic methods; instruments and equipment; crystallographic software; history of crystallography; anniversaries; and obituaries. Editors A.V. Shubnikov (1956–1968) N.V. Belov (1968–1982) B.K. Vainshtein (1982–1996) L.A. Shuvalov (1997–2004) M.V. Kovalchuk (since 2004) Abstracting and indexing Crystallography Reports is abstracted and indexed by the following services. Astrophysics Data System (ADS) Baidu CLOCKSS CNKI CNPIEC (China National Publications Import Export Corporation) Chemical Abstracts Service (CAS) Current Contents Physical, Chemical and Earth Sciences Dimensions EBSCO EI Compendex FIZ Karlsruhe Google Scholar INIS Atomindex INSPEC Japanese Science and Technology Agency (JST) Journal Citation Reports/Science Edition Naver OCLC WorldCat Discovery Service Portico ProQuest-ExLibris Primo / Summon Reaction Citation Index SCImago SCOPUS Science Citation Index Expanded (SCIE) TD Net Discovery Service UGC-CARE List (India) Wanfang References Crystallography journals Russian-language journals Academic journals established in 1956 Bimonthly journals
Kristallografija
[ "Chemistry", "Materials_science" ]
590
[ "Crystallography journals", "Crystallography" ]
76,404,147
https://en.wikipedia.org/wiki/Zirconium%20iodate
Zirconium iodate is an inorganic compound with the chemical formula Zr(IO3)4. It can be prepared by reacting sodium iodate and zirconium sulfate tetrahydrate in an aqueous solution. The resulting precipitate is dried and refluxed in concentrated nitric acid. Zirconium iodate trihydrate can be obtained by reacting hydrated zirconium oxide and iodine pentoxide (1.4~3.3% concentration) in water. Its basic salt Zr(OH)n(IO3)4−n is known. References Zirconium compounds Iodates
Zirconium iodate
[ "Chemistry" ]
138
[ "Iodates", "Oxidizing agents" ]
76,404,216
https://en.wikipedia.org/wiki/Bismuth%20iodate
Bismuth iodate is an inorganic compound with the chemical formula Bi(IO3)3. Its anhydrate can be obtained by reacting bismuth nitrate and iodic acid, dissolving the resulting precipitate in 7.8 mol/L nitric acid, and heating to volatilize and crystallize at 70 °C; The dihydrate can be obtained by reacting bismuth nitrate and potassium iodate or sodium iodate. It is obtained by evaporation and crystallization in 7 mol/L nitric acid at 50 °C. Its basic salt BiOIO3 is known. References Bismuth compounds Iodates
Bismuth iodate
[ "Chemistry" ]
139
[ "Iodates", "Inorganic compounds", "Oxidizing agents", "Inorganic compound stubs" ]
76,404,260
https://en.wikipedia.org/wiki/Tin%28IV%29%20iodate
Tin(IV) iodate is an inorganic compound with the chemical formula Sn(IO3)4. It was first obtained in 2020 through the hydrothermal reaction of tin(II) oxide and iodic acid in water at 220 °C. It is a colorless columnar crystal, crystallized in the triclinic P space group. It has an indirect band gap (experimental 4.0 eV; calculated 2.75 eV). References Tin compounds Iodates Substances discovered in the 2020s
Tin(IV) iodate
[ "Chemistry" ]
103
[ "Iodates", "Inorganic compounds", "Oxidizing agents", "Inorganic compound stubs" ]
76,404,335
https://en.wikipedia.org/wiki/Plutonium%28IV%29%20iodate
Plutonium(IV) iodate is an inorganic compound with the chemical formula Pu(IO3)4. It is a salt which decomposes into plutonium(IV) oxide above 540 °C. It can be generated in the reaction of plutonium(IV) nitrate and iodic acid, but this method cannot obtain a pure product; Another preparation method is the reaction of plutonium(IV) nitrate or plutonium(IV) chloride with potassium iodate and dilute nitric acid. It can crystallize in the tetragonal crystal system with space group P42/n. References Plutonium compounds Iodates
Plutonium(IV) iodate
[ "Chemistry" ]
128
[ "Iodates", "Inorganic compounds", "Oxidizing agents", "Inorganic compound stubs" ]
76,408,069
https://en.wikipedia.org/wiki/Basket-handle%20arch
A basket-handle arch (also depressed arch or chop arch) is characterized by an intrados profile formed by a sequence of circular arcs, each tangent to its neighbors, resulting in a smooth transition between arcs. The simplest form, a three-centered arch, consists of three arc segments with distinct centers, while a five-centered arch is also commonly used. This type of arch is prevalent in architectural applications, particularly in bridge construction. The shape of a basket-handle arch resembles that of a semi-ellipse, featuring a continuous curvature that varies from the extremities of the long axis to the apex of the short axis. It is also referred to as a depressed arch or basket arch, highlighting its distinctive curvature and structural function. History Since Roman times, bridge vaults have been built with semicircular arches, forming a complete half-circumference. From the early Middle Ages onwards, the segmental arch, an incomplete half-circumference, was used to build vaults that were less than half the height of their opening. The pointed arch, which emphasizes height by rising above half the opening, did not see use in bridge construction until the Middle Ages. The basket-handle arch appeared at the beginning of the Renaissance, offering aesthetic advantages over segmental vaults, notably through its end arches being vertically tangential to the supports. The earliest applications of basket-handle arches in France can be seen in the Pont-Neuf in Toulouse, constructed in the 16th century, and the Pont Royal in the following century. By the 18th century, the use of basket-handle arches became prevalent, particularly with three centers, as exemplified by the bridges at Vizille, Lavaur, Gignac, Blois (1716–1724), Orléans (1750–1760), Moulins (1756–1764), and Saumur (1756-1770). Notable architect Jean-Rodolphe Perronet designed bridges with eleven centers during the latter half of the 18th century, including those at Mantes (1757–1765), Nogent (1766–1769), and Neuilly (1766–1774). The Tours bridge (1764–1777) also featured eleven centers. Other arches were generally reduced to one-third or slightly more, except for Neuilly, which was reduced to one-fourth. In the 19th century, basket-handle arches were utilized in France's first major railroad bridges, including the Cinq-Mars bridge (1846–1847), Port-de-Piles bridge (1846–1848), Morandière bridges: Montlouis (1843–1845), and Plessis-les-Tours (1855–1857). In England, while the Gloucester Bridge (1826–1827) and the London Bridge (1824–1831) were elliptical, the Waterloo Bridge in London (1816–1818) retained the basket-handle arch form. Several basket-handle arches continued to be constructed into the late 19th and early 20th centuries. Notable examples include the Edmonson Avenue Bridge in Baltimore (1908–1909) with three centers, the Annibal Bridge (1868–1870) and Devil's Bridge (1870–1872) with five centers, the Emperor Francis Bridge in Prague (1898–1901) with seven centers, and the Signac Bridge (1871–1872) with nineteen centers. In the United States, the Thomas Viaduct, featuring a basket-handle arch, was built between 1833 and 1835. It is now owned and operated by CSX Transportation and remains one of the oldest railroad bridges still in service. Comparison between basket handle arch and ellipse Aesthetics Ancient architects placed considerable importance on the methods used to define the outline of the basket-handle arch. 
The flexibility inherent in these processes allowed for a wide variety of configurations, leading many architects to favor this type of curve over the ellipse, whose contour is rigidly determined by geometric principles. In the case of an ellipse, the opening of a vault and the height at the center—corresponding to the major and minor axes—result in fixed points along the intrados curve, leaving no room for architectural modification. Conversely, the multi-center curve offers greater design freedom, allowing architects to adjust the curve’s base and apex according to their preferences, depending on the arrangement of the centers. This adaptability made the basket-handle arch an attractive option for those seeking aesthetic flexibility. Advantages and disadvantages The advantages of this layout approach were significant: the establishment of full-scale grooves was perceived as easier and more precise, allowing for immediate on-site layout of the normals and segment joints. The number of voussoir shapes was constrained by the number of distinct radii employed, whereas for elliptical arches, this number was typically equal to half the number of voussoirs plus one. However, the discontinuity of the layout led to the appearance of unsightly voussoirs, which could not always be removed during restoration work. Tracing curves with three centers The ancient oval Although the basket-handle arch was not utilized for bridge vaults in ancient times, it found application in the construction of other types of vaults. Heron of Alexandria, who authored mathematical treatises more than a century before the Common Era, outlined a straightforward method for tracing this arch. In Heron's method, if AB represents the width of the intended vault and the height (or rise) is undetermined, a half-circumference is described on AB. A vertical line OC is drawn through point C on this arc, and a tangent mn is constructed at point C. Lengths Cm and Cn are taken to be equal to half the radius of the arc. By connecting points mO and nO, points D and E are established. An isosceles triangle DOE is then traced, with its base equal to the height of the arch. Next, the line segment DA is divided into four equal parts, and parallels to DO are drawn through these division points (a, b, c). The intersections of these parallels with the horizontal axis AB and the extended vertical axis CO yield the necessary centers for tracing various curves with three centers along AB, often referred to as the ancient oval. As the basket-handle arch became more prevalent in bridge construction, numerous procedures for tracing it emerged, leading to an increase in the number of centers used. The objective was to create perfectly continuous curves with an aesthetically pleasing contour. Given the indeterminate nature of the problem, certain conditions were often imposed arbitrarily to achieve the desired result. For instance, it was sometimes accepted that the arcs of circles composing the curve must correspond to equal angles at the center, while at other times, these arcs were required to be of equal length. Additionally, either the amplitude of the angles or the lengths of the successive radii were allowed to vary according to specific proportions. A consistent ratio between the lowering of the arch and the number of centers used to trace the intrados curve was also established. This lowering is measured by the ratio of the rise (b) to the width of the arch (2a), expressed as b/2a. 
Acceptable ratios may include one-third, one-quarter, or one-fifth; however, if the ratio falls below one-fifth, a circular arc is generally preferred over the basket-handle arch or ellipse. For steeper slopes, it is advisable to employ at least five centers, with some designs utilizing up to eleven centers, as seen in the curve of the Neuilly Bridge, or even up to nineteen for the Signac Bridge. As one of the centers must always be positioned on the vertical axis, the remaining centers are symmetrically arranged, resulting in an odd total number of centers. The Huygens method For constructing curves with three centers, Huyghens outlines a method that involves tracing arcs of varying radii corresponding to equal angles, specifically angles of 60 degrees. To begin, let AB represent the opening and OE signify the arrow of the vault. From the center point O, an arc AMF is drawn using radius OA. The arc AM is then taken to be one-sixth of the circumference, meaning its chord equals the radius OA. The chords AM and MF are drawn, followed by a line Em through point E, which is the endpoint of the minor axis, parallel to MF. The intersection of chords AM and Em determines point m, the boundary of the first arc. By drawing the line mP parallel to MO, points n and P are established as the two centers required for the construction. The third center n is positioned at a distance n'O from the axis OE, equal to nO. Analysis of the figure reveals that the three arcs—Am, mEm', and m'B—comprise the curve and correspond to equal angles at the centers Anm, mPm', and m'n'B, all measuring 60 degrees. The Bossut method Charles Bossut proposed a more efficient method for tracing a three-center curve, which simplifies the process. In this method, AB represents the opening and OE denotes the arrow of the vault, serving as the long and short axes of the curve. To begin, the line segment AE is drawn. From point E, a segment EF' is taken, equal to the difference between OA and OE. A perpendicular line is then drawn from the midpoint m of AF'. The points n and P, where this perpendicular intersects the major axis and the extension of the minor axis, serve as the two centers required for the construction. When using the same opening and rise, the curve produced by this method exhibits minimal deviation from those generated by previous techniques. Curves with more than three centers For curves with more than three centers, the methods indicated by Bérard, Jean-Rodolphe Perronet, Émiland Gauthey, and others consisted, as for the Neuilly bridge, in proceeding by trial and error. Tracing a first approximate curve according to arbitrary data, whose elements were then rectified, using more or less certain formulas, so that they passed exactly through the extremities of the major and minor axes. The Michal method In a paper published in 1831, mathematician Michal addressed the problem of curve construction with a scientific approach. He developed tables containing the necessary data to draw curves with 5, 7, and 9 centers, achieving precise results without the need for trial and error. Michal's calculation method is applicable to curves with any number of centers. He noted that the conditions required to resolve the problem can be somewhat arbitrary. To address this, he proposed that the curves be constructed using either arcs of a circle that subtend equal angles or arcs of equal length. 
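Before turning to Michal's determination of the radii, note that the Bossut construction described above lends itself to direct computation. The following sketch is illustrative only — the numeric opening and rise are hypothetical — and uses the notation A, O, E, F', n and P from the text to locate the two centers and verify that the springing and crown arcs meet tangentially:

```python
# Illustrative computation of Bossut's three-centre construction (notation
# A, O, E, F', n, P follows the text; the numeric example is hypothetical).
import math

def bossut_centers(a, b):
    """Half-opening a = OA, rise b = OE (with a > b). Returns the springing
    centre n (on the major axis), the crown centre P (on the extension of the
    minor axis below O) and the two radii."""
    A, E = (-a, 0.0), (0.0, b)
    L = math.hypot(a, b)                               # length of AE
    d = ((A[0] - E[0]) / L, (A[1] - E[1]) / L)         # unit vector from E towards A
    F = (E[0] + (a - b) * d[0], E[1] + (a - b) * d[1]) # EF' = OA - OE
    m = ((A[0] + F[0]) / 2, (A[1] + F[1]) / 2)         # midpoint of AF'
    # Perpendicular bisector of AF': points p satisfying (p - m) . d = 0.
    xn = m[0] + m[1] * d[1] / d[0]                     # intersection with major axis
    yP = m[1] + m[0] * d[0] / d[1]                     # intersection with minor axis
    n, P = (xn, 0.0), (0.0, yP)
    r_spring = math.hypot(n[0] - A[0], n[1] - A[1])
    r_crown = math.hypot(P[0] - E[0], P[1] - E[1])
    return n, P, r_spring, r_crown

n, P, r1, R = bossut_centers(a=6.0, b=3.0)   # e.g. a 12 m opening with a 3 m rise
# The two arcs are internally tangent where the line through P and n meets the curve:
assert abs(math.hypot(P[0] - n[0], P[1] - n[1]) - (R - r1)) < 1e-9
print(n, P, r1, R)
```

For the hypothetical 12-meter opening with a 3-meter rise, the sketch places the springing centers about 2.07 meters inside the abutments and the crown center about 7.85 meters below the springing line.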
However, to fully determine the radii of these arcs, Michal also posited that the radii should correspond to the radii of curvature, taken at the midpoint of each arc, of the ellipse whose major axis is the opening and whose minor axis is the rise. As the number of centers increases, the resulting curve approximates the shape of an ellipse with the same opening and rise. The following table illustrates the construction of a basket-handle arch, characterized by equal angles subtended by the various arcs that comprise it. The proportional values for the initial radii are calculated using half the opening as the unit of measurement. Additionally, the lowering (or drop) is defined as the ratio of the arrow (the vertical distance from the highest point of the arch to the line connecting its endpoints) to the total opening. The table provided allows for the straightforward construction of a basket-handle arch with any specified opening using five, seven, or nine centers, eliminating the need for extensive calculations. The only stipulation is that the drop must match one of the values proposed by Michal. For instance, to draw a curve with seven centers, a 12-meter opening, and a 3-meter rise corresponding to a drop of one-quarter (or 0.25), the first and second radii can be calculated as follows: 6×0.265 and 6×0.419, resulting in values of 1.590 meters and 2.514 meters, respectively. To inscribe the curve within a rectangle labeled ABCD, one would start by describing a semicircle on line segment AB, which serves as the diameter, and divide it into seven equal parts. Chords Aa, ab, bc, and cd are then traced, with chord cd representing a half-division. On the AB axis, from point A, a length of 1.590 meters is measured to establish the first center, labeled m1. A parallel line with radius Oa is drawn through this point, intersecting chord Aa at point n, marking the endpoint of the first arc. From point n, a length of nm2 equal to 2.514 meters is measured to identify the second center, m2. A parallel line with radius Ob is drawn from point m2, while a parallel line to chord ab is drawn from point n. The intersection of these two parallels at point n′ defines the endpoint of the second arc. Continuing this process, a parallel is drawn through point n′ to chord bc, and from point E, a parallel is drawn to chord cd. The intersection of these two lines at point n′′ is used to draw a parallel to radius Oc. The points m3 and m4, where this line intersects the extensions of radius n′m2 and the vertical axis, become the third and fourth centers. The final three centers, m5, m6, and m7, are positioned symmetrically relative to the first three centers m1, m2, and m3. As illustrated in the figure, the arcs An, nn′, n′n′′, etc., subtend equal angles at their centers, specifically 51° 34' 17" 14'. Moreover, constructing a semi-ellipse with AB as the major axis and OE as the minor axis reveals that the arcs of the semi-ellipse, contained within the same angles as the circular arcs, possess a radius of curvature equal to that of the arcs themselves. This method demonstrates the ease with which curves can be constructed with five, seven, or nine centers. The Lerouge method Following Mr. Michal's contributions, the subject was further explored by Mr. Lerouge, the chief engineer of the Ponts et Chaussées. Lerouge developed tables for constructing curves with three, five, seven, and even up to fifteen centers. 
His approach diverges from Michal's methodology by stipulating that the successive radii must increase according to an arithmetic progression. This requirement means that the angles formed between the radii do not necessarily need to be equal, allowing for greater flexibility in the design of the curves. References Bibliography Architecture Piecewise-circular curves Bridges Arch bridges
Basket-handle arch
[ "Mathematics", "Engineering" ]
3,065
[ "Structural engineering", "Piecewise-circular curves", "Euclidean plane geometry", "Construction", "Planes (geometry)", "Bridges", "Architecture" ]
54,852,772
https://en.wikipedia.org/wiki/Breakthrough%20curve
A breakthrough curve in adsorption is the course of the effluent adsorptive concentration at the outlet of a fixed bed adsorber. Breakthrough curves are important for adsorptive separation technologies and for the characterization of porous materials. Importance Since almost all adsorptive separation processes are dynamic – meaning that they run under flow – porous materials for these applications have to be tested for their separation performance under flow as well. Since separation processes run with mixtures of different components, measuring several breakthrough curves results in thermodynamic mixture equilibria – mixture sorption isotherms – which are hardly accessible with static manometric sorption characterization. This enables the determination of sorption selectivities in the gaseous and liquid phases. The determination of breakthrough curves is the foundation of many other processes, such as pressure swing adsorption. Within this process, the loading of one adsorber is equivalent to a breakthrough experiment. Measurement A fixed bed of porous materials (e.g. activated carbons and zeolites) is pressurized and purged with a carrier gas. After the system becomes stationary, one or more adsorptives are added to the carrier gas, resulting in a step-wise change of the inlet concentration. This is in contrast to chromatographic separation processes, where pulse-wise changes of the inlet concentrations are used. The course of the adsorptive concentrations at the outlet of the fixed bed is monitored. Results Integration of the area above the entire breakthrough curve gives the maximum loading of the adsorbent material. Additionally, the duration of the breakthrough experiment until a certain threshold of the adsorptive concentration at the outlet can be measured, which enables the calculation of a technically usable sorption capacity. Up to this time, the quality of the product stream can be maintained. The shape of the breakthrough curves contains information about the mass transfer properties of the adsorptive-adsorbent system. These properties can be evaluated by applying simplified models and fitting to experimental data by simulations. References Surface science Materials science Colloidal chemistry
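As a minimal illustration of the integration step described above, the following Python sketch estimates the equilibrium loading from a breakthrough curve; the flow rate, inlet concentration, bed mass and the synthetic outlet curve are all hypothetical values chosen for the example, not data from the article:

```python
# Illustrative only: estimating the loading of the adsorbent from a measured
# breakthrough curve by integrating the area above the curve.
import numpy as np

t = np.linspace(0.0, 3600.0, 500)                      # time / s
c_over_c0 = 1.0 / (1.0 + np.exp(-(t - 1800.0) / 120))  # synthetic outlet curve c(t)/c0

Q = 1.0e-6      # volumetric flow rate / m^3 s^-1 (assumed)
c0 = 0.5        # inlet adsorptive concentration / mol m^-3 (assumed)
m_bed = 0.010   # adsorbent mass in the fixed bed / kg (assumed)

# Amount retained by the bed = Q * c0 * integral of (1 - c/c0) dt
n_adsorbed = Q * c0 * np.trapz(1.0 - c_over_c0, t)
q_max = n_adsorbed / m_bed                             # specific loading / mol kg^-1
print(f"maximum loading ~ {q_max:.4f} mol/kg")
```

Truncating the integral at the chosen breakthrough threshold instead of the full curve gives the technically usable capacity mentioned in the text.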
Breakthrough curve
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
427
[ "Colloidal chemistry", "Applied and interdisciplinary physics", "Materials science", "Colloids", "Surface science", "Condensed matter physics", "nan" ]
54,859,077
https://en.wikipedia.org/wiki/NGC%20464
NGC 464 is a double star located in the Andromeda constellation. It was discovered in 1882 by Wilhelm Tempel. References External links Andromeda (constellation) 0464 Double stars
NGC 464
[ "Astronomy" ]
41
[ "Andromeda (constellation)", "Constellations" ]
54,869,297
https://en.wikipedia.org/wiki/Tim%20Elliott%20%28geochemist%29
Timothy Richard Elliott is a professor at the University of Bristol. Education Timothy Elliott was educated at the University of Cambridge and the Open University where he was awarded a PhD in 1991 for research investigating element fractionation in the petrogenesis of ocean island basalts. Career and research Elliott specialises in developing analytical approaches to yield novel isotopic means to reconstruct planetary histories. He has investigated production of melt from the Earth's interior and the chemical consequences of the return of solidified melts to depth via the plate tectonic cycle. In particular, he has assessed elemental fluxes from descending plates and has highlighted how the rise of atmospheric oxygen has been remarkably recorded in the isotopic composition of the deep, solid Earth. His recent focus on planetary growth has identified the rapid formation of metallic cores, how bulk chemistry is notably modified during early accretion and distinctively embellished in its terminal stages. Awards and honours Elliott was awarded the Murchison Medal by the Geological Society of London in 2017 and elected a Fellow of the Royal Society (FRS) in 2017. References Fellows of the Royal Society Living people Year of birth missing (living people) Alumni of the University of Cambridge Alumni of the Open University Academics of the University of Bristol British geochemists Murchison Medal winners
Tim Elliott (geochemist)
[ "Chemistry" ]
261
[ "Geochemists", "British geochemists" ]
66,174,650
https://en.wikipedia.org/wiki/Heat%20pen
A heat pen (also known as a thermal stick) is a device used to mitigate the effects of an insect sting (e.g. wasp sting) or insect bite (e.g. mosquito bite) by briefly heating the skin. Shape The heat pen is available either as a pen-like device or as a USB attachment for a smartphone. Effect A heat pen has a ceramic or metal plate at the tip, which heats to 50 to 60 °C. The heated plate is brought into contact with the area of skin affected by the insect bite for 3 to 10 seconds, causing the skin to briefly heat up to 53 °C (local hyperthermia). The heat activates various physiological processes. For example, it is assumed that the insect proteins are destroyed (denatured) and the body's histamine release is reduced. This results in symptom relief; for example, itching is avoided. Due to the short application time, the skin is not damaged. The positive effect of the heat pen was confirmed by a study; however, employees of the manufacturer are the lead authors, so the results may be biased. The exact effect is not known; various mechanisms are discussed. The same mode of action is also used to treat cold sores. References Medical devices
Heat pen
[ "Biology" ]
258
[ "Medical devices", "Medical technology" ]
66,175,500
https://en.wikipedia.org/wiki/Dana%20Carroll
Dana Carroll is an American molecular biologist and biochemist at the University of Utah School of Medicine who has made important contributions to the field of genome editing. He has been a member of the National Academy of Sciences since 2017. References Living people Year of birth missing (living people) American molecular biologists Genome editing United States National Academy of Sciences Place of birth missing (living people)
Dana Carroll
[ "Engineering", "Biology" ]
77
[ "Genetics techniques", "Genetic engineering", "Genome editing" ]
61,194,766
https://en.wikipedia.org/wiki/Time-domain%20diffuse%20optics
Time-domain diffuse optics or time-resolved functional near-infrared spectroscopy is a branch of functional near-infrared spectroscopy which deals with light propagation in diffusive media. There are three main approaches to diffuse optics, namely continuous wave (CW), frequency domain (FD) and time-domain (TD). Biological tissue is relatively transparent to light in the range of red to near-infrared wavelengths, which can therefore be used to probe deep layers of the tissue, thus enabling various in vivo applications and clinical trials. Physical concepts In this approach, a narrow pulse of light (< 100 picoseconds) is injected into the medium. The injected photons undergo multiple scattering and absorption events and the scattered photons are then collected at a certain distance from the source and the photon arrival times are recorded. The photon arrival times are converted into the histogram of the distribution of time-of-flight (DTOF) of photons or temporal point spread function. This DTOF is delayed, attenuated and broadened with respect to the injected pulse. The two main phenomena affecting photon migration in diffusive media are absorption and scattering. Scattering is caused by microscopic refractive index changes due to the structure of the media. Absorption, on the other hand, is caused by a radiative or non-radiative transfer of light energy on interaction with absorption centers such as chromophores. Both absorption and scattering are described by the coefficients $\mu_a$ and $\mu_s$, respectively. Multiple scattering events broaden the DTOF, and the attenuation is a result of both absorption and scattering, as they divert photons from the direction of the detector. Higher scattering leads to a more delayed and a broader DTOF and higher absorption reduces the amplitude and changes the slope of the tail of the DTOF. Since absorption and scattering have different effects on the DTOF, they can be extracted independently while using a single source-detector separation. Moreover, the penetration depth in TD depends solely on the photon arrival times and is independent of the source-detector separation unlike in the CW approach. The theory of light propagation in diffusive media is usually dealt with using the framework of radiative transfer theory under the multiple scattering regime. It has been demonstrated that the radiative transfer equation under the diffusion approximation yields sufficiently accurate solutions for practical applications. For example, it can be applied for the semi-infinite geometry or the infinite slab geometry, using proper boundary conditions. The system is considered as a homogeneous background and an inclusion is considered as an absorption or scattering perturbation. The time-resolved reflectance curve at a distance $\rho$ from the source for a semi-infinite geometry is given by $R(\rho, t) = \frac{A\, z_0}{(4\pi D v)^{3/2}\, t^{5/2}} \exp(-\mu_a v t) \exp\!\left(-\frac{\rho^2 + z_0^2}{4 D v t}\right)$, where $D = [3(\mu_a + \mu_s')]^{-1}$ is the diffusion coefficient, $\mu_s' = \mu_s (1-g)$ is the reduced scattering coefficient and $g$ is the asymmetry factor, $v$ is the photon velocity in the medium, $z_0 = 1/\mu_s'$ takes into account the boundary conditions and $A$ is a constant. The final DTOF is a convolution of the instrument response function (IRF) of the system with the theoretical reflectance curve. When applied to biological tissues, estimation of $\mu_a$ and $\mu_s'$ allows us to then estimate the concentration of the various tissue constituents and provides information about blood oxygenation (oxy and deoxy-hemoglobin) as well as saturation and total blood volume. These can then be used as biomarkers for detecting various pathologies. 
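The following Python sketch evaluates the semi-infinite, diffusion-approximation reflectance in the form reconstructed above (a Patterson-type solution); the optical properties and source-detector distance are illustrative values, not parameters taken from the article:

```python
# Sketch of the semi-infinite diffusion-approximation reflectance model; all
# parameter values are illustrative assumptions.
import numpy as np

def td_reflectance(t, rho, mu_a, mu_s_prime, n=1.4, A=1.0):
    """Time-resolved reflectance R(rho, t); t in ns, lengths in mm, mu in mm^-1."""
    v = 299.79 / n                          # photon speed in the medium, mm/ns
    D = 1.0 / (3.0 * (mu_a + mu_s_prime))   # diffusion coefficient, mm
    z0 = 1.0 / mu_s_prime                   # effective source depth, mm
    return (A * z0 / ((4 * np.pi * D * v) ** 1.5 * t ** 2.5)
            * np.exp(-mu_a * v * t)
            * np.exp(-(rho ** 2 + z0 ** 2) / (4 * D * v * t)))

t = np.linspace(0.05, 4.0, 400)             # time after the injected pulse, ns
dtof = td_reflectance(t, rho=30.0, mu_a=0.01, mu_s_prime=1.0)

# Increasing the absorption mainly steepens the tail of the DTOF:
dtof_more_abs = td_reflectance(t, rho=30.0, mu_a=0.02, mu_s_prime=1.0)
print(dtof.max(), dtof_more_abs.max())
```

Convolving such a model curve with the measured instrument response function and fitting it to the recorded DTOF is the usual way the optical properties are extracted in practice.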
Instrumentation Instrumentation in time-domain diffuse optics consists of three fundamental components, namely a pulsed laser source, a single-photon detector and timing electronics. Sources Time-domain diffuse optical sources must have the following characteristics: emission wavelength in the optical window, i.e. between 650 and 1350 nanometre (nm); a narrow full width at half maximum (FWHM), ideally a delta function; high repetition rate (>20 MHz) and finally, sufficient laser power (>1 mW) to achieve a good signal to noise ratio. In the past, bulky tunable Ti:sapphire lasers were used. They provided a wide wavelength range of 400 nm, a narrow FWHM (< 1 ps), high average power (up to 1 W) and a high repetition frequency (up to 100 MHz). However, they are bulky, expensive and take a long time for wavelength switching. In recent years, pulsed fiber lasers based on supercontinuum generation have emerged. They provide a wide spectral range (400 to 2000 nm), typical average power of 5 to 10 W, a FWHM of < 10 ps and a repetition frequency of tens of MHz. However, they are generally quite expensive and lack stability in supercontinuum generation and hence have been limited in their use. The most widespread sources are the pulsed diode lasers. They have a FWHM of around 100 ps and repetition frequency of up to 100 MHz and an average power of a few milliwatts. Even though they lack tunability, their low cost and compactness allow for multiple modules to be used in a single system. Detectors Single-photon detectors used in time-domain diffuse optics require not only a high photon detection efficiency in the wavelength range of the optical window, but also a large active area as well as a large numerical aperture (N.A.) to maximize the overall light collection efficiency. They also require a narrow timing response and a low noise background. Traditionally, fiber-coupled photomultiplier tubes (PMT) have been the detector of choice for diffuse optical measurements, thanks mainly to their large active area, low dark count and excellent timing resolution. However, they are intrinsically bulky, prone to electromagnetic disturbances and they have a quite limited spectral sensitivity. Moreover, they require a high biasing voltage and they are quite expensive. Single photon avalanche diodes have emerged as an alternative to PMTs. They are low cost, compact and can be placed directly in contact with the sample, while needing a much lower biasing voltage. Also, they offer a wider spectral sensitivity and they are more robust to bursts of light. However, they have a much lower active area and hence a lower photon collection efficiency and a larger dark count. Silicon photomultipliers (SiPM) are arrays of SPADs with a global anode and a global cathode and hence have a larger active area while maintaining all the advantages offered by SPADs. However, they suffer from a larger dark count and a broader timing response. Timing electronics The timing electronics is needed to losslessly reconstruct the histogram of the distribution of time of flight of photons. This is done by using the technique of time-correlated single photon counting (TCSPC), where the individual photon arrival times are marked with respect to a start/stop signal provided by the periodic laser cycle. These time-stamps can then be used to build up histograms of photon arrival times. 
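As a small illustration of how such histograms are formed in software, the following sketch mimics the TCSPC step by referencing photon time-stamps to the periodic laser sync and binning them into a DTOF; all numbers are synthetic and hypothetical:

```python
# Illustrative TCSPC histogramming: photon time-stamps are wrapped into one
# laser period and binned into a DTOF histogram (synthetic data only).
import numpy as np

rng = np.random.default_rng(1)
rep_period_ps = 12_500            # an 80 MHz laser -> 12.5 ns period (assumed)
n_photons = 100_000

# Hypothetical detected delays relative to the last sync pulse (ps):
true_delays = rng.gamma(shape=3.0, scale=300.0, size=n_photons) + 1_000.0
jitter = rng.normal(0.0, 50.0, size=n_photons)        # detector/electronics jitter
arrival_ps = (true_delays + jitter) % rep_period_ps    # wrap into one period

bin_width_ps = 25
bins = np.arange(0, rep_period_ps + bin_width_ps, bin_width_ps)
dtof, edges = np.histogram(arrival_ps, bins=bins)
print(dtof.argmax() * bin_width_ps, "ps (peak of the reconstructed DTOF)")
```

In a real system the same binning is carried out in hardware or firmware by the TAC/ADC or TDC electronics discussed next.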
The two main types of timing electronics are based either on the combination of a time-to-analog converter (TAC) and an analog-to-digital converter (ADC), or on a time-to-digital converter (TDC). In the first case, the difference between the start and the stop signal is converted into an analog voltage signal, which is then processed by the ADC. In the second method, the delay is directly converted into a digital signal. Systems based on the TAC–ADC combination generally have better timing resolution and linearity, but they are expensive and lack the capability of being integrated. TDCs, on the other hand, can be integrated into a single chip and hence are better suited to multi-channel systems. However, they have a worse timing performance and can handle much lower sustained count-rates. Applications The usefulness of TD diffuse optics lies in its ability to continuously and noninvasively monitor the optical properties of tissue, making it a powerful diagnostic tool for long-term bedside monitoring in infants and adults alike. It has already been demonstrated that TD diffuse optics can be successfully applied to various biomedical applications such as cerebral monitoring, optical mammography, muscle monitoring, etc. See also Near-infrared spectroscopy Functional near-infrared spectroscopy Diffuse optical imaging Neuroimaging Functional neuroimaging References Neuroimaging Optical imaging Spectroscopy
Time-domain diffuse optics
[ "Physics", "Chemistry" ]
1,634
[ "Instrumental analysis", "Molecular physics", "Spectroscopy", "Spectrum (physical sciences)" ]
61,195,896
https://en.wikipedia.org/wiki/Lump%20sum%20contract
A lump sum contract in construction is one type of construction contract, sometimes referred to as stipulated-sum, in which a single price covering the entire project is quoted based on plans and specifications, so the owner knows exactly how much the work will cost in advance. This type of contract requires a full and complete set of plans and specifications and includes all the indirect costs plus the profit; the contractor receives progress payments each month minus retention. The flexibility of this contract is minimal, and changes in design or deviation from the original plans would require a change order paid by the owner. In this contract the payment is made according to the percentage of work completed. The lump sum contract is different from guaranteed maximum price in the sense that the contractor is responsible for additional costs beyond the agreed price; however, if the final price is less than the agreed price then the contractor will gain and benefit from the savings. There are some factors that make for a successful execution of a lump sum contract on a project, such as experience and confidence, management skills, communication skills, having a clear work plan, a proper list of deliverables, contingency, and dividing the responsibility among the project team. According to the Associated General Contractors of America (AGC), with a lump sum contract or fixed-price contract, the contractor assesses the value of work as per the documents available, primarily the specifications and the drawings. At the pre-tender stage the contractor evaluates the cost to execute the project (based on the above documents such as drawings, specifications, schedules, tender instructions and any clarification received in response to queries) and quotes a fixed inclusive price. Advantages The owner's risk is reduced due to the price of the contract being fixed, and variations are not as frequent as with other contracts. There are fewer change orders. The bidding and contractor selection is less complicated. Obtaining construction loans is easier with this type of contract. The profit margins and percentages are greater for engineers and contractors. Payments and instalments are made on a regular basis, which provides the contractor with a reliable cash flow. Management of the contract is a lot easier for the owner. It creates improved communication and relationships between the design team, contractor, and the owner. Disadvantages There is a higher risk for the contractor. Proper change order documentation is required, which could be time-consuming. Higher fixed price due to unforeseen conditions. The contractor selection usually takes longer. The design has to be completed before the start of activities. Change orders could be rejected by the owner. It increases the adversarial relationship among the stakeholders of the project. The contractor has the freedom to choose its own methods. Potential for disputes between the client and the contractor, due to, for example, unbalanced bids, change orders, design changes, and compensation for early completion. Variations to lump sum contracts Variations occur due to fluctuation in prices and inflation, provisional items, statutory fees, relevant events such as failure of the owner to deliver goods, etc. Where the cost of a specific activity is identified as a "provisional sum", a variation in actual cost may be accepted by the employer. 
Variations are typically broken down into two categories, beneficial and detrimental, where the former is for improvement of work quality, cost and schedule reduction, and the latter is a negative change in performance or quality of work due to client's financial difficulties. There are many reasons for variations to occur but main causes are normally due to omission in design, inadequate design, changes in specifications and scope, and lack of coordination and communication among the stakeholders. Case law Harvey Shopfitters Limited -v- ADI Limited (2003): Work was commenced under a Letter of Intent but a formal contract document was never completed. Harvey wanted to be paid for work on a quantum meruit basis; ADI argued that a lump sum contract had been agreed and should be enforced. The England and Wales Court of Appeal held that there was sufficient certainty in the parties' prepared agreements to establish that a lump sum contract was in place. References Contract law construction
Lump sum contract
[ "Engineering" ]
820
[ "Construction" ]
56,367,582
https://en.wikipedia.org/wiki/ACS%20Mersin
ACS Mersin is a glass factory in Mersin, Turkey. ACS stands for Anadolu Cam Sanayii ("Anatolian Glass Industry"). The factory is in the Yenitaşkent neighborhood to the north of the Turkish state highway which connects Mersin to Tarsus. Its distance to Mersin is about . The factory was put into operation in 1969. In 1975, it was acquired by the Şişecam Group of Companies. In 1988, NNPB (narrow neck press and blow) technology was successfully used for the first time in Turkey at ACS. Current annual glass production is 260 822 metric tons. The number of employees is 461. After the planned expansion, annual production will rise to 366 685 metric tons and the number of employees will increase to 483. References Buildings and structures in Mersin Province Glassmaking companies Akdeniz District Industrial buildings in Turkey Industrial buildings completed in 1975 Companies based in Mersin Turkish brands Turkish companies established in 1969 Manufacturing companies established in 1969
ACS Mersin
[ "Materials_science", "Engineering" ]
208
[ "Glass engineering and science", "Glassmaking companies", "Engineering companies" ]
64,675,556
https://en.wikipedia.org/wiki/Vogel%E2%80%93Fulcher%E2%80%93Tammann%20equation
The Vogel–Fulcher–Tammann equation, also known as Vogel–Fulcher–Tammann–Hesse equation or Vogel–Fulcher equation (abbreviated: VFT equation), is used to describe the viscosity of liquids as a function of temperature, and especially its strongly temperature dependent variation in the supercooled regime, upon approaching the glass transition. In this regime the viscosity of certain liquids can increase by up to 13 orders of magnitude within a relatively narrow temperature interval. The VFT equation reads as follows: η(T) = η₀ · exp(B / (T − T₀)), where η₀ and B are empirical material-dependent parameters, and T₀ is also an empirical fitting parameter, which typically lies about 50 °C below the glass transition temperature. These three parameters are normally used as adjustable parameters to fit the VFT equation to experimental data of specific systems. The VFT equation is named after Hans Vogel, Gordon Scott Fulcher (1884–1971) and Gustav Tammann (1861–1938). References Eponymous equations of physics Equations of fluid dynamics
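A minimal numerical sketch of the equation above may help; the parameter values used here are illustrative placeholders, not fitted constants for any real liquid.

```python
import math

# Minimal sketch of the VFT equation eta(T) = eta_0 * exp(B / (T - T0)).
# The parameter values below are illustrative placeholders; real values are
# obtained by fitting the equation to measured viscosity data.

def vft_viscosity(T, eta_0=1e-4, B=1000.0, T0=200.0):
    """Viscosity in Pa*s at absolute temperature T (K), valid only for T > T0."""
    if T <= T0:
        raise ValueError("The VFT equation diverges at and below T0")
    return eta_0 * math.exp(B / (T - T0))

# The viscosity grows steeply as T approaches T0 from above, which is the
# strongly non-Arrhenius behaviour characteristic of the supercooled regime.
for T in (400.0, 300.0, 250.0, 220.0):
    print(T, vft_viscosity(T))
```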
Vogel–Fulcher–Tammann equation
[ "Physics", "Chemistry" ]
203
[ "Equations of fluid dynamics", "Equations of physics", "Eponymous equations of physics", "Fluid dynamics stubs", "Fluid dynamics" ]
64,679,396
https://en.wikipedia.org/wiki/Depot%20injection
A depot injection, also known as a long-acting injectable (LAI), is a term for an injection formulation of a medication which releases slowly over time to permit less frequent administration of a medication. They are designed to increase medication adherence and consistency, especially in patients who commonly forget to take their medicine. Depot injections can be created by modifying the drug molecule itself, as in the case of prodrugs, or by modifying the way it is administered, as in the case of oil/lipid suspensions. Depot injections can have a duration of action of one month or greater and are available for many types of drugs, including antipsychotics and hormones. Purpose Depot injections provide longer duration drug action through slow absorption into the bloodstream. They are usually administered in the muscle, into the skin, or under the skin. The injected medication slowly releases the medication into the bloodstream. It may be used in patients who forget to take their medication; some doctors and patients consider the use of a depot injection to be coercion, and are opposed to their use for that reason. Mechanism Drugs may be modified to be slowly activated by the body, or be absorbed slowly by the body. Many are dissolved in an organic oil, as the compound is lipophilic due to the addition of functional groups to provide slow action. An example of this is adding a functional group such as decanoate. The combination of an oil base and modification to decrease metabolic activation prevent medications from being fully released. This can result in length of activity of 2–4 weeks or more. The alteration of the pharmacokinetics of the drug (the absorption and activation) does not change the side effect profile of the medication; thus, atypical antipsychotics are still preferred over typical antipsychotics. Discovery The first long-acting (depot) injections were antipsychotics fluphenazine and haloperidol. The concept of a depot injection arose before 1950, and originally was used to describe antibiotic injections that lasted longer to allow for less frequent administration. Pharmacokinetics Most commonly, depot injections are designed to have a duration of 2–4 weeks of action, however the pharmacokinetics of a specific formulation vary. Absorption and metabolism can both be affected by modifying the drug itself (for example, by attaching a functional group) or by the formulation of the product (examples are oil or microsphere preparations). Repeated administration of depot injections can lead to a half life over one month (as in some preparations of fluphenazine), but this can be variable in different patients. Hormonal depot injections of estradiol can last anywhere from one week to over one month. Medroxyprogesterone acetate is available as a depot injection which is injected once every three months to provide continuous hormonal contraception and releases for up to nine months after injection. Availability Many medications are available as depot injections, including many typical and atypical antipsychotics, as well as some hormonal medications and medication for opioid use disorder. Depot injections of antipsychotics are used to improve historically low adherence in patients with diseases such as schizophrenia. Different products may be administered or implanted either by a doctor or nurse, while some are designed to be administered by the patient themselves. 
Self-administered depot injections are used to increase healthcare access and decrease the need to visit the doctor as frequently, especially in low- and middle-income countries. Insulin may also be considered a depot injection depending on formulation. Insulin glargine, for example, is designed to precipitate after injection so that it is absorbed by the body more slowly and over a longer period than regular insulin. Depot injections of insulin have been studied as a way to better replicate the body's natural basal rate of insulin production, including formulations that can be activated by light to control the release of insulin from the injected depot. See also Flip–flop kinetics References Medical treatments Routes of administration Dosage forms Injection (medicine)
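As a rough illustration of why slow release from a depot prolongs the apparent half-life (the flip-flop kinetics mentioned in the See also section above), the following sketch uses a standard one-compartment model with first-order absorption. All parameter values, and the single-dose assumption, are hypothetical and chosen purely for demonstration; they do not describe any specific product.

```python
import math

# Illustrative one-compartment model with first-order absorption, showing
# "flip-flop" kinetics: when the absorption rate constant ka of the depot is
# much smaller than the elimination rate constant ke, the terminal decline of
# plasma concentration is governed by ka (release from the depot), not by ke.
# All parameter values are hypothetical and chosen only for demonstration.

def concentration(t, dose=100.0, F=1.0, Vd=50.0, ka=0.005, ke=0.1):
    """Plasma concentration at time t (hours) after a single depot dose."""
    return (F * dose * ka) / (Vd * (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))

# With ka = 0.005 /h, the apparent (terminal) half-life is ln(2)/ka ≈ 139 h,
# i.e. nearly six days, even though the drug itself is eliminated with
# ke = 0.1 /h (half-life ≈ 7 h) once it reaches the circulation.
for t in (24, 168, 336, 672):
    print(t, round(concentration(t), 3))
```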
Depot injection
[ "Chemistry" ]
836
[ "Pharmacology", "Routes of administration" ]
64,684,602
https://en.wikipedia.org/wiki/Large%20diameter%20centrifuge
A large diameter centrifuge, or LDC, is a centrifuge several meters in diameter which rotates samples to increase the acceleration they experience, enhancing the effect of gravity. Large diameter centrifuges are used to understand the effect of hypergravity (gravitational strengths stronger than that of the Earth) on biological samples, including but not limited to plants, organs, bacteria, and astronauts (such as in NASA's Human Performance Centrifuge), or on non-biological samples, for experiments in fields such as fluid dynamics, geology, and biochemistry. Description Frequently, "LDC" is used to refer to the centrifuge at the European Space Agency (ESA) campus known as ESTEC (European Space Research and Technology Centre). This is an 8-m diameter, four-arm centrifuge covered by a dome, which is available for research. A total of six gondolas, each able to carry an 80 kg payload, can spin samples at a maximum of 20 times Earth's gravity, corresponding to 67 revolutions per minute. Full technical specifications are available for free on the ESA website. Competition and grants The European Space Agency (ESA) and UNOOSA let students compete with a research proposal for the use of the LDC. These competitions are known as 'Spin Your Thesis!'. When a proposal is accepted, the students are guided at ESA/ESTEC in using the LDC. Support is given for a variety of fields including biology, physics and chemistry. See also Gravitropism Random positioning machine Free Fall Machine Clinostat References Laboratory equipment Gravitational instruments
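The quoted figures (8 m diameter and a maximum of 20 times Earth's gravity at 67 revolutions per minute) are consistent with the basic relation for centripetal acceleration, a = ω²r. The sketch below checks this; the 4 m gondola radius is an assumption derived from the stated diameter, and the true radius of the sample position may differ.

```python
import math

# Quick check of the relation between spin rate and simulated gravity in a
# centrifuge: centripetal acceleration a = omega^2 * r. Using the figures
# quoted above for the ESTEC LDC (8 m diameter, i.e. roughly r = 4 m at the
# gondola, 67 revolutions per minute), the result is close to 20 g.

G0 = 9.81  # m/s^2, standard gravity

def g_level(rpm, radius_m):
    omega = rpm * 2.0 * math.pi / 60.0   # angular velocity in rad/s
    return omega**2 * radius_m / G0

print(g_level(67, 4.0))   # approximately 20 g
```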
Large diameter centrifuge
[ "Technology", "Engineering" ]
334
[ "Measuring instruments", "Gravitational instruments" ]
74,823,985
https://en.wikipedia.org/wiki/Mark%20D.%20Foster
Mark D. Foster is the Thomas A. Knowles Professor of Polymer Science and Polymer Engineering, and associate dean of programs, policy and engagement at the University of Akron. His area of research is polymer surfaces and interfaces. Education Foster completed his undergraduate studies in Chemical Engineering at Washington University in St. Louis in 1981. He completed his doctorate, also in Chemical Engineering, at the University of Minnesota, Twin Cities in 1987. He then took a postdoctoral staff scientist position at the Max Planck Institute for Polymer Research. He held a senior postdoctoral position under Frank S. Bates at the University of Minnesota during 1989–1990. Career Foster joined the University of Akron Polymer Science faculty in 1990 at the rank of assistant professor. He served as chair of the department of polymer science from 2005 to 2008. Since 2008, he has held roles as associate dean of the College of Polymer Science and Polymer Engineering, as well as director of the Akron Global Polymer Academy. His most cited work treats the subject of epoxy-terminated self-assembled monolayers. Awards and recognition 2005 – Sparks–Thomas Award from the ACS Rubber Division References Living people Polymer scientists and engineers Year of birth missing (living people) McKelvey School of Engineering alumni University of Minnesota College of Science and Engineering alumni University of Akron faculty
Mark D. Foster
[ "Chemistry", "Materials_science" ]
259
[ "Polymer scientists and engineers", "Physical chemists", "Polymer chemistry" ]
74,825,642
https://en.wikipedia.org/wiki/Fluoroether%20E-1
Fluoroether E-1 (known chemically as heptafluoropropyl 1,2,2,2-tetrafluoroethyl ether) is a chemical compound that belongs to the class of per- and polyfluoroalkyl substances (PFAS). This synthetic fluorochemical is used in the GenX process, and may arise from the degradation of GenX chemicals including FRD-903. Production Fluoroether E-1 is mainly produced within the GenX process, in which FRD-903 (2,3,3,3-tetrafluoro-2-(heptafluoropropoxy)propanoic acid) is used to generate ammonium 2,3,3,3-tetrafluoro-2-(heptafluoropropoxy)propanoate (FRD-902) and Fluoroether E-1 (heptafluoropropyl 1,2,2,2-tetrafluoroethyl ether). Properties Fluoroether E-1 is a colorless liquid that is practically insoluble in water. It is volatile and has a low boiling point. References Perfluorinated compounds Chemours Chemical processes
Fluoroether E-1
[ "Chemistry" ]
273
[ "Chemical process engineering", "Chemical processes", "nan" ]
74,830,262
https://en.wikipedia.org/wiki/LY305
LY305 is a transdermally bioavailable selective androgen receptor modulator (SARM) developed by Eli Lilly for the treatment of osteoporosis in men. Its chemical structure includes an N-aryl hydroxyalkyl group. A phase one trial found promising results. See also Compound 2f (SARM) References Selective androgen receptor modulators Chloroarenes Anilines Nitriles Cyclopentanols Benzonitriles
LY305
[ "Chemistry" ]
102
[ "Nitriles", "Functional groups" ]
74,831,326
https://en.wikipedia.org/wiki/Sobetirome
Sobetirome (GC-1) is a thyromimetic drug that binds preferentially to the thyroid hormone receptor TRβ1 over TRα1. It has been investigated for the treatment of dyslipidemia, obesity, Pitt–Hopkins syndrome, cholestatic liver disease, multiple sclerosis, bleomycin-induced lung fibrosis, and ARDS caused by COVID-19. It was designated as an orphan drug by the FDA for the treatment of X-linked adrenoleukodystrophy. References Orphan drugs Thyroid hormone receptor beta agonists Carboxylic acids Phenols Isopropyl compounds
Sobetirome
[ "Chemistry" ]
140
[ "Carboxylic acids", "Functional groups" ]
74,831,538
https://en.wikipedia.org/wiki/VK2809
VK2809 (formerly known as MB07811) is a thyromimetic prodrug whose active form is selective for the THR-β isoform. It is being developed by Viking Therapeutics in a phase II trial for the treatment of nonalcoholic steatohepatitis and is also being investigated for glycogen storage disease type Ia. In 2023, Viking Therapeutics filed a lawsuit against the developer of ASC41, Chinese company Ascletis BioScience, accusing it of stealing Viking's trade secrets to develop ASC41 which is allegedly similar to, or identical to, VK2809. References Thyroid hormone receptor beta agonists Prodrugs Dioxaphosphorinanes Phenols 3-Chlorophenyl compounds Isopropyl compounds Experimental drugs developed for non-alcoholic fatty liver disease
VK2809
[ "Chemistry" ]
185
[ "Chemicals in medicine", "Prodrugs" ]
74,832,110
https://en.wikipedia.org/wiki/Kreuzbau%20%28Hamburg%29
The Kreuzbau (also Klassenkreuz) is a building type for school buildings in Hamburg. Between 1957 and 1963, Kreuzbau buildings were erected there at a good 60 locations of state schools. They have four wings on a cruciform ground plan, from which the name is derived. The Kreuzbau building has three floors and a flat roof. Each floor has four classrooms and associated small group rooms, which means that the Kreuzbau building can accommodate twelve school classes. The classrooms are directly accessed by a central staircase - without a corridor and in the way of Schustertyp. The design for the Kreuzbau building came from the Hamburg building director Paul Seitz. The main advantage of this type of building was its rapid assembly; from today's perspective, its disadvantage is its lack of thermal insulation. More than 80% of the Kreuzbau buildings erected in Hamburg are still standing and mostly serve elementary schools as classrooms. History Antecedents After the end of World War II, almost half of Hamburg's former 463 school buildings were no longer usable: 21% of the schools were destroyed and 26% were so badly damaged that they could hardly be used. From 1945 to 1947, the number of students in Hamburg doubled from 95,000 to 186,000. The reasons for this increase were the return of families from the 1943 evacuation after the "Feuersturm," including students returning from the "Kinderlandverschickung." The settlement of 1944–50 flight and expulsion of Germans and refugees from the Soviet occupation zone had a reinforcing effect. Until 1948, school construction was limited to makeshift repairs of damage and the use of shacks and other temporary facilities. The shortage of space could only be met by "shift teaching." This situation generated considerable public pressure, since parents' gainful employment was severely hampered by staggered shift teaching of several children. In Hamburg's state politics, the shortage of space in schools was, along with the housing shortage, the irritant par excellence and contributed to the SPD's loss of its majority in the 1953 Hamburg elections, even though top candidate Max Brauer promised the completion of one new school per month in the election program "A flourishing Hamburg." The winner of the 1953 elections was the bourgeois Hamburg Block, which used the issue of the lack of school buildings to stop the school reform pushed by the SPD. But that did not change the urgent task of multiplying the pace of school construction. At the same time, however, the city's resources were limited - there were enough other expensive tasks in housing construction and industrial settlement. In 1952, Paul Seitz was appointed First Director of Construction and Head of the Structural Engineering Office in Hamburg, and thus also Deputy to the Senior Director of Construction at the Hamburg Building Authority, Werner Hebebrand. Seitz held these posts until 1963, when he left Hamburg for a professorship at the Berlin University of the Arts. Seitz designed mainly schools, university buildings and other public buildings during his ten-year tenure. For schools, he relied entirely on serial designs that were used according to the modular principle. In doing so, he pursued two concepts: the "Green School" and the "wachsenden Schule." 
The Green School was intended to realize ideas of the new education movement of a return to nature by placing smaller school buildings with a maximum of two stories on generously sized school grounds in the green, ideally with direct access to the garden from the classroom. This type of construction deliberately stood out from the imposing "school barracks" of the Wilhelminism Period and was intended to appear transparent and light. The dimensions of these buildings were to have a human scale, a deliberate contrast to the old school buildings in which a "whole generation was drilled in racial ideological and militaristic values." It is true that school construction was not the focus of Nazi architecture, since hardly any new schools were planned and completed between 1933 and 1945. However, the new school buildings were also intended to express a turning away from the secondary virtues that had made the Nazi state and world war possible. The "wachsenden Schule" was to be available quickly and then grow with the needs of the school. Initially, this concept also called for school buildings to be easily relocatable, and was accompanied by the abandonment of costly foundations and basements. The first series of pavilion schools was built in Hamburg according to this concept; a much-noted prototype for this type of construction at the time is the listed Mendelssohnstraße school in Bahrenfeld. The serial construction developed in the process was the pavilion type A, made of lightweight materials by Polensky & Zöllner. By 1961, 459 new classrooms of this type had been erected. Design phase The design task for Seitz and his working group in the structural engineering department was clear: How could new school construction in Hamburg be drastically accelerated despite a labour shortage of skilled workers and budget constraints without abandoning the ideals of the Green School"? What could a series design look like that would also work on smaller or denser school sites? And how would this serial design have to be designed in order to represent all functions of a school as a nucleus of a "wachsenden Schule" already in the first construction phase? The answer to these questions was the Kreuzbau building. The labour shortage of skilled workers in the construction industry was a major obstacle to accelerating the school building program. Conventional buildings needed skilled masons, foremen, scaffolders, and roofers. Public school construction competed for these skilled workers with residential construction and the private sector. The use of precast concrete reduced the need for such skilled workers on the job site, while the quick 15-day erection time allowed the erection crew to move on, after which the build-out continued. The standardized school construction reduced the need for skilled workers in the shell construction phase to specially trained assemblers - the construction time was already reduced to one fifth by the "Pavilion A" assembly system. The pavilion schools were cheap and quick to build, but they usually had only one or at most two stories, and a correspondingly high land consumption. This required a plot size of at least 24,000 m2 for an Volksschule with the usual class frequency of the time. This was still possible in the planned new development areas in the expansion areas of Hamburg (e.g. 
Rahlstedt and Bramfeld), but in densification areas such as Wilhelmsburg and Wandsbek or to replace war-damaged school buildings near the inner city, as in Horn and Hamm, these plot sizes were not available. In addition, it became apparent that the influx and birth rates in the new development areas were higher than expected, so that the already designed schools had to accommodate higher numbers of students. A classroom building of more than two stories with a good ratio of usable to circulation space was essential to achieve the necessary space efficiency. In response to the design task, Seitz developed the Kreuzbau building starting in 1955. The three-story arrangement made better use of the land than a single-story pavilion. In addition, the direct access to the classrooms from the staircase resulted in a very favorable ratio of 80% usable space to 20% circulation space. However, the lowest grades of a school were still to be housed in pavilions accessible at ground level. The design relied heavily on precast concrete elements and placed great emphasis on rapid erectability. The small heating cellar with oil-fired heating made it possible to start school operations immediately, even in winter, regardless of progress in further construction phases. In 1955/56, a pre-series type of the Klassenkreuz was built for the school at St. Catherine's Church. This prototype, in contrast to the series production, had four floors, because the plot of land at the Katharinenkirchhof was only 7,000 m2 and thus lacked space for the erection of further school buildings. The school at the St. Catherine's Church was listed as a historical monument, but was nevertheless demolished in 2011 in favor of the newly to be built "St. Catherine's Quarter". The school is located at the Katharinenkirchhof. Construction phase From now on, the Klassenkreuz was to serve as the centerpiece and first construction phase of the "wachsenden Schule". After it was erected, classes could begin immediately in the Kreuzbau building, while other school buildings were added all around. Ideally, the following sequence was typical: First construction phase: Klassenkreuz Second construction phase: classrooms with differentiation rooms in pavilions Third construction phase: administration rooms, janitor's apartment, common room / break hall Fourth phase of construction: classrooms, gymnasium (side hall), smaller gymnasium, auditorium The space available in the Klassenkreuz was in accordance with the specifications of the Room and Furnishing Program for Hamburg Schools of 1958. The production of the prefabricated parts for the Kreuzbau buildings was entrusted to the "Arbeitsgemeinschaft Kreuzschulen," which consisted of Polensky & Zöllner and Paul Thiele AG. The prototype at the St. Catherine's Church was accepted on August 9, 1957. Four more approvals of Kreuzbau buildings followed before the end of August 1957. Based on the experience of the first series of ten, the construction time for a Kreuzbau building was half a year, half of a "normal" school building. The planned cost would be 670,000 marks per Kreuzbau building. At the end of August 1957, school inspector Dressel announced that the new construction method would eliminate the school space shortage in Hamburg in four to five years. In October 1961, the topping out ceremony for the 50th Kreuzbau building was celebrated at the Gymnasium Corveystrasse. On October 21, 1963, the last Kreuzbau building was accepted at Krohnstieg. 
In just over six years, the type had been built 67 times in Hamburg, resulting in 796 classrooms. Kreuzbau buildings were also erected outside Hamburg, for example in a modified form at the Gottfried-Röhl Elementary School in Berlin, built between 1961 and 1964. In Freiburg im Breisgau, nine Kreuzbau buildings based on Hamburg's design were built by 1976 at school sites in new housing estates to the west of the city center. From the early 1960s, school construction in Hamburg could no longer keep up with the pace of new housing construction. In Bramfeld, shift teaching was reintroduced in 1961 in the newly built Hegholt housing estate, and in 1965, Wilhelm Dressel, the school inspector responsible for school construction, publicly announced that they had "lost the race with the new housing developments". As an emergency solution, "classrooms and more classrooms" had to be built in the outlying districts, and the construction of gymnasiums, break halls, subject rooms, and auditorium buildings was postponed. While new school construction proceeded in absolute numbers throughout the city, the "growing school" concept faltered at individual school sites, or led to growth from the mid-1960s onward only in classroom buildings of the newer "Type-65", "Honeycomb construction," and "Type-68" ("Double-H") series. Gymnasiums were not built at some sites until ten years after the opening of the school, and auditorium buildings were rarely built. The need for specialized rooms was concentrated at secondary schools after the abolition of VolksSchule in 1964; many of the Kreuzbau school sites of the early 1960s are now elementary schools. Across the various types of buildings, the Hamburg school construction program was "unique in scope" compared to other large West German cities. Nowhere else in the Federal Republic of Germany did new schools between 1950 and 1980 rely so heavily on assembly and type buildings as in Hamburg, accompanied by the extensive abandonment of individual designs. In the German-speaking world, this is surpassed only by the type school construction in the GDR. Description The Kreuzbau building is a three-story building with a cruciform floor plan. There are four classrooms on each floor, which are accessed through a central stairwell without corridors. From the stairwell, the schoolchild reaches his or her classroom through a small anteroom that serves as a checkroom. Each classroom is assigned a smaller differentiation room, which is separated by a glass wall. The classrooms are between 65 and 68 m2, the differentiation rooms between 8 and 11 m2. In addition, there are WC rooms on each floor. Room layout The ground plan of this type of building is not axial symmetry. The shape of the Kreuzbau building floor plan is sometimes compared to the wings of a windmill because the surfaces of the wings are laterally displaced with respect to the center of rotation. However, unlike the windmill, the Kreuzbau building floor plan is also not rotationally symmetrical because the wings are not equal in length. This results from the fact that the wings are "pushed" into the floor plan of the stairwell to different extents, because only two wings accommodate the WC rooms and emergency stairwells and are correspondingly longer. In the usual arrangement, the north wing protrudes furthest from the building at about 17 m, while the shortest wing at about 12 m is either the west wing (right tapered variant) or the east wing (left tapered variant). 
The floor plan of each classroom has the shape of a right trapezoid. In each of the four wings, one side extends from the stairwell at a right angle, while the other side of the wing tapers toward the front. The tapered side is the same for each wing; in some Kreuzbau buildings it is always the right side, in others it is always the left side. The four end faces are always parallel to the staircase. The depth of the classrooms is up to 8 m. With an economical room height, this requires two-sided lighting, which also enables a vertical cross-flow ventilation system. Typologically, the Klassenkreuz is thus a cobbler type, since there are no corridors and each classroom is lit and ventilated from two sides. Each classroom has a main window wall with tall windows and, opposite, a secondary window wall with a light-diffusing glazed window band just below the ceiling. The blackboard wall always forms the end face of the wing. If the particular plot of land allowed it, the Kreuzbau building was always placed for maximum use of sunlight: the main window wall of the east and west wings faces south, while for the north and south wings it faces east. Thus, when two wings are viewed, there is a characteristic sequence of the main and secondary window walls, from which the cardinal direction of the respective wing can be derived. Access The concept of the pavilion school provides for a loose, organic arrangement of the building structures on a generous plot, connected to each other by means of open arcades. Such arcades were regularly used in Hamburg schools of the 1950s and early 1960s; they were designed as elevated flat roofs supported by unadorned tubular frames. In Hamburg, the arcades usually have a clear height of little more than two meters and connect to the pavilions at the upper end of the doorway. Apart from the emergency exits, the Kreuzbau building has two entrances. These are located where the wings abut, one entrance on the southeast side and the other diagonally opposite on the northwest side. Thus, one entrance is located between two secondary window walls and the other entrance between two main window walls. Due to the height of the windows in relation to the height of the arcades, only the northwest entrance can be connected to the arcade system, because only there can the arcade roof be routed below the bottom edge of the secondary window band; at the lower-reaching main windows, the roof would otherwise run in front of the window area. Many Kreuzbau buildings are not (or are no longer) connected to the arcade system at their locations at all. Behind the rather narrow glass entrance door, a lobby leads into the central stairwell. The lobby on the first floor occupies the same position in the floor plan as the escape connecting corridors on the two upper floors, which serve as a separate escape route there. The stairwell has a rectangular shape, with its long side oriented along the north–south axis. The actual stair is located in the southwest corner of the stairwell and leads to the next floor in two flights set at right angles to each other, connected by a landing. The stairwell eye has the shape of a kite with strongly rounded corners. Together with the narrow metal handrails, this design looks very typical of the 1950s. The stairwell is internal, so it has no windows. 
Although both the entrance doors on the first floor and the escape connecting corridors on the upper floors are glazed, they are on both sides of the vestibule or corridor. As a result, not an excessive amount of daylighting enters the stairwell. Two circular skylights are incorporated in the roof window for additional lighting. The fire protection concept of the Kreuzbau building provides that the main escape route leads through the stairwell, which thus forms the "necessary stairs". In addition to this main stairwell, there are two emergency stairwells located on the front sides of the north and west wings, from where they lead to the outside via lateral emergency exits. Each classroom therefore has a second escape route, either by direct connection to an emergency stairwell or by an escape connecting corridor to an adjacent classroom from where an emergency stairwell can be reached. The escape connecting corridor is separated from the main stairwell in a smoke-tight manner. This concept corresponded to the building police regulation of 1938 valid at the time of construction, which was confirmed in 1957 and 1958 in consultations of all responsible expert commissions (so-called "theater commission"). In 1961, the building police approval took place, which was again confirmed by the building regulation office in 1974. First, Kreuzbau buildings of the series "K1 V1" were erected. In this first series, the two emergency stairwells at the end faces of the north and west wings are glazed. Later series were less elaborate, and the emergency stairwells are still present but no longer visible from the outside. Interior In 2012, Hamburg's Office for the Protection of Historical Monuments commissioned a survey of the post-war buildings of the Uferstraße Vocational School, which included a Kreuzbau building, an eight-classroom wing and an administration building. This ensemble was listed in 1973 together with the buildings by Fritz Schumacher. In the process, the following original design of the Kreuzbau building was worked out: The interiors of the Kreuzbau building were designed on the vertical surfaces with glass elements, floor-to-ceiling wooden panels and light yellow exposed brickwork. These surfaces were articulated both vertically and horizontally. The ceilings of the rooms were clad with rectangular acoustic panels framed with simple wood trim at the transition to the wall surfaces. The doors leading from the stairwell were recessed into wood-clad wall surfaces and designed with exposed wood. The radiators and inner doors of the WCs were painted in a light yellow-red color. The doors leading from the classrooms were also designed with exposed wood and had glass panels. The slenderly designed staircase was finished in Béton brut, and the handrail was made of metal. The floor was covered with dark, iridescent floor tiles. The classrooms were equipped with mobile chairs and desks, which was in contrast to the rigid school desks of the pre-war period. Fixed rooms were equipped with sound-absorbing ceiling paneling and built-in cabinets. More than half of the Kreuzbau buildings were equipped with artworks purchased with funds from the "Art in Construction" program of the building authority. Most of these were murals or reliefs in the stairwells. 
Hamburg artists funded in this way included (street names of demolished Kreuzbau buildings (as of 2020) in italics): Ulrich Beier (Stephanstraße), Gerhard Brandes (Walddörferstraße), Annette Caspar (Potsdamer Straße), Jens Cords (Schenefelder Landstraße, Fahrenkrön), Hanno Edelmann (An der Berner Au), Arnold Fiedler (Alsterredder), Heinz Glüsing (Beltgens Garten), Erich Hartmann (Vermoor), Helmuth Heinsohn (Wesperloh), Volker Detlef Heydorn (Windmühlenweg), Fritz Husmann (Sanderstraße), Diether Kressel (Brucknerstraße, Humboldtstrasse), Nanette Lehmann (Fährstrasse), Max Hermann Mahlmann (Heinrich-Helbing-Strasse), Maria Pirwitz (Schimmelmannstrasse), Ursula Querner (Benzenbergweg), Albert Christoph Reck (Rahlaukamp), Walter Siebelist (An der Berner Au), Herbert Spangenberg (Stockflethweg), Eylert Spars (Francoper Straße, Hanhoopsfeld, Krohnstieg), Hans Sperschneider (Hinsbleek), Hann Trier (Struenseestraße) and Johannes Ufer (Neubergerweg). Building construction and assembly Structurally, the Kreuzbau building is a skeleton building made of precast concrete elements, built on a foundation without a full basement and finished with a flat roof. After setting up the construction site, the basement and foundation were built using conventional construction methods: A boiler room basement was excavated and removed under one of the four wings. The ceiling above the basement and the rest of the foundation were then constructed in reinforced concrete. All other slabs of the Kreuzbau building were assembled from 16 cm thick precast concrete elements. All these precast concrete elements were brought to the construction site by special trucks, where they were lifted to the assembly site by means of a single mobile crane - a tower crane was not required. The assembly of the Kreuzbau building was carried out by means of an auxiliary scaffold that extended over two stories. This scaffold was precisely aligned and served as falsework for the preliminary attachment of the reinforced concrete columns and wall sections as well as for the support of the floor slabs. The largest components were the 10.7 m long vertical main columns that run through all three floors. The slabs are designed as T-beams, with two to four main ribs (webs) transferring the load. Connecting steels project inward from the main columns, to which an edge beam of cast-in-place concrete is attached, connecting the main columns to the slabs. Once the inner corner section was installed, the structure had enough stability against torsion and the auxiliary scaffold was removed. After 15 working days the concrete skeleton was ready and the flat roof could be added. The construction was thus practically independent of the weather, which can be seen from the completion dates, which knew no winter break. After completion of the roof, the installation work and drywall construction took place, while at the same time the end faces of the wings were bricked up with masonry. Demolition or redevelopment? Some of the series and prefabricated buildings of the Hamburg Building Authority from the post-war period are now considered to be unsuitable for renovation. This applies in particular to the Type A pavilions, whose wood-frame walls were provided with an outer skin of "fulgurite" and an inner wall of "lignate." "Fulgurite" and "lignate" are brand names for fire-retardant fibre cement. Beginning in 1987, the city of Hamburg had its schools inspected for asbestos and in some cases closed. 
From 1988, 182 school pavilions in Hamburg were disposed of or demolished because of asbestos contamination. From 1993, the use of asbestos in new buildings was generally prohibited. Kreuzbau buildings are not structurally contaminated with asbestos, so the question of renovation was decided primarily on the basis of economic efficiency and space requirements. Space requirements and building condition From 2010 onwards, the decision on whether to renovate, or to demolish and replace, most of the Kreuzbau buildings became urgent: the Kreuzbau buildings were now around 50 years old and no longer met current requirements, especially in terms of thermal insulation. In addition, the accessibility required by the increasing inclusion of pupils with walking disabilities is available only on the first floor of the existing buildings. Unrenovated Kreuzbau buildings were therefore consistently rated 4 or 5 in the 2019 building classification of the Hamburg authorities (1 = new construction, 2 = basic renovation and meeting all current standards, 6 = practically unusable). A building classification (GKL) of 2 is targeted through renovation. At the same time, the number of pupils in Hamburg has risen sharply since the turn of the millennium. Between 2011 and 2020 alone, the total number of students increased by 11%, and the number of elementary school students increased by more than 17%. The increase of 11,600 elementary school students from 2011 to 2020 alone is equivalent to more than 500 classrooms at a maximum class frequency of 23 students. By 2030, the number of students is expected to reach about 240,000, a further increase of about 20% over 2020. In view of this development - the need for renovation on the one hand, and rapidly growing space requirements on the other - a decision had to be made at most locations between renovation and demolition with replacement construction. In most cases, the state operation Schulbau Hamburg (SBH) decided in favor of renovation. On the other hand, when a school site was abandoned altogether or a comprehensive new building concept was implemented, the Kreuzbau buildings were also demolished. Kreuzbau buildings are difficult to integrate into existing or new buildings because of the lack of options for horizontal circulation. If a corridor is to lead from a directly adjacent building into a Kreuzbau building, this can only be done via the end face of a wing, which thus becomes a passageway and is lost as a classroom. About 80% of the Kreuzbau buildings were still standing in 2020 and were mainly used by elementary schools as classrooms; the majority of the surviving buildings have been renovated. A few buildings of this type are under ensemble protection, i.e. are listed. Accessibility for physically handicapped children has so far been realized in only one case: in the Kreuzbau building of the Schule Hinsbleek, an elevator was installed in the stairwell eye. Thermal insulation Because the buildings almost never have listed status, the Energieeinsparverordnung (the German energy saving ordinance) applied to these renovations in full. This led to "large thermal insulation package[s]" and an often "crude renovation." The slender profiles of the windows and the piers were often lost, and in some cases the shadow-giving louvers were also removed. A positive counterexample is the renovation of the Kreuzbau building at Schierenberg, where the reinforced concrete piers were excluded from the insulation. 
These are only in direct contact with the building structure at the floor slabs and thus contribute little to heat loss. The non-load-bearing walls and the windows, on the other hand, were rebuilt instead of packing the old structure with an insulation layer on wooden battens, as is usually done. The yellow clinker bricks of the original buildings are lost in any case, as the thermal insulation layer requires a new facing. At Schierenberg, green-white glass mosaics in the parapets and black clinker on the end walls were chosen for this purpose - a quotation from the architecture of the 1960s, but not a reconstruction. In some other renovated Kreuzbau buildings (Beltgens Garten, Stengelestraße), strong color contrasts were used, but often the aim is a color scheme that corresponds to the original materials. Art in construction Works by renowned artists had been installed in front of, or mounted inside, some of the Kreuzbau buildings scheduled for demolition. Where these works could be easily dismantled, they were usually moved to other buildings on the school site; sculptures standing in front of Kreuzbau buildings were relocated. Murals and frescoes, on the other hand, are permanently attached to the building. The Kreuzbau building of the Gymnasium Rahlstedt contained three wall paintings by Eduard Bargheer from 1959. The removal of the picture carrier with the listed pictures was calculated at 150,000 euros. In 2019, two of the three paintings were installed in the atrium of the new Gymnasium building; the remaining picture had been damaged during removal. Fire protection Despite repeated tightening of the fire protection regulations since 1938, the Kreuzbau buildings still comply with the current requirements. Hamburg's 2001 fire protection regulations for school buildings require that each classroom on the same floor have two independent escape routes to exits to the outside or to necessary stairs. The length of the escape route to reach the stairs is limited to a maximum of 35 m. Since the classrooms are 9 m long and up to 8 m wide, the furthest corner results in a diagonal of about 12 m per classroom, which must be traversed twice in the worst case. This still leaves more than enough escape route length for the escape connecting corridor between the classrooms. The second escape route in the emergency stairwells must be kept clear, and the other requirements for door and aisle widths are met. If this had not been the case, an exterior stairwell would have had to be added to at least two of the wings during renovation, with corresponding costs. The practical suitability of the escape concept has so far been "tested" in one case: at the Schule Eckerkoppel, a fire broke out in the Kreuzbau building while the school was in session. The fire originated on the second floor of the east wing and from there set fire to the floor above and the flat roof structure. All students were evacuated and there were no injuries. Two classrooms were completely burned out, and a third classroom was severely damaged by firefighting water and soot. The Kreuzbau building was subsequently demolished. As a replacement, after one year of planning and nine months of construction, a modular wooden building of the "Hamburger Klassenhaus" type was erected to accommodate twelve classes on two floors. The new building was occupied in January 2020. Thus, the next phase of serial construction of classrooms in Hamburg has been initiated. 
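The escape-route figures in the fire protection section above can be checked with a few lines of arithmetic; the sketch below computes the classroom diagonal and the worst-case path length against the 35 m limit, neglecting the length of the connecting corridor itself, which is a simplification.

```python
import math

# Check of the escape-route reasoning above: the diagonal of a 9 m x 8 m
# classroom, and the worst-case path if that diagonal has to be traversed
# twice (own room plus the neighbouring room reached via the escape
# connecting corridor), compared with the 35 m limit of the 2001 Hamburg
# fire protection regulations for schools.

diagonal = math.hypot(9.0, 8.0)        # approximately 12.0 m
worst_case = 2 * diagonal              # approximately 24.1 m
print(round(diagonal, 1), round(worst_case, 1), worst_case <= 35.0)
```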
Classification and evaluation The Kreuzbau building was part of an attempt to greatly accelerate school construction and significantly increase space efficiency compared to the pavilion school, while maintaining the ideal of the "school in the green." These goals were partially achieved, but the "growing school" concept could not keep pace with the need for classrooms. A balance of classroom buildings, community buildings and green spaces was rarely achieved, certainly not with buildings that followed the same stylistic idea. Instead, many school sites in Hamburg feature a mixture of serial buildings of different generations and styles that wrap around the schoolyard like annual rings - starting with lightweight pavilions, then a Kreuzbau building, plus a Seitz gymnasium and administration with a clinker facade, and concluding with honeycomb buildings made of concrete or a string of Type 65 bricks. From a functional and aesthetic point of view, the Kreuzbau building is a successful design - as an individual building. The "communicative central staircase" and the "quality of the trapezoidal floor plan" of the classrooms, which provide a good framework for other forms of group work in addition to frontal teaching, are praised. Furthermore, the "generous sun protection" is emphasized. Finally, the "formative building shape" creates something like a focal point of the school grounds, especially in comparison to the low box shapes of the other Hamburg series buildings. However, the Kreuzbau building does not stand alone, but is part of an ensemble. In 1961, Egbert Kossak, who later became Hamburg's chief building director, expressed a scathing criticism of Hamburg's "inferior, template-like school construction" in a letter to the editor of an architectural magazine: "With striking but unresisting monotony, the 'famous' Klassenkreuz, pavilions, and gymnasium blocks are scattered over Hamburg. [...] Hamburg [...] boasts of the mass production of proportionless structures that stand out for their questionable modernist design." The main advantage of type construction was rapid assembly with only a few workers, since in post-war Hamburg there was an immense demand for school replacement buildings and new buildings that the construction industry could not satisfy by conventional means. On the other hand, the goal of cost reduction compared to individual designs or solid buildings was not achieved - this can be seen in a comparison with the designs for special schools, which were also executed by individual architects outside the structural engineering department during the Seitz era. The poor thermal insulation compared to today's standards results from the typical construction method of the time with slim profiles and many thermal bridges. In this respect, the Kreuzbau building is neither better nor worse than other post-war buildings. At least it is not contaminated with asbestos, and in most cases renovation is significantly less expensive than replacement construction. Due to its construction, the Kreuzbau building is difficult to connect to new buildings. Cautiously renovated, it can be an attractive solitaire. Locations The following list of Kreuzbau buildings in Hamburg does not claim to be complete. Legend: #: Numbering of the Kreuzbau buildings in alphabetical order by name. Name: current user of the Kreuzbau building. 
For elementary schools, the name is shortened to "school"; for district schools, the name is shortened to "STS" Address: Street address of school location, linked with coordinates. A map with all coordinates is linked at the top of the article. District: District of the location of the Kreuzbau building Borough: Borough of the location of the Kreuzbau building Year: year of construction of the Kreuzbau building, defined as the year of acceptance. Image: link to Commons category on school site: "Yes", there are images of the corresponding Kreuzbau buildings; "-", no images of the Kreuzbau buildings, but information on the school building. Notes: Building condition, historic preservation, renovation. "First series" refers to "K1 V1" type Kreuzbau buildings, which have two glazed emergency stairwells. For demolished Kreuzbau buildings, the corresponding line is grayed out. References Bibliography Boris Meyn: Der Architekt und Städteplaner Paul Seitz. Eine Werkmonographie. Verein für Hamburgische Geschichte, Hamburg 1996, Boris Meyn: Die Entwicklungsgeschichte des Hamburger Schulbaus (= Schriften zur Kulturwissenschaft. Band 18). Kovač, Hamburg 1998, Olaf Bartels: Kreuzbau am Schierenberg. In: Bauwelt, Nr. 47.2015, pp. 30–33. Das Hamburger Klassenkreuz. In: Das Werk : Schweizer Monatsschrift für Architektur, Kunst und künstlerisches Gewerbe, , Band 50 (1963), Heft 6 ("Schulbau"), pp. 234–236, Paul Seitz, Wilhelm Dressel (Hrsg.): Schulbau in Hamburg 1961. Verlag der Werkberichte, Hamburg 1961. Baubehörde der Freien und Hansestadt Hamburg (Hrsg.): Hamburger Schulen in Montagebau. Hamburg 1962, PPN 32144938X. External links Energetic renovation of the Kreuzbau elementary school Surenland, brochure by SBH Schulbau Hamburg with floor plan Video footage of the renovated Kreuzbau at Gymnasium Meiendorf (formerly Schule Schierenberg School): aerial view, entrance, first floor, stairs. Building 20th-century architecture in Germany Hamburg Education in Germany
Kreuzbau (Hamburg)
[ "Engineering" ]
7,536
[ "Construction", "Building" ]
69,072,077
https://en.wikipedia.org/wiki/Eat-me%20signals
Eat-me signals are molecules exposed on the surface of a cell to induce phagocytes to phagocytose (eat) that cell. Currently known eat-me signals include: phosphatidylserine, oxidized phospholipids, sugar residues (such as galactose), deoxyribonucleic acid (DNA), calreticulin, annexin A1, histones and pentraxin-3 (PTX3). The most well characterised eat-me signal is the phospholipid phosphatidylserine. Healthy cells do not expose phosphatidylserine on their surface, whereas dead, dying, infected, injured and some activated cells expose phosphatidylserine on their surface in order to induce phagocytes to phagocytose them. Most glycoproteins and glycolipids on the surface of our cells have short sugar chains that terminate in sialic acid residues, which inhibit phagocytosis, but removal of these residues reveals galactose residues (and subsequently N-acetylglucosamine and mannose residues) that can bind opsonins or directly activate phagocytic receptors. Calreticulin, annexin A1, histones, pentraxin-3 and DNA may be released by (and onto the surface of) dying cells to encourage phagocytes to eat these cells, thereby acting as self-opsonins. Eat-me signals, or the opsonins that bind them, are recognised by phagocytic receptors on phagocytes, inducing engulfment of the cell exposing the eat-me signal. See also Find-me signals References Cell biology Cellular senescence
Eat-me signals
[ "Biology" ]
363
[ "Senescence", "Cellular senescence", "Cell biology", "Cellular processes" ]
69,072,781
https://en.wikipedia.org/wiki/Find-me%20signals
Cells destined for apoptosis release molecules referred to as find-me signals. These signal molecules are used to attract phagocytes which engulf and eliminate damaged cells. Find-me signals are typically released by the apoptotic cells while the cell membrane remains intact. This ensures that the phagocytic cells are able to remove the dying cells before their membranes are compromised. A leaky membrane leads to secondary necrosis which may cause additional inflammation, therefore, it is best to remove dying cells before this occurs. One cell is capable of releasing multiple find-me signals. Should a cell lack the ability to release its find-me signal, other cells may release additional find-me signals to overcome the discrepancy. Inflammation can be suppressed by find-me signals during cell clearance. A phagocyte may also be able to engulf more material or enhance its ability to engulf materials when stimulated by find-me signals. A wide range of molecules, from cellular lipids, proteins, peptides, to nucleotides, act as find-me signals. History The correlation between the early stages of cell death and the removal of apoptotic cells was first studied in C. elegans. Mutants that could not carry out normal caspase-mediated apoptosis were used to demonstrate that cells in the beginning stages of death were still efficiently recognized and removed by phagocytes. This occurred because the engulfment machinery of the phagocytes was still functioning normally even though the apoptotic process in the dying cell was disrupted. A study done in 2003 showed the breast cancer cells release find me signals known as lysophosphatidylcholine. This research brought the concept of find-me signals to the fore front of cell clearance research and introduced the idea that dying cells release signals that flow throughout the body's tissues in order to alert and recruit monocytes to their location. Chemicals that act as find-me signals Known types of find-me signals include: Lipids: lysophosphatidylcholine (lysoPC) sphingosine-1-phosphate (S1P) Proteins and peptides: fractalkine (CX3CL1) interleukin-8 (IL-8) complement components C3a and C5a split tyrosyl tRNA synthetase (mini TyrRS) dimerized ribosomal protein S19 (RP S19) endothelial monocyte-activating polypeptide II (EMAP II) Formyl peptides, especially N-formylmethionine-leucyl-phenylalanine, fMLP) Nucleotides: adenosine triphosphate (ATP), adenosine diphosphate (ADP), uridine triphosphate (UTP) and uridine diphosphate (UDP). All of these molecules are linked to monocyte or macrophage recruitment towards dying cells. The receptor on the monocyte or other phagocyte for ATP and UTP signals has been shown to be P2Y2 in vivo. The receptor on the monocyte or other phagocyte for the CX3CL1 signal has been shown to be CX3CR1 in vivo. The roles of the S1P and LPC signals remained to be established through a model in vivo. Lipids Lysophosphatidylcholine (LPC) Identified in breast cancer cells, this find-me signals is released by MCF-7 cells to attract the THP-1 monocytes. Other cells and different methods of apoptosis may be able to release LPC, but MCF-7 cells have been the most thoroughly studied. The enzyme calcium-independent phospholipase A2 (iPLA2) is most likely responsible for the apoptotic cell releasing LPC as it is dying. The amount of LPC released is small, so it is unclear how it is able to set up a concentration gradient in the serum or plasma in order to attract phagocytes to their location. 
High concentrations of LPC cause lysis of many cells in their vicinity. LPC may be present in a chemical form different from its native form when released by an apoptotic cell. It may bind to components of the serum, making it unavailable to be modified or taken into other tissues. LPC may also be able to function together with other soluble molecules. The receptor on the phagocyte that is thought to be linked to LPC is G2A, but this has not been confirmed. The role of LPC as a find-me signal has also not been characterized in vivo. Sphingosine 1-phosphate (S1P) It has been suggested that the induction of apoptosis results in increased expression of S1P kinase 1 (SphK1). The increased presence of SphK1 is linked to the creation of S1P, which then recruits macrophages to the immediate area surrounding apoptotic cells. It has also been suggested that S1P kinase 2 (SphK2) is a target of caspase 1, and that a cleaved fragment of SphK2 is what is released from dying cells into the surrounding extracellular space, where it is transformed into S1P. All of the studies thus far characterizing S1P have been done in vitro, and the role of S1P in recruiting phagocytes to apoptotic cells in vivo has not been determined. Staurosporine-induced cell death has been shown to influence caspase-1 to initiate the cleavage of SphK2. In other forms of apoptosis, caspase-1 is not normally induced, meaning the formation of S1P needs to be further studied. S1P can be recognized by the G protein-coupled receptors S1P1 through S1P5. Which of these receptors is relevant in the recruitment of phagocytes to apoptotic cells is not yet known. Sphingosine kinase 1 and sphingosine kinase 2 have been linked to S1P generation during apoptosis through different pathways: the level of SphK1 is increased during apoptosis, while caspases cleave SphK2. CX3CL1 CX3CL1 is a soluble fragment of the fractalkine protein that serves as a find-me signal for monocytes. Fractalkine usually sits on the plasma membrane as an intercellular adhesion molecule, but during apoptosis a soluble 60 kDa fragment is released as a find-me signal. CX3CL1 release is indirectly dependent on caspase activity. CX3CL1 could also be released as part of microparticles during the beginning stages of apoptotic death of Burkitt lymphoma cells. The receptors on monocytes that are able to detect the presence of CX3CL1 are CX3CR1 receptors, as shown in both in vivo and in vitro studies. Nucleotides: ATP and UTP These were the most recently characterized find-me signals, identified as components of the supernatant of apoptotic cells. Studies were able to show that the controlled release of the nucleotides ATP and UTP from cells in the beginning stages of apoptosis can attract monocytes in vivo and in vitro. This has been observed in Jurkat cells, primary thymocytes, MCF-7 cells, and lung epithelial cells. Release is dependent upon caspase activity. Less than 2% of the cell's total ATP is released during the beginning stages of cell death, while the dying cell's plasma membrane is still intact. The released ATP preferentially attracts phagocytes through chemotaxis, rather than random migration through chemokinesis. The receptors on monocytes that are able to sense the release of nucleotides are in the P2Y family of nucleotide receptors. Monocytic P2Y2 has been shown to be able to recognize nucleotides in vitro and in genetically modified mice. Nucleotides are often degraded by nucleotide triphosphatases (NTPases) when they are in the extracellular space. 
Only a small amount of ATP is released during find-me signaling, so it is unclear how the nucleotide avoids degradation by NTPases in order to establish a gradient used to signal clearing by monocytes. NTPases may serve as regulators in various tissues in order to control how far the nucleotide signal can travel. The signaling pathway within the monocyte downstream of P2Y receptor activation is still unknown. Others The ribosomal protein S19 has been suggested as a possible find-me signal. Apoptosis causes a dimerization of S19, inducing a conformational change that allows it to bind to the C5a receptor on monocytes. Research suggests that S19 is released during the late to final stages of apoptosis. EMAP II, a fragment of tyrosyl tRNA synthetase, has also been shown to attract monocytes. This molecule has inflammatory properties, meaning it is capable of attracting and activating neutrophils. In apoptosis Background Humans turn over billions of cells as a part of normal bodily processes every day, which corresponds to about 1 million cells being replaced per second. The ultimate goal of the body's intrinsic cell death mechanisms is to efficiently and asymptomatically clear dying cells. There are many reasons why the body needs to remove both diseased and non-diseased cells. As a part of the cell's natural division process, excess cells may be generated during normal growth, development, or tissue repair after illness or an injury. Only a fraction of these new cells will stay and become mature, while the rest will die and be cleared by the body's immune system. Cells may also need to be removed because they are too old or become damaged over time. Cell damage can occur through environmental factors such as air pollution, UV radiation from the sun, or physical injury. In most cases, the cells that are dying are recognized by phagocytes through find-me signals and removed. Quick and efficient clearing of apoptotic cells is crucial to prevent secondary necrosis of dying cells and to avoid autoantigens causing immune responses. Find-me signals alert phagocytes to the presence of apoptotic cells in the beginning stages of dying. The phagocytes are able to use the find-me signals to locate the dying cell. Find-me signals set up a gradient within the tissue they are in to attract phagocytes to their location. The phagocytes migrate to the dying cell when their receptors respond to the find-me signals and initiate an internal signaling pathway, causing them to move to the proximity of the cell emitting those signals. If the body's immune system, or more specifically its phagocytes, fails to clear dying cells in the body, symptoms such as chronic inflammation, autoimmune disorders, and developmental abnormalities have been shown to occur. As long as the engulfment process is functioning and efficient, the clearance of apoptotic cells goes unnoticed in the body and does not cause any long-term symptoms. If this process is disrupted in any way, the accumulation of secondary necrotic cells in tissues of the body can occur. This is associated with autoimmune disorders, causing the immune system to attack self-antigens on the uncleared cells. 
As of now, it is unknown how LPC is released from apoptotic cells. S1P generation involves caspase-1-dependent release of sphingosine kinase 2 (SphK2) fragments. CX3CL1 release is mediated through the release of a 60 kDa microparticle fragment of fractalkine during the beginning stages of Burkitt lymphoma cell apoptosis. Nucleotide release is one of the better-defined find-me signal release mechanisms. The nucleotides are released through a pannexin family channel known as PANX1. PANX1 is a four-pass transmembrane protein that forms large pores in the plasma membrane of a cell, allowing molecules up to 1 kDa in size to pass through. The nucleotides are detected by P2Y2 on monocytes, which causes them to migrate to the location of the apoptotic cell. Engulfment and clearance of apoptotic cells by phagocytes Phagocytes are able to sense the find-me signals presented by an apoptotic cell during the beginning stages of cell death. They sense the find-me signal gradient and migrate to the vicinity of the signaling cell. Using the presented find-me signal along with the "eat-me" signal also exposed by the apoptotic cell, the phagocyte is able to recognize the dying cell and engulf it. Phagocytes contribute to the "final stages" of cell death by apoptosis. They are often already near a dying cell and do not have to travel far in order to engulf and clear it. In most mammalian systems, however, this is not the case. In the human thymus, for example, a dying thymocyte is unlikely to be engulfed by a healthy neighboring thymocyte; instead, a macrophage or dendritic cell that resides in the thymus is likely to carry out clearance of the corpse. In this case, a dying cell needs to be able to send out an advertisement of sorts to declare its state of death in order to recruit phagocytes to its location. The dying cell does this through the soluble find-me signals it releases, and phagocytes detect the gradient set up by these find-me signals in order to navigate to its location. Steps in the engulfment and clearance of apoptotic cells by phagocytes: (1) Phagocytes need to be in the vicinity of the cells presenting find-me signals; the phagocytes use the find-me signals to locate these cells and move to their location. (2) The phagocytes interact with the dying cells via the presented eat-me signals, through specific eat-me signal receptors on the phagocytic cell. (3) The phagocyte engulfs the eat-me-signal-presenting cell through induced signaling of engulfment receptors and by reorganization of the phagocytic cell's cytoskeleton. (4) The components of the dying cell are processed by the phagocyte within its lysosomes. Non-apoptotic roles Find-me signals may also play a role in the phagocytic activity of cells in the direct vicinity of cells undergoing apoptosis. This phenomenon allows cells adjacent to the apoptotic cell sending out the find-me signal to be engulfed without releasing find-me signals of their own. Find-me signals could possibly play a role in priming phagocytes to enhance their phagocytic capacity. In addition, they may also be able to enhance production of certain bridging molecules created by macrophages. See also Eat-me signals References Molecules Phagocytes
Find-me signals
[ "Physics", "Chemistry" ]
3,248
[ "Molecular physics", "Molecules", "Physical objects", "nan", "Atoms", "Matter" ]
67,654,122
https://en.wikipedia.org/wiki/Philip%20George%20Burke
Philip George Burke FRS (18 October 1932—4 June 2019) was a British theoretical and computational physicist who developed the R-matrix method for studying electron collisions with atoms and molecules. Life He was born in London. He graduated from University College of the South West, and University College London. He worked at the National Physical Laboratory, Teddington. From 1959 to 1960, he worked at the Lawrence Berkeley Radiation Laboratory. The majority of Burke's research career was based at Queen's University Belfast, where he was a member of the Centre for Theoretical Atomic, Molecular and Optical Physics. He was elected Fellow of the Royal Society in 1978 and was awarded a CBE in 1993. References British physicists Computational physicists British theoretical physicists Fellows of the Royal Society Academics of Queen's University Belfast 1932 births 2019 deaths Members of the Order of the British Empire
Philip George Burke
[ "Physics" ]
172
[ "Computational physicists", "Computational physics" ]
67,654,308
https://en.wikipedia.org/wiki/Tides%20in%20marginal%20seas
Tides in marginal seas are tides affected by their location in semi-enclosed areas along the margins of continents and differ from tides in the open oceans. Tides are water level variations caused by the gravitational interaction between the Moon, the Sun and the Earth. The resulting tidal force is a secondary effect of gravity: it is the difference between the actual gravitational force and the centrifugal force. While the centrifugal force is constant across the Earth, the gravitational force is dependent on the distance between the two bodies and is therefore not constant across the Earth. The tidal force is thus the difference between these two forces at each location on the Earth. In an idealized situation, assuming a planet with no landmasses (an aqua planet), the tidal force would result in two tidal bulges on opposite sides of the Earth. This is called the equilibrium tide. However, due to global and local ocean responses, different tidal patterns are generated. The complicated ocean responses are the result of the continental barriers, resonance due to the shape of the ocean basin, the inability of the tidal wave to keep up with the tracking of the Moon, the Coriolis acceleration and the elastic response of the solid Earth. In addition, when the tide arrives in shallow seas it interacts with the sea floor, which leads to the deformation of the tidal wave. As a result, tides in shallow waters tend to be larger, of shorter wavelength, and possibly nonlinear relative to tides in the deep ocean. Tides on the continental shelf The transition from the deep ocean to the continental shelf, known as the continental slope, is characterized by a sudden decrease in water depth. In order to comply with the conservation of energy, the tidal wave has to deform as a result of the decrease in water depth. The total energy of a linear progressive wave per wavelength is the sum of the potential energy (PE) and the kinetic energy (KE). The potential and kinetic energy integrated over a complete wavelength are the same, under the assumption that the water level variations are small compared to the water depth (η ≪ H): PE = KE = ∫ (1/2) ρ g η² dx, where ρ is the density, g the gravitational acceleration and η the vertical tidal elevation. The total wave energy thus becomes E = PE + KE = ∫ ρ g η² dx. If we now insert a harmonic wave η = a sin(kx − ωt), where k is the wave number and a the amplitude, the total energy per unit area of surface becomes E = (1/2) ρ g a². A tidal wave has a wavelength that is much larger than the water depth, and thus, according to the dispersion relation of gravity waves, it travels with the phase and group velocity of a shallow water wave, c = √(gH). The wave energy is transmitted by the group velocity of a wave, and thus the energy flux (F) is given by F = E c = (1/2) ρ g a² √(gH). The energy flux needs to be conserved, and with ρ and g constant, this leads to a² √H = constant and thus a ∝ H^(−1/4). When the tidal wave propagates onto the continental shelf, the water depth H decreases. In order to conserve the energy flux, the amplitude a of the wave needs to increase (see figure 1). Transmission coefficient The above explanation is a simplification, as not all tidal wave energy is transmitted; it is partly reflected at the continental slope. The transmission coefficient of the tidal wave, the ratio of the transmitted amplitude a_t to the incident amplitude a_0, is given by: a_t / a_0 = 2 √H_ocean / (√H_ocean + √H_shelf). This equation indicates that when H_shelf = H_ocean the transmitted tidal wave has the same amplitude as the original wave. Furthermore, the transmitted wave will be larger than the original wave when H_shelf < H_ocean, as is the case for the transition to the continental shelf. 
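These shallow-water relations can be checked with a quick numerical sketch. The following Python snippet is illustrative only and is not part of the original article; it assumes g = 9.81 m/s² and uses arbitrary example depths for the ocean and the shelf.

```python
import math

def shoaling_amplitude(a_ocean, h_ocean, h_shelf):
    """Amplitude on the shelf from conservation of the energy flux
    F = 0.5 * rho * g * a**2 * sqrt(g * H), i.e. a ~ H**(-1/4) (Green's law)."""
    return a_ocean * (h_ocean / h_shelf) ** 0.25

def transmission_coefficient(h_ocean, h_shelf):
    """Ratio of transmitted to incident amplitude at a depth step,
    using the shallow-water wave speeds c = sqrt(g * H)."""
    c_ocean = math.sqrt(9.81 * h_ocean)
    c_shelf = math.sqrt(9.81 * h_shelf)
    return 2 * c_ocean / (c_ocean + c_shelf)

# Example: a 0.5 m tidal amplitude in a 4000 m deep ocean reaching a 100 m deep shelf.
a0, H_ocean, H_shelf = 0.5, 4000.0, 100.0
print(f"Green's law amplitude on the shelf: {shoaling_amplitude(a0, H_ocean, H_shelf):.2f} m")
print(f"Transmitted amplitude at the slope: {a0 * transmission_coefficient(H_ocean, H_shelf):.2f} m")
```

With these example depths the amplitude grows by roughly a factor of 1.7 to 2.5, illustrating why shelf tides are larger than open-ocean tides.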
The reflected wave amplitude (a_r) is determined by the reflection coefficient of the tidal wave: a_r / a_0 = (√H_ocean − √H_shelf) / (√H_ocean + √H_shelf). This equation indicates that when H_shelf = H_ocean there is no reflected wave, and otherwise the reflected tidal wave will be smaller than the original tidal wave. Internal tide and mixing At the continental shelf, the reflection and transmission of the tidal wave can lead to the generation of internal tides on the pycnocline. The surface (i.e. barotropic) tide generates these internal tides where stratified waters are forced upwards over a sloping bottom topography. The internal tide extracts energy from the surface tide and propagates both in the shoreward and in the seaward direction. The shoreward-propagating internal waves shoal when reaching shallower water, where the wave energy is dissipated by wave breaking. The shoaling of the internal tide drives mixing across the pycnocline, high levels of carbon sequestration and sediment resuspension. Furthermore, through nutrient mixing, the shoaling of the internal tide exerts a fundamental control on the functioning of ecosystems on the continental margin. Tidal propagation along coasts After entering the continental shelf, a tidal wave quickly faces a boundary in the form of a landmass. When the tidal wave reaches a continental margin, it continues as a boundary-trapped Kelvin wave. Along the coast, a boundary-trapped Kelvin wave is also known as a coastal Kelvin wave or edge wave. A Kelvin wave is a special type of gravity wave that can exist when there is (1) gravity and stable stratification, (2) sufficient Coriolis force and (3) the presence of a vertical boundary. Kelvin waves are important in the ocean and shelf seas; they represent a balance between inertia, the Coriolis force and the pressure gradient force. The simplest equations that describe the dynamics of Kelvin waves are the linearized shallow water equations for homogeneous, inviscid flows. These equations can be linearized for a small Rossby number, no frictional forces and under the assumption that the wave height is small compared to the water depth (η ≪ H). The linearized depth-averaged shallow water equations become: u momentum equation: ∂u/∂t − f v = −g ∂η/∂x, v momentum equation: ∂v/∂t + f u = −g ∂η/∂y, the continuity equation: ∂η/∂t + H (∂u/∂x + ∂v/∂y) = 0, where u is the zonal velocity (x direction), v the meridional velocity (y direction), t is time and f is the Coriolis frequency. Kelvin waves are named after Lord Kelvin, who first described them after finding solutions to the linearized shallow water equations with the boundary condition u = 0. When this assumption is made, the linearized depth-averaged shallow water equations that describe a Kelvin wave become: u momentum equation: f v = g ∂η/∂x, v momentum equation: ∂v/∂t = −g ∂η/∂y, the continuity equation: ∂η/∂t + H ∂v/∂y = 0. Now it is possible to get an expression for η by taking the time derivative of the continuity equation and substituting the v momentum equation: ∂²η/∂t² = gH ∂²η/∂y². The same can be done for v, by taking the time derivative of the v momentum equation and substituting the continuity equation: ∂²v/∂t² = gH ∂²v/∂y². Both of these equations take the form of the classical wave equation with c = √(gH), which is the same velocity as the tidal wave and thus that of a shallow water wave. These preceding equations govern the dynamics of a one-dimensional non-dispersive wave, for which the following general solution exists: η = e^(−x/R) F(y + ct) and v = −√(g/H) e^(−x/R) F(y + ct), where the length R = √(gH)/f = c/f is the Rossby radius of deformation and F is an arbitrary function describing the wave motion. In its most simple form, F is a cosine or sine function, which describes a wave motion in the positive and negative direction. 
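As a rough numerical illustration of these solutions (not part of the original article; the depth and Coriolis values below are arbitrary mid-latitude examples), the wave speed, the Rossby radius of deformation and the cross-shore decay of a coastal Kelvin wave can be evaluated as follows.

```python
import math

g = 9.81       # gravitational acceleration (m/s^2)
H = 80.0       # example shelf-sea depth (m)
f = 1.2e-4     # example mid-latitude Coriolis frequency (1/s)

c = math.sqrt(g * H)   # shallow-water phase/group speed
R = c / f              # Rossby radius of deformation

print(f"wave speed c = {c:.1f} m/s, Rossby radius R = {R / 1000:.0f} km")

# Amplitude of the Kelvin wave relative to its value at the coast, eta ~ exp(-x/R),
# evaluated at several distances offshore.
for x_km in (0, 50, 100, 200, 400):
    print(f"{x_km:4d} km offshore: relative amplitude {math.exp(-x_km * 1000 / R):.2f}")
```

For these example values the trapping scale is a few hundred kilometres, so the tidal elevation is largest at the coast and has decayed to a small fraction of its coastal value a few Rossby radii offshore.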
The Rossby radius of deformation is a typical length scale in the ocean and atmosphere that indicates when rotational effects become important. The Rossby radius of deformation is a measure for the trapping distance of a coastal Kelvin wave: the exponential term results in an amplitude that decays away from the coast. This set of equations therefore describes a wave that travels along the coast with a maximum amplitude at the coast which declines towards the ocean. These solutions also indicate that a Kelvin wave always travels with the coast on its right-hand side in the Northern Hemisphere and with the coast on its left-hand side in the Southern Hemisphere. In the limit of no rotation (f → 0), the exponential term increases without bound and the wave becomes a simple gravity wave orientated perpendicular to the coast. In the following sections, it is shown how these Kelvin waves behave when travelling in enclosed shelf seas and in estuaries and basins. Tides in enclosed shelf seas The expression of tides as a bounded Kelvin wave is well observable in enclosed shelf seas around the world (e.g. the English Channel, the North Sea or the Yellow Sea). Animation 1 shows the behaviour of a simplified case of a Kelvin wave in an enclosed shelf sea for the case with friction (lower panel) and without friction (upper panel). The shape of an enclosed shelf sea is represented as a simple rectangular domain in the Northern Hemisphere which is open on the left-hand side and closed on the right-hand side. The tidal wave, a Kelvin wave, enters the domain in the lower left corner and travels to the right with the coast on its right. The sea surface height (SSH, left panels of animation 1), the tidal elevation, is maximum at the coast and decreases towards the centre of the domain. The tidal currents (right panels of animation 1) are in the direction of wave propagation under the crest and in the opposite direction under the trough. They are both maximum under the crest and the trough of the wave and decrease towards the centre. This is expected, as the expressions for the tidal elevation and the tidal current are in phase: they both depend on the same arbitrary function describing the wave motion and on the same exponential decay term. 
On the enclosed right-hand side, the Kelvin wave is reflected, and because it always travels with the coast on its right, it will now travel in the opposite direction. The energy of the incoming Kelvin wave is transferred through Poincaré waves along the enclosed side of the domain to the outgoing Kelvin wave. The final pattern of the SSH and the tidal currents is made up of the sum of the two Kelvin waves. These two can amplify each other, and this amplification is maximum when the length of the shelf sea is a quarter wavelength of the tidal wave. In addition, the sum of the two Kelvin waves results in several static minima in the centre of the domain which hardly experience any tidal motion; these are called amphidromic points. In the upper panel of figure 2, the absolute time-averaged SSH is shown in red shading and the dotted lines show the zero tidal elevation level at roughly hourly intervals, also known as cotidal lines. Where these lines intersect, the tidal elevation is zero during a full tidal period, and thus this is the location of the amphidromic points. In the real world, the reflected Kelvin wave has a lower amplitude due to energy loss as a result of friction and through the transfer via Poincaré waves (lower left panel of animation 1). The tidal currents are proportional to the wave amplitude and therefore also decrease on the side of the reflected wave (lower right panel of animation 1). Finally, the static minima are no longer in the centre of the domain, as the wave amplitude is no longer symmetric. Therefore, the amphidromic points shift towards the side of the reflected wave (lower panel of figure 2). The dynamics of a tidal Kelvin wave in an enclosed shelf sea are well manifested and well studied in the North Sea. Tides in estuaries and basins When tides enter estuaries or basins, the boundary conditions change as the geometry changes drastically. The water depth becomes shallower and the width decreases; in addition, the depth and width become significantly variable over the length and width of the estuary or basin. As a result, the tidal wave deforms, which affects the tidal amplitude, the phase speed and the relative phase between tidal velocity and elevation. The deformation of the tide is largely controlled by the competition between bottom friction and channel convergence. Channel convergence increases the tidal amplitude and phase speed, as the energy of the tidal wave travels through a smaller area, while bottom friction decreases the amplitude through energy loss. The modification of the tide leads to the creation of overtides (higher-harmonic tidal constituents). These overtides are multiples, sums or differences of the astronomical tidal constituents, and as a result the tidal wave can become asymmetric. A tidal asymmetry is a difference between the duration of the rise and the fall of the tidal water elevation, and this can manifest itself as a difference in flood/ebb tidal currents. The tidal asymmetry and the resulting currents are important for the sediment transport and turbidity in estuaries and tidal basins. Each estuary and basin has its own distinct geometry, and these can be subdivided into several groups of similar geometries, each with its own tidal dynamics. See also References Tides Planetary science Geophysics Oceanography Fluid dynamics
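The emergence of amphidromic points from the superposition of an incoming and a reflected Kelvin wave can be reproduced in a short numerical sketch. The snippet below is illustrative and not from the original article: it assumes a frictionless channel of constant depth oriented along x with coasts at y = 0 and y = B, a perfectly reflected wave of equal amplitude, and arbitrary example values for the depth, Coriolis frequency and channel size.

```python
import numpy as np

g, H, f = 9.81, 50.0, 1.2e-4            # example values
c = np.sqrt(g * H)                       # shallow-water wave speed
R = c / f                                # Rossby radius of deformation
omega = 2 * np.pi / (12.42 * 3600)       # M2 tidal frequency (rad/s)
k = omega / c                            # along-channel wavenumber
a, B = 1.0, 200e3                        # coastal amplitude (m), channel width (m)

x = np.linspace(0, 1500e3, 601)          # along-channel coordinate (m)
y = np.linspace(0, B, 201)               # across-channel coordinate (m)
X, Y = np.meshgrid(x, y)

# Complex amplitudes of the incoming (+x, trapped at y = 0) and perfectly
# reflected (-x, trapped at y = B) Kelvin waves; eta = Re(A * exp(-i*omega*t)).
A_in = a * np.exp(-Y / R) * np.exp(1j * k * X)
A_out = a * np.exp(-(B - Y) / R) * np.exp(-1j * k * X)
amplitude = np.abs(A_in + A_out)

# Amphidromic points appear as near-zero minima of the combined tidal amplitude.
j, i = np.unravel_index(np.argmin(amplitude), amplitude.shape)
print(f"smallest tidal range at x = {x[i] / 1000:.0f} km, y = {y[j] / 1000:.0f} km "
      f"(amplitude {amplitude[j, i]:.3f} m)")
```

In this frictionless, symmetric case the minima lie on the centreline of the channel; adding friction (a smaller reflected amplitude) shifts them towards the coast followed by the reflected wave, as described above.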
Tides in marginal seas
[ "Physics", "Chemistry", "Astronomy", "Engineering", "Environmental_science" ]
2,685
[ "Hydrology", "Applied and interdisciplinary physics", "Oceanography", "Chemical engineering", "Geophysics", "Piping", "Planetary science", "Astronomical sub-disciplines", "Fluid dynamics" ]
67,662,946
https://en.wikipedia.org/wiki/Geological%20engineering
Geological engineering is a discipline of engineering concerned with the application of geological science and engineering principles to fields, such as civil engineering, mining, environmental engineering, and forestry, among others. The work of geological engineers often directs or supports the work of other engineering disciplines such as assessing the suitability of locations for civil engineering, environmental engineering, mining operations, and oil and gas projects by conducting geological, geoenvironmental, geophysical, and geotechnical studies. They are involved with impact studies for facilities and operations that affect surface and subsurface environments. The engineering design input and other recommendations made by geological engineers on these projects will often have a large impact on construction and operations. Geological engineers plan, design, and implement geotechnical, geological, geophysical, hydrogeological, and environmental data acquisition. This ranges from manual ground-based methods to deep drilling, to geochemical sampling, to advanced geophysical techniques and satellite surveying. Geological engineers are also concerned with the analysis of past and future ground behaviour, mapping at all scales, and ground characterization programs for specific engineering requirements. These analyses lead geological engineers to make recommendations and prepare reports which could have major effects on the foundations of construction, mining, and civil engineering projects. Some examples of projects include rock excavation, building foundation consolidation, pressure grouting, hydraulic channel erosion control, slope and fill stabilization, landslide risk assessment, groundwater monitoring, and assessment and remediation of contamination. In addition, geological engineers are included on design teams that develop solutions to surface hazards, groundwater remediation, underground and surface excavation projects, and resource management. Like mining engineers, geological engineers also conduct resource exploration campaigns, mine evaluation and feasibility assessments, and contribute to the ongoing efficiency, sustainability, and safety of active mining projects History While the term geological engineering was not coined until the 19th century, principles of geological engineering are demonstrated through millennia of human history. Ancient engineering One of the oldest examples of geological engineering principles is the Euphrates tunnel, which was constructed around 2180 B.C. – 2160 B.C... This, and other tunnels and qanats from around the same time were used by ancient civilizations such as Babylon and Persia for the purposes of irrigation. Another famous example where geological engineering principles were used in an ancient engineering project was the construction of the Eupalinos aqueduct tunnel in Ancient Greece. This was the first tunnel to be constructed inward from both ends using principles of geometry and trigonometry, marking a significant milestone for both civil engineering and geological engineering Geological engineering as a discipline Although projects that applied geological engineering principles in their design and construction have been around for thousands of years, these were included within the civil engineering discipline for most of this time. Courses in geological engineering have been offered since the early 1900s; however, these remained specialized offerings until a large increase in demand arose in the mid-20th century. 
This demand was created by issues encountered from development of increasingly large and ambitious structures, human-generated waste, scarcity of mineral and energy resources, and anthropogenic climate change – all of which created the need for a more specialized field of engineering with professional engineers who were also experts in geological or Earth sciences. Notable disasters that are attributed to the formal creation of the geological engineering discipline include dam failures in the United States and western Europe in the 1950s and 1960s. These most famously include the St Francis dam failure (1928), Malpasset dam failure (1959), and the Vajont dam failure (1963), where a lack of knowledge of geology resulted in almost 3,000 deaths between the latter two alone. The Malpasset dam failure is regarded as the largest civil engineering disaster of the 20th century in France and Vajont dam failure is still the deadliest landslide in European history. Education Post-secondary degrees in geological engineering are offered at various universities around the world but are concentrated primarily in North America. Geological engineers often obtain degrees that include courses in both geological or Earth sciences and engineering. To practice as a professional geological engineer, a bachelor's degree in a related discipline from an accredited institution is required. For certain positions, a Master’s or Doctorate degree in a related engineering discipline may be required. After obtaining these degrees, an individual who wishes to practice as a professional geological engineer must go through the process of becoming licensed by a professional association or regulatory body in their jurisdiction. Canadian institutions In Canada, 8 universities are accredited by Engineers Canada to offer undergraduate degrees in geological engineering. Many of these universities also offer graduate degree programs in geological engineering. These include: Queen’s University (Department of Geological Sciences and Geological Engineering) (1975 – present), École Polytechnique (1965 – present), Université Laval (1965 – present), Université du Québec à Chicoutimi (1983 – present), University of British Columbia (1965 – present), University of New Brunswick (jointly administered by Department of Earth Sciences and Department of Civil Engineering) (1984 – present), University of Saskatchewan (1965 – present), and University of Waterloo (1986 – present). American institutions In the United States there are 13 geological engineering programs recognized by the Engineering Accreditation Commission (EAC) of the Accreditation Board for Engineering and Technology (ABET). These include: Colorado School of Mines (1936 – present), Michigan Technological University (1951 – present), Missouri University of Science and Technology (1973 – present), Montana Technological University (1972–present), South Dakota School of Mines and Technology (1950 – present), The University of Utah (1952 – present), University of Alaska-Fairbanks (1941 – present), University of Minnesota Twin Cities (1950 – present), University of Mississippi (1987 – present), University of Nevada, Reno (1958 – present), University of North Dakota (1984 – present), University of Texas at Austin (1998 – present), and University of Wisconsin – Madison (1993 – present). 
Other institutions Universities in other countries that hold accreditation to offer degree programs in geological engineering from the EAC by the ABET include: Escuela Superior Politécnica Del Litoral, Guayaquil, Ecuador (2018 – present), Istanbul Technical University, Istanbul, Turkey (2009 – present), Universidad Nacional de Ingeniería, Rímac, Peru (2017 – present), and Universidad Politécnica de Madrid, Madrid, Spain (2014 – present). Specializations In geological engineering there are multiple subdisciplines which analyze different aspects of Earth sciences and apply them to a variety of engineering projects. The subdisciplines listed below are commonly taught at the undergraduate level, and each has overlap with disciplines external to geological engineering. However, a geological engineer who specializes in one of these subdisciplines throughout their education may still be licensed to work in any of the other subdisciplines. Geoenvironmental and hydrogeological engineering Geoenvironmental engineering is the subdiscipline of geological engineering that focuses on preventing or mitigating the environmental effects of anthropogenic contaminants within soil and water. It solves these issues via the development of processes and infrastructure for the supply of clean water, waste disposal, and control of pollution of all kinds. The work of geoenvironmental engineers largely deals with investigating the migration, interaction, and result of contaminants; remediating contaminated sites; and protecting uncontaminated sites. Typical work of a geoenvironmental engineer includes: The preparation, review, and update of environmental investigation reports, The design of projects such as water reclamation facilities or groundwater monitoring wells which lead to the protection of the environment, Conducting feasibility studies and economic analyses of environmental projects, Obtaining and revising permits, plans, and standard procedures, Providing technical expertise for environmental remediation projects which require legal actions, The analysis of groundwater data for the purpose of quality-control checks, The site investigation and monitoring of environmental remediation and sustainability projects to ensure compliance with environmental regulations, and Advising corporations and government agencies regarding procedures for cleaning up contaminated sites. Mineral and energy resource exploration engineering Mineral and energy resource exploration (commonly known as MinEx for short) is the subdiscipline of geological engineering that applies modern tools and concepts to the discovery and sustainable extraction of natural mineral and energy resources. A geological engineer who specializes in this field may work on several stages of mineral exploration and mining projects, including exploration and orebody delineation, mine production operations, mineral processing, and environmental impact and risk assessment programs for mine tailings and other mine waste. Like a mining engineer, mineral and energy resource exploration engineers may also be responsible for the design, finance, and management of mine sites. Geophysical engineering (applied geophysics) Geophysical engineering is the subdiscipline of geological engineering that applies geophysics principles to the design of engineering projects such as tunnels, dams, and mines or for the detection of subsurface geohazards, groundwater, and pollution. 
Geophysical investigations are undertaken from ground surface, in boreholes, or from space to analyze ground conditions, composition, and structure at all scales. Geophysical techniques apply a variety of physics principles such as seismicity, magnetism, gravity, and resistivity. This subdiscipline was created in the early 1990s as a result of an increased demand in more accurate subsurface information created by a rapidly increasing global population. Geophysical engineering and applied geophysics differ from traditional geophysics primarily by their need for marginal returns and optimized designs and practices as opposed to satisfying regulatory requirements at a minimum cost Job responsibilities Geological engineers are responsible for the planning, development, and coordination of site investigation and data acquisition programs for geological, geotechnical, geophysical, geoenvironmental, and hydrogeological studies. These studies are traditionally conducted for civil engineering, mining, petroleum, waste management, and regional development projects but are becoming increasingly focused on environmental and coastal engineering projects and on more specialized projects for long-term underground nuclear waste storage. Geological engineers are also responsible for analyzing and preparing recommendations and reports to improve construction of foundations for civil engineering projects such as rock and soil excavation, pressure grouting, and hydraulic channel erosion control. In addition, geological engineers analyze and prepare recommendations and reports on the settlement of buildings, stability of slopes and fills, and probable effects of landslides and earthquakes to support construction and civil engineering projects. They must design means to safely excavate and stabilize the surrounding rock or soil in underground excavations and surface construction, in addition to managing water flow from, and within these excavations. Geological engineers also perform a primary role in all forms of underground infrastructure including tunnelling, mining, hydropower projects, shafts, deep repositories and caverns for power, storage, industrial activities, and recreation. Moreover, geological engineers design monitoring systems, analyze natural and induced ground response, and prepare recommendations and reports on the settlement of buildings, stability of slopes and fills, and the probable effects of natural disasters to support construction and civil engineering projects. In some jobs, geological engineers conduct theoretical and applied studies of groundwater flow and contamination to develop site specific solutions which treat the contaminants and allow for safe construction. Additionally, they design means to manage and protect surface and groundwater resources and remediation solutions in the event of contamination. If working on a mine site, geological engineers may be tasked with planning, development, coordination, and conducting theoretical and experimental studies in mining exploration, mine evaluation and feasibility studies relative to the mining industry. They conduct surveys and studies of ore deposits, ore reserve calculations, and contribute mineral resource expertise, geotechnical and geomechanical design and monitoring expertise and environmental management to a developing or ongoing mining operation. 
In a variety of projects, they may be expected to design and perform geophysical investigations from surface using boreholes or from space to analyze ground conditions, composition, and structure at all scales Professional associations and licensing Professional Engineering Licenses may be issued through a municipal, provincial/state, or federal/national government organization, depending on the jurisdiction. The purpose of this licensing process is to ensure professional engineers possess the necessary technical knowledge, real-world experience, and basic understanding of the local legal system to practice engineering at a professional level. In Canada, the United States, Japan, South Korea, Bangladesh, and South Africa, the title of Professional Engineer is granted through licensure. In the United Kingdom, Ireland, India, and Zimbabwe the granted title is Chartered Engineer . In Australia, the granted title is Chartered Professional Engineer. Lastly, in the European Union, the granted title is European Engineer. All these titles have similar requirements for accreditation, including a recognized post-secondary degree and relevant work experience. Canada In Canada, Professional Engineer (P.Eng.) and Professional Geoscientist (P.Geo.) licenses are regulated by provincial professional bodies which have the groundwork for their legislation laid out by Engineers Canada and Geoscientists Canada. The provincial organizations are listed in the table below. United States In the United States, all individuals seeking to become a Professional Engineer (P.E.) must attain their license through the Engineering Accreditation Commission (EAC) of the Accreditation Board for Engineering and Technology (ABET). Licenses to be a Certified Professional Geologist in the United States are issued and regulated by the American Institute of Professional Geologists (AIPG) Professional Societies Professional societies in geological engineering are not-for-profit organizations that seek to advance and promote the represented profession(s) and connect professionals using networking, regular conferences, meetings, and other events, as well as provide platforms to publish technical literature through forms of conference proceedings, books, technical standards, and suggested methods, and provide opportunities for professional development such as short courses, workshops, and technical tours. 
Some regional, national, and international professional societies relevant to geological engineers are listed here: American Geophysical Union (AGU) American Geosciences Institute (AGI) American Rock Mechanics Association (ARMA) Association of Environmental and Engineering Geologists (AEG) Association for Mineral Exploration (AME) Atlantic Geoscience Society (AGS) Canadian Dam Association (CDA) Canadian Federation of Earth Sciences (CFES) Canadian Geophysical Union (CGU) Canadian Geotechnical Society (CGS) Canadian Institute of Mining, Metallurgy and Petroleum (CIM) Canadian Society of Petroleum Geologists (CSPG) Canadian Rock Mechanics Association (CARMA) European Association of Geoscientists & Engineers (EAGE) European Geosciences Union (EGU) European Federation of Geologists (EFG) Geological Association of Canada (GAC) Geological Society of America (GSA) Geoscience Information Society (GSIS) Institute of Materials, Minerals and Mining (IOM3) International Association for Engineering Geology and the Environment (IAEG) International Association of Hydrogeologists (IAH) International Council on Mining and Metals (ICMM) International Society for Rock Mechanics and Rock Engineering (ISRM) International Society for Soil Mechanics and Geotechnical Engineering (ISSMGE) International Tunnelling Association (ITA) International Union of Geological Sciences (IUGS) Mineralogical Association of Canada (MAC) Mining Association of Canada (MAC) Prospectors and Developers Association of Canada (PDAC) Society for Mining, Metallurgy & Exploration (SME) Society of Exploration Geophysicists (SEG) Tunnelling Association of Canada (TAC) U.S. Geological Survey (USGS) U.S. National Mining Association (NMA) Distinction from engineering geology Engineering geologists and geological engineers are both interested in the study of the Earth, its shifting movement, and alterations, and the interactions of human society and infrastructure with, on, and in Earth materials. Both disciplines require licenses from professional bodies in most jurisdictions to conduct related work. The primary difference between geological engineers and engineering geologists is that geological engineers are licensed professional engineers (and sometimes also professional geoscientists/geologists) with a combined understanding of Earth sciences and engineering principles, while engineering geologists are geological scientists whose work focusses on applications to engineering projects, and they may be licensed professional geoscientists/geologists, but not professional engineers. The following subsections provide more details on the differing responsibilities between engineering geologists and geological engineers. Engineering geology Engineering geologists are applied geological scientists who assess problems that might arise before, during, and after an engineering project. They are trained to be aware of potential problems like: landslides, faults, unstable ground, groundwater challenges, and floodplains. They use a variety of field and laboratory testing techniques to characterize ground materials that might affect the construction, the long-term safety, or environmental footprint of a project. Job responsibilities of an engineering geologist include: collecting samples and surveys, conducting lab tests on samples, assessing in situ soil or rock conditions at many scales, preparing reports based on testing and on-site observations for clients, and creating geological models, maps, and sections. 
Geological engineering Geological engineers are engineers with extensive knowledge of geological or Earth sciences as well as engineering geology, engineering principles, and engineering design practices. These professionals are qualified to perform the role of or interact with engineering geologists. Their primary focus, however, is the use of engineering geology data, as well as engineering skills to: Design advanced exploration programs, environmental management or remediation projects including: Groundwater extraction and sustainability, Natural hazard mitigation systems, Energy resource exploration and extraction, Mineral resource exploration and extraction, and Environmental remediation. Design Infrastructure, including: Surface works, Foundations, Tunnels, Dams, Caverns, and Other construction that interfaces with the ground. Oversee components of mining including: Advanced resource assessment and economics, Mineral processing, Mine planning, and Geomechanical and geotechnical stability. In all these activities, the geological model, geological history, and environment, as well as measured engineering properties of relevant Earth materials are critical to engineering design and decision making. References See also Civil engineering Engineering geology Geology Environmental engineering Mining engineering Petroleum engineering Engineering disciplines
Geological engineering
[ "Engineering" ]
3,671
[ "nan" ]
77,795,799
https://en.wikipedia.org/wiki/Gravity%20map
A gravity map is a map that depicts gravity measurements across an area of space, which are typically obtained via gravimetry. Gravity maps are an extension of the field of geodynamics. Readings are typically taken at regular intervals for surface analysis on Earth. Other methods include analysis of artificial satellite orbital mechanics, which can allow comprehensive gravity maps of planets, as has been done for Mars by NASA. Gravity maps typically are based on depictions of gravity anomalies or a planet's geoid. Creation of gravity maps Measurements are typically taken via measuring ground stations, with surveys conducted at regular intervals. For surface mapping of gravity, placement of instruments can be randomized. Surface gravity mapping is often used to map out gravity anomalies such as a Bouguer anomaly or isostatic gravity anomalies. Derivative gravity maps are an extension of standard gravity maps, involving mathematical analysis of the local gravitational field strength, to present data in formats analogous to a geologic map. When a gravity map is presented as a heat map, the intensity typically represents the concentration of mass in a given area, which correlates with that area having a stronger gravitational field; an example would be a mountain range. Conversely, geological structures such as oceanic trenches, or landmass depressions such as those caused by glaciers or fault lines, will show lower gravitational field values, due to the smaller amount of underlying mass in the area. Gravity maps of entire planets can also be produced by analysing the orbital mechanics of artificial satellites, as has been done for Mars by NASA. Goddard Mars Model (GMM) 3 is a gravity map of the gravitational field of the planet Mars. Three orbital craft over Mars, the Mars Global Surveyor (MGS), Mars Odyssey (ODY), and the Mars Reconnaissance Orbiter (MRO), assisted in the creation of the GMM 3 through the study of their orbital flight paths. The travel times and Doppler shift of radio communications between the respective craft and parabolic antennas belonging to the Deep Space Network revealed incremental variations in the spacecraft orbits, allowing the creation of an accurate GMM 3. The Martian gravity map was generated using more than sixteen years of data. External links World Gravity Map Project (WGM) References Geophysics Gravimetry
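As an illustration of the kind of reduction that underlies an anomaly-based gravity map, the sketch below computes a simple Bouguer anomaly for a single station. It is not taken from the article: it uses the conventional free-air gradient of 0.3086 mGal/m, the standard crustal reduction density of 2670 kg/m³, an infinite-slab approximation, and made-up example values.

```python
import math

G = 6.674e-11  # gravitational constant (m^3 kg^-1 s^-2)

def simple_bouguer_anomaly(g_obs_mgal, g_model_mgal, height_m, rho=2670.0):
    """Simple Bouguer anomaly (mGal) for a station `height_m` above the reference
    level: observed minus modelled gravity, corrected for elevation (free-air)
    and for the attraction of an infinite rock slab of density `rho` (kg/m^3)."""
    free_air = 0.3086 * height_m                       # free-air correction, mGal
    slab = 2.0 * math.pi * G * rho * height_m * 1e5    # infinite-slab (Bouguer) term, mGal
    return g_obs_mgal - g_model_mgal + free_air - slab

# Example station 500 m above the reference level (all numbers illustrative).
print(round(simple_bouguer_anomaly(979000.0, 979100.0, 500.0), 1))
```

Negative anomalies of this kind correspond to a local mass deficit, and positive anomalies to excess mass, which is exactly the contrast a heat-map style gravity map visualizes.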
Gravity map
[ "Physics" ]
468
[ "Applied and interdisciplinary physics", "Geophysics" ]
62,436,503
https://en.wikipedia.org/wiki/Green%20finance%20and%20the%20Belt%20and%20Road%20Initiative
Green finance is officially promoted as an important feature of the Belt and Road Initiative, China's signature global economic development initiative. The official vision for the BRI calls for an environmentally friendly "Green Belt and Road". Policy Chinese policy documents for the BRI coordinate and encourage green finance and investment. The Ministry of Ecology and Environment with four other ministries released the "Guidance on Promoting a Green Belt and Road" in 2017. A section of the policy document covers mobilizing capital for financing green projects using "international multilateral and bilateral cooperative institutions and funds, such as Silk Road Fund, South-South Cooperation Assistance Fund, China-ASEAN Investment Cooperation Fund, China-Central and Eastern Europe Investment Cooperation Fund, China-ASEAN Maritime Cooperation Fund, Special Fund for Asian Regional Cooperation and LMC Special Fund." Policy institutions like the China Development Bank and Export-Import Bank of China are to play the "guiding role". The Development Research Center of the State Council and Export-Import Bank of China released a report in 2019 on green finance for the Belt and Road. The report gives recommendations and draws on lessons for China to develop Belt and Road green finance and goes into the details about "implementing the concept of green finance" by Export-Import Bank. Forms The various forms of green finance includes investments, lending, and insurance by Chinese state-owned financial entities and companies for renewable energy projects in host countries of the Belt and Road. Bonds The market for green bonds in China is the second largest in the world. In the international bond market, Chinese banks have also issued green bonds. China Development Bank in November 2017 issued the first green bond specifically for Belt and Road projects. This first green BRI bond had EUR and USD tranches of US$1.1 billion for "renewable energy, clean transportation and water resource management projects" in BRI countries. In the same month, the Bank of China issued a green bond on the London Stock Exchange although not specifically for projects in the BRI. Loans The two primary Chinese policy banks for financing BRI projects are China Development Bank and Export Import Bank and each states support for advancing more green loans. Both banks consider green loans to mean financing projects in renewable energy or environmental protection. The Export Import Bank claimed to fulfill green obligations under the Belt and Road by supporting "a large number of projects featuring low energy consumption and high value added in areas of new energy development and utilization and the circular economy." However, out of the energy project loans advanced by both banks between 2014 and 2017 for the BRI, 18% went to coal while solar and wind accounted for 3.4% and 2.9% respectively. Coal projects The primary contradiction with adherence to green finance and BRI projects is the large amount of lending by Chinese banks for coal fired power plants. In contrast, Western financial institutions have limited or prohibited financing of coal fired power plants starting with the World Bank and European Investment Bank in 2013. State owned Chinese commercial banks have shown a willingness to limit coal projects. In 2017, ICBC and China Construction Bank decided to not fund the Carmichael coal mine after environmental protests by the Australian public. 
In 2021, the International Institute of Green Finance reported that China didn't finance any coal projects via its Belt and Road Initiative in the first half of 2021, which is a first since 2013 when BRI was launched. References Belt and Road Initiative Bonds in foreign currencies Economy of China Finance in China Foreign trade of China Industrial ecology International finance Investment Natural resources Resource economics Sustainability
Green finance and the Belt and Road Initiative
[ "Chemistry", "Engineering" ]
707
[ "Industrial ecology", "Industrial engineering", "Environmental engineering" ]
62,436,652
https://en.wikipedia.org/wiki/Partial%20allocation%20mechanism
The Partial Allocation Mechanism (PAM) is a mechanism for truthful resource allocation. It is based on the max-product allocation - the allocation maximizing the product of agents' utilities (also known as the Nash-optimal allocation or the Proportionally-Fair solution; in many cases it is equivalent to the competitive equilibrium from equal incomes). It guarantees to each agent at least 0.368 of his/her utility in the max-product allocation. It was designed by Cole, Gkatzelis and Goel. Setting There are m resources that are assumed to be homogeneous and divisible. There are n agents, each of whom has a personal function that attributes a numeric value to each "bundle" (combination of resources). The valuations are assumed to be homogeneous functions. The goal is to decide what "bundle" to give to each agent, where a bundle may contain a fractional amount of each resource. Crucially, some resources may have to be discarded, i.e., free disposal is assumed. Monetary payments are not allowed. Algorithm PAM works in the following way. Calculate the max-product allocation; denote it by z. For each agent i: Calculate the max-product allocation when i is not present. Let fi = (the product of the other agents in z) / (the max-product of the other agents when i is not present). Give to agent i a fraction fi of each resource he gets in z. Properties PAM has the following properties. It is a truthful mechanism - each agent's utility is maximized by revealing his/her true valuations. For each agent i, the utility of i is at least 1/e ≈ 0.368 of his/her utility in the max-product allocation. When the agents have additive linear valuations, the allocation is envy-free. PA vs VCG The PA mechanism, which does not use payments, is analogous to the VCG mechanism, which uses monetary payments. VCG starts by selecting the max-sum allocation, and then for each agent i it calculates the max-sum allocation when i is not present, and pays i the difference (max-sum when i is present)-(max-sum when i is not present). Since the agents are quasilinear, the utility of i is reduced by an additive factor. In contrast, PA does not use monetary payments, and the agents' utilities are reduced by a multiplicative factor, by taking away some of their resources. Optimality It is not known whether the fraction of 0.368 is optimal. However, there is provably no truthful mechanism that can guarantee to each agent more than 0.5 of the max-product utility. Extensions The PAM has been used as a subroutine in a truthful cardinal mechanism for one-sided matching. References Fair division protocols Mechanism design
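For the special case of additive linear valuations, the mechanism described above can be sketched in a few lines of code. The snippet below is an illustrative implementation, not reference code from the original paper: it uses the cvxpy convex-optimization library to compute the max-product (Nash-optimal) allocations by maximizing the sum of logarithms of utilities, and the function names and example values are this sketch's own.

```python
import numpy as np
import cvxpy as cp

def max_product_allocation(V, present):
    """Max-product (Nash-optimal) allocation among the agents in `present`.
    V[i, j] is agent i's value per unit of divisible resource j (additive valuations)."""
    n, m = V.shape
    X = cp.Variable((n, m), nonneg=True)
    utilities = cp.sum(cp.multiply(V, X), axis=1)
    constraints = [cp.sum(X, axis=0) <= 1]                       # one unit of each resource
    constraints += [X[i, :] == 0 for i in range(n) if i not in present]
    objective = cp.Maximize(sum(cp.log(utilities[i]) for i in present))
    cp.Problem(objective, constraints).solve()
    allocation = X.value
    return allocation, (V * allocation).sum(axis=1)

def partial_allocation_mechanism(V):
    """Give each agent i the fraction f_i of its bundle in the max-product allocation z,
    where f_i = (product of the others' utilities in z) / (max product without i)."""
    n = V.shape[0]
    z, u = max_product_allocation(V, range(n))
    fractions = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        _, u_without = max_product_allocation(V, others)
        fractions[i] = np.prod(u[others]) / np.prod(u_without[others])
    return z * fractions[:, None], fractions

# Example with two agents and two divisible resources.
V = np.array([[8.0, 2.0],
              [3.0, 7.0]])
allocation, fractions = partial_allocation_mechanism(V)
print(np.round(fractions, 3))   # fraction of its max-product bundle that each agent keeps
```

Each printed fraction lies between 1/e ≈ 0.368 and 1, matching the guarantee stated above; the discarded remainder of each bundle is what makes truthful reporting optimal without monetary payments.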
Partial allocation mechanism
[ "Mathematics" ]
588
[ "Game theory", "Mechanism design" ]
63,340,505
https://en.wikipedia.org/wiki/Adiabatic%20electron%20transfer
In chemistry, adiabatic electron-transfer is a type of oxidation-reduction process. The mechanism is ubiquitous in nature in both the inorganic and biological spheres. Adiabatic electron-transfers proceed without making or breaking chemical bonds. Adiabatic electron-transfer can occur by either optical or thermal mechanisms. Electron transfer during a collision between an oxidant and a reductant occurs adiabatically on a continuous potential energy surface. History Noel Hush is often credited with formulation of the theory of adiabatic electron-transfer. Figure 1 sketches the basic elements of adiabatic electron-transfer theory. Two chemical species (ions, molecules, polymers, protein cofactors, etc.) labelled D (for “donor”) and A (for “acceptor”) become a distance R apart, either through collisions, covalent bonding, location in a material, protein or polymer structure, etc. A and D have different chemical environments. Each polarizes its surrounding condensed media. Electron-transfer theories describe the influence of a variety of parameters on the rate of electron-transfer. All electrochemical reactions occur by this mechanism. Adiabatic electron-transfer theory stresses that intricately coupled to such charge transfer is the ability of any D-A system to absorb or emit light. Hence fundamental understanding of any electrochemical process demands simultaneous understanding of the optical processes that the system can undergo. Figure 2 sketches what happens if light is absorbed by just one of the chemical species, taken to be the charge donor. This produces an excited state of the donor. As the donor and acceptor are close to each other and to the surrounding matter, they experience an electronic coupling, commonly denoted J. If the free energy change is favorable, this coupling facilitates primary charge separation to produce D+-A−, producing charged species. In this way, solar energy is captured and converted to electrical energy. This process is typical of natural photosynthesis as well as modern organic photovoltaic and artificial photosynthesis solar-energy capture devices. The inverse of this process is also used to make organic light-emitting diodes (OLEDs). Adiabatic electron-transfer is also relevant to the area of solar energy harvesting. Here, light absorption directly leads to charge separation D+-A−. Hush's theory for this process considers the donor-acceptor coupling J; the reorganization energy λ, i.e., the energy required to rearrange the atoms from their initial geometry to the preferred local geometry and environmental polarization of the charge-separated state; and the free energy change associated with charge separation. In the weak-coupling limit (2J ≪ λ), Hush showed that the rate of light absorption (and hence charge separation) is given from the Einstein equation by … (1) This theory explained how Prussian blue absorbs light, creating the field of intervalence charge transfer spectroscopy. Adiabatic electron transfer is also relevant to the Robin-Day classification system, which codifies types of mixed-valence compounds. An iconic system for understanding inner-sphere electron transfer is the mixed-valence Creutz-Taube ion, wherein otherwise equivalent Ru(III) and Ru(II) are linked by a pyrazine. The coupling J is not small: charge is not localized on just one chemical species but is shared quantum mechanically between the two Ru centers, presenting classically forbidden half-integral valence states. 
The critical requirement for this phenomenon is that the electronic coupling be at least half the reorganization energy, 2J ≥ λ … (2) Adiabatic electron-transfer theory stems from London's approach to charge transfer, and indeed to general chemical reactions, applied by Hush using parabolic potential-energy surfaces. Hush himself has carried out many theoretical and experimental studies of mixed-valence complexes and long-range electron transfer in biological systems. Hush's quantum-electronic adiabatic approach to electron transfer was unique; directly connecting with the Quantum Chemistry concepts of Mulliken, it forms the basis of all modern computational approaches to modeling electron transfer. Its essential feature is that electron transfer can never be regarded as an “instantaneous transition”; instead, the electron is partially transferred at all molecular geometries, with the extent of the transfer being a critical quantum descriptor of all thermal, tunneling, and spectroscopic processes. It also leads seamlessly to understanding electron-transfer transition-state spectroscopy pioneered by Zewail. In adiabatic electron-transfer theory, the ratio 2J/λ is of central importance. In the very strong coupling limit, when Eqn. (2) is satisfied, intrinsically quantum molecules like the Creutz-Taube ion result. Most intervalence spectroscopy occurs in the weak-coupling limit described by Eqn. (1), however. In both natural photosynthesis and in artificial solar-energy capture devices, this ratio is maximized by minimizing the reorganization energy λ through use of large molecules like chlorophylls, pentacenes, and conjugated polymers. The coupling J can be controlled by controlling the distance R at which charge transfer occurs; the coupling typically decreases exponentially with distance. When electron transfer occurs during collisions of the D and A species, the coupling is typically large and the “adiabatic” limit applies, in which rate constants are given by transition state theory. In biological applications, however, as well as in some organic conductors and other device materials, R is externally constrained and so the coupling is set at low or high values. In these situations, weak-coupling scenarios often become critical. In the weak-coupling (“non-adiabatic”) limit, the activation energy for electron transfer is given by the expression derived independently by Kubo and Toyozawa and by Hush. Using adiabatic electron-transfer theory, in this limit Levich and Dogonadze then determined the electron-tunneling probability to express the rate constant for thermal reactions … (3) This approach is widely applicable to long-range ground-state intramolecular electron transfer, electron transfer in biology, and electron transfer in conducting materials. It also typically controls the rate of charge separation in the excited-state photochemical application described in Figure 2 and related problems. Marcus showed that the activation energy in Eqn. (3) reduces to λ/4 in the case of symmetric reactions with zero driving force (ΔG° = 0). In that work, he also derived the standard expression for the solvent contribution to the reorganization energy, making the theory more applicable to practical problems. Use of this solvation description (instead of the form that Hush originally proposed) in approaches spanning the adiabatic and non-adiabatic limits is often termed “Marcus-Hush Theory”. These and other contributions, including the widespread demonstration of the usefulness of Eqn. (3), led to the award of the 1992 Nobel Prize in Chemistry to Marcus. Adiabatic electron-transfer theory is also widely applied in Molecular Electronics. 
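For reference, the closed forms usually quoted in standard Marcus–Hush treatments for the quantities the text labels Eqn. (2) and Eqn. (3) are collected below. This is a reconstruction from the general literature rather than a quotation of the article's own equations; J denotes the donor–acceptor electronic coupling, λ the reorganization energy, ΔG° the reaction free energy, k_B the Boltzmann constant and T the temperature.

```latex
% Reconstructed standard forms (not quoted from the original article):
% (2) Hush criterion for charge delocalization (Robin-Day Class III behaviour)
2\,|J| \;\geq\; \lambda
% (3) Non-adiabatic (weak-coupling) rate constant for thermal electron transfer
k_{\mathrm{ET}} \;=\; \frac{2\pi}{\hbar}\,|J|^{2}\,
\frac{1}{\sqrt{4\pi\lambda k_{\mathrm{B}}T}}\,
\exp\!\left[-\,\frac{\left(\Delta G^{\circ}+\lambda\right)^{2}}{4\lambda k_{\mathrm{B}}T}\right],
\qquad
\Delta G^{\ddagger} \;=\; \frac{\left(\Delta G^{\circ}+\lambda\right)^{2}}{4\lambda}
\;\longrightarrow\; \frac{\lambda}{4}\quad\text{for }\Delta G^{\circ}=0 .
```

In the strong-coupling (adiabatic) regime the electronic prefactor saturates and the rate is instead governed by nuclear motion and transition state theory, as noted in the text.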
In particular, this reconnects adiabatic electron-transfer theory with its roots in proton-transfer theory and hydrogen-atom transfer, leading back to London's theory of general chemical reactions. References Physical chemistry Reaction mechanisms
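The non-adiabatic rate expression referred to as Eqn. (3) above, together with Marcus's observation that the activation energy reduces to one quarter of the reorganization energy for symmetric reactions, can be illustrated numerically. The following Python sketch is not part of the original article; it evaluates the standard Marcus-type weak-coupling rate constant, and the coupling, reorganization energy, and driving-force values used are purely illustrative assumptions.

```python
import math

# Physical constants (SI units)
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
KB = 1.380649e-23        # Boltzmann constant, J/K
EV = 1.602176634e-19     # electron volt, J

def marcus_rate(V_eV, lam_eV, dG_eV, T=298.0):
    """Non-adiabatic (weak-coupling) electron-transfer rate constant in s^-1.

    V_eV  : donor-acceptor electronic coupling |V| in eV (illustrative)
    lam_eV: reorganization energy lambda in eV (illustrative)
    dG_eV : reaction free-energy change in eV (illustrative)
    """
    V, lam, dG = V_eV * EV, lam_eV * EV, dG_eV * EV
    kT = KB * T
    prefactor = (2.0 * math.pi / HBAR) * V**2 / math.sqrt(4.0 * math.pi * lam * kT)
    activation = (dG + lam) ** 2 / (4.0 * lam * kT)   # reduces to lam/4 when dG = 0
    return prefactor * math.exp(-activation)

# Hypothetical numbers: 10 meV coupling, 0.8 eV reorganization energy
print(f"symmetric reaction : {marcus_rate(0.010, 0.8, 0.0):.3e} s^-1")
print(f"activationless case: {marcus_rate(0.010, 0.8, -0.8):.3e} s^-1")
```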
Adiabatic electron transfer
[ "Physics", "Chemistry" ]
1,410
[ "Reaction mechanisms", "Applied and interdisciplinary physics", "Physical organic chemistry", "nan", "Chemical kinetics", "Physical chemistry" ]
63,346,732
https://en.wikipedia.org/wiki/Communications%20Materials
Communications Materials is a peer-reviewed, open-access scientific journal in the field of materials science, published by Nature Portfolio since 2020. The chief editor is John Plummer. The journal was created as one of several sub-journals to Nature Communications. Abstracting and indexing The journal is abstracted and indexed in selective databases such as Science Citation Index Expanded and Scopus. According to the Journal Citation Reports, the journal has a 2022 impact factor of 7.8. See also Nature Nature Communications Scientific Reports References External links Nature Research academic journals Materials science journals Open access journals Academic journals established in 2020 English-language journals Creative Commons-licensed journals Continuous journals 2020 establishments
Communications Materials
[ "Materials_science", "Engineering" ]
133
[ "Materials science journals", "Materials science" ]
63,346,757
https://en.wikipedia.org/wiki/Npj%202D%20Materials%20and%20Applications
npj 2D Materials and Applications is an open-access, peer-reviewed scientific journal published by Nature Publishing Group. It focuses on 2D materials (such as thin films), including fundamental behaviour, synthesis, properties and applications. According to the Journal Citation Reports, npj 2D Materials and Applications has a 2022 impact factor of 9.7. The current editor-in-chief is Andras Kis (École Polytechnique Fédérale de Lausanne). Scope npj 2D Materials and Applications publishes articles, brief communications, comments, matters arising, perspectives, and editorials on 2D materials in their entirety, including fundamental behaviour, synthesis, properties and applications. Specific materials and topics of interest include, but are not limited to: 2D materials in all their forms (graphene, transition metal dichalcogenides, phosphorene and molecular systems, including relevant allotropes and compounds, and topological materials); fundamental understanding of their basic science; synthesis by physical and chemical approaches; behavior and properties (electronic, magnetic, spintronic, photonic, mechanical, including in heterostructures and other architectures); and applications (sensors, memory, high-frequency electronics, energy harvesting and storage, flexible electronics, water treatment, biomedical, thermal management). References External links Nature Research academic journals Materials science journals English-language journals
Npj 2D Materials and Applications
[ "Materials_science", "Engineering" ]
265
[ "Materials science stubs", "Materials science journals", "Materials science journal stubs", "Materials science" ]
63,350,802
https://en.wikipedia.org/wiki/Methanedisulfonic%20acid
Methanedisulfonic acid (MDA) is the organosulfur compound with the formula CH2(SO3H)2. It is the disulfonic acid of methane. It is prepared by treatment of methanesulfonic acid with oleum. Its acid strength (pKa) is comparable to that of sulfuric acid. History and synthesis The acid was first unknowingly prepared in 1833 by Gustav Magnus as a decomposition product of ethanedisulfonic acid during early attempts to synthesize diethyl ether from ethanol and anhydrous sulfuric acid. Early investigations focused on ether production from alcohols and strong anhydrous acids. Liebig provided a detailed overview of the various sulfonic acids obtained from these reactions, and introduced the name "ethionic acid" for the sulfooxyethanesulfonic acid previously termed "Weinschwefelsäure". Josef Redtenbacher subsequently analyzed the barium salt of MDA and coined the name (still occasionally used) methionic acid, following Liebig's convention. In 1856, Adolph Strecker analyzed various methionate salts and improved the synthesis from ether and anhydrous sulfuric acid by trapping evolving gases within the reaction vessel to maximize conversion. The same year, Buckton and Hofmann discovered a synthesis from acetonitrile or acetamide with fuming sulfuric acid but did not identify their product, designating it methylotetrasulphuric acid. Another method was developed in 1897, in which acetylene was treated with fuming sulfuric acid to obtain acetaldehyde disulfonic acids, which were then decomposed to methionic acid upon boiling in alkaline solution. However, all these early synthetic routes suffered from numerous byproducts. A higher-yielding synthesis was introduced in 1929, in which dichloromethane (CH2Cl2) is treated with potassium sulfite under hydrothermal conditions to give a methionate salt. See also 1,3-Propanedisulfonic acid References Sulfonic acids
Methanedisulfonic acid
[ "Chemistry" ]
434
[ "Functional groups", "Sulfonic acids" ]
63,352,173
https://en.wikipedia.org/wiki/EQUIL2
EQUIL2 is a computer program used to estimate the risk of nephrolithiasis (renal stones). The input data include the excretion, concentration, and saturation of trace elements or other substances involved in the formation of kidney stones, and the output is provided as a PSF score (probability of stone formation) or other equivalent formats. In some studies, SUPERSAT, another program, provided more accurate estimates of some parameters, such as relative supersaturation (RSS). References Kidney diseases Medical software
EQUIL2
[ "Biology" ]
113
[ "Medical software", "Medical technology" ]
63,352,392
https://en.wikipedia.org/wiki/Target%20selection
Target selection is the process by which axons (nerve fibres) selectively target other cells for synapse formation. Synapses are structures which enable electrical or chemical signals to pass between neurons. While the mechanisms governing target specificity remain incompletely understood, it has been shown in many organisms that a combination of genetic and activity-based mechanisms governs initial target selection and refinement. The process of target selection has multiple steps, which include axon pathfinding, when neurons extend processes to specific regions; cellular target selection, when neurons choose appropriate partners in a target region from a multitude of potential partners; and subcellular target selection, where axons often target particular regions of a partner neuron. Description As bundled axons finish navigating through various neural circuits during neural development, each growth cone must selectively choose the cells with which it will synapse. This can be particularly well observed in the visual and olfactory systems of organisms. For a properly functioning nervous system to develop, there must be an extremely high degree of accuracy in the choice of cells with which the growth cone forms neural connections. Although target cell selection must be highly accurate, the degree of specificity that the neural connectivity achieves varies with the neuronal circuitry system. The target selection process by which an axon develops synaptic connections with specific cells can be broken down into multiple stages that are not necessarily confined to an exact chronological order. The stages of targeting include region specification, target cell specification, subcellular specification, and synaptic refinement. Region specification The first stage in target selection is specification of the target region, a process known as axon pathfinding. Growing neurites follow gradients of cell surface molecules that serve as chemoattractants and repellents to the growth cone. This perspective is an evolution of the chemoaffinity hypothesis posited by the neurobiologist Roger Wolcott Sperry in the 1960s. Sperry studied how the neurons in the visual systems of amphibians and goldfish form topographic maps in the brain, noting that if the optic nerve is crushed and allowed to regenerate, the axons will trace back the same patterns of connections. Sperry hypothesized that the target cells carried "identification tags" that would guide the growing axon, which we now know as recognition molecules that bind the growth cone along a gradient. Neurons in sensory systems like the visual, auditory, or olfactory cortex grow into topographic maps such that neighboring neurons in the periphery correspond to adjacent target locations in the central nervous system. For example, neurons nearby on the retina will project to nearby cortical cells, creating a so-called retinotopic map. This cortical organization allows organisms to more easily decode stimuli. The mechanisms governing region specification have been well studied in numerous systems. In Drosophila, numerous axon guidance molecules have been shown to be involved in precise regionalization of the ventral nerve cord. Target cell specification Once a growing neuron has entered the target area, it must locate and enter the appropriate target cell with which to synapse. This is accomplished through sequential signaling of attractive and repulsive cues, largely neurotrophins. 
The axon grows along its chemoattractant gradient until approaching the target cell, when its growth is slowed by a sudden drop in the concentration of chemoattractant. This serves as a signal to enter the target cell.[1] As the growth cone slows down, branches begin to form through one of two modalities: splitting of the growth cone, or interstitial branching. Growth cone splitting results in bifurcation of the main axon and is associated with axon guidance and the innervation of multiple faraway targets. Conversely, interstitial branching increases axonal coverage locally to define its presynaptic territory. Most mammalian CNS branches extend interstitially.[7] Branching can also be caused by repulsive cues in the environment that cause the growth cone to pause and collapse, resulting in the formation of branches.[8] To ensure successful innervation, inappropriate targeting must be prevented. Once the axon has reached its target area and started to slow down and branch, it can be held within the target area by a perimeter of cues repulsive to the growth cone. Cell-to-cell interactions Axons express patterns of cell-surface adhesion molecules that allow them to match with specific layer targets. An important family of adhesion molecules is the cadherins, whose differing combinations on target cells allow the traction and guidance of the forming axons. A typical example of layers with combinatorial expression of these molecules is the tectal laminae in the chick tectum, where the N-cadherin molecule is present only in those layers that receive axons from the retina. Extracellular cues Matrix factors and secreted cues are also very important in the formation of layered structures, and can be divided into attractive and repulsive cues, though the same factor can have both functions under varying conditions. For example, semaphorin is a substance with a repulsive effect that has been shown to have a fundamental role in layering between different somatosensory modalities in the spinal cord system. Synapse formation The molecular mechanism of synapse formation is a process composed of different stages that relies on complex intracellular mechanisms involving both the pre- and postsynaptic cell. When the growth cone of the growing presynaptic axon makes contact with the target cell, it loses its filopodia, while both cells start expressing adhesion molecules on their respective membranes to form tight junctions, called "puncta adherentia", which are similar to an adherens junction. Different classes of adhesion molecules, like SynCAM, cadherins and neuroligins/neurexins, play an important role in synapse stabilization and enable synaptic formation. After the synapses have been stabilized, the pre- and postsynaptic cells undergo subcellular changes on each side of the synapse. Namely, there is an accumulation of the Golgi apparatus on the postsynaptic side, while there is an accumulation of vesicles in the presynaptic terminal. Finally, at the end of synaptogenesis, there is an apposition of extracellular matrix between the cells with the formation of a synaptic cleft. Characteristic of the postsynaptic cell is the presence of a postsynaptic density (PSD), formed by PDZ-domain-containing scaffold proteins whose function is to keep the neurotransmitter receptors clustered inside the synapse. References Cell biology Neural circuitry Nervous system
Target selection
[ "Biology" ]
1,371
[ "Organ systems", "Cell biology", "Nervous system" ]
58,097,665
https://en.wikipedia.org/wiki/Nu-transform
In the theory of stochastic processes, a ν-transform is an operation that transforms a measure or a point process into a different point process. Intuitively, the ν-transform randomly relocates the points of the point process, with the type of relocation being dependent on the position of each point. Definition For measures Let $\delta_x$ denote the Dirac measure on the point $x$ and let $\mu = \sum_i \delta_{x_i}$ be a simple point measure on $S$. This means that $x_i \ne x_j$ for distinct $i \ne j$ and $\mu(B) < \infty$ for every bounded set $B$ in $S$. Further, let $\nu$ be a Markov kernel from $S$ to $T$. Let $(Y_i)$ be independent random elements with distribution $Y_i \sim \nu(x_i, \cdot)$. Then the point process $\zeta = \sum_i \delta_{Y_i}$ is called the ν-transform of the measure $\mu$ if it is locally finite, meaning that $\zeta(B) < \infty$ for every bounded set $B$. For point processes For a point process $\xi$, a second point process $\zeta$ is called a ν-transform of $\xi$ if, conditional on $\xi = \mu$, the point process $\zeta$ is a ν-transform of $\mu$. Properties Stability If $\xi$ is a Cox process directed by the random measure $\eta$, then the ν-transform of $\xi$ is again a Cox process, directed by the random measure $\eta\nu$ (see Transition kernel#Composition of kernels). Therefore, the ν-transform of a Poisson process with intensity measure $\mu$ is a Cox process directed by the measure $\mu\nu$, that is, a Poisson process with intensity measure $\mu\nu$. Laplace transform If $\zeta$ is a ν-transform of $\xi$, then the Laplace transform of $\zeta$ is given by $\mathcal{L}_\zeta(f) = \mathcal{L}_\xi(g)$ with $g(x) = -\log \int_T e^{-f(y)}\,\nu(x,\mathrm{d}y)$, for all bounded, positive and measurable functions $f$. References Point processes
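As a concrete illustration of the definition above, the following Python sketch (not part of the original article) performs a ν-transform of a simulated homogeneous Poisson process, using a Gaussian displacement kernel whose spread depends on the position of each point; the kernel, the intensity, and all numerical values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def nu_transform(points, kernel_sampler):
    """Relocate each point x independently according to the Markov kernel
    nu(x, .), given here as a function drawing one sample from nu(x, .)."""
    return np.array([kernel_sampler(x) for x in points])

# Homogeneous Poisson process on [0, 10] with intensity 5 (illustrative) ...
intensity, length = 5.0, 10.0
n = rng.poisson(intensity * length)
xs = rng.uniform(0.0, length, size=n)

# ... and a kernel that displaces a point by a Gaussian whose spread grows with x.
gaussian_kernel = lambda x: rng.normal(loc=x, scale=0.1 * (1.0 + x))

ys = nu_transform(xs, gaussian_kernel)
print(f"{n} points before and after the transform")
print(np.sort(xs)[:5], np.sort(ys)[:5])
```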
Nu-transform
[ "Mathematics" ]
268
[ "Point processes", "Point (geometry)" ]
58,098,655
https://en.wikipedia.org/wiki/Pirkko%20Eskola
Pirkko Eskola is a Finnish physicist. She co-discovered the chemical elements rutherfordium and dubnium whilst working at the Lawrence Berkeley National Laboratory. Research Eskola studied at the University of Helsinki. She worked on heavy ion physics. In 1961 Eskola demonstrated that the half-life of nobelium was 25 seconds. Eskola joined Lawrence Berkeley National Laboratory in 1968 and stayed until 1972. She worked with Albert Ghiorso, James Andrew Harris and her husband, Kari Eskola. In 1969 she was part of the team that discovered rutherfordium by bombarding californium-249 with carbon-12. In 1970 she co-discovered dubnium using the Heavy Ion Linear Accelerator, bombarding a target of californium-249 with nitrogen nuclei. There was debate between Russia and America over who first discovered rutherfordium. Eskola studied the alpha decay of nobelium-255 and nobelium-257. She went on to work on beta-unstable, alpha-particle-emitting nuclei. Using alpha-particle spectroscopy she studied lawrencium isotopes 255 to 260 and mendelevium isotopes 248 to 252. Her husband, Kari Eskola, was a professor of physics at the University of Helsinki. She went on to have a career in science education. She was an editor of the Finnish Physical Society journal Physica Fennica. Eskola was a member of the American Physical Society Committee for Women in Physics. References 21st-century Finnish physicists Nuclear physicists Women nuclear physicists Finnish expatriates in the United States Living people Year of birth missing (living people) University of Helsinki alumni
Pirkko Eskola
[ "Physics" ]
328
[ "Nuclear physicists", "Nuclear physics" ]
58,100,243
https://en.wikipedia.org/wiki/Spider%20shot
Spider shot was a variation of chain shot with multiple chains. See also Round shot Heated shot Canister shot Grapeshot References Projectiles Artillery ammunition Balls Chains Metallic objects
Spider shot
[ "Physics" ]
34
[ "Metallic objects", "Physical objects", "Matter" ]
58,103,878
https://en.wikipedia.org/wiki/Peano%20kernel%20theorem
In numerical analysis, the Peano kernel theorem is a general result on error bounds for a wide class of numerical approximations (such as numerical quadratures), defined in terms of linear functionals. It is attributed to Giuseppe Peano. Statement Let $\mathcal{V}[a,b]$ be the space of all functions $f$ that are $\nu$ times differentiable on $[a,b]$ and whose $\nu$-th derivative is of bounded variation on $[a,b]$, and let $L$ be a linear functional on $\mathcal{V}[a,b]$. Assume that $L$ annihilates all polynomials of degree $\le \nu$, i.e. $Lp = 0$ for all $p \in \mathbb{P}_\nu$. Suppose further that for any bivariate function $g(x,\theta)$ with $g(x,\cdot),\,g(\cdot,\theta) \in C^{\nu+1}[a,b]$, the following is valid: $L\int_a^b g(x,\theta)\,\mathrm{d}\theta = \int_a^b L g(x,\theta)\,\mathrm{d}\theta$, and define the Peano kernel of $L$ as $k(\theta) = L[(x-\theta)_+^\nu]$, $\theta \in [a,b]$, using the notation $(x-\theta)_+^\nu = (x-\theta)^\nu$ if $x \ge \theta$ and $0$ otherwise. The Peano kernel theorem states that, if $k \in \mathcal{V}[a,b]$, then for every function $f$ that is $\nu+1$ times continuously differentiable, we have $Lf = \frac{1}{\nu!}\int_a^b k(\theta)\, f^{(\nu+1)}(\theta)\,\mathrm{d}\theta$. Bounds Several bounds on the value of $Lf$ follow from this result: $|Lf| \le \frac{1}{\nu!}\,\|k\|_1\,\|f^{(\nu+1)}\|_\infty$, $|Lf| \le \frac{1}{\nu!}\,\|k\|_2\,\|f^{(\nu+1)}\|_2$ and $|Lf| \le \frac{1}{\nu!}\,\|k\|_\infty\,\|f^{(\nu+1)}\|_1$, where $\|\cdot\|_1$, $\|\cdot\|_2$ and $\|\cdot\|_\infty$ are the taxicab, Euclidean and maximum norms respectively. Application In practice, the main application of the Peano kernel theorem is to bound the error of an approximation that is exact for all polynomials of degree $\le \nu$. The theorem above follows from the Taylor polynomial for $f$ with integral remainder, $f(x) = f(a) + f'(a)(x-a) + \dots + \frac{f^{(\nu)}(a)}{\nu!}(x-a)^\nu + \frac{1}{\nu!}\int_a^x (x-\theta)^\nu f^{(\nu+1)}(\theta)\,\mathrm{d}\theta$, defining $Lf$ as the error of the approximation, using the linearity of $L$ together with exactness for polynomials of degree $\le \nu$ to annihilate all but the final term on the right-hand side, and using the $(\cdot)_+$ notation to remove the $x$-dependence from the integral limits. See also Divided differences References Numerical analysis
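To make the theorem concrete, the following Python sketch (not part of the original article) takes the error functional of the trapezoidal rule on [0, 1], for which ν = 1 and the Peano kernel works out to K(θ) = −θ(1 − θ)/2, and checks numerically that L[f] equals the kernel integral and respects the first bound; the test function cos is an arbitrary choice.

```python
import numpy as np

# Error functional of the trapezoidal rule on [0, 1]:
#   L[f] = integral_0^1 f(x) dx - (f(0) + f(1)) / 2,
# which annihilates all polynomials of degree <= 1, so nu = 1 and 1/nu! = 1.

def L(f, n=200001):
    xs = np.linspace(0.0, 1.0, n)
    return np.trapz(f(xs), xs) - 0.5 * (f(0.0) + f(1.0))

def peano_kernel(theta):
    # K(theta) = L_x[(x - theta)_+] = -theta * (1 - theta) / 2 for theta in [0, 1]
    return -theta * (1.0 - theta) / 2.0

f = np.cos                        # arbitrary test function
d2f = lambda x: -np.cos(x)        # its second derivative f''

theta = np.linspace(0.0, 1.0, 200001)
via_kernel = np.trapz(peano_kernel(theta) * d2f(theta), theta)
bound = np.trapz(np.abs(peano_kernel(theta)), theta) * np.max(np.abs(d2f(theta)))

print(f"L[f] directly    : {L(f):.8f}")
print(f"via Peano kernel : {via_kernel:.8f}")
print(f"|L[f]| <= bound  : {abs(L(f)):.8f} <= {bound:.8f}")
```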
Peano kernel theorem
[ "Mathematics" ]
268
[ "Theorems in mathematical analysis", "Mathematical analysis", "Mathematical theorems", "Computational mathematics", "Mathematical relations", "Numerical analysis", "Mathematical problems", "Approximations" ]
58,103,934
https://en.wikipedia.org/wiki/Mapping%20theorem%20%28point%20process%29
The mapping theorem is a theorem in the theory of point processes, a sub-discipline of probability theory. It describes how a Poisson point process is altered under measurable transformations. This allows construction of more complex Poisson point processes out of homogeneous Poisson point processes and can, for example, be used to simulate these more complex Poisson point processes in a similar manner to inverse transform sampling. Statement Let $X$ and $Y$ be locally compact Polish spaces and let $f \colon X \to Y$ be a measurable function. Let $\mu$ be a Radon measure on $X$ and assume that the pushforward measure $\nu := \mu \circ f^{-1}$ of $\mu$ under the function $f$ is a Radon measure on $Y$. Then the following holds: If $\xi$ is a Poisson point process on $X$ with intensity measure $\mu$, then the image $f(\xi)$ is a Poisson point process on $Y$ with intensity measure $\nu = \mu \circ f^{-1}$. References Poisson point processes
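The use of the mapping theorem for simulation mentioned above can be sketched in a few lines of Python (not part of the original article): a homogeneous rate-1 Poisson process is mapped through the inverse of a cumulative intensity function to obtain an inhomogeneous Poisson process. The chosen intensity λ(t) = 2t on [0, 1] is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

# Target: inhomogeneous Poisson process on [0, 1] with intensity lambda(t) = 2t,
# i.e. cumulative intensity Lambda(t) = t**2.  By the mapping theorem, mapping a
# homogeneous rate-1 Poisson process on [0, Lambda(1)] through the inverse
# cumulative intensity Lambda^{-1}(s) = sqrt(s) yields the desired process.

total_mass = 1.0**2                       # Lambda(1)
n = rng.poisson(total_mass)               # number of points of the rate-1 process
s = rng.uniform(0.0, total_mass, size=n)  # homogeneous points on [0, Lambda(1)]
t = np.sqrt(s)                            # mapped (inhomogeneous) points

print(f"{n} points:", np.sort(t))
```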
Mapping theorem (point process)
[ "Mathematics" ]
156
[ "Point processes", "Point (geometry)", "Poisson point processes" ]
58,104,462
https://en.wikipedia.org/wiki/Lysenin
Lysenin is a pore-forming toxin (PFT) present in the coelomic fluid of the earthworm Eisenia fetida. Pore-forming toxins are a group of proteins that act as virulence factors of several pathogenic bacteria. Lysenin proteins are chiefly involved in the defense against cellular pathogens. Following the general mechanism of action of PFTs, lysenin is secreted as a soluble monomer that binds specifically to a membrane receptor, sphingomyelin in the case of lysenin. After attaching to the membrane, oligomerization begins, resulting in a nonamer on top of the membrane, known as a prepore. After a conformational change, which can be triggered by a decrease in pH, the oligomer is inserted into the membrane in the so-called pore state. Monomer Lysenin is a protein produced in the coelomocyte-leucocytes of the earthworm Eisenia fetida. This protein was first isolated from the coelomic fluid in 1996 and named lysenin (from lysis and Eisenia). Lysenin is a relatively small water-soluble molecule with a molecular weight of 33 kDa. Using X-ray crystallography, lysenin was classified as a member of the aerolysin protein family by structure and function. Structurally, each lysenin monomer consists of a receptor binding domain (grey globular part on right of Figure 1) and a Pore Forming Module (PFM); these domains are shared throughout the aerolysin family. The lysenin receptor binding domain contains three sphingomyelin binding motifs. The Pore Forming Module contains the regions that undergo large conformational changes to become the β-barrel in the pore. Membrane receptors The natural membrane target of lysenin is an animal plasma membrane lipid called sphingomyelin, located mainly in the membrane's outer leaflet, and binding involves at least three phosphocholine (PC) groups. Sphingomyelin is usually found associated with cholesterol in lipid rafts. Cholesterol, which enhances oligomerization, provides a stable platform with high lateral mobility where monomer-monomer encounters are more probable. PFTs have been shown to be able to remodel the membrane structure, sometimes even mixing lipid phases. The region of the lysenin pore β-barrel expected to be immersed in the hydrophobic region of the membrane is the 'detergent belt', the 3.2 nm high region occupied by detergent in Cryogenic Electron Microscopy (Cryo-EM) studies of the pore. On the other hand, sphingomyelin/cholesterol bilayers are about 4.5 nm in height. This difference in height between the detergent belt and the sphingomyelin/cholesterol bilayer implies a bending of the membrane in the region surrounding the pore, called negative mismatch. This bending results in a net attraction between pores that induces pore aggregation. Binding, oligomerization and insertion Membrane binding is a prerequisite for initiating PFT oligomerization. Lysenin monomers bind specifically to sphingomyelin via the receptor binding domain. The final lysenin oligomer consists of nine monomers, with no quantified deviations. When lysenin monomers bind to sphingomyelin-enriched membrane regions, these regions provide a stable platform with high lateral mobility, hence favouring oligomerization. As with most PFTs, lysenin oligomerization occurs in a two-step process, as was recently imaged. The process begins with monomers being adsorbed onto the membrane by specific interactions, resulting in an increased concentration of monomers. This increase is promoted by the small area in which the membrane receptor accumulates, since the majority of PFT membrane receptors are associated with lipid rafts. 
Another side effect, aside from the increase of monomer concentration, is the monomer-monomer interaction. This interaction increases lysenin oligomerization. After a critical threshold concentration is reached, several oligomers are formed simultaneously, although sometimes these are incomplete. In contrast to PFTs of the cholesterol-dependent cytolysin family, the transition from incomplete lysenin oligomers to complete oligomers has not been observed. A complete oligomerization results in the so-called prepore state, a structure on the membrane. Determining the prepore's structure by X-ray or Cryo-EM is a challenging process that so far has not produced any results. The only available information about the prepore structure was provided by Atomic Force Microscopy (AFM). The measured prepore height was 90 Å; and the width 118 Å, with an inner pore of 50 Å. A model of the prepore was built aligning the monomer structure () with the pore structure () by their receptor-binding domains (residues 160 to 297). A recent study in aerolysin suggests that the currently accepted model for the lysenin prepore should be revisited, according to the new available data on the aerolysin insertion. A conformational change transforms the PFM into the transmembrane β-barrel, leading to the pore state. The trigger mechanism for the prepore-to-pore transition in lysenin depends on three glutamic acid residues (E92, E94 and E97), and is activated by a decrease in pH, from physiological conditions to the acidic conditions reached after endocytosis, or an increase in calcium extracellular concentration. These three glutamic acids are located in an α-helix that forms part of the PFM, and glutamic acids are found in aerolysin family members in its PFMs. Such a conformational change produces a decrease in the oligomer height of 2.5 nm according to AFM measurements. The main dimensions, using lysenin pore X-ray structure, are height 97 Å, width 115 Å and the inner pore of 30 Å. However, complete oligomerization into the nonamer is not a requisite for the insertion, since incomplete oligomers in the pore state can be found. The prepore to pore transition can be blocked in crowded conditions, a mechanism that could be general to all β-PFTs. The first hint of crowding effect on prepore to pore transition was given by congestion effects in electrophysiology experiments. Insertion consequences The ultimate consequences of lysenin pore formation are not well documented; however, it is thought to induce apoptosis via three possible hypotheses: Breaking the sphingomyelin asymmetry between the two leaflets of the lipid bilayer by punching holes in the membrane and inducing lipid flip-flop (reorientation of a lipid from one leaflet of a membrane bilayer to the other). Increasing the calcium concentration in the cytoplasm. Decreasing the potassium concentration in the cytoplasm. Biological role The biological role of lysenin remains unknown. It has been suggested that lysenin may play a role as a defence mechanism against attackers such as bacteria, fungi or small invertebrates. However, lysenin's activity is dependent upon binding to sphingomyelin, which is not present in the membranes of bacteria, fungi or most invertebrates. Rather, sphingomyelin is mainly present in the plasma membrane of chordates. Another hypothesis is that the earthworm, which is able to expel coelomic fluid under stress, generates an avoidance behaviour to its vertebrate predators (such as birds, hedgehogs or moles). 
If that is the case, the expelled lysenin might be more effective if the coelomic fluid reaches the eye, where the concentration of sphingomyelin is ten times higher than in other body organs. A complementary hypothesis is that the pungent smell of the coelomic fluid - giving the earthworm its specific epithet foetida - is an anti-predator adaptation. However, it remains unknown whether lysenin contributes to avoidance of Eisenia by predators. Applications Lysenin's conductive properties have been studied for years. Like most pore-forming toxins, lysenin forms a non-specific channel that is permeable to ions, small molecules, and small peptides. There have also been over three decades of studies into finding suitable pores for converting into nanopore sequencing systems that can have their conductive properties tuned by point mutation. Owing to its binding affinity for sphingomyelin, lysenin (or just the receptor binding domain) has been used as a fluorescence marker to detect the sphingomyelin domain in membranes. References External links https://www.theses.fr/2017AIXM0124 Protein toxins
Lysenin
[ "Chemistry" ]
1,853
[ "Protein toxins", "Toxins by chemical classification" ]
58,105,950
https://en.wikipedia.org/wiki/Lower%20Mississippi%20water%20resource%20region
The Lower Mississippi water resource region is one of 21 major geographic areas, or regions, in the first level of classification used by the United States Geological Survey to divide and sub-divide the United States into successively smaller hydrologic units. These geographic areas contain either the drainage area of a major river, or the combined drainage areas of a series of rivers. The Lower Mississippi region, which is listed with a 2-digit hydrologic unit code (HUC) of 08, has an approximate size of , and consists of 9 subregions, which are listed with the 4-digit HUCs 0801 through 0809. This region includes the drainage within the United States of: (a) the Mississippi River below its confluence with the Ohio River, excluding the Arkansas, Red, and White River basins above the points of highest backwater effect of the Mississippi River in those basins; and (b) coastal streams that ultimately discharge into the Gulf of Mexico from the Pearl River Basin boundary to the Sabine River and Sabine Lake drainage boundary. Includes parts of Arkansas, Kentucky, Louisiana, Mississippi, Missouri, and Tennessee. List of water resource subregions See also List of rivers in the United States Water resource region References Lists of drainage basins Drainage basins Watersheds of the United States Regions of the United States Resource Water resource regions
Lower Mississippi water resource region
[ "Environmental_science" ]
265
[ "Hydrology", "Drainage basins" ]
59,742,549
https://en.wikipedia.org/wiki/Tight%20junction%20proteins
Tight junction proteins (TJ proteins) are molecules situated at the tight junctions of epithelial, endothelial and myelinated cells. This multiprotein junctional complex has a regulatory function in the passage of ions, water and solutes through the paracellular pathway. It can also coordinate the motion of lipids and proteins between the apical and basolateral surfaces of the plasma membrane. The tight junction thereby conducts signaling molecules that influence the differentiation, proliferation and polarity of cells, and it plays a key role in the maintenance of osmotic balance and the transcellular transport of tissue-specific molecules. More than 40 different proteins are now known to be involved in these selective TJ channels. Structure of tight junction The morphology of the tight junction is formed by transmembrane strands on the inner side of the plasma membrane with complementary grooves on the outer side. This TJ strand network is composed of transmembrane proteins that interact with actin in the cytoskeleton and with submembrane proteins, which send signals into the cell. The complexity of the network structure depends on the cell type, and it can be visualized and analyzed by freeze-fracture electron microscopy, which shows the individual strands of the tight junction. Function of tight junction proteins TJ proteins can be divided into different groups according to their function or localization in the tight junction. TJ proteins are mostly described in the epithelia and endothelia, but also in myelinated cells. In the central and peripheral nervous systems, TJs are localized between glia and axons and within myelin sheaths, where they facilitate signaling. Some TJ proteins act as scaffolds that connect integral proteins with actin in the cytoskeleton. Others have the ability to crosslink junctional molecules or transport vesicles through the tight junction. Some submembrane proteins are involved in cell signaling and gene expression through their specific binding to transcription factors. The most important tight junction proteins are occludin, the claudins and the JAM family, which establish the backbone of the tight junction and allow the passage of immune cells through the tissue. TJ proteins in epithelia and endothelia Proteins in epithelial and endothelial cells are occludin, claudin and tetraspanin, each of which has one or two different conformations. All of them are built from four transmembrane regions with two (amino- and carboxy-terminal) domains that are oriented towards the cytoplasm. Occludin has a structure with two similar extracellular loops, whereas claudin and tetraspanin have one extracellular loop significantly longer than the other. Occludin Occludin (60 kDa) was the first identified component of the tight junction. The tetraspan membrane protein is established by two extracellular loops, two terminal domains and one short intracellular domain. The C-terminal domain of occludin is directly bound to ZO-1, which interacts with actin filaments in the cytoskeleton. It works as a transmitter from and to the tight junction because of its association with signaling molecules (PI3-kinase, PKC, YES, protein phosphatases 2A and 1). This TJ protein also participates in the selective diffusion of solutes along concentration gradients and in the transmigration of leukocytes across the endothelium and epithelium. 
Therefore, overexpression of mutant occludin in epithelial cells breaks down the barrier function of the tight junction and changes the migration of neutrophils. Occludin cooperates with members of the claudin family directly or indirectly, and together they form the long strands of the tight junction. Claudin The claudin family is composed of 24 members. Some of them have not been well characterized yet, but all members are 20-27 kDa tetraspan proteins with two terminal domains, one short intracellular domain and two extracellular loops, of which the first is notably larger than the second. The C-terminal domain of claudins is required for their stability and targeting. This domain contains a PDZ-binding motif that facilitates binding to PDZ-domain membrane proteins such as ZO-1, ZO-2, ZO-3 and MUPP1. Each claudin has a specific variation and number of charged amino acids in the first extracellular loop, and through this variation claudins can selectively regulate the transfer of molecules, in contrast to occludin, which makes paracellular holes for ion trafficking between neighbouring cells. Claudins appear to act in a tissue-specific manner, because some of them are expressed only in specific cell types. Claudin 11 is expressed in oligodendrocytes and Sertoli cells, and claudin 5 is expressed in vascular endothelial cells. Claudins 2, 3, 4, 7, 8, 12 and 15 are present in epithelial cells throughout the segments of the intestinal tract. Claudin 7 also occurs in epithelial cells of the lung and kidney. Claudin-18 is expressed in the alveolar epithelial cells of the lung. Most types of claudins have more than two isoforms, which differ in size or function. The specific combination of these isoforms creates tight junction strands, for which occludin is not required. Occludin plays a role in selective regulation by incorporating itself into the claudin-based strands. The different proportions of claudin species in a cell give it specific barrier properties. Claudins also have a function in cell-adhesion signaling; for example, Cldn 7 binds directly to the adhesion molecule EpCAM on the cell membrane, and Cldn 16 is associated with reabsorption of divalent cations, because it is located in epithelial cells of the thick ascending loop of Henle. TJ proteins in myelin sheaths OSP/Claudin 11 OSP/Claudin 11 occurs in the myelin of nerve cells and between Sertoli cells, where it forms tight junctions in the CNS. This protein, in cooperation with the second loop of occludin, maintains the blood-testis barrier and spermatogenesis. PMP22/gas-3 PMP22/gas-3, called peripheral myelin protein, is located in the myelin sheath. The expression of this protein is associated with the differentiation of Schwann cells, the establishment of tight junctions in the Schwann cell membrane and the compact formation of myelin. It is also present in epithelial cells of the lungs and intestine, where it interacts with occludin and ZO-1, which together create the TJ in the epithelia. PMP22/gas-3 belongs to the epithelial membrane protein family (EMP1-3), which is involved in the growth and differentiation of cells. OAP-1/TSPAN-3 OAP-1/TSPAN-3 cooperates with β1-integrin and OSP/Claudin11 within myelin sheaths of oligodendrocytes, thereby affecting their proliferation and migration. Junctional adhesion molecules JAM Junctional adhesion molecules are divided into subgroups according to their composition and binding motif. 
JAMs are glycosylated transmembrane proteins classified in the immunoglobulin superfamily; they are formed by two extracellular Ig-like domains, a transmembrane region and a C-terminal cytoplasmic domain. Members of the JAM family can express two distinct binding motifs. The first subgroup, composed of JAM-A, JAM-B and JAM-C, has a PDZ-domain binding motif of type II at their C-termini, which interacts with the PDZ domains of ZO-1, AF-6, PAR-3 and MUPP1. JAM proteins are not part of tight junction strands, but they participate in signaling that leads to the adhesion of monocytes and neutrophils and their transmigration through the epithelium. JAMs in epithelial cells can aggregate with TJ strands, which are made of polymers of claudin and occludin. JAM-A maintains barrier properties in the endothelium and the epithelium, as do JAM-B and JAM-C in Sertoli cells and spermatids. The second subgroup, comprising the CAR, ESAM, CLMP and JAM4 proteins, contains a PDZ-domain binding motif of type I at their C-termini. CAR (coxsackie and adenovirus receptor) also belongs to the immunoglobulin superfamily, like the JAM proteins. CAR is expressed in the epithelia of the trachea, bronchi, kidney, liver and intestine, where it positively contributes to the barrier function of the tight junction. This protein mediates neutrophil migration, cell contacts and aggregation. It is necessary for embryonic heart development, especially for the organization of myofibrils in cardiomyocytes. CAR is associated with the PDZ-scaffolding proteins MAGI-1b, PICK, PSD-95, MUPP1 and LNX. ESAM (endothelial cell selective adhesion molecule) is an immunoglobulin-family transmembrane protein which influences the properties of the endothelial TJ. ESAM is present in endothelial cells and platelets but not in the epithelium or leukocytes. There, it binds directly to MAGI-1 molecules through the interaction of its C-terminal domain with the PDZ domain. This cooperation enables the formation of a large molecular complex at tight junctions in the endothelium. JAM4 is a member of the immunoglobulin superfamily of JAMs, but it expresses a PDZ-domain binding motif of class I (not of class II like JAM-A, -B and -C). JAM4 is situated in kidney glomeruli and the intestinal epithelium, where it cooperates with MAGI-1, ZO-1 and occludin and effectively regulates the permeability of these cells. JAM4 has cell adhesion activity, which is mediated by MAGI-1. Myelin Protein 0 Protein 0 is a major myelin protein of the peripheral nervous system, which associates with PMP22. Together they form and compact the myelin sheaths of nerve cells. Plaque proteins in the tight junction Plaque proteins are molecules that are required for the coordination of signals coming from the plasma membrane. About 30 different proteins are now known to be associated with the cytoplasmic side of the tight junction. One group of these proteins is involved in the organization of transmembrane proteins and their interaction with actin filaments. This PDZ-containing group is composed of ZO-1, ZO-2, ZO-3, AF-6, MAGI, MUPP1, PAR and PATJ, and the PDZ domain gives them a scaffolding function. PDZ domains are important for the clustering and anchoring of transmembrane proteins. One plaque protein without a PDZ domain, cingulin, interacts with this first group and plays a key role in cell adhesion. 
The second group of plaque proteins is involved in vesicular trafficking, barrier regulation and gene transcription, because some of them are transcription factors or proteins with nuclear functions. Members of this second group include ZONAB, Ral-A, Raf-1, PKC, symplekin and cingulin, among others. They are characterized by the lack of a PDZ domain. References Proteins Cell biology
Tight junction proteins
[ "Chemistry", "Biology" ]
2,492
[ "Biomolecules by chemical classification", "Proteins", "Cell biology", "Molecular biology" ]
66,182,787
https://en.wikipedia.org/wiki/Surface%20magnon%20polariton
Surface magnon-polaritons (SMPs) are a type of quasiparticle in condensed matter physics. They arise from the coupling of incident electromagnetic (EM) radiation to the magnetic dipole polarization in the surface layers of a solid. Surface magnon-polaritons are analogous to other forms of surface polaritons, such as those based on plasmons and phonons, but represent an oscillation of the magnetic component of the solid's EM field rather than its electric component or a mechanical oscillation in the solid's atomic structure. They are sometimes referred to as magnetic surface polaritons (MSPs). By employing artificially constructed metamaterials, whose properties mainly stem from their engineered internal fine structures rather than their bulk physical make-up, it is possible to more easily achieve useful SMPs. However, they can also be found in several natural magnetic materials, including at THz frequencies in antiferromagnetic crystals. Magnon-polaritons offer a way to control light-matter interactions at terahertz frequencies. References Quasiparticles Plasmonics
Surface magnon polariton
[ "Physics", "Chemistry", "Materials_science" ]
220
[ "Plasmonics", "Matter", "Materials science stubs", "Surface science", "Nanotechnology", "Condensed matter physics", "Quasiparticles", "Condensed matter stubs", "Solid state engineering", "Subatomic particles" ]
66,187,030
https://en.wikipedia.org/wiki/Arthur%20von%20Abramson
Arthur von Abramson (born 3 March 1854) was an Imperial Russian civil engineer. He was born to a Jewish family in Odessa, and was educated at the city's gymnasium. He studied mathematics at the University of Odessa, but left to take a course in civil engineering at the Zurich Polytechnikum, from which he was graduated in 1876. Returning to Russia in 1879, von Abramson passed the state examination at the Russian Imperial Institute of Roads and Communications, and was appointed one of the directors of the Russian state railway at Kiev. He devised, built, and managed the sewer system of Kiev, and constructed the street-railroad of that city. In 1881 he founded and became editor-in-chief of a technical monthly, Inzhener ('The Engineer'). He was appointed president of the local sewer company and director of the Kiev city railroad. Publications Published in English as References 1854 births Year of death unknown ETH Zurich alumni Civil engineers from the Russian Empire Editors from the Russian Empire Odesa Jews Print editors Railway civil engineers People from the Russian Empire in rail transport
Arthur von Abramson
[ "Engineering" ]
220
[ "Civil engineering", "Civil engineering stubs" ]
76,414,909
https://en.wikipedia.org/wiki/Immunoliposome%20therapy
Immunoliposome therapy is a targeted drug delivery method that involves the use of liposomes (artificial lipid bilayer vesicles) coupled with monoclonal antibodies to deliver therapeutic agents to specific sites or tissues in the body. The antibody-modified liposomes target tissue through cell-specific antibodies, with release of the drugs contained within the assimilated liposomes. Immunoliposome therapy aims to improve drug stability, personalize treatments, and increase drug efficacy. This form of therapy has been used to target specific cells, to protect the encapsulated drugs from degradation in order to enhance their stability, to facilitate sustained drug release, and hence to advance current traditional cancer treatment. History Alec D. Bangham discovered liposomes in the 1960s as spherical vesicles made of a phospholipid bilayer enclosing a hydrophilic core. Liposomes were then studied to uncover the properties of biological membranes, and between 1968 and 1975 a hydration method was developed to prepare artificial liposomes. Since then, multiple methods of preparing liposomes have been utilized and their physical and chemical characteristics have been studied. Monoclonal antibodies are proteins that bind to specific antigens that tag particular cells, and they can be synthesized in the laboratory. They were first generated in 1975 and have since advanced to being used for immunotherapy. Immunoliposomes were developed utilizing both of these components. The first anticancer drug made with this method was doxorubicin (DOX) in the 1990s. Composition and structure The core structure of immunoliposomes is a lipid bilayer. This lipid bilayer encloses a hydrophilic core, which provides stable encapsulation for a therapeutic payload. Common lipids used are phosphatidylcholine (PC), phosphatidylethanolamine (PE), and cholesterol. The lipid bilayer is surface-modified through conjugation with monoclonal antibodies for specific recognition of the target cells or tissues of interest. The core of the immunoliposome contains the therapeutic payload, which can be small drugs, nucleic acids, peptides, or imaging agents. There are often stabilizers and excipients present for formulation, stability, and functionality. Some include polyethylene glycol (PEG), antioxidants to prevent degradation of lipids, and buffering agents for optimal pH. Synthesis Immunoliposomes are created when antibodies are conjugated to liposomes. One way to do this is through covalent bonds between the antibody (or its fragment) and the lipid. Another way is through chemical modification of antibodies so they have a higher affinity for the liposome. “In general, the conjugation methodology is based on three main reactions; a reaction between activated carboxyl groups and amino groups which yields an amide bond, a reaction between pyridyl dithiols and thiols which yields disulfide bonds, and a reaction between maleimide derivatives and thiols, which yields thioether bonds.” Conjugation via carboxyl and amino residues Amine groups are found throughout an antibody and are used as a target due to their easy steric accessibility and modification. An overview of this reaction is found in Figure 2. Most often, amine groups found on lysine are covalently bonded to carboxyl groups of glutamic and aspartic acid on formed liposomes using certain agents. A two-step process is utilized, in which the first step uses 1-ethyl-3-[3-dimethylaminopropyl] carbodiimide (EDC) to create an amine-reactive product from the carboxyl group. 
This product is a target for nucleophilic attack by the amine, but it hydrolyzes quickly. As seen in Figure 2, the intermediate can either lead to the desired stable amide bond or revert to a carboxyl group. To create more of the desired carboxyl-amine bond, N-hydroxysulfosuccinimide (sulfo-NHS) is added to form another, more stable intermediate, an NHS ester. In the second step of this reaction, amine groups of the antibody covalently conjugate to the lipid, creating an amide bond via displacement of the sulfo-NHS group. This leads to the final product of an antibody conjugated to a liposome, creating an immunoliposome. This process is highly efficient and effective while maintaining the biological activity of the antibody. Conjugation via thiol group Another process for creating immunoliposomes uses a thiol group and creates a thioether bond. The sulfhydryl group is a key player and is found in cysteine bridges on proteins and in reagents like Traut's reagent, SATA, and sulfo-LC-SPDP. The reduction or hydrolysis of these groups generates thiol groups that enable antibody conjugation to lipids. There are multiple methods for this process, and one uses the crosslinking agent SATA, as shown in Figure 3. The ester end of SATA reacts with amino groups in proteins to form an amide link and a molecule with a protected sulfhydryl group. In order to continue the reaction, this group must be freed, which is done by adding hydroxylamine. The following step is to add a chemical that can act as an anchor between the lipid and the thiol group. Some examples of molecules that are capable of being this anchor are maleimide, iodoacetyl or 2-pyridyldithiol groups. Ultimately, these steps create an antibody-liposome conjugate that has been formulated using a thiol group. Mechanism of action Immunoliposomes function much like liposomes, with the added measure of monoclonal antibodies or their fragments conjugated to the liposomes. The use of antibodies allows for easy targeting, as they can recognize many different types of antigens. Diseased cells typically contain more antigens than healthy cells, which is how antibodies are able to appropriately target certain extracellular domains (depending on antigen overexpression) and kill diseased cells. Liposomal drug delivery combined with antibodies as a targeting ligand is what helps immunoliposomes function as an effective drug carrier. Once the immunoliposomes deliver the appropriate drugs to the targeted cells, the drugs can enter the cell through either selective uptake of liposomes by endocytosis or liposome release near the targeted cell. Because of antibody conjugation, cellular uptake is increased for immunoliposomes, allowing greater drug entry into diseased cells. To control when a drug is released, immunoliposomes are being developed that can sense stimuli. These stimuli can come from the microenvironment of a tumor, using factors such as reduced pH, temperature, and enzyme levels. External stimuli like light, heat, magnetic fields, or ultrasound can also act as a trigger for drug release. Immunoliposomes can target a wide variety of cells. These can be split into two main types, commonly known as the intravascular and extravascular space, as seen in Figure 4. Intravascular cells are more accessible during circulation and include erythrocytes, myeloid cells, lymphocytes, neutrophils, etc. Extravascular targets are located on tissue parenchymal or stromal cells. 
Because immunoliposomes have many antibody copies, they contain a higher avidity than just one antibody alone allowing for effective targeting against cancer cells and some drug resistant cells. Applications Immunoliposome applications use its ability to act as a drug delivery system and release specific drug components to target cells. This mechanism can be specifically highlighted in cancer cell targeting and through nutrient delivery systems. Cancer cell targeting The most common use of immunoliposomes is to target cancer cells using different antibodies. Folate receptors and transferrin receptors are typically overexpressed on cancer cells, so immunoliposomes will target these corresponding ligands. Folate receptors dictate tumor cell specificity and have been seen to be expressed in multiple inflammatory diseases including psoriasis, Crohn’s disease, atherosclerosis, and rheumatoid arthritis making folate-targeted immunoliposomes an efficient drug carrier to deliver antiinflammatory drugs. Transferrin receptors help with the iron demand in proliferating cancer cells and allow for formation of transferrin receptor-targeted anticancer therapies. EGFR (epidermal growth factor) is a tyrosine kinase receptor overexpressed in solid tumors such as colorectal, non small-cell lung cancer, squamous cell carcinoma, and breast cancer making it another target receptor for immunoliposomes. Some cancers create tumors that have multiple different receptors being overexpressed or utilize cancer stem cells, which allow for differentiation of numerous cancer types, so to combat this, dual-targeting immunoliposomes are being created to target multiple ligands and increase therapeutic efficacy. A study provides a promising preclinical demonstration of the effectiveness and ease of preparation of Valrubicin-loaded immunoliposomes (Val-ILs) as a novel nanoparticle technology. In the context of hematological cancers, Val-ILs have the potential to be used as a precise and effective therapy based on targeted vesicle-mediated cell death. Nutrient delivery system Immunoliposomes can also be used as nutrient delivery systems to help stimulate brain activity. The effective transport of certain nutrients to the hypothalamus in order to regulate brain activity is currently a huge problem. The leptin gene is used to regulate feedback loops and send signals from the adipose tissue to the hypothalamus. Using this physiological function of leptin, immunoliposome nutrient delivery systems can be integrated into the body to help with nutrition transport to the brain as seen in Figure 5. Transferrin receptors have high expression at the BBB (blood brain barrier) and can be used as targets for immunoliposomes to transport p-glycoprotein substances. Advantage and limitations Immunoliposomes have many possible applications as described above and have certain advantages in new research and ideas. Some advantages being researched include immunoliposomes targeting specific molecules in the body. In preclinical testing, they can be environmentally responsive to specific conditions of temperature, pH, enzymes, redox reactions, magnetic energy, and light to release drugs. This conditional ability allows immunoliposomes to focus on specific target areas which can be beneficial for drug delivery. Increased targeting allows the possibility for decreased systemic toxicity while increasing drug concentration at a certain site. Even with this advantage of immunoliposomes, there are some challenges in their application. 
The success of immunoliposomes during in vivo testing has been shown by multiple groups, including Meeroekyai et al. (2023), and animal testing has been successful in groups such as Refaat et al. (2022). However, they struggle to advance through higher-level and clinical testing. This challenge stems from the variability and incomplete understanding of tumors, from pharmacokinetics, and from the large-scale production of immunoliposomes. For example, tumors vary but typically have increased vascular permeability and decreased lymphatic drainage, which lead to the EPR effect. EPR is the enhanced permeability and retention effect on which drug carriers depend, but the effect and environment can vary in solid tumors. This varying environment makes it hard to predict how the immunoliposome acts and to quantify its pharmacokinetics. Additionally, there is a concern that the animal models used in preclinical testing will not reflect the same effects in humans; because of this, despite any preclinical success, there is hesitancy to test in humans due to unknown risks. For environmentally responsive immunoliposomes, more modification and purification steps are required to produce the final product. This increase in the complexity of immunoliposomes and their behavior also increases costs. Another challenge to marketability and clinical research is the difficulty of scaling up the production of immunoliposomes. The procedures and the use of small quantities in the laboratory make upscaling the production a challenge that has not been a focus. Research Liposomal medicine research for cancer therapy has increased over the years as an alternative to conventional cancer treatment. There is interest in liposomal medicine because it features targeted drug delivery while mitigating damage to healthy cells and tissues. One of the combination products under liposome therapy that is being researched for cancer therapy applications is immunoliposome therapy. Other research areas in liposome combination therapy include photodynamic therapy, photothermal agents, radiotherapy, and gas therapy agents. One completed immunoliposome therapy clinical study was conducted by the Swiss Group for Clinical Cancer Research from 2006 to 2009. The study was a phase II clinical trial that looked at the combination of commercially sold doxorubicin with bevacizumab, a monoclonal antibody that blocks tumor growth. The therapy was used to treat patients with locally recurrent or metastatic breast cancer. Out of the 43 patients, 16 had grade 3 palmar-plantar erythrodysesthesia, one had grade 3 mucositis, and one had severe cardiotoxicity, according to the study. As a result, the combination therapy demonstrated higher than anticipated toxicity while having only a modest therapeutic effect. These results indicate that, although immunoliposome therapy has promise, more research is still needed before it can be translated into commercial products. Commercialization There are several liposome medicines currently available commercially, which helps set the regulatory pathway for immunoliposome therapies. As immunoliposome therapy has progressed in research, big market players in pharmaceutical research and manufacturing have invested in the development of these therapies. A relevant example of this is a phase I/II trial that examined the effectiveness of PDS0101 in combination with pembrolizumab, an immune checkpoint inhibitor (sold under the brand name Keytruda). The study is funded by PDS Biotechnology in partnership with Merck. 
The purpose of the study is to determine the effectiveness of PDS0101 + pembrolizumab in shrinking tumors in patients with virus-related oropharyngeal cancer tumors in humans. PDS0101 is a peptide-based vaccine that aids in the immune response to kill tumor cells. The study also relies on pembrolizumab monoclonal antibodies to help the body's immune system attack the cancer and interfere with the spread of tumor cells. Although immunoliposome therapy exhibits clinical and commercial promise, there are several known challenges in the translation from laboratory studies to clinical studies and ultimately to commercialization. One obstacle is that immunoliposome therapy is limited by having a short half-life and retention time once it reaches the tumor microenvironment. Additionally, immunoliposome therapies are often individualized which requires close clinical monitoring and comprehensive evaluation methods. From a biochemical perspective, other challenges that immunoliposome therapies face are drug instability due to the phospholipid bilayer and the known possibility for hepatotoxicity. From a manufacturing perspective, designing liposome drug delivery systems at an industrial scale can present a challenge due to the complexity of these drug release mechanisms and their related biosafety. Similar approaches Though immunoliposomes serve as a possible advancement, there are other therapies similar to it that trail on the role of targeted drug delivery systems. One example of such therapy is Immune Polymeric nanoparticles, which are similar to liposomes but consist of small particles composed of biodegradable polymers. These nanoparticles similarly encapsulate drugs and can function to enhance specificity towards targeted diseased cells with peptide ligands. Another type is Targeting Antibody Drug Conjugates, which combine monoclonal antibodies with the cytotoxicity of chemotherapy drugs. This specific type is catered towards cancer cells expressing a specific target antigen. They are well-tolerated by the body as they are biodegradable, eliminating many potential toxicity factors, and proving to be a possible new model for therapeutics. References Drug delivery devices Liposomally encapsulated antineoplastic agents
Immunoliposome therapy
[ "Chemistry" ]
3,482
[ "Pharmacology", "Drug delivery devices" ]
76,415,171
https://en.wikipedia.org/wiki/Waste%20input-output%20model
The Waste Input-Output (WIO) model is an innovative extension of the environmentally extended input-output (EEIO) model. It enhances the traditional Input-Output (IO) model by incorporating physical waste flows generated and treated alongside monetary flows of products and services. In a WIO model, each waste flow is traced from its generation to its treatment, facilitated by an allocation matrix. Additionally, the model accounts for the transformation of waste during treatment into secondary waste and residues, as well as recycling and final disposal processes. By including the end-of-life (EoL) stage of products, the WIO model enables a comprehensive consideration of the entire product life cycle, encompassing production, use, and disposal stages within the IO analysis framework. As such, it serves as a valuable tool for life cycle assessment (LCA). Background With growing concerns about environmental issues, the EEIO model evolved from the conventional IO model by appending environmental factors such as resources, emissions, and waste. The standard EEIO model, which includes the economic input-output life-cycle assessment (EIO-LCA) model, can be formally expressed as E = B(I − A)^{-1} y. Here, A represents the square matrix of input coefficients, B denotes releases (such as emissions or waste) per unit of output (the intervention matrix), y stands for the vector of final demand (or functional unit), I is the identity matrix, and E represents the resulting releases (for further details, refer to the input-output model). A model in which B represents the generation of waste per unit of output is known as a Waste Extended IO (WEIO) model. In this model, waste generation is included as a satellite account. However, this formulation, while well-suited for handling emissions or resource use, encounters challenges when dealing with waste. It overlooks the crucial point that waste typically undergoes treatment before recycling or final disposal, leading to a form less harmful to the environment. Additionally, the treatment of emissions results in residues that require proper handling for recycling or final disposal (for instance, the pollution abatement process of sulfur dioxide involves its conversion into gypsum or sulfuric acid). Leontief's pioneering pollution abatement IO model did not address this aspect, whereas Duchin later incorporated it in a simplified illustrative case of wastewater treatment. In waste management, it is common for various treatment methods to be applicable to a single type of waste. For instance, organic waste might undergo landfilling, incineration, gasification, or composting. Conversely, a single treatment process may be suitable for various types of waste; for example, solid waste of any type can typically be disposed of in a landfill. Formally, this implies that there is no one-to-one correspondence between treatment methods and types of waste. A theoretical drawback of the Leontief-Duchin EEIO model is that it considers only cases where this one-to-one correspondence between treatment methods and types of waste applies, which makes the model difficult to apply to real waste management issues. The WIO model addresses this weakness by introducing a general mapping between treatment methods and types of waste, establishing a highly adaptable link between waste and treatment. This results in a model that is applicable to a wide range of real waste management issues.
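As a minimal illustration of the EEIO calculation above, the following sketch computes the releases induced by final demand for a hypothetical two-sector economy. All numbers, and the Python/NumPy formulation itself, are illustrative assumptions rather than material from the literature.

import numpy as np

# Hypothetical two-sector economy (all values are illustrative assumptions)
A = np.array([[0.2, 0.3],
              [0.1, 0.4]])        # input coefficients: inputs required per unit of output
B = np.array([[0.5, 1.2]])        # releases (e.g. kg CO2) per unit of output: the intervention matrix
y = np.array([100.0, 50.0])       # final demand for the two products

x = np.linalg.solve(np.eye(2) - A, y)   # total output x = (I - A)^(-1) y
E = B @ x                               # releases induced by the final demand y

print("output by sector:", x)
print("induced releases:", E)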
The Methodology We describe below the major features of the WIO model in its relationship to the Leontief-Duchin EEIO model, starting with notations. Let there be producing sectors (each producing a single primary product), waste treatment sectors, and waste categories. Now, let's define the matrices and variables: : an matrix representing the flow of products among producing sectors. : an matrix representing the generation of wastes from producing sectors. Typical examples include animal waste from livestock, slag from steel mills, sludge from paper mills and the chemical industry, and meal scrap from manufacturing processes. : an matrix representing the use (recycling) of wastes by producing sectors. Typical examples include the use of animal waste in fertilizer production and iron scrap in steel production based on an electric arc furnace. : an matrix representing the net flow of wastes: . : an matrix representing the flow of products in waste treatment sectors. : an matrix representing the net generation of (secondary) waste in waste treatment sectors: ( and are defined similarly to and ). Typical examples of include ashes generated from incineration processes, sludge produced during wastewater treatment, and residues derived from automobile shredding facilities. : an vector representing the final demand for products. : an vector representing the generation of waste from final demand sectors, such as the generation of kitchen waste and end-of-life consumer appliances. : an vector representing the quantity of products produced. : an vector representing the quantity of waste for treatment. It is important to note that variables with or pertain to conventional components found in an IO table and are measured in monetary units. Conversely, variables with or typically do not appear explicitly in an IO table and are measured in physical units. The balance of goods and waste Using the notations introduced above, we can represent the supply and demand balance between products and waste for treatment by the following system of equations: Here, denotes a vector of ones () used for summing the rows of , and similar definitions apply to other terms. The first line pertains to the standard balance of goods and services, with the left-hand side referring to demand and the right-hand side to supply. Similarly, the second line refers to the balance of waste, where the left-hand side signifies the generation of waste for treatment, and the right-hand side denotes the waste designated for treatment. It is important to note that increased recycling reduces the amount of waste for treatment . The IO model with waste and waste treatment We now define the input coefficient matrices and waste generation coefficients as follows Here, refers to a diagonal matrix where the element is the -th element of a vector . Using and as derived above, the balance () can be represented as: This equation () represents the Duchin-Leontief environmental IO model, an extension of the original Leontief model of pollution abatement to account for the generation of secondary waste. It is important to note that this system of equations is generally unsolvable due to the presence of on the left-hand side and on the right-hand side, resulting in asymmetry. This asymmetry poses a challenge for solving the equation. However, the Duchin-Leontief environmental IO model addresses this issue by introducing a simplifying assumption: This assumption () implies that a single treatment sector exclusively treats each waste.
For instance, waste plastics are either landfilled or incinerated but not both simultaneously. While this assumption simplifies the model and enhances computational feasibility, it may not fully capture the complexities of real-world waste management scenarios. In reality, various treatment methods can be applied to a given waste; for example, organic waste might be landfilled, incinerated, or composted. Therefore, while the assumption facilitates computational tractability, it might oversimplify the actual waste management processes. The WIO model Nakamura and Kondo addressed the above problem by introducing the allocation matrix of order that assigns waste to treatment processes: Here, the element of represents the proportion of waste treated by treatment . Since waste must be treated in some manner (even if illegally dumped, which can be considered a form of treatment), we have: Here, stands for the transpose operator. Note that the allocation matrix is essential for deriving from . The simplifying condition () corresponds to the special case where and is a unit matrix. The table below gives an example of for seven waste types and three treatment processes. Note that represents the allocation of waste for treatment, that is, the portion of waste that is not recycled. The application of the allocation matrix transforms equation () into the following form: Note that, different from (), the variable occurs on both sides of the equation. This system of equations is thus solvable (provided the inverse exists), with the solution given by: The WIO counterpart of the standard EEIO model of emissions, represented by equation (), can be formulated as follows: Here, represents emissions per output from production sectors, and denotes emissions from waste treatment sectors. Upon comparison of equation () with equation (), it becomes clear that the former expands upon the latter by incorporating factors related to waste and waste treatment. Finally, the amount of waste for treatment induced by the final demand sector can be given by: The Supply and Use Extension (WIO-SUT) In the WIO model (), waste flows are categorized based solely on treatment method, without considering the waste type. Manfred Lenzen addressed this limitation by allowing both waste by type and waste by treatment method to be presented together in a single representation within a supply-and-use framework. This extension of the WIO framework, given below, results in a symmetric WIO model that does not require the conversion of waste flows into treatment flows. It is worth noting that despite the seemingly different forms of the two models, the Leontief inverse matrices of WIO and WIO-SUT are equivalent. The WIO Cost and Price Model Let's denote by , , , and the vector of product prices, waste treatment prices, value-added ratios of products, and value-added ratios of waste treatments, respectively. The case without waste recycling In the absence of recycling, the cost counterpart of equation () becomes: which can be solved for and as: The case with waste recycling When there is recycling of waste, the simple representation given by equation () must be extended to include the rate of recycling and the price of waste : Here, is the vector of waste prices, is the diagonal matrix of the vector of the average waste recycling rates, , and ( and are defined in a similar fashion). Rebitzer and Nakamura used () to assess the life-cycle cost of washing machines under alternative End-of-Life scenarios. More recently, Liao et al. applied () to assess the economic effects of recycling copper waste domestically in Taiwan, amid the country's consideration of establishing a copper refinery to meet increasing demand.
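In compact form, the quantity side of the WIO model described above can be summarized as follows. The notation is illustrative: the block symbols below are chosen here to match the verbal definitions given earlier rather than any particular published typography, so this should be read as a sketch of the model's structure.

\[
\begin{pmatrix} x \\ w \end{pmatrix}
=
\begin{pmatrix} A_{\mathrm{I}} & A_{\mathrm{II}} \\ S G_{\mathrm{I}} & S G_{\mathrm{II}} \end{pmatrix}
\begin{pmatrix} x \\ w \end{pmatrix}
+
\begin{pmatrix} f \\ S w_{f} \end{pmatrix},
\qquad
\begin{pmatrix} x \\ w \end{pmatrix}
=
\left( I -
\begin{pmatrix} A_{\mathrm{I}} & A_{\mathrm{II}} \\ S G_{\mathrm{I}} & S G_{\mathrm{II}} \end{pmatrix}
\right)^{-1}
\begin{pmatrix} f \\ S w_{f} \end{pmatrix}
\]

Here x is the vector of product outputs, w the vector of waste for treatment, A_I and A_II the input coefficients of producing and waste treatment sectors, G_I and G_II the net waste-generation coefficients of those sectors, S the allocation matrix assigning waste types to treatment methods, f the final demand for products, and w_f the waste generated by final demand. Induced emissions then follow by applying the emission intensities of producing and treatment sectors to x and w.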
A caution about possible changes in the input-output coefficients of treatment processes when the composition of waste changes The input-output relationships of waste treatment processes are often closely linked to the chemical properties of the treated waste, particularly in incineration processes. The amount of recoverable heat, and thus the potential heat supply for external uses, including power generation, depends on the heat value of the waste. This heat value is strongly influenced by the waste's composition. Therefore, any change in the composition of waste can significantly impact and . To address this aspect of waste treatment, especially in incineration, Nakamura and Kondo recommended using engineering information about the relevant treatment processes. They suggest solving the entire model iteratively, which consists of the WIO model and a systems engineering model that incorporates the engineering information. Alternatively, Tisserant et al. proposed addressing this issue by distinguishing each waste by its treatment processes. They suggest transforming the rectangular waste flow matrix () not into an matrix as done by Nakamura and Kondo, but into an matrix. The details of each column element were obtained based on the literature. WIO tables and applications Waste footprint studies The MOE-WIO table for Japan The WIO table compiled by the Japanese Ministry of the Environment (MOE) for the year 2011 stands as the only publicly accessible WIO table developed by a governmental body thus far. This MOE-WIO table distinguishes 80 production sectors, 10 waste treatment sectors, 99 waste categories, and encompasses 7 greenhouse gases (GHGs). The MOE-WIO table is publicly available online. Equation () can be used to assess the waste footprint of products or the amount of waste embodied in a product in its supply chain. Applied to the MOE-WIO, it was found that public construction significantly contributes to reducing construction waste, which mainly originates from building construction and civil engineering sectors. Additionally, public construction is the primary user (recycler) of slag and glass scrap. Regarding waste plastics, findings indicate that the majority of plastic waste originates not from direct household discharge but from various production sectors such as medical services, commerce, construction, personal services, food production, passenger motor vehicles, and real estate. Other studies Many researchers have independently created their own WIO datasets and utilized them for various applications, encompassing different geographical scales and process complexities. Here, we provide a brief overview of a selection of them. End-of-Life electrical and electronic appliances Kondo and Nakamura assessed the environmental and economic impacts of various life-cycle strategies for electrical appliances using the WIO-table they developed for Japan for the year 1995. This dataset encompassed 80 industrial sectors, 5 treatment processes, and 36 types of waste. The assessment was based on Equation (). The strategies examined included disposal to a landfill, conventional recycling, intensive recycling employing advanced sorting technology, extension of product life, and extension of product life with functional upgrading.
Their analysis revealed that intensive recycling outperformed landfilling and simple shredding in reducing final waste disposal and other impacts, including carbon emissions. Furthermore, they found that extending the product life significantly decreased environmental impact without negatively affecting economic activity and employment, provided that the reduction in spending on new purchases was balanced by increased expenditure on repair and maintenance. General and hazardous industrial waste Using detailed data on industrial waste, including 196 types of general industrial waste and 157 types of hazardous industrial waste, Liao et al. analyzed the final demand footprint of industrial waste in Taiwan across various final demand categories. Their analysis revealed significant variations in waste footprints among different final demand categories. For example, over 90% of the generation of "Waste acidic etchants" and "Copper and copper compounds" was attributed to exports. Conversely, items like "Waste lees, wine meal, and alcohol mash" and "Pulp and paper sludge" were predominantly associated with household activities. Global waste flows Tisserant et al. developed a WIO model of the global economy by constructing a harmonized multiregional solid waste account that covered 48 world regions, 163 production sectors, 11 types of solid waste, and 12 waste treatment processes for the year 2007. Russia was found to be the largest generator of waste, followed by China, the US, the larger Western European economies, and Japan. Decision Analytic Extension Based on Linear Programming (LP) Kondo and Nakamura applied linear programming (LP) methodology to extend the WIO model, resulting in the development of a decision analytic extension known as the WIO-LP model. The application of LP to the IO model has a well-established history. This model was applied to explore alternative treatment processes for end-of-life home electric and electronic appliances, aiming to identify the optimal combination of treatment processes to achieve specific objectives, such as minimization of carbon emissions or landfill waste. Lin applied this methodology to the regional Input-Output (IO) table for Tokyo, augmented to incorporate wastewater flows and treatment processes, and identified trade-off relationships between water quality and carbon emissions. A similar method was also employed to assess the environmental impacts of alternative treatment processes for waste plastics in China. See also References External links WIO table compiled by the Japanese Ministry of the Environment Industrial ecology Ecological economics Mathematical and quantitative methods (economics) Economics models Economic planning Environmental social science concepts Waste management
Waste input-output model
[ "Chemistry", "Engineering", "Environmental_science" ]
3,161
[ "Industrial engineering", "Environmental social science concepts", "Environmental engineering", "Industrial ecology", "Environmental social science" ]
76,419,161
https://en.wikipedia.org/wiki/Spiroketals
In chemistry, spiroketals are structural motifs composed of two heterocycles sharing one central carbon, which makes them a subclass of spiro compound. Their structural specificity lies in the presence of one oxygen atom in each ring, alpha to the spiro carbon. Although there are no rules about the size of each ring, the most widely encountered spiroketals are composed of five- and six-membered rings. Occurrence in nature Many natural products of biological interest contain [6,5]- and [6,6]-spiroketal moieties that can adopt various configurations. The first examples of spiroketals in the literature appeared before 1970, such as the triterpenoid saponins and sapogenins. Several later works described the presence of spiroketals in various compounds, such as the diarrheic shellfish poisoning (DSP) class of toxins, which includes okadaic acid and acanthifolicin. Among the most notable naturally occurring spiroketals are the whole range of fruit fly pheromones. Pharmacology interest Due to its non-planar substructure, the spiroketal motif has gained interest in academic and industrial pharmaceutical research, both in structure-based drug design (SBDD) and in the development of screening libraries. Avermectins are antiparasitic drugs isolated from a soil microorganism. The avermectins appear to paralyze nematodes and arthropods by potentiating the presynaptic release of gamma-aminobutyric acid, thereby blocking post-synaptic transmission of nerve impulses. Tofogliflozin is an inhibitor of human sodium glucose cotransporter 2 (hSGLT2) and was approved in 2014 in Japan for the treatment of Type 2 diabetes. Chemical synthesis Acid catalyzed spiroketalisation The most widely employed method to close the spiroketal ring consists of hydrolysis of the dihydroxy ketal under acidic conditions, but this method does not grant stereocontrol. Thus, several miscellaneous methods have emerged in order to control the stereoselectivity of the spirocyclisation. Notes References Heterocyclic compounds
Spiroketals
[ "Chemistry" ]
462
[ "Organic compounds", "Heterocyclic compounds" ]
76,419,906
https://en.wikipedia.org/wiki/Magnetic%20nanoparticles%20in%20drug%20delivery
Magnetic nanoparticle drug delivery is the use of external or internal magnets to increase the accumulation of therapeutic elements contained in nanoparticles to fight pathologies in specific parts of the body. It has been applied in cancer treatments, cardiovascular diseases, and diabetes. Scientific research has revealed that magnetic drug delivery can be made increasingly useful in clinical settings. Background The development of magnetic nanoparticle drug delivery started with Paul Ehrlich's concept of a "magic bullet". The concept was built upon during the 1970s with the application of the anticancer drug doxorubicin in animal models. The first successful clinical trial of the process occurred in 1996. The use of magnetic nanoparticles for drug delivery results in the accumulation of therapeutic elements at a disease site to increase their therapeutic effects as well as limit side-effects at non-target loci. Many factors influence accumulation, including blood circulation, adherence of therapeutic elements, diffusion of therapeutic elements, and the body's response to increased concentrations of these particles. Tumor hypoxia is one of the largest challenges regarding cancer drug delivery as tumors grow faster than vasculature, making initial targeting increasingly important in treatment. This tumor environment drives considerable attention towards magnetic nanoparticles as treatment modalities allowing faster and more efficient delivery of drugs and treatment. Fundamentally, experiments with pulsatile artificial capillaries made to mimic blood flow show that the flow force of the capillaries inhibits accumulation of nanoparticles on a downstream magnet, but the magnetic force of an upstream magnet overcomes the force of flow, resulting in larger accumulation. As a result, for near-surface disease states, magnets should be placed downstream of the disease locus, and for intra-surface disease states, magnets should be placed upstream of the disease locus to maximize accumulation. Properties Magnetic nanoparticles for therapeutic applications are selected based on their properties, which are determined by the nanoparticle composition and can be divided into three main groups: metal-only, metal alloy, or metal oxide nanoparticles. Some key properties of magnetic nanoparticles include a large specific surface area, desirable biocompatibility, presence without causing disease or eliciting immune response, and superparamagnetism. Magnetic nanoparticles are influenced by an external magnetic field due to the magnetic moment found within the network unit. The external magnetic field is necessary for transport and activation of these nanoparticles. Therefore, when a drug is attached to or encased in magnetic nanoparticles, these particles can be targeted using an external magnetic field to guide and concentrate the drug at the desired disease locus. Design of magnetic nanoparticles for clinical application requires careful evaluation of the effects of surface modification, size, and shape on their magnetic properties. Ferromagnetic properties of nanoparticles have been used in magnetic drug delivery systems. This is important because ferromagnetic materials retain their magnetization after an applied magnetic field is removed. Such materials include iron, cobalt, and nickel; because these elements retain their magnetic properties when a magnet is removed, they accumulate on permanent magnets. Iron oxides, such as Fe₂O₃ and Fe₃O₄ in particular, play a key role in magnetic nanoparticle drug delivery.
The particle sizes typically range from 3 nm to 30 nm. Overall, these iron oxides display good magnetic properties, lower toxicity, and high stability against degradation. For example, a Fe3−δO4 core-shell structure is used as a carrier for drug delivery. The designed magnetic nanoparticle-based structure displayed biocompatibility, the formation of a covalent bond between the carrier and drug, and glutathione-responsive drug release, which prevents early drug release and increases bioavailability. Furthermore, the presence of magnetic nanoparticles in this drug delivery method allows for its response to external magnetic fields for functionalization. The combination of superparamagnetic iron oxide (SPIO) and polyethylene glycol (PEG) used as a drug carrier for doxorubicin is influenced by external magnetism. In vivo, SPIO-PEG-D under a magnetic field leads to greater tumor accumulation of therapeutic elements, smaller tumor size, and reduced cardiotoxicity and hepatotoxicity. Doxorubicin is known for being extremely toxic, and SPIO-PEG shows potential for use as a nanoparticle carrier for reduced toxicity in the periphery. Magnetic nanoparticle coating Coating defines the biocompatibility of the therapeutic agent and its ability to travel in the body. When the agent is not biocompatible, it will quickly be excreted from the body, and there will be magnetic accumulation or off-target therapeutic effects. The use of organic or inorganic coating molecules increases the half-life of the nanocarrier by delaying its clearance by the reticuloendothelial system (RES). This delay occurs because the coating overcomes the pH, hydrophobicity, and surface charge of the magnetic nanoparticles. Additionally, coating allows molecules to covalently bind to specific molecules, such as ligands, proteins, or antibodies, which provides binding specificity to target tissues. A common structure of coating is the core-shell structure. In this structure, metal oxide cores are coated with biocompatible materials, which allows for increased control and biocompatibility. The most common coatings used for optimum response involve the use of polysaccharides like dextran and polymers like polyethylene glycol. Furthermore, carbon coatings have proved to be biocompatible and have high capacity for absorption into cells. Even polyaniline with the anti-cancer agent epirubicin can be used for tumor exploration of the brain. Polyethyleneimine has displayed high cellular accumulation and low toxicity. This coating was found to have poor pharmacokinetic properties when used alone, but with magnetic field induction, it was found to accumulate on tumors at clinically significant rates. Silica coatings increase the external surface area to assist in binding and are heat resistant. There are various coatings used to prevent leaching of the magnetic core of the nanoparticles; these coatings have a significant salt concentration with a slightly alkaline (basic) pH. Polyethylene glycol (PEG) is an example of a hydrophilic coating that has been used as a biocompatible targeting modality. Hydrophilic PEG interacts beneficially with the physiological environment to improve biocompatibility by preventing opsonization on the surface of the particles, thus increasing circulation time from minutes to hours, or even days, for magnetic nanoparticles. MRI shows prolonged PEG circulation and increased SPIO-PEG-D particle accumulation within the tumor with magnetic guidance.
Coating not only provides hydrophilic and hydrophobic properties but can also contribute to temperature- and pH-dependent properties. Particular substances, such as PNG, provide these two properties, allowing unique and efficient delivery of drugs. This also enables greater control of release, as body temperature allows a greater amount of drug to be released, while physiological pH allows a lower amount to be released. Other coating options for similar pH-dependent properties include the hydrogel chitosan crosslinked to a polymer coating. These coating choices have displayed positive results in delivery of anticancer drugs. Impact The small sizes of magnetic nanoparticles allow them to target a variety of targets of different sizes for different purposes. These targets range from a small cell (10-100 μm) to a virus (20-45 nm), a protein (5-50 nm), or a gene (2 nm wide and 10-100 nm long). If these magnetic nanoparticles are coated correctly, they can interact with and enter body structures, allowing adequate delivery of a drug. Additionally, using magnetic nanoparticles in drug delivery offers remote control capability. This occurs through the external magnetic field gradient that is associated with the magnetic field's permeability within human tissue. With the application of this remote control, accumulation and transfer of the magnetic nanoparticles is promoted, which has been especially useful in the delivery of anticancer drugs to specific tumor tissues. Another advantage of drug delivery using magnetic nanoparticles is the ability to personalize magnet placement depending on the location of the disease state. While this may also be a limitation, it can be effective if resources allow for personally tailored medicine. Additionally, a major advantage of magnetic nanoparticles is that they can be visualized with ultrasound and/or MRI imaging. Increased cellular uptake of SPIO-PEG-D was linked to distinguishably darker regions on MRI and increased tumor visibility. Limitations Limitations of magnetic drug delivery range from the particles' inherent magnetic properties to their interactions with bodily barriers. When magnetic nanoparticles are in the bloodstream, they have high solubility and ionic strength, allowing them to interact with plasma proteins, stimulating the immune system to further inhibit their function. Additionally, the proportion of the nanoparticle size to the target tissue has shown limitations in effective drug delivery, especially in the kidneys and the brain. Intracellular barriers include the removal of the magnetic nanoparticles from the target membrane by ligand-dependent endocytosis followed by separation via acidification in the endosome chamber. Other barriers to consider are the depth of the target tissue, vascular sources, body weight, the speed and amount of blood flow to the target tissue, distance from the field source, injection route, and tumor volume. However, the use of magnetic nanoparticles is more effective in near-surface tissues that have slower blood flow, allowing for diffusion and/or endocytosis of nanoparticles into the tissue. Another limitation involves the accumulation of nanoparticles only 5 mm away from an external magnet. An accumulation distance of 5 mm may not be sufficient in larger applications of magnetic drug delivery.
This may be effective enough for sites in closer proximity to the surface of the body, but when the site of interest is deeper within tissue, the advantage of using magnetic nanoparticles for delivery decreases exponentially. It has been proposed to implant magnets within the body to overcome this limitation. Magnet placement can be upstream or downstream of the disease location for maximum accumulation. Another question arises from the subject of cellular uptake. While the use of a magnetic field may guide particles to therapeutic sites, it is not an indicator of cellular uptake of particles. This raises questions regarding the effect, if any, of the therapeutic element on the effector area. Generally, nanoparticles efficiently cross cell barriers; however, this can change in the presence of other processes. Promise has been shown with the use of PEG coating. Hydrophilic coatings have shown enhanced cellular uptake at tumor cells with the use of a magnetic field. Another concern arises regarding the biotoxicity of magnetic nanoparticles. It is difficult to say for certain that all magnetic nanoparticles are toxic due to the large variety of magnetic particles that can be used. The nanoparticles' size, biodegradability, composition, and dosage are a few of the properties bearing on this concern. However, it has been shown that magnetic nanoparticles that are either inhaled to enter the lungs or are swallowed and enter the gastrointestinal tract have adverse impacts on the body. PEG (a linear, neutral polyether) coatings have a tendency to lose their targeting capability as a result of their "immune stealthing" function. Applications Most magnetic nanoparticle applications in clinical settings are used for cancer therapies. Magnetic nanoparticles have the ability to target the specific locus of the tumor, use a decreased amount of drug to treat the tumor, and result in decreased off-target effects of the drug. The most common method of introducing magnetic nanoparticles into the body is through intravenous injection; from the site of injection, the nanoparticles travel through the bloodstream. They eventually migrate to the target site with the use of external or implanted magnetic forces. A pH/magnetic-field dual-responsive drug-loaded nanomicelle was developed for targeted magnetothermal synergistic chemotherapy of cancer. In this drug delivery system, after the drug reaches the target site and tumor cell uptake is complete, an external magnetic field is applied, causing a magnetothermal effect, raising the tumor cells' temperature and further promoting drug uptake. This nanocarrier system aims to improve drug stability, control drug release, and improve tumor targeting efficacy. This approach has shown increased treatment efficacy over traditional chemotherapy and has not demonstrated any noticeable biotoxicity. Cardiovascular disease treatment presents another application of magnetic nanoparticle drug delivery. Atherosclerotic cardiovascular disease is a buildup of plaque in the inner lining of the arteries, and there are models of how magnetic nanoparticle drug delivery could be used as a treatment. However, there have not been any in vivo or in vitro studies of magnetic nanoparticles being used to deliver drugs to the arteries to effectively reduce inflammation. Other potential applications of magnetic nanoparticles are brain imaging and drug delivery past the blood-brain barrier (BBB) using biodegradable magnetic iron oxide nanoparticles.
The scope of this application is the treatment of central nervous system (CNS) disorders by functioning as contrast agents and drug carriers. To cross the BBB, these nanoparticles are designed with specificity for the BBB; this is achieved by engrafting the surface of the nanoparticles with ligands, antibodies, small molecules, cell-penetrating peptides, or conjugated RNA that target specific receptors situated along the BBB in order to facilitate entry. As opposed to methods of drug delivery that result in drugs being removed from the cerebrospinal fluid (CSF) or being degraded, magnetic nanoparticle delivery presents an opportunity to protect therapeutics as well as encourage more efficient delivery following the introduction of the nanoparticles. Magnetic nanoparticles can also be used in conjunction with imaging modalities like ultrasound to improve imaging. The use of nanoparticles in ophthalmic drug delivery is also being explored in clinical research. Magnetic nanoparticles inserted into rats' corneas or administered in an eye drop solution showed high adhesion to the target site. However, the exact mechanism by which the adhesion occurred is still being researched. When the rats were exposed to a bacterial substance that should induce keratitis of the cornea, the amount of inflammation in the treatment group of rats (which received the eye drops after exposure) was inhibited. Magnetic nanoparticles have also been used in hyperthermic therapy of cancer, cell purification, biosensing, and immunocytochemical tests. References Drug delivery devices Nanotechnology
Magnetic nanoparticles in drug delivery
[ "Chemistry", "Materials_science" ]
3,097
[ "Nanomedicine", "Pharmacology", "Drug delivery devices", "Nanotechnology" ]
76,421,491
https://en.wikipedia.org/wiki/Deruxtecan
Deruxtecan is a chemical compound and a derivative of exatecan that acts as a topoisomerase I inhibitor. It is available linked to specific monoclonal antibodies (as antibody–drug conjugates), such as: Trastuzumab deruxtecan. It is licensed for the treatment of breast cancer or gastric or gastroesophageal adenocarcinoma. Patritumab deruxtecan, an experimental antibody–drug conjugate to treat non-small-cell lung cancer. Ifinatamab deruxtecan, an experimental anti-cancer treatment. Datopotamab deruxtecan (Datroway), used for the treatment of breast cancer References Antibody-drug conjugates Topoisomerase inhibitors Amines Nitrogen heterocycles Maleimides Hexapeptides Organofluorides Lactams Lactones Tertiary alcohols Heterocyclic compounds with 6 rings
Deruxtecan
[ "Chemistry", "Biology" ]
200
[ "Antibody-drug conjugates", "Amines", "Bases (chemistry)", "Functional groups" ]
76,421,867
https://en.wikipedia.org/wiki/Geothermal%20Energy%20Act
For the Geothermal Energy Research, Development, and Demonstration Act of 1974, please see Geothermal Energy Research, Development, and Demonstration Act. The Geothermal Energy Act of 1980 (GEA) is an act authorized by the 96th U.S. Congress to address the issues of U.S. geothermal energy and its capabilities. It is one of six acts enacted by the Energy Security Act of 1980. It addresses geothermal energy legislation by means of the following: Domestic geothermal reserves could be developed into regionally significant energy sources promoting the economic health as well as the national security of the United States. However, there were institutional and economic barriers to the commercialization of geothermal technology; and Federal agencies should consider the use of geothermal energy for government buildings. Finance and insurance issues The majority of the subsections of the GEA address the financial and insurance issues of geothermal energy production by means of the following: Loans for geothermal reservoir confirmation, The study, establishment, and implementation of an insurance program, and The establishment of assistance programs. Prior to this legislation was the Federal Geothermal Research, Development and Demonstration Act of 1974. References Geothermal energy in the United States Geothermal areas in the United States Energy policy Energy policy of the United States 96th United States Congress United States federal energy legislation
Geothermal Energy Act
[ "Environmental_science" ]
266
[ "Environmental social science", "Energy policy" ]
72,049,053
https://en.wikipedia.org/wiki/TUM%20School%20of%20Computation%2C%20Information%20and%20Technology
The TUM School of Computation, Information and Technology (CIT) is a school of the Technical University of Munich, established in 2022 by the merger of three former departments. As of 2022, it is structured into the Department of Mathematics, the Department of Computer Engineering, the Department of Computer Science, and the Department of Electrical Engineering. Department of Mathematics The Department of Mathematics (MATH) is located at the Garching campus. History Mathematics was taught from the beginning at the Polytechnische Schule in München and the later Technische Hochschule München. Otto Hesse was the department's first professor for calculus, analytical geometry and analytical mechanics. Over the years, several institutes for mathematics were formed. In 1974, the Institute of Geometry was merged with the Institute of Mathematics to form the Department of Mathematics, and informatics, which had been part of the Institute of Mathematics, became a separate department. Research Groups As of 2022, the research groups at the department are: Algebra Analysis Analysis and Modelling Applied Numerical Analysis, Optimization and Data Analysis Biostatistics Discrete Optimization Dynamic Systems Geometry and Topology Mathematical Finance Mathematical Optimization Mathematical Physics Mathematical Modeling of Biological Systems Numerical Mathematics Numerical Methods for Plasma Physics Optimal Control Probability Theory Scientific Computing Statistics Department of Computer Science The Department of Computer Science (CS) is located at the Garching campus. History The first courses in computer science at the Technical University of Munich were offered in 1967 at the Department of Mathematics, when Friedrich L. Bauer introduced a two-semester lecture titled Information Processing. In 1968, Klaus Samelson started offering a second lecture cycle titled Introduction to Informatics. By 1992, the computer science department had separated from the Department of Mathematics to form an independent Department of Informatics. In 2002, the department relocated from its old campus in the Munich city center to the new building on the Garching campus. In 2017, the Department celebrated 50 Years of Informatics Munich with a series of lectures and ceremonies, together with the Ludwig Maximilian University of Munich and the Bundeswehr University Munich. 
Chairs As of 2022, the department consists of the following chairs: AI in Healthcare and Medicine Algorithmic Game Theory Algorithms and Complexity Application and Middleware Systems Augmented Reality Bioinformatics Computational Imaging and AI in Medicine Computational Molecular Medicine Computer Aided Medical Procedures Computer Graphics and Visualization Computer Vision and AI Cyber Trust Data Analytics and Machine Learning Data Science and Engineering Database Systems Decision Science & Systems Dynamic Vision and Learning Efficient Algorithms Engineering Software for Decentralized Systems Ethics in Systems Design and Machine Learning Formal Languages, Compiler & Software Construction Formal Methods for Software Reliability Hardware-aware Algorithms and Software for HPC Information Systems & Business Process Management Law and Security of Digitization Legal Tech Logic and Verification Machine Learning of 3D Scene Geometry Physics-based Simulation Quantum Computing Scientific Computing Software & Systems Engineering Software Engineering Software Engineering for Business Information Systems Theoretical Computer Science Theoretical Foundations of AI Visual Computing Notable people Seven faculty members of the Department of Informatics have been awarded the Gottfried Wilhelm Leibniz Prize, one of the highest endowed research prizes in Germany with a maximum of €2.5 million per award: 2020 – Thomas Neumann 2016 – Daniel Cremers 2008 – Susanne Albers 1997 – Ernst Mayr 1995 – Gerd Hirzinger 1994 – Manfred Broy 1991 – Friedrich L. Bauer was awarded the 1988 IEEE Computer Society Computer Pioneer Award for inventing the stack data structure. Gerd Hirzinger was awarded the 2005 IEEE Robotics and Automation Society Pioneer Award. and Burkhard Rost were awarded the Alexander von Humboldt Professorship in 2011 and 2008, respectively. Rudolf Bayer was known for inventing the B-tree and Red–black tree. Department of Electrical Engineering The Department of Electrical Engineering (EE) is located at the Munich campus. History The first lectures in the field of electricity at the Polytechnische Schule München were given as early as 1876 by the physicist Wilhelm von Bezold. Over the years, as the field of electrical engineering became increasingly important, a separate department for electrical engineering emerged within the mechanical engineering department. In 1967, the department was renamed the Faculty of Mechanical and Electrical Engineering, and six electrical engineering departments were permanently established. In April 1974, the formal establishment of the new TUM Department of Electrical and Computer Engineering took place. While the department is still located at the Munich campus, a new building is currently under construction on the Garching campus, and the department is expected to move by 2025.
Professorships As of 2022, the department consists of the following chairs and professorships: Biomedical Electronics Circuit Design Computational Photonics Control and Manipulation of Microscale Living Objects Environmental Sensing and Modeling High Frequency Engineering Hybrid Electronic Systems Measurement Systems and Sensor Technology Micro- and Nanosystems Technology Microwave Engineering Molecular Electronics Nano and Microrobotics Nano and Quantum Sensors Neuroelectronics Physics of Electrotechnology Quantum Electronics and Computer Engineering Semiconductor Technology Simulation of Nanosystems for Energy Conversion Department of Computer Engineering The Department of Computer Engineering was separated from the former Department of Electrical and Computer Engineering as the result of merger into the School of Computation, Information and Technology. Professorships As of 2022, the department consists of the following chairs and professorships: Architecture of Parallel and Distributed Systems Audio Information Processing Automatic Control Engineering Bio-inspired Information Processing Coding and Cryptography Communications Engineering Communication Networks Computer Architecture & Operating Systems Computer Architecture and Parallel Systems Connected Mobility Cognitive Systems Cyber Physical Systems Data Processing Electronic Design Automation Embedded Systems and Internet of Things Healthcare and Rehabilitation Robotics Human-Machine Communication Information-oriented Control Integrated Systems Line Transmission Technology Machine Learning for Robotics Machine Learning in Engineering Machine Vision and Perception Media Technology Network Architectures and Services Neuroengineering Materials Real-Time Computer Systems Robotics Science and System Intelligence Robotics, AI and realtime systems Security in Information Technology Sensor-based Robot Systems and Intelligent Assistance Systems Signal Processing Methods Theoretical Information Technology Building The Department of Computer Science shares a building with the Department of Mathematics. In the building, two massive parabolic slides run from the fourth floor to the ground floor. Their shape corresponds to the equation and is supposed to represent the "connection of science and art". Rankings The Department of Computer Science has been consistently rated the top computer science department in Germany by major rankings. Globally, it ranks No. 29 (QS), No. 10 (THE), and within No. 51-75 (ARWU). In the 2020 national CHE University Ranking, the department is among the top rated departments for computer science and business informatics, being rated in the top group for the majority of criteria. The Department of Mathematics has been rated as one of the top mathematics departments in Germany, ranking 43rd in the world and 2nd in Germany (after the University of Bonn) in the QS World University Rankings, and within No. 51-75 in the Academic Ranking of World Universities. In Statistics & Operational Research, QS ranks TUM first in Germany and 28th in the world. The Departments of Electrical and Computer Engineering are leading in Germany. In Electrical & Electronic Engineering, TUM is rated 18th worldwide by QS and 22nd by ARWU. In engineering as a whole, TUM is ranked 20th globally and 1st nationally in the Times Higher Education World University Rankings. 
See also Summer School Marktoberdorf References External links 2022 establishments in Germany Universities and colleges established in 2022 Computer science departments Electrical and computer engineering departments Schools of mathematics
TUM School of Computation, Information and Technology
[ "Engineering" ]
1,481
[ "Electrical and computer engineering departments", "Electrical and computer engineering", "Engineering universities and colleges" ]
72,053,369
https://en.wikipedia.org/wiki/Journal%20of%20Crystal%20Growth
The Journal of Crystal Growth is a semi-monthly peer-reviewed scientific journal covering experimental and theoretical studies of crystal growth and its applications. It is published by Elsevier and the editor-in-chief is J. Derby (University of Minnesota). History The Journal of Crystal Growth was founded following the 1966 International Conference on Crystal Growth (ICCG) held in Boston, Massachusetts, United States. Ichiro Sunagawa, who participated in ICCG, wrote in the Journal of the Japanese Association of Crystal Growth that before then, "The crystal growth community was totally fragmented and had remained as a peripheral field at the mercy of other organizations." Michael Schieber (Hebrew University) later recounted feeling the need for an individual journal on the subject after the conference proceedings were published as a supplement to the Journal of Physics and Chemistry of Solids that had to be additionally ordered by journal subscribers. Feeling as though the crystal growth community should not remain at the "discretion of other disciplines for which crystal growth has a secondary importance", he spoke about the idea with a colleague, Kenneth Button, who informed an editor at the North-Holland Publishing Company (now Elsevier). The journal launched in 1967, with an editorial board consisting of Schieber as editor-in-chief and co-editors Charles Frank and Nicolás Cabrera. At the time the journal employed two U.S. editors, eighteen associate editors from around the world, and an editorial advisory board of sixteen members. As of 2015, the journal has continued to serve as the "major venue for papers on crystal growth theory, practice and characterization" and proceedings of various conferences in the field. According to Tony Stankus, the journal has historically emphasised research contribution on crystals grown from wet solutions and later strongly emphasised research on crystals grown from molten materials or those produced through other processes relevant to the semiconductor industry. The American Chemical Society and the Scholarly Publishing and Academic Resources Coalition partnered to develop Crystal Growth and Design as a lower-cost alternative to the Journal of Crystal Growth; its first issue was published in 2001. Retractions In 2017, Elsevier was reported to be retracting four articles from the journal after an author had falsified reviews. The journal was one of several publications affected by the falsifications. Abstracting and indexing The journal is abstracted and indexed in the following databases: Aluminium Industry Abstracts Chemical Abstracts Current Contents - Physical, Chemical & Earth Sciences El Compendex Plus Engineered Materials Abstracts Engineering Index INSPEC Metals Abstracts Science Citation Index Scopus According to Journal Citation Reports, the journal had a 2021 impact factor of 1.830. See also List of materials science journals References External links Crystallography journals Elsevier academic journals English-language journals Materials science journals Physics journals Academic journals established in 1967 Semi-monthly journals
Journal of Crystal Growth
[ "Chemistry", "Materials_science", "Engineering" ]
566
[ "Crystallography journals", "Crystallography", "Materials science journals", "Materials science" ]
72,059,982
https://en.wikipedia.org/wiki/Comayagua%20cathedral%20clock
The Comayagua cathedral clock, also known as the Arabic clock or the Comayagua clock, is a gear clock dating from medieval times located in the city of Comayagua, in the Republic of Honduras. It is considered the oldest clock in the Americas and the oldest gear clock in the world still in operation, since it has presumably been working for more than 900 years. History The gears were presumably made and assembled by the Spanish Moors in Al-Andalus during the Almoravid Empire period, around the year 1100, during the reign of Yusuf ibn Tashfin. According to the chronicles, before it was transferred to the Americas the clock was operating in the Arab palace of the Alhambra in Granada, Spain. After the end of the Reconquista and the expulsion of the Muslims and Jews from Castile, the palace was occupied by the kings of Spain from Charles V onward. During the 17th century, by order of King Felipe III of Spain, the clock was transferred to the Las Hibueras region (present-day Honduras) of New Spain, where it would function as the city clock. Initially it functioned in the Church of La Merced, which was at that time the cathedral of the city, being installed there in 1636. However, by 1711 it was relocated to the recently completed Cathedral of the Immaculate Conception, which at that time was the largest building in the city and one of the largest cathedrals in Central America during the viceroyalty of New Spain; it was installed in the bell tower of the temple. During 2007 it was subjected to a restoration process by the Municipal Mayor's Office, the National Congress of Honduras, the Comayagüense Cultural Committee and the supervision of the Honduran Institute of Anthropology and History, for which the master watchmaker Rodolfo Antonio Cerón Martínez from Guatemala was engaged; after five months of hard work he concluded his work on December 20, 2007. Characteristics The mechanism is based on gears, ropes, weights and a pendulum; the whole assembly shows the time on the face located on the facade of the church, where the number 4 is written in an old version of Roman numerals, appearing as IIII instead of IV as it is usually known. Studies and debate For a time it was believed that the oldest gear clock was the one in Salisbury Cathedral in England, since it was made in 1386. However, when the material of the Comayagua clock was studied, it was found to have been built of iron with a much older technique than that used for the Salisbury clock, and therefore several historians and researchers assume that this is the oldest working gear clock in the world. However, the debate about its antiquity is still current, as some historians have said that the clock cannot be of the age attributed to it, since there are no specific historical records confirming that mechanisms to measure time based on gears were common at the end of the 11th century and the beginning of the 12th century, as most clocks of the time were based on sand or water. Therefore, these historians give the real date of construction as around the year 1374. However, those who support the theory that it actually belongs to the 11th century cite the study carried out by researchers in which it was discovered that it was made using the wrought iron technique, a much older technique than that with which the Salisbury clock was made, in addition to finding inscriptions on one of the gears that read "Espana 1100", which gives greater support to the possibility that it is from the period assigned to it.
See also History of Honduras History of Spain Spanish Empire References Clocks Comayagua Time in Honduras 11th century in Spain History of Honduras 11th-century works
Comayagua cathedral clock
[ "Physics", "Technology", "Engineering" ]
742
[ "Physical systems", "Machines", "Clocks", "Measuring instruments" ]
72,061,861
https://en.wikipedia.org/wiki/Richard%20Alan%20Morton
Richard Alan Morton FRS was the Johnston Professor of Biochemistry at the University of Liverpool from 1944 until 1966. He was a pioneer in the application of spectroscopy to biological molecules. His research group were the first to identify vitamin A2 and related compounds. They were also among the first to characterise several isoprenoids including ubiquinone, polyprenol and others. Early life and education Richard Alan Morton was the child of Welsh-speaking parents in Liverpool. His middle name was initially Alun. He attended the co-educational Oulton Secondary School in Liverpool. He left school in 1917 to work in a pharmacy and then joined the army towards the end of the First World War. He became ill with Spanish flu. From 1919 he studied chemistry at the University of Liverpool, graduating with B. Sc. first class in 1922. He then undertook doctoral research supervised by Edward Charles Cyril Baly into the application of optical spectroscopy. Selig Hecht was a post-doctoral fellow with Baly's group at this time, interested in applications of spectroscopy in biology, and this developed Morton's interest in this new application. Career He remained at this university for his entire career apart from spending 1931 on sabbatical as visiting professor at Ohio State University in the USA. From 1924 until 1944 he was a special lecturer in spectroscopy in the Chemistry Department. He was then appointed to the Johnston Chair of Biochemistry in the Department of Biochemistry in 1944, which he held until he retired in 1966. He continued to be active in science after his retirement. His research focused initially on the application of spectroscopy to determining the structure of chemical compounds. From 1926 his work developed the use of absorption spectroscopy with biological molecules that absorbed light, allowing their concentration to be estimated in solutions. This technology, in collaboration with Ian Heilbron's interest in a therapy for rickets, led him to discover vitamin A2 and several related compounds. His research group became focused on fat-soluble vitamins and was also among the first to identify ubiquinone and the polyprenol family of compounds. From 1955 until 1965 the focus of his group's research was isoprenoids. During the Second World War he was involved in studies to understand people's vitamin A requirements, which gave him a new interest in nutrition. After the war he organised meetings for industrial scientists around Merseyside about the use of spectroscopy. He was the chair of the government's Committee on Food Additives from 1963 to 1968. Publications Morton was the author or co-author of 282 scientific publications and several books. These included: RA Morton (1975) Biochemical Spectroscopy, two volumes RA Morton (1969) The Biochemical Society: its history and activities 1911-1969 R A Morton (1942) Absorption spectra of Vitamins and Hormones He was also the author of publications in Welsh including: (1965) Agweddau cemegol ar weled (Chemical aspects of sight) Y Gwyddonydd 3 issue 2 Honours and awards In 1929 he was awarded the Meldola Medal and Prize by the Chemical Society. In 1950 he was elected a Fellow of the Royal Society. In 1969 he was elected a member of the American Society for Nutrition. In 1966 he was made an Honorary Member of the Biochemical Society. In 1971 the University of Liverpool named a new student hostel Morton House after him.
He was awarded honorary degrees by the University of Wales (1966), Trinity College Dublin (1967) and the University of Coimbra (1964). In 1978 the Biochemical Society established the annual Morton Lecture in his memory for contributions to lipid biochemistry. Personal life In 1926 he and Myfanwy Heulwen Roberts were married. They had one child together. He attended the Welsh Presbyterian Chapel in Garston and was involved with the Welsh community in Liverpool throughout his life. References 1899 births 1977 deaths British biochemists Spectroscopists Alumni of the University of Liverpool Academics of the University of Liverpool Welsh-speaking academics
Richard Alan Morton
[ "Physics", "Chemistry" ]
802
[ "Physical chemists", "Spectrum (physical sciences)", "Analytical chemists", "Spectroscopists", "Spectroscopy" ]
64,685,131
https://en.wikipedia.org/wiki/Expandable%20graphite
Expandable graphite is produced from the naturally occurring mineral graphite. The layered structure of graphite allows some molecules to be intercalated in between the graphite layers. Through incorporation of acids, usually sulfuric acid, graphite can be converted into expandable graphite. Characteristics If expandable graphite is heated, the graphite flakes will expand to a multiple of their starting volume. The main products on the market have a starting temperature in the range of 200 °C. The expanded flakes have a “worm-like” appearance and are generally several millimeters long. Production To produce expandable graphite, natural graphite flakes are treated in a bath of acid and oxidizing agent. Commonly used oxidizing agents are hydrogen peroxide, potassium permanganate or chromic acid. Concentrated sulfuric acid or nitric acid is usually used as the compound to be incorporated, with the reaction taking place at temperatures of 30 °C to 130 °C for up to four hours. After the reaction time, the flakes are washed with water and then dried. Starting temperature and expansion rate depend on the production conditions and on the particle size (degree of fineness) of the graphite used. Applications Flame retardant One of the main applications of expandable graphite is as a flame retardant. When exposed to heat, expandable graphite expands and forms an intumescent layer on the material surface. This slows down the spread of fire and counteracts the most dangerous consequences of fire for humans, the formation of toxic gases and smoke. Graphite foil By compressing expanded graphite, foils can be produced from pure graphite. These are mainly used as thermally and chemically highly resistant seals in chemical plant construction or as heat spreaders. Expandable graphite for metallurgy Expandable graphite is also used in metallurgy to cover melts and moulds. The material serves here as oxidation protection and as an insulator. Expandable graphite for the chemical industry Expandable graphite is also used in chemical processes for paints and varnishes. References Fire protection Graphite
Expandable graphite
[ "Engineering" ]
444
[ "Building engineering", "Fire protection" ]
64,686,016
https://en.wikipedia.org/wiki/R%20Puppis
R Puppis is a variable star in the constellation Puppis. It is a rare yellow hypergiant and a candidate member of the open cluster NGC 2439. It is also an MK spectral standard for the class G2 0-Ia. Variability R Puppis was identified as a variable star in 1879, and described as having a range of over a magnitude. Numerous observations over the following 100 years failed to confirm the variations, until the 1970s when clear brightness changes were observed. These were confirmed by later observations, but with a total visual amplitude of only about 0.2 magnitudes. Variable stars such as R Puppis have been described as pseudo-Cepheids, because they lie above the high-luminosity portion of the instability strip and their variations are similar to those of Cepheids although less regular. R Puppis is formally classified as a semiregular variable of type SRd, a class comprising F, G, or K giants and supergiants. References Puppis CD-31 4910 037415 2974 062058 Puppis, R Semiregular variable stars G-type hypergiants
R Puppis
[ "Astronomy" ]
233
[ "Puppis", "Constellations" ]
64,688,677
https://en.wikipedia.org/wiki/Homological%20connectivity
In algebraic topology, homological connectivity is a property describing a topological space based on its homology groups. Definitions Background X is homologically-connected if its 0-th homology group equals Z, i.e. H0(X) ≅ Z, or equivalently, its 0-th reduced homology group is trivial: H̃0(X) ≅ 0. For example, when X is a graph and its set of connected components is C, H0(X) ≅ Z^|C| and H̃0(X) ≅ Z^(|C|−1) (see graph homology). Therefore, homological connectivity is equivalent to the graph having a single connected component, which is equivalent to graph connectivity. It is similar to the notion of a connected space. X is homologically 1-connected if it is homologically-connected, and additionally, its first homology group is trivial, i.e. H1(X) ≅ 0. For example, when X is a connected graph with vertex-set V and edge-set E, H1(X) ≅ Z^(|E|−|V|+1). Therefore, homological 1-connectivity is equivalent to the graph being a tree. Informally, it corresponds to X having no "holes" with a 1-dimensional boundary, which is similar to the notion of a simply connected space. In general, for any integer k, X is homologically k-connected if its reduced homology groups of order 0, 1, ..., k are all trivial. Note that the reduced homology group equals the homology group for orders 1, ..., k (only the 0-th reduced homology group is different). Connectivity The homological connectivity of X, denoted connH(X), is the largest k ≥ 0 for which X is homologically k-connected. Examples: If all reduced homology groups of X are trivial, then connH(X) = infinity. This holds, for example, for any ball. If the 0-th reduced group is trivial but the 1st is not, then connH(X) = 0. This holds, for example, for a connected graph with a cycle. If the 0-th reduced homology group is non-trivial, then connH(X) = -1. This holds for any disconnected space. The connectivity of the empty space is, by convention, connH(X) = -2. Some computations become simpler if the connectivity is defined with an offset of 2, that is, ηH(X) := connH(X) + 2. The eta of the empty space is 0, which is its smallest possible value. The eta of any disconnected space is 1. Dependence on the field of coefficients The basic definition considers homology groups with integer coefficients. Considering homology groups with other coefficients leads to other definitions of connectivity. For example, X is F2-homologically 1-connected if its 1st homology group with coefficients from F2 (the cyclic field of size 2) is trivial, i.e.: H1(X; F2) ≅ 0. Homological connectivity in specific spaces For homological connectivity of simplicial complexes, see simplicial homology. Homological connectivity was calculated for various spaces, including: The independence complex of a graph; A random 2-dimensional simplicial complex; A random k-dimensional simplicial complex; A random hypergraph; A random Čech complex. Relation with homotopical connectivity The Hurewicz theorem relates the homological connectivity to the homotopical connectivity, denoted connπ(X). For any X that is simply-connected, that is, connπ(X) ≥ 1, the connectivities are the same: connH(X) = connπ(X). If X is not simply-connected (connπ(X) ≤ 0), then the inequality connH(X) ≥ connπ(X) holds, but it may be strict. See Homotopical connectivity. See also Meshulam's game is a game played on a graph G, that can be used to calculate a lower bound on the homological connectivity of the independence complex of G. References Homology theory Properties of topological spaces
Homological connectivity
[ "Mathematics" ]
758
[ "Properties of topological spaces", "Topological spaces", "Topology", "Space (mathematics)" ]
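A minimal Python sketch of the graph case discussed in the Homological connectivity entry above. It relies only on the facts stated there (for a graph, the reduced 0-th homology has rank |C| − 1 and the first homology has rank |E| − |V| + |C|, with all higher groups trivial); the function and example graphs are illustrative names invented here, not taken from any library.

import math

def homological_connectivity_of_graph(vertices, edges):
    """Return connH(G): -1 if G is disconnected, 0 if connected with a cycle,
    and infinity if G is a tree (all reduced homology groups trivial)."""
    # Count connected components with a small union-find structure.
    parent = {v: v for v in vertices}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    for u, v in edges:
        parent[find(u)] = find(v)

    components = len({find(v) for v in vertices})
    if components > 1:
        return -1                                          # reduced H_0 is non-trivial
    cycle_rank = len(edges) - len(vertices) + components   # rank of H_1
    return math.inf if cycle_rank == 0 else 0

# Examples: a path (a tree), a triangle, and two isolated vertices.
print(homological_connectivity_of_graph([1, 2, 3], [(1, 2), (2, 3)]))          # inf
print(homological_connectivity_of_graph([1, 2, 3], [(1, 2), (2, 3), (1, 3)]))  # 0
print(homological_connectivity_of_graph([1, 2], []))                           # -1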
64,693,091
https://en.wikipedia.org/wiki/Magnetic%20Thermodynamic%20Systems
In thermodynamics and thermal physics, the theoretical formulation of magnetic systems entails expressing the behavior of the systems using the laws of thermodynamics. Common magnetic systems examined through the lens of thermodynamics are ferromagnets and paramagnets as well as the ferromagnet to paramagnet phase transition. It is also possible to derive thermodynamic quantities in a generalized form for an arbitrary magnetic system using the formulation of magnetic work. Simplified thermodynamic models of magnetic systems include the Ising model, the mean field approximation, and the ferromagnet to paramagnet phase transition expressed using the Landau theory of phase transitions. Arbitrary magnetic systems In order to incorporate magnetic systems into the first law of thermodynamics, it is necessary to formulate the concept of magnetic work. The magnetic contribution to the quasi-static work done on an arbitrary magnetic system is δW_mag = ∫_V H·δB dV, where H is the magnetic field and B is the magnetic flux density. So, neglecting non-magnetic work terms, the first law of thermodynamics in a reversible process can be expressed as dU = T dS + ∫_V H·δB dV. Accordingly, the change during a quasi-static process in the Helmholtz free energy, F = U − TS, and the Gibbs free energy, G = F − ∫_V H·B dV, will be dF = −S dT + ∫_V H·δB dV and dG = −S dT − ∫_V B·δH dV. Paramagnetic systems In a paramagnetic system, that is, a system in which the magnetization vanishes without the influence of an external magnetic field, under some simplifying assumptions (such as the sample system being ellipsoidal), one can derive a few compact thermodynamic relations. Assuming the external magnetic field is uniform and shares a common axis with the paramagnet, the extensive parameter characterizing the magnetic state is I, the magnetic dipole moment of the system. The fundamental thermodynamic relation describing the system will then be of the form U = U(S, V, I, N). In the more general case where the paramagnet does not share an axis with the magnetic field, the extensive parameters characterizing the magnetic state will be the three components I_x, I_y, I_z of the magnetic moment. In this case, the fundamental relation describing the system will be U = U(S, V, I_x, I_y, I_z, N). The intensive parameter corresponding to the magnetic moment is the external magnetic field acting on the paramagnet, B_e. The relation between them is B_e = (∂U/∂I)_{S,V,N}, where S is the entropy, V is the volume and N is the number of particles in the system. Note that in this case, U is the energy added to the system by the insertion of the paramagnet. The total energy in the space occupied by the system includes a component arising from the energy of a magnetic field in a vacuum. This component equals B_e²V/(2μ0), where μ0 is the permeability of free space, and is not included as a part of U. The choice of whether to include it in U is arbitrary, but it is important to note the convention chosen, otherwise it may lead to confusion emanating from differing results. The Euler relation for a paramagnetic system is then U = TS − PV + B_e·I + μN, and the Gibbs-Duhem relation for such a system is S dT − V dP + I dB_e + N dμ = 0. An experimental problem that distinguishes magnetic systems from other thermodynamic systems is that the magnetic moment cannot be constrained. Typically in thermodynamic systems, all extensive quantities describing the system can be constrained to a specified value. Examples are volume and the number of particles, which can both be constrained by enclosing the system in a box. On the other hand, there is no experimental method that can directly hold the magnetic moment to a specified constant value. Nevertheless, this experimental concern does not affect the thermodynamic theory of magnetic systems.
Ferromagnetic systems Ferromagnetic systems are systems in which the magnetization does not vanish in the absence of an external magnetic field. Multiple thermodynamic models have been developed to model and explain the behavior of ferromagnets, including the Ising model. The Ising model can be solved analytically in one and two dimensions, numerically in higher dimensions, or using the mean-field approximation in any dimensionality. Additionally, the ferromagnet to paramagnet phase transition is a second-order phase transition and so can be modeled using the Landau theory of phase transitions. See also Magnetism Thermodynamic systems Thermo-magnetic motor References Thermodynamic systems
Magnetic Thermodynamic Systems
[ "Physics", "Chemistry", "Mathematics" ]
870
[ "Physical systems", "Thermodynamic systems", "Thermodynamics", "Dynamical systems" ]
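A brief Python sketch of the mean-field approximation mentioned in the Magnetic Thermodynamic Systems entry above, assuming the standard spin-1/2 self-consistency equation m = tanh(zJm/(kB·T)) with coordination number z and coupling J, in units where kB = 1. The function name and parameter values are illustrative assumptions, not taken from the article.

import math

def mean_field_magnetization(T, J=1.0, z=4, tol=1e-10, max_iter=10_000):
    """Solve m = tanh(z*J*m / T) by fixed-point iteration, starting near m = 1."""
    m = 1.0
    for _ in range(max_iter):
        m_new = math.tanh(z * J * m / T)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m_new

# In mean-field theory the critical temperature is Tc = z*J (with kB = 1): the
# spontaneous magnetization is non-zero below Tc and vanishes above it,
# illustrating the ferromagnet-to-paramagnet transition described above.
for T in (1.0, 2.0, 3.9, 4.1, 6.0):
    print(f"T = {T:>4}: m = {mean_field_magnetization(T):.4f}")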
73,470,619
https://en.wikipedia.org/wiki/Ytterbium%28II%29%20fluoride
Ytterbium(II) fluoride is a binary inorganic compound of ytterbium and fluorine with the chemical formula YbF2. Synthesis Ytterbium(II) fluoride can be obtained by reacting ytterbium(III) fluoride with ytterbium or hydrogen. Physical properties Ytterbium(II) fluoride is a gray solid and crystallizes in the so-called fluorite type, analogous to calcium fluoride, with a cubic unit-cell a axis of 559.46 pm. In the crystal structure of ytterbium(II) fluoride, the Yb2+ cation is surrounded by eight F− anions in the form of a cube, while each F− anion is tetrahedrally surrounded by four Yb2+ cations. References Fluorides Lanthanide halides Ytterbium(II) compounds Fluorite crystal structure
Ytterbium(II) fluoride
[ "Chemistry" ]
187
[ "Inorganic compounds", "Fluorides", "Inorganic compound stubs", "Salts" ]
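A small Python check of the geometry implied by the fluorite-type structure described in the ytterbium(II) fluoride entry above. In the fluorite structure the nearest-neighbour cation-anion separation is a·√3/4, where a is the cubic lattice parameter; this is a general crystallographic relation assumed here, not a figure quoted in the article.

import math

a_pm = 559.46                                # lattice parameter of YbF2 from the entry above
yb_f_distance = a_pm * math.sqrt(3) / 4      # nearest-neighbour Yb-F separation
print(f"Yb-F distance ≈ {yb_f_distance:.1f} pm")   # ≈ 242.3 pm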
73,471,646
https://en.wikipedia.org/wiki/White%20etching%20cracks
White etching cracks (WEC), or white structure flaking or brittle flaking, are a type of rolling contact fatigue (RCF) damage that can occur in bearing steels under certain conditions, such as hydrogen embrittlement, high stress, inadequate lubrication, and high temperature. WEC is characterised by the presence of white areas of microstructural alteration in the material, which can lead to the formation of small cracks that can grow and propagate over time, eventually leading to premature failure of the bearing. WEC has been observed in a variety of applications, including wind turbine gearboxes, automotive engines, and other heavy machinery. The exact mechanism of WEC formation is still a subject of research, but it is believed to be related to a combination of microstructural changes, such as phase transformations and grain boundary degradation, and cyclic loading. Cause White etching cracks (WECs), first reported in 1996, are cracks that can form in the microstructure of bearing steel, leading to the development of a network of branched white cracks. They are usually observed in bearings that have failed due to rolling contact fatigue or accelerated rolling contact fatigue. These cracks can significantly reduce the reliability and shorten the operating life of bearings, both in the wind power industry and in several industrial applications. The exact cause of WECs and their significance in rolling bearing failures have been the subject of much research and discussion. Ultimately, the formation of WECs appears to be influenced by a complex interplay between material, mechanical, and chemical factors, including hydrogen embrittlement, high stresses from sliding contact, inclusions, electrical currents, and temperature. All of these factors have been identified as potential drivers of WECs. Hydrogen embrittlement One of the most commonly quoted potential causes of WECs is hydrogen embrittlement, arising from an unstable equilibrium between material, mechanical, and chemical aspects, which occurs when hydrogen atoms diffuse into the bearing steel, causing micro-cracks to form. Hydrogen can come from a variety of sources, including the hydrocarbon lubricant or water contamination, and it is often used in laboratory tests to reproduce WECs. The generation of hydrogen from lubricants has been attributed to three primary contributing factors: decomposition of lubricants through catalytic reactions with a fresh metal surface, breakage of molecular chains within the lubricant due to shear on the sliding surface, and thermal decomposition of lubricants caused by heat generation during sliding. Hydrogen generation is influenced by lubricity, wear width, and the catalytic reaction of a fresh metal surface. Stress localisation Stresses higher than anticipated can also accelerate rolling contact fatigue, which is a known precursor to WECs. WECs commence below the surface during the initial phases of their formation, particularly at non-metallic inclusions. As the sliding contact period extended, these cracks grew from the subsurface region to the contact surface, ultimately leading to flaking. Furthermore, there was an observable rise in the extent of microstructural modifications near the cracks, suggesting that the presence of the crack is a precursor to these alterations. The direction of sliding on the bearing surface played a significant role in WEC formation. When the traction force opposed the direction of over-rolling (referred to as negative sliding), it consistently led to the development of WECs.
Conversely, when the traction force aligned with the over-rolling direction (positive sliding), WECs did not manifest. The magnitude of sliding exerted a dominant influence on WEC formation. Tests conducted at a sliding-to-rolling ratio (SRR) of -30% consistently resulted in the generation of WECs, while no WECs were observed in tests at -5% SRR. Furthermore, the number of WECs appeared to correlate with variations in contact severity, including changes in surface roughness, rolling speed, and lubricant temperature. Electrical current One of the primary causes of WECs is the passage of electrical current through the bearings. Both Alternating Current (AC) and Direct Current (DC) can lead to the formation of WECs, albeit through slightly different mechanisms. In general, hydrogen generation from lubricants can be accelerated by electric current, potentially accelerating WEC formation. Under certain conditions, when the current densities are low (less than 1 mA/mm²), electrical discharges can significantly shorten the lifespan of bearings by causing WECs. These WECs can develop in under 50 hours due to electrical discharges. Electrostatic sensors are useful for early detection of these critical discharges, which are associated with failures induced by WECs. The analysis revealed that different reaction layers form in the examined areas, depending on the electrical polarity. In the case of AC, the rapid change in polarity involves the creation of a plasma channel through the lubricant film in the bearing, leading to a momentary, intense discharge of energy. The localised heating and rapid cooling associated with these discharges can cause changes in the microstructure of the steel, leading to the formation of WEAs and WECs. On the other hand, DC can cause a steady flow of electrons through the bearing. This can lead to the electrochemical dissolution of the metal, a process known as fretting corrosion. The constant flow of current can also cause local heating, leading to thermal gradients within the bearing material. These gradients can cause stresses that lead to the formation of WECs. Microstructure WECs are sub-surface networks of white cracks within local microstructural changes characterised by an altered microstructure known as white etching area (WEA). The term "white etching" refers to the white appearance of the altered microstructure of a polished and etched steel sample in the affected areas. The WEA is formed by amorphisation (phase transformation) of the martensitic microstructure due to friction at the crack faces during over-rolling, and these areas appear white under an optical microscope due to their low-etching response to the etchant. The microstructure of WECs consists of ultra-fine, nano-crystalline, carbide-free ferrite, or ferrite with a very fine distribution of carbide particles that exhibits a high degree of crystallographic misorientation. WEC propagation is mostly transgranular and does not follow a particular cleavage plane. Researchers observed three distinct types of microstructural alterations near the generated cracks: uniform white etching areas (WEAs), thin elongated regions of dark etching areas (DEA), and mixed regions comprising both light and dark etching areas with some misshapen carbides. During repeated stress cycles, the position of the crack constantly shifts, leaving behind an area of intense plastic deformation composed of ferrite, martensite, austenite (due to austenitization) and carbide nano-grains, i.e., WEAs.
The microscopic displacement of the crack plane in a single stress cycle accumulates to form micron-sized WEAs during repeated stress cycles. After the initial development of a fatigue crack around inclusions, the faces of the crack rub against each other during cycles of compressive stress. This results in the creation of WEAs through localised intense plastic deformation. It also causes partial bonding of the opposing crack faces and material transfer between them. Consequently, the WEC reopens at a slightly different location compared to its previous position during the release of stress. Furthermore, it has been acknowledged that WEA is one of the phases that arise from different processes and is generally observed as a result of a phase transformation in rolling contact fatigue. WEA is harder than the surrounding matrix. Additionally, WECs are caused by stresses higher than anticipated and occur due to bearing rolling contact fatigue as well as accelerated rolling contact fatigue. WECs in bearings are accompanied by white etching matter (WEM). WEM forms asymmetrically along WECs. There are no significant microstructural differences between the untransformed material adjacent to cracking and the parent material, although WEM exhibits variable carbon content and increased hardness compared to the parent material. A study in 2019 suggests that WEM may initiate ahead of the crack, challenging the conventional crack-rubbing mechanism. Testing for WEC A triple disc rolling contact fatigue (RCF) rig is a specialised testing apparatus used in the field of tribology and materials science to evaluate the fatigue resistance and durability of materials subjected to rolling contact. This rig is designed for simulating the conditions encountered in various mechanical systems, such as rolling bearings, gears, and other components exposed to repeated rolling and sliding motions. The rig typically consists of three discs or rollers arranged in a specific configuration. These discs can represent the interacting components of interest, such as a rolling bearing. The rig also allows precise control over the loading conditions, including the magnitude of the load, contact pressure, and contact geometry. The PCS Instruments Micro-pitting Rig (MPR) is a specialised testing instrument used in the field of tribology and mechanical engineering to study micro-pitting, a type of surface damage that occurs in lubricated rolling and sliding contact systems. The MPR is designed to simulate real-world operating conditions by subjecting test specimens, often gears or rolling bearings, to controlled rolling and sliding contact under lubricated conditions. Impact Offshore wind turbines are subject to challenging environmental conditions, including corrosive saltwater, high wind forces, and potential electrical currents. These conditions can contribute to bearing failures and impact the reliability and maintenance of wind turbines. Several factors can lead to bearing failures, such as corrosion, fatigue, wear, improper lubrication, and high electric currents, highlighting the need for improved materials and designs to ensure the longevity and performance of bearings in offshore wind turbines. WECs negatively affect the reliability of bearings, not only in the wind industry but also in various other industrial applications such as electric motors, paper machines, industrial gearboxes, pumps, ship propulsion systems, and the automotive sector. 60% of wind turbine failures are linked to WECs.
In October 2018, a workshop on WECs was organised in Düsseldorf by a junior research group funded by the German Federal Ministry of Education and Research (BMBF). Representatives from academia and industry gathered to discuss the mechanisms behind WEC formation in wind turbines, focusing on the fundamental material processes causing this phenomenon. Further reading References Fracture mechanics Materials degradation Mechanical failure modes Metallurgy Tribology Friction
White etching cracks
[ "Physics", "Chemistry", "Materials_science", "Technology", "Engineering" ]
2,162
[ "Mechanical phenomena", "Tribology", "Physical phenomena", "Force", "Friction", "Physical quantities", "Fracture mechanics", "Mechanical failure modes", "Structural engineering", "Metallurgy", "Technological failures", "Materials science", "Surface science", "nan", "Mechanical engineerin...
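The white etching cracks entry above reports tests at sliding-to-rolling ratios (SRR) of -30% and -5% without defining the quantity. A common tribology convention, assumed in this short Python sketch, defines SRR as the sliding speed divided by the entrainment (mean rolling) speed of the two surfaces; the function name and example speeds are illustrative, not taken from the article.

def sliding_to_rolling_ratio(u1, u2):
    """SRR as a percentage for surface speeds u1 and u2 (same units),
    using SRR = (u1 - u2) / ((u1 + u2) / 2) * 100."""
    entrainment = (u1 + u2) / 2
    sliding = u1 - u2
    return 100.0 * sliding / entrainment

# Example: surface speeds (m/s) giving roughly the -30% SRR used in the tests above.
print(f"SRR = {sliding_to_rolling_ratio(1.7, 2.3):.0f}%")   # -30%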
73,479,684
https://en.wikipedia.org/wiki/Bell%20diagonal%20state
Bell diagonal states are a class of bipartite qubit states that are frequently used in quantum information and quantum computation theory. Definition The Bell diagonal state is defined as a probabilistic mixture of the four Bell states |Φ+⟩, |Φ−⟩, |Ψ+⟩, |Ψ−⟩. In density operator form, a Bell diagonal state is defined as ρ = p1|Φ+⟩⟨Φ+| + p2|Φ−⟩⟨Φ−| + p3|Ψ+⟩⟨Ψ+| + p4|Ψ−⟩⟨Ψ−|, where (p1, p2, p3, p4) is a probability distribution. Since p1 + p2 + p3 + p4 = 1, a Bell diagonal state is determined by three real parameters. The maximum probability of a Bell diagonal state is defined as p_max = max_i p_i. Properties 1. A Bell-diagonal state is separable if all the probabilities are less than or equal to 1/2, i.e., p_max ≤ 1/2. 2. Many entanglement measures have simple formulas for entangled Bell-diagonal states: Relative entropy of entanglement: 1 − H(p_max), where H is the binary entropy function. Entanglement of formation: H(1/2 + √(p_max(1 − p_max))), where H is the binary entropy function. Negativity: p_max − 1/2. Log-negativity: log2(2 p_max). 3. Any 2-qubit state where the reduced density matrices are maximally mixed, ρ_A = ρ_B = I/2, is Bell-diagonal in some local basis. Viz., there exist local unitaries U_A, U_B such that (U_A ⊗ U_B) ρ (U_A ⊗ U_B)† is Bell-diagonal. References Quantum information science Quantum states
Bell diagonal state
[ "Physics" ]
227
[ "Quantum states", "Quantum mechanics" ]
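A short NumPy sketch of the Bell diagonal state entry above: it builds a Bell-diagonal density matrix from a probability vector and checks the negativity numerically against the closed form p_max − 1/2 quoted for entangled states. NumPy is assumed to be available, and all function names are illustrative.

import numpy as np

# The four Bell states as vectors in the computational basis |00>, |01>, |10>, |11>.
bell_vectors = np.array([
    [1, 0, 0,  1],    # |Phi+>
    [1, 0, 0, -1],    # |Phi->
    [0, 1,  1, 0],    # |Psi+>
    [0, 1, -1, 0],    # |Psi->
]) / np.sqrt(2)

def bell_diagonal_state(probs):
    """Density matrix sum_i p_i |B_i><B_i| for a probability vector probs."""
    return sum(p * np.outer(v, v) for p, v in zip(probs, bell_vectors))

def negativity(rho):
    """Negativity computed from the partial transpose on the second qubit."""
    rho_pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    eigenvalues = np.linalg.eigvalsh(rho_pt)
    return -eigenvalues[eigenvalues < 0].sum()

probs = [0.7, 0.1, 0.1, 0.1]
rho = bell_diagonal_state(probs)
print(negativity(rho))        # ≈ 0.2
print(max(probs) - 0.5)       # 0.2, matching p_max - 1/2 for p_max > 1/2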