Dataset columns: id (int64, 39 to 79M); url (string, length 32 to 168); text (string, length 7 to 145k); source (string, length 2 to 105); categories (list, 1 to 6 items); token_count (int64, 3 to 32.2k); subcategories (list, 0 to 27 items).
66,617,145
https://en.wikipedia.org/wiki/Geometrical%20Product%20Specification%20and%20Verification
Geometrical Product Specification and Verification (GPS&V) is a set of ISO standards developed by ISO Technical Committee 213. The aim of those standards is to develop a common language to specify macro-geometry (size, form, orientation, location) and micro-geometry (surface texture) of products or parts of products so that the language can be used consistently worldwide. Background GPS&V standards cover: dimensional specifications macrogeometrical specifications (form, orientation, location and run-out) surface texture specifications measuring equipment and calibration requirements uncertainty management for measurement and specification acceptance Other ISO technical committees are strongly related to ISO TC 213. ISO Technical Committee 10 is in charge of the standardization and coordination of technical product documentation (TPD). The GPS&V standards describe the rules to define geometrical specifications which are further included in the TPD. The TPD is defined as the "means of conveying all or part of a design definition or specification of a product". The TPD can be either a conventional documentation made of two-dimensional engineering drawings or a documentation based on Computer-aided design (CAD) models with 3D annotations. The ISO rules to write the documentation are mainly described in the ISO 128 and ISO 129 series, while the rules for 3D annotations are described in ISO 16792. ISO Technical Committee 184 develops standards that are closely related to GPS&V standards. In particular, ISO TC 184/SC4 develops the ISO 10303 standard, known as the STEP standard. GPS&V shall not be confused with the use of ASME Y14.5, which is often referred to as Geometric Dimensioning and Tolerancing (GD&T). History and concepts History ISO TC 213 was created in 1996 by merging three previous committees: ISO Technical Committee 10 Sub-committee 5 (ISO/TC 10/SC5) Geometrical Tolerancing ISO Technical Committee 57 (ISO/TC 57) Surface Texture ISO Technical Committee 3 (ISO/TC 3) Limits and fits Operation GPS&V standards are built on several basic operations defined in ISO 17450-1:2011: skin model partition extraction filtration association collection construction reconstruction reduction Those operations are supposed to completely describe the process of tolerancing from the point of view of the design and from the point of view of the measurement. They are presented in the ISO 17450 standard series. Some of them are further described in other standards, e.g. the ISO 16610 series for filtration. Those concepts are based on academic works. The key idea is to start from the real part with its imperfect geometry (skin model) and then to apply a sequence of well-defined operations to completely describe the tolerancing process. The operations are used in the GPS&V standards to define the meaning of dimensional, geometrical or surface texture specifications. Skin model The skin model is a representation of the surface of the real part. The model in CAD systems describes the nominal geometry of the parts of a product. The nominal geometry is perfect. However, the geometrical tolerancing has to take into account the geometrical deviations that arise inevitably from the manufacturing process in order to limit them to what is considered acceptable by the designer for the part and the complete product to be functional. This is why a representation of the real part with geometrical deviations (skin model) is introduced as the starting point in the tolerancing process. Partition The skin model is a representation of a whole real part. 
However, the designer very often, if not always, needs to identify some specific geometrical features of the part to apply well-suited specifications. The process of identifying geometrical features from the skin model or the nominal model is called a partition. The standardization of this operation is a work in progress in ISO TC 213 (ISO 18183 series). Several methods can be used to obtain a partition from a skin model. Extraction The skin model and the partitioned geometrical features are usually considered as continuous; however, it is often necessary when measuring the part to consider only points extracted from a line or a surface. The process of e.g. selecting the number of points, their distribution over the real geometrical feature and the way to obtain them is part of the extraction operation. This operation is described in ISO 14406:2011. Filtration Filtration is an operation that is useful to select features of interest from other features in the data. This operation is heavily used for surface texture specifications; however, it is a general operation that can be applied to define other specifications. This operation is well known in signal processing, where it can be used for example to isolate specific wavelengths in a raw signal. Filtration is standardized in the ISO 16610 series, where many different filters are described. Association Association is useful when we need to fit an ideal (perfect) geometrical feature to a real geometrical feature, e.g. to find a perfect cylinder that approximates a cloud of points that have been extracted from a real (imperfect) cylindrical geometrical feature. This can be viewed as a mathematical optimization process. A criterion for optimization has to be defined. This criterion can be the minimisation of a quantity such as the sum of the squares of the distances from the points to the ideal surface, for example. Constraints can also be added, such as a condition for the ideal geometrical feature to lie outside the material of the part or to have a specific orientation or location from another geometrical feature. Different criteria and constraints are used as defaults throughout the GPS&V standards for different purposes such as geometrical specification on geometrical features or datum establishment, for example. However, standardization of association as a whole is a work in progress in ISO TC 213. Collection Collection is a grouping operation. The designer can define a group of geometrical features that are contributing to the same function. It could be used to group two or more holes because they constitute one datum used for the assembly of a part. It could also be used to group nominally planar geometrical features that are constrained to lie inside the same flatness tolerance zone. This operation is described throughout several GPS&V standards. It is heavily used in ISO 5458:2018 for grouping planar geometrical features and cylindrical geometrical features (holes or pins). The collection operation can be viewed as applying constraints of orientation and/or constraints of location among the geometrical features of the considered group. Construction Construction is described as an operation used to build ideal geometrical features with perfect geometry from other geometrical features. An example, given in ISO 17450-1:2011, is the construction of a straight line resulting from the intersection of two perfect planes. 
No specific standard addresses this operation; however, it is used and defined throughout many standards in the GPS&V system. Reconstruction Reconstruction is an operation allowing a continuous geometrical feature to be built from a discrete geometrical feature. It is useful, for example, when there is a need to obtain a point between two extracted points, as can be the case when identifying a dimension between two opposite points in a particular section in the process of obtaining a linear size of a cylinder. The reconstruction operation is not yet standardized in the GPS&V system; however, the operation has been described in academic papers. Reduction Reduction is an operation allowing a new geometrical feature to be computed from an existing one. The new geometrical feature is a derived geometrical feature. Dimensional specification Dimensional tolerances are dealt with in ISO 14405: ISO 14405-1:2016 Linear sizes ISO 14405-2:2018 Dimensions other than linear or angular sizes ISO 14405-3:2016 Angular sizes The linear size is indicated above a line ended with arrows and numerical values for the nominal size and the tolerance. The linear size of a geometrical feature of size is defined, by default, as the distances between opposite points taken from the surface of the real part. The process to build both the sections and the directions needed to identify the opposite points is defined in the ISO 14405-1 standard. This process includes the definition of an associated perfect geometrical feature of the same type as the nominal geometrical feature. By default, a least-squares criterion is used. This process is defined only for geometrical features where opposite points exist. ISO 14405-2 illustrates cases where dimensional specifications are often misused because opposite points don't exist. In these cases, the use of linear dimensions is considered ambiguous (see example). The recommendation is to replace dimensional specifications with geometrical specifications to properly specify the location of a geometrical feature with respect to another geometrical feature, the datum feature (see examples). Angular sizes are useful for cones, wedges or opposite straight lines. They are defined in ISO 14405-3. The definition implies associating perfect geometrical features, e.g. planes for a wedge, and measuring the angle between lines of those perfect geometrical features in different sections. The angular sizes are indicated with an arrow and numerical values for the nominal size and the tolerance. It is to be noted that angular size specification is different from angularity specification. Angularity specification controls the shape of the toleranced feature, but this is not the case for angular size specification. Size of a cylinder We consider here the specification of a size of a cylinder to illustrate the definition of a size according to ISO 14405-1. The nominal model is assumed to be a perfect cylinder with a dimensional specification of the diameter without any modifiers changing the default definition of size. 
According to ISO 14405-1:2016 annex D, the process to establish a dimension between two opposite points starting from the real surface of the manufactured part, which is nominally a cylinder, is as follows: partition the real surface to identify the portion of the whole surface of the part that is subject to the specification; extract points from the partitioned surface; reconstruct the surface from the extracted points if the number of extracted points is not infinite; filter the reconstructed surface; associate a perfect cylinder to the filtered surface using a least-squares criterion; identify the straight line which is the axis of the associated cylinder; build a plane perpendicular to the associated cylinder axis to identify a cross section; consider the section line which is the intersection of the plane perpendicular to the associated cylinder axis with the filtered surface; associate a perfect circle to the section line using a least-squares criterion; consider a straight line in the cross section passing through the centre of the associated circle; two opposite points are defined as the intersection between the straight line and the section line. See example hereafter for an illustration; a schematic code sketch of this evaluation is also given below. Dimension with envelope requirement Ⓔ The envelope requirement is specified by adding the symbol Ⓔ after the tolerance value of a dimensional specification. The symbol Ⓔ modifies the definition of the dimensional specification in the following way (ISO 14405-1, 3.8): the dimensional specification is applied between two opposite points for the least material side of the dimensional specification; the maximum inscribed dimension specification (for an internal geometrical feature like a cylindrical hole) or the minimum circumscribed dimension specification (for an external geometrical feature like a cylindrical pin) is applied for the maximum material side. The maximum inscribed dimension for a nominally cylindrical hole is defined as the maximum diameter of a perfect cylinder associated to the real surface with a constraint applied to the associated cylinder to stay outside the material of the part. The minimum circumscribed dimension for a nominally cylindrical pin is defined as the minimum diameter of a perfect cylinder associated to the real surface with a constraint applied to the associated cylinder to stay outside the material of the part. See example hereafter for an illustration. Use of the envelope requirement The use of the envelope symbol Ⓔ is closely related to the very common function of fitting parts together. A dimensional specification without envelope on the two parts to be fitted is not sufficient to ensure the fitting because the shape deviation of the parts is not limited by the dimensional specifications. The fitting of a cylindrical pin inside a cylindrical hole, for example, requires limiting the sizes of both geometrical features but also limiting the deviation of straightness of both geometrical features, as it is the combination of the size specification and the geometrical specification (straightness) that will allow the fitting of the two parts. The use of the envelope requirement on a cylindrical hole accepts only the combinations of size and shape that guarantee a minimum passage for a perfect cylinder. The use of the envelope requirement on a cylindrical pin accepts only the combinations of size and shape that guarantee that the material of the pin is inside a maximum perfect cylinder. 
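The two-point size process listed earlier in this section is essentially an algorithmic recipe. As a rough illustration only, and not the normative ISO 14405-1 procedure, the Python sketch below assumes that partition, extraction, filtration and the cylinder association have already been performed, takes the points of one cross-section already projected into the plane perpendicular to the associated cylinder axis, fits a circle by least squares and evaluates two-point diameters through its centre. The point data and helper names are invented for the example.

```python
import numpy as np

def fit_circle_least_squares(pts):
    """Algebraic least-squares (Kasa) circle fit to 2-D points (N x 2):
    minimises the sum of (x^2 + y^2 + a*x + b*y + c)^2."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    a, b, c = sol
    cx, cy = -a / 2.0, -b / 2.0
    r = np.sqrt(cx**2 + cy**2 - c)
    return np.array([cx, cy]), r

def two_point_diameter(pts, centre, direction):
    """Approximate two-point size: distance between the two measured section
    points closest to the line through 'centre' along 'direction', one on
    each side of the centre."""
    d = direction / np.linalg.norm(direction)
    rel = pts - centre
    along = rel @ d                           # signed position along the line
    across = rel @ np.array([-d[1], d[0]])    # distance off the line
    p_pos = pts[along > 0][np.argmin(np.abs(across[along > 0]))]
    p_neg = pts[along < 0][np.argmin(np.abs(across[along < 0]))]
    return np.linalg.norm(p_pos - p_neg)

# Hypothetical section of a nominally 20 mm cylinder with a small form deviation
theta = np.linspace(0.0, 2.0 * np.pi, 72, endpoint=False)
radius = 10.0 + 0.02 * np.sin(3 * theta)          # invented deviation
section = np.column_stack([radius * np.cos(theta), radius * np.sin(theta)])

centre, _ = fit_circle_least_squares(section)
sizes = [two_point_diameter(section, centre, np.array([np.cos(a), np.sin(a)]))
         for a in np.linspace(0.0, np.pi, 18, endpoint=False)]
print(f"local two-point diameters: {min(sizes):.3f} .. {max(sizes):.3f} mm")
```

In a real verification the same evaluation would be repeated for several sections and directions, and the resulting local sizes compared with the specified limits.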
With these requirements, the cylindrical pin and the cylindrical hole will fit even in the worst conditions without over-constraining the parts with specific form specifications. It is to be noted that the use of dimensional size with envelope constrains neither the orientation nor the location of the parts. The use of geometrical specification together with the maximum material requirement (symbol Ⓜ) makes it possible to ensure the fitting of parts when additional constraints on orientation or location are required. ISO 2692:2021 describes the use of the maximum material modifier. Form, orientation, location and run-out specifications GPS&V standards dealing with geometrical specifications are listed below: ISO 1101:2017 Tolerances of form, orientation, location and run-out ISO 5459:2011 Datums and datum systems ISO 5458:2018 Pattern and combined geometrical specification ISO 1660:2017 Profile tolerancing The word geometry, as used in this paragraph, is to be understood as macrogeometry as opposed to surface texture specifications, which are dealt with in other standards. The main source for geometrical specifications in GPS&V standards is ISO 1101. ISO 5459 can be considered as a companion standard to ISO 1101, as it defines datums, which are heavily used in ISO 1101. ISO 5458 and ISO 1660 focus only on subsets of ISO 1101. However, those standards are very useful for the user of GPS&V systems as they cover very common aspects of geometrical tolerancing, namely groups of cylinders or planes and profile specifications (lines and surfaces). A geometrical specification defines the following three objects: toleranced features; datums, if they are specified; tolerance zones. The steps to read a geometrical specification can be summarised as follows: identify the toleranced feature as a portion of the skin model or a feature that can be built from the skin model, like an imperfect line representing an axis, for example; build the specified datum by first associating perfect geometrical features to a (real) datum feature and then building a situation feature from those associated datums to obtain the specified datum; build the tolerance zone as a perfect volume or surface that can be constrained in orientation or location from the datum; check whether the toleranced feature lies entirely inside the tolerance zone. Toleranced feature Toleranced features are defined in ISO 1101. The toleranced feature is a real geometrical feature with imperfect geometry identified either directly from the skin model (integral feature) or by a process starting from the skin model (derived feature). The integral feature is a portion of the skin model directly identified by a partition with extraction and possibly filtration. The derived feature is built from the skin model by a specific process that is defined by default in GPS&V standards. For example, when the axis of a cylinder is indicated by the geometrical specification (see example), then the toleranced feature is a line made of the centres of associated circles in each section. The sections are defined to be perpendicular to the axis of a cylinder associated to the integral feature. The least-squares criterion is used by default. Whether the toleranced feature is an integral feature or a derived feature depends upon the precise writing of the corresponding specification: it is a derived feature if the arrow of the leader line of the specification is in the prolongation of a dimension line; otherwise it is an integral feature. A Ⓐ modifier can also be used in the specification to designate a derived feature. 
The nominal toleranced feature is a geometrical feature with perfect geometry defined in the TPD corresponding to the toleranced feature. Datum Datums are defined in ISO 5459 as a simulation, in a single-part specification, of a contact partner that is missing from that specification. The contact types "planar touch" and "fit of linear size" are covered by defaults. With this simulation, a mismatch can appear between the specification and the actual function, which only appears in the assembly constraints. In essence, the datum is used to link the toleranced feature (imperfect real geometry) to the tolerance zone (perfect geometry). As such, the datum object is threefold: the datum feature is a geometrical feature of imperfect geometry obtained from the skin model (real part) by a partition (the nominal datum is identified on the nominal model by a triangle connected to a frame containing the name of the datum, a capital letter); the associated datum feature is obtained by associating a geometrical feature with perfect geometry to the (real) datum feature, where the default process and criterion to be applied for the association are defined in ISO 5459 and the criterion can be different for different geometrical features; the specified datum is a situation feature built from the associated datums. The link between the orientation, location or run-out specification and the datums is specified in the geometrical specification frame as follows: the primary datum is in the third cell of a geometrical specification, if any; the secondary datum is in the fourth cell of the geometrical specification, if any; the tertiary datum is in the fifth cell of the geometrical specification, if any. Some geometrical specifications may not have any datum section at all (e.g. form specification). The content of each cell can be either: a single datum identified by a capital letter such as 'A' (or several capital letters without separators like 'AA' or 'AAA') or a common datum identified by a sequence of capital letters with a dash separator such as A-B (or a sequence of several capital letters separated by dashes like 'AA-BBB'). The process to build a datum system is first described and the process for building a common datum follows. Datum system A datum is identified by at most three cells in the geometrical specification frame corresponding to primary, secondary and tertiary datums. For the primary, secondary and tertiary datum, a feature of perfect geometry of the same kind as the nominal feature is associated to the real feature as described hereafter: The primary datum is built by associating a feature of perfect geometry with the default association. In ISO 5459:2011, for a plane, the default association is to minimize the maximum distance between the associated feature (a perfect plane) and the real feature, with a constraint for the associated feature to stay outside the material of the part. The secondary datum is built in the same way as the primary datum with an additional constraint for the associated feature to be oriented from the primary datum as described on the nominal model. The tertiary datum is built in the same way as the secondary datum with an additional constraint for the associated feature to be oriented from the secondary datum as described on the nominal model. The result is a set of associated features. Finally, this set of associated features is used to build a situation feature which is the specified datum. 
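To make the default plane association just described more tangible, the sketch below formulates it, under a small-angle assumption, as a small linear programme: minimise the maximum distance between the associated plane and the measured points while keeping the plane outside the material. This is only a schematic interpretation with invented measurement data, not the normative ISO 5459 algorithm.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical points (x, y, z) measured on a nominally horizontal datum face,
# with the part material lying on the +z side of the surface.
rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 50.0, size=(40, 2))
z = 0.01 * xy[:, 0] / 50.0 + rng.normal(0.0, 0.003, size=40)   # invented deviations

# Associated plane: z = p0 + p1*x + p2*y.
# "Outside the material":  p0 + p1*x + p2*y <= z_i   (residual r_i >= 0)
# Minimax objective:       minimise e with r_i <= e.
# Variables: [p0, p1, p2, e].
n = len(z)
A1 = np.column_stack([np.ones(n), xy, np.zeros(n)])        #  p0 + p1*x + p2*y       <= z_i
A2 = np.column_stack([-np.ones(n), -xy, -np.ones(n)])      # -(p0 + p1*x + p2*y) - e <= -z_i
res = linprog(c=[0.0, 0.0, 0.0, 1.0],
              A_ub=np.vstack([A1, A2]),
              b_ub=np.concatenate([z, -z]),
              bounds=[(None, None)] * 3 + [(0.0, None)])
p0, p1, p2, e = res.x
print(f"associated datum plane: z = {p0:.4f} + {p1:.5f}*x + {p2:.5f}*y  (max distance {e:.4f})")
```

The same pattern, with extra linear constraints, would express the orientation conditions that the secondary and tertiary datums must satisfy with respect to the datums preceding them.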
Common datum The datum features are identified on the skin model from the datum component in the dash-separated list of nominal datums appearing in a particular cell of an orientation or location specification. The common datum can be used as primary, secondary or tertiary datum. In either case, the process to build a common datum is the same; however, additional orientation constraints shall be added when the common datum is used as a secondary or tertiary datum, as is done for datum systems and explained hereafter. The criterion for association of a common datum is applied to all the associated features together, with the following constraints: external material constraints; orientation and location constraints between the associated features of the common datum; additional orientation constraints with respect to the preceding datum in the hierarchy. The result is a set of associated features. Finally, this set of associated features is used to build a situation feature which is the specified datum. Situation feature The final step in the datum establishment process is to combine the associated features to obtain a final object defined as a situation feature, which is identified with the specified datum (ISO 5459:2011 Table B.1). It is a member of the following set: a point; a straight line; a plane; a straight line containing a point; a plane containing a straight line; a plane containing a straight line and a point. How to build the situation features, and therefore the specified datum, is currently mainly defined through examples in ISO 5459:2011. More specific rules are under development. The specified datum concept is closely related to classes of surfaces invariant through displacements. It has been shown that surfaces can be classified according to the displacements that leave them invariant. The number of classes is seven. If a displacement leaves a surface invariant, then this displacement cannot be locked by the corresponding specified datum. So the displacements that do not leave the surface invariant are used to lock specific degrees of freedom of the tolerance zone. For example, a set of associated datums made of three mutually perpendicular planes corresponds to the following situation feature: a plane containing a straight line containing a point. The plane is the first associated plane obtained, the line is the intersection between the second associated plane and the first one, and the point is the intersection between the line and the third associated plane. The specified datum therefore belongs to the complex invariance class, and all the degrees of freedom of a tolerance zone can be locked with this specified datum. The invariance class graphic symbols are not defined in ISO standards but only used in the literature as a useful reminder. A helicoidal class can also be defined; however, it is generally replaced with a cylindrical class in real-world applications. Tolerance zone Tolerance zones are defined in ISO 1101. The tolerance zone is a surface or a volume with perfect geometry. It is a surface when it is intended to contain a toleranced feature which is a line. It is a volume when it is intended to contain a toleranced feature which is a surface. It can often be described as a rigid body with the following attributes: the shape is in most cases the volume between two opposite parallel planes (resp. 
the area between two parallel lines) or a cylinder if the symbol ⌀ precedes the numerical value in the second section of the geometrical specification frame, or a sphere if the symbol S⌀ is used; the size, given by a numerical value in the second section of the geometrical specification frame; orientation constraints with respect to the specified datum from the geometrical specification frame if the geometrical specification is an orientation or a location specification; location constraints with respect to the specified datum from the geometrical specification frame if the geometrical specification is a location specification; orientation and location constraints between tolerance zones if the modifier CZ ('Combined Zone') is indicated in the second cell of the geometrical specification. Theoretical Exact Dimension (TED) TEDs are identified on a nominal model by dimensions with a framed nominal value without any tolerance. Those dimensions are not specifications by themselves but are needed when applying constraints to build datums or to determine the orientation or location of the tolerance zone. TEDs can also be used for other purposes, e.g. to define the nominal shape or dimensions of a profile. When applying constraints, generally two types of TED are to be taken into account: explicit TEDs, which are written on an engineering drawing or may be obtained by querying a CAD model; and implicit TEDs, which are the distance of 0 mm for two coincident lines, 0° (modulo 180°) for parallel lines or 90° (modulo 180°) for perpendicular lines. Geometrical specification families The geometrical specifications are divided into three categories: form, orientation and location. Run-out specification is another family that involves both form and location. Examples Presentation This paragraph contains examples of dimensional and geometrical specifications to illustrate the definition and use of dimensional and positional specifications. The dimensions and tolerance values (displayed in blue in the figures) shall be numerical values on actual drawings. d, l1, l2 are used for length values. Δd is used for a dimensional tolerance value and t, t1, t2 for positional tolerance values. For each example we present: the drawing showing the geometry of the nominal model and a specification; figures illustrating the meaning of the specification on a particular real part with deviations. The deviations are enlarged compared to actual parts in order to show as clearly as possible the steps necessary to build the GPS&V operators. First angle projection is used in the technical drawings. Dimensional specifications Diameter of a cylindrical part The drawing above shows a cylindrical part with the specification of the diameter. The nominal value d and the tolerance value Δd shall be replaced with numerical values on an actual drawing. The real part above (1) in orange is shown with its deviation. The green lines (2) represent an associated cylinder. The red axis line (3) represents the axis of the associated cylinder. The blue lines (4) represent two particular sections. All sections (an infinite number) shall be considered theoretically. At the verification stage, only some sections will be measured, introducing uncertainty in the result. A section of the real part is represented above with the real line in orange (4). The blue line (3) is an associated circle. The blue cross (2) is the centre of the associated circle. The green cross (1) represents the axis of the associated cylinder shown in green in the real part figure. 
The two dots (6) represent two opposite points on the real surface. The dimension (5) is one of the local dimensions measured. Diameter of a cylindrical part with envelope Ⓔ The drawing shows a cylindrical part with the specification of the diameter with a modifier Ⓔ for the envelope requirement. The nominal value d and the tolerance value Δd shall be replaced with numerical values on an actual drawing. The real part (1) in orange is shown with its deviation. The green lines (2) represent an associated cylinder. The red axis line (3) represents the axis of the associated cylinder. The blue lines (4) represent two particular sections. All sections shall be considered. The orange dimensions (6) represent dimensions in particular sections. The purple line (5) represents the envelope cylinder (perfect cylinder). The dimension in purple (7) is the dimension of the envelope, specifically d+Δd/2. The verification is twofold: the local dimensions shall be greater than d-Δd, and the surface of the real part shall fit into the envelope. Ambiguous dimension The drawing above shows a part with a dimensional specification. The red cross over this specification means that this type of specification is discouraged in ISO 14405-2 because it is not possible to find opposite points over the complete surface extent. The nominal value d and the tolerance value Δd shall be replaced with numerical values on an actual drawing. The real part above in orange is shown with its deviation. The upper dimension (orange) has two opposite points and therefore could be defined; however, the lower one is missing an opposite point, so that the dimensional specification is considered ambiguous and should be replaced with a geometrical specification. This example is often surprising for new practitioners of GPS&V. However, it is a direct consequence of the definition of a linear dimension in ISO 14405-1. The function targeted here is probably to locate the two planes; therefore, a location specification on one surface with respect to the other surface, or the location of the two surfaces with respect to one another, is considered the right way to achieve the function. See examples. Positional specifications Location of a plane with respect to another plane (case 1) The drawing above shows a part with a location specification with respect to the datum named A, which is indicated on the left planar surface. The real part below in orange is shown with its deviation. The process to build or identify the toleranced feature, the specified datum and the tolerance zone is described in the table below. This specification could be useful when one surface (datum plane in this case) has a higher priority in the assembly process. For example, a second part could be required to fit inside the slot being guided by the plane where the datum has been indicated. The part is not conformant to the specification for this particular real part, as the toleranced feature (orange line segment) is not included in the tolerance zone (green). Location of a plane with respect to another plane (case 2) The drawing above shows a part with a location specification with respect to the datum named A, which is indicated on the right planar surface. The real part below in orange is shown with its deviation. The process to build or identify the toleranced feature, the specified datum and the tolerance zone is described in the table hereafter. 
Case 2 is similar to case 1 above; however, the toleranced feature and the datum are switched, so that the result is totally different, as explained above. This specification could be useful when one surface (datum plane) has a higher priority than the other surface in the assembly process. For example, a second part could be required to fit inside the slot being guided by the plane where the datum has been indicated. The part is not conformant to the specification for this particular real part, as the toleranced feature (orange line segment) is not included in the tolerance zone (green). Location of planes with respect to one another (case 3) The drawing above shows a part with a location specification with a CZ symbol. No datums are indicated on purpose. The real part above in orange is shown with its deviation. The building or identification of the toleranced feature and the tolerance zone is described in the table hereafter. This specification could be useful when the two surfaces (planes in this case) have the same priority in the assembly process. For example, a second part could be required to fit inside the slot being guided by the two planes. The part is conformant to the specification for this particular real part, as the toleranced feature (two orange line segments) is included in the tolerance zone (green). Location of a hole with respect to the edges of a plate The drawing above shows a part with a location specification for a hole with respect to a system of datums. The real part below in orange is shown with its deviation. The process to identify and build the toleranced feature, the specified datum and the tolerance zone is indicated below. This specification could be useful when the hole is actually located from the edges of the plate in an assembly process and where the A surface has a higher priority than B. If the assembly process is modified, then the datum specification shall be adapted accordingly. The order of the datums is important in a datum system as the resulting specified datum can be very different. The part is conformant to the specification for this particular real part, as the toleranced feature (purple line on the left, purple dot on the right) is included in the tolerance zone (green). Surface texture ISO 1302:2002 Indication of surface texture in technical product documentation Measuring equipment and calibration requirements ISO 14978:2018 General concepts and requirements for GPS measuring equipment ISO 10360 Acceptance and reverification tests for coordinate measuring machines (CMM) Uncertainty management for measurement and specification acceptance ISO 14253-1:2017 Inspection by measurement of workpieces and measuring equipment - Part 1: Decision rules for verifying conformity or nonconformity with specifications ISO 18391:2016 Population specification Notes References External links GPS Booklet TC 213 web site ISO standards Metrology Geometric measurement
Geometrical Product Specification and Verification
[ "Physics", "Mathematics" ]
6,454
[ "Geometric measurement", "Quantity", "Physical quantities", "Geometry" ]
72,500,875
https://en.wikipedia.org/wiki/Hexagonal%20ferrite
Hexagonal ferrites or hexaferrites are a family of ferrites with hexagonal crystal structure. The most common member is BaFe12O19, also called barium ferrite, BaM, etc. BaM is a strong room-temperature ferrimagnetic material with high anisotropy along the c axis. All the hexaferrite members are constructed by stacking a few building blocks in a certain order. Basic building blocks S block The S block is very common in hexaferrites and has a chemical formula of (MeS6O8)2+. MeS are smaller metal cations, for example, Fe and other transition metals or noble metals. The S block is essentially a slab cut along the plane of an AB2O4 spinel. Each S block has one A layer and one B layer. The A layer features MeS-centered tetrahedra and MeS-centered octahedra, while the B layer is made up of edge-sharing MeS-centered octahedra. Both A and B layers have the same chemical formula of (MeS3O4)2+. R block The R block has a chemical formula of (MeLMeS6O11)2-. MeL are larger metal cations, for example, alkaline earth metals (Ba, Sr), rare earth metals, Pb, etc. The large metal cations are located in the middle layer of the three hexagonally packed layers. This block is also composed of face-sharing MeS-centered octahedra and MeS-centered trigonal bipyramids. T block The T block has a chemical formula of (MeL2MeS8O14)2-. One T block consists of 4 oxygen layers with the two MeL atoms substituting two oxygen atoms in the middle two layers. In one T block, there are both MeS-centered octahedra and MeS-centered tetrahedra. Family members M-type ferrite M-type ferrite is made up of alternating S and R blocks in the sequence of SRS*R*. (* denotes rotating that layer around the c axis by 180°.) The chemical formula of M-type ferrite is MeLMeS12O19. Common examples are BaFe12O19 and SrFe12O19. It exhibits P63/mmc space group symmetry. For BaFe12O19, a = 5.89 Å and c = 23.18 Å. M-ferrite is a very robust ferrimagnetic material, thus widely used as fridge magnets, card strips, magnets in speakers, and magnetic material in linear tape-open. W-type ferrite W-type ferrite, like the M-type, consists of S and R blocks, but the stacking order and the number of blocks are different. The stacking sequence in a W-ferrite is SSRS*S*R* and its chemical formula is MeLMeS18O27. One example of W-type ferrite is BaFe18O27, with a = 5.88 Å and c = 32.85 Å. R-type ferrite R-type ferrite has a chemical formula of MeLMeS6O11. Unlike other hexaferrites, R-type ferrite doesn't have an S block. Instead, it only has single B layers extracted from the S block. The stacking sequence is BRB*R*. Y-type ferrite Y-type ferrite has a chemical formula of MeL2MeS14O22 with the space group R-3m. One example is Ba2Co2Fe12O22 with a = 5.86 Å and c = 43.5 Å. Y-type ferrite is built up with S and T blocks with an order of 3(ST) in one unit cell. There is no horizontal mirror plane in a Y-type ferrite. Z-type ferrite Z-type ferrite has a chemical formula of MeL3MeS26O41. It has a complicated stacking of SRSTS*R*S*T* in one unit cell. Some Z-type members may have sophisticated magnetic properties along different directions. One example is Ba3Co2Fe24O41 with a = 5.88 Å and c = 52.3 Å. X-type ferrite X-type ferrite has a chemical formula of MeL2MeS30O46. The stacking order is 3(SRS*S*R*) in one unit cell. One example is Sr2Co2Fe28O46 with c = 83.74 Å. 
References Ferromagnetic materials Ceramic materials
Hexagonal ferrite
[ "Physics", "Engineering" ]
1,017
[ "Ferromagnetic materials", "Materials", "Ceramic materials", "Ceramic engineering", "Matter" ]
72,504,928
https://en.wikipedia.org/wiki/Colanic%20acid
Colanic acid is an exopolysaccharide synthesized by bacteria in the Enterobacteriaceae family. It is excreted by the cell to form a protective bacterial capsule, and it assists in the formation of biofilms. Structure Colanic acid is composed of polyanionic heteropolysaccharides with hexasaccharide repeating units, consisting of glucose, fucose, galactose, and glucuronic acid. It also contains O-acetyl groups and pyruvate side chains attached to these sugar molecules. It forms a protective capsule around cells, primarily Enterobacteriaceae. Colanic acid's high molecular weight and branching structure contribute to its high viscosity, while the carboxylic acid groups in its structure are the primary contributors to its acidity. It is considered mildly toxic when injected intraperitoneally in mice, and its effect on mammals can be compared to the effects of low doses of endotoxin, which can cause diarrhea and malaise. E. coli colonies that produce colanic acid are said to be colicinogenic, and appear larger, smoother, and more opaque than those that do not. The colanic acid itself is observed as amorphous, white, and fibrous and is water-soluble as well as soluble in dilute salt solutions. Function The main function of colanic acid is to form a protective slimy capsule around the cell surface under stressful conditions to increase its chances of survival. The stressful environment can come in the forms of desiccation, oxidative stress, and a low pH. Expression of colanic acid in E. coli has been shown to be required for the creation of normal E. coli biofilm architecture. Colanic acid synthesis is up-regulated in biofilms, where acetylation plays a crucial role in modulating its structural conformation and physical and chemical properties. In E. coli, colanic acid plays an essential role in biofilm formation. However, it does not enhance bacterial adhesion, but instead blocks the establishment of specific binding between bacteria and the underlying substrate. Environmental factors Temperature and pH Colanic acid begins to be synthesized and to accumulate at 19 °C. Nutrients modulate the production of colanic acid, with maximal production occurring when glucose and proline are used as carbon and nitrogen sources. E. coli, a member of the Enterobacteriaceae family, is commonly used to study the conditions and effects of colanic acid production. A study showed that E. coli K92 is able to produce colanic acid at temperatures ranging from 19 °C to 42 °C, but it predominates at around 20 °C. Colanic acid is typically produced at a low pH to protect bacteria from the acidic environment. A study was conducted to determine the minimal pH that E. coli could withstand. It was concluded that the production of colanic acid can range from a pH of 2 to a pH of 8, with the initial response to acidity occurring at a pH of 5.5. Colanic acid production in E. coli is dependent on both lipopolysaccharide structure and glucose availability, because important nucleotide-sugar precursors are needed and provided by both. Activation and regulation Activation At least two positive protein regulators, RcsA and RcsB, are involved in the transcription of the operon for capsule (cps) gene expression in E. coli. The activation of colanic acid is due to an initial response to an environmental stimulus such as osmotic shock. This stimulus is relayed to the MdoH gene, which is tied to the biosynthesis of MDOs. 
Unstable MDO levels due to changes within the environment trigger the RcsC sensor to directly or indirectly relay the signal to the RcsB gene, which is a main activator of cps expression. The RcsA gene activates its own expression. Regulation The cps colanic acid operon can control the biosynthesis of colanic acid. It is composed of one large transcriptional unit that contains a ugd gene right outside the cps operon. It has been shown that the transcriptional antiterminator rfaH promotes cps transcription. It does so by mediating the cps operon and promoting ugd expression. A study was conducted to test whether RfaH was able to enhance cps colanic acid transcription for colanic acid production. E. coli K92 wild-type and rfaH mutant strains were grown and analyzed. It was observed that the deletion of rfaH dramatically decreased colanic acid production in both. References Escherichia coli Polysaccharides
Colanic acid
[ "Chemistry", "Biology" ]
964
[ "Model organisms", "Carbohydrates", "Escherichia coli", "Polysaccharides" ]
72,505,162
https://en.wikipedia.org/wiki/Rate-limiting%20step%20%28biochemistry%29
In biochemistry, a rate-limiting step is a reaction step that controls the rate of a series of biochemical reactions. This statement is, however, a misunderstanding of how a sequence of enzyme-catalyzed reaction steps operates. Rather than a single step controlling the rate, it has been discovered that multiple steps control the rate. Moreover, each controlling step controls the rate to varying degrees. Blackman (1905) stated as an axiom: "when a process is conditioned as to its rapidity by a number of separate factors, the rate of the process is limited by the pace of the slowest factor." This implies that it should be possible, by studying the behavior of a complicated system such as a metabolic pathway, to characterize a single factor or reaction (namely the slowest), which plays the role of a master or rate-limiting step. In other words, the study of flux control can be simplified to the study of a single enzyme since, by definition, there can only be one 'rate-limiting' step. Since its conception, the 'rate-limiting' step has played a significant role in suggesting how metabolic pathways are controlled. Unfortunately, the notion of a 'rate-limiting' step is erroneous, at least under steady-state conditions. Modern biochemistry textbooks have begun to play down the concept. For example, the seventh edition of Lehninger Principles of Biochemistry explicitly states: "It has now become clear that, in most pathways, the control of flux is distributed among several enzymes, and the extent to which each contributes to the control varies with metabolic circumstances". However, the concept is still incorrectly used in research articles. Historical perspective From the 1920s to the 1950s, there were a number of authors who discussed the concept of rate-limiting steps, also known as master reactions. Several authors have stated that the concept of the 'rate-limiting' step is incorrect. Burton (1936) was one of the first to point out that: "In the steady state of reaction chains, the principle of the master reaction has no application". Hearon (1952) made a more general mathematical analysis and developed strict rules for the prediction of mastery in a linear sequence of enzyme-catalysed reactions. Webb (1963) was highly critical of the concept of the rate-limiting step and of its blind application to solving problems of regulation in metabolism. Waley (1964) made a simple but illuminating analysis of simple linear chains. He showed that provided the intermediate concentrations were low compared to the Km values of the enzymes, the following expression was valid: J = C / (c1/V1 + c2/V2 + ... + cn/Vn), where J equals the pathway flux, and C and the ci are functions of the rate constants and intermediate metabolite concentrations. The Vi terms are proportional to the limiting rate values of the enzymes. The first point to note from the above equation is that the pathway flux is a function of all the enzymes; there is no need for there to be a 'rate-limiting' step. If, however, all the terms from c2/V2 to cn/Vn are small relative to c1/V1, then the first enzyme will contribute the most to determining the flux and therefore could be termed the 'rate-limiting' step. Modern perspective The modern perspective is that rate-limitingness should be quantitative and that it is distributed through a pathway to varying degrees. This idea was first considered by Higgins in the late 1950s as part of his PhD thesis where he introduced the quantitative measure he called the ‘reflection coefficient.’ This described the relative change of one variable to another for small perturbations. 
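The distributed nature of control can be illustrated with a toy calculation. The sketch below assumes a two-enzyme linear pathway with simple linear (near-equilibrium) kinetics, an assumption made only for this example and not taken from the cited works; it estimates each enzyme's scaled sensitivity (flux control coefficient), in the spirit of Higgins' reflection coefficient, by perturbing the enzyme activities slightly.

```python
# Toy two-step pathway  X0 -> S -> X1  with linear kinetics:
#   v1 = e1*(X0 - S),  v2 = e2*(S - X1),  with X0 and X1 held fixed.
# At steady state v1 = v2, which gives the closed-form flux below.

def steady_state_flux(e1, e2, x0=10.0, x1=0.0):
    # S = (e1*x0 + e2*x1)/(e1 + e2); substituting back into v1 gives:
    return e1 * e2 * (x0 - x1) / (e1 + e2)

def control_coefficient(e1, e2, which, rel=1e-6):
    """Scaled sensitivity C_i = (dJ/J)/(de_i/e_i), estimated by a small
    relative perturbation of one enzyme activity."""
    j0 = steady_state_flux(e1, e2)
    if which == 1:
        j1 = steady_state_flux(e1 * (1 + rel), e2)
    else:
        j1 = steady_state_flux(e1, e2 * (1 + rel))
    return (j1 - j0) / j0 / rel

e1, e2 = 2.0, 0.5          # invented enzyme activities
c1 = control_coefficient(e1, e2, 1)
c2 = control_coefficient(e1, e2, 2)
print(f"C1 = {c1:.3f}, C2 = {c2:.3f}, sum = {c1 + c2:.3f}")
# Control is shared (here C1 is about 0.2 and C2 about 0.8) and the
# coefficients sum to about 1, rather than one step being 'the' limiting step.
```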
In his Ph.D. thesis, Higgins describes many properties of the reflection coefficients, and in later work three groups (Savageau; Heinrich and Rapoport; and Jim Burns, in his 1971 thesis and subsequent publications) independently and simultaneously developed this work into what is now called metabolic control analysis or, in the specific form developed by Savageau, biochemical systems theory. These developments extended Higgins’ original ideas significantly, and the formalism is now the primary theoretical approach to describing deterministic, continuous models of biochemical networks. The variations in terminology between the different papers on metabolic control analysis were later harmonized by general agreement. See also Branched pathways Metabolic control analysis Biochemical systems theory Committed step References Biochemical reactions Enzyme kinetics
Rate-limiting step (biochemistry)
[ "Chemistry", "Biology" ]
855
[ "Biochemistry", "Chemical kinetics", "Enzyme kinetics", "Biochemical reactions" ]
72,507,672
https://en.wikipedia.org/wiki/NGC%206956
NGC 6956 is a barred spiral galaxy located in the constellation Delphinus. It is located at a distance of about 214 million light-years from Earth. Friedrich Wilhelm Herschel discovered this galaxy on 9 October 1784. Three supernovae have been observed in NGC 6956: SN 2006it (type IIP, mag. 17.6), SN 2013fa (type Ia, mag. 16.2), and SN PSNJ20435314+1230304 (type Ia, mag. 15.8, discovered 11 July 2015). References External links NGC 6956 on SIMBAD Barred spiral galaxies Seyfert galaxies Cepheus (constellation) 6956 11619 65269 +02-53-001 J20435368+1230429
NGC 6956
[ "Astronomy" ]
166
[ "Constellations", "Cepheus (constellation)" ]
63,826,387
https://en.wikipedia.org/wiki/Oxyhydride
An oxyhydride is a mixed anion compound containing both oxide (O2−) and hydride (H−) ions. These compounds may seem unexpected, as the hydrogen and oxygen could be expected to react to form water. But if the metals making up the cations are electropositive enough, and the conditions are reducing enough, solid materials can be made that combine hydrogen and oxygen in the negative ion role. Production The first oxyhydride to be discovered was lanthanum oxyhydride, reported in 1982. It was made by heating lanthanum oxide in an atmosphere of hydrogen at 900 °C. However, heating transition metal oxides with hydrogen usually results in water and the reduced metal. Topochemical synthesis retains the basic structure of the parent compound, and only makes the minimum rearrangement of atoms to convert to the final product. Topotactic transitions retain the original crystal symmetry. Reactions at lower temperatures do not distort the existing structure. Oxyhydrides in a topochemical synthesis can be produced by heating oxides with sodium hydride NaH or calcium hydride CaH2 at temperatures from 200–600 °C. TiH2 or LiH can also be used as an agent to introduce hydride. If calcium hydroxide or sodium hydroxide is formed, it may be possible to wash it away. However, for some starting oxides, this kind of hydride reduction might just yield an oxygen-deficient oxide. Oxyhydrides can also result from heating hydrides with oxides under hot, high-pressure hydrogen. A suitable seal for the lid on the container is required, and one such substance is sodium chloride. Oxyhydrides all contain an alkali metal, alkaline earth metal, or rare-earth element, which are needed in order to put electronic charge on hydrogen. Properties The bonding of hydrogen in oxyhydrides can be covalent, metallic, or ionic, depending on the metals present in the compound. Oxyhydrides lose their hydrogen less readily than the pure metal hydrides. The hydrogen in oxyhydrides is much more exchangeable. For example, oxynitrides can be made at much lower temperatures by heating the oxyhydride in ammonia or nitrogen gas (around 400 °C rather than the 900 °C required for an oxide). Acidic attack can replace the hydrogen; for example, moderate heating in hydrogen fluoride yields compounds containing oxide, fluoride, and hydride ions (oxyfluorohydrides). The hydrogen is more thermolabile, and can be lost by heating, yielding a reduced-valence metal compound. Changing the ratio of hydrogen and oxygen can modify electrical or magnetic properties. The band gap can be altered. The hydride atom can be mobile in a compound undergoing electron-coupled hydride transfer. The hydride ion is highly polarisable, so its presence raises the dielectric constant and refractive index. Some oxyhydrides have photocatalytic capability. For example, BaTiO2.5H0.5 can function as a catalyst for ammonia production from hydrogen and nitrogen. The hydride ion is quite variable in size, ranging from 130 to 153 pm. The hydride ion actually does not only have a −1 charge, but will have a charge dependent on its environment, so it is often written as Hδ−. In oxyhydrides, the hydride ion is much more compressible than the other atoms in the compound. Hydride is the only anion with no π orbital, so if it is incorporated into a compound, it acts as a π-blocker, reducing the dimensionality of the solid. Oxyhydride structures with heavy metals cannot be properly studied with X-ray diffraction, as hydrogen hardly has any effect on X-rays. 
Neutron diffraction can be used to observe hydrogen, but not if there are heavy neutron absorbers like Eu, Sm, Gd, Dy in the material. See also Hydrous oxide (oxide-hydroxide) Aldehyde References Hydrides Oxides Mixed anion compounds
Oxyhydride
[ "Physics", "Chemistry" ]
864
[ "Matter", "Mixed anion compounds", "Oxides", "Salts", "Ions" ]
68,116,358
https://en.wikipedia.org/wiki/Radon%20storm
A radon storm is a day-long episode of increased atmospheric radon concentration due to moving air masses. In Antarctica and over the Southern Ocean, they often occur due to the arrival of continental air from South America and Africa, and the concept was coined to describe sudden radon concentration increases there. Naturally, radon increases in concentration threefold in Antarctic air in the summer months of December and January. References Radon Regional climate effects Anomalous weather Climate of Antarctica
Radon storm
[ "Physics" ]
95
[ "Weather", "Physical phenomena", "Anomalous weather" ]
68,116,797
https://en.wikipedia.org/wiki/Assembly%20theory
Assembly theory is a framework developed to quantify the complexity of molecules and objects by assessing the minimal number of steps required to assemble them from fundamental building blocks. Proposed by chemist Lee Cronin and his team, the theory assigns an assembly index to molecules, which serves as a measurable indicator of their structural complexity. Cronin and colleagues argue that this approach allows for experimental verification and has applications in understanding selection processes, evolution, and the identification of biosignatures in astrobiology. However, the usefulness of the approach has been disputed. Background The hypothesis was proposed by chemist Leroy Cronin in 2017 and developed by the team he leads at the University of Glasgow, then extended in collaboration with a team at Arizona State University led by astrobiologist Sara Imari Walker, in a paper released in 2021. Assembly theory conceptualizes objects not as point particles, but as entities defined by their possible formation histories. This allows objects to show evidence of selection, within well-defined boundaries of individuals or selected units. Combinatorial objects are important in chemistry, biology and technology, in which most objects of interest (if not all) are hierarchical modular structures. For any object an 'assembly space' can be defined as all recursively assembled pathways that produce this object. The 'assembly index' is the number of steps on a shortest path producing the object. For such a shortest path, the assembly space captures the minimal memory, in terms of the minimal number of operations necessary to construct an object based on objects that could have existed in its past. The assembly is defined as "the total amount of selection necessary to produce an ensemble of observed objects"; for an ensemble containing NT objects in total, of which N are unique, the assembly is defined to be A = Σi e^(ai)·(ni − 1)/NT, summing over the N unique object types, where ni denotes 'copy number', the number of occurrences of objects of type i having assembly index ai. For example, the word 'abracadabra' contains 5 unique letters (a, b, c, d and r) and is 11 symbols long. It can be assembled from its constituents as a + b --> ab, ab + r --> abr, abr + a --> abra, abra + c --> abrac, abrac + a --> abraca, abraca + d --> abracad, abracad + abra --> abracadabra, because 'abra' was already constructed at an earlier stage. Because this requires at least 7 steps, the assembly index is 7. The word ‘abracadrbaa’, of the same length, for example, has no repeats so has an assembly index of 10. Take two binary strings as another example: the string 01010101 and a second string of the same length (8 bits) and the same Hamming weight (4) but with less internal repetition. The assembly index of the first string is 3 ("01" is assembled, joined with itself into "0101", and joined again with "0101" taken from the assembly pool), while the assembly index of the second string is larger, since in this case only "01" can be taken from the assembly pool. In general, for K subunits of an object O the assembly index a is bounded by log2(K) ≤ a ≤ K − 1. Once a pathway to assemble an object is discovered, the object can be reproduced. The rate of discovery of new objects can be described by an expansion rate, introducing a discovery timescale. To include copy number in the dynamics of assembly theory, a production timescale is defined in terms of the production rate of a specific object. Defining these two distinct timescales, one for the initial discovery of an object and one for making copies of existing objects, makes it possible to determine the regimes in which selection is possible. 
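For short strings like the examples above, the assembly index can be found by a direct search over join pathways. The following Python sketch is an illustrative brute-force implementation written for this article's word examples; it treats single characters as free building blocks and counts joining operations, and it is not the algorithm used by Cronin's group for molecules.

```python
def assembly_index(target: str) -> int:
    """Assembly index of a short string under concatenation: the minimum
    number of distinct substrings of length >= 2 (including the target)
    such that each one splits into two parts that are single characters or
    other members of the set.  Each such substring costs exactly one join,
    so this count equals the minimal number of joining operations.
    Exponential-time branch and bound; only practical for short strings."""
    best = len(target) - 1                    # always achievable: one character at a time

    def solve(unresolved, built):
        nonlocal best
        if len(built) >= best:
            return                            # cannot improve on the best pathway found
        if not unresolved:
            best = len(built)
            return
        s = max(unresolved, key=len)          # resolve the longest pending string first
        rest = unresolved - {s}
        for cut in range(1, len(s)):
            parts = [p for p in (s[:cut], s[cut:]) if len(p) >= 2 and p not in built]
            solve(rest | set(parts), built | set(parts))

    solve(frozenset([target]), frozenset([target]))
    return best

print(assembly_index("abracadabra"))   # 7, matching the pathway given in the text
print(assembly_index("abracadrbaa"))   # 10, since no substring of length >= 2 repeats
print(assembly_index("01010101"))      # 3: 01 -> 0101 -> 01010101
```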
While other approaches can provide a measure of complexity, the researchers claim that assembly theory's molecular assembly number is the first to be measurable experimentally. Molecules with a high assembly index are very unlikely to form abiotically, and the probability of abiotic formation goes down as the value of the assembly index increases. The assembly index of a molecule can be obtained directly via spectroscopic methods. This method could be implemented in a fragmentation tandem mass spectrometry instrument to search for biosignatures. The theory was extended to map chemical space with molecular assembly trees, demonstrating the application of this approach in drug discovery, in particular in research on new opiate-like molecules by connecting the "assembly pool elements through the same pattern in which they were disconnected from their parent compound(s)". It is difficult to identify chemical signatures that are unique to life. For example, the Viking lander biological experiments detected molecules that could be explained by either living or natural non-living processes. It appears that only living samples can produce assembly index measurements above ~15. However, in 2021, Cronin had already explained how polyoxometalates could in theory have large assembly indices (>15) due to autocatalysis. Critical views Chemist Steven A. Benner has publicly criticized various aspects of assembly theory. Benner argues that it is transparently false that non-living systems, with no intervention of life, cannot contain complex molecules, and that people would be misled into thinking that, because the work was published in Nature journals after peer review, these papers must be right. A paper published in the Journal of Molecular Evolution concludes that "the hype around Assembly Theory reflects rather unfavorably both on the authors and the scientific publication system in general". The author concludes that what "assembly theory really does is to detect and quantify bias caused by higher-level constraints in some well-defined rule-based worlds"; one "can use assembly theory to check whether something unexpected is going on in a very broad range of computational model worlds or universes". Another paper authored by a group of chemists and planetary scientists published in the Journal of the Royal Society Interface demonstrated that abiotic chemical processes have the potential to form crystal structures of great complexity, with values exceeding the proposed abiotic/biotic divide of MA index = 15. They conclude that "while the proposal of a biosignature based on a molecular assembly index of 15 is an intriguing and testable concept, the contention that only life can generate molecular structures with MA index ≥ 15 is in error". Two papers published in 2024 argue that assembly theory provides no insights beyond those already available using algorithmic complexity and Claude Shannon's information theory. See also List of interstellar and circumstellar molecules Smallest grammar problem Word problem for groups References Further reading Extraterrestrial life Molecular biology techniques Theories
Assembly theory
[ "Chemistry", "Astronomy", "Biology" ]
1,318
[ "Hypothetical life forms", "Extraterrestrial life", "Molecular biology techniques", "Astronomical controversies", "Molecular biology", "Biological hypotheses" ]
68,130,212
https://en.wikipedia.org/wiki/Salsabil%20%28fountain%29
A salsabil (or salasabil), also known as a shadirwan, is a type of fountain which maximizes the surface area of the water. It is used for evaporative cooling of buildings, cooling and aeration of drinking water, and ornament (it has also been used to prevent eavesdropping). The water may flow in a thin sheet or thin streams, often over a wavy surface with many little waterfalls. Its use extends from southern Spain through north Africa and the Middle East to northern India. Etymology and name The name salsabil () likely derives from a Qur'anic reference. The term shadirwan is also used for devices for aerating drinking water. However, the term shadirwan or shadirvan (, , ) has slightly different uses in other cultures, such as designating a central ablutions fountain for a mosque courtyard in Turkish (see shadirvan). Design and setting The water flows in a manner designed to maximize the surface area, and thus evaporation. A salsabil may be a near-vertical marble waterfall mounted on a wall, or the sheet of water may flow down a slanted chute. Evaporative cooling causes the water and the surrounding air to cool as some of the water evaporates. Passive ventilation may be used to maximize the flow of unsaturated air over the water surface and carry the cooled air to where it is needed in the building. Salasabils are often used with windcatchers. A salsabil may also be used to aerate water for drinking in a sabil (or sebil; , ). Salsabils, in the form of inclined marble slabs over which drinking water flowed before being dispensed, were often included inside the sabils of Mamluk architecture. Salasabils were used in Mughal architecture from the 1200s to the 1600s. They were also used in recent centuries in Iran. They were sometimes used as decorative features in Ottoman domestic architecture. See also Passive cooling References Passive cooling Passive ventilation Water and Islam Water treatment
Salsabil (fountain)
[ "Chemistry", "Engineering", "Environmental_science" ]
432
[ "Water treatment", "Water pollution", "Water technology", "Environmental engineering" ]
75,307,284
https://en.wikipedia.org/wiki/%CE%91-Halo%20carboxylic%20acids%20and%20esters
α-Halo carboxylic acids and esters are organic compounds with the respective formulas RCH(X)CO2H and RCH(X)CO2R', where R and R' are organic substituents. The X in these compounds is a halide, usually chloride or bromide. These compounds are often used as intermediates in the preparation of more elaborate derivatives. They are often potent alkylating agents. The monohalide derivatives are chiral. Preparation They are often prepared by reaction of the acid or the ester with a halogen; a related method is the Hell-Volhard-Zelinsky halogenation. Amino acids are susceptible to diazotization in the presence of chloride, a process that affords chiral 2-chloro carboxylic acids and esters. Reactions Consistent with these compounds being alkylating agents, the α-halide is readily substituted, e.g. by azide. Similarly, α-bromocarboxylic acids undergo nucleophilic substitution with ammonia to give the corresponding amino acids. The Darzens reaction involves a ketone or aldehyde with an α-haloester in the presence of a base to form an α,β-epoxy ester, also called a "glycidic ester". The reaction process begins with deprotonation at the halogenated position. In a related reaction, α-halo carboxylic esters can be reduced by lithium aluminium hydride to the α-halo alcohols, which can be converted to the α-epoxides. α-Halo esters can be converted to vinyl halides upon reaction with ketones and chromous chloride. Applications A prominent α-halo carboxylic acid is chloroacetic acid, which is used to produce carboxymethyl cellulose, carboxymethyl starch, as well as several phenoxy herbicides. 2,2-Dichloropropionic acid ("Dalapon") is an herbicide. References Functional groups Alkylating agents Organochlorides
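The structural class described above is easy to recognize programmatically. The following Python sketch uses RDKit (an assumption; the article does not mention any software) to parse a few representative compounds from SMILES, print their molecular formulas, and test for a halogen on the carbon alpha to a carboxylic acid or ester group; the SMILES strings and the SMARTS pattern are illustrative choices.

```python
from rdkit import Chem
from rdkit.Chem.rdMolDescriptors import CalcMolFormula

# Representative alpha-halo carboxylic acids and esters (illustrative examples)
examples = {
    "chloroacetic acid":                    "OC(=O)CCl",
    "2-bromopropanoic acid":                "CC(Br)C(=O)O",
    "ethyl 2-chloropropanoate":             "CCOC(=O)C(C)Cl",
    "2,2-dichloropropionic acid (Dalapon)": "CC(Cl)(Cl)C(=O)O",
}

# Halogen bound to an sp3 carbon that carries a carboxylic acid or ester group
alpha_halo = Chem.MolFromSmarts("[F,Cl,Br,I][CX4][CX3](=O)[OX2]")

for name, smiles in examples.items():
    mol = Chem.MolFromSmiles(smiles)
    print(f"{name:40s} {CalcMolFormula(mol):12s} "
          f"alpha-halo acid/ester: {mol.HasSubstructMatch(alpha_halo)}")
```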
Α-Halo carboxylic acids and esters
[ "Chemistry" ]
427
[ "Alkylating agents", "Functional groups", "Reagents for organic chemistry" ]
75,311,105
https://en.wikipedia.org/wiki/Gregory%20Odegard
Gregory M. Odegard is a materials researcher and academic. He is the John O. Hallquist Endowed Chair in Computational Mechanics in the Department of Mechanical Engineering – Engineering Mechanics at Michigan Technological University and the director of the NASA Institute for Ultra-Strong Composites by Computational Design. Odegard's work is focused on computational modeling of advanced composite systems, with his research interests spanning multiscale modeling, computational chemistry, materials science, and mechanics of materials. He is the recipient of 2008 Ferdinand P. Beer and E. Russell Johnston Jr. Outstanding New Mechanics Educator Award, 2011 Ralph R. Teetor Educational Award, 2021 Michigan Tech Distinguished Researcher Award, and 2023 NASA Outstanding Public Leadership Medal. Odegard is a Fellow of American Society of Mechanical Engineers (ASME), and an Associate Fellow of American Institute of Aeronautics and Astronautics (AIAA). Education Odegard earned his B.S. in Mechanical Engineering from the University of Colorado Boulder in 1995. He then completed his M.S. in Mechanical Engineering at the University of Denver in 1998, followed by his Ph.D. in materials science from the same institution in 2000 under Maciej S. Kumosa, with his doctoral thesis titled, "Shear-Dominated Biaxial Failure Analysis of Polymer-Matrix Composites at Room and Elevated Temperatures." Career Odegard worked as a National Research Council postdoctoral research associate in the Mechanics and Durability Branch at NASA Langley Research Center, Hampton, Virginia, from 2000 to 2002. Subsequently, he held positions as a Staff Scientist at ICASE in 2002 and as a Staff Scientist at the National Institute of Aerospace from 2003 to 2004, both at NASA Langley Research Center. He has been serving as a director of the NASA Space Technology Research Institute (STRI) for Ultra-Strong Composites by Computational Design (US-COMP). Odegard began his academic career at Michigan Technological University in 2004 as an assistant professor in the Department of Mechanical Engineering – Engineering Mechanics, and was appointed as an associate professor from 2009 to 2013. During this time, he briefly served as a Fulbright Research Scholar at the Norwegian University of Science and Technology, Trondheim, Norway. In 2014, he was named as the Richard and Elizabeth Henes Professor in Computational Mechanics in the Department of Mechanical Engineering – Engineering Mechanics at Michigan Technological University, a position he held until 2021. He has been holding an appointment as the John O. Hallquist Endowed Chair of Computational Mechanics at the same university since 2021. Research Odegard has led a multi-institution effort in developing ultra-strong composites for deep space exploration using carbon nanotubes (CNTs) and polymers, employing computational modeling for accurate property prediction, and has received media coverage for his contributions, including features in publications such as Chemical & Engineering News, CompositesWorld, Nature World News, and Space.com. For his efforts in leading US-COMP to achieve its goals, Odegard was awarded the NASA Outstanding Public Leadership Medal in 2023. Computational modeling of nanocomposites Odegard has conducted research on computational simulation of polymer and polymer-composite materials, and made contributions to the development of new multi-scale modeling approaches for advanced composite materials. 
During his time at NASA Langley Research Center, he developed techniques to connect computational chemistry with continuum mechanics. This new approach to materials modeling enabled the development of structure-property relationships in nano-structured materials. In collaboration with researchers from the National Institute of Aerospace and Langley Research Center in 2005, he used this approach to develop constitutive models for polymer composite systems reinforced with single-walled CNTs. Additionally, he developed a multiscale model for silica nanoparticle/polyimide composites, which integrated the molecular structures of the nanoparticle, polyimide, and interfacial regions into the bulk-level constitutive behavior. Odegard and his team further developed computational simulation techniques for nanocomposite materials. He developed the simulation of polymer materials using reactive force fields. These force fields allow for the simulation of chemical bond breakage during mechanical deformation, thus allowing for more accurate computational predictions of polymer mechanical behavior and failure. His team used these techniques to computationally design CNT nanocomposites with improved manufacturability and mechanical behavior. In addition, he was a contributor to the development of CNT yarn composites as part of US-COMP, which showed significant increases in mechanical stiffness and strength relative to state-of-the-art aerospace composite materials. Awards and honors 2006 – HJE Reid Award, NASA Langley Research Center 2008 – Ferdinand P. Beer and E. Russell Johnston Jr. Outstanding New Mechanics Educator Award, American Society of Engineering Education 2011 – Ralph R. Teetor Educational Award, Society of Automotive Engineers 2021 – Michigan Tech Distinguished Researcher Award 2023 – Outstanding Public Leadership Medal, NASA Selected articles Odegard, G. M., Gates, T. S., Nicholson, L. M., & Wise, K. E. (2002). Equivalent-continuum modeling of nano-structured materials. Composites Science and Technology, 62(14), 1869–1880. Odegard, G. M., Gates, T. S., Wise, K. E., Park, C., & Siochi, E. J. (2003). Constitutive modeling of nanotube–reinforced polymer composites. Composites science and technology, 63(11), 1671–1687 Odegard, G. M., & Bandyopadhyay, A. (2011). Physical aging of epoxy polymers and their composites. Journal of polymer science Part B: Polymer physics, 49(24), 1695–1716. Odegard, G. M., Jensen, B. D., Gowtham, S., Wu, J., He, J., & Zhang, Z. (2014). Predicting mechanical response of crosslinked epoxy using ReaxFF. Chemical Physics Letters, 591, 175–178. Odegard, G. M., Clancy, T. C., & Gates, T. S. (2017). Modeling of the mechanical properties of nanoparticle/polymer composites. In Characterization of Nanocomposites (pp. 319–342). Jenny Stanford Publishing. Odegard, G. M., Patil, S. U., Deshpande, P. P., Kanhaiya, K., Winetrout, J. J., Heinz, H., ... & Maiaru, M. (2021). Molecular dynamics modeling of epoxy resins using the reactive interface force field. Macromolecules, 54(21), 9815–9824. Odegard, G.M., Liang, Z., Siochi, E.J., & Warren, J.A. (2023). A successful strategy for MGI-inspired research. MRS Bulletin, 48(5), 434–438. References Materials scientists and engineers NASA people Michigan Technological University faculty University of Denver alumni University of Colorado Boulder alumni Living people Year of birth missing (living people)
Gregory Odegard
[ "Materials_science", "Engineering" ]
1,475
[ "Materials scientists and engineers", "Materials science" ]
75,313,681
https://en.wikipedia.org/wiki/SpaceX%20Starship%20design%20history
Before settling on the 2018 Starship design, SpaceX successively presented a number of reusable super-heavy lift vehicle proposals. These preliminary spacecraft designs were known under various names (Mars Colonial Transporter, Interplanetary Transport System, BFR). In November 2005, before SpaceX had launched its first rocket, the Falcon 1, CEO Elon Musk first mentioned a high-capacity rocket concept able to launch to low Earth orbit, dubbed the BFR. Later in 2012, Elon Musk first publicly announced plans to develop a rocket surpassing the capabilities of the existing Falcon 9. SpaceX called it the Mars Colonial Transporter, as the rocket was to transport humans to Mars and back. In 2016, the name was changed to Interplanetary Transport System, as the rocket was planned to travel beyond Mars as well. The design called for a carbon fiber structure, a mass in excess of when fully-fueled, a payload of to low Earth orbit while being fully reusable. By 2017, the concept was temporarily re-dubbed the BFR. In December 2018, the structural material was changed from carbon composites to stainless steel, marking the transition from early design concepts of the Starship. Musk cited numerous reasons for the design change; low cost, ease of manufacture, increased strength of stainless steel at cryogenic temperatures, and ability to withstand high temperatures. In 2019, SpaceX began to refer to the entire vehicle as Starship, with the second stage being called Starship and the booster Super Heavy. They also announced that Starship would use reusable heat shield tiles similar to those of the Space Shuttle. The second-stage design had also settled on six Raptor engines by 2019; three optimized for sea-level and three optimized for vacuum. In 2019 SpaceX announced a change to the second stage's design, reducing the number of aft flaps from three to two to reduce weight. In March 2020, SpaceX released a Starship Users Guide, in which they stated the payload of Starship to low Earth orbit (LEO) would be in excess of , with a payload to geostationary transfer orbit (GTO) of . Early heavy-lift concepts In November 2005, before SpaceX launched the Falcon 1, its first rocket, CEO Elon Musk first referenced a long-term and high-capacity rocket concept named BFR. The BFR would be able to launch to LEO and would be equipped with Merlin 2 engines. The Merlin 2 would have been in direct lineage to the Merlin engines used on the Falcon 9, described as a scaled up regeneratively cooled engine comparable to the F-1 engines used on the Saturn V. In July 2010, after the final launch of Falcon 1 a year prior, SpaceX presented launch vehicle and Mars space tug concepts at a conference. The launch vehicle concepts were called Falcon X (later named Falcon 9), Falcon X Heavy (later named Falcon Heavy), and Falcon XX (later named Starship); the largest of all was the Falcon XX with a capacity to low Earth orbit. To deliver such payload, the rocket would have been as tall as the Saturn V and use six powerful Merlin 2 engines. Mars Colonial Transporter In October 2012, the company made the first public articulation of plans to develop a fully reusable rocket system with substantially greater capabilities than SpaceX's existing Falcon 9. Later in 2012, the company first mentioned the Mars Colonial Transporter rocket concept in public. It was going to be able to carry 100 people or of cargo to Mars and would be powered by methane-fueled Raptor engines. 
Musk referred to this new launch vehicle under the unspecified acronym "MCT", revealed to stand for "Mars Colonial Transporter" in 2013, which would serve the company's Mars system architecture. SpaceX COO Gwynne Shotwell gave a potential payload range between 150–200 tons to low Earth orbit for the planned rocket. For Mars missions, the spacecraft would carry up to of passengers and cargo. According to SpaceX engine development head Tom Mueller, SpaceX could use nine Raptor engines on a single MCT booster or spacecraft. The preliminary design would be at least in diameter, and was expected to have up to three cores totaling at least 27 booster engines. Interplanetary Transport System In 2016, the name of the Mars Colonial Transporter system was changed to the Interplanetary Transport System (ITS), due to the vehicle being capable of other destinations. Additionally, Elon Musk provided more details about the space mission architecture, launch vehicle, spacecraft, and Raptor engines. The first test firing of a Raptor engine on a test stand took place in September 2016. On September 26, 2016, a day before the 67th International Astronautical Congress, a Raptor engine fired for the first time. At the event, Musk announced SpaceX was developing a new rocket using Raptor engines called the Interplanetary Transport System. It would have two stages, a reusable booster and spacecraft. The stages' tanks were to be made from carbon composite, storing liquid methane and liquid oxygen. Despite the rocket's launch capacity to low Earth orbit, it was expected to have a low launch price. The spacecraft featured three variants: crew, cargo, and tanker; the tanker variant is used to transfer propellant to spacecraft in orbit. The concept, especially the technological feats required to make such a system possible and the funds needed, garnered substantial skepticism. Both stages would use autogenous pressurization of the propellant tanks, eliminating the Falcon 9's problematic high-pressure helium pressurization system. In October 2016, Musk indicated that the initial tank test article, made of carbon-fiber pre-preg, and built with no sealing liner, had performed well in cryogenic fluid testing. A pressure test at about 2/3 of the design burst pressure was completed in November 2016. In July 2017, Musk indicated that the architecture design had evolved since 2016 in order to support commercial transport via Earth-orbit and cislunar launches. The ITS booster was to be a , , reusable first stage powered by 42 engines, each producing of thrust. Total booster thrust would have been at liftoff, increasing to in a vacuum, several times the thrust of the Saturn V. It weighed when empty and when completely filled with propellant. It would have used grid fins to help guide the booster through the atmosphere for a precise landing. The engine configuration included 21 engines in an outer ring and 14 in an inner ring. The center cluster of seven engines would be able to gimbal for directional control, although some directional control would be achieved via differential thrust with the fixed engines. Each engine would be capable of throttling between 20 and 100 percent of rated thrust. The design goal was to achieve a separation velocity of about while retaining about 7% of the initial propellant to achieve a vertical landing at the launch pad.The design called for grid fins to guide the booster during atmospheric reentry. 
The booster return flights were expected to encounter loads lower than the Falcon 9, principally because the ITS would have both a lower mass ratio and a lower density. The booster was to be designed for 20 g nominal loads, and possibly as high as 30–40 g. In contrast to the landing approach used on SpaceX's Falcon 9—either a large, flat concrete pad or downrange floating landing platform, the ITS booster was to be designed to land on the launch mount itself, for immediate refueling and relaunch. The ITS second stage was planned to be used for long-duration spaceflight, instead of solely being used for reaching orbit. The two proposed variants aimed to be reusable. Its maximum width would be , with three sea level Raptor engines, and six optimized for vacuum firing. Total engine thrust in a vacuum was to be about . The Interplanetary Spaceship would have operated as a second-stage and interplanetary transport vehicle for cargo and passengers. It aimed to transport up to per trip to Mars following refueling in Earth orbit. Its three sea-level Raptor engines were designed to be used for maneuvering, descent, landing, and initial ascent from the Mars surface. It would have had a maximum capacity of of propellant, and a dry mass of 150 tonnes (330,000 lb). The ITS tanker would serve as a propellant tanker, transporting up to of propellants to low Earth orbit in a single launch. After refueling operations, it would land and be prepared for another flight. It had a maximum capacity of of propellant and had a dry mass of . Big Falcon Rocket In September 2017, at the 68th annual meeting of the International Astronautical Congress, Musk announced a new launch vehicle calling it the BFR, again changing the name, though stating that the name was temporary. The acronym was alternatively stated as standing for Big Falcon Rocket or Big Fucking Rocket, a tongue-in-cheek reference to the BFG from the Doom video game series. Musk foresaw the first two cargo missions to Mars as early as 2022, with the goal to "confirm water resources and identify hazards" while deploying "power, mining, and life support infrastructure" for future flights. This would be followed by four ships in 2024, two crewed BFR spaceships plus two cargo-only ships carrying equipment and supplies for a propellant plant. The design balanced objectives such as payload mass, landing capabilities, and reliability. The initial design showed the ship with six Raptor engines (two sea-level, four vacuum) down from nine in the previous ITS design. By September 2017, Raptors had been test-fired for a combined total of 20 minutes across 42 test cycles. The longest test was 100 seconds, limited by the size of the propellant tanks. The test engine operated at . The flight engine aimed for , on the way to in later iterations. In November 2017, Shotwell indicated that about half of all development work on BFR was focused on the engine. SpaceX looked for manufacturing sites in California, Texas, Louisiana, and Florida. By September 2017, SpaceX had started building launch vehicle components: "The tooling for the main tanks has been ordered, the facility is being built, we will start construction of the first ship [in the second quarter of 2018.]" By early 2018, the first carbon composite prototype ship was under construction, and SpaceX had begun building a new production facility at the Port of Los Angeles, California. In March, SpaceX announced that it would manufacture its launch vehicle and spaceship at a new facility on Seaside Drive at the port. 
By May, about 40 SpaceX employees were working on the BFR. SpaceX planned to transport the launch vehicle by barge, through the Panama Canal, to Cape Canaveral for launch. Since then, the company has terminated the agreements to do this. In August 2018, the head of the US Air Force Air Mobility Command expressed interest in the ability of the BFR to move up to of cargo anywhere in the world in under 30 minutes, for "less than the cost of a C-5". The BFR was designed to be tall, in diameter, and made of carbon composites. The upper stage, known as Big Falcon Ship (BFS), included a small delta wing at the rear end with split flaps for pitch and roll control. The delta wing and split flaps were said to expand the flight envelope to allow the ship to land in a variety of atmospheric densities (vacuum, thin, or heavy atmosphere) with a wide range of payloads. The BFS design originally had six Raptor engines, with four vacuum and two sea-level. By late 2017, SpaceX added a third sea-level engine (totaling 7) to allow greater Earth-to-Earth payload landings and still ensure capability if one of the engines fails. Three BFS versions were described: BFS cargo, BFS tanker, and BFS crew. The cargo version would have been used to reach Earth orbit as well as carry cargo to the Moon or Mars. After refueling in an elliptical Earth orbit, BFS was designed to eventually be able to land on the Moon and return to Earth without another refueling. The BFR also aimed to carry passengers/cargo in Earth-to-Earth transport, delivering its payload anywhere within 90 minutes. Changes to early Starship design In December 2018, the structural material was changed from carbon composites to stainless steel, marking the transition from early design concepts of the Starship. Musk cited numerous reasons for the design change; low cost and ease of manufacture, increased strength of stainless steel at cryogenic temperatures, as well as its ability to withstand high heat. The high temperature at which 300-series steel transitions to plastic deformation would eliminate the need for a heat shield on Starship's leeward side, while the much hotter windward side would be cooled by allowing fuel or water to bleed through micropores in a double-wall stainless steel skin, removing heat by evaporation. The liquid-cooled windward side was changed in 2019 to use reusable heat shield tiles similar to those of the Space Shuttle. In 2019, SpaceX began to refer to the entire vehicle as Starship, with the second stage being called Starship and the booster Super Heavy. In September 2019, Musk held an event about Starship development during which he further detailed the lower-stage booster, the upper-stage's method of controlling its descent, the heat shield, orbital refueling capacity, and potential destinations besides Mars. Over the years of design, the proportion of sea-level engines to vacuum engines on the second stage varied drastically. By 2019, the second stage design had settled on six Raptor engines—three optimized for sea-level and three optimized for vacuum. To decrease weight, aft flaps on the second stage were reduced from three to two. Later in 2019, Musk stated that Starship was expected to have a mass of and be able to initially transport a payload of , growing to over time. Musk hinted at an expendable variant that could place 250 tonnes into low orbit. 
One possible future use of Starship that SpaceX has proposed is point-to-point flights (called "Earth to Earth" flights by SpaceX), traveling anywhere on Earth in under an hour. In 2017 SpaceX president and chief operating officer Gwynne Shotwell stated that point-to-point travel with passengers could become cost competitive with conventional business class flights. John Logsdon, an academic on space policy and history, said that the idea of transporting passengers in this manner was "extremely unrealistic", as the craft would switch between weightlessness to 5 g of acceleration. He also commented that “Musk calls all of this ‘aspirational,’ which is a nice code word for more than likely not achievable.” See also History of SpaceX Space Shuttle design process SpaceX ambition of colonizing Mars Studied Space Shuttle designs Notes References SpaceX Starship Spacecraft design
SpaceX Starship design history
[ "Engineering" ]
3,074
[ "Spacecraft design", "Design", "Aerospace engineering" ]
75,317,701
https://en.wikipedia.org/wiki/Exfoliation%20%28chemistry%29
Exfoliation is a process that separates layered materials into nanomaterials by breaking the bonds between layers using mechanical, chemical, or thermal procedures. While exfoliation has historical roots dating back centuries, significant advances and widespread research gained momentum after Novoselov and Geim's discovery of graphene using Scotch tape in 2004. Their Nobel Prize-winning research primarily relied on mechanical exfoliation for the production of graphene which sparked an immediate interest in the exfoliation process. Today, exfoliation is regarded as the most widely used nanomaterial production technique. Exfoliation typically involves breaking weak bonds called van der Waals bonds to create two-dimensional materials, such as graphene or transition metal dichalcogenide monolayers. While various reversible chemical processes, such as intercalation can disrupt the weak bonds in a lamellar structure and introduce guest species, many of them fail to produce single-sheet materials as the processes are not strong enough to cancel the interlayer attractions. However, during exfoliation, the high energy input leads to an extreme bond-breaking process that irreversibly separates the layers into single sheets. Lately, it has been shown that if the energy input is substantial enough, the procedure can even break much stronger, bonds such as metallic or ionic bonds to create non-van der Waals materials like hematene or other nanoplatelets. In recent years, exfoliation has found practical applications in a wide range of fields, from electronics to biomedical and beyond. It plays a vital role in creating advanced materials with properties tailored for specific uses, such as high-performance electronics, efficient energy storage devices, and lightweight yet robust materials for aerospace applications. This versatility and adaptability make exfoliation a crucial technique in cutting-edge material research and various industrial sectors. History While the use of exfoliation can be traced back to ancient Chinese and Mayan pottery, the earliest scientific work involving exfoliation was the production of vermiculite by Thomas H. Webb, in 1824. However, during this early period, no substantial research was conducted to understand the nature of the mechanisms that facilitated these reactions. Arguably, the first research that delved into the mechanism of the process rather than its usage was Brodie's work, which revealed that certain acids produced lamellar carbon structures in 1855. Despite this discovery, extensive research on the topic did not immediately follow. The exfoliation concepts we have today were not developed until the realization that graphite absorbed alkali metals in 1926. This discovery laid the groundwork for a more solid theoretical framework, enabling scientists to apply the method in their production processes. One method that made use of this theoretical background and eventually led to further development of the process as a production technique was Rüdorff and Hoffman's work, which introduced an electrochemical method for exfoliation in 1938. The development of electrochemical exfoliation piqued the interest of more researchers and more people started regarding the process as a production technique. One of the most notable examples of the success of the method as a mass production technique was the invention of the first commercial lithium carbon fluoride batteries in 1973. 
The real turning point for exfoliation research came in 2004 when Novoselov and Geim isolated graphene through mechanical exfoliation using Scotch tape. This innovative research earned them the Nobel Prize in Physics in 2010, reigniting the interest in exfoliation methods. In subsequent years, numerous processes were developed for more precise manufacturing and higher yields. While most of the exfoliation research focused on graphite and graphene during the last few decades, recently, the rather difficult processing of graphene and its lack of an obvious band structure led many research groups to begin working on different elements to utilize exfoliation for the production of other nanomaterials.  One of the most significant breakthroughs of this new wave of research has been the discovery of non-van der Waals nanoplatelets. This discovery demonstrated that exfoliation could occur without relying on weak bonds, which opened up new and promising applications in the industry. Types of Exfoliation The exfoliation process is typically applied to lamellar structures with weak bonds. While these bonds are weak enough to be easily broken by an external force, they are strong enough to not separate into single layers. In order to separate the material into single layers, the attraction between consecutive layers must be overcome. As the interest in exfoliated material research surged many researchers started to develop new and better ways to overcome these interlayer attractions. Despite the high number of methods it is possible to classify them into three distinct categories based on the source of energy used in the process: mechanical, chemical, and thermal exfoliation. Mechanical Exfoliation In mechanical exfoliation, external forces act upon the material, breaking the bonds due to the stress accumulated within the material. Depending on the intensity and the specific nature of the phenomenon, these forces break the van der Waals forces, separating materials into 2D nanostructures. Sometimes, a solvent is introduced to the material to facilitate complete breakdown, as liquid environments significantly reduce bond strength compared to vacuum conditions. While mechanical exfoliation is effective in separating the layers, it lacks predictability and systematic results. The process requires repetition until individual layers are achieved. To obtain consistent nanomaterials with specific properties, experimentation and fine-tuning of conditions based on the results is required. Therefore, mechanical exfoliation techniques are rather empirical, and most mathematical models rely on empirical results rather than the ab-initio calculations. The original method proposed by Novoselov and Geim, micromechanical cleavage,  was essentially a mechanical exfoliation method. Consequently, mechanical exfoliation methods were developed more rapidly than the others. Major mechanical exfoliation methods include micromechanical cleavage and liquid phase separation. Micromechanical Cleavage Micromechanical Cleavage is the original graphene production method proposed by Novoselov and Geim. This process involves using sticky tape to get graphite samples and separating the layers until getting a single layer. Although the process yielded high-purity single-layered materials, it fell out of favor quickly as it required several repetitions for a single layer of graphene and was likely to damage the graphene layers during the process. 
Liquid Phase Separation Liquid phase separation is one of the most widely used exfoliation methods. Its high yields, high purity, and scalability make it one of the most preferred exfoliation methods. It works by providing a liquid medium for the mechanical exfoliation methods. The liquid medium significantly reduces the strength of the bonds in the material compared to vacuum conditions, making it easier for mechanical forces to break the weak bonds in the material. However, due to the interfacial tension forces, liquid phase separation does not always yield uniform results. When the tension forces do not balance out, graphene single layers may break due to the tension forces. To achieve relatively uniform results, the overall energy of the system must be minimized. The best way to optimize this condition is to use solvents with surface tensions similar to that of the material of interest. Liquid phase separation utilizes various external forces to break the van der Waals forces. The most widely used liquid phase separation methods include sonication, which uses sound waves, and shearing, which uses shear forces. Sonication The sonication method utilizes ultrasonic sound waves to create micrometer-sized bubbles in liquid environments. When these bubbles reach a critical size, they collapse with an instantaneous temperature of 5000 K, a local pressure of 20 MPa, and a heating/cooling rate of up to 10⁹ K s⁻¹. These sudden physical differences create shock waves that can act on lamellar materials and break the weak forces between the layers. Although sonication is a long-known laboratory technique, it was first applied to graphene exfoliation in 2008, which led to liquid exfoliation becoming the predominant technique. While sonication is generally used as an exfoliation method on its own, it is also used as a further processing method to perfect the nanoflakes created with other exfoliation methods. Therefore it is a common technique used in combination with the other methods. One disadvantage of sonication, though, is the reaction time: a complete exfoliation reaction may take days to finish. However, prolonged exfoliation times allow the creation of more stable solutions, making long sonication times favorable for obtaining purer, defect-free products. Nanomaterials created with sonication yield a 1.5 times larger unperturbed particle size. Shearing The shearing method makes use of lab mixers to exfoliate lamellar structures into single-layered nanomaterials. Lab mixers create a sufficient shear force that allows consecutive layers of the material to slip over each other, which produces massive quantities of highly pure material. Although the shearing method had previously been widely used as a further processing method to break up relatively large clusters of nanomaterials into single layers, in 2010 it was introduced as a direct method to exfoliate graphite into graphene. Later studies confirmed the applicability of the method to other lamellar materials such as h-BN, MoS2, WS2, MoSe2, and MoTe2. While this method has a high yield and purity among the other exfoliation methods, its known linear relations with concentration, mixing time, rotor speed, and rotor diameter, and its inverse relation with liquid volume, give it some of the best controllability of all the exfoliation methods. This innovative procedure has been adapted for household kitchen mixers, significantly reducing the costs and complexity of the exfoliation methods, thereby sparking another wave of research in layered structures.
Chemical Exfoliation Chemical exfoliation employs the intercalation process to separate material layers. During intercalation, guest ions or free electrons are introduced to the layers, disrupting the bond structure and forming new bonds. For example, in the case of van der Waals forces, which are common in chemical exfoliation, positive and negative regions are induced, attracting ions. Given that the bonds between layers are weak, they tend to break, forming new, stronger bonds with these ions. Typically, these stronger bonds lead to the creation of functional groups that significantly reduce interlayer attractions. At this stage, the interlayer attraction becomes low, and thanks to the ability of the functional groups to decompose with further processing, the layers can be easily separated. Chemical exfoliation's scalability advantages over other production methods have made it one of the most widely used techniques. In addition to its scalability, the variety of chemicals available plays an important role in encouraging researchers to explore various production methods. Chemical exfoliation is also commonly used in combination with mechanical and thermal exfoliation methods to obtain purer results. The most widely used chemical exfoliation methods are chemical vapor deposition, graphite oxide reduction, and electrochemical exfoliation. Chemical Vapor Deposition First introduced in 2008, chemical vapor deposition emerged as one of the most popular methods for graphene exfoliation. This method utilizes a transition metal film as a base layer and exposes it to hydrocarbons at high temperatures(900-1000°C) and ambient pressure. During the process, hydrocarbon decomposes, and carbon atoms form one to ten layers of graphene flakes over the metal film. The metal film is then cooled down at a determined rate to achieve specific particle sizes. This process is especially useful for applications such as circuit drawing and surface-based applications of graphene, including the production of photovoltaic cells. Although the method was widely used until the last decade, its relatively expensive process has been replaced by other methods. However, there is still ongoing research to further develop the process for more efficient use with various materials. Oxide Reduction The oxide reduction method is particularly widely used with graphite to create graphene. It involves introducing oxide functional groups into the lamellar structure, which doubles the distance between graphite layers and reduces van der Waals attractions. These functional groups are then removed using reductants, resulting in single graphene layers from the graphite, which can now be easily exfoliated due to reduced van der Waals attractions. This method is especially valuable for fine-tuning the band gap properties of graphene, which naturally lacks a band gap. While this method was widely used over the last decade, its impurity levels led to its decline in popularity. The presence of a large number of holes and defects made the produced graphene unsuitable for electronics, and the chemicals used were hazardous. In 2014, a research group succeeded in isolating graphene layers without the use of oxidants, significantly increasing the purity of the samples and eliminating the need for further processing of the products. This advancement is expected to reignite interest in oxide reduction exfoliation. 
Electrochemical Exfoliation One of the most promising exfoliation methods is electrochemical exfoliation, which has been popular among researchers since its introduction in 2008. This method is mainly based on 20th-century studies on electrolysis and electrochemical intercalation. Electrochemical exfoliation makes use of potential differences between a lamellar structured electrode and a platinum electrode to attract oppositely charged ions to the electrodes. These accumulations trigger the intercalation process in the material and ultimately result in the complete exfoliation of the material into single nanomaterial layers. However, intercalation is not always the only reaction mechanism, as sometimes bubbles are observed depending on the solvent and electrolyte used. These bubbles also facilitate exfoliation by creating a similar effect to the sonication method. The process might be called cathodic or anodic exfoliation, depending on which electrode is the lamellar structured electrode. Cathodic exfoliation requires an organic solvent medium with a lithium or alkylammonium electrolyte, while anodic exfoliation can be done with water and strong electrolytes. Anodic exfoliation is more efficient than cathodic exfoliation, as it forms oxide and hydroxide functional groups, significantly increasing intercalation in the material. However, anodic exfoliation also results in impure products, so the choice between the two methods depends on the specific application. Electrochemical exfoliation products may also require further processing. Unlike liquid exfoliation, electrochemical exfoliation eliminates most of the chemical reactions involved, resulting in purer products. This method increases scalability, controllability, and decreases contamination and reaction time for the exfoliated material. Therefore, many researchers aim to implement the method into the industry for the mass production of carbon nanomaterials and transition metal dichalcogenide monolayers. Thermal Exfoliation Thermal exfoliation uses heat as a source of energy for the exfoliation process. Despite heat being such a fundamental energy for most of the other chemical processes its use in exfoliation is relatively recent. Most thermal exfoliation methods have the same approach; chemically intercalated lamellar structures are subjected to extreme temperatures to decompose the functional groups created through chemical methods. The decomposition of these functional groups generates gases that build up pressure between layers, countering the van der Waals attractions between material layers. When well-chosen functional group/temperature combinations are used, complete separation of the layers occurs. One advantage of thermal exfoliation methods over others is their higher production rate, a crucial property for mass production applications. Additionally, their reaction times are the shortest among all exfoliation methods. A process that might take days to complete with mechanical exfoliation can be finished within seconds using thermal exfoliation methods. However, reduced reaction time and higher yields come at the cost of reduced control over particle size due to the nature of the process. Therefore, the process still lacks the optimization and reproducibility required by the industry. Today, the most widely used thermal methods are high-temperature, low-temperature, and microwave exfoliation methods. 
High Temperature Thermal Exfoliation High-temperature thermal exfoliation employs temperatures above 550°C to decompose functional groups. The biggest advantage of this method is its short reaction times. An exfoliation process that might take days to complete with mechanical exfoliation can be done in a matter of seconds through high-temperature thermal exfoliation. However, decreased reaction times come at the price of impure products. Due to the extremely high temperatures, operation costs increase significantly. Moreover, the carbon dioxide and water vapors produced during the decomposition of oxide groups react with the material, causing defects and impurities in the material. Low Temperature Thermal Exfoliation Low-temperature thermal exfoliation aims to retain the benefits of high-temperature thermal exfoliation while avoiding unexpected outcomes such as high costs and impurities. For this purpose, low-temperature thermal exfoliation employs relatively lower temperatures of 200°C-550°C to decompose the functional groups. These temperatures yield purer results than high-temperature thermal exfoliation because the chemicals produced at this temperature do not readily react with the layered material itself. Although this decrease in temperature affects reaction times, it is usually favored to achieve purer results. Even though the reaction time is shorter in low-temperature thermal exfoliation compared to high-temperature, it is still significantly shorter than in other methods. Additionally, low-temperature thermal exfoliation allows for fine-tuning of the bandgap properties of materials, making it an ideal method for electronic applications. Microwave Irradiation Exfoliation: Microwave Irradiation Exfoliation is another exfoliation method that would decrease the complexity of the exfoliation experiments greatly. First utilized for the production of exfoliated graphite, it was later adapted for other nanomaterials. In the microwave irradiation exfoliation method, materials partially intercalated through chemical processes are exposed to microwave radiation. Ions and molecules trapped between layers absorb microwaves, leading to local temperature changes. These local changes trigger significant physical and chemical phenomena that result in complete exfoliation of the lamellar material. Due to reduced costs and high efficiency, microwave irradiation exfoliation is one of the most popular exfoliation methods. The method also provides higher yields with pure results within shorter timeframes. Although microwave irradiation exfoliation has great benefits there is still some ambiguity in the mechanisms of this method as the products of the method are reported to be able to get exfoliated again through chemical exfoliation. Applications Ever since the isolation of graphene, exfoliation has been the most common and reliable method for creating graphene, with the ongoing development of new techniques to optimize the process. As graphene finds increasing applications in various areas of electronics, the quest for an optimized industrial production method for graphene becomes more significant. Currently, graphene is projected to play a crucial role in the production of low-cost solar cells, energy storage systems, and sensors. Therefore, various forms of graphene, from liquid suspensions to dispersions, coatings to dust, are necessary for implementation in industrial production methods. 
In addition to graphene, the exfoliation process enables the production of various other carbon allotropes, with the most important ones being carbon nanotubes and carbon quantum dots. These materials are also expected to create billion-dollar industries, and as a result, commercialization of these materials is anticipated to drive advances in exfoliation methods. Although graphene is expected to be one of the most important materials in the future, there are still some disputes about some of its applications. The challenging processing of graphene and its lack of an obvious band structure have led many researchers to explore new uses of the exfoliation methods. This shift has recently increased research into efficient production methods for transition metal dichalcogenide (TMD) monolayers significantly. TMD monolayers have band gaps ranging from semiconducting to insulating, thanks to their quantum confinement effects. Therefore, they are expected to have significant applications in the near future, particularly with the further development of optoelectronics. Currently, TMD monolayers find applications in electronic devices such as solar cells, photodetectors, light-emitting diodes, and phototransistors. There is also a growing interest in their use in power storage systems, such as batteries and supercapacitors. Since exfoliation is the most common production technique for TMD monolayers, their potential commercialization is projected to require extensive use of exfoliation methods, eventually creating new applications for exfoliation. Theoretically, exfoliation requires the presence of weak bonds. However, recent studies have shown that even materials with metallic and ionic bonds can be exfoliated with the proper procedures. The materials created through these methods are called non-van der Waals nanoplatelets. One notable non-van der Waals material is hematene, a single sheet of hematite, the most abundant form of iron ore. Hematene is known to have interesting photocatalytic properties due to its modified band gap, offering potential applications in energy storage, optoelectronics, and biomedicine. Since one of the most common ways to create hematene is through liquid phase separation, applications of hematene would increase the interest in exfoliation. References Chemical processes
Exfoliation (chemistry)
[ "Chemistry" ]
4,392
[ "Chemical process engineering", "Chemical processes", "nan" ]
75,319,288
https://en.wikipedia.org/wiki/Clindamycin/adapalene/benzoyl%20peroxide
Clindamycin/adapalene/benzoyl peroxide, sold under the brand name Cabtreo, is a fixed-dose combination medication used for the treatment of acne. It contains clindamycin, as the phosphate, a lincosamide antibacterial; adapalene, a synthetic retinoid; and benzoyl peroxide, an oxidizing agent. It is applied to the skin. Clindamycin/adapalene/benzoyl peroxide was approved for medical use in the United States in October 2023. It is the first triple-combination topical acne treatment approved by the US Food and Drug Administration. References Anti-acne preparations Combination drugs
Clindamycin/adapalene/benzoyl peroxide
[ "Chemistry" ]
148
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
73,934,690
https://en.wikipedia.org/wiki/Airy%20process
The Airy processes are a family of stationary stochastic processes that appear as limit processes in the theory of random growth models and random matrix theory. They are conjectured to be universal limits describing the long time, large scale spatial fluctuations of the models in the (1+1)-dimensional KPZ universality class (Kardar–Parisi–Zhang equation) for many initial conditions (see also KPZ fixed point). The original process Airy2 was introduced in 2002 by the mathematicians Michael Prähofer and Herbert Spohn. They proved that the height function of a model from the (1+1)-dimensional KPZ universality class - the PNG droplet - converges under suitable scaling and initial condition to the Airy2 process and that it is a stationary process with almost surely continuous sample paths. The Airy process is named after the Airy function. The process can be defined through its finite-dimensional distribution with a Fredholm determinant and the so-called extended Airy kernel. It turns out that the one-point marginal distribution of the Airy2 process is the Tracy-Widom distribution of the GUE. There are several Airy processes. The Airy1 process was introduced by Tomohiro Sasamoto, and the one-point marginal distribution of the Airy1 process is, up to a rescaling of the argument, the Tracy-Widom distribution of the GOE. Another Airy process is the Airystat process. Airy2 process Let t_1 < t_2 < ... < t_n be in R. The Airy2 process A_2 has the following finite-dimensional distribution: P(A_2(t_1) ≤ x_1, ..., A_2(t_n) ≤ x_n) = det(1 − f^{1/2} K_{A_2} f^{1/2})_{L^2({t_1,...,t_n} × R)}, where f(t_j, y) := 1_{(x_j, ∞)}(y) and K_{A_2} is the extended Airy kernel K_{A_2}(t, x; t', x') = ∫_0^∞ e^{−s(t−t')} Ai(x+s) Ai(x'+s) ds for t ≥ t', and −∫_{−∞}^0 e^{−s(t−t')} Ai(x+s) Ai(x'+s) ds for t < t'. Explanations If t = t', the extended Airy kernel reduces to the Airy kernel K_{Ai} and hence P(A_2(t) ≤ x) = det(1 − K_{Ai})_{L^2((x,∞))} = F_2(x), where F_2 is the Tracy-Widom distribution of the GUE. f^{1/2} K_{A_2} f^{1/2} is a trace class operator on L^2({t_1,...,t_n} × R) with counting measure on {t_1,...,t_n} and Lebesgue measure on R; the kernel is f^{1/2}(t_i, x) K_{A_2}(t_i, x; t_j, y) f^{1/2}(t_j, y). Literature References Stochastic processes Statistical mechanics
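The one-point marginal of the Airy2 process, the Tracy-Widom GUE distribution, can be evaluated numerically as a Fredholm determinant of the Airy kernel restricted to (s, ∞). The Python sketch below uses a Nyström-type quadrature discretisation in the spirit of Bornemann's method; the number of quadrature nodes and the truncation point are illustrative choices, not values taken from the literature cited in the article.

```python
import numpy as np
from scipy.special import airy
from numpy.polynomial.legendre import leggauss

def airy_kernel(x, y):
    """Airy kernel K(x,y) = (Ai(x)Ai'(y) - Ai'(x)Ai(y)) / (x - y);
    on the diagonal the limit is Ai'(x)^2 - x*Ai(x)^2."""
    aix, aipx, _, _ = airy(x)
    aiy, aipy, _, _ = airy(y)
    diff = x - y
    off_diag = (aix * aipy - aipx * aiy) / np.where(np.isclose(diff, 0.0), 1.0, diff)
    return np.where(np.isclose(diff, 0.0), aipx**2 - x * aix**2, off_diag)

def tracy_widom_gue_cdf(s, n=60, cutoff=12.0):
    """Approximate F_2(s) = det(I - K_Ai) on L^2(s, inf) by discretising the
    Fredholm determinant with Gauss-Legendre quadrature on [s, cutoff]."""
    nodes, weights = leggauss(n)
    x = 0.5 * (cutoff - s) * nodes + 0.5 * (cutoff + s)   # map [-1,1] -> [s,cutoff]
    w = 0.5 * (cutoff - s) * weights
    X, Y = np.meshgrid(x, x, indexing="ij")
    A = np.sqrt(np.outer(w, w)) * airy_kernel(X, Y)
    return float(np.linalg.det(np.eye(n) - A))

if __name__ == "__main__":
    # distribution function of the one-point marginal of the Airy_2 process
    for s in (-2.0, 0.0, 2.0):
        print(s, tracy_widom_gue_cdf(s))
```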
Airy process
[ "Physics" ]
391
[ "Statistical mechanics" ]
73,940,595
https://en.wikipedia.org/wiki/Accounts%20of%20Materials%20Research
Accounts of Materials Research () is a monthly peer-reviewed scientific journal published in partnership between ShanghaiTech University and American Chemical Society. It was rewarded by the Chinese government through Action Plan for the Excellence of Chinese STM Journals in 2020. The journal is a subscription-access publication that has committed to publishing an increasing number of open access articles, with a future target of transitioning to 100% open access. Abstracting and indexing The journal is abstracted and indexed in: Chemical Abstracts Service Emerging Sources Citation Index Scopus According to the Journal Citation Reports, the journal has a 2022 impact factor of 14.6. References External links American Chemical Society academic journals ShanghaiTech University Academic journals established in 2020 Materials science journals English-language journals
Accounts of Materials Research
[ "Materials_science", "Engineering" ]
149
[ "Materials science journals", "Materials science" ]
78,285,925
https://en.wikipedia.org/wiki/Code%20ownership
In software engineering, code ownership is a term used to describe control of an individual software developer or a development team over source code modifications of a module or a product. Definitions While the term is very popular, there is no universally accepted definition of it. Koana et al., in their 2024 literature review, found 28 different definitions, and classified them as follows: Psychological ownership is a feeling by the developer of ownership and pride in the particular element of the project; Corporeal ownership is a set of formal or informal rules defining responsibility for a particular software piece. The rules depend on the development approach taken by the team, but generally can be partitioned along the lines of "what is being owned?" / "who owns it?" / "what is the degree of control?": while the answer to "what?" is typically some part of the source code, the ownership concept has also been applied to other software development artifacts as diverse as an entire project or a single software bug; the owner ("who?") might be an individual developer or a group that might include authors of the code, reviewers, and managers. The two extremes are represented by a dedicated ownership with just one developer responsible for any particular piece of code and a collective code ownership, where every member of the team owns all the code; the degree of control by an owner can vary from a mandatory code review to responsibility for testing to a complete implementation. Authorship Some researchers also use the term to describe the authorship of software (identifying who wrote a particular line of software). Koana et al. state that this is a different, although related, meaning, as the code owner might not be the original author of the software piece. Influence upon quality It is generally accepted that the lack of clear code ownership (usually in the form of many developers freely applying small changes to a shared piece of code) causes errors to be introduced. At the same time, with no code owner, the knowledge about an artifact can be lost. This is confirmed by large-scale studies, for example, involving Windows 7 and Windows Vista. Code owners in version control Modern version control systems allow explicit designation of code owners for particular files or directories (cf. GitHub CODEOWNERS feature). Typically, the code owner either receives notifications for all changes in the owned code or is required to approve each change. References Sources Software engineering terminology
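As an illustration of such explicit ownership rules, a GitHub CODEOWNERS file maps path patterns to owning users or teams; the repository layout, user names, and team names below are hypothetical.

```
# .github/CODEOWNERS
# Lines map a path pattern to one or more owners; the last matching pattern wins.

# Default owners for everything in the repository:
*             @example-org/platform-team

# The documentation directory is owned by the writing team:
/docs/        @example-org/tech-writers

# Every SQL file, anywhere in the repository:
*.sql         @dbadmin

# Changes under /src/parser/ are routed to these owners for review:
/src/parser/  @alice @bob
```

With branch protection enabled, the matching owners are automatically requested as reviewers and their approval can be required before a change is merged.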
Code ownership
[ "Technology", "Engineering" ]
492
[ "Software engineering", "Computing terminology", "Software engineering stubs", "Software engineering terminology" ]
78,285,950
https://en.wikipedia.org/wiki/F-Yang%E2%80%93Mills%20equations
In differential geometry, the -Yang–Mills equations (or -YM equations) are a generalization of the Yang–Mills equations. Its solutions are called -Yang–Mills connections (or -YM connections). Simple important cases of -Yang–Mills connections include exponential Yang–Mills connections using the exponential function for and -Yang–Mills connections using as exponent of a potence of the norm of the curvature form similar to the -norm. Also often considered are Yang–Mills–Born–Infeld connections (or YMBI connections) with positive or negative sign in a function involving the square root. This makes the Yang–Mills–Born–Infeld equation similar to the minimal surface equation. F-Yang–Mills action functional Let be a strictly increasing function (hence with ) and . Let: Since is a function, one can also consider the following constant: Let be a compact Lie group with Lie algebra and be a principal -bundle with an orientable Riemannian manifold having a metric and a volume form . Let be its adjoint bundle. is the space of connections, which are either under the adjoint representation invariant Lie algebra–valued or vector bundle–valued differential forms. Since the Hodge star operator is defined on the base manifold as it requires the metric and the volume form , the second space is usually used. The -Yang–Mills action functional is given by: For a flat connection (with ), one has . Hence is required to avert divergence for a non-compact manifold , although this condition can also be left out as only the derivative is of further importance. F-Yang–Mills connections and equations A connection is called -Yang–Mills connection, if it is a critical point of the -Yang–Mills action functional, hence if: for every smooth family with . This is the case iff the -Yang–Mills equations are fulfilled: For a -Yang–Mills connection , its curvature is called -Yang–Mills field. A -Yang–Mills connection/field with: is just an ordinary Yang–Mills connection/field. (or for normalization) is called (normed) exponential Yang–Mills connection/field. In this case, one has . The exponential and normed exponential Yang–Mills action functional are denoted with and respectively. is called -Yang–Mills connection/field. In this case, one has . Usual Yang–Mills connections/fields are exactly the -Yang–Mills connections/fields. The -Yang–Mills action functional is denoted with . or is called Yang–Mills–Born–Infeld connection/field (or YMBI connection/field) with negative or positive sign respectively. In these cases, one has and respectively. The Yang–Mills–Born–Infeld action functionals with negative and positive sign are denoted with and respectively. The Yang–Mills–Born–Infeld equations with positive sign are related to the minimal surface equation: Stable F-Yang–Mills connection Analogous to (weakly) stable Yang–Mills connections, one can define (weakly) stable -Yang–Mills connections. A -Yang–Mills connection is called stable if: for every smooth family with . It is called weakly stable if only holds. A -Yang–Mills connection, which is not weakly stable, is called unstable. For a (weakly) stable or unstable -Yang–Mills connection , its curvature is furthermore called a (weakly) stable or unstable -Yang–Mills field. Properties For a Yang–Mills connection with constant curvature, its stability as Yang–Mills connection implies its stability as exponential Yang–Mills connection. Every non-flat exponential Yang–Mills connection over with and: is unstable. 
Every non-flat Yang–Mills–Born–Infeld connection with negative sign over with and: is unstable. All non-flat -Yang–Mills connections over with are unstable. This result includes the following special cases: All non-flat Yang–Mills connections with positive sign over with are unstable. James Simons presented this result without written publication during a symposium on "Minimal Submanifolds and Geodesics" in Tokyo in September 1977. All non-flat -Yang–Mills connections over with are unstable. All non-flat Yang–Mills–Born–Infeld connections with positive sign over with are unstable. For , every non-flat -Yang–Mills connection over the Cayley plane is unstable. Literature See also Bi-Yang–Mills equations, modification of the Yang–Mills equation References External links F-Yang-Mills equation at the nLab Differential geometry Mathematical physics Partial differential equations
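The inline formulas in the article above were lost in text extraction. As a hedged point of reference only (drawn from the general literature, not from this article, and possibly differing from its normalization), the F-Yang–Mills action functional is usually written as

\[
\mathcal{YM}_F(A) \;=\; \int_M F\!\left(\tfrac{1}{2}\lVert F_A\rVert^2\right)\,\mathrm{dvol}_g ,
\]

with the commonly quoted special cases F(t) = t (ordinary Yang–Mills), F(t) = e^t (exponential Yang–Mills), F(t) = \tfrac{1}{p}(2t)^{p/2} (p-Yang–Mills, reducing to the ordinary case for p = 2), and, for the Born–Infeld variants, F(t) = \sqrt{1+2t}-1 or F(t) = 1-\sqrt{1-2t}, depending on the sign convention.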
F-Yang–Mills equations
[ "Physics", "Mathematics" ]
931
[ "Applied mathematics", "Theoretical physics", "Mathematical physics" ]
78,291,990
https://en.wikipedia.org/wiki/Hurwitz-stable%20matrix
In mathematics, a Hurwitz-stable matrix, or more commonly simply Hurwitz matrix, is a square matrix whose eigenvalues all have strictly negative real part. Some authors also use the term stability matrix. Such matrices play an important role in control theory. Definition A square matrix is called a Hurwitz matrix if every eigenvalue of has strictly negative real part, that is, for each eigenvalue . is also called a stable matrix, because then the differential equation is asymptotically stable, that is, as . If is a (matrix-valued) transfer function, then is called Hurwitz if the poles of all elements of have negative real part. Note that it is not necessary that, for a specific argument, be a Hurwitz matrix; it need not even be square. The connection is that if is a Hurwitz matrix, then the dynamical system has a Hurwitz transfer function. Any hyperbolic fixed point (or equilibrium point) of a continuous dynamical system is locally asymptotically stable if and only if the Jacobian of the dynamical system is Hurwitz stable at the fixed point. The Hurwitz stability matrix is a crucial part of control theory. A system is stable if its control matrix is a Hurwitz matrix. The negative real components of the eigenvalues of the matrix represent negative feedback. Similarly, a system is inherently unstable if any of the eigenvalues have positive real components, representing positive feedback. See also M-matrix Perron–Frobenius theorem, which shows that any Hurwitz matrix must have at least one negative entry Z-matrix References External links Matrices
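As a small illustration of the definition above (not part of the original article), the following Python sketch tests Hurwitz stability numerically by checking that every eigenvalue has strictly negative real part; the example matrix is an arbitrary companion matrix chosen for the demonstration.

import numpy as np

def is_hurwitz(A):
    """Return True if every eigenvalue of the square matrix A has
    strictly negative real part (i.e. A is Hurwitz-stable)."""
    eigenvalues = np.linalg.eigvals(np.asarray(A, dtype=float))
    return bool(np.all(eigenvalues.real < 0))

# Companion matrix of s^2 + 3s + 2 = (s + 1)(s + 2); its eigenvalues are
# -1 and -2, so the system x' = Ax is asymptotically stable.
A = [[0.0, 1.0],
     [-2.0, -3.0]]
print(is_hurwitz(A))  # True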
Hurwitz-stable matrix
[ "Mathematics" ]
345
[ "Matrices (mathematics)", "Mathematical objects" ]
78,292,145
https://en.wikipedia.org/wiki/Routh%E2%80%93Hurwitz%20matrix
In mathematics, the Routh–Hurwitz matrix, or more commonly just Hurwitz matrix, corresponding to a polynomial is a particular matrix whose nonzero entries are coefficients of the polynomial. Hurwitz matrix and the Hurwitz stability criterion Namely, given a real polynomial the square matrix is called the Hurwitz matrix corresponding to the polynomial . It was established by Adolf Hurwitz in 1895 that a real polynomial with is stable (that is, all its roots have strictly negative real part) if and only if all the leading principal minors of the matrix are positive: and so on. The minors are called the Hurwitz determinants. Similarly, if then the polynomial is stable if and only if the principal minors have alternating signs starting with a negative one. Example As an example, consider the matrix and let be the characteristic polynomial of . The Routh–Hurwitz matrix associated with is then The leading principal minors of are . Since the leading principal minors are all positive, all of the roots of have negative real part. Moreover, since is the characteristic polynomial of , it follows that all the eigenvalues of have negative real part, and hence is a Hurwitz-stable matrix. See also Routh–Hurwitz stability criterion Liénard–Chipart criterion P-matrix Notes References Matrices
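To make the construction above concrete, here is a short Python sketch (not from the article); it assumes the standard indexing convention in which row i, column j of the Hurwitz matrix (1-indexed) holds the coefficient a_{2j-i} of p(s) = a0*s^n + ... + an, with out-of-range coefficients set to zero.

import numpy as np

def hurwitz_matrix(coeffs):
    """Build the n x n Routh-Hurwitz matrix for the polynomial
    p(s) = a0*s^n + a1*s^(n-1) + ... + an, given coeffs = [a0, ..., an]."""
    a = list(coeffs)
    n = len(a) - 1
    H = np.zeros((n, n))
    for i in range(1, n + 1):          # 1-indexed row
        for j in range(1, n + 1):      # 1-indexed column
            k = 2 * j - i              # coefficient index a_k
            if 0 <= k <= n:
                H[i - 1, j - 1] = a[k]
    return H

def leading_principal_minors(H):
    return [float(np.linalg.det(H[:k, :k])) for k in range(1, H.shape[0] + 1)]

# Example: p(s) = s^3 + 6s^2 + 11s + 6 = (s+1)(s+2)(s+3) has all roots in the
# left half-plane, so all leading principal minors should be positive.
H = hurwitz_matrix([1, 6, 11, 6])
print(leading_principal_minors(H))  # [6.0, 60.0, 360.0]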
Routh–Hurwitz matrix
[ "Mathematics" ]
268
[ "Matrices (mathematics)", "Mathematical objects" ]
70,979,708
https://en.wikipedia.org/wiki/Sigma%20%28signature%20format%29
Sigma is a signature format based on pattern matching over system logs, used to detect malicious behavior in computer systems. See also YARA Snort Further reading References External links GitHub repository sigmatools on PyPi Computer forensics
Sigma (signature format)
[ "Engineering" ]
47
[ "Cybersecurity engineering", "Computer forensics" ]
70,984,019
https://en.wikipedia.org/wiki/Research%20transparency
Research transparency is a major aspect of scientific research. It covers a variety of scientific principles and practices: reproducibility, data and code sharing, citation standards or verifiability. The definitions and norms of research transparency differ significantly depending on the disciplines and fields of research. Due to the lack of consistent terminology, research transparency has frequently been defined negatively by addressing non-transparent usages (which are part of questionable research practices). After 2010, recurrent issues of research methodology have been increasingly acknowledged as a structural crisis that involves deep changes at all stages of the research process. Transparency has become a key value of the open science movement, which evolved from an initial focus on publishing to encompass a large diversity of research outputs. New common standards for research transparency, like the TOP Guidelines, aim to build and strengthen an open research culture across disciplines and epistemic cultures. Definitions Confused terminologies There is no widespread consensus on the definition of research transparency. Differences between disciplines and epistemic cultures have largely contributed to different interpretations. The reproduction of past research has been a leading source of dissent. In an experimental setting, reproduction relies on the same set-up and apparatus, while replication only requires the use of the same methodology. Conversely, computational disciplines use reversed definitions of the terms replicability and reproducibility. Alternative taxonomies have proposed to do away entirely with the ambiguity of reproducibility/replicability/repeatability. Goodman, Fanelli and Ioannidis recommended instead a distinction between method reproducibility (same experimental/computational setup) and result reproducibility (different setup but same overall principles). Core institutional actors continue to disagree on the meaning and usage of key concepts. In 2019, the National Academies of Science of the United States retained the experimental definition of replication and reproduction, which remains "at odds with the more flexible way they are used by [other] major organizations". The Association for Computing Machinery opted in 2016 for the computational definition and also added an intermediary notion of repeatability, where a different team of researchers uses exactly the same measurement system and procedure. Debate over research transparency has also created new convergences between different disciplines and academic circles. In The Problem with Science (2021), Rufus Barker Bausell argues that all disciplines, including the social sciences, currently face similar issues to medicine and the physical sciences: "The problem, which has come to be known as the reproducibility crisis, affects almost all of science, not one or two individual disciplines." Negative definitions Due to the lack of consistent terminology around research transparency, scientists, policy-makers and other major stakeholders have increasingly relied on negative definitions: identifying the practices and forms that harm or disrupt any common ideal of research transparency. The taxonomy of scientific misconduct has been gradually expanded since the 1980s. 
The concept of questionable research practices (or QRP) was first introduced in a 1992 report of the Committee on Science, Engineering, and Public Policy as a way to address potentially non-intentional research failures (such as inadequacies in the research data management process). Questionable research practices uncover a large grey area of problematic practices, which are frequently associated with deficiencies in research transparency. In 2016, a study identified as many as 34 questionable research practices or "degrees of freedom" that can occur at all the steps of a project (the initial hypothesis, the design of the study, the collection of the data, the analysis and the reporting). Surveys of disciplinary practices have shown large differences in the admissibility and spread of questionable research practices. While data fabrication and, to a lesser extent, rounding of statistical indicators like the p value are largely rejected, the non-publication of negative results or the addition of supplementary data are not identified as major issues. In 2009, a meta-analysis of 18 surveys estimated that less than 2% of scientists "admitted to have fabricated, falsified or modified data or results at least once". Real prevalence may be under-estimated due to self-reporting: regarding "the behaviour of colleagues admission rates were 14.12%". Questionable research practices are more widespread, with more than one third of the respondents admitting to having engaged in them at least once. A large 2021 survey of 6,813 respondents in the Netherlands found significantly higher estimates, with 4% of the respondents engaging in data fabrication and more than half of the respondents engaging in questionable research practices. Higher rates can be attributed either to a deterioration of ethical norms or to "the increased awareness of research integrity in recent years". A new dimension of open science? Transparency has been increasingly acknowledged as an important component of open science. Until the 2010s, definitions of open science were mostly focused on technical access and enhanced participation and collaboration between academics and non-academics. In 2016, Liz Lyon identified transparency as a "third dimension" of open science, due to the fact that "the concept of transparency and the associated term ‘reproducibility’, have become increasingly important in the current interdisciplinary research environment." According to Kevin Elliott, the open science movement "encompasses a number of different initiatives aimed at somewhat different forms of transparency." First drafted in 2014, the TOP guidelines have significantly contributed to bringing transparency onto the agenda of the open science movement. They aim to promote an "open research culture" and implement "strong incentives to be more transparent". They rely on eight standards, with different levels of compliance. While the standards are modular, they also aim to articulate a consistent ethos of science as "they also complement each other, in that commitment to one standard may facilitate adoption of others." This open science framework of transparency has in turn been coopted by leading contributors and institutions on the topic of research transparency. After 2015, contributions from science historians underlined that there has been no significant deterioration of research quality, as past experiments and research designs were not significantly better conceived and the rate of false or partially false findings has likely remained approximately constant over the last decades. 
Consequently, proponents of research transparency have come to embrace more explicitly the discourse of open science: the culture of scientific transparency becomes a new ideal to achieve rather than a fundamental principle to re-establish. The concept of transparency has contributed to creating convergences between open science and other open movements in different areas such as open data or open government. In 2015, the OECD described transparency as a common "rationale for open science and open data". History Discourse and practices of research transparency (before 1945) Transparency has been a fundamental criterion of experimental research for centuries. Successful replications became an integral part of the institutional discourse of the natural sciences (then called natural philosophy) in the 17th century. An early scientific society of Florence, the Accademia del Cimento, adopted in 1657 the motto provando e riprovando as a call for "repeated (public) performances of experimental trials". A key member of the Accademia, the naturalist Francesco Redi, described extensively the forms and benefits of procedural experimentation, which made it possible to check for random effects, the soundness of the experimental design, or causal relationships through repeated trials. Replication and the open documentation of scientific experiments became a key component of the diffusion of scientific knowledge in society: once they attained a satisfying rate of success, experiments could be performed in a variety of social spaces such as courts, marketplaces or learned salons. Although transparency was acknowledged early on as a key component of science, it was not defined consistently. Most concepts associated today with research transparency have arisen as terms of art with no clear and widespread definitions. The concept of reproducibility appeared in an article on the "Methods of illuminations" first published in 1902: one of the methods examined was deemed limited regarding "reproducibility and constancy". In 2019, the National Academies underlined that the distinction between reproduction, repetition and replication has remained largely unclear and unharmonized across disciplines: "What one group means by one word, the other group means by the other word. These terms — and others, such as repeatability — have long been used in relation to the general concept of one experiment or study confirming the results of another." Beyond this lack of formalization, there was a significant gap between the institutional and disciplinary discourse on research transparency and the reality of research work, a gap that has persisted into the 21st century. Due to the high cost of the apparatus and the lack of incentives, most experiments were not reproduced by contemporary researchers: even a committed proponent of experimentalism like Robert Boyle had to resort to a form of virtual experimentalism, describing in detail a research design that had only been run once. For Friedrich Steinle, the gap between the postulated virtue of transparency and the material conditions of science has never been solved: "The rare cases in which replication actually is attempted are those that either are central for theory development (e.g., by being incompatible with existing theory) or promise broad attention due to major economical perspectives. Despite the formal ideal of replicability, we do not live in a culture of replication." 
Preconditions of the transparency crisis (1945–2000) The development of big science after the Second World War created unprecedented challenges for research transparency. The generalization of statistical methods across a large number of fields, as well as the increasing breadth and complexity of research projects, entailed a series of concerns about the lack of proper documentation of the scientific process. Due to the expansion of the published research output, new quantitative methods for literature surveys have been developed under the label of meta-analysis or meta-science. These rely on the assumption that quantitative results and the details of the experimental and observational framework are sound (such as the size or the composition of the sample). In 1966, Stanley Schor and Irving Karten published one of the first generic evaluations of statistical methods in 67 leading medical journals. While few outright problematic papers were found, "in almost 73% of the reports read (those needing revision and those which should have been rejected), conclusions were drawn when the justification for these conclusions was invalid". In the 1970s and the 1980s, scientific misconduct gradually ceased to be presented as an individual failing and came to be seen as a collective problem that needs to be addressed by scientific institutions and communities. Between 1979 and 1981, several major cases of scientific fraud and plagiarism drew greater attention to the issue from researchers and policy-makers in the United States. In a well-publicized investigation, Betrayers of the Truth, two scientific journalists described scientific fraud as a structural problem: "As more cases of frauds broke into public view (…) we wondered if fraud wasn't a quite regular minor feature of the scientific landscape (…) Logic, replication, peer review — all had been successfully defied by scientific forgers, often for extended periods of time". The codification of research integrity has been the main institutional answer to this increased public scrutiny with "numerous codes of conduct field specific, national, and international alike." The reproducibility/transparency debate (2000–2015) In the 2000s, long-standing issues in the standardization of research methodology were increasingly presented as a structural crisis which "if not addressed the general public will inevitably lose its trust in science." The early 2010s are commonly considered a turning point: "it wasn't until sometime around 2011–2012 that the scientific community's consciousness was bombarded with irreproducibility warnings". An early significant contribution to the debate was the controversial and influential claim of John Ioannidis from 2005 that "most published research findings are false." The main argument was based on the excessively lax experimental standards in place, with numerous weak results being presented as solid research: "the majority of modern biomedical research is operating in areas with very low pre- and post-study probability for true findings". Due to being published in PLOS Medicine, the study of Ioannidis had a considerable echo in psychology, medicine and biology. In the following decades, large-scale projects attempted to assess experimental reproducibility. 
In 2015, the Reproducibility Project: Psychology attempted to reproduce 100 studies from three top psychology journals (Journal of Personality and Social Psychology, Journal of Experimental Psychology: Learning, Memory, and Cognition, and Psychological Science): while nearly all papers had reproducible effects, it was found that only 36% of the replications were statistically significant (p value below the common threshold of 0.05). In 2021, another Reproducibility Project, Cancer Biology, analyzed 53 top papers about cancer published between 2010 and 2012 and established that the effect sizes were 85% smaller on average than the original findings. During the 2010s, the concept of a reproducibility crisis was expanded to a wider array of disciplines. The share of citations per year of the seminal paper of John Ioannidis, Why Most Published Research Findings Are False, in the main fields of research according to the metadata recorded by the academic search engine Semantic Scholar (6,349 citations as of June 2022) shows how this framing has especially expanded to the computing sciences. In economics, a replication of 18 experimental studies in two major journals found a failure rate comparable to that of psychology or medicine (39%). Several global surveys have reported a growing uneasiness of scientific communities over reproducibility and other issues of research transparency. In 2016, Nature highlighted that "more than 70% of researchers have tried and failed to reproduce another scientist's experiments, and more than half have failed to reproduce their own experiments". The survey also found "no consensus on what reproducibility is or should be", in part due to disciplinary differences, which makes it harder to assess what the necessary steps to overcome the issues at play could be. The Nature survey has also been criticized for its paradoxical lack of research transparency, since it was not based on a representative sample but on an online survey: it "relied on convenience samples and other methodological choices that limit the conclusions that can be made about attitudes among the larger scientific community". Despite these mixed results, the Nature survey has been widely disseminated and has become a common source of data for studies of research transparency. The reproducibility crisis and other issues of research transparency have become a public topic addressed in the general press: "Reproducibility conversations are also unique compared to other methodological conversations because they have received sustained attention in both the scientific literature and the popular press". Research transparency and open science (2015–) Since 2000, the open science movement has expanded beyond access to scientific outputs (publications, data or software) to encompass the entire process of scientific production. In 2018, Vicente-Saez and Martinez-Fuentes attempted to map the common values shared by the standard definitions of open science in the English-speaking scientific literature indexed on Scopus and the Web of Science. Access is no longer the main dimension of open science, as it has been extended by more recent commitments toward transparency, collaborative work and social impact. Through this process, open science has been increasingly structured around a consistent set of ethical principles: "novel open science practices have developed in tandem with novel organising forms of conducting and sharing research through open repositories, open physical labs, and transdisciplinary research platforms. 
Together, these novel practices and organising forms are expanding the ethos of science at universities." The global scale of the open science movement and its integration into a large variety of technical tools, standards and regulations make it possible to overcome the "classic collective action problem" embodied by research transparency: there is a structural discrepancy between the stated objectives of scientific institutions and the lack of incentives to implement them at an individual level. The formalization of open science as a potential framework to ensure research transparency was initially undertaken by institutional and community initiatives. The TOP guidelines were elaborated in 2014 by a committee for Transparency and Openness Promotion that included "disciplinary leaders, journal editors, funding agency representatives, and disciplinary experts largely from the social and behavioral sciences". The guidelines rely on eight standards, with different levels of compliance. While the standards are modular, they also aim to articulate a consistent ethos of science as "they also complement each other, in that commitment to one standard may facilitate adoption of others." After 2015, these initiatives have partly influenced new regulations and codes of ethics. The European Code of Conduct for Research Integrity from 2017 is strongly structured around open science and open data: it "pays data management almost an equal amount of attention as publishing and is also in this sense the most advanced of the four CoCs." First adopted in July 2020, the Hong Kong principles for assessing researchers acknowledge open science as one of the five pillars of scientific integrity: "It seems clear that the various modalities of open science need to be rewarded in the assessment of researchers because these behaviors strongly increase transparency, which is a core principle of research integrity." Forms of research transparency Research transparency takes a large variety of forms depending on the disciplinary culture, the material conditions of research and the interaction between scientists and other social circles (policy-makers, non-academic professionals, the general audience). For Lyon, Jeng and Mattern, "the term ‘transparency’ has been applied in a range of contexts by diverse research stakeholders, who have articulated and framed the concept in a number of different ways." In 2020, Kevin Elliott introduced a taxonomy of eight dimensions of research transparency: purpose, audience, content, timeframe, actors, mechanism, venues and dangers. For Elliott, not all forms of transparency are achievable and desirable, so a proper terminology can help to make the most appropriate decisions: "While these are important objections, the taxonomy of transparency considered here suggests that the best response to them is typically not to abandon the goal of transparency entirely but to consider what forms of transparency are best able to minimize them." Method reproducibility Goodman, Fanelli and Ioannidis define method reproducibility as "the provision of enough detail about study procedures and data so the same procedures could, in theory or in actuality, be exactly repeated." This sense is largely synonymous with replicability in a computational context or reproducibility in an experimental context. 
In the report of the National Academies of Science, which opted for an experimental terminology, the counterpart of method reproducibility was described as "obtaining consistent results using the same input data; computational steps, methods, and code; and conditions of analysis". Method reproducibility is more attainable in the computational sciences: as long as it behaves as expected, the same code should produce the same output. Open code, open data and, more recently, research notebooks are common recommendations to enhance method reproducibility. In principle, the wider availability of research outputs makes it possible to assess and audit the process of analysis. In practice, Roger Peng already underlined in 2011 that many projects require "computing power that may not be available to all researchers". This issue has worsened in some areas such as artificial intelligence or computer vision, as the development of very large deep learning models makes it nearly impossible (or prohibitively costly) to recreate them, even when the original code and data are open. Method reproducibility can also be affected by library dependencies, as the open code can rely on external programs which may not always be available or compatible. Two studies in 2018 and 2019 have shown that a large share of research notebooks hosted on GitHub are no longer usable, either due to required extensions no longer being available or to issues in the code. In the experimental sciences, there is no commonly agreed criterion of method reproducibility: "in practice, the level of procedural detail needed to describe a study as "methodologically reproducible" does not have consensus." Result reproducibility Goodman, Fanelli and Ioannidis define result reproducibility as "obtaining the same results from the conduct of an independent study whose procedures are as closely matched". Result reproducibility is comparable to replication in an experimental context and reproducibility in a computational context. The definition of replicability retained by the National Academies of Science largely applies to it: "obtaining consistent results across studies aimed at answering the same scientific question, each of which has obtained its own data." The reproducibility crisis met in experimental disciplines like psychology or medicine is mostly a crisis of "result reproducibility", since it concerns research that cannot simply be re-executed, but involves the independent recreation of the experimental design. As such it is arguably the most debated form of research transparency in recent years. Result reproducibility is harder to achieve than other forms of research transparency. It involves a variety of issues that may include computational reproducibility, the accuracy of scientific measurement and the diversity of methodological approaches. There is no universal standard to determine how closely the original procedures are matched, and criteria may vary depending on the discipline or even on the field of research. Consequently, meta-analyses of reproducibility have faced significant challenges. A 2015 study of 100 psychology papers conducted by the Open Science Collaboration was confronted with the "lack of a single accepted definition", which "opened the door to controversy about their methodological approach and conclusions" and made it necessary to fall back on "subjective assessments" of result reproducibility. 
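The method-reproducibility passage above notes that open code often breaks when library dependencies drift. As a small illustrative sketch (not from the article, and with placeholder package names), one common mitigation is to record the interpreter and package versions alongside published results so the computational environment can later be reconstructed.

import sys
import platform
from importlib import metadata

def environment_report(packages):
    """Return a dictionary describing the interpreter and the versions of
    the listed packages, suitable for archiving next to analysis outputs."""
    report = {
        "python": sys.version.split()[0],
        "platform": platform.platform(),
    }
    for name in packages:
        try:
            report[name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            report[name] = "not installed"
    return report

if __name__ == "__main__":
    # "numpy" and "pandas" are only examples; list whatever the analysis uses.
    print(environment_report(["numpy", "pandas"]))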
Observation reproducibility and verifiability In 2018, Sabina Leonelli defined observation reproducibility as the "expectation being that any skilled researcher placed in the same time and place would pick out, if not the same data, at least similar patterns". This expectation covers a large range of scientific and scholarly practices in non-experimental disciplines: "A tremendous amount of research in the medical, historical and social sciences does not rest on experimentation, but rather on observational techniques such as surveys, descriptions and case reports documenting unique circumstances". The development of open scientific infrastructure has radically transformed the status and the availability of scientific data and other primary sources. Access to these resources has been thoroughly transformed by digitization and the attribution of unique identifiers. Permanent digital object identifiers (or DOIs) have been allocated to datasets since the early 2000s, which solved a long-standing debate on the citability of scientific data. Increased transparency of citations to primary sources or research materials has been framed by Andrew Moravcsik as a "revolution in qualitative research". Value transparency Transparency of research values has been a major focus of disciplines with strong involvement in policy-making, such as environmental studies or the social sciences. In 2009, Heather Douglas underlined that the public discourse on science has been largely dominated by normative ideals of objective research: if the procedures have been correctly applied, scientific results should be "value-free". For Douglas, this ideal remains largely at odds with the actual process of research and scientific advising, as pre-defined values may largely predetermine choices about the concepts, the protocols and the data used. Douglas argued instead in favor of a disclosure of the values held by researchers: "the values should be made as explicit as possible in this indirect role, whether in policy documents or in the research papers of scientists." In the 2010s, several philosophers of science attempted to systematize value transparency in the context of open science. In 2017, Kevin Elliott emphasized three conditions for value transparency in research, the first of which involved "being as transparent as possible about (…) data, methods, models and assumptions so that value influence can be scrutinized". Review and editorial transparency Until the 2010s, the editorial practices of scholarly publishing had remained largely informal and little studied: "Despite 350 years of scholarly publishing (…) research on ItAs [Instruction to authors], and on their evolution and change, is scarce." Editorial transparency has recently been acknowledged as a natural expansion of the debate over research reproducibility. Several principles laid out in the 2015 TOP guidelines already implied the existence of explicit editorial standards. 
Unprecedented attention to editorial transparency has also been motivated by the diversification and the growing complexity of the open science publishing landscape: "Triggered by a wide variety of expectations for journals’ editorial processes, journals have started to experiment with new ways of organizing their editorial assessment and peer review systems (...) The arrival of these innovations in an already diverse set of practices of peer review and editorial selection means we can no longer assume that authors, readers, and reviewers simply know how editorial assessment operates." Transparent by design: developing open workflows The TOP Guidelines have set an influential transdisciplinary standard for establishing result reproducibility in an open science context. While experimental and computational disciplines remain a primary focus, the standards have striven to integrate concerns and formats more specific to other disciplinary practices (such as research materials). Informal incentives like badges or indexes were initially advocated as a way to support the adoption of harmonized policies with regard to research transparency. Due to the development of open science, regulation and standardized infrastructures or processes are increasingly favored. Sharing of research outputs Data sharing was identified early on as a major potential solution to the reproducibility crisis and the lack of solid guidelines for statistical indicators. In 2005, John Ioannidis hypothesized that "some kind of registration or networking of data collections or investigators within fields may be more feasible than registration of each and every hypothesis-generating experiment." The sharing of research outputs is covered by three standards of the TOP guidelines: Data transparency (2), Analytic/code methods transparency (3) and Research materials transparency (4). All the relevant data, code and research materials are to be stored in a "trusted repository", with all analyses reproduced independently prior to publication. Extended citation standards While citation standards are commonly applied to academic references, there is much less formalization for other research outputs, such as data, code, primary sources or qualitative assessments. In 2012, the American Political Science Association adopted new policies for open qualitative research. They covered three dimensions of transparency: data transparency (in the sense of precise bibliographic data for the original sources), analytic transparency (in regard to claims extrapolated from the cited sources) and production transparency (in reference to the editorial choices made in the selection of the sources). In 2014, Andrew Moravcsik advocated the implementation of a transparency appendix, containing detailed quotes from original sources as well as annotations "explaining how the source supports the claim being made". According to the TOP Guidelines, "appropriate citation for data and materials" should be provided in each publication. Consequently, scientific outputs like code or datasets are fully acknowledged as citable contributions: "Regular and rigorous citation of these materials credit them as original intellectual contributions." Pre-registrations Pre-registrations are covered by two TOP guidelines: Preregistration of studies (6) and Preregistration of analysis plans (7). In both cases, for the highest level of compliance, journals should provide a "link and badge in article to meeting requirements". 
Pre-registration aims to preventively address a variety of questionable research practices. It usually takes the form of "a timestamped uneditable research plan to a public archive [that] states the hypotheses to be tested, target sample sizes". Preregistration acts as an ethical contract, as it theoretically constrains "the researcher degrees of freedom that make QRPs and p-hacking work". Preregistration does not address the full range of questionable research practices. Selective reporting of results, in particular, would still be compatible with a predefined research plan: "preregistration does not fully counter publication bias as it does not guarantee that findings will be reported." It has been argued that preregistration may also in some cases harm the quality of the research output by creating artificial constraints that do not fit the reality of the research field: "Preregistration may interfere with valid inference because nothing prevents a researcher from preregistering a poor analytical plan." While advocated as a relatively cost-free solution, preregistration may in reality be harder to implement, as it relies on a significant commitment on the part of the researchers. An empirical study of the adoption of open science practices in psychology journals has shown that "Adoption of pre-registration lags relative to other open science practices (…) from 2015 to 2020". Consequently, "even within researchers who see field-wide benefits of pre-registration, there is uncertainty surrounding the costs and benefits to individuals." Replication studies Replication studies, or assessments of replicability, aim to redo one or several original studies. Although the concept only appeared in the 2010s, replication studies have existed for decades but were not acknowledged as such. The 2019 report of the National Academies includes a meta-analysis of 25 replications published between 1986 and 2019. It finds that the majority of the replications concern the medical and social sciences (especially psychology and behavioral economics) and that there are for now no standardized evaluation criteria: "methods of assessing replicability are inconsistent and the replicability percentages depend strongly on the methods used." Consequently, at least as of 2019, replication studies cannot be aggregated to extrapolate a replicability rate: they "are not necessarily indicative of the actual rate of non-replicability across science for a number" of reasons. The TOP guidelines have called for an enhanced recognition and valorization of replication studies. The eighth standard states that compliant journals should use "registered Reports as a submission option for replication studies with peer review". Open editorial policies In July 2018, several publishers, librarians, journal editors and researchers drafted a Leiden Declaration for Transparent Editorial Policies. The declaration underlined that journals "often do not contain information about reviewer selection, review criteria, blinding, the use of digital tools such as text similarity scanners, as well as policies on corrections and retractions", and criticized this lack of transparency. The declaration identifies four main publication and peer review phases that should be better documented: At submission: details on the governance of the journal, its scope, the editorial board or the rejection rates. During review: criteria for selection, timing of the review and the model of peer review (double blind, single blind, open). 
Publication: disclosure of the "roles in the review process". Post-publication: "criteria and procedures for corrections, expressions of concern, retraction" and other changes. In 2020, the Leiden Declaration was expanded and supplemented by the Platform for Responsible Editorial Policies (PREP). This initiative also aims to address the structural scarcity of data and empirical information on editorial policies and peer review practices. As of 2022, this database contains partially crowdsourced information on the editorial procedures of 490 journals, from an initial base of 353 journals. The procedures evaluated include, in particular, "the level of anonymity afforded to authors and reviewers; the use of digital tools such as plagiarism scanners; and the timing of peer review in the research and publication process". Despite these developments, research on editorial policies still highlights the need for "a comprehensive database that would allow authors or other stakeholders to compare journals based on their (…) requirements or recommendations". References Bibliography Standards and declarations Reports Books and theses Academic articles Chapters Conferences Other sources Ethics and statistics Open science Scientific method
Research transparency
[ "Technology" ]
6,454
[ "Ethics and statistics", "Ethics of science and technology" ]
69,481,211
https://en.wikipedia.org/wiki/Scandium%20phosphide
Scandium phosphide is an inorganic compound of scandium and phosphorus with the chemical formula ScP. Synthesis ScP can be obtained by the reaction of scandium and phosphorus at 1000 °C. Physical properties This compound is calculated to be a semiconductor, with potential uses in high-power, high-frequency applications and in laser diodes. Chemical properties ScP can be smelted with cobalt or nickel in an electric arc to obtain ScCoP and ScNiP. References Phosphides Scandium compounds Semiconductors Rock salt crystal structure
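A schematic equation for the synthesis route described above (a sketch assuming a simple one-to-one combination; the phosphorus allotrope and exact stoichiometry are not specified in the article):

\[
\mathrm{Sc} + \mathrm{P} \;\xrightarrow{\;1000\,^{\circ}\mathrm{C}\;}\; \mathrm{ScP}
\]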
Scandium phosphide
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
104
[ "Electrical resistance and conductance", "Physical quantities", "Semiconductors", "Materials", "Electronic engineering", "Condensed matter physics", "Solid state engineering", "Matter" ]
69,482,866
https://en.wikipedia.org/wiki/Particulate%20pollution
Particulate pollution is pollution of an environment that consists of particles suspended in some medium. There are three primary forms: atmospheric particulate matter, marine debris, and space debris. Some particles are released directly from a specific source, while others form in chemical reactions in the atmosphere. Particulate pollution can be derived from either natural sources or anthropogenic processes. Atmospheric particulate matter Atmospheric particulate matter, also known as particulate matter, or PM, describes solid and/or liquid particles suspended in a gas, most commonly the Earth's atmosphere. Particles in the atmosphere can be divided into two types, depending on the way they are emitted. Primary particles, such as mineral dust, are emitted directly into the atmosphere. Secondary particles, such as ammonium nitrate, are formed in the atmosphere through gas-to-particle conversion. Sources Some particulates occur naturally, originating from volcanoes, dust storms, forest and grassland fires, living vegetation and sea spray. Human activities, such as the burning of fossil fuels in vehicles, wood burning, stubble burning, power plants, road dust, wet cooling towers in cooling systems and various industrial processes, also generate significant amounts of particulates. Coal combustion in developing countries is the primary method for heating homes and supplying energy. Because salt spray over the oceans is overwhelmingly the most common form of particulate in the atmosphere, anthropogenic aerosols (those made by human activities) currently account for about 10 percent of the total mass of aerosols in our atmosphere. Microplastics are an emerging source of atmospheric pollution, particularly fine plastic fibers that are light enough to be carried by the wind. Microplastics traveling in the air cannot be traced back to their specific original sources, as the wind can blow the minute particles thousands of miles from where they were originally shed. Microplastics are being found in very remote regions of the Earth, where there are no apparent nearby sources of plastic. A common source of airborne microplastic fibers is plastic textiles. While most atmospheric microplastics tend to come from land, microplastics are also entering the atmosphere through ocean and sea mist. Domestic combustion and wood smoke Domestic combustion pollution is produced mainly by burning fuels such as wood, gas, and charcoal for heating, cooking and agriculture, as well as by wildfires. Major domestic combustion pollutants include carbon dioxide (17%), carbon monoxide (13%), nitrogen monoxide (6%), polycyclic aromatic hydrocarbons, and fine and ultrafine particles. In the United Kingdom domestic combustion is the largest single source of PM2.5 annually. In some towns and cities in New South Wales wood smoke may be responsible for 60% of fine particle air pollution in the winter. Research conducted on biomass burning in 2015 estimated that 38% of total European particulate pollution emissions come from domestic wood burning. The particulate pollutant is often microscopic in size, which enables it to infiltrate interior spaces even when windows and doors are closed. Black carbon, the main component of woodsmoke, appears in the indoor environment at significant levels compared with other ambient pollutants. If a room is sealed tightly enough to prevent woodsmoke transmission, it will also prevent oxygen exchange between indoors and outdoors. 
Regular dust masks can do little against fine particulate pollutants, since they are designed to filter out only larger particles. Masks with HEPA filters can filter out microscopic pollutants, but can make breathing difficult for people with lung disease. Living under high concentrations of pollutants can lead to headaches, fatigue, lung disease, asthma, and throat and eye irritation. One of the most common diseases among those living among pollutants is chronic obstructive pulmonary disease (COPD). Exposure to wood and charcoal smoke is significantly associated with COPD diagnoses among those living in developing and developed countries. Exposure to woodsmoke strains the respiratory system and increases the risk of hospital admissions. Marine debris Marine debris and marine aerosols refer to particulates suspended in a liquid, usually water on the Earth's surface. Particulates in water are a kind of water pollution measured as total suspended solids, a water quality measurement listed as a conventional pollutant in the U.S. Clean Water Act, a water quality law. Notably, some of the same kinds of particles can be suspended both in air and water, and pollutants specifically may be carried in the air and deposited in water, or fall to the ground as acid rain. The majority of marine aerosols are created through the bursting of bubbles from breaking waves and through capillary action on the ocean surface due to the stress exerted by surface winds. Among common marine aerosols, pure sea salt aerosols are the major component, with an annual global emission between 2,000 and 10,000 teragrams. Through interactions with water, many marine aerosols help to scatter light and aid in the formation of cloud condensation nuclei and ice nuclei (IN), thus affecting the atmospheric radiation budget. When they interact with anthropogenic pollution, marine aerosols can affect biogeochemical cycles through the depletion of acids such as nitric acid and halogens. Space debris Space debris describes particulates in the vacuum of outer space, specifically particles originating from human activity that remain in geocentric orbit around the Earth. The International Association of Astronauts defines space debris as "any man-made Earth orbiting object which is non-functional with no reasonable expectation of assuming or resuming its intended function or any other function for which it is or can be expected to be authorized, including fragments and parts thereof". Space debris is classified by size and operational purpose, and divided into four main subsets: inactive payloads, operational debris, fragmentation debris and microparticulate matter. Inactive payloads refer to any launched space objects that have lost the capability to reconnect to their corresponding space operator, thus preventing a return to Earth. In contrast, operational debris describes the matter associated with the propulsion of a larger entity into space, which may include upper rocket stages and ejected nose cones. Fragmentation debris refers to any object in space that has become dissociated from a larger entity by means of explosion, collision or deterioration. Microparticulate matter describes space matter that typically cannot be seen singly with the naked eye, including particles, gases, and spaceglow. 
In response to research concluding that impacts from Earth orbital debris could pose greater hazards to spacecraft than the natural meteoroid environment, NASA began its orbital debris program in 1979, initiated by the Space Sciences branch at Johnson Space Center (JSC). With an initial budget of $70,000, the NASA orbital debris program set out to characterize the hazards posed by space debris and to create mitigation standards that would minimize the growth of the orbital debris environment. By 1990, the NASA orbital debris program had created a debris monitoring program, which included mechanisms to sample the low Earth orbit (LEO) environment for debris as small as 6 mm using the Haystack X-band ground radar. Epidemiology Particulate pollution is observed around the globe in varying sizes and compositions and is the focus of many epidemiological studies. Particulate matter (PM) is generally classified into two main size categories: PM10 and PM2.5. PM10, also known as coarse particulate matter, consists of particles 10 micrometers (μm) and smaller, while PM2.5, also called fine particulate matter, consists of particles 2.5 μm and smaller. Particles 2.5 μm or smaller in size are especially notable as they can be inhaled into the lower respiratory system and, with enough exposure, absorbed into the bloodstream. Particulate pollution can occur directly or indirectly from a number of sources including, but not limited to: agriculture, automobiles, construction, forest fires, chemical pollutants, and power plants. Exposure to particulates of any size and composition may occur acutely over a short duration, or chronically over a long duration. Particulate exposure has been associated with adverse respiratory symptoms, ranging from irritation of the airways, aggravated asthma, coughing, and difficulty breathing with acute exposure, to symptoms such as irregular heartbeat, lung cancer, kidney disease, chronic bronchitis, and premature death in individuals who suffer from pre-existing cardiovascular or lung diseases with chronic exposure. The severity of health effects generally depends upon the size of the particles as well as the health status of the individual exposed; older adults, children, pregnant women, and immunocompromised populations are at the greatest risk for adverse health outcomes. Short-term exposure to particulate pollution has been linked to adverse health impacts. As a result, the US Environmental Protection Agency (EPA) and various health agencies around the world have established thresholds for concentrations of PM2.5 and PM10 that are determined to be acceptable. However, there is no known safe level of exposure and thus any exposure to particulate pollution is likely to increase an individual's risk of adverse health effects. In European countries, air quality at or above 10 micrograms per cubic meter of air (μg/m3) for PM2.5 increases the all-cause daily mortality rate by 0.2-0.6% and the cardiopulmonary mortality rate by 6-13%. Worldwide, PM10 concentrations of 70 μg/m3 and PM2.5 concentrations of 35 μg/m3 have been shown to increase long-term mortality by 15%. Moreover, approximately 4.2 million premature deaths observed in 2016 occurred due to airborne particulate pollution, 91% of which occurred in countries with low to middle socioeconomic status. Of these premature deaths, 58% were attributed to strokes and ischaemic heart diseases, 8% were attributed to COPD (Chronic Obstructive Pulmonary Disease), and 6% to lung cancer. 
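To make the cited figures concrete, the following arithmetic sketch (not from the article; the baseline value is a hypothetical placeholder) applies the quoted 0.2-0.6% increase in all-cause daily mortality associated with elevated PM2.5 to an assumed baseline number of daily deaths.

def excess_daily_deaths(baseline_daily_deaths, percent_increase):
    """Additional expected daily deaths from a relative increase in mortality."""
    return baseline_daily_deaths * (percent_increase / 100.0)

# Hypothetical city with 150 all-cause deaths per day: the cited 0.2-0.6%
# range corresponds to roughly 0.3 to 0.9 additional deaths per day.
for pct in (0.2, 0.6):
    print(pct, excess_daily_deaths(150, pct))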
In 2006, the EPA conducted air quality designations in all 50 states, denoting areas of high pollution based on criteria such as air quality monitoring data, recommendations submitted by the states, and other technical information; in 2012 it reduced the National Ambient Air Quality Standard for exposure to particulates in the 2.5 micrometers and smaller category from 15 μg/m3 to 12 μg/m3. As a result, U.S. annual PM2.5 averages decreased from 13.5 μg/m3 to 8.02 μg/m3 between 2000 and 2017. Microplastics prove to be particularly concerning as particulate matter because of their reactivity and ability to become contaminated. Microplastic particles, depending on their composition, can form carbonyl groups on their surface, which allow contaminants such as heavy metals to be adsorbed by the particle. When microplastic particles are inhaled, they persist in the lungs and cause inflammation. More research is needed to understand the long-term health effects of microplastics in the human body. Environmental risks Particulate matter (PM), particularly PM2.5, has been found to be harmful to aquatic organisms, including fish, crustaceans, and molluscs. In a study by Han et al., the effects of PM smaller than 2.5 micrometers on life history traits and oxidative stress were observed in Tigriopus japonicus. Exposure to particulate matter of less than 2.5 micrometers in diameter led to significant changes in ROS levels, indicating that particulate matter exposure was a causative agent of oxidative stress in Tigriopus japonicus. In addition to aquatic organisms, negative effects of particulate matter have been noted in mammals as well. Following acute exposure to ambient particulate matter, rats showed a significant increase in neutrophils and a significant decrease in lymphocytes, indicating that particulate matter exposure can result in activation of the sympathetic stress response. References External links Pollution Atmospheric sciences Environmental chemistry Environmental science
Particulate pollution
[ "Chemistry", "Environmental_science" ]
2,461
[ "Environmental chemistry", "nan" ]
63,832,562
https://en.wikipedia.org/wiki/Hypercentric%20lens
A hypercentric or pericentric lens is a lens system where the entrance pupil is located in front of the lens, in the space where an object could be located. In a certain region, objects that are further away from the lens produce larger images than objects that are closer to the lens. This is in stark contrast to the behavior of the human eye or any ordinary camera (both entocentric lenses), where further-away objects always appear smaller. The geometry of a hypercentric lens can be visualized by imagining a point source of light at the center of the entrance pupil sending rays in all directions. Any point on the object will be imaged to the point on the image plane found by continuing the ray that passes through it, so the shape of the image will be the same as the shadow cast by the object from the imaginary point of light. The closer an object gets to that point (the center of the entrance pupil), the larger its image will be. This inversion of normal perspectivity can be useful for machine vision. Imagine a six-sided die sitting on a conveyor belt being imaged by a hypercentric lens system directly above, whose entrance pupil is below the conveyor belt. The image of the die would contain the top and all four sides at once, because the bottom of the die appears larger than the top. See also Entocentric lens Telecentric lens References Photographic lenses Machine vision
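The shadow analogy above can be turned into a small numerical sketch (not from the article; all distances are illustrative). Treating the center of the entrance pupil as the point source, the apparent size of an object scales with the ratio of the pupil-to-image-plane distance to the pupil-to-object distance, so objects closer to the pupil (and therefore farther from the lens) appear larger.

def shadow_magnification(pupil_to_image_plane, pupil_to_object):
    """Relative image size of an object in the point-source shadow model:
    (distance from pupil to image plane) / (distance from pupil to object)."""
    if pupil_to_object <= 0:
        raise ValueError("the object must lie in front of the pupil")
    return pupil_to_image_plane / pupil_to_object

# With the image plane 500 mm from the pupil, an object 100 mm from the pupil
# appears 5x larger than one of the same size 200 mm away (2.5x), illustrating
# how the bottom of the die (nearer the pupil) can appear larger than the top.
print(shadow_magnification(500, 100), shadow_magnification(500, 200))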
Hypercentric lens
[ "Engineering" ]
293
[ "Machine vision", "Robotics engineering" ]
63,836,450
https://en.wikipedia.org/wiki/Joan%20van%20der%20Waals
Joan Henri van der Waals (2 May 1920 – 21 June 2022) was a Dutch physicist. He was professor of experimental physics at Leiden University between 1967 and 1989. He specialized in molecular physics and clathrate hydrates. One of Van der Waals's most significant contributions to the study of hydrates was a series of papers between 1953 and 1958, which eventually culminated in the 1959 publication of his paper on the canonical partition function for clathrates, along with J. C. Platteeuw. To create this partition function, van der Waals made a number of simplifying assumptions, most prominently that neighboring guest gas molecules cannot interact and that there is a maximum of one guest per cage. Early life Van der Waals was born on 2 May 1920 in Amsterdam. A book on the Bohr model sparked his interest in physics. After finishing high school in Amsterdam he moved to London to work as an intern-apprentice in a laboratory. When he returned to the Netherlands he started a combined study of physics, chemistry and mathematics at the University of Amsterdam. With the German invasion of the Netherlands in May 1940, Van der Waals was called up for military service with the mounted artillery. He was made a prisoner of war but was allowed to return to his studies in June 1940. In 1942, Van der Waals completed the long-distance tour-skating event, the Elfstedentocht. In 1943 he refused to sign the loyalty declaration and went into hiding. He joined the underground courier service Rolls Royce. One of his activities was to make contact from The Hague with already liberated parts of the Netherlands to exchange communications. Van der Waals was caught by the authorities three times during the war period, but managed to escape each time. Near the end of the war he went into hiding with family members living in the Veluwe region. When this area was liberated he was recruited as a translator for the Alsos Mission because he was able to speak German and English. In this capacity he was part of the liberation of Utrecht and saw the German technological facilities in Hook of Holland. Career After the war ended, Van der Waals finished his studies at the University of Amsterdam in October 1945. He then started working for the Koninklijke Shell Laboratorium Amsterdam. He obtained his doctorate at the University of Groningen in 1950, with a thesis titled Thermodynamic Properties of Mixtures of Alkanes Differing in Chain Length. In the 1950s, Van der Waals developed insights into the description of clathrates and hydrates related to noble gas compounds, resulting in the 1959 Van der Waals–Platteeuw clathrate hydrate theory. Van der Waals was appointed professor of experimental physics at Leiden University in 1967, and retired in 1989. He specialized in molecular physics. In 1962, Van der Waals received the Bourke Award of the Royal Society of Chemistry. Van der Waals was elected a member of the Royal Netherlands Academy of Arts and Sciences (KNAW) in 1971. He served on the board of the KNAW between 1984 and 1987. He was an honorary member of the Royal Netherlands Chemical Society from 1998. Van der Waals was appointed Knight in the Order of the Netherlands Lion. Van der Waals was involved in the conservation and restoration of the Trippenhuis, the seat of the KNAW, from the 1980s. Personal life Van der Waals was married to Liesbeth van der Waals (1920–2014), with whom he had three children. In 1967 the pair separated. Van der Waals was the first cousin, twice removed, of Dutch Nobel Prize–winning physicist Johannes Diderik van der Waals. 
He was an avid sailor and made trips to the polar circle and Argentina. Van der Waals turned 100 on 2 May 2020, and died on 21 June 2022, at the age of 102. References 1920 births 2022 deaths 20th-century Dutch physicists Dutch men centenarians Dutch prisoners of war in World War II Dutch resistance members Experimental physicists Knights of the Order of the Netherlands Lion Academic staff of Leiden University Members of the Royal Netherlands Academy of Arts and Sciences Royal Netherlands Army personnel of World War II Scientists from Amsterdam University of Amsterdam alumni University of Groningen alumni World War II prisoners of war held by Germany
Joan van der Waals
[ "Physics" ]
871
[ "Experimental physics", "Experimental physicists" ]
76,953,474
https://en.wikipedia.org/wiki/Type%20SRs%208000%20bucket-wheel%20excavator
The Type SRs 8000, or less commonly the SRs 8000-class, is a family of bucket-wheel excavators known for including some of the largest terrestrial vehicles ever made, with Bagger 293, its "lead vessel", being the largest ground vehicle in history. The Type SRs 8000 classification was coined by TAKRAF to describe Bagger 293 specifically, although it is unclear whether it extends to the other "sibling vehicles" of the same bulk. Whilst the "Bagger" family name may suggest a series of copies of the same vehicle type, it is more a loose denominator grouping BWEs of similar bulk, length, height and size within the Hambach surface mine. Indeed, some of the Baggers are not of the same size or construction period, or even built by the same manufacturer: Bagger 293 and Bagger 288, for example, were constructed by TAKRAF and Krupp respectively. Specifications As noted above, the one factor that unites them all is their size. All members of the Type SRs 8000 weigh at least 7,000 tons. The smallest and oldest of the family, Bagger 281 (built in 1958), weighs over 7,800 tons, although the average weight is around 13,000 tons. Likewise, all members reach lengths of over 200 meters and require only a small crew of five. At this size, the vehicles carry their own on-board toilet and kitchenette rooms. As BWEs, the Type SRs 8000s are all externally powered from a nearby coal production plant, with an internal 6,413 kW (8,600 hp) electric motor keeping the machine operating smoothly. On average, a Bagger requires a total power output of 16.56 MW (22,207 hp) to function with all systems running. Their primary role as BWEs is to excavate lignite coal in Germany for processing into energy, or up to 240,000 cubic metres of overburden daily. Currently, all Type SRs 8000s are in service. They are Bagger 281 (1958), Bagger 285 (1975), Bagger 287 (1976), Bagger 288 (1978), Bagger 291 (1993) and Bagger 293 (1995). Gallery See also List of largest machines Bucket-wheel excavators Landships Bagger 288 Bagger 293 References Bucket-wheel excavators Coal mining in Germany RWE Buildings and structures in Rhein-Erft-Kreis Economy of North Rhine-Westphalia Takraf GmbH
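The kilowatt and horsepower figures quoted above are mutually consistent if the horsepower values are read as mechanical horsepower; the short Python sketch below is an illustrative check of that conversion (the 0.7457 kW-per-hp factor is an assumption about which horsepower unit the figures use, not something stated in the article).

```python
# Illustrative unit check only; assumes "hp" means mechanical horsepower.
KW_PER_HP = 0.745699872  # kilowatts per mechanical horsepower

def kw_to_hp(kilowatts: float) -> float:
    """Convert a power rating from kilowatts to mechanical horsepower."""
    return kilowatts / KW_PER_HP

print(round(kw_to_hp(6_413)))   # ~8600 hp, matching the motor rating above
print(round(kw_to_hp(16_560)))  # ~22207 hp, matching the quoted total output
```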
Type SRs 8000 bucket-wheel excavator
[ "Engineering" ]
537
[ "Mining equipment", "Bucket-wheel excavators" ]
76,953,946
https://en.wikipedia.org/wiki/Type%20SRs%202000%20bucket-wheel%20excavator
The Type SRs 2000 (or Type SRs (K) 2000 in China) is a class of medium-sized bucket-wheel excavators built by TAKRAF. It is by far one of the most common and recognizable BWEs built and sold by TAKRAF, with 56 Type SRs 2000s having been commissioned and launched as of 2013. Specifications It is a medium-sized BWE that reaches dimensions of , a length of , a width of and a height of ; its ground pressure is much lower than that of a D11 dozer. It was built to replace the aging Type SRs 1200 BWEs, with the Type SRs 2000s possessing more efficient and cost-saving power lines and drive conveyor belt systems. The Type SRs 2000s are powered by at least four electric motors fed through a 20–30 kV trailing cable, and whilst the machine is externally powered like all BWEs, its total operational power is currently unknown. The bucket wheel is in diameter with 14 buckets. It is able to excavate between 4,900 and 7,000 m3/h with a digging force of around 100 kN/m. The bucket-wheel excavator reaches a digging height of 30 m with a cutting depth of −10 meters. Although construction of the Type SRs 2000 began in 1989, with one of the first batches being sent to Ekibastuz in the then Kazakh SSR (USSR), official global commissioning and serialization only began in 2008, with the China-export model, the Type SRs (K) 2000, being commissioned in 2013. The Type SRs (K) 2000 export model has a slight modification that allows the BWE to withstand temperatures from +39 °C down to −39 °C, which is needed given the wide temperature swings in Inner Mongolia. Current and former operators See also List of largest machines Bucket-wheel excavators Landships Type SRs 8000 References Bucket-wheel excavators Coal mining in Germany RWE Takraf GmbH
Type SRs 2000 bucket-wheel excavator
[ "Engineering" ]
418
[ "Mining equipment", "Bucket-wheel excavators" ]
76,967,306
https://en.wikipedia.org/wiki/Hydrotelluride
A hydrotelluride or tellanide is an ion or a chemical compound containing the [HTe]− anion, which has a hydrogen atom bonded to a tellurium atom. HTe is a pseudohalogen. Organic compounds containing the −TeH group are called tellurols. "Tellanide" is the IUPAC name from the Red Book, but hydrogen(tellanide)(1−) is also listed. "Tellanido" is not named as a ligand; however, "ditellanido" is used for HTeTe−. Hydrotellurides are usually unstable at room temperature. List References Tellurium(II) compounds Anions
Hydrotelluride
[ "Physics", "Chemistry" ]
140
[ "Ions", "Matter", "Anions" ]
76,975,056
https://en.wikipedia.org/wiki/Caputo%20fractional%20derivative
In mathematics, the Caputo fractional derivative, also called the Caputo-type fractional derivative, is a generalization of derivatives to non-integer orders, named after Michele Caputo. Caputo first defined this form of fractional derivative in 1967. Motivation The Caputo fractional derivative is motivated by the Riemann–Liouville fractional integral. Let $f$ be continuous on $(0,\infty)$; then the Riemann–Liouville fractional integral of order $\alpha>0$ is $I^{\alpha}f(t)=\frac{1}{\Gamma(\alpha)}\int_{0}^{t}(t-\tau)^{\alpha-1}f(\tau)\,d\tau$, where $\Gamma$ is the Gamma function. Applying this integral of order $n-\alpha$ to the $n$-th ordinary derivative of $f$ yields a fractional derivative of order $\alpha$; this is known as the Caputo-type fractional derivative, often written as ${}^{C}D^{\alpha}f$. Definition The first definition of the Caputo-type fractional derivative was given by Caputo as ${}^{C}D^{\alpha}f(t)=\frac{1}{\Gamma(n-\alpha)}\int_{0}^{t}\frac{f^{(n)}(\tau)}{(t-\tau)^{\alpha+1-n}}\,d\tau$, where $f\in C^{n}$ and $n-1<\alpha<n$. A popular equivalent definition takes $n=\lceil\alpha\rceil$, where $\lceil\cdot\rceil$ is the ceiling function, so that the same integral applies to any non-integer order $\alpha>0$. The problem with these definitions is that they only allow arguments $t>0$. This can be fixed by replacing the lower integral limit $0$ with an arbitrary $a$, giving ${}^{C}D_{a}^{\alpha}f(t)=\frac{1}{\Gamma(n-\alpha)}\int_{a}^{t}\frac{f^{(n)}(\tau)}{(t-\tau)^{\alpha+1-n}}\,d\tau$, whose domain is $t>a$. Properties and theorems Basic properties and theorems A few basic properties are ${}^{C}D^{\alpha}I^{\alpha}f(t)=f(t)$ and $I^{\alpha}\,{}^{C}D^{\alpha}f(t)=f(t)-\sum_{k=0}^{n-1}\frac{f^{(k)}(0)}{k!}t^{k}$. Non-commutation The index law does not always fulfill the property of commutation: in general ${}^{C}D^{\alpha}\,{}^{C}D^{\beta}f\neq{}^{C}D^{\beta}\,{}^{C}D^{\alpha}f\neq{}^{C}D^{\alpha+\beta}f$, where $\alpha,\beta>0$. Fractional Leibniz rule The Leibniz rule for the Caputo fractional derivative is expressed as a series expansion involving the generalized binomial coefficient $\binom{\alpha}{k}$. Relation to other fractional differential operators The Caputo-type fractional derivative is closely related to the Riemann–Liouville fractional integral via its definition: ${}^{C}D^{\alpha}f=I^{n-\alpha}f^{(n)}$. Furthermore, the following relation applies: ${}^{C}D^{\alpha}f(t)=D_{RL}^{\alpha}\!\left[f(t)-\sum_{k=0}^{n-1}\frac{f^{(k)}(0)}{k!}t^{k}\right]$, where $D_{RL}^{\alpha}$ is the Riemann–Liouville fractional derivative. Laplace transform The Laplace transform of the Caputo-type fractional derivative is given by $\mathcal{L}\{{}^{C}D^{\alpha}f\}(s)=s^{\alpha}F(s)-\sum_{k=0}^{n-1}s^{\alpha-k-1}f^{(k)}(0)$, where $F(s)=\mathcal{L}\{f\}(s)$. Caputo fractional derivative of some functions The Caputo fractional derivative of a constant $c$ is ${}^{C}D^{\alpha}c=0$. The Caputo fractional derivative of a power function $t^{k}$ (for $k>n-1$) is ${}^{C}D^{\alpha}t^{k}=\frac{\Gamma(k+1)}{\Gamma(k+1-\alpha)}t^{k-\alpha}$. The Caputo fractional derivative of an exponential function is ${}^{C}D^{\alpha}e^{\lambda t}=\lambda^{\alpha}e^{\lambda t}\,\frac{\gamma(n-\alpha,\lambda t)}{\Gamma(n-\alpha)}$, where $\gamma$ is the lower incomplete gamma function; the result is also commonly written in terms of the E-function. References Further reading Ricardo Almeida, A Caputo fractional derivative of a function with respect to another function Fractional calculus
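As a worked illustration of the power-function rule just stated (a standard textbook computation, not taken from the article itself), the Caputo derivative of order 1/2 of f(t) = t can be evaluated directly:

```latex
% Worked example of the power-function rule, with k = 1, alpha = 1/2, n = 1.
\[
  {}^{C}D^{1/2}\, t
  = \frac{\Gamma(1+1)}{\Gamma\!\left(1+1-\tfrac{1}{2}\right)}\, t^{\,1-\tfrac{1}{2}}
  = \frac{\Gamma(2)}{\Gamma\!\left(\tfrac{3}{2}\right)}\, \sqrt{t}
  = \frac{2\sqrt{t}}{\sqrt{\pi}},
\]
% using \Gamma(2) = 1 and \Gamma(3/2) = \sqrt{\pi}/2.
```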
Caputo fractional derivative
[ "Mathematics" ]
453
[ "Fractional calculus", "Calculus" ]
76,975,616
https://en.wikipedia.org/wiki/Key%20Transparency
Key Transparency allows communicating parties to verify the public keys used in end-to-end encryption. In many end-to-end encryption services, to initiate communication a user will reach out to a central server and request the public keys of the user with whom they wish to communicate. If the central server is malicious or becomes compromised, a man-in-the-middle attack can be launched through the issuance of incorrect public keys. The communications can then be intercepted and manipulated. Additionally, legal pressure could be applied by surveillance agencies to manipulate public keys and read messages. With Key Transparency, public keys are posted to a public log that can be universally audited. Communicating parties can then verify that the public keys they are using are accurate. See also Certificate Transparency References Cryptography End-to-end encryption Public-key cryptography
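To illustrate the idea (and only the idea: this is a toy sketch with made-up names, not the API of any real key-transparency deployment, which in practice uses Merkle-tree logs and signed tree heads), a server can commit to an append-only sequence of key bindings, and an auditor can check that a log head it saw earlier is still part of the log's history:

```python
# Illustrative sketch only: a toy append-only key log with a hash chain.
# Every class and function name here is hypothetical.
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class ToyKeyLog:
    def __init__(self):
        self.entries = []          # list of (user, public_key) tuples
        self.heads = [_h(b"")]     # heads[i] commits to the first i entries

    def publish(self, user: str, public_key: bytes) -> None:
        """Append a key binding; old heads remain valid under the new head."""
        self.entries.append((user, public_key))
        self.heads.append(_h(self.heads[-1] + user.encode() + public_key))

    def head(self) -> bytes:
        return self.heads[-1]

def audit(log: ToyKeyLog, old_head: bytes) -> bool:
    """An auditor checks that a previously seen head is still on the chain,
    i.e. the server has only appended and never rewritten history."""
    return old_head in log.heads

# Usage: a client remembers the head it saw; a later, different head is fine
# as long as the old head is still on the chain (append-only); otherwise the
# server has equivocated about the published keys.
log = ToyKeyLog()
log.publish("alice", b"alice-public-key")
seen = log.head()
log.publish("bob", b"bob-public-key")
assert audit(log, seen)   # append-only history verified
```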
Key Transparency
[ "Mathematics", "Engineering" ]
162
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
65,372,852
https://en.wikipedia.org/wiki/WD%201856%2B534
WD 1856+534 is a white dwarf located in the constellation of Draco. At a distance of about from Earth, it is the outer component of a visual triple star system whose inner pair consists of two red dwarf stars, named G229-20. The white dwarf displays a featureless absorption spectrum, lacking strong optical absorption or emission features in its atmosphere. It has an effective temperature of , corresponding to an age of approximately 5.8 billion years. WD 1856+534 is approximately half as massive as the Sun, while its radius is far smaller than the Sun's, only about 40% larger than Earth's. Planetary system The white dwarf is known to host one exoplanet, WD 1856+534 b, in orbit around it. The exoplanet was detected through the transit method by the Transiting Exoplanet Survey Satellite (TESS) between July and August 2019. An analysis of the transit data in 2020 revealed that it is a Jupiter-like giant planet with a radius over ten times that of Earth, and that it orbits its host star closely at a distance of 0.02 astronomical units (AU), with an orbital period 60 times shorter than that of Mercury around the Sun. The unexpectedly close distance of the exoplanet to the white dwarf implies that it must have migrated inward after its host star evolved from a red giant to a white dwarf; otherwise it would have been engulfed by its star. This migration may be related to the fact that WD 1856+534 belongs to a hierarchical triple-star system: the white dwarf and its planet are gravitationally bound to a distant companion, G 229–20, which itself is a binary system of two red dwarf stars. Gravitational interactions with the companion stars may have triggered the planet's migration through the Lidov–Kozai mechanism in a manner similar to some hot Jupiters. An alternative hypothesis is that the planet instead survived a common envelope phase. In the latter scenario, other planets engulfed earlier may have contributed to the expulsion of the stellar envelope. JWST observations seem to disfavour formation via a common envelope and instead favour high-eccentricity migration. The planetary transmission spectrum obtained with GTC OSIRIS is gray and featureless, likely because of the high level of hazes. The transmission spectrum was also obtained with Gemini GMOS. It does not show any features besides a possible dip at 0.55 μm. This feature could be caused by auroral emission on the nightside of the planet. The research finds a minimum mass of 0.84 by accounting for the transit geometry of a grazing transit. The researchers also revised the white dwarf parameters and found a total age of 8-10 billion years, in agreement with the system belonging to the thin disk. A search with transit timing variations found no additional planets. The search excludes planets with a mass of more than 2 with orbital periods as long as 500 days and planets with >10 with orbital periods as long as 1000 days. 
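The quoted period contrast with Mercury follows directly from Kepler's third law applied to the figures above; the Python sketch below is a rough illustrative check that treats the ~0.5 solar-mass estimate and the 0.02 AU separation as exact, which they are not.

```python
# Rough check of the orbital-period claim, using Kepler's third law in
# solar-system units (P in years, a in AU, M in solar masses): P^2 = a^3 / M.
# The mass and separation values are those quoted in the text; treating them
# as exact is an illustrative assumption.
a_au = 0.02          # orbital separation of WD 1856+534 b
m_sun = 0.5          # approximate white-dwarf mass in solar masses

period_years = (a_au**3 / m_sun) ** 0.5
period_days = period_years * 365.25
mercury_days = 87.97

print(f"period ≈ {period_days:.1f} days")                    # ≈ 1.5 days
print(f"Mercury/period ≈ {mercury_days / period_days:.0f}")  # ≈ 60, as stated
```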
See also WD 1145+017, a white dwarf with a transiting disrupted planetary-mass object WD J0914+1914, a white dwarf with a disk of debris originating from a possible giant planet ZTF J0139+5245, another white dwarf with a disk of debris from a disrupted planetary-mass object CWISEP J1935-1546 a free-floating object with aurora emission in the infrared List of exoplanets and planetary debris around white dwarfs Notes References External links NASA Missions Spy First Possible ‘Survivor’ Planet Hugging White Dwarf Star, Sean Potter, NASA, 16 September 2020 Planet discovered transiting a dead star, Steven Parsons, Nature News and Views, 16 September 2020 White dwarfs Astronomical objects discovered in 2020 Draco (constellation) Planetary systems with one confirmed planet Gas giants 1690, TOI
WD 1856+534
[ "Astronomy" ]
775
[ "Constellations", "Draco (constellation)" ]
72,519,086
https://en.wikipedia.org/wiki/Miriam%20Pe%C3%B1a%20C%C3%A1rdenas
Miriam del Carmen Peña Cárdenas is a Chilean astronomer and cosmochemist whose research includes the chemical composition of interstellar clouds including H II regions and the planetary nebulae surrounding Wolf–Rayet stars. She is a professor and researcher at the National Autonomous University of Mexico (UNAM), in the UNAM Institute of Astronomy. Education Peña began her university studies in Chile, studying engineering, but moved to the National Autonomous University of Mexico to complete her bachelor's degree, and remained there for her graduate studies. Recognition Peña is a member of the Mexican Academy of Sciences. She was a 2007 winner of UNAM's Sor Juana Inés de la Cruz Recognition. References External links Year of birth missing (living people) Living people Chilean astronomers Women astronomers Astrochemists National Autonomous University of Mexico alumni Academic staff of the National Autonomous University of Mexico Members of the Mexican Academy of Sciences
Miriam Peña Cárdenas
[ "Chemistry", "Astronomy" ]
180
[ "Women astronomers", "Astronomers", "Astrochemists" ]
72,521,404
https://en.wikipedia.org/wiki/Toarcian%20Oceanic%20Anoxic%20Event
The Toarcian extinction event, also called the Pliensbachian-Toarcian extinction event, the Early Toarcian mass extinction, the Early Toarcian palaeoenvironmental crisis, or the Jenkyns Event, was an extinction event that occurred during the early part of the Toarcian age, approximately 183 million years ago, during the Early Jurassic. The extinction event had two main pulses, the first being the Pliensbachian-Toarcian boundary event (PTo-E). The second, larger pulse, the Toarcian Oceanic Anoxic Event (TOAE), was a global oceanic anoxic event, representing possibly the most extreme case of widespread ocean deoxygenation in the entire Phanerozoic eon. In addition to the PTo-E and TOAE, there were multiple other, smaller extinction pulses within this span of time. Occurring during the supergreenhouse climate of the Early Toarcian Thermal Maximum (ETTM), the Early Toarcian extinction was associated with large igneous province volcanism, which elevated global temperatures, acidified the oceans, and prompted the development of anoxia, leading to severe biodiversity loss. The biogeochemical crisis is documented by high-amplitude negative carbon isotope excursions, as well as black shale deposition. Timing The Early Toarcian extinction event occurred in two distinct pulses, with the first event being classified by some authors as its own event unrelated to the more extreme second event. The first, more recently identified pulse occurred during the mirabile subzone of the tenuicostatum ammonite zone, coinciding with a slight drop in oxygen concentrations and the beginning of warming following a late Pliensbachian cool period. This first pulse, occurring near the Pliensbachian-Toarcian boundary, is referred to as the PTo-E. The TOAE itself occurred near the tenuicostatum–serpentinum ammonite biozonal boundary, specifically in the elegantulum subzone of the serpentinum ammonite zone, during a marked, pronounced warming interval. The TOAE lasted for approximately 500,000 years, though a range of estimates from 200,000 to 1,000,000 years have also been given. The PTo-E primarily affected shallow water biota, while the TOAE was the more severe event for organisms living in deep water. Causes Geological, isotopic, and palaeobotanical evidence suggests the late Pliensbachian was an icehouse period. The ice sheets of this period are believed to have been thin and to have stretched into lower latitudes, making them extremely sensitive to temperature changes. A warming trend lasting from the latest Pliensbachian to the earliest Toarcian was interrupted by a "cold snap" in the middle polymorphum zone, equivalent to the tenuicostatum ammonite zone, which was then followed by the abrupt warming interval associated with the TOAE. This global warming, driven by rising atmospheric carbon dioxide, was the mainspring of the early Toarcian environmental crisis. Carbon dioxide levels rose from about 500 ppm to about 1,000 ppm. Seawater warmed by anywhere between 3 °C and 7 °C, depending on latitude. At the height of this supergreenhouse interval, global sea surface temperatures (SSTs) averaged about 21 °C. The eruption of the Karoo-Ferrar Large Igneous Province is generally considered to have caused the surge in atmospheric carbon dioxide levels. Argon-argon dating of Karoo-Ferrar rhyolites points to a link between Karoo-Ferrar volcanism and the extinction event, a conclusion reinforced by uranium-lead dating and palaeomagnetism. 
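For readers unfamiliar with the notation, the δ13C values referred to throughout are defined by the standard delta convention of isotope geochemistry (a textbook definition, not something specific to this event), shown below.

```latex
% Standard delta notation for carbon isotope ratios (textbook convention);
% values are reported in per mil (‰), usually relative to the VPDB standard.
\[
  \delta^{13}\mathrm{C} \;=\;
  \left(
    \frac{\bigl(^{13}\mathrm{C}/^{12}\mathrm{C}\bigr)_{\mathrm{sample}}}
         {\bigl(^{13}\mathrm{C}/^{12}\mathrm{C}\bigr)_{\mathrm{standard}}}
    \;-\; 1
  \right) \times 1000
\]
% A negative excursion is a transient shift of this value toward lighter carbon,
% consistent with an input of isotopically light (13C-depleted) carbon.
```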
Occurring during a broader, gradual positive carbon isotope excursion as measured by δ13C values, the TOAE is preceded by a global negative δ13C excursion recognised in fossil wood, organic carbon, and carbonate carbon in the tenuicostatum ammonite zone of northwestern Europe, with this negative δ13C shift being the result of volcanic discharge of light carbon. The global ubiquity of this negative δ13C excursion has been called into question, however, due to its absence in certain deposits from the time, such as the Bächental bituminous marls, though its occurrence in areas like Greece has been cited as evidence of its global nature. The negative δ13C shift is also known from the Arabian Peninsula, the Ordos Basin, and the Neuquén Basin. The negative δ13C excursion has been found to be up to -8‰ in bulk organic and carbonate carbon, although analysis of compound-specific biomarkers suggests a global value of around -3‰ to -4‰. In addition, numerous smaller-scale carbon isotope excursions are globally recorded on the falling limb of the larger negative δ13C excursion. Although the PTo-E is not associated with a decrease in δ13C analogous to the TOAE's, volcanism is nonetheless believed to have been responsible for its onset as well, with the carbon injection most likely having an isotopically heavy, mantle-derived origin. The Karoo-Ferrar magmatism released so much carbon dioxide that it disrupted the imprint of the 9 Myr long-term carbon cycle that was otherwise steady and stable during the Jurassic and Early Cretaceous. The values of 187Os/188Os rose from ~0.40 to ~0.53 during the PTo-E and from ~0.42 to ~0.68 during the TOAE, and many scholars conclude this change in osmium isotope ratios evidences the responsibility of this large igneous province for the biotic crises. Mercury anomalies from the approximate time intervals corresponding to the PTo-E and TOAE have likewise been invoked as tell-tale evidence of the ecological calamity's cause being a large igneous province, although some researchers attribute these elevated mercury levels to increased terrigenous flux. There is evidence that the motion of the African Plate suddenly changed in velocity, shifting from mostly northward movement to southward movement. Such shifts in plate motion are associated with similar large igneous provinces emplaced in other time intervals. A 2019 geochronological study found that the emplacement of the Karoo-Ferrar large igneous province and the TOAE were not causally linked, and simply happened to occur rather close in time, contradicting mainstream interpretations of the TOAE. The authors of the study conclude that the timeline of the TOAE does not match up with the course of activity of the Karoo-Ferrar magmatic event. The large igneous province also intruded into coal seams, releasing even more carbon dioxide and methane than it otherwise would have. Magmatic sills are also known to have intruded into shales rich in organic carbon, causing additional venting of carbon dioxide into the atmosphere. Carbon release via metamorphic heating of coal has been criticised as a major driver of the environmental perturbation, however, on the basis that coal transects themselves do not show the δ13C excursions that would be expected if significant quantities of thermogenic methane were released, suggesting that much of the degassed emissions were either condensed as pyrolytic carbon or trapped as coalbed methane. 
In addition, possible associated release of deep sea methane clathrates has been potentially implicated as yet another cause of global warming. Episodic melting of methane clathrates dictated by Milankovitch cycles has been put forward as an explanation fitting the observed shifts in the carbon isotope record. Other studies contradict and reject the methane hydrate hypothesis, however, concluding that the isotopic record is too incomplete to conclusively attribute the isotopic excursion to methane hydrate dissociation, that carbon isotope ratios in belemnites and bulk carbonates are incongruent with the isotopic signature expected from a massive release of methane clathrates, that much of the methane released from ocean sediments was rapidly sequestered, buffering its ability to act as a major positive feedback, and that methane clathrate dissociation occurred too late to have had an appreciable causal impact on the extinction event. Hypothetical release of methane clathrates extremely depleted in heavy carbon isotopes has furthermore been considered unnecessary as an explanation for the carbon cycle disruption. It has also been hypothesised that the release of cryospheric methane trapped in permafrost amplified the warming and its detrimental effects on marine life. Obliquity-paced carbon isotope excursions have been interpreted by some researchers as reflective of permafrost decline and consequent greenhouse gas release. The TOAE is believed to be the second largest anoxic event of the last 300 Ma, and possibly the largest of the Phanerozoic. A positive δ13C excursion, likely resulting from the mass burial of organic carbon during the anoxic event, is known from the falciferum ammonite zone, chemostratigraphically identifying the TOAE. Large igneous province volcanism resulted in increased silicate weathering and an acceleration of the hydrological cycle, as shown by an increased amount of terrestrially derived organic matter found in sedimentary rocks of marine origin during the TOAE. Concentrations of phosphorus, magnesium, and manganese rose in the oceans. A -0.5‰ excursion in δ44/40Ca provides further evidence of increased continental weathering. Osmium isotope ratios confirm further still a major increase in weathering. The enhanced continental weathering in turn led to increased eutrophication that helped drive the anoxic event in the oceans. Continual transport of continentally weathered nutrients into the ocean enabled high levels of primary productivity to be maintained over the course of the TOAE. Rising sea levels contributed to ocean deoxygenation; as rising sea levels inundated low-lying lands, organic plant matter was transported outwards into the ocean. An alternate model for the development of anoxia is that epicontinental seaways became salinity stratified with strong haloclines, chemoclines, and thermoclines. This caused mineralised carbon on the seafloor to be recycled back into the photic zone, driving widespread primary productivity and in turn anoxia. The freshening of the Arctic Ocean by way of melting of Northern Hemisphere ice caps was a likely trigger of such stratification and a slowdown of global thermohaline circulation. Stratification also occurred due to the freshening of surface water caused by an enhanced water cycle. Rising seawater temperatures amidst a transition from icehouse to greenhouse conditions further retarded ocean circulation, aiding the establishment of anoxic conditions. 
Geochemical evidence from what was then the northwestern European epicontinental sea suggests that a shift from cooler, more saline water conditions to warmer, fresher conditions prompted the development of significant density stratification of the water column and induced anoxia. Extensive organic carbon burial induced by anoxia was a negative feedback loop retarding the otherwise pronounced warming and may have caused global cooling in the aftermath of the TOAE. In anoxic and euxinic marine basins in Europe, organic carbon burial rates increased by ~500%. Furthermore, anoxia was not limited to oceans; large lakes also experienced oxygen depletion and black shale deposition. Euxinia occurred in the northwestern Tethys Ocean during the TOAE, as shown by a positive δ34S excursion in carbonate-associated sulphate that occurs synchronously with the positive δ13C excursion in carbonate carbon during the falciferum ammonite zone. This positive δ34S excursion has been attributed to the depletion of isotopically light sulphur in the marine sulphate reservoir that resulted from microbial sulphur reduction in anoxic waters. Similar positive δ34S excursions corresponding to the onset of the TOAE are known from pyrites in the Sakahogi and Sakuraguchi-dani localities in Japan, with the Sakahogi site displaying a less extreme but still significant pyritic positive δ34S excursion during the PTo-E. Euxinia is further evidenced by enhanced pyrite burial in Zázrivá, Slovakia, enhanced molybdenum burial totalling about 41 Gt of molybdenum, and δ98/95Mo excursions observed in sites in the Cleveland, West Netherlands, and South German Basins. Valdorbia, a site in the Umbria-Marche Apennines, also exhibited euxinia during the anoxic event. There is less evidence of euxinia outside the northwestern Tethys, and it likely only occurred transiently in basins in Panthalassa and the southwestern Tethys. Due to the clockwise circulation of the oceanic gyre in the western Tethys and the rough, uneven bathymetry in the northward limb of this gyre, oxic bottom waters had relatively few impediments to diffusing into the southwestern Tethys, which spared it from the far greater prevalence of anoxia and euxinia that characterised the northern Tethys. The Panthalassan deep water site of Sakahogi was mainly anoxic-ferruginous across the interval spanning the late Pliensbachian to the TOAE, but transient sulphidic conditions did occur during the PTo-E and TOAE. In northeastern Panthalassa, in what is now British Columbia, euxinia dominated anoxic bottom waters. The early stages of the TOAE were accompanied by a decrease in the acidity of seawater following a substantial decrease prior to the TOAE. Seawater pH then dropped close to the middle of the event, strongly acidifying the oceans. The sudden decline of carbonate production during the TOAE is widely believed to be the result of this abrupt episode of ocean acidification. Additionally, the enhanced recycling of phosphorus back into seawater as a result of high temperatures and low seawater pH inhibited its mineralisation into apatite, helping contribute to oceanic anoxia. The abundance of phosphorus in marine environments created a positive feedback loop whose consequence was the further exacerbation of eutrophication and anoxia. The extreme and rapid global warming at the start of the Toarcian promoted intensification of tropical storms across the globe. 
Effects on life Marine invertebrates The extinction event associated with the TOAE primarily affected marine life as a result of the collapse of the carbonate factory. Brachiopods were particularly severely hit, with the TOAE representing one of the most dire crises in their evolutionary history. Brachiopod taxa of large size declined significantly in abundance. Uniquely, the brachiopod genus Soaresirhynchia thrived during the later stages of the TOAE due to its low metabolic rate and slow rate of growth, making it a disaster taxon. The species S. bouchardi is known to have been a pioneer species that colonised areas denuded of brachiopods in the northwestern Tethyan region. Ostracods also suffered a major diversity loss, with almost all ostracod clades' distributions during the time interval corresponding to the serpentinum zone shifting towards higher latitudes to escape intolerably hot conditions near the Equator. Bivalves likewise experienced a significant turnover. The decline of bivalves exhibiting high endemism with narrow geographic ranges was particularly severe. At Ya Ha Tinda, a replacement of the pre-TOAE bivalve assemblage by a smaller, post-TOAE assemblage occurred, while in the Cleveland Basin, the inoceramid Pseudomytiloides dubius experienced the Lilliput effect. Ammonoids, having already experienced a major morphological bottleneck thanks to the Gibbosus Event, about a million years before the Toarcian extinction, suffered further losses in the Early Toarcian diversity collapse. Belemnite richness in the northwestern Tethys dropped during the PTo-E but slightly increased across the TOAE. Belemnites underwent a major change in habitat preference from cold, deep waters to warm, shallow waters. Their average rostrum size also increased, though this trend heavily varied depending on the lineage of belemnites. The Toarcian extinction was exceptionally catastrophic for corals; 90.9% of all Tethyan coral species and 49% of all genera were wiped out. Calcareous nannoplankton that lived in the deep photic zone suffered, with the decrease in abundance of the taxon Mitrolithus jansae used as an indicator of shoaling of the oxygen minimum zone in the Tethys and the Hispanic Corridor. Other affected invertebrate groups included echinoderms, radiolarians, dinoflagellates, and foraminifera. Trace fossils, an indicator of bioturbation and ecological diversity, became highly undiverse following the TOAE. Carbonate platforms collapsed during both the PTo-E and the TOAE. Enhanced continental weathering and nutrient runoff was the dominant driver of carbonate platform decline in the PTo-E, while the biggest culprits during the TOAE were heightened storm activity and a decrease in the pH of seawater. The recovery from the mass extinction among benthos commenced with the recolonisation of barren locales by opportunistic pioneer taxa. Benthic recovery was slow and sluggish, being regularly set back thanks to recurrent episodes of oxygen depletion, which continued for hundreds of thousands of years after the main extinction interval. Evidence from the Cleveland Basin suggests it took ~7 Myr for the marine benthos to recover, on par with the Permian-Triassic extinction event. Many marine invertebrate taxa found in South America migrated through the Hispanic Corridor into European seas after the extinction event, aided in their dispersal by higher sea levels. 
Marine vertebrates The TOAE had minor effects on marine reptiles, in stark contrast to the major impact it had on many clades of marine invertebrates. In fact, in the Southwest German Basin, ichthyosaur diversity was higher after the extinction interval, although this may be in part a sampling artefact resulting from a sparse Pliensbachian marine vertebrate fossil record. Terrestrial animals The TOAE is suggested to have caused the extinction of various clades of dinosaurs, including coelophysids, dilophosaurids, and many basal sauropodomorph clades, as a consequence of the remodelling of terrestrial ecosystems caused by global climate change. Some heterodontosaurids and thyreophorans also perished in the extinction event. In the wake of the extinction event, many derived clades of ornithischians, sauropods, and theropods emerged, with most of these post-extinction clades greatly increasing in size relative to dinosaurs before the TOAE. Eusauropods were propelled to ecological dominance after their survival of the Toarcian cataclysm. Megalosaurids experienced a diversification event in the latter part of the Toarcian that was possibly a post-extinction radiation that filled niches vacated by the mass death of the Early Toarcian extinction. Insects may have experienced blooms as fish moved en masse to surface waters to escape anoxia and then died in droves due to limited resources. Terrestrial plants The volcanogenic extinction event initially impacted terrestrial ecosystems more severely than marine ones. A shift towards a low diversity assemblage of cheirolepid conifers, cycads, and Cerebropollenites-producers adapted for high aridity from a higher diversity ecological assemblage of lycophytes, conifers, seed ferns, and wet-adapted ferns is observed in the palaeobotanical and palynological record over the course of the TOAE. The coincidence of the zenith of Classopolis and the decline of seed ferns and spore producing plants with increased mercury loading implicates heavy metal poisoning as a key contributor to the floristic crisis during the Toarcian mass extinction. Poisoning by mercury, along with chromium, copper, cadmium, arsenic, and lead is speculated to be responsible for heightened rates of spore malformation and dwarfism concomitant with enrichments in all these toxic metals. Geologic effects The TOAE was associated with widespread phosphatisation of marine fossils believed to result from the warming-induced increase in weathering that increased phosphate flux into the ocean. This produced exquisitely preserved lagerstätten across the world, such as Ya Ha Tinda, Strawberry Bank, and the Posidonia Shale. As is common during anoxic events, black shale deposition was widespread during the deoxygenation events of the Toarcian. Toarcian anoxia was responsible for the deposition of commercially extracted oil shales, particularly in China. Enhanced hydrological cycling caused clastic sedimentation to accelerate during the TOAE; the increase in clastic sedimentation was synchronous with excursions in 187Os/188Os, 87Sr/86Sr, and δ44/40Ca. Additionally, the Toarcian was punctuated by intervals of extensive kaolinite enrichment. These kaolinites correspond to negative oxygen isotope excursions and high Mg/Ca ratios and are thus reflective of climatic warming events that characterised much of the Toarcian. Likewise, illitic/smectitic clays were also common during this hyperthermal perturbation. 
Palaeogeographic changes The Intertropical Convergence Zone (ITCZ) migrated southwards across southern Gondwana, turning much of the region more arid. This aridification was interrupted, however, in the spinatus ammonite biozone and across the Pliensbachian-Toarcian boundary itself. The large rise in sea levels resulting from the intense global warming led to the formation of the Laurasian Seaway, which enabled the flow of cool water low in salt content to flow into the Tethys Ocean from the Arctic Ocean. The opening of this seaway may have potentially acted as a mitigating factor that ameliorated to a degree the oppressively anoxic conditions that were widespread across much of the Tethys. The enhanced hydrological cycle during early Toarcian warming caused lakes to grow in size. During the anoxic event, the Sichuan Basin was transformed into a giant lake, which was believed to be approximately thrice as large as modern-day Lake Superior. Lacustrine sediments deposited as a result of this lake's existence are represented by the Da’anzhai Member of the Ziliujing Formation. Roughly ~460 gigatons (Gt) of organic carbon and ~1,200 Gt of inorganic carbon were likely sequestered by this lake over the course of the TOAE. Comparison with present global warming The TOAE and the Palaeocene-Eocene Thermal Maximum have been proposed as analogues to modern anthropogenic global warming based on the comparable quantity of greenhouse gases released into the atmosphere in all three events. Some researchers argue that evidence for a major increase in Tethyan tropical cyclone intensity during the TOAE suggests that a similar increase in magnitude of tropical storms is bound to occur as a consequence of present climate change. See also Weissert Event Selli Event Bonarelli Event References Extinction events Toarcian Stage Isotope excursions
Toarcian Oceanic Anoxic Event
[ "Chemistry", "Biology" ]
4,832
[ "Evolution of the biosphere", "Isotope excursions", "Extinction events", "Isotopes" ]
72,527,197
https://en.wikipedia.org/wiki/Organoastatine%20chemistry
Organoastatine chemistry describes the synthesis and properties of organoastatine compounds, chemical compounds containing a carbon to astatine chemical bond. Astatine is extremely radioactive, with the longest-lived isotope (210At) having a half-life of only 8.1 hours. Consequently, organoastatine chemistry can only be studied by tracer techniques on extremely small quantities. The problems caused by radiation damage as well as difficulties in separation and identification are worse for organic astatine derivatives than for inorganic compounds. Most studies of organoastatine chemistry focus on 211At (half-life 7.21 hours), which is the subject of ongoing studies in nuclear medicine: it is better than 131I at destroying abnormal thyroid tissue. Astatine-labelled iodine reagents have been used to synthesise RAt, RAtCl2, R2AtCl, and RAtO2 (R = phenyl or p-tolyl). Alkyl and aryl astatides are relatively stable and have been analysed at high temperatures (120 °C) with radio gas chromatography. Demercuration reactions have produced with good yields trace quantities of 211At-containing aromatic amino acids, steroids, and imidazoles, among other compounds. Astatine has both halogen-like and metallic properties, so that analogies with iodine sometimes hold, but sometimes do not. Astatine can be incorporated into organic molecules via halogen exchange, halodediazotation (replacing a diazonium group), halodeprotonation, or halodemetallation. Initial attempts to radiolabel proteins with 211At exemplify its intermediate behaviour, as astatination (analogous to radioiodination) produces unstable results and it is instead AtO+ (or a hydrolysed species) that probably bonds to proteins. Two-step procedures are used today, first synthesising stable astatoaryl prosthetic groups before incorporating them into the protein. Not only is the C–At bond the weakest of all carbon–halogen bonds (following periodic trends), but also the bond easily breaks as the astatine is oxidised back to free astatine. References Further reading Astatine Organometallic chemistry
Organoastatine chemistry
[ "Chemistry" ]
470
[ "Organometallic chemistry" ]
66,624,528
https://en.wikipedia.org/wiki/Diaphonization
Diaphonization (or diaphonisation), also known as clearing and staining, is a staining technique used on animal specimens that first renders the body of the animal transparent by bathing it in trypsin, and then stains the bones and cartilage with various dyes, usually alizarin red and alcian blue. History Diaphonization was first developed by O. Schultze in 1897, and was later modified by numerous researchers. Technique Clearing renders the animals transparent and is achieved by bathing the specimens in a solution of trypsin, a digestive enzyme that slowly breaks down flesh. The dyes alizarin red and alcian blue are most commonly used to stain bone and cartilage, respectively. When cleared, the specimen is stored in glycerin. Despite its merits, diaphonization is not widely used in the scientific field. Advancements in imaging technology have rendered the practice all but obsolete, though it is expanding as an art form. Diaphonization is not suitable for animals longer than 30 centimeters (except for snakes) due to the limited ability of the trypsin bath to penetrate the tissues of larger animals. It is usually used to preserve animals that are too delicate to dissect, and which are instead kept as wet specimens. References Staining Staining dyes Scientific techniques Laboratory techniques Zoology Skeletal system
Diaphonization
[ "Chemistry", "Biology" ]
278
[ "Staining", "Microbiology techniques", "Zoology", "nan", "Microscopy", "Cell imaging" ]
66,638,383
https://en.wikipedia.org/wiki/Sequence%20covering%20map
In mathematics, specifically topology, a sequence covering map is any of a class of maps between topological spaces whose definitions all somehow relate sequences in the codomain with sequences in the domain. Examples include maps, , , and . These classes of maps are closely related to sequential spaces. If the domain and/or codomain have certain additional topological properties (often, the spaces being Hausdorff and first-countable is more than enough) then these definitions become equivalent to other well-known classes of maps, such as open maps or quotient maps, for example. In these situations, characterizations of such properties in terms of convergent sequences might provide benefits similar to those provided by, say for instance, the characterization of continuity in terms of sequential continuity or the characterization of compactness in terms of sequential compactness (whenever such characterizations hold). Definitions Preliminaries A subset of is said to be if whenever a sequence in converges (in ) to some point that belongs to then that sequence is necessarily in (i.e. at most finitely many points in the sequence do not belong to ). The set of all sequentially open subsets of forms a topology on that is finer than 's given topology By definition, is called a if Given a sequence in and a point in if and only if in Moreover, is the topology on for which this characterization of sequence convergence in holds. A map is called if is continuous, which happens if and only if for every sequence in and every if in then necessarily in Every continuous map is sequentially continuous although in general, the converse may fail to hold. In fact, a space is a sequential space if and only if it has the following : for every topological space and every map the map is continuous if and only if it is sequentially continuous. The in of a subset is the set consisting of all for which there exists a sequence in that converges to in A subset is called in if which happens if and only if whenever a sequence in converges in to some point then necessarily The space is called a if for every subset which happens if and only if every subspace of is a sequential space. Every first-countable space is a Fréchet–Urysohn space and thus also a sequential space. All pseudometrizable spaces, metrizable spaces, and second-countable spaces are first-countable. Sequence coverings A sequence in a set is by definition a function whose value at is denoted by (although the usual notation used with functions, such as parentheses or composition might be used in certain situations to improve readability). Statements such as "the sequence is injective" or "the image (i.e. range) of a sequence is infinite" as well as other terminology and notation that is defined for functions can thus be applied to sequences. A sequence is said to be a of another sequence if there exists a strictly increasing map (possibly denoted by instead) such that for every where this condition can be expressed in terms of function composition as: As usual, if is declared to be (such as by definition) a subsequence of then it should immediately be assumed that is strictly increasing. 
The notation and mean that the sequence is valued in the set The function is called a if for every convergent sequence in there exists a sequence such that It is called a if for every there exists some such that every sequence that converges to in there exists a sequence such that and converges to in It is a if is surjective and also for every and every every sequence and converges to in there exists a sequence such that and converges to in A map is a if for every compact there exists some compact subset such that Sequentially quotient mappings In analogy with the definition of sequential continuity, a map is called a if is a quotient map, which happens if and only if for any subset is sequentially open if and only if this is true of in Sequentially quotient maps were introduced in who defined them as above. Every sequentially quotient map is necessarily surjective and sequentially continuous although they may fail to be continuous. If is a sequentially continuous surjection whose domain is a sequential space, then is a quotient map if and only if is a sequential space and is a sequentially quotient map. Call a space if is a Hausdorff space. In an analogous manner, a "sequential version" of every other separation axiom can be defined in terms of whether or not the space possess it. Every Hausdorff space is necessarily sequentially Hausdorff. A sequential space is Hausdorff if and only if it is sequentially Hausdorff. If is a sequentially continuous surjection then assuming that is sequentially Hausdorff, the following are equivalent: is sequentially quotient. Whenever is a convergent sequence in then there exists a convergent sequence in such that and is a subsequence of Whenever is a convergent sequence in then there exists a convergent sequence in such that is a subsequence of This statement differs from (2) above only in that there are no requirements placed on the limits of the sequences (which becomes an important difference only when is not sequentially Hausdorff). If is a continuous surjection onto a sequentially compact space then this condition holds even if is not sequentially Hausdorff. If the assumption that is sequentially Hausdorff were to be removed, then statement (2) would still imply the other two statement but the above characterization would no longer be guaranteed to hold (however, if points in the codomain were required to be sequentially closed then any sequentially quotient map would necessarily satisfy condition (3)). This remains true even if the sequential continuity requirement on was strengthened to require (ordinary) continuity. Instead of using the original definition, some authors define "sequentially quotient map" to mean a surjection that satisfies condition (2) or alternatively, condition (3). If the codomain is sequentially Hausdorff then these definitions differs from the original in the added requirement of continuity (rather than merely requiring sequential continuity). The map is called if for every convergent sequence in such that is not eventually equal to the set is sequentially closed in where this set may also be described as: Equivalently, is presequential if and only if for every convergent sequence in such that the set is sequentially closed in A surjective map between Hausdorff spaces is sequentially quotient if and only if it is sequentially continuous and a presequential map. 
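For reference, the standard forms of the central definitions in this area, as given in the general topology literature, can be stated as follows (the notation f : X → Y and the particular symbols below are illustrative choices, not necessarily those used elsewhere in this article):

```latex
% Standard definitions (a sketch following the general-topology literature,
% not a verbatim restoration of this article's own formulas).
% Let $f : X \to Y$ be a map between topological spaces.
\begin{itemize}
  \item $f$ is \emph{sequence-covering} if for every sequence $(y_n)$
        converging to $y$ in $Y$ there exist points $x_n \in f^{-1}(y_n)$ and
        $x \in f^{-1}(y)$ such that $(x_n)$ converges to $x$ in $X$.
  \item $f$ is \emph{1-sequence-covering} if for every $y \in Y$ there exists
        some $x \in f^{-1}(y)$ such that every sequence $(y_n) \to y$ admits a
        lift $(x_n) \to x$ with $x_n \in f^{-1}(y_n)$ for all $n$.
  \item $f$ is \emph{sequentially quotient} if a subset $U \subseteq Y$ is
        sequentially open if and only if $f^{-1}(U)$ is sequentially open
        in $X$.
\end{itemize}
```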
Characterizations If is a continuous surjection between two first-countable Hausdorff spaces then the following statements are true: is almost open if and only if it is a 1-sequence covering. An is surjective map with the property that for every there exists some such that is a for which by definition means that for every open neighborhood of is a neighborhood of in is an open map if and only if it is a 2-sequence covering. If is a compact covering map then is a quotient map. The following are equivalent: is a quotient map. is a sequentially quotient map. is a sequence covering. is a pseudo-open map. A map is called if for every and every open neighborhood of (meaning an open subset such that ), necessarily belongs to the interior (taken in ) of and if in addition both and are separable metric spaces then to this list may be appended: is a hereditarily quotient map. Properties The following is a sufficient condition for a continuous surjection to be sequentially open, which with additional assumptions, results in a characterization of open maps. Assume that is a continuous surjection from a regular space onto a Hausdorff space If the restriction is sequentially quotient for every open subset of then maps open subsets of to sequentially open subsets of Consequently, if and are also sequential spaces, then is an open map if and only if is sequentially quotient (or equivalently, quotient) for every open subset of Given an element in the codomain of a (not necessarily surjective) continuous function the following gives a sufficient condition for to belong to 's image: A family of subsets of a topological space is said to be at a point if there exists some open neighborhood of such that the set is finite. Assume that is a continuous map between two Hausdorff first-countable spaces and let If there exists a sequence in such that (1) and (2) there exists some such that is locally finite at then The converse is true if there is no point at which is locally constant; that is, if there does not exist any non-empty open subset of on which restricts to a constant map. Sufficient conditions Suppose is a continuous open surjection from a first-countable space onto a Hausdorff space let be any non-empty subset, and let where denotes the closure of in Then given any and any sequence in that converges to there exists a sequence in that converges to as well as a subsequence of such that for all In short, this states that given a convergent sequence such that then for any other belonging to the same fiber as it is always possible to find a subsequence such that can be "lifted" by to a sequence that converges to The following shows that under certain conditions, a map's fiber being a countable set is enough to guarantee the existence of a point of openness. If is a sequence covering from a Hausdorff sequential space onto a Hausdorff first-countable space and if is such that the fiber is a countable set, then there exists some such that is a point of openness for Consequently, if is quotient map between two Hausdorff first-countable spaces and if every fiber of is countable, then is an almost open map and consequently, also a 1-sequence covering. See also Notes Citations References Topological graph theory
Sequence covering map
[ "Mathematics" ]
2,078
[ "Mathematical relations", "Topological graph theory", "Topology", "Graph theory" ]
68,132,396
https://en.wikipedia.org/wiki/Blumeviridae
Blumeviridae is a family of RNA viruses, which infect prokaryotes. Taxonomy Blumeviridae contains 31 genera: Alehndavirus Bonghivirus Cehntrovirus Dahmuivirus Dehgumevirus Dehkhevirus Espurtavirus Gifriavirus Hehrovirus Ivolevirus Kahnayevirus Kahraivirus Kemiovirus Kerishovirus Konmavirus Lirnavirus Lonzbavirus Marskhivirus Nehohpavirus Nehpavirus Obhoarovirus Pacehavirus Pahdacivirus Rhohmbavirus Semodevirus Shihmovirus Shihwivirus Tibirnivirus Tinebovirus Wahdswovirus Yenihzavirus References Virus families Riboviria
Blumeviridae
[ "Biology" ]
161
[ "Virus stubs", "Viruses", "Riboviria" ]
68,133,482
https://en.wikipedia.org/wiki/Transequatorial%20loop
In solar physics, a transequatorial loop is a structure present in the solar corona that connects two different regions of opposite magnetic polarity in opposite hemispheres of the Sun. These connected regions are not limited to active regions, but are most commonly found during the times of maximum solar activity, the solar maximum. Transequatorial loops play an integral role in the Babcock Model of solar dynamics and are therefore important to the future study of the solar dynamo. Babcock Model The idea of transequatorial loops was first developed by Horace W. Babcock in his 1961 model for the 11-year sunspot cycle. It explains that during each cycle, starting around the time of solar minimum, the Sun's internal, poloidal (parallel with the solar meridian) magnetic field is wrapped around the Sun via solar differential rotation. Over time, this process turns the field from primarily poloidal to primarily toroidal (parallel with the solar equator) until the toroidal field reaches its maximum strength at the solar maximum. In order for it to return to its initial, poloidal state, Babcock theorized that the magnetic field from different hemispheres, which continuously emerges from the inside of the Sun into the solar atmosphere, would reconnect with each other, forming transequatorial loops. After evidence for the existence of transequatorial loops was first observed in Skylab X-ray data, they were found to be more common during solar maximum than during solar minimum, in accordance with the Babcock Model. Characteristics Transequatorial loops connect regions of opposite magnetic polarity on opposite solar hemispheres. Typically, they connect active regions with inactive regions, but can also connect active regions together and inactive regions together. Some regions may possess multiple transequatorial loops. In addition to this, about one third of all active regions possess at least one transequatorial loop and about one third of those possessing one have it associated with the preceding, or westernmost, polarity of the active region. Transequatorial loops have also been associated with flare activity and coronal mass ejections. See also Solar corona Coronal seismology Coronal loop Solar prominence References Further reading Stellar phenomena Solar phenomena
Transequatorial loop
[ "Physics" ]
460
[ "Physical phenomena", "Stellar phenomena", "Solar phenomena" ]
68,135,719
https://en.wikipedia.org/wiki/Natural%20and%20Built%20Environment%20Act%202023
The Natural and Built Environment Act 2023 (NBA), now repealed, was one of the three laws intended to replace New Zealand's Resource Management Act 1991 (RMA). The NBA aimed to promote the protection and enhancement of the natural and built environment, while providing for housing and preparing for the effects of climate change. An exposure draft of the bill was released in June 2021 to allow for public submissions. The bill passed its third reading on 15 August 2023, and received royal assent on 23 August 2023. On 23 December 2023, the NBA and the Spatial Planning Act (SPA) were both repealed by the National-led coalition government. Exposure draft The Natural and Built Environment Bill exposure draft features many contrasts to its RMA predecessor. This includes the ability to set environmental limits, the goal to reduce greenhouse gas emissions, the provisions to increase housing supply, and the ability for planners to assess activities based on outcomes. A notable difference is the bill's stronger attention to Māori involvement in decision making and Māori environmental issues. Greater emphasis is put on upholding the nation's founding document, the Treaty of Waitangi. Under the bill, over 100 plans and policy statements will be replaced by just 14 plans. These plans will be prepared by new Regional Council Planning Committees and their planning secretariats. The planning committee will be composed of one person to represent the Minister of Conservation, appointed representatives of , and elected people from each district within the region. The committee will have an array of responsibilities, including the ability to vote on plan changes, set environmental limits for the region, and consider recommendations from hearings. The planning secretariat would draft the plans and provide expert advice. Provisions In mid-November 2022, the Natural and Built Environment Act was introduced into parliament. In its initial version, the bill establishes a National Planning Framework (NPF) setting out rules for land use and regional resource allocation. The NPF also replaces the Government's policy statements on water, air quality and other issues with an umbrella framework. Under the NPF's framework, all 15 regions will be required to develop a Natural and Built Environment Plan (NBE) that will replace the 100 district and regional plans, harmonising consenting and planning rules. An independent national Māori entity will also be established to provide input into the NPF and ensure compliance with the Treaty of Waitangi's provisions. Key provisions have included: Every person has a responsibility to protect and sustain the health and well-being of the natural environment for the benefit of all New Zealanders. Every person has a duty to avoid, minimise, remedy, offset, or provide redress for adverse effects including "unreasonable noise." Prescribes restrictions relating to land, coastal marine area, river and lake beds, water, and discharges. Establishes a national planning framework (NPF) to provide directions on integrated environmental management, resolve conflicts on environmental matters, and to set environmental limits and strategic directions. This framework will take the form of regulations, which will be considered secondary legislation. Sets Te Ture Whaimana as the primary direction-setting document for the Waikato and Waipā rivers and activities within their catchments affecting the rivers. Resource allocation is guided by the principles of sustainability, efficiency, and equity. 
Prescribes the criteria for setting environmental limits, human health limits, exemptions, targets, and management units. Outlines the process for submitting and appealing cases to the Environment Court. Outlines the resource consent process. History Background A 2020 review of the Resource Management Act 1991 (RMA) found various problems with the existing resource management system, and concluded that it could not cope with modern environmental pressures. In January 2021, the government announced that the RMA would be replaced by three acts, with the Natural and Built Environment Bill being the primary of the three. An exposure draft of the NBA was released in late June 2021. Introduction On 14 November 2022, the Sixth Labour Government of New Zealand introduced the Natural and Built Environment Bill into parliament alongside the companion Spatial Planning Act 2023 (SPA) as part of its efforts to replace the Resource Management Act. In response, the opposition National and ACT parties criticised the two replacement bills on the grounds that they created more centralisation and bureaucracy, and did little to reform the problems associated with the RMA process. The Green Party expressed concerns about the perceived lack of environmental protection in the proposed legislation. A third bill, the Climate Adaptation Bill (CAA), was expected to be introduced in 2023 with the goal of passing it into law in 2024. The CAA would have established the systems and mechanisms for protecting communities against the effects of climate change such as managed retreat in response to rising sea levels. The Climate Adaptation Bill also would have dealt with funding the costs of managing climate change. First reading The Natural and Built Environment Bill passed its first reading in the New Zealand House of Representatives on 22 November 2022 by a margin of 74 to 45 votes. The governing Labour and allied Green parties supported the bill while the opposition National, ACT, and Māori parties voted against the bill. The bill's sponsor David Parker and other Labour Members of Parliament including Associate Environment Minister Phil Twyford, Rachel Brooking, and Green MP Eugenie Sage advocated revamping the resource management system due to the unwieldy nature of the Resource Management Act. National MPs Scott Simpson, Chris Bishop, Sam Uffindell, and ACT MP Simon Court argued that the NBA would do little to improve the resource management system and address the centralisation of power and decision-making regarding resource management. Māori Party co-leader Debbie Ngarewa-Packer argued that the bill was insufficient in advancing co-governance and expressed concern that a proposed national Māori entity would undermine the power of Māori iwi (tribes) and hapū (sub-groups). The bill was subsequently referred to the Environment Select Committee. Select committee On 27 June 2023, the Environment select committee presented its final report on the Natural and Built Environment Bill. The committee made several recommendations including: Inserting clauses to emphasise the protection of the health of the natural environment and intergenerational well-being. Inserting a new Clause 3A to outline the key aims of the legislation. Clarifying clauses around geoheritage sites, greenhouse gas emissions, coastal marine areas, fishing, land supply, customary rights, cultural heritage, and public access. Defining other natural environment aspects: air, soil, and estuaries. 
Allowing the National Planning Framework (NPF) to set management units for freshwater and air and provide direction on them. Amending Clause 58 to ease restrictions on non-commercial housing on Māori land. Adding directions on protecting urban trees and the supply of fresh fruits and vegetables to the NPF. A majority of Environment committee members voted to pass the amendments. The National, ACT and Green parties released minority submissions on the bill. While supporting a revamp of the Resource Management Act, the National Party argued that the NBA failed to address the problems with the RMA framework, and criticised the NBA as complex, bureaucratic, and detrimental to local democracy and property rights. Similarly, the ACT party criticised the legislation as complex and confusing, and claimed it would discourage development. Meanwhile, the Green Party opined that the NBA was insufficient in protecting the environment and reducing environmental degradation. Second reading The NBA passed its second reading on 18 July 2023 by a margin of 72 to 47 votes. While it was supported by the Labour and Green parties and former Green Member of Parliament Elizabeth Kerekere, it was opposed by the National, ACT, and Māori parties and former Labour MP Meka Whaitiri. The House of Representatives also voted to accept the Environment select committee's recommendations. Labour MPs Parker, Brooking, Twyford, Angie Warren-Clark, Neru Leavasa, and Stuart Nash, and Green MP Sage gave speeches defending the bill while National MPs Chris Bishop, Scott Simpson, Barbara Kuriger, Tama Potaka, and ACT MP Simon Court criticised the bill in their speeches. Third reading The NBA passed its third reading on 15 August 2023 by a margin of 72 to 47 votes. The Labour and Green parties and Kerekere supported the bill while the National, ACT, and Māori parties and Whaitiri opposed it. Labour MPs Parker, Brooking, Twyford, Warren-Clark, Angela Roberts, Arena Williams, and Lydia Sosene and Green MP Sage defended the bill while National MPs Bishop, Kuriger, and Simpson opposed the bill. Repeal On 23 December 2023, the National-led coalition government repealed the Natural and Built Environment Act and Spatial Planning Act. RMA Reform Minister Chris Bishop announced that New Zealand would revert to the Resource Management Act 1991 while the Government developed replacement legislation. References External links 2021 in New Zealand law 2021 in the environment 2022 in New Zealand law 2023 in New Zealand law 2022 in the environment Environmental law in New Zealand Environmental mitigation Natural resource management Repealed New Zealand legislation Urban planning in New Zealand Open environmental policy proposals
Natural and Built Environment Act 2023
[ "Chemistry", "Engineering" ]
1,826
[ "Environmental mitigation", "Environmental engineering" ]
68,149,028
https://en.wikipedia.org/wiki/Parity%20measurement
Parity measurement (also referred to as Operator measurement) is a procedure in quantum information science used for error detection in qubits. A parity measurement checks the equality of two qubits to return a true or false answer, which can be used to determine whether a correction needs to occur. Additional measurements can be made for a system of more than two qubits. Because parity measurement does not measure the state of individual qubits but rather gets information about the whole state, it is considered an example of a joint measurement. Joint measurements do not have the consequence of destroying the original state of a qubit as normal quantum measurements do. Mathematically speaking, parity measurements are used to project a state into an eigenstate of an operator and to acquire its eigenvalue. Parity measurement is an essential concept of quantum error correction. From the parity measurement, an appropriate unitary operation can be applied to correct the error without knowing the beginning state of the qubit. Parity and parity checking A qubit is a two-level system, and when we measure one qubit, we can have either 1 or 0 as a result. One corresponds to odd parity, and zero corresponds to even parity. This is what a parity check is. This idea can be generalized beyond a single qubit, and it is useful in QEC. The idea of parity checks in QEC is to have just parity information of multiple data qubits over one (auxiliary) qubit without revealing any other information. Any unitary can be used for the parity check. If we want to have the parity information of a valid quantum observable U, we need to apply the controlled-U gates between the ancilla qubit and the data qubits sequentially. For example, to make a parity check measurement in the X basis, we apply CNOT gates between the ancilla qubit and the data qubits sequentially, since the controlled gate in this case is a CNOT (CX) gate. The resulting state of the ancillary qubit is then used to determine either even or odd parity of the qubits. When the qubits of the input states are equal, an even parity will be measured, indicating that no error has occurred. When the qubits are unequal, an odd parity will be measured, indicating a single bit-flip error. With more than two qubits, additional parity measurements can be performed to determine if the qubits are the same value, and if not, to find which is the outlier. For example, in a system of three qubits, one can first perform a parity measurement on the first and second qubit, and then on the first and third qubit. Specifically, one is measuring to determine if an error has occurred on the first two qubits, and then to determine if an error has occurred on the first and third qubits. In a circuit, an ancillary qubit is prepared in the state |0⟩. During measurement, a CNOT gate is performed on the ancillary qubit, controlled by the first qubit being checked, followed by a second CNOT gate performed on the ancillary qubit, controlled by the second qubit being checked. If these qubits are the same, the double CNOT gates will revert the ancillary qubit to its initial state, which indicates even parity. If these qubits are not the same, the double CNOT gates will alter the ancillary qubit to the opposite state, which indicates odd parity. Looking at the ancillary qubits, a corresponding correction can be performed. 
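As a minimal sketch of the ancilla-based parity checks just described, the following Python snippet simulates the two CNOTs classically for computational-basis (bit-flip) errors; the function names and the three-qubit repetition-code example are illustrative assumptions made for this sketch, not part of any particular quantum-computing library.

# Classical simulation of the ancilla-based parity checks described above.
# For computational-basis states, each CNOT simply XORs the control bit into
# the ancilla, so the syndrome can be computed with XOR operations.
# Names and the three-qubit example are illustrative assumptions.

def parity_check(data_bits, pair):
    """Return the parity (0 = even, 1 = odd) of two data qubits, as an
    ancilla prepared in 0 would record it after two CNOTs."""
    ancilla = 0
    for index in pair:
        ancilla ^= data_bits[index]   # CNOT: data qubit controls, ancilla is target
    return ancilla

def locate_bit_flip(data_bits):
    """Three-qubit repetition code: compare qubits (0,1) and (0,2) and map
    the two syndrome bits to the flipped qubit, if any."""
    s1 = parity_check(data_bits, (0, 1))
    s2 = parity_check(data_bits, (0, 2))
    return {(0, 0): None, (1, 1): 0, (1, 0): 1, (0, 1): 2}[(s1, s2)]

encoded = [1, 1, 1]               # logical |1> encoded as 111
encoded[1] ^= 1                   # bit-flip error on the second qubit
print(locate_bit_flip(encoded))   # -> 1, so qubit 1 should be corrected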
Alternatively, the parity measurement can be thought of as projecting a qubit state onto an eigenstate of an operator and acquiring its eigenvalue. For the measurement, checking the ancillary qubit in the basis will return the eigenvalue of the measurement. If the eigenvalue here is measured to be +1, this indicates even parity of the bits without error. If the eigenvalue is measured to be -1, this indicates odd parity of the bits with a bit-flip error. Example Alice, a sender, wants to transmit a qubit to Bob, a receiver. The state of any qubit that Alice would wish to send can be written as where and are coefficients. Alice encodes this into three qubits, so that the initial state she transmits is . Following noise in the channel, the three-qubit state may have been altered, with each possible outcome occurring with a corresponding probability. A parity measurement can be performed on the altered state, with two ancillary qubits storing the measurement. First, the first and second qubits' parity is checked. If they are equal, a 0 is stored in the first ancillary qubit. If they are not equal, a 1 is stored in the first ancillary qubit. The same action is performed comparing the first and third qubits, with the check being stored in the second ancillary qubit. It is important to note that we do not actually need to know the input qubit state, and can perform the CNOT operations indicating the parity without this knowledge. The ancillary qubits are what indicate which qubit has been altered, and the correction operation can be performed as needed. An easy way to visualize this is as a circuit: first, the input state is encoded into three qubits, and parity checks are performed, with subsequent error correction performed based on the results of the ancilla qubits. Finally, decoding is performed to get back to the same basis as the input state. Parity check matrix A parity check matrix for a quantum circuit can also be constructed using these principles. For some message x encoded as Gx, where G corresponds to the generator matrix, the parity check matrix H (a matrix of 0's and 1's) satisfies H(Gx) = 0 when no error has occurred. However, if an error occurs at one component, then the pattern in the resulting syndrome can be used to find which bit is incorrect. Types of parity measurements Two types of parity measurement are indirect and direct. Indirect parity measurements coincide with the typical way we think of parity measurement as described above, by measuring an ancilla qubit to determine the parity of the input bits. Direct parity measurements differ from the previous type in that a common mode with the parities coupled to the qubits is measured, without the need for an ancilla qubit. While indirect parity measurements can put a strain on experimental capacity, direct measurements may interfere with the fidelity of the initial states. Example For example, given a Hermitian and unitary operator (whose eigenvalues are ) and a state , the corresponding circuit performs a parity measurement on . After the first Hadamard gate, the state of the circuit is After applying the controlled-U gate, the state of the circuit evolves to After applying the second Hadamard gate, the state of the circuit turns into If the state of the top qubit after measurement is , then , which is the eigenstate of . If the state of the top qubit is , then , which is the eigenstate of . 
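A small numerical illustration of the parity check matrix idea described above, assuming the two checks of the three-qubit repetition code (first/second and first/third qubit); the specific matrix H is one conventional choice made for this sketch rather than a matrix quoted from a reference.

import numpy as np

# Parity-check matrix for the two checks used above: the rows compare
# qubits (1,2) and qubits (1,3). For an error pattern e, the syndrome is
# H e (mod 2); it is the zero vector exactly when no bit flip occurred.
H = np.array([[1, 1, 0],
              [1, 0, 1]])

for flipped in range(3):
    e = np.zeros(3, dtype=int)
    e[flipped] = 1
    syndrome = H @ e % 2
    print(flipped, syndrome)   # each single flip gives a distinct non-zero syndrome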
Experiments and applications In experiments, parity measurements are not only a mechanism for quantum error correction, but they can also help combat non-ideal conditions. Beyond the possibility of bit-flip errors, there is an additional likelihood of errors as a result of leakage. This phenomenon is due to qubits becoming excited into unused higher-energy states. It has been demonstrated in superconducting transmon qubits that parity measurements can be applied repetitively during quantum error correction to remove leakage errors. Repetitive parity measurements can be used to stabilize an entangled state and prevent leakage errors (which is normally not possible with typical quantum error correction); the first group to accomplish this did so in 2020, by performing interleaved XX and ZZ checks, which can ultimately tell whether an X (bit), Y (iXZ), or Z (phase) flip error occurs. The outcomes of these parity measurements of ancilla qubits are used with Hidden Markov Models to complete leakage detection and correction. References Quantum information theory Quantum measurement Quantum computing
Parity measurement
[ "Physics" ]
1,753
[ "Quantum measurement", "Quantum mechanics" ]
68,150,155
https://en.wikipedia.org/wiki/Parts%20departing%20aircraft
In aviation safety, parts departing aircraft or parts detached from aeroplanes (PDA), also known as objects falling off airplanes (OFA), things falling off aircraft (TFOA), and other analogous variations, can range from small fasteners like screws and rivets up to major sub-assemblies like hatch covers and doors. PDA are a safety concern because they may be critical parts needed to continue safe flight, may damage other critical parts of the aircraft as they depart, may cause foreign object damage to other aircraft, or may cause serious injuries or damage to people and property on the ground. These occurrences are a longstanding worldwide problem in aviation. In a 2018 study, the European Aviation Safety Agency concluded that the likelihood of fatally injuring people on the ground due to a PDA event is low enough that it does not constitute an unsafe condition according to their standards; they also noted the absence of any people fatally injured from PDA. In Japan, however, preventing objects from falling off airplanes is required of all air carriers after a series of serious incidents at Haneda Airport, which is close to Tokyo. Regardless of whether PDA are considered an acceptable risk by aviation regulators, things that fall from the sky are generally not well tolerated outside the aviation community. The United States Navy found complaints about TFOA increased as residential development encroached around naval air stations. Leaking aviation lavatory liquids have been known to build up on the exterior of the aircraft in sub-freezing temperatures at altitude, only to fall to earth as blue ice after the airplane descends to land. London Heathrow Airport has had a recurring problem of wheel-well stowaway bodies dropping in residential areas around the airport when airplanes extend their landing gear as they prepare to land. See also Index of aviation articles External links "Cases of objects, including human stowaways, falling from planes" References Aircraft maintenance Aviation risks Aviation safety
Parts departing aircraft
[ "Engineering" ]
385
[ "Aircraft maintenance", "Aerospace engineering" ]
68,152,323
https://en.wikipedia.org/wiki/Konstantinos%20Drosatos
Konstantinos Drosatos (Greek: Κωνσταντίνος Δροσάτος), born in Athens, Greece, is a Greek-American molecular biologist, who is the Ohio Eminent Scholar and Professor of Pharmacology and Systems Physiology at the University of Cincinnati College of Medicine in Cincinnati, Ohio, U.S. His parents were Georgios Drosatos and Sofia Drosatou; his family originates in Partheni, Euboea, Greece. Education and career Drosatos received his B.Sc. from the department of biology at the Aristotle University of Thessaloniki, Greece in 2000. In 2000, he continued with graduate studies at the Molecular Biology-Biomedicine graduate program of the department of biology and the medical school of the University of Crete. He received his M.Sc. in 2002 and his Ph.D. in molecular biology-biomedicine in 2007. During his graduate studies (2002–2007) he was a visiting research scholar in the laboratory of Vassilis I. Zannis at Boston University Medical School. Following his graduation with a PhD in molecular biology-biomedicine in 2007, he joined the laboratory of Ira J. Goldberg at Columbia University, where he pursued post-doctoral training until 2012, when he was promoted to associate research scientist in the department of medicine at Columbia University. In 2014 he joined the faculty of the Lewis Katz School of Medicine at Temple University as an assistant professor in pharmacology and in 2020, he was promoted to associate professor with tenure in cardiovascular sciences (primary affiliation). In 2022, he was recruited by the University of Cincinnati College of Medicine, which he joined as the Ohio Eminent Scholar and Professor of Pharmacology and Systems Physiology. Research interests The research in his laboratory focuses on cardiovascular and systemic metabolism and particularly on signaling mechanisms that link cardiac stress in diabetes, sepsis and ischemia with altered myocardial fatty acid metabolism. His published work focuses on the transcriptional regulation of proteins that underlie lipoprotein metabolism, cardiac and systemic fatty acid metabolism, and mitochondrial function. His work has identified the role of Krüppel-like factor 5 (KLF5) in the regulation of cardiac fatty acid metabolism in diabetes and ischemic heart failure, as well as how cardiac lipotoxicity leads to cardiac dysfunction, and the importance of cardiac fatty acid oxidation and mitochondrial integrity for the treatment of cardiac dysfunction in sepsis. 
Distinctions and awards 2014 Outstanding Early Career Award recipient, American Heart Association, BCVS Council 2016 Honorary Citizen, Eastern Mani Municipality, Greece 2016 Visiting Professorship, UCLA Center for Systems Biomedicine 2017 Early Research Investigator Award, Lewis Katz School of Medicine at Temple University 2017 Elected Fellow (FAHA), American Heart Association 2019 Elected Full Member, Sigma Xi, The Scientific Research Honor Society 2022 Visiting Professorship, School of Medicine, Aristotle University of Thessaloniki, Greece 2022 Top-Reviewer 2022 for JACC: Basic to Translational Science 2023 Honorary Membership at the Biology Society of Cyprus 2023 Adjunct Professorship, European University of Cyprus 2023 Keynote Speaker, Trinity Translational Medicine Institute, Trinity College Dublin, Ireland 2023 Keynote Speaker, Vascular and Heart Research Symposium, The Fralin Biomedical Research Institute, Virginia Tech at Roanoke 2024 Elected Fellow of the Graduate College of the University of Cincinnati Leadership positions 2006–2010 – founding president of the board of directors, Hellenic Bioscientific Association of the USA 2012–2014 – president of the executive board, World Hellenic Biomedical Association 2019–2023 – vice-president of the executive council, ARISTEiA-Institute for the Advancement of Research & Education in Arts, Sciences & Technology 2020-2021 - Chair-elect of the Mid-career Committee, International Society for Heart Research-North American Section 2023-2026 - General Secretary of the Executive Board, KOMVOS-NODE 2024-2028 - President of the executive council, ARISTEiA-Institute for the Advancement of Research & Education in Arts, Sciences & Technology References External links Publications list Greek scientists Greek biologists 21st-century American biologists American biologists Metabolism Aristotle University of Thessaloniki alumni University of Crete alumni Year of birth missing (living people) Living people Scientists from Athens
Konstantinos Drosatos
[ "Chemistry", "Biology" ]
861
[ "Biochemistry", "Metabolism", "Cellular processes" ]
75,324,191
https://en.wikipedia.org/wiki/Epstein%20drag
In fluid dynamics, Epstein drag is a theoretical result for the drag force exerted on spheres in high Knudsen number flow (i.e., rarefied gas flow). This may apply, for example, to sub-micron droplets in air, or to larger spherical objects moving in gases more rarefied than air at standard temperature and pressure. Note that while they may be small by some criteria, the spheres must nevertheless be much more massive than the species (molecules, atoms) in the gas that are colliding with the sphere, in order for Epstein drag to apply. The reason for this is to ensure that the change in the sphere's momentum due to individual collisions with gas species is not large enough to substantially alter the sphere's motion, such as occurs in Brownian motion. The result was obtained by Paul Sophus Epstein in 1924. His result was used for high-precision measurements of the charge on the electron in the oil drop experiment performed by Robert A. Millikan, as cited by Millikan in his 1930 review paper on the subject. For the early work on that experiment, the drag was assumed to follow Stokes' law. However, for droplets substantially below the submicron scale, the drag approaches Epstein drag instead of Stokes drag, since the mean free path of air species (atoms and molecules) is roughly of the order of a tenth of a micron. Statement of the law The magnitude of the force on a sphere moving through a rarefied gas, in which the diameter of the sphere is of order or less than the collisional mean free path in the gas, is where is the radius of the spherical particle, is the number density of gas species, is their mass, is the arithmetic mean speed of gas species, and is the relative speed of the sphere with respect to the rest frame of the gas. The factor encompasses the microphysics of the gas-sphere interaction and the resultant distribution of velocities of the reflected particles, which is not a trivial problem. It is not uncommon to assume (see below) presumably in part because empirically is found to be close to 1 numerically, and in part because in many applications, the uncertainty due to is dwarfed by other uncertainties in the problem. For this reason, one sometimes encounters Epstein drag written with the factor left absent. The force acts in a direction opposite to the direction of motion of the sphere. Forces acting normal to the direction of motion are known as "lift", not "drag", and in any case are not present in the stated problem when the sphere is not rotating. For mixtures of gases (e.g. air), the total force is simply the sum of the forces due to each component of the gas, noting with care that each component (species) will have a different , a different and a different . Note that where is the gas density, noting again, with care, that in the case of multiple species, there are multiple different such densities contributing to the overall force. The net force is due both to momentum transfer to the sphere due to species impinging on it, and momentum transfer due to species leaving, due either to reflection, evaporation, or some combination of the two. Additionally, the force due to reflection depends upon whether the reflection is purely specular or, by contrast, partly or fully diffuse, and the force also depends upon whether the reflection is purely elastic, or inelastic, or some other assumption regarding the velocity distribution of reflecting particles, since the particles are, after all, in thermal contact - albeit briefly - with the surface. 
All of these effects are combined in Epstein's work in an overall prefactor "". Theoretically, for purely elastic specular reflection, but may be less than or greater than unity in other circumstances. For reference, note that kinetic theory gives For the specific cases considered by Epstein, ranges from a minimum value of 1 up to a maximum value of 1.444. For example, Epstein predicts for diffuse elastic collisions. One may sometimes encounter where is the accommodation coefficient, which appears in the Maxwell model for the interaction of gas species with surfaces, characterizing the fraction of reflection events that are diffuse (as opposed to specular). (There are other accommodation coefficients that describe thermal energy transfer as well, but are beyond the scope of this article.) In-line with theory, an empirical measurement, for example, for melamine-formaldehyde spheres in argon gas, gives as measured by one method, and by another method, as reported by the same authors in the same paper. According to Epstein himself, Millikan found for oil drops, whereas Knudsen found for glass spheres. In his paper, Epstein also considered modifications to allow for nontrivial . That is, he treated the leading terms in what happens if the flow is not fully in the rarefied regime. Also, he considered the effects due to rotation of the sphere. Normally, by "Epstein drag," one does not include such effects. As noted by Epstein himself, previous work on this problem had been performed by Langevin by Cunningham, and by Lenard. These previous results were in error, however, as shown by Epstein; as such, Epstein's work is viewed as definitive, and the result goes by his name. Applications As mentioned above, the original practical application of Epstein drag was to refined estimates of the charge on the electron in the Millikan oil-drop experiment. Several substantive practical applications have ensued. One application among many in astrophysics is the problem of gas-dust coupling in protostellar disks. See also section 4.1.1, "Epstein drag," page 110-111 of. Another application is the drag on stellar dust in red giant atmospheres, which counteracts the acceleration due to radiation pressure Another application is to dusty plasmas. References Drag (physics) Theoretical physics
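As a rough numerical sketch of the drag law stated above, the following Python snippet evaluates the commonly quoted form F = delta * (4*pi/3) * R^2 * n * m * cbar * v, with cbar the Maxwell-Boltzmann mean speed; the prefactor, function names, and example numbers are assumptions of this sketch rather than values taken from Epstein's paper.

import math

k_B = 1.380649e-23          # Boltzmann constant, J/K

def mean_speed(m, T):
    """Arithmetic mean (Maxwell-Boltzmann) speed of gas species of mass m at temperature T."""
    return math.sqrt(8.0 * k_B * T / (math.pi * m))

def epstein_drag(R, n, m, T, v, delta=1.0):
    """Drag force on a sphere of radius R moving at speed v through a rarefied
    gas of number density n, species mass m, and temperature T.
    delta = 1 corresponds to purely specular elastic reflection."""
    return delta * (4.0 * math.pi / 3.0) * R**2 * n * m * mean_speed(m, T) * v

# Example: a 0.5 micron sphere moving at 1 cm/s in a dilute nitrogen-like gas
m_N2 = 4.65e-26             # kg, approximate mass of an N2 molecule
n = 2.5e23                  # m^-3, roughly 1% of atmospheric number density
print(epstein_drag(R=0.5e-6, n=n, m=m_N2, T=300.0, v=0.01))   # force in newtons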
Epstein drag
[ "Physics", "Chemistry" ]
1,204
[ "Drag (physics)", "Theoretical physics", "Fluid dynamics" ]
75,324,901
https://en.wikipedia.org/wiki/ZLY18
ZLY18 is an experimental drug that acts as an agonist of the free fatty acid receptor 1 (FFA1) and all three types of peroxisome proliferator-activated receptor (alpha, delta, and gamma). It is in development for the treatment of non-alcoholic fatty liver disease. References PPAR agonists Stilbenoids Carboxylic acids 4-Methoxyphenyl compounds Fluoroarenes Phenol ethers Free fatty acid receptor 1 agonists
ZLY18
[ "Chemistry" ]
105
[ "Pharmacology", "Carboxylic acids", "Functional groups", "Medicinal chemistry stubs", "Pharmacology stubs" ]
75,328,204
https://en.wikipedia.org/wiki/Nilsequence
In mathematics, a nilsequence is a type of numerical sequence playing a role in ergodic theory and additive combinatorics. The concept is related to nilpotent Lie groups and almost periodicity. The name arises from the part played in the theory by compact nilmanifolds of the type where is a nilpotent Lie group and a lattice in it. The idea of a basic nilsequence defined by an element of and continuous function on is to take , for an integer, as . General nilsequences are then uniform limits of basic nilsequences. For the statement of conjectures and theorems, technical side conditions and quantifications of complexity are introduced. Much of the combinatorial importance of nilsequences reflects their close connection with the Gowers norm. As explained by Host and Kra, nilsequences originate in evaluating functions on orbits in a "nilsystem"; and nilsystems are "characteristic for multiple correlations". Case of the circle group The circle group arises as the special case of the real line and its subgroup of the integers. It has nilpotency class equal to 1, being abelian, and the requirements of the general theory are to generalise to nilpotency class The semi-open unit interval is a fundamental domain, and for that reason the fractional part function is involved in the theory. Functions involving the fractional part of the variable in the circle group occur, under the name "bracket polynomials". Since the theory is in the setting of Lipschitz functions, which are a fortiori continuous, the discontinuity of the fractional part at 0 has to be managed. That said, the sequences , where is a given irrational real number, and an integer, and studied in diophantine approximation, are simple examples for the theory. Their construction can be thought of in terms of the skew product construction in ergodic theory, adding one dimension. Polynomial sequences The imaginary exponential function maps the real numbers to the circle group (see Euler's formula#Topological interpretation). A numerical sequence where is a polynomial function with real coefficients, and is an integer variable, is a type of trigonometric polynomial, called a "polynomial sequence" for the purposes of the nilsequence theory. The generalisation to nilpotent groups that are not abelian relies on the Hall–Petresco identity from group theory for a workable theory of polynomials. In particular the polynomial sequence comes with a definite degree. Möbius function and nilsequences A family of conjectures was made by Ben Green and Terence Tao, concerning the Möbius function of prime number theory and -step nilsequences. Here the underlying Lie group is assumed simply connected and nilpotent with length at most . The nilsequences considered are of type with some fixed in , and the function continuous and taking values in . The form of the conjecture, which requires a stated metric on the nilmanifold and Lipschitz bound in the implied constant, is that the average of up to is smaller asymptotically than any fixed inverse power of As a subsequent paper published in 2012 proving the conjectures put it, The Möbius function is strongly orthogonal to nilsequences. Subsequently Green, Tao and Tamar Ziegler also proved a family of inverse theorems for the Gowers norm, stated in terms of nilsequences. This completed a program of proving asymptotics for simultaneous prime values of linear forms. Tao has commented in his book Higher Order Fourier Analysis on the role of nilsequences in the inverse theorem proof. 
The issue being to extend IG results from the finite field case to general finite cyclic groups, the "classical phases"—essentially the exponentials of polynomials natural for the circle group—had proved inadequate. There were options other than nilsequences, in particular direct use of bracket polynomials. But Tao writes that he prefers nilsequences for the underlying Lie theory structure. Equivalent form for averaged Chowla and Sarnak conjectures Tao has proved that a conjecture on nilsequences is an equivalent of an averaged form of a noted conjecture of Sarvadaman Chowla involving only the Möbius function, and the way it self-correlates. Peter Sarnak made a conjecture on the non-correlation of the Möbius function with more general sequences from ergodic theory, which is a consequence of Chowla's conjecture. Tao's result on averaged forms showed all three conjectures are equivalent. The 2018 paper The logarithmic Sarnak conjecture for ergodic weights by Frantzikinakis and Host used this approach to prove unconditional results on the Liouville function. Notes Sequences and series Nilpotent groups Ergodic theory Additive combinatorics
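As an informal numerical illustration of the polynomial sequences and the Möbius-orthogonality statement discussed above (not a proof; the coefficients alpha, beta and the cutoff N are arbitrary choices made for this sketch):

import cmath, math

def mobius(n):
    """Möbius function by trial division (adequate for small n)."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:      # squared prime factor: mu(n) = 0
                return 0
            result = -result
        d += 1
    if n > 1:                   # one remaining prime factor
        result = -result
    return result

def e(x):
    """e(x) = exp(2*pi*i*x), the standard additive character."""
    return cmath.exp(2j * math.pi * x)

alpha, beta = math.sqrt(2), math.sqrt(3)   # arbitrary irrational coefficients
N = 50000

# Cesaro average of the polynomial sequence e(P(n)) with P(n) = alpha*n + beta*n^2
avg_nil = sum(e(alpha * n + beta * n * n) for n in range(1, N + 1)) / N

# Average of mu(n) * e(P(n)); expected to be small if the Möbius function
# does not correlate with such sequences
avg_mu = sum(mobius(n) * e(alpha * n + beta * n * n) for n in range(1, N + 1)) / N

print(abs(avg_nil), abs(avg_mu))   # both are typically small for this choice of parameters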
Nilsequence
[ "Mathematics" ]
1,017
[ "Sequences and series", "Mathematical analysis", "Mathematical structures", "Additive combinatorics", "Mathematical objects", "Combinatorics", "Ergodic theory", "Dynamical systems" ]
75,329,920
https://en.wikipedia.org/wiki/Fubini%27s%20nightmare
Fubini's nightmare is a seeming violation of Fubini's theorem, where a nice space, such as the square is foliated by smooth fibers, but there exists a set of positive measure whose intersection with each fiber is singular (at most a single point in Katok's example). There is no real contradiction to Fubini's theorem because despite smoothness of the fibers, the foliation is not absolutely continuous, and neither are the conditional measures on fibers. Existence of Fubini's nightmare complicates fiber-wise proofs for center foliations of partially hyperbolic dynamical systems: these foliations are typically Hölder but not absolutely continuous. A hands-on example of Fubini's nightmare was suggested by Anatole Katok and published by John Milnor. A dynamical version for the center foliation was constructed by Amie Wilkinson and Michael Shub. Katok's construction Foliation For a consider the coding of points of the interval by sequences of zeros and ones, similar to the binary coding, but splitting the intervals in the ratio . (As for the binary coding, we identify with ) The point, corresponding to a sequence is given explicitly by where is the length of the interval after first splits. For a fixed sequence the map is analytic. This follows from the Weierstrass M-test: the series for converges uniformly on compact subsets of the intersection In particular, is an analytic curve. Now, the square is foliated by analytic curves Set For a fixed and random sampled according to the Lebesgue measure, the coding digits are independent Bernoulli random variables with parameter , namely and By the law of large numbers, for each and almost every By Fubini's theorem, the set has full Lebesgue measure in the square . However, for each fixed sequence the limit of its Cesàro averages is unique, if it exists. Thus every curve either does not intersect at all (if there is no limit), or intersects it at the single point where Therefore, for the above foliation and set , we observe a Fubini's nightmare. Wilkinson–Shub construction Wilkinson and Shub considered diffeomorphisms which are small perturbations of the diffeomorphism of the three-dimensional torus where is Arnold's cat map. This map and its small perturbations are partially hyperbolic. Moreover, the center fibers of the perturbed maps are smooth circles, close to those for the original map. The Wilkinson and Shub perturbation is designed to preserve the Lebesgue measure and to make the diffeomorphism ergodic with the central Lyapunov exponent Suppose that is positive (otherwise invert the map). Then the set of points, for which the central Lyapunov exponent is positive, has full Lebesgue measure in On the other hand, the length of the circles of the central foliation is bounded above. Therefore, on each circle, the set of points with positive central Lyapunov exponent has to have zero measure. More delicate arguments show that this set is finite, and we have Fubini's nightmare. References Theorems in measure theory Articles containing proofs
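The following Python sketch illustrates only the law-of-large-numbers step used in Katok's construction; since the explicit splitting formulas are not reproduced above, the convention that digit 0 labels the left piece of relative length q is an assumption made for this sketch.

import random

# Code x in [0, 1) by repeatedly splitting the current interval in the
# ratio q : 1 - q, recording digit 0 for the left piece and 1 for the right
# piece. For Lebesgue-random x the digits are i.i.d. Bernoulli variables.

def digits(x, q, n):
    out = []
    for _ in range(n):
        if x < q:
            out.append(0)
            x = x / q
        else:
            out.append(1)
            x = (x - q) / (1.0 - q)
    return out

q = 0.3
samples = [random.random() for _ in range(2000)]
freq_of_ones = sum(sum(digits(x, q, 50)) for x in samples) / (2000 * 50)
print(freq_of_ones)   # close to 1 - q = 0.7, as the law of large numbers predicts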
Fubini's nightmare
[ "Mathematics" ]
681
[ "Articles containing proofs", "Theorems in mathematical analysis", "Theorems in measure theory" ]
75,336,808
https://en.wikipedia.org/wiki/List%20of%20experiments%20in%20physics
This is a list of notable experiments in physics. The list includes only experiments with Wikipedia articles. For hypothetical experiments, see thought experiment. Historical experiments Articles on several experiments Bell tests BICEP and Keck Array Coincidence method Discovery of the neutron Large Hadron Collider experiments List of Super Proton Synchrotron experiments Precision tests of QED Tests of special relativity Tests of relativistic energy and momentum Modern searches for Lorentz violation Measurements of neutrino speed Tests of general relativity Experimental testing of time dilation On-going experiments Collider Detector at Fermilab China Dark Matter Experiment Cosmic Ray Energetics and Mass Experiment General antiparticle spectrometer GlueX The E and B Experiment VIP2 experiment VITO experiment See also List of accelerators in particle physics History of physics Science experiments Physics experiments Physics-related lists
List of experiments in physics
[ "Physics" ]
169
[ "Experimental physics", "Physics experiments" ]
63,848,237
https://en.wikipedia.org/wiki/NFA%20minimization
In automata theory (a branch of theoretical computer science), NFA minimization is the task of transforming a given nondeterministic finite automaton (NFA) into an equivalent NFA that has a minimum number of states, transitions, or both. While efficient algorithms exist for DFA minimization, NFA minimization is PSPACE-complete. No efficient (polynomial time) algorithms are known, and under the standard assumption P ≠ PSPACE, none exist. The most efficient known algorithm is the Kameda‒Weiner algorithm. Non-uniqueness of minimal NFA Unlike deterministic finite automata, minimal NFAs may not be unique. There may be multiple NFAs of the same size which accept the same regular language, but for which there is no equivalent NFA or DFA with fewer states. References External links A modified C# implementation of Kameda-Weiner (1970) PSPACE-complete problems Finite automata
NFA minimization
[ "Mathematics" ]
201
[ "PSPACE-complete problems", "Mathematical problems", "Computational problems" ]
70,996,932
https://en.wikipedia.org/wiki/Newton%27s%20sine-square%20law%20of%20air%20resistance
Isaac Newton's sine-squared law of air resistance is a formula that implies the force on a flat plate immersed in a moving fluid is proportional to the square of the sine of the angle of attack. Although Newton did not analyze the force on a flat plate himself, the techniques he used for spheres, cylinders, and conical bodies were later applied to a flat plate to arrive at this formula. In 1687, Newton devoted the second volume of his Principia Mathematica to fluid mechanics. The analysis assumes that the fluid particles are moving at a uniform speed prior to impacting the plate and then follow the surface of the plate after contact. Particles passing above and below the plate are assumed to be unaffected and any particle-to-particle interaction is ignored. This leads to the following formula: where F is the force on the plate (oriented perpendicular to the plate), is the density of the fluid, v is the velocity of the fluid, S is the surface area of the plate, and is the angle of attack. More sophisticated analysis and experimental evidence have shown that this formula is inaccurate; although Newton's analysis correctly predicted that the force was proportional to the density, the surface area of the plate, and the square of the velocity, the proportionality to the square of the sine of the angle of attack is incorrect. The force is directly proportional to the sine of the angle of attack or, for small angles, to the angle itself. The assumed variation with the square of the sine predicted that the lift component would be much smaller than it actually is. This was frequently cited by detractors of heavier-than-air flight to "prove" it was impossible or impractical. Ironically, the sine-squared formula has had a rebirth in modern aerodynamics; the assumptions of rectilinear flow and non-interactions between particles are applicable at hypersonic speeds and the sine-squared formula leads to reasonable predictions. In 1744, 17 years after Newton's death, the French mathematician Jean le Rond d'Alembert attempted to use the mathematical methods of the day to describe and quantify the forces acting on a body moving relative to a fluid. It proved impossible and d'Alembert was forced to conclude that he could not devise a mathematical method to describe the force on a body, even though practical experience showed such a force always exists. This has become known as D'Alembert's paradox. See also Graph of Sine Squared References Aerodynamics Classical mechanics Force
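A short numerical comparison of the two scalings discussed above; the explicit form F = rho * v^2 * S * sin^2(alpha) is the commonly quoted statement of the sine-squared law and is assumed here for illustration, with an arbitrary matching prefactor for the linear-in-sine comparison.

import math

rho, v, S = 1.225, 50.0, 1.0      # air density (kg/m^3), speed (m/s), plate area (m^2)

for deg in (2, 5, 10, 15):
    a = math.radians(deg)
    f_sin2 = rho * v**2 * S * math.sin(a)**2      # Newton's sine-squared prediction
    f_sin = rho * v**2 * S * math.sin(a)          # same prefactor, but linear in sin(alpha)
    print(deg, round(f_sin2, 1), round(f_sin, 1), round(f_sin2 / f_sin, 3))

# At small angles the sine-squared law underpredicts the force by roughly the
# factor sin(alpha), which is why it made predicted lift look hopelessly small.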
Newton's sine-square law of air resistance
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
513
[ "Force", "Physical quantities", "Quantity", "Mass", "Classical mechanics", "Aerodynamics", "Mechanics", "Aerospace engineering", "Wikipedia categories named after physical quantities", "Matter", "Fluid dynamics" ]
71,000,648
https://en.wikipedia.org/wiki/Gunnar%20H%C3%A4gg
Gunnar Hägg (December 14, 1903 in Stockholm – May 28, 1986 in Uppsala) was a Swedish chemist and crystallographer. Education and career Hägg studied chemistry at Stockholm University from 1922, was a Ramsay Fellow at the University of London in 1926, studying under Frederick G. Donnan. He obtained his PhD in Stockholm in 1929 under Arne Westgren for the work X-ray studies on the binary systems of iron with nitrogen, phosphorus, arsenic, antimony and bismuth. After that he became a lecturer at the Stockholm University and in 1930 at the University of Jena, Germany. In 1937 he became professor of inorganic and general chemistry at Uppsala University. He retired in 1969. Hägg's research dealt with nitrides, borides, carbides and hydrides of transition metals and determined their crystal structure with X-ray diffraction. He also developed X-ray cameras and calculating machines for this purpose. His investigations into phases and phase transformations in steel had practical applications. In Sweden he is known for his university chemistry textbooks. Honors and awards He was a member of the Royal Society of Sciences in Uppsala (1940), the Royal Swedish Academy of Sciences (1942), the Royal Physiographic Society in Lund (1943) and the Royal Swedish Academy of Engineering Sciences, from which he received the Great Gold Medal in 1969. In 1960 he also became a member of the German National Academy of Sciences Leopoldina. A room in Uppsala University's Ångstrom Laboratory is named after him. In 1968 he received the Oscar Carlson Medal and in 1997 the Gunnar Starck Medal from the Swedish Chemical Society. From 1965 to 1976 he was a member of the Nobel Committee for Chemistry (and chairman in 1976). Bibliography References 1986 deaths 1903 births Members of the Royal Swedish Academy of Sciences Members of the Royal Physiographic Society in Lund Academic staff of Uppsala University 20th-century Swedish chemists Crystallographers Stockholm University alumni Academic staff of Stockholm University Academic staff of the University of Jena Members of the German National Academy of Sciences Leopoldina Inorganic chemists Swedish chemists
Gunnar Hägg
[ "Chemistry", "Materials_science" ]
427
[ "Crystallographers", "Crystallography", "Inorganic chemists" ]
71,001,950
https://en.wikipedia.org/wiki/Sulfoxidation
In chemistry, sulfoxidation refers to two distinct reactions. In one meaning, sulfoxidation refers to the reaction of alkanes with a mixture of sulfur dioxide and oxygen. This reaction is employed industrially to produce alkyl sulfonic acids, which are used as surfactants. The reaction requires UV radiation. RH + SO2 + 1/2 O2 -> RSO3H The reaction favors secondary positions in accord with its free-radical mechanism. Mixtures are produced. Semiconductor-sensitized variants have been reported. Sulfoxidation can also refer to the oxidation of a thioether to a sulfoxide. R2S + O -> R2SO A typical source of "O" is hydrogen peroxide. References Sulfoxides
Sulfoxidation
[ "Chemistry" ]
166
[ "Functional groups", "nan", "Sulfonic acids" ]
73,947,499
https://en.wikipedia.org/wiki/Megan%20Robertson%20%28scientist%29
For the Australian former rowing coxswain, see Megan Robertson. Megan L. Robertson is a professor of chemical and biomolecular engineering at the University of Houston noted for her work in polymer chemistry towards achieving "green birth, green life, and green death" via recycling and via biosourced oils and fatty acids to develop new elastomers with the aim of replacing petrochemical sources. Education Robertson earned her B.S. in Chemical Engineering at Washington University in St. Louis and her Ph.D. in Chemical Engineering at the University of California, Berkeley working under the direction of Prof. Nitash Balsara. After working at Rohm and Haas (now Dow Chemical) as a senior scientist for two years, she joined the group of Marc Hillmyer at the University of Minnesota as a postdoctoral research associate. Career In 2010 she joined the Department of Chemical and Biomolecular Engineering at the University of Houston, and in 2021 she became a full professor. She has received funding from the Department of Defense to investigate chitin-based bulletproof coatings and leads an interdisciplinary team funded through the Welch Foundation to transform polyolefin plastic waste into useful materials. Her most cited work, which was published in Science, is a review on the topic of plastics and recycling. She is an Associate Editor at the journal Macromolecules and is on the editorial advisory board of the European Polymer Journal. She is a member of the National Academies of Sciences, Engineering, and Medicine Board on Chemical Sciences and Technology. Awards and recognition 2014 – NSF CAREER Award 2015 – Kavli Fellow of the National Academy of Sciences 2017 – PMSE Young Investigator 2018 – Sparks–Thomas award from the ACS Rubber Division 2022 – Fellow of the American Chemical Society 2023 – National Science Foundation Special Creativity Award References Living people Polymer scientists and engineers Women materials scientists and engineers Bioplastics Biomaterials Fellows of the American Chemical Society Year of birth missing (living people) University of Houston faculty UC Berkeley College of Engineering alumni McKelvey School of Engineering alumni
Megan Robertson (scientist)
[ "Physics", "Chemistry", "Materials_science", "Technology", "Biology" ]
425
[ "Biomaterials", "Women materials scientists and engineers", "Physical chemists", "Materials", "Materials scientists and engineers", "Polymer chemistry", "Polymer scientists and engineers", "Women in science and technology", "Matter", "Medical technology" ]
73,949,649
https://en.wikipedia.org/wiki/1%2C2%2C3-Cyclohexatriene
1,2,3-Cyclohexatriene is an unstable chemical compound with the molecular formula . It is an unusual isomer of benzene in which the three double bonds are cumulated. This highly strained compound was first prepared in 1990, by reacting a cyclohexadiene derivative with cesium fluoride. The product was too reactive to be isolated on its own, so its existence was confirmed by trapping via a cycloaddition reaction. 1,2,3-Cyclohexatriene and its derivatives undergo a variety of reactions including cycloadditions, nucleophilic additions, and σ-bond insertions, and therefore they can be versatile reagents for organic synthesis. References Benzene Isomerism
1,2,3-Cyclohexatriene
[ "Chemistry" ]
162
[ "Isomerism", "Stereochemistry" ]
73,949,761
https://en.wikipedia.org/wiki/Chain%20extender
In polymer chemistry, a chain extender is a low molecular weight (MW) reagent that converts polymeric precursors to higher molecular weight derivatives. Often, it is convenient to prepare a polymer at an intermediate MW that is suitable for solution- or melt-processing. At or near the final stages of production, the material is treated with a chain extender. Typically, chain extenders are bifunctional, i.e., they have two functional groups, which can link together two polymers. Representative classes of chain extenders are diglycidyl ethers, diols, diamines, or dianhydrides. Chain extenders are often applied to polyurethanes. References Coatings Elastomers Plastics
Chain extender
[ "Physics", "Chemistry" ]
155
[ "Synthetic materials", "Coatings", "Unsolved problems in physics", "Elastomers", "Amorphous solids", "Plastics" ]
73,954,382
https://en.wikipedia.org/wiki/198%20%28number%29
198 (one hundred [and] ninety-eight) is the natural number following 197 and preceding 199. In mathematics 198 is a companion Pell number. Its corresponding Pell number is 70. References Integers
198 (number)
[ "Mathematics" ]
43
[ "Elementary mathematics", "Integers", "Mathematical objects", "Numbers" ]
73,955,301
https://en.wikipedia.org/wiki/Paquier%20Event
The Paquier Event (OAE1b) was an oceanic anoxic event (OAE) that occurred around 111 million years ago (Ma), in the Albian geologic stage, during a climatic interval of Earth's history known as the Middle Cretaceous Hothouse (MKH). Timeline OAE1b had three main subevents: the Kilian, Paquier, and Leenhardt. The Kilian subevent was defined by a negative δ13C excursion from about 2-2.5‰ to 0.5-1.5‰ followed by a gradual δ13C rise in the Atlantic Ocean, though the magnitude of these carbon isotope fluctuations was higher in areas like the Basque-Cantabrian Basin. The Paquier subevent was the most extreme subevent of OAE1b, exhibiting a δ13C drop of ~3‰ in marine organic matter and of 1.5-2‰ in marine carbonate, which was succeeded by a gradual positive δ13C excursion. The Leenhardt subevent was the last OAE1b subevent and is associated in the eastern Tethys Ocean with a negative δ13C excursion of 0.09‰ to -0.48‰ followed by a positive δ13C excursion to 0.58‰, although the magnitude of the carbon isotope shifts varies considerably in other marine regions, the negative δ13C excursion being around 1‰ in the Atlantic and western Tethys but ~4‰ in the Basque-Cantabrian Basin and ~3‰ in the Andean Basin. Causes Pulsed volcanic activity of the Kerguelen Plateau is suggested to be the cause of OAE1b based on mercury anomalies recorded from this interval. Five different mercury anomalies relative to total organic carbon are known from strata from the Jiuquan Basin spanning the OAE1b interval, strongly supporting a causal relationship with massive volcanism. Prominent negative osmium isotope excursions coeval with biotic changes among planktonic foraminifera further confirm the occurrence of multiple episodes of submarine volcanic activity over the course of OAE1b. Nonetheless, volcanism is not unequivocally supported as OAE1b's mainspring. Mercury anomalies associated with OAE1b have been interpreted by some to reflect mineralisation associated with salt diapirism instead of volcanism. Another line of evidence contradicting the volcanism hypothesis involves the massive diachrony between thallium isotope records and intervals of deoxygenation. Global warming intensified chemical weathering, leading to increased terrestrial inputs of organic matter into oceans and lakes. This promoted eutrophication that rapidly depleted bodies of water of dissolved oxygen. A contemporary increase in 187Os/188Os reflects an increase in continentally derived, radiogenic osmium sources in the ocean, confirming an increase in terrestrial runoff. Alternatively, rather than volcanism, some research points to orbital cycles as the governing cause of OAE1b. It has been hypothesised that enhanced monsoonal activity modulated by Earth's axial precession drove the development of OAE1b. Evidence supporting this explanation includes regular variations in detrital and weathering indices between humid intervals of high weathering and anoxia and drier intervals of decreased weathering and better oxygenated waters; these variations are suggested to correspond to precession cycles. A different analysis of orbital forcing identifies the long eccentricity cycle as the most significant orbital driver of monsoonal modulation. δ18O records in planktic foraminifera from the Boreal Ocean show a 100 kyr periodicity, indicating that the short eccentricity cycle governed the ingression of hot Tethyan waters into the Boreal Ocean and consequent Boreal warming. 
The 405 kyr eccentricity cycle appears to have dominated the advance and retreat of anoxia in the Vocontian Basin. The tectonic isolation of the Atlantic and Tethys Oceans restricted their ventilation, enabling their stagnation and facilitating ideal conditions for thermohaline stratification, which would in turn promote the widespread development of anoxia during a speedily warming climate. OAE1b's coincidence with a peak in a 5-6 Myr oscillation in marine phosphorus accumulation suggests that enhanced phosphorus regeneration may have been one of the causal factors behind the development of widespread anoxia. As more phosphorus built up in marine environments and caused spikes in biological productivity and decreases in dissolved oxygen, it caused a strong positive feedback loop in which phosphorus deposited on the seafloor was recycled back into the water column at faster rates, facilitating further increase in productivity and decrease in seawater oxygen content. Eventually, a negative feedback loop of increased atmospheric oxygen terminated this phosphorus spike and the OAE itself by causing increased wildfire activity and a consequent decline in vegetation and chemical weathering. Effects Unlike other OAEs during the MKH, such as the OAE1a and the OAE2, OAE1b was not associated with an extinction event of benthic foraminifera. Identical benthic foraminiferal assemblages occur both below and above the black shales deposited in association with OAE1b, indicating that this OAE was limited in its geographic and bathymetric extent. Although some parts of the ocean floor became devoid of life, benthic foraminifera survived in refugia and recolonised previously abandoned areas after the OAE with no faunal turnover. Planktonic foraminifera, however, significantly declined during OAE1b. In the eastern Pacific, the Paquier Level of OAE1b is associated with the demise of heterozoan-dominated carbonate production. As with other OAEs, OAE1b left its mark on the geologic record in the form of widespread and abundant deposition of black shales. See also Jenkyns Event Selli Event Breistroffer Event Bonarelli Event References Albian Stage Anoxic events
Paquier Event
[ "Chemistry" ]
1,248
[ "Chemical oceanography", "Anoxic events" ]
76,987,869
https://en.wikipedia.org/wiki/4D%20N%20%3D%201%20global%20supersymmetry
In supersymmetry, 4D global supersymmetry is the theory of global supersymmetry in four dimensions with a single supercharge. It consists of an arbitrary number of chiral and vector supermultiplets whose possible interactions are strongly constrained by supersymmetry, with the theory primarily fixed by three functions: the Kähler potential, the superpotential, and the gauge kinetic matrix. Many common models of supersymmetry are special cases of this general theory, such as the Wess–Zumino model, super Yang–Mills theory, and the Minimal Supersymmetric Standard Model. When gravity is included, the result is described by 4D supergravity. Background Global supersymmetry has a spacetime symmetry algebra given by the super-Poincaré algebra with a single supercharge. In four dimensions this supercharge can be expressed either as a pair of Weyl spinors or as a single Majorana spinor. The particle content of this theory must belong to representations of the super-Poincaré algebra, known as supermultiplets. Without including gravity, there are two types of supermultiplets: a chiral supermultiplet consisting of a complex scalar field and its Majorana spinor superpartner, and a vector supermultiplet consisting of a gauge field along with its Majorana spinor superpartner. The general theory has an arbitrary number of chiral multiplets indexed by , along with an arbitrary number of gauge multiplets indexed by . Here are complex scalar fields, are gauge fields, and and are Majorana spinors known as chiralini and gaugini, respectively. Supersymmetry imposes stringent conditions on the way that the supermultiplets can be combined in the theory. In particular, most of the structure is fixed by three arbitrary functions of the scalar fields. The dynamics of the chiral multiplets is fixed by the holomorphic superpotential and the Kähler potential , while the mixing between the chiral and gauge sectors is primarily fixed by the holomorphic gauge kinetic matrix . When such mixing occurs, the gauge group must also be consistent with the structure of the chiral sector. Scalar manifold geometry The complex scalar fields in the chiral supermultiplets can be seen as coordinates of a -dimensional manifold, known as the scalar manifold. This manifold can be parametrized using complex coordinates , where the barred index represents the complex conjugate . Supersymmetry ensures that the manifold is necessarily a complex manifold, which is a type of manifold that locally looks like and whose transition functions are holomorphic. This is because supersymmetry transformations map into left-handed Weyl spinors, and into right-handed Weyl spinors, so the geometry of the scalar manifold must reflect the fermion spacetime chirality by admitting an appropriate decomposition into complex coordinates. For any complex manifold there always exists a special metric compatible with the manifolds complex structure, known as a Hermitian metric. The only non-zero components of this metric are , with a line element given by Using this metric on the scalar manifold makes it a Hermitian manifold. The chirality properties inherited from supersymmetry imply that any closed loop around the scalar manifold has to maintain the splitting between and . This implies that the manifold has a holonomy group. Such manifolds are known as Kähler manifolds and can alternatively be defined as being manifolds that admit a two-form, known as a Kähler form, defined by such that . This also implies that the scalar manifold is a symplectic manifold. 
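For reference, the Hermitian line element and Kähler form described in this section can be written in LaTeX as follows; conventions and normalizations vary between references, so this is one common choice reconstructed for illustration rather than a quotation of the original formulas.

% One common convention for the Hermitian line element and Kähler form
\mathrm{d}s^{2} = 2\, g_{i\bar{j}}\, \mathrm{d}z^{i}\, \mathrm{d}\bar{z}^{\bar{j}},
\qquad
\omega = i\, g_{i\bar{j}}\, \mathrm{d}z^{i} \wedge \mathrm{d}\bar{z}^{\bar{j}},
\qquad
\mathrm{d}\omega = 0 .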
These manifolds have the useful property that their metric can be expressed in terms of a function known as a Kähler potential through where this function is invariant up to the addition of the real part of an arbitrary holomorphic function Such transformations are known as Kähler transformations and since they do not affect the geometry of the scalar manifold, any supersymmetric action must be invariant under these transformations. Coupling the chiral and gauge sectors The gauge group of a general supersymmetric theory is heavily restricted by the interactions of the theory. One key condition arises when chiral multiplets are charged under the gauge group, in which case the gauge transformation must be such as to leave the geometry of the scalar manifold unchanged. More specifically, they leave the scalar metric as well as the complex structure unchanged. The first condition implies that the gauge symmetry belongs to the isometry group of the scalar manifold, while the second further restricts them to be holomorphic Killing symmetries. Therefore, the gauge group must be a subgroup of this symmetry group, although additional consistency conditions can restrict the possible gauge groups further. The generators of the isometry group are known as Killing vectors, with these being vectors that preserve the metric, a condition mathematically expressed by the Killing equation , where are the Lie derivatives for the corresponding vector. The isometry algebra is then the algebra of these Killing vectors where are the structure constants. Not all of these Killing vectors can necessarily be gauged. Rather, the Kähler structure of the scalar manifolds also demands the preservation of the complex structure , with this imposing that the Killing vectors must also be holomorphic functions . It is these holomorphic Killing vectors that define symmetries of Kähler manifolds, and so a gauge group can only be formed by gauging a subset of these. An implication of is that there exists a set of real holomorphic functions known as Killing prepotentials which satisfy , where is the interior product. The Killing prepotentials entirely fix the holomorphic Killing vectors Conversely, if the holomorphic Killing vectors are known, then the prepotential can be explicitly written in terms of the Kähler potential as The holomorphic functions describe how the Kähler potential changes under isometry transformations , allowing them to be calculated up to the addition of an imaginary constant. A key consistency condition on the prepotentials is that they must satisfy the equivariance condition For non-abelian symmetries, this condition fixes the imaginary constants associated to the holomorphic functions , known as Fayet–Iliopoulos terms. For abelian subalgebras of the gauge algebra, the Fayet–Iliopoulos terms remain unfixed since these have vanishing structure constants. Lagrangian The derivatives in the Lagrangian are covariant with respect to the symmetries under which the fields transform, these being the gauge symmetries and the scalar manifold coordinate redefinition transformations. The various covariant derivatives are given by where the hat indicates that the derivative is covariant with respect to gauge transformations. Here are the holomorphic Killing vectors that have been gauged, while are the scalar manifold Christoffel symbols and are the gauge algebra structure constants. Additionally, second derivatives on the scalar manifold must also be covariant . 
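As a sketch in the same assumed notation (gauge fields A_\mu^A, gauged holomorphic Killing vectors k_A^i, and real Killing prepotentials P_A; overall signs differ between references), the gauge-covariant derivative of the scalars and the relation between Killing vectors and prepotentials take the schematic form:

```latex
% Gauge-covariant derivative of the scalar fields: the gauge fields act through
% the gauged holomorphic Killing vectors (assumed notation)
\hat{\partial}_\mu z^{i} = \partial_\mu z^{i} - A_\mu^{A}\, k_A^{i}(z),
\qquad
% Holomorphic Killing vectors obtained from the real Killing prepotentials P_A
% (the overall sign is convention dependent)
k_A^{i} = -\, i\, g^{i\bar{\jmath}}\, \partial_{\bar{\jmath}} P_A .
```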
Meanwhile, the left-handed and right-handed Weyl fermion projections of the Majorana spinors are denoted by . The general four-dimensional Lagrangian with global supersymmetry is given by Here are the so-called D-terms. The first line is the kinetic term for the chiral multiplets, whose structure is primarily fixed by the scalar metric, while the second line is the kinetic term for the gauge multiplets, which is instead primarily fixed by the real part of the holomorphic gauge kinetic matrix . The third line is the generalized supersymmetric theta-like term for the gauge multiplet, with this being a total derivative when the imaginary part of the gauge kinetic function is a constant, in which case it does not contribute to the equations of motion. The next line is an interaction term, while the second-to-last line contains the fermion mass terms, given by where is the superpotential, an arbitrary holomorphic function of the scalars. It is these terms that determine the masses of the fermions, since in a particular vacuum state, with the scalar fields expanded around some value , the mass matrices become fixed matrices to leading order in the scalar field. Higher order terms give rise to interaction terms between the scalars and the fermions. The mass basis will generally involve diagonalizing the entire mass matrix, implying that the mass eigenstates are generally linear combinations of the chiral and gauge fermion fields. The last line includes the scalar potential, where the first term is called the F-term and the second is known as the D-term. Finally, this line also contains the four-fermion interaction terms, whose coefficient is the Riemann tensor of the scalar manifold. Properties Supersymmetry transformations Neglecting three-fermion terms, the supersymmetry transformation rules that leave the Lagrangian invariant are given by The second part of each fermion transformation, proportional to for the chiralino and for the gaugino, is referred to as a fermion shift. These shifts dictate many of the physical properties of the supersymmetry model, such as the form of the potential and the goldstino when supersymmetry is spontaneously broken. Spontaneous symmetry breaking At the quantum level, supersymmetry is broken if the supercharges do not annihilate the vacuum . Since the Hamiltonian can be written in terms of these supercharges, this implies that unbroken supersymmetry corresponds to vanishing vacuum energy, while broken supersymmetry necessarily requires positive vacuum energy. In contrast to supergravity, global supersymmetry does not admit negative vacuum energies, with this being a direct consequence of the supersymmetry algebra. In the classical approximation, supersymmetry is unbroken if the scalar potential vanishes, which is equivalent to the condition that If any of these are non-zero, then supersymmetry is classically broken. Due to the superpotential nonrenormalization theorem, which states that the superpotential does not receive corrections at any level of quantum perturbation theory, the above condition holds at all orders of quantum perturbation theory. Only non-perturbative quantum corrections can modify the condition for supersymmetry breaking. Spontaneous symmetry breaking of global supersymmetry necessarily leads to the presence of a massless Nambu–Goldstone fermion, referred to as a goldstino.
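The scalar potential discussed above is conventionally written as the sum of an F-term and a D-term; the following is a sketch of that standard form in the same assumed notation (W the superpotential, P_A the Killing prepotentials, f the gauge kinetic matrix), not a quotation of any particular reference's conventions:

```latex
% Scalar potential of 4D N = 1 global supersymmetry (standard schematic form)
V \;=\; \underbrace{g^{i\bar{\jmath}}\,\partial_i W\,\partial_{\bar{\jmath}}\bar{W}}_{\text{F-term}}
\;+\; \underbrace{\tfrac{1}{2}\,\big[(\operatorname{Re} f)^{-1}\big]^{AB}\, P_A\, P_B}_{\text{D-term}} .
```

Both terms are non-negative, so the potential vanishes, and supersymmetry is classically unbroken, only when every F-term and D-term vanishes; otherwise supersymmetry is spontaneously broken and the goldstino appears.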
This fermion is given by the linear combination of the fermion fields multiplied by their fermion shifts and contracted with the appropriate metrics, with this being the eigenvector corresponding to the zero eigenvalue of the fermion mass matrix. The goldstino vanishes when the conditions for supersymmetry are met, these being the vanishing of the superpotential and the prepotential. Mass sum rules One important set of quantities is the supertraces of powers of the mass matrices , usually expressed as a sum over all the eigenvalues modified by the spin of the state In unbroken global supersymmetry, for all . The case is referred to as the mass sum formula, which in the special case of a trivial gauge kinetic matrix can be expressed as showing that this vanishes in the case of a Ricci-flat scalar manifold, unless spontaneous symmetry breaking occurs through non-vanishing D-terms. For most models , even when supersymmetry is spontaneously broken. An implication of this is that the mass difference between bosons and fermions cannot be very large. The result can be generalized variously, such as for vanishing vacuum energy but a general gauge kinetic term, or even to a general formula using the superspace formalism. In the full quantum theory the masses can get additional quantum corrections, so the above results only hold at tree level. Special cases and generalizations A theory with only chiral multiplets and no gauge multiplets is sometimes referred to as the supersymmetric sigma model, with this determined by the Kähler potential and the superpotential. From this, the Wess–Zumino model is acquired by restricting to a trivial Kähler potential corresponding to a Euclidean metric, together with a superpotential that is at most cubic This model has the useful property of being fully renormalizable. If instead there are no chiral multiplets, then the theory with a Euclidean gauge kinetic matrix is known as super Yang–Mills theory. In the case of a single gauge multiplet with a gauge group, this corresponds to super Maxwell theory. Super quantum chromodynamics is meanwhile acquired using a Euclidean scalar metric, together with an arbitrary number of chiral multiplets behaving as matter and a single gauge multiplet. When the gauge group is an abelian group this is referred to as super quantum electrodynamics. Models with extended supersymmetry arise as special cases of supersymmetry models with particular choices of multiplets, potentials, and kinetic terms. This is in contrast to supergravity, where extended supergravity models are not special cases of supergravity and necessarily include additional structures that must be added to the theory. Gauging global supersymmetry gives rise to local supersymmetry, which is equivalent to supergravity. In particular, 4D N = 1 supergravity has a matter content similar to that of global supersymmetry, except with the addition of a single gravity supermultiplet, consisting of a graviton and a gravitino. The resulting action requires a number of modifications to account for the coupling to gravity, although it shares many structural similarities with the global case. The global supersymmetry model can be directly acquired from its supergravity generalization through the decoupling limit whereby the Planck mass is taken to infinity. These models are also applied in particle physics to construct supersymmetric generalizations of the Standard Model, most notably the Minimal Supersymmetric Standard Model.
This is the minimal extension of the Standard Model that is consistent with phenomenology and includes supersymmetry that is broken at some high scale. Construction There are a number of ways to construct a four dimensional global supersymmetric action. The most common approach is the superspace approach. In this approach, Minkowski spacetime is extended to an eight-dimensional supermanifold which additionally has four Grassmann coordinates. The chiral and vector multiplets are then packaged into fields known as superfields. The supersymmetry action is subsequently constructed by considering general invariant actions of the superfields and integrating over the Grassmann subspace to get a four-dimensional Lagrangian in Minkowski spacetime. An alternative approach to the superspace formalism is the multiplet calculus approach. Rather than working with superfields, this approach works with multiplets, which are sets of fields on which the supersymmetry algebra is realized. Invariant actions are then constructed from these. For global supersymmetry this is more complicated than the superspace approach, although a generalized approach is very useful when constructing supergravity actions. Notes References Supersymmetric quantum field theory
4D N = 1 global supersymmetry
[ "Physics" ]
3,140
[ "Supersymmetric quantum field theory", "Supersymmetry", "Symmetry" ]
76,988,609
https://en.wikipedia.org/wiki/Applied%20Computational%20Electromagnetics%20Society%20Journal
The Applied Computational Electromagnetics Society Journal, also known as ACES Journal, is a peer-reviewed open access scientific journal published monthly by The Applied Computational Electromagnetics Society and River Publishers. It covers fundamental and applied research on computational electromagnetics. It was established in 1986 and its editors-in-chief are Sami Barmada (University of Pisa) and Atef Elsherbeni (Colorado School of Mines). Abstracting and indexing The journal is abstracted and indexed in: According to the Journal Citation Reports, the journal has a 2023 impact factor of 0.6. References External links Electromagnetism journals Monthly journals Academic journals established in 1986 English-language journals Open access journals Electrical and electronic engineering journals Computational modeling journals Computational electromagnetics
Applied Computational Electromagnetics Society Journal
[ "Physics", "Engineering" ]
151
[ "Computational electromagnetics", "Computational physics", "Electronic engineering", "Electrical engineering", "Electrical and electronic engineering journals" ]
76,989,062
https://en.wikipedia.org/wiki/Journal%20of%20Micro/Nanopatterning%2C%20Materials%2C%20and%20Metrology
Journal of Micro/Nanopatterning, Materials, and Metrology is a peer-reviewed scientific journal published quarterly by SPIE. It covers science, development, and practice of micro and nanofabrication processes and metrology. Established in 2002 under the name Journal of Microlithography, Microfabrication, and Microsystems, it was subsequently retitled to Journal of Micro/Nanolithography, MEMS, and MOEMS in 2007. The journal title was changed to its current name in 2021. The editor-in-chief of the journal is Harry Levinson (HJL Lithography). Abstracting and indexing The journal is abstracted and indexed in: According to the Journal Citation Reports, the journal has a 2022 impact factor of 2.3. References External links Quarterly journals SPIE academic journals Academic journals established in 2002 English-language journals Mechanical engineering journals Semiconductor journals Materials science journals
Journal of Micro/Nanopatterning, Materials, and Metrology
[ "Materials_science", "Engineering" ]
189
[ "Mechanical engineering journals", "Nanotechnology journals", "Materials science journals", "Materials science", "Mechanical engineering" ]
76,989,865
https://en.wikipedia.org/wiki/Mindar
Mindar (), also known as Android Kannon Mindar, is an android preacher at the Kōdai-ji temple in Kyoto, Japan. The humanoid robot regularly gives sermons on the Heart Sutra at the 400-year-old Zen Buddhist temple. It was created to represent and embody Kannon, a bodhisattva associated with compassion. Mindar was designed through a collaboration between staff of Kōdai-ji and roboticists from Osaka University, including Hiroshi Ishiguro. Construction of the tall android began in 2017 at Osaka University's robotics laboratory. Development of the android cost (US$227,250), while the total cost of the project was (US$909,090). Mindar was unveiled to the public at a ceremony in March 2019. Its 25-minute pre-programmed sermon was written by monks and addresses the Buddhist concepts of emptiness and compassion. Background and development Kōdai-ji is a Zen Buddhist temple established in 1606 in the Higashiyama ward of Kyoto. It is part of the Rinzai school. Roboticist Ishiguro Hiroshi of Osaka University visited Kōdai-ji in July 2017. Gotō Tenshō, then the temple's chief steward, suggested to Ishiguro the creation of a robotic Buddha statue. They met again two months later and initially considered having several robots discussing the Buddha's teachings, though it was determined that a single robot would be preferable from a technical standpoint. A monologue on the Heart Sutra was chosen and it was decided that the android would take the form of Kannon, a bodhisattva associated with compassion. The Lotus Sutra mentions that Kannon is capable of manifesting in various forms. The Android Kannon Production Committee was established in September 2017 and included staff from Kōdai-ji as well as engineers from Osaka University. Ishiguro proposed that the 'Alter' model of robot be used as a prototype. The subject matter of Mindar's sermon was determined by Buddhist monks of the Rinzai school—Honda Dōryū of , Sakaida Taisen of Kennin-ji, and Unrin'in Sōseki of Reigen-in. They devised a narrative explaining the Buddhist concepts of compassion and emptiness, based on works by Hajime Nakamura and Mumon Yamada. The name 'Mindar' was proposed by Ogawa Kōhei, a roboticist at Osaka University. Mindar is not powered by artificial intelligence, though the designers originally had aspirations of endowing the android with machine-learning capabilities. Gotō said "This robot will never die; it will just keep updating itself and evolving. With AI, we hope it will grow in wisdom to help people overcome even the most difficult troubles. It's changing Buddhism." The android Kannon was constructed at a robotics laboratory at Osaka University. Ogawa Kōhei engineered the android and it was completed in February 2019. The total cost of the project was (US$909,090), though development of the android only cost (US$227,250). A traditional Buddhist ceremony was held for the android upon its introduction to the public in March 2019. The ceremony was attended by monks and included chanting, bell-ringing, and drumming. Mindar has historical precedents that were drawn on by its designers. Mechanized Karakuri puppets were produced in Japan from the 17th century and the country's first robot, the Gakutensoku, debuted in the late 1920s and could write calligraphy, change its facial expression, and move its head and hands through an air pressure mechanism. 
As a religion-oriented android, Mindar was preceded by other 21st-century robots, including the Chinese chatbot Robot Monk Xian'er and Pepper (produced 2015–2021), which could be programmed to perform Buddhist funeral rites, including chanting sutras and banging drums. Description Mindar is a stationary, tall android, weighing . It has a slender mechatronic body made from aluminum with silicone skin covering its face, hands, and shoulders. Mindar stands on a platform and does not have working legs. It is capable of blinking and smiling, and moves its head, torso, and arms through air hydraulics. Mindar accompanies its preaching with a variety of gestures, such as joining its palms together in gasshō. A camera implanted in Mindar's left eye allows the android to give the impression of eye contact by focusing on a person. The top of Mindar's skull is exposed, showing blinking lights and wires within its cranial cavity. Similarly most of the android's body is not covered in silicone, exposing wires and servo motors. Similar to Ishiguro's telenoid robots, Mindar has an androgynous appearance. The voice has been described as feminine and soothing. Sermons Mindar is situated within the Kōdai-ji temple complex at Kyōka Hall. Its sermons are open to the public, and are typically given twice daily on Saturdays and Sundays. Mindar gives a 25-minute sermon in Japanese on the Heart Sutra, addressing concepts of compassion and emptiness within Buddhism. Chinese and English subtitles are projected on the back wall of the room. In the pre-programmed sermon's introduction, a spotlight shines on Mindar, and the android begins speaking. It refers to itself as the bodhisattva, saying: The multimedia presentation is accompanied by operatic piano music and augmented through 360-degree projection mapping, including the projection of a virtual audience on the walls of the room. Mindar interacts with members of the projected audience, answering their questions in a pre-programmed dialogue. The sermon ends with Mindar chanting the Heart Sutra. Reception Mindar's introduction in 2019 received international news coverage. Media coverage focused on the novelty of a robot preacher, the cost of the project, and the potential for Mindar to change perceptions about Buddhism in Japan. Public reception of Mindar has been mixed. A survey by Osaka University found that some people found the android easy to follow, surprisingly human-like, and warm, while others said that it felt unnatural or fake, with expressions that seemed engineered. Several people who have listened to Mindar's sermon have cried, with some considering the shadow cast by the android to be the "real" Kannon. Foreigners, especially those from Western countries, have been more critical of the android. Some raised concerns that the android upset the sanctity of religion, while others likened Mindar to Frankenstein's monster. A 2020 paper in Frontiers in Artificial Intelligence discusses whether androids such as Mindar can express Buddha-nature. It concludes that Mindar could be considered an authentic incarnation of Kannon were it to become self-aware. A 2023 paper in the Journal of Experimental Psychology describes a field study based on interviews with people who had heard Mindar's sermon. The paper indicates that people do not assign android preachers the same credibility as they do for human preachers. The authors conclude that the automation of religious duties would likely result in a reduction of religious commitment. 
See also Buddhism and artificial intelligence Human–robot interaction Japanese robotics Notes References External links Mindar at the Kōdai-ji website (in Japanese) Humanoid robots Robots of Japan 2019 robots Android (robot) Bodhisattvas Guanyin Buddhism and technology Japanese Buddhist clergy 2019 establishments in Japan Buddhism in Japan Tourist attractions in Kyoto
Mindar
[ "Engineering" ]
1,515
[ "Android (robot)", "Human–machine interaction" ]
69,494,053
https://en.wikipedia.org/wiki/Tribovoltaic%20effect
The tribovoltaic effect is a triboelectric phenomenon in which a direct current (DC) is generated by sliding a P-type semiconductor over an N-type semiconductor or a metal surface without photon illumination. It was first proposed by Wang et al. in 2019 and later observed experimentally in 2020. When a P-type semiconductor slides over an N-type semiconductor, electron-hole pairs can be produced at the interface, which separate in the built-in electric field (contact potential difference) at the semiconductor interface, generating a DC current. Research has shown that the tribovoltaic effect can occur at various interfaces, such as metal-semiconductor, P-N semiconductor, metal-insulator-semiconductor, metal-insulator-metal, and liquid-semiconductor interfaces. The tribovoltaic effect may find applications in the fields of energy harvesting and smart sensing. Nomenclature It has been suggested that the generation of tribo-current at a sliding PN junction or Schottky junction is analogous to the generation of photo-current in the photovoltaic effect, the only difference being the source of the energy that excites the electron-hole pairs, so it was named “tribovoltaic effect” by Wang et al. Experimental evidence The tribovoltaic effect has been observed at both the macro- and nanoscale. It was found that a direct current can be generated by sliding an N-type diamond-coated tip over P-type Si samples, and that the direction of the tribo-current depends on the direction of the built-in electric field at the PN and Schottky junctions. Tribovoltaic effect at different interfaces Metal-semiconductor interface. When a Pt-coated silicon atomic force microscopy (AFM) tip rubs on a molybdenum disulfide (MoS2) surface, a DC current with a maximum density of 10^6 A/m2 is generated. Similarly, using a pure Pt tip to rub both P-type and N-type silicon samples, the current follows the contact potential. P-N semiconductor interface. When N-type silicon is rubbed against P-type Si, a DC current from the P-type Si to the N-type silicon is produced, with the same direction as the built-in electric field at the PN junction. Furthermore, when an N-type diamond-coated silicon tip is used to rub the surfaces of N-type silicon and P-type Si, a tribocurrent is generated at the interface between the N-type tip and the P-type Si. Metal-insulator-semiconductor interface. When a conducting tip rubs against silicon, the tribovoltaic effect can induce water molecules to form an oxide layer on the silicon surface, and the tribo-current decreases gradually with increasing oxide layer thickness. Metal-insulator-metal interface. Studies of the DC output characteristics of Al-TiO2-Ti heterojunctions show that the open-circuit voltage increases with increasing TiO2 thickness, while the short-circuit current first increases and then decreases. The experiments have revealed that the tribo-current arises from quantum tunneling, thermionic emission, and trap-assisted transport. Liquid-semiconductor interface. The tribovoltaic effect can also occur at the interface between an aqueous solution and a solid semiconductor, in which case the aqueous solution is treated as a liquid semiconductor. The tribovoltaic effect at liquid-solid interfaces has also been observed by Wang et al. References Electrical phenomena Electrostatics Electricity Tribology
Tribovoltaic effect
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
752
[ "Tribology", "Physical phenomena", "Materials science", "Surface science", "Electrical phenomena", "Mechanical engineering" ]
69,496,472
https://en.wikipedia.org/wiki/Occupant-centric%20building%20controls
Occupant-centric building controls or occupant-centric controls (OCC) is a control strategy for the indoor environment that specifically focuses on meeting the current needs of building occupants while decreasing building energy consumption. OCC can be used to control lighting and appliances, but is most commonly used to control heating, ventilation, and air conditioning (HVAC). OCC uses real-time data collected on indoor environmental conditions, occupant presence, and occupant preferences as inputs to energy system control strategies. By responding to real-time inputs, OCC is able to flexibly provide the proper level of energy services, such as heating and cooling, when and where occupants need them. Ensuring that building energy services are provided in the right quantity is intended to improve occupant comfort, while providing these services only at the right time and in the right location is intended to reduce overall energy use. In contrast to OCC, conventional building control strategies, known as Building Energy Management Systems (BEMS), typically use predetermined temperature setpoints and setback schedules. These temperatures and temperature schedules are often determined by industry standards with no input from the building occupants. Conventional BEMS typically have static operation parameters that give minimal flexibility to meet the changing needs of building occupants throughout the day, the changing needs of new building tenants, or the diverse thermal needs of any given group of building occupants. The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) has outlined that the thermal comfort of occupants is influenced both by environmental conditions, such as radiative heat, humidity, air speed, and season, and by personal factors, such as physiology, clothing worn, and activity level. This dynamic and personalized nature of thermal comfort has traditionally made it complex to integrate into HVAC controls, but an increase in sensing and computing capabilities, along with a decrease in sensing and computing costs, has made it possible for OCC to be an effective and scalable means of controlling building energy systems. With buildings consuming over 33% of global energy and producing almost 40% of emissions, OCC could play a significant role in reducing global energy consumption and emissions. Background Occupant-Centric Control Inputs OCC relies on real-time occupancy and occupant preference data as inputs to the control algorithm. This data must be continually collected by various methods and can be collected on various scales including whole-building, floor, room, and sub-room. Often, it is most useful to collect data on a scale that matches the thermal zoning of the building. A thermal zone is a section of a building that is conditioned under a single temperature setpoint. Data on occupant presence (occupied or unoccupied) and occupancy levels (number of occupants) can be collected with either explicit or implicit sensors. Explicit sensors directly measure occupancy and can include passive infrared sensors, ultrasonic motion detectors, and entranceway counting cameras. Implicit sensors measure a parameter that can be correlated to occupancy through some calibrated relationship. Examples of implicit occupancy sensing include Wi-Fi-connected device counts and other sensor readings that correlate with occupancy.
The selection of occupancy sensing devices depends on the size of the space being monitored, the budget for sensors, the desired accuracy, the goal of the sensor (detecting occupant presence or count), and security considerations. Unlike occupant presence data, acquiring occupant preference data requires direct feedback from building occupants. This feedback can be solicited or unsolicited. Unsolicited occupant preference data can include the time and magnitude of a manual thermostat setpoint change. While this can be a good indicator of occupant thermal dissatisfaction, thermostat setpoint changes can be infrequent, creating a barrier to integrating occupant preference into OCC. Soliciting occupant preference information is often used as a means of acquiring more frequent feedback and takes the form of just-in-time surveys or Ecological Momentary Assessments (EMA). These surveys, typically deployed to computers, smartphones, or smartwatches, can ask participants about their thermal sensation, thermal satisfaction, or any other factor that reflects their comfort in the space. Incorporating occupant preference information into OCC is still in its early stages, and its practical application is still being studied in the academic environment. Predictive Controls OCC can be categorized as either reactive control or predictive control. Reactive control uses real-time occupant preference and presence feedback to immediately alter the conditions of the space. While this approach is useful for controlling systems with fast response times, such as lighting systems, reactive OCC is not ideal for systems with slow response times, such as HVAC. For these slow-response systems, predictive control allows building services, such as heating, to be provided at the right time without a lag between the time a service is needed and the time when the service is provided. Unlike reactive controls, predictive controls use real-time occupant preference and presence data to inform and train predictive control algorithms rather than to directly alter system operation. Predictive controls have a 'prediction horizon', which is the amount of time ahead that an OCC will need to change a setpoint or ventilation rate to achieve a certain temperature or indoor air quality level. The needed prediction horizon for an OCC will vary depending on the response time of the building. Building attributes that contribute to the need for a longer prediction horizon when controlling HVAC systems include large open rooms, high thermal mass, and spaces with rapid changes in occupancy levels. For commercial HVAC OCC, predictive algorithms will be informed by the six information grades (IGs) outlined by ASHRAE. These IGs are occupant presence, occupant count, and occupant preference, each considered at the building and thermal zone level. From occupant presence data, an OCC may predict the earliest occupant arrival time and latest departure time. From occupant count data, an OCC may predict the maximum expected number of building occupants and when that maximum will occur. From occupant preference data, an OCC may predict the desired temperature and humidity of the space throughout the day. With this information, an OCC could predict when it would need to change temperature setpoints and ventilation rates to achieve a desired temperature and air quality level at a specific time. Predictive algorithms need a sufficient amount of data, as well as relatively regular occupant preference and presence patterns, to develop accurate control predictions.
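As a minimal illustrative sketch of occupancy-driven predictive pre-conditioning (the function names, the 90-minute horizon, and the use of the earliest historical arrival are assumptions for illustration, not values prescribed by ASHRAE or Annex 79):

```python
def earliest_expected_arrival(first_arrivals_min):
    """Earliest first-arrival time (minutes after midnight) seen in a zone's
    recent presence history; a simple stand-in for a learned arrival model."""
    return min(first_arrivals_min)

def preconditioning_start(expected_arrival_min, prediction_horizon_min=90):
    """Back the predicted arrival off by the zone's prediction horizon,
    i.e. the warm-up lag of a slow-response HVAC system."""
    return max(0, expected_arrival_min - prediction_horizon_min)

# Example: one week of first-arrival times for a zone (minutes after midnight)
history = [485, 470, 505, 490, 478, 500, 492]  # roughly 7:50 to 8:25 am
start = preconditioning_start(earliest_expected_arrival(history))
print(f"Raise the occupied setpoint at minute {start} after midnight")
```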
Occupant-Centric Control Development The development of OCC is currently being supported by the International Energy Agency (IEA) Energy in Buildings and Communities (EBC) Annex 79. Annex 79, which will run from 2018 to 2023, is an international collaborative initiative focused on developing and deploying technology, data collection methods, simulation methods, control algorithms, implementation policies, and application strategies aimed at occupant-centric building design and controls. This collaborative is focused on applying the knowledge gained from the previous Annex 66, which ran from 2013 to 2018. Annex 66 worked to understand how occupant behavior relates to building energy consumption, as well as how building operation and design influence occupant thermal comfort. This was done primarily by collecting occupant behavior data and developing occupant simulation methods. Additional groups working to develop OCC include the ASHRAE Multidisciplinary Task Group on Occupant Behavior in Buildings (MGT.OBB) and the National Science Foundation Future of Work Center for Intelligent Environments. Occupant-Centric Control Algorithms OCC is still in development, and the creation and evaluation of various control algorithms is the main focus of study. Algorithms that have been studied for OCC include, but are not limited to, iterative data fusion methods, unsupervised machine learning, and reinforcement learning. Each of these algorithms has varying levels of computational complexity, needed inputs, and energy reduction potential. Iterative data fusion methods are an example of reactive OCC controls and are a means of combining data from two or more sources. For this method, preference data from multiple occupants and data on indoor environmental conditions are used to balance the two optimization goals of average occupant satisfaction and energy savings. To balance these goals, each time new data is put into the system, the algorithm will determine if any control action is needed, such as changing the temperature setpoint, based on a set of control rules that determine how well the optimization goals are being met. Unsupervised machine learning can be used to cluster occupants based on their 'thermal personalities'. These clusters can then be used to inform reactive or predictive controls by understanding the thermal preferences of the specific occupants in the space. For this method, solicited occupant preference information is fed into an unsupervised machine learning algorithm that will group occupants based on how similar their thermal preferences are. The number and size of the groups depend on the type of unsupervised algorithm used as well as the data being analyzed. Reinforcement learning can be used as a predictive control algorithm with the goal of optimizing occupant satisfaction and energy savings. For this method, the algorithm accepts occupant presence and preference data and uses it to learn occupant preferences without the need to train the algorithm on previous data. The algorithm will evaluate each control decision it makes in order to maximize its reward, which is based on its ability to optimize occupant satisfaction and energy savings. This algorithm is capable of making continual adjustments based on new information it receives. References Ergonomics Human–computer interaction Ventilation
Occupant-centric building controls
[ "Engineering" ]
2,001
[ "Human–computer interaction", "Human–machine interaction" ]
69,496,724
https://en.wikipedia.org/wiki/Lutetium%20phosphide
Lutetium phosphide is an inorganic compound of lutetium and phosphorus with the chemical formula LuP. The compound forms dark crystals that do not dissolve in water. Synthesis Lutetium phosphide can be prepared by heating powdered lutetium and red phosphorus in an inert atmosphere or vacuum: 4Lu + P4 -> 4LuP It can also be formed in the reaction of lutetium with phosphine. Physical properties Lutetium phosphide forms dark cubic crystals of the rock salt type, space group Fm3m, with cell parameter a = 0.5533 nm and Z = 4. It is stable in air, does not dissolve in water, and reacts actively with nitric acid. Uses The compound is a semiconductor used in high-power, high-frequency applications and in laser diodes. It is also used in gamma radiation detectors due to its ability to absorb radiation. References Phosphides Lutetium compounds Semiconductors Rock salt crystal structure
Lutetium phosphide
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
182
[ "Electrical resistance and conductance", "Physical quantities", "Semiconductors", "Materials", "Electronic engineering", "Condensed matter physics", "Solid state engineering", "Matter" ]
75,340,267
https://en.wikipedia.org/wiki/Efbemalenograstim%20alfa
Efbemalenograstim alfa, sold under the brand name Ryzneuta, is a medication used to decrease the incidence of infection in chemotherapy-induced neutropenia. It is a leukocyte growth factor. It is given by subcutaneous injection. The most common side effects of efbemalenograstim alfa are nausea, anemia, and thrombocytopenia. Efbemalenograstim alfa is an immunostimulant/colony stimulating factor that belongs to the class of hematopoietic growth factors (granulocyte colony stimulating factor; G CSF) which increase the production and differentiation of mature and functionally active neutrophils from bone marrow precursor cells. It was approved for medical use in China in May 2023, in the United States in November 2023, and in the European Union in March 2024. Medical uses In the US, efbemalenograstim alfa is indicated to decrease the incidence of infection, as manifested by febrile neutropenia, in adults with non-myeloid malignancies receiving myelosuppressive anti-cancer drugs associated with a clinically significant incidence of febrile neutropenia. In the EU, efbemalenograstim alfa is indicated for the reduction in the duration of neutropenia and the incidence of febrile neutropenia in adults treated with cytotoxic chemotherapy for malignancy (with the exception of chronic myeloid leukaemia and myelodysplastic syndromes). Side effects Efbemalenograstim alfa can cause fatal splenic rupture, acute respiratory distress syndrome, serious allergic reactions including anaphylaxis, sickle cell crises in patients with sickle cell disorders, glomerulonephritis, thrombocytopenia, capillary leak syndrome, and myelodysplastic syndrome and acute myeloid leukemia in people with breast and lung cancer. History The US Food and Drug Administration approved efbemalenograstim alfa based on evidence from two main clinical trials, GC-627-04 and GC-627-05, in 515 participants with breast cancer receiving chemotherapy. There was one participant included in the trial from the United States, and 514 participants were included from sites outside of the United States. The trials were conducted at 52 sites in five countries including Hungary, Russia, Ukraine, Bulgaria, and the United States. The same trials (GC-627-04 and GC-627-05) were used to assess efficacy and safety. Efbemalenograstim alfa was evaluated in two main clinical trials that were randomized and controlled. A total of 515 participants were randomized to receive efbemalenograstim alfa or placebo, or Neulasta, after receiving myelosuppressive anticancer drugs associated with a clinically significant incidence of febrile neutropenia to treat metastatic breast cancer. Both trials evaluated the benefit and side effects of efbemalenograstim alfa in participants. The benefit of efbemalenograstim alfa was based on the mean duration of severe neutropenia seen in participants after receiving either efbemalenograstim alfa or control (placebo or Neulasta). Society and culture Legal status It was approved for medical use in China in May 2023, in the United States in November 2023, and in the European Union in March 2024. In January 2024, the Committee for Medicinal Products for Human Use of the European Medicines Agency adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Ryzneuta, intended to reduce the duration of neutropenia and the incidence of febrile neutropenia due to chemotherapy. The applicant for this medicinal product is Evive Biotechnology Ireland Limited. 
Efbemalenograstim alfa was approved for medical use in the European Union in March 2024. References External links Drugs acting on the blood and blood forming organs Growth factors Immunostimulants Recombinant proteins
Efbemalenograstim alfa
[ "Chemistry", "Biology" ]
879
[ "Growth factors", "Recombinant proteins", "Biotechnology products", "Signal transduction" ]
75,346,266
https://en.wikipedia.org/wiki/Stavros%20Avramidis
Stavros Avramidis (born in Kavala, Greece, in 1958) is a Greek Canadian wood scientist and professor at the University of British Columbia in Canada, who is an elected fellow (FIAWS) and president of the International Academy of Wood Science for the period 2023-2026. First years and education Avramidis was born in Kavala, Greece, on April 6, 1958, and grew up in Thessaloniki. He attended the Department of Forestry at the Aristotle University of Thessaloniki and received his university degree in 1981. Following that, he pursued research-based postgraduate (1982-1983) (M.S. in the area of composite products) and doctoral studies (1983-1986) in the United States at the State University of New York College of Environmental Science and Forestry, in the area of biopolymer physics under the guidance of John F. Siau. Academic career Avramidis began his academic career in 1987 in Canada at the University of British Columbia as an assistant professor at Department of Wood Science in the Faculty of Forestry. He was appointed associate professor in 1993 and full professor in 1998. Avramidis served as the Head of the UBC Department of Wood Science for two consecutive terms, from 2016 to the present. Avramidis's research team has presented research work on the physical and drying properties of wood. His applied research addresses practical issues in the Canadian wood industry related to energy optimization and upgrading production methods, using acoustic, electrical, and optical techniques, as well as radio wave methods, simulation, and artificial intelligence. Research work and recognition Avramidis along with his colleagues have authored over 250 scientific articles, more than 100 industrial studies, and his research work has received almost 3,000 citations in the Scopus database, until July 2024. In 2012, Avramidis was selected as a member of the editorial board of the journal, Wood Material Science and Engineering. He has been a member of the editorial boards of Holzforschung, Drying Technology, Wood Research, European Journal of Wood and Wood Products, Maderas. Ciencia y tecnologia and Drying Technology. In 2020, his name was included in the Mendeley Data, published in the journal Plos Biology for the international impact of his yearlong research in wood drying. In 2022, Avramidis received the Ternryd Award 2022 from the Swedish Linnaeus Academy Research Foundation for his research in wood science. In June 2023, Avramidis was elected as the president of the International Academy of Wood Science, for the years 2023–2026. In October 2023, a referred metaresearch conducted by John Ioannidis and his team at Stanford University, included Avramidis in Elsevier Data 2022, where he was placed at the top 2% of researchers in the area of wood physics. In August 2024, Avramidis has acquired the same international distinction for his research work in wood science (Elsevier Data 2023; career data). References External links Google Scholar ResearchGate 1958 births Living people Academic staff of the University of British Columbia State University of New York College of Environmental Science and Forestry alumni Aristotle University of Thessaloniki alumni Greek scientists Wood sciences Fellows of the International Academy of Wood Science Wood scientists
Stavros Avramidis
[ "Materials_science", "Engineering" ]
674
[ "Wood sciences", "Wood scientists", "Materials science" ]
75,349,990
https://en.wikipedia.org/wiki/Elisabeth%20Wheeler
Elisabeth A. Wheeler (born January 10, 1944) is an American biologist, botanist, and wood scientist who is an emeritus professor at North Carolina State University. Her research is in the areas of wood anatomy (softwoods and hardwoods) and paleontology (Late Cretaceous and early Tertiary fossil woods). Much of her pioneering research has been carried out jointly with the Dutch botanist Pieter Baas. Education Wheeler studied biology at Reed College in Portland, Oregon, receiving her BA in 1965. She completed her MS studies in botany at Southern Illinois University Carbondale (1968-1970) and continued with doctoral research in botany, obtaining her PhD in 1972. Research career During the years 1972–1976, she worked as a curatorial assistant and honorary research fellow at the Bailey-Wetmore Laboratory of Plant Anatomy and Morphology at Harvard University. In 1976, Wheeler became an assistant professor at North Carolina State University in the Department of Wood and Paper Science, where she worked continuously until 2008, when she retired as a full professor. Wheeler coordinated the NCSU initiative that created InsideWood, a freely accessible educational database containing thousands of wood anatomical descriptions and over 66,000 photomicrographs, with worldwide coverage. She is a member of the International Association of Wood Anatomists, the Botanical Society of America, and the International Organization of Paleobotany, and is a Fellow of the International Society of Wood Science and Technology. She served as a co-editor of the IAWA Journal, in cooperation with the then editor, Pieter Baas. In October 2023, a meta-research study carried out by John Ioannidis et al. at Stanford University included Wheeler in the Elsevier Data 2022, where she was ranked in the top 2% of researchers of all time in forestry – paleontology. As of March 2024, Wheeler's research work had been cited more than 7,000 times in Google Scholar (h-index: 46). The standard author abbreviation Wheeler is used to indicate this scientist as the author when citing a botanical name, e.g. Alangium oregonensis Scott & Wheeler. Personal life She lives in Raleigh, North Carolina. References External links ResearchGate Elisabeth Wheeler InsideWood North Carolina State University 1944 births Living people American botanists North Carolina State University staff Wood scientists
Elisabeth Wheeler
[ "Materials_science" ]
484
[ "Wood sciences", "Wood scientists" ]
75,350,246
https://en.wikipedia.org/wiki/Proto-metabolism
A proto-metabolism is a series of linked chemical reactions in a prebiotic environment that preceded and eventually turned into modern metabolism. Combining ongoing research in astrobiology and prebiotic chemistry, work in this area focuses on reconstructing the connections between potential metabolic processes that may have occurred in early Earth conditions. Proto-metabolism is believed to be simpler than modern metabolism and the Last Universal Common Ancestor (LUCA), as simple organic molecules likely gave rise to more complex metabolic networks. Prebiotic chemists have demonstrated abiotic generation of many simple organic molecules including amino acids, fatty acids, simple sugars, and nucleobases. There are multiple scenarios bridging prebiotic chemistry to early metabolic networks that occurred before the origins of life, also known as abiogenesis. In addition, there are hypotheses made on the evolution of biochemical pathways including the metabolism-first hypothesis, which theorizes how reaction networks dissipate free energy from which genetic molecules and proto-cell membranes later emerge. To determine the composition of key early metabolic networks, scientists have also used top-down approaches to study LUCA and modern metabolism. Autocatalytic prebiotic chemistries Autocatalytic reactions are reactions where the reaction product acts as a catalyst for its own formation. Many researchers that study proto-metabolism agree that early metabolic networks likely originated as a set of chemical reactions that form self-sustaining networks. This set of reactions is commonly referred to as an autocatalytic set. Some prebiotic chemistries focus on these autocatalytic reactions including the formose reaction, HCN oligomerization, and formamide chemistry. Formose reaction Discovered in 1861 by Aleksandr Butlerov, the formose reaction is a set of two reactions converting formaldehyde (CH2O) to a mixture of simple sugars. Formaldehyde is an intermediate in the oxidation of simple carbon molecules (e.g. methane) and was likely present in early Earth's atmosphere. The first reaction is the slow conversion of formaldehyde (C1 carbon) to glycolaldehyde (C2 carbon) and occurs through an unknown mechanism. The second reaction is the faster and autocatalytic formation of higher weight aldoses and ketoses. The kinetics of the formose reaction are often described as autocatalytic, as the alkaline reaction uses lowest molecular weight sugars as feedstocks or input molecules into the reaction. Self-organized autocatalytic networks, like the formose reaction, would allow for adaptation to changing prebiotic environmental conditions. As a proof-of-concept, Robinson and colleagues demonstrated how changing environmental conditions and catalyst availability can impact the resultant sugar products. In the past, many researchers have suggested the importance of this reaction for abiogenesis and the origins of metabolism because it can lead to ribose. Ribose is a building block of RNA and an important precursor in proto-metabolism. However, there are limitations for the formose reaction to be the chemical origin of sugars including the low chemoselectivity for ribose and high complexity of the final reaction mixture. In addition, a direct joining together of ribose, a nucleobase, and phosphate to make a ribonucleotide (the building block of RNA) is not currently chemically feasible. Alternative prebiotic mechanisms have been proposed including cyanosulfidic prebiotic chemistries. 
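The autocatalytic core of the formose reaction described above can be sketched in simplified form as follows (schematic only; the many branching aldol, isomerization, and Cannizzaro side reactions are omitted, and the grouping into four steps is an illustrative simplification):

```latex
% Schematic autocatalytic cycle of the formose reaction (side reactions omitted)
\begin{aligned}
2\,\mathrm{CH_2O} &\longrightarrow \text{glycolaldehyde} && \text{(slow initiation)}\\
\text{glycolaldehyde} + \mathrm{CH_2O} &\longrightarrow \text{glyceraldehyde} && \text{(aldol addition)}\\
\text{glyceraldehyde} + \mathrm{CH_2O} &\longrightarrow \text{C}_4\ \text{sugars} && \text{(chain growth)}\\
\text{C}_4\ \text{sugars} &\longrightarrow 2 \times \text{glycolaldehyde} && \text{(retro-aldol, regenerating the catalyst)}
\end{aligned}
```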
HCN oligomerization On Earth, hydrogen cyanide (HCN) is made in volcanoes, by lightning, and in reducing atmospheres like that of the Miller-Urey experiment. On the Hadean Earth, large impactor events and active hydrothermal processes likely contributed to widespread metal production and metal-based proto-metabolism. Hydrogen cyanide has also been detected in meteorites and in atmospheres in the outer solar system. HCN-derived polymers are the oligomer or hydrolysis products of HCN. These polymers can be synthesized from HCN or cyanide salts, often under alkaline conditions, but they have been observed in a wide range of experimental conditions. HCN readily reacts with itself to produce many HCN polymers and biologically relevant compounds like nucleobases, amino acids, and carboxylic acids. The diversity of products could point to a plausible proto-metabolic network of HCN oligomerization reactions. However, some groups point to low HCN concentrations on the early Earth and low chemoselectivity for key biologically relevant products, similar to the formose reaction. Others have shown that abundant HCN is produced after large impacts and that high specificity and yield can be achieved. Formamide chemistry Formamide (NH2CHO) is the simplest naturally occurring amide. Similar to HCN, formamide can form naturally. Formamide has specific physical and stability properties that make it a possible universal prebiotic precursor for early proto-metabolic networks. For example, it contains the four elements ubiquitous in life: C, H, O, and N. The presence of unique functional groups involving oxygen and nitrogen supports reaction chemistries that build key biomolecules like amino acids, sugars, nucleosides, and other key intermediates of prebiotic reactions (e.g. the citric acid cycle). In addition, early Earth geological features like hydrothermal pores might support formamide chemistry and meet the concentration requirements for the synthesis of key prebiotic biomolecules. Overall, formamide chemistry can provide the connections and substrates needed to support prebiotic biomolecule synthesis, including the formose reaction, Strecker synthesis, HCN oligomerization, and the Fischer-Tropsch process. In addition, formamide can be easily concentrated through evaporation, as it has a boiling point of 210 °C. Although formamide chemistry is highly versatile among one-carbon precursors, the connections between different biosynthetic pathways have yet to be directly explored experimentally. Experimental reconstruction Many research groups are actively attempting experimental reconstruction of the interactions between prebiotic reactions. One major consideration is the ability of these reactions to operate under the same environmental conditions. These one-pot syntheses would likely push the reactions towards specific subgroups of molecules. The key to building proto-metabolic scenarios involves coupling constructive and interconversion reactions. Constructive reactions use autocatalytic prebiotic chemistries to increase the structural complexity of the original molecule, while interconversion reactions connect different prebiotic chemistries by changing the functional groups appended to the original molecule. A functional group is a group of atoms that has similar properties whenever it appears in different molecules. These interconversion reactions and functional group transformations can lead to new prebiotic chemistries and precursor molecules.
Cyanosulfidic scenario Cyanosulfidic scenarios are mechanisms for proto-metabolism proposed by the Eschenmoser and Sutherland groups. Research from the Eschenmoser group suggested that interactions between HCN and aldehydes can catalyze the formation of diaminomaleonitrile (DAMN). Iterations of this cycle would generate multiple intermediate metabolites and key biomolecular precursors through functional group transformations by hydrolytic and redox processes. To expand upon this finding, the Sutherland group experimentally assessed the assembly of biomolecular building blocks from prebiotic intermediates and one-carbon feedstocks. They synthesized precursors of ribonucleotides, amino acids, and lipids from the reactants hydrogen cyanide, acetylene, acrylonitrile (a product of cyanide and acetylene), dihydroxyacetone (a stable triose isomer of glyceraldehyde), and phosphate. These reactions are driven by UV light and use hydrogen sulfide (H2S) as the primary reductant. As each of these synthesis reactions was tested independently and some reactions require periodic input of additional reactants, these biomolecular precursors were not strictly generated through the one-pot synthesis expected of early Earth environments. In the same work, these authors argue that flow chemistry, or the movement of reactants through water, could generate conditions favorable for the synthesis of these molecules. Glyoxylate scenario Eschenmoser also proposed a parallel scenario in which prebiotic reactions would be connected by glyoxylate, a simple α-ketoacid produced by HCN oligomerization and hydrolysis. In this work, Eschenmoser proposes potential schemes to generate both informational oligomers and other key autocatalytic reactions from plausible one-carbon sources (HCN, CO, CO2). The Krishnamurthy group at Scripps experimentally expanded on this theory. Under mild aqueous conditions, they demonstrated that the reaction of glyoxylate and pyruvate can produce a series of α-ketoacid intermediates constituting the reductive tricarboxylic acid (TCA) cycle. This reaction proceeded without metal or enzyme catalysts, as glyoxylate acted as both the carbon source and the reducing agent in the reaction. Similarly, the Moran group has reported that pyruvate and glyoxylate can react in warm iron-rich water to produce TCA intermediates and some amino acids. Their work has successfully reconstructed 9 out of 11 TCA intermediates and 5 universal metabolic precursors. Additional experimental analysis is needed to connect this scenario to modern metabolism.
Energy carrier molecules, which would have allowed energy to propagate through the metabolic networks, likely resembled modern energy carriers such as ATP and NADH. Both energy carriers are nucleotide-based molecules and likely originated early in metabolism. Metabolism-first hypothesis The metabolism-first hypothesis suggests that autocatalytic networks of metabolic reactions were the first forms of life. This is an alternative to the RNA world, which is a genes-first hypothesis. It was first proposed by Martynas Ycas in 1955. Much recent work in this area focuses on computational modeling of theoretical prebiotic networks. Metabolism-first proponents postulate that replication and genetic machinery could not arise without the accumulation of the molecules needed for replication. Simple connections between prebiotic synthesis reactions alone could form key organic molecules, which, once encapsulated by a membrane, would constitute the first cells. These reactions could be catalyzed by various inorganic molecules or ions and stabilized by solid surfaces. Molecular self-replicators and enzymes would emerge later, with these later metabolisms better resembling modern metabolism. One critique of the metabolism-first hypothesis of abiogenesis is that such chemical networks would also need to replicate themselves with a high degree of fidelity; if not, the chemical networks with greater fitness on the early Earth would not be preserved. There is limited experimental evidence for these theories, so additional exploration in this area is needed to determine the feasibility of a metabolism-first origin of life. References Wikipedia Student Program Origin of life
Proto-metabolism
[ "Biology" ]
2,387
[ "Biological hypotheses", "Origin of life" ]
78,311,114
https://en.wikipedia.org/wiki/Green%20photocatalyst
Green photocatalysts are photocatalysts derived from environmentally friendly sources. They are synthesized from natural, renewable, and biological resources, such as plant extracts, biomass, or microorganisms, minimizing the use of toxic chemicals and reducing the environmental impact associated with conventional photocatalyst production. A photocatalyst is a material that absorbs light energy to initiate or accelerate a chemical reaction without being consumed in the process. They are semiconducting materials which generate electron-hole pairs upon light irradiation. These photogenerated charge carriers then migrate to the surface of the photocatalyst and interact with adsorbed species, triggering redox reactions. They are promising candidates for a wide range of applications, including the degradation of organic pollutants in wastewater, the reduction of harmful gases, and the production of hydrogen or solar fuels. Many methods exist to produce photocatalysts via both conventional and greener approaches, including hydrothermal synthesis and sol-gel methods, the difference lying in the material sources used. Green precursor materials for photocatalysts Green sources A green source for photocatalyst synthesis refers to a material that is renewable, biodegradable, and has minimal environmental impact during its extraction and processing. This approach aligns with the principles of green chemistry, which aim to reduce or eliminate the use and generation of hazardous substances in chemical processes. Green sources are abundant, readily available, and often considered waste materials, thus offering a sustainable and cost-effective alternative to conventional photocatalyst precursors. Plant-based precursors Plant extracts and agricultural waste products have emerged as promising green sources for photocatalyst production, offering attractive alternatives to conventional precursors due to their abundance, biodegradability, and cost-effectiveness. Extracts from various plant parts, such as leaves, roots, and fruits, contain phyto-chemicals that can act as reducing and stabilizing agents in nanoparticle synthesis, contributing to the formation of desired photocatalyst morphologies. Meanwhile, waste materials from agricultural processes, such as rice husks and sugarcane bagasse, are rich in cellulose and lignin. These components can be used as precursors for carbon-based photocatalysts or as templates for the synthesis of porous nanomaterials. Notes: NPs: Nanoparticles CSS: Core-Shell Structure The table summarizes various plant-based nanoparticles and nanocatalysts, including their synthesis methods, particle sizes, shapes, and corresponding references. Bio-waste precursors Utilizing bio-waste, such as food waste and animal waste, for green photocatalyst synthesis offers a dual benefit of waste management and material production. These waste streams are rich in organic matter, which can be converted into valuable carbon-based photocatalysts through various thermochemical processes. Notes/Explanations: NPs: Nanoparticles nHAp/ZnO/GO: Nano-hydroxyapatite/Zinc Oxide/Graphene Oxide composite CaO@NiO: Calcium Oxide coated with Nickel Oxide γ-Fe2O3/Si: Gamma-Iron(III) Oxide supported on Silicon Fe2O3-SnO2: Iron Oxide-Tin Oxide composite Marine macroalgae/seaweed precursors Seaweed is a highly promising green source for photocatalyst synthesis due to its rapid growth rates and minimal environmental requirements. 
It does not require freshwater or fertilizers for cultivation, making it a sustainable and environmentally friendly option. Various seaweed species have been explored for their ability to produce nanoparticles and to act as templates for the synthesis of photocatalytic materials. Notes/Explanations: NPs: Nanoparticles Dispersion and stability of green sources Notes/Explanations: NPs: Nanoparticles Zeta Potential: A measure of the surface charge of nanoparticles, which influences their stability and dispersion. PDI: Polydispersity Index, a measure of the size distribution of nanoparticles. Common green precursor materials for photocatalysts Photocatalyst synthesis methods Hydrothermal synthesis Hydrothermal synthesis is a green method that utilizes water under high pressure and temperature to facilitate chemical reactions. It often avoids the need for organic solvents and offers control over crystal size and morphology, making it a versatile approach for producing various photocatalyst materials. Microwave-assisted synthesis Microwave-assisted synthesis employs microwaves to provide rapid and uniform heating, leading to faster reaction rates and potential for significant energy savings compared to conventional heating methods. This technique is increasingly favored in green synthesis due to its reduced energy consumption and potential for shorter reaction times. Sol-gel method The sol-gel method involves the formation of a gel from a solution, followed by its conversion into a solid material through controlled drying and calcination. It is a versatile technique widely used in the production of various photocatalyst materials, offering advantages in terms of controlling material composition and morphology. Comparing photocatalyst synthesis methods The table below provides a comparison of the advantages, potential limitations, and suitability of different green synthesis methods: Applications of photocatalysts Wastewater treatment Degradation of organic pollutants Green photocatalysts effectively break down organic contaminants in wastewater into less harmful products through a process known as photocatalytic oxidation. Upon light irradiation, the photocatalyst generates reactive oxygen species (ROS), such as hydroxyl radicals (•OH) and superoxide radicals (O2•-), which attack and decompose organic pollutants. Green photocatalysts synthesized from plant extracts or agricultural waste have shown promising results in degrading various dye molecules, including methylene blue, rhodamine B, and methyl orange. Green photocatalysts have demonstrated the ability to remove pharmaceutical contaminants such as carbamazepine, ibuprofen, and tetracycline from wastewater. Additionally, green photocatalysts have been successfully employed in the degradation of pesticides such as alachlor. Notes/Explanations: NPs: Nanoparticles CoFe2O4: Cobalt Ferrite Removal of heavy metals In addition to degrading organic pollutants, green photocatalysts can also contribute to the removal of toxic heavy metals from wastewater. The large surface area and functional groups present on green photocatalysts, particularly those derived from carbon-based sources like bio-waste, can effectively adsorb heavy metal ions from the water. Furthermore, photogenerated electrons from the green photocatalysts can reduce heavy metal ions to their less toxic elemental forms, which can then be more easily removed from the wastewater. 
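The photocatalytic oxidation steps described above can be summarized by the usual textbook reaction scheme. This is a generic sketch rather than data from any specific study cited here; the exact species depend on the particular photocatalyst and pollutant:
\[ \mathrm{photocatalyst} + h\nu \longrightarrow e^{-}_{\mathrm{CB}} + h^{+}_{\mathrm{VB}} \]
\[ \mathrm{O_2} + e^{-}_{\mathrm{CB}} \longrightarrow \mathrm{O_2^{\bullet -}}, \qquad \mathrm{H_2O} + h^{+}_{\mathrm{VB}} \longrightarrow {}^{\bullet}\mathrm{OH} + \mathrm{H^{+}} \]
\[ {}^{\bullet}\mathrm{OH} \; / \; \mathrm{O_2^{\bullet -}} + \text{organic pollutant} \longrightarrow \text{intermediates} \longrightarrow \mathrm{CO_2} + \mathrm{H_2O} \]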
Antibacterial activity Mechanisms of action Green photocatalysts exhibit potent antibacterial properties due to their ability to generate ROS upon light irradiation. These ROS, including hydroxyl radicals and superoxide radicals, can damage bacterial cell walls and membranes, leading to cell death. Examples and applications Several green photocatalysts have shown promising antibacterial activity. ZnO nanoparticles synthesized using plant extracts have demonstrated strong antibacterial activity against a wide range of bacteria, including E. coli and Staphylococcus aureus. TiO2-based photocatalysts, particularly those doped with silver or copper, exhibit enhanced antibacterial properties under visible light irradiation, making them suitable for disinfection applications. Potential applications of these materials include water disinfection and the creation of antibacterial surfaces. Green photocatalysts can be used to disinfect water by killing harmful bacteria, offering a sustainable alternative to conventional disinfection methods. Incorporating them into coatings or surfaces can create self-sterilizing materials, reducing the risk of bacterial contamination in healthcare settings and other environments. Notes/Explanations: NPs: Nanoparticles Ag/Ag2O: Silver/Silver Oxide Composite Toxicity assessments Importance of toxicity evaluation Despite their sustainable origins, a thorough evaluation of the potential toxicity of green photocatalysts is essential to ensure their safe and responsible application in various settings. Even though they are synthesized from environmentally benign materials, their unique properties and nanoscale dimensions can potentially pose risks to human health and the environment. It is crucial to assess the potential for adverse effects before widespread implementation of these materials in water treatment, air purification, or biomedical applications. Methods for toxicity assessment Various methods are employed to assess the potential toxicity of green photocatalysts. Eco-toxicity tests expose organisms such as algae, daphnia, or fish to varying concentrations of the photocatalyst to evaluate their effects on growth, reproduction, or mortality. These tests provide valuable insights into the potential impact of green photocatalysts on aquatic ecosystems. Cytotoxicity assays are conducted in laboratory settings using human cell lines to evaluate the potential toxicity of green photocatalysts to human cells. These assays help determine the potential for adverse effects on human health upon exposure to these materials. Notes/Explanations: NPs: Nanoparticles MTT Assay: 3-(4,5-Dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide assay, a colorimetric method to assess cell viability. See also References Catalysis Sustainable technologies Photochemistry
Green photocatalyst
[ "Chemistry" ]
1,931
[ "Catalysis", "Chemical kinetics", "nan" ]
78,313,123
https://en.wikipedia.org/wiki/Topological%20functor
In category theory and general topology, a topological functor is one which has similar properties to the forgetful functor from the category of topological spaces. The domain of a topological functor admits constructions similar to the initial topology (and equivalently the final topology) of a family of functions. The notion of topological functors generalizes (and strengthens) that of fibered categories, for which one considers a single morphism instead of a family. Definition Source and sink A source in a category E consists of the following data: an object A, a (possibly proper) class of objects (A_i)_{i ∈ I} and a class of morphisms f_i : A → A_i. Dually, a sink in E consists of an object A, a class of objects (A_i)_{i ∈ I} and a class of morphisms f_i : A_i → A. In particular, a source is just an object if I is empty, and just a morphism if I is a set with a single element. Similarly for a sink. Initial source and final sink Let (f_i : A → A_i)_{i ∈ I} be a source in a category E and let U : E → B be a functor. The source is said to be a U-initial source if it satisfies the following universal property. For every object B of E, every morphism g : U(B) → U(A) in B and every family of morphisms h_i : B → A_i in E such that U(h_i) = U(f_i) ∘ g for each i ∈ I, there exists a unique E-morphism ḡ : B → A such that U(ḡ) = g and h_i = f_i ∘ ḡ. Similarly one defines the dual notion of U-final sink. When I is a set with a single element, an initial source is called a Cartesian morphism. Lift Let E, B be two categories. Let U : E → B be a functor. A source (g_i : B → B_i)_{i ∈ I} in B is a U-structured source if for each i we have B_i = U(A_i) for some object A_i of E. One similarly defines a U-structured sink. A lift of a U-structured source (g_i : B → U(A_i))_{i ∈ I} is a source (f_i : A → A_i)_{i ∈ I} in E such that U(A) = B and U(f_i) = g_i for each i ∈ I. A lift of a U-structured sink is similarly defined. Since initial and final lifts are defined via universal properties, they are unique up to a unique isomorphism, if they exist. If a U-structured source (g_i : B → U(A_i))_{i ∈ I} has an initial lift (f_i : A → A_i)_{i ∈ I}, we say that A is an initial U-structure on B with respect to the family (g_i)_{i ∈ I}. Similarly for a final U-structure with respect to a U-structured sink. Topological functor Let U : E → B be a functor. Then the following two conditions are equivalent. Every U-structured source has an initial lift. That is, an initial structure always exists. Every U-structured sink has a final lift. That is, a final structure always exists. A functor satisfying this condition is called a topological functor. One can define topological functors in a different way, using the theory of enriched categories. A concrete category is called a topological (concrete) category if the forgetful functor is topological. (A topological category can also mean an enriched category enriched over the category of topological spaces.) Some require a topological category to satisfy two additional conditions. Constant functions in B lift to E-morphisms. Fibers (the classes of objects of E sent by U to a fixed object of B) are small (they are sets and not proper classes). Properties Every topological functor is faithful. Let P be one of the following four properties of categories: complete category, cocomplete category, well-powered category, co-well-powered category. If U : E → B is topological and B has property P, then E also has property P. Let B be a category. Then the topological functors over B are unique up to natural isomorphism. Examples An example of a topological category is the category of all topological spaces with continuous maps, where one uses the standard forgetful functor. References Category theory General topology
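To make the lifting construction concrete, the forgetful functor U : Top → Set from the article's first sentence can be written out in symbols. The following is a short sketch using standard notation, not additional material from the cited references. Given a U-structured source (g_i : X → U(Y_i))_{i ∈ I}, where X is a set and each Y_i is a topological space, the initial lift equips X with the initial topology
\[ \tau_{\mathrm{init}} \;=\; \text{the topology on } X \text{ generated by } \{\, g_i^{-1}(V) : i \in I,\ V \subseteq Y_i \text{ open} \,\}, \]
and the universal property of the initial lift becomes the familiar statement
\[ h : Z \to (X, \tau_{\mathrm{init}}) \text{ is continuous} \iff g_i \circ h \text{ is continuous for every } i \in I, \]
for every topological space Z. Dually, the final lift of a U-structured sink equips the underlying set with the final topology.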
Topological functor
[ "Mathematics" ]
667
[ "General topology", "Functions and mappings", "Mathematical structures", "Mathematical objects", "Fields of abstract algebra", "Topology", "Mathematical relations", "Category theory" ]
78,316,349
https://en.wikipedia.org/wiki/Algorithmic%20party%20platforms%20in%20the%20United%20States
Algorithmic party platforms are a recent development in political campaigning where artificial intelligence (AI) and machine learning are used to shape and adjust party messaging dynamically. Unlike traditional platforms that are drafted well before an election, these platforms adapt based on real-time data such as polling results, voter sentiment, and trends on social media. This allows campaigns to remain responsive to emerging issues throughout the election cycle. These platforms rely on predictive analytics to segment voters into smaller, highly specific groups. AI analyzes demographic data, behavioral patterns, and online activities to identify which issues resonate most with each group. Campaigns then tailor their messages accordingly, ensuring that different voter segments receive targeted communication. This approach optimizes resources and enhances voter engagement by focusing on relevant issues. During the 2024 U.S. election, campaigns utilized these tools to adjust messaging on-the-fly. For example, the AI firm Resonate identified a voter segment labeled "Cyber Crusaders," consisting of socially conservative yet fiscally liberal individuals. Campaigns used this insight to quickly focus outreach and policy discussions around the concerns of this group, demonstrating how AI-driven platforms can influence strategy as events unfold. Background and relevance in modern campaigns The integration of artificial intelligence (AI) into political campaigns has introduced a significant shift in how party platforms are shaped and communicated. Traditionally, platforms were drafted months before elections and remained static throughout the campaign. However, algorithmic platforms now rely on continuous data streams to adjust messaging and policy priorities in real time. This allows campaigns to adapt to emerging voter concerns, ensuring their strategies remain relevant throughout the election cycle. AI systems analyze large volumes of data, including polling results, social media interactions, and voter behavior patterns. Predictive analytics tools segment voters into specific micro-groups based on demographic and behavioral data. Campaigns can then customize their messaging to align with the priorities of these smaller segments, adjusting their stances as trends develop during the campaign. This level of segmentation and customization ensures that outreach resonates with voters and maximizes engagement. Beyond messaging, AI also optimizes resource allocation by helping campaigns target specific efforts more effectively. With predictive analytics, campaigns can identify which areas or demographics are most likely to benefit from increased outreach, such as canvassing or targeted advertisements. AI tools monitor shifts in voter sentiment in real time, allowing campaigns to quickly pivot their strategies in response to developing events and voter priorities. This capability ensures that campaign resources are used efficiently, minimizing waste while maximizing impact throughout the election cycle. AI's use extends beyond national campaigns, with local and grassroots campaigns also leveraging these technologies to compete more effectively. By automating communication processes and generating customized voter outreach, smaller campaigns can now utilize AI to a degree previously available only to well-funded candidates. However, this growing reliance on AI raises concerns around transparency and the ethical implications of automated content creation, such as AI-generated ads and responses. 
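The micro-segmentation described above is, at its core, unsupervised clustering of voter feature vectors. The following minimal Python sketch illustrates the idea on synthetic data using scikit-learn's KMeans; the feature names, the number of segments, and the data itself are illustrative assumptions and do not describe any real campaign's tooling or the commercial systems mentioned in this article.

import numpy as np
from sklearn.cluster import KMeans

# Synthetic voter features: age, income decile, daily social media hours,
# and a score for one tracked issue (all columns are illustrative only).
rng = np.random.default_rng(0)
voters = np.column_stack([
    rng.integers(18, 90, size=1000),     # age
    rng.integers(1, 11, size=1000),      # income decile
    rng.exponential(2.0, size=1000),     # hours per day on social media
    rng.normal(0.0, 1.0, size=1000),     # stance on a tracked issue
])

# Standardize so no single feature dominates the distance metric.
voters_std = (voters - voters.mean(axis=0)) / voters.std(axis=0)

# Cluster voters into a small number of hypothetical micro-segments.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(voters_std)

# Segment sizes and centroids would then inform which message is tested
# against which group of voters.
for label in range(5):
    size = int((kmeans.labels_ == label).sum())
    print(f"segment {label}: {size} voters, centroid {kmeans.cluster_centers_[label].round(2)}")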
AI technology, which was previously accessible only to large, well-funded campaigns, has become increasingly available to smaller, local campaigns. With declining costs and easier access, grassroots campaigns now have the ability to implement predictive analytics, automate communications, and generate targeted ads. This democratization of technology allows smaller campaigns to compete more effectively by dynamically adjusting to the concerns of their constituents. However, the growing use of AI in political campaigns raises concerns about transparency and the potential manipulation of voters. The ability to adjust messaging in real time introduces ethical questions about the authenticity of platforms and voter trust. Additionally, the use of synthetic media, including AI-generated ads and deepfakes, presents challenges in maintaining accountability and preventing disinformation in political discourse. Impact on political platforms Artificial intelligence (AI) has become instrumental in enabling political campaigns to adapt their platforms in real time, responding swiftly to evolving voter sentiments and emerging issues. By analyzing extensive datasets—including polling results, social media activity, and demographic information—AI systems provide campaigns with actionable insights that inform dynamic strategy adjustments. A study by Sanders, Ulinich, and Schneier (2023) demonstrated the potential of AI-based political issue polling, where AI chatbots simulated public opinion on various policy issues. The findings indicated that AI could effectively anticipate both the mean level and distribution of public opinion, particularly in ideological breakdowns, with correlations typically exceeding 85%. This suggests that AI can serve as a valuable tool for campaigns to gauge voter sentiment accurately and promptly. Moreover, AI facilitates the segmentation of voters into micro-groups based on demographic and behavioral data, allowing for tailored messaging that resonates with specific audiences. This targeted approach enhances voter engagement and optimizes resource allocation, as campaigns can focus their efforts on demographics most receptive to their messages. The dynamic nature of AI-driven platforms ensures that campaign strategies remain relevant and responsive throughout the election cycle. However, the integration of AI in political platforms also raises ethical and transparency concerns, particularly regarding the authenticity of dynamically adjusted messaging and the potential for voter manipulation. Addressing these challenges is crucial to maintaining voter trust and the integrity of the democratic process. In summary, AI significantly shapes political platforms in real time by providing campaigns with the tools to analyze voter sentiment, segment audiences, and adjust strategies dynamically. While offering substantial benefits in responsiveness and engagement, it is imperative to navigate the accompanying ethical considerations to ensure the responsible use of AI in political campaigning. Ethical and transparency challenges While AI-driven platforms offer significant advantages, they also introduce ethical and transparency challenges. One primary concern is the potential for AI to manipulate voter perception. The ability to adjust messaging dynamically raises questions about the authenticity of political platforms, as voters may feel deceived if they perceive platforms as opportunistic or insincere. 
The use of synthetic media, including AI-generated advertisements and deepfakes, exacerbates these challenges. These tools have the potential to blur the line between reality and fiction, making it difficult for voters to discern genuine content from fabricated material. This has led to concerns about misinformation, voter manipulation, and the erosion of trust in democratic processes. Additionally, the lack of transparency in how AI systems operate poses significant risks. Many algorithms function as "black boxes," with their decision-making processes opaque even to their developers. This opacity makes it challenging to ensure accountability, particularly when AI-generated strategies lead to controversial or unintended outcomes. Efforts to address these challenges include calls for greater transparency in AI usage within campaigns. Policymakers and advocacy groups have proposed regulations requiring campaigns to disclose when AI is used in content creation or voter outreach. These measures aim to balance the benefits of AI with the need for ethical integrity and accountability. Benefits of AI-driven platforms Despite the challenges, AI-driven platforms offer numerous benefits that can enhance the democratic process. By tailoring messaging to specific voter concerns, AI helps campaigns address diverse needs more effectively. This targeted approach ensures that underrepresented groups receive attention, fostering a more inclusive political discourse. AI also democratizes access to advanced campaign tools. Smaller campaigns, which previously lacked the resources to compete with well-funded opponents, can now utilize AI to level the playing field. Predictive analytics, automated communications, and targeted advertisements empower grassroots movements to amplify their voices and engage constituents more effectively. Moreover, AI's ability to process vast amounts of data provides valuable insights into voter sentiment. By identifying trends and patterns, campaigns can address pressing issues proactively, fostering a more informed and responsive political environment. These capabilities also extend to crisis management, as AI enables campaigns to adjust swiftly in response to unforeseen events, ensuring stability and resilience. References Political campaign technology Artificial intelligence Machine learning
Algorithmic party platforms in the United States
[ "Engineering" ]
1,572
[ "Artificial intelligence engineering", "Machine learning" ]
66,643,309
https://en.wikipedia.org/wiki/Miyaura%20borylation
Miyaura borylation, also known as the Miyaura borylation reaction, is a named reaction in organic chemistry that allows for the generation of boronates from vinyl or aryl halides through cross-coupling with bis(pinacolato)diboron under basic conditions with a catalyst such as PdCl2(dppf). The resulting borylated products can be used as coupling partners for the Suzuki reaction. Scope The Miyaura borylation has been shown to work for: Alkyl halides, aryl halides, aryl halides using tetrahydroxydiboron, aryl halides using bis-boronic acid, aryl triflates, aryl mesylates, vinyl halides, vinyl halides of α,β-unsaturated carbonyl compounds, and vinyl triflates. See also Chan-Lam coupling Heck reaction Hiyama coupling Kumada coupling Negishi coupling Petasis reaction Sonogashira coupling Stille reaction Suzuki reaction List of organic reactions References Name reactions
Miyaura borylation
[ "Chemistry" ]
222
[ "Name reactions", "Organic redox reactions", "Organic reactions" ]
66,645,249
https://en.wikipedia.org/wiki/Atom%20localization
Atom localization deals with estimating the position of an atom with increasing precision using techniques of quantum optics. This field finds its origins in the thought experiment by Werner Heisenberg called Heisenberg's microscope, which is commonly used as an illustration of Heisenberg's uncertainty relation in quantum mechanics textbooks. The techniques have matured enough to offer atom localization along all three spatial dimensions in the subwavelength domain. Atom localization techniques have been applied to other fields requiring precise control or measurement of the position of atom-like entities, such as microscopy, nanolithography, optical trapping of atoms, optical lattices, and atom optics. Atom localization is based on employing atomic coherence to determine the position of the atom to a precision smaller than the wavelength of the light being used. This seemingly surpasses the Rayleigh limit of resolution and opens up possibilities of super-resolution for a variety of fields. Subwavelength atom localization: surpassing the Rayleigh limit Because the Rayleigh limit of resolution and the Heisenberg uncertainty relation are intricately related in the discussion of Heisenberg's microscope, it may appear that surpassing the Rayleigh limit would lead to a violation of the Heisenberg uncertainty limit. It can be shown mathematically that the spatial resolution can be enhanced to any amount without violating Heisenberg's uncertainty relation. The price to be paid is the momentum kick received by the particle whose position is being measured. This is depicted in the figure on the right. One dimensional atom localization Localization of an atom in a direction transverse to its direction of motion can be achieved using techniques such as quantum interference effects, coherent population trapping, modification of atomic spectra (for example through Autler–Townes spectroscopy), resonance fluorescence, Ramsey interferometry, and the monitoring of probe susceptibility through electromagnetically induced transparency, when the atom is interacting with at least one spatially-dependent standing wave field. Applications The study of atom localization has offered practical applications to the area of nanolithography at the Heisenberg limit, along with its fundamental importance to the areas of atom optics and laser cooling and trapping of neutral atoms. Extending the atom localization schemes to two dimensions, optical lattices with tighter-than-usual confinement at each lattice point can be obtained. Such strongly confined lattice structures could be useful to study several predictions of the Bloch theory of solids, and Mott transitions, in much cleaner systems compared to conventional solids. Such tighter trapping potentials could have further applications to the area of quantum information, specifically for the development of deterministic sources of single atoms and single-atom quantum registers. Techniques of atom localization are also important to subwavelength microscopy and imaging and the determination of the center-of-mass wavefunction of atom-like entities. Footnotes Atomic physics
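For reference, the two standard relations invoked above can be written explicitly. These are textbook forms, and the numerical prefactor in the Rayleigh criterion depends on the exact definition used:
\[ \Delta x_{\mathrm{Rayleigh}} \approx \frac{0.61\,\lambda}{\mathrm{NA}}, \qquad \Delta x\,\Delta p \ge \frac{\hbar}{2}. \]
A position measurement with precision well below the optical wavelength is therefore compatible with the uncertainty relation, provided the atom acquires a correspondingly large momentum spread, of order \( \hbar/(2\,\Delta x) \), which is the momentum kick mentioned above.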
Atom localization
[ "Physics", "Chemistry" ]
583
[ "Quantum mechanics", "Atomic physics", " molecular", "Atomic", " and optical physics" ]
63,860,914
https://en.wikipedia.org/wiki/Stahl%20oxidation
The Stahl oxidation is a copper-catalyzed aerobic oxidation of primary and secondary alcohols to aldehydes and ketones. Known for its high selectivity and mild reaction conditions, the Stahl oxidation offers several advantages over classical alcohol oxidations. Key features of the Stahl oxidation are the use of a 2,2'-bipyridyl-ligated copper(I) species in the presence of a nitroxyl radical and N-methylimidazole in a polar aprotic solvent, most commonly acetonitrile or acetone. Copper(I) sources can vary, though sources with non-coordinating anions like triflate, tetrafluoroborate, and hexafluorophosphate are preferred, with copper(I) bromide and copper(I) iodide salts demonstrating utility in select applications. Frequently, tetrakis(acetonitrile)copper(I) salts are used. For most applications, reactions can be run at room temperature, and ambient air contains a sufficiently high oxygen concentration to be used as the terminal oxidant. Compared to chromium-, DMSO-, or periodinane-mediated oxidations, these features make the reaction safe, environmentally friendly, practical, and highly economical. In general, the Stahl oxidation is selective for oxidizing primary alcohols over secondary alcohols (both aliphatic and benzylic), and favors the oxidation of primary benzylic alcohols over primary aliphatic alcohols when TEMPO is used as the nitroxyl radical. This is in contrast to the Oppenauer oxidation, which favors the oxidation of secondary alcohols over primary alcohols, and to several other specialty oxidations. Over-oxidation of primary alcohols to carboxylic acids is rare, though lactones can form in certain diol-containing substrates. The use of less hindered nitroxyl radicals like ABNO or AZADO allows for the oxidation of both primary and secondary alcohols. History In 2011, Jessica Hoover and Shannon Stahl disclosed improved conditions for selective oxidation of primary alcohols to aldehydes using a (bpy)copper(I)/TEMPO system. While several catalytic aerobic oxidation systems were known at the time, many utilized palladium, which can be prohibitive because of its expense and its cross-reactivity with alkene-bearing substrates. Aerobic oxidative catalysis of alcohols by copper, though known since at least 1984, was generally lower performing, requiring some combination of elevated reaction temperatures, higher catalyst loading, handling of pure oxygen, and biphasic or otherwise uncommon solvent systems. Following the success of this initial disclosure, Hoover and Stahl went on to publish a further simplified protocol for rapid benzylic alcohol oxidation with Nicolas Hill, director of undergraduate organic chemistry laboratories at the University of Wisconsin–Madison. Utilizing a less expensive solvent and copper source, Hill, Hoover, and Stahl demonstrated that higher catalyst loadings could be economically achieved. In doing so, the oxidation of alcohols could be accelerated for use as a practical educational tool in undergraduate labs. Furthermore, reaction completion is typically indicated by a change in solution color from red/brown to green, resulting from a change in the copper species' resting state. This is unique for benzylic and other activated alcohols, as the rate-limiting step for these substrates is catalyst re-oxidation, which differs from aliphatic alcohols, where the rate-limiting step is C-H cleavage. The Stahl oxidation is a component of the undergraduate organic chemistry laboratory curriculum at UW-Madison and the University of Utah. 
In 2013, the mechanism for the copper(I)/TEMPO oxidation of alcohols was elucidated, and it was found the use of less hindered nitroxyl radical sources allowed for the oxidation of secondary alcohols. Modifications Hoover–Stahl oxidation The Hoover–Stahl oxidation explicitly indicates the earliest of the Stahl oxidation conditions allowing for the selective oxidation of primary alcohols. The system utilizes 2,2'-bipyridine (bpy), a copper(I) source (typically tetrakis(acetonitrile) copper(I) triflate, tetrafluoroborate, or hexafluorophosphate), TEMPO, and N-methylimidazole. The reaction is conducted in acetonitrile at room temperature under an atmosphere or air. Catalyst loadings are typically around 5 mol %, with N-methylimidazole being used at 10 mol %. The reaction is selective for oxidation of primary alcohols to aldehydes and generally does not oxidize secondary alcohols. Solutions for the Hoover–Stahl oxidation are commercially available from Millipore-Sigma, though the catalyst can be easily prepared in situ from common laboratory reagents. Steves–Stahl oxidation The Steves–Stahl oxidation indicates the use of a less hindered nitroxyl radical in the Stahl oxidation, allowing for the oxidation of secondary alcohols in addition to primary alcohols. The reaction is conducted in acetonitrile at room temperature under an atmosphere of air, or less commonly, under an atmosphere of oxygen. Typically, the nitroxyl radical used in the Steves–Stahl is 9-Azabicyclo[3.3.1]nonane N-Oxyl (ABNO) and is used in conjunction with a more strongly electron-donating 2,2'-bipyridyl ligand compared to bpy, like 4,4'-dimethoxy-2,2'-bipyridine, as this is shown to accelerate alcohol oxidation. Due to the comparatively high price and reactivity of ABNO, common practice is to use it sparingly, oftentimes at catalytic loading of 1 mol % or less. Solutions for the Steves–Stahl oxidation are commercially available through Millipore-Sigma, though the mixture can be easily prepared in situ. Due to the high reagent cost associated with the Steves–Stahl oxidation, it is generally only employed for oxidation of secondary alcohols or after the Hoover–Stahl oxidation has proved fruitless. Several improved methods for the scalable preparation of ABNO have been recently published. Xie–Stahl oxidative lactonization The Xie–Stahl oxidative lactonization is a lactonization reaction which generally employs Steves–Stahl conditions for the oxidative cyclization of diols. The Xie–Stahl reaction lends itself toward selective formation of γ-, δ-, and ε-lactones, forming the carbonyl at the less-hindered primary alcohol. In some instances, higher selectivity can be afforded through the use of 1 mol % TEMPO in place of ABNO. Zultanski–Zhao–Stahl oxidative amide coupling The Zultanski–Zhao–Stahl oxidative amide coupling is a reaction between a primary alcohol and an amine to form an amide. In the Zultanski–Zhao–Stahl reaction, a primary alcohol is oxidized to an aldehyde which, in the presence of an amine, reversibly forms a hemiaminal which is then irreversibly oxidized to the amide by the catalyst. The reaction is performed under an atmosphere of oxygen in the presence of 3Å molecular sieves using relatively high ABNO loading of 3 mol %. Optimal reaction conditions are substrate dependent, requiring specific copper(I) sources, ligands, and solvents depending on the structure of the starting alcohol and amines. References External links Stahl Research Group Hoover Research Group Organic oxidation reactions
Stahl oxidation
[ "Chemistry" ]
1,602
[ "Organic oxidation reactions", "Organic redox reactions", "Organic reactions" ]
63,862,751
https://en.wikipedia.org/wiki/Erbium%28III%29%20fluoride
Erbium(III) fluoride is the fluoride of erbium, a rare earth metal, with the chemical formula ErF3. It can be used to make infrared light-transmitting materials and up-converting luminescent materials. Production Erbium(III) fluoride can be produced by reacting erbium(III) nitrate and ammonium fluoride: Er(NO3)3 + 3 NH4F → 3 NH4NO3 + ErF3 References Further reading Erbium compounds Fluorides Lanthanide halides
Erbium(III) fluoride
[ "Chemistry" ]
119
[ "Fluorides", "Salts" ]
63,863,048
https://en.wikipedia.org/wiki/Proximity%20labeling
Enzyme-catalyzed proximity labeling (PL), also known as proximity-based labeling, is a laboratory technique that labels biomolecules, usually proteins or RNA, proximal to a protein of interest. By creating a gene fusion in a living cell between the protein of interest and an engineered labeling enzyme, biomolecules spatially proximal to the protein of interest can then be selectively marked with biotin for pulldown and analysis. Proximity labeling has been used for identifying the components of novel cellular structures and for determining protein-protein interaction partners, among other applications. History Before the development of proximity labeling, determination of protein proximity in cells relied on studying protein-protein interactions through methods such as affinity purification-mass spectrometry and proximity ligation assays. DamID is a method developed in 2000 by Steven Henikoff for identifying parts of the genome proximal to a chromatin protein of interest. DamID relies on a DNA methyltransferase fusion to the chromatin protein to nonnaturally methylate DNA, which can then be subsequently sequenced to reveal genome methylation sites near the protein. Researchers were guided by the fusion protein strategy of DamID to create a method for site-specific labeling of protein targets, culminating in the creation of the biotin protein labelling-based BioID in 2012. Alice Ting and the Ting lab at Stanford University have engineered several proteins that demonstrate improvements in biotin-based proximity labeling efficacy and speed. Principles Proximity labeling relies on a labeling enzyme that can biotinylate nearby biomolecules promiscuously. Biotin labeling can be achieved through several different methods, depending on the species of labeling enzyme. BioID, also known as BirA*, is a mutant E. coli biotin ligase that catalyzes the activation of biotin by ATP. The activated biotin is short-lived and thus can only diffuse to a region proximal to BioID. Labeling is achieved when the activated biotin reacts with nearby amines, such as the lysine sidechain amines found in proteins. TurboID is a biotin ligase engineered via yeast surface display directed evolution. TurboID enables ~10 minute labeling times instead of the ~18 hour labeling times required by BioID. APEX is an ascorbate peroxidase derivative reliant on hydrogen peroxide for catalyzing the oxidation of biotin-tyramide, also known as biotin-phenol, to a short-lived and reactive biotin-phenol free radical. Labeling is achieved when this intermediate reacts with various functional groups of nearby biomolecules. APEX can also be used for local deposition of diaminobenzidine, a precursor for an electron microscopy stain. APEX2 is a derivative of APEX engineered via yeast surface display directed evolution. APEX2 shows improved labeling efficiency and cellular expression levels. To label proteins nearby a protein of interest, a typical proximity labeling experiment begins by cellular expression of an APEX2 fusion to the protein of interest, which localizes to the protein of interest's native environment. Cells are next incubated with biotin-phenol, then briefly with hydrogen peroxide, initiating biotin-phenol free radical generation and labeling. To minimize cellular damage, the reaction is then quenched using an antioxidant buffer. Cells are lysed and the labeled proteins are pulled down with streptavidin beads. 
The proteins are digested with trypsin, and finally the resulting peptidic fragments are analyzed using shotgun proteomics methods such as LC-MS/MS or SPS-MS3. If instead a protein fusion is not genetically accessible (such as in human tissue samples) but an antibody for the protein of interest is known, proximity labeling can still be enabled by fusing a labeling enzyme with the antibody, then incubating the fusion with the sample. Applications Proximity labeling methods have been used to study the proteomes of biological structures that are otherwise difficult to isolate purely and completely, such as cilia, mitochondria, postsynaptic clefts, p-bodies, stress granules, and lipid droplets. Fusion of APEX2 with G-protein coupled receptors (GPCRs) allows for both tracking GPCR signaling at a 20-second temporal resolution and also identification of unknown GPCR-linked proteins. Proximity labeling has also been used for transcriptomics and interactomics. In 2019, Alice Ting and the Ting lab have used APEX to identify RNA localized to specific cellular compartments. In 2019, BioID has been tethered to the beta-actin mRNA transcript to study its localization dynamics. Proximity labeling has also been used to find interaction partners of heterodimeric protein phosphatases, of the miRISC (microRNA-induced silencing complex) protein Ago2, and of ribonucleoproteins. Recent developments TurboID-based proximity labeling has been used to identify regulators of a receptor involved in the innate immune response, a NOD-like receptor. BioID-based proximity labeling has been used to identify the molecular composition of breast cancer cell invadopodia, which are important for metastasis. Biotin-based proximity labeling studies demonstrate increased protein tagging of intrinsically disordered regions, suggesting that biotin-based proximity labeling can be used to study the roles of IDRs. A photosensitizer nucleus-targeted small molecule has also been developed for photoactivatable proximity labeling. Photocatalytic-based Proximity Labeling A new frontier in the field of proximity labeling exploits the utility of photocatalysis to achieve high spatial and temporal resolution of proximal protein microenvironments. This photocatalytic technology leverages the photonic energy of iridium-based photocatalysts to activate diazirine probes that can tag proximal proteins within a tight radius of about four nanometers. This technology was developed by the Merck Exploratory Science Center in collaboration with researchers at Princeton University. References Protein methods Molecular biology techniques
Proximity labeling
[ "Chemistry", "Biology" ]
1,255
[ "Biochemistry methods", "Protein methods", "Protein biochemistry", "Molecular biology techniques", "Molecular biology" ]
72,529,865
https://en.wikipedia.org/wiki/Moreau%20envelope
The Moreau envelope (or the Moreau-Yosida regularization) M_f of a proper lower semi-continuous convex function f is a smoothed version of f. It was proposed by Jean-Jacques Moreau in 1965. The Moreau envelope has important applications in mathematical optimization: minimizing f and minimizing M_f are equivalent problems in the sense that the sets of minimizers of f and M_f are the same. However, first-order optimization algorithms can be directly applied to M_f, since f may be non-differentiable while M_f is always continuously differentiable. Indeed, many proximal gradient methods can be interpreted as a gradient descent method over M_f. Definition The Moreau envelope of a proper lower semi-continuous convex function f from a Hilbert space X to (-∞, +∞] is defined as M_f(v) = inf_{x ∈ X} ( f(x) + (1/2)‖x − v‖² ). Given a parameter λ > 0, the Moreau envelope of λf, M_{λf}(v) = inf_{x ∈ X} ( f(x) + (1/(2λ))‖x − v‖² ), is also called the Moreau envelope of f with parameter λ. Properties The Moreau envelope can also be seen as the infimal convolution between f and (1/(2λ))‖·‖². The proximal operator of a function is related to the gradient of the Moreau envelope by the following identity: ∇M_{λf}(v) = (1/λ)( v − prox_{λf}(v) ). By defining the sequence x_{k+1} = prox_{λf}(x_k) and using the above identity, which gives x_{k+1} = x_k − λ∇M_{λf}(x_k), we can interpret the proximal operator as a gradient descent algorithm over the Moreau envelope. Using Fenchel's duality theorem, one can derive the following dual formulation of the Moreau envelope: M_{λf}(v) = sup_{x ∈ X} ( ⟨x, v⟩ − f*(x) − (λ/2)‖x‖² ), where f* denotes the convex conjugate of f. Since the subdifferential of a proper, convex, lower semicontinuous function on a Hilbert space is inverse to the subdifferential of its convex conjugate, we can conclude that if x₀ is the maximizer of the above expression, then v − λx₀ is the minimizer in the primal formulation and vice versa. By the Hopf–Lax formula, the Moreau envelope is a viscosity solution to a Hamilton–Jacobi equation. Stanley Osher and co-authors used this property and the Cole–Hopf transformation to derive an algorithm to compute approximations to the proximal operator of a function. See also Proximal operator Proximal gradient method References External links A Hamilton–Jacobi-based Proximal Operator: a YouTube video explaining an algorithm to approximate the proximal operator Mathematical optimization
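As a concrete worked example (standard material, not taken from the sources cited in this article), the Moreau envelope of the absolute value on the real line is the Huber function, and the associated proximal operator is soft-thresholding:
\[ M_{\lambda|\cdot|}(x) = \min_{y \in \mathbb{R}} \Big( |y| + \tfrac{1}{2\lambda}(x-y)^2 \Big) =
\begin{cases} \dfrac{x^2}{2\lambda}, & |x| \le \lambda, \\[4pt] |x| - \dfrac{\lambda}{2}, & |x| > \lambda, \end{cases}
\qquad \operatorname{prox}_{\lambda|\cdot|}(x) = \operatorname{sign}(x)\,\max(|x| - \lambda,\, 0). \]
In both branches one can check directly that \( \nabla M_{\lambda|\cdot|}(x) = \big(x - \operatorname{prox}_{\lambda|\cdot|}(x)\big)/\lambda \), in agreement with the identity above.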
Moreau envelope
[ "Mathematics" ]
440
[ "Mathematical optimization", "Mathematical analysis" ]
72,531,863
https://en.wikipedia.org/wiki/Polyfullerene
Polyfullerene is a basic polymer of the C60 monomer group, in which fullerene segments are connected via covalent bonds into a polymeric chain without side or bridging groups. They are called intrinsic polymeric fullerenes, or more often all C60 polymers. Fullerene can be part of a polymer chain in many different ways. Fullerene-containing polymers are divided into following structural categories: Intrinsic polymeric fullerene (homopolymer), Main-chain polymers, Side-chain polymers, Star polymers, Crosslinked polymers, End-caped polymers. History Fullerene is a relatively new substance in chemistry sciences. Buckminsterfullerene itself was discovered in 1985 and the first fullerene-containing polymers were reported at least 6 years later. The main milestones in the use of fullerene in polymer chemistry are listed below: 1992 – Synthesis of organometallic C60 polymer (C60Pd3)n 1995 – Synthesis of C60 containing polyurethane and C60-styrene copolymer 1996 – Synthesis of fullerene side-chain polymer 1997 – Synthesis of fullerene polymer with C60 in the backbone by Diels-Alder reaction 2001 – Synthesis of star-shaped C60 (co)polymers Fullerene polymers High content of double bonds in the fullerene molecule (30 double bonds in Buckminsterfullerene) leads to crosslinking and formation of regioisomers. Polymerization without any sophisticated control of forming structure leads to very high randomization of polymer grid. Thus, linking units of second monomer are needed to prepare linear copolymers (see main-chain polymers). This group includes heteroatomic C60 polymers containing non-carbon atoms in polyfullerene chains. Preparations This section describes most of the main structural types of fullerene-containing polymers. Homopolymer Polyfullerenes can be prepared via many polymerization mechanisms. Research is mainly focused on photopolymerization, polymerization under high pressure and charge-transfer polymerization. The most likely connection of fullerene units is [2+2] cycloaddition of two double bonds of the benzene parts of fullerene molecules. Cycloaddition provides a cyclobutane ring connecting two fullerene molecules. Main-chain polymers Main-chain polymers are characterized by the presence of fullerene units in the polymer backbone. They are not heteroatomic fullerene homopolymers but linear fullerene copolymers. The structure can be described as necklace-type. One approach to achieving fullerene main-chain polymers is by copolymerizing fullerene with a difunctional monomer. Second option is polycondensation of bifunctionalized fullerene with monomer bearing compatible functional groups. Fullerene copolymers can be obtained through standard polymerization techniques used for industrially standard polymers. Examples of first approach are Diels-Alder addition and free radical copolymerization. Fullerene can be copolymerized with methylmethacrylate by initiation with azobisisobutyronitrile (AIBN). In Diels-Alder copolymerization fullerene acts as a dienophile with diene to form a cyclohexene ring. The figure below shows Diels-Alder reaction with the simplest diene – buta-1,3-diene. Comonomer must contain two pairs of conjugated double bonds in order to react with two fullerene molecules obtaining linear polymeric chain molecules. Used monomers are usually bulkier than conventional monomers in order to compensate the space requirements of fullerene spheres. Side-chain polymers Most fullerene polymers fall into this category. 
Similarly to the previous polymer type, two synthetic approaches are available: first, bonding fullerene spheres onto an already polymerized chain, or second, polymerizing a monomer unit already bearing fullerene. An example of the second approach is ring-opening metathesis polymerisation (ROMP) of norbornene bearing C60, or copolymerization of pure norbornene and C60-functionalized norbornene. Cross-linked polymers As mentioned earlier, Buckminsterfullerene is capable of multiple additions, and basic polymerization conditions lead to a polymer grid. Fullerene behaves the same way in copolymerization. In free radical copolymerization of styrene and C60 fullerene, the resulting copolymer is cross-linked and heterogeneous. A simple preparation of a cross-linked fullerene polymer is copolymerization with polyurethanes. In this technique, fullerenol bearing up to 44 hydroxyl groups, C60(OH)4–44, and di- or tri-isocyanate prepolymers are used as the starting substances. Successful syntheses were conducted in a mixture of dimethylformamide (DMF) and tetrahydrofuran (THF) (1:3) at 60 °C. Fullerene end-capped polymers These are also, incorrectly, called "telechelic" polymers, but telechelic polymers have reactive functional end-groups. They can be synthesized by incorporating fullerenes onto the ends of polymerized chains, or by growth of a polymeric chain from a functionalized fullerene derivative followed by closure. Introducing fullerene spheres onto the end of the macromolecule significantly increases the hydrophobicity of the original polymer. Star-shaped polymers Star fullerene polymers can be prepared by two major approaches. Reported star fullerene polymers were prepared by anionic copolymerization with polystyrene to form C60((CH2CH(C6H5))x)n, where n stands for the number of polystyrene star “arms”, from 2 to 6. The second approach is growing polymer chains directly from a fullerene derivative C60Cln (n = 16–20) by atom transfer radical polymerization. The chlorinated fullerene derivative effectively works as an ATRP initiator. Countless polymers can be used for the star arms. Poly(phenylalkyne) polymers can be used as an example, since they give photoemitting macromolecules when grafted onto fullerene. C60-poly(1-phenyl-1-propyne) can be prepared via a tungsten-catalyzed metathesis reaction, connecting pre-formed poly(1-phenyl-1-propyne) onto the fullerene by carbene addition, resulting in a connecting cyclopropane ring. Fullerene acts as a cocatalyst, since the tungsten catalyst (WCl6-Ph4Sn) is not able to polymerize 1-phenyl-1-propyne by itself. Applications Polyfullerenes are currently in an early research phase, and real-world applications or even industrial production solutions are yet to be found. The main reasons for this are the novelty of combining fullerene chemistry with polymer chemistry and the fact that fullerene can currently be synthesized only on a scale of a few grams. All-C60 polymers exhibit practically no solubility, thus preventing proper testing of processability and chemical properties. The following text refers only to potential applications of fullerene polymers, based on the reported properties of particular macromolecules. Electronics Fullerene itself stands out in the class of organic compounds because of its electronic properties. Current research studies the utilization of fullerene by bonding it onto an optimal polymeric substrate. Practical reasons are the easy processability of polymers and their low price in comparison to pure C60 fullerene. 
Polymer backbones bearing fullerene spheres exhibit good or great photoconductivity and even generate a photocurrent when exposed to white light. C60-polyvinylcarbazole (C60–PVK) exhibits photoinduced electron transfer within the polymer, which could be used for digital rewritable memory electronic parts. A prototype of such a part, made of indium tin oxide, fullerene polymer and aluminum (ITO/C60–PVK/Al), was capable of reading, writing and erasing information about 100 million times. A polyvinylcarbazole polymer grown from fullerene polychloride (C60Cln) was observed to increase the intensity of radiated light of an electroluminescent device. This three-armed star polymer acts as a hole-transporting layer for semiconductor parts of a device. On the other hand, hole-trapping materials affect electroluminescence the same way. Double-cable polymers are also candidates for functional layers for OLED displays. Adding 1 wt% into the basic OLED material increased the luminescence of the diode. Very promising hole-trapping materials are polyacetylene-backbone polymers with fullerene in combination with different electron-accepting groups in the branches. A star copolymer (PS)xC60(PMMA)y (polystyrene and polymethylmethacrylate being different star “arms”) acted as an active electroluminescence layer. It improved the emission of a semiconductor electroluminescence device by up to 20 times. C60-poly(1-phenyl-1-propyne) is also reported to exhibit light emission. The fullerene moiety increased the emission of blue light twofold in comparison to pure poly(phenyl propyne). The stability and processability of such polymers are very good. Solar panels Fullerene polymers are widely studied in organic solar cells as active layers of new-generation photovoltaic panels. Examples are homopolymers of C60-polystyrene and C60-polyethyleneglycols, or C60 copolymers prepared by ROMP polymerization. The current efficiency of converting incoming solar radiation to electricity is about 3%. Another polymer type with intrinsic properties is the “double-cable” polymer. These are brush-like structures consisting of a 𝜋-electron conjugated backbone (P-type part) bearing electron-accepting branches (N-type part). Optical limiting properties Particular fullerene (co)polymers exhibit an optical limiting property, meaning they block intense light flux passing through them. Low-intensity light flux is not affected. This is useful for light-control parts in optics and as sensor or eye protection. Surface activity Fullerene copolymerized with palladium has shown some practical promise; in particular, (C60Pd3)n, owing to the palladium content on its surface, exhibits a catalytic effect for the hydrogenation of alkenes and could lead to the development of new catalytic systems and products. (C60Pd)n polymers can adsorb gases, making them useful as adsorbents for volatile and toxic species. For example, a great affinity for toluene was demonstrated. The palladium atoms in the backbone are partially positive and thus attract the 𝜋-electrons of the aromatic core of toluene. Introducing the correct amount of fullerene as side groups onto poly(2,6-dimethyl-1,4-phenylene oxide) (PPO) increases the permeability of gas separation membranes by 80% in comparison with pure PPO. The bulky fullerene probably increases the free volume of PPO. Exceptional mechanical properties Materials originating from polyurethane synthesis exhibit improved thermal and mechanical stability. Fullerene-containing polyurethanes also exhibit a strong optical response and are potentially applicable for optical signal processing. 
Linear polymer chains containing fullerene undergo crosslinking. The resulting material exhibits elastomeric behavior with 10 times higher tensile strength and 17 times higher elongation at break than the same material without fullerene. Blending of fullerene end-capped polymers (polyethylene glycols, for example) with H-donating polymers (polyvinylchloride, poly(p-vinyl phenol), polymethylmethacrylate, etc.) leads to the enhancement of the mechanical properties of the H-donating polymers. Scavengers of free radicals Fullerene end-capped poly(N-isopropylacrylamide) is a water-soluble polymer with a tendency to form clusters. It is a very good scavenger of free radicals, and it can be used for controlling radical polymerizations. Depolymerizable polymers Fullerene polymers are potential candidates for establishing a polymer circular economy. Depolymerizable polymers are the hope of polymer recycling. C60 fullerene copolymerized with [4,4′-bithiazole]-2,2′-bis(diazonium)chloride (see Magnetic behavior) was observed to depolymerize in a temperature range of 60–75 °C. Polymerization and depolymerization can be done several times before degradation of the initial components. The depolymerization temperature and the difference between the polymerization and depolymerization temperatures must be increased. Cancer treatment Basic fullerene polymers without polar functional groups are strongly hydrophobic, and thus incompatible with medicinal use in the human body. Examples of water-soluble derivatives are polyfullerocyclodextrins. They are prepared by reaction of 𝛽-cyclodextrin complexes with fullerene. They exhibit excellent DNA-cleaving activity (in the presence of visible light, they cleave DNA quantitatively). This phenomenon can be used for eliminating cancer cells. The introduction of hydrophilic groups into the macromolecule is the principle of preparing water-soluble polymers. Examples of backbones for water-soluble fullerene side-chain polymers are poly(maleic anhydride-co-vinyl acetate) or pullulan. Magnetic behavior Polymers with a C60 backbone and ferromagnetic properties have been reported in the literature, although fullerene itself is antiferromagnetic. An example of a successful synthesis of a ferromagnetic C60 polymer uses [4,4′-bithiazole]-2,2′-bis(diazonium)dichloride, C60 and FeSO4. References Fullerenes Polymers
Polyfullerene
[ "Chemistry", "Materials_science" ]
2,874
[ "Polymers", "Polymer chemistry" ]
72,543,322
https://en.wikipedia.org/wiki/B3%201715%2B425
B3 1715+425 is an astronomical radio source which is theorized to be a nearly naked black hole. Discovery B3 1715+425 was discovered during a systematic search for supermassive black holes (SMBH) by James Condon and his team at the National Radio Astronomy Observatory in 2016. Condon recalls how his team had been looking for “orbiting pairs of supermassive black holes, with one offset from the centre of a galaxy, as telltale evidence of a previous galaxy merger.” Instead, they found B3 1715+425. Description It is speculated that B3 1715+425 was originally enclosed by a host galaxy, like most other SMBHs. Models predict that in most black hole collisions, the two objects will combine to form a larger black hole. However, in B3 1715+425's case, Condon speculates that a collision with a much larger galaxy resulted in most of the host galaxy of B3 1715+425 being pulled away, leaving it with a small remnant galaxy just 3,000 light years across (in comparison, the Milky Way Galaxy is 87,400 light years across). The galaxy responsible for removing most of B3 1715+425's stars is an elliptical brightest cluster galaxy at the center of the ZwCl 8193 cluster. That galaxy shows a distorted morphology and a starburst, probably as a result of the interaction with B3 1715+425. References Supermassive black holes
B3 1715+425
[ "Physics" ]
299
[ "Black holes", "Unsolved problems in physics", "Supermassive black holes" ]
71,004,765
https://en.wikipedia.org/wiki/Omnigeneity
Omnigeneity (sometimes also called omnigenity) is a property of a magnetic field inside a magnetic confinement fusion reactor. Such a magnetic field is called omnigenous if the path a single particle takes does not drift radially inwards or outwards on average. A particle is then confined to stay on a flux surface. All tokamaks are exactly omnigenous by virtue of their axisymmetry, and conversely an unoptimized stellarator is generally not omnigenous. Because an exactly omnigenous reactor has no neoclassical transport (in the collisionless limit), stellarators are usually optimized in a way such that this criterion is met. One way to achieve this is by making the magnetic field quasi-symmetric, and the Helically Symmetric eXperiment takes this approach. One can also achieve this property without quasi-symmetry, and Wendelstein 7-X is an example of a device which is close to omnigeneity without being quasi-symmetric. Theory The drifting of particles across flux surfaces is generally only a problem for trapped particles, which are trapped in a magnetic mirror. Untrapped (or passing) particles, which can circulate freely around the flux surface, are automatically confined to stay on a flux surface. For trapped particles, omnigeneity relates closely to the second adiabatic invariant $J$ (often called the parallel or longitudinal invariant). One can show that the radial drift a particle experiences after one full bounce motion is simply related to a derivative of $J$, $\Delta\psi = \frac{1}{q}\,\frac{\partial J}{\partial \alpha}$, where $q$ is the charge of the particle, $\alpha$ is the magnetic field line label, and $\Delta\psi$ is the total radial drift expressed as a difference in toroidal flux. With this relation, omnigeneity can be expressed as the criterion that the second adiabatic invariant should be the same for all the magnetic field lines on a flux surface, $\frac{\partial J}{\partial \alpha} = 0$. This criterion is exactly met in axisymmetric systems, as the derivative with respect to $\alpha$ can be expressed as a derivative with respect to the toroidal angle (under which the system is invariant). References Fusion reactors Electromagnetism
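The criterion can be checked numerically. The following sketch is purely illustrative — the mirror-like field-strength model, the arbitrary units, and the assumed particle energy and magnetic moment are the author's working assumptions, not values from the article. It evaluates the second adiabatic invariant along several field lines labelled by α and compares its spread across a flux surface for an axisymmetric versus a non-axisymmetric model field:

```python
import numpy as np

def B(l, alpha, axisymmetric=True):
    """Model field strength along a field line parametrised by l and labelled by alpha.
    In the axisymmetric case the mirror depth does not depend on the field-line label."""
    depth = 0.2 if axisymmetric else 0.2 * (1 + 0.3 * np.cos(alpha))
    return 1.0 + depth * np.cos(l)

def J(alpha, E=1.0, mu=0.9, m=1.0, axisymmetric=True, n=4000):
    """Second adiabatic invariant J = ∮ m v_parallel dl for a trapped particle."""
    l = np.linspace(-np.pi, np.pi, n)
    vpar2 = 2.0 / m * (E - mu * B(l, alpha, axisymmetric))
    vpar = np.sqrt(np.clip(vpar2, 0.0, None))   # integrand vanishes beyond the bounce points
    return m * np.trapz(vpar, l)

alphas = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
for axisym in (True, False):
    js = np.array([J(a, axisymmetric=axisym) for a in alphas])
    print("axisymmetric" if axisym else "non-axisymmetric",
          "spread of J across field lines:", js.max() - js.min())
```

In the axisymmetric case the spread is zero to numerical precision (∂J/∂α = 0, i.e. omnigenous), while the non-axisymmetric model field gives a finite spread and hence a net radial drift of trapped particles.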
Omnigeneity
[ "Physics", "Chemistry" ]
423
[ "Electromagnetism", "Physical phenomena", "Fundamental interactions", "Nuclear fusion", "Fusion reactors" ]
71,013,576
https://en.wikipedia.org/wiki/Borotellurites
Borotellurites are mixed anion chemical compounds that contain any kind of borate ion together with tellurite ions, bound together via oxygen. They are distinct from borotellurates, in which tellurium is in the +6 oxidation state. There are also analogous boroselenites, with selenium instead of tellurium, and borosulfates, containing sulfur. Borotellurites are colourless. List References Borates Tellurites Mixed anion compounds
Borotellurites
[ "Physics", "Chemistry" ]
98
[ "Ions", "Matter", "Mixed anion compounds" ]
71,016,858
https://en.wikipedia.org/wiki/List%20of%20female%20mass%20spectrometrists
This is a list of notable women mass spectrometrists with significant scientific contributions to the advancement of theories, instrumentation and applications of mass spectrometry. The list is organized by the chemical societies and their major awards related to mass spectrometry, as well as society presidencies. American Chemical Society The Frank H. Field and Joe L. Franklin Award for Outstanding Achievement in Mass Spectrometry is the major mass spectrometry award offered by the American Chemical Society. Frank H. Field and Joe L. Franklin Award for Outstanding Achievement in Mass Spectrometry (since 1985) 2021 Veronica M. Bierbaum 2020 Kimberly A. Prather 2019 Jennifer S. Brodbelt 2018 Carol Vivien Robinson 2017 Vicki H. Wysocki 2015 Hilkka I. Kenttämaa 2010 Catherine E. Costello 2008 Catherine C. Fenselau 1990 Marjorie G. Horning American Society for Mass Spectrometry The major awards from the American Society for Mass Spectrometry are John B. Fenn Award for a Distinguished Contribution in Mass Spectrometry, Biemann Medal, Research Award, Research at Primarily Undergraduate Institutions (PUIs) Award, and Al Yergey Mass Spectrometry Scientist Award. A number of notable women mass spectrometrists served as presidents of the American Society for Mass Spectrometry. John B. Fenn Award for a Distinguished Contribution in Mass Spectrometry (since 1990) 2024 Jennifer Brodbelt 2023 Carol Vivien Robinson 2017 Catherine E. Costello 2012 Catherine C. Fenselau 2009 Vicki H. Wysocki Biemann Medal (since 1997) 2022 Erin S. Baker 2020 Ying Ge 2019 Sarah Trimpin 2016 Kristina Håkansson 2014 Lingjun Li 2008 Julia Laskin 2000 Julie A. Leary Research Award (since 1986) 2024 Elizabeth K. Neumann 2023 Kelly Marie Hines and Stacy Malaker 2022 Gloria Sheynkman 2021 Xin Yan 2019 Eleanor Browne 2014 Kerri A. Pratt 2013 Yu Xia 2012 Ileana M. Cristea and Sharon J. Pitteri 2011 Judit Villen 2010 Sarah Trimpin 2007 Rebecca Jockusch 2006 Heather Desaire 2005 Kristina Håkansson 2004 Lingjun Li 2003 Andrea Grottoli 2001 Deborah S. Gross 2000 Elaine Marzluff 1998 Mary T. Rodgers 1997 M. Judith Charles 1994 Kimberly A. Prather 1993 Susan Graul 1992 Vicki H. Wysocki 1991 Hilkka I. Kenttämaa 1990 Jennifer Brodbelt 1987 Susan Olesik Research at Primarily Undergraduate Institutions (PUIs) Award (since 2019) 2023 Erica Jacobs 2021 Christine Hughey 2019 Callie Cole Al Yergey Mass Spectrometry Scientist Award (since 2019) 2023 Amina Woods 2022 Martha M. Vestling 2020 Rachel Ogorzalek Loo President and Past Presidents (since 1953) 2022–2024 Julia Laskin 2020–2022 Susan Richardson 2016–2018 Vicki Wysocki 2014–2016 Jennifer Brodbelt 2012–2014 Susan Weintraub 2006–2008 Barbara S. Larsen 2002–2004 Catherine E. Costello 1996–1998 Veronica M. Bierbaum 1982–1984 Catherine Fenselau Australian and New Zealand Society for Mass Spectrometry The major awards from the Australian and New Zealand Society for Mass Spectrometry (ANZSMS) are the ANZSMS Medal, Morrison Medal, Bowie Medal, Michael Guilhaus Research Award and ANZSMS Fellows. ANZSMS Medal (since 2009) The award has not been given to a female mass spectrometrist since its inception in 2009. Morrison Medal (since 1990) 2023 Ute Roessner 2017 Kliti Grice 2003 Margaret Sheil Bowie Medal (since 2009) 2021 Michelle Colgrave 2017 Tara Pukala Michael Guilhaus Research Award (since 2015) 2024 Sarah E. Hancock ANZSMS Fellows (since 2014) 2014 Margaret Sheil Brazilian Society of Mass Spectrometry The major award of the Brazilian Society of Mass Spectrometry is the BrMASS Manuel Riveros Medal. 
BrMASS Manuel Riveros Medal 2022 Lidija Nikolaevna Gall, Julia Laskin, Claudia Moraes de Rezende, Rosa Erra-Balsells Maria Fernanda Georgina Gine Rosias Conchetta Cacheres British Mass Spectrometry Society The major awards of the British Mass Spectrometry Society (BMSS) are the Aston Medal, the BMSS Medal, and the BMSS Life Membership. Aston Medal (since 1987) 2011 Carol V. Robinson BMSS Medal (since 2002) 2019 Alison Ashcroft BMSS Life Membership Alison Ashcroft Anna Upton Mira Doig Canadian National Proteomics Network The major awards of the Canadian National Proteomics Network (CNPN) are the CNPN-Tony Pawson Proteomics Award, and the New Investigator Award. CNPN-Tony Pawson Proteomics Award (since 2010) 2020 Anne-Claude Gingras 2019 Jennifer Van Eyk New Investigator Award (since 2020) 2022 Jennifer Geddes-McAlister Canadian Society for Mass Spectrometry The major award of the Canadian Society for Mass Spectrometry is the Fred P. Lossing Award. Fred P. Lossing Award (since 1994) 2018 Ann English 2017 Helene Perreault Young Investigator Award (since 2018) 2019 Dajana Vuckovic Chinese American Society for Mass Spectrometry The major awards of the Chinese American Society for Mass Spectrometry (CASMS) is the Young Investigator Award. Young Investigator Award (since 2022) 2023 Xueyun Zheng 2022 Ling Hao, Xin Yang, Hui Ye Females in Mass Spectrometry The major awards of the Females in Mass Spectrometry (FeMS) are the Catherine E. Costello Award and the Indigo BioAutomation FeMS Distinguished Contribution Award. Catherine E. Costello Award (since 2020) 2020 Sarah Brown Riley Indigo BioAutomation FeMS Distinguished Contribution Award (since 2022) 2022 Olga Vitek German Mass Spectrometry Society (Deutsche Gesellschaft für Massenspektrometrie, DGMS) The major awards of the German Mass Spectrometry Society (Deutsche Gesellschaft für Massenspektrometrie, DGMS) are the Mattauch-Herzog Award for Mass Spectrometry, Wolfgang Paul Lecture, Mass Spectrometry in the Life Sciences Award, and Life Science Prize. Mattauch-Herzog Award for Mass Spectrometry (since 1988) 2022 Charlotte Uetrecht 2004 Andrea Sinz Wolfgang Paul Lecture (since 1997) 2019 Vicki Wysocki 2015 Catherine E. Costello 1998 Chava Lifshitz Mass Spectrometry in the Life Sciences Award (2009–2024) 2022 Andrea Sinz 2018 Michal Sharon 2015 Jana Seifert 2009 J. Sabine Becker Life Science Prize (2002–2007) 2002 Jasna Peter-Katalinic Human Proteome Organization The major awards of the Human Proteome Organization are the Distinguished Achievement in Proteomic Sciences Award, Discovery in Proteomic Sciences Award, Clinical & Translational Proteomics Award, Science & Technology Award, and Distinguished Service Award. Current and Past Presidents 2023-2024 Jennifer Van Eyk 2021-2022 Yu-Ju Chen 2011-2012 Catherine E. Costello Distinguished Achievement in Proteomic Sciences Award (since 2004) 2021 Nicolle H. Packer 2020 Karin Rodland 2019 Jennifer Van Eyk 2018 Kathryn K. Lilley 2015 Amanda Paulovich 2012 Carol Robinson 2004 Angelika Görg Discovery in Proteomic Sciences Award (since 2007) 2021 Paola Picotti 2019 Anne-Claude Gingras 2018 Ulrike Kusebauch 2017 Ileana Cristea 2008 Catherine E. Costello Clinical & Translational Proteomics Award (since 2014) 2023 Rebekah Gundry 2022 Connie Jimenez 2021 Ying Ge 2018 Peipei Ping 2015 Jennifer Van Eyk Science & Technology Award (since 2011) 2019 Olga Ornatsky 2015 Selena Larkin 2014 Rosa Viner 2013 Christie Hunter Distinguished Service Award (since 2004) 2015 Catherine E. 
Costello 2013 Peipei Ping 2006 Catherine C. Fenselau International Mass Spectrometry Foundation The major awards from the International Mass Spectrometry Foundation are the Thomson Medal Award, the Curt Brunnée Award, and the Jochen Franzen Award. Thomson Medal Award (since 1985) 2024 Jennifer S. Brodbelt 2022 Vicki Wysocki and Lidia Gall 2020 Alison E. Ashcroft 2014 Carol V. Robinson 2009 Catherine E. Costello and Catherine C. Fenselau Curt Brunnée Award (since 1994) 2022 Erin S. Baker 2020 Livia Eberlin Jochen Franzen Award (since 2022) 2024 Ljiljana Paša-Tolić Israeli Society for Mass Spectrometry A number of notable women mass spectrometrists served as presidents of the Israeli Society for Mass Spectrometry. Past Presidents (since 1985) Michal Sharon 2009 Alla Shainskaya Tsippy Tamiri Chagit Denekamp 1991 Chava Lifshitz Royal Society of Chemistry The Mass Spectrometry Award is the only award of the Royal Society of Chemistry that is specifically for the field of mass spectrometry. Mass Spectrometry Award (2001-2008) 2001 Carol V. Robinson Swedish Mass Spectrometry Society The Swedish Mass Spectrometry Society recognizes distinguished contributions to Swedish mass spectrometry with its Gold Berzelius Medal and early-career contributions with its Silver Berzelius Medal. Gold Berzelius Medal (since 2014) 2022 Kristina Håkansson Silver Berzelius Medal (since 2015) 2024 Anneli Kruve 2018 Ingela Lanekoff Swiss Group for Mass Spectrometry The major award of the Swiss Group for Mass Spectrometry (SGMS) is the SGMS Award. SGMS Award (since 2014) 2016 Paola Picotti Taiwan Society for Mass Spectrometry The major awards of the Taiwan Society for Mass Spectrometry include the Taiwan Society for Mass Spectrometry Medal and the Outstanding Scholar Research Award. Taiwan Society for Mass Spectrometry Medal (since 2017) 2020 Yu-Ju Chen 陳玉如 Outstanding Scholar Research Award (since 2011) 2018 Shu-Hui Chen 陳淑慧 2012 Mei-Chun Tseng 曾美郡 2011 Yu-Ju Chen 陳玉如 The Association for Mass Spectrometry and Advances in Clinical Lab The major award of the Association for Mass Spectrometry and Advances in Clinical Lab (MSACL) is the MSACL Distinguished Contribution Award. MSACL Distinguished Contribution Award (since 2015) 2023 US Jennifer Van Eyk 2017 EU Isabelle Fournier 2017 US Catherine C. Fenselau 2015 EU Linda Thienpont U.S. Human Proteome Organization The major awards of the U.S. Human Proteome Organization are the Donald F. Hunt Distinguished Contribution in Proteomics Award, Catherine E. Costello Lifetime Achievement in Proteomics Award, Gilbert S. Omenn Computational Proteomics Award, and Robert J. Cotter New Investigator Award. Donald F. Hunt Distinguished Contribution in Proteomics Award (since 2018) 2021 Peipei Ping 2019 Jennifer Van Eyk Catherine E. Costello Award for Exemplary Achievements in Proteomics (the former Catherine E. Costello Lifetime Achievement in Proteomics Award) (since 2019) 2024 Jennifer Van Eyk 2022 Catherine C. Fenselau 2019 Catherine E. Costello Gilbert S. Omenn Computational Proteomics Award (since 2016) 2021 Olga Vitek Robert J. Cotter New Investigator Award (since 2013) 2022 Stephanie M. Cologna 2020 Si Wu 2018 Leslie Hicks 2016 Paola Picotti 2014 Judit Villen 2013 Rebecca Gundry References Mass spectrometrists Women chemists Women scientists by field Lists of women scientists
List of female mass spectrometrists
[ "Physics", "Chemistry" ]
2,460
[ "Biochemists", "Mass spectrometry", "Spectrum (physical sciences)", "Mass spectrometrists" ]
68,175,797
https://en.wikipedia.org/wiki/Properties%20of%20nonmetals%20%28and%20metalloids%29%20by%20group
Nonmetals show more variability in their properties than do metals. Metalloids are included here since they behave predominantly as chemically weak nonmetals. Physically, they nearly all exist as diatomic or monatomic gases, or polyatomic solids having more substantial (open-packed) forms and relatively small atomic radii, unlike metals, which are nearly all solid and close-packed, and mostly have larger atomic radii. If solid, they have a submetallic appearance (with the exception of sulfur) and are brittle, as opposed to metals, which are lustrous, and generally ductile or malleable; they usually have lower densities than metals; are mostly poorer conductors of heat and electricity; and tend to have significantly lower melting points and boiling points than those of most metals. Chemically, the nonmetals mostly have higher ionisation energies, higher electron affinities (nitrogen and the noble gases have negative electron affinities) and higher electronegativity values than metals, noting that, in general, the higher an element's ionisation energy, electron affinity, and electronegativity, the more nonmetallic that element is. Nonmetals, including (to a limited extent) xenon and probably radon, usually exist as anions or oxyanions in aqueous solution; they generally form ionic or covalent compounds when combined with metals (unlike metals, which mostly form alloys with other metals); and have acidic oxides whereas the common oxides of nearly all metals are basic. Properties Abbreviations used in this section are: AR Allred-Rochow; CN coordination number; and MH Mohs hardness. Group 1 Hydrogen is a colourless, odourless, and comparatively unreactive diatomic gas with a density of 8.988 × 10−5 g/cm3 and is about 14 times lighter than air. It condenses to a colourless liquid at −252.879 °C and freezes into an ice- or snow-like solid at −259.16 °C. The solid form has a hexagonal crystalline structure and is soft and easily crushed. Hydrogen is an insulator in all of its forms. It has a high ionisation energy (1312.0 kJ/mol), moderate electron affinity (73 kJ/mol), and moderate electronegativity (2.2). Hydrogen is a poor oxidising agent (H2 + 2e− → 2H− = −2.25 V at pH 0). Its chemistry, most of which is based around its tendency to acquire the electron configuration of the noble gas helium, is largely covalent in nature, noting it can form ionic hydrides with highly electropositive metals, and alloy-like hydrides with some transition metals. The common oxide of hydrogen (H2O) is a neutral oxide. Group 13 Boron is a lustrous, barely reactive solid with a density of 2.34 g/cm3 (cf. aluminium 2.70), and is hard (MH 9.3) and brittle. It melts at 2076 °C (cf. steel ~1370 °C) and boils at 3927 °C. Boron has a complex rhombohedral crystalline structure (CN 5+). It is a semiconductor with a band gap of about 1.56 eV. Boron has a moderate ionisation energy (800.6 kJ/mol), low electron affinity (27 kJ/mol), and moderate electronegativity (2.04). Being a metalloid, most of its chemistry is nonmetallic in nature. Boron is a poor oxidising agent (B12 + 3e → BH3 = –0.15 V at pH 0). While it bonds covalently in nearly all of its compounds, it can form intermetallic compounds and alloys with transition metals of the composition MnB, if n > 2. The common oxide of boron (B2O3) is weakly acidic. Group 14 Carbon (as graphite, its most thermodynamically stable form) is a lustrous and comparatively unreactive solid with a density of 2.267 g/cm3, and is soft (MH 0.5) and brittle. It sublimes to vapour at 3642 °C. 
Carbon has a hexagonal crystalline structure (CN 3). It is a semimetal in the direction of its planes, with an electrical conductivity exceeding that of some metals, and behaves as a semiconductor in the direction perpendicular to its planes. It has a high ionisation energy (1086.5 kJ/mol), moderate electron affinity (122 kJ/mol), and high electronegativity (2.55). Carbon is a poor oxidising agent (C + 4e− → CH4 = 0.13 V at pH 0). Its chemistry is largely covalent in nature, noting it can form salt-like carbides with highly electropositive metals. The common oxide of carbon (CO2) is a medium-strength acidic oxide. Silicon is a metallic-looking, relatively unreactive solid with a density of 2.3290 g/cm3, and is hard (MH 6.5) and brittle. It melts at 1414 °C (cf. steel ~1370 °C) and boils at 3265 °C. Silicon has a diamond cubic structure (CN 4). It is a semiconductor with a band gap of about 1.11 eV. Silicon has a moderate ionisation energy (786.5 kJ/mol), moderate electron affinity (134 kJ/mol), and moderate electronegativity (1.9). It is a poor oxidising agent (Si + 4e− → SiH4 = −0.147 V at pH 0). As a metalloid the chemistry of silicon is largely covalent in nature, noting it can form alloys with metals such as iron and copper. The common oxide of silicon (SiO2) is weakly acidic. Germanium is a shiny, mostly unreactive grey-white solid with a density of 5.323 g/cm3 (about two-thirds that of iron), and is hard (MH 6.0) and brittle. It melts at 938.25 °C (cf. silver 961.78 °C) and boils at 2833 °C. Germanium has a diamond cubic structure (CN 4). It is a semiconductor with a band gap of about 0.67 eV. Germanium has a moderate ionisation energy (762 kJ/mol), moderate electron affinity (119 kJ/mol), and moderate electronegativity (2.01). It is a poor oxidising agent (Ge + 4e− → GeH4 = −0.294 V at pH 0). As a metalloid the chemistry of germanium is largely covalent in nature, noting it can form alloys with metals such as aluminium and gold. Most alloys of germanium with metals lack metallic or semimetallic conductivity. The common oxide of germanium (GeO2) is amphoteric. Group 15 Nitrogen is a colourless, odourless, and relatively inert diatomic gas with a density of 1.251 × 10−3 g/cm3 (marginally heavier than air). It condenses to a colourless liquid at −195.795 °C and freezes into an ice- or snow-like solid at −210.00 °C. The solid form (density 0.85 g/cm3; cf. lithium 0.534) has a hexagonal crystalline structure and is soft and easily crushed. Nitrogen is an insulator in all of its forms. It has a high ionisation energy (1402.3 kJ/mol), low electron affinity (–6.75 kJ/mol), and high electronegativity (3.04). The latter property manifests in the capacity of nitrogen to form usually strong hydrogen bonds, and its preference for forming complexes with metals having low electronegativities, small cationic radii, and often high charges (+3 or more). Nitrogen is a poor oxidising agent (N2 + 6e− → 2NH3 = −0.057 V at pH 0). Only when it is in a positive oxidation state, that is, in combination with oxygen or fluorine, are its compounds good oxidising agents, for example, 2NO3− → N2 = 1.25 V. Its chemistry is largely covalent in nature; anion formation is energetically unfavourable owing to strong interelectron repulsions associated with having three unpaired electrons in its outer valence shell, hence its negative electron affinity. The common oxide of nitrogen (NO) is weakly acidic. 
Many compounds of nitrogen are less stable than diatomic nitrogen, so nitrogen atoms in compounds seek to recombine if possible and release energy and nitrogen gas in the process, which can be leveraged for explosive purposes. Phosphorus, in its most thermodynamically stable black form, is a lustrous and comparatively unreactive solid with a density of 2.69 g/cm3, and is soft (MH 2.0) with a flaky habit. It sublimes at 620 °C. Black phosphorus has an orthorhombic crystalline structure (CN 3). It is a semiconductor with a band gap of 0.3 eV. It has a moderate ionisation energy (1011.8 kJ/mol), moderate electron affinity (72 kJ/mol), and moderate electronegativity (2.19). In comparison to nitrogen, phosphorus usually forms weak hydrogen bonds, and prefers to form complexes with metals having high electronegativities, large cationic radii, and often low charges (usually +1 or +2). Phosphorus is a poor oxidising agent (P4 + 3e− → PH3 = −0.046 V at pH 0 for the white form, −0.088 V for the red). Its chemistry is largely covalent in nature, noting it can form salt-like phosphides with highly electropositive metals. Compared to nitrogen, electrons have more space on phosphorus, which lowers their mutual repulsion and results in anion formation requiring less energy. The common oxide of phosphorus (P2O5) is a medium-strength acidic oxide. When assessing periodicity in the properties of the elements it needs to be borne in mind that the quoted properties of phosphorus tend to be those of its least stable white form rather than, as is the case with all other elements, the most stable form. White phosphorus is the most common, industrially important, and easily reproducible allotrope. For those reasons it is the standard state of the element. Paradoxically, it is also thermodynamically the least stable, as well as the most volatile and reactive form. It gradually changes to red phosphorus. This transformation is accelerated by light and heat, and samples of white phosphorus almost always contain some red phosphorus and, accordingly, appear yellow. For this reason, white phosphorus that is aged or otherwise impure is also called yellow phosphorus. When exposed to oxygen, white phosphorus glows in the dark with a very faint tinge of green and blue. It is highly flammable and pyrophoric (self-igniting) upon contact with air. White phosphorus has a density of 1.823 g/cm3, is as soft (MH 0.5) as wax, pliable, and can be cut with a knife. It melts at 44.15 °C and, if heated rapidly, boils at 280.5 °C; it otherwise remains solid and transforms to violet phosphorus at 550 °C. It has a body-centred cubic structure, analogous to that of manganese, with a unit cell comprising 58 P4 molecules. It is an insulator with a band gap of about 3.7 eV. Arsenic is a grey, metallic-looking solid which is stable in dry air but develops a golden bronze patina in moist air, which blackens on further exposure. It has a density of 5.727 g/cm3, and is brittle and moderately hard (MH 3.5; more than aluminium; less than iron). Arsenic sublimes at 615 °C. It has a rhombohedral polyatomic crystalline structure (CN 3). Arsenic is a semimetal, with an electrical conductivity of around 3.9 × 104 S•cm−1 and a band overlap of 0.5 eV. It has a moderate ionisation energy (947 kJ/mol), moderate electron affinity (79 kJ/mol), and moderate electronegativity (2.18). Arsenic is a poor oxidising agent (As + 3e− → AsH3 = −0.22 V at pH 0). 
As a metalloid, its chemistry is largely covalent in nature, noting it can form brittle alloys with metals, and has an extensive organometallic chemistry. Most alloys of arsenic with metals lack metallic or semimetallic conductivity. The common oxide of arsenic (As2O3) is acidic but weakly amphoteric. Antimony is a silver-white solid with a blue tint and a brilliant lustre. It is stable in air and moisture at room temperature. Antimony has a density of 6.697 g/cm3, and is moderately hard (MH 3.0; about the same as copper). It has a rhombohedral crystalline structure (CN 3). Antimony melts at 630.63 °C and boils at 1635 °C. It is a semimetal, with an electrical conductivity of around 3.1 × 104 S•cm−1 and a band overlap of 0.16 eV. Antimony has a moderate ionisation energy (834 kJ/mol), moderate electron affinity (101 kJ/mol), and moderate electronegativity (2.05). It is a poor oxidising agent (Sb + 3e− → SbH3 = −0.51 V at pH 0). As a metalloid, its chemistry is largely covalent in nature, noting it can form alloys with one or more metals such as aluminium, iron, nickel, copper, zinc, tin, lead and bismuth, and has an extensive organometallic chemistry. Most alloys of antimony with metals have metallic or semimetallic conductivity. The common oxide of antimony (Sb2O3) is amphoteric. Group 16 Oxygen is a colourless, odourless, and unpredictably reactive diatomic gas with a gaseous density of 1.429 × 10−3 g/cm3 (marginally heavier than air). It is generally unreactive at room temperature. Thus, sodium metal will "retain its metallic lustre for days in the presence of absolutely dry air and can even be melted (m.p. 97.82 °C) in the presence of dry oxygen without igniting". On the other hand, oxygen can react with many inorganic and organic compounds either spontaneously or under the right conditions (such as a flame or a spark). It condenses to a pale blue liquid at −182.962 °C and freezes into a light blue solid at −218.79 °C. The solid form (density 0.0763 g/cm3) has a cubic crystalline structure and is soft and easily crushed. Oxygen is an insulator in all of its forms. It has a high ionisation energy (1313.9 kJ/mol), moderately high electron affinity (141 kJ/mol), and high electronegativity (3.44). Oxygen is a strong oxidising agent (O2 + 4e− → 2H2O = 1.23 V at pH 0). Metal oxides are largely ionic in nature. Sulfur is a bright-yellow, moderately reactive solid. It has a density of 2.07 g/cm3 and is soft (MH 2.0) and brittle. It melts to a light yellow liquid at 95.3 °C and boils at 444.6 °C. Sulfur has an abundance on earth one-tenth that of oxygen. It has an orthorhombic polyatomic (CN 2) crystalline structure, and is brittle. Sulfur is an insulator with a band gap of 2.6 eV, and a photoconductor, meaning its electrical conductivity increases a million-fold when illuminated. Sulfur has a moderate ionisation energy (999.6 kJ/mol), high electron affinity (200 kJ/mol), and high electronegativity (2.58). It is a poor oxidising agent (S8 + 2e− → H2S = 0.14 V at pH 0). The chemistry of sulfur is largely covalent in nature, noting it can form ionic sulfides with highly electropositive metals. The common oxide of sulfur (SO3) is strongly acidic. Selenium is a metallic-looking, moderately reactive solid with a density of 4.81 g/cm3 and is soft (MH 2.0) and brittle. It melts at 221 °C to a black liquid and boils at 685 °C to a dark yellow vapour. Selenium has a hexagonal polyatomic (CN 2) crystalline structure. 
It is a semiconductor with a band gap of 1.7 eV, and a photoconductor, meaning its electrical conductivity increases a million-fold when illuminated. Selenium has a moderate ionisation energy (941.0 kJ/mol), high electron affinity (195 kJ/mol), and high electronegativity (2.55). It is a poor oxidising agent (Se + 2e− → H2Se = −0.082 V at pH 0). The chemistry of selenium is largely covalent in nature, noting it can form ionic selenides with highly electropositive metals. The common oxide of selenium (SeO3) is strongly acidic. Tellurium is a silvery-white, moderately reactive, shiny solid that has a density of 6.24 g/cm3 and is soft (MH 2.25) and brittle. It is the softest of the commonly recognised metalloids. Tellurium reacts with boiling water, or when freshly precipitated even at 50 °C, to give the dioxide and hydrogen: Te + 2 H2O → TeO2 + 2 H2. It has a melting point of 450 °C and a boiling point of 988 °C. Tellurium has a polyatomic (CN 2) hexagonal crystalline structure. It is a semiconductor with a band gap of 0.32 to 0.38 eV. Tellurium has a moderate ionisation energy (869.3 kJ/mol), high electron affinity (190 kJ/mol), and moderate electronegativity (2.1). It is a poor oxidising agent (Te + 2e− → H2Te = −0.45 V at pH 0). The chemistry of tellurium is largely covalent in nature, noting it has an extensive organometallic chemistry and that many tellurides can be regarded as metallic alloys. The common oxide of tellurium (TeO2) is amphoteric. Group 17 Fluorine is an extremely toxic and reactive pale yellow diatomic gas that, with a gaseous density of 1.696 × 10−3 g/cm3, is about 40% heavier than air. Its extreme reactivity is such that it was not isolated (via electrolysis) until 1886 and was not isolated chemically until 1986. Its occurrence in an uncombined state in nature was first reported in 2012, but is contentious. Fluorine condenses to a pale yellow liquid at −188.11 °C and freezes into a colourless solid at −219.67 °C. The solid form (density 1.7 g/cm3) has a cubic crystalline structure and is soft and easily crushed. Fluorine is an insulator in all of its forms. It has a high ionisation energy (1681 kJ/mol), high electron affinity (328 kJ/mol), and high electronegativity (3.98). Fluorine is a powerful oxidising agent (F2 + 2e− → 2HF = 2.87 V at pH 0); "even water, in the form of steam, will catch fire in an atmosphere of fluorine". Metal fluorides are generally ionic in nature. Chlorine is an irritating green-yellow diatomic gas that is extremely reactive, and has a gaseous density of 3.2 × 10−3 g/cm3 (about 2.5 times heavier than air). It condenses at −34.04 °C to an amber-coloured liquid and freezes at −101.5 °C into a yellow crystalline solid. The solid form (density 1.9 g/cm3) has an orthorhombic crystalline structure and is soft and easily crushed. Chlorine is an insulator in all of its forms. It has a high ionisation energy (1251.2 kJ/mol), high electron affinity (349 kJ/mol; higher than fluorine), and high electronegativity (3.16). Chlorine is a strong oxidising agent (Cl2 + 2e− → 2HCl = 1.36 V at pH 0). Metal chlorides are largely ionic in nature. The common oxide of chlorine (Cl2O7) is strongly acidic. Bromine is a deep brown diatomic liquid that is quite reactive, and has a liquid density of 3.1028 g/cm3. It boils at 58.8 °C and solidifies at −7.3 °C to an orange crystalline solid (density 4.05 g/cm3). It is the only element, apart from mercury, known to be a liquid at room temperature. 
The solid form, like chlorine, has an orthorhombic crystalline structure and is soft and easily crushed. Bromine is an insulator in all of its forms. It has a high ionisation energy (1139.9 kJ/mol), high electron affinity (324 kJ/mol), and high electronegativity (2.96). Bromine is a strong oxidising agent (Br2 + 2e− → 2HBr = 1.07 V at pH 0). Metal bromides are largely ionic in nature. The unstable common oxide of bromine (Br2O5) is strongly acidic. Iodine, the rarest of the nonmetallic halogens, is a metallic-looking solid that is moderately reactive, and has a density of 4.933 g/cm3. It melts at 113.7 °C to a brown liquid and boils at 184.3 °C to a violet-coloured vapour. It has an orthorhombic crystalline structure with a flaky habit. Iodine is a semiconductor in the direction of its planes, with a band gap of about 1.3 eV and a conductivity of 1.7 × 10−8 S•cm−1 at room temperature. This is higher than selenium but lower than boron, the least electrically conducting of the recognised metalloids. Iodine is an insulator in the direction perpendicular to its planes. It has a high ionisation energy (1008.4 kJ/mol), high electron affinity (295 kJ/mol), and high electronegativity (2.66). Iodine is a moderately strong oxidising agent (I2 + 2e− → 2I− = 0.53 V at pH 0). Metal iodides are predominantly ionic in nature. The only stable oxide of iodine (I2O5) is strongly acidic. Astatine is the rarest naturally occurring element in the Earth's crust, occurring only as the decay product of various heavier elements. All of astatine's isotopes are short-lived; the most stable is astatine-210, with a half-life of 8.1 hours. Astatine is sometimes described as probably being a black solid (assuming it follows the trend of increasingly dark colours seen in the lighter halogens), or as having a metallic appearance. Astatine is predicted to be a semiconductor, with a band gap of about 0.7 eV. It has a moderate ionisation energy (900 kJ/mol), high electron affinity (233 kJ/mol), and moderate electronegativity (2.2). Astatine is a moderately weak oxidising agent (At2 + 2e− → 2At− = 0.3 V at pH 0). Group 18 Helium has a density of 1.785 × 10−4 g/cm3 (cf. air 1.225 × 10−3 g/cm3), liquefies at −268.928 °C, and cannot be solidified at normal pressure. It has the lowest boiling point of all of the elements. Liquid helium exhibits superfluidity, superconductivity, and near-zero viscosity; its thermal conductivity is greater than that of any other known substance (more than 1,000 times that of copper). Helium can only be solidified at −272.20 °C under a pressure of 2.5 MPa. It has a very high ionisation energy (2372.3 kJ/mol), low electron affinity (estimated at −50 kJ/mol), and high electronegativity (4.16 χSpec). No normal compounds of helium have so far been synthesised. Neon has a density of 9.002 × 10−4 g/cm3, liquefies at −245.95 °C, and solidifies at −248.45 °C. It has the narrowest liquid range of any element and, in liquid form, has over 40 times the refrigerating capacity of liquid helium and three times that of liquid hydrogen. Neon has a very high ionisation energy (2080.7 kJ/mol), low electron affinity (estimated at −120 kJ/mol), and very high electronegativity (4.787 χSpec). It is the least reactive of the noble gases; no normal compounds of neon have so far been synthesised. Argon has a density of 1.784 × 10−3 g/cm3, liquefies at −185.848 °C, and solidifies at −189.34 °C. Although non-toxic, it is 38% denser than air and therefore considered a dangerous asphyxiant in closed areas. 
It is difficult to detect because (like all the noble gases) it is colourless, odourless, and tasteless. Argon has a high ionisation energy (1520.6 kJ/mol), low electron affinity (estimated at −96 kJ/mol), and high electronegativity (3.242 χSpec). One interstitial compound of argon, Ar1C60, is a stable solid at room temperature. Krypton has a density of 3.749 × 10−3 g/cm3, liquefies at −153.415 °C, and solidifies at −157.37 °C. It has a high ionisation energy (1350.8 kJ/mol), low electron affinity (estimated at −60 kJ/mol), and high electronegativity (2.966 χSpec). Krypton can be reacted with fluorine to form the difluoride, KrF2. The reaction of KrF2 with B(OTeF5)3 produces an unstable compound, Kr(OTeF5)2, that contains a krypton–oxygen bond. Xenon has a density of 5.894 × 10−3 g/cm3, liquefies at −108.1 °C, and solidifies at −111.75 °C. It is non-toxic, and belongs to a select group of substances that penetrate the blood–brain barrier, causing mild to full surgical anesthesia when inhaled in high concentrations with oxygen. Xenon has a high ionisation energy (1170.4 kJ/mol), low electron affinity (estimated at −80 kJ/mol), and high electronegativity (2.582 χSpec). It forms a relatively large number of compounds, mostly containing fluorine or oxygen. An unusual ion containing xenon is the tetraxenonogold(II) cation, AuXe42+, which contains Xe–Au bonds. This ion occurs in the compound AuXe4(Sb2F11)2, and is remarkable in having direct chemical bonds between two notoriously unreactive atoms, xenon and gold, with xenon acting as a transition metal ligand. The compound Xe2Sb4F21 contains a Xe–Xe bond, the longest element–element bond known (308.71 pm = 3.0871 Å). The most common oxide of xenon (XeO3) is strongly acidic. Radon, which is radioactive, has a density of 9.73 × 10−3 g/cm3, liquefies at −61.7 °C, and solidifies at −71 °C. It has a high ionisation energy (1037 kJ/mol), low electron affinity (estimated at −70 kJ/mol), and a high electronegativity (2.60 χSpec). The only confirmed compounds of radon, which is the rarest of the naturally occurring noble gases, are the difluoride, RnF2, and the trioxide, RnO3. It has been reported that radon is capable of forming a simple Rn2+ cation in halogen fluoride solution, which is highly unusual behaviour for a nonmetal, and a noble gas at that. Radon trioxide (RnO3) is expected to be acidic. Oganesson, the heaviest element on the periodic table, has only recently been synthesized. Owing to its short half-life, its chemical properties have not yet been investigated. Due to the significant relativistic destabilisation of the 7p3/2 orbitals, it is expected to be significantly reactive and behave more similarly to the group 14 elements, as it effectively has four valence electrons outside a pseudo-noble gas core. Its predicted melting and boiling points are 52±15 °C and 177±10 °C respectively, so that it is probably neither noble nor a gas; it is expected to have a density of about 6.6–7.4 g/cm3 around room temperature. It is expected to have a barely positive electron affinity (estimated as 5 kJ/mol) and a moderate ionisation energy of about 860 kJ/mol, which is rather low for a nonmetal and close to those of tellurium and astatine. The oganesson fluorides OgF2 and OgF4 are expected to show significant ionic character, suggesting that oganesson may have at least incipient metallic properties. The oxides of oganesson, OgO and OgO2, are predicted to be amphoteric. 
See also Nonmetal Notes Citations Bibliography Brown WH & Rogers EP 1987, General, organic and biochemistry, 3rd ed., Brooks/Cole, Monterey, California. Cotton FA, Darlington C & Lynch LD 1976, Chemistry: An investigative approach, Houghton Mifflin, Boston. Greenwood NN & Earnshaw A 2002, Chemistry of the elements, 2nd ed., Butterworth-Heinemann. Moeller T 1952, Inorganic chemistry: An advanced textbook, John Wiley & Sons, New York. Wiberg N 2001, Inorganic chemistry, Academic Press, San Diego. Wulfsberg G 1987, Principles of descriptive inorganic chemistry, Brooks/Cole Publishing Company, Monterey, California. Yoder CH, Suydam FH & Snavely FA 1975, Chemistry, 2nd ed., Harcourt Brace Jovanovich, New York. Metals Nonmetals
Properties of nonmetals (and metalloids) by group
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
6,824
[ "Nonmetals", "Metals", "Condensed matter physics", "Materials science" ]
68,177,737
https://en.wikipedia.org/wiki/Mitoquinone%20mesylate
Mitoquinone mesylate (MitoQ) is a synthetic analogue of coenzyme Q10 which has antioxidant effects. It was first developed in New Zealand in the late 1990s. It has significantly improved bioavailability and improved mitochondrial penetration compared to coenzyme Q10, and has shown potential in a number of medical indications, being widely sold as a dietary supplement. A 2014 review found insufficient evidence for the use of mitoquinone mesylate in Parkinson's disease and other movement disorders. See also Idebenone Nicotinamide mononucleotide Pyrroloquinoline quinone References Antioxidants 1,4-Benzoquinones
Mitoquinone mesylate
[ "Chemistry" ]
146
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
68,185,380
https://en.wikipedia.org/wiki/Telecommunication%20Instructional%20Modeling%20System
TIMS, or Telecommunication Instructional Modeling System, is an electronic device invented by Tim Hooper and developed by Australian engineering company Emona Instruments that is used as a telecommunications trainer in educational settings and universities. History TIMS was designed at the University of New South Wales by Tim Hooper in 1971. It was developed to run student experiments for electrical engineering communications courses. Hooper's concept was developed into the current TIMS model in the late 1980s. In 1986, the project won a competition organized by Electronics Australia for development work using the Texas Instruments TMS320. Emona Instruments also received an award for TIMS at the fifth Secrets of Australian ICT Innovation Competition. Methodology TIMS uses a block diagram-based interface for experiments in the classroom. It can model mathematical equations to simulate electric signals, or it can use block diagrams to simulate telecommunications systems. It uses a different hardware card to represent the function of each block of the diagram. TIMS consists of a server, a chassis, and boards that can emulate the configurations of a telecommunications system. It uses electronic circuits as modules to simulate the components of analog and digital communications systems. The modules can perform different functions such as signal generation, signal processing, signal measurement, and digital signal processing. Variants The block diagram approach to modeling the mathematics of a telecommunication system has also been ported across to other domains. Simulation The blocks are patched together onscreen to mimic the hardware implementation, but with a simulation engine (known as TutorTIMS). Remote access TIMS can be used by multiple students at once across the internet or LAN via a browser-based client screen. This utilises a statistical time division multiplexing architecture in the control unit. The method is applied to both telecommunications and electronics laboratories (known as netCIRCUITlabs). References External links Official website 1971 establishments Electronics Electrical engineering Telecommunications
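The block-diagram methodology can be mimicked in software. The sketch below is only an illustration of the idea — the module names, parameters and signal values are hypothetical and are not actual TIMS card specifications — showing small function "blocks" patched together into a simple double-sideband AM modulator, much as hardware modules are patched on the TIMS chassis:

```python
import numpy as np

fs = 100_000                                   # assumed sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)                 # 10 ms of signal

def audio_oscillator(freq=1_000.0):            # message-signal block
    return np.sin(2 * np.pi * freq * t)

def master_carrier(freq=10_000.0):             # carrier block
    return np.cos(2 * np.pi * freq * t)

def adder(x, y, g=1.0, G=1.0):                 # weighted-adder block
    return g * x + G * y

def multiplier(x, y):                          # multiplier block
    return x * y

# Patch the blocks together: (DC + scaled message) x carrier -> DSB-AM signal
dc = np.ones_like(t)
am_signal = multiplier(adder(dc, audio_oscillator(), g=1.0, G=0.5), master_carrier())
print(am_signal[:5])
```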
Telecommunication Instructional Modeling System
[ "Technology", "Engineering" ]
370
[ "Information and communications technology", "Electrical engineering", "Telecommunications" ]
75,354,267
https://en.wikipedia.org/wiki/Baxdrostat
Baxdrostat is an investigational drug that is being evaluated for the treatment of hypertension. It is an aldosterone synthase inhibitor. References Tetrahydroisoquinolines Lactams Amides Tetrahydroquinolines
Baxdrostat
[ "Chemistry" ]
54
[ "Pharmacology", "Functional groups", "Medicinal chemistry stubs", "Pharmacology stubs", "Amides" ]
75,355,463
https://en.wikipedia.org/wiki/Enlicitide%20chloride
Enlicitide chloride (INN; previously known as MK-0616) is a macrocyclic peptide investigational drug that is being evaluated for the treatment of hypercholesterolaemia. It is a PCSK9 inhibitor. Merck has launched a Phase 3 clinical trial to evaluate the efficacy and safety of MK-0616 in adults with hypercholesterolaemia. References Cyclic peptides PCSK9 inhibitors Fluoroarenes Triazoles Quaternary ammonium compounds
Enlicitide chloride
[ "Chemistry" ]
101
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
75,356,088
https://en.wikipedia.org/wiki/SGT-003
SGT-003 is an experimental gene therapy being tested for Duchenne muscular dystrophy. It is hoped to be an improvement on Solid Biosciences' earlier gene therapy SGT-001. References Experimental gene therapies
SGT-003
[ "Chemistry" ]
52
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
75,356,464
https://en.wikipedia.org/wiki/Vixarelimab
Vixarelimab (KPL-716) is a fully human monoclonal antibody that works by binding to the oncostatin M receptor β, thus inhibiting both interleukin 31 and oncostatin M. It is developed by Kiniksa Pharmaceuticals for prurigo nodularis. References Monoclonal antibodies
Vixarelimab
[ "Chemistry" ]
73
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
75,365,758
https://en.wikipedia.org/wiki/Ophirite
Ophirite is a tungstate mineral first discovered in the Ophir Hill Consolidated mine in the Ophir district, Oquirrh Mountains, Tooele County, Utah, United States of America. It was found underground near a calcite cave in a single veinlet, six centimeters wide by one meter long, surrounded by various sulfides. Before the closing of the mine in 1972, it was dominated by sulfide minerals, and the Ophir district was known as a source of zinc, copper, silver, and lead ores. The crystals are formed as tablets. It is the first known mineral to contain a heteropolyanion, a lacunary defect derivative of the Keggin anion. The chemical formula of ophirite is Ca2Mg4[Zn2Mn3+2(H2O)2(Fe3+W9O34)2]·46H2O. The mineral has been approved by the Commission on New Minerals and Mineral Names, IMA, to be named ophirite for its type locality, the Ophir Hill Consolidated mine. Occurrence Ophirite is found in association with scheelite and pyrite. The mineral is thought to have formed by oxidative alteration of sulfides: a reaction of dolomite and scheelite with oxidizing and late acidic hydrothermal solutions in the presence of calcium-rich and pyrite-bearing hornfels. It occurs in one veinlet, which is surrounded by sphalerite, galena, bournonite, unidentified sulfide minerals, foci of apatite, and sericite-containing pyrite, and typically lies at the interface between scheelite and dolomite. Also present in the vein are crystals of sulfur and fluorite. Physical properties Ophirite is an orange-brown, transparent mineral with a vitreous luster. It exhibits a hardness of 2 on the Mohs hardness scale. Ophirite occurs as tablet-shaped crystals on {001} with irregular {100} and {110} bounding forms. Ophirite has no observed cleavage and an irregular/uneven fracture. The measured specific gravity is 4.060 g/cm3. Optical properties Ophirite is optically biaxial positive, which means it refracts light along two optical axes, with 2Vmeas. = 43(2)°. The refractive indices are: α ~ 1.730(3), β ~ 1.735(3), and γ ~ 1.770(3). Dispersion is strong, r > v. Its pleochroism is light orange brown for X and Y, and orange brown for Z, where X<Y<<Z. Observations indicate that the chemical species are in their fully oxidized states. Chemical properties Ophirite is a tungstate, and is the first mineral discovered containing [4]Fe3+[6]W6+9O34, a group in the structural unit of the ophirite polyanion. Tri-lacunary Keggin anions are well known in synthetic compounds, but ophirite is the first known example of a mineral with a tri-lacunary Keggin polyanion. The empirical chemical formula for ophirite, calculated on the basis of 30 cations, is Ca1.73Mg3.99[Zn2.02Mn3+1.82(H2O)2(Fe3+2.34W17.99O68)2]·45.95H2O. The ideal formula for ophirite is Ca2Mg4[Zn2Mn3+2(H2O)2(Fe3+W9O34)2]·46H2O. Chemical composition X-ray crystallography A Rigaku R-Axis Rapid II curved imaging plate microdiffractometer using monochromatized MoKα radiation was used to collect X-ray diffraction data for ophirite. Ophirite is in the triclinic crystal system and in the space group P1. Its unit-cell dimensions were determined to be a = 11.9860(2) Å; b = 13.2073(2) Å; c = 17.689(1) Å; α = 69.690(5)°; β = 85.364(6)°; γ = 64.875(5)°; Z = 1. See also List of minerals References Natural materials Tungstate minerals Triclinic minerals Minerals in space group 1 Wikipedia Student Program Zinc minerals Manganese minerals
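As a consistency check on the crystallographic data, the triclinic unit-cell volume and the density implied by the reported cell, ideal formula and Z can be computed. The snippet below is an independent back-of-the-envelope calculation; the atomic masses and the element counts derived from the ideal formula are working assumptions of this sketch, not values quoted in the article:

```python
import math

# Reported triclinic cell parameters (Å, degrees) and Z
a, b, c = 11.9860, 13.2073, 17.689
alpha, beta, gamma = 69.690, 85.364, 64.875
Z = 1

ca, cb, cg = (math.cos(math.radians(x)) for x in (alpha, beta, gamma))
V = a * b * c * math.sqrt(1 - ca**2 - cb**2 - cg**2 + 2 * ca * cb * cg)   # cell volume, Å^3

# Approximate formula weight of Ca2Mg4[Zn2Mn2(H2O)2(FeW9O34)2]·46H2O
atomic_mass = {'Ca': 40.08, 'Mg': 24.31, 'Zn': 65.38, 'Mn': 54.94,
               'Fe': 55.85, 'W': 183.84, 'O': 16.00, 'H': 1.008}
counts = {'Ca': 2, 'Mg': 4, 'Zn': 2, 'Mn': 2, 'Fe': 2, 'W': 18,
          'O': 2 * 34 + 2 + 46, 'H': 2 * (2 + 46)}
M = sum(atomic_mass[el] * n for el, n in counts.items())      # g/mol

N_A = 6.022e23
density = Z * M / (N_A * V * 1e-24)                           # g/cm^3
print(f"V = {V:.1f} A^3, M = {M:.0f} g/mol, calculated density = {density:.2f} g/cm^3")
```

The calculated density comes out close to the measured specific gravity of 4.060 g/cm3, which is consistent with the reported cell and formula.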
Ophirite
[ "Physics" ]
979
[ "Natural materials", "Materials", "Matter" ]
63,871,781
https://en.wikipedia.org/wiki/System%20of%20differential%20equations
In mathematics, a system of differential equations is a finite set of differential equations. Such a system can be either linear or non-linear. Also, such a system can be either a system of ordinary differential equations or a system of partial differential equations. Linear systems of differential equations A first-order linear system of ODEs is a system in which every equation is first order and depends on the unknown functions linearly. Here we consider systems with an equal number of unknown functions and equations. These may be written as $x_i'(t) = a_{i1}(t)x_1(t) + \cdots + a_{in}(t)x_n(t) + g_i(t)$ for $i = 1, \ldots, n$, where $n$ is a positive integer, and the $a_{ij}(t)$ and $g_i(t)$ are arbitrary functions of the independent variable t. A first-order linear system of ODEs may be written in matrix form: $\mathbf{x}'(t) = A(t)\mathbf{x}(t) + \mathbf{g}(t)$, or simply $\mathbf{x}' = A\mathbf{x} + \mathbf{g}$. Homogeneous systems of differential equations A linear system is said to be homogeneous if $g_i(t) = 0$ for each $i$ and for all values of $t$, otherwise it is referred to as non-homogeneous. Homogeneous systems have the property that if $\mathbf{x}^1, \ldots, \mathbf{x}^n$ are linearly independent solutions to the system, then any linear combination of these, $c_1\mathbf{x}^1 + \cdots + c_n\mathbf{x}^n$, is also a solution to the linear system, where the $c_i$ are constant. The case where the coefficients are all constant has a general solution: $\mathbf{x}(t) = c_1 e^{\lambda_1 t}\mathbf{v}_1 + \cdots + c_n e^{\lambda_n t}\mathbf{v}_n$, where $\lambda_i$ is an eigenvalue of the matrix $A$ with corresponding eigenvector $\mathbf{v}_i$ for $i = 1, \ldots, n$. This general solution only applies in cases where $A$ has n distinct eigenvalues; cases with fewer distinct eigenvalues must be treated differently. Linear independence of solutions For an arbitrary system of ODEs, a set of solutions $\mathbf{x}^1, \ldots, \mathbf{x}^k$ are said to be linearly independent if $c_1\mathbf{x}^1 + \cdots + c_k\mathbf{x}^k = 0$ is satisfied only for $c_1 = \cdots = c_k = 0$. A second-order differential equation $y'' = f(t, y, y')$ may be converted into a system of first-order differential equations by defining $x_1 = y$ and $x_2 = y'$, which gives us the first-order system $x_1' = x_2$, $x_2' = f(t, x_1, x_2)$ (linear whenever the original equation is linear). Just as with any linear system of two equations, two solutions may be called linearly independent if $c_1\mathbf{x}^1 + c_2\mathbf{x}^2 = 0$ implies $c_1 = c_2 = 0$, or equivalently that their Wronskian determinant is non-zero. This notion is extended to second-order systems, and any two solutions to a second-order ODE are called linearly independent if they are linearly independent in this sense. Overdetermination of systems of differential equations Like any system of equations, a system of linear differential equations is said to be overdetermined if there are more equations than the unknowns. For an overdetermined system to have a solution, it needs to satisfy the compatibility conditions. For example, consider the system: $\partial u/\partial x_i = f_i$ for $i = 1, \ldots, m$. Then the necessary conditions for the system to have a solution are: $\partial f_i/\partial x_j = \partial f_j/\partial x_i$ for all $i, j$. See also: Cauchy problem and Ehrenpreis's fundamental principle. Nonlinear system of differential equations Perhaps the most famous example of a nonlinear system of differential equations is the Navier–Stokes equations. Unlike the linear case, the existence of a solution of a nonlinear system is a difficult problem (cf. Navier–Stokes existence and smoothness.) Other examples of nonlinear systems of differential equations include the Lotka–Volterra equations. Differential system A differential system is a means of studying a system of partial differential equations using geometric ideas such as differential forms and vector fields. For example, the compatibility conditions of an overdetermined system of differential equations can be succinctly stated in terms of differential forms (i.e., for a form to be exact, it needs to be closed). See integrability conditions for differential systems for more. See also Integral geometry Cartan–Kuranishi prolongation theorem Notes References L. Ehrenpreis, The Universality of the Radon Transform, Oxford Univ. Press, 2003. Gromov, M. 
(1986), Partial differential relations, Springer, M. Kuranishi, "Lectures on involutive systems of partial differential equations", Publ. Soc. Mat. São Paulo (1967) Pierre Schapira, Microdifferential systems in the complex domain, Grundlehren der Mathematischen Wissenschaften, vol. 269, Springer-Verlag, 1985. Further reading https://mathoverflow.net/questions/273235/a-very-basic-question-about-projections-in-formal-pde-theory https://www.encyclopediaofmath.org/index.php/Involutional_system https://www.encyclopediaofmath.org/index.php/Complete_system https://www.encyclopediaofmath.org/index.php/Partial_differential_equations_on_a_manifold Differential equations Differential systems Multivariable calculus
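The constant-coefficient general solution described above can be illustrated with a short numerical sketch; the matrix and initial condition below are arbitrary illustrative choices with distinct eigenvalues:

```python
import numpy as np

# x'(t) = A x(t) with constant coefficients and distinct eigenvalues
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])        # eigenvalues -1 and -2
x0 = np.array([1.0, 0.0])           # initial condition x(0)

lam, V = np.linalg.eig(A)           # columns of V are the eigenvectors v_i
c = np.linalg.solve(V, x0)          # x(0) = sum_i c_i v_i  =>  c = V^{-1} x0

def x(t):
    # general solution x(t) = sum_i c_i exp(lambda_i t) v_i
    return (V * np.exp(lam * t)) @ c

print(x(0.0))   # recovers the initial condition
print(x(1.0))   # solution at t = 1
```

For a matrix with repeated eigenvalues this construction fails, which is the case noted above that must be treated differently (for example with generalized eigenvectors).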
System of differential equations
[ "Mathematics" ]
903
[ "Mathematical analysis", "Mathematical analysis stubs", "Calculus", "Mathematical objects", "Differential equations", "Equations", "Multivariable calculus" ]
63,876,619
https://en.wikipedia.org/wiki/Sulfate%20carbonate
The sulfate carbonates are compound carbonates, or mixed anion compounds, that contain sulfate and carbonate ions. Sulfate carbonate minerals are in the 7.DG and 5.BF Nickel-Strunz groupings. They may be formed by crystallization from a water solution, or by melting a carbonate and sulfate together. In some structures carbonate and sulfate can substitute for each other. For example, a range of compositions from 1.4 to 2.2 Na2SO4·Na2CO3 is stable as a solid solution. Silvialite can substitute about half its sulfate with carbonate, and the high-temperature hexagonal form of sodium sulfate (I) Na2SO4 can substitute unlimited proportions of carbonate instead of sulfate. Minerals Artificial References Sulfates Sulfate minerals Carbonate minerals Carbonates Mixed anion compounds
Sulfate carbonate
[ "Physics", "Chemistry" ]
160
[ "Matter", "Mixed anion compounds", "Sulfates", "Salts", "Ions" ]
66,657,885
https://en.wikipedia.org/wiki/Tetrahydroxozincate
In chemistry, tetrahydroxozincate or tetrahydroxidozincate is a divalent anion (negative ion) with formula [Zn(OH)4]2−, with a central zinc atom in the +2 or (II) valence state coordinated to four hydroxide groups. It has sp3 hybridization. It is the most common of the zincate anions, and is often called just zincate. These names are also used for the salts containing that anion, such as sodium zincate Na2Zn(OH)4 and calcium zincate CaZn(OH)4·2H2O. Zincate salts can be obtained by reaction of zinc oxide (ZnO) or zinc hydroxide (Zn(OH)2) and a strong base like sodium hydroxide. It is now generally accepted that the resulting solutions contain the tetrahydroxozincate ion. Earlier Raman studies had been interpreted as indicating the existence of linear ions. Related anions and salts The name "zincate" may also refer to a polymeric zincate anion and its salts, or to mixed oxides of zinc and less electronegative elements. See also tetrachlorozincate or tetrachloridozincate, and tetranitratozincate. References Anions Inorganic chemistry
Tetrahydroxozincate
[ "Physics", "Chemistry" ]
275
[ "nan", "Ions", "Matter", "Anions" ]
66,660,778
https://en.wikipedia.org/wiki/Chlorophyllum%20agaricoides
Chlorophyllum agaricoides, commonly known as the gasteroid lepiota, puffball parasol, false puffball, or puffball agaric, is a species of fungus belonging to the family Agaricaceae. When young, it is edible, and has been traditionally eaten in Turkey for many years. It has a cosmopolitan distribution, with notable documentation in China, Mongolia, Bulgaria, and Turkey. It is also a protected species in Hungary, and is believed to be in decline across Europe due to habitat destruction. Description It is a secotioid mushroom, meaning its hymenium takes the form of a gleba made of underdeveloped gills, completely enclosed by the cap, which never fully opens. This protects the mushroom from desiccation. The cap is egg-shaped to spherical, often tapering upward to form a blunt, conical point, 1–7 cm wide and 2–10 cm tall. It is white, and becomes dark brown with age. It is mostly smooth, with some small fibrils, though it may also develop fibrous scales. The gills are contorted, irregularly chambered, and underdeveloped, making up an enclosed gleba which is white, aging to a mustardy yellow-brown. The stipe is 0–3 cm long and 0.5–2 cm thick. There is no ring. Its odor becomes cabbagey with age. It grows singly or in clusters, mostly on cultivated land or grass, though occasionally on the forest floor. The spores are 6.5–9.5 × 5–7 μm, globose to elliptic, green to yellow-brown, turning reddish brown in Melzer's reagent. The germ pore is indistinct. Cheilocystidia and pleurocystidia are absent. Agaricus inapertus is a look-alike, although unlike C. agaricoides, it prefers forests and develops a black gleba with age. References Agaricaceae Fungus species Secotioid fungi
Chlorophyllum agaricoides
[ "Biology" ]
427
[ "Fungi", "Fungus species" ]
66,664,550
https://en.wikipedia.org/wiki/Gnu%20code
In quantum information, the gnu code refers to a particular family of quantum error-correcting codes with the special property of being invariant under permutations of the qubits. Given integers g (the gap), n (the occupancy), and m (the length of the code), the two codewords are superpositions of Dicke states, where a Dicke state is a uniform superposition of all weight-k words on m qubits. The real parameter u scales the density of the code, and the length is m = g·n·u, hence the name of the code. For suitable odd values of the parameters, the gnu code is capable of correcting erasure errors or deletion errors. References Quantum information science Fault-tolerant computer systems
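Since the codewords above are built from Dicke states, a minimal numerical sketch of that building block may help. It only implements the definition quoted above (a uniform superposition of all weight-k words on m qubits); the function name dicke_state is hypothetical and none of the code comes from the source.

```python
# Illustrative sketch (not from the source article): build the Dicke state,
# the uniform superposition of all m-qubit computational-basis states of
# Hamming weight k, which is the building block of the gnu codewords.
from itertools import combinations
from math import comb, sqrt

import numpy as np


def dicke_state(m: int, k: int) -> np.ndarray:
    """Return the 2**m-dimensional state vector of the weight-k Dicke state."""
    vec = np.zeros(2 ** m)
    amplitude = 1.0 / sqrt(comb(m, k))   # uniform over the C(m, k) weight-k words
    for ones in combinations(range(m), k):
        index = sum(1 << bit for bit in ones)   # basis index of this weight-k word
        vec[index] = amplitude
    return vec


if __name__ == "__main__":
    state = dicke_state(4, 2)                       # 6 nonzero amplitudes of 1/sqrt(6)
    print(np.count_nonzero(state))                  # -> 6
    print(np.isclose(np.linalg.norm(state), 1.0))   # normalized -> True
```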
Gnu code
[ "Technology", "Engineering" ]
149
[ "Fault-tolerant computer systems", "Reliability engineering", "Computer systems" ]
66,664,708
https://en.wikipedia.org/wiki/Galileo%27s%20law%20of%20odd%20numbers
In classical mechanics and kinematics, Galileo's law of odd numbers states that the distances covered by a falling object in successive equal time intervals are proportional to the consecutive odd numbers. That is, if a body falling from rest covers a certain distance during an arbitrary time interval, it will cover 3, 5, 7, etc. times that distance in the subsequent time intervals of the same length. This mathematical model is accurate if the body is not subject to any forces besides uniform gravity (for example, if it is falling in a vacuum in a uniform gravitational field). This law was established by Galileo Galilei, who was the first to make quantitative studies of free fall. Explanation Using a speed-time graph The graph in the figure is a plot of speed versus time. Distance covered is the area under the line. Each time interval is coloured differently. The distance covered in the second and each subsequent interval is the area of its trapezium, which can be subdivided into triangles as shown. As each triangle has the same base and height, they all have the same area as the triangle in the first interval. It can be observed that every interval has two more triangles than the previous one; since the first interval has one triangle, this leads to the odd numbers. Using the sum of the first n odd numbers From the equation for uniform linear acceleration, d = v0·t + (1/2)·g·t², the distance covered for initial speed v0 = 0, constant acceleration g (acceleration due to gravity without air resistance), and time elapsed t, it follows that the distance d is proportional to t² (in symbols, d ∝ t²); thus the distances from the starting point at integer values of the time elapsed are consecutive squares. The middle figure in the diagram is a visual proof that the sum of the first n odd numbers is n². In equations: 1 = 1 = 1²; 1 + 3 = 4 = 2²; 1 + 3 + 5 = 9 = 3²; 1 + 3 + 5 + 7 = 16 = 4²; 1 + 3 + 5 + 7 + 9 = 25 = 5². That the pattern continues forever can also be proven algebraically: since the nth odd positive integer is m = 2n − 1, if S denotes the sum of the first n odd integers then S = 1 + 3 + 5 + ⋯ + (2n − 1) = 2(1 + 2 + ⋯ + n) − n = n(n + 1) − n = n². Writing the result in terms of the last odd integer m = 2n − 1, that is, substituting n = (m + 1)/2, gives S = ((m + 1)/2)²; this formula expresses the sum entirely in terms of the odd integer m, while the second formula, S = n², expresses it entirely in terms of n, which is m's ordinal position in the list of odd integers. See also Notes and references Classical mechanics Kinematics
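A minimal numerical check of the law, assuming the free-fall relation d = (1/2)·g·t² discussed above; the value of g and the interval length are arbitrary choices for illustration:

```python
# Illustrative sketch (not from the source article): check numerically that a
# body falling from rest under uniform gravity covers distances in the ratio
# 1 : 3 : 5 : 7 : ... in successive equal time intervals.
g = 9.81          # assumed acceleration due to gravity, m/s^2
dt = 1.0          # arbitrary length of each time interval, s

def distance(t: float) -> float:
    """Total distance fallen from rest after time t (no air resistance)."""
    return 0.5 * g * t * t

# Distance covered within each of the first five equal intervals.
intervals = [distance((k + 1) * dt) - distance(k * dt) for k in range(5)]
first = intervals[0]
print([round(d / first, 6) for d in intervals])   # -> [1.0, 3.0, 5.0, 7.0, 9.0]
```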
Galileo's law of odd numbers
[ "Physics", "Technology" ]
523
[ "Machines", "Kinematics", "Physical phenomena", "Classical mechanics stubs", "Classical mechanics", "Physical systems", "Motion (physics)", "Mechanics" ]
66,671,811
https://en.wikipedia.org/wiki/Dravidian%20numerals
Dravidian numerals are a numeral system that originated in ancient India and remained the usual way of writing numbers throughout Dravidian-speaking regions of South Asia. Numbers in this system are represented by combinations of letters from the various Indian scripts. In modern usage it has been replaced by the Hindu–Arabic numeral system. References Dravidian
Dravidian numerals
[ "Mathematics" ]
72
[ "Numeral systems", "Numerals" ]
66,672,615
https://en.wikipedia.org/wiki/Mars%20Guy%20Fontana
Mars Guy Fontana was a corrosion engineer and professor of metallurgical engineering at Ohio State University. He was born April 6, 1910, in Iron Mountain, Michigan, and died February 29, 1988. Education and other work Mars Guy Fontana graduated with a Bachelor of Science, followed by a Master of Science, and was then awarded a Doctor of Philosophy in the field of metallurgical engineering from the University of Michigan. He was known as a researcher and engineer who added to the body of knowledge in the fairly specialized area of corrosion and its various applications in engineering, namely corrosion engineering. As well as writing numerous papers, he wrote the textbook Corrosion Engineering, which was first published in 1967; there have been a number of updated editions since then. The book has been used as the primary textbook and recommended reading for at least one highly ranked university master's degree course. In his lifetime he wrote many papers in various scientific and engineering journals and periodicals. He also authored Corrosion: A Compilation. In the late 1940s, he was given the chair of the Corrosion Center at Ohio State University, at the time the largest university corrosion research department in the United States. He combined the disciplines of engineering design, materials science, and corrosion so that they could be viewed together. His contribution at the university was of such significance that a building, the Fontana Laboratories, is named after him. He also has a professorship named after him. See also Herbert H. Uhlig Ulick Richardson Evans Melvin Romanoff Michael Faraday Marcel Pourbaix References External links Ohio State University website AIME Website 1910 births 1988 deaths Corrosion University of Michigan College of Engineering alumni Metallurgy
Mars Guy Fontana
[ "Chemistry", "Materials_science", "Engineering" ]
325
[ "Metallurgy", "Materials science", "Corrosion", "Electrochemistry", "nan", "Materials degradation" ]
73,961,995
https://en.wikipedia.org/wiki/Selenite%20sulfate
Selenite sulfates are mixed anion compounds containing both selenite (SeO32−) and sulfate (SO42−) anions. They have transparent crystals that may be coloured by their cations. Selenite sulfate minerals are known, including pauladamsite and munakataite. List References Sulfates Selenites Mixed anion compounds
Selenite sulfate
[ "Physics", "Chemistry" ]
73
[ "Matter", "Mixed anion compounds", "Sulfates", "Salts", "Ions" ]
73,964,350
https://en.wikipedia.org/wiki/Plancherel%E2%80%93Rotach%20asymptotics
The Plancherel–Rotach asymptotics are asymptotic results for orthogonal polynomials. They are named after the Swiss mathematicians Michel Plancherel and his PhD student Walter Rotach, who first derived the asymptotics for the Hermite and Laguerre polynomials. Nowadays, asymptotic expansions of this kind for orthogonal polynomials are referred to as Plancherel–Rotach asymptotics or as being of Plancherel–Rotach type. The case of the associated Laguerre polynomial was derived by the Swiss mathematician Egon Möcklin, another PhD student of Plancherel and George Pólya at ETH Zurich. Hermite polynomials Let denote the n-th Hermite polynomial. Let and be positive and fixed, then for and for and for and complex and bounded where denotes the Airy function. (Associated) Laguerre polynomials Let denote the n-th associated Laguerre polynomial. Let be arbitrary and real, and be positive and fixed, then for and for and for and complex and bounded . Literature References Analysis Asymptotic analysis Orthogonal polynomials
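As a purely qualitative illustration of the behaviour these asymptotics describe for the Hermite case (oscillation of the weighted polynomial exp(−x²/2)·H_n(x) inside the classical turning points near ±√(2n+1), rapid decay outside them, and an Airy-type transition at the edge), here is a minimal numerical sketch. It assumes SciPy's eval_hermite, is not taken from the source, and does not reproduce the exact asymptotic constants, which should be looked up in the references.

```python
# Illustrative sketch (not from the source article): the regimes described by
# Plancherel-Rotach-type asymptotics for psi_n(x) = exp(-x^2/2) * H_n(x):
# oscillation in the bulk |x| < sqrt(2n+1) and rapid decay beyond the
# turning points.  Qualitative check only; no asymptotic constants are used.
import numpy as np
from scipy.special import eval_hermite   # physicists' Hermite polynomial H_n

n = 40
edge = np.sqrt(2 * n + 1)                # approximate classical turning point

def psi(x):
    return np.exp(-x**2 / 2.0) * eval_hermite(n, x)

bulk = np.linspace(-0.8 * edge, 0.8 * edge, 2001)
outside = np.linspace(1.5 * edge, 2.0 * edge, 2001)

# Many sign changes (oscillation) in the bulk, none beyond the turning points.
print(np.sum(np.diff(np.sign(psi(bulk))) != 0))      # close to n sign changes
print(np.sum(np.diff(np.sign(psi(outside))) != 0))   # -> 0
print(np.max(np.abs(psi(outside))) / np.max(np.abs(psi(bulk))))  # tiny ratio
```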
Plancherel–Rotach asymptotics
[ "Mathematics" ]
226
[ "Mathematical analysis", "Asymptotic analysis" ]